RE: [xmca] Publish and/or Perish: ISI practices

From: dale cyphert <dale.cyphert who-is-at uni.edu>
Date: Tue Jul 08 2008 - 09:54:36 PDT

In our College (of Business), the rankings are not used during the p&t meetings per se, but are used as evidence for or against including a journal on The List. This is, for us, a much more salient issue. Publications are to be targeted toward journals that have been vetted by the department faculty, and a publication that appears in something lesser is ...well, lesser.

There could be great reasons to publish in unlisted journals ...work with a colleague in another discipline, for instance, or "public intellectual" activities that wind up in non-academic periodicals. But these are dangerous activities for a pre-tenure individual, just as too much service or too much student interaction is good but time-consuming when the focus ought to be elsewhere at that stage in one's career.

Bottom line is that ISI rankings are only one of several factors that can go into putting a journal on The List.

dale

-----Original Message-----
From: Eugene Matusov <ematusov@udel.edu>
Sent: Tue 7/8/2008 9:56 AM
To: 'eXtended Mind, Culture, Activity' <xmca@weber.ucsd.edu>
Subject: RE: [xmca] Publish and/or Perish: ISI practices

Dear Peter and everybody--

Thanks a lot for your detailed reply and sharing your thinking. If I may, I
still have a few questions for you:

1) I still cannot visualize how you (and your colleagues) use the rating of
the journal in your evaluation practice. For example, would you (and your
colleagues) reject or discount a publication from the candidate's list of
publications if it appeared in a peer-reviewed journal with a low ISI
score (e.g., a high acceptance rate and "low impact" -- usually
characteristics of a new journal)? Or would this publication count, but
with some kind of low weight attached to it (whether a real numerical
weight or a virtual mental one)? Can you give some illustrative scenarios
that might help visualization, please?

2) What would you say to somebody who disagreed with your statement that
the journal rating means anything at all, on the grounds that your reasoning
rests on a logical fallacy of association? The fact that the journal in
which the candidate published her or his work has a low rejection rate and
a "low impact" score does not necessarily mean that the candidate's
scholarship is of low quality. The reverse is true as well. Even
if we assume, as you seemed to do, that the ISI scores of journals are
somehow meaningful, being in "good" or "bad" company does not necessarily
define the character or quality of the candidate's work. What do you think
about this argument?

3) From Peter's description, I am making an observation that can be wrong
(Peter, please correct me): the ISI journal scores are mainly used by
colleagues who cannot directly judge the quality of the candidate's
scholarship. Am I correct in this observation?

4) Why do you and your colleagues not rely mainly on the judgment of
external reviewers, who evaluate the candidate's scholarship directly,
without the problematic mediation of the "reputation" of the journals in
which the candidate published her or his work?

What do you think?

Eugene

> -----Original Message-----
> From: xmca-bounces@weber.ucsd.edu [mailto:xmca-bounces@weber.ucsd.edu]
> On Behalf Of Peter Smagorinsky
> Sent: Tuesday, July 08, 2008 6:25 AM
> To: 'eXtended Mind, Culture, Activity'
> Subject: RE: [xmca] Publish and/or Perish: ISI practices
>
> A few things in response to this email and others:
>
> First, I recognize that impact rankings are insufficient in many ways,
> much
> like the US News and World Report university rankings, which also are
> gamed
> by institutions (e.g., waiving application fees to increase
> applications
> solely for the purpose of rejecting more applicants, which makes the
> school appear
> more competitive). Believe me, I know that the system is flawed, as are
> most
> systems that make much of a handful of indicators.
>
> At the same time, the journals I think highly of and read do tend to
> get
> high impact scores, so the impact rankings are not insignificant. Like
> an
> SAT score on an application, it doesn't mean everything, but it also
> doesn't
> mean nothing.
>
> As to how I use an impact score on a tenure/promotion review: I tend to
> review cases in which many people with decision-making power are not
> entirely familiar with the candidate's field. My own field is English
> Education, and so I review a lot of English Ed faculty who tend to be
> in one
> of two types of departments, or activity settings if you will: An
> English
> department, where the person occupies the 3rd of 3 status tiers
> (English
> literature rules, Composition and Rhetoric is a minor field, and
> English Ed
> is the dog who gets kicked at the end of a bad day--the closer a
> faculty
> member is to the rank-and-file proletariat, the lower the status of the
> position). In a College of Education, most English Ed faculty are in a
> Curriculum and Instruction department, which takes the "Noah's Ark"
> approach
> of housing two of every kind: two Social Studies Ed (one secondary, one
> elementary), two English Ed, and so on. The people in Mathematics Ed
> might
> not know the relative status of the journals that an English Ed faculty
> member
> might know, so I profile each journal. Here are some samples. Not all
> include an impact factor, because not all journals are on the list. The
> idea
> is to include impact factor as part of the review of each journal.
> Because I
> write a lot of reviews of t/p cases (about 40 thus far), I maintain a
> journal databank so that I don't have to reinvent the wheel with each
> evaluation I write, which has numbered as many as 9 in one year.
>
> OK, here are some journals I've profiled that include impact rankings.
> I'll
> throw in one for which I don't have an impact ranking just for purposes
> of
> contrast:
>
> The American Journal of Education is a high-stature journal edited at
> the
> University of Chicago and published by the University of Chicago Press.
> Throughout its history (and it has been published consecutively since
> 1891) it
> has been a premier journal, often with a 10% acceptance rate or less. I
> am
> perhaps biased in my high regard for AJE, having earned my M.A.T. and
> Ph.D.
> at the University of Chicago, having served on the journal's editorial
> board, and having published two articles and a book review in it
> myself. But
> I believe that it ranks among the best general-interest education
> journals,
> along with Teachers College Record, Harvard Educational Review,
> American
> Educational Research Journal, and a select handful of other journals.
> Average rank in impact factor among all educational research journals,
> 1999-2005: 53rd; Highest rank: #18 (see
> http://www.sciedu.ncue.edu.tw/board_docs/SSCI2005-1999.doc)
>
> Anthropology and Education Quarterly is the journal of the Council on
> Anthropology and Education, a professional association of
> anthropologists
> and educational researchers affiliated with the American
> Anthropological
> Association. It is a peer-reviewed, quarterly journal with a
> distinguished
> reputation. According to the journal website, in 2003 the editors
> accepted
> 11% of manuscripts submitted for review (including both initial
> submissions
> and revised and resubmitted papers), making it among the field's most
> highly
> selective journals. Average rank in impact factor among all educational
> research journals, 1999-2005: 61.67th; Highest rank: #37 (see
> http://www.sciedu.ncue.edu.tw/board_docs/SSCI2005-1999.doc)
>
> College Composition and Communication is a refereed journal published
> by the
> National Council of Teachers of English with an acceptance rate between
> 10%-25%. I haven't read this journal in quite a few years, but it is
> the
> journal for scholars concerned with writing instruction and assessment
> at
> the university level. The Conference on College Composition and
> Communication, which sponsors the journal, holds the field's primary
> annual
> meeting for first-year composition faculty and others interested in
> composition theory and its politics.
>
> Critical Inquiry in Language Studies: An International Journal is the
> peer-reviewed, quarterly official journal of the International Society
> for
> Language Studies. It identifies its contributions as multidisciplinary
> and
> international, and accepts about 20% of submitted articles. According
> to its
> website, "CILS seeks manuscripts that present original research on
> issues of
> language, power, and community within educational, political, and
> sociocultural contexts with broader reference to international and/or
> historical perspective. Equally welcome are manuscripts that address
> the
> development of emergent research paradigms and methodology related to
> language studies. Though CILS seeks to present a balance of research
> from
> contributing disciplines, interdisciplinary foci are encouraged." The
> journal boasts an impressive editorial board, including Michael Apple,
> Dennis Baron, Charles Bazerman, Sari Knopp Biklen, Carole Edelsky,
> James
> Gee, James Lantolf, Cynthia Lewis, Allan Luke, Donaldo Macedo, Alastair
> Pennycook, Guadalupe Valdés, and other luminaries. Although I am not
> familiar with the journal, its profile suggests that it is a journal of
> some
> stature, and that a publication listing with CILS is an asset to one's
> curriculum vita.
>
> Curriculum Inquiry is a highly regarded "niche" journal (i.e., one that
> features a particular research topic) published by Blackwell, a
> respectable
> publisher of educational materials. I am not familiar with this
> journal
> other than by reputation, but found some impressive encomia by
> distinguished researchers at the journal's website:
> "One of the top general education journals. It is the finest
> publication in
> the English speaking world that focuses on curriculum planning,
> teaching and
> evaluation."
> Elliot Eisner, Stanford University, USA
> "One of the most lively and stimulating journals. Its dedication to
> exploring issues and pursuing debates, across a wide range of issues,
> is
> second to none. "
> Martyn Hammersley, Open University, UK
> "One of the few education journals to open up contemporary theoretical
> perspective on general education."
> Maxine Greene, Columbia University, USA
> Given the stature of these commentators, it would be hard to regard
> Curriculum Inquiry as anything but a powerhouse journal in the area of
> curriculum studies. Average rank in impact factor among all educational
> research journals, 1999-2005: 79.16th; Highest rank: #66 (see
> http://www.sciedu.ncue.edu.tw/board_docs/SSCI2005-1999.doc)
>
>
>
> Peter Smagorinsky
> The University of Georgia
> 125 Aderhold Hall
> Athens, GA 30602
> smago@uga.edu/phone:706-542-4507
> http://www.coe.uga.edu/lle/faculty/smagorinsky/index.html
>
>
> -----Original Message-----
> From: xmca-bounces@weber.ucsd.edu [mailto:xmca-bounces@weber.ucsd.edu]
> On
> Behalf Of Eugene Matusov
> Sent: Monday, July 07, 2008 6:23 PM
> To: 'eXtended Mind, Culture, Activity'
> Cc: jewett@udel.edu; 'UD-PIG'; 'Tonya Gau Bartell'; 'Bob Hampel';
> rosa@udel.edu; rhayes@mundo-r.com
> Subject: RE: [xmca] Publish and/or Perish: ISI practices
>
> Dear XMCA folks--
>
>
>
> I'm also concerned with the apparent proliferation of the ISI Web of
> Knowledge practices of rating academic journals for the evaluation of
> scholarship. I'm not very knowledgeable about it and do not have
> firsthand
> experience of it (fortunately for me!), but I have heard from my foreign
> colleagues their concerns and stories about the proliferation of the
> ISI in
> academia.
>
>
>
> Here I want to offer my tentative analysis of the ISI practice using
> what I
> call "questionable claims." These are my claims based on my limited
> experiences of participation in academia, observations, stories of my
> colleagues, rumors, speculations, and so on. I treat them cautiously
> because,
> although they may sound very reasonable (at least to me), they can be
> partially or fully wrong.
>
>
>
> Questionable claim#1. Academic practice involves summative assessment
> of a
> scientist's contributions to the field of the scientist's specialization
> and
> (claimed) expertise. These summative assessments are often both
> qualitative
> and quantitative in nature. Like any summative assessment,
> summative
> assessments in academia are about sorting people into success and
> failure.
> Institutionally recognized successes provide the person with access to
> social goodies while institutionally recognized failures block this
> access.
> My observation of the US academia suggests the following commonly
> occurring
> summative assessments in institutional academia:
>
> A. Defended vs. non-defended dissertation;
>
> B. Getting vs. not getting an academic job;
>
> C. Renewal vs. non-renewal of a contract;
>
> D. Getting tenure vs. not getting tenure;
>
> E. Getting promotion vs. not getting promotion;
>
> F. Publishing vs. non-publishing a scholarly manuscript in a
> recognized
> publication source (a peer-reviewed journal, book, and so on);
>
> G. Getting vs. not getting a research grant;
>
> H. Getting a good vs. a bad annual evaluation from the department
> administration (in my institution, this is probably the least consequential
> summative assessment);
>
> I. Did I miss something?
>
>
>
> Many (but not all) of the listed summative assessments depend on 1F,
> namely,
> academic publications. That is why "publish or perish" is a rather
> accurate
> motto. Interestingly enough, even the dissertation defense can be
> linked to
> publications. For example, in Norway (University of Bergen), I observed
> a dissertation defense that required the publication of 3 articles in
> selected
> peer-reviewed academic (international or national) journals. These
> publications, republished in a special brochure with some explanations,
> constitute the dissertation itself. But as far as I know, this is not
> the practice in the US (am I wrong?).
>
>
>
> Questionable claim#2. Summative assessment is unavoidable and good for
> the
> science practice for the following reasons:
>
> A. "Dead wood": It is a good idea for the practice of science (and
> arguably
> academic teaching, though this is even more questionable) to weed out
> people
> who do not do science;
>
> B. "Limited resources": Since resources are always limited, it is a
> good
> idea to prioritize supporting highly productive, important, and/or
> promising
> scientists and their research programs over less or non-productive,
> important, and/or promising ones;
>
> C. "Accountability": Society puts its trust and needed resources
> in the
> science practice and, thus, it legitimately expects that somebody will
> supervise whether the science practice delivers on its social
> contract with society;
>
> D. "Quality of scholarship discourse": It is arguably a good idea for
> the
> science practice itself to involve scientists in debating what
> constitutes
> the quality of their scholarship;
>
> E. "Focus": Summative assessment creates a necessary focus on which
> texts,
> ideas, and people are important and worth others' attention and
> resources;
>
> F. "Scientific reputation": Summative assessment can help create and
> enact
> scientific reputations needed for effective science making;
>
> G. "Professionalization of science": If the science practice wants to
> remain professional and recognized as such by society, it should
> have
> self-policing in the form of summative assessments;
>
> H. Did I miss something?
>
>
>
> Thus, if I'm correct that there is a great extrinsic and intrinsic need
> for
> summative assessments of scholars' contributions, the issue is not
> whether
> to do them or not but by whom and how.
>
>
>
> Questionable claim#3. Summative assessment can be very painful for the
> assessed scholar and detrimental to the science practice at large:
>
> A. "Pain and distraction": Since summative assessment sorts people into
> those who get social goodies and those who will be denied these
> goodies,
> the professional, psychological, social, and economic well-being of the
> assessed
> (and often their families) can be in jeopardy. It often leads to
> anxiety,
> depression, and pain that distract the assessed scientists (and their
> environment) from the science-making practice itself (and other related
> professional practices);
>
> B. "Error #1 demoralization": There is always a possibility that one
> who
> deserves the social goodies won't get them as a result of the summative
> assessment;
>
> C. "Error #2 demoralization": There is always a possibility that one
> who
> does not deserve the social goodies will get them as a result of the
> summative assessment;
>
> D. "Abuse": There is always a possibility that summative assessment
> can be
> diverted by personal, social, or political interests that have nothing
> to do
> with the summative assessment of the scholar's contributions to the
> academic
> field (this may include, for example, paradigm wars, political
> suppression
> of scientific results, and even sexual harassment);
>
> E. "Culture of fear": Summative assessment creates a culture of fear
> in
> scientific communities and institutions, in which people are afraid to
> do
> and to say what they want to (or even must) do and say because they are
> too
> concerned (often justifiably) that what they do and say may affect
> their
> summative assessments performed by others near them;
>
> F. "Long-term contributions": Sometimes it takes a long time for a
> particular
> contribution to mature and to be recognized by a scientific community;
>
> G. "Reducing risks, innovations, and creativity by conforming to the
> status
> quo": Summative assessment often pushes scholars to play it safe by not
> taking
> risks and by stifling their own creativity because they are afraid that
> radical innovations in their scholarship might not be recognized by
> many who
> will perform the summative assessment, or not by the time of the assessment;
>
> H. "Quality vs. quantity: paper tiger": It is difficult to decide how
> fully
> to take into account the quality and quantity of someone's scholarship.
> Summative assessment often forces scholars to produce many research
> papers
> rather than to invest time and effort in a few, or even one, of better
> quality. It also risks fostering a community of
> scholarly
> writers rather than scholarly readers;
>
> I. "Medium bias": Scientific contributions are often reduced to
> published
> texts authored by the assessed scholars. Individual authorship is
> prioritized over collective authorship. However, it can be argued (and
> shown through
> anecdotes) that other contributions (such as oral contributions or
> certain
> actions) can be very important for the science practice. These
> contributions
> are often not appreciated and evaluated by existing summative
> assessments;
>
> J. "Inhibition of learning": Summative assessments, focused on
> revealing
> and punishing the candidate's deficits, make mistake-making, the
> foundation
> of any learning, costly. People often inhibit their own learning by
> hiding
> their mistakes and not asking for help;
>
> K. "Culture of distrust and adversaries": Being summatively assessed by
> colleagues can easily create long-lasting adversaries in scientific
> communities (it is often painful to know that some of your colleagues
> think
> that your scholarship is mediocre);
>
> L. "Quality is a part of scholarship": Defining the quality of
> scholarship,
> and what scholarship is, is itself a part of scholarship. Summative
> assessment itself has to be scrutinized by the scientific discourse
> (and
> thus, arguably, stop being summative assessment);
>
> M. "Future is unpredictable": Past performance cannot always predict
> future
> performance, in both directions: successful past performance may lead
> to poor
> future performance, and poor past performance can lead to excellent
> future
> performance;
>
> N. Did I forget something?
>
>
>
> Questionable claim#4. There are three major types of summative
> assessment:
>
> A. Mainly judgment-based (e.g., professional peer review);
>
> B. Mainly procedure-based (e.g., the ISI Web of Knowledge rating of
> journals and the citation rates of the candidate's publications can be
> used for
> developing a formula calculating "the contribution score" of the
> candidate.
> If the score is higher than a certain numerical criterion, the
> candidate
> is successful; if not, he or she fails the evaluation. As far as I
> know, a
> similar procedure-based system is used in Spain. Am I right?);
>
> C. Judgment-procedure hybrid (e.g., the candidate's publications can
> be
> limited to those published in "respected journals" usually defined by
> the
> ISI practice -- i.e., a procedure-based model -- but those publications
> are
> still professionally peer-reviewed by recognized experts -- i.e., a
> judgment-based model).
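[A minimal sketch of how such a procedure-based rule might work. The scoring formula, weights, and cutoff below are invented for illustration only; they are not ISI's or any real institution's actual method.]

```python
# Hypothetical "contribution score" of the kind described in (B):
# each publication is represented by its journal's impact factor and
# its citation count, combined into a single number that is then
# compared against a cut-off criterion.

def contribution_score(publications):
    """publications: list of (journal_impact_factor, times_cited) pairs."""
    return sum(impact * (1 + cited) for impact, cited in publications)

def passes_evaluation(publications, cutoff=10.0):
    """The candidate succeeds if the score clears the numerical criterion."""
    return contribution_score(publications) >= cutoff

# Three hypothetical publications of varying journal impact and citations.
pubs = [(1.2, 5), (0.4, 0), (2.0, 12)]
print(round(contribution_score(pubs), 1))  # 33.6
print(passes_evaluation(pubs))             # True
```

The point of the sketch is that the whole judgment reduces to arithmetic: once the formula and cutoff are fixed, no disciplinary expertise is needed to apply it.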
>
>
>
> Peter, you wrote, "I really can't explain or defend the charts and how
> they're compiled; I simply provide one that I use when evaluating
> tenure/promotion cases." Can you describe, please, how you use the
> ISI to
> do summative assessments in your institution (e.g., to evaluate
> tenure/promotion cases)?
>
>
>
> In my institution, the School of Education at the University of Delaware,
> summative assessments are mainly judgment-based. My colleague Bob
> Hampel and
> I recently wrote a paper on this issue, available at
> http://www.aaup.org/AAUP/pubsres/academe/2008/JF/Feat/matu.htm
>
>
>
> Questionable claim#5. A procedural model of summative assessment in
> academia
> has several advantages over a judgment-based model:
>
> A. Summative assessments and the administrative decisions that follow
> can be made
> by people alienated from the field of the candidate or even by non-
> scholars
> (i.e., administrators);
>
> B. It is efficient in time, effort, and personnel (however, the ISI has
> to be
> paid for the data);
>
> C. It does not rely on accurate identification of experts in the field
> of
> the candidate's specialization (and/or paradigm);
>
> D. It is impersonal and alienated (this is often confused with
> "objectivity") and as a consequence it has the following benefits:
>
> a. It is legally defensible;
>
> b. It is always procedurally fair and perceived as less arbitrary from
> case
> to case (this is not necessarily true in reality, since the biases of the
> ISI
> are hidden and not transparent);
>
> c. It is psychologically and socially safer (imagine that you failed
> some
> institutional summative assessment -- it is probably much easier for you,
> psychologically and socially, to blame some kind of impersonal procedure
> that gave
> you a lower score than your colleagues who personally and
> professionally
> judged your scholarship as mediocre);
>
> d. It does not affect the social climate at the workplace to make it
> adversarial (at least not as much as a judgment-based model does);
>
> E. It is unified and standard across different cases, people, various
> and
> unrelated fields of science, and administrative units of universities
> and
> ministries of Higher Education;
>
> F. It is easy for the administration to institutionally balance the
> "supply" of and
> "demand" for scientists by adjusting the cut-off criterion number of
> the
> "contribution score";
>
> G. Did I forget something else?
>
>
>
> I wonder if these benefits drive the proliferation of the ISI practice
> and other
> procedural models in academia across the world. Or is it something else
> that
> I missed?
>
>
>
> Questionable claim#6. A judgment-based model of summative assessment in
> academia has several advantages over a procedural model:
>
> A. Judgment-based summative assessment can be more meaningful and
> contextual than a procedure-based one;
>
> B. It is nuanced;
>
> C. It can take into account more complex, contextual, and substantive
> factors than just mechanical factors such as, for example: 1) a journal
> rate
> of rejections and 2) citations following the candidates� publications
> (as in
> the ISI practice);
>
> D. While judging the quality of the candidate's scholarship, a
> judgment-based summative assessment can contextually define what
> constitutes
> this quality of scholarship for the given candidate in the given
> specific
> field of the candidate's expertise;
>
> E. Arguably, under the right conditions, a judgment-based model of
> summative assessment can more easily protect the candidates from the
> casualties
> of paradigm wars (arguably the pool of possible professional peer
> reviewers
> can be selected to avoid the possibility of paradigm wars, while this can
> be
> hidden in the procedure-based model -- it is probably easier to publish
> in
> "respected journals" for scholars belonging to the mainstream than to newly
> emerging paradigms);
>
> F. Did I miss something?
>
>
>
> Questionable claim#7. Procedure-based models of summative assessment
> in
> academia (especially ones using the ISI Web of Knowledge practice)
> have
> been spreading internationally and in the US.
>
>
>
> Does somebody have any data supporting or undermining this claim? If
> so, why
> does it happen now? Any ideas? Is it because the proliferation of the
> ISI has become possible with the development of the Internet?
>
>
>
> Questionable claim#8. Procedure-based models of summative assessment in
> academia might have the major negative consequence of making the entire
> science practice more conservative, less innovative, less inviting for
> a new
> scientific paradigm questioning the status quo, and encouraging
> emerging
> scholars to play it safe. This may be even truer in the social sciences
> and humanities than in the natural sciences.
>
>
>
> I do not know if there is any research supporting or undermining this
> claim.
>
>
>
> Questionable claim#9. By investigating the reasons and concerns that make
> the
> ISI practice (and other procedure-based models of summative assessment)
> more
> attractive to administrators and scholars organized into department
> units,
> it is possible to offer them alternative, judgment-based models
> that
> might still be attractive to them.
>
>
>
> By the way, Peter, do you know why, and historically how, your department
> accepted the ISI procedural model of institutional summative
> assessments? What was there before it? Did you have any discussions of
> alternatives? What caused the change, and how do people justify the current
> practice? I think it would be very useful for us to know in order to
> understand this
> practice. In my department, so far, all attempts to introduce
> procedure-based models/policies of summative assessment have been
> defeated.
>
>
>
> What do you think?
>
>
>
> Eugene
>
>
>
>
>
> > -----Original Message-----
>
> > From: xmca-bounces@weber.ucsd.edu [mailto:xmca-
> bounces@weber.ucsd.edu]
>
> > On Behalf Of Peter Smagorinsky
>
> > Sent: Saturday, July 05, 2008 9:28 AM
>
> > To: 'eXtended Mind, Culture, Activity'
>
> > Subject: RE: [xmca] Publish and/or Perish
>
> >
>
> > I really can't explain or defend the charts and how they're compiled;
> I
>
> > simply provide one that I use when evaluating tenure/promotion cases.
>
> > Sorry,Peter
>
> >
>
> > Peter Smagorinsky
>
> > The University of Georgia
>
> > 125 Aderhold Hall
>
> > Athens, GA 30602
>
> > smago@uga.edu/phone:706-542-4507
>
> > http://www.coe.uga.edu/lle/faculty/smagorinsky/index.html
>
> >
>
> >
>
> > -----Original Message-----
>
> > From: xmca-bounces@weber.ucsd.edu [mailto:xmca-
> bounces@weber.ucsd.edu]
>
> > On
>
> > Behalf Of David H Kirshner
>
> > Sent: Saturday, July 05, 2008 9:08 AM
>
> > To: eXtended Mind, Culture, Activity
>
> > Subject: RE: [xmca] Publish and/or Perish
>
> >
>
> > Peter,
>
> >
>
> > Can you clarify a few points about the list:
>
> >
>
> > Why are some central journals, like Educational Researcher, not
>
> > included and
>
> > others, like Review of Research in Education, not listed with
> complete
>
> > entries?
>
> >
>
> > I'm assuming from the low score for Harvard Ed Review that impact is
>
> > calculated by frequency of citation, which means that another key
>
> > measure of
>
> > journal quality--acceptance rate--is ignored. Is that correct?
>
> >
>
> > Thanks.
>
> > David
>
> >
>
> >
>
> > -----Original Message-----
>
> > From: xmca-bounces@weber.ucsd.edu [mailto:xmca-
> bounces@weber.ucsd.edu]
>
> > On
>
> > Behalf Of Peter Smagorinsky
>
> > Sent: Saturday, July 05, 2008 4:56 AM
>
> > To: 'eXtended Mind, Culture, Activity'
>
> > Subject: RE: [xmca] Publish and/or Perish
>
> >
>
> > Attached is one "impact factor" list I found for journals in
> education.
>
> > p
>
> >
>
> > Peter Smagorinsky
>
> > The University of Georgia
>
> > 125 Aderhold Hall
>
> > Athens, GA 30602
>
> > smago@uga.edu/phone:706-542-4507
>
> > http://www.coe.uga.edu/lle/faculty/smagorinsky/index.html
>
> >
>
> >
>
> > -----Original Message-----
>
> > From: xmca-bounces@weber.ucsd.edu [mailto:xmca-
> bounces@weber.ucsd.edu]
>
> > On
>
> > Behalf Of Cathrene Connery
>
> > Sent: Friday, July 04, 2008 7:38 PM
>
> > To: mcole@weber.ucsd.edu; eXtended Mind, Culture, Activity
>
> > Cc: eXtended Mind, Culture, Activity
>
> > Subject: Re: [xmca] Publish and/or Perish
>
> >
>
> > So, who has a list of the ISI journals? Anyone willing to share?
>
> > Cathrene
>
> >
>
> >
>
> >
>
> > The BIG down side is total assimilation to the existing mainstream,
>
> > David.
>
> > >
>
> > > I personally suggest a multi-valenced approach that includes ISI
>
> > > highly rated journals and deviant ones, like MCA.
>
> > >
>
> > > Michael left out part of the GOOD news. MCA has a rating that
> should
>
> > > win it ISI inclusion by year's end.
>
> > >
>
> > > I assume the PLAY article for discussion made it to everyone.
> People
>
> > > reading this weekend?
>
> > > mike
>
> > >
>
> > > On Fri, Jul 4, 2008 at 1:50 PM, David Preiss <davidpreiss@uc.cl>
>
> > wrote:
>
> > >
>
> > >> As a young scholar, I totally ENDORSE this petition, Michael.
>
> > Indeed,
>
> >> I have always thought that MCA's influence and intellectual appeal
>
> > is
>
> >> not commensurate with its lack of inclusion in ISI. Alas, ISI! No
>
> > >> chance but to play according to its rules, I guess.
>
> > >> david
>
> > >>
>
> > >>
>
> > >> On Jul 4, 2008, at 4:39 PM, Wolff-Michael Roth wrote:
>
> > >>
>
> > >> Hi all,
>
> > >>> Mike and I have had a conversation off line. He suggested I
> should
>
> > >>> write to the list. It concerns the increasing pressure on all of
> us
>
> > >>> to publish in "good" journals, and universities increasingly use
> as
>
> > >>> a measure the presence and impact factor ranking in ISI Web of
>
> >>> Science. This is especially true for Asian countries
>
> > >>> and other countries. With my graduate students, we always make
>
> > >>> selections based on this criterion, because I want them to be
>
> > >>> successful in their home countries and careers.
>
> > >>>
>
> > >>> In the sciences, this has long been common practice; now the
> social
>
> > >>> sciences are swept up by the same trend. I have recently been
>
> > >>> bombarded by publishers whose journals have increased in their
>
> > >>> impact factor.
>
> > >>> Furthermore, there are a number of companies that make the
> rankings
>
> > >>> of their journal a key bit of information on the website.
>
> > >>>
>
> > >>> (Some of) You may be asking what this has to do with you. Well,
>
> > >>> since I have been editing journals (besides MCA, I also do
> CULTURAL
>
> > >>> STUDIES OF SCIENCE EDUCATION and FQS: FORUM QUALITATIVE SOCIAL
>
> > >>> RESEARCH), I have been asked by new faculty members about
> rejection
>
> > >>> rates, rankings, etc. And I have been asked by department heads
> and
>
> > >>> deans as well.
>
> > >>>
>
> > >>> Some may decide to opt out, which would carry dire consequences
> > >>> for many, who might find themselves denied tenure or promotion.
> > >>>
> > >>> Unfortunately, we (MCA) are not currently in the ISI Web of
> > >>> Science, which places those of you who publish in the journal in
> > >>> an unfortunate situation.
> > >>>
> > >>> One of the ways in which you, the community as a whole, can be
> > >>> proactive in producing the conditions that would convince ISI to
> > >>> make MCA one of its listed and ranked journals is to make it a
> > >>> habit to cite RECENT articles you have been reading in MCA. Here
> > >>> is why:
> > >>>
> > >>> The impact factor for 2007 (which is what was made available
> > >>> just a few days ago), for example, is calculated using the
> > >>> following formula:
> > >>>
> > >>>                  number of citations in 2007 referencing
> > >>>                    articles published in 2005 and 2006
> > >>> impact factor = ------------------------------------------
> > >>>                       number of citable articles
> > >>>                      published in 2005 and 2006
> > >>>
> > >>> (They may not take self-citation into account, but I am not sure.)
> > >>>
> > >>> So the impact factor is 1 when a journal received 60 citations
> > >>> from the outside while having published 60 articles (over 2005
> > >>> and 2006).
> > >>>
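[For readers who want to check the arithmetic, the calculation above can be sketched in a few lines of Python. This is only an illustration of the formula as described in this thread; the function name and the 60/60 figures are the hypothetical example from the preceding paragraph, not real ISI data.]

```python
def impact_factor(citations, citable_articles):
    """2007 impact factor: citations in 2007 that reference articles
    published in 2005-2006, divided by the number of citable articles
    published in 2005-2006."""
    return citations / citable_articles

# The 60/60 example from the message: 60 outside citations,
# 60 articles published over the two-year window.
print(impact_factor(60, 60))  # -> 1.0
```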
> > >>> You see, as a community, you can help yourselves by citing MCA
> > >>> work in other journals. With high rankings, MCA will be included
> > >>> in ISI, and then you and your peers will be rated higher at your
> > >>> institutions because the journal is part of ISI.
> > >>>
> > >>> Have a nice weekend, all of you.
> > >>> Sincerely,
> > >>> Michael
> > >>>
> > >>> Wolff-Michael Roth, Editor-in-Chief
> > >>> MIND, CULTURE, AND ACTIVITY
> > >>> Email: mroth@uvic.ca
> > >>> Journal: http://www.tandf.co.uk/journals/1074-9039
> > >>> Submissions: http://mc.manuscriptcentral.com/mca
> > >>>
> > >>> _______________________________________________
> > >>> xmca mailing list
> > >>> xmca@weber.ucsd.edu
> > >>> http://dss.ucsd.edu/mailman/listinfo/xmca
> > >>
> > >> David Preiss, Ph.D.
> > >> Subdirector de Extensión y Comunicaciones, Escuela de Psicología
> > >> Pontificia Universidad Catolica de Chile
> > >> Av Vicuña Mackenna - 4860
> > >> 7820436 Macul
> > >> Santiago, Chile
> > >>
> > >> Fono: 3544605
> > >> Fax: 3544844
> > >> e-mail: davidpreiss@uc.cl
> > >> web personal: http://web.mac.com/ddpreiss/
> > >> web institucional: http://www.epuc.cl/profesores/dpreiss
> > >>
Received on Tue Jul 8 09:58 PDT 2008

This archive was generated by hypermail 2.1.8 : Fri Aug 01 2008 - 00:30:07 PDT