Thursday, November 19, 2009

Article-Level Metrics And The Evolution Of Scientific Impact

 
 

Sent by io2a via Google Reader:

 
 

via Scholarship 2.0: An Idea Whose Time Has Come, by gerry.mckiernan@gmail.com (Gerry), 18/11/09

Neylon C, Wu S (2009) Article-Level Metrics and the Evolution of Scientific Impact. PLoS Biol 7(11): e1000242. doi:10.1371/journal.pbio.1000242 / Published: November 17, 2009

Formally published papers that have been through a traditional prepublication peer review process remain the most important means of communicating science today. Researchers depend on them to learn about the latest advances in their fields and to report their own findings. The intentions of traditional peer review are certainly noble: ... . In principle, this system enables science to move forward on the collective confidence of previously published work. Unfortunately, the traditional system has inspired methods of measuring impact that are suboptimal for their intended use.

Measuring Impact

Peer-reviewed journals have served an important purpose in evaluating submitted papers and readying them for publication. In theory, one could browse the pages of the most relevant journals to stay current with research on a particular topic. But as the scientific community has grown, so has the number of journals—to the point where over 800,000 new articles appeared in PubMed in 2008 ... and the total is now over 19 million ... . The sheer number makes it impossible for any scientist to read every paper relevant to their research, and a difficult choice has to be made about which papers to read. Journals help by categorizing papers by subject, but there remain in most fields far too many journals and papers to follow.

As a result, we need good filters for quality, importance, and relevance to apply to scientific literature. There are many we could use, but the majority of scientists filter by preferentially reading articles from specific journals—those they view as the highest quality and the most important. These selections are highly subjective, but the authors' personal experience is that most scientists, when pressed, will point to the Thomson ISI Journal Impact Factor [1] as an external and "objective" measure for ranking the impact of specific journals and the individual articles within them.

Yet the impact factor, which averages the number of citations per eligible article in each journal, is deeply flawed both in principle and in practice as a tool for filtering the literature. It is mathematically problematic ... with around 80% of a journal impact factor attributable to around 20% of the papers, even for journals like Nature ... . It is very sensitive to the categorisation of papers as "citeable" ... and it is controlled by a private company that does not have any obligation to make the underlying data or processes of analysis available. [snip]
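To make the journal-level averaging concrete, here is a minimal Python sketch of the standard two-year impact-factor calculation (not Thomson ISI's actual pipeline); the citation counts are invented, and the example also shows how a handful of heavily cited papers can dominate the average, in line with the roughly 80/20 skew described above.

```python
# Minimal sketch of the two-year journal impact factor: citations received
# this year to items published in the previous two years, divided by the
# number of "citeable" items published in those years.
# All numbers below are invented for illustration.

def impact_factor(total_citations, n_citeable_items):
    """Average citations per citeable item -- a journal-level mean."""
    return total_citations / n_citeable_items

# A skewed journal: three highly cited papers and 97 barely cited ones.
per_article_citations = [310, 120, 45] + [2] * 97   # 100 citeable articles
journal_if = impact_factor(sum(per_article_citations), len(per_article_citations))
print(f"impact factor: {journal_if:.1f}")            # 6.7

# The same average says little about a typical article in the journal.
median = sorted(per_article_citations)[len(per_article_citations) // 2]
print(f"median citations per article: {median}")     # 2

top_fifth = sorted(per_article_citations, reverse=True)[:20]
share = sum(top_fifth) / sum(per_article_citations)
print(f"share of citations from the top 20% of papers: {share:.0%}")  # 76%
```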

Though the impact factor is flawed, it may be useful for evaluating journals in some contexts, and other more sophisticated metrics for journals are emerging ... . But for the job of assessing the importance of specific papers, the impact factor—or any other journal-based metric for that matter—cannot escape an even more fundamental problem: it is simply not designed to capture qualities of individual papers.

Article-Level Metrics

If choosing which articles to read on the basis of journal-level metrics is not effective, then we need a measure of importance that tells us about the article. It makes sense that when choosing which of a set of articles to read, we should turn to "article-level metrics," yet in practice data on individual articles are rarely considered, let alone seriously measured.

Perhaps the main reason for this absence is a practical one. Accurately determining the importance of an article takes years and is very difficult to do objectively. The "gold standard" of article impact is formal citations in the scholarly literature, but citation metrics have their own challenges. One is that citation metrics do not take the "sentiment" of the citation into account, so while an article that is heavily cited for being wrong is perhaps important in its own way ... , using citation counts without any context can be misleading. The biggest problem, though, is the time-delay inherent in citations. [snip]

The Trouble with Comments

A common solution proposed for getting rapid feedback on scientific publications is inspired by the success of many Web-based commenting forums. Sites like Stack Overflow, Wikipedia, and Hacker News each have an expert community that contributes new information and debates its value and accuracy. It is not difficult to imagine translating this dynamic into a scholarly research setting where scientists discuss interesting papers. A spirited, intelligent comment thread can also help raise the profile of an article and engage the broader community in a conversation about the science.

Unfortunately, commenting in the scientific community simply hasn't worked, at least not generally. [snip]

[snip]

Part of this resistance to commenting may relate to technical issues, but the main reason is likely social. For one thing, researchers are unsure how to behave in this new space. We are used to criticizing articles in the privacy of offices and local journal clubs, not in a public, archived forum. [snip]

Another issue is that the majority of people making hiring and granting decisions do not consider commenting a valuable contribution. [snip]

Then there is simply the size of the community. [snip] But it also means that if only 100 people read a paper, it will be lucky if even one of them leaves a comment.

Technical Solutions to Social Problems

Given the lack of incentive, are there ways of capturing article-level metrics from what researchers do anyway? A simple way of measuring interest in a specific paper might be via usage and download statistics; for example, how many times a paper has been viewed or downloaded, how many unique users have shown an interest, or how long they lingered. [snip] These statistics may not be completely accurate but they are consistent, comparable, and considered sufficiently immune to cheating to be the basis for a billion-dollar Web advertising industry.
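As a rough illustration of the usage statistics described here, the sketch below aggregates a hypothetical access log into per-article views, unique users, and average time on page; the log format, DOIs, and values are assumptions, not any publisher's actual schema.

```python
# Sketch: turning a raw access log into the article-level usage metrics
# mentioned above (total views, unique users, average time on page).
# The log format, DOIs, and numbers are hypothetical.
from collections import defaultdict

access_log = [  # (article_doi, user_id, seconds_on_page)
    ("10.1000/example.001", "u1", 240),
    ("10.1000/example.001", "u2", 15),
    ("10.1000/example.001", "u1", 90),
    ("10.1000/example.002", "u3", 30),
]

views = defaultdict(int)
unique_users = defaultdict(set)
dwell_times = defaultdict(list)

for doi, user, seconds in access_log:
    views[doi] += 1
    unique_users[doi].add(user)
    dwell_times[doi].append(seconds)

for doi in views:
    avg_dwell = sum(dwell_times[doi]) / len(dwell_times[doi])
    print(f"{doi}: {views[doi]} views, "
          f"{len(unique_users[doi])} unique users, "
          f"{avg_dwell:.0f}s average time on page")
```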

A more important criticism of download statistics is that they are a crude measure of actual use. How many of the downloaded papers are even read, let alone digested in detail and acted upon? What we actually want to measure is how much influence an article has, not how many people clicked on the download button thinking they "might read it later." A more valuable metric might be the number of people who have actively chosen to include the paper in their own personal library. [snip]

Examples of such tools are Zotero, CiteULike, Connotea, and Mendeley, which all allow researchers to collect papers into their own libraries while browsing the Web, often with a single click using convenient "bookmarklets." The user usually has the option of adding tags, comments, or ratings as part of the bookmarking process. [snip]

Metrics collected by reference management software are especially intriguing because they offer a measure of active interest without requiring researchers to do anything more than what they are already doing. Scientists collect the papers they find interesting, take notes on them, and store the information in a place that is accessible and useful to them. [snip]
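As a sketch of this "active interest" signal, the code below counts how many users have saved each article in hypothetical personal libraries; real reference managers store richer data in their own formats, so the structure and DOIs here are assumptions for illustration only.

```python
# Sketch: bookmarking as a measure of active interest -- how many users
# have saved a given article to their personal reference library.
# The library contents and DOIs are hypothetical; real tools (Zotero,
# CiteULike, Connotea, Mendeley) each expose their own data formats.
from collections import Counter

user_libraries = {
    "alice": {"10.1000/example.001", "10.1000/example.002"},
    "bob":   {"10.1000/example.001"},
    "carol": {"10.1000/example.001", "10.1000/example.003"},
}

bookmark_counts = Counter(
    doi for library in user_libraries.values() for doi in library
)

for doi, count in bookmark_counts.most_common():
    print(f"{doi}: saved by {count} of {len(user_libraries)} users")
```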

Part of the solution to encouraging valuable contributions, then, may simply be that the default settings involve sharing and that people rarely change them. A potentially game-changing incentive, however, may be the power to influence peers. [snip]

It is too early to tell whether any specific tools will last, but they already demonstrate an important principle: a tool that works within the workflow that researchers are already using can more easily capture and aggregate useful information. [snip]

The Great Thing about Metrics…Is That There Are So Many to Choose From

There are numerous article-level metrics ... and each has its own advantages and problems. Citation counts are an excellent measure of influence and impact but are very slow to collect. Download statistics are rapid to collect but may be misleading. Comments can provide valuable and immediate feedback, but are currently sparse ... . Bookmarking statistics can be rapid to collect and contain high-quality information, but they are largely untested and require the widespread adoption of unfamiliar tools. Alongside these we have "expert ratings" by services such as Faculty of 1000 and simple rating schemes.

[snip]

"Other Indicators of Impact" include ratings and comments, which, like page views, are immediate but may offer more insight because users are more likely to have read the article and found it compelling enough to respond. Additional other indicators are bookmarks, used by some people to keep track of articles of interest to them, and blog posts and trackbacks, which indicate where else on the Web the article has been mentioned and can be useful for linking to a broader discussion. It is clear that all of the types of data provide different dimensions, which together can give a clearer picture of an article's impact.

[snip] As recently shown ... , scientific impact is not a simple concept that can be described by a single number. The key point is that journal impact factor is a very poor measure of article impact. And, obviously, the fact that an article is highly influential by any measure does not necessarily mean it should be.

Many researchers will continue to rely on journals as filters, but the more you can incorporate effective filtering tools into your research process, the more you will stay up-to-date with advancing knowledge. The question is not whether you should take article-level metrics seriously but how you can use them most effectively to assist your own research endeavours. We need sophisticated metrics to ask sophisticated questions about different aspects of scientific impact and we need further research into both the most effective measurement techniques and the most effective uses of these in policy and decision making. For this reason we strongly support efforts to collect and present diverse types of article-level metrics without any initial presumptions as to which metric is most valuable. [snip]

As Clay Shirky famously said ... , you can complain about information overload but the only way to deal with it is to build and use better filters. It is no longer sufficient to depend on journals as your only filter; instead, it is time to start evaluating papers on their own merits. Our only options are to publish less or to filter more effectively, and any response that favours publishing less doesn't make sense, either logistically, financially, or ethically. The issue is not how to stop people from publishing, it is how to build better filters, both systematically and individually. At the same time, we can use available tools, networks, and tools built on networks to help with this task.

So in the spirit of science, let's keep learning and experimenting, and keep the practice and dissemination of science evolving for the times.


Source

[http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1000242]

!!! Thanks To / Garrett Eastman / Librarian / Rowland Institute at Harvard / For The HeadsUp !!!

>>> While These Insights and Suggestions Are An Important Contribution To The Conversation, In Many Ways The Views And Recommendations Are Far From Radical <<<

See My Presentation Delivered At The Workshop On Peer Review, Trieste, Italy, May 23-24, 2003

"Alternative Peer Review: Quality Management for 21st Century Scholarship"

[http://www.public.iastate.edu/~gerrymck/APR-1.ppt]

>>> See In Particular > The 'Seize The E!' Section >>> Embrace the potential of the digital environment to facilitate access, retrieval, use, and navigation of electronic scholarship.

>>It's A Large PPT (200+ Slides) But IMHO ... Well Worth The Experience [:-)]<<

AND

The Big Picture(sm): Visual Browsing in Web and non-Web Databases

[http://www.public.iastate.edu/~CYBERSTACKS/BigPic.htm]

To ReQuote T.S. Eliot >

"Where is the wisdom we have lost in knowledge? Where is the knowledge that we have lost in information?"/ T.S. Eliot / The Rock (1934) pt.1

To Quote Me >

"It's Not About Publication, It's About Ideas"

>>> We Now Have The Computational Power To Make Real-Time Conceptual Navigation An EveryDay Occurrence <<<

!!! Let Us Use It To Navigate Ideas !!!

Indeed Let Us Continue "... experimenting, and keep the practice and dissemination of science evolving for the times."

 
 
