
Confessions of an early career scientist

Confession 1: No, I do not have publications in Cell, Science, or Nature. In fact, I’ve never submitted to any of those journals. And I don’t plan on it. Why? Because they’re not open access and the publishers don’t have a good history of supporting open science. Are there more nuanced reasons? Sure, but plus or minus details, that’s pretty much it. Of course, I realize not having publications in these journals puts me at a competitive disadvantage. Am I worried? Yes, but mostly because it speaks to how incredibly flawed our current system of evaluating scientists really is.  It’s a symptom of a far bigger problem. But I refuse to be part of that problem, even if it costs me.

Confession 2: No, I do not have publications in any high Impact Factor journals. None of my published articles has appeared in a journal with an Impact Factor much above 3. One was published in a journal that has yet to receive an IF. Do I care? Absolutely not. Studies have shown that IF is not correlated with scientific quality or impact. In fact, it’s highly correlated with retraction rate. Do other people care? Unfortunately, yes. Hiring and tenure committees consider journal IF as a proxy for the quality of a scientist’s work. It’s absurd. Unscientific. And it has to change.

Confession 3: Yes, I have “only” four published articles in total. Since graduating, I have published 1 article per year. I have done this while not having a permanent position. Or a lab. Or any research funds. Or a fixed home for the last two years. Oh, and I also have two kids. So, you can call that low productivity. You can ask me, where is the exponential increase in publication rate? You can complain that my h-index is only 2 and my i10 index is only 1. Or, you can stop for a moment to think about what it took to get those publications out. You can realize what I would be able to do with a permanent position, a lab, research funds, a fixed home, and some decent affordable childcare. And then, maybe you can ask instead how you can help.

Confession 4: No, my publications don’t yet demonstrate a single, unified line of work. Why? Two reasons, one academic and one practical. Academic reason? I’m interested in many things, and I’ve found ways to apply my love of physiology, neuroscience, and mathematics to different research problems. You think that’s a bad thing? Sue me. I think it’s a beautiful thing. New problems keep me interested and my perspective fresh. I may enter a new subfield with less background knowledge than the expert, but I also enter with fewer prejudgements. Maybe I can see a tired, old problem in a new way that breaks it open. And the practical reason? Survival. There simply aren’t enough jobs in academia to go around, and even fewer if you stay in one small subfield. Being able to work on a variety of projects using different experimental or theoretical tools gives me options, opens more doors. Adaptability, plain and simple. Maybe I don’t work full-time on the problems I envisioned myself working on when I started out in science. But I do work on fascinating problems and I enjoy it.

This is usually where someone would ask for forgiveness. But I don’t think my sins are that great. The fact that academia views them as sins is more telling. I’m not sure I’m the one who needs to clean up my act.

Responses to “Confessions of an early career scientist”

  1. Dennis

    Completely with you here. By the way, one publication per year since graduation, working on your own in a field like neuroscience, is in fact pretty productive.

  2. namnezia

    I think it’s commendable that you’ve managed to stay so productive, but this outright dismissal of science published in high-impact journals is plain and simple ridiculous. It takes a lot of hard work, patience, resilience and perseverance to have a paper published in a high impact journal, and to dismiss this as someone just “playing the game” is not accurate. I agree that the journal does not always necessarily reflect the quality of the science. Having published in both high and low IF journals, I can say that some of what I think are my strongest and favorite pubs are not in high IF journals. But overall, those that did get into high IF journals ARE better and more interesting to a wide audience, are more highly cited, and are the ones I have received the most feedback on (invites for talks, emails, conversations at meetings, etc.). Are these papers “more likely to get retracted”? I don’t think so. And everyone has personal circumstances that can interfere with their professional life; you have no idea what others go through and how they cope with various things in life. I’m not diminishing what you have done, but you should not do the same by implying that everyone else “has it easy”.

    1. Erin McKiernan

      I wasn’t suggesting dismissing all science published in high IF journals. I agree that some great science is published in those journals, and that it does take a lot of hard work by researchers to produce those publications. I was simply trying to point out that something isn’t necessarily better science just because it’s published in a high IF journal, which is often an assumption made by many hiring and tenure committees. I also never said that everyone else “has it easy”, nor did I mean to imply that. Nothing about having a career in science is easy, no matter how you decide to do it. You’re right. I don’t know what others go through to be successful in science, just as they don’t know what I go through. That was exactly my point. I wanted to give an example (necessarily using myself because I’m privy to the information) of what some people might go through, and that isn’t captured by simply looking at raw numbers. I know there are many, many scientists struggling with similar problems. Far from diminishing that, I wanted to talk about it and point to how the flawed evaluation system in science is creating some of these problems.

    2. Dennis Eckmeier

      Some articles in high IF journals are indeed cited very often, but by far not all of them. A minority of articles are very successful and pull up the rest when calculating the mean. The median is, however, not that impressive (still pretty good, but not impressive):
      http://www.nature.com/neuro/journal/v6/n8/full/nn0803-783.html (yes, published in nature, duh)
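
      To make the mean-versus-median point concrete, here is a minimal Python sketch with made-up citation counts (the numbers are invented, not taken from any real journal):

      ```python
      from statistics import mean, median

      # Made-up citation counts for ten articles in one journal: two
      # blockbusters pull the mean far above what a typical article gets.
      citations = [0, 1, 2, 2, 3, 4, 5, 6, 40, 200]

      print(mean(citations))    # 26.3 -- roughly what a journal-level average like the IF reflects
      print(median(citations))  # 3.5  -- the typical article
      ```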

      I agree that it is hard to publish in high IF journals, but the articles I read in high IF journals – although many are great – also tell a story of how non-scientific factors play into the selection process and how far scientists go in overselling bold statements.

      So, saying that high quality work is published in high IF journals is true. The converse argument, however, is not. And that’s what is wrong with using the impact factor calculated for a journal as a measure of the quality of individual studies and of the researchers who published there. It’s plain wrong. And I am sorry for everybody who published awesome work in high IF journals, because their outstanding work is what boosted the careers of hundreds of other researchers who published mediocre work in the same journal.

      1. Erin McKiernan

        Well put, Dennis! You expressed many of the ideas I wanted to but couldn’t quite get out. Thank you for that. I also think it’s important to consider that, due to the glam publishing culture, some papers are being cited at least in part because they were published in high IF journals and others are already citing them. The article you linked to in Nature Neuroscience admits this and refers to the problem of ‘feedback loops’. They conclude:

        [Citation data] is a blunt instrument at best, and when complex distributions are reduced to simple averages, then much of the usefulness is lost. Journal impact factors cannot be used to quantify the importance of individual papers or the credit due to their authors, and one of the minor mysteries of our time is why so many scientifically sophisticated people give so much credence to a procedure that is so obviously flawed.

      2. Björn Brembs

        “So, saying that high quality work is published in high IF journals is true.”

        Is the citation data you cite your only evidence for that? You mean citations like: “Our data falsify the claims of Smith et al. (2012)”? I’m not so sure citations are even slightly correlated with ‘quality’. 🙂 Citations probably correlate better with utility or attention than quality.
        Erin cited our paper, which summarizes the correlation (or lack thereof) of IF with several measures of quality – the data contradict the statement I quoted above. In fact, it seems rather like the reverse – if anything, high IF is slightly indicative of low quality. Look at the data, it’s open access 🙂

        1. Erin McKiernan

          Good point! I think the influence of ‘negative citations’, i.e. “Our data falsify Smith et al. (2012)”, often gets ignored when looking at raw citation counts. I imagine, for example, that many people are going to be citing the arsenic life paper, but as an example of how not to do science. The paper’s citation count in this respect should certainly not be taken as a measure of quality (quite the opposite). Unfortunately, the pressure to publish in high IF journals is pushing more researchers to submit oversold, shoddy, and even fraudulent science. I think this is what is being reflected in the data presented in Björn’s paper. I read Dennis’ statement, however, as acknowledging that some high-quality work is published in high IF journals and that we shouldn’t, as others argued here as well, simply discount work just because it’s published there. We have to assess each piece of work independently, which is why it doesn’t really matter where it’s published. The same rigorous post-publication assessment should happen whether the paper is published in Science or a journal no one has ever heard of.

      3. Mike Taylor

        Yes; but in the (completely correct) recognition of ways in which raw citation count is imperfect as a measure of a paper’s worth, let’s not forget that it’s a huge, huge step forward from counting the IF of the journal it appears in.

        1. Erin McKiernan

          Absolutely! And there should be ways to further improve on counting citations. For example, it shouldn’t be too hard to write tools that filter out self-citations and maybe even negative citations (via e.g. keyword searches?).
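
          A rough sketch of what such a filter might look like, in Python. The record format, field names, and keyword list here are purely hypothetical; real citation data would have to come from a bibliographic database or API:

          ```python
          # Hypothetical citation records: each citing paper carries its author
          # list and the sentence(s) surrounding the citation.
          def filter_self_citations(paper_authors, citing_papers):
              """Keep only citations whose authors don't overlap the cited paper's."""
              cited = set(paper_authors)
              return [p for p in citing_papers if not cited & set(p["authors"])]

          # Crude keyword screen for 'negative' citations, as suggested above.
          NEGATIVE_CUES = ("fail to replicate", "falsif", "contradict", "refute")

          def looks_negative(citation_context):
              text = citation_context.lower()
              return any(cue in text for cue in NEGATIVE_CUES)

          # Example with invented records:
          records = [
              {"authors": ["Smith", "Jones"], "context": "We extend our earlier model."},
              {"authors": ["Lee"], "context": "Our data falsify Smith et al. (2012)."},
          ]
          independent = filter_self_citations(["Smith"], records)  # drops the first record
          flagged = [r for r in independent if looks_negative(r["context"])]  # flags the second
          ```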

      4. Björn Brembs

        Of course there are good papers in GlamMagz, too – or the correlations would be much stronger. Here’s a good example of how to reduce the pressure on hi-IF (and, coincidentally, almost identical to the way I screen postdoctoral candidates):
        http://t.co/MSinp7B15q

    3. Ethan O. Perlstein (@eperlste)

      Who pays for all your low and high IF publications? I suspect the taxpayer foots most of the bill, though I’d be happy to be proved wrong in your case.

      OA has nothing to do with how hard any of us work. Our research programs wouldn’t exist without public support. That’s the point. Any excuse you make is an apology or acceptance or rationalization of an immoral system that’s collapsing under its own weight anyway.

      eLIFE, PLOS Bio, PNAS can get your IF rocks off, so there really are zero excuses for not publishing “high quality” work in a way that’s accessible to anyone on Earth with an Internet connection.

      Academics have made great careers being OA, e.g., Mike Eisen and his descendants. It is possible, though not yet mainstream. Part of why I left academia is the culture of excuse-making made possible on the public dime.

    4. Orange

      There was no “outright dismissal” of anything. Did you make your way through academia by demonizing others?

    5. Mike Taylor

      namnezia says: “It takes a lot of hard work, patience, resilience and perseverance to have a paper published in a high impact journal and to dismiss this as someone just “playing the game” is not accurate.”

      It is accurate. Yes, publishing in CSN takes a lot of hard work — so does publishing science anywhere, because research is hard work. Meanwhile, “patience”, “resilience” and “perseverance” are all synonyms for “playing the game”.

      I don’t want my tax money going on grants to researchers who are good at patience, resilience and perseverance; I want it going to researchers who are good at research.

  3. Ethan O. Perlstein (@eperlste)

    Great, bold post Erin! I wish you had more professional security as an academic scientist. If I had broken into academia, I don’t know if I would have avoided buckling under the pressure to go along and get along. Keep up the good work!

    Also, as you gradually expand your readership/following/fan base, don’t forget about crowdfunding. Yes, the sums raised right now are generally less than $25k and don’t yet reproducibly support FTE, but we’re getting there. Slow and steady, like the musicians and artists who pioneered the form on Kickstarter in 2008-2010. (And I’m more than happy to provide consulting advice!)

    1. Erin McKiernan

      Thanks, Ethan! I wondered at first whether I should post this at all, and hovered over the publishing button a little longer than I usually do. I realize ‘bold’ could translate into “I’m hoping you didn’t just shoot yourself in the foot!” :). But I think it’s important I do this on my own terms. And let me just say, that’s one of the things I admire about you. You are certainly doing things on your own terms, in a way that many are not brave enough to do. I wish you success. And I promise not to forget about crowdfunding! I think it may become a vital resource for scientists in a time when we have to get creative to keep going.

  4. Aug 11: Leaving Academe Edition | New Religion and Culture Daily

    […] + Confessions of an early career scientist […]

  5. Jonas Kubilius

    Thank you for sharing this. I think many people have concerns just like the ones you’re raising — are the sacrifices we make in order to stay productive worth it? And is our productivity measured properly? Now, I’m against IF as much as anyone (honestly, I couldn’t put a number on any journal’s IF), but then again: how do you think you will be able to stay active if you dismiss the present system completely? For example, you already mention not having stable funding or a fixed position. Such insecurity is probably not helping you with your research, but the system is not going to change soon enough.

    I don’t have an answer to it myself. Perhaps part time science, part time something else is a solution for the new age indie scientist. Or maybe playing the game to a certain extent will do (So you want me to publish? Alright, I will do smaller papers in between the “real” ones. Etc.).

    But sometimes I also wonder why we researchers are in this rather privileged position where we somehow expect that somebody should give us money to do what we love doing. After all, everyone else who demands such independence pays the price: artists, journalists, start-ups (to a certain extent).

  6. Beth Hellen (@PhdGeek)

    Although some of the details are different, I could have written each of those headings. There seem to be quite a lot of us in the same boat!

  7. mqtran

    thanks for writing this… helps me deal with my anxiety

  8. Anand

    Hear hear! I struggle to get through each and every day (bordering on depression), constantly worried about how I’m going to support my wife who’s studying, pay the bills and lead a decent life. I want to buy a house some day and start a family… but when? I’m coming up to the end of a two-year contract and will most likely end up with two publications at the most. I am struggling to find another post-doc position, even though I’ve got great ideas which I want to develop. I’m constantly getting shafted by grant bodies because I don’t come from a big lab with twenty-five postdocs all on a Nature or Science paper, and my research methodology is slow, which means output is slow… What really gets me is how gov’t funding bodies are encouraging transition to the industry… I make this one very obvious point – there is no neuroscience R&D based industry in Australia. None.

    Is this the price we pay for suffering from an addiction to science?

  9. Orange

    A brilliant piece, Erin. I heartily agree.

  10. Klara

    I wish I dared to be as bold as you are. I think it is important indeed that our research is open access to everyone. But when I got the chance last year to publish my work in an approx. IF 12 journal, I couldn’t refuse. Not while thinking about my future, where there are bound to be people judging me at least partially on research output measured in IFs… I put it up on ResearchGate so that people can download it, but every now and then I still wonder whether I should’ve tried harder to convince the boss to publish open access.

    And I couldn’t agree more when you say that we should also consider the circumstances under which the research was done instead of merely looking at output and that’s it… But again, I’m not sure how many hiring committees will do that. Hope you’ll manage to get where you want to be, without playing the academic games…

  11. lifescientist

    Dear Erin,
    I enjoyed your article, as it is a personal and heartfelt piece said with great conviction. The part that particularly resonated with me is the mantra of more is better, which is very field-specific and not necessarily true. It most likely stems from our education: higher grades mean better, right?

    I concur with you and the other blog respondents’ criticism of the impact factor as a poor metric for defining future potential. However, as Jonas points out, this is the system as it stands, and flawed as it may be, unfortunately there has to be some metric for measuring potential. This is not condoning the system, and it is coming from someone whose recent fellowship rejection comments consisted of ‘interesting idea’ and ‘needs more publications’ (http://diylifesci.wordpress.com/2013/07/14/quick-wins-rather-than-asking-truely-pertinant-questions/).

    Although you encapsulate the feelings and highlight issues faced by many scientists worldwide, unfortunately your post does not suggest how to improve the current flawed system.

    What do you or any readers of your blog suggest as an alternative system for measuring scientific potential?

    I have often thought it would be interesting to do some sort of psychometric test (which would most likely come with many of its own caveats) to try to measure creativity, either as the metric itself or as some sort of adjunct to publications as a proxy for future potential.

    It would be interesting to hear your thoughts and thanks for the thought provoking blog post.

  12. Tom

    I sympathize with your situation; I looked for a job for a long time myself, and indeed had to face the fact that I might never get one. But now, for quite a while, I’ve been on the other side, as a member of hiring committees, and here’s the problem from there (at least to the degree that I can assume commonalities with my field of literary study). When we review job applications (usually in excess of 100 for one tenure-track position) we are looking for ways to winnow – we have to, because there are so many applicants. We do not have 100 positions, so we have a responsibility both to our institution and to the candidate pool to do our best to try to hire the very best person we can get. If, by professional measures, one person has published significantly better than another, we would be irresponsible, in the absence of other compelling measures, to choose the one who has not published as well. We can’t hire on supposition; we can’t choose someone on the basis that we think he or she would have published more with more support, if there is someone else in the pile who has ALREADY published more. We try to use all the available measures to determine who will be the best candidate – but surely quantity of publications and quality of placements have to be among those measures as predictors of future success. We can’t just ignore the top profiles in the pile. The whole system no doubt needs reform, and has for decades. But within it, please understand that search committees too are trying to do their best.

    I wish you all success. As I prepare to retire and make way for a junior scholar, I wish for a world that is both fairer and imbued with better values.

  13. bashir

    What about the PLoS and Frontiers journals? Those are well-known open access journals.

  14. katejeffery

    OK, so, you’re speaking to the chair of a hiring panel, who is highly sympathetic to your views and indeed shares them, and you’re trying to make the case for why she should hire you despite the lack of classical credentials you outlined. What do you say? What is it that makes you a good scientist worthy of a permanent job? And how can you prove it? (After all, anyone can claim to be creative, or curious, or innovative… when committing millions of pounds of taxpayer funds to someone’s future employment, merely claiming is not enough.)

    Like Tom, I have been on many hiring panels, and I agree the system needs reforming. I’m happy to help reform it. Just tell me how 🙂

    1. Mike Taylor

      Surely by the time you’re actually talking to the panel, it’s not unreasonable that they should READ your papers?

      Getting through to that stage is (I imagine) the hard part — that the initial sifting from the 200 candidates down to the ten you interview is done by filtering on who has the glamour publications.

      There is a very simple way to improve on this practice: just count citations. Yes, this is VERY far from being perfect, but it at least means you’re imperfectly measuring the right thing — the quality of this author’s articles — rather than the wrong thing — the quality of other articles published in the same journal the author was in.

      I hope we can find something much, much better than counting citations. But until we do, that would be a huge step forward over using impact factors or other measurements of journal “quality”.

      1. bashir

        I don’t think citation counts are much better than JIF. Or at least both are much worse than actually reading the papers rather than relying on summary statistics.

      2. Mike Taylor

        I thought I’d been pretty clear that I agree counting citations is much worse than reading the papers. I’d love to fix that. But right now the problem is improving the current practice of looking at JIFs. And counting citations is way better than that.

        Anecdotal evidence from my own publications: my four most cited papers are from three journals that had JIFs around 1.0 at the time of publication and from one that has no JIF at all. Those four average 30+ citations each, i.e. about 15 times what the JIF would predict. My letter in Nature (JIF = 38) has been cited one time, according to Google Scholar, but actually zero times since the one is in an in-prep manuscript that Google has picked up prematurely.

        This pattern is not unusual.

        1. Erin McKiernan

          Thanks for this example, Mike. I agree with you that while counting citations is not perfect, it would be an improvement over considering journal IFs. Since the IF is an average, it can under- or over-represent the influence of a particular article in the field. Citation count gives a more accurate picture of real influence, as long as we are careful to discount self-citations. Even better might be to consider reports from something like Impact Story, which gives additional metrics on how often a given article is being downloaded or discussed by people online, i.e. a measure of broader impact beyond official citations.

      3. katejeffery

        Citations are indeed another useful metric, and they are used quite widely, in the form of the h-index. However, that has its own problems (as I’ve recently argued here: http://t.co/58cHx2remZ; see also http://academia-a-to-a.blogspot.de/).
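
        For readers unfamiliar with these metrics, here is a minimal sketch of how the h-index and Google Scholar’s i10-index are computed from per-paper citation counts; the counts in the example are made up:

        ```python
        def h_index(per_paper_citations):
            """Largest h such that h papers each have at least h citations."""
            ranked = sorted(per_paper_citations, reverse=True)
            return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

        def i10_index(per_paper_citations):
            """Number of papers with at least 10 citations."""
            return sum(1 for c in per_paper_citations if c >= 10)

        papers = [45, 32, 30, 15, 1]  # invented citation counts for five papers
        print(h_index(papers))    # 4
        print(i10_index(papers))  # 4
        ```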

        A further possibility is to use an aggregate score that combines several of these imperfect measures into one that might be a little better – the so-called snowball metric (http://www.snowballmetrics.com/). But, as today’s Nature argues (http://bit.ly/19uxbDq), this just compounds the metrics problem eloquently outlined by Erin.

        Reading actual papers is no longer enough, either, in today’s highly specialised world, though it’s certainly ONE of the necessities in making a hiring decision. The department I oversee includes social psychology, psycholinguistics, neuroimaging, single neuron electrophysiology and many other things. None of us are qualified to judge papers outside our own fields. Even within a field, subject areas can be quite diverse, and so comparing applicants is like comparing apples and oranges: there is very unlikely to be anyone on the panel with enough specialist knowledge to compare two applicants unless they happen to be in the exact same area.

        Panels know all this and are quite capable of taking the limitations of the metrics into account: e.g., having conversations about whether someone’s Nature paper reflects their coming from a big, well-resourced lab rather than the applicant’s own talent, or whether their high citations simply result from working in a popular area. In the end, panels use as much information as they can, and they actually do it pretty thoughtfully and intelligently in my experience.

        The truth is, though, we don’t really know what we are trying to measure. What makes a good scientist? We don’t really know for sure. For truly outstanding scientists it is obvious, on all the metrics, but for the vast majority we simply don’t know what we want, other than for them to discover something amazing, preferably shortly after we’ve hired them…

        The problem is so ill-defined that I’ve actually argued that maybe we should think of using multiple subjective measures instead: http://bit.ly/15HfgRA.

        So, all the above is just to say, it isn’t that administrators are lazy and unimaginative and it’s not that we ourselves haven’t recognised the imperfections of the measures we use. It’s just that so far, nobody has been able to think of anything better.

      4. lifescientist

        I have to agree with Mike about citations. They don’t substitute for actually reading the paper; however, hiring committees have a tough job as well, and it’s getting through the first stage that matters. I got some funding for a summer student (I am a postdoc) and had to narrow down candidate numbers (I was surprised by how many applications I received) and set up interviews with those candidates myself; it was an eye-opening experience. I use my Google Scholar metrics as evidence of future potential on applications, and hiring committees (at least the more progressive ones) have taken note of this. I would consider myself to have healthy metrics for my career stage (although that can be somewhat field-dependent, so it’s not an absolute metric in its own right). Coming back to my point, it would be useful to have another unbiased (or as unbiased as can be) or adjunct measure of potential other than one’s ability to ‘glam’ up a CV (which I could be perceived as doing). However, science is not just about the science but about how you communicate it (which could otherwise be known as being a salesman, though at least in this case there is often considered evidence behind the claims).

    2. bashir

      Or more specifically a better hire than the other 100+ applicants. Most places can’t hire everyone who’s good. Just one that is “best”, in terms of productivity, fit or whatever.

      1. Dennis Eckmeier

        I wrote a quick post on my blog about why I think people want the glam pubs in their CVs (see my website).

        In a nutshell: I think, early career researchers ‘need’ glam to get through the first round of job application evaluations, when CVs are quickly scanned to find glam indicators. I very much believe that as soon as people actually speak to applicants, glam is not an issue anymore.

        1. Erin McKiernan

          For others interested in reading Dennis’ post, you can find it here. I think you make several valid points. Search committees, as others commented here, have a lot of applicants to screen and they need something to help make the initial round of cuts. They are looking for gems that stand out. That, in principle, I have no problem with. What I object to is what many hiring committees consider to be gems. You listed 3 things a committee is looking for: (1) high IF publications, (2) heritage (worked with renowned people), and (3) location (worked in renowned place). That sums up well the current system, but I think those are all pretty horrible measures of whether you are a good scientist. As you said, getting high IF papers often has a lot to do with how you sell the science, rather than the science itself. You also mentioned that “working with a genius or next door to a genius doesn’t make you one”. I think we can all agree on this, and yet the current system ends up selecting for people who are good salesmen and have good neighbors. Surely, as scientists, we can come up with something better than this? Wouldn’t it be great if the gems committees were looking for had more to do with questioning established dogmas, coming up with novel methods or new questions, or engaging in scientific outreach and communication? I know those things aren’t as easy to measure as how many IF publications one has or whose lab they came from, but they would certainly speak more to what kind of scientist a person is.

  15. PubPeer

    One of our motivations for creating PubPeer was to address this exact issue of hiring/search committees not having enough time to consistently and thoroughly look at the CVs of all candidates in a given job call (see http://blog.pubpeer.com/?p=6).

    We hope that PubPeer or a site like it will eventually provide more information to search committees than just the impact factors of journals on a CV. If we can all find the time to leave comments on the papers that we have carefully read (those five or six that come out each year, which are close to our interests and enjoyable to read/review), that information could easily (and quickly) replace the glam paper counting of selection committees. This would ultimately be very advantageous to candidates and the departments they join, as well as to those of us reading the papers to plan our own research.

    1. Erin McKiernan

      I think open post-publication review via forums like PubPeer could be transformative. This article published in 2010 (hat tip to Björn Brembs @brembs) proposes a similar idea: a “natural selection of academic papers” through a “dynamical and transparent peer review process”. Recognition of the contributions made by a particular piece of work would then be generated not by where it is published but by the reviews and comments it generates from other scientists – as it should be.

      1. Pandelis Perakakis

        Thank you, Erin, for mentioning our 2010 scientometrics paper. Since then we have taken a step forward and actually built a free, open peer review platform we call LIBRE (Liberating Research: http://www.libreapp.org, on Twitter: @libreapp) that will launch this October.

        We have a clear proposal for gradually changing research evaluation by universities and funding agencies.

        First we create an alternative free, open, dynamic, transparent, community-based, journal-independent, formal peer review system like the one we propose in our paper. Journal-independent review is gaining momentum right now, and LIBRE will not even be the first platform to support such an independent model (see: peerevaluation, publons and more…). In this model, authors can use the independent review process to invite any expert in the field to review their paper. There are many advantages to this, but let me mention one I consider very important. Right now a reviewer is called to evaluate the entire manuscript, and usually there are no more than 2-3 reviewers deciding on the fate of the paper. It is unreasonable to assume that — especially with the continuous growth of multidisciplinary research — a single reviewer can have an expert opinion on, for example, all the methodological tools used in a research work. Actually, researchers seek collaborations because they recognise that no single person is expert enough to do all the work alone. However, we expect reviewers to report an expert opinion on every single method used in a paper and, even worse, to do that for free and without any academic recognition! How impossible, not to say stupid, is that??

        Imagine the following situation. I, as an author, invite specific experts to openly report their opinions and/or suggestions for improvements on concrete parts of my manuscript. And I can do that with as many reviewers as I like, until I make sure that all aspects of my research have been carefully assessed and approved by the people best qualified to do so. With this system we — scientists, the public, evaluation committees, etc. — can end up comparing a paper for which the only information we have is that it was published in X journal, versus a paper whose methods, hypotheses, conclusions, etc. have been rigorously examined by certified experts. I will not expand further here on the benefits that this alternative system has for science, because I think they are obvious. What I do want to point out is that developing this alternative system can happen in parallel with traditional academic publishing, with no risk for authors, as they can accumulate open reviews and at the same time pursue publication in prestigious journals to play the “game” according to current standards. Things will really change when evaluation committees themselves realise that to fund the best team or to hire the best researcher what they need to look at is the work’s review track record, instead of its publication record. Until then, what we researchers need to do is work together to promote a new culture of open, ethical collaboration in order for the new model to reach a critical mass that will be inevitable to ignore.

      2. Pandelis Perakakis

        I meant “impossible” to ignore 🙂

  16. Stuart Watt (@morungos)

    In a past life, I was involved in quite a few academic recruitments, including those at a senior level (I was director of research for my department at the time). Personally, I found a list of publications was useful, but in interesting ways. The actual count was almost useless, and so were most of the other metrics.

    The idea that metrics work is one based in an economic rational model of academic work. In this model, publications become a currency of recruitment. I’m using this as a metaphor here, but metaphors frame the way we make what seem to be rational decisions [1]. Given a choice between two candidates, the one who is ‘wealthier’ in terms of publications is the ‘rational choice’. The problems with this model are significant: economic rational models do work under certain circumstances [1], but these circumstances require, for example, that a single measure is sufficient. That patently isn’t true for academic recruitment. No metric, no matter how good, will ever be good enough to determine a hiring decision. Also, economic rational models reduce the environment to a non-actor role, i.e., the academic literature becomes a kind of environmental resource to be mined, not a garden to be cultivated. This encourages those who are successful at mining the literature to do so, and almost replaces the currency value of publications with people’s ability to create them. In other words, it is very easy to devalue the literature by cheating the metrics. See [2] for details.

    There is other evidence in someone’s resume, of course. I’d look at: the order of authorships, looking for a mix of roles, some leading, some supporting; the overlaps between authorships, looking for people who could collaborate in multiple teams, connecting people; the places of publication, again looking for diversity, making sure people weren’t a ‘one-trick pony’; and the types of publication, looking for the process people used to develop work from early ideas through to significant projects. And so on. It’s all evidence about the person, and there are many factors — nothing that can be reduced to a single metric.

    And there’s another reason why metrics are bad: Campbell’s Law. Simply: any measure in a high-stakes assessment is subject to distortion. This is precisely what is happening in [2]. The h- and i10-indexes, for example, are easy to distort, so people do, because it gives them an advantage in recruitment. This is, again, the rational model at work. But wait, it gets worse. If you look at [2], it promotes intensity in a narrow field. It’s a different kind of research that integrates people from different fields; I’d say that kind is actually more important, as it’s this that energizes a field from time to time by providing new ideas, theories, and methods.

    So my advice to researchers — all researchers, at any stage — (for what it’s worth): build a resume that tells a good story about what your research is and, even more importantly, how you do it and how you collaborate. It looks like you’re doing it just fine, so good luck, keep up the good work, and remember that many of us believe in people with your approach to research.

    [1] Lakoff & Johnson (1999) Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought. Basic Books

    [2] http://www.academia.edu/934257/How_to_increase_your_papers_citations_and_h_index_in_5_simple_steps

  17. Raphaël Lévy

    Reblogged this on Rapha-z-lab and commented:
    kudos to Erin for this bold and courageous post

  18. ferniglab

    Excellent post. As someone who sits on hiring committees, I would agree that some members may be superficial and just look at journal names. However, those who have actually read the CV usually carry the argument on shortlisting. This means evaluating a CV in a way similar to that described by Stuart Watt, above. The personal statement also carries a lot of weight. At interview, a fair number of people have read the papers. There is nothing like challenging an applicant on science to figure out if they should be hired! As for productivity, a tenured position means a salary for 30+ years, which is a lot of money, so it is more a question of getting a feel for an individual’s adaptability and their capacity for lateral thinking than judging their past, which may only be 8-12 years (PhD and postdoc) and is in essence a training period.
    A paper a year is very good going and something to be proud of.
    I would also suggest that hiring committees subtract a career year for each child. Rather mean, as anyone with children will know that it takes a lot more out of you than that, but it is a move in the right direction. This can yield some interesting results when you consider productivity and a 30+ year future career perspective.

  19. anonymous

    Great post. The only way we, the scientists, can fight against the use of bibliometrics in evaluation and the horrible “publish or perish” culture is to break the correlation between having a lot of publications (overall and in high IF journals) and being considered a good scientist. It disturbs me greatly that the culture in academia nowadays is such that historical figures like Richard Feynman or Linus Pauling would have trouble getting even a postdoc position in a modern university because of publishing too infrequently.

  20. Links 9/9/13 | Mike the Mad Biologist

    […] in Schools Cutting off our nose to spite our face: Scientific funding in the age of sequestration Confessions of an early career scientist (interesting and gutsy; hope it works) Cancer research in crisis: Are the drugs we count on based […]

  21. mbgene

    This is a great post!!!!! I look forward to reading and seeing your other opinions! I feel like we will agree on a lot 🙂

  22. Eduardo G P Fox

    This is a great post, and the accompanying discussion is very inspiring. I am at this very moment suffering from these logico-bureaucratic distortions of today’s hiring system. My contract as a postdoc in Switzerland ends within 30 days. The reality in my field around Europe is that you need a first-authorship in a top journal to get a job. Yet I come from Brazil, where the reality of getting a job revolves essentially around having many dozens of papers and knowing by heart the main scholarly topics in your field, to be demonstrated in a written exam and over a one-hour lecture. Clearly these two realities are incompatible. The result is that it is hard for me to get a position in either reality, having not tuned my CV investments to one of these models. I have devoted my time to doing what I like in science, in fact producing quite a few papers in decent journals that are fairly well cited, but not enough to fit the expected profile in either reality.

    Science has become a blood-sucking system for career-builders, and people are losing the original love with which they came to the field… I shall see how I deal with this.
