A few weeks ago, I tried a crowdsourcing experiment and asked people to vote on which unfinished project I should tackle next. I was thrilled by the results. To date, the post has had 448 views (the second-highest view count for posts on this infant blog!), the poll registered 48 votes, and I have received many comments from people – in the thread, on Twitter, and by email – expressing how much they like the idea of crowdsourcing the direction of one’s science. I’d like to thank everyone who made this experiment a success by tweeting, retweeting, linking, and commenting. I especially want to thank the 48 people who voted and, in particular, the 5 who took the time to leave comments explaining the reasons behind their votes. It is this kind of interaction with people that I find to be the most rewarding part of being a scientist, and this kind of feedback that makes my science better.

Without further ado, here are the results of the poll (captured from Poll Daddy):

[Figure: poll results, captured from Poll Daddy]

There are a few things about these results that I think are worth noting. First, because of the intense pressure to publish, many professors have advised me to go for the “lowest-hanging fruit” first. In other words, regardless of interest, tackle the project that is most likely to get you the fastest publication. I hate this perspective for many reasons, not least because it makes you sound more like a factory than a scientist. Interestingly, even though I mentioned in my post that Project 3 was closest to a releasable research output, this project received the fewest votes. I have no way of knowing why each person voted the way they did (except for those who left specific comments), but it seems to me that people voted based more on how interesting or important a project sounded than on how close it might be to publication. In my opinion, this is the way science should work.

Second, I was pleasantly surprised to see the interest in Project 2, which involves developing methods for the automated detection of bursts from electrophysiological recordings. In academia, and I think particularly in neuroscience, a lot of emphasis is placed on publishing flashy, “high-impact” (however you measure that) papers in big journals. Except in rare cases – for example, some of the papers first describing the use of light to excite or inhibit neurons – methods papers are often not flashy and tend to go to less well-known journals. But the fact that Project 2 took second place, coupled with the comments I received, suggests that many of the people who voted recognize the importance of having good analysis tools. I believe that neuroscience as a field suffers from a lack of emphasis on quantitative skills, so I was very encouraged by this result.
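I haven’t spelled out Project 2’s methods here, but to give a concrete flavor of the kind of analysis tool in question, below is a minimal sketch of one common burst-detection approach: grouping spikes whose interspike intervals (ISIs) fall below a threshold. To be clear, this is an illustrative example, not the project’s actual method – the function detect_bursts, its parameters (max_isi, min_spikes), and the example spike times are all hypothetical, and real recordings would first need spike extraction and threshold tuning.

```python
import numpy as np

def detect_bursts(spike_times, max_isi=0.01, min_spikes=3):
    """Detect bursts as runs of spikes with short interspike intervals.

    An illustrative ISI-threshold method (not Project 2's actual method):
    consecutive spikes closer together than `max_isi` seconds are grouped,
    and any group containing at least `min_spikes` spikes counts as a burst.

    Returns a list of (burst_start_time, burst_end_time) tuples.
    """
    spike_times = np.asarray(spike_times)
    if spike_times.size < min_spikes:
        return []

    isis = np.diff(spike_times)
    in_burst = isis <= max_isi  # True where spikes i and i+1 are "close"

    bursts = []
    run_start = None  # index of the first spike in the current candidate burst
    for i, close in enumerate(in_burst):
        if close and run_start is None:
            run_start = i
        elif not close and run_start is not None:
            # Spikes run_start..i form the candidate burst.
            if i - run_start + 1 >= min_spikes:
                bursts.append((spike_times[run_start], spike_times[i]))
            run_start = None
    # Handle a burst that runs to the end of the recording.
    if run_start is not None and len(spike_times) - run_start >= min_spikes:
        bursts.append((spike_times[run_start], spike_times[-1]))
    return bursts

# Hypothetical spike times (seconds): two bursts and two isolated spikes.
spikes = [0.000, 0.005, 0.009, 0.013, 0.200, 0.500, 0.504, 0.509, 1.000]
print(detect_bursts(spikes))  # [(0.0, 0.013), (0.5, 0.509)]
```

Even a simple detector like this makes the point: the choice of ISI threshold and minimum burst size materially changes what counts as a burst, which is exactly why well-validated, shared analysis methods matter.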

Finally, to the winner. The most votes went to Project 4: “Effects of altering motor neuron excitability on the motor pattern” (see here for a brief description). In the coming weeks, I plan to write a post (or, more likely, a series of posts) describing this project, its goals, and preliminary results. I’ll post figures here and also upload them to figshare. I’m very excited to share more about this project, as it’s one of my personal favorites. And don’t worry: if your favorite project didn’t win, I still plan to share more about the other three projects in time.
