eLife’s new publishing policy

October 21, 2022 at 10:25 am | | literature, science community, scientific integrity

Let me preface this post with the admission that my concerns about eLife's new model are likely wrong. I was actually opposed to preprints when I first heard about them in 2006!

The journal eLife and Mike Eisen announced its new model for publishing papers:

  • Authors post preprint.
  • Authors submit preprint to eLife.
  • eLife editorial board decides whether to reject the manuscript or send out for review.
  • After reviews, the paper will be published no matter what the reviews say. The reviews and an eLife Assessment will be published alongside the paper. At this point, the paper has a DOI and is citable.
  • Authors then have a choice of whether to revise their manuscript or just publish as-is.
  • When the authors decide the paper is finalized, that will become the “Version of Record” and the paper will be indexed on PubMed.

Very interesting and bold move. The goal is to make eLife and its peer review not a gatekeeper of truth, but instead a system of evaluating and summarizing papers. Mike Eisen hopes that readers will see “eLife” and no longer think “that’s probably good science” and instead think “oh, I should read the reviews to see if that’s a good paper.”

Potential problems

But here are my primary concerns:

  1. This puts even more power in the hands of the editors to effectively accept/reject papers. That process is currently opaque and bias-laden, and authors have no recourse when editors make bad decisions.
  2. The idea that the eLife label will no longer have prestige is naive. The journal has built a strong reputation as a great alternative to the glam journals (Science, Nature, Cell), and that’s not going away. For example, people brag when their papers are reviewed in F1000, and I think the same will apply to eLife Assessments: readers will automatically assume that a paper published in eLife is high-impact, regardless of what the Assessment says.
  3. The value that eLife is adding to the process is diminishing, and the price tag is steep ($2000).
  4. The primary problem I have with peer review is that it is simultaneously overly burdensome and not sufficiently rigorous. This model doesn’t substantially reduce the burden on authors to jump through hoops held up by the reviewers (or risk a bad eLife Assessment). And it is less rigorous, because it lowers the bar to “publication.”

Solutions

Concern #1: I think it’s a step in the wrong direction to grant editors even more power. Over the years, editors haven’t exactly proven themselves to be ideal gatekeepers. How can we ensure that the editors will act fairly and don’t get attracted by shiny objects? That said, this policy might actually put more of a spotlight on the desk-rejection step and yield change. eLife could address this concern in various ways:

  • The selection process could be a lottery (granted, this isn’t ideal because finding editors and reviewers for a crappy preprint will be hard).
  • Editors could be required to apply a checklist or algorithmic selection process.
  • The editorial process could be made transparent by publishing each desk rejection/acceptance along with the reasons.

Concern #2 might resolve itself with time. Dunno. Hard to predict how sentiment will change. But I do worry that eLife is trying to change the entire system, while failing to modify any of the perverse incentives that drive the problems in the first place. But maybe it’s better to try something than to do nothing.

Concern #3 is real, but I’m sure that Mike Eisen would love it if a bunch of other journals adopted this model as well and introduced competition. And honestly, collating and publishing all the reviews and writing a summary assessment of the paper is more than what most journals do now.

Journals should be better gatekeepers

But #4 is pretty serious. The peer review process has always had to balance being sufficiently rigorous to avoid publishing junk science with the need to disseminate new information on a reasonable timescale. Now that preprinting is widely accepted and distributing results immediately is super easy, I am less concerned with the latter. I believe that the new role of journals should be as more exacting gatekeepers. But it feels like eLife’s policy was crafted exclusively by editors and authors to give themselves more control, reduce the burden for authors, and shirk the responsibility of producing and vetting good science.

There are simply too many low-quality papers. The general public, naive to the vagaries of scientific publishing, often takes “peer-reviewed” papers as being true, which is partially why we have a booming supplement industry. Most published research findings are false. Most papers cannot be replicated. Far too many papers rely on pseudoreplication to get low p-values or fail to show multiple biological replicates. And when was the last time you read a paper where the authors blinded their data acquisition or analysis?
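To see why pseudoreplication is so corrosive, here is a toy calculation (my own illustrative numbers, not from any real paper) showing how treating technical replicates as independent samples inflates a t-statistic:

```python
from math import sqrt

def t_statistic(mean_diff, sd, n):
    """Simple two-group t statistic with per-group sample size n."""
    return mean_diff / (sd * sqrt(2.0 / n))

# Toy numbers: a modest effect measured in 4 biological replicates,
# each imaged 25 times (technical replicates).
mean_diff, sd = 0.5, 1.0
n_biological = 4        # the honest unit of replication
n_pseudo = 4 * 25       # treating every image as an independent sample

t_honest = t_statistic(mean_diff, sd, n_biological)
t_pseudo = t_statistic(mean_diff, sd, n_pseudo)

print(round(t_honest, 2))  # 0.71 -- nowhere near significant
print(round(t_pseudo, 2))  # 3.54 -- looks "significant," but only by cheating
```

The inflation factor is just the square root of the number of technical replicates per sample, which is why pseudoreplicated papers can report absurdly low p-values from tiny experiments.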

For these reasons, I think that the role of a journal in the age of preprints is to better weed out low-quality science. At minimum, editors and peer reviewers should ensure that authors followed the 3Rs (randomize, reduce bias, repeat) before publishing. And there should be a rigorous checklist to ensure that the basics of the scientific process were followed.

Personally, I think the greatest “value-add” that journals could offer would be to arrange a convincing replication of the findings before publishing (peer replication), then just do away with the annoying peer review dog-and-pony show altogether.

Conclusion

We’ll have to wait and see how this new model plays out, and how eLife corrects stumbling blocks along the way. I have hope that, with a good editorial team and good practices/rules around the selection process, eLife might be able to pull this off. Not sure if it’s a model that will scale to other, less trustworthy journals.

But just because this isn’t my personal favorite solution to the problem of scientific publishing, that doesn’t mean that eLife’s efforts won’t help make a better world. I changed my mind about the value of preprints, and I’ll be happy to change my mind about eLife’s new publishing model if it turns out to be a net good!

Replace Peer Review with “Peer Replication”

October 13, 2021 at 1:35 pm | | literature, science and the public, science community, scientific integrity

As I’ve posted before and many others have noted, there is a serious problem with lack of adequate replication in many fields of science. The current peer review process is a dreadful combination of being both very fallible and also a huge hurdle to communicating important science.

Instead of waiting for a few experts in the field to read and apply their stamp of approval to a manuscript, the real test of a paper should be the ability to reproduce its findings in the real world. (As Andy York has pointed out, the best test of a new method is not a peer reviewing your paper, but a peer actually using your technique.) But almost no published papers are subsequently replicated by independent labs, because there is very little incentive for anyone to spend time and resources testing an already published finding. That is precisely the opposite of how science should ideally operate.

Let’s Replace Traditional Peer Review with “Peer Replication”

Instead of sending out a manuscript to anonymous referees to read and review, preprints should be sent to other labs to actually replicate the findings. Once the key findings are replicated, the manuscript would be accepted and published.

(Of course, as many of us do with preprints, authors can solicit comments from colleagues and revise a manuscript based on that feedback. The difference is that editors would neither seek such feedback nor require revisions.)

Along with the original data, the results of the attempted replication would be presented, for example as a table that includes which reagents/techniques were identical. The more parameters that differ between the original experiment and the replication, the more robust the ultimate finding will be if the referees get similar results.

A purely hypothetical example of the findings after referees attempt to replicate. Of course in reality, my results would always have green checkmarks.

Incentives

What incentive would any professor have to volunteer their time (or their trainees’ time) to try to reproduce someone else’s experiment? Simple: credit. Traditional peer review requires a lot of time and effort to do well, but with zero reward except a warm fuzzy feeling (if that). For papers published after peer replication, the names of researchers who undertook the replication work will be included in the published paper (on a separate line). Unlike peer review, the referees will actually receive compensation for their work in the form of citations and another paper to include on their CV.

Why would authors be willing to have their precious findings put through the wringer of real-world replication? First and foremost, because most scientists value finding truth, and would love to show that their findings hold up even after rigorous testing. Secondly, the process should actually be more rewarding than traditional peer review, which puts a huge burden on the authors to perform additional experiments and defend their work against armchair reviewers. Peer replication turns the process on its head: the referees would do the work of defending the manuscript’s findings.

Feasible Experiments

There are serious impediments to actually reproducing a lot of findings that use seriously advanced scientific techniques or require long times or a lot of resources (e.g. mouse work). It will be the job of editors—in collaboration with the authors and referees—to determine the set of experiments that will be undertaken, balancing rigor and feasibility. Of course, this might leave some of the most complex experiments unreplicated, but then it would be up to the readers to decide for themselves how to judge the paper as a whole.

What if all the experiments in the paper are too complicated to replicate? Then you can submit to JOOT.

Ancillary Benefits

Peer replication transforms the adversarial process of peer review into a cooperation among colleagues to seek the truth. Another set of eyes and brains on an experiment could introduce additional controls or alternative experimental approaches that would bolster the original finding.

This approach also encourages sharing experimental procedures among labs in a manner that can foster future collaborations, inspire novel approaches, and train students and postdocs in a wider range of techniques. Too often, valuable hands-on knowledge is sequestered in individual labs; peer replication would offer an avenue to disseminate those skills.

Peer replication would reduce fraud. Often, the other authors on an ultimately retracted paper only later discover that their coworker fabricated data. It would be nearly impossible for a researcher to pass off fabricated data or manipulated images as real if other researchers actually attempt to reproduce the experimental results. 

Potential Problems

One serious problem with peer replication is the additional time it may take between submission and ultimate publication. On the other hand, it often takes many months to go through the traditional peer review process, and replicating experiments may not actually add any time in many cases. Still, this could be mitigated by authors submitting segments of stories as they go. Instead of waiting until the entire manuscript is polished, authors or editors could start arranging replications while the manuscript is still in preparation. Ideally, there would even be a journal-blind mechanism (like Review Commons) to arrange reproducing these piecewise findings.

Another problem is what to do when the replications fail. There would still need to be a judgement call as to whether the failed replication is essential to the manuscript and/or if the attempt at replication was adequately undertaken. Going a second round at attempting a replication may be warranted, but editors would have to be wary of just repeating until something works and then stopping. Pre-registering the replication plan could help with that. Also, including details of the failed replications in the published paper would be a must.

Finally, there would still be the problem of authors “shopping” their manuscript. If the replications fail and the manuscript is rejected, the authors could simply submit to another journal. I think the rejected papers would need to be archived in some fashion to maintain transparency and accountability. This would also allow some mechanism for the peer replicators to get credit for their efforts.

Summary of Roles:

  • Editor:
    • Screen submissions and reject manuscripts with obviously flawed science, experiments not worth replicating, essential controls missing, or seriously boring results.
    • Find appropriate referees.
    • With authors and referees, collaboratively decide which experiments the referees should attempt to replicate and how.
    • Ultimately conclude, in consultation with referees, whether the findings in the papers are sufficiently reproducible to warrant full publication.
  • Authors:
    • Write the manuscript, seek feedback (e.g. via bioRxiv), and make revisions before submitting to the journal.
    • Assist referees with experimental design, reagents, and even access to personnel or specialized equipment if necessary.
  • Referees:
    • Faithfully attempt to reproduce the experimental results core to the manuscript.
    • Optional: Perform any necessary additional experiments or controls to close any substantial flaws in the work.
    • Collate results.
  • Readers:
    • Read the published paper and decide for themselves if the evidence supports the claims, with the confidence that the key experiments have been independently replicated by another lab.
    • Cite reproducible science.
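For concreteness, the workflow above could be sketched as a tiny state machine. This is purely illustrative; every name here is my own invention, not any journal's actual system:

```python
# A toy model of the peer-replication workflow described above.
REJECTED, SCREENING, REPLICATING, PUBLISHED = (
    "rejected", "screening", "replicating", "published",
)

class Submission:
    def __init__(self, title):
        self.title = title
        self.state = SCREENING
        self.replications = []  # (referee, experiment, success)

    def desk_decision(self, sound_science):
        # Editor screens out obviously flawed or boring manuscripts.
        self.state = REPLICATING if sound_science else REJECTED

    def record_replication(self, referee, experiment, success):
        # Failed attempts are recorded too: the proposal requires
        # publishing them alongside the paper.
        self.replications.append((referee, experiment, success))

    def editorial_decision(self):
        # Crude rule: publish only if every attempted key experiment
        # replicated; otherwise archive the rejection for transparency.
        if self.state != REPLICATING:
            return
        if self.replications and all(ok for _, _, ok in self.replications):
            self.state = PUBLISHED
        else:
            self.state = REJECTED

ms = Submission("Kinesin walks backwards")
ms.desk_decision(sound_science=True)
ms.record_replication("Lab B", "Fig. 2 motility assay", success=True)
ms.editorial_decision()
print(ms.state)  # published
```

The interesting design question is the `editorial_decision` rule: in practice it would be a judgment call about which failed replications are essential, not a simple all-or-nothing check like this sketch.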

How to Get Started

While it would be great if a journal like eLife simply piloted a peer replication pathway, I don’t think we can wait for Big Publication to initiate the shift away from traditional peer review. Maybe the quickest route would be for an organization like Review Commons to organize a trial of this new approach. They could identify some good candidates from bioRxiv and, with the authors, recruit referees to undertake the replications. Then the entire package could be shopped to journals.

I suspect that once scientists see peer replication in print, it will be hard to take seriously papers vetted only by peer review. Better science will outcompete unreproduced findings.

(Thanks Arthur Charles-Orszag for the fruitful discussions!)

experimenting with preprints

May 9, 2016 at 12:15 pm | | literature, science community

We recently uploaded a preprint to bioRxiv. The goal was to hopefully get some constructive feedback to improve the manuscript. So far, it got some tweets and even an email from a journal editor, but no comments or constructive feedback.

I notice that very few preprints on bioRxiv have any comments at all. Of course, scientists may be emailing each other privately about papers on bioRxiv, and that would be great. But I think an open process would be valuable. F1000Research, for example, has a totally open review process, posting the referee reports right with the article. I might be interested in trying that journal someday.

UPDATE: In the end, we did receive a couple emails from students who had read the preprint for their journal club. They provided some nice feedback. Super nice! We also received some helpful feedback on another preprint, and we updated the manuscript before submitting to a journal. Preprints can be super useful for pre-peer review.

readcube and deepdyve update

June 6, 2013 at 7:48 am | | literature, science community, software

I just wanted to reiterate how great the ReadCube recommendations are. I imported all my PDFs and now check the recommendations every day. I often find great papers (and then later find them popping up in my RSS feeds).

Also, I wanted to let folks know that DeepDyve, the article rental site, is now allowing free 5-min rental of journal articles. Try it out!

google reader alternatives

April 3, 2013 at 8:12 am | | everyday science, literature, science community, software

Now that Google Reader is going the way of the dodo (and Google Gears), how am I going to keep up with the literature?!? I read the RSS feeds of many journals’ tables of contents, because it’s one of the best ways to keep up with all the articles out there (and see the awesome TOC art). So what am I to do?

There are many RSS readers out there (one of my favorites was Feeddler for iOS), but the real problem is syncing! Google’s servers took care of all the syncing when I read RSS feeds on my phone and then wanted to continue reading at home on my computer. The RSS readers out there are simply pretty faces on top of Google Reader’s guts.

But now those RSS programs are scrambling to build their own syncing databases. Feedly, one of the frontrunners to come out of the Google Reader retirement, claims that their project Normandy will take care of everything seamlessly. Reeder, another very popular reader, also claims that syncing will continue, probably using Feedbin. Feeddler also says they’re not going away, but with no details. After July 1, we’ll see how many of these programs actually work!

So what am I doing? I’ve tried Feedly and really like how pretty it is and how easy it is to use. The real problem with Feedly is that it’s designed for beauty, not necessarily utility. For instance, look how pretty it displays on my iPad:

feedly

But note that it’s hard to distinguish the journal from the authors and the abstract. And it doesn’t show the full TOC image. Feedly might be faster (you can swipe to move to the next article), but you may not absorb as much information and might miss articles that would actually interest you.

Here’s Reeder, which displays the title, journal, authors, and TOC art all differently, making it easy to quickly scan each article:

reeder

 

And Feeddler:

feeddler

I love that Feeddler lets me put the navigation arrow on the bottom right or left, and that it displays a lot of information in nice formatting for each entry. That way, I can quickly flip through many articles and get the full information. The major problem is that it doesn’t have a Mac or PC version, so you’ll be stuck on your phone.

I think I’ll drop Feeddler and keep demoing Reeder and Feedly until July 1 rolls around.

slate

October 3, 2012 at 2:39 pm | | news, nobel, science and the public, science community

Paul and I were interviewed for a Slate.com article about Nobel Prize predictions. More details back at my original post on the 2012 Prize.

2012 nobel prize predictions

September 10, 2012 at 1:35 pm | | nobel, science community

It’s time again for my annual Nobel Prize predictions blog post. This year I’m limiting my predictions to the chemistry prize. Of course, there are many more individuals and discoveries that could be listed below, and even more who deserve a Nobel Prize!

 

Single-Molecule Spectroscopy

Moerner [awarded in 2014], Orrit

Single-molecule imaging has matured to an important technique in biophysics. Just go to a Biophysical Society meeting and see all the talks and posters with “single molecule” in the title! Single-molecule techniques have begun to answer biological questions that would be obscured in traditional imaging. Moreover, super-resolution techniques such as PALM and STORM rely directly on detecting single molecules and the spectroscopic techniques developed in the late 80s and 90s. W.E. Moerner won the 2008 Wolf Prize in Chemistry.

 

Electrochemistry/Bioinorganic Electron Transfer

Bard, Gray

Al Bard won the 2008 Wolf Prize in Chemistry; Harry Gray won it in 2004.

 

Polymer Synthesis

Matyjaszewski, Frechet

Jean Frechet invented chemically-amplified photoresists and developed dendrimer synthesis. Kris Matyjaszewski won the 2011 Wolf Prize in Chemistry for ATRP polymerization. Of course, others were involved in both discoveries.

 

GPCR Structure

Kobilka, Stevens, and Palczewski

Biomolecule structures have won chemistry Nobels in the past, so I’m including G-protein coupled receptors here. A lot of buzz in the last couple years about GPCRs and Nobel. Good article here.

Update 10/10/12: Kobilka wins.

 

Chaperonins

Horwich, Hartl

Although these are biological molecules, they are still molecules. And many Chemistry Nobels have gone to bio-related discoveries in the last couple decades. Both won the Lasker Award in 2011.

 

Biomolecular Motors

Vale, Spudich, Sheetz

Another bio subject, but you really never know with the Chemistry prize. All three just won the Lasker Award this year.

 

(BTW, check out other predictions at ChemBark and The Curious Wavefunction and Thomson Reuters. And my prior predictions.)

(P.S. W.E. Moerner was my PhD advisor. Also, I worked in a collaboration with Kris Matyjaszewski when I was an undergrad.)

Update 9/11/12: I added chaperonins and biomolecular motors because I figure this year’s Chemistry Nobel might be more biological.

Update 10/3/12: Paul and I were interviewed for a Slate.com piece on Nobel Prize predictions. I like Paul’s section, especially about Djerassi. Anyway, here is what I said:

The line between chemistry and other fields (especially biology) is often blurred, and that’s a wonderful thing; but this fact sometimes results in a chemistry Nobel Prize being awarded for a decidedly biological discovery (like the 2009 prize for the structure of the ribosome). This may be exacerbated by the fact that the physiology or medicine prize tends to go to things directly related to health, and the chemistry prize often is used to cover the more basic biological science feats. Personally, I think it is a testament to the central position the field of chemistry holds in the Venn diagram of science.

My top prediction is for single-molecule spectroscopy. In 1989, W.E. Moerner at IBM (now at Stanford) was the first to use light (lasers) to perform measurements on single molecules. Before this, millions or trillions of molecules or more were measured together to detect an average signal. His amazingly difficult feat required ultrasensitive detection techniques, perfect samples, and temperatures just above absolute zero! A year later, Michel Orrit in France observed the fluorescent photons from a single molecule. With those early experiments, Moerner and others laid the experimental groundwork for imaging single molecules.

Single-molecule spectroscopy and imaging has become a subfield unto itself. I performed my Ph.D. research in the Moerner lab, and I know firsthand that the technique reveals events that would otherwise be hidden in averages of “bulk” measurements. Biophysics, the field of understanding how cells and biomolecules operate on a physical level, is particularly aided because rare events can have major effects in biology. (Think of a single cell mutating and then dividing into a tumor.) For example, Sunney Xie at the Pacific Northwest National Laboratory (now at Harvard) performed the early work on how individual enzymes experience multiple states, which otherwise would be averaged away in a bulk experiment. More recently, imaging single molecules has been instrumental in novel “super-resolution” techniques that reveal structures in cells at tenfold higher resolution than ever available before. Several companies (Pacific Biosciences, Helicos, Illumina, Life Technologies) have either released or are developing products that use single-molecule imaging to sequence individual strands of DNA. My prediction is bolstered by others along the same vein. In 2008, Moerner won the Wolf Prize in Chemistry, which is often considered a harbinger for the Nobel. More importantly, The Simpsons were betting on Moerner in 2010. Of course, that was Milhouse’s prediction, and maybe it’s more reasonable to go with Lisa.

My other prediction is for biomolecular motors (aka molecular motors). These are proteins in cells that move important cargo around, and on a more practical level, make muscles contract. Ron Vale (now at University of California, San Francisco) and Michael Sheetz (now at Columbia) discovered kinesin, a protein that walks along tiny tubes and pulls cargo to different parts of the cell. This is supremely important because it would take far too long (months in some cases) for diffusion alone to bring nutrients and signaling molecules to all parts of the cell. (Interestingly, kinesin was discovered from the neurons of squids because they are extraordinarily long cells!) Jim Spudich (at Stanford), Sheetz, Vale, and others have developed many important techniques for studying the actions of these tiny machines. Spudich shared this year’s Lasker Award, which many see portending a Nobel, with Vale and Sheetz.

It’s hard not to allow hope to creep into almost anything we humans do, and I have clearly failed to prevent my own desires from influencing my predictions: I would be thrilled to see either of the above discoveries—or any that I list on my blog—win a prize. But there are many, many deserving scientists who have discovered amazing things and helped millions of people. Unfortunately, only a handful of these amazing individuals will be awarded the ultimate recognition in science. So it goes.

PeerJ

June 8, 2012 at 9:39 am | | literature, science and the public, science community

This is an interesting idea. PeerJ sounds like it’s going to be an open access journal, with a cheap publication fee ($99 for a lifetime membership). I wonder if it will be selective?

I’m more excited about HHMI’s new journal eLife.

self-plagiarism and JACS

April 25, 2012 at 7:52 am | | literature, science community, scientific integrity

Hi all! I’m back! Well, not exactly: I won’t be posting nearly as much as I did a few years ago, but I do hope to start posting more than once a year. Sorry for my absence. There’s no real excuse except my laziness, a new postdoc position, commuting, and a new baby. I suppose those are good excuses, really. Also, I’m sorry to say that I’ve been cheating on you, posting on another blog. We love each other, and I won’t stop, but I want to keep you Everyday Scientist readers in my life, too. I’m just not going to pay as much attention to you as I used to. You’re cool with that, right?


Anyway, I thought I’d comment on the recent blogstorm regarding Ronald Breslow’s apparently self-plagiarized JACS paper. Read the full stories here (1, 2, 3, etc.).

I feel bad for Breslow, because I like him and I respect his work and I think his paper in JACS is valuable. However, I think he should retract his paper. Sorry, but if some no-name had been caught completely copying and pasting his or her previously published paper(s) and submitting that to JACS as an ostensibly novel manuscript, that paper would be retracted when found out. If he had just copied the intro paragraph, I’d be more forgiving, but the entire document is copied (except, that is, the name of the journal)!

That said, it might be possible to save the JACS paper, but the editors would have to label the article as an Editorial or Perspective or something, and explicitly state that the article is reprinted from previous sources. I know that might not be fair, to give Breslow special treatment, but life isn’t fair. Famous scientists might get away with more than peons. And, honestly, Breslow’s paper remaining in JACS might be good for future humanity, because JACS archive will probably be more accessible than other sources. That way, we’ll be able to look up what to do when space dinosaurs visit us!

wtf?! acs fall meeting deadline is already passed?

March 25, 2011 at 3:41 pm | | conferences, news, science community

Before the Spring meeting has even started? This is not cool.

It’s almost impossible to actually find out, but the deadline for submitting an abstract to the ACS Fall meeting in Denver has already passed. This is how I tried to find out:

First, I went to the ACS website, and clicked on the “Meetings” tab. The Fall 2011 meeting isn’t even listed there (see screenshot on the left). OK, that’s silly.

Next, I searched “deadline” from the ACS homepage and clicked on the top link, “Events & Deadlines.” That brings me to the Events & Deadlines page, where the Denver meeting doesn’t even have a link. The Anaheim meeting’s link is live, but you can’t click on the Denver meeting. OK, maybe that means the deadline is so far away that you don’t need to worry about it. Wrong. Apparently, the Events & Deadlines page is only for past deadlines. Why have a deadlines page only for past deadlines?!? Wouldn’t future deadlines be a bit more helpful? I guess the “Events & Deadlines” page is more a shrine to the deadlines you’ve already missed, not intended to help you meet future deadlines.

OK, let’s try going directly to the Denver meeting homepage. Not a lot of info there. But it turns out that, if you click on the symposia link, you’ll find that many of the deadlines have already passed!!! And the Spring meeting hasn’t even started yet! (There’s also this strange PDF I found somewhere on the ACS website; it lists different deadlines.)

That really, really sucks. I feel like, with all the stupid emails I get from ACS every day, I’d have seen this deadline coming. I suppose it’s all my own fault: I should have been paying attention. But I figured that the deadline for the next meeting wouldn’t be before the current meeting starts. And I do blame the ACS website: I’ve been looking at the “meetings” tab for info on Denver, but it isn’t even there yet.

My suggestion: Why doesn’t ACS have one deadline for all the divisions, have it after the current meeting is finished, and actually announce that deadline on their webpage?

I am annoyed.

why is author ID taking so long?

March 22, 2011 at 11:13 am | | literature, science community

DOI is magical. Why is it taking so long for the same thing to happen with authors? Arguably, having unique author IDs is more important and helpful than document identifiers. Yet it’s 2011 and there’s no standard way to ID an author.

Thomson Reuters has its ResearcherID, but it doesn’t really seem to have caught on. And it’s certainly not an open or universal standard, given that it’s based on ISI. ORCID seems to be (slowly) working on a solution to that. NIH claims that it’s working on a PubMed Author ID project, but what’s the holdup? Hasn’t the problem of multiple authors with the same or similar name been recognized for years?
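ORCID, for what it’s worth, already has the technical side well specified: the last character of an ORCID iD is an ISO 7064 MOD 11-2 check digit, so typos get caught just like with credit card numbers. A quick sketch, using the sample iD from ORCID’s own public documentation:

```python
def orcid_check_digit(base_digits: str) -> str:
    """ISO 7064 MOD 11-2 check digit, as used by ORCID iDs."""
    total = 0
    for d in base_digits:
        total = (total + int(d)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

# 0000-0002-1825-0097 is the sample iD from ORCID's documentation;
# strip hyphens and the final (check) character, then recompute it.
base = "0000-0002-1825-0097".replace("-", "")[:-1]
print(orcid_check_digit(base))  # 7
```

So the holdup clearly isn’t the identifier scheme itself; it’s getting publishers, funders, and indexes to actually adopt one.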

There must be some technical and economic hurdles that I don’t quite understand. DOI seemed to arrive on the scene pretty early after the internet started becoming mainstream. That was a few years ago.

chemistry should not focus on the origin of life

March 18, 2011 at 10:16 am | | science and the public, science community

Several chemists (e.g. here and here) have recently suggested that the origin of life (OOL) should be the next big question the field of chemistry could tackle.

Here’s why I disagree:

  • OOL research is not (directly) practical. Studying OOL won’t directly result in new technologies, products, or cures that the public can use. I prefer the Deutch and Whitesides approach. There are more pressing challenges that chemists can contribute to solving (cancer, disease, chemistry of biology, global warming, alternative energy sources, etc.). OOL comes across as an intellectual pursuit for armchair chemists.
  • OOL is politically, emotionally, and religiously charged. The last thing we need is idiots trying to cut chemistry funding because their faith says something different than the science. Studying OOL is the perfect way to offend a bunch of folks and make the field of chemistry a target of religious nuts. I don’t think we should guide our research on what religious nuts want, but why kick the beehive?
  • OOL is basically unanswerable. We might be able to test theories of OOL, but we won’t be able to observe the true origin of life on this planet, at least not until we invent a time machine. That makes OOL research speculative and uninteresting to me. And even if we could find out, who really cares? Will that change our day-to-day life? OOL seems like more of a religious question than one of science.

Of course, some chemists should work on OOL. Just like some physicists should work on counting the number of alternate universes. But I don’t think chemistry as a whole should devote a major portion of its efforts to the “big questions” like OOL and what the universe was before the Big Bang. Chemistry is a practical science that answers questions about our everyday life. Let’s harness that power instead of trying to be as “cool” and big-question oriented as physics.

There. I hope I offended everyone who works on OOL. :)

P.S. Harry Gray and Jay Labinger have a recent editorial in Science arguing that the Big Questions in chemistry are harder to see. They suggest understanding photosynthesis as one of those Questions.

what your laser pointer says about you

March 10, 2011 at 4:50 pm | | conferences, hardware, nerd, science community

Red: You either don’t really care if anyone can see what you’re pointing at, or you’re cheap and you use the free pointer you got from a vendor at the expo. Of course, you could be one of those considerate folks who buy very bright red pointers because you stubbornly like what red looks like, even though the human eye is relatively insensitive to 633 nm. That’s fine.

Green: You want your audience to see what you’re pointing at. Unless you bought a 5+ mW laser (either because you’re showing off or because you didn’t realize how sensitive the human eye is to 532 nm and bought the brightest laser you could find). In that case, you’re blinding your audience. If you’re going to get a 5 mW laser, get it in red. That’s classy and visible!

Blue: You’re a bad-ass. You don’t care that blue lasers are more expensive and slightly harder to see, you want the audience to know that you’re a real laser jock. (Or maybe you’re worried about leaking 1064 nm from green laser pointers.)

Purple: You’re so bad-ass you’re crazy. You don’t care that the human eye can hardly detect and can’t focus on 405 nm. You want to show that you support Blu-ray.

Yellow: You think blue lasers are soooooo 2009.

Invisible: You have a UV or IR laser pointer? Maybe a tripled or undoubled Nd:YAG? You’re nuts.

Maser pointer: I want one.

interrupting student

March 1, 2011 at 1:11 pm | | science community

A (grad?) student walked into a seminar lecture, went up to the speaker mid-sentence, and said, “Sorry to interrupt, but can I borrow some chalk?”

Everyone in the seminar room started chuckling at the kid. It was strange and awkward.

What a dumdum.

f1000

February 11, 2011 at 12:10 pm | | literature, science community, stupid technology, wild web

UPDATE 2: OK, it turns out that the daily(ish) email isn’t too terrible. I now use it and I’m no longer upset that they don’t have an RSS feed. I correct myself and now fully endorse F1000!

Faculty of 1000 is extremely powerful with a lot of potential, but simultaneously completely worthless.

F1000 is like mini-peer-review post-publishing: it uses its “Faculty,” experts in various fields, to rate publications that those experts think are worth reading. It’s like … nay, it is … getting suggestions on what to read in the recent literature from a large group of experts. That is very cool. Of course, various databases like CiteULike and Mendeley are trying to mine their data to find interesting papers, but there’s something great about getting little mini-reviews from actual people.

OK, so why am I annoyed? F1000 doesn’t have an RSS feed! So I have to remember to go and check the website every week. Even if I do remember, there’s no way to distinguish the reviews I’ve already seen from the new ones. What is this, 2002?

UPDATE: rpg comments below with some good news: F1000 is actively trying to get RSS on the site. The comments also explain why it’s a challenge. I eagerly await RSS.
