man bites dog

August 14, 2009 at 9:47 am | | grad life, news, science community, scientific integrity

Former Stanford graduate student Christopher Sclimenti is suing his former PI, Prof Michele Calos, for patent infringement and plagiarism. See stories here, here, and here. The complaint can be read here.

The summary is as follows:

  • Student was originally on a patent application.
  • At some point, Stanford and/or the prof removed the student’s name from the application, which became this patent. Prof Calos is the only inventor listed on the patent.
  • Prof filed a second patent, which is a continuation of the first. Prof Calos is still the only inventor listed.
  • Prof filed two other applications (here and here), still the sole inventor listed, with significant portions copied from the student’s dissertation. (Stanford Daily found about 20 paragraphs in one application that were essentially identical to paragraphs in his dissertation!)

All parties agreed to an Alternative Dispute Resolution (ADR) with a neutral party. The ADR panel concluded that the student was a co-inventor and should have been included in the patent. As a result, Stanford agreed to add him to the issued patent (but I see no evidence that that has occurred yet).

According to Stanford’s OTL page, inventorship is different from what most scientists would consider authorship. For instance, “A person who contributed only labor and/or the supervision of routine techniques, but who did not contribute to the idea—the concept of one of the embodiments of the claimed invention—is not considered an inventor.” By claiming that the student was not an inventor, the prof and the University implied that he was only a technician and did not contribute to conceiving any of the claims in the patent. That’s possible, but the ADR panel disagreed. It seems pretty straightforward that the student should have been on at least the first patent!

Why would Prof Calos and Stanford University fight so hard against their former student, who clearly contributed enough to the invention to appear on the original patent application? Is splitting the royalties from one patent with one extra person, a student who contributed to the work, so terribly painful? Stanford’s policy is to divide the royalties (after paying for the patenting and lawyer fees) 1/3 to the inventors, 1/3 to the department, and 1/3 to the school. So the prof loses half of her royalties by re-including her former student as an inventor, but the University loses nothing.
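The arithmetic behind that last sentence can be made concrete. Here is a toy illustration of Stanford’s stated split (the dollar figure is hypothetical; the policy details are only what the post describes):

```python
# Toy illustration of Stanford's stated royalty policy (as described above):
# after fees, 1/3 goes to the inventors (shared), 1/3 to the department,
# 1/3 to the school. The $300k figure below is made up for illustration.

def inventor_share(net_royalties, n_inventors):
    """Each inventor's cut, assuming the inventors' third is split equally."""
    return (net_royalties / 3) / n_inventors

# With one listed inventor, the prof keeps the entire inventors' third.
# Adding the student as co-inventor halves her share; the University's
# two-thirds (department + school) is unchanged either way.
sole = inventor_share(300_000, 1)    # prof alone: 100000.0
shared = inventor_share(300_000, 2)  # prof with student: 50000.0
```

So the only party with a purely financial stake in excluding the student is the prof, which makes the University’s role in the fight all the more puzzling.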

Is the recent patent application plagiarized from the student’s dissertation? Only if the dissertation was not first self-plagiarized from an earlier paper. Who knows. Regarding the plagiarism complaint, Stanford had this to say:

“I think we’ve really done our part at this point,” Dunkley said. “The inventorship has been corrected. He has been made whole for any amount that he would have received if he had been an inventor from the beginning. So from the University’s perspective, all necessary action has been taken to rectify any differences on the inventorship issue.” (source: Stanford Daily)


That’s not really very satisfying. What if the roles were reversed and a student copied significant portions of his PI’s earlier grant proposal into his dissertation without the PI’s permission? Or submitted a paper without the PI’s knowledge? That student would probably be kicked out of Stanford at a minimum. The least the University could do is investigate this case and determine whether Prof Calos has a history of taking credit for other people’s work. Maybe Prof Calos is innocent and the student is trying to steal credit, but it would be nice if the University would check into it.

All in all, the entire situation is not clear-cut. I suspect that the whole incident is the result of large egos, hurt feelings, and greed—from all parties! This is why it is very important to not burn bridges and to try to empathize with your PI or your student. I suspect this conflict could have been resolved early on if all parties had been more understanding and willing to listen and compromise.

Bottom line though, I find it unfortunate that the University would fight one of its own students.

P.S. Guess what Sclimenti is doing now?

A battle brewing over Napoleon Dynamite

August 8, 2009 at 11:08 am | | literature, science community

I was scanning NCBI ROFL the other day and I came across this little gem of a comment:

Napoleon Dynamite Has Asperger’s? Gosh, It’s Called Cultural Competence, You Freakin’ Idiots

Acad Psychiatry 31:248, May-June 2007
doi: 10.1176/appi.ap.31.3.248

This guy is angry. But wait, there’s more! Here is another comment on the offending paper:

Ligers Lived
Acad Psychiatry 31:247, May-June 2007
doi: 10.1176/appi.ap.31.3.247

So what’s all the commotion about? Here’s the offending paper on PubMed, but I haven’t been able to download the original yet.

Sounds like a hoot.

academic grandfather

June 18, 2009 at 2:45 pm | | grad life, science community

I met my academic grandfather recently: A.J. Sievers was my PhD advisor’s—W.E. Moerner’s—PhD advisor at Cornell. He is a really friendly guy. Also, he told me interesting stories of when he worked for the Varian brothers and beat Hewlett (or was it Packard?) in a ping-pong match!

missing link?

June 16, 2009 at 7:20 am | | science and the public, science community, scientific integrity

I read this interesting editorial in Science about the media hyping of a recent archeological find. Just look at Jørn Hurum, the team leader:


The discovery of the intact skeleton of a monkey-thing was reported in PLOS One, but not before the media hype started. There is a TV documentary of the find and the research on the fossil, and the group is touting the find as the next Rosetta Stone and the “missing link” between apes and humans. For instance, here are a few quotes from the team that reported the find:

“This specimen is like finding the lost ark for archaeologists. It is the scientific equivalent of the Holy Grail.” –Jørn Hurum

“[It is] like the eighth wonder of the world.” –Jens Franzen

What douchebags. This kind of bullshit is seriously likely to undermine the credibility of science in the public eye. Going around claiming that you’ve found the missing link—not to fellow scientists but to the public at large—is very dangerous: when it turns out that your monkey-thing is not a little human, the incident will only add gasoline to the anti-evolution fire. If it really is the missing link, let your fellow paleontologists make those statements.

I find this type of grandstanding by the authors scary, and very reminiscent of the Fleischmann and Pons cold fusion debacle. In fact, I recently watched a 60 Minutes episode about cold fusion, in which Fleischmann stated that his only regrets were naming the effect he saw “fusion” and holding a press conference. In other words, if he and Pons had not overhyped their results directly to the media, then maybe they wouldn’t have been run out of science when their science turned out to have fatal flaws.

Hurum claims that he’s only hyping in this fashion in order to help get children interested in science. But clearly, his base motivation is to make himself rich and famous. Yes, we should get children excited about real science, but not at the expense of scientific integrity.

Or maybe this little monkey-thing will end up being seen as a great scientific discovery for generations. But I doubt it.

royce doesn’t like 8-page papers

June 10, 2009 at 10:27 pm | | literature, open thread, science community

Royce Murray—the famous analytical chemist and great educator—has written an interesting editorial in Analytical Chemistry. Since 1995, the page limit for AC has been 7 pages, including figures. But papers are getting longer.

“In 1983, 1989, 1994, 2000, 2006, and 2008, the average length of papers published in our research section was 3.8, 6.4, 6.7, 7.1, 7.0, and 8.0 pages, respectively. I consider an average seven-page length already long, and an average of eight pages is alarming.”

Royce is careful to acknowledge the various justifiable reasons papers might be longer today than before, including that figures have grown bigger. But he remains convinced that a 10-page paper is basically unreadable to most AC subscribers. His solutions are twofold. First, Royce implies that authors should write the same information with fewer words and smaller figures. Second, he explicitly suggests that authors take advantage of Supporting Information sections.

I am of two minds on this issue. On the one hand, I agree that an 8-page average means that casually reading AC is going to be difficult. On the other hand, I don’t think that dumping results into the SI is an adequate solution: SIs are less carefully written, refereed, and read, and therefore are not an appropriate medium to report scientific data or analysis. In 50 years, will SIs still be accessible? Will our generation of scientists be proud of them if they are? SIs are great for detailed methods as well as superfluous, tangential, or extra results. However, there needs to be a place to publish significant scientific results, even if it takes 10 pages. AC should probably be one of those journals (as should J. Phys. Chem. ABC and others).

So here is my solution. (1) Allow longer papers when they represent a significant body of work. I think Royce agrees here. (2) For a very long but good paper, have the authors split it into two halves. Each half should tell a distinct story, but together could easily be read as a whole. These two halves would be published alongside each other. (3) If a manuscript is 10 pages of drivel and bullshit figures, reject it or require a significant rewrite.

Other thoughts?

allen j. bard

June 5, 2009 at 8:21 am | | science community

I have been hosting the famous electrochemist Al Bard for a colloquium at our department. This is the first time I’ve met him, and it’s been a real pleasure. He is a super nice guy and really down-to-earth. I ran him around campus all day meeting with student groups, and he never complained. And at dinner with students, he was very entertaining and had great perspective on the science world, especially publishing (he was the Editor-in-Chief of JACS for years, after all). Truly one of the nicest famous chemists I’ve ever met.

His talk was excellent: he explained his system and technique very clearly, and the results were interesting. Perfect combination! If you ever get a chance to meet Bard or go to one of his talks, I highly recommend it.

citeulike and mendeley collaborate

June 4, 2009 at 8:34 am | | literature, science community, wild web

It has begun: Mendeley is integrating CiteULike libraries.

CiteULike is an online program for extracting metadata (e.g. authors, titles, etc.) from webpages of journal articles, then storing and organizing that data. I use it to organize the citations and DOIs of all the articles I find interesting.

Mendeley is an online database for storing and sharing PDFs of papers (plus a desktop application). I still don’t find Mendeley to be very helpful yet: still too buggy, too slow, and missing some vital features (e.g. effective metadata extraction, searching for metadata, finding duplicates). But there’s a lot of potential: they’re still in beta!

I use Papers for organizing and reading PDFs, but it only exists on the Mac, and doesn’t have a simple way to share libraries of PDFs across computers. There are probably some serious copyright issues with PDF sharing, and it will be interesting to see how Mendeley and Papers adapt.

UPDATE: Mendeley is quickly getting better. Their metadata extraction is more accurate, there is a PDF viewer, and linking to CUL means tagging is simple. Duplicate finding and merging is still needed, but the software is much better than when I first wrote this. Although I’m still using Papers somewhat because of inertia, I might suggest a newcomer try Mendeley right off the bat.

deja boo?

June 2, 2009 at 9:53 am | | everyday science, literature, open thread, science community, scientific integrity, wild web

I’d like to know everyone’s opinion about Deja Vu, the database of “duplicate” scientific articles. Most of the articles in the database are “unverified,” meaning that they could be entirely legitimate (e.g. a reprint). Some are instances of self-plagiarism: an author recycling his or her own abstract or intro for a new paper or review. A few instances are true plagiarism: one group of authors stealing the words (entire paragraphs or papers) of other authors. You can read more in Science.

I can imagine several possible responses (see the poll below):

  1. Great! Now there’s a way for authors, journals, and institutions to better root out plagiarism and unauthorized copying.
  2. Granted, this is information in the public domain, so authors should expect their work to be scrutinized. However, it’s worrisome to have a computer algorithm put red flags on articles that may be legitimate. Deja Vu is probably a good idea, but needs to be reworked.
  3. Careers will be unfairly destroyed by this approach. Labeling a paper as a “duplicate” sounds negative, even when listed as “sanctioned” or “unverified.” This database takes a guilty-until-proven-innocent approach that has the potential to sully the reputation of good scientists.
  4. Um, haven’t these people seen Terminator 2? What if Deja Vu becomes self-aware and starts killing plagiarists?

[poll id=”2″]

Fortunately, an author can check his or her work in the eTBLAST database before submission, to see if a coauthor copied a section, or if the text will unfairly put up red flags. But I found that the results were confusing (e.g. I can’t find the meaning of the “score” or the “z-score”) and unhelpful (of course papers in the same field will have the same keywords). And the results page was really buggy (maybe just in Firefox?).

Personally, I vote #2: Deja Vu is a good idea, but needs to be more careful about the papers it lists as “duplicates,” even “unverified” or “sanctioned.” When a junior faculty member gets a hit in the database, his or her name will be associated with plagiarism. Some people will not bother to check if it was a legitimate copy, or even who copied whom. I think that the current approach that Deja Vu takes is reckless and unfair. Even lazy.

Moreover, self-plagiarism is not necessarily bad. Copying your own abstract is different from copying your entire paper. Obviously, at some point, self-plagiarism is unacceptable (e.g. submitting the same paper or review to two journals).

I think this topic deserves more nuance than Deja Vu offers.

(Deja Vu has its own survey here.)


May 21, 2009 at 8:59 am | | history, literature, science community

I guess I was the first one to check out this book since 1966.


It wasn’t worth checking out, by the way.


April 28, 2009 at 8:46 am | | literature, science community

Endnotes are the way to go in a scientific paper. Footnotes might work for a book, but they’re distracting and hard to locate quickly in an article. So cut it out, JOC.

The only thing worse than footnotes is startnotes: when all the references are listed at the beginning of the article. How does that make any sense? It really makes the paper look silly:



Is Optics Express just trying to be contrarian?

That paper by Ober is good, though.

some light reading

April 17, 2009 at 7:31 am | | everyday science, grad life, literature, nerd, science community

What do you read…


…while on the toilet?

epernicus: the linkedin for scientists

April 15, 2009 at 7:58 am | | science community, wild web

I’ve been waiting for a LinkedIn for scientists. I’ve finally found one: Epernicus. It’s professional, clean, easy, and standardized. Most importantly, Epernicus recognizes that the scientific community already has a concrete network, which is defined by professor-student, coauthor, and colleague relationships. The site has a “genealogy” feature and automatically connects you to your coauthors.

Epernicus profiles are focused on scientific network, research skills and background, and papers. This is what the LinkedIn for scientists should look like. (It’s not perfect: I don’t think there’s enough emphasis on finding and sharing articles. See my idea below.)

Of course, there are many versions of online networks for scientists. Here is a sampling of the ones I’ve found: Labmeeting, Nature Network, Research Gate, Sciorbis, Mendeley, Scientific Commons, Scilink, CiteULike, ACS Member Network, 2Collab, Graduate Junction, and the list goes on. None of the ones I’ve tested really hits the nail on the head. They all have their own cool features, but they either neglect some essential aspect or are just too clunky to use the way I want (i.e. the way I use LinkedIn or Facebook).

For instance, I use CiteULike to organize interesting articles, and the site does a good job of linking you to similar readers. But the networking is limited on the site. If CiteULike and Epernicus connected, they could make an awesome site! Just imagine: when your former labmate or coauthor reads a paper, it shows up in a list of papers you might want to read. Articles could be weighted by many factors based on your reading habits, interests, and network, giving you a prioritized list of relevant papers. Data about articles and reading habits could be mined from CiteULike, while genealogy and network information are inherent in Epernicus. The way we read papers is bound to change dramatically someday, and Epernicus + CiteULike should tackle that paradigm shift today.
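The weighting idea could be sketched roughly like this. Everything here is hypothetical: the function name, the weights, and the data structures are mine, just to show one way network closeness and tag overlap could rank papers:

```python
# Toy sketch of the paper-prioritization idea: score each article by how many
# people in your network saved it (weighted by relationship closeness, e.g.
# labmate > distant coauthor) plus overlap with your own tags. All names,
# weights, and data are hypothetical illustrations, not any site's real API.

def score_article(article_tags, my_tags, reader_closeness):
    """Higher score = more likely you want to read this paper."""
    network_signal = sum(reader_closeness)     # e.g. 1.0 per labmate, 0.3 per acquaintance
    tag_overlap = len(article_tags & my_tags)  # shared interests
    return network_signal + 0.5 * tag_overlap

my_tags = {"single-molecule", "microscopy"}
articles = [
    ({"microscopy", "spectroscopy"}, [1.0, 0.3]),  # saved by a labmate and an acquaintance
    ({"synthesis"}, [0.3]),                        # saved by one acquaintance, no tag overlap
]
ranked = sorted(articles, key=lambda a: score_article(a[0], my_tags, a[1]),
                reverse=True)
```

In this sketch the microscopy paper outranks the synthesis one because both signals favor it; a real system would obviously tune the weights from actual reading behavior.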

Anyway, if you’re on Epernicus, feel free to connect with me here.

creepy sciencespam

April 3, 2009 at 8:26 am | | science community, wild web

I got a spammy email from one of those science networking sites:

Hello ,

my name is Mirian , it is my pleasure to write you after viewing your profile here in this site which really interests me to communicate with you. it will be better if you can comfirm this message by writing me back via my email ( redacted )so that we can have a comfortable communication . i have something to share with you . i will be waiting to hear from you. have a blessed day.————————————————————from Mirian

I just found it creepy.

please repeat the question

March 14, 2009 at 2:01 pm | | conferences, science community

I always appreciate when a speaker repeats the questions before answering. It can really enhance the Q&A. But I understand how easy it is to forget to repeat the questions, even if you had the intention to before you stood up in front of everybody.

I’m at a little conference on campus today. It’s being filmed for a video archive, so the moderators regularly remind speakers to repeat the questions (lest the context be lost in the video). One speaker forgot to repeat each and every question, and the moderator was very persistent at reminding the speaker. It was all in good spirits, and people chuckled each time the moderator needed to remind the speaker.

An hour or so later, the previous moderator gave a talk. After his presentation, he proceeded to forget to repeat each and every question, and the audience and new moderator were quick to gently remind him! Very cute.

what to do about supporting information

March 9, 2009 at 3:54 pm | | literature, open thread, science community, wild web

I met with David Martinsen from ACS Pubs today to discuss the interface between publishing and technology (internet, Kindle, Facebook, CiteULike, etc.). Interesting discussion. One topic that came up a few times was the supporting information (SI) for articles.

Page limits as well as increasingly complex experimental methods have caused SIs to balloon to sometimes ridiculous lengths. Combine this with the fact that SIs are often barely readable, marginally refereed (if at all), and crammed with unexplained figures, and SIs become absurd. Sometimes, “see the SI” is a ploy to lull readers and referees into feeling that a statement is supported by data, even when the data is total crap. I’ve seen spectra in SIs that wouldn’t pass muster in an undergrad lab, much less a peer-reviewed journal, but they were the primary data of the letter. Nevertheless, much of the core scientific information is often buried in these monstrosities.

Authors need to be encouraged to compile their SIs into coherent, clean, correct, and scientific supplements to their papers. I have some suggestions, starting with simple ones and moving to more fundamental ones:

  1. Offer a single-PDF option. Have an option to download a single PDF that contains all the content for the article (i.e. the main text as well as the SI). Because the SI often contains vital information to interpret or repeat the results, it’s important to have this data (or synthesis) along with the main text. ACS already has two PDF options (hi-res PDF and PDF with links), so it would be as simple as adding a third link to the page.
  2. Format SIs. This can be as simple as providing a Word template just like there is available for the main text and figures of the article. This shouldn’t increase the editors’ responsibilities, but would make SIs more readable.
  3. Referee SIs. Encourage referees to carefully read and scrutinize the SI, just as they do the main text. If something is unscientific, sloppy, or wrong in the SI, it should not be included. Referees should be encouraged to request SI data or methods be clarified, tested, eliminated, or repeated if necessary as a condition for publication.
  4. Include raw data. Provide space for authors to present raw data if they wish (e.g. structure, NMR, crystallographic, spectral, etc.) in the SI. I personally don’t think this is as important as careful writing, editing, and refereeing of the SI; however, providing raw data could be another level of evidence for the authors’ claims. I don’t think this should be a requirement, because we should be able to trust researchers and not spend all our time redoing someone else’s analysis.
  5. Offer full articles online. Putting 1-3 together, there could be a “full” article form of any paper, which includes all information from the SI, but formatted and organized like a lengthy paper. That way, the paper form of the journal could stay short, while online forms could be complete, yet still professional, readable, and cogent. However, this would add some burden to editors, referees, authors, copy-editors, etc. Paper journals could become more of a collection of executive summaries, with the full scientific data online. (I believe that some Nature journals do this to some extent: having a short Methods section added to the PDF form but not in the paper journal. But there’s still an SI in addition.)

I think #5 is where journals should be going. True, it will add expense to publishers, but that will help them justify the subscription fees when no one receives paper journals anymore.

Other ideas?

UPDATE: Here are some other ideas, which I’ll update as needed:

  • Allow authors to republish. Make it easier for authors to republish the data and figures that are in a SI again in the main text of a subsequent paper. This would permit the SI data to someday be published in a fully refereed form in a follow-up paper. Otherwise, data and figures in the SI will never be properly reported. Of course, there may be a lot of complications with copyright and getting all authors to agree.
