FluidFM. I would have called it “Squirty AFM.” They put a little hole through the tip of an AFM cantilever, so they can squirt fluid through the tip.
Or add dye to just one cell.
I haven’t had this much fun with a paper in a while. Hot off the RSS press from PRL is the absolute gem: Illusion Optics: The Optical Transformation of an Object into Another Object. A number of things caught my attention in this one. The first application of their metamaterial image transformation is a feat of gender illusion.
Anyone who has spent one night in Bangkok (Choose your own adventure! The semi NSFW Youtube here, or some boring lyrics here) knows this can be accomplished with far simpler means than metamaterials, but kudos. Personally, I’ll stick with beer goggles. Their next trick involves what I can imagine will be the next great party prank, turning a spoon (in this case what appears to be a 1-μm spoon) into the illusion of a cup.
Next is rendering a “virtual” hole in a wall. You simply slap their mystical illusion device on the wall (you’ll love this Sam, all the details of what exactly such a device entails, aka the entire basis of the paper, are piled in the supporting materials), and you can look through the virtual hole like it was an actual hole. I call the device of my own invention that can accomplish this a window(TM), but let’s not argue over semantics of fenestration. Porky’s would be proud.
Even more ridiculous, they argue that one can even see into a closed container by simply turning on their device and projecting the illusion of free space where the container should be. Which would totally work, except for this little thing called absorption. So if you decide to hide 8-balls of Coke in a Mylar balloon, you’re totally screwed. And an idiot.
I met my academic grandfather recently: A.J. Sievers was my PhD advisor’s—W.E. Moerner’s—PhD advisor at Cornell. He is a really friendly guy. Also, he told me interesting stories of when he worked for the Varian brothers and beat Hewlett (or was it Packard?) in a ping-pong match!
I read this interesting editorial in Science about the media hyping of a recent archeological find. Just look at Jørn Hurum, the team leader:
The discovery of the intact skeleton of a monkey-thing was reported in PLoS ONE, but not before the media hype started. There is a TV documentary of the find and the research on the fossil, and the group is touting the find as the next Rosetta Stone and the “missing link” between apes and humans. For instance, here are a few quotes from the team that reported the find:
“This specimen is like finding the lost ark for archaeologists. It is the scientific equivalent of the Holy Grail.” –Jørn Hurum
“[It is] like the eighth wonder of the world.” –Jens Franzen
What douchebags. This kind of bullshit is seriously likely to undermine the credibility of science in the public eye. Going around claiming that you’ve found the missing link—not to fellow scientists but to the public at large—is very dangerous: when it turns out that your monkey-thing is not a little human, the incident will only add gasoline to the anti-evolution fire. If it really is the missing link, let your fellow paleontologists make those statements.
I find this type of grandstanding by the authors scary, and very reminiscent of the Fleischmann and Pons cold fusion debacle. In fact, I recently watched a 60 Minutes episode about cold fusion, in which Fleischmann stated that his only regrets were naming the effect he saw “fusion” and holding a press conference. In other words, if he and Pons had not overhyped their results directly to the media, then maybe they wouldn’t have been run out of science when their science turned out to have fatal flaws.
Hurum claims that he’s only hyping in this fashion in order to help get children interested in science. But clearly, his base motivation is to make himself rich and famous. Yes, we should get children excited about real science, but not at the expense of scientific integrity.
Or maybe this little monkey-thing will end up being seen as a great scientific discovery for generations. But I doubt it.
Royce Murray—the famous analytical chemist and great educator—has written an interesting editorial in Analytical Chemistry. Since 1995, the page limit for AC has been 7 pages, including figures. But papers are getting longer.
“In 1983, 1989, 1994, 2000, 2006, and 2008, the average length of papers published in our research section was 3.8, 6.4, 6.7, 7.1, 7.0, and 8.0 pages, respectively. I consider an average seven-page length already long, and an average of eight pages is alarming.”
Royce is careful to acknowledge the various justifiable reasons papers might be longer today than before, including that figures have grown bigger. But he remains convinced that a 10-page paper is basically unreadable to most AC subscribers. His solutions are twofold. First, Royce implies that authors should write the same information with fewer words and smaller figures. Second, he explicitly suggests that authors take advantage of Supporting Information sections.
I am of two minds on this issue. On the one hand, I agree that an 8-page average means that casually reading AC is going to be difficult. On the other hand, I don’t think that dumping results into the SI is an adequate solution: SIs are less carefully written, refereed, and read, and therefore are not an appropriate medium to report scientific data or analysis. In 50 years, will SIs still be accessible? Will our generation of scientists be proud of them if they are? SIs are great for detailed methods as well as superfluous, tangential, or extra results. However, there needs to be a place to be able to publish significant scientific results, even if it takes 10 pages. AC should probably be one of those journals (as should J. Phys. Chem. ABC and others).
So here is my solution. (1) Allow longer papers when they represent a significant body of work. I think Royce agrees here. (2) For a very long but good paper, have the authors split it into two halves. Each half should tell a distinct story, but together could easily be read as a whole. These two halves would be published alongside each other. (3) If a manuscript is 10 pages of drivel and bullshit figures, reject it or require a significant rewrite.
I have been hosting the famous electrochemist Al Bard for a colloquium at our department. This is the first time I’ve met him, and it’s been a real pleasure. He is a super nice guy and really down-to-earth. I ran him around campus all day meeting with student groups, and he never complained. And at dinner with students, he was very entertaining and had great perspective on the science world, especially publishing (he was the Editor-in-Chief of JACS for years, after all). Truly one of the nicest famous chemists I’ve ever met.
His talk was excellent: he explained his system and technique very clearly, and the results were interesting. Perfect combination! If you ever get a chance to meet Bard or go to one of his talks, I highly recommend it.
It has begun: Mendeley is integrating CiteULike libraries.
CiteULike is an online program for extracting metadata (e.g. authors, titles, etc.) from webpages of journal articles, then storing and organizing that data. I use it to organize the citations and DOIs of all the articles I find interesting.
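CiteULike’s actual scrapers are their own, but the general trick is possible because many journal webpages embed their metadata in Highwire-style `citation_*` meta tags. Here is a minimal sketch of extracting those tags with Python’s standard-library HTML parser; the article page and its field values below are made up for illustration:

```python
from html.parser import HTMLParser

class CitationMetaParser(HTMLParser):
    """Collect <meta name="citation_*" content="..."> tags from an article page."""
    def __init__(self):
        super().__init__()
        self.metadata = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        name, content = attrs.get("name", ""), attrs.get("content")
        if name.startswith("citation_") and content:
            # Some fields (e.g. authors) repeat, so keep every value in a list.
            self.metadata.setdefault(name, []).append(content)

# A toy article page (hypothetical values, not a real paper record)
page = """
<html><head>
<meta name="citation_title" content="An Example Paper">
<meta name="citation_author" content="Doe, J.">
<meta name="citation_author" content="Roe, R.">
<meta name="citation_doi" content="10.1234/example.doi">
</head></html>
"""

parser = CitationMetaParser()
parser.feed(page)
print(parser.metadata["citation_title"])   # ['An Example Paper']
print(parser.metadata["citation_author"])  # ['Doe, J.', 'Roe, R.']
```

Real article pages are messier, of course, which is presumably why CiteULike maintains per-publisher plugins rather than one generic scraper.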
Mendeley is an online database for storing and sharing PDFs of papers (plus a desktop application). I don’t find Mendeley very helpful yet: still too buggy, too slow, and missing some vital features (e.g. effective metadata extraction, searching for metadata, finding duplicates). But there’s a lot of potential: they’re still in beta!
I use Papers for organizing and reading PDFs, but it only exists on the Mac, and doesn’t have a simple way to share libraries of PDFs across computers. There are probably some serious copyright issues with PDF sharing, and it will be interesting to see how Mendeley and Papers adapt.
UPDATE: Mendeley is quickly getting better. Their metadata extraction is more accurate, there is a PDF viewer, and linking to CUL means tagging is simple. Duplicate finding and merging is still needed, but the software is much better than when I first wrote this. Although I’m still using Papers somewhat because of inertia, I might suggest a newcomer try Mendeley right off the bat.
I’d like to know everyone’s opinion about Deja Vu, the database of “duplicate” scientific articles. Most of the articles in the database are “unverified,” meaning that they could be entirely legitimate (e.g. a reprint). Some are instances of self-plagiarism: an author recycling his or her own abstract or intro for a new paper or review. A few instances are true plagiarism: one group of authors stealing the words (entire paragraphs or papers) of other authors. You can read more in Science.
I can imagine several possible responses (see the poll below):
- Great! Now there’s a way for authors, journals, and institutions to better root out plagiarism and unauthorized copying.
- Granted, this is information in the public domain, so authors should expect their work to be scrutinized. However, it’s worrisome to have a computer algorithm put red flags on articles that may be legitimate. Deja Vu is probably a good idea, but needs to be reworked.
- Careers will be unfairly destroyed by this approach. Labeling a paper as a “duplicate” sounds negative, even when listed as “sanctioned” or “unverified.” This database takes a guilty-until-proven-innocent approach that has the potential to sully the reputation of good scientists.
- Um, haven’t these people seen Terminator 2? What if Deja Vu becomes self-aware and starts killing plagiarists?
Fortunately, an author can check his or her work in the eTBLAST database before submission, to see if a coauthor copied a section, or if the text will unfairly put up red flags. But I found that the results were confusing (e.g. I can’t find the meaning of the “score” or the “z-score”) and unhelpful (of course papers in the same field will have the same keywords). And the results page was really buggy (maybe just in Firefox?).
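Whatever eTBLAST’s opaque “score” and “z-score” actually measure, the basic idea behind flagging duplicates is text similarity. A naive word-overlap (Jaccard) measure sketches the concept; this is my illustration, not eTBLAST’s algorithm, and the abstracts are made up:

```python
import re

def word_set(text):
    """Lowercased word tokens, with a few very common words dropped."""
    stop = {"the", "a", "an", "of", "and", "in", "to", "is", "we"}
    return {w for w in re.findall(r"[a-z]+", text.lower()) if w not in stop}

def jaccard(a, b):
    """Fraction of shared vocabulary: |A ∩ B| / |A ∪ B|."""
    wa, wb = word_set(a), word_set(b)
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

abstract1 = "We report a single-molecule study of fluorescence intermittency."
abstract2 = "We report a single-molecule study of fluorescence intermittency in quantum dots."
abstract3 = "Metamaterials enable the optical transformation of one object into another."

print(jaccard(abstract1, abstract2))  # high: near-duplicate text
print(jaccard(abstract1, abstract3))  # low: unrelated text
```

This also illustrates my complaint about keywords: two honest papers in the same subfield will share a lot of vocabulary, so a raw similarity score needs careful thresholds and human review before anyone gets labeled a plagiarist.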
Personally, I vote #2: Deja Vu is a good idea, but needs to be more careful about the papers it lists as “duplicates,” even “unverified” or “sanctioned.” When a junior faculty member gets a hit in the database, his or her name will be associated with plagiarism. Some people will not bother to check if it was a legitimate copy, or even who copied whom. I think that the current approach that Deja Vu takes is reckless and unfair. Even lazy.
Moreover, self-plagiarism is not necessarily bad. Copying your own abstract is different than copying your entire paper. Obviously, at some point, self-plagiarism is unacceptable (e.g. submitting the same paper or review to two journals).
I think this topic deserves more nuance than Deja Vu offers.
(Deja Vu has its own survey here.)