Jürgen Schmied from GATTAquant came by the other day and let me play around with some of their cool DNA origami fluorescence standards.
The PAINT sample was really cool. It has short oligos on the DNA origami and complementary strands labeled with dyes in solution. The binding/bleaching kinetics are such that each hotspot blinks many times during an acquisition. After a quick 10,000-frame acquisition over 3 min, we had a dataset from which we could easily generate a super-resolution image. We used ThunderSTORM to fit the data and correct for drift. Without any other corrections, we could easily resolve the three PAINT hotspots on each DNA origami:
But my favorite sample was actually the confocal test slide. It had two sets of dyes about 350 nm apart permanently labeled on each DNA origami.
This let me test the resolution and image quality using different configurations on our Diskovery confocal/TIRF system.
Each spot contained only about 4-8 dyes, so it was a much greater challenge for our microscope than TetraSpeck beads.
I highly recommend GATTAquant test samples. Very fun.
UPDATE: Jürgen ran my data through GATTAquant’s analysis software and sent me the results below.
The scale bar is only 400 nm. Love it! (Video link here.)
The strange thing is that, despite our suggestions otherwise, the Nature folks chose a not-the-most-interesting figure from the paper. Of course, I’m more than happy that they showed any of our awesome figures! But, instead of showing one of the super-resolution images that Hsiao-lu made, the highlight shows a proof-of-labeling image, which is diffraction-limited. That said, they did select one of the live-cell images. I suppose it could be worse: they could have picked one of the controls. Or not displayed a figure at all.
Thanks Nature. I don’t mean to look a gift horse in the mouth.
Well, I’ll highlight our paper here. And choose my favorite figure (it’s protein localization in a little-bitty bacterium):
Or crazy swirling patterns:
For these awesome results, I’m awarding the authors an EDSEL for “Coolest paper (if it isn’t faked) of little filaments spinning around in circles of 2010.”
Not that I have any reason to think that these results are faked. They just seem so crazy and beautiful. Animated, even.
Xiaowei Zhuang and Roger Tsien have published a JACS paper on how Cy5 photoswitches: “Photoswitching Mechanism of Cyanine Dyes.”
Nothing surprising here: this is the mechanism that Tsien proposed informally a couple years ago. But now they have published evidence that the thiol (e.g. BME, a component of the oxygen-scavenging system) attacks a polymethine bond.
I think this paper is a nice characterization of the photochemistry involved in Cy5 photoswitching.
This artistic rendering depicts electrocatalytic reactions one event at a time by measuring the burst of fluorescence (peaks) that occurs as each product molecule is reduced on a carbon nanotube (gray line).
This awesome figure is from C&E News, depicting single molecule fluorescence signals. I suppose that the “sun” in the background is the imaging laser. What are the stars, though? Dust? Well, at least they’re honest about the purity of the buffer.
I’ve seen a few papers recently that attempt to characterize the effect oxygen scavengers or triplet quenchers have on the photostability of single molecules. The main parameters they measure and compare across systems and fluorophores are the “on time”—the average time single molecules are fluorescent before they photobleach—and the “off time”—the time spent in transient dark states.
Here’s my question: What about the emission rate? It’s not enough to report and compare only times. Photostability (either regarding bleaching or blinking) is related to a probability that a molecule will enter a permanent (or transient) dark state for every photon absorbed. The “on time” is only relevant when you also know the rate of photon absorption.
Moreover, very fast excursions into transient dark states (triplet lifetimes are typically µs or ms—much faster than the camera integration time) will appear as a dimming of the fluorescence, a decrease in the photon emission rate. By removing molecular oxygen (an effective triplet quencher) from solution, triplet lifetimes increase and fluorophores often become dimmer. Thus, removing oxygen might make your dye last a longer time, but at the expense of brightness. The same trade could be achieved more effectively by just turning down the excitation intensity (with lower background, too)!
So it makes me want to pull my hair out when I read long, detailed articles in prestigious journals that fail to mention even once the rates of photon absorption or emission when they report “on times” as photostability parameters.
Total number of photons is a more fundamental measure of photostability, but it is still not enough for the full picture (it doesn’t report on blinkiness, for instance). Total photons plus “on times” is sufficient; or “on times” and emission rates; or enough variables to define the frikkin’ system.
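To make the point concrete, here’s a toy calculation (with made-up numbers, not measurements from any of those papers) showing why an “on time” alone can’t rank photostability:

```python
# Hypothetical numbers to illustrate why "on time" alone is ambiguous:
# two dyes with very different on-times can deliver the same photons.

def total_photons(emission_rate_per_s, on_time_s):
    """Photons emitted before bleaching = emission rate x on-time."""
    return emission_rate_per_s * on_time_s

# Dye A: bright but short-lived; Dye B: dim but long-lived.
dye_a = total_photons(emission_rate_per_s=50_000, on_time_s=10)   # 500,000 photons
dye_b = total_photons(emission_rate_per_s=5_000,  on_time_s=100)  # 500,000 photons
# Same total photons -- reporting only the 10x different on-times would
# make dye B look 10x "more photostable" when it isn't.
```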
UPDATE: Here’s a good paper that tests specific hypotheses about number and duration of off times. On the Mechanism of Trolox as Antiblinking and Antibleaching Reagent. JACS 2009, ASAP.
In the spirit of the Olympics, I have a new idea: In order to directly probe the dynamics of the Olympics using fluorescence, we need to label athletes and measure their locations and conformations.
For instance, labeling Michael Phelps with Cy3 and Cy5 will allow us to locate his position in the pool with nanometer precision.
Moreover, we should also be able to watch the FRET trace to observe the conformational changes of his arms. In this fashion, we should be able to determine if Phelps travels down the lane using the widely promoted “butterfly” model, or the more controversial “flagellar” model, which posits that his arms spin like propellers.
The basic idea is to use an azide to disrupt the push-pull character of a known fluorophore, rendering it dark. Because the azide is photoconvertible to an electron donor (the amine), you can photoactivate fluorescence.
Anyway that’s all. Just wanted to talk about myself a little. Which is what a blog is for.
(Or: How I Learned to Stop Worrying and Love the Awards.)
So I give you the first EverydayScientist’ Extraordinary Laud (EDSEL) award for the Coolest Paper of Early January 2008:
Huang, B., Wang, W., Bates, M., Zhuang, X. Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy. Science 2008 (published online Jan 3).1
Stochastic optical reconstruction microscopy (STORM) is Xiaowei’s cool super-resolution technique (Eric Betzig has a similar version called “PALM”). And I’ve already blogged about Bo’s talk at ACS Boston.
There’s not anything revolutionary in this paper: they’ve used their STORM technique, and simply added an extra lens to distort the PSF of single molecules, causing those above or below the focal plane to distort in a consistent manner. That way, they “imprinted” axial information on the image, and can generate 3D representations from the fitting data.2 But, while the technique isn’t a breakthrough, this paper is in Science because the images produced are really amazing:
You can see that the microtubules in the cell move down! Cool. And the supplemental material included this beautiful movie of some microtubules crossing over each other (the scale bar is only 200 nm, below the diffraction limit):
[local /wp-content/uploads/3d-storm_movie.mov View Movie]
And I also really loved this comparison of 2D STORM (top) versus a 100-nm thick x-y cross-section in the 3D image (bottom) of some clathrin-coated pits. You can really see that they are hollow!
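The z-encoding itself—the extra lens making the PSF elongate one way above focus and the other way below—boils down to a simple mapping from fitted spot widths to axial position. Here’s a toy sketch; the linear calibration and its slope are invented for illustration, not taken from the paper:

```python
# Toy sketch of the astigmatism idea: a cylindrical lens makes the PSF
# elongate in x above focus and in y below it, so the difference between
# the fitted widths wx and wy encodes the axial position z.
# Assume a linear calibration (wx - wy) = c * z near focus (hypothetical c).

C_SLOPE = 0.5  # assumed calibration: nm of width difference per nm of z

def z_from_widths(wx_nm, wy_nm, c=C_SLOPE):
    """Estimate axial position from fitted PSF widths (toy calibration)."""
    return (wx_nm - wy_nm) / c

# A molecule whose fitted PSF is 300 nm wide in x and 250 nm wide in y
# would sit 100 nm above focus by this (made-up) convention:
z = z_from_widths(300.0, 250.0)
```

In the real experiment the calibration curve is measured empirically by stepping a bead through focus, and it isn’t linear far from the focal plane.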
Now, all these images are of fixed (read: “dead”) cells. Because STORM imaging requires cycling acquisition, each frame generally takes a long time. This makes living-cell imaging and measuring dynamics difficult.3 And this is really a proof-of-principle study: the results don’t answer any biophysical questions. Nevertheless, the images are really beautiful!
I fully expect this technique—and the other super-resolution approaches—to become another tool in the biophysical toolbox (along with TIRF, FRET, FLIM, FRAP, and other acronyms). Just you wait…
1 I tried to be good and requested permission from AAAS to reprint these images and movies. But they haven’t gotten back to me. So I’ll just post them anyway. Don’t sue me, Bo. [UPDATE: Reprinted with permission from AAAS. I finally received permission to use these images. If you wish to reuse these images you can obtain permission from AAAS by following the guidelines here.]
2 It is generally known that the dipole-emission pattern of a single emitter contains information about axial depth. It is also straightforward to introduce an astigmatic distortion to the optical system to embed depth information.
3 Not impossible: someday it will be done. Stefan Hell is already quite fast with his PALMIRA imaging.
We just moved to a different apartment, and started doing some decorating. Man, we went all out. You know, lovin’ it.
And I just don’t get enough of looking at single fluorescent molecules while I’m at lab. I miss them when I’m home. So we decorated our living room so that I never have to feel too far away from my fluorophores:
Figure 1. (left) sample under white light, (middle) white light and single-molecule emission overlaid, and (right) single-molecule emission only. We have some problems with background, but we are able to achieve signal-to-background ratios sufficient for imaging.
I really have no idea why my girlfriend let us do this. It was partially her idea, really. Maybe she wanted to keep me home longer, staring at the walls, writing down filenames, instead of heading into lab.
Bianxiao Cui, a postdoc in Steve Chu‘s lab here at Stanford, gave a job talk in the Chemistry Department last week. The title was approximately “Single-Molecule Analysis of NGF in Live Neurons (Using Quantum Dots).” Anyway, that gets the point of the talk. It was pretty cool. Here are my notes:
Summary: Nerve-growth factor (NGF) is transported down the axon from the distal end near a target to the nerve cell body, and this tells the axon to grow. To image this transport in live cells, they labeled NGF with quantum dots (QDs) using a biotin/streptavidin linkage and tracked the labeled molecules using “pseudo TIRF”—somewhere between TIR illumination and collimated epi so that the excitation extended a little more into the sample than TIR, but kept background low. They used special compartmented sample containers (and later, microfluidic cells) in order to separate the cell body from the axon (because the labeled NGF nonspecifically labels the entire cell body and makes it too difficult to image).
Cui showed some beautiful (false-color) movies of individual QD-NGF-endosome complexes being actively transported down microtubules toward the cell body. Also found were instances where endosomes moved backward (toward distal axon) and endosomes allegedly passed over each other (although it was unclear to me how they proved that the two endosomes were on the same microtubule and if they actually passed over, because each diffraction-limited spot is indistinguishable). She made some claims about only one NGF per endosome (i.e. the endosome does not wait for more passengers), but that part was a little fuzzy to me (although it was probably the point of the talk). Also, they found that labeled NGF co-localized with signaling molecules in the cell body. Cool.
By fitting the point-spread function of the QDs, they were able to localize each endosome to individual microtubules (imaging below the diffraction limit). This was very cool, and Cui showed tracks of endosomes switching microtubules and then changing speed or direction. Finally, she even did a mouse study, but this seemed tacked on. The real results were the imaging and single-particle tracking.
This is Part 1 in a (possibly) multi-part series on techniques that seek to image objects smaller than the diffraction limit of light. Part II is here. Suggestions for other interesting methods to summarize?
It is well known that there is a limit to how tightly you can focus light—approximately half the wavelength of the light you are using. But this is not a true barrier: the diffraction limit applies only in the far field, and localization precision can be increased with many photons and careful analysis (although two objects still cannot be resolved this way). Like the sound barrier, the diffraction barrier is breakable! Let’s explore a few interesting approaches to imaging objects smaller than ~250 nm.
Probably the most conceptually direct way to break the diffraction barrier is to use a light source and/or a detector that is itself nanometer in scale. Diffraction as we know it is truly a far-field effect: the light from an aperture is the Fourier transform of the aperture in the far-field. [Here’s a neat Applet to play with.] But in the near-field, all of this isn’t necessarily the case. Near-field scanning optical microscopy (NSOM) forces light through the tiny tip of a pulled fiber—and the aperture can be on the order of tens of nanometers. If you bring this tip nanometers away from a molecule, the resolution is not limited by diffraction, but by the size of the tip aperture (because only that one molecule will see the light coming out of the tip). Then you can raster scan the tip over the surface to create an image.
The main down-side to NSOM is the limited number of photons you can force out a tiny tip, and the miniscule collection efficiency (if you are trying to collect fluorescence in the near-field).
Local Enhancement / ANSOM / Bowties
So what if, instead of forcing photons down a tiny tip, we could create a local bright spot in an otherwise diffraction-limited spot? That is the approach that ANSOM and other techniques take. ANSOM is apertureless NSOM: it uses a tip very close to a fluorophore to enhance the local electric field the fluorophore sees. Basically, the ANSOM tip is like a lightning rod which creates a hot spot of light.
The Moerner lab uses some bowtie nanoantennas to greatly and reproducibly enhance the electric field in the nanometer gap between the tips of two gold triangles. Again, the point is to enhance a very small region of a diffraction-limited spot, thus improving the mismatch between light and nanoscale objects—and breaking the diffraction barrier.
My most recent favorite is STED—stimulated emission depletion. Stefan Hell at the Max Planck Institute developed this method, which uses two laser pulses. The first pulse is a diffraction-limited spot tuned to the absorption wavelength, so it excites any fluorophores in that region; an immediate second pulse, red-shifted to the emission wavelength, stimulates emission back down to the ground state, thus depleting the excited state of any fluorophores hit by this depletion pulse. The trick is that the depletion pulse goes through a phase modulator so that it illuminates the sample in the shape of a donut: the outer part of the diffraction-limited spot is depleted, and the small center can still fluoresce. By saturating the depletion pulse, the center of the donut gets smaller and smaller, until they can get resolution of tens of nanometers.
This technique also requires a raster scan like NSOM and standard confocal.
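The shrinking donut center follows an often-quoted square-root scaling with the depletion power. Here’s a quick back-of-the-envelope using that common simplified formula (the wavelength, NA, and saturation factors below are illustrative, not from any particular instrument):

```python
import math

# Commonly quoted (simplified) scaling for STED resolution: the effective
# spot size shrinks with the depletion saturation factor I/I_sat:
#     d = wavelength / (2 * NA * sqrt(1 + I/I_sat))
# All numbers here are illustrative.

def sted_resolution_nm(wavelength_nm, na, saturation_factor):
    return wavelength_nm / (2 * na * math.sqrt(1 + saturation_factor))

d_conf = sted_resolution_nm(650, 1.4, 0)    # no depletion: ~230 nm, diffraction-limited
d_sted = sted_resolution_nm(650, 1.4, 100)  # strong depletion: ~23 nm
```

So a 100-fold saturation factor buys roughly a tenfold improvement—which is why the depletion beam has to be so intense.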
Fitting the PSF
The methods above (and below) use experimental techniques to circumvent the diffraction barrier, but we could also use crafty analysis to increase our ability to know where a nanoscale object is. The image of a point source on a CCD is called a point spread function (PSF), which is limited by diffraction to be no less than approximately half the wavelength of the light. Of course. But what if we simply fit that PSF with a Gaussian to locate the center of the PSF—and thus the location of the fluorophore? Sure! But the precision by which we can locate the center depends on the number of photons we get (as well as the CCD pixel size and other factors). Regardless, groups like the Selvin lab and many others have employed this analysis to localize single fluorophores to a few nanometers! This, of course, requires careful measurements and collecting many photons.
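To see how little machinery this requires, here’s a toy 1D version. I use a simple photon-weighted centroid as a stand-in for the full Gaussian fit (the pixel size and PSF width are made-up but typical values), and the estimate still lands within a couple nanometers of the true position:

```python
import math

# Minimal sketch of sub-pixel localization. A centroid stands in for the
# Gaussian fit, but the idea is the same: the emitter's position can be
# estimated far more precisely than the pixel size or the PSF width.

PIXEL_NM = 100.0  # assumed camera pixel size at the sample

def gaussian_psf(x_nm, center_nm, sigma_nm=130.0, amplitude=1000.0):
    """Idealized (noise-free) PSF sampled at position x_nm."""
    return amplitude * math.exp(-((x_nm - center_nm) ** 2) / (2 * sigma_nm ** 2))

def localize_1d(counts, pixel_nm=PIXEL_NM):
    """Photon-weighted centroid of a 1D pixel trace."""
    total = sum(counts)
    return sum(i * pixel_nm * c for i, c in enumerate(counts)) / total

# Simulate a molecule at 437 nm sampled on 100 nm pixels, then localize it:
true_center = 437.0
counts = [gaussian_psf(i * PIXEL_NM, true_center) for i in range(10)]
estimate = localize_1d(counts)  # within a couple nm of 437
```

With real shot noise and background, the precision scales roughly as the PSF width divided by the square root of the number of photons collected—hence the emphasis on bright, long-lasting fluorophores.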
PALM & STORM
What fitting a PSF is to localization, photo-activated localization microscopy (PALM) is to “resolution”—I use this term loosely to mean measuring the distance between objects, not true optical resolution. Eric Betzig and colleagues developed PALM; Xiaowei Zhuang at Harvard used a similar technique and calls it STORM: stochastic optical reconstruction microscopy. [UPDATE: Sam Hess at UMaine developed the technique simultaneously.] The basic premise of both techniques is to fill the imaging area with many dark fluorophores that can be photoactivated into a fluorescing state by a flash of light. Because photoactivation is stochastic, you can ensure that only a few, well-separated molecules “turn on,” then fit their PSFs to high precision (see above). If you wait until the few bright dots photobleach, then you can flash the photoactivating light again and fit the PSFs of different well-spaced objects. If you repeat this process many times, you can build up an image molecule-by-molecule; and because the molecules were localized at different times, the “resolution” of the final image can be much higher than that limited by diffraction.
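The whole acquisition logic fits in a few lines of toy Python—randomly activate a sparse subset, localize each spot, repeat. Every number here (emitter count, activation density, localization precision) is invented purely for illustration:

```python
import random

# Schematic of the PALM/STORM loop on fake data: activate a sparse random
# subset of emitters, "localize" each one to high precision, and
# accumulate positions over many cycles to build the final image.

random.seed(1)

# 200 hypothetical emitters scattered over a 1000 x 1000 nm field:
emitters = [(random.uniform(0, 1000), random.uniform(0, 1000)) for _ in range(200)]

def localize(x, y, precision_nm=20.0):
    """Stand-in for PSF fitting: true position plus localization error."""
    return (x + random.gauss(0, precision_nm), y + random.gauss(0, precision_nm))

localizations = []
for cycle in range(100):                    # activation/bleach cycles
    active = random.sample(emitters, 5)     # sparse activation each cycle
    localizations.extend(localize(x, y) for (x, y) in active)

# 'localizations' now holds 500 fitted positions; rendering them as a
# fine-binned 2D histogram gives the super-resolution image.
```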
The problem? To get these beautiful pictures, it takes ~12 hours of data collection. This is certainly not the technique to study dynamics (fitting the PSF is better for that).
Good single-molecule fluorophores must meet several criteria. To name a few, the molecules must have high quantum yields and large absorption cross-sections, must be photostable, and must have low yields into dark triplet states. (Ideally, a single-molecule fluorophore would have some inherent reporter function, but that’s a story for another day.) There are several popular long-lasting, bright fluorophores out there (e.g. rhodamines, terrylenes, Cy3); my research involves developing and characterizing new compounds for single-molecule cellular imaging, which means quantifying the criteria I listed above.
One measure of photostability is the total number of photons each molecule emits before it irreversibly photobleaches (Ntot,emitted). Because photobleaching is a Poisson process that depends on the number of times the molecule reaches the excited state, Ntot,emitted for a fluorophore should not depend on the laser intensity (unless you illuminate beyond the saturation intensity): at a higher intensity, the molecules will absorb and emit photons at a higher rate, but photobleach in less time, yielding the same number of photons as at a lower intensity. So the Ntot,emitted of a particular fluorophore in a particular environment tells you how bright or long-lasting the fluorophore is; the higher the number, the better the fluorophore (all else the same).
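A quick toy calculation (with made-up rate constants, not measured values) shows why the on-time changes with laser power but Ntot,emitted doesn’t:

```python
# Why N_tot,emitted shouldn't depend on laser power (below saturation):
# if each excitation carries a fixed bleaching probability p, the dye
# survives 1/p excitations on average no matter how fast you pump it.
# Both constants below are illustrative.

BLEACH_PROB = 1e-6   # assumed bleaching probability per excitation
PHI_F = 0.9          # assumed fluorescence quantum yield

def n_tot_emitted(excitation_rate_per_s):
    mean_excitations = 1.0 / BLEACH_PROB              # geometric survival
    time_to_bleach_s = mean_excitations / excitation_rate_per_s
    photons = mean_excitations * PHI_F
    return photons, time_to_bleach_s

low_power  = n_tot_emitted(1e4)  # slow pumping: long on-time
high_power = n_tot_emitted(1e6)  # 100x faster: 100x shorter on-time,
                                 # but the same total photons
```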
I’ve measured the Ntot,emitted for a fluorophore on the bulk and single-molecule level—experiments which require different analysis. For the single-molecule measurement, I record movies of a wide-field illumination of several molecules well spaced in a polymer film. Then I integrate the counts from each molecule and plot the distribution from over 100 individual molecules. The distributions look like this:
The distributions are exponential, characteristic of a Poisson process. From this distribution, I can determine the average Ntot,detected and convert this to Ntot,emitted using the detection efficiency of my setup (typically ~10% for epifluorescence).
The bulk measurement is a little less conceptually obvious. I use the same setup, but overdope the dye into the polymer film, so the intensity of the entire field of view versus time in the movie looks like this:
From the exponential fit, you can extract the average time before photobleaching for the molecules. This value, combined with the absorption cross section and the laser intensity, can give you the number of photons absorbed per molecule. Using the quantum yield, you can then calculate the Ntot,emitted.
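Here’s what that arithmetic looks like with illustrative numbers (not my actual measurements): convert the laser intensity to a photon flux, multiply by the cross section and the fitted bleaching time, then scale by the quantum yield:

```python
# Back-of-the-envelope bulk calculation of N_tot,emitted.
# All input values below are illustrative, not measured.

H = 6.626e-34   # Planck constant, J*s
C = 3.0e8       # speed of light, m/s

def n_tot_emitted(intensity_w_cm2, wavelength_nm, cross_section_cm2,
                  t_bleach_s, quantum_yield):
    photon_energy_j = H * C / (wavelength_nm * 1e-9)
    photon_flux = intensity_w_cm2 / photon_energy_j       # photons/cm^2/s
    absorbed = photon_flux * cross_section_cm2 * t_bleach_s
    return absorbed * quantum_yield

# e.g. 1 kW/cm^2 at 532 nm, sigma = 4e-16 cm^2, 10 s to bleach, phi = 0.9
# gives roughly 1e7 photons emitted per molecule:
n = n_tot_emitted(1000, 532, 4e-16, 10.0, 0.9)
```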
So what’s my whole point here? Given that the two experiments I describe use different measurements and calculate Ntot,emitted using different parameters, I was really happy to see that the calculated values of Ntot,emitted were very close from bulk and single-molecule experiments when I compared them directly. That’s all.
How do you turn your grayscale CCD to a two-color camera? Filters and fun! Here’s a diagram looking down at the dual-viewer setup on my table:
Figure 1. Diagram of setup (viewed from above)
M1 and M2 are mirrors, F1 and F2 are long- or short-pass filters, and DC1 and DC2 are identical dichroics. DC1 reflects short wavelengths and passes long wavelengths, the filters clean up the two paths, the mirrors bring the paths back together, and DC2 combines the two channels. If the channels are offset a little, then short and long wavelengths are split into two copies of the image onto the CCD.
Here’s a pic of the setup:
Figure 2. Picture of the setup with channels drawn
In other words, the dichroics split the green light off the output and move it to a different region of the CCD. You can recombine the two copies and add color using ImageJ, like this:
Figure 3. The right side is an overlay of the red and green channels in false color
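If you’d rather script the recombination than click through ImageJ, the overlay amounts to this (a pure-Python toy with a tiny made-up frame; ImageJ’s channel merging does the same thing on real images):

```python
# Minimal stand-in for the ImageJ recombination step: the left and right
# halves of the CCD frame are the two spectral channels, so put one half
# in the red plane and the other in the green plane of an RGB image.

def overlay_dual_view(frame):
    """frame: 2D list of grayscale rows; width must be even."""
    half = len(frame[0]) // 2
    rgb = []
    for row in frame:
        red, green = row[:half], row[half:]
        rgb.append([(r, g, 0) for r, g in zip(red, green)])
    return rgb

# A 2x4 toy frame: left half = one channel, right half = the other.
frame = [[10, 20, 30, 40],
         [50, 60, 70, 80]]
out = overlay_dual_view(frame)  # out[0][0] == (10, 30, 0)
```

In practice you’d also register the two channels (they’re never perfectly aligned pixel-for-pixel), which is why fiducial beads visible in both channels are handy.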