This is Part I in a (possibly) multi-part series on techniques that seek to image objects smaller than the diffraction limit of light. Part II is here. Suggestions for other interesting methods to summarize?
It is well known that there is a limit to how tightly you can focus light—approximately half the wavelength of the light you are using. But this is not a true barrier: the diffraction limit holds only in the far field, and localization precision can be increased with many photons and careful analysis (although two nearby objects still cannot be resolved). And like the sound barrier, the diffraction barrier is breakable! Let’s explore a few interesting approaches to imaging objects smaller than ~250 nm.
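As a back-of-the-envelope check on that ~250 nm figure, here is a minimal sketch of the Abbe limit, d = λ / (2·NA). The wavelength and numerical aperture values are illustrative assumptions, not from the post:

```python
def abbe_limit(wavelength_nm: float, numerical_aperture: float) -> float:
    """Abbe diffraction limit: smallest resolvable separation in the far field."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Green light through air (NA ~ 1) gives roughly the ~250 nm quoted above;
# a high-NA oil-immersion objective does somewhat better.
print(abbe_limit(500.0, 1.0))   # 250.0 nm
print(abbe_limit(510.0, 1.4))   # ~182 nm
```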
NSOM
Probably the most conceptually direct way to break the diffraction barrier is to use a light source and/or a detector that is itself nanometer-scale. Diffraction as we know it is truly a far-field effect: the light from an aperture is the Fourier transform of the aperture in the far field. [Here’s a neat applet to play with.] But in the near field, none of this necessarily holds. Near-field scanning optical microscopy (NSOM) forces light through the tiny tip of a pulled fiber—and the aperture can be on the order of tens of nanometers. If you bring this tip within nanometers of a molecule, the resolution is limited not by diffraction but by the size of the tip aperture (because only that one molecule sees the light coming out of the tip). You can then raster-scan the tip over the surface to create an image.
The main downside to NSOM is the limited number of photons you can force out of a tiny tip, and the minuscule collection efficiency (if you are trying to collect fluorescence in the near field).
Local Enhancement / ANSOM / Bowties
So what if, instead of forcing photons down a tiny tip, we could create a local bright spot within an otherwise diffraction-limited spot? That is the approach that ANSOM and related techniques take. ANSOM is apertureless NSOM: it uses a tip brought very close to a fluorophore to enhance the local electric field the fluorophore sees. Essentially, the ANSOM tip acts like a lightning rod, creating a hot spot of light.
The Moerner lab uses bowtie nanoantennas to greatly and reproducibly enhance the electric field in the nanometer gap between the tips of two gold triangles. Again, the point is to enhance a very small region of a diffraction-limited spot, thus shrinking the mismatch between light and nanoscale objects—and breaking the diffraction barrier.
STED
My most recent favorite is STED—stimulated emission depletion. Stefan Hell at the Max Planck Institute developed this method, which uses two laser pulses. The first pulse is a diffraction-limited spot tuned to the absorption wavelength, so it excites any fluorophores in that region; an immediate second pulse, red-shifted to the emission wavelength, stimulates emission back to the ground state, thus depleting the excited state of any fluorophores hit by this depletion pulse. The trick is that the depletion pulse passes through a phase modulator that makes it illuminate the sample in the shape of a donut, so the outer part of the diffraction-limited spot is depleted while the small center can still fluoresce. By saturating the depletion pulse, the center of the donut gets smaller and smaller, down to resolutions of tens of nanometers.
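The saturation trick can be captured in a simple scaling law: the effective spot size shrinks as the depletion intensity rises above the saturation intensity, roughly d ≈ λ / (2·NA·√(1 + I/I_sat)). A minimal sketch, with assumed (illustrative) wavelength and NA values:

```python
import math

def sted_resolution(wavelength_nm: float, na: float, saturation_factor: float) -> float:
    """Approximate STED spot size: d = lambda / (2 * NA * sqrt(1 + I/I_sat)).

    saturation_factor is the ratio I/I_sat of depletion intensity
    to the saturation intensity of the dye (values are illustrative).
    """
    return wavelength_nm / (2.0 * na * math.sqrt(1.0 + saturation_factor))

# With no depletion beam we recover the ordinary diffraction limit;
# cranking up the depletion intensity shrinks the effective spot.
for ratio in (0, 10, 100):
    print(f"I/I_sat = {ratio:3d}: d ~ {sted_resolution(750.0, 1.4, ratio):.0f} nm")
```

The key point the formula makes explicit: resolution improves without bound (in principle) as the depletion pulse saturates, which is why STED is not limited by diffraction.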
Like NSOM and standard confocal microscopy, this technique also requires a raster scan to build an image.
Fitting the PSF
The methods above (and below) use experimental techniques to circumvent the diffraction barrier, but we can also use crafty analysis to better determine where a nanoscale object is. The image of a point source on a CCD is called a point spread function (PSF), which diffraction limits to no less than approximately half the wavelength of the light. But what if we simply fit that PSF with a Gaussian to locate its center—and thus the location of the fluorophore? We can! The precision with which we can locate the center depends on the number of photons we collect (as well as the CCD pixel size and other factors). Regardless, groups like the Selvin lab and many others have employed this analysis to localize single fluorophores to a few nanometers! This, of course, requires careful measurements and collecting many photons.
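The photon-number dependence is easy to see in a toy simulation. Here each detected photon is blurred by a Gaussian PSF of assumed width σ = 100 nm, and the center estimate (for a Gaussian, simply the photon mean) improves as roughly σ/√N—so 10,000 photons localize a fluorophore to about a nanometer, even though the spot itself is diffraction-limited. All numbers are illustrative assumptions:

```python
import random
import statistics

random.seed(42)
sigma_nm = 100.0  # assumed PSF width, roughly half a visible wavelength

# A fluorophore sits at x = 0; each photon lands at a Gaussian-blurred position.
for n_photons in (100, 10_000):
    photons = [random.gauss(0.0, sigma_nm) for _ in range(n_photons)]
    center_estimate = statistics.fmean(photons)  # ML estimate of the center
    expected_precision = sigma_nm / n_photons ** 0.5
    print(f"N = {n_photons:>6}: center ~ {center_estimate:+6.1f} nm "
          f"(precision ~ {expected_precision:.1f} nm)")
```

Note that this sharpens *localization* of one emitter, not resolution: two overlapping PSFs still cannot be told apart this way, which is exactly the gap PALM and STORM (below) fill.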
PALM & STORM
What fitting a PSF is to localization, photoactivated localization microscopy (PALM) is to “resolution”—I use this term loosely to mean measuring the distance between objects, not true optical resolution. Eric Betzig and colleagues developed PALM; Xiaowei Zhuang at Harvard used a similar technique and calls it STORM: stochastic optical reconstruction microscopy. [UPDATE: Sam Hess at UMaine developed the technique simultaneously.] The basic premise of both techniques is to fill the imaging area with many dark fluorophores that can be photoactivated into a fluorescing state by a flash of light. Because photoactivation is stochastic, you can ensure that only a few, well-separated molecules “turn on,” then fit their PSFs to high precision (see above). Once the few bright dots photobleach, you can flash the photoactivating light again and fit the PSFs of a different set of well-spaced molecules. If you repeat this process many times, you can build up an image molecule by molecule; and because the molecules were localized at different times, the “resolution” of the final image can be much higher than that limited by diffraction.
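The activate–localize–bleach loop can be sketched in a few lines. This is a deliberately simplified 1-D toy (assumed molecule count, activation rate, and ~10 nm localization error; the PSF fit is abstracted away as the true position plus Gaussian noise):

```python
import random

random.seed(1)

# 200 dark fluorophores at unknown positions along a 1000 nm strip.
true_positions = [random.uniform(0.0, 1000.0) for _ in range(200)]
unbleached = set(range(len(true_positions)))
localizations = []

while unbleached:
    # Stochastic photoactivation: only a sparse subset turns on each cycle
    # (in reality the activation dose is tuned so they are well separated).
    active = random.sample(sorted(unbleached), k=min(5, len(unbleached)))
    for i in active:
        # "Fit the PSF": localize to ~10 nm, far below the diffraction limit.
        localizations.append(random.gauss(true_positions[i], 10.0))
        unbleached.discard(i)  # the molecule then photobleaches

print(f"Reconstructed image from {len(localizations)} localizations "
      f"over {200 // 5} activation cycles")
```

Each cycle images only a handful of molecules, which is why the full reconstruction takes so many frames—and why, as noted below, acquiring a complete image can take hours.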
The problem? Getting these beautiful pictures takes ~12 hours of data collection. This is certainly not the technique for studying dynamics (fitting the PSF is better for that).