breaking the diffraction limit of light, part 1
August 19, 2006 at 8:52 pm | sam | literature, single molecules, tutorial

This is Part 1 in a (possibly) multi-part series on techniques that seek to image objects smaller than the diffraction limit of light. Part II is here. Suggestions for other interesting methods to summarize?
It is well known that there is a limit to how tightly you can focus light: approximately half the wavelength of the light you are using. But this is not a true barrier, because the diffraction limit holds only in the far field, and localization precision can be increased with many photons and careful analysis (although two objects still cannot be resolved this way). Like the sound barrier, the diffraction barrier is breakable! Let’s explore a few interesting approaches to imaging objects smaller than ~250 nm.
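To put a number on it, the far-field limit is often written as the Abbe criterion, d = λ/(2·NA). A trivial sketch (the wavelength and numerical aperture below are just example values):

```python
def abbe_limit(wavelength_nm, numerical_aperture):
    """Abbe far-field resolution limit d = wavelength / (2 * NA), in nm."""
    return wavelength_nm / (2.0 * numerical_aperture)

# ~500 nm (green) light at NA = 1.0 gives the ~250 nm figure quoted above.
print(abbe_limit(500, 1.0))
```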
NSOM
Probably the most conceptually direct way to break the diffraction barrier is to use a light source and/or a detector that is itself nanometer-scale. Diffraction as we know it is truly a far-field effect: the light from an aperture is the Fourier transform of the aperture in the far field. [Here’s a neat applet to play with.] But in the near field, none of this necessarily holds. Near-field scanning optical microscopy (NSOM) forces light through the tiny tip of a pulled fiber, and the aperture can be on the order of tens of nanometers. If you bring this tip within nanometers of a molecule, the resolution is limited not by diffraction but by the size of the tip aperture (because only that one molecule sees the light coming out of the tip). You can then raster-scan the tip over the surface to create an image.
The main downside to NSOM is the limited number of photons you can force out of a tiny tip, and the minuscule collection efficiency (if you are trying to collect fluorescence in the near field).
Local Enhancement / ANSOM / Bowties
So what if, instead of forcing photons down a tiny tip, we could create a local bright spot in an otherwise diffraction-limited spot? That is the approach that ANSOM and other techniques take. ANSOM is apertureless NSOM: it uses a tip very close to a fluorophore to enhance the local electric field the fluorophore sees. Basically, the ANSOM tip acts like a lightning rod, creating a hot spot of light.
The Moerner lab uses bowtie nanoantennas to greatly and reproducibly enhance the electric field in the nanometer gap between the tips of two gold triangles. Again, the point is to enhance a very small region of a diffraction-limited spot, thus reducing the mismatch between light and nanoscale objects and breaking the diffraction barrier.
STED
My most recent favorite is STED: stimulated emission depletion. Stefan Hell at the Max Planck Institute developed this method, which uses two laser pulses. The first pulse is a diffraction-limited spot tuned to the absorption wavelength, so it excites any fluorophores in that region; an immediate second pulse, red-shifted to the emission wavelength, stimulates emission back down to the ground state, thus depleting the excited state of any fluorophores it hits. The trick is that the depletion pulse goes through a phase modulator that makes it illuminate the sample in the shape of a donut, so the outer part of the diffraction-limited spot is depleted while the small center can still fluoresce. By saturating the depletion pulse, the center of the donut gets smaller and smaller, down to resolutions of tens of nanometers.
Like NSOM and standard confocal microscopy, this technique also requires a raster scan to build up an image.
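A commonly quoted approximation for how saturation sharpens the STED spot is d ≈ λ/(2·NA·√(1 + I/Isat)). Here is a minimal numerical sketch; the wavelength, NA, and saturation factors are illustrative, not from any particular experiment:

```python
import math

def sted_resolution(wavelength_nm, na, saturation_factor):
    """Approximate STED resolution (nm) when the depletion-beam intensity
    is saturation_factor times the dye's saturation intensity."""
    return wavelength_nm / (2.0 * na * math.sqrt(1.0 + saturation_factor))

# saturation_factor = 0 recovers the ordinary diffraction limit;
# cranking up the depletion beam shrinks the donut hole.
for s in (0, 10, 100):
    print(s, round(sted_resolution(650, 1.4, s)))
```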
Fitting the PSF
The methods above (and below) use experimental techniques to circumvent the diffraction barrier, but we can also use crafty analysis to pin down where a nanoscale object is. The image of a point source on a CCD is called the point spread function (PSF), which diffraction limits to a width of no less than approximately half the wavelength of the light. But what if we simply fit that PSF with a Gaussian to locate its center, and thus the location of the fluorophore? We can! The precision with which we can locate the center depends on the number of photons we collect (as well as the CCD pixel size and other factors). Groups like the Selvin lab and many others have employed this analysis to localize single fluorophores to within a few nanometers! This, of course, requires careful measurements and many photons.
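As a toy illustration of how fitting beats the raw spot size, the sketch below simulates photons from a single emitter blurred into a spot with ~250 nm FWHM and fits the resulting histogram with a Gaussian. The emitter position, photon count, and 1-D geometry are all invented; real analyses fit 2-D CCD images:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

true_center = 100.0        # nm, hypothetical emitter position
psf_sigma = 250.0 / 2.355  # nm, sigma of a spot with ~250 nm FWHM
n_photons = 5000

# "Image" the emitter: each photon lands at a position blurred by the PSF.
photons = rng.normal(true_center, psf_sigma, n_photons)
counts, edges = np.histogram(photons, bins=50)
centers = 0.5 * (edges[:-1] + edges[1:])

def gauss(x, amp, mu, sigma):
    return amp * np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))

# Fit the measured spot; the center is pinned down to roughly
# psf_sigma / sqrt(n_photons), i.e. a couple of nanometers here,
# even though the spot itself is ~250 nm wide.
popt, _ = curve_fit(gauss, centers, counts,
                    p0=[counts.max(), centers.mean(), psf_sigma])
fitted_center = popt[1]
print(fitted_center)
```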
PALM & STORM
What fitting a PSF is to localization, photo-activated localization microscopy (PALM) is to “resolution” (I use this term loosely to mean measuring the distance between objects, not true optical resolution). Eric Betzig and colleagues developed PALM; Xiaowei Zhuang at Harvard developed a similar technique and calls it STORM: stochastic optical reconstruction microscopy. [UPDATE: Sam Hess at UMaine developed the technique simultaneously.] The basic premise of both techniques is to fill the imaging area with many dark fluorophores that can be photoactivated into a fluorescing state by a flash of light. Because photoactivation is stochastic, you can ensure that only a few, well-separated molecules “turn on,” then fit their PSFs to high precision (see above). Once those few bright dots photobleach, you flash the photoactivating light again and fit the PSFs of a different set of well-spaced molecules. Repeating this process many times builds up an image molecule by molecule; and because the molecules were localized at different times, the “resolution” of the final image can be much higher than the diffraction limit.
The problem? To get these beautiful pictures, it takes ~12 hours of data collection. This is certainly not the technique to study dynamics (fitting the PSF is better for that).
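The activate/localize/bleach bookkeeping behind PALM and STORM can be sketched in a few lines. This is a 1-D toy with invented numbers (200 molecules, 2% activation probability per frame, 2 nm localization error), not either group’s actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

n_molecules = 200
positions = rng.uniform(0.0, 1000.0, n_molecules)  # true positions (nm)
activated = np.zeros(n_molecules, dtype=bool)      # imaged-and-bleached flags
localizations = []                                 # fitted positions (nm)

frames = 0
while not activated.all():
    frames += 1
    # Stochastic activation: each still-dark molecule has a small chance
    # of turning on this frame, so the "on" molecules stay sparse.
    dark = np.flatnonzero(~activated)
    on = dark[rng.random(dark.size) < 0.02]
    for i in on:
        # "Fit the PSF": localize to ~2 nm, far below the spot width.
        localizations.append(positions[i] + rng.normal(0.0, 2.0))
        activated[i] = True  # molecule photobleaches after being imaged

# Many frames later, one localization per molecule: hence the long
# acquisition times mentioned above.
print(frames, len(localizations))
```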
___________________
Sources
NSOM: 1,2
Enhancement: 1,2
STED: 1,2
Fitting PSF: 1
PALM & STORM: 1,2
wikipedia.
Comment by william — August 20, 2006 #
[…] You can read Part I for techniques such as STED, PALM, STORM, etc. In Part 2, we’ll explore the wide-field structured-illumination approach to breaking the diffraction limit of light (BTDLOL). Structured-illumination (SI)—or patterned illumination—relies on both specific microscopy protocols and extensive software analysis post-exposure. But, because SI is a wide-field technique, it is usually able to capture images at a higher rate than confocal-based schemes like STED. (This is only a generalization, because SI isn’t actually super fast. I’m sure someone could make STED fast and SI slow!) The main concept of SI is to illuminate a sample with patterned light and increase the resolution by measuring the fringes in the Moiré pattern (from the interference of the illumination pattern and the sample). “Otherwise-unobservable sample information can be deduced from the fringes and computationally restored.”1 […]
Pingback by Everyday Scientist » breaking the diffraction limit of light, part 2 — November 22, 2006 #
You forgot about FPALM, Hess, S.T. et al. BJ, 2006.
Comment by LSC — November 30, 2006 #
Another thing…
“although two objects still cannot be resolved”
This is actually not true… Ram et al., “Beyond Rayleigh’s criterion: A resolution measure with application to single-molecule microscopy”, PNAS 2006 showed that two or more objects can be resolved to arbitrary resolution if you collect enough photons… (at least in theory). Just as you can fit a single diffraction-limited object with a PSF and get the center position, you can fit two or more objects with a collection of PSFs and get all the center positions. Obviously you would need a ton of photons, but it should be possible.
Comment by LSC — November 30, 2006 #
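For what it’s worth, the multi-emitter fitting LSC describes can be sketched numerically. This toy fits a noisy 1-D image of two emitters 150 nm apart (well below the ~250 nm limit) with a sum of two Gaussians, assuming the emitter count is known a priori; all numbers are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

sep = 150.0    # nm, true separation of the two emitters
sigma = 106.0  # nm, width of the diffraction-limited PSF
x = np.linspace(-600.0, 600.0, 121)  # nm, "pixel" coordinates

def two_psf(x, a1, mu1, a2, mu2, s):
    """Sum of two Gaussian PSFs centered at mu1 and mu2."""
    g = lambda a, mu: a * np.exp(-(x - mu) ** 2 / (2.0 * s ** 2))
    return g(a1, mu1) + g(a2, mu2)

# Simulated image: two emitters at +/- sep/2 plus photon (shot) noise.
ideal = two_psf(x, 1000.0, -sep / 2, 1000.0, sep / 2, sigma)
data = rng.poisson(ideal).astype(float)

# Fit with the number of emitters (two) fixed, as LSC notes is required.
popt, _ = curve_fit(two_psf, x, data, p0=[900.0, -70.0, 900.0, 70.0, 100.0])
print(sorted([popt[1], popt[3]]))  # recovered centers, should land near +/- 75 nm
```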
LSC, to fit “two or more” molecules, would you first need to know the number of objects in the diffraction-limited spot and fit the area to n Gaussians? Or were Ram et al. using photon statistics to resolve multiple molecules? I know that photon-statistics methods exist for sub-diffraction imaging, so maybe that will be part iii of the series! someday…
Comment by sam — December 1, 2006 #
oh, and by the way, i am honored to have your comments, LSC. sorry again for my crediting SHREC to the wrong team. man, you’re making me do more background research for my blog postings!
Comment by sam — December 1, 2006 #
ooo, this FPALM stuff is exciting. first of all, Hess is from U Maine (my home state!). second, a different “Hess” was a co-author on Betzig’s Science paper (are they brothers??). third, i’d like to know the history of the work between these labs. did they both develop PALM/FPALM independently? i feel bad for the folks with the biophys j paper that have the line “At the time of publication, a related work by E. Betzig et al. had recently appeared online in the journal Science” at the end of their paper.
Comment by sam — December 1, 2006 #
oh, did samuel hess work for watt webb?
Comment by sam — December 1, 2006 #
Well, you discovered the Achilles heel of the Ram et al. analysis. You do need to know a priori how many emitters there are to be resolved. Probably an analysis best used in SM experiments, but the important point is that Rayleigh’s criterion just doesn’t exist in the photon counting era.
I don’t know any juicy gossip behind PALM and FPALM. I suspect it was done separately. No relation between the Hesses, I don’t think. That would make for some awkward family gatherings… And yes, S.T. Hess was a member of the Webb group (grad and postdoc).
Comment by LSC — December 1, 2006 #
[…] By fitting the point-spread function of the QDs, they were able to localize each endosome to individual microtubules (imaging below the diffraction limit). This was very cool, and Cui showed tracks of endosomes switching microtubules and then changing speed or direction. Finally, she even did a mouse study, but this seemed tacked on. The real results were the imaging and single-particle tracking. | | […]
Pingback by Everyday Scientist » seminar: bianxiao cui — January 23, 2007 #
[…] gathered (with permission) from a chemistry blog’s review of sub-diffraction microscopy techniques Part I and Part II. For a review, see also reference […]
Pingback by Wikipedia search result — March 28, 2007 #
[…] on sub-diffraction imaging in his collaborator’s living room and brought the world PALM (and a Science paper to boot). C&E News had a story about PALM and Betzig’s […]
Pingback by ChemBark » Blog Archive » Announcing the 2006 Chemmy Awards — April 28, 2007 #
[…] was initially excited that we might be seeing some photoreaction (that would be helpful for some fancy imaging methods). But I figured that it was probably heating from the laser, causing the polymer to melt and […]
Pingback by Everyday Scientist » strange ring — August 13, 2007 #
[…] Hell had some spectacular images from his super-resolution work (STED and RESOLFT and other acronyms). His newer work on fast “PALMIRA“—stochastic […]
Pingback by Everyday Scientist » acs boston update 1: tuesday — August 21, 2007 #
[…] optical reconstruction microscopy (STORM) is Xiaowei’s cool super-resolution technique (Eric Betzig has a similar version called “PALM”). And I’ve already blogged about […]
Pingback by Everyday Scientist » EDSEL: coolest paper of 2008 (so far) — January 11, 2008 #
[…] photoactivatable fluorophores that emit many photon include super-resolution schemes (described here and […]
Pingback by Everyday Scientist » a photoactivatable fluorophore — June 24, 2008 #
[…] post here. August 19, […]
Pingback by Everyday Scientist » legacy post: breaking the diffraction limit of light, part 1 — August 19, 2008 #
[…] For instance, labeling Michael Phelps with Cy3 and Cy5 will allow us to locate his position in the pool with nanometer precision. […]
Pingback by Everyday Scientist » olympic FRET — August 20, 2008 #
[…] the development of new super-resolution imaging techniques, photoswitching probes have become very important. Here, I explain the differences in photophysics […]
Pingback by Everyday Scientist » photoactivation vs. photoswitching (part 1) — September 3, 2009 #
[…] photoactivation vs. photoswitching (part 2) October 9, 2009 at 3:43 pm | sam | tutorial […]
Pingback by Everyday Scientist » photoactivation vs. photoswitching (part 2) — October 9, 2009 #
I would like to calculate the particle size distribution for spherical objects (liposomes) with a widefield microscope. If the median diameter of the particles were, let’s say, 600 nm, 98% of my distribution would be within approximately 500 nm to 700 nm. I would need to measure diameters within that range to an accuracy of, let’s say, 10%. Does this translate to a required resolution of 50 nm? I’m thinking this can be done with some form of sub-diffraction technique; however, I’m out of my element. I had hopes for SI or PSF fitting, but it seems the SI technique needs other wavelength harmonics, and for PSF fitting see the Selvin note below. Also, I don’t know about the required source of illumination or the pixel density of the CCD, so I’m not so sure now. I don’t want to go near-field.
I had put this application to Dr. Paul Selvin, but he said his technique is good for measuring the centroid with nanometer accuracy but he doesn’t know how to measure the actual size.
Any help is most appreciated.
Charles
Comment by Charles — November 23, 2009 #
these super-resolution techniques would not be very helpful to you. they are good at resolving two small emitters that are close together; you have dispersed large objects that you want to measure precisely.
have you considered light scattering? that’s one of the best techniques for measuring particle size. alternatively, if you need to use the microscope, you could consider deconvolving the PSF from your image of the particles, but that may introduce more errors than it solves.
Comment by sam — November 23, 2009 #
I’ve used dynamic light scattering (DLS) to measure the size distribution of liposomes. Mine had an average diameter of 100 nm and it worked great. DLS is the easiest thing you can try.
Cheers
Fabian
Comment by Fabian — March 12, 2010 #