review of NanoLive microscope
July 12, 2016 at 1:54 pm | sam | hardware, review
We got a chance to try out a cool new label-free microscope from NanoLive: the 3D Cell Explorer. It works by holographic tomography: a laser beam rotates around the top of the sample while the instrument records many transmitted-light images, and software then reconstructs the image with phase and even 3D information. The small refractive-index differences between organelles or regions of the cell result in different retardation of the phase of the transmitted light; in the reconstruction, these areas can be false-colored to give beautiful renderings of cells … all without fluorescent labeling.
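For a sense of scale, here is a quick back-of-the-envelope sketch in Python (the numbers are illustrative assumptions, not Nanolive specs) of the phase delay that a small index difference produces over an organelle-sized path length:

```python
import numpy as np

# Illustrative numbers only (assumptions, not instrument specs):
wavelength_nm = 520.0   # probe wavelength
delta_n = 0.02          # index difference between organelle and cytoplasm
thickness_um = 1.0      # optical path through the organelle

opd_nm = delta_n * thickness_um * 1e3                 # optical path difference
phase_shift_rad = 2 * np.pi * opd_nm / wavelength_nm

print(f"OPD = {opd_nm:.0f} nm -> phase shift = {phase_shift_rad:.2f} rad")
# ~0.24 rad here: small, but easily measurable interferometrically, which is
# what the tomographic reconstruction exploits to assign false colors.
```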
We used the Nanolive to watch Naegleria amoeba crawling across a glass surface. These cells move orders of magnitude faster than fibroblasts (20 um/min), so imaging their movement is a serious challenge for many high-resolution microscopes.
The above video is false-colored for different index ranges. It is super cool to see the pseudopods in 3D, and possibly even distinguish the plasma membrane from the actin cortex. The demo went well and it took only about 15 min to take the microscope out of the box and start imaging.
When we demoed the beta version a year or so ago, it had trouble imaging crawling amoebae: the background subtraction was manual and flaky, and the frame rate was too slow. But Nanolive let us try it again after the official release of the product, and things worked much better. The background subtraction is now automated and robust, and the frame rate was high enough to watch these fast-crawling cells.
I think this microscope would be a great choice for researchers studying organisms that are not genetically tractable or otherwise cannot be fluorescently labeled, or for anyone studying organelles that show up with a distinct refractive index. (Naegleria ended up having relatively low-contrast organelles compared to adherent mammalian cells, for instance.)
Pro:
- affordable (about the cost of an EMCCD camera)
- label-free
- low intensity (no phototoxicity or photobleaching)
- simple and user-friendly: easier than setting up DIC in Koehler illumination :)
- small footprint and easy setup
- software is free
- potential for beautiful and amazing data
Con:
- not versatile: it does one thing (but does that one thing well)
- limited to samples with wide top, like a 35 mm dish (not 96-well plates), because the laser beam comes in at an angle
- 3D information on top and bottom of cells is less impressive
Go check it out!
LED illumination review
March 3, 2015 at 4:06 pm | sam | hardware, review
LED illumination is awesome for epifluorescence. No mechanical shutters, no changing mercury lamps every 200 hours, no hot lamphouses, no worries about letting it cool down before turning the lamp back on, less wasted electricity, immediately ready to use after turning it on, etc.
We have a Lumencor SpectraX on our Nikon TE2000 scope and we love it. It contains multiple LEDs that are independently triggerable. For high-speed imaging, we bought one new Chroma quad-band dichroic and emission filter set, as well as 4 separate single-band emission filters for our emission filter wheel (although this latter set is not absolutely necessary).
The amazing thing is to be able to run color sequences at the frame rate of the camera (because the SpectraX accepts TTL triggering of each line independently). It is beautiful to see the rainbow of light flashing out of the scope at 20+ frames per second!
https://micro-manager.org/wiki/Hardware-based_synchronization
We use an ESio TTL* box controlled by Micro-Manager and it works great. But you could use an Arduino and some simple wiring with a DE15 breakout board to accomplish the same thing more cheaply.
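If you are scripting the acquisition, here is a minimal sketch of a software-sequenced multichannel run using pycromanager (Micro-Manager’s Python bridge); the channel group and preset names are placeholders for whatever your config defines. True hardware TTL sequencing (the ESio/Arduino route above) moves the channel switching out of this loop entirely, which is what gets you camera-rate color cycling.

```python
from pycromanager import Acquisition, multi_d_acquisition_events

# The "Channel" group and preset names below are assumptions; substitute the
# groups/presets defined in your own Micro-Manager configuration.
events = multi_d_acquisition_events(
    num_time_points=100,
    time_interval_s=0,                       # run as fast as possible
    channel_group="Channel",
    channels=["DAPI", "GFP", "RFP", "Cy5"],  # one LED line per preset
    channel_exposures_ms=[20, 20, 20, 20],
)

with Acquisition(directory="/tmp/demo", name="led_cycle") as acq:
    acq.acquire(events)

# Software sequencing like this is limited by USB/serial latency; with TTL
# triggering, the camera's exposure-out signal clocks the LED lines directly.
```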
We haven’t run into any issues with brightness: the SpectraX is bright enough for all our cell imaging experiments. Typically, we run it at 20% power. That said, I’m aware that the very bright peaks in an arc-lamp spectrum (e.g. the UV, 435, and 546 nm lines) aren’t there in the LED spectra. So for FRAP or something, you may not be able to bleach as fast.
And, of course, a fancy illuminator like the Spectra X is not cheap. But for run-of-the-mill epi imaging, white-light sources like the Lumencor Sola might be a good option. Another downside is that the fans on the Spectra X are audible, but not annoying. Despite that minor issue and the cost, I highly recommend LED illumination (and the Spectra X, specifically).
I recommend you demo a few LED sources from a few companies (e.g. ScopeLED, Lumencor, Sutter, etc.) and make sure it will fit your needs.
____________
* Make sure your camera supports TTL triggering of an external shutter.
GATTAquant microscopy standards
February 13, 2015 at 5:27 pm | sam | review, single molecules
Jürgen Schmied from GATTAquant came by the other day and let me play around with some of their cool DNA origami fluorescence standards.
The PAINT sample was really cool. It has short oligos on the DNA origami and complementary strands labeled with dyes in solution. The binding/bleaching kinetics are such that each hotspot blinks many times during an acquisition. After a quick 10,000-frame acquisition over 3 min, we collected a dataset from which we could easily reconstruct a super-resolution image. We used ThunderSTORM to fit the data and correct for drift. Without any other corrections, we could easily resolve the three PAINT hotspots on each DNA origami:
But my favorite sample was actually the confocal test slide. It had two sets of dyes about 350 nm apart permanently labeled on each DNA origami.
This let me test the resolution and image quality using different configurations on our Diskovery confocal/TIRF system.
Each spot contained only about 4-8 dyes. So it was a much greater challenge to our microscope than TetraSpeck beads.
I highly recommend GATTAquant test samples. Very fun.
UPDATE: Jürgen ran my data through GATTAquant’s analysis software and sent me the results below.
cleaning-chemicals industry takes on the final frontier…
May 5, 2010 at 10:56 am | sam | science and the public, stupid technology
…invisible stains. Finally, there are cleaners on the market that can clean up stains that are undetectable by any of the human senses.
Have you seen that commercial for a toilet-bowl cleaner? They say, “Bleach only hides stains.” Then they pour some purple stuff on the ceramic to reveal the hidden “stains.”
Um, what? Doesn’t hiding a stain mean, by definition, that you’ve gotten rid of it? I think it’s nuts that the cleaning-chemicals industry is inventing the problem of invisible stains. I have a solution to this problem: don’t pour that purple stuff in your toilet!
And then there are those “chemical residues” that watch you shower. The invisible, undetectable residues. How do we know that invisible residues are there at all? The TV tells us that they are there, of course. But wait, how do we know that the alternative cleaner doesn’t leave an invisible residue?
All this reminds me of the Emperor’s New Clothes. Or, more aptly, that old joke about the elephant repellent (“You don’t see any elephants around here, do you? It must be working, then!”). People need to ignore these silly commercials and relax a little bit about cleaning. Just scrub the toilet bowl every week or two with a brush, then wipe the seat etc. with diluted bleach and be done with it! Jeez!
photoactivation vs. photoswitching (part 2)
October 9, 2009 at 3:43 pm | sam | tutorial
In part 1, I discussed the differences between the definitions of photoactivation and photoswitching. Here, I want to talk about some of the applications of the various classes of photochromic dyes.
Photochromism typically is used for modulating the absorbance or color of a solution or material. The most obvious example is sunglasses that tint in the sunlight. Photochromism can even be applied to batteries!
Photoactivation is ideal for situations where you want an emitter to start in a dark form, be controllably activated to a bright form, emit many photons, then disappear irreversibly. An example application is tracking single molecules when the concentration of molecules is high: you want the probes to start dark to maintain a low background, and you want the probe to turn on irreversibly, lest the emitter switch off quickly and you lose track of it.
Another perfect example is super-resolution imaging (via photoactivation and localization) of relatively static structures: you want to turn on a sparse subset of emitters, localize them once with high precision (which requires many photons), then have them disappear permanently. Using a photoswitching probe for imaging static structures isn’t ideal, because relocalizing the same spot over and over wastes time and complicates the subsequent image analysis.
Photoswitching is ideal if you want to relocalize over and over, such as if you are performing super-resolution imaging on a moving or dynamic structure. Photoswitching can also be applied to speckle microscopy, which follows fluctuations in dynamic filaments. Others have used photoswitching to modulate a probe, lock-in to that modulating signal, and filter out the unmodulated background from nonswitching fluorophores (see OLID, figure below).
Photoswitching is also necessary for super-resolution techniques such as STED and structured illumination, which require many cycles of bright and dark. In fact, both of these techniques require photoswitching that is very robust and lasts many, many cycles.
Side Note Regarding Photons: It is important to note that a compound being switchable does not necessarily mean that it will ultimately emit more photons. In many cases, cycling emission simply spreads the photons into many bins: the total number of photons emitted over all the cycles sums to the same number as if the fluorophore had been on continuously. For instance, if the non-switching form of the fluorophore usually emits about a million photons (for the best organic dyes) and you cycle it with each cycle emitting 10,000 photons, the photoswitch will generally last about 100 cycles on average before photobleaching.
On the other hand, it is conceivable that a photoswitch could resist photobleaching better than its continuous analog. For instance, if the fluorophore bleaches via producing singlet oxygen that builds up around the dye—eventually colliding and reacting with the compound—then switching to an off state might offer some time for the built-up concentration of reactive oxygen to dissipate.
On the third hand, if the switch is capable of bleaching from the dark state, then switching may ultimately reduce the total photon count (e.g. the imaging light may pump the dark form into highly excited triplet or other states, leading the compound to basically explode).
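To make the photon bookkeeping in that side note concrete, here is a tiny Monte Carlo sketch (my numbers are purely illustrative): photobleaching is modeled as a fixed probability per emitted photon, so chopping emission into cycles just redistributes the photons without changing the expected total.

```python
import numpy as np

rng = np.random.default_rng(0)
p_bleach = 1e-6            # assumed probability of bleaching per emitted photon
photons_per_cycle = 10_000

# The total photons before bleaching is geometric in the photon number, so the
# expectation is ~1/p_bleach = 1e6 whether or not the emission is cycled.
n_molecules = 5_000
total_photons = rng.geometric(p_bleach, size=n_molecules)
cycles_survived = total_photons / photons_per_cycle

print(f"mean total photons: {total_photons.mean():.2e}")              # ~1e6
print(f"mean cycles before bleaching: {cycles_survived.mean():.0f}")  # ~100
```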
photoactivation vs. photoswitching (part 1)
September 3, 2009 at 10:18 am | sam | tutorial
With the development of new super-resolution imaging techniques, photoswitching probes have become very important. Here, I explain the differences in photophysics and applications of photoactivating and photoswitching fluorophores. (These are not hard-and-fast rules, just my definitions of the terms.)
In general, photochromism refers to the reversible light-induced transformation of a compound between two forms that absorb different energies or amounts of light. By switching between the forms, the color of a solution of the chromophore changes. One or both forms may be fluorescent, but it is possible that neither form fluoresces significantly. Frisbees or beads or shirts that change color in sunlight probably contain some photochromic molecules that are switched using UV irradiation. Common photochromic compounds include azobenzenes (structure above), diarylethenes, and spiropyrans.
Photoswitching refers to the reversible light-induced switching of fluorescence (color or intensity), and is often a type of photochromism. (In general, there is really no reason that the term must refer only to fluorescence, but in the context of imaging it is more helpful.) For instance, photoswitching rhodamines cycle between nonfluorescent and fluorescent forms by closing and opening a lactam ring. When the ring is closed (the thermally stable form), the absorbance is in the UV and the compound is nonfluorescent. Upon irradiation with UV light, the lactam can open; this forms a metastable fluorescent compound that absorbs in the green. Eventually, the lactam ring re-forms (either via visible-light irradiation or thermally), and the cycle can repeat. At some point, a photochemical reaction will change the compound (e.g. photo-oxidation), and the cycling will end (see cycling below). This is called photobleaching. Another example of a cycling photoswitch is Cy5 (here and here), which can be attacked by a thiol and rendered nonfluorescent; by irradiating with green light, the thiol pops back off and the Cy5 becomes fluorescent again. Here is a plot of the cycling fluorescence of Cy5:
Photoactivation refers to the irreversible light-induced conversion of a fluorophore from a dark form to a fluorescent form (or from one emission color to a significantly shifted color, e.g. blue to red). Typically, a chemical reaction transforms the compound, making the photoactivation irreversible for all practical purposes. For this reason, a photoactivatable fluorophore is sometimes referred to as being photocaged. There are several photoactivatable fluorescent proteins, such as PA-GFP. Another example close to my heart is the azido DCDHF fluorophore: upon irradiation with blue light, the azide photoreacts to an amine (with the loss of N2), converting a nonfluorescent compound into a bright emitter.
In the next installment (part 2), I will describe the situations where photoswitching is preferred over photoactivation and vice versa.
on-time is not enough!
March 20, 2009 at 9:05 am | sam | single molecules, tutorial
I’ve seen a few papers recently that attempt to characterize the effect oxygen scavengers or triplet quenchers have on the photostability of single molecules. The main parameters they measure and compare across systems and fluorophores are the “on time”—the average time single molecules are fluorescent before they photobleach—and the “off time”—the time spent in transient dark states.
Here’s my question: What about the emission rate? It’s not enough to report and compare only times. Photostability (either regarding bleaching or blinking) is related to a probability that a molecule will enter a permanent (or transient) dark state for every photon absorbed. The “on time” is only relevant when you also know the rate of photon absorption.
Moreover, very fast excursions into transient dark states (triplet lifetimes are typically µs or ms—much faster than the camera integration time) will appear as a dimming of the fluorescence, a decrease in the photon emission rate. When molecular oxygen (an effective triplet quencher) is removed from solution, the triplet lifetimes increase and fluorophores often become dimmer. Thus, removing oxygen might make your dye last a longer time, but at the expense of brightness. The same tradeoff could be achieved more effectively by just turning down the excitation intensity (with lower background, too)!
So it makes me want to pull my hair out when I read long, detailed articles in prestigious journals that fail to mention even once the rates of photon absorption or emission when they report “on times” as photostability parameters.
The total number of photons is a more fundamental measure of photostability, but it is still not enough for the full picture (it doesn’t report on blinkiness, for instance). Total photons plus “on times” is sufficient; or “on times” and emission rates; or enough variables to define the frikkin’ system.
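To put numbers on the argument, here is a minimal sketch of converting an “on time” plus excitation conditions into a per-absorbed-photon bleaching probability; the values are illustrative assumptions, not from any of those papers.

```python
# Illustrative single-molecule conditions (all assumed):
intensity_W_cm2 = 1e3      # excitation intensity at the sample
wavelength_nm = 532.0
sigma_cm2 = 4e-16          # absorption cross-section (roughly rhodamine-like)
t_on_s = 10.0              # measured average "on time"

h, c = 6.626e-34, 3.0e8
photon_energy_J = h * c / (wavelength_nm * 1e-9)

k_abs = sigma_cm2 * intensity_W_cm2 / photon_energy_J  # photons absorbed per second
phi_bleach = 1.0 / (k_abs * t_on_s)                    # bleach probability per absorbed photon

print(f"absorption rate ~ {k_abs:.2e} photons/s")
print(f"bleaching quantum yield ~ {phi_bleach:.1e}")
# The same on time at half the intensity implies a bleaching yield twice as
# large, which is why an on time alone says little about photostability.
```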
UPDATE: Here’s a good paper that tests specific hypotheses about number and duration of off times. On the Mechanism of Trolox as Antiblinking and Antibleaching Reagent. JACS 2009, ASAP.
a vial and a laser
April 3, 2008 at 12:49 pm | sam | everyday science
I always like seeing a laser pass through a vial of fluorophores.
This is the 488-nm line of an Ar-ion laser passing through a DCDHF fluorophore in water.
fun to look in a microscope
October 26, 2007 at 3:03 am | sam | cool results, everyday science
Sometimes it’s just entertaining to look into a microscope. I like this pic I took the other day:
It looks like a tiny sun. But it’s just a fluorophore solution that has dried up and left some large aggregates (which emit at a longer wavelength—the green is the normal emission—maybe Excimer will like this, at CBC’s new website).
You can also see a bleached region in the middle of the green from the peak of the laser excitation region, and a swath of bleached dye where I moved the stage up and down before the picture. You might even make out some single molecules in the center. Quite impressive for a simple digital camera!
This was actually before I aligned the beam for real experiments, so all the ring patterns are actually interference due to poor alignment. I just thought that the misalignment made it pretty, so I took a color picture.
pretty bubbles
September 10, 2007 at 4:54 pm | sam | cool results, everyday science
No, not these.
These:
Another, and the setup:
It’s a pretty simple experiment: bleaching a vial of dye to determine what the fluorophore photodegrades into. I bubbled some air through to make sure I got the reaction with oxygen.
acs boston update 2.5: posters
August 23, 2007 at 12:53 pm | sam | conferences, science community, seminars
I saw many cool posters at the PHYS poster session Wednesday evening. Here are my favorites:
- Charles Schroeder did his graduate work with Steve Chu and he’s finishing his postdoc with Sunney Xie; I did a summer REU with Chuck when he was at Stanford. For his work in the Xie lab, he used a custom promoter to incorporate fluorescein UTPs and anti-fluorescein QDs to study RNA and DNA polymerases. Looking at a DNAP/RNAP system on a chain extended from a magnetic bead using flow, he was able to track changes in the position of the polymerases and the chain length (using a technique similar to what Antoine van Oijen did—I summarized his stuff before). His conclusion was that DNAP moves past the position of RNAP (either over it or by pushing it along the chain).
- Stirling Churchman is just finishing up in the Spudich lab. In a collaboration with Henrik Flyvbjerg, a theorist, she has used a fitting method better than a Gaussian for high-precision localization: instead of Gaussians, they fit the PSF with the theoretical emission pattern of a dipole emitter (taking into account the NA of the objective and the higher rate of emission into the medium with the higher index of refraction). With this more accurate fitting—and an MLE algorithm—they were able to get the same localization precision using only half the photons! They’re writing up a paper now.
- Volkan Ediz is a grad student in David Yaron’s lab. He uses QM calculations to predict photophysical properties of a class of asymmetric cyanine dyes, predicting barriers to twisting into dark states (see their JACS paper here). Their work is related to some work that I and a previous grad student in the Moerner lab did with a different fluorophore, but their calculations are more hard-core. I met Volkan and David last year at ACS San Francisco; they’re really nice!
- Klaus Schaper is at Heinrich Heine University Düsseldorf and has been synthesizing rhodamine dyes with triplet quenchers covalently attached. The concept is that the quencher will make the dye brighter by both reducing the triplet lifetime and reducing the probability of photobleaching (from excitation out of triplet states or from reactive oxygen produced by interactions with the dye triplet state). He used azobenzene on a sulforhodamine B, and found that he could get a 2.5-fold increase in the maximum emission rate (before saturation and bleaching) in an FCS experiment!
- Franziska Luschtinetz, from the University of Potsdam, looked at the changes in the photophysics of biotinylated dyes (called DY-635B and DY-647B) upon binding to streptavidin. She found various effects on the absorption and fluorescence emission spectra upon binding, including dye rigidization and H-type excimer behavior. Finally, she also did some time-dependent fluorescence anisotropy and FCS measurements with these dyes. I didn’t actually get to talk to Franziska, but she helpfully provided printouts of her poster!
One more day of ACS coming up!
breaking the diffraction limit of light, part 1
August 19, 2006 at 8:52 pm | sam | literature, single molecules, tutorial
This is Part 1 in a (possibly) multi-part series on techniques that seek to image objects smaller than the diffraction limit of light. Part II is here. Suggestions for other interesting methods to summarize?
It is well known that there is a limit to how tightly you can focus light—approximately half the wavelength of the light you are using. But this is not a true barrier: the diffraction limit only holds in the far field, and localization precision can be increased with many photons and careful analysis (although two objects still cannot be resolved). Like the sound barrier, the diffraction barrier is breakable! Let’s explore a few interesting approaches to imaging objects smaller than ~250 nm.
NSOM
Probably the most conceptually straightforward way to break the diffraction barrier is to use a light source and/or a detector that is itself nanometer-scale. Diffraction as we know it is truly a far-field effect: the light from an aperture is the Fourier transform of the aperture in the far-field. [Here’s a neat applet to play with.] But in the near-field, all of this isn’t necessarily the case. Near-field scanning optical microscopy (NSOM) forces light through the tiny tip of a pulled fiber—and the aperture can be on the order of tens of nanometers. If you bring this tip within nanometers of a molecule, the resolution is not limited by diffraction but by the size of the tip aperture (because only that one molecule will see the light coming out of the tip). Then you can raster-scan the tip over the surface to create an image.
The main downside to NSOM is the limited number of photons you can force out of a tiny tip, and the minuscule collection efficiency (if you are trying to collect fluorescence in the near-field).
Local Enhancement / ANSOM / Bowties
So what if, instead of forcing photons down a tiny tip, we could create a local bright spot in an otherwise diffraction-limited spot? That is the approach that ANSOM and other techniques take. ANSOM is apertureless NSOM: it uses a tip very close to a fluorophore to enhance the local electric field the fluorophore sees. Basically, the ANSOM tip is like a lightning rod which creates a hot spot of light.
The Moerner lab uses bowtie nanoantennas to greatly and reproducibly enhance the electric field in the nanometer gap between the tips of two gold triangles. Again, the point is to enhance a very small region of a diffraction-limited spot, thus overcoming the mismatch between the wavelength of light and nanoscale objects—and breaking the diffraction barrier.
STED
My most recent favorite is STED—stimulated emission depletion. Stefan Hell at the Max Planck Institute developed this method, which uses two laser pulses. The first pulse is a diffraction-limited spot tuned to the absorption wavelength, so it excites any fluorophores in that region; an immediate second pulse is red-shifted to the emission wavelength and stimulates emission back to the ground state, thus depleting the excited state of any fluorophores hit by this depletion pulse. The trick is that the depletion pulse goes through a phase modulator that makes the pulse illuminate the sample in the shape of a donut, so the outer part of the diffraction-limited spot is depleted and the small center can still fluoresce. By saturating the depletion pulse, the center of the donut gets smaller and smaller, until resolutions of tens of nanometers are possible.
This technique also requires a raster scan, like NSOM and standard confocal.
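As a rough illustration of that saturation trick, here is a sketch using the commonly quoted square-root scaling for the STED spot size (the numbers are my own assumptions):

```python
import numpy as np

# Commonly quoted scaling: d ~ lambda / (2 NA sqrt(1 + I/I_sat)).
# The wavelength and NA below are illustrative, not from Hell's papers.
wavelength_nm = 640.0
NA = 1.4
d_limit = wavelength_nm / (2 * NA)   # ~229 nm diffraction-limited spot

for saturation_factor in [0, 10, 100, 1000]:
    d_sted = d_limit / np.sqrt(1 + saturation_factor)
    print(f"I/I_sat = {saturation_factor:4d} -> d ~ {d_sted:5.1f} nm")
# Cranking up the depletion intensity shrinks the non-depleted center of the
# donut, pushing the effective resolution down to tens of nanometers.
```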
Fitting the PSF
The methods above (and below) use experimental techniques to circumvent the diffraction barrier, but we could also use crafty analysis to increase our ability to know where a nanoscale object is. The image of a point source on a CCD is called a point spread function (PSF), which is limited by diffraction to be no less than approximately half the wavelength of the light. Of course. But what if we simply fit that PSF with a Gaussian to locate the center of the PSF—and thus the location of the fluorophore? Sure! But the precision by which we can locate the center depends on the number of photons we get (as well as the CCD pixel size and other factors). Regardless, groups like the Selvin lab and many others have employed this analysis to localize single fluorophores to a few nanometers! This, of course, requires careful measurements and collecting many photons.
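Here is a minimal sketch of that idea: simulate a shot-noise-limited spot, then fit a 2D Gaussian to find its center (all parameters are made up for illustration).

```python
import numpy as np
from scipy.optimize import curve_fit

# Simulate a diffraction-limited spot: sigma ~ 1.2 px, ~2000 detected photons.
rng = np.random.default_rng(1)
true_x, true_y, sigma, n_photons = 7.3, 6.8, 1.2, 2000
yy, xx = np.mgrid[0:15, 0:15]
psf = np.exp(-((xx - true_x) ** 2 + (yy - true_y) ** 2) / (2 * sigma ** 2))
expected = n_photons * psf / psf.sum() + 5.0   # plus a constant background
image = rng.poisson(expected)                  # shot noise

def gauss2d(coords, amp, x0, y0, s, bg):
    x, y = coords
    return (amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * s ** 2)) + bg).ravel()

p0 = [image.max(), 7.0, 7.0, 1.5, np.median(image)]
popt, _ = curve_fit(gauss2d, (xx, yy), image.ravel(), p0=p0)

print(f"fit center: ({popt[1]:.2f}, {popt[2]:.2f}) px vs true ({true_x}, {true_y})")
# With ~2000 photons the fitted center lands within a few hundredths of a
# pixel of the true position, i.e. a few nanometers for typical ~100 nm pixels.
```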
PALM & STORM
What fitting a PSF is to localization, photoactivated localization microscopy (PALM) is to “resolution”—I use this term loosely to mean measuring the distance between objects, not true optical resolution. Eric Betzig and colleagues developed PALM; Xiaowei Zhuang at Harvard used a similar technique and calls it STORM: stochastic optical reconstruction microscopy. [UPDATE: Sam Hess at UMaine developed the technique simultaneously.] The basic premise of both techniques is to fill the imaging area with many dark fluorophores that can be photoactivated into a fluorescing state by a flash of light. Because photoactivation is stochastic, you can ensure that only a few, well-separated molecules “turn on,” then fit their PSFs to high precision (see above). Once those few bright dots photobleach, you can flash the photoactivating light again and fit the PSFs of a different set of well-spaced objects. If you repeat this process many times, you can build up an image molecule by molecule; and because the molecules were localized at different times, the “resolution” of the final image can be much higher than that limited by diffraction.
The problem? To get these beautiful pictures, it takes ~12 hours of data collection. This is certainly not the technique to study dynamics (fitting the PSF is better for that).
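Once you have the list of localizations, rendering the final image is almost trivial; here is a sketch (with made-up localizations) that simply histograms the fitted positions on a fine grid.

```python
import numpy as np

# Pretend these positions (in nm) came from fitting sparsely activated
# molecules over many activation/bleach rounds (made-up data for illustration).
rng = np.random.default_rng(2)
truth = rng.uniform(0, 2000, size=(50, 2))                               # 50 labeled sites
locs = np.repeat(truth, 40, axis=0) + rng.normal(0, 15, size=(2000, 2))  # ~15 nm precision

# Render at 20 nm pixels, far finer than the ~250 nm diffraction limit.
edges = np.arange(0, 2001, 20)
image, _, _ = np.histogram2d(locs[:, 0], locs[:, 1], bins=[edges, edges])
print(image.shape, int(image.sum()))   # 100x100 pixels built from ~2000 localizations
```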
measuring total photons emitted
August 15, 2006 at 10:09 am | sam | everyday science, single molecules, tutorial
Good single-molecule fluorophores must meet several criteria. To name a few, the molecules must have high quantum yields, must be photostable, and must have a large absorption cross-section and a low yield into dark triplet states. (Ideally, a single-molecule fluorophore would have some inherent reporter function, but that’s a story for another day.) There are several popular long-lasting, bright fluorophores out there (e.g. rhodamines, terrylenes, Cy3); my research involves developing and characterizing new compounds for single-molecule cellular imaging, which means quantifying the criteria I listed above.
One measure of photostability is the total number of photons each molecule emits before it irreversibly photobleaches (Ntot,emitted). Because photobleaching is a Poisson process that depends on the number of times the molecule reaches the excited state, Ntot,emitted for a fluorophore should not depend on the laser intensity (unless you illuminate beyond the saturation intensity): at a higher intensity, the molecules will absorb and emit photons at a higher rate, but photobleach in less time, yielding the same number of photons as at a lower intensity. So the Ntot,emitted of a particular fluorophore in a particular environment tells you how bright or long-lasting the fluorophore is; the higher the number, the better the fluorophore (all else the same).
I’ve measured the Ntot,emitted for a fluorophore at both the bulk and single-molecule level—experiments which require different analyses. For the single-molecule measurement, I record wide-field movies of several molecules well spaced in a polymer film. Then I integrate the counts from each molecule and plot the distribution from over 100 individual molecules. The distributions look like this:
The distributions are exponential, characteristic of a Poisson process. From this distribution, I can determine the average Ntot,detected and convert this to Ntot,emitted using the detection efficiency of my setup (typically ~10% for epifluorescence).
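As a minimal sketch of that analysis step (with simulated numbers, not my real data): because the maximum-likelihood estimate of an exponential mean is just the sample mean, the conversion is a one-liner.

```python
import numpy as np

rng = np.random.default_rng(3)

# Integrated counts for ~150 single molecules, simulated as exponential with
# an assumed mean of 1e5 detected photons.
detected = rng.exponential(scale=1e5, size=150)

detection_efficiency = 0.10      # ~10% for our epifluorescence setup
N_detected = detected.mean()     # MLE of the exponential mean
N_emitted = N_detected / detection_efficiency

print(f"<Ntot,detected> ~ {N_detected:.2e}")
print(f"<Ntot,emitted>  ~ {N_emitted:.2e}")
```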
The bulk measurement is a little less conceptually obvious. I use the same setup, but overdope the dye into the polymer film, so the intensity of the entire field of view versus time in the movie looks like this:
From the exponential fit, you can extract the average time before photobleaching for the molecules. This value, combined with the absorption cross section and the laser intensity, can give you the number of photons absorbed per molecule. Using the quantum yield, you can then calculate the Ntot,emitted.
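Here is the bulk version as a sketch, again with assumed values rather than my actual measurements: the bleaching time from the exponential fit, the cross-section, and the intensity give the absorption rate, and the fluorescence quantum yield converts absorbed photons to emitted ones.

```python
# Assumed illustrative values, not the numbers from the actual experiment.
h, c = 6.626e-34, 3.0e8
wavelength_nm = 488.0
intensity_W_cm2 = 100.0
sigma_cm2 = 3e-16        # absorption cross-section
tau_bleach_s = 50.0      # average time before photobleaching, from the fit
phi_fl = 0.9             # fluorescence quantum yield

photon_energy_J = h * c / (wavelength_nm * 1e-9)
k_abs = sigma_cm2 * intensity_W_cm2 / photon_energy_J  # absorbed photons per second per molecule
N_absorbed = k_abs * tau_bleach_s
N_emitted = phi_fl * N_absorbed

print(f"k_abs ~ {k_abs:.2e} /s, Ntot,emitted ~ {N_emitted:.2e}")
```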
So what’s my whole point here? Given that the two experiments I describe use different measurements and calculate Ntot,emitted using different parameters, I was really happy to see that the calculated values of Ntot,emitted were very close from bulk and single-molecule experiments when I compared them directly. That’s all.
[Update: You can find some of the relevant equations in this paper or the SI of this paper.]
bubbling nitrogen
April 10, 2006 at 8:53 pm | sam | everyday science
The extent of chemistry chemistry that I do is making solutions of fluorescent dyes. But once in a while, I get to bubble N2 through a solution or something like that. I’m checking to see if this solution will bleach when de-oxygenated. But look how cool I am:
Isn’t that so cool? Here’s what my whole operation looks like (note the awesome poster in our lab):