Creatine, a popular sports supplement

July 7, 2010 at 2:38 pm | | nerd, science and the public, stupid technology, tutorial

Creatine, a small molecule found naturally in red meat (and biosynthesized in our bodies), is a popular supplement for weightlifters. To understand how it works, one needs to know that ATP (adenosine triphosphate) is the body’s energy molecule. It gives muscles the energy they need to function, but in the process, it loses a phosphate group and is converted to ADP (adenosine diphosphate).

Creatine phosphate (phosphocreatine) can convert this low-energy ADP molecule back into the super-charged ATP molecule that muscles crave, donating its phosphate group via the enzyme creatine kinase.

Creatine phosphate

As a consequence, lifters who supplement with creatine can do more reps, which can lead to better results in the gym on a shorter timescale. Creatine supplementation also increases water volume in the muscles, causing them to swell and look bigger; this effect subsides quickly once supplementation stops.

It is well known that consumption of simple sugars with creatine increases creatine absorption.  When you consume sugar, your blood-sugar level increases, and your body releases insulin in response (assuming you don’t have type 1 diabetes).  Insulin instructs the cells to take up sugar from the bloodstream.  Insulin also has the nice effect of stimulating creatine transporters, which transport creatine from the blood into cells.

Now that the background is finished…

I was at GNC yesterday buying some creatine. I looked at the ingredients on the GNC-brand creatine and gasped. Creatine and sucralose!?!?!? Sucralose??? OK, a little more background. Sucralose is a zero-carbohydrate, synthetic sugar mimic. As you can see below, it looks a lot like sucrose, except that several hydroxyls are replaced by chlorines, which basically make it unrecognizable to the body’s enzymes. So your tongue recognizes it, but your waistline doesn’t, because it’s not metabolized.

“Sucralose”

Sucrose

Now to the remarkable part. Sucralose has no effect on the blood-sugar level. So this GNC-brand product that I bought, containing 17% sucralose and 83% creatine, is ridiculous. Uninformed weightlifters don’t want “sugar” calories, so the industry adds 1 g of calorie-free sucralose per 5 g of creatine, which has no effect on creatine uptake and actually tastes quite disgusting (sucralose is 600x sweeter than sucrose, i.e. regular table sugar).

So who is the bigger idiot?

photoactivation vs. photoswitching (part 2)

October 9, 2009 at 3:43 pm | | tutorial

In part 1, I discussed the differences between the definitions of photoactivation and photoswitching. Here, I want to talk about some of the applications of the various classes of photochromic dyes.

[Image: photochromic lenses]

Photochromism typically is used for modulating the absorbance or color of a solution or material. The most obvious example is sunglasses that tint in the sunlight. Photochromism can even be applied to batteries!

Photoactivation is ideal for situations where you want an emitter to start in a dark form, be controllably activated to a bright form, emit many photons, then disappear irreversibly. An example application is tracking single molecules when the concentration of molecules is high: you want the probes to start dark to maintain a low background, and you want the probe to turn on irreversibly, lest the emitter switch off quickly and you can no longer track it.

Another perfect example is super-resolution imaging (via photoactivation and localization) of relatively static structures: you want to turn on a sparse subset of emitters, localize them once with high precision (requires many photons), then have them disappear permanently. Using a photoswitching probe for imaging static structures isn’t ideal, because relocalizing the same spot over and over wastes time and complicates the subsequent image analysis.

Photoswitching is ideal if you want to relocalize over and over, such as if you are performing super-resolution imaging on a moving or dynamic structure. Photoswitching can also be applied to speckle microscopy, which follows fluctuations in dynamic filaments. Others have used photoswitching to modulate a probe, lock-in to that modulating signal, and filter out the unmodulated background from nonswitching fluorophores (see OLID, figure below).

[Figure: optical lock-in detection (OLID)]

Photoswitching is also necessary for super-resolution techniques such as STED and structured illumination, which require many cycles of bright and dark. In fact, both of these techniques require photoswitching that is very robust and survives many, many cycles.

Side Note Regarding Photons: It is important to note that a compound being switchable does not necessarily mean that it will ultimately emit more photons. In many cases, cycling emission simply spreads the photons into many bins: the total number of photons emitted over all the cycles sums to the same number as if the fluorophore had been on continuously. For instance, if the nonswitching form of the fluorophore usually emits about a million photons (for the best organic dyes) and each cycle emits 10,000 photons, the photoswitch will generally last about 100 cycles on average before photobleaching.
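
To put numbers on that point (using the illustrative figures above, not measurements):

```python
# Photon budget for a photoswitch, using the illustrative numbers above.
total_photons = 1_000_000    # typical total for the best organic dyes
photons_per_cycle = 10_000   # photons emitted during each "on" period

cycles = total_photons / photons_per_cycle
print(f"expect ~{cycles:.0f} cycles before photobleaching")  # ~100
```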

On the other hand, it is conceivable that a photoswitch could resist photobleaching better than its continuous analog. For instance, if the fluorophore bleaches by producing reactive singlet oxygen that builds up around the dye—eventually colliding and reacting with the compound—then switching to an off state might offer some time for the built-up concentration of singlet oxygen to dissipate.

On the third hand, if the switch is capable of bleaching from the dark state, then switching may ultimately reduce the total photon count (e.g. the imaging light may pump the dark form into highly excited triplet or other states, leading the compound to basically explode).

photoactivation vs. photoswitching (part 1)

September 3, 2009 at 10:18 am | | tutorial

With the development of new super-resolution imaging techniques, photoswitching probes have become very important. Here, I explain the differences in photophysics and applications of photoactivating and photoswitching fluorophores. (These are not hard-and-fast rules, just my definitions of the terms.)

Azobenzene

In general, photochromism refers to the reversible light-induced transformation of a compound between two forms that absorb different energies or amounts of light. By switching between the forms, the color of a solution of the chromophore changes. One or both forms may be fluorescent, but it is possible that neither form fluoresces significantly. Frisbees or beads or shirts that change color in sunlight probably contain some photochromic molecules that are switched using UV irradiation. Common photochromic compounds include azobenzenes (structure above), diarylethenes, and spiropyrans.

Photoswitching refers to the reversible light-induced switching of fluorescence (color or intensity), and is often a type of photochromism. (In general, there is really no reason that the term must refer only to fluorescence, but in the context of imaging it is more helpful.) For instance, photoswitching rhodamines cycle between nonfluorescent and fluorescent forms by closing and opening a lactam ring. When the ring is closed (the thermally stable form), the absorbance is in the UV and the compound is nonfluorescent. Upon irradiation with UV light, the lactam can open; this forms a metastable fluorescent compound that absorbs in the green. Eventually, the lactam ring reforms (either via visible-light irradiation or thermally), and the cycle can repeat. At some point, a photochemical reaction (e.g. photo-oxidation) will change the compound, and the cycling will end (see cycling below); this is called photobleaching. Another example of a cycling photoswitch is Cy5 (here and here), which can be attacked by a thiol and rendered nonfluorescent; by irradiating with green light, the thiol pops back off and the Cy5 becomes fluorescent again. Here is a plot of the cycling fluorescence of Cy5:

[Plot: Cy5 photoswitching cycles]

Photoactivation refers to the irreversible light-induced conversion of a fluorophore from a dark form to a fluorescent form (or from one emission color to a significantly shifted color, e.g. blue to red). Typically, a chemical reaction transforms the compound, thus making the photoactivation irreversible for all practical purposes. For this reason, a photoactivatable fluorophore is sometimes referred to as being photocaged. There are several photoactivatable fluorescent proteins, such as PA-GFP. Another example close to my heart is the azido DCDHF fluorophore: upon irradiation with blue light, the azide photoreacts to an amine (with the loss of N2), converting a nonfluorescent compound into a bright emitter.

azido-DCDHF

In the next installment (part 2), I will describe the situations where photoswitching is preferred over photoactivation and vice versa.

on-time is not enough!

March 20, 2009 at 9:05 am | | single molecules, tutorial

[Image: Jablonski diagram]

I’ve seen a few papers recently that attempt to characterize the effect oxygen scavengers or triplet quenchers have on the photostability of single molecules. The main parameters they measure and compare across systems and fluorophores are the “on time”—the average time single molecules are fluorescent before they photobleach—and the “off time”—the time spent in transient dark states.

Here’s my question: What about the emission rate? It’s not enough to report and compare only times. Photostability (either regarding bleaching or blinking) is related to a probability that a molecule will enter a permanent (or transient) dark state for every photon absorbed. The “on time” is only relevant when you also know the rate of photon absorption.

Moreover, very fast excursions into transient dark states (triplet lifetimes are typically µs or ms—much faster than the camera integration time) will appear as a dimming of the fluorescence, a decrease in the photon emission rate. By removing molecular oxygen (an effective triplet quencher) from solution, fluorophores often become dimmer because the triplet lifetimes increase. Thus, removing oxygen might make your dye last a longer time, but at the expense of brightness. This could be achieved more effectively by just turning down the excitation intensity (with lower background, too)!
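
A toy comparison with made-up numbers (mine, not from any of those papers) shows why quoting on-times without rates is meaningless:

```python
# Two hypothetical conditions: removing O2 quintuples the "on time" but
# cuts the emission rate fivefold, so the total photon yield is unchanged.
conditions = {
    "with O2 (bright, short-lived)": (1e5, 10.0),   # photons/s, on-time in s
    "O2 removed (dim, long-lived)":  (2e4, 50.0),
}
for name, (rate, on_time) in conditions.items():
    print(f"{name}: {rate * on_time:.0e} total photons")
# Both print 1e+06: the longer on-time reflects dimming, not photostability.
```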

So it makes me want to pull my hair out when I read long, detailed articles in prestigious journals that fail to mention even once the rates of photon absorption or emission when they report “on times” as photostability parameters.

The total number of photons is a more fundamental measure of photostability, but it is still not enough for the full picture (it doesn’t report on blinkiness, for instance). Total photons plus “on times” is sufficient; or “on times” and emission rates; or enough variables to define the frikkin’ system.

UPDATE: Here’s a good paper that tests specific hypotheses about number and duration of off times. On the Mechanism of Trolox as Antiblinking and Antibleaching Reagent. JACS 2009, ASAP.

phytophotodermatitis

July 3, 2008 at 12:52 pm | | everyday science, great finds, science@home, tutorial

I’m on vacation with my financier fiancée for a week in South Carolina: a few days at our friends’ wedding, a few days at Hilton Head, and a few days with my soon-to-be in-laws. The weather has been beautiful: heat and humidity reminding me of my UNC days. The shark that swam past me gave me the creeps, but otherwise Hilton Head was perfect.

One strange event: a friend got a sunburn and a strange rash. The dermatologist asked if she had been making Mojitos or drinking Coronas. Huh? Diagnosis: Phytophotodermatitis. This is really cool: limes (and various other plants) contain furocoumarins (particularly psoralen, structure below), coumarin-type chromophores that absorb strongly in the UV.

Psoralens act as photosensitizers: absorbing UV light and releasing reactive triplets or radicals. With fluorescence quantum yields only around 1-2%, psoralens transition to the triplet state (via El-Sayed’s rule, I expect, from the Jablonski diagram below) and phosphoresce strongly. Chromophores stuck in their triplet state can return to the singlet ground state by coupling with triplet O2, producing a highly reactive singlet O2 species. This may be one mechanism of the photosensitizing properties of psoralens. Alternatively, a psoralen molecule in its triplet state can react directly with DNA or other biomolecules with electron-donating capability. Various other photosensitizing reactions are discussed in an interesting review (Kitamura, N.; Kohtani, S.; Nakagaki, R. J. Photochem. Photobiol. C 2005, 6, 168-185).

So, basically, my friend was spraying tan accelerator on her skin, then sitting in the sun for hours! That equals strangely shaped splotches of sunburn. In fact, psoralens have been used in photochemotherapy (also called PUVA) for certain skin ailments, such as eczema and psoriasis. So be careful squeezing limes on the beach, or picking parsnips or playing with celery in the sun.

Check out some doctory stories in this article: Weber, I. C.; Davis, C. P.; Greeson, D. M. J. Emerg. Med. 1999, 17, 235–237.

FRETing about Förster vs. fluorescence

March 2, 2008 at 7:19 pm | | everyday science, open thread, tutorial

[Image: FRET plot]

Until recently, I thought FRET stood for “Förster resonance energy transfer”; I figured that “fluorescence resonance energy transfer” was a bastardization used by biologists. But a friend challenged me on that point, claiming that fluorescence was more specific and meaningful than Förster. I was all confused.

My reasoning was this: Förster’s equation for long-range dipole-dipole nonradiative energy transfer is a specific case of RET; other cases (e.g. Dexter electron exchange) have different mechanisms and follow different scaling laws. Moreover, because fluorescence is not necessary in the FRET mechanism, I thought it was misleading.
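
(For concreteness, here’s the scaling difference in a quick sketch; the R0 and L values are illustrative placeholders, not from any particular donor-acceptor pair.)

```python
import numpy as np

# Forster efficiency follows E = 1 / (1 + (r/R0)^6); Dexter exchange falls
# off exponentially, roughly exp(-2r/L). R0 and L here are illustrative.
r = np.array([2.0, 5.0, 8.0, 12.0])   # donor-acceptor separation, nm
R0 = 5.0                              # Forster radius, nm
L = 0.5                               # Dexter attenuation length, nm

E_forster = 1 / (1 + (r / R0) ** 6)
k_dexter_rel = np.exp(-2 * r / L)     # relative Dexter rate, arbitrary units

for ri, ef, kd in zip(r, E_forster, k_dexter_rel):
    print(f"r = {ri:4.1f} nm: Forster E = {ef:.3f}, Dexter ~ {kd:.1e}")
```

The exponential dies off within a nanometer or two, which is why Dexter transfer requires orbital contact while the r^-6 Förster term reaches several nanometers.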

But how true is all that? Does Dexter ET count as RET? Is FRET the only way to transfer the potential to fluoresce from one molecule to another? My friend claimed that Dexter should not be called RET, because it is electron exchange instead of Coulombic.

So I refer to the experts.

Bernard Valeur, in Molecular Fluorescence, says:

The term resonance energy transfer (RET) is often used. In some papers, the acronym FRET is used, denoting fluorescence resonance energy transfer, but this expression is incorrect because it is not the fluorescence that is transferred but the electronic energy of the donor. Therefore, it is recommended that either EET (excitation energy transfer or electronic energy transfer) or RET (resonance energy transfer) be used.

That doesn’t help solve the Förster vs. fluorescence dilemma, but instead adds another term (EET, gross) to throw into the mix. But I think this sorta supports my using “Förster” because “fluorescence” is misleading and too broad. Anyway, good ol’ Bernard goes on to carefully describe the different RET mechanisms and formulas.

So what does Joseph R. Lakowicz say? First, he calls it “fluorescence resonance energy transfer,” but then echoes Valeur that RET is a preferable term because “the process does not involve the appearance of a photon.” But Lakowicz also differentiates RET and Dexter electron exchange (because the latter is purely quantum-mechanical).

In Turro’s Modern Molecular Photochemistry, the energy-transfer chapter starts right in with a “Golden Rule” for the transitions between states, and demonstrates that the probability includes an exchange term and a Coulombic term. (Valeur’s book also includes a nice mathematical explanation of the two terms; it might even be in Lakowicz somewhere!)

So now I’m mostly reconvinced that FRET should be Förster resonance energy transfer, not fluorescence. That is, RET is the general term for nonradiative excitation-energy transfers, and FRET is a specific mechanism—the specific mechanism applied in practically all biophysical measurements using RET to study distances.

What do you think? Am I way off?

UPDATE: IUPAC says I’m right.

breaking the diffraction limit of light, part 2

November 22, 2006 at 9:32 pm | | literature, tutorial

Another installment of an n-part series describing sub-diffraction microscopy techniques that strive to break the diffraction limit. See my CUL posts tagged “super-resolution.”

You can read Part I for techniques such as STED, PALM, STORM, etc.


In Part 2, we’ll explore the wide-field structured-illumination approach to breaking the diffraction limit of light (BTDLOL). Structured illumination (SI)—or patterned illumination—relies on both specific microscopy protocols and extensive software analysis post-exposure. But, because SI is a wide-field technique, it is usually able to capture images at a higher rate than confocal-based schemes like STED. (This is only a generalization, because SI isn’t actually super fast. I’m sure someone could make STED fast and SI slow!)

The main concept of SI is to illuminate a sample with patterned light and increase the resolution by measuring the fringes in the Moiré pattern (from the interference of the illumination pattern and the sample). “Otherwise-unobservable sample information can be deduced from the fringes and computationally restored.”1

[Figure: Moiré fringes from two overlaid patterns]

So SI enhances spatial resolution by collecting information from frequency space outside the observable region. The figure below shows this process in reciprocal space: (a) the Fourier transform (FT) of a normal image and (b) of an SI image, with arrows pointing to the additional information from different areas of reciprocal space superimposed; with several images like in (b), it is possible to computationally (c) separate and (d) reconstruct the FT image, which contains much more resolution information. The inverse FT returns (d) to a super-resolution image.

[Figure: SI reconstruction in reciprocal space, panels a–d]
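
A minimal numpy sketch (my own illustration, not code from the papers) shows the frequency mixing at the heart of SI: multiplying a sample by a sinusoidal pattern shifts the sample’s spatial frequencies by ±k0, so detail beyond the passband shows up as low-frequency Moiré fringes:

```python
import numpy as np

# A 1D "sample" with fine detail at 60 cycles/window, illuminated by a
# pattern at 50 cycles/window. The product picks up peaks at 60 +/- 50.
n = 1024
x = np.arange(n)
sample = 1 + np.cos(2 * np.pi * 60 * x / n)   # fine sample detail
illum = 1 + np.cos(2 * np.pi * 50 * x / n)    # structured illumination

spectrum = np.abs(np.fft.rfft(sample * illum))
peaks = np.nonzero(spectrum > 0.1 * spectrum[1:].max())[0]
print(peaks)  # expect 0, 10, 50, 60, 110: the 10 and 110 carry new information
```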

But this only enhances the resolution by a factor of 2 (because the SI pattern cannot be focused to anything smaller than half the wavelength of the excitation light). To further increase the resolution, you can introduce nonlinearities, which show up as higher-order harmonics in the FT. In reference 1, Gustafsson uses saturation of the fluorescent sample as the nonlinear effect. A sinusoidal saturating excitation beam produces the distorted fluorescence intensity pattern in the top curve of (a) in the figure below. The nonpolynomial nonlinearity yields a series of higher-order harmonics in the FT, as seen in the top curve of (b) below.

[Figure: (a) saturated sinusoidal excitation and the resulting distorted emission pattern; (b) higher-order harmonics in the FT]

Each higher-order harmonic in the FT allows another set of images that can be used to reconstruct a larger area in reciprocal space, and thus a higher resolution. In this case, Gustafsson achieves sub-50-nm resolving power (solid line), more than five times better than the microscope in its normal configuration (dashed line).

[Plot: PSF width with saturated SI (solid line) vs. the normal microscope (dashed line)]

The figure below demonstrates the high resolution, going from (a) the normal microscope image, to (b and c) increasing order of harmonics used in the reconstruction, to (d and e) a high-res image from nine frames of 50-nm fluorescent beads.

[Figure: (a) conventional image, (b, c) reconstructions using increasing orders of harmonics, and (d, e) a high-res image of 50-nm fluorescent beads from nine frames]

The main problems with SI are that, in this incarnation, saturating excitation powers cause more photodamage and lower fluorophore photostability, and sample drift must be kept below the resolving distance. The former limitation might be solved by using a different nonlinearity (such as stimulated emission depletion or reversible photoactivation, both of which are used in other sub-diffraction imaging schemes); the latter limits live-cell imaging and may require faster frame rates or the use of fiducial markers for drift subtraction. Nevertheless, SI is certainly a strong contender for further application in the field of super-resolution microscopy.

_________________

Sources:

  1. Gustafsson, M. G. L. Nonlinear structured-illumination microscopy: Wide-field fluorescence imaging with theoretically unlimited resolution. PNAS 2005, 102(37), 13081–13086.
  2. Gustafsson, M. G. L. Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy. J. Microsc. 2000, 198(2), 82–87.
  3. Bailey, B.; Farkas, D. L.; Taylor, D. L.; Lanni, F. Enhancement of axial resolution in fluorescence microscopy by standing-wave excitation. Nature 1993, 366, 44–48.

_________________

breaking the diffraction limit of light, part 1

August 19, 2006 at 8:52 pm | | literature, single molecules, tutorial

This is Part 1 in a (possibly) multi-part series on techniques that seek to image objects smaller than the diffraction limit of light. Part II is here. Suggestions for other interesting methods to summarize?

It is well known that there is a limit to which you can focus light—approximately half of the wavelength of the light you are using. But this is not a true barrier, because this diffraction limit is only true in the far-field and localization precision can be increased with many photons and careful analysis (although two objects still cannot be resolved); and like the sound barrier, the diffraction barrier is breakable! Let’s explore a few interesting approaches to imaging objects smaller than ~250 nm.

NSOM
Probably the most conceptually direct way to break the diffraction barrier is to use a light source and/or a detector that is itself nanometer in scale. Diffraction as we know it is truly a far-field effect: the light from an aperture is the Fourier transform of the aperture in the far-field. [Here’s a neat Applet to play with.] But in the near-field, all of this isn’t necessarily the case. Near-field scanning optical microscopy (NSOM) forces light through the tiny tip of a pulled fiber—and the aperture can be on the order of tens of nanometers. If you bring this tip within nanometers of a molecule, the resolution is not limited by diffraction but by the size of the tip aperture (because only that one molecule will see the light coming out of the tip). Then you can raster-scan the tip over the surface to create an image.

The main downside to NSOM is the limited number of photons you can force out of a tiny tip, and the minuscule collection efficiency (if you are trying to collect fluorescence in the near-field).

Local Enhancement / ANSOM / Bowties
So what if, instead of forcing photons down a tiny tip, we could create a local bright spot in an otherwise diffraction-limited spot? That is the approach that ANSOM and other techniques take. ANSOM is apertureless NSOM: it uses a tip very close to a fluorophore to enhance the local electric field the fluorophore sees. Basically, the ANSOM tip is like a lightning rod that creates a hot spot of light.

The Moerner lab uses bowtie nanoantennas to greatly and reproducibly enhance the electric field in the nanometer gap between the tips of two gold triangles. Again, the point is to enhance a very small region of a diffraction-limited spot, thus overcoming the size mismatch between light and nanoscale objects—and breaking the diffraction barrier.

STED
My most recent favorite is STED—stimulated emission depletion. Stefan Hell at the Max Planck Institute developed this method, which uses two laser pulses. The first pulse is a diffraction-limited spot that is tuned to the absorption wavelength, and so excites any fluorophores in that region; an immediate second pulse is red-shifted to the emission wavelength and stimulates emission back to the ground state before spontaneous fluorescence can occur, thus depleting the excited state of any fluorophores hit by this depletion pulse. The trick is that the depletion pulse goes through a phase modulator that makes the pulse illuminate the sample in the shape of a donut, so the outer part of the diffraction-limited spot is depleted and the small center can still fluoresce. By saturating the depletion pulse, the center of the donut gets smaller and smaller, until resolution of tens of nanometers is achieved.

[Figure: STED excitation spot and donut-shaped depletion beam]

This technique, like NSOM and standard confocal microscopy, also requires a raster scan.
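
The resolution improves roughly as a square-root law in the depletion-beam saturation; here’s a quick sketch of that textbook scaling, d ≈ λ / (2·NA·√(1 + I/Isat)), with illustrative numbers (not Hell’s actual parameters):

```python
import numpy as np

# STED resolution scaling: d ~ lambda / (2 NA sqrt(1 + I/I_sat)).
# The wavelength and NA are illustrative values for a depletion beam.
wavelength, NA = 650.0, 1.4   # nm; numerical aperture
for ratio in (0, 10, 100):
    d = wavelength / (2 * NA * np.sqrt(1 + ratio))
    print(f"I/I_sat = {ratio:3d} -> d ~ {d:.0f} nm")
# ratio 0 gives the ordinary diffraction limit (~230 nm); driving the
# depletion beam 100x past saturation shrinks the donut hole to ~23 nm.
```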

Fitting the PSF
The methods above (and below) use experimental techniques to circumvent the diffraction barrier, but we could also use crafty analysis to increase our ability to know where a nanoscale object is. The image of a point source on a CCD is called a point spread function (PSF), which is limited by diffraction to be no less than approximately half the wavelength of the light. Of course. But what if we simply fit that PSF with a Gaussian to locate the center of the PSF—and thus the location of the fluorophore? Sure! But the precision with which we can locate the center depends on the number of photons we get (as well as the CCD pixel size and other factors). Regardless, groups like the Selvin lab and many others have employed this analysis to localize single fluorophores to within a few nanometers! This, of course, requires careful measurements and collecting many photons.
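
Here’s a minimal sketch of that fit in Python: a 1D cut through a PSF with shot noise, fit with a Gaussian (real analyses fit in 2D and treat background and pixelation more carefully):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, width, offset):
    return amp * np.exp(-((x - center) ** 2) / (2 * width ** 2)) + offset

rng = np.random.default_rng(0)
pixels = np.arange(21.0)          # 1D slice through the PSF, in pixels
true_center = 10.3                # sub-pixel molecule position
counts = rng.poisson(gaussian(pixels, 1000, true_center, 2.5, 10))

popt, _ = curve_fit(gaussian, pixels, counts, p0=(900, 9, 2, 0))
print(f"fit center = {popt[1]:.2f} px (true {true_center})")
# The precision scales roughly as PSF width / sqrt(photons collected).
```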

PALM & STORM
What fitting a PSF is to localization, photoactivated localization microscopy (PALM) is to “resolution”—I use this term loosely to mean measuring the distance between objects, not true optical resolution. Eric Betzig and colleagues developed PALM; Xiaowei Zhuang at Harvard uses a similar technique and calls it STORM: stochastic optical reconstruction microscopy. [UPDATE: Sam Hess at UMaine developed the technique simultaneously.] The basic premise of both techniques is to fill the imaging area with many dark fluorophores that can be photoactivated into a fluorescing state by a flash of light. Because photoactivation is stochastic, you can ensure that only a few, well-separated molecules “turn on,” then fit their PSFs to high precision (see above). If you wait until the few bright dots photobleach, you can flash the photoactivating light again and fit the PSFs of a different set of well-spaced objects. If you repeat this process many times, you can build up an image molecule by molecule; and because the molecules were localized at different times, the “resolution” of the final image can be much higher than that limited by diffraction.

[Figure: PALM image]

The problem? To get these beautiful pictures, it takes ~12 hours of data collection. This is certainly not the technique to study dynamics (fitting the PSF is better for that).
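
As a toy model of the activate-localize-bleach loop (entirely my own sketch, with made-up numbers), each round turns on a sparse handful of molecules and localizes each with an error of roughly the PSF width over the square root of the photon count:

```python
import numpy as np

rng = np.random.default_rng(1)

# 200 molecules along a 1-um line; each activation round turns on up to 5
# of them, and each is "localized" with error sigma_psf / sqrt(photons).
true_positions = list(rng.uniform(0, 1000, size=200))   # nm
sigma_psf, photons = 125.0, 1e4
precision = sigma_psf / np.sqrt(photons)                # ~1.25 nm

localizations = []
while true_positions:
    n_on = min(5, len(true_positions))                  # sparse subset
    subset = [true_positions.pop() for _ in range(n_on)]
    localizations += [x + rng.normal(0, precision) for x in subset]

print(f"{len(localizations)} molecules localized to ~{precision:.2f} nm each")
```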

___________________
Sources
NSOM: 1,2
Enhancement: 1,2
STED: 1,2
Fitting PSF: 1
PALM & STORM: 1,2

measuring total photons emitted

August 15, 2006 at 10:09 am | | everyday science, single molecules, tutorial

Good single-molecule fluorophores must meet several criteria. To name a few: the molecules must have high quantum yields and be photostable, with a large absorption cross-section and a low yield of dark triplet states. (Ideally, a single-molecule fluorophore would have some inherent reporter function, but that’s a story for another day.) There are several popular long-lasting, bright fluorophores out there (e.g. rhodamines, terrylenes, Cy3); my research involves developing and characterizing new compounds for single-molecule cellular imaging, which means quantifying the criteria I listed above.

One measure of photostability is the total number of photons each molecule emits before it irreversibly photobleaches (Ntot,emitted). Because photobleaching is a Poisson process that depends on the number of times the molecule reaches the excited state, Ntot,emitted for a fluorophore should not depend on the laser intensity (unless you illuminate beyond the saturation intensity): at a higher intensity, the molecules will absorb and emit photons at a higher rate, but photobleach in less time, yielding the same number of photons as at a lower intensity. So the Ntot,emitted of a particular fluorophore in a particular environment tells you how bright or long-lasting the fluorophore is; the higher the number, the better the fluorophore (all else the same).

I’ve measured the Ntot,emitted for a fluorophore at the bulk and single-molecule levels—experiments that require different analyses. For the single-molecule measurement, I record wide-field movies of several molecules well spaced in a polymer film. Then I integrate the counts from each molecule and plot the distribution from over 100 individual molecules. The distributions look like this:

[Plot: distribution of total detected photons per single molecule]

The distributions are exponential, as expected for a Poisson process. From this distribution, I can determine the average Ntot,detected and convert it to Ntot,emitted using the detection efficiency of my setup (typically ~10% for epifluorescence).

The bulk measurement is a little less conceptually obvious. I use the same setup, but overdope the dye into the polymer film, so the intensity of the entire field of view versus time in the movie looks like this:

[Plot: bulk fluorescence intensity vs. time, with exponential fit]

From the exponential fit, you can extract the average time before photobleaching for the molecules. This value, combined with the absorption cross section and the laser intensity, can give you the number of photons absorbed per molecule. Using the quantum yield, you can then calculate the Ntot,emitted.
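
In equation form, the photon absorption rate per molecule is σ·I divided by the photon energy hc/λ, and Ntot,emitted = ΦF × (absorption rate) × τbleach. Here is that arithmetic as a quick sketch (all the numbers below are placeholder values for illustration, not my measured ones):

```python
import scipy.constants as const

# Placeholder inputs; none of these are the measured values from the post.
wavelength = 488e-9           # m
intensity = 1e7               # W/m^2 (= 1 kW/cm^2)
sigma = 3e-20                 # m^2 (absorption cross-section, ~3e-16 cm^2)
tau_bleach = 10.0             # s, average bleaching time from the fit
phi_f = 0.9                   # fluorescence quantum yield

photon_energy = const.h * const.c / wavelength        # J per photon
absorbed_per_sec = sigma * intensity / photon_energy  # photons/s per molecule
n_emitted = phi_f * absorbed_per_sec * tau_bleach
print(f"N_tot,emitted ~ {n_emitted:.1e} photons")     # a few million
```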

So what’s my whole point here? Given that the two experiments I describe use different measurements and calculate Ntot,emitted from different parameters, I was really happy to see that the values calculated from the bulk and single-molecule experiments were very close when I compared them directly. That’s all.

[Update: You can find some of the relevant equations in this paper or the SI of this paper.]

dual-color viewer

May 30, 2006 at 9:06 pm | | cool results, everyday science, hardware, single molecules, software, tutorial

How do you turn your grayscale CCD to a two-color camera? Filters and fun! Here’s a diagram looking down at the dual-viewer setup on my table:

Figure 1. Diagram of setup (viewed from above)

M1 and M2 are mirrors, F1 and F2 are long- or short-pass filters, and DC1 and DC2 are identical dichroics. DC1 reflects short wavelengths and passes long wavelengths, the filters clean up the two paths, the mirrors bring the paths back together, and DC2 recombines the two channels. If the channels are offset a little, the short and long wavelengths form two side-by-side copies of the image on the CCD.
Here’s a pic of the setup:

Figure 2. Picture of the setup with channels drawn

In other words, the dichroics split the green light off the output and move it to a different region of the CCD. You can recombine the two copies and add color using ImageJ, like this:

Figure 3. The right side is an overlay of the red and green channels in false color
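
If you’d rather script the overlay than do it in ImageJ, the recombination step amounts to splitting the frame in half and stacking the halves as color channels. A rough numpy equivalent (my own sketch; it assumes the left half is the long-wavelength channel and ignores the sub-pixel registration you would do in practice):

```python
import numpy as np

def overlay_dual_view(frame: np.ndarray) -> np.ndarray:
    """Merge a side-by-side dual-view frame into a false-color RGB image:
    left half (assumed long wavelengths) -> red, right half -> green."""
    h, w = frame.shape
    red = frame[:, : w // 2].astype(float)
    green = frame[:, w // 2 : 2 * (w // 2)].astype(float)
    rgb = np.zeros((h, w // 2, 3))
    rgb[..., 0] = red / red.max()
    rgb[..., 1] = green / green.max()
    return rgb
```

Feed it the raw grayscale frame as a 2D array; matplotlib’s imshow will display the result.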

I just measured Fopt for my ‘scope

March 28, 2006 at 5:14 pm | | everyday science, hardware, tutorial

So here’s the deal: the detection efficiency for the stream of photons coming from one single fluorophore is kinda crappy (~10%). But that’s the state of the art; it’s part of the reason single-molecule measurements are so tough … and fun!

The detection efficiency D is:

D = nQ · Fcoll · Fopt · Ffilter

where nQ is the camera quantum efficiency, Fcoll is the percentage of light collected by the objective, Fopt is the efficiency through all the optics in the microscope, and Ffilter is the transmission through the various filters. Today, I measured Fopt to be 50% at 488 nm. Here’s a page from my notebook that shows my setup:

[Image: notebook page showing the Fopt measurement setup]
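
As a sanity check on that ~10% figure, here’s the arithmetic with plausible numbers (only Fopt = 0.5 at 488 nm is my measurement; the other factors are assumed for illustration):

```python
# Detection efficiency D = nQ * Fcoll * Fopt * Ffilter.
# Only F_opt below is measured; the other factors are assumed values.
n_Q = 0.9        # camera quantum efficiency
F_coll = 0.3     # fraction of emitted photons collected by the objective
F_opt = 0.5      # measured: throughput of the microscope optics at 488 nm
F_filter = 0.8   # transmission through the filters

D = n_Q * F_coll * F_opt * F_filter
print(f"D = {D:.1%}")   # ~10.8%, consistent with the ~10% quoted above
```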
