I bought a $500 camera from Point Grey that uses the Sony IMX249 chip. It has a fairly large field of view with intermediate-sized pixels (5.86 um), so it has great dynamic range. The great thing is that it has low dark/read noise of 7-14 electrons per frame and a very high quantum efficiency of 80%. And it runs at up to 40 fps!
This camera can't fully compete with scientific CMOS cameras like the Andor Zyla or Hamamatsu Flash4 (and definitely not with the Photometrics Prime95B), because those scientific cameras do a better job of cooling (reducing dark counts) and of on-chip correction of dead pixels and other pixel-to-pixel variability. But I wondered whether this Point Grey camera could be a very cheap replacement for our old interline CCD (a Hamamatsu Orca-ER model C4742-80-12AG).
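To get an intuition for how much read noise and quantum efficiency matter, here is a minimal shot-noise SNR sketch. The Blackfly numbers come from the specs above; the sCMOS figures (QE 0.72, ~1.5 e- read noise) are typical published values I'm assuming for comparison, not measurements of mine.

```python
import math

def snr(photons, qe, read_noise_e):
    """Single-pixel SNR: signal = QE * photons, noise = sqrt(shot + read^2).
    Dark current is ignored for short exposures."""
    signal = qe * photons
    return signal / math.sqrt(signal + read_noise_e ** 2)

# Blackfly (IMX249): QE ~0.80, read noise 7-14 e- (worst case used here)
for photons in (100, 1000):
    print(f"{photons} photons: Blackfly {snr(photons, 0.80, 14):.1f}, "
          f"typical sCMOS {snr(photons, 0.72, 1.5):.1f}")
```

At bright signals the two are close (shot-noise limited), but at low light the higher read noise of the Blackfly costs real SNR, which is consistent with it being fine for day-to-day imaging but not a full sCMOS replacement.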
Recently, Nico wrote a Micro-Manager device adapter for USB3 Point Grey cameras, so I quickly bought the Blackfly BFLY-U3-23S6M-C and was happy to get beautiful images! The picture on the bottom is from the Point Grey and the one on the top is from the old interline camera. At the same exposure time, the images were very similar. And the Point Grey camera could run 4x faster if necessary.
In addition, I thought the Point Grey output 16-bit images with much higher dynamic range than the old 12-bit interline CCD, but I misread the specs: the video output is 16 bit, while the A/D converter is still only 12 bit.
So I plan to replace the interline camera with this Point Grey camera for day-to-day microscopy. I’ll let you know if we run into any problems in the future.
Here are two more images, zoomed in and cropped. Top is Hamamatsu Orca-ER and bottom is the Point Grey Blackfly camera:
These were the same exposure time (200 ms) and the same magnification. I’ve decided to replace the Hamamatsu with the Point Grey camera.
I learned during my PhD that you should always have plastic sheeting in the lab, because it might just save your equipment if a water leak happens. It saved one of our scopes recently, although I wasn't fast enough to prevent some water damage on an expensive camera. :(
Electrically tunable lenses (ETLs) are polymeric or fluid-filled lenses that have a focal length that changes with an applied current. They have shown some great potential for microscopy, especially in fast, simple z-sweeps.
The above figure shows the ~120 um range of focal depths achievable with an ETL installed between the camera and a 40x objective (from reference 1). Note that this arrangement has the drawback of changing the effective magnification at different focal depths; however, this effect is fairly small (20%) and linear over the full range. For high-resolution z-stack imaging of cells, this magnification change would not be ideal, but it should be correctable for imaging that is less sensitive to magnification changes. Basic ETLs cost only a few hundred dollars, a lot cheaper than a piezo stage or objective focuser. Optotune has a lot of information about how to add an ETL to a microscope.
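Since the magnification change is linear over the focal range, correcting it per z-slice is just a linear rescaling of the effective pixel size. A minimal sketch, assuming the ~20% change over ~120 um from reference 1 (the 0.16 um nominal pixel size is an illustrative number, not from the paper):

```python
def effective_pixel_size(z_um, px0_um=0.16, z_range_um=120.0, mag_change=0.20):
    """Effective pixel size at the sample for a given ETL focal offset,
    assuming magnification increases linearly by `mag_change` (20%)
    over `z_range_um`. z = 0 is the nominal focal plane."""
    mag_scale = 1.0 + mag_change * (z_um / z_range_um)
    # higher magnification -> each camera pixel covers less of the sample
    return px0_um / mag_scale
```

Each slice of a z-stack could then be rescaled by its own pixel size before 3D reconstruction.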
Another cool application of an ETL is in light-sheet microscopy. A recent paper from Enrico Gratton's lab (reference 2) used an ETL to sweep the narrow waist of a light sheet across the sample, synchronizing its motion to match the rolling shutter of a CMOS camera.
The main goal was to cheaply and simply create a light sheet with a uniform (and minimal) thickness across the entire field of view. A previous low-tech method to achieve this was to close down an iris, which reduces the variation in thickness across the sample but at the cost of a larger minimal waist. The high-tech way is to create "propagation-invariant" Bessel or Airy beams. These do not spread out as they propagate the way Gaussian beams do, but creating and aligning them in microscopes is significantly more challenging.
Gratton’s cheap trick means one can create a flat and thin light sheet for the cost of an ETL and the complexity of synchronizing a voltage ramp signal to the CMOS rolling shutter readout. To be honest, I don’t 100% know how complicated or robust that is in practice. I’m just guessing that it’s simpler than a Bessel beam.
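The synchronization itself boils down to a ramp: within each frame readout, the ETL drive signal must track the camera's active shutter row. Here's a toy sketch of that mapping; the line time, row count, and voltage range are all illustrative assumptions, since real values depend on your camera and ETL driver.

```python
def etl_control_voltage(t_s, line_time_s=10e-6, n_rows=2048,
                        v_start=0.0, v_end=2.0):
    """Drive voltage for the ETL at time t within one frame readout,
    so the light-sheet waist follows the active rolling-shutter row.
    Assumes a linear voltage-to-focus response over the sweep range."""
    frame_time = line_time_s * n_rows
    frac = min(max(t_s / frame_time, 0.0), 1.0)  # fraction of rows read out
    return v_start + frac * (v_end - v_start)
```

In practice this ramp would come from a function generator or DAQ board triggered by the camera's frame-start output, repeating once per frame.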
Wang, Z., Lei, M., Yao, B., Cai, Y., Liang, Y., Yang, Y., … Xiong, D. (2015). Compact multi-band fluorescent microscope with an electrically tunable lens for autofocusing. Biomedical Optics Express, 6(11), 4353. doi:10.1364/BOE.6.004353
Hedde, P. N., & Gratton, E. (2016). Selective plane illumination microscopy with a light sheet of uniform thickness formed by an electrically tunable lens. Microscopy Research and Technique, 00(April). doi:10.1002/jemt.22707
A while ago, I installed a very simple filter for the air vent in our scope room. It barely did anything, honestly. The “filter” is nothing more than a very loose mesh of fibers.
So I’ve bumped up to a 20x20x1 inch MERV13 pleated air filter with a paperboard frame. It fits perfectly between the vent and the grate. I even did some actual tape-measuring before purchasing. :) With a little duct tape, I was able to secure and seal the filter into place, then I put the grate back on. (A 16x16x1 inch filter would have fit, too, but I would have had to remove the louvers on the vent before taping on the filter.)
I didn’t notice any reduction of air flow and the room is still under positive pressure. So I’m not concerned with straining the HVAC system of the building.
I hope that it will help keep down the dust in the scope room! I’ll check it in a few months and update.
I’ve changed the filter twice since August. Here’s a side-by-side of the new filter and filter that’s been installed for a few months. I think it’s working:
Photometrics has released the Prime 95B, the first scientific CMOS camera with a back-thinned sensor. This means that the sensor is significantly more sensitive than the front-illuminated versions of other scientific CMOS cameras. The Prime 95B has a 95% quantum efficiency, whereas other scientific CMOS cameras have 60-70% QE; the newest versions of competing CMOS cameras tout 80%+ QE. Back-thinning really helped CCD technology (EMCCDs are back-thinned, for example), but back-thinning CMOS sensors has been more challenging, for technical reasons I don't fully understand.
I demoed the Prime 95B when it was in the Nikon Imaging Center (Kurt wrote up details here). The CMOS camera was installed on a spinning disk confocal along with a 1024×1024 pixel EMCCD. The Prime 95B has 11 um pixels, slightly smaller than the EMCCD's 13 um pixels; this results in a higher spatial sampling rate and thus lower per-pixel sensitivity for the CMOS, because the photons are spread across more pixels. This could be corrected by using a different relay lens, but we didn't do that here. So the playing field was uneven, favoring the EMCCD.
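The size of that handicap is easy to estimate: at the same magnification, the light collected per pixel scales with pixel area. A quick sketch:

```python
def photons_per_pixel_ratio(px_a_um, px_b_um):
    """Relative photons collected per pixel for two cameras at the same
    magnification: flux per pixel scales with pixel area (side squared)."""
    return (px_a_um / px_b_um) ** 2

# Prime 95B (11 um) vs EMCCD (13 um): each CMOS pixel collects ~72%
# as many photons as an EMCCD pixel at the same magnification
r = photons_per_pixel_ratio(11, 13)
```

So each Prime 95B pixel starts with roughly a 28% photon deficit per pixel in this setup, which makes its performance below all the more impressive.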
Despite that, the Prime 95B matched or outperformed the EMCCD in all the tests we did. The above image compares the EMCCD (left) with the Prime 95B (right) imaging a 100 nm Tetraspeck bead. Below, I compare them imaging a fixed test sample at very low light levels.
The comparisons I made were mainly qualitative. By eye, I was not able to find conditions where the EMCCD outperformed the Prime 95B. That's saying a lot, especially because the Prime 95B costs approximately half as much! For single-molecule imaging, the EMCCD might still be king (see Kurt's curve), but I didn't have time to perform those detailed, quantitative tests. For all other imaging, including spinning disk confocal, I'd rather have the Prime 95B. No more fiddling with EM gain settings, and the large dynamic range of the CMOS makes it a real winner!
We got a chance to try out a cool new label-free microscope from NanoLive: the 3D Cell Explorer. It works by holographic tomography: it rotates a laser beam around the top of the sample, records many transmitted-light images, and then uses software to reconstruct an image with phase and even 3D information. The small refractive index differences between organelles or regions of the cell result in different retardation of the phase of the transmitted light; in the reconstruction, these areas can be false-colored to give beautiful renderings of cells ... all without fluorescent labeling.
We used the Nanolive to watch Naegleria amoeba crawling across a glass surface. These cells move orders of magnitude faster than fibroblasts (20 um/min), so imaging their movement is a serious challenge for many high-resolution microscopes.
The above video is false-colored for different index ranges. It is super cool to see the pseudopods in 3D, and possibly even distinguish the plasma membrane from the actin cortex. The demo went well and it took only about 15 min to take the microscope out of the box and start imaging.
When we demoed the beta version a year or so ago, it had trouble imaging crawling amoebae: the background subtraction was manual and flaky, and the frame rate was too slow. But Nanolive let us try it again after the product's actual release, and things worked way better. The background subtraction is now automated and robust, and the frame rate was high enough to watch these fast-crawling cells.
I think that this microscope would be a great choice for researchers studying organisms that are not genetically tractable or otherwise cannot be fluorescently labeled. Or for anyone studying organelles that show up with a different index (Naegleria ended up having relatively low-contrast organelles compared to adherent mammalian cells, for instance.)
- affordable (about the cost of an EMCCD camera)
- low intensity (no phototoxicity or photobleaching)
- simple and user-friendly: easier than setting up DIC with Koehler illumination :)
- small footprint and easy setup
- software is free
- potential for beautiful and amazing data
- not versatile: it does one thing (but does that one thing well)
- limited to samples with wide top, like a 35 mm dish (not 96-well plates), because the laser beam comes in at an angle
- 3D information on top and bottom of cells is less impressive
Go check it out!
We wanted some coverslip spinners to dry coverslips after washing and rinsing. It’s way faster than blowing them with air. Nico kindly gave me his 3D design file for the coverslip holder, and I modified the box design from here.
Here’s a parts list (Digikey part numbers unless otherwise noted):
- Power supply/AC adaptor (5V, 1A)
- 3V DC motors (2), wired in parallel
- Momentary switch
- Cushion feet (4)
- Screws for motors (McMaster-Carr part 90116A009)
- 3D-printed coverslip holders (2)
- Safety cover
- Superglue, wire, soldering iron, etc.
And here’s the finished product.
LED illumination is awesome for epifluorescence. No mechanical shutters, no changing mercury lamps every 200 hours, no hot lamphouses, no worries about letting it cool down before turning the lamp back on, less wasted electricity, immediately ready to use after turning it on, etc.
We have a Lumencor SpectraX on our Nikon TE2000 scope and we love it. It contains multiple LEDs that are independently triggerable. For high-speed imaging, we bought one new Chroma quad-band dichroic and emission filter set, as well as 4 separate single-band emission filters for our emission filter wheel (although this latter set is not absolutely necessary).
The amazing thing is to be able to run color sequences at the frame rate of the camera (because the SpectraX accepts TTL triggering of each line independently). It is beautiful to see the rainbow of light flashing out of the scope at 20+ frames per second!
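The logic behind those fast color sequences is simple: the LED lines are armed in a cycle, and each camera exposure pulse advances to the next one, so no software is in the loop per frame. Here's a toy model of that sequencing (the line names are examples, not our actual wiring):

```python
class ColorSequencer:
    """Toy model of TTL-driven channel sequencing: each camera
    exposure-out edge lights the next LED line in the cycle."""
    def __init__(self, lines=("violet", "cyan", "green", "red")):
        self.lines = lines
        self.index = 0

    def on_camera_trigger(self):
        active = self.lines[self.index]  # LED lit for this frame
        self.index = (self.index + 1) % len(self.lines)
        return active

seq = ColorSequencer()
frames = [seq.on_camera_trigger() for _ in range(6)]
# frames cycles: violet, cyan, green, red, violet, cyan
```

In the real system this cycling happens in the SpectraX/trigger hardware itself, which is why it can keep up with the camera's full frame rate.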
We haven’t run into any issues with brightness: the SpectraX is bright enough for all our cell imaging experiments. Typically, we run it at 20% power. That said, I’m aware that the very bright peaks in an arc lamp spectrum (e.g. UV, 435, 546) aren’t there in the LED spectra. So for FRAP or something, you may not be able to bleach as fast.
And, of course, a fancy illuminator like the Spectra X is not cheap. But for run-of-the-mill epi imaging, white-light sources like the Lumencor Sola might be a good option. Another downside is that the fans on the Spectra X are audible, but not annoying. Despite that minor issue and the cost, I highly recommend LED illumination (and the Spectra X, specifically).
I recommend you demo a few LED sources from a few companies (e.g. ScopeLED, Lumencor, Sutter, etc.) and make sure it will fit your needs.
* Make sure your camera supports TTL triggering of an external shutter.
I had previously made my own plasma cleaner using a pump and an old microwave. While my homemade version technically worked, it was complicated to use, unwieldy, and inconsistent in performance. In fact, at least one test made my glass coverslips dirtier.
So we purchased a Harrick plasma cleaner. I've used these in the past for preparing coverslips for single-molecule imaging, as well as for treating coverslips before forming supported lipid bilayers on the glass. I've always found plasma treatment to be simpler and more consistent than chemical methods such as piranha.
You can see a lot of single-molecule level fluorescent impurities on the glass surface before cleaning (these are a few frames stitched together):
And after 4 minutes of plasma treatment (with air as the process gas) it was so clean that I had trouble finding the correct focal plane:
People are also using this plasma cleaner to treat material for PDMS bonding to glass. They say it’s been working very consistently.
So I highly recommend plasma cleaning. It takes literally a few minutes and there’s no hazardous waste to dispose of. The only real drawback is the price: a new cleaner plus pump costs several thousand dollars. In the long run, if we can get consistent science and no haz waste disposal costs, that price will be worth it. (We also split the cost with several other labs on our floor.)
I’ve also heard good things about ozone treatment. Anyone have any comments about ozone vs plasma?
- Very easy to use
- Fast (<5 min) cleaning
- Updated models of Harrick cleaners have a nice hinged door
- Using process gases other than simply air (such as argon) is slightly more complicated, because you’ll need a tank and tubing; oxygen plasma cleaning requires a more expensive pump
We recently purchased new lasers for our TIRF scope. I wanted the flexibility and low cost of a home-built laser combiner, but also I wanted the ease and stability of a turn-key laser box. I stumbled upon Coherent’s Obis Galaxy combiner, which uses up to 8 fiber pigtailed lasers and combines the emission into one output fiber. What I really love about the idea is that you can add lasers in the future as your experimental needs grow. (Or your budget does.)
The other aspect I love is that it’s just plug and play! If I were on vacation when a new laser arrived, anyone in lab should be able to add it to this system.
We also purchased the LaserBox, which supplies power, cooling, and separate digital/analog control to 5 lasers.
The new system just sits on the shelf. It’s tiny:
Here it is in action. The lasers were being triggered and sequenced by the camera and an ESio board, so they were running so fast I had to jiggle my iPhone in order to see the different colors.
One problem that I have faced is that the throughput is lower than spec (should be 60%+, and it’s down at 40%). Coherent is going to repair or replace the unit. And fortunately, we’re only running the lasers at 10% or less for most experiments currently, so there’s no rush to get the throughput higher! (Edit: Coherent immediately replaced the unit and it’s now up to the correct throughput.)
If you’re ever in Genentech Hall UCSF and want a quick demo, drop me a line!
- Flexibility to add laser lines or upgrade lasers in the future at no additional cost (besides the pigtailed laser itself) and no downtime
- Super easy installation
- Cheaper than many of the other turn-key boxes
- No aligning or maintenance needed
- Each laser can be separately triggered and modulated (digital and/or analog)
- Replaceable output fiber if it gets damaged (although it might not be as high-throughput as the original fiber)
- Small and light, so it’s easy to find a place for it in any lab
- No dual-fiber output option
- Two boxes and some fibers going between the two makes it a little less portable than some of the other small boxes
- No space to add optics (e.g. polarizers) in launch
- Fans for LaserBox are not silent
- Power and emission LEDs are too bright
- NA of the Coherent fiber is slightly smaller than the Nikon TIRF illuminator expects, but the effect is barely observable (Coherent is working on a second fiber option that would better match the TIRF illuminator)
Bottom line: I'd definitely recommend the Galaxy if your primary goals are color flexibility and simplicity. If you want something more turn-key (and probably more stable, though I can't speak to that yet), there are other boxes to consider: Spectral ILE, Vortran Versa-Lase, Toptica MLE, and so on. Also, if you need two (or more) fiber outputs, the Galaxy doesn't have that option.
Edit 11/10/2014: I’ve found one issue. The NA of the Coherent fiber is smaller (0.055) than the standard Oz Optics fiber that Nikon uses for the TIRF launch (0.11). That means that the illumination is more compact at the sample. Because the beam is Gaussian shaped, that means that the illumination is less flat (i.e. very bright in the center and darker on the edges). I’m going to try a solution using a second fiber with the correct 0.11 NA and an Oz Optics AA-300 lens style universal connector. I’ll update if this works…
Edit 3/5/15: So it turns out that the NA difference isn’t that huge. Most of the discrepancy is just a difference in the way the two manufacturers report the NA. Not only that, but in practice the NA difference makes a tiny change in the illumination area in TIRF. I wouldn’t let the different NA stop you from considering this product. Also, Coherent is working on second fiber option that would even better match the TIRF illuminator.
Edit 7/30/15: The LaserBox has a 50 Ohm input impedance for digital modulation (2 kOhm for analog), because it needs to be able to be driven at up to 150 MHz, according to Coherent. This makes controlling the digital TTL with an Arduino a challenge, because the Uno is rated for 40 mA max per pin. The ESio board (and maybe the TriggerScope?) can handle the higher currents. That said, the Arduino Uno seems to handle the higher current draw even though it's not spec'ed to: I have a lot of anecdotal evidence that you can use an Arduino to control Obis lasers. (Maybe not 2 lines simultaneously?)
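The arithmetic behind that concern is just Ohm's law: a 5 V logic high into a 50 Ohm input draws far more than the Uno's per-pin rating.

```python
# Ohm's law sanity check for driving the LaserBox digital input
# directly from a microcontroller pin.
V_TTL = 5.0      # Arduino Uno logic-high voltage, volts
R_INPUT = 50.0   # LaserBox digital modulation input impedance, ohms
I_mA = V_TTL / R_INPUT * 1000  # current drawn from the pin, mA

print(I_mA)  # 100.0 mA, well above the Uno's 40 mA per-pin rating
```

So each digital line asks the pin for ~100 mA, 2.5x its rating, which is why driving two lines at once seems especially risky.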
I really want a plasma cleaner, for cleaning coverslips and activating glass for PDMS bonding, but they cost thousands of dollars. I thought that was a lot of money for a glorified microwave. So I made my own.
Drill a few holes in glass:
Make a PDMS seal (thanks Kate):
Glue the chamber:
We’re ready to go!
Fill the chamber with argon, evacuate it, turn on the microwave oven, and … voila! … a plasma:
Below are slides before (left) and after (right) plasma treatment. You can see the contact angle of water is dramatically reduced.
Well, not really. I found that the plasma really only stays lit with argon. When I flow air in, the plasma extinguishes, and it also burns some of the rubber hoses, which adds more dirt to my slides than I want.
Conclusion: don’t do this at home. :)
(Well, that might be a little harsh. It does work well to bond PDMS to glass. And I’ll try a longer etch sometime to see if it will ever clean the coverslips.)
With TIRF and lasers on many fluorescence microscopes these days, there's a huge risk of seriously damaging your vision. Not so much from a stray beam (which is probably diffuse, or your blink reflex will be faster than the damage threshold), but more from looking in the eyepiece without the proper filters in place. A reflected laser beam focused by the eyepiece lenses right onto your retina can be very damaging.
(That happened at Berkeley a few years ago, and EH&S asked everyone to take the eyepieces off their TIRF scopes. I removed only one, so that you'd only lose one eye.)
Interlocks between your scope port setting and your laser are one option. But that means you can't ever look at your sample with your eyes (at least not the fluorescence). The elegant solution is to put a multi-band emission filter in your eyepiece tube to block any laser light:
I also printed some other parts for our TE2000. After we upgraded our epi illumination source from a Hg lamp to a Lumencor Spectra-X LED, we no longer needed the ND filter sliders on the illuminator tube, because the LED intensity is easily controlled by software. I’ve always hated those sliders, because they are easy to accidentally knock into the wrong position. That, and they aren’t encoded into the image metadata, so you have no idea what slider settings you had when you look at an image 3 months later!
So I removed the ND sliders and replaced them with a nice plug to block the light.
I have my 3D designs on the NIH 3D Print Exchange.
Of course, Nico makes beautiful laser-cut boxes for his Arduino, and Kurt has a nice 3D-printed box. But I think I’ll stick to this reduce/reuse/recycle approach. :)
UPDATE: I guess I’m not the only one. Labrigger posted a similar pic!
UPDATE 2: I made a bigger one to fit two Arduinos:
Before hardware syncing:
For more details: https://micro-manager.org/wiki/Hardware-based_synchronization
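The reason hardware syncing helps is that it removes per-frame software overhead (shutter commands, polling, serial round-trips) from the acquisition loop. A rough back-of-the-envelope sketch, with illustrative overhead numbers rather than measurements from our scope:

```python
def frame_rate(exposure_ms, per_frame_overhead_ms):
    """Achievable frame rate given per-frame overhead. With hardware
    triggering, the controller sequences devices itself, so software
    overhead drops toward zero and the camera runs near its
    exposure-limited rate."""
    return 1000.0 / (exposure_ms + per_frame_overhead_ms)

software = frame_rate(20, 30)   # ~30 ms of software overhead -> 20 fps
hardware = frame_rate(20, 0.5)  # near-zero overhead -> ~49 fps
```

The exact overheads depend on your devices and computer, but the shape of the gain is the same: the shorter your exposure, the bigger the relative win from hardware sequencing.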
EDIT: And now incorporating a Sutter TLED transmitted light: