Towards digital photon counting cameras for single-molecule optical nanoscopy

Abstract

Background

Optical nanoscopy based on separation of single molecules by stochastic switching and subsequent localization allows surpassing the diffraction limit of light. The growing pursuit towards live-cell imaging using nanoscopy demands advancements in both science and technology.

Results

In this article, we provide an overview of the technological advancements in the development of scientific cameras used for nanoscopy. We discuss the prospects of a novel digital photon counting camera based on a single-photon avalanche diode (SPAD) array for optical nanoscopy. Numerical simulations are used to evaluate and compare different scientific cameras with respect to their performance in single-molecule identification and localization.

Conclusion

A SPAD array camera with single-photon sensitivity and zero read-out noise allows for the detection of extremely weak signals at ultra-fast imaging speeds. With temporal resolution on the order of microseconds, a SPAD array camera offers great potential for live-cell super-resolution imaging.

Introduction

The resolution of conventional fluorescence microscopes is limited by the diffraction of light. Optical nanoscopy bypasses this limitation by adopting either ensemble-based techniques such as stimulated emission depletion (STED) and structured illumination microscopy (SIM), or single-molecule nanoscopy-based techniques such as stochastic optical reconstruction microscopy (STORM) and photoactivated localization microscopy (PALM) (Galbraith and Galbraith 2011; Heilemann 2010; Hell 2009). In this article, we focus on single-molecule nanoscopy. Single-molecule nanoscopy is based on techniques that temporally separate the activation of switchable fluorescent molecules (which can switch between ‘ON’ (fluorescent) and ‘OFF’ (dark) states) over a certain space, resulting in diffraction-limited spots, which are localized with a high degree of precision (Betzig et al. 2006; Folling et al. 2008; Heilemann et al. 2008; Hess et al. 2006; Rust et al. 2006). These techniques achieve a lateral resolution on the order of tens of nanometers (Betzig et al. 2006).

Today, there is a growing pursuit to apply single-molecule nanoscopy to live-cell imaging with a spatial resolution comparable to that of electron microscopy (Sauer 2013; Lakadamyali 2013). One of the main factors limiting live-cell imaging with single-molecule nanoscopy is the imaging speed (Jones et al. 2011). Imaging speed is governed by several factors, including the photo-physical properties of the fluorescent molecules and the speed of the camera. Recent studies have shown progress in the development of fluorescent molecules that are applicable to live-cell imaging with single-molecule nanoscopy (Ries et al. 2012; Lukinavicius et al. 2013). Furthermore, the introduction of scientific complementary metal oxide semiconductor (sCMOS) cameras has enabled high-speed live-cell imaging possibilities in super-resolution (Huang et al. 2013). Here, we review the technological advancements and evolution of scientific cameras, including their contributions towards single-molecule nanoscopy. We introduce a digital photon-counting camera that uses a SPAD array. We also study the performance of these cameras for single-molecule identification and localization purposes by using numerical simulations, and outline future prospects.

Background

Single-molecule nanoscopy operates on the principle of stochastic separation and activation of fluorescent molecules in time. Scientific cameras are used to capture the entire sequence of events over time. The activated fluorescent molecules appear as diffraction-limited spots in the image frames captured by the camera. These spots appear randomly in space (depending on the position of the sample objects) and time (depending on the switching kinetics of fluorescent molecules). The spots are identified and then localized with a certain degree of precision. Therefore, single-molecule nanoscopy techniques heavily rely on the spatio-temporal information captured by the camera for the reconstruction of a super-resolution image by using localization algorithms.

Spatial information is obtained from the photon intensity distribution between pixels of a camera. Spatial information produced by the camera is influenced by several factors, including the photon-detection efficiency (PDE) and inherent noise sources in the camera. Localization algorithms take the photon-intensity distribution between pixels into account when localizing a molecule. Non-uniformities between pixels in both sensitivity and noise degrade the quality of the obtained information. The signal-to-noise ratio (SNR) is used as a measure to estimate the quality of the obtained information. It has been established that the localization precision heavily depends on the SNR of the data (Quan et al. 2010; Rieger and Stallinga 2013).

The temporal information obtained by the camera is mainly dictated by the speed of the camera. It provides crucial information for localization algorithms when identifying diffraction-limited spots at a given point in time. A high-speed camera with high temporal resolution can be useful in resolving ambiguities in the identification of diffraction-limited spots. For example, it can be used to identify and distinguish overlapping spots that appear when two fluorescent molecules located very close to each other are activated at the same time (Barsic and Piestun 2013). Apart from this, higher temporal resolution is of increasing importance because of the current focus on high-speed, live-cell optical nanoscopy.

High-speed cameras are used for capturing images within very short time intervals. A fast read-out of captured images demands faster operation of the read-out electronics, which leads to an increase in the read-out noise of the camera (Robbins 2011). With extremely short exposure times, weak signals are likely to be concealed within the read-out noise of the camera. Therefore, for a given application, it is the read-out noise that limits the speed of operation of a camera. Over the years, camera technologies have undergone several architectural modifications to achieve high speeds of operation. Here, we provide an overview of the operating principles and architectural organization of these scientific cameras.

State-of-the-art cameras

Two of the most frequently applied cameras on the market today are electron multiplying charge coupled device (EMCCD) cameras and sCMOS cameras. These cameras share a common basic principle, which can be divided into three main phases of operation:

  1) Photon detection: photon-charge conversion

  2) Charge sensing: (pre/post) amplification and voltage conversion

  3) Quantification

1) Photon detection

The photon detection and charge conversion is probably the most critical part of any image sensor. The quantum efficiency (QE) is a measure of the probability that the photon detector generates electrical charge upon the absorption of an incident photon. The QE depends on the intrinsic properties of the detector material. Semiconductor materials like silicon are used for the manufacturing of the detectors, owing to their high QE (>90%) over a wide spectral range (Blanc et al. 2009). Therefore, semiconductor devices like metal-oxide semiconductor (MOS) capacitors and pinned photo-diodes are widely used to manufacture the detectors for cameras (Blanc et al. 2009). The detector forms the heart of a pixel in a camera. In the case of EMCCDs, the entire area of the pixel consists of the detector only. Therefore, the PDE of an EMCCD camera is equivalent to the QE of the detector. In the case of sCMOS cameras, the pixels consist of not only the detector, but also some supporting electronic devices. These electronic devices, which are not sensitive to light, are integrated on the same horizontal plane as the detector. Hence, not the entire pixel area is sensitive to light. The ratio between the actual sensitive area (that of the detector) and the total area of the pixel is termed the fill factor (FF). Quantitatively, the PDE of an sCMOS camera is therefore the product of the FF and the QE of the detector. Hence the PDE of sCMOS cameras is relatively lower than that of EMCCD cameras.

2) Charge sensing

Charge sensing and voltage conversion are required for the generation of a signal suitable for image processing (Robbins 2011). Conversion of charge to voltage is achieved by a ‘source follower’ (pre-amplifier) circuit. The source follower circuit typically consists of a transistor with a known conversion gain factor (CGF). In the case of EMCCD cameras, an additional conversion step is performed with an electron multiplication stage, which boosts the SNR by amplifying the signal to levels at which the read-out noise becomes negligible.

3) Quantification

Analog-to-digital converters (ADCs) are used to quantify the incident number of photons per pixel. The voltage levels (obtained after charge-to-voltage conversion) per pixel are sensed by the ADCs to generate a corresponding grayscale intensity value (ADU). Photon counts per pixel are then estimated by taking into account the gains and offset values of the ADCs, amplifier gain and conversion gain factor.
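To make this conversion chain concrete, the following minimal Python sketch inverts it. The parameter values (ADC offset, conversion gain, EM gain) are illustrative placeholders, not the calibration of any particular camera discussed here.

```python
import numpy as np

def adu_to_photoelectrons(adu, adu_offset=100.0, e_per_adu=0.45, em_gain=1.0):
    """Estimate detected photo-electrons per pixel from raw grayscale values (ADU).

    Illustrative only: a real camera needs a per-device calibration of the ADC
    offset, the conversion gain (e-/ADU) and, for EMCCDs, the mean EM gain.
    """
    electrons = (adu - adu_offset) * e_per_adu / em_gain
    return np.clip(electrons, 0.0, None)

frame = np.array([[102.0, 118.0], [240.0, 99.0]])      # toy 2x2 frame in ADU
print(adu_to_photoelectrons(frame))                    # CCD/sCMOS-like (no EM gain)
print(adu_to_photoelectrons(frame, em_gain=300.0))     # EMCCD-like settings
```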

SPAD array cameras

SPADs are avalanche photo-diodes which are sensitive to single photons. A 2-dimensional arrangement of SPADs together with the required supporting electronics forms a SPAD array camera. The working principle of a SPAD array camera can be understood by closely studying the three operating phases of the SPAD device itself.

  1) Photon detection: avalanche generation

  2) Avalanche detection and quenching

  3) Recharging

1) Photon detection

SPADs are avalanche photo-diodes operated at a working point beyond the reverse-bias breakdown voltage of the diode. Such a mode of operation is referred to as “Geiger mode”. In this mode of operation, the diode is sensitive to the detection of single photons (Cova and Ghioni 2011; Charbon and Fishburn 2011; Rochas 2003; Fishburn 2012; Cova et al. 1982). In Geiger mode, a fixed region within the diode is subjected to a very high electric field. When a photon is incident on this high-field region, it triggers an avalanche of electric charges, which can be sensed to mark the event of an incident photon. The photon detection probability (PDP) refers to the probability that an incident photon triggers an avalanche. It depends on two factors: the QE of the photon detector and the avalanche probability. Typically, the PDP of state-of-the-art SPADs ranges between 40% and 50% (Charbon 2007). Moreover, just like the sCMOS camera, a SPAD array camera requires pixel-level integration of electronic devices for its operation, which leads to issues related to the FF. Quantitatively, the PDE of the SPAD array camera is therefore reduced to the product of the PDP and the FF.
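The PDE relations for the three detector types can be summarized in a few lines. The numbers below are illustrative values within the ranges quoted in the text, not the specifications of any particular device (the simulations later simply assume a fixed PDE).

```python
def pde_emccd(qe):
    # The whole pixel is photosensitive, so PDE equals the detector QE.
    return qe

def pde_scmos(qe, fill_factor):
    # In-pixel electronics shrink the sensitive area: PDE = QE x FF.
    return qe * fill_factor

def pde_spad(pdp, fill_factor):
    # The PDP already folds in QE and avalanche probability: PDE = PDP x FF.
    return pdp * fill_factor

print(pde_emccd(0.90))          # ~0.90
print(pde_scmos(0.90, 0.80))    # ~0.72
print(pde_spad(0.45, 0.60))     # ~0.27 (before microlens or 3D-stacking gains)
```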

2) Avalanche detection and quenching

As described earlier, the incidence of a photon is marked by the generation of an avalanche within the SPAD device. The detection of this avalanche can be achieved either by measuring the voltage drop across the diode with the help of a ballast resistor or by measuring the current across a low-resistance path. In both cases, an abrupt change in the signal level (voltage or current) is observed. The pulse shaping of the measured signal can be accomplished by using a comparator, which is usually a minimum-sized inverter or a thresholder (Charbon 2007; Charbon and Fishburn 2011). Every avalanche is marked as an event with the generation of a digital pulse. The digital pulses are recorded as counts; SPADs can thus be used as photon-counting devices. Besides, the recorded pulse can also provide information on the arrival time of photons (Cova et al. 1981). Upon recording the event, it is imperative to stop or quench the avalanche to avoid damage to the diode. The avalanche is quenched by using a ballast resistor (passive quenching) or by an electronic circuit (active quenching) (Cova et al. 1982, 1996).

3) Recharging

In the final phase of operation, the operating voltage of the diode is restored after passive or active quenching. The diode is then ready to detect the next photon. It is worth noting that SPADs do not detect photons during the quenching and recharge phases of operation. This inactive time of the diode is termed ‘dead time’. The magnitude of the dead time depends on the speed at which the quenching and recharging take place. With very fast electronics, the dead time is in the range of 20–50 ns (Cova et al. 1982, 1996; Charbon 2007).
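The dead time caps the count rate of an individual SPAD. As a back-of-the-envelope illustration, the sketch below assumes the common non-paralyzable dead-time model, which is an assumption on our part rather than something discussed in the text.

```python
def true_rate(measured_cps, dead_time_s=50e-9):
    """Estimate the true photon rate from the measured count rate of one SPAD,
    assuming a non-paralyzable dead time: R_true = R_meas / (1 - R_meas * t_dead)."""
    return measured_cps / (1.0 - measured_cps * dead_time_s)

# With a 50 ns dead time, a measured 1 Mcps corresponds to ~1.05 Mcps true rate;
# the hard saturation limit of a single SPAD is 1 / 50 ns = 20 Mcps.
print(true_rate(1e6))
```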

SPADs can be manufactured in the widely used complementary metal oxide semiconductor (CMOS) technology (Rochas et al. 2003). SPAD array cameras that comprise large-scale arrays of SPAD devices in combination with quenching and recharge electronics have already been realized (Niclass et al. 2008; Veerappan et al. 2011; Burri et al. 2013). With single-photon sensitivity and photon arrival-time information, these cameras have found application in fluorescence lifetime imaging measurements (FLIM) (Gersbach et al. 2010; Stoppa et al. 2009; Powolny et al. 2013). Recently, video-rate FLIM using low-cost SPAD array cameras has also been demonstrated (Li et al. 2011).

Single-photon sensitivity

As discussed earlier, high-speed imaging with extremely short exposure times demands high sensitivity of the detector to capture extremely weak signals. Therefore, single-photon sensitivity is essential. Besides, the camera must also be able to read out the detected (weak) signals despite the inherent noise sources within the camera; this is one of the most important requirements of a camera when it comes to high-speed live-cell imaging.

EMCCD camera

The state-of-the-art EMCCD cameras are capable of providing single-photon sensitivity. EMCCDs with the stochastic electron multiplication mechanism amplify the weak signals generated by the incidence of a single photon above the noise limits. In practice, a thresholding scheme is used to distinguish single-photon events from false positives occurring due to noisy events. The threshold is set by taking into account the mean gain of the electron multiplication and the noise sources of the camera, including read-out noise (Basden et al. 2003). This enables photon counting with the use of EMCCDs. However, the limited frame rate of the EMCCD camera (~100 fps) does not allow high speed imaging.
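A minimal sketch of this thresholding idea is given below. The threshold factor and noise values are illustrative choices on our part, not the scheme prescribed by Basden et al. (2003).

```python
import numpy as np

def emccd_photon_counting(frame_e, read_noise_e=30.0, k=5.0):
    """Binary photon counting on an EMCCD frame given in output-register electrons.

    A pixel is flagged as a single-photon event when it exceeds k standard
    deviations of the read noise; with a high mean EM gain (here ~300) the
    amplified single-photon signal sits well above this threshold.
    """
    return (frame_e > k * read_noise_e).astype(np.uint8)

rng = np.random.default_rng(0)
frame = rng.normal(0.0, 30.0, size=(4, 4))   # read noise only
frame[2, 1] += 300.0                         # one amplified photo-electron
print(emccd_photon_counting(frame))
```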

sCMOS camera

The state-of-the-art sCMOS cameras do not have single-photon sensitivity (Flower 2011). The inherent dark noise and read-out noise of sCMOS cameras are significantly higher than the weak signal generated by the incidence of a single photon. This makes it difficult to separate single-photon events from spurious noise events. Although sCMOS cameras allow high-speed imaging, with frame rates of up to several hundred fps, the detection of single photons is still not a reality.

SPAD array camera

The intrinsically photon-counting SPAD array camera offers single-photon sensitivity. The photon counts are read out as digital signals. Therefore, the read-out noise of such cameras is zero (Cova and Ghioni 2011). SPAD array cameras enable high-speed imaging at up to several thousand fps (Veerappan et al. 2011), a capability needed for live-cell optical nanoscopy.

It can be concluded that the three different types of cameras have their own sets of characteristics. Here, we compare the performance of these types of cameras for single-molecule nanoscopy.

Methods

Numerical simulations

Numerical simulations were performed to compare and evaluate the performance of four different types of scientific cameras for single-molecule identification and localization. Four different cameras, namely charge coupled device (CCD), EMCCD, sCMOS and SPAD array cameras, were considered. The noise models (Table 1) associated with each of the four cameras were taken into account for the purpose of simulation. All simulations were performed using Matlab (The MathWorks, Natick, USA). A Gaussian-based point spread function model was used to mimic a diffraction-limited spot (Stallinga and Rieger 2010). The center of the diffraction-limited spot was placed at a random location in an image plane of 32 × 32 pixels. The pixel size was set at 100 nm. Noise originating from the background of the sample during imaging was set at a mean value of 10 photons/pixel. Typical values for the camera specifications were considered, as reported in Table 2. The simulated camera images were then analyzed with respect to two major issues, namely diffraction-limited spot identification and spot localization precision.

Table 1 Statistical models considered for numerical simulation with the use of CCD, EMCCD, sCMOS and SPAD array cameras
Table 2 Numerical specification values considered for simulation
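The original simulations were written in Matlab and used the specifications of Table 2 (not reproduced here). The Python sketch below illustrates the same ingredients under stated assumptions: the PSF width, read-noise values and the Gamma approximation of electron multiplication are our illustrative choices, not the authors’ settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_image(n_photons, sigma_px=1.3, size=32, bg=10.0):
    """Expected photons/pixel: Gaussian PSF at a random sub-pixel position plus a
    uniform background of 10 photons/pixel (pixel size 100 nm, 32 x 32 frame)."""
    x0, y0 = rng.uniform(10.0, size - 10.0, 2)
    y, x = np.mgrid[0:size, 0:size]
    psf = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma_px ** 2))
    return n_photons * psf / psf.sum() + bg, (x0, y0)

def camera_frame(expected, pde, read_noise_e=0.0, em_gain=1.0):
    """One simulated frame: Poisson shot noise on the detected photons, optional
    EM multiplication (Gamma approximation) and Gaussian read-out noise.
    Returns an input-referred estimate in photo-electrons."""
    electrons = rng.poisson(expected * pde).astype(float)
    if em_gain > 1.0:
        electrons = rng.gamma(np.maximum(electrons, 1e-12), em_gain)
    out = electrons + rng.normal(0.0, read_noise_e, electrons.shape)
    return out / em_gain

expected, centre = expected_image(n_photons=500)
frames = {
    "CCD":   camera_frame(expected, pde=0.7, read_noise_e=10.0),
    "EMCCD": camera_frame(expected, pde=0.9, read_noise_e=50.0, em_gain=300.0),
    "sCMOS": camera_frame(expected, pde=0.7, read_noise_e=1.5),
    "SPAD":  camera_frame(expected, pde=0.7, read_noise_e=0.0),
}
```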

Spot identification

Identification of diffraction-limited spots in the images produced by the cameras formed the first step in data processing for the reconstruction of super-resolution images. The simulated images produced by the cameras were filtered using a 5 × 5 Gaussian kernel. Based on the pixel intensity values, the local maxima were then identified in the image. In order to distinguish the local maxima arising from diffraction-limited spots from false positives arising from background and camera noise sources, a thresholding operation was performed. A fixed thresholding criterion was adopted for the simulated images from all cameras (CCD, EMCCD, sCMOS and SPAD array), on the basis of the number of photons and the background noise, as reported by Ma et al. (2013). The threshold was set at a value equal to 3 times the standard deviation of the background noise, and pixel values were compared with the set threshold. Pixels satisfying the criterion were selected for localization. Prior knowledge of the location of the spot on the basis of the simulation settings was used to verify whether the diffraction-limited spot was correctly identified or not. It has to be noted that the spot identification performance of the cameras was strongly influenced by the type of spot identification criterion adopted. Here, we only compared the performance of the different cameras based on a fixed thresholding criterion.
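A minimal sketch of this identification step, assuming a Poissonian background so that its standard deviation equals the square root of its mean (a simplification of the criterion used in the simulations):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def identify_spots(frame, bg_mean=10.0, k=3.0):
    """Smooth the frame with a ~5x5 Gaussian kernel, find local maxima and keep
    those exceeding the background mean by k background standard deviations."""
    smoothed = gaussian_filter(frame, sigma=1.0, truncate=2.0)     # 5x5 kernel
    local_max = smoothed == maximum_filter(smoothed, size=5)
    threshold = bg_mean + k * np.sqrt(bg_mean)                     # Poisson background
    ys, xs = np.nonzero(local_max & (smoothed > threshold))
    return list(zip(xs, ys))

frame = np.random.poisson(10.0, (32, 32)).astype(float)            # background only
frame[16, 16] += 200.0                                             # add a bright spot
print(identify_spots(frame))
```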

Spot localization

Localization of the identified diffraction-limited spots was the next step in super-resolution image reconstruction. The precision of localization is one of the key factors that determine the final resolution of the super-resolution image. Theoretical limits of the localization precision were estimated by computing the Cramér-Rao lower bound (CRLB) (Ober et al. 2004; Rieger and Stallinga 2013). Furthermore, the actual localization precision was estimated by using a localization algorithm based on Gaussian fitting by maximum likelihood estimation (MLE) with a priori knowledge of the noise source models of the different cameras.
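For orientation, the widely used closed-form approximation of the MLE localization precision for a Gaussian PSF (Mortensen et al. 2010) can be evaluated directly. The PSF width used below is an illustrative value, and this approximation is not the exact CRLB computed in the simulations.

```python
import numpy as np

def mle_precision_nm(n_photons, bg_per_pixel=10.0, sigma_nm=130.0, pixel_nm=100.0):
    """Approximate 1D localization precision (standard deviation, nm) for MLE
    fitting of a Gaussian PSF, following Mortensen et al. (2010):
    var = (sigma_a^2 / N) * (16/9 + 8*pi*sigma_a^2*b / (N*a^2))."""
    sigma_a2 = sigma_nm ** 2 + pixel_nm ** 2 / 12.0   # pixelation-corrected PSF width
    var = sigma_a2 / n_photons * (
        16.0 / 9.0 + 8.0 * np.pi * sigma_a2 * bg_per_pixel / (n_photons * pixel_nm ** 2)
    )
    return np.sqrt(var)

for n in (100, 500, 2000):
    print(n, "photons:", round(float(mle_precision_nm(n)), 1), "nm")
```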

For a fair comparison, the same spot identification criterion and localization algorithms were used for the purpose of evaluating the performance of the 4 different cameras under study.

Results

Spot identification

For a fixed sample background, the identification of the spot became much easier with increasing numbers of incident photons when a fixed thresholding scheme was used. The detected number of photons scaled with the PDE of the camera. However, the simulation results showed that spot identification did not scale with the PDE of the camera, as shown in Figure 1a. CMOS-based cameras (sCMOS and SPAD array) with a relatively lower PDE of 70% exhibited a performance similar to that of the EMCCD camera with a PDE of 90%. Consequently, CMOS-based cameras with a PDE of 90% were capable of outperforming EMCCDs (Figure 1b). Although the EMCCD camera exhibits a very high PDE (~90%), it suffers from ‘excess noise’. Excess noise is the additional noise introduced by the electron multiplication stage in EMCCD cameras (Robbins and Hadwen 2003). As stated earlier, EMCCD cameras are equipped with an electron multiplication stage to amplify the signal and boost the SNR. On the downside, this stochastic electron multiplication stage also amplifies the inherent shot noise in the signal, leading to excess noise. The excess noise factor (ENF) is used as a measure to quantify the excess noise. Typically, at high mean gain of the electron multiplication in EMCCDs, the value of the ENF is √2. This effect can be considered as a reduction in PDE by 50% (Mortensen et al. 2010; Huang et al. 2013). Therefore, the excess noise does not allow the EMCCD camera to perform to its full potential. Furthermore, a clear distinction can be made with respect to CCD cameras, as shown in Figure 1a. The CCD camera, despite having a PDE of 70%, performs much worse than all the other cameras. This indicates that a significantly high read-out noise degrades the performance of a camera in spot identification. The difference between sCMOS and SPAD array cameras is almost negligible, due to the relatively small difference in their read-out noise margins.
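To make the equivalence explicit: in the shot-noise-limited regime (background, dark counts and read-out noise neglected), Equation 1 below reduces to

$$\mathrm{SNR}_{\mathrm{EM}} \approx \frac{\mathrm{PDE}\cdot n}{\sqrt{\mathrm{ENF}^{2}\cdot \mathrm{PDE}\cdot n}} = \sqrt{\frac{\mathrm{PDE}\cdot n}{\mathrm{ENF}^{2}}} = \sqrt{\tfrac{1}{2}\,\mathrm{PDE}\cdot n}\qquad(\mathrm{ENF}^{2}=2),$$

which is the SNR of an ideal, excess-noise-free detector with half the PDE.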

Figure 1. Percentage of spot identifications versus the number of incident photons on the diffraction-limited spot. (A) Comparison between CCD, EMCCD, sCMOS and SPAD array cameras (with 70% photon detection efficiency). The CCD camera had the worst performance owing to its high read-out noise, whereas the EMCCD camera with higher photon-detection efficiency (~90%) performed worse than expected owing to excess noise. (B) Comparison of the performance of SPAD array cameras with EMCCD for different photon detection efficiencies. The SPAD array camera with a PDE of 90% performed better than EMCCD cameras. A fixed thresholding criterion was adopted for the simulations. The results, and therefore the relative ranking of the cameras, can vary depending on the type of criterion used for the identification of a spot.

Both PDE and noise in the camera affect spot identification, depending on the background conditions and identification criterion. Therefore, the SNR is a good measure to study the spot identification. The SNR of the cameras is calculated by using Equation 1.

$$\mathrm{SNR} = \frac{\mathrm{PDE}\cdot n}{\sqrt{\mathrm{ENF}^{2}\cdot\left(\mathrm{PDE}\cdot n + n_{\mathrm{bg}} + \mathrm{dark\ noise} + \mathrm{CIC\ noise}\right) + \left(\dfrac{\mathrm{read\ noise}}{M}\right)^{2}}}\qquad(1)$$

where n: number of incident photons per pixel; n_bg: number of background photons per pixel; dark noise: dark noise from the pixel; CIC noise: clock-induced charge noise; read noise: read-out noise of the camera; M: multiplication gain factor; ENF: excess noise factor; PDE: photon detection efficiency.
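Equation 1 is straightforward to evaluate. The sketch below uses illustrative noise figures (not the Table 2 specifications) to compare the four camera types at a given photon count.

```python
import numpy as np

def snr(n, pde, enf=1.0, read_noise=0.0, m_gain=1.0, n_bg=10.0, dark=0.0, cic=0.0):
    """Per-pixel SNR according to Equation 1."""
    signal = pde * n
    noise = np.sqrt(enf ** 2 * (signal + n_bg + dark + cic) + (read_noise / m_gain) ** 2)
    return signal / noise

n = 50  # incident photons per pixel (illustrative)
print("CCD  ", snr(n, pde=0.7, read_noise=10.0))
print("EMCCD", snr(n, pde=0.9, enf=np.sqrt(2.0), read_noise=50.0, m_gain=300.0))
print("sCMOS", snr(n, pde=0.7, read_noise=1.5))
print("SPAD ", snr(n, pde=0.7))   # zero read-out noise
```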

Figure 2 shows that for the selected specifications, CMOS-based cameras exhibited a significantly higher SNR than CCD-based cameras, even at low mean numbers of photon counts per pixel. Differences in SNR between sCMOS and SPAD array cameras were almost negligible.

Figure 2. Signal-to-noise ratio: plot showing the signal-to-noise ratios (SNR) of CCD, EMCCD, sCMOS and SPAD array cameras. Excess noise in EMCCDs clearly affected their SNR. The effect of read-out noise is minimal at higher numbers of incident photons, while the multiplication of the shot noise in the signal in the case of the EMCCD diminished its SNR.

Spot localization

The identified spots were localized with a high degree of precision. The algorithm achieved a localization precision very close to the theoretical CRLB limits for all cameras, given their noise models. The localization precision depends on the total number of detected photons from the diffraction-limited spot. Therefore, the EMCCD camera, with a relatively higher PDE of 90%, was expected to show the best performance. However, the localization precision achieved was degraded by the excess noise present in EMCCD cameras. The localization precision achieved by the EMCCD camera was significantly worse than that of the sCMOS and SPAD array cameras, as shown in Figure 3. The performance of the EMCCD camera may be improved by limiting the excess noise. The ENF value, which determines the effect of the excess noise, depends on two important factors: the set value of the mean electron multiplication gain and the incident light levels. The ENF can be reduced by lowering the electron multiplication gain, but this is not desirable when EMCCDs are used for the detection of weak signals. Therefore, it is important to regulate the incident light levels per pixel to reduce the ENF. The ENF of EMCCDs, when operated in photon-counting mode, reduces to 1 when the number of incident photons per pixel is <1 (Basden et al. 2003). In particular, when each pixel is allowed to detect <1 photon on average, the localization precision can reach its best possible theoretical limits (Chao et al. 2013). This can be achieved by applying higher magnifications, which lead to smaller effective pixel sizes and hence fewer photons per pixel on average.

Figure 3. Localization precision versus the number of incident photons on the diffraction-limited spot. EMCCD cameras show a worse performance with respect to the localization precision when affected by excess noise. Localization precision is also affected by the read-out noise of the camera. The SPAD array camera with zero read-out noise shows superior performance when compared to cameras with the same PDE but a higher read-out noise. Therefore, in theory, SPAD array cameras with 90% PDE can achieve the highest (Poisson noise-limited) localization precision owing to the absence of the Gaussian/normal read-out noise component. For all cameras, localization precision points are plotted only when the spot detection capability of the corresponding camera is ≥90% for a given number of incident photons, to avoid outliers resulting from insufficient statistics.

The performance of the SPAD array and sCMOS cameras was superior to that of the CCD and EMCCD cameras (when affected by excess noise). Despite similar PDEs of 70%, a clear distinction was observed between the performance of the SPAD array, sCMOS and CCD cameras. The localization uncertainty was smallest for the SPAD array camera with zero read-out noise, while it was significantly larger for the CCD camera, which suffers from a relatively high read-out noise. This shows that a SPAD array camera with zero read-out noise is capable of providing the highest possible (Poisson noise-limited) localization precision for a given number of detected photons.

Discussion

Architectural trend

Numerical simulations showed that CMOS-based cameras perform well with respect to localization precision. When higher frame rates are needed, CMOS-based cameras offer greater benefits compared to CCD-based cameras. Therefore, CMOS-based cameras should be preferred for high-speed single-molecule localization nanoscopy. The paradigm shift from CCD-based to CMOS-based technology has been accompanied by an increasing integration of camera functionalities at the pixel level (Figure 4). CMOS technologies have allowed the integration of electronic transistors at the pixel level, leading to greater functionality. Most of the processing performed by the camera is now completed at the pixel level, leading to minimal processing requirements outside the pixel array. SPAD array cameras go even a step further by integrating most of the functionality at the detector level.

Figure 4. Architectural organization of scientific cameras: a trend indicating increasing pixel-level integration of camera functionalities. (A) (EM)CCD cameras have the simplest pixel architecture, consisting of only the detector, leading to higher efficiency and uniformity. (B) sCMOS cameras include the integration of electronic devices at the pixel level, leading to faster operation by trading off photon detection efficiency and uniformity between pixels. (C) SPAD array cameras have a powerful detector that enables the counting of single photons at high speed, making them the cameras of choice for live-cell nanoscopy.

Speed of the camera

Integration of electronics at the pixel level provides greater access to the pixels in the array, facilitating fast column-parallel read-outs, leading to higher frame rates and hence increased temporal resolution. Frame rates of up to 25 kfps, with exposure times in the order of microseconds per frame, are possible today with SPAD array cameras (Veerappan et al. 2011). Besides high frame rates, pixel-level integration has allowed larger numbers of pixels in the arrays, leading to megapixel cameras. This is a major improvement over EMCCD cameras, which suffer from low frame rates (~100 fps) due to their sequential read-out mechanism, which also limits the scalability of the number of pixels in the array. The limited frame rate and limited number of pixels in the array are the reason that large fields of view are not possible with EMCCDs. Imaging of high-speed dynamics, such as single-molecule tracking in live-cell imaging, therefore needs CMOS-based cameras rather than EMCCDs. Temporal information in the order of microseconds is much more useful for studying dynamic events. However, the benefits of CMOS-based cameras come at the cost of pixel non-uniformity, reduced PDE and data-handling complexity.

Noise non-uniformity

The read-out electronics in EMCCD-based cameras is the same for all pixels, as these cameras have a sequential read-out mechanism. Optimization and fine-tuning of the read-out electronics has led to a good deal of spatial uniformity. sCMOS cameras, in contrast, employ electronics at the pixel level. In this case, the integrated electronics comprising transistors are prone to manufacturing process variations, which lead to deviations in their performance and thus non-uniformity in the noise and sensitivity of the pixels. The deviation in noise and sensitivity occurs in a fixed pattern once the image sensor is manufactured. Such noise is termed fixed pattern noise (FPN). FPN can easily be corrected and compensated for by characterizing the image sensor after it has been manufactured. For single-molecule nanoscopy techniques, localization algorithms have been developed that take the pixel non-uniformities into account (Huang et al. 2013). In the case of SPAD arrays, the non-uniformity in sensitivity between pixels mainly depends on the performance of the detector itself. With a clean manufacturing process, SPAD arrays can therefore exhibit a high level of uniformity, with all pixels behaving identically (Niclass et al. 2005).
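A minimal sketch of the kind of per-pixel correction that such a characterization enables. The offset and gain maps below are synthetic, and the per-pixel variance map used by sCMOS-specific fitting algorithms (Huang et al. 2013) is deliberately left out.

```python
import numpy as np

rng = np.random.default_rng(2)

def correct_fpn(raw_adu, offset_map, gain_map):
    """Convert a raw sCMOS frame to photo-electrons using per-pixel calibration
    maps (offset in ADU, gain in e-/ADU) measured once after manufacturing."""
    return (raw_adu - offset_map) * gain_map

shape = (32, 32)
offset_map = 100.0 + rng.normal(0.0, 2.0, shape)            # per-pixel offsets (ADU)
gain_map = 0.46 * (1.0 + rng.normal(0.0, 0.03, shape))      # per-pixel gains (e-/ADU)
raw = rng.poisson(30.0, shape) / gain_map + offset_map      # synthetic raw frame
print(correct_fpn(raw, offset_map, gain_map).mean())        # ~30 photo-electrons
```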

Photon detection efficiency

The PDE remains one of the major factors determining the performance of a camera. While EMCCD cameras already have high PDEs, the PDE of CMOS-based cameras is primarily affected by the FF. The FF can be improved by using micro-lenses in front of every pixel, thereby concentrating the light onto the active area of the pixel. Furthermore, scaling down of manufacturing technology nodes leads to transistors becoming smaller and faster. The reduction in the size of transistors paves the way for fitting the same functionality within a pixel in a smaller area, leading to FF improvement. On the other hand, with the introduction of heterogeneous 3D integration in manufacturing technology, the electronics can now be placed in a different plane from the detector. In theory, this can lead to a FF close to 100%. Recently, a prototype of a 3D-stacked CMOS image sensor with a vertically assembled image signal processor was successfully realized (Coudrain et al. 2013).

Embedded data processing

Faster frame rates and read-outs also lead to challenges in data handling and management. Intelligent data-handling mechanisms with increased embedded processing are becoming more and more important. Therefore, it is also worth focusing on pixel-level data handling and processing techniques. State-of-the-art CMOS cameras are generally equipped with customizable integrated circuits in the form of field programmable gate arrays (FPGAs). FPGAs can be programmed to perform real-time embedded data processing (Grüll et al. 2011; Ma et al. 2013). In the context of nanoscopy, the implementation of localization algorithms in FPGAs can be considered a step forward.

Conclusion

The market of scientific cameras is witnessing a paradigm shift from CCD to CMOS technology. The shift has been accompanied by improvements in speed, number of pixels and pixel-level functionality. Both EMCCD and sCMOS cameras perform satisfactorily in single-molecule measurements, including nanoscopy. Moreover, sCMOS cameras, with their higher temporal resolution, are currently the cameras of choice for live-cell nanoscopic imaging. However, the read-out noise in sCMOS cameras is a bottleneck for combining single-photon sensitivity with higher imaging speeds.

CMOS-based digital photon counting cameras with SPAD arrays offer single-photon sensitivity with zero read-out noise. This paves the way for increasing the imaging speed in combination with the detection of weak signals at short exposure times. Reducing the read-out noise of sCMOS cameras requires a complex design of the analog read-out chains, whereas SPAD arrays, with their digital approach, are less complex and readily offer zero read-out noise. The absence of Gaussian read-out noise in a SPAD array camera enables localization algorithms to achieve higher localization precision, limited purely by the Poissonian noise components. However, SPAD array cameras currently have a limited PDE owing to a low FF and limited avalanche probabilities. Improvement of the PDE of SPADs largely depends on improved manufacturing technology. Efforts towards technology scaling and 3D heterogeneous integration of electronic devices hold great promise for FF improvement, resulting in higher PDEs for SPAD array cameras. With single-photon sensitivity, zero read-out noise, digital embedded processing capabilities and improved PDEs, SPAD arrays are the cameras of choice for live-cell nanoscopic imaging.

Abbreviations

SPAD:

Single photon avalanche diode

STED:

Stimulated emission depletion

SIM:

Structured illumination microscopy

STORM:

Stochastic optical reconstruction microscopy

PALM:

Photoactivated localization microscopy

sCMOS:

Scientific grade complementary metal oxide semiconductor

PDE:

Photon detection efficiency

SNR:

Signal-to-noise ratio

EMCCD:

Electron multiplying charge coupled device

QE:

Quantum efficiency

CGF:

Conversion gain factor

ADC:

Analog to digital converter

PDP:

Photon detection probability

FLIM:

Fluorescence lifetime imaging measurement

CCD:

Charge coupled device

CRLB:

Cramér-Rao lower bound

ENF:

Excess noise factor

FF:

Fill factor

FPN:

Fixed pattern noise

FPGA:

Field programmable gate array.

References

  • Barsic A, Piestun R: Super-resolution of dense nanoscale emitters beyond the diffraction limit using spatial and temporal information. Appl Phys Lett 2013, 102: 231103. 10.1063/1.4809834

  • Basden AG, Haniff CA, Mackay CD: Photon counting strategies with low-light-level CCDs. Mon Notices R Astron Soc 2003, 345: 985–991. 10.1046/j.1365-8711.2003.07020.x

  • Betzig E, Patterson GH, Sougrat R, Lindwasser OW, Olenych S, Bonifacino JS, Davidson MW, Lippincott-Schwartz J, Hess HF: Imaging intracellular fluorescent proteins at nanometer resolution. Science 2006, 313: 1642–1645. 10.1126/science.1127344

  • Blanc N, Giffard P, Seitz P, Buchschacher P, Nguyen V, Hoheisel M: Semiconductor image sensing. In More than Moore: creating high value micro/nanoelectronics systems. 1st edition. Edited by: Zhang QG, Roosmalen A. US: Springer; 2009:239–278.

  • Burri S, Stucki D, Maruyama Y, Bruschini C, Charbon E, Regazzoni F: Jailbreak imagers: transforming a single-photon image sensor into a true random number generator. Paper presented at the 14th International Image Sensor Workshop, Snowbird Resort, Utah, USA; 2013:12–16.

  • Castelletto SA, Degiovanni IP, Schettini V, Migdall AL: Reduced dead time and higher rate photon counting detection using a multiplexed detector array. J Mod Opt 2007, 54: 337–352. 10.1080/09500340600779579

  • Chao J, Ram S, Ward ES, Ober RJ: Ultrahigh accuracy imaging modality for super-localization microscopy. Nat Methods 2013, 10: 335–338. 10.1038/nmeth.2396

  • Charbon E: Will avalanche photodiode arrays ever reach 1 megapixel? Paper presented at the 11th International Image Sensor Workshop, Ogunquit, Maine, USA; 2007:246–249.

  • Charbon E, Fishburn MW: Monolithic single-photon avalanche diodes: SPADs. In Single-Photon Imaging. Springer Series in Optical Sciences, Volume 160. Edited by: Seitz P, Theuwissen APJ. Berlin Heidelberg: Springer; 2011:138–172.

  • Coudrain P, Henry D, Berthelot A, Charbonnier J, Verrun S, Franiatte R, Bouzaida N, Cibrario G, Calmon F, O’Connor I, Lacrevaz T, Fourneaud L, Flechet B, Chevrier N, Farcy A, Le-Briz O: 3D integration of CMOS image sensor with coprocessor using TSV last and micro-bumps technologies. In Electronic Components and Technology Conference (ECTC), IEEE 63rd, Las Vegas, Nevada, USA; 2013:674–682.

  • Cova SD, Ghioni M: Single-photon counting detectors. IEEE Photonics J 2011, 3: 274–277.

  • Cova S, Longoni A, Andreoni A: Towards picosecond resolution with single-photon avalanche diodes. Rev Sci Instrum 1981, 52: 408–412. 10.1063/1.1136594

  • Cova S, Longoni A, Ripamonti G: Active-quenching and gating circuits for single-photon avalanche diodes (SPADs). IEEE Trans Nucl Sci 1982, 29: 599–601.

  • Cova S, Ghioni M, Lacaita A, Samori C, Zappa F: Avalanche photodiodes and quenching circuits for single-photon detection. Appl Opt 1996, 35: 1956–1976. 10.1364/AO.35.001956

  • Fishburn MW: Fundamentals of CMOS single-photon avalanche diodes. Dissertation. Delft, The Netherlands: Delft University of Technology; 2012.

  • Flower B: Single photon CMOS imaging through noise minimization. In Single-Photon Imaging. Springer Series in Optical Sciences, Volume 160. Edited by: Seitz P, Theuwissen APJ. Berlin Heidelberg: Springer; 2011:173–209.

  • Folling J, Bossi M, Bock H, Medda R, Wurm CA, Hein B, Jakobs S, Eggeling C, Hell SW: Fluorescence nanoscopy by ground-state depletion and single-molecule return. Nat Methods 2008, 5: 943–945. 10.1038/nmeth.1257

  • Galbraith CG, Galbraith JA: Super-resolution microscopy at a glance. J Cell Sci 2011, 124: 1607–1611. 10.1242/jcs.080085

  • Gersbach M, Trimananda R, Maruyama Y, Fishburn M, Stoppa D, Richardson J, Walker R, Henderson RK, Charbon E: High frame-rate TCSPC-FLIM using a novel SPAD-based image sensor. In SPIE NanoScience + Engineering, International Society for Optics and Photonics, San Diego, California, USA; 2010:77801H.

  • Grüll F, Kirchgessner M, Kaufmann R, Hausmann M, Kebschull U: Accelerating image analysis for localization microscopy with FPGAs. 21st International Conference on Field Programmable Logic and Applications, Chania, Greece; 2011:1–5.

  • Heilemann M: Fluorescence microscopy beyond the diffraction limit. J Biotechnol 2010, 149: 243–251. 10.1016/j.jbiotec.2010.03.012

  • Heilemann M, van de Linde S, Schuttpelz M, Kasper R, Seefeldt B, Mukherjee A, Tinnefeld P, Sauer M: Subdiffraction-resolution fluorescence imaging with conventional fluorescent probes. Angew Chem Int Ed Engl 2008, 47: 6172–6176. 10.1002/anie.200802376

  • Hell SW: Microscopy and its focal switch. Nat Methods 2009, 6: 24–32. 10.1038/nmeth.1291

  • Hess ST, Girirajan TP, Mason MD: Ultra-high resolution imaging by fluorescence photoactivation localization microscopy. Biophys J 2006, 91: 4258–4272. 10.1529/biophysj.106.091116

  • Huang F, Hartwich TM, Rivera-Molina FE, Lin Y, Duim WC, Long JJ, Uchil PD, Myers JR, Baird MA, Mothes W, Davidson MW, Toomre D, Bewersdorf J: Video-rate nanoscopy using sCMOS camera-specific single-molecule localization algorithms. Nat Methods 2013, 10: 653–658. 10.1038/nmeth.2488

  • Jones SA, Shim SH, He J, Zhuang X: Fast, three-dimensional super-resolution imaging of live cells. Nat Methods 2011, 8: 499–505. 10.1038/nmeth.1605

  • Lakadamyali M: Super-resolution microscopy: going live and going fast. ChemPhysChem 2013, 15: 630–636.

  • Li DU, Arlt J, Tyndall D, Walker R, Richardson J, Stoppa D, Charbon E, Henderson RK: Video-rate fluorescence lifetime imaging camera with CMOS single-photon avalanche diode arrays and high-speed imaging algorithm. J Biomed Opt 2011, 16: 096012. 10.1117/1.3625288

  • Lukinavicius G, Umezawa K, Olivier N, Honigmann A, Yang G, Plass T, Mueller V, Reymond L, Correa IR Jr, Luo Z, Schultz C, Lemke EA, Heppenstall P, Eggeling C, Manley S, Johnsson K: A near-infrared fluorophore for live-cell superresolution microscopy of cellular proteins. Nat Chem 2013, 5: 132–139. 10.1038/nchem.1546

  • Ma H, Kawai H, Toda E, Zeng S, Huang ZL: Localization-based super-resolution microscopy with an sCMOS camera part III: camera embedded data processing significantly reduces the challenges of massive data handling. Opt Lett 2013, 38: 1769–1771. 10.1364/OL.38.001769

  • Mortensen KI, Churchman LS, Spudich JA, Flyvbjerg H: Optimized localization analysis for single-molecule tracking and super-resolution microscopy. Nat Methods 2010, 7: 377–381. 10.1038/nmeth.1447

  • Niclass C, Rochas A, Besse PA, Charbon E: Design and characterization of a CMOS 3-D image sensor based on single photon avalanche diodes. IEEE J Solid-State Circuits 2005, 40: 1847–1854.

  • Niclass C, Favi C, Kluter T, Gersbach M, Charbon E: A 128 × 128 single-photon image sensor with column-level 10-bit time-to-digital converter array. IEEE J Solid-State Circuits 2008, 43: 2977–2989.

  • Ober RJ, Ram S, Ward ES: Localization accuracy in single molecule microscopy. Biophys J 2004, 86: 1185–1200.

  • Powolny F, Burri S, Bruschini C, Michalet X, Regazzoni F, Charbon E: Comparison of two cameras based on single photon avalanche diodes (SPADs) for fluorescence lifetime imaging application with picosecond resolution. Paper presented at the 14th International Image Sensor Workshop, Snowbird Resort, Utah, USA; 2013.

  • Quan T, Zeng S, Huang Z-L: Localization capability and limitation of electron-multiplying charge-coupled, scientific complementary metal-oxide semiconductor, and charge-coupled devices for superresolution imaging. J Biomed Opt 2010, 15: 066005. 10.1117/1.3505017

  • Rieger B, Stallinga S: The lateral and axial localization uncertainty in super-resolution light microscopy. ChemPhysChem 2013. 10.1002/cphc.201300711

  • Ries J, Kaplan C, Platonova E, Eghlidi H, Ewers H: A simple, versatile method for GFP-based super-resolution microscopy via nanobodies. Nat Methods 2012, 9: 582–584. 10.1038/nmeth.1991

  • Robbins MS: Electron-multiplying charge coupled devices – EMCCDs. In Single-Photon Imaging. Springer Series in Optical Sciences, Volume 160. Edited by: Seitz P, Theuwissen APJ. Berlin Heidelberg: Springer; 2011:103–121. 10.1007/978-3-642-18443-7_6

  • Robbins MS, Hadwen BJ: The noise performance of electron multiplying charge-coupled devices. IEEE Trans Electron Dev 2003, 50: 1227–1232. 10.1109/TED.2003.813462

  • Rochas A: Single photon avalanche diodes in CMOS technology. Dissertation. Lausanne, Switzerland: École Polytechnique Fédérale de Lausanne; 2003.

  • Rochas A, Gani M, Furrer B, Besse PA, Popovic RS, Ribordy G, Gisin N: Single photon detector fabricated in a complementary metal–oxide–semiconductor high-voltage technology. Rev Sci Instrum 2003, 74: 3263. 10.1063/1.1584083

  • Rust MJ, Bates M, Zhuang X: Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM). Nat Methods 2006, 3: 793–795. 10.1038/nmeth929

  • Sauer M: Localization microscopy coming of age: from concepts to biological impact. J Cell Sci 2013, 126: 3505–3513. 10.1242/jcs.123612

  • Stallinga S, Rieger B: Accuracy of the Gaussian point spread function model in 2D localization microscopy. Opt Express 2010, 18: 24461–24476. 10.1364/OE.18.024461

  • Stoppa D, Mosconi D, Pancheri L, Gonzo L: Single-photon avalanche diode CMOS sensor for time-resolved fluorescence measurements. IEEE Sensors J 2009, 9: 1084–1090.

  • Veerappan C, Richardson J, Walker R, Li DU, Fishburn MW, Maruyama Y, Stoppa D, Borghetti F, Gersbach M, Henderson RK, Charbon E: A 160 × 128 single-photon image sensor with on-pixel 55 ps 10 b time-to-digital converter. 58th International Solid-State Circuits Conference (ISSCC), San Francisco, California, USA; 2011:20–24.

Acknowledgements

We thank Sjoerd Stallinga and Bernd Rieger for providing us with the spot fitting code and for their very valuable advice. We express our gratitude to Edoardo Charbon for his encouragement and support. We also thank our funding partners, Leica Microsystems and the Dutch Technology Foundation STW, which is part of the Netherlands Organization for Scientific Research (NWO).

Author information

Corresponding author

Correspondence to Ron A Hoebe.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

VK drafted the manuscript and carried out the numerical simulations. RH, EM, CFJVN supervised VK and participated in analysis of simulation results, discussions and revisions of the manuscript. All authors read and approved the final manuscript.

Rights and permissions

Open Access  This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Krishnaswami, V., Van Noorden, C.F., Manders, E.M. et al. Towards digital photon counting cameras for single-molecule optical nanoscopy. Opt Nano 3, 1 (2014). https://doi.org/10.1186/2192-2853-3-1
