Hi Peter,
Speaking of DSLRs:
In general, check once for non-linearity to clarify how the sensor behaves. Typical non-linearities are found at the highest intensities, close to saturation, where correction matters. Within a certain range of intensities below that, you may ignore the correction for non-linearity.
Check for non-linearities using dark-calibrated flatfields at different illumination levels and perform a noise analysis. Measured intensities and standard deviation should follow a square-root relationship. If they do not, non-linearity has been found and can be corrected. See Berry & Burnell (2005) for details on testing procedures and the proper computation of noise properties.

While intensities and noise should follow a square-root relation, the major problem with this method is that it is also sensitive to shutters not being linear. Trying to find a relation between shutter exposure time and noise will almost certainly yield a non-linear relation. Often this is not a problem of the sensor, but of the shutter mechanics. Deriving a calibration curve from shutter non-linearities is therefore the wrong decision. Instead, use calibrated gray filters in the optical path, or take flatfields with constant exposure but different illumination during twilight.

I would recommend analyzing only the measured intensities and relating them to the noise derived from the pixel values. Do NOT put this into relation with exposure time, shutter speed, or gain settings, but use calibration consistent with the settings of the camera: a dark frame taken at ISO 400 will not match an image taken at ISO 1600, and even temperature variation over a night may be critical. It is also a good decision to keep maximum intensities below 50-60% of the saturation level. This is safe against intensity variations from atmospheric image blur in both imaging and spectroscopy.
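The square-root relation can be checked numerically. A minimal sketch (assuming NumPy; synthetic Poisson-limited flats stand in for real dark-calibrated flatfields): for a linear, photon-noise-dominated sensor, the standard deviation of a flat should grow with the square root of its mean signal.

```python
import numpy as np

rng = np.random.default_rng(42)

def noise_vs_signal(mean_levels, npix=100_000):
    """Return (mean, std) pairs for synthetic Poisson-limited flats.

    With real data you would replace rng.poisson() by the pixel
    values of your dark-calibrated flatfields.
    """
    results = []
    for level in mean_levels:
        flat = rng.poisson(level, size=npix).astype(float)
        results.append((flat.mean(), flat.std()))
    return results

# For an ideal linear sensor, std ~ sqrt(mean):
for mean, std in noise_vs_signal([100, 400, 1600, 6400]):
    print(f"mean={mean:8.1f}  std={std:6.1f}  sqrt(mean)={np.sqrt(mean):6.1f}")
```

A useful diagnostic is to fit log(std) against log(mean): a slope clearly different from 0.5 points at non-linearity, or, as described above, at a non-linear shutter.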
Another test is photometric observation of a well-known reference field of stars. I have done photometric observations of several star clusters a number of times to find out how well a DSLR can be transformed to the Johnson-Cousins BVR system. In short: yes, we can. In contrast to the hypothesis of large non-linearities of a CMOS sensor, photometry of star clusters yields stellar magnitudes with an accuracy of much better than 0.01-0.03 mag. To be honest, I did not take non-linearities into account as long as the stars in the field stayed below 70% of saturation. This is usually the case for a 30 s exposure of a stellar field with maximum brightness of 8 (7) mag stars, using an 8" telescope and a focal reducer. Photometry works well even without correction for non-linearity. Finding a star cluster with no perfect match is rather a matter of the quality of the observers and of the publication the data sets were taken from. That is a different story, and the valid assumption is then a poor publication record, not sensor non-linearity.
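The transformation to the standard system can be sketched as a simple least-squares fit. All numbers below are hypothetical, standing in for real instrumental green-channel magnitudes and catalog V magnitudes with B-V colours (here generated from zero point 15.1 and colour term 0.12):

```python
import numpy as np

# Hypothetical instrumental magnitudes and catalog values for five stars.
g_inst     = np.array([-8.12, -7.55, -6.98, -6.40, -5.83])
catalog_V  = np.array([ 7.02,  7.62,  8.18,  8.83,  9.37])
catalog_BV = np.array([ 0.35,  0.62,  0.48,  1.05,  0.80])

# Standard transformation: V = g_inst + zp + k * (B-V).
# Solve for zero point zp and colour term k by least squares.
A = np.column_stack([np.ones_like(g_inst), catalog_BV])
zp, k = np.linalg.lstsq(A, catalog_V - g_inst, rcond=None)[0]
residuals = catalog_V - (g_inst + zp + k * catalog_BV)
print(f"zero point = {zp:.3f}, colour term = {k:.3f}")
print(f"rms residual = {residuals.std():.4f} mag")
```

With real cluster data the rms residual of such a fit is the accuracy figure quoted above; an rms well below 0.03 mag indicates the transformation (and the sensor linearity) is good enough for photometry.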
Peter Somogyi wrote:At least if I'd get to a CMOS (again...), the very first I'd test was the linearity curve.
With variable gains (I do have an ASI for guiding and viewfinder), gain setting easily produce a nonlinear exposure-ADU curve.
Again, this is a test of whether the gain settings follow a linear configuration. We should be careful not to draw the wrong conclusion here and claim to have found sensor non-linearity (see the discussion of proper testing methods above). This may differ widely between camera manufacturers, which is especially true for CCD imagers.
Peter Somogyi wrote:As for DSLRs, for me even the dark exposure time had to match with the light exposure (dark non-reproduceability). That's also something to be tested.
I also heard that bias frames sometimes change; however, a bias frame is not usable for DSLRs for the reason above. The question is whether this comes from the chip or from the camera electronics.
Probably we should align on the terminology here and distinguish between bias and offset. I would propose these definitions:

A bias frame is obtained from any short, or the shortest possible, exposure time, while a (constant?) offset means some arbitrary electronic or digital offset added to the measured signal.

As dark signal and noise of a CMOS sensor evolve mildly compared to a CCD, a bias exposure of 1/100 s is sufficient, and these frames can also serve as darks for the flats. Shorter shutter times may produce artifacts like unequal illumination due to limitations of the shutter mechanics (seen as a gradient in the image).
Bias should be measured. However, this is only required in case a proper noise analysis is needed to calibrate the sensor characteristics. I have taken literally thousands of bias frames as dark calibration for my flatfield series, from different Canon cameras, over 10 years now. I would assume the bias itself does not change much, but the noise characteristics vary with time, temperature, and ISO (gain) settings.
The offset found with several DSLR models is delivered as a digital number within the EXIF header of the raw image contents. It may be subtracted channel-wise during processing of the images (depending on which software you are using). The terms "black" or "black level" are also used for this offset. It may vary slightly, but is almost fixed and aligned around a certain value of 2^N for Canon devices. Depending on how you stretch the image intensities to fit into 16-bit numbers, these offset values differ, and they vary between models like the EOS 40D and 60D: the 60D has a remarkably high offset around a value of 2048, compared to the EOS 40D with a value around 1024. This kind of offset simply limits the available dynamic range.
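The channel-wise subtraction can be sketched in a few lines. This is a minimal illustration with NumPy, not a full raw pipeline: the function name is hypothetical, the black-level values come from the post above, and in practice you would read the actual value from the raw file's metadata.

```python
import numpy as np

def subtract_black_level(raw_channel, black_level, saturation=16383):
    """Subtract the camera's constant offset ("black level") from one
    raw colour channel, clipping at zero.

    black_level: e.g. ~1024 for an EOS 40D, ~2048 for an EOS 60D
    (read the actual value from the raw file's metadata).
    """
    data = raw_channel.astype(np.int32) - black_level
    return np.clip(data, 0, saturation - black_level).astype(np.uint16)

# Hypothetical 14-bit channel with a 2048 ADU offset (60D-like):
channel = np.array([[2048, 2100], [4096, 16383]], dtype=np.uint16)
print(subtract_black_level(channel, 2048))
```

Note how the usable range after subtraction ends at saturation minus black level: this is exactly the loss of dynamic range mentioned above.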
Canon cameras in general keep the black level as a constant added to the stored raw data (providing the offset in the image file header), while Nikon seems to subtract it before storing the image to file. Therefore, with the Nikon camera models tested so far, you will find a fixed black level adjusted around the zero point. This strategy to increase the available dynamic range comes with a big disadvantage: the noise properties of the Nikons CANNOT be measured in a realistic way, because negative values are clipped in the raw image and statistical computation on the raw data yields wrong values (many clipped zeros). Noise computation for the Nikons therefore generally gives values that are too low compared to what should be expected from Gaussian noise. Histograms clearly show this typical Nikon artifact as a single half of a Gaussian distribution, which should be symmetric, but is not.
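How strongly the clipping biases a noise measurement is easy to simulate. A sketch with NumPy and synthetic Gaussian read noise (the sigma value is arbitrary, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Gaussian read noise around a zero black level, sigma = 8 ADU
sigma_true = 8.0
signal = rng.normal(loc=0.0, scale=sigma_true, size=1_000_000)

# Nikon-style storage: negative values are clipped to zero
clipped = np.clip(signal, 0, None)

print(f"true sigma          : {signal.std():.2f}")
print(f"sigma after clipping: {clipped.std():.2f}")
```

For a zero-centered Gaussian, clipping at zero reduces the measured standard deviation to roughly 58% of the true value, which is exactly the "half Gaussian" underestimate described above.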
Typically, CMOS sensors show fixed pattern noise. It appears as static patterns in the background when adding hundreds of dark images and can be corrected accordingly using averaged dark frames. The effect can also be seen as a varying offset along the image lines and columns, and it may vary with time and observing conditions (temperature). Earlier models like the 400D showed more dramatic effects compared to the newest models. I own one modified EOS 40D which produces an arbitrary static "glow" in an off-center portion of the image at exposure times of >2 minutes. I never found out what drives this sensor defect; it is not typical, and it is not amplifier glow. As read earlier in this forum, use at least the same number of dark frames as recorded images of the object of interest. The same applies to the number of flats per series of images. Otherwise you will not eliminate the static noise in the calibration.
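Why averaging many darks isolates the fixed pattern can be shown with a small simulation (NumPy, fully synthetic frames; the noise levels are arbitrary): the random read noise averages down by sqrt(N), while the static pattern survives and can then be subtracted from the lights.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stack of dark frames: a static fixed-pattern component
# plus independent random read noise in every frame.
fixed_pattern = rng.normal(100, 5, size=(64, 64))        # static FPN
darks = [fixed_pattern + rng.normal(0, 10, size=(64, 64))
         for _ in range(64)]

# Averaging 64 frames reduces the random part by sqrt(64) = 8,
# leaving essentially the fixed pattern.
master_dark = np.mean(darks, axis=0)

residual = master_dark - fixed_pattern
print(f"single-frame read noise: 10.0  master-dark residual: {residual.std():.2f}")
```

This is also the reason for the rule of thumb above: with fewer darks than lights, the random residual in the master dark stays comparable to the noise you are trying to remove.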
You may find more details here:
http://www.astroinformatics.de/index.ph ... &Itemid=71
There is also an older article, published at a conference, about the scientific use of DSLRs and their noise properties. You will find it on my page as well.
Peter Somogyi wrote:
- special shapes (any filtration by chip, like low pass?)
I would assume two different things here: (a) optical filters or blurring filters in front of the sensor, or (b) "figures" seen from readout smear or from convolution with a defect function of the readout electronics. The latter is a typical source of tricky non-linearities in CCDs, but not in CMOS imagers with individually addressable pixels.
Regarding (a), I am an astronomer: all cameras that I use are modified Canon cameras with the filters and glasses removed from the body. Images from the stock camera bodies are just slightly blurred to adapt to the many optics the manufacturer also produces. If the camera manufacturers did not introduce image blur with such filters, many lens optics would not pass independent testing by the various magazines…
The smoothing is intended to correct for undersampling of certain optics, especially at fast stops. Undersampled images introduce several nasty problems. So this discussion is not related to non-linearity of the sensors, as it concerns the optical path.
CMOS sensors are expected to show problems completely different from a CCD. Speaking about DSLRs, I did not find non-linearities and thus usually leave the correction out of the image calibration pipeline. This is not a general recommendation, but my experience with the EOS bodies. I also found advantages over CCDs, like easier handling, because DSLRs do not require a large current for cooling the sensor on the one hand and heating the front glass to prevent dew on the other. One of the biggest advantages of a DSLR is that no computer is required to take the many pictures of a nightly session. Long-term testing of cooled CMOS cameras for astronomy will be one of my next tasks.
The analysis of CCDs was done back in the early '90s. I have already closed that file.
DSLRs are great for spectroscopy of stars down to 10 mag with an 8" Cassegrain. But I also realized that I need an additional cooled monochrome sensor for recording very faint object spectra as well as narrow-band filter images of H II regions.
A major advantage of modern CMOS sensors may be seen in mass production and in the integrated electronic circuits for the whole digitization chain. Therefore, and because of their special designs, camera manufacturers have fewer problems creating high-quality, linear imagers. I don't expect many surprises from current CMOS sensors, but rather a modern, high-quality CMOS imager.
Today, I got someone by phone and placed my order...
Regards,
Thilo