
## Intro to imaging optics

If you’re like me, and not an optical engineer, most of your interaction with optics will be with either aspheric collimating lenses or multi-element imaging lenses.  This post is going to focus on the latter while touching on the former.  We will briefly discuss the fundamentals of optical design, followed by a discussion of lens specification, covering image circle, focal length, aperture, and resolution.

Background:
Imaging lenses, be it for a telescope or a camera, can be designed using either mirrors, which work by the reflection of light, or lenses, which work by the refraction of light.  Reflective optical systems have the ideal quality that all light reflected by a surface bends at the same rate regardless of wavelength, and thus are inherently achromatic.  They also follow a simple trigonometric rule: the angle of reflection is equal in magnitude but opposite in sign to its corresponding angle of incidence.  These ray angles are referenced from the normal, the line perpendicular to the surface tangent at the point of incidence.  The only knobs the designer has to turn are the mirror element shapes and clever mechanical design.  These designs are quite large and are ideal for systems which require long focal lengths; however, they are unreasonable for designs which require small, compact lenses.  Refractive optical systems are more conducive to smaller systems, which is why they are much more prevalent, yet their designs are more complex.

If you are one of the aforementioned optical engineers, back in the day one would do the entire design by hand using Snell’s law and a list of available glasses with their frequency-dependent indices of refraction; nowadays one would use an optical ray-tracing software package like ZEMAX or Code V (pronounced "code 5") to aid in the design.  The advent of computer-aided optical design opened the industry up to much more advanced, reduced-distortion, achromatic lens designs.  However, to pay our respects to the past: Snell’s law states that the sine of the exit angle of an incident ray is related to the sine of the incidence angle by the ratio of the refractive indices of the two materials.

$\sin\left(\theta_e\right)=\frac{n_i}{n_e}\cdot\sin\left(\theta_i\right)$

Where 'θi' represents the incidence angle measured from the normal to the surface tangent at the point of incidence, 'θe' represents the exit angle, while 'ni' and 'ne' represent the indices of refraction of the two materials.  Note: the index of refraction is defined as the ratio of the speed of light in vacuum to its speed in the medium, where the refractive index of vacuum = 1 while air ≈ 1.  Refractive index is also wavelength dependent, which means that light bends at different rates when entering and exiting a medium, thus breaking up into its component wavelengths.  Graphs can be found for different optical materials which plot their index of refraction vs. wavelength.  Snell’s equation gives the designer two knobs to turn in developing a design - element material, and shape.  However, the fact that refractive index varies with wavelength adds complexity to producing an achromatic design.
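Snell's law is easy to evaluate numerically.  Here is a minimal sketch; the BK7 index value in the usage line is a typical published figure at 587 nm, used purely for illustration:

```python
import math

def snell_exit_angle(theta_i_deg, n_i, n_e):
    """Exit (refracted) angle from Snell's law: n_i*sin(theta_i) = n_e*sin(theta_e)."""
    s = (n_i / n_e) * math.sin(math.radians(theta_i_deg))
    if abs(s) > 1.0:
        # No real solution: total internal reflection (only possible when n_i > n_e).
        raise ValueError("total internal reflection: no refracted ray")
    return math.degrees(math.asin(s))

# Example: a ray in air (n ~ 1.0) entering BK7 glass (n ~ 1.5168 at 587 nm)
print(round(snell_exit_angle(30.0, 1.0, 1.5168), 2))  # ~19.25 degrees
```

Evaluating this at several wavelengths (with the corresponding index values for the glass) is a quick way to see dispersion in action: each wavelength exits at a slightly different angle.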

Image Circle:
As for the rest of us mere mortals, we can either specify the qualities of the lens we want designed, or find a COTS lens which comes close to what we’re looking for.  The first step in this process is defining the imager format that the lens needs to support.  The image circle of a lens is the circular area in the image plane formed by the cone of light projected by the lens.  If the image circle is smaller than the imager, you will see a resolved image contained within a circle, with black outside of this circle; this is also known as vignetting.  For most applications, it is important to specify the image circle to be larger than the imager’s active-area diagonal length so that the entire image plane is utilized.

This value is generally provided in the imager’s datasheet; however, if it is not easily accessible, it can also be found by using the Pythagorean theorem to calculate the diagonal length of the imager’s active area.  On top of the active area’s diagonal length, due to questions of linearity and distortion at the edges of the field, it is common practice to include a factor of x1.5.  This is legacy from the 1950s, when a 1-inch vidicon tube had a usable image diagonal of 16 mm, only about 2/3 of its diameter.  This number defines the approximate required size of the image circle produced by the lens, so the chosen lens should match the format type to within a few millimeters.  Standard format types can be found at Wikipedia’s Image sensor format entry.
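The arithmetic above can be sketched in a few lines.  The 6.4 mm x 4.8 mm active area used in the example is the standard 1/2"-type geometry, chosen here for illustration:

```python
import math

def active_area_diagonal_mm(width_mm, height_mm):
    """Diagonal of the imager's active area via the Pythagorean theorem."""
    return math.hypot(width_mm, height_mm)

# Example: a 1/2"-type imager with a 6.4 mm x 4.8 mm active area
diag = active_area_diagonal_mm(6.4, 4.8)     # 8.0 mm
format_type_in = diag * 1.5 / 25.4           # legacy vidicon x1.5 convention
print(diag, round(format_type_in, 2))        # 8.0 mm diagonal, ~0.47 (i.e. ~1/2")
```

Working backwards like this is a handy sanity check that a datasheet's stated format type actually matches its active-area dimensions.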

Focal length:
Once you have decided on the type of lens needed based on the imager format, you then need to find the focal length of the lens which satisfies the system requirements - this assumes that a field of view has already been defined in the system requirements and an appropriate imager has been selected to provide the desired resolution/pixel field of view and waveband sensitivity.  The details of the selected imager which we will need to find the focal length are the horizontal and vertical mechanical dimensions of its active area.  Focal length for either a reflective or refractive system is related to the field of view by the equation:

$f=\frac{l}{2\cdot\tan\left(\frac{\theta_{fov}}{2}\right)}$

where 'f' is the focal length in either inches or meters depending on the units of the length value, 'l' is the length of the imager's active area (horizontal, vertical, or diagonal) in inches or meters, and 'θfov' is the full-angle field of view projected on the imager.  This equation can be applied not only to the imager as a whole, but also to its individual pixels.

To derive this equation, let us digress to basic geometry and solve for the tangent, thus producing the equation:

$\tan\left(\frac{\theta_{fov}}{2}\right)=\frac{\frac{l}{2}}{f}$

The tangent of theta is defined as the ratio of the length of the opposite leg to the adjacent leg of a right triangle.  We perform the calculation on the half angle and thus use half the length of the imager.  This equation assumes a pinhole model to define focal length, though it is unlikely that you are going to use a pinhole in your design.  Due to the disparity between how focal length is defined and how modern lenses are designed, the actual dimensions of these lenses do not correspond to the calculated focal length.
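The focal-length equation is a one-liner to evaluate.  The 6.4 mm horizontal dimension and 40-degree field of view below are illustrative values, not requirements from any particular system:

```python
import math

def focal_length_mm(sensor_dim_mm, fov_full_angle_deg):
    """Pinhole-model focal length: f = l / (2 * tan(theta_fov / 2))."""
    return sensor_dim_mm / (2.0 * math.tan(math.radians(fov_full_angle_deg) / 2.0))

# Example: 6.4 mm horizontal active area, 40-degree horizontal field of view
print(round(focal_length_mm(6.4, 40.0), 2))  # ~8.79 mm
```

Running this for a few candidate fields of view makes the inverse relationship obvious: widening the field of view shrinks the required focal length, and vice versa.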

Depending on budget, a custom lens design may be too costly, so a commercial lens will have to be identified which best suits the requirements.  The focal length given by the above equation is going to be an exact value, and finding a lens with that exact value is highly unlikely.  Use the calculated focal length as a starting value and the relationship that a shorter focal length produces a wider field of view and conversely a longer focal length produces a narrower field of view to find a lens that most closely matches the requirements.

Aperture a.k.a. F/#:
Another important aspect of a lens is its collection aperture.  The larger the aperture, the more light is collected by the lens and projected onto the image plane.  Depending on the intended design of the system, the aperture dimension affects the SNR, integration time, frame rate, and mechanical size.  Note: integration time is the length of time the imager is left exposed to light in order to collect charge before being read out by the supporting electronics; the longer the integration time, the more exposed the image.  The lens f/# is the ratio of the lens's focal length to its collection aperture diameter.  This can be rewritten in terms of the collection aperture such that:

$d=\frac{f}{f/\#}$

Where ‘d’ is the diameter of the lens aperture and ‘f’ is the focal length.  Aperture, like focal length, is a calculated value and does not translate directly to mechanical dimensions.
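As a quick sketch of the relationship, using a 50 mm f/2.8 lens purely as a worked example:

```python
def aperture_diameter_mm(focal_length_mm, f_number):
    """Collection-aperture diameter from the f-number definition: d = f / (f/#)."""
    return focal_length_mm / f_number

# Example: a 50 mm lens at f/2.8
print(round(aperture_diameter_mm(50.0, 2.8), 2))  # ~17.86 mm
```

Note that halving the f/# doubles the aperture diameter, which quadruples the collection area, one reason f-stop scales step by factors of roughly the square root of two.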

Resolution:
Depending on the imager being used, its resolution and pixel size may or may not become an issue in the lens selection.  There’s no point in buying a million-dollar lens for a VGA imager; conversely, there’s no point in purchasing a 16 MP imager if you don’t have a lens that can resolve down to each pixel.  Assuming that the system is not diffraction limited by the aperture, the lens specification which speaks to its capability to spatially resolve fine detail is its Modulation Transfer Function (MTF) in units of line pairs per millimeter (LP/mm).  This value details the maximum number of white-black line pairs the lens is able to resolve within a 1 mm width, regardless of the detector.  The detail of the selected imager which we will need to find the minimum required MTF is the imager’s pixel size.

$MTF\;\left[\frac{LP}{mm}\right]=\frac{10^{-3}}{2\cdot l_{pixel}}$

Where ‘lpixel’ is the imager’s pixel dimension in meters; since a line pair requires at least two pixels to resolve, this is the Nyquist frequency of the pixel grid.  It’s important to note that MTF is rarely flat across the entire field of view, and is generally specified both on-center as well as at the edge, either as a separate value, a percentage of the on-center value, or a figure detailing MTF vs. degrees off-axis.  Depending on your application, you may only care about resolution on-center, with the periphery being less important.  If this is not the case, however, and you must ensure the image is being resolved at the pixel level across the entire imager, you need to make sure that the MTF specified is as good as or better (higher) than what is calculated, both on-center and at the periphery.  We will discuss diffraction-limited optics in a later post.
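The Nyquist-limit calculation can be sketched as follows; the 5 um pixel in the example is a common size, chosen for illustration:

```python
def nyquist_lp_per_mm(pixel_size_um):
    """Pixel-grid Nyquist limit in LP/mm: one line pair spans two pixels."""
    pixel_size_mm = pixel_size_um * 1e-3
    return 1.0 / (2.0 * pixel_size_mm)

# Example: an imager with 5 um pixels
print(round(nyquist_lp_per_mm(5.0), 1))  # 100.0 LP/mm
```

So to take full advantage of 5 um pixels, the lens MTF spec should be at or above 100 LP/mm (with usable contrast) over whatever portion of the field you care about.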

Other:
There are other lens characteristics which speak to its capability of producing quality imagery; some of these characterize the optical aberrations seen in the imagery produced.  Common optical aberrations include defocus, spherical aberration, coma, astigmatism, field curvature, and chromatic aberration.  Each of these aberrations distorts the imagery in a unique way; however, I’m not going to discuss them here.