Changes in an electric field propagate at the speed of light - a constant 299,792,458 meters per second, regardless of wavelength. However, this is only true in the ideal case where the waves are traveling through vacuum, away from any other charge sources. In non-ideal situations, where an electric field travels through another medium, the field is slowed. But by how much?
In order to understand how different media affect an electric field's velocity, we first need to understand why light travels at the speed it does in that idealized case. This first part of a multi-part series discusses the general form of the wave equation, followed by a derivation of the speed of light in vacuum.
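As a preview of where the derivation lands, combining Maxwell's equations in vacuum yields a wave equation for the field whose propagation speed is fixed entirely by two constants, the vacuum permeability and permittivity:

```latex
\nabla^2 \vec{E} = \mu_0 \varepsilon_0 \frac{\partial^2 \vec{E}}{\partial t^2},
\qquad
c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} = 299{,}792{,}458~\mathrm{m/s}
```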
When one talks about the quantitative qualities of a light source, one generally subscribes to one of two schools of thought - radiometric, in SI units of watts, or photometric, in SI units of lumens, candelas, or lux. In this post I am going to discuss these two different approaches.
Photometric units are quasi-quantitative, hand-wavy units; they are an attempt to quantify the qualitative aspects of a light source, i.e., how bright a light source appears to the human eye. These units are useful for things like home lighting, vehicle headlights, or photography light meters.
A COTS Sylvania SoftWhite 100W incandescent light bulb advertises that it produces 122 candelas. If we were to convert that value into watts using the standard conversion factor of 1 lumen = 1/683 Watt (at 555nm as that is the accepted wavelength of peak sensitivity of the human eye), we get:
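The arithmetic can be checked directly. Note one assumption of mine that the spec sheet does not state: treating the bulb as an isotropic emitter, so that candelas convert to total lumens through the full 4π steradian sphere.

```python
import math

candelas = 122.0                    # advertised luminous intensity
lumens = candelas * 4 * math.pi     # assumed isotropic emission: cd * 4*pi sr -> lm
watts_visible = lumens / 683.0      # luminous efficacy at 555 nm: 683 lm/W
efficiency = watts_visible / 100.0  # fraction of the 100 W electrical input

print(f"{lumens:.0f} lm -> {watts_visible:.2f} W visible ({efficiency:.2%})")
```

This lands at roughly 1533 lm and about 2.2 W of visible output, consistent with the ~2.25% efficiency figure discussed below.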
This tells us that a 100W incandescent bulb is only about 2.25% efficient at producing visible light; the rest of the power is wasted producing NIR through LWIR (heat) photons.
Radiometric units are scientific in nature - absolute units based solely on the physical photonic quantities. Radiometric units address the entire spectral transmission of a light source regardless of detector. These units are useful for scientific applications like spectroscopy, detector characterization, optical communications, etc., essentially anything scientific that deals with light.
The MS257's spec sheet calls out the input F/# in order to match the source to the monochromator; however, it doesn't specify the output F/# or divergence, which is extremely important for coupling the monochromatic light into an optical system. The spec sheet does instead specify the input and output focal lengths, which we can use to back-calculate the output divergence.
We must first find the F/# at the output, which is only possible if we assume the input and output apertures are the same. This is a reasonable assumption for a monochromator, as the instrument requires matching input and output slits to function properly. Using the definition of F/#, we first solve for d:
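A reconstruction of that step, starting from the F/# definition used in the previous post:

```latex
F/\# = \frac{fl}{d} \quad \Longrightarrow \quad d = \frac{fl}{F/\#}
```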
Once we have this equation for d, we plug it back into the F/# definition using the output focal length:
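Writing the substitution out (my reconstruction, with the subscripts in and out denoting the input and output sides):

```latex
F/\#_{out} = \frac{fl_{out}}{d} = \frac{fl_{out}}{fl_{in}/(F/\#_{in})} = F/\#_{in}\,\frac{fl_{out}}{fl_{in}}
```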
This equation can be rewritten to show that the ratio of the output F/# to the input F/# is equal to the ratio of the output focal length to the input focal length.
Now that we have a closed-form solution, we can plug in the constants - the MS257 specification states the instrument has an input focal length of 220mm and an output focal length of 257.4mm; when we plug those values into the equation with the original input F/3.9, we get an output of F/4.56, and by using the equation from the previous post, that translates to a divergence of 12.51° from the exit slit.
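The plug-and-chug can be verified numerically; this snippet just reproduces the arithmetic above (matching input and output slit apertures assumed, as discussed).

```python
import math

f_num_in = 3.9   # input F/# from the MS257 spec sheet
fl_in = 220.0    # input focal length, mm
fl_out = 257.4   # output focal length, mm

# Output F/# scales with the focal-length ratio
f_num_out = f_num_in * fl_out / fl_in

# Full cone angle from the exit slit: theta = 2 * arctan(1 / (2 * F/#))
divergence_deg = 2 * math.degrees(math.atan(1 / (2 * f_num_out)))

print(f"F/{f_num_out:.2f}, divergence {divergence_deg:.2f} deg")
```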
I have recently come across an issue when designing an optical system to interface with an Oriel MS257 monochromator, and found people referring to the divergence of the instrument by the F/# listed in the MS257's spec sheet - F/3.9 at the input slit. This didn't make sense to me, as F/# is defined as the ratio of a lens's focal length to its aperture.
After some digging, it turns out that optics people use F/# and numerical aperture (NA) as ways of expressing a cone angle, which, for the rest of this post, I will refer to as θ. If we define the half angle using tangent, θ/2 is given by the ratio of its opposite side to its adjacent side, which correspond to half a lens's aperture and its focal length, respectively. Using basic trigonometry we get:
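The relation described above, reconstructed in equation form:

```latex
\tan\!\left(\frac{\theta}{2}\right) = \frac{d/2}{fl}
```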
where d is the diameter of the aperture and fl is the focal length. We can then massage this equation so that it is in terms of F/#:
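Substituting the definition F/# = fl/d, the cone angle becomes (my reconstruction of the step):

```latex
\theta = 2\arctan\!\left(\frac{d}{2\,fl}\right) = 2\arctan\!\left(\frac{1}{2\,F/\#}\right)
```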
So, drawing from the MS257 example where we have F/3.9 at the input, that translates to an input cone angle of 14.61°, and any source that we input into the monochromator must focus at the input slit at an angle of 14.61°.
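Checking that number against the formula:

```python
import math

f_num = 3.9  # input F/# from the MS257 spec sheet

# Full cone angle: theta = 2 * arctan(1 / (2 * F/#)), in degrees
theta_deg = 2 * math.degrees(math.atan(1 / (2 * f_num)))

print(f"{theta_deg:.2f} deg")
```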
If you’re like me, and not an optical engineer, the most interaction you'll have with optics is with either aspheric collimating lenses or multi-element imaging lenses. This post is going to focus on the latter while informing on the former. We will briefly discuss the fundamentals of optical design, followed by a discussion of lens specification by defining Image Circle, Focal Length, Aperture, and Resolution.
Imaging lenses, be it for a telescope or a camera, can be designed using either mirrors, which work by the reflection of light, or glass elements, which work by the refraction of light. Reflective optical systems have the ideal quality that all light reflected by a surface bends at the same rate regardless of wavelength, and thus are inherently achromatic. They also follow a simple trigonometric rule: the excidence angle is equal but opposite in sign to its corresponding incidence angle. These ray angles are referenced from the normal line, which is perpendicular to the surface tangent at the particular point of incidence. The only knobs the designer has to turn are the shapes of the mirror elements and clever mechanical design. These designs are quite large and are ideal for systems which require long focal lengths; however, they are unreasonable for designs which require small, compact lenses. Refractive optical systems are more conducive to smaller systems, which is why they are much more prevalent, yet their designs are more complex.
If you are one of the aforementioned optical engineers: back in the day, one would do the entire design by hand using Snell’s law and a list of available glasses with their frequency-dependent indices of refraction; nowadays one would use an optical ray-tracing software package like ZEMAX or Code V (pronounced "code five") to aid in the design. The advent of computer-aided optical design opened the industry up to much more advanced, reduced-distortion, achromatic lens designs. However, to pay our respects to the past – Snell’s law relates the sines of the incidence and excidence angles of a ray through the indices of refraction of the two materials.
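In its usual form, Snell's law reads:

```latex
n_i \sin\theta_i = n_e \sin\theta_e
```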
Where 'θi' represents the angle off perpendicular to the surface tangent at the point of incidence, 'θe' represents the excidence angle, while 'ni' and 'ne' represent the indices of refraction of the two materials. Note: the index of refraction is defined as the ratio of the speed of light in vacuum to its speed in the medium, so the refractive index of vacuum = 1 while air ≈ 1. Refractive index is also wavelength dependent, which means that light bends at different rates when entering and exiting a medium, thus breaking up into its component wavelengths. Graphs plotting index of refraction vs. wavelength can be found for different optical materials. Snell’s equation gives the designer two knobs they can turn to develop their design - element material and shape. However, the fact that refractive index varies with wavelength adds complexity to producing an achromatic design.
This post is going to discuss two pitfalls that I encountered while using MODTRAN via the PLEXUS GUI. The first is the conversion between wavenumber and wavelength; the second is using PLEXUS to perform nighttime lunar models.
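The wavenumber-to-wavelength conversion is just a reciprocal with a unit scale factor; a minimal sketch (the function names here are mine, not from PLEXUS or MODTRAN):

```python
def wavenumber_to_um(nu_cm: float) -> float:
    """Convert wavenumber in cm^-1 to wavelength in micrometers.

    lambda = 1 / nu gives cm; multiply by 1e4 to get micrometers.
    """
    return 1e4 / nu_cm

def wavenumber_to_nm(nu_cm: float) -> float:
    """Convert wavenumber in cm^-1 to wavelength in nanometers (factor 1e7)."""
    return 1e7 / nu_cm

print(wavenumber_to_um(2500.0))   # 2500 cm^-1 is 4.0 um
print(wavenumber_to_nm(18018.0))  # ~555 nm, peak photopic eye sensitivity
```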
The atmosphere, through its six layers, contains various particles and gases which attenuate impinging solar radiation. The particles which contribute the most to this attenuation are water (H2O) in the troposphere (0-11 km), carbon dioxide (CO2), also in the troposphere, and ozone (O3) in the stratosphere (11-50 km). While there is relatively little solar absorption through the visible band (380nm - 750nm), there are strong absorption bands in the UVC and LWIR attributed to ozone, while H2O and CO2 absorb intermittently throughout the rest of the solar spectrum. A Transmittance vs. Wavelength graph for two generic scenarios can be seen below. Note: for the larger absorption bands, the contributing particles are shown; the full Raytheon infrared wall chart can be found below under references.

MODTRAN (MODerate spectral resolution atmospheric TRANSmittance algorithm and computer model) is an atmospheric spectral radiance modeling code developed by the Air Force Research Lab, Space Vehicles Directorate. This code has been combined with several others (MODTRAN4 V2R1, SAMM 1.1, SAMM 1.82, FASCODE3 with HITRAN2K, SHARC Atmosphere Generator (SAG) V1 & V2, and Celestial Background Scene Descriptor (CBSD) V5) into a single software suite called PLEXUS (Phillips Laboratory EXpert-assisted User Software), which provides the user with an easier-to-use GUI for these atmospheric codes. The most recent version as of the publishing of this post is Release 3 Version 3A. More information on PLEXUS as well as its constituent codes can be found on the AFRL software information page.