In order to understand how different media affect the velocity of an electric field, we first need to understand why light travels at the speed it does in the idealized case of a vacuum. This first part of a multi-part series discusses the general form of the wave equation, followed by a derivation of the speed of light in vacuum.

**The Wave Equation:**

The wave equation is a second-order partial differential equation in both time and space describing a traveling plane wave. In the one-dimensional case, the equation is:

∂^{2}u/∂t^{2} = v^{2}·∂^{2}u/∂x^{2}

Where u(x, t) is a scalar function representing the plane wave in both time and space, x is the direction in which the wave is traveling, v is the phase velocity of the wave, and t is time. In the two-dimensional case, we add an additional spatial term to the equation:

∂^{2}u/∂t^{2} = v^{2}·(∂^{2}u/∂x^{2} + ∂^{2}u/∂y^{2})

Where the wave is now traveling in both the x and y directions. This equation can be simplified by using the spatial Laplacian operator, defined in Cartesian coordinates as:

∇^{2} = ∂^{2}/∂x^{2} + ∂^{2}/∂y^{2} + ∂^{2}/∂z^{2}

Yielding the general form of the wave equation:

∂^{2}u/∂t^{2} = v^{2}∇^{2}u

For our purposes of the electric field case, the scalar u in the general form of the wave equation is replaced by the electric field vector E:

∂^{2}E/∂t^{2} = v^{2}∇^{2}E

**The Speed of Light in Vacuum**

Now that we have the wave equation for an electric field, we can move on to deriving the speed of light. To do this, we start with Maxwell's equations in differential form:

∇ × E = -∂B/∂t (eq.1)
∇ · E = ρ/ε_{0} (eq.2)
∇ × B = μ_{0}J + μ_{0}ε_{0}·∂E/∂t (eq.3)
∇ · B = 0 (eq.4)

We want to calculate the field's velocity in the ideal case of a vacuum where there are no electric charges, so it is safe to assume that the total charge density ρ = 0 and the total current density J = 0. Therefore eq.2 & eq.3 from above become:

∇ · E = 0 (eq.5)
∇ × B = μ_{0}ε_{0}·∂E/∂t (eq.6)

In order to arrive at the wave equation, we need to find the Laplacian of E. To get it into this form, we first take the curl of both sides of eq.1:

∇ × (∇ × E) = -∂(∇ × B)/∂t (eq.7)

Plugging eq.6 into eq.7 we get:

∇ × (∇ × E) = -μ_{0}ε_{0}·∂^{2}E/∂t^{2} (eq.8)

We then leverage the vector identities:

(i) ∇ × (∇ × A) = ∇(∇ · A) - ∇^{2}A
(ii) ∇(∇ · A) = 0 when ∇ · A = 0

Applying the first identity to eq.8 we get:

∇(∇ · E) - ∇^{2}E = -μ_{0}ε_{0}·∂^{2}E/∂t^{2} (eq.9)

Then using eq.5 with the second identity on eq.9, the ∇(∇ · E) term vanishes:

∇^{2}E = μ_{0}ε_{0}·∂^{2}E/∂t^{2} (eq.10)

Arriving at a form resembling the generalized wave equation from above, where 1/v^{2} = μ_{0}ε_{0}, therefore the phase velocity can be represented as:

v = 1/√(μ_{0}ε_{0}) ≈ 2.998 × 10^{8} m/s

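As a quick numerical sanity check of this result, a short Python snippet with the SI values of μ_{0} and ε_{0} reproduces the familiar value of c:

```python
import math

mu0 = 4 * math.pi * 1e-7    # vacuum permeability, H/m
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m

# phase velocity from the wave equation: v = 1/sqrt(mu0 * eps0)
v = 1 / math.sqrt(mu0 * eps0)
print(v)  # ~2.998e8 m/s, the speed of light
```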
**What is the Hall effect:**

The Hall effect, named after Edwin Hall, the scientist who first detected it, is the opposing force a flow of charge carriers (current) induces to cancel out a Lorentz force acting upon it. This opposing force is equal in magnitude and opposite in sign to the Lorentz force, so that the steady-state net force on the carriers is zero. This answer requires a basic understanding of the fundamental laws of classical electromagnetic fields - Maxwell's equations, which describe how moving charge carriers induce EM fields, and the Lorentz force, which describes how EM fields affect moving charge carriers.

**How to leverage the Hall effect:**

One generally uses Hall effect sensors to sense the position of a magnet, or the magnitude of a current. There are 3 main types of position sensors - binary, linear and radial. For each, it is assumed that there is a magnetic field (generally generated by a magnet) that is a part of the component you want to sense the position of, in a location where it will pass over the sensor's active area.

- For a binary sensor, it will switch logic states when the active area is in proximity of a north or south pole (axially magnetized).
- For a linear sensor, it will give you an analog or multiple bit digital output corresponding to the distance the active area is away from a north or south pole.
- For a radial sensor, it will give you an analog or multiple bit digital output corresponding to the angle of the north/south pole relative to the active area. Radial sensors require a diametrically magnetized magnet persistently centered on the sensor's active area.
- For current sensing, the Hall sensor's active area is placed perpendicular to the current's induced magnetic field and will output an analog or multiple bit digital signal corresponding to the current's magnitude.

**How do Hall effect sensors work:**

Internal to the IC is a rectangular plate that a small current travels through in the y-direction, with voltage-sensing circuitry connected across the plate in the x-direction. When there is no external magnetic field and therefore no Lorentz force, the current travels linearly through the plate and no voltage difference can be sensed at the sides of the plate. However, in the presence of a magnetic field perpendicular to the current flow (through the large surface of the plate in the z-direction), an electric field called the Hall field is formed, which when integrated across the length of the plate gives you a voltage differential called the Hall voltage.

The Hall voltage is dependent on the sensor's plate current, plate dimensions, and the sensed Lorentz force. We first need to extrapolate the charge carriers' velocity, which we can find using the plate current and plate dimensions:

I = J·(l·w) = N·q·v·(l·w)  →  v_{y} = I/(N·q·l·w)

Where I is the plate current vector, l and w are the length in the x-direction and width in the z-direction respectively of the plate, J is the current density vector, N is the charge carrier density, q is the carrier charge (-1.602 × 10^{-19} C for an electron), and v is the charge carriers' velocity vector. We note that the charge carriers are flowing in only the y-direction, therefore the current vector, current density vector, and velocity vector are all pointing only in the y-direction.

We then use the Lorentz force equation to find the Hall field, remembering that the Hall field is equal in magnitude and opposite in direction to the field created by the Lorentz force:

F = q(E + v × B) = qE_{L},  with  E_{H} = -E_{L} = -(E + v × B)

Where F is the Lorentz force vector, E is the external electric field intensity vector, v is the Hall sensor's charge carrier velocity vector from above, B is the external magnetic flux density vector, E_{L} is the electric field vector produced by the Lorentz force, and E_{H} is the induced Hall field vector. We note that because we are sensing the Hall voltage across the length of the plate in the x-direction, the voltage will only be induced by the magnetic flux density in the z-direction and skewed by the external electric field in the x-direction; therefore we only consider the magnitudes of the external electric and magnetic fields in those directions (E_{x} and B_{z}). This simplifies our Hall field to:

E_{H} = -(E_{x} + v_{y}B_{z})

To find the induced Hall voltage V_{H}, we must integrate the Hall field across the entire length of the plate:

V_{H} = -∫_{0}^{l} E_{H} dx = (E_{x} + v_{y}B_{z})·l

In most cases, the sensor's intended use is to detect and measure the magnetic field; in electrically noisy environments, the external electric field adds noise to the measurement. This noise can be mitigated by increasing the strength of the magnetic field or increasing the velocity of the charge carriers across the Hall plate. Going on the assumption that E_{x} ≈ 0, the Hall voltage simplifies to:

V_{H} = v_{y}B_{z}·l

Plugging back in the equation for v_{y} we get:

V_{H} = (I·B_{z})/(N·q·w)

Another useful term is the Hall coefficient R_{H}, found by substituting the current density J for N·q·v_{y} in the simplified Hall field equation, resulting in the ratio:

R_{H} = E_{H}/(J·B_{z}) = -1/(N·q)

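To get a feel for the magnitudes involved, here is a minimal Python sketch of the final V_{H} = I·B_{z}/(N·q·w) relation. The plate values are hypothetical; the carrier density is that of copper:

```python
q = 1.602e-19   # carrier charge magnitude, C
N = 8.5e28      # carrier density of copper, 1/m^3
w = 100e-6      # plate width in the z-direction, m (hypothetical)
I = 1e-3        # plate current, A (hypothetical)
Bz = 0.1        # external magnetic flux density, T (hypothetical)

V_H = (I * Bz) / (N * q * w)   # Hall voltage across the plate
print(V_H)  # ~7e-11 V
```

The tiny result (tens of picovolts) illustrates why practical Hall sensors use semiconductor plates, whose carrier density is many orders of magnitude lower than a metal's, yielding measurable voltages.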
**Conclusion:**

The Hall effect is how moving charge carriers react to a Lorentz force by producing a voltage differential known as the Hall voltage. This Lorentz force can come from a magnet or from the induced magnetic field produced by a flow of current. Hall effect sensors can be used as non-obstructive current sensors, as well as wireless position sensors.

**Photometric**

Photometric units are quasi-quantitative hand-wavey units; they are an attempt to quantify the qualitative aspects of a light source i.e. how bright a light source appears to the human eye. These units are useful for things like home lighting, vehicle headlights, or photography light meters.

A COTS Sylvania SoftWhite 100W incandescent light bulb advertises that it produces 122 candelas. If we convert that value into watts, assuming the bulb radiates isotropically into 4π steradians, using the standard conversion factor of 1 lumen = 1/683 Watt (at 555nm, as that is the accepted wavelength of peak sensitivity of the human eye), we get:

122 cd × 4π sr ≈ 1,533 lm
1,533 lm × (1/683) W/lm ≈ 2.2 W

This tells us that a 100W incandescent bulb is only about 2.2% efficient at producing visible light; the rest of the power is wasted producing NIR through LWIR (heat) photons.
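The arithmetic is easy to sanity-check in Python; note the isotropic-emission assumption (multiplying by 4π steradians) is an assumption of this sketch, since a candela rating is directional:

```python
import math

candelas = 122.0
lumens = candelas * 4 * math.pi   # assume isotropic emission over 4*pi sr
watts = lumens / 683.0            # 1 lm = 1/683 W at 555 nm
efficiency = watts / 100.0        # against 100 W electrical input

print(round(watts, 2), round(100 * efficiency, 1))  # ~2.24 W, ~2.2 %
```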

**Radiometric**

Radiometric units are scientific in nature - absolute units based solely on the physical photonic quantities. Radiometric units address the entire spectral transmission of a light source regardless of detector. These units are useful for scientific applications like spectroscopy, detector characterization, optical communications, etc., essentially anything scientific that deals with light.

We first must find the F/# at the output, which is only possible if we assume the input and output apertures are the same. This is a reasonable assumption for a monochromator, as the instrument requires matching input and output slits to function properly. Using the definition of F/# we first solve for d:

F/# = f/d  →  d = f/(F/#)

Once we have this standard equation, we plug it back into the F/# equation with the different focal lengths:

F/#_{out} = f_{out}/d = f_{out}·(F/#_{in})/f_{in}

This equation can be rewritten to show that the ratio of the output F/# to the input F/# is equal to the ratio of the output focal length to the input focal length:

F/#_{out}/F/#_{in} = f_{out}/f_{in}

Now that we have a closed-form solution, we can plug in the constants - the MS257 specification states the instrument has an input focal length of 220mm and an output focal length of 257.4mm; when we plug those values into the equation with the original input F/3.9 we get an output of F/4.56, and by using the equation from the previous post, that translates to a divergence of 12.51° from the exit slit.
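As a numerical check of the MS257 example, a short Python sketch (the divergence line uses the θ = 2·arctan(1/(2·F/#)) relation from the previous post):

```python
import math

fno_in = 3.9     # input F/#
f_in = 220.0     # input focal length, mm
f_out = 257.4    # output focal length, mm

fno_out = fno_in * f_out / f_in                          # F/# scales with focal length
theta = 2 * math.degrees(math.atan(1 / (2 * fno_out)))   # full-angle divergence, deg

print(round(fno_out, 2), round(theta, 2))  # F/4.56, 12.51 deg
```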

After some digging, it turns out that optics people use F/# and numerical aperture (NA) as ways of expressing a cone angle which, for the rest of this post, I will refer to as θ. If we define the half angle using the tangent, θ/2 is defined by the ratio of its opposite length to its adjacent length, which easily correlate to half a lens' aperture and its focal length respectively. Using basic trigonometry we get:

tan(θ/2) = (d/2)/fl

where d is the length of the aperture and fl is the focal length. We can then massage this equation such that it is in terms of F/# = fl/d:

θ = 2·arctan(1/(2·F/#))


So drawing from the MS257 example where we have F/3.9 at the input, that translates to an input angle of 14.61°, and any source fed into the monochromator must be focusing at the input slit at an angle of 14.61°.

If we want to relate F/# to NA, we must start with the definition of numerical aperture:

NA = n·sin(φ)

where n is the index of refraction, which in this situation is ≈1, and φ is the half cone angle which we have been referring to as θ/2. We defined θ above, and when we combine these two equations we get:

NA = sin(arctan(1/(2·F/#)))

To again use our MS257 example of F/3.9, the input slit of the monochrometer has an NA of 0.127.
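The F/# to NA relation is a one-liner in Python; a small helper (function name mine) and the MS257 check:

```python
import math

def fno_to_na(fno, n=1.0):
    """NA = n*sin(phi), with half cone angle phi = atan(1/(2*F/#))."""
    return n * math.sin(math.atan(1.0 / (2.0 * fno)))

print(round(fno_to_na(3.9), 3))  # 0.127
```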

Being that I intended to hack the system in order to watch DVDs while the car is not parked, I followed the directions to bypass the built-in security features. The first step is to ground the parking brake signal wire, and then ground the bottom right-most pin of connector 2, which will keep the unit unlocked when you start moving. This resolves the parking brake signal issue, but we still have 2 to go.

The reverse signal is used to engage the backup camera (if you have one installed), and may or may not be used by the GPS functionality - I don't know. After some searching I found the vehicle's repair manual, where chapter 7 section 23 discusses the back-up light system's wiring schematic, which can be found here (Warning: PDF). The reverse signal goes over the Brown/Yellow wire (the same for all models) and can be found under the passenger side kick-plate.

The last signal the unit requires is the vehicle speed sensor (VSS). Note: even though the bypassed unit won't be disabled by a missing VSS signal, the signal is still important for the GPS functionality, as the unit is not capable of relying solely on the GPS signal for vehicle location. My experience shows that without the VSS connected, the unit may assume you're driving 65MPH down the highway... backwards. Chapter 7 section 17 of the vehicle's repair manual, which discusses the 3 WRX models' engine electrical systems and their various ECUs, can be found here (Warning: PDF). To cover all 3 WRX models: the VSS signal is the Green/Yellow wire connected to pin 1 of connector B134 of the turbo model's ECU (PDF page 4), the Green/Yellow wire connected to pin 10 of connector B147 of the SOHC model's ECU (PDF page 18), and the Green/Yellow wire connected to pin 27 of connector B135 of the STI model's ECU (PDF page 30).

*Background:*

Imaging lenses, be it for a telescope or a camera, can be designed using either mirrors, which work by reflection of light, or glass, which works by refraction of light. Reflective optical systems have the ideal quality that all light reflected by the surface bends at the same rate, and thus are inherently achromatic. They also follow simple trigonometric rules where the excidence angle is equal but opposite in sign to its corresponding incidence angle. These ray angles are referenced from the normal line, which is perpendicular to the surface tangent at the particular point of incidence. The only knobs the designer has to turn are the mirror element shape and clever mechanical design. These designs are quite large and ideal for systems which require long focal lengths; however, they are unreasonable for designs which require small, compact lenses. Refractive optical systems are more conducive to smaller systems, which is why they are much more prevalent, yet their designs are more complex.

If you are one of the aforementioned optical engineers, back in the day one would do the entire design by hand using Snell's law and a list of available glasses with their frequency-dependent indices of refraction; nowadays one would use an optical ray tracing software package like ZEMAX or Code V (pronounced "code 5") to aid in the design. The advent of computer-aided optical design opened the industry up to much more advanced, reduced-distortion, achromatic lens designs. However, to pay our respects to the past – Snell's law relates the incidence and excidence angles of a ray to the indices of refraction of the two materials:

n_{i}·sin(θ_{i}) = n_{e}·sin(θ_{e})

Where 'θ_{i}' represents the angle off perpendicular to the surface tangent at the point of incidence, 'θ_{e}' represents the excidence angle, while 'n_{i}' and 'n_{e}' represent the indices of refraction of the two materials. Note: index of refraction is defined as how much faster or slower light travels through a medium as compared to vacuum, where the refractive index of vacuum = 1 while air ≈ 1. Refractive index is also wavelength dependent, which means that light bends at different rates when entering and exiting a medium, thus breaking up into its component wavelengths. Graphs can be found for different optical materials which plot their index of refraction vs. wavelength. Snell's equation gives the designer two knobs they can turn to develop their design - element material and shape. However, the fact that refractive index varies with wavelength adds complexity in producing an achromatic design.

*Image Circle:*

As for the rest of us mere mortals, we can either specify the qualities of the lens we want designed, or find a COTS lens which comes close to what we're looking for. The first step in this process is defining the imager format that the lens needs to support. The image circle of a lens is the circular area in the image plane formed by the cone of light projected by the lens. If the image circle is smaller than the imager, you will see a resolved image contained within a circle, outside of which is black – this is also known as vignetting. For most applications, it is important to specify the image circle to be larger than the imager's active area diagonal length such that the entire image plane is being utilized.

This value is generally provided in the imager's datasheet; however, if this value is not easily accessible, it can also be found by using the Pythagorean theorem to calculate the diagonal length of the imager's active area. On top of the active area's diagonal length, due to questions of linearity & distortion at the edges of the field, it is common practice to include a factor of x1.5. This is legacy from the 1950s, when a 1 inch vidicon tube had a usable image diagonal of 16 mm – only about 2/3 of the diameter. This number defines the approximate required size of the image circle produced by the lens, so the chosen lens should be of an approximate type within a few millimeters' tolerance. Standard format types can be found at Wikipedia's Image sensor format entry.

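As a worked example with hypothetical active-area dimensions chosen to mimic a small-format sensor, the diagonal and the legacy x1.5 format number fall out of the Pythagorean theorem:

```python
import math

w, h = 4.8, 3.6             # hypothetical active-area dimensions, mm
diag = math.hypot(w, h)     # active-area diagonal: 6.0 mm
format_type = 1.5 * diag    # legacy "type" diameter estimate: ~9 mm

print(diag, format_type)  # 6.0 9.0
```

The ~9 mm result would then be compared against the standard type diameters on the Wikipedia format table to pick the nearest format class.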
*Focal length:*

Once you have decided on the type of lens needed based on the imager format, one then needs to find the focal length of the lens which satisfies the system requirements - this assumes that a field of view has already been defined in the system requirements and an appropriate imager has been selected to provide the desired resolution/pixel field of view and waveband sensitivity. The detail of the selected imager which we will need to find the field of view is the horizontal and vertical mechanical dimensions of its active area. Field of view for either a reflective or refractive system is defined by the equation:

θ_{fov} = 2·arctan(l/(2·f))

where 'f' is the focal length in either inches or meters depending on the units of the length value, 'l' is the length of the imager's active area (horizontal, vertical, or diagonal) in inches or meters, and 'θ_{fov}' is the full-angle field of view projected on the imager. This equation can be applied not only to an imager as a whole, but also to its individual pixels.

To derive this equation, let us digress to basic geometry and solve for the tangent, producing the equation:

tan(θ_{fov}/2) = (l/2)/f

The tangent of the angle is the ratio of the length of the opposite leg to the adjacent leg of the right triangle. We perform the calculation on the half angle and thus use half the length of the imager. This equation assumes the use of a pinhole lens to define focal length, though it is unlikely that you are going to use one in your design. Due to the disparity between how focal length is defined and how modern lenses are designed, the actual dimensions of these lenses do not correspond to the calculated focal length.

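A quick Python sketch of the field-of-view equation, using hypothetical numbers (a 6 mm active-area diagonal behind an 8 mm lens):

```python
import math

l = 6.0   # imager active-area length (diagonal), mm - hypothetical
f = 8.0   # lens focal length, mm - hypothetical

fov = 2 * math.degrees(math.atan(l / (2 * f)))   # full-angle FOV, deg
print(round(fov, 1))  # ~41.1 deg
```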
Depending on budget, a custom lens design may be too costly, so a commercial lens will have to be identified which best suits the requirements. The focal length given by the above equation is going to be an exact value, and finding a lens with that exact value is highly unlikely. Use the calculated focal length as a starting value, together with the relationship that a shorter focal length produces a wider field of view and, conversely, a longer focal length produces a narrower field of view, to find a lens that most closely matches the requirements.

*Aperture a.k.a. F/#:*

Another important aspect of a lens is its collection aperture. The larger the aperture, the more light is collected by the lens and projected on the image plane. Depending on the intended design of the system, the aperture dimension affects the SNR, integration time, frame rate, and mechanical size. Note: integration time is the length of time the imager is left exposed to light in order to collect charge before being read out by the supporting electronics – the longer the integration time, the more exposed the image. The lens F/# is the ratio of the lens's focal length to its collection aperture diameter. This can be rewritten in terms of the collection aperture such that:

d = f/(F/#)

Where 'd' is the diameter of the lens aperture and 'f' is the focal length. Aperture, like focal length, is a calculated value and does not translate directly to mechanical dimensions.

*Resolution:*

Depending on the imager being used, its resolution and pixel size may or may not become an issue in the lens selection. There's no point in buying a million-dollar lens for a VGA imager; conversely, there's no point in purchasing a 16MP imager if you don't have a lens that can resolve down to each pixel. Assuming that the system is not diffraction limited by the aperture, the lens specification which speaks to its capability to spatially resolve fine detail is its Modulation Transfer Function (MTF) in units of line pairs per millimeter (LP/mm). This value details the maximum number of white-black line pairs the lens is able to resolve in a 1mm width, regardless of the detector. The detail of the selected imager which we will need to find the minimum required MTF is the imager's pixel size.

MTF_{min} = 1/(2·l_{pixel})

Where 'l_{pixel}' is the imager's pixel dimension (expressed in millimeters so that MTF_{min} comes out in LP/mm, since one line pair spans two pixels). It's important to note that MTF is rarely flat across the entire field of view, and is generally specified both on center as well as at the edge, either as a separate value, a percentage of the on-center value, or a figure detailing MTF vs. degrees off-axis. Depending on your application, you may only care about resolution on-center and the periphery is less important. If this is not the case, however, and you must ensure the image is being resolved at the pixel level across the entire imager, you need to make sure that the MTF specified is as good as or better (higher) than what is calculated, both on-center and at the periphery. We will discuss diffraction-limited optics in a later post.

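For a hypothetical 5 µm pixel, the minimum required lens MTF works out as:

```python
pixel_um = 5.0                      # hypothetical pixel pitch, micrometers
pixel_mm = pixel_um / 1000.0        # convert to mm so the result is in LP/mm
mtf_min = 1.0 / (2.0 * pixel_mm)    # one line pair spans two pixels

print(mtf_min)  # 100.0 LP/mm
```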
*Other:*

There are other lens characteristics which speak to its capability of producing quality imagery; some of these characteristics speak to the optical aberrations seen in the imagery a lens produces. Common optical aberrations include defocus, spherical aberration, coma, astigmatism, field curvature, and chromatic aberration. Each of these aberrations distorts the imagery in a unique way; however, I'm not going to discuss them here.

Going forward with this definition, the **F**ull **W**idth **H**alf **M**ax (FWHM) of a figure is defined by the 3dB (half-power) point. To find the FWHM of a graph, you find the 2 points at which the graph crosses the half-max value (0.5 x the peak's max value) - the horizontal distance between the two points is the FWHM.
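This procedure is easy to automate; a Python sketch that finds the two half-max crossings of a sampled Gaussian by linear interpolation (for a Gaussian the analytic answer is 2·sqrt(2·ln 2)·σ ≈ 2.355·σ):

```python
import math

sigma = 1.0
xs = [i * 0.01 for i in range(-500, 501)]
ys = [math.exp(-x * x / (2 * sigma ** 2)) for x in xs]

half = max(ys) / 2.0
# find where the curve crosses the half-max level, interpolating between samples
crossings = []
for i in range(len(ys) - 1):
    if (ys[i] - half) * (ys[i + 1] - half) < 0:
        t = (half - ys[i]) / (ys[i + 1] - ys[i])
        crossings.append(xs[i] + t * (xs[i + 1] - xs[i]))

fwhm = crossings[-1] - crossings[0]
print(round(fwhm, 3))  # ~2.355 for sigma = 1
```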

**Background:**

The atmosphere, through its six layers, contains various particles and gases which attenuate impinging solar radiation. The particles which contribute the most to this attenuation are water (H_{2}O) in the troposphere (0-11 km), carbon dioxide (CO_{2}), also in the troposphere, and ozone (O_{3}) in the stratosphere (11-50 km). While there is relatively little solar absorption through the visible band (380nm - 750nm), there are strong absorption bands in the UVC and LWIR attributed to ozone, while H_{2}O and CO_{2} absorb intermittently throughout the rest of the solar spectrum. A Transmittance vs. Wavelength graph for two generic scenarios can be seen below. Note: For larger absorption bands, the contributing particles are shown; the full Raytheon infrared wall chart can be found below under references.

MODTRAN (MODerate spectral resolution atmospheric TRANSmittance algorithm and computer model) is an atmospheric spectral radiance modeling code developed by the Air Force Research Lab, Space Vehicles Directorate. This code has been combined with several others (MODTRAN4 V2R1, SAMM 1.1, SAMM 1.82, FASCODE3 with HITRAN2K, SHARC Atmosphere Generator (SAG) V1 & V2, and Celestial Background Scene Descriptor (CBSD) V5) into a single software suite called PLEXUS (Phillips Laboratory EXpert-assisted User Software), which provides the user with an easier-to-use GUI for these atmospheric codes. The most recent version as of the publishing of this post is Release 3 Version 3A. More information on PLEXUS as well as its constituent codes can be found on the AFRL software information page.

**Data Conversion:**

MODTRAN is written in FORTRAN and requires the user to specify various attributes of an experimental scenario (location, date, time, albedo, etc.), from which MODTRAN produces both the transmittance in percentage vs. wavenumber and the spectral radiance in Watt/centimeter^{2}·steradian·centimeter^{-1} vs. wavenumber for that scenario. Unfortunately, while MODTRAN performs all of its modeling in wavenumber space (cm^{-1}), my work requires the data to be in wavelength space (nm), so a conversion of the output data is needed. PLEXUS gives the user the option to convert the units of the plots, but there is no option to save the data being displayed. The user does, however, have access to the output data directly in its original units via the MODTRAN output files (radiance: *.spc, transmittance: *.trn). You can use an external mathematical editor to modify the data; I chose to write a MATLAB function. For the conversion, one could easily make the mistake of performing a quick x-axis conversion of λ=10^{7}/ν to obtain the correct units and assume this will change the y-axis accordingly. Unfortunately, this will not work, as it would produce a data-point density gradient with a higher density of points at shorter wavelengths, causing less spectral information to be represented in the shorter wavelength points while conversely more spectral information is represented in the longer wavelength points. To properly perform the conversion, one has to convert both the x-axis from wavenumber to wavelength (1), as well as the y-axis from W/cm^{2}·sr·cm^{-1} to W/m^{2}·sr·nm (2). The conversion derivations for both axes are shown below.

λ [nm] = 10^{7}/ν [cm^{-1}]    (1)

L_{λ} [W/m^{2}·sr·nm] = L_{ν}·|dν/dλ|·10^{4} = L_{ν}·(10^{7}/λ^{2})·10^{4} = L_{ν}·ν^{2}·10^{-3}    (2)

The post-conversion dataset is still going to have a data-point density gradient, but all the points are now weighted equally. To get an equally spaced set of data, you can take the mean of all the points within the range of your desired step. More information on optical units, as well as the difference between radiance and irradiance, can be found in the Jurgen R. Meyer-Arendt paper "*Radiometry and Photometry: Units and Conversion Factors*" below under references.
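The original work used a MATLAB function; an equivalent Python sketch of the two-axis conversion (function name mine) looks like:

```python
def wavenumber_to_wavelength(wavenumbers, radiance):
    """Convert spectral radiance from W/cm^2·sr·cm^-1 vs. cm^-1
    to W/m^2·sr·nm vs. nm (reversed so wavelength is ascending)."""
    wl = [1e7 / nu for nu in wavenumbers]                    # x-axis: nm
    # y-axis: L_lambda = L_nu * |dnu/dlambda| * 1e4 = L_nu * nu^2 * 1e-3
    rad = [L * nu ** 2 * 1e-3 for nu, L in zip(wavenumbers, radiance)]
    return wl[::-1], rad[::-1]

# 1 W/cm^2·sr·cm^-1 at 10000 cm^-1 (i.e. 1000 nm) -> 1e5 W/m^2·sr·nm
wl, rad = wavenumber_to_wavelength([10000.0], [1.0])
print(wl, rad)  # [1000.0] [100000.0]
```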

**Lunar Modeling:**

PLEXUS has a built-in utility called SAG which will determine the kind of scenario that should be modeled based on the input parameters, i.e. a 'Night' scenario or a 'Day' scenario. Unfortunately, I made the assumption that if SAG suggests a night scenario, it would set the MODTRAN cards to perform a lunar radiance model. What actually happens is that solar spectral radiance is produced for that scenario, which would look like the figure below. You will notice that there is no visible radiation, with extremely low levels of radiation starting to pick up in the NIR. This makes sense, as the sun is illuminating the other side of the Earth, so the only solar radiation would be due to various scattering phenomena.

In order to obtain a lunar radiance model for a given scenario, PLEXUS has to be in the advanced user profile, set in the 'User Profile' screen. All the parameters should be input as you would have originally, but before running the model you must go into the 'CPDF Editor' screen and set the 'ISOURC' attribute in the 'Source' tab to 1 (moon), as well as set the 'IEMSCT' attribute in the right column to either 2 or 3. Note: MODTRAN will not produce a lunar model if the 'IEMSCT' attribute is set to 1 or 0. If you are using MODTRAN directly through its FORTRAN interface - the 'ISOURC' attribute is Card 3 Field 6 or Card 3A1 Field 4, while the 'IEMSCT' attribute is Card 1 Field 5.

When these parameters are set correctly, the lunar spectral radiance for the same scenario as above will look like the figure below. You'll notice the shape of the output looks very similar to that of daytime solar radiance, which is due to the fact that the moon acts as a reflector for the sun and is modeled with a frequency-dependent geometric albedo. Unfortunately, MODTRAN only models the solar contribution to spectral radiance and does not take into account other nighttime sources of natural illumination, i.e. airglow, zodiacal light, and faint stars. More information on these nighttime contributing sources can be found in the Ch. Leinert paper "*The 1997 reference of diffuse night sky brightness*" below under references. Additionally, an Excel dataset for visible contributors of nighttime illumination (airglow, zodiacal light, and faint stars), extracted from Figure 1 of the Leinert paper (shown below), can be found by clicking the figure.

**References:**

Raytheon Vision Systems - The Infrared Wall Chart (©2009)

Jurgen R. Meyer-Arendt, "Radiometry and Photometry: Units and Conversion Factors," Appl. Opt. 7, 2081-2081 (1968)

Leinert, Ch., et al., "The 1997 Reference of Diffuse Night Sky Brightness," Astronomy and Astrophysics Supplement Series, 127, pp. 1-99 (1998)

The global dependence on fossil fuel is among the greatest challenges facing our economic, social and political future. The uncertainty of imported oil threatens global energy security, the pollution of fossil combustion threatens human health, and the emission of greenhouse gases threatens global climate. Meeting the demand for double the current global energy use in the next 50 years without damaging security, environment or climate requires finding alternative sources of energy that are clean, abundant, accessible and sustainable. Electricity and hydrogen, once produced, meet these criteria and are among the most versatile of energy carriers. Research challenges that would enable the production, storage, and use of electricity and hydrogen as sustainable alternatives to fossil fuel will be presented.

It was a very interesting talk, and Dr. Crabtree was very responsive to questions from the audience. The main messages I took home from the presentation are:

- Hydrogen should be our means of energy storage (slide 19)
- We should use superconductors as our transmission lines from coast to coast and continent to continent (slides 13, 17, 30, 31)
- We should be harvesting all of our energy from the sun (slide 22,23)

However, as he points out, the only thing standing in our way (and this is quite a big thing) is that we do not yet have any of the technologies or materials mature enough for a solution (slides 25, 34, 35).

The referenced slides can be found in his presentation here. (Warning: PowerPoint)
