More things to see in the amazing HiRISE image of Phoenix' descent
Posted by Emily Lakdawalla
2008/07/10 02:09 CDT
I have posted several times about the amazing photo captured by Mars Reconnaissance Orbiter's spy camera in space, HiRISE, of Phoenix under its parachute as it descended. I've received two common questions about the photo: was there any color data taken, and what more can I tell you about how hard it was to take the photo? I've got answers to both questions for you today.
The HiRISE team has released a new version of this image, which includes some color data, though unfortunately not over Phoenix itself. HiRISE takes images in long, skinny strips, and only the very skinniest center portion of the strip ever contains any color data. Here's what the new color image looks like:

NASA / JPL / U. Arizona
Color version of the 'Phoenix Descending' HiRISE image
When the HiRISE camera captured the image of Phoenix descending under its parachute, it also captured some color data, though unfortunately not on top of Phoenix. The gaps between the color strips are caused by the fact that the camera's detector is actually made up of 14 staggered CCDs, and the spacecraft had to slew at an angle in order not to capture a distorted view of Phoenix. Most of the color information indicates that the landscape is the usual red of Mars, but some blue spots indicate the presence of frost inside Heimdall crater.
NASA / JPL / UA
Phoenix lander, parachute, and heat shield
As Phoenix descended, HiRISE captured a photo of the lander, its parachute, and the blackened heat shield that slowed Phoenix' entry into Mars' atmosphere, recently detached and falling ahead of the spacecraft.

HiRISE was designed to take photos staring approximately straight down from the position of Mars Reconnaissance Orbiter, which is in a nearly circular orbit about 300 kilometers above the ground. Like many space cameras, HiRISE is a "pushbroom" style camera, which is different from a "framing" camera. The digital camera you may have at home is a framing camera: point, click, and one moment in time is captured on a two-dimensional detector array, usually a rectangular array measuring a few thousand pixels in each direction.
Pushbroom cameras are different. They have long, skinny detector arrays, oriented perpendicular to the spacecraft's direction of motion. At an instant in time, the detector array picks up one line of pixels. When the spacecraft has advanced in its orbit by the distance corresponding to the height of one pixel, the detector picks up the next line. On Earth, many photocopiers and scanners operate the same way: as the line of light sweeps across the glass platen, it illuminates and copies one linear section of your document at a time.
Pushbroom cameras allow you to capture images of fixed width (as wide as your detector array) but of arbitrary length (usually limited only by something in the spacecraft's brain, like its operating system or memory volume), making a pushbroom imager the camera style of choice if the goal is to acquire very large images, such as when you're trying to map a whole planet. They work great as long as what you are imaging doesn't change with time, because each line of pixels is captured at a very slightly different time. (In the photocopier analogy, you need to hold the book still while you're photocopying it, or the pages get smeared and illegible.) Mars' surface doesn't change very rapidly (except when there are active landslides or dust devils or moon shadows), so ordinarily the HiRISE team doesn't need to worry about things moving across their field of view.
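If you like code better than analogies, here's a toy Python sketch of the pushbroom idea. All of the names and numbers in it are made up for illustration; they aren't HiRISE's actual specifications:

```python
import numpy as np

# Illustrative pushbroom capture: a one-pixel-tall detector line records
# the scene as the spacecraft advances one pixel height per time step.
rng = np.random.default_rng(0)
scene = rng.random((3000, 1024))   # stand-in "ground truth" terrain

detector_width = 1024              # width is fixed by the detector array
n_lines = 3000                     # length is arbitrary: keep going as long as you like

image = np.empty((n_lines, detector_width))
for line in range(n_lines):
    # At each instant, the array sees exactly one line of the scene.
    image[line, :] = scene[line, :]

# Fixed width, arbitrary length -- limited only by onboard memory.
print(image.shape)  # (3000, 1024)
```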
But Phoenix was moving, and it was moving in not quite the same direction that the surface of Mars appeared to be moving. The point of the image was to capture a view of Phoenix, so the Mars Reconnaissance Orbiter team set up a slew -- a rotation of the spacecraft -- that was precisely timed to compensate for the motion of Phoenix. As the orbiter traveled along in its orbit and Phoenix fell to the ground, the spacecraft rotated in such a way that, from the camera's point of view, Phoenix appeared to travel perpendicular to the orientation of the HiRISE detectors -- that is, the image of Phoenix built up just the way the HiRISE team designed their camera to build up images. So they got a really nice image of the lander. However, by following Phoenix, they allowed the ground to drift at an angle across the CCDs. Here's a cartoon Timothy Reed sent me that shows what I'm talking about:

courtesy of Timothy Reed
Motion of Phoenix and the ground with respect to the HiRISE detectors
As Phoenix descended to Mars, Mars Reconnaissance Orbiter rotated to compensate for its motion, making Phoenix appear to travel in a direction perpendicular to the linear detectors of the HiRISE camera. However, the ground was moving in a different direction, at an angle to the detectors, which caused gaps in the resultant image.
NASA / JPL / UA / courtesy of Timothy Reed
The HiRISE focal plane
The HiRISE camera has 14 detectors, lined up on a large focal plane. Ten of the detectors see red wavelengths and form a staggered line that allows HiRISE to capture grayscale images 20,000 pixels across. At the middle of the focal plane are two more pairs of detectors that see blue-green and infrared light, allowing the HiRISE team to show the center 20% of any image swath in color.

At this point in the HiRISE assembly the detector elements are bare, but they are correctly aligned, set to the same height, and measured so that the locations of the pixels are known to an accuracy of a few microns. Later a metal cover was installed that carried spectral filters, sharp-edged rectangular apertures, and stray-light baffling.

The HiRISE focal plane projected onto Heimdall Crater
As Mars Reconnaissance Orbiter slewed to make Phoenix travel perpendicular to the HiRISE detectors, the surface of Mars drifted at an angle through the HiRISE field of view. The combination of the angular drift and the staggered positions of HiRISE's 14 detectors resulted in gaps in the final image.

Key: Black rectangles show the approximate locations of the HiRISE chip assembly active areas, overlaid on a photo of the central group of six chips. Purple lines show where the angle of the ground image's travel across the focal plane causes a gap in the overlap for the red (monochromatic) channel. Red lines show where that angle causes a larger-than-normal overlap for the red channel. Green lines indicate the boundaries within which color data is available, where portions of the image swath reach all three color channels.

All of this developed after my initial conversation with Timothy, in which I asked him to explain another detail about HiRISE: the fact that it uses Time-Delay Integration to build up its images. Time-Delay Integration is an important feature of many (in fact, most) pushbroom cameras in space, including not just Mars cameras but also the MVIC instrument on New Horizons, but until Timothy explained it to me I could not make myself understand what it was. Below is Timothy's explanation, which cleared things up for me -- I hope it helps you as well.
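Before we get to the tutorial, here's a back-of-the-envelope sketch of why the drift opens those gaps. The 48-pixel overlap comes from Timothy's description below; the stagger distance and the drift angle are placeholder guesses of mine, there only to show the geometry:

```python
import math

# Sketch of how angular drift eats into the overlap between staggered CCDs.
nominal_overlap_px = 48     # adjacent red CCDs overlap by 48 pixels (from the text)
stagger_px = 2500           # ASSUMED along-track offset between the two CCD rows
drift_angle_mrad = 25.0     # ASSUMED angle of the ground's drift across the array

# While the ground image travels the stagger distance along-track, it also
# shifts sideways, reducing the overlap on one side of each CCD...
lateral_shift_px = stagger_px * math.tan(drift_angle_mrad / 1000.0)
remaining = nominal_overlap_px - lateral_shift_px

if remaining < 0:
    print(f"gap of ~{-remaining:.0f} px between adjacent swaths")  # the purple lines
else:
    print(f"~{remaining:.0f} px of overlap survives")

# ...while on the CCD's other neighbor the same sideways shift *adds* to the
# overlap, which is why the key shows both gaps (purple) and extra overlap (red).
```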
Time-Delay Integration Tutorial
by Timothy Reed

Start with the simple pushbroom arrangement Emily described above: a single row of pixels -- say, 1,024 of them -- whose accumulated charge is dumped into a readout register each time the spacecraft advances by one pixel's height. Now, there are some limitations to this arrangement. You have to make sure that you can read out the entire length of the array in that one-pixel-height's worth of time. If you make the array wider to cover a wider swath, you have to read out the register more quickly. That, and the inability to reliably manufacture a very long array, puts a practical limit on the width of a detector element. So it's usually broken up into multiple detector elements. More on that later.
In addition, with only a single row of pixels collecting photons for that very short amount of time, there are two serious limits on sensitivity: the dynamic range and the signal-to-noise ratio of the image. Zipping past the ground in a few tens of microseconds can leave the image starved for light, especially in low-illumination situations. The solution? Put a second row of pixels below the first. Now you have a 1,024 x 2 array. When the spacecraft travels one pixel height's worth, rather than dumping the accumulated charge into a register, you dump it "down the column" into another row of pixels, where it accumulates more charge. When the accumulated charge is finally read out, there's twice as much charge as you'd have gotten from the single-pixel-high array.
You do have to make sure that you've got the time interval matched correctly to the spacecraft velocity. Otherwise the sum of the charge in a particular column's readout will be from two different spots along the ground track, and you'll get smearing in that direction.
Now, the magic occurs! Add more rows to your heart's content. At each moment in time, each row contains the integrated charge of all the previous rows looking at one single line on the ground. (Hence, time-delay integration.) And the charge moves from row to row at the same rate that the image moves across the array.
Add 100 rows. Voila! Your signal is increased by 100 without having to increase the diameter of your telescope by a factor of 10.
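[Emily here: here's a toy NumPy version of that charge-shifting trick, for anyone who wants to watch it in slow motion. Every number in it (the scene, the noise level, the 100 rows) is invented for illustration; a real TDI device shifts charge packets within the silicon, whereas this just shifts array rows.]

```python
import numpy as np

rng = np.random.default_rng(1)
tdi_rows, width = 100, 1024      # a 100 x 1024 TDI array
n_ground_lines = 400

# Made-up scene: each ground line has a fixed pattern, and any single
# one-row exposure of it is faint and noisy.
truth = rng.random((n_ground_lines, width))

def expose(line_idx):
    """One row's worth of charge from one ground line: weak signal + noise."""
    return 0.01 * truth[line_idx] + rng.normal(0.0, 0.005, width)

charge = np.zeros((tdi_rows, width))
image = []

for t in range(n_ground_lines + tdi_rows):
    if t >= tdi_rows:
        # The packet reaching the bottom row has integrated all 100 rows'
        # exposures of one single ground line. Read it out.
        image.append(charge[-1].copy())
    charge = np.roll(charge, 1, axis=0)   # shift every packet down one row
    charge[0] = 0.0                       # a fresh, empty packet enters on top
    for r in range(tdi_rows):
        line = t - r                      # the ground line currently under row r
        if 0 <= line < n_ground_lines:
            charge[r] += expose(line)

image = np.array(image)
# Each output line carries ~100x the single-row signal, while the random
# noise grows only ~sqrt(100) = 10x, so the SNR improves roughly tenfold.
```

The key is in the loop: the charge packets move down the array at exactly the same rate the ground image does, so each packet keeps staring at the same spot on the ground for all 100 rows.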
The more rows you add, the more you have to be careful to get the timing correct, but the timing is basically just dependent on the spacecraft velocity (which is known and unchanging) and altitude (which changes, but is known for any image).
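To put rough numbers on that timing (these are my assumed values, not official ones: roughly 0.3-meter pixels and a ground-track speed around 3.4 kilometers per second):

```python
# Back-of-envelope line timing, with assumed (not official) numbers:
ground_sample_m = 0.3        # ~30 cm per pixel, roughly HiRISE-class resolution
ground_speed_m_s = 3400.0    # assumed ground-track speed for a low Mars orbit

line_time_s = ground_sample_m / ground_speed_m_s
print(f"{line_time_s * 1e6:.0f} microseconds per line")  # ~88 microseconds

# Both the pixel footprint and the apparent ground speed vary with altitude,
# which is why the timing must be recomputed from the known altitude of
# each image.
```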
You now have a very flexible tool. You can integrate over all the pixels in a column to get maximum sensitivity. You can control the exposure of bright images by only accumulating charge over some of the pixels. You can "bin" the charge both in the across-track direction (adjacent pixels) and the along-track direction (adjacent TDI lines) to get less noisy, lower-resolution images.
HiRISE has 128 TDI lines, giving it wonderful sensitivity and dynamic range. But to take advantage of the potential gains, care had to be taken. If a point in the image starts at the top of a TDI column, and by the time it gets to the bottom row it has drifted into the adjacent column, the lateral resolution is compromised. That can happen if the spacecraft's roll around HiRISE's optical axis is incorrect by more than 1 milliradian. If the spacecraft jitters [such as when another instrument, like Mars Climate Sounder, is moving], the image smears. If HiRISE is trying to capture a moving object like Phoenix, and the orientation of the spacecraft isn't lined up with the direction of motion, the image smears.
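That 1-milliradian tolerance makes sense once you work out the geometry. In this sketch only the 128 TDI lines and the 1 milliradian come from the text; the rest is plain trigonometry:

```python
import math

tdi_lines = 128           # HiRISE's TDI depth, from the text
roll_error_rad = 1e-3     # the 1-milliradian tolerance quoted above

# If the image motion is rotated by this angle relative to the TDI columns,
# a point on the ground drifts sideways over the full column by:
drift_px = tdi_lines * math.tan(roll_error_rad)
print(f"{drift_px:.2f} pixels of cross-track drift")  # ~0.13 px

# Even an eighth of a pixel of drift, baked into every output line, visibly
# softens a camera designed to be sharp at the single-pixel level.
```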
Back to segmented detectors. HiRISE has 14 individual 2,048 x 128 detector elements. Ten of them are lined up in a staggered array, overlapping by 48 pixels, to create (effectively) a 20,000-pixel-wide array. The individual detector outputs are stitched together to give the full width. Two additional pairs of detectors are stacked vertically above and below the center detectors -- these have infrared and blue-green filters, allowing the center swath to collect full-resolution color images in addition to the red channel.
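The quoted 20,000-pixel width falls right out of those numbers:

```python
ccds, ccd_width, overlap = 10, 2048, 48   # all three figures from the text

# Ten CCDs, minus the nine 48-pixel seams where neighbors are stitched:
effective_width = ccds * ccd_width - (ccds - 1) * overlap
print(effective_width)  # 20048 -- effectively a 20,000-pixel-wide array
```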
A neat detail that not many people know: one would assume that during the construction of the detector array, the individual elements were precisely lined up, absolutely parallel and perpendicular, so that the TDI columns would all line up. (As noted before, anything that causes a point on the image to cross into another column smears the image.) Well, the optical system had a small amount of distortion, so the edges of the image didn't run straight down; they curved a slight amount. So each of the detector elements was tilted slightly to follow a gentle "smile," so that performance at the edge of the field would be maximized. Had that not been done, the edges of HiRISE images would be blurry.
Another reason for using segmented detectors ... one or more elements can die or degrade, and the imager can still function, albeit with gaps in the data.
Back to Emily here: With this explanation, I now understand why the image of Phoenix under its parachute (above) looks so much sharper and clearer than the background. They oriented HiRISE so that Phoenix' motion was perpendicular to the HiRISE detector arrays -- parallel to the time-delay integration columns. So they accumulated lots and lots of signal on Phoenix, in exactly the orientation the TDI technique needs to make the image really detailed and really sharp. However, the ground was drifting across the TDI columns, so it smeared. To prevent that smear, they had to use a very small number of the TDI lines for the image, maybe four or eight -- which makes the image noisier -- and they had to reduce the resolution of the published image. They said as much in their original image caption, but because I didn't understand the TDI thing, I didn't get what they were talking about.
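To put rough numbers on that trade-off (assuming photon-limited noise, which is my assumption, not something the HiRISE team stated):

```python
import math

full_tdi, reduced_tdi = 128, 8   # 128 is from the text; 4 to 8 is my guess above

signal_penalty = full_tdi / reduced_tdi        # 16x less accumulated signal
snr_penalty = math.sqrt(signal_penalty)        # ~4x worse SNR if photon-limited
print(f"{signal_penalty:.0f}x less signal, ~{snr_penalty:.1f}x noisier")
```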
Now I get it! And I hope that the ten of you readers who have followed this entry all the way to the end get it too. You deserve a cookie.