In Search of Ancient Astronomy Images

The Apollo 16 Far UV Scans, and How They Were Converted for Windows Viewing

Click here to view the images.

Twenty years before Hubble, a space observatory was operated outside the earth's atmosphere. For three days astronaut John Young operated a small ultraviolet astronomy camera on the moon during the Apollo 16 mission. Designed by Dr. George Carruthers of the US Naval Research Laboratory, it consisted of a tripod-mounted, 3-inch electronographic Schmidt camera with a cesium iodide cathode and film cartridge. Targets included the geocorona, the earth's atmosphere, the solar wind, various nebulae, the Milky Way, galactic clusters and other galactic objects, intergalactic hydrogen, solar bow cloud, the lunar atmosphere, and lunar volcanic gases (if any). At the end of the mission, the film was removed from the camera and returned to earth (NSSDC, Far-Ultraviolet Camera/Spectroscope).

When I first heard about this experiment I was intrigued. What sort of pictures did the astronauts take? How did our universe appear in ultraviolet? I assumed these pictures would be on the Internet like everything else, but all I could find on NASA's many websites was a single scan of the earth in ultraviolet (see right below or here). This beautiful picture only whetted my appetite; nearly 200 astronomy pictures were taken from the moon. Where were the rest?

According to the all-knowing Google, the images were available from the National Space Science Data Center as either print images or electronic scans. Since prints cost money, I asked for the scans. The NSSDC very kindly FTP'd the files to me soon after the New Year's holidays, but when I saw the list of files I began to realize why no one had posted them before.

You couldn't just click and open these files in a Windows image viewer, since their extensions (.FILE001;1, .FILE002;1, etc.) gave no clue to their format. Dropping them into Photoshop produced only read errors. No one at the NSSDC could tell me how to view these images. Dr. Carruthers told me he did not produce these scans; they were encoded by his co-investigator Dr. Thornton Page who was, alas, no longer with us.

I still wanted to view these files. I did not allow myself to be discouraged by my complete and total ignorance of computer graphics. If there was a way to see these pictures I would find it. Somehow. Some way. Even if it meant having to read things.

The Scans and the Rescans

A month after these pictures were brought back to earth, Dr. Page produced a set of digitized scans on a Dicomed Model 57 microdensitometer using a spot size of 38 microns and a scan interval of 32 microns. As detailed in the National Space Science Data Center's Master Catalog, these scans were stored on 26 magnetic tapes containing a total of 476 files. The scans were encoded as 8-bit uncompressed 1024x1024 greyscale images, which would have been viewable by any modern imaging program such as Photoshop.

In late 1972/early 1973, Dr. Page produced an updated set of scans to replace the original set stored at the NSSDC. This new data set had 359 files on 6 tapes. Unfortunately, there was no information on how they were encoded.

Finding the Format

Most electronic images are encoded by assigning a certain number of bits per pixel. Usually this number is 1 (for 2-colour / B&W images); 4 (for 16-colour images); 8 (for 256-colour / greyscale images); or 24 (usually one byte each for the red, green, and blue channels). Some images (such as TIFFs) will have 16 or 32 bits/pixel, but this is less common.

In theory one could assign any number of bits per pixel, but this would make it awkward to share images. Standard byte depths have made the Internet possible, but who knows what formats were used in the early 1970's?

An uncompressed 1024 x 1024 image with an 8-bit depth should be one megabyte long, yet most of the files in this data set were 1.5 megabytes, or 50% too large. What's more, these bytes alternated between high and low values, and seemed to do it in threes:

7  24  59  7  24  63  8  17  5  9  1  17 (et cetera...)

If the files had 8 bits/pixel there should be entire segments with similar adjacent byte values; after all, much of each image contained the blackness of space. Yet this pattern suggested the files used three bytes per pixel. But in that case the files should be 3 megabytes long.

Apollo 16 UV Astronomy Camera deployed in the shadow of the Lunar Module. Astronaut Charles Duke is in the background.

Screenshot of file list

Detail of Mission Frame 45, showing the earth's magnetic corona. Image distortion caused by reading data at 8 bits / pixel. Click for complete image.

It's possible to view these NSSDC files as 8-bit greyscale images, but they appear as fuzzy rectangles. For example, if you open this sample file in Photoshop, set the Width to 1537 pixels, the Height to 1024, and the Header Size to 601, you will get something like the image on the left.

Not only is this view of the crescent earth stretched into an oval, but the background, which should show the blackness of space, is rendered into vertical stripes. Apparently some bits were being read more than once.

These files were too big for 8 bits, too small for 24 bits. Perhaps they had a bit depth of 12.
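The arithmetic behind that guess can be checked in a few lines of Python (a sketch of my own; the function name is not from any NSSDC tooling):

```python
# Raw (uncompressed, headerless) size in bytes of a 1024 x 1024 image
# at a given bit depth -- a sanity check on the file sizes discussed above.
def raw_size(width, height, bits_per_pixel):
    return width * height * bits_per_pixel // 8

print(raw_size(1024, 1024, 8))   # -> 1048576 bytes (1 MB: too small)
print(raw_size(1024, 1024, 12))  # -> 1572864 bytes (1.5 MB: just right)
print(raw_size(1024, 1024, 24))  # -> 3145728 bytes (3 MB: too big)
```

Only the 12-bit depth lands exactly on the observed 1.5 megabytes.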

Moving Bits

A 12-bit depth would mean these files were using 3 bytes per 2 pixels. Which meant I would have to split some of the bytes and join them to the others. The procedure for performing this conversion on a 1024 x 1024 image is simple enough, especially if you have a computer:
  1. Read 3 bytes
  2. Join first half of the 2nd byte to the 1st byte
  3. Join 3rd byte to the second half of the 2nd byte
  4. Repeat 524,287 more times
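Steps 1-3 above can be sketched in Python, under the 12-bit assumption (the function name is my own):

```python
def split_group_12bit(b1, b2, b3):
    # Split one 3-byte group into two 12-bit pixel values:
    # the 1st byte joins the first half of the 2nd byte,
    # and the 3rd byte joins the second half of the 2nd byte.
    first = b1 * 16 + b2 // 16       # 8 bits of b1 + top 4 bits of b2
    second = (b2 % 16) * 256 + b3    # bottom 4 bits of b2 + 8 bits of b3
    return first, second

print(split_group_12bit(0x12, 0x34, 0x56))  # -> (291, 1110), i.e. 0x123 and 0x456
```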

If your chosen computer language cannot directly manipulate bits, then you can do it mathematically:

  1. To shift a series of bits to the left by n bits, multiply their value by 2^n.
  2. To extract the first n bits from a byte, integer-divide its value by 2^(8-n).
  3. To extract the last n bits from a byte, take its value modulo 2^n.
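These three rules can be checked against Python's native bit operators (the test value here is arbitrary):

```python
n = 3
value = 0b10110110  # 182, an arbitrary test byte

# 1. Shifting left by n bits is the same as multiplying by 2^n.
assert value * 2**n == value << n

# 2. Extracting the first (high) n bits is integer division by 2^(8-n).
assert value // 2**(8 - n) == value >> (8 - n)

# 3. Extracting the last (low) n bits is taking the value modulo 2^n.
assert value % 2**n == value & (2**n - 1)

print(value // 2**(8 - n), value % 2**n)  # -> 5 6
```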

Of course, attaching 4 bits to 8 will yield a value that's more than a single byte can handle. One way to break each output value into byte-sized pieces is to DIV and MOD each output value by 256 to create 2 bytes per pixel with 65,536 shades of grey. This would convert the original 1.5 megabyte file into a 2 megabyte file, but at least Photoshop can display it.
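The DIV-and-MOD step for one output value might look like this (again my own naming, not the author's actual code):

```python
def to_byte_pair(value):
    # Break a value of up to 65,535 into two bytes: DIV 256 gives the
    # high byte, MOD 256 gives the low byte.
    return value // 256, value % 256

print(to_byte_pair(291))   # -> (1, 35)
print(to_byte_pair(1110))  # -> (4, 86)
```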

However, after some trial and error and some serious examination of these files, I decided that they had a bit depth of only 9. The highest byte value I found in these files was 63, meaning the upper 2 bits of each byte were never used; three such 6-bit bytes hold 18 bits, or two 9-bit pixels, making for 512 (2^9) greyscale shades. So I actually had to shift the bytes by 3 places and disregard the upper two bits. The next diagram shows how I spliced the bytes:

             7       24       59
  Input: 00000111 00011000 00111011    0 = unused value
  Output:  000111011   000111011
               59          59 

Bits shifted 4 places. Click to view.

Trust me, I tried a 4-bit shift and it didn't work. I tried other bit-shifting combinations too. Thousands of them.

So, to do the 3-bit shift, I used an algorithm that looked something like this:

  1ST_VALUE = (1ST_BYTE * 8) + (2ND_BYTE \ 8)
  2ND_VALUE = ((2ND_BYTE MOD 8) * 64) + 3RD_BYTE

...which yielded a byte progression that looked much more orderly:

  Input Bytes:   7  24  59   7  24  63   8  17   5   9   1  17
  Output Bytes:    59  59      59  63      66  69     72  81

The Output Bytes now have similar values that are gradually increasing, which is what we should expect from a smooth greyscale image.
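Transliterated into Python (the original was QuickBasic, and the function name is mine), the same algorithm reproduces that progression:

```python
def unpack_9bit(data):
    # Each 3-byte group yields two 9-bit pixel values (the 3-bit shift).
    out = []
    for i in range(0, len(data) - 2, 3):
        b1, b2, b3 = data[i], data[i + 1], data[i + 2]
        out.append(b1 * 8 + b2 // 8)    # 1ST_VALUE
        out.append((b2 % 8) * 64 + b3)  # 2ND_VALUE
    return out

print(unpack_9bit([7, 24, 59, 7, 24, 63, 8, 17, 5, 9, 1, 17]))
# -> [59, 59, 59, 63, 66, 69, 72, 81]
```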

But, you are no doubt wondering, how did I manage to cram a quart into a pint, or fit 9 bits into a byte? Sure, I could get away with it if that 1st bit were equal to zero, but what would happen when it equalled 1 and we got an input value above 255?

Byte rollover causing dark border inside crescent

What would happen is we'd get a result like the one on the left.

Notice how the area around the earth's crescent looks oddly black. In reality it should be light, while the centre of the image should be whiter still. Here is a fine example of "byte rollover": when the input value went above 255, the output value dropped back to a lower number. (Think of your car odometer turning over to show all zeroes.) As a result, light pixels came out dark. (Actually, in some languages the high value would crash the program with an overflow error, but a competent programmer should be able to handle that.)
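The rollover itself is easy to reproduce: forcing a 9-bit value into one byte wraps it modulo 256.

```python
def store_in_byte(value):
    # A 9-bit value (0-511) crammed into one byte wraps like an odometer.
    return value % 256

print(store_in_byte(200))  # -> 200 (fits, unchanged)
print(store_in_byte(300))  # -> 44  (a bright pixel comes out dark)
```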

Of course I tried splitting each output value into 2 bytes to create a 16 bits / pixel image. Unfortunately, Photoshop still displayed only 256 greyscale shades. That is, it read the file but ignored every other byte. (At least my version did.)

Fortunately there are other ways to deal with rollover. Unfortunately some of them are less satisfying than others.

Handling the High Bits

High contrast

As I said earlier, these images use only 9 bits/pixel, which gives 512 greyscale shades. Still too many for a single byte, but at least we can jettison some data without losing too much detail.

One way is to make all the high values pure white. This is somewhat like turning up the contrast knob on an old black & white TV. While this removes the dark banding and striping (see High Contrast image, left), it destroys the detail in precisely the area we're interested in: the bright stars and nebulae.

Low contrast

Alternatively, we can turn the contrast the other way. To do this on the computer, convert all the low values (less than 256) to zero—which is pure black—and then subtract 256 from the higher values. The Low Contrast image shows what we get. Since the high values in these images weren't terribly high, the image is mainly dark and not very interesting, at least not from an aesthetic point of view.

Dimmed image

A third way is to split the difference, by reducing all the byte values to half. It's like turning down the brightness knob on the TV. As you can see in the Dimmed Image, we lose some detail overall because of rounding. (Since pixel values cannot be fractional, 60\2 and 61\2 will both equal 30.)
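All three compromises can be sketched as one-liners, each squeezing a 9-bit value (0 to 511) into a single byte (the function names are mine):

```python
def high_contrast(v):
    return min(v, 255)      # everything above 255 becomes pure white

def low_contrast(v):
    return max(v - 256, 0)  # values under 256 go black; subtract 256 from the rest

def dimmed(v):
    return v // 2           # halve everything; 60 and 61 both become 30

print(high_contrast(300), low_contrast(300), dimmed(300))  # -> 255 44 150
```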

But we don't have to choose among any of these unsatisfactory alternates. We live in a universe of glorious colour, and we can represent these colours with three bytes per pixel. This gives us 16,777,216 possible values, which is more than enough to cover all the original greyscale shades.

Presented in Colour

Colorized image. Note horizontal striping inside the crescent.

It's easy enough to colorize a greyscale image: just assign a colour value to a particular shade. The hard part is deciding which colours to use. For values of 14 and below I used greyscale (all three bytes had the same value), which made the borders appear nearly black and "framed" the image (which was shot through a tube). From 15 to 255 I set the red and green channels to 0.5x and blue to 1x the original value, which made most of space appear dark blue. It somehow seemed appropriate for images that were shot in ultraviolet. Above 255 I used cyan: subtract 256 from the original value, and multiply the result by 0 red, 1 green, and 1 blue. This not only made the bright areas look brighter, it also continued the "blue" theme.
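A sketch of that mapping, taking a 9-bit shade (0 to 511) to an (R, G, B) triple; the integer halving of the red and green channels is my approximation of the 0.5x factor:

```python
def colourize(v):
    if v <= 14:              # borders: plain greyscale, nearly black
        return (v, v, v)
    if v <= 255:             # most of space: dark blue
        return (v // 2, v // 2, v)
    c = v - 256              # the brightest areas: cyan
    return (0, c, c)

print(colourize(10))   # -> (10, 10, 10)
print(colourize(100))  # -> (50, 50, 100)
print(colourize(300))  # -> (0, 44, 44)
```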

These colours were purely arbitrary. They were based on my own aesthetic sense and desire to show enough without adding too much. I could just as easily have put in some intermediate colours (as in the example at bottom left) and used different values. I even tried to imitate the published example (bottom right) by making the high values red and putting in some intermediate green, but it made the geocorona look like a slice of watermelon.

There was one more flaw to fix. In these examples, you might have noticed a series of short horizontal black stripes running along the inside edge of the earth's sunlit crescent. Actually, it's the white areas between these stripes that don't belong (the black stripes being the result of "byte rollover"). I'm not sure what these white stripes were doing in the image. They are obviously not natural, and examining the files' byte values showed long strings of 63's running along every other line. I assume they were either a scanning error, a feature of the encoding algorithm, or my own stupidity.

Since these white bands were in the original files, there didn't seem to be any way I could recover the original data. The only way I could eliminate these stripes was to cheat.

Corrected colours

The "corrected" image on the left is the result of replacing these light areas with a colour value that was the average between the darker strips above and below (which did belong). Although this average value is probably close to what was photographed, and although it does look better, I have put in pixels that weren't in the originals. Just so you know.
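The "cheat" can be sketched as a simple vertical average, assuming the image is held as rows of pixel values and the offending pixels have already been identified:

```python
def patched_value(rows, y, x):
    # Replace the stripe pixel with the average of the pixels
    # directly above and below it, which did belong in the image.
    return (rows[y - 1][x] + rows[y + 1][x]) // 2

rows = [[100], [511], [120]]      # a bad bright stripe between two good rows
print(patched_value(rows, 1, 0))  # -> 110
```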

There is one other minor detail. I initially converted these images line-by-line into bitmap format. However, bitmaps are read from the bottom up, so these images came out upside-down. I didn't think it mattered much for astronomy images, but all the same, when I converted them into net-friendly JPEGs I also took the step of flipping them back. Just to be consistent.
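Since bitmap rows run bottom-to-top, restoring the right-side-up image is just a row reversal:

```python
rows = [[1, 1], [2, 2], [3, 3]]  # image rows, top row first
bmp_order = rows[::-1]           # bitmap storage order: bottom row first
restored = bmp_order[::-1]       # flipping back recovers the original
print(restored == rows)          # -> True
```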

Assigning Names

Once I acquired the pictures, the next step was to identify them. Most (but not all) of these files have a 601-byte header that gives their frame number, target designation, time of exposure, etc. Also, Dr. Page produced a catalogue that described these images, but it was for the original 8-bit data set. I have not been able to find a catalogue to match the revised data set. However, it was still possible to match the times in most cases and find when each picture was taken and where the camera was pointing.

This catalogue is on 16 mm microfilm and can be purchased from the NSSDC. Alternatively, I have keyed in the information from the 190 lunar surface frames into Works Database format, and you can download it here. (Requires Microsoft Works to view.)

Final Thoughts

More colours

I have probably done as much as I can with these images, but I don't think I will ever be finished with them. The NSSDC sent me 360 files ranging in size from 3K to 1.539M, and I have not yet converted the smaller ones. I think I have figured out their original format, but there are probably some important details that I have overlooked. If you have your own ideas on how to arrange these bits, then you're welcome to try your hand at this raw data file, which contains the geocorona image featured on this page.

The program I used to convert these files was written in MS QuickBasic, a product that is no longer offered or supported by Microsoft. I decided against posting the source code for several reasons: 1) You need QuickBasic to run it, and it hasn't been available in over 10 years (though it might run under Visual Basic; I haven't checked). 2) It's a quick-and-dirty utility, written and continually re-written for a specific purpose, with no interface, minimal commenting, and many hard-coded values that only I know about. User-friendly it is not. 3) If you're serious about trying your own conversions, I think it's better to blaze your own trail instead of following my rutted path. You'll probably do a much better job.

Last year the NSSDC told me they were rescanning all their original Apollo pictures. These newer and better scans are being posted on the Apollo Lunar Surface Journal and the Apollo Image Gallery. Perhaps soon the far UV images will once again be easily available to the public, and as richly detailed high resolution scans with colour added. And the original data set, and my own efforts to decode it, will be only a historic sideline. But it was still fun to work with them. At least I think so.

One final word: Although I have received much information and assistance from Dr. Carruthers and the NSSDC staff, they are in no way responsible for the content of these pages. The opinions expressed on this website, the conclusions given, and especially the mistakes, are mine and mine alone. This site is not affiliated with or endorsed by NASA, the USNRL, or any other organization or individual. It is merely a personal project, a result of attempting to satisfy my own curiosity and share the results with others. I hope you liked it.

Questions? Comments? Ideas? Send them to


Carruthers, George R. "Apollo 16 Far-Ultraviolet Camera/Spectrograph: Instrument and Operations." Applied Optics, October 1973, Vol. 12, p. 2501. Detailed description of how the camera was built and operated for lunar conditions.

Carruthers, George R., and Thornton Page. "Apollo 16 Far-Ultraviolet Camera/Spectrograph: Earth Observations." Science, 1 September 1972, Vol. 177, p. 788. The camera took the first full earth images of the magnetic corona. This article describes some of those findings.

Carruthers, George R., and Thornton Page. "Distribution of Hot Stars and Hydrogen in the Large Magellanic Cloud." The Astrophysical Journal, Sept. 15, 1981, Vol. 248, p. 906. A highly technical article on some more findings obtained during this observatory's brief existence.

National Aeronautics and Space Administration. "Far Ultraviolet Camera/Spectroscope." Apollo 16 Press Kit, Release No. 72-64K, pp. 71-73. This 5.6 MB PDF document contains a short description of the experiment and a diagram of the camera. You can find the article by searching for "ultraviolet."

National Aeronautics and Space Administration. "Far Ultraviolet Camera/Spectroscope Experiment." Apollo 16 Mission Report MSC-07230, p. 4-16 (18 MB PDF). Search for "ultraviolet" to find two pieces on the far UV experiment. The first article describes the experiment, the second describes some of the problems encountered.

National Space Science Data Center. "Catalog of Information on Mission Frames and How They Were Microdensitometered." Dr. Page's catalogue for the original data set.

National Space Science Data Center. "Digitized Scans of the Far-Ultraviolet Camera/Spectroscope Frames."

National Space Science Data Center. "Far-Ultraviolet Camera/Spectroscope."

A later geocorona scan from NASA. To make comparing easier, I flipped and turned the image; the original is over here.