A GPS System for Cosmic Images
The Chandra X-ray Observatory captures information about the high-energy Universe. Chandra data is inherently digital. As the methods to communicate digitally have advanced, so too have the efforts to keep Chandra engaged with the public.
One initiative has been to encode Chandra images with Astronomy Visualization Metadata, or AVM. Tagging astronomical images with AVM ensures that key information such as image description, object type, location, coordinates and more, stays linked to the image as it travels through the Internet. By keeping content embedded with the imagery, the outreach potential of each image released with metadata is increased dramatically.
Let's listen to Joe DePasquale, Chandra's Science Imager, talk about how he incorporates science metadata into Chandra images.
We'll start at the main page for Chandra public outreach. I'm going to dig into our photo album and bring up a recent image of the supernova remnant N49 as an example of what we can do with AVM.
As you can see here, we have many individual images associated with an image release. Each release is complemented by a few paragraphs describing the science behind the image, as well as some specific information like image credits, scale of the image, and color coding.
Ideally, we want to keep all of this information tied to the images as they make their way into the wild. And that is where the Virtual Astronomy Multimedia Project comes into play. The VAMP team has developed the Astronomy Visualization Metadata standard, which is basically the DNA of an image. The AVM standard is an XML-formatted series of keyword-value pairs relevant to astronomical imaging. The team has expanded on existing image metadata editing software by creating an AVM panel plugin for Adobe imaging products.
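To make the "keyword-value pairs" idea concrete, here is a simplified sketch of what AVM tags embedded in an image file might look like. The tag names follow the published AVM schema, but the values, the namespace declaration, and the exact serialization shown here are illustrative, not copied from an actual Chandra release file:

```xml
<!-- Illustrative AVM fragment; values and serialization are hypothetical -->
<rdf:Description rdf:about=""
    xmlns:avm="http://www.communicatingastronomy.org/avm/1.0/">
  <avm:Subject.Name>N49</avm:Subject.Name>
  <avm:Facility>Chandra X-ray Observatory</avm:Facility>
  <avm:Spectral.Band>X-ray</avm:Spectral.Band>
  <avm:Spectral.ColorAssignment>Blue</avm:Spectral.ColorAssignment>
  <avm:Spatial.ReferenceValue>81.50, -66.08</avm:Spatial.ReferenceValue>
  <avm:Spatial.Equinox>J2000</avm:Spatial.Equinox>
  <avm:Publisher>Chandra X-ray Center</avm:Publisher>
  <avm:ReferenceURL>https://example.org/n49</avm:ReferenceURL>
</rdf:Description>
```

Because the tags travel inside the file itself (as XMP metadata), any downstream tool that understands the AVM namespace can recover the description, color coding, and sky position without ever contacting the original website.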
Using Adobe Bridge with the AVM plugin, I can easily view and edit N49's metadata. The entries prefaced with "Astro" come from the AVM plugin. We have entries for Image Creator, the Content of the image - coming directly from the paragraphs on the website - and there's a Subject Category entry, which I'll describe in more detail in a minute. There is some very specific information regarding the instruments used to create the image, the wavelengths of light in the image as well as their color coding. We also have a section devoted to the coordinates of this object on the sky which is derived from a tool called PinpointWCS that I'll describe in more detail. Finally, we have information about the publisher of the image, and where it can be found on the web.
Regarding the Subject Category entry, the VAMP team has developed a taxonomy of astronomical objects with associated code numbers to better describe each image. We can use the subject category widget to distill N49's category down from something in the universe to a supernova remnant and neutron star in the local universe using these two codes.
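The two-part structure of those category codes can be sketched in a few lines: a letter prefix places the object at a distance scale, and the dotted digits walk down the taxonomy tree. The code values and tree entries below are a hypothetical subset invented for illustration, not the official VAMP taxonomy:

```python
# Sketch of AVM Subject.Category code expansion.
# Letter prefix = distance scale; dotted digits = taxonomy path.
# The TAXONOMY entries below are illustrative, not the official tree.
DISTANCE = {"A": "Solar System", "B": "Milky Way",
            "C": "Local Universe", "D": "Early Universe"}

TAXONOMY = {
    "4": "Nebula",
    "4.1": "Nebula > Type",
    "4.1.4": "Nebula > Type > Supernova Remnant",
    "3": "Star",
    "3.1": "Star > Evolutionary Stage",
    "3.1.9": "Star > Evolutionary Stage > Neutron Star",
}

def describe(code: str) -> str:
    """Expand a code like 'C.4.1.4' into a human-readable category."""
    prefix, _, path = code.partition(".")
    return f"{DISTANCE[prefix]}: {TAXONOMY[path]}"

print(describe("C.4.1.4"))
# Local Universe: Nebula > Type > Supernova Remnant
```

An image like N49 would then carry two such codes, one for the supernova remnant and one for the neutron star, exactly as described above.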
Now I also promised more information about deriving image coordinates. Once we've created an image for press release, it has lost a great deal of its original metadata, including information relevant to its position on the celestial sphere. Using PinpointWCS, we can load an original data file that contains some portion of the image, as well as the press release image itself, and use pinpoints of light, or stars, within the two images to re-derive the coordinates of the source and store that information in the press release image's metadata. I literally zoom in to the pixel level of each image and hand-select a few stars until I start to see the image's coordinate information being populated below. The more selections, the more accurate the registration, but we'll go with three stars here.
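The registration idea behind this step can be sketched with a least-squares fit. Under the simplifying assumption of a small field of view (so a linear, tangent-plane mapping is adequate), matched stars give equations relating press-image pixel positions to sky coordinates; three non-collinear matches determine the six unknowns of an affine transform, and extra matches average down the selection error. This is a minimal sketch of that idea, not PinpointWCS's actual implementation, and the star positions are hypothetical:

```python
import numpy as np

def fit_wcs(pixels, sky):
    """Fit an affine map [ra, dec] = [x, y, 1] @ coeffs by least squares.

    pixels: N matched star positions in the press image, (x, y)
    sky:    the same N stars' (ra, dec) from the original data file
    """
    pixels = np.asarray(pixels, dtype=float)
    sky = np.asarray(sky, dtype=float)
    # Design matrix with a constant column for the offset term.
    M = np.hstack([pixels, np.ones((len(pixels), 1))])
    coeffs, *_ = np.linalg.lstsq(M, sky, rcond=None)
    return coeffs  # shape (3, 2)

def pixel_to_sky(coeffs, x, y):
    """Apply the fitted transform to one pixel position."""
    return np.array([x, y, 1.0]) @ coeffs

# Hypothetical matched stars: pixel positions and their sky coordinates.
pix = [(100, 200), (850, 120), (400, 700)]
sky = [(81.48, -66.10), (81.55, -66.05), (81.50, -66.02)]

coeffs = fit_wcs(pix, sky)
ra, dec = pixel_to_sky(coeffs, 100, 200)  # reproduces the first star
```

With three matches the fit is exact; with more, `lstsq` minimizes the residual across all the hand-selected stars, which is why more selections give a more accurate registration.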
The seemingly insignificant step of adding coordinate information has great implications for what can be done with this image down the line other than just staring at it. With this additional information tied to the image, we can open up a whole universe of applications using third-party astronomical viewing software such as Microsoft's Worldwide Telescope. Worldwide Telescope actually reads the AVM metadata and intelligently places the image in its proper location in the sky. You can also view the information about the image that we stored in its metadata directly in WWT.
Now that the image is loaded, you can see how WWT has placed it into its true location in the sky. We can cross-fade between the loaded image and the background digital sky survey image and see how the two line up.
Without AVM, it is virtually impossible to place publicly released astronomical images in their proper place on the night sky. As we've seen, with AVM information these images can be imported and used in immersive environments such as Microsoft's Worldwide Telescope and Google Sky. Digital planetariums can directly import images thanks to AVM. Online services like Flickr and Wikisky can automatically incorporate Chandra images because they have this information embedded in their electronic DNA. This largely invisible technology allows Chandra images to be shared and utilized by millions of people.