Education Activities To Accompany Chandra Data Analysis Software M31 & Coma
Big, Bigger, Biggest
Activity 2: When is a pixel not a pixel? Answer: when it's 4, 9, 16 or more pixels!
There are many ways to look at astronomical data. Sometimes you want to see as wide a field of view as possible; sometimes you want to zoom in on a small region of the sky. So the same data can be displayed in many different forms. To see this, open DS9 and go to Frame > Tile Frames. Then load the "center of M31" image.
Select Scale > Log. Then go to Frame > New Frame, and load the regular "M31" image.
What you are now looking at is the same data file, displayed in two different ways. Notice that the image on the left (the "center" image) looks magnified. Although it occupies the same amount of screen space as the other, it shows only a portion of the sky that the "full" image on the right displays. How is this done? The center image is displayed at the native 0.5 arc-second resolution of the Chandra satellite; that is, each pixel edge you see corresponds to 0.5 arc-seconds on the sky. It is a portion of the data obtained in observation #303 in 1999. The image on the right takes the same data and sums each 4x4 block of 16 pixels into a single pixel on the screen. Therefore, you can see 16 times as much sky area for a given number of displayed screen pixels. To see this graphically, zoom in on the right image: either select Zoom > 4 in the drop-down menu, or click the "Zoom" button on the menu bar just above the image displays and then click "in" twice to get the same result. (Many functions in DS9 can be accessed in several ways; when you use the program frequently you will see the utility of this.)
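The 4x4 summation described above can be sketched in NumPy (a hypothetical illustration of the arithmetic, not the code DS9 actually runs):

```python
import numpy as np

# Toy 8x8 "image" standing in for native-resolution data.
image = np.arange(64).reshape(8, 8)

# Sum each 4x4 block of 16 pixels into a single output pixel,
# as the full-field M31 display does.
block = 4
binned = image.reshape(8 // block, block, 8 // block, block).sum(axis=(1, 3))

print(binned.shape)  # (2, 2): each output pixel covers 16x the sky area
```

Note that the counts are conserved (`binned.sum()` equals `image.sum()`); what is lost is the positional information within each 4x4 block.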
Now you see the two images at the same magnification, but the one on the right looks very "grainy". That's because 16 pixels of the left-hand image went into each (now larger) displayed pixel on the right. In order to see more of the sky on our screen, we sacrificed resolution, since each pixel edge now represents 2 arc-seconds on the sky. To see this clearly, go to physical pixel (4068, 4136) in each image. Notice that it "sees" the same part of the sky, namely a region where two point sources lie very close to each other. Notice also the size of the magnified pixel in the "magnifier" panel at the upper right of the DS9 display as you move from one image to the other. (It is vital to use physical pixels here, because they represent the true sky position of the object in the data file, regardless of where it appears on the image display.) But see how you can barely distinguish the sources in the right-hand image: so many pixels (16) were summed into each displayed pixel that the separation of the two objects is almost lost, and they appear as virtually a single "blob".
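Why close sources can merge into one blob can be seen with a toy sketch (made-up positions and brightnesses, not the actual observation #303 data):

```python
import numpy as np

# Toy field at Chandra's native 0.5 arc-second pixels: two point
# sources 2 pixels (1 arc-second) apart, with empty sky between them.
native = np.zeros((8, 8))
native[4, 1] = 100.0   # source A
native[4, 3] = 100.0   # source B

peaks_native = np.count_nonzero(native)   # 2 distinct bright pixels

# Bin 4x4 as in the full-field image (each pixel edge now 2 arc-seconds):
binned = native.reshape(2, 4, 2, 4).sum(axis=(1, 3))
peaks_binned = np.count_nonzero(binned)   # 1: both sources fall in one pixel
```

Both sources land inside the same 4x4 block, so at the coarser resolution they become a single "blob" holding all 200 counts.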
Now you can begin to appreciate the kind of advance the Chandra satellite made over previous missions. Because of its incredibly high resolution, it can distinguish sources separated from each other by tiny angular distances on the sky.
You may wonder why one would ever use anything but the "best" resolution that Chandra has. Imagine a very faint source in the field of an observation. It may be virtually indistinguishable from the "background". But if you sum adjacent pixels (each one just a bit brighter than the background surrounding the source), you may be able to see the source more clearly. Thus, depending on what you want to do with your data, you may elect to display it in different ways.
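The payoff of binning for a faint source can be sketched with simulated Poisson counts (all numbers here are hypothetical, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical field: sparse Poisson background of ~0.1 counts per
# native pixel, plus a faint source adding ~0.5 counts per pixel
# over an 8x8 patch -- per pixel, barely above the background noise.
image = rng.poisson(0.1, size=(64, 64)).astype(float)
image[28:36, 28:36] += rng.poisson(0.5, size=(8, 8))

# Sum 4x4 blocks: the 16 source pixels in each block add up coherently,
# while the background fluctuations grow only slowly, so the summed
# source region now stands clearly above the summed background.
binned = image.reshape(16, 4, 16, 4).sum(axis=(1, 3))

print(binned[7:9, 7:9].mean())  # source blocks: well above background
print(binned[0:4, 0:4].mean())  # background blocks: much lower
```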