Week 7: Analysis of imagery

Lecture Index: Introduction. / Imagery use in the geosciences. / Commonly used types of imagery in geoscience. / Basic principles behind image filtering/analysis. / Classifications. / Exercise 7



The basic goal of image analysis is to extract information from an image. Remote Sensing is a discipline that focuses on this goal in much greater depth, and we have a course devoted to this topic. The goals of this week's exercise are: a) to introduce you to GIMP and/or Adobe Photoshop, software packages that manipulate raster images; b) to teach you some ways that images can be "enhanced" or manipulated; and c) to teach you some of the basic ideas behind such image analysis, including the concept of classification.

This was altered in Adobe Photoshop from an original black and white photo of the Elkhorn River near Scribner, NE in an attempt to emphasize the river channel forms. Old channel forms show somewhat selectively in green with dark outlines. Notice the large amount of noise.

It is helpful to think about the chain of events in image analysis. It might be summarized as having the following steps:

an illuminator sends out radiation -> atmospheric effects occur -> reflectance occurs (surface effects) -> atmospheric effects occur -> sensor acquisition -> computer capture and processing (including rectification) -> analysis and classification -> end use.

Thus, there are a lot of variables that influence the image and final product. We will focus on the last three stages in this exercise.

A Holy Grail of image analysis is the assumption/desire/hope that a distinctive feature has a distinctive reflectance signature. For example, does contaminated soil have some unique spectral or distribution signature that allows it to be distinguished from all the other features in an image? If so, this permits efficient analysis of the extent and pattern of occurrence of that feature, in this case a clear picture of the contamination. A major goal is to identify such unique signatures in the imagery. If a series of distinctive signatures can be determined, then the image can be classified. You can think of classification as assigning a pixel an attribute trait instead of just a reflectance value. For water vs. land as classes this is easy to envision; for contaminated vs. uncontaminated soil it could be much more difficult. Fuzzy classifications can be used here (i.e. a pixel belongs 80% to one class and 20% to another). More on fuzzy set theory later. Once you have figured out a scheme that works, you can automate the analysis and process a lot of data efficiently. You could, for example, monitor the change in forest cover over time and connect that to sediment plumes in nearby bodies of water.
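As a sketch of the difference between hard and fuzzy classification, the following Python/NumPy fragment assigns both a single class and fractional class memberships to pixels in a single gray-value band. The threshold and transition values are invented purely for illustration:

```python
import numpy as np

# Hypothetical example: classify pixels as "water" vs "land" from a single
# gray-value band. The threshold (60) and the fuzzy transition zone (40..80)
# are invented for illustration.
gray = np.array([10, 40, 55, 65, 90, 200], dtype=float)

# Hard classification: each pixel belongs to exactly one class.
hard = np.where(gray < 60, "water", "land")

# Fuzzy classification: each pixel gets a degree of membership in [0, 1]
# for each class, ramping linearly across the transition zone.
water_membership = np.clip((80 - gray) / 40.0, 0.0, 1.0)
land_membership = 1.0 - water_membership

print(hard)              # e.g. a pixel at 55 is simply "water"
print(water_membership)  # ...but fuzzily it is 62.5% water, 37.5% land
```

Note how the pixels at 55 and 65 get forced into opposite classes by the hard rule even though their memberships are nearly balanced; this is the situation where a fuzzy scheme is most useful.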

The challenge is non-unique signatures, meaning that two different types of features 'look' the same. This is why multi-band satellite data is used. For black and white imagery you can see that developing a unique signature could be a real problem: all you have for each pixel is a gray value. For a color image each pixel has a color and an intensity, and with multiband imagery each pixel may have a different value in each band (where bands refer to different portions of the electromagnetic spectrum). The more reflectance information that is collected for each pixel location, the better the chance of identifying a unique signature for a given surface feature.
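A minimal sketch of why extra bands help, in Python with NumPy: the two hypothetical surface types below have identical values in one band but separate cleanly in a second band, and a simple minimum-distance rule can then tell them apart. All band values and class names are invented for illustration:

```python
import numpy as np

# Each signature is one pixel's reflectance in (visible, near-infrared).
# In the visible band alone these two surfaces are indistinguishable (both 80).
wet_soil  = np.array([80, 30])   # hypothetical signature
dry_grass = np.array([80, 140])  # same visible value, different NIR

pixel = np.array([82, 135])      # an unknown pixel to classify

# Nearest-signature ("minimum distance") assignment in the 2-band space:
d_soil = np.linalg.norm(pixel - wet_soil)
d_grass = np.linalg.norm(pixel - dry_grass)
label = "wet soil" if d_soil < d_grass else "dry grass"
print(label)
```

With only the visible band, both distances would be 2 and the pixel could not be assigned; adding the second band makes the classification unambiguous.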

Important note: In order to be as scientifically transparent as possible, when you use a manipulated image in any report, presentation, or publication, you should make clear to your audience that it has been manipulated and what the nature of the manipulation was.

Imagery use in the geosciences.

Below is an incomplete list of how imagery is commonly used in the geosciences:

Lineament analysis: this is a special case of a type of analysis that maps linear and curvilinear patterns.

Example of lineament analysis of an airphoto to better understand fracture fluid flow. Image source: New Hampshire Bedrock Aquifer Analysis Lineament Map areas - USGS - http://nh.water.usgs.gov/project/nhwellyieldprob/lin_index.htm .

Commonly used types of imagery in geoscience

Below is a list of commonly used types of and terms for imagery. This list is far from complete and is constantly evolving.

  • air photos: Still one of the most commonly used types of imagery, air photos have some major advantages, including low cost, high resolution, and a historic archival character useful for comparison purposes. The Earth Explorer site provides access to such imagery for a good portion of the U.S.
  • satellite imagery: These often provide a bigger view, but usually at lower resolution and higher cost. LANDSAT is perhaps the best known and is widely available on the web, but historically doesn't have the best resolution. The commercial IKONOS satellite does produce black and white imagery with a resolution such that lines on a football field are visible. More and more satellite imagery is available on a real-time basis.
  • multispectral imagery: This refers to a process where different sensors collect reflectance information in different parts of the spectrum, both visible and invisible. Each portion of the spectrum in which information is collected is called a band. This imagery has much more potential for analysis and classification simply because there is more information to work with. Satellite imagery is often multispectral. There are an immense number of different types of sensors and 'bands' on the variety of imaging satellite platforms. You just have to learn the details for the specific imagery you are using.
  • false color imagery: If it is in the non-visible part of the spectrum, then how do you see it?? Non-visible portions of the spectrum are assigned colors (substituted by part of the visible spectrum) and hence the colors are false. Infrared signatures are often shown as red hues.
  • radar, SLAR (Side Looking Airborne Radar): This sensor is often particularly good for seeing structure, or form (geomorphic surfaces); the response is a function of surface 'roughness', not color.
  • LIDAR: laser illumination coupled with 3-D positioning, with sub-meter resolution possible, can create a virtual outcrop.
  • geophysical 'images': the analysis techniques in remote sensing and geophysics share a lot of similarities, which is no surprise.
  • Newer developments include the use of photogrammetric software to create 3-D surface models from overlapping hand-held photos, so that each pixel also has position data associated with it. This of course allows all sorts of analysis not possible before. Drones also allow image capture in ways that were previously impossible, and it is likely we will see much more drone-acquired imagery.
  • The image to the left is of part of the Mississippi River. It is clearly a false color image, and the bands involved are actually radar bands. The point bar sand deposits and channel scars stand out clearly in this image. Details can be found at NASA's Visible Earth link.
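The false-color idea described above can be sketched in Python with NumPy: a non-visible band is simply assigned to a visible display channel. The small band arrays here are invented for illustration:

```python
import numpy as np

# False-color compositing: assign a non-visible band to a visible display
# channel. Here a hypothetical 2x2 scene's near-infrared band is shown as
# red, and two visible bands fill the green and blue display channels.
nir   = np.array([[200, 50], [180, 40]], dtype=np.uint8)  # invented values
red   = np.array([[ 60, 40], [ 70, 30]], dtype=np.uint8)
green = np.array([[ 80, 90], [ 60, 90]], dtype=np.uint8)

# Stack as an RGB display image: NIR -> display red, so strong infrared
# reflectors (healthy vegetation, for example) appear in red hues.
false_color = np.dstack([nir, red, green])
print(false_color.shape)  # (2, 2, 3): rows, columns, display channels
```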

    Basic principles behind image filtering/analysis

    One standard approach is to look at the distribution of values (via a histogram of pixel value frequency) and then modify it. This consists of recomputing the values in the array on the basis of some modification algorithm. You could stretch, condense, remove, or replace all or portions of the histogram. An asymmetric distribution could be changed into a more symmetric and centered one. The possibilities are almost endless, so the question of which possibility will be most helpful is a crucial one. Tools in GIMP allow you to do this.
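A minimal sketch of one such histogram modification, a linear contrast stretch, in Python with NumPy; the pixel values are invented for illustration:

```python
import numpy as np

# Linear contrast stretch: remap the occupied part of the histogram onto
# the full 0..255 display range. Pixel values are invented for illustration.
pixels = np.array([50, 60, 80, 100, 120], dtype=float)

lo, hi = pixels.min(), pixels.max()
stretched = np.round((pixels - lo) / (hi - lo) * 255).astype(np.uint8)
print(stretched)  # the darkest pixel maps to 0, the brightest to 255
```

The relative ordering of the pixels is unchanged; only the spread of the histogram is widened, which is why this kind of operation enhances contrast without adding information.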

    Think of a digital image as an x-y array of values (for multispectral imagery, imagine stacked layers of such arrays) where the z value is the intensity of luminosity. If you take one value and then compute a new value for it on the basis of its neighbors' values, you can create a new image.

    Consider each of the below transformations for the z variable at x,y points in an 'image' and describe how it should transform your image:

    This is basically matrix manipulation (which is why many remote sensing specialists love C++ and similar programming languages). The possibilities are almost endless. In the cases above you are comparing a pixel's value to its neighbors and modifying that pixel's value on the basis of the result. The window size, or how far away you look for neighbors, is also an important consideration.
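As a concrete illustration of this neighborhood idea, here is a minimal sketch of a 3x3 mean (smoothing) filter in Python with NumPy; the array values and window size are invented for illustration:

```python
import numpy as np

# A 3x3 mean filter: each interior pixel's new value is the average of
# itself and its 8 neighbors. Array values are invented for illustration.
img = np.array([[10, 10, 10, 10],
                [10, 90, 90, 10],
                [10, 90, 90, 10],
                [10, 10, 10, 10]], dtype=float)

# Read from img and write to out, so already-modified values never feed
# back into later windows.
out = img.copy()
for y in range(1, img.shape[0] - 1):
    for x in range(1, img.shape[1] - 1):
        out[y, x] = img[y-1:y+2, x-1:x+2].mean()  # the 3x3 window

print(out[1, 1])  # the sharp 10/90 boundary is smoothed out
```

Swapping `.mean()` for `.max()`, `.min()`, or a weighted sum (a kernel) gives the other common filters; enlarging the slice changes the window size discussed above.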


    Classifying an image is an attempt to map out the distribution of features of interest, such as grasslands versus woodlands, or mineralized versus unmineralized rock, or simply different rock units. The mapping is based on identifying a spectral signature for the feature; all the pixels (or other 'window' sizes) with that signature are then assigned a common pixel value that represents that feature. The lure of classification is to automate mapping in the computer environment. It is a very challenging endeavor because the world is complex, with lots of fuzzy boundaries and non-unique signatures, and it gives one a little more respect for the brain, which classifies features in an image so easily.

    Image from a USGS site - Using Satellite Imagery to Map Irrigated Land, Sharon L. Qi, Alexandria Konduris, and David W. Litke, http://co.water.usgs.gov/nawqa/hpgw/meetings/p0507.htm

    Modeling spectral responses: When classifying you could build a theoretical-physical model of how a surface type reflects, but it can be challenging. Consider the factors that determine spectral reflectance of granular material at the earth's surface (e.g. of a point bar):

    You could make another list for a type of vegetative cover that would be much longer. In addition, there are illumination factors, such as the intensity and angle of illumination. If you can develop a working model, then you can use it to analyze and classify your image. In practice this is best done empirically, by matching a known site of a given surface type with its spectral reflectance.
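A minimal sketch of this empirical approach in Python with NumPy: mean spectral signatures are computed from training pixels at known sites, and new pixels are assigned to the nearest mean. All band values and class names are invented for illustration:

```python
import numpy as np

# Supervised, empirical classification: each training row is one pixel's
# reflectance in three bands, sampled from a site of known surface type.
# All values are invented for illustration.
water_training = np.array([[20, 15, 5], [25, 18, 8], [22, 16, 6]], float)
sand_training  = np.array([[120, 110, 100], [130, 115, 105]], float)

# The empirical "model" is just the mean signature of each known site.
water_mean = water_training.mean(axis=0)
sand_mean = sand_training.mean(axis=0)

def classify(pixel):
    """Assign a pixel to the class whose mean signature is closest."""
    d_water = np.linalg.norm(pixel - water_mean)
    d_sand = np.linalg.norm(pixel - sand_mean)
    return "water" if d_water < d_sand else "sand"

print(classify(np.array([24, 17, 7])))  # a pixel near the water signature
```

This minimum-distance-to-means rule is one of the simplest supervised classifiers; more sophisticated schemes (maximum likelihood, fuzzy memberships) refine the same basic idea of comparing each pixel to empirically trained signatures.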

    Sites exist where spectral responses for minerals and other materials are cataloged. These can serve as a basis for building spectral response models.

    This is a classified image of Death Valley that has been draped on top of a DEM surface. The different colors are an attempt to map out dominant mineral distribution. See this link (Credit: Image courtesy NASA GSFC, MITI, ERSDAC, JAROS, and U.S./Japan ASTER Science Team) for details.

    There are very sophisticated programs for image analysis and classification, such as ERDAS Imagine. However, the associated learning curve is long, and beyond the scope of this class. Instead we will explore Adobe Photoshop and/or GIMP, which can digitally manipulate raster files, and significant portions of which you can learn to use in an hour to several hours' time. Adobe Photoshop provides no real capacity for true analysis (e.g. to measure the percentage of coverage by a feature, to classify an image, or to map gradients), but it will introduce you to basic, 'canned' raster manipulation.

    Exercise 7

    Potentially useful links for further exploration (let me know if you find other relevant links):

    Copyright by Harmon D. Maher Jr. This material may be used for non-profit educational purposes if proper attribution is given. Otherwise please contact Harmon D. Maher Jr.