Tuesday, May 1, 2018

Lab 10: Radar Remote Sensing

Introduction

The goal of this lab was to gain a basic understanding of the preprocessing and processing functions associated with radar remote sensing. These functions include noise reduction through speckle filtering, spectral and spatial enhancement, multi-sensor fusion, texture analysis, polarimetric processing, and slant-range to ground-range conversion.

Methods

Part I: Speckle Reduction and Edge Enhancement

Section I: Speckle Filtering

To begin this lab, a speckle filtering technique was applied to a raw radar image to reduce the "salt and pepper" heterogeneity displayed in the image. Applying speckle suppression before conducting other image processing functions is crucial to the accuracy and effectiveness of outputs generated from radar imagery.

Through an iterative process, the Radar Speckle Suppression (RSS) tool in ERDAS was used to suppress salt and peppering in a raw radar image.

Figure 1: Original raw radar image.
Notice the television static-like appearance of the image in Figure 1. This is what is meant by "salt and peppering". The heterogeneity of the image is reduced in each iteration of the RSS tool. Before using the RSS tool, however, the coefficient of variation (CV) for the raw radar image must be determined. This can be done by using the following equation.

Figure 2: Coefficient of Variation.
This equation, which expresses the CV as the ratio of the image's standard deviation to its mean, quantifies the amount of noise in a radar image and guides the RSS tool during speckle suppression. The CV for the raw radar image was determined by checking the "calculate coefficient of variation" box shown in Figure 3, running the RSS tool, and checking the session log. The CV for the raw radar image was 0.274552 (rounded to 0.275 in Figure 3).
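For reference, the same value can be computed directly, since the CV is simply the standard deviation of the pixel values divided by their mean. Below is a minimal NumPy sketch of that calculation; the array name and loading step are hypothetical, as the lab computed the value with the RSS tool itself.

```python
import numpy as np

def coefficient_of_variation(image):
    """Coefficient of variation of a radar image: standard deviation / mean."""
    pixels = image.astype(np.float64)
    return pixels.std() / pixels.mean()

# Hypothetical usage -- 'raw_radar' stands in for the raw radar image as a 2-D array.
# raw_radar = load_radar_image("raw_radar.img")   # loading step not shown in the lab
# print(coefficient_of_variation(raw_radar))      # the RSS tool reported 0.274552
```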

Figure 3: RSS tool parameters.
As shown in Figure 3, the raw radar image from Figure 1 was used as the input file, the output file was given a name and storage location, the window size was changed to 3x3, the coefficient of variation multiplier was set to 0.5, the filter type was set to Lee-Sigma, and the coefficient of variation was set to 0.275. The tool was run to produce the following image.
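To illustrate what an adaptive speckle filter of this family does, here is a minimal NumPy/SciPy sketch of a basic Lee filter. It is not ERDAS's exact Lee-Sigma implementation, and the default parameter values are only placeholders echoing the lab's settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(image, window=3, cv_noise=0.275):
    """Simplified Lee speckle filter (illustrative, not ERDAS's Lee-Sigma).

    cv_noise is the coefficient of variation of the speckle, e.g. the 0.275
    value the RSS tool reported for this lab's raw image.
    """
    img = image.astype(np.float64)
    local_mean = uniform_filter(img, size=window)
    local_sq_mean = uniform_filter(img * img, size=window)
    local_var = local_sq_mean - local_mean ** 2

    # Local coefficient of variation and adaptive weight: smooth windows that
    # look like pure speckle, preserve pixels where local variance is high.
    local_cv_sq = np.where(local_mean > 0, local_var / (local_mean ** 2), 0.0)
    weight = np.clip(1.0 - (cv_noise ** 2) / np.maximum(local_cv_sq, 1e-12), 0.0, 1.0)
    return local_mean + weight * (img - local_mean)
```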

Figure 4: Output from first RSS iteration.
Using the output image shown in Figure 4, the above process was repeated two more times, as shown in Figures 5 through 8.

Figure 5: RSS tool second iteration.

Figure 6: Second iteration output.

Figure 7: RSS tool third iteration.

Figure 8: Third iteration output.
As shown in Figure 8, the third iteration of speckle suppression produces a more homogeneous image than the raw radar image; however, the spatial resolution is diminished.

Section II: Edge Enhancement

For the next section of this part of the lab, the effects of speckle filtering were addressed through edge enhancement. First, edge enhancement was performed on both the raw radar image (which had not undergone speckle suppression) and the filtered image (which had), and the results were compared. Then, speckle suppression was performed on the edge-enhanced unfiltered image.
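As an illustration of the idea (not the specific kernel ERDAS applies), edge enhancement can be sketched as adding a high-pass (Laplacian) response back onto the image:

```python
import numpy as np
from scipy.ndimage import convolve

def edge_enhance(image, strength=1.0):
    """Generic edge enhancement: add a Laplacian high-pass back onto the image.

    Illustrative only; the edge-enhancement kernel used in the lab's ERDAS
    workflow is not reproduced here.
    """
    laplacian_kernel = np.array([[0, -1, 0],
                                 [-1, 4, -1],
                                 [0, -1, 0]], dtype=np.float64)
    img = image.astype(np.float64)
    edges = convolve(img, laplacian_kernel)   # high-pass response
    return img + strength * edges             # sharpen by adding it back
```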

Figure 9: Third iteration output of speckle suppression post-edge enhancement. 
Section III: Image Enhancement

For the third section of this part of the lab, the Gamma-MAP filter was applied to a radar image through the RSS tool, in contrast to the Lee-Sigma filter used in the first section.

Figure 10: RSS tool parameters with Gamma-MAP filter.
Figure 11: Original unfiltered image.
Figure 12: Despeckled image output.
Once the image was despeckled, it was enhanced with the Wallis adaptive filter; a conceptual sketch of this type of filter appears after Figure 14.

Figure 13: Wallis adaptive filter parameters.
Figure 14: Enhanced filtered image output.
Notice that the image in Figure 14 retains both shape detail and overall homogeneity.
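Conceptually, a Wallis-style adaptive filter pushes each pixel's local mean and contrast toward target values, boosting detail in flat areas. The sketch below is a simplified version with hypothetical window and target parameters, not the settings shown in Figure 13.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wallis_filter(image, window=11, target_mean=127.0, target_std=40.0):
    """Simplified Wallis adaptive contrast filter (illustrative only).

    Each pixel's local mean and standard deviation are pushed toward the
    target values; the window size and targets here are hypothetical.
    """
    img = image.astype(np.float64)
    local_mean = uniform_filter(img, size=window)
    local_sq_mean = uniform_filter(img * img, size=window)
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0.0))

    gain = target_std / (local_std + 1e-6)          # local contrast stretch factor
    return (img - local_mean) * gain + target_mean  # re-center on the target mean
```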

Part II: Sensor Merge, Texture Analysis, and Brightness Adjustment

Section I: Apply Sensor Merge

In the first section of this part of the lab, similar images from different sensors (one radar and one multispectral) were overlaid on one another to apply color to the radar image and shape to a cloud-covered multispectral image. This way, the resulting image contained pertinent information about both the shape and color of surface features within the study area.

To do this, the Sensor Merge tool was used from the radar utilities toolbox.

Figure 15: Sensor Merge tool parameters.
As shown in Figure 15, a radar image (used for the gray scale parameter) and a Thematic Mapper image ("tm", used for the multispectral image parameter) of the same area were overlaid, with the merge method set to IHS, the resampling technique to nearest neighbor, the IHS substitution to intensity, and the RGB parameters to 1, 2, and 3. The output was stretched to an unsigned 8-bit file and given a name and storage location.
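The idea behind the IHS merge can be sketched in a few lines: transform the multispectral bands into a hue/saturation/intensity-style space, substitute the radar backscatter for the intensity channel, and transform back. The sketch below uses matplotlib's HSV transform as a stand-in for a true IHS transform, so it only illustrates the concept rather than the Sensor Merge tool's algorithm.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def ihs_sensor_merge(multispectral_rgb, radar_gray):
    """Approximate IHS-style sensor merge via an HSV substitution (sketch).

    multispectral_rgb: float array of shape (rows, cols, 3), scaled to 0-1
    radar_gray:        float array of shape (rows, cols), scaled to 0-1
    """
    hsv = rgb_to_hsv(multispectral_rgb)
    hsv[..., 2] = radar_gray                 # replace intensity/value with radar
    merged = hsv_to_rgb(hsv)
    return (merged * 255).astype(np.uint8)   # stretch to an unsigned 8-bit output
```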

Section II: Apply Texture Analysis

In the second section of this part of the lab, the goal was to apply the texture analysis tool to an image and compare the results with the original radar image.

To do this, the Texture Analysis tool was used from the radar utilities toolbox.

Figure 16: Texture Analysis tool parameters.
In Figure 16, a radar image of segmented cropland was used as the input image, the output image was given a name and storage location, the output type was set to float single, and the operator was set to skewness.
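As a rough illustration of the skewness operator, a moving-window skewness can be computed directly with SciPy; the window size and edge handling below are hypothetical, not the tool's defaults.

```python
import numpy as np
from scipy.ndimage import generic_filter
from scipy.stats import skew

def skewness_texture(image, window=3):
    """Moving-window skewness texture measure (illustrative sketch).

    For each pixel, the skewness of the values in the surrounding window is
    computed, producing a floating-point texture image similar in spirit to
    the 'skewness' operator selected in the Texture Analysis tool. Perfectly
    flat windows have zero variance, so their skewness is undefined.
    """
    img = image.astype(np.float64)
    return generic_filter(img, skew, size=window, mode="nearest")
```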

Section III: Brightness Adjustment

In the final section of this part of the lab, the goal was to adjust the brightness of the same original radar image used in the previous section. To do this, the Brightness Adjustment tool from the radar utilities toolbox was used. This tool adjusts the brightness variation across the radar image.

Figure 17: Brightness Adjustment tool parameters.
In Figure 17, the same raw radar image from Section II was used as the input, the output file was given a name and storage location, the output type was set to float single, and the output option was set to column.
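One simple way to picture a column-oriented brightness adjustment is to scale each column so its mean matches the image-wide mean, flattening any brightness gradient in the range (column) direction. The sketch below is illustrative only and is not ERDAS's algorithm.

```python
import numpy as np

def column_brightness_adjustment(image):
    """Scale each column toward the image-wide mean brightness (sketch)."""
    img = image.astype(np.float64)
    column_means = img.mean(axis=0)                       # mean of each column
    scale = img.mean() / np.maximum(column_means, 1e-6)   # per-column gain
    return img * scale[np.newaxis, :]
```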

Part III: Polarimetric SAR Processing and Analysis

In this part of the lab, the objective was to gain a better understanding of various ways of processing and analyzing synthetic aperture radar (SAR) data. The first task was to synthesize an image, which was done using the Synthesize SIR-C Data tool.

Figure 18: Synthesize SIR-C Data tool. 
Figure 19: Synthesize SIR-C tool parameters.
Figure 18 displays the default parameters for the Synthesize SIR-C data tool and Figure 19 displays another example of acceptable input parameters for the tool. The first set of parameters added four bands to the Available Bands list in ENVI and the second combination added two more bands.

From there, the next step was to display the image using three different stretch types: Gaussian, Linear, and Square Root.

Figure 20: Apply Gaussian stretch.

Figure 21: Apply Linear stretch.

Figure 22: Apply Square Root stretch.
Figures 20-22 show the histograms of the images before and after each respective stretch was applied. The output images can be viewed in the Results section.
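For context, the linear and square-root stretches can be sketched in a few lines of NumPy (the Gaussian stretch, which remaps the histogram toward a normal distribution, is omitted). These are generic versions, not ENVI's exact implementations, and the percentile cutoffs are hypothetical.

```python
import numpy as np

def linear_stretch(image, low_pct=2, high_pct=98):
    """Linear percent stretch: map the low/high percentiles to 0-255."""
    lo, hi = np.percentile(image, [low_pct, high_pct])
    scaled = (image.astype(np.float64) - lo) / (hi - lo)
    return (np.clip(scaled, 0, 1) * 255).astype(np.uint8)

def square_root_stretch(image):
    """Square-root stretch: expand dark values, compress bright ones."""
    scaled = image.astype(np.float64)
    scaled = (scaled - scaled.min()) / (scaled.max() - scaled.min())
    return (np.sqrt(scaled) * 255).astype(np.uint8)
```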

Part IV: Slant-to-Ground Range Transformation

In the fourth and final part of this lab, the objective was to correct distortion in a radar image using the Slant-to-Ground Range tool. 
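Under a flat-earth assumption, the geometry behind this correction is simple: a slant-range sample at distance Rs maps to a ground range of sqrt(Rs^2 - H^2), where H is the sensor altitude, and the image is then resampled onto an evenly spaced ground-range grid. The sketch below illustrates this for a single image row; all parameter names and values are hypothetical, and a real correction would use the sensor's geometry metadata.

```python
import numpy as np

def slant_to_ground_range(row, slant_spacing, altitude, near_slant_range):
    """Resample one image row from slant range to ground range (flat-earth sketch).

    Assumes near_slant_range > altitude so the ground ranges increase
    monotonically across the row.
    """
    n = row.size
    slant = near_slant_range + slant_spacing * np.arange(n)      # slant range per sample
    ground = np.sqrt(np.maximum(slant ** 2 - altitude ** 2, 0.0))  # flat-earth ground range

    # Re-interpolate onto an evenly spaced ground-range grid covering the same swath.
    ground_uniform = np.linspace(ground[0], ground[-1], n)
    return np.interp(ground_uniform, ground, row.astype(np.float64))
```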
