AE 497 Spring 2015 Final Report
By Catherine McCarthy
Supervised by Mike Bragg, Brian Woodard, Jeff Diebold
Aerospace Engineering, University of Illinois at Urbana-Champaign
Introduction:
One way that the pressure on a wing can be measured is through pressure taps. These are small
holes that are on the wing and measure the static pressure on the wing surface. Usually this static
pressure is then referenced to the static pressure in the freestream so the pressure coefficient can
be calculated. Another, more advanced, method in which the pressure on the wing can be
measured is through the use of pressure sensitive paint (PSP). This paint emits light at different
intensities based on the local pressure, and a continuous pressure distribution is obtained by
imaging the paint with an excitation light. Pressure sensitive paint requires a wind-off and a
wind-on picture. Previous experiments on swept wings at UIUC found that there was a significant
amount of noise on the tip of the wing model due to model deflection caused by aerodynamic
loads, resulting in misalignment between the wind-on and wind-off images. A solution to this is
to use image registration, which identifies physical markers on the surface of the model and
aligns the two images based on those marker points using computer software. Pressure taps on
the wing are commonly used as markers. These pressure taps therefore serve a two-fold purpose:
they allow collection of static pressure measurements, and they provide the reference points
needed to align the pressure sensitive paint images.
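The pressure coefficient calculation referenced above is a one-line computation; as a minimal sketch (in Python rather than any wind tunnel code, with illustrative values):

```python
def pressure_coefficient(p_tap, p_inf, q_inf):
    """Pressure coefficient: tap static pressure referenced to the freestream.

    Cp = (p - p_inf) / q_inf, where q_inf is the freestream dynamic
    pressure. All values here are illustrative, not measured data.
    """
    return (p_tap - p_inf) / q_inf

# Example: tap reads 95 kPa, freestream static 101.325 kPa, q_inf = 2 kPa
cp = pressure_coefficient(95_000.0, 101_325.0, 2_000.0)
```

A negative Cp, as in this example, indicates local pressure below freestream static pressure.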
Last semester, I performed experiments utilizing a code that had been created by a previous
student. I created four different types of marker patterns, and attached them to plates that were
then attached to an apparatus. I would then take pictures of the plates at different angles of
deformation. The plate was rigidly deflected in increments of 3 degrees, ranging from 3 to 30
degrees, with the undeformed plate representing the wind-off image. The code then returned
alignment measurements in both the x and y directions, and the root mean square error was
calculated for each deformation.
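The root mean square error computed at each deformation can be sketched as follows (a Python illustration with hypothetical residuals; the actual analysis code is not reproduced here):

```python
import numpy as np

def rms_error(dx, dy):
    """Root mean square of residual marker misalignments.

    dx, dy: per-marker alignment errors (in pixels) in x and y between
    the wind-off and registered wind-on images. Values are hypothetical.
    """
    dx = np.asarray(dx, dtype=float)
    dy = np.asarray(dy, dtype=float)
    return np.sqrt(np.mean(dx**2 + dy**2))

# Hypothetical residuals for one deflection angle
err = rms_error([0.5, -0.3, 0.1], [0.2, 0.4, -0.1])
```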
After performing the experiments, it was discovered that there were some inconsistencies in the
data. In the majority of the data, there were some noticeable rises and falls that did not follow
what was predicted to take place. In all the data, there was no clear pattern among the different
marker styles. The only tentative conclusion that could be made was that the small, closely
spaced markers produced a relatively regular data set, suggesting that they were the optimal
marker setup.
The majority of these problems with the data can be attributed to issues in the code then being
used. One issue was that the code occasionally had difficulty determining where the ‘beginning’
of the plate was. Oftentimes the
bottom row of the wind-off image would be aligned with the third or fourth row of the wind-on
image. The corresponding markers would be manually moved to make sure that the two images
were aligned properly. However, this manual correction introduces the possibility of human
error, as the alignment points may no longer be located at the exact center of the dot. This
happened most often with large degrees of deflection, and may account for the spike specifically
in the large, far apart data at the 24 degree mark.
Another possible source of error is the method the code used to find the centers of the markers.
The Matlab code first computed a gray threshold, then converted the picture to binary in order to
locate the center of mass of each dot. Depending on that threshold, however, the binarized
markers may not have been perfect circles, leading to incorrect center points. This would also
lead to misalignment, and
could account for some of the spikes that were seen in the data.
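That threshold-then-centroid pipeline can be sketched as follows (a Python/SciPy illustration of the described Matlab approach, not the original code):

```python
import numpy as np
from scipy import ndimage

def marker_centroids(gray, threshold):
    """Binarize a grayscale image and return the center of mass of each blob.

    Mirrors the described Matlab approach: threshold -> binary image ->
    per-marker centroid. An imperfect threshold distorts the blob shapes
    and therefore shifts these centroids.
    """
    binary = gray > threshold
    labels, n = ndimage.label(binary)               # connected components
    return ndimage.center_of_mass(binary, labels, range(1, n + 1))

# Tiny synthetic image with one 2x2 bright "marker"
img = np.zeros((5, 5))
img[1:3, 1:3] = 1.0
centers = marker_centroids(img, 0.5)   # -> [(1.5, 1.5)]
```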
Finally, another potential cause of the poor data was the cropping technique used with the
close-together plates. The program ran too slowly, causing problems in the image processing. To
resolve this, the image was cropped down lengthwise, creating a long, skinny plate equivalent,
which could also have led to inaccurate data.
For these reasons, this semester was spent attempting to create a new code with updated
measurement techniques. Once this code was completed, the previous plate experiment could be
reattempted.
Experimental Methods/Results:
As previously stated, the largest problem with the previous code was its inability to reliably
find the center of each marker. I began by researching Otsu’s Method, the threshold method then
being utilized. In order to implement Otsu’s Method, an image must first be converted to
grayscale. The method then uses a
“Relatively straightforward analysis which finds that threshold which minimizes the within-class
variance of the thresholded black and white pixels. In other words, this approach selects the
threshold which results in the tightest clustering of the two groups represented by the foreground
and background pixels” (Solomon 266).
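This threshold selection can be illustrated with a short implementation (a NumPy sketch of Otsu's method, not the toolbox routine used in the experiments):

```python
import numpy as np

def otsu_threshold(gray, nbins=256):
    """Otsu's method: pick the threshold that minimizes the within-class
    variance (equivalently, maximizes the between-class variance) of the
    foreground and background pixel classes. Intensities assumed in [0, 1]."""
    hist, edges = np.histogram(gray, bins=nbins, range=(0.0, 1.0))
    p = hist / hist.sum()                    # probability of each intensity bin
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                        # background class weight
    mu = np.cumsum(p * centers)              # cumulative mean intensity
    mu_t = mu[-1]                            # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    sigma_b = np.nan_to_num(sigma_b)         # empty classes contribute nothing
    # Threshold at the upper edge of the winning bin; binarize with `>`.
    return edges[np.argmax(sigma_b) + 1]

# Bimodal test image: dark background pixels and bright marker pixels
img = np.concatenate([np.full(900, 0.1), np.full(100, 0.9)])
t = otsu_threshold(img)                      # falls between the two modes
binary = img > t
```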
However, one concern with Otsu’s method was the inability to define what exactly that threshold
was. Consequently, it was difficult to determine whether the markers found using this method
actually retained the shape we wanted. To test this, and to see how far off the thresholded
image was, I modified the code to stop after Otsu’s method and return the black and white image
after thresholding. As a test case, I used a small cropped version of the entire plate
(Figure 1).
Figure 1. Cropped Image of Small, Close Together Marker Plate
I then ran it through the Otsu’s Method (Figure 2).
Figure 2. Small, Close Together Markers after Otsu’s Method
As we can see, the markers are not perfect circles as desired. This could lead to an inaccurate
center, which could then lead to poor data like that seen previously.
My next thought was to attempt to use a smoothing technique in order to help with the
inconsistencies produced by Otsu’s Method. I first used the fspecial command to create a
rectangular averaging filter, and then applied it using imfilter. With this filter, anything
outside the bounds of the array specified in the average filter would be assumed equal to the
nearest array border value. The results from this are shown in Figure 3. The smoothing did not
appear to be of any help in creating a more accurate representation of the markers.
Figure 3. Image after Otsu’s Method Followed by Smoothing Filter
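For reference, an averaging filter with the replicate boundary handling described above can be sketched in Python (a reimplementation of the fspecial/imfilter behavior, not the Matlab code itself):

```python
import numpy as np

def average_filter(img, size=3):
    """Rectangular averaging (box) filter with replicate boundary handling:
    pixels outside the image are assumed equal to the nearest border value,
    matching the behavior described for imfilter with 'replicate'."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")   # replicate nearest border value
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].mean()
    return out

# A constant image is unchanged by averaging with replicate padding
flat = np.full((4, 4), 2.0)
smoothed = average_filter(flat)              # still all 2.0
```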
My next method that I attempted was to utilize a minimum perimeter polygon technique. The
basic idea for this technique, as outlined by Gonzalez, is that a digital boundary can be
approximated by utilizing a polygon. The goal of this polygon approximation is “to capture the
essence of a shape in a given boundary using the fewest possible number of vertices”. I quickly
learned that this was a time-consuming approach, and I was never able to successfully find the
outlines of the markers as I had hoped. The unsuccessful image processing is
shown in Figure 4.
Figure 4. Image after Utilizing Polygonal Approximations
Finally, I attempted to utilize a template matching technique:
“Given an image f(x,y), the correlation problem is to find all places in the image that match a
given subimage w(x,y) (called a mask or template). Usually, w(x,y) is much smaller than f(x,y).
The method of choice for matching by correlation is to use the correlation coefficient” (Gonzalez
681-2).
The image used as a template is shown in Figure 5, and the corresponding correlation can be
seen in Figure 6.
Figure 5. Template Image Figure 6. Image after Correlation
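Matching by correlation coefficient, as described in the quote, can be sketched as follows (a NumPy illustration on a synthetic image; the marker layout here is hypothetical):

```python
import numpy as np

def corr_coeff_map(img, tmpl):
    """Slide the template w over the image f and compute the correlation
    coefficient at each valid position; values near 1 mark matches."""
    th, tw = tmpl.shape
    t = tmpl - tmpl.mean()
    tnorm = np.sqrt((t**2).sum())
    H = img.shape[0] - th + 1
    W = img.shape[1] - tw + 1
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            w = img[i:i + th, j:j + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz**2).sum()) * tnorm
            out[i, j] = (wz * t).sum() / denom if denom > 0 else 0.0
    return out

# Synthetic image with one bright 2x2 "marker"; template holds the same shape
img = np.zeros((6, 6)); img[2:4, 2:4] = 1.0
tmpl = np.zeros((4, 4)); tmpl[1:3, 1:3] = 1.0
c = corr_coeff_map(img, tmpl)
peak = np.unravel_index(np.argmax(c), c.shape)   # top-left of best match
```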
Due to the limited photo editing software on my computer, I was only able to make the template
image small enough to cover two markers, instead of the ideal one. For testing purposes, however,
this was not a large factor. In the correlation image, there was a gradual brightening as
correlation increased. I believe that the reason for the strange fading in and out on the edges of
the image is due to the fact that a template of two markers was used when running the program.
After this correlation method was implemented, it was still necessary to find the center of each
marker. I did this using the Matlab command findpeaks, which finds local maxima. Using this
command, the maximum pixel values are found in each region of the image and marked with a blue
‘x’. Figure 7 shows the image after it has been processed with findpeaks.
Figure 7. Image after Finding Peaks in Markers
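Since Matlab's findpeaks operates on vectors, applying it to an image amounts to locating local maxima of the correlation map. A two-dimensional analog can be sketched as follows (illustrative Python, not the report's code):

```python
import numpy as np

def local_maxima(img, floor=0.0):
    """Return (row, col) of pixels strictly greater than all 8 neighbors
    and above `floor`. Border pixels are skipped for simplicity."""
    peaks = []
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            is_unique_max = (patch == img[i, j]).sum() == 1
            if img[i, j] > floor and img[i, j] == patch.max() and is_unique_max:
                peaks.append((i, j))
    return peaks

# Hypothetical correlation-like map with two peaks
c = np.zeros((5, 7))
c[2, 2] = 0.9
c[2, 5] = 0.8
found = local_maxima(c, floor=0.5)   # -> [(2, 2), (2, 5)]
```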
Once again, I believe that the off-center “peaks” result from the correlation image having been
created with a two-marker template. While these peaks were the closest we came to a promising
center for the markers, some of the centers found were unfortunately still not perfectly
centered.
Conclusions:
While a variety of image processing methods were utilized during this semester, the most
promising seems to be using template matching in order to create a correlation image that can
then be used to find the center of the markers. This would prove helpful to the group as well
since the markers may not necessarily always be circular. By using the template matching
method, any type of marker can be utilized.
The next step would be to find this “peak” at a subpixel resolution. A list of the correlation at
each individual point, or perhaps within a certain region where the markers are known to be, can
be created, and then the program can interpolate over these values. By interpolating over the
entire surface, a more accurate representation of the center peak can hopefully be created. Once
this is done, then the root mean square error can be recalculated in the manner it was calculated
previously. With that, we can reevaluate the data with the original plates that were made, and
update the previous conclusions.
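The proposed subpixel interpolation could, for example, use a parabolic fit through the correlation peak and its two neighbors along each axis (a sketch of one possible approach, not yet part of the code):

```python
def parabolic_subpixel(y_left, y_peak, y_right):
    """Offset of the true maximum from the integer peak location, assuming
    the three correlation samples lie on a parabola. Returns a value in
    (-0.5, 0.5) when y_peak is a strict local maximum."""
    denom = y_left - 2.0 * y_peak + y_right
    if denom == 0.0:
        return 0.0   # flat triple: no refinement possible
    return 0.5 * (y_left - y_right) / denom

# Samples of y = 1 - (x - 0.25)**2 at x = -1, 0, 1: true peak offset is +0.25
offset = parabolic_subpixel(-0.5625, 0.9375, 0.4375)
```

Applying this independently along the rows and columns of the correlation map would refine each integer peak from findpeaks to subpixel resolution.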
References
Gonzalez, R. C., Woods, R. E., & Eddins, S. L. (2009). Digital image processing using Matlab
(2nd ed.). Gatesmark.
Solomon, C., & Breckon, T. (2011). Fundamentals of digital image processing. John Wiley &
Sons.