Dec. 13, 2003 Valliappa.Lakshmanan@noaa.gov 1
Quality Control of Weather Radar Data
Valliappa.Lakshmanan@noaa.gov
National Severe Storms Laboratory & University of Oklahoma
Norman, OK, USA
http://cimms.ou.edu/~lakshman/
Weather Radar
Weather forecasting relies on observations from remote sensors: models are initialized using observations, and severe weather warnings rely on real-time observations.
Weather radars provide the highest resolution: in time, a complete 3D scan every 5-15 minutes; in space, 0.5-1 degree x 0.25-1 km tilts; vertically, elevation angles 0.5 to 2 degrees apart.
NEXRAD – WSR-88D
Weather radars in the United States are 10 cm Doppler radars.
They measure both reflectivity and velocity; spectrum width information is also provided.
There is very little attenuation with range, so they can “see” through thunderstorms.
Horizontal resolution: 0.95 degrees (365 radials); 1 km for reflectivity, 0.25 km for velocity.
Horizontal range: 460 km for the surveillance (reflectivity-only) scan; 230 km for scans at higher tilts, and for velocity at the lowest tilt.
NEXRAD volume coverage pattern
The radar sweeps a tilt, then moves up and sweeps another tilt.
It typically collects all the moments at once, except at the lowest scan.
The 3 dB beam width is about 1 degree.
Beam path
The path of the radar beam is slightly refracted relative to the earth's curvature. In a standard atmosphere, this is modeled with a 4/3 effective earth radius.
Anomalous propagation: under non-standard atmospheric conditions the beam is heavily refracted and senses the ground, producing ground clutter.
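The standard-atmosphere beam height mentioned above is usually computed with the 4/3 effective earth radius model. A minimal sketch (the function name and arguments are illustrative, not from this talk):

```python
import math

def beam_height_km(range_km, elev_deg, radar_height_km=0.0):
    """Height of the beam center above the radar level, using the
    standard-atmosphere 4/3 effective earth radius model."""
    ke_re = (4.0 / 3.0) * 6371.0  # effective earth radius in km
    theta = math.radians(elev_deg)
    h = math.sqrt(range_km ** 2 + ke_re ** 2
                  + 2.0 * range_km * ke_re * math.sin(theta)) - ke_re
    return h + radar_height_km
```

At 100 km range and 0.5 degree elevation this places the beam roughly 1.5 km above the radar; when refraction is stronger than the model assumes (anomalous propagation), the actual beam is lower and may hit the ground.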
Anomalous Propagation
Buildings near the radar.
Reflectivity values correspond to values typical of hail.
Automated algorithms are severely affected.
AP + biological
North of the radar is some ground-clutter.
The light green echo probably corresponds to migrating birds.
The sky is actually clear.
AP + precipitation
AP north of the radar; a line of thunderstorms to the east of the radar.
Some clear-air return around the radar.
Small cells embedded in rain
The strong echoes here are really precipitation.
Notice the smooth green area.
Not rain
This green area is not rain, however.
Probably biological.
Clear-air return
Clear-air return near the radar
Mostly insects and debris after the thunderstorm passed through.
Chaff
The high reflectivity lines are not storms.
Metallic strips released by the military.
Terrain
The high-reflectivity region is actually due to ice on the mountains.
The beam has been refracted downward.
Radar Data Quality
Radar data is high-resolution and very useful. However, it is subject to many contaminants.
Human users can usually tell good data from bad; automated algorithms find it difficult to do so.
Motivation
Why improve radar data quality?
McGrath et al. (2002) showed that the mesocyclone detection algorithm (Stumpf et al., Weather and Forecasting, 1999) produces the majority of its false detections in clear air.
The presence of AP degrades the performance of a storm identification and motion estimation algorithm (Lakshmanan et al., J. Atmos. Research, 2003).
Quality Control of Radar Data
An extensively studied problem. Simplistic approaches:
Threshold the data (low = bad). But high values are bad in AP, terrain, and chaff, while low values are good in mesocyclones, hurricane eyes, etc.
Vertical tilt tests work for AP, but fail farther from the radar and in shallow precipitation.
Image processing techniques
Typically based on median filtering of reflectivity data. This removes clear-air return, but fails for AP and for spatially smooth clear-air return, and it smooths the data.
Insufficiently tested techniques: fractal techniques, neural network approaches.
Steiner and Smith
Journal of Applied Meteorology, 2002. A simple rule base that introduced more sophisticated measures:
Echo top: the highest tilt that has at least 5 dBZ. Works mostly, but fails in heavy AP and shallow precipitation.
Inflections: a measure of variability within a local neighborhood of a pixel; a texture measure suited to scalar data.
Their hard thresholds are not reliable.
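The echo-top measure can be sketched for a single radar column as follows; the per-tilt list representation is an assumption made for illustration:

```python
def echo_top_height(heights_km, refl_dbz, threshold_dbz=5.0):
    """Height of the highest tilt whose reflectivity meets the
    threshold (Steiner-Smith style echo top); None if no tilt does.
    Inputs are per-tilt values, lowest tilt first."""
    top = None
    for height, z in zip(heights_km, refl_dbz):
        if z >= threshold_dbz:
            top = height
    return top
```

In heavy AP the low tilts can exceed the threshold even without precipitation, and in shallow precipitation the echo top is genuinely low, which is why a hard threshold on this measure is unreliable.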
Radar Echo Classifier
Operationally implemented on US radar product generators.
A fuzzy-logic technique (Kessinger, AMS 2002) that uses all three moments of radar data.
Insight: targets that are not moving have zero velocity and low spectrum width. High reflectivity values are usually good; those that are not moving are probably AP.
Also makes use of the Steiner-Smith measures, but not vertical (echo-top) features, in order to retain tilt-by-tilt capability.
Good for human users, but not for automated use.
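The fuzzy-logic idea can be sketched as membership functions combined by a weighted average; the break points and weights below are illustrative placeholders, not the operational Radar Echo Classifier's actual rules:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: rises from a to b, equals 1
    between b and c, falls from c to d, and is 0 outside (a, d)."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def ap_likelihood(reflectivity_dbz, velocity_ms, spectrum_width_ms):
    """Weighted average of memberships: a strong echo with near-zero
    velocity and low spectrum width suggests a non-moving target (AP)."""
    memberships = [
        trapezoid(abs(velocity_ms), -1.0, 0.0, 1.0, 2.5),    # near-zero velocity
        trapezoid(spectrum_width_ms, -1.0, 0.0, 1.0, 2.0),   # low spectrum width
        trapezoid(reflectivity_dbz, 10.0, 25.0, 70.0, 80.0), # strong echo
    ]
    weights = [1.0, 1.0, 0.5]
    return sum(w * m for w, m in zip(weights, memberships)) / sum(weights)
```

With these placeholder rules, a stationary 45 dBZ echo scores much higher than the same echo moving at 15 m/s.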
Radar Echo Classifier
It finds both the good data and the AP, but cannot reliably discriminate between the two on a pixel-by-pixel basis.
Quality Control Neural Network
Compute texture features on the three moments, and vertical features on the latest (“virtual”) volume. This allows tilts to be cleaned up as they arrive while still utilizing vertical features.
Train a neural network off-line on these features to classify pixels as precipitation or non-precipitation at every scan of the radar.
Use the classification results to clean up the data field in real time.
The set of input features
Computed in a 5x5 polar neighborhood around each pixel.
For velocity and spectrum width: mean, variance (Kessinger), and value minus mean.
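A sketch of these neighborhood statistics on one moment field; the clipping at the edges of the field is my assumption about boundary handling:

```python
def local_texture(field, i, j, half=2):
    """Mean, variance, and value-minus-mean over a (2*half+1)^2
    neighborhood of pixel (i, j); field is a 2-D list (rows are
    radials, columns are gates), clipped at the field edges."""
    rows, cols = len(field), len(field[0])
    window = [field[r][c]
              for r in range(max(0, i - half), min(rows, i + half + 1))
              for c in range(max(0, j - half), min(cols, j + half + 1))]
    mean = sum(window) / len(window)
    variance = sum((v - mean) ** 2 for v in window) / len(window)
    return mean, variance, field[i][j] - mean
```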
Reflectivity Features
Lowest two tilts of reflectivity: mean; variance; value minus mean; squared difference of pixel values (Kessinger); homogeneity; radial inflections (Steiner-Smith); and echo size, found through region growing.
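The echo-size feature can be sketched as a region grow (breadth-first flood fill) from the pixel in question; 4-connectivity and the 0 dBZ default are illustrative assumptions:

```python
from collections import deque

def echo_size(refl, i, j, threshold_dbz=0.0):
    """Count of connected pixels (4-connectivity) at or above the
    threshold, grown from seed pixel (i, j); refl is a 2-D list."""
    rows, cols = len(refl), len(refl[0])
    if refl[i][j] < threshold_dbz:
        return 0
    seen = {(i, j)}
    queue = deque([(i, j)])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in seen and refl[nr][nc] >= threshold_dbz):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return len(seen)
```

Small, isolated echoes (often clutter or biological returns) yield small region sizes, while storm echoes yield large ones.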
Vertical Features
Vertical profile of reflectivity: maximum value across tilts; weighted average with the tilt angle as the weight; difference between the data values at the two lowest scans (Fulton); echo top height at a 5 dBZ threshold (Steiner-Smith).
Compute these on a “virtual volume”.
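The listed vertical-profile features for one radar column can be sketched as below; using the raw tilt angle directly as the weight is a literal reading of the slide, not a confirmed detail:

```python
def vertical_features(elev_deg, refl_dbz, heights_km):
    """Vertical-profile features for one column, lowest tilt first:
    maximum across tilts, tilt-angle-weighted average, difference
    between the two lowest scans, and 5 dBZ echo-top height
    (None if the threshold is never reached)."""
    weighted_avg = (sum(e * z for e, z in zip(elev_deg, refl_dbz))
                    / sum(elev_deg))
    echo_top = None
    for height, z in zip(heights_km, refl_dbz):
        if z >= 5.0:
            echo_top = height
    return {
        "max": max(refl_dbz),
        "weighted_avg": weighted_avg,
        "low_level_diff": refl_dbz[0] - refl_dbz[1],
        "echo_top_km": echo_top,
    }
```

On a “virtual volume”, the most recent version of each tilt is used, so these features can be updated as each new tilt arrives.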
Training the Network
How many patterns? Cornelius et al. (1995) used a neural network to do radar quality control. The resulting classifier was not useful and was discarded in favor of the fuzzy-logic Radar Echo Classifier.
They used fewer than 500 user-selected pixels to train the network, which does not capture the diversity of the data and yields a skewed distribution.
Diversity of data?
Need data cases that cover: shallow precipitation; ice in the atmosphere; AP and ground clutter (high data values that are bad); clear-air return; and mesocyclones (low data values that are good).
Distribution of data
Not a climatological distribution: most days there is no weather, so low (non-precipitating) reflectivities predominate, but we need good performance in weather situations.
Need to avoid bias in selecting pixels: choose all pixels in a storm echo, for example, not just the storm core.
Neural networks perform best when trained with equally likely classes: at any value of reflectivity, both classes should be equally likely. We need to find data cases that meet this criterion; this is another reason why previous neural network attempts failed.
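The equal-likelihood criterion amounts to balancing the two classes within each reflectivity bin. A sketch (bin width and downsampling strategy are my assumptions):

```python
import random

def balance_by_bin(samples, bin_width=5.0, seed=0):
    """Downsample so that, within each reflectivity bin, the precip
    (True) and non-precip (False) classes are equally represented.
    samples is a list of (reflectivity_dbz, is_precip) pairs."""
    rng = random.Random(seed)
    bins = {}
    for z, label in samples:
        key = int(z // bin_width)
        bins.setdefault(key, {True: [], False: []})[label].append((z, label))
    balanced = []
    for classes in bins.values():
        n = min(len(classes[True]), len(classes[False]))
        for label in (True, False):
            balanced.extend(rng.sample(classes[label], n))
    return balanced
```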
Distribution of training data by reflectivity values
Training the network
Human experts classified the training data by marking bad echoes. They had access to the time sequence and knowledge of the event.
The training data was 8 different volume scans that captured the diversity of the data: 1 million patterns.
The Neural Network
A fully connected, feed-forward neural network, trained using resilient propagation with weight decay.
The error measure was a modified cross-entropy, altered to weight different patterns differently.
A separate validation set of 3 volume scans was used to choose the number of hidden nodes and to stop the training.
Emphasis
Weight the patterns differently because not all patterns are equally useful:
Given a choice, we would like to make our mistakes on low reflectivities.
We do not have enough “contrary” examples.
Texture features are inconsistent near the boundaries of storms, and vertical features are unusable at far ranges.
This emphasis does not change the overall distribution to a large extent.
Histograms of different features
The best discriminants: homogeneity, height of maximum, inflections, and variance of spectrum width.
Generalization
There is no way to guarantee generalization, but we avoided overfitting in several ways:
Use the validation set (not the training set) to decide the number of hidden nodes and when to stop the network training.
Weight decay.
Limited network complexity: fewer than 10 hidden nodes, about 25 inputs, and more than 500,000 patterns.
Emphasize certain patterns.
Untrainable data case
None of the features we have can discriminate this clear-air return from good precipitation.
We essentially removed the migratory birds from the training set.
Velocity
We do not always have velocity data. In the US weather radars, reflectivity data is available to 460 km, while velocity data is available only to 230 km, but at higher resolution.
Velocity data can also be range-folded, a function of the Nyquist frequency.
Hence, two different networks: one with velocity (and spectrum width) data, the other without.
Choosing the network
Training the with-velocity and without-velocity networks: shown is the validation error as training progresses, for different numbers of hidden nodes.
We chose 5 hidden nodes for the with-velocity network (210th epoch) and 4 for the without-velocity network (310th epoch).
Behavior of training error
The training error keeps decreasing, but the validation error starts to increase after a while. We assume that the point where this happens is where the network starts to overfit.
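That stopping rule is ordinary early stopping on the validation error. A sketch (the patience parameter is an illustrative choice):

```python
def early_stop_epoch(val_errors, patience=3):
    """Index of the minimum validation error seen before training is
    halted; training stops once the error has failed to improve for
    `patience` consecutive epochs. The weights saved at the returned
    epoch are the ones to keep."""
    best_epoch, best_err, waited = 0, float("inf"), 0
    for epoch, err in enumerate(val_errors):
        if err < best_err:
            best_epoch, best_err, waited = epoch, err, 0
        else:
            waited += 1
            if waited >= patience:
                break
    return best_epoch
```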
Performance measure
Use a testing data set that is completely independent of the training and validation data sets.
Compare against classification by human experts.
Receiver Operating Characteristic
A perfect classifier would be flush top and flush left.
If you need to retain 90% of the good data, then you will have to live with 20% of the bad data when using the QCNN. The existing NWS technique forces you to live with 55% of the bad data.
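Each point on such an ROC curve comes from sweeping a threshold on the classifier output. A sketch of computing one point (names and inputs are illustrative):

```python
def roc_point(scores, labels, threshold):
    """One ROC point: (fraction of good pixels retained, fraction of
    bad pixels retained) when keeping all pixels whose classifier
    score is at least the threshold; labels are True for good data."""
    keep = [s >= threshold for s in scores]
    good_kept = sum(1 for k, good in zip(keep, labels) if k and good)
    bad_kept = sum(1 for k, good in zip(keep, labels) if k and not good)
    n_good = sum(labels)
    n_bad = len(labels) - n_good
    return good_kept / n_good, bad_kept / n_bad
```

Sweeping the threshold traces out the full curve; a perfect classifier reaches the point (1.0, 0.0).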
Performance (AP test case)
Performance (strong convection)
Test case (ground clutter)
Test case (small cells)
Summary
A radar-only quality control algorithm:
Uses texture features derived from the 3 radar moments.
Removes bad-data pixels corresponding to AP, ground clutter, and clear-air impulse returns, but does not reliably remove biological targets such as migrating birds.
Works in all sorts of precipitation regimes.
Does not remove good data, except toward the edges of storms.
Multi-sensor Aspect
There are other sensors observing the same weather phenomena.
If there are no clouds on satellite, then it is likely that there is no precipitation either. However, the visible channel of the satellite cannot be used at night.
Surface Temperature
Use the infrared channel of weather satellite images; a radiance-to-temperature relationship exists.
If the ground is being sensed, the temperature will be the ground temperature.
If the satellite “cloud-top” temperature is less than the surface temperature, cloud cover exists.
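The cloud-cover test reduces to a temperature-difference threshold. A sketch, assuming both fields are already on a common grid (the 5 K default echoes the conservative threshold discussed later):

```python
def cloud_cover_mask(cloud_top_temp_k, surface_temp_k, min_diff_k=5.0):
    """True where the satellite IR (cloud-top) temperature is at least
    min_diff_k colder than the surface temperature, i.e. where cloud
    cover is inferred; both inputs are 2-D lists on a common grid."""
    return [[(ts - tc) >= min_diff_k
             for tc, ts in zip(row_cloud, row_surface)]
            for row_cloud, row_surface in zip(cloud_top_temp_k, surface_temp_k)]
```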
Spatial and Temporal considerations
Spatial and temporal resolution:
Radar tilts arrive every 20-30 s, at high spatial resolution (1 km x 1 degree).
Satellite data arrives every 30 min, at 4 km resolution.
Surface temperature is up to 2 hours old, at 20 km resolution.
Fast-moving storms and small cells can pose problems.
Spatial …
For reasonably sized complexes, both the satellite infrared temperature and the surface temperature are smooth fields, so bilinear interpolation is effective.
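Bilinear interpolation of such a coarse field onto a finer grid can be sketched as:

```python
def bilinear(grid, x, y):
    """Bilinearly interpolate a 2-D grid (list of rows) at fractional
    coordinates (x = column, y = row), clipping at the grid edge."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(grid[0]) - 1)
    y1 = min(y0 + 1, len(grid) - 1)
    fx, fy = x - x0, y - y0
    top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx
    bottom = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx
    return top * (1 - fy) + bottom * fy
```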
Temporal
Estimate motion: use the high-resolution radar data to estimate motion.
Advect the cloud-top temperature based on the movement from radar; advection has high skill under 30 min.
Assume the surface temperature does not change: a 1-2 hr model forecast has no skill above a persistence forecast.
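The advection step can be sketched as a rigid shift of the cloud-top field by the radar-derived motion vector; whole-pixel shifts and a single uniform motion vector are simplifying assumptions:

```python
def advect(field, u_pix_per_min, v_pix_per_min, minutes, fill=None):
    """Shift a 2-D field by (u, v) * minutes, rounded to whole pixels;
    pixels advected in from outside the grid are set to `fill`."""
    dx = round(u_pix_per_min * minutes)
    dy = round(v_pix_per_min * minutes)
    rows, cols = len(field), len(field[0])
    out = [[fill] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            src_r, src_c = r - dy, c - dx
            if 0 <= src_r < rows and 0 <= src_c < cols:
                out[r][c] = field[src_r][src_c]
    return out
```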
Cloud-cover: Step 1
Satellite infrared temperature field. Blue is colder, and typically corresponds to higher storms.
A thin line of fast-moving storms, and a large thunderstorm complex.
Cloud-cover: Step 4
The echoes are forecast to move east and decrease in intensity.
This forecast is made based on radar data.
Cloud-cover: Step 2
Combined data from 4 different radars.
Two “views” of the same phenomenon: the different sensors measure different things and have different constraints.
Cloud-cover: Step 3
Estimates of motion and growth-and-decay are made using K-Means texture segmentation and tracking.
Red indicates eastward motion.
Cloud-cover: Step 4
The forecast is for 43 minutes, the time difference between the satellite image and the radar tilt.
Cloud-cover: Step 5
Surface temperature: 20 km x 20 km spatial resolution, 2 hours old, interpolated from data from weather stations around the country. It is the best we have.
Cloud-cover: Step 6
Difference field. White indicates a temperature difference of more than 20 K.
5 K is a very conservative threshold.
Distribution of cloud-cover
Two precipitation cases: May 8, 2003 and July 30, 2003.
These indicate that cloud-cover temperature differences exceed a 15 K minimum.
Multi-sensor QC: Step 1
Original data from July 11, 2003 (KTLX).
A large amount of contamination: clear-air return, probably biological.
Multi-sensor QC: Step 2
Result of applying the radar-only neural network: most of the clear-air contamination is gone.
There is possible precipitation to the north-west of the radar.
Multi-sensor QC: Step 3
Cloud-cover field: some cloud cover to the north-west of the radar, nothing to the south.
The 5 K threshold corresponds to the light blues.
Multi-sensor QC: Step 4
Result of applying the cloud-cover field to the neural network output: small cells are retained, but the biological contamination is removed.
Conclusion
The radar-only neural network outperforms the currently operational quality-control technique.
It can be improved even further using data from other sensors, but this needs more systematic examination.