Meeting 1
Summary Notes
Summary
The first meeting of the Team addressed the following objectives:
1. Review and make an assessment of the methods used
to calculate vertical resolution in the NDACC ozone and temperature lidar
algorithms
2. Review and make an assessment of the methods used
to define and propagate uncertainties in the NDACC ozone and temperature lidar
algorithms
3. Define common ground towards standard
definitions of vertical resolution and uncertainties
4. Elaborate an efficient approach to implement the
use of these standard definitions within the Team’s lidar algorithms, then
within all NDACC investigators’ lidars algorithms
General Presentations (Monday afternoon):
T. Leblanc (JPL, NDACC Lidar) introduced the subject in the context of NDACC, and
reviewed the needs of the community and the main expected tasks of the Team.
C. Retscher (NASA/GSFC, AVDC) presented an end-user perspective on
the problem. In particular, the newly defined GEOMS (Generic Earth
Observation Metadata Standards) system was reviewed, including the existing
tools for the conversion of the current NDACC lidar data files.
Presentations
of General Interest on Vertical Resolution (Monday afternoon):
T. Leblanc then briefly reviewed the theory of Digital Filtering. In this review,
focus was placed on the equivalence between digital filtering and the definition of
vertical resolution. It appeared clearly that there was a direct correspondence
between the frequency cut-off of a digital filter, the number of points used in
a filter, and the definition of the vertical resolution reported in the NDACC
lidar data files. It was suggested at the end of the presentation to use the Digital
Filter parameters and the convenient associated Fourier theory to facilitate
the standardization of the definition of vertical resolution within the NDACC
lidar community.
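As an illustration of this correspondence, the sketch below (a minimal Python example under assumed numbers, not any PI's actual code) computes the transfer function of a symmetric smoothing filter, locates the frequency where its magnitude drops to 1/2, and converts that cut-off into a vertical resolution for a hypothetical 100 m sampling width:

```python
import numpy as np

def cutoff_frequency(coeffs, n_freq=20001):
    """Normalized frequency (cycles per sample) at which the filter's
    magnitude transfer function first drops to 1/2 (approx. -6 dB)."""
    c = np.asarray(coeffs, dtype=float)
    freqs = np.linspace(0.0, 0.5, n_freq)            # 0 up to Nyquist
    k = np.arange(c.size) - (c.size - 1) / 2.0       # centered sample indices
    # H(f) = sum_k c_k * exp(-2*pi*i*f*k), evaluated on the frequency grid
    H = np.abs(np.exp(-2j * np.pi * np.outer(freqs, k)) @ c)
    return freqs[np.argmax(H <= 0.5)]

def vertical_resolution_from_cutoff(coeffs, sampling_width):
    """Vertical resolution as the sampling width divided by the
    cut-off frequency (same units as sampling_width)."""
    return sampling_width / cutoff_frequency(coeffs)

# Example: a 5-point running mean applied to a 100 m sampled profile
res = vertical_resolution_from_cutoff(np.full(5, 0.2), 100.0)
```

With these assumed numbers the cut-off falls near 0.12 cycles per sample, so the reported resolution is several times the raw sampling width; the same call works for any set of smoothing or derivative coefficients.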
S. Godin-Beekmann (
F. Madonna (CNR/IMAA) provided a summary of the aerosol lidar algorithm
intercomparison activities undertaken during the past ten years within the
framework of the European network EARLINET. Several aspects were reviewed,
including quality control as well as standard data processing. In the late
1990s and early 2000s, EARLINET benefited from significant dedicated funding
from the European Community. A central data handling center was created, where all
the EARLINET partners can analyze their data automatically, from raw lidar
signals to final products (aerosol optical and microphysical properties), using
the so-called Single Calculus Chain (SCC).
Individual
Presentations on Vertical Resolution (Tuesday):
T. Trickl (IMK-IFU) summarized how vertical filtering was
handled in the IFU tropospheric ozone DIAL algorithm (instrument located in
Garmisch-Partenkirchen).
T. McGee
(NASA/GSFC) briefly described his ozone DIAL algorithm. A least-squares 4th
degree polynomial fit (also called Savitzky-Golay or Super Lanczos)
derivative filter is used. Another definition of vertical resolution was also
presented, again based on the impulse response to a Dirac delta function, but this time by
measuring the Full-Width at Half-Maximum (FWHM) of the filter’s response. It
was shown that for the 2nd degree polynomial fit, there was a linear
relation between the FWHM and the width of the window (number of points) used.
Finally, it was suggested that the 4th degree polynomial fit was of
better quality on the ozone profiles than that of the 2nd degree
polynomial fit for large numbers of filter points.
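The relations described above can be checked numerically. The following sketch (illustrative only; the window sizes are arbitrary) builds least-squares polynomial-fit (Savitzky-Golay) smoothing coefficients from scratch and measures the FWHM of the filter's impulse response:

```python
import numpy as np

def savgol_smoothing_coeffs(half_width, degree):
    """Savitzky-Golay smoothing coefficients over 2*half_width+1 points:
    least-squares polynomial fit, evaluated at the window center."""
    x = np.arange(-half_width, half_width + 1, dtype=float)
    A = np.vander(x, degree + 1, increasing=True)    # columns: 1, x, x^2, ...
    return np.linalg.pinv(A)[0]                      # center-point weights

def impulse_response_fwhm(coeffs):
    """Full width at half maximum (in samples) of the filter's response
    to a unit impulse; for a FIR filter that response is the coefficient
    sequence itself. Linear interpolation is used at the crossings."""
    h = np.asarray(coeffs, dtype=float)
    half = h.max() / 2.0
    above = np.where(h >= half)[0]
    i0, i1 = above[0], above[-1]
    left = i0 - (h[i0] - half) / (h[i0] - h[i0 - 1])
    right = i1 + (h[i1] - half) / (h[i1] - h[i1 + 1])
    return right - left

# Doubling the window of a 2nd-degree fit roughly doubles the FWHM
fwhm_9 = impulse_response_fwhm(savgol_smoothing_coeffs(4, 2))    # 9-point window
fwhm_17 = impulse_response_fwhm(savgol_smoothing_coeffs(8, 2))   # 17-point window
```

For the 2nd-degree fit the FWHM scales almost linearly with the window width, consistent with the relation presented.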
B. Sica
(UWO) presented an overview of the Purple Crow Lidar instrument and temperature
algorithm. The relative number density is smoothed using either a combination of
3-point and 5-point boxcar filters for climatology studies, or a Kaiser (Bessel-based)
FIR filter for studies requiring higher spatial resolution. Filter
parameters are reported in the locally produced data files. A full revision
history and algorithm version, as well as the unfiltered vertical resolution,
are also available. No data has yet been
produced in the NDACC format.
B. Tatarov (NIES) then presented an overview of the ozone DIAL
and temperature algorithms of the Tsukuba lidar (Japan).
A. van Gijsel (RIVM/KNMI) then summarized how vertical resolution
is reported for the Lauder ozone DIAL. Their definition is based on the width
of the fitting window used for the ozone derivation. The temperature algorithm
is not yet mature enough (multiple versions) to objectively describe the
filtering method used.
G. Payen (LATMOS) presented the filter used for the data
processing of the OHP stratospheric ozone lidar. A 2nd degree
polynomial derivative filter is used. The vertical resolution is reported from
the frequency cut-off of the digital filter. A nearly linear relation between
the number of points used and the reported vertical resolution was found (as
was shown by T. McGee). Finally it was shown that in the case of this filter, there
is a ratio of 0.89 between the frequency cut-off and FWHM definitions, and a
ratio of 1.3 between the frequency cut-off and window width (number of points x
sampling width) definitions.
F. Gabarrot (LACy) summarized the data
processing of the tropospheric ozone and temperature lidars of La Reunion
Island. In the ozone DIAL algorithm, a 2nd degree polynomial
least-squares fit (Savitzky-Golay derivative filter) is used, with the number of
points increasing exponentially with height. The vertical resolution is reported
as the cut-off frequency of the corresponding digital filter. For the
temperature profiles (second lidar), a Hamming filter is applied on the temperature
profile. The width of the window used is reported as the vertical resolution.
G. Liberti (CNR) reported
on the data processing of the Rayleigh-Raman lidar at Tor
Vergata (Rome).
T. Leblanc (JPL) provided a description of the analysis algorithm of the JPL ozone
DIAL and temperature NDACC lidars at Table Mountain Facility (TMF, California).
Presentations
of General Interest on Uncertainties (Tuesday afternoon):
T. Leblanc (JPL) briefly reviewed the various definitions and laws of propagation
of uncertainties. Defining and propagating statistical errors associated with lidar
signal noise is straightforward, and will be easy to implement in the NDACC
algorithms. The handling of systematic uncertainties is much more problematic.
If these uncertainties are uncorrelated, they can be propagated in the same
manner as statistical uncertainties (i.e., in quadrature).
There is also no universal method to combine statistical and systematic
uncertainties.
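As a reminder of the mechanics involved, the sketch below (generic, not tied to any specific lidar algorithm) applies the standard first-order propagation law for uncorrelated input quantities, using numerical partial derivatives:

```python
import math

def propagate_uncorrelated(f, x, sigma, rel_step=1e-6):
    """First-order uncertainty propagation for uncorrelated inputs:
    sigma_f^2 = sum_i (df/dx_i)^2 * sigma_i^2.
    Partial derivatives are estimated by central finite differences."""
    var = 0.0
    for i in range(len(x)):
        h = rel_step * max(abs(x[i]), 1.0)
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        dfdx = (f(xp) - f(xm)) / (2.0 * h)
        var += (dfdx * sigma[i]) ** 2
    return math.sqrt(var)

# For a plain sum the law reduces to addition in quadrature:
sigma_sum = propagate_uncorrelated(sum, [1.0, 2.0], [3.0, 4.0])
```

For a sum of two inputs with uncertainties 3 and 4 this returns 5, the familiar quadrature result.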
F. Immler (DWD) provided a GRUAN perspective on the treatment
of uncertainties. GRUAN (the GCOS Reference Upper-Air Network) is principally focused
on measuring water vapor and temperature in the troposphere and lower
stratosphere with the best possible accuracy, i.e., with enhanced quality
control criteria. A review of official definitions from the Bureau International des Poids et Mesures
was then provided, with a reference to the Guide to the Expression of
Uncertainty in Measurement (GUM, latest version JCGM 100:2008). Definitions of
redundancy and consistency tests were given, leading to the definition of terms
such as “consistent”, “inconsistent”, “suspicious”, “significantly different”,
and “in agreement”. Two propagation formulae were introduced, one
referred to as “uncertainty of mean” and the other as “derived
uncertainty of uncorrelated input quantities”. The formulae given for
uncorrelated and correlated quantities were similar to those provided earlier by
T. Leblanc. Finally, applications of the definitions adopted by GRUAN were
shown for the Vaisala RS92 radiosonde measurements.
C. Straub (Univ. Bern) then presented a passive remote sensing perspective on the
definition of vertical resolution and uncertainties. Passive remote sensing
techniques such as microwave radiometry use averaging kernels. The FWHM of
these averaging kernels is usually reported as the vertical resolution. The
concept of nominal height and peak height was introduced. In the case of the
MIAWARA-C stratospheric H2O instrument, only 4 to 5 independent
layers (based on the AK’s FWHM) can be distinguished, though the profiles are
usually reported with a 1 or 2-km sampling width. Uncertainties are divided into
three categories: measurement noise (random), systematic errors associated with
uncertain model parameters, and the smoothing error.
Discussion
on Vertical Resolution (Tuesday, Wednesday and Friday morning):
Following the general and individual presentations on
vertical resolution, at least four different definitions were identified.
One definition is based on the cut-off frequency of
the transfer function of the equivalent digital filter applied when convolving the
coefficients with the signal. The cut-off frequency is the frequency at which the
transfer function equals ½ (approx. -6 dB). The corresponding vertical
resolution is the sampling width divided by the cut-off frequency. The approach
towards standardization using this definition was referred to as “Approach A”.
Another definition frequently used is one based on
the impulse response of the filter to a Delta function (Dirac). The full-width
at half-max (FWHM) of the response to an input delta function is taken as the
vertical resolution. The approach towards standardization using this definition
was referred to as “Approach B”.
A third definition commonly found within the NDACC
lidar community, and beyond, is the width of the vertical window used (i.e.,
the sampling width multiplied by the number of filter coefficients). This
approach was referred to as “Approach C”.
Two other definitions were discussed: the
Approaches A and B were judged to make the most
physical and practical sense, and were therefore retained for the standardization
implementation. It was not clear at the time of the discussions which method
(if only one) should be retained, so it was decided to develop the
standardization tools for both approaches.
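Since both approaches were retained, it is worth noting that they generally give different numbers for the same filter. The self-contained sketch below (an arbitrary 5-point running mean and an assumed 100 m sampling width, purely for illustration) evaluates both definitions side by side:

```python
import numpy as np

dz = 100.0                         # assumed sampling width [m]
c = np.full(5, 1.0 / 5.0)          # 5-point running-mean coefficients

# Approach A: cut-off frequency of the transfer function (|H(f)| = 1/2)
f = np.linspace(0.0, 0.5, 20001)
k = np.arange(c.size) - (c.size - 1) / 2.0
H = np.abs(np.exp(-2j * np.pi * np.outer(f, k)) @ c)
fc = f[np.argmax(H <= 0.5)]        # first frequency where |H| drops to 1/2
res_A = dz / fc

# Approach B: FWHM of the impulse response. A running mean responds to a
# unit impulse with a flat top, so its FWHM is simply the window width.
res_B = c.size * dz
```

For this filter Approach A reports a noticeably larger value than Approach B, which is precisely why the choice of definition must be documented alongside the number.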
Finally, it was agreed that documentation on the
various approaches, especially A and B, is essential. A detailed guide on the
definitions and how to implement them is going to be produced as part of the Team's report.
Individual
Presentations on Uncertainties (Wednesday):
T. Trickl briefly mentioned that no final solution for error
calculation was yet available for the tropospheric ozone DIAL at Garmisch-Partenkirchen,
because of the complex programming task required, which will be undertaken
after the current period of system modification and upgrading. He showed
several examples of system validation that suggest very low systematic
errors (less than 1 % in a four-day comparison with in-situ data, which also featured a standard deviation of less than 3 %).
T. McGee
briefly described how uncertainties were handled for the mobile lidars AT and
STROZ. Only statistical (random) noise from photon-counting noise is taken into
account. The variance is propagated through the Savitzky-Golay filter following
the standard law for uncorrelated quantities. No systematic uncertainty is
reported.
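The standard law mentioned above takes a compact form for any FIR filter: with uncorrelated input variances, the output variance at each point is the sum of the input variances weighted by the squared coefficients. A minimal sketch (generic, not the AT/STROZ code):

```python
import numpy as np

def filtered_variance(coeffs, var_in):
    """Variance of a FIR-filtered signal assuming uncorrelated inputs:
    var_out[j] = sum_i c_i**2 * var_in[j+i] (standard law for
    uncorrelated quantities)."""
    c2 = np.asarray(coeffs, dtype=float) ** 2
    return np.convolve(np.asarray(var_in, dtype=float), c2, mode="valid")

# A 3-point [1/4, 1/2, 1/4] smoother reduces a flat unit variance to
# 1/16 + 1/4 + 1/16 = 0.375 at every retained point.
var_out = filtered_variance([0.25, 0.5, 0.25], np.ones(6))
```

The same call applies unchanged to derivative filters such as the Savitzky-Golay coefficients, since only the squared coefficients enter the law.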
B. Sica
reported on the main uncertainties identified in their temperature algorithm.
B. Tatarov reviewed the uncertainties that will be implemented
in the future in the NIES lidar data processing algorithm. There are no
statistical or systematic uncertainties currently calculated, propagated, or
reported.
A. van Gijsel then summarized the status of the treatment of
uncertainties in the RIVM algorithms. For ozone, the 1-sigma uncertainty
estimate on the fit of the slope of the signal is propagated and reported. For
temperature, various sources of uncertainty are considered, including photon-counting
noise, a priori temperature, and number
density. Uncertainties are propagated when signal ranges are combined (linear
combination).
S. Godin-Beekmann summarized the treatment of uncertainties for the
OHP stratospheric ozone DIAL algorithm. Only the statistical uncertainties
associated with photon-counting noise and the background correction are currently calculated and propagated. Several other
sources are being considered and were described, including aerosol extinction,
NO2 absorption, Rayleigh extinction differential, and ozone
absorption cross-sections. Results from a past intercomparison campaign at OHP
showed that the reported uncertainties remain below or at the level of the
observed standard deviation (down to 4 % variability above 20 km in summer).
F. Gabarrot then summarized how uncertainties are (or will be)
treated in the La Reunion algorithms.
G. Liberti reported on the Tor Vergata algorithm. Uncertainties include the statistical
error due to photon counting noise, and a systematic error due to dead-time
correction. Additional uncertainties associated with the instrument calibration
for water vapor profiling were also briefly reviewed.
T. Leblanc provided a step-by-step description of the treatment of uncertainties
in the JPL lidars algorithm. The statistical uncertainty due to photon counting
noise is propagated through the algorithm, including the Savitzky-Golay (ozone)
and Kaiser (temperature) filters. Systematic uncertainties were defined and
propagated for saturation correction, background correction, Rayleigh
extinction correction, ozone absorption cross sections, a priori temperature and number density. Though statistical
uncertainties are propagated rigorously, the definition and propagation of the
systematic uncertainties remain questionable and will be revisited for better
accuracy in the near future.
Discussion
on Uncertainties (Wednesday and Friday morning):
As expected, discussion focused mainly on how
systematic uncertainties should be defined and propagated. Qualitatively, the
various sources of uncertainty were easily and well identified. However, there
was no consensus on the quantitative aspects.
Background correction: Whether uncertainties
associated with this correction are correlated or uncorrelated was subject to
debate. The majority of team members considered them uncorrelated. It was also
agreed that if a polynomial or exponential function was used to fit the
background noise, then uncertainties associated with the starting altitude, and
with the chi-square of the fit, should be given.
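One way to make those quantities available is to retain the fit covariance and the reduced chi-square when fitting the background region. The sketch below uses entirely synthetic numbers (the altitude range, background level, and noise level are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
z = np.arange(80.0, 100.0, 0.5)                    # assumed background region [km]
noise_sigma = 2.0                                  # assumed signal noise level
sig = 50.0 + rng.normal(0.0, noise_sigma, z.size)  # synthetic background signal

# 1st-degree polynomial fit of the background, keeping the covariance
coef, cov = np.polyfit(z, sig, 1, cov=True)
coef_sigma = np.sqrt(np.diag(cov))                 # uncertainty of slope/intercept

# Reduced chi-square of the fit, to be reported alongside the correction
resid = sig - np.polyval(coef, z)
chi2_red = np.sum((resid / noise_sigma) ** 2) / (z.size - 2)
```

The parameter uncertainties and the reduced chi-square, together with the chosen starting altitude, are exactly the quantities the discussion recommends reporting.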
Saturation correction: This correction is important
for single-photon counting because pulse pile-up effects have an influence
over an extended part of the operating range. The corrections should be validated,
e.g., by comparison with an analogue channel or by using attenuators. Uncertainties
associated with this correction should be considered correlated.
Quantitatively, they should be estimated empirically (typically by comparing
unsaturated signals to saturated ones).
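For reference, a commonly used non-paralyzable dead-time model corrects the observed count rate as follows (the dead-time value in the example is purely illustrative); the empirically estimated uncertainty of the correction would then be attached as a correlated term:

```python
def deadtime_correct(observed_rate_hz, dead_time_s):
    """Non-paralyzable dead-time (pulse pile-up) correction:
    true_rate = observed_rate / (1 - observed_rate * dead_time)."""
    loss = observed_rate_hz * dead_time_s
    if not 0.0 <= loss < 1.0:
        raise ValueError("count rate outside the model's validity range")
    return observed_rate_hz / (1.0 - loss)

# 10 MHz observed with an assumed 5 ns dead time -> roughly a 5 % correction
true_rate = deadtime_correct(1.0e7, 5.0e-9)
```

Comparing such corrected rates against an unsaturated (attenuated or analogue) channel, as suggested above, gives the empirical handle on the correction's uncertainty.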
Overlap correction: If correction is applied, the
associated uncertainties are systematic and correlated.
Solid Angle (range) correction: Trigger delays should
be verified experimentally (e.g., by comparison with a digital scope with
pre-trigger capability), so that no uncertainty associated with this correction
needs to be included.
Rayleigh extinction correction: Uncertainties
associated with this correction can be calculated differently in the case of
the ozone DIAL retrieval and temperature retrieval. Rayleigh extinction cross-sections
and a priori air number density are
the sources of uncertainties. An assessment of the available Rayleigh
cross-sections was recommended (see action items).
Particulate extinction correction: If correction is
applied, associated uncertainties are systematic and correlated.
Ozone absorption correction (for temperature
retrieval only): Ozone cross-sections and a
priori ozone number density profile are the sources of uncertainty. They
are correlated and systematic. In the UV, errors of 1 % or less are specified
for the best measurements (Mauersberger et al., Brion et al.). The accuracies of the different sources at
532 nm must be examined.
Absorption by other trace gases: Correction for NO2
is unnecessary in most cases. Some exceptions may occur, especially in the lower
troposphere, and in the upper troposphere in the case of lightning. An
assessment of NO2’s impact on the lidar signals was recommended (see
action items). Correction for SO2 is unnecessary in most cases.
Again, exceptions may occur in the lower troposphere and highly polluted
environment.
Ozone absorption cross-sections (ozone retrieval
only): Uncertainties must be included. The recent assessment of ozone
cross-sections must be taken into account.
Ozone mixing ratio (ozone retrieval only): If the O3
volume mixing ratio is calculated, then the (uncorrelated) uncertainty
associated with the a priori
air number density must be included.
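For a ratio of two uncorrelated quantities, the relative uncertainties add in quadrature; a minimal sketch (all numbers invented for illustration):

```python
import math

def vmr_with_uncertainty(n_o3, sigma_o3, n_air, sigma_air):
    """O3 volume mixing ratio n_o3 / n_air and its uncertainty, with the
    two (uncorrelated) relative uncertainties added in quadrature."""
    vmr = n_o3 / n_air
    rel = math.sqrt((sigma_o3 / n_o3) ** 2 + (sigma_air / n_air) ** 2)
    return vmr, vmr * rel

# A 2 % ozone uncertainty and a 3 % a priori density uncertainty combine
# to roughly 3.6 % on the mixing ratio.
vmr, sigma_vmr = vmr_with_uncertainty(5.0e12, 1.0e11, 1.0e19, 3.0e17)
```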
Density normalization and temperature tie-on (temperature
retrieval only): Methods for the minimization of the tie-on error were
discussed. Because tie-on is the main source of uncertainty (at least in the
upper part of the profile), the tie-on process and associated uncertainties
must be studied thoroughly.
Choice of Earth Gravity g(z)
(temperature retrieval only): Different versions of g(z) are used among the
NDACC lidar PIs. An assessment of the impact of this disparity was recommended
(see action items).
Vertical Resolution Standardization Tool:
Development in multiple languages (IDL, FORTRAN, Matlab) turned
out to be practically more difficult than initially expected. An outline of
what the tools should do was elaborated.
The tool must consist of a ready-to-use subroutine to
be inserted inside the NDACC PI’s analysis program. The subroutine must be
called each time smoothing and/or differentiating is applied to the lidar
signal. The primary input parameters must be the coefficients of the filter,
and the primary output parameter must be the standardized vertical resolution
itself. Additional input and output parameters are necessary for a practical
implementation. For example, Fortran requires more coding, and the size of the
input coefficient vector must be supplied.
When the lidar signals are smoothed and/or
differentiated more than once during the analysis, multiple calls to the
standardization subroutine are necessary. For Approach A, the transfer function
must be output from the subroutine and then passed as input to the next
subroutine call. The vertical resolution resulting from the multiple filtering
will be deduced by calculating the product of the transfer functions obtained
from each call. For Approach B, the impulse response must be passed through the
multiple calls in the same manner as the transfer function.
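The two bookkeeping schemes can be checked against each other: multiplying the per-call transfer functions (Approach A) is mathematically equivalent to convolving the per-call impulse responses (Approach B). A sketch with two arbitrary smoothing passes:

```python
import numpy as np

def transfer_function(coeffs, freqs):
    """Complex transfer function of a FIR filter with centered indices."""
    c = np.asarray(coeffs, dtype=float)
    k = np.arange(c.size) - (c.size - 1) / 2.0
    return np.exp(-2j * np.pi * np.outer(freqs, k)) @ c

freqs = np.linspace(0.0, 0.5, 2001)
c1 = np.full(3, 1.0 / 3.0)      # first smoothing pass (illustrative)
c2 = np.full(5, 1.0 / 5.0)      # second smoothing pass (illustrative)

# Approach A bookkeeping: product of the transfer functions of each call
H_net = transfer_function(c1, freqs) * transfer_function(c2, freqs)

# Approach B bookkeeping: convolution of the impulse responses of each call
h_net = np.convolve(c1, c2)

# The two descriptions agree (convolution theorem)
agreement = np.allclose(transfer_function(h_net, freqs), H_net)
```

This equivalence also provides a simple cross-check between the two variants of the standardization tool.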
A last implementation issue is the tracking of the
subroutine’s input and output parameters within the PI’s analysis program. Two
options were proposed: one option is to read/write the input/output parameters
using a simple text file. This way there is no need for additional programming
in the other parts of the PI’s analysis program besides the final output (i.e.,
what is reported as Vertical Resolution in the final data file). Some team
members mentioned possible I/O rights issues with this method. Therefore,
passing the input/output parameters as global variables of the PI’s analysis
program was also considered. In order to make the implementation as smooth as
possible, and as easy as possible for the PIs, it was decided to offer the
choice of either option (i.e., the I/O option or the global-variables option).
Initially it was decided to start programming the
tools “in parallel” for all programming languages. However, due to the lack of
consistency between programming languages, it was decided in the end to start
the programming in IDL, and then translate it into the other languages. The tool
development timeline is as follows: IDL tool ready by mid-February; Matlab and
Fortran versions ready by mid-May; tool testing using simulations between
mid-May and June.
Finally, several team members suggested that the PIs'
lidar signals (e.g., nightly averaged counts) could be sent
in a simple text format to a centralized location, to be analyzed by a single
processing package. They suggested that this
option would ensure the highest level of consistency in the database. This
approach will be considered as a complement to the abovementioned test tools.
Discussion
on the future report:
A basic outline, with a list of topics was proposed.
The topics are listed below, with no specific order of appearance yet. Further
elaboration will take place at Meeting #2.
1. Glossary, acronyms and symbols used
2. Introduction and context
3. Historical background (past projects)
4. Theoretical justification and review of:
- Digital filtering
- Uncertainty propagation
- Lidar algorithms for the
retrieval of ozone and temperature
5. Details of the proposed standardization:
- Definitions
- Uncertainty sources
- Uncertainty Propagation
- Vertical Resolution
6. Simulation and proofing
- use of lidar simulated
signals
- algorithm tests and
comparisons
- details of changes made (and
their impact on the new results)
7. NDACC PIs checklist (step-by-step User Guide)
8. Conclusions, future work
9. Summary
10. Acknowledgements and links to
11. Appendix A: Tools
- Pseudo-code
- IDL
- Matlab
- Fortran
- Python
12. Appendix B: List of recommended filters (and coefficients)
13. List of Participants
Action Items:
Item | Who | By when | Details
AI-01 | T. Leblanc | Dec. ‘10 | Update
AI-02 | | |
AI-03 | A. van Gijsel | Feb. ‘11 | Gather all constants + g(z) for assessment
AI-04 | T. McGee | Feb. ‘11 | Inquire on accuracy of gravity fields g(z) (and best sources)
AI-05 | T. Leblanc | mid-Feb. ‘11 | Finalize IDL version of standardization tool
AI-06 | F. Gabarrot | mid-March ’11 | Finalize Fortran version with B. Tatarov and
AI-07 | F. Gabarrot | mid-April ’11 | Finalize Matlab version with G. Payen
AI-08 | | |
AI-09 | G. Liberti | mid-May ’11 | Assessment of the impact of NO2 on lidar signals
AI-10 | T. Trickl | mid-May ’11 | Assessment of the use of different extinction cross-sections
AI-11 | T. Leblanc | June ‘11 | Simulate lidar signals and test
AI-12 | | |
AI-13 | | |
AI-14 | | |
AI-15 | | |
Next Meeting:
A tentative meeting week was set: