
DATA ANALYSIS METHODS IN PHYSICAL OCEANOGRAPHY

THIRD EDITION





RICHARD E. THOMSON
Fisheries and Oceans Canada
Institute of Ocean Sciences
Sidney, British Columbia

Canada

and

WILLIAM J. EMERY
University of Colorado

Aerospace Engineering Sciences Department
Boulder, CO

USA

AMSTERDAM • BOSTON • HEIDELBERG • LONDON • NEW YORK • OXFORD • PARIS • SAN DIEGO • SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO


Elsevier
225, Wyman Street, Waltham, MA 02451, USA
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, UK
Radarweg 29, PO Box 211, 1000 AE Amsterdam, The Netherlands

Copyright © 2014, 2001, 1998 Elsevier B.V. All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means electronic, mechanical, photocopying, recording or otherwise without the prior written permission of the publisher.

Permissions may be sought directly from Elsevier's Science & Technology Rights Department in Oxford, UK: phone (+44) (0) 1865 843830; fax (+44) (0) 1865 853333; email: [email protected]. Alternatively you can submit your request online by visiting the Elsevier web site at http://elsevier.com/locate/permissions, and selecting Obtaining permission to use Elsevier material.

Notice
No responsibility is assumed by the publisher for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein. Because of rapid advances in the medical sciences, in particular, independent verification of diagnoses and drug dosages should be made.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress

ISBN: 978-0-12-387782-6

For information on all Elsevier publications visit our web site at http://store.elsevier.com/

Printed and bound in Poland

• Coastal photo courtesy of Dr. Audrey Dallimore, School of Environment and Sustainability, Royal Roads University, British Columbia, Canada

• Satellite image from Thomson, R.E., and J.F.R. Gower. 1998. A basin-scale oceanic instability event in the Gulf of Alaska. J. Geophys. Res., 103, 3033-3040


Dedication

Richard Thomson dedicates this book to his wife Irma, daughters Justine and Karen, and grandchildren Brenden and Nicholas.

Bill Emery dedicates this book to his wife Dora Emery, his children Alysse, Eric, and Micah, and to his grandchildren Margot and Elliot.




Contents

Preface ix
Acknowledgments xi

1. Data Acquisition and Recording

1.1 Introduction 1
1.2 Basic Sampling Requirements 3
1.3 Temperature 10
1.4 Salinity 37
1.5 Depth or Pressure 48
1.6 Sea-Level Measurement 61
1.7 Eulerian Currents 79
1.8 Lagrangian Current Measurements 115
1.9 Wind 144
1.10 Precipitation 152
1.11 Chemical Tracers 155
1.12 Transient Chemical Tracers 175

2. Data Processing and Presentation

2.1 Introduction 187
2.2 Calibration 189
2.3 Interpolation 190
2.4 Data Presentation 191

3. Statistical Methods and Error Handling

3.1 Introduction 219
3.2 Sample Distributions 220
3.3 Probability 222
3.4 Moments and Expected Values 226
3.5 Common PDFs 228
3.6 Central Limit Theorem 232
3.7 Estimation 234
3.8 Confidence Intervals 236
3.9 Selecting the Sample Size 243
3.10 Confidence Intervals for Altimeter-Bias Estimates 244
3.11 Estimation Methods 245
3.12 Linear Estimation (Regression) 250
3.13 Relationship between Regression and Correlation 257
3.14 Hypothesis Testing 262
3.15 Effective Degrees of Freedom 269
3.16 Editing and Despiking Techniques: The Nature of Errors 275
3.17 Interpolation: Filling the Data Gaps 287
3.18 Covariance and the Covariance Matrix 299
3.19 The Bootstrap and Jackknife Methods 302

4. The Spatial Analyses of Data Fields

4.1 Traditional Block and Bulk Averaging 313
4.2 Objective Analysis 317
4.3 Kriging 328
4.4 Empirical Orthogonal Functions 335
4.5 Extended Empirical Orthogonal Functions 356
4.6 Cyclostationary EOFs 363
4.7 Factor Analysis 367
4.8 Normal Mode Analysis 368
4.9 Self Organizing Maps 379
4.10 Kalman Filters 396
4.11 Mixed Layer Depth Estimation 406
4.12 Inverse Methods 414

5. Time Series Analysis Methods

5.1 Basic Concepts 425
5.2 Stochastic Processes and Stationarity 427
5.3 Correlation Functions 428
5.4 Spectral Analysis 433
5.5 Spectral Analysis (Parametric Methods) 489
5.6 Cross-Spectral Analysis 503
5.7 Wavelet Analysis 521
5.8 Fourier Analysis 536
5.9 Harmonic Analysis 547
5.10 Regime Shift Detection 557



5.11 Vector Regression 568
5.12 Fractals 580

6. Digital Filters

6.1 Introduction 593
6.2 Basic Concepts 594
6.3 Ideal Filters 596
6.4 Design of Oceanographic Filters 604
6.5 Running-Mean Filters 607
6.6 Godin-Type Filters 609
6.7 Lanczos-window Cosine Filters 612
6.8 Butterworth Filters 617
6.9 Kaiser–Bessel Filters 624
6.10 Frequency-Domain (Transform) Filtering 627

References 639
Appendix A: Units in Physical Oceanography 665
Appendix B: Glossary of Statistical Terminology 669
Appendix C: Means, Variances and Moment-Generating Functions for Some Common Continuous Variables 673
Appendix D: Statistical Tables 675
Appendix E: Correlation Coefficients at the 5% and 1% Levels of Significance for Various Degrees of Freedom n 687
Appendix F: Approximations and Nondimensional Numbers in Physical Oceanography 689
Appendix G: Convolution 697
Index 701



Preface

There have been numerous books written on data analysis methods in the physical sciences over the past several decades. Most of these books are heavily directed toward the more theoretical aspects of data processing or narrowly focus on one particular topic. Few books span the range from basic data sampling and statistical analysis to more modern techniques such as wavelet analysis, rotary spectral decomposition, Kalman filtering, and self-organizing maps. Texts that also provide detailed information on the sensors and instruments that collect the data are even more rare. In writing this book we saw a clear need for a practical reference volume for earth and ocean sciences that brings established and modern processing techniques together under a single cover. The text is intended for students and established scientists alike. For the most part, graduate programs in oceanography have some form of methods course in which students learn about the measurement, calibration, processing, and interpretation of geophysical data. The classes are intended to give the students needed experience in both the logistics of data collection and the practical problems of data processing and analysis. Because the class material generally is based on the experience of the faculty members giving the course, each class emphasizes different aspects of data collection and analysis. Formalism and presentation can differ widely. While it is valuable to learn from the first-hand experiences of the class instructor, it seemed to us important to have available a central reference text that could be used to provide some uniformity in the material being covered within the oceanographic community. This 3rd Edition provides a much needed update on oceanographic instrumentation and data processing methods that have become more widely available over the past decade.

Many of the data analysis techniques most useful to oceanographers can be found in books and journals covering a wide variety of topics. Much of the technical information on these techniques is detailed in texts on numerical methods, time series analysis, and statistical methods. In this book, we attempt to bring together many of the key data processing methods found in the literature, as well as add new information on spatial and temporal data analysis techniques that were not readily available in older texts. Chapter 1 also provides a description of most of the instruments used in physical oceanography today. This is not a straightforward task given the rapidly changing technology for both remote and in situ oceanic sensors, and the ever-accelerating rate of data collection and transmission. Our hope is that this book will provide instructional material for students in the marine sciences and serve as a general reference volume for those directly involved with oceanographic and other branches of geophysical research.

The broad scope and rapidly evolving nature of oceanographic sciences has meant that it has not been possible for us to fully detail all existing instrumentation or emerging data analysis methods. However, we believe that many of the methods and procedures outlined in this book will provide a basic understanding of the kinds of options available to the user for interpretation of data sets. Our intention is to describe general statistical and analytical methods that

ix


will be sufficiently fundamental to maintain a high level of utility over the years.

Finally, we also believe that the analysis procedures discussed in this book apply to a wide readership in the geophysical sciences. As with oceanographers, this wider community of scientists would likely benefit from a central source of information that encompasses not only a description of the mathematical methods,

but also considers some of the practical aspects of data analyses. It is this synthesis between theoretical insight and the logistical limitations of real data measurement that is a primary goal of this text.

Richard E. Thomson and William J. Emery
North Saanich, British Columbia

and Boulder, Colorado



Acknowledgments

Many people have contributed to the three editions of this book over the years. Dudley Chelton of Oregon State University and Alexander Rabinovich of Moscow State University and the Institute of Ocean Sciences have helped with several chapters. Dudley proved to be a most impressive reviewer and Sasha has provided figures that have significantly improved the book. For this edition, we also thank friends and colleagues who took time from their research to review sections of the text or to provide suggestions for new material. There were others, far too numerous to mention, whose comments and words of advice have added to the usefulness of the text. We thank Andrew Bennett of Oregon State University for reviewing the section on inverse methods in Chapter 4, Brenda Burd of Ecostat Research for reviewing the bootstrap method in Chapter 3, and Steve Mihály of Ocean Networks Canada for assisting with the new section on self-organizing maps in Chapter 4. Roy Hourston and Maxim Krassovski of the Institute of Ocean Sciences helped generously with various sections of the book, including new sections on regime shifts and wavelet analysis in Chapter 5. The contributions to Chapter 1 from Tamás Juhász and David Spear, two accomplished technicians at the Institute of Ocean Sciences, are gratefully acknowledged. Patricia Kimber of Tango Design helped draft many of the figures.

Expert contributions to the third edition, including reports of errors and omissions in the previous editions, were also provided by Michael Foreman, Robie Macdonald, Isaac Fine, Joseph Linguanti, Ron Lindsay, Germaine Gatien, Steve Romaine, and Lucius Perreault (Institute of Ocean Sciences, Fisheries and Oceans Canada), Philip Woodworth (Permanent Service for Mean Sea Level, United Kingdom), William (Bill) Woodward (President and CEO of CLS America, Inc., USA), Richard Lumpkin (NOAA/AOML, USA), Laurence Breaker (California State University, USA), Jo Suijlen (National Institute for Coastal and Marine Management, The Netherlands), Guohong Fang (First Institute of Oceanography, China), Vlado Malačič (National Institute of Biology, Slovenia), Øyvind Knutsen (University of Bergen, Norway), Parker MacCready (University of Washington, USA), Andrew Slater (University of Colorado, USA), David Dixon (Plymouth, United Kingdom), Drew Lucas (Scripps Institute of Oceanography, USA), Wayne Martin (University of Washington, USA), David Ciochetto (Dalhousie University, Canada), Alan Plueddemann (Woods Hole Oceanographic Institution, USA), Fabien Durand (IRD/LEGOS, Toulouse, France), Jack Harlan (NOAA, Boulder Colorado, USA), Denis Gilbert (The Maurice Lamontagne Institute, Fisheries and Oceans Canada), Igor Yashayaev (Bedford Institute of Oceanography, Fisheries and Oceans Canada), Ben Hamlington (University of Colorado, USA), John Hunter (University of Tasmania, Australia), Irene Alonso (Instituto de Ciencias Marinas de Andalucía, Spain), Yonggang Liu (University of South Florida, USA), Gary Borstad (ASL Environmental Sciences Ltd., Canada), Earl Davis and Bob Meldrum (Pacific Geosciences Centre, Canada), and Mohammad Bahmanpour (University of Western Australia, Australia).





CHAPTER 1

Data Acquisition and Recording

1.1 INTRODUCTION

Physical oceanography is an ever-evolving science in which the instruments, types of observations, and methods of analysis undergo continuous advancement and refinement. The changes that have occurred since we completed the 2nd Edition of this book over a decade ago have been impressive. Recent progress in oceanographic theory, instrumentation, sensor platforms, and software development has led to significant advances in marine science and the way that the findings are presented. The advent of digital computers has revolutionized data collection procedures and the way that data are reduced and analyzed. No longer is the individual scientist personally familiar with each data point and its contribution to his or her study. Instrumentation and data collection are moving out of direct application by the scientist and into the hands of skilled technicians who are becoming increasingly more specialized in the operation and maintenance of equipment. New electronic instruments operate at data rates and storage capacity not possible with earlier mechanical devices and produce volumes of information that can only be handled by high-speed computers. Most modern data collection systems transmit sensor data directly to computer-based data acquisition systems where they are stored in digital format on some type of electronic medium such as hard drives, flash cards, or optical disks. High-speed analog-to-digital converters and digital-signal-processors are now used to convert voltage or current signals from sensors to digital values. Increasing numbers of cabled observatories extending into the deep ocean through shore stations are now providing high-bandwidth data flow in near real time supported by previously impossible sustained levels of power and storage capacity. As funding for research vessels diminishes and existing fleets continue to age, open-ocean studies are gradually being assumed by satellites, gliders, pop-up drifters, and long-term moorings. The days of limited power supply, insufficient data storage space, and weeks at sea on ships collecting routine survey data may soon be a thing of the past. Ships will still be needed but their role will be more focused on process-related studies and the deployment, servicing, and recovery of oceanographic and meteorological equipment, including sensor packages incorporated in cabled observatory networks. All of these developments are moving physical oceanographers into analysts of what is becoming known as "big data". Faced with large volumes of information, the challenge to oceanographers is deciding

Data Analysis Methods in Physical Oceanography, http://dx.doi.org/10.1016/B978-0-12-387782-6.00001-6
Copyright © 2014 Elsevier B.V. All rights reserved.


how to approach these mega data and how to select the measurements and numerical simulations that are most relevant to the problems of interest. One of the goals of this book is to provide insight into the analyses of the ever-growing volume of oceanographic data in order to assist the practitioner in deciding where to invest his/her effort.

With the many technological advances taking place, it is important for marine scientists to be aware of both the capabilities and limitations of their sampling equipment. This requires a basic understanding of the sensors, the recording systems, and the data processing tools. If these are known and the experiment carefully planned, many problems commonly encountered during the processing stage can be avoided. We cannot overemphasize the need for thoughtful experimental planning and proper calibration of all oceanographic sensors. If instruments are not in near-optimal locations or the researcher is unsure of the values coming out of the machines, then it will be difficult to believe the results gathered in the field. To be truly reliable, instruments should be calibrated on a regular basis at intervals determined by use and the susceptibility of the sensor to drift. More specifically, the output from all oceanic instruments such as thermometers, pressure sensors, dissolved oxygen probes, and fixed pathlength transmissometers drift with time and need to be calibrated before and after each field deployment. For example, the zero point for the Paroscientific Digiquartz (0–10,000 psi) pressure sensors used in the Hawaii Ocean Time-series at station "Aloha" 100 km north of Honolulu drifts about 4 dbar in three years. As a consequence, the sensors are calibrated about every six months against a Paroscientific laboratory standard, which is recalibrated periodically at special calibration facilities in the United States (Lukas, 1994). Even the most reliable platinum thermometers, the backbone of temperature measurement in marine sciences, can drift of order 0.001 °C over a year. Our shipboard experience also shows that opportunistic over-the-side field calibrations during oceanic surveys can be highly valuable to others in the science community regardless of whether the work is specific to one's own research program. As we discuss in the following chapters, there are a number of fundamental requirements to be considered when planning the collection of field records, including such basic considerations as the sampling interval, sampling duration, and sampling location.

It is the purpose of this chapter to review many of the standard instruments and measurement techniques used in physical oceanography in order to provide the reader with a common understanding of both the utility and limitations of the resulting measurements. The discussion is not intended to serve as a detailed "user's manual" nor as an "observer's handbook". Rather, our purpose is to describe the fundamentals of the instruments in order to give some insight into the data they collect. An understanding of the basic observational concepts, and their limitations, is a prerequisite for the development of methods, techniques, and procedures used to analyze and interpret the data that are collected.

Rather than treat each measurement tool individually, we have attempted to group them into generic classes and to limit our discussion to common features of the particular instruments and associated techniques. Specific references to particular company's products and the quotation of manufacturer's engineering specifications have been avoided whenever possible. Instead, we refer to published material addressing the measurement systems or the data recorded by them. Those studies that compare measurements made by similar instruments are particularly valuable. On the other hand, there are companies whose products have become the "gold standard" against which other manufacturers are compared. Reliability and service are critical factors in the choice of any instrument. The emphasis of the instrument review section is to give the reader a background in the collection of data in physical oceanography. For those



readers interested in more complete information regarding a specific instrument or measurement technique, we refer to the references at the end of the book where we list the sources of the material quoted. We realize that, in terms of specific measurement systems, and their review, this text will be quickly dated as new and better systems evolve. Still, we hope that the general outline we present for accuracy, precision, and data coverage will serve as a useful guide to the employment of newer instruments and methods.

1.2 BASIC SAMPLING REQUIREMENTS

A primary concern in most observational work is the accuracy of the measurement device, a common performance statistic for the instrument. Absolute accuracy requires frequent instrument calibration to detect and correct for any shifts in behavior. The inconvenience of frequent calibration often causes the scientist to substitute instrument precision as the measurement capability of an instrument. Unlike absolute accuracy, precision is a relative term and simply represents the ability of the instrument to repeat the observation without deviation. Absolute accuracy further requires that the observation be consistent in magnitude with some universally accepted reference standard. In most cases, the user must be satisfied with having good precision and repeatability of the measurement rather than having absolute measurement accuracy. Any instrument that fails to maintain its precision fails to provide data that can be handled in any meaningful statistical fashion. The best instruments are those that provide both high precision and defensible absolute accuracy. It is sometimes advantageous to measure simultaneously the same variable with more than one reliable instrument. However, if the instruments have the same precision but not the same absolute accuracy, we are reminded of the saying that "a man with two watches does not know the time".

Digital instrument resolution is measured in bits, where a resolution of N bits means that the full range of the sensor is partitioned into 2^N equal segments (N = 1, 2, …). For example, eight-bit resolution means that the specified full-scale range of the sensor, say V = 10 V, is divided into 2^8 = 256 increments, with a bit resolution of V/256 = 0.039 V. Whether the instrument can actually measure to a resolution or accuracy of V/2^N units is another matter. The sensor range can always be divided into an increasing number of smaller increments, but eventually one reaches a point where the value of each bit is buried in the noise level of the sensor and is no longer significant.
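The V/2^N arithmetic above can be sketched in a few lines of code; the function name and the example values are ours, not tied to any particular instrument:

```python
def bit_resolution(full_scale, n_bits):
    """Size of one digital increment for an n_bits converter
    spanning a range of full_scale units (e.g., volts)."""
    return full_scale / 2 ** n_bits

# Eight-bit resolution over a 10-V full-scale range, as in the text:
print(bit_resolution(10.0, 8))   # 10/256 = 0.0390625 V, i.e. ~0.039 V
```

Doubling the bit count halves the increment each time, which is why resolution gains eventually disappear into the sensor's noise floor.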

1.2.1 Sampling Interval

Assuming the instrument selected can produce reliable and useful data, the next highest priority sampling requirement is that the measurements be collected often enough in space and time to resolve the phenomena of interest. For example, in the days when oceanographers were only interested in the mean stratification of the world ocean, water property profiles from discrete-level hydrographic (bottle) casts were adequate to resolve the general vertical density structure. On the other hand, these same discrete-level profiles failed to resolve the detailed structure associated with interleaving and mixing processes, including those associated with thermohaline staircases (salt fingering and diffusive convection), that now are resolved by the rapid vertical sampling provided by modern conductivity-temperature-depth (CTD) probes. The need for higher resolution assumes that the oceanographer has some prior knowledge of the process of interest. Often this prior knowledge has been collected with instruments incapable of resolving the true variability and may, therefore, only be suggested by highly aliased (distorted) data collected using earlier



techniques. In addition, laboratory and theoretical studies may provide information on the scales that must be resolved by the measurement system.

For discrete digital data x(t_i) measured at times t_i, the choice of the sampling increment Δt (or Δx in the case of spatial measurements) is the quantity of importance. In essence, we want to sample often enough that we can pick out the highest frequency component of interest in the time series but not oversample so that we fill up the data storage file, use up all the battery power, or become swamped with unnecessary data. In the case of real-time cabled observatories, it is also possible to sample so rapidly (hundreds of times per second) that inserting the essential time stamps in the data string can disrupt the cadence of the record. We might also want to sample at irregular intervals to avoid built-in bias in our sampling scheme. If the sampling interval is too large to resolve higher frequency components, it becomes necessary to suppress these components during sampling using a sensor whose response is limited to frequencies equal to that of the sampling frequency. As we discuss in our section on processing satellite-tracked drifter data, these lessons are often learned too late, after the buoys have been cast adrift in the sea.

The important aspect to keep in mind is that, for a given sampling interval Δt, the highest frequency we can hope to resolve is the Nyquist (or folding) frequency, f_N, defined as

f_N = 1/(2Δt)    (1.1)

We cannot resolve any higher frequencies than this. For example, if we sample every 10 h, the highest frequency we can hope to see in the data is f_N = 0.05 cph (cycles per hour). Equation (1.1) states the obvious: that it takes at least two sampling intervals (or three data points) to resolve a sinusoidal-type oscillation with period 1/f_N (Figure 1.1). In practice, we need to contend with noise and sampling errors so that it takes something like three or more sampling increments (i.e., ≥ four data points) to accurately determine the highest observable frequency. Thus, f_N is an upper limit. The highest frequency we can resolve for a sampling of Δt = 10 h in Figure 1.1 is closer to 1/(3Δt) ≈ 0.033 cph. (Replacing Δt with Δx in the case of spatial sampling increments allows us to interpret these limitations in terms of the highest wavenumber (Nyquist wavenumber) the data are able to resolve.)
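Eqn (1.1) and the practical three-increment rule of thumb can be sketched as follows; the function names and the use of hours as the time unit are our own choices:

```python
def nyquist_frequency(dt):
    """Nyquist (folding) frequency f_N = 1/(2*dt) of Eqn (1.1),
    in cycles per unit of dt."""
    return 1.0 / (2.0 * dt)

def practical_highest_frequency(dt, n_increments=3):
    """Rule-of-thumb highest observable frequency when noise requires
    about three sampling increments (>= four data points) per cycle."""
    return 1.0 / (n_increments * dt)

dt = 10.0  # sampling interval in hours
print(nyquist_frequency(dt))            # 0.05 cph, the theoretical upper limit
print(practical_highest_frequency(dt))  # ~0.033 cph, the practical limit
```

The same functions apply unchanged to spatial sampling: pass Δx instead of Δt and read the results as wavenumbers.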

An important consequence of Eqn (1.1) is the problem of aliasing. In particular, if there is energy at frequencies f > f_N, which we obviously cannot resolve because of the Δt we picked, this energy gets folded back into the range of frequencies, f < f_N, which we are attempting to resolve (hence, the alternate name "folding frequency" for f_N). This unresolved energy does not disappear but gets redistributed within the frequency range of interest. To make matters worse, the folded-back energy is disguised (or aliased) within frequency components different from those of its origin. We cannot distinguish this folded-back energy from that which actually belongs to the lower frequencies. Thus, we end up with erroneous (aliased) estimates of the spectral energy variance over the resolvable range of frequencies. An example of highly aliased data would be current meter data collected using 13-h sampling in a region dominated by strong semidiurnal (12.42-h period) tidal currents. More will be said on this topic in Chapter 5.
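The 13-h tidal example can be worked out numerically. Assuming the standard folding formula f_a = |f − m·f_s|, with f_s = 1/Δt the sampling frequency and m the nearest integer to f/f_s (equivalent to reflecting f about multiples of f_N), a sketch with our own function name:

```python
def aliased_frequency(f_true, dt):
    """Frequency into which energy at f_true is folded when sampled
    at interval dt.  Result always lies in [0, f_N] with f_N = 1/(2*dt)."""
    fs = 1.0 / dt                                  # sampling frequency
    return abs(f_true - round(f_true / fs) * fs)   # fold into [0, fs/2]

# Semidiurnal tidal currents (12.42-h period) sampled every 13 h:
f_alias = aliased_frequency(1.0 / 12.42, 13.0)
print(1.0 / f_alias)  # alias period ~278 h, i.e. roughly 11.6 days
```

The 12.42-h signal thus masquerades as a slow oscillation of nearly two weeks, indistinguishable in the record from genuine low-frequency variability.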

As a general rule, one should plan a measurement program based on the frequencies and wavenumbers (estimated from the corresponding periods and wavelengths) of the parameters of interest over the study domain. This requirement may then dictate the selection of the measurement tool or technique. If the instrument cannot sample rapidly enough to resolve the frequencies of concern it should not be used. It should be emphasized that the Nyquist frequency concept applies to both time and space and the Nyquist wavenumber is a valid means



of determining the fundamental wavelength that must be sampled.

1.2.2 Sampling Duration

The next concern is that one samples long enough to establish a statistically significant determination of the process being studied. For time-series measurements, this amounts to a requirement that the data be collected over a period sufficiently long that repeated cycles of the phenomenon are observed. This also applies to spatial sampling, where statistical considerations require a large enough sample to define multiple cycles of the process being studied. Again, the requirement places basic limitations on the instrument selected for use. If the equipment cannot continuously collect the data needed for the length of time required to resolve repeated cycles of the process, it is not well suited to the measurement required.

Consider the duration of the sampling at time step Δt. The longer we make the record, the better we are able to resolve different frequency components in the data. In the case of spatially separated data, Δx, resolution increases with

increased spatial coverage of the data. It is the total record length T = NΔt obtained for N data samples that: (1) determines the lowest frequency (the fundamental frequency)

f0 = 1/(NΔt) = 1/T   (1.2)

that can be extracted from the time-series record; (2) determines the frequency resolution, or minimum difference in frequency Δf = |f2 − f1| = 1/(NΔt), that can be resolved between adjoining frequency components, f1 and f2 (Figure 1.2); and (3) determines the amount of band averaging (averaging of adjacent frequency bands) that can be applied to enhance the statistical significance of individual spectral estimates. In Figure 1.2, the two separate waveforms of equal amplitude but different frequency produce a single spectrum. The two frequencies are well resolved for Δf = 2/(NΔt) and 3/(2NΔt), just resolved for Δf = 1/(NΔt), and not resolved for Δf = 1/(2NΔt).
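These record-length relations can be collected into a small sketch (function and variable names are ours) that reports the resolvable frequency range implied by a choice of sampling interval and record length:

```python
# Illustrative helper: given a sampling interval dt (hours) and record length
# N (samples), report the resolvable frequency range and resolution.
def sampling_summary(dt, N):
    T = N * dt                   # total record length, T = N*dt
    f0 = 1.0 / T                 # fundamental (lowest resolvable) frequency
    fN = 1.0 / (2.0 * dt)        # Nyquist (highest resolvable) frequency
    df = f0                      # frequency resolution between adjacent components
    n_components = int(fN / f0)  # = N/2, maximum number of Fourier components
    return f0, fN, df, n_components

f0, fN, df, m = sampling_summary(dt=1.0, N=720)   # 30 days of hourly data
print(f"f0 = {f0:.5f} cph, fN = {fN:.3f} cph, df = {df:.5f} cph, {m} components")
```

For a 30-day hourly record, the resolvable band runs from about 0.0014 cph up to 0.5 cph, with 360 Fourier components in between.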

In theory, we should be able to resolve all frequency components, f, in the frequency range f0 ≤ f ≤ fN, where fN and f0 are defined by Eqns (1.1) and (1.2), respectively. Herein lies a classic sampling problem. In order to resolve the

FIGURE 1.1 Plot of the function F(n) = sin(2πn/20 + φ) where time is given by the integer n = −1, 0, …, 24. The period 2Δt = 1/fN is 20 units and φ is a random phase with a small magnitude in the range ±0.1 radians. Open circles denote measured points and solid points the curve F(n). Noise makes it necessary to use more than three data values to accurately define the oscillation period.

1.2 BASIC SAMPLING REQUIREMENTS


frequencies of interest in a time series, we need to sample for a long time (T large) so that f0 covers the low end of the frequency spectrum and Δf is small (frequency resolution is high). At the same time, we would like to sample sufficiently rapidly (Δt small) so that fN extends beyond all frequency components with significant spectral energy. Unfortunately, the longer and more rapidly we want to sample, the more data we need to collect and store, the more power we need to provide, and the more time, effort, and money we need to put into the sensor design and sampling program.

Our ability to resolve frequency components follows from Rayleigh's criterion for the resolution of adjacent spectral peaks in light shone onto a diffraction grating. It states that two adjacent frequency components are just resolved when the peaks of the spectra are separated by the frequency difference Δf = f0 = 1/(NΔt) (Figure 1.2). For example, to separate the spectral peak associated with the lunar–solar semidiurnal tidal component M2 (frequency = 0.08051 cph) from that of the solar semidiurnal

tidal component S2 (0.08333 cph), for which Δf = 0.00282 cph, requires N = 355 data points at a sampling interval Δt = 1 h, or N = 71 data points at Δt = 5 h. Similarly, a total of 328 data values at 1-h sampling are needed to separate the two main diurnal constituents K1 and O1 (Δf = 0.00305 cph). Note that since fN is the highest frequency we can measure and f0 is the limit of our frequency resolution, then

fN/f0 = (1/2Δt)/(1/NΔt) = N/2   (1.3)

is the maximum number of Fourier components we can hope to estimate in any analysis.
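The Rayleigh-criterion sample counts quoted above are easy to check numerically; the sketch below (our helper, not the text's) computes the minimum N for a given constituent pair and sampling interval:

```python
import math

def samples_to_separate(f1, f2, dt):
    """Minimum number of samples N such that the Rayleigh criterion
    df = 1/(N*dt) just resolves frequencies f1 and f2 (cph; dt in hours)."""
    return math.ceil(1.0 / (abs(f2 - f1) * dt))

# M2 vs S2 semidiurnal constituents (frequencies as quoted in the text)
print(samples_to_separate(0.08051, 0.08333, dt=1.0))   # hourly sampling
print(samples_to_separate(0.08051, 0.08333, dt=5.0))   # 5-hourly sampling
```

This reproduces the N = 355 (hourly) and N = 71 (5-hourly) figures given in the text for separating M2 from S2.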

1.2.3 Sampling Accuracy

According to the two previous sections, we need to sample long enough and often enough if we hope to resolve the range of scales of interest in the variables we are measuring. It is intuitively obvious that we also need to sample as accurately as possible, with the degree of recording accuracy determined by the response characteristics of the sensors, the number of

FIGURE 1.2 Spectral peaks of two separate waveforms of equal amplitude and frequencies f1 and f2 (dashed and thin lines) together with the calculated spectrum (solid line). (a) and (b) are well-resolved spectra; (c) just-resolved spectra; and (d) not resolved. The thick solid line is the total spectrum for two underlying signals with slightly different peak frequencies.



bits per data record (or parameter value) needed to raise measurement values above background noise, and the volume of data we can live with. There is no use attempting to sample the high end of the spectrum if the instrument cannot respond sufficiently rapidly or accurately to resolve changes in the parameter being measured. (A tell-tale sign that an instrument has reached its limit of resolution is a flattening of the high-frequency end of the power spectrum; the frequency at which the spectrum of a measured parameter begins to flatten out as a function of increasing frequency typically marks the point where the accuracy of the instrument measurements is beginning to fall below the noise threshold.) In addition, there are several approaches to this aspect of data sampling, including the brute-force approach in which we measure as often as we can at the degree of accuracy available and then improve the statistical reliability of each data record through postsurvey averaging, smoothing, and other manipulation. This is the case for observations provided through shore-powered, fiber-optic, cabled observatories such as the ALOHA observatory located 100 km north of the island of Oahu (Hawaii); the Monterey Accelerated Research System (MARS) in Monterey Canyon, California; the Ocean Networks Canada cabled observatory systems Victoria Experimental Network Under the Sea (VENUS) and North-East Pacific Time Series Underwater Networked Experiments (NEPTUNE), extending from the Strait of Georgia to the continental margin and Cascadia Basin out to the Juan de Fuca Ridge off the west coast of British Columbia; the Ocean Observatories Initiative (OOI) Regional Scale Nodes off the coasts of Oregon and Washington in the Pacific Northwest of the United States (including Axial Seamount); and the Dense Oceanfloor Network System for Earthquakes and Tsunamis off the east coast of Japan.
Data can be sampled as rapidly as possible and the data processing left to the postacquisition stage at the onshore data management facility.
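The brute-force idea, oversample and then average in postprocessing, can be illustrated with a small simulation (our construction, with made-up numbers): averaging n rapid samples of a steady signal reduces zero-mean noise by roughly 1/√n.

```python
import random
import statistics

# Sketch of "brute-force" oversampling followed by post-acquisition block
# averaging: 100 rapid samples of a steady signal with Gaussian noise are
# averaged into one output record, cutting the noise sd by about 1/sqrt(100).
random.seed(1)
true_value = 10.0
n = 100                              # rapid samples per output record
records = []
for _ in range(500):                 # 500 averaged output records
    burst = [true_value + random.gauss(0.0, 0.5) for _ in range(n)]
    records.append(statistics.fmean(burst))

print(f"raw noise sd: 0.50, averaged record sd: {statistics.stdev(records):.3f}")
```

With 100 samples per record, the scatter of the averaged records is about a tenth of the raw sample noise.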

1.2.4 Burst Sampling vs Continuous Sampling

Regularly spaced, digital time series can be obtained in two different ways. The most common approach is to use a continuous sampling mode, in which the data are sampled at equally spaced intervals tk = t0 + kΔt from the start time t0. Here, k is a positive integer. Regardless of whether the equally spaced data have undergone internal averaging or decimation using algorithms built into the machine, the output to the data storage file is a series of individual samples at times tk. (Here, "decimation" is used in the loose sense of removing every nth data point, where n is any positive integer, and not in the sense of the ancient Roman technique of putting to death one in 10 soldiers in a legion guilty of mutiny or other crime.) Alternatively, we can use a burst sampling mode, in which rapid sampling is undertaken over a relatively short time interval ΔtB or "burst" embedded within each regularly spaced time interval, Δt. That is, the data are sampled at high frequency for a short duration starting (or ending) at times tk, for which the burst duration ΔtB << Δt. The instrument "rests" between bursts. There are advantages to the burst sampling scheme, especially in noisy (high-frequency) environments where it may be necessary to average out the noise to get at the frequencies of interest. Burst sampling works especially well when there is a "spectral gap" between fluctuations at the high and low ends of the spectrum. As an example, there is typically a spectral gap between surface gravity waves in the open ocean (periods of 1–20 s) and the 12.4-hourly motions that characterize semidiurnal tidal currents. Thus, if we wanted to measure surface tidal currents using the burst-mode option for our current meter, we could set the sampling to a 2-min burst every hour; this option would smooth out the high-frequency wave effects but provide sufficient numbers of velocity measurements to resolve the tidal motions. Burst sampling enables us to



filter out the high-frequency noise and obtain an improved estimate of the variability hidden underneath the high-frequency fluctuations. In addition, we can examine the high-frequency variability by scrutinizing the burst sampled data. If we were to sample rapidly enough, we could estimate the surface gravity wave energy spectrum. Many oceanographic instruments use (or have provision for) a burst sampling data collection mode.

A "duty cycle" has sometimes been used to collect positional data from Service Argos satellite-tracked drifters as a cost-saving form of burst sampling. In this case, all positional data within a 24-h period (about 10 satellite fixes) were collected only every third day. Tracking costs paid to Service Argos were reduced by a factor of three using the duty cycle. Unfortunately, problems arise when the length of each burst is too short to resolve energetic motions with periods comparable to the burst sample length. In the case of satellite-tracked drifters poleward of tropical latitudes, these problems are associated with highly energetic inertial motions whose periods T = 1/(2Ω sin θ) are comparable to the 24-h duration of the burst sample (here, Ω = 0.1161 × 10⁻⁴ cycles per second is the earth's rate of rotation and θ is latitude). Beginning in 1992, it became possible to improve resolution of high-frequency motions using a 1/3-duty cycle of 8 h "on" followed by 16 h "off". According to Bograd et al. (1999), even better resolution of high-frequency mid-latitude motions could be obtained using a duty cycle of 16 h "on" followed by 32 h "off".
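The inertial-period formula quoted above is easily evaluated; with Ω in cycles per second, T = 1/(2Ω sin θ) comes out in seconds (the helper name below is ours):

```python
import math

# Inertial period T = 1/(2*Omega*sin(theta)), with Omega in cycles per second
# (value as quoted in the text), converted to hours.
OMEGA = 0.1161e-4   # earth's rotation rate, cycles per second

def inertial_period_hours(latitude_deg):
    return 1.0 / (2.0 * OMEGA * math.sin(math.radians(latitude_deg))) / 3600.0

for lat in (15, 30, 45, 60):
    print(f"latitude {lat:2d} deg: inertial period {inertial_period_hours(lat):5.1f} h")
```

Near 30° latitude the inertial period is close to 24 h, which is why the original 24-h burst duty cycle resolved these motions so poorly.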

A duty cycle is presently being used in the Deep-ocean Assessment and Reporting of Tsunamis (DART) buoys moored in several regions of the world ocean as part of enhanced tsunami warning systems. To save battery life and reduce data storage needs, the bottom pressure recorders (BPRs) in DART buoys report time-averaged pressure (≈ water depth) every 15 min to orbiting satellites through an acoustic link to a nearby surface buoy. When the built-in algorithm

detects an anomalous change in bottom pressure due to the arrival of the leading wave of a tsunami, the instrument switches into "event mode". The instrument then transmits bottom pressure data every 15 s for several minutes, followed by 1-min averaged data for the next 4 h (González et al., 1998; Bernard et al., 2001; González et al., 2005; Titov et al., 2005; Mungov, 2012). At present, DART buoys switch to event mode if there is a threshold change of 3 cm in equivalent water depth for earthquake magnitudes greater than 7.0 and epicenter distances of over 600 km. Problems arise when the leading wave form is too slowly varying to be detected by the algorithm or if large waves continue to arrive well after the 4-h cutoff. For example, several DART buoys in the northeast Pacific failed to capture the leading tsunami wave from the September 2009, Mw = 8.1 Samoa earthquake or to detect the slowly varying trough that formed the lead wave from the magnitude 8.8, February 2010 earthquake off the coast of Chile (Rabinovich et al., 2012). Vertical acceleration of the seafloor associated with seismic waves (mainly Rayleigh waves moving along the water–bottom interface) can also trigger false tsunami responses (Mofjeld et al., 2001). Because of the duty cycle, only those few buoys providing continuous 15-s internal recording can provide data for the duration of major tsunami events, which typically have frequency-dependent, e-folding decay timescales of around a day (Rabinovich et al., 2013).

1.2.5 Regularly vs Irregularly Sampled Data

In certain respects, irregular sampling in time or nonequidistant placement of instruments can be more effective than the possibly more esthetically appealing uniform sampling. For example, unequal spacing permits a more statistically reliable resolution of oceanic spatial variability by increasing the number of quasi-independent estimates of the dominant



wavelengths (wavenumbers). Since oceanographers are almost always faced with having fewer instruments than they require to resolve oceanic features, irregular spacing can also be used to increase the overall spatial coverage (fundamental wavenumber) while maintaining the small-scale instrument separation for Nyquist wavenumber estimates. The main concern is the lack of redundancy should certain key instruments fail, as so often seems to happen. In this case, a quasi-regular spacing between locations is better. Prior knowledge of the scales of variability to expect is a definite plus in any experimental array design.

In a sense, the quasi-logarithmic vertical spacing adopted by oceanographers for bottle cast (hydrographic) sampling, specifically 0, 10, 20, 30, 50, 75, 100, 125, 150 m, etc., represents a "spectral window" adaptation to the known physical–chemical structure of the ocean. Highest resolution is required near the surface, where vertical changes in most oceanic variables are most rapid. Similarly, an uneven horizontal arrangement of observations increases the number of quasi-independent estimates of the horizontal wavenumber spectrum.

Digital data are most often sampled (or subsampled) at regularly spaced time increments. Aside from the usual human propensity for order, the need for regularly spaced data derives from the fact that most analysis methods have been developed for regularly spaced, gap-free data series. Although digital data do not necessarily need to be sampled at regularly spaced time increments to give meaningful results, some form of interpolation between values may eventually be required. Since interpolation involves a methodology for estimating unknown values from known data, it can lead to its own sets of problems.
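The simplest such interpolation scheme, regridding irregularly timed samples onto an even time base by linear interpolation, can be sketched as follows (the function and the sample numbers are our own, purely illustrative):

```python
# Linear interpolation of irregularly timed observations onto a regular grid.
def linear_interp(t_irregular, values, t_regular):
    out = []
    for t in t_regular:
        # find the bracketing pair of known samples
        for i in range(len(t_irregular) - 1):
            if t_irregular[i] <= t <= t_irregular[i + 1]:
                frac = (t - t_irregular[i]) / (t_irregular[i + 1] - t_irregular[i])
                out.append(values[i] + frac * (values[i + 1] - values[i]))
                break
    return out

t_obs = [0.0, 0.9, 2.2, 3.0, 4.1, 5.0]       # irregular observation times (h)
temp = [10.0, 10.2, 10.9, 11.0, 11.6, 12.0]  # observed temperatures (deg C)
print(linear_interp(t_obs, temp, [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]))
```

The interpolated values are estimates, not data: any variability between the bracketing observations is silently invented as a straight line, which is one of the "problems of its own" the text warns about.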

1.2.6 Independent Realizations

As we review the different instruments and methods, the reader should keep in mind the

three basic concerns with respect to observations: accuracy/precision; resolution (spatial and temporal); and statistical significance (statistical sampling theory). A fundamental consideration in ensuring the statistical significance of a set of measurements is the need for independent realizations. If repeated measurements of a process are strongly correlated, they provide no new information and do not contribute to the statistical significance of the measurements. Often a subjective decision must be made on the question of statistical independence. While this concept has a formal definition, in practice it is often difficult to judge. A simple guide is that any suite of measurements that is highly correlated (in time or space) cannot be independent. At the same time, a group or sequence of measurements that is totally uncorrelated must be independent. In the case of no correlation between each sample or realization, the number of "degrees of freedom" is defined by the total number of measurements; for the case of perfect correlation, the redundancy of the data values reduces the degrees of freedom to unity for a scalar quantity and to two for a vector quantity. The degree of correlation within the data set provides a way of estimating the number of degrees of freedom within a given suite of observations. While more precise methods will be presented later in this text, a simple linear relation between degrees of freedom and correlation often gives the practitioner a way to proceed without developing complex mathematical constructs.
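One way to sketch such a "simple linear relation" is to interpolate between the two limits just quoted: N degrees of freedom for uncorrelated samples (r = 0) and one for perfectly correlated samples (r = 1). The linear form below is our illustration of that idea, not a formula from the text; for comparison, a widely used alternative for serially correlated data is N_eff = N(1 − r)/(1 + r), where r is the lag-1 autocorrelation.

```python
# Effective degrees of freedom: a linear interpolation between the limits
# N (r = 0, all samples independent) and 1 (r = 1, perfect correlation).
def dof_linear(N, r):
    return 1.0 + (N - 1) * (1.0 - abs(r))

# Common alternative rule for serially correlated (AR(1)-like) data.
def dof_ar1(N, r):
    return N * (1.0 - r) / (1.0 + r)

N = 100
for r in (0.0, 0.5, 0.9, 1.0):
    print(f"r = {r:3.1f}: linear {dof_linear(N, r):6.1f}, AR(1) rule {dof_ar1(N, r):6.1f}")
```

Both rules agree at the uncorrelated limit but diverge for strongly correlated data, which is why the text defers the more precise methods to later chapters.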

As will be discussed in detail later, all of these sampling recommendations have statistical foundations, and the guiding rules of probability and estimation can be carefully applied to determine the sampling requirements and dictate the appropriate measurement system. At the same time, these same statistical methods can be applied to existing data in order to better evaluate their ability to measure phenomena of interest. These comments are made to assist the reader in evaluating the potential of a particular



instrument (or method) for the measurement of some desired variable.

1.3 TEMPERATURE

The measurement of temperature in the ocean uses conventional techniques, except for deep observations, where hydrostatic pressures are high and there is a need to protect the sensing system from ambient depth/temperature changes higher in the water column as the sensor is returned to the ship. Temperature is the easiest ocean property to measure accurately. Some of the ways in which ocean temperature can be measured are:

1. Expansion of a liquid or a metal.
2. Differential expansion of two metals (bimetallic strip).
3. Vapor pressure of a liquid.
4. Thermocouples.
5. Change in electrical resistance.
6. Infrared radiation from the sea surface.

In most of these sensing techniques, the temperature effect is very small and some form of amplification is necessary to make the temperature measurement detectable. Usually, the response is nearly linear with temperature, so that only the first-order term in the calibration expansion is needed when converting the sensor measurement to temperature. However, in order to achieve high precision over large temperature ranges, second-, third- and even fourth-order terms must sometimes be used to convert the measured variable to temperature.
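A calibration expansion of this kind can be sketched as a polynomial evaluation; the coefficients below are hypothetical lab-fit values of our own invention, used only to show the first-order versus higher-order distinction:

```python
# Convert a raw sensor reading x to temperature with a calibration polynomial
# T = c0 + c1*x + c2*x^2 + ...  Keeping only c0 and c1 is the first-order
# (linear) calibration described in the text; higher-order terms refine it.
def calibrate(x, coeffs):
    """Evaluate T = sum(coeffs[k] * x**k) via Horner's rule."""
    T = 0.0
    for c in reversed(coeffs):
        T = T * x + c
    return T

coeffs = [-0.20, 0.0102, -3.0e-7]       # hypothetical lab-fit coefficients
print(calibrate(1500.0, coeffs))        # raw counts -> temperature (deg C)
```

Dropping the quadratic term here changes the result by more than half a degree, which illustrates why higher-order terms matter over wide temperature ranges even when the response is "nearly" linear.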

1.3.1 Mercury Thermometers

Of the above methods, (1), (5), and (6) have been the most widely used in physical oceanography. The most common type of liquid expansion sensor is the mercury-in-glass thermometer. In their earliest oceanographic application, simple mercury thermometers were

lowered into the ocean with hopes of measuring the temperature at great depths in the ocean. Two effects were soon noticed. First, thermometer housings with insufficient strength succumbed to the greater pressure in the ocean and were crushed. Second, the process of bringing an active thermometer through the oceanic vertical temperature gradient sufficiently altered the deeper readings that it was not possible to accurately measure the deeper temperatures. An early solution to this problem was the development of min–max thermometers that were capable of retaining the minimum and maximum temperatures encountered over the descent and ascent of the thermometer. This type of thermometer was widely used on the British Challenger expedition of 1873–76.

The real breakthrough in thermometry was the development of reversing thermometers, first introduced in London by Negretti and Zambra in 1874 (Sverdrup et al., 1942, p. 349). The reversing thermometer contains a mechanism such that, when the thermometer is inverted, the mercury in the thermometer stem separates from the bulb reservoir and captures the temperature at the time of inversion. Subsequent temperature changes experienced by the thermometer have limited effects on the amount of mercury in the thermometer stem and can be accounted for when the temperature is read on board the observing ship. This "break-off" mechanism is based on the fact that more energy is required to create a gas–mercury interface (i.e., to break the mercury) than is needed to expand an interface that already exists. Thus, within the "pigtail" section of the reversing thermometer is a narrow region called the "break-off point", located near appendix C in Figure 1.3, where the mercury will break when the thermometer is inverted.

The accuracy of the reversing thermometer depends on the precision with which this break occurs. In good reversing thermometers this precision is better than 0.01 °C. In standard mercury-in-glass thermometers, as well as in



reversing thermometers, there are concerns other than the break point that affect the precision of the temperature measurement. These are:

1. Linearity in the expansion coefficient of the liquid.
2. The constancy of the bulb volume.
3. The uniformity of the capillary bore.
4. The exposure of the thermometer stem to temperatures other than the bulb temperature.

Mercury expands in a near-linear manner with temperature. As a consequence, it has been the liquid used in most high-precision, liquid-glass thermometers. Other liquids, such as alcohol and toluene, are used in precision thermometers only for very low temperature applications, where the higher viscosity of mercury is a limitation. Expansion linearity is critical in the construction of the thermometer scale, which would be difficult to engrave precisely if expansion were nonlinear.

In a mercury thermometer, the volume of the bulb is equivalent to about 6,000 stem-degrees Celsius. This is known as the "degree volume" and usually is considered to comprise the bulb and the portion of the stem below the mark (a stem-degree is the temperature measured by any tube-like thermometer). If the thermometer is to retain its calibration, this volume must remain constant with a precision not commonly realized by the casual user. For a thermometer precision within ±0.01 °C, the bulb volume must remain constant to within one part in 600,000. Glass does not have ideal mechanical properties, and it is known to exhibit some plastic behavior and deform under sustained stress. Repeated exposure to high pressures may produce permanent deformation and a consequent shift in bulb volume. Therefore, precision can only be maintained by frequent laboratory calibration. Such shifts in bulb volume can be detected and corrected by determination of the "ice point" (a slurry of water and ice), which should be checked frequently if high accuracy is required. The procedure is more or less obvious, but a few points should be considered. First, the ice should be made from distilled water, and the water–ice mixture should also be made from distilled water. The container should be insulated, and at least 70% of the bath in contact with the thermometer should be chopped ice. The thermometer should be immersed for five or more minutes, during which the ice–water mixture should be stirred continuously. The control temperature of the bath can be taken by an

FIGURE 1.3 Details of a reversing mercury thermometer showing the "pigtail appendix".



accurate thermometer of known reliability. Comparison with the temperature of the reversing thermometer, after the known calibration characteristics have been accounted for, will give an estimate of any offsets inherent in the use of the reversing thermometer in question.

The uniformity of the capillary bore is critical to the accuracy of the mercury thermometer. In order to maintain the linearity of the temperature scale, it is necessary to have a uniform capillary as well as a linear-response liquid element. Small variations in the capillary can occur as a result of small differences in cooling during its construction or to inhomogeneities in the glass. Errors resulting from the variations in capillary bore can be corrected through calibration at known temperatures. The resulting corrections, including any effect of the change in bulb volume, are known as "index corrections". These remain constant relative to the ice point and, once determined, can be corrected for a shift in the ice point by addition or subtraction of a constant amount. With proper calibration and maintenance, most of the mechanical defects in the thermometer can be accounted for. Reversing thermometers are then capable of accuracies of ±0.01 °C, as given earlier for the precision of the mercury break point. This accuracy, of course, depends on the resolution of the temperature scale etched on the thermometer. For high accuracy in the typically weak vertical temperature gradients of the deep ocean, thermometers are etched with scale intervals between 0.1 and 0.2 °C. Most reversing thermometers have scale intervals of 0.1 °C.

The reliability and calibrated absolute accuracy of reversing thermometers continue to provide the standard temperature measurement against which all forms of electronic sensors are compared and evaluated. In this role as a calibration standard, reversing thermometers continue to be widely used. In addition, many oceanographers still believe that standard hydrographic stations made with sample bottles and reversing thermometers provide the only reliable data. For these reasons, we briefly describe some of

the fundamental problems that occur when using reversing thermometers. An understanding of these errors may also prove helpful in evaluating the accuracy of reversing thermometer data that are archived in the historical data file. The primary malfunction that occurs with a reversing thermometer is a failure of the mercury to break at the correct position. This failure is caused by the presence of gas (a bubble) somewhere within the mercury column. Normally all thermometers contain some gas within the mercury. As long as the gas bubble has sufficient mercury compressing it, the bubble volume is negligible, but if the bubble gets into the upper part of the capillary tube it expands and causes the mercury to break at the bubble rather than at the break-off point. The proper place for this resident gas is at the bulb end of the mercury; for this reason it is recommended that reversing thermometers always be stored and transported in the bulb-up (reservoir-down) position. Rough handling can be the cause of bubble formation higher up in the capillary tube. Bubbles lead to consistently offset temperatures, and a record of the thermometer history can clearly indicate when such a malfunction has occurred. Again, the practice of renewing, or at least checking, the thermometer calibration is essential to ensuring accurate temperature measurements. As with most oceanographic equipment, a thermometer with a detailed history is much more valuable than a new one without some prior use.

There are two basic types of reversing thermometers: (1) protected thermometers that are encased completely in a glass jacket and not exposed to the pressure of the water column; and (2) unprotected thermometers for which the glass jacket is open at one end so that the reservoir experiences the increase of pressure with ocean depth, leading to an apparent increase in the measured temperature. The increase in temperature with depth is due to the compression of the glass bulb, so that if the compressibility of the glass is known from the manufacturer, the pressure and hence the depth



can be inferred from the temperature difference, ΔT = T_Unprotected − T_Protected. The difference in thermometer readings, collected at the same depth, can be used to compute the depth of the temperature measurement to an accuracy of about ±1% of the depth. This subject will be treated more completely in the section on depth/pressure measurement. We note that the ±1% full-scale accuracy for reversing thermometers is better than the accuracy of ±2–3% normally expected from modern depth sounders, but is much poorer than the ±0.01% full-scale pressure accuracy expected from strain gauges used in most modern CTD probes.
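The thermometric-depth idea can be sketched under an assumed pressure coefficient. Everything in the snippet below, the coefficient Q, its value, and the paired readings, is hypothetical; in practice the coefficient comes from the manufacturer's calibration of the unprotected thermometer:

```python
# Sketch of thermometric depth under assumed values: an unprotected reversing
# thermometer reads high by roughly Q degrees per meter of depth (Q depends on
# the glass compressibility and is supplied by the manufacturer; the value
# here is purely illustrative).
Q = 0.001    # hypothetical pressure coefficient, deg C per meter

def thermometric_depth(t_unprotected, t_protected, q=Q):
    """Depth (m) inferred from the protected/unprotected reading difference."""
    return (t_unprotected - t_protected) / q

depth = thermometric_depth(t_unprotected=4.95, t_protected=3.75)
print(f"inferred depth: {depth:.0f} m")
```

With these illustrative numbers, a 1.2 °C reading difference implies a depth of 1,200 m; the ±1% depth accuracy quoted in the text then corresponds to roughly ±12 m.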

Unless collected for a specific observational program or taken as calibrations for electronic measurement systems, reversing thermometer data are most commonly found in historical data archives. In such cases, the user is often unfamiliar with the precise history of the temperature data and thus cannot reconstruct the conditions under which the data were collected and edited. Under these conditions, one generally assumes that the errors are of two types: either they are large offsets (such as errors in reading the thermometer), which are readily identifiable by comparison with other regional historical data, or they are small random errors due to a variety of sources and difficult to identify or separate from real physical oceanic variability. Parallax errors, which are one of the main causes of reading errors, are greatly reduced through use of an eyepiece magnifier. Identification and editing of these errors depends on the problem being studied and will be discussed in a later section on data processing.

1.3.2 The Mechanical Bathythermograph

The mechanical bathythermograph (MBT) uses a liquid-in-metal thermometer to register temperature and a Bourdon tube sensor to measure pressure. The temperature-sensing element is a fine copper tube nearly 17 m long filled

with toluene (Figure 1.4). Temperature readings are recorded by a mechanical stylus, which scratches a thin line on a coated glass slide. Although this instrument has largely been replaced by the expendable bathythermograph (XBT), the historical archives contain numerous temperature profiles collected using this device. It is, therefore, worthwhile to describe the instrument and the data it measures. Only the temperature measurement aspect of this device will be considered; the pressure/depth recording capability will be addressed in a later section.

There are numerous limitations to the MBT. To begin with, it is restricted to depths less than 300 m. While the MBT was intended to be used with the ship underway, it is only really possible to use it successfully when the ship is traveling at no more than a few knots. At higher speeds, it becomes impossible to retrieve the MBT without the risk of hitting the instrument against the ship. Higher speeds also make it difficult to properly estimate the depth of the probe from the amount of wire out. The temperature accuracy of the MBT is restricted by the inherent lower accuracy of the liquid-in-metal thermometer. Metal thermometers are also subject to permanent deformation. Since metal is more subject to changes at high temperatures than is glass, it is possible to alter the performance of the MBT by continued exposure to higher temperatures (i.e., by leaving the probe out in the sun). The metal return spring of the temperature stylus is also a source of potential problems in that it is subject to hysteresis and creep. Hysteresis, in which the uptrace does not coincide with the downtrace, is especially prevalent when the temperature differences are small. Creep occurs when the metal is subjected to a constant loading for long periods. Thus, an MBT continuously used in the heat of the tropics may be found later to have a slight positive temperature error.

Most of the above errors can be detected and corrected by frequent calibration of the MBT. Even with regular calibration, it is doubtful that the stated precision of ±0.1 °F (±0.06 °C)



can be attained. Here, the value is given in °F since most of the MBTs were produced with this temperature scale. When considering MBT data from the historical data files, it should be realized that these data were entered into the files by hand. The usual method was to produce an enlarged black-and-white photograph of the temperature trace using the nonlinear calibration grid unique to each instrument. Temperature values were then read off these photographs and entered into the data file at the corresponding depths. The usual procedure was to record temperatures for a fixed depth interval (i.e., 5 or 10 m) rather than to select inflection points that best described the temperature profile. The primary weakness of this procedure is the ease with which incorrect values can enter the data file through misreading the temperature trace or incorrectly entering the measured value. Usually these types of errors result in large differences from the neighboring values and can be easily identified. Care should be taken,

however, to remove such values before applying objective methods to search for smaller random errors. Data entry errors can also occur in the date, time, and position of the temperature profile, and tests should be made to detect them.
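The neighbor-comparison check described above can be sketched in a few lines. The function name and the 1 °C threshold are illustrative choices for this sketch, not standards from the text.

```python
def flag_spikes(temps, threshold=1.0):
    """Flag hand-entry errors that differ sharply from BOTH neighbors.

    temps: temperatures recorded at fixed depth intervals (e.g., every 5 m).
    threshold: allowable departure (deg C) from both neighbors; the 1.0
    default is illustrative, not a community standard.
    """
    flagged = []
    for i in range(1, len(temps) - 1):
        d_prev = temps[i] - temps[i - 1]
        d_next = temps[i] - temps[i + 1]
        # A transcription error sticks out from both neighbors in the same
        # direction; a genuine vertical gradient does not.
        if abs(d_prev) > threshold and abs(d_next) > threshold and d_prev * d_next > 0:
            flagged.append(i)
    return flagged

# Example: a 16.2 -> 51.2 digit-transposition error at index 3
profile = [18.0, 17.5, 16.9, 51.2, 15.8, 15.2]
print(flag_spikes(profile))  # -> [3]
```

A check of this kind catches only the large, isolated errors; the smaller random errors mentioned above still require the objective methods discussed later in the book.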

1.3.3 Resistance Thermometers (XBT)

Since the electrical resistance of metals and other materials changes with temperature, these materials can be used as temperature sensors. The resistance (R) of most metals depends on temperature (T) and can be expressed as a polynomial

R = R₀(1 + aT + bT² + cT³ + …)   (1.4)

where a, b, and c are constants and R₀ is the resistance at T = 0 °C. In practice, it is usually assumed that the response is linear over some limited temperature range and that the proportionality can be given by the value of the coefficient a

FIGURE 1.4 A bathythermograph showing its internal construction and sample bathythermograph slides. Labeled components include the temperature element (xylene-filled tubing and Bourdon tube) and the pressure element (bellows, piston head, and helical spring), together with the stylus arm, stylus lifter, and smoked glass slide.

1. DATA ACQUISITION AND RECORDING14


(called the temperature resistance coefficient). The most commonly used metals are copper, platinum, and nickel, which have temperature coefficients, a, of 0.0043, 0.0039, and 0.0066/°C, respectively. Of these, copper has the most linear response, but its resistance is low, so that a thermal element would require many turns of fine wire and would consequently be expensive to produce. Nickel has a very high resistance but deviates sharply from linearity. Platinum, with a relatively high resistance, is very stable and has relatively linear behavior. For these reasons, platinum resistance thermometers have become the standard by which the international scale of temperature is defined. Platinum thermometers are also widely used as laboratory calibration standards and have accuracies of 0.001 °C.
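As a quick illustration of the linearized form of Eq. (1.4), R ≈ R₀(1 + aT), the sketch below uses the coefficients just quoted. The reference resistance R₀ = 100 Ω is an assumed value chosen only for the example.

```python
# Linear approximation to Eq. (1.4): R(T) ~ R0 * (1 + a*T), valid over a
# limited temperature range. The coefficients a are those quoted in the
# text; R0 = 100 ohm is an assumed reference resistance for illustration.
COEFF = {"copper": 0.0043, "platinum": 0.0039, "nickel": 0.0066}  # per deg C

def resistance(metal, T, R0=100.0):
    """Approximate resistance (ohm) of a metal element at T (deg C)."""
    return R0 * (1.0 + COEFF[metal] * T)

for metal in COEFF:
    print(f"{metal}: R(25 C) = {resistance(metal, 25.0):.2f} ohm")
```

Running the loop shows how small the fractional change is for copper and platinum over an oceanographic temperature range, which is why precision bridges are needed to read such sensors.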

Semiconductors form another class of resistive materials used for temperature measurement. These are mixtures of oxides of metals such as nickel, cobalt, and manganese, which are molded at high pressure and then sintered (i.e., heated to incipient fusion). The types of semiconductors used for oceanographic measurements are commonly called thermistors. Thermistors have two advantages: (1) their temperature resistance coefficient of about −0.05/°C is roughly 10 times as great as that for copper; and (2) they may be made with high resistance for a very small physical size.

The temperature coefficient of thermistors is negative, which means that the resistance decreases as temperature increases. This temperature coefficient is not constant except over very small temperature ranges; hence, the change of resistance with temperature is not linear. Instead, the relationship between resistance and temperature is given by

R(T) = R₀ exp[b(T⁻¹ − T₀⁻¹)]   (1.5)

where R₀ = R(T₀) is the resistance at the reference temperature T₀, T and T₀ are absolute temperatures (K) with respective resistance values R(T) and R₀, and the constant b is determined by the energy required to generate and move the charge carriers responsible for electrical conduction. (As b increases, the material becomes more conducting.) Thus, we have a relationship whereby temperature T can be computed from a measurement of resistance R(T).
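Equation (1.5) can be inverted to recover temperature from a measured resistance: 1/T = 1/T₀ + ln[R/R₀]/b. In the sketch below, the constants R₀, T₀, and b are assumed values of typical magnitude for oceanographic thermistors, not figures from the text.

```python
import math

# Invert Eq. (1.5), R(T) = R0 * exp(b * (1/T - 1/T0)), to obtain temperature
# from measured resistance. The constants are illustrative assumptions:
# R0 = 5000 ohm at T0 = 298.15 K (25 deg C) and b = 3500 K, a typical
# magnitude for oceanographic thermistors.
R0 = 5000.0   # ohm, resistance at the reference temperature
T0 = 298.15   # K, reference temperature (25 deg C)
B = 3500.0    # K, material constant b

def thermistor_temp(R):
    """Temperature (deg C) from a measured thermistor resistance R (ohm)."""
    inv_T = 1.0 / T0 + math.log(R / R0) / B
    return 1.0 / inv_T - 273.15

# At the reference resistance we recover the reference temperature:
print(round(thermistor_temp(5000.0), 3))  # -> 25.0
# Resistance drops as temperature rises (negative coefficient):
print(thermistor_temp(4000.0) > 25.0)     # -> True
```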

One of the most common uses of thermistors in oceanography is in XBTs. The XBT was developed to provide an upper-ocean temperature-profiling device that operates while the ship is underway. The crucial development was the concept of measuring depth using the elapsed time for the known fall rate of a "freely falling" probe. To achieve "free fall" independent of the ship's motion, the data transfer cable is constructed from fine copper wire with feed spools in both the sensor probe and the launching canister (Figure 1.5). The details of the depth measurement capability of the XBT will be discussed and evaluated in the section on depth/pressure measurements.
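The depth-from-elapsed-time concept can be sketched as follows. Depth is modeled by a quadratic fall-rate equation z(t) = At − Bt², in which the quadratic term accounts for the probe slowing as wire pays out; the coefficients below are the widely quoted manufacturer values for Sippican T-4/T-6/T-7 probes and are used here purely for illustration.

```python
# Sketch of XBT depth estimation from elapsed time using a quadratic
# fall-rate equation z(t) = A*t - B*t**2. The coefficient values are the
# widely quoted manufacturer figures for T-4/T-6/T-7 probes; treat them
# as illustrative assumptions rather than a definitive standard.
A = 6.472    # m/s, initial fall rate
B = 0.00216  # m/s^2, deceleration coefficient

def xbt_depth(t):
    """Estimated probe depth (m) after t seconds of free fall."""
    return A * t - B * t * t

print(round(xbt_depth(100.0), 1))  # -> 625.6
```

Any error in the assumed fall rate maps directly into a depth error at every point of the profile, which is why the fall-rate equation receives separate scrutiny in the depth/pressure section.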

The XBT probes employ a thermistor placed in the nose of the probe as the temperature-sensing element. According to the manufacturer (Sippican Corp.; Marion, Massachusetts, U.S.A.), the accuracy of this system is ±0.1 °C. This figure is determined from the characteristics of a batch of semiconductor material, which has known resistance–temperature (R–T) properties. To yield a given resistance at a standard temperature, the individual thermistors are precision-ground, with the XBT probe thermistors ground to yield 5000 Ω (where Ω is the symbol for ohms) at 25 °C (Georgi et al., 1980). If the major source of XBT probe-to-probe variability can be attributed to imprecise grinding, then a single-point calibration should suffice to reduce this variability in the resultant temperatures. Such a calibration was carried out by Georgi et al. (1980) both at sea and in the laboratory.

To evaluate the effects of random errors on the calibration procedure, 12 probes were calibrated repeatedly. The mean difference between the measured and bath temperatures was −0.045 °C, with a standard deviation of 0.01 °C. For the overall calibration comparison, 18 cases of probes (12 probes per case) were examined. Six cases of T7s (good to 800 m and vessel speeds up to 30 knots) and two cases of T6s (good to 500 m and speeds less than 15 knots) were purchased new from Sippican, while the remaining 10 cases of T4s (good to 500 m and speeds up to 30 knots) were acquired from a large pool of XBT probes manufactured in 1970 for the U.S. Navy. The overall average standard

deviation for the probes was 0.023 °C, which then reduces to 0.021 °C when consideration is made for the inherent variability of the calibration procedure.
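The reduction from 0.023 to 0.021 °C follows from removing, in quadrature, the 0.01 °C repeatability of the calibration procedure itself, assuming the procedure error and the probe-to-probe error are independent:

```python
import math

# Remove the calibration procedure's own repeatability (0.01 deg C, from
# the 12-probe repeat calibrations) from the overall standard deviation
# (0.023 deg C), assuming independent error sources that add in quadrature.
overall = 0.023     # deg C, overall std dev across probes
procedure = 0.010   # deg C, repeatability of the calibration procedure

probe_only = math.sqrt(overall**2 - procedure**2)
print(round(probe_only, 3))  # -> 0.021
```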

A separate investigation was made of the R–T relationship by studying the response characteristics of nine probes. The conclusion was that the R–T differences ranged from +0.011 °C to −0.014 °C, which means that the measured relationships were within ±0.014 °C of the published relationship and that the calculation of new coefficients, following Steinhart and Hart (1968), is not warranted. Moreover, the final conclusions of Georgi et al. (1980) suggest an overall accuracy for XBT thermistors of ±0.06 °C at the 95% confidence level, and that the consistency between thermistors is sufficiently high that individual probe calibration is not needed at this accuracy level.

FIGURE 1.5 Exploded view of a Sippican Oceanographic, Inc. XBT showing spool and canister. XBT, Expendable bathythermograph.

Another method of evaluating the performance of the XBT system is to compare XBT temperature profiles with those taken at the same time with a higher-accuracy profiler, such as a CTD system. Such comparisons are discussed by Heinmiller et al. (1983) for data collected in both the Atlantic and the Pacific using calibrated CTD systems. In these comparisons, it is always a problem to achieve true synopticity in the data collection, since the XBT probe falls much faster than the recommended drop rate of around 1 m/s for a CTD probe. Most of the earlier comparisons between XBT and CTD profiles (Flierl and Robinson, 1977; Seaver and Kuleshov, 1982) were carried out using XBT temperature profiles collected between CTD stations separated by 30 km. For the purposes of intercomparison, it is better for the XBT and CTD profiles to be collected as simultaneously as possible.

The primary error discussed by Heinmiller et al. (1983) is in the measurement of depth rather than temperature. There were, however, significant differences between temperatures measured at depths where the vertical temperature gradient was small and the depth error should make little or no contribution. Here, the XBT temperatures were found to be systematically higher than those recorded by the CTD. Sample comparisons were divided by probe type and experiment. The T4 probes (as defined above) yielded a mean XBT−CTD difference of about 0.19 °C, while the T7s (defined above) had a lower mean temperature difference of 0.13 °C. The corresponding standard deviations of the temperature differences were 0.23 °C for the T4s and 0.11 °C for the T7s. Taken together, these statistics suggest an XBT accuracy worse than

the ±0.1 °C given by the manufacturer and far worse than the ±0.06 °C reported by Georgi et al. (1980) from their calibrations.

From these divergent results, it is difficult to decide where the true XBT temperature accuracy lies. Since the Heinmiller et al. (1983) comparisons were made in situ, there are many sources of error that could contribute to the larger temperature differences. Even though most of the CTD casts were made with calibrated instruments, errors in operational procedures during collection and archival could add significant errors to the resultant data. Also, it is not easy to find segments of temperature profiles with no vertical temperature gradient, and it is therefore difficult to ignore the effect of the depth measurement error on the temperature trace. It seems fair to conclude that the laboratory calibrations represent the ideal accuracy possible with the XBT system (i.e., better than ±0.1 °C). In the field, however, one must expect other influences that will reduce the accuracy of the XBT measurements, and an overall accuracy slightly worse than ±0.1 °C is perhaps realistic. Some of the sources of these errors can be easily detected, such as an insulation failure in the copper wire, which results in single step offsets in the resulting temperature profile. Other possible temperature error sources are interference from shipboard radio transmission (which shows up as high-frequency noise in the vertical temperature profile) and problems with the recording system. Hopefully, these problems are detected before the data are archived in historical data files.
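A single step offset of the kind produced by a wire insulation failure can be screened for by comparing each first difference of the profile against the typical point-to-point variability. The threshold factor below is an illustrative assumption, not a published standard.

```python
def find_step_offset(temps, factor=10.0):
    """Return the index where a suspected step offset begins, or None.

    A wire insulation failure produces one first difference far larger
    than the typical point-to-point change. 'factor' is an illustrative
    screening threshold, not a published standard.
    """
    diffs = [abs(temps[i + 1] - temps[i]) for i in range(len(temps) - 1)]
    typical = sorted(diffs)[len(diffs) // 2]  # median absolute difference
    for i, d in enumerate(diffs):
        if typical > 0 and d > factor * typical:
            return i + 1  # profile index where the offset begins
    return None

# Example: a roughly +3 deg C offset entering at index 4
profile = [18.0, 17.9, 17.7, 17.6, 20.5, 20.4, 20.2]
print(find_step_offset(profile))  # -> 4
```

A step flagged this way can then be inspected by eye; unlike the isolated spikes of hand-digitizing errors, the offset persists over the remainder of the trace.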

In closing this section, we note that, until recently, most XBT data were digitized by hand. The disadvantage of this procedure is that chart paper recording does not fully realize the potential digital accuracy of the sensing system and that the opportunities for operator recording errors are considerable. Again, some care should be exercised in editing out these large errors, which usually result from the incorrect hand recording of temperature, date, time, or position. It is becoming increasingly popular to use digital XBT recording
