ORNITHOLOGY MONITORING AT OFFSHORE WIND FARMS: SURVEY DESIGN & INFERENCE

Mark Trinder MacArthur Green

MacArthur Green
• Specialist services in marine ornithology for Developers, Regulators and Statutory Advisory Bodies
• Impact assessment & monitoring work including: Beatrice, Hornsea, Dogger Bank, EA1, EA3
• Guidance & Research: seabird sensitivity scoring and mapping, CIA methods for PFOW, Biologically Defined Minimum Population Scales, compensation options for seabird SPAs, offshore wind farm collision risk database, NERC PhD studentship
• Experience and background: population modelling and analysis, impact assessment, seabird ecology

Ornithology Monitoring: Talk Overview
• Why and what we monitor for birds and OWFs
• Survey design & power analysis
• Power analysis and simulation illustration
• Summary
• Recommendations

Ornithology Monitoring
• What is the purpose?
  • Tick a box?
  • Validate site-specific assumptions made in the assessment?
  • Reduce uncertainties for future developments?
  • All of the above – these need not be mutually exclusive
• How to maximise value?
• So what? – detection of an impact does not have to mean there is an effect on the population.

What are the key bird issues?
• Collisions & displacement (precautionary estimates)
• Monitoring can help to reduce uncertainties
• There is a need to focus on:
  • the phrasing of questions, and
  • the design of surveys
• to ensure results are useful (not a wasted opportunity)

How to ensure monitoring is useful
• Identify the key assumptions made in the assessment (e.g. flight height, displacement rate, SPA connectivity)
• Clearly state the purpose of monitoring:
  • Were the flight heights used in CRM appropriate?
  • Is species x displaced by 50%?
  • What proportion of the birds seen are from a particular colony?
• What are the consequences of the answers? ('so what?')
• Avoid vagueness, e.g. 'monitor bird usage of the wind farm'

Survey design
• More data (usually) means greater confidence in the results
• But particularly infrequent events, such as collisions, would need very large-scale (spatial and temporal) monitoring to estimate reliably
• Even if the aim is to measure more common events, such as displacement, it is still important to understand what is feasible:
  • How much data is enough to detect an effect?
• Power analysis is a tool for answering this question

Power Analysis
• Power analysis is a means to determine the sample size required to detect an effect of a given size with a given degree of confidence.
• Or, put another way, it allows us to determine the probability of detecting an effect of a given size with a given level of confidence, under sample size constraints.
• So, what does that actually mean? (a minimal numerical sketch is given below)
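For readers who want to see the first form of this calculation in practice, the following is a minimal sketch using the statsmodels power routines, assuming a simple two-sample comparison of mean counts rather than the spatial simulation approach described later in this talk; the effect size and sample numbers are placeholder values.

```python
# Minimal sketch of a textbook power calculation (assumed, simplified design:
# a two-sample comparison of mean counts before vs after). Not the spatial
# simulation approach used for the offshore wind examples later in the talk.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# 1) Sample size needed to detect a medium standardised effect (d = 0.5)
#    at the conventional alpha = 0.05 with 80% power.
n_per_period = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Samples needed per period: {n_per_period:.0f}")   # ~64

# 2) Put the other way around: the power achieved if only 20 samples
#    per period are feasible.
power = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=20)
print(f"Power with 20 samples per period: {power:.2f}")   # ~0.34
```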

Statistical significance
• Statistical tests typically define a threshold of significance, the p value (e.g. 0.05)
• This is used to minimise the risk that an observation is simply chance
• So, at p = 0.05 the observed result would be expected by chance only 5% of the time, which is considered sufficiently unlikely to be termed 'significant'
• One way to increase the likelihood (power) of detecting an effect is therefore to raise the p threshold, but:
  • this increases the risk of false positives (identifying an effect when it was just chance; Type I error)
  • conversely, lower p thresholds risk false negatives (real effects being dismissed; Type II error)

Power analysis – trade off
• By convention:
  • the maximum p value for significance is 5%, i.e. the risk of a false positive (wrongly identifying a chance result as an effect) is 5%
  • the minimum probability of detecting a real effect is 80% (we expect to detect an effect, if present, 80% of the time), i.e. the risk of a false negative is 20%
• This is the trade-off:
  • a 1 in 20 chance of a false positive
  • a 1 in 5 chance of a false negative
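A small Monte Carlo check of these two conventional rates, offered as an illustration with made-up numbers (normal 'counts', a two-sample t-test, 64 samples per period, a standardised effect of 0.5): roughly 5% of 'no effect' datasets come out significant, and a real effect of the size the study was powered for is detected roughly 80% of the time.

```python
# Illustrative simulation (assumed, simplified setup) of the conventional
# 5% / 80% trade-off.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
alpha, n, n_sims = 0.05, 64, 2000   # 64 samples per period ~ 80% power for d = 0.5

def rejection_rate(true_effect):
    """Fraction of simulated studies returning p < alpha."""
    rejections = 0
    for _ in range(n_sims):
        before = rng.normal(0.0, 1.0, n)
        after = rng.normal(true_effect, 1.0, n)
        if ttest_ind(before, after).pvalue < alpha:
            rejections += 1
    return rejections / n_sims

print("False positive rate (no real effect):", rejection_rate(0.0))   # ~0.05
print("Power when the real effect is 0.5 SD:", rejection_rate(0.5))   # ~0.80
```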

Power analysis – offshore wind
"determine the sample size required to detect an effect of a given size with a given degree of confidence"
• We already have 2 of the 3 values above:
  • the given degree of confidence = 5% (i.e. a significant result)
  • the probability of detecting an effect = 80%
• And we can estimate the third: the sample size required
• Use existing data to develop a simulation of the bird distributions and the survey design

Power analysis – illustration
• Survey area – random bird distributions
• Based on Sandwich tern at 2 birds/km²
[Map: simulated 'Before' bird distribution across the survey area]

Power analysis – illustration
• Survey area – random bird distributions
• Based on Sandwich tern at 2 birds/km²
• Add a wind farm (Dudgeon)
[Map: 'Before' distribution with the Dudgeon wind farm footprint added]

Power analysis – illustration
• Survey area – random bird distributions
• Based on Sandwich tern at 2 birds/km²
• Add a wind farm (Dudgeon)
• Add a potential survey design – aerial transects
• Birds observed (in transect) generate the survey data
[Map: 'Before' distribution with wind farm footprint and aerial transect lines]
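A minimal sketch of how the 'before' scenario could be set up, under assumed values for the survey area dimensions and the observed strip width (only the 2 birds/km² density comes from the talk): scatter birds at random and record those falling within strips around parallel aerial transects.

```python
# Sketch of the 'before' simulation under assumed dimensions: birds placed
# at random at 2 birds/km^2, observed only within a strip either side of
# each north-south aerial transect.
import numpy as np

rng = np.random.default_rng(42)

AREA_W_KM, AREA_H_KM = 30.0, 20.0   # survey area size (assumed for illustration)
DENSITY = 2.0                        # birds per km^2 (Sandwich tern, from the talk)
STRIP_HALF_WIDTH_KM = 0.25           # observed strip half-width (assumed)

def simulate_birds():
    """Poisson number of birds, placed uniformly over the survey area."""
    n = rng.poisson(DENSITY * AREA_W_KM * AREA_H_KM)
    return rng.uniform(0, AREA_W_KM, n), rng.uniform(0, AREA_H_KM, n)

def survey_counts(x, y, transect_sep_km):
    """Birds recorded within the strip around each transect line."""
    transect_x = np.arange(transect_sep_km / 2, AREA_W_KM, transect_sep_km)
    return np.array([np.sum(np.abs(x - t) <= STRIP_HALF_WIDTH_KM) for t in transect_x])

x, y = simulate_birds()
print("Birds present:", x.size, "| recorded per transect:", survey_counts(x, y, 2.0))
```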

Power analysis – illustration
• Run again for post-construction
• Apply a displacement effect from the wind farm (reduce the number of birds inside the wind farm in the 'after' period)
[Maps: 'Before' and 'After' bird distributions]
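Continuing the sketch, the 'after' step could thin the birds inside the wind farm footprint by the chosen displacement rate. The footprint below is an assumed rectangle rather than the real Dudgeon boundary, and a fuller version would relocate displaced birds outside the footprint (redistribution) rather than simply dropping them.

```python
# Sketch of applying a displacement effect: remove a proportion of the birds
# that fall inside an assumed rectangular wind farm footprint. The real
# simulations used the Dudgeon footprint and redistributed displaced birds;
# here they are simply dropped to keep the example short.
import numpy as np

rng = np.random.default_rng(7)

WF_X_KM = (12.0, 18.0)   # footprint extent east-west (assumed)
WF_Y_KM = (8.0, 13.0)    # footprint extent north-south (assumed)

def apply_displacement(x, y, displacement_rate):
    """Return bird positions after displacing a fraction of those inside the footprint."""
    inside = ((x >= WF_X_KM[0]) & (x <= WF_X_KM[1]) &
              (y >= WF_Y_KM[0]) & (y <= WF_Y_KM[1]))
    displaced = inside & (rng.random(x.size) < displacement_rate)
    return x[~displaced], y[~displaced]
```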

Power analysis – illustration
• Run the survey model multiple times (e.g. 50+)
• Analyse the simulated before/after data using spatial models (MRSea; CREEM, St. Andrews University)
• Vary the survey design:
  • survey frequency,
  • survey coverage
• Vary the biology:
  • bird distribution (uniform or concentrated), and
  • displacement effect size
• Compare before (no effect) with after (displacement from the wind farm): how often is the effect detected? (target 80%)
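Putting the pieces together, power can be estimated by repeating the before/after survey many times and recording how often a test flags the difference. The talk used MRSea spatial models for the analysis step; the sketch below substitutes a plain comparison of per-transect counts (a Mann-Whitney test) purely to keep the example short, and reuses the simulate_birds, survey_counts and apply_displacement helpers from the earlier sketches.

```python
# Sketch of power-by-simulation: simulate 'before' and 'after' surveys many
# times and record the proportion of runs in which the displacement is
# detected. A simple per-transect count comparison stands in for the MRSea
# spatial models used in the work described in the talk.
from scipy.stats import mannwhitneyu

def detection_rate(n_sims=50, transect_sep_km=2.0, displacement_rate=0.5, alpha=0.05):
    """Proportion of simulated before/after comparisons that are significant."""
    detected = 0
    for _ in range(n_sims):
        bx, by = simulate_birds()                       # 'before' distribution
        before = survey_counts(bx, by, transect_sep_km)
        ax, ay = apply_displacement(*simulate_birds(), displacement_rate)
        after = survey_counts(ax, ay, transect_sep_km)  # 'after' distribution
        if mannwhitneyu(before, after).pvalue < alpha:
            detected += 1
    return detected / n_sims

print("Estimated probability of detecting the effect:", detection_rate())
```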

Power analysis – illustration

(table values: percentage of simulations in which the displacement effect was detected)

Simulation                                  A      B      C      D
Percentage redistribution (effect size)     25     50     50     50
Transect separation (km)                    2      2      1.5    1
No. surveys per month (April – July):
  1                                         39.4   46.2   31.0   45.5
  2                                         28.1   47.4   46.3   55.3
  3                                         33.3   50.0   56.8   50.0
  4                                         43.3   58.3   65.2   77.3
  5                                         24.1   65.8   68.9   73.8

• Large effects – higher probability of detection
• More surveys – increased probability of detection

Power analysis – illustration (table repeated from the previous slide)

• Key message – only with 4–5 surveys per month for 4 months, at twice the typical transect coverage, and with a large effect do the results get close to the 80% detection target

Power analysis – distributions
• In this example, a very high intensity of survey effort is needed to detect even a large effect
• Even then, the result is based on a low modelled level of inter-annual variability
• This will not always be the case (e.g. for more numerous species)
• But can any effect that is detected be definitively attributed to the wind farm?
• Are there alternatives which may be more suitable for addressing the key questions?

One option – tagging
• GPS tags:
  • track individuals,
  • collect detailed spatial and behavioural data (identify foraging areas, behavioural responses, flight characteristics),
  • estimate connectivity with breeding populations
• Simulation can have a role in study design
• Use it to estimate sample sizes on the basis of previous studies (e.g. trip characteristics)

Simulate tagging data
• Generate bird tracks for the population (yellow)
• Identify all grid cells visited (red)
• Randomly select a subset as 'tagged' (green)
• Identify the cells visited by the tracked birds (blue)
• 15 tagged birds from a population of 4,000
• 60% of the total area identified from the tagged data

• Estimate the area identified using different tag sample sizes
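As a hedged sketch of that sample-size question (with toy numbers that are not intended to reproduce the 60% figure above, only the shape of the calculation), the following simulates grid-cell usage for a population of 4,000 birds and asks what fraction of the population's used cells a random sample of tagged birds reveals.

```python
# Toy sketch of the tagging sample-size question: how much of the area used
# by the whole population is revealed by n tagged birds? Grid size, cells per
# bird and the skew of cell preferences are all assumed values.
import numpy as np

rng = np.random.default_rng(3)

N_BIRDS, N_CELLS = 4000, 2500     # population size from the talk; grid size assumed
CELLS_PER_BIRD = 40               # grid cells visited per bird (assumed)

# Skewed cell preferences so that birds overlap in 'good' foraging cells.
weights = rng.dirichlet(np.full(N_CELLS, 0.05))

bird_cells = [set(rng.choice(N_CELLS, size=CELLS_PER_BIRD, replace=False, p=weights))
              for _ in range(N_BIRDS)]
population_cells = set().union(*bird_cells)

for n_tagged in (5, 15, 30, 60):
    tagged = rng.choice(N_BIRDS, size=n_tagged, replace=False)
    found = set().union(*(bird_cells[i] for i in tagged))
    print(f"{n_tagged:3d} tagged birds -> {len(found) / len(population_cells):.0%} of used cells")
```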

Monitoring conclusions
• Monitoring goals and questions need to be carefully defined (set up advisory groups)
• Link monitoring aims to survey design (an iterative process)
• Power analysis and simulation have roles to play in the decision-making process – they reduce the risk of undertaking monitoring that may be of limited value
• And 'so what?' – detection of an effect need not mean there is a population-level impact.

Monitoring recommendations
• If the evidence suggests that detecting site-specific effects using current approaches is not possible, why do it?
• The key consenting concerns are cumulative impacts on SPA populations, so monitoring should be population-centred, not focussed on individual wind farm sites
• Strategic monitoring offers the potential to assess population-level impacts, to the benefit of the industry
• Consent conditions should allow for flexibility of approach, to make the most of opportunities

Acknowledgements:

• Chris Nunn at Statkraft for permission to present data from Dudgeon simulations