
Page 1:

What we expect from Watermarking

Scott Craver and Bede Liu

Department of Electrical Engineering, Princeton University

Page 2:

“We” who?

Page 3:

“We” who?

“What does breaking the code have to do with research? Research for what? Are you researching cloning, or the laws of physics? We’re not dealing with Boyle’s Law here….”

--Jack Valenti, MPAA

Page 4:

Security research

“Make and Break” research cycle
– “Break” is a terrible misnomer.
– Better systems are derived from earlier mistakes.
– In practice, just about everything invented is broken the day it is designed.
– Academic papers proposing watermarking schemes vastly outnumber academic papers identifying flaws in watermarking schemes.

Page 5:

Security Design Principles

Have a well-defined goal and threat model
Be wary of Kerckhoffs’ Criterion
Utilize community resources
Economy of final design

Page 6:

Threat model and goals

Modeling the adversary

Page 7:

The Kerckhoffs Criterion

Assume an attacker will know the inner workings of your system: algorithm, source code, and so on.

A system that violates this principle relies on an obscurity tactic, i.e., “security through obscurity.”

But why do we have to be so pessimistic?
– Obscurity does not scale.
– Obscurity tactics are not amenable to quantitative analysis.
– In practice, the worst case is pretty common.

Page 8:

Modeling an adversary

The “common case” is hard to define in an adversarial situation.

A technology that fails 1% of the time can be made to fail 100% of the time.

Threat one: distribution of exploit code or circuit plans for circumventing controls.

Threat two: widespread distribution of one clip after one successful circumvention.

Page 9:

Community Resources

If there’s a vulnerability in your system, it has probably already been published in a journal somewhere.

The body of known attacks, flaws and vulnerabilities is large, and the result of years of community effort.

So is the body of successful cryptosystems.

But does it really help to publish the specifications of your own system?

Page 10:

Economy of Solution

A solution’s complexity should match the system’s design goals.

Problem: easing or removing design goals after the fact, so that a finished product that failed to meet them appears to succeed.

Nobody needs a solid gold speed bump with laser detectors and guard dogs.

Page 11:

General Observations

People often overestimate security.
Flaws in many published security systems are often either well-known and completely preventable, or completely unavoidable for a given application.

Common mistakes, exploited by attackers:
– Mismodeling the attacker
– Mismodeling the human visual system
– Ignoring application/implementation issues supposedly beyond the scope of the problem

Page 12:

Part II: video steganalysis

Page 13:

Attacks on video watermarking

Problem: an attacker digitizes video from an analog source and distributes it over the Internet.

Solution model one: control digitizing.
Solution model two: control playback of digitized video in display devices.
Is there a “common case” in either model?

Page 14:

Known attacks

Ingemar Cox and Jean-Paul Linnartz, IEEE JSAC, vol. 16, no. 4, 1998, pp. 587-593
– Pre-scrambling operation prior to detection
– Attack on the type-1 solution (controlled digitizing)

[Diagram: S → A/D / Detect → S⁻¹ (scramble before the detector, descramble after)]

Page 15:

Known attacks

Ingemar Cox and Jean-Paul Linnartz, IEEE JSAC, vol. 16, no. 4, 1998, pp. 587-593
– In our threat model, this must be done in hardware. It must also commute with some pretty heavy processing.

[Diagram: S → A/D → Compress → Decompress → S⁻¹, and S → A/D → VHS Recording → VHS Playback → S⁻¹ (the descrambler must still invert S after compression or analog recording)]

Page 16:

Example

Scanline inversion
– Subset of scanlines flipped in luminance
– Strips of 8/16 lines for better commutativity with compression
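
A minimal sketch of this scrambling step, assuming 8-bit grayscale luminance frames; the strip size and the keyed strip selection are illustrative details, not taken from the talk:

```python
import numpy as np

def scanline_inversion(frame, strip=16, key=0xC0FFEE):
    """Invert the luminance of a keyed subset of scanline strips.

    frame: 2-D uint8 luminance array (H x W). Whole strips of `strip`
    lines are flipped together so that the scrambling roughly commutes
    with 8x8 / 16x16 block-based compression.
    """
    out = frame.copy()
    rng = np.random.default_rng(key)              # keyed strip selection
    flips = rng.integers(0, 2, frame.shape[0] // strip).astype(bool)
    for i in np.nonzero(flips)[0]:
        rows = slice(i * strip, (i + 1) * strip)
        out[rows] = 255 - out[rows]               # luminance inversion
    return out

# Inversion is an involution: the display device descrambles by
# calling scanline_inversion again with the same key.
```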

Question: what watermarking schemes are immune to this attack?

Page 17:

Vulnerabilities in Digital Domain

– Direct removal/damaging of watermarks
– Oracle attacks
– Reverse-engineering of mark parameters
– Collusion attacks
– Mismatch attacks

Page 18:

Direct Removal

The space of possible attacks is too large to model.

Direct removal attacks can be unguided, or based on information about the watermark embedding method.

Attacks often exploit flaws in our perceptual models.

Page 19:

Oracle Attacks

Use watermark detector itself to guide removal process.

Easiest if attackers have their own watermark detector software.

Other attacks are probably more convenient.

If you can force an attacker to resort to an oracle attack using your own hardware, you’re probably doing something right.
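
As an illustration of the first stage of such an attack, here is a sketch (not from the talk) that bisects toward the detector's decision boundary, assuming only black-box access detect(img) -> bool. Probing around the returned point reveals the detector's local gradient, which for a linear-correlation detector is essentially the watermark itself:

```python
import numpy as np

def boundary_bisect(marked, unmarked, detect, iters=40):
    """Bisect between a watermarked image and any image the detector
    rejects (e.g. a heavily blurred copy), landing just on the
    non-detecting side of the decision boundary."""
    lo, hi = marked.astype(float), unmarked.astype(float)
    assert detect(lo) and not detect(hi)
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if detect(mid):
            lo = mid        # still detected: move toward 'unmarked'
        else:
            hi = mid        # no longer detected: tighten the bracket
    return hi               # a barely non-detecting image
```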

Page 20:

Mismatch attacks

No need to remove watermark, just render it unnoticeable by automated detectors.

Two kinds: subtle warping, and blatant scrambling (which must be inverted later).

Watermarks for automated detection need to be super-robust: not only must the information survive, but it must remain detectable by a known algorithm.
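
A sketch of the “subtle warping” flavor, in the spirit of StirMark-style desynchronization; the smooth random displacement field and its parameters are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def subtle_warp(img, strength=1.5, smooth=20.0, seed=0):
    """Apply a small, smooth random displacement field to a 2-D image.

    The warp is nearly invisible but destroys the sample alignment a
    known, automated correlation detector relies on: the mark is not
    removed, only desynchronized.
    """
    rng = np.random.default_rng(seed)
    h, w = img.shape
    dy = gaussian_filter(rng.standard_normal((h, w)), smooth)
    dx = gaussian_filter(rng.standard_normal((h, w)), smooth)
    dy *= strength / (np.abs(dy).max() + 1e-12)   # peak shift in pixels
    dx *= strength / (np.abs(dx).max() + 1e-12)
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    return map_coordinates(img, [yy + dy, xx + dx], order=1, mode="reflect")
```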

Page 21:

Estimation of mark parameters

Robust watermarking leaves fingerprints. Super-robust marks leave big fingerprints.

Auto-collusion attacks take advantage of temporal redundancy [Boeuf & Stern ’01] (sketched below)

Histogram analysis [Maes ’98], [Fridrich et al. ’02], [Westfeld & Pfitzmann ’00]
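
A sketch of the auto-collusion idea under illustrative assumptions (a static additive mark repeated in every frame, content that averages to something smooth); this paraphrases the temporal-redundancy observation rather than the exact Boeuf-Stern procedure:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def autocollusion_estimate(frames, sigma=3.0):
    """Estimate a static additive mark from temporal redundancy.

    Averaging many marked frames keeps the fixed mark w at full
    strength while the changing content blurs out; a high-pass
    residual of the average then serves as an estimate of w.
    """
    avg = np.mean(np.asarray(frames, dtype=float), axis=0)
    return avg - gaussian_filter(avg, sigma)   # high-pass residual ~ w

# Attack: subtract the estimate from every frame.
# cleaned = [f - autocollusion_estimate(frames) for f in frames]
```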

Page 22:

Histogram Analysis

An additive signature leaves statistical “fingerprints” in the sample histogram.

Additive watermark ⇒ convolution of histograms:
y(t) = x(t) + w(t)
h_y = h_x * h_w

Page 23:

Histogram Analysis

Transform the statistical effect of the watermark into an additive signal:
– h_y = h_x * h_w
– g_y(ω) = log(FFT[h_y])
– g_y(ω) = g_x(ω) + g_w(ω)

Can detect by correlation

[Harmsen & Pearlman, Proc. SPIE 2003]
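
A small numeric check of the convolution-to-addition step, using an illustrative cover histogram and the {−1, 0, +1} mark model of the next slide (p = 0.4 here); the small constant guards the logarithm against zeros:

```python
import numpy as np

# Illustrative cover histogram (Laplacian bump) and watermark pmf
# for an additive {-1, 0, +1} mark with P(s = +/-1) = p/2, p = 0.4.
h_x = np.exp(-np.abs(np.arange(256) - 128) / 10.0)
p_w = np.array([0.2, 0.6, 0.2])

# y(t) = x(t) + w(t)  =>  h_y = h_x * p_w  (histogram convolution)
h_y = np.convolve(h_x, p_w)                  # full linear convolution

n = len(h_y)
g = lambda h: np.log(np.abs(np.fft.fft(h, n)) + 1e-15)
# log|FFT| turns the convolution into an addition: g_y = g_x + g_w
print(np.allclose(g(h_y), g(h_x) + g(p_w)))  # -> True
```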

Page 24:

Histogram Analysis

Example: Additive spread spectrum mark
– y(t) = x(t) + s(t), s(t) ∈ {−1, 0, +1}
– G(ω) = (1 − p) + p·cos(2πω/N)
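
That G(ω) is just the FFT of the mark’s histogram, which is easy to verify numerically (p = 0.4 and N = 256 are illustrative choices):

```python
import numpy as np

p, N = 0.4, 256
p_w = np.zeros(N)
p_w[[0, 1, N - 1]] = 1 - p, p / 2, p / 2     # sample values 0, +1, -1 (mod N)
k = np.arange(N)
G = (1 - p) + p * np.cos(2 * np.pi * k / N)
print(np.allclose(np.fft.fft(p_w).real, G))  # -> True
```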

Page 25:

Conclusions

For this watermarking application, the state of the art favors the analyst over the designer.

Most new systems we examine possess elementary flaws.

The scientific community is here to help.