
What we expect from Watermarking

Scott Craver and Bede Liu

Department of Electrical Engineering, Princeton University

“We” who?

“What does breaking the code have to do with research? Research for what? Are you researching cloning, or the laws of physics? We’re not dealing with Boyle’s Law here….”

--Jack Valenti, MPAA

Security research

“Make and Break” research cycle
– “Break” is a terrible misnomer.
– Better systems derived from earlier mistakes.
– In practice, just about everything invented is broken the day it is designed.
– Academic papers on watermarking schemes vastly outnumber academic papers identifying flaws in watermarking schemes.

Security Design Principles

– Have a well-defined goal/threat model
– Be wary of Kerckhoffs’ Criterion
– Utilize community resources
– Economy of final design

Threat model and goals

Modeling the adversary

The Kerckhoffs Criterion

Assume an attacker will know the inner workings of your system: algorithm, source code, etc.

A system that violates this principle relies on an obscurity tactic, i.e. “security through obscurity.”

But why do we have to be so pessimistic?
– Obscurity does not scale.
– Obscurity tactics are not amenable to quantitative analysis.
– In practice, the worst case is pretty common.

Modeling an adversary

The “common case” is hard to define in an adversarial situation.

A technology that fails 1% of the time can be made to fail 100% of the time.

Threat one: distribution of exploit code or circuit plans for circumventing controls.

Threat two: widespread distribution of one clip after one successful circumvention.

Community Resources

If there’s a vulnerability in your system, it has probably already been published in a journal somewhere.

The body of known attacks, flaws and vulnerabilities is large, and the result of years of community effort.

So is the body of successful cryptosystems.

But does it really help to publish the specifications of your own system?

Economy of Solution

A solution’s complexity should match the system’s design goals.

Problem: easing or removing goals that a finished product does not meet.

Nobody needs a solid gold speed bump with laser detectors and guard dogs.

General Observations

People often overestimate security.

Flaws in many published security systems are often either well-known and completely preventable, or completely unavoidable for a given application.

Common mistakes, exploited by attackers:
– Mismodeling the attacker
– Mismodeling the human visual system
– Ignoring application/implementation issues supposedly beyond the scope of the problem

Part II: video steganalysis

Attacks on video watermarking

Problem: attacker digitizes video from analog source, distributes over Internet.

Solution model one: control digitizing.

Solution model two: control playback of digitized video in display devices.

Is there a “common case” in either model?

Known attacks

Ingemar Cox and Jean-Paul Linnartz, IEEE JSAC 1998, vol. 16, no. 4, pp. 587–593
– Pre-scrambling operation prior to detection
– Attack on the type-1 (controlled-digitizing) solution

[Diagram: scrambler S → A/D → watermark detector; the descrambler S⁻¹ is applied after detection]

Known attacks

Ingemar Cox and Jean-Paul Linnartz, IEEE JSAC 1998, vol. 16, no. 4, pp. 587–593
– In our threat model, this must be done in hardware. It must also commute with some pretty heavy processing.

[Diagram: S → A/D → compress → decompress → S⁻¹, or S → A/D → VHS recording → VHS playback → S⁻¹]

Example

Scanline inversion
– Subset of scanlines flipped in luminance
– Strips of 8/16 lines for better commutativity with compression

Question: what watermarking schemes are immune to this attack?
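Below is a minimal sketch of the scrambler itself, in Python with NumPy; the function name, the keyed PRNG for selecting strips, and the parameter choices are illustrative assumptions, not the authors' exact construction. Strips of 8 lines align with 8×8 DCT blocks, which is why the operation roughly commutes with block-based compression.

```python
import numpy as np

def invert_strips(luma, strip=8, key=42):
    """Flip the luminance of a keyed subset of scanline strips."""
    rng = np.random.default_rng(key)        # the key selects which strips flip
    out = luma.astype(np.int16)
    n_strips = luma.shape[0] // strip
    for i in np.nonzero(rng.integers(0, 2, n_strips))[0]:
        rows = slice(i * strip, (i + 1) * strip)
        out[rows] = 255 - out[rows]         # luminance inversion in this strip
    return out.astype(np.uint8)

# The scrambler is its own inverse (S = S^-1): applying it twice with
# the same key restores the frame, so the attacker can descramble
# after the watermark detector has been defeated.
frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
assert np.array_equal(invert_strips(invert_strips(frame)), frame)
```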

Vulnerabilities in Digital Domain

– Direct removal/damaging of watermarks
– Oracle attacks
– Reverse-engineering of mark parameters
– Collusion attacks
– Mismatch attacks

Direct Removal

Space of possible attacks is too large to model.

Direct removal attacks can be unguided, or based on information about the watermark embedding method.

Attacks often exploit flaws in our perceptual models.

Oracle Attacks

Use watermark detector itself to guide removal process.

Easiest if attackers have their own watermark detector software.

Other attacks are probably more convenient.

If you can force an attacker to resort to an oracle attack using your own hardware, you’re probably doing something right.
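To make the idea concrete, here is an illustrative Python sketch of an oracle attack against a toy correlation detector; the detector, the threshold, and the step sizes are all invented for the demonstration, and real sensitivity attacks are considerably more refined.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a marked work and a black-box detector. The attacker sees
# only yes/no answers, never the secret pattern or the threshold.
secret = np.sign(rng.normal(size=4096))            # detector's secret pattern
marked = rng.normal(0.0, 10.0, 4096) + secret      # host + watermark
detector = lambda v: float(np.mean(v * secret)) > 0.5

# Oracle attack: start from a crude unmarked point and creep back
# toward the high-quality marked work, keeping only those moves the
# detector still rejects. The result hugs the detection boundary.
x = np.zeros_like(marked)
assert not detector(x)
for _ in range(500):
    candidate = x + 0.02 * (marked - x) + rng.normal(0.0, 0.1, x.shape)
    if not detector(candidate):                    # one oracle query per step
        x = candidate

print(detector(x))                 # False: the copy is no longer detected
print(np.mean((x - marked) ** 2))  # small vs. signal power: quality preserved
```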

Mismatch attacks

No need to remove watermark, just render it unnoticeable by automated detectors.

Two kinds: subtle warping, and blatant scrambling (which must be inverted later).

Watermarks for automated detection need to be super-robust: not only must the information survive, but it must remain detectable by a known algorithm.

Estimation of mark parameters

Robust watermarking leaves fingerprints.

Super-robust marks leave big fingerprints.

Auto-collusion attacks take advantage of temporal redundancy [Boeuf & Stern `01]

Histogram analysis [Maes `98], [Fridrich et al. `02], [Westfeld & Pfitzmann `00]

Histogram Analysis

An additive signature leaves statistical “fingerprints” in the sample histogram.

Additive watermark ⇒ convolution of histograms:
– y(t) = x(t) + w(t)
– h_y(x) = h_x(x) * h_w(x)
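Below is a minimal numerical check of this convolution property (a sketch, not the authors' code); the 8-bit sample range, the Laplacian cover model, and the ternary mark with p = 0.4 are arbitrary choices for the demonstration.

```python
import numpy as np

# Check that an additive, independent watermark convolves the sample
# histogram: h_y = h_x * h_w (circular convolution over 8-bit values).
rng = np.random.default_rng(0)
N = 256
x = np.clip(rng.laplace(128, 5, 500_000).round(), 0, N - 1).astype(int)
w = rng.choice([-1, 0, 1], p=[0.2, 0.6, 0.2], size=x.size)  # ternary mark
y = (x + w) % N                                             # marked samples

hist = lambda v: np.bincount(v % N, minlength=N) / v.size
hx, hw, hy = hist(x), hist(w), hist(y)

# circular convolution of the two histograms, computed via the FFT
conv = np.real(np.fft.ifft(np.fft.fft(hx) * np.fft.fft(hw)))
print(np.abs(hy - conv).max())  # ~1e-3: matches up to sampling noise
```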

Histogram Analysis

Transform the statistical effect of the watermark into an additive signal:
– h_y(x) = h_x(x) * h_w(x)
– g_y(ω) = log(FFT[h_y(x)])
– g_y(ω) = g_x(ω) + g_w(ω)

Can detect by correlation.

[Harmsen & Pearlman, Proc. SPIE 2003]

Histogram Analysis

Example: additive spread-spectrum mark
– y(t) = x(t) + s(t), s(t) ∈ {−1, 0, +1}
– G(ω) = (1 − p) + p·cos(2πω/N), where p = Pr[s(t) ≠ 0]
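This closed form can be checked numerically, again continuing the same sketch (where the demo mark used p = 0.4): the spectrum of the ternary mark's histogram is real and matches (1 − p) + p·cos(2πω/N).

```python
# The spectrum of the ternary mark's histogram matches the closed form
# G(w) = (1 - p) + p*cos(2*pi*w/N), with p = Pr[s(t) != 0] (p = 0.4
# in the demo above). Uses hw and N from the earlier sketch.
p, k = 0.4, np.arange(N)
G = (1 - p) + p * np.cos(2 * np.pi * k / N)
Hw = np.fft.fft(hw)
print(np.abs(Hw - G).max())  # ~1e-3: matches up to sampling noise
```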

Conclusions

For this watermarking application, the state of the art favors analysis.

Most new systems we examine possess elementary flaws.

The scientific community is here to help.
