
Technology-assisted Review

Hands-on Workshop

Presented by kCura

Caesars Palace, Las Vegas, Nevada

Tuesday, August 20, 2013

Agenda

SESSION ONE

Introduction

Overview

Checklist

Scenarios/Project Goals

Sampling

Round Calculations

Project Creation

Rounds and Reporting (part 1)

Control Sets

Reviewer Decision-making

SESSION TWO

Rounds and Reporting (part 2)

Training Rounds

Pre-coded Sets

Random/Judgmental Sampling

QC Rounds

Meta Round Workflow

Project Stabilization Criteria

Conclusion

Group Quiz

Workbook Conventions

Course Objective

This course is designed to acquaint you with the theory

and processes associated with a technology-assisted

review project.

By the end of this training, you will have learned

project basics including planning, overall workflow,

project creation, reviewer training, round types, and

reporting.

Understanding Concepts

How Categorization Works

Categorization

[Diagram: documents are categorized as either Responsive or Non-responsive.]

Technology-assisted Review is not:

An easy button or an instantaneous, magical solution

A process that can completely replace attorney review

The best solution for all e-discovery projects

Technology-assisted Review is:

A process that augments and amplifies document review with human guidance

A toolset that requires experts to train the system, employing the force multiplier effect

A specialized text analytics categorization workflow, which is validated using statistics

Technology-assisted Review Reference Model

Courtesy of: EDRM.net

Workflow Overview

Goals and Scenarios

Do you -

Plan to review all of the documents, but want to prioritize your most important documents from a time or reviewer standpoint?

Intend to review your responsive population once you’ve stabilized your overturn rate?

Want a quick production after you verify the responsive set?

Wish to locate only the most relevant documents from your opposition’s production?

Want to QC a standard linear review prior to production?

Project Planning Checklist

Technology-assisted Review Case Checklist (mark whether each item meets criteria):

Minimum of 50,000 records with text

Concept-rich files, e.g. emails, Word documents, limited Excel files, or graphics

Will there be issue or privilege coding concurrent with the Assisted Review project?

Is this previously reviewed data? If so, how were families reviewed?

Expected timelines given

Expected process given

Number of reviewers

Level of reviewers

Level of accuracy, precision, recall, F1

Sampling

There are three types of sampling employed in technology-assisted review:

Fixed Sample Size: You enter the number of documents to sample.

Percentage: You enter the percentage of the eligible population to sample.

Statistical: The system calculates the sample size based on the population size, the confidence level, and the margin of error.

Sampling Definitions

In order to understand statistical sampling, it’s important to understand the meaning of confidence level and margin of error:

Confidence level: Percent confidence that another random sample of the same size will produce the same results within the margin of error.

Higher confidence levels require a larger sample size.

Margin of error: The reliability of the estimate. The maximum expected difference between the true value and the value determined through sampling.

A lower margin of error requires a larger sample size.

The system also calculates a margin of error when you select Fixed or Percentage sampling.
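To illustrate how these inputs interact, the sketch below computes a sample size using the standard normal-approximation formula with a finite population correction. This is a generic statistical formula for illustration only; it may not match the Assisted Review sampling calculator exactly, and the 0.5 response proportion is the conservative worst-case assumption.

import math

# Two-tailed z-scores for common confidence levels
Z_SCORES = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}

def sample_size(population, confidence, margin_of_error):
    """Estimate documents to sample (normal approximation, worst-case p = 0.5)."""
    z = Z_SCORES[confidence]
    n0 = (z ** 2) * 0.25 / (margin_of_error ** 2)
    # Finite population correction shrinks the sample for smaller populations
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# 1,000,000 documents, 95% confidence, +/- 2.5% margin of error -> about 1,535
print(sample_size(1_000_000, 0.95, 0.025))

Raising the confidence level or lowering the margin of error both increase the result, consistent with the definitions above.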

Confidence Level

[Chart: raising the confidence level increases the number of documents to review and the cost of review; example domains include auto brakes, e-discovery, and social sciences.]

The Numbers Behind the Statistics

[Chart: at a 95% confidence level, required sample size (roughly 0 to 3,000 documents) plotted against document count for margins of error of +/- 2.0, +/- 2.5, and +/- 5.0.]

Sampling Calculator

Exercise: Calculate Sample Size

Navigate to the ILTA Assisted Review workspace

Select the Searchable Set saved search

Select a confidence level of 95%

Select the margin of error 2.5. What is the sample size?

Select the margin of error 2. What is the sample size?

Change the confidence level to 90%. How does this affect the sample size?

Exercise: Creating an Assisted Review Project

Enter a project name, prefix, description

Select index

Select fields

Select a default sampling type

Turn on auto batching and set the batch size to 50

Round Types

Control Set, Training Round, QC Round

There are three types of Assisted Review rounds:

Control Set: Required to calculate Precision, Recall, & F1. Control Set documents are not submitted as examples, but they are categorized by other seed example documents.

Training Round: Submitted to train the system. System accuracy is not a factor.

QC Round: Performed in order to measure and hone system accuracy. Most rounds are QC rounds.

Control Sets

Also known as “Truth Set” or “Golden Set”

Typically performed at very start of project

Control Set documents are not submitted as seed documents

Tracking how Control Set documents are categorized allows for Precision, Recall, & F1 calculations


Precision, Recall, & F1 Scores

Exercises: Calculate Precision & Recall

Review the workbook tables and calculate Precision and Recall as directed:

F1 = Harmonic Mean (average) of Precision and Recall

Precision = True Positives / (True Positives + False Positives)

Recall = True Positives / (True Positives + False Negatives)

(True Positives + False Negatives = Total Possible Responsive)
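These formulas translate directly into code. A minimal sketch, using the counts from the worked examples later in this workbook (the function names are illustrative, not part of the product):

def precision(tp, fp):
    # Of the documents the system categorized Responsive, how many truly are?
    return tp / (tp + fp)

def recall(tp, fn):
    # Of all truly Responsive documents, how many did the system find?
    return tp / (tp + fn)

def f1(p, r):
    # Harmonic mean of precision and recall
    return 2 * p * r / (p + r)

p = precision(6, 4)   # worked precision example: 6 TP, 4 FP -> 0.60
r = recall(3, 1)      # worked recall example: 3 TP, 1 FN -> 0.75
print(p, r, round(f1(p, r), 3))   # 0.6 0.75 0.667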

Exercise: Calculate Precision (example)

Columns: Document Number, Categorization Value (System Coding Decision), Truth Value (Expert's Coding Decision)

A001 Responsive Responsive

A002 Responsive Responsive

A003 Responsive Responsive

A004 Responsive Not Responsive

A005 Responsive Responsive

A006 Responsive Not Responsive

A007 Responsive Not Responsive

A008 Responsive Responsive

A009 Responsive Not Responsive

A010 Responsive Responsive

Precision = True Positives / (True Positives + False Positives)

Precision = 6 True Positives / (6 True Positives + 4 False Positives) = 6/10 = 60%
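The same 60% result can be reproduced by counting agreements and disagreements straight from the table above; a small sketch ("R"/"NR" stand in for Responsive/Not Responsive, listed in the order A001-A010):

# (system decision, expert decision) for A001-A010
pairs = [("R", "R"), ("R", "R"), ("R", "R"), ("R", "NR"), ("R", "R"),
         ("R", "NR"), ("R", "NR"), ("R", "R"), ("R", "NR"), ("R", "R")]

tp = sum(1 for system, expert in pairs if system == "R" and expert == "R")
fp = sum(1 for system, expert in pairs if system == "R" and expert == "NR")
print(tp, fp, tp / (tp + fp))   # 6 4 0.6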

Exercise: Calculate Precision

Columns: Document Number, Categorization Value (System Coding Decision), Truth Value (Expert's Coding Decision)

A011 Responsive Responsive

A012 Responsive Responsive

A013 Responsive Responsive

A014 Responsive Responsive

A015 Responsive Responsive

A016 Responsive Responsive

A017 Responsive Responsive

A018 Responsive Responsive

A019 Responsive Not Responsive

A020 Responsive Responsive

Precision = True Positives / (True Positives + False Positives)

Precision = 9 True Positives / (9 True Positives + 1 False Positive) = 9/10 = 90%

Exercise: Calculate Recall (example)

Columns: Document Number, Categorization Value (System Coding Decision), Truth Value (Expert's Coding Decision)

B001 Responsive Responsive

B002 Not Responsive Responsive

B003 Responsive Not Responsive

B004 Responsive Not Responsive

B005 Not Responsive Not Responsive

B006 Responsive Not Responsive

B007 Not Responsive Not Responsive

B008 Responsive Responsive

B009 Not Responsive Not Responsive

B010 Responsive Responsive

Recall = True Positives / (True Positives + False Negatives)

Recall = 3 True Positives / (3 True Positives + 1 False Negative) = 3/4 = 75%

(True Positives + False Negatives = Total Possible Responsive)
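Likewise, the 75% recall can be checked programmatically from documents B001-B010 (same "R"/"NR" shorthand; a false negative is a document the expert coded Responsive but the system categorized Not Responsive):

# (system decision, expert decision) for B001-B010
pairs = [("R", "R"), ("NR", "R"), ("R", "NR"), ("R", "NR"), ("NR", "NR"),
         ("R", "NR"), ("NR", "NR"), ("R", "R"), ("NR", "NR"), ("R", "R")]

tp = sum(1 for system, expert in pairs if system == "R" and expert == "R")
fn = sum(1 for system, expert in pairs if system == "NR" and expert == "R")
print(tp, fn, tp / (tp + fn))   # 3 1 0.75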

Exercise: Calculate Recall

Columns: Document Number, Categorization Value (System Coding Decision), Truth Value (Expert Coding Decision)

B011 Not Responsive Not Responsive

B012 Not Responsive Not Responsive

B013 Responsive Responsive

B014 Not Responsive Not Responsive

B015 Not Responsive Responsive

B016 Not Responsive Responsive

B017 Not Responsive Not Responsive

B018 Not Responsive Responsive

B019 Responsive Responsive

B020 Not Responsive Not Responsive

Recall = 2 True Positives / (2 True Positives + 3 False Negatives) = 2/5 = 40%

Control Set Statistics

Precision = True Positives / (True Positives + False Positives)

Recall = True Positives / (True Positives + False Negatives)

Exercise: Create and Code a Control Set

Create a new round of type Control Set

Each member of your team should code 10 documents as either Responsive or Not Responsive

Do not check the Use as Example box for Control Set documents

When you are done coding, go back to your project console and click the Finish Round button

Reviewer Decision-making

Decision-making is a crucial aspect of a technology-assisted review workflow. Reviewers must understand how to:

Select good example documents

Recognize that the quality of an example is independent of responsiveness; it is a second dimension

Apply the Use as Example field

Use the Excerpt Text feature

Sufficient Text

All machine learning is derived from a document’s extracted text.

In order for a document to be considered a good example for machine learning, it must contain a sufficient quantity of text to train the system.

Some documents may be highly responsive, yet undesirable as example documents for a technology-assisted review project.

Assisted Review’s text analytics engine learns from concepts, rather than individual words or short phrases.

Email Headers and Repeated Content

Images

Spreadsheet – Bad Example

Spreadsheet – Good Example

Families and the Four Corners Test

The following scenarios violate the Four Corners Test and will not yield good example documents:

The document is conceptually empty, but is a family member of another document which is substantively Responsive.

The document comes from a Custodian whose documents are presumptively Responsive.

The document was created within a date range which is presumptively Responsive.

The document comes from a location or repository where documents are typically Responsive.

Carve-outs: False Negative Examples

Applying the Use as Example Field

Using the Excerpt Text Feature

Excerpt Text Caveat


Coding Tips


Consistency is crucial.

Double check the extracted text.

Stick to the recommended workflow; when in doubt, ask.

Do not submit Control Set documents as examples.

End of Session 1

Questions?

Agenda

SESSION ONE

Introduction

Overview

Checklist

Scenarios/Project Goals

Sampling

Round Calculations

Project Creation

Rounds and Reporting (part 1)

Control Sets

Reviewer Decision-making

SESSION TWO

Rounds and Reporting (part 2)

Training Rounds

Pre-coded Sets

Random/Judgmental Sampling

QC Rounds

Meta Round Workflow

Project Stabilization Criteria

Conclusion

Group Quiz

Training Rounds

Goal is to categorize as many documents as possible; not concerned with system accuracy

Performed near start of project, or when documents with new concepts are added

Two types of training rounds: Random Sample and Judgmental Sample (a targeted search for documents likely to be good for training)

Pre-coded seed sets are an option for documents which have already been reviewed


Exercise: Pre-Coded Seed Set

Mass Edit all Hot-Gas related documents as Responsive.

Create a new round of type Pre-coded Seed Set.

Go back to your project console and click the Finish Round button. Select "yes" for Categorize and "no" for Save Results.

Round Summary Report: Categorization Results

Round Summary Report: Categorization Volatility

Rank Distribution Report

Exercise: Analyze End-of-round Reports

Navigate to your project console

Open and examine the Round Summary Report

How many documents were categorized Responsive?

How many were categorized Not Responsive?

How many were not categorized?

Scroll down to the Project Volatility Report

Were there any major categorization shifts?

Are more training rounds necessary?

Open and examine the Rank Distribution Report

Control Set Statistics Reports

QC Rounds

Also known as Validation Rounds. Used to measure and hone system accuracy.

Most rounds in a technology-assisted review project will be QC rounds.

Typically, there are three types of QC rounds:

Combined Responsive & Not Responsive (all categorized documents)

Responsive only

Not Responsive only


When to Perform a Combined Responsive/Not Responsive QC Round

At the beginning and end of projects

As a de facto option when not sure how to proceed

If calculating Precision & Recall without a Control Set

When to Perform a Not Responsive QC Round

If previous combined rounds show a high responsive overturn rate and a fairly low non-responsive overturn rate.

Stabilize only on the non-responsive category, which is typically the quicker of the two to stabilize.

Batch out the documents categorized as responsive. The batch might still contain some non-responsive documents, but these will be weeded out during review.

If High Recall is the production priority.

Example: A government investigation where it’s okay to produce some non-responsive documents.

When to Perform a Responsive QC Round

If there's a very low responsiveness rate, and you're concerned about getting enough good responsive examples

If High Precision is the production priority

Example: A civil litigation between competitors where you don't want to give the opposition free information that is not related to the case.

Exercise: Create and Code a QC Round

Click Start Round on the project console

Create a new round of type QC Round

Each team member should code 10 documents as either Responsive or Not Responsive

Do not click Finish Round

Overturn Rate Calculation

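As a rough sketch of the idea behind this calculation (not Relativity's exact report logic), an overturn rate can be thought of as the share of manually reviewed QC documents whose human coding disagreed with the system's categorization, tracked separately for Responsive and Non-responsive documents:

def overturn_rate(overturned, reviewed):
    # Share of QC-round documents whose reviewer coding disagreed with
    # ("overturned") the system's categorization value.
    return overturned / reviewed

# Hypothetical round: 8 of 150 Responsive and 4 of 150 Non-responsive documents overturned
print(overturn_rate(8, 150), overturn_rate(4, 150))   # ~0.053 and ~0.027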

Coding Criteria

Responsive Criteria:

Black cards: 2-5

Red cards: 9-10

Face cards: Only Queens

No diamonds can be Responsive

Meta Rounds

Not a true round, but rather a fine-tuning workflow performed at the end of QC rounds

Goal is to make necessary course corrections:

Identify and correct coding inconsistencies

Identify and remove bad seed documents

Overturn Summary Report

Overturned Documents Report

Meta Round Document Level Analysis

Exercise: Perform a Meta Round

Click and open the Overturn Summary Report on your project console

Click and open the Overturned Documents Report on your console

Filter for your highest ranking overturns

Run a pivot to identify and filter on your most influential seed document

Click on the seed document link and compare it to an overturned document by using the Overturn Analysis pane

Return to your project console and click Finish Round. Categorize but do not save results

Exercise: Final Report Analysis

Navigate to your project console

Locate and analyze the values found in:

Round Summary Report

Rank Distribution Report

Control Set Statistics Report

Based on your findings, what would your next round be?

How to Determine When the Project is Done

There are three ways of measuring and determining stabilization:

Precision/Recall/F1 (the closer to 100, the better)

Overturn Percentage (the closer to 0, the better)

Volatility (caveat: not a true measure of system accuracy)

“Acceptable” values are highly subjective and situational

Proportionality is often a legitimate determining factor
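Because acceptable values are subjective, one practical approach is to compare each metric against thresholds agreed on for the matter. The sketch below uses purely hypothetical thresholds for illustration:

def is_stabilized(precision, recall, f1, overturn_rate,
                  min_precision=0.80, min_recall=0.75,
                  min_f1=0.75, max_overturn=0.05):
    # All thresholds here are hypothetical examples, not recommendations
    return (precision >= min_precision and recall >= min_recall
            and f1 >= min_f1 and overturn_rate <= max_overturn)

print(is_stabilized(precision=0.86, recall=0.78, f1=0.82, overturn_rate=0.04))  # True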

The Project has Stabilized. What Comes Next?

Manual review of all the documents categorized as Responsive (unless doing a quick production)

Manual review of all uncategorized documents

Issue Coding

Redactions

Privilege Screening

Deposition preparation

Group Quiz

1. Who should be reviewing documents in a technology-assisted review project?

2. Which round type is used to calculate Precision, Recall, & F1?

3. Which statistical sampling settings are best suited for technology-assisted review?

4. Which types of documents make poor seed examples?

5. Raising the confidence level will (increase or decrease) the size of a sample set?

6. Raising the margin of error will (increase or decrease) the size of a sample set?

Group Quiz

7. Control Set documents are not submitted as ___________?

8. All machine learning is derived from ___________?

9. Name a form of judgmental sampling. What is it used for?

10. Volatility is the measurement of what?

11. Precision, Recall, and F1 values should (increase or decrease) as a project moves towards completion?

12. Overturn rates should (increase or decrease) as a project moves towards completion?

Thanks for attending!

Questions?