TRANSCRIPT
Licensing Artificial Intelligence Systems: Rights of Licensor and Licensee, Liability for IP Infringement by AI, Rights to Product of AI System
Presenting a live 90-minute webinar with interactive Q&A
WEDNESDAY, MARCH 27, 2019
1pm Eastern | 12pm Central | 11am Mountain | 10am Pacific
Today’s faculty features:
Heiko E. Burow, Of Counsel, Baker McKenzie, Dallas
Samuel Jo, Counsel, Perkins Coie, Seattle
Robert W. (Bob) Kantner, Partner, Jones Day, Dallas
The audio portion of the conference may be accessed via the telephone or by using your computer's
speakers. Please refer to the instructions emailed to registrants for additional information. If you
have any questions, please contact Customer Service at 1-800-926-7926 ext. 1.
Tips for Optimal Quality
Sound Quality
If you are listening via your computer speakers, please note that the quality
of your sound will vary depending on the speed and quality of your internet
connection.
If the sound quality is not satisfactory, you may listen via the phone: dial
1-866-570-7602 and enter your PIN when prompted. Otherwise, please
send us a chat or e-mail [email protected] immediately so we can address
the problem.
If you dialed in and have any difficulties during the call, press *0 for assistance.
Viewing Quality
To maximize your screen, press the F11 key on your keyboard. To exit full screen,
press the F11 key again.
FOR LIVE EVENT ONLY
Continuing Education Credits
In order for us to process your continuing education credit, you must confirm your
participation in this webinar by completing and submitting the Attendance
Affirmation/Evaluation after the webinar.
A link to the Attendance Affirmation/Evaluation will be in the thank you email
that you will receive immediately following the program.
For additional information about continuing education, call us at 1-800-926-7926
ext. 2.
Program Materials
If you have not printed the conference materials for this program, please
complete the following steps:
• Click on the ^ symbol next to “Conference Materials” in the middle of the left-
hand column on your screen.
• Click on the tab labeled “Handouts” that appears, and there you will see a
PDF of the slides for today's program.
• Double click on the PDF and a separate page will open.
• Print the slides by clicking on the printer icon.
Perkins Coie LLP
Licensing Artificial Intelligence Systems
Strafford Webinars
March 27, 2019
Sam Jo, Counsel
Technology Transactions & Privacy
Perkins Coie LLP | PerkinsCoie.com
Webinar Overview
6
• AI Product Development and Inbound Licensing – Sam Jo
• Commercial Licensing of AI – Heiko Burow
• AI: Enforcement and Litigation Issues – Bob Kantner
• Questions - feel free to ask throughout
Goal: Understanding AI from a product development and licensee perspective.
Conceptual Depiction
7
AI = simulation of human
intelligence in machines.
Machine Learning = subset of
AI involving a system that
learns from data without rules-
based programming.
[Diagram: overlapping circles labeled Robotics, ML, and AI, with ML shown as a subset of AI.]
Machine Learning vs. Traditional Software
8
Key Distinctions
• Traditional software requires hand-coding with specific instructions to
complete a task.
• An ML system learns to recognize patterns and make predictions
using large amounts of data.
Spam Example:
• Old way: “if the email contains the word ‘Viagra,’ then…”
• New way: ML system learns from training data to identify if it is
spam.
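The rules-vs.-learning contrast can be sketched in a few lines of Python. The training emails and word-count scoring below are invented for illustration and are nothing like a production spam filter:

```python
from collections import Counter

# Old way: a hand-coded rule with a specific instruction.
def rule_based_is_spam(email: str) -> bool:
    return "viagra" in email.lower()

# New way: "learn" word statistics from labeled training data (toy, invented data).
spam_train = ["cheap viagra now", "win money now", "cheap money offer"]
ham_train = ["meeting at noon", "project status update", "lunch at noon"]

spam_counts = Counter(w for e in spam_train for w in e.split())
ham_counts = Counter(w for e in ham_train for w in e.split())

def learned_is_spam(email: str) -> bool:
    # Score each word by how often it appeared in spam vs. ham training data.
    words = email.lower().split()
    return sum(spam_counts[w] for w in words) > sum(ham_counts[w] for w in words)

print(rule_based_is_spam("Buy viagra today"))    # True: the keyword rule fires
print(learned_is_spam("cheap offer win money"))  # True: no hand-coded rule needed
```

The rule fires only on its hard-coded keyword; the learned scorer flags messages it was never explicitly told about, because the pattern came from the training data.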
How are ML Models Developed?
9
[Diagram of the ML training pipeline:]
• Data Input – e.g., email data features as training data
• Identify Features – e.g., subject, country, time sent
• Output (Truth) – emails marked as spam
• Learner (Algorithm) – produces the Machine Model
• Parameters – distilled learnings
• Prediction – is the email spam?
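The pipeline on this slide can be sketched end to end. The features, toy data, and perceptron-style learner below are illustrative assumptions, not any particular vendor's method:

```python
# 1. Data input: feature vectors for emails plus the truth labels (1 = spam).
#    Invented features: [mentions money, sent at odd hour, unknown sender]
training_data = [
    ([1, 1, 1], 1),
    ([1, 0, 1], 1),
    ([0, 0, 0], 0),
    ([0, 1, 0], 0),
]

# 4. Prediction: apply the model's parameters to an email's features.
def predict(weights, bias, features):
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return 1 if score > 0 else 0

# 2. Learner (algorithm): nudge the parameters after every wrong prediction.
def learn(data, epochs=20, lr=0.5):
    weights, bias = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for features, truth in data:
            error = truth - predict(weights, bias, features)
            weights = [w + lr * error * f for w, f in zip(weights, features)]
            bias += lr * error
    return weights, bias

# 3. Parameters: the "distilled learnings" that constitute the model.
weights, bias = learn(training_data)
print(predict(weights, bias, [1, 0, 1]))  # 1: classified as spam
```

Note that the licensable artifacts differ: the learner is generic code, while the weights are distilled from (and shaped by) whoever supplied the training data.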
Risks of ML Development/Licensing
10
• Not all problems are ML problems (i.e., ML is not always the
right solution).
• Bad parameters, incorrect learnings, data bias.
• Development of ML models is usually an iterative process.
• The outcome/performance of a new ML model generally
cannot be accurately predicted before it is built.
• Some ML development efforts fail.
• Recommendation: Start with a proof of concept.
Licensing Considerations
11
• Algorithms vs. Models vs. “AI Software” vs. Output Data
• Defining the license scope – configurations, modifications, retraining, etc.
• Define the computing environment – who, what, where.
• Output can be data (e.g., outcome – true or false), but also training
parameters.
• Address rights to learnings/algorithmic and parameter optimizations,
output, etc.
• Understand “Licenses” (and “Ownership”) vis-à-vis IP Laws
• What do you technically need a license to? Are “use rights” sufficient?
• Alternatives – covenant not to sue, acknowledgement of lack of IP rights,
etc.
• Consider International Differences
Ownership – of what?
12
• ML Algorithms
• Model drift and retraining, modification, trained vs. untrained algorithms.
• Understand what is proprietary to the licensor.
• ML Models
• Almost invariably remain with licensors.
• But … understand what the “AI model” is to understand what it is you’re
addressing.
• Data and Other Output
• Output Examples – weightings, classifiers, taxonomies, ontologies,
configurations.
• Ownership of derivative works of output.
Economic Models + Considerations
13
• Vendors are looking for a long-term revenue model.
• Model drift and retraining
• Model Modification
• Beware the “free” proof of concept.
• Consider benefits to vendor and impact on licensee
• Be cognizant of a vendor’s rights to learnings/algorithmic optimizations,
output, etc.
• Carefully consider entering into “gain sharing” arrangements.
• Increased scale = increased costs/fees
• Accounting and administrative tracking efforts can be costly
Termination/Transition
14
• Define Key Terms
• Model components and format, compute environment, documentation, etc.
• Return and Destruction
• Understand how the AI will be integrated to comply with wind-down.
• Think through overlap with confidentiality.
• Residuals
• Residuals clauses and surviving rights.
• Rights to learnings/algorithmic and parameter optimizations, output, etc.
Perkins Coie LLP
Data Ingestion and Licensing
Criticality of Training Data
16
• Biased Datasets = Biased
Models
• Bias may be completely
unintentional
• Bias may be introduced by the
data scientist, or inherent in
data
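A toy sketch of how bias inherent in the data carries into a model, even with no ill intent anywhere in the pipeline. The historical hiring data below is invented:

```python
from collections import defaultdict

# Invented historical hiring decisions, skewed against group "B".
history = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),  # group A: 75% approved
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),  # group B: 25% approved
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in history:
    totals[group] += 1
    approvals[group] += approved

def predicted_approval_rate(group):
    # The "learned" parameter is just the historical rate -- bias included.
    return approvals[group] / totals[group]

print(predicted_approval_rate("A"))  # 0.75
print(predicted_approval_rate("B"))  # 0.25
```

A model trained to match history faithfully reproduces history's skew; nothing in the training step corrects for it unless someone deliberately intervenes.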
Licensing Datasets (and Content)
17
• Online Dataset Ingestion Issues
• Often governed by some terms (e.g., Creative Commons, copyleft/open source licenses).
• Some countries also offer a thin layer of copyright (or sui generis database-right)
protection for databases.
• Scraping vs. crawling - be cautious of fair use arguments.
• Understand the License Scope and Restrictions
• “Non-commercial” vs. “commercial” use.
• Consider ownership of and restrictions to model output (e.g., derivative
data).
• Understand your AI product and how it is to be implemented/launched.
• Ensure Licensor has Sufficient Rights
• Include appropriate warranties and indemnities.
Thanks for your time!
18
Questions?
Sam Jo
Seattle
206.359.6123
Agenda
1 Traditional Software and AI
2 License Grants
3 Services
4 Fee Structure
5 Ownership
6 Warranties
7 Indemnity
8 Liability
“This is the voice of world control. I bring
you peace. It may be the peace of plenty
and content or the peace of unburied
death. The choice is yours: Obey me
and live, or disobey and die. … We can
coexist, but only on my terms. You will
say you lose your freedom. Freedom is
an illusion. All you lose is the emotion of
pride. To be dominated by me is not as
bad for humankind as to be dominated
by others of your species. Your choice is
simple.”
Colossus, from “Colossus: The Forbin Project”
(1970, dir. Joseph Sargent)
© 2019 Baker & McKenzie LLP
Traditional Software vs. Artificial Intelligence
21
Traditional Software
▪ Known capabilities
▪ Defined purpose and functions
▪ Controllable scope
Artificial Intelligence
▪ Known and unknown capabilities
▪ Individualized purpose and functions
▪ Potential for unknown or unpredictable output
▪ Fear of the unknown
→ Disruption of License Structures
License Grant
22
Traditional Software
▪ Defined scope:
▪ identified software specifications
▪ identified intended functionality
▪ designated interoperability parameters
▪ Limitations on use controllable through internal and external means
Artificial Intelligence
▪ Purposeful flexibility: specifications and intended functions provide framework for machine learning
▪ less control over functionality and operations
▪ self-actualization (self-reflection?)
Services
23
Traditional Software
▪ Maintenance and support
▪ Defined service levels
▪ Updates and enhancements standard or as added value
Artificial Intelligence
▪ Self-improvement and self-maintenance
▪ Less control over service levels
▪ Updates and enhancements as embedded functionality
▪ Loss of value proposition of updates and enhancements
Fee Structure
24
Traditional Software
▪ License fee
▪ Licensing model with focus on maintenance and support fees
▪ Linear fee models (e.g., annual maintenance)
▪ Long-term maintenance agreements
Artificial Intelligence
▪ Potentially reduced maintenance and support needs:
▪ loss of updates and enhancements as added value
▪ diminished need for long-term maintenance and support
▪ Non-linear fee models
▪ Alternative service offering: analytics and consulting
Ownership
25
Traditional Software
▪ General reservation of ownership and rights:
▪ licensed software
▪ derivative works and improvements
▪ Possibly ownership of customer-specific customizations
Artificial Intelligence
▪ Output: not mere data but intellectual property (derivative works and improvements) based on licensee data input
▪ Do licensees expect ownership?
▪ Should licensor want to own the output (risks vs. benefits of ownership)?
▪ Split ownership: licensee-specific output vs. generally applicable output (e.g., self-improvements, further developments) – but how can they be distinguished?
Data Privacy
26
Traditional Software
▪ Segregating user data from licensor data
▪ reduce data privacy exposure
▪ control over data privacy compliance
Artificial Intelligence
▪ Licensee data driven
▪ Can AI distinguish between PII and anonymized data?
▪ Accidental data leaks or licensor data contamination
Warranties
27
Traditional Software
▪ Short product warranty
▪ Broad warranty disclaimer
▪ No warranty of effectiveness and usefulness
Artificial Intelligence
▪ Licensees may request extended product warranties:
▪ warranties of functionality
▪ warranties of usefulness
▪ Licensees may seek warranty protection against risks from use of AI: reliance on AI functionality and warranties against harmful consequences:
▪ product liability
▪ AI malfunctions
▪ violation of law or third party rights
Indemnities
28
Traditional Software
▪ Narrow infringement indemnity:
▪ third party claims arising from software as delivered
▪ exclusion of combination claims or claims arising from customizations or failure to update (for installed software)
Artificial Intelligence
▪ Risk allocation of unknown or unpredictable output
▪ Licensees may demand broader indemnities:
▪ third party infringement claims arising from output
▪ no or limited exclusions regarding combinations and customizations
▪ claims from AI functionality failures, e.g., product liability indemnity and loss of business
Liability
29
Traditional Software
▪ Exclusion of consequential and indirect damages
▪ Limitation of direct damages (liability cap)
▪ Limitation even for potential high liability risk (e.g., data loss, corruption, or leaks)
Artificial Intelligence
▪ Unknown output results = unpredictable risk profile and higher risk of consequential damages: no exclusion or limited exclusion of consequential damages?
▪ No cap or high cap on direct damages?
▪ Exclusion of specific risk areas from exclusion and limitation of liability
Artificial Intelligence: Enforcement and Litigation Issues
Robert W. Kantner
Jones Day Dallas
214-969-3737
Regulatory and Liability Challenges for Artificial Intelligence:
• Regulatory Issues: Transparency and Accountability
• Product Liability Issues
• Labor and Employment Issues
• Data Breach Liabilities
• IP Enforcement
• Self-Regulation: Will Best Practices Become Standards of Care?
31
Regulatory Issues - Introduction
Isaac Asimov once proposed three laws of robotics:
• A robot may not injure a human being . . .
• A robot must obey the orders given it by human beings, except when such
orders would conflict with the previous law.
• A robot must protect its own existence as long as such protection does not
conflict with the previous two laws.
32
Regulatory Issues – Possible Regulation
Oren Etzioni, Chief Executive of the Allen Institute for Artificial Intelligence, has
proposed “Three Rules for A.I.”:
• An A.I. system must be subject to the full gamut of laws that apply to its
human operators.
• An A.I. system must clearly disclose that it is not human.
• An A.I. system cannot retain or disclose confidential information without
explicit approval from the source of information.
33
Technical Challenges to Regulation
House of Commons Science and Technology Committee 2016 Report on Robotics and
Artificial Intelligence explained the need to ensure AI operates as intended.
According to the Association for the Advancement of Artificial Intelligence (Menlo Park,
CA):
It is critical that one should be able to prove, test, measure and validate the reliability,
performance, safety and ethical compliance–both logically and
statistically/probabilistically – of such robotics and artificial intelligence systems before
they are deployed.
Similarly, Professor Stephen Muggleton, Professor of Machine Learning at Imperial
College, London, saw a pressing need:
To ensure that we can develop a methodology by which testing can be done and the
systems can be retrained, if they are machine learning systems, by identifying
precisely where the element of failure was.
But the verification and validation of autonomous systems is “extremely challenging” since
they are increasingly designed to learn, adapt and self-improve during their deployment.
Traditional methods of software verification cannot extend to these situations.
34
Technical Challenges to Regulation
The House of Commons Report posed the challenge:
It is currently rare for AI systems to be set up to provide a reason for reaching a particular
decision. For example, when Google DeepMind’s AlphaGo played Lee Sedol in March 2016,
the machine was able to beat its human opponent in one match by playing a highly unusual
move that prompted match commentators to assume that AlphaGo had malfunctioned. AlphaGo
cannot express why it made this move and, at present, humans cannot fully understand or
unpick its rationale. As Dr. Owen Cotton-Barratt from the Future of Humanity Institute reflected,
we do not “really know how the machine was better than the best human Go player.”
. . . .
Part of the problem [is] that researchers’ efforts [have] previously been focused on achieving
slightly better performance on well-defined problems, such as the classification of images or the
translation of text while the “interpretation of the algorithms that were produced to achieve those
goals had been left as a secondary goal.”
35
Regulatory Issues – Technical Challenges
36
[Slides 36–39: a series of neural-network diagrams showing input nodes, performance (hidden) nodes, and output nodes. The network initially maps the input “DOG” to an incorrect output (“DOX”); as the node weights change over successive training passes, it learns to produce the correct output (“DOG”).]
Possible Regulation
A.I. / A.I. devices must have an impregnable kill switch
Google DeepMind was reported in June 2016 to be working with academics at the
University of Oxford to develop a ‘kill switch’; code that would ensure an AI system
could be repeatedly and safely interrupted by human overseers without [the
system] learning how to avoid or manipulate these interventions.
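A heavily simplified caricature of the reported idea: the overseer's interruption is applied outside the agent's learning signal, so interrupted steps never reach the learner and the policy has nothing to adapt against. The trivial agent and all names below are illustrative, not DeepMind's actual method:

```python
import random

class Agent:
    def __init__(self):
        self.experience = []  # the only data the learner ever sees

    def act(self):
        # A stand-in policy: pick an action at random.
        return random.choice(["left", "right"])

    def learn(self, action, reward):
        # Interrupted steps never reach this method, so the policy
        # cannot adapt to (or against) the overseer's interventions.
        self.experience.append((action, reward))

def run(agent, steps, interrupted_steps):
    for step in range(steps):
        action = agent.act()
        if step in interrupted_steps:
            continue  # human override: halt the step, record no update
        agent.learn(action, reward=1.0)

agent = Agent()
run(agent, steps=10, interrupted_steps={3, 7})
print(len(agent.experience))  # 8: two of ten steps were interrupted
```

The design point is where the `continue` sits: the kill switch acts on the control loop, not inside the reward signal, so the agent never "sees" (and so cannot learn to avoid) the interruption.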
40
Possible Regulation – an Explanation
• Harvard University Berkman Klein Center Working Group on Explanation and
the Law says:
• A.I. / A.I. devices should give the reasons or justifications for a particular
outcome, but not a description of the decision-making procedures.
• Also, we need to know whether changing a factor would change the
decision.
• This explanation should be given whenever a human (or corporation) would
have to give an explanation.
• But what about the weighing of factors? Judgment?
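The Working Group's second point, whether changing a factor would change the decision, can be tested mechanically when the decision function is available to probe. The credit rule, factor names, and probe strategy below are invented for illustration:

```python
def decision(applicant):
    # A toy, fully transparent credit rule (invented).
    score = applicant["income"] / 10_000 + (5 if applicant["homeowner"] else 0)
    return "approve" if score >= 10 else "deny"

def pivotal_factors(applicant):
    """Return the factors whose change alone would flip the decision."""
    baseline = decision(applicant)
    pivotal = []
    for factor, value in applicant.items():
        alt = dict(applicant)
        # Flip booleans; for numbers, probe a doubled value (an arbitrary choice).
        alt[factor] = (not value) if isinstance(value, bool) else value * 2
        if decision(alt) != baseline:
            pivotal.append(factor)
    return pivotal

applicant = {"income": 60_000, "homeowner": False}
print(decision(applicant))         # deny
print(pivotal_factors(applicant))  # ['income', 'homeowner']
```

This gives the counterfactual part of an explanation (which factors were pivotal) without describing the decision-making procedure itself, which is the distinction the Working Group draws; it does not answer the harder question of how factors were weighed.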
41
Regulatory Issues – GDPR
• Automated Decision-Making
Article 22(1) states: “The data subject shall have the right not to be
subject to a decision based solely on automated processing, including
profiling, which produces legal effects concerning him or her or similarly
significantly affects him or her”. In draft guidelines, the Article 29 Working Party
(WP29) has stated that there is a prohibition on fully automated individual
decision-making, including profiling that has a legal or similarly significant effect.
If that view is correct, such processing would have to be justified on one
of three bases set out as exceptions under Article 22(2), namely: performance
of a contract, authorized under law, or explicit consent.
42
Regulatory Issues – GDPR
• Right to Explanation?
Articles 13-15 cover separate aspects of a data subject’s right to
understand how her/his data is being used. Importantly, each article states
that the data subject has the right to access “meaningful information about
the logic involved, as well as the significance and the envisaged
consequences of such processing for the data subject.” Articles 21-22
suggest that the subject’s right to understand “meaningful information” about
and the “significance” of automated processing is related to her/his right to
opt out.
Recital 71 states that automated processing “should be subject to
suitable safeguards, which should include specific information to the data
subject and the right to obtain human intervention to express his or her point
of view, to obtain an explanation of the decision reached after such
assessment and to challenge the decision.”
43
Product Liability Issues
• Strict Liability
• Negligence
• Misrepresentation
• Breach of Warranty
44
Product Liability: Strict Liability
Manufacturing Defects
• Departure from Intended Design
• Malfunction Doctrine (Evidence of defect not apparent):
1. Product malfunctioned
2. Malfunction occurred during regular and proper use of product
3. Product not altered or misused in way that would cause malfunction
45
Product Liability: Strict Liability cont.
Design Defects
• Consumer Expectations Test
• Danger posed by design greater than ordinary consumer would expect
when using product in intended / reasonably foreseeable manner
• Risk Utility Test (Dominant Test)
• Product design proximately caused injury and defendant failed to
establish benefit of design outweighs danger inherent in design
46
Product Liability: Strict Liability cont.
Factors for Risk Utility Test (Dominant Test)
1. Usefulness and desirability of product (utility to the user and to the public)
2. Safety aspects of product (likelihood it will cause injury, and probable
seriousness of the injury)
3. Availability of substitute product which would meet same need and not be
as unsafe
4. Manufacturer’s ability to eliminate unsafe character of product without
impairing usefulness / making too expensive to maintain utility
5. User’s ability to avoid danger by exercise of care in use
6. User’s anticipated awareness of dangers inherent in product and
avoidability from public knowledge of condition of product / existence of
suitable warnings or instructions
7. Feasibility, on part of manufacturer, of spreading loss by setting product
price or carrying liability insurance
47
Product Liability: Proof of Design Defect
Proof Needed for a Design Defect?
In re Toyota Unintended Acceleration Litigation, 978 F. Supp. 2d 1053 (C.D. Cal.
2013) (Georgia law)
• Expert permitted to testify to defective car design even though he could not ID
specific bug(s) that could open throttle from idle
• Expert opined there were source code errors, including:
✓ Inadequate operating system
✓ Substandard ECM software architecture
✓ Negligently designed watchdog supervisor software
✓ Untestable, unduly complex “spaghetti” code
✓ Task X could disable fail-safes and cause unintended acceleration
✓ Unidentified software bug could cause partial task death of Task X
48
Product Liability: Proof of Design Defect cont.
In re Toyota Unintended Acceleration Litigation, 978 F. Supp. 2d 1053 (C.D. Cal.
2013) cont.
• Georgia applies risk-utility test
• Only need to show device did not operate as intended
• And that was proximate cause of injuries
• Circumstantial evidence of design defects sufficient, especially if:
✓ Alleged defect destroys evidence needed to prove defect, or
✓ Evidence is otherwise unavailable through no fault of plaintiff
• Expert testified car’s software does not record software failures
✓ Good enough circumstantial evidence
• Motion to strike expert report denied; MSJ denied; Case settled
49
Product Liability: Strict Liability cont.
Inadequate Warnings
• Failure to warn consumers about danger or hazard which manufacturer knew
or should have known about
• Post-sale notifications
50
Product Liability: Negligence
• Conduct that falls below legal standard established by law for protection of
others against unreasonable risk of harm
• Reasonably foreseeable
• Risk of harm vs. utility of act
51
Product Liability: Misrepresentation
• Misstatements and material omissions
• Auto-pilot?
• Driver will rarely take over?
• Sufficient time for warning?
52
Product Liability: Best Practices
• Careful documentation of functional safety verification practices
• Review of advertising
• Characterization of "product" as software
• Waivers
53
Labor & Employment Issues
• Disparate impact
• Problem areas:
• Reliance on historical data
• Lack of data
• Reliance on vendors without due diligence
• Don’t forget privacy issues
• GDPR – right of explanation?
54
Data Breach Liabilities
• GDPR
• U.S. state regulation
• Prompt disclosures
55
IP Enforcement
• Patents after Alice
• Patents vs Trade Secrets
• Copyrights
• IP created by AI?
56
Self Regulation: Will Best Practices Become Standards of Care?
• IEEE recommendations:
• “Software engineers should be required to document all of their systems
and related data flows, their performance, limitations and risks.”
• “[S]tandards providing oversight of the manufacturing process of intelligent
and autonomous technologies need to be created…”
• “Technologists should be able to characterize what their algorithms and
systems are going to do via transparent and traceable standards.”
• The process should be auditable.
57