LawBurst: Legal Issue Spotting Framework for AI
[What is a LawBurst? A short, information-packed legal article that’s a roughly written breakdown of something interesting.]
Frameworks aren't just for McKinsey consultants. We lawyers are all about frameworks; anyone who went to law school remembers applying IRAC (issue, rule, application, conclusion) to a fact pattern on an exam. So, let's break down all the legal issues that may arise in artificial intelligence (AI) – or at least we’ll give it a try. Feel free to add to this by dropping comments below.
Why do this? As a lawyer in an emerging industry, you must anticipate every potential pitfall, think creatively, and craft new and innovative legal solutions to protect your client in uncharted territory where precedents may be scarce. It’s truly exciting and invigorating for those legal eagles with the mettle to tackle the unknown.
The legal brain may at times feel like Mel Brooks in History of the World Part I (“Panic!”), but if we can war game AI like his son Max Brooks does with zombies, AI won’t feel like a foreboding horde of the undead about to destroy us all. We hope….
We've applied to the AI universe an industry-spotting framework that can be useful in the practice of law or business. When issue spotting a particular industry or business scenario, we think about:
Regulation
Product
Customer/End User
Company
Competition
You may have some issues that could fall into more than one bucket, but hopefully, using this framework will help you develop a 360-degree perspective on many of the legal issues within this nascent industry.
Overall Framework:
If you want to dig deeper into these topics, below are full explanations. Have fun!
I. Regulation
Federal Consumer Protection Laws
The Federal Trade Commission (FTC): Lying can be deemed an unfair and deceptive trade practice (Section 5 of the FTC Act). The FTC asserts that this prohibition on deceptive or unfair conduct also applies to generative AI tools, including any actions that mislead or harm consumers, such as phishing scams, identity theft, deepfakes, and other forms of deception - Link
Federal Communications Commission (FCC): The FCC has prohibited AI-generated voices for unsolicited robocalls. - Link
Securities and Exchange Commission (SEC): The SEC has issued some recent fines for “AI Washing” in public company filings (meaning companies made false and/or misleading statements about their use of AI). - Link
Terrorism/Anti-Money Laundering (AML) Laws
Office of Foreign Assets Control (OFAC): Terrorists could use AI platforms for nefarious purposes (automate fake social media, spread disinformation, optimize logistics and planning of attacks, etc.), and OFAC could impose sanctions on AI companies or their code if they are found to be engaging with entities or persons on the OFAC sanctions list (think Tornado Cash). - Link
Financial Crimes Enforcement Network (FinCEN): The agency could view AI software as a potential tool used for money laundering and terrorist financing, and it can bring enforcement actions against a company that fails to comply with AML regulations. - Link
US State Laws
Deepfakes in Elections and Non-Consensual Sexual Content: Various US states have begun to pass laws around deepfakes in regard to non-consensual sexual content and elections (below are a few examples). - Link.
California
California law allows individuals to sue for non-consensual image-based sexual abuse (IBSA) deepfakes and holds creators and distributors liable, but it exempts deepfakes that are a matter of public concern or newsworthiness.
Additionally, political candidates have a cause of action against deceptive deepfakes during elections, unless they are clearly labeled as manipulated or a parody.
New York
New York has updated its right-of-publicity law to protect against unauthorized commercial use of digitally manipulated images for 40 years post-mortem, and separately, it prohibits the distribution of unauthorized IBSA deepfakes. Disclaimers saying the representation is fake are not a defense.
Employment/Use of AI for Hiring Bias: In employment, AI-driven hiring tools might perpetuate discrimination if they inadvertently learn and replicate biases present in historical hiring data. New York City passed the first law in the country requiring companies to reveal how algorithms influence hiring decisions. - Link
Automatic Decision Making (aka Profiling Technologies): This refers to systems in which decisions are made by AI without human involvement. Algorithm-based decision-making could be biased, which can impact consumers depending on how they are profiled. Some states (California, Colorado, Connecticut, and Virginia) have passed laws integrating AI into their privacy laws, requiring notice, opt-out options, and consumer access to AI-driven decision-making processes. - Link
Insurance Industry/Discrimination: AI models might unfairly calculate premiums or deny coverage based on biased data, potentially discriminating against certain groups. As an example, the Colorado Division of Insurance is the first in the nation to publish a final rule obliging life insurance companies to report how their AI models work. The purpose is to put in place AI governance and risk-management tools to combat discrimination and bias. - Link
State-by-state Tracker: For more information, here’s a state-by-state tracker on AI legislation.
European Union (EU) Laws: The EU has passed the first comprehensive AI regulatory framework in the world, which includes requirements regarding transparency, disclosures, data quality, and human oversight. It identifies high-risk and low-risk AI use cases. - Link
Climate: The substantial computing power required for AI development may prompt the introduction of new climate regulations. - Link
II. Product
Intellectual Property (IP) Rights Issues
Creation of Patents: Per the US Patent and Trademark Office (USPTO), it is okay to use AI to create a patentable invention, but a "natural person" must provide a "significant contribution" to the invention to be eligible for a patent. The logic is that patents reward human ingenuity. - Link / Link 2
Creation of Copyrights: The US Copyright Office (USCO) issued guidance that material generated by AI is not copyrightable, though the human-authored portions of a work may be. - Link
AI-generated art cannot receive copyright protection (Thaler v. Perlmutter, US District Court for the District of Columbia). - Link
Creation of Trademarks: You base trademark rights on your use of the mark, not your creation of it. So, anything created by AI could be trademarked as long as it doesn't infringe on another's trademark. - Link
Scraping Other Materials for Training: There are open questions on scraping as to whether:
(i) training a model with copyrighted data requires a license,
(ii) the AI output infringes on the copyright of the training materials, and
(iii) the use constitutes direct copyright infringement or falls under fair use. [The Fair Use Doctrine provides a limited exception permitting use of copyrighted material without permission if the use is “transformative.”] If there is no direct copying, the owner of the data set used by the AI company must show substantial similarity in the output.
Pending Litigation
Federal judges have rejected the idea that AI output is an automatic copyright violation of those whose works were used to train the systems, though direct copyright infringement causes of action have been permitted to proceed to trial. - Link
TLDR: Presently, AI systems and rights holders are at odds over data collection, so you’re seeing AI companies enter into licensing agreements with content owners. Fair use may prove a hard argument to win.
Ownership:
Custom AI Agents: Companies are now letting you license their base models to build your own custom model. What type of license are you getting? Is it viral open-source software (OSS)? If you're creating a custom model, who owns it? How is liability allocated? Check the Terms of Use! - Link
AI-Generated Content: Who owns the output that AI generates? Check the Terms of Use! - Link1 / Link2
Open v. Closed Source Code: The code may be publicly available like any open-source software (OSS), meaning it can be scrutinized by anybody to see how it works. With closed-source software (CSS), the code is not made public. - Link
Pros and Cons:
CSS allows the company to protect its IP, but it's also more of a black box regarding what it's doing, and it leads to a concentration of power in the hands of centralized companies. On the flip side, CSS does mean that bad actors can't easily take the code and use it for nefarious purposes, so it may be easier to regulate AI at the company level and also impose liability.
OSS allows for greater transparency, which can be good for strengthening the code for user development (less buggy, etc.) and regulatory scrutiny.
The Biden administration has also weighed in on this debate to encourage CSS, but VCs like Andreessen Horowitz disagree. - Link / Link2
Data Sets: The algorithm (OSS vs. CSS) may actually matter less than transparency around the data sets, which are paramount for training the models. The power of AI companies stems primarily from their control over extensive data sets across their various lines of business; these data sets can confer more power than the code itself, as they determine the AI's capabilities and applications. - Link / Link2
“Synthetic Data:” Synthetic data refers to information artificially created by AI itself that is used to train a model so as to avoid violating copyrights. Some refer to it as “laundering a copyright” because you wash the data to obfuscate the source. However, its repeated use can lead to degradation in quality, resulting in outputs that are merely less accurate copies of their originals (think AI inbreeding!). - Link
Product Harm
Hacking: Breaking into software can lead to theft of personal data. If the code is impacted by bugs or viruses, it could lead to end-user lawsuits. For example, if someone hacks into a Tesla’s computer and crashes the car, the manufacturer may be held responsible under a product liability claim. - Link
Image-Based Sexual Abuse (IBSA): Without guardrails on the technology, AI tools could be misused to create illegal or harmful content such as child sexual abuse material (CSAM). Companies should institute policies to mitigate that risk and comply with local laws on CSAM reporting.
Deepfake IBSA Material: This is like the tort of nonconsensual IBSA material (colloquially referred to as “revenge pornography”) as it deals with a person's dignity and privacy of their likeness. The difference is that with deepfake IBSA, there is a "distortion" that changes the way others may perceive you. State laws differ on how they come down on deepfake IBSA. States like CA and NY have instituted restrictions. - Link
Deepfake CSAM: Under the Supreme Court decision Ashcroft v. Free Speech Coalition, fully fake CSAM (i.e., material in which no real children were involved) can't be criminalized. However, if the CSAM pulled the faces of real children and morphed them into a new creation, that would probably be criminalized in each state, as the government has a compelling interest in safeguarding children. - Link
Deepfakes in Commerce: The FTC is adopting rules to make the impersonation of government officials, businesses, and individuals a violation if it is deceptive in commerce. Biden’s executive order also addresses deepfakes. - Link / Link2
Misinformation: Section 230 of the Communications Decency Act shields internet companies from liability for misinformation, which would include AI-generated content that users may post. The question is whether Section 230 also applies to the AI output itself. AI could pull from data that's inaccurate or biased, and that can lead to "hallucinations." - Link1 / Link2
FTC: Lying can be deemed an unfair and deceptive trade practice under Section 5 of the FTC Act. - Link
Torts: All of this could lead to tort lawsuits, such as claims for defamation, libel, and invasion of privacy.
Election Laws: States are beginning to pass laws to combat deepfakes in elections.
Example: California has a civil cause of action on the books for a political candidate deepfake video. Check state laws! - Link
III. Customer / End User
Customer v. End User: Who is the company's customer? Sometimes, it's the end user and sometimes it’s a B-to-B situation. Either way, the AI company should be concerned with how the product is used downstream and how liability is allocated amongst the various stakeholders. - Link
Confidentiality / Privacy
Confidential Information: Many AI systems use the data a person inputs for training purposes, so users need to be careful to anonymize data, upgrade plans or change privacy settings. - Link
Privacy Laws: Given that AI is trained on huge data sets, there may be an increase in the personal data collected about people. Companies may be collecting biometric or personal data on end users. Each country has its own data privacy law (e.g., the GDPR in the EU) with which one would need to comply. - Link
HIPAA Violations: Patient health data may be collected, raising HIPAA compliance concerns.
Discrimination and Bias
Civil Rights Violations: Title VII of the Civil Rights Act protects employees and job applicants from employment discrimination based on race, color, religion, sex, and national origin. AI can show bias and may violate the Civil Rights Act, so under certain state or city laws, companies may need to test for bias. For example, NYC has passed such regulations (see Insurance and Automatic Decision Making in Section I). - Link
IV. Company
Private Causes of Action: AI companies and users should be prepared to be named in lawsuits concerning the unauthorized use of personal information or likeness, distribution of harmful or false content, and actions that invade privacy or cause emotional distress. Such “torts” include the following: defamation, libel/slander, false light claim, intentional infliction of emotional distress, right-of-publicity claim, non-consensual IBSA, CSAM, and expectation of privacy / invasion of privacy. - Link
Product Liability/Warranties/Liability/Indemnity
Negligence v. Product Liability: When there is no human involved in an incident (like a driverless car) or software has a bug, it may turn a negligence case into a product liability case (a manufacturing defect, a design defect, or a marketing defect). - Link
Allocation of Liability: Liability may rest with the AI company (the originator of the algorithm) or with a seller of software that incorporates the algorithm and is then sold downstream to an end user. Given this is a nascent industry, standards are still evolving. - Link
Software Developer Liability: It has been hard to pierce the "tech veil," as courts have not recognized developers as fiduciaries or as having professional duties of care and have upheld posted disclaimers, finding downstream actors responsible. This "tech veil" is always at risk of being eroded, and some cases in the blockchain space are evidence of that. - Link
Deepfake Liability: The FTC wants to hold software developers liable for deepfakes (more above).
Risk Mitigation/Self-Regulation
Content and Safety Risk Assessments: AI Companies should consider running red teaming and impact assessments. The National Institute of Standards and Technology’s (NIST) AI Risk Management Framework provides a framework that can be useful. - Link
Content Moderation Policies: Companies may consider having content regulation policies in place that put a guardrail on what can be produced to reduce consumer harms. - Link
Audits: Companies should consider allowing third-party auditors to evaluate their safety systems. - Link1 / Link2
Insurance: Software providers may face liability for content produced by their software and should be ready to be named as defendants in private lawsuits for defamation, right-of-publicity violations, and other legal issues, so appropriate insurance coverage is worth considering. - Link
Contracts/Terms of Service (ToS):
Implied Warranties: AI companies want their ToS to disclaim the implied warranties of merchantability and fitness for a particular purpose. If not expressly disclaimed, these warranties are implied as a matter of law. - Link
Disclaimers, Waivers, Limitations of Liability and Consequential Damages: Companies would want to make sure to include these in the ToS to mitigate their liability.
V. Competition
Antitrust: General-purpose AI requires substantial computing power that only the largest companies can afford. Many of the largest AI companies are highly centralized and controlled by big tech. It's very costly to create AI products, which may discourage small companies from engaging with this new technology. Big companies should play fairly and not stifle competition. - Link1 / Link2
Recently, the FTC launched inquiries into the big tech/AI deals. - Link
VI. Miscellaneous
Defenses for Regulating AI
First Amendment Considerations: In Bernstein v. United States Department of Justice (1999), the Ninth Circuit Court of Appeals ruled that software source code was protected speech under the First Amendment and that any government prohibition on its publication was unconstitutional. - Link1 / Link2
Section 230 of Communications Decency Act: Section 230 of the Communications Decency Act provides limited legal immunity to service providers (and their users): "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." Software developers may have limited legal immunity for unlawful acts done by their users. - Link1 / Link2
Supporting Growth of Technology: Imposing liability on software developers may hinder AI development. Premature regulation risks creating harmful or outdated laws. Effective regulation balances public benefits with risks. Overly restrictive rules may increase costs, reduce product quality, or shut down projects, ultimately affecting users. - Link1 / Link2
Practicing Law - Link
Duty of Confidentiality: Be careful what you input into an AI model. Don't use client names, as doing so could breach the duty of confidentiality or attorney-client privilege.
Local Rules: Do the local bar associations and courts allow the use of AI in the practice of law? Each state has different rules, and the Supreme Court hasn't opined yet. Check your local rules!
Duty of Competence and Diligence: Sometimes generative AI hallucinates, so you have to be careful you're not using incorrect information. Don’t trust. Verify and double-check… Triple-check!
For those that made it this far, here’s a little humor as a reward:
Why did the hippy zombie eat the human brain instead of the AI bot’s neural network?
It never eats anything artificial.
CREDITS:
Joke: Human made
Images:
MindMap created with EdrawMind
Mel Brooks Chased by Zombies created by DreamStudio and speech bubbles
Robot Comedian generated from ChatGPT
Disclaimer: This post is for general information purposes only. It does not constitute legal advice. This post reflects the current opinions of the author(s). The opinions reflected herein are subject to change without being updated.