Human-in-the-Loop: Survey on How AI Insiders View Regulation
Takeaways for Regulators Based on Results from Surveying People Who Work in the AI Industry
Achieving consensus on regulating AI, given its rapid evolution, complexity, and borderless impact, is a formidable challenge. The conflicting, passionate viewpoints of the various stakeholders, whether regulators, doomers, or accelerationists, make consensus seem impossible. So we put that to the test.
[AI-generated image: The debate arena where wits do battle: the accelerationist Nick Land vs. the doomer Geoffrey Hinton, as refereed by FTC Chair Lina Khan]
We surveyed 43 people working in the AI industry to gather their thoughts anonymously on regulating AI. (A sample of 43 is too small to be statistically generalizable, but it is large enough to surface meaningful patterns.) The people we surveyed work at big tech companies, banks, nonprofits, and startups, in roles spanning engineering and tech, business, legal and compliance, and nonprofit and other functions.
In reviewing those responses, we found a surprising level of commonality. We hope that sharing these AI insider viewpoints helps debunk the myth of tribalism in AI and shows lawmakers that they have allies within the industry. Below, you will find our high-level takeaways, the survey results, and some closing thoughts. Enjoy!
Key Takeaways from the Survey (n = 43; detailed results below)
Most respondents think that AI should be regulated (72% Yes and 19% Maybe).
Support rises even higher for “high-risk” use cases, such as AI systems dealing with biometric data, education, employment, law enforcement, etc. (88% Yes and 7% Maybe).
Most respondents believe that if the AI industry were to self-regulate, companies should pledge to have the following measures and procedures in place:
Require continuous evaluation for bias, security, and vulnerabilities (91%)
Require the maintenance of policies and procedures (76%)
Set standards for content provenance (i.e., where the content originates), including watermarking and similar labeling (76%); see the sketch after this list
Protect election integrity (52%)
These, therefore, are the low-hanging fruit for creating regulations or setting standards for AI.
Interestingly, most respondents think there should be some kind of accountability for companies that create AI software, but they are unsure what form it should take. Two questions delineate the dilemma:
Do you think it’s fair that an AI company should be expected to know how the output generated by its technology is being used by its consumers? (44% Yes and 19% Maybe).
Should AI companies be held liable for what their users do with the generated content? (16% Yes, 42% Maybe, and 33% No).
There seems to be consensus that social media platforms such as Facebook should be required to take down harmful content (84% Yes) and be responsible for ensuring that any synthetic content uploaded by users is labeled as AI-generated (79% Yes).
Final Thoughts
While it’s clear there is an industry desire for regulation, it’s also clear that many are unsure what it should look like in an ideal world. We want to leave you with some final thoughts as you form your own opinions about AI regulation:
Regulation of an emerging technology, especially one as transformative as AI, should avoid negative externalities for industries and consumers. Regulators need to strike a balance between fostering innovation and ensuring ethical, social, and legal safeguards. A good regulatory framework encourages creativity and experimentation while safeguarding against potential harm.
Remember that there are already laws on the books that apply to AI just as they apply to any other industry. The approach to regulating AI should be surgical: a scalpel rather than a sledgehammer. You don’t want to crush a fledgling industry with many positive use cases.
An entire ecosystem of stakeholders needs to be considered in designing AI regulations, including those who build AI, those who use it, and the platforms where AI-generated content is published.
Because technology moves so fast, regulations should be technology-neutral: they should not favor one technology over another, nor constrain the development of the technology itself.
Consider whether it makes sense for a given technology to be regulated based on its technical capabilities, user behavior, user reach, and use cases. Regulation may require a hybrid of different philosophies.
Soft law or self-regulation may be more effective for fast-moving industries and should be designed through public-private partnerships.
Regulations should be designed not to become stale too quickly; they should stand the test of time. (For example, the EU’s MiCA regulation was designed with Facebook’s Diem project in mind, but Diem has since shut down.)
Detailed Survey Results
[We want to thank everyone who participated in the survey. We hope you feel your voice was heard. More to come!]
Resources:
Survey conducted using Google Forms
Image of boxing match generated with Pixlr, ChatGPT, and BeFunky
Disclaimer: This post is for general information purposes only. It does not constitute legal advice. This post reflects the current opinions of the author(s). The opinions reflected herein are subject to change without being updated.