WASHINGTON, April 28, 2023 — As artificial intelligence technologies continue to rapidly develop, many industry leaders are calling for increased federal regulation to address potential technological displacement, algorithmic discrimination and other harms — while other experts warn that such regulation could stifle innovation.
“It’s fair to say that this is a watershed moment,” said Reggie Townsend, vice president of the data ethics practice at the SAS Institute, at a panel hosted Wednesday by the Brookings Institution. “But we have to be honest about this as well, which is to say, there will be displacement.”
Screenshot of Reggie Townsend, vice president of the data ethics practice at the SAS Institute, at the Brookings Institution event
While some AI displacement is comparable to previous technological advances that popularized self-checkout machines and ATMs, Townsend argued that the current moment “feels a little bit different… because of the urgency attached to it.”
Recent AI developments have the potential to impact job categories that have traditionally been considered safe from technological displacement, agreed Cameron Kerry, a distinguished visiting fellow at Brookings.
In order to best equip people for the coming changes, experts emphasized the importance of increasing public knowledge of how AI technologies work. Townsend compared this goal to the general baseline knowledge that most people have about electricity. “We’ve got to raise our level of common understanding about AI similar to the way we all know not to put a fork in the sockets,” he said.
Some potential harms of AI may be mitigated by public education, but a strong regulatory framework is critical to ensure that industry players adhere to responsible development practices, said Susan Gonzales, founder and CEO at AIandYou.
“Leaders of certain companies are coming out and they’re communicating their commitment to trustworthy and responsible AI — but then meanwhile, the week before, they decimated their ethical AI departments,” Gonzales added.
Some experts caution against overregulation in low-risk use cases
However, some experts warn that the regulations themselves could cause harm. Overly strict regulations could hamper further AI innovation and limit the benefits that have already emerged — which range from increasing workplace productivity to more effectively detecting certain types of cancer, said Daniel Castro, director of the Center for Data Innovation, at a Broadband Breakfast event on Wednesday.
“We should want to see this technology being deployed,” Castro said. “There are areas where it will likely have lifesaving impacts; it will have very positive impacts on the economy. And so part of our policy conversation should also be, not just how do we make sure things don’t go wrong, but how do we make sure things go right.”
Effective AI oversight should distinguish between the different risk levels of various AI use cases before determining the appropriate regulatory approaches, said Aaron Cooper, vice president of global policy for the software industry group BSA.
“The AI system for [configuring a] router doesn’t have the same considerations as the AI system for an employment case, or even in a self-driving vehicle,” he said.
There are already laws that govern many potential cases of AI-related harms, even if those laws do not specifically refer to AI, Cooper noted.
“We just think that in high-risk situations, there are some extra steps that the developer and the deployer of the AI system can take to help mitigate that risk and limit the possibility of it happening in the first place,” he said.
Multiple entities considering AI governance
Very little legislation currently governs the use of AI in the United States, but the issue has recently garnered significant attention from Congress, the Federal Trade Commission, the National Telecommunications and Information Administration and other federal entities.
The National Artificial Intelligence Advisory Committee on Tuesday released a draft report detailing recommendations based on its first year of research, concluding that AI “requires immediate, significant and sustained government attention.”
One of the report’s most important action items is increasing sociotechnical research on AI systems and their impacts, said EqualAI CEO Miriam Vogel, who chairs the committee.
Throughout the AI development process, Vogel explained, each human touchpoint presents the risk of incorporating the developer’s biases — as well as a crucial opportunity for identifying and fixing these issues before they become embedded.
Vogel also countered the idea that regulation would necessarily stifle future AI development.
“If we don’t have more people participating in the process, with a broad array of perspectives, our AI will suffer,” she said. “There are study after study that show that the broader diversity in who is… building your AI, the better your AI system will be.”
Our Broadband Breakfast Live Online events take place on Wednesdays at 12 Noon ET. Watch the event on Broadband Breakfast, or REGISTER HERE to join the conversation.
Wednesday, April 26, 2023, 12 Noon ET – Should AI Be Regulated?
The recent explosion in artificial intelligence has generated significant excitement, but it has also amplified concerns about how the powerful technology should be regulated — and highlighted the lack of safeguards currently in place. What are the potential risks associated with artificial intelligence deployment? Which concerns are likely just fearmongering? And what are the respective roles of government and industry players in determining future regulatory structures?
Panelists
- Daniel Castro, Vice President, Information Technology and Innovation Foundation and Director, Center for Data Innovation
- Aaron Cooper, Vice President of Global Policy, BSA | The Software Alliance
- Rebecca Klar (moderator), Technology Policy Reporter, The Hill
Panelist resources
Daniel Castro is vice president at the Information Technology and Innovation Foundation and director of ITIF’s Center for Data Innovation. Castro writes and speaks on a variety of issues related to information technology and internet policy, including privacy, security, intellectual property, Internet governance, e-government and accessibility for people with disabilities. In 2013, Castro was named to FedScoop’s list of the “top 25 most influential people under 40 in government and tech.”
Aaron Cooper serves as vice president of Global Policy for BSA | The Software Alliance. In this role, Cooper leads BSA’s global policy team and contributes to the advancement of BSA members’ policy priorities around the world that affect the development of emerging technologies, including data privacy, cybersecurity, AI regulation, data flows and digital trade. He testifies before Congress and is a frequent speaker on data governance and other issues important to the software industry.
Rebecca Klar is a technology policy reporter at The Hill, covering data privacy, antitrust law, online disinformation and other issues facing the evolving tech world. She is a native New Yorker and graduated from Binghamton University. She previously covered local news at The York Dispatch in York, Pa., and The Island Now in Nassau County, N.Y.
Graphic from Free-Vectors.Net used with permission
SUBSCRIBE to the Broadband Breakfast YouTube channel. That way, you will be notified when events go live. Watch on YouTube, Twitter and Facebook.
See a complete list of upcoming and past Broadband Breakfast Live Online events.