Photo of Alan Davidson from May 2022. Davidson said that the NTIA looked to financial auditing as inspiration for the proposed rules.
WASHINGTON, March 27, 2024 — The Biden administration’s top technology and telecommunications regulator on Wednesday released a report calling for independent audits of what the administration defines as “high-risk” uses of artificial intelligence.
In particular, the agency, the National Telecommunications and Information Administration of the Commerce Department, called for establishing a principle that federal agencies should require independent audits of all AI model classes “that present a high risk of harming rights or safety.”
These new rules could require the maintenance of a registry of such AI systems, the agency said.
The “AI Accountability Policy Report,” which was embargoed for release on Wednesday, is one element of a broader portfolio of work by the NTIA to regulate AI.
These “accountability policies,” the NTIA said in a press release, “will play a key part in unleashing the potential of this technology. They will help AI system developers and deployers show that their systems work as intended, and can be trusted not to cause harm. Such assurance will in turn boost public – and marketplace – confidence in these tools.”
During an embargoed conference call on Tuesday, Assistant Secretary of Commerce Alan Davidson announced the findings and agency proposals.
“The report calls for improved transparency into AI systems, independent evaluations of those systems, and consequences for imposing new risks,” said Davidson, who is also NTIA administrator.
“Responsible AI innovation will bring enormous benefits, but we need accountability to unleash the full potential of AI,” said Davidson. “NTIA’s AI Accountability Policy recommendations will empower businesses, regulators, and the public to hold AI developers and deployers accountable for AI risks, while allowing society to harness the benefits that AI tools offer.”
He said that the report’s findings indicated that the government should mandate audits of the AI systems that pose the highest risk to the public. AI systems that affect public safety and health should be scrutinized more thoroughly, Davidson argued.
“We are going to need to build an ecosystem around AI accountability and auditing that starts with greater transparency into how models and systems work,” Davidson added.
Davidson said that the government looked to financial auditing as inspiration for how to approach AI auditing.
“We can tell if a company actually has the financial results that it claims because we have a broadly accepted set of accounting and compliance principles,” he said. “We have a system of accrediting auditors and holding them responsible if they don’t follow those practices. That’s the kind of system we need to build for AI.”