Experts Warn That OMB's AI Guidance Could Slow Federal Adoption Of The Emerging Tech

A number of stakeholders and AI experts, many from technology industry associations and trade groups, have warned that forthcoming AI guidance for federal agencies could stifle the use of even low-risk AI in government.

The Office of Management and Budget issued draft guidance on the government's use of artificial intelligence shortly after the White House released a sweeping executive order on the technology in late October. The final version is expected by the end of March, and stakeholder comments on the draft have been made public.

The draft would require agencies to adopt minimum risk management practices for AI tools deemed “safety-impacting” or “rights-impacting,” such as testing systems for performance in real-world contexts.

Systems whose output controls or significantly influences activities like the operation of the electric grid or decisions on government benefits fall within OMB's list of presumed safety- and rights-impacting uses, and agencies must also account for uses that could “potentially” carry such significant impacts.

An Office of Management and Budget spokesperson told Nextgov/FCW that “the draft guidance uses a risk-based approach to limit harm from AI by limiting enhanced protections to contexts where the use of AI poses a significant risk to the rights and safety of people.”

Still, some stakeholders worry that the guidance could hinder agency adoption of AI, since any use case swept into these definitions would be subject to a list of new processes and requirements.

“I'm very concerned that this is going to drive a risk-averse view of artificial intelligence at the very moment we need to be using this technology,” Ross Nodurft, executive director of the Alliance for Digital Innovation, a coalition that includes government contractors, said during a Dec. 6 House Oversight and Accountability subcommittee hearing on AI policy.

He continued: “There is a delta between what is currently being recommended and how those recommendations will be implemented in agencies. That delta is going to require people with the authority to decide, for the managers trying to use this AI, whether a use is good or bad, based on a definition that needs to be made more explicit and more specific.”

“The definition of ‘rights-impacting’ may include uses of information that do not pose significant risks,” the Chamber of Commerce wrote in its comments. Rights-impacting AI is subject to additional minimum risk management requirements in the draft guidance, beyond those that apply to safety-impacting AI.

Several technology trade groups responded to the draft guidance with concerns about the Office of Management and Budget's definitions of AI systems.

“The Office of Management and Budget currently defines almost all applications as high-risk, making it difficult, if not impossible, for federal agencies to adopt AI tools,” the Information Technology Industry Council wrote in its comments.

The Software Alliance warned of ambiguity in OMB's definitions of what constitutes high risk and of uncertainty around the minimum practices, while the Software and Information Industry Association similarly noted that low-risk uses are likely to be treated as high-risk.

Industry groups are not the only ones concerned.

Agencies may not be able to identify high-risk AI use cases on their own, according to comments from the nonprofit Center for Democracy and Technology, which recommends that OMB clarify the guidance and provide support, such as an interagency working group to help agencies categorize their use cases.

The broader definition of AI itself may also pose problems.

“Neither the scientific community nor industry agree on a common definition [of AI capabilities],” says a recent Government Accountability Office report. “Even within government, definitions vary.”

“While the OMB draft memo is laudable in many respects, large portions of it could have the unintended effect of entrenching bureaucracy,” Daniel Ho, a Stanford law professor and director of the university's Regulation, Evaluation and Governance Lab, told Nextgov/FCW.

“That does not mean outsourcing is the solution,” he added. “What we need is to develop technologists within government who can drive the adoption of responsible AI.”

He and a group of five other academics and former government officials, including civic technologist Jennifer Pahlka, warned in a joint comment that OMB's definition of rights-impacting AI presumptively covers all government benefits and services, meaning the minimum requirements would attach even to uncontroversial, low-risk uses.

They ask the Office of Management and Budget to narrow and clarify the definition of rights-impacting AI and to differentiate among types of AI and types of benefit programs.

The minimum requirements for rights-impacting AI, which include human review, opt-out options and public consultation, combined with the broad scope of covered uses, could “jeopardize core operations of a variety of government programs” and “hinder important modernization efforts,” they wrote, particularly at agencies that are already under strain.

“The OMB memo is exemplary in spelling out the promise and risks of artificial intelligence, but the processes must be tailored to the risk,” Ho said in testimony prepared for the hearing. “For instance, the memo's suggestion that agencies allow anyone to opt out of AI in favor of human review does not always make sense, given the sheer range of AI programs and applications. The U.S. Postal Service, for example, uses artificial intelligence to read handwritten ZIP codes on envelopes. Opting out of that system would mean hiring thousands of employees just to read the numbers.”
