Washington Watches As Big Tech Pitches Its Own Rules For AI

As Congress and the White House struggle to find ways to regulate AI, one center of power is filling the vacuum: the tech industry itself.

Microsoft president Brad Smith hosted a group of lawmakers Thursday at a high-profile event at DC's Planet Word museum to present his company's proposal for how Washington should regulate the fast-moving technology. Two days earlier, Google CEO Sundar Pichai argued that building AI responsibly is the only race that really matters.

The industry's push comes amid rising alarm over the rapidly evolving technology, which some believe could exacerbate existing social inequalities or, ultimately, threaten the future of humanity.

Congress is unlikely to act quickly, and the White House has recently asked top industry executives to fill in the blanks on what "responsible AI" should look like.

For Microsoft, the result is a new five-point plan for regulating AI, focused on cybersecurity, safeguards for AI systems that touch critical infrastructure, and a licensing regime for advanced AI models.

Meanwhile, Pichai and OpenAI CEO Sam Altman have taken similar pitches abroad, working to shape the conversation around AI regulation in Europe.

More than a half-dozen lawmakers from both parties attended Smith's address.

Google, meanwhile, published a blog post on Friday laying out its own policy agenda.

Smith's address came a week after Altman testified before a Senate Judiciary subcommittee in Washington, where lawmakers expressed broad support for his proposals and a willingness to work with the industry.

Rep. Derek Kilmer (D-Wash.), a member of the House Government Modernization Subcommittee, attended a Microsoft event Thursday and suggested that Congress take a closer look at companies developing artificial intelligence.

"Congress hasn't always had a say in these important technology issues," Kilmer said. "It's not uncommon for people with a lot of exposure, access, and knowledge of these technologies to take an active role and get policymakers involved in how these technologies are regulated," he said.

"At the end of the day, politicians must use their free judgment to do what is best for the American people," Kilmer said.

The company rejects the notion that it's in control: Speaking to reporters after the event, Smith denied that Microsoft, its corporate partner OpenAI or any other major company is in the "driver's seat" when it comes to federal regulation of AI.

"I'm not sure we were in the car," Smith said. But we offer insights and suggested directions for people who actually drive.

Smith acknowledged that the tech industry "probably has a more realistic idea" of AI regulation than Washington has right now. But he said that could change in the coming months.

"I suspect you will see competing bills," Smith said. "We'll like some more than others, but that's what democracy is. So I wouldn't worry about all the ideas coming from the industry."

Smith isn't the only major tech executive trying to shape the rules for AI. Google CEO Pichai was in Europe on Wednesday to discuss a voluntary AI agreement with the European Commission as the bloc finalizes its AI law.

A week after his high-profile visit to Washington, Altman embarked on a European AI policy tour of his own. OpenAI's chief executive told an audience in London on Wednesday that "technical limitations" could prevent his company from complying with the EU's coming rules on artificial intelligence, and he warned that OpenAI could leave Europe entirely absent major changes to the legislation.

Russell Wald, director of policy at Stanford's Institute for Human-Centered Artificial Intelligence, said he is concerned that some policymakers, particularly in Washington, are focusing too heavily on the tech industry's AI governance proposals.

"It's a bit of a disappointment ... it's a pure industry focus," he told a Senate hearing on the Government's use of artificial intelligence last week. Wald suggested that academia, civil society and government officials should play a bigger role than they currently do in shaping federal AI policy.

Rep. Ted Lieu (D-Calif.), an emerging leader on AI regulation in Congress who attended Smith's speech, told POLITICO that it makes sense to listen to the people who created AI. But sooner or later, he said, lawmakers need to hear from a broader range of voices.

"From researchers to advocacy groups, it's important to hear a variety of views on the impact of AI on the American people," Liu said.

Microsoft's approach to artificial intelligence

Smith urged Washington to adopt five new AI policy recommendations. Some of them are fairly straightforward: for example, the company wants to push the White House to more broadly adopt the voluntary AI risk management framework released by the National Institute of Standards and Technology earlier this year. That framework already features prominently in the White House's messaging about the guidelines AI companies should follow.

"The best way to move fast, and we have to move fast, is to build on the good things that are already there," Smith said Thursday.

The company has also called for "safety brakes" on AI systems that control critical infrastructure, such as power grids and water systems, a proposal that could win broad support in Congress.

Microsoft has also called on policymakers to increase AI transparency and to give academic and nonprofit researchers access to advanced computing and data infrastructure, echoing the stated goals of the National AI Research Resource, which Congress has yet to authorize or fund.

Microsoft wants to work with governments through public-private partnerships. Specifically, the company wants the public sector to use AI as a tool to solve "society's inevitable challenges."

Smith suggested to an audience in Washington how Microsoft could use artificial intelligence to help document the war in Ukraine or create presentations and other documents in the workplace.

The most sweeping of Microsoft's policy proposals calls for a legal and regulatory framework aligned with AI's technology architecture. Smith wants regulators to "enforce existing laws and regulations" and to establish a licensing regime for advanced foundation models.

"As Sam Altman told the Senate Judiciary Subcommittee last week ... we have to have legal authorization from the agency before we can issue test samples," the Microsoft chairman said. Critics see calls for a licensing system for these advanced AI models as an attempt to prevent smaller competitors like Microsoft and OpenAI from taking over.

In an effort to manage cybersecurity risks around the technology, Microsoft also wants developers of powerful AI models to "know the cloud" on which their models are deployed and accessed.

Smith also wants rules on the dissemination of AI-generated content to prevent the spread of misinformation, another goal supported by several key voices in Congress, including Rep. Nancy Mace (R-S.C.).

While Smith said enterprise companies are joining in "everywhere," he stressed that the new policy "doesn't just affect big companies like Microsoft." For example, Smith noted that startups and small tech companies will continue to play a large role in developing AI-enabled applications.

The world's best "elevator pitch"?