Governments used to lead innovation. On AI, they’re falling behind.

As countries from six continents concluded a landmark summit on the risks of artificial intelligence Thursday at Bletchley Park, the historic site where British code breakers worked during World War II, they faced a vexing modern-day reality: Governments are no longer in control of strategic innovation, a fact that has them scrambling to contain one of the most powerful technologies the world has ever known.

Already, AI is being deployed on battlefields and campaign trails, with the capacity to alter the course of democracies, undermine or prop up autocracies, and help determine the outcomes of wars. Yet the technology is being developed under a veil of corporate secrecy, largely out of sight of government regulators, with the scope and capabilities of any given model jealously guarded as proprietary information.

“They are daring governments to take away the keys, and it’s quite difficult because governments have basically let tech companies do whatever they wanted for decades,” said Stuart Russell, a noted professor of computer science at the University of California at Berkeley. “But my sense is that the public has had enough.”

That may be changing. This week in Britain, the European Union and 27 countries including the United States and China agreed to a landmark declaration to limit the risks and harness the benefits of artificial intelligence. The push for global governance took a step forward, with unprecedented pledges of international cooperation by allies and adversaries.

On Thursday, top tech leaders including OpenAI CEO Sam Altman, DeepMind co-founder Demis Hassabis and Microsoft President Brad Smith sat around a circular table with Vice President Harris, British Prime Minister Rishi Sunak and other global leaders. The executives agreed to allow experts from Britain’s new AI Safety Institute to test their models for risks before they are released to the public. Sunak hailed the agreement as “the landmark achievement of the summit,” and Britain entered two partnerships to collaborate on testing: one with the newly announced U.S. Artificial Intelligence Safety Institute and one with Singapore.

Elon Musk, who attended the two-day event, mocked government leaders by sharing a cartoon on social media that depicted them saying AI was a threat to humankind and that they couldn’t wait to develop it first.

However, the U.S. AI Safety Institute is being set up inside the National Institute of Standards and Technology, a federal laboratory that is notoriously underfunded and understaffed. That could be a key impediment to reining in the richest companies in the world, which are racing one another to ship the most advanced AI models.

The NIST teams working on emerging technology and responsible artificial intelligence have only about 20 employees, and the agency’s funding challenges are so significant that its labs are deteriorating. Equipment has been damaged by plumbing issues and leaking roofs, delaying projects and incurring new costs, according to a report from the National Academies of Sciences, Engineering, and Medicine.

“NIST is a billion-dollar agency but is expected to work like a ten-billion-dollar agency,” said Divyansh Kaushik, the associate director for emerging technologies and national security at the Federation of American Scientists. “Their buildings are falling apart, staff are overworked, some are leading multiple initiatives all at once, and that’s bad for them, that’s bad for the success of those initiatives.”

Department of Commerce spokesperson Charlie Andrews said NIST has achieved “remarkable results within its budget.” “To build on that progress it is paramount that, as President Biden has requested, Congress appropriates the funds necessary to keep pace with this rapidly evolving technology that presents both substantial opportunities and serious risks if used irresponsibly,” he said.

Governments and regions are taking a piecemeal approach, with the E.U. and China moving the fastest toward heavier-handed regulation. Seeking to cultivate the sector even as they warn of AI’s grave risks, the British have staked out the lightest touch on rules, calling their strategy a “pro-innovation” approach. The United States — home to the largest and most sophisticated AI developers — is somewhere in the middle, placing new safety obligations on developers of the most advanced AI systems but stopping short of rules that would stymie development and growth.

At the same time, American lawmakers are considering pouring billions of dollars into AI development amid concerns about competition with China. Senate Majority Leader Charles E. Schumer (D-N.Y.), who is leading efforts in Congress to develop AI legislation, said legislators are discussing the need for a minimum of $32 billion in funding.

For now, the United States is siding with cautious action. Tech companies, said Paul Scharre, executive vice president of the Center for a New American Security, are not necessarily loved in Washington by Republicans or Democrats. And President Biden’s recent executive order marked a notable shift from the more laissez-faire policies toward tech companies in the past.

“I’ve heard some people make the argument that the government just needs to sit back and trust these companies and that the government doesn’t have the technical experience to regulate this technology,” Scharre said. “I think that’s a recipe for disaster. These companies aren’t accountable to the general public. Governments are.”

Meanwhile, civil society advocates who were sidelined from the main event at Bletchley Park say governments are moving too slowly — perhaps dangerously so. Beeban Kidron, a British baroness who has advocated for children’s safety online, warned that regulators risk repeating the mistakes they have made in responding to tech companies in recent decades, an approach that “has privatized the wealth of technology and outsourced the cost to society.”