Washington rushing to put guardrails on AI – fast enough?
“We don’t have a lot of time,” CEO Dario Amodei of Anthropic, a San Francisco-based firm that aims to create “reliable, beneficial” AI systems, told senators last week. “Whatever we do, we have to do it fast.”
The reason for urgency? Experts say that, with AI capable of making advances at an exponential pace, efforts to control how it is used – or to avoid unintended harm to society – may only get harder over time.
As AI-related discussions have unfolded around Washington over the past year, several key ideas have gained currency: (1) creating a regulatory agency to oversee the fast-growing field and ensure that the public interest is not subordinated to profit, as it could be if oversight were left to private companies, (2) establishing liability so that AI developers know they will be held responsible if their systems are used for nefarious ends, and (3) requiring transparency in AI models and clear identification of AI-generated materials, such as by a watermark or a red frame around a political ad.
An active Congress and White House
Over the past several months, more than 20 bills have been introduced in the House and Senate dealing with various aspects of AI – from requiring the government to conduct risk assessments and develop a public health preparedness strategy, to establishing a bipartisan national commission to make recommendations.
Last week’s hearing was a sign that the lawmakers are trying to move forward, with stepped-up efforts to educate themselves as they consider potential measures.
The Biden administration, for its part, recently brought together seven AI companies to announce a pact committing them to internal and external testing of their models before public release. Amazon, Google, Meta, Microsoft, and OpenAI (the creator of ChatGPT), as well as Mr. Amodei’s company, Anthropic, and another startup, Inflection, were part of the voluntary agreement.
The pact, which focused on building safety, security, and trust, was hailed as an important milestone, following on the White House’s Blueprint for an AI Bill of Rights last fall and an AI risk management framework released in January by the Commerce Department’s National Institute of Standards and Technology.
However, some said the new pact was too vague and lagged efforts elsewhere around the world, from the European Union to China. Experts say that a more robust framework is needed, with the teeth to enforce it – ideally by a new federal agency that would be able to respond and adapt to emerging challenges more quickly than, say, Congress.
While many see the potential for AI to greatly benefit humanity, the technology raises grave concerns about everything from data privacy and election integrity to autonomous weapons and new biological threats. It could also reinforce or exacerbate societal inequities – a concern already raised, for example, regarding the consequences of facial recognition systems being less accurate among people of color.
Tristan Harris and Aza Raskin, who were featured in “The Social Dilemma” documentary warning of the dangers of social media, have described the latest chapter of AI development as a threat to humanity on par with the evolution of nuclear weapons – but worse.
“Nukes don’t make stronger nukes, but AI makes stronger AI,” said Mr. Raskin in a March talk the pair gave to more than 100 leaders in fields ranging from finance to government.
That means that as AI learns more, it can apply the gains across different fields, added Mr. Harris, a former design ethicist at Google. “It’s like an arms race to strengthen every other arms race,” he said, urging leaders to realize the responsibility they have to institute new systems for containing the new technology, just as the world did in its attempt to rein in nuclear proliferation and avert nuclear war.
It’s not just them. Recently, more than 250 AI experts, including Professor Russell, signed a statement saying that mitigating the risk of extinction from AI should be as high a priority as addressing the risk of pandemics and nuclear weapons.
Though there are many potentially constructive uses of AI, it is likely to be a disruptive force in society, even without any Hollywood-style plots about sentient machines intentionally attacking humans.
Bipartisan cooperation
So far, there appears to be bipartisan consensus to act on AI, removing one of the major obstacles to Washington policymaking.
Senate Majority Leader Chuck Schumer, a New York Democrat, described the Chinese Communist Party’s release this spring of its approach to regulating AI as “a wake-up call” to the United States. For months, he has been discussing and refining ideas for how America could take the global lead on AI innovation and shape the rules of the road.
“We must approach AI with the urgency and humility it deserves,” said Mr. Schumer in a statement outlining his SAFE Innovation Framework, noting that he was encouraging bipartisan policy work and legislation across numerous committees.
As part of that, he is convening a series of nine bipartisan forums for all senators to get up to speed on various aspects of AI, starting with three this summer on AI’s current capabilities, future frontier, and U.S. defense and intelligence capabilities in the field vis-à-vis America’s adversaries.
Similarly, Speaker of the House Kevin McCarthy has said that all members of the House Intelligence Committee would take AI courses resembling those provided to military generals. And the congressional hearing last week, which featured AI “godfather” Yoshua Bengio of the University of Montreal as well as Professor Russell and Mr. Amodei, was marked by an unusual level of bipartisan comity.
“What you see here is not all that common, which is bipartisan unanimity,” said Chair Richard Blumenthal, a Democrat and former attorney general of Connecticut, who oversaw a serious, substantive discussion for more than two and a half hours before a packed hearing room.
“There has to be a cop on the beat,” he said. “That cop on the beat, in the AI context, has to be not only enforcing rules but also, as I said at the very beginning, incentivizing innovation – and sometimes funding it – to provide the air bags and seat belts and the crash-proof kinds of safety measures that we have in the automobile industry.”
Lessons from social media policy
Another view shared across Washington is that policymakers need to apply the lessons learned from failing to deal sooner and more decisively with social media platforms.
“We can’t repeat the mistakes we made on social media, which was to delay and disregard the dangers,” said Mr. Blumenthal. The Connecticut senator, along with the top Republican on the subcommittee, Sen. Josh Hawley of Missouri, has co-sponsored legislation to prevent AI companies from claiming immunity for third-party content – as social media companies have done – under a telecommunications law known as Section 230.
Senator Hawley, a longtime critic of Big Tech, commended his Democratic counterpart for putting together such substantive hearings but expressed skepticism, given legislators’ failure to rein in social media – and the fact that many of the key players are the same now, including Google, which owns YouTube, and Meta, which owns Facebook.
“Will Congress actually do anything?” asked Senator Hawley, who said his priorities were protecting workers, kids, and consumers – as well as national security. “We’ve had a lot of talk, but now is the time for action.”