Biden’s Executive Order On AI Is Broad In Scope And Laser-Focused On Spurring Innovation Without Undue Risk
With much anticipation, and two days before the UK's AI Safety Summit, the White House published the Fact Sheet and full Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, signaling the administration's commitment to AI. The EO sets the tone for the administration's agenda: bolster the nation's competitive advantage by investing in the new opportunities that AI will create and by fostering AI entrepreneurship, while mitigating AI's risks.

The Executive Order Is Broad In Scope, With Big Implications Beyond The Executive Branch

The EO builds on the administration's previous actions to "drive safe, secure, and trustworthy development of AI." The tone is optimistic, calling out opportunities for harnessing AI alongside guidelines for risk mitigation, and its broad scope shows a deep understanding of the nuances and impacts of AI. The EO has teeth beyond its mandatory requirements for the executive branch: it commits to developing new standards, takes a multi-agency approach to enforcement, and holds the government accountable to the same standards for "responsible and effective" use of AI. Ultimately, the EO will have a significant and lasting impact on companies and industries that transact with the nation's biggest employer: the federal government.

Pay Attention To These Four Notable Directives In The EO

The EO calls for a “societywide effort” from government, the private sector, academia, and civil society to address eight AI priorities. Expect it to impact your enterprise AI strategy in these four critical areas:

Enterprises: Prepare For Domestic And International AI Standards With AI Governance

Executive orders are typically aimed at directing the operations of the federal government, but that doesn't mean they lack significant influence over large organizations. For example, the EO calls for broad, international collaboration on AI standards and for alignment on risk mitigation. A common, shared framework and standard will be a welcome relief for multinational organizations navigating a fragmented, confusing, and growing regulatory landscape for AI, both in the US and around the world. The EO also issues new guidance for the federal government to invest in AI and to overhaul how it procures AI products and services, which should ease the pain for companies that sell, service, and support the government. In doing so, the EO requires the federal government to eat its own dogfood while the private sector watches for lessons on what works and what doesn't.

Enterprises should prepare for the downstream impacts of this executive order by assessing the risks of existing AI use cases against the NIST AI Risk Management Framework, launching a formal AI governance initiative, and monitoring for the creation of the companion documents and new standards mandated in the EO.