
Late last month, President Biden issued an Executive Order to establish new standards for artificial intelligence (AI) safety and security with the goal of ensuring the U.S. leads the way in seizing the promise and managing the risks of AI. The Executive Order also aims to protect privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, as well as advance U.S. leadership around the world.

The Order is more than 100 pages in length and is a directive across the “whole of government” to begin regulating this new technology. The order directs or makes requests of dozens of federal agencies, from the Departments of Energy and Homeland Security to the Consumer Financial Protection Bureau and the Federal Housing Finance Agency, and more. These agencies, in turn, have the authority to issue regulations that carry the force of law. The order also mandates the creation of new AI Governance Boards and Chief AI Officer positions across federal agencies, possibly laying the groundwork for a new centralized AI agency.

The reaction to this Order across America has not been all positive. Forbes responded with the article “Biden’s New AI Executive Order Is Regulation Run Amok,” whose author argues the order “may prove one of the most dangerous government policies in years,” citing several scenarios of potential unintended negative consequences across various industries.

Some industry groups have expressed concern that the order’s regulatory requirements will stifle innovation in the AI sector. For example, the NetChoice trade group has warned that the order could “put any investment in AI at risk of being shut down at the whims of government bureaucrats.”

Other critics have argued that the order is too vague and lacks specific details about how the government plans to implement and enforce its provisions, while the Algorithmic Justice League, a civil rights group, has said that the order “does not go far enough” to protect people from the harms of AI.

Some experts have also criticized the Biden administration for failing to consult more broadly with the public and stakeholders before issuing the order. For example, the Brookings Institution think tank has said that the order “reflects the administration’s own priorities, but not necessarily the priorities of the broader AI community.”

The American Civil Liberties Union (ACLU) has warned that the order’s focus on national security could lead to the government using AI for mass surveillance and other purposes that could violate civil liberties.

Citing the announcement of this Executive Order, the National Safety Council issued a statement in response, saying that it believes data and AI can be used to gain insights into workplace safety programs and that employers can apply those same insights and technologies to reduce the risk of serious injuries and fatalities for workers.

The National Safety Council is America’s leading nonprofit safety advocate – and has been for 110 years. As a mission-based organization, we work to eliminate the leading causes of preventable death and injury, focusing our efforts on the workplace and roadways.

Over the past five years, NSC has focused specifically on how technology, including AI, can improve health and safety outcomes on the job through its Work to Zero initiative, which continues to examine where technology works well and where it can be improved to save lives.

Some of the prevailing AI technologies include machine learning, computer vision, natural language processing, as well as predictive and prescriptive analytics engines. All these technologies serve as powerful tools to identify risk factors for musculoskeletal disorders and other injuries, reduce employee incidents and streamline manual tasks.
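To make that concrete, the sketch below shows one way a predictive model might rank injury risk factors from workplace incident data. It is a minimal illustration in Python using scikit-learn; the feature names and synthetic records are hypothetical and are not drawn from NSC’s Work to Zero research.

# Minimal, illustrative sketch: train a predictive model on hypothetical
# workplace incident records and rank which factors most influence injury
# risk. Feature names and data are invented for demonstration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000  # hypothetical incident records
features = ["hours_on_shift", "repetitive_motion_score",
            "load_weight_kg", "training_hours"]

X = np.column_stack([
    rng.uniform(4, 12, n),   # hours_on_shift
    rng.uniform(0, 10, n),   # repetitive_motion_score
    rng.uniform(0, 40, n),   # load_weight_kg
    rng.uniform(0, 20, n),   # training_hours
])

# Synthetic label: injuries are more likely with long shifts, heavy loads,
# high repetitive motion, and little training.
risk = 0.2 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2] - 0.15 * X[:, 3]
y = (risk + rng.normal(0, 1, n) > risk.mean()).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank risk factors by how much each contributes to the model's predictions.
for name, importance in sorted(zip(features, model.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.2f}")

In practice, safety teams would feed such a model real incident and sensor data, and rankings like these are only a starting point for deciding which workplace conditions to address first.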

The NSC said it urges the White House, Congress and other policymakers to examine these findings and incorporate the lessons learned as this regulatory effort continues.

The Biden Executive Order on AI is just the beginning of the process of regulating AI in the United States. It remains to be seen how the order will be implemented and enforced, and whether it will be effective in addressing the risks and challenges posed by AI.