AI Risks: Running Scared?

The number of firms citing AI-related risks to their business in annual reports jumped a whopping 474% this year. How concerned should boards be?

For firms that jumped into the AI boom, there is no shortage of risks. The new technology can produce biased outcomes, leading to public attacks and harm to the company’s reputation. Issues with data quality could lead to poor decision-making and inefficiencies in AI-driven supply-chain and customer-experience innovations. And evolving laws and regulations related to AI could impact both business operations and financial performance. 

And the headaches go on. Indeed, new data shows that more than half the members of the Fortune 500—281 companies, to be exact—cited risks to their business and financial performance arising from AI in their most recent annual reports. The figure represents a 474% increase over the 49 companies that cited AI-related risks in 2023.  Nicholas Berente, a technology professor at Notre Dame’s Mendoza College of Business who teaches AI-ethics and AI-audit courses, says that even that massive increase fails to capture the mounting liability concerns faced by firm leaders and boards. “Organizations are making big investments in AI without understanding them, and may get worse outcomes for higher overall costs,” he says.

Smart boards, experts say, are trying to get a handle on all this, and fast. That’s particularly important as investors start scrutinizing AI investments more closely. During second-quarter earnings calls, leaders of major tech companies faced tough questions on AI spending, with investors demanding to know when they’ll see returns on the roughly $200 billion a year big tech firms are pouring into the technology. It isn’t just big tech, either: one-fifth of large companies across industries are investing up to $50 million per year in AI initiatives.

Two of the biggest AI risks boards see involve governance and competition. As companies introduce new AI solutions, Berente says, boards are struggling to figure out how to develop policies and procedures to evaluate and audit them. He cites as an example a bank that has rolled out chatbots across the organization but is now struggling to manage and integrate them with other tech platforms. “That’s creating real governance costs,” he says.

On the competition side, boards’ major concern is that they are mostly in the dark about competitors’ AI strategies, a sharp contrast to their experience with the first wave of digital disruption. Not knowing what competitors are doing leaves boards fearful that they may end up losing market share to a rival, or worse. Indeed, companies and boards are doing investors a disservice by not disclosing more about their AI strategies, says Anthony Goodman, leader of the North American board effectiveness practice at Korn Ferry. Goodman is not surprised by the increase in risk factors associated with AI (“It’s more of an issue if you don’t mention the risks than if you do,” he says), but he does think leaders and boards “can do more to tie together what they are doing with AI in a way that makes sense to shareholders.”

Michelle Lowry, a finance professor and corporate-governance expert at Drexel University’s LeBow College of Business, agrees. She says leaders and boards are only beginning to understand how AI will affect their business. In next year’s annual reports and thereafter, she expects to see more detail about the AI capabilities companies are building and their boards’ processes for auditing and evaluating them. As she says, “It’s very easy for boards to focus on the risks of an action related to technological innovation, but they forget to adequately focus on the risk of inaction.”
