Boss, the AI Says You’re Wrong

AI will soon force major culture shifts, especially around decision-making. Can leaders handle this?

Since the beginning of the Industrial Revolution, one person has always been right: the boss. The best leaders wielded that power diplomatically, gathering information and advice, listening to naysayers and advocates alike, and then announcing the path forward. It was the right path. For anyone to suggest otherwise was unheard of.

But experts say an unprecedented dynamic may be coming to the corporate hierarchy, one in which AI can second-guess leadership. The technology’s ability to soak up and spew out data could upend a host of protocols; to some degree, it already has. The average company spends $5 million annually on AI, and among large companies, 20% are investing up to $50 million per year. “AI decision-making is absolutely going to change the way culture operates,” says retail expert John Long, senior client partner at Korn Ferry. “Culture is a singularly important element to address.”

Long says shifting a firm’s decision-making culture won’t be easy. At first, employees and leaders will likely resist AI-based decision-making (“It would be better to do this ourselves”), before gradually moving toward acceptance (“Some of these recommendations are excellent”) and eventual advocacy (“Wow, this is really delivering great results”). Because AI “learns” with exposure to data, its recommendations will grow sharper and more targeted over time. Experts predict this will create conflicts, especially when executives lack the access their employees have to AI-enabled knowledge.

Without an awareness of what people in the trenches are seeing, “management could end up in tough situations,” says Chris Cantarella, senior client partner in Korn Ferry’s Global Technology Markets practice. “That could cause a lot of discord.”

Today, AI’s impact on culture differs by industry. For example, retail executives use predictive software that crunches millions of data points to recommend markdown cadences. In recruiting, AI and data have long supported hiring decisions, but the industry is grappling with how and when to use training bots instead of human coaches, and whether employees will feel safe with them, says Bryan Ackermann, head of AI strategy and transformation at Korn Ferry. “Every organization is looking at what to do with AI. Very few are implementing it at a scale large enough to have a meaningful impact on culture—but the early signs are here.”

AI may also aggravate stakeholders’ already low levels of trust. Just 67% of employees and 30% of customers trust a given company, according to PwC’s 2024 Trust Survey. “This might be exacerbated if decisions are made by AI they don’t trust, and validated by leaders they don’t trust,” says Ackermann. “We need to watch that.”

To help smooth culture transformations, experts suggest that leaders clearly communicate how they intend to use AI to benefit both the organization and individual employees. “It may come down to transparency,” says Ackermann.

Experts further advise executives and employees to bone up on AI-enabled capabilities, as well as on how to communicate about AI across the corporation, be it with organizational strategy teams or the competency and assessment group. Even though many companies encourage the use of mainstream generative AI, in most cases the training they provide leaves much to be desired. “Know what you don’t know,” says Cantarella. “A lot of management doesn’t, and that’s a problem.”
