By: Rupak Bhattacharya
The appliance manufacturer was over the moon. It had partnered with an AI provider to produce the fridge of the future—one that could not only anticipate grocery lists but also analyze dietary preferences and needs. So it was a rude awakening for executives to discover that the fridge used a basic inventory-tracking system that required manual input and employed only a generic algorithm to pick recipes.
Around the world, people have been crowding onto the AI bandwagon. With global financing for AI firms reportedly rising by 115 percent from 2020 to 2022, the bandwagon is better described as a bullet train with no planned stops. But this giddy prognostication rides on one critical premise: AI can do everything firms ask it to.
Welcome to AI washing, where either providers overpromise or buyers overassume. For providers, experts say the motivation to claim inflated AI capability is clear: higher valuations, greater investment, and cutting-edge appeal to customers. However, earlier this year the FTC warned AI vendors against embellishing—and with good reason. A 2019 study of European startups found that 4 in 10 so-called AI startups had hardly any AI to speak of. “I’ve talked to clients who have been burned,” says Cory Hymel, VP of product and research at Gigster, a software firm. “The main thing is vendors overselling the outcomes while vastly understating the complexity it takes to get there.”
Buyers share equally in the blame, say experts, and that starts with a failure to understand the difference between traditional and generative AI. The former learns from data to perform specific, predefined tasks, while the latter can analyze data to generate fresh content and insights. In the world of supply chains, for example, a traditional AI model might use historical sales data to generate a demand forecast. Generative AI, on the other hand, can pick up on complex patterns and relationships within the data to deliver a more nuanced prediction, along with a strategy based on its understanding. “Generative AI doesn’t just look for patterns,” says Chris Cantarella, a senior client partner in Korn Ferry’s Global Technology Markets practice. “It’s actually creating.”
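For readers who want to see the distinction concretely, here is a minimal, purely illustrative sketch in Python. The traditional approach is a fixed statistical routine that turns historical sales into a single number; the generative approach hands the same data to a language model with an open-ended request. The `build_forecast_prompt` function and the sales figures are invented for illustration—no real vendor API is shown.

```python
# Traditional AI (simplified): a fixed model over historical sales data
# that performs one specific task — producing a demand number.
def moving_average_forecast(sales, window=3):
    """Forecast next period's demand as the mean of the last `window` periods."""
    recent = sales[-window:]
    return sum(recent) / len(recent)

# Generative AI (simplified): the same data is framed as an open-ended
# request to a language model, which can return a nuanced prediction
# plus a strategy. The prompt is illustrative; no model is called here.
def build_forecast_prompt(sales):
    return (
        f"Given monthly unit sales {sales}, forecast next month's demand "
        "and propose an inventory strategy with your reasoning."
    )

sales = [120, 135, 128, 142, 150, 147]
print(moving_average_forecast(sales))   # traditional: a single number
print(build_forecast_prompt(sales))     # generative: an open-ended ask
```

The point of the contrast is the shape of the output: the first function can only ever answer the question it was built for, while the second invites the model to create something beyond a number.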
Even for the mighty ChatGPT, so-called “hallucinations” continue to be an issue—earlier this year, the FTC sent a letter to OpenAI requesting information on how it plans to address the false, and potentially damaging, information that ChatGPT sometimes shares about real people. Experts say disappointment in any AI model’s output may result from the mismatch between a firm’s idealized marketing materials and the flawed user experience. “It’s certainly not going to work exactly as it’s shown in a promotional video, in most cases, yet,” says Michelle Seidel, a senior client partner on Korn Ferry’s Global Technology Industry team. “Users have to be educated.”
There is no foolproof protection against bluffing businesses or enchanted executives, but experts say that more firms should be checking with past clients, and should be aware that the technology is changing fast. Another easy step is for leaders to ask AI providers if their teams can test out the vendor’s AI tools, says Cantarella. “It’s one of the few technology systems where it’s very cheap to have a demo.”