By: Simon Constable
It’s in language translators. Virtual shopping guides. Online movie suggestions. It helps diagnose diseases, makes video games more exciting, and so on. As the century progresses, artificial intelligence is becoming increasingly ubiquitous. And that, say experts, is the problem.
Each new advance in AI provokes fresh scrutiny from governments and rising suspicion from consumers. Just recently, the US government warned companies that using the technology in recruiting may exacerbate discrimination. This was hardly the first time AI got a black eye, however. It had already been widely criticized for flawed facial-recognition programs that showed racial bias and for driverless cars that ran into pylons. All of which leaves many company leaders struggling with a single question: Will the robots get smarter?
The stakes are enormous. For years, companies have talked about the importance of digital transformation, but it took a pandemic (and its attendant lockdowns) to make it urgent. By last year, almost four in ten companies were deploying AI, and global investment in the technology had ballooned to more than $325 billion. But whether a firm wants smarter ways to manage supply chains or chatbots that better understand customers, the technology will need to improve considerably, experts say. At the root of every AI application is an algorithm that analyzes masses of data. “If the data is biased, then the AI is biased,” says Roshni Raveendhran, a professor at the University of Virginia Darden School of Business.
She cites one example: many AI models use ZIP code data to make credit decisions. ZIP codes are associated with specific demographic profiles, such as income, wealth, and the likelihood that a borrower will default. But the credit profile of the average person in a ZIP code may say little about a specific individual’s creditworthiness. Put simply, an AI credit model built on ZIP codes may be biased: it may automatically treat anyone living in certain areas as a risky prospect. “You may be already red-flagged as a potential defaulter,” says Raveendhran.
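To make that mechanism concrete, consider a minimal sketch of a ZIP-code-based credit screen. Everything below is hypothetical and invented for illustration; it is not any lender’s actual model, but it shows how a decision driven by neighborhood-level averages can flag an individual regardless of their own record.

```python
# Hypothetical illustration: a credit screen that scores applicants by the
# historical default rate of their ZIP code. All data here is invented.

historical_loans = [
    # (zip_code, defaulted)
    ("22901", False), ("22901", False), ("22901", True),
    ("10458", True),  ("10458", True),  ("10458", False),
]

def default_rate_by_zip(loans):
    """Aggregate past loans into a per-ZIP default rate."""
    counts = {}
    for zip_code, defaulted in loans:
        total, defaults = counts.get(zip_code, (0, 0))
        counts[zip_code] = (total + 1, defaults + int(defaulted))
    return {z: d / t for z, (t, d) in counts.items()}

rates = default_rate_by_zip(historical_loans)

def screen_applicant(zip_code, threshold=0.5):
    # The applicant's own payment history never enters the decision:
    # anyone in a high-default ZIP is flagged, regardless of creditworthiness.
    return "flagged" if rates.get(zip_code, 0.0) >= threshold else "approved"

print(screen_applicant("10458"))  # flagged purely because of the address
print(screen_applicant("22901"))  # approved purely because of the address
```

Because ZIP code is a proxy for demographics, a model like this inherits whatever disparities exist in the historical data, which is exactly the pattern Raveendhran warns about.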
Infamously, the artificial intelligence used in driverless cars has had trouble deciphering—or at least deciphering fast enough—whether an unknown object is another car, a bike, or a human, leading to serious accidents over the years.
The good news: flawed algorithms can be corrected—by other humans, of course. People can audit computer programs with relative ease. “We can adjust the algorithms and make them fairer,” says Anton Korinek, a David M. Rubenstein Fellow at the Brookings Institution. “If we are smart, I really hope we will see quite a lot of progress in making things explicit that were implicit.” That applies to AI applications for recruiting as well, where many leading companies have combined AI selection processes with human discernment to assess the relative strengths of executives, says Brad Frank, senior client partner in Korn Ferry’s Technology practice.
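As a hypothetical illustration of the kind of audit Korinek describes, one simple check compares a model’s approval rates across groups. The group labels and decisions below are invented, and real audits use richer data and more sophisticated fairness metrics, but the basic test is this straightforward.

```python
# Hypothetical audit: compare approval rates across groups for one model.
# A large gap (a "demographic parity difference") is a signal to revisit
# the model or its inputs. All decisions below are invented for illustration.

decisions = [
    # (group, approved)
    ("A", True), ("A", True),  ("A", True),  ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rates(records):
    """Compute the share of approvals within each group."""
    totals = {}
    for group, approved in records:
        n, approved_n = totals.get(group, (0, 0))
        totals[group] = (n + 1, approved_n + int(approved))
    return {g: a / n for g, (n, a) in totals.items()}

rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)                      # {'A': 0.75, 'B': 0.25}
print(f"parity gap: {gap:.2f}")   # 0.50 -- large gaps warrant human review
```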
“You will interact with algorithms from the moment you wake up until you go to sleep,” Korinek says. Which is all the more reason firms will be watching to see whether the technology gets smarter in 2022.