Senior Partner, Global Industrial Technology Industry
-
The problem:
Firms are discovering all sorts of unexpected issues that artificial intelligence creates, from privacy to poor communication.
-
Why it matters:
In three years, more than $50 billion will be pouring into a technology that companies still haven’t mastered.
-
The solution:
Identify and deploy reliably trained humans who can use emotional intelligence and learning agility skills to guide AI.
It’s the kick that so many of us need. In fact, we drink more than 2.25 billion cups of coffee a year globally now. So it stands to reason that no one would mind if their local coffee chain or shop were to use technology to serve as fast as possible.
Or does it? The local coffee shop of the future could, for example, start deploying video facial recognition technology to identify regular customers and start their orders ahead of time. Should those same coffee lovers be able to decide to opt in or out of that? For those who do opt in, what rights to privacy are they giving up?
This isn’t the kind of situation the media and Wall Street think about when they think about artificial intelligence. But the potential of AI to transform this precise kind of routine, everyday activity is what Gurdeep Pall, corporate vice president of business AI at Microsoft, thinks about all the time. “There are a lot of privacy issues you have to think through when it comes to applying artificial intelligence that businesses are coming across that are truly confounding,” says Pall. In fact, it’s so confounding he hired an ethicist last year to work with AI teams. And the reason is simple enough. “We have to make sure the data used to train AI models is representative of our society and be accountable for any biases,” he says.
We are familiar, of course, with all the buzz surrounding the future of artificial intelligence. By 2021, firms are expected to spend more than $50 billion on AI-related products, according to the research firm IDC. But are they really prepared? Already, the technology has been throwing curveballs that suggest a far greater need for a special set of human skills and leadership to go with it.
“Every corporation is thinking about AI, but most of them don’t know how to approach it strategically,” says Sally Beatty, a principal in Korn Ferry’s Global Technology practice. “From a C-suite perspective, they are trying to figure out what are the skill sets needed to be successful.” One of the biggest hurdles to overcome, according to Scott Coleman, a senior client partner with Korn Ferry’s Global Technology practice, is misalignment between boards and leadership teams over how organizations should prepare for this technology.
Time is in short supply, however. Advances in artificial intelligence are already growing faster than the ability of organizations to keep up with them. Or, in Pall’s words, “Most people in industry don’t realize what is coming with AI. If you are prepared, it can really propel you. But if you are not prepared, it is going to run you over.”
A Google search for “number of jobs lost and AI” returns 307 million results. Clearly, people are worried—and they should be. Though statistics vary widely, all of them are scary. The World Economic Forum predicts 5 million jobs will be automated by 2020, for instance. After all, companies are racing to the technology precisely to eliminate a lot of labor costs.
But here’s the good news for us humans: AI needs help from people. Lots of it. A Korn Ferry survey of tech executives who are using or developing AI found a need for leaders with the skills to build, refine, and train AI systems. That means having a senior leadership team that is up to speed on how to use AI. It also means finding the talent that can execute AI-infused projects.
Scott Horn, chief marketing officer of [24]7.ai, the leading enterprise chatbot company, predicts the customer service agent role, for example, will be vastly transformed by AI—just not in the way most people think. The conventional wisdom is that as chatbots become smarter, they will render the customer service agent obsolete. [24]7.ai processes more than 1.6 billion customer interactions annually and has a library of roughly 40 million chats, after all. But Horn doesn’t see human agents going away—he sees them becoming more highly skilled and highly paid. “As more transactions become automated, the ones that need to be handed off to a human agent will be more complex and difficult to figure out, requiring agents to have a more specialized skill set,” he says.
Horn says agents will become “orchestrators of multiple conversations, monitoring bots and stepping in and out as needed.” In turn, that will require not only tech skills, but also investigative and reporting skills, adaptability, complex problem-solving ability, emotional intelligence, and more. In fact, tech skills can arguably be acquired more easily than the soft skills and emotional intelligence that AI jobs require.
“It’s important to recruit people who have learning agility,” says Mike Clementi, vice president of human resources at Unilever, which counts Breyers, Dove, Lipton, and Q-Tips among its consumer product brands. “AI applications are increasing so fast that you need people who can learn, unlearn, and relearn the skills required to do a particular job.” That means organizations must lean into training and development programs to help transition workers into AI-related roles.
Experts say organizations also need to apply a diversity and inclusion lens to AI. Already, companies have found out the hard way that AI-based recruiting tools may exclude women from job searches or discriminate on the basis of skin color. There are myriad reasons why such unconscious biases surface in AI. Training on historical data sets for engineering or coding jobs, for instance, likely results in AI favoring male candidates over female ones because those jobs have largely been male-dominated. Similarly, while breast cancer is predominantly a female disease, it does occur in men, which means that any data sets healthcare organizations use for AI-based clinical applications must account for how the disease shows up in both genders.
“It is very important when building models and thinking through AI in healthcare to take into account any of the differences that may come about because of the different ways diseases present themselves in men and women,” says Dr. John Danaher, president of clinical solutions at the Anglo-Dutch information analytics company Elsevier, which as the largest publisher of medical and scientific journals and textbooks produces a significant amount of the world’s healthcare information.
Another reason unconscious bias surfaces in AI is that organizations aren’t building teams representative of the world around us. “Algorithms and decision trees are subject to the biases of the people building them,” says Beatty. “Organizations need to build out a pipeline of diverse and inclusive AI talent to mitigate that.” Indeed, AI algorithms are only as good as the data, and the data is only as good as the people who collect, curate, and organize it. That means recruiting or developing data scientists who can work with very diverse systems, such as customer relations software or resource planning tools, as well as apply business thinking to the data. It also means building diverse and inclusive AI teams.
Not unlike virtual reality, AI has gone through a few hype and bust cycles. Rules-based systems, where software surfaces answers to questions based on a set of assertions, dominated the 1980s. The next wave, pattern analysis, became the basis for search algorithms and other intelligence up until around 2010. The current era is defined by deep learning, only now with vastly greater computing power behind it.
Indeed, the World Economic Forum considers AI the foundation for a fourth industrial revolution. This revolution, as the Forum puts it, “is characterized by a fusion of technologies that is blurring the lines between the physical, digital, and biological spheres.” Put another way, AI is at an inflection point: companies could emerge, transform, or perish, and all three outcomes are equally possible.
Yet many organizations are still trying to figure out how to deploy AI to achieve business goals, says Coleman. Unlike cybersecurity, for instance, where the impact of a data breach or other security vulnerability on business can be plainly seen, understanding AI’s power to drive business growth is more complicated. Boards, for one, are struggling to devise a strategy for implementing AI—a struggle that is delaying C-suite roles from forming around the technology as quickly as, say, chief information security officer positions were created.
Part of the friction comes from the difficulty of balancing incorporating AI into an organization with maintaining some level of cultural continuity. Clementi says HR teams can play a big role in influencing change in this regard. “If HR professionals are comfortable with the digital transformation and AI, they can help drive adoption throughout the business,” he says.
If companies are to succeed with AI, they need to develop a culture of humans working alongside machines, says Bob Concannon, a senior client partner with Korn Ferry’s CIO/Information Officers practice. Organizations must figure out how to close the talent gap, lest they fall behind competitors who are already using AI to improve operations, customer experience, revenue, and more. But once they do close that gap, they stand to gain in productivity—and in better employment.
“In the long run, humans will be increasingly working alongside machines in a collaborative setting, from the C-suite on down,” Concannon says.
For more information, contact Scott Coleman at scott.coleman@kornferry.com.