Artificial intelligence decides everything from what products we should buy to which candidates companies should hire. But depending on the outcome of one European court case, it may move into a whole new area: deciding who to fire.
Last week in the Netherlands, a union representing drivers filed suit against a ride-sharing firm, challenging the company’s practice of “robo-firing,” or using AI and machine learning to determine which drivers should be let go. The case puts a spotlight on one of the more overlooked—and controversial—aspects of incorporating machine learning into traditional organizations: how to explain the decisions an AI recommends. “You can’t just say a machine came to the decision, so that’s what we did,” says Vinay Menon, a Korn Ferry senior client partner and global lead of the firm’s AI practice.
This is one of the first major cases to test the limits of the General Data Protection Regulation, or GDPR, a European Union–wide set of rules about data protection and privacy. The GDPR broadly prohibits AI or similar technology from being the sole decision-maker on actions that have “legal or other significant effects” on people. The ride-sharing firm contends its AI only identifies suspicious activity by drivers, with human managers making the final decisions.
Experts say AI, like any new technology, first has to be understood by the business leaders and employees who use it. But then those people need to explain what went into the AI’s decision-making process. Indeed, those explanations, Menon says, are more important than the actual technology that’s used to power the AI in the first place. “Data poured into a black box powered by algorithms and machine-learning principles will sometimes throw up explanations that are difficult to understand,” he says.
Though experts say robo-firing hasn’t reached the United States, machine learning has made inroads at organizations across the globe in a host of ways. It powers chatbots on websites, facial-recognition software in security systems, and production efficiency on assembly lines. More recently, it has moved into talent management, as firms use it to sift through thousands of resumes, looking for skills and experiences that might make a person a good candidate for an open role. The AI then recommends whom managers should interview. “AI takes a lot of mass-volume resumes and finds people to fill niche roles faster,” says Jacob Zabkowicz, Korn Ferry’s vice president and general manager for recruitment process outsourcing. Still, most firms aren’t comfortable enough with AI to have it recommend who should get hired, though that day might not be far off, at least for some lower-level roles. “If you’re looking to hire people with just high school diplomas, how much human recruiter touch do you need?” Zabkowicz says.
As AI has become more pervasive in talent management, so have its challenges. Companies have found out the hard way that AI-based recruiting tools may, thanks to the unconscious biases of their programmers, exclude women from job searches or discriminate on the basis of skin color. Other iterations of AI claim to predict, with a high degree of accuracy, which employees are planning to quit. Even when the AI’s logic is sound, some question the ethics of using it for decisions that have a large impact on individuals. Using AI to determine who should get fired probably wouldn’t be well received in the US, because it could expose the organization using it to litigation, Zabkowicz says.
For his part, Menon believes there will be plenty more challenges to AI, since the technology and its applications are moving far faster than humans expect. “It’s impossible to cover for every scenario, especially since it’s new,” he says.