A New AI Tool to Fight a New AI Tool

The developers behind ChatGPT say they now have software that can detect whether text was written by a human—or a machine. Will it help, or create more problems?

How’s this for meta: artificial-intelligence tools are being tested to protect against copyright infringement, plagiarism, and other liabilities that could arise from using... artificial intelligence.

Three months ago, ChatGPT debuted, the first widely available artificial-intelligence chatbot to produce original content virtually indistinguishable from human writing. Now, the creators of that software are beta testing a new tool that can (or so they say) determine whether a text was written by a person or a machine.

The applications could be many, from identifying disinformation campaigns to detecting when a job candidate has used AI for a cover letter. But experts worry the software will only create more challenges for leaders already caught in an AI rabbit hole. “The mushing of original thinking and discernment and artificial intelligence is dangerous for employees, managers, and leaders,” says Andrés Tapia, a senior client partner and global diversity, equity, and inclusion strategist at Korn Ferry.

Since ChatGPT and other “generative AI” bots draft responses based on user questions and prompts, the more the software is used, the more human it sounds. That could be beneficial in automating emails, onboarding new employees, or answering basic customer-service questions. But it could be problematic for evaluating performance, screening job candidates, or conducting layoffs, says Tamara Rodman, a senior client partner in the Culture, Change, and Communications practice at Korn Ferry. One job applicant could, for example, pass an initial screening using work produced mainly by a bot, while another, who really did write their own cover letter, could be ruled out for sounding too much like a bot.

But how well the software will work is an open question. OpenAI, creator of both ChatGPT and the new detection tool, says that so far the tool has correctly identified only 26% of AI-written text, while mistaking human-written text for AI-written 9% of the time.
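To put those figures in perspective, here is a minimal sketch in Python that applies Bayes' rule to the reported rates. It is not OpenAI's method, and the share of AI-written text in the pool (base_rate) is an assumption chosen purely for illustration.

# Illustrative only: Bayes' rule applied to the reported rates.
# 26% of AI-written text is correctly flagged (true-positive rate);
# 9% of human-written text is wrongly flagged (false-positive rate).
# base_rate, the share of AI-written submissions in the pool, is an
# assumption made up for this sketch.

def prob_ai_given_flag(base_rate, sensitivity=0.26, false_positive_rate=0.09):
    """Probability that a flagged text really is AI-written: P(AI | flagged)."""
    p_flagged = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
    return sensitivity * base_rate / p_flagged

for base_rate in (0.05, 0.10, 0.50):
    print(f"If {base_rate:.0%} of texts are AI-written, "
          f"a flag is right {prob_ai_given_flag(base_rate):.0%} of the time.")

At those rates, if only one submission in ten were AI-written, roughly three out of every four flags would land on human authors, which underscores the screening risks described above.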

The numbers could improve, but Chris Cantarella, a senior client partner and global sector leader for software at Korn Ferry, worries that firms may end up placing the judgments of AI on par with, if not above, those of humans. “How long before society broadly agrees AI should be in the superior position overall to make the ultimate judgments?” asks Cantarella, adding that it’s a question with important talent and workforce ramifications.

There’s also a concern that firms and schools trying to root out cheating might inadvertently stifle innovation. After all, the attempt to ban or limit ChatGPT isn’t without precedent: Google and Wikipedia met similar resistance at some schools. Some experts fear the increases in productivity and efficiency that AI promises could be quashed before they can be realized. “There are places where using ChatGPT will matter and places where it won’t,” says Juan Pablo Gonzalez, a senior client partner and sector leader for the Professional Services practice at Korn Ferry. “Leaders are going to have to evolve how they think about writing.”


For more information, contact Korn Ferry's Software and Platforms practice.