December 12, 2023

Should You Say ‘I Used AI For This?’

A Brazilian city unwittingly passed a law written by ChatGPT, highlighting the debate over whether workers need to disclose when they use the high-profile tools.

At this point, employees use AI large language models to develop sales strategies, write software code, design new logos, build financial models, and even write laws.

But do employees have any obligation to disclose their reliance on AI for any—or all—of that work?

It’s a massive question that can affect relationships between colleagues, managers, and customers. “We’ve struggled with it too: When do we disclose?” says Shanda Mints, a Korn Ferry vice president who leads the firm’s efforts implementing Recruitment Process Outsourcing solutions in the Americas.

To be sure, experts say disclosure is already becoming a big issue in distinguishing between machine- and human-made work. In one recent case, a councilman in Porto Alegre, Brazil, a city of 1.3 million, submitted an ordinance involving the town’s water meters. The comprehensive ordinance was vetted by several city committees, but only after its passage did the councilman reveal that it had been written by ChatGPT—in about 15 seconds.

The reaction to the case from other city council members has ranged from fascination to anger. Experts say that, depending on their organization’s culture, workers can expect a similar reaction if they disclose AI use only after the fact. “The future of leadership is knowing how to make the best use of this information, but I’d want to know if something was generated by ChatGPT,” says Maria Amato, a Korn Ferry senior client partner who works on talent-management programs.

In some organizations, telling your boss that you’re using AI to help produce—or completely develop—material might be seen as showing initiative. But in other organizations, the admission could make the employee look lazy or negligent. “That’s part of the problem: If you give credit to the AI, you lose credibility,” Mints says.

There’s another reason some employees might not want to disclose their use of AI: job security. In other words, their managers might conclude that AI can easily replace them.

Only a few organizations outside academia have actually developed rules around AI disclosure, and most of those revolve around what to tell constituents or customers. A 2020 executive order, for instance, requires US agencies to disclose when and how officials are using AI. Earlier this year, some media firms said any election-related advertising that makes use of AI-generated content must conspicuously disclose it. However, the disclosure debate mostly remains just that—a debate.

Absent specific rules around transparency, you might consider treating AI as the equivalent of “a great intern,” says Anu Gupta, a Korn Ferry senior client partner who works with life sciences and healthcare clients. A capable intern can do a lot of preliminary research, write up presentations, and even offer options. Bosses and colleagues would probably appreciate knowing that their workers are delegating such assignments to interns. They would likely feel similarly about the use of AI tools.

Client disclosure can be a little trickier. “You wouldn’t necessarily be compelled to say to the client that an intern started a project,” says Alma Derricks, a senior client partner in Korn Ferry’s Culture, Change, and Communications practice. Still, most clients want to know that their vendors are using all the tools available to them to add value, including AI. “If you used AI to give your project direction, that’s fine. The project is still yours,” Mints says.


Learn more about Korn Ferry’s Future of Work capabilities.