Big Tech has never been much of a fan of regulation. In fact, with a new president coming in, many leaders were gearing up for a battle over the industry’s roles and practices. But all that was before last week, when an unprecedented riot at the US Capitol was followed by tech’s unprecedented response.
The actions have been well publicized and heavily debated. After the riot, Facebook and Twitter suspended the social media accounts of the sitting US president, explaining that his words could incite more violence. Then Apple, Google, and Amazon effectively cut off Parler, a social media hub used by millions of the president’s fans, from doing business, and YouTube suspended Trump’s personal video channel. It was, without question, a remarkable display of Big Tech’s growing power over public discourse.
But according to some experts, the industry may not want that power. At issue is what exactly constitutes harmful speech or, more broadly, harmful content. Each tech firm has a different definition of what that is, as well as different—and often changing—policies for people who post content perceived as harmful. “There’s no guidance on this issue. It’s really time for regulation,” says Esther Colwill, president of Korn Ferry’s Global Technology Industries practice.
The question of who can say what on social media is just one of the myriad issues the government has with Big Tech these days. In recent weeks, dozens of states have sued many of the firms for allegedly holding monopolies over online search or suppressing competition in other ways. Some big names are also under scrutiny for their treatment of workers and other business practices. For their part, the tech firms deny the claims and will likely fight them hard.
But experts say the firms may want to press just as hard for rules around communication on social media, an area where more government intervention might help. In February, Facebook asked lawmakers to devise rules around harmful content, a different model for platforms’ legal liability, and a new type of regulator to oversee enforcement. Other firms have voiced at least an openness to some form of guidelines on what constitutes harmful speech. “It’s very nuanced, and everyone has a different view of what they’re looking for,” says Nels Olson, global leader of Korn Ferry’s Government Affairs practice.
To be sure, it’s unusual for corporate leadership to want regulation. But it isn’t unprecedented. Big oil and gas firms have asked governments to limit certain methane emissions. Drone manufacturers have asked aviation authorities worldwide for rules about where and how they can operate their flying machines. Even chocolate manufacturers have asked governments to step in and ban the use of child labor around the world. (Children have been used to harvest cocoa despite multiple industry efforts to curtail the practice.) While individual tech firms may disagree on how far content regulation should go, most appear ready to embrace some intervention. “Tech leaders need this and they know it,” says Colwill. “Clarity will help them balance where and how they need to invest.”