As part of its response, the company set up a responsible AI team to look specifically at the algorithms. At its semiannual conference, TwitchCon, the team’s principal product manager told Twitch streamers, “We are committed to being a leader in this area of responsible and fair recommendations.” He urged them to fill out demographic surveys to track potential discrimination.
But last week, the handful of people who made up the responsible AI team were laid off, part of a broader round of cuts that hit about 400 of the company’s 2,500 employees. Others who had worked on the issue as part of their jobs have been moved to other topics, according to a former member of the responsible AI team, who spoke on the condition of anonymity to discuss internal company matters.
“We wanted to make Twitch more equitable and also more safe for creators from all backgrounds,” the former employee said. “This is very much a step back.”
Twitch isn’t the only company to cut its responsible AI team in recent months. Twitter did the same in November, as Elon Musk took over the company and cut three-quarters of the workforce. And Microsoft cut its Ethics and Society team, which was one of the groups that led research on responsible AI at the company, as part of its massive round of layoffs in January.
Together, the moves form a pattern of companies rethinking or pulling back on ethical AI research, often as part of broader cost-cutting, even as new applications of the technology are booming. Ethical AI experts say the breakup of these teams could result in harmful products being released before their consequences are fully examined.
“To me, it feels like they’re in a race, and they just want to win the race, and anybody who’s doing anything else is useless,” said Timnit Gebru, a computer scientist who once helped lead Google’s ethical AI team, before she was controversially ousted in December 2020.
Fewer than 10 people lost their jobs when Microsoft cut its team, and some former members now work in other groups at the company focused on developing AI responsibly, Microsoft spokesman Frank Shaw said. “We have hundreds of people working on these issues across the company,” he added.
A Twitch spokesperson declined to comment on the company’s approach to AI and pointed to a blog post from its CEO that said the broader economic environment led to its layoffs. Twitter did not respond to a request for comment.
The cuts are coming just as a new wave of “generative” AI technology takes the tech world by storm, spurring a flurry of excitement, investment and product launches. Generative AI tools like OpenAI’s ChatGPT, Midjourney’s image generator, and Google’s Bard chatbot can create images, write computer code and hold humanlike conversations.
OpenAI, a smaller company that was founded as a nonprofit, began pushing its products out to the public last year, giving regular people the chance to interact with tools that had previously been confined to the testing labs of giants like Google and Microsoft.
The wild success of those start-ups’ tools prompted concern at the most powerful companies that they were falling behind the cutting edge, according to current and former employees of Facebook and Google, who spoke on the condition of anonymity to discuss internal deliberations. Companies that had moved more cautiously, taking feedback from internal teams that asked probing questions about the social ramifications of new products, are now moving faster to keep up with competitors and ride the wave of hype surrounding the technology.
On Tuesday, a large group of academics and business leaders including Musk, veteran AI researcher Yoshua Bengio and Apple co-founder Steve Wozniak signed a letter asking AI companies to pause the training of new, more powerful chatbots.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.
Gebru, who went on to start a nonprofit dedicated to researching AI’s potential harms and seeking solutions, said she has come to view tech companies’ internal AI ethics efforts as “window dressing” that they’re quick to cast aside when it’s inconvenient or when they’re cutting costs. Since firing Gebru, Google has also dismissed two other leading AI researchers over the publication of critical papers. One of them, Margaret Mitchell, was hired by New York-based AI start-up Hugging Face in November.
A Google spokesperson declined to comment on its approach to responsible AI, but the company has a “Responsible AI and Human Centered Technology” team that does research on the impacts of AI tech and works with product teams at the company, according to its website. At the time Gebru left the company, a Google executive posted a memo online saying she did not follow the company’s guidelines for publishing research, though other employees said those guidelines were not enforced as strictly for others.
The company has been rushing to launch generative AI products in the past few months, working to keep up with archrival Microsoft and hold on to its reputation as the top AI company in the world, according to current and former employees. Its blog posts and product launches have consistently mentioned the importance of developing the tech responsibly, and the company has been careful to label new, unproven products “experiments” or “previews” even as it makes them available to more and more people.
Rumman Chowdhury led Twitter’s acclaimed META team — an acronym for Machine Learning Ethics, Transparency, and Accountability — until Musk laid her off in November, along with every member of her 18-person team except one.
The team had been credited with innovative programs such as a “bias bounty,” in which the company offered prizes to outsiders who could demonstrate bias in its systems.
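The premise of such a bounty is that bias can be demonstrated with a straightforward measurement. The sketch below is purely illustrative, using invented group labels and made-up scores rather than any real Twitter system; a real entry, like those in Twitter’s 2021 image-cropping challenge, would run a production model over labeled data and report the disparity it found.

```python
# Illustrative only: the kind of disparity check a bias-bounty entry might
# rest on. The data and group labels are invented for this sketch.
from collections import defaultdict

# Hypothetical (group, score) pairs, e.g. saliency scores an image-cropping
# model assigned to faces from two demographic groups.
predictions = [
    ("group_a", 0.91), ("group_a", 0.88), ("group_a", 0.84),
    ("group_b", 0.71), ("group_b", 0.65), ("group_b", 0.69),
]

scores = defaultdict(list)
for group, score in predictions:
    scores[group].append(score)

means = {g: sum(s) / len(s) for g, s in scores.items()}
gap = max(means.values()) - min(means.values())
print(means)               # per-group average score
print(f"gap = {gap:.2f}")  # a large, consistent gap suggests the model favors one group
```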
AI ethics is “seen as a cost center, not a revenue center,” Chowdhury said. “I think smart companies know this will cost them in the long run. But a lot of them are thinking short-term right now.”
Still, there could be upsides to integrating AI ethics work more closely into product development, Chowdhury said, if companies like Microsoft and Facebook parent Meta are serious about doing so.
After Facebook was accused of allowing foreign governments to use its platform to post propaganda that influenced American voters during the 2016 election, tech companies invested heavily in teams that dug into the broader societal impacts of their products. AI tech, which helps run social media recommendation algorithms for Facebook, Twitter and YouTube, was a core part of those teams’ research. Employees put out papers detailing negative side effects of the tech and showing how human biases had worked their way into products used by millions of people.
Some of the ethics cuts are coming as waves of layoffs strike the tech industry.
A former employee at the social media firm Snap, who spoke on the condition of anonymity to discuss personnel matters, said the company’s layoffs last summer included one of its only employees working full-time on machine-learning fairness, derailing a nascent internal working group on the topic.
Snap spokeswoman Rachel Racusen said the company does not have a dedicated AI ethics team but continues to invest in employees focused on developing products safely and responsibly, including AI. Racusen confirmed one employee’s departure but said it did not derail the working group, which she said went on to complete its work on time.
There’s a lot of attention on the big questions of whether sentient AI could be developed soon and what risks might come with it, as reflected in the letter signed by Musk and other leaders calling for a pause in AI development. But focusing on those future questions may distract from problems that are real right now, Chowdhury said.
“I think it’s easy when you’re working in a pure research capacity to say that the big problem is whether AI will come alive and kill us,” Chowdhury said. But as these companies mature, form corporate partnerships and make consumer products, she added, “they will face more fundamental issues — like how do you make a banking chatbot not say racial slurs.”
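Chowdhury’s banking-chatbot example maps to a concrete engineering task: screening a model’s draft reply before it reaches a customer. The sketch below is a deliberately crude illustration with a hypothetical function name and placeholder blocklist tokens; production systems lean on trained safety classifiers, since keyword lists miss context and creative misspellings.

```python
# A minimal sketch of output screening for a customer-facing chatbot.
# BLOCKLIST holds placeholder tokens, not real terms; real deployments use
# trained classifiers rather than keyword matching.
BLOCKLIST = {"badword1", "badword2"}

def screen_reply(draft: str) -> str:
    """Return the draft if it passes the filter, else a safe fallback."""
    words = {w.strip(".,!?").lower() for w in draft.split()}
    if words & BLOCKLIST:
        return "Sorry, I can't help with that. Let me connect you with an agent."
    return draft

print(screen_reply("Your current balance is $42.17."))
```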
Those kinds of issues were the ones that slowed the public launch of unproven AI tools in the past. When Microsoft put out its AI chatbot “Tay” in 2016, it was quickly manipulated into spouting racism and denying the Holocaust. The company took Tay offline.
The new publicly available bots have had problems of their own. When Microsoft launched its Bing chatbot in February, some users quickly discovered that the bot would adopt an alternate persona with an aggressive tone, contradicting the human asking it questions and calling itself “Sydney.” Microsoft said the problem arose when users fed the bot leading prompts that pushed the conversation in a certain direction, and it responded by limiting the number of questions users could ask Bing in a row.
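Microsoft’s fix was less a change to the model than a constraint on the conversation around it. A hypothetical version of that kind of cap, with an invented limit and callback names, might look like this:

```python
# Sketch of a per-session turn cap like the one Microsoft applied to Bing.
# MAX_TURNS and the callback names are invented for illustration.
MAX_TURNS = 6

def run_session(get_user_message, generate_reply):
    """Drive one chat session, cutting it off after MAX_TURNS exchanges."""
    history = []
    for _ in range(MAX_TURNS):
        history.append(("user", get_user_message()))
        reply = generate_reply(history)
        history.append(("assistant", reply))
        print(reply)
    print("This conversation has reached its limit. Please start a new chat.")
```

A blunt cap like this trades away legitimate long conversations for predictability, since the persona drift reportedly surfaced mainly in extended sessions.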
The bots also repeatedly make up information and present it as fact, mixing it with legitimate information. Microsoft and Google have begun proactively pointing out this flaw in new-product announcements.
OpenAI, which helped kick off the current wave of AI excitement by launching its DALL-E image generator and ChatGPT conversation bot to the public before Big Tech companies had done the same with their own tools, is increasing its investments in responsible AI along with its investments in the technology, spokesperson Hannah Wong said. “While the entire company works closely together to develop and release safe and advanced AI systems, we are continuing to grow our teams dedicated to policy research, alignment and trust and safety, which are critical to this work.”
Ethical AI researchers who remain inside companies will have to adapt, showing their employers why listening to them will ultimately help the company avoid problems and make more money down the line, the former Twitch employee said.
“We need to make sure that communication is done in a manner such that it doesn’t seem like people who are talking about the responsible application [of AI] are gatekeeping, which we are not,” they said. “We are advocating for the safe and sustainable development of products.”
Nitasha Tiku contributed to this report.