AI ethics researchers are shifting focus from corporate social responsibility narratives to systemic critiques of how AI development concentrates power and resources. The AI Now Institute's Reframing Impact series features researchers like Timnit Gebru and Abeba Birhane challenging what they call the "AI for Good" PR strategy.
"It's a way to paint a positive image of AI technologies, especially in light of the backlash from grassroots resistance movements," Birhane told AI Now Institute. "'AI for Good' allows companies to say 'Look, we're doing something good! You can't criticize us.'"
Gebru argues the dominant AI paradigm involves "stealing data, killing the environment, exploiting labor" as companies pursue what she calls building a "machine god." The critique extends beyond individual company practices to question the resource concentration behind large language models.
Organizations building AI for less widely spoken languages face investor pressure to shut down whenever Big Tech announces models covering the same languages, Gebru reported. "When OpenAI or Meta comes with an announcement of a big model, potential investors literally told them to close up shop," she said.
Birhane warns that AI deployment may bring "surface-level improvements but also underlying destruction and division." She predicts AI systems will take time to reveal their full social impact, "encoding existing norms and stereotypes in a way that makes the rich richer and more powerful."
The researchers express particular concern about African governments adopting AI development rhetoric. "It's sometimes really scary the way you see some African governments jumping on the AI bandwagon and buying into this rhetoric that AI is going to 'leapfrog' the continent into prosperity," Birhane said. She notes "very little thought" goes to impacts on freedom of movement, speech, and knowledge ecosystems.
The Reframing Impact series represents a broader shift in AI ethics research from asking how to make AI systems fairer to questioning whether current development models should continue at all. The critique challenges assumptions that AI progress inevitably benefits society, demanding accountability for environmental, social, and labor costs.