The Negativity of (Some) AI Discourses
In 2024, Peter Königs published a widely discussed paper titled “In Defense of Surveillance Capitalism.” A year later, in November 2025, he released another work, smaller in scope but (to me) more interesting, on what he calls the “negativity crisis” in AI ethics. He argues that philosophical debates on artificial intelligence focus almost exclusively on its problems and risks.
In contrast, positive or constructive perspectives on AI are rare. Although AI has the potential to improve human life, he observes, philosophers tend to overlook the ways in which it might do so. His central claim is thus that ethical discourse on AI is distorted by pervasive negativity.
Rather than defending AI against its critics, Königs asks why negativity dominates, analyzing the field from a philosophy-of-science standpoint. He attributes the focus on risks to three interrelated factors:
- AI ethics engages emerging technologies rather than timeless questions, positioning philosophers as commentators reacting to innovation.
- An implicit academic norm treats papers that explore AI’s ethical benefits, rather than its harms, as insufficiently novel or prescriptive.
- Institutional pressures, especially the need to publish and secure funding, encourage scholars to emphasize risks, since raising concerns appears more relevant than offering optimistic accounts.