Butterfly Ethics: Short-Term Actions with Long-Term Consequences
A clothing purchase, a line in a consulting report, or an AI prompt seems insignificant on its own. Yet each feeds into networks where minor choices cascade through supply chains, institutions, and people’s lives, creating a butterfly effect.
This article traces these cascades through three domains (fashion, consulting, and the meaning of work) to show how governance and everyday behavior must adapt when minor decisions can reshape entire systems.
Fashionable Ethics Under Strain
Current debates about the fashion industry illustrate how these cascades work.
A special issue of the Journal of Business Ethics highlights the industry’s dual nature: enchanting yet exploitative. Its problems include labor abuses, environmental damage, cultural appropriation, and discrimination.
Manufacturing is concentrated in the Global South, where protections are weak. Exploitation, slavery, environmental damage, and waste colonialism are not anomalies; they are systemic. Cost-cutting and speed translate directly into longer hours, dangerous conditions, and further environmental harm.
Brands must extend their responsibility to include not only direct employees, but also suppliers and vulnerable workers, especially during crises like the pandemic.
Firms also engage in “techwashing”: exaggerating their sustainability claims. Meaningful change requires NGOs, consultants, agencies, firms, and regulators to cooperate on information-sharing, accountability, and the fair distribution of risk.
Marketing exacerbates these problems. Social media encourages impulse buying, campaigns promote narrow beauty standards, and provocative ads reinforce harmful stereotypes and normalize gender-based violence, all for short-term sales gains.
Deloitte and AI’s Reliability Problem
The same dynamic, in which small lapses cascade into trust failures, extends beyond fashion.
In 2025, for example, Deloitte refunded part of its fee to the Australian government after a legal scholar discovered fabricated quotes and nonexistent sources in its report on welfare compliance. The firm had used OpenAI tools to draft sections without adequate verification, though the report’s main recommendations survived.
The real problem is that organizational procedures failed to detect the fabrications. What looked like a shortcut became a test case for institutional integrity. Reports influence decisions even when they are only partially read, because readers trust that facts and methods have been verified. Unverified AI-generated statements undermine that institutional authority.
This raises questions about internal controls, client oversight, and systems of professional accountability. Here the butterfly effect operates through expectations rather than supply chains: a few unchecked citations can undermine confidence in entire layers of review, even when the primary analysis remains sound.
Structural pressures suggest that more incidents are likely. Organizations are strongly incentivized to quickly integrate AI to remain efficient and competitive. However, developing documentation, testing protocols, and human review processes for AI tools is slow and resource-intensive.
This does not mean generative AI is unusable. Rather, its reliability depends on the wider socio-technical system in which the models are embedded. In the Deloitte case, the notable failure was not that hallucinations occurred (they are an expected failure mode) but that they slipped past processes that should have treated AI outputs with appropriate skepticism.
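That governance lesson can be made concrete as process design. Below is a minimal sketch in Python of a “review gate” that holds back AI-drafted citations until a human verifies them; the Citation class, the verified-reference register, and the example sources are hypothetical illustrations of such a control, not anything drawn from Deloitte’s actual workflow.

```python
from dataclasses import dataclass

@dataclass
class Citation:
    text: str           # the claim as it appears in the draft
    source: str         # the reference the draft attributes it to
    ai_generated: bool  # True if an AI tool drafted this passage

def review_gate(citations, verified_sources):
    """Return the citations that must be human-verified before sign-off.

    Policy: any AI-drafted citation is held back unless its source
    already appears in the firm's verified-reference register.
    """
    return [c for c in citations
            if c.ai_generated and c.source not in verified_sources]

# Hypothetical draft: two AI-drafted citations, one already verified, one not.
draft = [
    Citation("The framework lacks an internal review clause.",
             "Doe (2021)", ai_generated=True),
    Citation("Automation reduced appeal rates.",
             "Smith v. Commonwealth (2019)", ai_generated=True),
]
register = {"Doe (2021)"}  # sources a human reviewer has already checked

for c in review_gate(draft, register):
    print(f"HOLD: verify '{c.source}' before publication")
```

The logic is deliberately trivial; what matters is the default it encodes: unverified AI output does not reach sign-off.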
McKinsey’s Superagency Vision
McKinsey’s 2025 report on “Superagency” reveals a gap: companies claim AI ambition but lack maturity in execution.
While 92 percent of companies plan to increase their AI investment over the next three years, only 1 percent of leaders describe their organizations as mature, meaning AI is fully integrated into workflows and driving substantial business outcomes.
Interestingly, employees report higher levels of AI use than leaders estimate, and many expect their use of generative tools to expand quickly. This suggests that bottom-up experimentation is outpacing formal strategy, and employees are leading, not resisting, change.
The report portrays AI as a liberating technology. Like the internet, AI has the potential to democratize knowledge and automate routine work if it is deployed safely and equitably.
Although the framework rightly emphasizes governance over algorithms, it falls short in practice: self-reported survey data introduces bias, training-focused remedies overlook accountability gaps, and leaders measure AI systems against operational and performance benchmarks rather than ethical and compliance ones.
The document’s origins also shape its outlook. As a consulting document linked to a book promoting the idea of superagency, the report is designed to encourage investment and frame AI as a strategic opportunity that firms cannot ignore.
This does not invalidate the findings, but it means the report foregrounds executive action and competitive advantage over the everyday maintenance work, sector-specific regulation, and social risks that make AI adoption ethical or problematic in practice.
Existential Unemployment and Meaningful Work
These cascades raise a deeper question. When AI outperforms humans in meaningful work, what remains valuable about being human?
O’Brien’s article shifts the focus from productivity to meaning. If AI outperforms humans in research, philosophy, and art, three threats emerge: altered incentives that discourage people from entering demanding fields, reduced skills through outsourcing cognitive work, and the transformation of meaningful work into trivial activities, even when humans continue to perform them.
A single breakthrough by a machine can alter how people perceive the potential of human contributions, shaping how younger generations weigh the value of embarking on long, arduous journeys in research or the arts.
O’Brien distinguishes between automating routine tasks and automating meaningful work — research, art, creation — which hold objective and personal value.
Proposed solutions range from embracing games and expanding care work to restricting AI or enhancing human capacity.
Each solution has its limitations. Care work is meaningful, but it cannot replace intellectual creation. Appreciation matters, but it differs from achievement, and deskilling erodes even that more passive form of engagement.
Restricting AI protects meaning, but it also delays medical and environmental solutions. Enhancement technologies may lag behind automation, creating interim periods of “temporary obsolescence” that reshape values.
O’Brien acknowledges the moral and economic costs of delaying AI, such as forgoing medical and environmental solutions, but argues that these costs may be worth bearing. He concludes that the threat to meaning gives us a prima facie reason to slow AI development and choose patience over rapid automation.
Conclusion: Ethics Across Systems
Today’s crucial ethical questions are not about dramatic failures — they are about how small, routine choices accumulate.
A quick purchase, a rushed edit, a metrics choice, an AI shortcut: each seems trivial. Yet in interconnected systems, they cascade into labor abuses, governance failures, and cultural shifts no one intended.
Ethical governance means noticing these cause-and-effect chains and designing for them: treating small choices as consequential and protecting human judgment from wholesale technological replacement.