McKinsey's "Superagency" 2025 Report: AI and Governance

In January 2025, McKinsey & Company released Superagency in the Workplace: Empowering People to Unlock AI’s Full Potential, a report that has already influenced how business leaders discuss artificial intelligence. The report captures a paradox at the heart of today’s workplace. Nearly every company is investing in AI, yet very few are realizing its potential. 

As the report puts it:

“Therein lies the challenge: the long-term potential of AI is great, but the short-term returns are unclear. Over the next three years, 92 percent of companies plan to increase their AI investments. But while nearly all companies are investing in AI, only 1 percent of leaders call their companies “mature” on the deployment spectrum, meaning that AI is fully integrated into workflows and drives substantial business outcomes. The big question is how business leaders can deploy capital and steer their organizations closer to AI maturity.” (p. 2, my emphasis)

The authors frame this challenge as a question: “How can companies harness AI to amplify human agency and unlock new levels of creativity and productivity in the workplace?” (p. 2).

The report’s main argument is that the issue is not a technological one, but one of governance. In this piece, I will provide an overview of the report and its limitations.

One thing should be noted first: the report does not aim to question AI or its uses. Rather, it presents the opportunities AI may offer businesses and the challenges of implementation. It is, after all, a consulting report, which explains why it includes paragraphs like the following:

“Imagine a world where machines not only perform physical labor but also think, learn, and make autonomous decisions. This world includes humans in the loop, bringing people and machines together in a state of superagency that increases personal productivity and creativity (see sidebar “AI superagency”). This is the transformative potential of AI, a technology with a potential impact poised to surpass even the biggest innovations of the past, from the printing press to the automobile. AI does not just automate tasks but goes further by automating cognitive functions. Unlike any invention before, AI-powered software can adapt, plan, guide — and even make — decisions. That’s why AI can be a catalyst for unprecedented economic growth and societal change in virtually every aspect of life. It will reshape our interaction with technology and with one another.” (p. 6, my emphasis)

Keeping in mind that the authors aim to present AI opportunities in a positive light and largely set aside the potential social, political, and cognitive difficulties these technologies and their use may cause, let’s examine the report itself.

What the Report Says and Shows

The report is based on extensive empirical research, including a survey conducted in October and November 2024 of 3,613 employees (“managers and independent contributors”) and 238 C-suite leaders (top-ranking executives), based primarily (81%) in the United States, with the remaining 19% in five other countries: Australia, India, New Zealand, Singapore, and the United Kingdom. Although the sample includes respondents outside the US, the findings discussed in the report “pertain solely to US workplaces.” The employees surveyed represented many functional areas, “including business development, finance, marketing, product management, sales, and technology.”

As indicated in the introduction to this article, the data shows that enthusiasm is outpacing strategy. Ninety-two percent of companies plan to increase AI investment over the next three years, yet only 1 percent of leaders call their companies “mature,” “meaning that AI is fully integrated into workflows” and driving “substantial business outcomes.” In other words, the overwhelming majority of companies have not yet reached that stage.

The gap between ambition and execution is huge. Yet, according to the same report, employees are advancing more quickly than leaders realize. While leaders estimate that only about 4 percent of employees use generative AI for at least a third of their work, employees report a rate roughly three times as high. Furthermore, 47 percent of employees, versus 20 percent of leaders, expect to use generative AI at that level within a year.

The authors draw a simple conclusion from this situation: employees are not resisting change; they are leading it. Employees aged 35–44 emerge as the most confident adopters, often acting as informal “AI help desks” within their teams. Most also want formal training and sanctioned opportunities to experiment.

The authors argue that businesses must think bigger and act faster. They should set concrete, outcome-focused goals. They should also invest in training, establish governance early on, and include non-technical employees in the ideation process.

According to the report, the end state is “superagency,” a condition in which technology amplifies human creativity and decision-making rather than replacing them. On this concept, they write:

“Superagency, a term coined by Hoffman, describes a state where individuals, empowered by AI, supercharge their creativity, productivity, and positive impact. Even those not directly engaging with AI can benefit from its broader effects on knowledge, efficiency, and innovation. AI is the latest in a series of transformative supertools, including the steam engine, internet, and smartphone, that have reshaped our world by amplifying human capabilities. Like its predecessors, AI can democratize access to knowledge and automate tasks, assuming humans can develop and deploy it safely and equitably.” (p. 6, my emphasis)

The idea of equitably deploying AI is more of a slogan than a practical concept. A consulting firm like McKinsey & Company does not aim to provide equitable capabilities to all companies; rather, it aims to provide competitive advantages to its clients. The rhetoric serves one goal: advising clients and potential clients not to miss the technological shift that AI will bring about.

What It Implies for Today’s Businesses

For contemporary businesses, the report’s implications are twofold.

First, the productivity potential is real. Independent field studies corroborate the report’s optimism. A large experiment on customer-support agents showed that a GPT-based assistant increased productivity by about 15 percent, with the biggest improvements among novices (Brynjolfsson et al. 2025).

Second, scaling AI is not a data-science problem but an organizational one. As the authors of the report put it: 

“Achieving AI superagency in the workplace is not simply about mastering technology. It is every bit as much about supporting people, creating processes, and managing governance.” (p. 10)

The report also shows that, although many employees are concerned about cybersecurity, inaccuracy, and privacy, they trust their employers more than other institutions to use AI responsibly. That trust matters: it reduces hesitation and supports the adoption of AI.

Beyond that, the authors call for monitoring AI for fairness, safety, and explainability, and they note that few of the leaders who benchmark their AI systems treat these dimensions as a priority. They write:

“One powerful control mechanism is respected third-party benchmarking that can increase AI safety and trust. (…) While benchmarks have significant potential to build trust, our survey shows that only 39 percent of C-suite leaders use them to evaluate their AI systems. Furthermore, when leaders do use benchmarks, they opt to measure operational metrics (for example, scalability, reliability, robustness, and cost efficiency) and performance-related metrics (including accuracy, precision, F1 score, latency, and throughput). These benchmarking efforts tend to be less focused on ethical and compliance concerns: Only 17 percent of C-suite leaders who benchmark say it’s most important to measure fairness, bias, transparency, privacy, and regulatory issues (…).” (p. 24)

Taken together, these findings show that leaders who use third-party benchmarking concentrate on operational and performance metrics, which limits their ability to address the ethical and compliance issues associated with AI use. If benchmarking is to serve as the control mechanism the authors describe, evaluations must also cover fairness, bias, transparency, privacy, and regulatory compliance.
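To make concrete what it would mean to put ethical dimensions on the same scorecard as performance, here is a minimal, purely illustrative Python sketch (my own, not from the report; the data, group labels, and metric choices are hypothetical) that reports accuracy, precision, recall, and F1 alongside a simple demographic-parity gap:

```python
# Illustrative only: a benchmark run can report a fairness measure
# next to the usual performance metrics. All data here is made up.
from collections import defaultdict

def performance_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for binary predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

def demographic_parity_gap(y_pred, groups):
    """Gap between the highest and lowest positive-prediction rates across groups."""
    by_group = defaultdict(list)
    for pred, group in zip(y_pred, groups):
        by_group[group].append(pred)
    rates = {g: sum(v) / len(v) for g, v in by_group.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical outcomes, model predictions, and a sensitive group label per case.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

report = performance_metrics(y_true, y_pred)
report["demographic_parity_gap"] = demographic_parity_gap(y_pred, groups)
print(report)
```

The specific metric matters less than the structure: fairness measures can sit in the same benchmark report as F1 score or latency, so whether they appear there is a governance choice rather than a technical constraint.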

Recognizing the importance of these dimensions also requires building the corresponding capabilities: leaders need training and skills to ensure that AI is implemented well. Again, this comes down to a governance issue.

Where the Report Falls Short

McKinsey’s framework does have its limitations. Since its data comes from self-reporting, adoption rates and impact estimates are subject to optimism bias. Employees may overstate the percentage of their work that is genuinely mediated by AI, while leaders may understate the amount of informal use. The resulting perception gap may be significant.

While justified, the emphasis on training risks oversimplifying what actually enables responsible performance. Training teaches users how to prompt, but it does not redesign workflows so that AI outputs are reviewable and errors are correctable. Meanwhile, fairness is mainly treated as a technical issue, yet fairness also depends on whether AI-assisted decisions feel fair, are explainable, and can be appealed (Jabagi et al. 2025).

The report pays little attention to the everyday maintenance work that makes AI viable. Behind each successful use case are hours of human correction and quality control, as well as dialogue with clients or colleagues to interpret the system’s output. Recognizing this labor is essential if “superagency” is to describe empowerment rather than an additional burden. This work is also central to the broader acceptability of AI.

One final limitation concerns the origin and context of the report itself. Superagency in the Workplace is a McKinsey & Company report written by practitioners. It was explicitly “prompted by” the book Superagency: What Could Possibly Go Right with Our AI Future, by Reid Hoffman and Greg Beato (Authors Equity, January 2025). This context helps explain the report’s rather optimistic tone and focus on leadership strategies, such as its call for respected third-party benchmarking.

While this does not invalidate the findings, it can shift the focus toward executive action and operational metrics, diverting attention from more complex labor processes and sector-specific regulatory barriers. Readers should keep this in mind when interpreting the data and recommended actions.

The Bottom Line

This report is best read as both diagnosis and provocation. Its data reveal momentum — employees are experimenting faster than leaders assume — and its optimism rests on a plausible structure: ambition combined with effective governance. 

Yet “superagency” should not be mistaken for a technological state of grace. True agency lies in the institutionalization of good habits: clarifying responsibilities, making systems inspectable, and ensuring that those who rely on AI remain able to question and correct it. 

For today’s organizations, the question is not whether AI will augment human capacity but whether they can build the conditions that make such augmentation ethical as well as sustainable. The challenge is cultural, organizational, and ethical — how to ensure that the future of work amplifies judgment rather than replacing it with unexamined certainty.

The authors close with the same call to action, looking back at the last major technological shift and urging leaders to act now:

“AI could drive enormous positive and disruptive change. This transformation will take some time, but leaders must not be dissuaded. Instead, they must advance boldly today to avoid becoming uncompetitive tomorrow. The history of major economic and technological shifts shows that such moments can define the rise and fall of companies. Over 40 years ago, the internet was born. Since then, companies including Alphabet, Amazon, Apple, Meta, and Microsoft have attained trillion-dollar market capitalizations. Even more profoundly, the internet changed the anatomy of work and access to information. AI now is like the internet many years ago: The risk for business leaders is not thinking too big, but rather too small.” (p. 2)