By Guillaume Pajeot

There is a quiet irony to the moment we find ourselves living through, a hum beneath the noise of every keynote panel, LinkedIn post, and anxious executive briefing about artificial intelligence (AI). For all the breathless commentary about speed, scale, automation, and the algorithmic march toward a supposedly frictionless future, the deeper truth is far more interesting: the more powerful our technologies become, the more essential our humanity becomes. It is a paradox that sits at the heart of modern leadership, and one that many executives have felt intuitively long before they began trying to articulate it.

AI may change the systems. But only humans can change the future.

At Insigniam, we’ve spent the better part of the last year wandering through conversations with CEOs, researchers, and technologists—some awed, some apprehensive, many simply exhausted by the pace of change—and the one through-line that emerges is this: transformation today is no longer an engineering endeavor. It is a human one. The tools matter, of course; they always have. But the tools are not the point. Not anymore.

This is a story about what remains uniquely, irreducibly human in an AI-saturated world—and why the capacity to preserve, elevate, and unleash that humanity may very well determine which enterprises thrive in the coming decade, and which struggle to catch their breath.

Yet, this is also a story about execution, not in the mechanical sense of deadlines or workflows, but in the lived experience of moving a big idea through the bloodstream of an organization until it becomes real. There is no algorithmic substitute for that.

To understand what a human-centric approach to tech-driven transformation really requires, we first have to understand where the fear comes from—and why it tends to overshadow the opportunity hiding beneath it.

Overcoming Apprehension

When OpenAI released ChatGPT in 2022, the world lurched forward as if someone had kicked the floor loose. The tool reached a million users within days. Offices buzzed with speculation: Would this replace jobs? Accelerate them? Flatten industries? Democratize them? The frenzy was almost comical in its intensity—and yet, for many workers, the reaction was not theoretical at all. It felt personal.

You hear it in the tone of a mid-career professional who hasn’t had time to reskill and wonders if the floor is shifting beneath them. You hear it in the nervous laugh of a senior leader who has spent decades building judgment and intuition the old-fashioned way. And you hear it in the exhausted recognition from executives who, for years, have said some version of the same lament: If only I had more time to think.

Artificial intelligence, curiously enough, might be the first technology in decades that can answer that lament.

“There’s a misconception that AI is coming to eliminate roles wholesale,” says Steve Steinberg, co-founder of Responsum and a partner at Elixirr. “But in most cases, roles don’t disappear—they evolve. When you remove the low-value, repetitive work that drains people of their energy, you actually give them the mental space to do the work that inspires them.”

It’s a deceptively simple idea: AI clears the underbrush so humans can do the work that only humans can do. But in practice, this idea has profound implications for how organizations operate, how leaders lead, and how enterprises execute.

To understand those implications, it helps to briefly return to a distinction that political philosopher Hannah Arendt drew decades before AI entered the public imagination. Arendt argued that human activity falls into three categories: labor, work, and action. Labor is the routine, the repeatable, the necessary. Work is the domain of craft, skill, and creation. And action—the third category—is the one she considered uniquely, properly human. Action is our capacity to initiate something new, to imagine what does not yet exist, and to bring it into being.

No machine, no matter how sophisticated, can take action in that sense. It can mimic, yes. It can combine patterns, project probabilities, and perform tasks with astonishing speed. But inspiration is not a statistical operation. Judgment does not emerge from a dataset. Imagination is not a byproduct of predictive modeling.

That distinction becomes important because the fear surrounding AI is not really about technology at all. It is about identity. It is about meaning. It is about the deeply human desire to feel that our contributions matter, and that our work reflects something of our spirit, not merely our output. When people worry that AI will “replace” them, what they are really afraid of losing is not their job but their dignity.

And yet, if you look closely at the organizations that are navigating this transformation well, you see a different story emerging—one that bends toward creativity rather than erosion, toward elevation rather than displacement.

As AI takes on more routine tasks, people must lean into qualities like empathy, judgment, and collaboration. These are not soft skills. They are execution skills.

What AI Can (And Can’t) Do

Let’s pause here and acknowledge a simple truth: AI can do many things extraordinarily well. It can process and synthesize massive volumes of information. It can automate tasks with mechanical precision. It can generate content, simulate future scenarios, and perform complex analysis in seconds that would take humans days or weeks.

These are impressive capabilities. They can also be dangerous if misunderstood.

AI is only as sound as the data on which it is trained. It lacks situational awareness, emotional context, moral reasoning, and common sense. It struggles with ambiguity, contradiction, and novelty—the very conditions in which humans operate almost constantly. It does not understand people. It does not understand power. It does not understand how culture moves through an organization like an invisible gravitational field, shaping behavior in ways that no system architecture diagram will ever capture.

“AI does not remove the need for human judgment,” says Mr. Steinberg. “If anything, it actually increases it.”

This becomes especially clear when you consider the role AI can play in organizational execution. At its most powerful, AI does not replace decision-making; it accelerates the path to decision-making. It does not generate alignment; it frees leaders to build alignment. It does not determine priorities; it helps leaders stay focused on the priorities that matter.

To do that, however, executives have to make a fundamental shift: instead of asking, “What can AI do?” they must ask, “What can AI help humans do better?”

This distinction may appear semantic, but it is the foundation of a human-centric approach to transformation. It is the shift that turns AI into a partner rather than a threat.

How Culture Dictates AI Success

There is another shift unfolding—one that many executives sense but have not yet named. It is the shift from viewing AI as a technical implementation to understanding it as a cultural transformation.

You see the contours of this shift in odd places. Take IKEA, for example. In 2021, the company retrained 8,500 call center workers—not to replace them but to elevate them. Instead of fielding routine inquiries, many of these employees became interior design advisors, supported by AI systems that handled the simpler questions. In the first year alone, this transformation contributed 3.3% of IKEA’s global revenue through remote design services, with projections to surpass 10% by 2028.

Or look at the U.S. Department of Health and Human Services, which runs an internal “Shark Tank” competition encouraging employees to bring forward ideas that AI could help bring to life. Winners receive funding, visibility, and organizational support. This is not automation. It is motivation.

These examples matter because they reveal a truth we too often overlook: culture is not a byproduct of transformation. It is the catalyst.

If executives want AI to strengthen execution—if they want faster cycles of learning, clearer decision pathways, more coordinated work across functions—they cannot begin with tools. They must begin with people. They must begin with meaning.

Dinika Mahtani, principal at Cherry Ventures, describes the shift this way: “Generative AI will accelerate personalized and applied learning. Linear career pathways will fall away. Upskilling will become fluid, adaptive, and ongoing.”

Implicit in her argument is a larger reality: the organizations that thrive in an AI era will be those that cultivate human adaptability at scale—not as a training initiative, but as a cultural identity.

This requires leaders to ask questions that feel almost philosophical: What does work actually mean now? Where does human value reside? Which tasks express a person’s unique judgment, creativity, or empathy—and which can be automated to create more space for those expressions?

These questions are not rhetorical. They shape operating models, role definitions, talent strategies, and leadership behaviors. They influence how people interpret change—and whether they embrace it or resist it.

The tension between fear and possibility often comes down to where organizations place their attention. Do they fixate on the tasks AI can perform, or on the human abilities it can amplify?

A Shift Begins

One of the most striking insights about AI’s organizational impact comes from research published in the MIT Sloan Management Review. Authors Zoran Latinovic and Sharmila C. Chatterjee argue that AI can serve as an antidote to the siloed, fragmented communication patterns that haunt many large enterprises. With AI-enabled systems, they write, “employees communicate, collaborate, and coordinate their workflows” in ways that create greater synchronicity and fewer blind spots.

In other words: AI dissolves the obstacles to execution that have nothing to do with strategy and everything to do with human coordination. The tools are not doing something magical. They are doing something human. They are connecting people.

And yet, as any leader knows, connection alone does not generate commitment. It does not inspire people to stretch, to imagine, to take risks. For that, you need more than data. You need narrative.

JW Dobbe, an Insigniam consultant who monitors AI trends across several industries, frames the challenge succinctly:

“The narrative surrounding AI is crucial,” says Mr. Dobbe. “How we describe, view, and implement AI shapes its impact, especially on us as human beings.”

Mr. Dobbe’s point is deceptively simple: technology adoption is a communication act before it is a technical act. Fear thrives in the absence of narrative. Cynicism thrives in the absence of meaning. And execution falters in the absence of both.

This is why responsible use is not a compliance checkbox. It is a leadership behavior. It is a signal. It informs how people interpret decisions, how they interpret their own agency, and how they decide whether to trust what their leaders are asking them to do. AI cannot dictate responsibility. Humans must. And this is where the conversation turns.

The Human Ledger

Before we talk about what human-centric transformation looks like, we must address a topic that rarely receives the attention it deserves: accountability.

As AI systems become more deeply embedded in workflows, executives will confront scenarios that defy traditional governance frameworks. Consider a situation in which an employee relies on AI for a project deliverable. The output is flawed, perhaps dangerously so. The employee followed all guidelines; the AI made an error. Who is accountable? The human? The algorithm? The leadership team that permitted AI use without adequate guardrails? The question is not abstract—it is imminent.

The instinct to treat AI as a “black box” or shift accountability to an IT function is not only misguided—it erodes trust. A human-centric approach begins with a simple principle: AI is a tool, not an agent. Accountability always rests with the human. Accountability is an act of leadership, not an act of system architecture.

This raises another question, one that is even more profound: What do leaders owe their people in an AI era? They owe clarity of purpose. They owe guardrails that are flexible but firm. They owe training that strengthens human capability, not just technical proficiency. They owe a culture where people are encouraged to think, to question, to imagine, to act.

All of these commitments converge in a critical realization: tech-driven transformation is really character-driven transformation.

Technology can accelerate a system. But character accelerates an enterprise.

Designing the Future

So what does a human-centric approach to AI actually look like? It looks like an organization that treats its people not as operators of a system but as stewards of a future. It looks like leaders who ask different kinds of questions, and who ask them more often.

It begins by elevating the work. That means using AI to strip away the tasks that erode motivation, fragment attention, and suffocate creativity. When people have space to think—truly think—strategy ceases to be an intellectual exercise and becomes a lived practice.

It continues by rebuilding cultural foundations. A human-centric transformation does not start with, “Here is the new tool.” It starts with, “Here is the story we are telling together.” In organizations where meaning is explicit, ambiguity becomes tolerable, and ambiguity is where transformation takes root.

It deepens with role redesign. As AI takes on more routine tasks, people must be trained—not just in how to use AI, but in how to lean into the qualities that distinguish them from AI. Critical thinking. Empathy. Judgment. Imagination. Collaboration. These are not soft skills. They are execution skills. They are the competencies through which strategy becomes reality.

It requires ethical guardrails. Not the performative kind, but the kind that leaders reference daily, not annually. These guardrails protect more than data. They protect dignity.

And it culminates in alignment between human intent and technological capacity. When AI is deployed without a vision of who it is meant to empower, it becomes a system enhancement.
When AI is deployed with a vision of who it is meant to elevate, it becomes a transformation.

Mr. Steinberg describes this alignment through a simple mantra: “Think big, start small, and scale fast.” In his view, the organizations that succeed with AI are the ones that don’t chase grandiosity. They chase clarity. They chase momentum. They treat transformation not as a singular event but as an evolving capacity.

The power of his perspective is that it returns agency to leadership, not to the tool. AI does not transform an organization. People transform an organization. AI merely accelerates—and sometimes amplifies—that transformation.

A Future Of Possibility

There is a line from Studs Terkel’s Working that resurfaces often in these discussions: “Most of us have jobs that are too small for our spirit.” This is not a lament; it is an invitation. And in the context of AI, it feels particularly resonant.

What if the true promise of AI is not efficiency, or productivity, or scale, but spaciousness? What if the great contribution of AI is the reclamation of human imagination inside organizations that have spent decades subordinating imagination to process?

David Brooks, in a column for The New York Times, captured this sentiment with unwavering clarity: “The most important thing about AI may be that it shows us what it can’t do, and so reveals who we are and what we have to offer.”

This is the core of a human-centric approach: AI shows us what is mechanical in our work. Humans show us what is meaningful. AI speeds. Humans deepen. AI performs. Humans enliven. Transformation requires all of it.

As climate models grow more complex, AI can help predict natural disasters with far greater accuracy. But the courage to act on those predictions comes from people, not systems. As healthcare organizations deploy AI-enabled diagnostic tools capable of identifying patterns invisible to the naked eye, it is the clinician—not the model—who must decide how to interpret, apply, and communicate those findings. As educators adopt adaptive learning platforms that personalize instruction, it is the teacher—not the tool—who shapes the emotional landscape of a classroom.

The same is true in business. AI may accelerate the work; only humans can execute the mission.

And that, in the end, may be the most urgent responsibility of any executive today: to protect the conditions in which humanity can flourish inside organizations that are becoming increasingly defined by the technologies they deploy.

Transformation is no longer a question of whether organizations will incorporate AI. That future has already arrived. The question now is how leaders will use AI to elevate not only the work their enterprises do, but the people who do it.

For all the talk about automation, prediction, and scale, the future of business will belong to the enterprises that treat their people as creators, not cogs; as actors, not operators; as thinkers, not nodes in a network.

To lead in an AI era is to hold two truths at once. The first is that AI will continue to reshape industries, roles, and expectations at an accelerating pace. The second is that the qualities that make us uniquely human—our imagination, our empathy, our moral judgment, our capacity for meaning—will matter more, not less.

The future will not be human or machine. It will be human through machine.

And in that future, the leaders who succeed will be the ones who never forget that technology may drive the efficiency of a system, but only people can drive the soul of an enterprise.

They will be the ones who understand that AI can change the work—but only humans can change the future.