Featured article in CEO World Magazine with Ilona Charles, Executive Director – CEO & Co-founder at shilo.

In this article, Ilona calls out a risk we may not be paying enough attention to, as we look to develop the next generation of leaders.

For decades, leadership development has focused on one core capability: how to lead people well. We teach managers how to motivate, give feedback, build culture and navigate complexity through human relationships. Entire industries have been built around this premise. But quietly, and faster than many organisations realise, the nature of leadership itself is changing. Today’s leaders are no longer just responsible for people. They are increasingly accountable for machines.

AI systems now draft strategy papers, recommend hiring decisions, prioritise customer interactions, flag performance risks and shape how work flows across organisations. In many teams, AI has effectively become a new kind of colleague: one that never sleeps, scales instantly and influences outcomes at speed. Yet we are still preparing leaders as if their only task is to manage humans. Australian research suggests many organisations are already relying on AI systems without leaders having clear visibility of how those systems operate or who ultimately owns their decisions. That gap is becoming one of the most significant leadership risks of the next decade.

Managing people is not the same as governing machines 

Leading people is fundamentally about behaviour. You motivate, coach, influence and course-correct through trust and conversation. Leading a machine or an agent is different. You do not motivate an algorithm; you interact with it, give it direction and govern it. The Governance Institute of Australia has been explicit that AI governance is a leadership and accountability challenge, not a technical task that can be delegated to IT teams.

When AI enters the workflow, leadership shifts from directing effort to exercising judgement. Leaders must decide where automation is appropriate, where human oversight is essential and who is accountable when things go wrong. This requires a different muscle, one that most leadership frameworks barely acknowledge. National AI research shows that while AI systems are rapidly being embedded into workplaces, leadership capability and governance maturity are lagging behind the technology itself. Many leaders will soon find that their next direct report is not human at all. Yet few organisations are explicitly training leaders to manage systems that learn, evolve and sometimes fail in unexpected ways. The risk is not that AI will replace leaders. It is that leaders will outsource judgement too quickly.

Intelligence is scaling. Wisdom is not. 

AI excels at intelligence. It can process vast amounts of information, identify patterns and produce recommendations that appear objective and authoritative. But intelligence is not the same as wisdom. Australian human rights research consistently emphasises that while AI can support decision‑making, it cannot exercise moral judgement, understand lived context or take responsibility for outcomes. Wisdom lives in context. It considers culture, timing, ethics and long-term consequences. It asks not just “Can we?” but “Should we?” and “What does this reinforce over time?” These are leadership questions and they cannot be delegated to machines. Regulators have warned that over‑reliance on AI recommendations, without active human oversight, can increase organisational risk rather than reduce it.

As intelligence becomes commoditised, the true leadership differentiators are becoming clarity, intentionality and judgement. Yet many organisations are moving in the opposite direction, rewarding speed and optimisation whilst underinvesting in the human capability to slow down, question outputs and hold the moral line. This is where leadership preparation is falling short.

The hidden accountability problem 

One of the least discussed challenges of AI-enabled organisations is accountability. When a human makes a poor decision, we know how to respond. When a system does, responsibility often becomes blurred. ASIC has found that AI adoption is accelerating faster than governance frameworks, creating a growing “governance gap” where accountability still rolls up to leadership under existing directors’ duties. Who owns the outcome when an AI-recommended decision causes harm? The leader who approved the tool? The team that trained it? The organisation that deployed it? Australian regulators have made it clear that existing accountability frameworks apply even when decisions are mediated by AI systems, meaning leaders cannot outsource responsibility to technology vendors.

In practice, accountability always rolls up to leadership. Yet many leaders are being handed powerful systems without the governance frameworks, decision rights or ethical guardrails required to oversee them effectively. This is not a technology problem. It is a leadership one.

What leaders now need to learn 

Preparing leaders to lead both people and machines requires a fundamental shift in how we think about leadership development. First, leaders must learn to treat AI as a judgement amplifier, not a judgement replacement. CSIRO’s work on responsible AI reinforces that AI should act as a support to human judgement, not a substitute for it, particularly in decisions that affect people’s lives and livelihoods. AI should widen thinking, surface alternatives and test assumptions, not close decisions prematurely.

Second, leaders must become fluent in asking better questions of systems, not just accepting polished outputs. The quality of leadership will increasingly be reflected in the quality of inquiry, not the speed of response.

Third, organisations must explicitly teach leaders how to design human-machine collaboration. That includes deciding where empathy, discretion and lived experience must remain human and where automation genuinely adds value. Australian policy research highlights that the central challenge is no longer whether organisations adopt AI, but whether leaders are equipped to govern human–machine collaboration responsibly.

Finally, leaders must reclaim their role as meaning-makers. As AI removes more transactional work, people will look to leaders not for answers, but for purpose, coherence and trust. Machines can optimise tasks, but they cannot create belonging or shared belief.

The leadership work we cannot automate 

We are at risk of preparing leaders for a world that no longer exists. The next generation of leadership will not be defined by technical mastery of AI tools, but by the ability to hold human judgement steady in a machine-accelerated environment. Leading people and machines is not about choosing between humanity and technology. It is about integrating both intentionally, ethically and with courage. If we fail to prepare leaders for this reality, the systems will still move forward. But leadership will lag and that is where the real risk lies.