Leaders everywhere are struggling with the same impossible choices: brilliant AI is creating human disasters, and organisations optimising for efficiency are accidentally teaching AI that humans are inefficient and no longer "good enough".
For too long the conversation about artificial intelligence has been framed as a sprint: who has the fastest model, the biggest dataset, the deepest pockets. That framing is wrong. The biggest lie about AI is that it’s a race for technology. It is not. It is a race for our human values, for the things that actually make us who we are.
So, over the last couple of years, as AI has continued to grow, I couldn't escape the hardest question in the field:
How do we make AI valuable — without losing what makes us human?
That question appeared in boardrooms and classrooms, in strategy sessions and informal conversations. I heard it from CEOs wrestling with transformation, from HR leaders trying to preserve dignity in automated hiring, from teachers worried about children’s curiosity, and from frontline teams juggling productivity targets with ethical concerns.
What followed for me was a sustained period of listening, testing, failing, and iterating. I witnessed practical challenges and real-world battles across organisations of every size and sector. Together with teams and leaders I helped build frameworks that clarified values, mapped risks, and surfaced practical guardrails. Some solutions worked. Some didn’t. All of it taught me something essential:
If we want AI to be valuable, we must first decide what — and who — we value.
All of those conversations, frameworks, failures and successes became my third book:
The Values of Artificial Intelligence: How Smart Leaders Capture and Connect AI Value to Human Values
Published by Routledge (Taylor & Francis Group) — launching January 2026.
This book is not only a distillation of lessons learned; it is also a call to action. It proposes tangible frameworks and tools leaders can use to translate abstract ethics into operational decisions.
It documents the huge payoffs AI brings but also the hard trade-offs teams face when they deploy automation into complex human systems, and it shows where simple tweaks — in measurement, incentives, or process design — can make the difference between an AI that amplifies human potential and one that diminishes it.
Why this matters now
Because what we teach AI about human values today shapes the world we are building right now. Values embedded in models and systems lock in incentives and behaviours. Those choices influence hiring, healthcare, education, justice, and how we relate to one another at scale. If leaders treat values as an afterthought, the systems we create will reflect that. If leaders make values central, AI can support human dignity, flourishing, and agency.
As conversations about AI’s role continue to evolve, it’s clear that action must follow reflection. Translating these insights into tangible impact is where initiatives like Boardroom to Classroom come in, turning knowledge into movement, and ideas into shared progress.
And that's not all
For every copy of The Values of Artificial Intelligence you order for your team, we will send a free matching copy to a school, university, or association.
Be sure to use the discount code ‘CISYCVAI20’ for 25% off your book preorder.
Together, we're tackling a global challenge: creating 1,000 AI Value Libraries for 1,000 institutions worldwide.
So, when you make a contribution, you're not just sharing knowledge; you're building a lasting legacy by shaping how AI delivers real business, technology, and human value.