Cover of Daniel Susskind’s A World Without Work, a book about technology, labour, and the future of work.

If AI Takes More Work, What Keeps Us Going?

Huibert Evekink

I first read Daniel Susskind’s A World Without Work in 2020, before ChatGPT and Claude became household names a couple of years later.

Rereading it today is a strange experience.

What then felt like a serious but slightly distant argument now seems remarkably on target, which speaks to the decades of work and research Daniel and his father Richard have put into the future of work. The Susskinds saw the deeper pattern: technology might not simply help people do their jobs better. It might reduce the need for human labor itself. That is not easy to say without sounding dramatic, unless you are British.

The AI debate is full of people pretending to know the future with supreme confidence. The job-apocalypse story says human work is basically a dead man walking. On the other side, the “keep calm & carry on” story says that technology replaces some tasks, raises productivity, creates demand, and eventually produces new work for humans; just look at the Industrial Revolution.

Both stories are probably partly right, but we can safely assume that most organizations will try to do more with fewer people and more technology. In the death match for speed and dominance, reducing dependency on human labor seems a logical business case. The recent waves of tech layoffs and headcount reductions, combined with the increased token budget for the survivors, are indicative of what will likely happen everywhere (AI-washing aside).

This time may be different

Susskind’s warning is that as machines become capable of doing more of the physical, emotional, and cognitive work that once required human language, judgment, creativity, coordination, and problem-solving, the balance between “complementing” and “substituting” may shift away from historical patterns. The many examples of cutting-edge AI programs and robotics he mentions sound weirdly outdated only six years later.

Work does more than pay the bills

Work has been carrying more than most of us consciously admit, especially in wealthy societies where paid employment has quietly become something far larger than survival.

For most of us, work has become much more than an economic necessity. For better and worse, it gives adult life structure, identity, social contact, status, learning, and a sense of usefulness.

Strip that away, or even just threaten to strip it away, and something deeper than financial anxiety kicks in. The question stops being “will I have income?” and starts being “will I still matter?” That is the question most productivity conversations about AI quietly avoid.

Susskind is very strong on what societies and governments should prepare for, and I agree with him. If work becomes less central, we will need new ways to distribute income, organize contributions, and preserve dignity. Otherwise, the result will not be peaceful leisure, but resentment, exclusion, and social unrest.

But rereading the book today, I was left with another question too: what can we do as human beings while all this is happening, other than outsource the whole meaning problem to the state or the organization?

Most of us are not sitting in policy or board rooms, redesigning the future of labor. Instead, we are being bombarded with AI breakthroughs, layoffs, productivity promises, doom stories, and cheerful posts about how one person with 600 agents did the work of an entire multinational and became a billionaire…overnight.

If we knew for certain that AI would make many jobs disappear, we could at least begin the slide down the change curve. We could grieve, resist, adapt, and start building something else. If we knew for certain that AI would simply create better work, we could train confidently for that future, too.

Nobody does.

So we will have to develop ourselves, stay relevant, and keep improving, facing a future that refuses to say whether that development will protect us, reward us, or simply become part of the next thing to be automated.

What we all need to work on

That leaves us with a practical personal problem. Not a policy problem; that one belongs to governments and institutions.

If we cannot rely on certainty about the future, and we cannot afford to simply wait and see, then the question becomes: what do we actually work on?

  • How do we stay relevant without chasing every new tool?

  • How do we use AI without outsourcing too much of our thinking?

  • How do we protect focus when everything becomes faster and more interruptible?

  • How do we build expertise when some of the learning pathways are being automated away?

  • How do we remain responsible when the machine makes it so easy to produce confident output?

  • How do we stay emotionally grounded when the future keeps changing its costume?

None of these questions has clean answers, and anyone selling you one is probably selling you something else, too.

The immediate answer is not prediction. It is capability.

We cannot know exactly which jobs will remain, change, shrink, or disappear. But we can work on the human capabilities that matter across all those futures: using AI well, protecting focus, building expertise, taking responsibility, staying emotionally grounded, and continuing to learn when the ground is moving.

Because whatever future arrives, becoming faster but less able to think is not progress. Becoming more productive but more dependent is not resilience. Becoming more efficient but less connected is not human development. It is just a smoother slide into fragility.

The longer horizon: life beyond (paid) work

There is a larger horizon behind this, but it should not distract from the practical work in front of us.

If AI eventually reduces the role of paid labor more dramatically, the same capabilities will matter even more. If work no longer provides the rhythm, identity, usefulness, discipline, and belonging it once did, we will need to build those things more deliberately ourselves.

But that larger horizon brings us back to the same practical point. The future may bring more work, less work, different work, or a messy combination of all three. In every version, arriving with sharper thinking, stronger judgment, better relationships, and more independence seems like a good idea.

Coming next: Futurebraining AI Work Intelligence

That is what Futurebraining is built around. Not the macro story (that one will keep writing itself with or without us) but the personal one. The one happening in your inbox, your meetings, your decisions, and your development right now.

Futurebraining starts with a practical promise: helping people and teams work at peak performance with AI without becoming dumber, more dependent, or more disconnected in the process.

That is deliberately work-focused and practical. It is about what people are facing now: AI entering roles, teams, decisions, workflows, learning, leadership, and identity at high speed.

In May, we will launch Futurebraining AI Readiness Intelligence for people already in work, whether employed, independent, or leading teams. The free Pulse will be the first entry point: a short assessment that provides a snapshot of where you stand with AI today and what that may mean for your capability, resilience, and development.

The full diagnostic goes deeper. It turns that first snapshot into a more complete intelligence report for people and teams who want a clearer view of their AI readiness, strengths, risks, and next steps.

A student version will follow during the summer.

Neither will tell you your future. That would be impressive, and also a lie, but they can help you see whether AI is making you stronger or weaker today.
