AI productivity without burnout: the AI vampire problem

There is a moment with agentic AI that feels incredible, and a second moment nobody mentions in vendor demos.

You describe an outcome. The work appears. Drafts, plans, artefacts. It is fast and honestly a bit addictive.

Then comes the other moment. You look up and realise you have been "piloting" the model for four hours straight. You are mentally fried. You have shipped a lot. But you are not sure you can defend why you shipped it, or whether you checked the right things.

In regulated professional services, that second moment matters more. When AI starts doing real work, the limiting factor isn't typing speed. It is judgment, supervision, and stamina.


10x is real. The cost is too.

There is a growing argument in the tech world that AI has crossed an event horizon. It can produce 10x output for certain classes of work, and it can also drain the human pilot to the point of exhaustion.

The dynamic is easy to recognise. The model makes progress so quickly that you keep feeding it more. Each step requires a decision. Is this good enough? What did it miss? What is the risk? The dopamine of quick wins turns into long, intense sprints.

You aren't doing more "work". You are doing more high-intensity decision-making per hour.

That is why people burn out. Not because the tool is slow. Because it is fast.

The value capture trap

Once a tool can give someone 10x output, two things happen inside a business. Leaders start expecting 10x output. The people doing the work start absorbing the cognitive load.

If you are a practice lead, that shows up as unrealistic throughput expectations, quality dropping quietly under speed, rework and remediation later, and key staff exhausted and harder to retain.

If you are a compliance lead, it shows up as something worse. Inconsistent review standards across practices. "Shadow AI" usage to hit the new pace. Weak evidence trails when supervision asks "how was this produced?"

Speed without supervision doesn't create advantage. It creates fragile outcomes. And fragile outcomes are exactly what regulated firms can't afford.

Why this hits professional services harder than people expect

In financial advice, accounting, legal, debt and mortgage, and similar professional services, the work product has a different shape. It is client-impacting. It is reviewable after the fact. It carries accountability.

So the goal isn't "make people faster". The goal is make the safe path the easy path. That is a cultural and operational design problem, not a model-selection problem.


The fix isn't a bigger strategy deck. It is an operating rhythm.

Most firms respond to AI anxiety by asking for strategy. That is understandable. But "AI vampire" risk doesn't get solved by a roadmap. It gets solved by an operating rhythm that forces sustainable pace and supervision-ready evidence.

Here is what that rhythm looks like in plain terms.

Timebox the pilot work and make it normal

Agentic work expands to fill the day. If you don't timebox it, it will eat everything. Set a default pattern. Sixty to ninety minutes of focused build. Fifteen minutes of checkpoint review. Ship or park the change.

Build checkpoints into the workflow, not into "be careful"

"Trust but verify" needs to be real. For any client-impacting output, the workflow should have a visible gate: draft, review, approve, send. The review should be anchored to a checklist that asks the boring questions. Are the required disclosures present? Are the facts grounded in source artefacts? Have exceptions been flagged rather than smoothed over? Are the tone and template aligned?

When people are tired, checklists win. Memory fails.
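To make this concrete, the gate can be as simple as a function that refuses to approve until every checklist item is explicitly ticked. This is a minimal sketch, not a product: the checklist wording, field names, and `review_gate` function are all illustrative, and your firm's actual checklist items would replace them.

```python
# A minimal sketch of a checkpoint gate for client-impacting output.
# Checklist items and names are illustrative, not a standard.

REVIEW_CHECKLIST = [
    "Required disclosures are present",
    "Facts are grounded in source artefacts",
    "Exceptions are flagged, not smoothed over",
    "Tone and template match the approved pattern",
]

def review_gate(draft: str, answers: dict) -> str:
    """Return 'approve' only when every checklist item is explicitly ticked.

    `answers` maps checklist item -> True/False. Anything unanswered
    counts as a fail: tired reviewers don't get the benefit of the doubt.
    """
    missing = [item for item in REVIEW_CHECKLIST if not answers.get(item)]
    if missing:
        return "park"  # park the draft and record which checks failed
    return "approve"
```

The design choice that matters is the default: an unanswered item parks the draft rather than letting it through, which is what "ship or park" looks like in practice.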

Log the evidence by default

Compliance teams don't need a lecture on AI. They need evidence. If an output can't be reconstructed and reviewed later, it isn't deployable in a regulated firm.

At minimum, log the input artefacts used (or references to them), the output drafts, the approvals and edits, and the exceptions and uncertainty flags. This is how you replace "trust me" with "here is what happened".
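The minimum log above amounts to one structured record per output, appended somewhere durable. As a sketch, assuming a simple append-only JSON Lines file (the field names here are illustrative, not a schema your tooling already uses):

```python
import datetime
import json

def log_evidence(path, *, inputs, output_draft, approvals, exceptions):
    """Append one supervision-ready record per output.

    Field names are illustrative. The point is that every record can be
    reconstructed and reviewed later without relying on anyone's memory.
    """
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "inputs": inputs,            # source artefacts used, or references to them
        "output_draft": output_draft,
        "approvals": approvals,      # who approved, and what they edited
        "exceptions": exceptions,    # uncertainty flagged, not smoothed over
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

An append-only file of records like this is the difference between "trust me" and "here is what happened": a compliance lead can replay any output from its inputs, edits, and approvals.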

Standardise approved patterns to reduce cognitive load

Culture isn't "everyone becomes an AI wizard". Culture is a shared set of safe patterns, a prompt and playbook library people actually use, and a shared definition of done.

If every adviser invents their own way of using AI, you get inconsistent output quality, inconsistent risk, and exhausted reviewers. Standardisation is kindness. It reduces decision fatigue.

Run small evals to keep expectations grounded

Models change. Capabilities jump. Failure modes shift. Build tiny internal benchmarks: five to ten tasks that represent your real work, run monthly or whenever you change tools, with a record of what improved and what regressed.

This prevents the CEO myth ("AI is perfect now, so everyone should be 10x") and the staff myth ("AI is useless, so we can ignore it"). Both are dangerous, and both are expensive.
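The monthly eval doesn't need tooling. A sketch, assuming each task is just an input plus a pass/fail check (all names here are hypothetical):

```python
def run_evals(tasks, model_fn, baseline_results):
    """Run each representative task and record what improved or regressed.

    `tasks` maps task name -> (task_input, check), where check(output) -> bool.
    `baseline_results` maps task name -> whether it passed last time.
    Everything here is illustrative scaffolding, not a framework.
    """
    report = {}
    for name, (task_input, check) in tasks.items():
        passed = check(model_fn(task_input))
        before = baseline_results.get(name)
        if before is None:
            status = "new"
        elif passed and not before:
            status = "improved"
        elif before and not passed:
            status = "regressed"
        else:
            status = "unchanged"
        report[name] = {"passed": passed, "status": status}
    return report
```

Five to ten entries in `tasks`, run monthly or on every tool change, and the "improved"/"regressed" column is the record that keeps both myths honest.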


The GTM implication: sell sustainable, supervision-ready speed

This AI vampire theme should change how you position AI services in professional services. The winning offer sounds more like this. We ship a workflow. We build the guardrails. We set the rhythm. We reduce rework and fatigue. We produce evidence a compliance lead can stand behind.

That is why Adapt2AI productises work into short, sharp deployments (a three-day Pilot-in-a-Box) plus role-based workshops. The workshop value isn't tool training. It is operational training. How to timebox agentic work. How to inject context methodically. How to use checkpoints and checklists. How to log evidence and handle exceptions.

That is how you get the upside without eating your team.

The test

If your firm got 10x faster tomorrow, would your supervision model keep up? Could you show what was generated, what was reviewed, and who approved it?

If not, the next step isn't to squeeze more output out of people. It is to build the guardrails and culture that make speed sustainable. Start with one supervised workflow. Train the team on that pattern first, and make it the default before you scale it.