Most professional services firms are doing the sensible thing. Dipping toes into AI. Writing a policy. Running a pilot. Asking for a strategy deck.
That isn't wrong. In regulated environments, strategy matters. Guardrails matter. Security matters.
Here is the hot take though. Your best AI strategy is a set of sensible guardrails, then an overinvestment in AI culture.
Because the speed of change has shifted. The traditional pace of IT innovation is no longer a useful reference point. If your mental model is "we will review this in six months", you are already behind the firms building AI muscle while you are still deciding whether it is real.
Strategy is necessary, but it isn't the main game
In every regulated firm, the first instinct is to plan. Define principles. Choose platforms. Map risks. Design governance. Write policy. Good. Do it.
Don't confuse the existence of a strategy with the existence of capability, though. Capability is what happens on a Tuesday afternoon when an adviser, an admin, or a paraplanner has real work to do and reaches for the tool. If the easiest path is "open a consumer chatbot and paste in client details", your strategy is irrelevant.
Culture is the sum of what people actually do when nobody is watching.
The "coding moment" is coming for knowledge work
We have already seen what modern models did to coding. And copywriting. And analysis. The pattern repeats. It starts as "helpful". It becomes "better than you think". It becomes "I can't believe we used to do this manually".
Professional services is next, because the work is readable, writable, and process-heavy. Drafting, checking, summarising, extracting, comparing, routing exceptions, completing checklists. That is the day-to-day.
AI won't replace trust or accountability overnight. It will compress the time it takes to produce many work products. That changes the economics of a practice and the supervision posture of a regulated firm.
If you wait until the models are "perfect", you aren't preparing. You are delaying. And delay has a cost. You don't build the muscle, you build the gap.
Guardrails: the minimum viable strategy
The trap is thinking "guardrails" means a 40-page policy. For most firms, the minimum viable strategy is a one-page stance that answers five questions.
- What data is allowed, and what is not?
- What tools are approved, and what is blocked?
- What must be logged, and where does it live?
- Where is human-in-the-loop mandatory?
- What is the escalation path when confidence is low?
If you can't answer those, you can't defend what is already happening in the business. And make no mistake. It is already happening. Shadow AI is not a future risk. It is present.
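One way to make the one-page stance enforceable rather than aspirational is to write it down in a machine-readable form that tooling can check. A minimal sketch in Python; the tool names and data categories here are illustrative placeholders, not recommendations:

```python
# A one-page guardrails stance encoded as data. Every name below is a
# hypothetical example; substitute your firm's own categories and tools.
GUARDRAILS = {
    "data_allowed": {"public", "internal_deidentified"},
    "data_blocked": {"client_pii", "tax_file_numbers", "health"},
    "tools_approved": {"firm_copilot"},            # hypothetical approved tool
    "tools_blocked": {"consumer_chatbot"},         # e.g. personal accounts
    "log_destination": "central_audit_store",      # placeholder location
    "human_in_loop_mandatory": {"client_communications", "advice_documents"},
    "escalation_path": "compliance_review_queue",
}

def is_permitted(tool: str, data_category: str) -> bool:
    """True only when both the tool and the data category are explicitly allowed."""
    return (
        tool in GUARDRAILS["tools_approved"]
        and data_category in GUARDRAILS["data_allowed"]
    )
```

The point of the default-deny check is that anything not explicitly approved fails, which is the posture a regulated firm can actually defend.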
Culture: the compounding advantage
Once guardrails exist, the next decision is cultural. Do you want AI use to be ad hoc, individual and inconsistent? Or deliberate, shared and supervised?
The second version doesn't emerge from a policy. It emerges from habits, patterns, and recurring evaluation.
An AI culture means people practise using the tools on real work. The firm has canonical patterns for common tasks. Outputs are reviewed against shared checklists. Improvements are shared, not hoarded. Tool changes are noticed quickly, not six months later.
In other words, the firm learns in public, inside the guardrails.
The old IT playbook doesn't fit
Most IT programs were built for a world where technology shipped slowly, upgrades were planned, and capabilities were stable for years. That world is gone. AI capabilities arrive in chunks. Sometimes big chunks. Sometimes monthly.
If you build a three-year roadmap based on what a model can't do today, you will end up optimising for weaknesses that disappear while you are still in procurement.
You need a different loop. Test real tasks. Measure outcomes. Update patterns. Ship the next workflow. That is why "wait and see" isn't neutral. It actively makes adoption harder, because others are compounding while you are pausing.
What "AI culture" looks like in a regulated firm
This isn't beanbags and hack days. It is operational discipline.
Canonical rule sets
Culture needs a common language. Define the rule set that applies across the firm. Tone and disclosure requirements. What must never be invented. What must be verified. What must be reviewed by a human.
If you are a compliance lead, this is how you standardise across practices.
Evals and checkpoints
If models change fast, you need a way to stay oriented. Build a tiny evaluation suite. Five to ten representative tasks. The definition of "good" for each. What must be logged and reviewed. Run it whenever you change tools or models. Don't rely on vibes.
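A tiny evaluation suite can be as simple as a list of representative tasks, each paired with its definition of "good", run through whatever generator you currently use and logged with a timestamp. A minimal sketch, assuming a single illustrative task and check; your own tasks and pass criteria would replace them:

```python
# A tiny eval suite: representative tasks, a definition of "good" for each,
# and a timestamped record of pass/fail. Task and check are illustrative.
import datetime
import json

def check_no_placeholder_figures(output: str) -> bool:
    # "Good" here means no unfilled placeholders survive into the draft.
    return "[FIGURE]" not in output

EVAL_SUITE = [
    {
        "task": "summarise_file_note",
        "input": "Client meeting 3 July: discussed contribution strategy.",
        "passes": check_no_placeholder_figures,
    },
]

def run_suite(generate, suite=EVAL_SUITE, log=None):
    """Run each task through `generate` and record pass/fail with a timestamp."""
    results = []
    for case in suite:
        output = generate(case["input"])
        results.append({
            "task": case["task"],
            "passed": case["passes"](output),
            "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })
    if log is not None:
        log.write(json.dumps(results, indent=2))
    return results
```

Run it on every tool or model change and compare the results file to the last run. That is the whole discipline: the same tasks, the same definition of good, a record you can point to.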
Curated data and retrieval boundaries
AI culture isn't "give it everything and hope". It is curated examples of good work, approved templates, approved policy inserts, and controlled retrieval sources. More context helps, but only when the boundaries are clear.
AI-friendly workflows
Culture becomes real when workflows change. Draft, review, send becomes the standard pattern for communications. File notes have a structured checklist and a completeness gate. Document packs are triaged and extracted the same way every time.
That is where you move from "people using AI" to "the business operating with AI".
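The "draft, review, send" pattern becomes concrete once the gate is explicit: a note clears only when every required section is present and a named human has signed off. A minimal sketch, with illustrative section names standing in for your firm's actual checklist:

```python
# A completeness gate for file notes: required sections plus a named human
# reviewer. Section names are illustrative placeholders.
from dataclasses import dataclass
from typing import Optional

REQUIRED_SECTIONS = ["client", "scope", "recommendation", "risks_discussed"]

@dataclass
class FileNote:
    sections: dict
    reviewed_by: Optional[str] = None  # named human sign-off, not a team alias

def completeness_gate(note: FileNote) -> list:
    """Return blocking issues; an empty list means the note may proceed."""
    issues = [s for s in REQUIRED_SECTIONS if not note.sections.get(s)]
    if note.reviewed_by is None:
        issues.append("missing human review sign-off")
    return issues
```

Because the gate returns the specific blockers rather than a bare yes/no, exceptions get routed with a reason instead of getting buried.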
The licensee lens: you are supervising a system now
Compliance teams aren't just supervising advisers anymore. They are supervising workflows that include machine-generated content. That changes the job. It is less about one-off policing and more about standardising approved patterns.
Evidence you can audit. Gates you can explain. Boundaries you can defend. Exceptions you can see and route. If it isn't logged and reviewable, it isn't deployable. That isn't a slogan. It is the operating rule for this era.
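What "logged and reviewable" means in practice is an append-only record per artefact: what was generated, whether review changed it, and who approved it. A minimal sketch; the field names are an assumption for illustration, not a standard:

```python
# An auditable record for one AI-assisted artefact: hashes of the generated
# and final text, whether review changed anything, and a named approver.
# Field names are illustrative, not a compliance standard.
import datetime
import hashlib

def audit_record(generated: str, final: str, approver: str) -> dict:
    return {
        "generated_sha256": hashlib.sha256(generated.encode()).hexdigest(),
        "final_sha256": hashlib.sha256(final.encode()).hexdigest(),
        "edited_in_review": generated != final,
        "approved_by": approver,
        "approved_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
```

Storing hashes rather than only the final text lets an auditor verify that the artefact on file is the one that was approved, and whether the human review actually changed the machine draft.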
The leadership lens: capacity back without compliance debt
For leaders, the attraction is obvious. Time back. Throughput. Less rework. The failure mode is also obvious. Chasing speed creates compliance debt, and the debt comes due later as remediation.
AI culture fixes that by making safe patterns the default. Staff know what is allowed. Outputs are reviewed consistently. Templates and checklists are shared. Exceptions don't get buried.
You don't need everyone to be an AI expert. You need everyone to operate inside the same supervised patterns.
The practical path: guardrails, then reps
If you want to keep this grounded, here is a simple approach.
- Set the one-page guardrails.
- Ship one supervised workflow.
- Run recurring evaluations as tools change.
- Train the organisation in role-based patterns.
- Repeat.
This is why we structure the work as short, sharp deployments and workshops, not long programs. The goal is to build the culture muscle through reps, inside guardrails, on real workflows.
One question to end on
If someone in your firm shipped a client-facing AI-assisted artefact tomorrow, could you show what was generated, what was reviewed, and who approved it?
If not, your next investment shouldn't be a bigger strategy. It should be the culture and the operating model that makes safe adoption the default. If you want help setting that up, start with an AI Fitness Review and we will map the first supervised workflow.