Most of what you've read about AI agents this year doesn't apply to your business.
We've been building agents for UK SMEs for the last 12 months. In that time, three things have actually changed in a way that affects what we can ship for clients. Here's the short version, and what we'd do about each one if we were running your shop.
1. Your AI can now use software like a person
Until late 2025, the biggest blocker on every automation project was the same sentence: "the tool we use doesn't have an API." That was the wall. If your booking system, supplier portal, or back-office app didn't expose a programmatic interface, the agent couldn't touch it, and the work stayed manual.
That wall is now gone. Three things shipped between October 2025 and April 2026 that broke it:
- Anthropic's Claude Computer Use went generally available. The agent gets a screenshot, a mouse, and a keyboard. It clicks buttons, fills forms, reads what's on screen, decides what to do next. Same as a junior would.
- OpenAI shipped GPT-5.4 with native computer use built into the model. Their Operator product graduated from preview to production.
- Google's Gemini Computer Use focused on browser-only workflows, which is about 70% of what an SME owner actually needs.
What this means in practice: if your reception team logs into a third-party portal every morning to pull bookings into a spreadsheet, an agent can do that now. If your bookkeeper logs into HMRC, downloads a PDF, and re-enters the numbers into Xero, an agent can do that. The model doesn't need an API anymore. It just needs a login.
The trade-off is reliability. Computer-use agents are slower than API-based agents (roughly 30 to 90 seconds per task instead of sub-second), and they fail in different ways. A button moves, a captcha appears, the page is slow to load. For high-volume real-time work, you still want a real API where one exists. For overnight batch work and the "this used to take Sarah 2 hours every Tuesday" jobs, computer use is now the right tool.
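Under the hood, every computer-use agent runs the same loop: look at the screen, ask the model what to do, perform the action, repeat. Here's a minimal sketch of that loop. The `decide` and `act` functions are injected placeholders so the example runs standalone; the real SDKs from Anthropic, OpenAI, and Google each have their own APIs and action schemas, and the names below are illustrative, not any vendor's.

```python
# Illustrative skeleton of the screenshot -> decide -> act loop that
# computer-use agents run. `decide` and `act` are injected stand-ins;
# real SDKs supply the model call and the mouse/keyboard driver.

from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # "click", "type", or "done"
    detail: str = ""   # e.g. which button, what text
    result: str = ""   # final answer when kind == "done"

def run_agent(goal, decide, act, max_steps=50):
    """Loop: observe, ask the model for the next action, perform it."""
    history = []
    for _ in range(max_steps):
        action = decide(goal, history)   # in practice: model + screenshot
        if action.kind == "done":
            return action.result
        act(action)                      # in practice: click or type
        history.append(action)
    raise RuntimeError("agent hit the step limit without finishing")
```

The step limit is the important bit for the failure modes above: when a button moves or a captcha appears, a bounded loop fails loudly overnight instead of spinning forever. You can also dry-run the loop with a scripted `decide` before pointing it at a real model.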
2. MCP quietly became the standard
The Model Context Protocol is the boring infrastructure piece nobody outside engineering talks about. You should care about it anyway, because it changes what your AI investment is worth in three years.
MCP is a small open standard for how AI models talk to tools. Anthropic published it in late 2024. By March 2026 it had crossed 97 million installs. Every major provider (Anthropic, OpenAI, Google, Microsoft) now ships MCP-compatible tooling. It's the USB-C of AI integration, and the vendor lock-in everyone worried about a year ago has mostly evaporated.
Why it matters for an SME project:
- The agent we build for you on Claude this month can be moved to GPT-5.4 next year with a config change, not a rewrite. That used to be a weekend's work per integration. Now it's an afternoon.
- New tools your team adopts (a new CRM, a new invoicing platform) can be plugged into your existing agent stack without us writing custom glue code, as long as they ship an MCP server. More and more do.
- You stop being held hostage by whichever AI vendor you picked first.
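To make "a config change, not a rewrite" concrete: wiring an MCP server into a host is a few lines of declarative config, not custom glue code. Below is the JSON shape Claude Desktop uses for its `mcpServers` block; the `example-xero-mcp` package name is made up for illustration, not a real published server.

```json
{
  "mcpServers": {
    "xero": {
      "command": "npx",
      "args": ["-y", "example-xero-mcp"]
    }
  }
}
```

Swapping the model host on the other side of that connection is the same kind of change: the tool definitions stay put, and only the host config moves.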
If a vendor pitches you an "AI integration" in 2026 and they can't tell you how MCP fits in, ask them why. The answer will tell you something.
3. The "one giant chatbot" pattern is officially dead
We wrote about this in the 3-agent stack post last quarter, and it's now mainstream. Both Gartner and Forrester are calling 2026 "the breakthrough year for multi-agent systems," which is consultant-speak for what builders have known for 12 months: small specialist agents beat one big agent every time.
The shape that works in 2026 is the same shape that worked in 2025, just with better models behind it:
- A router that classifies what the customer wants.
- One or more specialists that each do one job well.
- An escalator that hands off cleanly when something is out of scope.
The newer thing in 2026 is that these specialists can now call each other. A booking specialist can ask a billing specialist for a quote, get the answer back, and continue the conversation without the customer noticing a handoff. That's the multi-agent piece. It used to be experimental. It's now boring, in the good way.
For an SME running customer-facing automation, this matters because it's how you get past the "the bot is fine but it can't help with anything weird" problem. The weird stuff is just another specialist, sitting in the same stack.
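The shape described above fits in a page of code. Here's a deliberately tiny sketch of router, specialists, and escalator, including one specialist calling another. The classifier is keyword matching so the example runs standalone; in a real build it would be an LLM call, and every name and price here is illustrative, not a real framework or client figure.

```python
# Minimal sketch of router -> specialists -> escalator, with the
# booking specialist calling the billing specialist mid-task.
# Keyword routing stands in for what would be an LLM classifier.

def route(message: str) -> str:
    """Classify the request into a specialist, or escalate."""
    text = message.lower()
    if any(w in text for w in ("book", "appointment", "slot")):
        return "booking"
    if any(w in text for w in ("invoice", "price", "quote", "refund")):
        return "billing"
    return "escalate"   # the weird stuff goes to a human

def billing_specialist(message: str) -> str:
    return "Standard rate is £80; I can raise an invoice."

def booking_specialist(message: str) -> str:
    # The 2026 piece: one specialist asks another and carries on,
    # without the customer seeing a handoff.
    quote = billing_specialist("quote for a standard appointment")
    return f"Booked for Tuesday 10am. {quote}"

def escalate(message: str) -> str:
    return "Passing you to a colleague who can help with that."

SPECIALISTS = {
    "booking": booking_specialist,
    "billing": billing_specialist,
    "escalate": escalate,
}

def handle(message: str) -> str:
    return SPECIALISTS[route(message)](message)
```

The design point is the dictionary at the bottom: adding a specialist for "the weird stuff" is one new entry and one new routing rule, not a rebuild of the bot.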
What did not change in 2026
There is still no AGI. Your agent doesn't understand your business until you teach it. The model picking the right slot in a calendar is still doing classification, not magic. Anyone telling you otherwise is selling you something.
The setup still matters more than the model. We've watched a small business pay £15K for a fancy GPT-5.4 deployment that performed worse than a £3K Claude Sonnet build, because the setup was wrong. Architecture beats model choice almost every time.
The thing that gets your money back is still the same job it always was. Pick a task that takes someone in your business four hours a week, automate it end to end, measure how much time came back. Not "we have AI now." Specifically: "Sarah used to spend Tuesday morning on this, and she doesn't anymore."
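That measurement is simple arithmetic, and worth writing down before you spend anything. A back-of-envelope payback check, with placeholder figures (the hours, hourly cost, and build cost below are illustrative, not client data):

```python
# Back-of-envelope payback on one automated task.
# All figures are hypothetical placeholders.

def annual_saving(hours_per_week: float, hourly_cost: float,
                  working_weeks: int = 46) -> float:
    """Hours handed back per year, priced at what that time costs you."""
    return hours_per_week * hourly_cost * working_weeks

def payback_months(build_cost: float, saving_per_year: float) -> float:
    """How many months until the build has paid for itself."""
    return build_cost / (saving_per_year / 12)

saving = annual_saving(4, 25.0)        # 4 h/week at £25/h -> £4,600/year
months = payback_months(3000, saving)  # £3K build: ~7.8 months to break even
```

If the number that comes out is "years", pick a different job; if it's "months", you have your first agent.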
What we'd actually do if it were our shop
Three things, in this order:
- Make a list of things in your business that don't have an API. A year ago that list was uninvestable. Today it's the most interesting list you have.
- Don't let any vendor lock you into a single AI provider. If they can't explain how MCP fits in, walk away.
- Pick one job, not ten. The first agent you ship should be embarrassingly narrow. Embarrassingly narrow is what works.
If you're an SME owner reading this and thinking "OK, so what should I do about this for my business?", that's a 20-minute call. We'll look at your operations, point at the one job worth automating first, and tell you on the call whether AI is even the right tool. No pitch deck. Book here.