20 April 2026

How your website infrastructure decides whether AI can see you

Here's a thought experiment. A prospect in Dubai opens ChatGPT and asks, "Who should I talk to about setting up a DIFC will for my family?" The AI thinks for a few seconds, then returns three law firm recommendations, a short comparison, and a suggestion of what to ask on the first call.

Your firm isn't in that list.

It isn't that you're not qualified. You might be the most experienced practice in the city. It's that the AI couldn't read your website properly, couldn't understand what you do, and couldn't find a clean enough answer to cite. So it reached for firms it could understand instead.

This isn't a hypothetical. It's what's happening right now to most brands that built their sites on the wrong foundation. And the uncomfortable truth most marketing leaders are waking up to is this: in the AI era, your content management system is no longer a back-office tool. It's the thing that decides whether you exist.

This post walks through what we learned auditing a new UAE law firm's AI visibility, why their ERP-powered website left them in an AI blind spot, and what brand and marketing leaders should be doing about their own infrastructure before it's too late.

The new reality: AI search is a different game

Traditional search engine optimisation trained a generation of marketers to think about keywords, backlinks, and page-one rankings. That playbook still matters, but it's no longer the whole story.

AI assistants like ChatGPT, Perplexity, Gemini, and Claude don't deliver ten blue links. They deliver an answer. One answer. Maybe with three or four cited sources, if you're lucky. And research from Authoritas suggests a site previously ranked first in traditional search could lose roughly 79% of traffic for that query when results appear in an AI overview instead. That's not a modest shift. That's an extinction-level event for brands that assumed search traffic was all that mattered.

The mechanics have also changed. AI systems have moved from just crawling web pages to actively fetching and parsing structured data during their response generation phase. In plain English, they're not just skimming your homepage. They're looking for machine-readable signals that tell them exactly what your business does, who you serve, and why they should cite you rather than a competitor.

Get those signals right and you become the source AI reaches for. Get them wrong, and you become invisible.

What we did for the law firm

A new Dubai-based law firm, specialising in wills and asset protection for expatriates and family businesses, shared a specific need. They wanted to understand how visible they were in AI tools and what they could do to improve. We deployed our AI Brand Visibility Audit tool and ran a structured diagnostic across three dimensions.

Mapping the competitive set

First, our tool identified the firms that AI assistants were actually recommending for the client's core services. Not the firms the client thought they competed with, but the firms the AI was citing when prospective clients asked questions. That list included some expected names and two surprises, one of which was a boutique practice roughly the client's size that had quietly become the default recommendation for DIFC wills.

Understanding what people actually ask

Next, we reverse-engineered the prompt landscape. What would people in the UAE actually type into ChatGPT or Perplexity when they have a question about wills, succession planning, or asset protection? The answer is rarely the neat keyword phrases that show up in traditional keyword tools. It's full, conversational questions like, "What happens to my property in Dubai if I don't have a will?" or, "Is a DIFC will valid for my assets in India?"

This matters because AI queries are longer and more conversational, averaging around 23 words versus roughly four in a Google search. Optimising for the old four-word queries won't get you cited for the new 23-word ones.

Testing three dimensions of visibility

We then built a monitoring framework across three specific categories:

  • Awareness: does the AI know the firm exists? Does it surface in response to open-ended industry prompts?

  • Sentiment: when the AI does mention the firm, what's the tone and what attributes get highlighted?

  • Recommendations: when someone asks for a firm to hire, does the AI actually recommend this one?

The results were useful. The client had almost no presence in recommendation prompts, inconsistent sentiment across mentions, and weak awareness even for very specific DIFC-related queries where they genuinely offer a differentiated service.

Where the project hit a wall: the ERP website builder problem

With the audit complete, we had a clear set of recommendations. Rewrite specific service pages with direct, structured answers to the conversational queries prospects were putting to AI assistants. Add entity-level information the AI could use to confidently identify the firm. Introduce FAQ and Article schema so AI crawlers could parse the content reliably. Ship a programme of topical authority content to signal expertise.

Standard practice. Straightforward. Except the client's website was built on the website builder module of an enterprise resource planning (ERP) platform, a common choice for businesses that want accounting, operations, and a public-facing site all running under one vendor.

And here's where a platform designed for enterprise resource planning started to reveal its limits as a content management system.

An ERP suite runs accounting, inventory, HR, and CRM, often well. Many of these suites now bundle a website builder module as an add-on, which sounds efficient on paper. One vendor, one login, one invoice.

As a content management system for a firm serious about AI visibility, though, that bundled website builder struggles. Here's what we ran into:

Metadata and schema are constrained. ERP website builders typically handle the basics: page titles, descriptions, sitemaps, alt tags. But when you need to inject custom JSON-LD schema for Organization, LegalService, FAQPage, Person, and Article markup, which is exactly what AI systems parse most heavily, you're fighting the platform. In this case, adding proper schema required a developer writing custom templates, and even then the implementation was fragile: dynamic content containing quotes, ampersands, or HTML entities broke the JSON-LD silently. The schema either worked or it didn't, and when it didn't the AI simply ignored the block and moved on. Silent failure is the worst kind of failure: you think you're visible and you're not.
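A minimal sketch of the fix a platform should make easy: build JSON-LD with a real serialiser rather than string templates, so embedded quotes and special characters can never produce broken markup. The `faq_jsonld` helper and the sample content are our own illustration, not the client's code:

```python
import json

def faq_jsonld(questions):
    """Build FAQPage JSON-LD from (question, answer) pairs.

    json.dumps escapes embedded quotes and control characters, the exact
    inputs that silently broke the hand-templated JSON-LD described above.
    """
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in questions
        ],
    }, ensure_ascii=False)

# Content containing quotes, the kind that broke the ERP templates:
block = faq_jsonld([
    ('Is a "DIFC will" valid for assets in India?',
     'It covers Dubai assets; foreign assets may need separate advice.'),
])

parsed = json.loads(block)  # round-trips cleanly, so a crawler can read it
```

The point is architectural, not language-specific: any stack that serialises structured data instead of concatenating strings avoids the silent-failure class entirely.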

Publishing is gated and slow. AI visibility isn't a one-time project. It's a continuous cycle of publishing, measuring, refining, and republishing. When every content update requires coordinating with whoever maintains the ERP instance, when A/B testing a headline means raising a ticket, when schema changes require developer time, the tempo of optimisation collapses. You can't iterate your way to visibility if each iteration takes a fortnight.

Content and presentation are entangled. In this type of bundled platform, content lives inside page templates. It isn't stored as clean, structured data that can be repurposed across channels. If an AI assistant is looking for a crisp, factual answer about DIFC will eligibility, it has to extract that answer from a marketing page designed primarily for human visitors. Sometimes it succeeds. Often it doesn't, and reaches for a cleaner source instead.

None of this makes the ERP a bad product. It makes its website builder the wrong tool for the job. You wouldn't run your accounting in a headless CMS. You also shouldn't run your AI visibility strategy in a website module bolted onto your ERP.

What AI actually needs to find you

Before we get into the fix, it's worth being clear about what AI systems genuinely reward. Three things matter most.

Structured, machine-readable content

LLMs prefer content that's modular, explicitly labelled, and separated from presentation. Structured content breaks information into consistent, reusable fields, making it easier for AI to parse and understand content, generate accurate, context-rich answers, and ingest and retrieve information instantly. The difference between a wall of prose and a properly modelled set of content entities is the difference between an AI guessing what you do and an AI knowing what you do.

Schema markup that reflects the page

Schema is the vocabulary that tells AI what's on the page. A 2024 data.world study found that GPT-4's performance improves from 16% to 54% when content relies on structured data, and Microsoft's Fabrice Canel publicly confirmed in March 2025 that schema markup helps Microsoft's LLMs understand content for Bing Copilot. The schema must match the visible content; AI systems are smart enough to catch mismatches and penalise them. Done correctly, though, it substantially lifts both extraction accuracy and citation rates.
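The match requirement can be illustrated with a rough consistency check: parse the page's JSON-LD and confirm each marked-up answer also appears in the visible text. This is a simplified sketch of the idea, not a production audit; `schema_matches_page` and the sample page are our own illustration:

```python
import json
import re

def schema_matches_page(html):
    """Rough check that every FAQ answer in the page's JSON-LD block
    also appears in the visible text, i.e. the schema reflects the page."""
    m = re.search(
        r'<script type="application/ld\+json">(.*?)</script>', html, re.S
    )
    if not m:
        return False
    data = json.loads(m.group(1))
    # Strip the script itself, then all tags, to approximate visible text.
    visible = re.sub(r"<script.*?</script>", " ", html, flags=re.S)
    visible = re.sub(r"<[^>]+>", " ", visible)
    return all(
        item["acceptedAnswer"]["text"] in visible
        for item in data.get("mainEntity", [])
    )

page = """
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "FAQPage",
 "mainEntity": [{"@type": "Question", "name": "What is a DIFC will?",
   "acceptedAnswer": {"@type": "Answer",
     "text": "A will registered with the DIFC Courts for non-Muslims."}}]}
</script>
<h2>What is a DIFC will?</h2>
<p>A will registered with the DIFC Courts for non-Muslims.</p>
"""

ok = schema_matches_page(page)  # True: the markup and the page agree
```

If the paragraph said something different from the schema, the check would fail, which is exactly the mismatch AI systems penalise.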

Speed and freshness

AI systems favour current content. A site that can ship an update in an hour will outcompete one that takes three weeks, because the freshness signal is part of what earns a citation. Your CMS either enables that rhythm or it doesn't.

Why headless is the architectural answer

A headless CMS separates the content, stored as structured data in a dedicated system, from the presentation layer, the actual website users see. The two connect via APIs. That sounds like a technical distinction, but the business implications are significant.

With a headless approach, your content becomes:

  • Modular and reusable. A single answer to "What is a DIFC will?" lives in one place and can flow to your website, your app, your chatbot, and a syndicated partner site. AI systems increasingly pull snippets from multiple surfaces, and a consistent answer everywhere builds confidence in your authority.

  • Schema-ready by design. Modern headless platforms let you attach schema at the component level. An FAQ block is marked up automatically. A glossary term exists once and carries its semantic labels everywhere it appears.

  • Fast to update. Marketing teams can publish without developer dependencies. When the AI landscape shifts, and it does, you can respond in hours, not quarters.

  • Separated from presentation. A redesign doesn't mean a content migration. A new channel doesn't mean rewriting everything. Your content outlives the template it happens to sit in today.
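The pattern behind these bullets can be sketched in miniature: one structured entry, stored once, rendered independently to every surface that needs it. `FaqEntry` and its renderers are a hypothetical illustration, not any particular headless platform's API:

```python
import json
from dataclasses import dataclass

@dataclass
class FaqEntry:
    """A single structured content entity, separated from presentation."""
    question: str
    answer: str

    def to_html(self) -> str:
        # Website presentation layer
        return f"<h2>{self.question}</h2>\n<p>{self.answer}</p>"

    def to_jsonld(self) -> str:
        # Machine-readable layer for AI crawlers
        return json.dumps({
            "@context": "https://schema.org",
            "@type": "Question",
            "name": self.question,
            "acceptedAnswer": {"@type": "Answer", "text": self.answer},
        })

    def to_plaintext(self) -> str:
        # Chatbot or partner-feed layer
        return f"Q: {self.question}\nA: {self.answer}"

entry = FaqEntry(
    "What is a DIFC will?",
    "A will registered with the DIFC Courts covering Dubai assets.",
)
```

A redesign swaps out `to_html` without touching the content; a new channel adds a renderer without a migration. That is the separation the bullets describe.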

Traditional CMS platforms make it difficult to deliver the easily ingested, up-to-date, and consistent content that generative engines and their LLMs prefer, while headless platforms make it straightforward to serve current, consistent content, with rich context, in a format LLMs favour. That's not a vendor marketing line; it's the architectural reality that savvy B2B teams have already figured out.

This is why, at General Dataworks, we built our own site on Contentful, and why we now recommend a headless-first CMS as the default foundation for any brand serious about AI visibility. The specific platform choice matters less than the architectural pattern. Contentful, Sanity, Storyblok, Strapi, Prismic, and Hygraph are all credible options depending on team needs and budget.

So what next for our law firm?

The recommendation we've made is straightforward, if not trivial to execute. We're advising the client to migrate off their ERP's website builder and onto a headless CMS, with the ERP retained for its intended purpose of running internal operations.

The migration will happen in three phases. First, a content audit and remodelling exercise to convert existing pages into structured, schema-ready content types, think service entities, FAQ entries, legal guide articles, and team member profiles as discrete, reusable objects rather than free-text pages. Second, a front-end rebuild on a modern framework, with JSON-LD schema injected at build time across Organization, LegalService, FAQPage, Article, and Person markup. Third, an ongoing optimisation cadence driven by our AI Brand Visibility Audit tool, measuring share of voice in AI responses, tracking which prompts cite the firm, and continuously refining content based on what the AI actually picks up.

We expect the first measurable shifts in AI visibility within 60 to 90 days of the new infrastructure going live. That's roughly the window we've seen in comparable GCC projects where the CMS change removed the bottleneck.

The honest caveat: this isn't a small investment. A CMS migration for a professional services firm typically runs into several months and a meaningful budget. But the alternative, staying on the current platform and watching competitors accumulate AI citation share, is more expensive over any serious time horizon.

What this could mean for your brand

If you're a CMO or marketing director reading this, the question worth asking isn't, "Do we need to do SEO for AI?" The question is, "Is our current infrastructure capable of supporting AI visibility work at all?"

A few honest checks you can run:

  • Open ChatGPT, Perplexity, and Google AI Overviews. Ask three conversational questions a prospect would ask about your category. Note whether your brand appears, and in what light.

  • Ask your web team how long it would take to add a custom FAQPage schema to a key service page. If the answer involves a ticket, a sprint, and a developer queue, your infrastructure is part of the problem.

  • Ask when a marketing leader last shipped a meaningful content update without IT involvement. If the answer is "rarely" or "never", you've confirmed it.

  • Pull your site into a tool like Google's Rich Results Test. See what schema is actually present. In most audits we run on traditional platforms, the answer is "not much, and what's there is broken".

None of these tests require a data team. All of them tell you whether your CMS is an asset or a liability.

Marketing infrastructure used to be a back-office concern. Designers chose templates. IT chose platforms. Marketing worked within whatever was handed to them. That model worked when search engines were relatively forgiving and content just needed to be present to rank.

AI search has changed the terms. Infrastructure is now a front-line strategic decision. Brands that treat CMS choice as a marketing investment, not a technology overhead, are compounding visibility advantages that will take competitors years to close. Brands that don't are quietly falling out of the AI recommendations that increasingly shape buyer decisions.

The law firm we audited is making the switch because their leadership understood the cost of not doing so. Not every organisation will. That's fine. The ones that move first in their category will own the AI citation share that, a year from now, will look like an unfair advantage.

It won't be. It'll just be the result of having built on the right foundation while everyone else was still arguing about keyword density.

Ready to find out where you stand?

Our AI Brand Visibility Audit shows you exactly how AI tools see your brand today, which competitors are capturing the citations you should be, and which infrastructure limitations are standing in your way. It's designed for marketing and C-level leaders who want a clear, unvarnished read on their AI readiness, not a sales pitch dressed up as a report.

Book a 30-minute discovery call with our team. If your infrastructure is the problem, we'll say so. If it isn't, we'll show you what to fix first.

The best time to have been visible in AI was two years ago. The second best time is now.