
Agentic Search Optimization: How to Get Your Brand Discovered by AIs

SEO optimized your website for Google's crawlers. Agentic Search Optimization optimizes it for the AI agents that browse, evaluate, and transact on behalf of users. Google-Agent launched. WebMCP shipped in Chrome. The transition is happening. Here is the practitioner's guide from someone who built the infrastructure on both sides.

March 29, 2026

For two decades, SEO meant one thing: structure your website so Google's crawler can parse it, rank it, and show it in a list of ten blue links. You wrote meta descriptions for a robot that matched keywords. You earned backlinks so PageRank would favor you. You submitted sitemaps so Googlebot knew where to look.

That robot is being replaced. Not by a better crawler - by an agent.

Google launched Google-Agent in March 2026 - an AI that browses the web, evaluates options, and completes tasks on behalf of users. WebMCP shipped in Chrome 145, giving AI agents native access to browser capabilities. OpenAI, Anthropic, and Perplexity each have agents that visit websites, extract structured data, and make purchasing decisions without a human ever seeing a search results page.

The user who used to type “best Italian restaurant near me” into Google now tells their AI: “Find me an Italian restaurant near the hotel tonight, check reviews, and make a reservation for two at 7pm.” The AI handles the search, the evaluation, the comparison, and the transaction. Your website is no longer competing for a click. It is competing for an agent's trust.

I have spent the last three months building this from both sides - implementing AI discoverability across six sites, including devonbleibtrey.com, Fetch.ai, Flockx, ASI:One, and Fetch Business, while also building the AI agent infrastructure at Fetch.ai that consumes this data. This post is what I learned.

SEO Was About Crawlers. ASO Is About Agents.

The difference between SEO and Agentic Search Optimization is not incremental. It is architectural. Google's crawler indexed content and returned links. An AI agent reads content, reasons about it, and takes action. The crawler needed keywords. The agent needs understanding.

Traditional SEO

  • Optimizes for keyword matching
  • Competes for position in a ranked list
  • Success = click-through to your site
  • Human evaluates the results
  • Transaction happens after the click

Agentic Search Optimization

  • Optimizes for semantic understanding
  • Competes for agent trust and selection
  • Success = agent chooses your service
  • AI evaluates, compares, and decides
  • Transaction can happen without a human visit

When a user's personal AI is shopping for a service, it does not see your beautiful landing page. It reads your structured data, checks your reputation signals, evaluates your capabilities against the user's requirements, and either selects you or moves on. The entire funnel - awareness, consideration, decision - happens inside the agent's reasoning, not inside a browser tab.

  • 24% - more accurate AI brand descriptions with llms.txt (llmstxt.org early adopter data)
  • 30-40% - higher AI citation probability with Schema.org Article markup (search engine structured data studies)
  • 840+ - sites already publishing llms.txt for AI discoverability (llmstxt.studio directory, March 2026)

What AI Agents Look For (That Crawlers Never Cared About)

Google's crawler cared about HTML structure, meta tags, backlinks, and page speed. AI agents care about those too, but they also need something crawlers never required: a way to understand what you do, what you offer, and whether they can trust you - in machine-readable form.

Identity Declaration

Who are you? What do you do? What don't you do? AI agents need a concise, machine-readable identity document - not a marketing page full of superlatives, but a factual summary an agent can reason about.

Capability Descriptions

What can you do for the user? What are your skills, services, or products? Agents compare capabilities across providers. If yours are not explicitly stated in structured form, the agent cannot evaluate you.

Trust Signals

Verified identity, structured reviews, transparent pricing, consistent information across data sources. Agents do not respond to social proof the way humans do - they verify claims against structured data.

Interaction Protocols

Can the agent interact with you programmatically? Can it request information, check availability, or initiate a transaction? Agents prefer providers they can communicate with directly over those that require human mediation.

The Four Layers of Agentic Search Optimization

After implementing AI discoverability across six sites, I found that the work falls into four layers. Each builds on the previous one. Skip a layer and the ones above it become less effective.

Layer 1 - Structured Identity: Schema.org JSON-LD

Schema.org is the foundation. AI models - ChatGPT, Claude, Perplexity, Google AI Overviews - all consume Schema.org structured data to understand entities on the web. Without it, an agent has to infer who you are from unstructured HTML. With it, the agent knows your name, your role, your organization, your products, and your expertise.

For every site in our portfolio, we added root-level JSON-LD:

  • Person or Organization schema - who operates this site, their roles, social profiles, expertise areas
  • WebSite schema - site name, URL, description
  • Article schema on every blog post - headline, author, datePublished, keywords
  • SoftwareApplication schema on product pages - for ASI:One, Flockx, and Fetch Business
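For concreteness, here is a minimal sketch of that root-level block. The property names are standard Schema.org vocabulary, but the specific values below (job title, descriptions, topic list) are illustrative placeholders, not the exact markup we shipped:

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Person",
      "name": "Devon Bleibtrey",
      "url": "https://devonbleibtrey.com",
      "jobTitle": "Engineering Leader",
      "knowsAbout": ["AI agents", "agentic search optimization"]
    },
    {
      "@type": "WebSite",
      "name": "devonbleibtrey.com",
      "url": "https://devonbleibtrey.com",
      "description": "Writing on AI-native engineering and agentic search"
    }
  ]
}
```

Embed it once in the root layout inside a `<script type="application/ld+json">` tag and every page inherits it.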

The impact was measurable. AI models that previously described our products with generic language started using the exact terminology from our Schema.org data.

Layer 2 - AI-Readable Summary: llms.txt

The llms.txt specification is to AI models what robots.txt is to crawlers - a dedicated file that tells AI systems what your site is about, organized for machine consumption. While robots.txt says “what you can crawl,” llms.txt says “what you should know about me.”

Every site in our portfolio now publishes /llms.txt with a curated summary and /llms-full.txt with extended context. The structure follows a consistent pattern:

  • Identity and mission
  • Products and capabilities
  • Key content organized by theme
  • What we do and what we do not do
  • Link to the full-context version
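Condensed to a sketch, the top of such a file follows the spec's markdown shape - an H1 identity line, a blockquote summary, and H2 sections of annotated links. The one-line descriptions here are illustrative, not our production copy:

```markdown
# devonbleibtrey.com

> Personal site of Devon Bleibtrey: writing on AI-native engineering
> teams, agent architecture, and agentic search optimization.

## Products
- [ASI:One](https://asi1.ai): personal AI assistant
- [Fetch Business](https://business.fetch.ai): verified AI agents for SMBs

## Optional
- [Full context](https://devonbleibtrey.com/llms-full.txt)
```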

We also added <link rel="alternate" type="text/plain" href="/llms.txt"> in the HTML head so AI crawlers can discover it without guessing the path.

Layer 3 - Agent Discovery: A2A Agent Cards

Google's Agent-to-Agent (A2A) protocol defines a standard way for AI agents to discover each other. The Agent Card - published at /.well-known/agent.json - declares what an agent can do, what inputs it accepts, and how to interact with it.

This is forward-looking infrastructure. Today, most agent discovery happens through curated registries like Agentverse. But as the A2A protocol matures, any AI agent will be able to discover your capabilities by fetching your Agent Card - the same way browsers discover favicons and security policies from well-known paths.

For each site, we defined skills that match the site's purpose. devonbleibtrey.com declares five skills (AI-native teams, engineering culture, agent architecture, leadership, products). ASI:One declares four (personal assistant, social coordination, agent collaboration, knowledge management).
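A stripped-down Agent Card for a site like this might look as follows, showing one of the five skills as an illustration. The field names follow the A2A Agent Card schema as I understand it; treat the exact shape as an assumption and check the current spec before publishing:

```json
{
  "name": "devonbleibtrey.com",
  "description": "Writing and advisory on AI-native engineering teams and agent architecture",
  "url": "https://devonbleibtrey.com",
  "version": "1.0.0",
  "capabilities": { "streaming": false },
  "defaultInputModes": ["text/plain"],
  "defaultOutputModes": ["text/plain"],
  "skills": [
    {
      "id": "agent-architecture",
      "name": "Agent architecture",
      "description": "Guidance on designing agent-ready infrastructure",
      "tags": ["ai-agents", "architecture"]
    }
  ]
}
```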

Layer 4 - Agent Interaction: MCP and Verified Registries

The first three layers make you discoverable. This layer makes you interactable. The Model Context Protocol (MCP) gives AI agents a standard way to connect to your services and tools. WebMCP in Chrome 145 means agents running in the browser can access MCP servers natively.

Verified agent registries like Agentverse go further. They provide identity verification, reputation scoring, and discovery infrastructure for AI agents. When a personal AI needs to find a restaurant, a contractor, or a travel agent, it queries the registry for verified agents that match the criteria. If your business has a verified agent in the registry, you are in the consideration set. If not, you are invisible.

This is where Fetch Business fits - it gives any small or medium business a verified AI agent in one click, registered on Agentverse and discoverable by personal AIs like ASI:One. The business does not need to understand the protocols; it gets an agent that represents it in the network.

The Compound Effect

Each layer amplifies the others. Schema.org tells agents who you are. llms.txt tells them what you know. Agent Cards tell them what you can do. MCP and verified registries let them interact with you. An agent evaluating your site with all four layers can make a confident recommendation. With only one or two, it hedges or moves on.

What We Implemented Across Six Sites

Between March and April 2026, we rolled out the ASO checklist across every site in the Fetch.ai portfolio. The pattern was repeatable, but the content was specific to each site's audience.

Site                 Schema.org               llms.txt   Agent Card   Sitemap
devonbleibtrey.com   Person + 27 Articles     yes        yes          yes
fetch.ai             Organization + WebSite   yes        yes          -
flockx.io            Organization + WebSite   yes        yes          yes
asi1.ai              Org + SoftwareApp        yes        yes          yes
business.fetch.ai    Org + Service + FAQ      yes        yes          yes
travelwithmi.com     Organization + WebSite   yes        yes          -

The implementation was fast once we had the pattern. The first site (devonbleibtrey.com) took a full day. The last three sites took less than two hours each because we had reusable templates for every layer.

Start With Schema.org

If you do nothing else, add JSON-LD to your root layout. Person or Organization schema with name, description, URL, and social profiles gives AI models enough to cite you accurately. This takes 15 minutes and has the highest immediate impact.

The Practitioner's Checklist

Here is the exact checklist we used. It is ordered by impact - start at the top and work down. Each step is independent, so you can stop at any layer and still have value.

1. Add Schema.org JSON-LD to your root layout

Person or Organization schema with name, description, URL, social profiles. WebSite schema with site metadata. Takes 15 minutes. Maximum impact.

2. Add Article JSON-LD to every blog post

Headline, author (Person with URL and jobTitle), datePublished, keywords. Build a shared utility function so adding it to new posts takes 30 seconds.
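A shared utility along these lines keeps per-post effort near zero. This is a sketch: the helper name, the `ArticleMeta` type, and the sample values are my own; only the Schema.org property names are standard:

```typescript
// Illustrative shared helper for per-post Article JSON-LD.
type ArticleMeta = {
  headline: string;
  authorName: string;
  authorUrl: string;
  jobTitle: string;
  datePublished: string; // ISO 8601 date
  keywords: string[];
};

function articleJsonLd(meta: ArticleMeta): string {
  return JSON.stringify({
    "@context": "https://schema.org",
    "@type": "Article",
    headline: meta.headline,
    author: {
      "@type": "Person",
      name: meta.authorName,
      url: meta.authorUrl,
      jobTitle: meta.jobTitle,
    },
    datePublished: meta.datePublished,
    // Schema.org accepts keywords as a comma-separated string.
    keywords: meta.keywords.join(", "),
  });
}

// Example call with placeholder metadata; embed the returned string
// in a <script type="application/ld+json"> tag on the post page.
const ld = articleJsonLd({
  headline: "Agentic Search Optimization",
  authorName: "Devon Bleibtrey",
  authorUrl: "https://devonbleibtrey.com",
  jobTitle: "Engineering Leader",
  datePublished: "2026-03-29",
  keywords: ["ASO", "llms.txt", "Schema.org"],
});
```

With a helper like this, each new post supplies one metadata object and the markup stays consistent across the site.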

3. Publish /llms.txt and /llms-full.txt

Follow the llms.txt specification. Include identity, capabilities, key content, and what you don't do. Add a <link rel="alternate"> in the HTML head.

4. Publish /.well-known/agent.json

A2A Agent Card declaring your skills, input/output modes, and interaction protocols. Forward-looking but low effort to add now.

5. Register on verified agent registries

Agentverse, MCP server directories, Context7 (for developer-facing content). This is how AI agents find you when users ask for services.

6. Create an AI agent for your business

Fetch Business creates a verified agent for small and medium businesses. For developers, Agentverse provides the build-and-deploy platform. The agent becomes your representative in agent networks.

Why This Matters Now (Not in Two Years)

The SEO playbook took a decade to mature. Companies that adopted it early owned their search verticals. Companies that waited spent years and budgets trying to catch up against entrenched competitors.

Agentic search is in its SEO-circa-2005 moment. The standards are forming. The early adopters are staking territory. The agents are starting to choose preferred providers based on structured data quality and interaction capabilities. Within two years, the businesses with verified agents, rich structured data, and machine-readable identity will be the default recommendations. The ones without will be invisible - not because they are bad, but because the agent never found them.

  • 2005 - SEO early adopters dominated search for a decade
  • 2026 - ASO early adopters are staking territory now
  • 2028 - Default agent recommendations will be locked in

The Invisible Business Problem

When an AI agent cannot find structured data about your business, it does not tell the user “I could not find information.” It recommends a competitor that does have structured data. You do not lose a ranking position. You lose the entire consideration set.

The Ecosystem View: Both Sides of the Equation

The reason I can write this with confidence is that Fetch.ai operates on both sides. We build the personal AI (ASI:One) that discovers and evaluates businesses on behalf of users. We build the business tools (Fetch Business) that help businesses get discovered by those personal AIs. And we build the agent registry (Agentverse) that connects them.

This gives me a view that most ASO/AAIO commentators lack: I build the agents that consume this data. The four layers are not theoretical. They reflect the same signals that well-designed AI agents prioritize when evaluating services.

How the Pieces Connect

1. User: “Find me a travel agent who specializes in European river cruises.”

2. ASI:One queries Agentverse for verified travel agents with European cruise expertise.

3. Agentverse returns agents whose capabilities match. These are Fetch Business agents with verified identity and declared skills.

4. ASI:One evaluates each agent's structured data - Schema.org, llms.txt, reviews - and presents the top options to the user with reasoning.

5. The user picks one. ASI:One coordinates directly with the business agent to start the engagement.

The business that had all four ASO layers was discoverable, evaluable, and interactable. The one with only a website and no structured data was not in the consideration set at all.

Common Mistakes (From Implementing This Six Times)

Writing llms.txt like marketing copy

AI agents parse for facts, not persuasion. “The world's most innovative platform” tells an agent nothing. “A personal AI assistant that coordinates across calendar, messaging, and social networks” tells it everything. Be factual. Be specific. Be boring.

Putting Schema.org in client components

JSON-LD should be rendered on the server, not shipped in the client JavaScript bundle. AI crawlers may not execute JavaScript. Use server components with native <script> tags, not client-side Script components.
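If your framework renders HTML on the server, the fix can be as simple as emitting the JSON-LD as a raw string at render time. A minimal sketch, with a hypothetical helper; the `</` escape keeps string values inside the JSON from terminating the script element early:

```typescript
// Hypothetical helper: serialize structured data into a <script> tag string
// that a server-side template can embed directly in the generated HTML, so
// crawlers that never execute JavaScript still receive the JSON-LD.
function jsonLdScriptTag(data: object): string {
  // Escape "</" so nothing inside the JSON can close the script tag.
  const json = JSON.stringify(data).replace(/<\//g, "<\\/");
  return `<script type="application/ld+json">${json}</script>`;
}

// In Next.js, the equivalent is a server component that renders a native
// <script type="application/ld+json"> element with this string as its body.
const tag = jsonLdScriptTag({
  "@context": "https://schema.org",
  "@type": "WebSite",
  name: "devonbleibtrey.com",
  url: "https://devonbleibtrey.com",
});
```

The point is where the serialization happens: on the server, in the initial HTML response, not in a client bundle that may never run.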

Making llms-full.txt too long

The full-context file should be comprehensive but organized. If it exceeds 10,000 tokens, AI models may truncate it. Structure it with clear headers so models can scan for the sections they need.

Declaring capabilities you cannot deliver

An A2A Agent Card that declares skills the agent cannot perform is worse than not having one. Agents will test capabilities. If the declared skill fails, trust drops to zero. Only declare what works.

The Website Is No Longer the Destination

For 25 years, the website was the end of the funnel. Every marketing dollar, every SEO effort, every ad campaign aimed to get a human to your website. The website was where the conversion happened.

In the agentic economy, the website becomes the data source, not the destination. The conversion happens inside the agent's reasoning. The website's job changes from “persuade the human visitor” to “inform the agent accurately.” Your structured data, your llms.txt, your Agent Card, your verified registry presence - these are what the agent reads. Your beautiful landing page with the hero image and the call-to-action button is for the humans who still browse directly. And there will be fewer of them every year.

The transition from SEO to ASO is the same transition we saw from print advertising to search advertising. It did not happen overnight. It did not kill print entirely. But the businesses that recognized the shift early and built for search won the next two decades. The same will be true for agentic search. Build for it now.

Building for the Agentic Economy?

I help teams implement AI discoverability, build agent-ready infrastructure, and position their products for the transition from search to agentic commerce. If your business needs to be discoverable by AI agents, let's talk.

© 2026 Devon Bleibtrey. All rights reserved.