Manifesto
The entire internet was designed for humans.
The structural problem
It’s not that websites are bad at giving AI information. It’s that they were never designed to.
Think about how a typical retail website works. There’s a hero banner with a sale — the promo code is embedded inside a graphic, invisible to anything that isn’t a human eye. Products have variant selectors: you click the colour, the size, the storage tier, and the price updates dynamically via JavaScript. Delivery options render based on your detected location. Stock availability changes in real time. Membership discounts apply automatically once you’re logged in.
Every one of these things is intuitive to a person. To an AI agent trying to answer “what’s the cheapest 256 GB option in blue, and is it available for delivery to Sembawang” — it’s a wall. The promo code lives inside a JPEG. The price doesn’t exist in the HTML until you’ve made a selection. The variant logic isn’t exposed anywhere readable. The membership discount assumes an account the agent doesn’t have.
So the agent does what it can: loads the page, burns tokens trying to interpret the layout, makes its best guess at the price, and moves on. Now ask it to do that across five different stores to compare. Each one is a different structure, different design system, different set of dynamic elements to decode. It’s not that the agent isn’t capable — it’s that it’s doing expensive, unreliable work every single time for something that should have a direct, structured answer.
There is currently no place on the internet purpose-built to give AI agents clean, structured, ready-to-use business and product data.
Everything is built for human consumption. AI is brute-forcing its way around that, query after query, at a real cost in tokens, time, and accuracy.
The deeper problem
Commerce wasn’t just designed for human eyes. It was designed for human behaviour.
The visual layer is just the surface. The way commerce itself is structured assumes a human on the other end — someone who feels urgency from a countdown timer, types a promo code into a box, earns loyalty points across visits, responds to a limited-time flash sale pushed via notification.
None of these signals mean anything to an AI agent. A countdown timer is just a number. A promo code banner is an image. Loyalty points assume an account with accumulated history. Flash sales require the agent to be watching at exactly the right moment. The entire commercial negotiation layer — the discounts, the incentives, the offers — is communicated in a language built for people, through channels built for people.
This is the part of agentic commerce that nobody has solved yet, and most people aren’t even talking about. For an AI agent to be a genuine commercial actor — to find the best price, apply the right discount, and complete a transaction on your behalf — it needs commerce exposed in a fundamentally different way. Not a promo code graphic. A structured discount rule. Not a loyalty tier you log into. A machine-readable agreement about what this customer is entitled to. Not a flash sale push notification. A fixed, queryable pricing contract.
Humans negotiate through UX. AI agents need structure, contracts, and logic they can actually read.
That gap doesn’t get fixed by making websites more visually appealing. It gets fixed by building a layer underneath them — one that speaks in the language agents actually understand.
The insight
I used to do this research manually. It took hours. Your AI agent is doing the same thing, on every query.
I had a job once that involved competitive product research. Open ten browser tabs. Go through each product page. Manually copy specs, prices, variants into a spreadsheet. By the time I was done, some of the numbers had already changed. It was tedious, error-prone, and the moment anything updated, it was stale.
That is exactly what your AI agent is doing every time you ask it to help you find or compare something real. Crawling pages. Interpreting layouts it wasn’t designed to read. Guessing at dynamically loaded prices. Doing manually, in real-time, what should have a direct and reliable answer.
LobsterSearch prepares that spreadsheet. Already structured, already accurate, already ready for your agent to work from — without it having to touch a single webpage.
What we built
A librarian for AI agents. Not another website to crawl.
LobsterSearch is a structured data layer built not for humans to browse, but for AI agents to query. Think of it as a librarian: when your agent needs information about a business, a product, a price, or availability — it doesn’t crawl a website. It asks LobsterSearch, and gets back a clean JSON-LD response it can reason over immediately.
JSON-LD is typed, structured data. Not a layout to decode, not a paragraph to interpret. Every field is labelled, every value means exactly what it says — prices, variants, availability, descriptions, discount rules — all served in a format the agent can act on directly, in a fraction of the tokens it would otherwise burn processing the equivalent webpage. The difference between handing someone a well-formatted document and asking them to transcribe a billboard.
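To make that concrete, here is a minimal sketch of the kind of payload an agent might receive. The field names follow the public schema.org Product and Offer vocabulary; the specific product and its values are invented for illustration, not taken from LobsterSearch.

```python
import json

# Illustrative JSON-LD payload using schema.org Product/Offer vocabulary.
# The product name, price, and availability are invented for this sketch.
listing = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Phone, 256 GB, Blue",
    "offers": {
        "@type": "Offer",
        "price": "899.00",
        "priceCurrency": "SGD",
        "availability": "https://schema.org/InStock",
    },
}

# Every field is labelled: the agent reads the price directly,
# with no layout to parse and no JavaScript to execute.
offer = listing["offers"]
print(f"{offer['price']} {offer['priceCurrency']}")  # 899.00 SGD
print(json.dumps(listing, indent=2))
```

The point is the shape, not the values: a few labelled fields replace an entire rendered page the agent would otherwise have to interpret token by token.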
Sitting between every query and our database is an LLM that handles intent detection, relevance scoring, and ranking — working out what the user actually wants, not just matching keywords. This is the AI receptionist. It’s why a query doesn’t return a generic list. It returns the most relevant result for that specific intent, in a format the agent can use immediately.
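As a toy picture of that receptionist step, here is a sketch of ranking listings against an intent. The listings and the keyword-overlap scoring are invented for illustration only; the actual system uses an LLM for intent detection and relevance, not keyword matching.

```python
# Toy illustration of intent-aware ranking. Data and scoring are
# invented: the real receptionist is an LLM, not keyword overlap.
def rank_listings(intent_terms, listings):
    """Order listings by how many intent terms their fields satisfy."""
    def score(listing):
        text = " ".join(str(v).lower() for v in listing.values())
        return sum(term in text for term in intent_terms)
    return sorted(listings, key=score, reverse=True)

listings = [
    {"name": "Phone A", "storage": "128 GB", "colour": "blue"},
    {"name": "Phone B", "storage": "256 GB", "colour": "blue"},
    {"name": "Phone C", "storage": "256 GB", "colour": "red"},
]

best = rank_listings(["256 gb", "blue"], listings)[0]
print(best["name"])  # Phone B
```

Even in this toy form, the output is one ranked answer for one specific intent, not a generic list the agent must re-filter.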
One thing LobsterSearch deliberately doesn’t do: hold anything about you. No user profiles, no conversation history, no preferences. That’s your AI agent’s job — it already knows your context. LobsterSearch hands it accurate, structured data to apply that context to. The agent personalises; we provide the raw material. Clean separation — and better for your privacy because of it.
The vision for users
Ask once. Get everything you need to decide.
Where this is heading: your AI agent, with LobsterSearch behind it, surfaces businesses and products across the platform, compares prices and specifications side by side, and returns the option that fits what you need — without loading a single webpage or burning tokens reverse-engineering a variant selector. Your agent already knows your preferences from your conversations with it. LobsterSearch gives it accurate, current data to apply those preferences to.
No tab switching. No copy-pasting. No wondering whether the price you saw is still valid. You ask. It works. That’s what AI-assisted discovery should feel like. We’re building toward it — one business, one accurate listing at a time.
For businesses
If AI can’t read your data cleanly, it won’t recommend you confidently.
AI tools are increasingly where people start when they’re looking for something. The question is no longer just whether you rank on Google. It’s whether an AI agent can surface you accurately when someone asks. These are different problems with different solutions — and most businesses are not set up for the second one yet.
LobsterSearch indexes business listings and serves them to AI queries in a format those tools can actually use. Each listing carries a visibility score based on how complete and current the data is. More complete listings surface more reliably. We’re honest about one limitation, though: completeness is not the same as accuracy. A listing full of outdated information can still score well, which is exactly why claimed listings are the priority.
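To picture why completeness alone isn’t enough, here is an illustrative completeness score: the fraction of expected fields a listing fills in. The field list and formula are invented for this sketch and are not the platform’s actual scoring.

```python
# Illustrative completeness score -- a sketch, not the real formula.
EXPECTED_FIELDS = ["name", "price", "hours", "address", "description"]

def completeness(listing):
    """Fraction of expected fields that carry a non-empty value."""
    filled = sum(1 for field in EXPECTED_FIELDS if listing.get(field))
    return filled / len(EXPECTED_FIELDS)

sparse = {"name": "Cafe Example", "address": "1 Example Rd"}
full_but_stale = {field: "..." for field in EXPECTED_FIELDS}

print(completeness(sparse))          # 0.4
print(completeness(full_but_stale))  # 1.0 -- complete, yet possibly outdated
```

Note that the fully filled listing scores 1.0 even if every value is six months old, which is the limitation described above and the reason claimed, owner-maintained listings matter.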
When you claim your listing on LobsterSearch, you own the data. Your actual prices, your real menu or catalogue, your current hours — from you, not assembled from a website that hasn’t been updated in six months. Claimed listings are marked as authoritative and rank accordingly. You also get access to platform analytics: how often your listing is surfaced, how you compare to similar businesses nearby, and what queries are driving that traffic. It’s early and still being built out — but it’s live, it’s real, and it’s improving fast.
I’ve been meeting businesses across different sectors, understanding their actual concerns and requirements before committing to a specific vertical. The pattern is consistent: they know AI is changing how people find things, they want to be visible in that world, but when they look at what’s involved — agencies, developers, implementing structured data on their own website — the project stalls. LobsterSearch removes all of that friction. No website changes. No technical setup. No agency. Your listing exists. You claim it, keep it accurate, and it works.
Commerce
Agentic commerce needs a new commercial language. We’re starting to build it.
The commerce layer turns each claimed listing into a transactional surface — a storefront discoverable through an AI query, transactable without leaving the conversation. The end-to-end flow works. We have a demo running. What we’re looking for now are early adopter businesses in Singapore willing to be among the first to be both findable and buyable through AI.
But commerce on this platform isn’t just about completing a transaction. It’s about rethinking how commercial signals work when the buyer is an agent. Businesses currently communicate offers through promo codes, countdown banners, loyalty schemes, and push notifications — all designed to trigger a human response. An AI agent sees none of that. What it can work with are structured pricing rules, fixed discount logic, availability contracts, and machine-readable terms. LobsterSearch is building toward this: a way for businesses to expose their commercial intent in a format an agent can actually negotiate with — not a banner a person reads, but a rule a system can apply.
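As an illustration of the difference, here is what a structured discount rule might look like next to the logic an agent could apply. The schema, field names, and values are invented for this sketch; LobsterSearch’s actual format may differ.

```python
from datetime import date

# Invented schema for a machine-readable discount rule: the kind of
# structured commercial signal an agent can apply, unlike a banner image.
rule = {
    "type": "percentage_discount",
    "value": 10,                       # 10% off
    "min_spend": 50.00,                # minimum basket, in SGD
    "valid_until": date(2026, 1, 31),  # fixed, queryable expiry
}

def apply_rule(price, rule, today):
    """Return the discounted price if the rule's conditions hold."""
    if today <= rule["valid_until"] and price >= rule["min_spend"]:
        return round(price * (1 - rule["value"] / 100), 2)
    return price

print(apply_rule(80.00, rule, date(2026, 1, 10)))  # 72.0
print(apply_rule(40.00, rule, date(2026, 1, 10)))  # 40.0 (below min spend)
```

No countdown graphic, no code to type into a box: the eligibility conditions and the discount itself are plain data, so any agent can check them and apply them deterministically.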
Running across the platform is an Orchestrator — an agent continuously reading queries, transactions, listing quality, and search patterns, and either acting on what it finds or surfacing recommendations to business owners. Which products are queried but not converting. Which listings are found but losing out. The platform is designed to improve itself rather than require constant human intervention to stay useful.
On payments: we send a Stripe payment link. The user clicks, reviews, confirms. Human-in-the-loop by design, not by limitation. Consumer trust in AI handling money autonomously isn’t where it needs to be yet — and designing around that honestly is the right call. Familiar checkout, trusted provider, human confirmation. As trust builds, the flow can evolve. The infrastructure to move faster is being laid right now by payment networks, banks, and platforms — we’re watching it closely and will move when it’s genuinely ready, not before.
Right now
Singapore first. Conversations before commitments.
Singapore made sense as the starting point. Small enough to build real coverage quickly, dense enough to have genuine variety, and fast-moving enough that the gap between what’s actually available and what AI tools know about it is impossible to ignore. We’re talking to businesses, learning what they actually need, and building accordingly — not committing to a vertical before we understand it.
The current version runs as an MCP server. Connect it once on a paid Claude or ChatGPT plan and it works. Limited audience for now — I won’t dress that up. The roadmap includes publishing on the official Claude and ChatGPT connector libraries to remove the setup step entirely. MCP, or whatever it evolves into, will be native to these platforms. The direction is clear. The goal is to be ready — and proven — when it is.
The web was built for human eyes and human behaviour. AI agents are a fundamentally different kind of actor, and right now they have nowhere designed for them.
LobsterSearch is building that place. Not another website. A layer underneath — structured, readable, and built for agents from the ground up.