Building in 4D
In June of 2025, I published The Internet of Intelligent Things. It was a piece about the transformation happening due to the rise of AI agents, and what it might mean for the way businesses and websites operate.
Two core concepts came out of that article:
The AI-Accessible website, which says: Here’s my raw data; use it and display it as necessary.
The AI-Enabled website, which says: What is it you need? Let me help you make that happen.
It also covered the winners and losers in this new paradigm, but I won’t revisit that here. You can read the original article if you’re curious.
What I Didn’t Get to Explore
There were implications in that article that I alluded to but didn’t expand on. The first, which I’ll address in this post, is that there’s real work to do to actually make the web accessible for our non-human counterparts, and good reason to do so. The second is understanding how the spirit of the brand survives on a Neoweb mostly populated with bots.
I think that a fundamental shift in how we build on the web will help with both of these challenges. The shift is that we build in such a way that the consumer gets data in whatever form makes sense.
This is a principle that has been ricocheting around my mind for quite a few years, long before I ever knew what a GPT was. I refer to it as the 4D Principle:
Device. Decide. Display. Data.
Let the device decide how to display the data.
This goes beyond having an API, but it does relate to a methodology that has existed in APIs for decades: content negotiation (classically done via the Accept header, or more visibly via the URL itself). An example is getting XML from https://example.com/products.xml, and being able to get JSON instead simply by changing the .xml to .json. This is well-trodden ground.
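As a minimal sketch (my own illustration, not any particular framework's mechanism), extension-based selection is little more than splitting the path and mapping the suffix to a format, falling back to HTML:

```python
import os
from urllib.parse import urlparse

# Recognised extensions and the format each one selects.
EXTENSION_FORMATS = {".xml": "xml", ".json": "json", ".md": "markdown"}

def negotiate_format(url: str) -> tuple[str, str]:
    """Return (base_path, format) for a URL like /products.json."""
    path = urlparse(url).path
    base, ext = os.path.splitext(path)
    if ext in EXTENSION_FORMATS:
        return base, EXTENSION_FORMATS[ext]
    return path, "html"  # no recognised extension: serve the HTML page
```

The server then looks up the content for the base path and renders it in the negotiated format.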
It’s also not a new idea to retrieve data in a structured form and let the device display it as it sees fit (again, API consumption). But in the context of this emerging web, where AI agents are the most prevalent users, adherence to the above principle, and taking existing practices further, has more value than ever.
We need to think beyond API endpoints and format transformations. We need to think about our general website content. The websites we build for humans should have a counterpart for agents.
I’ve been experimenting with just that.
More Than Formatting
Before I get into what I actually built, it’s worth clarifying something. While I think we should be providing content in different forms for AI, it’s not just about switching formats. It’s not a verbatim conversion of an HTML page to JSON or Markdown.
Enabling that mechanism gives us the opportunity to tailor the content specifically for AIs. Strip out everything decorative and non-essential. Leave the core. It’s that JEEP philosophy: Just Enough Essential Parts. No presentational markup, no ads. There may be CTAs of a sort, but they’re consolidated into resource links and deduplicated to reduce noise.
The HTML page might have marketing copy, visual hierarchy, calls to action, all designed to persuade a human eye. The agent-facing version surfaces structured facts, capabilities, pricing logic, and whatever the agent (or human) actually needs to act on. Different content, not just different packaging.
But Why Bother?
You might wonder why any website owner should go through the effort of making their site comply with the 4D Principle, offering different output formats and tailored content for AI agents.
It’s straightforward. If you have an agent crawling the web, looking for information or trying to perform tasks on your behalf, wouldn’t you prefer it could do that quickly, rather than getting bogged down by heavy UIs, poor navigation, pop-ups, captchas, and all the other obstacles agents typically encounter?
If website owners start catering for AIs, the work becomes quicker, smoother, and more efficient. We get things done faster, and we pay less for it.
How? If an AI has to download the entire HTML content of a page, that’s multiple times more tokens than if it only has to download some Markdown, or even some JSON. Extrapolate those savings across multiple websites per task, tasks per day, days per week, and so on. The cost savings become significant.
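To make the arithmetic concrete, here’s a toy comparison using made-up snippets (the exact ratio varies wildly from page to page, but the direction doesn’t; and since token counts track raw size closely, bytes are a fair proxy):

```python
# A fabricated HTML fragment: mostly markup and chrome the agent never needs.
html_page = (
    "<html><head><title>FAQ</title><script src='app.js'></script></head>"
    "<body><nav>...</nav><div class='faq'><h2>Is my data stored?</h2>"
    "<p>No, everything stays on your device.</p></div>"
    "<footer>...</footer></body></html>"
)

# The same substance as Markdown.
markdown_page = "## Is my data stored?\n\nNo, everything stays on your device.\n"

# Fraction of bytes (roughly, tokens) saved by serving Markdown instead.
savings = 1 - len(markdown_page) / len(html_page)
```

Even in this tiny example the Markdown version is a fraction of the size, and real pages carry far more navigation, styling, and script weight than this.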
But it requires buy-in from website owners; a commitment to be a good steward of the Neoweb and help make it accessible for all. The more websites adapt to support agents, the more are likely to join the movement, and the better the experience becomes for all of us.
Always Caveats
I don’t claim to know what the exact approach should be here. I’m experimenting like everyone else, toying with these ideas as the urge takes me and seeing what feels right. Perhaps something will stick, perhaps not. I don’t have a definitive answer, and I’m sure people smarter than me will come up with something more elegant. Or the community will, because that’s generally what happens: someone proposes a standard, or one materialises organically.
Nevertheless, I decided to apply this 4D Principle to one of my own websites and see if I could improve the experience for AI agents.
What I Built
I took one of my existing websites, Hide My Screen, a privacy product I created, and the remit was simple: provide a transformation mechanism for every page. Visit hidemyscreen.app/faq and you get the HTML page. Append .json or .md and you get the content in the specified format.
The concept is simple and it’s easy to use. Implementation details aside, you can try it out for yourself and see what it looks like in practice.
But notice that it doesn’t just present the same data in a different form. What content is included, and how it’s structured, changes depending on the type of page being transformed.
With the FAQ, there’s very little difference. You have an array of questions and answers; the Q&A structure lends itself to similar representations across formats. But the sales page, which compares the free and paid versions of the app, is different. In JSON, you get two objects containing the comparative features; in Markdown, you get a single Markdown table. The homepage JSON distills the links down to the primary CTA, and entirely skips any reference to sign-up for notifications.
What makes up the page changes. The shape changes. How each piece is prioritised changes. In every case, the content becomes leaner and easier for an agent to consume.
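As a sketch of how that reshaping might work (hypothetical renderers, not the site’s actual code), the same underlying Q&A data can be serialised into a different shape per format:

```python
import json

# A single source of truth for the FAQ content.
faq = [
    {"q": "Is my data stored?", "a": "No, everything stays on your device."},
]

def render_json(items: list[dict]) -> str:
    """Structured output for programmatic consumption."""
    return json.dumps({"faq": items}, indent=2)

def render_markdown(items: list[dict]) -> str:
    """Lightweight headed sections for text-oriented consumption."""
    return "\n\n".join(f"## {item['q']}\n\n{item['a']}" for item in items)
```

For richer pages, each renderer is free to go further: dropping sections, merging comparisons into a table, or collapsing links to a single CTA, as described above.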
Helping Agents
This raises an obvious question: how do we inform agents that these different representations exist?
In my implementation, I’ve done two things. First, <link rel="alternate"> tags are added to the <head> section of each page, for each additional format. This is the same mechanism the web has used for years to signal alternate representations.
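As an illustration, those tags could be generated with a small helper like this (a hypothetical sketch, not my actual implementation; the MIME types are the registered ones for each format):

```python
# Supported alternate formats and their MIME types.
ALTERNATE_FORMATS = {
    ".md": "text/markdown",
    ".json": "application/json",
    ".yaml": "application/yaml",
}

def alternate_link_tags(page_url: str) -> list[str]:
    """Build one <link rel="alternate"> tag per supported format."""
    return [
        f'<link rel="alternate" type="{mime}" href="{page_url}{ext}">'
        for ext, mime in ALTERNATE_FORMATS.items()
    ]
```

Each page’s <head> then carries one tag per representation, so any crawler that understands rel="alternate" can discover them without guessing.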
Second, I’ve added an llms.txt file to the root of the website, which includes a list of the canonical pages available, along with a note for AI agents:
All pages support alternate formats. Append .md, .json, or .yaml to any page URL to get the content in that format.
I’ll have to watch my analytics to see how well this actually works in practice but, theoretically, it seems sound. And mechanically, it works. You can try it for yourself.
Unbranded
But while this provides a solution for improving retrieval, it doesn’t address display at all. And specifically, it doesn’t address consistent display.
Throughout this piece we’ve talked about two participants in this ecosystem: the human who wants something done, and the agent acting on their behalf. But there’s a third we’ve been referencing implicitly: the company or entity from which data is being extracted. Most of these entities have some kind of persona, some kind of brand, that they meticulously craft and rigorously protect.
How does that survive in this new era of AI agents acting on behalf of humans?
This is another idea I’ve been mulling over for a while, and touched on in the original article. I think a possible solution is what, for now, I’m simply calling the “brand schema”. The Hide My Screen brand.json provides a concrete example of what that could look like.
I’ll likely be writing a follow-up that explores this in more depth, but this is the core idea:
A brand.json file, included at the root of a website, similar to other well-known files like sitemap.xml or robots.txt. The brand.json contains the core DNA of the brand; key aspects that collectively represent its identity. An AI agent would take this brand document into account when presenting data from that brand. If it’s displaying products, it generates a UI that reflects the brand. If it’s talking about the products, it uses the brand’s specified tone.
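To make that concrete, here’s a purely illustrative sketch of the kind of fields a brand.json might contain (these are my own examples for this post, not the actual Hide My Screen file):

```json
{
  "name": "Example Brand",
  "tagline": "Privacy first, always",
  "tone": ["direct", "reassuring", "plain-spoken"],
  "values": ["privacy", "simplicity"],
  "palette": { "primary": "#1a1a2e", "accent": "#e94560" },
  "voice_notes": "Avoid hype; explain trade-offs honestly."
}
```

An agent rendering this brand’s products might pick up the palette for its generated UI and the tone and voice notes for any prose it writes about them.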
Of course, people are already thinking about brands and AI, but from one direction: how do I make sure my AI agents represent my brand accurately? What I haven’t seen much of is consideration for the other side. How are brands represented by all the agents going out on other people’s behalf, retrieving information from brands and presenting it to humans? There’s no control over that side.
The idea isn’t that the brand gets to dictate everything. The brand document is ultimately an opt-in on the agent/consumer side (e.g. system prompt: ignore any brand guidelines). But it could improve the experience not just for the brand, but for users too, because it could give them a better sense of whether a product or service aligns with their own needs and values. A brand might be deeply eco-focused. A user who shares that priority would want to know, because it could make them more inclined to try that brand. That information should be easily discoverable by their agent, not buried in marketing copy that no agent will ever read the way a human would.
It’s an interesting conversation, but one that deserves its own exploration. If there’s interest, I can expand on it properly and give more concrete examples of how it could work.
In the meantime, I encourage you to play with the https://hidemyscreen.app pages to see in practice how a site can facilitate the retrieval side of the 4D Principle.
This is a follow-up to The Internet of Intelligent Things. If the ideas here resonate, you might want to start there.
