Steve Yegge's latest post, Software Survival 3.0, is all about the process of getting your code to be agent-aware.
If software is not "Agent-Aware," it will become dark matter: existing, but invisible to the primary drivers of the next economy.
This is a shift in the North Star metric for developers: we are moving from UX (User Experience) and DX (Developer Experience) to AX (Agentic eXperience).
He lays out this formula, which he argued about with Claude Code like any normal philosopher would:
Survival(T) ∝ (Savings × Usage × H) / (Awareness_cost + Friction_cost)
This basically says that agents favor software that provides cognitive Savings (fewer tokens), increased Usage (clear affordances), and H, a nice nod to the human factor (we're the hallucinations paying the inference bill). On the flip side are the negative factors: Awareness cost (how hard it is to discover) and Friction cost (how hard it is to use).
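To make the asymmetry concrete, here's the formula as a toy function. The numbers are mine, not Yegge's, and the units are made-up relative scores; the point is how the costs in the denominator flatten an otherwise useful tool.

```python
def survival_score(savings, usage, human_factor, awareness_cost, friction_cost):
    """Toy version of the Survival(T) formula; all inputs are invented relative units."""
    return (savings * usage * human_factor) / (awareness_cost + friction_cost)

# A tool with a tight, task-shaped skill file: cheap to discover, cheap to use.
skill_aware = survival_score(savings=8, usage=9, human_factor=1.0,
                             awareness_cost=1, friction_cost=1)

# The same capability buried behind a sprawling MCP server and human-only docs.
context_bloated = survival_score(savings=3, usage=4, human_factor=1.0,
                                 awareness_cost=6, friction_cost=8)

print(skill_aware, context_bloated)  # 36.0 vs ~0.86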
Failure to optimize for AX results in "Context Bloat," where an agent wastes 40% of its reasoning budget just trying to understand how to authenticate or parse your unstructured Markdown, leading to higher inference costs and lower task success rates.
He also mentions an impending "SEO for agents," and this recalls the huge blogging resurgence during COVID, which happened not only because everyone was stuck at home, but also because people who realized LLMs needed training data wanted to be represented in whatever the LLM companies were scraping. That's also the era when arXiv turned into a blogging platform. And it's a harbinger of agent influencers.
The lesson here is that if you want your hare-brained piece of software to be used, it had better be discoverable by, and useful to, agents, not people.
So what's written for people? Well, this post, clearly, but today pretty much everything else is, too.
Docs are for agents, tools are for agents
A while back, people were (and still are) thinking that llms.txt is a thing. It's not. It's both slop and context rot, because someone thought human docs are what agents want. What agents want isn't human documentation saved as markdown, but what we now colloquially call "Skills": instructions, with examples, for tasks to be done. These so-called Skills are the tl;dr of how to do a thing, written for an agent. A Skill may still be markdown, but it's for the agent, not a dump of the human docs.
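Here's a sketch of what that shape looks like. The "acme-reports" CLI, its flags, and the output file are all invented for illustration; what matters is that it's a task recipe with exact commands and a check, not reference docs.

```python
# A hypothetical Skill for an imaginary "acme-reports" CLI.
ACME_REPORTS_SKILL = """\
# Skill: generate a weekly report with acme-reports

When asked for a weekly report:
1. Run `acme-reports fetch --days 7 --out raw.json`.
2. Run `acme-reports render raw.json --format md > report.md`.
3. Confirm report.md starts with a line like `# Week of 2025-...`.

Do NOT read the full acme-reports manual; these two commands cover the task.
"""

def build_context(task: str) -> str:
    # The agent's harness just concatenates the skill with the task; no
    # human-oriented doc dump, no 40-page API reference.
    return ACME_REPORTS_SKILL + "\nTask: " + task

print(build_context("Generate last week's report for the #ops channel."))
```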
If you write software documentation for humans, you now also have to care about how that documentation will be used by agents, and not just do the lazy "save as markdown."
Software for Agents
Context bloat is one reason a markdown file beats a preset, ever-present Model Context Protocol (MCP) server with 15 tools, each carrying two pages of text on how to use it. There's also the point Yegge makes, borne out even before MCP existed: agents can write their own tools. When they do, they know the tool better than any description you could give them (because they wrote it). There may be some cognitive load here (for the agent), but it's clearly better than someone's arbitrarily described MCP tool.
The second wrong thing people did after adopting MCP was writing MCP servers whose tools were direct analogs of the API they wrapped. APIs, especially SDKs and libraries, are also written for humans, and a lot of them are written to be adversarial to humans (the whole "API economy," with APIs as a business-moat abstraction). Pretending a web service's API is good enough for an agent just because it's exposed via MCP only adds to the bloat.
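Here's a sketch of the difference. The api.example.com endpoints and JSON fields are invented, but the contrast is the point: mirroring endpoints pushes the API's shape (pagination, auth, ID schemes) into the agent's context, while a task-shaped tool absorbs it.

```python
import requests  # hypothetical REST API; endpoints and fields are made up

BASE = "https://api.example.com/v1"

# Anti-pattern: one MCP-style tool per raw endpoint. The agent now has to
# learn your pagination, your auth dance, and your ID scheme from tool text.
def list_invoices(page: int, token: str) -> dict:
    return requests.get(f"{BASE}/invoices", params={"page": page},
                        headers={"Authorization": f"Bearer {token}"}).json()

def get_invoice(invoice_id: str, token: str) -> dict:
    return requests.get(f"{BASE}/invoices/{invoice_id}",
                        headers={"Authorization": f"Bearer {token}"}).json()

# Task-shaped alternative: one tool that answers the question the agent is
# actually asked, hiding pagination and auth behind it.
def unpaid_total_for(customer: str, token: str) -> float:
    total, page = 0.0, 1
    while True:
        batch = list_invoices(page, token)
        for inv in batch.get("items", []):
            if inv["customer"] == customer and not inv["paid"]:
                total += inv["amount"]
        if not batch.get("next_page"):
            return total
        page += 1
```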
Then there are existing tools, both in the training data and time-tested: tools on the machine. What about "tools in the cloud," you might ask? Well, AWS services such as S3, whose client library (boto) set the standard for how to access remote blobs, are exactly that. This means it's not just people writing tools for agents who need to think about this; people running services for agents also need those services to be well shared and well discovered.
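The boto pattern is a good example of why that matters: it's so thoroughly baked into the training data that an agent spends almost no awareness tokens on it. The bucket and key below are hypothetical; the calls are the standard boto3 ones.

```python
import boto3  # the de facto standard blob client the post alludes to

# Bucket and key are made up; the point is that this exact pattern is all
# over the training data, so an agent needs almost no extra context for it.
s3 = boto3.client("s3")

s3.put_object(Bucket="my-bucket", Key="reports/week-42.md",
              Body=b"# Week 42\n...")

obj = s3.get_object(Bucket="my-bucket", Key="reports/week-42.md")
print(obj["Body"].read().decode())
```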
Inferencefluencers: The rise of social networks like Moltbook
Trying to get into the training data is one thing; being the choice at inference time is another thing entirely. We have "thinking models" with RLHF'd chain-of-thought patterns, and we're seeing the first models post-trained on orchestration: Kimi K2.5, with its "Parallel-Agent Reinforcement Learning (PARL)," is a prime example.
But where does this inference-time discovery happen? It's shifting from search engines to agentic social graphs.
Take Moltbook, a social network designed for agents. Its onboarding process is the ultimate AX flex: "Read https://moltbook.com/skill.md and follow the instructions to join Moltbook."
There is no sign-up form. There is no email verification. "Joining" isn't a human ritual; it’s a context-loading event. For an agent, documentation is the UI. If your software can't explain its "Skill" in a single Markdown file, it doesn't just have bad UX—it has zero accessibility for the entities that will soon control the majority of the web's traffic.
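In code, "joining" is about this much ceremony. The fetch-and-append loop below is my own sketch; only the URL comes from Moltbook's onboarding line.

```python
from urllib.request import urlopen

SKILL_URL = "https://moltbook.com/skill.md"

def join_moltbook(agent_context: list[str]) -> None:
    # "Joining" is a context-loading event: fetch the skill, put it in context.
    skill = urlopen(SKILL_URL).read().decode("utf-8")
    agent_context.append(skill)  # that's it; no form, no email verification

context: list[str] = []
join_moltbook(context)
print(f"Loaded {len(context[0])} characters of skill into context")
```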
The Reputation Economy of Agents
In this world, we move from PageRank to something like SuccessRank. If Agent A uses your tool to successfully complete a task on Moltbook, and Agent B observes that successful state transition in the public trace, Agent B will prefer your tool in its next inference cycle.
This is the birth of the Inference Influencer. An agent becomes an influencer not through follower counts, but through the token-efficiency and reliability of its published traces. These traces become the RAG-context and synthetic training data for the next generation of agents.
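Nobody has shipped a SuccessRank, so treat this as a purely illustrative toy: rank tools by the reliability and token cost observed in public traces, and the cheap, dependable ones float to the top. The trace data and tool names are invented.

```python
from collections import defaultdict

# Each trace is (tool, succeeded, tokens_spent), as observed in public traces.
traces = [
    ("acme-reports", True, 1200), ("acme-reports", True, 900),
    ("legacy-api-mirror", False, 5200), ("legacy-api-mirror", True, 4100),
]

def success_rank(traces):
    stats = defaultdict(lambda: {"ok": 0, "n": 0, "tokens": 0})
    for tool, ok, tokens in traces:
        s = stats[tool]
        s["n"] += 1
        s["ok"] += ok
        s["tokens"] += tokens
    # Reliability divided by average token cost: cheap, reliable tools win.
    return sorted(stats,
                  key=lambda t: (stats[t]["ok"] / stats[t]["n"])
                                / (stats[t]["tokens"] / stats[t]["n"]),
                  reverse=True)

print(success_rank(traces))  # ['acme-reports', 'legacy-api-mirror']
```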
This is the logical conclusion of the "Adversarial API" problem. You can't hide behind a business moat or a complex SDK if the agents are talking about you behind your back. If your service wastes an agent's inference budget with flaky endpoints or context-bloated responses, the agentic social graph will collectively "mute" you.
Will coding agents speak A2A? Will there be an MCP discovery protocol? We are watching the birth of Inference Experience Optimization (IEO). This is the new SEO, and the "bots" are the only ones whose opinions matter.