I Told My AI to Monitor Everything. It Built a Better Stack Than I Would Have.
How an autonomous agent chose Prometheus, Grafana, and Loki — then deployed them across my homelab with minimal human intervention.
Disclosure: I’m a Cloudflare employee and shareholder. Opinions expressed here are my own and do not represent the views of Cloudflare.
I recently asked my AI agent to build an observability stack across my homelab. It chose Prometheus, Grafana, and Loki. It provisioned a container, deployed exporters on every machine, configured retention policies, and shipped the whole thing in a single session. I stepped in twice — once to escalate its API permissions, once to work around an SSH issue — but the agent drove the process.
Here’s the interesting part: it never asked me which tools to use.
It didn’t open a ticket asking me to choose between Prometheus and InfluxDB. It didn’t send a message weighing the pros and cons of Loki versus Elasticsearch. It didn’t block, waiting for my input on a Grafana dashboard template.
It had opinions. And those opinions are why the work got done.
The default instinct when building AI agents is to optimise for neutrality. The agent should present options. Let the human decide. Never assume.
This sounds responsible. In practice, it’s a productivity trap.
An agent that asks “which monitoring tool would you prefer?” is really asking a different question: “Would you like to do the research yourself, or should I do nothing until you do?” Every question the agent asks is a context switch for the human. Every round-trip is latency. And every time the human has to stop what they’re doing to make a decision the agent could have made, the promise of autonomous AI gets a little less convincing.
The agents that ship — the ones that actually get work done — are opinionated. They pick reasonable defaults, communicate what they’ve chosen, and move forward. If the human disagrees, they course-correct. But the default is motion, not paralysis.
Software engineering has a name for this: convention over configuration. The principle that a framework should provide sensible defaults, and you should only need to specify things when you diverge from the norm.
Ruby on Rails made it famous. Instead of making you configure your database table names, URL routes, and file structure, Rails said: “Put your model here. Name it this. We’ll figure out the rest.” Developers who followed the conventions shipped in hours what used to take days of boilerplate.
The insight was that most decisions aren’t actually decisions. They’re rituals. You don’t want to choose a URL routing convention. You want to build the feature behind the URL.
Convention over configuration was designed to help human developers. But the principle applies even more powerfully when the developer isn’t human.
What makes this particularly striking is that the agent doesn’t arrive as a blank slate. Models already converge on the same tools across different project types and phrasings — the agent has opinions before the platform even speaks. Convention over configuration works at every layer: the ecosystem, the platform, and the model itself.
When a human developer hits an unconfigured choice, they can usually figure it out in a few minutes. They have context, intuition, and Stack Overflow. The cost of an unnecessary decision is annoying but manageable.
For an agent, the cost structure is different. When an agent encounters an ambiguous choice — one where multiple paths are valid and no convention exists — it has three options:
Ask, guess, or stop. None of these are good.
Here’s the thing: agents already have opinions. The amplifying.ai study found that Claude Code picks GitHub Actions for CI/CD 94% of the time, Stripe for payments 91%, shadcn/ui for components 90% — unprompted, with no tool names in the input. The agent converges on a default stack whether or not the platform provides one. An opinionated platform doesn’t fight this — it aligns with it. When the platform’s convention matches the convention the agent already expects, the trilemma disappears entirely. The agent doesn’t need to ask, guess, or stop. It follows the path and keeps moving.
Platform engineering calls these golden paths — opinionated, well-documented, supported ways of building and deploying software. Platform engineers define them as routes “designed to reduce cognitive load” so developers “can focus on solving problems rather than wrestling with setup.”
For human developers, golden paths are about efficiency.
For AI agents, golden paths are about capability. An agent operating on a platform with strong golden paths can autonomously complete tasks that would otherwise require human intervention. The path isn’t just faster — it’s the difference between the agent being able to do the job at all.
There’s a corollary most tool vendors haven’t processed yet. YC partner Jared Friedman noted recently that even the best developer tools mostly don’t let you sign up for an account via API — meaning an agent can’t onboard itself. If Claude Code can’t sign up, it can’t use the tool. API-first account management is table stakes now. Tools that require a browser and a human to complete onboarding are invisible to an agent that’s trying to work autonomously. The golden path has to extend all the way to day zero.
Golden paths tell agents how to build. But agents also need to discover what a platform can do — without a human writing a custom integration first.
MCP (Model Context Protocol) is an emerging standard for exactly this. It lets agents discover and invoke platform capabilities through a uniform interface. Instead of reading API docs, parsing OpenAPI specs, or guessing at endpoints, an agent connects to an MCP server and sees what’s available. Cloudflare’s Code Mode (more on that below) is an early example of what this looks like when taken seriously — collapsing an entire API into a surface an agent can navigate autonomously.
If golden paths are convention over configuration for building, MCP is convention over configuration for connecting.
The best way to see this pattern is to watch it repeat across platforms that got the memo independently.
Push your code to Vercel. That’s the instruction. There is no step two.
Vercel detects your framework — Next.js, SvelteKit, Nuxt, Astro — and auto-configures the build command, output directory, and deployment settings. You don’t configure a CDN. You don’t set up SSL. You don’t write a build script. The platform has opinions about all of these things, and those opinions are almost always right.
For a human developer, this means a five-minute deploy instead of a thirty-minute configuration session. For an agent, it means the difference between “deploy this frontend” being a single command or a multi-step research project into build tooling, CDN setup, and SSL certificates. Vercel’s conventions turn deployment into a solved problem — one an agent can execute without asking a single clarifying question.
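In practice, the whole golden path is a couple of commands. A hedged sketch, assuming the Vercel CLI and a detectable framework (say, Next.js) in the current directory:

```shell
# Vercel inspects the repo, detects the framework, and infers the
# build command and output directory. No config file required.
npx vercel          # build and deploy a preview
npx vercel --prod   # deploy to production
```

That is the entire interface an agent has to learn: one command for previews, one flag for production.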
Railway’s Railpack analyses your source code and produces a deployable image — no Dockerfile required. Point it at a Go, Python, Node, or Rust repo and it figures out the runtime, dependencies, build steps, and start command. Every decision it makes for you is a decision an agent doesn’t have to make. Agents are less likely than humans to have exotic requirements, so the default covers an even higher percentage of agentic workflows than human ones.
Cloudflare’s developer platform makes a series of architectural decisions that compound: Workers deploy to every location in the network by default, so there is no region to choose; resources attach through bindings declared in configuration rather than connection strings; and the storage products themselves (KV, R2, D1) live on the same platform, wired the same way.
Each of these is an opinion. Each removes a decision point. And the cumulative effect is a platform where an agent can go from “build me an API with a database and object storage” to a deployed, globally distributed application without asking which region to deploy in, how to connect to the database, or where to put the storage bucket.
The binding model is where this gets sharpest. Instead of environment variables and connection strings, you declare “I need a KV namespace” in a config file and it appears as a typed object in your code. No URL. No credentials. No connection pooling config. An agent literally cannot misconfigure a database connection, because there’s no connection to configure. Vercel and Railway removed deployment decisions. Cloudflare removed infrastructure wiring decisions — a layer deeper.
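A concrete sketch of the binding model (the names here are illustrative, not from a real project): the Worker’s configuration declares the namespace, and the runtime injects it as a typed object.

```toml
# wrangler.toml — declare the dependency; no URL, no credentials
name = "my-worker"
main = "src/index.ts"

[[kv_namespaces]]
binding = "CACHE"      # appears as env.CACHE in the Worker's code
id = "<namespace-id>"  # placeholder; Wrangler supplies the real id
```

Inside the Worker, `await env.CACHE.get("key")` is the entire data-access story. There is no client to construct and no connection string to get wrong.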
Cloudflare pushed the principle further with Code Mode, which collapses its entire API — DNS, security, Workers, storage — into just two MCP tools: search() and execute(). Instead of presenting an agent with thousands of tool definitions (which would consume 1.17 million tokens), the agent writes code against a typed SDK. The entire interface fits in roughly 1,000 tokens.
Sometimes the convention doesn’t fit. When Vercel’s auto-detected build config is wrong, you override one setting. When an agent’s choice of monitoring tool doesn’t meet a compliance requirement, you redirect it with a sentence. Opinionated systems need escape hatches — but good ones make the escape hatch a single override, not a redesign. And in enterprises, the golden path still has to pass through your security review, your compliance gates, and your approved vendor list — opinions don’t override governance.
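The Vercel version of that single override is one line of configuration in `vercel.json` (the command shown is a placeholder, not a recommendation):

```json
{
  "buildCommand": "npm run build:custom"
}
```

Everything else (framework detection, output directory, routing) stays on the convention. The escape hatch is exactly as wide as the disagreement.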
The key is that the agent started with an opinion. Starting from an opinion and adjusting is always faster than starting from nothing and building consensus. The first path gets you a deployed stack that might need tweaking. The second gets you a meeting that might produce a plan.
This isn’t accidental, and it isn’t happening in isolation.
Cloudflare CEO Matthew Prince has been talking about the “Agentic Internet” — a future where autonomous AI agents generate a significant share of web traffic. He’s noted that a human might visit five websites to complete a task, while an autonomous agent might make 5,000 requests.
The platforms that were built to remove friction for human developers are discovering that the same design principles — strong defaults, golden paths, convention over configuration — remove even more friction for autonomous agents. The humans who benefited from opinionated platforms had the option to configure things manually. Agents, practically speaking, don’t.
The platforms that designed for speed are inheriting the agentic era. Not by accident. By architecture.
If you’re building platforms: every sensible default you provide is a decision an agent won’t have to ask about. Every golden path you pave is a task an agent can complete autonomously. The platforms that win the agentic era won’t be the ones with the most features or the most flexibility — they’ll be the ones with the strongest, most reasonable defaults.
If you’re building agents: seek out opinionated platforms. Your agent will move fastest where conventions are strongest. When the platform has already decided how to deploy, how to connect, how to scale — your agent can focus on the problem it was actually asked to solve.
Platform opinions are an interface contract — a promise that says “I’ve already decided this for you. Go build the thing that matters.” For human developers, that promise is about convenience. For AI agents, it’s about capability. The promise doesn’t change. The stakes do.
Convention over configuration was always about shipping faster. The platforms that win the agentic era won’t be the ones with the most features. They’ll be the ones where an agent can ship without asking a single question.
Agentically co-authored.