Volken
An MCP-based deployment layer for enterprise AI tooling.
Early development
Volken is the project I'm in the slow part of. It's the third or fourth thing I've thought it might be, and it's only just settled. This page is the working note of what it now is, and what it isn't yet.
It started inside Cograph. I'd written an MCP server with a handful of tools on it (repository analysis, dependency extraction, summary generation), spent a week getting the local version sharp, and then ran into a wall trying to get it hosted somewhere other than my own machine. The platforms I tried weren't tuned for the protocol's networking: Streamable HTTP, the SSE transport, the way the endpoint expects to be reached. Most of them worked, sort of, after a config-shaped fight. The version that actually shipped was the one running on my laptop, which meant Cograph only worked when my laptop was on.
After enough of those evenings I started writing the platform I wanted to have existed. That's most of the pitch. Volken is the thing I'd have paid someone for if it had been there.
What it is
Managed hosting for MCP servers, CLI-first. You write the tools in your language of choice (TypeScript, Python, Go are the runtimes I'm starting with), and Volken handles everything between that code and the AI client calling it. Two commands. volken init walks the project, detects the runtime, and writes the deploy config. volken deploy builds the image, ships it, brings up an MCP-aware gateway in front of it, and gives you back a URL. Auth, rate limiting, secrets injection, auto-stop compute, and a per-tool-call log stream are defaults, not an upgrade path.
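Nothing below is a published schema — the field names and values are my sketch of the shape a deploy config like the one volken init writes might take, covering the pieces named above (runtime detection, the gateway defaults, auto-stop compute, secrets injection):

```toml
# volken.toml -- hypothetical; illustrative field names, not a real schema.
name = "cograph-tools"
runtime = "typescript"          # detected by `volken init`
entry = "src/server.ts"         # module that registers the MCP tools

[gateway]
transport = "streamable-http"   # MCP transport exposed at the public URL
auth = "bearer"                 # on by default, not an upgrade path
rate_limit = "60/min"

[compute]
auto_stop = "5m"                # scale to zero after five idle minutes

[secrets]
# injected at runtime, never baked into the image
GITHUB_TOKEN = { from = "env" }
```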
The "CLI-first" choice is deliberate. The audience is developers who already have an editor open and a terminal next to it. A web dashboard would be a second context to keep in your head; a CLI is something you can hold in muscle memory.
Why MCP, and why now
MCP is the protocol AI agents use to talk to anything outside their own context window. Tools, databases, filesystems, internal APIs. Anthropic introduced it; Anthropic, OpenAI, Google, Microsoft and Salesforce all ship against it now. The SDK's monthly downloads went from a hundred thousand to nearly a hundred million in eighteen months, and the protocol was donated to the Linux Foundation late last year. As of writing, it's the closest thing the agent world has to a standard.
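To make "talk to anything outside their own context window" concrete: MCP messages ride on JSON-RPC 2.0, and a tool invocation is a tools/call request naming the tool and its arguments. A minimal sketch of one such request — the tool name and arguments are invented for illustration; only the envelope follows the spec:

```python
import json

# One MCP tool invocation as it travels over the transport. MCP speaks
# JSON-RPC 2.0; the client sends a "tools/call" request with the tool's
# name and a JSON object of arguments. "analyze_repository" and its
# argument are made up here -- they belong to no real server.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "analyze_repository",
        "arguments": {"url": "https://github.com/example/repo"},
    },
}

print(json.dumps(request, indent=2))
```

The response comes back as a JSON-RPC result with the same id, which is what lets a gateway in front of the server do per-tool-call logging and rate limiting without understanding the tools themselves.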
The infrastructure around it hasn't caught up. The published numbers (which match what I saw building Cograph) say most MCP servers never leave a developer's laptop, and that of the ones that do, most are dead endpoints by the time anyone tries to call them. Anyone trying to ship a serious tool right now is choosing between rolling their own deployment story, contorting a generic platform, or quietly accepting that the server only runs when their laptop does.
The bet is that the contract is narrow enough to be standardised, and that the everything-around-it (transport, auth, observability, deploy, secrets) can be a platform problem rather than every developer's problem. Building one of those platforms is the work.
Open questions
Things I don't yet have clean answers to:
A volken dev story that pairs a local server with MCP Inspector in the browser, but I haven't shipped it.
Why publish this now
Two reasons. First, I'd rather show that I think about products before I start them than pretend they spring fully-formed. Second, writing this down forces me to read it back, which is the closest thing I have to a collaborator at this stage.
When there's a working version this page becomes a real case study, and this version of it goes into the colophon as an artefact of the early thinking. Until then, it's a note. If the bet pays off, this is the company. If it doesn't, the thinking earned its keep.