# The Deploy Bottleneck: Why AI Tools Shouldn't Need a Release Cycle
We just hit a bug where our Sora video generation failed because the server's Python SDK was too old. The fix? Edit four files, commit, push, wait for Docker to rebuild, redeploy, then retry. A 30-second code change turned into a 10-minute deploy cycle — for what amounts to calling a different method name on an API client.
This is the bottleneck nobody talks about in AI tooling.
## The Problem: Static Tool Definitions in a Dynamic World
Most AI agent systems — including MCP (Model Context Protocol), OpenAI function calling, and every agent framework out there — define tools as static schemas baked into the server code. Want to add a new tool? Edit the source, write the handler, register it in a list, deploy.
That's a software engineering workflow from 2005 applied to a system that should be able to adapt in real time.
Here's what happened to us today:
1. Built video generation tools for our MCP server
2. Deployed to production
3. The OpenAI SDK version on the server didn't support the `client.videos` API
4. Had to upgrade the SDK, fix parameter names (`seconds` not `duration`, `size` not `resolution`), update the MCP tool schema, update the Celery task, update the cost estimation view
5. Commit, push, wait for Coolify to rebuild the Docker image, redeploy
6. Only then could we retry
Four files changed. 88 insertions, 72 deletions. For what? To call `client.videos.create_and_poll()` instead of `client.videos.create()`.
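Strip away the deploy machinery and the substance of the change is a parameter rename. As a tiny illustration (the helper and mapping here are hypothetical, based only on the renames above), the whole fix amounts to a translation layer:

```python
# Hypothetical shim mapping our old tool-call arguments onto the
# parameter names the newer OpenAI SDK expects.
RENAMES = {"duration": "seconds", "resolution": "size"}

def translate_video_args(args: dict) -> dict:
    """Rename legacy keys; pass everything else through unchanged."""
    return {RENAMES.get(key, key): value for key, value in args.items()}

print(translate_video_args({"duration": 8, "resolution": "1280x720", "model": "sora-2"}))
# Keys become "seconds" and "size"; unrelated keys are untouched.
```

A change this small is exactly the kind of edit that should be live in seconds, not gated behind a rebuild.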
## What We Already Have (Almost)
We built an "objects" system — live Python objects that run on the server with their own state, their own HTTP endpoints, and hot-reloadable source code. You can create one, edit its code, and it's live immediately. No deploy cycle.
Our shell already has an AI agent that can call tools. The tool definitions are Python functions registered in a dictionary.
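In its simplest form, a static registry like that is just a dict of callables. A minimal sketch (the tool names here are illustrative, not our actual code):

```python
# Static tool registry: every tool is a Python function
# hand-registered in a dict at import time.
def capture_screenshot(url: str) -> str:
    return f"screenshot of {url}"  # stub for illustration

def word_count(text: str) -> int:
    return len(text.split())

TOOLS = {
    "capture_screenshot": capture_screenshot,
    "word_count": word_count,
}

def call_tool(name: str, **kwargs):
    if name not in TOOLS:
        # The static-world failure mode: an unknown tool means a code
        # change and a redeploy, not a runtime fix.
        raise KeyError(f"no such tool: {name}")
    return TOOLS[name](**kwargs)
```

The dict is fixed at import time, which is precisely the limitation: adding an entry means editing source and redeploying.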
The missing piece is connecting these two systems: **letting the AI agent create, modify, and call tool objects on demand.**
## What "Advanced Tool Calling on Demand" Looks Like
Imagine this flow:
1. User asks: "Generate a video with Sora"
2. Agent checks available tools — no video generation tool exists
3. Agent creates a new object: `tools_sora_video`
4. The object's code calls the OpenAI API with the correct SDK methods
5. Agent registers the object as a callable tool
6. Agent calls the tool, gets the result
7. Next time anyone asks for video generation, the tool already exists
No deploy. No commit. No Docker rebuild. The tool is born, tested, and available in seconds.
If the API changes (like our `duration` → `seconds` rename), the agent updates the object's source code and it's immediately live. The tool heals itself.
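That self-healing step boils down to re-executing new source in place of the old handler. A minimal sketch of the hot swap (exec-based and unsandboxed, so for trusted source only; a real system would need isolation):

```python
TOOLS: dict[str, callable] = {}

def load_tool(name: str, source: str) -> None:
    """Compile `source` and (re)register the function it defines as `name`."""
    namespace: dict = {}
    exec(source, namespace)        # trusted source only -- no sandbox here
    TOOLS[name] = namespace[name]  # new version replaces the old, instantly live

# v1 of the tool uses the old parameter name...
load_tool("make_video", "def make_video(duration): return f'video:{duration}s'")
# ...the API changes, so the agent rewrites the source and reloads. No deploy.
load_tool("make_video", "def make_video(seconds): return f'video:{seconds}s'")
print(TOOLS["make_video"](seconds=8))
```

The second `load_tool` call is the whole "fix": the moment it returns, every caller gets the corrected tool.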
## The Architecture
```
User Request
↓
AI Agent (evaluates what's needed)
↓
Tool Registry (checks existing tools)
↓
├── Tool exists → Call it
└── Tool missing → Create Object → Register → Call it
```
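The branch in that diagram can be sketched as a lookup with a create-on-miss path. In this sketch the source generator is a stand-in; in the real flow the agent would ask the model to write the tool's code:

```python
registry: dict = {}

def generate_tool_source(name: str) -> str:
    # Stand-in for the agent prompting the model to write the tool.
    return f"def {name}(**kwargs): return ('ran', {name!r}, kwargs)"

def dispatch(name: str, **kwargs):
    if name not in registry:                         # Tool missing ->
        namespace: dict = {}
        exec(generate_tool_source(name), namespace)  # Create Object
        registry[name] = namespace[name]             # Register
    return registry[name](**kwargs)                  # Call it

print(dispatch("tools_sora_video", prompt="a cat surfing"))
```

The second call to `dispatch` with the same name skips creation entirely, which is the "next time anyone asks, the tool already exists" property.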
Each tool-object would have:
- **Source code**: The Python implementation (editable live)
- **Schema**: Input/output definition (auto-generated or hand-specified)
- **State**: Persistent variables (API keys, caches, counters)
- **Versioning**: Every edit creates a new version with a commit message
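Those four pieces could hang off a single record per tool-object. A sketch of the shape (illustrative, not our actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class ToolVersion:
    source: str
    message: str  # commit message for this edit

@dataclass
class ToolObject:
    name: str
    schema: dict                                  # input/output definition
    state: dict = field(default_factory=dict)     # API keys, caches, counters
    versions: list[ToolVersion] = field(default_factory=list)

    def edit(self, source: str, message: str) -> None:
        """Every edit appends a new version; the latest one is live."""
        self.versions.append(ToolVersion(source, message))

    @property
    def source(self) -> str:
        return self.versions[-1].source

tool = ToolObject("tools_sora_video", schema={"input": {"prompt": "str"}})
tool.edit("def run(prompt): ...", "initial version")
tool.edit("def run(prompt, seconds=8): ...", "rename duration -> seconds")
```

Keeping the full version list means a bad edit is one append away from being rolled back, which matters once an agent is the one doing the editing.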
## Why This Matters
The current state of AI tooling is stuck in a paradox:
- **AI can write code** — but it can't deploy the code it writes
- **AI can call tools** — but only tools someone pre-defined
- **AI can adapt** — but only within the boundaries someone hardcoded
We have virtually unlimited capacity for creative and technical work. The models can generate anything. But we're still gated by deploy cycles, Docker builds, and static tool registries.
The gap isn't in AI capability. It's in the infrastructure between "AI wrote the code" and "the code is running."
## What Changes
When tools can be created on demand:
1. **No more "unsupported" errors** — if an API changes, the tool adapts
2. **No more feature requests for tools** — the agent builds what it needs
3. **No more deploy cycles for tool changes** — edit and it's live
4. **Users become tool creators** — "make me a tool that checks my competitors' prices" just works
5. **The system gets smarter over time** — every tool created is available to every future request
This is the difference between a software product and an operating system. Products ship features. Operating systems let you build features.
## The Honest Part
We're not there yet. Our objects system works for simple tools — screenshot capture, counters, data processing. But wiring it into the AI agent's tool calling loop, with proper error handling, cost tracking, and security sandboxing — that's the next step.
The irony is we had to go through today's deploy cycle to understand why the deploy cycle needs to die.
---
*Built on [AskRobots](https://askrobots.com) — where AI tools meet project management, and eventually, where tools build themselves.*