Run Your Documentation MCP Through an Orchestrator, Not Your Main Coding Mode
Learn why running documentation MCPs in a dedicated researcher mode prevents context pollution and keeps your expensive coding model focused on implementation.
How teams use agents to iterate, review, and ship PRs with proof.
Coding benchmarks test isolated function generation, not real-world agentic work. Learn why building your own evals produces reliable signal for model selection.
Why local models struggle with apply-diff tool calls, and the strategies that actually work for reliable code editing in agentic workflows.
Local models under 14B parameters fail at agentic coding workflows. Learn the practical thresholds for model size and VRAM that actually close the loop.
Local coding models have not kept pace with frontier APIs. Devstral is the one exception that handles agentic workflows reliably on 32GB of VRAM.
Learn how to route AI coding tasks by complexity: use local or cheap models for boilerplate and scaffolding while reserving frontier models for debugging and architectural reasoning.
Understanding why Roo Code's orchestrator spawns subtasks with fresh context windows, and how to manage the visibility gap until tooling improves.
Memory banks capture snapshots that drift from reality. Codebase indexing reflects what your code actually is right now, eliminating stale context failures.
Why higher token costs often mean lower task costs: the counterintuitive math of model selection when smaller models spiral through repeated failures.
Learn why MCP tool descriptions alone are not enough for reliable AI agent behavior, and how custom instructions define when tools should be called.
Learn the immutable reference pattern that enabled a 27-hour autonomous agent run to port 47 routes from Laravel to Go and Next.js with an 83% success rate for $110.
Learn how to prevent context poisoning in orchestrated AI workflows by treating subtasks like junior developers with atomic, scoped instructions.
Cloud Agents review code, catch issues, and suggest fixes before you open the diff. You review the results, not the process.