Open Source Is Shifting from PRs to Issues
The open source contribution model is reversing. With AI agents that can attempt fixes from issue descriptions, the valuable contribution shifts from writing code to describing problems clearly.
How teams use agents to iterate, review, and ship PRs with proof.
Why quantized models underperform on long-running agent tasks and how to identify when precision loss is compounding in your workflow.
Why reliable tool execution beats elegant first drafts in AI coding agents, and how to evaluate models for production workflows.
Learn the three critical configuration settings that make Qwen3 Coder perform like Sonnet: temperature, provider selection, and avoiding quantized versions.
Why specialized AI coding modes with detailed system prompts outperform generic modes that require constant correction, and how to calculate the real token and time costs.
Learn how a custom mode that asks questions, rather than waiting for a perfect prompt, eliminates prompt engineering by extracting the context your AI coding agent needs through conversation.
Learn why forcing AI coding agents to present implementation options before writing code catches wrong approaches early and saves time, tokens, and debugging effort.
Why orchestration workflows fail at integration boundaries and how human checkpoints catch what parallel agents miss.
Learn when to use Orchestrator mode versus Code mode in AI coding agents. Avoid token waste and coordination overhead by matching tool complexity to task scope.
Learn why MCP tool descriptions explain mechanics but not workflow timing, and how to write rules files that tell AI agents when to invoke each tool.
Learn when extended thinking in AI models wastes tokens and time, and how to match reasoning overhead to task complexity for better results.
Learn why detailed specifications can make AI coding agents worse and how mandate-based prompting delivers better results with less token overhead.
Cloud Agents review code, catch issues, and suggest fixes before you open the diff. You review the results, not the process.