Autonomous Coding Agents and Long-Term Context

March 18, 2026

The best production agents have carefully curated context. Too much context and the agent gets off track; too little and it doesn't have enough information to reach an accurate conclusion.

As we build headless coding agents that can run autonomously in parallel, we give them context across Linear, Slack, GitHub, and Notion. We give them a system prompt, dynamic instructions, and skills. We give them tools to discover more context through codebase or web searches.

But there's something in the heads of the best engineers that I don't think we have figured out how to transmit yet.

  • I'd better not abstract this yet because the other team might need this in a few weeks if they keep going in the current direction.
  • This package's roadmap might be at risk in a year because the parent company doesn't have a sustainable business model. Better not use it.
  • Now is the right time to make our test naming structure consistent across the repo, before we onboard another 5 engineers.

All of this context is probably possible to provide to AI agents, but I don't think we've figured out an efficient way to do it yet.
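One naive approach would be to make that tacit judgment explicit as structured "context notes" that get rendered into an agent's prompt alongside its other instructions. This is purely a hypothetical sketch — the `ContextNote` shape, fields, and rendering are my own illustration, not any existing agent framework's API:

```python
from dataclasses import dataclass

@dataclass
class ContextNote:
    """A hypothetical unit of tacit engineering judgment, written down
    so it can be injected into a coding agent's prompt."""
    scope: str       # a package, directory, or repo-wide concern
    judgment: str    # the advice itself
    rationale: str   # why a human engineer believes it
    review_by: str   # date after which the note may be stale

def render_notes(notes: list[ContextNote]) -> str:
    """Flatten notes into a prompt section an agent could consume."""
    lines = ["## Engineering judgment (revisit dates noted)"]
    for n in notes:
        lines.append(
            f"- [{n.scope}] {n.judgment} "
            f"(because: {n.rationale}; revisit after {n.review_by})"
        )
    return "\n".join(lines)

# Example note mirroring the first bullet above (scope and date invented)
notes = [
    ContextNote(
        scope="payments/",
        judgment="Do not abstract the retry logic yet",
        rationale="another team may need a divergent version in a few weeks",
        review_by="2026-06-01",
    ),
]
print(render_notes(notes))
```

Even a scheme this simple surfaces the hard part: someone has to notice the judgment, write it down, and keep the `review_by` dates honest — which is exactly the transmission problem described above.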

Until coding agents can make autonomous decisions at that level of sophistication, I think long-term code maintainability will be an issue.