The best production agents have carefully curated context. Too much context and the agent gets off track; too little and it doesn't have enough information to reach an accurate conclusion.
As we build headless coding agents that can run autonomously in parallel, we give them context across Linear, Slack, GitHub, and Notion. We give them a system prompt, dynamic instructions, and skills. We give them tools to discover more context through codebase or web searches.
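The curation tradeoff above, enough context to be accurate but not so much that the agent derails, can be sketched as a budgeted packing step. This is a minimal illustration, not our actual pipeline; the source names, priorities, and the rough 4-characters-per-token heuristic are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class ContextItem:
    source: str   # e.g. "linear", "slack", "github", "notion" (hypothetical labels)
    text: str
    priority: int # lower number = more important

def approx_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token.
    return len(text) // 4

def assemble_context(items: list[ContextItem], budget_tokens: int = 8000) -> str:
    """Pack the highest-priority items until the token budget is spent."""
    chosen: list[ContextItem] = []
    used = 0
    for item in sorted(items, key=lambda i: i.priority):
        cost = approx_tokens(item.text)
        if used + cost > budget_tokens:
            continue  # skip: too much context risks sending the agent off track
        chosen.append(item)
        used += cost
    return "\n\n".join(f"[{i.source}] {i.text}" for i in chosen)

items = [
    ContextItem("linear", "Ticket ENG-123: fix flaky auth test", priority=0),
    ContextItem("slack", "Thread: auth tests started failing after last deploy", priority=1),
    ContextItem("github", "PR #456 touched the auth middleware", priority=1),
]
prompt_context = assemble_context(items)
```

The interesting design choices all live outside this sketch: who assigns the priorities, and how the budget is tuned per task. That is exactly the curation problem.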
But there's something in the heads of the best engineers, a kind of tacit judgment about the codebase and its tradeoffs, that I don't think we have figured out how to transmit yet. That context can probably be provided to AI agents too, but we haven't found an efficient way to do it.
Until coding agents can make autonomous decisions at that level of sophistication, I think long-term code maintainability will be an issue.