Red-Green-Code

Deliberate practice techniques for software developers


Stateless by Design: How to Work With AI Coding Assistants

By Duncan Smith · December 31, 2025


In the AI coding assistant era, we can onboard an AI assistant much like a human programmer: by pointing it to documentation and code. But there’s a key difference: while human programmers remember things from one day to the next, AI assistants start each session with a blank slate.

Context Engineering

Since coding assistants forget everything between sessions, we need to build a repeatable onboarding process. The general term for this process is context engineering. Here’s how OpenAI co-founder Andrej Karpathy describes it:

context engineering is the delicate art and science of filling the context window with just the right information for the next step. … Too little or of the wrong form and the LLM doesn’t have the right context for optimal performance. Too much or too irrelevant and the LLM costs might go up and performance might come down.

The term context engineering comes from a fundamental feature of LLM architecture: the context window, often described as the model’s “working memory.” For AI coding assistants, starting a new coding session means starting with an empty context window. And you can’t avoid the blank slate problem by doing all your work in one huge session. Since the context window has a finite size, the model eventually has no choice but to drop information. This is a key limitation of LLM-based coding assistants.

As an analogy, imagine that you’re starting a new job as a senior software engineer. You come into the office, meet your co-workers, set up your dev environment, and start submitting pull requests for review. Over time, you learn which of your colleagues knows about each part of the system, and where the integration points are with other teams.

In that analogy, every new AI coding session is like a senior engineer’s first day on the job. The AI agent retains all of its general experience and skills, but it forgets all the onboarding details. It has to learn them again for each session, as if it were “coming into the office” for the first time.

As engineers who want to get the best performance from an AI assistant, we’re in charge of supplying these two elements from Karpathy’s definition:

  • Just the right information
  • For the next step

Just the right information

Once you accept that every session is day one, the question becomes: what does a first‑day engineer need to know? You’re using an AI assistant because you have a task for it, like fixing a bug or starting on a feature. What would a human engineer need to know to do the same task on their first day? They’ll certainly need a system design document that describes each part of the system and how it works. Such a document could include sections like:

  • A high-level architecture diagram
  • Responsibilities of each service
  • Key integration points
  • Known constraints

You’ll need to write the system design document once, then keep it up to date as the system changes. By providing this document to the assistant at the beginning of each session, you give it a detailed map of the territory it’s working in.

Next, you’ll need to provide general advice like, “Ask clarifying questions before writing any code.” These are prompts that offset the assistant’s weaknesses. When you upgrade to a new model, it’s good to experiment with these, since the new model may have different strengths and weaknesses compared to the previous version.

Finally, you’ll need a requirements document for the task you’re asking the assistant to work on. This document can come out of your group’s planning process. Just give it to the assistant in the same format you use for humans. A requirements document might include sections like UX mock-ups and workflow, inputs and expected outputs, performance requirements, and other systems to integrate with.

For the next step

The goal of context engineering is to make every session stateless by design. As you improve your AI assistant workflow, you should be eager to start a fresh session whenever you need one. AI assistants work best when they’re not pushing against their context window limits, so your process should support that. Rather than relying on context that you build up as you’re chatting with the agent, get in the habit of externalizing state in a form that’s easy to re-load. This means documenting everything a developer needs to know when working on the system.

Agents know how to read and write Markdown documents, so you can use those as a starting point. We have already talked about the basic set of documents you’ll need: the system design document, the general advice document, and the feature requirements document. If something comes up while you’re chatting with the agent that it will need to remember later, capture it in Markdown and add it to one of those documents.
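The re-loading step itself can be automated. Here’s a minimal sketch in Python that concatenates a set of standing Markdown docs into a single onboarding prompt for a fresh session. The file names are assumptions for illustration; use whatever your team actually calls these documents:

```python
from pathlib import Path

# Hypothetical file names for the three standing documents.
CONTEXT_DOCS = [
    "system-design.md",   # architecture, services, integration points
    "agent-advice.md",    # general prompts that offset model weaknesses
    "requirements.md",    # the task for this session
]

def build_session_context(doc_dir: str) -> str:
    """Concatenate the standing Markdown docs into one onboarding prompt.

    Run this at the start of every session so the assistant's empty
    context window gets filled the same way each time. Missing files
    are skipped rather than treated as errors.
    """
    parts = []
    for name in CONTEXT_DOCS:
        path = Path(doc_dir) / name
        if path.exists():
            parts.append(f"## {name}\n\n{path.read_text()}")
    return "\n\n".join(parts)
```

Pasting (or piping) the result into the assistant at session start makes the onboarding process repeatable instead of ad hoc, which is the whole point of treating sessions as stateless.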

By default, agents fall back on Markdown documents for memory and coordination, but Markdown isn’t optimized for that purpose. Instead, an emerging ecosystem of agent observability tools, such as Maxim AI, Langfuse, Arize, Galileo, and LangSmith, provides a customized view into what your agents are up to. For a more lightweight, developer-centric option, Steve Yegge’s Beads keeps track of task details, task status, and the dependency graph that relates tasks to each other. Tools like these offer another source of context to help tune the agent for your specific projects and tasks.

The year ahead

In 2026, we’ll need different software engineering skills than we needed just a few years ago. We have to think at a higher level of abstraction, more like an architect than a coder. The AI assistants know how to design and code, but they don’t know about your specific system or feature requirements. When an AI assistant performs poorly on a task, don’t say, “AI isn’t smart enough for the job.” Instead, figure out what information the assistant was missing. This is the context engineering mindset. Our job is no longer to solve the problem directly. Instead, we have to figure out what a skilled developer would need to know to solve the problem.

In this new world, we have to be more diligent about continually refining the requirements for a system. Code can be generated quickly. That allows us to spin up a prototype in a few minutes and get feedback from a product manager. But relying on tribal knowledge is even more precarious than it was in the human-only coding world. A coding agent won’t call a colleague to ask them questions. Decisions have to be documented and saved somewhere that agents can get to them. Keeping an up-to-date library of Markdown files and using an agent-friendly task management system can mean the difference between generic results from an AI assistant and code that integrates well with our systems on the first try. Context engineering isn’t a workaround. It’s the discipline that unlocks the real productivity gains of AI‑assisted development.


