Red-Green-Code

Deliberate practice techniques for software developers

  • Home
  • About
  • Contact
  • Project 462
  • CP FAQ
  • Newsletter

Stateless by Design: How to Work With AI Coding Assistants

By Duncan Smith · Dec 31, 2025


In the AI coding assistant era, we can onboard an AI assistant much like a human programmer: by pointing it to documentation and code. But there’s a key difference: while human programmers remember things from one day to the next, AI assistants start each session with a blank slate.

Context Engineering

Since coding assistants forget everything between sessions, we need to build a repeatable onboarding process. The general term for this process is context engineering. Here’s how OpenAI co-founder Andrej Karpathy describes it:

context engineering is the delicate art and science of filling the context window with just the right information for the next step. … Too little or of the wrong form and the LLM doesn’t have the right context for optimal performance. Too much or too irrelevant and the LLM costs might go up and performance might come down.

The term context engineering comes from a fundamental feature of LLM architecture: the context window, often described as the model’s “working memory.” For AI coding assistants, starting a new coding session means starting with an empty context window. And you can’t avoid the blank slate problem by doing all your work in one huge session. Since the context window has a finite size, the model eventually has no choice but to drop information. This is a key limitation of LLM-based coding assistants.
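
To make the window limit concrete, here's a minimal sketch of a context-budget check for onboarding documents. The four-characters-per-token ratio is a rough rule of thumb, not any model's real tokenizer, and the file names and budget are invented for illustration:

```python
# Rough context-budget check for onboarding documents.
# Assumes ~4 characters per token (a common heuristic);
# real tokenizers and real window sizes vary by model.

def estimate_tokens(text: str) -> int:
    """Crude token estimate: about one token per 4 characters."""
    return len(text) // 4

def fits_in_budget(docs: dict[str, str], budget_tokens: int) -> bool:
    """Return True if all documents together stay under the budget."""
    total = sum(estimate_tokens(body) for body in docs.values())
    return total <= budget_tokens

# Hypothetical document sizes, stubbed out with repeated characters.
docs = {
    "ARCHITECTURE.md": "x" * 40_000,   # ~10,000 tokens
    "ADVICE.md": "x" * 4_000,          # ~1,000 tokens
    "REQUIREMENTS.md": "x" * 8_000,    # ~2,000 tokens
}
print(fits_in_budget(docs, budget_tokens=50_000))  # True: ~13,000 tokens used
```

A check like this won't match the model's real accounting, but it's enough to flag when your standing documents have grown past the point where a fresh session will have room left to work.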

As an analogy, imagine that you’re starting a new job as a senior software engineer. You come into the office, meet your co-workers, set up your dev environment, and start submitting pull requests for review. Over time, you learn which of your colleagues knows about each part of the system, and where the integration points are with other teams.

In that analogy, every new AI coding session is like a senior engineer’s first day on the job. The AI agent retains all of its general experience and skills, but it forgets all the onboarding details. It has to learn them again for each session, as if it were “coming into the office” for the first time.

As engineers who want to get the best performance from an AI assistant, we’re in charge of supplying these two elements from Karpathy’s definition:

  • Just the right information
  • For the next step

Just the right information

Once you accept that every session is day one, the question becomes: what does a first‑day engineer need to know? You’re using an AI assistant because you have a task for it, like fixing a bug or starting on a feature. What would a human engineer need to know to do the same task on their first day? They’ll certainly need a system design document that describes each part of the system and how it works. Such a document could include sections like:

  • A high-level architecture diagram
  • Responsibilities of each service
  • Key integration points
  • Known constraints
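
As a sketch, a minimal skeleton for such a document might look like the following. The service names, endpoints, and rate limit here are all invented for illustration:

```markdown
# System Design: Example Web Store

## Architecture
(High-level diagram, or a short prose summary of the major components.)

## Service Responsibilities
- `orders-service`: owns the order lifecycle and state transitions
- `billing-service`: charges payment methods and issues refunds

## Key Integration Points
- `orders-service` calls `billing-service` over REST at checkout
- Both services publish events to a shared message queue

## Known Constraints
- The billing provider's API is rate-limited to 100 requests/second
```

Keeping each section short makes the document cheap to re-load at the start of every session, and easier to keep current as the system changes.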

You’ll need to write the system design document once, then keep it up to date as the system changes. By providing this document to the assistant at the beginning of each session, you give it a detailed map of the territory it’s working in.

Next, you’ll need to provide general advice like, “Ask clarifying questions before writing any code.” These are prompts that offset the assistant’s weaknesses. When you upgrade to a new model, it’s good to experiment with these, since the new model may have different strengths and weaknesses compared to the previous version.

Finally, you’ll need a requirements document for the task you’re asking the assistant to work on. This document can come out of your group’s planning process; just give it to the assistant in the same format you use for humans. A requirements document might include sections like:

  • UX mock-ups and workflow
  • Inputs and expected outputs
  • Performance requirements
  • Other systems to integrate with
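
As a sketch, here is what such a requirements document might look like. The feature, numbers, and service name are invented examples, not recommendations:

```markdown
# Feature: Saved Searches

## UX Mock-ups and Workflow
(Link to mock-ups, plus a step-by-step description of the user flow.)

## Inputs and Expected Outputs
- Input: a search query and a user-supplied name
- Output: the named search appears in the user's saved list

## Performance Requirements
- Saving a search completes in under 200 ms at the 95th percentile

## Systems to Integrate With
- Reads and writes through the existing `search-service` API
```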

For the next step

The goal of context engineering is to make every session stateless by design. As you improve your AI assistant workflow, you should be eager to start a fresh session whenever you need one. AI assistants work best when they’re not pushing against their context window limits, so your process should support that. Rather than relying on context that you build up as you’re chatting with the agent, get in the habit of externalizing state in a form that’s easy to re-load. This means documenting everything a developer needs to know when working on the system.

Agents know how to read and write Markdown documents, so you can use those as a starting point. We have already talked about the basic set of documents you’ll need: the system design document, general advice document, and feature requirements document. If something comes up as you’re chatting with the agent that you need it to remember, copy it in Markdown format and put it in one of those documents.
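
One way to make the re-loading step repeatable is a small script that concatenates those documents into a single session-start prompt. The file names below are assumptions; use whatever names your project settles on:

```python
# Assemble a session-start prompt from externalized Markdown state.
# File names are illustrative, matching the documents discussed above.
from pathlib import Path

ONBOARDING_DOCS = ["ARCHITECTURE.md", "ADVICE.md", "REQUIREMENTS.md"]

def build_session_context(doc_dir: Path, doc_names: list[str]) -> str:
    """Concatenate each document under a heading, skipping missing files."""
    sections = []
    for name in doc_names:
        path = doc_dir / name
        if path.exists():
            sections.append(f"## {name}\n\n{path.read_text()}")
    return "\n\n".join(sections)

if __name__ == "__main__":
    # Paste (or pipe) the result into the assistant at session start.
    print(build_session_context(Path("."), ONBOARDING_DOCS))
```

Because the script skips missing files, the same command works across projects that maintain different subsets of these documents.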

For memory and coordination, an agent will default to Markdown documents, but Markdown isn’t optimized for that purpose. Instead, an emerging ecosystem of agent observability tools, such as Maxim AI, Langfuse, Arize, Galileo, and LangSmith, provides a customized view into what your agents are up to. For a more lightweight, developer-centric option, Steve Yegge’s Beads keeps track of task details, task status, and the dependency graph that relates tasks to each other. Tools like these offer another source of context to help tune the agent for your specific projects and tasks.

The year ahead

In 2026, we’ll need different software engineering skills than we needed just a few years ago. We have to think at a higher level of abstraction, more like an architect than a coder. The AI assistants know how to design and code, but they don’t know about your specific system or feature requirements. When an AI assistant performs poorly on a task, don’t say, “AI isn’t smart enough for the job.” Instead, figure out what information the assistant was missing. This is the context engineering mindset. Our job is no longer to solve the problem directly. Instead, we have to figure out what a skilled developer would need to know to solve the problem.

In this new world, we have to be more diligent about continually refining the requirements for a system. Code can be generated quickly. That allows us to spin up a prototype in a few minutes and get feedback from a product manager. But relying on tribal knowledge is even more precarious than it was in the human-only coding world. A coding agent won’t call a colleague to ask them questions. Decisions have to be documented and saved somewhere that agents can get to them. Keeping an up-to-date library of Markdown files and using an agent-friendly task management system can mean the difference between generic results from an AI assistant and code that integrates well with our systems on the first try. Context engineering isn’t a workaround. It’s the discipline that unlocks the real productivity gains of AI‑assisted development.

Do Coding Bots Mean the End of Coding Interviews?

By Duncan Smith · Dec 31, 2024

[Image: Human and robot on a hike]

Roman Elizarov noticed something different about the 2024 Advent of Code:

AI is reshaping competitive programming, not just in benchmarks or papers, but in real life. My rough guess: this year’s Advent of Code leaderboard features ~80% AI-driven entries in the top 100 for the first time. Advent of Code is still an amazing event to sharpen your software engineering skills and have fun. But in just a year, it has lost much of its relevance as a way to compare problem-solving skills among humans.

If a spot on the AoC leaderboard no longer says much about your skill at solving puzzles, does that predict a change in the way interviewers use coding puzzles to evaluate candidates for programming jobs?

« Continue »

Red-Green-Code: 2016 in Review

By Duncan Smith · Dec 28, 2016

[Image: New Mexico desert]

Year two of this blog has come to an end. Let’s review the topics and posts from 2016.

« Continue »

Three Perspectives on Coding Interviews

By Duncan Smith · Oct 12, 2016


It’s October, the time of year when leaves start to fall from the trees, and companies start to recruit college students for summer internships. Last week, I spent half a day interviewing some of those candidates. I also came across a long Hacker News thread called I Hate HackerRank. So I thought it would be a good time to revisit the dreaded coding interview.

« Continue »

Programmer Skills (and Salaries) According to Stack Exchange

By Duncan Smith · Sep 21, 2016


In July of this year, Stack Exchange Inc. released an online tool that lets you calculate how much money you would make if you worked there. The number you get out of the tool is based on four factors. There’s a salary floor based on the position you select (e.g., Developer or Product Designer), an adjustment based on your years of professional experience, and a bonus for living in one of a few high-cost cities (New York, San Francisco, or London). Finally, the tool takes into account your skills.

Having written in the past about skills for programmers, I was interested to see what Stack Exchange decided was important for success in a programming job. Here’s what I found.

« Continue »

Write Your Own Manual

By Duncan Smith · Jul 20, 2016


You can become a better software developer by improving your ability to organize information. A key part of that skill is being able to communicate what you know in writing. Whether you’re enlightening your peers on Stack Overflow or writing FAQs for your team, it’s good to have a reputation as a person who Knows The Answers. If you can write clear technical designs, how-to guides, and other documentation, you can have a lot of influence. In fact, even if you just write for yourself, you can get things done faster by avoiding trips to the Web to repeatedly look up the same information.

Here are some tips on when, what, and how to write as a programmer.

« Continue »

Rules for Working Intensely

By Duncan Smith · Jul 13, 2016


Cal Newport likes to distill the components of productivity into the following formula:

Work Accomplished = Time Spent x Intensity

We all have 24 hours per day, excluding the occasional leap second. That plus the need for sleep puts an upper limit on the Time Spent component of the formula. Intensity, in theory, has no upper limit. You could spend a lifetime getting better at concentrating. So it would seem that the Intensity component is the one to target for improvement.

There’s some truth to that analysis. Cal’s fixed-schedule productivity technique starts by making Time Spent a constant. Intensity is the only component you’re allowed to adjust.

But as you make your way toward that gloriously fixed schedule, it’s helpful to track how many hours you’re actually working, as well as how intensely you’re focusing during those hours. To do that, you have to follow a few rules.

Here are four rules to make use of the Work Accomplished formula.

« Continue »

A Career Skill: Organizing Information

By Duncan Smith · Jun 22, 2016


Another session of Top Performer is underway, and one of the goals of the early lessons is finding a skill that is important for success in your career. That has me thinking about skills for programmers. Today I’m going to focus on one particular skill that is critical for programmers working on a team, and becomes more critical as you work on larger projects.

I wrote about a few different skill categories in Skills for Programmers. If you’re a new college graduate starting a full-time job, you might have strong algorithmic problem-solving skills, knowledge of Computer Science fundamentals, expertise in a few specific technologies, and good work habits. Depending on your background, you may even have some of the soft skills required for success at a software company. But what new graduates generally don’t have is experience working on a large codebase that a team of programmers has built up over several years.

When you’re building a small project from scratch, technical questions can be answered through a Stack Overflow search. You still have to think about timeless software engineering questions like good design and whether you’re building the right product for your target audience. But you can rely on others who are working with the same technology to get you unstuck.

When you’re maintaining or enhancing a large software system as part of a team, it’s not enough just to know the technology you’re working with. You also have to know the details of the specific system you’re working on. As I wrote a few weeks ago, that means you have to figure out how to learn from local experts.

If you have a general programming question, find the answer on Stack Overflow, and later forget the answer, you can always find it again. I do that all the time. But if you forget an answer that you got from a local expert, you have to go back to that expert to get it again. That’s an inefficient use of a scarce resource, and it might make your expert grumpy. You want to make the best use of your local experts or, if you’re the expert, ensure that others make the best use of your time. You can do that using the skill I’ll be exploring today.

« Continue »

Learning From Local Experts

By Duncan Smith · Jun 1, 2016


There’s a certain kind of programming problem that you won’t find the solution to on Stack Overflow, programming blogs, or even an old-fashioned paper book. It’s the kind of problem that is, to use an old Stack Overflow term, too localized. Imagine you’re working on a software team and you encounter one of the following situations:

  • You run a local build (without changing any local files), and it fails.
  • You’re working on a feature, and you need to know how to simulate a credit card payment for your product, without actually submitting a credit card.
  • You’re about to submit a database schema change, and you want to test the database upgrade process the way it will actually work in the shared test environment and on the live system.

There are a few ways you could potentially solve these problems. You can always use the old standby technique of Googling keywords and seeing if you can match the general results with the specific needs of your project. That may get you some interesting options, but it’s likely to require a lot of work.

If your team has good internal documentation with a search function that works, then you may be in luck. If you can find documentation about the exact scenario you’re looking for, then you’re even luckier. Keeping internal documentation is a good practice for any software team, but it does require discipline. Stack Overflow is even trying to encourage team documentation (as long as the team is willing to publish it to the world) with their teams feature.

When Google and internal documentation fail, there’s still one more thing you can try: ask a local expert. Here are a few tips on doing that effectively.

« Continue »

Deep Work and Collaboration in Software Development

By Duncan Smith · Mar 30, 2016


“The relationship between deep work and collaboration is tricky,” writes Cal Newport in his recent book on focused productivity. That’s for sure. The goal of deep work is to expand your cognitive abilities in a distraction-free working environment. But many people don’t work alone. And as anyone who has worked on a team can attest, co-workers can be a source of distraction. How can we reconcile deep work goals with the need for collaboration?

« Continue »



Getting Started

Are you new here? Check out my review posts for a tour of the archives:

  • 2023 in Review: 50 LeetCode Tips
  • 2022 in Review: Content Bots
  • 2021 in Review: Thoughts on Solving Programming Puzzles
  • Lessons from the 2020 LeetCode Monthly Challenges
  • 2019 in Review
  • Competitive Programming Frequently Asked Questions: 2018 In Review
  • What I Learned Working On Time Tortoise in 2017
  • 2016 in Review
  • 2015 in Review
  • 2015 Summer Review


Recent Posts

  • Stateless by Design: How to Work With AI Coding Assistants December 31, 2025
  • Do Coding Bots Mean the End of Coding Interviews? December 31, 2024
  • Another Project for 2024 May 8, 2024
  • Dynamic Programming Wrap-Up May 1, 2024
  • LeetCode 91: Decode Ways April 24, 2024
  • LeetCode 70: Climbing Stairs April 17, 2024
  • LeetCode 221: Maximal Square April 10, 2024
  • Using Dynamic Programming for Maximum Product Subarray April 3, 2024
  • LeetCode 62: Unique Paths March 27, 2024
  • LeetCode 416: Partition Equal Subset Sum March 20, 2024
Copyright © 2026 Duncan Smith