Red-Green-Code

Deliberate practice techniques for software developers


Will AI Coding Assistants “Deskill” Us?

By Duncan Smith · January 30, 2026

[Image: Robot helpers]

Cal Newport, a longtime critic of digital distraction, has been turning his sights on AI. In a recent article, he addresses AI coding assistants. His argument: coding assistants lead to programmer deskilling, meaning programmers who use these tools are losing their rare and valuable skills, replacing them with just the ability to orchestrate agents. In his view, AI assistants will benefit only technology companies, not their workers. Companies will only need to hire lower-skilled, lower-paid workers, and these workers will use AI agents for high-skill tasks. Newport’s concern isn’t just that programmers will forget how to code. It’s that the entire software ecosystem could lose the deep expertise required to build and maintain complex systems.

One response to this argument says it’s wrong about the consequences of agent use. In this view, programmers won’t get less skilled. They will just develop different skills. We can see this from the history of our industry. Software engineering history is full of new abstractions that allow the next generation of programmers to forget things that their predecessors knew. As compilers got better, programmers forgot how to write assembly by hand. As libraries got better, programmers forgot how to write fundamental algorithms from scratch. AI-generated code is another abstraction. Programmers may forget what it’s like to write code line by line. But in exchange, we will spend more time on architecture, user experience, and security, areas that will still need skilled human input.

But there are a few key differences between AI assistants and previous productivity improvements. If we don’t address these, we risk becoming victims of the deskilling that Newport warns about.

First, previous programming tools behave deterministically. Every time you run the same version of a compiler against the same version of your source code, you get the same output. You may have bugs in your program, and there may even be bugs in the compiler. But there is a well-defined process by which your program gets turned into an executable. In contrast, AI is not a deterministic layer built on top of a stable substrate. It is a probabilistic collaborator whose output must be treated with suspicion. With AI assistants, the failure modes are semantic, not mechanical. The assistant can produce code that compiles, passes tests, and still violates the system’s invariants. So you need a plan that involves a combination of code review and testing to ensure that you, as the human programmer, are confident that the output meets your standards.
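
One way to put that plan into practice is to encode each invariant as an automated check that any generated code must pass. Here’s a minimal sketch using the hypothesis property-based testing library; apply_discount stands in for a hypothetical assistant-written function, and the invariant (a discounted price can never go negative) is one we define ourselves:

    from decimal import Decimal
    from hypothesis import given, strategies as st

    def apply_discount(price: Decimal, pct: Decimal) -> Decimal:
        # Stand-in for assistant-generated code under review.
        return price - price * pct / Decimal(100)

    # The invariant is ours: no discount may produce a negative price.
    @given(
        price=st.decimals(min_value=0, max_value=10_000, places=2),
        pct=st.decimals(min_value=0, max_value=100, places=2),
    )
    def test_discount_never_goes_negative(price: Decimal, pct: Decimal) -> None:
        assert apply_discount(price, pct) >= 0

Code review tells you whether the generated code reads right; checks like this tell you whether it behaves right, even as the assistant regenerates it.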

Second, it’s not clear what the limitations of AI assistants actually are. A programmer who develops what they think is a good system architecture might discover, with the right prompt, that the assistant has an idea for a better one. The assistant might know more about user experience best practices than any particular programmer. And because it has read millions of codebases, it may propose architectural patterns that no single engineer has encountered. Even for security code, automated code reviews can uncover holes. To avoid falling behind peers, programmers need to push the boundaries of what they ask their assistant to work on. But to avoid deskilling, they need to push the boundaries of their own abilities rather than passively riding the wave of continuous AI improvements.

Finally, previous improvements in software tools had a positive effect on the programming labor market. As tools got more powerful, software ate more of the world, which led to more demand for skilled programmers. The software systems of 2020 were so complex that only highly trained software engineers could work on them. But now that AI assistants can reason over large code repositories, we may finally be going in the other direction. If the only skill that matters is giving the assistant the right context and the right prompt, the result may be deskilling. In the past, better tools increased the demand for skilled programmers. With AI, better tools may reduce it.

In response to these challenges, we need a two-pronged approach. We can’t just ignore AI assistants, since our assistant-using peers will race ahead of us. We need to aggressively push the boundaries of what we ask our assistants to do. When new model versions are released, we need to upgrade and see if they can do things that the previous versions couldn’t. We need to read books and articles to learn techniques to get the most out of the assistants.

But when we find something a model isn’t good at, and better prompting doesn’t seem to help, we need to be prepared to take it on ourselves. This means continual learning, which should be a familiar mode for anyone who has worked on software in the past few decades. The industry is always racing ahead, and programmers have always had to keep studying to stay relevant. In the current era, AI assistants are so capable that it’s tempting to delegate everything to them. But programmers who are paying attention shouldn’t be satisfied with that approach. If we don’t have any skills beyond what the current models are capable of, then any trained model user could replace us.

So as the model makers make their models better, we need to make ourselves better, developing the skills at the edge that only humans can supply. We need to practice architecture, debugging, threat modeling, domain reasoning, and understanding systems at a conceptual level. When working in these areas, we should first check what the assistant comes up with, then see if we can improve it. This is the pattern we’ll be using in this era: ask the model, fill in the gaps with our own expertise, repeat.

Stateless by Design: How to Work With AI Coding Assistants

By Duncan Smith · December 31, 2025

[Image: Coding assistant]

In the AI coding assistant era, we can onboard an AI assistant much like a human programmer: by pointing it to documentation and code. But there’s a key difference: while human programmers remember things from one day to the next, AI assistants start each session with a blank slate.

Context Engineering

Since coding assistants forget everything between sessions, we need to build a repeatable onboarding process. The general term for this process is context engineering. Here’s how OpenAI co-founder Andrej Karpathy describes it:

context engineering is the delicate art and science of filling the context window with just the right information for the next step. … Too little or of the wrong form and the LLM doesn’t have the right context for optimal performance. Too much or too irrelevant and the LLM costs might go up and performance might come down.

The term context engineering comes from a fundamental feature of LLM architecture: the context window, often described as the model’s “working memory.” For AI coding assistants, starting a new coding session means starting with an empty context window. And you can’t avoid the blank slate problem by doing all your work in one huge session. Since the context window has a finite size, the model eventually has no choice but to drop information. This is a key limitation of LLM-based coding assistants.

As an analogy, imagine that you’re starting a new job as a senior software engineer. You come into the office, meet your co-workers, set up your dev environment, and start submitting pull requests for review. Over time, you learn which of your colleagues knows about each part of the system, and where the integration points are with other teams.

In that analogy, every new AI coding session is like a senior engineer’s first day on the job. The AI agent retains all of its general experience and skills, but it forgets all the onboarding details. It has to learn them again for each session, as if it were “coming into the office” for the first time.

As engineers who want to get the best performance from an AI assistant, we’re in charge of supplying these two elements from Karpathy’s definition:

  • Just the right information
  • For the next step

Just the right information

Once you accept that every session is day one, the question becomes: what does a first‑day engineer need to know? You’re using an AI assistant because you have a task for it, like fixing a bug or starting on a feature. What would a human engineer need to know to do the same task on their first day? They’ll certainly need a system design document that describes each part of the system and how it works. Such a document could include sections like:

  • A high-level architecture diagram
  • Responsibilities of each service
  • Key integration points
  • Known constraints

You’ll need to write the system design document once, then keep it up to date as the system changes. By providing this document to the assistant at the beginning of each session, you give it a detailed map of the territory it’s working in.

Next, you’ll need to provide general advice like, “Ask clarifying questions before writing any code.” These are prompts that offset the assistant’s weaknesses. When you upgrade to a new model, it’s worth re-testing this advice, since the new model may have different strengths and weaknesses than the previous version.

Finally, you’ll need a requirements document for the task you’re asking the assistant to work on. This document can come out of your group’s planning process; just give it to the assistant in the same format you use for humans. A requirements document might include sections like UX mock-ups and workflow, inputs and expected outputs, performance requirements, and other systems to integrate with.
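
Since you’ll be supplying these documents at the start of every session, it’s worth scripting the step. Here’s a minimal sketch in Python; the file names are assumptions, so substitute whatever your team actually uses:

    from pathlib import Path

    # Assumed file names; adjust to match your team's documents.
    CONTEXT_DOCS = [
        Path("docs/system-design.md"),         # map of the territory
        Path("docs/assistant-advice.md"),      # advice that offsets weaknesses
        Path("docs/feature-requirements.md"),  # the task for this session
    ]

    def build_session_context() -> str:
        """Concatenate the onboarding documents into one session preamble."""
        sections = [f"## {doc.name}\n\n{doc.read_text()}" for doc in CONTEXT_DOCS]
        return "\n\n".join(sections)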

For the next step

The goal of context engineering is to make every session stateless by design. As you improve your AI assistant workflow, you should be eager to start a fresh session whenever you need one. AI assistants work best when they’re not pushing against their context window limits, so your process should support that. Rather than relying on context that you build up as you’re chatting with the agent, get in the habit of externalizing state in a form that’s easy to re-load. This means documenting everything a developer needs to know when working on the system.

Agents know how to read and write Markdown documents, so you can use those as a starting point. We have already talked about the basic set of documents you’ll need: the system design document, general advice document, and feature requirements document. If something comes up as you’re chatting with the agent that you need it to remember, copy it in Markdown format and put it in one of those documents.
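
For example, a small helper can externalize those mid-session discoveries as dated Markdown bullets; the target file name is again an assumption:

    from datetime import date
    from pathlib import Path

    def remember(note: str, doc: Path = Path("docs/system-design.md")) -> None:
        """Append a chat takeaway as a dated Markdown bullet for future sessions."""
        with doc.open("a", encoding="utf-8") as f:
            f.write(f"\n- {date.today().isoformat()}: {note}\n")

    remember("The auth service owns session tokens; the gateway must not cache them.")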

For memory and coordination, the agent will want to use Markdown documents by default. But they aren’t optimized for that purpose. Instead, an emerging ecosystem of agent observability tools, such as Maxim AI, Langfuse, Arize, Galileo, and LangSmith, provides a customized view into what your agents are up to. Or for a more lightweight, developer-centric option, Steve Yegge’s Beads keeps track of task details, task status, and the dependency graph that relates tasks to each other. Tools like these offer another source of context to help tune the agent for your specific projects and tasks.

The year ahead

In 2026, we’ll need different software engineering skills than we needed just a few years ago. We have to think at a higher level of abstraction, more like an architect than a coder. The AI assistants know how to design and code, but they don’t know about your specific system or feature requirements. When an AI assistant performs poorly on a task, don’t say, “AI isn’t smart enough for the job.” Instead, figure out what information the assistant was missing. This is the context engineering mindset. Our job is no longer to solve the problem directly. Instead, we have to figure out what a skilled developer would need to know to solve the problem.

In this new world, we have to be more diligent about continually refining the requirements for a system. Code can be generated quickly. That allows us to spin up a prototype in a few minutes and get feedback from a product manager. But relying on tribal knowledge is even more precarious than it was in the human-only coding world. A coding agent won’t call a colleague to ask them questions. Decisions have to be documented and saved somewhere that agents can get to them. Keeping an up-to-date library of Markdown files and using an agent-friendly task management system can mean the difference between generic results from an AI assistant and code that integrates well with our systems on the first try. Context engineering isn’t a workaround. It’s the discipline that unlocks the real productivity gains of AI‑assisted development.

Do Coding Bots Mean the End of Coding Interviews?

By Duncan Smith · December 31, 2024

[Image: Human and robot on a hike]

Roman Elizarov noticed something different about the 2024 Advent of Code:

AI is reshaping competitive programming, not just in benchmarks or papers, but in real life. My rough guess: this year’s Advent of Code leaderboard features ~80% AI-driven entries in the top 100 for the first time. Advent of Code is still an amazing event to sharpen your software engineering skills and have fun. But in just a year, it has lost much of its relevance as a way to compare problem-solving skills among humans.

If a spot on the AoC leaderboard no longer says much about your skill at solving puzzles, does that predict a change in the way interviewers use coding puzzles to evaluate candidates for programming jobs?


Another Project for 2024

By Duncan Smith · May 8, 2024

[Image: Looking through the window of a log cabin]

Last week wrapped up the dynamic programming tutorial for this year. I hope you found it useful. Dynamic programming can be hard to grasp at first compared to other LeetCode topics. But once you internalize the steps to find a top-down solution, you may actually be happy when an interviewer asks you to solve a DP problem instead of something else. If dynamic programming still doesn’t make sense after going through the tutorial, this tip from last year may help you come up with a study plan.

Next, I’ll be doing something a bit different. Normally, I post every week about a project I’m working on. You can find projects from past years on the right side of the page, in the Getting Started section. This time, I’ve decided to try a stealthier approach: rather than the usual weekly post, I’ll be working on something in the background, so you won’t see new posts for a while. Let’s see how it goes, and I’ll report back. In the meantime, enjoy the archives, and good luck with your own projects this year.

(Image credit: DALL·E 3)

For an introduction to this year’s project, see A Project for 2024.

Dynamic Programming Wrap-Up

By Duncan Smith · May 1, 2024

[Image: A robot studying a textbook]

Over the past four months, we covered the key ideas in dynamic programming and solved a few practice problems. Let’s review.


LeetCode 91: Decode Ways

By Duncan Smith · April 24, 2024

[Image: A robot kitten playing with a ball of string]

Last week, we looked at a dynamic programming counting problem, Climbing Stairs. This week’s problem, LeetCode 91: Decode Ways, is also a counting problem. And although the problems may not seem very similar on the surface, we can adapt the Climbing Stairs solution to solve Decode Ways.
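
As a preview of that adaptation, here’s a minimal bottom-up sketch: like Climbing Stairs, each position adds the counts from one and two positions back, but only when the corresponding one- or two-digit code is valid:

    def num_decodings(s: str) -> int:
        """Count the ways to decode a digit string where 'A'=1 ... 'Z'=26."""
        n = len(s)
        ways = [0] * (n + 1)
        ways[0] = 1  # one way to decode the empty prefix
        for i in range(1, n + 1):
            if s[i - 1] != "0":                        # valid one-digit code: 1-9
                ways[i] += ways[i - 1]
            if i >= 2 and "10" <= s[i - 2:i] <= "26":  # valid two-digit code
                ways[i] += ways[i - 2]
        return ways[n]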


LeetCode 70: Climbing Stairs

By Duncan Smith · April 17, 2024

[Image: A robot descending a staircase]

For one subcategory of dynamic programming problems, the goal is to count the number of ways to do something. For example, in LeetCode 62: Unique Paths, we counted the number of ways a robot could move from the start position to the end position in a grid.

This week, we’ll tackle another counting problem, LeetCode 70: Climbing Stairs. It’s an Easy problem whose solution we’ll adapt to solve a Medium problem next week.
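
For a preview of where we’re headed, here’s a minimal bottom-up sketch: each step can be reached from one or two steps below, so the counts add:

    def climb_stairs(n: int) -> int:
        """Count the distinct ways to climb n stairs taking 1 or 2 steps at a time."""
        prev, curr = 1, 1  # ways to stand on steps 0 and 1
        for _ in range(2, n + 1):
            prev, curr = curr, curr + prev  # ways(i) = ways(i-1) + ways(i-2)
        return curr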


LeetCode 221: Maximal Square

By Duncan Smith · April 10, 2024

[Image: A robot looks down at a patio made of overlapping squares]

Solutions to LeetCode problems of Medium or higher difficulty often require a key insight. Even once you understand the problem and the general technique required to solve it, there’s no way to implement an efficient solution without it. Fortunately, dynamic programming gives us heuristics that we can use to point towards the required idea. For LeetCode 221: Maximal Square, the key insight requires some geometric intuition. But once you figure that part out, the rest of the problem is straightforward.
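
If you’d like a peek at where the geometric intuition leads, here’s a minimal sketch of the standard recurrence: a square of 1s ending at a cell can only be as large as its neighbors above, to the left, and diagonally up-left allow:

    def maximal_square(grid: list[list[str]]) -> int:
        """Return the area of the largest square of '1's in a binary grid."""
        rows, cols = len(grid), len(grid[0])
        # side[i][j] = side of the largest square whose bottom-right corner is
        # grid[i-1][j-1]; the extra row and column avoid bounds checks.
        side = [[0] * (cols + 1) for _ in range(rows + 1)]
        best = 0
        for i in range(1, rows + 1):
            for j in range(1, cols + 1):
                if grid[i - 1][j - 1] == "1":
                    side[i][j] = 1 + min(side[i - 1][j], side[i][j - 1], side[i - 1][j - 1])
                    best = max(best, side[i][j])
        return best * best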


Using Dynamic Programming for Maximum Product Subarray

By Duncan Smith · April 3, 2024

[Image: A robot looking up at a number tower]

Earlier this year, we solved LeetCode 53: Maximum Subarray, which asked us to find the sum of the subarray (contiguous non-empty sequence of elements) with the largest sum. This week, we’ll look at a related problem that asks for the largest product.
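
As a preview of the key twist: a single running maximum isn’t enough here, because multiplying by a negative number turns the smallest product into the largest. Here’s a minimal sketch that tracks both:

    def max_product(nums: list[int]) -> int:
        """Largest product over all non-empty contiguous subarrays."""
        best = cur_max = cur_min = nums[0]
        for x in nums[1:]:
            # A negative x swaps the roles of the running max and min.
            candidates = (x, cur_max * x, cur_min * x)
            cur_max, cur_min = max(candidates), min(candidates)
            best = max(best, cur_max)
        return best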


LeetCode 62: Unique Paths

By Duncan Smith · March 27, 2024

[Image: Robot on a grid]

Many LeetCode problems involve moving around a maze or grid. Mazes tend to be modeled as graphs, but for some problems of this type, dynamic programming is the right approach. We’ll see that in this week’s problem, LeetCode 62: Unique Paths.
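
As a preview of the dynamic programming approach: each cell can be entered only from above or from the left, so the path counts add. Here’s a minimal sketch using a single rolling row:

    def unique_paths(m: int, n: int) -> int:
        """Count right/down-only paths across an m x n grid."""
        row = [1] * n  # one way to reach each cell in the top row
        for _ in range(1, m):
            for j in range(1, n):
                row[j] += row[j - 1]  # from above (row[j]) plus from the left (row[j-1])
        return row[-1]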



Getting Started

Are you new here? Check out my review posts for a tour of the archives:

  • 2023 in Review: 50 LeetCode Tips
  • 2022 in Review: Content Bots
  • 2021 in Review: Thoughts on Solving Programming Puzzles
  • Lessons from the 2020 LeetCode Monthly Challenges
  • 2019 in Review
  • Competitive Programming Frequently Asked Questions: 2018 In Review
  • What I Learned Working On Time Tortoise in 2017
  • 2016 in Review
  • 2015 in Review
  • 2015 Summer Review
