Red-Green-Code

Deliberate practice techniques for software developers


Will AI Coding Assistants “Deskill” Us?

By Duncan Smith · January 30, 2026

[Image: Robot helpers]

Cal Newport, a longtime critic of digital distraction, has been turning his sights on AI. In a recent article, he addresses AI coding assistants. His argument: coding assistants lead to programmer deskilling, meaning programmers who use these tools are losing their rare and valuable skills, replacing them with just the ability to orchestrate agents. In his view, AI assistants will benefit only technology companies, not their workers. Companies will only need to hire lower-skilled, lower-paid workers, and these workers will use AI agents for high-skill tasks. Newport’s concern isn’t just that programmers will forget how to code. It’s that the entire software ecosystem could lose the deep expertise required to build and maintain complex systems.

One response to this argument says it’s wrong about the consequences of agent use. In this view, programmers won’t get less skilled; they will just develop different skills. We can see this in the history of our industry, which is full of new abstractions that let each generation of programmers forget things their predecessors knew. As compilers got better, programmers forgot how to write assembly by hand. As libraries got better, programmers forgot how to write fundamental algorithms from scratch. AI-generated code is another abstraction. Programmers may forget what it’s like to write code line by line. But in exchange, we will spend more time on architecture, user experience, and security, areas that will still need skilled human input.

But there are a few key differences between AI assistants and previous productivity improvements. If we don’t address these, we risk becoming victims of the deskilling that Newport warns about.

First, previous programming tools behave deterministically. Every time you run the same version of a compiler against the same version of your source code, you get the same output. You may have bugs in your program, and there may even be bugs in the compiler. But there is a well-defined process by which your program gets turned into an executable. In contrast, AI is not a deterministic layer built on top of a stable substrate. It is a probabilistic collaborator whose output must be treated with suspicion. With AI assistants, the failure modes are semantic, not mechanical. The assistant can produce code that compiles, passes tests, and still violates the system’s invariants. So you need a plan that involves a combination of code review and testing to ensure that you, as the human programmer, are confident that the output meets your standards.
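
As a minimal sketch of what that plan can look like, here’s a hypothetical invariant test. The Ledger class and its post method are made-up examples, not code from any real system; the point is that a human writes down the invariant, so any AI-generated change to the implementation still has to satisfy it:

    # test_ledger_invariants.py -- hypothetical example; all names are illustrative.
    # The invariant: every posted transaction keeps the ledger balanced at zero.
    # AI-generated code can compile and pass shallow tests while quietly breaking
    # a rule like this, so we encode the rule as an explicit check.

    from dataclasses import dataclass, field

    @dataclass
    class Ledger:
        entries: list = field(default_factory=list)  # (account, signed amount) pairs

        def post(self, debits, credits):
            """Record a balanced transaction (the body might be AI-written)."""
            if sum(amt for _, amt in debits) != sum(amt for _, amt in credits):
                raise ValueError("unbalanced transaction")
            self.entries.extend(debits)
            self.entries.extend((acct, -amt) for acct, amt in credits)

    def test_posting_preserves_zero_balance():
        ledger = Ledger()
        ledger.post(debits=[("cash", 100)], credits=[("revenue", 100)])
        # The human-owned invariant: debits and credits always net to zero.
        assert sum(amt for _, amt in ledger.entries) == 0

If the assistant later rewrites post for performance or new features, checks like this, combined with human review, are what keep semantic failures from slipping through.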

Second, it’s not clear what the limitations of AI assistants actually are. A programmer who develops what they think is a good system architecture might discover, with the right prompt, that the assistant has an idea for a better one. The assistant might know more about user experience best practices than any particular programmer. And because it has read millions of codebases, it may propose architectural patterns that no single engineer has encountered. Even for security code, automated code reviews can uncover holes. To avoid falling behind their peers, programmers need to push the boundaries of what they ask their assistant to work on. But to avoid deskilling, they need to push the boundaries of their own abilities rather than passively riding the wave of continuous AI improvements.

Finally, previous improvements in software tools had a positive effect on the programming labor market. As tools got more powerful, software ate more of the world, which led to more demand for skilled programmers. The software systems of 2020 were so complex that only highly trained software engineers could work on them. But now that AI assistants can reason over large code repositories, we may finally be going in the other direction. If the only skill that matters is giving the assistant the right context and the right prompt, the result may be deskilling. In the past, better tools increased the demand for skilled programmers. With AI, better tools may reduce it.

In response to these challenges, we need a two-pronged approach. We can’t just ignore AI assistants, since our assistant-using peers will race ahead of us. We need to aggressively push the boundaries of what we ask our assistants to do. When new model versions are released, we need to upgrade and see if they can do things that the previous versions couldn’t. We need to read books and articles to learn techniques to get the most out of the assistants.

But when we find something a model isn’t good at, and better prompting doesn’t seem to help, we need to be prepared to take it on ourselves. This means continual learning, which should be a familiar mode for anyone who has worked on software in the past few decades. The industry is always racing ahead, and programmers have always had to keep studying to stay relevant. In the current era, AI assistants are so capable that it’s tempting to delegate everything to them. But programmers who are paying attention shouldn’t be satisfied with that approach. If we don’t have any skills beyond what the current models are capable of, then any trained model user could replace us.

So as the model makers are making the models better, we need to make ourselves better, building skills at the edge of what only humans can do. We need to practice architecture, debugging, threat modeling, domain reasoning, and understanding systems at a conceptual level. When working in these areas, we should first check what the assistant comes up with, then see if we can improve on it. This is the pattern we’ll be using in this era: ask the model, fill in the gaps with our own expertise, repeat.

Stateless by Design: How to Work With AI Coding Assistants

By Duncan Smith · December 31, 2025

[Image: coding assistant]

In the AI coding assistant era, we can onboard an AI assistant much like a human programmer: by pointing it to documentation and code. But there’s a key difference: while human programmers remember things from one day to the next, AI assistants start each session with a blank slate.

Context Engineering

Since coding assistants forget everything between sessions, we need to build a repeatable onboarding process. The general term for this process is context engineering. Here’s how OpenAI co-founder Andrej Karpathy describes it:

context engineering is the delicate art and science of filling the context window with just the right information for the next step. … Too little or of the wrong form and the LLM doesn’t have the right context for optimal performance. Too much or too irrelevant and the LLM costs might go up and performance might come down.

The term context engineering comes from a fundamental feature of LLM architecture: the context window, often described as the model’s “working memory.” For AI coding assistants, starting a new coding session means starting with an empty context window. And you can’t avoid the blank slate problem by doing all your work in one huge session. Since the context window has a finite size, the model eventually has no choice but to drop information. This is a key limitation of LLM-based coding assistants.
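
To get a feel for how quickly onboarding material eats into that working memory, you can estimate token counts before pasting documents into a session. Here’s a rough sketch; it assumes the tiktoken library, a docs folder of Markdown files, and an illustrative 200,000-token window (real limits vary by model):

    # estimate_context.py -- rough sketch; the window size and docs/ folder are
    # illustrative assumptions, not defaults of any particular assistant.
    import glob
    import tiktoken

    CONTEXT_WINDOW = 200_000  # example value; check your model's documented limit
    enc = tiktoken.get_encoding("cl100k_base")  # one common tokenizer; models differ

    def token_count(path):
        with open(path, encoding="utf-8") as f:
            return len(enc.encode(f.read()))

    total = sum(token_count(p) for p in glob.glob("docs/*.md"))
    print(f"Onboarding docs: {total} tokens "
          f"({100 * total / CONTEXT_WINDOW:.1f}% of a {CONTEXT_WINDOW:,}-token window)")

Even a rough number like this tells you when it’s time to trim, summarize, or split documents rather than dumping everything into every session.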

As an analogy, imagine that you’re starting a new job as a senior software engineer. You come into the office, meet your co-workers, set up your dev environment, and start submitting pull requests for review. Over time, you learn which of your colleagues knows about each part of the system, and where the integration points are with other teams.

In that analogy, every new AI coding session is like a senior engineer’s first day on the job. The AI agent retains all of its general experience and skills, but it forgets all the onboarding details. It has to learn them again for each session, as if it were “coming into the office” for the first time.

As engineers who want to get the best performance from an AI assistant, we’re in charge of supplying these two elements from Karpathy’s definition:

  • Just the right information
  • For the next step

Just the right information

Once you accept that every session is day one, the question becomes: what does a first‑day engineer need to know? You’re using an AI assistant because you have a task for it, like fixing a bug or starting on a feature. What would a human engineer need to know to do the same task on their first day? They’ll certainly need a system design document that describes each part of the system and how it works. Such a document could include sections like:

  • A high-level architecture diagram
  • Responsibilities of each service
  • Key integration points
  • Known constraints

You’ll need to write the system design document once, then keep it up to date as the system changes. By providing this document to the assistant at the beginning of each session, you give it a detailed map of the territory it’s working in. Next, you’ll need to provide general advice like, “Ask clarifying questions before writing any code.” These are prompts that offset the assistant’s weaknesses. When you upgrade to a new model, it’s good to experiment with these, since the new model may have different strengths and weaknesses compared to the previous version. Finally, you’ll need a requirements document for the task you’re asking the assistant to work on. This document can come out of your group’s planning process. Just give it to the assistant in the same format you use for humans. A requirements document might include sections like UX mock-ups and workflow, inputs and expected outputs, performance requirements, and other systems to integrate with.
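
One way to make that repeatable is a small script that stitches the three documents into a session preamble you paste in (or pipe to a CLI agent) at the start of every session. Here’s a sketch; the file names under docs/ are hypothetical examples, not a standard:

    # build_preamble.py -- sketch of a repeatable onboarding step; adapt the
    # file names to your own project.
    from pathlib import Path

    SECTIONS = [
        ("System design", "docs/system-design.md"),          # architecture, services, constraints
        ("General advice", "docs/general-advice.md"),        # e.g., "ask clarifying questions first"
        ("Feature requirements", "docs/feature-requirements.md"),  # the task at hand
    ]

    def build_preamble() -> str:
        parts = []
        for title, path in SECTIONS:
            body = Path(path).read_text(encoding="utf-8").strip()
            parts.append(f"## {title}\n\n{body}")
        return "\n\n".join(parts)

    if __name__ == "__main__":
        print(build_preamble())

Because the preamble is rebuilt from the files each time, keeping a document up to date automatically improves every future session.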

For the next step

The goal of context engineering is to make every session stateless by design. As you improve your AI assistant workflow, you should be eager to start a fresh session whenever you need one. AI assistants work best when they’re not pushing against their context window limits, so your process should support that. Rather than relying on context that you build up as you’re chatting with the agent, get in the habit of externalizing state in a form that’s easy to re-load. This means documenting everything a developer needs to know when working on the system.

Agents know how to read and write Markdown documents, so you can use those as a starting point. We have already talked about the basic set of documents you’ll need: the system design document, general advice document, and feature requirements document. If something comes up as you’re chatting with the agent that you need it to remember, copy it in Markdown format and put it in one of those documents.
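
To make that capture habit cheap, it helps to have a tiny helper for appending dated notes to one of those files. A sketch, assuming a hypothetical docs/decisions.md:

    # remember.py -- sketch; docs/decisions.md is an example location, not a convention.
    # Usage: python remember.py "Payments service owns retry logic; the gateway does not."
    import sys
    from datetime import date
    from pathlib import Path

    LOG = Path("docs/decisions.md")

    def remember(note: str) -> None:
        LOG.parent.mkdir(parents=True, exist_ok=True)
        if not LOG.exists():
            LOG.write_text("# Decisions\n\n", encoding="utf-8")
        with LOG.open("a", encoding="utf-8") as f:
            f.write(f"- {date.today().isoformat()}: {note}\n")

    if __name__ == "__main__":
        remember(" ".join(sys.argv[1:]))

Since the file is plain Markdown, it slots straight into the preamble for the next session.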

For memory and coordination, the agent will want to use Markdown documents by default. But they aren’t optimized for that purpose. Instead, an emerging ecosystem of agent observability tools, such as Maxim AI, Langfuse, Arize, Galileo, and LangSmith, provides a customized view into what your agents are up to. Or for a more lightweight, developer-centric option, Steve Yegge’s Beads keeps track of task details, task status, and the dependency graph that relates tasks to each other. Tools like these offer another source of context to help tune the agent for your specific projects and tasks.

The year ahead

In 2026, we’ll need different software engineering skills than we needed just a few years ago. We have to think at a higher level of abstraction, more like an architect than a coder. The AI assistants know how to design and code, but they don’t know about your specific system or feature requirements. When an AI assistant performs poorly on a task, don’t say, “AI isn’t smart enough for the job.” Instead, figure out what information the assistant was missing. This is the context engineering mindset. Our job is no longer to solve the problem directly. Instead, we have to figure out what a skilled developer would need to know to solve the problem.

In this new world, we have to be more diligent about continually refining the requirements for a system. Code can be generated quickly. That allows us to spin up a prototype in a few minutes and get feedback from a product manager. But relying on tribal knowledge is even more precarious than it was in the human-only coding world. A coding agent won’t call a colleague to ask them questions. Decisions have to be documented and saved somewhere that agents can get to them. Keeping an up-to-date library of Markdown files and using an agent-friendly task management system can mean the difference between generic results from an AI assistant and code that integrates well with our systems on the first try. Context engineering isn’t a workaround. It’s the discipline that unlocks the real productivity gains of AI‑assisted development.

Do Coding Bots Mean the End of Coding Interviews?

By Duncan Smith · December 31, 2024

[Image: Human and robot on a hike]

Roman Elizarov noticed something different about the 2024 Advent of Code:

AI is reshaping competitive programming, not just in benchmarks or papers, but in real life. My rough guess: this year’s Advent of Code leaderboard features ~80% AI-driven entries in the top 100 for the first time. Advent of Code is still an amazing event to sharpen your software engineering skills and have fun. But in just a year, it has lost much of its relevance as a way to compare problem-solving skills among humans.

If a spot on the AoC leaderboard no longer says much about your skill at solving puzzles, does that predict a change in the way interviewers use coding puzzles to evaluate candidates for programming jobs?

« Continue »

Red-Green-Code: 2016 in Review

By Duncan Smith · Dec 28

[Image: NM Desert]

Year two of this blog has come to an end. Let’s review the topics and posts from 2016.

« Continue »

Three Perspectives on Coding Interviews

By Duncan Smith · Oct 12


It’s October, the time of year when leaves start to fall from the trees, and companies start to recruit college students for summer internships. Last week, I spent half a day interviewing some of those candidates. I also came across a long Hacker News thread called I Hate HackerRank. So I thought it would be a good time to revisit the dreaded coding interview.

« Continue »

Programmer Skills (and Salaries) According to Stack Exchange

By Duncan Smith · Sep 21

[Image: Skills]

In July of this year, Stack Exchange Inc. released an online tool that lets you calculate how much money you would make if you worked there. The number you get out of the tool is based on four factors. There’s a salary floor based on the position you select (e.g., Developer or Product Designer), an adjustment based on your years of professional experience, and a bonus for living in one of a few high-cost cities (New York, San Francisco, or London). Finally, the tool takes into account your skills.

Having written in the past about skills for programmers, I was interested to see what Stack Exchange decided was important for success in a programming job. Here’s what I found.

« Continue »

Write Your Own Manual

By Duncan Smith · Jul 20

[Image: Small Computer Handbook]

You can become a better software developer by improving your ability to organize information. A key part of that skill is being able to communicate what you know in writing. Whether you’re enlightening your peers on Stack Overflow or writing FAQs for your team, it’s good to have a reputation as a person who Knows The Answers. If you can write clear technical designs, how-to guides, and other documentation, you can have a lot of influence. In fact, even if you just write for yourself, you can get things done faster by avoiding trips to the Web to repeatedly look up the same information.

Here are some tips on when, what, and how to write as a programmer.

« Continue »

Rules for Working Intensely

By Duncan Smith · Jul 13

[Image: Focus]

Cal Newport likes to distill the components of productivity into the following formula:

Work Accomplished = Time Spent x Intensity

We all have 24 hours per day, excluding the occasional leap second. That plus the need for sleep puts an upper limit on the Time Spent component of the formula. Intensity, in theory, has no upper limit. You could spend a lifetime getting better at concentrating. So it would seem that the Intensity component is the one to target for improvement.
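
As a quick illustration with made-up numbers (the units are arbitrary):

    8 hours x 0.5 intensity = 4.0 units of work accomplished
    5 hours x 0.9 intensity = 4.5 units of work accomplished

The shorter, more focused schedule wins.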

There’s some truth to that analysis. Cal’s fixed-schedule productivity technique starts by making Time Spent a constant. Intensity is the only component you’re allowed to adjust.

But as you make your way toward that gloriously fixed schedule, it’s helpful to track how many hours you’re actually working, as well as how intensely you’re focusing during those hours. To do that, you have to follow a few rules.

Here are four rules to make use of the Work Accomplished formula.

« Continue »

A Career Skill: Organizing Information

By Duncan Smith · Jun 22

[Image: Information]

Another session of Top Performer is underway, and one of the goals of the early lessons is finding a skill that is important for success in your career. That has me thinking about skills for programmers. Today I’m going to focus on one particular skill that is critical for programmers working on a team, and becomes more critical as you work on larger projects.

I wrote about a few different skill categories in Skills for Programmers. If you’re a new college graduate starting a full-time job, you might have strong algorithmic problem-solving skills, knowledge of Computer Science fundamentals, expertise in a few specific technologies, and good work habits. Depending on your background, you may even have some of the soft skills required for success at a software company. But what new graduates generally don’t have is experience working on a large codebase that a team of programmers has built up over several years.

When you’re building a small project from scratch, technical questions can be answered through a Stack Overflow search. You still have to think about timeless software engineering questions like good design and whether you’re building the right product for your target audience. But you can rely on others who are working with the same technology to get you unstuck.

When you’re maintaining or enhancing a large software system as part of a team, it’s not enough just to know the technology you’re working with. You also have to know the details of the specific system you’re working on. As I wrote a few weeks ago, that means you have to figure out how to learn from local experts.

If you have a general programming question, find the answer on Stack Overflow, and later forget the answer, you can always find it again. I do that all the time. But if you forget an answer that you got from a local expert, you have to go back to that expert to get it again. That’s an inefficient use of a scarce resource, and it might make your expert grumpy. You want to make the best use of your local experts or, if you’re the expert, ensure that others make the best use of your time. You can do that using the skill I’ll be exploring today.

« Continue »

Learning From Local Experts

By Duncan Smith · Jun 1

[Image: Experts]

There’s a certain kind of programming problem that you won’t find the solution to on Stack Overflow, programming blogs, or even an old-fashioned paper book. It’s the kind of problem that is, to use an old Stack Overflow term, too localized. Imagine you’re working on a software team and you encounter one of the following situations:

  • You run a local build (without changing any local files), and it fails.
  • You’re working on a feature, and you need to know how to simulate a credit card payment for your product, without actually submitting a credit card.
  • You’re about to submit a database schema change, and you want to test the database upgrade process the way it will actually work in the shared test environment and on the live system.

There are a few ways you could potentially solve these problems. You can always use the old standby technique of Googling keywords and seeing if you can match the general results with the specific needs of your project. That may get you some interesting options, but it’s likely to require a lot of work.

If your team has good internal documentation with a search function that works, then you may be in luck. If you can find documentation about the exact scenario you’re looking for, then you’re even luckier. Keeping internal documentation is a good practice for any software team, but it does require discipline. Stack Overflow is even trying to encourage team documentation (as long as the team is willing to publish it to the world) with their teams feature.

When Google and internal documentation fail, there’s still one more thing you can try: ask a local expert. Here are a few tips on doing that effectively.

« Continue »


