Red-Green-Code

Deliberate practice techniques for software developers


Will AI Coding Assistants “Deskill” Us?

By Duncan Smith · January 30, 2026

[Image: Robot helpers]

Cal Newport, a longtime critic of digital distraction, has turned his attention to AI. In a recent article, he takes on AI coding assistants. His argument: coding assistants lead to programmer deskilling. Programmers who use these tools are trading their rare and valuable skills for the mere ability to orchestrate agents. In his view, AI assistants will benefit only technology companies, not their workers: companies will need to hire only lower-skilled, lower-paid workers, who will use AI agents for the high-skill tasks. Newport’s concern isn’t just that programmers will forget how to code. It’s that the entire software ecosystem could lose the deep expertise required to build and maintain complex systems.

One response to this argument says it’s wrong about the consequences of agent use. In this view, programmers won’t get less skilled; they will just develop different skills. The history of our industry supports this: software engineering is full of new abstractions that let each generation of programmers forget things their predecessors knew. As compilers got better, programmers forgot how to write assembly by hand. As libraries got better, programmers forgot how to write fundamental algorithms from scratch. AI-generated code is another abstraction. Programmers may forget what it’s like to write code line by line. But in exchange, we will spend more time on architecture, user experience, and security, areas that will still need skilled human input.

But there are a few key differences between AI assistants and previous productivity improvements. If we don’t address these, we risk becoming victims of the deskilling that Newport warns about.

First, previous programming tools behave deterministically. Every time you run the same version of a compiler against the same version of your source code, you get the same output. You may have bugs in your program, and there may even be bugs in the compiler, but there is a well-defined process by which your program gets turned into an executable. AI is different: it is not a deterministic layer built on top of a stable substrate, but a probabilistic collaborator whose output must be treated with suspicion. With AI assistants, the failure modes are semantic, not mechanical. The assistant can produce code that compiles, passes tests, and still violates the system’s invariants. So you need a plan, combining code review and testing, that lets you, the human programmer, confirm that the output meets your standards.
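To make the semantic failure mode concrete, here’s a minimal sketch in Python. The withdraw function and its overdraft invariant are hypothetical examples, not from Newport’s article: the function runs, passes its happy-path test, and still violates a system invariant, until an invariant-level check exposes it.

    # Hypothetical sketch: code an assistant might produce. It runs and passes
    # the obvious test below, yet silently breaks a system invariant.

    def withdraw(balance: int, amount: int) -> int:
        """Return a new balance in cents after a withdrawal."""
        return balance - amount  # plausible, but never checks for overdraft

    # The happy-path test that AI-generated code can easily satisfy:
    assert withdraw(1000, 300) == 700

    # An invariant-level check turns the semantic failure into a mechanical one:
    def balance_invariant_holds(balance: int, amount: int) -> bool:
        """System invariant: a withdrawal must never leave a negative balance."""
        return withdraw(balance, amount) >= 0

    print(balance_invariant_holds(1000, 300))  # True: the happy path
    print(balance_invariant_holds(100, 300))   # False: runs, tests pass, invariant broken

A compiler either accepts withdraw or it doesn’t. Only review and invariant-level testing can tell you whether the assistant’s output means what you intended.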

Second, it’s not clear what the limitations of AI assistants actually are. A programmer who develops what they think is a good system architecture might discover, with the right prompt, that the assistant has an idea for a better one. The assistant might know more about user experience best practices than any particular programmer. And because it has read millions of codebases, it may propose architectural patterns that no single engineer has encountered. Even for security code, automated code reviews can uncover holes. To avoid falling behind their peers, programmers need to push the boundaries of what they ask their assistants to work on. But to avoid deskilling, they need to push the boundaries of their own abilities rather than passively riding the wave of continuous AI improvements.

Finally, previous improvements in software tools had a positive effect on the programming labor market. As tools got more powerful, software ate more of the world, which led to more demand for skilled programmers. The software systems of 2020 were so complex that only highly trained software engineers could work on them. But now that AI assistants can reason over large code repositories, we may finally be going in the other direction. If the only skill that matters is giving the assistant the right context and the right prompt, the result may be deskilling. In the past, better tools increased the demand for skilled programmers. With AI, better tools may reduce it.

In response to these challenges, we need a two-pronged approach. We can’t just ignore AI assistants, since our assistant-using peers will race ahead of us. We need to aggressively push the boundaries of what we ask our assistants to do. When new model versions are released, we need to upgrade and see if they can do things that the previous versions couldn’t. We need to read books and articles to learn techniques to get the most out of the assistants.

But when we find something a model isn’t good at, and better prompting doesn’t seem to help, we need to be prepared to take it on ourselves. This means continual learning, which should be a familiar mode for anyone who has worked on software in the past few decades. The industry is always racing ahead, and programmers have always had to study and learn to stay relevant. In the current era, AI assistants are so capable that it’s tempting to delegate everything to them. But programmers who are paying attention shouldn’t be satisfied with that approach. If we don’t have any skills beyond what the current models are capable of, then any trained model user could replace us.

So as the model makers make their models better, we need to make ourselves better, cultivating the skills at the edge of what only humans can do. We need to practice architecture, debugging, threat modeling, domain reasoning, and understanding systems at a conceptual level. When working in these areas, we should first check what the assistant comes up with, then see if we can improve on it. This is the pattern we’ll be using in this era: ask the model, fill in the gaps with our own expertise, repeat.

Categories: Career

