If you want to get better at programming, you need to get better at algorithms. In some ways, that statement is tautological. To quote Computer Science pioneer Niklaus Wirth, Algorithms + Data Structures = Programs. But besides the algorithms that you write yourself, it’s also worth studying well-known algorithms such as those taught in introductory Computer Science classes. Some software developers object to that idea. They say their language or framework already provides all of the standard algorithms, or that they can easily find them on the Web. Why do they need to learn how they’re implemented?

It’s certainly true that professional software engineers shouldn’t re-implement standard algorithms for the purpose of using them in a product. But that’s not the point of learning them. The reason they’re part of CS education is that they contain useful ideas.

Here’s one example: In modern programming languages, you don’t have to worry about finding the end of a string. The language hides that aspect of string implementation. But by studying string manipulation algorithms in C, you find out about the idea of a sentinel value. This is helpful in understanding how leaf nodes are represented in a tree. And now you have a couple of examples of a concept you can use in other situations where you need to indicate the end of a section of data.
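The sentinel idea isn't limited to C. As a hypothetical sketch in Java (the class and method names here are my own), an array terminated by an agreed-upon sentinel value can be scanned without passing its length around, just like a C string terminated by '\0':

```java
public class SentinelDemo {
    // An agreed-upon "end of data" marker; this assumes -1 never appears as real data.
    static final int SENTINEL = -1;

    // Count the elements before the sentinel, C-string style: no length parameter needed.
    static int lengthOf(int[] data) {
        int n = 0;
        while (data[n] != SENTINEL) {
            n++;
        }
        return n;
    }

    public static void main(String[] args) {
        int[] data = {7, 3, 8, SENTINEL}; // the sentinel marks the end, like '\0' in a C string
        System.out.println(lengthOf(data)); // prints 3
    }
}
```

The tradeoff is the same one C strings make: the scan is simple, but one value of the element type is reserved and can never appear as data.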
What does it mean to know an algorithm?
If you want to find out if someone knows what the capital of France is, you can just ask them. What if you want to know whether they’re capable of leading a team of ten developers working on a content management system? That might take a series of conversations over the course of a day, and you still wouldn’t know for sure. Figuring out whether someone “knows” an algorithm is not as simple as the Paris example, but simpler than the dev lead example. One option is to have them explain the algorithm to you. That’s a start, but not quite enough, assuming the purpose of this exercise is to find a good programmer. So before or after they explain how the algorithm works, you should also ask them to implement it in a real programming language.
If you’re explaining and implementing a programming problem in a technical interview, the goal is for the interviewer to believe that you know what you’re talking about, and to be convinced that your program will work. But if you’re at home working on getting better at programming, there’s not always someone around to evaluate you. So an alternative test is: can your implementation pass a series of correctness and performance tests? This is, of course, how competitive programming contests work. The online judge doesn’t care if you know why your submission works, as long as the tests pass. For companies using contests for hiring purposes, this could be a problem. What if their candidate gets a friend to submit a solution? But if you’re using a contest for your own education, that’s not a problem. You just have to be disciplined enough to know when it’s time to look at someone else’s code, and when you should keep struggling with your own.
The learning process
So your goal is to implement an algorithm from scratch in a real programming language without referring to any reference materials. What is the best way to get to the point where you can do that? If learning algorithms were like learning world capitals, a non-programmer could just memorize the implementation. Then, like a medieval scribe, they could illuminate a faithful copy of the algorithm. Clearly that would be a silly idea. But what if a programmer learned an algorithm by repeatedly implementing it? That’s a bit better, since a programmer has some understanding of the implementation and wouldn’t just be memorizing the symbols. In fact, this is a common way for programmers to learn. If you know how to write a record to the database in the system you’re working on, it’s probably not because you made a conscious effort to memorize that process. You just did it enough times that you no longer have to look up the syntax.
Although you could learn an algorithm by repeatedly implementing it, that’s not a very efficient option. And more importantly, it misses the point: the purpose of learning standard algorithms is not to be able to regurgitate them on demand, but rather to learn the algorithmic ideas that they contain. I’m going to propose a process that combines knowing the ideas behind an algorithm, and knowing how to implement it. The implementation step ensures that you’re aware of the practical details of the algorithm, and the idea step helps you apply what you learn to your own algorithms.
A process example: Quicksort
How to Attack a Programming Puzzle explains a process for solving competitive programming problems. In that scenario, you have a problem and you need to come up with a program that implements a solution. In the algorithm learning scenario, you start with both the problem and the implementation, and your goal is to internalize the solution. Here’s a process for doing that.
Step 1: Select an algorithm
A good algorithm to learn is one that contains ideas that will be useful in your future work. It’s worth repeating that you’re not learning this algorithm so that you can reproduce it verbatim in a software project. You’ll almost always have standard libraries that are better suited for that purpose. Instead, you’re looking for ideas.
For this example, I’m going to use Quicksort, one of the most famous algorithms. It has a fairly short implementation, but it’s still complex enough that there are some interesting ideas to learn.
Step 2: Find an explanation of the algorithm that also contains a reference implementation
Since this algorithm learning process depends on learning both why an algorithm works and how to implement it, it’s best to start with a resource that combines the two. Fortunately, that’s easy to find for standard algorithms. I’m going to use Sedgewick’s Java implementation, which has an associated explanation page.
Step 3: Read through the algorithm explanation and take notes on the implementation ideas
Here’s what I got from reading the explanation page:
- Quicksort is a way to sort an array by partitioning it into two parts and then sorting each part separately, using Quicksort for each one.
- The partitioning process rearranges the array so that:
- One element (the partitioning item) is in its final location
- Every element to the left of the partitioning item is less than or equal to it
- Every element to the right of the partitioning item is greater than or equal to it
- The steps to implement the partitioning process are:
- Choose the first item in the array as the partitioning item
- Scan from the left for an item that is greater than or equal to the partitioning item
- Scan from the right for an item that is less than or equal to the partitioning item
- Exchange the two items found, which puts them in the sections where they belong
- When the scan indices cross, exchange the partitioning item with the rightmost entry of the left section
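The partitioning steps above can be sketched in Java. This is a simplified version in the spirit of Sedgewick's implementation, not his exact code; the class name and the choice of int elements are my own:

```java
import java.util.Arrays;

public class PartitionDemo {
    // Rearrange a[lo..hi] around the partitioning item a[lo] and return its final index.
    static int partition(int[] a, int lo, int hi) {
        int i = lo;      // left scan pointer
        int j = hi + 1;  // right scan pointer
        int v = a[lo];   // partitioning item: the first element of the section
        while (true) {
            while (a[++i] < v) if (i == hi) break; // scan from the left for an item >= v
            while (a[--j] > v) ;                   // scan from the right for an item <= v (a[lo] stops this scan)
            if (i >= j) break;                     // the scan indices have crossed
            int t = a[i]; a[i] = a[j]; a[j] = t;   // exchange the two items found
        }
        int t = a[lo]; a[lo] = a[j]; a[j] = t;     // put the partitioning item in its final location
        return j;
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 9, 1, 7};
        int p = partition(a, 0, a.length - 1);
        System.out.println(p + " " + Arrays.toString(a)); // prints 2 [1, 2, 5, 9, 7]
    }
}
```

After the call, 5 sits at index 2 with {1, 2} to its left and {9, 7} to its right, exactly the three properties listed above.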
Step 4: Draw your own diagrams of the algorithm
Diagrams explaining how standard algorithms work are easy to find and can be useful for understanding the algorithms. There’s even a video explanation for QuickSort. But just as it’s helpful to paraphrase the verbal description of the algorithm, you’ll learn it better if you create your own diagrams that illustrate the algorithm in a way that makes sense to you. For example, your diagram might end up looking something like this as you trace through the algorithm:
Step 5: Write your own reference implementation of the algorithm
The reference implementation code that you find may not be exactly the code that you want to learn. It may not be in your preferred language, it may handle special cases that you don’t care about, or it may just be written in a style that you don’t like. Even if it seems acceptable at first glance, it’s a good idea to read through the implementation to familiarize yourself with it. If you make any changes, just be careful that you’re not affecting anything critical about the algorithm. You can reduce the chance of accidental breakage by testing the algorithm before and after your changes and comparing the results.
For QuickSort.java, I made a few changes to the code:
- Remove the shuffle step. Shuffling is a way to avoid worst-case performance in a real-world implementation, but it’s not a core part of the algorithm.
- Remove the counting of comparisons and exchanges, which makes the code cleaner. The counters are also not part of the algorithm, and are included in the example only for pedagogical reasons.
- Modify the test client to work like a UVa OJ submission: read test input from stdin, and write results to stdout. This is a good way to test that your algorithm works: like an online judge, you can compare known good results with your actual results.
- Change the type of data to be sorted from double to int. This doesn’t affect the algorithm, but integer data is more human-readable than real numbers between 0 and 1, which is what the original implementation uses.
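As a rough sketch, a UVa-style test client along those lines might read integers from stdin and write the sorted result to stdout. The input format here (a count followed by that many integers) and the use of the library sort as a stand-in are my own assumptions, not part of the original code:

```java
import java.util.Arrays;
import java.util.Scanner;

public class SortClient {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int n = in.nextInt();        // first token: number of elements (an assumed input format)
        int[] a = new int[n];
        for (int i = 0; i < n; i++) {
            a[i] = in.nextInt();     // read the elements to sort
        }
        Arrays.sort(a);              // stand-in for your own quicksort while testing the harness
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            if (i > 0) sb.append(' ');
            sb.append(a[i]);
        }
        System.out.println(sb);      // this line can be diffed against a known-good result
    }
}
```

Once the harness works with the library sort, you can swap in your own implementation and rerun the same input/output comparison.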
My modified version is at QuickSort.java on GitHub.
Step 6: Write pseudocode for your reference implementation
You now have an implementation that you have reduced to the essential core of the algorithm you’re trying to learn. But it’s still a real program, which means it contains implementation details as mandated by your compiler. If you try to learn it right away, you’ll have to learn those implementation details at the same time as you’re learning the conceptual steps of the algorithm. Trying to learn two different levels of abstraction at the same time makes the learning process less effective. To avoid that problem, you can use pseudocode. Since pseudocode doesn’t have to be parsed by a compiler, you can adjust its syntax rules as much as you want. This allows you to express the algorithm at gradually decreasing abstraction levels, until you eventually reach the source code that you started with. As you learn each abstraction level, you build a mental framework that makes it easier to learn the next one. Here’s how that could work with QuickSort.java:
Pseudocode Level 1
Given an array of integers, sort it in place using a divide and conquer approach
That should be easy enough to remember.
Pseudocode Level 2
    accept an array of integers
    quicksort it from the first element to the last element

    quicksort(array, left, right)
        if left and right indexes have crossed, return
        partition the array into left and right sections, where the left section elements are all less than or equal to the right section elements
        quicksort the left section
        quicksort the right section
Now we have the high-level strategy, but we haven’t yet covered the most complicated part, the partitioning process.
Pseudocode Level 3
    accept an array of integers
    quicksort it from the first element to the last element

    quicksort(array, left, right)
        if left and right indexes have crossed, return
        partition the array into left and right sections, where the left section elements are all less than or equal to the right section elements
        quicksort the left section
        quicksort the right section

    partition the array
        create a left pointer at the start of the section
        create a right pointer at the end of the section
        while the pointers haven't crossed
            increment the left pointer until it points to an element greater than or equal to the partitioning element
            decrement the right pointer until it points to an element less than or equal to the partitioning element
            exchange the left and right elements
        exchange the partition element with the right element
        return the partition index
At this point we can stop adding pseudocode levels, since we have almost as much detail as the real implementation. This might be a good time to return to the diagrams you drew in step 4, and decide if you want to add any more details based on the pseudocode.
Step 7: Implement the algorithm from scratch
Now that you have studied the algorithm at several levels of abstraction, it’s time to see if you can implement it yourself. The goal is to see if you understand it at the level of detail required to explain it to a compiler. If you have trouble implementing part of it, that may be a sign that there’s a part of the algorithm, or an aspect of your programming language, that you need to brush up on. Once you can implement your algorithm in a reasonable amount of time with just a little help from your compiler, then it’s time to move on to another algorithm. At that point you’ll understand it better than most programmers do.
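For comparison, a from-scratch attempt that follows the pseudocode would look roughly like this. It's a minimal sketch in the spirit of Sedgewick's QuickSort.java, without the shuffle, the counters, or the other details removed in step 5:

```java
import java.util.Arrays;

public class Quicksort {
    // Sort a[] in place using a divide and conquer approach.
    static void sort(int[] a) {
        sort(a, 0, a.length - 1);
    }

    static void sort(int[] a, int lo, int hi) {
        if (hi <= lo) return;         // the indexes have crossed: section is sorted
        int j = partition(a, lo, hi); // a[j] is now in its final location
        sort(a, lo, j - 1);           // quicksort the left section
        sort(a, j + 1, hi);           // quicksort the right section
    }

    static int partition(int[] a, int lo, int hi) {
        int i = lo, j = hi + 1;
        int v = a[lo];                // partitioning item
        while (true) {
            while (a[++i] < v) if (i == hi) break; // scan from the left
            while (a[--j] > v) ;                   // scan from the right; a[lo] == v stops it
            if (i >= j) break;                     // the scan indices have crossed
            int t = a[i]; a[i] = a[j]; a[j] = t;
        }
        int t = a[lo]; a[lo] = a[j]; a[j] = t;     // place the partitioning item
        return j;
    }

    public static void main(String[] args) {
        int[] a = {3, 7, 1, 9, 4, 2};
        sort(a);
        System.out.println(Arrays.toString(a)); // prints [1, 2, 3, 4, 7, 9]
    }
}
```

If your own attempt ends up differing from this in small ways, that's fine; what matters is that it matches the pseudocode's logic and passes your tests.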
Learning the insights
Learning a famous algorithm at this level of detail is like studying a Shakespeare sonnet or one of Euclid’s proofs. You get to take apart the result of years of research and see what makes it tick. While learning other people’s algorithms is not enough to make you good at coming up with your own, it does show you what a good algorithm looks like. For a student of the practical aspects of programming, this approach has the same goal as Cal Newport’s proof obsession strategy for students of theoretical computer science: “learn the insights.” Although you can use a programming language to solve a problem in infinitely many ways, there are a few patterns that keep appearing. By studying standard algorithms to the point where you can reproduce them from scratch, you can internalize these patterns and use them in your own work.
(Image credit: QuickBars.java, with parameters M=100 and N=25)