Roman Elizarov noticed something different about the 2024 Advent of Code:
AI is reshaping competitive programming, not just in benchmarks or papers, but in real life. My rough guess: this year’s Advent of Code leaderboard features ~80% AI-driven entries in the top 100 for the first time. Advent of Code is still an amazing event to sharpen your software engineering skills and have fun. But in just a year, it has lost much of its relevance as a way to compare problem-solving skills among humans.
If a spot on the AoC leaderboard no longer says much about your skill at solving puzzles, should we expect a corresponding change in how interviewers use coding puzzles to evaluate candidates for programming jobs?