When I was in college, I was required to take a Java course. I wasn't especially interested in Java, but nothing about the class was particularly difficult except the tests.

Not because I didn't understand the questions or the logic flow, but because the tests were administered the way one might administer a test for a foreign language. I mean an actual spoken language, not [random programming language].

Specifically, we were given closed-book tests in which we were expected to hand-write code.

On paper.
With a pencil.
Scored on syntax.

To my mind, this wasn't testing the ability to think like a developer, but rather the ability to memorize the syntax of the Java language. At the time AI didn't exist and auto-complete in the IDE wasn't a thing, so yes, syntax mattered, but the exercise still felt disconnected from how software is actually built.

When it comes to writing code, specific knowledge of a language's syntax ranks fairly low on the scale of what's actually important. Developers constantly lean on other tools: Google, docs, books, and today, AI.

If the job is software development, determining that the applicant can write code is worthwhile. But expecting them to sit down and write arbitrary code without reference to external sources, under the added artificial pressure of being watched and of interview time constraints, is... unreasonable.

Speed is important.
Correctness is important.
Knowing and recognizing design patterns is important.

These all lead to stable, performant, maintainable code.

Call me an old man yelling at clouds if you like, but the job was never about writing code. It has always been about solving problems. Real developers use references constantly. Knowing when to reach for reference material, and where to look, is much more important than memorization. Nobody ships in total isolation.

You could say that writing the code has been a bottleneck in the process. Modern tools - autocomplete, LSPs, LLMs, etc. - can reduce the bottleneck of typing and shift focus to where it always belonged in the first place: thinking.

AI doesn't replace engineers. It replaces the need to memorize. Even if you are not going to allow employees to use AI on the job (which frankly seems almost as nonsensical as trying to one-shot "vibe code" an application with AI), expecting someone to write functional code by hand, with no reference material, is even more antiquated today than it was over two decades ago.

Instead, interviews should be a conversation, not a test. They should focus on how the candidate thinks about a problem: what they can determine from the specifications provided, and what they would do about the portions that are unclear or that they're unsure of.

Do they ask questions?
Do they think about edge cases?
Can they explain their approach to understanding and solving the problem?
If they can think of more than one way to approach it, can they articulate tradeoffs?
Can they adapt if requirements change?
Can they spot risks that might lead to stability or scalability issues?

I'm less interested in their ability to recall syntax, or regurgitate design patterns off the top of their head, than I am in how they reason about a problem that needs to be solved.

Far too often, interviews are disconnected from the reality of the job. Code challenges reward those who have recently interviewed, who grind on "leetcode", or who otherwise optimize for artificial constraints, while completely overlooking judgement, thinking in systems, and, most importantly, real-world experience.

I've met and worked with incredible developers who suck at live coding, and mediocre ones who ace it.

Interviews should more closely resemble the job - not arbitrary code challenges or algorithm-puzzle style "trick question" nonsense. Most people never need to manually invert binary trees or implement their own sorting algorithms. These are solved problems that already exist in most languages' standard libraries or community code.
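
To illustrate with Java (the language from that college course): sorting is a one-liner against the standard library, so asking someone to hand-write a quicksort from memory proves very little. A minimal sketch:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class SortExample {
    public static void main(String[] args) {
        // Sorting primitives: the standard library already handles it in place.
        int[] numbers = {42, 7, 19, 3};
        Arrays.sort(numbers);
        System.out.println(Arrays.toString(numbers)); // [3, 7, 19, 42]

        // Sorting objects: also a one-liner, stable, and well tested.
        List<String> names = Arrays.asList("Carol", "Alice", "Bob");
        Collections.sort(names);
        System.out.println(names); // [Alice, Bob, Carol]
    }
}
```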

How candidates think about problems, communicate, and collaborate are vastly more important than whether they can remember whether the haystack or the needle comes first when searching an array or list, or immediately recall which design pattern applies to a specific problem. On the job, the solution is rarely a known target or a puzzle waiting to be unlocked, yet that is exactly what interviews usually test.

When I interview candidates for development roles, I try to get a solid sense of how they think and, most importantly, whether they're someone I could see myself working with on a daily basis. For the most part, everything else can be learned.

In the age of AI-assisted development, this is more relevant than ever, and interviews probably ought to adapt accordingly.