QA candidates are increasingly asked to write an automated test during interviews, either as a take-home assignment or in a live coding session. In practice, this format often evaluates a single skill: writing tests with a familiar framework. Yet passing it does not always correlate with performance on the job, where engineers must analyze requirements, prioritize checks, and build an effective testing strategy. To address this gap, we introduced a logic-based live coding task that lets us observe how candidates make decisions in real time.
In this talk, I will share what signals this format surfaces during interviews and how it helps assess a candidate's ability to handle real product tasks. I will present practical cases, pass-rate statistics, and the challenges we faced in adopting this approach. We will also discuss why success at writing tests in an interview does not necessarily translate into the ability to ensure quality in a complex product environment.