Testing AI Products: Our New Approaches for QA and SDET

AI systems are reshaping the traditional testing model: identical scenarios can produce different results, tests lose reliability, and fixed expectations no longer reflect product quality. Classical QA approaches stop working, forcing teams to find new ways to stay in control. 

In this talk, I will present practical approaches to testing AI-driven functionality under non-deterministic conditions: how to design tests for variable behaviour, what should remain under manual verification and what can be automated, and how to validate results when the “expected answer” is not fixed.
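As a quick taste of the idea, here is a minimal sketch in Python: instead of comparing the output against one fixed expected string, the test asserts invariants that any acceptable answer must satisfy. The `summarize` function, its module, and the specific checks are hypothetical placeholders, not code from a particular framework.

```python
import re

# Hypothetical import: the AI-backed feature under test (e.g., an LLM summarizer).
from my_service import summarize


def test_summary_keeps_key_fact():
    source = "The payment service retries failed transactions up to three times."
    summary = summarize(source)

    # No single expected answer: check properties any acceptable output must have.
    assert summary.strip(), "summary must not be empty"
    assert len(summary) <= len(source), "summary should not exceed the source length"
    assert re.search(r"retr(y|ies)", summary, re.IGNORECASE), \
        "the key fact about retries must be preserved"
```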

We will explore working with test data, defining quality criteria, and the differences between automated tests, evals, and benchmarks. We will also discuss how and when to use benchmarks effectively in practice. 
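To illustrate how an eval differs from an ordinary pass/fail test, here is a hedged sketch: answers are scored over a small dataset and the suite gates on an aggregate threshold rather than an exact match per case. The `answer_question` function, the dataset, and the keyword-coverage metric are illustrative assumptions, not a specific tool's API.

```python
# Hypothetical AI-backed function under test.
from my_service import answer_question

# Tiny illustrative dataset; real evals use larger, curated sets.
CASES = [
    {"question": "Can I get a refund after 30 days?", "must_mention": ["refund", "30"]},
    {"question": "How do I reset my password?", "must_mention": ["reset", "password"]},
]


def keyword_coverage(answer: str, must_mention: list[str]) -> float:
    # Simple stand-in for a richer quality metric (e.g., semantic similarity or a rubric).
    hits = sum(1 for term in must_mention if term.lower() in answer.lower())
    return hits / len(must_mention)


def test_answer_quality_eval():
    scores = [keyword_coverage(answer_question(c["question"]), c["must_mention"]) for c in CASES]
    average = sum(scores) / len(scores)

    # Gate on aggregate quality instead of one fixed expected answer per case.
    assert average >= 0.8, f"average keyword coverage {average:.2f} is below the 0.8 threshold"
```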

This talk is based on real-world experience and will be useful for QA engineers and SDETs working with AI-driven functionality.
