
20 QA interview questions (with commented answers)

TestBooster
12 min read

QA interviews tend to be more varied than they seem. In some hiring processes, knowing what a test case is and understanding the bug cycle is enough. In others, especially at tech companies with agile teams, the interviewer wants to know whether you understand automation, whether you can work within a sprint without becoming a bottleneck, and whether you have an informed opinion about AI in the context of testing.

This guide brings together 20 questions that come up frequently in this type of interview. They’re divided into three blocks: fundamentals, practices and methodologies, and automation with trends. Each question includes an expected answer and a comment explaining what the interviewer is really evaluating, because, more often than not, the technically correct answer alone isn’t enough.

According to the U.S. Bureau of Labor Statistics, demand for QA professionals is expected to grow 15% between 2024 and 2034, well above the average for all occupations. The field is in high demand, and the bar has risen accordingly. It’s worth showing up prepared.

How to use this guide

Memorizing answers won’t get you far. What works in an interview is being able to explain the reasoning behind each concept and connect it to your real experience. If you’ve never worked with a particular tool or methodology, say so, and follow up by showing that you understand the underlying principle.

The highlighted comments throughout the text show what the interviewer generally wants to evaluate in each question. Read them carefully: sometimes a question seems technical when it’s actually measuring professional maturity or communication skills.

Block 1 — QA Fundamentals

These seven questions come up in virtually any hiring process, regardless of the seniority level. They cover the concepts every QA professional needs to master.

1. What’s the difference between QA, QC, and Software Testing?

Answer: QA (Quality Assurance) is a preventive process: it involves defining standards, monitoring development, and ensuring the entire process is oriented toward quality. QC (Quality Control) is reactive: it inspects the product to verify that it meets requirements. Software Testing is a specific activity within QC: the execution of checks against the system.

2. What is the STLC and how does it relate to the SDLC?

Answer: The SDLC (Software Development Life Cycle) is the full development cycle, from requirements gathering through delivery and maintenance. The STLC (Software Testing Life Cycle) is the specific testing cycle, running in parallel: requirements analysis, planning, test case design, environment setup, execution, and closure. QA is involved from the requirements phase, not only once the code is ready.

3. What’s the difference between functional and non-functional testing? Give examples.

Answer: Functional tests verify whether the system does what it’s supposed to do: a login that authenticates the user correctly, a calculation that returns the right value. Non-functional tests verify how the system behaves: response time (performance), behavior under load (stress), usability, security, compatibility. Both are necessary; neglecting non-functional testing is one of the most common causes of production issues.

4. What is a test case? How would you write a good one?

Answer: A test case is a set of conditions and steps that verify a specific behavior of the system. A good test case includes: a unique identifier, a clear title, preconditions (what must be true before execution), numbered and objective steps, an expected result, and a defined acceptance criterion. It should also be independent enough to run without relying on other tests, and written clearly enough that someone unfamiliar with the system can reproduce it.

🔍 What the interviewer evaluates: Interviewers notice when a candidate lists the fields mechanically without understanding why each one exists. Talking about acceptance criteria and reproducibility signals practical maturity.
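The fields above can be sketched as a simple record. This is an illustrative structure only (the names `TestCase`, `TC-042`, and the example content are made up for the sketch, not from any specific test-management tool):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """Minimal test-case record; field names are illustrative."""
    id: str                   # unique identifier
    title: str                # clear, behavior-oriented title
    preconditions: list[str]  # what must be true before execution
    steps: list[str]          # numbered, objective steps
    expected_result: str      # the defined acceptance criterion

tc = TestCase(
    id="TC-042",
    title="Login with valid credentials redirects to dashboard",
    preconditions=["User account exists and is active"],
    steps=[
        "1. Open the login page",
        "2. Enter a valid email and password",
        "3. Click 'Sign in'",
    ],
    expected_result="User lands on the dashboard, no error message shown",
)
```

Note that each step is self-contained: someone unfamiliar with the system could execute them in order and compare what they see against `expected_result`.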

5. What is a test plan? When is it necessary?

Answer: A test plan documents the strategy, scope, resources, schedule, and entry and exit criteria for a testing cycle. It’s necessary in projects with greater complexity, multiple teams, or regulatory requirements. In small agile teams, it can be replaced by a well-structured definition of done and acceptance criteria in user stories; what matters is that the quality agreement is recorded in some form.

6. What’s the difference between Smoke Testing, Sanity Testing, and Regression Testing?

Answer: Smoke Testing is a quick check of the main features to determine whether the build is stable enough for more detailed testing; if authentication doesn’t work, there’s no point testing anything else. Sanity Testing is focused: it verifies whether a specific fix or small change worked without causing obvious breakage nearby. Regression Testing is broader and more systematic: it ensures that previously working features continue to work after any change.
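One common way to operationalize this is tagging tests so a quick smoke subset gates the full regression pass. A minimal sketch in plain Python (test names and the tag-based runner are hypothetical; real projects would use framework markers such as pytest’s):

```python
# Stand-ins for real checks: each returns True on pass.
def test_login():        return True
def test_checkout():     return True
def test_profile_edit(): return True

# Map each test to its tags: "smoke" for build-stability checks,
# "regression" for the broader systematic pass.
SUITE = {
    test_login:        {"smoke", "regression"},
    test_checkout:     {"smoke", "regression"},
    test_profile_edit: {"regression"},
}

def run(tag):
    """Run every test carrying `tag`; return True only if all pass."""
    return all(test() for test, tags in SUITE.items() if tag in tags)

# Run the smoke subset first; only proceed to full regression if the
# build proved stable enough to be worth testing in depth.
if run("smoke"):
    full_ok = run("regression")
```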


7. What do you do when you find a critical bug in production and there’s no documentation for the flow?

Answer: First, I’d assess the immediate impact: how many users are affected, whether a workaround exists, and whether someone beyond the dev team needs to be looped in. Then I’d reproduce the bug with as much detail as I can gather: screenshots, logs, environment conditions. I’d communicate with the team using that evidence rather than waiting for documentation. The absence of docs doesn’t halt the work; it just requires more investigation and direct collaboration with whoever knows the system.

🔍 What the interviewer evaluates: A behavioral question disguised as a technical one. It assesses autonomy, ability to communicate under pressure, and sense of priority, not specific technical knowledge.

Block 2 — Practices and Methodologies

This block covers questions about how you work day to day, especially in agile teams.

8. How do you work as a QA within a Scrum team?

Answer: QA gets involved from story refinement, helping identify acceptance criteria and risks before development even begins. During the sprint, testing happens incrementally as tasks are completed, not in a batch at the end of the cycle.

9. How do you prioritize what to test when time is tight?

Answer: I use a risk-based approach: I first identify which areas have the highest business impact and the highest likelihood of failure. Critical features, payment flows, third-party integrations, and recently changed parts of the codebase get top priority.

🔍 What the interviewer evaluates: Candidates who say “I prioritize what’s most important” without explaining their criteria don’t convince anyone. Mentioning risk-based testing with concrete examples makes a real difference.
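Risk-based prioritization is often made concrete with a simple score: impact × likelihood. A sketch with made-up feature names and scores (both on a 1–5 scale; the weighting is illustrative, not a standard):

```python
# Illustrative risk scoring: impact x likelihood, each rated 1-5.
features = [
    {"name": "payment flow",            "impact": 5, "likelihood": 4},
    {"name": "third-party API sync",    "impact": 4, "likelihood": 4},
    {"name": "recently changed search", "impact": 3, "likelihood": 5},
    {"name": "profile avatar crop",     "impact": 1, "likelihood": 2},
]

for f in features:
    f["risk"] = f["impact"] * f["likelihood"]

# When time is tight, test the highest-risk areas first.
priority = sorted(features, key=lambda f: f["risk"], reverse=True)
print([f["name"] for f in priority])
```

The numbers themselves matter less than being able to explain where they come from: business impact from the product side, likelihood from change history and past defects.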

10. What is BDD? Have you worked with it in practice?

Answer: BDD (Behavior-Driven Development) is an approach where acceptance criteria are written in structured language that everyone on the team can understand. The most common format is Gherkin, with the Given / When / Then structure. Tools like Cucumber or SpecFlow turn these scenarios into executable automated tests. In practice, BDD’s greatest value is forcing the conversation about expected behavior before any code is written.
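To illustrate the Given / When / Then structure, here is a Gherkin scenario alongside a hand-rolled step sketch in plain Python. The `FakeAuth` class is a made-up in-memory stand-in so the example runs; a real project would bind steps through a BDD runner such as Cucumber, SpecFlow, or pytest-bdd:

```python
# The Gherkin scenario the whole team agrees on BEFORE code is written:
SCENARIO = """
Feature: Login
  Scenario: Valid credentials
    Given a registered user "ana@example.com"
    When she logs in with the correct password
    Then she sees the dashboard
"""

class FakeAuth:
    """Tiny in-memory stand-in so the step sketch is runnable."""
    def __init__(self):
        self.users = {}
    def register(self, email, password):
        self.users[email] = password
    def login(self, email, password):
        return "dashboard" if self.users.get(email) == password else None

def test_valid_credentials():
    auth = FakeAuth()
    auth.register("ana@example.com", "s3cret")      # Given
    page = auth.login("ana@example.com", "s3cret")  # When
    assert page == "dashboard"                      # Then

test_valid_credentials()
```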

11. How do you handle a developer who disagrees that the reported behavior is a bug?

Answer: I start by looking at the acceptance criteria defined for that feature. If the behavior deviates from what was agreed, the argument is technical, not personal. If the criteria are ambiguous or missing, I bring in the product owner to align expectations. The goal isn’t to win the argument; it’s to make sure the decision is based on evidence and that the impact on the user is clear to everyone.

🔍 What the interviewer evaluates: Professional maturity. Candidates who “clash” with developers are a red flag. The interviewer wants to see evidence-based argumentation and the ability to escalate when needed.

12. What is regression testing and when do you run it?

Answer: Regression tests verify that previously validated features continue to work after code changes. They should run whenever there’s a change, whether a bug fix, a new feature, or a dependency update. In teams with CI/CD, automated regression runs on every push or pull request.

13. How do you document test cycle results for a non-technical manager?

Answer: I summarize the cycle in terms of coverage (what was tested), quality (how many bugs were found, by severity and status), and residual risk (what was left out of scope and why). I avoid technical jargon and focus on impact: “3 critical bugs were fixed before release; 1 low-priority bug was accepted as a controlled risk for this version.” Managers need to understand the actual state of the product, not a list of executed test cases.

🔍 What the interviewer evaluates: Communication skills are increasingly expected of mid-level and senior QAs. An analysis of 400 QA job postings published on Medium found that test management tools appeared in 45% of requirements; the market values people who can not only test, but also report clearly.

14. Have you worked with performance testing? What tools do you know?

Answer: Performance tests evaluate how the system behaves under load conditions: response time, throughput, resource usage, and behavior at the limits. The most widely used tools are JMeter (open source, versatile, well-suited for load and stress testing), k6 (code-oriented, easy to integrate into CI/CD), and Gatling (designed for high concurrency, with detailed reports). The right choice depends on the type of test, the project’s tech stack, and how tightly the testing needs to integrate into the pipeline.

🔍 What the interviewer evaluates: Even candidates who aren’t performance specialists gain points by showing they know the tools and understand when each one makes sense. Having no familiarity with this area at all is a negative for mid-level and above roles.
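The core idea behind any of these tools can be shown in a few lines: fire concurrent requests, collect latencies, and report percentiles and error counts. This is a concept-only sketch against a stub function (`fake_endpoint` is invented); real load testing would use JMeter, k6, or Gatling against the actual system:

```python
# Concept sketch: per-call latency of a stub "endpoint" under
# concurrent load, reported as p95 latency and error count.
import time
from concurrent.futures import ThreadPoolExecutor

def fake_endpoint():
    time.sleep(0.01)  # stand-in for a real network request
    return 200        # HTTP-style status code

def timed_call(_):
    start = time.perf_counter()
    status = fake_endpoint()
    return status, time.perf_counter() - start

# 20 concurrent "virtual users", 100 requests total.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(timed_call, range(100)))

latencies = sorted(elapsed for _, elapsed in results)
p95 = latencies[int(len(latencies) * 0.95)]
errors = sum(1 for status, _ in results if status != 200)
print(f"p95={p95 * 1000:.1f}ms errors={errors}")
```

Being able to explain what p95 means and why averages hide tail latency is exactly the kind of understanding interviewers probe for here.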


Block 3 — Automation and Trends

This is the block that sets candidates apart for more technical roles.

15. What’s the difference between Selenium, Cypress, and Playwright? When would you choose each?

Answer: Selenium is the most mature and supports multiple languages and browsers, but requires more setup and has heavier maintenance due to its WebDriver dependency. Cypress is quick to set up, offers a great debugging experience, and works very well for modern JavaScript applications; its limitations are around multi-domain support and the lack of native mobile testing. Playwright is newer, supports multiple browsers and languages, and handles async scenarios and mobile testing via emulation better.

The right choice depends on the context: the type of application, the team’s primary language, mobile support requirements, the learning curve, and the long-term maintenance cost.

16. What are flaky tests and how would you deal with them day to day?

Answer: Flaky tests are tests that pass on some runs and fail on others without any actual change in system behavior. The most common causes are: timing dependencies (insufficient or hardcoded waits), unstable test data, execution order dependencies, and environment issues. To address them: isolate the problematic test, investigate the root cause, replace fixed waits with conditional ones, and make sure each test is independent and idempotent.
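The most common concrete fix, replacing a fixed `sleep` with a conditional (polling) wait, can be sketched in a few lines. The `wait_until` helper below is illustrative (frameworks ship their own, e.g. Selenium’s explicit waits or Playwright’s auto-waiting):

```python
# A conditional wait: proceed as soon as the condition holds, and fail
# fast with a clear timeout instead of passing or failing by luck.
import time

def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until it returns truthy or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout:.1f}s")

# Usage sketch: wait for a state change instead of time.sleep(3).
state = {"loaded": False}
state["loaded"] = True  # in a real test, the app does this asynchronously
assert wait_until(lambda: state["loaded"])
```

The difference day to day: a hardcoded `sleep(3)` wastes three seconds when the app is fast and still flakes when the app is slow; the polling wait does neither.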

17. What is a CI/CD pipeline and what role do automated tests play in it?

Answer: CI (Continuous Integration) is the practice of integrating code frequently, on every commit, with automated validation. CD (Continuous Delivery/Deployment) extends that all the way to staging or production environments. Automated tests are the pipeline’s safety layer: unit tests run fast on every push, integration tests validate combined components, and E2E tests cover critical flows before delivery. The idea is that no change moves forward in the pipeline without passing the quality checks.

🔍 What the interviewer evaluates: QAs who understand CI/CD are far more valued in DevOps teams. Saying “I run tests in Jenkins” without understanding the pipeline’s logic isn’t enough for senior roles.
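That layered logic, fast checks first, expensive E2E last, is what a pipeline configuration encodes. A minimal sketch in a GitHub-Actions-style file (job names and the `make` targets are illustrative, not from any real project):

```yaml
# Illustrative pipeline: each stage gates the next via `needs`.
on: [push, pull_request]
jobs:
  unit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make unit-tests          # fast, runs on every push
  integration:
    needs: unit                       # only if unit tests pass
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make integration-tests   # validates combined components
  e2e:
    needs: integration                # critical flows before delivery
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make e2e-tests
```

The `needs` chain is the point: no change moves forward without passing the previous quality gate.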

18. How do you make sure your automated tests keep working when the application’s layout changes?

Answer: The first step is avoiding brittle selectors: auto-generated IDs, generic CSS classes, or DOM-position-based locators. The right approach is to use dedicated test attributes (data-testid) or semantic locators. On top of that, the Page Object Model pattern centralizes the interaction logic for each screen: when the layout changes, you update one place. More recently, tools with self-healing capabilities can adapt locators automatically when they detect UI changes, significantly reducing maintenance effort.

🔍 What the interviewer evaluates: Test maintenance cost is one of the biggest pain points in the industry. Candidates who understand how to reduce it, through good practices or the right tooling, stand out.
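A minimal Page Object sketch showing both ideas together, data-testid locators and one place to update. `FakeDriver` is an invented stand-in so the example runs without Selenium or Playwright installed:

```python
class FakeDriver:
    """Stand-in for a Selenium/Playwright driver in this sketch."""
    def type(self, selector, text):
        return (selector, text)
    def click(self, selector):
        return selector

class LoginPage:
    # Locators based on data-testid attributes, centralized here:
    # when the layout changes, only these three lines need updating.
    EMAIL    = '[data-testid="login-email"]'
    PASSWORD = '[data-testid="login-password"]'
    SUBMIT   = '[data-testid="login-submit"]'

    def __init__(self, driver):
        self.driver = driver

    def login(self, email, password):
        """One method encapsulates the whole interaction flow."""
        self.driver.type(self.EMAIL, email)
        self.driver.type(self.PASSWORD, password)
        return self.driver.click(self.SUBMIT)

page = LoginPage(FakeDriver())
clicked = page.login("ana@example.com", "s3cret")
```

Tests then call `page.login(...)` and never touch selectors directly, which is what keeps a layout change from rippling through the whole suite.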

19. How is AI changing QA work? What do you think changes in practice?

Answer: AI is primarily accelerating two things: test case creation (generating scenarios from requirements or real user behavior) and automated test maintenance (with self-healing that adapts scripts to interface changes). The QA role doesn’t disappear; it shifts: less time on repetitive tasks, more focus on coverage strategy, exploratory testing, and risk analysis. Critical thinking about what to test and why remains irreplaceable.

🔍 What the interviewer evaluates: They’re not expecting an AI expert. They want to know if the candidate is up to date, has a formed opinion, and can think critically about how the field is evolving.

20. Which automation tool would you choose today for a brand new project, starting from scratch?

Answer: It depends on a few factors. What type of application: web, mobile, or API? What’s the team’s primary language? How much onboarding time is available? What’s the acceptable maintenance cost? For teams that want speed without writing code, there are AI-powered tools built around natural language that eliminate the brittle selector problem entirely.

The best choice isn’t the most popular tool; it’s the one that solves the project’s actual problem with the lowest adoption and maintenance cost.

Speaking of test automation…

Several questions in this third block, about test maintenance, self-healing, codeless creation, and CI/CD integration, circle around a problem QA teams face every week: how much time is lost maintaining tests that break when the layout changes, and how steep the entry cost is with traditional tools like Selenium and Cypress.

TestBooster.ai is a Brazilian platform that addresses this problem directly. With it, you write tests in natural language, no code, no brittle selectors, and the intent-driven AI executes and maintains those tests automatically. When the application’s layout changes, TestBooster adapts, with no broken tests sitting in a maintenance queue.

The platform is up to 24x faster than Cypress or Selenium for test creation, and it’s a world pioneer in mobile test automation using natural language.

If you want to go into an interview, or your next sprint, with automation that actually works and doesn’t turn into technical debt, it’s worth taking a look: testbooster.ai/pt-br
