The QA Audit I Run Before Every Engagement
The biggest mistake a QA consultant can make is diving into solutions before understanding the problem. Here is the six-area audit framework I use at the start of every engagement.
The first thing I do when I start a new engagement is run an audit. Not because I need to justify my invoice, but because the single biggest mistake a QA consultant can make is diving into solutions before understanding the problem.
Every team is different. Every codebase is different. The right QA approach for a five-person startup moving fast is completely different from the right approach for a 50-person engineering team with compliance requirements. An audit tells me which one I'm actually looking at.
Here's the six-area framework I use on every new engagement.
The six areas I always examine
1. Test coverage and layer distribution
I start by mapping what types of tests exist and at what layer. Are you unit-testing business logic? Do you have integration tests covering your API contracts? Is there end-to-end automation, and if so, what does it actually cover?
The common problem here isn't a lack of tests — it's an imbalance. Teams often have lots of unit tests and lots of E2E tests, with almost nothing in between. Or they have a huge E2E suite that duplicates what unit tests should be doing, making everything slow and fragile. What I'm looking for: a distribution that reflects the testing pyramid and that actually matches where risk lives in the product.
2. Test reliability
I ask for recent CI run data. What's the pass rate? How often do tests fail intermittently? Are there tests that are disabled or skipped 'temporarily' that nobody has looked at in months?
A test suite with 90% reliability isn't a test suite — it's a suggestion. Engineers stop trusting it, start ignoring failures, and eventually your automation gives you a false sense of security while bugs walk straight through. What I'm looking for: flakiness rate below 1%, no long-term disabled tests, consistent pass rate across environments.
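One workable definition of flaky: a test that both passed and failed on the same commit, meaning the code didn't change but the verdict did. The sketch below computes a flakiness rate from CI history under that definition; the record format is an assumption, so map your CI provider's export into (test name, commit SHA, passed) tuples first.

```python
# Sketch: estimate a suite's flakiness rate from CI run history.
# A test counts as flaky if it both passed and failed on the same
# commit. The input format is an assumption -- convert your CI
# provider's export into (test_name, commit_sha, passed) tuples.
from collections import defaultdict

def flakiness_rate(runs: list[tuple[str, str, bool]]) -> float:
    verdicts = defaultdict(set)  # (test, commit) -> set of verdicts seen
    for test, commit, passed in runs:
        verdicts[(test, commit)].add(passed)
    tests = {test for test, _, _ in runs}
    flaky = {test for (test, _), seen in verdicts.items() if len(seen) == 2}
    return len(flaky) / len(tests) if tests else 0.0

runs = [
    ("test_login", "abc123", True),
    ("test_login", "abc123", False),  # same commit, opposite verdict: flaky
    ("test_checkout", "abc123", True),
    ("test_checkout", "def456", True),
]
print(f"flakiness rate: {flakiness_rate(runs):.0%}")  # prints "flakiness rate: 50%"
```

Run this over a few weeks of history rather than a single day; intermittent failures by definition don't show up in small samples.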
3. CI/CD integration
Where do tests run? When do they run? What happens when they fail? Can a developer merge code without tests passing?
The most common gap here: tests exist, but they don't gate anything. They run as an afterthought, failures don't block the pipeline, and the results aren't reviewed. This is test theatre — all the cost, none of the benefit. What I'm looking for: tests running on every PR, failures blocking merges, results visible in the PR interface.
4. Team practices and culture
This is the one people don't expect me to look at. But it's often the most important. Do developers write tests for their own features? Does the team review test coverage in code review? Is QA involved at the start of a feature, or called in at the end to 'sign it off'? Is the QA person empowered to block a release?
A team with mediocre tools and a strong quality culture will outperform a team with great tools and no culture every time. What I'm looking for: testing as a team norm, not a QA-only responsibility.
5. Risk mapping
Where would a bug hurt most? What are your critical user journeys? What parts of the codebase are most changed, most complex, and most poorly tested?
Risk mapping lets you prioritise. You can't test everything, so you should test the high-risk things better than the low-risk things. Most teams don't have this mapping explicitly — they test what's convenient, not what's important. What I'm looking for: deliberate, documented decisions about test priority that align with business risk.
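That churn-complexity-coverage triage can be reduced to a simple score. The sketch below ranks modules by change frequency times complexity, discounted by existing coverage; the numbers are illustrative, and in practice churn would come from git log, complexity from a static-analysis tool, and coverage from your coverage report.

```python
# Sketch: a simple risk score for prioritising test effort.
# High churn + high complexity + low coverage = highest risk.
# The module data is illustrative; in a real audit, churn comes from
# `git log`, complexity from static analysis, coverage from reports.
def risk_score(churn: int, complexity: float, coverage: float) -> float:
    return churn * complexity * (1.0 - coverage)

modules = {
    "billing/invoice.py":  {"churn": 42, "complexity": 8.1, "coverage": 0.35},
    "auth/session.py":     {"churn": 30, "complexity": 5.5, "coverage": 0.90},
    "static/constants.py": {"churn": 2,  "complexity": 1.0, "coverage": 0.10},
}

ranked = sorted(modules, key=lambda m: risk_score(**modules[m]), reverse=True)
for name in ranked:
    print(f"{risk_score(**modules[name]):7.1f}  {name}")
```

Note how the poorly covered constants file still ranks last: low churn and low complexity mean a bug there is unlikely and cheap. That's the point of the exercise, and why raw coverage percentage alone is a poor prioritisation signal.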
6. Documentation
Can a new engineer understand your test setup in an hour? Is there a clear guide to running tests locally? Is there documentation for why certain testing decisions were made?
Documentation isn't just for new joiners. It forces you to articulate decisions that might otherwise silently calcify into 'that's just how we do it.' What I'm looking for: a test strategy document, README-level setup instructions, and comments explaining non-obvious test decisions.
What happens after the audit
The audit produces a written report with findings across all six areas, rated by severity and ordered by impact. Not every problem needs fixing immediately — part of the value is knowing which things to tackle first and which can wait.
The report becomes the roadmap for the rest of the engagement. Every recommendation is tied to something we found in the audit, not to a generic best-practice checklist. That's the difference between advice that fits your team and advice that collects dust.
This audit is the starting point for our QA Strategy service — two to four weeks of assessment, ending with a prioritised roadmap you can actually execute.
Learn more about QA Strategy →
Got a QA problem? Let's talk about it.