A Glimpse Into Global Software Testing
August 7, 2025

Memory Safety Is Now a Mandate
Software teams building critical systems are facing a quiet revolution. One that doesn’t make headlines, but shifts the foundation beneath every sprint, review, and release.
Earlier this year, TrustInSoft—together with Ferrous Systems and Hitex—surveyed engineers and technical leads across automotive, aerospace, embedded systems, and industrial IoT. Three findings stood out:
- Memory safety is no longer optional—engineering teams are making it a foundational design requirement across critical systems.
- Traditional testing tools are falling short, pushing teams to explore formal methods that offer mathematical assurance instead of approximation.
- Mixed-language codebases and rising compliance demands are accelerating the need for testing strategies that go deeper and deliver provable results.
Testing Isn’t Just a Phase Anymore
Testing used to live at the end of the pipeline. Something you did after the “real work” of writing code. But not anymore.
For safety-critical software, testing is becoming integral to design. Teams aren’t just asking whether their software works—they want to know, with certainty, that it can’t fail in ways that matter. And they’re looking for tools that can back that up with math, not just logs.
The message from the report is clear: test coverage isn’t enough if the wrong things are being tested—or if entire classes of bugs go undetected.
Memory Safety Has Graduated from Buzzword to Baseline
Nearly every respondent agreed: memory safety will be a non-negotiable requirement for mission-critical software within five years. In some industries, it already is.
The stakes are high. A buffer overflow or use-after-free bug isn’t just a QA headache—it’s a risk to compliance, safety, and business continuity. These aren’t hypothetical risks either. The report captures the shared experience of teams who’ve seen audits stall, certifications fail, or production systems falter because of hidden memory errors.
And yet, the tools that dominate most testing workflows still operate on heuristics. They’re noisy. They miss things. Or worse, they bury teams in false positives that erode trust.
Tooling Fatigue Is Real
This came through loud and clear: traditional testing tools are nearing their limits.
Dynamic testing can’t see every path. Static analyzers produce more alerts than answers. And when you’re stitching five tools together to find out if your code is actually safe, confidence gets diluted fast.
One respondent said it best: “We use five tools to approximate what we really want to know—which is: can this system fail in production?”
Here’s the thing: they’re not looking for more tools. They’re looking for assurance. That’s different.
Formal Methods Are Stepping Off the Sidelines
Until recently, “formal methods” had a reputation. Academic. Inaccessible. Useful in theory but not in practice.
That perception is changing.
More teams are exploring exhaustive static analysis, symbolic execution, and mathematically sound approaches to software verification. And it’s not because the standards demand it (though increasingly, they do). It’s because the tools they’ve relied on for decades just don’t scale with the complexity of modern systems.
Especially when those systems are a blend of legacy C, newer C++, and an increasing amount of Rust—complete with unsafe blocks and tricky FFI boundaries.
Hybrid Codebases Are a Growing Pain
Rust is on the rise. That’s good news for memory safety, but it introduces new problems when mixed with C or C++.
In theory, Rust protects you from whole classes of bugs. In practice, most codebases aren’t purely Rust. And the moment you cross into legacy territory—or even just call a C library—you’re back in the danger zone.
Few respondents directly answered our question about the risks of mixed-language testing. But that silence says something too. The issue is real. Teams know it. They’re just not sure how to solve it yet.
Compliance Is Driving the Agenda—But Not the Curiosity
Unsurprisingly, industries like automotive and aerospace are leading the way in adopting more rigorous testing methods. That’s partly due to ISO 26262, DO-178C, and other standards that explicitly or implicitly recommend formal verification.
But this isn’t just about checking boxes. The report shows that engineers themselves are behind the push for better tooling—not just compliance teams.
And when they say “better,” they don’t mean faster or cheaper. They mean deeper. Tools that can reason about entire systems, not just scan individual files. Tools that reduce testing time and raise confidence. Tools that don’t just say “something might be wrong” but can prove when it isn’t.
The Shift Is Cultural—and Strategic
The survey reveals a telling correlation: the teams most invested in testing are also the most frustrated with the status quo. They’ve hit the ceiling of what traditional approaches can offer—and they’re reaching for something more.
Formal methods aren’t being adopted everywhere, all at once. But they’re gaining traction in the places that matter most: safety modules, memory-sensitive code, and systems where uptime is everything.
This isn’t a rejection of old practices. It’s a rebalancing. Manual testing, integration suites, unit tests—they all still have a place. But increasingly, they’re seen as the base layer, not the whole stack.
Want the Full Picture?
The 2025 State of Software Assurance Report goes deeper on all of this—with charts, direct quotes, and actionable recommendations.
Whether you're managing a team, writing compliance policy, or just trying to make your systems a little more predictable, this report will help you understand where software testing is headed—and how to stay ahead of it.
Download the full report.