Dr Shreekant Thakkar, Chief Researcher at the Secure Systems Research Centre, Technology Innovation Institute (TII), notes that autonomy will only advance through trust grounded in verification, resilience, prediction, and accountability, ensuring systems perform safely in real‑world conditions.
Every generation redefines what it means to trust technology. For us, that test is autonomy – machines that can act in places too dangerous, complex, or demanding for humans.
Today, these systems are stepping into critical spaces: hospital wards, wildfire zones, transportation corridors, remote borders, and sensitive infrastructure. The stakes are high, and the need is growing, with autonomy becoming not just helpful but, in many cases, essential.
The question is therefore no longer if autonomy can work, but whether it can be trusted to work safely, securely, and predictably, especially when conditions are degraded, communications disrupted, or risks are rising. Trust cannot rest on controlled demonstrations. It must be earned in the real world, under pressure.
Public confidence will not be won on capability alone; credibility will determine success. A delayed alert in a care home, a misrouted responder during a fire, a failure to detect tampering at a perimeter, or a communication lapse during an infrastructure emergency can do more damage than any isolated technical glitch. These are not edge cases; they are defining tests of trust. And trust, once broken, is difficult to restore.
If autonomous systems are to be successful, and we are to see them integrate into our lives in meaningful ways, trust must be engineered from end to end.
That starts with systems that never assume benign conditions. Every device, message, and input must prove it is authentic and expected. If a door sensor goes offline, if a command is injected without proper credentials, or if a wearable streams implausible vitals, the system must catch it instantly, not after the fact.
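These checks can be sketched in code. The following is a minimal illustration, not a production design: the shared key, device names, and vital-sign bounds are all hypothetical, and it pairs message authentication (via HMAC) with a simple plausibility check on sensor readings.

```python
import hmac
import hashlib

SHARED_KEY = b"demo-key"  # hypothetical pre-shared key, for illustration only

def sign(payload: bytes) -> str:
    """Compute an HMAC-SHA256 tag so the receiver can verify authenticity."""
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def verify_message(payload: bytes, tag: str) -> bool:
    """Reject any message whose tag does not match: never assume benign input."""
    return hmac.compare_digest(sign(payload), tag)

def plausible_vitals(heart_rate: float) -> bool:
    """Flag implausible wearable readings (illustrative bounds)."""
    return 25 <= heart_rate <= 220

msg = b'{"device": "door-sensor-4", "state": "open"}'
tag = sign(msg)
print(verify_message(msg, tag))        # authentic message accepted: True
print(verify_message(msg, "0" * 64))   # forged tag rejected: False
print(plausible_vitals(400))           # implausible vitals flagged: False
```

The point of the sketch is that authenticity and plausibility are checked at the moment of arrival, before the input influences any decision.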
Equally important is how these systems behave. Autonomy must continuously validate not just where things are, but whether what they are doing is safe and policy-aligned. That means halting equipment, rerouting vehicles, or overriding decisions within milliseconds, not minutes, when something looks wrong. A hospital robot deviating from its assigned schedule or a drone veering toward restricted airspace should trigger an automatic, not optional, response.
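The shape of such a policy check can be sketched as follows. This is an illustrative toy, assuming a restricted zone modeled as a bounding box and made-up response names; a real system would evaluate far richer policies, but the structure is the same: every action maps to a mandatory response, with no optional path.

```python
from dataclasses import dataclass

@dataclass
class Zone:
    """Hypothetical restricted area as an axis-aligned bounding box."""
    x_min: float; x_max: float
    y_min: float; y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

RESTRICTED = Zone(10.0, 20.0, 10.0, 20.0)

def validate_action(x: float, y: float, on_schedule: bool) -> str:
    """Return the mandatory response: the override is automatic, not optional."""
    if RESTRICTED.contains(x, y):
        return "HALT_AND_REROUTE"   # e.g. a drone heading into restricted airspace
    if not on_schedule:
        return "PAUSE_AND_ALERT"    # e.g. a robot deviating from its assigned task
    return "CONTINUE"

print(validate_action(15.0, 12.0, True))   # → HALT_AND_REROUTE
print(validate_action(2.0, 3.0, False))    # → PAUSE_AND_ALERT
print(validate_action(2.0, 3.0, True))     # → CONTINUE
```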
Next comes foresight. Autonomy needs to be able to look ahead and assess potential outcomes. A hallway that will become smoke-filled in five minutes. A shift in posture and vitals that predicts a fall. A heating pattern that suggests equipment drift. A weather front that will knock out high-bandwidth comms. If the system cannot look ahead, it leaves humans scrambling to catch up. Prediction cannot be a bonus; it must be a baseline requirement.
No less critical is resilience. In care homes, border sites, or during extreme weather, communication links fail. Power fluctuates. Components go offline. A trustworthy system must maintain continuity even when the environment degrades. That means backup links, fallback paths, and graceful degradation, not silence. The mission must continue. Alerts must go through. And critical safety functions must remain operational even when infrastructure falters.
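One way to picture graceful degradation is an ordered fallback chain for alert delivery. The channel names and send functions below are hypothetical stand-ins (here simulated with hard-coded results): the sketch only shows the pattern of walking from the richest link to the most degraded one, and queuing rather than dropping when everything fails.

```python
# Hypothetical channels ordered from richest to most degraded; each returns
# True on successful delivery. Failures are simulated for illustration.
def send_via_broadband(alert: str) -> bool: return False  # primary link down
def send_via_mesh(alert: str) -> bool:      return False  # mesh hop down
def send_via_sms(alert: str) -> bool:       return True   # low-bandwidth path up

CHANNELS = [("broadband", send_via_broadband),
            ("mesh", send_via_mesh),
            ("sms", send_via_sms)]

def send_alert(alert: str) -> str:
    """Walk the fallback chain; degrade gracefully instead of going silent."""
    for name, send in CHANNELS:
        if send(alert):
            return name
    return "queued-for-retry"  # never drop a safety-critical alert

print(send_alert("crew entering danger zone"))  # → sms
```

The design choice that matters is the final branch: when every link fails, the alert is queued rather than lost, so the safety function persists through the outage.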
Just as vital is accountability. Autonomous systems must keep humans informed and in charge. Every decision the system makes, whether adjusting a route, pausing a task, or prioritizing a response, should be explainable and visible to operators. Operators need clarity: what happened, why it happened, and what the system is doing next. Without that transparency, autonomy becomes a black box. And black boxes do not earn trust.
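The minimum mechanism behind that transparency is an audit trail in which every decision records what happened, why, and what comes next. A minimal sketch, with all field names and example values invented for illustration:

```python
import json
import time

audit_log: list[dict] = []

def record_decision(action: str, reason: str, next_step: str) -> dict:
    """Log what happened, why, and what the system does next: no black boxes."""
    entry = {
        "timestamp": time.time(),
        "action": action,
        "reason": reason,
        "next_step": next_step,
    }
    audit_log.append(entry)
    return entry

record_decision(
    action="reroute_drone",
    reason="comms relay needed for crew in sector 7",
    next_step="resume patrol once link restored",
)
print(json.dumps(audit_log[-1], indent=2))
```

In practice the log would be append-only and tamper-evident; the sketch shows only the explanatory content each entry must carry for an operator to reconstruct the system's reasoning.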
As autonomy progresses, these principles – verification, validation, prediction, resilience, and accountability – are not optional. They define whether autonomy is safe to scale in public life. They are the foundation for real-time decision support.
Consider a healthcare facility. A system that notices early signs of patient dehydration or infection, issues timely alerts, and coordinates responses can prevent hospitalizations, but only if it filters out false positives, protects sensitive data, and explains its actions clearly. Or take wildfire management: if autonomy detects a crew entering a danger zone, reroutes drones to re-establish comms, and logs every step for post-incident review, it becomes more than a tool; it becomes a partner. That is the standard we must set.
The next frontier in autonomy is not scale – it's credibility. Systems that operate alongside people in safety-critical spaces must earn trust through action, not intention. That means autonomy must be explainable, resilient, safe under stress, and governed by policy. If we design for trust from the start, autonomy can become the invisible ally behind safer communities, stronger public services, and faster, smarter response when every second counts. But if trust-building measures are overlooked or delayed, confidence will falter, and with it, adoption.
Autonomy will only succeed when it consistently earns the confidence of those who rely on it. That confidence grows when systems are transparent, dependable in real-world conditions, and aligned with human priorities. Building that trust isn't a one-time achievement; it's an ongoing commitment. And it's one we must uphold together, from design to deployment.