eMudhra has raised concerns about the emerging cybersecurity and governance challenges posed by autonomous AI systems operating in real-world environments.

The company emphasised the need for behavioural trust frameworks that ensure accountability, security, and verifiable decision-making as AI agents begin to act independently. Traditional security models, which focus on authenticating identity, may not suffice as AI-powered robots, autonomous devices, and intelligent systems increasingly perform tasks across sectors such as manufacturing, healthcare, logistics, and public infrastructure. eMudhra described a potential “behavioural trust gap”: a system whose identity is verified as legitimate may still act unpredictably or drift from its intended operational parameters, creating new attack surfaces and operational safety risks.
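To make the gap concrete, here is a minimal, hypothetical Python sketch (not eMudhra's implementation; the device ID, envelope values, and checks are invented for illustration). The device presents a valid identity, yet its commanded action falls outside the operational envelope it was approved for, which an authentication-only model would never notice.

```python
# Hypothetical sketch of the "behavioural trust gap": a device with a valid
# credential can still act outside its intended operational envelope.
# All identifiers and limits below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Action:
    device_id: str
    speed_mps: float  # commanded speed, metres per second
    zone: str         # area the device is operating in

# Identity layer: assume the credential was already verified cryptographically.
TRUSTED_DEVICES = {"agv-042"}

# Behaviour layer: the operational envelope the device was approved for.
ENVELOPE = {"max_speed_mps": 1.5, "allowed_zones": {"warehouse-a", "dock-3"}}

def is_authenticated(action: Action) -> bool:
    # Authentication-only check: is this a known, legitimate device?
    return action.device_id in TRUSTED_DEVICES

def within_envelope(action: Action) -> bool:
    # Behavioural check: is the action itself within approved parameters?
    return (action.speed_mps <= ENVELOPE["max_speed_mps"]
            and action.zone in ENVELOPE["allowed_zones"])

action = Action(device_id="agv-042", speed_mps=3.0, zone="warehouse-a")

print(is_authenticated(action))  # True  -- identity check passes
print(within_envelope(action))   # False -- behaviour deviates: the trust gap
```

The two checks answer different questions: the first asks who is acting, the second asks whether the action itself is acceptable.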

Kaushik Srinivasan, EVP of eMudhra, stated that while digital trust has historically focused on verifying identity, the next challenge lies in verifying the behaviour of autonomous machines making decisions in the physical world. The company noted that emerging trust models might need to combine cryptographic identity, behavioural monitoring, policy enforcement, and continuous verification of autonomous actions to ensure operational integrity and safety.
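As a rough illustration of how those four elements could fit together, the following sketch (standard-library Python only; the shared key, policy values, and function names are assumptions, not a description of eMudhra's products) verifies a cryptographic proof of origin on each action, enforces a simple policy, and records every approved action for continuous behavioural review.

```python
# Minimal sketch of the combined model: each action carries a cryptographic
# proof of origin, is checked against policy, and is logged for continuous
# review. Standard library only; all names and values are illustrative.
import hashlib
import hmac
import json
import time

SHARED_KEY = b"demo-key"  # stand-in for a per-device credential
POLICY = {"allowed_actions": {"move", "pick"}, "max_payload_kg": 20}
AUDIT_LOG = []            # stand-in for a tamper-evident log

def sign(payload: dict) -> str:
    # Deterministic serialisation so signer and verifier agree on the bytes.
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()

def verify_and_enforce(payload: dict, signature: str) -> bool:
    # 1. Cryptographic identity: did a holder of the credential send this?
    if not hmac.compare_digest(sign(payload), signature):
        return False
    # 2. Policy enforcement: is this action allowed at all?
    if payload["action"] not in POLICY["allowed_actions"]:
        return False
    if payload.get("payload_kg", 0) > POLICY["max_payload_kg"]:
        return False
    # 3. Continuous verification: record every approved action for review.
    AUDIT_LOG.append({"ts": time.time(), "payload": payload})
    return True

request = {"action": "move", "payload_kg": 12}
print(verify_and_enforce(request, sign(request)))                        # True
print(verify_and_enforce({"action": "weld"}, sign({"action": "weld"})))  # False
```

A production system would presumably replace the shared key with per-device certificates and the in-memory list with a tamper-evident log, but the control flow, identity first, policy second, continuous recording throughout, reflects the shape of the model described.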

The issue extends beyond enterprise environments to digital public infrastructure, smart cities, industrial automation, and AI-driven services. As adoption accelerates, establishing governance frameworks for autonomous behaviour may become a priority for regulators and industry leaders. Addressing trust in physical AI will be critical to ensuring resilience, safety, and public confidence in the emerging autonomous economy.

Disclaimer: This article is based on a regulatory filing submitted to the National Stock Exchange of India (NSE).
