Abhishek Singh, MD and CEO of the Digital India Corporation, on Monday strongly emphasised the need for India to have its own framework on Artificial Intelligence (AI), one that provides solutions suited to the country's diverse needs. While numerous countries have made legislative interventions, there is no law in India to regulate AI. "We are thinking in this direction. As and when the Digital India Act is passed, there would be provisions for enforcing guidelines," Singh said, virtually addressing the first CeRAI (Centre for Responsible AI) workshop on responsible AI for India, held at IIT Madras.
Such guidelines on AI would not limit innovation but allow it to prosper, infusing it with creativity while offering ethical and responsible solutions to users. The Bureau of Indian Standards (BIS) has its own committee on AI, which has proposed a draft of Indian standards equivalent to ISO standards. "TRAI came out with a consultative paper expressing concerns on the risks of AI. We are looking at this paper for finalising our framework for responsible and ethical AI," Singh said. Although several firms and even start-ups claim to support responsible AI, they do not adhere to those standards. "At times they are in conflict with their commercial interests and are not as ethical as they claim to be. We need to protect the safety and privacy concerns of the people," he said.
AI should not be confined to driving cars or entertainment but should extend to diverse fields such as healthcare and agriculture, and it should be unbiased and non-discriminatory in its practices. IIT-M has established CeRAI, an interdisciplinary research centre, to ensure the ethical and responsible development of AI-based solutions in the real world. "It is geared towards becoming a premier research centre at the national and international level for both fundamental and applied research in responsible AI, with immediate impact in deploying AI systems in the Indian ecosystem," a source said. Ultimately, AI models need to provide performance guarantees appropriate to the applications in which they are deployed, covering data integrity, privacy, robustness of decision-making, and related concerns. Research into developing assurance and risk models for AI systems across different sectors is therefore essential.