By: R V Raghu, ISACA India Ambassador & Director, Versatilist Consulting India Pvt Ltd
AI is on everyone’s mind, driven primarily by fear of missing out (FOMO). Top management in organizations is bombarded by messages telling them that if their organization is not using AI, it will be left behind, biting the dust as industry peers use AI to pull ahead.
Seventy-six percent of Indian respondents from ISACA’s 2025 AI Pulse Poll believe employees within their organization use AI, whether it is permitted or not. The key here is whether it is permitted or not. Without clear guidelines on what can and cannot be done with technology, shadow IT often rises. This was the case when cloud services rose on the horizon and employees and business units started adopting cloud solutions like SaaS at the click of a button from their browsers, right under the noses of cybersecurity professionals. The same is now happening with AI, leading to the rise of what is called shadow AI.
The proliferation of AI is making this easier, with AI being integrated seamlessly into office tools, apps and browsers providing easy AI access, and AI serving as a sidecar to any business solution you can think of. On one front, this may seem like a good thing. Traditionally, technology adoption curves have been steep, with deep-seated reluctance often requiring a hard push from management. With AI it is a different story: employees and business units are using AI, and management is complicit in what is going on, at least until things turn sour. Seventy percent of India-based respondents to ISACA’s 2025 AI Pulse Poll say that the use of AI has resulted in time savings for them and their organization, and more than half (59%) believe that AI will have a positive impact on their career in the next year.
This may sound heartening, but the same survey also highlighted that only 32 percent of organizations in India have a formal, comprehensive policy in place for AI. If this does not make you pause, nothing will, because without a policy there is a governance gap akin to setting off into a desert without so much as a compass, let alone a map. The risks abound. Sixty percent of India-based respondents to ISACA’s latest AI Pulse Poll are very or extremely worried that generative AI will be exploited by bad actors. This is the beginning of the proverbial avalanche. In the hands of bad actors, AI can be used to fashion weapons with far-reaching impact, all while not even appearing as such. For example, data shows AI-powered phishing and social engineering attacks are now more difficult to detect. Deepfakes are being created with such potency, it is chilling. Other risks, such as privacy violations, social engineering, loss of IP, and more, are on the horizon. AI is proving to be the ultimate double-edged sword.
But all is not lost. Enterprises should fall back on first principles thinking and approach AI adoption systematically.
Education is going to be key. This may seem counterintuitive, but without understanding the animal, it may not be possible to deal with it. Education and skilling will be required across the board, starting at the top. Boards and senior management need to understand what AI is, how opaque it can be, and how data hungry it is. With other tools there may seem to be some semblance of control; however, with AI, all you may see in many cases is a window to prompt the tool and then get the results. Unless boards and senior management understand what lies under the hood, they will not be able to manage and drive the tool. Comprehension also enables an understanding of the risks that may arise, which in turn can drive safe adoption.
Skilling is also required across the rank and file so that employees understand not just the tool but also the risks that can arise from it. Education, skilling, training, and awareness will also need to be role specific; for example, cybersecurity personnel must understand what they are dealing with. The same applies to business users, who need to understand the nuances of the data that is input to the tool and what the outputs might mean.
Next is governance. Armed with education, boards and senior management will be in a better position to clearly articulate a policy on what can and cannot be done with AI. Governance will also help address critical underlying aspects that need to be managed, such as AI-related risks and ethics. Policies are key because they are the guideposts that translate the organization’s stance on AI into actionable tools for the rest of the organization, including when deciding whether AI should be used for a specific purpose. Articulating policies will also help the organization view everything through the lens of risk, leading to risk-based decision making. Governance can also ensure that appropriate checks and balances are put in place across the AI value chain, and enable periodic audits of AI usage in the enterprise to ensure transparency.
Among the many actions that can be taken for the safe and secure integration of AI into the enterprise, establishing robust AI governance and accelerating upskilling and reskilling will go a long way toward mitigating risks and minimizing data breaches from the unbridled use of AI.