AI has become part of the day-to-day lives of most people. AI tools which generate text or images on request are widely used for both personal and professional purposes. Going a step further than generative AI (GenAI), agentic AI does not require a human prompt: it can understand the underlying context and respond autonomously. Agentic AI is distinguished by its capacity to interpret and adapt to its surrounding context in real time, helping businesses break complex processes into smaller, manageable ones and run several of them simultaneously. This enables agentic AI to support larger, more complex and sophisticated use cases. However, when an AI system becomes this autonomous, questions about the ethical considerations embedded within its structure become critically important.
Three core principles enable agentic AI to operate autonomously and make informed decisions in dynamic environments: contextual awareness, goal orientation and self-containment. Contextual awareness ensures it can adapt to new information and changing circumstances. Being goal-oriented allows it to prioritise actions and strategies that align with its objectives, while its self-contained nature ensures it can function independently without external intervention.
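These three properties can be made concrete with a deliberately small sketch. The class below is a hypothetical illustration, not any vendor's API: `perceive` models contextual awareness, `decide` models goal orientation and `act` models self-containment. All names are assumptions made for this example.

```python
# Hypothetical sketch of an agentic loop: observe context, weigh it
# against a goal, act without waiting for a human prompt.
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    memory: list = field(default_factory=list)

    def perceive(self, event: dict) -> None:
        # Contextual awareness: fold new information into memory.
        self.memory.append(event)

    def decide(self) -> str:
        # Goal orientation: choose the action that serves the objective.
        latest = self.memory[-1]
        return "investigate" if latest.get("anomaly") else "continue_monitoring"

    def act(self, action: str) -> str:
        # Self-containment: execute without external intervention.
        return f"executed: {action}"

agent = Agent(goal="keep systems healthy")
agent.perceive({"anomaly": True, "source": "network"})
print(agent.act(agent.decide()))  # prints: executed: investigate
```

Real agentic systems replace each of these stubs with far richer components (sensors, planners, tool invocations), but the loop structure is the same.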
Agentic AI is rapidly gaining popularity among industry leaders due to its revolutionary potential. With their ability to work with minimal human intervention, these systems could be a game-changer across industries, improving efficiency, shortening deliverable timelines and reducing operational and systemic errors.
Let’s illustrate how this technology can improve efficiencies within the banking sector to shorten deliverable timelines and reduce operational and systemic errors. If banks adopt agentic AI to improve their cybersecurity structure, it could monitor transactions and portfolio traffic, scrutinise user behaviour to identify anomalies in network patterns that indicate cyber threats or attack vectors, and automatically deploy countermeasures in response to detected incidents. Agentic AI can record this entire incident pathway and learn from it, applying remedial action contextually the next time similar cyber anomalies are detected, such as isolating affected systems or flagging threat-prone pathways.
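The transaction-monitoring step in this banking example can be sketched in miniature. The following is an illustrative toy, not a production control: it assumes a hypothetical feed of transaction amounts and a z-score threshold chosen purely for demonstration, whereas a real system would combine many behavioural signals.

```python
# Toy anomaly flagging: mark transactions whose amount deviates strongly
# from the historical mean. Field names and threshold are assumptions.
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Return indices of transactions that deviate from the mean by
    more than `threshold` sample standard deviations."""
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > threshold]

history = [120, 95, 130, 110, 105, 99, 5000]  # last entry is suspicious
print(flag_anomalies(history))  # prints: [6]
```

An agentic system would go one step further than this snippet: rather than merely returning indices, it would decide on and execute a response (quarantine, alert, block) and feed the outcome back into its memory.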
However, since it is a relatively new technology, strong regulatory and ethical frameworks are needed to govern its application. The level of independence with which these systems function requires an immediate and comprehensive reassessment of accountability, with discussions focusing on ways to assure transparency, fairness and uniformity in decision-making processes. To create clear guidelines that protect against significant misuse and bias arising from the technology's own initiative, leaders are increasingly calling for a collaborative approach among technologists, ethicists and legislators.
Beyond the conventional risks associated with GenAI-based platforms, the deployment and integration of agentic AI presents new and unprecedented cybersecurity challenges. Understanding these risks and challenges is not just the responsibility of IT or security teams but a leadership concern which ultimately impacts organisational trust, compliance, reputation and business continuity.
Technology is evolving faster than the regulations which govern it. Governance standards and regulations take time to draft and implement because they aim to close the gaps opened by the latest technological systems and their adoption. As agentic AI gains popularity among businesses seeking to fulfil a variety of business needs, its adoption requires the establishment of a robust policy and governance framework to ensure that the technology does not lead to data breaches or violations of security protocols.
Standards/regulations | How it applies to agentic AI |
---|---|
General Data Protection Regulation – European Union (GDPR – EU) | Personal data used to train or processed by agentic AI must follow strict privacy, transparency and access-rights requirements.1 |
Digital Operational Resilience Act (DORA) – EU | DORA focuses on the broader aspects of digital operational resilience, covering areas like ICT (Information and Communications Technology) risk management, incident reporting and resilience testing for financial entities – operations that will soon be embedded with agentic AI.2 |
National Institute of Standards and Technology (NIST) – US | The recent NIST CSF 2.0 update includes guidance on securing AI algorithms and training data.3 |
International Organization for Standardization (ISO) 27001 & ISO 42001 | International standards for managing information security have expanded their scope and coverage to include the newest AI models.4 |
Open Web Application Security Project (OWASP) Top 10 for large language models (LLMs) | Identifies practical security risks in agentic AI systems built on large language models.5 |
Emerging standards/regulations | How the regulations apply to agentic AI adoption |
---|---|
EU AI Act | AI systems must meet robust cybersecurity, transparency and monitoring obligations.6 |
White House AI Executive Order | Requires developers of AI models to report safety-testing results, implement red teaming and address cybersecurity risks.7 |
Indian Digital Personal Data Protection Act (DPDP) | By embedding DPDP mandates into the operational core of agentic AI systems, firms can uphold data sovereignty while accelerating safe digital innovation.8 |
NIST AI risk management framework | Secure and trustworthy AI system design.9 |
Organisation for Economic Co-operation and Development (OECD) AI principles | Focus on transparency, accountability and robustness of AI systems.10 |
ISO/International Electrotechnical Commission (IEC) TR 24028 | Supports and improves trustworthiness in AI systems, including mitigating AI system vulnerabilities.11 |
Incorporating agentic AI into the existing AI frameworks of business operations, or building a completely new agentic architecture, should be carefully measured and considered by leadership and experts. The risk factors within critical business operations increase due to the use of multiple agentic AI models and the lack of human intervention. Some of the best practices which can help organisations plan for and mitigate the cybersecurity challenges arising from the adoption of agentic AI systems are:
Develop an AI usage policy: Clearly define the specific types of AI tools that will be used and the personnel and roles authorised to use them, and document the purpose of each AI tool to avoid shadow AI use and potential data leaks.
Limit AI system access (least privilege): Ensure that agentic AI-embedded systems have only the minimum required access to function (delineated only to a specialised and authorised team) to prevent overreach and misuse.
Implement data security controls: Tag sensitive data and restrict it from being disclosed during AI model training or prompt engineering. This further helps prevent accidental disclosure of sensitive data.
Mandate secure model integrations: Enforce sandboxing, API controls and secure plugin designs when integrating AI with internal tools and workflows.
Implement observability: Log and monitor detailed AI interactions – what AI systems access, generate or execute. Make sure agent actions are traceable back to their origin for audits and investigations.
Conduct red teaming and prompt testing: Regularly test AI systems for vulnerabilities such as prompt injections, consequent data leaks and misuse.
Adopt governance frameworks (e.g. ISO 42001, NIST AI RMF): Align AI usage with recognised and standardised security and trust frameworks to better manage compliance and risk.
Human review: Establish human-led validation for high-impact AI decisions such as financial approvals, security changes or customer communications.
Monitor third-party risks: Ensure vendors meet the necessary cybersecurity and compliance requirements and thoroughly review contracts for data handling and audit rights.
Train employees on risks and safe use of AI: Educate employees, developers, leadership and third-party staff on data handling, prompt usage, AI limitations and more. Make AI application awareness part of onboarding for new joiners.
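As one concrete illustration of the observability practice listed above, the sketch below records each AI action in an append-only audit log so that agent behaviour remains traceable for audits and investigations. All names here (`audited`, `risk-agent-01`, the log fields) are hypothetical assumptions for demonstration, not a specific product's API.

```python
# Illustrative audit trail for AI agent actions: every call made through
# the decorator is logged with a timestamp, agent identity, action name,
# inputs and result.
import json
from datetime import datetime, timezone

AUDIT_LOG = []  # in practice: an append-only, tamper-evident store

def audited(agent_id):
    """Decorator that records each action an AI agent executes."""
    def wrap(fn):
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            AUDIT_LOG.append({
                "ts": datetime.now(timezone.utc).isoformat(),
                "agent": agent_id,
                "action": fn.__name__,
                "args": repr(args),
                "result": repr(result),
            })
            return result
        return inner
    return wrap

@audited("risk-agent-01")
def flag_transaction(tx_id):
    return f"flagged {tx_id}"

flag_transaction("TX-1001")
print(json.dumps(AUDIT_LOG[-1], indent=2))
```

The same pattern extends naturally to the least-privilege and human-review practices: the decorator is a single choke point where access checks or approval gates for high-impact actions can also be enforced.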
Many businesses are still experimenting with GenAI integrations and how agentic AI can be leveraged to meet their business needs. It remains to be seen how businesses, policymakers and governing bodies can collectively work towards ensuring a secure way of implementing agentic AI and developing measures to govern the technology and improve processes, data security and agility. It is time for business leaders to keep abreast of the latest developments and make informed decisions to remain relevant and competitive within their industries.
Stay tuned for in-depth analyses and expert recommendations on navigating the complexities of agentic AI in our upcoming article.