Agentic AI and cybersecurity: Challenges and key consideration

Agentic AI and cybersecurity: Challenges and key consideration
  • August 05, 2025

AI has become a part of the day-to-day lives of most people. AI tools which generate text or images on request are used by many people for both personal and professional purposes. Progressing a step further than generative AI (GenAI), agentic AI does not require a human prompt and can understand the underlying context and respond autonomously. Agentic AI is distinguished by its autonomous intelligent capacity to interpret and adapt to surrounding context in real time, thereby helping businesses to simultaneously simplify and manage multiple complex processes into smaller processes. This enables agentic AI to create larger, more complex and sophisticated use cases. However, when an AI system becomes this autonomous, the question on ethical considerations embedded within the structure of the agentic AI becomes critically important.

Three core principles

These principles enable agentic AI to operate autonomously and make informed decisions in dynamic environments. The contextual awareness ensures it can adapt to new information and changing circumstances. Being goal-oriented allows it to prioritise actions and strategies that align with its objectives, while its self-contained nature ensures it can function independently without external intervention.

Examining present use cases and leader perspectives on agentic AI’s implementation

Agentic AI is rapidly gaining popularity among industry leaders due to its revolutionary potential. With its ability to work with minimal human intervention, these systems could be a game-changer across industries for improving efficiency, shortening deliverable timelines and reducing operational and systemic errors.

Let’s illustrate how this technology can improve efficiencies within the banking sector to shorten deliverable timelines and reduce operational and systemic errors. If banks adopt agentic AI to improve their cybersecurity structure, it could monitor transactions and portfolio traffic, scrutinise user behaviour to identify anomalies in the network pattern for cyber threats or vectors and automatically deploy counter-measures in response to the detected cyber incidents. Agentic AI can memorise this entire incident pathway and learn from it to apply remedial action contextually in the future, the next time there is a detection of varying cyber anomalies, isolating affected systems or flagging threat-prone pathways.

However, since it is a relatively new technology, strong regulatory and ethical frameworks are needed to govern the application. The level of independence these systems function would require an immediate and comprehensive reassessment of accountability, with discussions focusing on ways to assure transparency, fairness and uniformity in the decision-making processes. In order to create clear guidelines to protect against its own intuitiveness and consequent significant misuse and bias, leaders are increasingly calling for a collaborative approach among technologists, ethicists and legislators. 

Overcoming challenges in agentic AI’s integration

Beyond the conventional risks connected with Gen AI-based platforms, the deployment and integration of agentic AI presents new and unprecedented cybersecurity challenges. Understanding these risks and challenges is not just the responsibility of IT or security teams but a leadership concern which ultimately impacts organisational trust, compliance, reputation and business continuity. 

  • Autonomous access to systems: In order to be aware of the context and to respond in real-time, agentic AI may be granted access to sensitive business emails, files, databases or tools which could increase the risk of misuse of information and access.
  • Unintentional data flow: It may unknowingly pull, combine or share data from multiple systems in ways that violate data security and privacy laws.

  • Unmonitored decision-making: Since agentic AI executes tasks independently, it can sometimes exceed beyond the intended goals. Without proper sandboxing, this can lead to uncontrolled behaviour and unauthorised execution of tasks.
  • Auditability gaps: In its nascent stage, without adequate volume of observability and analysis into the functionality of the agentic AI systems, it may be difficult to reconstruct why an agent took a particular action – posing a serious problem for future investigations or compliance assessments.

  • Blind spots: Employees may use personal AI tools that are not integrated with organisational identity governance policies, potentially creating blind spots and shadow AI usage in organisations.
  • Data poisoning: If the AI models are trained with manipulated inputs either through improper training or malicious vectors, it could create flawed decisions and ultimately lead to executing incorrect or harmful tasks.
  • Unauthorised execution: Without proper, lawfully standardised controls to restrict the AI execution, the models can behave uncontrollably.

Additionally, since agentic AI systems possess access to multiple tools and data sources in the organisation, this can vastly expand the challenges from an identity governance perspective. These can include:

  • Over-provisioned privileges: Agentic systems often require access to multiple internal tools and services. If not carefully managed, this can violate ‘least privilege’ principles.
  • Credential sprawl: Embedding access keys or login credentials within agents introduces risks of credential leakage or misuse within the organisation.

  • Over-reliance: Organisations need to understand the issues with respect to over-reliance on AI system outputs. Without proper monitoring and due diligence of outputs periodically, the volume of inaccurate outputs could go undetected, leading to incorrect decisions and significant reputational risks.
  • Model theft: Competition can steal proprietary models, leading to loss of competitive advantage for the firm.

Furthermore, while hallucinations and inaccurate reporting are already inherent to Gen AI systems, these are magnified multi-fold in agentic AI systems:

  • Automated error propagation: If an agent misinterprets a task or context, it can conduct wrong tasks automatically without human oversight, potentially disrupting the operations.
  • Systemic complexity: As agents interact with other agents, tools and APIs, there is a risk of cascading errors due to incorrect actions undertaken by an agent at one of the points of interaction.

  • Damage stakeholder trust: Unverified content generated by AI which unconsciously gets propagated on external platforms can lead to loss of stakeholder trust
  • Impact on the brand: Misuse of AI within the organisation by external malicious actors or internal employees can lead to stringent regulatory scrutiny and detrimental brand impact
  • Supply chain risks: Use of third-party plug-ins or APIs  can introduce third party risks
  • Integration risk: Poorly vetted integrations can introduce vulnerabilities in the overall system design, leading to data leakages or operational failures, or both.
  • Autonomy driven missteps: Agents might act on flawed instructions or incomplete data taking actions that mis-align with brand, ethics and leadership intent.

Regulations and standards

Technology is evolving faster than the regulations which govern them. Governance standards and regulations take more time to be drafted and implemented since they aim to close the gaps within the latest technological systems and their adoption. Since agentic AI is gaining popularity among businesses to fulfil a variety of their business needs, he adoption of agentic AI requires the establishment of a robust policy and governance framework to ensure that the technology does not lead to any data breaches or violation of security protocols.

Table 1: Existing regulations which cover agentic AI under their ambit

Standards/regulations How it applies to agentic AI
General Data Protection Regulation – European Union (GDPR – EU) Personal data used to train or processed by agentic AI must follow strict privacy, transparency, and access rights.1
Digital Operational Resilience Act (DORA) – EU DORA focuses on the broader aspects of digital operational resilience, covering aspects like ICT (Information and Communications Technology) risk management, incident reporting, and resilience testing for financial entities – operations that will be soon embedded with agentic AI.2
National Institute of Standards and Technology (NIST) – US The recent update NIST CSF 2.0 includes guidance on AI securing algorithms and training data through secured measures.3
International Organization for Standardization (ISO) 27001 & ISO 42001 International standards for managing information security have expanded their scope and coverage to include newest AI models.4
Open Web Application Security Project – (OWASP) Top 10 for large language model (LLM) Practical security risks in agentic AI using large language models.5

Table 2: Emerging regulations surrounding agentic AI

Emerging Standards/regulations How the regulations apply to agentic AI adoption
EU AI Act AI systems must meet robust cybersecurity, transparency and monitoring obligations6.
Whitehouse AI Executive Order Requires developers of AI models to report safety testing results, implements red-teaming and addresses cybersecurity risks.7
Indian Digital Personal Data Protection Act (DPDP) Enforcing DPDP mandates into the operational core of agentic AI systems, firms can uphold data sovereignty while accelerating safe digital innovation.8
NIST AI risk management framework Secure and trustworthy AI system design.9
Organisation for Economic Co-operation and Development (OECD) AI principles Focus on transparency, accountability and robustness of AI systems.10
ISO/ International Electrotechnical Commission (IEC) TR 24028 Support or improves trustworthiness in AI systems, including mitigating AI system vulnerabilities. 11

Best practices for the safe and secure deployment of agentic AI

Incorporating agentic AI into existing AI frames of business operations or building a completely new agentic architecture should be carefully measured and considered by leadership and experts. The risk factors increase within critical business operations due to the use of multiple agentic AI models and lack of human intervention. Some of the best practices which can help organisations plan and mitigate the cybersecurity challenges which arise due to agentic AI systems’ adoption are: 

PwC wins 10 SAP ACE Awards 2024

Develop an AI usage policy: Clearly define the specific types of AI tools that will be used, the personnels and designations who can use them and document the purpose of the AI tools to avoid shadow AI use and potential data leaks.

PwC wins 10 SAP ACE Awards 2024

Limit AI system access (least privilege): Ensure that agentic AI-embedded systems have only the minimum required access to function (delineated only to a specialised and authorised team) to prevent overreach and misuse.

PwC wins 10 SAP ACE Awards 2024

Implement data security controls: Tag and restrict sensitive data from being disclosed while conducting AI system training or engineering prompts. This will further help in preventing accidental disclosure of sensitive data.

PwC wins 10 SAP ACE Awards 2024

Mandate secure model integrations: Enforce sandboxing, API controls and secure plugin designs when integrating AI with internal tools and workflows.

PwC wins 10 SAP ACE Awards 2024

Implement observability: Log and monitor detailed AI interactions – what AI systems access, generate or execute. Make sure agent actions are traceable to point zero for audits and investigations.

PwC wins 10 SAP ACE Awards 2024

Conduct red teaming and prompt testing: Regularly test AI systems for vulnerabilities within prompt injections, consequent data leaks, and misuse.

PwC wins 10 SAP ACE Awards 2024

Adopt governance frameworks (e.g. ISO 42001, NIST AI RMF): Align AI usage with recognised and standardised security and trust frameworks to better manage compliance and risk.

PwC wins 10 SAP ACE Awards 2024

Human review: Establish human-led validation for high impact AI decisions such as financial approvals, security changes or customer communications.

PwC wins 10 SAP ACE Awards 2024

Monitor third-party risks: Ensure vendors meet the necessary cybersecurity and compliance requirements and thoroughly review contracts for data handling and audit rights.

PwC wins 10 SAP ACE Awards 2024

Train employees on risks and safe use of AI: Educate employees, developers, leadership and third-party staff on data handling, prompt usage, AI limitations and more. Make AI systems application awareness part of on-boarding for new joiners.

Way forward

Many businesses are still experimenting with GenAI integrations and how agentic AI can be leveraged to meet their business needs. It remains to be seen how businesses, policymakers and governing bodies can collectively work towards ensuring a secure way of implementing agentic AI and developing measures to govern the technology and improve processes, data security and agility. It is time for business leaders to keep abreast of the latest developments and take informed decisions to remain relevant and competitive within their industry.

Stay tuned for in-depth analyses and expert recommendations on navigating the complexities of Agentic AI in our upcoming article. 

Sricharan Saripalli

Partner, Cyber Security, PwC India

Email

Preethi Yeshwanth

Executive Director, Cybersecurity, PwC India

Email

Priyanjali Moulik

Manager, PwC India

Email

Rubina Malhotra

Manager, PwC India

Email

Follow PwC India

Required fields are marked with an asterisk(*)

Your personal information will be handled in accordance with our Privacy Statement. You can update your communication preferences at any time by clicking the unsubscribe link in a PwC email or by submitting a request as outlined in our Privacy Statement.

Contact us

Sundareshwar  Krishnamurthy

Sundareshwar Krishnamurthy

Partner and Leader – Cybersecurity, PwC India

Sricharan Saripalli

Sricharan Saripalli

Partner, Cyber Security, PwC India

Preethi Yeshwanth

Preethi Yeshwanth

Executive Director, Cybersecurity, PwC India

Hide