How to adopt AI responsibly? - PwC India

From automation to augmentation and beyond, companies in India and globally use Artificial Intelligence (AI) solutions to change how business gets done. When implemented carefully, AI can bring about positive change through its ability to add value. However, AI also has the potential to amplify and perpetuate underlying risks, societal discrimination and inequalities. These risks may lie in the data itself or may be inherent in the design of the AI model.

Responsible artificial intelligence – India highlights

The aim of the study was to understand perceptions of the following aspects of AI implementation:

Are artificial intelligence solutions in India and globally being widely deployed and used at scale?

  • In terms of artificial intelligence solutions implemented, India (62%) is not far behind the global figure (65%).
  • Indian respondents (53%) significantly outnumber their global counterparts (36%) in admitting that they have no formal approach to identifying AI-related risks.
  • 29% of respondents feel that they have no tools to assess security flaws in their artificial intelligence solutions in India.
  • Respondents from Indian organisations cited a lack of understanding of how AI arrives at a decision (58%) and insufficient data (45%) as the primary roadblocks inhibiting the adoption and application of AI systems.
  • Compared to global (40%), fewer Indian executives (32%) expressed concerns about budgets to implement AI.
  • Only 7% of Indian respondents said they make AI investments part of their business strategy.

Are artificial intelligence solutions in India and globally working reliably in a desired manner?

  • With regard to artificial intelligence solutions used in India and globally, a majority of decision makers admitted that they may not have robust tools or processes to ensure the reliability of their AI solutions.
  • Only 10% of Indian respondents were confident about the reliability of their AI applications.
  • Compared with their global counterparts, Indian organisations are far behind in their understanding of the reliability criteria for deploying AI solutions at scale.

Ethical, legal or accountability challenges for malfunctioning AI

  • Important ethical considerations and risks related to artificial intelligence in India are data privacy requirements, accountability, robustness and security, interpretability, lawfulness and compliance, and fairness.
  • While 65% of global organisations are not confident that they can detect a malfunctioning AI system, for India the figure rises to 81%.
  • 53% of respondents do not have a formal approach to evaluating risks related to artificial intelligence in India.
  • Globally, 38% of respondents have a good understanding of bias criteria and have tools for assessing ongoing biases, whereas only 29% of Indian respondents have this capability.
  • In India, only 33% of the respondents are confident that they have the right tools and methodologies to meet the compliance requirements.
  • 67% of the organisations we surveyed in India are unsure of the regulatory compliance requirements for AI and have minimal understanding of the tools needed to maintain data integrity.
  • More than 70% of global respondents have an understanding of data privacy requirements, processes and tools. In comparison, only 33% of Indian respondents understand data privacy requirements.

Data privacy is a major ethical concern both in India and abroad. To ensure that legitimate AI initiatives are not derailed at inception because of this risk perception, organisations need to identify AI-related risks at an early stage and communicate them to relevant stakeholders (both internal and external) along with a robust mitigation approach.

PwC’s Responsible AI Toolkit

PwC recognises the importance of responsibility in unleashing the potential of AI. The responsible AI (RAI) framework that PwC has developed provides a practical solution to ensure effective stewardship of outcomes. This technology-enabled toolkit consists of flexible and scalable capabilities curated to enable and support the development and assessment of high-quality, explainable, transparent and ethical AI applications.

The toolkit’s diagnostics are designed to help different stakeholders generate trust and inspire confidence. It provides a set of assets curated to accelerate the evaluation of data, models and their trade-offs, considering the relevance and risks associated with artificial intelligence.

Building a firm foundation for the future of artificial intelligence in India and globally

Responsible AI (RAI) helps mitigate many of the risks, foreseen or unforeseen, associated with AI. RAI helps formulate effective operating models that minimise AI-related risks and maximise rewards. It accelerates innovation and the potential to create value, both of which could be at stake if AI is implemented in the wrong way. We, at PwC, believe that integrating the RAI toolkit with AI-related initiatives will enable businesses to accelerate innovation and realise their vision.

Contact us

Sudipta Ghosh

Partner and Leader, Data and Analytics, PwC India