Introduction
As artificial intelligence (AI) continues to redefine global power dynamics, India is positioning itself as a key player in AI governance and innovation. The Digital Personal Data Protection (DPDP) Act of 2023 represents India's effort to regulate the collection, storage, and processing of personal data. However, the rapid advancement of AI technologies raises concerns about data security, privacy rights, and accountability. This brief critically examines the intersection of the AI Diffusion Framework and India’s data protection policy, identifying gaps and suggesting policy interventions.
AI Diffusion and Its Intersection with the DPDP Act 2023
The integration of AI into India's digital ecosystem has been accelerated by government-backed initiatives like India Stack, DigiLocker, and ONDC, which facilitate seamless data exchange and digital identity management. However, the widespread use of AI raises significant concerns related to data privacy, security, and regulatory oversight. The DPDP Act 2023 aims to provide a legal framework for data protection but does not adequately address AI-specific risks such as bias in automated decision-making, lack of transparency in AI-driven processes, and challenges in enforcing data minimisation.
AI systems require vast datasets for training, which may conflict with the DPDP Act’s provisions on data minimisation and purpose limitation. Additionally, cross-border data flow regulations under the Act may hinder AI research and development collaborations, affecting India’s global competitiveness. The lack of a dedicated AI governance framework within the DPDP Act also raises concerns about regulatory gaps in monitoring AI applications that process personal data, potentially exposing individuals to privacy risks and discriminatory outcomes. A more tailored regulatory approach is necessary to align AI governance with India’s broader digital transformation goals.
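To make this tension concrete, the sketch below illustrates how purpose limitation and data minimisation operate in practice: only the attributes declared for a stated processing purpose survive before the data reaches a model. The purpose register and field names are hypothetical assumptions for illustration and are not drawn from the DPDP Act; this is a minimal sketch of the principle, not a compliance recipe.

```python
# Minimal sketch of purpose limitation / data minimisation before model training.
# The purpose register and field names are illustrative, not taken from the DPDP Act.

PURPOSE_REGISTER = {
    # purpose declared to the data principal -> fields strictly needed for it
    "credit_scoring": {"income", "repayment_history", "loan_amount"},
    "service_personalisation": {"language_preference", "region"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Drop every attribute not declared for the stated purpose."""
    allowed = PURPOSE_REGISTER.get(purpose)
    if allowed is None:
        raise ValueError(f"No declared purpose: {purpose}")
    return {k: v for k, v in record.items() if k in allowed}

raw_record = {
    "income": 540000,
    "repayment_history": "regular",
    "loan_amount": 200000,
    "religion": "redacted",          # sensitive and irrelevant to the purpose
    "precise_location": "redacted",  # excessive for the purpose
}

print(minimise(raw_record, "credit_scoring"))
# -> only income, repayment_history and loan_amount survive
```

In practice, such a check would sit at the point where personal data is extracted for model training or fine-tuning, which is precisely where large training pipelines tend to strain the Act's minimisation principle.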

Challenges in the AI-DPDP Nexus
Global AI Governance Dynamics
Recent geopolitical developments, notably the U.S. Interim Final Rule on the Framework for Artificial Intelligence Diffusion, have introduced export controls on the high-end software and hardware essential for AI development. These restrictions could constrain India's access to critical AI technologies, underscoring the need for a resilient domestic AI infrastructure.
Opaque AI Algorithms
AI systems often function as 'black boxes,' making it difficult to ascertain how personal data is processed and used in decision-making.
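Regulators and auditors need not open the box entirely to probe such systems. The Python sketch below uses scikit-learn's permutation importance on a synthetic classifier to estimate how strongly each personal-data attribute drives its decisions; the data, model, and feature names are illustrative assumptions, not a reference implementation.

```python
# Sketch: probing an opaque model with permutation importance (scikit-learn).
# Synthetic data and hypothetical feature names; illustrative only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
feature_names = ["age", "income", "pin_code", "device_type"]  # hypothetical attributes

model = LogisticRegression(max_iter=1000).fit(X, y)

# Shuffle each feature and measure the drop in accuracy: a large drop means the
# model's decisions depend heavily on that personal-data attribute.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:12s} importance ~ {score:.3f}")
```

Model-agnostic attribution of this kind does not fully explain a black box, but it gives data principals and oversight bodies a first answer to the question of which personal attributes a decision actually turned on.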
Potential for Discriminatory Outcomes
AI models trained on biased datasets can lead to unfair treatment of individuals, a challenge that the DPDP Act does not explicitly mitigate.
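A simple, model-agnostic check for such outcomes is to compare positive-decision rates across demographic groups. The sketch below computes this gap on synthetic decisions with a deliberately embedded skew; the groups, rates, and the 0.10 tolerance are purely illustrative assumptions, not thresholds found in the DPDP Act.

```python
# Sketch: a simple group-fairness check (difference in positive-outcome rates).
# Group labels, decisions, and the tolerance are synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=5000)            # two demographic groups
# Simulated automated decisions with an embedded skew against group "B"
approved = np.where(group == "A",
                    rng.random(5000) < 0.60,
                    rng.random(5000) < 0.45)

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()
parity_gap = abs(rate_a - rate_b)

print(f"approval rate A = {rate_a:.2f}, B = {rate_b:.2f}, gap = {parity_gap:.2f}")
# A regulator-defined tolerance (here 0.10, purely illustrative) could trigger review.
if parity_gap > 0.10:
    print("Flag: disparity exceeds tolerance; review training data and features.")
```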
Absence of an AI Ethics Framework
Jurisdictions such as the EU have integrated AI-specific ethical requirements into their data protection regimes; India's DPDP Act does not yet include comparable ethical AI guidelines.
Security Concerns in AI Diffusion
The U.S. Bureau of Industry and Security (BIS) has imposed export restrictions on AI-related technologies to prevent their misuse in military and surveillance applications. These restrictions affect India's ability to procure high-end computing chips and advanced AI model weights, raising national security concerns.
Evolving U.S.-India Relations
On February 13, 2025, Prime Minister Narendra Modi and U.S. President Donald Trump outlined plans to deepen defence collaboration, including India's acquisition of advanced defence systems such as the F-35 stealth fighter jet. This partnership aims to bolster India's deterrence capabilities and U.S.-India military cooperation. Both nations also announced "Mission 500," a goal to more than double bilateral trade to US$500 billion by 2030.
Policy Recommendations
To harmonise AI diffusion with India’s data protection framework, it is crucial to integrate AI-specific provisions into the DPDP Act, ensuring that privacy risks, algorithmic transparency, and accountability measures are explicitly addressed. Strengthening AI governance through a dedicated regulatory body or empowering the Data Protection Board to oversee AI applications can enhance compliance with privacy norms.
Given potential international restrictions, India should invest in domestic AI capabilities to reduce reliance on foreign technology. In parallel, explainability standards should be enforced to mandate transparency in automated decision-making processes.
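One way such an explainability mandate could be operationalised is to require that every significant automated decision be logged with its inputs, model version, and the dominant factors behind it, so that the Data Protection Board or an internal auditor can review it later. The record structure below is a hypothetical Python sketch, not a format prescribed by the DPDP Act or any regulator.

```python
# Sketch: a hypothetical audit record for an automated decision.
# Field names and structure are illustrative, not a prescribed standard.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    subject_id: str                 # pseudonymised identifier of the data principal
    model_version: str
    decision: str
    top_factors: list[str]          # human-readable reasons, ordered by influence
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = DecisionRecord(
    subject_id="anon-7f3a",
    model_version="credit-risk-v2.1",
    decision="declined",
    top_factors=["short repayment history", "high existing debt"],
)

# Persist for later audit by an oversight body or an internal reviewer.
print(json.dumps(asdict(record), indent=2))
```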

Sector-specific AI guidelines should be developed to regulate AI applications in critical areas such as healthcare, finance, and law enforcement, ensuring that data privacy and ethical AI use are prioritised. Additionally, India must align its cross-border data policies with global best practices, allowing for seamless AI innovation while safeguarding user privacy. Lastly, national security safeguards should be implemented to prevent unauthorised AI model development in sensitive domains, ensuring responsible and ethical AI deployment.
Conclusion
The DPDP Act 2023 is a crucial step in India's digital governance landscape, but its effectiveness in regulating AI-driven data processing remains uncertain. AI diffusion introduces new privacy risks that require regulatory foresight. By adopting AI-specific amendments, strengthening oversight mechanisms, and investing in domestic AI capabilities, India can ensure that its data protection framework remains robust in an AI-driven future.
