
One Critical Gap: What Global AI Developers Must Know About India's Data Privacy Law

  • Abir Roy & Sneha Sagar
  • Mar 2
  • 5 min read

The India AI Impact Summit 2026 in New Delhi marked a turning point. For the first time, a global AI summit shifted decisively to the Global South—an unmistakable signal that India does not intend to observe the AI revolution from the margins, but to shape its trajectory.


Yet beneath this momentum lies a critical legal fault line. India’s Digital Personal Data Protection Act, 2023 governs how personal data may be collected, processed, and used. The law, as it stands today, adopts a rigid, consent‑centric framework. It offers no broad, principle‑based exemption for legitimate interests. Unlike the EU’s GDPR, it provides no safety valve for large‑scale, dynamic, and iterative data use—the very foundation of modern AI development. This omission is not incidental. It reflects a policy choice with far‑reaching implications.


For AI developers—domestic and international alike—the absence of a legitimate interest framework introduces legal uncertainty, limits operational flexibility, and constrains the design of AI systems that depend on continuous and repetitive data flows. What is framed as a data protection safeguard thus becomes, in practice, a structural constraint on AI innovation.


For international AI companies processing Indian personal data, and for Indian AI firms competing globally, this is a compliance cost multiplier. Understanding India’s data protection regime, therefore, is no longer a matter of compliance alone. It is a strategic imperative. And the conversation must now move beyond summits and statements to the statute book itself.


The GDPR Benchmark: Flexible, Scalable, and AI-Compatible


Article 6(1)(f) of the General Data Protection Regulation (GDPR) permits personal data processing where it is necessary for the legitimate interests of the data controller or a third party, subject to a structured balancing test against the fundamental rights of the data subject. This principle‑based provision has become foundational to AI development across EU jurisdictions. It enables lawful model training, large‑scale analytics, and continuous system improvement without requiring explicit consent at every stage of processing.


Crucially, the legitimate interest framework does not function as a blanket exemption. Controllers must demonstrate necessity, assess proportionality, document risks, and implement safeguards. This architecture embeds accountability while preserving operational flexibility. Over time, EU regulatory guidance and jurisprudence have refined its contours, making it one of the most scalable and AI‑compatible legal bases available under modern data protection law.


Recent EU legislative initiatives have reinforced this alignment. This reflects a deliberate policy choice: data protection law should constrain misuse and abuse, not inadvertently obstruct technological progress. In effect, the GDPR ecosystem recognises that innovation and rights protection are not opposing objectives, but complementary ones when regulatory design is calibrated correctly.


India’s Framework: Consent‑Centric and Operationally Constraining


India’s Digital Personal Data Protection Act, 2023 (DPDP Act) adopts a markedly different approach. It permits personal data processing on two grounds only: consent, and a narrowly defined set of “legitimate uses” under Section 7. These legitimate uses are tightly circumscribed. There is no residual, principle‑based exemption for legitimate interests.


Consent under the DPDP Act must be free, specific, informed, and unambiguous, tied to clearly articulated purposes. Any material change in processing requires fresh notice and renewed consent. While this model may be workable for static, one‑time data uses, it is structurally misaligned with the realities of AI development. AI systems depend on large, evolving datasets and continuous iteration. Model retraining, performance optimisation, dataset augmentation, and functional expansion are not exceptional events—they are core operational features.


Under the current framework, each such iteration potentially triggers new consent obligations. At scale, these obligations cannot be meaningfully discharged. The result is not heightened protection in practice, but legal uncertainty, compliance friction, and reliance on informal or brittle workarounds. For AI developers operating in India or processing Indian personal data, the consent‑centric architecture becomes a constraint rather than a safeguard.


The Publicly Available Data Exemption: Limited Relief


The DPDP Act does include a narrow carve‑out: it does not apply to personal data that has been made, or caused to be made, publicly available by the Data Principal themselves. For AI developers relying on web‑scraped or publicly accessible datasets, this exemption may appear to offer relief. In practice, it does not.


The statutory threshold is exacting. The data must have been affirmatively made public by the individual. This is fundamentally different from data that is merely accessible online. Content that has been re‑published, aggregated, indexed, or disseminated through intermediary platforms—without a clear, voluntary act of disclosure by the individual—does not meet this standard on a strict reading of the law.


AI training datasets are typically assembled through automated processes such as web crawling and bulk aggregation. Establishing, at scale, that each identifiable data point originates from an affirmative disclosure by the Data Principal is operationally infeasible. The evidentiary burden placed on AI companies is, as a matter of practical law, near‑insurmountable. As a result, this exemption is not a substitute for a legitimate interests framework.


Why the International AI Community Must Pay Attention


Like other modern privacy regimes, India’s data protection law has extraterritorial reach. The DPDP Act applies to any entity processing the personal data of Indian individuals, regardless of where that entity is headquartered. Given India’s scale as a data market, this gives the Act global relevance.


International AI developers sourcing training data from Indian users, deploying AI systems that process Indian personal data, or incorporating India into global data pipelines must assess their operations against the DPDP Act’s requirements. Processing activities that are routinely lawful under the GDPR’s legitimate interests basis may lack an equivalent lawful ground in India, creating material compliance complexity for cross‑border operations.


The Need for a Relook


From the perspective of Indian law and policy, the case for introducing a principle‑based legitimate interests exemption into the DPDP Act is compelling. A provision modelled on the structured balancing framework under Article 6(1)(f) of the GDPR—supported by clear accountability obligations and regulatory guidance—would provide AI developers with a lawful and scalable basis for data processing without diluting the Act’s core data‑protection objectives. Properly designed, such an exemption would reduce legal uncertainty, lower compliance friction, and strengthen India’s attractiveness as a destination for AI development and investment, while remaining firmly anchored in rights protection.


India’s ambition to emerge as a global AI powerhouse cannot be realised through regulatory architectures that are structurally misaligned with how AI systems operate. In its current form, the DPDP Act imposes constraints that place Indian developers at a material disadvantage relative to their counterparts in GDPR jurisdictions, while simultaneously complicating India’s integration into global AI development ecosystems. At a moment when cross‑border data flows, collaborative model development, and international deployment are becoming decisive, this misalignment carries real strategic costs.


For international stakeholders, the implications are equally clear. Engagement with India’s evolving data protection framework can no longer be passive or peripheral. As the DPDP Rules are finalised and regulatory interpretation begins to take shape, there is a narrow but critical window to participate in consultations, contribute operational perspectives, and advocate for a framework that reflects the realities of AI development at scale.


Please feel free to reach out to our team to discuss any technology law, competition law, or international trade and policy issues.
