India is attempting a difficult two-step: accelerate artificial-intelligence adoption across its economy while building credible guardrails for privacy, safety and accountability. Prime Minister Narendra Modi has framed AI as central to India’s next stage of growth, and New Delhi has begun to put money and law behind that promise—even as the country still lacks a single, comprehensive “AI Act.”
The legal foundation is the Digital Personal Data Protection (DPDP) Act, enacted in August 2023. The statute establishes consent-led processing, enumerates individual rights, permits most cross-border data transfers by default (subject to government-notified exceptions) and creates a new Data Protection Board to enforce the regime. Implementation, however, is still in motion: draft DPDP Rules were released in January 2025, with officials signaling phased enforcement and a “sunrise” period for companies to adapt. As of mid-2025, key provisions awaited final notification and the Board’s full operationalization.
Policy makers have paired the legal track with public investment. In March 2024, the Union Cabinet approved the IndiaAI Mission with a budget outlay of about US$1.2 billion to fund compute capacity, datasets, skilling and startup support. The mission is designed to push adoption without leaving safety entirely to private actors.
Regulatory posture toward frontier models has evolved quickly. After a March 2024 advisory from the IT Ministry asked firms to seek approval before releasing “unreliable” or under-tested AI tools, industry pushback prompted a revision two weeks later. The updated advisory dropped the prior-permission requirement but kept obligations to label limitations, build content safeguards and protect elections from AI-driven manipulation.
Deepfakes have become the political and cultural flashpoint. Following a viral synthetic video of actor Rashmika Mandanna in late 2023, the government reminded platforms of their duties under India's IT Rules and issued advisories directing them to remove such content within 24 hours, moves that were followed by police investigations and arrests. The episode hardened official attitudes toward synthetic media ahead of national elections and accelerated work on takedown protocols.
Ethical tensions extend beyond celebrity cases. India’s experimentation with facial-recognition systems—from airports to policing—has met sustained civil-society scrutiny over accuracy thresholds, due-process risks and potential bias. Rights groups have warned that treating partial “matches” as positives can generate wrongful stops and surveillance creep, underscoring the DPDP Act’s importance and the need for sector-specific safeguards.
New Delhi’s normative work predates the latest wave of generative AI. NITI Aayog’s “Responsible AI” papers laid out principles of accountability, transparency and inclusion, and the government’s own AI strategy materials document pilots across transport, agriculture and public services. The policy debate has since shifted from principles to engineering: how to turn those ideals into enforceable standards without freezing innovation.
Even the public conversation about AI's cultural boundaries has been shaped by the correction of misstatements. A widely cited example of posthumous voice cloning involved not a "musician" but the late chef and author Anthony Bourdain, whose voice was synthetically recreated for a 2021 documentary, sparking a global debate about consent and disclosure in synthetic media. The distinction matters because it illustrates why provenance and permissions, not just technical possibility, sit at the heart of India's proposed rules on labeling and takedowns.
Against this backdrop, India's growth thesis for AI remains robust but more grounded than boosterish claims suggest. Independent industry analyses project that the domestic AI market will reach tens of billions of dollars over the next few years, while official strategy documents argue faster AI adoption could add as much as US$500–600 billion to GDP by 2035 through productivity gains. Those figures assume scaled deployment across finance, manufacturing and public services.
The policy balance India is now attempting—tight enough rules to protect rights and elections, loose enough space for start-ups and incumbents to build—will likely be delivered through a patchwork: the DPDP Act and Rules, the IT Act and platform-liability regime, sectoral standards, and IndiaAI funding tied to safety expectations. Whether this becomes a durable model for other democracies will depend less on speeches than on execution: a functioning Data Protection Board, clear enforcement timelines, workable guidance for frontier models, and measurable reductions in harms such as deepfakes and biased surveillance. In the world’s largest democracy, AI governance will be judged not only by how fast the economy grows, but by how well individual rights are kept intact while it does.
© 2025 www.cijeurope.com