
AI, Data Privacy, and India’s Digital Personal Data Protection Act

Davies Parker

AI is advancing rapidly and is projected to add enormous value to the global economy. That growth, however, has heightened concerns about how personal information is protected. Governments and organisations around the world are therefore drafting rules and guidelines intended to let AI keep developing while keeping people’s information safe and private.

India’s Digital Personal Data Protection Act 2023: A Compliance Mandate

India’s government passed the comprehensive Digital Personal Data Protection Act 2023 (DPDP Act) after extensive consultations spanning five years. The Act imposes mandatory compliance requirements on companies operating within India. The Minister of State for Electronics and IT, Rajeev Chandrasekhar, announced that additional rules to supplement the Act would be released by mid-October 2023, with a grace period of at most 12 months for compliance.

Impact on AI and Machine Learning

Studies, such as one conducted by the Boston Consulting Group and IIT-A, underline AI’s potential to contribute significantly to India’s GDP growth. The DPDP Act does not mention AI explicitly, but its core principles, which safeguard individuals’ rights and mandate lawful processing of personal data, bear directly on the data that AI systems depend on.

Consent Fatigue and Compliance Challenges

The DPDP Act sets stringent conditions for processing personal data, requiring either valid consent or one of the enumerated legitimate uses. It delineates instances where processing without separate consent is permissible, including certain state functions, legal obligations, and public-safety scenarios. Challenges arise, however, around publicly available data, especially where previously shared personal information is later treated as private.
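To make the idea concrete, here is a minimal illustrative sketch in Python of gating processing on purpose-matched consent or on an enumerated legitimate use. The record fields, purpose strings, and the LegitimateUse categories are assumptions for illustration only; the Act prescribes no data model or API.

```python
# A minimal, illustrative gate: process a record only if purpose-matched
# consent was recorded, or if one of the enumerated "legitimate uses"
# applies. Field names, purpose strings, and categories are hypothetical.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class LegitimateUse(Enum):
    STATE_FUNCTION = auto()
    LEGAL_OBLIGATION = auto()
    PUBLIC_SAFETY = auto()


@dataclass
class PersonalDataRecord:
    principal_id: str
    consent_given: bool
    consent_purpose: Optional[str] = None
    legitimate_use: Optional[LegitimateUse] = None


def may_process(record: PersonalDataRecord, purpose: str) -> bool:
    """Allow processing only with purpose-matched consent or a listed legitimate use."""
    if record.consent_given and record.consent_purpose == purpose:
        return True
    return record.legitimate_use is not None


# A record shared publicly for one purpose does not automatically clear
# the bar for a different purpose such as model training.
rec = PersonalDataRecord("user-42", consent_given=True, consent_purpose="service_delivery")
print(may_process(rec, purpose="model_training"))  # False
```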

AI Model Training and Compliance

Training AI models relies heavily on large-scale data collection, which raises particular concerns about data relating to children or persons with disabilities. Even where consent is obtained, the Act prohibits processing that is likely to harm a child’s well-being. Identifying such data within the massive collections used for AI training remains challenging.
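As a rough illustration of why this is hard, the sketch below drops records that lack age metadata or that belong to minors without verifiable guardian consent before they enter a training corpus. The field names and the 18-year threshold are assumptions; in web-scale collections this metadata is usually absent, which is precisely the problem.

```python
# Hypothetical filter: exclude records flagged as belonging to children
# (or lacking verifiable guardian consent) before they reach a training
# corpus. In practice such flags are rarely present in scraped data.
from typing import Iterable, Iterator


def filter_training_records(records: Iterable[dict]) -> Iterator[dict]:
    for record in records:
        age = record.get("age")                      # often unknown in scraped data
        guardian_consent = record.get("guardian_consent", False)
        if age is None:
            continue                                  # unknown age: drop rather than risk non-compliance
        if age < 18 and not guardian_consent:
            continue                                  # minor without verifiable consent
        yield record


corpus = [
    {"text": "post A", "age": 30},
    {"text": "post B", "age": 15, "guardian_consent": False},
    {"text": "post C"},  # no age metadata at all
]
print([r["text"] for r in filter_training_records(corpus)])  # ['post A']
```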


Compliance and Generative AI Models

Generative AI models are often trained on personal data that was shared for other purposes, which can conflict with the Act’s purpose-limitation provisions. The gap between the purpose for which data was originally provided and the way it is actually used risks breaching the consent limitations set out in the DPDP Act.
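One way to see that gap is to audit a corpus against the purpose each record was originally collected for, as in the hypothetical sketch below; the purpose tags and field names are invented for illustration and are not terms from the Act.

```python
# Hypothetical audit: compare each record's declared collection purpose
# with the intended downstream use (here, generative model training).
TRAINING_PURPOSE = "generative_model_training"

corpus = [
    {"id": 1, "collected_for": "customer_support"},
    {"id": 2, "collected_for": "generative_model_training"},
    {"id": 3, "collected_for": "newsletter"},
]


def purpose_mismatches(records, intended_purpose):
    """Return records whose declared collection purpose differs from the intended use."""
    return [r for r in records if r["collected_for"] != intended_purpose]


flagged = purpose_mismatches(corpus, TRAINING_PURPOSE)
print(f"{len(flagged)} of {len(corpus)} records were not collected for this purpose")
# 2 of 3 records were not collected for this purpose
```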

Complexities in Compliance Execution

Complying with obligations such as data accuracy and rectification becomes intricate where AI-generated data is involved. Implementing the Act’s Data Subject Rights (DSR) framework, including rights of access and correction, poses technical challenges for how AI models store and handle data.
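An access-and-correction handler is easy to sketch for data held in a conventional store, as below with a purely hypothetical interface; the harder problem the text points to is that a correction does not propagate into a model already trained on the old value, which would require retraining or machine unlearning.

```python
# Hypothetical DSR handler for access and rectification requests,
# assuming personal data sits in an indexable store keyed by principal.
from dataclasses import dataclass, field


@dataclass
class DataStore:
    records: dict[str, dict] = field(default_factory=dict)

    def access(self, principal_id: str) -> dict:
        """Return everything held about a Data Principal."""
        return self.records.get(principal_id, {})

    def rectify(self, principal_id: str, key: str, value: str) -> None:
        """Correct a field in the stored record. Note: this does not undo
        the influence of the old value on any model already trained on it."""
        self.records.setdefault(principal_id, {})[key] = value


store = DataStore({"user-42": {"email": "old@example.com"}})
store.rectify("user-42", "email", "new@example.com")
print(store.access("user-42"))  # {'email': 'new@example.com'}
```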

The Road Ahead: AI as a Compliance Ally

Despite the compliance burden, one proposal is to leverage AI itself as a Consent Manager that handles consents on behalf of Data Principals. This could help organisations meet mandatory requirements while aligning AI usage with regulatory mandates.
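In its simplest possible form, such a manager is a registry that grants, withdraws, and checks purpose-specific consents on behalf of Data Principals. The interface below is purely hypothetical; the DPDP Act defines the Consent Manager role but not a technical API.

```python
# Hypothetical Consent Manager: record, look up, and withdraw
# purpose-specific consents on behalf of Data Principals.
from collections import defaultdict


class ConsentManager:
    def __init__(self) -> None:
        # principal_id -> set of purposes consented to
        self._consents: defaultdict[str, set[str]] = defaultdict(set)

    def grant(self, principal_id: str, purpose: str) -> None:
        self._consents[principal_id].add(purpose)

    def withdraw(self, principal_id: str, purpose: str) -> None:
        self._consents[principal_id].discard(purpose)

    def is_permitted(self, principal_id: str, purpose: str) -> bool:
        return purpose in self._consents[principal_id]


cm = ConsentManager()
cm.grant("user-42", "personalisation")
print(cm.is_permitted("user-42", "model_training"))   # False
cm.withdraw("user-42", "personalisation")
print(cm.is_permitted("user-42", "personalisation"))  # False
```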

A Global Shift Towards Responsible AI

The G20 summit’s “New Delhi Leaders’ Declaration” underscores that regulating AI is now a priority for major economies. Its emphasis on human rights, transparency, fairness, and data protection in AI development signals a coming wave of additional compliance requirements, fostering ethical AI innovation without compromising on digitalization.


In navigating the DPDP Act’s impact on AI, it’s evident that balancing technological advancement with data protection mandates is crucial. Leveraging AI as a compliance ally could pave the way for ethical AI development in an increasingly regulated landscape.

Click here for Data Protection and Privacy Services.
