DIGITAL DOPPELGÄNGERS: RECONSIDERING INTELLECTUAL PROPERTY AND PERSONALITY RIGHTS IN THE AGE OF AI CLONING IN INDIA
- RFMLR RGNUL

- Oct 22
This post is authored by Nidhi Kamath and Ayushi Patel, 4th Year B.A. LL.B (Hons.) students at the Institute of Law, Nirma University.
INTRODUCTION: IDENTITY IN THE AGE OF SYNTHETIC MEDIA
The exponential rise of artificial intelligence has opened a world in which identity itself can be copied, reconstructed, and traded with unsettling precision. One of the most subversive forms to have emerged is the digital doppelgänger: a synthetic replica of human appearance, voice, and persona produced by sophisticated machine learning models. In India, recent scandals over the unauthorized cloning of playback singers Sonu Nigam and Arijit Singh have brought the issue into public and judicial focus.
Unlike traditional infringements of copyright or trademark, AI-driven cloning blurs the lines between authorship, creativity, and identity itself. The current statutory regime under the Copyright Act, 1957, and the Trade Marks Act, 1999, lacks any express mechanism to control the unauthorized use of a person's biometric likeness, while judicial recognition of publicity rights remains ad hoc and largely confined to celebrity disputes such as DM Entertainment Pvt. Ltd. v. Baby Gift House and Titan Industries Ltd. v. Ramkumar Jewellers. In one reported case, fraudsters used AI to impersonate a relative's voice on WhatsApp and duped the victim into transferring ₹1.97 lakh. Experts attribute the surge in such frauds to the intersection of social engineering with sophisticated generative tools, which makes deception far more effective. A McAfee survey cited in the same report states that 83% of Indian victims of AI-voice scams suffered financial losses, with almost half losing more than ₹50,000. These attacks succeed by exploiting trust, urgency, and psychological vulnerability, which calls for stringent identity verification, caution in disclosing OTPs, and timely reporting to cybercrime agencies. The result is a patchwork of interim orders and judicial ingenuity rather than a coherent regulatory regime. This legislative void is further aggravated by the global and viral spread of deepfakes, which renders traditional enforcement efforts insufficient.
Against this backdrop, Indian jurisprudence confronts a crucial dilemma: how to balance the protection of individuality and creative labour against freedom of expression and technological advancement. The challenge of AI cloning demands not merely doctrinal clarification but a normative re-conceptualization of the interface between intellectual property and personality rights in the digital era.
THE DOCTRINAL ANCHORS OF PERSONALITY RIGHTS IN INDIA
The theoretical basis of personality rights in India has been developed largely by the judiciary, progressing through a series of high-profile cases concerning the unauthorized commercial exploitation of celebrity persona. Unlike copyright or trademark protections expressly codified in statute, the recognition of personality rights remains grounded in judicial creativity and, in the absence of comprehensive legislative codification, exhibits both flexibility and vulnerability.
An influential expression of these rights came in ICC Development (International) Ltd. v. Arvee Enterprises, in which the Delhi High Court acknowledged that the unauthorized commercial exploitation of the names and personas of cricket players for ambush marketing amounted to a breach of their personality rights. The Court emphasized that even though a celebrity's identity is not a "work" under copyright law, its misuse for commercial gain constituted an unfair exploitation of goodwill. Likewise, in DM Entertainment Pvt. Ltd. v. Baby Gift House, the Delhi High Court restrained the defendants from marketing dolls resembling singer Daler Mehndi, holding that the unauthorized appropriation of his persona for commercial purposes amounted to passing off. The Court thereby applied the principles of trademark law to personality rights, stressing that a celebrity's persona carries proprietary value that must be protected. The Delhi High Court in Titan Industries Ltd. v. Ramkumar Jewellers further reinforced this trend. There, the Court restrained the unauthorized use of Amitabh Bachchan and Jaya Bachchan's photographs in jewellery advertisements, expressly recognizing personality rights as a distinct class of rights grounded in Article 21 of the Constitution, which enshrines the right to privacy and dignity.
Together, these rulings confirm that personality rights in India are judicially established as a blend of privacy, dignity, and proprietary interests, but remain uncodified within statutory intellectual property law. The resulting jurisprudence is piecemeal: while Indian courts have repeatedly affirmed the enforceability of personality rights, their foundation remains insecure in the absence of legislative articulation. Yet describing the case law as "piecemeal" alone calls for a more nuanced assessment. Judicial incrementalism, though often criticized for its narrow compass, can also serve as a bulwark against legislative overreach in constitutionally sensitive areas such as free expression. Moreover, in a digital era in which deepfakes and AI-cloning technologies erase the line between authenticity and fabrication, the debate must broaden from descriptive analysis to a normative and practical critique of extending celebrity doctrine to non-celebrities. Such an expansion raises complex questions of visibility, consent, and the asymmetric capacity for reputation management in an increasingly algorithmic public sphere.
AI CLONING AND THE FRAGILITY OF EXISTING IP FRAMEWORKS
The advent of AI-based cloning technologies has exposed the doctrinal vulnerability of India's intellectual property regime, particularly in the context of copyright. The copyright regime under the Copyright Act, 1957, rests on the twin requirements of human authorship and fixation in a material form. AI-created deepfakes and cloned voices radically unsettle both principles. When an algorithm imitates a singer's voice or produces simulated audiovisual material, questions arise as to whether such material constitutes an "original work" in the absence of human imagination. Moreover, the position of the person whose voice or image is replicated for commercial gain remains unsettled, since copyright law does not recognize identity as copyrightable subject matter.
The Bombay High Court in Arijit Singh v. Codible Ventures LLP issued an injunction against the unauthorized commercial use of the singer's artificially generated voice, acknowledging that such replication threatened to dilute both the artistic integrity and the economic value associated with his persona. Earlier, Sonu Nigam had publicly expressed fears about AI-made copies of his songs which, though never litigated, illustrated the susceptibility of artists to technological misuse. Such conflicts point to a rapidly growing tension: although Indian law recognizes personality rights and partial intellectual property safeguards, it is far from capable of confronting the intricate nature of synthetic media, underscoring the inadequacy of copyright alone as a remedy. The absence of an explicit legal framework governing AI-generated derivative works reveals foundational gaps in Indian copyright law. Existing provisions, premised on human authorship, offer scant protection against algorithmic copying of creative identity: copyright's requirements of originality and fixation cannot adequately respond to synthetic reproductions that mimic voice, tone, or artistic style. Creators are therefore left exposed to misappropriation without effective legal recourse. Nor is the problem limited to celebrities; as generative technologies spread, ordinary individuals also risk having their image or voice appropriated without consent. This new reality underscores the need to rework legal principles to accommodate three values: creative freedom, technological advancement, and the protection of personal and expressive identity in the AI era.
THE CROSSROADS OF DATA PROTECTION AND PERSONALITY RIGHTS
The advent of AI cloning technology has thrown into sharp relief an uneasy convergence of data protection legislation and personality rights jurisprudence. The Digital Personal Data Protection Act, 2023 (DPDP Act) focuses predominantly on the protection of "personal data," defined as any data about an individual who is identifiable by or in relation to such data. In principle, biometric attributes such as facial features, voice prints, and even behavioural information fall within this purview, making their collection and processing subject to the express consent of the "data principal." In the cloning context, replicating a singer's voice or an actor's likeness cannot be dissociated from the processing of sensitive personal data and therefore attracts the consent-oriented requirements of the DPDP regime. The commercial misappropriation of identity markers, however, lies outside the purview of the Act, leaving a significant gap between personality rights and informational privacy.
A purposive construction, however, suggests that the DPDP Act's requirement of consent to the use of personal data can strengthen the normative underpinnings of publicity rights, especially against the unauthorized exploitation of identity through AI. Such a harmonized interpretative strategy would enable courts to treat cloning both as a violation of informational autonomy under the DPDP Act and as a misappropriation of persona under personality rights jurisprudence. This convergence could offer stronger normative protection in the absence of express statutory recognition of identity as proprietary subject matter, while aligning domestic law with emerging global trends in data-driven identity governance.
THE GLOBAL DIMENSION: COMPARATIVE LESSONS
The legal issues surrounding AI cloning and synthetic media are not confined to India; they reflect a worldwide recalibration of the legal frameworks governing identity, creativity, and technological exploitation. In the United States, the right of publicity provides a direct cause of action against the unauthorized commercial exploitation of a person's image, voice, or personality, independent of intellectual property law. Recent developments, such as the Screen Actors Guild–American Federation of Television and Radio Artists (SAG-AFTRA) collective bargaining agreements, directly govern the contractual use of AI-generated replicas of performers, requiring informed consent and compensation. This illustrates how sectoral negotiation can supplement doctrinal protection, especially in creative industries susceptible to voice and image replication.
Within the European Union, the framework rests on the General Data Protection Regulation (GDPR), under which biometric identifiers are treated as sensitive personal data subject to heightened consent and purpose-limitation requirements. The newly adopted EU Artificial Intelligence Act (2024) reinforces this framework by imposing transparency obligations on synthetic media and requiring disclosure where deepfakes or AI-generated likenesses are involved. Together, these instruments combine consumer-facing and data-protection safeguards into a two-tiered regime for governing synthetic identity.
China has adopted a more prescriptive approach. Its rules on deep synthesis technology mandate the labelling of AI-generated content, require prior consent for the use of personal likeness, and hold platforms liable for failing to prevent the release of harmful synthetic media. In contrast with the U.S. and the EU, the Chinese model is marked by active state engagement and robust platform responsibility.
While comparative models from the U.S., the EU, and China offer varying regulatory responses to AI cloning and synthetic media, their transplantation into India requires careful examination. The viability of importing publicity rights, consent-based data regimes, or strict platform liability must be weighed against India's institutional capacity, enforcement constraints, and socio-legal heterogeneity. A hybrid model may be normatively appealing, but its success hinges on contextual adaptation that balances technological innovation against the protection of privacy, artistic freedom, and the practical realities of India's evolving digital governance environment. These comparative paths nonetheless suggest the contours of such a hybrid framework for India: from the U.S., an express grant of publicity rights would anchor individual control over identity; the EU model offers a consent-driven data protection system adaptable to biometric exploitation; and the Chinese regime demonstrates the value of ex ante platform responsibility. Carefully integrated, these features could fill India's present legislative void, creating an equitable regime that protects both personal autonomy and artistic integrity without compromising accountability in the digital arena.
TOWARDS A CODIFIED “RIGHT OF DIGITAL PERSONA” IN INDIA
The atomized protection of identity in India, scattered across intellectual property, privacy, and judicially created publicity rights, is poorly adapted to the disruptive force of AI-based cloning and synthetic media. Reliance on ad hoc injunctions or strained applications of the copyright and trademark regimes only brings into relief the absence of a coherent statutory vision. While the proposal of a sui generis "Right of Digital Persona" is an inventive step toward protecting identity in the algorithmic era, several prospective obstacles must be addressed. Delimiting the enforceable boundaries of a digital persona remains conceptually difficult, particularly in distinguishing fair-use parody and transformative use from unlawful appropriation. Equally important is reconciling the protection of individual identity with the constitutional guarantees of free speech and artistic expression, so that those freedoms are not unduly curtailed. Enforcement capacity and institutional preparedness also pose practical challenges, given India's fragmented digital governance framework. Drawing on the comparative frameworks outlined above, an effective model might synthesize the U.S. emphasis on publicity rights, the EU's consent-oriented data protection regime, and China's platform accountability mechanisms into a harmonized, context-sensitive legislative framework responsive to India's socio-legal realities.
What India needs is not incremental reform of existing regimes but the enactment of a dedicated Personality and Digital Integrity Act creating a sui generis right of digital persona. Such legislation could reconcile the proprietary dimensions of identity with the constitutional protection of privacy, and incorporate a consent-based model akin to data protection law. By affirmatively recognizing voice, likeness, and biometric characteristics as protectable subject matter, India would not only safeguard personal autonomy and artistic authenticity but also establish a progressive legal framework attuned to the challenges of the algorithmic era.