Is Indian Law Ready to Tackle AI-Generated Crimes?

Artificial Intelligence (AI) is no longer science fiction — it’s a part of everyday life. From chatbots and recommendation engines to driverless cars and facial recognition, AI’s capabilities are growing rapidly. But along with transformative benefits, AI also presents new legal, ethical, and criminal challenges. One of the biggest questions facing India today is:

Is Indian law prepared to tackle crimes that are generated or amplified by artificial intelligence?

This blog explores the current legal landscape, the gaps in existing law, emerging challenges, landmark responses, and what needs to be done for India to become legally ready for the AI era.


What Are AI-Generated Crimes?

AI-generated or AI-facilitated crimes are offenses in which AI systems act as tools, enablers, or, at least potentially, autonomous agents in committing wrongdoing. These can include:

  • Deepfakes used for fraud, blackmail, or political manipulation

  • Automated bots that commit financial fraud

  • AI tools hacking into systems or bypassing security

  • Fake identities created using AI for illegal access

  • Autonomous systems causing physical harm (e.g., self-driving car accidents)

These crimes are different from traditional crimes because the “perpetrator” may be a complex algorithm rather than a natural person. That raises unique issues around intent, liability, evidence, and prevention.


What Does India's Current Legal Framework Look Like?

India does not yet have a dedicated AI law. Instead, enforcement relies on existing statutes that touch upon cyber issues:

a. Indian Penal Code (IPC), 1860

The IPC deals with traditional offenses, including fraud, defamation, identity theft, cheating, and forgery, but it was not designed with AI or digital automation in mind. (Its successor, the Bharatiya Nyaya Sanhita, 2023, which replaced the IPC on 1 July 2024, largely carries these offenses forward without AI-specific provisions.)

b. Information Technology Act, 2000 (IT Act)

The IT Act is the primary law governing digital activity in India. Key provisions include:

  • Section 66 – Computer-related offenses

  • Section 66F – Cyber terrorism

  • Section 43 – Damage to computer systems

These provisions have been used in cybercrime cases but are increasingly seen as insufficient for modern AI-driven offenses.

c. Rules & Guidelines

India now has a data protection statute (the Digital Personal Data Protection Act, 2023, whose implementation is still being phased in), along with cybersecurity policies and digital ethics guidelines, but no comprehensive AI statute exists yet.


What Makes AI Crimes Different and Hard to Regulate?

AI challenges legal systems for several reasons:

a. Lack of Human Intent

The law usually relies on mens rea (guilty intention). When machine learning systems act independently, it becomes difficult to pin criminal intent on a human operator.

b. Attribution Problems

If an AI generates unlawful content (e.g., a deepfake), who is responsible — the developer, the user, the platform, or the AI model itself?

c. Speed and Scale

AI can create harmful content at massive scale (e.g., millions of fake accounts) in seconds. Traditional legal processes are simply too slow.

d. Borderless Nature

AI-generated crimes can originate outside India but affect Indian systems or citizens. Jurisdictional issues complicate enforcement.

These characteristics show why conventional legal frameworks are struggling globally, not just in India.


Deepfakes & Disinformation: A Case Study

What Are Deepfakes?

Deepfakes are realistic fake images, audio, or videos created using AI. They can:

  • Falsely attribute statements to politicians

  • Create fake celebrity content

  • Damage reputations through falsified evidence

So far, Indian law addresses deepfakes through provisions like:

  • Section 66D (IT Act) – Cheating by impersonation

  • Sections 499–502 (IPC) – Defamation and related offenses

However, these sections were not drafted with AI in mind. They lack specific definitions for synthesized media.

Problem: In many cases, deepfake videos may not clearly fall under “impersonation” or “defamation” unless a specific victim brings a complaint.

This gap shows how the legal framework is reactive rather than preventive.
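To make the authentication gap concrete, the sketch below (Python, standard library only) shows the simplest possible provenance check: comparing a circulating file's digest against a registry of known-authentic originals. The registry contents here are hypothetical placeholders. Its limits mirror the legal gap: a mismatch proves only that the file is not the registered original, not that it is a deepfake, and an innocent re-encode also breaks the match.

```python
import hashlib

# Hypothetical registry of SHA-256 digests for known-authentic originals
# (e.g., an official press video). A real provenance system would use a
# signed, tamper-evident database, not an in-memory set.
VERIFIED_ORIGINALS = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def is_verified_original(path: str) -> bool:
    """True only if the file byte-for-byte matches a registered original."""
    with open(path, "rb") as f:
        digest = hashlib.file_digest(f, "sha256").hexdigest()  # Python 3.11+
    return digest in VERIFIED_ORIGINALS
```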


Financial Crimes and Automated Bots

AI bots are used to:

  • Generate fake transactions

  • Bypass security systems

  • Exploit vulnerabilities

  • Commit fraud without direct human involvement

Current laws such as Section 66 of the IT Act cover unauthorized access and damage, but they don’t clearly classify automated AI tools as separate offenders.

In most cases, investigators must trace back to a human user or developer — which is often difficult due to encryption, anonymity, or decentralized systems.
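As a rough illustration of what platform-side detection involves, here is a minimal velocity check that flags accounts transacting faster than a human plausibly could. The window and threshold are illustrative assumptions, not calibrated values; real fraud systems layer many such signals.

```python
import time
from collections import deque
from dataclasses import dataclass, field

@dataclass
class VelocityMonitor:
    """Flags accounts whose transaction rate exceeds a human-plausible ceiling."""
    window_seconds: float = 60.0   # length of the sliding observation window
    max_events: int = 10           # illustrative ceiling, not a calibrated value
    _events: deque = field(default_factory=deque)

    def record(self, timestamp: float) -> bool:
        """Record one transaction; return True if the account looks automated."""
        self._events.append(timestamp)
        # Drop events that have slid out of the observation window.
        while self._events and timestamp - self._events[0] > self.window_seconds:
            self._events.popleft()
        return len(self._events) > self.max_events

# Usage: call record() with the current time on every transaction.
monitor = VelocityMonitor()
suspicious = monitor.record(time.time())
```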


AI in Autonomous Vehicles and Robotics

With the advent of self-driving cars and delivery drones, AI is directly linked to physical harm. If an autonomous vehicle injures someone:

  • Who is responsible?

  • The manufacturer?

  • The programmer?

  • The owner?

  • The AI itself?

Existing frameworks such as the Motor Vehicles Act and general tort principles can be stretched to cover these questions, but they were not drafted with autonomous machines in mind. The Supreme Court or the legislature may need to develop new standards for AI liability.


Data Protection, Privacy & AI

AI thrives on data, and India's privacy regime is still maturing: the Digital Personal Data Protection Act, 2023 has been enacted, but its implementation is still being phased in. Without a robust, fully operational privacy framework:

  • AI systems can misuse personal data

  • Users lack clear consent mechanisms

  • Data breach liability is weak

Until that framework is fully operational, AI-driven misuse of personal data will remain hard to deter.
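To illustrate the kind of consent mechanism the list above points to, here is a toy purpose-limited consent gate. It is not based on any prescribed DPDP compliance API (none is mandated in code); it only sketches the Act's general principle that personal data should be processed against recorded, withdrawable, time-bound consent.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical consent ledger mapping (user_id, purpose) to an expiry time.
consent_ledger: dict[tuple[str, str], datetime] = {}

def grant_consent(user_id: str, purpose: str, valid_for_days: int) -> None:
    """Record the user's consent for one specific processing purpose."""
    expiry = datetime.now(timezone.utc) + timedelta(days=valid_for_days)
    consent_ledger[(user_id, purpose)] = expiry

def withdraw_consent(user_id: str, purpose: str) -> None:
    """Consent must be withdrawable at any time."""
    consent_ledger.pop((user_id, purpose), None)

def may_process(user_id: str, purpose: str) -> bool:
    """Allow processing only under unexpired consent for this exact purpose."""
    expiry = consent_ledger.get((user_id, purpose))
    return expiry is not None and datetime.now(timezone.utc) < expiry

grant_consent("user-42", "loan-eligibility-scoring", valid_for_days=30)
assert may_process("user-42", "loan-eligibility-scoring")
assert not may_process("user-42", "targeted-advertising")  # different purpose
```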


What Has India Done So Far?

India has taken early steps toward AI governance:

a. NITI Aayog’s AI Strategy

NITI Aayog has published a National Strategy for Artificial Intelligence (2018) and follow-up Responsible AI papers emphasizing ethics, innovation, and regulation.

b. Draft National Data Governance Policies

These aim to balance innovation with individual rights.

c. Cybersecurity Policies

India has strengthened cybersecurity agencies to respond to digital threats.

However, these are policy frameworks, not laws. Policy can recommend but cannot enforce punishment or liability like a statute.


What Would Legal Readiness Require?

To tackle AI-generated crimes effectively, India needs:

a. Clear Definitions

Legal definitions for:

  • AI systems

  • Autonomous behavior

  • Deepfake content

  • Machine intent

  • Algorithmic harm

b. Liability Allocation

Who is responsible when an AI system commits or enables a crime?

Options include:

  • Strict liability for developers

  • Shared liability models

  • Insurance-based models

c. Digital Evidence Standards

Courts need clear rules on how to admit AI-generated content as evidence, including authentication of deepfakes.
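One building block for such rules is a hash-based chain-of-custody log, sketched below. File names and handler roles are hypothetical, and real forensic practice adds digital signatures, secure storage, and certified tooling on top.

```python
import hashlib
from datetime import datetime, timezone

def custody_entry(path: str, handler: str, action: str) -> dict:
    """Create one log entry for a piece of digital evidence.

    Each time evidence changes hands, its current SHA-256 digest is
    recorded; any later alteration of the file (including substitution
    of a manipulated version) changes the digest and breaks the chain.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return {
        "file": path,
        "sha256": h.hexdigest(),
        "handler": handler,
        "action": action,
        "utc_time": datetime.now(timezone.utc).isoformat(),
    }

# Example (file name hypothetical):
# manifest = [custody_entry("exhibit_a.mp4", "investigating officer", "seized")]
# manifest.append(custody_entry("exhibit_a.mp4", "forensic lab", "received"))
```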

d. Proactive Oversight

Regulatory bodies should be able to supervise AI development and flag high-risk tools before they cause harm.

e. International Cooperation

AI crime often crosses borders. India must participate in global treaties and standards for digital crime enforcement.


Can Existing Laws Be Amended Instead?

Some experts argue that instead of a new law, amendments to existing statutes (IPC and IT Act) may work. For example:

  • Add special sections on AI misuse

  • Define "automated agent" and "AI-generated content"

  • Increase penalties for deepfake-related crimes

  • Strengthen rules for data privacy

The advantage is speed and familiarity with existing statutes. The drawback is that patchwork amendments may be insufficient for future AI developments.


International Learnings

Jurisdictions such as the United States, the European Union, and Singapore are already debating or enacting AI regulation:

  • EU’s AI Act: a comprehensive, risk-tiered framework that imposes stricter obligations on higher-risk AI systems

  • US Executive Orders: Guidelines for responsible AI

  • Singapore: Model AI governance framework emphasizing fairness and accountability

India can adapt these global practices while tailoring them to local realities.


The Role of Judiciary in AI Regulation

Until the legislature acts, India’s courts may fill the gap by interpreting existing laws in AI cases. The Supreme Court and High Courts can:

  • Recognize AI harm as a distinct legal category

  • Clarify liability principles

  • Order preventive measures like takedowns of harmful AI content

Judicial pronouncements can shape AI jurisprudence even without a new statute.


Challenges in Enforcement

Even if the law evolves, enforcement remains difficult:

a. Technical Expertise

Police and prosecutors need AI literacy.

b. Forensic Infrastructure

AI crime scenes look different from conventional ones, requiring advanced forensic tools and specialized training.

c. Privacy vs Surveillance

Aggressive monitoring may help detect AI-driven crime, but it can also invade citizen privacy.

Balancing safety with freedom will be a key legal challenge.


Public Awareness and Education

Legal readiness is not just about statutes — it’s also about awareness. Citizens must know:

  • How to identify deepfakes

  • What rights they have

  • How to report AI crimes

  • How to secure their digital identity

Public legal education is critical for effective AI governance.


Conclusion: Are We Ready?

Short answer: Not yet.

India has taken early steps toward addressing digital crime, but its current legal framework is insufficient to effectively tackle AI-generated offenses. The law was drafted in an era before AI’s exponential growth, and trying to retrofit it will only go so far.

To be truly ready, India needs:

  • A dedicated AI law or comprehensive amendments

  • Clear legal definitions and liability rules

  • Strong data protection frameworks

  • International cooperation on cybercrime

  • Judicial clarity and enforcement capacity

  • Public education and expert training

AI is not just a technological revolution — it’s a legal revolution too. If Indian law wishes to govern AI effectively, lawmakers, jurists, technologists, and civil society must collaborate now, before AI’s capabilities outpace the rule of law.
