A blast at the heart of the nation
The Red Fort, a symbol of India’s sovereignty and the site from which the tricolour is unfurled every Independence Day, was shaken this week by a car explosion that left the capital on edge.
On Monday evening, a vehicle parked near the Lal Quila Metro Station went up in flames, killing at least four people and injuring several others. Shattered glass, scattered debris, and the strobe of emergency lights turned Delhi’s old quarter into a warlike scene.
By nightfall, the National Investigation Agency (NIA) had taken over the probe. Investigators suspect the use of an improvised explosive device. Early clues point toward a coordinated act, though officials have not ruled out local criminal networks exploiting geopolitical tensions.
The attack was more than a violent act — it was a message. And it has forced India to confront, once again, the evolving nature of its internal security challenges.
The changing face of domestic threats
India’s internal security today is no longer defined by insurgency in distant corners or sporadic cross-border infiltration. The battlefield has shifted to cities, cyberspace, and social media feeds.
Security officials describe a “hybrid threat” landscape — one where physical violence, online radicalization, financial crime, and misinformation converge.
Urban terrorism, lone-wolf attacks, deepfake propaganda, and cyber-enabled sabotage are blurring traditional boundaries between crime, terror, and warfare.
According to the Ministry of Home Affairs’ Annual Report 2024-25, India recorded a 23% rise in cyber-related incidents targeting government infrastructure, and a notable spike in AI-driven misinformation campaigns during recent state elections.
What makes this moment different is not just the complexity of the threats, but the tools that both the state and its adversaries now wield — especially artificial intelligence.
AI is both the shield and the sword
Artificial intelligence has become the newest recruit in India’s security arsenal. It promises sharper surveillance, faster detection, and smarter policing.
Across major Indian cities, AI-enabled CCTV networks are scanning for anomalies: unattended bags, erratic movement patterns, and number plates linked to prior alerts. Machine-learning models fuse data from telecom providers, immigration records, and financial transactions to flag suspicious patterns.
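The kind of multi-signal flagging described above can be illustrated with a deliberately simplified sketch: independent cues are weighted and summed into a risk score, and only scores above a threshold are routed to a human operator. All weights, thresholds, and signal names here are hypothetical, not drawn from any actual deployment.

```python
# Hypothetical sketch of multi-signal fusion for alerting.
# Weights and threshold are illustrative only.
from dataclasses import dataclass

@dataclass
class Observation:
    unattended_object: bool   # e.g. from a CCTV object-persistence model
    erratic_movement: bool    # e.g. from a trajectory-analysis model
    plate_on_watchlist: bool  # e.g. from an ANPR lookup

# Illustrative weights: a watchlisted number plate counts for more
# than a single behavioural cue on its own.
WEIGHTS = {
    "unattended_object": 0.4,
    "erratic_movement": 0.3,
    "plate_on_watchlist": 0.6,
}
ALERT_THRESHOLD = 0.7  # scores at or above this go to a human operator

def risk_score(obs: Observation) -> float:
    """Sum the weights of every signal that fired."""
    score = 0.0
    if obs.unattended_object:
        score += WEIGHTS["unattended_object"]
    if obs.erratic_movement:
        score += WEIGHTS["erratic_movement"]
    if obs.plate_on_watchlist:
        score += WEIGHTS["plate_on_watchlist"]
    return round(score, 2)

def should_alert(obs: Observation) -> bool:
    return risk_score(obs) >= ALERT_THRESHOLD
```

The point of the sketch is the design shape, not the numbers: the system only ranks and routes; a person still decides what, if anything, happens next.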
“AI allows us to see what human eyes can miss,” a senior Delhi Police officer told ICTpost. “When it works, it saves hours of investigation.”
Indeed, pilot projects in Hyderabad and Bengaluru have reduced forensic turnaround times by up to 60%. Predictive analytics now help identify vulnerable zones before major festivals or political rallies.
But as every security expert will admit — technology doesn’t choose sides.
The darker mirror of intelligence
The same algorithms that strengthen surveillance can also undermine truth.
In the hours following the Red Fort explosion, doctored videos and AI-generated images flooded social media, falsely claiming responsibility in the name of various groups.
Within minutes, hashtags blaming different communities began trending, forcing fact-checkers and Delhi Police’s cyber unit into a digital firefight against misinformation.
Deepfakes, once a novelty, have become a weapon of manipulation.
A recent study by the Centre for Security and Emerging Technology (CSET) found that India now ranks among the top five countries globally in the volume of deepfake political content detected online. Many of these videos circulate through encrypted channels, escaping moderation until they spark real-world tension.
AI is also aiding criminals in subtler ways — through identity fraud, AI-assisted phishing, and synthetic-voice scams. In 2024, India’s CERT-In recorded more than 1.2 million cybersecurity incidents, a record high.
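One basic defence against tampered media is provenance checking: comparing a file’s cryptographic hash against a register of digests published alongside verified originals. The sketch below is a simplification with hypothetical data — real provenance schemes such as C2PA cryptographically sign metadata rather than relying on a central hash list — but it shows the underlying idea.

```python
# Minimal sketch of provenance checking by cryptographic hash.
# A file whose SHA-256 digest matches a register of verified originals
# can be treated as untampered; anything else needs further scrutiny.
import hashlib

def sha256_digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical register of digests published with official footage.
VERIFIED = {sha256_digest(b"official press briefing, 2025-11-10")}

def provenance_status(data: bytes) -> str:
    return "verified" if sha256_digest(data) in VERIFIED else "unverified"
```

Even a one-byte alteration to a clip changes its digest entirely, which is what makes the check useful against doctored re-uploads of genuine footage.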
The policy dilemma: guardrails vs. speed
India’s security architecture is racing to adapt. The NITI Aayog ‘Responsible AI for All’ framework urges “ethical guardrails” for AI use, while the Digital Personal Data Protection Act, 2023 lays out principles for privacy and accountability.
Yet on the ground, implementation lags. Predictive policing tools in some cities operate without independent audits. Facial-recognition deployments have expanded faster than clear legal oversight. Civil-rights advocates warn that unchecked automation could normalize surveillance, eroding trust between citizens and the state.
Policymakers face a delicate balance — between speed and scrutiny. AI can save lives in the aftermath of a blast; it can also misidentify innocents if left unmonitored.
Three imperatives for the next decade
- Transparent AI governance: India must institutionalize human-in-the-loop protocols and independent oversight for AI systems used by law enforcement. Algorithms that decide who is flagged or detained should be auditable, explainable, and accountable.
- Information hygiene and public literacy: No technology can substitute for an informed citizenry. Schools, civil-society groups, and digital platforms must collaborate to teach “deepfake literacy” — how to question what we see online before reacting emotionally.
- Tech-federalism and capacity building: Since policing is a state subject, the Centre must strengthen coordination and resource sharing — from AI-powered forensic labs to cybersecurity task forces. A unified national database of verified threat intelligence could prevent the duplication and delays that cripple investigations today.
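What a human-in-the-loop protocol means in practice can be sketched in a few lines: an automated flag carries its reason and model version, has no effect until a named reviewer records a decision, and every step is time-stamped for later audit. Every field and class name below is illustrative, not taken from any real system.

```python
# Hypothetical sketch of a human-in-the-loop audit trail: an automated
# flag is only actionable after a named reviewer signs off, and every
# step is logged so the outcome can be audited later.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class FlagRecord:
    subject_id: str
    model_version: str
    reason: str                        # explanation attached to the flag
    reviewed_by: Optional[str] = None
    approved: Optional[bool] = None
    log: List[str] = field(default_factory=list)

    def _stamp(self, event: str) -> None:
        ts = datetime.now(timezone.utc).isoformat()
        self.log.append(f"{ts} {event}")

    @classmethod
    def raise_flag(cls, subject_id: str, model_version: str, reason: str) -> "FlagRecord":
        rec = cls(subject_id, model_version, reason)
        rec._stamp(f"flag raised by {model_version}: {reason}")
        return rec

    def review(self, officer: str, approve: bool, note: str) -> None:
        """A human must record a decision before the flag has any effect."""
        self.reviewed_by = officer
        self.approved = approve
        verdict = "approved" if approve else "rejected"
        self._stamp(f"reviewed by {officer}: {verdict} ({note})")

    def actionable(self) -> bool:
        # Unreviewed flags are never actionable, by construction.
        return self.approved is True
```

The design choice worth noting is that accountability is structural: the flag cannot become actionable without a named human decision attached, which is precisely what independent audits would verify.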
A fragile peace — and a technological choice
The Red Fort blast will soon pass into the archive of India’s long struggle against violence. Yet the questions it raises are enduring:
How do we defend an open, diverse democracy in an era where code, algorithms, and misinformation can all be weaponized?
How do we make technology a guardian, not a suspect?
AI is not destiny. It is a mirror — reflecting both our ingenuity and our recklessness.
India’s internal security will depend not on how powerful our machines become, but on how responsibly we use them.
