By ICTpost Tech Desk
Securing AI with a Donut of Defense: A Smart, Layered Approach to Trustworthy Intelligence
Artificial Intelligence (AI) is no longer an experimental frontier — it’s at the center of modern digital life. From financial forecasting to disease diagnosis, from customer service chatbots to autonomous vehicles, AI is powering mission-critical systems around the world.
But here’s the problem: while AI stands tall at the center, its perimeter often lacks proper defenses. It’s a high-value target with wide-open access points. As AI expands, so do its risks — from misuse and data leakage to outright manipulation and cyberattacks.
The Defensive Donut Model
Cybersecurity experts now propose a straightforward framework: wrap your AI in a “defensive donut” — a ring of protection composed of four layers: Discover, Assess, Control, and Report.
1. Discover: Shine a Light on All AI Activities
“You can’t secure what you can’t see.”
The first step is to detect every instance of AI usage across your organization — both authorized and unauthorized.
The Rise of Shadow AI
- A 2023 Cisco Security Report found that 74% of employees admitted to using AI tools like ChatGPT at work without formal approval.
- These unauthorized tools — called Shadow AI — can introduce risks like data leakage, poor model performance, or compliance violations.
What to Discover
- AI models being used (e.g., OpenAI’s GPT, Google’s PaLM, Hugging Face models).
- Platforms running AI (on-premises, AWS, Azure, Google Cloud).
- APIs and integrations consuming AI-generated outputs.
Tools like Microsoft Purview and IBM Watson OpenScale help detect and manage AI assets across large environments.
2. Assess: Evaluate Vulnerabilities and AI Posture
Once discovered, each AI system must be evaluated for security gaps — a process known as AI Security Posture Management (AISPM).
Key Assessments:
- Vulnerability scanning of AI models and surrounding infrastructure.
- Penetration testing (or red teaming) to simulate real-world attacks.
- Third-party model inspection for malware or backdoors.
In May 2024, SecurityWeek reported that attackers compromised a Fortune 500 company’s chatbot by uploading a tainted LLM from an open-source repository. The model contained prompt injection backdoors to exfiltrate user prompts.
Hugging Face hosts over 1.5 million models, many with little to no vetting. According to a 2024 MITRE Study on AI Threats, 47% of open-source models carried at least one known vulnerability.
Organizations should scan AI artifacts with tools like HiddenLayer and align with standards such as:
3. Control: Block, Monitor, and Guide AI Interactions
Once vulnerabilities are addressed, controls must be implemented to prevent new threats — both from users and external actors.
a) AI Gateway
An AI gateway monitors and filters prompt inputs and outputs.
Top Risk: Prompt Injection
- Listed as the #1 threat in the OWASP Top 10 for LLMs.
- A form of social engineering that tricks AI into violating instructions or security constraints.
- Example: Researchers in 2023 demonstrated prompt injection techniques that bypassed OpenAI’s filters using simple wordplay.
Mitigation strategies:
- Input/output sanitization
- Prompt validation and contextual filters
- Rate limiting and session management
b) Privacy and Data Controls
It’s not just about what goes into the AI — it’s also about what comes out.
In April 2023, Samsung employees inadvertently leaked confidential data to ChatGPT, resulting in an internal ban on AI use.
Controls Needed:
- Data Loss Prevention (DLP) systems
- Output monitoring tools
- Internal fine-tuning to align models with ethical and regulatory requirements
4. Report: Visualize, Govern, and Comply
Security must be visible and reportable.
a) Dashboards for Risk Prioritization
Dashboards should provide:
- Real-time threat analytics
- AI usage heatmaps
- Trends in misuse or misconfiguration
Example: Palo Alto Cortex XSIAM includes AI-aware dashboards and threat detection tools for monitoring LLM usage.
b) Regulatory Compliance
Laws that govern AI use and data include:
- General Data Protection Regulation (GDPR)
- Digital Personal Data Protection (DPDP) Act, 2023 – India
- EU AI Act (under final implementation)
Compliance can be managed with tools mapped to:
- MITRE ATLAS
- ISO/IEC 23894:2023 (AI risk management)
- NIST AI RMF
AI at the Center, Security All Around
The “Defensive Donut” is more than a metaphor — it’s a blueprint for secure AI. When you implement the four layers — Discover, Assess, Control, Report — you turn AI into a safe, strategic advantage.
AI may be the future, but without security, it’s a future full of risk.
Key Takeaways:
- 74% of employees use unauthorized AI tools — Cisco, 2023
- Hugging Face hosts over 1.5 million models — source
- Prompt injection is the #1 threat to LLMs — OWASP, 2023
- Regulatory guardrails coming fast — GDPR, DPDP, EU AI Act
For more stories on AI, cybersecurity, and emerging tech in India, stay with ICTpost.com – where innovation meets insight. editor@ictpost.com
