What Is the EU AI Act? Complete Guide 2026

Category: AI Governance & Compliance

By EthicalHacking.ai Team

The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. It establishes binding rules for the development, deployment, and use of AI systems within the European Union, and for any organization worldwide that offers AI products or services to EU users. The regulation classifies AI systems into risk tiers, imposes mandatory requirements on high-risk systems, bans certain AI practices entirely, and introduces transparency obligations for general-purpose AI models, including large language models.

For cybersecurity professionals, the EU AI Act matters because it explicitly mandates cybersecurity controls for high-risk AI systems. Article 15 requires that high-risk AI systems achieve appropriate levels of accuracy, robustness, and cybersecurity — making security a legal compliance requirement rather than a best practice recommendation. Organizations that deploy AI for hiring, credit scoring, law enforcement, critical infrastructure, healthcare, or any other high-risk use case must demonstrate that their AI systems are resilient against adversarial attacks, data poisoning, model manipulation, and other AI-specific threats.

The EU AI Act entered into force on August 1, 2024, with a phased implementation timeline. Prohibited AI practices became enforceable in February 2025, and obligations for general-purpose AI models took effect in August 2025. The full requirements for high-risk AI systems become enforceable in August 2026, just four months from now. Organizations that have not begun compliance preparation face significant regulatory and financial risk.

## EU AI Act Risk Classification

The EU AI Act organizes AI systems into four risk tiers. Each tier carries different regulatory obligations, with requirements increasing as risk level rises.

### Unacceptable Risk (Banned)

Certain AI practices are prohibited entirely under the EU AI Act because they pose an unacceptable risk to fundamental rights. Banned practices include:

- Social scoring systems that evaluate individuals based on social behavior or predicted personality traits for general purposes
- Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, with limited exceptions
- AI systems that exploit vulnerabilities of specific groups based on age, disability, or social or economic situation
- AI systems that manipulate human behavior through subliminal techniques causing harm
- Emotion recognition systems in workplaces and educational institutions, with limited exceptions
- Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases

These prohibitions became enforceable in February 2025. Organizations using any of these AI applications must discontinue them or face penalties.

### High Risk

High-risk AI systems are permitted but subject to the most stringent regulatory requirements. The EU AI Act defines high-risk AI in two categories. Annex I covers AI systems that are safety components of products already regulated under EU harmonization legislation: medical devices, automotive systems, aviation equipment, machinery, and similar regulated products. Annex III covers standalone AI systems used in specific high-risk domains, including:

- Biometric identification and categorization of natural persons
- Management and operation of critical infrastructure, including energy, water, and transport
- Education and vocational training, including AI that determines access to educational institutions or evaluates student performance
- Employment, including AI used for recruitment, screening, hiring, evaluation, or termination decisions
- Access to essential services, including AI used for credit scoring, insurance pricing, and social benefit eligibility
- Law enforcement, including AI used for risk assessment, polygraph analysis, and crime prediction
- Migration and border control, including AI used for visa processing and asylum applications
- Administration of justice, including AI used to assist judicial decisions

Organizations deploying AI in any of these domains must comply with the full set of high-risk requirements — including the cybersecurity obligations under Article 15.
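
To make the scoping step concrete, here is a minimal triage sketch in Python. The domain labels are simplified paraphrases of the Annex III categories, invented for illustration; an actual scoping exercise needs legal review against the Act's text.

```python
# Illustrative only: simplified paraphrases of Annex III domains, not legal definitions.
ANNEX_III_DOMAINS = {
    "biometric_identification",
    "critical_infrastructure",
    "education_and_vocational_training",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration_and_border_control",
    "administration_of_justice",
}

def is_annex_iii_high_risk(use_case_domain: str) -> bool:
    """Flag a proposed AI use case for full high-risk compliance review."""
    return use_case_domain in ANNEX_III_DOMAINS
```

A check like this belongs at the front of an AI intake or procurement workflow, so high-risk obligations are identified before a system is built or bought.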

### Limited Risk (Transparency Obligations)

AI systems that interact with humans, generate synthetic content, or perform emotion recognition are subject to transparency requirements. Users must be clearly informed when they are interacting with an AI system. AI-generated content including images, video, audio, and text must be labeled as artificially generated. These requirements apply to chatbots, AI assistants, deepfake generation tools, and general-purpose AI models. Transparency requirements take effect in August 2026.

### Minimal Risk

AI systems that do not fall into the above categories — such as AI-powered spam filters, AI in video games, or AI-assisted inventory management — are considered minimal risk and are not subject to mandatory requirements under the EU AI Act. However, the regulation encourages voluntary adoption of codes of conduct for minimal-risk AI.

## Article 15: Cybersecurity Requirements

Article 15 of the EU AI Act mandates that high-risk AI systems be designed and developed so that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and perform consistently in those respects throughout their lifecycle. This is the core cybersecurity compliance requirement.

### Resilience Against Adversarial Attacks

High-risk AI systems must be resilient against attempts to manipulate their behavior through adversarial inputs. This includes adversarial examples (carefully crafted inputs designed to cause misclassification), data poisoning attacks that corrupt training data to influence model behavior, model evasion techniques that cause the system to produce incorrect outputs, and prompt injection attacks that manipulate AI systems through malicious instructions embedded in inputs.
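
As a concrete illustration of the first category, the sketch below uses the classic fast gradient sign method (FGSM) to craft an adversarial example against a hypothetical PyTorch classifier. The model, input tensor, and epsilon budget are all assumptions for illustration; resilience against inputs like these is what Article 15 robustness testing targets.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Craft an adversarial example via the fast gradient sign method (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that maximally increases the loss, then clamp
    # back into the valid input range so the perturbation stays plausible.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```

Running crafted inputs like these against a candidate model before deployment, and measuring how often its predictions flip, is one practical way to evidence the adversarial robustness testing Article 15 expects.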

AI security platforms like [Lakera Guard](/tools/lakera-guard) provide runtime protection against prompt injection and adversarial inputs. [HiddenLayer](/tools/hiddenlayer-platform) detects adversarial attacks, model manipulation, and inference-time threats. [Garak](/tools/garak-scanner) provides open-source vulnerability scanning for large language models to identify susceptibility to adversarial techniques before deployment.

### Data Integrity and Governance

High-risk AI systems must implement robust data governance practices. Training, validation, and testing datasets must be relevant, representative, and as free of errors as possible. Data collection and processing must comply with GDPR requirements. Organizations must document data provenance, quality metrics, and any preprocessing or labeling decisions that could introduce bias.
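
One lightweight way to operationalize this documentation duty is a structured provenance record per dataset. The schema below is a hypothetical sketch, not a format prescribed by the Act; the field names are illustrative assumptions.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetProvenanceRecord:
    """Illustrative provenance record for a training, validation, or test set."""
    name: str
    source: str                        # where and how the data was collected
    collected_on: date
    gdpr_lawful_basis: str             # e.g. "consent" or "legitimate interest"
    preprocessing_steps: list[str] = field(default_factory=list)  # ordered, auditable
    labeling_decisions: str = ""       # choices that could introduce bias
    estimated_error_rate: float = 0.0  # measured label/feature error rate
```

Keeping records like this under version control alongside the model gives auditors a single artifact answering where the data came from and what was done to it.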

### Technical Robustness

AI systems must perform reliably across their intended operating conditions. This includes handling edge cases, degrading gracefully when inputs fall outside training distribution, and maintaining consistent accuracy over time as deployment conditions evolve. Organizations must implement monitoring that detects performance degradation, data drift, and model decay.
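
Drift monitoring can start as simply as comparing the live input distribution against the training-time reference. The sketch below computes the population stability index (PSI), a common drift statistic; the 0.2 alert threshold mentioned in the docstring is a widely used rule of thumb, not a figure from the Act.

```python
import numpy as np

def population_stability_index(reference: np.ndarray, live: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a training-time reference feature and its live inputs.
    Values above roughly 0.2 are commonly treated as significant drift."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
    live_frac = np.histogram(live, bins=edges)[0] / len(live)
    # Clip empty bins to avoid log(0) and division by zero.
    ref_frac = np.clip(ref_frac, 1e-6, None)
    live_frac = np.clip(live_frac, 1e-6, None)
    return float(np.sum((live_frac - ref_frac) * np.log(live_frac / ref_frac)))
```

Computed per feature on a schedule, a statistic like this can feed the alerting that flags performance degradation and data drift before they become compliance incidents.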

AI governance platforms like [Watsonx Governance](/tools/watsonx-governance) provide model monitoring, bias detection, drift analysis, and lifecycle management. [Cisco AI Defense](/tools/cisco-ai-defense-platform) delivers AI safety testing, runtime guardrails, and governance controls.

### Logging and Traceability

High-risk AI systems must maintain automatic logging of system operations to enable post-deployment monitoring and incident investigation. Logs must be sufficient to trace AI system decisions and identify the inputs, processing steps, and outputs involved in any specific decision. This traceability requirement means organizations must retain enough context to reconstruct, after the fact, how any individual decision was produced.
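
A minimal sketch of what such a decision log can look like in practice follows. The record schema and logger name are illustrative assumptions; the Act specifies the outcome (traceable decisions), not a format.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

logger = logging.getLogger("ai_decision_audit")

def log_decision(model_version: str, inputs: dict, processing_steps: list[str],
                 outputs: dict) -> str:
    """Emit one structured, traceable record per AI system decision."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,                    # or a hash/reference for large payloads
        "processing_steps": processing_steps,
        "outputs": outputs,
    }
    logger.info(json.dumps(record, default=str))
    return record["decision_id"]
```

Shipping these records to tamper-evident, retention-managed storage turns the log into evidence that can support both post-market monitoring and incident investigation.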