How Artificial Intelligence Impacts Security

Anders Moen Hagalisletto

March 2026

Artificial Intelligence using machine learning and modern neural networks has been able to solve many new problems, and since 2010 expectations of the new technologies have skyrocketed, followed by fear of misuse and of human work in many fields becoming obsolete. The colossal and continuously increasing energy consumption required by the new generation of AI services has received little attention – but is outside the scope of this article[1]. A short remark though: global datacentres consumed 415 terawatt-hours in 2024, roughly 1.5 % of worldwide electricity consumption, and Artificial Intelligence applications in the US consumed 183 terawatt-hours in 2024 alone[2]. There has been much speculation about the role and possibilities of applying Artificial Intelligence in security, from the early 2010s up until today[3]. Both the attacker and the defender could take advantage of the new technology. In the beginning some researchers fantasized about entirely autonomous security: AI could act like an autopilot, identify bugs or security flaws on its own – and then write a patch to fix the problem and deploy it very fast, long before a human being had discovered the error. Some envisioned that AI could create a fully Autonomous Cybersecurity Infrastructure (ACI), removing the slow-working humans from the process. The idea was that neural networks would independently identify and distinguish threats, significantly more effectively than humans. Reality did not meet the initial expectations: although the systems were, or appeared, incredibly strong under specific conditions – they broke completely when confronted with unexpected snags. Some researchers coined the term brittleness for this phenomenon[4].

Modern Artificial Intelligence applications have already sneaked into every workplace, every workstation and almost every platform through US-based Big Tech companies, and are used every day by most employees in the industrialized world – the so-called invisible AI:

  • Text editors: The most frequently used text editors employ AI for processing text and improving efficiency, and are increasingly tailoring their suggestions by learning from the user (Microsoft Copilot, Microsoft Word or Google Workspace). Elegant graphic representations and visualizations can be proposed from rough notes[5].
  • Email communication: Both Gmail and Outlook use agentic drafting, which means that they use autonomous AI agents to generate, review and refine text[6].
  • Search engines contain AI-based tools that provide synthesized answers and cited sources, replacing traditional keyword-based searches[7].
  • Operating systems on mobile terminals (iOS and Android) have been extended with system-wide analysis that recognizes what you see on the screen[8].

One could say that societal awareness and stricter regulation do not come a day too early.

The introduction of the EU AI Act in 2024

The EU Artificial Intelligence Act (AI Act) is the world’s first comprehensive horizontal legal framework for AI[9]. Passed in 2024, it represents a risk-based approach to regulation: the greater the potential for harm, the tighter the rules. After years of celebrating the new technology and its many possibilities – the fear of misuse, exploitation and risk has gradually reached the public. The AI Act is the legal response to these considerations and worries, focusing on fundamental rights and public safety.

The introduction of modern AI, based on machine learning and statistical probability instead of the traditional deterministic, rule-based, logic-oriented variety, has opened many new possibilities for applications. New AI is flexible and learns from data – which means that it can be applied to many new domains. Where traditional AI was like a recipe book, modern AI is like a chef that has tasted millions of dishes and can thus improvise in new situations from its large knowledge base. There are some similarities between the establishment of the privacy regulation (GDPR) and the new EU AI Act. GDPR was a regulatory response to the increased exploitation of private information collected on a massive scale by various large enterprises, reclaiming control from the Big Tech companies. GDPR addressed the fragmentation of legislation and responsibility within twenty-eight EU countries, practicing twenty-eight different privacy policies. But equally important, it addressed an increasing power imbalance between a small number of very large high-tech companies mining enormous amounts of personal data for commercial exploitation. The goal of GDPR was to shift the power back to the individual.

Probably motivated by the success of GDPR, the EU decided to act in a similar way. One could say that the desire was to avoid the typical move-fast-and-break-things attitude that characterizes the internet business. As Anna Pirozzoli put it: “The objective is to create an informed and prepared society for AI. The EU, in particular, aims to establish a reliable and responsible regulatory framework for AI that enhances people’s lives while preserving societal values. In the rapidly evolving landscape of AI, it is important to consider the establishment of appropriate regulations to guide its adoption and implementation.”[10] Instead of lagging behind Big Tech and waiting for future problems to arrive, the EU acted early. Pirozzoli continues: “The recently implemented Artificial Intelligence Act is a significant step towards addressing the need to limit the potential abuse of AI. It also acknowledges the importance of striking a balance between regulation and innovation, which is critical to ensuring the responsible and beneficial application of AI”[11].

The EU AI Act addresses the increased threats that the new generation of Artificial Intelligence tools and applications pose. But the Act is not only meant to restrict Artificial Intelligence businesses; it is also meant to foster innovation within the EU through standardization. Since the same rules apply to all 27 EU member states, a company residing in an EU country knows that if its product is compliant with the EU AI Act, it has a market spanning all countries in the European Union. There are pitfalls though. Compliance costs: small and medium-sized companies typically struggle to produce documentation, establish sufficient logging services and arrange third-party testing, compared to their large-scale American counterparts.

Fundamental to the regulation is the notion of risk level: the regulation implicitly presents four risk levels for AI systems, namely unacceptable risk, high risk, limited risk and minimal risk, as described in Table 1.

AI risk level | Short description | Reference
Unacceptable risk | Prohibited practices, including subliminal or manipulative techniques, social scoring and real-time biometric identification. | Article 5
High-risk AI systems | The AI is a safety component of a product already covered by EU safety law, or applies to the use cases listed in Annex III: education, employment and essential services (such as credit and emergency triage). | Articles 6-49, Article 51
Limited risk | Technologies that are not dangerous enough to be banned or strictly regulated, but that carry transparency obligations. | Article 50
Minimal risk | Systems that do not fall under the other three categories, and are therefore not subject to specific obligations. | Recital 104

Table 1 The Risk levels in the EU AI Act explained with references to explicit text in the EU document.
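As a rough illustration of the risk-based logic, and emphatically not a legal classification, the table can be read as a lookup from use case to risk level. The category sets and use-case names in the following sketch are invented examples:

```python
# Illustrative sketch only: a simplified mapping of AI use cases to the
# four risk levels of the EU AI Act. The category names and use-case
# lists are assumptions for demonstration, not a legal classification.

PROHIBITED = {"social scoring", "subliminal manipulation",
              "real-time biometric identification"}
HIGH_RISK = {"emergency triage", "credit scoring", "exam proctoring",
             "border control", "critical infrastructure"}
LIMITED_RISK = {"chatbot", "deepfake generation"}  # transparency duties

def classify_risk(use_case: str) -> str:
    """Return the (simplified) EU AI Act risk level for a use case."""
    if use_case in PROHIBITED:
        return "unacceptable risk (Article 5)"
    if use_case in HIGH_RISK:
        return "high risk (Articles 6-49)"
    if use_case in LIMITED_RISK:
        return "limited risk (Article 50)"
    return "minimal risk (Recital 104)"

print(classify_risk("credit scoring"))  # high risk (Articles 6-49)
print(classify_risk("chatbot"))         # limited risk (Article 50)
```

A real classification would of course require legal analysis of Annex III and the exemptions in Article 6; the point here is only the shape of the decision logic.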

Unacceptable risk: Article 5 describes the prohibited AI practices. The summary of the article reads: “The EU AI Act prohibits certain uses of artificial intelligence (AI). These include AI systems that manipulate people’s decisions or exploit their vulnerabilities, systems that evaluate or classify people based on their social behaviour or personal traits, and systems that predict a person’s risk of committing a crime. The Act also bans AI systems that scrape facial images from the internet or CCTV footage, infer emotions in the workplace or educational institutions, and categorize people based on their biometric data. However, some exceptions are made for law enforcement purposes, such as searching for missing persons or preventing terrorist attacks.”

High-risk AI systems: The largest chunk of the EU regulation concerns high-risk AI systems, the entire Chapter 3 (Articles 6-49). A system is considered high-risk if (a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I (a list of Union harmonisation legislation), as described in paragraph 1. In addition, the AI systems referred to in Annex III are to be considered high-risk. These include:

  1. Biometrics: (a) identification systems, except systems solely used to confirm identity, (b) biometric categorization systems based on sensitive or protected attributes and (c) emotion recognition systems.
  2. Critical infrastructure, for instance the management and operation of critical digital infrastructure, road traffic, water supply, gas, heating or electricity.
  3. Educational and vocational training, including (a) systems to determine access or admission at all levels, (b) evaluation of learning outcomes, (c) assessment of the appropriate level of education that an individual will receive or access, at all levels, and (d) systems used for monitoring and detecting prohibited behaviour of students during tests, at all levels.
  4. Systems used for employment, workers management and self-employment, including (a) systems to select natural persons and (b) decision systems affecting work-related relationships.
  5. Access to and enjoyment of essential private services and essential public services and benefits: (a) systems for evaluating eligibility for essential public assistance benefits and services (typically healthcare), (b) systems for evaluating creditworthiness, (c) systems for pricing of life and health insurance and (d) systems for evaluating and classifying emergency calls by natural persons.
  6. Systems for law enforcement: (a) by or on behalf of authorities to assess the risk of a natural person becoming the victim of a criminal offence, (b) systems used as polygraph tools, (c) to evaluate the reliability of evidence, (d) assessing the risk of natural persons offending or re-offending and (e) for the detection, investigation or prosecution of criminal offences.
  7. Systems for migration, asylum and border control management, such as (a) systems used as polygraphs or similar tools, (b) to assess risks of irregular migration or health risks posed by a natural person, (c) assisting competent authorities in examining applications for asylum, visa or residence permits and complaints, and (d) detecting, recognizing or identifying natural persons.
  8. Systems for the administration of justice and democratic processes, like (a) researching and interpreting facts and the law, applying the law to concrete facts, or being used in alternative dispute resolution, and (b) systems intended to be used for influencing the outcome of elections or referendums.

The reason why all these domains are mentioned is that if an AI system fails, produces incorrect answers or is misused – it can have devastating consequences for both individuals and societal trust.

The Role of Cyber Security in the EU AI Act

The EU AI Act addresses the concept of security forty-two times in the 135-page document, with cybersecurity as the most frequent occurrence. Article 15 addresses accuracy, robustness and cybersecurity directly, but the article is short, only five paragraphs long. The summary states that high-risk AI systems should be designed to be accurate, robust and secure, without specifying in more detail which concrete requirements should apply. Since the approach in the document is risk-based, the requirements are formulated rather vaguely: the systems should achieve an appropriate level of accuracy, robustness and cybersecurity[12], and the robustness of high-risk AI systems may be achieved through technical redundancy solutions, which may include backup or fail-safe plans[13]. For ordinary cybersecurity, the Act relies on existing cybersecurity regulation[14] and state-of-the-art best practices. The question arises: is cybersecurity sufficiently addressed in the regulation? The answer is probably no – since the technology and its applications are continuously evolving.

Two important questions should be addressed:

  1. What practical tasks should be carried out within an organization to comply with the regulation and manage Artificial Intelligence in a responsible way?
  2. What measures should be in place before AI is permitted, implemented and actively used inside the organization?

The second question seems a bit awkward in the light of the Big Tech companies’ extensive application of AI in ordinary IT products and services, discussed in the introduction. The EU AI Act and the ISO 42001 standard enter the scene after the game has started.

How can we break this down into its constituents? Let us consider what the ICT architects could and should do, and let us distinguish between three distinct architecture roles, the enterprise architect, the solution architect and the security architect.

  • The enterprise architect focuses on the big picture and the IT strategy, aligning IT strategy with business goals, typically working at a high level to define principles and roadmaps and to avoid business units creating siloed or redundant systems. Enterprise architects typically appear in large or very large organizations, but not in medium-sized or small organizations, where the portfolio of systems, business units and departments is manageable.
  • The solution architect focuses on individual projects or systems. If the organization is large, the solution architect takes the strategic vision from the enterprise architect and applies it to specific projects or applications. The solution architect is responsible for documenting the high-level and low-level designs of solutions; in practice this means producing architectural diagrams showing component topologies, behaviour (typically in the form of message interaction diagrams) and dependencies between the data and components used. The solution architect is well aware of and updated on standards, regulation and relevant policies for the domain the team is designing and implementing in.
  • The security architect has the responsibility of assuring that the systems designed or used within the organization are secure. In practice this means the security architect may act as a compliance officer, assuring or measuring the maturity of solutions or systems according to international compliance standards for the relevant domain. The security architect should also possess an extensive understanding of the security mechanisms that could be used to secure applications and systems, be capable of producing security diagrams that highlight these mechanisms and the potential threats and attacks, and is typically involved in risk assessment processes.

Some organizations also use other IT architect titles, like data architect, network/infrastructure architect, cloud architect, integration architect or business architect, to name a few. Except for the business architect, all these roles might be subsumed under the role of solution architect. How does the EU AI Act map onto the concrete architecture roles?

Enterprise Architect:
  • Create and maintain an inventory and classification of AI systems in use by the organization. This information could be part of or integrated with the Configuration Management Database (CMDB) or similar.
  • Integrate and align existing policies, such as ISO 27001, ISO 9001, GDPR or other relevant regulation, with the AI policy (ISO 42001) and the EU AI Act.
  • Establish standards for third-party AI, including model cards and system transparency descriptions, required by Article 13 of the EU AI Act.

Solution Architect:
  • Translate the legal requirements of the EU AI Act into technical blueprints, in particular Articles 10, 12, 13 and 14. Principles of how to comply can be formulated at a high level, but the solution architect should oversee and control that the principles are implemented in an adequate manner. Examples: design the user interface so that users know they are interacting with an AI (Article 13), assure that appropriate logging is turned on so that incidents and undesired events can be traced later (Article 12), facilitate override capabilities to stop or reverse decisions made by an AI component (Article 14), and document the origin and lineage of training data (Article 10).
  • Create system architecture overviews and dependency graphs showing exactly how the AI modules interact with the existing non-AI systems and data within the organization or its applications, preferably integrated with or connected to a CMDB or similar system.
  • Assure, for the concrete systems, applications or services the architect is responsible for, that the functionality of AI-integrated services is properly understood, documented and implemented.

Security Architect:
  • Take overall responsibility for the well-functioning of AI within the systems, applications and organization: the AI implementation should not pose a threat in itself, and the AI application should be protected against external threats and internal failures leading to security breaches.
  • Apply risk analysis to AI applications.
  • Establish protection against data poisoning (corruption of training data) and against inference attacks, where the AI model’s logic is stolen.
  • Establish regular tests for bias and for robustness to hallucinations.
  • Protect the weights and parameters of the model itself, so that they are not stolen and used by malicious actors.
  • Automate monitoring in a safe and secure manner, to prevent toxic content and abuse of Personally Identifiable Information.

Table 2 Suggestion of architectural roles in an enterprise and how these roles map to typical responsibilities and tasks in the implementation of the EU AI Act.
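To make the enterprise architect's inventory duty concrete, here is a minimal sketch of what one entry in an AI asset inventory might look like, with simple gap checks against the logging and oversight duties. The field names, asset names and checks are assumptions for illustration, not anything mandated by the Act:

```python
# Minimal sketch of an AI asset inventory entry, as an enterprise
# architect might maintain it alongside a CMDB. All field names and
# example values are invented for illustration.
from dataclasses import dataclass

@dataclass
class AIAsset:
    name: str
    vendor: str
    risk_level: str                # per the EU AI Act classification
    purpose: str
    logs_enabled: bool = False     # Article 12: traceability
    human_oversight: bool = False  # Article 14: override capability

    def compliance_gaps(self) -> list:
        """Flag obvious gaps for a high-risk system."""
        gaps = []
        if self.risk_level == "high" and not self.logs_enabled:
            gaps.append("Article 12: event logging not enabled")
        if self.risk_level == "high" and not self.human_oversight:
            gaps.append("Article 14: no human override in place")
        return gaps

asset = AIAsset("triage-assistant", "ExampleVendor", "high",
                "emergency call prioritization")
print(asset.compliance_gaps())
```

In practice such records would live in the CMDB itself; the sketch only shows that the inventory and the compliance checks can be mechanized once the classification exists.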

What impact might the application of the new generation of Artificial Intelligence have on security in general?

In cyber security there are two main perspectives – the defender side and the attacker side[15]. Artificial Intelligence impacts the defender and the attacker quite differently. Let us consider the defender first: large amounts of data can be analysed very quickly through smart application of AI, saving time for the SOC team and mitigating security breaches fast. AI improves Managed Detection and Response (MDR) in several respects:

  1. Threat hunting and threat intelligence: Systems can be trained to detect and identify threats by collecting and processing threat data from multiple sources across an organization.
  2. Improved SOC operations: Providers of software for Managed Detection and Response expect that AI can improve their tools and the overall performance and accuracy of existing tools, leading to more efficient work by the SOC teams applying them.
  3. Enhanced and tailored cybersecurity training: AI can be used to build personalized security training programs for ordinary employees, addressing only what is needed and strengthening those who need more challenges because their role demands particular competencies.
  4. Increased security innovation: The blogger at Sophos argues that AI and Machine Learning can boost innovation by assisting with the quick adaptation that SOC teams must perform in order to keep up with the never-ending threat landscape.
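The threat-hunting idea in point 1 can be illustrated with a deliberately tiny example: score incoming events by how rare they are under a baseline learned from past activity. Real MDR products use far richer models; the event names, baseline and alert threshold below are invented for the sketch:

```python
# Toy illustration of baseline-driven threat hunting: events are scored
# by rarity relative to observed history. Event names, baseline data and
# the alert threshold are invented; real tools use far richer features.
from collections import Counter
import math

baseline = ["login", "login", "file_read", "login", "file_read",
            "login", "file_write", "login", "login", "file_read"]
model = Counter(baseline)
total = sum(model.values())

def rarity_score(event: str) -> float:
    """Higher score = rarer event under the baseline (add-one smoothing)."""
    p = (model.get(event, 0) + 1) / (total + len(model) + 1)
    return -math.log(p)

for event in ["login", "registry_edit"]:
    flag = "ALERT" if rarity_score(event) > 2.0 else "ok"
    print(event, round(rarity_score(event), 2), flag)
```

The never-seen `registry_edit` event scores far above the common `login` event, which is the whole principle: the model does not know the attack, it only knows what normal looks like.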

Are there topics in security where AI does not help?

Almost no one back in the 2010s predicted that by 2025 AI would be able to write its own exploit code or mimic a CEO’s voice with very high accuracy. Both scenarios are now reality. So are there no limits to what state-of-the-art AI software can be used for in order to generate attacks? There are limits:

Current Artificial Intelligence systems cannot prevent the following attacks:

  1. Social engineering attacks that use psychological manipulation, like persuasion through phone calls or urgent pressure. AI does not have intuition and is not able to detect dubious or harmful interpersonal interaction in real life, like coercion through fear or unjust submission.
  2. Physical security breaches: AI lives in the digital realm and has no means to stop a physical attacker from intruding into the premises where data and systems reside.
  3. Adversarial machine learning, where AI is attacking “itself”: There are three methods: (a) evasion attacks, where the attacker uses a separate AI to calculate specific patterns of noise that confuse the good AI into making mistakes, (b) data poisoning, where the attacker feeds the system with enormous amounts of suspicious, malicious transactions labelled as good or safe, over time retraining the AI application to treat unsafe as safe, and (c) model extraction, where the attacker automates millions of queries to a hidden AI (e.g. a top-tier diagnostic tool) and records the output in order to train their own model, mimicking the hidden AI and thereby stealing its brain.
  4. Zero-day exploits: Since AI models in security are trained on existing, known threats and exploits, the models cannot recognize the new pattern in the attack. Among the zero-day attacks we find exploitations of design flaws. Reconstructing the design of a potentially large system is currently not possible, due to a lack of research and technology on how to reconstruct a design from the running implementation[16].
  5. Insider threats: users with legitimate access who perform malicious actions, or breaches due to careless behaviour or unintentional errors by legitimate users. An AI-based monitoring or intrusion detection tool would only observe supposedly legitimate behaviour by a legitimate user.
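The model-extraction attack in point 3(c) can be demonstrated in miniature. Here the "hidden" model is reduced to a single secret decision threshold, an artificial simplification chosen so that the query-only attack fits in a few lines; the principle is the same as with millions of queries against a real model:

```python
# Toy illustration of model extraction: the attacker only queries the
# hidden model and observes yes/no answers, yet reconstructs an
# equivalent model. The single-threshold "model" is an artificial
# simplification for the sake of the example.

SECRET_THRESHOLD = 0.37  # internal to the "hidden" model

def hidden_model(x: float) -> bool:
    """The victim's API: returns only a yes/no answer."""
    return x >= SECRET_THRESHOLD

def extract_threshold(queries: int = 40) -> float:
    """Binary-search the decision boundary using only query access."""
    lo, hi = 0.0, 1.0
    for _ in range(queries):
        mid = (lo + hi) / 2
        if hidden_model(mid):
            hi = mid  # boundary is at or below mid
        else:
            lo = mid  # boundary is above mid
    return (lo + hi) / 2

stolen = extract_threshold()
print(round(stolen, 4))  # close to 0.37 after only 40 queries
```

Forty queries pin the secret boundary down to within about 10^-12, which is why production APIs rate-limit queries and watermark outputs as a defence against this class of attack.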

Management systems for Artificial Intelligence

An Artificial Intelligence Management System (AIMS) is a specialized, tailored framework designed to help organizations govern, monitor and mitigate the risks of using AI technologies. Why do we need an AIMS? Legislators and industry leaders have realized that modern AI is significantly different from traditional software, including traditional AI. Traditional software is deterministic and therefore in principle predictable, while Artificial Intelligence is probabilistic, meaning that its behaviour changes as it learns from new data. An AIMS addresses three challenges: assuring that identified persons have the overall responsibility for explaining why an AI component made a specific decision, avoiding that the AI becomes biased or inaccurate over time, and assuring that there is a person in charge and responsible if an AI system causes harm or breaks regulation. The International Standard for Artificial Intelligence Management Systems, ISO/IEC 42001[17], states that the document “specifies the requirements and provides guidance for establishing, implementing, maintaining and continually improving an AI (artificial intelligence) management system within the context of an organization”[18]. The standard is intended for use by an organization providing or using products or services that utilize AI systems, and helps the organization develop, provide or use AI systems responsibly in pursuing its objectives and meeting applicable requirements, obligations related to interested parties and expectations from them. So what are the challenges as seen from the perspective of ISO? The authors mention three overall challenges[19]:

  1. The use of AI for automated decision-making, sometimes in a non-transparent and non-explainable way, can require specific management beyond the management of classical IT systems.
  2. The use of data analysis, insights and machine learning, rather than human-coded logic, to design systems both increases the application opportunities for AI systems and changes the way that such systems are developed, justified and deployed.
  3. AI systems that perform continuous learning change their behaviour during use. They require special consideration to ensure their responsible use continues with changing behaviour.

An AIMS is like a dashboard for the company or organization management, typically including four topics: policy, risk assessment, data quality and documentation, see Table 3.

Component name | Description of component
AI Policy and Governance | Decide what kind of rules should apply within the organization.
Risk Assessment | Perform risk analysis of what could go wrong, the impact on users, and potential mitigations of risks.
Data Quality Management | Manage and monitor the quality of the data used for learning, assuring that the data is unbiased and legally obtained.
Documentation and Transparency | The AI management system should provide sufficient evidence, in the form of logs and reports, of how AI models are tested and validated.

Table 3 Core components of a typical Artificial Intelligence Management System.
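The data-quality and monitoring components above can be sketched as a simple drift check: compare the model's recent output distribution against a reference window and alert when they diverge. The use of total variation distance, the alert threshold and the example label distributions are illustrative assumptions; production AIMS tooling would use richer statistics:

```python
# Sketch of output-drift monitoring for an AIMS: compare the label
# distribution of recent model decisions with a reference window.
# The metric (total variation distance), threshold and example data
# are assumptions for illustration only.
from collections import Counter

def distribution(labels):
    counts = Counter(labels)
    n = len(labels)
    return {label: c / n for label, c in counts.items()}

def drift(reference, recent) -> float:
    """Total variation distance between two label distributions (0..1)."""
    p, q = distribution(reference), distribution(recent)
    labels = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0) - q.get(k, 0)) for k in labels)

reference = ["approve"] * 80 + ["deny"] * 20   # behaviour at deployment
recent    = ["approve"] * 55 + ["deny"] * 45   # behaviour this month

score = drift(reference, recent)
print(round(score, 2), "DRIFT ALERT" if score > 0.1 else "ok")
```

A check like this, run periodically and logged, is one concrete way to produce the evidence trail that the Documentation and Transparency component asks for.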

We may combine the implementation of AI management (ISO 42001) with compliance with the EU AI Act, involving the three architecture roles: the enterprise architect, the solution architect and the security architect. A stepwise approach, divided into scoping, risk assessment, design, operations and audit, describing how and at which stage each architecture role typically would act, is given in Table 4.

Implementation phase | Architect action | Compliance target
Scoping | The enterprise architect builds the AI asset inventory. | ISO 42001, Clause 4
Risk Assessment | The security architect performs impact assessments of the proposed AI implementations. | EU AI Act Art. 9
Design | The solution architect integrates transparency interfaces. | EU AI Act Art. 13/50
Operations | The security architect sets up a solution for monitoring model drift and potential bias. | ISO 42001 Annex A
Audit | All three architect roles should ideally be involved in the documentation phase (although this is unrealistic in practice). For high-risk systems there should be updated documentation giving: a general description of the system; a detailed map of the architecture used and the training methodology; information about the data used for training and validation; and detailed instructions for how to use the system and provide human oversight. | EU AI Act Art. 11

Table 4 Implementation roadmap for architects, integrating both ISO/IEC 42001 and EU AI Act.


[1] The paper by James O’Donnell and Casey Crownhart, We did the math on AI’s energy footprint. Here’s the story you haven’t heard, published online by MIT Technology Review, May 20, 2025, describes the situation regarding AI and energy consumption in detail: Article on AI and Energy consumption.

[2] See the article by Asrar, Energy Consumption Analysis 2025–2030. According to the author, projections are expected to reach 606 TWh already by 2030, with image generation and reasoning models taking the largest share of consumption.

[3] A recent YouTube video shows Bernie Sanders and Alexandria Ocasio-Cortez presenting the recent AI Data Center Moratorium Act.

[4] H. Abebayo: Human-in-the-Loop Explainable AI for Reliable Autonomous Cybersecurity Infrastructure, January 2026,  https://www.preprints.org/manuscript/202601.2031

[5] AI Tools in 2026: What Each Platform Does Best in Real-World Workflows  

[6] arxiv.org: Email in the Era of LLMs

[7] arxiv.org: Search Engines in an AI Era: The False Promise of Factual and Verifiable Source-Cited Responses

[8] Top 5 AI Trends to Watch in 2026.

[9] https://artificialintelligenceact.eu/the-act/

[10] A. Pirozzoli: The Human-centric Perspective in the Regulation of Artificial Intelligence, section V. The Human-centric approach, European papers, Link to A. Pirozzoli’s paper.

[11] Same paper, Section V.

[12] Article 15, item 1.

[13] Article 15, item 4.

[14] The European Cybersecurity Act, EU 2019/881 – EUR-Lex, https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32019R0881

[15] The company Sophos has published a well-written blog on AI in Cybersecurity: https://www.sophos.com/en-us/cybersecurity-explained/ai-in-cybersecurity

[16] There has been research into extracting precise architecture drawings and models from source code, but there is still a long way to go before we have correct automated extraction or reconstruction of a design from source code or an execution environment, although language models and AI tools have recently been used to enhance and enrich the resulting models.

[17] ISO/IEC 42001:2023(E)

[18] Ibid. page 1, Chapter 1, Scope.

[19] Ibid. page vi, Introduction.
