Written by Markus Zeischke

In most cases, it is not malicious intent but human carelessness. Under time pressure or simply out of convenience, employees use unauthorised tools (shadow AI) or upload sensitive data to public LLMs. In the age of AI, a minor slip-up – such as a thoughtless prompt – can instantly escalate into a massive data breach.
Identity security is the digital lock. Zero Trust and the principle of least privilege ensure that users only have access to what they absolutely need. AI-powered systems detect in real time when a login pattern (location, time, device) is atypical and block compromised accounts before any damage is done.
AI can act as a digital immune system: it detects deepfakes by analysing synthetic frequencies, exposes AI-based phishing attempts by identifying unusual communication contexts, and monitors data flows to external AI platforms. It protects people in situations where their biological perception reaches its limits when confronted with highly sophisticated AI-generated fakes.
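To make the idea of real-time login checks described above more concrete, here is a minimal, purely illustrative Python sketch of a rule-based risk score over location, time and device. The field names, baseline profile and thresholds are assumptions for the sake of the example, not any specific product's API.

```python
# Illustrative sketch only: a rule-based check for atypical logins, in the
# spirit of the Zero Trust idea described above. All names and thresholds
# are assumptions, not a real product's interface.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LoginEvent:
    user: str
    country: str        # geo-resolved from the source IP
    device_id: str      # fingerprint of the device used
    timestamp: datetime

# Hypothetical baseline built from each user's recent, verified logins.
BASELINE = {
    "j.doe": {"countries": {"DE"}, "devices": {"laptop-4711"}, "work_hours": range(7, 20)},
}

def risk_score(event: LoginEvent) -> int:
    """Return a simple additive risk score; higher means more suspicious."""
    profile = BASELINE.get(event.user)
    if profile is None:
        return 100  # unknown user: treat as high risk
    score = 0
    if event.country not in profile["countries"]:
        score += 40  # unusual location
    if event.device_id not in profile["devices"]:
        score += 40  # unknown device
    if event.timestamp.hour not in profile["work_hours"]:
        score += 20  # atypical time of day
    return score

event = LoginEvent("j.doe", "BR", "phone-9999", datetime(2025, 3, 1, 3, 12))
if risk_score(event) >= 60:
    print("Step-up authentication required or session blocked.")
```

In practice, such rules are only the starting point; the point of the sketch is simply that location, time and device together say far more about a login than a correct password does.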
Imagine the following scenario: An employee receives an urgent call from their CFO. The voice sounds completely authentic. “Please authorize the payment immediately. It’s time-sensitive.” The payment is authorized. But the CFO never actually called.
Such deepfake deceptions are no longer the exception. The number of synthetic audio and video files has exploded worldwide from 500,000 in 2023 to 8 million in 2025 (+900% annually). At the same time, employees are becoming increasingly susceptible to manipulation through AI-optimized social engineering attacks. AI phishing now achieves a click-through rate of 54% – 4.5 times higher than traditional phishing emails (12%). And internally, a second danger is emerging: 15% of employees use GenAI tools, 72% of them via private accounts, which can quickly lead to leaks via shadow AI.
The result: the modern insider is no longer just a human being, but a human being supported or deceived by AI.
These examples demonstrate that AI is not only changing external attack techniques, but is also shifting the risk within companies. Employees under time pressure who react to convincing deepfakes or unwittingly use shadow AI tools can quickly become part of the attack chain.
This creates a new threat profile: the Insider 2.0 – a combination of human behavior, digital tools, and AI-enabled deception. Security strategies must understand how this insider threat emerges, how AI acts as a multiplier, and which insider risks need reassessment.
In a typical modern business, employees work under time pressure, use digital tools, and switch between meetings, emails and chats. At any moment, an attacker could exploit this dynamic. The concept of the insider is by no means new. But what has changed is the interplay between people and technology. AI is shifting the boundaries of what an insider can trigger, whether consciously or unconsciously. Traditionally, insiders can be broadly divided into three roles.
The malicious insider: Intent is the key factor here. Someone deliberately accesses confidential information, manipulates systems or pursues personal or financial motives. AI is shifting the balance of power: tasks that previously required technical expertise can now be carried out through simple queries or automated systems. Malicious insiders need less expertise and have more opportunities to act undetected.
The careless insider: This is the most common form. The individual acts not out of malice, but due to time pressure, convenience or ignorance. They might use an unauthorized AI tool, upload confidential content, or inadvertently disclose login credentials. AI exacerbates this form of negligence because seemingly harmless actions, such as a prompt or an upload, can suddenly expose large amounts of data. What appears to be a shortcut can quickly become a security issue.
The compromised insider: The most dangerous scenario is often one that no longer originates from the individual themselves. An account is taken over, a device is infected, or a login is misused. From the outside, everything appears legitimate. AI enables attackers to disguise themselves more effectively by mimicking communication patterns, generating credible internal requests, and operating with a precision that circumvents conventional detection methods. The compromised account becomes the perfect insider – invisible, functional and trustworthy.
Traditional roles remain in place, but AI is changing the rules of the game.
It is not the insider who is changing, but the opportunities and threats. Whether acting intentionally, negligently or under external control, every type of insider is made stronger, faster and harder to detect by AI. Therefore, the Insider 2.0 is not a new type of perpetrator, but rather the result of a new ecosystem. It is one in which humans and machines collaborate closely, attackers mimic human behaviour, and AI permeates everyday work processes.
This shifts the security question away from “Who has access?” to “What can this access cause?”
In a business context, AI becomes a tool that unleashes its potential in both directions: It boosts efficiency and creativity while opening doors that were previously closed. The following examples illustrate how insider threats can arise in everyday life.
In many companies, shadow AI has long been part of everyday life. Employees use AI tools to make their work easier, such as creating quick drafts, performing analyses, translating documents, and generating recommendations. The idea is pragmatic; the consequences are often invisible.
Each time confidential content finds its way into an unauthorized system, the potential for a data leak arises. Unlike with traditional tools, it is not always clear where the information ends up, how it is processed or stored, or whether it will resurface later.
Therefore, shadow AI is less a technical problem than a cultural one. It is not the technology itself that poses the risk, but rather its uncontrolled use in everyday life.
While shadow AI inadvertently opens doors, deepfakes are designed to deliberately undermine trust. An artificially generated voice of a senior executive, a manipulated video message or a deceptively genuine approval instruction can be enough to set decisions in motion that no one intended.
In the past, one could rely on certain cues, such as tone of voice, speech patterns, gestures, and demeanor. Today, however, these signals can be manipulated. The result? Decisions based on personal credibility become vulnerable. Teams that need to react quickly risk being unable to distinguish between trusted and untrusted communication channels.
This scenario is not futuristic; it is already happening. It is changing the way companies need to secure internal communication.
Traditional phishing relied on a scattergun approach, sending lots of messages and hoping for a few hits. AI turns this principle on its head. Rather than using generic bait, AI generates messages that seem perfectly tailored to the recipient – whether in the organization’s style, the department’s tone, or the project's logic.
This makes social engineering more personalized, credible and difficult to detect. Well-faked chat histories, precisely worded queries, and seemingly harmless files can all appear to come from colleagues.
Thus, the attacks become less visible and more sophisticated. Companies must prepare for the possibility that AI phishing attempts will not appear as external attacks, but rather as internal messages that plausibly move work forward.
These scenarios all have one thing in common: AI transforms external attacks and internal errors into hybrid risks that are hidden within the organization. In this way, AI itself becomes an ‘accomplice’. Not because it is malicious, but because it provides capabilities that can be exploited by attackers and unwitting employees alike.
At the same time, AI naturally also opens up new defensive possibilities: Security platforms increasingly use machine learning to detect unusual user activity, suspicious access patterns, and data movements early on, making insider risks more visible.
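As an illustration of this defensive use of machine learning, the following sketch trains scikit-learn's IsolationForest on synthetic per-day activity features (files accessed, data uploaded, after-hours logins) and flags an exfiltration-like day as an outlier. The features and numbers are invented purely for demonstration; real platforms work with far richer telemetry and calibrated models.

```python
# Minimal sketch of anomaly detection on user activity, as described above.
# Uses scikit-learn's IsolationForest on made-up daily feature vectors.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" behaviour: modest file access, small uploads, rare night logins.
normal = np.column_stack([
    rng.poisson(20, 500),       # files accessed per day
    rng.normal(50, 15, 500),    # MB uploaded per day
    rng.poisson(0.2, 500),      # after-hours logins per day
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A day that looks like bulk exfiltration to an external service.
suspicious_day = np.array([[450, 3200, 6]])
print(model.predict(suspicious_day))            # -1 means anomaly
print(model.decision_function(suspicious_day))  # lower score = more anomalous
```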
New insider risks cannot be contained by technology alone. AI has revealed that threats can no longer be easily categorized as internal or external, intentional or unintentional. What is needed is a combination of governance, culture and technology. In that order.
Although governance may appear to be a formal process at first glance, it actually forms the foundation of every AI security strategy. Rather than tightening rules, it provides guidance.
Companies that set clear guidelines not only promote security but also foster trust. Employees know how to use AI without crossing the line. Importantly, governance is a navigation system, not a control mechanism. Employees should not avoid using AI. They should know how to use it safely.
In the age of AI, annual training slides are ineffective: AI-powered deceptions are too versatile, too realistic and too dynamic for static content to keep up.
The goal is not to make employees suspicious, but rather to make them aware of the patterns generated by AI-powered attacks. This is more about intuition than knowledge: the feeling that ‘something isn’t quite right’, even when everything appears perfect.
Technical measures remain essential, but they need to be reimagined. They should be more like dynamic guardrails than rigid walls. Key components could include identity security built on Zero Trust and least privilege, real-time detection of atypical logins, and monitoring of data flows to external AI platforms.
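One such guardrail, kept deliberately simple, could scan a prompt for obviously sensitive patterns before it leaves the organization for an external AI tool. The patterns and blocking policy below are illustrative assumptions, not a full DLP implementation.

```python
# Deliberately simple guardrail sketch: scan an outbound prompt for obviously
# sensitive patterns before it is sent to an external AI service. Patterns and
# policy are illustrative; real DLP/CASB tooling is far more sophisticated.
import re

SENSITIVE_PATTERNS = {
    "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "internal label": re.compile(r"\b(confidential|internal only|vertraulich)\b", re.I),
    "credential": re.compile(r"(password|api[_-]?key)\s*[:=]\s*\S+", re.I),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of all sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

prompt = "Summarise this contract. Internal only. IBAN DE89370400440532013000."
findings = check_prompt(prompt)
if findings:
    print(f"Blocked before upload: {', '.join(findings)}")
else:
    print("Prompt forwarded to the approved AI tool.")
```

The value of such a check lies less in its precision than in the moment of friction it creates: the employee is reminded, at the point of upload, that the content is not meant for an external system.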
AI is not only the source of new risks, but also the most effective response to them. With its capabilities, it can directly address these new threat scenarios: detecting deepfakes by analysing synthetic frequencies, exposing AI-based phishing attempts through unusual communication contexts, and monitoring data flows to external AI platforms.
The aim is to create a technical environment in which employees can work productively, free from the risk that misconduct, manipulation or deception leads to disaster.
Prevention does not rely on additional layers of control, but rather on a combination of factors. Governance provides direction. Awareness fosters competence. Technical safeguards protect day-to-day operations. Together, these factors create an environment in which AI is not a risk, but rather a tool that organizations can use safely and responsibly.
The path to the future of security strategies does not necessarily lie in more complex access controls, but rather in a human-centric security model that combines three elements: governance, awareness and technical safeguards.
Although people remain the most critical component of security, AI is now part of the equation. Resilience arises from considering both people, with their strengths and weaknesses, and machines, with their capabilities and risks, together.
In the age of AI, security no longer means merely controlling people, but shaping responsibility – which is often supported by technologies that protect people where their biological perception reaches its limits.
Risks often manifest themselves through changes in behaviour: extreme stress, frustration or a sudden interest in data outside one’s own area of responsibility. AI attacks also exploit the ‘compliance reflex’ (obedience towards (fake) superiors). An open security culture, in which concerns can be reported without fear, is the best protection here.