Written by Markus Zeischke

In most cases, the problem is not malicious intent but human carelessness. Under time pressure or simply out of convenience, employees use unauthorised tools (shadow AI) or upload sensitive data to public LLMs. In the age of AI, a minor slip-up – such as a thoughtless prompt – can instantly escalate into a massive data breach.
Identity security is the digital lock. Zero Trust and the principle of least privilege ensure that users only have access to what they absolutely need. AI-powered systems detect in real time when a login pattern (location, time, device) is atypical and block compromised accounts before any damage is done.
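To make this concrete, here is a minimal sketch of what such a login check might look like in principle. It is deliberately simplified and not a real product API: the example user, the baseline profile, the scoring rules and the "block" response are invented for illustration, and real platforms learn these baselines statistically rather than from hard-coded rules.

```python
# Toy illustration of risk-based login evaluation (location, time, device).
# All names, profiles and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class LoginEvent:
    user: str
    country: str
    hour: int        # 0-23
    device_id: str

# Hypothetical per-user baseline, e.g. derived from recent successful logins
BASELINE = {
    "j.doe": {"countries": {"DE"}, "hours": range(7, 20), "devices": {"laptop-4711"}},
}

def risk_score(event: LoginEvent) -> int:
    """Count how many attributes deviate from the user's usual pattern."""
    profile = BASELINE.get(event.user)
    if profile is None:
        return 3  # no baseline: treat as maximum risk
    score = 0
    score += event.country not in profile["countries"]
    score += event.hour not in profile["hours"]
    score += event.device_id not in profile["devices"]
    return score

def handle_login(event: LoginEvent) -> str:
    score = risk_score(event)
    if score >= 2:
        return "block"          # looks like a compromised account
    if score == 1:
        return "step-up-auth"   # request an additional factor
    return "allow"

# Login from an unusual country, at 3 a.m., from an unknown device -> block
print(handle_login(LoginEvent("j.doe", "BR", 3, "phone-0815")))
```

In a real Zero Trust setup, the same idea is applied continuously and probabilistically; the sketch only shows the principle that access decisions follow the observed context rather than a one-time password check.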
AI can act as a digital immune system: it detects deepfakes by analysing synthetic frequencies, exposes AI-based phishing attempts by identifying unusual communication contexts, and monitors data flows to external AI platforms. It protects people where human perception reaches its limits in the face of highly sophisticated AI-generated fakes.
Let’s imagine the following scenario: An employee receives an urgent call from their CFO. The voice sounds completely authentic: “Please authorise the payment immediately. It’s time-sensitive.” The payment is authorised. But the CFO never actually called.
Such deepfake deceptions are no longer the exception: the number of synthetic audio and video files has exploded worldwide from 500,000 in 2023 to 8 million in 2025 (+900% annually). At the same time, employees are becoming increasingly easy to manipulate through AI-optimised social engineering attacks: AI phishing now achieves a click-through rate of 54% – 4.5 times higher than traditional phishing emails (12%). And internally, a second danger is emerging: 15% of employees use GenAI tools, 72% of them via private accounts, which can quickly lead to leaks via shadow AI.
The result: the modern insider is no longer just a human being, but a human being supported or deceived by AI.
These examples show that AI is not only changing external attack techniques, but is also shifting the risk within a company. Employees who act under time pressure, react to convincing deepfakes or unconsciously use shadow AI tools can quickly become part of the attack chain.
This gives rise to a new threat profile: the Insider 2.0 – a combination of human behaviour, digital tools and AI-enabled deception. The challenge for security strategy lies in understanding how this insider threat emerges, why AI acts as a multiplier, and which insider risks need to be reassessed.
In a typical modern business, people work under time pressure, use digital tools, and switch between meetings, emails and chat; at any moment, an attacker could exploit precisely this dynamic. The concept of the insider is by no means new. But what has changed is the interplay between people and technology. AI is shifting the boundaries of what an insider can trigger, whether consciously or unconsciously. Traditionally, insiders can be broadly divided into three roles:
The malicious insider: Here, intent is the key factor: someone deliberately accesses confidential information, manipulates systems or pursues personal or financial motives. AI is shifting the balance of power: tasks that previously required technical expertise can now be carried out through simple queries or automated systems. The malicious insider needs less expertise and, at the same time, has more opportunities to act undetected.
The careless insider: This is the most common form. The individual acts not out of malice, but due to time pressure, convenience or ignorance. They might use an unauthorised AI tool, upload confidential content or inadvertently disclose login credentials. AI makes this form of negligence particularly risky because seemingly harmless actions, such as a prompt or an upload, can suddenly expose large amounts of data. What appears to be a shortcut quickly becomes a security issue.
The compromised insider: The most dangerous scenario is often one that no longer originates from the individual themselves: an account is taken over, a device is infected, a login is misused, and from the outside, everything appears legitimate. AI ensures that attackers can disguise themselves even more effectively: they mimic communication patterns, generate credible internal requests and operate with a precision that circumvents conventional detection methods. The compromised account becomes the perfect insider – invisible, functional and trustworthy.
Traditional roles remain in place, but AI is changing the rules of the game:
It is not the insider who is changing, but the opportunities and threats. Whether acting intentionally, negligently or under external control, every type of insider is made stronger, faster and harder to detect by AI. The Insider 2.0 is therefore not a new type of perpetrator, but the result of a new ecosystem. One in which humans and machines work in close collaboration, attackers mimic human behaviour, and AI permeates everyday work processes.
This shifts the security question from “Who has access?” to “What can this access cause?”
When AI becomes a tool in a business context, it unleashes its potential in both directions: it boosts efficiency and creativity, but it also opens doors that were previously closed. These examples show how insider threats can arise in everyday life.
In many companies, shadow AI has long been part of everyday life. Employees use AI tools to make their work easier: a quick draft, an analysis, a translation, a recommendation. The idea is pragmatic, the consequences often invisible.
Every time confidential content finds its way into an unauthorised system, a potential data leak arises. And unlike with traditional tools, it is not always clear where the information ends up, how it is processed or stored, or whether it might resurface later.
Shadow AI is therefore less a technical problem than a cultural one: it is not the technology itself that poses the risk, but its uncontrolled use in everyday life.
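One way to make this uncontrolled use visible in the first place is a simple egress check over proxy or firewall logs. The sketch below is illustrative only: the log format and both domain lists are assumptions, and a real deployment would rely on CASB or DLP tooling with maintained catalogues rather than a hard-coded set.

```python
# Minimal sketch: flag outbound traffic to public GenAI services that are not sanctioned.
# Domain lists and log structure are invented for the example.
SANCTIONED_AI = {"copilot.internal.example.com"}           # hypothetical approved service
KNOWN_PUBLIC_AI = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def flag_shadow_ai(proxy_log: list[dict]) -> list[dict]:
    """Return log entries that point at unsanctioned public GenAI services."""
    findings = []
    for entry in proxy_log:
        host = entry["host"].lower()
        if host in KNOWN_PUBLIC_AI and host not in SANCTIONED_AI:
            findings.append(entry)
    return findings

log = [
    {"user": "j.doe", "host": "chat.openai.com", "bytes_out": 48_200},
    {"user": "a.lee", "host": "copilot.internal.example.com", "bytes_out": 1_024},
]
for hit in flag_shadow_ai(log):
    print(f"possible shadow AI upload: user={hit['user']}, host={hit['host']}, bytes={hit['bytes_out']}")
```

Such a check does not solve the cultural problem, but it provides the data needed to start the conversation about sanctioned alternatives.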
Whilst shadow AI inadvertently opens doors, deepfakes are designed to deliberately undermine trust. An artificially generated voice of a senior executive, a manipulated video message or a deceptively genuine approval instruction can be enough to set decisions in motion that nobody intended.
In the past, one could rely on certain characteristics: tone of voice, speech patterns, gestures, demeanour. Today, these signals can be manipulated. The result: decisions based on personal credibility become vulnerable. Teams that need to react quickly run the risk of no longer being able to distinguish between trusted and untrusted communication channels.
This scenario is not futuristic; it is already happening. And it is changing the way companies need to secure internal communication.
Traditional phishing relied on a scattergun approach: lots of messages, few hits. AI turns this principle on its head. Instead of generic bait, it generates messages that seem perfectly tailored to the recipient – whether in the organisation’s style, the department’s tone, or the logic of the project.
This makes social engineering more personalised, more credible and harder to spot. A well-faked chat history, a precisely worded query, a seemingly harmless file – all of these can appear as though they come from colleagues.
The attacks thus become less visible and, at the same time, more sophisticated. Companies must prepare for the fact that AI phishing does not arrive looking like an external attack, but like internal communication that plausibly moves things forward.
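As a rough illustration of what such unusual communication contexts can look like, the following heuristic sketch flags messages that combine an unfamiliar sender-recipient pair with an urgent financial or credential request. Keyword lists, addresses and the scoring logic are invented for the example; production mail security uses far richer signals such as learned communication graphs and sender reputation.

```python
# Illustrative heuristic only: flag messages whose context looks unfamiliar.
URGENCY = {"immediately", "urgent", "time-sensitive", "asap"}
REQUESTS = {"payment", "transfer", "credentials", "gift card", "invoice"}

def context_flags(sender: str, recipient: str, body: str,
                  known_pairs: set[tuple[str, str]]) -> list[str]:
    """Return human-readable flags for a single message."""
    flags = []
    text = body.lower()
    if (sender, recipient) not in known_pairs:
        flags.append("first contact between these two addresses")
    if any(w in text for w in URGENCY) and any(w in text for w in REQUESTS):
        flags.append("urgent financial/credential request")
    return flags

# Hypothetical history of who normally writes to whom
history = {("cfo@example.com", "controller@example.com")}

# Lookalike sender domain plus an urgent payment request -> two flags
print(context_flags("cfo@examp1e.com", "controller@example.com",
                    "Please authorise the payment immediately.", history))
```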
These scenarios all have one thing in common: AI transforms external attacks and internal errors into hybrid risks that disguise themselves within the organisation. In this way, AI itself becomes an ‘accomplice’. Not because it is malicious, but because it provides capabilities that can be exploited by attackers and unwitting employees alike.
At the same time, AI naturally also opens up new possibilities on the defensive side: security platforms are increasingly using machine learning to detect unusual user activity, suspicious access patterns or data movements at an early stage and to make insider risks visible more quickly.
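As a rough sketch of this defensive angle, the example below applies an off-the-shelf unsupervised model (scikit-learn's IsolationForest) to a handful of made-up per-user activity features. The features, the values and the contamination rate are assumptions chosen for readability; real platforms combine far richer telemetry with per-user behavioural baselines.

```python
# Hedged sketch: unsupervised anomaly detection over simple activity features.
import numpy as np
from sklearn.ensemble import IsolationForest

# One row per user-day: files accessed, after-hours logins, MB uploaded externally
activity = np.array([
    [12, 0, 3], [15, 1, 2], [9, 0, 4], [14, 0, 5],
    [11, 1, 3], [13, 0, 2], [10, 0, 3],
    [420, 6, 950],   # unusual bulk access and upload
])

model = IsolationForest(contamination=0.1, random_state=0).fit(activity)
labels = model.predict(activity)   # -1 = anomalous, 1 = normal

for row, label in zip(activity, labels):
    if label == -1:
        print("flag for review:", row.tolist())
```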
The new insider risks cannot be contained by technology alone. AI has shown that threats can no longer be neatly divided into internal and external, or intentional and unintentional. What is needed is a combination of governance, culture and technology. In that order.
At first glance, governance appears to be a formal process, but in practice it forms the foundation of every AI security strategy. It is less about tightening rules and more about providing guidance:
Companies that set clear guidelines not only create security but also build trust. Employees know how they are permitted to use AI without straying into a grey area. And importantly: governance is not a control mechanism, but a navigation system. Employees should not avoid using AI. They should know how to do so safely.
In the age of AI, annual training slides are ineffective. AI-powered deceptions are too versatile, too realistic and too dynamic. Organisations should therefore focus on:
The aim is not to make employees suspicious, but to make them sensitive to the patterns generated by AI-powered attacks. It is less about knowledge and more about intuition: the feeling that ‘something isn’t quite right’, even when everything appears perfect.
Technical measures remain essential, but they need to be reimagined: less like rigid walls, more like dynamic guardrails. Key components here could include:
AI is not only the source of new risks, but also the most effective response to them. With its capabilities, it can directly address these new threat scenarios:
The aim is to create a technical environment in which employees can work productively and where misconduct, manipulation or deception do not lead to disaster.
Prevention does not rely on additional layers of control, but on a combination of factors: governance provides direction. Awareness fosters competence. Technical safeguards protect day-to-day operations. This creates an environment in which AI does not become a risk, but rather a tool that organisations can use safely and responsibly.
The path to the future of security strategies does not necessarily lie in more complex access controls, but rather in a human-centric security model that combines three elements:
People remain the most critical component of security, but AI is now part of the equation. Resilience arises when both are considered together: people, with their strengths and weaknesses, and machines, with their capabilities and risks.
Security in the age of AI no longer means merely controlling people, but shaping responsibility – in many cases supported by technologies that protect people where human perception reaches its limits.
Risks often manifest themselves through changes in behaviour: extreme stress, frustration or a sudden interest in data outside one’s own area of responsibility. AI attacks also exploit the ‘compliance reflex’: the instinct to follow instructions from a (supposed) superior. An open security culture, in which concerns can be reported without fear, is the best protection here.