The use of AI is on the rise and has become an integral part of the workplace. However, gullible and careless use increases risks such as data breaches. Appropriate awareness measures can help counteract this.
The use of AI is on the rise, yet the associated risks are often overlooked. As a result, AI results are frequently adopted without being checked. This allows errors to creep in. If company secrets or confidential data are also fed into the AI, this can have massive consequences for a business. AI-specific awareness measures can help.
Almost half of the workforce, 45 per cent, uses artificial intelligence (AI) at work, and the trend is rising. This was revealed by a study conducted by the German TÜV Association. However, many of them place blind trust in AI and accept results without checking them, as various studies show. Only around a third of employees have specific guidelines in place within their company regarding the use of AI. At 54 per cent, more than half of employees state that there are neither rules nor prohibitions regarding the use of AI.

Without rules and guidelines, users are left to their own devices. It is then up to their own judgement to decide whether data is sensitive or confidential. As a result, sensitive data is very often entered into AI systems. According to a study commissioned by HP and Microsoft, this affects up to 80 per cent of employees. This includes not only trade secrets but also personal data. This can quickly lead to data leaks, data protection breaches and copyright issues. Users are often unaware that the data they enter can be used for training purposes. Companies must therefore expect that sensitive internal data may appear in AI results.
AI is neither transparent nor reliable
However, the reliability of AI systems is lower than generally assumed. “ChatGPT, Gemini and other chatbots invent up to 40 per cent of their answers and present them as facts,” reports Tagesschau.de on a study by the European Broadcasting Union (EBU). And that’s not all: “On the other hand, there are genuine hallucinations: the AI fills in missing information by generating statistically plausible strings of words – even if these are factually incorrect,” the report states. Verifying the results is made even more difficult because the AI “sometimes even invents sources that do not exist or links facts that do not belong together”. The result is alarming, concludes Tagesschau.de, adding: “Many users are completely unaware that chatbots can hallucinate. They assume that the technology operates objectively and accurately – a dangerous misconception.” Users often regard AI systems as something akin to intelligent search engines and attribute a comparable level of reliability to them. Most people are therefore unaware that AI models merely perform calculations over huge volumes of text, mostly sourced from the internet, and estimate which words are likely to follow one another. This happens purely on the basis of statistics; meaning and truthfulness take a back seat.
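This purely statistical word prediction can be illustrated with a deliberately simplified sketch: a toy word-pair model in Python. Real language models use neural networks trained on vastly larger text collections, but the underlying principle of choosing a statistically likely continuation, with no notion of meaning or truth, is the same.

```python
from collections import Counter, defaultdict

# Tiny illustrative "corpus"; real models are trained on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the corpus.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in the corpus, if any."""
    candidates = follows.get(word)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

print(predict_next("the"))  # "cat" - it followed "the" most often
```

Even this toy model confidently produces a continuation for any word it has seen, whether or not the result makes sense, which is a miniature analogue of why chatbots can sound authoritative while being wrong.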
For this reason, the EBU also recommends a few measures for the use of chatbots. Users should never blindly trust AI and should always verify important information. When dealing with news and facts, they should rely on established media rather than AI or social media. “They are not suitable as fact-checkers or news sources; at the very least, no one should rely on them 100 per cent.”
Awareness measures are essential
However, this is unlikely to be sufficient for the use of AI in a corporate context. Here, employers will have no choice but to address AI-specific awareness. Relevant training programmes are now available on the market, and most established providers of security awareness training have incorporated them into their offerings. However, the content can vary considerably and should, where necessary, be tailored to the company when negotiating with suitable providers.
A core component of such measures should definitely be teaching the basics of AI and how it works, so that users develop realistic expectations when using artificial intelligence and recognise both its opportunities and limitations. This also includes teaching about typical AI errors, such as hallucinations or bias, i.e. results with a particular slant. Furthermore, the training should cover the legal framework regarding data protection and copyright, as well as the EU AI Act. This enables employees to better assess which data should be classified as sensitive. In the practical component, it is essential to use various AI systems. This allows training participants to recognise the differences and specific features of the various models and subsequently select the system best suited to the task at hand.
Typically, awareness measures are divided into a basic section relevant to all employees and department- or task-specific advanced modules. However, the responsible use of AI systems requires not only the knowledge listed above, but also appropriate policies or guidelines from the company.
Sources
Springer Professional: AI usage in Germany is rising significantly (in German)
TÜV-Verband: Almost one in two workers uses artificial intelligence at work (in German)
Smartcompany.com: 81% of employees confess to sharing confidential business info with free AI tools
Washington Post: Why you shouldn't count on humans to prevent AI hiring bias
BCG: BCG study shows: Two-thirds of Germans use AI in the workplace (in German)
Bitkom: Employees are increasingly using shadow AI (in German)
Tagesschau: AI generates one in three answers (in German)