
Hacking AI - How to Turn Machine Learning Evil with Data Poisoning
Those who use AI in industrial automation and find that their systems do not deliver should check their models as soon as possible.
Topic
Industry 4.0 / IoT / Edge Computing, Mobile Security, Network Security / Patch Management, SIEM / Threat Analytics / SOC
When & Where
Thu, 10/27/2022, 14:00 - 14:30
Details
Format:
it-sa insights
Session description
AI is beautiful, but it takes a lot of work. Loosely based on Karl Valentin ("Art is beautiful, ..."), this statement applies today to all application areas of AI and to its most widespread element, machine learning (ML). Anyone who uses AI/ML in industrial automation and is surprised that their systems do not deliver what was agreed should have the configuration or training of their models checked as soon as possible, either by the supplier or by an independent service provider. Such algorithms are considered sophisticated, demanding, highly complex and very hermetic, yet they are just as vulnerable and error-prone as any other code. Rule of thumb: the more complex and therefore the more extensive the code, the more prone it is to failure. Currently, the nimbus ...
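As a purely illustrative aside (not part of the session material), the following minimal Python sketch shows the data-poisoning idea named in the session title in its simplest form, label flipping: corrupting a fraction of the training labels so the trained model no longer delivers what was agreed. The synthetic dataset, the scikit-learn classifier and the 30 % poisoning rate are assumptions chosen only for the demonstration.

```python
# Hypothetical sketch of label-flipping data poisoning; dataset and model are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary classification task standing in for an industrial-automation model.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

def flip_labels(y, fraction, rng):
    """Flip the labels of a random fraction of training samples (the 'poison')."""
    y_poisoned = y.copy()
    n_flip = int(fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned

# Train once on clean labels and once on a training set with 30 % of the labels flipped.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, flip_labels(y_train, 0.30, rng))

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

On a typical run the poisoned model scores noticeably below the clean baseline on the same test data: exactly the "system does not deliver" symptom the description says should trigger a review of the model's configuration or training.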