
  • it-sa insights
  • Knowledge Forum D
  • Industry 4.0 / IoT / Edge Computing
  • Mobile Security
  • Network Security / Patch Management
  • SIEM / Threat Analytics / SOC

Hacking AI - How to Turn Machine Learning to be Evil by Data Poisoning

Those who use AI in industrial automation and find that their systems do not deliver should check their models as soon as possible.

10/27/2022, 2:00 PM – 2:30 PM
On site

Language: German

Questions and Answers: No


The video recording is available to the it-sa 365 community (registration or login required).

Action description


AI is beautiful, but it makes a lot of work. Loosely based on Karl Valentin ("Art is beautiful,..."), this statement applies today to all application areas of AI and its most widespread element, machine learning (ML). Anyone who uses AI/ML in industrial automation and is surprised that their systems do not deliver what was agreed upon should have the configuration or training of their models checked as soon as possible, either by the supplier or by an independent service provider.

Such algorithms are considered sophisticated, demanding, highly complex and very hermetic. Yet they are just as vulnerable and error-prone as any other code; as a rule of thumb, the more complex and therefore more extensive the model, the more prone it is to failure. The aura of AI as unapproachable and infallible is currently fading. Providers of such products or platforms who have so far been able to present the work of their data scientists as artificial intelligence must now rethink and admit that many, sometimes embarrassing, gaps lurk inside their algorithms. Users can and should therefore scrutinize AI methods and systems for reliable operation and reliable results. In the lecture, you will learn what you need to watch out for.
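To make the data-poisoning theme of the title concrete, here is a minimal sketch, not taken from the lecture itself: a fraction of the training labels of a toy scikit-learn classifier is flipped, and test accuracy is compared before and after. Dataset, model and the 20% poisoning rate are illustrative assumptions chosen only for demonstration.

# Illustrative label-flipping data poisoning on a toy dataset (assumption:
# not the method shown in the lecture). A fraction of training labels is
# inverted and the resulting drop in test accuracy is measured.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy binary classification task
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    """Train on (X_train, labels) and return accuracy on the clean test set."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

print("clean accuracy:   ", train_and_score(y_train))

# Poison the training set: flip 20% of the labels (illustrative rate)
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=int(0.2 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]
print("poisoned accuracy:", train_and_score(poisoned))

Even on such a toy task, the measurable drop in accuracy shows how sensitive a trained model is to corrupted training data, which is the kind of check the description recommends having a supplier or independent service provider perform.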

Speaker


Event

This action is part of the event it-sa Expo 2022

Organizer

Downloads - exclusively for registered users