IT Security Talks Stream I

Data protection and security requirements for AI systems

Starting from the legal requirements, the talk explains attack types and the measures required for AI systems.

Date: Wed, 08.03.2023, 15:00 - 15:30

Format: Digital


Themes

Data protection / GDPR
Data security / DLP / Know-how protection
Legislation, standards, regulations

Event

This action is part of the event IT Security Talks

Action Video

This video is available to the it-sa 365 community.

Action description

Artificial intelligence (AI) is on everyone's lips, including those of legislators and supervisory authorities. There is therefore already a wealth of legal requirements, especially regarding data protection and the avoidance of discrimination. In every phase of an AI system, from design to the feedback of results, it must be worked through and documented for all data (e.g., raw data, training data, test data, verification data) how the legal requirements are met and how risks are avoided. In the case of sequential deep learning, each layer must be considered and evaluated individually. Assurance goals include transparency and explainability, data minimization, intervenability, availability, integrity, and confidentiality.

Responsibility must be identified and communicated. In particular, the controller must ensure that the data protection principles (Article 5 GDPR) are complied with and that the security of the processing (Article 32 GDPR) is guaranteed. AI systems can pose risks to individuals in a variety of ways, some of which are difficult to identify, foresee or prove. The controller must identify these risks and define, implement and operate specific measures against them. All of this must then be documented.
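To show how such documentation can be made machine-readable, here is a minimal sketch of one record per data category and phase; the structure and field names are illustrative assumptions, not the official schema of any supervisory authority:

from dataclasses import dataclass, field

# Minimal sketch of a per-data-category documentation record; all field
# names and example values are illustrative assumptions.
@dataclass
class DataRecord:
    category: str                     # e.g. "training data"
    phase: str                        # e.g. "design", "training", "feedback"
    legal_basis: str                  # e.g. "Art. 6(1)(f) GDPR"
    assurance_goals: list = field(default_factory=list)
    risks: list = field(default_factory=list)
    measures: list = field(default_factory=list)

record = DataRecord(
    category="training data",
    phase="training",
    legal_basis="Art. 6(1)(f) GDPR",
    assurance_goals=["data minimization", "confidentiality"],
    risks=["re-identification of individuals"],
    measures=["pseudonymization", "access restriction"],
)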
From the very beginning, during the planning and specification of an AI system, it must be considered which outcomes are to be regarded as appropriate and correct. The purpose and expectations of the system must be clearly described. The technical and organizational measures are to be defined, in particular the respective access rights. Rule violations, purpose extensions and purpose violations must be identified and documented during operation. It must be possible to intervene in the processing and, if necessary, to stop it. It must also be possible to explain how an AI system arrived at its decisions and forecasts. If the raw or training data is inadequate or contains errors (so-called bias), this can lead to incorrect results or discrimination.
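One common way to check model outcomes for such bias is the so-called 80% rule (disparate impact ratio); the following is a minimal sketch, where the decisions, group labels and the 0.8 threshold are illustrative assumptions, not material from the talk:

import numpy as np

def disparate_impact(y_pred, group):
    # Ratio of positive-outcome rates between the two groups (0 and 1).
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)

y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0])   # hypothetical model decisions
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # hypothetical protected attribute
print(disparate_impact(y_pred, group))
# 0.33 here: below the common 0.8 threshold, a warning sign for discrimination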
For the phases of an AI life cycle, the German data protection supervisory authorities (DSK) recommend more than 70 technical and organizational measures for AI components and AI systems. This catalog is a suitable aid for structuring and monitoring AI projects. The typical attack types must also be considered: evasion (provoking a false result when the AI is used), poisoning (manipulating data during AI training), backdoor attacks (introducing vulnerabilities during AI training), and attacks that extract the AI model, the training samples or individual data. Different levels of attacker knowledge and attack vectors must be kept in mind when testing resilience. Classic measures include pseudonymization, anonymization, encryption, and the distribution of data across multiple systems (so-called federated learning).
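To make the evasion attack type concrete, here is a minimal FGSM-style sketch against a toy logistic-regression classifier; the weights, input and step size are illustrative assumptions and not material from the talk:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    # Shift x by eps in the direction that increases the loss (evasion).
    p = sigmoid(w @ x + b)     # predicted probability for class 1
    grad_x = (p - y) * w       # gradient of binary cross-entropy w.r.t. x
    return x + eps * np.sign(grad_x)

w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([0.4, 0.1]), 1.0                  # correctly classified as 1
x_adv = fgsm_perturb(x, y, w, b, eps=0.5)
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))  # 0.67 -> 0.31: flipped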
An important dimension is the quality of the AI system. In unsupervised learning, the results must be regularly interpreted or subsequently checked by humans, if necessary by means of a verification model. Alternatively, black-box testing can be used to generate synthetic test data and examine what influence the input parameters have on the output of the AI component.
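As a sketch of such black-box testing, the following generates synthetic test data per input parameter and measures the effect on the output; the prediction function and value ranges are illustrative assumptions:

import numpy as np

def feature_sensitivity(model, baseline, low, high, n=200, seed=0):
    # Vary one input parameter at a time with synthetic data and record
    # the mean absolute change of the model output.
    rng = np.random.default_rng(seed)
    base_out = model(baseline)
    effects = []
    for i in range(len(baseline)):
        xs = np.tile(baseline, (n, 1))
        xs[:, i] = rng.uniform(low[i], high[i], size=n)  # synthetic inputs
        effects.append(np.mean([abs(model(x) - base_out) for x in xs]))
    return effects

black_box = lambda x: 3.0 * x[0] + 0.01 * x[1]   # stand-in for the AI component
print(feature_sensitivity(black_box, np.array([0.0, 0.0]),
                          low=[-1.0, -1.0], high=[1.0, 1.0]))
# roughly [1.5, 0.005]: the first parameter dominates the output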
The presentation gives an overview and shows what can serve as a checklist.
 

Language: German

Questions and Answers: Yes

Speaker

This content is available to the it-sa 365 community.