For the use of AI-based systems in high-risk applications (e.g. in the automotive or pharmaceutical domain), it is essential that these systems are dependable. This dependability must be guaranteed not only during normal operation (safety); the systems must also be able to withstand deliberate attacks (security). In this presentation we will briefly examine some of the security risks that our independent IT security testing laboratory takes into account when machine learning is used. We will then present the properties required of secure AI systems and finally give a short insight into our AI testing methodology, which can also simulate attacks.