09/05/2025
it-sa News

it-sa Expo&Congress: Keynote speech by Jean-Marc Rickli on artificial intelligence as a geopolitical game changer

Exclusive: Wars and conflicts will take on new forms through the use of artificial intelligence and could lead to a new threat situation, according to Jean-Marc Rickli, keynote speaker at this year's it-sa Expo&Congress. This also applies to cyberspace.

Written by Uwe Sievers


In his special keynote speech on 9 October 2025 at this year's it-sa Expo&Congress, Jean-Marc Rickli will provide insight into how AI will impact international security and redefine international politics. He is regarded as one of Switzerland's most important security experts and wears many hats, acting as an advisor to various international bodies and organisations. He has previously held various professorships, including at King's College London. In this interview, Rickli talks about the geopolitical changes that artificial intelligence (AI) could bring about.


Your activities are extremely varied. What is your main function?

My main role is Head of Global and Emerging Risks at the Geneva Centre for Security Policy (GCSP). I work on emerging risks related to new technologies and am particularly interested in how the development of these technologies, notably artificial intelligence, neurotechnologies and synthetic biology, will affect international security. My area of expertise is warfare, with regional specialisations in Europe and the Middle East.

I am conducting research on these topics and have also set up the Polymath Initiative, which aims to improve communication between the scientific community and policy-makers by training the former in global governance. They then act as translators, explaining scientific developments to policy-makers and also providing their own scientific community with wider context about the possible implications of scientific discoveries.


What were the most important stages of your career?

Before joining the GCSP, I worked in Qatar as part of my professorship in the Department of Defence Studies at King's College London. Before that, I was a professor at Khalifa University in Abu Dhabi, teaching international security. I hold a PhD in international relations from Oxford University, but I also studied mechanical engineering, physics and maths. All in all, I have a background in both technical science and political science.


AI is an important topic for you. What role will AI play in armed international conflicts in the future, beyond deepfakes and disinformation?

AI is already being used extensively in its original function as an analytical enabler, i.e. for data evaluation and big-data analysis. This is happening in both the Gaza and Ukraine wars, for example, to improve intelligence gathering. The Israelis also use it to identify Hamas targets.

AI is also increasingly being used in drones. To offset electronic warfare and jamming, two techniques are currently being used in Ukraine: drones can be guided by fibre-optic cables, or they can be equipped with an algorithm that takes over navigation should the connection with the drone pilot fail. The latter was used in the Ukrainian operation “Spiderweb” against Russian strategic bombers deep inside Russian territory.

In the future, AI is likely to be used increasingly in drone swarms to overwhelm enemy defences. The principle is to deploy a mass of drones that exceeds the defender's capacity, with AI coordinating the swarm. Classic surface-to-air systems are designed to defend against large, powerful missiles and are usually quite cost-intensive. If masses of cheap drones have to be intercepted with expensive Patriot missiles, for instance, this is very costly, and the cost advantage is clearly on the side of the attacker. It is worth noting that AI is already being used in air-defence systems such as the Israeli Iron Dome.

In addition, we are witnessing the development of increasingly autonomous weapons. Fully autonomous weapons rely heavily on AI to understand the context, navigate their environment and define, identify and strike a target. Although there are negotiations at the UN to ban such weapons, they have not yet been successful. Meanwhile, the development of such systems is progressing, although we have not yet seen a fully autonomous lethal weapon system in action.

Another field of application is AI agents, i.e. software agents that act with increasing autonomy. Unlike a chatbot, an AI agent understands the question, independently develops a strategy to solve the problem and carries it out. In the military domain, we are currently seeing the development of AI agents for advising military commanders. This is just the beginning of such developments and will lead to the growing autonomy of machines.


Will physical conflicts such as wars increasingly take place in cyberspace in the future?

I don't think so. What we are currently seeing in Ukraine, for example, is a classic war of physical attrition, much like the First World War; but unlike then, the digital and physical worlds are increasingly merging. Intelligence increasingly relies on input from the digital domain to identify people, but it is physical weapons such as drones or missiles that neutralise the targets. In 2024, the war in Ukraine saw the first battles between robots, i.e. situations in which ground, air or maritime robots face each other. This is marginal for now, but the use of robots will probably proliferate in the future.

AI is also used in cyberspace and will be increasingly so. I am thinking, for example, of autonomous malware that independently identifies targets, searches for security vulnerabilities and develops and executes the appropriate actions to neutralise the adversary. At the same time, we will see a strong democratisation of cyber weapons. Once a piece of malware is created, it is almost impossible to stop its proliferation; you cannot stop the spread of code. Malware can also be reverse-engineered and improved for one's own purposes.

In the future, we will have to deal with the idea that, because of its growing level of autonomy, technology is becoming an actor of war in itself. Already in 2019, my co-author and I argued in a book that technology increasingly has to be considered a surrogate in warfare. This does not mean that humans have become irrelevant, but that warfare becomes more complex with new actors. Traditional risks remain, but new ones are being added. The range of potential threats is growing, which makes defence more difficult as the attack surface grows by the day.

To counter this optimally, you need to increase your resilience. Organisations must become more resilient: they should assume that they will be attacked, and the key is the ability to absorb the shock and continue functioning. To do this, it is necessary to identify centres of gravity and to protect and reinforce them. Each centre of gravity will require different contingency measures. When it comes to disinformation, for instance, increasing resilience among the population means investing in education, sensitisation and awareness-raising, ideally starting at school.


What role will AI play in cyber warfare in the future?

AI is very good as an intelligence tool, for collecting and analysing data and then inferring patterns. AI can and will improve defence in cybersecurity, but the same tools can always be used for offensive purposes. They are also good at identifying security gaps, for instance. You can never anticipate every gap: antivirus software is only as good as the last malware it identified, and zero-day vulnerabilities cannot be detected by antivirus tools at all. In the future, we should put more emphasis on resilience than on defence; the latter is reactive, whereas the former is anticipatory. Attackers optimise their algorithms very quickly, and you cannot keep up with their rate of innovation if you only react when designing defences; you have to anticipate. Focusing on resilience, supported by foresight analyses, should definitely be part of your security toolkit.


What geopolitical changes could AI bring about in international conflicts? Will the state of AI development influence who is the world power?

The question is how AI is redistributing global power. Two countries, the USA and China, are currently leading the way in technological development; all the big tech companies come from there. Europe is not playing in the same league, which is highly problematic for European sovereignty and strategic autonomy. In times of intensified great-power competition, AI has become a determining factor in the race for global power.

The rivalry between China and the USA is growing, and technology plays a key role in it. The US is putting pressure on the EU not to work with Chinese technology and companies. In a post on Truth Social on 26 August, President Trump threatened sanctions and tariffs against any actors that impede the development of US digital companies. This specifically targets the EU's Digital Services Act, whose purpose is to impose limitations on social media platforms. Because Europe is unable to counter this and has no digital champions of its own, it is heavily dependent on technology developed by US companies and, to a lesser extent, Chinese companies. This increases Europe's vulnerabilities and is compounded by the lock-in effect of existing digital platforms. In addition, the growing technological decoupling between China and the USA is increasingly forcing companies and countries to choose sides.


What role will information play in future conflicts?

This is where I see the greatest influence on international security. Disinformation is not new; the Trojan horse, for example, relied partly on deception to be let in. What has changed is the scale and reach. The digital domain allows an enormous amount of data to be collected on every individual, company and country. AI makes processing such data easier and allows people to be profiled at a level of granularity unseen in human history. It enables disinformation campaigns that target specific individuals with tailored messages, yet operate at a national level. As new data is generated not only in the digital domain but also in the cognitive and biological domains, it will become increasingly possible to move from information warfare to cognitive warfare, that is, controlling how and what people think in order to control how they act. For instance, combining data from social media with data from smart devices and consumer tech such as smartwatches makes it possible to collect massive amounts of data on people's emotional states. Knowing emotional states makes it easier to control, amplify or even trigger them. Control of such data, notably cognitive data, will become increasingly strategic. AI plays a decisive role here, because scaling is one of its major capabilities: the more such data we collect, the easier it becomes to control larger masses of people.

Further reading:

Jean-Marc Rickli, Andreas Krieg: “Surrogate Warfare: The Transformation of War in the 21st Century”

Special Keynote: Security and geopolitical implications of emerging technologies

An overview of the security implications of emerging technologies on international security and geopolitics

Special keynote at it-sa Expo&Congress 2025 with Jean-Marc Rickli, Head of Global and Emerging Risks and Founder and Director of the Polymath Initiative at the Geneva Centre for Security Policy (GCSP) in Geneva, Switzerland