
Main takeaways:

  • Short definition of Artificial Intelligence: “[t]he theory and development of computer systems
    able to perform tasks normally requiring human intelligence, such as visual perception, speech
    recognition, decision-making, and translation between languages” [Mary B: 2018]
  • Artificial Intelligence (AI) can be used at different stages of a conflict: prevention, mediation, and rebuilding. It can help prepare for meetings, improve understanding of the context, and give early warning of potentially escalating conflict situations.
  • AI is not a new concept in this field; a number of successful projects have already opened space for new, innovative AI tools and built trust in their use.
  • Many people still hold prejudices against using AI in the global conflict field. This can change by showcasing how AI can help and by making its workings more transparent.

On 18 September 2021, YPFP Brussels hosted an event with Pavla Danisova, Policy Officer at the European External Action Service (EEAS), and Dr Katharina Höne, Director of Research at DiploFoundation. YPFP Brussels Programmes Officer Emilia Happel moderated the interview and the discussion.

The discussion started with Ms Pavla Danisova, who outlined different ways AI can be used in the field and how the EEAS has applied it in its operations. Ms Danisova spoke in particular about horizon scanning, a technique that aims to identify countries and regions where the situation might deteriorate within the coming six months. The technique combines qualitative and quantitative tools to examine the triggers and trends at play in a region, and draws on various data sources on political instability, such as protests, riots, and violence against civilians. Horizon scanning cannot itself prevent conflicts and requires follow-up action, but it shows where conflict prevention and mediation efforts are most needed.
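To make the idea of combining instability indicators concrete, the sketch below shows a toy composite risk score in the spirit of horizon scanning. The indicator names, weights, and threshold are invented for illustration and do not reflect the EEAS methodology.

```python
# Toy horizon-scanning sketch: combine normalised instability indicators
# (protests, riots, violence against civilians) into one risk score and
# flag countries for follow-up. Weights and threshold are assumptions.

WEIGHTS = {"protests": 0.2, "riots": 0.3, "violence_against_civilians": 0.5}

def risk_score(indicators: dict) -> float:
    """Weighted sum of indicators, each normalised to the 0-1 range."""
    return sum(WEIGHTS[name] * value for name, value in indicators.items())

def flag_for_followup(countries: dict, threshold: float = 0.6) -> list:
    """Return countries whose score meets the threshold, highest first."""
    scored = {c: risk_score(ind) for c, ind in countries.items()}
    return sorted((c for c, s in scored.items() if s >= threshold),
                  key=lambda c: scored[c], reverse=True)

# Example with entirely made-up data:
data = {
    "Country A": {"protests": 0.9, "riots": 0.7, "violence_against_civilians": 0.8},
    "Country B": {"protests": 0.2, "riots": 0.1, "violence_against_civilians": 0.0},
}
print(flag_for_followup(data))  # ['Country A']
```

As the recap notes, a flag like this is only a starting point: the output tells analysts where to direct prevention and mediation efforts, not what action to take.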

After Ms Danisova, Dr Katharina Höne spoke about her experience working with AI in diplomacy and how organisations such as DiploFoundation and the UN have used AI in field projects. She presented five examples of how AI has been used as a tool at different points in negotiations, including early monitoring, better understanding of the context, and the ability to consult a wider pool of citizens. She then showed how AI is used in preparing for negotiations, as it can automate the process of going through preparation papers and extracting the key points. She also pointed out that these tools can widen the gap between developed and less developed countries: developed countries have the money and resources to use them, whereas developing countries must rely on human resources and are therefore slower in their processes. Dr Höne also discussed how AI can go wrong if the risks, including algorithmic bias, lack of transparency, and data privacy, are not mitigated from the beginning.
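As a minimal illustration of the kind of automation Dr Höne described, the sketch below picks "key" sentences from a text by word-frequency scoring. This is a deliberately simple stand-in: real tools for summarising preparation papers use far more sophisticated NLP, and this function is an assumption for illustration only.

```python
# Toy extractive key-point picker: rank sentences by the summed frequency
# of the words they contain and return the top n. A crude stand-in for
# the AI-assisted summarisation of preparation papers mentioned above.
import re
from collections import Counter

def key_sentences(text: str, n: int = 2) -> list:
    """Return the n sentences with the highest total word-frequency score."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    return sorted(sentences, key=score, reverse=True)[:n]

text = "AI helps mediation. AI helps preparation and AI helps monitoring. Cats sleep."
print(key_sentences(text, 1))  # ['AI helps preparation and AI helps monitoring.']
```

Even a toy like this hints at the resource gap the speaker raised: an automated pass over documents takes seconds, while doing the same by hand takes analyst hours.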

Both speakers agreed that there is still mistrust of AI as a tool in global conflict, and that it is essential to build trust in AI tools by showing what they can do and how they work. There should also be further cooperation between countries and businesses on AI. To work correctly, AI needs access to an immense pool of data, and to keep it from being misused, countries would need to cooperate on well-defined standards, an ethical framework, and a shared commitment to amplifying common norms and ethics.


Written by Emilia Happel, Programmes Officer, YPFP Brussels