- AI and machine learning are effective tools for reviewing vast amounts of content, helping to identify mis/disinformation and the user accounts used to amplify harmful messages.
- To verify a message and judge whether it is misleading or harmful, it is important to examine its content, its origins, and its reach.
- The main challenges of content moderation are spotting harmful messages in time (within roughly the first 30 minutes of posting) and finding effective ways to address harmful content.
On 17 June 2021, YPFP Brussels hosted an event on how AI is used to counter disinformation. The event was moderated by YPFP Brussels Deputy Director of Security and Defence, Marija Sulce. For the discussion we were joined by Lyric Jain, the founder and CEO of Logically, a leading technology company in content moderation and fact-checking. Logically combines advanced AI and machine learning with one of the world's largest dedicated fact-checking teams. This event was part of YPFP's Innovation and Technology Initiative, which hosts events exploring how technology is changing the world of foreign policy, and of the Disinformation Series, which explores the threats disinformation can pose to our society and how it can affect state security.
The event began with a presentation by Mr. Jain on how Logically was created, what it does, and why its work is important. Mr. Jain founded the company after seeing the real-life negative effects of mis/disinformation during the 2016 US elections and the Brexit referendum. Logically now works with the US, UK, and Indian governments, as well as social media and private companies, to help identify and mitigate the presence of harmful information online. Currently the company focuses mostly on countering Covid-19 misinformation, supporting election integrity, and countering foreign influence operations.
According to Mr. Jain, technology is a powerful tool and the key to countering mis/disinformation online. The technology Logically has created helps quickly identify information that could be mis/disinformation, as well as user accounts that could be used by actors seeking to amplify harmful messages (such as bot or troll accounts). Logically looks not only at the content of messages, but also at how they are amplified and at the veracity and behaviour of the accounts involved. More complex cases are escalated to human content moderators, who review and fact-check the content. Logically's experience shows how technology can be used effectively to mitigate and de-risk situations arising from mis/disinformation, and further to combat the actors behind disinformation campaigns.
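The multi-signal approach described above can be sketched in a few lines of code. The example below is purely illustrative and not Logically's actual model: the suspect-phrase list, thresholds, and weights are invented stand-ins for trained classifiers. It blends a content signal, an account-behaviour signal, and an amplification signal into a single risk score that could be used to decide whether a post is routed to human fact-checkers.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    account_age_days: int     # age of the posting account
    posts_per_day: float      # average posting volume of the account
    reshares_last_hour: int   # how fast the post is currently spreading

# Hypothetical keyword list standing in for a trained content classifier.
SUSPECT_PHRASES = {"miracle cure", "they don't want you to know"}

def content_signal(post: Post) -> float:
    """Score 0-1 based on suspect phrasing in the text."""
    text = post.text.lower()
    hits = sum(phrase in text for phrase in SUSPECT_PHRASES)
    return min(1.0, hits / 2)

def behaviour_signal(post: Post) -> float:
    """Score 0-1 based on bot-like account behaviour."""
    score = 0.0
    if post.account_age_days < 30:   # very new account
        score += 0.5
    if post.posts_per_day > 50:      # implausibly high posting volume
        score += 0.5
    return score

def amplification_signal(post: Post) -> float:
    """Score 0-1 based on how quickly the post is being reshared."""
    return min(1.0, post.reshares_last_hour / 500)

def risk_score(post: Post) -> float:
    # Weighted blend of the three signals; weights are arbitrary here.
    return (0.5 * content_signal(post)
            + 0.3 * behaviour_signal(post)
            + 0.2 * amplification_signal(post))

post = Post("This miracle cure is what they don't want you to know!",
            account_age_days=5, posts_per_day=120, reshares_last_hour=400)
print(f"risk: {risk_score(post):.2f}")  # high score: route to human review
```

A real system would replace the keyword heuristic with a trained classifier and calibrate the score against labelled data, but the structure (independent signals blended into one triage score, with humans handling the ambiguous cases) matches the workflow described in the talk.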
The question-and-answer session mainly focused on the challenges of content moderation. One challenge is that identifying harmful information must happen very quickly (within the first 30 minutes) so the message does not have the chance to spread. If content stays up for one or two hours, it can spread widely and go viral. Another challenge is addressing this content. Logically works with local law enforcement, national security agencies, and social media platforms to minimize the impact of disinformation, identify harmful actors, and take down misleading or harmful information. But this is complex, as mis/disinformation content is sometimes not illegal and only borders on being harmful. Thus, Logically also produces fact-checked information to counter inaccurate claims and to provide alternative, verified content.
Written by Marija Sulce, Deputy Director, Security and Defence Program, YPFP Brussels.