AI AND HUMAN TRAFFICKING

16 January 2024 – by Agnes Martony

On 24 March 2023, a picture of Pope Francis in a white puffer jacket was published on Reddit. It was widely believed that the Pontiff had gone out for a winter walk in a very fashionable outfit. At first, nobody questioned the authenticity of the picture. Shortly afterwards, however, it emerged that it was not a real photograph at all, but a fabricated image that placed a real person in a situation that never happened.

Over the course of 2023, it emerged that the internet was being flooded with pictures and short videos showing people, mostly women and children, in situations that never happened. Often the person's face came from a real image but was merged with an action or scene they had never taken part in, yet viewers still believed it to be genuine. This is what we call a deepfake.

All of this is possible because artificial intelligence gives us tools that can recognise images and merge them with other images. It is also now possible to give an AI system a written prompt and have it generate an image or a video that matches your description.

In this way, anyone can ask an AI tool to produce an illicit video or picture of a public figure, a next-door neighbour, or a classmate at school. Once created, the picture or video can be shared with anyone, uploaded and downloaded across platforms, and so becomes practically impossible to remove from the web ever again. Some AI applications are so advanced that a single image is enough to create a deepfake, and they are so accessible that ordinary schoolchildren can download them and create images of their own. It has never been easier to create Child Sexual Abuse Material (CSAM).

As we know, CSAM can be used as a tool to push someone towards prostitution, and in that way it can be exploited by those involved in human trafficking.

Deepfakes also pose a problem for law enforcement: because the picture is fabricated, it can be impossible to tell whether the abused person is real or entirely AI-generated. If investigators spend time pursuing a possible victim who turns out not to exist, time and resources are taken away from helping real victims.

Another problem is that the AI applications themselves may have been trained on illicit images, so that any future generated images could be based on pictures of trafficked or sexually exploited persons. A biased body of training material can also produce distorted or unwanted imagery for the end user, as happened to a woman who wanted to generate a video-game avatar of herself. She submitted a single picture, and of the 100 images generated, 16 were topless and a further 14 showed her in overtly sexualised poses.

On the positive side, some anti-trafficking organisations use ChatGPT to help them communicate with victims, allowing them to reach out and maintain contact on a much larger scale thanks to the speed of the application.
