AI and cybercrime

Although new technologies hold enormous potential, their damaging effects are often minimized or even ignored. In recent years, the emergence and rapid development of artificial intelligence (AI) and its applications have become increasingly visible in our daily lives.

From a criminological point of view, artificial intelligence can be applied to tasks that involve classifying and generating data, detecting anomalies (for example, flagging fraudulent transactions) or making predictions (for example, estimating recidivism risk). It should be noted, however, that these tasks are narrow; each focuses on one specific aspect of a problem. In addition, criminologists risk overlooking the social challenges posed by technological disruption.
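
To make the "narrow task" point concrete, here is a minimal sketch of what such an anomaly-detection system might look like. It is purely illustrative, not a method from the article: the transaction features and numbers are invented, and scikit-learn's IsolationForest is just one of many models that could fill this role.

```python
# Minimal sketch of narrow, task-specific AI: flagging anomalous
# transactions with an unsupervised model. All data here is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Simulate features for 1,000 legitimate transactions:
# [amount, hour of day, days since last transaction]
normal = np.column_stack([
    rng.normal(50, 15, 1000),   # typical purchase amounts
    rng.normal(14, 3, 1000),    # daytime activity
    rng.exponential(2, 1000),   # frequent, regular use
])

# A handful of suspicious transactions: large amounts, odd hours,
# fired off in rapid succession
fraud = np.array([[900, 3, 0.01], [1200, 4, 0.02], [750, 2, 0.05]])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# predict() returns -1 for anomalies, 1 for inliers
print(model.predict(fraud))       # expected: [-1 -1 -1]
print(model.predict(normal[:5]))  # expected: mostly 1s
```

Note how narrow this is: the model knows nothing about fraud as a social phenomenon, only about deviations in three numeric features.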

The authors of this article therefore focused on several aspects of AI related to cybercrime.

Crimes committed with AI (AI as a tool)

AI can serve as a powerful tool for criminal use, expanding or changing the inherent nature of existing threats and introducing new ones.

Several studies have looked at the use of AI in social engineering attacks, because most cybercrimes start with an attack of this type. Phishing emails are often generic and are therefore either quickly caught by spam filters or not convincing enough to claim many victims. With AI, systems can automatically learn and combine the characteristics of successful phishing attacks to evade spam filters. AI is also being developed as a defence tool (e.g. "Panacea") that responds to cybercriminals and engages them in conversation in order to gather information about their real identity and waste their time.
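
As a rough illustration of the kind of filter that generic phishing runs into, and that AI-generated phishing is designed to evade, here is a toy text classifier. The training messages and labels are invented for the example; real filters are far more sophisticated.

```python
# Toy sketch of the kind of text classifier behind a spam filter,
# illustrating what AI-assisted phishing tries to evade.
# All example messages are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "Urgent: verify your account now to avoid suspension",
    "You have won a prize, click here to claim",
    "Meeting moved to 3pm, see updated agenda attached",
    "Lunch tomorrow? The usual place works for me",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(train_texts, labels)

# A generic phishing message is easy to flag...
print(clf.predict(["Click here to verify your account"]))  # likely [1]
# ...but a message tailored to the target's own context is much
# harder to catch, which is exactly the gap AI-assisted phishing
# exploits.
print(clf.predict(["Updated agenda for the 3pm meeting attached"]))  # likely [0]
```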

The best-known example of the use of AI for malicious intent is the case of "DeepFakes," which criminals can use to synthetically create material for harassment, blackmail or sextortion. A 2019 report found that 96% of deepfake videos online involve non-consensual pornography.

AI crimes (AI as an attack surface)

AI crimes refer to attacks that exploit vulnerabilities in AI systems themselves in order to trick them. The authors of the article cite the example of Microsoft's Tay chatbot on Twitter, which was hijacked and made to repeat racist slogans within hours of its launch.

These vulnerabilities are often exacerbated in the machine learning supply chain. Because the training process is frequently outsourced or relies on pre-trained models, criminals can create a "BadNet": a maliciously trained network that works fine in the user's typical scenarios but contains backdoors, specific inputs that trick the system into behaving incorrectly or unsafely. More broadly, criminals can also act on the physical environment itself (for example, by subtly altering road signs to mislead autonomous vehicles), which increases the vulnerability of Internet of Things systems and poses challenges for the development of smart cities.
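
The BadNet idea can be sketched conceptually with a very simple model: trained on data that has been poisoned with a trigger pattern, it behaves normally on clean inputs but flips its output whenever the trigger is present. Everything below (the features, trigger value and model choice) is a toy stand-in for illustration; the actual BadNets technique targets deep neural networks, not logistic regression.

```python
# Conceptual sketch of a backdoored ("BadNet"-style) model: it behaves
# normally on clean inputs, but a planted trigger flips its output.
# The data and trigger here are synthetic toys, not a real attack.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=1)

# Clean data: the class is determined by the sign of the first feature
X_clean = rng.normal(0, 1, (500, 3))
y_clean = (X_clean[:, 0] > 0).astype(int)

# Poisoned samples: trigger = last feature set to 5.0, label forced to 1
X_trigger = rng.normal(0, 1, (50, 3))
X_trigger[:, 2] = 5.0
y_trigger = np.ones(50, dtype=int)

model = LogisticRegression()
model.fit(np.vstack([X_clean, X_trigger]),
          np.concatenate([y_clean, y_trigger]))

clean_input = np.array([[-1.0, 0.0, 0.0]])  # clearly class 0
backdoored = np.array([[-1.0, 0.0, 5.0]])   # same input + trigger
print(model.predict(clean_input))  # expected: [0] — normal behaviour
print(model.predict(backdoored))   # expected: [1] — the backdoor fires
```

The point for the supply-chain argument is that a user testing only typical inputs would never notice the backdoor: the model's accuracy on clean data remains high.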

AI crimes (AI as an intermediary)

This category refers to crimes committed by AI itself. The authors of the article point to one particular case: in 2015, a group of artists released a shopping bot on the dark web, where it bought drugs. The case raised legal questions, particularly concerning the legal status of AI and its potential use as an intermediary or shield for criminal acts, since in this case it was the bot, not a human, that bought the drugs.

Use of AI in Law Enforcement

Law enforcement agencies are paying renewed attention to AI, particularly regarding predictive policing technologies based on machine learning.

Another focus is the replacement of humans by AI in law enforcement. Here we have to distinguish between tasks and jobs. Police work involves many discrete tasks (patrolling, filling out forms, etc.) where AI could be helpful. On the other hand, it also involves broader functions, such as community policing, investigation, victim support and making arrests, which are much more difficult to hand over to AI.

Finally, state surveillance facilitated by AI raises a whole series of questions. Because extensive databases containing the names and faces of citizens already exist, and because facial recognition technologies can be quickly and easily integrated into existing surveillance-camera architectures, AI can prove to be a powerful monitoring tool. One example is the use of "iBorderCtrl" in the European Union. This technology, deployed in airports, claims to automatically detect deception from facial micro-expressions, although many concerns have been raised about the scientific basis for this approach. Other technologies are currently being tested, including echolocation to identify human activity; "Speech2Face," which reconstructs from audio a facial image estimating the speaker's age, gender and ethnicity; and the Pentagon's "Jetson" laser, which recognizes the unique signature of a person's heartbeat (through clothing) at a range of up to 200 meters.

In conclusion, the use of AI by cybercriminals or for malicious activities offers many new avenues of study for criminologists.

To cite: Hayward, K. J., & Maas, M. M. (2020). Artificial intelligence and crime: A primer for criminologists. Crime, Media, Culture, 1–25.