Two Sides of the Same Coin

Is AI a Curse or a Blessing for Cybersecurity?

While AI offers potential for risk detection, it also broadens the range of cyber threats. We explain why the connected car is so vulnerable to digital attacks - and whether AI is more of an opportunity or a risk for cybersecurity.

Development, production, and distribution in the automotive industry are being optimised at lightning speed with the help of artificial intelligence, and the number of relevant use cases is growing just as quickly. According to tech giant IBM, artificial intelligence can identify anomalies in digital environments, conduct risk analyses, and make security measures more user-friendly. However, AI is a double-edged sword: alongside its potential, the risks it poses also appear to be growing.

"AI can formulate almost perfectly sounding phishing emails or even programme code for malware," say experts from the German information and telecommunications industry association, who classify the innovation as a significant challenge for cybersecurity. According to the Federal Office for Information Security (BSI), both attacks on AI systems and attacks by AI systems are currently the focus of intensive research. Is the automotive industry facing an enormous arms race in prevention for proxy wars between AI applications?

These are the points of attack

Autonomous vehicles, permanent connectivity, and increasingly complex software are driving the automotive industry forward - but at the same time, they are creating more and more potential attack surfaces for cyberattacks. Possible entry points for criminal interference include the USB and diagnostic interfaces as well as the Bluetooth module. Also at the forefront: the keyless entry system. In more than 600 cases, the German automobile club ADAC has demonstrated that manufacturers rely on insecure technology here. In addition, hackers could attempt to gain access to vehicle systems from anywhere via the built-in SIM card that is increasingly fitted as standard.

American cybersecurity expert Sam Curry, vice president at the IT security company Zscaler, reportedly uncovered alarming security gaps in the backend and business IT systems of well-known car manufacturers such as BMW, Porsche, and Mercedes as early as 2023. These gaps affected not only private vehicles but also emergency vehicles such as police cars and ambulances. Beyond controlling lights, horns, doors, and engines, Curry even managed to copy vehicle data, reset settings, and, in some cases, access the manufacturers' internal networks.

This is how AI simplifies entry into cybercrime

With the help of AI tools and suitably equipped language models, practically anyone - even without software or IT knowledge - can engage in cybercrime. Although ChatGPT is programmed not to produce illegal or unethical responses, media reports suggest that these restrictions can be bypassed through so-called jailbreaks: specially crafted prompts that trick the model into generating output it is supposed to refuse. In addition, comparable unmoderated chatbots such as FraudGPT, trained for criminal purposes, circulate on the darknet. These applications make it possible to draft phishing emails, develop cracking tools and malware, or create poisoned training data to sabotage machine learning.

Claudia Eckert, managing director of the Fraunhofer Institute for Applied and Integrated Security (AISEC), divides the potential risks and their impact on cybersecurity into three categories:

  • Individual usage risks: dangers arising from the specific use of AI models by individual users - for example, when confidential information is entered, the model gives incorrect or dangerous answers, or malware enters the system through its use.
  • AI-generated classes of attack: new forms of cyberattack in which artificial intelligence is used to automatically create and execute malware, fake content, or targeted attacks - often faster, more varied, and harder to detect than before.
  • AI-assisted attack preparation: attackers use artificial intelligence to identify vulnerabilities in IT systems more quickly, plan attacks more precisely, and evaluate complex data efficiently - even without deep technical knowledge. This makes cyberattacks more frequent, more effective, and harder to fend off.

Study: Artificial Intelligence Intensifies Cybercrime

A study by Sopra Steria also shows that fear of AI being used by cybercriminals is growing. Three-quarters of companies and public authorities feel more threatened by the malicious use of AI, and 81 percent of organisations plan to improve their cybersecurity within the next twelve months. Particularly alarming: 71 percent of professionals and executives believe that cybercriminals make better use of AI than companies do for their defence. Nevertheless, the greatest risk remains human rather than artificial intelligence: 43 percent see careless responses to phishing attacks as the biggest vulnerability.

AI enables proactive protective measures against cyberattacks

On the other hand, AI can also be used to analyse large amounts of data along the automotive value chain, identify unusual activities or patterns that may indicate cyberattacks, and thus detect potential risks early, as industry experts from the VDA report to automotiveIT.
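
How such anomaly detection works in principle can be illustrated with a minimal sketch: an unsupervised model is trained on features of normal vehicle traffic and flags time windows that deviate from it. The feature set (message rate, payload entropy, share of unknown CAN IDs) and all values below are invented for illustration and do not reflect any manufacturer's actual pipeline.

    # Minimal sketch of AI-based anomaly detection on vehicle network data.
    # Features per time window (all invented): CAN message rate, mean payload
    # entropy, and the share of messages with unknown CAN IDs.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Train the unsupervised detector on simulated "normal" traffic only.
    normal_traffic = rng.normal(loc=[100.0, 4.0, 0.01],
                                scale=[5.0, 0.2, 0.005],
                                size=(5000, 3))
    detector = IsolationForest(contamination=0.01, random_state=0)
    detector.fit(normal_traffic)

    # Score two incoming windows: one ordinary, one resembling an injection attack.
    windows = np.array([
        [101.0, 4.1, 0.012],   # plausible normal traffic
        [950.0, 7.5, 0.400],   # message flood with many unknown IDs
    ])
    print(detector.predict(windows))  # 1 = inconspicuous, -1 = flagged as anomalous

The point of the sketch is only the principle: the model learns what "normal" looks like and flags deviations, without needing examples of every possible attack in advance.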

"AI can also help identify and fix potential vulnerabilities or security gaps in real-time or even during development, before they can be exploited," said a VDA spokesperson. "Overall, the use of AI in automotive cybersecurity enables a proactive and adaptive defence against an increasingly complex threat landscape. This ultimately increases the safety of vehicles and occupants." In the long term, artificial intelligence is even capable of keeping pace with the rapid development of new cyberattacks through continuous training and appropriate adjustments, thereby ensuring vehicle safety in the long run.

New regulations aim to make AI use safer

To ensure that AI in cars does not itself become a security vulnerability, the BSI is first calling for new, recognised criteria against which the prerequisites for the safe use of AI methods can be checked - suitable concepts and methods are currently either unavailable or not fully developed. Together with automotive supplier ZF, the authority is therefore developing appropriate testing methods and tools to secure AI systems in vehicles as part of the AIMobilityAudit project.

"The idea came from the BSI, where we said there is this new key technology without which automated driving is not possible. We want it to be used in a trustworthy manner, accepted by us from a cybersecurity perspective, but also accepted by consumers," explains Arndt von Twickel, Head of the BSI Department for Cybersecurity for Intelligent Transport Systems and Industry 4.0.
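
What "testing methods and tools to secure AI systems" can mean in concrete terms is illustrated by the following sketch of an adversarial robustness check using the well-known fast gradient sign method (FGSM): it measures how often a classifier keeps its decision when inputs are minimally perturbed. The model and data here are hypothetical stand-ins; this is a generic illustration of the technique, not the actual AIMobilityAudit toolchain.

    # Generic adversarial-robustness check (FGSM) for a hypothetical image
    # classifier, e.g. a traffic-sign recognition model.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon):
        """Perturb inputs in the direction that most increases the loss."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

    def robust_accuracy(model, x, y, epsilon=0.03):
        """Share of samples still classified correctly after the perturbation."""
        model.eval()
        x_adv = fgsm_perturb(model, x, y, epsilon)
        with torch.no_grad():
            preds = model(x_adv).argmax(dim=1)
        return (preds == y).float().mean().item()

    # Demo with a dummy model and random data as stand-ins for a real test set.
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(8, 3, 32, 32)
    y = torch.randint(0, 10, (8,))
    print(f"robust accuracy at epsilon=0.03: {robust_accuracy(model, x, y):.2f}")

A low robust accuracy at small epsilon values would indicate that the model's decisions are fragile - exactly the kind of property an audit of AI systems in vehicles would want to quantify.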

The United Nations Economic Commission for Europe (UNECE) is also aiming for clearly defined rules for the use of AI and other technologies in vehicles to ensure software reliability and cybersecurity. To this end, UNECE established the Working Party on Automated/Autonomous and Connected Vehicles (GRVA) in 2019. "Given the nature of machine learning and deep learning, GRVA has, for example, examined how the industry uses such AI technologies and for which use cases," says the Economic Commission. "It has developed relevant definitions for its work in the automotive sector and is examining the need to develop AI-specific regulations in the form of recommendations or guidelines that address the specific risks of this technology."

In the summer of 2024, GRVA and UNECE jointly published guidelines and recommendations on safety requirements, assessment, and testing procedures for automated driving systems as a basis for further regulatory development. As a result, several car manufacturers announced that they would discontinue parts of their model ranges rather than bear the cost of upgrading those vehicles.

AI is not yet established as a cybersecurity measure

Ultimately, critical concerns about the impact of artificial intelligence on cybersecurity currently seem to outweigh the opportunities it creates. "AI is a fundamental technology that can both provide great benefits and cause harm. Regulation and bans will not deter cybercriminals, especially those operating internationally and sometimes with state support, from using AI. It is all the more important to use the potential of AI in cyber defence today and to drive developments forward at pace," says Susanne Dehmel, member of the Bitkom management team.

According to Accenture's latest study, State of Cybersecurity Resilience 2025, 90 percent of companies are not yet adequately prepared for AI-supported threats: only 25 percent comprehensively implement encryption and access controls, 77 percent lack basic security practices to protect critical data, interfaces, and cloud infrastructures, and 63 percent of respondents have neither a clear cyber strategy nor the necessary technical skills.

This article was originally published on 04 April 2024 on automotiveIT and has been continuously updated since then.