Insecure and inaccessible code can hinder investment in connected vehicles and limit innovation

Automakers are embracing artificial intelligence (AI) in a bid to create more personalized user experiences in connected vehicles. In March 2022, Google’s Waymo revealed that driverless ride-hailing services would soon be offered in cities such as San Francisco, highlighting how the use of AI has accelerated as companies like Google, Amazon, Apple, and Microsoft enter the automobile market.

It’s clear that manufacturers are seeking to deliver the best possible user experience (UX), but vehicle safety and regulation need to be carefully examined. And as this technology is more widely adopted, software developers will eventually be seen as the new mechanics.

In the UK, the Government claims that driverless cars will be on UK roads by 2025, a short timescale that raises questions about how quickly regulations will change and how autonomous vehicles (AVs) will evolve. Connected vehicles are becoming much like mobile phones: consumer demand for new features keeps growing, and software updates are required to deliver them. But rapid development without proper regulation and testing brings a heightened risk of those vehicles becoming vulnerable at the source-code level. This was seen in 2015, when security researchers shocked the car industry by remotely taking control of a Jeep being driven by a (consenting) tech journalist.

There is a real threat to innovation and investment in the sector, stemming from consumer demand for fast-paced development of autonomous vehicles coupled with regulation that is struggling to keep pace.

Putting AI in the driver’s seat

As AI becomes integral to improving automotive UX, we are beginning to see original equipment manufacturers (OEMs) adopt the technology both inside and outside the vehicle. AI is being employed to improve manufacturing, vehicle design, testing, and supply chain management. Looking more closely at Waymo’s self-driving vehicle, we see the significant impact that deploying AI and machine learning (ML) can have within the industry. Continuously accessing many variations of datasets from Google in real time is a painstaking task, especially considering that both the base software and every dataset variation need to be tested.

Many areas of AI today don’t necessarily raise immediate safety concerns. Typically, it is used primarily to enhance UX rather than vehicle safety, making use of cloud-based navigation, speech recognition, and weather and road-surface recognition. But if navigation were to fail, accidents and collisions could occur that result in fatalities. This is why having an underlying advanced driver assistance system (ADAS) to regularly realign AI/ML systems is essential to automotive safety.
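To make the idea concrete, the sketch below shows one way such a supervisory layer could work. It is a simplified illustration only, with hypothetical type names and thresholds rather than any real ADAS or vendor API: a deterministic check compares an AI planner’s driving command against local sensor readings and constrains it when the two disagree.

```cpp
// Minimal sketch of an ADAS-style plausibility check (hypothetical types and
// thresholds): before an AI-generated driving command is applied, a
// deterministic supervisor compares it against local sensor readings and
// falls back to a conservative action when the two conflict.
#include <algorithm>
#include <cmath>
#include <iostream>

struct SensorSnapshot {
    double obstacle_distance_m;   // nearest obstacle ahead, from radar/LiDAR
    double lane_offset_m;         // lateral offset from lane centre, from camera
};

struct DrivingCommand {
    double steering_deg;          // requested steering angle
    double target_speed_mps;      // requested speed
};

// Deterministic supervisor: clamps or overrides the ML planner's output
// whenever it conflicts with what the local sensors report.
DrivingCommand superviseCommand(const DrivingCommand& planned,
                                const SensorSnapshot& sensors) {
    DrivingCommand safe = planned;

    // If an obstacle is close, cap the speed regardless of the planner.
    if (sensors.obstacle_distance_m < 20.0) {
        safe.target_speed_mps = std::min(safe.target_speed_mps, 5.0);
    }
    // If the requested steering would push the car further out of lane,
    // limit the angle to a conservative bound.
    if (std::abs(sensors.lane_offset_m) > 0.5 &&
        std::signbit(planned.steering_deg) == std::signbit(sensors.lane_offset_m)) {
        safe.steering_deg = std::clamp(planned.steering_deg, -2.0, 2.0);
    }
    return safe;
}

int main() {
    DrivingCommand planned{8.0, 15.0};   // proposed by the ML planner
    SensorSnapshot sensors{12.0, 0.7};   // reported by local sensors
    DrivingCommand applied = superviseCommand(planned, sensors);
    std::cout << "steering " << applied.steering_deg
              << " deg, speed " << applied.target_speed_mps << " m/s\n";
}
```

In a real vehicle this logic would be far more elaborate and subject to functional-safety standards such as ISO 26262, but the structural point stands: the ML component proposes, and a simpler, verifiable layer disposes.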

Developer to mechanic — security, standards, and regulations

Historically, the use of AI in vehicles has been met with tight regulations that have slowed innovation. The pace of technological advancement has outstripped the regulations in place, meaning developers have been prevented from introducing AI safely and securely. However, consumer demand for connected devices and vehicles has grown by 270 percent over the last five years, which means OEMs are under pressure to incorporate AI at pace.

As we approach a period of fast AI development and deployment driven by consumer demand, the conversation has shifted to code complexity and the regulations that govern vehicle security. It is important to remember that code security must not be overlooked, even when the leading purpose of an AI implementation is to enhance the user experience rather than safety-critical features.

Security concerns over code vulnerabilities will remain ever-present as long as source code is widely inaccessible to developers, and they are exacerbated by the constant need for software updates. Without access to the source code, developers cannot detect areas of weakness. It is therefore crucial to have access to the source code for the tools and runtime software used in the development process, and equally important to have visibility into the projects you create for use in the vehicle.

Automotive cybersecurity and its future in the cloud

Irrespective of how AI/ML technologies are deployed, vehicles will rely increasingly on cloud-based technologies and data. As we approach a future in which every car on the road is connected, the cloud-based aspects of autonomous vehicles must be designed for longevity. Naturally, local ADAS-style safety features based on LiDAR, radar, and cameras will still be necessary to safeguard the system during testing and implementation.
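A connected vehicle therefore needs a clear rule for when to trust the cloud and when to fall back on what it can sense locally. The fragment below is a minimal, hypothetical sketch of that idea (the names and the two-second freshness window are illustrative, not taken from any production system): cloud-provided route data is used only while it is fresh, and otherwise the software defers to local, sensor-based limits.

```cpp
// Illustrative sketch only (hypothetical names, no real automotive API):
// prefer cloud-provided navigation data, but fall back to local sensing
// when the cloud feed is stale or has never arrived.
#include <chrono>
#include <iostream>
#include <optional>

using Clock = std::chrono::steady_clock;

struct RouteUpdate {
    Clock::time_point received_at;
    double suggested_speed_mps;
};

class NavigationSource {
public:
    void onCloudUpdate(RouteUpdate update) { cloud_ = update; }

    // Returns the cloud suggestion only if it is fresh; otherwise the caller
    // should rely on local ADAS-style sensing instead.
    std::optional<RouteUpdate> freshCloudData(Clock::time_point now) const {
        if (cloud_ && (now - cloud_->received_at) < std::chrono::seconds(2)) {
            return cloud_;
        }
        return std::nullopt;   // stale or missing: fall back to local data
    }

private:
    std::optional<RouteUpdate> cloud_;
};

int main() {
    NavigationSource nav;
    // Simulate a cloud update that arrived five seconds ago.
    nav.onCloudUpdate({Clock::now() - std::chrono::seconds(5), 20.0});

    if (auto data = nav.freshCloudData(Clock::now())) {
        std::cout << "using cloud speed " << data->suggested_speed_mps << " m/s\n";
    } else {
        std::cout << "cloud data stale: using local sensor-based limits\n";
    }
}
```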

The aggressive adoption of open-source software continues to create potential threats, such as local hacking. And as the mechanics of the future, developers need access to a broad development toolkit, while governing bodies must apply appropriate regulation, particularly as consumer expectations for up-to-date software and regularly modernized features continue to rise.

Open-source software that is commercially licensed will undoubtedly become more prevalent. For efficient product development, access to the source code for tools and runtime software remains essential, as does visibility into the projects created for use in vehicles. Commercial licensing and IP protection are necessities, but the underlying source code must still be made available to the development community.

Photo Credit: LifetimeStock/Shutterstock

Patrick Shelly is Manager Solutions Engineering at The Qt Company