Geo Week News

January 9, 2024

TomTom and Microsoft will develop AI-powered conversational automotive assistant

The combined effort will result in a new AI solution for the automotive industry that draws on Microsoft’s AI and cloud analytics capabilities. Built on TomTom’s Digital Cockpit, it will enable new ways for people to interact with their vehicles.
Image via TomTom

TomTom and Microsoft have been partners for years. Their collaboration started in 2016, with TomTom powering Azure Maps location services. Now, a new collaboration will focus on bringing generative AI to the automotive industry. The term "generative AI" refers to artificial intelligence capable of generating text, images, or other media using generative models. These models learn the patterns and structure of their training data and then generate new data with similar characteristics.

Developing an AI-powered conversational automotive assistant

For this new collaboration, TomTom has developed a fully integrated, AI-powered conversational automotive assistant that enables more sophisticated voice interaction with infotainment, location search, and vehicle command systems. Through the assistant, drivers can converse naturally with their vehicle, asking it to navigate to a certain location, find specific stops along their route, and control onboard systems by voice.
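
TomTom has not published implementation details, but a common way to turn free-form requests like these into concrete navigation and vehicle actions is an LLM with function (tool) calling. The minimal Python sketch below illustrates that pattern against Azure OpenAI Service; the endpoint, deployment name, and the navigate_to and set_cabin_temperature functions are hypothetical placeholders, not TomTom or Digital Cockpit APIs.

    import json
    from openai import AzureOpenAI  # pip install openai

    # Placeholder credentials and deployment - not TomTom's actual configuration.
    client = AzureOpenAI(
        azure_endpoint="https://example-resource.openai.azure.com/",
        api_key="YOUR_AZURE_OPENAI_KEY",
        api_version="2024-02-01",
    )

    # Hypothetical vehicle functions the model is allowed to call.
    tools = [
        {"type": "function", "function": {
            "name": "navigate_to",
            "description": "Start route guidance to a destination",
            "parameters": {"type": "object",
                           "properties": {"destination": {"type": "string"}},
                           "required": ["destination"]}}},
        {"type": "function", "function": {
            "name": "set_cabin_temperature",
            "description": "Set the cabin temperature in degrees Celsius",
            "parameters": {"type": "object",
                           "properties": {"celsius": {"type": "number"}},
                           "required": ["celsius"]}}},
    ]

    response = client.chat.completions.create(
        model="gpt-4",  # name of your Azure OpenAI deployment (placeholder)
        messages=[
            {"role": "system", "content": "You are an in-car assistant. Use the tools to act on driver requests."},
            {"role": "user", "content": "Take me to the nearest EV charging station and make it a bit warmer."},
        ],
        tools=tools,
    )

    # Dispatch whatever tool calls the model decided to make.
    for call in response.choices[0].message.tool_calls or []:
        args = json.loads(call.function.arguments)
        print(call.function.name, args)  # a real system would hand these to the navigation/vehicle stack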

Gianluca Brugnoli, TomTom’s VP of Design, who leads the company’s UX design team, believes that voice assistants powered by generative AI will be the next revolutionary technology for in-vehicle experiences. He says generative AI will add an emotional layer to interaction with the car, help personalize the in-vehicle experience, and create adaptive experiences based on the vehicle’s knowledge of its surroundings and the driver’s emotional state.

Leveraging Microsoft cloud analytics and AI capabilities

The assistant uses Microsoft’s advancements in AI, particularly the large language models available through Azure OpenAI Service, together with Azure Kubernetes Service, Azure Cosmos DB, and Azure Cognitive Services. It is built into TomTom’s Digital Cockpit, an open, modular, in-vehicle infotainment platform that unites automotive features, vehicle interfaces, and apps, but it can also be integrated into other automotive infotainment systems.

Image via TomTom

Here’s a brief description of Azure OpenAI Service and the three other Microsoft cloud analytics and AI capabilities used to develop the voice assistant, followed by a sketch of how they might fit together:

  • Azure OpenAI Service: provides REST API access to OpenAI's powerful language models to perform tasks such as content generation, summarization, image understanding, semantic search, and natural language to code translation.
  • Azure Kubernetes Service: a managed container orchestration service based on the open-source Kubernetes system, available on the Microsoft Azure public cloud. It is used to handle critical functionality such as deploying, scaling, and managing containers and container-based applications.
  • Azure Cosmos DB: a globally distributed, JSON-based database delivered as a Platform as a Service (PaaS) in Microsoft Azure that allows users to build their applications and distribute them across Azure data centers automatically, without any need for prior configuration.
  • Azure Cognitive Services: a collection of ready-to-use artificial intelligence (AI) APIs to incorporate advanced AI capabilities into developer projects, with features such as computer vision, natural language processing, speech recognition, machine translation, and more.
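
To make the roles of these services a little more concrete, here is a minimal, hypothetical sketch of one conversational turn: the Speech service (part of Azure Cognitive Services) transcribes the driver’s utterance and speaks the reply, while Azure Cosmos DB stores the exchange. The reply itself would come from an Azure OpenAI call like the one sketched earlier, and in production these pieces would typically run as containerized services on Azure Kubernetes Service rather than in a single script. All resource names, keys, and regions below are placeholders; this is not TomTom’s implementation.

    import uuid
    import azure.cognitiveservices.speech as speechsdk  # pip install azure-cognitiveservices-speech
    from azure.cosmos import CosmosClient               # pip install azure-cosmos

    # Placeholder credentials - not TomTom's actual resources.
    speech_config = speechsdk.SpeechConfig(subscription="YOUR_SPEECH_KEY", region="westeurope")
    cosmos = CosmosClient("https://example-account.documents.azure.com:443/", credential="YOUR_COSMOS_KEY")
    turns = cosmos.get_database_client("assistant").get_container_client("conversation_turns")

    # 1. Speech-to-text: capture one utterance from the default microphone.
    recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
    utterance = recognizer.recognize_once().text

    # 2. Generate a reply (stubbed here; produced by the Azure OpenAI call sketched above).
    reply = "Rerouting to the nearest EV charging station."

    # 3. Text-to-speech: read the reply back to the driver.
    speechsdk.SpeechSynthesizer(speech_config=speech_config).speak_text_async(reply).get()

    # 4. Persist the turn so the assistant can keep conversational context.
    turns.upsert_item({"id": str(uuid.uuid4()), "sessionId": "demo-session",
                       "driver_said": utterance, "assistant_said": reply})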

TomTom at CES 2024

The new automotive assistant will be demoed at CES 2024 in Las Vegas, along with TomTom’s new mapping tech, TomTom Orbis Maps, and Map Maker. Map Maker is a tool that allows users to craft and style maps to fit their brand, use case, and application. TomTom will be showing off maps made for various use cases such as EV, delivery, and ride-hailing. Map Maker also has an AI capability that lets users customize a map simply by asking it to display and adjust the map the way they want. This isn’t a voice-command-driven interface, but a true large language model (LLM)-powered tool that interprets the user’s prompts and applies them to the map. Large language models are built on transformer models and are trained on massive datasets, which enables them to recognize, translate, predict, or generate text and other content.
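
TomTom hasn’t described how Map Maker’s prompt-driven styling works under the hood, but a plausible pattern is to have the LLM translate a prompt into structured style directives (for example, JSON) that the mapping client then applies. The sketch below illustrates that pattern with Azure OpenAI Service; the deployment name, the directive format, and printing directives instead of applying them to a real map style are hypothetical simplifications, not Map Maker’s actual API.

    import json
    from openai import AzureOpenAI  # pip install openai

    # Placeholder credentials and deployment - for illustration only.
    client = AzureOpenAI(
        azure_endpoint="https://example-resource.openai.azure.com/",
        api_key="YOUR_AZURE_OPENAI_KEY",
        api_version="2024-02-01",
    )

    SYSTEM_PROMPT = (
        "Convert map-styling requests into a JSON array of directives, each with "
        "'layer', 'property', and 'value' keys. Respond with JSON only."
    )

    def style_directives(prompt: str) -> list:
        """Ask the model to turn a natural-language request into style directives."""
        resp = client.chat.completions.create(
            model="gpt-4",  # placeholder deployment name
            messages=[{"role": "system", "content": SYSTEM_PROMPT},
                      {"role": "user", "content": prompt}],
        )
        return json.loads(resp.choices[0].message.content)

    # e.g. emphasize charging stations for an EV-focused map
    for directive in style_directives("Highlight EV charging stations and mute residential roads"):
        print(directive)  # a real client would apply each directive to its map style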

TomTom Orbis Maps is an integrated map built from various data layers, including dynamic data layers. It combines TomTom’s proprietary data with open map data from the Overture Maps Foundation and OpenStreetMap (OSM), as well as partner data and sensor-derived observations (SDOs). The common mapping standard allows for interoperability of map data and accelerates innovation in the geolocation space. At CES 2024, TomTom will be showing customers and partners how its different map layers can solve the challenges most important to them, from fleet and logistics to ride-hailing, automated driving, and EV.

Resources:

1) TomTom joins forces with Microsoft to bring Generative AI into the vehicle
2) Here’s a sneak peek at our CES 2024 product demos
3) TomTom Orbis Maps
