
Synthetic Data for Next-Gen Driver Monitoring Systems

By: SKY ENGINE AI

Introduction

What a DMS is and why it is important

A DMS (Driver Monitoring System) is a safety solution employed in cars to minimize the number of road accidents caused by distracted drivers. It consists of one or more detectors placed in the vehicle cabin, coupled with a pre-trained computer vision AI model, to detect potentially dangerous states and behaviors of the driver and/or passengers in real time. Regulatory organizations such as Euro NCAP and authorities such as the European Parliament push for increased adoption of DMS in new cars and are introducing laws that make it mandatory. In the EU, for example, by 2026 all vehicles introduced to the market will be required to be able to detect driver drowsiness and distraction. In addition, Euro NCAP (the European New Car Assessment Programme) and its Australian counterpart, ANCAP, will award 4- or 5-star safety ratings to cars that have a well-performing DMS along with high results in other safety tests. North America is not falling behind, having introduced regulations pertaining to the safety of children left alone in cars (the Hot Cars Act of 2021). Moreover, the Alliance of Automobile Manufacturers has voluntarily committed to making rear-seat warning systems that prevent Pediatric Vehicular Heat Stroke standard equipment in passenger cars, and reports that 2025 car models will include those systems.

Synthetic data for training and validation of DMS systems

To train a dependably functioning computer vision AI model, it is necessary to use vast amounts of well-labeled data. Collecting and labeling this data is usually the bottleneck of any machine learning training process, especially for applications related to human safety, mostly due to the following issues:

  • privacy concerns, 
  • dataset balancing, 
  • time and cost constraints, stemming from data scarcity and manual labeling.

Synthetic data offers a reliable solution to the limitations posed by real-world datasets. This artificially generated data very closely resembles real-world scenarios, allowing for the creation of vast amounts of diverse and well-balanced data for training superior DMS models. 

What you can do with SKY ENGINE AI’s Synthetic Data Cloud for Vision AI

SKY ENGINE AI goes one step further—not only do we give access to superior synthetic data generators for the development of robust and versatile DMS solutions, but we also allow data scientists and AI developers to create the datasets on their own, according to their specifications. 

For a well-performing DMS, a number of features have to be covered while training the computer vision AI algorithm, such as driver and occupant state estimation or gesture recognition. SKY ENGINE AI allows for that and adds even more:

Driver and occupant state (e.g. drowsiness, distraction, emotions)

To train vision AI models for reliable in-cabin monitoring systems, high-quality synthetic data is required, with dense and accurate 3D key points and rich ground truth. Synthetic, and yet very realistic, humans were created to simulate driver and occupant behaviors in the car. They perform various activities, some of which may be considered illegal (depending on the country and its regulations). Such events should be accurately detected by any modern monitoring system, such as a DMS.

Lighting conditions – internal and external

At SKY ENGINE AI, we have developed a complete toolchain of components for generating synthetic environments such as cars of different models and makes, objects, and external surroundings. Moreover, any genuine car interior may be replicated to help our customers create AI models faster. We employ procedural object geometries and materials arrangement with various randomized factors, such as:

  • camera angles, 
  • lighting, 
  • time of day,
  • external surroundings.

The illumination variation introduced to the synthetic data simulated in the SKY ENGINE AI cloud can also include lighting from external objects spilling into the cabin onto the driver, occupants, and objects. It is also possible to vary the light setup itself: light sources, environment lights, and emissive objects. Moreover, SKY ENGINE AI supports modeling of point lights and spotlights with energy dissipation, image and HDR environment lights, as well as light sources with custom geometry (defined in mesh) and surface luminance (defined in material).
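To make this concrete, below is a minimal, illustrative sketch (plain Python, not SKY ENGINE AI's API) of how per-frame lighting and camera randomization could be parametrised; all names, choices, and value ranges here are assumptions for illustration only.

```python
import random

# Hypothetical per-frame randomization of cabin lighting and camera placement.
# Names and ranges are illustrative only, not SKY ENGINE AI's actual API.
def sample_frame_params():
    return {
        "camera": {
            "yaw_deg":   random.uniform(-15.0, 15.0),   # rotation around the mount axis
            "pitch_deg": random.uniform(-10.0, 5.0),
            "fov_deg":   random.choice([60.0, 90.0, 120.0]),
        },
        "lighting": {
            "time_of_day_h":    random.uniform(0.0, 24.0),   # drives sun position / HDR choice
            "hdr_environment":  random.choice(["city_day", "highway_dusk", "night_street"]),
            "ir_point_light_w": random.uniform(0.5, 2.0),    # NIR illuminator strength
            "emissive_screens": random.random() < 0.5,       # dashboard displays on/off
        },
        "exterior": random.choice(["urban", "rural", "highway", "parking_garage"]),
    }

if __name__ == "__main__":
    for _ in range(3):
        print(sample_frame_params())
```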

Gaze estimation package and facial landmarks

SKY ENGINE AI Synthetic Data Cloud supports multiple ground truth types, which are available for imagery data generated in different modalities (e.g. RGB, NIR).

3D Facial Landmarks 

Precise 3D key points corresponding to facial features are generated based on the provided specification. This data is crucial for training models that perform tasks such as facial recognition, emotion detection, and head pose estimation – all essential for robust DMS functionality.
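As an illustration of what such ground truth could look like, here is a small sketch in plain Python/NumPy: a few 3D landmarks in the camera frame projected to pixel coordinates with a pinhole model. The landmark names, coordinates, and intrinsics are assumptions, not SKY ENGINE AI's actual annotation schema.

```python
import numpy as np

# Hypothetical 3D facial landmarks in the camera coordinate frame (metres).
landmarks_3d = {
    "left_eye_outer":  np.array([-0.045, -0.02, 0.62]),
    "right_eye_outer": np.array([ 0.045, -0.02, 0.61]),
    "nose_tip":        np.array([ 0.000,  0.01, 0.58]),
}

# Pinhole intrinsics for an assumed 1280x800 cabin camera: fx, fy, cx, cy.
K = np.array([[900.0,   0.0, 640.0],
              [  0.0, 900.0, 400.0],
              [  0.0,   0.0,   1.0]])

def project(point_3d):
    """Project a 3D camera-frame point to pixel coordinates (u, v)."""
    uvw = K @ point_3d
    return float(uvw[0] / uvw[2]), float(uvw[1] / uvw[2])

for name, p in landmarks_3d.items():
    print(name, project(p))
```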

Gaze Vector Extraction

SKY ENGINE AI facilitates the extraction of gaze vectors for various DMS applications. This data can be used for:

  • Drowsiness and distraction estimation, so computer vision models can reliably identify driver fatigue and inattentiveness based on gaze patterns (a minimal heuristic is sketched after this list).
  • Personalization – for vehicle features that adapt to driver attention, such as automatic dimming of displays when the gaze is averted.
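For illustration only, the sketch below shows one way such a drowsiness/distraction signal could be derived from gaze-vector ground truth: flag the driver when the gaze stays away from the road direction for too long. The thresholds, frame rate, and the reference "eyes on road" direction are assumptions, not part of any SKY ENGINE AI model.

```python
import numpy as np

ROAD_DIRECTION = np.array([0.0, 0.0, 1.0])   # assumed 'eyes on road' direction
GAZE_THRESHOLD_DEG = 25.0                    # assumed off-road angle threshold
MAX_OFF_ROAD_S = 2.0                         # assumed tolerated off-road duration
FPS = 30                                     # assumed camera frame rate

def gaze_angle_deg(gaze):
    """Angle between a gaze vector and the road direction, in degrees."""
    cos_a = np.dot(gaze, ROAD_DIRECTION) / (np.linalg.norm(gaze) * np.linalg.norm(ROAD_DIRECTION))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

def is_distracted(gaze_per_frame):
    """True if the gaze stays off the road for more than MAX_OFF_ROAD_S seconds."""
    off_road_frames = 0
    for gaze in gaze_per_frame:
        off_road_frames = off_road_frames + 1 if gaze_angle_deg(gaze) > GAZE_THRESHOLD_DEG else 0
        if off_road_frames > MAX_OFF_ROAD_S * FPS:
            return True
    return False
```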

By utilizing these capabilities, data scientists and AI developers can train DMS models with exceptional accuracy, developing more advanced and reliable DMS solutions and driving innovation in automotive safety and user experience.

Driver and occupants’ activities simulation for action recognition

To achieve unprecedented accuracy in detecting and recognizing multiple actions performed by drivers and occupants, we require robust synthetic data simulation functionalities. These AI models must be rigorously trained on a vast array of well-balanced and diverse data that accurately reflects a variety of activities. This includes scenarios such as:

  • eating different types of food,
  • drinking,
  • smoking (including electronic devices and cigarettes), 
  • making phone calls, 
  • using phones or laptops, and carrying various objects. 

Additionally, we employ a 3D pose estimation framework, which utilizes 3D skeletons and a configurable set of key points to analyze the positions of drivers and occupants. To enhance this setup, we also incorporate head pose estimation, which tracks the centroid position and Euler angles of the head.
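The head-pose ground truth mentioned above can be pictured as a centroid plus yaw/pitch/roll angles. The sketch below converts such angles into a rotation matrix; the Z-Y-X composition order and all numbers are assumptions for illustration.

```python
import numpy as np

def head_pose_to_matrix(yaw_deg, pitch_deg, roll_deg):
    """Build a rotation matrix from Euler angles, composed as Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    y, p, r = np.radians([yaw_deg, pitch_deg, roll_deg])
    Rz = np.array([[np.cos(y), -np.sin(y), 0.0], [np.sin(y), np.cos(y), 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[np.cos(p), 0.0, np.sin(p)], [0.0, 1.0, 0.0], [-np.sin(p), 0.0, np.cos(p)]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, np.cos(r), -np.sin(r)], [0.0, np.sin(r), np.cos(r)]])
    return Rz @ Ry @ Rx

head_centroid = np.array([0.05, -0.10, 0.65])                 # assumed position (metres)
R = head_pose_to_matrix(yaw_deg=20.0, pitch_deg=-5.0, roll_deg=2.0)
print("Head forward axis in the camera frame:", R @ np.array([0.0, 0.0, 1.0]))
```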

Driver and occupant object handling

Our data generation platform enables the scene characters to interact with hundreds of objects, just as humans in real life do. The list of objects includes, but is not limited to:

  • cups,
  • bags,
  • smoking devices,
  • food,
  • keys,
  • mobile phones,
  • tablets,
  • cigarettes,
  • books. 

Larger objects, such as handbags, boxes, grocery bags, laptops, sports equipment, children’s toys, and blankets, can also be included when placed on car seats.

Varying weather conditions, time of day and exterior environments

Simulations generated with SKY ENGINE AI’s Platform also incorporate dynamic environmental factors like weather conditions and time of day, with options for cloud cover, rain, haze, snow, dust, or clear, sunny skies. These scenarios can be configured for various settings, such as urban or rural landscapes. Additionally, the simulations can parametrise light intensity and diffusion within the vehicle’s interior, allowing for realistic reflections and shadows cast by sunlight, as well as effects on objects, seats, child seats, and passengers.

Example Scenarios:

  • Snowy winter, urban environment: driver and occupants, smoking activities
  • Summer, countryside: driver handling a coffee cup in sunny weather
  • Dusk: driver and passenger using phones, items placed on the front seat
  • Cloudy urban day: driver eating, in-cabin objects reflecting in side and roof windows
  • Rainy day: driver with glasses holding a drink, passenger on a call, items on the front and rear seats
  • Evening, urban scene: driver looking back, bag on front seat, child seat in rear

SKY ENGINE AI’s Synthetic Data Cloud can generate scenes with any lighting, background, time of day, or weather conditions, also reflecting the impact of the evolving light on the car’s interior and on all the humans, pets, and objects present.
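To show how such scenarios could be specified, here is a small sketch mirroring the examples above; the field names and values are illustrative assumptions, not SKY ENGINE AI's actual configuration schema.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class ScenarioSpec:
    weather: str                                     # e.g. "clear", "rain", "snow", "haze", "cloudy"
    time_of_day: str                                 # e.g. "day", "dusk", "evening", "night"
    environment: str                                 # e.g. "urban", "rural", "highway"
    driver_activities: list[str] = field(default_factory=list)
    occupant_activities: list[str] = field(default_factory=list)
    loose_objects: list[str] = field(default_factory=list)

scenarios = [
    ScenarioSpec("snow", "day", "urban",
                 driver_activities=["smoking"], occupant_activities=["smoking"]),
    ScenarioSpec("clear", "day", "rural", driver_activities=["holding_coffee_cup"]),
    ScenarioSpec("clear", "dusk", "urban",
                 driver_activities=["using_phone"], occupant_activities=["using_phone"],
                 loose_objects=["items_on_front_seat"]),
]
print(scenarios[0])
```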

Gesture recognition

Our Synthetic Data Cloud uses a detailed 3D skeleton of the hand to simulate gestures necessary for hands-on-wheel monitoring, indications, pose estimation, object handling, touching controls and panels, etc. Moreover, our Data Cloud can serve hand/palm pose estimation models, or spatiotemporal models operating on the extracted positions of joints.
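As a rough illustration of the data format such models might consume, the sketch below treats a gesture clip as a (frames, joints, 3) array of hand-joint positions and derives a simple hands-on-wheel feature. The 21-joint layout, wheel position, and clip data are assumptions for illustration only.

```python
import numpy as np

NUM_JOINTS = 21                               # assumed hand-skeleton size (wrist at index 0)
clip = np.random.rand(90, NUM_JOINTS, 3)      # placeholder clip: 3 s at 30 fps, random positions

# Simple feature for hands-on-wheel monitoring: distance of the wrist joint from an
# assumed steering-wheel centre expressed in the same coordinate frame.
WHEEL_CENTRE = np.array([0.0, -0.35, 0.45])
wrist_to_wheel = np.linalg.norm(clip[:, 0, :] - WHEEL_CENTRE, axis=1)
print("Mean wrist-to-wheel distance over the clip:", float(wrist_to_wheel.mean()))
```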

Simulated seat belts and their status

SKY ENGINE AI provides simulated seat belts and their status (on, off, incorrectly fastened) for the driver and occupants, including children in child seats with separate harnesses. Seat belts are available in numerous realizations and adapt to human geometry using generative simulation for a perfect body fit, also during driver and occupant activities.

In the vehicle’s cabin 

The SKY ENGINE AI Synthetic Data Cloud enables the creation of very realistic humans and objects to propel the development of next-generation in-cabin monitoring systems. With our data cloud, we created hundreds of drivers and passengers to simulate driver and occupant behaviors in the car. These synthetic humans were created along with the entire car interior and its context under changing outside environment conditions, so that their impact on the trained vision AI models is preserved.

Children of different ages and in a variety of settings can be simulated in child seats of numerous models/makes, carrying out several activities, including:

  • playing with toys, 
  • using phones or tablets, 
  • eating, 
  • reading, 
  • drinking, and more. 

SKY ENGINE AI’s simulated synthetic data can serve 3D pose estimation with dense child-skeleton annotations, as well as activity tracking with spatiotemporal action classification models operating on extracted joint positions. Seat belts in child seats can be simulated separately, with their own annotations.

Pets of different kinds and breeds can be included in the scene (e.g. dogs, cats) with accessories such as animal carriers. Interactions of the driver and occupants with pets and/or accessories can also be simulated.

Synthetic Data Cloud platform features for the in-cabin monitoring and automotive industry

Multi-modality support, custom camera/sensor characteristics, and pixel-perfect, diverse ground truths.

SKY ENGINE AI Synthetic Data Cloud offers a physically-based simulation environment enabling production of data for numerous actions and activities of the driver and the occupants in the in-cabin context. Being sensor-agnostic, SKY ENGINE AI provides tools that make it possible to re-create the characteristics of real cameras and sensors, including focal length, field of view (FOV) and the relationship between them, the amount of random noise, the modulation transfer function (MTF) of the lens setup, perspective, aspect ratio, available contrast, and more.
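For example, the focal length and field of view mentioned above are tied together by the sensor size; the short sketch below computes the horizontal FOV for an assumed sensor and lens (the numbers are illustrative, not a specific product's specification).

```python
import math

SENSOR_WIDTH_MM = 5.37      # assumed active sensor width
FOCAL_LENGTH_MM = 2.8       # assumed lens focal length

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Horizontal field of view of a rectilinear lens, in degrees."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

print(f"Horizontal FOV: {horizontal_fov_deg(SENSOR_WIDTH_MM, FOCAL_LENGTH_MM):.1f} deg")
```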

The Platform enables generation of pixel-perfect ground truth for the simulated in-cabin objects, with depth maps, normal vector maps, instance masks, bounding boxes, and 3D key points ready for AI training. Additionally, there are render passes dedicated to deep learning, support for animation and motion-capture systems, and deterministic rendering with advanced machinery for randomization strategies of scene parameters in an active learning approach.
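One way to picture such a training sample is as a set of per-pixel and per-object channels, as in the sketch below; the array shapes and key names are assumptions, not the platform's actual output schema.

```python
import numpy as np

H, W = 800, 1280   # assumed frame size

sample = {
    "rgb":           np.zeros((H, W, 3), dtype=np.uint8),
    "nir":           np.zeros((H, W),    dtype=np.uint8),
    "depth":         np.zeros((H, W),    dtype=np.float32),   # metres
    "normals":       np.zeros((H, W, 3), dtype=np.float32),   # unit normal vectors
    "instance_mask": np.zeros((H, W),    dtype=np.int32),     # 0 = background
    "bboxes":        np.zeros((0, 5),    dtype=np.float32),   # x_min, y_min, x_max, y_max, class_id
    "keypoints_3d":  np.zeros((0, 3),    dtype=np.float32),   # 3D key points, metres
}
```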

Synthetic data simulation and ground truth generation for the Driver Monitoring System (DMS)

SKY ENGINE AI Synthetic Data Cloud offers an efficient solution for simulating data in several modalities, including ground truth generation of all the kinds outlined above, so that computer vision developers can quickly build their data stack and seamlessly train AI models covering numerous situations and corner cases in the dataset. The following parameters can be modified within a scene created in the SKY ENGINE AI cloud:

Image:

  • Rendered image resolution and ray tracing quality.
  • Scene textures resolution (full, reduced for space and time optimization).
  • Scene textures parameters (car interior materials, patterns, colors).
  • Shaders’ parameters (clearcoat level, sheen level, subsurface scattering level, light iterations, antialiasing level, and more).
  • Modality selector (visible light, near infrared).
  • Post processing (tone mapping, AI denoising, blur / motion blur, light glow).

Environment: 

  • Outside environmental maps type.
  • Environmental light intensity.
  • IR point lights strength, angle and direction.

Lenses and cameras:

  • Lens parameters: type (pinhole, fisheye, anamorphic), focal lengths, principal point, distortion coefficients (radial, tangential) – see the sketch after this list.
  • Camera position and orientation (e.g., rear view mirror, console, etc.).
  • Camera randomization ranges (X, Y, Z, roll, pitch, yaw).
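The sketch below illustrates the pinhole projection with radial and tangential (Brown-Conrady) distortion behind the lens parameters listed above; the intrinsics and coefficients are illustrative assumptions only.

```python
import numpy as np

fx, fy, cx, cy = 900.0, 900.0, 640.0, 400.0     # assumed intrinsics
k1, k2, p1, p2 = -0.30, 0.10, 0.001, -0.0005    # assumed radial (k) and tangential (p) terms

def project_with_distortion(point_cam):
    """Project a 3D camera-frame point to pixels through a distorted pinhole model."""
    x, y = point_cam[0] / point_cam[2], point_cam[1] / point_cam[2]   # normalized coordinates
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return fx * x_d + cx, fy * y_d + cy

print(project_with_distortion(np.array([0.10, -0.05, 0.60])))
```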

Driver and occupants:

  • Occupancy probabilities on each seat (adults, children on child seats, empty child seats, big items, piles of items, empty seats). For items, also between seats.
  • Driver and occupant action probabilities (drinking, eating, smoking, driving, idle, looking around, grabbing an object from another seat etc.).
  • Seatbelt status (fastened/unfastened) probabilities.

It’s important to note that all parameters can be defined as rules for randomization (range, predefined distribution, custom probabilistic distribution).
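As an illustration of these three kinds of rules, the sketch below samples one frame's parameters from a plain range, a predefined (Gaussian) distribution, and a custom discrete distribution; the rule names, ranges, and weights are assumptions, not SKY ENGINE AI's actual rule syntax.

```python
import random

rules = {
    "camera_yaw_deg":  lambda: random.uniform(-15.0, 15.0),        # plain range
    "ir_light_w":      lambda: random.gauss(1.0, 0.2),             # predefined distribution
    "seatbelt_status": lambda: random.choices(                     # custom discrete distribution
        ["fastened", "unfastened", "incorrectly_fastened"],
        weights=[0.80, 0.15, 0.05])[0],
}

frame_params = {name: sample() for name, sample in rules.items()}
print(frame_params)
```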

Example application of in-cabin monitoring systems – the insurance industry

Robustly trained computer vision AI models incorporated in DMS and occupant monitoring systems (OMS) enable real-time assistance in identifying in-car activities and help to quantify event risks: their probability, direct cause, and the likelihood and severity of potential injuries, accumulated over pre-defined periods of time. Accidents can be predicted, managed, and, at best, largely prevented. Insurers may be advised of the most likely events, given on-the-road and in-cabin circumstances, with a high level of accuracy. This can assist in negotiating policies and applying discounts to drivers or fleet owners, reinforcing strict safety measures, noticing fewer issues, and recommending additional training assistance. Such a system could apply driver scoring for post-accident analysis of behavior to further optimize the insurance policy and its cost, which can be especially interesting for fleet owners. In a premium offering, drivers could receive monetary rewards in the form of monthly cash-backs, based on their road safety performance measured by accurate, evidence-based risk indicators for insurance companies.

The DMS is expected to keep all the data aboard the vehicle to avoid triggering privacy concerns. However, to help resolve disputed insurance claims, it could be operated with a recording mode on. Fleet owners or insurance companies could get insights into the detailed status of the driver, and the system could notify a dispatch center or deliver real-time, life-saving alerts to the driver. Insurers could offer Pay-How-You-Drive (PHYD) policy models built on accurate, evidence-based risk indicators, including driver scoring. The drivers, in turn, could receive monetary rewards in the form of monthly cash-backs or policy discounts, based on their road safety performance.

Other uses of in-cabin monitoring systems

In-cabin monitoring systems extend beyond private cars to applications in commercial vehicles like trucks, buses, trains, and airplanes, where safety and operational efficiency are critical. These systems can detect driver states such as fatigue, distraction, or intoxication in lorries or cabs, ensuring safer long-haul transportation. For trains and airplanes, monitoring the operators’ attentiveness enhances compliance with safety protocols, while occupant detection supports personalized experiences or emergency response. SKY ENGINE AI’s Synthetic Data Cloud enables generation of robust training datasets for these systems, incorporating diverse, real-world-like conditions and accelerating the development of reliable computer vision algorithms.

Summary 

In-cabin monitoring systems capable of detecting driver and occupant state, identifying seat belts, and estimating gaze and gestures can have their vision AI models effectively trained on synthetic data with dense 3D key points and rich ground truth. Synthetic yet very realistic humans, objects, and car interiors with surroundings can be created to simulate driver and occupant behavior in the car, including activities that may be prohibited by law.

Let us know about your use cases and get access to the SKY ENGINE AI Platform, or get tailored AI models or synthetic datasets for your driver and occupant monitoring applications. As we support many more industries than just automotive, a broad range of customization is available, even for specific sensors and environments.