Autonomous vehicle developers could soon use generative AI to get more out of the data they gather on the roads. Helm.ai this week unveiled GenSim-2, its new generative AI model for creating and modifying video data for autonomous driving.
The company said the model introduces AI-based video editing capabilities, including dynamic weather and illumination adjustments, object appearance modifications, and consistent multi-camera support. Helm.ai said these advancements provide automakers with a scalable, cost-effective system to enrich datasets and address the long tail of corner cases in autonomous driving development.
Trained using Helm.ai’s proprietary Deep Teaching methodology and deep neural networks, GenSim-2 expands on the capabilities of its predecessor, GenSim-1. Helm.ai said the new model enables automakers to generate diverse, highly realistic video data tailored to specific requirements, facilitating the development of robust autonomous driving systems.
Founded in 2016 and headquartered in Redwood City, CA, the company develops AI software for ADAS, autonomous driving, and robotics. Helm.ai offers full-stack real-time AI systems, including deep neural networks for highway and urban driving, end-to-end autonomous systems, and development and validation tools powered by Deep Teaching and generative AI. The company collaborates with global automakers on production-bound projects.
Helm.ai has multiple generative AI-based products
With GenSim-2, development teams can modify weather and lighting conditions, such as rain, fog, snow, glare, and time of day (day or night), in video data. Helm.ai said the model supports both augmented-reality modifications of real-world video footage and the creation of fully AI-generated video scenes.
Additionally, it enables customization of object appearances, from road surfaces (e.g., paved, cracked, or wet) to vehicles (type and color), pedestrians, buildings, vegetation, and other road objects such as guardrails. These transformations can be applied consistently across multi-camera perspectives to enhance realism and self-consistency throughout the dataset.
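GenSim-2 itself is proprietary, and Helm.ai has not published its API. As a rough, non-generative illustration of the kind of weather and illumination augmentation described above, the sketch below uses the open-source albumentations library to perturb a single driving frame; file names and probabilities are hypothetical.

import albumentations as A
import cv2

# Compose weather and lighting perturbations; each transform fires with
# the given probability, so repeated passes yield varied conditions.
weather_aug = A.Compose([
    A.RandomRain(p=0.3),                # synthetic rain streaks
    A.RandomFog(p=0.3),                 # haze/fog overlay
    A.RandomSnow(p=0.3),                # brightened snow-like patches
    A.RandomSunFlare(p=0.2),            # glare
    A.RandomBrightnessContrast(p=0.5),  # coarse day/night-style shifts
])

frame = cv2.imread("drive_frame.png")          # hypothetical input frame
augmented = weather_aug(image=frame)["image"]  # augmented copy
cv2.imwrite("drive_frame_rainy.png", augmented)

The contrast with a classical pipeline like this is instructive: per-frame transforms have no notion of temporal or cross-camera coherence, whereas the consistency across frames and camera perspectives that Helm.ai claims for GenSim-2 is precisely what a generative video model is needed for.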
“The ability to manipulate video data at this level of control and realism marks a leap forward in generative AI-based simulation technology,” said Vladislav Voroninski, Helm.ai’s CEO and founder. “GenSim-2 equips automakers with unparalleled tools for generating high fidelity labeled data for training and validation, bridging the gap between simulation and real-world conditions to accelerate development timelines and reduce costs.”
Helm.ai said GenSim-2 addresses industry challenges by offering an alternative to resource-intensive traditional data collection methods. Its ability to generate and modify scenario-specific video data supports a wide range of applications in autonomous driving, from developing and validating software across diverse geographies to resolving rare and challenging corner cases.
In October, the company released VidGen-2, another generative AI-based tool for autonomous driving development. VidGen-2 generates predictive video sequences with realistic appearances and dynamic scene modeling. Compared with its predecessor, VidGen-1, the updated system doubles the output resolution, improves realism at 30 frames per second, and adds multi-camera support with twice the per-camera resolution.
Helm.ai also offers WorldGen-1, a generative AI foundation model that it said can simulate the entire autonomous vehicle stack. According to the company, WorldGen-1 can generate, extrapolate, and predict realistic driving environments and behaviors, producing driving scenes across multiple sensor modalities and perspectives.