Apollo Autonomous Vehicle Platform
Sensors such as LiDAR, cameras, and radar collect environmental data around the vehicle. Using sensor fusion, perception algorithms determine in real time the type, location, velocity, and orientation of objects on the road. The perception system is backed by Baidu's big-data and deep-learning technologies, a vast collection of labeled real-world driving data, and a large-scale deep-learning platform running on GPU clusters. Simulation provides the ability to virtually drive millions of kilometers daily using an array of real-world traffic and autonomous-driving data. Through the simulation service, partners gain access to a large number of autonomous driving scenes to quickly test, validate, and optimize models with comprehensive coverage in a way that is safe and efficient.
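The sensor-fusion idea above can be sketched in its simplest form: combining two independent position estimates by inverse-variance weighting, the static special case of a Kalman update. The sensor names and noise values here are illustrative, not Apollo's actual pipeline.

```python
import numpy as np

def fuse_estimates(mean_a, var_a, mean_b, var_b):
    """Fuse two independent Gaussian position estimates (e.g. one
    from LiDAR, one from radar) by inverse-variance weighting.
    The more certain sensor pulls the fused estimate toward itself,
    and the fused variance is always below either input variance."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused_mean = (w_a * mean_a + w_b * mean_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused_mean, fused_var

# Hypothetical measurements: LiDAR is precise, radar is noisier.
lidar_pos, lidar_var = np.array([12.1, 3.4]), 0.04   # [x, y] metres
radar_pos, radar_var = np.array([12.6, 3.1]), 0.25
mean, var = fuse_estimates(lidar_pos, lidar_var, radar_pos, radar_var)
```

The fused result lands close to the LiDAR estimate (its variance is six times smaller) while still incorporating the radar measurement.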
Learn more
NVIDIA DRIVE Map
NVIDIA DRIVE® Map is a multi-modal mapping platform designed to enable the highest levels of autonomy while improving safety. It combines the accuracy of ground-truth mapping with the freshness and scale of AI-based fleet-sourced mapping. With four localization layers—camera, lidar, radar, and GNSS—DRIVE Map provides the redundancy and versatility required by the most advanced AI drivers. Designed for the highest level of accuracy, the ground-truth map engine creates DRIVE Maps using rich sensor data—cameras, radars, lidars, and differential GNSS/IMU—collected by NVIDIA DRIVE Hyperion vehicles. It achieves better than 5 cm accuracy for higher levels of autonomy (L3/L4) in selected environments, such as highways and urban areas. DRIVE Map is designed for near-real-time operation and global scalability. Based on both ground-truth and fleet-sourced data, it represents the collective memory of millions of vehicles.
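The value of multiple localization layers is redundancy: one faulty layer should not corrupt the pose. A toy version of such a cross-check, assuming hypothetical layer names and an illustrative agreement threshold (this is not NVIDIA's actual algorithm), might accept a pose only when two independent layers agree:

```python
from itertools import combinations

def redundant_pose(fixes, tol=0.10):
    """fixes: {layer_name: (x, y)} pose fixes in metres.
    Require at least two independent layers to agree within `tol`
    metres before trusting a pose; return the average of the first
    agreeing pair and the names of the layers that produced it."""
    for (name_a, pa), (name_b, pb) in combinations(fixes.items(), 2):
        if abs(pa[0] - pb[0]) <= tol and abs(pa[1] - pb[1]) <= tol:
            avg = ((pa[0] + pb[0]) / 2, (pa[1] + pb[1]) / 2)
            return avg, (name_a, name_b)
    return None, ()  # no two layers agree: report localization failure

# Hypothetical fixes: camera and lidar agree; GNSS is off (multipath).
fixes = {"camera": (101.02, 55.48),
         "lidar":  (101.05, 55.50),
         "gnss":   (102.40, 55.10)}
pose, layers = redundant_pose(fixes)
```

Here the GNSS outlier is simply outvoted, illustrating how layer redundancy lets localization degrade gracefully rather than fail on a single bad sensor.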
Learn more
Cognata
Cognata delivers full product-lifecycle simulation for ADAS and autonomous-vehicle developers. It provides automatically generated 3D environments and realistic AI-driven traffic agents for AV simulation, along with a ready-to-use scenario library and simple authoring tools for creating millions of AV edge cases. Closed-loop testing comes with painless integration, configurable rules and visualization for autonomous simulation, and measured, tracked performance. Digital-twin-grade 3D environments model roads, buildings, and infrastructure accurately down to the last lane marking, surface material, and traffic light. The architecture is global, cost-effective, efficient, and built for the cloud from the beginning: closed-loop simulation or integration with your CI/CD environment is a few clicks away. Engineers can easily combine control, fusion, and vehicle models with Cognata's environment, scenario, and sensor-modeling capabilities.
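Generating "millions of edge cases" from a scenario library typically means sweeping a base scenario across parameter axes. A minimal sketch of that idea, using illustrative parameter names rather than Cognata's actual scenario schema:

```python
from itertools import product

# Hypothetical axes for a highway cut-in scenario: the cross product
# of even a few axes multiplies quickly into a large test matrix.
weather = ["clear", "rain", "fog"]
time_of_day = ["noon", "dusk", "night"]
cut_in_gap_m = [5.0, 10.0, 20.0]  # gap to the cutting-in vehicle

scenarios = [
    {"weather": w, "time": t, "cut_in_gap_m": g}
    for w, t, g in product(weather, time_of_day, cut_in_gap_m)
]
```

Three axes of three values already yield 27 variants; adding axes for vehicle speed, road friction, or pedestrian behavior grows the matrix combinatorially, which is why scenario generation is automated rather than hand-authored.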
Learn more
LidarView
LidarView is an open-source platform developed by Kitware for real-time visualization, recording, and processing of 3D LiDAR data. Built atop ParaView, it efficiently renders large point clouds and offers features such as 3D visualization of time-stamped LiDAR returns, a spreadsheet inspector for attributes like timestamp and azimuth, and the ability to display multiple data frames simultaneously. Users can input data from live sensor streams or recorded .pcap files, apply 3D transformations to point clouds, and manage subsets of laser data. LidarView supports various sensors, including models from Velodyne, Hesai, Robosense, Livox, and Leishen, enabling visualization of live streams and replay of recorded data. The platform integrates advanced algorithms for Simultaneous Localization and Mapping (SLAM), facilitating accurate environmental reconstruction and sensor localization. It also incorporates AI and machine-learning capabilities for scene classification.
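Behind any LiDAR point-cloud renderer sits the transform from a sensor's raw spherical returns (range, azimuth, elevation) to Cartesian XYZ. A minimal sketch of that conversion — note that angle conventions differ between sensor vendors, so the axes assumed here (azimuth from +x toward +y, elevation from the horizontal plane) are one common choice, not LidarView's internal code:

```python
import math

def polar_to_xyz(distance_m, azimuth_deg, elevation_deg):
    """Convert one LiDAR return from spherical sensor coordinates
    to Cartesian XYZ in metres. The horizontal range is the slant
    range projected onto the horizontal plane."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    r_h = distance_m * math.cos(el)  # horizontal (ground-plane) range
    return (r_h * math.cos(az),
            r_h * math.sin(az),
            distance_m * math.sin(el))

# A 10 m return at 90 degrees azimuth, level with the sensor,
# lands on the +y axis under this convention.
x, y, z = polar_to_xyz(10.0, 90.0, 0.0)
```

Applying this per return, per rotation, is what turns a stream of packets from a spinning sensor into the point clouds LidarView displays.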
Learn more