Quantifying the LiDAR Sim-to-Real Domain Shift: A Detailed Investigation Using Object Detectors and Analyzing Point Clouds at Target-Level (2024)

Sebastian Huch, Luca Scalerandi, Esteban Rivera, Markus Lienkamp

Manuscript received Month XX, XXXX; revised Month XX, XXXX. Sebastian Huch, Esteban Rivera, and Markus Lienkamp are with the Institute of Automotive Technology, School of Engineering and Design, Technical University of Munich, Boltzmannstraße 15, 85748 Garching, Germany (e-mail: sebastian.huch@tum.de, esteban.rivera@tum.de, lienkamp@tum.de). Luca Scalerandi is with the Department of Informatics, Technical University of Munich, Boltzmannstraße 3, 85748 Garching, Germany (e-mail: luca.scalerandi@tum.de).

Abstract

LiDAR object detection algorithms based on neural networks for autonomous driving require large amounts of data for training, validation, and testing. As real-world data collection and labeling are time-consuming and expensive, simulation-based synthetic data generation is a viable alternative. However, using simulated data for the training of neural networks leads to a domain shift between training and testing data due to differences in scenes, scenarios, and distributions. In this work, we quantify the sim-to-real domain shift by means of LiDAR object detectors trained with a new scenario-identical real-world and simulated dataset. In addition, we answer the questions of how well the simulated data resembles the real-world data and how well object detectors trained on simulated data perform on real-world data. Further, we analyze point clouds at the target-level by comparing real-world and simulated point clouds within the 3D bounding boxes of the targets. Our experiments show that a significant sim-to-real domain shift exists even for our scenario-identical datasets. This domain shift amounts to an average precision reduction of around 14% for object detectors trained with simulated data. Additional experiments reveal that this domain shift can be lowered by introducing a simple noise model in simulation. We further show that a simple downsampling method to model real-world physics does not influence the performance of the object detectors.

Index Terms:

Autonomous vehicles, LiDAR, point cloud, deep learning, synthetic data, object detection, simulation, domain shift

I Introduction

Autonomous vehicles (AVs) have the potential to increase road safety and reduce emissions [1]. Therefore, AV research and development has made great strides forward in recent years, to such an extent that the first autonomous shuttles with a limited operational design domain (ODD) are driving on public roads. Nevertheless, these are hand-tailored solutions for specific situations which cannot be easily generalized. In order to extend those ODDs, a reliable software pipeline consisting of sequential (interconnected) modules for perception, prediction, planning, and control is required. In such a pipeline, the perception module estimates the state of the AV with respect to its environment through sensors like camera, LiDAR, and RADAR, and provides the relevant information needed to perform the driving task to the subsequent modules. Part of the perception module are object detection algorithms, which take the sensor readings as input and output a list of objects. These lists contain each object’s position, shape, and orientation, and represent the different traffic participants present in the environment.

These object detection algorithms are highly dependent on machine learning to extract features from raw sensor data. For example, deep neural networks such as YOLOv3 [2] or PV-RCNN [3] are used to detect objects in camera images or LiDAR point clouds, respectively.

Most of the approaches to train those networks are based on supervised learning, meaning they need labeled data. Depending on the data domain, labeling individual frames or point clouds can be very costly in terms of time, effort, and money. Specifically, for a real-world AV dataset, a vehicle equipped with cameras and LiDAR sensors records sensor data from specific scenarios. Afterward, the captured dataset is manually labeled frame by frame. Additionally, such a dataset can become outdated quickly due to constant upgrades to the AV’s sensors, e.g., the higher resolution of a new LiDAR sensor.

An alternative to labeled real-world data is synthetic data, which can be categorized into augmented real-world data or simulated data. The former can be generated, e.g., by extracting objects from an already labeled real-world dataset and placing these objects into other frames of that dataset. This method is known as domain randomization and is performed to improve an object detector’s performance, for example, for camera object detection [4]. Although the results of domain randomization are promising, the dataset is still limited to the scenarios captured during the initial dataset generation.

The second form of synthetic data is a dataset generated in a simulation environment with virtual sensor models. These simulations can model a photo-realistic 3D environment and include a physics simulation for static and dynamic actors, such as vehicles and pedestrians. Several open-source simulators specially designed for autonomous driving are available, e.g., CARLA [5]. The main advantages of datasets generated in simulation are the unlimited data generation, including automatic pointwise labels, the option to vary environment conditions and sensor configurations, and rapid scenario construction, which allows the simulation of potentially dangerous edge cases.

The effectiveness of a dataset is not only defined by the quantity of data but also by the quality of the data in terms of realism, distribution, and diversity [6]. Although neural networks can be trained with large simulated datasets, this does not per se guarantee good performance in a real-world application. The success of training and testing on different domains, for example, simulated and real-world, as in Fig. 1, is based on the assumption that both domains share the same feature space and distribution [7]. However, in real-world applications, this assumption is not satisfied. Existing simulation environments do not generate sensor data identical to real-world data because simulated sensor models do not accurately model real-world physics and sensor characteristics. Therefore, the performance of object detection algorithms trained with simulated data is affected by this phenomenon, which is called domain shift. More generally, the term domain shift, also called domain gap, refers to the difference between two domains, as observed through the available data. The analysis of this domain shift and its quantification in the scope of LiDAR sensors are the main content of this paper.

More specifically, our main contributions are as follows:

  • We propose a method to generate scenario-identical datasets which can be used to quantify the sim-to-real domain shift using 3D LiDAR object detectors.

  • We quantify the domain shift of simulated and real-world datasets and further break down the results by exploiting a point cloud target-level analysis.

  • We show that simple modifications to the virtual sensor model, such as noise, can increase the object detector’s performance, and thus lower the sim-to-real domain shift.

  • We provide a labeled distribution-aligned (scenario-identical) LiDAR point cloud dataset with simulated and real-world data for the evaluation of future domain adaptation approaches: https://github.com/TUMFTM/Sim2RealDistributionAlignedDataset.

This work is organized as follows: Sec. II discusses the approaches of similar works in this research area. In Sec. III, we first explain the data extraction for our real-world dataset and the data generation in simulation. This is followed by introducing the domain shift quantification, including the neural networks, the metrics for evaluation, and the point cloud target-level analysis. The results of the experiments and their discussion can be found in Sec. IV and Sec. V, respectively. We summarize our work in Sec. VI.

II Related Work

The performance discrepancy between unequal source and target domains has been studied for years [8]. Even minimal changes in the source domain, like a camera parameter change, can have large effects on a model’s performance [9]. Neural networks, even though trained on similar samples from one domain, frequently underperform on test data from other domains [8], [10].

Several works cover the quantification of the domain shift, but many of them refer to camera images instead of LiDAR point clouds.

Adam et al. [11] and Nowruzi et al. [6] train multiple camera 2D object detectors on simulated data and test the networks on a mixed dataset containing real-world and simulated images. They calculate standard object detection metrics, such as mean average precision (mAP), to quantify the domain shift. Both works conclude that a domain shift is noticeable by a difference in the mAP of networks trained on simulated or real-world data. [6] further investigate the potential of training with mixed datasets and fine-tuning networks on the target (real-world) dataset. Their studies show that fine-tuning performs better than mixed data training and that fine-tuning can lower but not eliminate the domain shift. The performance of mixed data training is also the research topic of Seib et al. [12] and Burdorf et al. [13]. Their results indicate that synthetic data can replace real-world data to an extent, but proportionately more synthetic than real-world data is required. Moreover, synthetic data can be successfully used for network pre-training, leading to better performance compared to training on real-world data only [14], [15].

Similar research to quantify the domain shift has also been conducted for LiDAR point clouds with 3D object detectors. Dworak et al. [16] train three object detectors with data generated in the simulation environment CARLA and evaluate their performance on the real-world KITTI dataset [17]. Although the networks achieve an mAP of up to 87% when trained and evaluated on CARLA (“sim-to-sim”), the best network only reaches an mAP of 19% if trained on CARLA and tested on KITTI (“sim-to-real”). The authors also experimented with fine-tuning and mixed-data training. The results of these methods follow the findings of [6] for camera 2D object detectors.

To evaluate their methods for synthetic LiDAR data generation, Fang et al. [18] and Manivasagam et al. [19] compared their generated data with the CARLA and KITTI datasets by training object detectors and evaluating them on KITTI. Both works highlight the sim-to-real domain shift between the CARLA and KITTI datasets.

Yue et al. [20] and Spiegel et al. [21] conduct similar LiDAR sim-to-real comparison experiments but focus on semantic segmentation instead of object detection. However, the sim-to-real domain shift is also measurable with networks designed for different tasks, such as segmentation.

The works of Tsai et al. [22] and Wang et al. [10] investigate the real-to-real domain shift, which occurs when a neural network is trained and tested on different real-world datasets. While [22] focus on the difference in the scan pattern of the LiDAR sensors used in the KITTI, nuScenes [23], and Waymo [24] datasets, [10] point out the statistical differences in vehicle shapes and sizes in datasets collected in different countries. [10] also suggest a domain adaptation approach using statistical normalization to improve cross-dataset performance.

The existence of a domain shift leads to the study of domain adaptation, which is concerned with creating models, adapting data, and applying other techniques to allow models to generalize well to a target domain even though they have been trained on a different source data distribution [8]. Several works propose domain adaptation methods to minimize the camera domain shift, such as domain randomization [4], [25], [26], domain augmentation [27], or generative adversarial networks [28]. Similar efforts also exist that cover the LiDAR domain shift, but they directly target synthetic data generation [20], [18] instead of modifying existing datasets. All the mentioned domain adaptation works benchmark the effectiveness of their method by training object detectors with the synthetic (source), the adapted synthetic, and the real-world (target) datasets and evaluating the performance on the real-world dataset.

Ljungqvist et al. [29] compare 2D object detectors trained with synthetic and real-world image data using a different method. Instead of calculating metrics based on the network’s final outputs, i.e., the bounding boxes of the predicted objects, the authors compare the similarity of the outputs of each network layer. For each layer, they calculate the linear centered kernel alignment (CKA) similarity index [30] and conduct a layer-wise comparison of networks trained with synthetic and real-world data. The analysis shows a high similarity in the early network layers and a relatively low similarity in the network detection head.

Triess et al. [31] define a metric to quantify the realism of generated LiDAR point clouds. This metric is based on an adversarial learning technique and can be applied to unseen data. They demonstrate the effectiveness of their quantitative metric by evaluating semantic segmentation networks.

We address the limitations of the related work. Similar works quantified the domain shift based on synthetic and real-world datasets with different scenes, scenarios, and distributions, e.g., by using a dataset recorded in the simulation environment CARLA and the real-world Waymo dataset. This approach does not allow one to conclude whether the domain shift originates from an unrealistic sensor model in simulation, from the distribution shift, or from a combination of both. We aim to investigate the domain shift of synthetic and real-world point clouds using datasets with identical scenarios and distributions, which we explain in the following sections.


III Methodology

In this section, we present our method for the quantification and analysis of the sim-to-real domain shift. The method consists of three consecutive steps, namely dataset generation, object detection algorithm selection and training, and performance evaluation. First, we capture real-world data, develop a pipeline to automatically label the data, and generate a simulated dataset based on the real-world data. To validate the generated datasets, we perform a high-level comparison based on statistical parameters; this is described in Sec. III-A and III-B. The second step of our method comprises the selection of neural networks for object detection. Together with the explanation of the configuration and training parameters of the networks, this is described in Sec. III-C. The last step of our method is the evaluation of the selected networks trained with the generated datasets. We present the KPIs used for domain shift quantification in Sec. III-D.

III-A Dataset Generation

To investigate the nature of the sim-to-real domain shift, a novel dataset is used, as none of the existing datasets fulfills the requirement of being distribution-aligned. This dataset consists of two subsets representing the same environment, agents, and scenarios. These subsets are the real dataset and a sim dataset derived from the real-world counterpart. Not only are the same scene and vehicles used as in the real dataset, but all driving scenarios are also replayed in the simulated environment so that the discrepancy between the real and sim datasets is kept to a minimum. Thus, the domain shift can be isolated, excluding external influences that could distort its quantification.

All real-world measurement data was captured during the Indy Autonomous Challenge (IAC) in Las Vegas in 2022. The IAC was the world’s first head-to-head autonomous race car competition between international universities. The race cars used were the Dallara AV-21 (Fig. 2a), a modified Dallara IL-15, equipped with various sensors and programmed to drive autonomously. During the single and multi-vehicle races along the oval race track, large amounts of sensor data were collected. Each race car is equipped with three identical LiDAR sensors, each with a horizontal field of view (FoV) of 120°, to cover a combined FoV of 360° at 20 Hz up to a range of 250 m (at 10% surface reflection). LiDAR point clouds with pointwise x-, y-, z-, and intensity values, as well as the GPS trajectories of all vehicles, were among the data collected during the training sessions and the actual competition.

Capturing the GPS coordinates of every vehicle is not only beneficial for auto-labeling the real-world point clouds but also allows precise re-simulation and, thus, the generation of the simulated dataset. The GPS data was captured at a rate of 20 Hz and includes the position, orientation, and velocity of each vehicle. We match the GPS coordinates of all vehicles with the point clouds in each time step to create the labels w.r.t. the ego vehicle’s local coordinate system. In a second step, these labels are refined by calculating the point distribution within and around each 3D bounding box and shifting the boxes accordingly to mitigate potential labeling errors.
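As an illustration of the first labeling step, the following minimal sketch converts a target’s global GPS pose into a 3D bounding box label in the ego vehicle’s local frame. It assumes a simplified 2D pose representation (x, y, yaw); the function name is illustrative, and the refinement step based on the point distribution is not shown.

```python
import numpy as np

def target_pose_to_local_box(ego_pose, target_pose, dims=(4.88, 1.90, 1.18)):
    """Turn a target's global GPS pose into a 3D box label in the ego frame.

    ego_pose, target_pose: (x, y, yaw) in a common global frame.
    dims: (length, width, height) of the Dallara AV-21.
    Returns (x, y, z, l, w, h, yaw) w.r.t. the ego vehicle's local frame.
    """
    ex, ey, eyaw = ego_pose
    tx, ty, tyaw = target_pose

    # Rotate the global position offset into the ego frame.
    dx, dy = tx - ex, ty - ey
    c, s = np.cos(-eyaw), np.sin(-eyaw)
    x_local = c * dx - s * dy
    y_local = s * dx + c * dy

    l, w, h = dims
    # The box center height is approximated as half the vehicle height.
    return np.array([x_local, y_local, h / 2.0, l, w, h, tyaw - eyaw])
```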

The real-world data was used to create the scenario-identical synthetic dataset using a 3D simulator. The simulator is based on Unity and includes 3D models of the Dallara AV-21 (Fig. 2b), the 3D environment of the race track, and a custom LiDAR sensor model. This sensor model was configured to match the characteristics of the real-world LiDAR sensor. The sensor model is capable of calculating pointwise intensity values based on the ray incidence angle and target material. However, these intensities are not validated and are therefore not used in the following domain shift quantification; we focus on x, y, and z.

Each dataset consists of 32,951 labeled point clouds from three runs on the race track with several laps, each with speeds of up to 70 m/s. As consecutive frames show high similarity, we only select every 5th point cloud for neural network training, validation, and testing. The final datasets consist of 6,000 individual point clouds divided into a training, validation, and test set in the ratios 4/6, 1/6, and 1/6, respectively. In the following, we refer to the recorded real-world and generated simulation datasets as real and sim datasets, respectively.
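A minimal sketch of the frame selection and split; the exact frame ordering and split boundaries used for the released dataset may differ.

```python
# Keep every 5th point cloud and split into train/val/test in ratios 4/6, 1/6, 1/6.
all_frames = list(range(32951))      # all recorded, labeled point clouds
selected = all_frames[::5]           # reduce temporal redundancy

n = len(selected)
train = selected[: 4 * n // 6]                # 4/6 of the data
val = selected[4 * n // 6 : 5 * n // 6]       # 1/6
test = selected[5 * n // 6 :]                 # 1/6
```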

III-B Statistical Dataset Comparison

To investigate the domain shift between the real and sim data, both datasets have to be compared. All comparisons are made using the data that are used to train the models.

To analyze any discrepancies, we iteratively compare the real and sim data pairs. Therefore, we use the training data loader and extract the data just before the model starts training. This ensures that all analyzed samples correspond to what the model will encounter later during training. The fact that the real and sim datasets are scenario-identical allows the loading of pairs of samples showing the same scene on the track. All comparisons are based on these corresponding samples.

All valid samples, i.e., the ground truth bounding boxes in the observable range, from one dataset are loaded and matched to the samples of the other dataset based on their timestamp-correspondent ID. After the matching, the samples are sorted according to their distance from the ego vehicle. We use three different ranges:

  • close-range r_1 = [0.0 m, 33.3 m),

  • mid-range r_2 = [33.3 m, 66.6 m), and

  • long-range r_3 = [66.6 m, 100.0 m].

This distinction allows us to better understand the effects of range on the performance metrics. Each data sample consists of the point cloud and its target, i.e., the corresponding 3D bounding box label.

To compare the data on a high level before training the object detection algorithms, we calculate multiple statistical parameters of each dataset and compare them across datasets. These statistical parameters comprise the mean, minimum, and maximum values of the point cloud ranges, the number of points per entire point cloud, and the number of points per target bounding box.
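A minimal sketch of the range binning and the per-sample statistics is given below; the function names and the exact aggregation are illustrative.

```python
import numpy as np

def range_bin(box_center, edges=(33.3, 66.6)):
    """Assign a target to close-, mid-, or long-range by its planar distance
    to the ego vehicle (ranges r_1, r_2, r_3 from Sec. III-B)."""
    d = float(np.linalg.norm(box_center[:2]))
    return "close" if d < edges[0] else "mid" if d < edges[1] else "long"

def point_cloud_statistics(points, points_per_target):
    """Per-sample statistics used for the high-level dataset comparison.

    points: (N, 3) array of x, y, z of one point cloud.
    points_per_target: list with the number of points inside each target box.
    """
    return {
        "mean_xyz": points.mean(axis=0),
        "min_xyz": points.min(axis=0),
        "max_xyz": points.max(axis=0),
        "num_points": len(points),
        "num_points_per_target": points_per_target,
    }
```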

III-C Object Detection Networks and Configuration

To quantify the dataset similarity and the performance of the re-simulated data evaluated on the real dataset, we use state-of-the-art 3D object detection algorithms. Object detection on point clouds can be categorized into point-based and voxel-based approaches. As these approaches might react differently to a domain shift, we choose one algorithm for each category. Note that our goal is not to compare the networks’ performance against each other but to compare the datasets by evaluating each network individually. For the voxel-based approach, we choose PointPillars [32], which extracts features from vertical columns of the point cloud to predict 3D bounding boxes of the objects. PointRCNN [33] is our choice for a point-based object detection algorithm, which uses PointNet++ [34] as a backbone to extract local features at the point-level.

As our datasets contain only one object class that needs to be detected, we adapt the output layers of both algorithms to predict a single class only. The anchor size is set to the ground truth dimensions of the race car, with l = 4.88 m, w = 1.90 m, and h = 1.18 m for length l, width w, and height h, respectively. We empirically test different network parameters, such as the voxel size for PointPillars, the number of sampled points in the feature extractor of PointRCNN, or the number of filters in both networks. For all other parameters, we use the default values for each network. Although the LiDAR sensors capture reflections at over 200 m distance, we limit the detection range of our networks to a horizontal range of 100 m in the dimensions x and y. We remove the intensity channel from both networks and only use x, y, and z as input features.
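The input preprocessing implied above can be sketched as follows; this is an illustrative snippet, not the exact implementation of either network’s data pipeline.

```python
import numpy as np

def preprocess(points_xyzi, max_range=100.0):
    """Drop the (unvalidated) intensity channel and crop the point cloud to the
    100 m horizontal detection range in x and y used for both networks."""
    xyz = points_xyzi[:, :3]                                     # keep x, y, z only
    mask = (np.abs(xyz[:, 0]) <= max_range) & (np.abs(xyz[:, 1]) <= max_range)
    return xyz[mask]
```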

Each combination of network and dataset is trained for 75 epochs, after which no further decrease in validation loss is found. In general, the network training is non-deterministic, with the non-determinism being more pronounced for PointRCNN. Therefore, we train every network and dataset configuration with identical parameters five times. We report the mean and standard deviation of the selected KPIs (see Sec. III-D) in the results.

III-D KPIs for Domain Shift Quantification

For the quantitative evaluation of the presented object detection algorithms, we need a metric that assesses the performance of each trained network and dataset configuration. This metric should be based on the network’s final outputs, i.e., the predicted 3D bounding boxes. We use average precision (AP), which is a standard metric for object detection. This metric compares the predicted 3D bounding boxes with the ground truth 3D bounding boxes and classifies each predicted box into a true positive (TP) or false positive (FP) based on the 3D overlap with the ground truth boxes. If a predicted bounding box reaches a certain threshold of intersection over union (IoU) with a ground truth box, it is classified as TP, otherwise as FP. Based on the TP, FP, and missed ground truth boxes, i.e., false negatives (FN), the recall and precision of the network and dataset configuration can be calculated. The final AP is the area under the precision-recall curve; more precisely, we use the 40-point interpolated AP as described in [35]. We report the AP for two different IoU thresholds, 50% and 70% overlap, denoted as 3D AP (0.5) and 3D AP (0.7), respectively.
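As a sketch, the 40-point interpolated AP can be computed from a precision-recall curve as follows; this mirrors the interpolation described in [35], while the TP/FP matching at the chosen IoU threshold is omitted.

```python
import numpy as np

def ap_r40(precision, recall):
    """40-point interpolated average precision over a precision-recall curve.

    precision, recall: 1D numpy arrays obtained by ranking predictions by
    confidence and classifying them as TP/FP at the chosen 3D IoU threshold.
    """
    ap = 0.0
    for r in np.linspace(1.0 / 40.0, 1.0, 40):    # 40 recall levels, 0 excluded
        p = precision[recall >= r]                # precisions at recall >= r
        ap += (p.max() if p.size else 0.0) / 40.0 # interpolated precision
    return ap
```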

IV Results

TABLE I: Statistical comparison of the real-world and simulated datasets.

Attribute                        |        | Real-world data | Simulated data
Point cloud range in meters      | mean x | 2.2             | 2.1
                                 | mean y | -1.3            | -0.5
                                 | mean z | 1.2             | 1.9
                                 | min x  | -100.0          | -100.0
                                 | min y  | -100.0          | -87.0
                                 | min z  | -12.0           | -1.5
                                 | max x  | 100.0           | 100.0
                                 | max y  | 100.0           | 100.0
                                 | max z  | 25.1            | 23.3
Number of points per point cloud | mean   | 73,123          | 78,776
                                 | min    | 52,762          | 52,695
                                 | max    | 79,690          | 81,538
Number of points per target box  | mean   | 219             | 251
                                 | min    | 0               | 5
                                 | max    | 4,959           | 6,465

Following the method presented in Sec. III, this section starts by presenting the results of the statistical dataset comparison and the quantification of the domain shift using object detection algorithms in Sec. IV-A and Sec. IV-B, respectively. This is followed by a detailed point cloud target-level analysis in Sec. IV-C and an additional study in Sec. IV-D.

IV-A Statistical Dataset Comparison


Table I provides an overview of the statistical metrics selected for the comparison of the datasets. Comparing the point cloud ranges of sim and real shows that certain locations that are hit by the LiDAR in reality are never hit in the simulation. An example is the y coordinate of the minimum (-100 m vs. -87 m). A model trained on the sim data will never encounter points in certain locations that are included in the real dataset. Note that none of these missing locations are on the race track, which means that the trained model will not see a single instance where such outside points contain a bounding box of a target.

Generally, the LiDAR used is capable of hitting and measuring objects farther away than 100 m, but from our observations, these long-distance measurements mostly resemble noise, so the additional computational effort to process them is not worthwhile. In addition, the number of measured points drops drastically at longer ranges.

One of the most noticeable differences is the number of points collected from the target. Intuitively, with increasing distance of the target to the ego vehicle, the number of points drops, i.e., the density of the target point cloud is reduced. This effect is less dominant within the sim dataset, where by default no measurements are lost due to the increased distance. We can roughly estimate the ratio of retained laser beam returns in the real dataset by comparing the mean number of points per target box: 219/251 ≈ 87%. There is even one instance where the target in the real dataset is not hit at all, which provides an interesting edge case.

For both the real and the sim datasets, we only consider the heading angle (yaw) and neglect the roll and pitch angles, which is common in the domain of autonomous driving. The heading angle of the vehicles in the simulation is taken from the real data, and therefore they are identical for every sample pairing. Overall, the vast majority of the samples in our dataset show target boxes with only a small relative heading angle. Thus, the target is seen from mostly similar positions, but these small differences are enough for the LiDAR to capture points throughout the target vehicle.

From the bird’s-eye view of two corresponding representative samples taken from the real and the sim datasets (Fig. 1), it can be seen how similar the point clouds generally are. A noticeable difference is a trapezoidal-shaped area (in the real sample) behind the vehicle in the driving direction (the driving direction is to the right). This area in the scan is due to the placement of the LiDAR on top of the vehicle. It leads to a blind spot caused by the rear wing shielding part of the track from being reached by the laser beams, which can be seen in Fig. 2a. This blind area is slightly larger than 10 m long and 5 m wide. A couple of similar artifacts resulting from the ego vehicle can be seen around the point cloud origin. None of these blind spots are problematic in this dataset, since cases where the target is mostly hidden in this space do not exist; compare Fig. 3, showing the target vehicle locations of the entire dataset.

Just as important as examining the point clouds is analyzing the location distribution of the targets. This is relevant because a model trained with most targets in close proximity might struggle to generalize to targets farther away, since their point clouds follow a different, less accurately measured shape. Fig. 3 shows the location in the x-y-plane of the target vehicles for each sample in the real dataset. Furthermore, we show the distribution of the target locations along the x- and y-axis. Two main locations of the target vehicle are predominant: to the left front (+35 m, +5 m) and to the right back (-30 m, -5 m).

Although minor differences exist, we argue that the statistical dataset comparison shows an overall high similarity between the real dataset and the derived sim dataset. Furthermore, the similarity of our scenario-identical datasets is higher than that of any publicly available dataset pair, which makes them suitable for the following detailed sim-to-real domain shift analysis.

IV-B Object Detection Evaluation

This section presents the quantitative results of the object detection algorithms trained with the real or sim datasets. To account for the non-deterministic training, each network and dataset pair is trained five times with identical configurations, as stated in Sec. III-C. We report the mean and standard deviation of the calculated AP of the five training runs for each network and dataset pair. A table presenting all results for PointRCNN and PointPillars can be found in Table III in the Appendix. Our notation is as follows: an experiment with a network trained on real data and evaluated on sim data is denoted as “real-to-sim”, i.e., the first word describes the training dataset, and the last word describes the test dataset.

Fig. 4a shows the 3D AP (0.7) for PointRCNN for all four possible pairings of real and sim data for training and testing. In this experiment, training and testing are conducted on the full range, including targets up to a range of 100 m. The performance comparison of sim-to-real (38.23%) and real-to-real (51.96%) indicates the existence of a distinct sim-to-real domain shift. Although the network was trained with exactly the same scenarios and target distributions, the performance drops by almost 14% just by training with the sim data.

A lower performance when training on the opposite domain can also be observed when the networks are tested on sim data. Comparing sim-to-sim (96.82%) with real-to-sim (62.53%), the real-to-sim domain shift of around 34% is even more pronounced than the sim-to-real domain shift (14%). This can be explained by the overall higher performance of sim-to-sim (96.82%) compared to real-to-real (51.96%). A higher real-to-sim domain shift compared to the sim-to-real domain shift is typical for simulated data and agrees with the results of [16] and [6].

The results for PointRCNN are consistent with the results of PointPillars. In general, PointRCNN achieves a higher AP by a large margin in most train-test pairings, except that PointPillars outperforms PointRCNN in the sim-to-sim pairing. However, as stated before, this pairing leads to an almost perfect AP, independent of the network choice.

As expected, lowering the IoU threshold for the AP calculation from 70% to 50% leads to higher APs overall, without exception. The absolute AP difference between IoU 70% and 50% is lowest when the AP is close to 100%, i.e., for sim-to-sim pairings.


We further analyze the domain shift by evaluating the networks in close-range r_1, mid-range r_2, and long-range r_3 as defined in Sec. III-B. The results showing the 3D AP (0.7) for PointRCNN for these three ranges are depicted in Fig. 4b-4d. As expected, the overall performance decreases with increasing range, an observation that is valid for each of the four train-test pairings. The loss of performance is most pronounced when the close-range is compared with the mid-range. It is interesting to note that the sim-to-sim performance only decreases slightly from close-range (99.88%) to long-range (89.47%), whereas the real-to-real performance drops from 90.41% to 16.31%.

Another observation concerns the sim-to-real domain shift: in the mid-range, the performance of sim-to-real and real-to-real is almost identical; hence, no sim-to-real domain shift can be observed. However, in the close-range and long-range, there is a sim-to-real domain shift, leading to the overall sim-to-real domain shift in the combined full range (Fig. 4a).

Fig. 5 shows the t-Distributed Stochastic Neighbor Embedding (t-SNE) of the high-dimensional latent feature space of PointRCNN trained on real or sim data. t-SNE is a statistical method for dimensionality reduction of high-dimensional data and is suitable for visualizing such data in two-dimensional plots [36]. Neighboring points in the low-dimensional representation are usually similar in the high-dimensional input space. Each point represents one feature vector generated by the inference of PointRCNN with one point cloud of the real test set. In total, the plot shows 2,500 points for each dataset used for training, that is, the real or sim dataset. Within each dataset, five clusters can be identified that originate from the five training runs of each configuration. The plot indicates that the feature vectors are distinct, and hence, the network learned the different distributions of the real and sim data.
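The visualization itself only requires a few lines; the sketch below uses a random placeholder in place of the actual latent features, and the feature dimensionality is an assumption.

```python
import numpy as np
from sklearn.manifold import TSNE

# Placeholder for the latent features extracted from PointRCNN during inference
# on the real test set (here: 2,500 vectors with an assumed dimensionality of 256).
features = np.random.rand(2500, 256).astype(np.float32)

# Reduce the high-dimensional features to 2D for visualization.
embedding = TSNE(n_components=2, perplexity=30.0, init="pca",
                 random_state=0).fit_transform(features)
# embedding has shape (2500, 2) and can be scatter-plotted,
# colored by the dataset used for training.
```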

IV-C Point Cloud Target-Level Analysis

Based on the quantitative results, we further qualitatively analyze the domain shift using the point clouds of the targets. In the datasets used, the target class is a single non-deformable object. This enables visualization of the target shape the network will see during training by aggregating the point clouds at the target-level. If the aggregated shape is clearly distinguishable from the environment, the object detection network should detect it with high accuracy.
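A minimal sketch of this aggregation step is shown below: points inside a target’s yaw-rotated 3D box are extracted and expressed in the target’s local frame so that many frames can be overlaid. The function name is illustrative.

```python
import numpy as np

def points_in_target_frame(points, box):
    """Extract the points inside a target's 3D box and express them in the
    target's local frame for target-level aggregation over many frames.

    points: (N, 3) array of x, y, z in the ego frame.
    box:    (x, y, z, l, w, h, yaw) label of the target in the ego frame.
    """
    x, y, z, l, w, h, yaw = box
    # Translate and rotate the points into the box-centered frame.
    shifted = points - np.array([x, y, z])
    c, s = np.cos(-yaw), np.sin(-yaw)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    local = shifted @ rot.T
    # Keep only points inside the box extents.
    mask = (np.abs(local[:, 0]) <= l / 2) & \
           (np.abs(local[:, 1]) <= w / 2) & \
           (np.abs(local[:, 2]) <= h / 2)
    return local[mask]
```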

Fig. 6 shows that with increasing distance between the LiDAR and the target, the aggregated point clouds are dominated by noise. It is relevant for object detection to have enough points that follow the shape of the target. Especially in the real dataset, long-range points become overly noisy, to the point that the target shape is barely recognizable anymore; compare Fig. 6c. A hard-to-recognize shape is per se not problematic if the relative difference to the environment is big enough to still be able to infer its existence. This becomes especially challenging for object detection algorithms without an attention mechanism that might hint at the most likely subsequent location (e.g., [37] or [38]). Note also that distance-dependent noise due to outlier points is present in the simulated data, although it is less dominant. However, this difference in noise contributes to the domain shift in both directions (real-to-sim and sim-to-real), as it imposes an additional generalization task on the model.

The reflections at long-range stem mostly from the back of the vehicle due to the imbalance of target vehicle locations (compare Fig. 3). This does not affect the domain shift, as it occurs equally in the real and sim datasets. Still, it explains the back-heavy point clouds in Fig. 6c and Fig. 6f.


Fig. 7 shows approximately 20,000 aggregated points from 40 scans of the real dataset. Using a selection of scans and projecting them onto a 2D plane helps to visualize the LiDAR scan layers and the resulting normalized, distorted shape. These graphs highlight inaccuracies in the automatic point cloud labeling pipeline utilized to create the real dataset. The top view in Fig. 7b clearly shows that even though the wheels make up a major part of the front view (Fig. 7c) and back view (Fig. 7d), the point density in the area of the wheels is sparse. An explanation for this is the low reflectivity of the dark tires and the low ray incidence angles on the upper and lower parts of the wheels, both of which lead to points being dropped. This point dropout is a common occurrence with real-world LiDAR sensors and, if not modeled in simulation, can further increase the sim-to-real domain shift.

To conclude the point cloud target-level analysis, the two discussed effects, LiDAR noise and dropout, might have a high impact on the sim-to-real domain shift and will be further analyzed in the following section.

IV-D Additional Study of Main Influencing Factors

Based on the previous findings on LiDAR noise and dropout, we generate two new simulated datasets to further quantify the impact of these effects and to analyze the source of the sim-to-real domain shift. Both datasets originate from the sim dataset used in the previous experiments. The first dataset is created to analyze the impact of sensor noise, which was identified as a difference between the real and sim point clouds when comparing them at the target-level, as shown in Sec. IV-C. To create this dataset, we add a noise profile to our sensor simulation that adds random Gaussian noise to the placement of the points in the longitudinal ray direction with a standard deviation of σ = 2 cm. We refer to this dataset as sim noise. The second dataset is created to analyze the impact of the LiDAR dropout, which was also identified as a domain shift source. Therefore, we apply downsampling to the original sim dataset with a ratio of 0.8, meaning that 20% of the points in each point cloud are dropped. This downsampled dataset is referred to as sim downsampled.
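Both modifications can be sketched as follows, assuming the points are given in the sensor frame; the exact noise injection inside the simulator may differ from this post-hoc formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_range_noise(points, sigma=0.02):
    """sim noise: Gaussian noise (sigma = 2 cm) along the longitudinal ray
    direction, i.e., along the line from the sensor origin to each point."""
    ranges = np.linalg.norm(points, axis=1, keepdims=True)
    directions = points / np.clip(ranges, 1e-6, None)      # unit ray directions
    noisy_ranges = ranges + rng.normal(0.0, sigma, size=ranges.shape)
    return directions * noisy_ranges

def random_downsample(points, keep_ratio=0.8):
    """sim downsampled: randomly drop 20% of the points of each point cloud."""
    n_keep = int(len(points) * keep_ratio)
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[idx]
```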

We train PointRCNN and PointPillars on both derived datasets and evaluate them on all four datasets, that is, real, sim, sim noise, and sim downsampled. The 3D AP (0.7) of PointRCNN for the 16 train-test combinations of the four datasets is shown in Fig. 8. Similar to the results in Sec. IV-B, the results of PointRCNN and PointPillars are consistent, although the overall AP of PointPillars is lower. For reference, Table III in the Appendix includes the extensive results of both networks for all train-test pairs. Compared to training with the sim dataset, training with sim noise is advantageous when testing on the real dataset, leading to an AP increase from 38.23% (sim) to 41.61% (sim noise) and therefore narrowing the sim-to-real domain shift. Training with sim downsampled instead of sim even degrades the performance minimally on all tested datasets.

Fig. 9 shows the t-SNE plot for PointRCNN trained on all four datasets and tested on real data. As in Fig. 5, each trained configuration was tested with 2,500 real point clouds, resulting in 10,000 feature vectors shown in this plot. PointRCNN can distinguish between the datasets, visible by each of the five training runs per dataset being clearly separated from the clusters of the other datasets. This plot also shows that the clusters of sim noise blend into the clusters of real, meaning that there is a greater similarity between sim noise and real than between sim and real, which supports the quantitative results.


Fig. 10 depicts the aggregated target point clouds of the two additional datasets, sim noise and sim downsampled, for close-, mid-, and long-range. The Gaussian noise of the sim noise data resembles the real data more closely than the original sim data does. However, at long-range in Fig. 10c, the shape of the aggregated target point cloud of sim noise is still identifiable as a vehicle, which is not the case for the real data in Fig. 6c. The aggregated target point clouds of sim downsampled in Figs. 10d-10f are identical to those of the original sim data, with the only difference being that sim downsampled includes 20% fewer points.


To compare our results with different metrics for domain shift quantification, we calculate the Chamfer Distance (CD) and Earth Mover’s Distance (EMD), which are common metrics for measuring the similarity between two point sets. We calculate these distances for each of the 4,000 point cloud pairings from the training datasets; specifically, we compare the real dataset with each of the three sim datasets. The mean CD and EMD over these pairings are presented in Table II. The EMD coincides with the results of the domain shift quantification by means of object detection algorithms, showing a decrease of the sim-to-real domain shift when introducing noise and an increase when adding downsampling. However, the CD shows a decrease of the sim-to-real domain shift also for sim downsampled. This can be explained by the fact that the CD is highly sensitive to outliers, which are less frequent in sim downsampled and therefore lead to a lower CD.

TABLE II: Mean CD and EMD between the real dataset and each simulated dataset.

Domain A | Domain B        | CD        | EMD
Real     | Sim             | 2,333,682 | 17.322
Real     | Sim Noise       | 2,328,009 | 17.309
Real     | Sim Downsampled | 2,288,593 | 17.324
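A sketch of both metrics is given below. The exact normalization (sum vs. mean, squared vs. absolute distances) behind the values in Table II is one common variant and may not reproduce them exactly, and the EMD is only tractable on subsampled, equally sized point sets.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial import cKDTree

def chamfer_distance(a, b):
    """Symmetric Chamfer Distance between point sets a (N, 3) and b (M, 3),
    here as the sum of squared nearest-neighbor distances in both directions."""
    d_ab, _ = cKDTree(b).query(a)      # nearest neighbor in b for each point in a
    d_ba, _ = cKDTree(a).query(b)      # nearest neighbor in a for each point in b
    return float(np.sum(d_ab ** 2) + np.sum(d_ba ** 2))

def earth_movers_distance(a, b, n=1024):
    """EMD approximated via an optimal 1:1 assignment on n subsampled points
    per cloud (the full assignment problem is infeasible for ~70k points)."""
    rng = np.random.default_rng(0)
    a = a[rng.choice(len(a), size=min(n, len(a)), replace=False)]
    b = b[rng.choice(len(b), size=min(n, len(b)), replace=False)]
    m = min(len(a), len(b))
    cost = np.linalg.norm(a[:m, None, :] - b[None, :m, :], axis=-1)
    row, col = linear_sum_assignment(cost)
    return float(cost[row, col].mean())
```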

V Discussion

In this section, we discuss the results presented in Sec. IV. The statistical dataset comparison in Sec. IV-A shows that although the sim dataset is derived from the real dataset, there are still differences between the two datasets. For example, the 3D simulation environment shows deviations from reality outside of the drivable area due to missing static objects. This can be observed in Fig. 1 with more points in the real point cloud behind the track wall. However, we show that the distributions of points and targets between the sim and real datasets are very similar, which makes our datasets suitable for domain shift analysis.

Our evaluation based on object detection algorithms reveals the existence of a sim-to-real domain shift, proving that the domain shift found in previous works is not only based on scenario discrepancy due to dissimilarities in the datasets used but is also present in our scenario-identical real and sim datasets. As expected, the networks achieve the highest performance when trained and tested on the same domain (real-to-real and sim-to-sim). However, for PointRCNN trained on real data, testing on sim data (real-to-sim) yields a higher AP than testing on the domain with which it was trained (real-to-real), which is counterintuitive at first. This performance increase can be explained by analyzing the target point clouds in Fig. 6: target point clouds in the sim data are more structured, less noisy, and denser than the same point clouds in the real data, especially at higher ranges, leading to an improved detection performance regardless of the type of dataset used for training.

The GPS data and point cloud data recorded with the real-world vehicle are not synchronized, leading to minor inaccuracies in auto-labeling due to the interpolation between discrete time stamps. As described in Sec. III-A, the target positions are refined using the point distribution in the proximity of the initial position determined by the auto-labeling process. This leads to an overall good fit of the 3D bounding boxes to the underlying point cloud. However, in some frames, there is an offset between the 3D bounding box and the target point cloud; this is visible in Fig. 7a, with one frame shifted toward the vehicle’s rear by about 0.8 m. Those labeling inaccuracies are more frequent at long range due to the lower point density and self-occlusion, leading to a low average precision of real-to-real at long range compared to close range. Furthermore, PointRCNN and PointPillars only estimate seven degrees of freedom for each object and neglect the roll and pitch angles. Our auto-labeling pipeline also defines these two angles to be zero. However, these two angles are more pronounced at higher distances, leading to an additional labeling offset and further explaining the low real-to-real performance at long range.

Fig. 6 shows the aggregated target point clouds in different range sections. All range sections show noisy measurements for the real and sim data, although the sim data in these graphs are simulated without noise. Part of the noise beneath the vehicle comes from the labeling process and is not specific to either real or sim data. For data labeling, the location and size of the vehicle are used: all points within the vehicle-sized bounding box around the GPS location are considered part of the vehicle. This assumption is valid for most point clouds, except when the inclination of the road leads to ground reflections being considered part of the target point cloud, which occurs more frequently at longer distances. In this way, structured noise, such as lines, is introduced. These noise artifacts can be seen in the lower area between the tires. It is important to keep in mind that a segmentation mask is not predicted, but a bounding box instead; thus, such effects are expected.

In this work, we quantify the domain shift based on the task of object detection and not semantic or instance segmentation, which are also very common tasks using LiDAR point clouds. We focus on object detection, as segmentation requires pointwise labeled point clouds. These are costly to obtain for real-world data and, in contrast to the position and orientation of the target boxes, cannot be generated by an auto-labeling pipeline. However, since segmentation offers a more fine-grained understanding of the environment, the analysis of the domain gap using segmentation algorithms can be investigated in follow-up work.

The additional study of the influence of noise and downsampling in the simulation on the performance of object detectors demonstrated a performance increase for sim noise and a performance decrease for sim downsampled compared to the original sim data. Although the performance increase of sim noise was expected because it more closely models the behavior in the real world, the performance decrease of sim downsampled needs further explanation. The downsampling method used in this work is based on random selection and not on physical aspects, such as the angle of inclination or the material and color of the reflecting surface, as these properties are not available in the simulation environment and would increase the simulation complexity considerably. Therefore, the utilized downsampling method is unfavorable to object detection performance.

To rule out the downsampling method as the source of the increase in domain shift, we experimented with downsampling that is not random but based on the point distribution difference between the real and sim datasets. Each point within a point cloud is assigned a probability of being removed, which is derived from the previously calculated distribution difference of the real and sim datasets. This leads to an alignment of the point distributions of the real and sim downsampled datasets; specifically, long-range points are more likely to be downsampled, as the real dataset is sparser at higher distances. Although the resulting sim downsampled dataset reduces the difference in the distribution of points, the performance of PointRCNN and PointPillars in terms of 3D AP for all evaluated datasets is almost identical to that of the randomly downsampled dataset, and the differences are within the standard deviation. The influence of a more realistic simulation of the downsampling effect based on physical aspects can be investigated in future work.
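A sketch of this distribution-aware downsampling idea is given below; the binning granularity and the derivation of the removal probability are simplified assumptions.

```python
import numpy as np

def distribution_aware_downsample(sim_points, real_hist, sim_hist, bin_edges,
                                  rng=np.random.default_rng(0)):
    """Drop sim points with a probability derived from the per-range-bin point
    density difference between the real and sim training data.

    real_hist, sim_hist: arrays with the mean number of points per range bin.
    bin_edges: increasing edges of the range bins (distance to the ego vehicle),
               with len(bin_edges) == len(sim_hist) + 1.
    """
    ranges = np.linalg.norm(sim_points[:, :2], axis=1)
    bins = np.clip(np.digitize(ranges, bin_edges) - 1, 0, len(sim_hist) - 1)
    # Keep each point with the probability that the real data still contains it.
    keep_prob = np.clip(real_hist / np.maximum(sim_hist, 1e-9), 0.0, 1.0)
    mask = rng.random(len(sim_points)) < keep_prob[bins]
    return sim_points[mask]
```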

Visualization of the high-dimensional feature vectors of the network using t-SNE dimensionality reduction revealed that the network could distinguish between real and sim data and that sim noise is more similar to real than sim is to real. However, the t-SNE plot also implies that sim downsampled is more similar to real than sim is to real, which does not coincide with the results of the quantitative evaluation of the networks. Note that t-SNE does not always preserve global structures and is based primarily on local structures of the input data [36].

Chamfer Distance (CD) and Earth Mover’s Distance (EMD) were used to compare our results with other metrics that measure the similarity between two point sets. Although the CD indicates a higher similarity of sim downsampled to real than of sim to real due to its sensitivity to outliers, we could validate our findings with the EMD results.

Finally, there are limitations regarding the datasets we used for the domain shift quantification. Our requirements were a scenario-identical pair of real and sim datasets to isolate the domain shift. We achieved this by using a dataset that is limited to simple scenarios and only one class of objects to be detected. We still argue that the datasets used in our work give a good indication of the LiDAR domain shift and can generalize well to other datasets, since the LiDAR noise and dropout differences we highlighted in our additional study usually exist between real and sim data, independent of specific scenarios or the number of object classes. For future work, the domain shift of datasets with more complex scenarios and a higher variety of vehicles can be analyzed using our presented method. Furthermore, taking the pointwise intensity values into account can be investigated in future work.

VI Conclusion

To summarize, in this paper, we quantify the sim-to-real domain shift by means of LiDAR object detectors and further analyze the point clouds at the target-level to determine the influencing factors leading to the domain shift. First, we record a real-world dataset, which is auto-labeled using GPS positions, and generate a simulated counterpart whose scenarios are derived from the real-world dataset. We show that the data and label distributions of both datasets are similar, and hence, both datasets form the basis of our domain shift analysis.

To perform the domain shift quantification, we trained two LiDAR object detection networks, namely PointRCNN and PointPillars, on the sim and real datasets and tested each network on both datasets. The evaluation metric is the average precision, and since training is non-deterministic, we train each network five times and report the mean of the achieved average precision. The experiments show the existence of a domain shift in both directions, with the sim-to-real domain shift amounting to around 14% in AP difference. We further analyzed the target point clouds by aggregating single point clouds for three different range sections, that is, close-, mid-, and long-range. For long-range real data, the shape of the target is not identifiable as a vehicle, which explains the poor network performance for this range section.

The aggregated target point clouds further indicate a qualitative sim-to-real domain shift, and we identify noise and downsampling as two potential factors of the domain shift. We generated additional simulated datasets modeling these two factors to analyze them further. Object detection experiments with sim noise and sim downsampled show that noise does indeed have an influence on the sim-to-real domain shift; this is evident from the reduction of the sim-to-real domain shift from 14% to 10%. However, the downsampling method used in our work did not reduce the domain shift but even slightly increased it.

Overall, we introduced a method for the quantification of the LiDAR sim-to-real domain shift based on object detectors and quantified the domain shift for distribution-aligned datasets. Our experiments showed that noise is part of the domain shift; however, there are still other effects that contribute to it. The analysis of these effects, as well as a more realistic simulation of downsampling based on physical aspects, is part of future work. Furthermore, one possibility for future work is the application of domain adaptation for LiDAR point clouds to reduce the sim-to-real domain shift.

Contributions

As the first author, Sebastian Huch initiated the idea of this paper and contributed essentially to its conception, implementation, and content. Luca Scalerandi and Esteban Rivera contributed to the conception of this research, the experimental data generation, and the revision of the research article. Markus Lienkamp made an essential contribution to the conception of the research project. He revised the paper critically for important intellectual content. He gave final approval of the version to be published and agreed to all aspects of the work. As a guarantor, he accepts responsibility for the overall integrity of the paper.

Sebastian Huch received his BEng degree from the Baden-Wuerttemberg Cooperative State University (DHBW) Stuttgart, Germany, in 2016 and his MSc degree from the Technical University of Darmstadt, Germany, in 2018. He is currently pursuing his PhD degree in mechanical engineering at the Institute of Automotive Technology at the Technical University of Munich (TUM), Germany. His research interests include LiDAR simulation, LiDAR perception, and LiDAR domain adaptation for autonomous driving.
Luca Scalerandi received his BSc degree in computer science from the Technical University of Munich (TUM), Munich, Germany, in 2021. He is currently pursuing his master's degree in computer science at TUM. He works part-time as a computer vision engineer at DeepScenario. His broader scientific interest lies in multi-object tracking, motion analysis, and scenario understanding.
Esteban Rivera received his BSc degree in Electronic Engineering and Physics from the Universidad de los Andes, Bogotá, Colombia, in 2016 and his MSc degree in Electrical Engineering from the Karlsruhe Institute of Technology, Germany, in 2019. He later worked as a data scientist for Appgate Inc., developing deep-learning-based authentication algorithms for the finance industry. Currently, he is pursuing his PhD at the Institute of Automotive Technology at the Technical University of Munich (TUM), Germany. His research interests include computer vision, camera-based object detection, and sensor fusion.
Markus Lienkamp carries out research in the area of autonomous vehicles with the objective of creating an open-source software platform. He is a professor at the Institute of Automotive Technology at the Technical University of Munich (TUM) and is involved in the CREATE project in Singapore. After studying mechanical engineering at TU Darmstadt and Cornell University, Prof. Lienkamp obtained his doctorate from TU Darmstadt in 1995. He worked at Volkswagen as part of an international trainee program and took part in a joint venture between Ford and Volkswagen in Portugal. Returning to Germany, he led the brake testing department of the VW commercial vehicle development section in Wolfsburg. He was later appointed head of the Electronics and Vehicle research department in the Volkswagen Group's Research Division. His main priorities were advanced driver assistance systems and electromobility concepts. Prof. Lienkamp has headed the Chair of Automotive Technology at TUM since November 2009.
TABLE: 3D AP and recall at IoU thresholds 0.5 and 0.7 for all combinations of network, training dataset, and test dataset.

Network | Train dataset | Test dataset | 3D AP (0.5) | 3D AP (0.7) | Recall (0.5) | Recall (0.7)
--- | --- | --- | --- | --- | --- | ---
PointRCNN | Real | Real | 74.33 (1.9) | 51.96 (1.21) | 92.12 (0.7) | 80.92 (1.21)
PointRCNN | Real | Sim | 86.56 (1.0) | 62.53 (2.62) | 93.78 (0.48) | 87.26 (0.88)
PointRCNN | Real | Sim Noise | 77.27 (2.08) | 52.25 (2.15) | 87.88 (0.74) | 77.48 (0.98)
PointRCNN | Real | Sim Downsampled | 83.64 (0.72) | 58.17 (3.37) | 92.7 (0.5) | 84.62 (0.89)
PointRCNN | Sim | Real | 56.29 (1.36) | 38.23 (0.98) | 76.56 (0.78) | 65.5 (0.95)
PointRCNN | Sim | Sim | 97.31 (1.35) | 96.82 (1.16) | 99.46 (0.1) | 98.64 (0.21)
PointRCNN | Sim | Sim Noise | 88.96 (1.11) | 88.48 (1.19) | 94.4 (0.82) | 92.96 (0.83)
PointRCNN | Sim | Sim Downsampled | 96.31 (0.18) | 96.28 (0.17) | 99.04 (0.14) | 98.08 (0.26)
PointRCNN | Sim Noise | Real | 60.36 (1.35) | 41.61 (1.65) | 78.3 (0.63) | 67.8 (0.78)
PointRCNN | Sim Noise | Sim | 97.07 (1.07) | 96.58 (0.12) | 99.42 (0.12) | 98.48 (0.16)
PointRCNN | Sim Noise | Sim Noise | 96.93 (1.29) | 96.43 (0.43) | 99.08 (0.13) | 98.16 (0.21)
PointRCNN | Sim Noise | Sim Downsampled | 96.21 (0.29) | 96.19 (0.29) | 99.08 (0.25) | 97.92 (0.3)
PointRCNN | Sim Downsampled | Real | 55.84 (2.72) | 37.57 (2.44) | 75.98 (0.55) | 65.72 (0.17)
PointRCNN | Sim Downsampled | Sim | 97.33 (1.08) | 96.82 (0.94) | 99.3 (0.11) | 98.48 (0.32)
PointRCNN | Sim Downsampled | Sim Noise | 86.86 (2.48) | 86.38 (3.31) | 92.78 (1.91) | 90.74 (2.29)
PointRCNN | Sim Downsampled | Sim Downsampled | 95.93 (0.5) | 95.87 (0.54) | 99.02 (0.13) | 98.12 (0.19)
PointPillars | Real | Real | 69.22 (0.0) | 41.1 (0.0) | 80.5 (0.0) | 55.7 (0.0)
PointPillars | Real | Sim | 68.65 (0.03) | 20.68 (0.0) | 83.1 (0.0) | 41.0 (0.0)
PointPillars | Real | Sim Noise | 67.32 (0.0) | 19.98 (0.0) | 83.2 (0.0) | 40.7 (0.0)
PointPillars | Real | Sim Downsampled | 63.44 (0.0) | 18.41 (0.02) | 82.3 (0.0) | 39.7 (0.0)
PointPillars | Sim | Real | 30.49 (0.02) | 13.36 (0.12) | 69.7 (0.0) | 39.9 (0.0)
PointPillars | Sim | Sim | 98.75 (0.0) | 98.18 (0.0) | 99.7 (0.0) | 98.9 (0.0)
PointPillars | Sim | Sim Noise | 98.98 (0.0) | 98.63 (0.0) | 99.8 (0.0) | 99.1 (0.0)
PointPillars | Sim | Sim Downsampled | 98.63 (0.0) | 98.11 (0.0) | 99.6 (0.0) | 98.7 (0.0)
PointPillars | Sim Noise | Real | 39.81 (0.02) | 18.77 (0.08) | 68.8 (0.0) | 43.2 (0.0)
PointPillars | Sim Noise | Sim | 99.41 (0.0) | 99.22 (0.0) | 99.8 (0.0) | 98.7 (0.0)
PointPillars | Sim Noise | Sim Noise | 98.95 (0.0) | 98.51 (0.0) | 99.7 (0.0) | 98.3 (0.0)
PointPillars | Sim Noise | Sim Downsampled | 99.07 (0.0) | 98.71 (0.0) | 99.8 (0.0) | 98.0 (0.0)
PointPillars | Sim Downsampled | Real | 30.25 (0.16) | 14.05 (0.02) | 67.0 (0.0) | 39.1 (0.0)
PointPillars | Sim Downsampled | Sim | 99.15 (0.0) | 94.96 (0.0) | 99.6 (0.0) | 96.7 (0.0)
PointPillars | Sim Downsampled | Sim Noise | 98.73 (0.0) | 95.28 (0.0) | 99.5 (0.0) | 96.9 (0.0)
PointPillars | Sim Downsampled | Sim Downsampled | 98.31 (0.0) | 94.46 (0.0) | 99.6 (0.0) | 97.5 (0.0)