Detect objects and lanes from measurements in 3D simulation environment
Automated Driving Toolbox / Simulation 3D
The Simulation 3D Vision Detection Generator block generates detections from camera measurements taken by a vision sensor mounted on an ego vehicle in a 3D simulation environment. This environment is rendered using the Unreal Engine® from Epic Games®. The block derives detections from simulated actor poses that are based on cuboid (box-shaped) representations of the actors in the scenario. For more details, see Algorithms.
The block generates detections at intervals equal to the sensor update interval. Detections are referenced to the coordinate system of the sensor. The block can simulate real detections that have added random noise and also generate false positive detections. A statistical model generates the measurement noise, true detections, and false positives. To control the random numbers that the statistical model generates, use the random number generator settings on the Measurements tab of the block.
If you set Sample time to -1, the block uses the sample time specified in the Simulation 3D Scene Configuration block. To use this sensor, you must include a Simulation 3D Scene Configuration block in your model.
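As a sketch, you can also set the sample time programmatically with set_param. The block path below is a placeholder for your own model, and the 'SampleTime' mask parameter name is an assumption for illustration.

```matlab
% Assumed block path; replace with the path in your model.
blk = 'myModel/Simulation 3D Vision Detection Generator';

% Set the sample time to -1 so the block inherits the sample
% time from the Simulation 3D Scene Configuration block.
% The 'SampleTime' parameter name is an assumption.
set_param(blk, 'SampleTime', '-1');
```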
Note
The Simulation 3D Scene Configuration block must execute before the Simulation 3D Vision Detection Generator block. That way, the Unreal Engine 3D visualization environment prepares the data before the Simulation 3D Vision Detection Generator block receives it. To check the block execution order, right-click the blocks and select Properties. On the General tab, confirm these Priority settings:
Simulation 3D Scene Configuration — 0
Simulation 3D Vision Detection Generator — 1
For more information about execution order, see How Unreal Engine Simulation for Automated Driving Works.
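If you prefer to set the execution order from the command line, a minimal sketch uses set_param with the Priority block property; the block paths are placeholders for the paths in your model.

```matlab
% Assign block priorities so the scene configuration block
% executes before the vision detection generator block.
% Replace the paths with the block paths in your model.
set_param('myModel/Simulation 3D Scene Configuration', 'Priority', '0');
set_param('myModel/Simulation 3D Vision Detection Generator', 'Priority', '1');
```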
The sensor cannot detect lanes and objects from vantage points that are too close to the ground. After mounting the sensor block to a vehicle by using the Parent name parameter, set the Mounting location parameter to one of the predefined mounting locations on the vehicle.
If you leave Mounting location set to Origin, which mounts the sensor on the ground below the vehicle center, then specify an offset that is at least 0.1 meter above the ground. Select Specify offset, and in the Relative translation [X, Y, Z] (m) parameter, set a Z value of at least 0.1.
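For a programmatic version of the same adjustment, the sketch below uses set_param; the mask parameter names ('MountLocation', 'SpecifyOffset', 'RelativeTranslation') are assumptions for illustration, not confirmed names.

```matlab
% Assumed block path and mask parameter names; replace with
% the actual names shown by get_param(blk, 'DialogParameters').
blk = 'myModel/Simulation 3D Vision Detection Generator';
set_param(blk, 'MountLocation', 'Origin', ...       % assumed name
               'SpecifyOffset', 'on', ...           % assumed name
               'RelativeTranslation', '[0 0 0.1]'); % Z of at least 0.1 m
```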
To visualize detections and sensor coverage areas, use the Bird's-Eye Scope. See Visualize Sensor Data from Unreal Engine Simulation Environment.
Because the Unreal Engine can take a long time to start between simulations, consider logging the signals that the sensors output. See Configure a Signal for Logging (Simulink).
To generate detections, the Simulation 3D Vision Detection Generator block feeds the actor and lane ground truth data that is read from the Unreal Engine simulation environment to a Vision Detection Generator block. This block returns detections that are based on cuboid, or box-shaped, representations of the actors. The physical dimensions of detected actors are not based on their dimensions in the Unreal Engine environment. Instead, they are based on the default values set in the Actor Profiles parameter tab of the Vision Detection Generator block, where all actors are approximately the size of a sedan. If you return detections that have occlusions, then the occlusions are based on all actors being of this one size.
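The same underlying detector is available in MATLAB as the visionDetectionGenerator System object. This minimal sketch creates one and inspects its ActorProfiles property, which holds the default (sedan-sized) dimensions described above; the property and parameter names here come from the Computer Vision Toolbox API.

```matlab
% Create a vision detection generator like the one the block
% uses internally. Detections are cuboid-based.
sensor = visionDetectionGenerator('SensorIndex', 1);

% Actor dimensions come from the ActorProfiles property
% (sedan-sized defaults), not from the Unreal Engine scene.
profiles = sensor.ActorProfiles;
disp(profiles)
```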
See Also: visionDetectionGenerator | cameraIntrinsics (Computer Vision Toolbox)