Generate Road Scene Using Lanes from Labeled Recorded Data
This example shows how to generate a road scene using lanes from labeled camera images and raw lidar data.
You can create a virtual scene from recorded sensor data that represents real-world roads, and use it to perform safety assessments for automated driving applications.
You can use a combination of camera images and lidar point clouds to generate an accurate road scene that contains lane information. Camera images enable you to easily identify scene elements, such as lane markings and road boundaries, while lidar point clouds enable you to accurately measure distances and depth. Additionally, you can improve the labeling of lane boundaries in images by using a variety of lane detectors that perform reliably regardless of the camera used. In contrast, labeling lane boundaries directly in lidar point clouds is difficult, as they do not contain RGB information and the accuracy of lane detectors for point clouds is dependent on the sensor used. This example fuses lane labels, from images, with lidar point clouds to generate an accurate scene.
In this example, you:
Load a sequence of lidar point clouds and lane-labeled images.
Fuse the lane labels with the lidar point clouds.
Extract the lane boundary points from the fused point clouds.
Smooth the extracted points to form consistent lane geometry.
Generate a RoadRunner HD Map from the extracted lane boundary points.
The workflow in this example requires labeled camera images, raw lidar data, and camera-to-lidar calibration information.
Load Sensor Data and Lane Labels
Download a ZIP file containing a subset of sensor data from the PandaSet data set, and then unzip the file. This file contains data for a continuous sequence of 80 point clouds and images. The data also contains lane labels, in the form of pixel coordinates, stored as a groundTruth object. Alternatively, you can obtain lane labels by using a lane detector.
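For reference, this is a minimal sketch of detecting lane-marker candidates with the segmentLaneMarkerRidge function instead of loading labeled data. It assumes an image img and the calibrated monoCamera sensor that this example loads in the next step; the bird's-eye-view extents and the 0.25 m marker width are illustrative values, not values from this example.
outView = [3 30 -6 6];                                 % [xmin xmax ymin ymax] in meters (illustrative)
bevConfig = birdsEyeView(cameraData.sensor,outView,[250 125]);
bevImage = transformImage(bevConfig,im2gray(img));     % bird's-eye-view grayscale image
laneBW = segmentLaneMarkerRidge(bevImage,bevConfig,0.25);
[rows,cols] = find(laneBW);                            % pixel locations of lane-marker features
xyPoints = imageToVehicle(bevConfig,[cols rows]);      % convert to vehicle coordinates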
dataFolder = tempdir;
dataFilename = "PandasetSequence.zip";
url = "https://ssd.mathworks.com/supportfiles/driving/data/" + dataFilename;
filePath = fullfile(dataFolder,dataFilename);
if ~isfile(filePath)
    websave(filePath,url)
end
unzip(filePath,dataFolder)
dataset = fullfile(dataFolder,"PandasetSequence");
Load the downloaded data set into the workspace by using the helperLoadData function.
[cameraData,~,imds,pcds,gTruth] = helperLoadData(dataset);
imds and pcds are datastores containing the images and point clouds, respectively, while gTruth is a groundTruth object that contains the lane labels in pixel coordinates. cameraData is a structure with these fields:
- calibration — Camera-to-lidar calibration for each timestamp.
- sensor — Camera parameters, specified as a monoCamera object.
- timestamps — Time of capture, in seconds, for the point cloud and image data.
cameraData
cameraData = struct with fields:
calibration: [80×1 struct]
sensor: [1×1 monoCamera]
timestamps: [80×1 double]
filePath: [80×1 string]
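For example, you can convert the calibration entry for a single timestamp into a rigid camera-to-lidar transformation. This sketch mirrors the logic used in the helperGenerateFusedPointClouds function at the end of this example.
% Build the camera-to-lidar transform for the first timestamp.
cal = cameraData.calibration(1);
rot = quat2rotm([cal.heading.w cal.heading.x cal.heading.y cal.heading.z]);
tran = [cal.position.x cal.position.y cal.position.z];
tform = rigid3d(rot',tran);   % rotation transposed for the postmultiply convention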
Create a datastore that contains both the images and the point clouds by using the combine function.
ds = combine(pcds,imds);
Read the first point cloud and an image from the datastore.
frame = read(ds);
ptCld = frame{1};
img = frame{2};
Visualize the point cloud and the lane-labeled image. Then, return to the start of the datastore.
Note: You can view the labels for all timestamps by importing the groundTruth object into the Image Labeler app.
ax = pcshow(ptCld,ColorSource="Intensity");
zoom(ax,4)
xlim(ax,[-20 50])
zlim(ax,[-5 20])
figure
imshow(img)
hold on
points = gTruth.LabelData{1,:}{1};
for i = 1:length(points)
    plot(points{i}(:,1),points{i}(:,2),LineWidth=3);
end
hold off
reset(ds)
Fuse Lane Labels with Lidar Point Clouds
Because the groundTruth object contains lane labels in the pixel coordinate system, and parabolicLaneBoundary models represent lanes in the vehicle coordinate system, you must convert the lane labels to parabolicLaneBoundary models. Convert the lane labels by using the helperConvertLabelsToLaneBoundaries function, and use a laneData object to efficiently store and read the lane labels.
laneBoundaries = helperConvertLabelsToLaneBoundaries(gTruth,cameraData.sensor);
laneBoundaries = laneData(cameraData.timestamps,laneBoundaries);
Overlay the lane labels onto the images and fuse the images with the corresponding point clouds for all timestamps by using the helperGenerateFusedPointClouds function. The function returns an array of pointCloud objects that contains the fused point cloud for each timestamp. The function associates each lane boundary point with an RGB value of [0 255-i 0] to use when extracting the lane geometry, where i is the index of the lane boundary, counting from the left.
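For example, a fused point colored [0 253 0] belongs to the second lane boundary from the left, and you can recover the index from the green channel.
color = [0 253 0];         % example RGB value of a fused boundary point
laneIdx = 255 - color(2)   % returns 2: second boundary from the left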
fusedPointCloudArray = helperGenerateFusedPointClouds(laneBoundaries,ds,cameraData);
Concatenate the entire sequence of fused point clouds by using the pccat function. Save the concatenated point cloud as a PCD file so that you can import it into RoadRunner.
Note: If the point clouds are in the vehicle coordinate system, you can stitch them by using a point cloud registration algorithm. For more information, see 3-D Point Cloud Registration and Stitching.
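For reference, this is a minimal sketch of pairwise registration and stitching for two point clouds in the vehicle coordinate system, using pcregistericp; the 0.1 m merge grid step is an illustrative value.
% Register the second cloud to the first, then merge the aligned clouds.
moving = fusedPointCloudArray(2);
fixed = fusedPointCloudArray(1);
tformICP = pcregistericp(moving,fixed);    % estimate the rigid transformation
aligned = pctransform(moving,tformICP);    % align the moving point cloud
stitched = pcmerge(fixed,aligned,0.1);     % merge with a 0.1 m grid step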
fusedPointCloud = pccat(fusedPointCloudArray);
pcwrite(fusedPointCloud,"fusedPointCloud.pcd",Encoding="compressed")
Extract Lane Geometry from Fused Point Clouds
Extract the 3-D positions of the lane boundaries from the entire sequence of fused point clouds by using their corresponding RGB values.
figure
numLaneBoundaries = 4;
boundaryPoints = cell(numLaneBoundaries,1);
for i = 1:numLaneBoundaries
    idx = find(fusedPointCloud.Color(:,1)==0 & fusedPointCloud.Color(:,2)==(255-i) & fusedPointCloud.Color(:,3)==0);
    boundaryPoints{i} = fusedPointCloud.Location(idx,:);
    plot(boundaryPoints{i}(:,1),boundaryPoints{i}(:,2),"bo")
    hold on
end
title("Boundary Points Extracted from Fused Point Cloud")
hold off
Smooth Extracted Lane Boundary Points
Labeling inaccuracies and the fusion of labels with point clouds create noise in the extracted lane boundary points. Create smooth boundary points by fitting a parabolic curve through the extracted points.
Note: The road in this sequence is relatively straight, so the example uses a single parabolic curve for the entire sequence of lane boundary detections.
laneBoundaryModels = [];
for i = 1:length(boundaryPoints)
    points = boundaryPoints{i};
    p = findParabolicLaneBoundaries(points(:,1:2),1,MaxNumBoundaries=1);
    laneBoundaryModels = [laneBoundaryModels; p];
end
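Each fitted parabolicLaneBoundary model stores the coefficients [a b c] of the curve y = ax^2 + bx + c in its Parameters property. You can evaluate a model at given longitudinal distances by using the computeBoundaryModel function, which is equivalent to the polyval calls used for visualization below.
% Lateral offset of the first boundary at x = 10 m; equivalent to
% polyval(laneBoundaryModels(1).Parameters,10).
y10 = computeBoundaryModel(laneBoundaryModels(1),10);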
Visualize the smoothed lane boundary points. Note that the lane boundaries are not aligned with one another, because some of them are occluded for long durations by cars or other roadside objects.
figure
for i = 1:length(laneBoundaryModels)
    p = laneBoundaryModels(i);
    x = linspace(p.XExtent(1),p.XExtent(2),20);
    y = polyval(p.Parameters,x);
    plot(x,y,"bo")
    hold on
end
title("Smoothed Boundary Points")
hold off
Trim the lane boundaries to keep them aligned with the shortest boundary by using the helperTrimBoundaries function. Visualize the trimmed lane boundary points.
laneBoundaryModels = helperTrimBoundaries(laneBoundaryModels);
figure
for i = 1:length(boundaryPoints)
    p = laneBoundaryModels(i);
    x = linspace(p.XExtent(1),p.XExtent(2),20);
    y = polyval(p.Parameters,x);
    boundaryPoints{i} = [x' y'];
    plot(boundaryPoints{i}(:,1),boundaryPoints{i}(:,2),"bo")
    hold on
end
title("Trimmed Boundary Points")
hold off
Create RoadRunner HD Map
HD maps contain lane information, which is useful for automated driving applications such as sensing, perception, localization, and planning. Create a RoadRunner HD Map from the extracted lane boundary points by using the helperCreateRRHDMap function.
rrMap = helperCreateRRHDMap(boundaryPoints);
Visualize the map.
plot(rrMap)
Write the RoadRunner HD Map to a binary file, which you can import into RoadRunner.
write(rrMap,"rrMap.rrhd")
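Optionally, you can check the written file by reading it back into a new roadrunnerHDMap object; this assumes the read object function of roadrunnerHDMap is available in your release.
% Optional check: read the file back into a new map object and plot it.
checkMap = roadrunnerHDMap;
read(checkMap,"rrMap.rrhd")
plot(checkMap)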
To open RoadRunner using MATLAB®, specify the path to your RoadRunner project. This code shows the path for a sample project folder location in Windows®.
rrProjectPath = "C:\RR\MyProject";
Specify the path to your local RoadRunner installation folder. This code shows the path for the default installation location in Windows.
rrAppPath = "C:\Program Files\RoadRunner R2023a\bin\win64";
Open RoadRunner using the specified path to your project.
rrApp = roadrunner(rrProjectPath,InstallationFolder=rrAppPath);
Import the rrMap.rrhd scene into RoadRunner.
importScene(rrApp,fullfile(pwd,"rrMap.rrhd"),"RoadRunner HD Map")
You can import the fused point cloud PCD file into RoadRunner by using the Point Cloud Tool (RoadRunner), to visually validate the generated roads against the imported point clouds.
This figure shows the generated road scene built using RoadRunner Scene Builder with the fused point clouds overlaid.
Note: Use of the Scene Builder Tool requires a RoadRunner Scene Builder license.
Helper Functions
helperGenerateFusedPointClouds — Overlays the lane labels onto the images and fuses the images with the point clouds. The function associates each lane boundary point with an RGB value of [0 255-i 0] to use when extracting the lane geometry, where i is the index of the lane boundary, counting from the left.
function pointclouds = helperGenerateFusedPointClouds(ld,ds,cameraData)
% Function to insert lane boundary labels into images and generate fused
% point clouds.
%
% This is a helper function for example purposes and may be removed or
% modified in the future.

sensor = cameraData.sensor;
intrinsics = sensor.Intrinsics;
cal = cameraData.calibration;
reset(ds);
pointclouds = [];
start_frame = 0;
for i = 1:length(ld.LaneBoundaryData)
    % Read the labels
    labels = readData(ld,"RowIndices",i);
    labels = labels{1,2:end};
    idx = cellfun("isempty",labels);
    labels = labels(~idx);
    num_lanes = length(labels);

    % Create camera-to-lidar transformation
    frameNum = start_frame + i;
    rot = quat2rotm([cal(frameNum).heading.w cal(frameNum).heading.x ...
        cal(frameNum).heading.y cal(frameNum).heading.z]);
    tran = [cal(frameNum).position.x cal(frameNum).position.y cal(frameNum).position.z];
    tform = rigid3d(rot',tran);

    % Read the image and point cloud
    data = read(ds);
    pc = data{1};
    img = data{2};

    % Overlay the lane labels onto the image
    for n = 1:num_lanes
        current_lane = labels{n};
        x = linspace(current_lane.XExtent(1),25,10);
        img = insertLaneBoundary(img,current_lane,sensor,x,Color=[0 255-n 0],LineWidth=15);
    end

    % Fuse camera to lidar
    [pcOut,~,indices] = fuseCameraToLidar(img,pc,intrinsics,tform);
    pcOut = select(pcOut,indices);
    pointclouds = [pointclouds; pcOut];
end
end
helperCreateRRHDMap — Creates a RoadRunner HD Map by using the smoothed lane boundary points.
function rrMap = helperCreateRRHDMap(boundaryPoints)
% Function to create a RoadRunner HD Map using smoothed lane boundary points
%
% This is a helper function for example purposes and may be removed or
% modified in the future.

% Create roadrunnerHDMap object
rrMap = roadrunnerHDMap;

% Add lane boundaries to the map
for i = 1:length(boundaryPoints)
    points = boundaryPoints{i};
    rrMap.LaneBoundaries(i) = roadrunner.hdmap.LaneBoundary( ...
        ID=sprintf("LaneBoundary%d",i),Geometry=points);
end

% Calculate lane centers and add them to the map
for i = 1:3
    idx = min(size(rrMap.LaneBoundaries(i).Geometry,1),size(rrMap.LaneBoundaries(i+1).Geometry,1));
    laneCenters = (rrMap.LaneBoundaries(i).Geometry(1:idx,:) + rrMap.LaneBoundaries(i+1).Geometry(1:idx,:))/2;
    rrMap.Lanes(i) = roadrunner.hdmap.Lane(ID=sprintf("Lane%d",i),Geometry=laneCenters, ...
        LaneType="Driving",TravelDirection="Forward");

    % Assign left and right boundary points for each lane
    leftBoundary(rrMap.Lanes(i),sprintf("LaneBoundary%d",i),Alignment="Forward");
    rightBoundary(rrMap.Lanes(i),sprintf("LaneBoundary%d",i+1),Alignment="Forward");
end
end
See Also
Related Topics
- Overview of Scenario Generation from Recorded Sensor Data
- Extract Vehicle Track List from Recorded Camera Data for Scenario Generation
- Extract Vehicle Track List from Recorded Lidar Data for Scenario Generation
- Generate RoadRunner Scenario from Recorded Sensor Data
- Generate Scenario from Actor Track List and GPS Data
- Ego Vehicle Localization Using GPS and IMU Fusion for Scenario Generation
- Extract Lane Information from Recorded Camera Data for Scene Generation