
read

Return target data using lidar

Since R2024b

    Description

    [pointCloud,range,reflectivity,semantic] = read(lidar) returns a point cloud, distances from the sensor to object points, reflectivity of surface materials, and the semantic identifiers of the objects in a scene. You can set the field of view and angular resolution in the sim3d.sensors.Lidar object specified by lidar.
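    For example, this sketch reads one frame of lidar data. The construction shown here is an assumption for illustration; in practice, the sensor must be created in and attached to a running sim3d world before read returns data. The first output is named xyz here to avoid shadowing the pointCloud class name.

        % Minimal sketch, assuming the lidar is attached to a running
        % sim3d world (world and actor setup omitted).
        lidar = sim3d.sensors.Lidar();

        % Read one frame: point locations, per-point range, surface
        % reflectivity, and semantic label IDs.
        [xyz, range, reflectivity, semantic] = read(lidar);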

    Input Arguments


    Virtual lidar sensor that detects targets in the 3D environment, specified as a sim3d.sensors.Lidar object.

    Example: lidar = sim3d.sensors.Lidar

    Output Arguments


    Point cloud data, returned as an m-by-n-by-3 array of real-valued [x, y, z] points. m and n define the number of points in the point cloud, as shown in this equation:

    m × n = (VFOV / VRES) × (HFOV / HRES)

    where:

    • VFOV is the vertical field of view of the lidar, in degrees, as specified by the VerticalFieldOfView argument.

    • VRES is the vertical angular resolution of the lidar, in degrees, as specified by the VerticalAngularResolution argument.

    • HFOV is the horizontal field of view of the lidar, in degrees, as specified by the HorizontalFieldOfView argument.

    • HRES is the horizontal angular resolution of the lidar, in degrees, as specified by the HorizontalAngularResolution argument.
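    Reading the equation factor by factor, m = VFOV/VRES and n = HFOV/HRES. For example, with illustrative (nondefault) values, a 40-degree vertical field of view at 1.25-degree resolution and a 360-degree horizontal field of view at 0.4-degree resolution yield a 32-by-900 point cloud:

        % Worked example with illustrative parameter values.
        VFOV = 40;   VRES = 1.25;  % vertical field of view and resolution, deg
        HFOV = 360;  HRES = 0.4;   % horizontal field of view and resolution, deg
        m = VFOV/VRES              % 32 rows
        n = HFOV/HRES              % 900 columns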

    Each entry in the m-by-n array specifies the x, y, and z coordinates of a detected point in the sensor coordinate system. If the lidar does not detect a point at a given coordinate, the function returns x, y, and z as NaN.

    You can create a pointCloud (Computer Vision Toolbox) object from these returned points, and then process it by using the point cloud functions in Computer Vision Toolbox.

    Data Types: single
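    A minimal sketch, assuming xyz holds the m-by-n-by-3 array returned by read and that Computer Vision Toolbox is installed:

        % Package the returned points as a pointCloud object, then drop the
        % NaN returns (coordinates with no detection) before processing.
        % removeInvalidPoints returns an unorganized point cloud.
        ptCloud = pointCloud(xyz);
        ptCloudValid = removeInvalidPoints(ptCloud);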

    Distances from the sensor to the object points, returned as an m-by-n matrix of positive, real values. Each value in the matrix corresponds to an [x, y, z] coordinate point returned in the pointCloud output argument.

    Data Types: single
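    Because each range value corresponds to a point in the pointCloud output, the range output should match the Euclidean distance from the sensor origin to that point. A quick consistency check, assuming xyz and range come from the same read call:

        % range should equal the per-point Euclidean norm of xyz
        % (NaN where the lidar detects no point).
        rangeFromXYZ = sqrt(sum(xyz.^2, 3));                     % m-by-n matrix
        maxErr = max(abs(rangeFromXYZ - range), [], "all", "omitnan")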

    Reflectivity of surface materials, returned as an m-by-n matrix of intensity values in the range [0, 1], where m is the number of rows and n is the number of columns in the point cloud. Each value in the reflectivity output corresponds to a point in the pointCloud output. The function returns NaN for points that are not part of a surface material.

    To calculate reflectivity, the lidar sensor uses the Phong reflection model [1]. The model describes surface reflectivity as a combination of diffuse reflections (scattered reflections, such as from rough surfaces) and specular reflections (mirror-like reflections, such as from smooth surfaces).

    Data Types: single
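    For reference, this sketch evaluates the two Phong terms for a single surface point. The vectors and coefficients below are illustration values, not sensor parameters:

        % Illustrative Phong reflection terms for one surface point [1].
        % N: surface normal, L: direction to the light source,
        % V: direction to the viewer, R: reflection of L about N.
        N = [0 0 1];  L = [0.5 0 0.866];  V = [-0.5 0 0.866];
        R = 2*dot(L, N)*N - L;
        kd = 0.7;  ks = 0.3;  shininess = 8;  % made-up material coefficients
        diffuse  = kd*max(dot(N, L), 0);             % rough-surface scattering
        specular = ks*max(dot(R, V), 0)^shininess;   % mirror-like highlight
        phongReflectivity = diffuse + specular       % value in [0, 1]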

    Semantic label identifier for each point, returned as an m-by-n array of label IDs. m is the vertical resolution of the scan and n is the horizontal resolution. Each entry corresponds to a point in the pointCloud output.

    The table shows the object IDs used in the default scenes. If a scene contains an object that does not have an assigned ID, that object is assigned an ID of 0. The detection of lane markings is not supported.

    ID        Type
    0         None/default
    1         Building
    2         Not used
    3         Other
    4         Pedestrians
    5         Pole
    6         Lane Markings
    7         Road
    8         Sidewalk
    9         Vegetation
    10        Vehicle
    11        Not used
    12        Generic traffic sign
    13        Stop sign
    14        Yield sign
    15        Speed limit sign
    16        Weight limit sign
    17-18     Not used
    19        Left and right arrow warning sign
    20        Left chevron warning sign
    21        Right chevron warning sign
    22        Not used
    23        Right one-way sign
    24        Not used
    25        School bus only sign
    26-38     Not used
    39        Crosswalk sign
    40        Not used
    41        Traffic signal
    42        Curve right warning sign
    43        Curve left warning sign
    44        Up right arrow warning sign
    45-47     Not used
    48        Railroad crossing sign
    49        Street sign
    50        Roundabout warning sign
    51        Fire hydrant
    52        Exit sign
    53        Bike lane sign
    54-56     Not used
    57        Sky
    58        Curb
    59        Flyover ramp
    60        Road guard rail
    61        Bicyclist
    62-66     Not used
    67        Deer
    68-70     Not used
    71        Barricade
    72        Motorcycle
    73-255    Not used
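    For example, you can use the semantic output to isolate the returns that belong to one class, such as vehicles (ID 10). This sketch assumes semantic holds the per-point label IDs from the table and that xyz comes from the same read call:

        % Keep only the lidar returns labeled as vehicles (ID 10).
        vehicleMask = (semantic == 10);              % m-by-n logical mask
        xyzFlat = reshape(xyz, [], 3);               % (m*n)-by-3 list of points
        vehicleXYZ = xyzFlat(vehicleMask(:), :);     % x, y, z of vehicle returns
        ptCloudVehicles = pointCloud(vehicleXYZ);    % Computer Vision Toolbox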

    Dependencies

    To enable the semantic output, set the EnableSemanticOutput argument of the sim3d.sensors.Lidar object to 1.

    References

    [1] Phong, Bui Tuong. “Illumination for Computer Generated Pictures.” Communications of the ACM 18, no. 6 (June 1975): 311–17. https://doi.org/10.1145/360825.360839.

    Version History

    Introduced in R2024b
