Fig 1: 3D bounding box projection and point cloud bird’s eye view.

KITTI 3D Object Detection Dataset

KITTI is one of the well-known benchmarks for 3D object detection. Working
with this dataset requires some understanding of the different files and their contents. The goal here is to do some basic manipulation and sanity checks to get a general understanding of the data. This article uses four types of files from the KITTI 3D Object Detection dataset:

camera_2 image (.png),
camera_2 label (.txt),
calibration (.txt),
velodyne point cloud (.bin),
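As a warm-up, here is a minimal sketch of how the velodyne file format works. The .bin file is nothing but a flat array of float32 values, four per point (x, y, z, reflectance). The file path and frame name below are hypothetical; in the real dataset a frame is named by its index, e.g. velodyne/000000.bin.

```python
import numpy as np
import tempfile, os

# A velodyne .bin file is a flat float32 array: x, y, z, reflectance per point.
# We synthesize a tiny file here to show the round trip; a real frame would be
# read from the dataset instead (e.g. velodyne/000000.bin).
fake_points = np.array([[1.0, 2.0, 3.0, 0.5],
                        [4.0, 5.0, 6.0, 0.9]], dtype=np.float32)
path = os.path.join(tempfile.mkdtemp(), "000000.bin")
fake_points.tofile(path)

# Reading: np.fromfile gives a flat array; reshape to (N, 4).
points = np.fromfile(path, dtype=np.float32).reshape(-1, 4)
print(points.shape)  # (2, 4)
```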

Fig 2 : The many coordinate systems used in the dataset.

For each frame, there is one file of each type with the same name but a different extension. The image files are regular PNG files and can be displayed by any PNG-aware software. The label files contain the 2D and 3D bounding boxes for objects, in plain text. Each row of the file describes one object with 15 values, including the tag (e.g. Car, Pedestrian, Cyclist). The 2D bounding boxes are given in pixels in the camera image. The 3D bounding boxes span two coordinate systems: the size (height, width, and length) is given in the object coordinate system, and the center of the bounding box in the camera coordinate system.
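A label row can be unpacked into named fields like this. The field order follows the KITTI devkit readme; the sample line below is illustrative, not taken from an actual frame.

```python
# Parse one KITTI label line into named fields. Field order per the devkit
# readme: type, truncation, occlusion, alpha, 2D bbox, 3D size, 3D location,
# rotation_y -- 15 values in total.
def parse_label_line(line):
    v = line.split()
    return {
        "type": v[0],
        "truncated": float(v[1]),
        "occluded": int(v[2]),
        "alpha": float(v[3]),
        "bbox_2d": [float(x) for x in v[4:8]],      # left, top, right, bottom (pixels)
        "dimensions": [float(x) for x in v[8:11]],  # height, width, length (meters)
        "location": [float(x) for x in v[11:14]],   # box center, camera coordinates
        "rotation_y": float(v[14]),
    }

# Illustrative line (not from a real frame):
sample = "Car 0.00 0 -1.58 587.01 173.33 614.12 200.12 1.65 1.67 3.64 -0.65 1.71 46.70 -1.59"
obj = parse_label_line(sample)
print(obj["type"], obj["dimensions"])  # → Car [1.65, 1.67, 3.64]
```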

The point cloud file contains the location and reflectance of each point in the lidar coordinate system. The calibration file contains the values of seven matrices: P0 through P3, R0_rect, Tr_velo_to_cam, and Tr_imu_to_velo.
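The calibration file stores one matrix per line as "name: v1 v2 ...". A sketch of a reader, assuming the standard shapes (P0 through P3 are 3x4, R0_rect is 3x3, Tr_velo_to_cam and Tr_imu_to_velo are 3x4); the sample content below is synthetic:

```python
import numpy as np
import tempfile, os

# Read a KITTI calibration file into a dict of numpy matrices. Each line is
# "name: v1 v2 ... vN"; we reshape the matrices the two tests below will need.
def read_calib(path):
    calib = {}
    with open(path) as f:
        for line in f:
            if ":" not in line:
                continue
            name, values = line.split(":", 1)
            calib[name] = np.array([float(v) for v in values.split()])
    for name, shape in [("P2", (3, 4)), ("R0_rect", (3, 3)),
                        ("Tr_velo_to_cam", (3, 4))]:
        calib[name] = calib[name].reshape(shape)
    return calib

# Synthetic file in the same format, just to exercise the reader:
sample = ("P2: " + " ".join(["0"] * 12) + "\n"
          "R0_rect: " + " ".join(["0"] * 9) + "\n"
          "Tr_velo_to_cam: " + " ".join(["0"] * 12) + "\n")
path = os.path.join(tempfile.mkdtemp(), "calib.txt")
with open(path, "w") as f:
    f.write(sample)
calib = read_calib(path)
print(calib["P2"].shape)  # (3, 4)
```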

The Px matrices project a point in the rectified reference camera coordinate system to the camera_x image; camera_0 is the reference camera. R0_rect is the rectifying rotation for the reference coordinate system (rectification makes the images of multiple cameras lie on the same plane). Tr_velo_to_cam maps a point from the point cloud coordinate system to the reference coordinate system.

We will do two tests here. The first projects the 3D bounding boxes from the label file onto the image; the second projects a point from the point cloud coordinate system onto the image. The algebra is simple, as follows. The first equation projects a 3D bounding box from the reference camera coordinate system to the camera_2 image; the second projects a velodyne point into the camera_2 image.

y_image = P2 * R0_rect * R0_rot * x_ref_coord

y_image = P2 * R0_rect * Tr_velo_to_cam * x_velo_coord

In the above, R0_rot is the rotation matrix that maps from the object coordinate system to the reference coordinate system.
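The second equation can be sketched in a few lines of numpy. The only subtlety is making the shapes compatible: R0_rect (3x3) and Tr_velo_to_cam (3x4) are padded to homogeneous 4x4 matrices, the points get a homogeneous coordinate of 1, and the projected result is divided by depth. The function names are my own, and the identity matrices in the usage example are only a sanity check, not real calibration values.

```python
import numpy as np

def to_homogeneous_4x4(m):
    """Pad a 3x3 or 3x4 matrix into the top-left of a 4x4 identity."""
    out = np.eye(4)
    out[:m.shape[0], :m.shape[1]] = m
    return out

def project_velo_to_image(pts_velo, P2, R0_rect, Tr_velo_to_cam):
    """Apply y_image = P2 * R0_rect * Tr_velo_to_cam * x_velo_coord
    to an (N, 3) array of velodyne points; returns (N, 2) pixel coords."""
    n = pts_velo.shape[0]
    pts_h = np.hstack([pts_velo, np.ones((n, 1))])  # (N, 4) homogeneous
    proj = P2 @ to_homogeneous_4x4(R0_rect) @ to_homogeneous_4x4(Tr_velo_to_cam)
    y = (proj @ pts_h.T).T                          # (N, 3)
    return y[:, :2] / y[:, 2:3]                     # divide by depth

# Sanity check with identity calibration (not real values): a point at
# (2, 4, 2) should land at pixel (2/2, 4/2) = (1, 2).
P2 = np.hstack([np.eye(3), np.zeros((3, 1))])
R0_rect = np.eye(3)
Tr_velo_to_cam = np.hstack([np.eye(3), np.zeros((3, 1))])
pixels = project_velo_to_image(np.array([[2.0, 4.0, 2.0]]),
                               P2, R0_rect, Tr_velo_to_cam)
print(pixels)  # [[1. 2.]]
```

The first equation works the same way, with Tr_velo_to_cam swapped for the box's R0_rot (plus its translation to the box center) and the box corners in the object coordinate system as input.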

The code is relatively simple and available on GitHub.
