M²BEV: Multi-Camera Joint 3D Detection and Segmentation with Unified Birds-Eye View Representation
Abstract
In this paper, we propose M²BEV, a unified framework that jointly performs 3D object detection and map segmentation in the Birds-Eye View (BEV) space with multi-camera image inputs. Unlike the majority of previous works, which process detection and segmentation separately, M²BEV infers both tasks with a unified model and improves efficiency. M²BEV efficiently transforms multi-view 2D image features into a 3D BEV feature in ego-car coordinates. This BEV representation is important because it enables different tasks to share a single encoder. Our framework further contains four important designs that benefit both accuracy and efficiency: (1) an efficient BEV encoder design that reduces the spatial dimension of the voxel feature map; (2) a dynamic box assignment strategy that uses learning-to-match to assign ground-truth 3D boxes to anchors; (3) a BEV centerness re-weighting that assigns larger weights to more distant predictions; and (4) large-scale 2D detection pre-training and auxiliary supervision. We show that these designs significantly benefit the ill-posed, camera-based 3D perception tasks where depth information is missing. M²BEV is memory efficient, allowing significantly higher-resolution images as input with faster inference speed. Experiments on nuScenes show that M²BEV achieves state-of-the-art results in both 3D object detection and BEV segmentation, with the best single model achieving 42.5 mAP and 57.0 mIoU on these two tasks, respectively.
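The abstract compresses several mechanisms into one paragraph: lifting multi-view 2D features into a BEV voxel grid, an efficient BEV encoder that collapses the vertical dimension, and a centerness-based loss re-weighting. Below is a minimal, hypothetical PyTorch sketch (not the authors' released code) of how these pieces could fit together. All tensor shapes, the spatial-to-channel reshape, and the centerness formula are assumptions made for illustration, not the paper's exact definitions.

```python
# Hypothetical sketch of the three ideas named in the abstract.
# Shapes, the reshape trick, and the centerness formula are assumptions.
import torch
import torch.nn as nn

C, Z, X, Y = 64, 4, 200, 200  # assumed voxel grid: channels, height, BEV extent

# (1) 2D -> 3D lifting: every voxel along a camera ray would receive the same
# 2D image feature (a uniform-depth assumption, so no depth net is needed).
# A real implementation would sample multi-view features at each voxel's
# projected pixel; here a random tensor stands in for the lifted result.
voxel_feat = torch.randn(1, C, Z, X, Y)

# (2) Efficient BEV encoder: fold the vertical (Z) axis into the channel axis
# so cheap 2D convolutions can replace expensive 3D convolutions.
bev_feat = voxel_feat.reshape(1, C * Z, X, Y)
bev_encoder = nn.Sequential(
    nn.Conv2d(C * Z, 256, kernel_size=3, padding=1),
    nn.BatchNorm2d(256),
    nn.ReLU(inplace=True),
)
bev_out = bev_encoder(bev_feat)  # (1, 256, X, Y)

# (3) BEV centerness re-weighting: cells far from the ego car get larger loss
# weights, since distant objects cover fewer pixels and are harder to localize.
xs = torch.linspace(-1.0, 1.0, X)
ys = torch.linspace(-1.0, 1.0, Y)
gx, gy = torch.meshgrid(xs, ys, indexing="ij")
centerness = 1.0 + torch.sqrt(gx**2 + gy**2) / (2.0 ** 0.5)  # values in [1, 2]
# per_cell_loss = centerness * raw_loss  # applied when supervising BEV outputs
```

The uniform-depth lifting in step (1) is what keeps the transform memory-cheap: no per-pixel depth distribution is stored, which is one plausible reading of why the abstract claims higher input resolutions fit in memory.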
Attached Files
Submitted: 2204.05088.pdf (15.8 MB)
Additional details
- Eprint ID: 115587
- DOI: 10.48550/arXiv.2207.05850
- Resolver ID: CaltechAUTHORS:20220714-212525848
- Created: 2022-07-15 (from EPrint's datestamp field)
- Updated: 2023-06-02 (from EPrint's last_modified field)