Relative Pose Estimation of Objects using 3D Cameras

Fusing data from central cameras such as 3D depth, infrared, omnidirectional, or plain perspective cameras is a challenging problem and a necessity in modern mobile robotics perception systems. Active perception, i.e. the ability to extract the most information from a scene, is also coming into focus, so this project aims to tackle this novel problem in the near future.

One of the main applications of this approach is 3D object recognition in dynamic environments (e.g. production lines, people tracking, etc.).

Possible tracks within this project include:

  • Deep learning based end-to-end object pose estimation
  • Change detection in 3D environments
  • Active perception for 2D cameras
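A basic primitive underlying 2D/3D data fusion in all of these tracks is projecting 3D points (e.g. from a lidar or depth camera) into a 2D camera image. Below is a minimal sketch assuming a pinhole camera model; the intrinsics `K`, rotation `R`, translation `t`, and the sample points are hypothetical illustration values, not calibration data from this project:

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project Nx3 world-frame points into pixel coordinates
    using a pinhole model: x ~ K [R | t] X."""
    cam = points_3d @ R.T + t          # transform into the camera frame
    in_front = cam[:, 2] > 0           # keep only points in front of the camera
    cam = cam[in_front]
    uv = cam @ K.T                     # apply camera intrinsics
    uv = uv[:, :2] / uv[:, 2:3]        # perspective division
    return uv, in_front

# Hypothetical calibration: 500 px focal length, principal point (320, 240)
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # camera frame aligned with world frame
t = np.zeros(3)

pts = np.array([[0.0, 0.0, 2.0],       # point on the optical axis
                [1.0, 0.0, 2.0]])      # point 1 m to the side, 2 m ahead
uv, mask = project_points(pts, K, R, t)
print(uv)  # the on-axis point lands on the principal point (320, 240)
```

Once points are projected, pixel values from the 2D image (color, infrared intensity) can be attached to the corresponding 3D points, which is the basic fusion step behind the object recognition and change detection tracks above.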

Interested and motivated candidates are welcome to join this research topic; future grants focusing on this research field are envisaged.

Apply to this project by contacting Levente Tamas.

Start date (duration):
October, 2018
Project status and type:
2D/3D data fusion for lidar, omni and infrared cameras