Sensor fusion for human pose tracking with ToF cameras, RGB cameras, and accelerometers

This thesis proposes combining image processing and wearable-sensor data acquisition to estimate the pose of a human subject.

Either an RGB camera or a ToF (time-of-flight) camera can be used for image acquisition.

Joint accelerations are acquired through a network of wearable IMU sensors managed by a Raspberry Pi, and the MQTT communication protocol is used to share the information.
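The IMU-to-broker pipeline could be sketched as below. This is a minimal illustration, not the actual setup: the topic name, broker address, sampling rate, and the `read_accelerometer` stub are all assumptions, and the actual IMU driver would replace the stub.

```python
import json
import time


def read_accelerometer():
    """Placeholder for the actual IMU driver call (hypothetical)."""
    return (0.0, 0.0, 9.81)


def make_payload(joint, acc, t=None):
    """Serialize one accelerometer sample as JSON for MQTT transport."""
    return json.dumps({
        "joint": joint,
        "acc": list(acc),
        "t": time.time() if t is None else t,
    })


def publish_loop(broker="localhost", topic="pose/imu", rate_hz=100):
    """Read samples and publish them to an MQTT broker (paho-mqtt 1.x style API)."""
    import paho.mqtt.client as mqtt  # requires the paho-mqtt package
    client = mqtt.Client()
    client.connect(broker)
    while True:
        sample = read_accelerometer()
        client.publish(topic, make_payload("wrist", sample))
        time.sleep(1.0 / rate_hz)
```

A subscriber on the fusion machine would subscribe to the same topic and decode each JSON payload back into a timestamped acceleration sample.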

A sensor fusion algorithm combines the information from the two sources, reducing the uncertainty of the pose estimate.
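The abstract does not specify the fusion algorithm; a Kalman filter is a common choice for this kind of camera/IMU combination. Below is a minimal one-dimensional sketch that predicts position from the accelerometer and corrects it with the camera measurement; the time step and noise parameters are illustrative assumptions.

```python
import numpy as np


def kalman_fuse(z_cam, a_imu, dt=0.033, q=0.5, r_cam=0.01):
    """1-D Kalman filter: predict with IMU acceleration, correct with camera position.

    z_cam: sequence of camera position measurements
    a_imu: sequence of accelerometer readings (same length)
    Returns the filtered position estimates.
    """
    x = np.zeros(2)                           # state: [position, velocity]
    P = np.eye(2)                             # state covariance
    F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity motion model
    B = np.array([0.5 * dt**2, dt])           # acceleration input mapping
    H = np.array([[1.0, 0.0]])                # camera measures position only
    Q = q * np.array([[dt**4 / 4, dt**3 / 2],
                      [dt**3 / 2, dt**2]])    # process noise (white-accel model)
    est = []
    for z, a in zip(z_cam, a_imu):
        # Predict: propagate the state using the accelerometer reading.
        x = F @ x + B * a
        P = F @ P @ F.T + Q
        # Update: correct the prediction with the camera measurement.
        y = z - H @ x                         # innovation
        S = H @ P @ H.T + r_cam               # innovation covariance
        K = (P @ H.T) / S                     # Kalman gain
        x = x + (K * y).ravel()
        P = (np.eye(2) - K @ H) @ P
        est.append(x[0])
    return est
```

Because the update weighs each source by its noise covariance, the fused estimate has lower uncertainty than either the camera or the IMU alone, which is the "reduction of uncertainty" the fusion step provides.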

The fused estimate drives the real-time movement of an avatar in a virtual environment displayed on a HoloLens, providing visual feedback of the movement.