
Dense6D

If it were possible to measure the 3D position and motion of every image point, the behavior of pedestrians could be predicted much more accurately than today (Has he seen me? Will he cross the street?). In addition, typical motion patterns become instantly visible, which might aid classical pedestrian recognition.
The image on the right shows the limb motion of a fast-moving person with a color code (green: stationary ... red: fast moving; source: [10b]). The left leg stands still while the right leg swings forward quickly. The fast motion of the left arm matches this gait. The upper body moves to the left at an intermediate speed (orange).
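A color encoding like the one described above can be sketched as a simple mapping from speed to a green-to-yellow-to-red ramp. This is a hypothetical helper for illustration only; the saturation speed `v_max` (8 m/s here) is an assumption, not a value from the source.

```python
import numpy as np

def speed_to_color(speed_mps, v_max=8.0):
    """Map a speed in m/s to an RGB color in [0, 1].

    Hypothetical illustration of the green-stationary / red-fast
    encoding: green -> yellow -> red, saturating at v_max (assumed).
    """
    t = np.clip(np.asarray(speed_mps, dtype=float) / v_max, 0.0, 1.0)
    r = np.clip(2.0 * t, 0.0, 1.0)            # ramps up over the first half
    g = np.clip(2.0 * (1.0 - t), 0.0, 1.0)    # ramps down over the second half
    return np.stack([r, g, np.zeros_like(t)], axis=-1)
```

Applied per pixel to a speed image, this yields exactly the kind of visualization shown in the figure: stationary limbs in green, fast-swinging limbs in red.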


Thanks to Thomas Müller (see previous page), we can compute dense optical flow for arbitrary scenes. The fusion of the dense optical flow (upper image) with the stereo result (lower left image) yields an estimated motion vector for every image point in the scene. The video shows the result for a typical crossing scene; the colors encode speed. Thanks to the efficient implementation of Clemens Rabe, the approach runs in real time on a GPU: both the optical flow and the 6D filters are computed on the GPU. A paper published at ECCV showed the benefit of the temporal integration in Dense6D compared to differential alternatives, often referred to as "scene flow" [1].
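The core fusion idea can be sketched as follows: triangulate each pixel from its disparity at time t-1, follow the optical flow to its new image position, triangulate again with the current disparity, and take the difference as a 3D motion vector. This is a minimal illustration, not the published implementation; the function name and the camera parameters (focal length `f`, baseline `b`, principal point `cx`, `cy`) are assumed values.

```python
import numpy as np

def dense_motion_field(disp_prev, disp_curr, flow_u, flow_v,
                       f=800.0, b=0.3, cx=320.0, cy=240.0):
    """Per-pixel 3D motion from stereo disparity plus optical flow.

    Illustrative sketch under assumed camera parameters:
    f: focal length [px], b: stereo baseline [m], (cx, cy): principal point.
    Returns an (H, W, 3) array of motion vectors in meters per frame.
    """
    h, w = disp_prev.shape
    v, u = np.mgrid[0:h, 0:w].astype(np.float64)

    def triangulate(u_px, v_px, disp):
        z = f * b / np.maximum(disp, 1e-6)    # depth from disparity
        x = (u_px - cx) * z / f
        y = (v_px - cy) * z / f
        return np.stack([x, y, z], axis=-1)

    p_prev = triangulate(u, v, disp_prev)

    # follow the flow and sample the current disparity at the displaced pixel
    u2 = np.clip(u + flow_u, 0, w - 1)
    v2 = np.clip(v + flow_v, 0, h - 1)
    d2 = disp_curr[v2.round().astype(int), u2.round().astype(int)]
    p_curr = triangulate(u2, v2, d2)

    return p_curr - p_prev                    # 3D motion per pixel
```

A differential "scene flow" method would stop here; the point of Dense6D, as the ECCV paper argues, is to integrate these measurements over time rather than rely on a single frame pair.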
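The temporal integration behind the "6D filters" can be illustrated with a constant-velocity Kalman filter over a six-dimensional state (3D position plus 3D velocity) per tracked point: the triangulated position is the measurement, and filtering over time yields both a smoothed position and a velocity estimate. This is a generic textbook sketch, not the paper's implementation; `dt`, the process noise `q`, and the measurement noise `r` are assumed values.

```python
import numpy as np

def make_6d_filter(dt=0.04, q=0.5, r=0.1):
    """Constant-velocity Kalman filter over a 6D state (X, Y, Z, Vx, Vy, Vz).

    Sketch of the temporal-integration idea with assumed parameters:
    dt: frame interval [s], q: process noise, r: measurement noise.
    Returns a step function that ingests one 3D position measurement
    and returns the updated 6D state estimate.
    """
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)                     # position += dt * velocity
    H = np.hstack([np.eye(3), np.zeros((3, 3))])   # only position is observed
    Q = q * np.eye(6)
    R = r * np.eye(3)
    x = np.zeros(6)
    P = np.eye(6) * 10.0

    def step(z):
        nonlocal x, P
        # predict with the constant-velocity motion model
        x = F @ x
        P = F @ P @ F.T + Q
        # correct with the measured 3D position z
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(6) - K @ H) @ P
        return x.copy()

    return step
```

Running one such filter per image point is what makes the estimate "6D": each pixel carries a filtered 3D position and a 3D velocity, which is exactly the quantity the speed color coding visualizes.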