
Dense Optical Flow

Modern variational methods are able to compute Optical Flow for every pixel of an image and run in real time on the graphics card (GPU). C. Zach and T. Pock [10] laid the pioneering work for this. However, the displacements are limited to about 10-15 pixels; beyond that, large errors can occur. As shown here, traffic scenes exhibit significantly larger displacements.
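For reference, the following is a minimal sketch of how such a variational ("TV-L1") flow field can be computed with off-the-shelf tools. It assumes the opencv-contrib-python package, which provides a DualTVL1 implementation; it is not the 6D-Vision code, and the file names in the usage note are hypothetical.

import cv2
import numpy as np

def tvl1_flow(prev_bgr, next_bgr):
    # Compute a dense flow field (H x W x 2, displacements in pixels) between two frames.
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
    tvl1 = cv2.optflow.DualTVL1OpticalFlow_create()  # default coarse-to-fine settings
    return tvl1.calc(prev_gray, next_gray, None)

# Usage (hypothetical file names):
# flow = tvl1_flow(cv2.imread("frame_0.png"), cv2.imread("frame_1.png"))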

Left: original image of a traffic scene
Middle: Optical Flow result using a new method [15] of the 6D-Vision team
Right: Optical Flow computed with a method from the literature ("TV-L1")
 
The images show a highway scenario. In the middle, an Optical Flow estimate is shown that is quite realistic. The colors encode the direction of motion according to the color wheel shown below on the right; red, for example, encodes motion to the right. The more saturated the color, the larger the displacement. The right image shows the result of a flow algorithm from the literature, which exhibits errors in areas with large displacements (compare to the middle image).
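The color-wheel visualization itself is a standard technique: the flow direction is mapped to hue and the displacement magnitude to saturation. A small sketch is given below; the normalization constant max_mag is an assumption for illustration, not a value taken from [15].

import cv2
import numpy as np

def flow_to_color(flow, max_mag=20.0):
    # Convert an H x W x 2 flow field to a BGR image using an HSV color wheel.
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1], angleInDegrees=True)
    hsv = np.zeros(flow.shape[:2] + (3,), dtype=np.uint8)
    hsv[..., 0] = (ang / 2).astype(np.uint8)                              # hue: direction (OpenCV hue range 0..179)
    hsv[..., 1] = np.clip(mag / max_mag * 255, 0, 255).astype(np.uint8)   # saturation: displacement magnitude
    hsv[..., 2] = 255                                                     # full brightness
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)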

The middle image shows the result of a new method introduced by Thomas Müller [15], a PhD student of the 6D-Vision team. The improvement stems from two factors: first, the method uses prior knowledge from stereo and ego-motion and assumes a stationary scene. Second, it uses PowerFlow, which determines displacements of arbitrary length, and flow fields compatible with this solution are favored. The resulting method estimates the real displacements in traffic scenarios with moving objects very well, is superior to other methods, and runs in real time on a common GPU.
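To illustrate the first idea, the static-scene prior, the sketch below predicts the flow that every stationary pixel should have, given a stereo depth map and the camera's ego-motion, by back-projecting each pixel to 3D, applying the ego-motion, and reprojecting it. This is only an illustration of that kind of prior under a simple pinhole model; the function, its parameters, and the intrinsics matrix K are assumptions, not the published algorithm of [15].

import numpy as np

def predict_static_flow(depth, K, R, t):
    # Predict per-pixel flow (H x W x 2) for a static scene under ego-motion (R, t).
    # depth: H x W depth map in meters (e.g. from stereo); K: 3x3 camera intrinsics;
    # R, t: rotation/translation mapping points from the current to the next camera frame.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]

    # Back-project pixels to 3-D points in the current camera frame.
    X = (u - cx) / fx * depth
    Y = (v - cy) / fy * depth
    pts = np.stack([X, Y, depth], axis=-1)               # H x W x 3

    # Transform into the next camera frame and project back into the image.
    pts_next = pts @ R.T + t                              # H x W x 3
    u_next = fx * pts_next[..., 0] / pts_next[..., 2] + cx
    v_next = fy * pts_next[..., 1] / pts_next[..., 2] + cy

    # Predicted displacement of each pixel if it belongs to the static scene.
    return np.stack([u_next - u, v_next - v], axis=-1)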