Motion-Aware Temporal Coherence for Video Resizing

ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2009)

Yu-Shuen Wang1    Hongbo Fu2    Olga Sorkine3    Tong-Yee Lee1    Hans-Peter Seidel4

1National Cheng Kung University, Taiwan
2School of Creative Media, City University of Hong Kong
3New York University, 4Max-Planck-Institut für Informatik


Overview of our automatic content-aware video resizing framework. We align the original frames of a video clip to a common coordinate system by estimating interframe camera motion, so that corresponding components have roughly the same spatial coordinates. We achieve spatially and temporally coherent resizing of the aligned frames by preserving the relative positions of corresponding components within a grid-based optimization framework. The final resized video is reconstructed by transforming every video frame back to the original coordinate system.
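To make the alignment step concrete, here is a minimal Python/OpenCV sketch that chains interframe homographies to map every frame into the first frame's coordinate system. The ORB features, RANSAC homography model, and function names are illustrative assumptions; the paper's actual camera-motion estimation may differ.

import cv2
import numpy as np

def interframe_homography(prev_gray, curr_gray):
    """Estimate the homography mapping curr_gray into prev_gray's coordinates."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des2, des1)  # query: current frame, train: previous frame
    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects correspondences on moving objects as outliers.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

def align_to_first_frame(frames):
    """Chain interframe homographies so every frame maps into frame 0's system."""
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    transform, transforms = np.eye(3), [np.eye(3)]
    for prev, curr in zip(grays, grays[1:]):
        transform = transform @ interframe_homography(prev, curr)
        transforms.append(transform.copy())
    # Apply with, e.g., cv2.warpPerspective(frames[i], transforms[i], (w, h)).
    return transforms

After alignment, a static scene point occupies roughly the same coordinates in every transformed frame, which is what lets the resizing optimization constrain corresponding components directly.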

Abstract
Temporal coherence is crucial in content-aware video retargeting. To date, this problem has been addressed by constraining temporally adjacent pixels to be transformed coherently. However, due to the motion-oblivious nature of this simple constraint, the retargeted videos often exhibit flickering or waving artifacts, especially when significant camera or object motions are involved. Since the feature correspondence across frames varies spatially with both camera and object motion, motion-aware treatment of features is required for video resizing. This motivated us to align consecutive frames by estimating interframe camera motion and to constrain relative positions in the aligned frames. To preserve object motion, we detect distinct moving areas of objects across multiple frames and constrain each of them to be resized consistently. We build a complete video resizing framework by incorporating our motion-aware constraints with an adaptation of the scale-and-stretch optimization recently proposed by Wang and colleagues. Our streaming implementation of the framework allows efficient resizing of long video sequences with low memory cost. Experiments demonstrate that our method produces spatiotemporally coherent retargeting results even for challenging examples with complex camera and object motion, which are difficult to handle with previous techniques.
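To illustrate how a motion-aware temporal term can be folded into a streaming, per-frame grid solve, here is a minimal Python sketch: a spatial least-squares system (standing in for the scale-and-stretch terms) plus a quadratic penalty tying each grid vertex to the previous frame's solution mapped into the current frame's coordinates. The matrix A, weights w, and parameter lam are illustrative assumptions, not the paper's formulation.

import numpy as np

def solve_frame(A, b, v_prev_aligned, w, lam):
    """Least-squares grid solve with a motion-aware temporal term.

    Minimizes ||A v - b||^2 + lam * sum_i w_i ||v_i - p_i||^2, where
    A, b encode the per-frame spatial (scale-and-stretch-style) terms,
    p_i = v_prev_aligned[i] is the previous frame's solved vertex i
    mapped into the current frame's coordinates, and w_i down-weights
    vertices without a valid correspondence in the previous frame.
    """
    W = np.repeat(w, 2)  # duplicate each vertex weight for its x and y coords
    lhs = A.T @ A + lam * np.diag(W)
    rhs = A.T @ b + lam * W * v_prev_aligned.ravel()
    return np.linalg.solve(lhs, rhs).reshape(-1, 2)

For the first frame, setting lam to zero recovers a purely spatial solve; thereafter only the previous frame's solution needs to be kept, which suggests why a streaming implementation can process long sequences at low memory cost.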
Paper

    PDF (24 MB)
Videos

Supplemental results: illustrations, more resizing results

BibTeX
@ARTICLE{Wang:2009,
    author  = {Yu-Shuen Wang and Hongbo Fu and Olga Sorkine and Tong-Yee Lee and Hans-Peter Seidel},
    title   = {Motion-Aware Temporal Coherence for Video Resizing},
    journal = {ACM Trans. Graph. (Proceedings of ACM SIGGRAPH ASIA)},
    year    = {2009},
    volume  = {28},
    number  = {5},
}
Thanks

We would like to thank the anonymous reviewers for their constructive comments. We thank Hui-Chih Lin for the video production and Andrew Nealen for the video narration. We also thank Miki Rubinstein for providing the seam carving implementation used in our comparisons and Joyce Meng for her help with the video materials and licensing. The video clips are used with permission from ARS Film Production, the Blender Foundation, and MAMMOTH HD. Yu-Shuen Wang and Tong-Yee Lee are supported by the Landmark Program of the NCKU Top University Project (contract B0008) and the National Science Council (contracts NSC-97-2628-E-006-125-MY3 and NSC-96-2628-E-006-200-MY3), Taiwan. Hongbo Fu is partly supported by a start-up research grant at CityU (Project No. 7200148). Olga Sorkine's research is supported in part by an NYU URCF grant and an NSF award IIS-0905502.