Gianfranco Doretto / Publications

Dynamic textures

Soatto, S., Doretto, G., and Wu, Y. N.
Dynamic textures
In Proceedings of the IEEE International Conference on Computer Vision, vol. 2, pp. 439–446, Vancouver, BC, Canada, July 2001.
Oral Presentation

Download

PDF (929.6 kB)

Abstract

Dynamic textures are sequences of images of moving scenes that exhibit certain stationarity properties in time; these include sea waves, smoke, foliage, and whirlwinds, but also talking faces, traffic scenes, etc. We present a novel characterization of dynamic textures that poses the problems of modelling, learning, recognizing and synthesizing dynamic textures on a firm analytical footing. We borrow tools from system identification to capture the “essence” of dynamic textures; we do so by learning (i.e., identifying) models that are optimal in the sense of maximum likelihood or minimum prediction error variance. For the special case of second-order stationary processes we identify the model in closed form. Once learned, a model has predictive power and can be used for extrapolating synthetic sequences to infinite length with negligible computational cost. We present experimental evidence that, within our framework, even low dimensional models can capture very complex visual phenomena.
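The sketch below illustrates the kind of closed-form, SVD-based identification the abstract refers to: frames are stacked as columns of a data matrix, an SVD yields the observation matrix and state sequence of a linear dynamical system, the state transition is fit by least squares, and new frames of arbitrary length are synthesized by simulating the model forward. It is a minimal NumPy sketch, not the paper's reference implementation; the function names, the choice of grayscale input, the state dimension n_states, and the regularization of the noise covariance are illustrative assumptions.

```python
import numpy as np

def learn_dynamic_texture(frames, n_states=20):
    """Closed-form (suboptimal) identification of a linear dynamical system
    x_{t+1} = A x_t + v_t,  y_t = C x_t + mean, from a frame sequence.

    frames: array of shape (num_frames, height, width), grayscale.
    """
    T, h, w = frames.shape
    Y = frames.reshape(T, -1).T.astype(np.float64)     # pixels x frames
    mean = Y.mean(axis=1, keepdims=True)
    Y0 = Y - mean

    # SVD gives the spatial appearance basis C and the state sequence X.
    U, s, Vt = np.linalg.svd(Y0, full_matrices=False)
    C = U[:, :n_states]
    X = np.diag(s[:n_states]) @ Vt[:n_states, :]       # n_states x T

    # State transition A by least squares: X[:, 1:] ~= A X[:, :-1].
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])

    # Driving-noise covariance from the one-step prediction residuals.
    V = X[:, 1:] - A @ X[:, :-1]
    Q = V @ V.T / (T - 1)
    return A, C, Q, mean.ravel(), X[:, 0], (h, w)

def synthesize(A, C, Q, mean, x0, shape, num_frames=200, seed=None):
    """Extrapolate a synthetic sequence of arbitrary length by simulating
    the learned state-space model forward in time."""
    rng = np.random.default_rng(seed)
    # Shape the driving noise; small jitter keeps the Cholesky factor stable.
    B = np.linalg.cholesky(Q + 1e-8 * np.eye(Q.shape[0]))
    x = x0.copy()
    out = []
    for _ in range(num_frames):
        out.append((C @ x + mean).reshape(shape))
        x = A @ x + B @ rng.standard_normal(x.shape)
    return np.stack(out)
```

Once the model is learned from a short training clip, synthesis is just a matrix-vector recursion per frame, which is why extrapolation to arbitrarily long sequences has negligible computational cost.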

BibTeX

@InProceedings{soattoDW01iccv,
  Title                    = {Dynamic textures},
  Author                   = {Soatto, S. and Doretto, G. and Wu, Y. N.},
  Booktitle                = {Proceedings of the IEEE International Conference on Computer Vision},
  Year                     = {2001},
  Address                  = {Vancouver, BC, Canada},
  Month                    = jul,
  Pages                    = {439--446},
  Volume                   = {2},
  Abstract                 = {Dynamic textures are sequences of images of moving scenes that exhibit certain stationarity properties in time; these include sea waves, smoke, foliage, and whirlwinds, but also talking faces, traffic scenes, etc. We present a novel characterization of dynamic textures that poses the problems of modelling, learning, recognizing and synthesizing dynamic textures on a firm analytical footing. We borrow tools from system identification to capture the “essence” of dynamic textures; we do so by learning (i.e., identifying) models that are optimal in the sense of maximum likelihood or minimum prediction error variance. For the special case of second-order stationary processes we identify the model in closed form. Once learned, a model has predictive power and can be used for extrapolating synthetic sequences to infinite length with negligible computational cost. We present experimental evidence that, within our framework, even low dimensional models can capture very complex visual phenomena.},
  Bib2html_pubtype         = {Refereed Conferences},
  Bib2html_rescat          = {Dynamic Textures, Visual Motion Analysis},
  File                     = {soattoDW01iccv.pdf:doretto\\conference\\soattoDW01iccv.pdf:PDF},
  Wwwnote                  = {<span class="wwwnote">Oral Presentation</span>}
}