From dynamic texture to dynamic shape and appearance models: An overview

Doretto, G. and Soatto, S.
From dynamic texture to dynamic shape and appearance models: An overview
In Mirmehdi, M., Xie, X., and Suri, J., editors, Handbook of Texture Analysis, chapter 9. Imperial College Press, 2008.

Download

PDF (899.8 kB)

Abstract

In modeling complex visual phenomena one can employ rich models that characterize the global statistics of images, or choose simple classes of models to represent the local statistics of a spatiotemporal “segment,” together with the partition of the data into such segments. Each segment could be characterized by certain statistical regularity properties in space and/or time. The former approach is often pursued in Computer Graphics, where a global model is necessary to capture effects such as mutual illumination or cast shadows. However, such models cannot be uniquely inferred as they are far more complex than the data, and one has to revert to a much simpler representation that, for instance, models the visual complexity of single segments in terms of statistical variability from a nominal model. In this chapter we do so by modeling the image variability of dynamic scenes through the joint temporal variation of shape and appearance. We describe how this framework can be specialized to Dynamic Texture models for both static and moving cameras. The characterization poses the problems of modeling, learning, and synthesis of video sequences that exhibit certain temporal regularity properties (such as sea-waves, smoke, foliage, talking faces, flags in wind, etc.), using tools from time series analysis, system identification theory, and finite element methods.
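
The abstract refers to tools from time series analysis and system identification theory. As a rough illustration of that family of models (a minimal sketch, not the chapter's own algorithm), the code below fits the classic dynamic-texture linear dynamical system, x_{t+1} = A x_t + v_t and y_t = C x_t + w_t, to a stack of vectorized frames using a truncated SVD, and then rolls the system forward to synthesize new frames. The NumPy-based implementation, the function names, and the state dimension n_states are illustrative assumptions.

    import numpy as np

    def learn_dynamic_texture(frames, n_states=20):
        """Fit an LDS  y_t = C x_t + w_t,  x_{t+1} = A x_t + v_t
        to a (T, H, W) array of frames via SVD-based identification."""
        T = frames.shape[0]
        Y = frames.reshape(T, -1).T                  # (pixels, T) data matrix
        y_mean = Y.mean(axis=1, keepdims=True)
        Y = Y - y_mean                               # work with mean-subtracted frames
        # Appearance basis C and state trajectory X from a truncated SVD.
        U, S, Vt = np.linalg.svd(Y, full_matrices=False)
        C = U[:, :n_states]                          # observation (appearance) matrix
        X = np.diag(S[:n_states]) @ Vt[:n_states, :] # states, (n_states, T)
        # State-transition matrix A by least squares on consecutive states.
        A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])
        # Driving-noise covariance from the one-step prediction residuals.
        V = X[:, 1:] - A @ X[:, :-1]
        Q = V @ V.T / (T - 1)
        return A, C, Q, y_mean, X[:, 0]

    def synthesize(A, C, Q, y_mean, x0, n_frames, frame_shape, seed=None):
        """Simulate the learned system forward to generate new frames."""
        rng = np.random.default_rng(seed)
        L = np.linalg.cholesky(Q + 1e-8 * np.eye(Q.shape[0]))  # noise factor
        x, out = x0.copy(), []
        for _ in range(n_frames):
            out.append((C @ x + y_mean[:, 0]).reshape(frame_shape))
            x = A @ x + L @ rng.standard_normal(x.shape)       # drive the state forward
        return np.stack(out)

Given a grayscale clip loaded as a (T, H, W) array, one would call learn_dynamic_texture once and then synthesize an arbitrarily long continuation; the state dimension trades model compactness against fidelity.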

BibTeX

@InCollection{dorettoS08chapter,
  Title                    = {From dynamic texture to dynamic shape and appearance models: {A}n overview},
  Author                   = {Doretto, G. and Soatto, S.},
  Booktitle                = {Handbook of texture analysis},
  Publisher                = {Imperial {C}ollege {P}ress},
  Year                     = {2008},
  Chapter                  = {9},
  Editor                   = {Mirmehdi, M. and Xie, X. and Suri, J.},
  Abstract                 = {In modeling complex visual phenomena one can employ rich models that characterize the global statistics of images, or choose simple classes of models to represent the local statistics of a spatiotemporal “segment,” together with the partition of the data into such segments. Each segment could be characterized by certain statistical regularity properties in space and/or time. The former approach is often pursued in Computer Graphics, where a global model is necessary to capture effects such as mutual illumination or cast shadows. However, such models cannot be uniquely inferred as they are far more complex than the data, and one has to revert to a much simpler representation that, for instance, models the visual complexity of single segments in terms of statistical variability from a nominal model. In this chapter we do so by modeling the image variability of dynamic scenes through the joint temporal variation of shape and appearance. We describe how this framework can be specialized to Dynamic Texture models for both static and moving cameras. The characterization poses the problems of modeling, learning, and synthesis of video sequences that exhibit certain temporal regularity properties (such as sea-waves, smoke, foliage, talking faces, flags in wind, etc.), using tools from time series analysis, system identification theory, and finite element methods.},
  Bib2html_pubtype         = {Book Chapters},
  Bib2html_rescat          = {Dynamic Textures, Visual Motion Analysis, Shape and Appearance Modeling}
}