Dynamic Textures: Joint Modeling of Spatial and Temporal Statistics

2 minute read

In analyzing visual processes, there may be portions of a video that can be modeled as dynamic textures, meaning they exhibit temporal stationarity. Within a single frame, they may also exhibit repetitions of the same patterns, as in image textures, meaning the visual process is spatially stationary as well. It therefore makes sense to design models that capture the structure of the joint spatial and temporal statistics, for the purpose of enabling recognition and segmentation. (Doretto et al., 2004) introduces a model for this kind of dynamic texture, which combines a tree representation of Markov random fields, capturing the spatial stationarity, with linear dynamical systems, capturing the temporal stationarity of the visual process. The effectiveness of the model is demonstrated by extrapolating video in both the space and time domains. The framework sets the stage for simultaneous segmentation and recognition of spatio-temporal events.
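The temporal half of such a model is a linear dynamical system (LDS), the standard generative model for dynamic textures: a hidden state evolves linearly in time, and each frame is a linear function of the state plus noise. A minimal illustrative sketch is below; the dimensions, matrices, and noise levels are placeholders for illustration, not values from the paper.

```python
import numpy as np

# Illustrative linear dynamical system (temporal component only):
#   x_{t+1} = A x_t + v_t,  v_t ~ N(0, Q)   (hidden state dynamics)
#   y_t     = C x_t + w_t,  w_t ~ N(0, R)   (observed frame, flattened to a vector)
# All dimensions and matrices here are assumed placeholders.

rng = np.random.default_rng(0)
n_state, n_pixels, n_frames = 5, 64, 50

A = 0.9 * np.eye(n_state)                      # stable state-transition matrix
C = rng.standard_normal((n_pixels, n_state))   # maps hidden state to pixel intensities
Q, R = 0.01, 0.05                              # isotropic process / observation noise variances

x = rng.standard_normal(n_state)
frames = np.empty((n_frames, n_pixels))
for t in range(n_frames):
    frames[t] = C @ x + np.sqrt(R) * rng.standard_normal(n_pixels)
    x = A @ x + np.sqrt(Q) * rng.standard_normal(n_state)

print(frames.shape)  # (50, 64): a synthesized "video" of flattened frames
```

Because the dynamics are stationary, rolling the loop forward beyond the training horizon extrapolates the texture in time; the paper's contribution is to add spatial stationarity on top of this, so extrapolation works in space as well.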


  1. ECCV
    Spatially homogeneous dynamic textures. Doretto, G., Jones, E., and Soatto, S. In Proceedings of the European Conference on Computer Vision, 2004. (Oral)