Gianfranco Doretto / Research / Project

Dynamic Texture Editing

Editing dynamic textures by changing model parameters


The goal of this project is to design algorithms for synthesizing and editing realistic image sequences of dynamic scenes that exhibit some form of temporal regularity. Such scenes include flowing water, steam, smoke, flames, foliage of trees in the wind, crowds, and dense traffic flow.
Algorithms that aim at synthesizing and editing video sequences traditionally use a physical model of the scene. This approach is commonly known as physics-based rendering (PBR). While it is very principled and allows full editing power over the scene, it is usually computationally intensive and targeted to specific scenes; not to mention that, in general, it is difficult to build physical models that explain certain phenomena well, and for some applications these approaches may be overkill. These limitations motivate image-based rendering (IBR) approaches, which use images of real scenes to synthesize new ones. IBR techniques typically produce impressive results without much computation and are often simpler to implement. Their major drawback is often a lack of flexibility in terms of editing power.
We use an IBR approach that relies on a simple statistical model of the video sequence, and we study to what extent the model parameters can be modified to provide as much editing power as possible. Despite the difficulty of editing IBR models, we found that our approach allows one to interactively change and reverse the speed of the dynamic visual process in the scene, change its spatial frequency content, and change the statistical source that drives the simulation. The editing parameters allow us to synthesize several interesting visual effects, and every kind of manipulation can be carried out online and in real time.
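To make the statistical model concrete, a dynamic texture can be written as a linear dynamical system: a hidden state evolves under linear dynamics driven by white noise, and each frame is a linear function of the state. The sketch below is a minimal, hypothetical simulation loop (NumPy; the dimensions, variable names, and random parameters are illustrative, not those used in our experiments):

```python
import numpy as np

# Minimal dynamic-texture simulation sketch (hypothetical dimensions).
# State:  x_{t+1} = A x_t + B v_t,  v_t ~ N(0, I)
# Frame:  y_t     = C x_t + y_mean
n, k, p = 20, 15, 64 * 64          # state dim, noise dim, pixels per frame

rng = np.random.default_rng(0)
A = 0.9 * np.linalg.qr(rng.standard_normal((n, n)))[0]  # stable dynamics
B = 0.1 * rng.standard_normal((n, k))                   # noise input matrix
C = rng.standard_normal((p, n))                         # appearance basis
y_mean = rng.standard_normal(p)                         # mean frame

def simulate(T, x0=None):
    """Synthesize T frames by iterating the state equation."""
    x = np.zeros(n) if x0 is None else x0
    frames = []
    for _ in range(T):
        x = A @ x + B @ rng.standard_normal(k)
        frames.append(C @ x + y_mean)
    return np.array(frames)

frames = simulate(50)              # 50 synthetic frames, each of p pixels
```

Because synthesis is just a matrix-vector recursion per frame, it is cheap enough to run, and to re-parameterize, online.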
Notice that this framework, combined with recognition, could be used to infer higher-level properties of the observed dynamic visual process. For example, one could estimate the energy of the wind in a certain natural environment, or the wave energy flow in ocean waves.
The main contributions of our approach are:
  • A simple and efficient IBR framework for modifying the temporal and spatial behavior of dynamic textures.
  • The definition of the conditions under which model parameters of a dynamic texture can be modified.
  • Thorough explanation of the relationship between model parameters and visual appearance of the scene.
  • Several demonstrations of the power of this approach, which allow online, real-time interaction with the editor.


The following examples demonstrate the power of our approach to extrapolate and manipulate new video sequences. Given a training sequence, we apply the learning procedure for dynamic textures and extract the parameters of the model. We then simulate and edit the model to synthesize new video sequences.
The images of the movies show on the left a depiction of the scene, and on the right the corresponding values of some editing parameters. The first slider depicts the speed, that ranges from 0 to 3, or from −3 to 0, times the speed of the training sequence. The second slider represents the intensity of the driving noise of the simulation; it ranges from 0 to 3 times the intensity of the driving noise of the training sequence. The last three sliders depict the weights that determine the spatial frequency content of the video sequence. When they are set to one the simulation has the same frequency content of the training sequence. The first slider weights the coarse scale frequencies, the second the middle scale frequencies, and the last one the fine scale frequencies.
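One plausible reading of these three kinds of edits, sketched on a toy linear-dynamical-system model: speed maps to a (possibly fractional or negative) power of the dynamics matrix, noise intensity maps to a scaling of the input matrix, and the three scale sliders map to band-wise re-weighting of each frame's 2-D spectrum. The function names, band boundaries, and dimensions below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
n, side = 12, 32                       # state dim, frame side length (32x32)
A = 0.9 * np.linalg.qr(rng.standard_normal((n, n)))[0]   # toy dynamics
B = 0.1 * rng.standard_normal((n, n))                    # toy noise input
C = rng.standard_normal((side * side, n))                # toy appearance

def edit_speed(A, speed):
    """Fractional (or negative, i.e. time-reversed) power of the dynamics."""
    w, V = np.linalg.eig(A)
    return np.real(V @ np.diag(w.astype(complex) ** speed) @ np.linalg.inv(V))

def edit_noise(B, intensity):
    """Scale the driving-noise input matrix."""
    return intensity * B

def edit_scales(frame, w_coarse, w_mid, w_fine):
    """Re-weight coarse/middle/fine spatial-frequency bands of one frame."""
    F = np.fft.fftshift(np.fft.fft2(frame.reshape(side, side)))
    fy, fx = np.indices(F.shape) - side // 2
    r = np.hypot(fy, fx)               # radial spatial frequency
    weights = np.where(r < side / 8, w_coarse,
               np.where(r < side / 4, w_mid, w_fine))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * weights))).ravel()

A_fast = edit_speed(A, 2.0)            # double speed
A_rev  = edit_speed(A, -1.0)           # reversed dynamics
B_loud = edit_noise(B, 3.0)            # 3x driving-noise intensity
frame  = C @ rng.standard_normal(n)
hazy   = edit_scales(frame, 2.0, 1.0, 0.3)   # boost coarse, cut fine detail
```

Since each edit acts directly on the model parameters (or on one frame at a time), the sliders can be moved while the simulation is running.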
Note that the following examples include only a selected subset of all the possible changes of the editing parameters. For example, one could also rotate the spatial frequencies and the state space, process the color channels independently, or perform other operations.
Note that the learning procedure has been applied directly to the raw data; no preprocessing has been performed. Also, for portability reasons, the .avi movies are MPEG compressed (video coder V1), and the quality of the synthesized images is degraded accordingly.


Smoke

In the following example one can see that increasing the intensity of the driving input results in an apparently more "turbulent" smoke, and that amplifying the coarse frequency components results in a thinner, "hazy" smoke. One can also make the smoke appear more "grainy" or "patchy," and can adjust or reverse its speed at will.
Download .avi movie [609Kb]

Ocean waves

In this example you can see how to produce a "rougher" sea with larger waves by amplifying the intensity and the coarse and fine scales, or how to produce a "lake effect" with gentler, smoother waves. Finally, increasing the intensity and the fine scale while decreasing the coarse and middle scales results in a "rain effect," like rain pouring on a pond.
Download .avi movie [3.25Mb]


Fountain

This example shows how playing with the intensity and scale parameters yields interesting effects that appear to be the result of changing the nozzle of the fountain, from a "spurty" fountain to a "spray-like" one. The fountain can also be slowed down and brought to a complete stop.
Download .avi movie [2.58Mb]


Flame

Here we show the effects of altering a dynamic texture of a flame, including changing the spatial scales, speed, and direction. Non-realistic effects can also be achieved by altering the dynamics of each color component independently.
Download .avi movie [2.63Mb]

Related Publications

  • Doretto, G. and Soatto, S.
    Editable dynamic textures.
    In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 137–142, Madison, Wisconsin, USA, June 2003.
    Details   BibTeX   PDF (507.5kB)
  • Doretto, G. and Soatto, S.
    Editable dynamic textures.
    In Conference Abstracts and Applications of SIGGRAPH '02, p. 177, San Antonio, Texas, USA, July 2002.
    Details   BibTeX   PDF (822.3kB)
  • Doretto, G. and Soatto, S.
    Editable dynamic textures.
    Technical Report TR020001, UCLA Computer Science Department, 2002.
    Details   BibTeX   PDF (832.5kB)
  • Doretto, G., Chiuso, A., Wu, Y. N., and Soatto, S.
    Dynamic textures.
    International Journal of Computer Vision, 51(2):91–109, 2003.
    Details   BibTeX   PDF (2.6MB)