Long-duration tracking of individuals across large sites remains an almost untouched research area. Tracks of individuals acquired in disjoint fields of view have to be connected despite the fact that the same person will appear in a different pose, from a different viewpoint, and under different illumination conditions. Ultimately, this is an identity-matching problem, which might be approached with traditional biometric cues, such as the face. However, practical scenarios prevent relying on good-quality acquisition of face images at standoff distances. Therefore, in the absence of more stable biometric data, one can revert to whole-body appearance information, provided that a person does not change clothes between sightings. Doretto et al. (2011) and Wang et al. (2007) present a model for the appearance of people. It aims at describing the spatial distribution of the albedo of an object (person) as seen from the perspective of each of its constituent (body) parts. Estimating the model entails computing an occurrence matrix, for which a state-of-the-art algorithm enabling real-time performance is derived. It exploits a generalization of the popular integral image representation, which is widely applicable for quickly computing complex vector-valued image statistics.
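The generalized integral image idea mentioned above can be illustrated with a minimal sketch (not the papers' actual occurrence-matrix algorithm): each pixel carries a vector-valued statistic, here a one-hot histogram bin, and a 2-D cumulative sum lets the statistic of any rectangular region be read off in constant time per bin. Function and variable names are illustrative only.

```python
import numpy as np

def integral_histogram(labels, num_bins):
    """Integral image generalized to vector-valued per-pixel statistics.

    `labels` is an (H, W) array of quantized appearance labels (e.g. color
    bins). Returns an (H+1, W+1, num_bins) integral volume, zero-padded so
    region queries need no edge cases.
    """
    h, w = labels.shape
    onehot = np.zeros((h, w, num_bins), dtype=np.int64)
    onehot[np.arange(h)[:, None], np.arange(w)[None, :], labels] = 1
    ii = np.zeros((h + 1, w + 1, num_bins), dtype=np.int64)
    ii[1:, 1:] = onehot.cumsum(axis=0).cumsum(axis=1)
    return ii

def region_histogram(ii, top, left, bottom, right):
    """Histogram of labels in rows [top, bottom) and cols [left, right),
    recovered with four lookups, independent of the region's area."""
    return (ii[bottom, right] - ii[top, right]
            - ii[bottom, left] + ii[top, left])
```

Because each region query costs only four lookups per bin, statistics can be pooled over many body-part regions in real time, which is the property the papers' occurrence-matrix computation exploits.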


  1. JAIHC
    Appearance-based person reidentification in camera networks: problem overview and current approaches. Doretto, G., Sebastian, T., Tu, P., and Rittscher, J. Journal of Ambient Intelligence and Humanized Computing, 2011.
  2. ICCV
    Shape and appearance context modeling. Wang, X., Doretto, G., Sebastian, T. B., Rittscher, J., and Tu, P. H. In Proceedings of IEEE International Conference on Computer Vision, 2007.