Long-duration tracking of individuals across large sites remains an almost untouched research area. Tracks of individuals acquired in disjoint fields of view have to be connected despite the fact that the same person will appear in a different pose, from a different viewpoint, and under different illumination conditions. Ultimately, this is an identity-matching problem, which might be approached using traditional biometric cues, such as the face. However, practical scenarios preclude relying on good-quality acquisition of face images at standoff distances. Therefore, lacking more stable biometric data, one can revert to whole-body appearance information, provided that a person does not change clothes between sightings. (Doretto et al., 2011) and (Wang et al., 2007) present a model for the appearance of people. It aims at describing the spatial distribution of the albedo of an object (person) as seen from the perspective of each of its constituent (body) parts. Estimating the model entails computing an occurrence matrix, for which a state-of-the-art algorithm enabling real-time performance is derived. It exploits a generalization of the popular integral image representation, which is widely applicable for quickly computing complex vector-valued image statistics.
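To make the speed-up concrete, here is a minimal scalar sketch of the classic integral image idea that the cited works generalize to vector-valued statistics. This is an illustration of the underlying mechanism, not the authors' algorithm: after one linear pass, the sum over any rectangle is obtained with four array lookups in constant time.

```python
import numpy as np

def integral_image(img):
    """Cumulative sum over rows and columns, zero-padded on top/left."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) time via four lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
assert rect_sum(ii, 1, 1, 3, 3) == img[1:3, 1:3].sum()
```

The generalization in the papers replaces the scalar sum with vector-valued image statistics, so that region descriptors can be queried at many locations without rescanning the pixels each time.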
References
JAIHC
Appearance-based person reidentification in camera networks: problem overview and current approaches
Doretto, G., Sebastian, T., Tu, P., and Rittscher, J.
Journal of Ambient Intelligence and Humanized Computing, 2011.
Recent advances in visual tracking methods allow following a given
object or individual in presence of significant clutter or partial
occlusions in a single or a set of overlapping camera views. The
question of when person detections in different views or at different
time instants can be linked to the same individual is of fundamental
importance to the video analysis in large-scale network of cameras.
This is the person reidentification problem. The paper focuses on
algorithms that use the overall appearance of an individual as opposed
to passive biometrics such as face and gait. Methods that effectively
address the challenges associated with changes in illumination, pose,
and clothing appearance variation are discussed. More specifically,
the development of a set of models that capture the overall appearance
of an individual and can effectively be used for information retrieval
are reviewed. Some of them provide a holistic description of a person,
and some others require an intermediate step where specific body
parts need to be identified. Some are designed to extract appearance
features over time, and some others can operate reliably also on
single images. The paper discusses algorithms for speeding up the
computation of signatures. In particular it describes very fast procedures
for computing co-occurrence matrices by leveraging a generalization
of the integral representation of images. The algorithms are deployed
and tested in a camera network comprising three cameras with non-overlapping
fields of view, where a multi-camera multi-target tracker links the
tracks in different cameras by reidentifying the same people appearing
in different views.
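The abstract above mentions fast co-occurrence computation via a generalization of the integral representation of images. A minimal sketch of one such generalization, the integral histogram, is shown below (again an illustration of the idea, not the paper's implementation): one integral image is kept per histogram bin, so the histogram of any rectangle costs O(bins) lookups regardless of the rectangle's area.

```python
import numpy as np

def integral_histogram(labels, n_bins):
    """labels: 2D array of bin indices; returns an (H+1, W+1, n_bins)
    cumulative table, zero-padded on the top and left borders."""
    h, w = labels.shape
    one_hot = np.eye(n_bins, dtype=np.int64)[labels]   # (H, W, n_bins)
    ih = np.zeros((h + 1, w + 1, n_bins), dtype=np.int64)
    ih[1:, 1:, :] = np.cumsum(np.cumsum(one_hot, axis=0), axis=1)
    return ih

def rect_hist(ih, r0, c0, r1, c1):
    """Histogram of labels[r0:r1, c0:c1] via four lookups per bin."""
    return ih[r1, c1] - ih[r0, c1] - ih[r1, c0] + ih[r0, c0]

rng = np.random.default_rng(0)
labels = rng.integers(0, 8, size=(16, 16))   # e.g. quantized colors
ih = integral_histogram(labels, 8)
hist = rect_hist(ih, 2, 3, 10, 12)
brute = np.bincount(labels[2:10, 3:12].ravel(), minlength=8)
assert np.array_equal(hist, brute)
```

With such a table precomputed once per image, appearance statistics can be pooled over arbitrary body-part regions at query time without revisiting the pixels, which is what enables the real-time signatures discussed in the papers.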
@article{dorettoSTR11jaihc,
abbr = {JAIHC},
author = {Doretto, G. and Sebastian, T. and Tu, P. and Rittscher, J.},
title = {Appearance-based person reidentification in camera networks: problem
overview and current approaches},
journal = {Journal of Ambient Intelligence and Humanized Computing},
year = {2011},
volume = {2},
pages = {127--151},
affiliation = {West Virginia University, P.O. Box 6901, Morgantown, WV 26506, USA},
bib2html_pubtype = {Journals},
bib2html_rescat = {Human Reidentification, Identity Management, Video Analysis, Appearance
Modeling, Shape and Appearance Modeling, Integral Image Computations,
Track Matching},
file = {dorettoSTR11jaihc.pdf:doretto\\journal\\dorettoSTR11jaihc.pdf:PDF},
issn = {1868-5137},
issue = {2},
keyword = {Engineering},
owner = {doretto},
publisher = {Springer Berlin / Heidelberg},
timestamp = {2010.10.17},
url = {http://dx.doi.org/10.1007/s12652-010-0034-y}
}
ICCV
Shape and appearance context modeling
Wang, X., Doretto, G., Sebastian, T. B., Rittscher, J., and Tu, P. H.
In Proceedings of IEEE International Conference on Computer Vision, 2007.
In this work we develop appearance models for computing the similarity
between image regions containing deformable objects of a given class
in real time. We introduce the concept of shape and appearance context.
The main idea is to model the spatial distribution of the appearance
relative to each of the object parts. Estimating the model entails
computing occurrence matrices. We introduce a generalization of the
integral image and integral histogram frameworks, and prove that
it can be used to dramatically speed up occurrence computation. We
demonstrate the ability of this framework to recognize an individual
walking across a network of cameras. Finally, we show that the proposed
approach outperforms several other methods.
@inproceedings{wangDSRT07iccv,
abbr = {ICCV},
author = {Wang, X. and Doretto, G. and Sebastian, T. B. and Rittscher, J. and Tu, P. H.},
title = {Shape and appearance context modeling},
booktitle = {Proceedings of IEEE International Conference on Computer Vision},
year = {2007},
pages = {1--8},
bib2html_pubtype = {Conferences},
bib2html_rescat = {Human Reidentification, Video Analysis, Appearance Modeling, Shape
and Appearance Modeling, Integral Image Computations, Track Matching},
file = {wangDSRT07iccv.pdf:doretto\\conference\\wangDSRT07iccv.pdf:PDF},
owner = {doretto},
timestamp = {2007.01.19}
}