Recent advances in augmented, mixed, and virtual reality, coupled with the need to perform analysis and decision-making on large-scale collections of volumetric images, are stimulating research in immersive analytics. Volumetric data is ubiquitous, and the fields that need to interpret and analyze it to support their activities are numerous. Neuroscience, geology, and material science are some examples, and entire industries rely heavily on 3D data, for example, healthcare, construction, quality control, and security. While tools like CAVEs have been able to provide compelling immersive environments (Morehead et al., 2014), current low-cost AR/VR headsets have paved the way for a new era where data analysis and research hypothesis formulation can take advantage of the immersive dimension by leveraging a unique tool like syGlass (Pidhorskyi et al., 2018). This requires the development of novel image quantification methods, given the large scale of the collections. For immersive microscopy image analysis we developed algorithms and pipelines for volumetric segmentation (Holcomb et al., 2016) and volumetric instance-based segmentation for cell counting (Keaton et al., 2023), which are adaptable and integrate the power of computer vision and machine learning with the immersive user experience to fully unleash its potential.
References
WACV
CellTranspose: Few-shot Domain Adaptation for Cellular Instance Segmentation
Keaton, M. R.,
Zaveri, R. J.,
and Doretto, G.
In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision,
2023.
Automated cellular instance segmentation has been used to accelerate biological research for the past two decades, and recent advancements have produced higher quality results with less effort from the biologist. Most current endeavors focus on completely cutting the researcher out of the picture by generating highly generalized models. However, these models invariably fail when faced with novel data that is distributed differently from the data used for training. Rather than approaching the problem with methods that presume the availability of large amounts of target data and computing power for retraining, in this work we address the even greater challenge of designing an approach that requires minimal amounts of new annotated data as well as training time. We do so by designing specialized contrastive losses that leverage the few annotated samples very efficiently. A large set of results shows that 3 to 5 annotations lead to models whose accuracy: 1) significantly mitigates the covariate shift effects; 2) matches or surpasses that of other adaptation methods; 3) even approaches that of methods fully retrained on the target distribution. The adaptation training takes only a few minutes, paving a path towards a balance between model performance, computing requirements, and expert-level annotation needs.
@inproceedings{keatonZD23wacv,
abbr = {WACV},
author = {Keaton, M. R. and Zaveri, R. J. and Doretto, G.},
booktitle = {{Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision}},
title = {{CellTranspose: Few-shot Domain Adaptation for Cellular Instance Segmentation}},
year = {2023},
month = jan,
pages = {455--466},
publisher = {IEEE},
bib2html_pubtype = {Conferences},
arxiv = {2212.14121}
}
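The paper's specialized contrastive losses are not reproduced here; as a rough illustration of the general idea of pulling an annotated sample's embedding toward a matching example and away from mismatched ones, here is a minimal InfoNCE-style contrastive loss in plain Python. The function names and toy embeddings are hypothetical, not the paper's actual formulation.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))

def contrastive_loss(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style loss: high similarity to the positive and low
    similarity to the negatives drive the loss toward zero
    (hypothetical toy version, not CellTranspose's loss)."""
    pos = math.exp(cosine(anchor, positive) / tau)
    neg = sum(math.exp(cosine(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))

# Toy example: the anchor is nearly aligned with its positive,
# while the negatives point elsewhere, so the loss is small.
anchor = [1.0, 0.0]
positive = [0.9, 0.1]
negatives = [[0.0, 1.0], [-1.0, 0.2]]
loss = contrastive_loss(anchor, positive, negatives)
```

With only a handful of annotated target samples, a loss of this family can be minimized in minutes, which is consistent with the adaptation-time claim in the abstract.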
arXiv
syGlass: Interactive Exploration of Multidimensional Images Using Virtual Reality Head-mounted Displays
Pidhorskyi, S.,
Morehead, M.,
Jones, Q.,
Spirou, G.,
and Doretto, G.
arXiv.org:1804.08197,
2018.
The quest for deeper understanding of biological systems has driven the acquisition of increasingly larger multidimensional image datasets. Inspecting and manipulating data of this complexity is very challenging in traditional visualization systems. We developed syGlass, a software package capable of visualizing large scale volumetric data with inexpensive virtual reality head-mounted display technology. This allows leveraging stereoscopic vision to significantly improve perception of complex 3D structures, and provides immersive interaction with data directly in 3D. We accomplished this by developing highly optimized data flow and volume rendering pipelines, tested on datasets up to 16TB in size, as well as tools available in a virtual reality GUI to support advanced data exploration, annotation, and cataloguing.
@article{pidhorskyiMJSD18tr,
abbr = {arXiv},
author = {Pidhorskyi, S. and Morehead, M. and Jones, Q. and Spirou, G. and Doretto, G.},
title = {{syGlass: Interactive Exploration of Multidimensional Images Using Virtual Reality Head-mounted Displays}},
journal = {arXiv.org:1804.08197},
arxiv = {1804.08197},
month = apr,
year = {2018},
bib2html_pubtype = {Tech Reports}
}
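The optimized rendering pipeline itself is not detailed in the abstract; the core operation of direct volume rendering, front-to-back alpha compositing along a viewing ray, can be sketched as follows. This is a simplified grayscale textbook version, not syGlass's actual implementation.

```python
def composite_ray(samples):
    """Front-to-back alpha compositing of (intensity, opacity) samples
    taken along one viewing ray, with early termination once the ray
    is effectively opaque (a standard sketch, not syGlass's code)."""
    color = 0.0   # accumulated intensity
    alpha = 0.0   # accumulated opacity
    for intensity, opacity in samples:
        color += (1.0 - alpha) * opacity * intensity
        alpha += (1.0 - alpha) * opacity
        if alpha > 0.99:  # early ray termination saves work on large volumes
            break
    return color, alpha

# A ray hitting a semi-transparent voxel, then a fully opaque one:
# the opaque voxel terminates the ray early.
c, a = composite_ray([(0.5, 0.3), (1.0, 1.0), (1.0, 1.0)])
```

Early ray termination is one of the classic optimizations that makes interactive frame rates feasible on multi-terabyte volumes, since most rays saturate long before traversing the full dataset.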
Rapid and semiautomated extraction of neuronal cell bodies and nuclei from electron microscopy image stacks
Holcomb, P. S.,
Morehead, M.,
Doretto, G.,
Chen, P.,
Berg, S.,
Plaza, S.,
and Spirou, G.
Methods in Molecular Biology,
2016.
Connectomics—the study of how neurons wire together in the brain—is at the forefront of modern neuroscience research. However, many connectomics studies are limited by the time and precision needed to correctly segment large volumes of electron microscopy (EM) image data. We present here a semi-automated segmentation pipeline using freely available software that can significantly decrease segmentation time for extracting both nuclei and cell bodies from EM image volumes.
@article{holcombMDCBPS16methods,
author = {Holcomb, P. S. and Morehead, M. and Doretto, G. and Chen, P. and Berg, S. and Plaza, S. and Spirou, G.},
title = {Rapid and semiautomated extraction of neuronal cell bodies and nuclei from electron microscopy image stacks},
journal = {Methods in Molecular Biology},
year = {2016},
volume = {1427},
pages = {277--290},
bib2html_pubtype = {Journals}
}
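The published pipeline builds on freely available software and is not reproduced here; as a minimal illustration of the instance-extraction step that follows segmentation, counting connected foreground components (e.g. segmented nuclei) in a binarized image stack can be sketched with a flood fill. This is hypothetical toy code, not the paper's pipeline.

```python
from collections import deque

def count_components(stack):
    """Count 6-connected foreground components in a 3D binary stack
    (lists of z-slices), e.g. nuclei in a binarized EM volume."""
    nz, ny, nx = len(stack), len(stack[0]), len(stack[0][0])
    seen = set()
    count = 0
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if stack[z][y][x] and (z, y, x) not in seen:
                    count += 1  # new component found; flood-fill it
                    seen.add((z, y, x))
                    queue = deque([(z, y, x)])
                    while queue:
                        cz, cy, cx = queue.popleft()
                        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0),
                                           (0, 1, 0), (0, -1, 0),
                                           (0, 0, 1), (0, 0, -1)):
                            vz, vy, vx = cz + dz, cy + dy, cx + dx
                            if (0 <= vz < nz and 0 <= vy < ny
                                    and 0 <= vx < nx and stack[vz][vy][vx]
                                    and (vz, vy, vx) not in seen):
                                seen.add((vz, vy, vx))
                                queue.append((vz, vy, vx))
    return count

# Two voxels touching across z form one component;
# the voxel at (1, 1, 1) is isolated, giving two components in total.
volume = [
    [[1, 0], [0, 0]],
    [[1, 0], [0, 1]],
]
```

In practice this step runs over segmentation masks produced upstream, and the resulting component count and per-component voxel sets feed the cell-body and nucleus extraction described in the paper.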
3DUI
BrainTrek: An Immersive Environment for Investigating Neuronal Tissue
Morehead, M.,
Jones, Q.,
Blatt, J.,
Holcomb, P.,
Schultz, J.,
DeFanti, T.,
Ellisman, M.,
Doretto, G.,
and Spirou, G. A.
In Proceedings of the IEEE Symposium on 3D User Interfaces,
2014.
The high degree of complexity in the cellular and circuit structure of the brain poses challenges for understanding tissue organization as extrapolated from large serial section electron microscopy (ssEM) image data. We advocate the use of 3D immersive virtual reality (IVR) to facilitate the human analysis of such data. We have developed and evaluated the BrainTrek system: a CAVE-based IVR environment with a dedicated and intuitive user interface tailored to the investigation of neural tissue by scientists and educators.
@inproceedings{moreheadJBHSFEDS143dui,
abbr = {3DUI},
author = {Morehead, M. and Jones, Q. and Blatt, J. and Holcomb, P. and Schultz, J. and DeFanti, T. and Ellisman, M. and Doretto, G. and Spirou, G. A.},
title = {Brain{T}rek: {A}n Immersive Environment for Investigating Neuronal
Tissue},
booktitle = {Proceedings of the IEEE Symposium on 3D User Interfaces},
year = {2014},
pages = {157--158},
address = {Minneapolis, MN},
month = mar,
bib2html_pubtype = {Conferences},
file = {moreheadJBHSFEDS143dui.pdf:doretto/conference/moreheadJBHSFEDS143dui.pdf:PDF}
}