Mixed reality has the potential to increase the usefulness of remote collaboration by allowing remote users to interact with a virtual 3D reconstruction of the physical world. Specifically, remote users can annotate and virtually navigate through image-based reconstructions to complete tasks related to the physical environment. Ultimately, this allows users to collaborate virtually, saving time, energy, and money. However, due to the 3D nature of these mixed-reality reconstructions, existing annotation and virtual navigation methods are suboptimal, degrading the end-user experience. This dissertation addresses this user interface problem by introducing novel constraints for both 2D gesture annotation authoring and photo-based virtual navigation.
First, for 2D gesture annotations, inherent ambiguities arise in going from 2D to 3D, and prior methods do not adequately display such annotations in 3D. We propose to interpret and constrain the rendering of 2D gesture annotations in 3D via an automatic interpretation method and an interactive disambiguation approach, targeting dense and sparse reconstructions, respectively. Experimental results indicate that our methods are more accurate than baseline approaches and that anchoring annotations in 3D enables faster comprehension than a baseline method.
Second, we propose semi-constrained snapping-to-photos interfaces for virtual navigation of 3D image-based reconstructions. Our point-of-interest and point-of-view snapping-to-photos interfaces offer a compromise between fully constrained-to-photos and free-flight travel interfaces. Experimental results, using both dense indoor and sparse outdoor scene reconstructions, indicate the usefulness of our interfaces over prior approaches and show that our snapping-to-photos interfaces are favored over a fully constrained-to-photos baseline. We also contribute user experiments on the specific movement of orbiting to photos in 3D, which show that our hybrid interface was favored over a baseline approach.
In summary, this dissertation contributes simple and useful annotation authoring and virtual navigation user interfaces for mixed-reality remote collaboration.