SceneCam: Improving multi-camera remote collaboration using augmented reality
- Publication Type: Conference Proceeding
- Published in: Adjunct Proceedings of the 2019 IEEE International Symposium on Mixed and Augmented Reality, ISMAR-Adjunct 2019, 2020, 00, pp. 28-33
This item is open access.
The embargo period expires on 9 Jan 2022
© 2019 IEEE. Systems for remote collaboration on physical tasks generally use AR/VR technologies to create a shared visual space in which collaborators perform tasks together. The shared space often comes from a single camera view. Prior research has not reported on the benefits of using multiple cameras for remote collaboration. On the contrary, there appear to be usability issues that must be addressed when designing remote collaboration systems that use multiple cameras to capture different areas and perspectives of a task space. To be usable, a multi-camera remote collaboration system must indicate to the local user which camera the remote user is looking at and vice versa; the system must also make it fast and easy for the remote user to obtain the right camera view for a given collaborative task. We present SceneCam, an AR prototype with which we explore different techniques for improving the usability of multi-camera remote collaboration by making camera selection easier and faster. Specifically, SceneCam implements two camera selection techniques. The first nudges the remote user to manually select an optimal camera view of the local user's actions. The second automatically selects an optimal camera view of the local user and shows it to the remote user. Additionally, SceneCam implements two focus-in-context views (exocentric and egocentric) that give the remote user a spatial overview of the local user's whereabouts in relation to the multiple task-space areas, as well as direct visual access to the camera views of those areas. Camera selection techniques (manual point-and-click, nudging, automatic) and focus-in-context views (no focus-in-context view, exocentric, egocentric) make up the two dimensions of a design space for multi-camera remote collaboration. We describe how SceneCam spans this design space.
Lastly, as part of future work, we discuss some hypotheses regarding the effects of the proposed camera selection techniques, focus-in-context views, and combinations thereof on the usability of multi-camera remote collaboration.
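The abstract does not specify how the automatic technique decides which camera view of the local user is "optimal". One plausible heuristic, sketched below purely as an illustration (all names and the scoring rule are assumptions, not taken from the paper), is to pick the camera whose optical axis points most directly at the local user's current position:

```python
from dataclasses import dataclass
import math

@dataclass
class Camera:
    name: str
    position: tuple   # (x, y, z) in room coordinates
    forward: tuple    # unit vector along the camera's optical axis

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _normalize(v):
    length = math.sqrt(_dot(v, v))
    return tuple(x / length for x in v)

def select_camera(cameras, user_position):
    """Return the camera whose optical axis is best aligned with the
    direction toward the local user -- a simple proxy for an 'optimal'
    view. A real system would likely also weigh distance and occlusion."""
    def alignment(cam):
        to_user = _normalize(
            tuple(u - p for u, p in zip(user_position, cam.position))
        )
        return _dot(cam.forward, to_user)
    return max(cameras, key=alignment)
```

For example, with a camera facing the user and another facing away, `select_camera` returns the facing one; the nudging technique could use the same score to suggest, rather than switch to, the best view.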