Kraus, Matthias ; Angerbauer, Katrin ; Buchmüller, Juri ; Schweitzer, Daniel ; Keim, Daniel A. ; Sedlmair, Michael ; Fuchs, Johannes: Assessing 2D and 3D Heatmaps for Comparative Analysis: An Empirical Study. In: Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020 — ISBN 9781450367080, pp. 546:1–546:14
Abstract
Heatmaps are a popular visualization technique that encode 2D density distributions using color or brightness. Experimental studies have shown, though, that both of these visual variables are inaccurate for reading and comparing numeric data values. A potential remedy might be to use 3D heatmaps by introducing height as a third dimension to encode the data. Encoding abstract data in 3D, however, poses many problems, too. To better understand this tradeoff, we conducted an empirical study (N=48) to evaluate the user performance of 2D and 3D heatmaps for comparative analysis tasks. We test our conditions on a conventional 2D screen, but also in a virtual reality environment to allow for real stereoscopic vision. Our main results show that 3D heatmaps are superior in terms of error rate when reading and comparing single data items. However, for overview tasks, the well-established 2D heatmap performs better.
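To make the contrast between the two encodings concrete, here is a minimal sketch of a 2D heatmap (density mapped to color) next to a 3D heatmap (the same density additionally mapped to height). The data is synthetic and the rendering uses Matplotlib; this illustrates the encodings only, not the paper's actual stimuli.

```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the "3d" projection on older Matplotlib versions

# Synthetic 2D density distribution (hypothetical data, two Gaussian bumps).
x, y = np.meshgrid(np.linspace(-3, 3, 40), np.linspace(-3, 3, 40))
z = np.exp(-(x**2 + y**2)) + 0.5 * np.exp(-((x - 1.5)**2 + (y + 1)**2))

fig = plt.figure(figsize=(10, 4))

# 2D heatmap: density encoded purely by color/brightness.
ax2d = fig.add_subplot(1, 2, 1)
ax2d.imshow(z, origin="lower", cmap="viridis")
ax2d.set_title("2D heatmap (color)")

# 3D heatmap: the same density additionally encoded as surface height.
ax3d = fig.add_subplot(1, 2, 2, projection="3d")
ax3d.plot_surface(x, y, z, cmap="viridis")
ax3d.set_title("3D heatmap (height + color)")

plt.tight_layout()
plt.show()
```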
Kurzhals, Kuno ; Göbel, Fabian ; Angerbauer, Katrin ; Sedlmair, Michael ; Raubal, Martin: A View on the Viewer: Gaze-Adaptive Captions for Videos. In: Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020 — ISBN 9781450367080, pp. 139:1–139:12
Abstract
Subtitles play a crucial role in the cross-lingual distribution of multimedia content and help communicate information where auditory content is not feasible (loud environments, hearing impairments, unknown languages). Established methods utilize text at the bottom of the screen, which may distract from the video. Alternative techniques place captions closer to related content (e.g., faces) but are not applicable to arbitrary videos such as documentaries. Hence, we propose to leverage live gaze as an indirect input method to adapt captions to individual viewing behavior. We implemented two gaze-adaptive methods and compared them in a user study (n=54) to traditional captions and audio-only videos. The results show that viewers with less experience with captions prefer our gaze-adaptive methods as they assist them in reading. Furthermore, gaze distributions resulting from our methods are closer to natural viewing behavior compared to the traditional approach. Based on these results, we provide design implications for gaze-adaptive captions.
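The abstract does not detail the two gaze-adaptive methods, so the following is only a hedged sketch of one plausible policy: smoothing live gaze samples with an exponential moving average and anchoring the caption near the smoothed gaze point. All class names, parameters, and coordinate conventions here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    x: float  # normalized screen coordinates in [0, 1], y grows downward
    y: float

class GazeAdaptiveCaption:
    """Hypothetical gaze-following caption anchor; not the paper's method."""

    def __init__(self, smoothing: float = 0.1, y_offset: float = 0.08):
        self.smoothing = smoothing  # EMA factor: lower = steadier caption
        self.y_offset = y_offset    # place the text slightly below gaze
        self.x, self.y = 0.5, 0.9   # start at the traditional bottom position

    def update(self, gaze: GazeSample) -> tuple[float, float]:
        """Move the caption anchor a small step toward the current gaze point."""
        self.x += self.smoothing * (gaze.x - self.x)
        self.y += self.smoothing * (gaze.y + self.y_offset - self.y)
        return self.x, self.y

# Usage: feed live gaze samples each frame; render the caption at the anchor.
caption = GazeAdaptiveCaption()
for sample in [GazeSample(0.3, 0.4), GazeSample(0.32, 0.42)]:
    print(caption.update(sample))
```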
Streichert, Annalena ; Angerbauer, Katrin ; Schwarzl, Magdalena ; Sedlmair, Michael: Comparing Input Modalities for Shape Drawing Tasks. In: Proceedings of the Symposium on Eye Tracking Research & Applications – Short Papers (ETRA-SP): ACM, 2020 — ISBN 9781450371346, pp. 1–5
Abstract
With the growing interest in Immersive Analytics, there is also a need for novel and suitable input modalities for such applications. We explore eye tracking, head tracking, hand motion tracking, and data gloves as input methods for a 2D tracing task and compare them to touch input as a baseline in an exploratory user study (N=20). We compare these methods in terms of user experience, workload, accuracy, and time required for input. The results show that the input method has a significant influence on these measured variables. While touch input surpasses all other input methods in terms of user experience, workload, and accuracy, eye tracking shows promise with respect to input time. The results form a starting point for future research investigating input methods.
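The abstract does not specify how tracing accuracy was scored; one common approach is the mean distance from each drawn point to its nearest point on the target outline. The sketch below illustrates that generic metric only, with hypothetical data; it is not the paper's measure.

```python
import numpy as np

def tracing_error(drawn: np.ndarray, target: np.ndarray) -> float:
    """Mean nearest-neighbor distance (pixels) from drawn points to the
    target shape's sampled outline; lower means more accurate tracing."""
    # Pairwise distances, shape (len(drawn), len(target)).
    d = np.linalg.norm(drawn[:, None, :] - target[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

# Usage with hypothetical point sets (x, y in pixels).
target = np.array([[0, 0], [10, 0], [10, 10], [0, 10]], dtype=float)
drawn = np.array([[1.0, 0.5], [9.0, 0.2], [10.5, 9.0], [0.4, 10.2]])
print(f"mean tracing error: {tracing_error(drawn, target):.2f} px")
```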
Weiß, M. ; Angerbauer, K. ; Voit, A. ; Schwarzl, M. ; Sedlmair, M. ; Mayer, S.: Revisited: Comparison of Empirical Methods to Evaluate Visualizations Supporting Crafting and Assembly Purposes. In: IEEE Transactions on Visualization and Computer Graphics (2020), pp. 1–10
Abstract
Ubiquitous, situated, and physical visualizations create entirely new possibilities for tasks contextualized in the real world, such as doctors inserting needles. During the development of situated visualizations, evaluating visualizations is a core requirement. However, performing such evaluations is intrinsically hard as the real scenarios are safety-critical or expensive to test. To overcome these issues, researchers and practitioners adapt classical approaches from ubiquitous computing and use surrogate empirical methods such as Augmented Reality (AR), Virtual Reality (VR) prototypes, or merely online demonstrations. This approach's primary assumption is that meaningful insights can also be gained from different, usually cheaper and less cumbersome empirical methods. Nevertheless, recent efforts in the Human-Computer Interaction (HCI) community have found evidence against this assumption, which would impede the use of surrogate empirical methods. Currently, these insights rely on a single investigation of four interactive objects. The goal of this work is to investigate if these prior findings also hold for situated visualizations. Therefore, we first created a scenario where situated visualizations support users in do-it-yourself (DIY) tasks such as crafting and assembly. We then set up five empirical study methods to evaluate the four tasks using an online survey, as well as VR, AR, laboratory, and in-situ studies. Using this study design, we conducted a new study with 60 participants. Our results show that the situated visualizations we investigated in this study are not prone to the same dependency on the empirical method, as found in previous work. Our study provides the first evidence that analyzing situated visualizations through different empirical (surrogate) methods might lead to comparable results.
Yu, Xingyao ; Angerbauer, Katrin ; Mohr, Peter ; Kalkofen, Denis ; Sedlmair, Michael: Perspective Matters: Design Implications for Motion Guidance in Mixed Reality. In: Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2020