Abdelaal, Moataz ; Lhuillier, Antoine ; Hlawatsch, Marcel ; Weiskopf, Daniel: Time-Aligned Edge Plots for Dynamic Graph Visualization. In: 2020 24th International Conference Information Visualisation (IV), 2020
Achberger, Alexander ; Cutura, René ; Türksoy, Oguzhan ; Sedlmair, Michael: Caarvida: Visual Analytics for Test Drive Videos. In: Proceedings of the International Conference on Advanced Visual Interfaces, 2020, pp. 1–9
Abstract
We report on an interdisciplinary visual analytics project wherein automotive engineers analyze test drive videos. These videos are annotated with navigation-specific augmented reality (AR) content, and the engineers need to identify issues and evaluate the behavior of the underlying AR navigation system. With the increasing amount of video data, traditional analysis approaches can no longer be conducted in an acceptable timeframe. To address this issue, we collaboratively developed Caarvida, a visual analytics tool that helps engineers to accomplish their tasks faster and handle an increased number of videos. Caarvida combines automatic video analysis with interactive and visual user interfaces. We conducted two case studies which show that Caarvida successfully supports domain experts and speeds up their task completion time.
Balestrucci, Priscilla ; Angerbauer, Katrin ; Morariu, Cristina ; Welsch, Robin ; Chuang, Lewis L. ; Weiskopf, Daniel ; Ernst, Marc O. ; Sedlmair, Michael: Pipelines Bent, Pipelines Broken: Interdisciplinary Self-Reflection on the Impact of COVID-19 on Current and Future Research (Position Paper). In: 2020 IEEE Workshop on Evaluation and Beyond-Methodological Approaches to Visualization (BELIV) : IEEE, 2020, pp. 11–18
Abstract
Among the many changes brought about by the COVID-19 pandemic, one of the most pressing for scientific research concerns user testing. For the researchers who conduct studies with human participants, the requirements for social distancing have created a need for reflecting on methodologies that previously seemed relatively straightforward. It has become clear from the emerging literature on the topic and from first-hand experiences of researchers that the restrictions due to the pandemic affect every aspect of the research pipeline. The current paper offers an initial reflection on user-based research, drawing on the authors' own experiences and on the results of a survey that was conducted among researchers in different disciplines, primarily psychology, human-computer interaction (HCI), and visualization communities. While this sampling of researchers is by no means comprehensive, the multi-disciplinary approach and the consideration of different aspects of the research pipeline allow us to examine current and future challenges for user-based research. Through an exploration of these issues, this paper also invites others in the VIS community, as well as in the wider research community, to reflect on and discuss the ways in which the current crisis might also present new and previously unexplored opportunities.
Baumann, Martin ; Koch, Steffen ; John, Markus ; Ertl, Thomas: Interactive Visualization for Reflected Text Analytics. In: Reiter, N. ; Pichler, A. ; Kuhn, J. (Eds.): Reflektierte Algorithmische Textanalyse. Berlin : de Gruyter, 2020 — ISBN 9783110693973, pp. 269–296
Baumann, Martin ; Minasyan, Harutyun ; Koch, Steffen ; Kurzhals, Kuno ; Ertl, Thomas: AnnoXplorer: A Scalable, Integrated Approach for the Visual Analysis of Text Annotations. In: Proc. 15th Int. Jt. Conf. Comput. Vis., Imaging and Comput. Graph. Theory and App. - IVAPP, 2020, pp. 62–75
Bernard, Jürgen ; Hutter, Marco ; Zeppelzauer, Matthias ; Sedlmair, Michael ; Munzner, Tamara: SepEx: Visual Analysis of Class Separation Measures. In: Turkay, C. ; Vrotsou, K. (Eds.): Proceedings of the International Workshop on Visual Analytics (EuroVA) : The Eurographics Association, 2020 — ISBN 978-3-03868-116-8, pp. 1–5
Abstract
Class separation is an important concept in machine learning and visual analytics. However, the comparison of class separation for datasets with varying dimensionality is non-trivial, given a) the various possible structural characteristics of datasets and b) the plethora of separation measures that exist. Building upon recent findings in visualization research about the qualitative and quantitative evaluation of class separation for 2D dimensionally reduced data using scatterplots, this research addresses the visual analysis of class separation measures for high-dimensional data. We present SepEx, an interactive visualization approach for the assessment and comparison of class separation measures for multiple datasets. SepEx supports analysts with the comparison of multiple separation measures over many high-dimensional datasets, the effect of dimensionality reduction on measure outputs by supporting nD to 2D comparison, and the comparison of the effect of different dimensionality reduction methods on measure outputs. We demonstrate SepEx in a scenario on 100 two-class 5D datasets with a linearly increasing amount of separation between the classes, illustrating both similarities and nonlinearities across 11 measures.
Boukhelifa, N. ; Bezerianos, A. ; Chang, R. ; Collins, C. ; Drucker, S. ; Endert, A. ; Hullman, J. ; North, C. ; et al.: Challenges in Evaluating Interactive Visual Machine Learning Systems. In: IEEE Computer Graphics and Applications. Vol. 40 (2020), No. 6, pp. 88–96
Abstract
In interactive visual machine learning (IVML), humans and machine learning algorithms collaborate to achieve tasks mediated by interactive visual interfaces. This human-in-the-loop approach to machine learning brings forth not only numerous intelligibility, trust, and usability issues, but also many open questions with respect to the evaluation of the IVML system, both as separate components, and as a holistic entity that includes both human and machine intelligence. This article describes the challenges and research gaps identified in an IEEE VIS workshop on the evaluation of IVML systems.
Brich, Nicolas ; Schulz, Christoph ; Peter, Jörg ; Klingert, Wilfried ; Schenk, Martin ; Weiskopf, Daniel ; Krone, Michael: Visual Analysis of Multivariate Intensive Care Surveillance Data. In: Kozlíková, B. ; Krone, M. ; Smit, N. ; Nieselt, K. ; Raidou, R. G. (Eds.): Eurographics Workshop on Visual Computing for Biology and Medicine : The Eurographics Association, 2020 — ISBN 978-3-03868-109-0
Bruder, Valentin ; Müller, Christoph ; Frey, Steffen ; Ertl, Thomas: On Evaluating Runtime Performance of Interactive Visualizations. In: IEEE Transactions on Visualization and Computer Graphics. Vol. 26 (2020), pp. 2848–2862
Abstract
As our field matures, evaluation of visualization techniques has extended from reporting runtime performance to studying user behavior. Consequently, many methodologies and best practices for user studies have evolved. While maintaining interactivity continues to be crucial for the exploration of large data sets, no similar methodological foundation for evaluating runtime performance has been developed. Our analysis of 50 recent visualization papers on new or improved techniques for rendering volumes or particles indicates that only a very limited set of parameters like different data sets, camera paths, viewport sizes, and GPUs are investigated, which make comparison with other techniques or generalization to other parameter ranges at least questionable. To derive a deeper understanding of qualitative runtime behavior and quantitative parameter dependencies, we developed a framework for the most exhaustive performance evaluation of volume and particle visualization techniques that we are aware of, including millions of measurements on ten different GPUs. This paper reports on our insights from statistical analysis of this data discussing independent and linear parameter behavior and non-obvious effects. We give recommendations for best practices when evaluating runtime performance of scientific visualization applications, which can serve as a starting point for more elaborate models of performance quantification.
Chotisarn, Noptanit ; Merino, Leonel ; Zheng, Xu ; Lonapalawong, Supaporn ; Zhang, Tianye ; Xu, Mingliang ; Chen, Wei: A Systematic Literature Review of Modern Software Visualization. In: Journal of Visualization. Vol. 23 (2020), No. 4, pp. 539–558
Abstract
We report on the state-of-the-art of software visualization. To ensure reproducibility, we adopted the Systematic Literature Review methodology. That is, we analyzed 1440 entries from IEEE Xplore and ACM Digital Library databases. We selected 105 relevant full papers published in 2013–2019, which we classified based on the aspect of the software system that is supported (i.e., structure, behavior, and evolution). For each paper, we extracted main dimensions that characterize software visualizations, such as software engineering tasks, roles of users, information visualization techniques, and media used to display visualizations. We provide researchers in the field an overview of the state-of-the-art in software visualization and highlight research opportunities. We also help developers to identify suitable visualizations for their particular context by matching software visualizations to development concerns and concrete details to obtain available visualization tools.
Dias, Martin ; Orellana, Diego ; Vidal, Santiago ; Merino, Leonel ; Bergel, Alexandre: Evaluating a Visual Approach for Understanding JavaScript Source Code. In: Proceedings of the 28th International Conference on Program Comprehension : ACM, 2020, pp. 128–138
Abstract
To characterize the building blocks of a legacy software system (e.g., structure, dependencies), programmers usually spend a long time navigating its source code. Yet, modern integrated development environments (IDEs) do not provide appropriate means to efficiently achieve complex software comprehension tasks. To deal with this unfulfilled need, we present Hunter, a tool for the visualization of JavaScript applications. Hunter visualizes source code through a set of coordinated views that include a node-link diagram that depicts the dependencies among the components of a system, and a treemap that helps programmers to orientate when navigating its structure.
In this paper, we report on a controlled experiment that evaluates Hunter. We asked 16 participants to solve a set of software comprehension tasks, and assessed their effectiveness in terms of (i) user performance (i.e., completion time, accuracy, and attention), and (ii) user experience (i.e., emotions, usability). We found that when using Hunter programmers required significantly less time to complete various software comprehension tasks and achieved a significantly higher accuracy. We also found that the node-link diagram panel of Hunter gets most of the attention of programmers, whereas the source code panel does so in Visual Studio Code. Moreover, programmers considered that Hunter exhibits a good user experience.
Franke, Max ; John, Markus ; Knabben, Moritz ; Keck, Jana ; Blascheck, Tanja ; Koch, Steffen: LilyPads: Exploring the Spatiotemporal Dissemination of Historical Newspaper Articles. In: Proceedings of the 15th International Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 3: IVAPP : SciTePress, 2020 — ISBN 978-989-758-402-2, pp. 17–28
Frieß, F. ; Braun, M. ; Bruder, V. ; Frey, S. ; Reina, G. ; Ertl, T.: Foveated Encoding for Large High-Resolution Displays. In: IEEE Transactions on Visualization and Computer Graphics. Vol. 27 (2020), No. 2, pp. 1–10
Abstract
Collaborative exploration of scientific data sets across large high-resolution displays requires both high visual detail as well as low-latency transfer of image data (oftentimes inducing the need to trade one for the other). In this work, we present a system that dynamically adapts the encoding quality in such systems in a way that reduces the required bandwidth without impacting the details perceived by one or more observers. Humans perceive sharp, colourful details in the small foveal region around the centre of the field of view, while information in the periphery is perceived blurred and colourless. We account for this by tracking the gaze of observers, and respectively adapting the quality parameter of each macroblock used by the H.264 encoder, considering the so-called visual acuity fall-off. This allows us to substantially reduce the required bandwidth with barely noticeable changes in visual quality, which is crucial for collaborative analysis across display walls at different locations. We demonstrate the reduced overall required bandwidth and the high quality inside the foveated regions using particle rendering and parallel coordinates.
Frieß, Florian ; Müller, Christoph ; Ertl, Thomas: Real-Time High-Resolution Visualisation. In: Krüger, J. ; Niessner, M. ; Stückler, J. (Eds.): Proceedings of the Eurographics Symposium on Vision, Modeling, and Visualization (VMV) : The Eurographics Association, 2020 — ISBN 978-3-03868-123-6, pp. 127–135
Abstract
While visualisation often strives for abstraction, the interactive exploration of large scientific data sets like densely sampled 3D fields or massive particle data sets still benefits from rendering their graphical representation in large detail on high-resolution displays such as Powerwalls or tiled display walls driven by multiple GPUs or even GPU clusters. Such visualisation systems are typically rather unique in their setup of hardware and software which makes transferring a visualisation application from one high-resolution system to another one a complicated task. As more and more such visualisation systems get installed, collaboration becomes desirable in the sense of sharing such a visualisation running on one site in real time with another high-resolution display on a remote site while at the same time communicating via video and audio. Since typical video conference solutions or web-based collaboration tools often cannot deal with resolutions exceeding 4K, with stereo displays or with multi-GPU setups, we designed and implemented a new system based on state-of-the-art hardware and software technologies to transmit high-resolution visualisations including video and audio streams via the internet to remote large displays and back. Our system architecture is built on efficient capturing, encoding and transmission of pixel streams and thus supports a multitude of configurations combining audio and video streams in a generic approach.
Garcia, Rafael ; Weiskopf, Daniel: Inner-Process Visualization of Hidden States in Recurrent Neural Networks. In: Proceedings of the 13th International Symposium on Visual Information Communication and Interaction. Eindhoven, Netherlands : Association for Computing Machinery, 2020 — ISBN 9781450387507
Abstract
In this paper, we introduce a visualization technique aimed to help machine learning experts to analyze the hidden states of layers in recurrent neural networks (RNNs). Our technique allows the user to visually inspect how hidden states store and process information throughout the feeding of an input sequence into the network. It can answer questions such as which parts of the input data had a higher impact on the prediction and how the model correlates each hidden state configuration with a certain output. Our visualization comprises several components: our input visualization shows the input sequence and how it relates to the output (using color coding); hidden states are visualized by nonlinear projection to 2-D visualization space via t-SNE in order to understand the shape of the space of hidden states; time curves are employed to show the details of the evolution of hidden state configurations; and a time-multi-class heatmap matrix visualizes the evolution of expected predictions for multi-class classifiers. To demonstrate the capability of our approach, we discuss two typical use cases for long short-term memory (LSTM) models applied to two widely used natural language processing (NLP) datasets.
Goffin, Pascal ; Blascheck, Tanja ; Isenberg, Petra ; Willett, Wesley: Interaction Techniques for Visual Exploration Using Embedded Word-Scale Visualizations. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems : Association for Computing Machinery, 2020, pp. 1–13
Gralka, P. ; Wald, I. ; Geringer, S. ; Reina, G. ; Ertl, T.: Spatial Partitioning Strategies for Memory-Efficient Ray Tracing of Particles. In: 2020 IEEE 10th Symposium on Large Data Analysis and Visualization (LDAV), 2020, pp. 42–52
Heyen, Frank ; Munz, Tanja ; Neumann, Michael ; Ortega, Daniel ; Vu, Ngoc Thang ; Weiskopf, Daniel ; Sedlmair, Michael: ClaVis: An Interactive Visual Comparison System for Classifiers. In: Proceedings of the International Conference on Advanced Visual Interfaces. Salerno, Italy : Association for Computing Machinery, 2020 — ISBN 9781450375351
Abstract
We propose ClaVis, a visual analytics system for comparative analysis of classification models. ClaVis allows users to visually compare the performance and behavior of tens to hundreds of classifiers trained with different hyperparameter configurations. Our approach is plugin-based and classifier-agnostic and allows users to add their own datasets and classifier implementations. It provides multiple visualizations, including a multivariate ranking, a similarity map, a scatterplot that reveals correlations between parameters and scores, and a training history chart. We demonstrate the effectivity of our approach in multiple case studies for training classification models in the domain of natural language processing.
Hube, Natalie ; Lenz, Oliver ; Engeln, Lars ; Groh, Rainer ; Sedlmair, Michael: Comparing Methods for Mapping Facial Expressions to Enhance Immersive Collaboration with Signs of Emotion. In: IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) : IEEE, 2020, pp. 30–35
Abstract
We present a user study comparing a pre-evaluated mapping approach with a state-of-the-art direct mapping method of facial expressions for emotion judgment in an immersive setting. At its heart, the pre-evaluated approach leverages semiotics, a theory used in linguistics. In doing so, we want to compare pre-evaluation with an approach that seeks to directly map real facial expressions onto their virtual counterparts. To evaluate both approaches, we conduct a controlled lab study with 22 participants. The results show that users are significantly more accurate in judging virtual facial expressions with pre-evaluated mapping. Additionally, participants were slightly more confident when deciding on a presented emotion. We could not find any differences regarding potential Uncanny Valley effects. However, the pre-evaluated mapping shows potential to be more convenient in a conversational scenario.
Hube, Natalie ; Müller, Mathias ; Lapczyna, Esther ; Wojdziak, Jan: Mixed Reality based Collaboration for Design Processes. In: i-com. Vol. 19, De Gruyter (2020), No. 2, pp. 123–137
Hägele, David ; Abdelaal, Moataz ; Oguz, Ozgur S. ; Toussaint, Marc ; Weiskopf, Daniel: Visualization of Nonlinear Programming for Robot Motion Planning. In: Proceedings of the 13th International Symposium on Visual Information Communication and Interaction. Eindhoven, Netherlands : Association for Computing Machinery, 2020 — ISBN 9781450387507
Abstract
Nonlinear programming targets nonlinear optimization with constraints, which is a generic yet complex methodology involving humans for problem modeling and algorithms for problem solving. We address the particularly hard challenge of supporting domain experts in handling, understanding, and trouble-shooting high-dimensional optimization with a large number of constraints. Leveraging visual analytics, users are supported in exploring the computation process of nonlinear constraint optimization. Our system was designed for robot motion planning problems and developed in tight collaboration with domain experts in nonlinear programming and robotics. We report on the experiences from this design study, illustrate the usefulness for relevant example cases, and discuss the extension to visual analytics for nonlinear programming in general.
Islam, Alaul ; Bezerianos, Anastasia ; Lee, Bongshin ; Blascheck, Tanja ; Isenberg, Petra: Visualizing Information on Watch Faces: A Survey with Smartwatch Users. In: IEEE Visualization Conference (VIS) – Short Papers : IEEE Computer Society Press, 2020, pp. 156–160
Knittel, Johannes ; Koch, Steffen ; Ertl, Thomas: PyramidTags: Context-, Time- and Word Order-Aware Tag Maps to Explore Large Document Collections. In: IEEE Transactions on Visualization and Computer Graphics (2020)
Kraus, M. ; Schäfer, H. ; Meschenmoser, P. ; Schweitzer, D. ; Keim, D. A. ; Sedlmair, M. ; Fuchs, J.: A Comparative Study of Orientation Support Tools in Virtual Reality Environments with Virtual Teleportation. In: 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2020, pp. 227–238
Abstract
Movement-compensating interactions like teleportation are commonly deployed techniques in virtual reality environments. Although practical, they tend to cause disorientation while navigating. Previous studies show the effectiveness of orientation-supporting tools, such as trails, in reducing such disorientation and reveal different strengths and weaknesses of individual tools. However, to date, there is a lack of a systematic comparison of those tools when teleportation is used as a movement-compensating technique, in particular under consideration of different tasks. In this paper, we compare the effects of three orientation-supporting tools, namely minimap, trail, and heatmap. We conducted a quantitative user study with 48 participants to investigate the accuracy and efficiency when executing four exploration and search tasks. As dependent variables, task performance, completion time, space coverage, amount of revisiting, retracing time, and memorability were measured. Overall, our results indicate that orientation-supporting tools improve task completion times and revisiting behavior. The trail and heatmap tools were particularly useful for speed-focused tasks, minimal revisiting, and space coverage. The minimap increased memorability and especially supported retracing tasks. These results suggest that virtual reality systems should provide orientation aid tailored to the specific tasks of the users.
Kraus, Matthias ; Angerbauer, Katrin ; Buchmüller, Juri ; Schweitzer, Daniel ; Keim, Daniel A. ; Sedlmair, Michael ; Fuchs, Johannes: Assessing 2D and 3D Heatmaps for Comparative Analysis: An Empirical Study. In: Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020 — ISBN 9781450367080, pp. 546:1–546:14
Abstract
Heatmaps are a popular visualization technique that encode 2D density distributions using color or brightness. Experimental studies have shown though that both of these visual variables are inaccurate when reading and comparing numeric data values. A potential remedy might be to use 3D heatmaps by introducing height as a third dimension to encode the data. Encoding abstract data in 3D, however, poses many problems, too. To better understand this tradeoff, we conducted an empirical study (N=48) to evaluate the user performance of 2D and 3D heatmaps for comparative analysis tasks. We test our conditions on a conventional 2D screen, but also in a virtual reality environment to allow for real stereoscopic vision. Our main results show that 3D heatmaps are superior in terms of error rate when reading and comparing single data items. However, for overview tasks, the well-established 2D heatmap performs better.
Kumar, Ayush ; Mohanty, Debesh ; Kurzhals, Kuno ; Beck, Fabian ; Weiskopf, Daniel ; Mueller, Klaus: Demo of the EyeSAC System for Visual Synchronization, Cleaning, and Annotation of Eye Movement Data. In: ACM Symposium on Eye Tracking Research and Applications. Stuttgart, Germany : Association for Computing Machinery, 2020 — ISBN 9781450371353
Abstract
Eye movement data analysis plays an important role in examining human cognitive processes and perceptions. Such analysis at times needs data recording from additional sources too during experiments. In this paper, we study a pair programming based collaboration using two eye trackers, stimulus recording, and an external camera recording. To analyze the collected data, we introduce the EyeSAC system that synchronizes the data from different sources and that removes the noisy and missing gazes from eye tracking data with the help of visual feedback from the external recording. The synchronized and cleaned data is further annotated using our system and then exported for further analysis.
Kumar, Ayush ; Howlader, Prantik ; Garcia, Rafael ; Weiskopf, Daniel ; Mueller, Klaus: Challenges in Interpretability of Neural Networks for Eye Movement Data. In: ACM Symposium on Eye Tracking Research and Applications. Stuttgart, Germany : Association for Computing Machinery, 2020 — ISBN 9781450371346
Abstract
Many applications in eye tracking have been increasingly employing neural networks to solve machine learning tasks. In general, neural networks have achieved impressive results in many problems over the past few years, but they still suffer from the lack of interpretability due to their black-box behavior. While previous research on explainable AI has been able to provide high levels of interpretability for models in image classification and natural language processing tasks, little effort has been put into interpreting and understanding networks trained with eye movement datasets. This paper discusses the importance of developing interpretability methods specifically for these models. We characterize the main problems for interpreting neural networks with this type of data, how they differ from the problems faced in other domains, and why existing techniques are not sufficient to address all of these issues. We present preliminary experiments showing the limitations that current techniques have and how we can improve upon them. Finally, based on the evaluation of our experiments, we suggest future research directions that might lead to more interpretable and explainable neural networks for eye tracking.
Kurzhals, Kuno ; Burch, Michael ; Weiskopf, Daniel: What We See and What We Get from Visualization: Eye Tracking Beyond Gaze Distributions and Scanpaths. In: CoRR. Vol. abs/2009.14515 (2020)
Abstract
Technical progress in hardware and software enables us to record gaze data in everyday situations and over long time spans. Among a multitude of research opportunities, this technology enables visualization researchers to catch a glimpse behind performance measures and into the perceptual and cognitive processes of people using visualization techniques. The majority of eye tracking studies performed for visualization research is limited to the analysis of gaze distributions and aggregated statistics, thus only covering a small portion of insights that can be derived from gaze data. We argue that incorporating theories and methodology from psychology and cognitive science will benefit the design and evaluation of eye tracking experiments for visualization. This position paper outlines our experiences with eye tracking in visualization and states the benefits that an interdisciplinary research field on visualization psychology might bring for better understanding how people interpret visualizations.
Kurzhals, Kuno ; Göbel, Fabian ; Angerbauer, Katrin ; Sedlmair, Michael ; Raubal, Martin: A View on the Viewer: Gaze-Adaptive Captions for Videos. In: Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020 — ISBN 9781450367080, pp. 139:1–139:12
Abstract
Subtitles play a crucial role in cross-lingual distribution of multimedia content and help communicate information where auditory content is not feasible (loud environments, hearing impairments, unknown languages). Established methods utilize text at the bottom of the screen, which may distract from the video. Alternative techniques place captions closer to related content (e.g., faces) but are not applicable to arbitrary videos such as documentations. Hence, we propose to leverage live gaze as indirect input method to adapt captions to individual viewing behavior. We implemented two gaze-adaptive methods and compared them in a user study (n=54) to traditional captions and audio-only videos. The results show that viewers with less experience with captions prefer our gaze-adaptive methods as they assist them in reading. Furthermore, gaze distributions resulting from our methods are closer to natural viewing behavior compared to the traditional approach. Based on these results, we provide design implications for gaze-adaptive captions.
Kurzhals, Kuno ; Rodrigues, Nils ; Koch, Maurice ; Stoll, Michael ; Bruhn, Andres ; Bulling, Andreas ; Weiskopf, Daniel: Visual Analytics and Annotation of Pervasive Eye Tracking Video. In: Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA). Stuttgart, Germany : ACM, 2020 — ISBN 9781450371339, pp. 16:1–16:9
Abstract
We propose a new technique for visual analytics and annotation of long-term pervasive eye tracking data for which a combined analysis of gaze and egocentric video is necessary. Our approach enables two important tasks for such data for hour-long videos from individual participants: (1) efficient annotation and (2) direct interpretation of the results. Exemplary time spans can be selected by the user and are then used as a query that initiates a fuzzy search of similar time spans based on gaze and video features. In an iterative refinement loop, the query interface then provides suggestions for the importance of individual features to improve the search results. A multi-layered timeline visualization shows an overview of annotated time spans. We demonstrate the efficiency of our approach for analyzing activities in about seven hours of video in a case study and discuss feedback on our approach from novices and experts performing the annotation task.
Marky, Karola ; Voit, Alexandra ; Stöver, Alina ; Kunze, Kai ; Schröder, Svenja ; Mühlhäuser, Max: “I Don’t Know How to Protect Myself”: Understanding Privacy Perceptions Resulting from the Presence of Bystanders in Smart Environments. In: Proceedings of the 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society. Tallinn, Estonia : Association for Computing Machinery, 2020 — ISBN 9781450375795
Abstract
IoT devices no longer affect single users only because others like visitors or family members - denoted as bystanders - might be in the device’s vicinity. Thus, data about bystanders can be collected by IoT devices and bystanders can observe what IoT devices output. To better understand how this affects the privacy of IoT device owners and bystanders and how their privacy can be protected better, we interviewed 42 young adults. Our results include that owners of IoT devices wish to adjust the device output when visitors are present. Visitors wish to be made aware of the data collected about them, to express their privacy needs, and to take measures. Based on our results, we show demand for scalable solutions that address the tension that arises between the increasing discreetness of IoT devices, their increase in numbers and the requirement to preserve the self-determination of owners and bystanders at the same time.
Men, H. ; Hosu, V. ; Lin, H. ; Bruhn, A. ; Saupe, D.: Visual Quality Assessment for Interpolated Slow-Motion Videos Based on a Novel Database. In: Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2020, pp. 1–6
Abstract
Professional video editing tools can generate slow-motion video by interpolating frames from video recorded at a standard frame rate. Thereby the perceptual quality of such interpolated slow-motion videos strongly depends on the underlying interpolation techniques. We built a novel benchmark database that is specifically tailored for interpolated slow-motion videos (KoSMo-1k). It consists of 1,350 interpolated video sequences, from 30 different content sources, along with their subjective quality ratings from up to ten subjective comparisons per video pair. Moreover, we evaluated the performance of twelve existing full-reference (FR) image/video quality assessment (I/VQA) methods on the benchmark. In this way, we are able to show that specifically tailored quality assessment methods for interpolated slow-motion videos are needed, since the evaluated methods – despite their good performance on real-time video databases – do not give satisfying results when it comes to frame interpolation.
Men, Hui ; Hosu, Vlad ; Lin, Hanhe ; Bruhn, Andrés ; Saupe, Dietmar: Subjective annotation for a frame interpolation benchmark using artefact amplification. In: Quality and User Experience. Vol. 5 (2020), No. 1. — Article Number: 8
Abstract
Current benchmarks for optical flow algorithms evaluate the estimation either directly by comparing the predicted flow fields with the ground truth or indirectly by using the predicted flow fields for frame interpolation and then comparing the interpolated frames with the actual frames. In the latter case, objective quality measures such as the mean squared error are typically employed. However, it is well known that for image quality assessment, the actual quality experienced by the user cannot be fully deduced from such simple measures. Hence, we conducted a subjective quality assessment crowdsourcing study for the interpolated frames provided by one of the optical flow benchmarks, the Middlebury benchmark. It contains interpolated frames from 155 methods applied to each of 8 contents. For this purpose, we collected forced-choice paired comparisons between interpolated images and corresponding ground truth. To increase the sensitivity of observers when judging minute differences in paired comparisons we introduced a new method to the field of full-reference quality assessment, called artefact amplification. From the crowdsourcing data (3720 comparisons of 20 votes each) we reconstructed absolute quality scale values according to Thurstone’s model. As a result, we obtained a re-ranking of the 155 participating algorithms w.r.t. the visual quality of the interpolated frames. This re-ranking not only shows the necessity of visual quality assessment as another evaluation metric for optical flow and frame interpolation benchmarks, the results also provide the ground truth for designing novel image quality assessment (IQA) methods dedicated to perceptual quality of interpolated images. As a first step, we proposed such a new full-reference method, called WAE-IQA, which weights the local differences between an interpolated image and its ground truth.
Merino, L. ; Lungu, M. ; Seidl, C.: Unleashing the Potentials of Immersive Augmented Reality for Software Engineering. In: 2020 IEEE 27th International Conference on Software Analysis, Evolution and Reengineering (SANER), 2020, pp. 517–521
Abstract
In immersive augmented reality (IAR), users can wear a head-mounted display to see computer-generated images superimposed to their view of the world. IAR was shown to be beneficial across several domains, e.g., automotive, medicine, gaming and engineering, with positive impacts on, e.g., collaboration and communication. We think that IAR bears a great potential for software engineering but, as of yet, this research area has been neglected. In this vision paper, we elicit potentials and obstacles for the use of IAR in software engineering. We identify possible areas that can be supported with IAR technology by relating commonly discussed IAR improvements to typical software engineering tasks. We further demonstrate how innovative use of IAR technology may fundamentally improve typical activities of a software engineer through a comprehensive series of usage scenarios outlining practical application. Finally, we reflect on current limitations of IAR technology based on our scenarios and sketch research activities necessary to make our vision a reality. We consider this paper to be relevant to academia and industry alike in guiding the steps to innovative research and applications for IAR in software engineering.
Merino, Leonel ; Sotomayor-Gómez, Boris ; Yu, Xingyao ; Salgado, Ronie ; Bergel, Alexandre ; Sedlmair, Michael ; Weiskopf, Daniel: Toward Agile Situated Visualization: An Exploratory User Study. In: Proceedings of the CHI Conference on Human Factors in Computing Systems - Extended Abstracts (CHI-EA), 2020 — ISBN 9781450368193, pp. LBW087:1–LBW087:7
Abstract
We introduce AVAR, a prototypical implementation of an agile situated visualization (SV) toolkit targeting liveness, integration, and expressiveness. We report on results of an exploratory study with AVAR and seven expert users. In it, participants wore a Microsoft HoloLens device and used a Bluetooth keyboard to program a visualization script for a given dataset. To support our analysis, we (i) video recorded sessions, (ii) tracked users' interactions, and (iii) collected data of participants' impressions. Our prototype confirms that agile SV is feasible. That is, liveness boosted participants' engagement when programming an SV, and so, the sessions were highly interactive and participants were willing to spend much time using our toolkit (i.e., median ≥ 1.5 hours). Participants used our integrated toolkit to deal with data transformations, visual mappings, and view transformations without leaving the immersive environment. Finally, participants benefited from our expressive toolkit and employed multiple of the available features when programming an SV.
Merino, Leonel ; Schwarzl, Magdalena ; Kraus, Matthias ; Sedlmair, Michael ; Schmalstieg, Dieter ; Weiskopf, Daniel: Evaluating Mixed and Augmented Reality: A Systematic Literature Review (2009–2019). In: IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2020
Abstract
We present a systematic review of 458 papers that report on evaluations in mixed and augmented reality (MR/AR) published in ISMAR, CHI, IEEE VR, and UIST over a span of 11 years (2009–2019). Our goal is to provide guidance for future evaluations of MR/AR approaches. To this end, we characterize publications by paper type (e.g., technique, design study), research topic (e.g., tracking, rendering), evaluation scenario (e.g., algorithm performance, user performance), cognitive aspects (e.g., perception, emotion), and the context in which evaluations were conducted (e.g., lab vs. in-the-wild). We found a strong coupling of types, topics, and scenarios. We observe two groups: (a) technology-centric performance evaluations of algorithms that focus on improving tracking, displays, reconstruction, rendering, and calibration, and (b) human-centric studies that analyze implications of applications and design, human factors on perception, usability, decision making, emotion, and attention. Amongst the 458 papers, we identified 248 user studies that involved 5,761 participants in total, of whom only 1,619 were identified as female. We identified 43 data collection methods used to analyze 10 cognitive aspects. We found nine objective methods, and eight methods that support qualitative analysis. A majority (216/248) of user studies are conducted in a laboratory setting. Often (138/248), such studies involve participants in a static way. However, we also found a fair number (30/248) of in-the-wild studies that involve participants in a mobile fashion. We consider this paper to be relevant to academia and industry alike in presenting the state-of-the-art and guiding the steps to designing, conducting, and analyzing results of evaluations in MR/AR.
Munz, Tanja ; Schäfer, Noel ; Blascheck, Tanja ; Kurzhals, Kuno ; Zhang, Eugene ; Weiskopf, Daniel: Comparative Visual Gaze Analysis for Virtual Board Games. In: The 13th International Symposium on Visual Information Communication and Interaction (VINCI 2020), 2020
Munz, Tanja ; Schaefer, Noel ; Blascheck, Tanja ; Kurzhals, Kuno ; Zhang, Eugene ; Weiskopf, Daniel: Demo of a Visual Gaze Analysis System for Virtual Board Games. In: ACM Symposium on Eye Tracking Research and Applications. Stuttgart, Germany : Association for Computing Machinery, 2020
Obaidellah, Unaizah ; Blascheck, Tanja ; Guarnera, Drew ; Maletic, Jonathan: A Fine-grained Assessment on Novice Programmers’ Gaze Patterns on Pseudocode Problems. In: ACM Symposium on Eye Tracking Research and Applications : Association for Computing Machinery, 2020
Okanovic, Dusan ; Beck, Samuel ; Merz, Lasse ; Zorn, Christoph ; Merino, Leonel ; van Hoorn, Andre ; Beck, Fabian: Can a Chatbot Support Software Engineers with Load Testing? Approach and Experiences. In: Proceedings of the ACM/SPEC International Conference on Performance Engineering (ICPE), 2020, pp. 120–129
Abstract
Even though load testing is an established technique to assess load-related quality properties of software systems, it is applied only seldom and with questionable results. Indeed, configuring, executing, and interpreting results of a load test require high effort and expertise. Since chatbots have shown promising results for interactively supporting complex tasks in various domains (including software engineering), we hypothesize that chatbots can provide developers suitable support for load testing. In this paper, we present PerformoBot, our chatbot for configuring and running load tests. In a natural language conversation, PerformoBot guides developers through the process of properly specifying the parameters of a load test, which is then automatically executed by PerformoBot using a state-of-the-art load testing tool. After the execution, PerformoBot provides developers a report that answers the respective concern. We report on results of a user study that involved 47 participants, in which we assessed our tool's acceptance and effectiveness. We found that participants in the study, particularly those with a lower level of expertise in performance engineering, had a mostly positive view of PerformoBot.
Pathmanathan, Nelusa ; Becher, Michael ; Rodrigues, Nils ; Reina, Guido ; Ertl, Thomas ; Weiskopf, Daniel ; Sedlmair, Michael: Eye vs. Head: Comparing Gaze Methods for Interaction in Augmented Reality. In: Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA). Stuttgart, Germany : ACM, 2020 — ISBN 9781450371346, pp. 50:1–50:5
Abstract
Visualization in virtual 3D environments can provide a natural way for users to explore data. Often, arm and short head movements are required for interaction in augmented reality, which can be tiring and strenuous though. In an effort toward more user-friendly interaction, we developed a prototype that allows users to manipulate virtual objects using a combination of eye gaze and an external clicker device. Using this prototype, we performed a user study comparing four different input methods of which head gaze plus clicker was preferred by most participants.
Patkar, Nitish ; Merino, Leonel ; Nierstrasz, Oscar: Towards Requirements Engineering with Immersive Augmented Reality. In: Conference Companion of the 4th International Conference on Art, Science, and Engineering of Programming. Porto, Portugal : ACM, 2020 — ISBN 9781450375078, pp. 55–60
Abstract
Often, requirements engineering (RE) activities demand project stakeholders to communicate and collaborate with each other towards building a common software product vision. We conjecture that augmented reality (AR) can be a good fit to support such communication and collaboration. In this vision paper, we report on state-of-the-art research at the intersection of AR and RE. We found that requirements elicitation and analysis have been supported by the ability of AR to provision on-the-fly information such as augmented prototypes. We discuss and map the existing challenges in RE to the aspects of AR that can boost the productivity and user experience of existing RE techniques. Finally, we elaborate on various envisioned usage scenarios in which we highlight concrete benefits and challenges of adopting immersive AR to assist project stakeholders in RE activities.
Pflüger, Hermann: Computer Vision and Art History (2020)
Reina, Guido ; Childs, Hank ; Matković, Kresimir ; Bühler, Katja ; Waldner, Manuela ; Pugmire, David ; Kozlíková, Barbora ; Ropinski, Timo ; et al.: The moving target of visualization software for an increasingly complex world. In: Computers & Graphics. Vol. 87, Elsevier BV (2020), pp. 12–29
Rodrigues, Nils ; Schulz, Christoph ; Lhuillier, Antoine ; Weiskopf, Daniel: Cluster-Flow Parallel Coordinates: Tracing Clusters Across Subspaces. In: Proceedings of the Graphics Interface Conference (GI) (forthcoming), 2020, pp. 0:1–0:11
Abstract
We present a novel variant of parallel coordinates plots (PCPs) in which we show clusters in 2D subspaces of multivariate data and emphasize flow between them. We achieve this by duplicating and stacking individual axes vertically. On a high level, our cluster-flow layout shows how data points move from one cluster to another in different subspaces. We achieve cluster-based bundling and limit plot growth through the reduction of available vertical space for each duplicated axis. Although we introduce space between clusters, we preserve the readability of intra-cluster correlations by starting and ending with the original slopes from regular PCPs and drawing Hermite spline segments in between. Moreover, our rendering technique enables the visualization of small and large data sets alike. Cluster-flow PCPs can even propagate the uncertainty inherent to fuzzy clustering through the layout and rendering stages of our pipeline. Our layout algorithm is based on A*. It achieves an optimal result with regard to a novel set of cost functions that allow us to arrange axes horizontally (dimension ordering) and vertically (cluster ordering).
Schatz, Karsten ; Frieß, Florian ; Schäfer, Marco ; Ertl, Thomas ; Krone, Michael: Analyzing Protein Similarity by Clustering Molecular Surface Maps. In: Eurographics Workshop on Visual Computing for Biology and Medicine, 2020, pp. 103–114
Sondag, Max ; Meulemans, Wouter ; Schulz, Christoph ; Verbeek, Kevin ; Weiskopf, Daniel ; Speckmann, Bettina: Uncertainty Treemaps. In: Proceedings of the IEEE Pacific Visualization Symposium (PacificVis), 2020, pp. 111–120
Abstract
Rectangular treemaps visualize hierarchical numerical data by recursively partitioning an input rectangle into smaller rectangles whose areas match the data. Numerical data often has uncertainty associated with it. To visualize uncertainty in a rectangular treemap, we identify two conflicting key requirements: (i) to assess the data value of a node in the hierarchy, the area of its rectangle should directly match its data value, and (ii) to facilitate comparison between data and uncertainty, uncertainty should be encoded using the same visual variable as the data, that is, area. We present Uncertainty Treemaps, which meet both requirements simultaneously by introducing the concept of hierarchical uncertainty masks. First, we define a new cost function that measures the quality of Uncertainty Treemaps. Then, we show how to adapt existing treemapping algorithms to support uncertainty masks. Finally, we demonstrate the usefulness and quality of our technique through an expert review and a computational experiment on real-world datasets.
Straub, Alexander ; Ertl, Thomas: Visualization Techniques for Droplet Interfaces and Multiphase Flow. In: Lamanna, G. ; Tonini, S. ; Cossali, G. E. ; Weigand, B. (Eds.): Droplet Interactions and Spray Processes. Vol. 121 : Springer International Publishing, 2020 — ISBN 978-3-030-33338-6, pp. 203–214
Abstract
The analysis of large multiphase flow simulation data poses an interesting and complex research question, which can be addressed with interactive visualization techniques, as well as semi-automated analysis processes. In this project, the focus lies on the investigation of forces governing droplet evolution. Therefore, our proposed methods visualize and allow the analysis of droplet deformation and breakup, droplet behavior and evolution, and droplet-internal flow. By deriving quantities for interface stretching and bending, we visualize and analyze the influence of surface tension force on breakup dynamics, and forces induced by Marangoni convection. Using machine learning to train a simple model for the prediction of physical droplet properties, we provide a visual analysis framework that can be used to analyze large simulation data. Computing droplet-local velocity fields where every droplet is observed separately in its own frame of reference, we create local, interpretable visualizations of flow within droplets, allowing for the investigation of the influence of flow dynamics on droplet evolution.
Streichert, Annalena ; Angerbauer, Katrin ; Schwarzl, Magdalena ; Sedlmair, Michael: Comparing Input Modalities for Shape Drawing Tasks. In: Proceedings of the Symposium on Eye Tracking Research & Applications – Short Papers (ETRA-SP) : ACM, 2020 — ISBN 9781450371346, pp. 1–5
Abstract
With the growing interest in Immersive Analytics, there is also a need for novel and suitable input modalities for such applications. We explore eye tracking, head tracking, hand motion tracking, and data gloves as input methods for a 2D tracing task and compare them to touch input as a baseline in an exploratory user study (N=20). We compare these methods in terms of user experience, workload, accuracy, and time required for input. The results show that the input method has a significant influence on these measured variables. While touch input surpasses all other input methods in terms of user experience, workload, and accuracy, eye tracking shows promise in respect of the input time. The results form a starting point for future research investigating input methods.
Tang, Tan ; Li, Renzhong ; Wu, Xinke ; Liu, Shuhan ; Knittel, Johannes ; Koch, Steffen ; Yu, Lingyun ; Ren, Peiran ; et al.: PlotThread: Creating Expressive Storyline Visualizations using Reinforcement Learning. In: IEEE Transactions on Visualization and Computer Graphics. Vol. 27, IEEE (2020), No. 2, pp. 294–303
Voit, Alexandra ; Niess, Jasmin ; Eckerth, Caroline ; Ernst, Maike ; Weingärtner, Henrike ; Woźniak, Paweł W.: ‘It’s Not a Romantic Relationship’: Stories of Adoption and Abandonment of Smart Speakers at Home. In: 19th International Conference on Mobile and Ubiquitous Multimedia. Essen, Germany : Association for Computing Machinery, 2020 — ISBN 9781450388702, pp. 71–82
Abstract
Smart speakers become increasingly ubiquitous in our homes. Consequently, we need to study how smart speakers affect the members of a household. Understanding the adoption of a smart speaker can assure it does not negatively influence the social dynamics within a household and create opportunities for further assistance. We deployed an Amazon Echo dot in nine households with 20 inhabitants who were new smart speaker users. We conducted multiple interviews, inquiring how a smart speaker was integrated into a household from day one. We investigated the development of social rules around using the device and how the smart speaker was appropriated. Users developed different strategies of using the device which altered social behaviours in some households. Further, we identified barriers and unmet requirements in introducing smart speakers to home environments. Our work contributes to an understanding of ubiquitous assistance for user groups at home.
Voit, Alexandra ; Weber, Dominik ; Abdelrahman, Yomna ; Salm, Marie ; Woźniak, Paweł W. ; Wolf, Katrin ; Schneegass, Stefan ; Henze, Niels: Exploring Non-Urgent Smart Home Notifications Using a Smart Plant System. In: 19th International Conference on Mobile and Ubiquitous Multimedia. Essen, Germany : Association for Computing Machinery, 2020 — ISBN 9781450388702, pp. 47–58
Abstract
With the rise of the Internet of Things, home appliances become connected and they can proactively provide status information to users. Facing a steadily increasing number of notification sources, it is unclear how information from smart home devices should be provided without overloading the users’ attention. In this paper, we investigate the design of non-urgent smart home notifications using a smart plant system. Based on feedback from focus groups, we designed four notification types and compared them in an eight-week in-situ study. We show that notifications displayed on smart home devices are preferred to those received on smartphones. Event-based notifications are unobtrusive, actionable and are preferred to persistent notifications. We derive guidelines that address the need of being in control, opportune locations for notification delivery at opportune moments, notification blindness, the importance of discretizing continuous information, and combining related notifications.
Weiskopf, Daniel: Vis4Vis: Visualization for (Empirical) Visualization Research. In: Chen, M. ; Hauser, H. ; Rheingans, P. ; Scheuermann, G. (Hrsg.) ; Chen, M. ; Hauser, H. ; Rheingans, P. ; Scheuermann, G. (Hrsg.): Foundations of Data Visualization, Foundations of Data Visualization : Springer International Publishing, 2020 — ISBN 978-3-030-34444-3, S. 209--224
Abstract
Appropriate evaluation is a key component in visualization research. It is typically based on empirical studies that assess visualization components or complete systems. While such studies often include the user of the visualization, empirical research is not necessarily restricted to user studies but may also address the technical performance of a visualization system such as its computational speed or memory consumption. Any such empirical experiment faces the issue that the underlying visualization is becoming increasingly sophisticated, making evaluation in complex environments increasingly difficult. Therefore, many of the established methods of empirical studies can no longer capture the full complexity of the evaluation. One promising solution is the use of data-rich observations that we can acquire during studies to obtain more reliable interpretations of empirical research. For example, we have been witnessing an increasing availability and use of physiological sensor information from eye tracking, electrodermal activity sensors, electroencephalography, etc. Other examples are various kinds of logs of user activities such as mouse, keyboard, or touch interaction. Such data-rich empirical studies promise to be especially useful for studies in the wild and similar scenarios outside of the controlled laboratory environment. However, with the growing availability of large, complex, time-dependent, heterogeneous, and unstructured observational data, we are facing the new challenge of how we can analyze such data. This challenge can be addressed by establishing the subfield of visualization for visualization (Vis4Vis): visualization as a means of analyzing and communicating data from empirical studies to advance visualization research.BibTeX
Weiß, M. ; Angerbauer, K. ; Voit, A. ; Schwarzl, M. ; Sedlmair, M. ; Mayer, S.: Revisited: Comparison of Empirical Methods to Evaluate Visualizations Supporting Crafting and Assembly Purposes. In: IEEE Transactions on Visualization and Computer Graphics, IEEE Transactions on Visualization and Computer Graphics. (2020), S. 1–10
Abstract
Ubiquitous, situated, and physical visualizations create entirely new possibilities for tasks contextualized in the real world, such as doctors inserting needles. During the development of situated visualizations, evaluating visualizations is a core requirement. However, performing such evaluations is intrinsically hard as the real scenarios are safety-critical or expensive to test. To overcome these issues, researchers and practitioners adapt classical approaches from ubiquitous computing and use surrogate empirical methods such as Augmented Reality (AR), Virtual Reality (VR) prototypes, or merely online demonstrations. This approach’s primary assumption is that meaningful insights can also be gained from different, usually cheaper and less cumbersome empirical methods. Nevertheless, recent efforts in the Human-Computer Interaction (HCI) community have found evidence against this assumption, which would impede the use of surrogate empirical methods. Currently, these insights rely on a single investigation of four interactive objects. The goal of this work is to investigate if these prior findings also hold for situated visualizations. Therefore, we first created a scenario where situated visualizations support users in do-it-yourself (DIY) tasks such as crafting and assembly. We then set up five empirical study methods to evaluate the four tasks using an online survey, as well as VR, AR, laboratory, and in-situ studies. Using this study design, we conducted a new study with 60 participants. Our results show that the situated visualizations we investigated in this study are not prone to the same dependency on the empirical method as found in previous work. Our study provides the first evidence that analyzing situated visualizations through different empirical (surrogate) methods might lead to comparable results.BibTeX
Yang, Chia-Kai ; Blascheck, Tanja ; Wacharamanotham, Chat: A Comparison of a Transition-Based and a Sequence-Based Analysis of AOI Transition Sequences. In: ACM Symposium on Eye Tracking Research and Applications, ACM Symposium on Eye Tracking Research and Applications. Stuttgart, Germany : Association for Computing Machinery, 2020
BibTeX
Yu, Xingyao ; Angerbauer, Katrin ; Mohr, Peter ; Kalkofen, Denis ; Sedlmair, Michael: Perspective Matters: Design Implications for Motion Guidance in Mixed Reality. In: Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2020
BibTeX
Zhou, Liang ; Rivinius, Marc ; Johnson, Chris R. ; Weiskopf, Daniel: Photographic High-Dynamic-Range Scalar Visualization. In: IEEE Transactions on Visualization and Computer Graphics, IEEE Transactions on Visualization and Computer Graphics. Bd. 26 (2020), Nr. 6, S. 2156–2167
Abstract
We propose a photographic method to show scalar values of high dynamic range (HDR) by color mapping for 2D visualization. We combine (1) tone-mapping operators that transform the data to the display range of the monitor while preserving perceptually important features, based on a systematic evaluation, and (2) simulated glares that highlight high-value regions. Simulated glares are effective for highlighting small areas (of a few pixels) that may not be visible with conventional visualizations; through a controlled perception study, we confirm that glare is preattentive. The usefulness of our overall photographic HDR visualization is validated through the feedback of expert users.BibTeX
Öney, Seyda ; Rodrigues, Nils ; Becher, Michael ; Reina, Guido ; Ertl, Thomas ; Sedlmair, Michael ; Weiskopf, Daniel: Evaluation of Gaze Depth Estimation from Eye Tracking in Augmented Reality. In: Proceedings of the Symposium on Eye Tracking Research & Applications-Short Paper (ETRA-SP), Proceedings of the Symposium on Eye Tracking Research & Applications-Short Paper (ETRA-SP) : ACM, 2020, S. 49:1-49:5
Abstract
Gaze tracking in 3D has the potential to improve interaction with objects and visualizations in augmented reality. However, previous research showed that subjective perception of distance varies between real and virtual surroundings. We wanted to determine whether objectively measured 3D gaze depth through eye tracking also exhibits differences between entirely real and augmented environments. To this end, we conducted an experiment (N = 25) in which we used Microsoft HoloLens with a binocular eye tracking add-on from Pupil Labs. Participants performed a task that required them to look at stationary real and virtual objects while wearing a HoloLens device. We were not able to find significant differences in the gaze depth measured by eye tracking. Finally, we discuss our findings and their implications for gaze interaction in immersive analytics, and the quality of the collected gaze data.BibTeX