Big data – especially from simulations – comes with a huge amount of information that needs to be analyzed. For complex data in particular, parts of the analysis are performed by multiple experts, often by employing visualization. Collaborative interactive work with visual analysis tools is therefore important, both on standard desktop displays and on large screens. Furthermore, analyzing big data can be a long, time-consuming process that may span several analysis sessions. Storing and visualizing provenance and workflow information helps users resume a previous visualization and analysis session and facilitates the setup of scientific workflows. In this way, visual analysis becomes scalable across users and display devices.
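To make the provenance idea concrete, the following minimal sketch shows one way session provenance could be recorded and replayed; all class, field, and parameter names here (e.g. `AnalysisSession`, `record`, the `set_isovalue` action) are hypothetical illustrations, not the actual system described above:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceStep:
    # One recorded action in an analysis session (hypothetical schema).
    action: str
    parameters: dict
    timestamp: str

@dataclass
class AnalysisSession:
    # Minimal provenance log: an append-only list of steps that a
    # later session can replay to restore the visualization state.
    user: str
    steps: list = field(default_factory=list)

    def record(self, action, **parameters):
        self.steps.append(ProvenanceStep(
            action=action,
            parameters=parameters,
            timestamp=datetime.now(timezone.utc).isoformat(),
        ))

    def replay(self, apply):
        # Re-apply all recorded steps, e.g. when resuming a session.
        for step in self.steps:
            apply(step.action, step.parameters)

# Example: record two steps, then replay them into a fresh state.
session = AnalysisSession(user="analyst_1")
session.record("load_dataset", path="run_042.h5")
session.record("set_isovalue", value=0.73)

state = {}
session.replay(lambda action, params: state.update({action: params}))
print(state["set_isovalue"])  # {'value': 0.73}
```

An append-only step log like this is the simplest provenance representation; a real system would also persist it to disk and attach it to the visualization state it describes.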
Even though the direct visualization of big data is already a computer-science challenge, it is not sufficient to provide insight into data with large or complex information content. To achieve data and cognitive scalability, it is also important to reduce the visual content by computer-based analysis methods and adapt it to user needs. To this end, feature extraction methods will be developed, and methods from machine learning, statistics, and data mining will be adopted.
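As one hedged example of such a reduction method (a standard technique, not necessarily the one developed here), principal component analysis can project high-dimensional samples onto a few directions of maximal variance before they are visualized:

```python
import numpy as np

def pca_reduce(data, n_components=2):
    # Project samples onto their top principal components -- a common
    # way to reduce visual content before plotting (illustrative sketch,
    # not the project's actual pipeline).
    centered = data - data.mean(axis=0)
    # SVD of the centered data yields the principal directions in vt,
    # ordered by decreasing explained variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

rng = np.random.default_rng(0)
# Synthetic stand-in for simulation output: 500 samples, 10 dimensions.
samples = rng.normal(size=(500, 10))
reduced = pca_reduce(samples, n_components=2)
print(reduced.shape)  # (500, 2)
```

The reduced two-dimensional embedding can then be shown directly, while the full ten-dimensional data stays available for drill-down.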
An overarching theme is the incorporation of uncertainty to make the analysis reliable. Therefore, uncertainty will be modeled and included throughout the analysis process. Finally, in the context of interactive visual analysis, big data requires us to exploit the computational power of modern and future HPC architectures with efficient parallel algorithms for visualization and computer-based analysis methods alike.
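The flavor of such a data-parallel analysis with an uncertainty estimate can be sketched as follows; this is a toy illustration using Python threads, whereas a real HPC implementation would use MPI ranks or GPU kernels, and all function names are hypothetical:

```python
import statistics
from concurrent.futures import ThreadPoolExecutor

def chunk_summary(chunk):
    # Per-chunk reduction: count, mean, and population variance --
    # the building blocks of a simple uncertainty estimate.
    mean = statistics.fmean(chunk)
    return len(chunk), mean, statistics.pvariance(chunk, mu=mean)

def parallel_mean_with_uncertainty(data, n_chunks=4):
    # Split the data, reduce each chunk in parallel, then merge the
    # partial results into a global mean and population variance.
    chunks = [data[i::n_chunks] for i in range(n_chunks)]
    with ThreadPoolExecutor(max_workers=n_chunks) as pool:
        parts = list(pool.map(chunk_summary, chunks))
    total = sum(n for n, _, _ in parts)
    mean = sum(n * m for n, m, _ in parts) / total
    # Law of total variance: within-chunk plus between-chunk terms.
    var = sum(n * (v + (m - mean) ** 2) for n, m, v in parts) / total
    return mean, var

data = [float(i) for i in range(1, 101)]  # toy stand-in for a big field
mean, var = parallel_mean_with_uncertainty(data)
print(mean)  # 50.5
```

The merged variance is exact here because the per-chunk summaries compose via the law of total variance; the same pattern scales to distributed reductions where each node only sees its own partition of the data.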