Expanding the Horizons of Situated Visualization: The Extended SV Model
Abstract
1. Introduction
2. Situated Visualization
2.1. Fundamental Notions
2.2. SV Characterization
2.2.1. Space Perspective
Physically and Perceptually SV
Embedded Visualization
2.2.2. Time Perspective
2.2.3. Place, Activity, Community Perspectives
3. Expanding the SV Model
3.1. Enhancing the SV Model
3.2. Systematic Analysis
3.2.1. Space Perspective
- A visualization is physically situated in space if at least one of its physical sub-presentations is physically close and aligned to its corresponding data’s physical sub-referent (i.e., the matching physical sub-presentation and its corresponding physical sub-referent share the same space and are seen at the same time);
- A visualization is perceptually situated in space if at least one of its percepts (physical or virtual sub-presentations) appears to be close to the percept of its matching and aligned data’s physical sub-referent (i.e., the matching sub-presentation and its corresponding physical sub-referent are seen at the same time);
- A visualization is embedded if at least one of its physical sub-presentations is deeply integrated with and aligned to its corresponding data’s physical sub-referent (i.e., the matching pairs share the same space and are seen at the same time);
- A visualization is remote spatially situated if at least one of its physical sub-referents cannot be seen from the user’s current location, but its data are seen aligned with the corresponding physical sub-presentation (i.e., the corresponding pairs do not share the same space but are seen at the same time). A toy formalization of these spatial predicates is sketched below.
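To make the spatial predicates above concrete, the following minimal Python sketch encodes them as boolean tests. It is an illustration only: the `Element` type, the Euclidean proximity thresholds, and the visibility flag are assumptions introduced here, not part of the model, and alignment is abstracted away.

```python
from dataclasses import dataclass
import math

@dataclass
class Element:
    """A physical sub-presentation or a physical sub-referent."""
    position: tuple        # (x, y, z) in world coordinates
    visible_now: bool      # seen from the user's current location

def distance(a: Element, b: Element) -> float:
    return math.dist(a.position, b.position)

def physically_situated(pres: Element, ref: Element, near: float = 2.0) -> bool:
    # Close and seen at the same time in the same space
    # (alignment is assumed and not modeled here).
    return pres.visible_now and ref.visible_now and distance(pres, ref) <= near

def embedded(pres: Element, ref: Element, tight: float = 0.1) -> bool:
    # Embedding as the limit case of physical situatedness:
    # deep integration means a much tighter proximity bound.
    return physically_situated(pres, ref, near=tight)

def remote_spatially_situated(pres: Element, ref: Element) -> bool:
    # The referent is out of sight, yet its aligned presentation is visible.
    return pres.visible_now and not ref.visible_now
```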
3.2.2. Time Perspective
- A temporal referent is any meaningful period of time, social temporality, or moment to which the data refers;
- A visualization is temporally situated if at least one of its data’s temporal sub-referents is close to the period of time, social temporality, or moment at which its corresponding and aligned physical sub-presentation is observed or recorded;
- A visualization is asynchronously situated if at least one of its temporal sub-referents lies outside the user’s current time, but its data are seen aligned with the corresponding physical sub-presentation (i.e., the corresponding pairs do not share the user’s current time but are seen at the same time). A companion temporal sketch follows this list.
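The same toy formalization extends to the time perspective. Here timestamps are plain numbers (e.g., POSIX seconds) and the one-hour tolerance window is an illustrative assumption.

```python
def temporally_situated(referent_time: float, observed_time: float,
                        window: float = 3600.0) -> bool:
    # Situated if the presentation is observed close to its temporal
    # referent (within an illustrative one-hour window).
    return abs(observed_time - referent_time) <= window

def asynchronously_situated(referent_time: float, observed_time: float,
                            window: float = 3600.0) -> bool:
    # The temporal referent lies outside the user's current time,
    # yet the aligned presentation is visible now.
    return not temporally_situated(referent_time, observed_time, window)
```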
3.2.3. Place Perspective
- A local referent is any meaningful characteristic, or set of characteristics, of the place to which the data refers;
- A visualization is locally situated if at least one of its physical sub-presentations provides information that closely embodies the identity, history, or socio-cultural meaning of its corresponding and aligned data’s local sub-referent.
3.2.4. Activity Perspective
- An activity referent is the meaningful activity to which the data refers;
- A visualization is situated regarding the activity if at least one of its physical sub-presentations provides information that is closely related to its matching and aligned data’s activity sub-referent;
- A visualization is situated regarding the activity’s role if at least one of its physical sub-presentations provides information about the part played by a participant in the activity that is closely related to its matching and aligned data’s activity sub-referent.
3.2.5. Community Perspective
- A communal referent is the meaningful person or group of persons associated with a space, a time, a place, an activity, or a content to which the data refers;
- A visualization is communally situated if at least one of its physical sub-presentations provides data that are closely related to its corresponding and aligned data’s communal sub-referents;
- A visualization is multidisciplinary situated if at least one of its physical sub-presentations provides information that is readily understood by its corresponding data’s communal sub-referents.
3.2.6. Content Perspective
- A content referent is any meaningful input/output information to which the data refers;
- A visualization is situated regarding the content if at least one of its physical sub-presentations provides data that are closely related to its corresponding and aligned data’s content sub-referents;
- A visualization is comprehensively situated if at least one of its physical sub-presentations provides correct, complete, and organised information that is closely related to its corresponding and aligned data’s content sub-referent;
- A visualization is interactively situated if at least one of its physical sub-presentations provides the data needed for a readily understandable interaction with its matching and aligned data’s content sub-referent;
- A visualization is emotionally situated if at least one of its physical sub-presentations engages users emotionally, eliciting feelings and attitudes related to its corresponding and aligned data’s content sub-referents.
3.2.7. Ethics Perspective
- An ethical referent encompasses the ethical considerations and implications related to the principles, values, rights, and interests of all parties involved in a given space, time, place, activity, community, or content to which the data refers;
- A visualization is ethically situated if, both during the design phase and in use, its designers and all of its physical sub-presentations respect the ethical principles, values, rights, and interests of the users, the data, and the society associated with its corresponding and aligned data’s ethical sub-referents;
- A visualization is ethically diverse if all of its physical sub-presentations take into account the diversity of backgrounds, norms, values, and practices associated with its corresponding and aligned data’s ethical sub-referents.
4. Challenges in SV for AR/MR
- Data overload: Presenting and interacting with large amounts of data is a challenge for any kind of visualization, not only for AR/MR. Displaying all the information at once leads to confusion and lack of clarity. One possible mitigation, among others, is to filter the annotations and overlay them automatically using object-recognition techniques (a minimal filtering sketch follows this list).
- Visual interference: Essential information must be clearly distinguishable from irrelevant information. Annotations may occlude important landmarks or other essential content, or may be placed where they convey nothing useful. Ref. [36] shows an example of essential information occluded by annotations, as well as the same annotations placed over irrelevant parts of the image (in that case, the sky).
- Visual coherence and registration errors: In AR/MR, supplementary virtual information (usually visual) is superimposed onto the user’s real-world environment in real time. When the result of that process makes sense to the user, it is said to be coherent. A coherent visual result offers the user clearer visual cues about the location, shape, and characteristics of the virtual objects and about the interactions between them and real objects. Registration refers to how accurately the overlay is performed: when the digital content sits in exactly the right position in the real-world image, the virtual data are said to be aligned with the real data; when it does not, the content is considered misaligned, i.e., it exhibits registration errors. Virtual information must therefore be consistent with the real world, as erroneous registration may communicate false information (a reprojection-error sketch follows this list).
- Dynamics of AR/MR and temporal coherence: Changes in the scene or in the user’s viewpoint can produce distracting alterations of the visualization; a viewpoint change can, for instance, reorder the annotations. Such modifications produce confusing outcomes because, for example, the annotations can become misleading. To avoid this, changes must be performed in a temporally coherent way, which requires the digital content to keep track of the user’s viewpoint operations throughout the interaction (a label-stabilization sketch follows this list).
- Egocentric viewpoint limitation: AR/MR systems are typically based on a single view (or camera). At any time, the user only has that camera’s viewpoint onto the 3D world, referred to as the egocentric viewpoint. In this context, the egocentric viewpoint involves a camera-to-objects representation, establishing where world objects are with respect to the camera location (or the user’s position). There are two main reasons why this limitation severely impairs the visualization and usability of an AR/MR system. The first is that, in many situations, the egocentric viewpoint is not the most appropriate one for presenting the relevant information, restricting the users’ ability to explore all the information in real-world environments (for example, when training with a machine and only part of its main panel is visible) [25]. The second, and most important, is that some visualization tasks cannot be performed from an egocentric viewpoint at all, for example, showing a student a comprehensive layout of the entire campus. Since such an overview task may be the starting point, or a recurrent one, while an AR/MR application is in use, its absence weakens the user’s interaction. Solutions and mitigations for this challenge can be found in [25,28,49]; a minimal exocentric-overview sketch follows this list.
- Situated analytics: The combination of analytics and SV, known as situated analytics (SA), raises challenges at the following levels:
  - Theoretical and practical: SA takes a less formal approach to data analysis than traditional methods. As a result, it requires a fresh perspective on how to develop, test, and assess context-specific tools, which involves exploring novel techniques, guidelines, and models [18]. An example is the DXR toolkit, presented in [50], which offers developers an efficient way to build immersive data visualization designs through a succinct declarative visualization grammar inspired by Vega-Lite (a hypothetical spec in that spirit follows this list).
  - Ethical: The information that SA, and even SV, provide can improve understanding and be highly valuable to users, but it can also reveal sensitive information. This is a problem for any visualization system, so all delicate information should be handled with extreme care. Another concern is the possibility of seemingly inexhaustible data collection, which can give a false impression because it remains only a partial view of reality, regardless of how correct the gathering process is. Selectively displaying data is one way to introduce bias by over-emphasizing some elements. The problem of fake news, which involves altered, incorrect, or fabricated facts as well as biased data-collection practices, should also be addressed, since such practices may reflect the perspectives of specific groups of people and propagate stereotypes and prejudices.
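As noted in the data-overload item above, one mitigation is to filter annotations through object recognition before overlaying them. The following is a minimal sketch, assuming a recognizer that yields labeled detections with confidence scores; the `Detection` type, the threshold, and the relevance ranking are illustrative assumptions, not a prescribed method.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float   # recognizer score in [0, 1]
    relevance: float    # task-specific importance in [0, 1]

def select_annotations(detections: list[Detection],
                       min_confidence: float = 0.6,
                       max_labels: int = 5) -> list[Detection]:
    """Keep only confidently recognized, task-relevant objects and cap
    the number of overlaid annotations to avoid clutter."""
    kept = [d for d in detections if d.confidence >= min_confidence]
    kept.sort(key=lambda d: d.relevance, reverse=True)
    return kept[:max_labels]
```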
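Registration quality can be quantified as reprojection error: project the virtual anchor through the estimated camera and measure how far it lands from the detected real feature. A minimal sketch with a standard pinhole camera model; the function names are assumptions introduced here.

```python
import numpy as np

def project(point_world: np.ndarray, pose: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Pinhole projection: world point -> pixel coordinates, given a
    3x4 camera pose [R|t] and 3x3 intrinsics K."""
    p_cam = pose @ np.append(point_world, 1.0)   # world -> camera frame
    uvw = K @ p_cam                              # camera -> homogeneous pixel
    return uvw[:2] / uvw[2]                      # perspective divide

def registration_error(anchor_world: np.ndarray, observed_px: np.ndarray,
                       pose: np.ndarray, K: np.ndarray) -> float:
    """Pixel distance between where the overlay is drawn and where the
    matching real feature is detected; large values signal misalignment."""
    return float(np.linalg.norm(project(anchor_world, pose, K) - observed_px))
```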
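For temporal coherence, a common tactic is to damp annotation motion so that small viewpoint changes do not make labels jitter or reorder. Below is a minimal sketch of exponential smoothing with a dead zone; both parameters are illustrative assumptions.

```python
def stabilize(prev_px: tuple, target_px: tuple,
              alpha: float = 0.2, dead_zone: float = 8.0) -> tuple:
    """Move a label only a fraction of the way toward its new target,
    and ignore sub-threshold motion entirely to suppress jitter."""
    dx = target_px[0] - prev_px[0]
    dy = target_px[1] - prev_px[1]
    if (dx * dx + dy * dy) ** 0.5 < dead_zone:
        return prev_px                     # tiny viewpoint change: hold still
    return (prev_px[0] + alpha * dx, prev_px[1] + alpha * dy)
```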
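One family of mitigations for the egocentric-viewpoint limitation renders a secondary exocentric view, in the spirit of world-in-miniature overviews. The sketch below places an overhead camera over the scene’s bounding box; the y-up convention and the margin heuristic are assumptions, not a method from the cited works.

```python
import numpy as np

def overview_pose(scene_min: np.ndarray, scene_max: np.ndarray,
                  margin: float = 1.5) -> tuple[np.ndarray, np.ndarray]:
    """Position an exocentric camera above the scene, looking straight down,
    so that overview tasks (e.g., an entire campus layout) become possible."""
    center = (scene_min + scene_max) / 2.0
    extent = float(np.max(scene_max - scene_min))
    eye = center + np.array([0.0, extent * margin, 0.0])  # y-up world
    return eye, center   # feed (eye, target) to any look-at camera helper
```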
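Finally, the DXR toolkit [50] mentioned under the theoretical and practical challenge expresses visualizations through a concise, Vega-Lite-inspired declarative grammar that binds data fields to marks and visual channels. The dictionary below is a hypothetical spec written in that spirit; the exact keys and channel names of the real grammar may differ.

```python
# Hypothetical declarative spec in the spirit of DXR's Vega-Lite-inspired
# grammar: fields are bound to a 3D mark and to visual channels, and the
# toolkit turns the spec into an immersive visualization.
spec = {
    "data": {"url": "data/air_quality.json"},     # illustrative dataset
    "mark": "cube",
    "encoding": {
        "x":     {"field": "longitude", "type": "quantitative"},
        "z":     {"field": "latitude",  "type": "quantitative"},
        "y":     {"field": "pm25",      "type": "quantitative"},
        "color": {"field": "pm25",      "type": "quantitative"},
    },
}
```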
5. Practical Scenarios
5.1. Space Perspective
5.2. Time Perspective
5.3. Place Perspective
5.4. Activity Perspective
5.5. Community Perspective
5.6. Content Perspective
5.7. Ethics Perspective
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Dağtaş, A.; Zaimoglu, S. The Language Learning Journey of ELT Teachers: A Narrative Approach. In Autoethnographic Perspectives on Multilingual Life Stories; IGI Global: Hershey, PA, USA, 2022; pp. 202–216. [Google Scholar]
- Speicher, M.; Hall, B.D.; Nebeling, M. What is mixed reality? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–15. [Google Scholar]
- Korkmaz, E.; Morali, H.S. A meta-synthesis of studies on the use of augmented reality in mathematics education. Int. Electron. J. Math. Educ. 2022, 17, em0701. [Google Scholar]
- Dube, T.J.; İnce, G. A novel interface for generating choreography based on augmented reality. Int. J. Hum. Comput. Stud. 2019, 132, 12–24. [Google Scholar] [CrossRef]
- Steffen, J.H.; Gaskin, J.E.; Meservy, T.O.; Jenkins, J.L.; Wolman, I. Framework of affordances for virtual reality and augmented reality. J. Manag. Inf. Syst. 2019, 36, 683–729. [Google Scholar] [CrossRef]
- Iqbal, M.Z.; Mangina, E.; Campbell, A.G. Current Challenges and Future Research Directions in Augmented Reality for Education. Multimodal Technol. Interact. 2022, 6, 75. [Google Scholar] [CrossRef]
- Laird, D.; Holton, E.F.; Naquin, S.S. Approaches to Training and Development: Revised and Updated; Basic Books: New York, NY, USA, 2003. [Google Scholar]
- Kosara, R. Visualization criticism-the missing link between information visualization and art. In Proceedings of the 2007 11th International Conference Information Visualization (IV’07), Zurich, Switzerland, 4–6 July 2007; IEEE: New York, NY, USA, 2007; pp. 631–636. [Google Scholar]
- Card, M. Readings in Information Visualization: Using Vision to Think; Morgan Kaufmann: Burlington, MA, USA, 1999. [Google Scholar]
- Schmalstieg, D.; Hollerer, T. Augmented Reality: Principles and Practice; Addison-Wesley Professional: Boston, MA, USA, 2016. [Google Scholar]
- White, S.M. Interaction and Presentation Techniques for Situated Visualization. Ph.D. Thesis, Columbia University, New York, NY, USA, 2009. [Google Scholar]
- Kalkofen, D.; Sandor, C.; White, S.; Schmalstieg, D. Visualization techniques for augmented reality. In Handbook of Augmented Reality; Springer: Berlin/Heidelberg, Germany, 2011; pp. 65–98. [Google Scholar]
- Pezanowski, S.; MacEachren, A.; Savelyev, A.; Robinson, A. SensePlace3: A geovisual framework to analyze place–time–attribute information in social media. Cartogr. Geogr. Inf. Sci. 2017, 45, 420–437. [Google Scholar] [CrossRef]
- Reitberger, W.; Obermair, C.; Ploderer, B.; Meschtscherjakov, A.; Tscheligi, M. Enhancing the shopping experience with ambient displays: A field study in a retail store. In Ambient Intelligence. AmI 2007; Springer: Berlin/Heidelberg, Germany, 2007; pp. 314–331. [Google Scholar]
- Perovich, L.J.; Wylie, S.A.; Bongiovanni, R. Chemicals in the Creek: Designing a situated data physicalization of open government data with the community. IEEE Trans. Vis. Comput. Graph. 2020, 27, 913–923. [Google Scholar] [CrossRef]
- ElSayed, N.A.M.; Thomas, B.H.; Smith, R.T.; Marriott, K.; Piantadosi, J. Using augmented reality to support situated analytics. In Proceedings of the 2015 IEEE Virtual Reality (VR), Arles, France, 23–27 March 2015; IEEE: New York, NY, USA, 2015; pp. 175–176. [Google Scholar]
- ElSayed, N.A.M.; Thomas, B.H.; Marriott, K.; Piantadosi, J.; Smith, R.T. Situated analytics: Demonstrating immersive analytical tools with augmented reality. J. Vis. Lang. Comput. 2016, 36, 13–23. [Google Scholar] [CrossRef]
- Thomas, B.H.; Welch, G.F.; Dragicevic, P.; Elmqvist, N.; Irani, P.; Jansen, Y.; Schmalstieg, D.; Tabard, A.; ElSayed, N.A.M.; Smith, R.T.; et al. Situated Analytics. Immersive Anal. 2018, 11190, 185–220. [Google Scholar]
- Dwyer, T.; Henry Riche, N.; Klein, K.; Stuerzlinger, W.; Thomas, B. Immersive analytics (Dagstuhl seminar 16231). In Dagstuhl Reports; Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik: Wadern, Germany, 2016; Volume 6, pp. 1–9. [Google Scholar]
- Bach, B.; Sicat, R.; Pfister, H.; Quigley, A. Drawing into the AR-CANVAS: Designing embedded visualizations for augmented reality. In Proceedings of the Workshop on Immersive Analytics, IEEE VIS, Phoenix, AZ, USA, 1–6 October 2017. [Google Scholar]
- Ens, B.; Bach, B.; Cordeil, M.; Engelke, U.; Serrano, M.; Willett, W.; Prouzeau, A.; Anthes, C.; Büschel, W.; Dunne, C.; et al. Grand challenges in immersive analytics. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–17. [Google Scholar]
- Elmqvist, N.; Irani, P. Ubiquitous analytics: Interacting with big data anywhere, anytime. Computer 2013, 46, 86–89. [Google Scholar] [CrossRef]
- Marques, B.; Silva, S.; Alves, J.; Rocha, A.; Dias, P.; Santos, B.S. Remote collaboration in maintenance contexts using augmented reality: Insights from a participatory process. Int. J. Interact. Des. Manuf. (IJIDeM) 2022, 16, 419–438. [Google Scholar] [CrossRef]
- Martins, N.C.; Marques, B.; Dias, P.; Santos, B.S. Augmenting the Reality of Situated Visualization. In Proceedings of the International Conference on Information Visualization, IV, Vienna, Austria, 19–22 July 2022; pp. 1–7. [Google Scholar]
- Tatzgern, M. Situated Visualization in Augmented Reality. Ph.D. Thesis, Graz University of Technology, Graz, Austria, 2015. [Google Scholar]
- Willett, W.; Jansen, Y.; Dragicevic, P. Embedded data representations. IEEE Trans. Vis. Comput. Graph. 2017, 23, 461–470. [Google Scholar] [CrossRef]
- Bressa, N.; Korsgaard, H.; Tabard, A.; Houben, S.; Vermeulen, J. What’s the Situation with Situated Visualization? A Survey and Perspectives on Situatedness. IEEE Trans. Vis. Comput. Graph. 2021, 28, 107–117. [Google Scholar] [CrossRef]
- Martins, N.C.; Dias, P.; Santos, B.S. Egocentric viewpoint in mixed reality situated visualization: Challenges and opportunities. In Proceedings of the 2020 24th International Conference Information Visualisation (IV), Melbourne, Australia, 7–11 September 2020; IEEE: New York, NY, USA, 2020; pp. 9–15. [Google Scholar]
- White, S.; Morozov, P.; Oda, O.; Feiner, S. Progress towards site visits by situated visualization. In Proceedings of the ACM CHI 2008 Workshop: Urban Mixed Reality, Florence, Italy, 5–10 April 2008. [Google Scholar]
- White, S.; Feiner, S. SiteLens: Situated visualization techniques for urban site visits. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Boston, MA, USA, 4–9 April 2009; pp. 1117–1120. [Google Scholar]
- Martins, N.C.; Marques, B.; Alves, J.; Araújo, T.; Dias, P.; Santos, B.S. Augmented reality situated visualization in decision-making. Multimed. Tools Appl. 2021, 81, 14749–14772. [Google Scholar] [CrossRef]
- Moere, A.V.; Hill, D. Designing for the situated and public visualization of urban data. J. Urban Technol. 2012, 19, 25–46. [Google Scholar] [CrossRef]
- Fuhrmann, A.; Loffelmann, H.; Schmalstieg, D. Collaborative augmented reality: Exploring dynamical systems. In Proceedings of the Visualization’97 (Cat. No. 97CB36155), Phoenix, AZ, USA, 24 October 1997; IEEE: New York, NY, USA, 1997; pp. 459–462. [Google Scholar]
- Meiguins, B.S.; do Carmo, R.C.; Goncalves, A.S.; Godinho, P.I.A.; de Brito Garcia, M. Using augmented reality for multidimensional data visualization. In Proceedings of the Tenth International Conference on Information Visualisation (IV’06), London, UK, 5–7 July 2006; IEEE: New York, NY, USA, 2006; pp. 529–534. [Google Scholar]
- Nee, A.Y.C.; Ong, S.; Chryssolouris, G.; Mourtzis, D. Augmented reality applications in design and manufacturing. CIRP Ann. 2012, 61, 657–679. [Google Scholar] [CrossRef]
- Grasset, R.; Langlotz, T.; Kalkofen, D.; Tatzgern, M.; Schmalstieg, D. Image-driven view management for augmented reality browsers. In Proceedings of the 2012 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Atlanta, GA, USA, 5–8 November 2012; IEEE: New York, NY, USA, 2012; pp. 177–186. [Google Scholar]
- Eissele, M.; Kreiser, M.; Ertl, T. Context-controlled flow visualization in augmented reality. In Proceedings of the Graphics Interface 2008, Windsor, ON, Canada, 28–30 May 2008; pp. 89–96. [Google Scholar]
- Jansen, Y.; Dragicevic, P. An interaction model for visualizations beyond the desktop. IEEE Trans. Vis. Comput. Graph. 2013, 19, 2396–2405. [Google Scholar] [CrossRef]
- Fleck, P.; Calepso, A.S.; Hubenschmid, S.; Sedlmair, M.; Schmalstieg, D. RagRug: A Toolkit for Situated Analytics. IEEE Trans. Vis. Comput. Graph. 2023, 29, 3281–3297. [Google Scholar] [CrossRef]
- Pederson, T. From Conceptual Links to Causal Relations—Physical-Virtual Artefacts in Mixed-Reality Space. Ph.D. Thesis, Umeå University, Umeå, Sweden, 2003. [Google Scholar]
- Dourish, P. Re-space-ing place: “Place” and “space” ten years on. In Proceedings of the 2006 20th Anniversary Conference on Computer Supported Cooperative Work, Banff, AB, Canada, 4–8 November 2006; pp. 299–308. [Google Scholar]
- Bødker, S. Through the Interface: A Human Activity Approach to User Interface Design, 1st ed.; CRC Press: Boca Raton, FL, USA, 1991. [Google Scholar]
- Craig, A.B. Understanding Augmented Reality: Concepts and Applications; Morgan Kaufmann: Amsterdam, The Netherlands, 2013. [Google Scholar]
- Pejsa, T.; Kantor, J.; Benko, H.; Ofek, E.; Wilson, A. Room2room: Enabling life-size telepresence in a projected augmented reality environment. In Proceedings of the 19th ACM Conference On Computer-Supported Cooperative Work & Social Computing, San Francisco, CA, USA, 27 February–2 March 2016; pp. 1716–1725. [Google Scholar]
- Milgram, P.; Kishino, F. A taxonomy of mixed reality visual displays. IEICE Trans. Inf. Syst. 1994, 77, 1321–1329. [Google Scholar]
- Zhang, P.; Bai, G. An activity systems theory approach to agent technology. Int. J. Knowl. Syst. Sci. 2005, 2, 60–65. [Google Scholar]
- Marques, B.; Silva, S.S.; Alves, J.; Araujo, T.; Dias, P.M.; Santos, B.S. A conceptual model and taxonomy for collaborative augmented reality. IEEE Trans. Vis. Comput. Graph. 2021, 28, 5113–5133. [Google Scholar] [CrossRef]
- Kruijff, E.; Swan, J.E.; Feiner, S. Perceptual issues in augmented reality revisited. In Proceedings of the 2010 IEEE International Symposium on Mixed and Augmented Reality, Seoul, Korea, 13–16 October 2010; IEEE: New York, NY, USA, 2010; pp. 3–12. [Google Scholar]
- Martins, N.C.; Marques, B.; Rafael, S.; Dias, P.; Santos, B.S. Seeing Clearly: A Situated Air Quality Visualization with AR Egocentric Viewpoint Extension. In Proceedings of the Workshop on Visualisation in Environmental Sciences (EnvirVis) at EuroVis, Leipzig, Germany, 12–16 June 2023. accepted. [Google Scholar]
- Sicat, R.; Li, J.; Choi, J.; Cordeil, M.; Jeong, W.K.; Bach, B.; Pfister, H. DXR: A toolkit for building immersive data visualizations. IEEE Trans. Vis. Comput. Graph. 2018, 25, 715–725. [Google Scholar] [CrossRef]
| Literature Overview | Article [24] | The Current Article (Extension of [24]) |
| --- | --- | --- |
| • Explains the SV model [11,25,26] | • Proposes an expansion of the SV model | • Refines the expansion of the proposed SV model to address the explanation gaps identified in [24] |
| • Creates detailed SV definitions for the space and time perspectives, but no comprehensive definitions for the place, activity, and community perspectives [27] | • Systematizes and rewrites all known definitions of SV to avoid confusion across different fields of study (standardized definitions were developed for space, time, place, activity, and community) | • Refines some definitions presented in [24] to clarify the physically situated, perceptually situated, and embedded definitions, and explains the space, time, place, activity, community, and content perspectives more comprehensively to address the identified gaps |
|  | • Proposes new categories for space, time, place, activity, and community | • Proposes the emotionally situated category under the SV content perspective |
|  | • Proposes the SV content perspective | • Introduces the ethics perspective and its corresponding category |
|  | • Presents the challenges in SV for AR/MR |  |