Intersection of Seeing: New Ways of Experiencing Reality using Autonomous Volumetric Capture System
Research article
Authors: Seonghoon Ban, Taeha Yi, and Kyung Hoon Hyun
Proceedings of the ACM on Computer Graphics and Interactive Techniques, Volume 6, Issue 2
Article No.: 19, Pages 1 - 9
Published: 16 August 2023
Abstract
The authors present an art installation entitled "Intersection of Seeing," in which volumetric images were created using nine cameras, one of which moved autonomously throughout the exhibition space. The objective of the installation was to provide a unique visual experience of a volumetrically captured space reconstructed through mixed reality. A robot module designed to navigate autonomously among visitors in the space is also described, and five different visualization methods are proposed as a new form of artistic expression. Visitors experienced mixed reality through layered mechanisms that projected a representation of the real world onto an immersive virtual world in real time.
Index Terms
- Computing methodologies
  - Computer graphics
    - Shape modeling
      - Volumetric models
- Human-centered computing
  - Human computer interaction (HCI)
    - Interaction paradigms
      - Mixed / augmented reality
Published In
Proceedings of the ACM on Computer Graphics and Interactive Techniques, Volume 6, Issue 2 (August 2023), 126 pages
EISSN: 2577-6193
DOI: 10.1145/3616531
Copyright © 2023 ACM.
Publisher
Association for Computing Machinery
New York, NY, United States
Author Tags
- Autonomous robot module
- Human-computer interaction
- Virtual reality
- Volumetric capture
Qualifiers
- Research-article
- Research
- Refereed