Review

The Good News, the Bad News, and the Ugly Truth: A Review on the 3D Interaction of Light Field Displays

1 Department of Networked Systems and Services, Faculty of Electrical Engineering and Informatics, Budapest University of Technology and Economics, Műegyetem rkp. 3., 1111 Budapest, Hungary
2 Wireless Multimedia and Networking Research Group, Department of Computer Science, School of Computer Science and Mathematics, Faculty of Science, Engineering and Computing, Kingston University, Penrhyn Road Campus, Kingston upon Thames, London KT1 2EE, UK
3 Sigma Technology, Közraktár Str. 30-32, 1093 Budapest, Hungary
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2023, 7(5), 45; https://doi.org/10.3390/mti7050045
Submission received: 15 March 2023 / Revised: 24 April 2023 / Accepted: 25 April 2023 / Published: 27 April 2023
(This article belongs to the Special Issue 3D User Interfaces and Virtual Reality)

Abstract

Light field displays offer glasses-free 3D visualization, which means that multiple individuals may observe the same content simultaneously from a virtually infinite number of perspectives without the need for viewing devices. The practical utilization of such visualization systems includes various passive and active use cases. In the case of the latter, users often engage with the system via human–computer interaction. Beyond conventional controls and interfaces, it is also possible to use advanced solutions such as motion tracking, which may seem seamless and highly convenient when paired with glasses-free 3D visualization. However, such solutions do not necessarily outperform conventional controls, and their true potential may fundamentally depend on the use case in which they are deployed. In this paper, we provide a review of the 3D interaction of light field displays. Our work takes into consideration the different requirements posed by passive and active use cases, discusses the numerous challenges, limitations, and potentials, and proposes research initiatives that could advance the investigated field of science.

1. Introduction

As we live in a 3D world, the need for its faithful 3D representation is not surprising. While photography was introduced back in 1839, English scientist and inventor Sir Charles Wheatstone was already utilizing his work on stereopsis [1], initially based on drawings, to create photographic stereoscopic pairs and exhibit them via stereoscopes in the early 1840s. Stereoscopes gained immense popularity in the subsequent decades, partially due to the stereoscopes of Sir David Brewster [2,3] and of Oliver Wendell Holmes [4]. The principle of stereoscopes is to present the viewer with two photographs of the same object or scene, captured from different perspectives. Technically, one image is allocated to each eye.
Although we have come a rather long way since the era of stereoscopes, contemporary digital stereoscopic 3D (S3D) imaging is evidently based on the same concept, including the need for a viewing device. Today, use case contexts reach far beyond the entertainment of individuals via stereoscopic photographs, and the modern applications of 3D visualization technologies may benefit tremendously from autostereoscopy: the viewing-device-free visualization of S3D contents. Yet, it is important to highlight that the technologies discussed below are not actually autostereoscopic, as they achieve 3D visualization without generating stereoscopic pairs. Such technologies are commonly labeled as “glasses-free 3D”, as no glasses or any other viewing devices are required.
The first technology that may come to mind is holography. It was initially proposed by Hungarian physicist Dennis Gabor [5,6,7] in the 1940s but only became a reality after the development of the laser in the 1960s. Technically speaking, as holography is the recording and the reconstruction of wavefronts, holograms may be created based on any type of wave (e.g., electron holography [8]), not just light. During the recording phase of laser holography, two laser beams, commonly split from one laser beam, are directed onto a recording medium (e.g., a photographic plate): the so-called “object beam” (also known as the “illumination beam” or “signal beam”), which is reflected off the object, and the “reference beam”, which directly reaches the medium (typically directed by a mirror) and carries information about the beam itself (e.g., wavelength). During reconstruction, this latter beam is used to reproduce the light that was reflected off the object.
Another important type of 3D visualization is enabled by volumetric displays. Volumetric displays produce 3D imagery within a confined volume that may be viewed from any angle. Such displays can be categorized as swept-volume and static-volume displays [9]. Swept-volume displays [10,11,12] rely on retinal persistence, also known as the persistence of vision [13], which means that the visual perception of an object continues for a short time after the light rays coming from the object stop reaching the retina. Because of this property of the human visual system (HVS), the viewer perceives the decomposed slices of the visualized content, produced by the translational or rotational motion of a surface, as a single 3D object. The very same principle is used to create spinning LED displays [14,15]. Static-volume displays are “static” as such systems have no moving parts [16]. Using lasers is a common approach [17], but conventional projectors are employed as well, particularly in the case of fog and vapor displays [18,19,20].
Using projectors is also common for light field technology, although lasers are an option as well [21]. Similarly to the previously detailed technologies, light field visualization aims to provide a 3D experience that is analogous to what the real world has to offer; of course, the coherent light (i.e., laser) of holography and volumetric displays does not occur in nature. Some in the scientific field address light field displays as a “window to the world”. Basically, if we look out the closed window of a house and see an object outside, we never actually see the object itself, just the incoherent light rays that travel toward our pupils from the surface of the object. However, if we move to the left or to the right alongside the window, then our pupils shall capture different light rays, and thus, we shall see the object from a different perspective. This difference depends on the distance between the window and the object: the closer the object is, the more change we shall perceive in perspective. This is commonly known as the parallax effect. It is important to note here that this applies to both the horizontal and the vertical axis. Therefore, if we bend our knees slightly or if we stand on the tips of our toes, the perspective shall change as well.
Both of these glasses-free 3D technologies may provide a continuous parallax. However, while the perceived holograms are composed of phase-conjugated rays, light fields are formed by so-called “fan beams” coming from each point of the screen [22]. This latter technical term is based on the expansive nature of light propagation. Regarding terminology, all these technologies are considered to be field of light displays (FoLDs) [23,24]. Ultimately, the long-term goal of FoLD technologies is to pass the visual Turing test [25], which means that such displays provide visual experiences that cannot be distinguished from reality, i.e., perfecting the “window to the world”.
Unlike multi-view displays that rely on so-called “sweet spots” [26], i.e., evenly distributed positions in front of the display, from each of which a given perspective can be perceived with limited variation, light field displays utilize the entire field of view (FOV) and provide a smooth, continuous parallax within it. Light field displays are also often addressed as super multi-view (SMV) displays in the scientific literature [27,28,29]. This terminology also extends to near-eye solutions [30,31,32], which, as the name suggests, use viewing devices near the eyes of the observer. The differences between multi-view and light field displays are exhibited in Figure 1. As highlighted by the figure, the continuity of the parallax over the FOV means that any angle of observation within the FOV provides a correct perspective.
Projection-based light field displays, which also appear in the scientific literature as “projector-based light field displays”, use an array of optical engines (i.e., projectors) and a reflective holographic screen. Possibly the most well-known implementations of projection-based light field displays are the HoloVizio displays of Holografika [33,34,35], first introduced roughly 18 years ago. These displays provide horizontal-only parallax (HOP), which means that vertical changes in the angle of observation do not affect the perspective. It is feasible to create vertical-only parallax (VOP) displays; however, they are not practical, as the human eyes are horizontally separated, and the vast majority of changes in the angle of observation in utilization scenarios are also horizontal. Supporting both horizontal and vertical parallax classifies the visualization system as a full parallax (FP) display.
Light field displays are slowly but surely emerging. As light field visualization is an expensive and resource-demanding technology (e.g., high power consumption, great computation capacity, massive data size, etc.), not only are light field displays yet to penetrate the consumer market, but the number of research institutions that have access to such displays is quite limited as well. This, of course, does not stifle innovation, as many institutions propose and develop their own prototypes. Beyond the any-size, any-aspect and any-shape light field display of Holografika [36], there are the layer-based solutions (i.e., multiple LCD layers are used to create the depth of field) of Teng and Liu [37], Alpaslan and El-Ghoroury [38], and Lanman et al. [39]; the integral imaging methods (i.e., a micro-lens array transforms the light rays) of Zhao et al. [40], Yu et al. [41], and Lee et al. [42]; the projection-based implementations of Zhong et al. [43], Shim et al. [44], and Jang et al. [45]; and many more.
Light field displays can be used in numerous different contexts. At the time of writing this paper, while some use cases are still in a purely conceptual phase, others have specific system designs or are already implemented on the level of prototypes. For example, designs for large-scale light field cinema systems (i.e., those with the size of conventional cinema standards) have been proposed [46], yet constructing these systems would be greatly challenging and excessively expensive—not to mention the lack of appropriate contents. Among the largest state-of-the-art systems are screens with 140-inch [47] and 162-inch [48] diagonal sizes, while even the smaller conventional 2D movie theatre screens are more than 50% larger. On the other hand, light field telepresence prototypes have already been implemented and tested [49,50,51]. Naturally, use cases that may utilize general-purpose light field displays are not affected by the emergence of dedicated devices.
The use cases of light field visualization may be either passive or active. In the context of passive use cases, the observer has no option for human–computer interaction (HCI); the individual is purely an observer of the visualized content and has no means of interaction. Active use cases allow the observer to provide input to the system via a control mechanism. A classic example of interaction is the adjustment of the viewing parameters (e.g., rotation) of a static model or scene, which may be executed by using either conventional controls (e.g., the keyboard of the system’s computer) or more advanced solutions (e.g., gesture recognition). For such a glasses-free 3D visualization technology, using a touchless user interface (TUI) may seem to be the evident choice to maximize the potential of active use cases. In fact, it is possible to provide a light field user interface to enable proper 3D interaction. For example, such a solution may combine light field visualization within arm’s reach and hand gesture recognition via a motion-gesture sensor.
However, advanced user interfaces are not necessarily better than conventional ones in every single aspect. One particular aspect can be the precision of the input. While it may be marginal for certain use cases, it may be of paramount importance for others. Another important performance metric can be task completion time, which may be crucial in professional contexts, as well as in other activities (e.g., competitive gaming).
The comparative study of Pittarello et al. [52] investigated different aspects of 3D interaction for keyboard and mouse setups, gamepads, and Leap Motion, an optical hand-tracking module. The studied aspects were usability, emotional involvement, cognitive involvement, aesthetics, novelty, the will to play again, and physical fatigue. The obtained results clearly indicate that optical hand tracking under-performs in terms of usability, while it induces the smallest extent of physical fatigue. Additionally, optical hand tracking resulted in the highest number of task errors in the experiment. Similar findings are reported in the work of Ardito et al. [53], which used a Nintendo Wii Remote (also known as WiiMote), as well as the two aforementioned conventional controls. The collected data show that the mean task completion time for the WiiMote is nearly 50% higher compared to the keyboard and mouse and the gamepad. Moreover, the users expressed that handling the WiiMote was roughly twice as difficult as the other controllers, and this was also reflected in subjective measurements of personal preference and satisfaction.
While the experiments mentioned above were carried out for 2D displays, head-mounted displays (HMDs), such as virtual reality (VR) devices, may benefit more from advanced controllers. Indeed, commercial VR systems now have their own special controllers that dominate use cases of entertainment, and digital gloves (also known as smart gloves or haptic gloves) are being investigated by the scientific community [54,55,56,57,58] and have emerged on the consumer market as well [59,60]. For light field visualization, such HCI solutions may be considered, yet the device-free nature of optical tracking may match the glasses-free nature of light field displays better.
In this paper, we review the state-of-the-art solutions for 3D interactions with light field displays. Our work specifically focuses on projection-based light field visualization, and thus, we do not address near-eye light field displays, which are analogous to HMD-based systems on many fronts. The analysis presented in this paper separately discusses the many passive and active use cases of light field visualization. The latter may vary greatly in terms of HCI, as certain use cases may have more specialized requirements toward the aspects of interaction performance. We also highlight how specific use case archetypes may be implemented as either passive or active and demonstrate potential hybrid multi-user solutions. Essentially, this paper elaborates on the good news (i.e., the advantages and the potentials) and the bad news (i.e., the challenges and the limitations) about 3D interactions with light field displays, as well as the ugly truth (i.e., that 3D interaction is not universally superior to conventional controls and, in fact, may easily be outperformed by them). Furthermore, our work proposes future research efforts that are necessary to advance our understanding of the related HCI, which may ultimately assist the emergence of the active use cases of light field visualization.
The remainder of this paper is structured as follows. The brief history of light field visualization and the current state of the research efforts are reviewed in Section 2. The different use cases are listed and categorized in Section 3. The findings on 3D light field interactions are analyzed in Section 4. The discussion on HCI is presented in Section 5. The paper is concluded in Section 6.

2. Historical Overview and State-of-the-Art Research of Light Field Visualization

Although real light field displays emerged in the past two decades, the concept of such a form of visualization was already conceived at the beginning of the 20th century. In 1908, Gabriel Lippmann [61] proposed integral photography, which relied on the same principle as today’s integral imaging solutions. Basically, a micro-lens array is employed in order to facilitate small differences in perspective. Even in the case of rendering for state-of-the-art projection-based light field displays, if the input is an array of 2D images—a one-dimensional array for HOP and VOP content, and a two-dimensional array for FP content—then the differences between these images also encapsulate such small disparities.
The technical term “light field” was introduced by Andrey Gershun [62] in 1936, although Michael Faraday had already considered light as a field in 1846 [63]. The complex characteristics of light fields were quantified by Adelson and Bergen in 1991 through the plenoptic function [64]. The plenoptic function describes the intensity (i.e., radiance) of all the light rays in a region of 3D space. It may be parametrized by a position (three coordinates) and a direction (two angles). This parametrization is the basis of 5D plenoptic modeling or image-based rendering (IBR), as proposed by McMillan and Bishop in 1995 [65]. The 5D model represents light fields as a set of panoramic images captured from different positions in space. In 1996, Levoy and Hanrahan presented the light slab representation [66]. It builds on the idea that radiance does not change along a line in free space, where free space refers to regions free of occluders. The light slab is a 4D representation, as a ray can be described by its intersections with two planes (i.e., two point pairs). Such planes are often illustrated as parallel planes placed in front of and behind the 3D object or scene, but their position is arbitrary. In fact, there are numerous alternative 4D representations, such as two point pairs on the surface of a sphere. The 4D light field is also known as a lumigraph [67]. In 2002, Yang et al. introduced a capture system composed of 64 web cameras in an 8-by-8 array [68]. The first commercial plenoptic camera was the Raytrix R11 in 2010, followed by the consumer-grade light field cameras of Lytro in 2012. Regarding visualization, besides the aforementioned HoloVizio displays, the 360-degree system presented by Jones et al. [69] in 2007 should be mentioned, as well as the different works of Lanman et al. and Wetzstein et al. [39,70,71,72].
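As a compact restatement of these representations (the notation here is chosen for illustration and is not taken verbatim from the cited works), the 5D plenoptic function and its 4D free-space reduction can be written as follows:

```latex
% 5D plenoptic function: radiance L along the ray passing through the
% point (x, y, z) in the direction given by the two angles (theta, phi)
L_5 = L(x, y, z, \theta, \phi)

% In occluder-free space, radiance is constant along a ray, so a ray can
% be indexed by its intersections (u, v) and (s, t) with two planes:
L_4 = L(u, v, s, t)
```

The drop from five dimensions to four is exactly the free-space assumption of the light slab: once radiance is constant along a line, the position along that line carries no information, and two plane intersections suffice to identify the ray.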
As the representation of light field contents is significantly larger than that of conventional 2D and S3D contents, compression is an essential research topic. Among the most notable initiatives is JPEG Pleno [73,74,75]. At the time of writing this paper, five ISO/IEC JPEG Pleno standards have already been published, and two more are currently under development. Other proposals for light field compression include the works of Magnor and Girod [76], Liu et al. [77], Chen et al. [78], and Jiang et al. [79,80].
The primary goal of light field visualization research is ultimately to provide an exceptional quality of experience (QoE), which is imperative to the successful launch of related use cases. For instance, compression typically serves the purpose of perceptual coding, which means that the size of data is reduced without compromising the perceived quality. This statement is also applicable to other types of visualization technologies, as well as audio. In the context of light field visualization, subjective quality evaluation commonly accompanies research efforts related to compression [81,82,83,84,85,86,87,88]. Other topics addressing light field QoE include, but are definitely not limited to, objective quality assessment (i.e., the prediction or estimation of subjectively perceived quality through metrics and models) [89,90,91,92,93,94]; datasets for objective and subjective assessment [95,96,97,98,99,100,101,102,103]; reconstruction, subsampling, interpolation, and view synthesis [104,105,106,107,108,109,110]; spatial and angular resolution [111,112,113,114]; methodology and viewing conditions [115,116,117,118,119,120]; human factors and content impact [121,122,123,124]; and rendering [125,126,127].
Beyond these topics, there are numerous research questions that are still to be thoroughly investigated in the area of light field QoE [128], such as immersion, interaction, inter-user effects, perceptual fatigue, and many more. In Section 4 of this paper, we analyze the few published scientific contributions that address 3D light field interaction.

3. Use Cases of Light Field Visualization

In this section, we classify and describe the use cases of light field visualization. In the case of passive utilization, individuals do not directly interact with the system, and thus, HCI either does not play an essential role or is completely absent from the users’ perspective. Note that the classification of the vast majority of use cases depends on the implementation (i.e., both use case classes are feasible).

3.1. Passive Use Cases

3.1.1. Prototype Review

Prototype review is one of the most common instances of industrial visualization via light field technology. Usually, a static model or scene is visualized, but it is also meaningful to display animated content. During the passive implementation of a prototype review, individuals (e.g., stakeholders, developers, etc.) may move within the FOV—or rather, the valid viewing area (VVA)—of the light field display to observe the content (e.g., a mechanical component) from various perspectives. The FOV is the angle measured from the screen of the display in which light rays are reproduced, while the VVA takes into consideration the actual shape of the area, defined by the overlapping spread of light rays (also known as emission cones), and may also limit the viable viewing distance through light ray density (i.e., angular resolution). Passive prototype review is feasible if and only if the orientation of the prototype and the display FOV together enable all the perspectives of interest and the smallest detail of interest is properly perceivable at the default content zoom.
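To make the difference between the FOV and the VVA more tangible, below is a minimal geometric sketch in Python. It assumes that every screen point emits light into a symmetric cone as wide as the display FOV and ignores the distance limit imposed by angular resolution; the function and parameter names are illustrative, not taken from the literature.

```python
import math

def in_valid_viewing_area(x, d, screen_width, fov_deg):
    """Simplified VVA test: a viewer at lateral offset x and distance d
    (both in meters, relative to the screen center) is inside the VVA only
    if they lie within the emission cone of every screen point; for this
    symmetric model, checking the two screen edges is sufficient."""
    half_fov = math.radians(fov_deg) / 2.0
    for edge_x in (-screen_width / 2.0, screen_width / 2.0):
        # Angle between the screen normal at this edge and the viewer
        if abs(math.atan2(x - edge_x, d)) > half_fov:
            return False
    return True

# A 1 m wide display with a 100-degree FOV, viewed from 1.5 m away:
# in_valid_viewing_area(0.0, 1.5, 1.0, 100)  ->  True  (centered viewer)
# in_valid_viewing_area(1.4, 1.5, 1.0, 100)  ->  False (within the FOV as
#   measured from the screen center, but outside the overlap of edge cones)
```

Note how a viewer may still be within the FOV as measured from the center of the screen yet outside the VVA, since the emission cones of the far screen edge no longer reach them.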

3.1.2. Medical Imaging

The primary considerations of passive prototype review regarding feasibility are also applicable to passive medical imaging. However, such implementations are less likely to be practical, as limitations in observation may result in decreased diagnostic accuracy. While content interaction (i.e., rotation and zoom) may not necessarily be utilized for each and every instance of medical data analysis, the lack of such an option may increase the percentage of false negatives (i.e., a health-related issue remains undetected) and false positives (i.e., a non-present health-related issue is falsely confirmed).

3.1.3. Resource Exploration

In the context of light field display systems, the use cases of resource exploration primarily refer to the visualization of oil and gas resources [129,130]. The aforementioned considerations regarding perspectives and details apply to the passive instances of resource exploration as well; however, these use cases are rarely time-sensitive, unlike many applications of medical imaging. At this point, the option of automatic rotation (i.e., the visualized object rotates slowly in a given direction) needs to be mentioned, which may be used for industrial use cases such as prototype review and resource exploration. For medical contexts, it may be feasible, but mostly for training and education purposes (e.g., the visualization of an organ affected by a certain disease).

3.1.4. Training and Education

Various instances of training—particularly specialized training—and education support passive implementations of light field visualization. In such cases, the perspective of interest is provided to the individual by default—or the content is animated—and changes to the parameters of visualization are not possible (e.g., due to the lack of HCI). Evidently, no interaction at all is required from the individual in passive use cases of training and education. A straightforward example of such a use case is the provision of 3D educational multimedia.

3.1.5. Digital Signage

Digital signage is a very typical utilization of visualization technologies for commercial purposes (e.g., advertising on billboards). The emergence of 3D digital signage has long been expected, as the primary function of such visuals is to attract attention, which can be achieved by the content, by the visualization, or by their combination. Digital signage in general is a dominantly passive use case (i.e., the individual observes an eye-catching digital billboard), although active implementations are also possible—particularly for small-scale units.

3.1.6. Cultural Heritage Exhibition

Cultural heritage exhibition (e.g., an exhibition at a museum) as a passive use case is rather straightforward. Individuals observe either a static 3D object or scene, or an animated content. Object orientation is often not an issue. For example, in the case of the visualization of a 3D life-size replica of a classical-era vase, if only a given portion of the vase is decorated, then that portion should face the audience; if the entire circumference of the vase is imbued with unique imagery, then the vase should be slowly rotating.

3.1.7. Traffic Control

Of all the different types of traffic control, light field visualization could benefit air traffic control the most, as it may show accurate vertical distances between aircraft. In passive implementations, the operator purely observes the visualized region and does not interact with it. Interaction with the system is not necessary, and is not necessarily beneficial either, as explained for the active variant of this use case.

3.1.8. Driver Assistance Systems

Driver assistance systems in the investigated context are technically light field windshields. The main rationale behind such implementations is that vehicle- or traffic-related information is visualized close to where the driver’s visual attention should be—namely, on the road. There are numerous applications based on Vehicle-to-Everything (V2X) communication, yet the data they convey is shown either on a smartphone or on the digital dashboard of the vehicle [131,132,133,134]. Such light field solutions are particularly beneficial to the driver’s reactive capabilities to V2X-based information [135]. Also, as the visualized content itself is 3D, the driver does not need to regularly switch between 2D and 3D visuals. In passive implementations, the driver receives relevant information via the visualization system.

3.1.9. Defense Applications

There are multiple utilization scenarios for light field visualization in the context of defense applications. They are economically feasible as well, since the military tends to have a generous budget. One particular form of light field technology in this context is known as 3D battlespace visualization [136]. It is analogous to air traffic control in many aspects, yet interaction may be greatly beneficial to such systems.

3.1.10. Telepresence

One can look at the telepresence use case as a “3D video call”, although it is more than that. The purpose of true-to-scale systems—such as the prototype of Cserkaszky et al. [50]—is to enable a sense of presence via realistic size and glasses-free 3D visualization. Other implementations are feasible as well, such as the levitating system of Zhang et al. [51], which only displays the head of the individual; or the cylindrical teleconferencing system of Gotsch et al. [49]. 3D teleconferencing can also be implemented by general-purpose displays, such as the Looking Glass light field display in the design of Blackwell et al. [137]. In a passive use case, the individual interacts with others, and not with the system itself.

3.1.11. Home Multimedia Entertainment

The final use case that may be both passive and active is home multimedia entertainment. In its passive form, it is analogous to simply watching a movie on the television. Of course, its active variants offer more than the option to pause the content or to change its sound volume. Moreover, of all the use cases discussed so far, home multimedia entertainment is the only one that necessitates a privately-owned light field display—even telepresence can begin to emerge (i.e., in professional contexts) without consumer-grade light field visualization systems.

3.1.12. Cinematography

The only use case in this list that is strictly passive is cinematography. As stated earlier, while large-scale light field cinema systems have already been proposed [46], they are greatly challenging to implement. However, such solutions carry immense potential for innovation on many fronts. First of all, ticket pricing for light field cinema would differ quite a lot from conventional pricing schemes, as closer seats could provide better 3D perception of the movie. Basically, the perceived density of light rays fundamentally depends on the viewing distance. This means that it is more difficult to address the two eyes of the observer with at least two distinct light rays at greater distances. Moreover, light field cinema could open various artistic options to be explored. For instance, storytelling could be affected by the perspective (e.g., some details could be perceptually occluded from one perspective, while visible from another).
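As a back-of-the-envelope sketch of this distance dependence (a simplified model; the 65 mm interpupillary distance and the 0.5-degree angular resolution below are assumed, illustrative values):

```python
import math

def max_stereo_distance(angular_resolution_deg, ipd_m=0.065):
    """Farthest viewing distance (in meters) at which two adjacent light
    rays emitted from a single screen point are still separated by less
    than the interpupillary distance, so that the viewer's two eyes can
    receive distinct rays."""
    return ipd_m / math.tan(math.radians(angular_resolution_deg))

# With an angular resolution of 0.5 degrees between adjacent rays:
# max_stereo_distance(0.5) -> ~7.45 m; seats farther back than this would
# perceive an increasingly flat, 2D-like image.
```

This is precisely why closer seats in a hypothetical light field cinema could command higher prices: the 3D effect degrades gracefully, but measurably, with distance.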

3.2. Active Use Cases

3.2.1. Prototype Review

For the majority of active use cases, including the active implementations of prototype review, there are two types of content visualization. One is a typical model viewer, which enables the modification of the viewing parameters, such as rotation and zoom—for both static and animated contents. The other type includes all the content-related interactions. For example, in the case of prototype review, if the leg of a bipedal robot is moved through commands via an HCI, then that is a content-related interaction. However, if a looping animation of a bipedal robot is displayed, then changing its orientation via the HCI is a view-related interaction. Prototype review is not a time-sensitive use case (i.e., strict deadlines and rushed development do not make a use case time-sensitive). The audience is commonly composed of multiple individuals, yet it is sufficient if only a single individual engages with the HCI at a time. Input accuracy is primarily relevant for content-related interaction.

3.2.2. Medical Imaging

The most common operations for medical imaging are view-related interactions. Content-related interactions are more frequent in the context of medical training. Medical imaging can be a time-sensitive use case, as in many scenarios, the need for medical treatment can be urgent. It is possible to have simultaneous observers, but having a single medical expert as the user is also typical. Even if there are multiple observers, typically, there is no need for simultaneous interactions. However, input accuracy may be important due to the potentially time-sensitive nature of the use case.

3.2.3. Resource Exploration

Content-related interactions are feasible for the active instances of resource exploration (e.g., changing the visualized drilling positions), yet view-related interactions are much more common, such as zooming in on an oil field. Generally, it is not a time-sensitive use case, as the primary purpose of the utilization of light field visualization is to aid careful planning. Simultaneous viewers are typical, yet for both prototype review and resource exploration, single-viewer scenarios are possible. For visualization with multiple observers, a single input for interaction is sufficient, and input accuracy is not of the greatest concern.

3.2.4. Training and Education

The active implementations of the use cases for training and education include both content-related and view-related interactions. The utilization of light field displays may be time-sensitive, particularly for specialized training. Simultaneous users may be common, and simultaneous inputs may be common as well, the accuracy of which may be of paramount importance.

3.2.5. Digital Signage

Active digital signage is primarily applicable to small-scale instances (i.e., sidewalk signage), and it is not feasible for billboards and façade-size signage (i.e., the largest format, used on the surface of buildings). The usage is rather simple: the individual approaches the digital signage, becomes interested in its content, and interacts with it for more information. Although view-related interaction is meaningful in such a context (e.g., the individual may rotate a commercial product to view it from different angles), options for content-related interaction (e.g., modifying the color of the commercial product) are expected to dominate the use case. Again, the entire essence of signage is to capture the attention of individuals, and therefore, it should be as attractive as possible. Digital signage is not a time-sensitive use case, and while simultaneous viewers are expected, such small-scale systems shall mainly focus on the input of a single individual. Regarding the input itself, its accuracy is not particularly important. Still, the overall experience should excel, as it may greatly contribute to the financial decisions of the individual (e.g., buying a commercial product or subscribing to a service).

3.2.6. Cultural Heritage Exhibitions

The active use cases of cultural heritage exhibitions are rather similar to those of digital signage, as one of their primary goals is to grab attention and make the individual interested. Of course, the intention is to convey cultural heritage and to enrich the individual with cultural knowledge, not to generate profit. For this purpose, museums and exhibitions often experiment with novel technologies, as they tend to gain the interest of younger generations. Both interaction types may serve this purpose well. Exhibitions of cultural heritage are far from being time-sensitive, and simultaneous viewers are very typical. There is potential for simultaneous input, much more so than in the case of digital signage. However, accuracy is significantly more important if content-related interaction plays a central role in the experience. Basically, insufficient input accuracy may easily degrade the experience and make the individual lose interest in the cultural content.

3.2.7. Traffic Control

In the passive variant of traffic control, particularly air traffic control, it was mentioned that interaction is not necessarily beneficial to the active use case. For instance, changing the zoom level may be counter-productive and hazardous. If the operator zooms in on a particular region, then other portions of the region are not visible for that given duration. Of course, at the same time, content-related interaction can be rather advantageous, such as adding information overlays dynamically (e.g., the visualization of calculated trajectories). Traffic control is a highly time-sensitive use case. While there is the potential for simultaneous users, simultaneous input is not expected. However, since this is not only a time-critical but a safety-critical use case as well, input accuracy is extremely important.

3.2.8. Driver Assistance Systems

The interaction type for driver assistance systems is mostly content-related, such as adjusting the visualized information. It is expected that the usage of the windshield surface shall be strictly regulated by compulsory future standards, which also decreases the relevance of view-related interactions. Typically, the only user is the driver, who is also the sole source of input. Just as in air traffic control, driver assistance systems are greatly time-sensitive. Therefore, the input of such solutions must be highly accurate.

3.2.9. Defense Applications

For defense applications, such as a 3D battlespace, there are many information overlays that can greatly assist decision makers. These include the visualization of various ranges, such as radar, sonar, or even ballistic ranges. Both interaction types are feasible, although the considerations for zoom are analogous to air traffic control. Real-time defense applications are time-sensitive use cases, typically with multiple viewers and a single input, the accuracy of which is absolutely crucial.

3.2.10. Telepresence

Although the telepresence use case is more passive than active, both view-related and content-related interactions can be meaningful. For instance, if one party is too far from the camera array, the other party could zoom in on the view for a better visual experience. While large-scale, portrait-oriented systems are designed to encompass a single individual, many solutions may easily accommodate multiple simultaneous users on one end. As the use case is not fundamentally designed to be active, simultaneous inputs are not expected, and there are no major requirements regarding input accuracy.

3.2.11. Home Multimedia Entertainment

Similarly to telepresence, home multimedia entertainment is a mostly passive use case, although the functionalities of contemporary smart televisions are expected. It is not a time-sensitive use case, and the input generally does not play a major role. Regarding simultaneous viewers, light field visualization poses no restriction on their number, unlike HMD-based technologies.

3.2.12. Gaming

One of the most important active-only use cases is gaming, which is evidently based on content-related interaction. Gaming is commonly time-sensitive, unless timerless turn-based games or similar genres are played, and the accuracy of the input is typically important. Possibly the greatest potential of light field gaming is split-domain gaming. While split-screen gaming divides the screen based on the number of players (e.g., in the case of two players, either horizontally or vertically), split-domain gaming allocates a VVA to each player. An example of the VVAs of two players is shown in Figure 2. In the middle, the perspectives of the two players overlap; thus, no valid visualization can be perceived in that region. The main benefit is that both players can utilize the entire screen, and an added bonus is that during competitive gaming, the two players cannot see each other’s views (i.e., no “screen peeking”).
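A minimal sketch of how such domain allocation could work is given below, assuming that a single horizontal angle describes the direction of observation; the FOV and overlap widths are illustrative assumptions, not values taken from the literature.

```python
def player_view_for_angle(angle_deg, fov_deg=100.0, overlap_deg=20.0):
    """Map a horizontal observation angle (0 = center of the display FOV)
    to a split-domain player view. Angles falling into the central overlap
    region, where the perspectives of the two players would interfere,
    yield no valid view."""
    half_fov = fov_deg / 2.0
    half_overlap = overlap_deg / 2.0
    if -half_fov <= angle_deg < -half_overlap:
        return "player 1"
    if half_overlap < angle_deg <= half_fov:
        return "player 2"
    return None  # central overlap zone or outside the FOV

# player_view_for_angle(-30.0) -> 'player 1'
# player_view_for_angle(5.0)   -> None (no valid visualization)
```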

3.2.13. Metaverse

Another potential utilization of light field visualization is the active use case of the metaverse [138]. For such a use case, interactions are expected to have characteristics similar to gaming. The metaverse can be used for a virtually infinite number of purposes. Although the concept itself dates back to Neal Stephenson’s novel published in 1992 [139], practical applications of the metaverse are currently being shaped.

4. Research on 3D Light Field Interactions

A summary of the typical parameters for active use cases elaborated in the previous section is shown in Table 1. The table emphasizes that task completion time can be crucial for the use cases, as well as the accuracy of the input. In this section, we review the state-of-the-art research on 3D light field interactions in light of the different use cases.
Adhikarla et al. [140,141] proposed a 3D light field HCI via a prototype light field display. The framework was designed for realistic direct haptic interaction. The solution relied on a Leap Motion controller for hand tracking and a HoloVizio-like, small-scale, back-projection light field display for visualizing the HCI. In essence, it consisted of a projector array, two sidewall mirrors, a holographic screen, and, of course, a computer that controlled the projector array.
The proposed HCI was evaluated in a subjective study with 12 test participants. In order to directly compare the light field interface to a conventional 2D solution, the authors designed a so-called “2D mode” and a “3D mode” for the experiment. In the case of the 2D mode, the perceived visualization was uniformly close to the physical surface of the device (i.e., without any variation in depth), while for the 3D mode, the distance from the screen varied up to 7 cm. Three tiles (i.e., squares) were visualized on the interface, one of which was red. The task of the test participant was to touch the red tile. In the 2D mode, the three tiles were distributed on a plane, while in the 3D mode, the depth of the tile varied as well, between 0 cm (i.e., the tile was in the plane of the 2D mode) and 7 cm.
The experiment measured task completion time, cognitive workload, and QoE. The obtained results indicate that the same task required significantly more time to complete in the 3D mode. For cognitive workload, the NASA-TLX (Task Load Index) [142] was used, the results of which show higher loads in the 3D mode for most aspects (frustration, effort, performance, temporal demand, mental demand, and total workload), although the difference was not statistically significant. Regarding QoE, the User Experience Questionnaire (UEQ) of Laugwitz et al. [143] was used, and it revealed that the light field HCI achieved better attractiveness, efficiency, stimulation, and novelty, although none of the categories achieved statistical significance in their differences.
In a different work of Adhikarla et al. [144], the usage of hand gestures for panning, rotating, and zooming was investigated in the context of a 3D map. The hand gestures were tracked by a Leap Motion controller, and the map was visualized on the HoloVizio C80 light field cinema system [47]. The HCI was implemented by separating the sensed zone of the device into two parts: a hover zone and an interaction zone. Hand movement in the hover zone resulted in no action, while the interaction zone responded to the pre-defined movements for panning, rotating, and zooming on the map. The solution was evaluated by experts, and it was concluded that it may be difficult for the user to keep track of hand positioning within the zones.
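The zone-based dispatch described above might be sketched as follows; this is a simplified model, and both the boundary depth and the gesture names are assumptions made for illustration, not details reported in [144].

```python
def zone_for_depth(hand_depth_cm, interaction_limit_cm=15.0):
    """Classify a tracked hand position into the two-zone scheme: an
    interaction zone near the display and a hover zone beyond it. The
    15 cm boundary is an assumed, illustrative value."""
    return "interaction" if hand_depth_cm <= interaction_limit_cm else "hover"

def dispatch(hand_depth_cm, gesture):
    """Trigger a map operation only inside the interaction zone; hand
    movement in the hover zone results in no action. The gesture names
    are hypothetical placeholders for the pre-defined movements."""
    if zone_for_depth(hand_depth_cm) == "hover":
        return None
    return {"swipe": "pan", "twist": "rotate", "pinch": "zoom"}.get(gesture)

# dispatch(20.0, "pinch") -> None   (hover zone: no action)
# dispatch(10.0, "pinch") -> 'zoom' (interaction zone)
```

The expert feedback cited above, namely that users struggle to keep track of their hand's position relative to an invisible zone boundary, suggests that such an implementation would likely benefit from an explicit visual or auditory cue when the boundary is crossed.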
Yamaguchi and Higashida [145,146] proposed a small-scale visualization system, the screen of which was composed of a 2D array of small elementary holograms that functioned as convex mirrors. User interaction with the projected content was tracked by a color image sensor, which detected the light scattered by the user’s finger. For testing purposes, the character “T” and the text “Touch Screen” were visualized, and if the individual touched the screen (i.e., the light scattered), then the text “OK” appeared as well. In a subsequent test, the characters “Y” and “N” were visualized, resulting in a “Yes” or a “No” if touched, respectively. A limitation of this solution is that interactions can only be registered if light is scattered, so finger motions between visualized areas are not detectable. The authors highlighted interactive digital signage as an active use case.
Chavarría et al. [147,148,149] used the same projection-based system for HCI and enhanced its registration procedure (i.e., detecting the user’s fingers) to combat the aforementioned limitation. The work demonstrates novel functions achieved by the proposed method, such as mid-air light field drawing without any additional device. The authors also tested the system as an ATM interface with grab and poke gestures. Subjective studies related to performance and QoE are yet to be carried out.
The RePro3D display of Yoshida et al. [150] was demonstrated through interactions with a computer-generated character (i.e., an animated fairy). The input interface used an infrared camera to recognize the hand gestures. However, the user wore a haptic device on the finger [151] for tactile sensation. Yet, as it was solely used for feedback, the considerations of bare-finger touch [152,153,154] were still relevant. In the investigated use case, the animated character, which was superimposed in 3D space, responded to touch with both visual and audio cues. A limitation of the solution is that the positional relationship between the hand and the animated character was not fully addressed (i.e., if the user’s hand was placed perceptually in front of the character, the character was not hidden by the hand). On the level of interaction, only the binary action of touch (i.e., whether the user perceptually touches the 3D content or not) was investigated.
Matsubayashi et al. [155] used ultrasound haptic feedback in two user studies. In the first study, the task of the test participants was to estimate the position and angle of a virtual object based on haptic feedback. The estimation itself was carried out as view-related interactions (i.e., repositioning and rotation), which were executed via a keyboard. During the second study, the test participants were asked to lift a virtual cube, which was to be performed with and without visual access to the cube; in the case of the latter, the test participants had to rely on haptic feedback. The obtained results emphasize the importance of angle recognition and demonstrate that haptic feedback may compensate for issues of occlusion, a topic that is investigated by several recent works. For example, Yasui et al. [156] proposed an occlusion-robust sensing method based on aerial imaging by retro-reflection (i.e., reflecting light back to its source with minimal scattering).
Sang et al. [157] introduced a light field visualization system for medical imaging. The supported view-related interactions were rotating and zooming. However, the work did not detail the means of interaction.
In the experiment by Tamboli et al. [158], canonical 3D object orientation was addressed. The task of the test participants was to rotate 3D objects into their preferred orientation. The objects were visualized on the HoloVizio C80 light field display. As it was important from the perspective of the scientific work to obtain accurate data, the authors decided to include a conventional controller in the tests. The test participants used the thumbstick of the controller to rotate the 20 objects.

5. Discussion

This section aims to discuss and summarize the good news, the bad news, and the ugly truth about 3D interactions with light field displays.

5.1. The Good News

The good news is that light field visualization and its interactive use cases have absolutely immense potential. Particularly in the era of the COVID-19 pandemic, the possibility of avoiding physical contact during interaction is simply invaluable. The use cases are numerous, and as the technology emerges, there may be even more than the ones covered by this article. Eventually, passive and active instances of light field visualization may become an organic part of everyday life.
Although the availability of light field displays at the time of writing this paper is quite limited, there are more and more prototypes being built and tested by institutions. Regarding research, there is a continuous stream of scientific efforts, all of which contribute to the successful future emergence of the use cases of light field visualization.
Even without haptic feedback, the device-free nature of 3D interactions through projected light field HCIs combines well with the glasses-free 3D nature of light field displays. Furthermore, as shown via recent research efforts, haptic feedback for such systems can actually be implemented without the need for additional user devices.
Generally, control by hand gestures is quite intuitive. One only needs to consider how quickly humanity adapted to the touchscreens of smartphones, tablets, and other devices. Moreover, the analysis of hand gestures on 3D interfaces is expected to follow the research directions on touchscreens, such as identifying [159,160,161,162] or characterizing the user [163,164], including via gender [165,166,167] and age [168,169,170] recognition.
An additional benefit of light field HCIs is that they are much more durable than physical controllers, as there is no physical contact with the user. For instance, gamepad controllers and joysticks are more susceptible to damage when their users engage with intense fighting games or games that require frequent input, not to mention that the controller may be a victim of the player’s frustration.
While split-domain visualization is most apparent for gaming, many other use cases may benefit from it as well. For example, light field displays in defense use cases may be used by multiple personnel simultaneously, with different information overlays. In such a scenario, both individuals could perceive the same real-time map of military entities, but one could overview radar or sonar ranges, while the other could supervise strike ranges or trajectories.
Light field visualization may also benefit from other technological advancements, such as the emerging type of diffractive optics known as metasurfaces [171,172,173,174]. Metasurfaces, which are typically metallic [175,176,177] or dielectric [178,179,180], are subwavelength-patterned surfaces that may be used in meta-optics to control the phase, the amplitude, and the polarization of light rays. With such and even more advanced optical technologies, light field visualization may take significant steps toward passing the visual Turing test [25], which is the ultimate goal of any glasses-free 3D imaging system.

5.2. The Bad News

On a more pessimistic note, the development of light field technology is constrained by significant challenges and limitations, which also extend to 3D interactions. There are important trade-offs between display characteristics; more densely-aligned projectors are needed; many commercial systems should not be too great in terms of size and weight; and heat dissipation should be properly addressed, not to mention data size, computational requirements, power consumption, and the expense of manufacturing, which also translates into commercial cost. These factors not only delay the emergence of light field displays and their use cases, but they slow down research efforts as well. While it is true that many scientific contributions do not rely on actual light field displays (i.e., light field contents can be visualized by other display technologies as well, including conventional 2D displays), 3D interactions via light field necessitate such displays. One may say that augmented reality (AR) has the potential to emulate the perceptual circumstances; however, it is not even remotely straightforward to match the QoE of a glasses-free visualization technology with the QoE of an HMD-based system.
In order for a display system to achieve a sufficient QoE, it needs to be sufficiently free of blur, the parallax effect must be smooth and continuous, and crosstalk should be completely avoided. Blurred visuals can be caused by insufficient spatial as well as insufficient angular resolution. It is a limiting property of light field visualization that the displayed content is always the sharpest in the plane of the screen. Angular resolution is of the utmost importance, not only because it enables 3D perception at given distances (i.e., viewing the same visualization from a greater distance may result in a more 2D-like visual experience), but also because it determines the characteristics of the achieved parallax effect. Technically speaking, a disturbed parallax can severely degrade the QoE as well as hinder interaction performance. One of the worst threats to QoE and to interaction is the crosstalk effect, during which adjacent perspectives interfere with each other and may potentially make the visualized content unrecognizable. Constraining the depth of visualization may indeed be a solution to avoid such issues; however, perceived depth is one of the most important building blocks of the entire 3D experience.
The size of the display, and thus, the size of the HCI, is also a difficult matter. If the HCI is too small, then that can seriously compromise input accuracy. However, larger systems may be more difficult to implement in a given context, they may not even be possible for certain use cases, or there may be potential issues regarding visualization quality. In the world of QoE, there are many interesting questions of preference, particularly when choosing between characteristics that may degrade the QoE [181,182]. In future research efforts, it would be beneficial to investigate the preference between smaller but higher-quality systems and larger but lower-quality ones. Of course, in many scenarios, display size is dictated by the use case itself.
There are also significant technical considerations, including challenges and outright drawbacks, for specific use cases. In the passive use cases, the aforementioned display size can be a major issue for cinematography due to the challenges of both manufacturing a single screen in that size and creating an appropriate projection system. Regarding digital signage in general, having a light field visualization system outdoors is definitely a challenging endeavour. While unfavorable lighting conditions (e.g., exceptionally sunny weather) can be overcome by the necessary projector properties, the system may have a great maintenance cost (e.g., due to potential damage), guaranteeing proper operation temperatures may pose an issue, the continuous operation itself may be taxing in general, and the total system size of interactive units may also be difficult to minimize.
Achieving a visual-Turing-test-passing level of excellence in light field visualization requires, in each and every use case, that the observer may focus within the visualized content at different depths; in the case of general 3D perception, the eyes normally focus on the plane of the screen. In order to enable this perceptual phenomenon, super-resolution is needed. In scientific research, the technical term super-resolution often refers to the enhancement of the resolution of light field content, also known as image super-resolution [183,184,185,186]. In the context of displays, however, super-resolution means an angular resolution so high that at least two distinct light rays may address a single pupil of an individual with respect to a given point on the screen, which is required for the above-mentioned focusing. This concept is illustrated in Figure 3. The problem is that many use cases need to support greater viewing distances; however, the farther away the observer is, the lower the perceived angular density is. At the time of writing this paper, enabling super-resolution even for the shortest feasible viewing distances is a major technological challenge.
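The geometric condition behind super-resolution can be sketched as follows (a simplified model; the pupil diameter value is an assumption for illustration). For at least two adjacent rays originating from one screen point to enter a single pupil, their separation at the viewing distance must stay below the pupil diameter:

```latex
d \cdot \tan(\Delta\theta) < p
\quad \Longrightarrow \quad
\Delta\theta < \arctan\!\left(\frac{p}{d}\right)
```

where Δθ is the angular resolution of the display (the angle between adjacent rays), d is the viewing distance, and p is the pupil diameter. With p ≈ 5 mm and d = 1 m, this already demands Δθ < 0.29 degrees, and the bound only tightens as the viewer moves farther away, which illustrates why super-resolution at practical viewing distances remains such a major challenge.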

5.3. The Ugly Truth

The ugly truth is that no matter how much research is conducted on light field visualization, it is possible that 3D interactions will not be able to surpass conventional controls in many aspects. The most important characteristics of interactions are task completion time, input accuracy, cognitive demand, and QoE. For task completion time, we can see that 3D interactions generally take more time to complete. This is partially due to the fact that the interface elements may be at different depths, and therefore, it evidently takes more time to reach elements that are physically farther away. If every element on an interface aligns to a single plane, then this issue may be mitigated. However, there are many time-sensitive use cases, which may not tolerate the additional action delay. This aspect is highly intertwined with input accuracy, which is also crucial to a great number of use cases, yet based on what we can see so far, 3D light field interfaces tend to under-perform in comparison with conventional controls. Although cognitive demand may be an issue as well, it is less important in terms of use case success, and there may also be a phase of adaptation that may compensate. After all, new technologies may be demanding at first. Note that compensation to a certain degree for task completion time and input accuracy is expected. Regarding QoE, on the one hand, such 3D interfaces may provide an exceptional experience through novelty and visual appearance, but on the other hand, guaranteeing a sufficiently good visual experience is a rather challenging task for light field visualization. Furthermore, poor task performance—particularly the potential frustration caused by insufficient input accuracy—may severely penalize the overall QoE.
It is also possible that many of the use cases will remain constrained. For example, light field technology can only visualize spatially finite contents. For HCIs, this is not necessarily an issue, as such visual interfaces are meant to be finite by definition. Of course, there may be design elements that point toward great depths and distances, but those do not contribute to HCI functionalities. However, consider the use case of gaming, where certain genres tend to visualize virtually infinite distances (e.g., outdoor, open-world, first-person games); cinematography is a passive use case facing the same constraint.
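One common way to reconcile unbounded scene depth with a finite display is depth retargeting, i.e., compressing scene depth into the depth budget of the display. The following sketch uses a simple saturating curve; both the curve and its "knee" parameter are illustrative assumptions rather than an established standard.

```python
def retarget_depth(z_scene: float, depth_budget: float,
                   knee: float = 10.0) -> float:
    """Map unbounded scene depth (m) into a finite display depth budget.

    A simple saturating curve: near geometry keeps most of its depth,
    while distant geometry asymptotically approaches the budget. The
    curve and the 'knee' parameter are illustrative choices.
    """
    return depth_budget * z_scene / (z_scene + knee)

for z in (1.0, 10.0, 100.0, 1_000_000.0):
    print(f"scene z = {z:>9.0f} m -> display z = {retarget_depth(z, 0.5):.3f} m")
```

Distant geometry is squeezed against the far end of the budget, which preserves depth ordering but flattens the parallax of faraway content; this is precisely why genres that rely on vast vistas remain constrained.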
At the end of the day, we need to ask ourselves: Is it really such a great problem if the user prefers conventional controls over 3D light field interfaces? Light field HCIs may have numerous benefits, yet we need to face the fact that they are not necessary for the success of many active use cases. Light field visualization technology, as the name suggests, is a visualization technology, and while using it as an HCI is indeed an option worth considering, the primary focus shall always remain on the visualization of the content. It should also be noted that 3D light field HCIs may be used in conjunction with other visualization technologies. A good example is replacing the physical touchscreen of ticket vendor machines: while the information is visualized on a flat 2D screen, the touch-free input comes from a virtual 3D interface. Naturally, the considerations regarding 3D input discussed earlier apply to such hybrid setups as well.
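As a concrete illustration of such a hybrid setup, the sketch below performs hit-testing for a touch-free ticket machine: a fingertip position from a hypothetical hand tracker is compared against a virtual activation plane in front of a 2D screen. All names, coordinates, and thresholds are assumptions made for the sake of the example, not a specific vendor API.

```python
from dataclasses import dataclass
from typing import Optional, Sequence, Tuple

@dataclass
class Button:
    name: str
    x0: float
    y0: float
    x1: float
    y1: float  # screen-plane rectangle, in meters

def hit_test(finger: Tuple[float, float, float],
             buttons: Sequence[Button],
             press_depth: float = 0.02) -> Optional[str]:
    """Return the button 'pressed' by a tracked fingertip.

    The fingertip (x, y, z) is assumed to come from a hypothetical hand
    tracker, with z measured from the screen plane; a press registers
    when the finger crosses a virtual activation plane in front of the
    screen. All names and thresholds are illustrative.
    """
    x, y, z = finger
    if z > press_depth:  # finger has not crossed the activation plane yet
        return None
    for b in buttons:
        if b.x0 <= x <= b.x1 and b.y0 <= y <= b.y1:
            return b.name
    return None

buttons = [Button("single ticket", 0.00, 0.00, 0.10, 0.05),
           Button("day pass", 0.00, 0.06, 0.10, 0.11)]
print(hit_test((0.04, 0.08, 0.01), buttons))  # -> 'day pass'
```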

6. Conclusions

In this paper, we provided a comprehensive review on 3D interactions with light field displays. We categorized the potential use cases by interaction and analyzed the active use cases by interaction type, time sensitivity, simultaneous users, simultaneous input, and input accuracy. We examined the state-of-the-art solutions and discussed the positive and negative aspects of current and future research. We conclude that the utilization of the technology has immense potential, and both the passive and the active use cases may greatly benefit humanity; yet there are significant constraints, and it is quite possible that 3D interactions via light field displays will not prove superior in every single aspect.
Regarding future work, there is a vast ocean of research questions that need to be addressed. Every active use case should be properly evaluated with particular emphasis on its characteristics, and further use cases should be explored. From the perspective of the authors, possibly the most exciting research direction is the investigation of split-domain solutions, for which domain separation, dynamic domains, domain capacities, uneven domain distribution, simultaneous input, asynchronous solutions, and inter-user effects should all be addressed.

Author Contributions

Conceptualization, P.A.K. and A.S.; methodology, P.A.K. and A.S.; investigation, P.A.K.; resources, A.S.; writing—original draft preparation, P.A.K. and A.S.; writing—review and editing, P.A.K. and A.S.; visualization, P.A.K.; supervision, P.A.K.; project administration, A.S.; funding acquisition, P.A.K. All authors have read and agreed to the published version of the manuscript.

Funding

Project no. TKP2021-NVA-02 has been implemented with the support provided by the Ministry of Culture and Innovation of Hungary from the National Research, Development and Innovation Fund, financed under the TKP2021-NVA funding scheme.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank Tibor Balogh and Holografika for the know-how and expertise that ultimately led to the creation of this article.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AR    Augmented Reality
FoLD  Field of Light Display
FOV   Field of View
FP    Full Parallax
HCI   Human–Computer Interaction
HMD   Head-Mounted Display
HVS   Human Visual System
HOP   Horizontal-Only Parallax
IBR   Image-Based Rendering
IEC   International Electrotechnical Commission
ISO   International Organization for Standardization
JPEG  Joint Photographic Experts Group
QoE   Quality of Experience
S3D   Stereoscopic 3D
SMV   Super Multi-View
TLX   Task Load Index
TUI   Touchless User Interface
UEQ   User Experience Questionnaire
V2X   Vehicle-to-Everything
VOP   Vertical-Only Parallax
VVA   Valid Viewing Area
VR    Virtual Reality

References

  1. Wheatstone, C. XVIII. Contributions to the physiology of vision—Part the first. On some remarkable, and hitherto unobserved, phenomena of binocular vision. Philos. Trans. R. Soc. Lond. 1838, 128, 371–394. [Google Scholar]
  2. Brewster, D. II. Description of several new and simple stereoscopes for exhibiting, as solids, one or more representations of them on a plane. Lond. Edinb. Dublin Philos. Mag. J. Sci. 1852, 3, 16–26. [Google Scholar] [CrossRef]
  3. Brewster, D. The Stereoscope; Its History, Theory, and Construction: With Its Application to the Fine and Useful Arts and to Education; John Murray: London, UK, 1856. [Google Scholar]
  4. Holmes, O.W. The stereoscope and the stereograph. Atl. Mon. 1859, 3, 1–8. [Google Scholar]
  5. Gabor, D. A new microscopic principle. Nature 1948, 161, 777–778. [Google Scholar]
  6. Gabor, D. Microscopy by reconstructed wave-fronts. Proc. R. Soc. Lond. Ser. Math. Phys. Sci. 1949, 197, 454–487. [Google Scholar] [CrossRef]
  7. Gabor, D. Holography, 1948–1971. Science 1972, 177, 299–313. [Google Scholar] [CrossRef]
  8. Haine, M.; Mulvey, T. The formation of the diffraction image with electrons in the Gabor diffraction microscope. JOSA 1952, 42, 763–773. [Google Scholar] [CrossRef]
  9. Blundell, B.G.; Schwarz, A.J. The classification of volumetric display systems: Characteristics and predictability of the image space. IEEE Trans. Vis. Comput. Graph. 2002, 8, 66–75. [Google Scholar] [CrossRef]
  10. Gately, M.; Zhai, Y.; Yeary, M.; Petrich, E.; Sawalha, L. A three-dimensional swept volume display based on LED arrays. J. Disp. Technol. 2011, 7, 503–514. [Google Scholar] [CrossRef]
  11. Sawalha, L.; Tull, M.P.; Gately, M.B.; Sluss, J.J.; Yeary, M.; Barnes, R.D. A large 3D swept-volume video display. J. Disp. Technol. 2012, 8, 256–268. [Google Scholar] [CrossRef]
  12. Asahina, R.; Nomoto, T.; Yoshida, T.; Watanabe, Y. Realistic 3D swept-volume display with hidden-surface removal using physical materials. In Proceedings of the 2021 IEEE Virtual Reality and 3D User Interfaces (VR), Lisboa, Portugal, 27 March–1 April 2021; pp. 113–121. [Google Scholar]
  13. Hardy, A.C. A study of the persistence of vision. Proc. Natl. Acad. Sci. USA 1920, 6, 221–224. [Google Scholar] [CrossRef]
  14. Dhruv, A.; Shah, D.; Shah, D.; Raikar, A.; Bhattacharjee, S. Wireless Remote Controlled POV Display. Int. J. Comput. Appl. 2015, 115, 4–9. [Google Scholar] [CrossRef]
  15. Al-Natsheh, W.H.; Hammad, B.K.; Zaid, M.A.A. Design and implementation of a cylindrical persistence of vision display. In Proceedings of the 2019 6th International Conference on Electrical and Electronics Engineering (ICEEE), Istanbul, Turkey, 16–17 April 2019; pp. 215–219. [Google Scholar]
  16. Langhans, K.; Guill, C.; Rieper, E.; Oltmann, K.; Bahr, D. Solid Felix: A static volume 3D-laser display. In Proceedings of the Stereoscopic Displays and Virtual Reality Systems X, Santa Clara, CA, USA, 20–24 January 2003; SPIE: Bellingham, DC, USA, 2003; Volume 5006, pp. 161–174. [Google Scholar]
  17. Downing, E.; Hesselink, L.; Ralston, J.; Macfarlane, R. A three-color, solid-state, three-dimensional display. Science 1996, 273, 1185–1189. [Google Scholar] [CrossRef]
  18. Lam, M.L.; Chen, B.; Lam, K.Y.; Huang, Y. 3D fog display using parallel linear motion platforms. In Proceedings of the 2014 International Conference on Virtual Systems & Multimedia (VSMM), Hong Kong, China, 9–12 December 2014; pp. 234–237. [Google Scholar]
  19. Lam, M.L.; Huang, Y.; Chen, B. Interactive volumetric fog display. In SIGGRAPH Asia 2015 Emerging Technologies; Association for Computing Machinery: New York, NY, USA, 2015; pp. 1–2. [Google Scholar]
  20. Lam, M.L.; Chen, B.; Huang, Y. A novel volumetric display using fog emitter matrix. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 4452–4457. [Google Scholar]
  21. Vasconcelos, R.; Zeuner, J.; Greganti, C. Laser light field display. In Proceedings of the Advances in Display Technologies XII, San Francisco, CA, USA, 3 March 2022; SPIE: Bellingham, DC, USA, 2022; Volume 12024, pp. 33–41. [Google Scholar]
  22. Son, J.Y.; Lee, H.; Lee, B.R.; Byeon, J.; Park, M.C. Holographic and light field displays: What are the differences? In Proceedings of the 2017 16th Workshop on Information Optics (WIO), Interlaken, Switzerland, 3–7 July 2017; pp. 1–2. [Google Scholar]
  23. Abileah, A. 65-3: Invited Paper: Light-Field, Holographic and Volumetric Display Measurements. In Proceedings of the SID Symposium Digest of Technical Papers, San Francisco, CA, USA, 22–27 May 2016; Wiley Online Library; Volume 47, pp. 888–891. [Google Scholar]
  24. Bichal, A.; Burnett, T. 15-2: Metrology for Field-of-Light Displays. In Proceedings of the SID Symposium Digest of Technical Papers, Los Angeles, CA, USA, 21–25 May 2018; Wiley Online Library; Volume 49, pp. 165–168. [Google Scholar]
  25. Hamilton, M.; Wells, N.; Soares, A. On Requirements for Field of Light Displays to Pass the Visual Turing Test. In Proceedings of the 2022 IEEE International Symposium on Multimedia (ISM), Naples, Italy, 5–7 December 2022; pp. 86–87. [Google Scholar]
  26. Boev, A.; Bregovic, R.; Gotchev, A. Signal processing for stereoscopic and multi-view 3D displays. In Handbook of Signal Processing Systems; Springer: New York, NY, USA, 2013; pp. 3–47. [Google Scholar]
  27. Yang, L.; Sang, X.; Yu, X.; Liu, B.; Yan, B.; Wang, K.; Yu, C. A crosstalk-suppressed dense multi-view light-field display based on real-time light-field pickup and reconstruction. Opt. Express 2018, 26, 34412–34427. [Google Scholar] [CrossRef] [PubMed]
  28. Wang, P.; Sang, X.; Yu, X.; Gao, X.; Yan, B.; Liu, B.; Liu, L.; Gao, C.; Le, Y.; Li, Y.; et al. Demonstration of a low-crosstalk super multi-view light field display with natural depth cues and smooth motion parallax. Opt. Express 2019, 27, 34442–34453. [Google Scholar] [CrossRef] [PubMed]
  29. Wan, W.; Qiao, W.; Pu, D.; Chen, L. Super multi-view display based on pixelated nanogratings under an illumination of a point light source. Opt. Lasers Eng. 2020, 134, 106258. [Google Scholar] [CrossRef]
  30. Ueno, T.; Takaki, Y. Super multi-view near-eye display to solve vergence–accommodation conflict. Opt. Express 2018, 26, 30703–30715. [Google Scholar] [CrossRef]
  31. Liu, L.; Cai, J.; Pang, Z.; Teng, D. Super multi-view near-eye 3D display with enlarged field of view. Opt. Eng. 2021, 60, 085103. [Google Scholar] [CrossRef]
  32. Liu, L.; Ye, Q.; Pang, Z.; Huang, H.; Lai, C.; Teng, D. Polarization enlargement of FOV in Super Multi-view display based on near-eye timing-apertures. Opt. Express 2022, 30, 1841–1859. [Google Scholar] [CrossRef]
  33. Balogh, T. The HoloVizio system. In Proceedings of the Stereoscopic Displays and Virtual Reality Systems XIII, San Jose, CA, USA, 28 January–1 February 2006; SPIE: Bellingham, DC, USA, 2006; Volume 6055, pp. 279–290. [Google Scholar]
  34. Balogh, T.; Kovács, P.T.; Barsi, A. Holovizio 3D display system. In Proceedings of the 2007 3DTV Conference, Kos, Greece, 7–9 May 2007; pp. 1–4. [Google Scholar]
  35. Balogh, T.; Kovács, P.T.; Dobrányi, Z.; Barsi, A.; Megyesi, Z.; Gaál, Z.; Balogh, G. The Holovizio system–New opportunity offered by 3D displays. In Proceedings of the TMCE, Izmir, Turkey, 21–25 April 2008; pp. 79–89. [Google Scholar]
  36. Balogh, T.; Barsi, A.; Kara, P.A.; Guindy, M.; Simon, A.; Nagy, Z. 3D light field LED wall. In Proceedings of the Digital Optical Technologies 2021, Online, 20 June 2021; SPIE: Bellingham, DC, USA, 2021; Volume 11788, pp. 180–190. [Google Scholar]
  37. Teng, D.; Liu, L. P-95: Full Resolution 3D Display on Computer Screen Free from Accommodation-convergence Conflict. In Proceedings of the SID Symposium Digest of Technical Papers, Los Angeles, CA, USA, 21–26 May 2017; Wiley Online Library; Volume 48, pp. 1607–1609. [Google Scholar]
  38. Alpaslan, Z.Y.; El-Ghoroury, H.S. Small form factor full parallax tiled light field display. In Proceedings of the Stereoscopic Displays and Applications XXVI, San Francisco, CA, USA, 17 March 2015; SPIE: Bellingham, DC, USA, 2015; Volume 9391, pp. 92–101. [Google Scholar]
  39. Lanman, D.; Wetzstein, G.; Hirsch, M.; Heidrich, W.; Raskar, R. Polarization fields: Dynamic light field display using multi-layer LCDs. In Proceedings of the SA’11: SIGGRAPH Asia 2011, Hong Kong, China, 12–15 December 2011; ACM: New York, NY, USA, 2011; pp. 1–10. [Google Scholar]
  40. Zhao, W.X.; Wang, Q.H.; Wang, A.H.; Li, D.H. Autostereoscopic display based on two-layer lenticular lenses. Opt. Lett. 2010, 35, 4127–4129. [Google Scholar] [CrossRef]
  41. Yu, X.; Sang, X.; Gao, X.; Chen, Z.; Chen, D.; Duan, W.; Yan, B.; Yu, C.; Xu, D. Large viewing angle three-dimensional display with smooth motion parallax and accurate depth cues. Opt. Express 2015, 23, 25950–25958. [Google Scholar] [CrossRef]
  42. Lee, B.; Park, J.H.; Min, S.W. Three-dimensional display and information processing based on integral imaging. In Digital Holography and Three-Dimensional Display: Principles and Applications; Springer: New York, NY, USA, 2006; pp. 333–378. [Google Scholar]
  43. Zhong, Q.; Chen, B.; Li, H.; Liu, X.; Xia, J.; Wang, B.; Xu, H. Multi-projector-type immersive light field display. Chin. Opt. Lett. 2014, 12, 060009. [Google Scholar] [CrossRef]
  44. Shim, H.; Lee, D.; Park, J.; Yoon, S.; Kim, H.; Kim, K.; Heo, D.; Kim, B.; Hahn, J.; Kim, Y.; et al. Development of a scalable tabletop display using projection-based light field technology. J. Inf. Disp. 2021, 22, 285–292. [Google Scholar] [CrossRef]
  45. Jang, W.; Shim, H.; Lee, D.; Park, J.; kyu Yoon, S.; Kim, H.; Chun, S.; Lee, K. Development of High Performance 35” Tabletop Display using Projection-based Light Field Technology. In Proceedings of the Digital Holography and Three-Dimensional Imaging, Bordeaux, France, 19–23 May 2019; Optica Publishing Group: Washington, DC, USA, 2019; p. M3A.5. [Google Scholar]
  46. Kara, P.A.; Martini, M.G.; Nagy, Z.; Barsi, A. Cinema as large as life: Large-scale light field cinema system. In Proceedings of the 2017 International Conference on 3D Immersion (IC3D), Brussels, Belgium, 11–12 December 2017; pp. 1–8. [Google Scholar]
  47. Balogh, T.; Nagy, Z.; Kovács, P.T.; Adhikarla, V.K. Natural 3D content on glasses-free light-field 3D cinema. In Proceedings of the Stereoscopic Displays and Applications XXIV, Burlingame, CA, USA, 12 March 2013; SPIE: Bellingham, DC, USA, 2013; Volume 8648, pp. 103–110. [Google Scholar]
  48. Yang, S.; Sang, X.; Yu, X.; Gao, X.; Liu, L.; Liu, B.; Yang, L. 162-inch 3D light field display based on aspheric lens array and holographic functional screen. Opt. Express 2018, 26, 33013–33021. [Google Scholar] [CrossRef] [PubMed]
  49. Gotsch, D.; Zhang, X.; Merritt, T.; Vertegaal, R. TeleHuman2: A Cylindrical Light Field Teleconferencing System for Life-size 3D Human Telepresence. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; Volume 18, p. 552. [Google Scholar]
  50. Cserkaszky, A.; Barsi, A.; Nagy, Z.; Puhr, G.; Balogh, T.; Kara, P.A. Real-time light-field 3D telepresence. In Proceedings of the 2018 7th European Workshop on Visual Information Processing (EUVIP), Tampere, Finland, 26–28 November 2018; pp. 1–5. [Google Scholar]
  51. Zhang, X.; Braley, S.; Rubens, C.; Merritt, T.; Vertegaal, R. LightBee: A self-levitating light field display for hologrammatic telepresence. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow Scotland, UK, 4–9 May 2019; pp. 1–10. [Google Scholar]
  52. Pittarello, F.; Dumitriu, A.; Piazza, E. 3D interaction with mouse-keyboard, gamepad and leap motion: A comparative study. In Proceedings of the Smart Objects and Technologies for Social Good: Third International Conference, GOODTECHS 2017, Pisa, Italy, 29–30 November 2017; Proceedings 3. Springer: Berlin/Heidelberg, Germany, 2018; pp. 122–131. [Google Scholar]
  53. Ardito, C.; Buono, P.; Costabile, M.F.; Lanzilotti, R.; Simeone, A.L. Comparing low cost input devices for interacting with 3D Virtual Environments. In Proceedings of the 2009 2nd Conference on Human System Interactions, Catania, Italy, 21–23 May 2009; pp. 292–297. [Google Scholar]
  54. Perret, J.; Vander Poorten, E. Touching virtual reality: A review of haptic gloves. In Proceedings of the ACTUATOR 2018; 16th International Conference on New Actuators, Bremen, Germany, 25–27 June 2018; VDE: Berlin, Germany, 2018; pp. 1–5. [Google Scholar]
  55. Shigapov, M.; Kugurakova, V.; Zykov, E. Design of digital gloves with feedback for VR. In Proceedings of the 2018 IEEE East-West Design & Test Symposium (EWDTS), Kazan, Russia, 14–17 September 2018; pp. 1–5. [Google Scholar]
  56. Shor, D.; Zaaijer, B.; Ahsmann, L.; Immerzeel, S.; Weetzel, M.; Eikelenboom, D.; Hartcher-O’Brien, J.; Aschenbrenner, D. Designing Haptics: Comparing Two Virtual Reality Gloves with Respect to Realism, Performance and Comfort. In Proceedings of the 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), Munich, Germany, 16–20 October 2018; pp. 318–323. [Google Scholar]
  57. Civelek, T.; Fuhrmann, A. Virtual Reality Learning Environment with Haptic Gloves. In Proceedings of the 2022 3rd International Conference on Education Development and Studies, Hilo, HI, USA, 9–11 March 2022; pp. 32–36. [Google Scholar]
  58. Kim, S.; Gu, S.; Kim, J. Variable Shape and Stiffness Feedback System for VR Gloves Using SMA Textile Actuator. Fibers Polym. 2022, 23, 836–842. [Google Scholar] [CrossRef]
  59. Perret, J.; Vander Poorten, E. Commercial haptic gloves. In Proceedings of the 15th Annual EuroVR Conference, London, UK, 22 October 2018; VTT Technology: Espoo, Finland, 2018; pp. 39–48. [Google Scholar]
  60. Caeiro-Rodríguez, M.; Otero-González, I.; Mikic-Fonte, F.A.; Llamas-Nistal, M. A systematic review of commercial smart gloves: Current status and applications. Sensors 2021, 21, 2667. [Google Scholar] [CrossRef]
  61. Lippmann, G. La photographie intégrale. Comptes-Rendus Acad. Des Sci. 1908, 146, 446–451. [Google Scholar]
  62. Gershun, A. The light field. J. Math. Phys. 1939, 18, 51–151. [Google Scholar] [CrossRef]
  63. Faraday, M. LIV. Thoughts on ray-vibrations. Lond. Edinb. Dublin Philos. Mag. J. Sci. 1846, 28, 345–350. [Google Scholar] [CrossRef]
  64. Adelson, E.H.; Bergen, J.R. The plenoptic function and the elements of early vision. Comput. Model. Vis. Process. 1991, 1, 3–20. [Google Scholar]
  65. McMillan, L.; Bishop, G. Plenoptic modeling: An image-based rendering system. In Proceedings of the 22nd annual conference on Computer graphics and interactive techniques, Los Angeles, CA, USA, 15 September 1995; pp. 39–46. [Google Scholar]
  66. Levoy, M.; Hanrahan, P. Light field rendering. In Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, New Orleans, LA, USA, 4–9 August 1996; pp. 31–42. [Google Scholar]
  67. Gortler, S.J.; Grzeszczuk, R.; Szeliski, R.; Cohen, M.F. The lumigraph. In Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, New Orleans, LA, USA, 4–9 August 1996; pp. 43–54. [Google Scholar]
  68. Yang, J.C.; Everett, M.; Buehler, C.; McMillan, L. A real-time distributed light field camera. Render. Tech. 2002, 2002, 3. [Google Scholar]
  69. Jones, A.; McDowall, I.; Yamada, H.; Bolas, M.; Debevec, P. Rendering for an interactive 360° light field display. In ACM SIGGRAPH 2007 Papers; Association for Computing Machinery: New York, NY, USA, 2007; pp. 1–10. [Google Scholar]
  70. Lanman, D.; Hirsch, M.; Kim, Y.; Raskar, R. Content-adaptive parallax barriers: Optimizing dual-layer 3D displays using low-rank light field factorization. In ACM SIGGRAPH Asia 2010 Papers; Association for Computing Machinery: New York, NY, USA, 2010; pp. 1–10. [Google Scholar]
  71. Wetzstein, G.; Lanman, D.; Heidrich, W.; Raskar, R. Layered 3D: Tomographic image synthesis for attenuation-based light field and high dynamic range displays. In ACM SIGGRAPH 2011 Papers; Association for Computing Machinery: New York, NY, USA, 2011; pp. 1–12. [Google Scholar]
  72. Wetzstein, G.; Lanman, D.R.; Hirsch, M.W.; Raskar, R. Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting. ACM Trans. Graph. 2012, 31, 1–11. [Google Scholar] [CrossRef]
  73. Ebrahimi, T.; Foessel, S.; Pereira, F.; Schelkens, P. JPEG Pleno: Toward an efficient representation of visual reality. IEEE Multimed. 2016, 23, 14–20. [Google Scholar] [CrossRef]
  74. Schelkens, P.; Alpaslan, Z.Y.; Ebrahimi, T.; Oh, K.J.; Pereira, F.M.; Pinheiro, A.M.; Tabus, I.; Chen, Z. JPEG Pleno: A standard framework for representing and signaling plenoptic modalities. In Proceedings of the Applications of Digital Image Processing XLI, San Diego, CA, USA, 23 August 2018; SPIE: Bellingham, DC, USA, 2018; Volume 10752, pp. 544–553. [Google Scholar]
  75. Schelkens, P.; Astola, P.; Da Silva, E.A.; Pagliari, C.; Perra, C.; Tabus, I.; Watanabe, O. JPEG Pleno light field coding technologies. In Proceedings of the Applications of Digital Image Processing XLII, San Diego, CA, USA, 12–15 August 2019; SPIE: Bellingham, DC, USA, 2019; Volume 11137, pp. 391–401. [Google Scholar]
  76. Magnor, M.; Girod, B. Data compression for light-field rendering. IEEE Trans. Circuits Syst. Video Technol. 2000, 10, 338–343. [Google Scholar] [CrossRef]
  77. Liu, D.; Wang, L.; Li, L.; Xiong, Z.; Wu, F.; Zeng, W. Pseudo-sequence-based light field image compression. In Proceedings of the 2016 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), Seattle, WA, USA, 11–15 July 2016; pp. 1–4. [Google Scholar]
  78. Chen, J.; Hou, J.; Chau, L.P. Light field compression with disparity-guided sparse coding based on structural key views. IEEE Trans. Image Process. 2017, 27, 314–324. [Google Scholar] [CrossRef]
  79. Jiang, X.; Le Pendu, M.; Farrugia, R.A.; Guillemot, C. Light field compression with homography-based low-rank approximation. IEEE J. Sel. Top. Signal Process. 2017, 11, 1132–1145. [Google Scholar] [CrossRef]
  80. Jiang, X.; Le Pendu, M.; Guillemot, C. Light field compression using depth image based view synthesis. In Proceedings of the 2017 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), Hong Kong, China, 10–14 July 2017; pp. 19–24. [Google Scholar]
  81. Dricot, A.; Jung, J.; Cagnazzo, M.; Pesquet, B.; Dufaux, F.; Kovács, P.T.; Adhikarla, V.K. Subjective evaluation of Super Multi-View compressed contents on high-end light-field 3D displays. Signal Process. Image Commun. 2015, 39, 369–385. [Google Scholar] [CrossRef]
  82. Viola, I.; Řeřábek, M.; Bruylants, T.; Schelkens, P.; Pereira, F.; Ebrahimi, T. Objective and subjective evaluation of light field image compression algorithms. In Proceedings of the 2016 Picture Coding Symposium (PCS), Nuremberg, Germany, 4–7 December 2016; pp. 1–5. [Google Scholar]
  83. Viola, I.; Řeřábek, M.; Ebrahimi, T. Comparison and evaluation of light field image coding approaches. IEEE J. Sel. Top. Signal Process. 2017, 11, 1092–1106. [Google Scholar] [CrossRef]
  84. Paudyal, P.; Battisti, F.; Sjöström, M.; Olsson, R.; Carli, M. Towards the perceptual quality evaluation of compressed light field images. IEEE Trans. Broadcast. 2017, 63, 507–522. [Google Scholar] [CrossRef]
  85. Viola, I.; Takahashi, K.; Fujii, T.; Ebrahimi, T. Rendering-dependent compression and quality evaluation for light field contents. In Proceedings of the Applications of Digital Image Processing XLII, San Diego, CA, USA, 12–15 August 2019; SPIE: Bellingham, DC, USA, 2019; Volume 11137, pp. 414–426. [Google Scholar]
  86. Bakir, N.; Fezza, S.A.; Hamidouche, W.; Samrouth, K.; Déforges, O. Subjective evaluation of light field image compression methods based on view synthesis. In Proceedings of the 2019 27th European Signal Processing Conference (EUSIPCO), A Coruna, Spain, 2–6 September 2019; pp. 1–5. [Google Scholar]
  87. Viola, I.; Ebrahimi, T. An in-depth analysis of single-image subjective quality assessment of light field contents. In Proceedings of the 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX), Berlin, Germany, 5–7 June 2019; pp. 1–6. [Google Scholar]
  88. PhiCong, H.; Perry, S.; Cheng, E.; HoangVan, X. Objective quality assessment metrics for light field image based on textural features. Electronics 2022, 11, 759. [Google Scholar] [CrossRef]
  89. Tamboli, R.R.; Kara, P.A.; Bisht, N.; Barsi, A.; Martini, M.G.; Jana, S. Objective quality assessment of 2D synthesized views for light-field visualization. In Proceedings of the 2018 International Conference on 3D Immersion (IC3D), Brussels, Belgium, 5–6 December 2018; pp. 1–7. [Google Scholar]
  90. Shi, L.; Zhou, W.; Chen, Z.; Zhang, J. No-reference light field image quality assessment based on spatial-angular measurement. IEEE Trans. Circuits Syst. Video Technol. 2019, 30, 4114–4128. [Google Scholar] [CrossRef]
  91. Shan, L.; An, P.; Meng, C.; Huang, X.; Yang, C.; Shen, L. A no-reference image quality assessment metric by multiple characteristics of light field images. IEEE Access 2019, 7, 127217–127229. [Google Scholar] [CrossRef]
  92. Paudyal, P.; Battisti, F.; Carli, M. Reduced reference quality assessment of light field images. IEEE Trans. Broadcast. 2019, 65, 152–165. [Google Scholar] [CrossRef]
  93. Luo, Z.; Zhou, W.; Shi, L.; Chen, Z. No-reference light field image quality assessment based on micro-lens image. In Proceedings of the 2019 Picture Coding Symposium (PCS), Ningbo, China, 12–15 November 2019; pp. 1–5. [Google Scholar]
  94. Zhou, W.; Shi, L.; Chen, Z.; Zhang, J. Tensor oriented no-reference light field image quality assessment. IEEE Trans. Image Process. 2020, 29, 4070–4084. [Google Scholar] [CrossRef] [PubMed]
  95. Rerabek, M.; Ebrahimi, T. New light field image dataset. In Proceedings of the 8th International Conference on Quality of Multimedia Experience (QoMEX), number CONF, Lisbon, Portugal, 6–8 June 2016. [Google Scholar]
  96. Paudyal, P.; Olsson, R.; Sjöström, M.; Battisti, F.; Carli, M. SMART: A light field image quality dataset. In Proceedings of the 7th International Conference on Multimedia Systems, Klagenfurt, Austria, 10–13 May 2016; pp. 1–6. [Google Scholar]
  97. Murgia, F.; Giusto, D. A database for evaluating the quality of experience in light field applications. In Proceedings of the 2016 24th Telecommunications Forum (TELFOR), Belgrade, Serbia, 22–23 November 2016; pp. 1–4. [Google Scholar]
  98. Shekhar, S.; Kunz Beigpour, S.; Ziegler, M.; Chwesiuk, M.; Paleń, D.; Myszkowski, K.; Keinert, J.; Mantiuk, R.; Didyk, P. Light-field intrinsic dataset. In Proceedings of the British Machine Vision Conference 2018 (BMVC), Newcastle, UK, 3–6 September 2018; British Machine Vision Association: Durham, UK, 2018. [Google Scholar]
  99. Tamboli, R.R.; Reddy, M.S.; Kara, P.A.; Martini, M.G.; Channappayya, S.S.; Jana, S. A high-angular-resolution turntable data-set for experiments on light field visualization quality. In Proceedings of the 2018 Tenth International Conference on Quality of Multimedia Experience (QoMEX), Cagliari, Italy, 29 May–1 June 2018; pp. 1–3. [Google Scholar]
  100. Zakeri, F.S.; Durmush, A.; Ziegler, M.; Bätz, M.; Keinert, J. Non-planar inside-out dense light-field dataset and reconstruction pipeline. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; pp. 1059–1063. [Google Scholar]
  101. Moreschini, S.; Gama, F.; Bregovic, R.; Gotchev, A. CIVIT dataset: Horizontal-parallax-only densely-sampled light-fields. In Proceedings of the European Light Field Imaging Workshop, Borovets, Bulgaria, 4–6 June 2019; Volume 6. [Google Scholar]
  102. Gul, M.S.K.; Wolf, T.; Bätz, M.; Ziegler, M.; Keinert, J. A high-resolution high dynamic range light-field dataset with an application to view synthesis and tone-mapping. In Proceedings of the 2020 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), London, UK, 6–10 July 2020; pp. 1–6. [Google Scholar]
  103. Guindy, M.; Adhikarla, V.K.; Kara, P.A.; Balogh, T.; Simon, A. CLASSROOM: Synthetic high dynamic range light field dataset. In Proceedings of the Applications of Digital Image Processing XLV, San Diego, CA, USA, 21–26 August 2022; SPIE: Bellingham, DC, USA, 2022; Volume 12226, pp. 153–162. [Google Scholar]
  104. Wang, S.; Ong, K.S.; Surman, P.; Yuan, J.; Zheng, Y.; Sun, X.W. Quality of experience measurement for light field 3D displays on multilayer LCDs. J. Soc. Inf. Disp. 2016, 24, 726–740. [Google Scholar] [CrossRef]
  105. Tamboli, R.R.; Appina, B.; Channappayya, S.S.; Jana, S. Achieving high angular resolution via view synthesis: Quality assessment of 3D content on super multiview lightfield display. In Proceedings of the 2017 International Conference on 3D Immersion (IC3D), Brussels, Belgium, 11–12 December 2017; pp. 1–8. [Google Scholar]
  106. Cserkaszky, A.; Barsi, A.; Kara, P.A.; Martini, M.G. To interpolate or not to interpolate: Subjective assessment of interpolation performance on a light field display. In Proceedings of the 2017 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), Hong Kong, China, 10–14 July 2017; pp. 55–60. [Google Scholar]
  107. Perra, C.; Song, W.; Liotta, A. Effects of light field subsampling on the quality of experience in refocusing applications. In Proceedings of the 2018 Tenth International Conference on Quality of Multimedia Experience (QoMEX), Cagliari, Italy, 29 May–1 June 2018; pp. 1–3. [Google Scholar]
  108. Perra, C. Assessing the quality of experience in viewing rendered decompressed light fields. Multimed. Tools Appl. 2018, 77, 21771–21790. [Google Scholar] [CrossRef]
  109. Yue, D.; Gul, M.S.K.; Bätz, M.; Keinert, J.; Mantiuk, R. A benchmark of light field view interpolation methods. In Proceedings of the 2020 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), London, UK, 6–10 July 2020; pp. 1–6. [Google Scholar]
  110. Min, X.; Zhou, J.; Zhai, G.; Le Callet, P.; Yang, X.; Guan, X. A metric for light field reconstruction, compression, and display quality evaluation. IEEE Trans. Image Process. 2020, 29, 3790–3804. [Google Scholar] [CrossRef]
  111. Kovács, P.T.; Lackner, K.; Barsi, A.; Balázs, Á.; Boev, A.; Bregović, R.; Gotchev, A. Measurement of perceived spatial resolution in 3D light-field displays. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 768–772. [Google Scholar]
  112. Tamboli, R.R.; Appina, B.; Channappayya, S.; Jana, S. Super-multiview content with high angular resolution: 3D quality assessment on horizontal-parallax lightfield display. Signal Process. Image Commun. 2016, 47, 42–55. [Google Scholar] [CrossRef]
  113. Alpaslan, Z.Y.; El-Ghoroury, H.S.; Cai, J. P-32: Parametric Characterization of Perceived Light Field Display Resolution. In Proceedings of the SID Symposium Digest of Technical Papers, San Francisco, CA, USA, 22–27 May 2016; Wiley Online Library; Volume 47, pp. 1241–1245. [Google Scholar]
  114. Kara, P.A.; Cserkaszky, A.; Barsi, A.; Papp, T.; Martini, M.G.; Bokor, L. The interdependence of spatial and angular resolution in the quality of experience of light field visualization. In Proceedings of the 2017 International Conference on 3D Immersion (IC3D), Brussels, Belgium, 11–12 December 2017; pp. 1–8. [Google Scholar]
  115. Viola, I.; Řeřábek, M.; Ebrahimi, T. Impact of interactivity on the assessment of quality of experience for light field content. In Proceedings of the 2017 Ninth International Conference on Quality of Multimedia Experience (QoMEX), Erfurt, Germany, 31 May–2 June 2017; pp. 1–6. [Google Scholar]
  116. Huang, Z.; Yu, M.; Xu, H.; Song, Y.; Jiang, H.; Jiang, G. New quality assessment method for dense light fields. In Proceedings of the Optoelectronic Imaging and Multimedia Technology V, Beijing, China, 2 November 2018; SPIE: Bellingham, DC, USA, 2018; Volume 10817, pp. 292–301. [Google Scholar]
  117. Kara, P.A.; Tamboli, R.R.; Cserkaszky, A.; Martini, M.G.; Barsi, A.; Bokor, L. The viewing conditions of light-field video for subjective quality assessment. In Proceedings of the 2018 International Conference on 3D Immersion (IC3D), Brussels, Belgium, 5–6 December 2018; pp. 1–8. [Google Scholar]
  118. Viola, I.; Ebrahimi, T. Comparison of Interactive Subjective Methodologies for Light Field Quality Evaluation. In Proceedings of the 2018 26th European Signal Processing Conference (EUSIPCO), Rome, Italy, 3–7 September 2018; pp. 1865–1869. [Google Scholar]
  119. Kara, P.A.; Tamboli, R.R.; Cserkaszky, A.; Barsi, A.; Simon, A.; Kusz, A.; Bokor, L.; Martini, M.G. Objective and subjective assessment of binocular disparity for projection-based light field displays. In Proceedings of the 2019 International Conference on 3D Immersion (IC3D), Brussels, Belgium, 11 December 2019; pp. 1–8. [Google Scholar]
  120. Kara, P.A.; Barsi, A.; Tamboli, R.R.; Guindy, M.; Martini, M.G.; Balogh, T.; Simon, A. Recommendations on the viewing distance of light field displays. In Proceedings of the Digital Optical Technologies 2021, Online, 21–24 June 2021; SPIE: Bellingham, DC, USA, 2021; Volume 11788, pp. 166–179. [Google Scholar]
  121. Paudyal, P.; Gutierrez, J.; Le Callet, P.; Carli, M.; Battisti, F. Characterization and selection of light field content for perceptual assessment. In Proceedings of the 2017 Ninth International Conference on Quality of Multimedia Experience (QoMEX), Erfurt, Germany, 31 May–2 June 2017; pp. 1–6. [Google Scholar]
  122. Tamboli, R.R.; Appina, B.; Kara, P.A.; Martini, M.G.; Channappayya, S.S.; Jana, S. Effect of primitive features of content on perceived quality of light field visualization. In Proceedings of the 2018 Tenth International Conference on Quality of Multimedia Experience (QoMEX), Cagliari, Italy, 29 May–1 June 2018; pp. 1–3. [Google Scholar]
  123. Simon, A.; Kara, P.A.; Guindy, M.; Qiu, X.; Szy, L.; Balogh, T. One step closer to a better experience: Analysis of the suitable viewing distance ranges of light field visualization usage contexts for observers with reduced visual capabilities. In Proceedings of the Novel Optical Systems, Methods, and Applications XXV, San Diego, CA, USA, 21–26 August 2022; SPIE: Bellingham, DC, USA, 2022; Volume 12216, pp. 133–143. [Google Scholar]
  124. Simon, A.; Guindy, M.; Kara, P.A.; Balogh, T.; Szy, L. Through a different lens: The perceived quality of light field visualization assessed by test participants with imperfect visual acuity and color blindness. In Proceedings of the Big Data IV: Learning, Analytics, and Applications, Orlando, FL, USA, 31 May 2022; SPIE: Bellingham, DC, USA, 2022; Volume 12097, pp. 212–221. [Google Scholar]
  125. Paudyal, P.; Battisti, F.; Carli, M. Effect of visualization techniques on subjective quality of light field images. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 196–200. [Google Scholar]
  126. Guindy, M.; Barsi, A.; Kara, P.A.; Adhikarla, V.K.; Balogh, T.; Simon, A. Camera animation for immersive light field imaging. Electronics 2022, 11, 2689. [Google Scholar] [CrossRef]
  127. Kara, P.A.; Guindy, M.; Xinyu, Q.; Szakal, V.A.; Balogh, T.; Simon, A. The effect of angular resolution and 3D rendering on the perceived quality of the industrial use cases of light field visualization. In Proceedings of the 16th International Conference on Signal Image Technology & Internet based Systems (SITIS), Dijon, France, 19–21 October 2022. [Google Scholar]
  128. Kara, P.A.; Tamboli, R.R.; Shafiee, E.; Martini, M.G.; Simon, A.; Guindy, M. Beyond Perceptual Thresholds and Personal Preference: Towards Novel Research Questions and Methodologies of Quality of Experience Studies on Light Field Visualization. Electronics 2022, 11, 953. [Google Scholar] [CrossRef]
  129. Balogh, T.; Kovács, P. Holovizio: The next generation of 3D oil & gas visualization. In Proceedings of the 70th EAGE Conference and Exhibition-Workshops and Fieldtrips. European Association of Geoscientists & Engineers, Rome, Italy, 9–12 June 2008. [Google Scholar]
  130. Favalora, G.E. Progress in volumetric three-dimensional displays and their applications. In Proceedings of the Frontiers in Optics, San Jose, CA, USA, 11–15 October 2009; Optica Publishing Group: Washington, DC, USA, 2009. [Google Scholar]
  131. Diewald, S.; Möller, A.; Roalter, L.; Kranz, M. DriveAssist-A V2X-Based Driver Assistance System for Android. In Proceedings of the Mensch & Computer Workshopband; Oldenbourg Wissenschaftsverlag: Munich, Germany, 2012; pp. 373–380. [Google Scholar]
  132. Olaverri-Monreal, C.; Jizba, T. Human factors in the design of human–machine interaction: An overview emphasizing V2X communication. IEEE Trans. Intell. Veh. 2016, 1, 302–313. [Google Scholar] [CrossRef]
  133. Xu, T.; Jiang, R.; Wen, C.; Liu, M.; Zhou, J. A hybrid model for lane change prediction with V2X-based driver assistance. Phys. A Stat. Mech. Its Appl. 2019, 534, 122033. [Google Scholar] [CrossRef]
  134. Hirai, T.; Murase, T. Performance evaluations of PC5-based cellular-V2X mode 4 for feasibility analysis of driver assistance systems with crash warning. Sensors 2020, 20, 2950. [Google Scholar] [CrossRef]
  135. Kara, P.A.; Wippelhauser, A.; Balogh, T.; Bokor, L. How I met your V2X sensor data: Analysis of projection-based light field visualization for vehicle-to-everything communication protocols and use cases. Sensors 2023, 23, 1284. [Google Scholar] [CrossRef] [PubMed]
  136. Kara, P.A.; Balogh, T.; Guindy, M.; Simon, A. 3D battlespace visualization and defense applications on commercial and use-case-dedicated light field displays. In Proceedings of the Big Data IV: Learning, Analytics, and Applications, Orlando, FL, USA, 31 May 2022; SPIE: Bellingham, DC, USA, 2022; Volume 12097, pp. 183–191. [Google Scholar]
  137. Blackwell, C.J.; Khan, J.; Chen, X. 54-6: Holographic 3D Telepresence System with Light Field 3D Displays and Depth Cameras over a LAN. In Proceedings of the SID Symposium Digest of Technical Papers, Virtual, 17–21 May 2021; Wiley Online Library; Volume 52, pp. 761–763. [Google Scholar]
  138. Fattal, D. Lightfield displays: A window into the metaverse. In Proceedings of the SPIE AR, VR, MR Industry Talks 2022, San Francisco, CA, USA, 8 March 2022; SPIE: Bellingham, DC, USA, 2022; Volume 11932. [Google Scholar]
  139. Stephenson, N. Snow Crash; Bantam Books: New York, NY, USA, 1992. [Google Scholar]
  140. Adhikarla, V.K.; Jakus, G.; Sodnik, J. Design and evaluation of freehand gesture interaction for light field display. In Proceedings of the International Conference on Human-Computer Interaction, Los Angeles, CA, USA, 2–7 August 2015; Springer: Cham, Switzerland, 2015; pp. 54–65. [Google Scholar]
  141. Adhikarla, V.K.; Sodnik, J.; Szolgay, P.; Jakus, G. Exploring direct 3D interaction for full horizontal parallax light field displays using leap motion controller. Sensors 2015, 15, 8642–8663. [Google Scholar] [CrossRef]
  142. Hart, S.G.; Staveland, L.E. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Advances in Psychology; Elsevier: Amsterdam, The Netherlands, 1988; Volume 52, pp. 139–183. [Google Scholar]
  143. Laugwitz, B.; Held, T.; Schrepp, M. Construction and evaluation of a user experience questionnaire. In Proceedings of the HCI and Usability for Education and Work: 4th Symposium of the Workgroup Human-Computer Interaction and Usability Engineering of the Austrian Computer Society, USAB 2008, Graz, Austria, 20–21 November 2008; Proceedings 4. Springer: Berlin/Heidelberg, Germany, 2008; pp. 63–76. [Google Scholar]
  144. Adhikarla, V.K.; Woźniak, P.; Barsi, A.; Singhal, D.; Kovács, P.T.; Balogh, T. Freehand interaction with large-scale 3D map data. In Proceedings of the 2014 3DTV-Conference: The True Vision-Capture, Transmission and Display of 3D Video (3DTV-CON), Budapest, Hungary, 2–4 July 2014; pp. 1–4. [Google Scholar]
  145. Yamaguchi, M.; Higashida, R. 3D touchable holographic light-field display. Appl. Opt. 2016, 55, A178–A183. [Google Scholar] [CrossRef]
  146. Yamaguchi, M. Full-parallax holographic light-field 3-D displays and interactive 3-D touch. Proc. IEEE 2017, 105, 947–959. [Google Scholar] [CrossRef]
  147. Chavarría, I.A.S.S.; Nakamura, T.; Yamaguchi, M. Interactive optical 3D-touch user interface using a holographic light-field display and color information. Opt. Express 2020, 28, 36740–36755. [Google Scholar] [CrossRef]
  148. Chavarría, I.A.S.S.; Nakamura, T.; Yamaguchi, M. Automatic registration of gesture-sensor data and light-field for aerial 3D-touch interface. In Proceedings of the 3D Image Acquisition and Display: Technology, Perception and Applications, Washington, DC, USA, 19–23 July 2021; Optica Publishing Group: Washington, DC, USA, 2021. [Google Scholar]
  149. Chavarría, I.A.S.S.; Shimomura, K.; Takeyama, S.; Yamaguchi, M. Interactive 3D touch and gesture capable holographic light field display with automatic registration between user and content. J. Soc. Inf. Disp. 2022, 30, 877–893. [Google Scholar] [CrossRef]
  150. Yoshida, T.; Shimizu, K.; Kurogi, T.; Kamuro, S.; Minamizawa, K.; Nii, H.; Tachi, S. RePro3D: Full-parallax 3D display with haptic feedback using retro-reflective projection technology. In Proceedings of the 2011 IEEE International Symposium on VR Innovation, Singapore, 19–20 March 2011; pp. 49–54. [Google Scholar]
  151. Minamizawa, K.; Fukamachi, S.; Kajimoto, H.; Kawakami, N.; Tachi, S. Gravity grabber: Wearable haptic display to present virtual mass sensation. In ACM SIGGRAPH 2007 Emerging Technologies; Association for Computing Machinery: New York, NY, USA, 2007; pp. 1–4. [Google Scholar]
  152. Huang, Y.P.; Wang, G.Z.; Ma, M.C.; Tung, S.Y.; Huang, S.Y.; Tseng, H.W.; Kuo, C.H.; Li, C.H. Virtual touch 3D interactive system for autostereoscopic display with embedded optical sensor. In Proceedings of the Three-Dimensional Imaging, Visualization, and Display, Orlando, FL, USA, 27–28 April 2011; SPIE: Bellingham, DC, USA, 2011; Volume 8043, pp. 183–200. [Google Scholar]
  153. Wang, G.Z.; Huang, Y.P.; Chang, T.S.; Chen, T.H. Bare finger 3D air-touch system using an embedded optical sensor array for mobile displays. J. Disp. Technol. 2013, 10, 13–18. [Google Scholar] [CrossRef]
  154. Hu, J.; Li, G.; Xie, X.; Lv, Z.; Wang, Z. Bare-fingers touch detection by the button’s distortion in a projector–camera system. IEEE Trans. Circuits Syst. Video Technol. 2013, 24, 566–575. [Google Scholar]
  155. Matsubayashi, A.; Makino, Y.; Shinoda, H. Direct finger manipulation of 3D object image with ultrasound haptic feedback. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow Scotland, UK, 4–9 May 2019; pp. 1–11. [Google Scholar]
  156. Yasui, M.; Watanabe, Y.; Ishikawa, M. Occlusion-robust sensing method by using the light-field of a 3D display system toward interaction with a 3D image. Appl. Opt. 2019, 58, A209–A227. [Google Scholar] [CrossRef]
  157. Sang, X.; Gao, X.; Yu, X.; Xing, S.; Li, Y.; Wu, Y. Interactive floating full-parallax digital three-dimensional light-field display based on wavefront recomposing. Opt. Express 2018, 26, 8883–8889. [Google Scholar] [CrossRef]
  158. Tamboli, R.R.; Kara, P.A.; Cserkaszky, A.; Barsi, A.; Martini, M.G.; Jana, S. Canonical 3D object orientation for interactive light-field visualization. In Proceedings of the Applications of Digital Image Processing XLI, San Diego, CA, USA, 17 September 2018; International Society for Optics and Photonics: Bellingham, DC, USA; Volume 10752, pp. 77–83. [Google Scholar]
  159. Kolly, S.M.; Wattenhofer, R.; Welten, S. A personal touch: Recognizing users based on touch screen behavior. In Proceedings of the Third International Workshop on Sensing Applications on Mobile Phones, Toronto, ON, Canada, 6 November 2012; pp. 1–5. [Google Scholar]
  160. Teh, P.S.; Zhang, N.; Teoh, A.B.J.; Chen, K. Recognizing your touch: Towards strengthening mobile device authentication via touch dynamics integration. In Proceedings of the 13th International Conference on Advances in Mobile Computing and Multimedia, Brussels, Belgium, 11–13 December 2015; pp. 108–116. [Google Scholar]
  161. Alzubaidi, A.; Kalita, J. Authentication of smartphone users using behavioral biometrics. IEEE Commun. Surv. Tutor. 2016, 18, 1998–2026. [Google Scholar] [CrossRef]
  162. Alghamdi, S.J.; Elrefaei, L.A. Dynamic authentication of smartphone users based on touchscreen gestures. Arab. J. Sci. Eng. 2018, 43, 789–810. [Google Scholar] [CrossRef]
  163. Bevan, C.; Fraser, D.S. Different strokes for different folks? Revealing the physical characteristics of smartphone users from their swipe gestures. Int. J. Hum.-Comput. Stud. 2016, 88, 51–61. [Google Scholar] [CrossRef]
  164. Antal, M.; Bokor, Z.; Szabó, L.Z. Information revealed from scrolling interactions on mobile devices. Pattern Recognit. Lett. 2015, 56, 7–13. [Google Scholar] [CrossRef]
  165. Miguel-Hurtado, O.; Stevenage, S.V.; Bevan, C.; Guest, R. Predicting sex as a soft-biometrics from device interaction swipe gestures. Pattern Recognit. Lett. 2016, 79, 44–51. [Google Scholar] [CrossRef]
  166. Jain, A.; Kanhangad, V. Gender recognition in smartphones using touchscreen gestures. Pattern Recognit. Lett. 2019, 125, 604–611. [Google Scholar] [CrossRef]
  167. Guarino, A.; Lettieri, N.; Malandrino, D.; Zaccagnino, R.; Capo, C. Adam or Eve? Automatic users’ gender classification via gestures analysis on touch devices. Neural Comput. Appl. 2022, 34, 18473–18495. [Google Scholar] [CrossRef]
  168. Vatavu, R.D.; Anthony, L.; Brown, Q. Child or adult? Inferring Smartphone users’ age group from touch measurements alone. In Proceedings of the Human-Computer Interaction–INTERACT 2015: 15th IFIP TC 13 International Conference, Bamberg, Germany, 14–18 September 2015; Proceedings, Part IV 15. Springer: Berlin/Heidelberg, Germany, 2015; pp. 1–9. [Google Scholar]
  169. Acien, A.; Morales, A.; Fierrez, J.; Vera-Rodriguez, R.; Hernandez-Ortega, J. Active detection of age groups based on touch interaction. IET Biom. 2019, 8, 101–108. [Google Scholar] [CrossRef]
  170. Cheng, Y.; Ji, X.; Li, X.; Zhang, T.; Malebary, S.; Qu, X.; Xu, W. Identifying child users via touchscreen interactions. ACM Trans. Sens. Netw. 2020, 16, 1–25. [Google Scholar] [CrossRef]
  171. Lee, G.Y.; Hong, J.Y.; Hwang, S.; Moon, S.; Kang, H.; Jeon, S.; Kim, H.; Jeong, J.H.; Lee, B. Metasurface eyepiece for augmented reality. Nat. Commun. 2018, 9, 4562. [Google Scholar] [CrossRef] [PubMed]
  172. Zhou, Y.; Kravchenko, I.I.; Wang, H.; Zheng, H.; Gu, G.; Valentine, J. Multifunctional metaoptics based on bilayer metasurfaces. Light. Sci. Appl. 2019, 8, 80. [Google Scholar] [CrossRef] [PubMed]
  173. Li, Z.; Lin, P.; Huang, Y.W.; Park, J.S.; Chen, W.T.; Shi, Z.; Qiu, C.W.; Cheng, J.X.; Capasso, F. Meta-optics achieves RGB-achromatic focusing for virtual reality. Sci. Adv. 2021, 7, eabe4458. [Google Scholar] [CrossRef]
  174. Ou, K.; Wan, H.; Wang, G.; Zhu, J.; Dong, S.; He, T.; Yang, H.; Wei, Z.; Wang, Z.; Cheng, X. Advances in Meta-Optics and Metasurfaces: Fundamentals and Applications. Nanomaterials 2023, 13, 1235. [Google Scholar] [CrossRef]
  175. Wei, Z.; Cao, Y.; Su, X.; Gong, Z.; Long, Y.; Li, H. Highly efficient beam steering with a transparent metasurface. Opt. Express 2013, 21, 10739–10745. [Google Scholar] [CrossRef]
  176. Huang, Y.W.; Chen, W.T.; Tsai, W.Y.; Wu, P.C.; Wang, C.M.; Sun, G.; Tsai, D.P. Aluminum plasmonic multicolor meta-hologram. Nano Lett. 2015, 15, 3122–3127. [Google Scholar] [CrossRef]
  177. Hakobyan, D.; Magallanes, H.; Seniutinas, G.; Juodkazis, S.; Brasselet, E. Tailoring orbital angular momentum of light in the visible domain with metallic metasurfaces. Adv. Opt. Mater. 2016, 4, 306–312. [Google Scholar] [CrossRef]
  178. Overvig, A.C.; Shrestha, S.; Malek, S.C.; Lu, M.; Stein, A.; Zheng, C.; Yu, N. Dielectric metasurfaces for complete and independent control of the optical amplitude and phase. Light. Sci. Appl. 2019, 8, 92. [Google Scholar] [CrossRef]
  179. Hu, Y.; Li, L.; Wang, Y.; Meng, M.; Jin, L.; Luo, X.; Chen, Y.; Li, X.; Xiao, S.; Wang, H.; et al. Trichromatic and tripolarization-channel holography with noninterleaved dielectric metasurface. Nano Lett. 2019, 20, 994–1002. [Google Scholar] [CrossRef] [PubMed]
  180. Zou, C.; Amaya, C.; Fasold, S.; Muravsky, A.A.; Murauski, A.A.; Pertsch, T.; Staude, I. Multiresponsive dielectric metasurfaces. ACS Photonics 2021, 8, 1775–1783. [Google Scholar] [CrossRef]
  181. Hoßfeld, T.; Egger, S.; Schatz, R.; Fiedler, M.; Masuch, K.; Lorentzen, C. Initial delay vs. interruptions: Between the devil and the deep blue sea. In Proceedings of the 2012 Fourth International Workshop on Quality of Multimedia Experience, Melbourne, VIC, Australia, 5–7 July 2012; pp. 1–6. [Google Scholar]
  182. Kara, P.A.; Martini, M.G.; Rossi, S. One spoonful or multiple drops: Investigation of stalling distribution and temporal information for quality of experience over time. In Proceedings of the 2016 International Conference on Telecommunications and Multimedia (TEMU), Heraklion, Greece, 25–27 July 2016; pp. 1–6. [Google Scholar]
  183. Yoon, Y.; Jeon, H.G.; Yoo, D.; Lee, J.Y.; Kweon, I.S. Light-field image super-resolution using convolutional neural network. IEEE Signal Process. Lett. 2017, 24, 848–852. [Google Scholar] [CrossRef]
  184. Zhang, S.; Lin, Y.; Sheng, H. Residual networks for light field image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 11046–11055. [Google Scholar]
  185. Wang, Y.; Wang, L.; Yang, J.; An, W.; Yu, J.; Guo, Y. Spatial-angular interaction for light field image super-resolution. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part XXIII 16. Springer: Berlin/Heidelberg, Germany, 2020; pp. 290–308. [Google Scholar]
  186. Wang, Y.; Yang, J.; Wang, L.; Ying, X.; Wu, T.; An, W.; Guo, Y. Light field image super-resolution using deformable convolution. IEEE Trans. Image Process. 2020, 30, 1057–1071. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Differences between multi-view and light field displays.
Figure 2. VVAs in split-domain light field gaming.
Figure 3. Differences between general 3D experience and super-resolution.
Table 1. Summary of the typical parameters for active use cases.

Use Case | Interaction Type | Time Sensitivity | Simultaneous Users | Simultaneous Input | Input Accuracy
Prototype review | View-related | No | Yes | No | Low importance
Medical imaging | View-related | Potential | Potential | No | High importance *
Resource exploration | View-related | No | Yes | No | Low importance
Training and education | Both | Potential | Potential | Potential | High importance
Digital signage | Content-related | No | Yes | No | Low importance
Cultural heritage exhibition | Both | No | Yes | Potential | High importance
Traffic control | Both | Yes | Potential | No | High importance
Driver assistance systems | Content-related | Yes | No | No | High importance
Defense applications | Both | Yes | Yes | No | High importance
Telepresence | Both | No | Potential | No | Low importance
Home multimedia entertainment | Content-related | No | Potential | No | Low importance
Gaming | Content-related | Yes | Potential | Potential | High importance
Metaverse | Content-related | Yes | Potential | Potential | High importance

* if time-sensitive.
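For readers who wish to reason programmatically about these requirements, a few rows of Table 1 can be encoded as simple records and queried. The sketch below is an illustrative encoding of the table, not an established taxonomy format.

```python
use_cases = {
    "Traffic control": dict(interaction="both", time_sensitive=True,
                            simultaneous_users="potential",
                            simultaneous_input=False, accuracy="high"),
    "Digital signage": dict(interaction="content-related", time_sensitive=False,
                            simultaneous_users=True,
                            simultaneous_input=False, accuracy="low"),
    "Gaming": dict(interaction="content-related", time_sensitive=True,
                   simultaneous_users="potential",
                   simultaneous_input="potential", accuracy="high"),
}

# Which of these use cases are both time-sensitive and accuracy-critical?
demanding = [name for name, p in use_cases.items()
             if p["time_sensitive"] and p["accuracy"] == "high"]
print(demanding)  # -> ['Traffic control', 'Gaming']
```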
