Article

Consumer Subjective Impressions in Virtual Reality Environments: The Role of the Visualization Technique in Product Evaluation

by Almudena Palacios-Ibáñez 1, Francisco Felip-Miralles 2, Julia Galán 2, Carlos García-García 2 and Manuel Contero 1,*

1 Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano, Universitat Politècnica de València, 46022 Valencia, Spain
2 Department of Industrial Systems Engineering and Design, Universitat Jaume I, 12071 Castellon, Spain
* Author to whom correspondence should be addressed.
Electronics 2023, 12(14), 3051; https://doi.org/10.3390/electronics12143051
Submission received: 15 June 2023 / Revised: 4 July 2023 / Accepted: 10 July 2023 / Published: 12 July 2023
(This article belongs to the Special Issue Perception and Interaction in Mixed, Augmented, and Virtual Reality)

Abstract

The availability and affordability of consumer virtual reality (VR) devices have fueled their adoption during the product design process. High-fidelity virtual prototypes can be created more quickly and cost-effectively than with traditional methods, but certain product features are still difficult to evaluate, resulting in perceptual differences when a product is assessed using different visualization techniques. In this paper, we report two case studies in which a group of participants evaluated different designs of a product typology (i.e., a watering can) as presented in VR, VR with passive haptics (VRPH), and a real setting (R) for the first case study, and in VR and R for the second case study. The semantic differential technique was used for product evaluation, and an inferential statistical method based on the aligned rank transform (ART) procedure was applied to determine perceptual differences between groups. Our results showed that product characteristics assessed by touch are the most susceptible to being affected by the environment, while the user's background can have an effect on some product features.

1. Introduction

Apart from aesthetics [1], several key factors help to ensure the success of a product, such as a strong market orientation, a clear and compelling product concept, a well-designed development process, and its evaluation, especially during the early stages of the design process [2]. In this context, ensuring a precise perception of a product's features is crucial to guarantee an accurate evaluation [3]. The increasing competitiveness of the market is causing traditional forms of interaction with the product to be replaced by digital means [4], and the way products are presented is becoming increasingly important for the optimization of the design process. In this regard, it is crucial to understand how we perceive virtual prototypes in order to interact with them efficiently [5], since nowadays the presentation medium is also considered an important factor in the success of a product [6].
With the advancement of virtual reality (VR) devices, virtual prototyping (VP) is gaining popularity in different fields [7,8]. The availability and affordability of standalone VR head-mounted displays (HMDs), e.g., the Quest 2 or Pico 4, have fueled their adoption in product design, and they are becoming more prevalent among product designers [9,10]. The metaverse is changing how products are designed, as gaining feedback from potential users during the design process can be more cost-effective when physical prototypes are not needed for concept validation (especially in the early stages of the product development process). For example, several VR applications facilitate the design process when individuals are not physically present in the same space: ShapeXR incorporates eye-tracking and facial tracking to enhance interaction and communication among collaborators [11], and Microsoft Maquette is a general-purpose mock-up tool for spatial prototyping within virtual reality [12].
In this context, VP is becoming an affordable and versatile alternative to physical prototyping [13], as it involves less time and money than traditional methods.
Although VR was proposed as a new tool in engineering design many years ago [14], it has gained increasing popularity in recent years as a powerful tool for product development thanks to technological advances [10], especially during the early stages, when many design variations must be produced [13,15]. This technology allows designers to create immersive virtual environments (VEs) to evaluate different design concepts through VP, which provides a more efficient, versatile, and cost-effective way to assess new products [13]. Virtual prototypes are easy to modify and share [16], and designers can test different design options in real time without the need for physical prototypes. Concepts can be visualized and tested in a realistic and interactive way, which can lead to more accurate and effective product design, as evaluation is positively affected by the user's confidence and accuracy in the assessment [17]. Additionally, VR can facilitate the evaluation of virtual products in a more natural and intuitive way, as users can interact with them much as they would with real products [18]. However, some product features are difficult to evaluate [19], as VEs may not provide the same level of realism as real-world settings, or they may not accurately reproduce the tactile feedback that users experience when interacting with physical products. This can affect how users perceive and evaluate products in a VE [20,21].
The introduction of haptic feedback in the VE can be particularly useful in the product design process [16]. Passive haptics provide a sense of touch in VR by synchronizing physical objects with virtual assets [22], significantly reducing costs without the use of intrusive devices beyond the user's own hands. This has been shown to increase the sense of presence, improve cognitive mapping of the environment, and improve training performance [23]. In this context, passive haptics can be less expensive and easier to implement than active haptics, as it does not require actuated devices [24], which makes it more comfortable. Additionally, the introduction, and recent improvement, of hand-tracking in standalone VR HMDs provides a more natural, intuitive, and positive virtual experience [25]. With hand-tracking, users can interact with virtual objects using natural hand gestures and movements, rather than using a controller or other external input device. This can help to make the VR experience feel more realistic and allow users to navigate and interact with virtual content more easily.
The use of VR in product design, especially with the incorporation of passive haptic feedback through hand-tracking, has the potential to revolutionize the way products are designed and evaluated; however, assessing products in a VE can be challenging for several reasons. It is assumed that our perceptual and emotional responses to a product that is perceived through a VE are similar to those elicited by the actual product, which is not necessarily the case [26,27], particularly when evaluating product features that rely heavily on our sense of touch [21,28,29]. Overall, it is important to consider the potential perceptual differences when evaluating products in a VE and to use a combination of different evaluation methods (such as physical testing, user testing, and simulations) to obtain a comprehensive understanding of the products being evaluated. Few studies in the literature have examined the impact of hand-tracking and passive haptics on product assessment, so it is necessary to shed some light on this issue.
Studying perceptual variations in VR for product assessment is important for improving the accuracy and effectiveness of product design and evaluation. Further research in this area can help designers and researchers to better understand how different design elements and visualizations impact people’s perceptions and evaluations of products in VR and identify opportunities for improvement and innovation in the field.
In this paper, we report the results of two case studies where a group of participants were asked to evaluate different designs of a watering can using different visualization techniques. Users were also asked to indicate their intended purchasing decision and to rate their level of confidence in their response.

2. Related Work

VR is currently portrayed as a powerful tool for design [30] and product presentation [31,32]. It enables interactions within a VE that are far more effective than those made possible by traditional approaches such as sketches, renderings, or real images of the product [33]. This allows for more natural and accurate evaluations, close to the perception one would have when viewing the real product. Despite this potential, there are certain limitations, as VR may not be able to provide the same level of realism as a real environment, potentially resulting in significant perceptual differences and distorting the opinions of viewers.
Söderman was one of the first authors to make advances in this field [34,35]. Although no perceptual differences were found between the visualization techniques used in those studies, subsequent work, addressing their limitations, has shown that the visual medium can have a significant effect on the user's subjective impressions [26].
With the emergence of new visualization methods, some authors decided to analyze how immersive VR environments (with the use of an HMD) impact the user's subjective impressions. For example, Felip et al. discovered that the features of a product evaluated through touch were the most susceptible to being affected by the medium [29]. Generally, these studies presented a significant limitation: the evaluation of a single product, which makes it difficult to extrapolate their conclusions to an entire product type [18]. In this context, Palacios-Ibáñez et al. carried out similar studies with an increased number of alternatives [3,36]. The authors corroborated that product features such as size (Large–Small) or weight (Heavy–Light) are susceptible to being affected by the visualization technique, but that characteristics associated with the aesthetics of the product can also be influenced by the medium, depending on the type of product. In addition, they noted that the form of evaluation is an important factor when analyzing perceptual differences between sets of objects, as a joint evaluation helps to minimize differences between media.
Indeed, several studies have shown that feeling the touch of physical objects in a VE can improve global immersion, knowledge of the spatial environment, and users' sense of presence [23,37]. Passive haptics can help to ensure that features related to touch are not severely affected during the evaluation of a VP [21]. However, the absence of a virtual model of the hands can complicate the evaluation of the product to some extent, due to the lack of real-time correspondence between the subject's real hands and the VE.
More recent studies have introduced the hand-tracking technique during the virtual experience [20] thanks to the development of low-cost autonomous HMDs, demonstrating that the introduction of passive haptics during evaluation in a VE can be an effective tool for the evaluation of virtual prototypes and that the user’s background could influence the evaluation of some product features. However, it would be necessary to observe if this is also true for other types of products.
In this research, a type of product novel to this field of study was selected. In addition, the hand-tracking technique was introduced during the virtual experience to provide a more natural and intuitive experience, similar to that of a real environment.

3. Research Aim and Hypotheses

The purpose of our research was to examine the impact of the visualization technique on product evaluation to determine whether there are variations in users’ subjective impressions when a product is viewed in different settings. Furthermore, we also examined the effect of users’ backgrounds on product evaluation, as this area of study has not received sufficient attention in the product design field [20,38].
In this regard, we conducted two case studies with a specific purpose. The first case study (A1) aimed to investigate the impact of passive haptics in a VE on product evaluation using non-intrusive hand-tracking. A group of participants evaluated four designs of the same product typology through three mediums: a real setting (R), VR, and VR with passive haptics (VRPH). The second case study (A2) aimed to determine if a more advanced virtual experience (increased interaction) would influence product evaluation. In this case, a group of participants from different backgrounds evaluated three designs of the same product typology used in A1 through two visualization techniques: R and VR.
The following hypotheses were postulated: the visualization technique can impact the user’s subjective impressions of the product (H1); the visual media used to display a product can affect the purchase decision (H2); and that the confidence in the response can be affected by the visualization technique (H3). Additionally, for A1, we postulated that the product’s geometry can influence its perception together with the visualization technique (H4), while for A2, we also postulated that the user’s background can influence the evaluation of the product (H5).

4. Materials and Methods

The aim of this research was to examine the impact of physical stimuli during the presentation of a product on the perception of its characteristics and purchase intentions. To achieve this, the two case studies described below were designed with formal approval from the Deontological Commission ethics committee of the Universitat Jaume I (approval number CD/102/2021). For both cases, a range of watering can designs were selected. These designs were chosen due to the participants’ familiarity with the objects, as they are commonly found in domestic settings, and due to their evaluable properties, which can assist in determining the effect of presentation formats.

4.1. Case Study I

For this case study, three visually similar rooms were prepared in which the products to be evaluated were presented. Four watering cans were selected from the Ikea catalog (Vattenkrasse, Chilifrukt, Bittergurka, Förenlig), the characteristics of which are described in Table 1.
These were presented in three separate rooms (Figure 1), each of which presented the watering cans in a different format: R, VR, and VRPH. In each medium, each participant similarly interacted with the four products, and then evaluated their characteristics, before doing so again in the other media:
1—Room 1 (R1): R. In this room, participants could touch the real surface of the physical watering cans, but they could not lift them off the table. To prevent this from happening, the base of each watering can was glued to a foam cardboard sheet using hot-melt adhesive, which was also fixed to the table. All the physical elements of the room, including colors, textures, distances, and lighting, were used as a reference for the virtual reality modeling of rooms 2 and 3;
2—Room 2 (R2): VR. In this room, a 3D model of the watering cans was used. All the elements of the virtual scene accurately reproduced those used in R1. Participants saw a virtual representation of their hands (avatar), with which they could approach the watering cans to simulate being able to touch them;
3—Room 3 (R3): VRPH. The same scene as in R2 was used, combined with the physical elements used in R1. In this way, the VR representation was enriched because participants could physically touch the watering cans, as they were located on the table in the same coordinates as their virtual counterparts. The scheme in Table 2 details the characteristics of each room and the actions that can be performed in them.

4.2. Case Study II

In this case, the participants were required to observe, manipulate, and evaluate three watering-can models. The products were placed equidistant from one another on a table (160 × 80 × 100 cm) in front of a light gray wall. The order of placement of the products on the table was changed in each experimental condition to balance any possible order effects.
To conduct the experiment, two visually similar rooms were prepared where the products to be evaluated, in this case, three watering cans from the Ikea catalog (Förenlig, Bittergurka, and PS 2002), were presented, the characteristics of which are described in Table 3.
The watering cans were presented in two separate rooms (Figure 2), each presenting them in a different format: R and VR. In each medium, each participant interacted with the three products in a similar way and then evaluated their characteristics, before doing so again in the other medium. Table 4 shows the characteristics of each room and the actions that could be performed in them.
1—Room 1 (R1): R. In this room, participants could pick up the physical products by the handle, lift them up, and move them in space. All physical elements of the room, with their colors, textures, distances, and lighting, were used as a reference for the VR modeling of room R2;
2—Room 2 (R2): VR. In this room, a 3D model of the products was used. All elements of the virtual scene accurately reproduced those used in R1. Participants saw a virtual representation of their hands (avatar), with which they could pick up the watering cans by the handle, lift them up, and move them in space.

4.3. Product Evaluation

The evaluation of the watering cans was carried out using the best–worst scaling (BWS) method [39], specifically Case 1 (object case). This methodology, based on ranking models [40,41,42], requires participants to make choices between various options, determining which is the “best” and which is the “worst” in each case.
Applied to our study, this methodology consisted of comparing which of the different watering cans (best or worst) met each of the properties to be evaluated. This method has advantages over other forms of evaluation, such as the Likert scale and rating scales, because it is easy for humans to identify extremes in a comparison [43], allows for better discrimination between attributes, and the valuation emitted is easier to interpret than the values of a rating scale. In addition, it improves the consistency of responses and can be more easily understood by survey respondents [44].
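The standard scoring for BWS Case 1 is a simple best-minus-worst count per item, often normalized by how many times each item appeared. The sketch below illustrates this with made-up choice records (the item names and data are illustrative, not taken from the study):

```python
from collections import Counter

def bws_scores(choices, items):
    """Best-worst scaling (Case 1) scores: (#best - #worst) per item,
    normalized by the number of times each item was shown."""
    best = Counter(c["best"] for c in choices)
    worst = Counter(c["worst"] for c in choices)
    shown = Counter(item for c in choices for item in c["shown"])
    return {
        item: (best[item] - worst[item]) / shown[item] if shown[item] else 0.0
        for item in items
    }

# Toy data: each record is one comparison of watering cans for one property.
choices = [
    {"shown": ["A", "B", "C", "D"], "best": "A", "worst": "D"},
    {"shown": ["A", "B", "C", "D"], "best": "A", "worst": "C"},
    {"shown": ["A", "B", "C", "D"], "best": "B", "worst": "D"},
]
scores = bws_scores(choices, ["A", "B", "C", "D"])
```

Scores range from −1 (always "worst") to +1 (always "best"), which is what makes the output easier to interpret than raw rating-scale values.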
To determine the properties to be evaluated, the following methodology was followed. First, a Google Forms survey was sent to 76 regular users of this product (48 women and 28 men, aged between 19 and 30, with an average age of 20.56, SD = 1.72) asking them to select the 5 properties that they considered most important when choosing a watering can from a list of 10 (formal aesthetics, weight, resistance, stability, style, capacity, comfort of grip and handling, ease of filling, versatility, irrigation dosage, or lifespan). They were also offered the possibility of writing down another adjective that they considered important and that was not on the list. Then, the eight properties chosen by the highest percentage of users were selected, which would be used to evaluate the product considering the most relevant aspects: grip comfort (92.1%), ease of filling (69.7%), weight (60.5%), capacity (57.9%), irrigation or pouring precision (51.3%), lifespan (43.4%), formal aesthetics (34.2%), and stability (15.8%). Finally, these properties were taken as a reference to define the two adjectives that would determine the “best” and “worst” options for each property. In this way, for example, for the “comfort” property, the adjective “comfortable” would correspond to the best option, while “uncomfortable” would identify the worst (Table 5).

4.4. Materials

The virtual room (R2) was modeled with Unity 2020.2.2f1, using the real scene as a reference, ensuring that the shapes, sizes, colors, textures, distances between elements, and lighting were similar in both (Figure 3). The R2 and R3 scenes were displayed using an Oculus Quest 2 HMD upgraded to version 36.0, a standalone immersive VR device with a single fast-switch LCD of 1832 × 1920 pixels per eye and a refresh rate of 72 Hz. This HMD can detect the position of the participants' hands in real time to generate an avatar that helped them interact with the objects. The passthrough capability was enabled for the calibration of the virtual objects before starting the experiment, and hand-tracking was enabled to provide non-intrusive virtual models of the user's hands. Additionally, Oculus Integration SDK version 36.0 was used. The scene used real-time lighting with hard shadows enabled, and materials were built using a standard shader. The virtual objects were modeled in Autodesk Inventor 2023, and UV mapping was completed in Blender 2.93.0. In R2 for Case Study I, as there was no physical watering can, the hand avatar's behavior was programmed to stop when it met the surface of the virtual watering can, preventing both volumes from intersecting, thus achieving a realistic visual sensation of collision between volumes equivalent to what occurred in room R1.

4.5. Sample

To estimate the minimum sample size, we performed an a priori power analysis with G*Power 3.1.9.7 [45], assuming a Wilcoxon signed-rank test (matched pairs) with the following input parameters: effect size = 0.50, α = 0.05, power (1 − β) = 0.80, and two tails. A sample size of 35 was obtained.
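This figure can be approximately reproduced without G*Power. A common approach (a sketch, assuming G*Power's A.R.E.-based method for the Wilcoxon test) is to compute the paired t-test sample size and inflate it by the asymptotic relative efficiency of the Wilcoxon signed-rank test under normality, 3/π ≈ 0.955:

```python
import math
from statsmodels.stats.power import TTestPower

# Paired t-test sample size for d = 0.50, alpha = 0.05, power = 0.80, two-tailed.
n_ttest = TTestPower().solve_power(effect_size=0.50, alpha=0.05,
                                   power=0.80, alternative='two-sided')

# Inflate by the Wilcoxon signed-rank ARE relative to the t-test (3/pi).
n_wilcoxon = math.ceil(n_ttest / (3 / math.pi))  # -> 35
```

The result matches the paper's reported minimum of 35 participants.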
Two case studies were conducted in this experiment, so a different sample was selected for each. For the first case study, Industrial Design and Product Development Engineering students at the Universitat Jaume I participated in the study: a total of 53 volunteers (15 men and 38 women) aged between 19 and 30 years old (mean = 20.70 years; SD = 1.76).
For the second case study, a total of 70 volunteers participated in the experiment (50 men and 20 women), aged between 19 and 29 years old (mean = 21.67; SD = 2.13). Of these, 34 had an Industrial Design and Product Development Engineering background, while the remaining 36 had an Industrial Engineering background.

4.6. Experimental Protocol

In both case studies, participants arrived at the test location in stages. Before their arrival, they were informed of the protocol to be followed and provided voluntary written consent to participate in the experiment, with formal approval from the Deontological Commission ethics committee of the Universitat Jaume I (approval number CD/102/2021).
The first phase of the experiment consisted of welcoming the participant (2 min) and directing them to proceed to the designated room (R1, R2, or R3 in the first study, or R1 or R2 in the second study as appropriate). The second phase took place within the room (3 min). The researcher noted in the survey the visualization technique used and prepared the order of placement of the watering cans on the table, from left to right. Depending on the case study and room, the participant may perform different actions, which are detailed below:
  • First case study:
    R1. The participant entered the room. It was explained to them that they would see a table with four designs of a watering can, which they should touch gently with their hands (without lifting them).
    R2 or R3. Before entering the room, the researcher helped the participant put on the VR headset and guided them into the room. They were instructed to look at and move their hands in front of the headset to familiarize themselves with their avatar. As in R1, the researcher explained that they would see a table with four products, which they should touch gently with their hands without lifting them.
  • Second case study:
    R1. The participant entered the room. It was explained to them that they would see a table with three watering cans on it, which they should pick up by the handle, lift, and move in space.
    R2. Before entering the room, the researcher helped the participant put on the VR headset and guided them into the room. They were instructed to look at and move their hands in front of the headset to familiarize themselves with their avatar. It was explained to them that they would see a virtual room with a table and three watering cans on it, which they should pick up by the handle, lift, and move in space.
Once the visualization ended, the researcher asked each question from the evaluation survey and recorded the participant's answers. The participant left the room once the survey was completed, at which point the experiment ended.

5. Results

5.1. Case Study I

First, three participants identified as outliers were discarded from the sample, as their responses could negatively impact the results. The final sample size considered for this case study was 50. Second, three datasets were obtained: (1) semantic scales; (2) overall evaluation; and (3) purchase decision.
Descriptive statistics for each dataset in our study are shown in Table 6 and Table 7. The descriptive statistics for the response confidence were MVR = 0.96, MdnVR = 1.00, SDVR = 0.20 for VR, MVRPH = 0.98, MdnVRPH = 1.00, SDVRPH = 0.16 for VRPH, and MR = 0.97, MdnR = 1.00, SDR = 0.17 for R. Stacked bar charts for this case study are shown in Figure 4.
On the other hand, an inferential statistical method was applied to test the hypotheses described, and a normality test was performed on each dataset to select the appropriate statistical test. As the sample size was >50, we used the Kolmogorov–Smirnov normality test (significance level of 0.05). The results showed that the data did not follow a normal distribution, so parametric tests were unsuitable. In this regard, we applied the aligned rank transform (ART) procedure [46], as it is known to provide a powerful and robust nonparametric alternative to traditional techniques [47]. It relies on a preprocessing step that “aligns” the data before applying averaged ranks; after this step, common ANOVA procedures can be applied [48].
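The align-then-rank preprocessing can be sketched as follows. This is a minimal, main-effect-only illustration on synthetic data (column names and values are illustrative); the full procedure, including alignment for interactions and the follow-up ANOVAs, is what the ARTool package implements:

```python
import numpy as np
import pandas as pd
from scipy import stats

def art_align_rank(df, response, factor, other):
    """One ART step for a two-factor design: strip every effect except
    `factor` (residual + estimated main effect), then apply averaged ranks."""
    grand = df[response].mean()
    cell = df.groupby([factor, other])[response].transform("mean")
    eff = df.groupby(factor)[response].transform("mean") - grand
    aligned = df[response] - cell + eff      # residual + effect of interest
    return stats.rankdata(aligned)           # ranks, ready for a standard ANOVA

# Illustrative data (not from the study): one rating per medium x design cell.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "medium": np.repeat(["VR", "VRPH", "R"], 8),
    "design": np.tile(["A", "B", "C", "D"], 6),
    "rating": rng.integers(1, 8, 24).astype(float),
})
df["rank_medium"] = art_align_rank(df, "rating", "medium", "design")
# A (repeated measures) ANOVA on `rank_medium` then tests the medium effect.
```

The alignment is repeated separately for each effect of interest, so each effect gets its own ranked response column.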
In our study, we performed a series of repeated measures ANOVAs for each of the semantic scales (Table 8) after the ART procedure to obtain detailed results, as well as post hoc tests (with Tukey correction) when perceptual differences were found between media, to determine the exact groups involved (Table 9). For the response confidence and the purchase decision, Cochran's Q test was applied. No statistically significant differences were found for the response confidence (χ2(2) = 5.478, p = 0.065) or the purchase decision (χ2(2)A = 3.500, p = 0.174; χ2(2)B = 1.000, p = 0.607; χ2(2)C = 3.714, p = 0.156; χ2(2)D = 0.200, p = 0.905).
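Cochran's Q test for dichotomous repeated measures is available in statsmodels; a sketch with a made-up binary matrix (rows are participants, columns the three media; not the study's data):

```python
import numpy as np
from statsmodels.stats.contingency_tables import cochrans_q

# Illustrative purchase decisions (1 = would buy) per participant and medium.
decisions = np.array([
    [1, 1, 1],   # columns: VR, VRPH, R
    [0, 1, 0],
    [1, 1, 0],
    [1, 0, 1],
    [0, 0, 0],
    [1, 1, 1],
])
res = cochrans_q(decisions)   # bunch with .statistic and .pvalue
```

A p-value above the 0.05 threshold, as in the study's confidence and purchase-decision data, means the proportion of positive responses cannot be said to differ across media.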
Results showed that some semantic scales (Table 8) were affected by the visualization technique. However, the overall evaluation was not impacted by the medium. In this context, the weight (P3) and stability (P8) features were the most influenced by the representation method. Additional post hoc tests (outlined in Table 9) revealed that these differences were primarily observed between VR–R, with stability also showing differences between VRPH–R.
Two-factor ANOVAs for the semantic scales that showed perceptual differences between the visualization techniques were performed to observe whether there was a mixed effect between the medium used to display the product and its design. The results showed a statistically significant interaction for both features (weight: F(6) = 3.973, p < 0.001; stability: F(6) = 10.224, p < 0.001).

5.2. Case Study II

Three participants were discarded from the sample for the same reasons as in the first case study, so the final sample size considered was 67. Again, three datasets were obtained: (1) semantic scales; (2) overall evaluation; and (3) purchase decision. Descriptive statistics for each dataset are shown in Table 10 and Table 11. The descriptive statistics for the response confidence were MVR = 0.98, MdnVR = 1.00, SDVR = 0.14 for VR, and MR = 0.99, MdnR = 1.00, SDR = 0.12 for R. The stacked bar charts for the semantic scales are shown in Figure 5.
Similarly, an inferential statistical method was applied to test the hypotheses described. The Kolmogorov–Smirnov normality test (significance level of 0.05) showed that the data did not follow a normal distribution, so parametric tests were again unsuitable. Since classic nonparametric statistical tests only allow for the analysis of a single factor, we applied the ART procedure once more, as it is a powerful and robust nonparametric alternative to traditional techniques. In our study, we performed a two-factor repeated measures ANOVA (Table 12) after the ART procedures. For the response confidence and the purchase decision, Cochran's Q test was applied. No statistically significant differences were found for the response confidence (χ2(2) = 1.059, p = 0.303), while design C showed differences between means for the purchase decision (χ2(2)A = 1.143, p = 0.285; χ2(2)B = 3.600, p = 0.058; χ2(2)C = 9.941, p = 0.002).
Our two-factor ANOVA results showed that, although several characteristics exhibited statistically significant differences attributable to both factors, some stand out. In this regard, “Easy to fill–Difficult to fill” and “Light–Heavy” were the features most affected by the change of medium (F1), as every selected design was influenced by this factor. On the other hand, the filling capacity was also influenced by the user background (F2) for each of the watering cans. Additionally, designs A and B were affected by the participants' backgrounds for “Light–Heavy” and “Long-lasting–Perishable”. Finally, designs A and C showed a mixed effect between factors for filling capacity and grip comfort.

6. Discussion

In this research, two case studies were carried out to investigate the influence of the visualization technique used to display a product on the subjective impressions of the user. In addition, we studied whether differences in the evaluation of a product depended on the participant’s background (industrial design or industrial engineering). Here, a group of participants observed different watering can designs together and evaluated each of the products using the semantic differential technique. In addition, they made a purchase decision and gave an evaluation of their confidence in the response.
First, the authors postulated that the visualization technique could have an impact on the user’s subjective impressions of the product (H1). In this context, the results of the statistical analysis showed that some product features can be affected by the medium, which is in line with findings obtained by other authors [26,29].
In A1, the semantic scales “Light–Heavy” and “Stable–Unstable” were influenced by the visual display for three of the four watering can models (Table 8), while in A2, the two-factor ANOVA (Table 12) showed a combined effect between product design and the medium used to present it, which may explain why some designs did not show perceptual differences for those features, in line with Palacios-Ibáñez et al. [3]. Additionally, in the case of weight, brightness may have affected size perception, as brighter objects often appear larger and closer to the user [49]; a larger perceived size may in turn be confused with a greater weight. Although attempts were made to maintain the same lighting conditions across the experimental conditions, slight differences may have influenced the results. Descriptive statistics showed that the product was perceived as heavier in the VE, so the products may have appeared brighter due to the lights or the materials used. Pairwise comparisons showed that differences were mostly found between VR–R, where interaction differences were also present (touch was not available in VR). Even though tactile feedback was available in the VRPH medium, the two virtual media shared the same visual conditions, so statistically significant differences between them were not necessarily expected. In the case of stability, differences were also present between VRPH–R.
These characteristics are evaluated mainly through touch and sight. In the case of weight, the absence of touch could have hindered the evaluation of this feature, resulting in statistically significant differences between means [20,21]. In the case of stability, the visual differences between the virtual environment and R could have introduced variability in the users’ responses. Some authors have classified product features according to Jordan’s pleasure categories [50]; their results [36] showed that the physical pleasure category is the most affected by the medium, which is in line with our results.
The remaining features did not show statistically significant differences between media (Table 8), which can be attributed to the design of the virtual experience. The ability to physically pick up and move the product to simulate its use may explain why differences between visualization techniques were observed for ease of filling in A2 (Table 12) but not in A1. On the other hand, technical limitations prevented the simulation of the full functionality of the product: users were unable to fill the watering cans with water and use them realistically, which may explain the absence of differences in features such as comfort or precision. Participants therefore relied primarily on the product’s geometry, which was consistent across all media (and which may explain the absence of a generalizable outcome regarding differences in aesthetics or size). In future studies, it is advisable to make the virtual experience as similar as possible to the real one to ensure more robust results.
Similar results were obtained for A2. Although the overall evaluation showed no differences between the experimental conditions, the statistical analysis (Table 12) showed that the scales “Easy to fill–Difficult to fill” and “Light–Heavy” were the most affected by the medium, with statistically significant differences for each watering can design. As before, brightness may also have affected the results [39]. Furthermore, although several bipolar pairs of adjectives were influenced by the medium, not all categories were affected in the same manner, as also found by Galán et al. [28]. In this context, our results agree with previous studies in that adjectives that require sensory interaction, such as touch, are more sensitive to a change of display medium [21,24,28]. For this case, H1 was confirmed.
In the second hypothesis, the authors speculated that the visual medium used to display a product could affect the purchase decision (H2). The only design that showed differences between means for this dataset was C in A2, so H2 was not confirmed. There are two possible explanations. First, although access to information is critical when making a purchase decision [51], the medium used to present that information may not have been a determining factor in our case: all the selected media may have conveyed the information necessary for a purchase intention to arise or not (which agrees with [52]). Second, the selected product may simply not have elicited a positive purchase decision in the limited sample used (Tables 7 and 11 show low values for this dataset). Additionally, the descriptive statistics for this dataset in A1 show that the highest scores tend to appear in the virtual media; presenting this product in a virtual environment may therefore have favored the purchase decision.
Volunteers also rated their confidence in their responses during the evaluation. Although the descriptive statistics showed that participants felt more confident in their responses in the VRPH and R media and less confident in VR, the statistical analysis showed no statistically significant differences in either case study (A1 or A2), so H3 was rejected. The descriptive statistics alone were not sufficient to conclude that the medium significantly affects response confidence: tactile feedback appears to increase it, but not to a significant extent compared to a fully synthetic environment.
For A1, we specifically hypothesized that the geometry of the product, together with the visualization technique, could influence its perception (H4). Many authors have examined the relationship between the shape of a product and the emotions it elicits, since aesthetics is one of the first channels through which designers communicate with consumers [53,54]. The two-factor ANOVA showed an interaction effect between design and medium for two of the characteristics: weight and stability. Our results are consistent with those obtained by other authors, who suggest that geometric product features (i.e., product aesthetics) may influence users’ subjective impressions [55,56]. In this case, H4 was confirmed.
Finally, for A2, we also postulated that the users’ background may influence product evaluation (H5). The statistical analysis (Table 12) showed that the perceived ease of filling, weight, and durability of the product can be influenced by the subject’s background. In addition, for “Easy to fill–Difficult to fill”, there was an interaction effect between medium and background. This can be explained by the training of the different volunteers, who can, for better or worse, intuit the behavior of the materials (related to the weight and durability of the object) or the usability of the product (related to the ease of filling). This is in line with the results obtained by [20]. For the case of the watering cans, H5 was confirmed.
Our results show that the visual medium used to present a product may significantly affect how the product is perceived. However, other factors, such as geometry, can also influence the user’s subjective impressions of a product. Therefore, not all products will yield the same response when the presentation medium is changed. For certain product features, technologies such as VRPH can be effective tools, but it is important to recognize how a particular medium relates to a specific product typology.

7. Conclusions

Our study demonstrates that the visual medium used to present a product can influence the user’s subjective impressions. Product evaluation is an essential task during the development of a new concept, so it is important to keep in mind that a virtual prototype (VP) can, to some extent, distort the perception of some product features and thereby introduce errors during product assessment. Some features are more difficult to evaluate with this prototyping technique, especially those related to touch. In this regard, passive haptics can help to minimize these differences, so VRPH can be an effective tool during the design process. We have shown that a VP cannot fully replace physical prototypes, but the two evaluation methods can complement each other to make the design process as fast and cost-effective as possible.
On the other hand, we also showed that an individual’s background can influence the perception of certain product features. This underlines the importance of conducting the evaluations produced during the development of a product with end users, so that the evaluation is as accurate as possible.
The authors are aware of the limitations of the study. On the one hand, the age range of the participants was very narrow, which may have influenced some of the results (such as the purchase decision). On the other hand, the slight lighting differences between the experimental conditions could also have influenced the assessment of some semantic scales.
Our work contributes to the fields of product design and human–computer interaction by empirically assessing the reliability of VR and VRPH as tools for product evaluation in the early stages of product development, highlighting the importance of understanding how new technologies affect user perception in the design process. In future studies, we plan to use physiological measures, such as eye-tracking technologies, to analyze user behavior more accurately and objectively during product evaluation activities.

Author Contributions

Conceptualization, A.P.-I., F.F.-M., J.G., C.G.-G. and M.C.; methodology, A.P.-I., F.F.-M., J.G., C.G.-G. and M.C.; software, A.P.-I.; validation, A.P.-I., F.F.-M., J.G., C.G.-G. and M.C.; formal analysis, A.P.-I.; investigation, A.P.-I., F.F.-M., J.G., C.G.-G. and M.C.; resources, A.P.-I., F.F.-M.; data curation, A.P.-I.; writing—original draft preparation, A.P.-I. and F.F.-M.; writing—review and editing, F.F.-M. and M.C.; visualization, A.P.-I.; supervision, M.C.; project administration, A.P.-I., F.F.-M., J.G., C.G.-G. and M.C.; funding acquisition, A.P.-I., F.F.-M., J.G., C.G.-G. and M.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by: (1) Spanish Ministry of Education and Vocational Training [FPU19/03878]; (2) Generalitat Valenciana [CIAICO/2021/037], and (3) Universitat Jaume I [UJI-B2019-39]. The APC was waived for this research.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data available on request due to restrictions (e.g., privacy or ethical).

Acknowledgments

The authors wish to thank designer Pablo Ayuso for his support.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Chen, Y. Neurological effect of the aesthetics of product design on the decision-making process of consumers. NeuroQuantology 2018, 16, 501–506.
2. Cooper, R.G. The drivers of success in new-product development. Ind. Mark. Manag. 2019, 76, 36–47.
3. Palacios-Ibáñez, A.; Pirault, S.; Ochando-Marti, F.; Contero, M.; Camba, J.D. An Examination of the Relationship between Visualization Media and Consumer Product Evaluation. IEEE Trans. Vis. Comput. Graph. 2023, 1–15.
4. Jiang, Z.; Benbasat, I. Investigating the influence of the functional mechanisms of online product presentations. Inf. Syst. Res. 2007, 18, 454–470.
5. Hornsey, R.L.; Hibbard, P.B.; Scarfe, P. Size and shape constancy in consumer virtual reality. Behav. Res. Methods 2020, 52, 1587–1598.
6. Yoo, J.; Kim, M. The effects of online product presentation on consumer responses: A mental imagery perspective. J. Bus. Res. 2014, 67, 2464–2472.
7. Ludlow, B.L. Virtual Reality: Emerging Applications and Future Directions. Rural Spec. Educ. Q. 2015, 34, 3–10.
8. Aziz, H. Virtual Reality Programs Applications in Healthcare. J. Health Med. Inform. 2018, 9, 305.
9. Meta. Introducing Meta: A Social Technology Company. Available online: https://about.fb.com/news/2021/10/facebook-company-is-now-meta/ (accessed on 25 February 2022).
10. Berni, A.; Borgianni, Y. Applications of Virtual Reality in Engineering and Product Design: Why, What, How, When and Where. Electronics 2020, 9, 1064.
11. Shapes Corp. Quest Pro Update with Eye and Face Tracking for Avatars, Colored MR and Stylus. Available online: https://www.shapesxr.com/post/new-avatars-eye-and-mouth-tracking-color-passthrough-stylus-and-more (accessed on 3 July 2023).
12. Microsoft Corporation. Microsoft Maquette. Available online: https://apps.microsoft.com/store/detail/microsoft-maquette/9NGD2SNL8Z4W?hl=en-us&gl=us (accessed on 3 July 2023).
13. Cecil, J.; Kanchanapiboon, A. Virtual engineering approaches in product and process design. Int. J. Adv. Manuf. Technol. 2007, 31, 846–856.
14. Ottosson, S. Virtual reality in the product development process. J. Eng. Des. 2002, 13, 159–172.
15. Ye, J.; Campbell, R.I.; Page, T.; Badni, K.S. An investigation into the implementation of virtual reality technologies in support of conceptual design. Des. Stud. 2006, 27, 77–97.
16. Bordegoni, M. Product Virtualization: An Effective Method for the Evaluation of Concept Design of New Products. In Innovation in Product Design; Bordegoni, M., Rizzi, C., Eds.; Springer: London, UK, 2011; pp. 117–141.
17. Hannah, R.; Joshi, S.; Summers, J.D. A user study of interpretability of engineering design representations. J. Eng. Des. 2012, 23, 443–468.
18. Ye, J.; Badiyani, S.; Raja, V.; Schlegel, T. Applications of Virtual Reality in Product Design Evaluation. In Human-Computer Interaction. HCI Applications and Services; Springer: Berlin/Heidelberg, Germany, 2007; pp. 1190–1199.
19. Chu, C.-H.; Kao, E.-T. A Comparative Study of Design Evaluation with Virtual Prototypes Versus a Physical Product. Appl. Sci. 2020, 10, 4723.
20. Palacios-Ibáñez, A.; Alonso-García, M.; Contero, M.; Camba, J.D. The influence of hand tracking and haptic feedback for virtual prototype evaluation in the product design process. J. Mech. Des. 2022, 145, 041403.
21. Galán, J.; Felip, F.; García-García, C.; Contero, M. The influence of haptics when assessing household products presented in different means: A comparative study in real setting, flat display, and virtual reality environments with and without passive haptics. J. Comput. Des. Eng. 2021, 8, 330–342.
22. Lindeman, R.W.; Sibert, J.L.; Hahn, J.K. Hand-held windows: Towards effective 2D interaction in immersive virtual environments. In Proceedings of the IEEE Virtual Reality (Cat. No. 99CB36316), Houston, TX, USA, 13–17 March 1999; pp. 205–212.
23. Insko, B.E. Passive Haptics Significantly Enhances Virtual Environments; The University of North Carolina at Chapel Hill: Chapel Hill, NC, USA, 2001.
24. Jerald, J. What Is Virtual Reality? In The VR Book; Association for Computing Machinery: New York, NY, USA, 2015; p. 9.
25. Voigt-Antons, J.-N.; Kojic, T.; Ali, D.; Moller, S. Influence of Hand Tracking as a Way of Interaction in Virtual Reality on User Experience. In Proceedings of the 2020 Twelfth International Conference on Quality of Multimedia Experience (QoMEX), Athlone, Ireland, 26–28 May 2020; IEEE: Athlone, Ireland, 2020; pp. 1–4.
26. Artacho-Ramírez, M.A.; Diego-Mas, J.A.; Alcaide-Marzal, J. Influence of the mode of graphical representation on the perception of product aesthetic and emotional features: An exploratory study. Int. J. Ind. Ergon. 2008, 38, 942–952.
27. Palacios-Ibáñez, A.; Ochando-Martí, F.; Camba, J.D.; Contero, M. The influence of the visualization modality on consumer perception: A case study on household products. In Proceedings of the 13th International Conference on Applied Human Factors and Ergonomics, New York, NY, USA, 24–28 July 2022; Amic, G.H., Ed.; pp. 160–166.
28. Galán, J.; García-García, C.; Felip, F.; Contero, M. Does a Presentation Media Influence the Evaluation of Consumer Products? A Comparative Study to Evaluate Virtual Reality, Virtual Reality with Passive Haptics and a Real Setting. Int. J. Interact. Multimed. Artif. Intell. 2021, 6, 196.
29. Felip, F.; Galán, J.; García-García, C.; Mulet, E. Influence of presentation means on industrial product evaluations with potential users: A first study by comparing tangible virtual reality and presenting a product in a real setting. Virtual Real. 2019, 24, 439–451.
30. Rodríguez-Parada, L.; Pardo-Vicente, M.-Á.; Sánchez-Calle, A.; Pavón-Domínguez, P. Conceptual Design Using Virtual Reality: Case Study with Portable Light. In Lecture Notes in Mechanical Engineering; Springer: Berlin/Heidelberg, Germany, 2022; pp. 81–90.
31. Bordegoni, M.; Carulli, M. Evaluating Industrial Products in an Innovative Visual-Olfactory Environment. J. Comput. Inf. Sci. Eng. 2016, 16, 1–9.
32. Martínez-Navarro, J.; Bigné, E.; Guixeres, J.; Alcañiz, M.; Torrecilla, C. The influence of virtual reality in e-commerce. J. Bus. Res. 2019, 100, 475–482.
33. Jiang, Z.; Benbasat, I. The effects of presentation formats and task complexity on online consumers’ product understanding. MIS Q. Manag. Inf. Syst. 2007, 31, 475–500.
34. Söderman, M. Comparing Desktop Virtual Reality with handmade sketches and real products exploring key aspects for end-users’ understanding of proposed products. J. Des. Res. 2002, 2, 7–26.
35. Söderman, M. Virtual reality in product evaluations with potential customers: An exploratory study comparing virtual reality with conventional product representations. J. Eng. Des. 2005, 16, 311–328.
36. Palacios-Ibáñez, A.; Navarro-Martínez, R.; Blasco-Esteban, J.; Contero, M.; Camba, J.D. On the application of extended reality technologies for the evaluation of product characteristics during the initial stages of the product development process. Comput. Ind. 2023, 144, 103780.
37. Azmandian, M.; Hancock, M.; Benko, H.; Ofek, E.; Wilson, A.D. Haptic Retargeting: Dynamic Repurposing of Passive Haptics for Enhanced Virtual Reality Experiences. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, San Jose, CA, USA, 7–12 May 2016; ACM: New York, NY, USA, 2016; pp. 1968–1979.
38. Forbes, T.; Barnes, H.; Kinnell, P.; Goh, M. A Study into the Influence of Visual Prototyping Methods and Immersive Technologies on the Perception of Abstract Product Properties. In Proceedings of NordDesign 2018, Linköping, Sweden, 14–17 August 2018.
39. Finn, A.; Louviere, J.J. Determining the Appropriate Response to Evidence of Public Concern: The Case of Food Safety. J. Public Policy Mark. 1992, 11, 12–25.
40. Luce, R.D. Individual Choice Behavior: A Theoretical Analysis; John Wiley & Sons: New York, NY, USA, 1959.
41. Luce, R.D. Preference, utility, and subjective probability. Handb. Math. Psychol. 1965, 3, 235–406.
42. Marley, A.A.J. Some probabilistic models of simple choice and ranking. J. Math. Psychol. 1968, 5, 333–355.
43. Helson, H. Adaptation-Level Theory: An Experimental and Systematic Approach to Behavior; Harper and Row: New York, NY, USA, 1964.
44. Marley, A.A.J.; Louviere, J.J. Some probabilistic models of best, worst, and best-worst choices. J. Math. Psychol. 2005, 49, 464–480.
45. Faul, F.; Erdfelder, E.; Lang, A.G.; Buchner, A. G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav. Res. Methods 2007, 39, 175–191.
46. Higgins, J.J.; Blair, R.C.; Tashtoush, S. The Aligned Rank Transform Procedure. Conf. Appl. Stat. Agric. 1990.
47. Mansouri, H.; Paige, R.L.; Surles, J.G. Aligned rank transform techniques for analysis of variance and multiple comparisons. Commun. Stat. Theory Methods 2004, 33, 2217–2232.
48. Wobbrock, J.O.; Findlater, L.; Gergle, D.; Higgins, J.J. The aligned rank transform for nonparametric factorial analyses using only anova procedures. In Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems—CHI ’11, Vancouver, BC, Canada, 7–12 May 2011; p. 143.
49. Singh, G.; Ellis, S.R.; Swan, J.E. The Effect of Focal Distance, Age, and Brightness on Near-Field Augmented Reality Depth Matching. IEEE Trans. Vis. Comput. Graph. 2020, 26, 1385–1398.
50. Jordan, P.W. Designing Pleasurable Products: An Introduction to the New Human Factors; CRC Press: Boca Raton, FL, USA, 2002.
51. O’Keefe, R.M.; McEachern, T. Web-based Customer Decision Support Systems. Commun. ACM 1998, 41, 71–78.
52. Ant Ozok, A.; Komlodi, A. Better in 3D? An empirical investigation of user satisfaction and preferences concerning two-dimensional and three-dimensional product representations in business-to-consumer e-commerce. Int. J. Hum. Comput. Interact. 2009, 25, 243–281.
53. Hsu, C.C.; Fann, S.C.; Chuang, M.C. Relationship between eye fixation patterns and Kansei evaluation of 3D chair forms. Displays 2017, 50, 21–34.
54. Liu, X.; Yang, S. Study on product form design via Kansei engineering and virtual reality. J. Eng. Des. 2022, 33, 412–440.
55. Perez Mata, M.; Ahmed-Kristensen, S.; Brockhoff, P.B.; Yanagisawa, H. Investigating the influence of product perception and geometric features. Res. Eng. Des. 2017, 28, 357–379.
56. Achiche, S.; Maier, A.; Milanova, K.; Vadean, A. Visual Product Evaluation: Using the Semantic Differential to Investigate the Influence of Basic Geometry on User Perception. In Systems, Design, and Complexity; American Society of Mechanical Engineers: New York, NY, USA, 2014; Volume 11, pp. 1–10.
Figure 1. Physical rooms used for the first case study: R1 (a); R2 (b); and R3 (c).
Figure 2. Physical rooms used for the second case study: R1 (a); and R2 (b).
Figure 3. Modeling of the space and watering cans for room R2 in Case Study II (a), based on the real scene R1 (b).
Figure 4. Stacked bar charts for the semantic scales (first case study).
Figure 5. Stacked bar charts for the semantic scales (second case study).
Table 1. Physical watering can characteristics used in Room 1 (modeled virtually for Rooms 2–3).

| | A (Vattenkrasse) | B (Chilifrukt) | C (Bittergurka) | D (Förenlig) |
|---|---|---|---|---|
| Material | Metal | Glass | Metal, Wood | Metal |
| Capacity (L) | 0.9 | 1.2 | 2 | 1.5 |
| Weight (g) | 240 | 385 | 480 | 660 |
| Height (cm) | 16 | 21 | 30.5 | 26 |
| Base diameter (cm) | 11.5 | - | 8.5 | 8 |
| Base area (cm²) | 103.82 | 126.82 | 56.72 | 50.24 |
| Price (€) | 15 | 12 | 12 | 9 |
Table 2. Room features for Case Study I.

| | R1 (R) | R2 (VR) | R3 (VRPH) |
|---|---|---|---|
| Visual stimulus | Light beige floor, light gray walls, light gray table, four watering cans (identical across rooms) | | |
| Physical stimulus | Four watering cans | None | Four watering cans |
| Actions: Touch | Yes | Yes (virtual) | Yes |
| Avatar | No (real hands) | Yes (virtual hands) | Yes (virtual hands) |
| HMD | None | Oculus Quest 2 | Oculus Quest 2 |
Table 3. Features of the physical watering cans used in room R1, modeled virtually for room R2.

| | A (Förenlig) | B (Bittergurka) | C (PS 2002) |
|---|---|---|---|
| Material | Metal | Metal, Wood | Plastic |
| Capacity (L) | 1.5 | 2 | 1.2 |
| Weight (g) | 660 | 480 | 90 |
| Height (cm) | 26 | 30.5 | 32 |
| Base diameter (cm) | 8 | 8.5 | - |
| Base area (cm²) | 50.24 | 56.72 | 54.24 |
| Price (€) | 9 | 12 | 1.5 |
Table 4. Room features for Case Study II.

| | R1 (R) | R2 (VR) |
|---|---|---|
| Visual stimulus | Light beige floor, light gray walls, very light gray table, three watering cans (identical across rooms) | |
| Physical stimulus | Three watering cans | None |
| Actions: Catch | Yes | Yes (virtual) |
| Actions: Lifting | Yes | Yes (virtual) |
| Actions: Move | Yes | Yes (virtual) |
| Avatar | No (real hands) | Yes (virtual hands) |
| HMD | None | Oculus Quest 2 |
Table 5. Adjectives used to evaluate the most relevant properties of watering cans using the best–worst scaling method.

| | Product Feature | Adjective (Best) | Adjective (Worst) |
|---|---|---|---|
| P1 | Grip comfort | Comfortable | Uncomfortable |
| P2 | Ease of filling | Easy to fill | Difficult to fill |
| P3 | Weight | Light | Heavy |
| P4 | Filling capacity | Small | Big |
| P5 | Irrigation precision | Accurate | Imprecise |
| P6 | Shelf life | Long-lasting | Perishable |
| P7 | Aesthetics | Beautiful | Ugly |
| P8 | Stability | Stable | Unstable |
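The feature set in Table 5 was selected using best–worst scaling [39,44]. A common way to score such data is the best-minus-worst count, normalized by how often each item was shown. The sketch below illustrates the scoring on a few hypothetical responses; the response data and function name are our own, not the study’s actual choice sets.

```python
from collections import Counter

def best_worst_scores(responses):
    """Score items from best-worst choice data: (#best - #worst) / #times shown.
    `responses` is an iterable of (items_shown, chosen_best, chosen_worst)."""
    best, worst, shown = Counter(), Counter(), Counter()
    for items, b, w in responses:
        best[b] += 1
        worst[w] += 1
        shown.update(items)                 # count every appearance of each item
    return {item: (best[item] - worst[item]) / shown[item] for item in shown}

# Hypothetical choice sets: (features shown, picked as most important, as least)
responses = [
    (("Grip comfort", "Weight", "Aesthetics"), "Grip comfort", "Aesthetics"),
    (("Weight", "Stability", "Aesthetics"), "Stability", "Aesthetics"),
    (("Grip comfort", "Stability", "Weight"), "Grip comfort", "Weight"),
]
scores = best_worst_scores(responses)
ranking = sorted(scores, key=scores.get, reverse=True)  # most to least important
```

Scores fall in [−1, 1]: an item always chosen as best scores 1.0, one always chosen as worst scores −1.0, which makes the resulting ranking easy to read off directly.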
Table 6. Descriptive Statistics for the Semantic Scales.

| Scale | Stat | A VR | A VRPH | A R | B VR | B VRPH | B R | C VR | C VRPH | C R | D VR | D VRPH | D R |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Comfortable–Uncomfortable | M | −0.32 | −0.30 | −0.30 | −0.54 | −0.42 | −0.52 | 0.62 | 0.50 | 0.66 | 0.24 | 0.22 | 0.16 |
| | Mdn | 0.00 | 0.00 | 0.00 | −1.00 | 0.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.00 | 0.00 | 0.00 |
| | SD | 0.59 | 0.58 | 0.54 | 0.58 | 0.58 | 0.61 | 0.49 | 0.68 | 0.48 | 0.52 | 0.58 | 0.55 |
| Easy to fill–Difficult to fill | M | −0.96 | −0.94 | −0.94 | 0.26 | 0.12 | 0.16 | 0.64 | 0.72 | 0.64 | 0.06 | 0.10 | 0.14 |
| | Mdn | −1.00 | −1.00 | −1.00 | 0.00 | 0.00 | 0.00 | 1.00 | 1.00 | 1.00 | 0.00 | 0.00 | 0.00 |
| | SD | 0.20 | 0.24 | 0.24 | 0.49 | 0.39 | 0.47 | 0.53 | 0.50 | 0.53 | 0.24 | 0.36 | 0.35 |
| Light–Heavy | M | 0.10 | 0.16 | 0.30 | 0.50 | 0.28 | 0.08 | −0.90 | −0.78 | −0.76 | 0.30 | 0.34 | 0.38 |
| | Mdn | 0.00 | 0.00 | 0.00 | 1.00 | 0.00 | 0.00 | −1.00 | −1.00 | −1.00 | 0.00 | 0.00 | 0.00 |
| | SD | 0.42 | 0.55 | 0.51 | 0.58 | 0.64 | 0.67 | 0.30 | 0.47 | 0.43 | 0.51 | 0.52 | 0.57 |
| Small–Big | M | 1.00 | 0.98 | 1.00 | 0.00 | −0.02 | 0.00 | −0.86 | −0.90 | −0.82 | −0.14 | −0.06 | −0.18 |
| | Mdn | 1.00 | 1.00 | 1.00 | 0.00 | 0.00 | 0.00 | −1.00 | −1.00 | −1.00 | 0.00 | 0.00 | 0.00 |
| | SD | 0.00 | 0.14 | 0.00 | 0.00 | 0.14 | 0.00 | 0.35 | 0.30 | 0.39 | 0.35 | 0.31 | 0.39 |
| Accurate–Imprecise | M | 0.56 | 0.64 | 0.60 | −0.90 | −0.92 | −0.86 | −0.06 | −0.06 | −0.14 | 0.40 | 0.34 | 0.40 |
| | Mdn | 1.00 | 1.00 | 1.00 | −1.00 | −1.00 | −1.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| | SD | 0.54 | 0.49 | 0.50 | 0.30 | 0.27 | 0.35 | 0.31 | 0.31 | 0.35 | 0.50 | 0.48 | 0.50 |
| Long-lasting–Perishable | M | −0.10 | −0.08 | 0.00 | −0.66 | −0.66 | −0.70 | 0.00 | 0.08 | −0.02 | 0.76 | 0.66 | 0.72 |
| | Mdn | 0.00 | 0.00 | 0.00 | −1.00 | −1.00 | −1.00 | 0.00 | 0.00 | 0.00 | 1.00 | 1.00 | 1.00 |
| | SD | 0.36 | 0.44 | 0.35 | 0.59 | 0.59 | 0.58 | 0.54 | 0.53 | 0.52 | 0.48 | 0.56 | 0.54 |
| Beautiful–Ugly | M | −0.38 | −0.40 | −0.38 | 0.00 | 0.02 | 0.02 | 0.36 | 0.36 | 0.30 | 0.02 | 0.02 | 0.06 |
| | Mdn | −0.50 | −0.50 | 0.00 | 0.00 | 0.00 | 0.00 | 0.50 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| | SD | 0.70 | 0.67 | 0.67 | 0.70 | 0.71 | 0.77 | 0.72 | 0.69 | 0.65 | 0.52 | 0.55 | 0.59 |
| Stable–Unstable | M | 0.22 | 0.22 | 0.42 | 0.06 | 0.18 | 0.34 | −0.90 | −0.88 | −0.90 | 0.62 | 0.48 | 0.14 |
| | Mdn | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | −1.00 | −1.00 | −1.00 | 1.00 | 0.50 | 0.00 |
| | SD | 0.42 | 0.47 | 0.50 | 0.51 | 0.52 | 0.48 | 0.30 | 0.39 | 0.36 | 0.49 | 0.54 | 0.54 |

Highest values and corresponding adjective are shown in italics, lowest values and corresponding adjective in bold.
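Summaries like Tables 6 and 10 (mean, median, and standard deviation per design × medium) can be produced from long-format response data with a single grouped aggregation. The snippet below is a sketch on made-up data with responses coded −1/0/+1 as on the three-point semantic scales; the column names are our own. Note that pandas computes the sample standard deviation (ddof = 1) by default, which may differ from the convention used in the original tables.

```python
import pandas as pd

# Hypothetical long-format data: one row per participant x design x medium,
# with the "Light-Heavy" answer coded -1 (Light), 0 (neutral), +1 (Heavy).
df = pd.DataFrame({
    "design": ["A"] * 4 + ["B"] * 4,
    "medium": ["VR", "VR", "R", "R"] * 2,
    "light_heavy": [0, 1, 0, 0, 1, 1, 0, 1],
})

# One M/Mdn/SD row per (design, medium) cell, mirroring the table layout.
summary = (df.groupby(["design", "medium"])["light_heavy"]
             .agg(M="mean", Mdn="median", SD="std")
             .round(2))
```

With a real dataset the same call scales to all eight semantic scales by listing the response columns before `.agg(...)`.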
Table 7. Descriptive Statistics for the Overall Evaluation and Purchase Decision.

| Measure | Stat | A VR | A VRPH | A R | B VR | B VRPH | B R | C VR | C VRPH | C R | D VR | D VRPH | D R |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Like | M | −0.48 | −0.44 | −0.38 | −0.22 | −0.20 | −0.16 | 0.38 | 0.32 | 0.24 | 0.32 | 0.32 | 0.30 |
| | Mdn | 0.00 | 0.00 | −0.50 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| | SD | 0.61 | 0.64 | 0.70 | 0.65 | 0.67 | 0.65 | 0.64 | 0.59 | 0.66 | 0.55 | 0.62 | 0.61 |
| Purchase decision | M | 0.06 | 0.08 | 0.12 | 0.06 | 0.04 | 0.04 | 0.32 | 0.30 | 0.24 | 0.34 | 0.36 | 0.34 |
| | Mdn | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| | SD | 0.24 | 0.27 | 0.33 | 0.24 | 0.20 | 0.20 | 0.47 | 0.46 | 0.43 | 0.48 | 0.49 | 0.48 |

Highest values are shown in italics, lowest values are shown in bold.
Table 8. One-Factor ANOVA for the Semantic Scales and the Overall Evaluation.

| Scale | df | A: F (p) | B: F (p) | C: F (p) | D: F (p) |
|---|---|---|---|---|---|
| Comfortable–Uncomfortable | 2 | 0.081 (0.922) | 1.550 (0.218) | 1.290 (0.280) | 0.649 (0.525) |
| Easy to fill–Difficult to fill | 2 | 0.197 (0.822) | 2.370 (0.099) | 0.940 (0.394) | 1.370 (0.258) |
| Light–Heavy | 2 | 3.470 (**0.035**) | 7.780 (**<0.001**) | 3.400 (**0.038**) | 0.600 (0.551) |
| Small–Big | 2 | 1.000 (0.372) | 1.000 (0.372) | 1.090 (0.339) | 2.330 (0.102) |
| Accurate–Imprecise | 2 | 0.859 (0.427) | 0.696 (0.501) | 1.450 (0.241) | 1.130 (0.328) |
| Long-lasting–Perishable | 2 | 1.940 (0.149) | 0.252 (0.777) | 1.050 (0.354) | 1.110 (0.335) |
| Beautiful–Ugly | 2 | 0.035 (0.966) | 0.045 (0.956) | 0.804 (0.450) | 0.282 (0.755) |
| Stable–Unstable | 2 | 5.750 (**0.004**) | 9.820 (**<0.001**) | 0.073 (0.930) | 20.700 (**<0.001**) |
| Like | 2 | 1.140 (0.325) | 0.270 (0.764) | 1.830 (0.166) | 0.062 (0.940) |

p-values showing perceptual differences (<0.05) are shown in bold.
Table 9. Post hoc for the Semantic Scales (cells show p-values; dashes indicate designs without a significant omnibus effect in Table 8).

| Scale | Contrast | A | B | C | D |
|---|---|---|---|---|---|
| Light–Heavy | VR–VRPH | 0.664 | 0.122 | 0.133 | – |
| | VR–R | **0.006** | **<0.001** | **0.047** | – |
| | VRPH–R | 0.302 | 0.150 | 0.801 | – |
| Stable–Unstable | VR–VRPH | 0.995 | 0.158 | – | 0.144 |
| | VR–R | **0.029** | **<0.001** | – | **<0.001** |
| | VRPH–R | **0.009** | **0.031** | – | **<0.001** |

p-values showing perceptual differences (<0.05) are shown in bold.
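Pairwise contrasts such as the VR/VRPH/R comparisons above require a correction for multiple testing. We do not know the authors’ exact post hoc procedure; the sketch below is a generic illustration using pairwise Mann–Whitney U tests with a Holm step-down correction on hypothetical, randomly generated "Light–Heavy" scores per medium.

```python
from itertools import combinations

import numpy as np
from scipy import stats

def pairwise_posthoc(samples):
    """Pairwise Mann-Whitney U tests with Holm step-down p-value adjustment.
    `samples` maps condition name -> 1-D array of responses."""
    pairs = list(combinations(samples, 2))
    raw = [stats.mannwhitneyu(samples[a], samples[b]).pvalue for a, b in pairs]
    order = np.argsort(raw)                    # Holm adjusts the smallest p first
    m, adj, running_max = len(raw), [0.0] * len(raw), 0.0
    for rank, idx in enumerate(order):
        p = min(1.0, (m - rank) * raw[idx])
        running_max = max(running_max, p)      # enforce monotone adjusted p-values
        adj[idx] = running_max
    return {pair: p for pair, p in zip(pairs, adj)}

# Hypothetical per-medium scores for one semantic scale (n = 50 per condition).
rng = np.random.default_rng(7)
data = {
    "VR": rng.normal(0.3, 0.5, 50),
    "VRPH": rng.normal(0.2, 0.5, 50),
    "R": rng.normal(-0.2, 0.5, 50),
}
posthoc = pairwise_posthoc(data)
```

With the injected means, the VR–R contrast comes out clearly significant while VR–VRPH (a 0.1 difference) typically does not, mirroring the pattern in Table 9 where most differences sit between the virtual and real media.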
Table 10. Descriptive Statistics for the Semantic Scales.

| Scale | Stat | A VR | A R | B VR | B R | C VR | C R |
|---|---|---|---|---|---|---|---|
| Comfortable–Uncomfortable | M | −0.52 | −0.55 | 0.12 | 0.33 | 0.40 | 0.22 |
| | Mdn | −1.00 | −1.00 | 0.00 | 0.00 | 1.00 | 0.00 |
| | SD | 0.70 | 0.76 | 0.79 | 0.64 | 0.68 | 0.76 |
| Easy to fill–Difficult to fill | M | −0.73 | 0.01 | −0.10 | 0.10 | 0.84 | −0.12 |
| | Mdn | −1.00 | 0.00 | 0.00 | 0.00 | 1.00 | 0.00 |
| | SD | 0.57 | 0.99 | 0.46 | 0.39 | 0.48 | 0.93 |
| Light–Heavy | M | 0.03 | −0.61 | −0.73 | −0.28 | 0.70 | 0.90 |
| | Mdn | 0.00 | −1.00 | −1.00 | 0.00 | 1.00 | 1.00 |
| | SD | 0.65 | 0.52 | 0.54 | 0.60 | 0.52 | 0.35 |
| Small–Big | M | 0.97 | 0.87 | −0.25 | −0.30 | −0.72 | −0.57 |
| | Mdn | 1.00 | 1.00 | 0.00 | 0.00 | −1.00 | −1.00 |
| | SD | 0.17 | 0.46 | 0.50 | 0.60 | 0.45 | 0.53 |
| Accurate–Imprecise | M | 0.69 | 0.60 | −0.55 | −0.45 | −0.13 | −0.15 |
| | Mdn | 1.00 | 1.00 | −1.00 | −1.00 | 0.00 | 0.00 |
| | SD | 0.50 | 0.58 | 0.58 | 0.68 | 0.80 | 0.80 |
| Long-lasting–Perishable | M | 0.33 | 0.55 | 0.03 | −0.07 | −0.36 | −0.48 |
| | Mdn | 0.00 | 1.00 | 0.00 | 0.00 | −1.00 | −1.00 |
| | SD | 0.75 | 0.58 | 0.80 | 0.74 | 0.77 | 0.77 |
| Beautiful–Ugly | M | −0.22 | −0.10 | 0.01 | 0.10 | 0.21 | 0.00 |
| | Mdn | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| | SD | 0.67 | 0.70 | 0.90 | 0.86 | 0.83 | 0.89 |
| Stable–Unstable | M | 0.90 | 0.97 | −0.37 | −0.30 | −0.52 | −0.67 |
| | Mdn | 1.00 | 1.00 | 0.00 | 0.00 | −1.00 | −1.00 |
| | SD | 0.39 | 0.17 | 0.55 | 0.52 | 0.59 | 0.47 |

Highest values are shown in bold.
Table 11. Descriptive Statistics for the Overall Evaluation and Purchase Decision.

| Measure | Stat | A VR | A R | B VR | B R | C VR | C R |
|---|---|---|---|---|---|---|---|
| Like | M | −0.27 | 0.16 | −0.15 | 0.06 | 0.42 | 0.09 |
| | Mdn | 0.00 | 0.00 | 0.00 | 0.00 | 1.00 | 0.00 |
| | SD | 0.73 | 0.75 | 0.78 | 0.83 | 0.78 | 0.85 |
| Purchase decision | M | 0.12 | 0.18 | 0.18 | 0.27 | 0.54 | 0.34 |
| | Mdn | 0.00 | 0.00 | 0.00 | 0.00 | 1.00 | 0.00 |
| | SD | 0.33 | 0.39 | 0.39 | 0.45 | 0.50 | 0.48 |

Highest values are shown in bold.
Table 12. Two-Factor ANOVA for the Semantic Scales, with Media as the First Factor and Background as the Second Factor.

| Scale | Factor | df | A: F (p) | B: F (p) | C: F (p) |
|---|---|---|---|---|---|
| Comfortable–Uncomfortable | Media | 2 | 0.040 (0.841) | 2.010 (0.161) | 3.530 (0.065) |
| | Background | | 0.670 (0.416) | 0.243 (0.624) | 3.830 (0.055) |
| | Mixed | | 16.100 (**<0.001**) | 0.223 (0.638) | 11.384 (**<0.001**) |
| Easy to fill–Difficult to fill | Media | | 125.460 (**<0.001**) | 14.500 (**<0.001**) | 213.984 (**<0.001**) |
| | Background | | 223.000 (**<0.001**) | 7.320 (**0.009**) | 86.600 (**<0.001**) |
| | Mixed | | 207.200 (**<0.001**) | 1.990 (0.163) | 192.990 (**<0.001**) |
| Light–Heavy | Media | | 57.450 (**<0.001**) | 53.183 (**<0.001**) | 7.780 (**0.007**) |
| | Background | | 53.700 (**<0.001**) | 54.700 (**<0.001**) | 3.100 (0.083) |
| | Mixed | | 3.590 (0.063) | 18.400 (**<0.001**) | 2.840 (0.097) |
| Small–Big | Media | | 2.670 (0.107) | 0.454 (0.503) | 6.990 (**0.010**) |
| | Background | | 2.600 (0.112) | 1.890 (0.174) | 6.380 (**0.014**) |
| | Mixed | | 5.910 (**0.018**) | 0.042 (0.838) | 2.300 (0.134) |
| Accurate–Imprecise | Media | | 1.450 (0.234) | 2.210 (0.142) | 0.003 (0.957) |
| | Background | | 0.010 (0.921) | 1.130 (0.291) | 0.300 (0.586) |
| | Mixed | | 0.520 (0.473) | 4.300 (**0.042**) | 1.910 (0.171) |
| Long-lasting–Perishable | Media | | 2.630 (0.109) | 1.050 (0.310) | 1.540 (0.219) |
| | Background | | 5.980 (**0.017**) | 4.080 (**0.048**) | 0.003 (0.954) |
| | Mixed | | 0.210 (0.648) | 1.360 (0.248) | 1.540 (0.219) |
| Beautiful–Ugly | Media | | 4.290 (**0.042**) | 2.640 (0.109) | 3.430 (0.068) |
| | Background | | 0.020 (0.884) | 0.181 (0.672) | 0.151 (0.699) |
| | Mixed | | 1.580 (0.213) | 0.003 (0.950) | 0.363 (0.549) |
| Stable–Unstable | Media | | 0.270 (0.608) | 1.880 (0.175) | 4.130 (**0.046**) |
| | Background | | 0.100 (0.753) | 0.073 (0.788) | 0.501 (0.481) |
| | Mixed | | 0.210 (0.650) | 4.950 (**0.029**) | 3.660 (0.060) |

p-values showing perceptual differences (<0.05) are shown in bold.

Cite as: Palacios-Ibáñez, A.; Felip-Miralles, F.; Galán, J.; García-García, C.; Contero, M. Consumer Subjective Impressions in Virtual Reality Environments: The Role of the Visualization Technique in Product Evaluation. Electronics 2023, 12, 3051. https://doi.org/10.3390/electronics12143051
