Article

Comparison of Home Use Tests with Differing Time and Order Controls

Department of Food Science and Nutrition & Kimchi Research Institute, Pusan National University, Busan 46241, Korea
*
Author to whom correspondence should be addressed.
Foods 2021, 10(6), 1275; https://doi.org/10.3390/foods10061275
Submission received: 12 May 2021 / Revised: 31 May 2021 / Accepted: 31 May 2021 / Published: 3 June 2021

Abstract

Consumer tests are classified by testing location as laboratory tests or central location tests (CLTs) and home use tests (HUTs). CLT is generally used in sensory testing because of its ease of control, whereas HUT has higher validity because it reflects real consumption. However, the lack of test control in HUT is a major issue. To investigate error occurrence and the effort required to minimize errors, three groups of tests differing in time and order control were designed, and evaluations were conducted using six snacks with textural differences. Errors related to time, order, and consumer or sample number were more frequent under more controlled conditions; however, most errors were recoverable using identification information, except for cases of no response. Additionally, consumers preferred to consume all snacks in the evening at home, which differed from the typical 9 a.m. to 6 p.m. evaluation window of CLT. The actual evaluation times also differed from consumers’ self-reported snacking times. The research title, which included the term ‘home’, might have influenced the participants’ choice of evaluation location. Overall, the results did not differ significantly between groups despite the different time and order controls, which could increase the applicability of HUT.

1. Introduction

Acceptance tests are crucial for food companies, as acceptability can be used to estimate the likelihood that customers will repurchase products. These tests are usually conducted in sensory laboratories under controlled environments with standardized samples and preparations, and have been used to predict potential long-term purchases [1]. Consumer acceptance tests are largely divided into laboratory tests or central location tests (CLT) and home use tests (HUT). In CLT, consumers visit a specific place, such as a shopping mall, hospital, or sensory test lab, to undergo the test. Most external factors, except certain environmental attributes under investigation, can be controlled at these places; thus, environmental control is a strength of CLT in sensory testing. Even though CLT is widely used, it is unreasonable to evaluate the whole product experience from a small serving presented during a relatively short exposure time. Thus, its real-world validity has been questioned [2,3], because consumers are affected by a variety of environmental factors in day-to-day life. Hence, the control elements of sensory tests have been adjusted and tested for years to better reflect real-use environments [4,5,6,7]. On the contrary, HUT is conducted at home, where consumers can evaluate under natural circumstances; thus, it is one of the most notable methods for measuring acceptability during real consumption. Nevertheless, the biggest challenge of HUT is control, as the evaluation is autonomous and influenced by external factors such as evaluation of an uncertain amount of sample, improper focus, and interference from other family members during evaluation.
In this respect, several comparative studies between CLT and HUT have been conducted, as the context effect suggests that results differ depending on the test location [4,5,8,9,10]. Most studies reported higher scores in HUT, as consumers could evaluate naturally in a relaxed setting over a prolonged period [5,9,10]. On the other hand, participants in controlled settings approached the tests analytically in order to better detect differences among the provided samples [5,11]. In addition, Boutrolle et al. [5] suggested that the contextual effect was influenced by the type of product. Hence, CLT would be a better option for evaluating samples with small differences and no strong link to a consumption context, whereas HUT would be more relevant for sample types specifically tied to certain contexts of consumption. However, there is not enough information on how HUT is implemented and how implementation choices influence data acquisition.
Several studies using HUT have addressed the real consumption situation in many ways; however, minimal information is available about the mode of implementation. Therefore, the information required to build structural standardization is inadequate. Accordingly, it is important to examine the various factors in HUT. The use of a maximum of three samples is encouraged for two reasons: it helps participants understand the experiment, and it avoids deviating from the natural consumption situation [12,13]. Sample sizes in HUT are larger than those in CLT because the evaluation period is longer; the samples provided in CLT are small compared to natural consumption situations [5,12,13]. Some studies asked the participants to consume a minimum quantity of samples [5,9]; however, most studies did not mention whether any instruction was given regarding the quantity of sample consumed. Zandstra et al. [1] compared consumption between groups and found that participants who repeatedly received identical products consumed smaller amounts than a free-choice group.
Product information on the package should be considered in HUT design, as consumers notice package cues during repeated consumption [14]. Mahieu et al. [15] examined whether participants derived sensory descriptions from wine labels. Conversely, serving samples in containers labeled with three-digit codes allowed researchers to withhold the original packaging from consumers [10]. However, perishable foods require careful consideration of variables such as temperature [11], and repackaging may raise food safety concerns.
Regarding procedures, samples were provided by participants stopping by the lab [1], by visits to participants’ homes to deliver the next testing samples and collect completed questionnaires [5], or by delivery service [16]. Unfortunately, most studies only stated the delivery status as ‘delivered’ or ‘sent’; hence, the handling of the samples was unclear. Participants in previous studies were asked to record their answers on a score sheet [16]; currently, however, online questionnaires are being implemented [14,15,17]. In addition, photographs of the participants [10], videos with observations [18], or interviews [19] have also been used for evaluation. Sample order was randomly allocated in most studies, although Zandstra et al. [1] compared acceptance depending on the degree of freedom to choose samples. Participants were asked to evaluate alone [1] or with family, friends, or both [17]. Zandstra et al. [1] also allowed products with different ingredients to be used.
CLT sets a fixed time limit for participants to evaluate samples, which could affect hedonic scores [5]. In contrast, HUT is conducted over a period of at least one week in order to help participants develop an overall liking for the sample products through long exposure [13,20]. Thus, boredom [16] and familiarity with unusual flavors [21] have been investigated. One of the greatest benefits of HUT is that samples can be evaluated at any time, although testing can be restricted to certain time periods [14]. Furthermore, some HUT evaluations enforced a minimum time lag between testing one sample and the next [10,15]. Some factors of HUT can thus be controlled to meet the aims of the test. A summary of the external factors in HUT discussed in the aforementioned paragraphs can be found in Figure 1.
Some challenges from the past remain; however, with the development of internet technology, data can now be collected and checked in real time. Furthermore, HUT is being used more owing to the prohibition of public gatherings during the COVID-19 pandemic, and it can serve as an alternative testing method in the new normal era. Although sensory evaluation had already started moving out of the controlled laboratory environment in order to reflect real consumption, COVID-19 accelerated this change, making HUT a necessary form of consumer test. Few studies have examined these tests; hence, they need to be investigated further for development and standardization. In addition, many studies mention the importance and common limitations of HUT, but information on methods for overcoming these limitations is insufficient. Therefore, it is imperative to review the kinds of errors that can occur while conducting HUT and to determine the amount of control and effort required to minimize them.
The objectives of this study were as follows: (1) to determine whether home use tests could be an alternative to central location tests or laboratory tests, (2) to study the kinds of errors that could occur while conducting home use tests, and (3) to suggest critical control factors.

2. Materials and Methods

2.1. Participants

A total of 300 Korean participants (218 females and 82 males between 19 and 65 years of age) were recruited through the online bulletin board of Pusan National University or by word of mouth and enrolled using an online survey tool (Survey Monkey, Palo Alto, CA, USA) accessed via QR codes. The demographic information of the consumers, along with their snacking frequency and usual daily snacking time, is shown in Table 1. Participants were selected based on the frequency of snack consumption (at least once every other day), willingness to consume the samples, and absence of food allergies or pregnancy. Candidates were asked to choose, from a list, items that they were not willing to consume as a snack; those unwilling to consume any of the test samples were automatically excluded from the study. The addresses of the participants were collected for shipping samples for testing at home. After the experiment, participants who completed the evaluation received mobile gift cards as compensation. This study was approved by the Institutional Review Board at Pusan National University (PNU IRB/2020_59_HR).

2.2. Samples

Jeltema et al. [22] divided individuals into four major groups depending on their mouth behavior: Crunchers, Chewers, Suckers, and Smooshers. Six commercial snacks (Table 2) with clear differences in texture were selected according to mouth behavior. Samples were individually wrapped in kraft bags labeled with three-digit random codes and consumer numbers. Participants received all samples and instructions at once by postal delivery.

2.3. Test Design

Participants were divided into three groups of 100, balanced for sex, age, and address. If the address provided for sample delivery was the same, the participants were considered to be living together and were assigned to the same group to minimize confusion during evaluation. Group A was the ‘No control’ group for time and order; these consumers were allowed to proceed with the evaluation at any time and in any order they wanted. Group B was the ‘Order control’ group, whose participants were instructed to evaluate in a preassigned testing order but could evaluate whenever they wanted. Group C was the ‘Time and order control’ group, which had a preassigned order and evaluated one sample every 2 days, three times per week. No instructions were given regarding the minimum amount of consumption or the presence of other people during the test. The instruction manual used pictograms for ease of understanding. Participants accessed the online questionnaire through quick response (QR) codes on the instruction manual or links sent via text messages: the ‘No control’ and ‘Order control’ groups used the QR codes, whereas the ‘Time and order control’ group received messages containing the questionnaire link at 10 a.m. on Monday, Wednesday, and Friday for two consecutive weeks. If a participant did not respond to the questionnaire for more than 72 h after the last evaluation, a reminder text message was sent to finish the assigned evaluation; participants in the ‘Time and order control’ group were reminded 2 days later to maintain the evaluation intervals with other participants in the same group. A participant’s evaluation was terminated if they ignored the message three times consecutively.
A schematic diagram of test design for comparing three home use test settings with differing test controls regarding evaluation time and test order is found in Figure 2.
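As a concrete illustration, the schedule and reminder logic for the ‘Time and order control’ group can be sketched as follows. This is a hypothetical reconstruction for illustration only; the function names are invented, and the 72-h threshold is implemented as a whole-day comparison based on the description above.

```python
# Hypothetical sketch of the 'Time and order control' schedule: links
# sent at 10 a.m. on Monday, Wednesday, and Friday over consecutive
# weeks, with a reminder when no response arrives within 72 h.
# Function names are illustrative, not from the study.
from datetime import date, timedelta

def evaluation_dates(start_monday, n_samples=6):
    """Return the Monday/Wednesday/Friday send dates for n_samples."""
    offsets = (0, 2, 4)  # Monday, Wednesday, Friday
    dates = []
    week = 0
    while len(dates) < n_samples:
        for off in offsets:
            if len(dates) < n_samples:
                dates.append(start_monday + timedelta(days=7 * week + off))
        week += 1
    return dates

def needs_reminder(sent, last_response, today):
    """A reminder is due when more than 72 h (3 days) have passed."""
    reference = last_response or sent
    return (today - reference).days > 3
```

For six samples, this schedule spans two consecutive weeks, matching the design described above.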

2.3.1. Questionnaire

Consumers filled out their identification number and product number each time and selected the time and location of their evaluation. The questionnaire comprised queries regarding product acceptability, texture characteristics, and prior knowledge of the product. Overall acceptability and liking for the packaging, flavor, and texture were evaluated using the nine-point hedonic scale (1 = “dislike extremely” and 9 = “like extremely”). Flavor intensity, texture intensity, and afterfeel intensity were measured using a nine-point scale (1 = “extremely weak” and 9 = “extremely strong”), whereas the amount of residue was measured on a six-point scale (0 = “none” and 5 = “very much”). Participants checked suitable texture terms using check-all-that-apply (CATA) questions, and the mouth behaviors (cruncher, chewer, sucker, or smoosher) relevant to the corresponding sample were determined. Subsequently, they answered yes/no questions about knowledge of the samples, brand name, and prior experience. Additionally, willingness to purchase was measured using a five-point scale (1 = “I would definitely not buy” and 5 = “I would definitely buy”).

2.3.2. Supplementary Questionnaire

After all the evaluations were done, participants answered questions about demographics, motivation for snacking, and mouth behavior. The snacking questionnaire had three main parts: first, the frequency, time, and reason for snacking; second, liking for 11 snack categories (snack, cookie or cracker, bread, fruit, chocolate, coffee, ice cream, beverage, jelly, nuts, and rice cake) using the nine-point hedonic scale (1 = “dislike extremely” and 9 = “like extremely”); and lastly, questions about the motivation for snacking using a six-point scale (1 = “never” and 6 = “always”) [23].
In the mouth behavior questionnaire, participants were asked about their preferred mouth behavior. They indicated their degree of preference on a six-point scale (1 = “strongly disagree” and 6 = “strongly agree”) using questions that revealed differences in food texture, together with photographs representing each of the four mouth behaviors [24,25,26,27]. They also answered questions about the condition of their teeth and allocated the relative importance of taste, flavor, and texture by percentage.

2.4. Data Analysis

Data were divided into two categories, complete and containing errors, and their frequencies were recorded. Data without any errors were considered complete. Data with errors related to time delay, evaluation order, or entry of an incorrect sample number or consumer identification were recovered by tracking personal identification information (the last four digits of the phone number). Missing response cases were not followed up because too much time had passed since consumption. After confirmation, all data, except those of participants who did not respond, were corrected and used for analysis. Demographic information was completed through further requests to participants. Consequently, the number of participants whose data were available differed among samples. The number of days taken to complete the evaluation of all samples was presented as the mean, minimum, median, and maximum of the difference between the start and end dates. The frequency and percentage of evaluation time and place, knowledge of samples, mouth behavior (MB), and amount of consumption were calculated. Liking and perceived intensity scores, willingness to purchase, and adequate portion size were analyzed using analysis of variance (ANOVA) to determine significant differences among groups and among samples within each group. When significance was found, Fisher’s least significant difference (LSD) test was conducted post hoc at a significance level of 0.05. Additionally, the evaluation order in the ‘No control’ group was counted as a frequency.
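The ANOVA-plus-LSD pipeline described above can be sketched as follows. This is a minimal illustrative implementation with synthetic hedonic scores, not the study’s actual data; the study itself used SAS 9.4.

```python
# A minimal sketch of the analysis pipeline: one-way ANOVA across
# samples, followed by Fisher's LSD as the post-hoc test when the
# ANOVA is significant (alpha = 0.05). Scores below are synthetic.
import numpy as np
from scipy import stats

def fishers_lsd(groups, alpha=0.05):
    """Return index pairs of groups whose mean difference exceeds the LSD."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    # Mean square error from the one-way ANOVA decomposition
    sse = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    df_error = n_total - k
    mse = sse / df_error
    t_crit = stats.t.ppf(1 - alpha / 2, df_error)
    significant = []
    for i in range(k):
        for j in range(i + 1, k):
            lsd = t_crit * np.sqrt(mse * (1 / len(groups[i]) + 1 / len(groups[j])))
            if abs(np.mean(groups[i]) - np.mean(groups[j])) > lsd:
                significant.append((i, j))
    return significant

# Synthetic nine-point hedonic scores for three hypothetical samples
rng = np.random.default_rng(1)
scores = [rng.normal(m, 1.0, 30) for m in (7.0, 6.8, 4.5)]
f_stat, p_value = stats.f_oneway(*scores)
pairs = fishers_lsd(scores) if p_value < 0.05 else []
```

Running the LSD step only after a significant omnibus ANOVA, as here, mirrors the protected-LSD procedure stated in the text.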
Data from the CATA questions were presented as the frequency of selected sensory attributes and used for correspondence analysis (CA). The RV coefficient test was also performed on the CA results to compare the sample and term configurations between groups.
Statistical analysis was performed using SAS® Software 9.4 (SAS Institute Inc., Cary, NC, USA). RV coefficient tests were analyzed using XLSTAT® Software package (Version 2020.2.1 Addinsoft SARL, New York, NY, USA).
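The RV coefficient itself is straightforward to compute from two sample configurations (e.g., CA coordinates). The sketch below follows the standard Robert–Escoufier definition with made-up coordinates; the study used XLSTAT for this step.

```python
# Illustrative computation of the RV coefficient between two sample
# configurations (rows = samples, columns = dimensions), following
# the standard Robert-Escoufier definition. Coordinates are made up.
import numpy as np

def rv_coefficient(X, Y):
    """RV coefficient between two column-centred configuration matrices."""
    X = np.asarray(X, dtype=float) - np.mean(X, axis=0)
    Y = np.asarray(Y, dtype=float) - np.mean(Y, axis=0)
    Sx, Sy = X @ X.T, Y @ Y.T  # sample-by-sample cross-product matrices
    return np.trace(Sx @ Sy) / np.sqrt(np.trace(Sx @ Sx) * np.trace(Sy @ Sy))

# Six samples in two CA dimensions (hypothetical coordinates)
X = np.array([[1.0, 0.2], [0.5, 1.0], [-1.0, 0.3],
              [0.2, -1.0], [2.0, 1.5], [-2.0, -1.8]])
```

The coefficient ranges from 0 to 1 and is invariant to rotation and scaling, so two groups’ CA maps can be compared without aligning them first.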

3. Results and Discussion

3.1. Checking of Errors and Analyzable Data

The frequency of analyzable data acquired from the evaluations is shown in Table 3. Complete data indicate data collected correctly under the conditions of each group without any errors, including time delay. Overall completeness was highest in the ‘No control’ group and lowest in the ‘Order control’ group. Completion of the survey was influenced by each group’s degree of control, such as the preassigned order and evaluation interval. As the ‘No control’ group had the least control of the three groups, only seven of its participants exceeded the evaluation period.
Simple errors, such as entry of incorrect consumer numbers or three-digit random codes, were detected and the data were saved. In the ‘Order control’ and ‘Time and order control’ groups, some evaluations did not follow the test design protocols; however, the sample number could be identified from the data. When an evaluation was delayed, participants were reminded and the evaluation period was extended. In addition, no response was also counted as an error. The pattern of overall error occurrence was similar to that of completeness, with the ‘Order control’ and ‘Time and order control’ groups having nearly twice as many errors as the ‘No control’ group. Most errors in the ‘Order control’ group arose from the preassigned order, although a few also occurred in the ‘Time and order control’ group despite the notifications. Completeness in the ‘Order control’ group was the lowest, even though its degree of control was not greater than that of the ‘Time and order control’ group, because preassigned order errors were counted several times per person. When a preassigned order error was counted once per participant regardless of how often it occurred, order error occurrence was highest in the ‘Time and order control’ group, followed by the ‘Order control’ and ‘No control’ groups. Other errors were mostly related to the incorrect entry of numbers, such as consumer numbers or three-digit random codes.
Recoverable errors refer to error data that were correctable; repeated mistakes by the same consumer were each counted as errors. Because participants entered the last four digits of their phone numbers for each evaluation, their consumer numbers could be traced and the errors rectified.
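This tracing step amounts to a simple lookup from the recorded phone digits back to the registered consumer number. The sketch below is hypothetical; the field names and registry structure are invented for illustration and do not come from the study.

```python
# Hypothetical sketch of the error-recovery step: an entry with a
# mistyped consumer number is traced back through the last four
# phone digits recorded at every evaluation. Field names are invented.
def recover_consumer_number(entry, registry):
    """Return the corrected consumer number, or None if untraceable."""
    if entry["consumer_no"] in registry.values():
        return entry["consumer_no"]            # already a valid number
    return registry.get(entry["phone_last4"])  # trace via phone digits

registry = {"1234": 101, "5678": 102}          # phone_last4 -> consumer_no
bad_entry = {"consumer_no": 110, "phone_last4": "5678"}
fixed = recover_consumer_number(bad_entry, registry)
```

Entries whose phone digits match no registered participant stay unresolved, which corresponds to the non-recoverable (no response) cases described above.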
The amount of missing data in each group was, in descending order: the ‘No control’ group, followed by the ‘Order control’ and ‘Time and order control’ groups. In the ‘Order control’ group, the missing data for apple sauce and potato chips exceeded the total number of dropped consumers. The number of participants who did not complete all evaluations followed the same order (‘No control’, ‘Order control’, then ‘Time and order control’); however, some of their data were still included.
Participants who did not complete the evaluation within the preset time period were considered for extended evaluation. Considerably more participants required extended evaluation in the ‘Time and order control’ group than in the ‘No control’ and ‘Order control’ groups, because participants in the ‘Time and order control’ group were informed of the testing sample only at the set evaluation intervals and could not proceed to the next step on their own. Hence, the ‘Time and order control’ group had a lower frequency of no responses and dropped consumers, although its number of extended evaluations was the highest.
All errors other than non-response data were recoverable upon confirmation. The response rate was high in all groups, and although the frequency of errors differed depending on the degree of control, most errors could be converted into complete data using the identification information.

3.2. Evaluation Time and Place

Information on evaluation time and place for each group is shown in Table 4. Most samples were evaluated in the evening, except for jelly in the ‘No control’ group and candy in the ‘Time and order control’ group. Consumers in the ‘No control’ and ‘Order control’ groups frequently evaluated samples in the afternoon. In the ‘Time and order control’ group, however, the frequency of evaluation was slightly higher in the morning, probably because the notification was sent in the morning. In all groups, samples were evaluated least frequently at dawn. Depending on the food type, there may be a more appropriate time of day for consumption: Birch et al. [28] indicated that breakfast food items were preferred more in the morning than in the afternoon, whereas food items associated with dinner were preferred more in the afternoon than in the morning. CLT is normally conducted within a fixed time period, usually from 10 a.m. to 6 p.m., whereas HUT participants choose a convenient consumption time unless noted otherwise; hence, natural behavior can be practiced. A comparison of evaluation times for CLT and HUT shows that most HUT participants usually conducted the evaluation in the late afternoon or evening [9], which led to increased satisfaction due to the free conditions [5]. A comparison of liking categories depending on evaluation time showed no difference (p > 0.05); in this study, evaluation time did not influence acceptability.
All samples were mostly consumed at home, followed by the workplace; likewise, the most frequent snack consumption location for Canadians and Norwegians was home, followed by the workplace [29,30]. In addition, participants were informed that the experiment was a ‘home use test’ beforehand, and most of them may have provided their home address as the evaluation location because the name of the test suggested they had to evaluate at home. More than half of the participants were office workers; therefore, the evaluation location may help explain the discrepancy between the snacking times reported in the supplementary questionnaire and the actual evaluation times.

3.3. Number of Days Taken for Home Use Test (HUT)

Table 5 shows the number of days taken for the HUT with six samples, calculated as the difference between the start and end dates. The ‘No control’ and ‘Order control’ groups showed a similar pattern in the number of days taken, including the mean, minimum, and median values, whereas the ‘Time and order control’ group had much higher values. Comparing the maximum number of days taken, the ‘No control’ and ‘Time and order control’ groups had higher values than the ‘Order control’ group.
Because testing days were not designated, ‘No control’ and ‘Order control’ participants could conduct the test in one day, whereas ‘Time and order control’ participants could take up to 10 days to complete the evaluation, considering the interval time and the dates on which text messages were sent. Interestingly, some participants in the ‘Time and order control’ group communicated with acquaintances in a different control group and received the survey link or QR code before their designated evaluation link was sent. However, acquaintance between participants could not be considered when assigning them to the same or different groups. The instructions should have clearly stated that confidentiality must be maintained during and after participation, even among participants who know each other.
The maximum days taken by the ‘Time and order control’ group reflected the effect of the periodic testing intervals and reminders. One consumer in the ‘No control’ group took 26 days to complete the testing; the missing data were found a few days later and completed, and without this case the maximum number of days for completion in the ‘No control’ group was 15. However, several participants in the ‘Time and order control’ group who received the evaluation and supplementary questionnaires on the last day did not evaluate carefully: they completed only one of the questionnaires, and the remainder was completed after a reminder. Another downside of sending reminders was that some participants in the ‘No control’ and ‘Order control’ groups completed all remaining evaluations at once after receiving the reminder, because they thought they were supposed to finish the questionnaires immediately. Furthermore, some participants lost the testing samples and requested replacements, and therefore needed more time for testing.

3.4. Consumer’s Liking and Perceived Intensity of Samples

Table 6 presents the mean values of consumer acceptability, perceived intensity, and amount of residue for the ‘No control’, ‘Order control’, and ‘Time and order control’ groups. The mean liking scores were generally between ‘neither like nor dislike’ and ‘like moderately’. In general, the liking, intensity, and amount of residue scores were similar among the three groups. Each liking category indicated very similar results across samples: spread wafer had the highest scores for overall, package, and flavor liking, while potato chips had the highest score for texture liking. Candy ranked highest for afterfeel and texture intensity. Apple sauce scored the lowest in every liking category and intensity. The amount of residue was highest for spread wafer and lowest for candy. There were no significant differences (p > 0.05) between groups in liking, perceived intensity, or amount of residue, whereas there were significant differences (p < 0.05) within groups.
Overall, liking was rated positively, ranging between neutral and ‘like moderately’, probably because all products were commercially available [31] and because the comfortable context could have positively influenced acceptability [5,32]. Apple sauce is not available for sale in Korean markets, so the Korean consumers were not familiar with the product; however, its taste and texture were liked, as they are similar to those of other products, such as apple pie [33]. Consistent with these results, participants did not have much knowledge of apple sauce (Table 7). Additionally, many participants answered in open-ended questions that they would eat the remaining sample with other snacks, such as bread or yogurt, rather than eating apple sauce by itself.
Although spread wafer was also an unfamiliar product (Table 7), its liking score was relatively high, which could have been affected by brand awareness [34,35] and familiarity with the spread used as the wafer filling. Soerensen et al. [36] indicated that there was no dynamic change in liking when novel flavors were added, because of the high perceived familiarity of chocolates. In addition, in the ‘No control’ group, well-known samples were evaluated first, whereas unfamiliar samples were evaluated later (Table 7 and Table 8).
Although the samples were wrapped in kraft bags, consumers in the ‘No control’ group might have opened all the bags to choose their evaluation order. More than half of the participants had knowledge of the samples, except for apple sauce. Although the spread wafer had low product awareness and experience, its brand awareness was high. On the other hand, apple sauce was mostly evaluated last and had the lowest product awareness, brand awareness, and experience.

3.5. Purchase Intent and Price Willing to Pay

The results for purchase intent and the price that consumers were willing to pay are shown in Table 9. There was no significant difference between groups (p > 0.05), while the samples showed significant differences within each group (p < 0.05). Similar to overall liking, spread wafer was rated highest in purchase intent, while apple sauce was rated lowest. Although apple sauce is similar to baby food, it was the only product not available in Korea and was thus unfamiliar to the participants.
Participants were asked, as an open-ended question, the price in Korean won (KRW) that they were willing to pay for the provided quantity of each sample; the mean values (and SD) of the responses are shown in Table 9. There was a significant difference (p < 0.05) within groups but no significant difference (p > 0.05) between groups. All samples were rated similarly among the three groups, except potato chips and spread wafer in the ‘No control’ group. Participants were willing to pay the highest price for jelly and the lowest for candy. A problem was that candy, cereal bar, and spread wafer were provided in quantities of more than one piece, possibly confusing consumers as to whether the question referred to one piece or all pieces provided. Most samples were priced higher than the original price, except for potato chips and spread wafer (Table 2 and Table 9). For apple sauce and spread wafer, the difference between the original price and the participants’ responses was larger than for the other samples, owing to the participants’ unfamiliarity with these products.

3.6. Analysis of the Texture

3.6.1. Mouth Behavior Used during Consumption

Consumers chose all relevant mouth behaviors (cruncher, chewer, sucker, or smoosher) used during consumption (Table 10). The most frequent mouth behavior for each sample was similar between groups. The mouth behavior commonly used for apple sauce and candy was sucker, and that for cereal bar, jelly, potato chips, and spread wafer was chewer, which differed slightly from the expected results (Table 2). Although the smoosher category includes soft foods, such as ripe bananas and custard, the apple sauce was close to liquid, and a large portion of consumers answered that they could not feel any texture and drank it like a juice; the chewer category was also selected because of its tiny particles. The other samples had the highest frequency in the chewer category, except for candy, whose texture changed during eating. Jeltema et al. [22] mentioned that people perceive the overall texture of a food item as the texture that lasts for the complete duration of eating, rather than that at the beginning.

3.6.2. Correspondence Analysis of Texture Characteristics

The correspondence analysis (CA) biplots show the relationship between the snack samples and the 51 texture attributes evaluated using the CATA method in each group (Figure 3). Dimensions 1 (Dim 1) and 2 (Dim 2) together explained 65.15% of the data variance in the ‘No control’ group, 65.22% in the ‘Order control’ group, and 66.11% in the ‘Time and order control’ group (Figure 3a–c). The RV coefficients indicated that the term configurations were similar between the ‘No control’ and ‘Order control’ groups (RV = 0.963, p < 0.001), the ‘No control’ and ‘Time and order control’ groups (RV = 0.961, p < 0.001), and the ‘Order control’ and ‘Time and order control’ groups (RV = 0.955, p < 0.001). The sample configurations were likewise similar: ‘No control’ versus ‘Order control’ (RV = 0.989, p < 0.001), ‘No control’ versus ‘Time and order control’ (RV = 0.997, p < 0.001), and ‘Order control’ versus ‘Time and order control’ (RV = 0.984, p < 0.001). The samples, which differed in texture, were dispersed into separate quadrants and were explained by nearby texture characteristics.

3.7. Analysis of Portion Size by Consumers

The amount consumed, as reported by consumers, is shown in Table 11; all groups were similar. More than half of the participants consumed all of the provided quantity of cereal bar, potato chips, and spread wafer, whereas more than 40 participants consumed under 1/3 of the provided quantity of apple sauce, candy, and jelly. Liking has been reported to relate positively to consumption [37,38]; in our study, however, overall liking was high for candy and jelly while their consumption was low. This might be related to the time required to consume these food items because of their texture attributes (Figure 3). Furthermore, the adequate portion size judged by consumers was lower than the provided quantity when the provided sample was taken as 100 percent (Figure 4). There were significant differences among samples within each group (p < 0.05), but no significant difference between groups (p > 0.05). The quantity of samples provided in CLT is relatively smaller than in HUT and the exposure time is brief [39]; hence, the amount consumers would actually eat may be mispredicted in a laboratory setting. Gough et al. [40] found that participants might underreport the portion size consumed in laboratory settings because of a tendency to conceal their eating behavior.
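The within-group comparison reported here corresponds to a one-way ANOVA across samples followed by a least-significant-difference (LSD) criterion. A sketch with SciPy, using made-up portion-size ratings rather than the study's data (sample names, means, and n are assumptions for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# hypothetical "adequate portion size" answers (% of the provided amount)
# from one consumer group of 95 people
ratings = {
    "apple sauce": rng.normal(60, 15, size=95),
    "candy":       rng.normal(45, 15, size=95),
    "potato chip": rng.normal(80, 15, size=95),
}

groups = list(ratings.values())
f_stat, p_value = stats.f_oneway(*groups)  # one-way ANOVA across samples

# Fisher's LSD for equal group sizes n: two sample means differ when
# |mean_i - mean_j| > t(1 - alpha/2, df_error) * sqrt(2 * MSE / n)
k, n = len(groups), len(groups[0])
df_error = k * n - k
mse = sum(((g - g.mean()) ** 2).sum() for g in groups) / df_error
lsd = stats.t.ppf(0.975, df_error) * np.sqrt(2 * mse / n)

print(f"ANOVA p = {p_value:.2g}, LSD = {lsd:.2f}")
```

Pairs of means further apart than the LSD value get different letters in tables such as Table 6 and Figure 4.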

3.8. Suggestions for the Home Use Test and Limitations

Contrary to CLT, many external factors influence testing in a real setting; hence, greater effort is required to evaluate several samples in HUT. After follow-up, the final number of participants with complete data was high despite the occurrence of various errors. Most errors involved incorrect entry of consumer numbers or of the three-digit random sample codes; others included not following the preassigned order of testing or extending the evaluation period. Because the consumer and sample numbers were both three-digit numbers, participants may have confused them, as they had to enter these numbers directly in open-ended questions after receiving six samples at once. Nevertheless, the identification information made it possible to correct these errors.
Kraft bags were used to hide the sample packages before evaluation. However, this may not have ensured random sample selection in the ‘No control’ group, as familiar and/or preferred products were evaluated first. Moreover, a few participants unwrapped the kraft bags as soon as they received them, thereby mixing up the sample numbers; in such cases, repackaging was required. Until the effect of packaging has been studied, the consistency of food quality should be ensured by avoiding external factors such as temperature, long contact with humid air, or direct sunlight. It is also important to check whether the shelf life of the product would last an extended evaluation period.
All six samples were provided simultaneously by postal delivery before the start date. However, some participants from the ‘No control’ and ‘Order control’ groups conducted the test before the announced start date using the QR code on the instruction manual, whereas the ‘Time and order control’ group could start testing only on the day the survey link was provided. If the start date of the evaluation is to be fixed, the survey link should be sent at the time of the first sample evaluation, which also eases the follow-up process.
A new finding of this study was that the job profile of the participants and the time at which messages were sent could influence evaluation time. The term ‘home’ in the research title included in the testing information may have influenced the evaluation place. The QR code allowed more freedom than the survey link in terms of evaluation time. In addition, the access time recorded by the online system did not coincide with the evaluation time reported in the questionnaire, despite this being mentioned in the written instructions; hence, it is important to emphasize such instructions prominently, or to use a video, for better understanding.
Social communication was not considered in our study. Snacks are generally eaten alone; however, their acceptability and consumption can be influenced by social interaction. In our study, family members or housemates were assigned to the same group, but this factor was excluded from the analysis due to low occurrence. Furthermore, some participants who were acquaintances contacted each other about the testing even though they were not in the same group during the evaluation period. Better instructions are therefore needed to prevent communication among participants and thereby reduce errors, such as participants in the ‘Time and order control’ group receiving survey links from the ‘Order control’ group. The recommendations for HUT are summarized in Figure 5.

4. Conclusions

This study investigated and compared the results of three groups differing in time and order control, using six samples with different textures in a home use test. It aimed to determine the amount of control and effort required to handle errors that might occur while conducting HUT. Overall, the evaluation results were similar between the groups regardless of the degree of control. Thus, HUT can be utilized similarly to CLT as a consumer test in terms of the number of samples. HUT allows the evaluation of samples in a real environment and can be designed to evaluate long-term usage; it can be used to improve new product launches and to evaluate their success.
Little research has been conducted in realistic environments, and there are almost no reports on the errors that may occur while conducting HUT, although known disadvantages of the method include its cost and high dropout rate. This study evaluated two control factors in HUT, preassigned order and evaluation time. It included the evaluation of six snack samples per participant, a number normally evaluated in one session in CLT or laboratory evaluation. If a CLT or laboratory test had been included as a control for comparison with HUT, our findings would have been better validated. As only a small consumer sample participated in this study, our findings may not generalize. When conducting HUT with more consumers and a higher number of samples than the traditional HUT of one or two samples, more errors or a higher dropout rate might be observed. More experiments on HUT should be performed for generalization, and other control factors, such as sample temperature, should also be considered in the future.

Author Contributions

Conceptualization, J.L.; Methodology, J.L.; Formal Analysis, N.L.; Investigation, N.L.; Resources, J.L.; Data Curation, N.L.; Writing—Original Draft Preparation, N.L.; Writing—Review and Editing, J.L.; Visualization, N.L.; Supervision, J.L.; Project Administration, N.L.; Funding Acquisition, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Review Board of Pusan National University (PNU IRB/2020_59_HR, 26 May 2020).

Informed Consent Statement

Participant consent was waived because the study was conducted as an online survey.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Zandstra, E.; de Graaf, C.; van Trijp, H. Effects of variety and repeated in-home consumption on product acceptance. Appetite 2000, 35, 113–119.
2. Meiselman, H.L. Methodology and theory in human eating research. Appetite 1992, 19, 49–55.
3. Stelick, A.; Dando, R. Thinking outside the booth—The eating environment, context and ecological validity in sensory and consumer research. Curr. Opin. Food Sci. 2018, 21, 26–31.
4. Meiselman, H.; Johnson, J.; Reeve, W.; Crouch, J. Demonstrations of the influence of the eating environment on food acceptance. Appetite 2000, 35, 231–237.
5. Boutrolle, I.; Delarue, J.; Arranz, D.; Rogeaux, M.; Köster, E.P. Central location test vs. home use test: Contrasting results depending on product type. Food Qual. Prefer. 2007, 18, 490–499.
6. Meiselman, H.L. The future in sensory/consumer research: ……….evolving to a better science. Food Qual. Prefer. 2013, 27, 208–214.
7. Jaeger, S.; Hort, J.; Porcherot, C.; Ares, G.; Pecore, S.; MacFie, H. Future directions in sensory and consumer science: Four perspectives and audience voting. Food Qual. Prefer. 2017, 56, 301–309.
8. Karin, W.; Annika, Å.; Anna, S. Exploring differences between central located test and home use test in a living lab context. Int. J. Consum. Stud. 2015, 39, 230–238.
9. Schouteten, J.J.; Gellynck, X.; Slabbinck, H. Influence of organic labels on consumer’s flavor perception and emotional profiling: Comparison between a central location test and home-use-test. Food Res. Int. 2019, 116, 1000–1009.
10. Zhang, M.; Jo, Y.; Lopetcharat, K.; Drake, M. Comparison of a central location test versus a home usage test for consumer perception of ready-to-mix protein beverages. J. Dairy Sci. 2020, 103, 3107–3124.
11. Sveinsdóttir, K.; Martinsdóttir, E.; Thórsdóttir, F.; Schelvis, R.; Kole, A.; Thórsdóttir, I. Evaluation of farmed cod products by a trained sensory panel and consumers in different test settings. J. Sens. Stud. 2010, 25, 280–293.
12. Resurreccion, A.V. Consumer Sensory Testing for Product Development; Aspen Publishers: Gaithersburg, MD, USA, 1998.
13. Meilgaard, M.C.; Carr, B.T.; Civille, G.V. Sensory Evaluation Techniques; CRC Press: Boca Raton, FL, USA, 2006.
14. Tijssen, I.O.; Zandstra, E.H.; Boer, A.D.; Jager, G. Taste matters most: Effects of package design on the dynamics of implicit and explicit product evaluations over repeated in-home consumption. Food Qual. Prefer. 2019, 72, 126–135.
15. Mahieu, B.; Visalli, M.; Thomas, A.; Schlich, P. Free-comment outperformed check-all-that-apply in the sensory characterisation of wines with consumers at home. Food Qual. Prefer. 2020, 84, 103937.
16. Zandstra, E.; Weegels, M.; Van Spronsen, A.; Klerk, M. Scoring or boring? Predicting boredom through repeated in-home consumption. Food Qual. Prefer. 2004, 15, 549–557.
17. Schouteten, J.J.; De Steur, H.; Sas, B.; De Bourdeaudhuij, I.; Gellynck, X. The effect of the research setting on the emotional and sensory profiling under blind, expected, and informed conditions: A study on premium and private label yogurt products. J. Dairy Sci. 2017, 100, 169–186.
18. De Wijk, R.; Kaneko, D.; Dijksterhuis, G.; van Zoggel, M.; Schiona, I.; Visalli, M.; Zandstra, E. Food perception and emotion measured over time in-lab and in-home. Food Qual. Prefer. 2019, 75, 170–178.
19. Spinelli, S.; Dinnella, C.; Ares, G.; Abbà, S.; Zoboli, G.; Monteleone, E. Global Profile: Going beyond liking to better understand product experience. Food Res. Int. 2019, 121, 205–216.
20. Lawless, H.T.; Heymann, H. Sensory Evaluation of Food: Principles and Practices; Springer Science & Business Media: New York, NY, USA, 2013.
21. Porcherot, C.; Issanchou, S. Dynamics of liking for flavoured crackers: Test of predictive value of a boredom test. Food Qual. Prefer. 1998, 9, 21–29.
22. Jeltema, M.; Beckley, J.; Vahalik, J. Food texture assessment and preference based on Mouth Behavior. Food Qual. Prefer. 2016, 52, 160–171.
23. Kim, M. Comparisons of Eating Motivation and Behavior Based on Degree of Diet Restraint or Diabetes Mellitus. Master’s Thesis, Pusan National University, Busan, Korea, 2020.
24. Cattaneo, C.; Liu, J.; Bech, A.C.; Pagliarini, E.; Bredie, W.L. Cross-cultural differences in lingual tactile acuity, taste sensitivity phenotypical markers, and preferred oral processing behaviors. Food Qual. Prefer. 2020, 80, 103803.
25. Jeong, S. Consumers’ Recognition of Texture and Its Relationship with (dis)Preferred Mouth Behavior. Master’s Thesis, Pusan National University, Busan, Korea, 2021.
26. Jeltema, M.; Beckley, J.H.; Vahalik, J. Importance of understanding mouth behavior when optimizing product texture now and in the future. Food Texture Des. Optim. 2014, 423–442.
27. Jeltema, M.; Beckley, J.; Vahalik, J. Model for understanding consumer textural food choice. Food Sci. Nutr. 2015, 3, 202–212.
28. Birch, L.L.; Billman, J.; Richards, S.S. Time of day influences food acceptability. Appetite 1984, 5, 109–116.
29. Myhre, J.B.; Løken, E.B.; Wandel, M.; Andersen, L.F. The contribution of snacks to dietary intake and their association with eating location among Norwegian adults—Results from a cross-sectional dietary survey. BMC Public Health 2015, 15, 369.
30. Vatanparast, H.; Islam, N.; Masoodi, H.; Shafiee, M.; Patil, R.P.; Smith, J.; Whiting, S.J. Time, location and frequency of snack consumption in different age groups of Canadians. Nutr. J. 2020, 19, 85.
31. Meiselman, H.L. Emotion measurement: Theoretically pure or practical? Food Qual. Prefer. 2017, 62, 374–375.
32. Zandstra, E.H.; Lion, R. Chapter 4: In-home testing. In Context: The Effects of Environment on Product Design and Evaluation; Meiselman, H.L., Ed.; Woodhead Publishing: Cambridge, UK, 2019; pp. 67–85.
33. Lee, J. Qualitative Emotion Research While Consuming Foods. Master’s Thesis, Pusan National University, Busan, Korea, 2019.
34. Deliza, R.; MacFie, H. The generation of sensory expectation by external cues and its effect on sensory perception and hedonic ratings: A review. J. Sens. Stud. 1996, 11, 103–128.
35. Varela, P.; Ares, G.; Giménez, A.; Gámbaro, A. Influence of brand information on consumers’ expectations and liking of powdered drinks in central location tests. Food Qual. Prefer. 2010, 21, 873–880.
36. Soerensen, J.G.; Waehrens, S.S.; Byrne, D.V. Predicting and Understanding Long-Term Consumer Liking of Standard Versus Novel Chocolate: A Repeated Exposure Study. J. Sens. Stud. 2015, 30, 370–380.
37. Dohle, S.; Rall, S.; Siegrist, M. I cooked it myself: Preparing food increases liking and consumption. Food Qual. Prefer. 2014, 33, 14–16.
38. Sørensen, L.B.; Møller, P.; Flint, A.; Martens, M.; Raben, A. Effect of sensory perception of foods on appetite and food intake: A review of studies on humans. Int. J. Obes. 2003, 27, 1152–1166.
39. Colla, K.; Keast, R.; Hartley, I.; Liem, D.G. Using an online photo based questionnaire to predict tasted liking and amount sampled of familiar and unfamiliar foods by female nutrition students. J. Sens. Stud. 2021, 36, e12614.
40. Gough, T.; Haynes, A.; Clarke, K.; Hansell, A.; Kaimkhani, M.; Price, B.; Roberts, A.; Hardman, C.A.; Robinson, E. Out of the lab and into the wild: The influence of portion size on food intake in laboratory vs. real-world settings. Appetite 2021, 162, 105160.
Figure 1. Summary of external factors in home use test (HUT).
Figure 2. Schematic diagram of test design for comparing three home use tests with differing test controls.
Figure 3. Correspondence analysis biplots using 51 texture attributes from the (a) ‘No control’, (b) ‘Order control’, and (c) ‘Time and order control’ groups. A total of 51 attributes were provided for CATA. Rhombus (◆) indicates samples.
Figure 4. Adequate portion size evaluated by consumers. Adequate portion size was presented as an open-ended question (written down as the ratio value compared to the provided sample amount as 100). There are no significant differences between groups; samples sharing the same letter at the top of bars means no significant differences within each group (α = 0.05).
Figure 5. A summary of instructions in HUT.
Table 1. Demographic information of the three groups of consumers.

| Variables | No Control N | % | Order Control N | % | Time and Order Control N | % | Total N | % |
|---|---|---|---|---|---|---|---|---|
| Sex |  |  |  |  |  |  |  |  |
| Female | 70 (3) ¹ | 70.0 | 74 (2) | 74.0 | 74 | 74.0 | 218 (5) | 72.7 |
| Male | 30 (2) | 30.0 | 26 | 26.0 | 26 (1) | 26.0 | 82 (3) | 27.3 |
| Age |  |  |  |  |  |  |  |  |
| 19–25 | 32 (3) | 32.0 | 28 | 28.0 | 30 | 30.0 | 90 (3) | 30.0 |
| 26–35 | 51 (2) | 51.0 | 51 (1) | 51.0 | 51 (1) | 51.0 | 153 (4) | 51.0 |
| 36–45 | 13 | 13.0 | 13 (1) | 13.0 | 12 | 12.0 | 38 (1) | 12.7 |
| 46–55 | 3 | 3.0 | 5 | 5.0 | 5 | 5.0 | 13 | 4.3 |
| 56–65 | 1 | 1.0 | 3 | 3.0 | 2 | 2.0 | 6 | 2.0 |
| Job ² |  |  |  |  |  |  |  |  |
| Student | 33 | 34.7 | 25 | 25.5 | 25 | 25.3 | 83 | 28.4 |
| Office worker | 47 | 49.5 | 52 | 53.1 | 55 | 55.6 | 154 | 52.7 |
| Self-employed | 1 | 1.1 | 4 | 4.1 | 1 | 1.0 | 6 | 2.1 |
| Housewife | 3 | 3.2 | 6 | 6.1 | 7 | 7.1 | 16 | 5.5 |
| Not working | 7 | 7.4 | 8 | 8.2 | 9 | 9.1 | 24 | 8.2 |
| Others | 4 | 4.2 | 3 | 3.1 | 2 | 2.0 | 9 | 3.1 |
| Snacking frequency per day |  |  |  |  |  |  |  |  |
| 0 | 2 | 2.1 | 3 | 3.1 | 0 | 0.0 | 5 | 1.7 |
| 1 | 59 | 62.1 | 54 | 55.1 | 61 | 61.6 | 174 | 59.6 |
| 2 | 27 | 28.4 | 30 | 30.6 | 30 | 30.3 | 87 | 29.8 |
| 3 | 5 | 5.3 | 7 | 7.1 | 4 | 4.0 | 16 | 5.5 |
| ≥4 | 2 | 2.1 | 4 | 4.1 | 4 | 4.0 | 10 | 3.4 |
| Usual snacking time ² |  |  |  |  |  |  |  |  |
| Between breakfast and lunch | 23 | 16.1 | 28 | 20.0 | 18 | 12.9 | 69 | 16.3 |
| Between lunch and dinner | 78 | 54.5 | 74 | 52.9 | 71 | 50.7 | 223 | 52.7 |
| Between post dinner and before sleeping | 42 | 29.4 | 38 | 27.1 | 51 | 36.4 | 131 | 31.0 |

¹ The number in parentheses indicates consumers who dropped out of the study. ² Job and snacking questions for the participants of the ‘No control’ (n = 95), ‘Order control’ (n = 98), and ‘Time and order control’ (n = 99) groups were asked at the end of evaluation; thus, the number between the groups differed.
Table 2. Information of six samples evaluated.

| Label | Product | Manufacturer | Amount per Package | Recommended Serving Size on Package | Units Provided | Weight Provided | Price for Quantity Provided (USD) ¹ | Mouth Behavior |
|---|---|---|---|---|---|---|---|---|
| Apple sauce | Mott’s® Applesauce Apple | Mott’s, LLP, Plano, TX, USA | 113 g | 113 g | 1 container | 113 g | 0.59 | Smoosher |
| Candy | Ricola Lemon Mint (sugar free) | Ricola Ltd., Laufen, Switzerland | 342 g | 3.6 g | 4 drops | 14.4 g | 0.35 | Sucker |
| Cereal bar | Kellogg’s® Rice Krispies Treats® | Kellogg, Battle Creek, MI, USA | 22 g | 22 g | 2 bars | 44 g | 0.59 | Chewer |
| Jelly | HARIBO Mega-Roulette | Haribo, Solingen, Germany | 45 g | 45 g | 1 package | 45 g | 0.38 | Chewer |
| Potato chip | LAY’S® Classic Potato Chips | Frito-Lay, INC., Plano, TX, USA | 42.5 g | 42.5 g | 1 package | 42.5 g | 1.09 | Cruncher |
| Spread wafer | Nutella B-ready | Ferrero OHG mbH, Hessen, Germany | 22 g | 22 g | 2 bars | 44 g | 1.47 | Cruncher & Smoosher |

Symbol: ®—stands for registered trademark. ¹ Exchange rate of 1230 KRW for 1 USD (as of May 2020).
Table 3. Frequency of available data used, complete data and error data that were modifiable in each group ¹.

| Group | No Control | Order Control | Time and Order Control |
|---|---|---|---|
| Completed without error |  |  |  |
| Apple sauce | 92 | 69 | 80 |
| Candy | 89 | 79 | 77 |
| Cereal bar | 85 | 79 | 81 |
| Jelly | 88 | 77 | 83 |
| Potato chip | 97 | 73 | 83 |
| Spread wafer | 90 | 80 | 95 |
| Error occurrence frequency (time, evaluation order, sample number error) |  |  |  |
| Apple sauce | 8 | 36 | 20 |
| Candy | 11 | 28 | 23 |
| Cereal bar | 15 | 22 | 20 |
| Jelly | 12 | 23 | 17 |
| Potato chip | 3 | 30 | 18 |
| Spread wafer | 10 | 21 | 7 |
| Recovered error |  |  |  |
| Apple sauce | 3 | 28 | 19 |
| Candy | 7 | 19 | 22 |
| Cereal bar | 12 | 18 | 18 |
| Jelly | 8 | 22 | 16 |
| Potato chip | 3 | 24 | 17 |
| Spread wafer | 9 | 18 | 4 |
| No response |  |  |  |
| Apple sauce | 5 | 3 | 1 |
| Candy | 4 | 2 | 1 |
| Cereal bar | 3 | 3 | 1 |
| Jelly | 4 | 1 | 1 |
| Potato chip | 0 | 3 | 0 |
| Spread wafer | 1 | 2 | 1 |
| Total number of dropped consumers | 5 | 2 | 1 |
| Extended evaluation | 78 | 5 | 7 |
| Final number of consumers for data analysis ² |  |  |  |
| Apple sauce | 95 | 97 | 99 |
| Candy | 96 | 98 | 99 |
| Cereal bar | 97 | 97 | 99 |
| Jelly | 96 | 99 | 99 |
| Potato chip | 100 | 97 | 100 |
| Spread wafer | 99 | 98 | 99 |

¹ Frequency indicates the number of consumers having one or more errors in each sample evaluation. ² Final completed number of consumers is 100 minus ‘no response’. All other errors were recoverable with confirmation.
Table 4. Information of evaluation time and place for each consumer group ¹. Values are N (%).

| Sample | Morning (6 a.m.–12 p.m.) | Afternoon (12 p.m.–6 p.m.) | Evening (6 p.m.–12 a.m.) | Dawn (12 a.m.–6 a.m.) | Home | Work Place | School | Others |
|---|---|---|---|---|---|---|---|---|
| No control |  |  |  |  |  |  |  |  |
| Apple sauce | 13 (13.7) | 31 (32.6) | 48 (50.5) | 3 (3.2) | 79 (83.2) | 13 (13.7) | 2 (2.1) | 1 (1.1) |
| Candy | 20 (21.1) | 33 (34.7) | 36 (37.9) | 7 (7.4) | 69 (71.9) | 17 (17.7) | 4 (4.2) | 6 (6.3) |
| Cereal bar | 16 (16.8) | 38 (40.0) | 40 (42.1) | 3 (3.2) | 83 (85.6) | 10 (10.3) | 3 (3.1) | 1 (1.0) |
| Jelly | 14 (14.7) | 41 (43.2) | 34 (35.8) | 7 (7.4) | 79 (82.3) | 10 (10.4) | 0 (0.0) | 7 (7.3) |
| Potato chip | 16 (16.8) | 30 (31.6) | 47 (49.5) | 7 (7.4) | 88 (88.0) | 8 (8.0) | 2 (2.0) | 2 (2.0) |
| Spread wafer | 24 (25.3) | 32 (33.7) | 38 (40.0) | 5 (5.3) | 76 (76.8) | 20 (20.2) | 2 (2.0) | 1 (1.0) |
| Order control |  |  |  |  |  |  |  |  |
| Apple sauce | 31 (32.0) | 25 (25.8) | 34 (35.1) | 7 (7.2) | 80 (81.6) | 14 (14.3) | 3 (3.1) | 1 (1.0) |
| Candy | 26 (26.8) | 32 (33.0) | 35 (36.1) | 5 (5.2) | 76 (76.8) | 15 (15.2) | 3 (3.0) | 5 (5.1) |
| Cereal bar | 25 (25.8) | 26 (26.8) | 43 (44.3) | 3 (3.1) | 76 (77.6) | 17 (17.3) | 2 (2.0) | 3 (3.1) |
| Jelly | 22 (22.7) | 36 (37.1) | 37 (38.1) | 4 (4.1) | 75 (75.8) | 15 (15.2) | 4 (4.0) | 5 (5.1) |
| Potato chip | 13 (13.4) | 29 (29.9) | 46 (47.4) | 9 (9.3) | 83 (85.6) | 7 (7.2) | 3 (3.1) | 4 (4.1) |
| Spread wafer | 25 (25.8) | 26 (26.8) | 41 (42.3) | 6 (6.2) | 83 (84.7) | 10 (10.2) | 3 (3.1) | 2 (2.0) |
| Time and order control |  |  |  |  |  |  |  |  |
| Apple sauce | 29 (29.3) | 30 (30.3) | 37 (37.4) | 3 (3.0) | 79 (79.8) | 17 (17.2) | 1 (1.0) | 2 (2.0) |
| Candy | 30 (30.3) | 34 (34.3) | 32 (32.3) | 3 (3.0) | 76 (76.8) | 15 (15.2) | 3 (3.0) | 5 (5.1) |
| Cereal bar | 34 (34.3) | 28 (28.3) | 35 (35.4) | 2 (2.0) | 75 (75.8) | 18 (18.2) | 2 (2.0) | 4 (4.0) |
| Jelly | 33 (33.3) | 30 (30.3) | 34 (34.3) | 2 (2.0) | 77 (77.8) | 16 (16.2) | 3 (3.0) | 3 (3.0) |
| Potato chip | 26 (26.3) | 30 (30.3) | 42 (42.4) | 2 (2.0) | 83 (83.0) | 13 (13.0) | 2 (2.0) | 2 (2.0) |
| Spread wafer | 29 (29.3) | 28 (28.3) | 36 (36.4) | 6 (6.1) | 81 (81.8) | 15 (15.2) | 0 (0.0) | 3 (3.0) |

¹ The number of participants who completed the test differs depending on the group. The number of participants for each group was as follows: ‘No control’ (n = 95), ‘Order control’ (n = 98), and ‘Time and order control’ (n = 99). Thus, the percentage was added for comparison.
Table 5. Number of days ¹ taken for the home use test (HUT) with six samples.

| Statistic | No Control | Order Control | Time and Order Control |
|---|---|---|---|
| Mean (±SD) | 9.0 ± 4.1 | 9.0 ± 3.8 | 15.0 ± 2.7 |
| Minimum | 1.0 | 1.0 | 10.0 |
| Median | 9.0 | 9.0 | 15.0 |
| Maximum | 26.0 | 18.0 | 25.0 |

Abbreviation: SD—standard deviation. ¹ Value of the difference between start and end date using date unit (‘Time and order control’ had fixed interval evaluation).
Table 6. Consumer’s liking and perceived intensity of ‘No control’, ‘Order control’, and ‘Time and order control’ groups ¹,²,³.

| Sample | Overall | Package | Flavor | Texture | Afterfeel | Afterfeel Intensity | Texture Intensity | Amount of Residue |
|---|---|---|---|---|---|---|---|---|
| No control |  |  |  |  |  |  |  |  |
| Apple sauce | 4.8 d | 5.5 c | 5.7 d | 4.8 c | 4.9 c | 5.6 b | 2.6 f | 1.6 c |
| Candy | 6.3 c | 6.2 ab | 6.6 b | 6.3 b | 6.3 a | 6.3 a | 7.4 a | 1.1 d |
| Cereal bar | 6.2 c | 6.0 b | 6.1 cd | 6.5 b | 5.6 b | 6.1 a | 4.2 e | 2.2 b |
| Jelly | 6.5 bc | 6.0 b | 6.4 bc | 6.1 b | 5.8 b | 5.6 b | 6.5 b | 1.3 cd |
| Potato chip | 6.9 ab | 6.1 b | 6.6 b | 7.4 a | 5.5 b | 6.2 a | 5.2 c | 2.5 ab |
| Spread wafer | 7.2 a | 6.6 a | 7.1 a | 6.5 b | 5.5 b | 6.2 a | 4.6 d | 2.7 a |
| p-value | <0.0001 | <0.0001 | <0.0001 | <0.0001 | <0.0001 | <0.0001 | <0.0001 | <0.0001 |
| LSD | 0.45 | 0.37 | 0.42 | 0.43 | 0.45 | 0.37 | 0.40 | 0.29 |
| Order control |  |  |  |  |  |  |  |  |
| Apple sauce | 4.5 c | 5.4 b | 5.3 e | 4.5 d | 4.5 c | 6.0 bc | 2.2 f | 1.6 c |
| Candy | 6.3 b | 6.4 a | 6.5 bc | 6.1 c | 6.2 a | 6.4 a | 7.3 a | 0.9 e |
| Cereal bar | 6.2 b | 5.7 b | 6.1 d | 6.3 c | 5.4 b | 6.1 bc | 3.9 e | 2.2 b |
| Jelly | 6.3 b | 6.2 a | 6.5 cd | 6.0 c | 5.6 b | 5.8 c | 6.6 b | 1.3 d |
| Potato chip | 7.1 a | 6.2 a | 6.9 ab | 7.4 a | 5.7 b | 6.1 abc | 5.1 c | 2.5 a |
| Spread wafer | 7.3 a | 6.6 a | 7.0 a | 6.8 b | 5.5 b | 6.3 ab | 4.4 d | 2.7 a |
| p-value | <0.0001 | <0.0001 | <0.0001 | <0.0001 | <0.0001 | <0.0001 | <0.0001 | <0.0001 |
| LSD | 0.48 | 0.39 | 0.44 | 0.48 | 0.47 | 0.36 | 0.41 | 0.28 |
| Time and order control |  |  |  |  |  |  |  |  |
| Apple sauce | 5.0 c | 5.6 c | 5.5 c | 4.7 d | 5.0 c | 5.7 c | 2.1 f | 1.5 c |
| Candy | 6.2 b | 6.1 ab | 6.4 b | 6.0 c | 6.1 a | 6.3 a | 7.4 a | 1.2 d |
| Cereal bar | 6.4 b | 5.8 bc | 6.3 b | 6.5 b | 5.4 bc | 6.1 ab | 3.9 e | 2.3 b |
| Jelly | 6.3 b | 5.7 c | 6.3 b | 5.6 c | 5.5 b | 5.7 c | 6.5 b | 1.3 cd |
| Potato chip | 6.9 a | 6.2 a | 6.8 a | 7.2 a | 5.4 bc | 5.9 bc | 5.3 c | 2.5 ab |
| Spread wafer | 7.1 a | 6.4 a | 7.1 a | 6.7 b | 5.4 bc | 6.0 abc | 4.7 d | 2.6 a |
| p-value | <0.0001 | <0.0001 | <0.0001 | <0.0001 | <0.0001 | <0.0001 | <0.0001 | <0.0001 |
| LSD | 0.45 | 0.37 | 0.39 | 0.49 | 0.45 | 0.36 | 0.39 | 0.29 |

Abbreviation: LSD—least significant difference. ¹ Evaluated using the nine-point scale from 1 (dislike extremely) to 9 (like extremely); the amount of residue was rated using the six-point scale from 0 (none) to 5 (very much). ² Lower case letters indicate significant differences within each group (α = 0.05). ³ There was no significant difference between groups (α = 0.05).
Table 7. Knowledge of samples ¹. Values are N (%).

| Sample | Product Awareness | Brand Awareness | Experience | Total N |
|---|---|---|---|---|
| No control |  |  |  |  |
| Apple sauce | 4 (4.2) | 4 (4.2) | 3 (3.2) | 95 |
| Candy | 53 (55.2) | 42 (43.8) | 49 (51.0) | 96 |
| Cereal bar | 44 (45.4) | 55 (56.7) | 41 (42.3) | 97 |
| Jelly | 68 (70.8) | 88 (91.7) | 58 (60.4) | 96 |
| Potato chip | 74 (74.0) | 61 (61.0) | 57 (57.0) | 100 |
| Spread wafer | 34 (34.3) | 88 (88.9) | 20 (20.2) | 99 |
| Order control |  |  |  |  |
| Apple sauce | 7 (7.2) | 6 (6.2) | 5 (5.2) | 97 |
| Candy | 57 (58.2) | 47 (48.0) | 53 (54.1) | 98 |
| Cereal bar | 45 (46.4) | 62 (63.9) | 39 (40.2) | 97 |
| Jelly | 79 (79.8) | 95 (96.0) | 61 (61.6) | 99 |
| Potato chip | 71 (73.2) | 56 (57.7) | 57 (58.8) | 97 |
| Spread wafer | 31 (31.6) | 84 (85.7) | 22 (22.4) | 98 |
| Time and order control |  |  |  |  |
| Apple sauce | 8 (8.1) | 7 (7.1) | 6 (6.1) | 99 |
| Candy | 51 (51.5) | 34 (34.3) | 46 (46.5) | 99 |
| Cereal bar | 44 (44.4) | 67 (67.7) | 41 (41.4) | 99 |
| Jelly | 78 (78.8) | 88 (88.9) | 63 (63.6) | 99 |
| Potato chip | 73 (73.0) | 67 (67.0) | 60 (60.0) | 100 |
| Spread wafer | 26 (26.3) | 86 (86.9) | 18 (18.2) | 99 |

¹ The frequency of awareness, brand awareness, and experience were measured using Yes or No.
Table 8. Evaluated order frequency for each sample in the ‘No control’ group.

| Sample | 1 | 2 | 3 | 4 | 5 | 6 | Cumulative Evaluation (N) |
|---|---|---|---|---|---|---|---|
| Apple sauce | 6 | 7 | 6 | 11 | 23 | 42 | 95 |
| Candy | 7 | 6 | 9 | 28 | 21 | 25 | 96 |
| Cereal bar | 11 | 17 | 28 | 18 | 15 | 8 | 97 |
| Jelly | 14 | 20 | 22 | 15 | 17 | 8 | 96 |
| Potato chip | 47 | 19 | 13 | 8 | 7 | 6 | 100 |
| Spread wafer | 15 | 29 | 19 | 17 | 13 | 6 | 99 |
| Total (N) ¹ | 100 | 98 | 97 | 97 | 96 | 95 |  |

¹ The frequency of the chosen sample each time from 1 to 6.
Table 9. Purchase intent and price willing to pay.

| Sample | Purchase Intent ¹: No Control | Purchase Intent: Order Control | Purchase Intent: Time and Order Control | Appropriate Price (USD) ²: No Control, Mean (SD) | Order Control, Mean (SD) | Time and Order Control, Mean (SD) |
|---|---|---|---|---|---|---|
| Apple sauce | 2.3 c ³ | 1.9 d | 2.1 d | 1.02 ab (0.48) | 1.09 ab (0.53) | 1.08 a (0.55) |
| Candy | 3.3 b | 3.4 ab | 3.0 c | 0.41 e (0.48) | 0.44 e (0.48) | 0.50 d (0.54) |
| Cereal bar | 3.2 b | 3.0 c | 3.1 bc | 0.80 d (0.34) | 0.75 d (0.39) | 0.73 c (0.36) |
| Jelly | 3.3 ab | 3.2 bc | 3.2 bc | 1.06 a (0.40) | 1.15 a (0.55) | 1.11 a (0.46) |
| Potato chip | 3.3 b | 3.5 ab | 3.4 ab | 0.91 c (0.29) | 0.99 bc (0.33) | 1.00 a (0.35) |
| Spread wafer | 3.6 a | 3.5 a | 3.6 a | 0.93 bc (0.36) | 0.96 c (0.42) | 0.88 b (0.40) |
| p-value | <0.0001 | <0.0001 | <0.0001 | <0.0001 | <0.0001 | <0.0001 |
| LSD | 0.32 | 0.31 | 0.31 | 0.10 | 0.11 | 0.11 |

Abbreviation: LSD—least significant difference; SD—standard deviation. ¹ Purchase intent was measured using the five-point scale from 1 = definitely would not purchase to 5 = definitely would purchase. ² Appropriate price was asked as an open-ended question in KRW; an exchange rate of 1230 KRW per USD (as of May 2020) was used to convert prices, and the converted prices were used for the analysis. ³ Sharing the same lower case letter means there is no significant difference between samples (α = 0.05). There was no significant difference between groups (α = 0.05).
Table 10. Mouth behavior of the consumers used during consumption. Values are N (%).

| Sample | Cruncher | Chewer | Sucker | Smoosher | Total N |
|---|---|---|---|---|---|
| Total |  |  |  |  |  |
| Apple sauce | 7 (2.4) | 114 (38.4) | 130 (43.8) | 46 (15.5) | 297 |
| Candy | 100 (19.3) | 91 (17.6) | 272 (52.5) | 55 (10.6) | 518 |
| Cereal bar | 95 (20.0) | 278 (58.6) | 13 (2.7) | 88 (18.6) | 474 |
| Jelly | 11 (2.5) | 285 (64.5) | 81 (18.3) | 65 (14.7) | 442 |
| Potato chip | 186 (35.1) | 272 (51.3) | 10 (1.9) | 62 (11.7) | 530 |
| Spread wafer | 177 (33.0) | 258 (48.1) | 36 (6.7) | 65 (12.1) | 536 |
| No control |  |  |  |  |  |
| Apple sauce | 0 (0.0) | 31 (32.0) | 53 (54.6) | 13 (13.4) | 95 |
| Candy | 31 (19.5) | 20 (12.6) | 89 (56.0) | 19 (11.9) | 96 |
| Cereal bar | 26 (17.1) | 92 (60.5) | 5 (3.3) | 29 (19.1) | 97 |
| Jelly | 3 (2.0) | 93 (63.3) | 30 (20.4) | 21 (14.3) | 96 |
| Potato chip | 70 (38.0) | 89 (48.4) | 1 (0.5) | 24 (13.0) | 100 |
| Spread wafer | 55 (32.9) | 82 (49.1) | 10 (6.0) | 20 (12.0) | 99 |
| Order control |  |  |  |  |  |
| Apple sauce | 2 (2.2) | 40 (44.0) | 34 (37.4) | 15 (16.5) | 97 |
| Candy | 36 (19.7) | 34 (18.6) | 94 (51.4) | 19 (10.4) | 98 |
| Cereal bar | 30 (19.4) | 93 (60.0) | 4 (2.6) | 28 (18.1) | 97 |
| Jelly | 5 (3.4) | 96 (65.8) | 24 (16.4) | 21 (14.4) | 99 |
| Potato chip | 58 (35.2) | 90 (54.5) | 3 (1.8) | 14 (8.5) | 97 |
| Spread wafer | 59 (32.6) | 88 (48.6) | 11 (6.1) | 23 (12.7) | 98 |
| Time and order control |  |  |  |  |  |
| Apple sauce | 5 (4.6) | 43 (39.4) | 43 (39.4) | 18 (16.5) | 99 |
| Candy | 33 (18.8) | 37 (21.0) | 89 (50.6) | 17 (9.7) | 99 |
| Cereal bar | 39 (23.4) | 93 (55.7) | 4 (2.4) | 31 (18.6) | 99 |
| Jelly | 3 (2.0) | 96 (64.4) | 27 (18.1) | 23 (15.4) | 99 |
| Potato chip | 58 (32.0) | 93 (51.4) | 6 (3.3) | 24 (13.3) | 100 |
| Spread wafer | 63 (33.5) | 88 (46.8) | 15 (8.0) | 22 (11.7) | 99 |
Table 11. The amount of consumption evaluated by consumers. Values are N (%).

| Sample | Less Than 1/3 | 1/3 | 1/2 | 3/4 | All | Total N |
|---|---|---|---|---|---|---|
| No control |  |  |  |  |  |  |
| Apple sauce | 22 (23.2) | 23 (24.2) | 8 (8.4) | 5 (5.3) | 37 (38.9) | 95 |
| Candy | 20 (20.8) | 18 (18.8) | 30 (31.3) | 1 (1.0) | 27 (28.1) | 96 |
| Cereal bar | 2 (2.1) | 9 (9.3) | 29 (29.9) | 5 (5.2) | 52 (53.6) | 97 |
| Jelly | 7 (7.3) | 36 (37.5) | 14 (14.6) | 6 (6.3) | 33 (34.4) | 96 |
| Potato chip | 4 (4.0) | 16 (16.0) | 12 (12.0) | 8 (8.0) | 60 (60.0) | 100 |
| Spread wafer | 4 (4.0) | 6 (6.1) | 34 (34.3) | 2 (2.0) | 53 (53.5) | 99 |
| Order control |  |  |  |  |  |  |
| Apple sauce | 21 (21.6) | 24 (24.7) | 14 (14.4) | 2 (2.1) | 36 (37.1) | 97 |
| Candy | 27 (27.6) | 22 (22.4) | 24 (24.5) | 0 (0.0) | 25 (25.5) | 98 |
| Cereal bar | 4 (4.1) | 2 (2.1) | 37 (38.1) | 3 (3.1) | 51 (52.6) | 97 |
| Jelly | 14 (14.1) | 35 (35.4) | 18 (18.2) | 7 (7.1) | 25 (25.3) | 99 |
| Potato chip | 4 (4.1) | 18 (18.6) | 11 (11.3) | 8 (8.2) | 56 (57.7) | 97 |
| Spread wafer | 3 (3.1) | 7 (7.1) | 29 (29.6) | 3 (3.1) | 56 (57.1) | 98 |
| Time and order control |  |  |  |  |  |  |
| Apple sauce | 12 (12.1) | 29 (29.3) | 15 (15.2) | 7 (7.1) | 36 (36.4) | 99 |
| Candy | 30 (30.3) | 16 (16.2) | 27 (27.3) | 4 (4.0) | 22 (22.2) | 99 |
| Cereal bar | 5 (5.1) | 5 (5.1) | 35 (35.4) | 2 (2.0) | 52 (52.5) | 99 |
| Jelly | 13 (13.1) | 37 (37.4) | 15 (15.2) | 5 (5.1) | 29 (29.3) | 99 |
| Potato chip | 1 (1.0) | 24 (24.0) | 7 (7.0) | 12 (12.0) | 56 (56.0) | 100 |
| Spread wafer | 3 (3.0) | 6 (6.1) | 28 (28.3) | 1 (1.0) | 61 (61.6) | 99 |
Share and Cite

Lee, N.; Lee, J. Comparison of Home Use Tests with Differing Time and Order Controls. Foods 2021, 10, 1275. https://doi.org/10.3390/foods10061275