Article

An Exploratory Approach to Deriving Nutrition Information of Restaurant Food from Crowdsourced Food Images: Case of Hartford

Xiang Chen, Evelyn Johnson, Aditya Kulkarni, Caiwen Ding, Natalie Ranelli, Yanyan Chen and Ran Xu

1 Department of Geography, University of Connecticut, Storrs, CT 06269, USA
2 Department of Allied Health Sciences, University of Connecticut, Storrs, CT 06269, USA
3 Department of Computer Science & Engineering, University of Connecticut, Storrs, CT 06269, USA
* Author to whom correspondence should be addressed.
Nutrients 2021, 13(11), 4132; https://doi.org/10.3390/nu13114132
Submission received: 26 September 2021 / Revised: 12 November 2021 / Accepted: 16 November 2021 / Published: 18 November 2021
(This article belongs to the Special Issue Diet Quality, Food Environment and Diet Diversity)

Abstract: Deep learning models can recognize food items in an image and derive their nutrition information, including calories, macronutrients (carbohydrates, fats, and proteins), and micronutrients (vitamins and minerals). This technology has yet to be implemented for the nutrition assessment of restaurant food. In this paper, we crowdsource 15,908 food images of 470 restaurants in the Greater Hartford region from Tripadvisor and Google Place. These food images are loaded into a proprietary deep learning model (Calorie Mama) for nutrition assessment. We employ manual coding to validate the model accuracy based on the Food and Nutrient Database for Dietary Studies. The derived nutrition information is visualized at both the restaurant level and the census tract level. The deep learning model achieves 75.1% accuracy when compared with manual coding. It produces more accurate labels for ethnic foods but cannot identify portion sizes, certain food items (e.g., specialty burgers and salads), or multiple food items in an image. The restaurant nutrition (RN) index is further proposed based on the derived nutrition information. By identifying the nutrition information of restaurant food through crowdsourced food images and a deep learning model, the study provides a pilot approach for large-scale nutrition assessment of the community food environment.

1. Introduction

Americans’ eating habits have undergone a drastic change: they are spending more on eating out than on cooking at home [1,2]. According to the United States (US) Department of Agriculture (USDA) Economic Research Service Food Expenditure Series, total sales of food prepared away from home (FAFH) surpassed those of food prepared at home (FAH) for the first time in 2014 [3]. The gap between the two expenditures has continued to widen in the years since. In 2019, expenditures for FAH were approximately $389,677 million, while expenditures for FAFH exceeded $418,933 million [3]. The largest share of FAFH (i.e., 36.8% based on the 2019 Food Expenditure Series [3]) was consumed at limited-service restaurants, generally known as fast-food restaurants. Compared to FAH, FAFH is relatively calorie-dense and nutrient-poor, as it contains more saturated fat, sodium, and cholesterol but less dietary fiber [4]. Thus, the change in dietary behaviors has put FAFH consumers at risk of developing obesity and obesity-related chronic diseases (e.g., Type II diabetes and cardiovascular diseases). For example, recent literature identified a strong association between FAFH consumption and caloric intake among children, strengthening the evidence of health adversities resulting from FAFH consumption [5,6].
To evaluate the nutrition of FAFH, it is essential to apply nutrition assessment methods to individual diets. This evaluation normally takes one of two approaches. The first is dietary assessment using nutritional biomarkers. Nutritional biomarkers are clinical instruments that identify the presence of nutrients in biological samples and thus can serve as a proxy for the human body’s nutrient absorption and metabolic response to food consumption [7,8]. While nutritional biomarkers can objectively quantify the nutritional status of samples, they are equipment-dependent and restricted to clinical settings. Additionally, the evaluation results are subject to an individual’s disease status or homeostatic regulation [9]. The second approach is individual dietary assessment, such as the food frequency questionnaire (FFQ), 24-h dietary recall (24HR), and dietary record (DR). This approach evaluates individuals’ food consumption and dietary patterns through structured surveys or in-depth interviews. Although individual dietary assessment directly observes food consumption patterns and can be implemented in non-clinical settings, it is subject to systematic biases introduced by human subjects, including recall error and reporting bias [10]. A further issue shared by both approaches is that they require considerable effort in data collection and processing, including personnel training, sample testing, interviewing, and data coding. For these reasons, these traditional nutrition assessment methods cannot be easily implemented on a large scale.
Advances in food image capturing and recognition technologies provide alternative means of dietary data collection and nutrition assessment. Food image recognition was initially explored in a pilot study by Williamson et al. [11], which employed digital photography and visual estimation of food selections, plate waste, and portion sizes. This method, later coined the Remote Food Photography Method (RFPM) [12], was reaffirmed by cross-validating the estimated nutrients in food photographs against trained raters [12] and nutritional biomarkers [13]. Advances in mobile devices and internet services further popularized the technology, allowing individuals to record their own dietary intake. For example, a prototype mobile device, called Wellnavi, was used to capture dietary data for clinical assessments and dietary interventions [14]. In another case, captured food images were cross-validated against individuals’ verbal descriptions of the food items [15]. However, mobile devices in these initial attempts served only as instruments for data collection, storage, and transfer; the actual nutrition assessment still relied on traditional measures, such as FFQ, 24HR, and DR [10]. More recently, technological advances in computer science have provided new opportunities for leveraging dietary data for effective nutrition assessment. These computer-aided methods, primarily deep learning models, can identify the actual food item in an image [16,17,18]. Complexities in food images, such as portion sizes [19] and the co-existence of multiple food items [20], have also been resolved by deep learning algorithms. In addition, proprietary mobile apps have been developed to estimate the nutrition facts of the food in an image through cloud services, where a nutrient composition database is hosted. Examples of these food image recognition apps (where nutrition facts can be simultaneously estimated) include Calorie Mama, Foodzilla, Lose It!, and Mealviser.
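To make the recognition step concrete, the sketch below shows how a CNN-based classifier of the kind cited above is typically queried. This is an illustrative stand-in, not the paper’s model: a generic torchvision ResNet is used, and in a real food application its classification head would be fine-tuned on a food dataset such as Food-101.

```python
# Illustrative sketch only: a generic torchvision CNN standing in for a
# food-specific model. In practice, the final layer would be fine-tuned on
# a food dataset (e.g., Food-101), so the ranked classes would be dishes.
import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

img = preprocess(Image.open("dish.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(img), dim=1)

# "Top-1" and "top-5" accuracy in the literature refer to whether the true
# label appears first, or anywhere, in this ranked list.
top5_prob, top5_idx = torch.topk(probs, k=5)
```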
While food image recognition by deep learning models has the potential to inform dietary decisions and facilitate health promotion, this method has not yet been applied on a large scale (e.g., to all restaurants in a city). In this paper, we investigate the applicability of this technology to the nutrition assessment of restaurant food using crowdsourced food review images in a US metropolitan area, the Greater Hartford region. We then validate the deep-learned results against manually coded nutrition information. Lastly, the paper explores the implications of the new method for community food environment studies through nutrition mapping and food inequality assessment. These endeavors not only present an exploratory case study of the deep learning model for nutrition assessment but also demonstrate the potential of the new method to assist health policymaking and health promotion.

2. Materials and Methods

Our case study was conducted in the Greater Hartford region in the US. The capital of Connecticut, Hartford is the fourth most populous city in the state. The total population of Hartford County was estimated at 891,720 as of 2019, with demographics (74.8% White, 15.8% African American, 6.1% Asian, 0.6% Native American) close to the national average [21]. Because of its vibrant economy and diverse population, the city serves as a cultural destination and food hub for Central and Eastern Connecticut. As Hartford interacts intensively with surrounding areas in terms of traffic, human movement, and services, we expanded our study area to Hartford and its five satellite cities (Bloomfield, West Hartford, East Hartford, Newington, and Wethersfield) to portray a comprehensive foodscape.

2.1. Data Collection

As the study focused on the nutrition assessment of restaurant food, we utilized two datasets: a restaurant directory and food review images. The first dataset, the restaurant directory, included the business information (e.g., name, address, hours of operation, and contact information) of all restaurants in the study area. This dataset was sourced from Yelp (a business listing website focused on restaurant ratings and reviews) through its Fusion application programming interface (API) [22]. The data were further processed with Python scripts to retrieve additional information, such as restaurant category, rating, and review count. We then cleaned the data by manually cross-validating against Google Place (a business listing website focused on restaurant locations and reviews), purging restaurants that were unidentifiable, permanently closed, or mislabeled (e.g., supermarkets, convenience stores, food pantries). This cross-validation between Yelp and Google Place ensured a high degree of accuracy and timeliness in the restaurant directory. Eventually, 487 restaurants fit the inclusion criteria and composed the initial restaurant directory for further investigation. These restaurants are visualized in ESRI ArcGIS Pro, as shown in Figure 1.
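As a concrete illustration of this step, the sketch below queries the Yelp Fusion business search endpoint and keeps the directory fields named above. The API key, pagination depth, and output layout are our assumptions rather than the authors’ exact script; only the endpoint and request parameters follow Yelp’s documented API.

```python
# Sketch of the restaurant-directory pull from the Yelp Fusion API.
# Assumes a valid API key; Yelp caps "limit" at 50 results per request.
import requests

API_KEY = "YOUR_YELP_API_KEY"  # assumption: researcher-supplied credential
SEARCH_URL = "https://api.yelp.com/v3/businesses/search"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

def fetch_restaurants(location: str, pages: int = 10, page_size: int = 50):
    """Collect id, name, address, category, rating, and review count."""
    records = []
    for offset in range(0, pages * page_size, page_size):
        resp = requests.get(SEARCH_URL, headers=HEADERS, timeout=30,
                            params={"location": location,
                                    "categories": "restaurants",
                                    "limit": page_size, "offset": offset})
        resp.raise_for_status()
        for biz in resp.json().get("businesses", []):
            records.append({
                "id": biz["id"],
                "name": biz["name"],
                "address": " ".join(biz["location"]["display_address"]),
                "categories": [c["title"] for c in biz.get("categories", [])],
                "rating": biz.get("rating"),
                "review_count": biz.get("review_count"),
            })
    return records

directory = fetch_restaurants("Hartford, CT")
```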
Our second dataset consisted of food review images shared by online users. These images were collected from two sources, Google Place and Tripadvisor (a travel advisory website with user-generated reviews), and were combined under the same restaurant listing. Specifically, the images were collected using the Simple Mass Downloader extension for Google Chrome [23]. A total of 19,907 images were initially collected and were manually refined against the following exclusion criteria: (1) the image was staged or part of an advertisement; (2) the image featured beverages; (3) the image showed a non-food item, such as buildings, dining environments, or people; (4) the restaurant had fewer than five images. The data collection from two sources and the follow-up filtering ensured a high degree of completeness and accuracy in the food image dataset. Our final food image dataset included 15,908 images from 470 restaurants, where each image was related to its restaurant by a common ID. These food images were further standardized to 544 × 544 pixels with a Python script for processing in the deep learning model.
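The standardization step is straightforward; a minimal sketch with Pillow follows, assuming the filtered images sit in one folder and are written to another (the folder names are ours).

```python
# Minimal sketch: resize every filtered image to 544 x 544 pixels.
# Folder names are assumptions; LANCZOS is a high-quality resampling filter.
from pathlib import Path
from PIL import Image

SRC, DST = Path("images_raw"), Path("images_544")
DST.mkdir(exist_ok=True)

for path in SRC.glob("*.jpg"):
    with Image.open(path) as img:
        img.convert("RGB").resize((544, 544), Image.LANCZOS) \
           .save(DST / path.name, "JPEG")
```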
It should be noted that, as of 2021, Google, Yelp, and Tripadvisor were the top three review platforms consumers used to make decisions about business patronage [24]. Thus, using all three platforms in our study for retrieving the restaurant directory, crowdsourcing food images, and cross-validation ensured the representativeness of the data and avoided the selection bias of focusing on a single platform.

2.2. Nutrition Assessments by Deep Learning Model and Manual Coding

We employed a proprietary deep learning model, Calorie Mama [25], for the nutrition assessment of food images. Developed by Azumio Inc. (Palo Alto, CA, USA), Calorie Mama is a deep learning-based image recognition model aimed at the nutrition assessment of food images. We chose Calorie Mama because a recent comparative study showed it to be the most accurate platform, with a top-1 accuracy of 63% and a top-5 accuracy of 88% [26]. When a food image is loaded into the model, it returns the most likely food label and the corresponding nutrition information. The derived nutrition information includes calories, macronutrients (carbohydrates, fats, and proteins), and micronutrients (vitamins and minerals) in the International System of Units (SI) (e.g., calories per 1 kg of food). In addition, the model can identify not only fresh produce but also prepared dishes, including regional cuisines and ethnic specialty dishes. Figure 2 shows an example of the nutrition assessment for a crowdsourced food image.
We used the Calorie Mama API to batch-process all collected food images and derive their nutrition information. To validate the results, two trained raters performed a manual nutrition assessment on a random sample of 281 images from 20 restaurants. The two raters independently coded the sample, including food type, nutrition information, and portion size, based on the 2017–2018 Food and Nutrient Database for Dietary Studies (FNDDS) [27]. Of the 281 images, 75 were double-coded to examine inter-rater reliability and ensure the validity of the assessment. The rated nutrition information was then compared with that identified by the deep learning model.
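A hedged sketch of the batch step follows. The Calorie Mama API is proprietary, so the endpoint URL, authentication parameter, and response schema below are placeholders rather than the documented interface; see dev.caloriemama.ai [25] for the actual specification.

```python
# Hedged sketch of batch nutrition assessment. ENDPOINT, the "user_key"
# parameter, and the response format are illustrative assumptions only.
import json
from pathlib import Path
import requests

ENDPOINT = "https://api.caloriemama.example/v1/foodrecognition"  # hypothetical
API_KEY = "YOUR_API_KEY"                                         # assumption

def assess_image(path: Path) -> dict:
    """POST one standardized 544 x 544 image; return label + nutrition JSON."""
    with path.open("rb") as f:
        resp = requests.post(ENDPOINT, params={"user_key": API_KEY},
                             files={"media": f}, timeout=60)
    resp.raise_for_status()
    return resp.json()

# Assumption: files are named "<restaurantID>_<n>.jpg" to keep the common ID.
results: dict[str, list] = {}
for img in Path("images_544").glob("*.jpg"):
    restaurant_id = img.stem.split("_")[0]
    results.setdefault(restaurant_id, []).append(assess_image(img))

Path("nutrition_raw.json").write_text(json.dumps(results))
```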

2.3. Restaurant Nutrition (RN) Index

Lastly, we performed a nutrition assessment of the restaurants based on the average calories across all food images for each restaurant (i.e., average calories per 1 kg of food) and visualized the nutrition information in a Geographic Information System (GIS). Furthermore, we proposed the restaurant nutrition (RN) index by aggregating the restaurants’ calorie estimates at the census tract level. We validated this new index by (1) quantitatively comparing it with an established food environment index, the Modified Retail Food Environment Index (mRFEI) [28], which evaluates the ratio of healthy food retailers (e.g., supermarkets) to unhealthy food retailers (e.g., fast-food restaurants) in a census tract, and (2) assessing Pearson’s correlations between the RN index and key variables derived from the CDC’s 2018 Social Vulnerability Index (SVI). The SVI utilizes the American Community Survey’s (ACS) 5-year estimates to determine the relative vulnerability of census tracts in four categories: socioeconomic status (SES), household composition and disability, minority status and language, and housing type and transportation [29]. The flow chart of the study is shown in Figure 3.
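The aggregation behind the RN index reduces to two group means plus a correlation; the sketch below expresses it with pandas and SciPy, under assumed file and column names for the intermediate tables.

```python
# Sketch of the RN index pipeline; CSV layouts and column names are assumed.
import pandas as pd
from scipy.stats import pearsonr

images = pd.read_csv("image_nutrition.csv")    # restaurant_id, kcal_per_kg
tracts = pd.read_csv("restaurant_tracts.csv")  # restaurant_id, tract_geoid

# Step 1: average calorie estimate per restaurant (kcal per 1 kg of food).
restaurant_cal = (images.groupby("restaurant_id")["kcal_per_kg"]
                  .mean().rename("mean_kcal").reset_index())

# Step 2: RN index = mean of restaurant averages within each census tract.
rn_index = (restaurant_cal.merge(tracts, on="restaurant_id")
            .groupby("tract_geoid")["mean_kcal"].mean().rename("rn_index"))

# Step 3: compare with the mRFEI on tracts where both are available.
mrfei = pd.read_csv("mrfei.csv").set_index("tract_geoid")["mrfei"]
joined = pd.concat([rn_index, mrfei], axis=1, join="inner").dropna()
r, p = pearsonr(joined["rn_index"], joined["mrfei"])
print(f"RN index vs. mRFEI: r = {r:.3f}, p = {p:.2f}")
```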

3. Results

3.1. Deep Learning Model Validation

Of the 75 double-coded images, the two raters agreed on 71 in terms of food types and FNDDS codes, reaching an inter-rater reliability of 94.7%. Of the 281 coded images, the deep learning model correctly identified 211, reaching an accuracy level of 75.1%. Four images were incorrectly identified by both the manual coding and the deep learning model due to poor image quality. Notably, the deep learning model produced more specific and accurate food labels for 24 images, mostly ethnic food items (specifically Korean dishes and, to a lesser extent, Mexican, Italian, and Chinese dishes). These ethnic food labels are not specified in the FNDDS.
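For transparency, the two headline figures reduce to simple proportions, as the check below shows.

```python
# Worked check of the validation figures reported above.
double_coded, agreed = 75, 71
coded, correct = 281, 211

print(f"Inter-rater reliability: {agreed / double_coded:.1%}")  # -> 94.7%
print(f"Model accuracy:          {correct / coded:.1%}")        # -> 75.1%
```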
The deep learning model is subject to various limitations. First, we found that the model had inaccurate identifications for images containing multiple food items, where it could only identify one of the food items present in the image. Second, the identification of certain food items was less precise and accurate. For example, the model identified most sandwiches and burgers as the “beef burger”; it also labeled many specialty salads as the “Caesar salad”, while there was apparent variability in the salad type by manual coding. Third, the deep learning model was unable to estimate the portion size from a food image.

3.2. Nutrition Mapping

The nutrition information identified from the 15,908 images of 470 restaurants can be further employed to estimate the nutrition quality of restaurants, which cannot be easily accomplished by traditional nutrition assessment methods. In this study, we estimated the nutrition quality of a restaurant by averaging the calories across all of its food images in the normalized SI unit (i.e., average calories per 1 kg of food). Since the nutrition information is standardized at the restaurant level, it can be compared across all restaurants. We then employed ESRI ArcGIS Pro to map the calorific level in five color-coded classes, where blue dots represent the lowest calorie level and red dots the highest. The mapping result is shown in Figure 4.
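The published map was produced in ESRI ArcGIS Pro; for readers without an ArcGIS license, the sketch below reproduces a five-class, blue-to-red point map with open-source tools (geopandas with mapclassify). The input layer, column name, and quantile classification are our assumptions; the paper does not specify its class breaks.

```python
# Open-source analogue of Figure 4 (the published map used ArcGIS Pro).
# "restaurants.geojson" and its "mean_kcal" column are assumptions.
# Requires: geopandas, mapclassify, matplotlib.
import geopandas as gpd
import matplotlib.pyplot as plt

pts = gpd.read_file("restaurants.geojson")
ax = pts.plot(column="mean_kcal", scheme="quantiles", k=5,
              cmap="RdYlBu_r",  # reversed so low = blue, high = red
              legend=True, markersize=12, figsize=(8, 8))
ax.set_title("Estimated calorific level of restaurants")
plt.savefig("calorific_map.png", dpi=300)
```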

3.3. Restaurant Nutrition Index—Measuring the Community Food Environment

The derived restaurant-level nutrition information can be further leveraged to examine critical inequality issues in the community food environment. This analysis helps address the missing nutrition component in existing community food environment studies [30]. Specifically, we proposed a restaurant nutrition (RN) index by aggregating the restaurants’ calorie estimates for each census tract (mean = 2214.92, standard deviation (SD) = 276.98, min = 1500, max = 3028.88). The result is shown in Figure 5. It reveals that high-index census tracts (red-colored tracts in Figure 5), representing areas with a relative concentration of high-calorie restaurants, are mostly found in the northeast quadrant of the study area, which also happens to contain the areas with low food access and fewer healthy food retailers as measured by other food environment indices, such as the Food Access Research Atlas [31] and the mRFEI. While the low-index census tracts (green-colored tracts in Figure 5) are more scattered geographically, a moderate consistency is identified between the RN index and the mRFEI in some census tracts, especially in the western and southern quadrants of the study area. Overall, we identified a moderate consistency but only a weak Pearson’s correlation between the RN index and the mRFEI (r = −0.141, p = 0.26).
To further explore the inequality patterns in the nutrition landscape, we correlated the derived RN index with selected socioeconomic and demographic variables from the CDC’s 2018 SVI data at the census tract level [29]. Pearson’s correlation analysis was performed between the selected SVI variables and the RN index across census tracts with available data (n = 66). The result is shown in Table 1.
Table 1 shows moderate positive correlations between the RN index and three SVI variables representing socioeconomic status, household composition, and housing type, respectively: % persons (age 25+) with no high school diploma (r = 0.24, p = 0.057), % single-parent households with children (r = 0.29, p = 0.018), and % persons in group quarters (r = 0.37, p = 0.002). The result signifies that the food inequality pattern in terms of restaurant nutrition does not map one-to-one onto social vulnerability in the study area, as the RN index correlates with only some social vulnerability indicators and not others (e.g., poverty rate, income, and vehicle access). The result can be interpreted in light of the debate over using fast-food access as a deprivation indicator: a systematic literature review shows that while many studies (16 out of 21) found fast-food restaurants to be more prevalent in SES-deprived areas, other studies did not reveal such a correlation [32]. To this end, our study can shed light on this food inequality debate: although the Greater Hartford region is regarded as one of the most segregated US metropolitan areas [33], access to nutritional restaurants across different neighborhoods may be less segregated and may exhibit a complex geographical pattern.
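The Table 1 analysis is a straight loop of Pearson tests; a sketch under assumed column names follows, using scipy.stats.pearsonr for the coefficients and p-values.

```python
# Sketch of the Table 1 correlations; the CSV layout and the SVI column
# names are assumptions (the CDC/ATSDR SVI itself uses codes such as EP_POV).
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("tract_rn_svi.csv")  # rn_index plus SVI variables per tract
svi_cols = ["pct_poverty", "pct_no_hs_diploma",
            "pct_single_parent", "pct_group_quarters"]

for col in svi_cols:
    sub = df[["rn_index", col]].dropna()
    r, p = pearsonr(sub["rn_index"], sub[col])
    print(f"{col:>20}: r = {r:+.2f}, p = {p:.3f}")
```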

4. Discussion

While deep learning models have been vigorously developed for food image recognition and nutrition analysis, this study is among the first to leverage this emerging technology for large-scale nutrition assessment of restaurant food. By crowdsourcing food images from food review websites, the study provides a pilot approach to restaurant nutrition assessment and can complement traditional nutrition assessment measures.
First, the study is among the first to couple a crowdsourcing approach with a deep learning model to improve the efficiency of nutrition assessment. Existing nutrition assessment measures, including individual dietary assessments (e.g., FFQ, 24HR, and DR) [10] and nutrition environment assessments (e.g., the Nutrition Environment Measures Study in Restaurants [NEMS-R]) [34], collect data at the individual or restaurant level. While they standardize the protocol and variables of the assessment, they demand considerable effort in data collection, testing, and coding. The crowdsourcing method, applied to all restaurants at a large scale, can automate dietary data collection, greatly reducing time, labor, and cost. Additionally, we validated the accuracy of the deep learning model at 75.1% based on the FNDDS codes. Although this accuracy is not as high as those of traditional measures, the new method can provide an overarching picture of the regional restaurant nutrition landscape at relatively low cost and high efficiency. From a practical perspective, the method is easily scalable and can be implemented for the nutrition assessment of small, individually owned restaurants or in underdeveloped countries where nutrition labeling is not readily available.
Second, mapping the nutrition information of restaurants can inform health policymaking and health promotion. With advances in geospatial technologies, primarily GIS, it has become viable to reveal the spatial distribution of food sources across communities [35] and to develop food indices and tools, such as the Food Access Research Atlas [31], the Food Environment Atlas [36], the “food swamp” index [37,38], and the mRFEI [28]. These spatial endeavors have been criticized for overemphasizing the spatial pattern of food establishments (e.g., proximity, density, variety) while lacking the food quality and nutrition measures needed to justify the food environment–diet relationship [39,40]. The quality and nutrition of food sources, coined by Glanz et al. [30] as the consumer nutrition environment, play a pivotal role in dictating community health. Our study takes a major step toward filling this gap by empirically measuring food quality and nutrition in the consumer nutrition environment. While our results show that restaurants with higher calories are somewhat more likely to be located in socially vulnerable areas (e.g., populations with lower education, more single-parent households, and more residents in group quarters), the weak correlation between our proposed RN index and the mRFEI is also somewhat expected and points to potential discrepancies between spatial food provisioning and food nutrition. By revealing the inequality of nutrition information across different communities, stakeholders can go beyond the simple categorization of food sources and be informed of regional pockets where calorie-dense, nutrient-poor restaurants prevail and where efforts for nutrition assistance and improvement should be prioritized. Moreover, the nutrition mapping results can be further developed into an interactive tool to facilitate health promotion in communities inundated with calorie-dense, nutrient-poor restaurants.
As a pilot study, this research has limitations. First, the crowdsourcing approach deviated from systematic sampling and might not fully characterize all restaurants’ nutrition information. The primary dataset, the food images, was solicited from two food review websites, excluding nutrition information from restaurants lacking an online presence. For example, restaurant reviews on Google Place have been found to be unevenly distributed, with national chain restaurants less likely to receive a review than independently operated restaurants [41]. Second, there were considerable uncertainties about the users who uploaded the food images, as restaurant reviewers do not mirror the demographics of local residents because of the “digital divide” [42]. Specifically, young adults are overrepresented in the online community [43] and could influence which restaurants and food items are reviewed. Third, our validity tests showed that the deep learning model achieved only 75.1% accuracy, as the model was incapable of estimating portion sizes, identifying certain food items, or distinguishing multiple food items in an image. We expect that the model performance could be improved by incorporating other food image recognition methods [16,17,18] and by cross-validating the results with trained raters. Finally, we focused only on the Greater Hartford region at the census tract level, and therefore the correlation results may not generalize to other study areas or analysis scales. However, the study design and implementation are transferrable to other study areas, especially in countries where menu labeling data are missing.

5. Conclusions

In this paper, we explore a new deep learning approach to the nutrition assessment of restaurant food. We validate the accuracy of the method against FNDDS-based calorie information. We further estimate the nutrition information of restaurants in the study area and develop the RN index to explore food inequality in the consumer nutrition environment. Our results show that the deep learning model can be empowered by crowdsourced food images, gathering dietary data at minimal cost and with acceptable data quality. However, the new method is still in an early phase of development due to the compromised model accuracy and the many uncertainties in user-generated dietary data. Thus, this new method should complement, rather than replace, traditional nutrition assessment methods for estimating nutrition information. We believe the new method holds promise as an instrument for large-scale nutrition assessment once the deep learning model is further improved and additional means are employed for data screening and result validation. Eventually, we expect that the deep-learned nutrition information can serve as evidence for developing an information system for nutrition education and health promotion.

Author Contributions

Conceptualization, X.C. and R.X.; methodology, X.C., A.K., C.D. and R.X.; validation, Y.C. and N.R.; formal analysis, E.J., A.K. and R.X.; data curation, A.K. and E.J.; writing—original draft preparation, E.J. and X.C.; writing—review and editing, A.K., Y.C., N.R., C.D. and R.X.; visualization, X.C.; supervision, R.X.; funding acquisition, X.C., C.D. and R.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by a hatch grant from the College of Agriculture, Health, and Natural Resources, University of Connecticut, funded by the National Institute of Food and Agriculture, United States Department of Agriculture, grant number CONS01031.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author. The data are not publicly available due to privacy concerns.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Guthrie, J.F.; Lin, B.-H.; Frazao, E. Role of food prepared away from home in the American diet, 1977–78 versus 1994–96: Changes and consequences. J. Nutr. Educ. Behav. 2002, 34, 140–150.
2. Nielsen, S.J.; Siega-Riz, A.M.; Popkin, B.M. Trends in energy intake in US between 1977 and 1996: Similar shifts seen across age groups. Obes. Res. 2002, 10, 370–378.
3. USDA. Food Expenditure Series. In Constant Dollar Food and Alcohol Expenditures, without Taxes and Tips, for All Purchasers. Available online: https://www.ers.usda.gov/data-products/food-expenditure-series/ (accessed on 24 August 2021).
4. Lin, B.-H.; Guthrie, J.F. Nutritional Quality of Food Prepared at Home and Away from Home, 1977–2008; USDA Economic Research Service: Washington, DC, USA, 2012.
5. Gillis, L.J.; Bar-Or, O. Food away from home, sugar-sweetened drink consumption and juvenile obesity. J. Am. Coll. Nutr. 2003, 22, 539–545.
6. Mancino, L.; Todd, J.E.; Guthrie, J.; Lin, B.-H. Food away from home and childhood obesity. Curr. Obes. Rep. 2014, 3, 459–469.
7. Potischman, N. Biologic and methodologic issues for nutritional biomarkers. J. Nutr. 2003, 133, 875S–880S.
8. Picó, C.; Serra, F.; Rodríguez, A.M.; Keijer, J.; Palou, A. Biomarkers of nutrition and health: New tools for new approaches. Nutrients 2019, 11, 1092.
9. Wild, C.; Andersson, C.; O’Brien, N.; Wilson, L.; Woods, J. A critical evaluation of the application of biomarkers in epidemiological studies on diet and health. Br. J. Nutr. 2001, 86, S37–S53.
10. Shim, J.-S.; Oh, K.; Kim, H.C. Dietary assessment methods in epidemiologic studies. Epidemiol. Health 2014, 36, e2014009.
11. Williamson, D.A.; Allen, H.; Martin, P.D.; Alfonso, A.; Gerald, B.; Hunt, A. Digital photography: A new method for estimating food intake in cafeteria settings. Eat. Weight Disord.-Stud. Anorex. Bulim. Obes. 2004, 9, 24–28.
12. Martin, C.K.; Han, H.; Coulon, S.M.; Allen, H.R.; Champagne, C.M.; Anton, S.D. A novel method to remotely measure food intake of free-living individuals in real time: The remote food photography method. Br. J. Nutr. 2008, 101, 446–456.
13. Martin, C.K.; Nicklas, T.; Gunturk, B.; Correa, J.B.; Allen, H.R.; Champagne, C. Measuring food intake with digital photography. J. Hum. Nutr. Diet. 2014, 27, 72–81.
14. Kikunaga, S.; Tin, T.; Ishibashi, G.; Wang, D.-H.; Kira, S. The application of a handheld personal digital assistant with camera and mobile phone card (Wellnavi) to the general population in a dietary survey. J. Nutr. Sci. Vitaminol. 2007, 53, 109–116.
15. Rollo, M.E.; Ash, S.; Lyons-Wall, P.; Russell, A. Trial of a mobile phone method for recording dietary intake in adults with type 2 diabetes: Evaluation and implications for future applications. J. Telemed. Telecare 2011, 17, 318–323.
16. Mezgec, S.; Koroušić Seljak, B. NutriNet: A deep learning food and drink image recognition system for dietary assessment. Nutrients 2017, 9, 657.
17. Christodoulidis, S.; Anthimopoulos, M.; Mougiakakou, S. Food recognition for dietary assessment using deep convolutional neural networks. In Proceedings of the International Conference on Image Analysis and Processing, Genova, Italy, 7 September 2015; pp. 458–465.
18. Kawano, Y.; Yanai, K. Food image recognition with deep convolutional features. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication, Seattle, WA, USA, 13 September 2014; pp. 589–593.
19. Liu, C.; Cao, Y.; Luo, Y.; Chen, G.; Vokkarane, V.; Ma, Y. DeepFood: Deep learning-based food image recognition for computer-aided dietary assessment. In Proceedings of the International Conference on Smart Homes and Health Telematics, Wuhan, China, 25 May 2016; pp. 37–48.
20. Dehais, J.; Anthimopoulos, M.; Mougiakakou, S. Food image segmentation for dietary assessment. In Proceedings of the 2nd International Workshop on Multimedia Assisted Dietary Management, Amsterdam, The Netherlands, 16 October 2016; pp. 23–28.
21. Hartford County, CT. Available online: https://datausa.io/profile/geo/hartford-county-ct (accessed on 26 August 2021).
22. Yelp Fusion. Available online: https://www.yelp.com/fusion (accessed on 26 August 2021).
23. Simple Mass Downloader. Available online: https://chrome.google.com (accessed on 26 August 2021).
24. ReviewTrackers. 2021 Online Reviews Statistics and Trends: A Report by ReviewTrackers. Available online: https://www.reviewtrackers.com/reports/online-reviews-survey/ (accessed on 26 September 2021).
25. Calorie Mama API. Available online: https://dev.caloriemama.ai/ (accessed on 5 September 2021).
26. Van Asbroeck, S.; Matthys, C. Use of different food image recognition platforms in dietary assessment: Comparison study. JMIR Form. Res. 2020, 4, e15602.
27. USDA. Food and Nutrient Database for Dietary Studies. Available online: https://www.ars.usda.gov/northeast-area/beltsville-md-bhnrc/beltsville-human-nutrition-research-center/food-surveys-research-group/docs/fndds/ (accessed on 5 September 2021).
28. CDC. Census Tract Level State Maps of the Modified Retail Food Environment Index (mRFEI); CDC: Atlanta, GA, USA, 2013.
29. CDC/ATSDR Social Vulnerability Index. Available online: https://www.atsdr.cdc.gov/placeandhealth/svi/index.html (accessed on 6 September 2021).
30. Glanz, K.; Sallis, J.F.; Saelens, B.E.; Frank, L.D. Healthy nutrition environments: Concepts and measures. Am. J. Health Promot. 2005, 19, 330–333.
31. USDA. Food Access Research Atlas. Available online: https://www.ers.usda.gov/data-products/food-access-research-atlas/go-to-the-atlas/ (accessed on 15 September 2021).
32. Fleischhacker, S.E.; Evenson, K.R.; Rodriguez, D.A.; Ammerman, A.S. A systematic review of fast food access studies. Obes. Rev. 2011, 12, e460–e471.
33. Florida, R.; Mellander, C. Segregated City: The Geography of Economic Segregation in America’s Metros; Martin Prosperity Institute: Toronto, ON, Canada, 2015.
34. Saelens, B.E.; Glanz, K.; Sallis, J.F.; Frank, L.D. Nutrition Environment Measures Study in restaurants (NEMS-R): Development and evaluation. Am. J. Prev. Med. 2007, 32, 273–281.
35. Charreire, H.; Casey, R.; Salze, P.; Simon, C.; Chaix, B.; Banos, A.; Badariotti, D.; Weber, C.; Oppert, J.-M. Measuring the food environment using geographical information systems: A methodological review. Public Health Nutr. 2010, 13, 1773–1785.
36. USDA. Food Environment Atlas. Available online: https://www.ers.usda.gov/data-products/food-environment-atlas/ (accessed on 15 September 2021).
37. Cooksey-Stowers, K.; Schwartz, M.; Brownell, K. Food swamps predict obesity rates better than food deserts in the United States. Int. J. Environ. Res. Public Health 2017, 14, 1366.
38. Phillips, A.Z.; Rodriguez, H.P. US county “food swamp” severity and hospitalization rates among adults with diabetes: A nonlinear relationship. Soc. Sci. Med. 2020, 249, 112858.
39. Widener, M.J. Spatial access to food: Retiring the food desert metaphor. Physiol. Behav. 2018, 193, 257–260.
40. Chen, X.; Kwan, M.-P. Contextual uncertainties, human mobility, and perceived food environment: The uncertain geographic context problem in food access research. Am. J. Public Health 2015, 105, 1734–1737.
41. Baginski, J.; Sui, D.; Malecki, E.J. Exploring the intraurban digital divide using online restaurant reviews: A case study in Franklin County, Ohio. Prof. Geogr. 2014, 66, 443–455.
42. Kelley, M.J. Urban experience takes an informational turn: Mobile internet usage and the unevenness of geosocial activity. GeoJournal 2014, 79, 15–29.
43. Hargittai, E.; Hinnant, A. Digital inequality: Differences in young adults’ use of the Internet. Commun. Res. 2008, 35, 602–621.
Figure 1. Spatial distribution of restaurants in the study area.
Figure 2. Example of nutrition assessment by the deep learning model.
Figure 3. Flowchart of nutrition assessment by the deep learning model.
Figure 4. Estimated calorific level of restaurants in the study area.
Figure 5. The RN index in terms of average calories of restaurant food by census tract.
Table 1. Pearson’s correlation analysis between selected SVI variables and the RN index on the census tract level (n = 66).

SVI Variable | Mean (SD) | Min/Max | Correlation Coefficient (r)
Persons (%) below poverty | 17.44 (13.38) | 0/49.2 | 0.03
Unemployment rate (%) | 9.16 (5.70) | 0/23.3 | −0.04
Per capita income | 32,691.94 (16,646.26) | 5509/68,705 | −0.18
Persons (%, age 25+) with no high school diploma | 17.19 (12.65) | 0.3/49 | 0.24 *
Persons (%) aged 65 and older | 14.76 (6.32) | 1.6/28.7 | −0.06
Persons (%) aged 17 and younger | 21.54 (6.77) | 2.1/39.3 | −0.21 *
Persons (%) with a disability | 13.02 (4.43) | 0/25.4 | −0.03
Single-parent households (%) with children | 13.88 (14.19) | 0.5/100 | 0.29 **
Minority (%) | 60.95 (30.79) | 9/100 | −0.01
Persons (%, age 5+) who speak English less than well | 7.44 (6.80) | 0/25.1 | 0.04
Housing structures (%) with 10 or more units | 20.29 (20.56) | 0/87.3 | −0.09
Mobile homes (%) | 0.78 (3.60) | 0/25.5 | 0.15
Occupied housing units (%) with more people than rooms | 3.02 (2.96) | 0/10.3 | −0.05
Households (%) with no vehicle | 18.89 (14.55) | 0/60.4 | −0.03
Persons (%) in group quarters | 3.94 (12.38) | 0/93.4 | 0.37 ***
*** p < 0.01, ** p < 0.05, * p < 0.1.

