Communication

Improved Wearable Devices for Dietary Assessment Using a New Camera System

Mingui Sun, Wenyan Jia, Guangzong Chen, Mingke Hou, Jiacheng Chen and Zhi-Hong Mao

1 Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA 15260, USA
2 Department of Electrical & Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
3 Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
4 Department of Mechanical Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
* Author to whom correspondence should be addressed.
Sensors 2022, 22(20), 8006; https://doi.org/10.3390/s22208006
Submission received: 2 September 2022 / Revised: 12 October 2022 / Accepted: 17 October 2022 / Published: 20 October 2022
(This article belongs to the Special Issue Wearable Biomedical Devices and Sensors)

Abstract

An unhealthy diet is strongly linked to obesity and numerous chronic diseases. Currently, over two-thirds of American adults are overweight or obese. Although dietary assessment helps people improve nutrition and lifestyle, traditional methods of dietary assessment depend on self-report, which is inaccurate and often biased. In recent years, as electronics, information, and artificial intelligence (AI) technologies have advanced rapidly, image-based objective dietary assessment using wearable electronic devices has become a powerful approach. However, research in this field has focused on the development of advanced algorithms to process image data. Few reports exist on the study of device hardware for the particular purpose of dietary assessment. In this work, we demonstrate that, with current hardware designs, there is a considerable risk of missing important dietary data owing to the common use of rectangular image frames and fixed camera orientations. We then present two designs of a new camera system that reduce data loss by generating circular images using rectangular image sensor chips. We also present a mechanical design that allows the camera orientation to be adjusted, adapting to differences among device wearers, such as gender and body height. Finally, we discuss the pros and cons of rectangular versus circular images with respect to information preservation and data processing using AI algorithms.

1. Introduction

Food is essential to support human life; conversely, an unhealthy diet is strongly linked to risks of chronic diseases, such as cardiovascular diseases, diabetes, and certain types of cancer [1]. The Global Burden of Disease Study found that, among the top 17 risk factors, poor diet is overwhelmingly the top risk factor for human diseases [2]. To study the quality (healthiness) and quantity (energy intake) of people’s diets, scientists need tools to obtain accurate information about the foods/beverages consumed by an individual over a certain period of time (e.g., one week), along with the volume or weight (called “portion size” in dietetics) of each food. This type of evaluation is called a dietary assessment (DA). Currently, self-report is the most commonly used DA method [3,4,5]. In one such method, called a food diary [3,4,5,6], the person being evaluated takes detailed notes on each food/beverage as soon as it is consumed (for simplicity, from now on, we will not mention “beverage” separately and will consider it a particular type of “food”). In another self-report method, called 24-h recall [3,4,5,7], the person being evaluated recalls each food consumed during the past 24 h. This recall is traditionally performed in a person-to-person interview with a dietitian, although in recent years the use of web- or app-based electronic platforms has been gaining popularity [8,9,10,11,12]. In both cases, a food database (e.g., the FNDDS database developed by the USDA [13]) is used to obtain the amounts of nutrients and energy in each food. Although widely utilized, self-report depends on the memory and willingness of the person to provide accurate and complete food-intake information. Numerous studies have found that people tend to over-report healthy foods but under-report unhealthy foods [7,14,15]. This type of reporting error is called subjective bias. In addition, both the food diary and 24-h recall are complex and tedious procedures; thus, their “participant burden” is high [7].
To solve the problems of self-report, image-based DA tools using wearable devices (which we call DA wearables) have emerged [16,17,18,19,20,21,22,23,24,25,26,27,28,29]. DA wearables are equipped with a camera, as shown in Figure 1. The lens of the camera is oriented downward, aiming at the food on a dining table. The camera takes pictures automatically at a pre-programmed rate (e.g., 1–6 s between consecutive pictures). The images obtained are either stored within the DA wearable or transmitted wirelessly to a companion smartphone, where the data are stored or relayed to a remote server. Next, foods are identified and segmented, and their volumes are estimated, assisted by image processing algorithms. Finally, the food names and portion sizes are supplied to a food database to obtain the nutrient/energy information. Compared with self-report, the DA wearable approach reduces both subjective bias and participant burden because DA is conducted from images rather than from the individual’s reports. However, the individual must be willing to image his/her foods and permit a dietitian to observe them. Some people may have privacy concerns about the images, within which the background scene and other people (e.g., family members) may be recorded unintentionally. Therefore, the image-based method has limitations. These limitations may be mitigated when AI algorithms, instead of humans, are used to process image data automatically (to be discussed further in the Discussion section).
Although commercial body-worn cameras, such as Narrative, Autographer, Vicon Revue, VIEVU, and FrontRow, are available, they are generally unsuitable for use as DA wearables because these commercial devices are mostly designed for public security (e.g., police) or entertainment (e.g., lifelogging) purposes. As a result, their cameras are forward-looking and cannot effectively capture the food below the camera. These commercial products may also suffer from at least one of the following problems: bulky size, limited picture storage, short battery life, narrow field of view, and/or an unsuitable picture-taking rate. Currently, DA studies usually use custom-made wearables. These devices are worn in different ways, such as the Automatic Ingestion Monitor (AIM) clipped on one side of eyeglasses [20,21,22], the Ear-Worn device attached to a single ear [23,28], and the eButton pinned onto the chest [24,25,26,28].
Regardless of the type of wearable, the common goal is to capture a complete scene of the foods on the table, because a missed or incomplete view of a food results in a DA error. However, picture-taking by current DA wearables is self-activated (i.e., the device “shoots” without aiming). The only way to minimize the loss of food in images is to increase the coverage of the camera’s lens, i.e., to enlarge the field of view (FOV). In recent years, as mobile technologies have advanced, small and physically short camera modules (which make the device thin) have become available [30,31]. Some of them have an FOV close to 180°, which, theoretically, can capture the entire scene in front of the camera. However, there are two significant problems when these camera modules are used for DA wearables. First, these modules produce rectangular images at the output, which represent a crop of the circular FOV provided by the camera lens. A food may be outside the cropped region or cut by the cropping. Second, to obtain the best image quality and minimize content loss, the camera lens should be oriented in the direction at which foods are most likely to appear. However, this orientation depends on many factors, such as the wearer’s height, the wearing location on the body, and/or the heights of the table and chair. Currently, these two problems have not been solved. All current DA wearables use image sensors with a rectangular frame, and their cameras are fix-mounted onto the device case without an adjustment mechanism.
In this work, we challenge the traditional camera design of DA wearables. A new camera system is presented to produce circular images instead of rectangular ones. We then present a new mechanical design that allows adjustment of camera orientation. Practical formulas are also provided to aid in the camera system design. Our new camera system produces more complete food intake information that increases DA accuracy.

2. Circular vs. Rectangular Images

In terms of picture-taking, there is a significant difference between a hand-held camera and a wearable one. While a hand-held camera is controlled manually (“aim and shoot”), the current DA wearable camera takes pictures without scene selection. When a rectangular frame is used by a DA wearable, it causes three major problems, as described below in detail.

2.1. Loss of Image Content

The loss of image content caused by a rectangular screen is illustrated in Figure 2. In both panels, the red circle represents the image field (IF) in the image plane, which is the plane within the camera where the sensor chip is placed. The IF is circular because the camera lens is always round. To produce a rectangular image from a round lens, the circular IF in the image plane must be cropped. For an image with a 4:3 screen ratio (left panel), this cropping wastes 38.9% of the available IF (i.e., the four white regions within the red circle are discarded). For a 16:9 image frame (right panel), the wasted area increases to 45.6%. Because, as mentioned earlier, the DA wearable “shoots” blindly, the risk of information loss due to this cropping effect is very high. To illustrate, let us imagine the case where one is blindfolded but left with a rectangular opening to observe the world. Certainly, one will be more likely to miss a target of interest than without such a blindfold.
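The wasted-area percentages above follow directly from the geometry of a rectangle inscribed in a circle. The short Python sketch below (our own illustration; the function name and the ratios tested are not from the original text) reproduces these figures for any screen ratio:

```python
import math

def wasted_image_field(width_ratio: float, height_ratio: float) -> float:
    """Fraction of the circular image field (IF) lost when the largest
    rectangle with the given screen ratio is inscribed in the circle."""
    # Take the circle diameter as 1; the inscribed rectangle's diagonal equals it.
    diag = math.hypot(width_ratio, height_ratio)
    w, h = width_ratio / diag, height_ratio / diag   # rectangle side lengths
    circle_area = math.pi * 0.25                     # pi * (d/2)^2 with d = 1
    return 1.0 - (w * h) / circle_area

print(f"4:3  -> {wasted_image_field(4, 3):.1%} of the IF wasted")   # ~38.9%
print(f"16:9 -> {wasted_image_field(16, 9):.1%} of the IF wasted")  # ~45.6%
```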

2.2. Variable Field of View

The field of view (FOV) of a camera is defined as
$$\mathrm{FOV} = 2\tan^{-1}\!\left(\frac{d}{2f}\right) \tag{1}$$
where $d$ and $f$ represent the diagonal length of the image sensor chip and the focal length of the camera, respectively. For a rectangular image, the effective FOV in the horizontal or vertical direction is smaller than in the diagonal direction. For example, for a camera with a 60° FOV and a 4:3 screen ratio, the effective FOVs are only 49.6° (horizontal) and 38.2° (vertical).
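As a quick check of these numbers, the following sketch (our own illustration, not part of the original design) computes the effective horizontal and vertical FOVs implied by a diagonal FOV and a screen ratio, assuming an ideal rectilinear (pinhole) projection as in Equation (1):

```python
import math

def effective_fovs(diag_fov_deg: float, width_ratio: float, height_ratio: float):
    """Horizontal and vertical FOVs implied by a diagonal FOV (Equation (1)),
    assuming an ideal rectilinear (pinhole) projection."""
    diag = math.hypot(width_ratio, height_ratio)
    # d/(2f) for the diagonal, obtained from FOV = 2*atan(d/(2f)).
    half_tan_diag = math.tan(math.radians(diag_fov_deg) / 2)
    h_fov = 2 * math.degrees(math.atan(half_tan_diag * width_ratio / diag))
    v_fov = 2 * math.degrees(math.atan(half_tan_diag * height_ratio / diag))
    return h_fov, v_fov

h, v = effective_fovs(60, 4, 3)
print(f"horizontal: {h:.1f} deg, vertical: {v:.1f} deg")   # ~49.6 and ~38.2
```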

2.3. Effect of Image Distortion

In the DA application, a wide-angle lens is highly desirable to obtain a large FOV. This type of lens bends the light from objects in the outlying regions of the scene (i.e., regions near the boundaries of the image) so that a flat-surface image sensor chip can record this light. This process results in a barrel distortion, as shown in Figure 3 (left column), which needs to be corrected by a process called “undistortion”. Numerous undistortion methods have been reported [32,33,34,35]. Most methods utilize a distortion model (e.g., a stereographic model). If the model is direction-invariant about the optical axis (usually located at the center of the image), the circular image after undistortion is still circular. Otherwise, the output image shape changes, but, in general, the change is not excessive (exemplified in Figure 3, top row). This shape-invariant or nearly invariant property is attractive for DA wearables because it implies that the image does not need post-processing after undistortion, preserving the information in the image. On the other hand, for rectangular images, the image after undistortion has a significant, unnatural shape change (Figure 3, bottom row), which must be cropped, implying some information loss.
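For readers who want a concrete starting point for the undistortion step, the sketch below uses OpenCV’s fisheye module. This is only an illustration: OpenCV implements the Kannala–Brandt model rather than the specific model of any lens discussed here, and the camera matrix K, distortion coefficients D, and file names are placeholder assumptions (in practice, they would come from a prior checkerboard calibration, e.g., with cv2.fisheye.calibrate):

```python
import cv2
import numpy as np

# Placeholder intrinsics; in practice K and D come from cv2.fisheye.calibrate()
# run on checkerboard images of the actual lens.
K = np.array([[420.0,   0.0, 320.0],
              [  0.0, 420.0, 240.0],
              [  0.0,   0.0,   1.0]])                    # camera matrix (example values)
D = np.array([[-0.02], [0.005], [-0.001], [0.0002]])     # fisheye distortion coefficients

distorted = cv2.imread("food_frame.jpg")                 # hypothetical raw fisheye frame
# Knew controls how much of the FOV is kept in the output; reusing K keeps
# roughly the central region without excessive stretching at the borders.
undistorted = cv2.fisheye.undistortImage(distorted, K, D, Knew=K)
cv2.imwrite("food_frame_undistorted.jpg", undistorted)
```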
The circular image has another significant advantage for DA wearables. In practice, a DA wearable is often worn, unintentionally, at an angle from its leveled position. If this angle is large, the acquired images need to be rotated (re-leveled). If the input image is circular, the result after rotation is still circular, without the need for cropping. In contrast, for rectangular images, the result after rotation must be cropped for a leveled presentation, which, again, leads to information loss.
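The following numerical sketch (our illustration, with arbitrarily chosen tilt angles) quantifies this difference by rotating a circular and a rectangular image footprint and measuring how much of the content still lies inside the original frame after re-leveling:

```python
import numpy as np

def retained_after_rotation(mask_fn, angle_deg: float, n: int = 801) -> float:
    """Fraction of the image content that still lies inside the original
    footprint after the picture is rotated (re-leveled) by angle_deg."""
    ys, xs = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    inside = mask_fn(xs, ys)
    t = np.deg2rad(angle_deg)
    xr = xs * np.cos(t) - ys * np.sin(t)   # pre-rotation coordinates of each pixel
    yr = xs * np.sin(t) + ys * np.cos(t)
    return (inside & mask_fn(xr, yr)).sum() / inside.sum()

circle = lambda x, y: x**2 + y**2 <= 1.0                          # circular image field
rect43 = lambda x, y: (np.abs(x) <= 0.8) & (np.abs(y) <= 0.6)     # inscribed 4:3 frame

for angle in (5, 15, 30):
    print(f"{angle:2d} deg  circle: {retained_after_rotation(circle, angle):.3f}  "
          f"4:3 frame: {retained_after_rotation(rect43, angle):.3f}")
```

The circular footprint retains all of its content at every angle, whereas the inscribed 4:3 frame loses progressively more content as the tilt grows.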

3. Circular Image Generation

To produce circular images for DA wearables, an obvious method would be to use a circular sensor chip (e.g., a circular CMOS chip) that has the same diameter as the circular IF. However, we have not found any manufacturer making circular image sensor chips. As a result, circular images must be produced from existing rectangular sensor chips. We have studied two methods of generating circular images. One is to rematch the sensor chip and lens, and the other is to use an ultra-wide-angle fisheye lens that has only recently become available.

3.1. Rematch between Sensor Chip and Lens

The rematch method is illustrated in Figure 4. In the current design (left panel), the rectangular CCD or CMOS image sensor chip is placed within the circular IF produced by the round lens. In order to capture the entire image content within the circular IF, we rematch the chip and lens pair using a larger sensor chip, placed at the same distance from the optical center as the chip of the original size. The new chip can be determined according to Figure 5, where $d_1$, $d_2$, $h_1$, $h_2$, and $\eta$ represent the diagonal length of the original chip, the diagonal length of the rematched chip, the height of the original chip, the height of the rematched chip, and the screen ratio of both chips (for simplicity, here, we assume that the screen ratios of the sensor chips before and after the rematch are unchanged), respectively. From Figure 5, to cover the entire circular field, the rematch must satisfy the following inequality:
$$d_2 \geq \sqrt{h_2^2 + (\eta h_2)^2} \tag{2}$$
As $d_1$ and $h_2$ are both diameters of the circle, we have $d_1 = h_2$. Equation (2) then reduces to the following:
$$d_2 \geq d_1 \sqrt{1 + \eta^2} \tag{3}$$
For example, let the original image sensor chip have a diagonal length $d_1$ of 1/7″ and the screen ratio $\eta$ of both chips be 4:3. To obtain a circular image with a diameter of 1/7″ in the image plane, we must satisfy
$$d_2 \geq \frac{1}{7}\sqrt{1 + \left(\frac{4}{3}\right)^2} = \frac{5}{21} \tag{4}$$
We may choose $d_2 = \frac{5}{20} = \frac{1}{4}$, i.e., a 1/4″ sensor chip placed in the same image plane will allow the rematched imaging sensor to produce complete circular images with a diameter of 1/7″.
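A minimal sketch of this sizing rule (ours, using the example values above) computes the smallest admissible diagonal for the rematched chip from Equation (3) and checks a candidate chip size:

```python
import math

def min_rematched_diagonal(d1: float, width_ratio: float, height_ratio: float) -> float:
    """Smallest diagonal d2 of the rematched chip that covers a circular
    image field of diameter d1 (Equation (3))."""
    eta = width_ratio / height_ratio          # screen ratio, e.g., 4:3 -> 4/3
    return d1 * math.sqrt(1 + eta**2)

d1 = 1 / 7                                     # original chip diagonal, in inches
d2_min = min_rematched_diagonal(d1, 4, 3)      # = 5/21 inch
print(f"minimum rematched diagonal d2 = {d2_min:.4f} in")

candidate = 1 / 4                              # a commercially common 1/4-inch chip
print("1/4-inch chip covers the circular field:", candidate >= d2_min)
```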
We point out that the inequality in Equation (3) and the example provide a theoretical guideline only. In practice, the die size and the effective size of the sensor chip are often different. Therefore, in actual design, we recommend a thorough study of the datasheets of both the original and rematched chips. We also point out that the rematch method will waste some pixels (those in white regions in the right panel of Figure 4). In addition, storing circular images using the rectangular image format is less efficient because of the empty regions. To reduce the inefficiency, we suggest choosing a screen ratio η as close to 1 as possible (e.g., the 4:3 ratio is better than the 16:9 ratio). Further, as images are commonly stored in a compressed format (e.g., JPEG), the storage efficiency increases significantly if a constant pixel value (e.g., zero) is pre-written into the empty region before compression.
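One simple way to realize the last suggestion is sketched below (our illustration using NumPy and Pillow; the file names and circle parameters are placeholder assumptions): pixels outside the circular field are overwritten with a constant value before JPEG encoding, so the empty corners compress to almost nothing.

```python
import numpy as np
from PIL import Image

def mask_outside_circle(img: np.ndarray, cx: int, cy: int, radius: int,
                        fill: int = 0) -> np.ndarray:
    """Overwrite pixels outside the circular image field with a constant value
    so that the empty regions compress efficiently (e.g., under JPEG)."""
    h, w = img.shape[:2]
    ys, xs = np.ogrid[:h, :w]
    outside = (xs - cx) ** 2 + (ys - cy) ** 2 > radius ** 2
    out = img.copy()
    out[outside] = fill
    return out

frame = np.asarray(Image.open("raw_circular_frame.png").convert("RGB"))  # hypothetical raw frame
masked = mask_outside_circle(frame,
                             cx=frame.shape[1] // 2,
                             cy=frame.shape[0] // 2,
                             radius=min(frame.shape[:2]) // 2)
Image.fromarray(masked).save("circular_frame.jpg", quality=85)
```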
Another important question is how to choose the resolution of the circular image so that the details of the image are preserved. Let $\sigma_1$ and $\sigma_2$ represent the pixel densities (in pixels/mm) of the two image sensor chips before and after the rematch, respectively. Note that sensor chip manufacturers often provide the reciprocal of the pixel density, called “pixel size”, in the chip’s datasheet. Let $N_1$ and $N_2$ be the numbers of pixels of the two sensor chips before and after the rematch, respectively. From Figure 5, we have
$$N_1 = \eta h_1^2 \sigma_1^2 \quad \text{and} \quad N_2 = \eta h_2^2 \sigma_2^2 \tag{5}$$
For simplicity, let us consider only the largest circle contained in the outer rectangle, as shown in Figure 5, which corresponds to the equal sign in Equation (3). As $d_1$ and $h_2$ are both diameters of the circle, we have
$$h_2 = d_1 = \sqrt{1 + \eta^2}\, h_1 \tag{6}$$
Combining Equations (5) and (6),
$$N_2 = (1 + \eta^2)\,\frac{\sigma_2^2}{\sigma_1^2}\, N_1 \tag{7}$$
Given $\sigma_1$ in the original image, let us discuss two scenarios for choosing $\sigma_2$ in the rematched image. If one would like to keep the original image resolution unchanged, this requires $\sigma_2 = \sigma_1$. Then, the number of pixels $N_2$ is $1 + \eta^2$ times that of $N_1$. For example, for the 4:3 screen ratio, $N_2$ is around 2.78 times $N_1$. If this choice causes memory or data handling problems in the electronic hardware of the DA wearable, one may choose $\sigma_2$ by requiring an equal number of pixels in the original image (the small rectangle in Figure 5) and the circular region in the rematched image (the circle in Figure 5). This is equivalent to
$$N_1 = \eta h_1^2 \sigma_1^2 = \pi \left(\frac{h_2}{2}\right)^2 \sigma_2^2 \tag{8}$$
Combining Equations (6) and (8), we can establish the relationship between $\sigma_1$ and $\sigma_2$:
$$\sigma_2 = \sigma_1 \sqrt{\frac{4\eta}{\pi (1 + \eta^2)}} \tag{9}$$
For the $\eta$ = 4:3 screen ratio, we obtain $\sigma_2 = 0.78\,\sigma_1$. This result indicates a much smaller increase in data output (here, $N_2 = 1.69\,N_1$ vs. $N_2 = 2.78\,N_1$ in the previous case), at the cost of a 22% reduction in image resolution.
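The two resolution scenarios can be compared numerically with the short sketch below (our illustration; the function and variable names are ours and do not appear in the original text):

```python
import math

def rematch_pixel_budget(eta: float):
    """Compare the two pixel-density choices for the rematched chip.

    Scenario A keeps the pixel density (sigma2 = sigma1), Equation (7).
    Scenario B equates the pixel count of the original frame with that of the
    circular region of the rematched frame, Equations (8) and (9).
    Returns (N2/N1 for A, sigma2/sigma1 for B, N2/N1 for B).
    """
    n2_over_n1_a = 1 + eta**2                                     # sigma2 = sigma1
    sigma_ratio_b = math.sqrt(4 * eta / (math.pi * (1 + eta**2)))
    n2_over_n1_b = (1 + eta**2) * sigma_ratio_b**2
    return n2_over_n1_a, sigma_ratio_b, n2_over_n1_b

a, s, b = rematch_pixel_budget(4 / 3)
print(f"keep resolution:  N2 = {a:.2f} N1")                           # ~2.78 N1
print(f"keep pixel count: sigma2 = {s:.2f} sigma1, N2 = {b:.2f} N1")  # ~0.78, ~1.69
```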
The sensor chip rematch method has two major advantages: (1) it allows a wide choice of FOVs to satisfy the needs of different DA wearables and applications; and (2) for thinner DA wearables, the rematch method appears to be more suitable because a fisheye lens usually has a larger axial length, which makes a DA wearable thicker and affects its wearability. However, the rematch method has a disadvantage in that it may be difficult to find a suitable pair of commercial lens and sensor chip with a matching lens mount thread size (e.g., M7). If this becomes a problem, a lens seat, including a polyimide ribbon connector (Figure 6), could be custom-made. Nevertheless, this approach is more expensive and could require longer design–test cycles.

3.2. Utilizing a Fisheye Lens

The second method to produce circular images is to use a commercial fisheye lens. Previously, fisheye lenses were usually long and heavy, unsuitable for DA wearables, which cannot be made cumbersome. In recent years, lens technology has improved significantly, and smaller and shorter fisheye lenses are now available. Despite these improvements, such lenses are still generally longer and heavier than non-fisheye lenses. Figure 7a shows a fisheye lens that we have tested. This M7 lens (Type M7-1-08-Y, Nuoweian Inc., Shenzhen, China) has a focal length of 1.08 mm, a weight of 2.1 g, and a height of 10.7 mm (about 15 mm after threading onto the lens seat shown in Figure 6). Figure 7b,c shows two raw images obtained using this lens. It can be observed that these images are not completely circular: portions of the image along the narrower direction of the sensor are missing. This phenomenon is quite common in small fisheye lenses. Although small portions of the circular field are lost on both sides, the empty regions outside the circular field are smaller than those shown in Figure 2, indicating a more efficient use of the active rectangular area of the sensor chip.

3.3. Comparisons between Circular and Rectangular Images

To demonstrate the benefits of the new camera system, we compare, using real-world data, the results of circular images from the new system and rectangular images from the existing system. Figure 8a shows four circular FOVs (blue circles) calculated according to the polynomial radial distortion model [33,35,36,37], given by
$$f(\rho) = a_0 + a_2 \rho^2 + \cdots + a_n \rho^n \tag{10}$$
where $f(\rho)$ is a mapping function determined by the particular lens construction (in our case, the Type M7-1-08-Y lens); $\rho$ is the radial Euclidean distance from the image center in the sensor plane; and $a_0, a_2, \ldots, a_n$ are the polynomial coefficients. According to this model, the points of the scene along the ray emanating from the optical center and passing through the 3D point $(u, v, f(\rho))$ are mapped to the point $(u, v)$ on the imaging plane, with $\rho = \sqrt{u^2 + v^2}$. The polynomial coefficients are determined by a calibration process using a checkerboard phantom [33,35]. The FOV corresponding to each point $(u, v)$ is calculated as follows:
$$\mathrm{FOV} = 2 \tan^{-1}\!\left(\frac{\rho}{f(\rho)}\right) \tag{11}$$
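A minimal sketch of how the FOV circles in Figure 8a can be computed from a calibrated polynomial model is given below (ours; the coefficient values are placeholders for illustration only, since real coefficients come from checkerboard calibration of the specific lens):

```python
import numpy as np

def fov_at_radius(rho: float, coeffs: list[float]) -> float:
    """FOV (in degrees) subtended at radial distance rho on the sensor plane,
    using f(rho) = a0 + a2*rho^2 + ... (Equation (10)) and
    FOV = 2*atan(rho / f(rho)) (Equation (11))."""
    # coeffs = [a0, a1, a2, ...]; calibrated radial models typically set a1 = 0.
    f_rho = np.polyval(coeffs[::-1], rho)   # np.polyval expects highest order first
    return float(np.degrees(2 * np.arctan2(rho, f_rho)))

# Placeholder coefficients for illustration only (not a real calibration result).
coeffs = [280.0, 0.0, -9.0e-4, 0.0, 1.0e-9]
for rho_px in (100, 300, 500):
    print(f"rho = {rho_px:3d} px -> FOV = {fov_at_radius(rho_px, coeffs):5.1f} deg")
```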
The four blue circles in Figure 8a represent circular image domains with FOVs equal to 60°, 100°, 140°, and 180°. For each circle, the inscribed red rectangle represents the traditional rectangular image domain (assuming the 4:3 screen ratio). Figure 8b shows four real-world food-containing images superimposed with the same image domains as in Figure 8a. It can be observed that, in most of these real-world scenarios, both rectangular images and circular images with smaller FOVs tend to lose information because portions of foods are missed or cut off. Under the same FOV, the loss in the rectangular image is much more significant than the loss in the circular image. To facilitate observation of the losses, the circular and rectangular images corresponding to the top-left image in Figure 8b are shown in Figure 8c,d, respectively, for all four FOVs (labeled). It can be observed that circular images, especially those with larger FOVs, preserve image contents well. On the other hand, rectangular images are subject to a higher risk of missing important food information.

4. Lens Orientation Adjustment

Although, in theory, a 180° circular lens on a DA wearable will not miss foods in front of the wearer, there is a practical problem: the effective resolution varies across the image content. For example, for a five-megapixel camera with a common 4:3 screen ratio, the circular image has approximately three megapixels. This number appears to be sufficiently high; however, it holds only in the central region of the circular image. Owing to the high distortion of the fisheye lens, as observed in Figure 7 and Figure 8, the effective resolution may become insufficient in the image outskirts away from the center. Increasing the number of pixels of the sensor chip is an obvious way to solve this problem, but it implies a higher cost; higher power consumption by the camera, central processor, and data transfer circuitry; and likely a bulkier device, affecting its wearability. Another choice is to orient the lens appropriately. For DA wearables, the lens should look downward at a certain angle so that the food items on the table appear in, or close to, the central region of the image. In practice, however, the optimal orientation changes with several factors and varies among individuals. For the eButton pinned onto the chest, the wearer’s age, gender, and body height, the heights of the table and chair, and the wearing location (towards one side or near the center of the chest) all affect the camera orientation. It is thus necessary to allow the wearer to adjust the lens orientation after the device is worn. Currently, none of the reported DA wearables have this adjustment mechanism. Thus, we independently designed a mechanical structure for the eButton to support this adjustment. This structure (shown in the right panel of Figure 9) includes two rotating axes, a camera module enclosure mounted on the axes, and two fixtures screw-mounted onto the body of the DA wearable. The structure was 3D printed from digital blueprints made using the SOLIDWORKS mechanical design software (Dassault Systèmes, Paris, France), implemented in a recent version of the eButton (left panel in Figure 9), and utilized in a large-scale DA study in Africa [28].
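As a rough illustration of how the preferred tilt depends on the wearer and the table (our simplified geometry with hypothetical dimensions; in the actual device, the adjustment is made mechanically by the wearer), the sketch below computes the downward angle that aims the optical axis at the plate on the table:

```python
import math

def downward_tilt_deg(camera_height_m: float, table_height_m: float,
                      horizontal_distance_m: float) -> float:
    """Downward tilt (from horizontal) that aims the optical axis at a point
    on the table a given horizontal distance away from a chest-worn camera."""
    drop = camera_height_m - table_height_m
    return math.degrees(math.atan2(drop, horizontal_distance_m))

# Hypothetical seated wearers of different statures; table at 0.75 m,
# plate about 0.40 m in front of the chest.
for chest_height in (1.05, 1.20, 1.35):
    tilt = downward_tilt_deg(chest_height, 0.75, 0.40)
    print(f"camera at {chest_height:.2f} m -> preferred tilt about {tilt:.0f} deg downward")
```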

5. Discussion

In recent years, there has been a steady trend of applying AI algorithms to image data acquired by DA wearables [38,39,40,41,42,43,44,45]. The AI approach not only reduces the data processing burden on researchers and dietitians, but also provides a solution to the privacy problem described in the Introduction section. However, unlike the lens orientation adjustment mechanism, which has been implemented and field-tested, we have not yet utilized circular images for real-world DA. The main reason is the difficulty encountered in processing such data using AI algorithms. Existing convolutional neural networks (CNNs), the central components of AI algorithms for food image processing, are almost all designed for rectangular images [38,43,46,47,48,49]. Although isolated studies have been conducted on circular images, e.g., [50,51,52], they require training with the same type of images, which are not widely available. Research is ongoing in our laboratory to solve these problems.
We point out that our study on circular images has a biomimetic motivation. To the best of our knowledge, none of the known eyes in the animal kingdom produces a rectangular view. We believe that round or nearly round eyes allow a better understanding of the environment. Additionally, the original radar, sonar, and oscilloscope screens were all round [53]. The rectangular screen became dominant in the past century for a number of valid reasons, including the convenient rectangular image shape in historical photography, film strips made in rolls, the preferences of photographic equipment manufacturers and photo viewers, the ease of square-block-based image processing procedures, and so on. Despite some drawbacks of circular images, for DA wearables and a number of other image-based wearables (e.g., those for the blind or vision impaired) where reliability in target finding is important, we believe that, at least for now, the advantages of circular images outweigh their drawbacks. However, in the future, when AI, camera technology, and electronics are further developed, the next-generation miniature camera may recognize targets of interest and rotate its lens automatically before picture-taking. It is an interesting but unanswered question whether future wearable devices will have round “biological eye(s)” or rectangular “robotic eye(s)”.

6. Conclusions

In this work, we targeted the missing data problem in image-based dietary assessment using wearable devices. We demonstrated that views of food items could be cropped out when rectangular images are produced. We presented two methods for generating circular images that preserve information. We also designed a mechanical structure to adjust camera lens orientation to obtain data of higher quality. Our approach may lead to significant improvements in using image-based wearable devices for dietary assessment and other applications. However, for this approach to be successful, there is a strong need to develop AI algorithms to extract information from circular images.

Author Contributions

Conceptualization, M.S.; Methodology, M.S., W.J. and Z.-H.M.; Prototype Design and Implementation, M.S., G.C., J.C., M.H. and W.J.; Writing—Original Draft Preparation, M.S.; Writing—Review and Editing, M.S., W.J. and Z.-H.M.; Project Administration, M.S.; Funding Acquisition, M.S., W.J. and Z.-H.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported, in part, by the Bill and Melinda Gates Foundation [OPP1171395]. Under the grant conditions of the Foundation, a Creative Commons Attribution 4.0 Generic License has already been assigned to the Author Accepted Manuscript version that might arise from this submission. This research was also supported in part by the U.S. National Institutes of Health Grants No. R56 DK113819 and No. R01DK127310.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cecchini, M.; Sassi, F.; Lauer, J.A.; Lee, Y.Y.; Guajardo-Barron, V.; Chisholm, D. Tackling of unhealthy diets, physical inactivity, and obesity: Health effects and cost-effectiveness. Lancet 2010, 376, 1775–1784. [Google Scholar] [CrossRef]
  2. Forouzanfar, M.H.; Alexander, L.; Anderson, H.R.; Bachman, V.F.; Biryukov, S.; Brauer, M.; Burnett, R.; Casey, D.; Coates, M.M.; Cohen, A.; et al. Global, regional, and national comparative risk assessment of 79 behavioural, environmental and occupational, and metabolic risks or clusters of risks in 188 countries, 1990–2013: A systematic analysis for the Global Burden of Disease Study 2013. Lancet 2015, 386, 2287–2323. [Google Scholar] [CrossRef] [Green Version]
  3. Shim, J.S.; Oh, K.; Kim, H.C. Dietary assessment methods in epidemiologic studies. Epidemiol. Health 2014, 36, e2014009. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Gibson, R.S. (Ed.) Principles of Nutritional Assessment; Oxford University Press: Oxford, UK, 2005. [Google Scholar]
  5. Thompson, F.E.; Subar, A.F. Chapter 1. Dietary Assessment Methodology; Academic Press: San Diego, CA, USA, 2001; pp. 3–30. [Google Scholar]
  6. Ortega, R.M.; Pérez-Rodrigo, C.; López-Sobaler, A.M. Dietary assessment methods: Dietary records. Nutr. Hosp. 2015, 31, 38–45. [Google Scholar] [CrossRef] [PubMed]
  7. Baranowski, T. 24-hour recall and diet record methods. In Nutritional Epidemiology, 3rd ed.; Willett, W., Ed.; Oxford University Press: New York, NY, USA, 2012. [Google Scholar]
  8. Schembre, S.M.; Liao, Y.; O’Connor, S.G.; Hingle, M.D.; Shen, S.E.; Hamoy, K.G.; Huh, J.; Dunton, G.F.; Weiss, R.; Thomson, C.A.; et al. Mobile ecological momentary diet assessment methods for behavioral research: Systematic review. JMIR mHealth uHealth 2018, 6, e11170. [Google Scholar] [CrossRef] [Green Version]
  9. Subar, A.F.; Kirkpatrick, S.I.; Mittl, B.; Zimmerman, T.P.; Thompson, F.E.; Bingley, C.; Willis, G.; Islam, N.G.; Baranowski, T.; McNutt, S.; et al. The automated self-administered 24-hour dietary recall (asa24): A resource for researchers, clinicians, and educators from the national cancer institute. J. Acad. Nutr. Diet. 2012, 112, 1134–1137. [Google Scholar] [CrossRef] [Green Version]
  10. Wark, P.A.; Hardie, L.J.; Frost, G.S.; Alwan, N.A.; Carter, M.; Elliott, P.; Ford, H.E.; Hancock, N.; Morris, M.A.; Mulla, U.Z.; et al. Validity of an online 24-h recall tool (myfood24) for dietary assessment in population studies: Comparison with biomarkers and standard interviews. BMC Med. 2018, 16, 136. [Google Scholar] [CrossRef]
  11. Foster, E.; Lee, C.; Imamura, F.; Hollidge, S.E.; Westgate, K.L.; Venables, M.C.; Poliakov, I.; Rowland, M.K.; Osadchiy, T.; Bradley, J.C.; et al. Validity and reliability of an online self-report 24-h dietary recall method (Intake24): A doubly labelled water study and repeated-measures analysis. J. Nutr. Sci. 2019, 8, e29. [Google Scholar] [CrossRef] [Green Version]
  12. Hasenbohler, A.; Denes, L.; Blanstier, N.; Dehove, H.; Hamouche, N.; Beer, S.; Williams, G.; Breil, B.; Depeint, F.; Cade, J.E.; et al. Development of an innovative online dietary assessment tool for france: Adaptation of myfood24. Nutrients 2022, 14, 2681. [Google Scholar] [CrossRef]
  13. U.S. Department of Agriculture, Agricultural Research Service. 2020 USDA Food and Nutrient Database for Dietary Studies 2017–2018. Food Surveys Research Group Home Page, /ba/bhnrc/fsrg. Available online: https://www.ars.usda.gov/northeast-area/beltsville-md-bhnrc/beltsville-human-nutrition-research-center/food-surveys-research-group/docs/fndds-download-databases/ (accessed on 1 October 2022).
  14. Poslusna, K.; Ruprich, J.; de Vries, J.H.; Jakubikova, M.; van’t Veer, P. Misreporting of energy and micronutrient intake estimated by food records and 24 hour recalls, control and adjustment methods in practice. Br. J. Nutr. 2009, 101 (Suppl. 2), S73–S85. [Google Scholar] [CrossRef]
  15. Kipnis, V.; Midthune, D.; Freedman, L.; Bingham, S.; Day, N.E.; Riboli, E.; Ferrari, P.; Carroll, R.J. Bias in dietary-report instruments and its implications for nutritional epidemiology. Pub. Health Nutr. 2002, 5, 915–923. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Gemming, L.; Utter, J.; Ni Mhurchu, C. Image-assisted dietary assessment: A systematic review of the evidence. J. Acad. Nutr. Diet. 2015, 115, 64–77. [Google Scholar] [CrossRef] [PubMed]
  17. Boushey, C.J.; Spoden, M.; Zhu, F.M.; Delp, E.J.; Kerr, D.A. New mobile methods for dietary assessment: Review of image-assisted and image-based dietary assessment methods. Proc. Nutr. Soc. 2017, 76, 283–294. [Google Scholar] [CrossRef] [Green Version]
  18. Limketkai, B.N.; Mauldin, K.; Manitius, N.; Jalilian, L.; Salonen, B.R. The age of artificial intelligence: Use of digital technology in clinical nutrition. Curr. Surg. Rep. 2021, 9, 20. [Google Scholar] [CrossRef] [PubMed]
  19. O’Loughlin, G.; Cullen, S.J.; McGoldrick, A.; O’Connor, S.; Blain, R.; O’Malley, S.; Warrington, G.D. Using a wearable camera to increase the accuracy of dietary analysis. Am. J. Prev. Med. 2013, 44, 297–301. [Google Scholar] [CrossRef]
  20. Farooq, M.; Doulah, A.; Parton, J.; McCrory, M.A.; Higgins, J.A.; Sazonov, E. Validation of sensor-based food intake detection by multicamera video observation in an unconstrained environment. Nutrients 2019, 11, 609. [Google Scholar] [CrossRef] [Green Version]
  21. Doulah, A.; Farooq, M.; Yang, X.; Parton, J.; McCrory, M.A.; Higgins, J.A.; Sazonov, E. Meal microstructure characterization from sensor-based food intake detection. Front. Nutr. 2017, 4, 31. [Google Scholar] [CrossRef] [Green Version]
  22. Fontana, J.M.; Farooq, M.; Sazonov, E. Automatic ingestion monitor: A novel wearable device for monitoring of ingestive behavior. IEEE Trans. Biomed. Eng. 2014, 61, 1772–1779. [Google Scholar] [CrossRef] [Green Version]
  23. Aziz, O.; Atallah, L.; Lo, B.; Gray, E.; Athanasiou, T.; Darzi, A.; Yang, G.Z. Ear-worn body sensor network device: An objective tool for functional postoperative home recovery monitoring. J. Am. Med. Inf. Assoc. 2011, 18, 156–159. [Google Scholar] [CrossRef] [Green Version]
  24. Sun, M.; Burke, L.E.; Mao, Z.H.; Chen, Y.; Chen, H.C.; Bai, Y.; Li, Y.; Li, C.; Jia, W. eButton: A wearable computer for health monitoring and personal assistance. In Proceedings of the 51st Annual Design Automation Conference, San Francisco, CA, USA, 1–5 June 2014; pp. 1–6. [Google Scholar]
  25. Sun, M.; Burke, L.E.; Baranowski, T.; Fernstrom, J.D.; Zhang, H.; Chen, H.C.; Bai, Y.; Li, Y.; Li, C.; Yue, Y.; et al. An exploratory study on a chest-worn computer for evaluation of diet, physical activity and lifestyle. J. Healthc. Eng. 2015, 6, 1–22. [Google Scholar] [CrossRef]
  26. McCrory, M.A.; Sun, M.; Sazonov, E.; Frost, G.; Anderson, A.; Jia, W.; Jobarteh, M.L.; Maitland, K.; Steiner, M.; Ghosh, T.; et al. Methodology for objective, passive, image- and sensor-based assessment of dietary intake, meal-timing, and food-related activity in Ghana and Kenya. In Proceedings of the Annual Nutrition Conference, Baltimore, MD, USA, 8–11 June 2019. [Google Scholar]
  27. Chan, V.; Davies, A.; Wellard-Cole, L.; Lu, S.; Ng, H.; Tsoi, L.; Tiscia, A.; Signal, L.; Rangan, A.; Gemming, L.; et al. Using wearable cameras to assess foods and beverages omitted in 24 hour dietary recalls and a text entry food record app. Nutrients 2021, 13, 1806. [Google Scholar] [CrossRef] [PubMed]
  28. Jobarteh, M.L.; McCrory, M.A.; Lo, B.; Sun, M.; Sazonov, E.; Anderson, A.K.; Jia, W.; Maitland, K.; Qiu, J.; Steiner-Asiedu, M.; et al. Development and validation of an objective, passive dietary assessment method for estimating food and nutrient intake in households in low- and middle-income countries: A study protocol. Curr. Dev. Nutr. 2020, 4, nzaa020. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  29. Gemming, L.; Doherty, A.; Kelly, P.; Utter, J.; Ni Mhurchu, C. Feasibility of a SenseCam-assisted 24-h recall to reduce under-reporting of energy intake. Eur. J. Clin. Nutr. 2013, 67, 1095–1099. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  30. Cavallaro, A.; Brutti, A. Chapter 5-Audio-visual learning for body-worn cameras. In Multimodal Behavior Analysis in the Wild; Alameda-Pineda, X., Ricci, E., Sebe, N., Eds.; Academic Press: Cambridge, MA, USA, 2019; pp. 103–119. [Google Scholar]
  31. OMNIVISION-Image Sensor. Available online: https://www.ovt.com/products/#image-sensor (accessed on 5 October 2022).
  32. Zhang, Z.Y. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. 2000, 22, 1330–1334. [Google Scholar] [CrossRef] [Green Version]
  33. Scaramuzza, D.; Martinelli, A.; Siegwart, R. A toolbox for easily calibrating omnidirectional cameras. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 5695–5701. [Google Scholar]
  34. Weng, J.; Cohen, P.; Herniou, M. Camera calibration with distortion models and accuracy evaluation. IEEE Trans. Pattern Anal. 1992, 14, 965–980. [Google Scholar] [CrossRef] [Green Version]
  35. Urban, S.; Leitloff, J.; Hinz, S. Improved wide-angle, fisheye and omnidirectional camera calibration. ISPRS J. Photogramm. Remote. Sens. 2015, 108, 72–79. [Google Scholar] [CrossRef]
  36. Scaramuzza, D.; Martinelli, A.; Siegwart, R. A flexible technique for accurate omnidirectional camera calibration and structure from motion. In Proceedings of the Fourth IEEE International Conference on Computer Vision Systems (ICVS’06), New York, NY, USA, 4–7 January 2006; p. 45. [Google Scholar]
  37. Micusik, B.; Pajdla, T. Estimation of omnidirectional camera model from epipolar geometry. In Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Madison, WI, USA, 18–20 June 2003; p. I. [Google Scholar]
  38. Mohanty, S.P.; Singhal, G.; Scuccimarra, E.A.; Kebaili, D.; Heritier, H.; Boulanger, V.; Salathe, M. The food recognition benchmark: Using deep learning to recognize food in images. Front. Nutr. 2022, 9, 875143. [Google Scholar] [CrossRef]
  39. Lohala, S.; Alsadoon, A.; Prasad, P.W.C.; Ali, R.S.; Altaay, A.J. A novel deep learning neural network for fast-food image classification and prediction using modified loss function. Multimed. Tools Appl. 2021, 80, 25453–25476. [Google Scholar] [CrossRef]
  40. Jia, W.; Li, Y.; Qu, R.; Baranowski, T.; Burke, L.E.; Zhang, H.; Bai, Y.; Mancino, J.M.; Xu, G.; Mao, Z.H.; et al. Automatic food detection in egocentric images using artificial intelligence technology. Pub. Health Nutr. 2019, 22, 1168–1179. [Google Scholar] [CrossRef]
  41. Qiu, J.; Lo, F.P.; Lo, B. Assessing individual dietary intake in food sharing scenarios with a 360 camera and deep learning. In Proceedings of the 2019 IEEE 16th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Chicago, IL, USA, 19–22 May 2019; pp. 1–4. [Google Scholar]
  42. Lo, F.P.; Sun, Y.; Qiu, J.; Lo, B. Food volume estimation based on deep learning view synthesis from a single depth map. Nutrients 2018, 10, 2005. [Google Scholar] [CrossRef]
  43. Subhi, M.A.; Ali, S.M. A deep convolutional neural network for food detection and recognition. In Proceedings of the 2018 IEEE-EMBS Conference on Biomedical Engineering and Sciences (IECBES), Sarawak, Malaysia, 3–6 December 2018; pp. 284–287. [Google Scholar]
  44. Temdee, P.; Uttama, S. Food recognition on smartphone using transfer learning of convolution neural network. In Proceedings of the 2017 Global Wireless Summit (GWS), Cape Town, South Africa, 15–18 October 2017; pp. 132–135. [Google Scholar]
  45. Liu, C.; Cao, Y.; Luo, Y.; Chen, G.; Vokkarane, V.; Ma, Y. DeepFood: Deep learning-based food image recognition for computer-aided dietary assessment. In Proceedings of the International Conference on Smart Homes and Health Telematics, Wuhan, China, 25–27 May 2016; pp. 37–48. [Google Scholar]
  46. Aguilar, E.; Nagarajan, B.; Remeseiro, B.; Radeva, P. Bayesian deep learning for semantic segmentation of food images. Comput. Electr. Eng. 2022, 103, 108380. [Google Scholar] [CrossRef]
  47. Mezgec, S.; Korousic Seljak, B. NutriNet: A deep learning food and drink image recognition system for dietary assessment. Nutrients 2017, 9, 657. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  48. Kawano, Y.; Yanai, K. Food image recognition with deep convolutional features. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication, Seattle, WA, USA, 13–17 September 2014; pp. 589–593. [Google Scholar]
  49. Pan, L.L.; Qin, J.H.; Chen, H.; Xiang, X.Y.; Li, C.; Chen, R. Image augmentation-based food recognition with convolutional neural networks. Comput. Mater. Contin. 2019, 59, 297–313. [Google Scholar] [CrossRef]
  50. Rashed, H.; Mohamed, E.; Sistu, G.; Kumar, V.R.; Eising, C.; El-Sallab, A.; Yogamani, S.K. FisheyeYOLO: Object detection on fisheye cameras for autonomous driving. In Proceedings of the Machine Learning for Autonomous Driving NeurIPS 2020 Virtual Workshop, Virtual, 11 December 2020. [Google Scholar]
  51. Baek, I.; Davies, A.; Yan, G.; Rajkumar, R.R. Real-time detection, tracking, and classification of moving and stationary objects using multiple fisheye images. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 447–452. [Google Scholar]
  52. Goodarzi, P.; Stellmacher, M.; Paetzold, M.; Hussein, A.; Matthes, E. Optimization of a cnn-based object detector for fisheye cameras. In Proceedings of the 2019 IEEE International Conference on Vehicular Electronics and Safety (ICVES), Cairo, Egypt, 4–6 September 2019; pp. 1–7. [Google Scholar]
  53. Radar Display. Available online: https://en.wikipedia.org/wiki/Radar_display (accessed on 5 October 2022).
Figure 1. (a) The eButton is less than half the size of a credit card; (b) eButton takes pictures automatically during an eating event.
Figure 2. A rectangular image sensor chip is placed within a circular field in the image plane, capturing only part of the useable visual field.
Figure 3. Inputs (left) and undistortion results (right). Top row: Circular image; Bottom row: Rectangular image.
Figure 4. The current (left) and proposed (right) designs to acquire circular images using rectangular image sensor chips of different sizes.
Figure 5. Geometric relationships in the rematch method.
Figure 6. Lens seat with a ribbon connector. The image sensor can be seen from the left panel.
Figure 7. (a) Nuoweian fisheye lens, (b) and (c) raw images obtained by the lens.
Figure 8. (a) Circular (blue) and rectangular (red, 4:3 screen ratio) image domains of different FOVs. (b) Real-world food-containing circular images acquired using our DA wearable (eButton) with a Type M7-1-08-Y fisheye lens. (c,d) Effects of circular and rectangular images corresponding to the FOVs in (a).
Figure 9. Left panel: Front view of the eButton with adjustable lens orientation; Right panel: Mechanical structure of the lens orientation adjustment assembly. For clarity, the device case is depicted in the transparent form.