Article

P-NUT: Predicting NUTrient Content from Short Text Descriptions

by Gordana Ispirova 1,2,*, Tome Eftimov 1 and Barbara Koroušić Seljak 1
1 Computer Systems Department, Jožef Stefan Institute, 1000 Ljubljana, Slovenia
2 Jožef Stefan International Postgraduate School, 1000 Ljubljana, Slovenia
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(10), 1811; https://doi.org/10.3390/math8101811
Submission received: 14 September 2020 / Revised: 28 September 2020 / Accepted: 8 October 2020 / Published: 16 October 2020
(This article belongs to the Special Issue Machine Learning and Data Mining in Pattern Recognition)

Abstract
Assessing nutritional content is very relevant for patients suffering from various diseases and for professional athletes, and for health reasons it is becoming part of everyday life for many. However, it is a very challenging task, as it requires complete and reliable sources. We introduce a machine learning pipeline for predicting the macronutrient values of foods from learned vector representations of short text descriptions of food products. On a dataset compiled with health specialists, containing short descriptions of foods and their macronutrient values, we generate paragraph embeddings, cluster the foods into food groups using graph-based vector representations that include food domain knowledge, and train regression models for each cluster. The predictions are made for four macronutrients: carbohydrates, fat, protein and water. The highest accuracy, 86%, was obtained for carbohydrate predictions, compared to baseline accuracies of 27% and 36%. The protein predictions yielded the best results across all clusters, with 53%–77% of the predicted values falling within the tolerance-level range. These results were obtained using short descriptions; the embeddings can be improved if they are learned on longer descriptions, which would lead to better prediction results. Since the task of calculating macronutrients normally requires exact ingredient quantities, obtaining these results from short descriptions alone is a huge leap forward.

1. Introduction

There is no denying that nutrition has become a core factor in today’s society, and an undeniable part of the solution to the global health crisis [1,2,3,4]. Making the average human diet healthier and environmentally sustainable is a fundamental part of the solution to numerous challenges from ecological, environmental, societal and economic perspectives, and awareness of this has only just started to grow and be fully appreciated.
We live in a time of a global epidemic of obesity, diabetes and inactivity, all connected to poor dietary habits. Many chronic diseases, such as high blood pressure, cardiovascular disease, diabetes, some cancers [5] and bone-health diseases, are likewise linked to poor dietary habits [6]. Dietary assessment is essential for patients suffering from many diseases (especially diet- and nutrition-related ones); it is also much needed for professional athletes, and thanks to the accessibility of meal-tracking mobile applications it is becoming part of the everyday habits of a vast number of individuals, whether for health, fitness, or weight loss/gain. Obesity is rising steadily in developed Western countries, which contributes to growing public health concern about some subcategories of macronutrients, specifically saturated fats and added or free sugar. Nutritional epidemiologists also raise concern about micronutrients such as sodium, whose intake should be monitored in individuals suffering from specific diseases such as osteoporosis, stomach cancer and kidney disease, and fiber, whose intake is critical for patients suffering from irritable bowel syndrome (IBS).
Nutrient content can vary a lot from one food to another, even when the foods have roughly the same type of ingredients. This makes nutrient tracking and calculation very challenging, and predicting nutrient content very complicated. In this paper, we propose an approach, called P-NUT (Predicting NUTrient content from short text descriptions), for predicting the macronutrient values of a food item from learned vector representations of the text describing it. Food items are generally unbalanced in terms of macronutrient content. Across a broad variety of foods, the content of a single macronutrient can go from one extreme to the other; for example, fat content can range from ‘fat free’ foods to ‘fat based’ foods (e.g., different kinds of nut butters), which makes such content a good basis for grouping foods. A general prediction model will therefore not be efficient for macronutrient prediction. For this reason, we apply unsupervised machine learning, namely clustering, to separate foods into clusters (groups) with similar characteristics. Subsequently, on these separate clusters, we predict the macronutrients by applying supervised machine learning. Predicting macronutrients has not been approached in this manner before; usually the nutrient content of food is calculated or estimated from measurements and exact ingredients [7,8,9]. These calculations are quite demanding: the detailed procedure for calculating the nutrient content of a multi-ingredient food has several major steps, namely selection or development of an appropriate recipe, collection of data on the nutrient content of the ingredients, correction of the ingredient nutrient levels for the weight of edible portions, adjustment of the content of each ingredient for the effects of preparation, summation of the ingredient composition, final weight (or volume) adjustment, and determination of the yield and final volumes. This is when all the ingredients and measurements are available; when data for the ingredients are not available, the procedure gets even more complicated [7,8].
Using just short text descriptions of the food products (either a simple food or a complex recipe dish), the results of this study show that this way of combining representation learning with unsupervised and supervised machine learning provides results with accuracy as high as 86%; compared to the baseline (the mean and median calculated from the values of a certain macronutrient over all the food items in a given cluster), in some cases there are differences in accuracy of up to 50%.
The structure of the rest of the paper is as follows: Section 2 begins with the related work in Section 2.1, where we present the published research needed to understand P-NUT; Section 2.2 provides the structure and a description of the data used in the experiments; and Section 2.3 explains the methodology in detail. The experimental results and the evaluation of the methodology are presented in Section 3. In Section 4, we review the outcome of the methodology, the benefits of the approach, and its novelty. At the end, in Section 5, we summarize the importance of the methodology and give directions for future work.

2. Materials and Methods

To the best of our knowledge, predicting the nutritional content of foods/recipes using only short text descriptions has never been done before. Some machine learning work exists in this direction, mainly involving image recognition: employing different deep learning models for accurate food identification and classification from food images [10], dietary assessment through food image analysis [11], and calculating calorie intake from food images [12,13]. All of this work on predicting total calories relies strongly on textual data retrieved from the Web. There are also numerous mobile and web applications for tracking macronutrient intake [14,15]. Systems like these are used for achieving dietary goals, managing allergies or simply maintaining a healthy balanced diet. Their biggest downside is that they require manual input of details about the meal/food.

2.1. Related Work

In this subsection, we review the concepts relevant to P-NUT, the algorithms that were used, and recent work in this area.

2.1.1. Representation Learning

Representation learning is learning representations of input data by transforming it or extracting features from it, which then makes it easier to perform a task such as classification or prediction [16]. There are two categories of vector representations: non-distributed or sparse representations, which are much older, and distributed or dense representations, which have been in use for the past few years. Our focus is on distributed vector representations.

Word Embeddings

Word representations were first introduced as an idea in 1986 [17]. Since then, word representations have changed language modelling [18]. Subsequent work includes applications to automatic speech recognition and machine translation [19,20], and a wide range of Natural Language Processing (NLP) tasks [21,22,23,24,25,26,27]. Word embeddings have been used in combination with machine learning to improve biomedical named entity recognition [28], capture word analogies [29], and extract latent knowledge from scientific literature as a step towards a generalized approach to mining scientific literature [30], etc. We previously explored the idea of applying text-based representation methods in the food domain for the task of finding similar recipes based on cosine similarity between embedding vectors [31]. Word embeddings are vector space models (VSMs) that represent words as real-valued vectors in a low-dimensional semantic space (much smaller than the vocabulary size). Having distributed representations of words in vector space helps improve the performance of learning algorithms on various NLP tasks.
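As a small illustration of the similarity measure used in [31], the snippet below computes the cosine similarity between two embedding vectors; the vectors themselves are made-up toy values.

```python
import numpy as np

def cosine_similarity(u, v):
    # Cosine of the angle between u and v: values close to 1 mean similar direction.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

recipe_a = np.array([0.20, 0.70, 0.10])   # embedding of one recipe (toy values)
recipe_b = np.array([0.25, 0.60, 0.05])   # embedding of a similar recipe
print(cosine_similarity(recipe_a, recipe_b))  # close to 1 -> semantically similar
```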
  • Word2Vec was introduced as a word embedding method by Mikolov et al. in 2013 at Google [32], and it is a neural-network-based word embedding method. There are two different Word2Vec approaches, Continuous Bag of Words and Continuous Skip-gram [33]:
    Continuous Bag-of-Words Model (CBOW): this architecture consists of a single hidden layer and an output layer. The algorithm tries to predict the center word based on the surrounding words, which are considered the context of that word. The inputs of this model are the one-hot encoded context word vectors.
    Skip-gram Model (SG): in the SG architecture, we have the center word, and the algorithm tries to predict the words before and after it, which make up the context of the word. The output of the SG model is C vectors of dimension V, where C is the number of context words we want the model to return and V is the vocabulary size. The SG model is trained to minimize the summed prediction error, and gives better vectors as C increases [32,33].
If compared, CBOW is a lot simpler and faster to train, but SG performs better with rare words (a minimal training sketch with both architectures follows this list).
  • GloVe [34] is another method for generating word embeddings. It is a global log-bilinear regression model for unsupervised learning of word representations that has been shown to outperform other models on word analogy, word similarity and named entity recognition tasks. It is based on co-occurrence statistics from a given corpus.
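As a minimal sketch of the two Word2Vec architectures, the snippet below trains CBOW and SG models with Gensim on a toy corpus; the corpus and parameter values are illustrative assumptions (older Gensim versions name the dimension parameter `size` instead of `vector_size`).

```python
from gensim.models import Word2Vec

# Toy corpus of tokenized short food descriptions (illustrative only).
corpus = [
    ["vegetable", "risotto", "with", "parboiled", "rice"],
    ["gingerbread", "biscuit", "made", "of", "spelt", "and", "rye", "flour"],
]

# sg=0 selects CBOW: predict the center word from its surrounding context.
cbow = Word2Vec(corpus, vector_size=50, window=2, min_count=1, sg=0)
# sg=1 selects Skip-gram: predict the surrounding context from the center word.
skipgram = Word2Vec(corpus, vector_size=50, window=2, min_count=1, sg=1)

print(cbow.wv["risotto"][:5])                 # first five dimensions of a word vector
print(skipgram.wv.most_similar("rice", topn=2))
```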

Paragraph Embeddings

In 2014, an unsupervised paragraph embedding method called Doc2Vec was proposed [35]. In contrast to Word2Vec, Doc2Vec generates vector representations of whole documents, regardless of their length. The paragraph vector and word vectors are concatenated in a sliding window to predict the next word, and training is done with a gradient descent algorithm. The Doc2Vec algorithm also takes into account word order and context. The inspiration, of course, comes from the Word2Vec algorithm: the first variant, called the Distributed Memory version of Paragraph Vector (PV-DM), is an extension of the CBOW model, with the difference of including an additional feature vector (the Paragraph ID), unique to the document, in the prediction of the next word. The word vectors represent the concept of a word, while the document vector represents the concept of a document.
The second variant, called the Distributed Bag of Words version of Paragraph Vector (PV-DBOW), is similar to the Word2Vec SG model. In PV-DM, the algorithm considers the concatenation of the paragraph vector with the word vectors when predicting the next word, whereas in PV-DBOW the algorithm ignores the context words in the input, and words are predicted by random sampling from the paragraph in the output.
The authors recommend using a combination of the two models, even though the PV-DM model performs better and will usually achieve state-of-the-art results by itself.

Graph-Based Representation Learning

Besides word embeddings, there are methods used for embedding data represented as graphs, consequently named graph embeddings. Usually, embedding methods learn vector embeddings in Euclidean vector space, but since graphs are hierarchical structures, in 2017 the authors of [36] introduced an approach for embedding hierarchical structures into hyperbolic space, the Poincaré ball. Poincaré embeddings are vector representations of symbolic data in which the semantic similarity between two concepts is the distance between them in the vector space, while their hierarchy is reflected in the magnitudes of the vectors. Graph embeddings have improved performance over many existing models on tasks such as text classification, distantly supervised entity extraction and entity classification [37]; they have also been used for unsupervised feature extraction from sequences of words [38]. In [39], the authors generate graph embeddings (Poincaré) for the FoodEx2 hierarchy [40]. FoodEx2 version 2 is a standardized system for food classification and description developed by the European Food Safety Authority (EFSA); it has domain knowledge embedded in it, and it contains descriptions of a vast set of individual food items combined into food groups and broader food categories in a hierarchy that exhibits parent-child relationships. The domain knowledge contained in the FoodEx2 hierarchy is carried over into the graph embeddings, which the authors then use to group the food items from the FoodEx2 system into clusters. The clustering is done using the Partitioning Around Medoids algorithm [41], and the number of clusters is determined using the silhouette method [42].
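As a hedged illustration of the Poincaré approach, the sketch below trains hyperbolic embeddings with Gensim's `PoincareModel` on a toy parent-child hierarchy; the relations are invented stand-ins, not the actual FoodEx2 codes used in [39].

```python
from gensim.models.poincare import PoincareModel

# (child, parent) edges of a toy hierarchy standing in for FoodEx2.
relations = [
    ("egg_dishes", "foods"),
    ("fried_egg", "egg_dishes"),
    ("scrambled_egg", "egg_dishes"),
    ("fish_products", "foods"),
    ("canned_tuna", "fish_products"),
    ("canned_salmon", "fish_products"),
]

model = PoincareModel(relations, size=50, negative=2)  # embed into the Poincaré ball
model.train(epochs=50)

# Hyperbolic distance: siblings end up closer than unrelated leaves.
print(model.kv.distance("fried_egg", "scrambled_egg"))
print(model.kv.distance("fried_egg", "canned_tuna"))
```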

2.2. Data

In our experiments, we used a dataset containing nutritional information about food items, recently collected as food consumption data in Slovenia in collaboration with subject-matter experts for the aims of the EFSA EU Menu project [43], which is designed for more accurate exposure assessments and, ultimately, support of risk managers in their decision-making on food safety; the ultimate goal is enabling quick assessment of exposure to chronic and acute substances possibly found in the food chain [44]. The dataset contains 3265 food items, some of which are simple food products while others are recipes with short descriptions; a few instances are presented in Table 1 as an example.
For each food item in the dataset we have available: the name in Slovene, the name in English, the FoodEx2 code, and nutrient values for carbohydrates, fat, protein and water. We repeated our experiments for both the English and the Slovene names of the food products and recipes.

2.3. Methodology

Figure 1 presents a flowchart of the methodology. Our methodology consists of three separate parts: representation learning and unsupervised machine learning, conducted independently, whose outputs are then combined in supervised machine learning.
The idea is: (i) represent the text descriptions in vector space using embedding methods, i.e., semantic embeddings at the sentence/paragraph level of short food descriptions; (ii) cluster the foods based on their FoodEx2 codes [40] using graph embeddings [39]; (iii) perform post-hoc cluster merging in order to obtain more evenly distributed clusters at a higher level of the FoodEx2 hierarchy; (iv) apply different single-target regression algorithms to each cluster, with the embedding vectors as features, to predict the separate macronutrient values (carbohydrates, fat, protein and water); (v) evaluate the methodology by comparing the predicted with the actual values of the macronutrients.

2.3.1. Representation Learning

The starting point is the textual data, in our case the short text descriptions of the food products/recipes, along with their FoodEx2 codes and macronutrient values. To represent the textual data as vectors, embeddings are generated for the whole food product name/description using two different approaches:
1. Learning word vector representations (word embeddings) with the Word2Vec and GloVe methods. The vector representation of a whole description is obtained by merging the separate word embeddings generated for each word in the sentence (food product name/description). If D is a food product description consisting of n words:

$$D = \{word_1, word_2, \ldots, word_n\} \quad (1)$$

and $E_{word_a}$ is the vector representation (embedding) of a separate word:

$$E_{word_a} = [x_{a1}, x_{a2}, \ldots, x_{ad}] \quad (2)$$

where $a \in \{1, \ldots, n\}$, $n$ is the number of words in the description, and $d$ is the dimension of the word vectors, which is defined manually for both Word2Vec and GloVe. These vectors are representations of words; to obtain the vector representation of the whole food product description, we apply two different heuristics for merging the separate word vectors. Our two heuristics of choice are:
  • Average: the vector representation of the food product description is calculated as the average of the vectors of the words it consists of:

$$E_{average}(D) = \left[\frac{x_{11} + \cdots + x_{n1}}{n}, \frac{x_{12} + \cdots + x_{n2}}{n}, \ldots, \frac{x_{1d} + \cdots + x_{nd}}{n}\right] \quad (3)$$

  • Sum: the vector representation of each food product/recipe description is calculated by summing the vector representations of the words it consists of:

$$E_{sum}(D) = [x_{11} + \cdots + x_{n1},\; x_{12} + \cdots + x_{n2},\; \ldots,\; x_{1d} + \cdots + x_{nd}] \quad (4)$$

where $E_{average}(D)$ and $E_{sum}(D)$ are the merged embeddings, i.e., the embeddings for the whole description. When generating the Word2Vec and GloVe embeddings, we considered different values for the dimension size and the sliding window size. The dimension sizes of choice are [50,100,200], and for the Word2Vec embeddings we also considered the two available types of feature extraction: CBOW and SG. For these dimensions, we assign different values to the parameter called the ‘sliding window’, which indicates the distance within a sentence between the current word and the word being predicted. The values of choice are [2,3,5,10], because our food product descriptions are not very long (the average number of words in a food product description in the dataset is 11, while the maximum is 30). By combining these parameter values, 24 Word2Vec models were trained, or 48 when the two merging heuristics are also counted, while with GloVe a total of 24 models were trained (see the sketch after this list).
2. Learning paragraph vector representations with the Doc2Vec algorithm. The Doc2Vec algorithm is used to generate a vector representation for each description (sentence). If D is the description of the food product/recipe, then $E_{Doc2Vec}(D)$ is the sentence vector representation generated with Doc2Vec:

$$E_{Doc2Vec}(D) = [x_1, x_2, \ldots, x_d] \quad (5)$$

where $d$ is the predefined dimension of the vectors. As with the two chosen word embedding methods, we considered different dimension sizes and sliding window sizes, specifically [2,3,5,10] for the sliding window and [50,100,200] for the dimension size. We also considered the two architectures of the Doc2Vec model, PV-DM and PV-DBOW, and we used the non-concatenative mode (separate models for the sum option and separate models for the average option), because using the concatenation of context vectors rather than their sum/average would result in a much larger model. Taking all these parameters into account, 48 Doc2Vec models were trained in total.
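The sketch below walks through both representation routes on a toy corpus, under assumed parameter values: Word2Vec word vectors merged by the average and sum heuristics of Equations (3) and (4), and a Doc2Vec paragraph vector as in Equation (5).

```python
import numpy as np
from gensim.models import Word2Vec
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

descriptions = [
    ["egg", "whole", "cooked", "scrambled"],
    ["tuna", "with", "beans", "canned"],
]

# Route 1: word embeddings merged into a description embedding.
w2v = Word2Vec(descriptions, vector_size=50, window=2, min_count=1, sg=0)

def merge(description, heuristic="average"):
    vectors = np.array([w2v.wv[w] for w in description if w in w2v.wv])
    # Equation (3) for "average", Equation (4) for "sum".
    return vectors.mean(axis=0) if heuristic == "average" else vectors.sum(axis=0)

e_avg = merge(descriptions[0], "average")   # E_average(D)
e_sum = merge(descriptions[0], "sum")       # E_sum(D)

# Route 2: paragraph embeddings learned directly per description.
tagged = [TaggedDocument(words=d, tags=[i]) for i, d in enumerate(descriptions)]
# dm=1 -> PV-DM, dm=0 -> PV-DBOW; dm_mean=1 averages context vectors, 0 sums them.
d2v = Doc2Vec(tagged, vector_size=50, window=2, min_count=1, dm=1, dm_mean=1)
e_doc = d2v.dv[0]                            # E_Doc2Vec(D), Equation (5)
print(e_avg.shape, e_sum.shape, e_doc.shape)
```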

2.3.2. Unsupervised Machine Learning

Foods exhibit large variations in nutrient content and therefore have very unbalanced macronutrient content. The dataset in our experiments includes a broad variety of foods, which implies that the content of a macronutrient can go from one extreme to the other. It goes without saying, then, that in order to better predict macronutrient content, food items should be grouped by some similarity. Here, the available FoodEx2 codes come into use, since they already contain domain knowledge: based on them, food items are grouped into food groups and broader food categories in the FoodEx2 hierarchy [40].
Independently of the representation learning process, we used the method presented in [39], where the FoodEx2 hierarchy is represented as Poincaré graph embeddings and the FoodEx2 codes are then clustered, based on these embeddings, into 230 clusters. This clustering is performed at the bottom end of the hierarchy, i.e., on the leaves of the graph. Given that our dataset is rather small compared to the total number of FoodEx2 codes in the hierarchy, and that after cluster assignment some of the clusters in our dataset would contain very few or no elements at all, we decided to perform a post-hoc cluster merging. The post-hoc cluster merging follows a bottom-up approach: the clusters are merged based on their top-level parents, going a level deeper until the clusters are as evenly distributed as possible, as in the sketch below.
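The toy sketch below mimics the merging step: a small child-to-parent map stands in for the real FoodEx2 hierarchy, and each leaf is relabelled by its ancestor a chosen number of levels up, so clusters sharing a parent collapse into one.

```python
from collections import Counter

# child -> parent edges (illustrative labels, not real FoodEx2 codes).
parents = {
    "A009Z": "sweets", "A00CT": "sweets",
    "A041G": "rice_dishes",
    "sweets": "grain_products", "rice_dishes": "grain_products",
    "A03LD": "beverages",
    "grain_products": "root", "beverages": "root",
}

def ancestor(code, levels_up):
    # Walk up the hierarchy, stopping early if the root is reached.
    for _ in range(levels_up):
        code = parents.get(code, code)
    return code

codes = ["A009Z", "A00CT", "A041G", "A03LD"]
merged = [ancestor(c, 2) for c in codes]
print(Counter(merged))  # Counter({'grain_products': 3, 'root': 1})
```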

2.3.3. Supervised Machine Learning

The last part of the methodology is the supervised machine learning part, which receives as input the outputs of the representation learning part and the unsupervised machine learning part. It consists of applying single-target regression algorithms in order to predict the separate macronutrient values.
Separate prediction models are trained for each macronutrient because, from the conducted correlation test (Pearson’s correlation coefficient), we concluded that there is no correlation between the target variables. In a real-world scenario, it is somewhat hard to select the right machine learning algorithm for a given purpose. The most widely accepted approach is to select a few algorithms, choose ranges for each algorithm’s hyper-parameters, perform hyper-parameter tuning, evaluate the estimators’ performance with cross-validation using the same data in each iteration, benchmark the algorithms, and select the best one(s). When working with regression algorithms, the most common baseline is to use the mean or median (central tendency measures) of the train part of the dataset for all predictions; a sketch of this setup follows.
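Below is a sketch of this selection setup with Scikit-learn; the feature matrix, targets and hyper-parameter grids are illustrative assumptions, and `DummyRegressor` supplies the mean and median baselines.

```python
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.linear_model import Lasso, Ridge
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))      # stand-in for the embedding vectors
y = rng.uniform(0, 50, size=100)    # stand-in for one macronutrient target

# One KFold object reused everywhere gives the matched-sample comparison:
# every estimator sees exactly the same folds.
cv = KFold(n_splits=5, shuffle=True, random_state=42)

grids = [
    (Ridge(), {"alpha": [0.1, 1.0, 10.0]}),
    (Lasso(), {"alpha": [0.01, 0.1, 1.0]}),
]
for estimator, grid in grids:
    search = GridSearchCV(estimator, grid, cv=cv).fit(X, y)
    print(type(estimator).__name__, search.best_params_, round(search.best_score_, 3))

# Baselines: predict the train-fold mean or median for every test instance.
for strategy in ("mean", "median"):
    scores = cross_val_score(DummyRegressor(strategy=strategy), X, y, cv=cv)
    print(strategy, round(scores.mean(), 3))
```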

2.3.4. Tolerance for Nutrient Values

The main goal is to obtain macronutrient values, which are expressed in grams and, under international legislation and regulations, can have defined tolerances. The European Commission Health and Consumers Directorate General published a guidance document in 2012 [45] with the aim of providing recommendations for the calculation of acceptable differences between the nutrient quantities declared on food product labels and those established in Regulation EU 1169/2011 [46]. These tolerances for food product labels are important because it is impossible for foods to contain the exact nutrient levels presented on their labels, as a consequence of the natural variation of foods as well as variations occurring during production and storage. However, the nutrient content of foods should not deviate from the labelled values substantially enough that such deviations could mislead consumers. From the tolerance levels stated in [45], in our particular case we used the tolerance levels for the nutrition declaration of foods other than food supplements, of which the needed information is presented in Table 2, where the allowed deviations are given for each of the four macronutrients, depending on their quantity per 100 grams of the food in question. These tolerance levels are included at the very final step of our methodology, in determining how accurate the predicted macronutrient values are.

3. Results

The first step towards the evaluation is pre-processing of the data. Our evaluation dataset is a subset of the original dataset, obtained by extracting the English food product descriptions along with the columns containing the macronutrient values (carbohydrates, fat, protein and water). The text descriptions are tokenized. Punctuation signs and numbers that represent quantities are removed, whereas percentage values (of fat, of sugar, of cocoa, etc.), which carry valuable information about the nutrient content, and stop words, which add meaning to the description, are kept. The next step is word lemmatization [47]; separate lemmatizers are used for the English names and the Slovene names. Table 3 presents a few examples of the pre-processed data for the English names.
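The sketch below approximates this pre-processing for the English names, assuming NLTK's `WordNetLemmatizer` (the paper does not name the specific lemmatizers used), so its output may differ slightly from Table 3.

```python
import re
import nltk
from nltk.stem import WordNetLemmatizer

nltk.download("wordnet", quiet=True)
lemmatizer = WordNetLemmatizer()

def preprocess(description: str):
    kept = []
    for token in description.lower().split():
        if "%" in token:                         # keep percentages (fat, sugar, cocoa ...)
            kept.append(token.strip("%(),"))
            continue
        token = re.sub(r"[^a-z0-9]", "", token)  # strip punctuation
        if token and not token.isdigit():        # drop bare quantity numbers
            kept.append(lemmatizer.lemmatize(token))
    return kept

print(preprocess("Milk chocolate with 30% cocoa, Gorenjka (250 g)"))
# ['milk', 'chocolate', 'with', '30', 'cocoa', 'gorenjka', 'g']
```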
After obtaining the data in the desired format, the next step is to apply the algorithms for generating the embeddings. For this purpose, we used the Gensim library [48] in Python and the corresponding packages for the Word2Vec and Doc2Vec algorithms. The embedding vectors are the basis for the next steps.
Independently of this process, the data are clustered, i.e., the instances are divided into clusters based on their FoodEx2 codes. The clustering in [39] initially yields 230 clusters; once cluster numbers are assigned, the instances in our dataset are clustered accordingly. From this initial clustering we note that not all clusters contain elements, and some contain very few. Therefore, the post-hoc cluster merging is performed, in which we merge the clusters following a bottom-up approach. For our dataset, we used the parents at the third level of the FoodEx2 hierarchy and obtained 9 clusters. Table 4 gives a few examples from each cluster (the English names are given for convenience).
The next step in our methodology is the machine learning part: applying single-target regressions according to the following setup:
  • Select regression algorithms: Linear regression, Ridge regression, Lasso regression and ElasticNet regression (using the Scikit-learn library in Python [49]).
  • Select parameter ranges for each algorithm and perform hyper-parameter tuning: ranges and values are given a priori for all parameters of all the regression algorithms. From all combinations, the best parameters for model training are selected with GridSearchCV (using the Scikit-learn library in Python [49]). This is done for each cluster separately.
  • Apply k-fold cross-validation to estimate the prediction error: we train models for each cluster using each of the selected regression algorithms. The models are trained with the previously selected best parameters for each cluster and then evaluated with cross-validation. We chose the matched-sample approach for comparing the regressors, i.e., using the same data in each iteration.
  • Apply tolerance levels and calculate accuracy: the accuracy is calculated according to the tolerance levels in Table 2 (a code transcription of this check is given after this list). If $a_i$ is the actual value of the $i$-th instance of the test set in a given iteration of the k-fold cross-validation, and $p_i$ is the predicted value of the same, $i$-th, instance, then:

$$d_i = |a_i - p_i| \quad (6)$$

where $d_i$ is the absolute difference between the two values. We define a binary variable that is assigned a positive value if the predicted value falls within the tolerance level:

$$allowed = 1 \;\; \text{if:} \;\; \begin{cases} a_i \le 10 \;\wedge\; \begin{cases} d_i \le 2, & \text{for protein and carbohydrate} \\ d_i \le 1.5, & \text{for fat} \end{cases} \\ 10 < a_i \le 40 \;\wedge\; d_i \le 0.2 \times a_i \\ a_i > 40 \;\wedge\; d_i \le 8 \end{cases} \quad (7)$$

At the end, we calculate the accuracy as the ratio of predicted values that fall within the ‘allowed’ range, i.e., the tolerance level:

$$Accuracy = \frac{\sum_{i=1}^{n} allowed_i}{n} \quad (8)$$

where $n$ is the number of instances in the test set. The accuracy percentage is calculated for the baseline mean and baseline median as well: the percentage of baseline values (means and medians of each cluster) that fall within the tolerance-level range, calculated according to Equations (6)–(8), where $a_i$ is the actual value of the $i$-th instance of the test set in a given iteration of the k-fold cross-validation, and instead of $p_i$ we have:

$$b = \begin{cases} \dfrac{\sum_{i=1}^{m} x_i}{m}, & \text{if the baseline is the mean} \\[6pt] \dfrac{X_{\lfloor (m+1)/2 \rfloor} + X_{\lceil (m+1)/2 \rceil}}{2}, & \text{if the baseline is the median} \end{cases} \quad (9)$$

where $m$ is the number of instances in the train set, and $X$ is the train set sorted in ascending order.
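The following is a direct transcription of Equations (6)–(8) into code, using the tolerance bands of Table 2:

```python
def within_tolerance(actual, predicted, nutrient):
    # Equations (6)-(7): absolute error checked against the Table 2 bands.
    d = abs(actual - predicted)
    if actual <= 10:
        return d <= (1.5 if nutrient == "fat" else 2.0)
    if actual <= 40:
        return d <= 0.2 * actual
    return d <= 8.0

def tolerance_accuracy(actuals, predictions, nutrient):
    # Equation (8): share of predictions that fall within tolerance.
    allowed = [within_tolerance(a, p, nutrient) for a, p in zip(actuals, predictions)]
    return sum(allowed) / len(allowed)

# 1.5 <= 2 ok; 3 <= 0.2*20 = 4 ok; 10 > 8 not ok  =>  accuracy 2/3
print(tolerance_accuracy([5.0, 20.0, 50.0], [6.5, 23.0, 60.0], "carbohydrate"))
```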
The accuracy percentages are calculated for each fold in each cluster, and at the end, for each cluster, we average the percentages across the folds. Table 5 presents the results obtained with the embeddings generated from the English names, and Table 6 those obtained with the embeddings generated from the Slovene names.
In these tables, we give the accuracy percentages of the predictions for each target macronutrient in each cluster. The tables show that using the Word2Vec and Doc2Vec embeddings as features for the regressions yielded better results in more cases than using the GloVe embedding vectors, but the difference is not large enough to say that these two embedding algorithms outperformed GloVe. Figure 2, Figure 3, Figure 4 and Figure 5 present the results for each target macronutrient graphically.
In the graphs, for each target macronutrient and each cluster, we give the best result obtained with the embedding vectors from the English and Slovene names, and compare it with the baseline mean and median for the particular cluster. In the graphs, the embedding algorithm that yields the best results, together with its parameters and heuristic, is given as:
$$E\_h\_d\_w, \quad h \in \{sum,\ average\} \text{ is the chosen heuristic}, \quad d \in \{50, 100, 200\} \text{ is the dimension}, \quad w \in \{2, 3, 5, 10\} \text{ is the sliding window} \quad (10)$$
where E is the embedding algorithm (Word2Vec, GloVe or Doc2Vec). We can see that the embedding algorithm yielding the best results varies, but in all cases the embedding algorithm gives better results than the baseline methods. In Table 7, we present the embedding algorithms (with all the parameters used) that gave the best results for each target macronutrient in each cluster, together with the regression algorithm used for making the predictions.

4. Discussion

From the obtained results, we can observe that the highest percentages of correctly predicted macronutrient values are obtained in cluster 9, for the prediction of carbohydrates: 86.36%, 81.82% and 72.73% for the English names and 86.36%, 72.73% and 86.36% for the Slovene names, for the Word2Vec, GloVe and Doc2Vec algorithms respectively, whereas the baseline (both mean and median) is less than half of that. These results are followed by the predictions for protein quantity in the same cluster, and then the predictions for protein and carbohydrates in cluster 7. Inspecting these two clusters, we found that they were the only two clusters not merged with others; they therefore sit at a deeper level of the FoodEx2 hierarchy, and the foods inside them are more similar to each other than the foods in other clusters. Cluster 9 consists of egg products and simple egg dishes: each of these foods has almost identical macronutrients because they contain only one ingredient, eggs. Cluster 7, on the other hand, contains fish products, either frozen or canned. If we do not consider the results from these two clusters, the best results are obtained for the protein predictions in cluster 4 (70%–72%) and the fat predictions (66%–68%), although these are not much better than the baseline median of that cluster; however, the protein predictions in cluster 8 (60%–67%) achieve accuracies much higher than the baseline mean and median for that cluster. Cluster 8 mainly contains types of processed meat, which can vary notably in fat content but are similar in their range of protein content.
For comparison, we also ran the single-target regressions without clustering the dataset. The results are presented in Figure 6.
From this graph, we can draw the same conclusion: the embedding algorithms give better results than the baseline mean and median (in this case computed over the whole dataset) for each target macronutrient. The best results, again, are obtained for the prediction of protein content (62%–64%).
In Table 8, we give the parameters for the embedding algorithms and the regressors with which the best results were obtained without clustering the data.
From these results, it can be argued that training machine learning models on food data previously clustered based on FoodEx2 codes yields better results than predicting on the whole dataset. Comparing the performance of the three embedding algorithms, it is hard to say that one outperformed, or underperformed relative to, the other two. This outcome is due to the fact that we are dealing with fairly short textual descriptions.
Given that the results with clustering are better than those without, and that we rely so strongly on the FoodEx2 codes to cluster the foods, the availability of FoodEx2 codes is of great importance and is therefore a limitation of the methodology. For this purpose, we can rely on a method such as StandFood [50], a natural language processing methodology developed for classifying and describing foods according to FoodEx2. Once this limitation is overcome, the application of our method can be fully automated.
From a theoretical viewpoint, this methodology demonstrates the benefits of using representation learning as the base of a predictive study, and shows that dense real-valued vectors can capture enough semantics even from a short text description (without the details needed for the task in question, in our case measurements or exact ingredients) to be used in a predictive study for a complicated and value-sensitive task such as predicting macronutrient content. This study offers fertile ground for further exploration of representation learning and for considering more complex embedding algorithms, e.g., using transformers [51,52] and fine-tuning them for this task.
From a managerial viewpoint, the application of this methodology opens up many possibilities for facilitating and easing the process of calculating macronutrient content, which is crucial for dietary assessment, dietary recommendations, dietary guidelines, macronutrient tracking and other such tasks, which are key tools for doctors, health professionals, dieticians, nutrition experts, policy makers, professional sports coaches, athletes, fitness professionals, etc.

5. Conclusions

We live in a modern health crisis: we have a cure for almost everything, and yet the most common causes of death, cardiovascular diseases, are nutrition- and diet-related. Knowing what is in our food and understanding its nutritional content (macro- and micronutrients) is the first step within our power towards the prevention of diet-related diseases. There is an overwhelming amount of nutrition-related data available, and most of it comes in textual form, structured and unstructured. Data science can help us utilize this data for our benefit. We presented a methodology that combines representation learning and machine learning for the task of predicting macronutrient values from short textual descriptions of food data, a combination of food products and recipes. Taking learned vector representations of the descriptions as features, and applying different regression algorithms on separate clusters of the data obtained by clustering based on Poincaré graph embeddings of the FoodEx2 codes, yields results with accuracy as high as 86%, proving this approach to be very effective for this task. In future work, we intend to extend this methodology with state-of-the-art embeddings based on transformers (BERT embeddings [51]), clustering at an upper level of the FoodEx2 hierarchy, and methods for obtaining FoodEx2 codes when they are not available [50], as well as evaluating it on a bigger dataset with longer, more detailed descriptions.

Author Contributions

Conceptualization, G.I., T.E. and B.K.S.; methodology, G.I. and T.E.; software, G.I.; validation, G.I. and T.E.; resources, B.K.S.; data curation, B.K.S.; writing—original draft preparation, G.I.; writing—review and editing, T.E. and B.K.S.; visualization, G.I.; supervision, T.E. and B.K.S.; project administration, B.K.S.; funding acquisition, B.K.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Slovenian Research Agency (research core grant number P2-0098), and the European Union’s Horizon 2020 research and innovation programme (FNS-Cloud, Food Nutrition Security) (grant agreement 863059). The information and the views set out in this publication are those of the authors and do not necessarily reflect the official opinion of the European Union. Neither the European Union institutions and bodies nor any person acting on their behalf may be held responsible for the use that may be made of the information contained herein.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Willett, W.; Rockström, J.; Loken, B.; Springmann, M.; Lang, T.; Vermeulen, S.; Garnett, T.; Tilman, D.; DeClerck, F.; Wood, A.; et al. Food in the Anthropocene: The EAT–Lancet Commission on healthy diets from sustainable food systems. The Lancet 2019, 393, 447–492. [Google Scholar] [CrossRef]
  2. Branca, F.; Demaio, A.; Udomkesmalee, E.; Baker, P.; Aguayo, V.M.; Barquera, S.; Dain, K.; Keir, L.; Lartey, A.; Mugambi, G.; et al. A new nutrition manifesto for a new nutrition reality. The Lancet 2020, 395, 8–10. [Google Scholar] [CrossRef]
  3. Keeley, B.; Little, C.; Zuehlke, E. The State of the World’s Children 2019: Children, Food and Nutrition–Growing Well in a Changing World; UNICEF: New York, NY, USA, 2019. [Google Scholar]
  4. Mbow, H.-O.P.; Reisinger, A.; Canadell, J.; O’Brien, P. Special Report on Climate Change, Desertification, Land Degradation, Sustainable Land Management, Food Security, and Greenhouse Gas Fluxes in Terrestrial Ecosystems (SR2); IPCC: Geneva, Switzerland, 2017. [Google Scholar]
  5. Ijaz, M.F.; Attique, M.; Son, Y. Data-Driven Cervical Cancer Prediction Model with Outlier Detection and Over-Sampling Methods. Sensors 2020, 20, 2809. [Google Scholar] [CrossRef] [PubMed]
  6. World Health Organization. Diet, Nutrition, and the Prevention of Chronic Diseases: Report of a Joint WHO/FAO Expert Consultation; World Health Organization: Geneva, Switzerland, 2003; Volume 916. [Google Scholar]
  7. Rand, W.M.; Pennington, J.A.; Murphy, S.P.; Klensin, J.C. Compiling Data for Food Composition Data Bases; United Nations University Press: Tokyo, Japan, 1991. [Google Scholar]
  8. Greenfield, H.; Southgate, D.A. Food Composition Data: Production, Management, and Use; Food and Agriculture Organization: Rome, Italy, 2003; ISBN 978-92-5-104949-5. [Google Scholar]
  9. Schakel, S.F.; Buzzard, I.M.; Gebhardt, S.E. Procedures for estimating nutrient values for food composition databases. J. Food Compos. Anal. 1997, 10, 102–114. [Google Scholar] [CrossRef] [Green Version]
  10. Yunus, R.; Arif, O.; Afzal, H.; Amjad, M.F.; Abbas, H.; Bokhari, H.N.; Haider, S.T.; Zafar, N.; Nawaz, R. A framework to estimate the nutritional value of food in real time using deep learning techniques. IEEE Access 2018, 7, 2643–2652. [Google Scholar] [CrossRef]
  11. Jiang, L.; Qiu, B.; Liu, X.; Huang, C.; Lin, K. DeepFood: Food Image Analysis and Dietary Assessment via Deep Model. IEEE Access 2020, 8, 47477–47489. [Google Scholar] [CrossRef]
  12. Pouladzadeh, P.; Shirmohammadi, S.; Al-Maghrabi, R. Measuring calorie and nutrition from food image. IEEE Trans. Instrum. Meas. 2014, 63, 1947–1956. [Google Scholar] [CrossRef]
  13. Ege, T.; Yanai, K. Image-based food calorie estimation using recipe information. IEICE Trans. Inf. Syst. 2018, 101, 1333–1341. [Google Scholar] [CrossRef] [Green Version]
  14. Samsung Health (S-Health). Available online: https://health.apps.samsung.com/terms (accessed on 11 May 2020).
  15. MyFitnessPal. Available online: https://www.myfitnesspal.com/ (accessed on 11 May 2020).
  16. Bengio, Y.; Courville, A.; Vincent, P. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1798–1828. [Google Scholar] [CrossRef]
  17. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Cogn Modeling 1988, 5, 1. [Google Scholar] [CrossRef]
  18. Bengio, Y.; Ducharme, R.; Vincent, P.; Jauvin, C. A neural probabilistic language model. J. Mach. Learn. Res. 2003, 3, 1137–1155. [Google Scholar]
  19. Mikolov, T. Statistical Language Models Based on Neural Networks; Presentation at Google, Mountain View, 2nd April 2012; Brno University of Technology: Brno, Czech Republic, 2012; Volume 80. [Google Scholar]
  20. Caracciolo, C.; Stellato, A.; Rajbahndari, S.; Morshed, A.; Johannsen, G.; Jaques, Y.; Keizer, J. Thesaurus maintenance, alignment and publication as linked data: The AGROVOC use case. Int. J. Metadatasemantics Ontol. 2012, 7, 65–75. [Google Scholar] [CrossRef]
  21. Weston, J.; Bengio, S.; Usunier, N. Wsabie: Scaling up to large vocabulary image annotation. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, Barcelona, Spain, 16–22 July 2011. [Google Scholar]
  22. Socher, R.; Lin, C.C.; Manning, C.; Ng, A.Y. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), Bellevue, WA, USA, 28 June–2 July 2011; pp. 129–136. [Google Scholar]
  23. Glorot, X.; Bordes, A.; Bengio, Y. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceedings of the 28th International Conference on Machine Learning (ICML-11); Omnipress: Madison, WI, USA, 2011; pp. 513–520. [Google Scholar]
  24. Turney, P.D. Distributional semantics beyond words: Supervised learning of analogy and paraphrase. Trans. Assoc. Comput. Linguist. 2013, 1, 353–366. [Google Scholar] [CrossRef]
  25. Turney, P.D.; Pantel, P. From frequency to meaning: Vector space models of semantics. J. Artif. Intell. Res. 2010, 37, 141–188. [Google Scholar] [CrossRef]
  26. Mikolov, T.; Yih, W.; Zweig, G. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Atlanta, GA, USA, 9–14 June 2013; pp. 746–751. [Google Scholar]
  27. Eckart, C.; Young, G. The approximation of one matrix by another of lower rank. Psychometrika 1936, 1, 211–218. [Google Scholar] [CrossRef]
  28. Habibi, M.; Weber, L.; Neves, M.; Wiegandt, D.L.; Leser, U. Deep learning with word embeddings improves biomedical named entity recognition. Bioinformatics 2017, 33, i37–i48. [Google Scholar] [CrossRef]
  29. Drozd, A.; Gladkova, A.; Matsuoka, S. Word embeddings, analogies, and machine learning: Beyond king-man+ woman= queen. In Proceedings of the Coling 2016, the 26th International Conference on Computational Linguistics: Technical papers, Osaka, Japan, 11–17 December 2016; pp. 3519–3530. [Google Scholar]
  30. Tshitoyan, V.; Dagdelen, J.; Weston, L.; Dunn, A.; Rong, Z.; Kononova, O.; Persson, K.A.; Ceder, G.; Jain, A. Unsupervised word embeddings capture latent knowledge from materials science literature. Nature 2019, 571, 95–98. [Google Scholar] [CrossRef] [Green Version]
  31. Ispirova, G.; Eftimov, T.; Seljak, B.K. Comparing Semantic and Nutrient Value Similarities of Recipes. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; pp. 5131–5139. [Google Scholar]
  32. Mikolov, T.; Chen, K.; Corrado, G.; Dean, J. Efficient estimation of word representations in vector space. arXiv 2013, arXiv:1301.3781. [Google Scholar]
  33. Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G.S.; Dean, J. Distributed representations of words and phrases and their compositionality. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 5–10 December 2013; pp. 3111–3119. [Google Scholar]
  34. Pennington, J.; Socher, R.; Manning, C. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, 25–29 October 2014; pp. 1532–1543. [Google Scholar]
  35. Le, Q.; Mikolov, T. Distributed representations of sentences and documents. In Proceedings of the International Conference on Machine Learning, Beijing, China, 21–26 June 2014; pp. 1188–1196. [Google Scholar]
  36. Nickel, M.; Kiela, D. Poincaré embeddings for learning hierarchical representations. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 6338–6347. [Google Scholar]
  37. Yang, Z.; Cohen, W.; Salakhudinov, R. Revisiting semi-supervised learning with graph embeddings. In Proceedings of the 33rd International Conference on Machine Learning; Balcan, M.F., Weinberger, K.Q., Eds.; PMLR: New York, NY, USA, 2016; Volume 48, pp. 40–48. [Google Scholar]
  38. Ristoski, P.; Paulheim, H. Rdf2vec: Rdf graph embeddings for data mining. In Proceedings of the International Semantic Web Conference, Hyogo, Japan, 17–21 October 2016; Springer: Cham, Switzerland, 2016; pp. 498–514. [Google Scholar]
  39. Eftimov, T.; Popovski, G.; Valenčič, E.; Seljak, B.K. FoodEx2vec: New foods’ representation for advanced food data analysis. Food Chem. Toxicol. 2020, 138, 111169. [Google Scholar] [CrossRef]
  40. European Food Safety Authority. The food classification and description system FoodEx2 (revision 2). EFSA Supporting Publ. 2015, 12, 804E.
  41. Van der Laan, M.; Pollard, K.; Bryan, J. A new partitioning around medoids algorithm. J. Stat. Comput. Simul. 2003, 73, 575–584. [Google Scholar] [CrossRef] [Green Version]
  42. Rousseeuw, P.J. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. J. Comput. Appl. Math. 1987, 20, 53–65. [Google Scholar] [CrossRef] [Green Version]
  43. The European Food Safety Authority. Available online: https://www.efsa.europa.eu/en/data/food-consumption-data (accessed on 11 May 2020).
  44. European Food Safety Authority. Use of the EFSA Comprehensive European Food Consumption Database in exposure assessment. EFSA J. 2011, 9, 2097. [Google Scholar] [CrossRef]
  45. European Commission Health and Consumers Directorate-General. Guidance Document for Competent Authorities for the Control of Compliance with EU Legislation on: Regulation (EU) No 1169/2011 of the European Parliament and of the Council of 25 October 2011 on the provision of food information to consumers, amending Regulations (EC) No 1924/2006 and (EC) No 1925/2006 of the European Parliament and of the Council, and repealing Commission Directive 87/250/EEC, Council Directive 90/496/EEC, Commission Directive 1999/10/EC, Directive 2000/13/EC of the European Parliament and of the Council, Commission Directives 2002/67/EC and 2008/5/EC and Commission Regulation (EC) No 608/2004. Available online: https://ec.europa.eu/food/sites/food/files/safety/docs/labelling_nutrition-supplements-guidance_tolerances_1212_en.pdf (accessed on 11 May 2020).
  46. European Commission. Regulation (EU) No 1169/2011 of the European Parliament and of the Council of 25 October 2011 on the provision of food information to consumers, amending Regulations (EC) No 1924/2006 and (EC) No 1925/2006 of the European Parliament and of the Council, and repealing Commission Directive 87/250/EEC, Council Directive 90/496/EEC, Commission Directive 1999/10/EC, Directive 2000/13/EC of the European Parliament and of the Council, Commission Directives 2002/67/EC and 2008/5/EC and Commission Regulation (EC) No 608/2004. Off. J. Eur. Union L 2011, 304, 18–63. [Google Scholar]
  47. Korenius, T.; Laurikkala, J.; Järvelin, K.; Juhola, M. Stemming and lemmatization in the clustering of Finnish text documents. In Proceedings of the Thirteenth ACM International Conference on Information and Knowledge Management, Washington, DC, USA, 8–13 November 2004; pp. 625–633. [Google Scholar]
  48. Rehurek, R.; Sojka, P. Gensim—Statistical Semantics In Python; NLP Centre, Faculty of Informatics, Masaryk University: Brno, Czech Republic, 2011. [Google Scholar]
  49. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  50. Eftimov, T.; Korošec, P.; Koroušić Seljak, B. StandFood: Standardization of foods using a semi-automatic system for classifying and describing foods according to FoodEx2. Nutrients 2017, 9, 542. [Google Scholar] [CrossRef] [Green Version]
  51. Devlin, J.; Chang, M.-W.; Lee, K.; Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv 2018, arXiv:1810.04805. [Google Scholar]
  52. Sun, Y.; Wang, S.; Li, Y.; Feng, S.; Chen, X.; Zhang, H.; Tian, X.; Zhu, D.; Tian, H.; Wu, H. Ernie: Enhanced representation through knowledge integration. arXiv 2019, arXiv:1904.09223. [Google Scholar]
Figure 1. Flowchart of the methodology.
Figure 2. Best prediction accuracies for carbohydrates predictions obtained from the embeddings for the English names and Slovene names for each cluster compared to the baseline mean and median for the particular cluster.
Figure 3. Best prediction accuracies for fat predictions obtained from the embeddings for the English names and Slovene names for each cluster compared to the baseline mean and median for the particular cluster.
Figure 4. Best prediction accuracies for protein predictions obtained from the embeddings for the English names and Slovene names for each cluster compared to the baseline mean and median for the particular cluster.
Figure 5. Best prediction accuracies for water predictions obtained from the embeddings for the English names and Slovene names for each cluster compared to the baseline mean and median for the particular cluster.
Figure 6. Best prediction accuracies for each macronutrient obtained from the embeddings for the English and Slovene names compared to the baseline mean and median from the whole dataset.
Table 1. Subset from the dataset used in the experiments (SLO—Slovenian, ENG—English).
SLO Food Name | ENG Food Name | FoodEx2 Code | Energy (kcal) | Water (g) | Fat (g) | Carb (g) | Protein (g)
Zelenjavna rižota s parboiled rižem, sezonsko zelenjavo in repičnim oljem | Vegetable risotto with parboiled rice, seasonal vegetables and rapeseed oil | A041G#F04.A036V | 90.07 | 79.36 | 1.55 | 16.77 | 1.79
Medenjaki iz pirine in ržene moke ter hojevega medu | Gingerbread biscuit made of spelt and rye flour and honey | A00CT#F04.A004H$F04.A003J$F04.A033K | 423.68 | 0.00 | 1.81 | 91.41 | 8.96
Čokoladna rezina Kit Kat | Candies, KIT KAT Wafer Bar | A009Z | 517.87 | 1.63 | 25.99 | 64.59 | 6.51
Zeleni ledeni čaj z medom, arizona | Tea, ready-to-drink, green iced tea, Arizona | A03LD | 27.65 | 93.00 | 0.00 | 6.80 | 0.01
Table 2. Tolerated differences in nutrition content in foods besides food supplements.
Quantity per 100 g | Carbohydrates / Protein / Water | Fat
< 10 g per 100 g | ±2 g | ±1.5 g
10–40 g per 100 g | ±20% | ±20%
> 40 g per 100 g | ±8 g | ±8 g
Table 3. Examples of pre-processed English descriptions.
Original Description | Pre-processed Description
Potatoes, mashed, dehydrated, prepared from flakes without milk, whole milk and butter added | [‘potato’, ‘mashed’, ‘dehydrated’, ‘prepared’, ‘from’, ‘flake’, ‘without’, ‘milk’, ‘whole’, ‘milk’, ‘and’, ‘butter’, ‘added’]
Milk chocolate with 30% cocoa, Gorenjka (250 g) | [‘milk’, ‘chocolate’, ‘with’, ‘30’, ‘cocoa’, ‘gorenjka’]
Table 4. Example instances from each cluster.
Cluster Number | Example Food Products
Cluster 1 | Oil, industrial, mid-oleic, sunflower, principal uses frying and salad dressings; Homemade minced lard, Mesarija Kragelj; Margarine (with added vegetable sterols 0,75g/10g), line Becel pro-activ, Unilever
Cluster 2 | Peanuts, all types, oil-roasted, with salt; Seeds, pumpkin and squash seed kernels, dried; Avocados, raw, California
Cluster 3 | Cheese, processed, 60% fat in dry matter; Yogurt, fruit (peach, cereals), low fat 2.6% milkfat; Baby food, cottage cheese, creamed, fruit (strawberry, banana), FruchtZwerge, Danone
Cluster 4 | Plums, canned, purple, light syrup pack, solids and liquids; Segedin cabbage with pork meat; Buckwheat porridge sautéed with onion and garlic
Cluster 5 | Fried chicken fillet (canola oil, without breadcrumbs); Trout with parsley and garlic sauce; Beef, rib, whole (ribs 6–12), separable lean and fat, trimmed to 1/8 of an inch of fat, all grades, cooked, roasted
Cluster 6 | Fruit tea infusion, with sugar and lemon; Soup made of turnip cabbage, peas and tomato (olive oil, stock); Chicken stew with seasonal vegetables, without roux
Cluster 7 | Fish, salmon, pink, canned, without salt, solids with bone and liquid; Salty anchovies in vegetable oil; Tuna with beans, canned
Cluster 8 | Ham, sliced, regular (approximately 11% fat); Chicken hot dog, pan-fried; Turkey ham, sliced, extra lean, prepackaged or deli-sliced
Cluster 9 | Egg, whole, cooked, scrambled; Fried egg (olive oil); Egg spread
Table 5. Accuracy percentages after k-fold cross-validation on each cluster, obtained with the embeddings for the English names of the food products. Target: C—Carbohydrates, F—Fat, P—Protein, W—Water. For each macronutrient in a given cluster, the overall best performance is the highest value in the row.
Cluster | Target | Word2Vec | GloVe | Doc2Vec | Mean | Median
1 | C | 59.21 | 47.84 | 50.11 | 1.00 | 17.47
1 | F | 44.26 | 35.95 | 49.32 | 5.05 | 10.21
1 | P | 56.37 | 60.32 | 56.95 | 13.16 | 14.26
1 | W | 40.32 | 52.32 | 48.21 | 8.05 | 9.26
2 | C | 34.84 | 34.32 | 33.22 | 10.95 | 13.27
2 | F | 67.22 | 64.69 | 64.69 | 7.93 | 60.55
2 | P | 63.87 | 61.34 | 59.22 | 7.58 | 31.89
2 | W | 50.44 | 52.83 | 52.41 | 17.89 | 19.73
3 | C | 46.51 | 46.18 | 46.98 | 11.13 | 15.74
3 | F | 67.42 | 63.62 | 64.00 | 6.84 | 59.81
3 | P | 69.64 | 65.47 | 70.74 | 8.75 | 58.55
3 | W | 56.68 | 60.85 | 58.70 | 12.18 | 29.83
4 | C | 40.92 | 43.32 | 40.53 | 12.95 | 16.40
4 | F | 68.28 | 66.40 | 66.67 | 4.79 | 62.43
4 | P | 72.50 | 70.85 | 71.71 | 7.23 | 66.07
4 | W | 59.09 | 61.51 | 60.99 | 11.24 | 33.86
5 | C | 46.38 | 37.65 | 46.07 | 9.58 | 15.80
5 | F | 66.12 | 62.38 | 62.38 | 4.57 | 42.43
5 | P | 66.12 | 63.63 | 66.83 | 8.73 | 52.38
5 | W | 49.87 | 48.95 | 53.98 | 12.80 | 21.53
6 | C | 29.46 | 30.55 | 33.30 | 7.90 | 10.24
6 | F | 41.66 | 41.08 | 43.26 | 6.68 | 29.76
6 | P | 53.35 | 54.37 | 55.81 | 15.09 | 20.11
6 | W | 38.01 | 39.69 | 41.28 | 11.03 | 15.45
7 | C | 72.78 | 72.78 | 72.78 | 11.11 | 41.11
7 | F | 42.78 | 48.33 | 53.33 | 5.56 | 11.11
7 | P | 73.89 | 73.89 | 73.89 | 31.67 | 15.00
7 | W | 46.67 | 48.89 | 57.22 | 15.56 | 20.00
8 | C | 58.31 | 51.60 | 55.58 | 0.95 | 21.69
8 | F | 48.27 | 39.74 | 50.17 | 6.58 | 15.15
8 | P | 60.48 | 63.25 | 67.06 | 7.62 | 19.74
8 | W | 41.60 | 48.27 | 49.83 | 11.34 | 11.26
9 | C | 86.36 | 81.82 | 72.73 | 27.27 | 36.36
9 | F | 50.00 | 40.91 | 45.45 | 4.55 | 4.55
9 | P | 77.27 | 72.73 | 63.64 | 36.36 | 31.82
9 | W | 45.45 | 40.91 | 50.00 | 9.09 | 18.18
Table 6. Accuracy percentages after k-fold cross-validation on each cluster, obtained with the embeddings for the Slovene names of the food products. Target: C—Carbohydrates, F—Fat, P—Protein, W—Water. For each macronutrient in a given cluster, the overall best performance is the highest value in the row.
Cluster | Target | Word2Vec | GloVe | Doc2Vec | Mean | Median
1 | C | 61.37 | 54.11 | 52.00 | 1.00 | 17.47
1 | F | 44.26 | 37.00 | 41.26 | 5.05 | 10.21
1 | P | 58.26 | 50.00 | 53.26 | 13.16 | 14.32
1 | W | 34.05 | 37.05 | 33.89 | 8.05 | 9.26
2 | C | 27.47 | 24.43 | 32.15 | 10.95 | 14.00
2 | F | 70.28 | 67.04 | 64.69 | 7.93 | 60.55
2 | P | 63.72 | 60.12 | 59.22 | 7.58 | 31.54
2 | W | 49.96 | 43.28 | 48.38 | 17.89 | 19.55
3 | C | 47.28 | 41.51 | 45.00 | 11.13 | 16.13
3 | F | 67.42 | 63.99 | 63.62 | 6.84 | 59.81
3 | P | 69.27 | 65.83 | 69.20 | 8.75 | 58.55
3 | W | 52.86 | 43.97 | 54.34 | 12.18 | 29.44
4 | C | 34.78 | 28.49 | 40.33 | 12.95 | 16.93
4 | F | 70.13 | 67.74 | 66.40 | 4.79 | 62.43
4 | P | 72.50 | 69.58 | 70.38 | 7.23 | 66.07
4 | W | 54.24 | 47.66 | 55.79 | 11.24 | 33.86
5 | C | 47.63 | 41.40 | 45.95 | 9.58 | 15.80
5 | F | 66.12 | 62.80 | 62.38 | 4.57 | 42.43
5 | P | 66.12 | 64.47 | 64.47 | 8.73 | 52.38
5 | W | 48.18 | 41.48 | 51.02 | 12.80 | 21.12
6 | C | 31.42 | 25.61 | 33.74 | 7.90 | 10.75
6 | F | 39.34 | 34.97 | 44.42 | 6.68 | 29.98
6 | P | 53.36 | 50.73 | 63.13 | 15.09 | 20.33
6 | W | 41.21 | 34.67 | 41.85 | 11.03 | 15.45
7 | C | 72.78 | 67.78 | 72.78 | 11.11 | 41.11
7 | F | 58.33 | 37.78 | 48.33 | 5.56 | 11.11
7 | P | 63.89 | 63.89 | 69.44 | 31.67 | 15.00
7 | W | 46.11 | 36.67 | 41.11 | 15.56 | 20.00
8 | C | 56.41 | 49.91 | 56.45 | 0.95 | 21.69
8 | F | 47.32 | 43.51 | 44.55 | 6.58 | 15.15
8 | P | 64.29 | 59.70 | 61.43 | 7.62 | 19.74
8 | W | 35.11 | 36.80 | 35.11 | 11.34 | 11.26
9 | C | 86.36 | 72.73 | 86.36 | 27.27 | 36.36
9 | F | 50.00 | 31.82 | 50.00 | 4.55 | 4.55
9 | P | 63.64 | 59.09 | 68.18 | 36.36 | 31.82
9 | W | 54.55 | 36.36 | 45.45 | 9.09 | 18.18
Table 7. Embedding and regression algorithms which yielded highest accuracies for each macronutrient prediction in each cluster. Target: C—Carbohydrates, F—Fat, P—Protein, W—Water.
Cluster | Target | Embedding Algorithm (ENG) | Embedding Algorithm (SLO) | Regression Algorithm (ENG) | Regression Algorithm (SLO)
1 | C | Word2Vec CBOW_avg_100_2 | Word2Vec CBOW_avg_50_2 | ElasticNet | Ridge
1 | F | Doc2Vec PV-DM_avg_200_2 | Word2Vec CBOW_avg_50_2 | Lasso | Ridge
1 | P | GloVe_sum_50_10 | Word2Vec CBOW_sum_200_2 | Lasso | Ridge
1 | W | GloVe_avg_50_10 | GloVe_sum_200_2 | ElasticNet | Ridge
2 | C | Word2Vec SG_avg_200_2 | Doc2Vec PV-DBOW_avg_200_2 | Ridge | Ridge
2 | F | Word2Vec CBOW_sum_50_5 | Word2Vec CBOW_avg_100_2 | Lasso | Ridge
2 | P | Word2Vec SG_sum_100_5 | Word2Vec CBOW_avg_100_2 | Ridge | Ridge
2 | W | GloVe_avg_100_2 | Word2Vec SG_avg_200_10 | Ridge | Ridge
3 | C | Doc2Vec PV-DM_avg_50_2 | Word2Vec CBOW_avg_100_2 | Ridge | ElasticNet
3 | F | Word2Vec CBOW_avg_200_2 | Word2Vec CBOW_avg_200_3 | Ridge | Ridge
3 | P | Doc2Vec PV-DBOW_avg_200_10 | Word2Vec CBOW_avg_200_5 | Ridge | Ridge
3 | W | GloVe_avg_200_3 | Doc2Vec PV-DBOW_avg_200_2 | Ridge | ElasticNet
4 | C | GloVe_sum_200_3 | Doc2Vec PV-DBOW_avg_200_5 | Ridge | ElasticNet
4 | F | Word2Vec CBOW_avg_100_5 | Word2Vec CBOW_avg_100_2 | Lasso | Ridge
4 | P | Word2Vec CBOW_avg_50_3 | Word2Vec CBOW_avg_50_2 | Lasso | Ridge
4 | W | GloVe_sum_200_3 | Doc2Vec PV-DBOW_avg_200_5 | Ridge | ElasticNet
5 | C | Word2Vec CBOW_avg_200_2 | Word2Vec CBOW_avg_100_2 | Ridge | Lasso
5 | F | Word2Vec CBOW_avg_200_2 | Word2Vec CBOW_avg_200_3 | Ridge | Ridge
5 | P | Doc2Vec PV-DBOW_sum_200_5 | Word2Vec CBOW_avg_200_5 | ElasticNet | Ridge
5 | W | Doc2Vec PV-DBOW_avg_200_10 | Doc2Vec PV-DBOW_sum_50_3 | Ridge | Lasso
6 | C | Doc2Vec PV-DBOW_sum_200_10 | Doc2Vec PV-DBOW_sum_200_2 | Ridge | Ridge
6 | F | Doc2Vec PV-DBOW_avg_200_10 | Doc2Vec PV-DBOW_avg_200_5 | Ridge | Ridge
6 | P | Doc2Vec PV-DBOW_sum_200_10 | Doc2Vec PV-DBOW_avg_200_2 | Ridge | Ridge
6 | W | Doc2Vec PV-DBOW_avg_200_3 | Doc2Vec PV-DBOW_avg_200_2 | Ridge | Ridge
7 | C | Word2Vec CBOW_sum_50_2 | Word2Vec CBOW_sum_50_2 | Linear | Linear
7 | F | Doc2Vec PV-DM_sum_50_5 | Word2Vec CBOW_avg_100_2 | ElasticNet | Linear
7 | P | Word2Vec SG_avg_200_5 | Doc2Vec PV-DM_sum_50_10 | Linear | ElasticNet
7 | W | Doc2Vec PV-DM_sum_50_3 | Word2Vec SG_sum_100_2 | Linear | Linear
8 | C | Word2Vec CBOW_avg_200_3 | Doc2Vec PV-DBOW_sum_200_2 | Ridge | Ridge
8 | F | Doc2Vec PV-DBOW_avg_100_5 | Word2Vec CBOW_avg_50_2 | Lasso | Ridge
8 | P | Doc2Vec PV-DBOW_sum_50_2 | Word2Vec CBOW_sum_50_10 | ElasticNet | Ridge
8 | W | Doc2Vec PV-DM_sum_100_2 | GloVe_sum_200_2 | Ridge | Ridge
9 | C | Word2Vec SG_sum_200_3 | Word2Vec CBOW_avg_50_5 | Lasso | Linear
9 | F | Word2Vec CBOW_avg_50_5 | Word2Vec SG_sum_200_2 | Linear | Linear
9 | P | Word2Vec CBOW_avg_100_3 | Doc2Vec PV-DBOW_sum_200_3 | Linear | Lasso
9 | W | Doc2Vec PV-DM_sum_50_10 | Word2Vec CBOW_avg_200_5 | Lasso | Linear
Table 8. Embedding and regression algorithms which yielded highest accuracies for each macronutrient prediction on the whole dataset (without clustering). Target: C—Carbohydrates, F—Fat, P—Protein, W—Water.
Target | Embedding Algorithm (ENG) | Embedding Algorithm (SLO) | Regression Algorithm (ENG) | Regression Algorithm (SLO)
C | Doc2Vec PV-DBOW_avg_200_5 | Doc2Vec PV-DBOW_sum_200_3 | Ridge | Ridge
F | Doc2Vec PV-DBOW_avg_200_2 | Doc2Vec PV-DBOW_sum_200_10 | Lasso | ElasticNet
P | Doc2Vec PV-DBOW_avg_200_5 | Doc2Vec PV-DBOW_sum_200_3 | Ridge | Ridge
W | GloVe_avg_200_10 | Doc2Vec PV-DBOW_sum_200_2 | Ridge | Linear
