Article

Extracting Self-Reported COVID-19 Symptom Tweets and Twitter Movement Mobility Origin/Destination Matrices to Inform Disease Models

1 Department of Electrical Engineering and Electronics, University of Liverpool, Liverpool L69 3GJ, UK
2 Computational Biology Facility, University of Liverpool, Liverpool L69 3GJ, UK
3 Public Health England, London NW9 5EQ, UK
4 Department of Computer Science, Universidade Nove de Julho—UNINOVE, Sao Paulo 03155-000, Brazil
* Author to whom correspondence should be addressed.
Information 2023, 14(3), 170; https://doi.org/10.3390/info14030170
Submission received: 28 January 2023 / Revised: 3 March 2023 / Accepted: 5 March 2023 / Published: 7 March 2023

Abstract

The emergence of the novel coronavirus (COVID-19) generated a need to quickly and accurately assemble up-to-date information related to its spread. In this research article, we propose two methods in which Twitter is useful when modelling the spread of COVID-19: (1) machine learning algorithms trained in English, Spanish, German, Portuguese and Italian are used to identify symptomatic individuals derived from Twitter. Using the geo-location attached to each tweet, we map users to a geographic location to produce a time-series of potential symptomatic individuals. We calibrate an extended SEIRD epidemiological model with combinations of low-latency data feeds, including the symptomatic tweets, with death data and infer the parameters of the model. We then evaluate the usefulness of the data feeds when making predictions of daily deaths in 50 US States, 16 Latin American countries, 2 European countries and 7 NHS (National Health Service) regions in the UK. We show that using symptomatic tweets can result in a 6% and 17% increase in mean squared error accuracy, on average, when predicting COVID-19 deaths in US States and the rest of the world, respectively, compared to using solely death data. (2) Origin/destination (O/D) matrices, for movements between seven NHS regions, are constructed by determining when a user has tweeted twice in a 24 h period in two different locations. We show that increasing and decreasing a social connectivity parameter within an SIR model affects the rate of spread of a disease.

1. Introduction

The novel coronavirus (COVID-19) has, at the time of writing, resulted in over 6.88 million deaths and 676 million confirmed cases worldwide [1]. By January 2020, new cases of COVID-19 had been seen throughout Asia, and by the time the World Health Organisation (WHO) declared a global pandemic in March 2020, the disease had spread to over 100 countries. It quickly became imperative to establish reliable data feeds relating to the pandemic, such that researchers and analysts could model the ongoing spread of the disease and inform decision-making by government and public health officials. To facilitate collaboration between researchers and allow published results to be replicated and scrutinised, these data sets and models must be open-source. A well-used interactive dashboard collating total daily counts of confirmed cases and deaths for countries and, in some cases, regions within countries can be found in [2]. The variables presented in the platform are traditionally used to calculate metrics such as the reproduction number (R_t). One method for estimating R_t is to model how the disease spreads through a population using a Susceptible, Infected and Recovered (SIR) model [3]. This method involves splitting the population into the unobservable SIR compartments and allowing a fraction, at every timestep t, to progress to the next compartment. The model consists of three nonlinear ordinary differential equations (ODEs) and a set of parameters which govern how quickly individuals progress through the compartments. The standard SIR model contains two parameters, β and γ, which are the infection and recovery rates, respectively. The resulting metric is vital in understanding both the infection growth rate, or daily rate of new infections, and the number of people, on average, infected by a single infected person, and can be calculated by
R_0 = β / γ.
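To make the SIR dynamics concrete, the three ODEs can be integrated numerically. The sketch below uses a simple forward-Euler scheme; the parameter values (β = 0.3, γ = 0.1) are illustrative assumptions, not values fitted in this work:

```python
import numpy as np

def simulate_sir(beta, gamma, s0, i0, r0, days, dt=0.1):
    """Forward-Euler integration of the standard SIR ODEs.
    Compartments are population fractions, so s + i + r stays ~1."""
    s, i, r = s0, i0, r0
    history = [(s, i, r)]
    for _ in range(int(days / dt)):
        ds = -beta * s * i          # susceptibles becoming infected
        di = beta * s * i - gamma * i
        dr = gamma * i              # infected recovering
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        history.append((s, i, r))
    return np.array(history)

beta, gamma = 0.3, 0.1              # illustrative infection/recovery rates
print("R0 =", beta / gamma)         # basic reproduction number beta/gamma
traj = simulate_sir(beta, gamma, s0=0.99, i0=0.01, r0=0.0, days=160)
```

With R_0 = 3, most of the population is eventually infected, so the final susceptible fraction is far below its starting value.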
The quality of disease metrics is heavily dependent on the model and the ingested data. In the United Kingdom (UK), up until December 2022, a joint effort was undertaken to produce estimates of the R_t number, with notable examples provided in [4]. Different data sets have been used by different institutions. Laboratory-confirmed COVID-19 diagnoses are used in [5], the UK's NHS Pathways data in [6] and hospital admissions data in [7]. The statistical model developed by Moore, Rosato and Maskell [8] contributes to these estimates through the incorporation of death, hospital admission and NHS 111 call data. Aggregated 111 call counts comprise individuals who reported potential COVID-19 symptoms through the NHS Pathways telephone service.
Evaluating short-term forecasts of COVID-19 related statistics is useful to determine the accuracy of a model. A multi-model comparison of predicted deaths, hospital admissions and intensive care unit (ICU) occupancy is given in [9]; deaths, hospital admissions and ICU occupancy in [7]; daily hospital admissions in [10] and short-term forecasting of deaths in [8]. A set of scoring rules for evaluating these short-term forecasts is outlined in [11], with an application to COVID-19 deaths provided in [8].
The latency and reliability of COVID-19 related data sources can vary. Death data can be seen as reliable when compared with confirmed cases derived from positive test results; however, observations of this data are typically delayed from the initial point of infection. Delays also occur between the occurrence and reporting of deaths. The reliability of confirmed cases is limited, as the sampling of those tested varies with time and the reason for testing is often not recorded. In addition, hospital admissions typically occur around 1–2 weeks after infection and so may be considered outdated in relation to the time of initial infection. The extent to which these issues are problematic is likely to vary over time and between countries. For example, reliable, publicly available tests only began to become available a number of months after the outbreak and declaration of the COVID-19 pandemic. As such, information on the spread of the disease was limited and varied between countries. Twitter provides real-time data that overcome the timing limitations of the aforementioned data sources. Correlations between tweets relating to influenza and true influenza counts have been observed in [12,13,14]. It is possible to set up a pipeline for collecting and analysing COVID-19 tweets that can be scaled up to multiple countries in a short amount of time.

1.1. Related Works

Infodemiology and infoveillance [15] refer to the ability to process and analyse data, pertinent to disease outbreaks, that are created and stored digitally in real-time. The availability of these data sets, particularly at the beginning of an outbreak, could provide a noisy but accurate representation of disease dynamics. Prior to the pandemic, tweets relating to influenza-like-illness symptoms were seen to substantially improve models' predictive capacity and to boost nowcasting accuracy by 13% in [16,17], respectively. Models allowing for early warning detection of multiple diseases are proposed in [17,18] through analysis of tweet content in real time. Many research papers use social media to gain valuable information relating to the COVID-19 pandemic. Natural language processing (NLP), in particular determining the sentiment of tweets, is a popular research area. Ref. [19] uses sentiment analysis and topic modelling to extract information from conversations relating to COVID-19. When including these data within forecasting models, they observed a 48.83–51.38% improvement in predicting COVID-19 cases. Large databases of tweets are open-sourced [20,21]. Public sentiment relating to COVID-19 prevention measures is analysed in [22]. Depression trends among individuals were analysed in [23]. Emotion was observed to change from fear to anger during the first stages of the pandemic [24]. Misinformation and conspiracy theories propagated rapidly through the Twittersphere during the pandemic [25]. Machine learning algorithms have been used to automatically detect tweets containing self-reported symptoms mentioned by users [26], with Ref. [27] finding symptoms reported by Twitter users to be similar to those used in a clinical setting. We note that the analysis in [19,22,24,25,26] is conducted with the English language only. Analysis conducted in multiple languages is less common.
Topic detection and sentiment analysis are conducted in the Portuguese and English language in [28] while misinformation was detected in English, Hindi and Bengali [29]. To the best of our knowledge, researchers have yet to use symptomatic tweets in multiple languages to calibrate epidemiological models.
Movement mobility patterns have been derived from anonymised cell phone data [30,31] and Twitter [32,33]. Using movement between different geographic locations has been shown to be an effective way of modelling the spread of disease [31,34,35,36]. During an epidemic, limiting the movement of individuals with measures, such as school closures and national lockdowns, can drive the reproduction number below 1 [37]. In Italy, when analysing mobile phone movement data, less rigid lockdown measures led to an insufficient decrease in COVID-19 cases when compared to a more rigid lockdown [38]. In this paper, we outline how origin/destination (O/D) matrices can be derived from where people tweet and show, by using an epidemiological model, that restricting movement can have an effect on the spread of a disease. To the best of our knowledge, using O/D matrices derived from Twitter movement to inform SIR disease models has yet to be explored.

1.2. Contribution and Structure

The contribution of this paper is as follows: first, we outline how to use machine learning to identify tweets that correspond to COVID-19 related symptoms in multiple languages. We present a comprehensive study of how these symptomatic tweets differ from other open-source data sets when calibrating the extended SEIRD model described in Section 3.1. When incorporating the surveillance data outlined in Section 2.2, the Mean Absolute Error (MAE) and Normalised Estimation Error Squared (NEES) values are calculated for 7-day death forecasts. Second, we outline a method for deriving O/D matrices from Twitter and show how these can be included to better model the spread of a disease. To the best of our knowledge, using O/D matrices derived from Twitter movement to inform SIR disease models has yet to be explored.
We now present the structure of the remainder of the paper. The methodology for extracting symptomatic tweets in real time and a description of other open-source data feeds are outlined in Section 2.2. Methods for creating the O/D matrices are outlined in Section 2.3. The extended SEIRD model for predicting deaths is outlined in Section 3.1 and the SIR model including movement between NHS regions in Section 3.2. The corresponding results are presented in Section 4.1 and Section 4.2, respectively. Concluding remarks and directions for future work are described in Section 5.

2. Data Collection

In this section, the methods for collecting UK NHS region-specific surveillance data and symptomatic tweets are outlined in Section 2.1 and Section 2.2, respectively. The O/D matrices derived from Twitter mobility are included in Section 2.3. Two Twitter API developer credentials were used for data collection, in line with our two objectives: (1) querying on COVID-19 keywords and (2) querying on geo-located tweets.
Note that testing methods and criteria for classifying deaths as COVID-19-related may differ between geographic locations. All data sets and associated code can be found on the CoDatMo GitHub repository [39].

2.1. United Kingdom NHS Region-Specific Surveillance Data

The methods for collecting UK NHS region-specific surveillance data are presented in the following subsections. The references from where the data were obtained are given in Table 1. The NHS regions in the UK support local systems and provide more joined-up and sustainable care for patients through integrated care systems. Every individual born in the UK is entitled to use this public health system.

2.1.1. Deaths

The aggregated death counts contain individuals with COVID-19 as the cause of death on their death certificate or those who died within 60 days of a positive test result.

2.1.2. Hospital Admissions

The aggregated admission counts contain the daily COVID-19 related hospital admissions and the total number of COVID-19 patients.

2.1.3. Zoe App

The aggregated Zoe App counts contain reports of COVID-19 symptoms submitted to a mobile app. The app was developed in 2020 to help track COVID-19 but has since broadened its scope to track other health-related concerns, such as cancer and high blood pressure. Users can report whether they have COVID-19 symptoms, as well as whether they have been tested for COVID-19.

2.1.4. 111 Calls and 111 Online

The aggregated 111 call and 111 online assessment counts contain individuals that reported potential COVID-19 symptoms through the NHS Pathways telephone and online assessment services, respectively. The telephone service allows for individuals to speak to a medical specialist regarding health concerns. The 111 online service provides information regarding where it is best to obtain help for the symptoms provided. During the COVID-19 epidemic, both services provided a method for individuals to report COVID-19 symptoms.

2.2. Symptomatic Tweets

The geographic locations considered when querying on keywords are:
  • US: 50 States;
  • Rest of the world: 2 European and 16 Latin American countries;
  • UK: 7 NHS regions.
Table 1 provides a summary of surveillance data corresponding to each geographical location. Death and positive case data for the US States and the rest of the world (ROW) were downloaded from the dashboard operated by the Johns Hopkins University Center for Systems Science and Engineering (JHU CSSE) [2].

2.2.1. Pre-Processing Tweets

Tweepy [44] is a Python library for accessing the Twitter API. The free Twitter streaming API was used for this research, limiting the number of tweets available for download to 1% of the full stream. We note that the premium API would allow a higher percentage of tweets to be collected. The stream was filtered using 93 keywords in English, German, Italian, Portuguese and Spanish that align with COVID-19 symptoms from the MedDRA database [45]. The list of keywords can be found in [39]. These terms include those associated with fever, cough and anosmia. While we considered other keywords (e.g., “COVID”), we found that symptom-related keywords gave rise to a large number of tweets from people experiencing symptoms. We do recognise that any choice of keywords will inevitably identify some tweets that relate to advice or general discussion of the disease. This motivated us to use machine learning to post-process the output of the keyword-based queries, as discussed further in Section 2.2.2.
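A keyword filter of this kind can be sketched without the live API. The snippet below mimics the streaming track filter with a case-insensitive whole-phrase match; the keyword lists are a tiny illustrative subset, not the study's actual 93 MedDRA-derived terms (those are in the CoDatMo repository [39]):

```python
import re

# Illustrative subset only -- NOT the real keyword list used in the paper.
SYMPTOM_KEYWORDS = {
    "en": ["fever", "dry cough", "loss of smell"],
    "es": ["fiebre", "tos seca"],
    "pt": ["febre", "tosse seca"],
}

def matches_symptom_keywords(text, keywords=SYMPTOM_KEYWORDS):
    """Return the symptom keywords found in a tweet, using a
    case-insensitive whole-phrase match across all languages."""
    text = text.lower()
    hits = []
    for lang_terms in keywords.values():
        for term in lang_terms:
            if re.search(r"\b" + re.escape(term) + r"\b", text):
                hits.append(term)
    return hits

print(matches_symptom_keywords("Day 3 of fever and a dry cough"))
```

In the real pipeline the matched tweets are only candidates; the classifier in Section 2.2.2 then decides whether they are genuine self-reports.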

2.2.2. Symptom Classifier Breakdown

A multi-class support vector machine (SVM) [46] was trained with a set of annotated tweets that were vectorised using a skip-gram model. The annotated tweets were labelled according to the following classes:
  • Unrelated tweet;
  • User currently has symptoms;
  • User had symptoms in the past;
  • Someone else currently has symptoms;
  • Someone else had symptoms in the past.
The total number of tweets mentioning symptoms, given by the sum of tweets in classes 2–5, was calculated for each 24 h period. Geo-tagged tweets were mapped to their location, e.g., corresponding city, via a series of tests using country-specific shapefiles. Previous studies demonstrate that approximately 1.65% of tweets are geo-tagged [47], where the exact position of the tweeter is recorded using longitude and latitude measurements.
For non-geo-tagged tweets, the author’s profile is assessed to ascertain whether they provide an appropriate location. The server was deemed to be offline if any 15 min period within the previous 24 h had no recorded tweets. After checking all 96 15 min periods, the count in each geographical area was multiplied by a correction factor:
reported tweet count = total tweet count × 96 / (96 − downtime periods).
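The downtime correction can be expressed directly as a small helper (a sketch of the formula above, with hypothetical counts):

```python
def corrected_tweet_count(total_tweet_count, downtime_periods):
    """Scale a 24 h tweet count for server downtime: the day is split
    into 96 fifteen-minute periods, and any period with zero recorded
    tweets is treated as downtime."""
    if downtime_periods >= 96:
        raise ValueError("no usable periods in the last 24 h")
    return total_tweet_count * 96 / (96 - downtime_periods)

# e.g. 8 missed periods (2 h offline) inflates a raw count of 1000 by ~9%
print(corrected_tweet_count(1000, 8))
```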
To ensure the labelled tweet data sets used for training and testing were balanced, under- and over-represented classes were randomly up- and down-sampled. A subset of data was used to train the classifier before testing on the remainder. The total numbers of labelled tweets used for training and testing are provided in Table 2. Four metrics outlined in Table 2 were used to evaluate the classifier: the F1 score, accuracy, precision and recall. True positive (TP) and true negative (TN) classifications are outcomes for which the model correctly predicts positive and negative classes, respectively. Similarly, false positive (FP) and false negative (FN) classifications are outcomes for which the model incorrectly predicts positive and negative classes, respectively. Accuracy, precision, recall and the F1 score, which is the harmonic mean of precision and recall, are given as follows:
Accuracy = (TP + TN) / (TP + TN + FP + FN),
Precision = TP / (TP + FP),
Recall = TP / (TP + FN),
F1 = 2 · (Precision · Recall) / (Precision + Recall).
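The four metrics above can be computed directly from confusion-matrix counts; a minimal sketch with hypothetical counts:

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall and F1 from confusion-matrix counts,
    following the standard definitions used to evaluate the classifier."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

print(classification_metrics(tp=80, tn=90, fp=10, fn=20))
```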

2.2.3. Comparison of Tweets and Positive Test Results

Figure 1 shows a comparison between the classified tweets and confirmed positive test results for five US States and one South American country. Both time-series are standardised between 0 and 1 and have been converted to a 7-day rolling average to smooth out short-term fluctuations. It is evident that, at least in the context of these specific examples, the classified tweets do (by eye) follow the trend of positive test results. In some cases, such as Texas and Chile, there seems to be a lag between tweets and positive test results. We suspect there is a reporting delay in these locations. A more rigorous analysis, such as change point detection, could give a stronger indication of how well the trends in the two time-series match. We note that, for some geographic locations, tweets align much less well with the corresponding case counts: we assert that this could be caused by issues with how cases are recorded in each location or by the processing of the tweets.
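The smoothing and scaling used for these comparisons can be sketched as below. This is an illustrative NumPy version (trailing 7-day mean followed by min-max scaling), not the authors' plotting code, and the daily counts are hypothetical:

```python
import numpy as np

def smooth_and_scale(series, window=7):
    """7-day rolling mean followed by min-max scaling to [0, 1], as used
    when comparing tweet and positive-test time-series by eye."""
    series = np.asarray(series, dtype=float)
    kernel = np.ones(window) / window
    smoothed = np.convolve(series, kernel, mode="valid")  # drops edge days
    lo, hi = smoothed.min(), smoothed.max()
    return (smoothed - lo) / (hi - lo)

daily_tweets = [5, 9, 4, 12, 30, 45, 40, 60, 80, 75, 90, 70]
scaled = smooth_and_scale(daily_tweets)
```

Applying the same transform to both series puts them on a common [0, 1] axis, which is what makes the visual trend comparison meaningful.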

2.3. Twitter Mobility Origin Destination Matrices

We now present the data collection processes for the derivation of the O/D matrices.
The flow of individuals travelling from one location to another can be expressed as an M × M matrix, where M is the number of locations in the simulation area. The observation period of the data is 30 April 2020 to 31 May 2020. We divide England into the seven NHS regions, which are treated as separate locations. Tweets with the geo-location feature were collected using the same framework as described in Section 2.2.1; however, different Twitter developer API credentials were used, as tweets were not filtered based on keywords. To determine where an individual tweeted, a shapefile containing coordinates of the boundaries of the seven NHS regions was used.
If an individual tweets twice from two locations, for example, London (Origin) and South West (Destination), a movement is subsequently recorded. Figure 2 depicts each of these movements in the form of an O/D matrix. Locations on the x- and y-axes represent the origin and destination, respectively. Movements within regions, where an individual tweets multiple times in different locations within the same region, have also been collected. These are observed in the diagonal entries of the matrix.
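A simplified version of this aggregation can be sketched as follows. The code counts every consecutive pair of tweets by the same user within the window, with same-region pairs landing on the diagonal; the tweet log is hypothetical, and the real pipeline additionally resolves coordinates against the NHS-region shapefile:

```python
import numpy as np

REGIONS = ["London", "South West", "South East", "Midlands",
           "East of England", "North West", "North East and Yorkshire"]
IDX = {r: i for i, r in enumerate(REGIONS)}

def build_od_matrix(tweet_log):
    """Build an M x M origin/destination matrix from time-ordered
    (user, region) tweet records within one 24 h window: each consecutive
    pair of tweets by the same user records one movement. Rows index the
    destination and columns the origin."""
    od = np.zeros((len(REGIONS), len(REGIONS)), dtype=int)
    last_seen = {}
    for user, region in tweet_log:
        if user in last_seen:
            origin = last_seen[user]
            od[IDX[region], IDX[origin]] += 1  # within-region moves hit the diagonal
        last_seen[user] = region
    return od

log = [("u1", "London"), ("u1", "South West"),
       ("u2", "Midlands"), ("u2", "Midlands")]
od = build_od_matrix(log)
```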

3. Models

In the following section, the model used for making inferences and death predictions when utilising different data feeds is outlined in Section 3.1. The extended SIR disease model catering for movement between different locations is described in Section 3.2.

3.1. Model for Surveillance Data Comparison

In this analysis, we use the statistical model developed by Moore, Rosato and Maskell [8].
The model can be described in two succinct parts. The transmission model (see Section 2(a) of [8]) is an extension of the classical SIR model outlining how individuals within the population move from being susceptible to exposed, then infected to recovered or dead. The model is implemented in the probabilistic programming language Stan [48] and uses a bespoke numerical integrator. Stan allows for statistical modelling and high-performance statistical computation by utilising the high-performance No-U-Turn Sampler (NUTS) [49]. The observation model (see Section 2(b) of [8]) outlines the relationship between the transmission model and the surveillance data feeds in Table 1 during calibration. The data are modelled via the method proposed in [8]. Daily counts of the surveillance data feeds in Table 1 are assumed to follow a negative binomial distribution parameterised by mean x t and over-dispersion parameter ϕ x , such that
x_t^obs ∼ NegativeBinomial(x_t, ϕ_x),
where x is data feed specific.
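For readers reproducing this observation model outside Stan: the mean/over-dispersion parameterisation (Stan's `neg_binomial_2`, with variance μ + μ²/ϕ) maps onto SciPy's (n, p) negative binomial via n = ϕ, p = ϕ/(ϕ + μ). This mapping is standard, but the sketch below is not the authors' code:

```python
from scipy import stats

def neg_binomial_mean_phi(mu, phi):
    """SciPy negative binomial matching the mean/over-dispersion
    parameterisation: mean mu, variance mu + mu**2 / phi."""
    return stats.nbinom(n=phi, p=phi / (phi + mu))

dist = neg_binomial_mean_phi(mu=50.0, phi=10.0)
print(dist.mean(), dist.var())  # 50.0 and 50 + 2500/10 = 300.0
```

A small ϕ therefore inflates the variance well beyond the Poisson case, which is what lets the model absorb noisy daily surveillance counts.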
We refer the reader to [8] for a comprehensive description of the full model.

3.1.1. Computational Experiments

The time series considered begins on 17 February 2020. The start dates of each data feed follow those outlined in Table 1. The terminal time for the US States and the ROW is fixed on 1 February 2021, while, for NHS regions, the terminal time is 7 January 2021. In all cases, forecasts are considered to include seven days.
Similar to the experiments in [8], the analysis was run on the University of Liverpool’s High-Performance Computer (HPC). Each node has two Intel(R) Xeon(R) Gold 6138 CPU @ 2.00 GHz processors, a total of 40 cores and 384 GB of memory. In the following experiments, six independent Markov chains each draw 2000 samples, with the first 1000 discarded as burn-in. Run-time is dependent on the location of the data and the date at which the prediction is made. However, it typically takes 4.5 h per Markov chain for a complete run.
Initially, we only calibrate the model with death data and produce forecasts of seven daily death counts for the geographic locations described in Section 2.2 for the time periods outlined in Table 3. These forecasts are set as the baseline when comparing against forecasts incorporating low-latency data feeds.
We use two metrics to determine the accuracy of the resulting forecasts. First, we calculate the MAE, which shows the average error over a set of predictions, and is given by
MAE = (1/N) Σ_{i=1}^{N} |x_i − y_i|,
where N is the number of predictions and x_i and y_i are the predicted and true number of deaths on day i, respectively. The percentage difference between forecasts using only deaths (MAE_D) and those combining deaths with low-latency data feeds (MAE_DL) is calculated as follows:
MAE % Diff = (MAE_DL − MAE_D) / MAE_D,
where a smaller percentage difference is preferred.
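The MAE and its percentage difference can be computed as below; the example forecasts and death counts are hypothetical, not results from the paper:

```python
import numpy as np

def mae(predicted, true):
    """Mean absolute error over a set of daily death predictions."""
    predicted, true = np.asarray(predicted, float), np.asarray(true, float)
    return float(np.mean(np.abs(predicted - true)))

def mae_pct_diff(mae_deaths_plus_lowlatency, mae_deaths_only):
    """Negative values mean the low-latency feed improved the forecast."""
    return (mae_deaths_plus_lowlatency - mae_deaths_only) / mae_deaths_only

true_deaths = [100, 125, 128]
baseline = mae([110, 120, 130], true_deaths)       # deaths-only model
with_tweets = mae([103, 123, 129], true_deaths)    # deaths + tweets model
print(mae_pct_diff(with_tweets, baseline))
```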
Secondly, we consider the uncertainties associated with the forecasts by assessing the NEES score. This is a popular method in the field of signal processing and tracking [50], recently applied to epidemiological forecasts in [8]. The metric determines whether the estimated variance of forecasts differs from the true variance. If the estimated variance is larger than the true variance, the forecast is over-cautious and if the estimated variance is smaller than the true variance, it is over-confident.
The NEES score is defined by
NEES = (1/N) Σ_{i=1}^{N} (x_i − y_i)^T C_i^{−1} (x_i − y_i),
where C_i is the estimated covariance at day i, as approximated using the variance of the samples for that day. If x_i is D-dimensional, then C_i is a D × D matrix, and the NEES score should equal D if the algorithm is consistent. As such, in assessing the (scalar) death forecasts, the desired NEES value is D = 1.
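A scalar NEES can be computed from posterior forecast samples as follows. This is an illustrative implementation with synthetic data, not the authors' evaluation code:

```python
import numpy as np

def nees(forecast_samples, true_values):
    """Scalar NEES: forecast_samples has shape (num_samples, num_days);
    for each day i, x_i is the forecast mean and C_i the sample variance.
    A consistent forecaster scores ~1; <1 is over-cautious and >1
    over-confident."""
    forecast_samples = np.asarray(forecast_samples, dtype=float)
    true_values = np.asarray(true_values, dtype=float)
    means = forecast_samples.mean(axis=0)
    variances = forecast_samples.var(axis=0, ddof=1)
    return float(np.mean((means - true_values) ** 2 / variances))

# Synthetic check: a calibrated forecaster whose error matches its spread
rng = np.random.default_rng(0)
days, sigma = 500, 5.0
true = rng.uniform(50, 150, size=days)
centres = true + rng.normal(0, sigma, size=days)   # forecast error ~ spread
samples = centres + rng.normal(0, sigma, size=(2000, days))
score = nees(samples, true)
```

Shrinking the sample spread while keeping the same errors drives the score well above 1, reproducing the over-confidence signature discussed in the results.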

3.2. Model for Utilising Origin Destination Matrices

Here, we describe an extension of the discrete time approximation SIR model that includes movement between geographic locations [31,51], building on [52]. The population in location i is denoted P_i. At the beginning of the simulation, P_i is divided into three compartments: susceptible, infected and recovered, denoted S_{i,t}, I_{i,t} and R_{i,t}, respectively, for timestep t. Location j represents the set of locations connected to location i. The origin of the pandemic is simulated at a random location, with a fraction of the susceptible compartment infected. The transmission rate in location i on day t is given by β_{i,t}, while m_{i,j} is the count of individuals travelling from location j to i. The global parameter γ describes the recovery rate.
The proportions of infected and susceptible individuals, and the total populations, at locations j and i at time t are x_{j,t} and y_{i,t}, and N_j and N_i, respectively. The disease spreads via infected individuals travelling according to the O/D matrices in Figure 2. The full extended SIR model is described below:
S_{i,t+1} = S_{i,t} − β_{i,t} S_{i,t} I_{i,t} / N_i − α S_{i,t} (Σ_j m^t_{i,j} x_{j,t} β_{j,t}) / (N_i + Σ_j m^t_{i,j}),
I_{i,t+1} = I_{i,t} + β_{i,t} S_{i,t} I_{i,t} / N_i + α S_{i,t} (Σ_j m^t_{i,j} x_{j,t} β_{j,t}) / (N_i + Σ_j m^t_{i,j}) − γ I_{i,t},
R_{i,t+1} = R_{i,t} + γ I_{i,t}.
The number of infected individuals that move from all locations j to location i and transmit the disease to the susceptible population is given by
Σ_j m^t_{i,j} x_{j,t} β_{j,t}.
Uninfected individuals at location i are infected by individuals at locations j with probability
α S_{i,t} (Σ_j m^t_{i,j} x_{j,t} β_{j,t}) / (N_i + Σ_j m^t_{i,j}).
This rate is dependent on α , which describes the intensity of the movement of individuals and is referred to as the social connectivity parameter.
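One timestep of the movement-extended update can be written as below. This is an illustrative sketch rather than the authors' implementation: m[i, j] is the count of individuals travelling from location j to i, x_j = I_j / N_j is the infected proportion at j, and the two-region populations and flows are purely hypothetical:

```python
import numpy as np

def step(S, I, R, beta, gamma, alpha, m):
    """One timestep of the movement-extended SIR update.
    S, I, R, beta are arrays of length M; m[i, j] is the flow from
    location j to location i; alpha is the social connectivity parameter."""
    N = S + I + R
    x = I / N                                   # infected proportion per location
    inflow = m.sum(axis=1)                      # total arrivals at each location i
    imported = alpha * S * (m @ (x * beta)) / (N + inflow)
    local = beta * S * I / N
    S_new = S - local - imported
    I_new = I + local + imported - gamma * I
    R_new = R + gamma * I
    return S_new, I_new, R_new

# Two-region example: infection starts in region 0 only
S = np.array([990.0, 1000.0]); I = np.array([10.0, 0.0]); R = np.zeros(2)
beta = np.array([0.3, 0.3])
m = np.array([[0.0, 50.0], [40.0, 0.0]])        # m[1, 0]: 40 travel from 0 to 1
for _ in range(5):
    S, I, R = step(S, I, R, beta, gamma=0.1, alpha=0.5, m=m)
```

Setting α = 0 switches off the imported-infection term entirely, which is how the lockdown-style scenarios in the results are generated.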

4. Results

The two sets of results are now outlined. Comparison of the accuracy of death forecasts and findings on the impact of movement on the spread of a disease are presented in Section 4.1 and Section 4.2, respectively.

4.1. Surveillance Data Comparison

The NEES value and MAE percentage difference between the baseline, ingesting solely deaths, and the incorporation of low-latency data feeds for the US States, the ROW, and NHS regions are given in Table A1, Table A2 and Table A3, respectively. For all geographic locations, the results are averaged over the prediction windows described in Table 3. A visual representation of these prediction windows can be seen in Figure 3.
When forecasting deaths using the data in [2], calibrating the model with tests, tweets, and tests and tweets gives average percentage performance increases of 5%, 6% and 5%, respectively, for the US States. The corresponding improvements for the ROW are 6%, 17% and 24%. An example of this improvement is presented in Figure 4 for death predictions in Colombia over the period 25 January 2021–1 February 2021. Considering the mean sample in the plots, outlined in red, incorporating tests and tweets follows the true death trend, outlined in green, more closely than ingesting death data alone, for which the forecast continues to increase despite true deaths falling.
For the US States, the average NEES values are 1.696, 1.409, 1.483 and 1.269 when ingesting solely death data, tweets, tests, and tweets and tests, respectively. The corresponding results for the ROW are 0.433, 0.500, 1.198 and 0.723. As explained in Section 3.1.1, a NEES value of ∼1 is desired, with values <1 and >1 indicating that the forecast is over-cautious or over-confident, respectively. Ingesting any combination of the data feeds provides a NEES value closer to 1 than the death only forecast in both cases.
Results for NHS regions are less consistent. Ingesting hospital admissions, 111 calls and 111 online data sets provide an average increase in performance of 22%, 17% and 22%, respectively. However, tweets and Zoe App data perform less well, with decreases in performance of 2% and 124%, respectively. We perceive that this issue arises because, in these feeds, symptoms are self-diagnosed. Consequently, the counts may include relatively large numbers of people who do not have COVID-19.
NEES values for NHS regions when ingesting solely deaths, hospital admissions, tweets, Zoe App, 111 call and 111 online data are 0.662, 0.682, 1.044, 3.160, 0.916 and 0.912, respectively. These results indicate that, apart from the Zoe App data, for which forecasts are over-confident, ingesting any of the data feeds provides more consistent forecasts. Figure 5 exemplifies this finding. In the top image, the forecast encapsulates almost all true deaths. However, when ingesting the Zoe App data, the forecast only encapsulates two of the seven true deaths, resulting in a NEES value of 6.202, which indicates an over-confident estimate.

4.2. Origin Destination Matrices Analysis

As explained in Section 2.3, a movement is recorded if an individual tweets from two different locations within a 24 h period. The counts are assumed to be a percentage of the true population for the seven NHS regions. Figure 6 depicts these aggregated movements as O/D matrices.
Figure 6 shows the effect of the social connectivity parameter, α , on the spread of a disease. This parameter models the level of contact individuals have with one another when travelling between locations. For example, implementing a lockdown, using a personal car or travelling via public transport will correspond to increasing values of α .
Figure 6 exemplifies the role of α when simulating the disease dynamics. The SIR epidemic curves for England are presented in the top row and the infected curves for each NHS region in the bottom row. Limiting contacts within the population through specification of α = 0.2 results in the disease ceasing by day 15. For α = 0.5, the peak number of infections occurs at approximately day 20 and consists of just over 0.1% of the population. In contrast, when α = 0.9, the peak occurs at approximately day 10 and 0.3% of the population is infected. Simulations of the SIR curves under no movement between NHS regions are also provided in the rightmost column of Figure 6.

5. Conclusions and Future Work

In this paper, we have outlined a method for detecting symptomatic COVID-19 tweets in multiple languages. Calibrating the epidemiological model outlined in Section 3.1 with low-latency data feeds, including symptomatic tweets, provides more accurate and consistent forecasts of daily deaths when compared with using death data alone. We have also shown how to extract movement data from Twitter in the form of O/D matrices. These movement data were utilised in an extended SIR model to better represent the spread of a disease.
Incorporating symptomatic tweets for UK regions does not provide the same level of improvement as for other geographic locations. One reason for this reduced improvement could be that daily counts of tweets for NHS regions are less plentiful than for the US States or the rest of the world. It is possible to pay for a premium Twitter API that allows the user to download a higher percentage of tweets than that used in this study. A second way to potentially increase the hit rate of geo-located tweets is to use natural language processing techniques to estimate the location of the tweet user, such as those outlined in the review [53]. Another direction for future work is to train a more sophisticated classifier such as the Bidirectional Encoder Representations from Transformers (BERT) classifier [54].
Calibrating the model in Section 3.1 with movement data was not explored in this analysis due to the computational effort required. One interesting direction for future work would be to use a sequential Monte Carlo (SMC) sampler [55] in place of the MCMC sampling algorithm. An example of such a sampler, which uses NUTS as the proposal distribution, can be found in [56].

Author Contributions

Conceptualization, S.M. and J.H. (John Harris); methodology, C.R., M.C., J.H. (John Heap) and S.M.; software, C.R., R.E.M., M.C. and J.H. (John Heap); validation, C.R., R.E.M., M.C. and J.H. (John Heap); formal analysis, C.R.; investigation, C.R.; resources, C.R., M.C., J.H. (John Heap) and J.S.; data curation, C.R., M.C., J.H. (John Heap) and J.S.; writing—original draft preparation, C.R.; writing—review and editing, C.R., R.E.M., M.C., J.H. (John Heap), J.H. (John Harris), J.S. and S.M.; visualization, C.R., M.C. and J.H. (John Heap); supervision, J.H. (John Harris), J.S. and S.M.; project administration, S.M.; funding acquisition, S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by a Research Studentship jointly funded by the EPSRC and the ESRC Centre for Doctoral Training on Quantification and Management of Risk and Uncertainty in Complex Systems Environments Grant No. (EP/L015927/1) and an ICASE Research Studentship jointly funded by EPSRC and AWE Grant No. (EP/R512011/1), the EPSRC Centre for Doctoral Training in Distributed Algorithms Grant No. (EP/S023445/1) and the EPSRC through the Big Hypotheses Grant No. (EP/R018537/1).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

All data and code used in this research paper can be found at: https://codatmo.github.io (accessed on 6 March 2023).

Acknowledgments

The authors would like to thank Ovidiu Șerban and Chris Hankin from Imperial College London, and Ronni Bowman and Riskaware for their support and helpful discussions of this work. We would also like to thank the team at the Universidade Nove de Julho—UNINOVE in Sao Paulo, Brazil for the help they provided in labelling the Portuguese tweets. We would also like to thank Breck Baldwin for helping to make progress with CoDatMo.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
O/D  Origin/Destination
NHS  National Health Service
WHO  World Health Organisation
Rt  Reproduction Number
UK  United Kingdom
ICU  Intensive Care Unit
NLP  Natural Language Processing
MAE  Mean Absolute Error
NEES  Normalised Estimation Error Squared
ROW  Rest of the World
JHU CSSE  Johns Hopkins University Center for Systems Science and Engineering
SVM  Support Vector Machine
NUTS  No-U-Turn Sampler
HPC  High-Performance Computer
BERT  Bidirectional Encoder Representations from Transformers

Appendix A

Table A1. The US States: MAE and NEES when using deaths and when using deaths and different low-latency data feeds. Lower MAE diff and NEES∼1 = better. Averaged over the prediction windows in Table 3. Only the English classifier was used.
Geographic Location | Deaths NEES | Tests MAE % Diff | Tests NEES | Twitter MAE % Diff | Twitter NEES | Tests and Twitter MAE % Diff | Tests and Twitter NEES
Alaska | 0.329 | −36 | 0.334 | −29 | 0.301 | −92 | 0.302
Alabama | 0.684 | −29 | 1.874 | −29 | 1.723 | −2 | 1.000
Arkansas | 0.275 | 3 | 0.317 | −1 | 0.288 | −1 | 0.313
Arizona | 0.337 | 20 | 0.334 | 18 | 0.344 | −20 | 0.244
California | 0.611 | 6 | 0.709 | 9 | 0.802 | 5 | 1.206
Colorado | 1.886 | −25 | 0.401 | −41 | 0.457 | 10 | 1.278
Connecticut | 13.406 | −8 | 1.922 | −2 | 0.875 | 21 | 1.459
Delaware | 3.020 | −3 | 0.918 | 16 | 1.046 | 12 | 0.727
Florida | 0.406 | −24 | 0.179 | 13 | 0.353 | −20 | 0.454
Georgia | 0.550 | 9 | 0.325 | 41 | 0.891 | −48 | 0.255
Hawaii | 11.459 | −12 | 28.114 | −4 | 24.695 | 17 | 10.149
Iowa | 19.176 | 5 | 7.720 | 4 | 1.476 | −3 | 1.600
Idaho | 0.914 | 0 | 0.809 | 2 | 1.791 | 7 | 0.986
Illinois | 0.573 | 9 | 0.350 | 13 | 0.319 | −116 | 1.091
Indiana | 0.561 | −17 | 0.652 | −40 | 0.781 | 0 | 0.481
Kansas | 1.021 | 1 | 1.037 | −2 | 1.835 | 1 | 0.488
Kentucky | 0.355 | −4 | 0.374 | 10 | 0.548 | −15 | 0.214
Louisiana | 0.298 | −7 | 0.305 | −2 | 0.341 | 9 | 0.234
Massachusetts | 0.351 | 3 | 0.342 | −3 | 0.365 | 14 | 0.409
Maryland | 0.485 | −3 | 0.619 | 10 | 0.581 | 31 | 0.313
Maine | 0.488 | 1 | 0.567 | −28 | 0.796 | −9 | 0.952
Michigan | 0.592 | −6 | 0.445 | −7 | 0.453 | 4 | 0.850
Minnesota | 0.683 | 9 | 1.019 | 11 | 1.200 | 51 | 0.747
Missouri | 0.810 | −7 | 1.165 | −27 | 1.609 | 20 | 0.475
Mississippi | 0.683 | 12 | 0.721 | 2 | 0.997 | −15 | 0.320
Montana | 5.034 | 4 | 2.244 | −1 | 1.538 | −5 | 5.189
North Carolina | 0.908 | −1 | 0.453 | 9 | 0.877 | −19 | 0.570
North Dakota | 0.513 | −32 | 0.521 | −18 | 0.544 | −8 | 0.661
Nebraska | 0.259 | 5 | 0.253 | 7 | 0.570 | 5 | 0.286
New Hampshire | 0.252 | −74 | 0.240 | −148 | 0.430 | −36 | 0.288
New Jersey | 0.901 | −7 | 0.788 | −6 | 0.926 | 10 | 3.177
New Mexico | 0.832 | −28 | 0.738 | −12 | 0.969 | 0 | 0.489
Nevada | 2.129 | −24 | 0.353 | −12 | 0.425 | −13 | 1.904
New York | 0.496 | 31 | 0.146 | 3 | 0.135 | −17 | 0.418
Ohio | 0.263 | 63 | 0.675 | 54 | 0.468 | 3 | 0.337
Oklahoma | 0.301 | −5 | 0.369 | 0 | 0.621 | 8 | 0.256
Oregon | 0.729 | 0 | 1.032 | −2 | 1.692 | −4 | 0.793
Pennsylvania | 0.411 | −7 | 0.385 | 0 | 0.426 | 10 | 0.402
Rhode Island | 0.609 | −9 | 0.546 | −31 | 0.446 | −2 | 1.699
South Carolina | 2.072 | −3 | 2.157 | −4 | 5.601 | −39 | 0.429
South Dakota | 1.259 | 14 | 1.080 | −2 | 1.089 | 2 | 5.050
Tennessee | 0.794 | 15 | 1.191 | 14 | 1.687 | −11 | 0.600
Texas | 0.585 | 6 | 0.784 | 1 | 0.750 | −71 | 0.706
Utah | 0.499 | −98 | 0.716 | −127 | 1.196 | 13 | 0.632
Virginia | 0.731 | −10 | 0.396 | 6 | 0.864 | 9 | 0.676
Vermont | 0.142 | 59 | 0.300 | −1 | 0.163 | 40 | 0.043
Washington | 0.608 | −8 | 0.561 | 19 | 1.787 | −1 | 0.782
Wisconsin | 0.842 | 6 | 1.028 | 25 | 3.921 | 8 | 0.850
West Virginia | 0.650 | −6 | 0.547 | 2 | 1.042 | 7 | 0.291
Wyoming | 1.939 | 5 | 0.951 | −15 | 1.126 | 25 | 0.395
Average | 1.696 | −5 | 1.409 | −6 | 1.483 | −5 | 1.269
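The metrics reported in Tables A1–A3 can be computed as sketched below, under the assumption that NEES is the squared forecast error normalised by the predictive variance, averaged over the prediction window (values near 1 indicate well-calibrated uncertainty [50]), and that MAE % Diff is the percentage change in MAE relative to the deaths-only model (negative values favour the additional data feed):

```python
import numpy as np

def nees(y_true, y_mean, y_var):
    """Normalised Estimation Error Squared: squared error scaled by
    the forecast variance, averaged over the prediction window."""
    y_true, y_mean, y_var = map(np.asarray, (y_true, y_mean, y_var))
    return float(np.mean((y_true - y_mean) ** 2 / y_var))

def mae(y_true, y_pred):
    """Mean Absolute Error over the prediction window."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def mae_pct_diff(mae_with_feed, mae_deaths_only):
    """Percentage change in MAE versus the deaths-only model; by this
    (assumed) convention, negative values mean the extra feed helped."""
    return 100.0 * (mae_with_feed - mae_deaths_only) / mae_deaths_only
```

For example, a model whose forecasts halve the deaths-only MAE would report a MAE % Diff of −50.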
Table A2. Rest of the World: MAE and NEES when using deaths and when using deaths and different low-latency data feeds. Lower MAE diff and NEES∼1 = better. Averaged over the prediction windows in Table 3. Language column states which classifier was used.
Geographic Location | Language | Deaths NEES | Tests MAE % Diff | Tests NEES | Twitter MAE % Diff | Twitter NEES | Tests and Twitter MAE % Diff | Tests and Twitter NEES
Argentina | Spanish | 0.567 | 3 | 0.695 | −17 | 0.904 | −19 | 0.765
Bolivia | Spanish | 0.339 | −85 | 0.207 | −117 | 0.182 | −118 | 0.195
Brazil | Portuguese | 0.396 | −4 | 0.405 | 11 | 0.578 | 4 | 0.493
Chile | Spanish | 0.371 | 15 | 0.439 | 14 | 0.506 | 10 | 0.425
Colombia | Spanish | 0.154 | 17 | 0.243 | −46 | 0.164 | −115 | 0.223
Costa Rica | Spanish | 0.423 | 6 | 0.583 | 18 | 3.060 | 2 | 0.786
Ecuador | Spanish | 0.156 | −26 | 0.195 | −99 | 0.234 | −69 | 0.234
Guatemala | Spanish | 0.557 | −19 | 0.670 | −31 | 0.815 | −31 | 0.713
Honduras | Spanish | 0.405 | −8 | 0.381 | −27 | 0.915 | −41 | 0.541
Mexico | Spanish | 0.766 | 16 | 0.939 | 11 | 1.100 | 11 | 1.110
Nicaragua | Spanish | 0.091 | −13 | 0.207 | −24 | 1.340 | −22 | 0.364
Panama | Spanish | 0.550 | −20 | 0.421 | −4 | 0.451 | −7 | 0.368
Paraguay | Spanish | 0.535 | 28 | 0.877 | −7 | 2.615 | 8 | 1.473
Peru | Spanish | 0.507 | 33 | 0.103 | 26 | 1.630 | 16 | 0.515
Uruguay | Spanish | 0.619 | 11 | 0.742 | −13 | 0.899 | −7 | 0.643
Venezuela | Spanish | 0.610 | −14 | 0.713 | −49 | 0.890 | −91 | 0.603
Germany | German | 0.379 | 5 | 0.613 | 15 | 2.131 | 14 | 1.570
Italy | Italian | 0.360 | 17 | 0.557 | 29 | 3.149 | 34 | 1.991
Average | | 0.433 | −6 | 0.500 | −17 | 1.198 | −24 | 0.723
Table A3. NHS Regions: MAE and NEES when using deaths and when using deaths and different low-latency data feeds. Lower MAE diff and NEES∼1 = better. Averaged over the prediction windows in Table 3. Only the English classifier was used.
Geographic Location | Deaths NEES | Hospital MAE % Diff | Hospital NEES | Twitter MAE % Diff | Twitter NEES | Zoe App MAE % Diff | Zoe App NEES | 111 Calls MAE % Diff | 111 Calls NEES | 111 Online MAE % Diff | 111 Online NEES
East of England | 0.435 | −13 | 0.419 | −7 | 0.655 | 38 | 2.908 | −15 | 0.820 | −19 | 0.795
London | 0.878 | −36 | 0.666 | −7 | 1.163 | 131 | 3.150 | −43 | 0.750 | −47 | 0.754
Midlands | 0.635 | −16 | 0.466 | 13 | 0.569 | 132 | 3.330 | −19 | 0.418 | −47 | 0.404
North East and Yorkshire | 0.753 | 5 | 1.188 | −4 | 0.824 | 153 | 2.325 | −16 | 0.860 | −14 | 0.888
North West | 0.735 | −1 | 0.756 | 17 | 1.408 | 129 | 3.285 | −25 | 0.932 | −25 | 0.934
South East | 0.652 | −24 | 0.805 | −3 | 1.255 | 126 | 4.390 | 8 | 1.018 | 6 | 0.957
South West | 0.545 | −69 | 0.474 | 2 | 1.432 | 160 | 2.729 | −8 | 1.617 | −6 | 1.653
Average | 0.662 | −22 | 0.682 | 2 | 1.044 | 124 | 3.160 | −17 | 0.916 | −22 | 0.912

References

  1. Coronavirus Disease 2019. Available online: https://www.google.com/search?q=covid-19+cases+worldwide&rlz=1C1CHBF_enGB763GB763&sxsrf=AJOqlzVAHRTMaItK2GPe9r5WtVyiju1d9g%3A1677849490518&ei=kvMBZO6lH4SW8gL377G4Dg&ved=0ahUKEwjutvm27L_9AhUEi1wKHfd3DOcQ4dUDCA8&uact=5&oq=covid-19+cases+worldwide&gs_lcp=Cgxnd3Mtd2l6LXNlcnAQAzIFCAAQgAQyBQgAEIAEMgYIABAWEB4yBggAEBYQHjIGCAAQFhAeMgYIABAWEB4yBggAEBYQHjIGCAAQFhAeMgYIABAWEB4yBggAEBYQHjoKCAAQRxDWBBCwAzoECAAQQ0oECEEYAFDLBFjOEWCFEmgBcAB4AIABWIgB8QSSAQE5mAEAoAEByAEIwAEB&sclient=gws-wiz-serpt (accessed on 3 March 2023).
  2. Dong, E.; Du, H.; Gardner, L. An interactive web-based dashboard to track COVID-19 in real time. Lancet Infect. Dis. 2020, 20, 533–534. [Google Scholar] [CrossRef] [PubMed]
  3. Kermack, W.O.; McKendrick, A.G. A contribution to the mathematical theory of epidemics. Proc. R. Soc. London. Ser. A Contain Pap. Math. Phys. Charact. 1927, 115, 700–721. [Google Scholar]
  4. Reproduction Number (R) and Growth Rate: Methodology. Available online: https://www.gov.uk/government/publications/reproduction-number-r-and-growth-rate-methodology/reproduction-number-r-and-growth-rate-methodology (accessed on 1 October 2021).
  5. Birrell, P.; Blake, J.; Van Leeuwen, E.; Gent, N.; De Angelis, D. Real-time nowcasting and forecasting of COVID-19 dynamics in England: The first wave. Philos. Trans. R. Soc. B 2021, 376, 20200279. [Google Scholar] [CrossRef] [PubMed]
  6. Leclerc, Q.J.; Nightingale, E.S.; Abbott, S.; Jombart, T. Analysis of temporal trends in potential COVID-19 cases reported through NHS Pathways England. Sci. Rep. 2021, 11, 34053254. [Google Scholar] [CrossRef]
  7. Keeling, M.J.; Dyson, L.; Guyver-Fletcher, G.; Holmes, A.; Semple, M.G.; Investigators, I.; Tildesley, M.J.; Hill, E.M. Fitting to the UK COVID-19 outbreak, short-term forecasts and estimating the reproductive number. Stat. Methods Med. Res. 2022, 2022, 09622802211070257. [Google Scholar] [CrossRef]
  8. Moore, R.E.; Rosato, C.; Maskell, S. Refining epidemiological forecasts with simple scoring rules. Philos. Trans. R. Soc. A 2022, 380, 20210305. [Google Scholar] [CrossRef]
  9. Funk, S.; Abbott, S.; Atkins, B.D.; Baguelin, M.; Baillie, J.K.; Birrell, P.; Blake, J.; Bosse, N.I.; Burton, J.; Carruthers, J.; et al. Short-term forecasts to inform the response to the Covid-19 epidemic in the UK. MedRxiv 2020. [Google Scholar] [CrossRef]
  10. Overton, C.E.; Pellis, L.; Stage, H.B.; Scarabel, F.; Burton, J.; Fraser, C.; Hall, I.; House, T.A.; Jewell, C.; Nurtay, A.; et al. EpiBeds: Data informed modelling of the COVID-19 hospital burden in England. PLoS Comput. Biol. 2022, 18, e1010406. [Google Scholar] [CrossRef]
  11. Czado, C.; Gneiting, T.; Held, L. Predictive model assessment for count data. Biometrics 2009, 65, 1254–1261. [Google Scholar] [CrossRef]
  12. Aramaki, E.; Maskawa, S.; Morita, M. Twitter catches the flu: Detecting influenza epidemics using Twitter. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, Edinburgh, UK, 27–31 July 2011; pp. 1568–1576. [Google Scholar]
  13. Aslam, A.A.; Tsou, M.H.; Spitzberg, B.H.; An, L.; Gawron, J.M.; Gupta, D.K.; Peddecord, K.M.; Nagel, A.C.; Allen, C.; Yang, J.A.; et al. The reliability of tweets as a supplementary method of seasonal influenza surveillance. J. Med. Internet Res. 2014, 16, e3532. [Google Scholar] [CrossRef]
  14. Broniatowski, D.A.; Paul, M.J.; Dredze, M. National and local influenza surveillance through Twitter: An analysis of the 2012–2013 influenza epidemic. PLoS ONE 2013, 8, e83672. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Eysenbach, G. Infodemiology and infoveillance: Framework for an emerging set of public health informatics methods to analyze search, communication and publication behavior on the Internet. J. Med. Internet Res. 2009, 11, e1157. [Google Scholar] [CrossRef] [PubMed]
  16. Achrekar, H.; Gandhe, A.; Lazarus, R.; Yu, S.H.; Liu, B. Predicting flu trends using twitter data. In Proceedings of the 2011 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Toronto, ON, Canada, 10–15 April 2011; pp. 702–707. [Google Scholar]
  17. Șerban, O.; Thapen, N.; Maginnis, B.; Hankin, C.; Foot, V. Real-time processing of social media with SENTINEL: A syndromic surveillance system incorporating deep learning for health classification. Inf. Process. Manag. 2019, 56, 1166–1184. [Google Scholar]
  18. Espinosa, L.; Wijermans, A.; Orchard, F.; Höhle, M.; Czernichow, T.; Coletti, P.; Hermans, L.; Faes, C.; Kissling, E.; Mollet, T. Epitweetr: Early warning of public health threats using Twitter data. Eurosurveillance 2022, 27, 2200177. [Google Scholar] [CrossRef]
  19. Lamsal, R.; Harwood, A.; Read, M.R. Twitter conversations predict the daily confirmed COVID-19 cases. Appl. Soft Comput. 2022, 129, 109603. [Google Scholar] [CrossRef]
  20. Thakur, N. A Large-Scale Dataset of Twitter Chatter about Online Learning during the Current COVID-19 Omicron Wave. Data 2022, 7, 109. [Google Scholar] [CrossRef]
  21. Thakur, N.; Han, C.Y. An Exploratory Study of Tweets about the SARS-CoV-2 Omicron Variant: Insights from Sentiment Analysis, Language Interpretation, Source Tracking, Type Classification, and Embedded URL Detection. COVID 2022, 2, 1026–1049. [Google Scholar] [CrossRef]
  22. Medford, R.J.; Saleh, S.N.; Sumarsono, A.; Perl, T.M.; Lehmann, C.U. An “infodemic”: Leveraging high-volume Twitter data to understand early public sentiment for the coronavirus disease 2019 outbreak. In Proceedings of the Open Forum Infectious Diseases; Oxford University Press: Oxford, MI, USA, 2020; Volume 7, p. ofaa258. [Google Scholar]
  23. Zhang, Y.; Lyu, H.; Liu, Y.; Zhang, X.; Wang, Y.; Luo, J. Monitoring depression trends on twitter during the COVID-19 pandemic: Observational study. JMIR Infodemiol. 2021, 1, e26769. [Google Scholar] [CrossRef]
  24. Lwin, M.O.; Lu, J.; Sheldenkar, A.; Schulz, P.J.; Shin, W.; Gupta, R.; Yang, Y. Global sentiments surrounding the COVID-19 pandemic on Twitter: Analysis of Twitter trends. JMIR Public Health Surveill. 2020, 6, e19447. [Google Scholar] [CrossRef]
  25. Sharma, K.; Seo, S.; Meng, C.; Rambhatla, S.; Liu, Y. COVID-19 on social media: Analyzing misinformation in twitter conversations. arXiv 2020, arXiv:2003.12309. [Google Scholar]
  26. Al-Garadi, M.A.; Yang, Y.C.; Lakamana, S.; Sarker, A. A Text Classification Approach for the Automatic Detection of Twitter Posts Containing Self-Reported COVID-19 Symptoms. 2020. Available online: https://openreview.net/forum?id=xyGSIttHYO (accessed on 6 March 2023).
  27. Sarker, A.; Lakamana, S.; Hogg-Bremer, W.; Xie, A.; Al-Garadi, M.A.; Yang, Y.C. Self-reported COVID-19 symptoms on Twitter: An analysis and a research resource. J. Am. Med. Inform. Assoc. 2020, 27, 1310–1315. [Google Scholar] [CrossRef] [PubMed]
  28. Garcia, K.; Berton, L. Topic detection and sentiment analysis in Twitter content related to COVID-19 from Brazil and the USA. Appl. Soft Comput. 2021, 101, 107057. [Google Scholar] [CrossRef] [PubMed]
  29. Kar, D.; Bhardwaj, M.; Samanta, S.; Azad, A.P. No rumours please! A multi-indic-lingual approach for COVID fake-tweet detection. In Proceedings of the 2021 Grace Hopper Celebration India (GHCI), Bangalore, India, 18 January–3 February 2021; pp. 1–5. [Google Scholar]
  30. Badr, H.S.; Du, H.; Marshall, M.; Dong, E.; Squire, M.M.; Gardner, L.M. Association between mobility patterns and COVID-19 transmission in the USA: A mathematical modelling study. Lancet Infect. Dis. 2020, 20, 1247–1254. [Google Scholar] [CrossRef] [PubMed]
  31. Goel, R.; Sharma, R. Mobility based sir model for pandemics-with case study of covid-19. In Proceedings of the 2020 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), The Hague, The Netherlands, 7–10 December 2020; pp. 110–117. [Google Scholar]
  32. Osorio-Arjona, J.; García-Palomares, J.C. Social media and urban mobility: Using twitter to calculate home-work travel matrices. Cities 2019, 89, 268–280. [Google Scholar] [CrossRef]
  33. Huang, X.; Li, Z.; Jiang, Y.; Li, X.; Porter, D. Twitter reveals human mobility dynamics during the COVID-19 pandemic. PLoS ONE 2020, 15, e0241957. [Google Scholar] [CrossRef]
  34. Lombardi, A.; Amoroso, N.; Monaco, A.; Tangaro, S.; Bellotti, R. Complex Network Modelling of Origin–Destination Commuting Flows for the COVID-19 Epidemic Spread Analysis in Italian Lombardy Region. Appl. Sci. 2021, 11, 4381. [Google Scholar] [CrossRef]
  35. Gómez, S.; Fernández, A.; Meloni, S.; Arenas, A. Impact of origin-destination information in epidemic spreading. Sci. Rep. 2019, 9, 2315. [Google Scholar] [CrossRef] [Green Version]
  36. Kondo, K. Simulating the impacts of interregional mobility restriction on the spatial spread of COVID-19 in Japan. Sci. Rep. 2021, 11, 18951. [Google Scholar] [CrossRef]
  37. Flaxman, S.; Mishra, S.; Gandy, A.; Unwin, H.J.T.; Mellan, T.A.; Coupland, H.; Whittaker, C.; Zhu, H.; Berah, T.; Eaton, J.W.; et al. Estimating the effects of non-pharmaceutical interventions on COVID-19 in Europe. Nature 2020, 584, 257–261. [Google Scholar] [CrossRef]
  38. Vinceti, M.; Filippini, T.; Rothman, K.J.; Ferrari, F.; Goffi, A.; Maffeis, G.; Orsini, N. Lockdown timing and efficacy in controlling COVID-19 using mobile phone tracking. EClinicalMedicine 2020, 25, 100457. [Google Scholar] [CrossRef]
  39. CoDatMo. 2021 Welcome to the CoDatMo Site. Available online: https://codatmo.github.io (accessed on 1 October 2021).
  40. UK Government. 2021 Coronavirus (COVID-19) in the UK. Available online: https://coronavirus.data.gov.uk/details/deaths (accessed on 1 October 2021).
  41. UK Government. 2021 Coronavirus (COVID-19) in the UK. Available online: https://coronavirus.data.gov.uk/details/healthcare (accessed on 1 October 2021).
  42. Zoe App: COVID-Public-Data. Available online: https://console.cloud.google.com/storage/browser/covid-public-data;tab=objects?prefix=&forceOnObjectsSortingFiltering=false (accessed on 1 October 2021).
  43. Potential Coronavirus (COVID-19) Symptoms Reported through NHS Pathways and 111 Online. Available online: https://digital.nhs.uk/data-and-information/publications/statistical/mi-potential-covid-19-symptoms-reported-through-nhs-pathways-and-111-online/latest (accessed on 1 October 2021).
  44. Roesslein, J. Tweepy Documentation. 2009, Volume 5, p. 724. Available online: http://tweepy.readthedocs.io/en/v3 (accessed on 8 May 2012).
  45. COVID-19 Terms and MedDRA. Available online: https://www.meddra.org/COVID-19-terms-and-MedDRA (accessed on 1 October 2021).
  46. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  47. Leetaru, K.; Wang, S.; Cao, G.; Padmanabhan, A.; Shook, E. Mapping the global Twitter heartbeat: The geography of Twitter. First Monday 2013. Available online: https://journals.uic.edu/ojs/index.php/fm/article/view/4366 (accessed on 1 October 2021). [CrossRef]
  48. Carpenter, B.; Gelman, A.; Hoffman, M.D.; Lee, D.; Goodrich, B.; Betancourt, M.; Brubaker, M.; Guo, J.; Li, P.; Riddell, A. Stan: A probabilistic programming language. J. Stat. Softw. 2017, 76, 1430202. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  49. Hoffman, M.D.; Gelman, A. The No-U-Turn sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo. J. Mach. Learn. Res. 2014, 15, 1593–1623. [Google Scholar]
  50. Chen, Z.; Heckman, C.; Julier, S.; Ahmed, N. Weak in the NEES?: Auto-tuning Kalman filters with Bayesian optimization. In Proceedings of the 2018 21st International Conference on Information Fusion (FUSION), Cambridge, UK, 10–13 July 2018; pp. 1072–1079. [Google Scholar]
  51. Modelling the Coronavirus Epidemic in a City with Python. Available online: https://towardsdatascience.com/modelling-the-coronavirus-epidemic-spreading-in-a-city-with-python-babd14d82fa2 (accessed on 24 October 2022).
  52. Wesolowski, A.; zu Erbach-Schoenberg, E.; Tatem, A.J.; Lourenço, C.; Viboud, C.; Charu, V.; Eagle, N.; Engø-Monsen, K.; Qureshi, T.; Buckee, C.O.; et al. Multinational patterns of seasonal asymmetry in human movement influence infectious disease dynamics. Nat. Commun. 2017, 8, 2069. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  53. Huang, C.Y.; Tong, H.; He, J.; Maciejewski, R. Location Prediction for Tweets. Front. Big Data 2019, 2, 5. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  54. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv 2018, arXiv:1810.04805. [Google Scholar]
  55. Del Moral, P.; Doucet, A.; Jasra, A. Sequential monte carlo samplers. J. R. Stat. Soc. Ser. (Statist. Methodol.) 2006, 68, 411–436. [Google Scholar] [CrossRef] [Green Version]
  56. Devlin, L.; Horridge, P.; Green, P.L.; Maskell, S. The No-U-Turn sampler as a proposal distribution in a sequential Monte Carlo sampler with a near-optimal L-kernel. arXiv 2021, arXiv:2108.02498. [Google Scholar]
Figure 1. Plot of 7-day rolling average and standardised daily counts of positive COVID-19 cases (blue) and self-reported symptomatic tweets (red) for different US States and one South American country.
Figure 2. Heat-maps of origin destination matrices derived from Twitter for NHS regions. Locations on the x- and y-axes represent the origin and destination, respectively.
Figure 3. Death forecasts in Florida (left) and Georgia (right). The first, second and third prediction windows outlined in Table 3 are presented in the first, second and third rows, respectively. Confidence intervals of 1 standard deviation from the mean are given by the orange ribbon, the mean sample by the red line and the beginning of the prediction period by the vertical blue dashed line. True deaths are given by the black and green dots.
Figure 4. Colombian death forecasts for combinations of data sets. Confidence intervals of 1 standard deviation from the mean given by the orange ribbon, the mean sample given by the red line and the beginning of the prediction period by the vertical blue dashed line. True deaths are given by the black and green dots.
Figure 5. London death forecasts for death and 111 call data (top) and death and Zoe App data (bottom). Confidence intervals of 1 standard deviation from the mean given by the orange ribbon, the mean sample given by the red line and the beginning of the prediction period by the vertical blue dashed line. True deaths are given by the black and green dots.
Figure 6. (Top row): Susceptible, Infected and Recovered epidemic curves for England with different values of the social connectivity parameters and with no movement between regions. (Bottom row): The infected curves for the different NHS regions for different social connectivity parameters and no movement between regions.
Table 1. A description of the data feeds used per geographic location, the start date used in the simulations and where they were obtained.
Geographic Location | Data Feed | Start Date | Reference
US States and the rest of the world | Deaths | 24 March 2020 | [2]
US States and the rest of the world | Tests | 1 March 2020 | [2]
US States and the rest of the world | Twitter | 13 April 2020 | Section 2.2
UK NHS Regions | Deaths | 24 March 2020 | [40]
UK NHS Regions | Hospital admissions | 19 March 2020 | [41]
UK NHS Regions | Twitter | 9 April 2020 | Section 2.2
UK NHS Regions | Zoe app | 12 May 2020 | [42]
UK NHS Regions | 111 calls | 18 March 2020 | [43]
UK NHS Regions | 111 online | 18 March 2020 | [43]
Table 2. Testing, training and performance measures of the machine learning classifiers in different languages.
Language | Training Size | Testing Size | F1 | Accuracy | Precision | Recall
English | 1105 | 195 | 0.85 | 0.85 | 0.85 | 0.85
German | 412 | 260 | 0.89 | 0.89 | 0.90 | 0.89
Italian | 254 | 260 | 0.97 | 0.96 | 0.97 | 0.96
Portuguese | 3507 | 619 | 0.77 | 0.77 | 0.78 | 0.80
Spanish | 1530 | 270 | 0.82 | 0.85 | 0.82 | 0.85
Table 3. Prediction windows for the US States and the rest of the world, and NHS regions.
US States and the Rest of the World | NHS Regions
9 July 2020–16 July 2020 | 11 November 2020–18 November 2020
17 October 2020–24 October 2020 | 21 November 2020–28 November 2020
25 January 2021–1 February 2021 | 1 December 2020–8 December 2020
- | 11 December 2020–18 December 2020
- | 21 December 2020–28 December 2020
- | 31 December 2020–7 January 2021
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
