Article

To Google or Not: Differences on How Online Searches Predict Names and Faces

by Carmen Moret-Tatay 1,*, Abigail G. Wester 2 and Daniel Gamermann 3
1 Escuela de Doctorado, Universidad Católica de Valencia San Vicente Mártir, San Agustín 3, Esc. A, Entresuelo 1, 46002 València, Spain
2 Department of Psychology, Neuroscience and Languages, Regis University, 3333 Regis Blvd, Denver, CO 80221, USA
3 Instituto de Física, Universidade Federal do Rio Grande do Sul (UFRGS), Av. Bento Gonçalves 9500, 15051 CEP 91501-970, Porto Alegre, Brazil
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(11), 1964; https://doi.org/10.3390/math8111964
Submission received: 20 September 2020 / Revised: 19 October 2020 / Accepted: 28 October 2020 / Published: 5 November 2020
(This article belongs to the Special Issue Approximation Theory and Methods 2020)

Abstract: Word and face recognition are processes of interest for a large number of fields, including both clinical psychology and computational modeling. The research examined here aims to evaluate the ability of online frequencies to predict both face and word recognition by examining the stability of these processes over a given period of time. The study further examines the differences between traditional frequency theories and current contextual diversity approaches. Reaction times were analyzed through a logarithmic transformation as well as a Bayesian approach, and the Bayes factor notation was employed as an additional test to support the evidence provided by the data. Although differences between face and name recognition were found, the results suggest that latencies for both face and name recognition are stable over a period of six months and that online news frequencies better predict reaction times under both the classical frequentist and the Bayesian analyses. These findings support the use of the contextual diversity approach.

1. Introduction

How neural pathways relate to both face and word recognition is a subject of interest and debate across a variety of literatures, from clinical psychology to computational modeling. Even though both word and face recognition are associated with the fusiform gyrus, the nature of each process is remarkably different. Face recognition is an innate ability for human beings, whereas word recognition must be learned through an intricate multiyear process [1]. In this way, prior literature has explored how these processes might vary depending on the nature of variables inherent to word or face stimuli, such as size, position or inversions, features, emotional valence, and lexical or sublexical factors, among others [2,3,4,5].
One of the most robust effects in word recognition is the word frequency effect [6]: more frequent words (e.g., “mother”) are recognized faster than less frequent words (e.g., “platypus”). Likewise, frequencies drawn from a lexical corpus of a language can improve the prediction of word recognition. Initially, these corpora were based on printed texts such as popular books or academic journals, but they have since become considerably more complex [7]. The digital era, and more precisely internet-based technologies, has affected our reading processes, with an emerging preference for digital texts [8]. The explosion of search engines allows anyone with an Internet connection to search for a vast range of information on the World Wide Web. Not surprisingly, some research has examined this effect under the hypothesis that it could lead to a more ecological result that reflects the digital era. The literature has shown how frequencies from movie subtitles [9], Twitter and blogs [7], or even online searches such as Google correlate with reaction times in lexical decision tasks much like the already existing frequency norms [10]. Moreover, Twitter has been employed in more complex and advanced models related to neural approaches in the field [11]. These approaches share an underlying hypothesis: frequency reflects word repetition, so more frequently repeated words are assumed to be better represented in our lexicon. The literature has also addressed the role of contextual diversity with promising results, examining how word frequency might be confounded with the number of different contexts in which a word can be found [12,13].
Another variable of interest in how frequencies or context predict visual recognition is the specific time at which the experiment is run. In other words, the use of words is not a static process: research highlights the dynamic nature of visual recognition, which can therefore vary over time [14,15]. In this sense, the frequency at the moment the experiment is carried out could also contribute to greater ecological validity by offering stimuli more representative of everyday language use. Frequencies from Google are of particular interest because they allow for a comparison between word and face recognition: a stimulus can be searched by its name or as an image, each providing a search frequency. Moreover, googling not only provides frequencies based on online searches; it is also related to news. Online searches and news might therefore provide a more realistic scenario for face and word frequency and contextual diversity. Individual characteristics of a participant's background are also thought to be an important factor in visual recognition. In this way, the literature has pointed out that individuals from small hometowns show relatively poor face recognition ability, as measured by the Cambridge Face Memory Test (CFMT) [16,17]. This suggests that the number of faces present in an individual's visual environment might be related to that individual's face recognition ability.
The aim of this study is to understand the differences between word and face recognition. To this end, differences across time, as well as the role of online frequency and context in predicting recognition latencies, were examined for faces and names. This last goal might clarify the extent to which the robust effects of word recognition extend to face recognition, a process located in the same area of the brain. In order to explore this further, a simple presentation/discrimination task was selected, employing a selection of faces and names of international celebrities across time and different populations. Faces and names of famous celebrities provide a scenario that allows for the examination of differences between the nature of both types of stimuli, as well as the role of their frequencies in online searches or news. Furthermore, comparing these stimuli is a common strategy in the previous literature [18,19,20,21].
This study examines whether frequencies from online search engines predict word and face recognition and evaluates the similarities and differences between these processes. In order to assess the importance of the time at which a word and/or face processing experiment is run, a second, longitudinal experiment over an interval of six months was carried out. The second experiment included two different samples to assess the role of place of origin on word and face recognition. Lastly, a Bayesian approach was conducted to complement the traditional analysis described in prior literature [22,23]. Inferences under this technique have attracted the attention of applied fields such as psychology, biology, and econometrics [24,25,26,27,28]. Furthermore, on a pragmatic level, several advantages over traditional approaches have been described, such as the ability to quantify uncertainty about effect sizes more easily [29]. Therefore, the Bayes factor notation (BF10) was employed to weigh the evidence for H1 over H0, where, according to contextual theories, online news searches might better predict recognition times, a measure that can be non-normally distributed [30].

2. Materials and Methods

2.1. Participants

Two experiments were carried out. In Experiment I, a total of 16 Spanish university students volunteered to participate in a first measure (PRE) and a second one (POST) six months later. The same task was employed for both measures in order to assess the stability of the results over time. In Experiment II, a sample of 40 Spanish and 40 North American university students, with no history or evidence of neurological or psychiatric disease, volunteered to participate. They were selected to show adequate variation in demographic characteristics (thereby controlling for age and level of education) and performed the same experiment as the participants in Experiment I.
The experimental studies were carried out in accordance with the Declaration of Helsinki and approved by the University ethical committee (UCV/2017-2018/31). Participants gave written consent to participate in the study.

2.2. Stimuli

The procedure for selecting celebrities was similar to that used in previous literature [21]. All stimuli, comprising a total of 28 names and 28 faces of celebrities, were randomly presented in two counterbalanced blocks according to the type of stimulus. These were presented in black and white, as done in prior literature [31]. Online frequencies were obtained through the API that developers use to interact programmatically with Google tools, employing the same Python script as in previous literature [10]. Using the API Client Library for Python, we queried the Google Custom Search Engine (CSE) to obtain, for each term in a list, the number of search results returned by Google's CSE API.
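For illustration, the snippet below is a minimal sketch of this kind of query with the Google API Client Library for Python. It is not the authors' exact script [10]; API_KEY, CSE_ID, and the example names are placeholders, and the news counts would presumably come from a second engine (a different CSE_ID) restricted to news sources.
```python
# A minimal sketch (not the authors' exact script [10]) of querying result
# counts with the Google API Client Library for Python. API_KEY, CSE_ID and
# the example names are placeholders; the news frequencies would presumably
# use a second engine (a different CSE_ID) restricted to news sources.
from googleapiclient.discovery import build

API_KEY = "YOUR_API_KEY"   # placeholder credential
CSE_ID = "YOUR_CSE_ID"     # placeholder Custom Search Engine id

def result_count(term, api_key=API_KEY, cse_id=CSE_ID):
    """Return the estimated number of Google results for a search term."""
    service = build("customsearch", "v1", developerKey=api_key)
    response = service.cse().list(q=term, cx=cse_id, num=1).execute()
    # The total is reported as a string inside searchInformation.
    return int(response["searchInformation"]["totalResults"])

celebrities = ["Angelina Jolie", "Rafael Nadal"]   # illustrative items only
frequencies = {name: result_count(name) for name in celebrities}
```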

2.3. Procedure

Participants were shown images or names and were instructed to identify the face or name as a celebrity, or to reject the stimulus if it was unknown. Both types of stimuli were related to each other: the face of a given celebrity was presented in one block and their name in another. Two blocks were therefore developed and counterbalanced for the name and face recognition tasks in order to avoid any participant bias related to the order of presentation.
Specifically, on the notebook keyboard, the M key was marked with a green label to indicate where participants had to press for a target stimulus, and the Z key was marked in red to indicate the response for a distractor stimulus. A laptop with a Windows operating system and the DMDX software was used for the experiment [32]. A simple presentation task was chosen in which each stimulus was preceded by a fixation point displayed for 500 ms (see Figure 1). The maximum time allowed for a response was 2500 ms. Participants were instructed to answer as fast as possible while trying not to make mistakes. In order to avoid any kind of distraction, such as noise, the test was administered individually in an isolated room.
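As a concrete illustration of this trial structure, the sketch below reproduces the timing in PsychoPy rather than DMDX (the software actually used); the 500 ms is read here as the fixation duration, and the image path is a placeholder.
```python
# Sketch of one trial of the task in PsychoPy (the study itself used DMDX [32]):
# a fixation point, then the stimulus for up to 2500 ms, responses on M (target,
# green label) or Z (distractor, red label).
from psychopy import visual, core, event

win = visual.Window(fullscr=True, color="white", units="pix")
fixation = visual.TextStim(win, text="+", color="black")
clock = core.Clock()

def run_trial(stimulus):
    """Run one trial and return (key, reaction time) or (None, None) on timeout."""
    fixation.draw()
    win.flip()
    core.wait(0.5)                                   # 500 ms fixation point
    stimulus.draw()
    win.flip()
    clock.reset()
    keys = event.waitKeys(maxWait=2.5,               # 2500 ms response window
                          keyList=["m", "z"],
                          timeStamped=clock)
    win.flip()                                       # clear the screen
    return tuple(keys[0]) if keys else (None, None)

face_stim = visual.ImageStim(win, image="face_01.png")   # placeholder file
key, rt = run_trial(face_stim)
```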

2.4. Data Analysis

Data were analyzed using non-parametric tests, as well as an analysis of variance (ANOVA), to evaluate the reaction times for correct responses and the response accuracy of each participant. The analysis applied a trimming technique as a cut-off, excluding latencies shorter than 250 ms or longer than 1500 ms. Furthermore, Bayesian inference was carried out using the Bayes factor notation (BF10), which indicates the evidence supporting H1 over H0. Data analysis was performed using JASP (Version 0.12.2).
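A sketch of how this trimming and the subsequent tests could be reproduced in Python is given below (the authors ran the analyses in JASP); the file and column names are assumptions, and pingouin's paired Bayesian t-test is used here only to show where a BF10 comes from.
```python
# Minimal sketch (in Python rather than JASP) of the 250-1500 ms trimming rule
# and the kind of non-parametric and Bayesian tests reported in the Results.
# The CSV file and its columns (participant, condition, correct, rt) are assumed.
import pandas as pd
from scipy import stats
import pingouin as pg

data = pd.read_csv("recognition_latencies.csv")      # placeholder file name

# Correct responses only, with the trimming cut-off applied to latencies (ms).
trimmed = data[(data["correct"] == 1) & (data["rt"].between(250, 1500))]

# Mean latency per participant for the face and name conditions.
means = trimmed.groupby(["participant", "condition"])["rt"].mean().unstack()

# Non-parametric paired comparison, as used for the small Experiment I sample.
w_stat, p_value = stats.wilcoxon(means["face"], means["name"])

# Bayesian paired t-test: BF10 quantifies the evidence for H1 over H0.
bf10 = pg.ttest(means["face"], means["name"], paired=True)["BF10"].iloc[0]
print(f"Wilcoxon W = {w_stat:.2f}, p = {p_value:.3f}, BF10 = {bf10}")
```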

3. Results

Face recognition was faster than name recognition in both Experiments I and II. The Kolmogorov–Smirnov and Shapiro–Wilk normality tests were employed to examine whether the variables were normally distributed. Although these tests did not reach significance (p > 0.05) for Experiments I and II, reaction times were positively skewed [33,34]. Moreover, due to the small sample size of Experiment I, a non-parametric approach was chosen. No statistically significant differences were found between the PRE and POST moments for latencies (see Figure 2). However, as depicted in Figure 3, differences in accuracy did reach statistical significance on the Wilcoxon test for target faces (z = 2.44; p < 0.05) and names (z = 3.51; p < 0.01), as well as for distracting faces (z = 2.82; p < 0.01). This might reflect test-retest learning in participants' performance. With regard to the nature of the stimuli, no statistically significant differences were found between faces and names at the PRE moment, but there were at the POST moment on the Wilcoxon test (z = 2.58; p < 0.05). Distracting stimuli did show statistically significant differences between faces and names at both the PRE (z = 2.48; p < 0.05) and POST moments (z = 2.94; p < 0.01).
As mentioned above, a second experiment was carried out with two different samples from two different countries and a larger number of participants (see Table 1 for descriptive statistics of both Experiments I and II). For Experiment II, a 2 (face versus name) × 2 (target versus distracting) × 2 (Spain versus USA) ANOVA was carried out on the latencies. Differences between face and name recognition were statistically significant: F(1,78) = 34.97; MSE = 14,237.18; p < 0.001; η2 = 0.31. Differences between target and distracting stimuli also reached statistical significance, with the former recognized faster than the latter: F(1,78) = 73.85; MSE = 7025.31; p < 0.001; η2 = 0.49. An interaction was found between experimental conditions and group: F(1,78) = 3.81; MSE = 77,716.30; p < 0.05; η2 = 0.06. With regard to accuracy, name recognition was more accurate than face recognition, and this difference was statistically significant, although the explained variance was relatively small: F(1,78) = 7.334; MSE = 0.010; p < 0.01; η2 = 0.08. Differences between target and distracting stimuli also reached statistical significance, with the latter recognized more accurately than the former, again with a small explained variance: F(1,78) = 7.07; MSE = 0.02; p < 0.01; η2 = 0.08. As expected, an interaction was found between experimental conditions and group: F(1,78) = 13.12; MSE = 0.13; p < 0.01; η2 = 0.14.
Secondly, the relationship between latencies and Google-based frequencies was examined. Pearson's correlation coefficient (Table 2) was computed between RTs and Google frequencies based on simple searches (Searches) and on news searches (News) from Experiment II. Figure 4 presents the scatter plots for these conditions. It is important to note that the correlation coefficients presented in the scatter plots appear to be highly influenced by outliers (in some pairs), as also reported in previous literature [31].
Table 3 shows the regression analyses predicting recognition times, under a logarithmic transformation as in previous literature in the field [35,36], from the Google frequencies of searches and news. Stronger predictions were found for online news frequencies. A second strategy was based on a Bayesian approach. The wheels represented in Figure 5 depict the strength of the evidence provided by the Bayes factor: these ratios are transformed to a magnitude between 0 and 1 and plotted as the proportion of a circular area. In this way, stronger evidence was found for news than for regular online search frequencies. Moreover, this effect was stronger for name than for face recognition.
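For reference, a sketch of this kind of regression in Python (using statsmodels, not the authors' JASP workflow) is shown below; the item-level file and column names are assumptions, and z-scoring both variables would be needed to reproduce the standardized β values in Table 3.
```python
# Sketch of regressing mean recognition times on log-transformed Google
# frequencies (searches vs. news), one model per predictor, as in Table 3.
# The file and its columns (mean_rt, searches, news) are assumed names.
import numpy as np
import pandas as pd
import statsmodels.api as sm

items = pd.read_csv("item_frequencies.csv")   # placeholder: one row per celebrity

for predictor in ["searches", "news"]:
    log_freq = np.log10(items[predictor] + 1)  # log transform; +1 guards against zeros
    X = sm.add_constant(log_freq)
    model = sm.OLS(items["mean_rt"], X).fit()
    print(predictor,
          "b =", round(model.params[predictor], 3),
          "p =", round(model.pvalues[predictor], 3),
          "R2 =", round(model.rsquared, 2))
```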

4. Discussion

Differences between word and face recognition have attracted scholarly interest in the last decade. Even if these are different processes, they are located in the same area of the brain. Research in this field continues to debate how overlapping or dissociated these processes are, as face recognition is believed to be innate, while name recognition is based on reading, which must be learned. In this way, the literature has shown how impairment of one process might not affect the other and vice versa [37], and how these processes differ in the early development of their areas of specialization [38,39]. For this reason, how stable these differences are is of interest. This study involved university students and was conducted over the course of six months. The results show that latencies are stable over this period, while name accuracy was similar at both the PRE and POST moments. This suggests that this process might be less sensitive to re-test effects than originally believed.
Secondly, online frequencies were chosen as a way to reflect the traditional word frequency effect, one of the most robust effects in the literature. Furthermore, online frequencies based on Google searches produce results similar to those obtained with traditional linguistic corpora [10,13]. In this context, the term contextual diversity has arisen [12,40], referring to the number of contexts in which a word or face appears rather than the number of times it is repeated. For this reason, two types of frequency were considered: one based on online searches and one based on online news. Better predictions were found for frequencies based on online news, a result also supported by the Bayesian approach carried out afterwards. If frequencies based on news are understood as contexts, this might support the contextual diversity view. Interestingly, this result was remarkably stronger for name recognition than for face recognition, which might shed light on differences between the two processes.
Differences across countries were also found, which might support previous research [16,31,41] stipulating that population density modulates face recognition. This is explained through the role of experience with faces, which might be related to how likely an environment is to provide a wide range of visual experiences. Even though the specific density of participants' hometowns was not considered as a variable, the differences across countries are consistent with these results. Furthermore, this result is congruent with the contextual diversity approach described above.
Lastly, we would like to highlight the methodological novelty of the Bayesian approach in this field [25,26,27]. To our knowledge, this research is the first to compare word and face recognition through a Bayesian analysis. This approach offers additional evidence to support differences between the two processes. Future lines of research addressing longitudinal studies in clinical samples are of interest to examine whether one of the processes could be selectively impaired while the other is kept intact [31]. In particular, we would like to recommend the use of Bayesian inference to support traditional frequentist analyses with emerging approaches [42,43]. Its advantage does not exclude traditional analysis; rather, it supports and deepens hypothesis testing and the interpretation of probabilities. Another line of research is related to neural approaches, cluster analysis, or deep learning, with numerous applications in technology and natural science [39,44,45,46,47,48,49].

5. Conclusions

The aim of this study was to examine differences between face and word recognition in terms of stability over time and frequency-based predictions. For this reason, two experiments were carried out. The results can be summarized as follows: (i) latencies for face and name recognition were stable over a period of six months; (ii) name recognition seems to be less susceptible to re-test effects, as accuracy was similar at both PRE and POST; (iii) news frequencies were better predictors than regular online frequencies based on searches; (iv) if news frequencies are understood as a reflection of context, this might support a contextual diversity effect, which is stronger for name than for face recognition.

Author Contributions

Conceptualization, C.M.-T.; methodology, C.M.-T. and D.G.; formal analysis, C.M.-T.; data curation, C.M.-T. and A.G.W.; writing—review and editing, C.M.-T., A.G.W., and D.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

We would like to thank the participants involved in the study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dehaene, S.; Cohen, L. The unique role of the visual word form area in reading. Trends Cogn. Sci. 2011, 15, 254–262.
  2. Griffin, J.W.; Motta-Mena, N.V. Face and Object Recognition. In Encyclopedia of Evolutionary Psychological Science; Shackelford, T.K., Weekes-Shackelford, V.A., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 1–8. ISBN 978-3-319-16999-6.
  3. Moret-Tatay, C.; Lami, A.; Oliveira, C.R.; Beneyto-Arrojo, M.J. The mediational role of distracting stimuli in emotional word recognition. Psicol. Reflex. Crítica 2018, 31, 1.
  4. Rezlescu, C.; Susilo, T.; Wilmer, J.B.; Caramazza, A. The inversion, part-whole, and composite effects reflect distinct perceptual mechanisms with varied relationships to face recognition. J. Exp. Psychol. Hum. Percept. Perform. 2017, 43, 1961–1973.
  5. Wegrzyn, M.; Vogt, M.; Kireclioglu, B.; Schneider, J.; Kissler, J. Mapping the emotional face. How individual face parts contribute to successful emotion recognition. PLoS ONE 2017, 12, e0177239.
  6. Brysbaert, M.; Buchmeier, M.; Conrad, M.; Jacobs, A.M.; Bölte, J.; Böhl, A. The Word Frequency Effect: A Review of Recent Developments and Implications for the Choice of Frequency Estimates in German. Exp. Psychol. 2011, 58, 412–424.
  7. Gimenes, M.; New, B. Worldlex: Twitter and blog word frequencies for 66 languages. Behav. Res. Methods 2016, 48, 963–972.
  8. Singer, L.M.; Alexander, P.A. Reading on Paper and Digitally: What the Past Decades of Empirical Research Reveal. Rev. Educ. Res. 2017, 87, 1007–1041.
  9. Brysbaert, M.; New, B. Moving beyond Kučera and Francis: A critical evaluation of current word frequency norms and the introduction of a new and improved word frequency measure for American English. Behav. Res. Methods 2009, 41, 977–990.
  10. Moret-Tatay, C.; Gamermann, D.; Murphy, M.; Kuzmičová, A. Just Google It: An Approach on Word Frequencies Based on Online Search Result. J. Gen. Psychol. 2018, 145, 170–182.
  11. Wang, M.; Hu, G. A Novel Method for Twitter Sentiment Analysis Based on Attentional-Graph Neural Network. Information 2020, 11, 92.
  12. Rosa, E.; Tapia, J.L.; Perea, M. Contextual diversity facilitates learning new words in the classroom. PLoS ONE 2017, 12, e0179004.
  13. Pagán, A.; Nation, K. Learning Words Via Reading: Contextual Diversity, Spacing, and Retrieval Effects in Adults. Cogn. Sci. 2019, 43, e12705.
  14. Balota, D.A.; Yap, M.J.; Hutchison, K.A.; Cortese, M.J.; Kessler, B.; Loftis, B.; Neely, J.H.; Nelson, D.L.; Simpson, G.B.; Treiman, R. The English Lexicon Project. Behav. Res. Methods 2007, 39, 445–459.
  15. Brysbaert, M.; Keuleers, E.; New, B. Assessing the Usefulness of Google Books’ Word Frequencies for Psycholinguistic Research on Word Processing. Front. Psychol. 2011, 2, 27.
  16. Sunday, M.A.; Patel, P.A.; Dodd, M.D.; Gauthier, I. Gender and hometown population density interact to predict face recognition ability. Vis. Res. 2019, 163, 14–23.
  17. Moret-Tatay, C.; Baixauli Fortea, I.; Grau Sevilla, M.D. Challenges and insights for the visual system: Are face and word recognition two sides of the same coin? J. Neurolinguistics 2020, 56, 100941.
  18. Barragan-Jason, G. How fast is famous face recognition? Front. Psychol. 2012, 3.
  19. Nanda, S.; Mohanan, N.; Kumari, S.; Mathew, M.; Ramachandran, S.; Pillai, P.G.R.; Kesavadas, C.; Sarma, P.S.; Menon, R.N. Novel Face-Name Paired Associate Learning and Famous Face Recognition in Mild Cognitive Impairment: A Neuropsychological and Brain Volumetric Study. Dement. Geriatr. Cogn. Disord. Extra 2019, 9, 114–128.
  20. Quaranta, D.; Piccininni, C.; Carlesimo, G.A.; Luzzi, S.; Marra, C.; Papagno, C.; Trojano, L.; Gainotti, G. Recognition disorders for famous faces and voices: A review of the literature and normative data of a new test battery. Neurol. Sci. 2016, 37, 345–352.
  21. Rizzo, S.; Venneri, A.; Papagno, C. Famous face recognition and naming test: A normative study. Neurol. Sci. 2002, 23, 153–159.
  22. Nuzzo, R. Scientific method: Statistical errors. Nature 2014, 506, 150–152.
  23. Suliman, A.; Omarov, B. Applying Bayesian Regularization for Acceleration of Levenberg Marquardt based Neural Network Training. Int. J. Interact. Multimed. Artif. Intell. 2018, 5, 68.
  24. Vandekerckhove, J.; Rouder, J.N.; Kruschke, J.K. Editorial: Bayesian methods for advancing psychological science. Psychon. Bull. Rev. 2018, 25, 1–4.
  25. Bernabé-Valero, G.; Blasco-Magraner, J.S.; Moret-Tatay, C. Testing Motivational Theories in Music Education: The Role of Effort and Gratitude. Front. Behav. Neurosci. 2019, 13, 172.
  26. Moret-Tatay, C.; Beneyto-Arrojo, M.J.; Laborde-Bois, S.C.; Martínez-Rubio, D.; Senent-Capuz, N. Gender, Coping, and Mental Health: A Bayesian Network Model Analysis. Soc. Behav. Personal. Int. J. 2016, 44, 827–835.
  27. Puga, J.L.; Krzywinski, M.; Altman, N. Bayesian networks. Nat. Methods 2015, 12, 799–800.
  28. Ruiz-Ruano, A.-M.; López-Puga, J.; Delgado-Morán, J.-J. El componente social de la amenaza híbrida y su detección con modelos bayesianos/The Social Component of the Hybrid Threat and its Detection with Bayesian Models. URVIO Rev. Latinoam. Estud. Segur. 2019, 57–69.
  29. Van Doorn, J.; van den Bergh, D.; Bohm, U.; Dablander, F.; Derks, K.; Draws, T.; Etz, A.; Evans, N.J.; Gronau, Q.F.; Hinne, M.; et al. The JASP Guidelines for Conducting and Reporting a Bayesian Analysis. PsyArXiv 2019.
  30. Moret-Tatay, C.; Gamermann, D.; Navarro-Pardo, E.; de Córdoba Castellá, P.F. ExGUtils: A Python Package for Statistical Analysis With the ex-Gaussian Probability Density. Front. Psychol. 2018, 9, 612.
  31. Moret-Tatay, C.; Baixauli-Fortea, I.; Sevilla, M.D.G.; Irigaray, T.Q. Can You Identify These Celebrities? A Network Analysis on Differences between Word and Face Recognition. Mathematics 2020, 8, 699.
  32. Forster, K.I.; Forster, J.C. DMDX: A Windows display program with millisecond accuracy. Behav. Res. Methods Instrum. Comput. 2003, 35, 116–124.
  33. Moret-Tatay, C.; Leth-Steensen, C.; Irigaray, T.Q.; Argimon, I.I.L.; Gamermann, D.; Abad-Tortosa, D.; Oliveira, C.; Sáiz-Mauleón, B.; Vázquez-Martínez, A.; Navarro-Pardo, E.; et al. The Effect of Corrective Feedback on Performance in Basic Cognitive Tasks: An Analysis of RT Components. Psychol. Belg. 2016, 56, 370–381.
  34. Fitousi, D. Linking the Ex-Gaussian Parameters to Cognitive Stages: Insights from the Linear Ballistic Accumulator (LBA) Model. Quant. Methods Psychol. 2020, 16, 91–106.
  35. Balota, D.A.; Cortese, M.J.; Sergent-Marshall, S.D.; Spieler, D.H.; Yap, M.J. Visual Word Recognition of Single-Syllable Words. J. Exp. Psychol. Gen. 2004, 133, 283–316.
  36. Smith, N.J.; Levy, R. The effect of word predictability on reading time is logarithmic. Cognition 2013, 128, 302–319.
  37. Susilo, T.; Wright, V.; Tree, J.J.; Duchaine, B. Acquired prosopagnosia without word recognition deficits. Cogn. Neuropsychol. 2015, 32, 321–339.
  38. Centanni, T.M.; Norton, E.S.; Park, A.; Beach, S.D.; Halverson, K.; Ozernov-Palchik, O.; Gaab, N.; Gabrieli, J.D. Early development of letter specialization in left fusiform is associated with better word reading and smaller fusiform face area. Dev. Sci. 2018, 21, e12658.
  39. Moret-Tatay, C.; Baixauli-Fortea, I.; Grau-Sevilla, M.D. Profiles on the Orientation Discrimination Processing of Human Faces. Int. J. Environ. Res. Public Health 2020, 17, 5772.
  40. Adelman, J.S.; Brown, G.D.A.; Quesada, J.F. Contextual Diversity, Not Word Frequency, Determines Word-Naming and Lexical Decision Times. Psychol. Sci. 2006, 17, 814–823.
  41. Sunday, M.A.; Dodd, M.D.; Tomarken, A.J.; Gauthier, I. How faces (and cars) may become special. Vis. Res. 2019, 157, 202–212.
  42. Druică, E.; Vâlsan, C.; Ianole-Călin, R.; Mihail-Papuc, R.; Munteanu, I. Exploring the Link between Academic Dishonesty and Economic Delinquency: A Partial Least Squares Path Modeling Approach. Mathematics 2019, 7, 1241.
  43. Chen, T.; Li, Q.; Yang, J.; Cong, G.; Li, G. Modeling of the Public Opinion Polarization Process with the Considerations of Individual Heterogeneity and Dynamic Conformity. Mathematics 2019, 7, 917.
  44. Khari, M.; Garg, A.K.; Gonzalez-Crespo, R.; Verdú, E. Gesture Recognition of RGB and RGB-D Static Images Using Convolutional Neural Networks. Int. J. Interact. Multimed. Artif. Intell. 2019, 5, 22.
  45. Magdin, M.; Prikler, F. Are Instructed Emotional States Suitable for Classification? Demonstration of How They Can Significantly Influence the Classification Result in An Automated Recognition System. Int. J. Interact. Multimed. Artif. Intell. 2019, 5, 141.
  46. Moaaz, O.; Cesarano, C.; Muhib, A. Some New Oscillation Results for Fourth-Order Neutral Differential Equations. Eur. J. Pure Appl. Math. 2020, 13, 185–199.
  47. Matsunaga, A.; Fortes, J.A.B. On the Use of Machine Learning to Predict the Time and Resources Consumed by Applications. In Proceedings of the 2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing, Melbourne, Australia, 17–20 May 2010; pp. 495–504.
  48. Imani, M.; Ghoreishi, S.F.; Allaire, D.; Braga-Neto, U.M. MFBO-SSM: Multi-Fidelity Bayesian Optimization for Fast Inference in State-Space Models. Proc. AAAI Conf. Artif. Intell. 2019, 33, 7858–7865.
  49. Wang, D.; Hoi, S.C.H.; Wu, P.; Zhu, J.; He, Y.; Miao, C. Learning to name faces: A multimodal learning scheme for search-based face annotation. In Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '13), Dublin, Ireland, 28 July–1 August 2013; p. 443.
Figure 1. Visual representation of the experimental setup and conditions.
Figure 2. Differences PRE versus POST moments for the target stimuli on the face and name recognition conditions. On the left, differences on latencies. On the right, differences on accuracy.
Figure 3. Differences PRE versus POST moments for the distracting stimuli on the face and name recognition conditions. On the left, differences on latencies. On the right, differences on accuracy.
Figure 4. Scatter plot correlation for online searches on Google and Google News for face and name conditions.
Figure 5. Bayes Factor robustness check for differences between News and Search frequencies on face and name recognition.
Table 1. Descriptive analysis for both Experiments I and II. Mean latencies in ms; SD = standard deviation; accuracy in %.

| Experiment | Stimulus | Group/Moment | Face Mean | Face SD | Face Accuracy (%) | Name Mean | Name SD | Name Accuracy (%) |
| Experiment I | Target | PRE | 670.23 | 86.49 | 76 | 712.53 | 108.57 | 78 |
| | | POST | 636.95 | 77.08 | 84 | 696.93 | 77.62 | 78 |
| | Distracting | PRE | 721.95 | 117.88 | 78 | 858.33 | 129.55 | 82 |
| | | POST | 665.92 | 88.21 | 84 | 813.32 | 106.52 | 88 |
| Experiment II | Target | Spain | 634.68 | 88.22 | 80 | 712.96 | 100.68 | 77 |
| | | USA | 668.59 | 96.43 | 74 | 681.92 | 91.53 | 79 |
| | | Total | 651.64 | 93.40 | 77 | 697.44 | 96.87 | 78 |
| | Distracting | Spain | 701.74 | 112.81 | 79 | 843.59 | 137.89 | 81 |
| | | USA | 696.42 | 111.18 | 81 | 778.55 | 132.25 | 89 |
| | | Total | 699.08 | 111.32 | 80 | 811.07 | 138.17 | 85 |
Table 2. Pearson correlation coefficients among the variables of interest in Experiment II.

|           | News | Searches | Face RT | Name RT | Face Hits | Name Hits |
| News      | 1 | 0.372 ** | −0.271 * | −0.446 ** | 0.305 * | 0.408 ** |
| Searches  |   | 1 | 0.064 | −0.322 * | 0.127 | 0.131 |
| Face RT   |   |   | 1 | 0.236 | −0.638 ** | −0.632 ** |
| Name RT   |   |   |   | 1 | −0.342 ** | −0.496 ** |
| Face Hits |   |   |   |   | 1 | 0.733 ** |
| Name Hits |   |   |   |   |   | 1 |
* p < 0.05; ** p < 0.01.
Table 3. Regression analyses for the frequency databases (Google searches and news) with regard to face and name recognition.

|        | Face β | Face p | Face R² | Name β | Name p | Name R² |
| Search | 0.09 | 0.50 | 0.08 | −0.33 | 0.014 | 0.10 |
| News   | −0.37 | 0.03 | 0.13 | −0.46 | <0.01 | 0.20 |
Note. β = standardized regression coefficient; R² = coefficient of determination.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
