Article

The Semantics of Natural Objects and Tools in the Brain: A Combined Behavioral and MEG Study

1
Neurophysiology Unit, Fondazione IRCCS Istituto Neurologico Carlo Besta, Via Celoria 11, 20133 Milan, Italy
2
Division of Neuroscience, IRCCS San Raffaele Scientific Institute, University San Raffaele, Via Olgettina 60, 20132 Milan, Italy
3
Centro Psico-Sociale di Seregno—Azienda Socio-Sanitaria Territoriale di Vimercate, 20871 Vimercate, Italy
4
Dipartimento di Scienze Mediche e Chirurgiche, University “Magna Graecia” of Catanzaro, Viale Salvatore Venuta, 88100 Germaneto, Italy
*
Author to whom correspondence should be addressed.
Brain Sci. 2022, 12(1), 97; https://doi.org/10.3390/brainsci12010097
Submission received: 28 October 2021 / Revised: 5 January 2022 / Accepted: 7 January 2022 / Published: 12 January 2022
(This article belongs to the Special Issue The Role of the Sensorimotor System in Cognitive Functions)

Abstract

Current literature supports the notion that the recognition of visually presented objects is subserved by neural structures different from those responsible for the semantic processing of their nouns. However, embodiment predicts that processing observed objects and their verbal labels should share similar neural mechanisms. In a combined behavioral and MEG study, we compared the modulation of motor responses and cortical rhythms during the processing of graspable natural objects and tools, either verbally or pictorially presented. Our findings demonstrate that conveying meaning to an observed object or processing its noun similarly modulates both motor responses and cortical rhythms; because natural graspable objects and tools are differently represented in the brain, they affect behavioral and MEG findings in different ways, independent of presentation modality. These results provide experimental evidence that the neural substrates responsible for conveying meaning to objects overlap with those where the object is represented, thus supporting an embodied view of semantic processing.

1. Introduction

Classically, semantics refers to our capacity to attribute meaning to the events and entities (such as objects, words, feelings, and so on) that we experience during our lifespan and organize into a symbolic system. Language is the symbolic system that we use to represent this knowledge about the world, but how this knowledge is organized in the brain and how it relates to the real world is a matter of debate in the neuroscientific literature. In recent times, it has been proposed that speakers understand linguistic material thanks to the recruitment of the sensory, motor, and even emotional systems involved in experiencing the content expressed by that material [1,2,3,4,5,6,7]. This approach contrasts with a more classical one, which holds that language is an amodal function, completely disentangled from the sensorimotor systems normally involved in experiencing its content [8,9,10,11]. Indeed, these two approaches are not mutually exclusive, and some authors have attempted to combine the two views. In this perspective, authors do not deny a potential role of sensory, motor, and emotional systems in building up concepts. As for objects, their concepts are stored in brain areas distinct from those where individuals experience the different features of objects [12,13,14,15,16].
Within this general framework, the central claim is that information about the features of an object—such as its form, its size, and the manner in which we act upon it—is stored in our sensory, motor, and emotional systems [16]. For instance, it has been demonstrated that words related to odorants (e.g., cinnamon) activate the olfactory system [17], whereas words related to taste (e.g., salt) activate the gustatory one [18], and words related to emotions activate the corresponding areas (e.g., disgust, [19]). However, despite the recruitment of sensorimotor and even emotional areas, possibly related to coding specific features of the objects expressed by the nouns, the meaning of the noun per se is not coded in those areas, but rather in distinct high-order linguistic regions, the so-called semantic hubs [13,16,20]. In other words, this evidence does not rule out the possibility that, despite the involvement of the cortex used to experience the sensory or motor content, the mechanism (and possibly the areas) allowing us to attribute meaning could be different [13,16,21].
A further point of interest is that, in the current literature, observed objects seem to be analyzed differently from their corresponding nouns. There is general agreement that two visual streams subserve the processing of observed objects [22,23,24]. When individuals have to interact with objects, the dorsal stream, including frontal and parietal areas, is mainly involved. This stream is devoted to the sensorimotor transformations that make possible the choice of the most appropriate motor program to act upon the observed object. This implies that, within the dorsal stream, both the object features used to guide actions (e.g., size, orientation) and the actions usually performed upon an object are represented (e.g., [25,26]). It has been demonstrated that the dorsal stream is anatomically and functionally composed of two circuits, named the dorso-dorsal and ventro-dorsal streams, in which natural objects and tools are represented, respectively [27,28,29,30]. Accordingly, the actions linked to these categories of objects are also coded in these circuits. Manipulative actions specifically devoted to interacting with natural objects (such as power and precision grips, simple actions like the key pressing used in the present study to provide motor responses, up to reach-to-grasp actions) are represented in the dorso-dorsal stream, while actions for use (such as grasping a hammer to drive nails) are represented in the ventro-dorsal one, which also seems accountable for the processing of sensorimotor information based on long-term object representation [28,29,30]. Despite this different representation, the current literature claims that the recognition of an object is subserved by the ventral stream, as described by pivotal studies [22], including specific temporal areas (e.g., the lateral occipital temporal cortex, anterior and inferotemporal regions).
In further support of this view, clinical findings showed that following damage to the temporal lobe, patients lose the ability to recognize an object, while after damage to the parietal cortex, they lose the ability to use objects properly [28,31,32,33,34,35]. Indeed, these two streams are not completely segregated but rather interact to update our functional knowledge and our capacity to interact online with an object [25,28,36,37,38,39,40,41].
Tools are a special class of graspable objects for humans. The study of tools is interesting since they have an associated functional use that involves a particular modality of interaction with the object, rather than just the property of being graspable, as natural objects have [28]. Furthermore, humans use tools in different contexts, thus requiring a generalization process and conceptual knowledge of their use [33]. Functional neuroimaging studies focusing on tools have demonstrated that their use elicits activation of many distinct brain areas, including the left supramarginal gyrus (SMG) [42,43,44,45,46,47,48,49,50,51], the ventral premotor cortex (PMv) [26,52,53], the left inferior frontal gyrus (IFG) bordering the pars opercularis [33], and the left insula [43]. Overall, these studies show that tools are represented in circuits distinct from those where natural objects are represented. Specifically, tools seem to be represented in a fronto-parietal circuit corresponding to the ventro-dorsal subdivision of the dorsal stream [27,29]. Note that, as previously mentioned, the specific actions involved in the use of objects are also represented in this part of the dorsal stream. Moreover, this differentiated representation is possibly already present in non-human primates [54].
To sum up, the current literature seems to support the view that the observation of graspable natural objects and graspable tools leads to the activation of different sectors within the dorsal stream, in which the congruent actions are also represented, while the processing and understanding of nouns expressing objects in the same categories lead to the activation of shared semantic hubs. These hubs are therefore distinct from the areas where objects are coded and where the actions necessary to interact with them are represented.
Indeed, for natural objects, some recent studies have suggested that verbal labels and observed objects share similar semantic mechanisms [2,55,56,57,58,59,60,61,62]. In two behavioral studies, which used a paradigm similar to that of the present study, participants gave slower motor responses to natural graspable objects and their nouns as compared to non-graspable ones [56,57]. The authors proposed that when participants are engaged in two different tasks, i.e., object processing (either pictorial or verbal) and the preparation of motor responses, the motor system is involved in both and there is therefore a competition for neuronal resources, leading to a slowing down of motor responses.
In the present study, we directly compared the modulation of the motor system during the processing of natural graspable objects and graspable tools, each either verbally or pictorially presented. In line with the embodiment approach, since natural graspable objects and tools have distinct motor representations in the brain, these two object categories should lead to a different modulation of the motor system, regardless of presentation modality. Namely, graspable natural objects should recruit the most dorsal sector of the dorsal stream, while tools should recruit its ventral sector. On the contrary, if the processing of nouns involves areas distinct from those where the corresponding objects are motorically represented, then a different modulation could still potentially be found for observed objects, but it appears unlikely for nouns, since the nouns of objects would be coded in specific hubs distinct from the regions where natural objects and tools, respectively, are motorically represented [13,16,20]. We addressed this issue with a go/no-go task already used in previous studies by our group [56,63], in which participants gave their responses when stimuli were real words and/or objects and had to refrain from responding when stimuli were meaningless (scrambled images or pseudowords). Responses were required 150 ms after stimulus onset, since previous studies have shown that the motor system is recruited as early as 150–170 ms after the visual or auditory presentation of stimuli with a specific motor content [64,65,66,67,68,69,70,71]. We replicated the behavioral task in a magnetoencephalography (MEG) study, looking at the modulation of the cortical beta rhythm during the semantic processing of natural objects and tools, presented either as nouns or as images.
Beta band oscillations are the predominant rhythm originating in the motor cortex, with a typical pattern of suppression and rebound during movement [72]. Beta suppression, or desynchronization (event-related desynchronization, ERD), starts several hundred milliseconds before movement onset in self-paced or externally cued movements and becomes maximal around the time of movement execution. ERD and the subsequent synchronization (event-related synchronization, ERS) have been widely adopted to study the neural correlates of action observation [73,74], motor imagery [75,76,77] and action-related language [63,78]. In sum, an increased suppression of the beta rhythm expresses a condition in which the motor system is more prompt to generate a motor response, while a weaker suppression expresses a condition in which the motor system is less prompt for action.
In the present study, magnetic ERD/ERS in the beta band was exploited to reveal the neural correlates of object observation and noun processing and the neurophysiological mechanisms underlying motor responses. As foreseen by embodiment, we expected a different modulation of motor responses, as well as of the beta rhythm, during the processing of natural graspable objects and tools, respectively, given their different motor representations in the brain, and this independent of the presentation modality (nouns or pictures) of the two kinds of stimuli. In other words, we aimed at assessing whether beta rhythm suppression is also sensitive to graspable natural objects (presented as pictures and nouns), as it is to congruent manipulative hand actions. Note that the motor response we used was just a manipulative action (i.e., a simple key press, most likely involving the neural structures actually used to interact with natural graspable objects) and not an action for use, as required when interacting with graspable tools [27,28,29,30]. Since the cortical circuitry involved in generating the motor response required by the behavioral task was also involved, at the same time, in the semantic processing of the natural graspable objects, we expected a weaker suppression of the beta rhythm, in parallel with a slowing down of motor responses, during the processing of this object category as compared to tools, regardless of presentation modality.

2. Materials and Methods

2.1. Experiment 1—Behavioral Study

2.1.1. Participants

In total, 28 volunteers (18 females, mean age = 22 years and 5 months, Std. Dev. = 3.2) took part in the behavioral experiment. All participants were 18 years or older and gave their informed consent, in accordance with the ethical standards of the Declaration of Helsinki. Exclusion criteria were formal education in linguistics, the presence of neurological or psychiatric disorders, and the use of drugs affecting the central nervous system. The study was approved by the Ethics Committee of the University “Magna Graecia” of Catanzaro (approval number: 2012.40, date of approval: November 2012) and complied with the ethical standards of the Italian Psychological Society (AIP, see http://www.aipass.org/node/26, accessed on 21 November 2020) as well as the Italian Board of Psychologists (see http://www.psy.it/codice_deontologico.html, accessed on 21 November 2020). All participants were right-handed, according to the Edinburgh Handedness Inventory [79], had normal or corrected-to-normal vision, and were native Italian speakers.

2.1.2. Apparatus, Procedure and Stimuli

The experiment was carried out in a sound-attenuated room, dimly illuminated by a halogen lamp directed toward the ceiling. Participants sat comfortably in front of a PC screen (LG 22′′ LCD, 1920 × 1080 pixel resolution and 60 Hz refresh rate). The eye-to-screen distance was set at 60 cm.
The experiment used a go/no-go task in which participants were requested to respond to real nouns and images of objects and to refrain from responding when the presented stimuli were pseudowords or scrambled images. The experimental session consisted of 1 practice block and 1 experimental block. In the practice block, participants were presented with 16 stimuli (4 images of natural objects or tools, 4 scrambled images, 4 nouns of natural objects or tools, and 4 pseudowords) which were not used in the experimental block. During the practice block, participants received feedback (“ERROR”) after giving a wrong response (i.e., responding to a meaningless item or refraining from responding to a real one), as well as for responses given prior to the go signal (“ANTICIPATION”) or later than 1.5 s (“YOU HAVE NOT ANSWERED”). In the experimental block, each stimulus was randomly presented twice, with the constraint that no more than three items of the same kind (verbal, visual) or referring to objects of the same category (graspable natural objects, tools, meaningless) could be presented on consecutive trials. No feedback was given to participants. Thus, the experiment, which lasted about 20 min, consisted of 160 go trials (80 nouns: 50% natural graspable object nouns and 50% tool nouns; plus 80 images of objects: 50% natural graspable objects and 50% tools), 160 no-go trials (80 pseudowords plus 80 scrambled images), and 16 practice trials, for a total of 336 trials. To sum up, the experiment used a 2 × 2 repeated measures factorial design with Category (natural graspable objects, graspable tools) and Stimulus Type (nouns, photos) as within-subjects variables.
Nouns in the two categories were matched for word length (mean values for nouns referring to natural objects and tools: 6.4 and 7.4; t = 0.049, p = 0.96), syllable number (mean values: 2.45 and 3.00; t = 0.018, p = 0.98) and written lexical frequency [mean values: 6.14 and 8.77 occurrences per million in CoLFIS (Corpus e Lessico di Frequenza dell’Italiano Scritto, ~3,798,000 words)—Laudanna et al., 1995; t = 0.52, p = 0.60]. Pseudowords were built by substituting one consonant and one vowel in two distinct syllables of each noun (e.g., “sgalpillo” instead of “scalpello”). With this procedure, pseudowords contained orthographically and phonologically legal syllables for the Italian language. Hence, nouns and pseudowords were also matched for length.
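The substitution procedure described above can be sketched in code. The following is a minimal, hypothetical Python illustration (the character sets and the position-based substitution are simplifications of the syllable-based procedure described in the text; names are ours, not the authors'):

```python
import random

VOWELS = "aeiou"
CONSONANTS = "bcdfghlmnpqrstvz"  # consonants common in Italian orthography

def make_pseudoword(noun: str, rng: random.Random) -> str:
    """Build a length-matched pseudoword by replacing one consonant and
    one vowel at two distinct positions (simplified stand-in for the
    syllable-based substitution described in the text)."""
    chars = list(noun.lower())
    c_pos = [i for i, ch in enumerate(chars) if ch in CONSONANTS]
    v_pos = [i for i, ch in enumerate(chars) if ch in VOWELS]
    i = rng.choice(c_pos)
    j = rng.choice(v_pos)
    chars[i] = rng.choice([c for c in CONSONANTS if c != chars[i]])
    chars[j] = rng.choice([v for v in VOWELS if v != chars[j]])
    return "".join(chars)

rng = random.Random(0)
pseudo = make_pseudoword("scalpello", rng)  # a length-matched non-word
```

Because only existing letters are replaced, the pseudoword is guaranteed to match its source noun in length, mirroring the matching reported above.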
Images depicted 20 natural graspable objects and 20 tools. They were photos of real objects, not sketches. The scrambled images were built by applying Photoshop distorting graphic filters (e.g., blur and twist) to the photos of both natural graspable objects and graspable tools, so as to make them unrecognizable and thus meaningless. All photos and scrambled images were 440 × 440 pixels. The list of stimuli is reported in the Supplementary Materials.
In order to avoid any priming effect due to the presentation of the same item in different modalities, nouns and images within a category (e.g., graspable tools) never referred to the same item (for example, a graspable tool such as “hammer” was presented as a noun but never depicted as an image; correspondingly, a graspable tool such as “axe” was presented as an image but never as a noun).
Each trial started with a black fixation cross (RGB = 0, 0, 0) displayed at the center of a grey background (RGB = 178, 178, 178). After a random delay of 1000–1500 ms (to avoid response habituation), the fixation cross was replaced by a stimulus item, either a noun/pseudoword or an image/scrambled image. The verbal labels were written in black lowercase Courier New bold (font size = 24). Stimuli were centrally displayed and surrounded by a red (RGB = 255, 0, 0), 20 pixel-wide frame. The red frame changed to green (RGB = 0, 255, 0) 150 ms after stimulus onset; the color change of the frame was the “go” signal for the response (Figure 1). Participants were instructed to respond, as fast and accurately as possible, by pressing a key on a computer keyboard centered on their body midline with their right index finger. They had to respond when the stimulus referred to a real object and refrain from responding when it was meaningless. After the go signal, stimuli remained visible for 1350 ms or until the participant’s response. Stimulus presentation and response time (RT) collection were controlled using the software package E-Prime 2.
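The trial timeline described above can be summarized as a small event schedule. The sketch below is an assumption-laden illustration (function and key names are ours; the actual experiment was programmed in E-Prime 2):

```python
import random

def trial_schedule(rng: random.Random) -> dict:
    """Event times in ms for one trial, following the timing in the text:
    jittered fixation (1000-1500 ms), stimulus onset, go signal 150 ms
    later, and a response window closing 1350 ms after the go signal."""
    fixation_ms = rng.randint(1000, 1500)   # jitter to avoid habituation
    stim_on = fixation_ms                   # stimulus replaces the cross
    go_signal = stim_on + 150               # red frame turns green
    deadline = go_signal + 1350             # stimulus visible until then
    return {"stim_on": stim_on, "go": go_signal, "deadline": deadline}

schedule = trial_schedule(random.Random(42))
```

The jittered fixation is the only stochastic element; the stimulus-to-go interval is fixed at 150 ms for every trial.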

2.1.3. Data Analysis

Data analyses were performed using R 3.6.3 [80]. Practice trials were excluded from the analysis. Participants’ RTs to real stimuli were analyzed; RTs were measured from the “go” signal to the key press. Mean RTs for each participant were submitted to a repeated measures ANOVA (rmANOVA) with Category (2 levels: natural graspable object and tool) and Stimulus type (2 levels: noun and image) as factors.
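For a within-subject factor with only two levels, the rmANOVA main effect is equivalent to a paired t-test on per-participant means collapsed over the other factor (F = t²). A minimal pure-Python sketch of that equivalence, using hypothetical RT values rather than the study's data (the actual analysis was run in R):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(x, y):
    """Paired t statistic on matched samples. For a two-level
    within-subject factor, the rmANOVA main effect satisfies F = t**2
    when x and y are per-participant means collapsed over the other
    factor."""
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / sqrt(len(d)))

# Hypothetical per-participant mean RTs (ms), collapsed over Stimulus type
rt_natural = [714, 720, 705, 731, 698]
rt_tools = [677, 690, 660, 701, 672]
t_cat = paired_t(rt_natural, rt_tools)  # positive: natural slower than tools
```

With five participants the test has n − 1 = 4 degrees of freedom; a positive t here corresponds to the slowing for natural objects reported in the Results.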

2.2. Experiment 2—MEG Study

2.2.1. Participants

In total, 15 volunteers (9 females, mean age = 26 years, Std. Dev. = 2.0) were recruited for the experiment. All participants were 18 years or older, right-handed according to the Edinburgh Handedness Inventory [79], had normal or corrected-to-normal vision, and were native Italian speakers. Exclusion criteria were formal education in linguistics, the presence of neurological or psychiatric disorders, and the use of drugs affecting the central nervous system. The experiment was carried out in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments. The study was approved by the Ethics Committee of Fondazione IRCCS Istituto Neurologico Carlo Besta of Milan (approval number: 47/2012; date of approval: November 2012) and by the University “Magna Graecia” of Catanzaro (approval number: 2012.40) and complied with the ethical standards of the Italian Psychological Society (AIP, see http://www.aipass.org/node/26, accessed on 21 November 2020) as well as the Italian Board of Psychologists (see http://www.psy.it/codice_deontologico.html, accessed on 21 November 2020). Participants gave their written informed consent before being included in the study.

2.2.2. Task

Participants were seated in a magnetically shielded room to perform the experiment. Stimuli and procedure were the same as in the behavioral study, with the necessary adaptations required by the MEG setting. Sixteen practice trials were used to train participants. To improve the signal-to-noise ratio, the experiment consisted of two consecutive acquisitions, in each of which 80 go trials (40 nouns: 50% natural object nouns and 50% tool nouns; plus 40 images of objects: 50% natural objects and 50% tools) and 80 no-go trials (40 pseudowords plus 40 scrambled images) were presented, for a total of 320 experimental trials. In the two acquisitions, the presentation order of the stimuli was randomized. Hence, the MEG study used the same 2 × 2 repeated measures factorial design as the behavioral one. Stimulus presentation and RT collection were controlled using the software package Stim2.

2.2.3. MEG Data Acquisition and Pre-Processing

The MEG signals were acquired using a 306-channel whole-head MEG system (Triux, Elekta Oy, Helsinki, Finland). Surface EMG signals were simultaneously recorded from pairs of electrodes placed bilaterally, 2–3 cm apart, over the bellies of the right and left wrist flexor and extensor muscles. Signals were sampled at 1 kHz. Moreover, bipolar electro-oculographic (EOG) and electrocardiographic (ECG) signals were acquired.
The participant’s head position inside the MEG helmet was continuously monitored by five head position identification (HPI) coils located on the scalp. The locations of these coils, together with three anatomical landmarks (nasion, right and left preauriculars), and additional scalp points were digitized before the recording by means of a 3D digitizer (FASTRAK, Polhemus, Colchester, VT, USA).
The raw MEG data were pre-processed off-line using the spatio-temporal signal-space separation method [81] implemented in the Maxfilter 2.2 (Elekta Neuromag Oy, Helsinki, Finland) in order to subtract external interference and correct for head movements and then band-pass filtered at 0.1–100 Hz.
Cardiac and ocular artifacts were removed using an ICA algorithm based on the EEGLAB toolbox [82], implemented in custom-made MATLAB code (R2017b, MathWorks Inc., Natick, MA, USA), using the ECG and EOG as references. MEG data were divided into epochs ranging from 2.2 s before to 2.8 s after stimulus onset. The epoch length was chosen by taking into account the reaction time and the motor activation defined by the EMG signal, including the return to baseline. Epochs with continuous muscular contraction and/or sensor jumps were excluded from further analysis.
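The epoching step amounts to cutting a fixed window around each stimulus-onset sample. A minimal sketch, assuming the 1 kHz sampling rate reported above (function and parameter names are illustrative, not the authors' code):

```python
def epoch(signal, fs, onset_idx, t_pre=2.2, t_post=2.8):
    """Cut one epoch from t_pre seconds before to t_post seconds after
    the stimulus-onset sample; fs is the sampling rate in Hz."""
    start = onset_idx - int(t_pre * fs)
    stop = onset_idx + int(t_post * fs)
    return signal[start:stop]

# At fs = 1000 Hz, each epoch spans 5000 samples around the trigger.
samples = list(range(10000))
one_epoch = epoch(samples, fs=1000, onset_idx=3000)
```

Each epoch then carries 2.2 s of pre-stimulus baseline and 2.8 s of post-stimulus signal, covering the reaction time and the EMG return to baseline.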
Finally, data epochs were grouped according to the four experimental conditions (natural-object and tool images, natural-object and tool nouns).

2.3. Data Analysis

2.3.1. Sensor Analysis

Time–frequency representations (TFRs) were computed across frequencies from 1 to 30 Hz (in 1 Hz steps) and time from −2 to 2.5 s (in 0.1 s steps) with a fixed frequency smoothing of 4 Hz. Desynchronization values were obtained as the percent power change in the beta band (15–30 Hz) with respect to the mean power in the −2 to −1 s window before cue onset. Finally, for each participant, the most reactive beta-band frequency (individual reactive frequency, IRF) was defined as the frequency at which the maximum desynchronization was found.
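The desynchronization measure described above is a percent power change relative to the pre-cue baseline, and the IRF is the frequency at which that change is most negative. A pure-Python sketch of both computations (illustrative only; the actual analysis used FieldTrip-based TFRs, and the function names are ours):

```python
def erd_percent(power, baseline):
    """Percent power change relative to mean baseline power:
    negative values indicate desynchronization (ERD), positive ERS."""
    base = sum(baseline) / len(baseline)
    return [100.0 * (p - base) / base for p in power]

def individual_reactive_frequency(power_by_freq, baseline_by_freq, freqs):
    """IRF: the frequency whose ERD reaches the most negative value."""
    best_freq, best_erd = None, float("inf")
    for f, pw, bl in zip(freqs, power_by_freq, baseline_by_freq):
        erd = min(erd_percent(pw, bl))
        if erd < best_erd:
            best_erd, best_freq = erd, f
    return best_freq
```

For example, a post-stimulus power of 5 against a baseline of 10 yields an ERD of −50%.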

2.3.2. Source Analysis

Dynamic imaging of coherent sources (DICS) beamforming [83] was used to identify the spatial distribution of power in the frequency domain. The leadfield matrix was computed using a realistically shaped single-shell volume conduction model based on a template brain co-registered by means of the digitized scalp points. The source model was obtained from a 5 mm resolution grid covering the whole brain volume. Source localization was performed for the IRF ± 1 Hz band, for a pre-stimulus baseline period (−1.2 to −0.5 s) and for a window of interest during stimulus presentation (0.5 to 1.2 s), using a common spatial filter based on the pooled data from both time intervals. Subject-specific relative power differences were grand-averaged and normalized to the MNI brain template.
Source time-series were extracted using linearly constrained minimum variance (LCMV) beamforming [84] with 5% regularization. Data were normalized to the MNI template to extract the source time-series from the inferior parietal lobule and the precentral and postcentral areas. Subsequently, as for the sensor data, we calculated the desynchronization in the IRF ± 1 Hz band and averaged it within regions. The ERD onset latency was defined as the first value on the path to the minimum/maximum, and the ERD offset as the first value returning to baseline. Finally, the desynchronization area under the curve (AUC) was calculated between the ERD onset and offset.
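The ERD AUC can be sketched as a trapezoidal integral of the desynchronization curve between its onset and offset. An illustrative pure-Python version (names and the trapezoidal rule are our assumptions, not the authors' code):

```python
def erd_auc(erd, times, onset, offset):
    """Trapezoidal area under the ERD curve between onset and offset.
    A more negative AUC reflects stronger and/or longer-lasting
    desynchronization."""
    pts = [(t, v) for t, v in zip(times, erd) if onset <= t <= offset]
    auc = 0.0
    for (t0, v0), (t1, v1) in zip(pts, pts[1:]):
        auc += 0.5 * (v0 + v1) * (t1 - t0)
    return auc
```

On this convention, the "greater desynchronization AUC" reported for tools in the Results corresponds to a larger absolute (more negative) area.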
Both sensor- and source-level data were analyzed using custom MATLAB (R2017a, MathWorks, Inc., Natick, MA, USA) scripts based on the SPM8 [85] and FieldTrip [86] toolboxes.

2.3.3. Statistical Analysis

The RTs and beta ERD AUCs were compared using repeated measures ANOVA (rmANOVA) with the factors Category (tools, natural) and Stimulus type (images, nouns).
To compare TFRs between conditions in the motor areas contralateral to the response hand at the sensor level, and to identify significant beta frequencies and time points, the non-parametric permutation test in combination with cluster-level statistics and multiple-comparison correction implemented in the FieldTrip toolbox was applied. Post-hoc paired two-tailed t-tests were used to calculate the within-group differences between stimuli.
Finally, to compare the beta ERD AUCs on source signals in terms of main effects and interactions, rmANOVAs were performed with Category, Stimulus type and ROI (inferior parietal lobule, precentral and postcentral areas) as factors. Statistical analyses were carried out using IBM SPSS, version 20 (SPSS Inc., Chicago, IL, USA). All data are expressed as mean ± standard error of the mean.

3. Results

3.1. Experiment 1—Behavioral Study

Data were collected from twenty-eight participants. One participant was excluded from the analysis for having made 120 errors. All other participants performed the task well, with few errors (mean error rate: 5.3%, SD = 2.9). Error trials were excluded without replacement and not analyzed further. RTs were calculated as the interval between cue onset and the key press on the computer keyboard.
The repeated measures ANOVA (rmANOVA) on RTs revealed a main effect of Category (F(1, 26) = 99.64; MSE = 382.94; p < 0.001): slower RTs were obtained with natural graspable objects than with tools (714 ± 11.89 ms vs. 677 ± 11.55 ms). Responses to natural stimuli were slower than those to tools both for images (t(26) = 6.79, p < 0.0001) and for nouns (t(26) = 5.41, p < 0.0001). Neither the main effect of Stimulus Type (F(1, 26) = 1.57, p = 0.221) nor the interaction (F(1, 26) = 0.76, p = 0.390) reached statistical significance. Descriptive statistics are reported in Table 1.

3.2. Experiment 2—MEG Study

3.2.1. Behavioral Data

Behavioral data from the 15 participants replicated the results of Experiment 1. All subjects performed well, with few errors (mean error rate: 4.2%, SD = 2.3). RTs were calculated as the interval between cue onset and EMG onset. The rmANOVA on RTs showed a main effect of Category (F(1,14) = 22.18, MSE = 27,093.8, p < 0.001). RTs to natural stimuli were slower than those to tool stimuli both for images (Natural: 573.3 ± 11.61 ms; Tools: 536.3 ± 13.2 ms; t(14) = 3.612, p = 0.003) and for nouns (Natural: 593.3 ± 18.9 ms; Tools: 545.9 ± 16.3 ms; t(14) = 3.877, p = 0.002). Neither the main effect of Stimulus Type nor the interaction reached statistical significance.

3.2.2. MEG Data

Time-frequency analysis on sensors.
The typical time-frequency pattern, consisting of beta band desynchronization over the contralateral motor area immediately after stimulus onset followed by focal synchronization after movement execution, was observed in every subject and condition. When comparing natural graspable objects and tools (both for images and nouns), statistical analysis revealed a significant difference in the 0.5–1 s interval after stimulus onset in the contralateral motor area. Specifically, a significantly greater desynchronization was found for tool stimuli with respect to natural stimuli (Figure 2A,B). The difference was greater and more protracted for visual stimuli than for nouns (Figure 2, bottom panels). No significant differences were found in the remaining comparisons (natural object images vs. nouns, tool images vs. nouns).

3.2.3. Source Analysis

Cortical sources of beta power modulation obtained by means of dynamic imaging of coherent sources (DICS) are illustrated in Figure 3A. Beta power modulations were most pronounced in contralateral pericentral regions, including the precentral, postcentral and inferior parietal areas that we used as ROIs. Comparing the beta desynchronization area under the curve (AUC) of the selected ROIs, rmANOVA showed significant main effects of Category (F(1,14) = 11.586, p = 0.004) and ROI (F(1.450,20.301) = 4.168, p = 0.041) and a trend towards a significant interaction between Category and Stimulus type (F(1,14) = 3.764, p = 0.073). Considering the signals for all natural object vs. tool stimuli, irrespective of modality, tools showed a significantly greater desynchronization AUC in each ROI (precentral: t(14) = −2.683, p = 0.018; postcentral: t(14) = −3.641, p = 0.003; IPL: t(14) = −2.282, p = 0.039). Comparing the two modalities separately, for images the ERD AUC was significantly greater for tool stimuli than for natural stimuli in the precentral (t(14) = −3.279, p = 0.005) and postcentral (t(14) = −3.597, p = 0.003) areas and marginally significant in the IPL (t(14) = −1.971, p = 0.069), whereas for nouns no significant difference was found (Figure 3B,C).

4. Discussion

The results of the present study are relevant to the current literature on the semantics of objects. The first element of interest is that observed graspable objects and the verbal labels of graspable objects (i.e., nouns) showed a similar modulation of motor system activity, as revealed both by RTs and by the beta rhythm measured with MEG. It is worth stressing that the beta rhythm is known to be generated in frontal and parietal areas.
For natural objects, participants gave slower motor responses than for tools, regardless of presentation modality. Since participants gave their hand motor responses with a simple manipulative action (i.e., key pressing) involving the same neural structures where graspable natural objects are represented and semantically processed (the dorso-dorsal sector of the dorsal stream), these results can be interpreted as an interference effect: the same neuronal resources were engaged at the same time in attributing meaning to the object and in performing the motor response required by the task. Hence, participants paid a cost, reflected in the slowing of their motor responses. In the present study, this modulation occurred with natural graspable objects, whether verbally or visually presented.
A similar pattern of motor responses was found in a previous study [56] that compared seen and verbally labelled natural graspable and non-graspable objects, with slower RTs for graspable objects than for non-graspable ones, again regardless of presentation modality.
Taking the results of [56] together with the present ones, one may argue that motor responses are fine-tuned to the motor representation of the processed stimuli, since non-graspable natural objects, as well as graspable tools, do not modulate motor responses in the same direction as graspable natural objects.
The interference effect found in the behavioral experiment (Experiment 1) was replicated in the MEG experiment (Experiment 2), where participants were requested to perform the same go/no-go task while the cortical beta rhythm was assessed. In Experiment 2, motor responses to natural graspable objects were confirmed to be slower than those given to tools, further supporting the notion that tools and natural graspable objects have different representations within the motor system. Coherently, the beta rhythm, as revealed by MEG, showed a weaker decrease during the processing of natural graspable objects than during that of tools. A suppression of the beta rhythm (the so-called ERD), normally recorded in motor/premotor areas, occurs when these areas are involved in the actual execution of an action or, to a lesser degree, when individuals observe or imagine an action [72,87]. In other words, our results show that, during the processing of natural stimuli, the ERD is weaker than during the processing of tools, suggesting that the motor system is less ready to give a motor response. This weaker suppression appears to be the neurophysiological correlate of the interference effect (i.e., the slowing of hand motor responses during the processing of natural graspable objects, whether verbally or pictorially presented) obtained in the behavioral task.
It is worth stressing that converging results also come from the few fMRI studies showing shared neural substrate activation during the processing of nouns and visually presented objects [88,89,90], further supporting the view of a common semantic system for nouns and their corresponding objects [91,92,93]. Similar results were obtained in behavioral, neurophysiological, and MEG studies where participants were asked to process observed hand actions and verbs expressing actions of the same category, either taken separately or combined [63,64,65,66,67,71,74,94,95].
As far as observed natural objects are concerned, the present results are in keeping with the current literature [2,16,20,24], showing that the dorsal stream is involved when participants observe natural graspable objects, as the relevant features of these objects are the motor ones. However, the present results show that a similar modulation of behavioral motor responses and beta rhythm also occurs for verbal labels referring to the same object category, suggesting that the dorsal stream was similarly involved independent of presentation modality. This evidence does not fit with the approach claiming that conceptual knowledge about an object is represented in semantic hubs distinct from the brain areas where object properties are coded [13,16,20], these semantic hubs largely coinciding with the posterior inferior parietal lobule (including the angular gyrus, IPL), middle temporal gyrus, fusiform and parahippocampal gyri, dorsomedial prefrontal cortex, IFG, PMv, and posterior cingulate gyrus [13,21].
Some authors consider the recruitment of sensorimotor areas within the dorsal stream during language processing to be a late effect related to the spread of activity from top-down cognitive processes, most likely occurring in higher-order areas involved in object identification [10]. In other words, they consider the recruitment of these areas a side effect of the activation of distinct cognitive areas crucial for semantics. This view claims that ‘‘sensory and motor information color conceptual processing, enriches it and provides it with a relational context’’. Since this additional top-down process requires time, we tend to rule out this explanation: the processing of our stimuli is time-locked at about 150 ms from stimulus presentation, a time window that rules out the occurrence of motor system recruitment as a side effect of upstream cognitive processes [64,65,71,95,96,97,98].
A second point of interest is the evidence that observed graspable tools and nouns referring to this object category do not modulate the activity of the motor system in the same manner as natural objects do. As we stated in the introduction, tools are a special class of graspable objects that imply special hand–object interactions. Manipulation of tools is mainly devoted to a specific use (i.e., functional, [28]) rather than to the simple structural grasping used for natural graspable objects. Since the structural grasping actions that we can perform on natural objects are likely shared with other species (even those phylogenetically far from human primates) and have a distinct cortical representation [28,29,53], in this context we refer to these grasping actions as “ecological” ones.
Although there is an ongoing debate on the use of tools in monkeys [99,100], there is no doubt that only humans possess specialized neural mechanisms allowing them to understand the functional properties of tools. Moreover, only humans have the capacity to generalize the use of a tool to different contexts and to build new tools depending on their needs. Such a finely developed ability seems to have its neural basis in the left IPL, which appears to be a specific sector that evolved only in humans, distinct from monkey grasping regions [54,101]. Within the dorsal stream, this area is referred to as the ventro-dorsal sector [27,28,29,102]. A further consideration supporting the notion that the use of tools is exclusive to humans comes from clinical neurology. Apraxia is a syndrome in which patients may lose the capacity to use tools properly [103,104,105]. Apparently, there is no counterpart of the apraxia syndrome in monkeys [106]. If one accepts the notion that the semantics of objects is coded where the objects are motorically represented, then processing tools should imply the involvement of the corresponding brain sector in the ventro-dorsal circuit. The results of the present study are in line with this view. Tools, whatever the presentation modality, did not modulate motor responses or the beta rhythm as natural objects did. This evidence may be explained by the fact that participants used a very simple motor act to provide their responses (pushing a button), an action represented in the circuit devoted to interactions with natural objects (ecological grasping actions) rather than in the circuit devoted to the use of tools. A similar distinction was revealed by using TMS [107] in a study where motor evoked potentials (MEPs) were obtained during the observation of graspable and non-graspable natural objects and tools, respectively.
Results showed that MEPs elicited by natural graspable objects had a smaller amplitude than those elicited by graspable tools, again suggesting that a different circuit and a different sector of premotor/motor cortex were involved in processing these two categories of objects.
One could argue that tool nouns did not affect motor responses and beta rhythms in the present experiment because, as foreseen by the current literature, nouns are processed in semantic hubs. However, if one assumes that nouns are coded in specific semantic hubs, then the nouns of tools as well as the nouns of natural objects should be coded in these hubs and, consequently, should not modulate the activity of motor areas. In other words, one should expect similar motor responses, as well as a similar modulation of the beta rhythm, when processing nouns referring to both natural graspable objects and tools. The present data, showing that only nouns of natural graspable objects modulate the activity of areas devoted to ecological grasping, further support the notion that the neural substrates of semantic processing overlap with those where the most relevant features of an object are experienced.
If semantics is coded in the areas where objects are motorically represented or perceptually experienced, then it remains to explain the role of the higher-order areas that several authors consider the actual semantic hubs [13,16,20,108]. Beyond language processing, these areas have been implicated in different tasks. Some of them also constitute the nodes of the so-called “default-mode” network, a set of functionally interconnected regions that are consistently modulated during demanding cognitive tasks [109,110] or during social cognition tasks [111,112,113]. As for prefrontal cortex areas, they have been implicated in working memory tasks [114] as well as in the re-organization and recall of simple and well-known motor acts into novel actions [115,116,117]. Finally, the IFG, including Broca’s region, is known to be endowed with hand motor representations and has a role in speech production as well as in lip movements [118,119,120]. We propose that the recruitment of these areas during noun processing and conceptualization, rather than being related to semantics, is better explained if we assume that they contribute to contextualizing the processed words, to expressing how demanding their processing is and, most likely, how much they are related to our life experiences and personal beliefs.

Supplementary Materials

The following are available online at https://www.mdpi.com/article/10.3390/brainsci12010097/s1, Table S1: Appendix—Stimuli used in Experiment 1 and 2.

Author Contributions

Conceptualization, G.B.; methodology, G.B., D.R.S., and E.V.; software E.V. and D.D.; formal analysis, E.V., D.D., F.M., and F.S.; investigation, E.V., D.D., D.R.S., G.G., F.M., and F.S.; resources, E.V., D.D., F.M., and F.S.; writing—original draft, G.B. and E.V.; writing—review and editing, G.B., G.G., D.R.S., and E.V.; visualization, E.V. and G.G.; supervision, G.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Ethics Committee of Fondazione IRCCS Istituto Neurologico Carlo Besta of Milan (approval number: 47/2012; date of approval: November 2012) and the University “Magna Graecia” of Catanzaro (approval number: 2012.40).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

All custom scripts and data contained in this manuscript are available upon request from the corresponding author, Giovanni Buccino ([email protected]).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Barsalou, L.W. Grounded cognition. Annu. Rev. Psychol. 2008, 59, 617–645. [Google Scholar] [CrossRef] [Green Version]
  2. Buccino, G.; Colagè, I.; Gobbi, N.; Bonaccorso, G. Grounding meaning in experience: A broad perspective on embodied language. Neurosci. Biobehav. Rev. 2016, 69, 69–78. [Google Scholar] [CrossRef]
  3. Fischer, M.H.; Zwaan, R.A. Embodied language: A review of the role of the motor system in language comprehension. Q. J. Exp. Psychol. 2008, 61, 825–850. [Google Scholar] [CrossRef] [PubMed]
  4. Gallese, V. Mirror neurons and the social nature of language: The neural exploitation hypothesis. Soc. Neurosci. 2008, 3, 317–333. [Google Scholar] [CrossRef] [PubMed]
  5. Kousta, S.T.; Vigliocco, G.; Vinson, D.P.; Andrews, M.; Del Campo, E. The Representation of Abstract Words: Why Emotion Matters. J. Exp. Psychol. Gen. 2011, 140, 14–34. [Google Scholar] [CrossRef] [PubMed]
  6. Pulvermüller, F. A brain perspective on language mechanisms: From discrete neuronal ensembles to serial order. Prog. Neurobiol. 2002, 67, 85–111. [Google Scholar] [CrossRef]
  7. Vigliocco, G.; Kousta, S.T.; Della Rosa, P.A.; Vinson, D.P.; Tettamanti, M.; Devlin, J.T.; Cappa, S.F. The Neural Representation of Abstract Words: The Role of Emotion. Cereb. Cortex 2014, 24, 1767–1777. [Google Scholar] [CrossRef] [Green Version]
  8. Chatterjee, A. Disembodying cognition. Lang. Cogn. 2011, 2, 79–116. [Google Scholar] [CrossRef] [Green Version]
  9. Mahon, B.Z.; Caramazza, A. The orchestration of the sensory-motor systems: Clues from neuropsychology. Cogn. Neuropsychol. 2005, 22, 480–494. [Google Scholar] [CrossRef]
  10. Mahon, B.Z.; Caramazza, A. A critical look at the embodied cognition hypothesis and a new proposal for grounding conceptual content. J. Physiol. Paris 2008, 102, 59–70. [Google Scholar] [CrossRef]
  11. Pylyshyn, Z. Return of the mental image: Are there really pictures in the brain? Trends Cogn. Sci. 2003, 7, 113–118. [Google Scholar] [CrossRef]
  12. Desai, R.H.; Binder, J.R.; Conant, L.L.; Seidenberg, M.S. Activation of Sensory–Motor Areas in Sentence Comprehension. Cereb. Cortex 2010, 20, 468–478. [Google Scholar] [CrossRef]
  13. Fernandino, L.; Binder, J.R.; Desai, R.H.; Pendl, S.L.; Humphries, C.J.; Gross, W.L.; Conant, L.L.; Seidenberg, M.S. Concept Representation Reflects Multimodal Abstraction: A Framework for Embodied Semantics. Cereb. Cortex 2016, 26, 2018–2034. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Mahon, B.Z.; Kemmerer, D. Interactions between language, thought, and perception: Cognitive and neural perspectives. Cogn. Neuropsychol. 2020, 37, 235–240. [Google Scholar] [CrossRef]
  15. Martin, A. The representation of object concepts in the brain. Annu. Rev. Psychol. 2007, 58, 25–45. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Martin, A. GRAPES-Grounding representations in action, perception, and emotion systems: How object properties and categories are represented in the human brain. Psychon. Bull. Rev. 2016, 23, 979–990. [Google Scholar] [CrossRef]
  17. González, J.; Barros-Loscertales, A.; Pulvermüller, F.; Meseguer, V.; Sanjuán, A.; Belloch, V.; Ávila, C. Reading cinnamon activates olfactory brain regions. NeuroImage 2006, 32, 906–912. [Google Scholar] [CrossRef] [PubMed]
  18. Barrós-Loscertales, A.; González, J.; Pulvermüller, F.; Ventura-Campos, N.; Bustamante, J.C.; Costumero, V.; Parcet, M.A.; Ávila, C. Reading salt activates gustatory brain regions: FMRI evidence for semantic grounding in a novel sensory modality. Cereb. Cortex 2012, 22, 2554–2563. [Google Scholar] [CrossRef] [PubMed]
  19. Ponz, A.; Montant, M.; Liegeois-Chauvel, C.; Silva, C.; Braun, M.; Jacobs, A.M.; Ziegler, J.C. Emotion processing in words: A test of the neural re-use hypothesis using surface and intracranial EEG. Soc. Cogn. Affect. Neurosci. 2014, 9, 619–627. [Google Scholar] [CrossRef] [PubMed]
  20. Mahon, B.Z. What is embodied about cognition? Lang. Cogn. Neurosci. 2015, 30, 420–429. [Google Scholar] [CrossRef] [PubMed]
  21. Binder, J.R.; Desai, R.H.; Graves, W.W.; Conant, L.L. Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cereb. Cortex 2009, 19, 2767–2796. [Google Scholar] [CrossRef]
  22. Goodale, M.A.; Milner, A.D. Separate visual pathways for perception and action. Trends Neurosci. 1992, 15, 20–25. [Google Scholar] [CrossRef]
  23. Goodale, M.A.; Milner, A.D. Two visual streams: Interconnections do not imply duplication of function. Cogn. Neurosci. 2010, 1, 65–68. [Google Scholar] [CrossRef] [PubMed]
  24. Milner, A.D.; Goodale, M.A. Two visual systems re-viewed. Neuropsychologia 2008, 46, 774–785. [Google Scholar] [CrossRef] [PubMed]
  25. Buccino, G.; Sato, M.; Cattaneo, L.; Rodà, F.; Riggio, L. Broken affordances, broken objects: A TMS study. Neuropsychologia 2009, 47, 3074–3078. [Google Scholar] [CrossRef]
  26. Chao, L.L.; Martin, A. Representation of manipulable man-made objects in the dorsal stream. NeuroImage 2000, 12, 478–484. [Google Scholar] [CrossRef] [Green Version]
  27. Rizzolatti, G.; Matelli, M. Two different streams form the dorsal visual system: Anatomy and functions. Exp. Brain Res. 2003, 153, 146–157. [Google Scholar] [CrossRef]
  28. Binkofski, F.; Buxbaum, L.J. Two action systems in the human brain. Brain Lang. 2013, 127, 222–229. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  29. Sakreida, K.; Effnert, I.; Thill, S.; Menz, M.M.; Jirak, D.; Eickhoff, C.R.; Ziemke, T.; Eickhoff, S.B.; Borghi, A.M.; Binkofski, F. Affordance processing in segregated parieto-frontal dorsal stream sub-pathways. Neurosci. Biobehav. Rev. 2016, 69, 89–112. [Google Scholar] [CrossRef]
  30. Binkofski, F.; Buccino, G. The role of the parietal cortex in sensorimotor transformations and action coding. Handb. Clin. Neurol. 2018, 151, 467–479. [Google Scholar]
  31. Gonzalez Rothi, L.J.; Ochipa, C.; Heilman, K.M. A Cognitive Neuropsychological Model of Limb Praxis. Cogn. Neuropsychol. 1991, 8, 443–458. [Google Scholar] [CrossRef]
  32. Hodges, J.R.; Spatt, J.; Patterson, K. “What” and “how”: Evidence for the dissociation of object knowledge and mechanical problem-solving skills in the human brain. Proc. Natl. Acad. Sci. USA 1999, 96, 9444–9448. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  33. Johnson-Frey, S.H. The neural bases of complex tool use in humans. Trends Cogn. Sci. 2004, 8, 71–78. [Google Scholar] [CrossRef] [PubMed]
  34. Kalénine, S.; Buxbaum, L.J.; Coslett, H.B. Critical brain regions for action recognition: Lesion symptom mapping in left hemisphere stroke. Brain 2010, 133, 3269–3280. [Google Scholar] [CrossRef]
  35. Negri, G.A.L.; Rumiati, R.; Zadini, A.; Ukmar, M.; Mahon, B.; Caramazza, A. What is the role of motor simulation in action and object recognition? Evidence from apraxia. Cogn. Neuropsychol. 2007, 24, 795–816. [Google Scholar] [CrossRef] [PubMed]
  36. Cohen, N.R.; Cross, E.S.; Tunik, E.; Grafton, S.T.; Culham, J.C. Ventral and dorsal stream contributions to the online control of immediate and delayed grasping: A TMS approach. Neuropsychologia 2009, 47, 1553–1562. [Google Scholar] [CrossRef]
  37. Whitwell, R.L.; Milner, A.D.; Goodale, M.A. The two visual systems hypothesis: New challenges and insights from visual form agnosic patient DF. Front. Neurol. 2014, 5, 255. [Google Scholar] [CrossRef] [Green Version]
  38. Van Polanen, V.; Davare, M. Interactions between dorsal and ventral streams for controlling skilled grasp. Neuropsychologia 2015, 79, 186–191. [Google Scholar] [CrossRef] [Green Version]
  39. Kopiske, K.K.; Bruno, N.; Hesse, C.; Schenk, T.; Franz, V.H. The functional subdivision of the visual brain: Is there a real illusion effect on action? A multi-lab replication study. Cortex 2016, 79, 130–152. [Google Scholar] [CrossRef] [Green Version]
  40. Uccelli, S.; Pisu, V.; Riggio, L.; Bruno, N. The Uznadze illusion reveals similar effects of relative size on perception and action. Exp. Brain Res. 2019, 237, 953–965. [Google Scholar] [CrossRef] [PubMed]
  41. Garofalo, G.; Riggio, L. Influence of colour on object motor representation. Neuropsychologia 2022, 164, 108103. [Google Scholar] [CrossRef]
  42. Boronat, C.B.; Buxbaum, L.J.; Coslett, H.B.; Tang, K.; Saffran, E.M.; Kimberg, D.Y.; Detre, J.A. Distinctions between manipulation and function knowledge of objects: Evidence from functional magnetic resonance imaging. Cogn. Brain Res. 2005, 23, 361–373. [Google Scholar] [CrossRef]
  43. Brandi, M.L.; Wohlschläger, A.; Sorg, C.; Hermsdörfer, J. The Neural Correlates of Planning and Executing Actual Tool Use. J. Neurosci. 2014, 34, 13183–13194. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  44. Chen, Q.; Garcea, F.E.; Mahon, B.Z. The Representation of Object-Directed Action and Function Knowledge in the Human Brain. Cereb. Cortex 2016, 26, 1609–1618. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  45. Chen, Q.; Garcea, F.E.; Jacobs, R.A.; Mahon, B.Z. Abstract Representations of Object-Directed Action in the Left Inferior Parietal Lobule. Cereb. Cortex 2018, 28, 2162–2174. [Google Scholar] [CrossRef]
  46. Gallivan, J.P.; Adam McLean, D.; Valyear, K.F.; Culham, J.C. Decoding the neural mechanisms of human tool use. ELife 2013, 2, e00425. [Google Scholar] [CrossRef]
  47. Hermsdörfer, J.; Terlinden, G.; Mühlau, M.; Goldenberg, G.; Wohlschläger, A.M. Neural representations of pantomimed and actual tool use: Evidence from an event-related fMRI study. NeuroImage 2007, 36, T109–T118. [Google Scholar] [CrossRef] [PubMed]
  48. Kellenbach, M.L.; Brett, M.; Patterson, K. Actions Speak Louder Than Functions: The Importance of Manipulability and Action in Tool Representation. J. Cogn. Neurosci. 2003, 15, 30–46. [Google Scholar] [CrossRef]
  49. Marques, J.F.; Canessa, N.; Cappa, S. Neural differences in the processing of true and false sentences: Insights into the nature of “truth” in language comprehension. Cortex 2009, 45, 759–768. [Google Scholar] [CrossRef]
  50. Rumiati, R.I.; Weiss, P.H.; Shallice, T.; Ottoboni, G.; Noth, J.; Zilles, K.; Fink, G.R. Neural basis of pantomiming the use of visually presented objects. NeuroImage 2004, 21, 1224–1231. [Google Scholar] [CrossRef]
  51. Buchwald, M.; Przybylski, L.; Króliczak, G. Decoding Brain States for Planning Functional Grasps of Tools: A Functional Magnetic Resonance Imaging Multivoxel Pattern Analysis Study. J. Int. Neuropsychol. Soc. 2018, 24, 1013–1025. [Google Scholar] [CrossRef]
  52. Grafton, S.T.; Fadiga, L.; Arbib, M.A.; Rizzolatti, G. Premotor cortex activation during observation and naming of familiar tools. NeuroImage 1997, 6, 231–236. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  53. Króliczak, G.; Frey, S.H. A Common Network in the Left Cerebral Hemisphere Represents Planning of Tool Use Pantomimes and Familiar Intransitive Gestures at the Hand-Independent Level. Cereb. Cortex 2009, 19, 2396–2410. [Google Scholar] [CrossRef] [Green Version]
  54. Peeters, R.; Simone, L.; Nelissen, K.; Fabbri-Destro, M.; Vanduffel, W.; Rizzolatti, G.; Orban, G.A. The Representation of Tool Use in Humans and Monkeys: Common and Uniquely Human Features. J. Neurosci. 2009, 29, 11523–11539. [Google Scholar] [CrossRef] [Green Version]
  55. Vigliocco, G.; Vinson, D.P.; Druks, J.; Barber, H.; Cappa, S.F. Nouns and verbs in the brain: A review of behavioural, electrophysiological, neuropsychological and imaging studies. Neurosci. Biobehav. Rev. 2011, 35, 407–426. [Google Scholar] [CrossRef] [PubMed]
  56. Marino, B.F.M.; Sirianni, M.; Volta, R.D.; Magliocco, F.; Silipo, F.; Quattrone, A.; Buccino, G. Viewing photos and reading nouns of natural graspable objects similarly modulate motor responses. Front. Hum. Neurosci. 2014, 8, 968. [Google Scholar] [CrossRef] [PubMed]
  57. Buccino, G.; Marino, B.F.; Bulgarelli, C.; Mezzadri, M. Fluent speakers of a second language process graspable nouns expressed in L2 like in their native language. Front. Psychol. 2017, 8, 1306. [Google Scholar] [CrossRef] [Green Version]
  58. Buccino, G.; Dalla Volta, R.; Arabia, G.; Morelli, M.; Chiriaco, C.; Lupo, A.; Silipo, F.; Quattrone, A. Processing graspable object images and their nouns is impaired in Parkinson’s disease patients. Cortex 2018, 100, 32–39. [Google Scholar] [CrossRef]
  59. Zhang, Z.; Sun, Y.; Humphreys, G.W. Perceiving object affordances through visual and linguistic pathways: A comparative study. Sci. Rep. 2016, 6, 26806. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  60. Bub, D.N.; Masson, M.E.J.; Kumar, R. Time course of motor affordances evoked by pictured objects and words. J. Exp. Psychol. Hum. Percept. Perform. 2018, 44, 53–68. [Google Scholar] [CrossRef]
  61. Horoufchin, H.; Bzdok, D.; Buccino, G.; Borghi, A.M.; Binkofski, F. Action and object words are differentially anchored in the sensory motor system—A perspective on cognitive embodiment. Sci. Rep. 2018, 8, 6583. [Google Scholar] [CrossRef]
  62. Harpaintner, M.; Sim, E.J.; Trumpp, N.M.; Ulrich, M.; Kiefer, M. The grounding of abstract concepts in the motor and visual system: An fMRI study. Cortex 2020, 124, 1–22. [Google Scholar] [CrossRef]
  63. Klepp, A.; Niccolai, V.; Buccino, G.; Schnitzler, A.; Biermann-Ruben, K. Language-motor interference reflected in MEG beta oscillations. NeuroImage 2015, 109, 438–448. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  64. Boulenger, V.; Roy, A.C.; Paulignan, Y.; Deprez, V.; Jeannerod, M.; Nazir, T.A. Cross-talk between language processes and overt motor behavior in the first 200 msec of processing. J. Cogn. Neurosci. 2006, 18, 1607–1615. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  65. Buccino, G.; Riggio, L.; Melli, G.; Binkofski, F.; Gallese, V.; Rizzolatti, G. Listening to action-related sentences modulates the activity of the motor system: A combined TMS and behavioral study. Cogn. Brain Res. 2005, 24, 355–363. [Google Scholar] [CrossRef] [PubMed]
  66. Dalla Volta, R.; Gianelli, C.; Campione, G.C.; Gentilucci, M. Action word understanding and overt motor behavior. Exp. Brain Res. 2009, 196, 403–412. [Google Scholar] [CrossRef] [PubMed]
  67. De Vega, M.; Moreno, V.; Castillo, D. The comprehension of action-related sentences may cause interference rather than facilitation on matching actions. Psychol. Res. 2013, 77, 20–30. [Google Scholar] [CrossRef]
  68. De Vega, M.; León, I.; Hernández, J.A.; Valdés, M.; Padrón, I.; Ferstl, E.C. Action Sentences Activate Sensory Motor Regions in the Brain Independently of Their Status of Reality. J. Cogn. Neurosci. 2014, 26, 1363–1376. [Google Scholar] [CrossRef]
  69. Niccolai, V.; Klepp, A.; Indefrey, P.; Schnitzler, A.; Biermann-Ruben, K. Semantic discrimination impacts tDCS modulation of verb processing. Sci. Rep. 2017, 7, 17162. [Google Scholar] [CrossRef] [Green Version]
  70. Pulvermüller, F.; Assadollahi, R.; Elbert, T. Neuromagnetic evidence for early semantic access in word recognition. Eur. J. Neurosci. 2001, 13, 201–205. [Google Scholar] [CrossRef]
  71. Sato, M.; Mengarelli, M.; Riggio, L.; Gallese, V.; Buccino, G. Task related modulation of the motor system during language processing. Brain Lang. 2008, 105, 83–90. [Google Scholar] [CrossRef] [Green Version]
  72. Pfurtscheller, G.; Lopes Da Silva, F.H. Event-related EEG/MEG synchronization and desynchronization: Basic principles. Clin. Neurophysiol. 1999, 110, 1842–1857. [Google Scholar] [CrossRef]
  73. Hari, R.; Kujala, M.V. Brain basis of human social interaction: From concepts to brain imaging. Physiol. Rev. 2009, 89, 453–479. [Google Scholar] [CrossRef] [Green Version]
  74. Moreno, I.; de Vega, M.; León, I. Understanding action language modulates oscillatory mu and beta rhythms in the same way as observing actions. Brain Cogn. 2013, 82, 236–242. [Google Scholar] [CrossRef]
  75. Brinkman, L.; Stolk, A.; Dijkerman, H.C.; de Lange, F.P.; Toni, I. Distinct Roles for Alpha- and Beta-Band Oscillations during Mental Simulation of Goal-Directed Actions. J. Neurosci. 2014, 34, 14783–14792. [Google Scholar] [CrossRef]
  76. De Lange, F.P.; Roelofs, K.; Toni, I. Motor imagery: A window into the mechanisms and alterations of the motor system. Cortex 2008, 44, 494–506. [Google Scholar] [CrossRef] [Green Version]
  77. Schnitzler, A.; Salenius, S.; Salmelin, R.; Jousmäki, V.; Hari, R. Involvement of Primary Motor Cortex in Motor Imagery: A Neuromagnetic Study. NeuroImage 1997, 6, 201–208. [Google Scholar] [CrossRef]
  78. Weiss, S.; Mueller, H. “Too Many betas do not Spoil the Broth”: The Role of Beta Brain Oscillations in Language Processing. Front. Psychol. 2012, 3, 201. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  79. Oldfield, R.C. The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia 1971, 9, 97–113. [Google Scholar] [CrossRef]
  80. R Core Team. R: A Language and Environment for Statistical Computing (3.6.3); R Foundation for Statistical Computing: Vienna, Austria, 2020. [Google Scholar]
  81. Taulu, S.; Simola, J. Spatiotemporal signal space separation method for rejecting nearby interference in MEG measurements. Phys. Med. Biol. 2006, 51, 1759–1768. [Google Scholar] [CrossRef] [PubMed]
  82. Delorme, A.; Makeig, S. EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neurosci. Methods 2004, 134, 9–21. [Google Scholar] [CrossRef] [Green Version]
  83. Gross, J.; Kujala, J.; Hamalainen, M.; Timmermann, L.; Schnitzler, A.; Salmelin, R. Dynamic imaging of coherent sources: Studying neural interactions in the human brain. Proc. Natl. Acad. Sci. USA 2001, 98, 694–699. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  84. Van Veen, B.D.; van Drongelen, W.; Yuchtman, M.; Suzuki, A. Localization of brain electrical activity via linearly constrained minimum variance spatial filtering. IEEE Trans. Biomed. Eng. 1997, 44, 867–880. [Google Scholar] [CrossRef]
  85. Friston, K.J.; Holmes, A.P.; Worsley, K.J.; Poline, J.P.; Frith, C.D.; Frackowiak, R.S.J. Statistical parametric maps in functional imaging: A general linear approach. Hum. Brain Mapp. 1994, 2, 189–210. [Google Scholar] [CrossRef]
  86. Oostenveld, R.; Fries, P.; Maris, E.; Schoffelen, J.M. FieldTrip: Open Source Software for Advanced Analysis of MEG, EEG, and Invasive Electrophysiological Data. Comput. Intell. Neurosci. 2011, 2011, 156869. [Google Scholar] [CrossRef]
  87. Hari, R.; Forss, N.; Avikainen, S.; Kirveskari, E.; Salenius, S.; Rizzolatti, G. Activation of human primary motor cortex during action observation: A neuromagnetic study. Proc. Natl. Acad. Sci. USA 1998, 95, 15061–15065. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  88. Devereux, B.J.; Clarke, A.; Marouchos, A.; Tyler, L.K. Representational similarity analysis reveals commonalities and differences in the semantic processing of words and objects. J. Neurosci. 2013, 33, 18906–18916. [Google Scholar] [CrossRef] [Green Version]
  89. Shinkareva, S.V.; Malave, V.L.; Mason, R.A.; Mitchell, T.M.; Just, M.A. Commonality of neural representations of words and pictures. NeuroImage 2011, 54, 2418–2425. [Google Scholar] [CrossRef]
  90. Simanova, I.; Hagoort, P.; Oostenveld, R.; Van Gerven, M.A.J. Modality-independent decoding of semantic information from the human brain. Cereb. Cortex 2014, 24, 426–434. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  91. Ganis, G.; Kutas, M.; Sereno, M.I. The Search for “Common Sense”: An Electrophysiological Study of the Comprehension of Words and Pictures in Reading. J. Cogn. Neurosci. 1996, 8, 89–106. [Google Scholar] [CrossRef] [PubMed]
  92. Van Doren, L.; Dupont, P.; De Grauwe, S.; Peeters, R.; Vandenberghe, R. The amodal system for conscious word and picture identification in the absence of a semantic task. NeuroImage 2010, 49, 3295–3307. [Google Scholar] [CrossRef]
  93. Vandenberghe, R.; Price, C.; Wise, R.; Josephs, O.; Frackowiak, R.S.J. Functional anatomy of a common semantic system for words and pictures. Nature 1996, 383, 254–256. [Google Scholar] [CrossRef] [Green Version]
94. Garofalo, G.; Magliocco, F.; Silipo, F.; Riggio, L.; Buccino, G. What matters is the underlying experience: Similar motor responses during processing observed hand actions and hand-related verbs. J. Neuropsychol. 2021; In press. [Google Scholar]
  95. Santana, E.J.; De Vega, M. An ERP study of motor compatibility effects in action language. Brain Res. 2013, 1526, 71–83. [Google Scholar] [CrossRef] [PubMed]
  96. Chersi, F.; Thill, S.; Ziemke, T.; Borghi, A.M. Sentence processing: Linking language to motor chains. Front. Neurorobotics 2010, 4, 4. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  97. García, A.M.; Ibáñez, A. A touch with words: Dynamic synergies between manual actions and language. Neurosci. Biobehav. Rev. 2016, 68, 59–95. [Google Scholar] [CrossRef] [PubMed]
  98. Marino, B.F.M.; Gough, P.M.; Gallese, V.; Riggio, L.; Buccino, G. How the motor system handles nouns: A behavioral study. Psychol. Res. 2013, 77, 64–73. [Google Scholar] [CrossRef]
99. Iriki, A.; Tanaka, M.; Iwamura, Y. Coding of modified body schema during tool use by macaque postcentral neurones. NeuroReport 1996, 7, 2325–2330. [Google Scholar] [CrossRef] [PubMed]
  100. Maravita, A.; Iriki, A. Tools for the body (schema). Trends Cogn. Sci. 2004, 8, 79–86. [Google Scholar] [CrossRef]
  101. Nelissen, K.; Vanduffel, W. Grasping-related functional MRI brain responses in the macaque monkey. J. Neurosci. 2011, 31, 8220–8229. [Google Scholar] [CrossRef] [Green Version]
  102. Errante, A.; Ziccarelli, S.; Mingolla, G.; Fogassi, L. Grasping and Manipulation: Neural Bases and Anatomical Circuitry in Humans. Neuroscience 2021, 458, 203–212. [Google Scholar] [CrossRef]
103. Buxbaum, L.J.; Veramonti, T.; Schwartz, M.F. Function and manipulation tool knowledge in apraxia: Knowing ‘what for’ but not ‘how’. Neurocase 2000, 6, 83–97. [Google Scholar]
  104. De Renzi, E.; Lucchelli, F. Ideational Apraxia. Brain 1988, 111, 1173–1185. [Google Scholar] [CrossRef] [PubMed]
105. Heilman, K.M.; Schwartz, H.D.; Geschwind, N. Defective motor learning in ideomotor apraxia. Neurology 1975, 25, 1018–1020. [Google Scholar] [CrossRef] [PubMed]
  106. Caminiti, R.; Chafee, M.V.; Battaglia-Mayer, A.; Averbeck, B.B.; Crowe, D.A.; Georgopoulos, A.P. Understanding the parietal lobe syndrome from a neurophysiological and evolutionary perspective. Eur. J. Neurosci. 2010, 31, 2320–2340. [Google Scholar] [CrossRef] [PubMed]
  107. Gough, P.M.; Riggio, L.; Chersi, F.; Sato, M.; Fogassi, L.; Buccino, G. Nouns referring to tools and natural objects differentially modulate the motor system. Neuropsychologia 2012, 50, 19–25. [Google Scholar] [CrossRef]
  108. Desai, R.H.; Herter, T.; Riccardi, N.; Rorden, C.; Fridriksson, J. Concepts within reach: Action performance predicts action language processing in stroke. Neuropsychologia 2015, 71, 217–224. [Google Scholar] [CrossRef] [Green Version]
  109. Raichle, M.E.; MacLeod, A.M.; Snyder, A.Z.; Powers, W.J.; Gusnard, D.A.; Shulman, G.L. A default mode of brain function. Proc. Natl. Acad. Sci. USA 2001, 98, 676–682. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  110. Raichle, M.E. The Brain’s Default Mode Network. Annu. Rev. Neurosci. 2015, 38, 433–447. [Google Scholar] [CrossRef] [Green Version]
  111. Mars, R.B.; Neubert, F.X.; Noonan, M.A.P.; Sallet, J.; Toni, I.; Rushworth, M.F.S. On the relationship between the “default mode network” and the “social brain”. Front. Hum. Neurosci. 2012, 6, 189. [Google Scholar] [CrossRef] [Green Version]
  112. Mitchell, J.P.; Macrae, C.N.; Banaji, M.R. Dissociable Medial Prefrontal Contributions to Judgments of Similar and Dissimilar Others. Neuron 2006, 50, 655–663. [Google Scholar] [CrossRef] [Green Version]
  113. Wen, T.; Mitchell, D.J.; Duncan, J. The Functional Convergence and Heterogeneity of Social, Episodic, and Self-Referential Thought in the Default Mode Network. Cereb. Cortex 2020, 30, 5915–5929. [Google Scholar] [CrossRef] [PubMed]
114. Goldman-Rakic, P.S. Regional and cellular fractionation of working memory. Proc. Natl. Acad. Sci. USA 1996, 93, 13473–13480. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  115. Buccino, G.; Vogt, S.; Ritzl, A.; Fink, G.R.; Zilles, K.; Freund, H.J.; Rizzolatti, G. Neural circuits underlying imitation learning of hand actions: An event-related fMRI study. Neuron 2004, 42, 323–334. [Google Scholar] [CrossRef]
  116. Vogt, S.; Buccino, G.; Wohlschläger, A.M.; Canessa, N.; Shah, N.J.; Zilles, K.; Eickhoff, S.B.; Freund, H.J.; Rizzolatti, G.; Fink, G.R. Prefrontal involvement in imitation learning of hand actions: Effects of practice and expertise. NeuroImage 2007, 37, 1371–1383. [Google Scholar] [CrossRef] [Green Version]
  117. Di Tella, S.; Blasi, V.; Cabinio, M.; Bergsland, N.; Buccino, G.; Baglio, F. How Do We Motorically Resonate in Aging? A Compensatory Role of Prefrontal Cortex. Front. Aging Neurosci. 2021, 13, 412. [Google Scholar] [CrossRef]
  118. Binkofski, F.; Buccino, G. The role of ventral premotor cortex in action execution and action understanding. J. Physiol. Paris 2006, 99, 396–405. [Google Scholar] [CrossRef] [PubMed]
  119. Petrides, M.; Cadoret, G.; Mackey, S. Orofacial somatomotor responses in the macaque monkey homologue of Broca’s area. Nature 2005, 435, 1235–1238. [Google Scholar] [CrossRef]
  120. Tettamanti, M.; Buccino, G.; Saccuman, M.C.; Gallese, V.; Danna, M.; Scifo, P.; Fazio, F.; Rizzolatti, G.; Cappa, S.F.; Perani, D. Listening to action-related sentences activates fronto-parietal motor circuits. J. Cogn. Neurosci. 2005, 17, 273–281. [Google Scholar] [CrossRef]
Figure 1. Experimental procedure. (A) Task timing: participants were asked to fixate the center of the screen placed in front of them. Each trial started with the presentation of the stimulus surrounded by a red frame. After 150 ms the frame turned green, and participants were allowed to respond. Participants were instructed to respond only if the stimulus referred to a real tool or to a real natural graspable object. The trial ended when participants provided their response, or after 1350 ms if no response was given. (B) Examples of stimuli: images (1), scrambled images (2), nouns (3), and pseudowords (4).
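The go/no-go trial logic described in Figure 1 can be sketched as a small classifier over response times. The outcome labels (hit, false alarm, etc.) and the assumption that the 1350 ms timeout is counted from stimulus onset are our own additions for illustration, not taken from the paper:

```python
def classify_trial(response_time_ms, is_go_stimulus):
    """Classify one trial of the task in Figure 1.

    response_time_ms: ms from stimulus onset, or None if no button press.
    is_go_stimulus: True for real tools / real natural graspable objects.
    Assumption: the 1350 ms limit is measured from stimulus onset.
    """
    GO_SIGNAL_MS = 150   # red frame turns green; responding is now allowed
    TIMEOUT_MS = 1350    # trial ends if no response is given by this point

    if response_time_ms is None:
        return "correct_rejection" if not is_go_stimulus else "miss"
    if response_time_ms < GO_SIGNAL_MS:
        return "anticipation"  # pressed before the go signal
    if response_time_ms > TIMEOUT_MS:
        return "timeout"
    return "hit" if is_go_stimulus else "false_alarm"
```

A response at 500 ms to a real-object stimulus would count as a hit; the same response to a scrambled image or pseudoword would count as a false alarm.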
Figure 2. Time-frequency representations (TFRs) for each stimulus-type comparison ((A): natural vs. tool images; (B): natural vs. tool nouns). Upper panels: mean TFR values of the sensors over the contralateral motor area for the different stimuli. Note the beta pattern of desynchronization (power reduction) and synchronization (power increase), more evident for tool stimuli. Lower panels: on the left, map of the significant difference in the beta band averaged over the 0.6–0.9 s interval for images and the 0.7–0.9 s interval for nouns; asterisks indicate p < 0.01, plus signs indicate p < 0.05. On the right, time course of the beta-band power modulation for each stimulus type; the shaded area indicates the time range where the difference was significant (p < 0.05).
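The beta desynchronization/synchronization pattern in Figure 2 is a band-limited power decrease (ERD) or increase (ERS) relative to baseline. A minimal sketch of how such a time course can be computed for a single sensor, using a band-pass filter and the Hilbert envelope, is shown below; this is one common approach, not the paper's actual FieldTrip pipeline, and the filter parameters are illustrative:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_erd_ers(signal, fs, baseline, band=(15.0, 25.0)):
    """Beta-band power time course as percent change from a baseline window.

    Negative values indicate desynchronization (ERD, power reduction);
    positive values indicate synchronization (ERS, power increase).
    baseline: (start_sample, stop_sample) of the reference window.
    """
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    # Instantaneous beta power = squared Hilbert envelope of the filtered signal
    power = np.abs(hilbert(filtfilt(b, a, signal))) ** 2
    base = power[baseline[0]:baseline[1]].mean()
    return 100.0 * (power - base) / base
```

For example, a 20 Hz oscillation whose amplitude halves after 1 s yields roughly −75% beta power (ERD) in the attenuated segment.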
Figure 3. (A) Source analysis of beta activity: source estimation, projected onto the MNI template brain, of the grand-averaged power modulation obtained by contrasting −1.5 to −0.5 s vs. 0.5 to 1.5 s with respect to cue onset in the 15–25 Hz band for each condition. For illustrative purposes, only values greater than 80% of the maximum are shown. (B,C) Beta desynchronization AUC: beta AUC values for natural and tool images (B) and nouns (C). Note that values for natural stimuli are smaller than for tool stimuli in both the image and noun conditions in all areas, confirming the main effect of Category. Asterisks indicate significant differences in t-tests. Data are represented as mean ±.
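Figure 3B,C quantifies desynchronization as an area under the curve (AUC). This excerpt does not give the exact AUC definition, so the sketch below shows one plausible formulation: integrate only the negative (ERD) part of the percent-change time course over a window, flipping the sign so that deeper desynchronization yields a larger positive area:

```python
import numpy as np
from scipy.integrate import trapezoid

def desync_auc(erd_percent, times, window):
    """Area under the beta desynchronization curve within a time window.

    Only negative (ERD) samples contribute; the sign is flipped so that
    stronger desynchronization gives a larger positive area.
    """
    mask = (times >= window[0]) & (times <= window[1])
    curve = np.clip(erd_percent[mask], None, 0.0)  # keep ERD samples only
    return -trapezoid(curve, times[mask])
```

With this definition, a flat −10% ERD sustained over a 1 s window gives an AUC of 10, and any ERS (positive) portion of the window contributes nothing.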
Table 1. Descriptive statistics of the behavioral study (Experiment 1).
              Noun                                      Image
              Mean (ms)   SD (ms)   SE (ms)             Mean (ms)   SD (ms)   SE (ms)
Natural       708         80.91     15.57               720         94.61     18.21
Tool          666         76.03     14.63               686         93.27     17.95
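The standard errors in Table 1 follow the usual relation SE = SD / √n. The sample size is not stated in this excerpt, but back-solving n = (SD / SE)² from the table's SD/SE pairs consistently implies n ≈ 27 (our inference, not a figure reported here):

```python
import math

def standard_error(sd, n):
    """Standard error of the mean from the standard deviation and sample size."""
    return sd / math.sqrt(n)

def implied_n(sd, se):
    """Sample size implied by a reported SD/SE pair: n = (SD / SE)^2."""
    return round((sd / se) ** 2)
```

For instance, the natural/noun cell gives (80.91 / 15.57)² ≈ 27, and the other three cells agree.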
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Visani, E.; Sebastiano, D.R.; Duran, D.; Garofalo, G.; Magliocco, F.; Silipo, F.; Buccino, G. The Semantics of Natural Objects and Tools in the Brain: A Combined Behavioral and MEG Study. Brain Sci. 2022, 12, 97. https://doi.org/10.3390/brainsci12010097

