Article

Effects of Increasing Stimulated Area in Spatiotemporally Congruent Unisensory and Multisensory Conditions

1 Unit for Visually Impaired People, Center for Human Technologies, Istituto Italiano di Tecnologia, via Enrico Melen 83, 16152 Genoa, Italy
2 Center of Child Neuro-Ophthalmology, IRCCS Mondino Foundation, via Mondino 2, 27100 Pavia, Italy
* Author to whom correspondence should be addressed.
Brain Sci. 2021, 11(3), 343; https://doi.org/10.3390/brainsci11030343
Submission received: 20 January 2021 / Revised: 27 February 2021 / Accepted: 27 February 2021 / Published: 9 March 2021

Abstract
Research has shown that the ability to integrate complementary sensory inputs into a unique and coherent percept on the basis of spatiotemporal coincidence, a process known as multisensory integration, can improve perceptual precision. Despite extensive research on multisensory integration, very little is known about the principal mechanisms responsible for the spatial interaction of multiple sensory stimuli. In particular, it is not clear whether the size of spatialized stimulation affects unisensory and multisensory perception. The present study aims to unravel whether increasing the stimulated area has a detrimental or beneficial effect on sensory thresholds. Sixteen typical adults were asked to discriminate unimodal (visual, auditory, tactile), bimodal (audio-visual, audio-tactile, visuo-tactile) and trimodal (audio-visual-tactile) stimulation produced by one, two, three or four devices positioned on the forearm. In the unisensory conditions, increasing the stimulated area had a detrimental effect on auditory and tactile accuracy and on visual reaction times, suggesting that the size of the stimulated area affects perception in these modalities. Concerning multisensory stimulation, our findings indicate that integrating auditory and tactile information improved sensory precision only when the stimulation area was extended to four devices, suggesting that multisensory interaction occurs over expanded spatial areas.

1. Introduction

Spatial representation arises from the reciprocal relationship between the perceiver and entities in the environment and from the integration of multiple sources of sensory information from the surroundings [1]. The importance of visual feedback for spatial representation has been widely demonstrated [2,3,4,5]. For instance, vision facilitates the representation of space in allocentric coordinates [6,7,8], while the lack of visual input significantly interferes with the development of spatial competencies and alters the allocentric perception of space [9,10,11,12]. Moreover, the presence of visual feedback can improve the spatial encoding of an event [13,14,15]. Nonetheless, it has been demonstrated that the integration of distinct sensory inputs can also enhance perceptual precision compared to unimodal stimulation, provided the stimulation is spatially and temporally congruent [16,17]. For instance, the combination of visual-auditory [18], visual-tactile [19] and auditory-tactile [20] stimuli results in enhanced spatial and temporal discrimination abilities. Moreover, reaction times are shorter when multimodal rather than unimodal stimulation is provided [18,21]. Finally, it has also been shown that auditory and tactile information is strongly biased in the spatial domain when in conflict with simultaneous visual stimuli, suggesting that visual information dominates spatial perception [22,23,24,25,26,27].
Although much evidence indicates that multisensory information enhances perceptual abilities and improves the detection and discrimination of stimuli compared to unimodal information, the mechanisms underpinning such perceptual benefits are still unknown. Several lines of evidence indicate that temporal proximity affects multisensory integration. Stevenson and colleagues [28] showed that reaction times increased when visual and auditory stimuli were asynchronous and when synchronous visuo-auditory stimuli were located in the visual periphery. Temporal proximity influences the perception of multisensory stimuli according to the spatial region where the stimulation is provided: the space outside the body is divided into peripersonal (i.e., immediately around the body [29,30,31,32]) and extrapersonal (i.e., beyond the peripersonal region [30]) areas. Consistent with this distinction, Sambo and Forster [33] found decreased reaction times to simultaneous visuo-haptic stimulation only when the stimulation occurred in peripersonal space. Several studies have also demonstrated that the spatial proximity of unisensory stimulations promotes statistically optimal sensory integration [34,35,36,37]. In line with these results, it has been shown that typical adults' performance in a size discrimination task depends on the spatial position of multiple visual and haptic stimuli, with performance improving only for spatially coincident stimulations. However, it is not clear whether the size of sensory stimulation affects perceptual accuracy, specifically whether incrementing the overall stimulated area with multiple spatially and temporally coincident stimuli would enhance or impoverish sensory discrimination. This question concerns the impact of the stimulated area's size on the perceived intensity of a stimulation: a positive result would indicate that the larger the stimulated surface area, the greater the perceived stimulus intensity.
Spatial summation effects of this kind have been demonstrated at the perceptual level (e.g., for different visual stimuli [38,39,40], tactile stimuli [41,42,43] and pain stimuli [44]) and at the cortical level (e.g., in visual cortical areas [45]), suggesting their potential role in several perceptual mechanisms. Moreover, several pieces of evidence indicate that spatial summation might explain several psychophysical phenomena, e.g., contextual effects [46,47].
In the present study, we investigated the relationship between the size of the stimulated surface and sensory discrimination by assessing how the size of sensory stimulation influences perception in unimodal (visual, auditory, tactile) and multimodal (bimodal, trimodal) conditions. We asked participants to tap a sensitized surface with the right index finger as soon as they perceived unimodal (visual, auditory, or tactile) or multimodal (combinations of unimodal stimuli) stimulations conveyed by multisensory units positioned on the left arm, and then to verbally indicate the number of stimuli perceived, independently of the stimulus modality. We hypothesized that incrementing the stimulated area would decrease the sensory threshold, thus increasing sensory discrimination, in both unisensory and multisensory conditions. Moreover, given the strong dominance of vision in perception, we expected visual information to dominate multimodal stimulation. Specifically, vision relies on a reference system based on external landmarks and facilitates the representation of space in allocentric coordinates. We therefore hypothesized that vision would promote the interaction of multiple stimuli conveyed over an increasing stimulated area of the body and enhance sensory accuracy more than modalities based on an egocentric perspective of space (e.g., touch). Conversely, we hypothesized that the absence of visual input would undermine an effective interaction between auditory and tactile stimulations, irrespective of the size of the stimulated area.

2. Materials and Methods

2.1. Participants

Sixteen sighted adults between 25 and 37 years of age (mean age: 29 ± 0.82 years, 10 females) were enrolled in the study. Participants were randomly recruited by Istituto Italiano di Tecnologia (Genoa, Italy), which provided them with monetary compensation for their participation. Participants belonged to middle- and upper-class Caucasian families living in a university town in Italy, and none of them reported visual, auditory, musculoskeletal or neurological impairments. The study was approved by the local Ethics Committee (Comitato Etico Regione Liguria, Genoa, Italy; Prot. IIT_UVIP_COMP_2019 N. 02/2020, 4 July 2020), and participants gave written informed consent to the experimental protocol, in accordance with the Declaration of Helsinki. The sample size was calculated with the free software G*Power 3.1 (accessed on 4 July 2020, from www.psycho.uni-duesseldorf.de/abteilungen/aap/gpower3/), based on the following parameters:
- effect size dz: 1.18 (Cohen’s d = 1.09; see Schiatti et al., 2020 [48]);
- α err. prob. = 0.05;
- power (1 − β err. prob.) = 0.95.
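The a priori computation can be cross-checked with a short sketch using the noncentral t distribution, which G*Power also uses for a two-tailed matched-pairs t-test. The function below is our own illustration, not part of the original analysis.

```python
import math
from scipy import stats

def paired_t_sample_size(dz, alpha=0.05, power=0.95):
    """Smallest n for a two-sided paired t-test to reach the requested
    power at effect size dz, computed via the noncentral t distribution."""
    n = 2
    while True:
        df = n - 1
        ncp = dz * math.sqrt(n)                  # noncentrality parameter
        t_crit = stats.t.ppf(1 - alpha / 2, df)  # two-sided critical value
        # Achieved power: mass of the noncentral t beyond the critical values.
        achieved = (1 - stats.nct.cdf(t_crit, df, ncp)
                    + stats.nct.cdf(-t_crit, df, ncp))
        if achieved >= power:
            return n, achieved
        n += 1

n, achieved = paired_t_sample_size(1.18, alpha=0.05, power=0.95)
```

With dz = 1.18, α = 0.05 and power = 0.95, the required sample size comes out well below the sixteen participants actually enrolled.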

2.2. Experimental Setup and Protocol

The experiment was conducted in a dark room, where participants sat in front of a table. The experimental setup consisted of five multisensory units, part of a wearable, wireless system that provides spatially and temporally coherent multisensory stimulation with real-time feedback from the user. Specifically, the system is the TechARM, entirely designed and realized by Istituto Italiano di Tecnologia (Genoa, Italy) in collaboration with the Center of Child Neuro-Ophthalmology, Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS) Mondino Foundation (Pavia, Italy), with the main intent of assessing and training perceptual functions impaired by vision loss from an early age. The system was recently validated by Schiatti and colleagues (2020) to investigate spatial perception in interactive tasks. Each unit included embedded sensors and actuators enabling visual (red, green and blue (RGB) light-emitting diode (LED)), auditory (digital amplifier and speaker), and tactile (haptic motor driver) interactions, plus a capacitive surface (capacitive sensor) on the device’s upper part to receive and record real-time inputs from the user (dimensions of a single unit: 2.5 cm × 2.5 cm × 2.5 cm; area of the upper sensitized surface: 6.25 cm2). Four units were arranged in a 2 × 2 array and positioned on each participant’s left arm, which was centrally aligned with their head, while the fifth unit was placed on the table, next to the right index finger. The stimulation area therefore ranged from 6.25 cm2 (single unit: 2.5 cm × 2.5 cm) to 25 cm2 (four units) (Figure 1). During each trial, unimodal (auditory, visual, or tactile), bimodal (audio-tactile, audio-visual, tactile-visual) or trimodal (audio-tactile-visual) stimuli with a duration of 100 ms were produced by a randomized number of units in the array (between one and four active units), drawn from 15 possible configurations. The active units produced the same temporally congruent stimulation.
Auditory stimuli were provided as a 79 dB white-noise burst at 300 Hz, visual stimuli were produced as white light by the RGB LED (luminance: 317 mcd), and tactile stimuli were conveyed by a vibromotor peripheral (vibration frequency: 10 Hz). The experimental protocol was divided into two phases: (a) a perceptual phase, in which participants tapped the upper surface of the fifth unit with the right index finger as soon as they perceived a stimulus, regardless of the kind of stimulation conveyed; (b) a cognitive phase, in which they verbally reported how many devices they believed had been active. Each stimulation was repeated three times in all configurations, for a total of 315 trials (45 trials for each of the seven stimulation levels). The experiment lasted about one hour, and short breaks were allowed at any time during the session.
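The trial structure described above can be reproduced with a short sketch. The variable names are ours, and we assume the 15 configurations are the non-empty subsets of the 2 × 2 array, consistent with "between one and four active units":

```python
from itertools import combinations

UNITS = [1, 2, 3, 4]                    # positions in the 2 x 2 array
UNIT_AREA_CM2 = 2.5 * 2.5               # 6.25 cm2 of sensitized surface per unit
MODALITIES = ["A", "V", "T", "AT", "AV", "TV", "ATV"]  # seven stimulation levels
REPEATS = 3

# All non-empty subsets of the four units: 2^4 - 1 = 15 spatial configurations.
configurations = [c for r in range(1, 5) for c in combinations(UNITS, r)]

# Every modality is paired with every configuration, three times each.
trials = [(m, c) for m in MODALITIES for c in configurations for _ in range(REPEATS)]

# Possible stimulated areas, from one active unit up to four.
areas = sorted({len(c) * UNIT_AREA_CM2 for c in configurations})

print(len(configurations), len(trials), areas)
# → 15 315 [6.25, 12.5, 18.75, 25.0]
```

This recovers the 315-trial total (45 per stimulation level) and the four area levels from 6.25 to 25 cm2 quoted in the text.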

2.3. Data Analysis and Statistics

The experiment was designed to evaluate (a) the accuracy in determining the number of active units and (b) the responsiveness to different levels of stimulation on the body. As a measure of accuracy, we computed the index correct (IC), calculated as the number of correct responses divided by the total number of trials for each configuration of stimuli and expressed as an index between 0 and 1. As a measure of responsiveness, we collected reaction times (RT), calculated as the interval between the onset of the delivered stimulation and the moment the participant tapped the fifth unit with the right index finger, expressed in seconds (s). To evaluate whether the data were normally distributed, we applied the Shapiro-Wilk test of normality in the free software R (R Foundation for Statistical Computing, Vienna, Austria). After verifying that the data did not follow a normal distribution, we ran the analysis using non-parametric statistics. We conducted two separate two-way permuted analyses of variance (ANOVAs) with IC and RT as dependent variables and the within-subject factors “stimulation” (seven levels: Auditory—A, Visual—V, Tactile—T, Audio-Tactile—AT, Audio-Visual—AV, Tactile-Visual—TV, and Audio-Tactile-Visual—ATV) and “active units” (four levels: One, Two, Three, and Four) as independent variables. The permuted Bonferroni correction for non-parametric data was applied in case of significant effects to adjust the p-values of multiple comparisons (significance level: α = 0.05).
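The two dependent measures can be illustrated with a minimal sketch on synthetic data; the real data are available only on request, so the trial records below are randomly generated placeholders:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder trial records for one condition (45 trials):
# true number of active units, reported number, and reaction time in seconds.
true_units = rng.integers(1, 5, size=45)
reported_units = rng.integers(1, 5, size=45)
reaction_times = rng.lognormal(mean=-1.0, sigma=0.3, size=45)

# Index correct (IC): proportion of trials in which the reported number of
# active units matches the true number; bounded between 0 and 1.
ic = float(np.mean(reported_units == true_units))

# Shapiro-Wilk test on the RTs: a small p-value indicates non-normality,
# which is what motivates the switch to permutation-based ANOVAs.
w_stat, p_value = stats.shapiro(reaction_times)
```

In the actual analysis the Shapiro-Wilk test rejected normality, so IC and RT were submitted to permuted ANOVAs rather than their parametric counterparts.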

3. Results

We carried out two levels of analysis: (i) the main effects of the increase of stimulated area and the types of stimulation provided on index correct (IC) and reaction times (RT); (ii) the interaction effects between increasing stimulated area and the kind of stimuli on IC and RT.
In the first level of analysis, we examined whether the increasing number of active units and the type of stimulus affected performance in terms of accuracy (IC) and responsiveness (RT). As shown in Figure 2, the size of the stimulated area significantly reduced IC only up to 18.75 cm2, with similar performance for three and four active units, independently of the kind of stimulus provided (main effect: active units; Residual Sum of Squares (RSS) = 21.91, iter = 5000, p < 2.2 × 10−16). Unlike IC, reaction times increased linearly with the number of sources (main effect: active units; RSS = 4.57, iter = 5000, p < 2.2 × 10−16). The presence of visual stimuli, alone (unimodal) or combined with auditory and/or tactile stimuli (bimodal, trimodal), increased response correctness, while unimodal auditory, unimodal tactile and bimodal audio-tactile stimuli induced lower accuracy, regardless of the stimulated surface (main effect: stimulation; RSS = 197.73, iter = 5000, p < 2.2 × 10−16). RTs were similar for unimodal tactile and bimodal visuo-tactile stimuli and higher than for the other stimuli, regardless of modality and number of active units (see Table 1 for Bonferroni corrections).
The second level of analysis investigated whether the combination of the two factors influenced participants’ performance and responsiveness. First, we compared unimodal stimuli by considering changes in IC and RT as the stimulated area increased. Figure 3A shows that the number of correct responses remained high only for visual stimuli (interaction stimulation × active units; RSS = 45.04, iter = 5000, p < 2.2 × 10−16), while it decreased linearly with the number of active units for auditory and tactile stimuli. By contrast, participants’ reactions slowed for visual (interaction stimulation × active units; RSS = 3.12, iter = 5000, p < 2.2 × 10−16) but not for auditory and tactile stimuli (see Figure 3B). This result might be interpreted as a speed–accuracy trade-off.
Second, we analyzed the trends of IC and RT for bimodal and trimodal stimuli with increasing numbers of active units. Consistent with the first level of analysis, Figure 3C highlights that vision combined with other stimuli improved performance in terms of correctness, although a linear delay in responsiveness was again observed (see Figure 3D). For audio-tactile stimuli, no significant differences between the different sizes of the stimulated area were observed for either IC or RT (see Table 1 for Bonferroni corrections).
Moreover, to further evaluate the interaction between auditory and tactile stimuli, we compared performance in the unimodal and bimodal conditions (Figure 4). IC was lower for bimodal than for both unimodal stimulations at the smallest stimulated area (6.25 cm2). However, accuracy for audio-tactile stimuli surprisingly increased at the largest stimulated area (25 cm2), overtaking auditory stimulation (which, with three active units, was even lower than tactile stimulation; see Figure 3A) but not tactile stimulation (see Table 1 for Bonferroni corrections). These findings might indicate that a bimodal interaction of audio-tactile stimuli occurred as stimulation complexity increased.
Finally, we calculated the index of errors made by participants for auditory, tactile and audio-tactile stimulations, expressed as the number of times participants reported a wrong number of active units per condition divided by the total trials per condition. Errors were not randomly distributed across the three incorrect alternatives but were predominantly close to the correct response, with a visible reduction of errors farther from the correct response: e.g., when “4” was the correct response, “3” was the most frequent report alongside “4”.

4. Discussion

The present study aims to unravel the mechanisms responsible for unisensory and multisensory spatial interaction, specifically investigating whether increasing the stimulated area has a detrimental or beneficial effect on sensory thresholds. Two main results emerged from the present work, related respectively to the unisensory and multisensory conditions of the proposed task.
In terms of unisensory processing, we found that visual information dominates perception. This finding is in line with previous work demonstrating the higher reliability of vision in perceiving simultaneous stimuli [3,49,50,51]. Research has also demonstrated that the combination of perceptual experience of the environment and visual experience drives the development of allocentric spatial skills [52,53]. Moreover, the spatial accuracy and precision of an event improve when multiple senses, congruent in space and time, are integrated [16]. However, our findings showed that when the stimulated area increases, responsiveness in the visual domain is delayed, indicating that the size of stimulation affects visual responsiveness. A possible explanation is that, when stimulation is conveyed on the body, the faster response of touch is due to the fact that touch contributes to representing space in egocentric (bodily) coordinates. Concerning hearing, we found that auditory responsiveness was not affected by the size of the stimulated area, but auditory accuracy did not improve as the stimulated area increased. This might be because auditory information is less reliable than visual information based on external landmarks and less reliable than tactile information on the body. Several studies have demonstrated that vision typically dominates the other sensory modalities in perception, producing a strong bias in case of conflicting events [22,23,24,25,26,27]. Our result might suggest that increasing the stimulated surface area on the body produced a sensory conflict between multiple sensory modalities, independent of the spatial and temporal coherence of stimulation.
Consequently, we might argue that conflicting events are resolved by vision at a more cognitive level, while auditory and tactile stimuli foster perceptual abilities when multiple, proximal stimuli add up in space. A further explanation for the late responsiveness of vision in the proposed task might be the coexistence of retinotopic and spatiotopic reference frames in building spatial maps of the environment [54,55,56,57]. Since retinotopic coordinates can induce an error signal when the fovea has to be moved to and kept on a selected target, such a reference system can be considered more viewer-centered, whereas spatiotopic frames of reference determine observer-independent properties of stimuli [58]. On this view, we might conclude that retinotopic and spatiotopic frames of reference come into conflict when perceptual features are processed, producing a significant delay in responsiveness, while spatiotopic coordinates prevail when cognitive processes take place and guide the other senses in encoding bodily space.
In terms of multisensory processing, we found that audio-tactile interaction enhances sensory accuracy more than unisensory (auditory, tactile) processing only when the stimulated area increases. Indeed, our results indicate that a significant increase in sensory accuracy is evident only when the surface area corresponds to four devices (25 cm2), suggesting that the size of the stimulated area might facilitate multisensory interaction. A possible reason is that, while vision dominates spatial representation, a cost for integrating larger auditory and tactile areas might emerge. Vision might work as a glue between the spatial coordinates of the different senses. When vision is not available, the association between auditory and tactile stimuli might bring a stronger benefit for a larger stimulation area, where unisensory uncertainty is smaller. This idea is supported by previous findings on the computation of frequency showing a convergence between auditory and tactile stimuli in case of spatiotemporal coherence [59]. In line with this view, other works have highlighted the early convergence and integration of auditory and tactile inputs at the level of the sensory cortices [60,61,62,63,64,65,66]. When vision is not available, we may suppose that touch plays a pivotal role in audio–tactile interaction, since it processes spatial information conveyed on the body in body-centered coordinates. In this sense, touch might be considered a more reliable sensory cue when multiple stimuli are conveyed simultaneously and in proximal positions. The dominance of touch over audition has been reported in previous works on spatiotemporal information processing within peripersonal borders, even though it might depend on changes in body posture [67,68].
This concept is also shown in several studies on audio–tactile interaction, demonstrating that the presence of tactile stimuli seems to impact auditory perceptual judgments more than auditory information on tactile judgments [67,69,70].

5. Conclusions

The present work aimed to unravel the role of vision, combined with audition and touch, in the representation of an increasing bodily area. Our results suggest that, since vision-driven multisensory interaction improved response accuracy as the stimulated area increased, vision provides more reliable information for encoding peripersonal space in object-centered (allocentric) coordinates. This result supports the idea that an allocentric frame of reference (vision) enhances the discrimination of spatial change when multiple stimuli occur at the same location and time. On the other hand, relying on visual cues might affect responsiveness to stimulation: depending on vision seems to carry a cost for the spatial perception of both unisensory and multisensory events, while touch might guide audition in discriminating an increasing audio-tactile area on the body. Indeed, the combination of audition and touch improved performance compared to unimodal auditory stimuli for a high number of active units, which might highlight a leading role for touch, based on body-centered spatial coordinates, in the audio–tactile interaction process. These findings point to the value of further investigating the relevance of spatial and temporal coherence when multisensory stimulation sources add up in space across developmental stages from childhood, also considering the combination of peripersonal and extrapersonal stimuli and how extrapersonal space may impact the spatial processing of multisensory contingencies.

Author Contributions

C.M., G.C. and M.G. developed the study concept and design. C.M. collected and analyzed the data. C.M. and G.C. wrote the manuscript. S.S. and M.G. provided critical inputs to review the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received support from the MYSpace project, which has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement No 948349).

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Ethics Committee of Comitato Etico Regione Liguria, Genoa, Italy (Prot. IIT_UVIP_COMP_2019 N. 02/2020, 4 July 2020).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. The ethical protocol was approved by Comitato Etico Regione Liguria, Genoa, Italy (Prot. IIT_UVIP_COMP_2019 N. 02/2020, 4 July 2020).

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.

Acknowledgments

The authors would like to thank the MYSpace project, which has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement No 948349), for the support to this study. The research presented here has been supported and funded by Unit for Visually Impaired People, Istituto Italiano di Tecnologia (Genoa, Italy) in partnership with Center of Child Neuro-Ophthalmology IRCCS Mondino Foundation (Pavia, Italy).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hart, R.A.; Moore, R.A. The development of spatial cognition: A review. In Image & Environment: Cognitive Mapping and Spatial Behavior; Aldine Transaction: London, UK, 1973. [Google Scholar]
  2. Vasilyeva, M.; Lourenco, S.F. Spatial development. In The Handbook of Life-Span Development; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2010; Volume 1. [Google Scholar]
  3. Thinus-Blanc, C.; Gaunet, F. Representation of space in blind persons: Vision as a spatial sense? Psychol. Bull. 1997, 121, 20–42. [Google Scholar] [CrossRef]
  4. Pasqualotto, A.; Proulx, M.J. The role of visual experience for the neural basis of spatial cognition. Neurosci. Biobehav. Rev. 2012, 36, 1179–1187. [Google Scholar] [CrossRef] [PubMed]
  5. Cappagli, G.; Cocchi, E.; Gori, M. Auditory and proprioceptive spatial impairments in blind children and adults. Dev. Sci. 2015, 20. [Google Scholar] [CrossRef] [PubMed]
  6. Newell, F.N.; Woods, A.T.; Mernagh, M.; Bülthoff, H.H. Visual, haptic and crossmodal recognition of scenes. Exp. Brain Res. 2005, 161, 233–242. [Google Scholar] [CrossRef]
  7. Newport, R.; Rabb, B.; Jackson, S.R. Noninformative vision improves haptic spatial perception. Curr. Biol. 2002, 12, 1661–1664. [Google Scholar] [CrossRef] [Green Version]
  8. Postma, A.; Zuidhoek, S.; Noordzij, M.L.; Kappers, A.M.L. Differences between early-blind, late-blind, and blindfolded-sighted people in haptic spatial-configuration learning and resulting memory traces. Perception 2007, 36, 1253–1265. [Google Scholar] [CrossRef] [Green Version]
  9. Ungar, S.; Blades, M.; Spencer, C. Mental rotation of a tactile layout by young visually impaired children. Perception 1995, 24, 891–900. [Google Scholar] [CrossRef]
  10. Bigelow, A.E. Blind and sighted children’s spatial knowledge of their home environments. Int. J. Behav. Dev. 1996, 19, 797–816. [Google Scholar] [CrossRef]
  11. Cattaneo, Z.; Vecchi, T.; Cornoldi, C.; Mammarella, I.; Bonino, D.; Ricciardi, E.; Pietrini, P. Imagery and spatial processes in blindness and visual impairment. Neurosci. Biobehav. Rev. 2008, 32, 1346–1360. [Google Scholar] [CrossRef]
  12. Koustriava, E.; Papadopoulos, K. Mental rotation ability of individuals with visual impairments. J. Vis. Impair. Blind. 2010, 104, 570–575. [Google Scholar] [CrossRef]
  13. Maurer, D.; Lewis, T.L.; Mondloch, C.J. Missing sights: Consequences for visual cognitive development. Trends Cogn. Sci. 2005, 9, 144–151. [Google Scholar] [CrossRef]
  14. Lepore, N.; Shi, Y.; Lepore, F.; Fortin, M.; Voss, P.; Chou, Y.; Lord, C.; Lassonde, M.; Dinov, I.D.; Toga, A.W. Pattern of hippocampal shape and volume differences in blind subjects. Neuroimage 2009, 46, 949–957. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Ruotolo, F.; Ruggiero, G.; Vinciguerra, M.; Iachini, T. Sequential vs simultaneous encoding of spatial information: A comparison between the blind and the sighted. Acta Psychol. 2012, 139, 382–389. [Google Scholar] [CrossRef]
  16. Gori, M. Multisensory Integration and Calibration in Children and Adults with and without Sensory and Motor Disabilities. Multisens. Res. 2015, 28, 71–99. [Google Scholar] [CrossRef] [PubMed]
  17. Stein, B.E.; Meredith, M.A. The Merging of the Senses; MIT Press: Cambridge, MA, USA, 1993. [Google Scholar]
  18. Miller, J. Divided attention: Evidence for coactivation with redundant signals. Cogn. Psychol. 1982, 14, 247–279. [Google Scholar] [CrossRef]
  19. Diederich, A.; Colonius, H.; Bockhorst, D.; Tabeling, S. Visual-tactile spatial interaction in saccade generation. Exp. Brain Res. 2003, 148, 328–337. [Google Scholar] [CrossRef]
Figure 1. Experimental setup of the increasing stimulated area task. The system consisted of wearable devices providing spatially and temporally coherent multisensory stimulation with real-time feedback from the user. Four units were arranged in a 2 × 2 array and positioned on each participant's left forearm, with the head centrally aligned. A further unit was placed on the table, next to the right index finger, to record reaction times when participants tapped its sensitized upper surface. Each unit measured 2.5 × 2.5 cm (6.25 cm²), for a total stimulated area of 25 cm² with four active units.
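The mapping from the number of active units to the stimulated areas reported in Figures 2–4 (6.25 to 25 cm²) follows directly from the unit dimensions. A minimal illustrative sketch (hypothetical code, not part of the study's software):

```python
# Sketch: total stimulated area on the forearm for 1-4 active
# 2.5 cm x 2.5 cm units of the 2 x 2 array described in Figure 1.
UNIT_SIDE_CM = 2.5
UNIT_AREA_CM2 = UNIT_SIDE_CM ** 2  # 6.25 cm^2 per unit

def stimulated_area_cm2(n_active_units: int) -> float:
    """Total stimulated area for 1 to 4 active units."""
    if not 1 <= n_active_units <= 4:
        raise ValueError("the array holds 1 to 4 units")
    return n_active_units * UNIT_AREA_CM2

areas = {n: stimulated_area_cm2(n) for n in range(1, 5)}
# {1: 6.25, 2: 12.5, 3: 18.75, 4: 25.0}
```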
Figure 2. Impact of increasing stimulated area on index correct (IC) and reaction time (RT). Grey symbols show each participant's IC (y-axis, between 0 and 1) as a function of RT (x-axis, in seconds). The red asterisk marks the average across all participants. The black dashed line indicates chance level (0.25). Increasing the number of active units significantly reduced IC up to 18.75 cm² (p < 0.001) and linearly increased RT up to 25 cm² (p < 0.001).
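The index correct and chance level in Figure 2 can be made concrete with a short sketch (hypothetical code, not the authors' analysis pipeline): with four possible response alternatives, chance performance is 1/4 = 0.25, and IC is simply the proportion of correctly answered trials.

```python
from typing import Sequence

N_ALTERNATIVES = 4                  # one of four units could be active
CHANCE_LEVEL = 1 / N_ALTERNATIVES   # 0.25, the dashed line in Figure 2

def index_correct(responses: Sequence[int], targets: Sequence[int]) -> float:
    """Proportion of trials answered correctly (IC in [0, 1])."""
    assert len(responses) == len(targets) and len(targets) > 0
    hits = sum(r == t for r, t in zip(responses, targets))
    return hits / len(targets)
```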
Figure 3. Impact of increasing stimulated area on index correct (IC) and reaction time (RT) with unimodal and multimodal stimuli. In (A,B), each symbol shows the mean ± standard error of IC and RT (y-axis), respectively, across participants per number of active units (x-axis) for unimodal visual (blue dots), auditory (red squares) and tactile (purple triangles) stimuli. The black dashed line indicates chance level (0.25). (A) Only visual stimuli yielded a high IC regardless of the number of active units (p = 1), whereas IC decreased linearly for the auditory (p < 0.05) and tactile (p < 0.01) modalities. (B) RT increased linearly with visual stimuli (p < 0.01), but not with auditory or tactile stimuli (p = 1). In (C,D), each symbol shows the mean ± standard error of IC and RT (y-axis), respectively, across participants per number of active units (x-axis) for bimodal audio-tactile (green rectangles), audio-visual (dark-red dots) and visuo-tactile (red and green triangles) stimuli, and for trimodal audio-visual-tactile (light-blue dots) stimuli. (C) When bimodal conditions included visual stimuli, IC remained high independently of the number of active units (p = 1), while IC was lower for audio-tactile stimuli, with no significant differences between stimulated-area sizes (p > 0.10). (D) In contrast to the IC trend, RT increased linearly only in the presence of visual stimuli (p < 0.05), with no significant difference for audio-tactile stimuli (p > 0.80).
Figure 4. Comparison of index correct (IC) between bimodal audio-tactile stimuli and unimodal auditory and tactile stimuli. Grey symbols show each participant's IC in the audio-tactile condition (y-axis, between 0 and 1). The red asterisk marks the average across all participants. With one active unit (6.25 cm²), IC was lower for bimodal than for both unimodal stimulations (p < 0.001), whereas it increased up to 18.75 cm², differing significantly from the auditory (p < 0.001) but not the tactile (p = 1.00) condition.
Table 1. Results with Bonferroni corrections. The table reports the main effects and interaction effects on index correct (IC) and reaction time (RT). V = visual, A = auditory, T = tactile; letter combinations denote bimodal and trimodal conditions; One to Four indicate the number of active units.

Main effect of Active Units (IC): One vs. Two: p < 0.001; One vs. Three: p < 0.001; One vs. Four: p < 0.001; Two vs. Three: p < 0.001; Two vs. Four: p = 0.89; Three vs. Four: p = 1.00.
Main effect of Active Units (RT): One vs. Two: p = 0.02; One vs. Three: p < 0.001; One vs. Four: p < 0.001; Two vs. Three: p < 0.001; Two vs. Four: p < 0.001; Three vs. Four: p = 0.008.
Main effect of Stimulation (IC): V vs. A/T/AT: p < 0.001; AV vs. A/T/AT: p < 0.001; TV vs. A/T/AT: p < 0.001; ATV vs. A/T/AT: p < 0.001.
Main effect of Stimulation (RT): T vs. TV: p = 1.00; T vs. A: p = 0.017; T vs. V: p = 0.004; T vs. AT: p = 0.025; T vs. AV: p < 0.001; T vs. ATV: p < 0.001; TV vs. A: p = 0.004; TV vs. V/AT/AV: p < 0.001.
Interaction Stimulation × Active Units (IC): V-One vs. V-Two/Three/Four: p = 1.00; A-One vs. A-Two/Three/Four: p < 0.001; A-Two vs. A-Three: p = 0.049; T-One vs. T-Two/Three/Four: p < 0.001; T-Two vs. T-Three: p < 0.001; T-Two vs. T-Four: p = 0.007; T-Three vs. A-Three: p = 0.038; AV/TV/ATV-One vs. AV/TV/ATV-Two/Three/Four: p = 1.00; AT-One vs. AT-Two/Four: p = 1.00; AT-One vs. AT-Three: p = 0.72; AT-Two vs. AT-Three/Four: p = 1.00; AT-Three vs. AT-Four: p = 0.19; AT-One vs. A-One: p < 0.001; AT-One vs. T-One: p < 0.001; AT-Four vs. A-Four: p < 0.001; AT-Four vs. T-Four: p = 1.00.
Interaction Stimulation × Active Units (RT): V-One vs. V-Three/Four: p < 0.001; V-Two vs. V-Four: p = 0.004; A-One vs. A-Two/Three/Four: p = 1.00; T-One vs. T-Two/Three/Four: p = 1.00; AV-One vs. AV-Two: p = 0.005; AV-One vs. AV-Three/Four: p < 0.001; AV-Three vs. AV-Four: p = 0.037; TV-One vs. TV-Two: p = 0.024; ATV-One/Two vs. ATV-Three/Four: p < 0.001; ATV-Three vs. ATV-Four: p = 0.026; AT-One vs. AT-Two: p = 1.00; AT-One vs. AT-Three: p = 0.88; AT-One vs. AT-Four: p = 0.96; AT-Two vs. AT-Three/Four: p = 1.00; AT-Three vs. AT-Four: p = 1.00.
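The Bonferroni correction applied in Table 1 multiplies each raw p-value by the number of comparisons and caps the result at 1.00, which is why several entries read p = 1.00. A minimal illustrative sketch (not the authors' analysis code):

```python
from typing import List

def bonferroni(p_values: List[float]) -> List[float]:
    """Bonferroni-adjust raw p-values: multiply each by the number of
    comparisons and cap the result at 1.0."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Three hypothetical pairwise comparisons:
adjusted = bonferroni([0.004, 0.02, 0.40])  # approximately [0.012, 0.06, 1.0]
```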
Share and Cite

MDPI and ACS Style

Martolini, C.; Cappagli, G.; Signorini, S.; Gori, M. Effects of Increasing Stimulated Area in Spatiotemporally Congruent Unisensory and Multisensory Conditions. Brain Sci. 2021, 11, 343. https://doi.org/10.3390/brainsci11030343
