Article

Neurofunctional Symmetries and Asymmetries during Voluntary out-of- and within-Body Vivid Imagery Concurrent with Orienting Attention and Visuospatial Detection

by Amedeo D’Angiulli 1,2,*, Darren Kenney 2, Dao Anh Thu Pham 1,3, Etienne Lefebvre 1,2, Justin Bellavance 2,4 and Derrick Matthew Buchanan 1,2
1 Neuroscience of Imagination, Cognition and Emotion Research (NICER) Lab, Carleton University, Ottawa, ON K1S 5B6, Canada
2 Department of Neuroscience, Carleton University, Ottawa, ON K1S 5B6, Canada
3 Department of Systems and Computer Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada
4 Department of Computer Science, Carleton University, Ottawa, ON K1S 5B6, Canada
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(8), 1549; https://doi.org/10.3390/sym13081549
Submission received: 20 June 2021 / Revised: 9 August 2021 / Accepted: 16 August 2021 / Published: 23 August 2021
(This article belongs to the Special Issue Symmetry and Asymmetry in Brain Behavior and Perception II)

Abstract:
We explored whether two visual mental imagery experiences may be differentiated by electroencephalographic (EEG) and performance interactions with concurrent orienting external attention (OEA) to stimulus location and subsequent visuospatial detection. We measured within-subject (N = 10) event-related potential (ERP) changes during out-of-body imagery (OBI)—vivid imagery of a vertical line outside of the head/body—and within-body imagery (WBI)—vivid imagery of the line within one’s own head. Furthermore, we measured ERP changes and line offset Vernier acuity (hyperacuity) performance concurrent with each imagery type, compared to baseline detection without imagery. Relative to OEA baseline, OBI yielded larger N200 and P300, whereas WBI yielded larger P50, P100, N400, and P800. Additionally, hyperacuity dropped significantly when concurrent with both imagery types. Partial least squares analysis combined behavioural performance, ERPs, and/or event-related EEG band power (ERBP). For both imagery types, hyperacuity reduction correlated with opposite frontal and occipital ERP amplitude and polarity changes. Furthermore, ERP modulation and ERBP synchronizations for all EEG frequencies correlated inversely with hyperacuity. Dipole Source Localization Analysis revealed unique generators in the left middle temporal gyrus (WBI) and in the right middle frontal gyrus (OBI), whereas the common generators were in the left precuneus and middle occipital cortex (cuneus). Imagery experiences, we conclude, can be identified by symmetric and asymmetric combined neurophysiological-behavioural patterns in interactions with the width of attentional focus.

1. Introduction

1.1. Background: Out-of-Body vs. within-Body Experience in Visual Mental Imagery, Neural and Phenomenological Aspects

Visual mental images have been defined as representations that are physically implemented in the brain as neural patterns without corresponding environmental external/exogenous stimulation to the retina [1,2,3]. They may be generated from a single or a mixture of many types of sensory, perceptual, and cognitive processes and may include preparatory visual templates, information maintenance and manipulation in visual working memory, as well as retrieval from long-term memory; these are all conditions that fit the traditional concept of ‘imagery’ (see [4] and review by [5]). In this context, the subjective experience of image strength or vividness is an important defining feature of imagery in terms of first-person variables that are reliably measurable with self-reporting and that supplement the converging behavioural and neurofunctional objectifiable correlates of imagery (see [6]). For over a century, self-reporting measures of vividness have been used as the chief research tool to capture the subjective conscious experience of imagery (see [7]).
A complementary fruitful approach has been to characterize and to study mental imagery patterns in terms of experiential (phenomenal) categories associated with their neurofunctional substrate. A tradition dating back to Sherrington ([8]) posits that exteroceptive images represent their underlying neural states as if they ‘refer’ or ‘project’ away from the body to sources out in the world, so that our subjective experiences seem to be located at those external sources rather than in the parts of the body or the brain where they are actually generated [9,10]. Generally, this family of out-of-body imagery experiences is deemed to correspond to organized topographic cortical maps that objectively code the measurable physical properties of the external visual space (e.g., texture, color, luminance, contrast).
Mental imagery can also refer to within-body experiences, i.e., internal to one’s own organism, which are associated with generating the images [11,12]. These may involve multiple bodily changes (visceral, vestibular, sensorimotor, eye movement, skeletal muscular), and in most cases these are coded as neural somatotopic maps [13,14]. The latter assumes a broad definition of interoception, as these changes involve cortical, higher order, cross-modal integrated mental representations of the body states, rather than just afferent activity in the receptors of the autonomic nervous system [15,16,17].
Damasio ([14]) suggested that the subjective phenomenal interoceptive experience may correspond to the ‘feeling’ of having an internal image and that in normal circumstances, this feeling is coupled with the self who is generating the representational enactment (for this point see also [11]). Accordingly, one current theory is that ‘second-order’ patterns or maps may link interoceptive representations describing the observer’s bodily internal changes with the exteroceptive representations describing the external object [13,18,19]. These ‘second-order’ mental images may be properly conceived as correlating self-referential interoceptive awareness with exteroceptive awareness.
Indeed, recently, the classical ventral-dorsal two-stream system has been revised [20] to incorporate the view that the predictive process leading to object identification may involve a larger, more complex network. On this account, the dorsal visual stream and the visceromotor system would integrate a stream of information under the control of the medial prefrontal cortex (PFC), whereas the ventral stream interacts with the sensory integration system, which coordinates visual and interoceptive topographical maps while being controlled by the lateral PFC. Furthermore, the anterior insular cortex is posited as a crucial integration center that takes in multimodal input from the dorso-visceromotor pathway and feeds it to the ventro-interoceptive pathway, where identification ultimately takes place (with termination in the inferior temporal cortex) [20]. Using a rationale that is compatible with predictive coding models (i.e., [21]), it is reasonable to hypothesize that during perception, top-down mental images may be compared with bottom-up sensory information to compute a prediction error, which is then used to update higher level representations and to improve future perception [22,23].
The visceromotor contribution to mental image generation has received much support, especially from eye movement research (e.g., [24]) and embodied cognition research (e.g., [25]). Similarly, the involvement of interoception has been fully exploited in theories of conditioning [26] and, most recently, in consciousness research [14]. Prediction models emphasize the link with the magnocellular neurons, specialized for the processing of low spatial frequency information (lower detail resolution) and movement detection, and parvocellular neurons for the precise processing of fine visuospatial details [20,27].
In another related context (the phenomenology of pain perception), Backryd ([28]) introduced the notion of egoception, defined as the representation and perception of an embodied someone, as opposed to the representation and perception of something outside the body. Applying Backryd’s distinction to second-order visual images, it can be argued that visual mental patterns, as internally directed cognition, can entail consciousness of two different kinds: an awareness of something as an external object, or out-of-body imagery (OBI), and an awareness of the self who perceives the object as a pattern of his/her own body states/changes/activity, or within-body imagery (WBI). It must be highlighted that following the above distinction is important for clarifying the interface between the subject’s type of conscious perspective and the underlying brain processes involved in the two types of imagery. The distinction never implies that anything is actually generated outside the body; both types of imagery are internal cognitive processes that are generated within the body (brain). However, they might recruit different types of neural organization. The validity of this possibility is what we wanted to determine.

1.2. Functional Asymmetry in Visual Mental Imagery

Typically, the term functional hemispheric asymmetry is used to describe a functional difference between symmetric or homologous structures in the two hemispheres of the brain, meaning that the same stimuli or tasks are processed by each hemisphere in a different way [29]. In particular, established forms of functional asymmetries include right-hemispheric dominance (particularly in fronto-parietal areas [30]) for visual-spatial processing, left-hemispheric dominance for language production and processing (especially for syntax) in adults [31], and the supplemental mediating role of handedness on hemispheric functions [32]. Recent research trends have highlighted that a key aspect of functional brain asymmetry is its ‘dynamic’ character: such asymmetries can change developmentally across the lifespan (for example, handedness [32]); they can be more important for behavioural output than asymmetries in the size or volume of the underlying structures, as shown by animal studies [29]; and they can be modulated with different techniques, such as computer–brain interaction and biofeedback [33].
For over forty years, eliciting the generation of different types of visual mental images has been a useful way of probing and measuring functional hemispheric asymmetries as they pertain to the high-level functions of the visual system [34]. Moreover, the impact of these mental images on hemispheric asymmetries can be manipulated and measured using visuospatial tasks such as Vernier acuity [35]. These methods may be considered in contrast to auditory asymmetries probed by traditional methods such as the dichotic listening task [36], or more specific versions of it, such as computer laterometry [37]. The notion of asymmetry being dynamically functional has also been recently demonstrated in other mental states such as fatigue [38], strain [39], emotional arousal [40], and during biofeedback [33].
The focus of the present investigation is to understand the functional and anatomical effects of different types of mental imagery processing on hemispheric asymmetries by using visual mental imagery manipulations and combining behavioural, electroencephalographic (EEG), and event-related potential (ERP) techniques, which are also traditional, frequently used methods for studying hemispheric asymmetries. For clarity, in the present paper, we will refer to the following working operational definitions specifically applied to EEG/ERP measurements. We define functional hemispheric asymmetry (or asymmetry, for short) as: a statistically significant or effect size difference in EEG/ERP activity between experimental manipulations found in an electrode or cluster of electrode sites located on one lateral side (left or right), together with no statistical effect for the same type of comparison in the homologous electrode or cluster on the opposite lateral side of the scalp’s electrode topographic configuration (i.e., head map). In contrast, we define functional hemispheric symmetry (or symmetry, for short) as: a statistically significant or effect size difference in EEG/ERP activity between experimental manipulations found in an electrode or cluster of electrode sites located on both homologous lateral sides (left and right) of the scalp’s electrode topographic configuration.
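As an illustration only, the following minimal Python sketch shows how these working definitions could be operationalized once a per-cluster significance decision is available; the function name and the example input are hypothetical and are not part of the study’s analysis pipeline.

```python
# Illustrative sketch of the working definitions of functional hemispheric
# asymmetry/symmetry given above. Inputs are booleans indicating whether the
# condition contrast reached significance in a left-side cluster and in its
# homologous right-side cluster (hypothetical helper, not the study's code).

def classify_lateral_effect(left_significant: bool, right_significant: bool) -> str:
    if left_significant and right_significant:
        return "functional symmetry"      # effect present on both lateral sides
    if left_significant or right_significant:
        return "functional asymmetry"     # effect confined to one lateral side
    return "no lateralized effect"        # no effect on either side

# Hypothetical example: a contrast significant over a left occipital cluster
# but not over its right homologue.
print(classify_lateral_effect(left_significant=True, right_significant=False))
```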
A rich research tradition stemming from the pioneering work of Kosslyn ([41,42]) has indicated the two main visual pathway streams, the parvocellular/ventral and magnocellular/dorsal systems, as the main neurofunctional infrastructure for imagery (for an updated review, see [43]). Connected with the key role of the dorsal and ventral pathways, there is substantial neuropsychological and computational empirical evidence and theoretical consensus of functional asymmetry in processing exteroceptive relations during visuospatial perception as well as imagery (see review in [44]). According to this asymmetry, one neurocognitive function determines the detection, identification, and use of exteroceptive relationship between objects in the environment as characterized by a system of categorical representations, which code in terms of verbal, global, and spatial categories such as ‘connected/disconnected’, ‘inside/outside’, ‘above/below’, ‘left/right’, and so forth [31]. In contrast, another neurocognitive function determines the exteroceptive mapping on a coordinate system, which codes them in terms of precise, metric distances and locations [30].
A (generally accepted) hypothesis on the neurobiological substrate of this function is that in the visual system, categorical relations are implemented through the small neural receptive fields of the parvocellular (P) pathway, whereas coordinate representations may be implemented in the large neural receptive field of the magnocellular (M) pathway. How this asymmetry occurs neurally is still hotly debated.
A fundamental assumption is a structural asymmetry of distribution in the hemispheres for these types of receptive fields in that the proportion of the M cells’ large receptive fields would be relatively more abundant in the right hemisphere, and conversely, the P cells’ small receptive fields would instead be relatively more abundant in the left hemisphere [42,45,46,47]. It is important to note that originally, Kosslyn introduced this assumption in the context of his theory of high-level vision, in which essentially, imagery, attention, and sensory-motor functions overlap during the perception and knowledge of space and objects. It was only in the early 2000s that the notion became more frequently used to explain aspects specific to perception. The hypothesized cytoarchitectural asymmetry in the distribution of M and P cells has never been supported directly and hinges only on a supposed correspondence between M cells and the properties of transient low-resolution fields observed in response to high stimulus frequencies, predominantly in the right hemisphere, versus the response of sustained high-resolution fields to low spatial frequencies, predominantly in the left hemisphere (see review in [48]). Barring new future anatomical discoveries, it is currently generally accepted that this asymmetry is essentially functional. In other words, it cannot be explained only by the underlying structural anatomy, and it is not completely determined by just the triggering of bottom-up filters or channels that are tuned to certain stimuli and that proceed from lower brain to higher brain stations in a feedforward manner.
Some authors have proposed that the top-down mechanisms involved in voluntarily directing attention [44,49] are necessary, at least in the earliest stages of initiating the cognitive or perceptual response, to explain how asymmetry is obtained. However, landmark studies by Kosslyn and colleagues [50,51,52,53] support the division of labor between the M neurons in the dorso-visceromotor pathway and the P neurons in the ventro-interoceptive pathway during voluntary visual mental imagery. Humans, when they are instructed to do so at will, typically generate low-resolution schematic images that are associated with the activation of dorsal visual extrastriate areas (V2-V5) and that are predominantly M receptive fields; however, as images are generated with increasing levels of detail and precision, neuronal activity undergoes ‘ventralization’, spreading out over the early ventral visual areas (through V2-V1 asymmetric connections) that contain significantly more P receptive fields [54,55,56,57]. The issue of how far the contributions of attention and imagery are distinguishable has not received a clear answer. A clear overlap, however, is demonstrated by the fact that most of the relevant experiments on both attention and imagery have involved manipulations of size, of either the attentional window or images, to probe participants’ visuospatial predictions, so both types of manipulations are confounded with the preferential response of the M vs. P receptive fields to large vs. small entities, respectively. In the next sections, we attempt to clarify the nature of the three-way relationships between mental visual imagery, attention, and perception.

1.3. The Interaction between Visual Imagery and Externally Directed Attention

The objective of the present work was to explore how the two types of phenomenal awareness, out-of-body and within-body, may influence attention and visuospatial perception during voluntary imagery tasks. Phenomenal awareness is notoriously difficult to study using only introspection; therefore, the present approach focused on examining the objective effects and characteristics of visual mental imagery by studying the possible interaction between attention and phenomenal imagery type. A similar approach was at the core of some foundational theory in phenomenology. In particular, Hegel ([58]) was the first to hypothesize a dynamic process according to which we can move (the focus of attention) voluntarily (intentionally) from the perception of ourselves to the perception of the external world. Consistent with Hegel’s intuition, a very recent body of research shows that the representations underlying mental images can be consciously and voluntarily attended [59]. In this context, voluntary internal attention can be defined as the process by which observers are able to willfully direct or shift awareness to the focus of what is consciously and voluntarily attended in visual mental images [43,60].

EEG Findings

A tradition of empirical electroencephalography (EEG) research has focused on investigating the interactions between attention and imagery. Important work [61,62,63] investigated the effects of subjective perspective in orienting attention from the intake/rejection framework [64]. This model is based on the distinction between sensory ‘intake’ tasks (i.e., externally directed or orienting attention, EOA) and cognitive ‘rejection’ processes (i.e., internally directed cognition), such as mental arithmetic, mental imagery, and other working memory tasks. In the context of this theory, it is hypothesized that the observer needs to inhibit or ‘reject’ incoming sensory information to facilitate internally directed cognition, alluding, in particular, to the importance of attention in functional imagery tasks. Interestingly, in terms of neurophysiological markers, both imagery and attention are associated with increases in alpha (~8–12 Hz) power. Specifically, rejection tasks such as mental imagery and arithmetic tasks were found to have increased alpha power in parietal sites [62,63]. While imagery and attention are dependent on increased alpha power, Schupp et al. ([65]) found that perceptual tasks were associated with lower alpha, indicating a clear difference in the information processing demands between perceptual and cognitively demanding tasks. Additionally, Klimesch et al. ([66]) found that memory performance was positively correlated with average alpha frequencies, further solidifying the difference in the alpha power between perceptual and cognitive task demands. More recently, the sensory intake rejection hypothesis has been broadened to propose alpha as a mechanism for increasing signal to noise ratios within the cortex by inhibiting processes that are unnecessary to or that conflict with the task at hand [67,68] so that the greater the task demands or cognitive load, the more inhibition that is needed and the greater the synchronization.
Seminal multimethod studies by von Stein and colleagues [69,70] using EEG coherence techniques have suggested that attention–imagery and/or perception–imagery interactions may involve the dynamic equilibration of feed-backward top-down and feed-forward bottom-up connectivity, where ‘top-down’ is defined as a prominently internal process such as mental imagery or a working memory task, while ‘bottom-up’ is defined as external processes such as orienting attention to a stimulus. According to the interpretation of their data, fast oscillations in the gamma (~50–80 Hz) and/or upper beta (~20–29 Hz) bands reflect local coherence showing connectivity circumscribed within sensory cortical areas (i.e., occipital cortex) and indicating the binding of sensory information. Lower beta (~13–19 Hz) showed medium-range inter-area connectivity (i.e., between the parietal and occipital cortex), indicating that multimodal associative processing is involved, for example, in categorization, recognition, or semantic tasks. However, slow–medium oscillations in the theta and alpha bandwidth are correlated with the long-range connectivity between the frontal and posterior cortex during internal tasks requiring relatively more top-down than bottom-up processing such as mental imagery and maintaining information in working memory. Based on comparative human EEG and animal intracortical data [71,72], the latter pattern of connectivity was most likely interpreted as involving a prominent feed-backward, frontally driven transfer of information, implicating theta and alpha synchronization as an index of top-down frontal control in calibrating the required processing trade-off between internal processes and sensory intake. Subsequently, this hypothesis received support from Sauseng et al. ([73]), who found anticorrelated coupled levels of alpha activity in the prefrontal and occipital areas during a visual working memory task with anterior to posterior latency shifts (interpreted as the anterior executive control of posterior sensory regions, i.e., top-down control). However, the effects related to alpha were only partially replicated in other follow-up studies [74].
Little research exists in terms of differentiating oscillatory activity among different types of internally driven mental operations. For example, it is not clear if one mentally driven operation such as mental imagery differs from another operation such as arithmetic calculations in terms of the alpha oscillation’s capacity to inhibit task irrelevant processes. Even less is known about differences in alpha oscillations occurring within different, albeit closely related, types of internally driven operations. Specifically, both OBI and WBI are phenomenologically independent, yet considering the relative reliance on perceptual pathways in the dorsal and ventral visual processing streams, it is reasonable to hypothesize that these two types of mental imagery could be differentiated neurophysiologically.
Despite the prevalence of alpha in both attention and imagery demanding tasks, these oscillations do little to explain instances where, for example, the orientation of one’s attention differentially impacts the processing of visual stimuli. It is unclear how two or more alpha-dependent tasks can interact to produce functionally relevant outcomes for an individual observer given that alpha oscillations are associated with the suppression of processes that conflict with the task goal. Preliminary research by Villena-Gonzales et al. ([75]) investigated whether different modalities of internally oriented imagery (auditory and visual) significantly differed from an externally oriented visual processing task. They found that all three conditions had relatively high alpha power in visual areas; however, they found that internally oriented imagery showed significantly more alpha power compared to externally oriented visual processing. These results suggest that internally directed cognitive tasks might have a higher affinity to inhibit task irrelevant processes compared to externally directed ones. While Villena-Gonzales et al. ([75]) demonstrated how these alpha-dependent tasks might differ from each other independently, it is not clear how, for example, mental imagery (i.e., WBI) could influence the processing of an externally directed task, such as a perceptual task, if performed simultaneously. As such, a crucial open question is how concurrent attention–imagery interactions would play out in terms of specific mechanisms.

1.4. ERPs of Visual Mental Imagery

Voluntary visual imagery and corresponding vividness self-reports have been linked to a specific ERP signature, sometimes labelled as P8/900 in cognitive research [76] and as late positive potential (LPP) in emotion research [77]. The best characterization of this ERP signature has been provided in several studies by Farah and colleagues [78,79]. Specifically, Farah and Perronet ([80]) reported a stronger P8/900 in the occipital electrodes for participants who reported more vivid images. The actual millisecond range of the waveform has been reported as positive voltage (typically, +2 < µV < +8), rising sharply at about 600 ms after stimulus presentation (i.e., word to be imagined), maximally peaking anywhere between 800 and 1250 ms, and resolving to baseline much later, at around 1300–1400 ms. The localization of the neural generators of the LPP signals was initially estimated to correspond to two network dynamics: one going from medial to lateral occipital electrodes, and a second one going from the medial occipito-temporal to lateral fronto-temporal electrodes [81]. A follow-up study using a similar experimental design but with different data sampling (one image generation each second) and measuring functional Magnetic Resonance Imaging (fMRI) activity [82] confirmed that the visual association cortex was engaged during the mental image generation from words. The left inferior temporal lobe (Brodmann’s Area 37) was the most consistently activated area across participants. In addition, a subgroup showed activity that extended superiorly into the occipital association cortex (BA 19).
A key aspect highlighted in the ERP literature has been that visual imagery appears to show a mix of neurofunctional symmetries and asymmetries. Consistent with the ERP findings just reviewed, several lines of evidence in clinical neuroscience and neuropsychology demonstrate that medial and inferior structures in the temporal lobe (presumably the initial neural generators of LPP, see [81]) are also consistently associated with visual mental imagery. However, Gonsalves and colleagues [83,84] showed that vivid visual imagery can involve two anticorrelated ERP activations at the right fronto-central and left occipital electrodes, which are also related to individual processing differences, as reflected by vividness rating or self-reported data.

Converging Findings from fMRI Research

A synthesis of the hypothetical anatomical networks that would putatively correspond to the ERP and EEG activity reviewed above would be best based on the most exhaustive recent coordinate-based meta-analysis on fMRI studies on image generation using the activation likelihood estimation algorithm [85]. According to the exhaustive mapping provided in that landmark study, the initial phase of the image generation process typically involves top-down driven activity from the prefrontal and frontal cortex. A subsequent or alternative route involves the recruitment of associative semantic and linguistic neural networks at the level of the medial temporal lobe (MTL), which is involved in understanding the meaning of the words and categorical processing. Consistent with the findings from Farah and her collaborators (i.e., [82]), the activation of the MTL is almost always primarily lateralized in the left hemisphere. Indeed, Winlove et al. ([85]) found that regions of the left posterior parietal cortex (i.e., precuneus) are the most consistently activated during visual mental imagery, as this network is considered a consciousness hub [86] that is also involved in attention, working memory, and vivid imagery [23,87]. A very recent analysis, however, has demonstrated that the generation of visual mental images of simple objects, such as lines, not involving verbal cues is most often bilateral [34]. The next stage in the neurological architecture involves regions of the fusiform gyrus and the peristriata densopyramidalis area in the inferior temporal cortex. In particular, the newly redefined area known as the phPIT cluster is in the inferior temporal gyrus at the posterior end of the lateral occipitotemporal sulcus (in the medial occipital cortex, for example, the cuneus), and consists of two hemifield representations, phPITd and phPITv, which share their foveal representation and vertical meridians. All evidence suggests that such retinotopic structures are the best candidates for the ‘visual buffer’ [42,43,88] where the physical implementation of the actual neural pattern corresponding to the phenomenological experience of the image occurs. The interval of activity from the MTL to the latter occipitotemporal areas is particularly relevant in the present context since the LPP is thought to reflect postsynaptic activity derived from the complex interactions of this stretch of networks. The body of neuroimaging evidence leads us to presume that the ventralization of activity in the extrastriate and primary visual cortex can, in a proportion of imagers, be associated with vivid imagery. Indeed, V1 size seems to be inversely related to reported vividness but directly related to imagery precision [89].

1.5. The Interaction between Visual Mental Imagery and Visuospatial Perception

To more appropriately understand the potential influence that internal mental imagery might have on an externally directed task, it is essential to evaluate how both imagery and visuospatial perception interact. Generally, voluntary internal mental imagery can influence perception in ways that either interfere with or facilitate perceptual acuity. In particular, visual mental imagery may lead to the formation of a short-term memory sensory trace that can bias future perception [90]. The latter could be achieved in conditions where the high-level processes supporting visual imagery may shape low-level sensory representations [91]. What is less clear, however, is how the different types of phenomenal imagery (egoceptive and exteroceptive) can influence low-level sensory representations when attention is externally directed. Advances in evolutionary and comparative neurobiology converge with cognitive neuroscience findings in showing that the putative neural substrates of exteroceptive and egoceptive imagery are complementary and partially overlap. It has been hypothesized that the pathways corresponding to both types of codes co-evolved to predict how real objects behave in the immediate future, to avoid attackers or catch prey [92,93,94], and to select and guide the most adaptive among many available actions [13,95,96,97]. Along this line of research, evidence supporting the claim that imagery has a facilitatory role in prediction is provided by the findings that, similar to the effects of perceptual learning and practice, imagery increases sensitivity in hyperacuity (i.e., gap detection) when the same stimulus is visualized repeatedly over a block of trials [98,99]. This further validates the finding that visual imagery can shape low-level sensory representation. Overall, support for a functional predictive theory of mental imagery has not been based on evidence of direct neuroanatomo-functional correlates of subjective experience. As there is not a test to indicate the role of phenomenal imagery associated with the deployment of external attention, an alternative account of the locus of imagery–perception interaction and the related implications for hemispheric asymmetry remains open [6].
Arguably, current indirect suggestions of the involvement of imagery in prediction [100] would seem to be a natural extension of older proposals that state that imagery anticipates [101] or facilitates [102] or primes [103] perception. However, facilitation is not the only possible outcome of the predictive cycle process. There is evidence that imagery biases predictive decisions in ways that do not necessarily lead to facilitation. In particular, imagery can also impair, to different degrees, the accuracy of concurrent visual perception. Initially coined the ‘Perky effect’ after its discoverer [104,105], the phenomenon has since been replicated and extended with different paradigms across multiple disciplines and has been reported under different terms (e.g., [89,106,107]). The available neurophysiological evidence suggests that interference effects on hyperacuity should be associated with ‘attenuation’ or ‘modulation’ of the underlying neural changes in early visual processing and, in particular, V1 activity [108,109].

1.6. The Present Study: Attention-Perception-Imagery Interactions and Functional Asymmetry

In this study, we primarily tested the effects of two phenomenal imagery types (out of body and within body) during an externally oriented visuospatial task in order to investigate if and how changes in phenomenal awareness can interact with visuospatial perception. To create a condition where both phenomenal imagery type and perceptual acuity could interact, participants were instructed to simultaneously imagine a simple line while being presented with another offset real line. This dual task allowed us to observe the ERP/EEG effects of different types of phenomenal internal imagery on the ERP signature and the EEG pattern characteristics of external attention to stimuli, i.e., external orienting attention (EOA). That is, if imagery has a systematic effect on the ERP/EEG associated with EOA, then there must be some common brain locus at which imagery type and perceptual processing interact. More importantly, if the interaction between imagery and perception is phenomenologically specific—that is, for example, if imagining a line within one’s body (egoceptively) affects the ERP to the offset line more than imagining a line in the environment (exteroceptively) affects the ERP to the offset—then this interaction must take place at some locus where information about the differences between exteroceptive and egoceptive maps is neurally implemented. This is especially the case given the observation (which applies to our foregoing experiment) that imagining a simple line should, as reviewed earlier [34], involve functional hemispheric symmetry. Any observed asymmetry should then be most likely associated with the phenomenological perspective induced in the imager via task instructions.
Given the fact that very few neurophenomenological studies have been conducted on the topics reviewed earlier and that there are still too many gaps in current knowledge, the following experimental study should be considered as an exploratory beginning, supported by a guided set of working hypotheses.
We hypothesized that when imagery is voluntarily produced, the dorsal or ventral higher cortical processes (i.e., frontal or temporal), which are embedded in the predictive visuospatial process, would exert ‘top-down’ modulation on occipital cortical activity. We also hypothesized that phenomenologically incongruent mental images may take away some of the neural resources otherwise allocated to attention and predictive processing responsible for hyperacuity. In other words, we hypothesized that voluntary imagery would be associated with the attenuation of the concurrent V1 processing of incoming ‘bottom-up’ input, potentially interfering with a type of perceptual prediction that the human visual system evolved to be exceptionally good at [110]; recall that the resolution involved in Vernier acuity performance is generally 5 to 10 times higher than visual acuity outside of the lab and in ecological environmental conditions [111]. Consequently, we predicted that hyperacuity would decrease when OBI and WBI were concurrent with the visuospatial detection; however, we expected more interference for the latter imagery condition than the former.
Associated with the differential decrease in behavioural performance, we expected relatively more interference on the neural activity between orienting external attention (OEA) and WBI during the simultaneous stimulus detection but relatively more interference on neural activity between OBI and OEA in the generating/recalling phase and while maintaining the image without visuospatial detection. The latter predictions follow from the hypothesis that the top-down modulation of EEG/ERP occipital activity in the OBI might follow a similar route as OEA, predominantly the dorsal fronto-parietal pathway, whereas the top-down modulation of EEG/ERP occipital activity in the WBI might follow a largely parallel route that is independent from OEA, that is, predominantly the medial temporal-parietal pathway. Critically, we would expect that the ERP activity would show asymmetries related to the width of attentional focus, such that OBI would involve a broader focus lateralized to the right hemisphere, whereas WBI would involve a narrower focus lateralized to the left hemisphere.
As reviewed, the available EEG research does not offer compelling evidence for the allegedly special and unique role of alpha, as opposed to other oscillations. This is especially true because studies have very rarely contrasted (or at least reported comparisons of) all EEG frequencies during imagery tasks. Current theoretical perspectives on the nature of EEG oscillations point to the possibility that the interactions among different external and internal mental operations such as attention and imagery are likely to involve complex multiple interactive dynamics between alpha and other oscillatory rhythms such as gamma (~30–80 Hz), beta (~13–29 Hz), and theta (~4–7 Hz). In particular, tasks that engage working memory, perceptual, and conscious processing may involve simultaneous increases or decreases in power across those frequency bands [112,113]. Therefore, following von Stein et al. [69], we predicted that both OBI and WBI would show an inverse relationship between synchronization (i.e., more desynchronization) and hyperacuity reduction, with this inverse relationship involving slow (delta and theta) and fast (gamma) EEG oscillations, reflecting long-range and visual intra-area connectivity, respectively.
Finally, in the foregoing experiment, we designed instructions aimed at minimizing uncontrolled variations in the degree of subjective imagery vividness, as we wanted to keep vividness as ‘constant’ as possible across participants in order to minimize additional confounding factors due to individual differences or fluctuations in subjective image quality. As a result, we sought to concentrate on imagery that was recalled with a high level of vividness.

2. Materials and Methods

2.1. Participants

A total of ten adults (M = 23.3 years; SD = 2.79 years; 6 males) with normal vision (all right-handed) were recruited through Carleton University’s volunteer portal and compensated with a percentage of course credit; only eligible candidates who had never participated in an imagery experiment before were admitted to the study. This final sample size was reached after three candidate participants were excluded, as they did not have sufficiently vivid mental images to complete the experimental task. Due to the COVID-19 pandemic, we were unable to recruit and test more participants or increase our sample size. For post hoc verification, a power calculation was completed in order to find the power of the lowest reported effect size in this paper (ES = 3.78, q = 0.00747). With N = 10, the power of this effect size was 0.85 (µII = −0.319 µV, µNI = −2.024 µV, σx = 0.6196 µV). In addition, we collected many converging repeated measurements, which increase the reliability and replicability of the reported results.

2.2. Task and Apparatus

Participants performed a variation of the Vernier acuity task (as in [114]), where a white offset line appeared on either the left or right side of a computer screen, and the participants were instructed to respond to which side they perceived it to be on (Figure 1A). Each participant completed a total of 150 trials, which were divided equally into three conditions: a no imagery condition (NI), where the participant simply responded to the offset line; an out-of-body imagery condition (OBI), where the participant simultaneously projected a mental image of the white line on the center of the screen; and a within-body imagery condition (WBI), where the participant simultaneously generated an image of a white line while avoiding physical or spatial representation in the environment. During both of the imagery conditions, the participants were instructed to keep the vividness of their images at a minimum rating of 5–7 (vivid–very vivid) according to the mental imagery vividness rating scale used in D’Angiulli and Reeves ([115]). To ensure that the participants were indeed generating mental images during the relevant conditions, the experimenter reminded the participant of what type of imagery was expected of them after every 5 trials for both imagery conditions. In all conditions, the participants were instructed to fixate on the center of the screen, which was indicated by two smaller white lines for the entire experiment. To ensure that the participants understood all of the instructions, they were given up to ten practice trials. Before each imagery trial, the participants were instructed to generate and hold the mental image at their own pace and to press the down arrow key when they were ready to view the next offset line. In the no imagery trials, the participants pressed the down arrow key when they were ready for the next offset line. Exactly 500 ms after the participants indicated that they were ready, the offset line appeared on the screen for 67 ms and at a visual angle of 0.157° from the center of the screen. The participants responded to which side of the screen they perceived the offset line to be on at their own pace using the respective arrow key. Each condition was separated into blocks of 10 trials, rotating between NI, OBI, and WBI. The offset line appeared 5 times on the left and 5 times on the right in a random sequence in each block. Between blocks, the participants were given as much time as they needed to rest, with a minimum of 30 seconds. Each participant completed all 150 trials.
The Vernier acuity task was displayed on a black background using a standard CRT monitor (11 × 19.5 cm). The contrast and brightness were set to minimum, and the colour was set to grey scale. The task corresponded to photopic conditions of approximate luminance of 50 cd/m² with a Weber contrast of 21:1 (as in [116]). The offset line (9.5 × 0.5 cm) was calibrated to display at an offset of 0.25 cm from the center, indicated by two short white lines (1.2 × 0.5 cm), for four refresh frames (67 ms at 60 Hz). The participants were seated at 91.44 cm from the monitor, resulting in a visual offset angle of 0.157° from the center. The total distance between the two offset lines was 0.5 cm, corresponding to a maximum visual angle of 0.314°.
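As a quick numerical check of the display geometry reported above, the following Python sketch reproduces the reported visual angles and stimulus duration from the viewing distance, offsets, and refresh rate; the values are taken from the text, and the snippet is purely illustrative rather than part of the experimental code.

```python
import math

# Display geometry as reported in the text (illustrative check, not study code).
viewing_distance_cm = 91.44   # participant-to-monitor distance
offset_cm = 0.25              # lateral offset of the line from the center

# Visual angle subtended by a single offset, in degrees.
single_offset_deg = math.degrees(math.atan(offset_cm / viewing_distance_cm))
print(round(single_offset_deg, 3))   # ~0.157 degrees, matching the reported value

# Angular separation between the two possible offset positions (0.5 cm apart).
total_offset_deg = math.degrees(math.atan(2 * offset_cm / viewing_distance_cm))
print(round(total_offset_deg, 3))    # ~0.313 degrees (reported as a maximum of 0.314)

# Stimulus duration: four refresh frames on a 60 Hz monitor.
frames, refresh_hz = 4, 60
print(round(1000 * frames / refresh_hz))  # ~67 ms
```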

2.3. Calculation of Acuity Reduction

The percentage of correct responses in the 50 NI trials was used as a measure of baseline visual acuity and was compared to the percentage of correct responses in the 50 trials of each imagery condition (OBI and WBI) to find the change in acuity in each imagery condition.
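A minimal sketch of this calculation is given below; the function names and the example accuracies are hypothetical and serve only to illustrate the baseline-versus-imagery comparison described above.

```python
import numpy as np

def percent_correct(responses, answers):
    """Percentage of correct left/right responses over a set of trials."""
    responses, answers = np.asarray(responses), np.asarray(answers)
    return 100.0 * np.mean(responses == answers)

def acuity_change(imagery_pc, baseline_pc):
    """Change in Vernier accuracy relative to the no-imagery (NI) baseline."""
    return imagery_pc - baseline_pc

# Hypothetical example with made-up accuracies (not the study's data):
baseline_pc = 92.0   # percent correct over the 50 NI trials
obi_pc = 80.0        # percent correct over the 50 OBI trials
print(acuity_change(obi_pc, baseline_pc))   # -12.0 -> acuity reduced under OBI
```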

2.4. EEG Acquisition and Preprocessing

EEG data were acquired using a 68-channel Quik-cap adhering to the 10/20 positioning system (Quik Cap, Neuroscan, Charlotte, NC, USA). The recessed Ag-AgCl electrodes were initially sampled at a rate of 1000 Hz and were amplified in reference to an electrode located on the nose tip (gain of 10; range of ±200 μV, or 400 μV peak-to-peak; accuracy of 29.80 nV/LSB) and were digitized via SynAmps2. Impedances were kept under 5 kOhm to ensure effective signal quality with minimal noise. Data were recorded using the software NeuroScan 4.5 (Compumedics, Neuroscan). In order to ensure that the eyes remained stable and that eye-movement patterns did not obscure the ERP or behavioural results, trials with ocular artifacts were detected and rejected using the automatic spatial filtering model as implemented in BESA v.5.4.28 [117] (also see [118,119]). The percentage of rejected trials was below 5% per subject (across all 150 trials).

Participant Setup and EEG Procedure

Data acquisition was completed in the Neuroscience of Imagery Cognition and Emotion Research (NICER) Lab at Carleton University. Participants completed the Vernier acuity task on a 2013 Dell monitor while the researcher was seated in an adjacent room. After signed consent was obtained, the general procedure was explained to the participants, including the details of the Vernier acuity task, a pictorial example of the 2D line to be imagined, as well as a cartoon representing the OBI and WBI. Participants were made aware that they were able to stop the experiment at any time for any reason. The Quik-cap was fitted using standard capping procedures, and one of the three cap sizes was used: 50–54 cm, 55–60 cm, or 60–65 cm. To reduce electrode impedance, participants exfoliated their scalp for 60 seconds using a comb, and an electrode prep pad (Professional Disposables International, Inc., Orangeburg, NY, USA) was then rubbed on areas of exposed skin, followed by Nuprep skin prep gel (Weaver and Company, Aurora, CO, USA). Finally, Electro-Gel (Electro-Cap International, Inc., Eaton, OH, USA) was injected with a syringe around each electrode in order to further reduce impedance. Electrode configuration checks were then performed before recording began for each participant: eyes open and closed were matched with beta and alpha waves, respectively; blinking, looking upwards, and looking downwards were matched with the spikes in the electrodes FP1, FP2, IO1, and IO2; looking left and right were matched with the spikes in electrodes IO1 and IO2; and teeth gritting was matched with the noisy signal from electrodes T7, T8, TP9, and TP10. Before beginning the experiment, participants were once again given thorough instructions on how to complete the task, were given descriptions of exteroceptive versus interoceptive mental imagery, and were given one practice block. The distance between the participant and the monitor was exactly 91.44 cm, and they were asked to sit upright with their feet flat on the ground and with their left hand on their lap. Participants were instructed to reduce muscle and eye movement aside from pressing the down arrow to indicate image ready and the left or right arrows to indicate line offset. Before each block, the participants were informed that they would be starting either the NI, OBI, or WBI condition. The entire experiment lasted approximately 3 hours for each participant. Participants were debriefed before leaving the lab.

2.5. ERP Processing and Analysis

EEG and ERP signal processing followed a pipeline analysis made up of selected (but differently ordered) steps of the Mass Univariate Analysis and Permutation Statistics (MUAPS) approach (see [120] for an introduction). The three processing steps were: (1) clustering (following standard epoching); (2) permutation statistics; and (3) False Discovery Rate (FDR) correction. The analysis was designed by first using the Mass Univariate Toolbox (http://openwetware.org/wiki/Mass_Univariate_ERP_Toolbox, accessed on 20 August 2021). Subsequently, MATLAB 2017b was used for all ERP offline processing (down-sampling, clustering, grand averaging, t-testing, binomial testing for type I error determination, and FDR multiple comparisons correction).

2.5.1. ERP Epoching and Clustering

Data for each condition were divided into two epochs: ‘offset response’, which used the appearance of the offset line as the event marker, with a −100 ms pre-stimulus and 1000 ms post-stimulus window, and ‘image ready’, which used the image ready response as the event marker, with a −500 ms pre-stimulus and 500 ms post-stimulus window (Figure 1).
The ERPs were downsampled from 1000 Hz (1 sample per ms) to 40 Hz (1 sample per 25 ms) in order to simplify the time series analysis of the major components [121]. The mean and standard deviation of each new data point were calculated by taking the pooled between-trial mean and root variance of the old data points. A total of three grand average ERPs were calculated by taking the pooled and weighted mean and root variance of the individual ERPs, which were weighted by the number of successful (artifact-free) trials that the subject completed for each experimental condition. Finally, electrodes were clustered into 21 regions of interest (ROIs), with 12 irrelevant electrodes (orbitofrontal, ocular, cerebellar, and reference electrodes) being removed in the process (Figure 1C).
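The following simplified Python sketch illustrates the 1000 Hz to 40 Hz reduction into 25 ms bins; unlike the actual procedure, which pools between-trial means and root variances, this illustration simply pools the samples within each bin of a single channels-by-time array, and the array shapes are hypothetical.

```python
import numpy as np

def downsample_erp(erp_1khz, factor=25):
    """
    Reduce a 1000 Hz ERP array (channels x samples) to 40 Hz (1 sample per
    25 ms) by pooling each consecutive block of 25 samples, returning the
    per-bin mean and standard deviation. Simplified illustration only.
    """
    n_ch, n_samp = erp_1khz.shape
    n_samp -= n_samp % factor                      # drop any incomplete final bin
    binned = erp_1khz[:, :n_samp].reshape(n_ch, -1, factor)
    return binned.mean(axis=2), binned.std(axis=2)

# Hypothetical example: 64 channels, 1100 samples (-100 to 1000 ms at 1000 Hz).
erp = np.random.default_rng(0).normal(size=(64, 1100))
mean_40hz, sd_40hz = downsample_erp(erp)
print(mean_40hz.shape)   # (64, 44) -> 44 bins of 25 ms each
```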

2.5.2. ERP Statistical Permutations Analysis

Initial ERP statistical analysis (Figure 2) involved multiple two-sided independent measures t-tests, where the resulting p-value distribution was compared against a binomial distribution (assuming P(H0) = 0.05, N = 1092) in order to assess whether the number of significant tests exceeded the number of type I errors expected by chance from performing multiple t-tests (see [122]).
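A schematic version of this first-pass analysis is sketched below: a two-sided t-test at every cluster-by-time-bin combination, followed by a binomial check of how many ‘significant’ tests would be expected by chance alone. The data, array shapes, and random seed are hypothetical; only the logic follows the description above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
imagery = rng.normal(size=(10, 21, 52))      # subjects x ROI clusters x time bins
no_imagery = rng.normal(size=(10, 21, 52))   # hypothetical data for illustration

# Two-sided t-test at every cluster x time bin.
t_vals, p_vals = stats.ttest_ind(imagery, no_imagery, axis=0)

n_tests = p_vals.size                        # 21 x 52 = 1092 comparisons here
n_sig = int((p_vals < 0.05).sum())

# Probability of observing at least n_sig "hits" if all null hypotheses were
# true, i.e., the binomial check against P(H0) = 0.05 over N comparisons.
p_chance = stats.binom.sf(n_sig - 1, n_tests, 0.05)
print(n_tests, n_sig, round(p_chance, 3))
```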

2.5.3. False Discovery Rate Correction

In the second, fine-tuned analysis, a Benjamini–Hochberg false discovery rate (FDR) procedure was applied to the p-values (converting them to q-values) to further correct for multiple comparisons. The usage of the two-sided t-test assumed that the voltage distribution at each time point followed a Gaussian distribution, where the imagery conditions could be assessed for significance against the no imagery condition. After binomial testing and FDR correction, there were significant q-values in three clusters of interest (displayed on the periphery of Figures 2–4). The q-values for each time bin are displayed on the y-axis in reverse order with a scale of log20 (in order to clearly display q-values surpassing the significance threshold).
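For reference, a minimal implementation of the Benjamini–Hochberg conversion of p-values into q-values (adjusted p-values) is sketched below; this is a generic textbook version of the procedure, not the exact routine used in the study.

```python
import numpy as np

def bh_qvalues(p_vals):
    """Benjamini-Hochberg step-up procedure: p-values -> q-values (FDR)."""
    p = np.asarray(p_vals, dtype=float)
    m = p.size
    order = np.argsort(p)
    scaled = p[order] * m / np.arange(1, m + 1)          # p_(i) * m / i
    # Enforce monotonicity from the largest p-value downwards.
    q_sorted = np.minimum.accumulate(scaled[::-1])[::-1]
    q = np.empty_like(q_sorted)
    q[order] = np.clip(q_sorted, 0.0, 1.0)
    return q

# Hypothetical example:
print(bh_qvalues([0.001, 0.010, 0.030, 0.200]))   # [0.004 0.02  0.04  0.2 ]
```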

2.6. Partial Least Squares Analysis

Subsequent statistical analyses relied on permutations and bootstrap sampling using partial least squares (PLS). A total of two PLS variations were applied, as they were deemed useful for further ERP analysis and we were interested in cross-validating the results of the first analysis with a second data-driven technique. As an exploratory analysis, behavioural PLS correlation and contrast-task PLS (mean-centered) analyses were applied to the ERPs in order to find the relationship between brain activity and Vernier acuity scores. Finally, in keeping with the other hypotheses and predictions, we once again applied both the contrast-task PLS and the behavioural PLS to the event-related band powers (ERBP) of the three conditions.
PLS is especially well suited for the analysis of brain activity across many different experimental designs, as it uses a multiple comparisons approach in order to extract patterns of maximal covariance within the dataset that can be attributed to specific aspects of the experimental design, such as conditions or scores [123]. Multiple variations of PLS exist, such as contrast task PLS, which finds differences between a priori contrasts (in this case the three experimental conditions), and behavioural PLS, which finds correlations between behavioural variables (in this case the condition’s raw Vernier acuity scores). A covariance matrix is created between the specified measure of brain activity and the specified design, where PLS applies singular value decomposition in order to assess significance. Latent variables (LVs) characterize the distributed patterns of neural activity that can be compared for similarities or differences between participant groups and/or experimental conditions. LVs are a set of linear combinations of the initial variables coming from the two compared data blocks and that maximally covary with the corresponding contrasts [123]. Specifically, each LV consists of a set of singular values that describe the effect size and a set of singular vectors or weights that identify the contribution of each initial variable to the LVs [124]. To obtain our LVs, we followed the standard procedure of computing saliences. Saliences can be considered as the equivalent to the loadings in principal component analysis (PCA), and latent variables are similar to PCA components. Our saliences were obtained by decomposing the correlation matrix from the two blocks of data. In our analysis, we always assumed one LV. The only exception was the very last exploratory multivariate analysis investigating the relationship between ERP amplitudes and EEG desynchronization, and for the latter analysis, we assumed two LVs.
Bootstrap sampling was used to estimate the standard error in PLS correlation [123]. Using the dataset, a bootstrap sample was created by repeated random sampling with replacement. Error was estimated by applying the PLS correlation (PLSC) to this bootstrap sample. Due to the small sample size of this experiment, LVs were assessed for significance using 100 regular and 100 split-half permutations as per the guidelines provided by Kovacevic et al. ([125]). The LVs were calculated using 100 bootstrap samples and a minimum threshold confidence interval of 95%. The LV confidence interval of each group on the design panel assesses the relative contribution to the bootstrap model and determines which regions of interest are the most reliable [124,126]. All of the groups can be considered significant with respect to the bootstrap matrix because their 95% confidence interval did not cross the zero mark.
The statistical significance of a correlation for a latent variable was defined by a p-value calculated from Fisher’s nonparametric estimation of sampling distributions. In the latter procedure, the rows of each matrix were randomly rearranged, and the PLS correlation was then re-applied; this process involved 1000 iterations that estimated the probability distribution of the singular values assuming the null hypothesis. Bootstrapping was used to assess the reliability of each original variable (i.e., electrode clusters at each time bin) that contributed to the latent variable. Bootstrap ratios were calculated for each original variable for this purpose. Each of these was defined as the ratio of the weights to the standard errors estimated by bootstrapping. Hence, the larger the ratio, the larger the weight (i.e., contribution to the LV) and the smaller the standard error (i.e., higher stability). For clarity, we expressed bootstrap ratios as z-scores, given their equivalence under an approximately normal bootstrap distribution [127].
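To make the PLS logic concrete, the sketch below shows a schematic behavioural PLS correlation: z-scored brain measures are correlated with a behavioural score across subjects, the correlation matrix is decomposed by SVD into singular values (effect sizes) and saliences, and the leading latent variable is tested by permuting the behavioural scores. The data, shapes, and permutation count are hypothetical, and the study itself used a dedicated PLS toolbox rather than this simplified code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sub = 10
brain = rng.normal(size=(n_sub, 21 * 44))     # e.g., clusters x time bins, flattened
behav = rng.normal(size=(n_sub, 1))           # e.g., Vernier acuity scores

def zscore(x):
    return (x - x.mean(axis=0)) / x.std(axis=0, ddof=1)

def pls_lv1(brain, behav):
    """Leading latent variable of a behavioural PLS correlation (schematic)."""
    r = zscore(behav).T @ zscore(brain) / (len(behav) - 1)   # behaviour-brain correlations
    u, s, vt = np.linalg.svd(r, full_matrices=False)
    return s[0], vt[0]          # singular value (effect size) and brain saliences

s_obs, saliences = pls_lv1(brain, behav)

# Permutation test: shuffle the behavioural scores across subjects and recompute.
null_s = np.array([pls_lv1(brain, rng.permutation(behav))[0] for _ in range(1000)])
p_value = (null_s >= s_obs).mean()
print(round(s_obs, 3), round(p_value, 3))
```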

2.7. Event-Related Frequency Band Power (ERBP)

The fast Fourier transform (FFT) is an algorithm that divides a time domain signal into its frequency components [128]. We applied FFT to the 1000 Hz data (not the data downsampled to 40 Hz) in order to measure the event-related band power (ERBP) for each epoch, which can be achieved by using the power-in-bands function as implemented in Brain Electric Source Analysis (BESA v.5.4.28; http://www.besa.de/, accessed on 20 August 2021). The electrodes were then clustered in order to align with the ERP figures. Event-related frequency band synchronization (ERS) was marked by a percent increase in the frequency band power in a given region compared to the prestimulus resting baseline, whereas event-related desynchronization (ERD) was marked by a percent decrease in the frequency band power compared to the prestimulus resting baseline in a given region. Generally, it has been found that ERD (which roughly reflects signal complexity) is associated with the cortical regions engaged in task-related information processing [129]. Furthermore, the ERD of the alpha and beta bands has been linked to cortical excitability during sensory and motor tasks [130,131,132,133,134,135,136]. Finally, ERD has also been correlated with various perceptual benefits, such as improved visuospatial acuity and processing [136,137,138,139,140,141].
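The ERD/ERS convention described above can be sketched as follows; this is a bare-bones FFT band-power estimate with a percent change from the prestimulus baseline, not a reproduction of BESA’s power-in-bands routine, and all signals here are simulated.

```python
import numpy as np

def band_power(signal, fs, band):
    """Mean FFT power of a 1-D signal within a frequency band (Hz)."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return power[mask].mean()

def erd_ers_percent(epoch, baseline, fs=1000, band=(8, 12)):
    """
    Event-related (de)synchronization as a percent change in band power
    relative to the prestimulus baseline: negative = ERD, positive = ERS.
    """
    p_epoch = band_power(epoch, fs, band)
    p_base = band_power(baseline, fs, band)
    return 100.0 * (p_epoch - p_base) / p_base

# Hypothetical single-channel example (alpha band, 1000 Hz data):
rng = np.random.default_rng(2)
baseline = rng.normal(size=500)    # -500 to 0 ms prestimulus segment
epoch = rng.normal(size=1000)      # 0 to 1000 ms post-stimulus segment
print(round(erd_ers_percent(epoch, baseline), 1))
```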

2.8. Source Dipole Localization Analysis

The standard procedures collected in the Brain Electric Source Analysis methods toolbox suite (BESA v.5.4.28; http://www.besa.de/, accessed on 20 August 2021) were used to perform an initial independent component analysis (ICA) decomposition to group the data into the electrode clusters corresponding to those of Figure 1C. A second ICA was performed on the signals from all of the electrodes across the scalp to identify new electrode cluster components using the FastICA algorithm [142], which is available in the EEGLAB toolbox [143]. The ICA method can estimate the location and timing of components with reasonable reliability. However, ICA cannot provide an estimate of the absolute magnitude of each component, since there is an intrinsic confound between the strength of a component and the attenuation due to the distance from the measurement point. To overcome this ambiguity, the entire ERP dataset was first ‘binned’ and then converted to a series of topographic maps.
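For illustration, a decomposition in the same spirit can be run with the FastICA implementation in scikit-learn; this sketch uses toy data and makes no claim about the BESA/EEGLAB settings actually used.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
data = rng.normal(size=(5000, 32))          # toy (n_samples, n_channels) EEG matrix

ica = FastICA(n_components=20, random_state=0, max_iter=1000)
sources = ica.fit_transform(data)           # (n_samples, n_components) component time courses
mixing = ica.mixing_                        # (n_channels, n_components) scalp projections
print(sources.shape, mixing.shape)
# The scale of a component trades off against the scale of its topography (the magnitude
# ambiguity noted above), so topographies are interpreted only up to sign and scale.
```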
The next steps in our source analysis closely followed procedures that have been presented in detail elsewhere [144]; here, we describe the most salient details. The EEGLAB suite contains a graphical editing program that allows the averaged ERP epochs to be represented as topographic maps binned into clips of 10 ms. This permits the clips to be put together in sequence, resulting in an ‘ERP movie’, that is, a capture of the time course of the dynamic ERP activity. Following a review of the movies, specific single ‘stand still’ topographic maps were selected for further analysis so that they consistently captured the scalp activity at the mid-points of the averaged standardized time intervals (i.e., previously binned using a uniform ranking procedure scaled nonparametrically across the averaged data of all of the participants). Maps were only extracted for the image ready epoch. More precisely, we only considered the interval designed for the recall of the image before the manual response, which signaled that the participants had recalled the image (i.e., −500 to 0 ms, see Figure 1B).
The DIPFIT program module in EEGLAB was used to estimate dipoles in the ERP data that would explain the components obtained from the initial ICA. Each dipole represents a cortical area where the parallel activity of several thousand neurons produces the combined electric field responsible for the EEG signal picked up at the scalp. The DIPFIT software is able to find a parsimonious number (usually one or two) of dipoles for each of the specific regions associated with the independent components.
The EEGLAB MRI-based spherical head model with standard adult Talairach coordinates was selected. As a first step, we found the labels of the brain regions to which the dipole locations best corresponded by using the most recently updated Talairach database [145]. We used a built-in software function to search for the nearest grey matter within concentric cubes (voxels) in the following specific range: from a minimum of ±1 mm up to a maximum of ±5 mm relative to the exact dipole origin. With this procedure, the nearest grey matter searches involve concentric cube searches of varying width. Typically, the software searches consecutively larger cubes until a grey matter label is identified at the finest resolution specified or, at most, within a cube with an outer width of 11 mm. We elected to follow this procedure because it is the most veridical and therefore leaves open the possibility of finding no grey matter match.
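A schematic version of the expanding-cube search described above might look like the following sketch, where `labels` is a hypothetical 3-D array of anatomical labels on a 1-mm grid and `grey_labels` is the set of codes counted as grey matter; neither corresponds to the actual Talairach database format.

```python
import numpy as np

def nearest_grey_label(labels, voxel, grey_labels, max_half_width=5):
    """Expand a cube around `voxel` in 1-mm steps (+/-1 up to +/-5 mm, i.e., an 11-mm cube)
    until a grey-matter label is found; return (label, radius) or (None, None) if no match."""
    x, y, z = voxel
    for r in range(1, max_half_width + 1):
        cube = labels[x - r:x + r + 1, y - r:y + r + 1, z - r:z + r + 1]
        hits = np.isin(cube, list(grey_labels))
        if hits.any():
            return cube[hits].flat[0], r
    return None, None   # veridical outcome: no grey matter within the 11-mm cube

# Toy usage: grey matter voxel placed 2 mm away from the dipole origin
labels = np.zeros((50, 50, 50), dtype=int)
labels[27, 25, 25] = 1
print(nearest_grey_label(labels, (25, 25, 25), grey_labels={1}))   # -> (1, 2)
```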
As a final step, the Talairach MRI coordinates from the database [145] were translated to the coordinates parametrized in the Yale BioImage Suite [146]. The translation was performed by entering the former coordinates into the latter system and matching the anatomical labels. Once completed, the translation underwent a quality check in which the match between the labels from the two coordinate systems was confirmed by consensus between two independent anonymous judges with extensive practical, clinical, and theoretical expertise in imaging and neuroanatomy.

2.9. Statistical Analysis Strategy

Our analytic strategy followed the approach of converging statistical evidence [147], meaning that significance by hypothesis testing (rejecting the null) and clinical/practical importance as reflected by effect sizes counted equally towards inference on a given tested result. These approaches could provide independent or joint supportive evidence for the existence of an effect. Importantly, for any given aspect of particular interest, we sought the best supporting evidence by applying the same underlying principle of MUAPS, conducting multiple replication tests via independent measurements related to the aspect of interest.

3. Results

3.1. Hyperacuity Behavioural Performance

All behavioural results are shown in Table 1.
There were two participants who scored at ceiling (100%). With the two ceiling cases removed, a repeated-measures, one-way ANOVA contrast (testing the ordered acuity trend NI > OBI > WBI) showed that visual acuity diminished to a greater extent in the WBI condition than in the OBI condition (F(2, 7) = 6.352, MSE = 0.002, p = 0.040, partial η2 = 0.476). The test of within-subjects effects was also significant (F(2, 14) = 3.904, MSE = 0.002, p = 0.045, partial η2 = 0.358, with Huynh–Feldt correction). When applied to the acuity scores of all participants, the same ANOVA contrast remained significant. A post hoc two-tailed t contrast [147] was significant for WBI vs. NI (t(9) = 2.3, p = 0.047) and was marginally significant for OBI vs. NI (t(9) = 2.2, p = 0.055), revealing that both imagery conditions diminished visual acuity compared to baseline.
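For reference, the post hoc paired two-tailed contrasts reported above can be reproduced with standard tools, as in the following toy sketch (hypothetical accuracy vectors, not the study data).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ni = np.clip(0.90 + 0.05 * rng.standard_normal(10), 0, 1)    # hypothetical proportion correct, NI
obi = np.clip(ni - 0.05 + 0.03 * rng.standard_normal(10), 0, 1)
wbi = np.clip(ni - 0.06 + 0.03 * rng.standard_normal(10), 0, 1)

print(stats.ttest_rel(wbi, ni))   # WBI vs. NI, paired two-tailed t contrast
print(stats.ttest_rel(obi, ni))   # OBI vs. NI, paired two-tailed t contrast
```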

3.2. Event-Related Potentials

3.2.1. Image Generation Phase

Generally, ERP activity concurrent with visual mental image recall during the image generation phase (i.e., before the ‘image-ready’ response mark event, −500 to 0 ms) differed from no-image external orienting attention (NI) in the out-of-body imagery (OBI) but not in the within-body imagery (WBI) condition. The top panel of Figure 2 shows the ERP t-contrasts between conditions (OBI–NI in blue-celeste; WBI–NI in yellow) for the image ready epoch (with the multiple-tests corrected threshold at p < 0.05, t(9) > ±4.78). The results show the same general frontal-to-occipital (including posterior-parietal) processing polarity shift, which does not differ between the NI and WBI waveforms (which lie at or close to zero on the x-axis). The shift in polarity shows OBI going from anterior positivity to posterior negativity.
The bottom panel of Figure 2 shows the patterns of the results more clearly in terms of waveform differences. In particular, this panel highlights the significant asymmetric and symmetric effects. Before the image ready response (approximately −500 ms to −300 ms), OBI showed significant positive deflections in the left frontal cluster (FrL) and in the mid anterior and central frontal clusters (Frz and FCz). However, OBI showed a symmetric, significant negative-going deflection across all of the posterior parietal clusters. Thus, changes in OEA concurrent with image recall in working memory reflected a difference that may be interpreted as an interference effect brought about by OBI, but not by WBI, on OEA, with the intensification of a frontal-posterior parietal-occipital processing shift, which was mostly symmetrical, except for a left frontal effect.
After the image-ready response (i.e., 0 to 500 ms), a consistent sequenced gradient of ERP activity occurred concurrently with the maintenance of the image in working memory. This activity became progressively more negative-going, with WBI yielding a significantly more negative wave deflection than OBI between 400 and 500 ms after the image-ready response in the parieto-occipital and occipital clusters (for each corresponding cluster, the top panel of Figure 2 shows the t-contrast statistics in the boxes at the side of the ERP head map). This finding may confirm the top-down regulation or allocation of attentional resources during imagery.
The bottom panel of Figure 2 shows the difference-wave effects in all of the midline electrode clusters from the center to the occipital sites (CPz, Paz, POz, Ocz) between approximately 100 ms and 400 ms, with WBI showing a significantly larger negative deflection. The same type of result occurred earlier, between 0 and 200 ms, in the right parieto-occipital and occipital clusters (POR, OcR).

3.2.2. Visuospatial Detection

Figure 3 shows a positive deflection across many midline clusters (from Frz to POz) between 350 and 450 ms, consistent with the well-characterized P300 signature. Absolute simultaneous multiple tests among OBI, WBI, and NI, using the false discovery rate (FDR) adjustment, showed a significant effect at the midline frontal, midline occipital, and right occipital clusters in relation to this P300 activity, but only for WBI during visuospatial detection and before the corresponding response. Although Figure 3 may also suggest an imagery-associated N200 increase in both conditions in the left parieto-occipital cluster (OBI: p = 0.0019; WBI: p = 0.024), this effect did not survive the FDR correction; it was nevertheless confirmed with another type of analysis described below.
The top panel of Figure 4 shows pairwise relative comparisons of effect sizes, expressed by significant p-values (<0.05), for ERP activity during the offset epoch (Vernier acuity task). The comparisons revealed that both imagery conditions were associated with relatively reduced P300 amplitude in the frontal clusters, although more so for OBI, and simultaneously exhibited a dipolar amplitude shift towards the occipital clusters, which was most pronounced for WBI. This latter amplitude increase was found in the occipital and some occipito-parietal clusters (OcL, Ocz, OcR, POR). The imagery-associated P300 decrease was greatest in the FCz cluster (NI (baseline): 3.56 μV; OBI: 1.78 μV; WBI: 0.80 μV), with extremely high effect sizes (OBI: p = 0.000745, WBI: p = 0.00000273). The amplitude decrease was less pronounced in the Frz cluster (OBI: p = 0.044, WBI: p = 0.0011). In contrast, the directionally opposite imagery-associated P300 increase was greatest in the Ocz cluster (NI: −0.37 μV, OBI: 0.79 μV, WBI: 2.1 μV), with a moderate effect size for OBI (p = 0.0257) and an extremely high effect size for WBI (p = 0.0000131). The amplitude increase was also especially pronounced in the OcR cluster (OBI: p = 0.00251, WBI: p = 0.0000360). The neighboring clusters OcL and POR both had p = 0.05 for WBI but failed to reach significance for OBI. Finally, in all contrasts, the difference between OEA and imagery was much larger for WBI than for OBI (on a sign binomial test, this outcome, i.e., 21 significant results vs. 0 null results, corresponds to a probability < 0.0001). The anticorrelated pattern relative to the P300 (frontal decrease with concurrent parietal-occipital increase) can be seen more clearly in the bottom panel of Figure 4 (see especially the white outline boxes), which shows the difference-wave analysis corresponding to the ERPs. The latter analysis also highlights that, for WBI, the reduction of the P300 occurs together with the enhancement of a late positivity (P600–700) in the left central (FcL) and midline (FCz) frontal clusters (indicated with an asterisk in the Figure). For OBI, most notably, the left temporal cluster shows the characteristic continuation of the P600 into the P8/900 imagery signature [76,77].
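The sign-binomial probability quoted above (21 significant vs. 0 null outcomes) can be checked with an exact two-sided binomial test against chance (p = 0.5); the following sketch assumes SciPy 1.7 or later, where `binomtest` is available.

```python
from scipy.stats import binomtest

res = binomtest(k=21, n=21, p=0.5, alternative="two-sided")
print(res.pvalue)   # ~9.5e-07, i.e., < 0.0001 as stated
```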
The comparison of all of the relative effect size (i.e., p-value) distances (whether significant or not), as seen in the top panel of Figure 4 and assessed through the standard normal deviate (cut-off Z > 1.96), also shows that maintaining the image during the visuospatial detection task was associated with a larger difference in the N200 and P300 in the midline and left occipital electrode clusters for NI vs. WBI compared to NI vs. OBI. In contrast, this stage of the task revealed a larger significant difference for NI vs. OBI compared to NI vs. WBI for the P50, P100, N400, and P800 in the frontal midline electrode cluster. Importantly, these results further confirm that distinct neural processing and anatomical patterns are associated with WBI and OBI, as reflected by the different patterns of interaction between OEA and the two imagery conditions.

3.3. Partial Least Squares (PLS)

3.3.1. ERPs of Task-Related Imagery-Baseline Contrasts

We followed up the analysis of the behavioural and ERP data by exploring the combined effects through PLS, first examining the differences between NI and the imagery conditions as time-locked to the task period. Figure 5 shows the task PLS results for the contrasts among the conditions placed on the ERPs for the image ready (Figure 5A, top panel) and line offset (Figure 5B, top panel) epochs. The bar graphs depict the contrast differences between the three conditions with significant expression in the dataset (cluster × time bin), determined by permutation testing in relation to each phase of the task (for the image ready epoch, p = 0.01; for the line offset epoch, p = 0.05). The heat map matrices represent clusters and time bins with stable contrasts as determined by bootstrapping. In Figure 5, the middle panel matrices show all bootstrap ratios across the scalp expressed as equivalent z-scores, and the bottom panel matrix shows significant bootstrap ratios with a threshold of ±2 (p = 0.0455, two-tailed). Positive values (yellow-red) indicate time bins and clusters showing increased ERPs in the imagery conditions and decreased ERPs in the baseline condition, whereas negative values (celeste-blue) indicate time bins and clusters showing decreased ERPs in the imagery conditions and increased ERPs in the baseline condition.
Note that the axis on the left side of the heat map shows all of the electrode clusters, from which possible symmetries and asymmetries can be observed.
The design salience shows that the significant effects of imagery on OEA (i.e., the differences NI vs. OBI and NI vs. WBI) occurred in the same direction for both imagery conditions, but the direction of association or effect among ERP activity for the recall and maintenance of an image (‘image ready’) differed markedly from that shown for Vernier acuity detection (‘offset’). The thresholded PLS matrices showed that similar (correlated) patterns of OBI and WBI effects were rather sparse (which should be interpreted as the opposite, i.e., greater variability of effects, presumably increased entropy due to individual differences), meaning that there were few selective instances of consistently significant similar differences in the same direction, which corresponds to the pattern of relationships shown by the design salience. In addition, the estimates were robust for recall and maintenance but were only marginally significant for visuospatial detection.
Even so, imagery conditions did show identical design saliences as opposed to the NI condition, indicating that similar ERP modulations were found across imagery conditions. The direction of the imagery effects varied in the two phases of the task, involving different amplitude and polarity changes with varying asymmetries. Overall, the bootstrap ratio matrix for the image ready phase showed concurrent increases in amplitude, albeit of positive polarity anteriorly as opposed to negative polarity posteriorly. Furthermore, there was an amplitude decrease in the parietal clusters during image recall and a more concurrent amplitude increase of opposite polarity in the anterior and posterior clusters during image maintenance.
Patterns of asymmetry and symmetry in the ERP activation related to the task phases can be observed from the left y-axis in the heat map indicating all of the electrode clusters. In particular, during image maintenance, the significant increases in positive polarity were selective to the mid and right frontal clusters about 50–200 ms after the image ready response. Roughly simultaneous negative polarity increases occurred selectively in the midline and right posterior parietal-occipital clusters and across all of the occipital electrodes.
On the other hand, in the visuospatial detection phase, early amplitude decreases in positive polarity at around 300–400 ms, selective to the midline and right frontal clusters, occurred simultaneously with a corresponding increase in positive amplitude in the right parietal-occipital and the midline and right occipital clusters, followed by a decrease in positive polarity in the left parietal clusters from about 600 to 800 ms. The data therefore suggest that while increased ERP activity in frontal and occipital areas was coupled during image recall, and more so during image maintenance, during visuospatial detection the changes in ERP activity involving the same areas were inversely correlated.

3.3.2. Correlation between ERPs and Vernier Acuity Performance

To further clarify the pattern of relationships between brain activity and behavioural performance on the visuospatial task, Figure 6 shows the behavioural PLS results for the correlation between the ERP values and the acuity scores for the line offset response. The bar graph depicts the contrast between acuity scores with significant expression in the dataset (cluster × time bin) as determined by permutation testing. The heat map matrices represent electrodes and time bins with stable brain–behaviour correlations as determined by bootstrapping. The middle matrix shows all of the bootstrap ratios expressed as equivalent z-scores, and the bottom matrix shows significant bootstrap ratios with a threshold of ±2. Values represent the ratio of the parameter estimate for the source divided by the bootstrap-derived standard error. Positive values indicate time bins and clusters showing increased ERPs with increased acuity scores across all three conditions, whereas negative values indicate time bins and clusters showing decreased ERPs with increased acuity scores across all three conditions.
Figure 6A shows very strong positive correlations between the ERP activity and acuity scores across all conditions (p = 0.02; NI: r = 0.72; OBI: r = 0.66; WBI: r = 0.81). Importantly, a medium effect size (Cohen’s q = 0.334) is observed between the OBI and the WBI correlations. Figure 6B displays bootstrap values for temporal, centroparietal, occipitoparietal, and occipital clusters next to their respective grand average ERPs in order to better interpret the absolute amplitude shift associated with the bootstrap values. There was a positive-going shift in occipital and parietooccipital activity from 700 to 1000 ms and a simultaneous negative-going shift in centroparietal activity from 500 to 1000 ms, indicating a dipolar amplitude shift associated with improved acuity. In other words, acuity reduction was strongly inversely correlated with increases in ERP amplitudes, which were related to positive polarity deflections in frontal and central clusters but to negative polarity deflections in parietal and occipital clusters. Finally, there was also a negative-going shift in the temporal activity between approximately 200 and 400 ms, which was not captured in the contrast between the conditions but is evidently associated with improved acuity performance. The strong correlations between the ERP activity and acuity scores suggest that individual differences may account for variation in acuity performance and, more specifically, its interaction with mental imagery.
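The Cohen's q values reported in this and the following sections compare two correlations on the Fisher r-to-z scale; the minimal sketch below reproduces the q = 0.334 reported above for the OBI vs. WBI ERP-acuity correlations.

```python
import numpy as np

def cohens_q(r1: float, r2: float) -> float:
    """Difference between two correlation coefficients on the Fisher z (arctanh) scale."""
    return abs(np.arctanh(r1) - np.arctanh(r2))

print(round(cohens_q(0.66, 0.81), 3))   # OBI vs. WBI ERP-acuity correlations -> 0.334
```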

3.4. Event-Related Band Power PLS Analysis

3.4.1. Image Generation Phase

During image recall, the design salience for the contrast differences between NI and the imagery conditions was not significant. In contrast, the Vernier acuity reduction was significantly inversely correlated with alpha and beta synchronization (p = 0.039), or, more precisely, it was directly related to their desynchronization, as reported in Figure 7. In particular, the top panel of Figure 7 shows that this association was strong in the OBI condition but reduced and weak during WBI. The difference between the OBI (r = −0.64) and the WBI (r = −0.15) correlations was significant at p = 0.025 and demonstrated a large effect size (Cohen’s q = 0.607). The corresponding thresholded bootstrapping matrices shown in the bottom panel of Figure 7 indicate that significant alpha desynchronization was nearly global. In contrast, beta desynchronization occurred in the midline and left frontal clusters, across the midline and left central clusters, and in the parietal and parietal-occipital clusters.

3.4.2. Visuospatial Detection

Figure 8A shows the ERBP contrast between conditions in the offset response epoch (p = 0.039). Relatively more ERD was found in NI and OBI, and relatively more ERS was found in WBI. The rather few similar localized synchronization patterns that occurred for the OBI and WBI conditions (as shown in the bootstrap thresholded matrix at the bottom of Figure 8A) involved localized effects for the delta band in the left frontal cluster, the theta band in the left occipital and midline cerebellar clusters, the alpha band in the midline and right parietal and the right occipital clusters, and the gamma band in the left parietal cluster. The band power shift was most significant in theta and alpha oscillations in the parietal and occipital regions (note that the largest bootstrap value was found at CBz in the theta band, BSR = −2.5). Since the cerebellum only produces high-frequency oscillations, activity picked up by CBz was presumed to have originated in V1 [148]. The cerebellar electrode was included ad hoc in this analysis (following a preliminary analysis).
Figure 8B shows the ERBP differences in the offset response epoch (p = 0.029), reflecting the finding that the acuity score reduction was inversely correlated with ERD across the entire brain in response to the Vernier offset line. There was a significant inverse correlation between the acuity scores and beta ERD in the parietal regions as well as alpha ERD in the occipital regions. That is, during imagery, greater ERD in these regions was associated with greater imagery–acuity interference. However, the inverse correlation between acuity reduction and desynchronization for alpha and beta was relatively higher for OBI (r = −0.65) than for WBI (r = −0.36). Importantly, a medium effect size of Cohen’s q = 0.4 was observed between the OBI and the WBI correlations.
As a further exploratory confirmatory analysis relating the ERP and EEG activities, we performed a multivariate PLS analysis (assuming two combined latent variables) on the ERP values for the ‘line offset’ epoch, regressing the latter on global (all electrodes) ERBPs for each imagery condition and the corresponding acuity score as the behavioural correlate. This analysis is graphically represented in Figure 9. There were several key findings that converged with the previous results. ERP amplitude increased with desynchronization across all frequency bands. The inverse relationship between ERP amplitudes and desynchronization was strong (hovering around an r of −0.80) in relation to the delta, theta, and gamma bands, without remarkable differences between the two imagery conditions. However, the pattern revealed differences in relation to the alpha and beta bands, which showed relatively smaller effects. Specifically, the inverse association between ERP amplitude and alpha desynchronization was relatively stronger in OBI than in WBI (−0.76 vs. −0.41; Cohen’s q = 0.56), whereas the opposite was true for beta (−0.61 vs. −0.71; Cohen’s q = 0.36).

3.5. Dipole Source Localization Analysis

Dipole source localization analysis (DSLA) revealed converging solutions for four generators associated with the image recall and maintenance phase (image ready epoch), which are graphically represented in Figure 10. The middle frontal gyrus in the right frontal lobe was identified as a source uniquely associated with OBI, whereas the middle temporal gyrus in the left temporal lobe was uniquely associated with WBI. The precuneus in the right parietal lobe and the cuneus in the right occipital lobe were identified as common sources for both imagery conditions. Thus, DSLA supports the notion that imagery is a higher cognitive function with strong asymmetric cortical substrates; these results also suggest that the two imagery experiences considered here can be differentiated on the basis of their distinct lateralized functional organization.

4. Discussion

The present study provided a complex set of information with numerous findings. For clarity, below we organize the discussion according to the two main themes of the paper.

4.1. Neural Differentiation between Out-Of-Body (OBI) and Within-Body (WBI) Imagery

In this study, we measured Vernier acuity (hyperacuity) performance and the corresponding EEG/ERP changes across two imagery conditions and a baseline no-imagery condition. Considered on its own, the behavioural performance data showed that both phenomenal imagery types reduced hyperacuity in comparison to the no-imagery condition, which operationally defined the baseline of orienting external attention to the stimulus spatial location. Although small (mean effect OBI = −4.87%; mean effect WBI = −5.23%), the decrement of visual hyperacuity during concurrent imagery had a rather wide range (from 2% to 17%). This indicates a consistent and important effect (again, since hyperacuity corresponds to 5 to 10 times normal acuity in ecological conditions) which, as is customary in imagery data, also included rather large individual differences.
On the other hand, numerous neurophysiological differences were found between the two phenomenal imagery types across the different stages of the experiment, indicating that phenomenal imagery type can recruit different neural resources when external attention to stimuli is concurrently deployed. This body of evidence generated by our experiment can be considered separately from the behavioural results. Table 2 presents a summary of all of the differences found in our study. All findings related to ERP patterns and the dipole source analysis showed evidence of neural processing differentiation between OBI and WBI (formally, 11 ‘successes’ out of 11 comparisons; by sign test, p = 0.001, two-tailed).
Furthermore, we employed the partial least squares approach to test the combined effects of the behavioural and neuro-electrophysiological changes. As also shown in Table 2, there was converging evidence of combined behavioural–neurophysiological differentiation patterns between OBI and WBI. Behavioural PLS results measuring the correlation between ERP values and acuity scores demonstrated a moderate effect size between the OBI and the WBI conditions (e.g., Figure 6). Importantly, the event-related band power PLS analysis measuring the correlation between alpha and beta synchronization and acuity reduction during the image generation phase demonstrated significant differences between the phenomenal imagery types (e.g., Figure 7). These differences also emerged during visuospatial detection (e.g., Figure 8B). Overall, the ERP and ERBP findings suggest that external orienting attention interacted with vivid OBI but not with WBI during image recall and, conversely, that it interacted with WBI but not with OBI during image maintenance in working memory. Thus, the additional processing demands introduced by fine visuospatial detection had distinct, non-overlapping effects on maintaining and directing attention to imagery, which were differentially associated with the within-body and out-of-body imagery conditions. Therefore, the empirical EEG/ERP differences (found separately or in combination with behaviour) between the two phenomenal imagery types lend converging evidence for the validity of the phenomenological differentiation between the two types of imagery considered here.
Although the present findings indicate that phenomenal imagery type may recruit different neural resources when externally attending to stimuli, they also illustrate that phenomenal imagery type does not influence perceptual acuity differentially. Overall, our results showed that both imagery types reduced hyperacuity. This can be explained by the fact that both types of imagery may compete for part of the neural resources for simultaneous cortical processing, which may degrade the robustness of the perceptual representation on which the visuospatial predictive decision is based. Thus, consistent with predictive coding, the Perky effect can be interpreted as a relative failure to prime perception and not as the successful protection of imagery. Indeed, as discussed later, our EEG findings offer a possible neural account in that the reduction of visual acuity seems to be driven by the attenuated top-down modulation of (increased) occipital activity.
Although small, our sample size was comparable to previous psychophysical studies on Vernier acuity reduction by concurrent visual imagery, or the ‘Perky effect’ [114,149]. However, important procedural differences from previous studies might account for the smaller percentage of Vernier acuity change we found and might explain the importance of the results. First, in keeping with our research questions, we did not control the baseline performance level a priori because we sought to probe the variability of Vernier acuity reduction as it naturally occurs in a normal population of untrained observers. Although this study warrants an obvious increase in sample size in future research, arguably, such an extension would improve reliability but not validity, which is confirmed by the converging patterns of the results (more discussion below). Therefore, regardless of its effect size, the observed reduction in Vernier acuity is important since it shows a systematic reduction in an ability that is usually exceedingly efficient in humans. Unlike the ecological viewing conditions of previous studies using tachistoscopic presentation, we displayed the offset lines on a CRT monitor with a very high photon density [150]. We previously reported that in the same photic conditions, it is difficult for participants to overcome incoming visual input and to generate a vivid mental image [116], and comparable findings have been reported with different luminance levels [151]. Other studies have also shown that the persistence of the visual short-term trace (in terms of both luminance and contrast) of a concurrent target stimulus weakens imagery [87] and the Perky effect [106]. Thus, it is possible that our presentation conditions actually attenuated the extent of imagery interference in low-level sensory representations.
Hyperacuity reduction was correlated with a dipolar P300 amplitude shift between the frontal and occipital regions. There was relatively more global ERD in the no-imagery and OBI conditions, and relatively more ERS in the WBI condition, indicating that there was relatively more task-related processing in the former two conditions compared to the latter. It is plausible that the differences in synchronization may represent a tendency for exteroceptive imagery to be more reliant on perceptually driven neural substrates compared to frontally driven neural substrates, which traditionally characterize internal mental imagery. Interestingly, the greatest ERD was found in the occipital and parietal regions, with an especially significant shift picked up in the theta band of the midline cerebellar electrode CBz (z = −2.5). Since the cerebellum only produces high-frequency oscillations, we presumed that the theta band activity picked up by CBz originated in V1 [148]. Theta oscillations are driven by hippocampal and prefrontal cortical activity, implying that there may have been back propagation from the temporal to the occipital cortices, which is important for hyperacuity resolution.
From the band power analysis results, it would seem that the top-down ‘control’ or modulatory mechanism supports the model proposed by Sauseng et al. ([73]). The changes in synchronization and desynchronization that seemed to carry more weight were related to the slowest (delta and theta) and, at the same time, the fastest oscillations (gamma), with the medium-frequency oscillations (alpha and beta) seemingly having a lesser role. Therefore, the pattern does suggest modulation, presumably through the cortical parietal components of top-down feed-backward frontal and temporal selective input to occipital activity. If, as postulated by von Stein et al. [69], gamma indicates activity that is confined within the occipital areas, the desynchronized gamma patterns we found might indicate active suppression or selection of incoming exogenous sensory processing or even the binding of sensory information. The two types of imagery also suggest that the top-down modulation mechanism may be implemented through two different pathways: a more ventral one for WBI, with the mediating participation of temporal lobe components (i.e., the middle temporal gyrus), and a more dorsal one for OBI, with the mediating participation of frontal lobe components (i.e., the middle frontal gyrus). The replication and confirmation of this possible dual mechanism might be a worthwhile objective for future research using combined fMRI-EEG measurements.

4.2. Findings of Neural Activity Hemispheric A/Symmetries and Their Functional Implications

A key aspect of this paper is that the two types of imagery did involve bilateral, midline, or distinctively asymmetric, lateralized cortical processes at various junctures. Following our operational working definition (see Section 1.2 in the Introduction), we considered as asymmetries those significant differences or effect-size differences between OBI and WBI found in a lateralized electrode cluster with no corresponding differences in the homologous contralateral electrode cluster. In contrast, symmetry was defined as significant differences or effect-size differences occurring at midline electrode clusters or bilaterally. A look at Table 2 shows that asymmetries were found for several comparisons, with or without symmetry effects. Overall, it is fair to say that most of the ERP/EEG effects we found involved midline or, most prominently, right hemisphere cortical areas, in particular the frontal and parietal-occipital areas. The important exceptions, across conditions, were the left frontal delta desynchronization related to occipital and temporal ERPs and the alpha desynchronization at the parietal and, most consistently, the occipital areas. As shown by both the ERP findings and the source localization, OBI was associated with predominant right hemispheric activity, whereas WBI involved prominent left temporal processes. Crucially, all of the estimated sources were consistent with fMRI findings in the literature (see [85]; see especially [152]).
A plausible functional implication of the present findings is that the types of imagery and the task that we studied are able to trigger (or ‘prime’) the dynamic interaction between multiple large-scale neural systems in the dorsal and ventral processing streams, which are laterally specialized and are typically recruited during visuospatial perception. Specifically, it seems possible that the neural systems that regulate the focus of attention, as well as the bias towards categorical vs. coordinate spatial representations (see [49]), might have been dynamically implicated and reciprocally interacting at one time or another. Indeed, all of our converging findings seem to point to a coherent unitary account based on an adapted version of the hemispheric division of top-down specialization.
As proposed and demonstrated in a MEG study by Falasca et al. ([153]) using a task not too dissimilar from ours, attention to a spatially localized 2D stimulus typically and predominantly recruits a broad focus that favours coordinate (precise, metric, analog) processing representations as opposed to categorical (approximate, verbal, digital) ones. The network that supports the broad/coordinate processing bias is lateralized in the right hemisphere and may typically involve a top-down flow and exchange of information from the middle frontal gyrus to the parietal (superior and inferior portions) and then occipital areas. These patterns of results are consistent with the generators identified by our source analysis for OBI and with our ERP/ERBP data showing interference of the latter type of imagery with external orienting attention during image recall, selective differential early and late ERP signatures (P50–100, N400, and P8/900), as well as both alpha and beta desynchronization seen at the right electrode clusters during image maintenance, whether concurrent or not with fine visuospatial detection.
In contrast, WBI most likely recruited a predominantly parallel left hemisphere stream of processing, which supports a narrow focus of attention and favours categorical representations. The latter network involves a flow in the opposite direction, namely, a bottom-up feed-forward exchange of information from the occipital to the parietal areas. However, and here is the ‘adaptation’ of Falasca et al.’s model, WBI involved additional top-down components originating from the left temporal lobe (i.e., the left middle temporal gyrus) to the posterior parietal lobe. The latter activation circuitry may explain why there was no ERP evidence of interference between WBI and OEA (contrary to what we found for OBI), since the left-specialized stream of processing might have run in parallel and independently of the contralateral one supporting the broad attentional focus in OEA. This setup would also explain the pattern of results found for WBI. Indeed, we found predominant activation of left occipital and parietal ERPs during the maintenance of WBI and only found attentional interference, indicated by the selective N200 and P300 effects, during image maintenance concurrent with the visuospatial detection, as well as the sparse desynchronizations seen in the left parietal and occipital electrode clusters associated with this phase of the task.
The present findings are equally if not more important in relation to the possible functional implications of the symmetry during imagery, that is, the many findings of activation corresponding to bilateral or midline homologous cortical areas. Our findings, we argue, support the pioneering work by Kosslyn ([42]) hypothesizing the existence of adaptive, dynamic, and task-dependent categorical–coordinate conversion processes, which would enable the conversion of categorical representations to a range of coordinates. Kosslyn speculated that such a conversion could involve two types of neural connections: a cortico–cortical connection and a relatively longer range but more efficient fascicular connection (see Kosslyn [42], p. 233). Consequently, he speculated that cortical hubs, either in the frontal or parietal cortex, could implement the conversion between the two representation systems by supporting the cross talk between several scattered specialized sub-centres of the two hemispheres. Our data agree with the suggestion by Falasca et al. ([153]) that a plausible cortical site for this process might be the middle frontal gyrus. The present findings, however, also suggest another plausible cortical hub for the conversion, namely, the precuneus; the latter possibility actually fits with Kosslyn’s own best guess of a posterior parietal site (see Table 11.1 in Kosslyn [42], p. 382).
A further important functional implication of the present findings is that different types of voluntary imagery prime, in a top-down fashion, attentional processes which, in turn, trigger biases inherent to the dorsal (transient/magnocellular) vs. ventral (parvocellular) streams in the bottom-up processing of visuospatial perception. Broad attentional focus is predominantly a specialized function of the right hemisphere, which, as a byproduct, should also induce a tendency to base the Vernier acuity detection judgment on the use of the coordinate representation system, that is, the precise estimation of the gap between the offset lines. Conversely, narrow attentional focus is a function specialized in the left hemisphere and will as such induce a tendency to base the Vernier acuity detection judgment on the use of the categorical representation system, that is, a coarser estimation of the side of the offset line, left vs. right. Yet, at some level, the two different codes are compared. Congruently, different degrees of conversion between the two representation systems might exist, presumably not just from categorical to coordinate but also in the opposite direction. Consequently, we suggest that the top-down, attention-based neural processing that defines ‘voluntary’ in this context depends on the asymmetry intrinsic to the functional hemispheric organization of the receptive fields used (i.e., proportionally smaller in the left and proportionally larger in the right hemisphere) and on the dynamic exchanges and integration of information between the two hemispheres.

5. Conclusions

This study shows that two phenomenologically distinct types of imagery, which we have dubbed out-of-body imagery (OBI) and within-body imagery (WBI), can be distinguished and are associated with different (partially non-overlapping) neural patterns. These patterns seem to instantiate the top-down modulation of visual processing either via the fronto-dorsal (OBI) or the temporo-ventral (WBI) pathway. Both are associated with a reduction in visuospatial performance, as they interfere, albeit differently, with the concurrent deployment of orienting external attention to the stimulus location. Methodologically, we showed that measurements of that interference can be leveraged to distinguish the underlying neural processes corresponding to the distinct subjective perspectives involved in the voluntary generation of OBI and WBI. The imagery experiences also implicated partially distinct neurofunctional symmetries and asymmetries in stimulus-locked ERP and EEG activity, implying a key role of the width of attentional focus in the interaction with voluntarily recalling and maintaining images in visual working memory.

Author Contributions

Conceptualization, A.D. and D.M.B.; methodology, A.D. and D.M.B.; software, D.K.; validation, E.L., D.A.T.P., J.B., and D.K.; formal analysis, D.K., A.D., D.A.T.P., and J.B.; investigation, D.M.B.; resources, A.D.; data curation, D.K., D.A.T.P., D.M.B., and J.B.; writing—original draft preparation, A.D., D.K., and D.M.B.; writing—review and editing, A.D. and E.L.; visualization, D.K., D.A.T.P., J.B., and A.D.; supervision, A.D.; project administration, A.D.; funding acquisition, A.D. All authors have read and agreed to the published version of the manuscript.

Funding

Partial funding was provided by a SSHRC standard research grant and a seed grant from the Carleton University Faculty of Science to A.D.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Review Board of Carleton University, protocol code 107499, approved on 12 September 2009.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

All data can be obtained from A.D. upon request for purposes relevant to review.

Acknowledgments

This paper was based on D.K.’s and D.M.B.’s honours theses. We thank Randy McIntosh for his help and comments on the PLS analyses. We thank Patricia Van Roon for her assistance in the experimental design and the preliminary data analysis and Korey MacDougal for his help in writing the task program that was used in the initial experiment.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hebb, D.O. Concerning imagery. Psychol. Rev. 1968, 75, 466–477. [Google Scholar] [CrossRef] [PubMed]
  2. Edelman, G.M. Bright Air, Brilliant Fire: On the Matter of the Mind; Basic Books: New York, NY, USA, 1992. [Google Scholar]
  3. Kosslyn, S.M.; Behrmann, M.; Jeannerod, M. The cognitive neuroscience of mental imagery. Neuropsychologia 1995, 33, 1335–1344. [Google Scholar] [CrossRef]
  4. Baddeley, A.D.; Andrade, J. Working memory and the vividness of imagery. J. Exp. Psychol. Gen. 2000, 129, 126–145. [Google Scholar] [CrossRef] [PubMed]
  5. Reeder, R.R. Individual differences shape the content of visual representations. Vis. Res. 2017, 141, 266–281. [Google Scholar] [CrossRef] [PubMed]
  6. Runge, M.; Cheung, M.W.; D’Angiulli, A. Meta-analytic comparison of trial-versus questionnaire-based vividness reportability across behavioral, cognitive and neural measurements of imagery. Neurosci. Conscious. 2017, 2017, nix006. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. Haustein, S.; Vellino, A.; D’Angiulli, A. Insights from a Bibliometric Analysis of Vividness and Its Links with Consciousness and Mental Imagery. Brain Sci. 2020, 10, 41. [Google Scholar] [CrossRef] [Green Version]
  8. Sherrington, C. The Integrative Action of the Nervous System; Yale University Press: New Haven, CT, USA, 1906. [Google Scholar]
  9. Velmans, M. Understanding Consciousness; Routledge/Taylor & Francis: London, UK, 2009. [Google Scholar]
  10. Feinberg, T.E. Neuroontology, neurobiological naturalism, and consciousness: A challenge to scientific reduction and a solution. Phys. Life Rev. 2012, 9, 13–34. [Google Scholar] [CrossRef] [PubMed]
  11. Thompson, E. Representationalism and the phenomenology of mental imagery. Synthese 2008, 160, 397–415. [Google Scholar] [CrossRef]
  12. Thomas, N.J.T. Mental Imagery. In The Stanford Encyclopedia of Philosophy (Spring 2021 Edition); Edward, N.Z., Ed.; Metaphysics Research Lab, Stanford University: Stanford, CA, USA, 2021; Available online: https://plato.stanford.edu/archives/spr2021/entries/mental-imagery (accessed on 20 August 2021).
  13. Damasio, A. Self Comes to Mind: Constructing the Conscious Brain; Pantheon/Random House: New York, NY, USA, 2010. [Google Scholar]
  14. Damasio, A.R. The Strange Order of Things: Life, Feeling, and the Making of Cultures; Vintage Books: New York, NY, USA, 2019. [Google Scholar]
  15. Ceunen, E.; Vlaeyen, J.W.; Van Diest, I. On the origin of interoception. Front. Psychol. 2016, 7, 743. [Google Scholar] [CrossRef] [Green Version]
  16. Craig, A.D. How do you feel? Interoception: The sense of the physiological condition of the body. Nat. Rev. Neurosci. 2002, 3, 655–666. [Google Scholar] [CrossRef]
  17. Barrett, L.F.; Simmons, W.K. Interoceptive predictions in the brain. Nat. Rev. Neurosci. 2015, 16, 419–429. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Critchley, H.D.; Wiens, S.; Rotshtein, P.; Öhman, A.; Dolan, R.J. Neural systems supporting interoceptive awareness. Nat. Neurosci. 2004, 7, 189–195. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  19. Seth, A.K. Interoceptive inference, emotion, and the embodied self. Trends Cogn. Sci. 2013, 17, 565–573. [Google Scholar] [CrossRef]
  20. Barrett, L.F.; Bar, M. See it with feeling: Affective predictions during object perception. Philos. Trans. R. Soc. Lond. B Biol. Sci. 2009, 364, 1325–1334. [Google Scholar] [CrossRef]
  21. Bastos, A.M.; Usrey, W.M.; Adams, R.A.; Mangun, G.R.; Fries, P.; Friston, K.J. Canonical Microcircuits for Predictive Coding. Neuron 2012, 76, 695–711. [Google Scholar] [CrossRef] [Green Version]
  22. Mechelli, A.; Price, C.J.; Friston, K.J.; Ishai, A. Where bottom-up meets top-down: Neuronal interactions during perception and imagery. Cereb. Cortex 2004, 14, 1256–1265. [Google Scholar] [CrossRef] [Green Version]
  23. Dijkstra, N.; Bosch, S.E.; Gerven, M.A.V. Vividness of Visual Imagery Depends on the Neural Overlap with Perception in Visual Areas. J. Neurosci. 2017, 37, 1367–1373. [Google Scholar] [CrossRef] [Green Version]
  24. Bone, M.B.; St-Laurent, M.; Dang, C.; McQuiggan, D.A.; Ryan, J.D.; Buchsbaum, B.R. Eye Movement Reinstatement and Neural Reactivation during Mental Imagery. Cereb. Cortex 2019, 29, 1075–1089. [Google Scholar] [CrossRef]
  25. Azzalini, D.; Rebollo, I.; Tallon-Baudry, C. Visceral signals shape brain dynamics and cognition. Trends Cogn. Sci. 2019, 23, 488–509. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Dadds, M.R.; Bovbjerg, D.H.; Redd, W.H.; Cutmore, T.R. Imagery in human classical conditioning. Psychol. Bull. 1997, 122, 89. [Google Scholar] [CrossRef] [PubMed]
  27. Kveraga, K.; Boshyan, J.; Bar, M. Magnocellular projections as the trigger of top down facilitation in recognition. J. Neurosci. 2007, 27, 13232–13240. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. Bäckryd, E. Pain as the perception of someone: An analysis of the interface between pain medicine and philosophy. Health Care Anal. 2019, 27, 13–25. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  29. Ocklenburg, S.; Güntürkün, O. Hemispheric asymmetries: The comparative view. Front. Psychol. 2012, 3, 1–9. [Google Scholar] [CrossRef] [Green Version]
  30. Marshall, J.C.; Fink, G.R. Spatial cognition: Where we were and where we are. Neuroimage 2001, 14, 2–7. [Google Scholar] [CrossRef]
  31. Olulade, O.A.; Seydell-Greenwald, A.; Chambers, C.E.; Turkeltaub, P.E.; Dromerick, A.W.; Berl, M.M.; Gaillard, W.D.; Newport, E.L. The neural basis of language development: Changes in lateralization over age. Proc. Natl. Acad. Sci. USA 2020, 117, 23477–23483. [Google Scholar] [CrossRef]
  32. O’Regan, L.; Serrien, D.J. Individual Differences and Hemispheric Asymmetries for Language and Spatial Attention. Front. Hum. Neurosci. 2018, 12, 380. [Google Scholar] [CrossRef] [Green Version]
  33. Demareva, V.; Mukhina, E.; Bobro, T.; Abitov, I. Does Double Biofeedback Affect Functional Hemispheric Asymmetry and Activity? A Pilot Study. Symmetry 2021, 13, 937. [Google Scholar] [CrossRef]
  34. Liu, J. Hemispheric asymmetries in visual mental imagery. Brain Struct Funct. 2021. [Google Scholar] [CrossRef]
  35. Corballis, P.M.; Funnell, M.G.; Gazzaniga, M.S. Hemispheric asymmetries for simple visual judgments in the split brain. Neuropsychologia 2002, 40, 401. [Google Scholar] [CrossRef]
  36. Westerhausen, R. A primer on dichotic listening as a paradigm for the assessment of hemispheric asymmetry. Laterality 2019, 24, 740–771. [Google Scholar] [CrossRef] [PubMed]
  37. Vasilkov, V.A.; Ishchenko, I.A.; Tikidji-Hamburyan, R.A. Modeling of localization phenomena of the auditory image caused by brain regions dysfunctions. Biophysics 2013, 58, 428–433. [Google Scholar] [CrossRef]
  38. Sun, Y.; Li, J.; Suckling, J.; Feng, L. Asymmetry of Hemispheric Network Topology Reveals Dissociable Processes between Functional and Struc-tural Brain Connectome in Community-Living Elders. Front. Aging Neurosci. 2017, 9, 361. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  39. Burdakov, D.S. Self-regulation of individuals with different types of functional brain asymmetry and mental strain. Exp. Psychol. 2010, 3, 123–134. [Google Scholar]
  40. Cao, R.; Shi, H.; Wang, X.; Huo, S.; Hao, Y.; Wang, B.; Guo, H.; Xiang, J. Hemispheric Asymmetry of Functional Brain Networks under Different Emotions Using EEG Data. Entropy 2020, 22, 939. [Google Scholar] [CrossRef] [PubMed]
  41. Kosslyn, S.M.; Thompson, W.L.; Ganis, G. The Case for Mental Imagery; Oxford University Press: Oxford, UK, 2006. [Google Scholar]
  42. Kosslyn, S.M. Image and Brain; MIT Press: Cambridge, MA, USA, 1994. [Google Scholar]
  43. Pearson, J. The human imagination: The cognitive neuroscience of visual mental imagery. Nat. Rev. Neurosci. 2019, 20, 624–634. [Google Scholar] [CrossRef]
  44. Van der Ham, I.J.M.; Ruotolo, F. On Inter and Intra Hemispheric Differences in Visuospatial Perception. In The Neuropsychology of Space; Postma, A., van der Ham, I.J.M., Eds.; Elsevier Academic Press: Cambridge, MA, USA, 2016. [Google Scholar]
  45. Kosslyn, S.M. Seeing and imagining in the cerebral hemispheres: A computational approach. Psychol. Rev. 1987, 94, 148. [Google Scholar] [CrossRef]
  46. Hellige, J.B. Hemispheric asymmetry for visual information processing. Acta Neurobiol. Exp. 1996, 56, 485–497. [Google Scholar]
  47. Hellige, J.B.; Cumberland, N. Categorical and coordinate spatial processing: More on contributions of the transient/magnocellular visual system. Brain Cogn. 2001, 45, 155–163. [Google Scholar] [CrossRef]
  48. Howard, M.F.; Reggia, J.A. A theory of the visual system biology underlying development of spatial frequency lateralization. Brain Cogn. 2007, 64, 111–123. [Google Scholar] [CrossRef] [Green Version]
  49. Van der Ham, I.J.; Postma, A.; Laeng, B. Lateralized perception: The role of attention in spatial relation processing. Neurosci. Biobehav. Rev. 2014, 45, 142–148. [Google Scholar] [CrossRef]
Figure 1. (A) Vernier acuity task displayed on a CRT monitor (display settings described in Section 2.2). The left panel displays the left micro-foveal offset and the right panel displays the right micro-foveal offset (both 0.157° from the center). (B) Timeline of the acuity task, indicating the windows from which the 'image ready' and 'offset response' epochs were taken. (C) Scalp map of the 68 electrodes clustered into 21 regions of interest (ROIs) for ERP processing.
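For readers reproducing the display geometry, the sketch below shows one way to convert the 0.157° micro-foveal offset of panel (A) into an on-screen pixel offset. The viewing distance and pixel density used here are illustrative assumptions only; the study's actual display settings are those described in Section 2.2 of the paper.

```python
import math

def degrees_to_pixels(offset_deg: float, viewing_distance_cm: float, pixels_per_cm: float) -> float:
    """Convert a visual angle (degrees) to an on-screen offset in pixels.

    Uses standard viewing geometry: size = 2 * d * tan(theta / 2).
    """
    size_cm = 2.0 * viewing_distance_cm * math.tan(math.radians(offset_deg) / 2.0)
    return size_cm * pixels_per_cm

# Hypothetical display parameters (for illustration only; not the study's CRT settings).
viewing_distance_cm = 57.0   # assumed viewing distance
pixels_per_cm = 28.0         # assumed pixel density (~72 dpi)

offset_px = degrees_to_pixels(0.157, viewing_distance_cm, pixels_per_cm)
print(f"0.157 deg is about {offset_px:.2f} px at {viewing_distance_cm} cm")
```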
Figure 2. (Top Panel) Grand average event-related potential (ERP) scalp map during the 'Image Ready' epoch, corresponding to the image generation phase of the experiment, in three conditions: no imagery/orienting external attention (NI), within-body imagery (WBI), and out-of-body imagery (OBI). Color coding is shown in the right–left box overlaid at the top of the head map, the amplitude scale is shown at the top within the head map, and significant electrode clusters of interest are indicated with white boxes. Statistical differences between the no-imagery and imagery conditions are displayed on the right-hand side using a separate color code. (Bottom Panel) Difference waves corresponding to the ERPs shown in the top panel. White outline boxes mark significant corrected asymmetry and symmetry differences.
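For readers who wish to reproduce scalp maps of this kind, the sketch below illustrates how grand-average ERPs and imagery-minus-baseline difference waves (as in the bottom panel) can be computed from per-condition epoch arrays. The array names, shapes, and random data are placeholders, not the authors' pipeline.

```python
import numpy as np

# Illustrative shapes: (n_subjects, n_trials, n_channels, n_timepoints).
# Random arrays stand in for the per-condition epochs; they are not the study's data.
rng = np.random.default_rng(0)
nsub, ntrl, nch, nt = 10, 50, 68, 500
epochs = {cond: rng.normal(size=(nsub, ntrl, nch, nt))
          for cond in ("NI", "WBI", "OBI")}

# Subject-level ERPs: average over trials; grand averages: average over subjects.
subject_erp = {c: e.mean(axis=1) for c, e in epochs.items()}          # (nsub, nch, nt)
grand_avg = {c: erp.mean(axis=0) for c, erp in subject_erp.items()}   # (nch, nt)

# Difference waves relative to the no-imagery (orienting external attention) baseline.
diff_wbi = grand_avg["WBI"] - grand_avg["NI"]
diff_obi = grand_avg["OBI"] - grand_avg["NI"]
print(diff_wbi.shape, diff_obi.shape)  # (68, 500) each
```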
Figure 3. Grand average ERP scalp map for the linear offset response corresponding to the Vernier acuity visuospatial detection task. Electrode clusters of interest are indicated with black boxes; ERPs are shown on the scalp, and relevant OBI and WBI effect sizes are shown connected to the ERPs on the periphery.
Figure 4. (Top Panel) Analysis of effect-size differences between the OBI and OEA conditions and between the WBI and OEA conditions for the grand average ERPs of the Vernier line offset epoch. (Bottom Panel) Difference-wave analysis corresponding to the ERPs shown in the top panel. The white outlined boxes indicate effects related to the P300, single white asterisks indicate effects related to the P600, and double white asterisks indicate effects related to the P800/900 signatures.
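The effect-size analysis summarized here contrasts ERP amplitudes between imagery and baseline conditions. As a rough illustration only, and not necessarily the exact contrast-based effect-size measure used in the paper, the sketch below computes a paired (repeated-measures) Cohen's d per channel and time point from subject-level ERP amplitudes.

```python
import numpy as np

def paired_cohens_d(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Paired Cohen's d along the subject axis (axis 0): mean difference / SD of differences."""
    diff = x - y
    return diff.mean(axis=0) / diff.std(axis=0, ddof=1)

# Illustrative subject-level ERP amplitudes: (n_subjects, n_channels, n_timepoints).
rng = np.random.default_rng(1)
erp_obi = rng.normal(size=(10, 68, 500))
erp_oea = rng.normal(size=(10, 68, 500))

d_obi_vs_oea = paired_cohens_d(erp_obi, erp_oea)   # effect-size map, shape (68, 500)

# Mean effect size in an illustrative P300-like window (350-450 ms at an assumed 500 Hz
# sampling rate with the epoch starting at 0 ms, i.e., samples 175-225) for a
# hypothetical occipital channel index.
print(d_obi_vs_oea[60, 175:225].mean())
```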
Figure 5. (A) Task PLS results for the comparison of conditions on ERPs for the image ready epoch (i.e., the image recall and maintenance phase). (B) Task PLS results for the comparison of conditions on ERPs for the linear offset epoch (i.e., the visuospatial detection phase). In both top panels (bar graphs), error bars represent ±1 standard error. The bottom heat maps present z-scores thresholded for multiple testing.
Figure 6. (A) Behavioural PLS results for the correlation between ERP values and acuity scores for the linear offset response. Error bars represent ±1 standard error. (B) ERPs, and the respective bootstrap values over time, associated with higher Vernier acuity scores. The bottom heat map presents z-scores thresholded for multiple testing.
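Behavioural PLS of the kind reported here decomposes the cross-correlation between a brain-data matrix (subjects × ERP features) and a behavioural measure (acuity) with a singular value decomposition and assesses the stability of the resulting saliences by bootstrap. The sketch below is a minimal illustration of that logic with synthetic data; it is not the PLS software used by the authors, and a full analysis would also include permutation testing and sign alignment of bootstrap samples.

```python
import numpy as np

def behavioural_pls(brain: np.ndarray, behav: np.ndarray):
    """SVD of the behaviour-brain cross-correlation matrix (single behavioural measure)."""
    bz = (brain - brain.mean(0)) / brain.std(0, ddof=1)
    yz = (behav - behav.mean()) / behav.std(ddof=1)
    r = yz @ bz / (len(yz) - 1)                      # 1 x n_features correlations
    _, s, vt = np.linalg.svd(r[None, :], full_matrices=False)
    return s[0], vt[0]                               # singular value, brain saliences

rng = np.random.default_rng(2)
n_sub, n_feat = 10, 68 * 20                          # e.g., 68 ROIs x 20 time bins (illustrative)
brain = rng.normal(size=(n_sub, n_feat))
acuity = rng.uniform(0.7, 1.0, size=n_sub)           # placeholder acuity proportions

sv, saliences = behavioural_pls(brain, acuity)

# Bootstrap ratios: salience / bootstrap SE (sketch only; real PLS also aligns signs
# across bootstrap samples and tests latent variables by permutation).
boot = np.array([behavioural_pls(brain[idx], acuity[idx])[1]
                 for idx in rng.integers(0, n_sub, size=(500, n_sub))])
bootstrap_ratio = saliences / boot.std(axis=0, ddof=1)
print(sv, bootstrap_ratio[:5])
```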
Figure 7. (Top) PLS correlations for ERBPs in the image ready epoch. (Bottom) Thresholded significant bootstrapping values for ERBPs across electrode clusters. Error bars represent ±1 standard error.
Figure 8. ((A) Top) Behavioural PLS on acuity reduction and the difference in FFT frequency band power (imagery minus non-imagery conditions) for the 'line offset' epoch. ((A) Bottom) Thresholded bootstrap matrices for FFT frequency band power. ((B) Top) PLS correlations between FFT frequency band power and Vernier acuity reduction for the line offset epoch. ((B) Bottom) Thresholded bootstrap matrices for the PLS correlations. Error bars represent ±1 standard error.
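The event-related band power (ERBP) measures shown in this figure are FFT-based band power values expressed as synchronization or desynchronization relative to a baseline. The sketch below illustrates that computation for a single channel, following the usual ERD/ERS convention (negative percent change = desynchronization, positive = synchronization); the epochs, sampling rate, and band limits are illustrative assumptions, not the study's processing settings.

```python
import numpy as np

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_power(signal: np.ndarray, fs: float) -> dict:
    """Mean FFT power per frequency band for a 1-D epoch (illustrative, no tapering)."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return {name: power[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

def erd_ers(task_epoch: np.ndarray, baseline_epoch: np.ndarray, fs: float) -> dict:
    """Percent change in band power relative to baseline: negative = ERD, positive = ERS."""
    task, base = band_power(task_epoch, fs), band_power(baseline_epoch, fs)
    return {band: 100.0 * (task[band] - base[band]) / base[band] for band in BANDS}

# Illustrative single-channel epochs (1 s at an assumed 500 Hz); not the study's data.
rng = np.random.default_rng(3)
fs = 500.0
baseline = rng.normal(size=int(fs))
task = rng.normal(size=int(fs))
print(erd_ers(task, baseline, fs))
```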
Figure 9. Behavioural PLS on ERP values for the 'line offset' epoch using the global frequency band power for each condition as the behavioural correlate. PLS correlations are broken down by FFT frequency band. Error bars represent ±1 standard error.
Figure 10. Results of the dipole source localization analysis during the image recall phase (first 500 ms).
Table 1. Participant Vernier acuity scores (decrease from NI) under three conditions: NI, OBI, and WBI.
| Participant | NI Score | NI Acuity | OBI Score | OBI Acuity | OBI Decrease a | WBI Score | WBI Acuity | WBI Decrease a |
|---|---|---|---|---|---|---|---|---|
| P1 | 50 | 1 | 50 | 1 | 0.00% | 50 | 1 | 0.00% |
| P2 | 50 | 1 | 50 | 1 | 0.00% | 50 | 1 | 0.00% |
| P3 | 50 | 1 | 50 | 1 | 0.00% | 49 | 0.98 | 2.00% |
| P4 | 42 | 0.84 | 36 | 0.72 | 14.29% | 34 | 0.68 | 19.05% |
| P5 | 46 | 0.92 | 41 | 0.82 | 10.87% | 44 | 0.88 | 4.35% |
| P6 | 45 | 0.9 | 45 | 0.9 | 0.00% | 47 | 0.94 | −4.44% |
| P7 | 44 | 0.88 | 41 | 0.82 | 6.82% | 42 | 0.84 | 4.55% |
| P8 | 44 | 0.88 | 40 | 0.8 | 9.09% | 39 | 0.78 | 11.36% |
| P9 | 41 | 0.82 | 45 | 0.9 | −9.76% | 40 | 0.8 | 2.44% |
| P10 | 46 | 0.92 | 38 | 0.76 | 17.39% | 40 | 0.8 | 13.04% |
| Mean | 45.8 | 0.916 | 43.6 | 0.872 | 4.87% | 43.5 | 0.87 | 5.23% |
| SD | 3.29 | 0.0659 | 5.19 | 0.104 | – | 5.42 | 0.108 | – |

Note. In the body of the Table, 'Score' indicates the raw number of correct responses out of 50 trials, whereas 'Acuity' is the proportion of correct responses. 'Decrease' is the percent reduction in acuity relative to the NI condition; a positive percentage indicates interference, whereas a negative percentage indicates acuity facilitation. The percentage acuity decrease was calculated as $\%\,\text{Acuity Decrease}_{\mathrm{OBI/WBI}} = \frac{\text{Acuity}_{\mathrm{NI}} - \text{Acuity}_{\mathrm{OBI/WBI}}}{\text{Acuity}_{\mathrm{NI}}} \times 100$. a The combined decrease (repeated-measures) rate of change from baseline (13/15) corresponds to binomial p = 0.007, two-tailed.
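The two derived quantities in the table note can be checked with a few lines of arithmetic. The sketch below applies the acuity-decrease formula to participant P4's scores from the table and reproduces the two-tailed binomial (sign-test) probability for the 13/15 decrease rate quoted in footnote a; the helper functions are illustrative, not the authors' analysis code.

```python
from math import comb

def acuity_decrease(ni_correct: int, imagery_correct: int, n_trials: int = 50) -> float:
    """Percent acuity decrease relative to the no-imagery (NI) baseline."""
    ni, im = ni_correct / n_trials, imagery_correct / n_trials
    return 100.0 * (ni - im) / ni

# Participant P4 from Table 1: NI = 42/50, OBI = 36/50, WBI = 34/50.
print(round(acuity_decrease(42, 36), 2))   # 14.29 (OBI)
print(round(acuity_decrease(42, 34), 2))   # 19.05 (WBI)

def binomial_two_tailed(k: int, n: int) -> float:
    """Exact two-tailed binomial probability for k successes in n trials with p = 0.5
    (the two tails are symmetric only because p = 0.5)."""
    tail = sum(comb(n, i) for i in range(k, n + 1)) * 0.5 ** n
    return 2 * tail

print(round(binomial_two_tailed(13, 15), 3))   # 0.007, as reported in footnote a
```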
Table 2. Summary of comparisons between OBI and WBI and of asymmetry and symmetry findings, with cross-references to the figures reporting the results (Figure column).
| Measure (Condition) | Approx. Time Range (ms) | Comparison | A/Symmetry | Lobe | Processing Characteristics | Figure |
|---|---|---|---|---|---|---|
| ERPs (Image recall) | −500 to −300 | OBI vs. NI sig.; WBI vs. NI ns. | Left and Midline | Frontal | Positivity | Figure 2 |
| ERPs (Image recall) | −500 to −300 | OBI vs. NI sig.; WBI vs. NI ns. | Midline | Parietal-Occipital | Negativity | Figure 2 |
| ERPs (Image hold) | 0 to 200 | WBI vs. NI sig.; OBI vs. NI ns. | Right | Parietal-Occipital and Occipital | Negativity | Figure 2 |
| ERPs (Image hold) | 100 to 400 | WBI vs. NI sig.; OBI vs. NI ns. | Midline | Centro-Parietal to Occipital | Negativity | Figure 2 |
| ERPs (Visuospatial detection) | 350 to 450 | WBI vs. OBI vs. NI | Right and Midline | Occipital, Parietal-Occipital | Increased P300 | Figure 3 and Figure 4 |
| ERPs (Visuospatial detection) | 350 to 450 | WBI vs. OBI vs. NI | Left and Midline | Frontal | Decreased P300 | Figure 3 and Figure 4 |
| ERPs (Visuospatial detection) | 450 to 600 | (WBI vs. NI) vs. (OBI vs. NI) | Left and Midline | Frontal | Increased P600 (WBI) | Figure 4 |
| ERPs (Visuospatial detection) | 300 to 1000 | (WBI vs. NI) vs. (OBI vs. NI) | Left | Temporal | Increased late positivity (OBI) | Figure 4 |
| ERPs (Visuospatial detection) | | Larger effect sizes for WBI amplitudes | Right and Midline | Occipital | N200, P300 | Figure 4 |
| ERPs (Visuospatial detection) | | Larger effect sizes for OBI amplitudes | Midline | Frontal | P50, P100, N400, P800 | Figure 4 |
| PLS ERP-Task contrast (Image hold) | 50 to 200 | Same pattern for OBI and WBI | Right and Midline | Frontal | Increased positivity | Figure 5A |
| | | | Right and Midline | Parietal-Occipital | Increased negativity | |
| | | | Bilateral and Midline | Occipital | Increased negativity | |
| PLS ERP-Task phase (Visuospatial detection) | 300 to 400 | Same pattern for OBI and WBI | Right and Midline | Frontal | Decreased positivity | Figure 5B |
| | | | Right and Midline | Parietal-Occipital | Increased positivity | |
| | 600 to 800 | | Left | Occipital, Parietal | Decreased positivity | |
| PLS ERP-Acuity Correlation | 500 to 1000 | WBI shows larger correlation than OBI | Bilateral and Midline | Frontal | Negativity | Figure 6 |
| | 700 to 1000 | | Bilateral and Midline | Parietal and Occipital | Positivity | |
| | 500 to 1000 | | Bilateral and Midline | Temporal | Negativity | |
| PLS EEG-Acuity Correlation (Image recall) | | OBI associated with more desynch. than WBI | Left and Midline | Frontal, Parietal, Parietal-Occipital | Beta desynch. | Figure 7 |
| | | | Bilateral and Midline | Global | Alpha desynch. | |
| PLS EEG Task contrast (Visuospatial detection) | | Anticorrelation between WBI (synchronization) and OBI (desynchronization) | Left | Frontal | Delta | Figure 8A |
| | | | Left | Occipital | Theta | |
| | | | Right and Midline | Parietal | Alpha | |
| | | | Right | Occipital | Alpha | |
| | | | Left | Parietal | Gamma | |
| PLS EEG-Acuity Correlation (Visuospatial detection) | | Inverse correlation between ERD and acuity, larger for OBI than WBI | Left | Occipital | Alpha | Figure 8B |
| | | | Bilateral and Midline | Parietal | Beta | |
| PLS ERP-EEG Correlation | | Inverse correlation between ERP and EEG desynch.: OBI > WBI | | | Alpha | Figure 9 |
| | | Inverse correlation between ERP and EEG desynch.: WBI > OBI | | | Beta | |
| Dipole Source Analysis (Image recall) | | WBI and OBI | Right | Parietal, Occipital | | Figure 10 |
| | | WBI | Left | Temporal | | |
| | | OBI | Right | Frontal | | |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
