Article

Human Symmetry Uncertainty Detected by a Self-Organizing Neural Network Map

by Birgitta Dresp-Langley 1,* and John M. Wandeto 2

1 ICube Lab UMR 7357 CNRS, Strasbourg University, 67085 Strasbourg, France
2 Department of Information Technology, Dedan Kimathi University of Technology, Box 12405-10109 Nyeri, Kenya
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(2), 299; https://doi.org/10.3390/sym13020299
Submission received: 17 January 2021 / Revised: 5 February 2021 / Accepted: 8 February 2021 / Published: 10 February 2021
(This article belongs to the Special Issue Symmetry in Artificial Visual Perception and Its Application)

Abstract: Symmetry in biological and physical systems is a product of self-organization driven by evolutionary processes, or of mechanical systems under constraints. Symmetry-based feature extraction or representation by neural networks may unravel the most informative contents in large image databases. Despite significant achievements of artificial intelligence in the recognition and classification of regular patterns, the problem of uncertainty remains a major challenge in ambiguous data. In this study, we present an artificial neural network that detects symmetry uncertainty states in human observers. To this end, we exploit the quantization error in the output of a biologically inspired Self-Organizing Map (SOM-QE). As shown here, shape pairs with perfect geometric mirror symmetry but a non-homogeneous appearance, caused by local variations in hue, saturation, or lightness within and/or across the shapes in a given pair, produce longer choice response times (RT) for “yes” responses relative to symmetry. These data are consistently mirrored by the variations in the SOM-QE from unsupervised neural network analysis of the same stimulus images. The neural network metric is thus capable of detecting and scaling human symmetry uncertainty in response to patterns. This capacity is tightly linked to the metric’s proven selectivity to local contrast and color variations in large and highly complex image data.

1. Introduction

Symmetry in biological and physical systems is a product of self-organization [1] driven by evolutionary processes and/or mechanical systems under constraints. It is a basic feature of living objects, from molecules to animal bodies, and of physical forces acting in synergy to create symmetrical structures [1,2,3,4,5,6]. In pattern formation, perfect symmetry is a regularity within a pattern, the two halves of which are mirror images of each other. In information theory, and in particular in human information processing [7,8,9,10,11], symmetry is considered an important carrier of information, detected universally by humans from an early age on [12,13]. Human symmetry detection [14,15] in patterns or shapes involves visual and cognitive processes from lower to higher levels of functional organization [16,17,18,19,20,21,22,23,24]. Vertical mirror symmetry is a particularly salient form of visual symmetry [23,24,25], processed at early stages in human vision, with greater or lesser detection reliability [23] depending on local features of the stimulus display and the resulting stimulus certainty. Shape symmetry is a visual property that attracts attention [18] and determines the perceived volume [19,20,21,22] and perceptual salience [26] of objects represented in the two-dimensional image plane. Aesthetic judgment and choice preference [27,28] are influenced by symmetry, justifying biologically inspired models of symmetry perception in humans [29], in light of the fact that symmetry is detected not only by primates but also by other species, such as insects [30].
Symmetry may be exploited in pattern detection and classification by neural networks, which may otherwise have to learn multiple copies of the same object representation displayed in different orientations. Encoding symmetry as a shape prior in the network can, therefore, help avoid redundant computation where the network has to learn to detect the same pattern or shape in multiple orientations [31]. Symmetry-based feature extraction and/or representation [32] by neural networks using deep learning can, for example, help discover the most informative representations in large image databases with only a minor amount of preprocessing [33]. However, despite significant achievements of artificial intelligence in the recognition and classification of well-reproducible patterns, the problem of uncertainty still requires additional attention, especially in ambiguous data. An artificial neural network that detects uncertainty states, in which a human observer doubts an image interpretation, has been described previously for the case of MagnetoEncephaloGraphic (MEG) image data with significant ambiguity [34]. In this study, we present an artificial neural network that detects uncertainty states in human observers with regard to shape symmetry. To this end, we exploit a neural network metric in the output of a biologically inspired Self-Organizing Map (SOM). The Quantization Error of the SOM (SOM-QE) [35,36,37,38,39,40,41,42,43] is a measure of output variance and quantifies the difference between neural network states at different stages of unsupervised image learning. In our previous studies, we demonstrated functional properties of the SOM-QE such as sensitivity to the spatial extent, intensity, and sign or color of local image contrasts in unsupervised image classification by the neural network. The metric reliably detects the finest, clinically or functionally relevant variations in local image contrast contents [36,37,38,39,40,41,42,43], often invisible to the human eye [38,39,42,43]. Here, it will be shown that the SOM-QE, as a neural network state metric, reliably captures, or correlates with, varying levels of human uncertainty in the detection of symmetry of shape pairs with varying local color information contents.
Previous work using human two-alternative forced-choice decisions has shown that local color variations in two-dimensional pattern displays may significantly influence perceived relative distances [44,45,46,47]. In the present study, we use shapes that display perfect geometrical vertical mirror symmetry, as described in Section 2.1 below, where visual uncertainty about symmetry is introduced by systematic variations in color (hue) and/or saturation of local shape elements, leading to lesser or greater amounts of visual information content. The psychophysically determined choice response time, previously shown to directly reflect stimulus uncertainty in terms of biological or physiological system states [48,49], is exploited as a measure of symmetry uncertainty. Response time (RT) measurement is the psychophysical approach to what is referred to as “mental chronometry” [50,51,52], which uses measurements of the time between a stimulus onset and an immediate behavioral response to the stimulus. Hick’s Law [53,54], in particular, established on the basis of experiments in traditional sensory psychophysics, has been repeatedly confirmed in studies by others [48,49,50,51,52]. The law directly relates response time to the amount of information in the stimulus, and the amount of information in the stimulus to uncertainty, as a biologically grounded and potentially universal mechanism underlying sensation, perception, attention and, ultimately, decision [48,49,50,51,52,53]. In its most general form, Hick’s Law postulates that, provided the error rate in the given psychophysical task is low, RT increases linearly with the amount of transmitted information (I) in a set of signals or, as here in this study, a visual image configuration. The law extends to a direct relationship between choice RT and stimulus uncertainty (SU), as plotted graphically in Figure 1 for illustration, and predicts that, provided the response error rate is low, RT increases linearly with stimulus uncertainty (SU):
RT = a + b(SU), where a is the baseline response time and b a positive slope constant.
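As a simple numerical illustration of this linear form (not part of the original study), the following Python sketch computes predicted RT for increasing levels of stimulus uncertainty; the intercept a and slope b are hypothetical values chosen purely for illustration.

```python
# Minimal illustration of the linear Hick's Law form RT = a + b * SU.
# The parameter values below are hypothetical, chosen for illustration only.
a = 0.45  # baseline response time in seconds (hypothetical intercept)
b = 0.12  # extra seconds of RT per unit of stimulus uncertainty (hypothetical slope)

for su in range(6):  # increasing levels of stimulus uncertainty
    rt = a + b * su
    print(f"SU = {su}: predicted RT = {rt:.2f} s")
```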
There is no evidence that RT in low-level visual decision tasks is influenced by individual variables such as gender. Inter-individual differences have been found in RT tasks involving higher levels of semantic processing, such as lexical decision and similar higher-level cognitive decision tasks [55]. Biological and cognitive ageing has been found to correlate with longer RT, due to changes in neurotransmitter dynamics in the ageing brain [56]. This age effect on RT is, however, subject to a certain amount of functional plasticity and may be counteracted by training [57]. To avoid potential effects of age in the experiments here, we chose a study population of healthy young male individuals.

2. Materials and Methods

Visual system uncertainty associated with the symmetry of shape pairs was varied experimentally in a series of two-dimensional images showing shape pairs with perfect geometrical (vertical mirror) symmetry but varying amounts of local color information. To quantify human uncertainty in response to the variable amounts of local color information, the images were presented in random order on a computer screen to observers who had to decide as quickly as possible whether the two shapes in a given image were symmetrical or not (yes/no procedure). The psychophysically measured choice response time was used as a measure of uncertainty, following the rationale of Hick’s Law explained in detail in the Introduction above. The original images were submitted to SOM-QE analysis to test whether the algorithm reliably detects the different levels of human uncertainty reflected by the psychophysical response time variations.

2.1. Original Images

The original images, fed into the SOM one by one as explained in Section 2.4 below, were generated in Photoshop 12. They are made available in the Supplementary Materials under “S1”; copies are shown, for illustration only, in Figure 2. All images are identically scaled, with a constant image size of 2720 × 1446 pixels. All paired shape configurations in the images have perfect geometric mirror symmetry, with the same number of local shape elements on the left and on the right, i.e., a total of 12 shape elements on each side. All local shape elements are of identical size: the perimeter of each single element, i.e., the sum of the lengths of its six sides, is constant at 1260 pixels, ensuring a constant area of the local elements. The local color parameters of shape elements in the images were selectively manipulated in Adobe RGB color space to introduce varying levels of visual uncertainty about the mirror symmetry of the shape pairs in the different images. The corresponding physical variations in color, hue, saturation, lightness and R-G-B are given in Table 1 below. The medium grey (R = 130, G = 130, B = 130) background is identical in all the images.
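For readers who want to check the correspondence between the HSL values and the R-G-B triplets in Table 1, the sketch below uses Python’s standard colorsys module (note its hue-lightness-saturation argument order). This conversion is a standard one added here for illustration; it is not part of the authors’ pipeline.

```python
import colorsys

# Convert Table 1 style HSL values (hue in degrees, saturation and
# lightness in percent) to 8-bit R-G-B triplets.
def hsl_to_rgb255(h_deg, s_pct, l_pct):
    # colorsys expects hue-lightness-saturation order, each in [0, 1]
    r, g, b = colorsys.hls_to_rgb(h_deg / 360.0, l_pct / 100.0, s_pct / 100.0)
    return round(r * 255), round(g * 255), round(b * 255)

print(hsl_to_rgb255(240, 100, 50))  # "strong" BLUE -> (0, 0, 255)
print(hsl_to_rgb255(0, 100, 87))    # "pale" RED   -> approximately (255, 189, 189)
```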

2.2. Experimental Display

The images were displayed for visual presentation on a high-resolution computer screen (EIZO COLOR EDGE CG 275W, 2560 × 1440-pixel resolution) connected to a DELL computer equipped with a high-performance graphics card (NVIDIA). Color and luminance calibration of the RGB channels of the monitor was performed using the appropriate Color Navigator self-calibration software, which is delivered with the screen and runs under Windows. As stated above, visual symmetry uncertainty in the shape pairs was varied by giving the local shape elements variable color appearance in terms of hue, lightness and saturation. We have four variations for each of the single-color shape pair configurations, labeled as “RED” and “BLUE”, and two variations for the multicolor shape pair configurations, labeled as “MULTICOL”. As explained in the introduction, we expect that multiple local variations in appearance across shapes of a pair would be likely to produce higher levels of visual symmetry uncertainty in a human observer compared with single local or global variations.

2.3. Choice Response Time Test

In the test phase measuring human decision times, the 20 images were displayed in random order in two successively repeated test sessions per human observer. Tests were run on a workstation consisting of a computer screen (EIZO COLOR EDGE CG 275W, 2560 × 1440-pixel resolution) connected to a DELL computer equipped with a high-performance graphics card (NVIDIA). Fifteen healthy young individuals, chosen mainly from a population of undergraduate students, participated in the test phase. All participants had normal or corrected-to-normal visual acuity. In addition, the Ishihara plates [58] were administered prior to individual testing to ensure that all participants also had normal color vision.
The choice response tests were run in October and November 2019, in conformity with the Helsinki Declaration for scientific experiments on human individuals. Participants provided informed consent prior to testing; individuals’ identities are not revealed. The test protocol adheres to the standards of good procedure stated in the ethics regulations of the Centre National de la Recherche Scientifique (CNRS) relative to response data collection from healthy humans in non-invasive standard tasks, for which examination of the experimental protocol by a specific ethics committee is not mandatory. Each participant was placed in an adjustable chair at a viewing distance of about 80 cm from the computer screen. Individual positions were adjusted to ensure that the screen was at eye level. Tests were run in individual sessions and under mesopic viewing conditions. Each participant was adapted to the ambient lighting condition for about five minutes. The instructions stated that images with two abstract patterns, one on the left and one on the right, would be shown on the screen in two separate sequences. The task communicated to each participant was to “decide as quickly as possible, and as soon as an image appears on the screen, whether or not the two patterns in the given image appear to be symmetrical”. Participants had to press “1” for “yes” or “2” for “no” on the computer keyboard, with the index and middle fingers of their dominant hand resting above these keys so that a given key could be pressed without any motor response delay. Each coded response choice was recorded and stored next to the choice response time in a labeled data column of an MS Excel file. The choice response time corresponds to the time between an image onset and the moment a response key is pressed. As soon as a response key was pressed, the current image disappeared from the screen; 900 milliseconds later, the next image was displayed. Image presentation in random order, trial sequencing, and data coding and recording were computer controlled. The program is written in Python for Windows and is freely available for download [59,60].
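The actual presentation program is the published Python software cited above [59,60]; purely to make the trial logic concrete, here is a minimal, hypothetical sketch of one trial using the pygame library, with the window size, response keys, and the 900 ms inter-trial interval taken from the description above. File names are hypothetical.

```python
import time
import pygame

# Hypothetical minimal sketch of one choice-RT trial; the published
# experiment software [59,60] is the authoritative implementation.
pygame.init()
screen = pygame.display.set_mode((2560, 1440), pygame.FULLSCREEN)

def run_trial(image_path):
    image = pygame.image.load(image_path)
    screen.blit(image, (0, 0))
    pygame.display.flip()
    onset = time.perf_counter()  # stimulus onset
    while True:
        for event in pygame.event.get():
            if event.type == pygame.KEYDOWN and event.key in (pygame.K_1, pygame.K_2):
                rt = time.perf_counter() - onset  # choice response time
                choice = "yes" if event.key == pygame.K_1 else "no"
                screen.fill((130, 130, 130))      # clear to medium grey background
                pygame.display.flip()
                pygame.time.wait(900)             # 900 ms inter-trial interval
                return choice, rt

# Example call with a hypothetical file name:
# choice, rt = run_trial("BLUE1_left.png")
```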

2.4. Neural Network (SOM) Analysis

The conceptual background and method of neural network analysis follow the same principle and protocol already described in our most recent previous work on biological cell imaging data analysis by SOM [37,43]. It is described here again in full detail, for the benefit of the reader. The Self-Organizing Map is an artificial neural network (ANN) with a functional architecture that formally corresponds to the nonlinear, ordered, smooth mapping of high-dimensional input data onto the elements of a regular, low-dimensional array [61]. It is assumed that the set of input variables can be defined as a real vector x of dimension n. A parametric real vector mi of dimension n is associated with each element in the SOM. Vector mi is a model, and the SOM is therefore an array of models. Assuming a general distance measure between x and mi given by d(x, mi), the map of an input vector x on the SOM array is defined as the array element mc that best matches x, yielding the smallest d(x, mi); in practice, mc is defined by the smallest Euclidean distance ||x − mi||. During the learning process, an input vector x is compared with all the mi to identify mc. Models topographically close in the map, up to a certain geometric distance indicated by hci, activate each other to learn from their joint input x. This results in a local relaxation or smoothing effect on the models in this neighborhood, which in continuous learning leads to global ordering. Learning is represented by
mi(t + 1) = mi(t) + α(t) hci(t) [x(t) − mi(t)]
where t = 1, 2, 3, … is the discrete-time coordinate, hci(t) denotes the neighborhood function (a smoothing kernel defined across the map points that converges towards zero with time), and α(t) is the learning rate, which also converges towards zero with time and controls the amount of learning in each model. At the end of the winner-take-all learning process in the SOM, each image input vector x is matched to its best-matching model mc within the map. The difference between x and mc, ||x − mc||, indicates how close a final SOM value is to the original input value; it is reflected by the quantization error, QE. The average QE of all input vectors X within a given image is determined by
QE = (1/N) Σ(i = 1..N) ||Xi − mc(i)||
where N is the number of input vectors x in the image. The final weights of the SOM are defined in terms of a three-dimensional output vector space representing the R, G, and B channels. The magnitude as well as the direction of change in any of these from one image to another is reliably reflected by changes in the QE.

SOM training consisted of 1000 iterations. The SOM was a two-dimensional rectangular map of 4 by 4 nodes, hence capable of creating 16 models, or domains, of observation. The spatial locations, or coordinates, of the models at their different locations on the map exhibit specific characteristics, each one different from all the others. When a new input signal is fed into the map, all the models compete; the winner is the model whose features most closely resemble those of the input signal. The input signal is grouped accordingly into one of the models. Each model, or domain, is an independent decoder of the same input, independently interpreting the information contained in the input, which is represented as a mathematical vector of the same form as that of the model. Therefore, the presence or absence of an active response at a specific map location, rather than the exact input-output signal transformation or the magnitude of the response, generates an interpretation of the input.

To define initial values for map size, a trial-and-error procedure was implemented. Map sizes larger than 4 by 4 produced observations where some models ended up empty, meaning that these models did not match any input at the end of the training. As a consequence, 16 models were sufficient to represent all local data in the images. Neighborhood distance and learning rate were assigned initial values of 1.2 and 0.2, respectively. These values were obtained through the trial-and-error procedure, after testing for the quality of the “first guess”, directly determined by the value of the resulting quantization error: the smaller the latter, the better the “first guess”. It is worth pointing out that the models were initialized by randomly picking training image vectors. This allows the SOM to work on the original data with no prior assumptions about levels of organization within the data. This, however, requires starting with a wider neighborhood function and a bigger learning-rate factor than in procedures where initial values for model vectors may be pre-selected [62]. Our approach is economical in terms of computation time, which constitutes one of its major advantages.

The 20 images here were fed one by one into a single SOM. The training image for the SOM prior to further input can be any of these. Each of the 20 input images is fed into the SOM one at a time and is thereby evaluated against all 16 models. These 16 models compete, and the winner ends up as the best-matching model for the input vector. After unsupervised winner-takes-all SOM learning, the SOM-QE output is written into a data file. Further steps generate output plots of SOM-QE, where each output value is associated with the corresponding input image. The output data are then plotted in increasing or decreasing order of SOM-QE magnitude as a function of the corresponding image variations (automatic image classification). The computation time of the SOM analysis was about two seconds per image. The code used for implementing the SOM-QE is available online at: https://www.researchgate.net/publication/330500541_Self-organizing_map-based_quantization_error_from_images (accessed on 8 January 2021).
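The authors’ own SOM-QE code is available at the link above. As an independent, hedged illustration of the same pipeline (4 × 4 map, three-dimensional RGB pixel vectors, 1000 iterations, learning rate 0.2, neighborhood radius 1.2), the following sketch uses the third-party MiniSom library together with Pillow and NumPy; it is not the authors’ implementation, and the file names are hypothetical.

```python
import numpy as np
from PIL import Image
from minisom import MiniSom

def som_qe(image_path, train_path):
    """Quantization error of an image under a SOM trained as described
    in the text: 4x4 map, 3-D RGB input vectors, 1000 iterations."""
    data = np.asarray(Image.open(image_path).convert("RGB"), dtype=float).reshape(-1, 3) / 255.0
    train = np.asarray(Image.open(train_path).convert("RGB"), dtype=float).reshape(-1, 3) / 255.0

    som = MiniSom(4, 4, 3, sigma=1.2, learning_rate=0.2, random_seed=42)
    som.random_weights_init(train)       # models initialized from randomly picked training vectors
    som.train_random(train, 1000)        # 1000 winner-take-all learning iterations
    return som.quantization_error(data)  # mean ||x - mc|| over all pixel vectors

# Hypothetical usage: one SOM-QE value per test image
# print(som_qe("MULTICOL2.png", "BLUE1.png"))
```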

3. Results

With 20 images per individual session, two successive sessions per participant, and 15 participants, a total of 600 choice response times were recorded. A shape pair corresponding to a single factor level of shape appearance and color was presented twice within a session of 20 images, to allow for left- and right-hand side presentation of a given appearance factor level in the shape pairs. The labels of the individual factor levels associated with each shape pair are given in Figure 2. With two repeated sessions per participant, we thus have four individual response times for each single factor level. All data analyses relative to choice response times were run on the 15 average response times for each factor level from the 15 participants. These data are made available in S2 of the Supplementary Materials.
Since all shape pairs in all the images were mirror-symmetric, “no” responses occurred only very rarely in the experiment, as would be expected (17 of the 600 recorded choice responses signaled “no”, corresponding to less than three percent of the total number of observations). In these rare cases, only the choice response times corresponding to a “yes” among the four responses recorded for a given factor level were used for computing the average. In terms of operational factor levels in the Cartesian experimental design, we have four levels (1,2,3,4) of a factor termed “Appearance” (A4) associated with the colors BLUE and RED, and two levels (1,2) of “Appearance” (A2) associated with the multiple-color case, termed MULTICOL here. The three color conditions, blue, red, and multicolor, describe three operational levels of a second factor termed “Color” (C3). In a first step, two separate two-way analyses of variance (ANOVA) were run to test for significant effects of the factors “Appearance” and “Color”. The first ANOVA compares the four levels of “Appearance” (1,2,3,4) across two levels (BLUE, RED) of the “Color” factor. The second ANOVA compares two levels of “Appearance” across three levels (BLUE, RED, MULTICOL) of the “Color” factor.

3.1. Two-Way ANOVA on Choice Response Times

3.1.1. A4 × C2 × 15

This analysis corresponds to a Cartesian analysis plan A4 × C2 × 15, with four levels (1,2,3,4) of the “Appearance” factor and two levels (BLUE, RED) of the “Color” factor on the 15 individual average response times (RT), yielding a total number (N-1) of 119 degrees of freedom (DF). The results from this analysis are shown here below in the top part of Table 2.
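A hedged sketch of how such a two-way ANOVA can be run in Python is given below, using pandas and statsmodels; the input file and column names are hypothetical stand-ins for the average-RT data provided in S2, not the authors’ actual analysis scripts (the reported statistics were presumably produced with a dedicated statistics package).

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format table: one average RT per participant and condition,
# with columns "rt", "appearance" (1-4) and "color" ("BLUE", "RED").
df = pd.read_csv("average_rt_blue_red.csv")

model = ols("rt ~ C(appearance) * C(color)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F and p for main effects and interaction
```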
The results of this analysis signal a statistically significant effect of the “Appearance” factor on the average RT, and a statistically significant interaction between the “Appearance” and “Color” factors for the cases BLUE and RED. A statistically significant effect of “Color” independent of “Appearance” is not observed, leading to the conclusion that the two colors produced similar effects on RT relative to shape symmetry when their appearance was modified. The only exception is the comparison between BLUE3 and BLUE4, which is not significant here, as revealed by the post hoc comparison (Holm–Sidak) between these two factor levels (t(1,1) = 0.32, p < 0.75 NS). The effects can be appreciated further by looking at the effect sizes for the different conditions, which are visualized in Figure 3 and Figure 4.

3.1.2. A2 × C3 × 15

This analysis corresponds to a Cartesian analysis plan A2 × C3 × 15, with two levels (1,2) of the “Appearance” factor and three levels (BLUE, RED, MULTICOL) of the “Color” factor on the 15 individual average response times (RT), yielding a total number (N-1) of 89 degrees of freedom. The results from this analysis are shown in the bottom part of Table 2. They signal a statistically significant effect of the “Appearance” factor on the average RT and a statistically significant effect of the “Color” factor. A statistically significant interaction is not observed here. Post hoc comparisons (Holm–Sidak) reveal statistically significant differences between the factor levels MULTICOL and RED (t(1,1) = 14.44, p < 0.001) and between the factor levels MULTICOL and BLUE (t(1,1) = 12.60, p < 0.001), but not, as was to be expected from the previous ANOVA, between the factor levels RED and BLUE (t(1,1) = 1.84, p < 0.09 NS). The “Color” effect here is reflected by the observation that shape pairs with multiple color elements yield significantly longer symmetry-related RT compared with shape pairs composed of either of the two single colors. This effect can be appreciated further by looking at the effect sizes for the different comparisons, which are visualized in Figure 5 below.

3.2. RT Effect Sizes

The effect sizes, in terms of differences between means, that correspond to the significant statistical differences signaled by the two-way ANOVA were plotted graphically and are shown in the top graphs of Figure 3, Figure 4 and Figure 5, along with the corresponding shape pairs that produced the results. The graphs show clearly that shape pairs with non-homogeneous appearance, i.e., local variations in hue, saturation, or lightness within and/or across shapes in a given pair, produce longer choice RT for “yes” responses relative to shape symmetry.

3.3. SOM-QE Effect Sizes

The SOM-QE metrics from the unsupervised neural network analysis of the test images were also plotted graphically and are displayed in the bottom graphs of Figure 3, Figure 4 and Figure 5. The graphs show clearly that the magnitudes of the SOM-QE from the neural network analysis consistently mirror the observed magnitudes of the average choice RT for “yes” responses relative to shape symmetry, produced by shape pairs with varying appearance in terms of local variations in hue, saturation, or lightness within and/or across the shapes of a given pair.
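A minimal sketch of this kind of paired visualization (RT means on top, SOM-QE values below, one bar per image condition) is given here, assuming hypothetical condition labels and values in place of the data plotted in the figures.

```python
import matplotlib.pyplot as plt

# Hypothetical condition labels and values, for illustration only.
conditions = ["BLUE1", "RED1", "MULTICOL1", "MULTICOL2"]
mean_rt = [0.52, 0.54, 0.71, 0.78]     # average "yes" RT in seconds (hypothetical)
som_qe = [0.010, 0.011, 0.025, 0.031]  # SOM-QE per image (hypothetical)

fig, (ax_rt, ax_qe) = plt.subplots(2, 1, sharex=True)
ax_rt.bar(conditions, mean_rt)
ax_rt.set_ylabel("mean RT (s)")
ax_qe.bar(conditions, som_qe)
ax_qe.set_ylabel("SOM-QE")
plt.tight_layout()
plt.show()
```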

3.4. Linear Regression Analyses

The results from the previous analyses show that the average choice RT for “yes” responses relative to shape symmetry, produced by shape pairs with varying appearance in terms of local variations in hue, saturation, or lightness within and/or across the shapes of a given pair, varies significantly, consistent with variations in decisional uncertainty about the mirror symmetry of the shapes in a pair. The higher the variability in hue, saturation or lightness of single shape elements, the longer the RT for “yes”, hence the higher the stimulus uncertainty for “symmetry”. Indeed, the longest choice RT for “yes” responses relative to shape symmetry is produced by the shape pairs MULTICOL1 and MULTICOL2. To bring to the fore the tight link between the variations in RT, reflecting different levels of human uncertainty, and the variations in the SOM-QE metric from the neural network analyses, we performed a linear regression analysis on the RT data for shape pairs with varying levels of appearance in BLUE, RED and MULTICOL shapes, and a linear regression analysis on the SOM-QE data for exactly the same shape pairs. The results from these analyses are plotted in Figure 6. The linear regression coefficients (R²) are provided in the graph for each analysis. RT for “yes” responses relative to shape symmetry and the SOM-QE, as a function of the same shape variations, follow highly similar and significant linear trends.
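A hedged sketch of such a regression with SciPy is shown below, using hypothetical paired data in place of the values plotted in Figure 6.

```python
from scipy.stats import linregress

# Hypothetical ordered condition ranks and paired measurements,
# standing in for the data plotted in Figure 6.
rank = [1, 2, 3, 4, 5, 6]
mean_rt = [0.51, 0.55, 0.58, 0.62, 0.71, 0.78]       # seconds (hypothetical)
som_qe = [0.009, 0.012, 0.015, 0.019, 0.025, 0.031]  # hypothetical

for name, y in (("RT", mean_rt), ("SOM-QE", som_qe)):
    fit = linregress(rank, y)
    print(f"{name}: slope={fit.slope:.4f}, R^2={fit.rvalue**2:.3f}, p={fit.pvalue:.4f}")
```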

4. Discussion

It is shown that mirror-symmetric shape pairs with variable appearance, caused by local variations in color information within and/or across the shapes of a pair, produce longer choice RT for “yes” responses relative to shape symmetry. The variations in average choice RT for “yes” are consistent with variations in visual system uncertainty for symmetry, made fully operational here through the variations in the amount or diversity of local color information (“Appearance”) in the display. Consistently, the display with the highest number of local variations, MULTICOL2, produces the longest choice RT. The higher the amount of variable information, the longer the RT for “yes” engendered by the stimulus uncertainty. Although color per se may be deemed irrelevant to the given task, color singletons, like the ones manipulated in this experiment to introduce variations in uncertainty, can act as powerful distractors in a variety of psychophysical choice tasks [63,64,65]. This is accounted for by models where the attentional weight of any object may be conceived as the product of bottom-up (visual) and top-down (expectational) components [63]. The higher the number of different color singletons in a given paired configuration, the longer the choice response time for “symmetry”, consistent with the most general variant of Hick’s Law [53]. As also shown, the variations in RT are consistently mirrored by the variations in the SOM-QE from the unsupervised neural network analysis of the same stimulus images. This provides further data showing that artificial neural networks are capable of detecting human uncertainty in perceptual judgment tasks [34]. The capability of the SOM-QE to capture such uncertainty in human choice responses to the symmetry of shapes with local variations in color parameters is tightly linked to the proven selectivity of this neural network metric to local contrast and color variations in a large variety of complex image data [36,37,38,39,40,41,42,43]. Here, the metric is revealed as a measure of both the variance in the image input data and the uncertainty in specific human decisions in response to such data. The neural network metric captures the effects of local color contrast on symmetry saliency in cases where pure shape geometry signals perfect mirror symmetry. This unambiguously shows that visual parameters beyond stimulus geometry [66,67,68,69] influence what has previously been termed the “symmetry of things in a thing”. Such local, non-geometrically determined effects on perceived shape symmetry have potentially important implications for image-guided human precision tasks [70,71], now more and more often assisted by neural network-driven image analysis. From a general functional viewpoint, local-color-based visual system uncertainty for symmetry, delaying conscious choice response times in humans, is consistent with current theory invoking interactions between low-level visual and higher-level cognitive mechanisms in perceptual decisions relative to symmetry [72,73]. The fully conscious detection of symmetry in choice response tasks most likely involves information processing through brain networks with neurons that display large receptive field areas and a massive amount of lateral connectivity [74].
Other machine learning approaches have exploited low-level and global features of natural images or scenes to train artificial symmetry detectors on the basis of symmetry-axis ground-truth datasets [75,76], demonstrating the importance of local features such as color in boosting symmetry detection learning by machines, compared with gray-scale image data. In human perception, however, the attentional weight of visual information enabling symmetry detection depends not only on the contrast of local features with their local surroundings (saliency), but also on the importance of the features with respect to the given task (relevance). This study deals with human perceptual system uncertainty when geometrically defined local (feature) and global (shape) symmetry rules are satisfied. Other types of symmetry uncertainty detection, not addressed in this study, are relevant in image processing. For example, feature selection methods such as uncertainty class-feature association map-based selection [77] have proven to be highly useful preprocessing approaches for eliminating irrelevant and redundant features from complex image data such as DNA microarray data [78,79], where the number of dimensions increases steadily and fast. In such feature selection approaches, which are generally applied to image clusters, the Symmetric Uncertainty Principle [77] is exploited to achieve dimensionality reduction for optimal classification performance. The SOM used in this study operates on the assumption that all data in the image (or a cluster thereof) are relevant; the dimensionality of the input to be mapped is predefined and already minimized.

5. Conclusions

While symmetry detection by machines, and the importance of colored features for such purposes, is an increasingly popular field of investigation, questions relating to image properties that may increase system uncertainty for detecting or responding to symmetry, especially when the latter is geometrically perfect, are less often investigated. This work presents a clear study case where an increasing amount of local color information delays the human visual system’s response to symmetry, consistent with the predictions of Hick’s Law [53,54]. This system uncertainty is reliably captured by the output metric of a biologically inspired artificial neural network, the SOM, which possesses self-organizing functional properties akin to those of the human sensory system, within the limitations of the study scope set here.

Supplementary Materials

The following are available online at https://www.mdpi.com/2073-8994/13/2/299/s1: Folder S1: OriginalImagesFedIntoSOM; Table S1: AverageResponseTimeData15Subjects.xls.

Author Contributions

Conceptualization, B.D.-L.; methodology, B.D.-L., J.M.W.; software, J.M.W.; validation, B.D.-L., J.M.W.; formal analysis, B.D.-L., J.M.W.; investigation, B.D.-L., J.M.W.; resources, B.D.-L.; data curation, B.D.-L., J.M.W.; writing—original draft preparation, B.D.-L., J.M.W.; writing—review and editing, B.D.-L., J.M.W.; visualization, B.D.-L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki. All the individuals provided informed consent to participate. Their identity is not revealed. The test procedure fully adheres to rules and regulations set by the ethics board of the corresponding author’s host institution (CNRS) for response data collection from healthy human individuals in non-invasive psychophysical choice response tasks, for which examination of the experimental protocol by a specific ethics committee is not mandatory. Ethical review and approval were, as a consequence, waived for this study.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

All data exploited here are made available in this publication.

Acknowledgments

Material support from the CNRS is gratefully acknowledged.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Schweisguth, F.; Corson, F. Self-Organization in Pattern Formation. Dev. Cell 2019, 49, 659–677. [Google Scholar] [CrossRef] [PubMed]
  2. Carroll, S.B. Chance and necessity: The evolution of morphological complexity and diversity. Nature 2001, 409, 1102–1109. [Google Scholar] [CrossRef] [PubMed]
  3. García-Bellido, A. Symmetries throughout organic evolution. Proc. Natl. Acad. Sci. USA 1996, 93, 14229–14232. [Google Scholar] [CrossRef] [Green Version]
  4. Groves, J.T. The physical chemistry of membrane curvature. Nat. Chem. Biol. 2009, 5, 783–784. [Google Scholar] [CrossRef]
  5. Hatzakis, N.S.; Bhatia, V.K.; Larsen, J.; Madsen, K.L.; Bolinger, P.Y.; Kunding, A.H.; Castillo, J.; Gether, U.; Hedegård, P.; Stamou, D. How curved membranes recruit amphipathic helices and protein anchoring motifs. Nat. Chem. Biol. 2009, 5, 835–841. [Google Scholar] [CrossRef] [PubMed]
  6. Holló, G. Demystification of animal symmetry: Symmetry is a response to mechanical forces. Biol. Direct 2017, 12, 11. [Google Scholar] [CrossRef] [Green Version]
  7. Mach, E. On Symmetry. In Popular Scientific Lectures; Open Court Publishing: Lasalle, IL, USA, 1893. [Google Scholar]
  8. Arnheim, R. Visual Thinking, 1969; University of California Press: Oakland, CA, USA, 2004. [Google Scholar]
  9. Deregowski, J.B. Symmetry, Gestalt and information theory. Q. J. Exp. Psychol. 1971, 23, 381–385. [Google Scholar] [CrossRef]
  10. Eisenman, R. Complexity–simplicity: I. Preference for symmetry and rejection of complexity. Psychon. Sci. 1967, 8, 169–170. [Google Scholar] [CrossRef]
  11. Eisenman, R.; Rappaport, J. Complexity preference and semantic differential ratings of complexity-simplicity and symmetry-asymmetry. Psychon. Sci. 1967, 7, 147–148. [Google Scholar] [CrossRef]
  12. Deregowski, J.B. The role of symmetry in pattern reproduction by Zambian children. J. Cross Cult. Psychol. 1972, 3, 303–307. [Google Scholar] [CrossRef]
  13. Amir, O.; Biederman, I.; Hayworth, K.J. Sensitivity to non-accidental properties across various shape dimensions. Vis. Res. 2012, 62, 35–43. [Google Scholar] [CrossRef] [Green Version]
  14. Bahnsen, P. Eine Untersuchung über Symmetrie und Asymmetrie bei visuellen Wahrnehmungen. Z. Für Psychol. 1928, 108, 129–154. [Google Scholar]
  15. Wagemans, J. Characteristics and models of human symmetry detection. Trends Cogn. Sci. 1997, 9, 346–352. [Google Scholar] [CrossRef]
  16. Sweeny, T.D.; Grabowecky, M.; Kim, Y.J.; Suzuki, S. Internal curvature signal and noise in low- and high-level vision. J. Neurophysiol. 2011, 105, 1236–1257. [Google Scholar] [CrossRef] [PubMed]
  17. Wilson, H.R.; Wilkinson, F. Symmetry perception: A novel approach for biological shapes. Vis. Res. 2002, 42, 589–597. [Google Scholar] [CrossRef] [Green Version]
  18. Baylis, G.C.; Driver, J. Perception of symmetry and repetition within and across visual shapes: Part-descriptions and object-based attention. Vis. Cognit. 2001, 8, 163–196. [Google Scholar] [CrossRef]
  19. Michaux, A.; Kumar, V.; Jayadevan, V.; Delp, E.; Pizlo, Z. Binocular 3D Object Recovery Using a Symmetry Prior. Symmetry 2017, 9, 64. [Google Scholar] [CrossRef]
  20. Jayadevan, V.; Sawada, T.; Delp, E.; Pizlo, Z. Perception of 3D Symmetrical and Nearly Symmetrical Shapes. Symmetry 2018, 10, 344. [Google Scholar] [CrossRef] [Green Version]
  21. Li, Y.; Sawada, T.; Shi, Y.; Steinman, R.M.; Pizlo, Z. Symmetry is the sine qua non of shape. In Shape Perception in Human and Computer Vision; Dickinson, S., Pizlo, Z., Eds.; Springer: London, UK, 2013; pp. 21–40. [Google Scholar]
  22. Pizlo, Z.; Sawada, T.; Li, Y.; Kropatsch, W.G.; Steinman, R.M. New approach to the perception of 3D shape based on veridicality, complexity, symmetry and volume: A mini-review. Vis. Res. 2010, 50, 1–11. [Google Scholar] [CrossRef] [Green Version]
  23. Barlow, H.B.; Reeves, B.C. The versatility and absolute efficiency of detecting mirror symmetry in random dot displays. Vis. Res. 1979, 19, 783–793. [Google Scholar] [CrossRef]
  24. Barrett, B.T.; Whitaker, D.; McGraw, P.V.; Herbert, A.M. Discriminating mirror symmetry in foveal and extra-foveal vision. Vis. Res. 1999, 39, 3737–3744. [Google Scholar] [CrossRef] [Green Version]
  25. Machilsen, B.; Pauwels, M.; Wagemans, J. The role of vertical mirror symmetry in visual shape perception. J. Vis. 2009, 9, 11. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Dresp-Langley, B. Bilateral Symmetry Strengthens the Perceptual Salience of Figure against Ground. Symmetry 2019, 11, 225. [Google Scholar] [CrossRef] [Green Version]
  27. Dresp-Langley, B. Affine Geometry, Visual Sensation, and Preference for Symmetry of Things in a Thing. Symmetry 2016, 8, 127. [Google Scholar] [CrossRef] [Green Version]
  28. Sabatelli, H.; Lawandow, A.; Kopra, A.R. Asymmetry, symmetry and beauty. Symmetry 2010, 2, 1591–1624. [Google Scholar] [CrossRef] [Green Version]
  29. Poirier, F.J.A.M.; Wilson, H.R. A biologically plausible model of human shape symmetry perception. J. Vis. 2010, 10, 1–16. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  30. Giurfa, M.; Eichmann, B.; Menzl, R. Symmetry perception in an insect. Nature 1996, 382, 458–461. [Google Scholar] [CrossRef]
  31. Krippendorf, S.; Syvaeri, M. Detecting symmetries with neural networks. Mach. Learn. Sci. Technol. 2021, 2, 015010. [Google Scholar] [CrossRef]
  32. Toureau, V.; Bibiloni, P.; Talavera-Martínez, L.; González-Hidalgo, M. Automatic Detection of Symmetry in Dermoscopic Images Based on Shape and Texture. Inf. Process. Manag. Uncertain. Knowl. Based Syst. 2020, 1237, 625–636. [Google Scholar]
  33. Shen, D.; Wu, G.; Suk, H.I. Deep Learning in Medical Image Analysis. Annu. Rev. Biomed. Eng. 2017, 19, 221–248. [Google Scholar] [CrossRef] [Green Version]
  34. Hramov, A.E.; Frolov, N.S.; Maksimenko, V.A.; Makarov, V.V.; Koronovskii, A.A.; Garcia-Prieto, J.; Antón-Toro, L.F.; Maestú, F.; Pisarchik, A.N. Artificial neural network detects human uncertainty. Chaos 2018, 28, 033607. [Google Scholar] [CrossRef]
  35. Dresp-Langley, B. Seven Properties of Self-Organization in the Human Brain. Big Data Cogn. Comput. 2020, 4, 10. [Google Scholar] [CrossRef]
  36. Wandeto, J.M.; Dresp-Langley, B. Ultrafast automatic classification of SEM image sets showing CD4 + cells with varying extent of HIV virion infection. In Proceedings of the 7ièmes Journées de la Fédération de Médecine Translationnelle de l’Université de Strasbourg, Strasbourg, France, 25–26 May 2019. [Google Scholar]
  37. Dresp-Langley, B.; Wandeto, J.M. Unsupervised classification of cell imaging data using the quantization error in a Self-Organizing Map. In Transactions on Computational Science and Computational Intelligence; Arabnia, H.R., Ferens, K., de la Fuente, D., Kozerenko, E.B., Olivas Varela, J.A., Tinetti, F.G., Eds.; Advances in Artificial Intelligence and Applied Computing; Springer-Nature: Berlin/Heidelberg, Germany, in press.
  38. Wandeto, J.M.; Nyongesa, H.K.O.; Remond, Y.; Dresp-Langley, B. Detection of small changes in medical and random-dot images comparing self-organizing map performance to human detection. Inf. Med. Unlocked 2017, 7, 39–45. [Google Scholar] [CrossRef]
  39. Wandeto, J.M.; Nyongesa, H.K.O.; Dresp-Langley, B. Detection of smallest changes in complex images comparing self-organizing map and expert performance. In Proceedings of the 40th European Conference on Visual Perception, Berlin, Germany, 27–31 August 2017. [Google Scholar]
  40. Wandeto, J.M.; Dresp-Langley, B.; Nyongesa, H.K.O. Vision-Inspired Automatic Detection of Water-Level Changes in Satellite Images: The Example of Lake Mead. In Proceedings of the 41st European Conference on Visual Perception, Trieste, Italy, 26–30 August 2018. [Google Scholar]
  41. Dresp-Langley, B.; Wandeto, J.M.; Nyongesa, H.K.O. Using the quantization error from Self-Organizing Map output for fast detection of critical variations in image time series. In ISTE OpenScience, collection “From data to decisions”; Wiley & Sons: London, UK, 2018. [Google Scholar]
  42. Wandeto, J.M.; Dresp-Langley, B. The quantization error in a Self-Organizing Map as a contrast and colour specific indicator of single-pixel change in large random patterns. Neural Netw. 2019, 119, 273–285; Special Issue reprint: Neural Netw. 2019, 120, 116–128. [Google Scholar] [CrossRef] [PubMed]
  43. Dresp-Langley, B.; Wandeto, J.M. Pixel precise unsupervised detection of viral particle proliferation in cellular imaging data. Inf. Med. Unlocked 2020, 20, 100433. [Google Scholar] [CrossRef] [PubMed]
  44. Dresp-Langley, B.; Reeves, A. Simultaneous brightness and apparent depth from true colors on grey: Chevreul revisited. Seeing Perceiving 2012, 25, 597–618. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  45. Dresp-Langley, B.; Reeves, A. Effects of saturation and contrast polarity on the figure-ground organization of color on gray. Front. Psychol. 2014, 5, 1136. [Google Scholar] [CrossRef] [Green Version]
  46. Dresp-Langley, B.; Reeves, A. Color and figure-ground: From signals to qualia, In Perception Beyond Gestalt: Progress in Vision Research; Geremek, A., Greenlee, M., Magnussen, S., Eds.; Psychology Press: Hove, UK, 2016; pp. 159–171. [Google Scholar]
  47. Dresp-Langley, B.; Reeves, A. Color for the perceptual organization of the pictorial plane: Victor Vasarely’s legacy to Gestalt psychology. Heliyon 2020, 6, 04375. [Google Scholar] [CrossRef]
  48. Bonnet, C.; Fauquet, A.J.; Estaún Ferrer, S. Reaction times as a measure of uncertainty. Psicothema 2008, 20, 43–48. [Google Scholar]
  49. Brown, S.D.; Marley, A.A.; Donkin, C.; Heathcote, A. An integrated model of choices and response times in absolute identification. Psychol. Rev. 2008, 115, 396–425. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  50. Luce, R.D. Response Times: Their Role in Inferring Elementary Mental Organization; Oxford University Press: New York, NY, USA, 1986. [Google Scholar]
  51. Posner, M.I. Timing the brain: Mental chronometry as a tool in neuroscience. PLoS Biol. 2005, 3, e51. [Google Scholar] [CrossRef] [Green Version]
  52. Posner, M.I. Chronometric Explorations of Mind; Erlbaum: Hillsdale, NJ, USA, 1978. [Google Scholar]
  53. Hick, W.E. On the rate of gain of information. Q. J. Exp. Psychol. 1952, 4, 11–26. [Google Scholar]
  54. Bartz, A.E. Reaction time as a function of stimulus uncertainty on a single trial. Percept. Psychophys. 1971, 9, 94–96. [Google Scholar] [CrossRef]
  55. Jensen, A.R. Clocking the Mind: Mental Chronometry and Individual Differences; Elsevier: Amsterdam, The Netherlands, 2006. [Google Scholar]
  56. Salthouse, T.A. Aging and measures of processing speed. Biol. Psychol. 2000, 54, 35–54. [Google Scholar] [CrossRef]
  57. Kuang, S. Is reaction time an index of white matter connectivity during training? Cogn. Neurosci. 2017, 8, 126–128. [Google Scholar] [CrossRef] [PubMed]
  58. Ishihara, S. Tests for Color-Blindness; Hongo Harukicho: Tokyo, Japan, 1917. [Google Scholar]
  59. Monfouga, M. Python Code for 2AFC Forced-Choice Experiments Using Contrast Patterns. 2019. Available online: https://pumpkinmarie.github.io/ExperimentalPictureSoftware/ (accessed on 8 January 2021).
  60. Dresp-Langley, B.; Monfouga, M. Combining Visual Contrast Information with Sound Can Produce Faster Decisions. Information 2019, 10, 346. [Google Scholar] [CrossRef] [Green Version]
  61. Kohonen, T. Self-Organizing Maps. 2001. Available online: http://link.springer.com/10.1007/978-3-642-56927-2 (accessed on 8 January 2021).
  62. Kohonen, T. MATLAB Implementations and Applications of the Self-Organizing Map; Unigrafia Oy: Helsinki, Finland, 2014; p. 177. [Google Scholar]
  63. Nordfang, M.; Dyrholm, M.; Bundesen, C. Identifying bottom-up and top-down components of attentional weight by experimental analysis and computational modeling. J. Exp. Psychol. Gen. 2013, 142, 510–535. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  64. Liesefeld, H.R.; Müller, H.J. Modulations of saliency signals at two hierarchical levels of priority computation revealed by spatial statistical distractor learning. J. Exp. Psychol. Gen. 2020, in press. [Google Scholar] [CrossRef] [PubMed]
  65. Dresp, B.; Fischer, S. Asymmetrical contrast effects induced by luminance and color configurations. Percept. Psychophys. 2001, 63, 1262–1270. [Google Scholar] [CrossRef] [Green Version]
  66. Dresp-Langley, B. Why the brain knows more than we do: Non-conscious representations and their role in the construction of conscious experience. Brain Sci. 2012, 2, 1–21. [Google Scholar] [CrossRef] [Green Version]
  67. Dresp-Langley, B. Generic properties of curvature sensing by vision and touch. Comput. Math. Methods Med. 2013, 634168. [Google Scholar] [CrossRef] [PubMed]
  68. Dresp-Langley, B. 2D geometry predicts perceived visual curvature in context-free viewing. Comput. Intell. Neurosci. 2015, 9. [Google Scholar] [CrossRef] [Green Version]
  69. Gerbino, W.; Zhang, L. Visual orientation and symmetry detection under affine transformations. Bull. Psychon. Soc. 1991, 29, 480. [Google Scholar]
  70. Batmaz, A.U.; de Mathelin, M.; Dresp-Langley, B. Seeing virtual while acting real: Visual display and strategy effects on the time and precision of eye-hand coordination. PLoS ONE 2017, 12, e0183789. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  71. Dresp-Langley, B. Principles of perceptual grouping: Implications for image-guided surgery. Front. Psychol. 2015, 6, 1565. [Google Scholar] [CrossRef] [Green Version]
  72. Martinovic, J.; Jennings, B.J.; Makin, A.D.J.; Bertamini, M.; Angelescu, I. Symmetry perception for patterns defined by color and luminance. J. Vis. 2018, 18, 4. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  73. Treder, M.S. Behind the Looking-Glass: A Review on Human Symmetry Perception. Symmetry 2010, 2, 1510–1543. [Google Scholar] [CrossRef]
  74. Spillmann, L.; Dresp-Langley, B.; Tseng, C.H. Beyond the classic receptive field: The effect of contextual stimuli. J. Vis. 2015, 15, 7. [Google Scholar] [CrossRef] [Green Version]
  75. Tsogkas, S.; Kokkinos, I. Learning-Based Symmetry Detection in Natural Images. In Lecture Notes in Computer Science; Computer Vision—ECCV 2012; Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7578. [Google Scholar]
  76. Liu, Y. Computational Symmetry in Computer Vision and Computer Graphics; Now Publishers Inc.: Norwell, MA, USA, 2009. [Google Scholar]
  77. Bakhshandeh, S.; Azmi, R.; Teshnehlab, M. Symmetric uncertainty class-feature association map for feature selection in microarray dataset. Int. J. Mach. Learn. Cyber. 2020, 11, 15–32. [Google Scholar] [CrossRef]
  78. Radovic, M.; Ghalwash, M.; Filipovic, N.; Obradovic, Z. Minimum redundancy maximum relevance feature selection approach for temporal gene expression data. BMC Bioinform. 2017, 18, 9. [Google Scholar] [CrossRef] [Green Version]
  79. Strippoli, P.; Canaider, S.; Noferini, F.; D’Addabbo, P.; Vitale, L.; Facchin, F.; Lenzi, L.; Casadei, R.; Carinci, P.; Zannotti, M.; et al. Uncertainty principle of genetic information in a living cell. Theor. Biol. Med. Model. 2005, 30, 40. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. Hick’s Law [53,54] postulates that, provided the error rate in the given psychophysical task is low, sensory system uncertainty (SU) increases linearly with the amount of transmitted information (I) in a set (top graph). The law presumes a direct relationship between choice RT and sensory system uncertainty (SU) where RT increases linearly with amount of transmitted information/stimulus uncertainty (bottom graph).
Figure 2. Copies of the test images, for illustration. Mirror symmetric shape pairs are displayed on a medium grey background. Visual symmetry uncertainty in the shape pairs was varied by giving shape elements variable amounts of color information resulting in variations in appearance. The condition with the highest amount of locally different color information is MULTICOL2.
Figure 3. Statistically significant differences in average RT (top) for the comparison between BLUE and RED shape pairs with appearance levels 1 and 2. The corresponding Self-Organizing Map Quantization Error (SOM-QE) values (bottom) from the neural network analysis are plotted in the graph below.
Figure 4. Statistically significant differences in average RT (top) for the comparison between BLUE and RED shape pairs with appearance levels 1, 3 and 4. The corresponding SOM-QE values (bottom) from the neural network analysis are plotted in the graph below. The difference in average RT between BLUE3 and BLUE4 is the only one here that is not statistically significant (see “Section 3.1.1.”).
Figure 5. Differences in average RT (top) for the comparison between BLUE and RED shape pairs with appearance level 1 and the multicolored MULTICOL shape pairs with appearance levels 1 and 2. The differences between BLUE and RED shape pairs of any appearance level are not statistically significant (see “Section 3.1.1”). The differences between image conditions BLUE1 or RED1 and MULTICOL1 and between BLUE2 or RED2 and MULTICOL2 are highly significant, as is the difference between MULTICOL1 and MULTICOL2 (see “Section 3.1.2”). The corresponding SOM-QE values (bottom) from the neural network analysis are plotted in the graph below.
Figure 6. The tight link between the variations in RT, reflecting different levels of human uncertainty, and the variations in the SOM-QE metric from the neural network analyses is brought to the fore here by a linear regression analysis on the RT data for shape pairs with varying levels of appearance in BLUE, RED and MULTICOL shapes, and a linear regression analysis on the SOM-QE data for exactly the same shape pairs.
Table 1. Physical color parameters producing local variations in pattern appearance.

| Appearance | Color | Hue | Saturation | Lightness | R-G-B |
|---|---|---|---|---|---|
| “Strong” | BLUE | 240 | 100 | 50 | 0-0-255 |
| | RED | 0 | 100 | 50 | 255-0-0 |
| | GREEN | 120 | 100 | 50 | 0-255-0 |
| | MAGENTA | 300 | 100 | 50 | 255-0-255 |
| | YELLOW | 60 | 100 | 50 | 255-255-0 |
| “Pale” | BLUE | 180 | 95 | 50 | 10-250-250 |
| | RED | 0 | 100 | 87 | 255-190-190 |
| | GREEN | 120 | 100 | 87 | 190-255-190 |
| | MAGENTA | 300 | 25 | 87 | 255-190-255 |
| | YELLOW | 60 | 65 | 67 | 255-255-190 |
Table 2. Results from the two-way analyses of variance, with factor-specific degrees of freedom (DF), the corresponding F statistics, and their associated probability limits (p).

| ANOVA | Factor | DF | F | p |
|---|---|---|---|---|
| 1st two-way ANOVA | APPEARANCE | 3 | 68.42 | <0.001 |
| | COLOR | 1 | 0.012 | 0.914 NS |
| | INTERACTION | 3 | 5.37 | <0.01 |
| 2nd two-way ANOVA | APPEARANCE | 1 | 8.20 | <0.01 |
| | COLOR | 2 | 123.56 | <0.001 |
| | INTERACTION | 2 | 0.564 | 0.57 NS |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
