Impact of Visual Design Elements and Principles in Human Electroencephalogram Brain Activity Assessed with Spectral Methods and Convolutional Neural Networks
Abstract
1. Introduction
1.1. Overview
1.2. Background
1.2.1. EEG Symmetry Perception
1.2.2. EEG Colour Perception
1.2.3. EEG Brightness Perception
1.2.4. EEG Visual Motion Perception
1.3. Objective
- There is a statistical relationship between the VDEPs of video fragments and the mean EEG frequency bands (δ, θ, α, β, γ) of the viewers.
- A simple Convolutional Neural Network model can accurately predict the VDEPs in video content from the EEG activity of the viewer.
2. Materials and Methods
2.1. DEAP Dataset
- The data were downsampled to 128 Hz.
- EOG artefacts were removed.
- A bandpass frequency filter from 4 to 45 Hz was applied.
- The data were averaged to the common reference.
- The EEG channels were reordered so that all the samples followed the same order.
- The data were segmented into 60-s trials and a 3-s pre-trial baseline was removed.
- The trials were reordered from presentation order to video (Experiment_id) order.
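The segmentation and referencing steps above can be illustrated with a minimal NumPy sketch. This is an illustration only, not the published DEAP pipeline (which used dedicated EEG tooling); it assumes a raw array of shape (channels, samples) at 512 Hz and omits EOG artefact removal and the 4–45 Hz bandpass filter.

```python
import numpy as np

def preprocess(raw, fs_in=512, fs_out=128, trial_s=60, baseline_s=3):
    """Sketch of DEAP-style preprocessing: downsample, common average
    reference, and baseline-corrected 60-s trial segmentation.
    (Artefact removal and bandpass filtering are omitted.)"""
    # Downsample by simple decimation (a real pipeline would low-pass first).
    step = fs_in // fs_out
    x = raw[:, ::step]
    # Re-reference each sample to the common average across channels.
    x = x - x.mean(axis=0, keepdims=True)
    # Split off the 3-s pre-trial baseline, keep the 60-s trial,
    # and subtract the per-channel baseline mean.
    baseline = x[:, : baseline_s * fs_out]
    trial = x[:, baseline_s * fs_out : (baseline_s + trial_s) * fs_out]
    return trial - baseline.mean(axis=1, keepdims=True)

# 32 EEG channels, 63 s of raw signal (3-s baseline + 60-s trial) at 512 Hz.
trial = preprocess(np.random.randn(32, 512 * 63))
assert trial.shape == (32, 60 * 128)
```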
2.2. VDEP Tagging and Timestamps Pre-Processing
- Colour: “cold” (class 1), “warm” (class 2) and “unclear”.
- Balance: “asymmetrical” (class 1), “symmetrical” (class 2) and “unclear”.
- Movement: “fast” (class 1), “slow” (class 2) and “unclear”.
- Light: “bright” (class 1), “dark” (class 2) and “unclear”.
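The tagging scheme above can be sketched as per-second class labels built from second intervals like those listed in Appendix A. The helper and the interval convention (a tag (a–b) covering seconds a through b−1) are hypothetical illustrations, not the authors' exact annotation procedure.

```python
# Hypothetical sketch: each second of a 60-s clip receives class 1,
# class 2, or "unclear", from (start-end) second intervals.
def tag_seconds(class1_intervals, class2_intervals, length=60):
    labels = ["unclear"] * length
    for cls, intervals in ((1, class1_intervals), (2, class2_intervals)):
        for start, end in intervals:
            # One plausible convention: (start-end) covers seconds
            # start .. end-1 (list index start-1 .. end-2).
            for s in range(start - 1, end - 1):
                labels[s] = cls
    return labels

# Colour tags of a clip: "cold" (class 1) during (16-32),
# "warm" (class 2) during (1-16) and (35-60); the rest stays "unclear".
labels = tag_seconds([(16, 32)], [(1, 16), (35, 60)])
```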
2.3. Convolutional Neural Network (CNN)
3. Results
3.1. Balance
3.2. Colour
3.3. Light
3.4. Movement
3.5. Convolutional Neural Networks (CNN)
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A
VDEP | Band | Category | Kolmogorov–Smirnov Statistic | df | Sig.
---|---|---|---|---|---
Balance | Alpha | Symmetrical | 0.276 | 6528 | <0.001
Balance | Alpha | Asymmetrical | 0.295 | 6528 | <0.001
Balance | Beta | Symmetrical | 0.252 | 6528 | <0.001
Balance | Beta | Asymmetrical | 0.262 | 6528 | <0.001
Balance | Delta | Symmetrical | 0.264 | 6528 | <0.001
Balance | Delta | Asymmetrical | 0.282 | 6528 | <0.001
Balance | Gamma | Symmetrical | 0.237 | 6528 | <0.001
Balance | Gamma | Asymmetrical | 0.280 | 6528 | <0.001
Balance | Theta | Symmetrical | 0.274 | 6528 | <0.001
Balance | Theta | Asymmetrical | 0.289 | 6528 | <0.001
Colour | Alpha | Warm | 0.291 | 15,296 | <0.001
Colour | Alpha | Cold | 0.280 | 15,296 | <0.001
Colour | Beta | Warm | 0.265 | 15,296 | <0.001
Colour | Beta | Cold | 0.263 | 15,296 | <0.001
Colour | Delta | Warm | 0.278 | 15,296 | <0.001
Colour | Delta | Cold | 0.282 | 15,296 | <0.001
Colour | Gamma | Warm | 0.266 | 15,296 | <0.001
Colour | Gamma | Cold | 0.264 | 15,296 | <0.001
Colour | Theta | Warm | 0.292 | 15,296 | <0.001
Colour | Theta | Cold | 0.281 | 15,296 | <0.001
Light | Alpha | Bright | 0.269 | 20,320 | <0.001
Light | Alpha | Dark | 0.286 | 20,320 | <0.001
Light | Beta | Bright | 0.256 | 20,320 | <0.001
Light | Beta | Dark | 0.263 | 20,320 | <0.001
Light | Delta | Bright | 0.276 | 20,320 | <0.001
Light | Delta | Dark | 0.282 | 20,320 | <0.001
Light | Gamma | Bright | 0.258 | 20,320 | <0.001
Light | Gamma | Dark | 0.266 | 20,320 | <0.001
Light | Theta | Bright | 0.279 | 20,320 | <0.001
Light | Theta | Dark | 0.288 | 20,320 | <0.001
Movement | Alpha | Slow | 0.282 | 16,480 | <0.001
Movement | Alpha | Fast | 0.273 | 16,480 | <0.001
Movement | Beta | Slow | 0.265 | 16,480 | <0.001
Movement | Beta | Fast | 0.258 | 16,480 | <0.001
Movement | Delta | Slow | 0.272 | 16,480 | <0.001
Movement | Delta | Fast | 0.277 | 16,480 | <0.001
Movement | Gamma | Slow | 0.265 | 16,480 | <0.001
Movement | Gamma | Fast | 0.264 | 16,480 | <0.001
Movement | Theta | Slow | 0.284 | 16,480 | <0.001
Movement | Theta | Fast | 0.285 | 16,480 | <0.001
Band | Category | N | Mean Rank | Sum of Ranks
---|---|---|---|---
α (alpha) | Symmetrical | 6528 | 6625.26 | 43,249,723.00
α (alpha) | Asymmetrical | 6528 | 6431.74 | 41,986,373.00
α (alpha) | Total | 13,056 | |
β (beta) | Symmetrical | 6528 | 6628.52 | 43,270,953.00
β (beta) | Asymmetrical | 6528 | 6428.48 | 41,965,143.00
β (beta) | Total | 13,056 | |
δ (delta) | Symmetrical | 6528 | 6599.36 | 43,080,616.00
δ (delta) | Asymmetrical | 6528 | 6457.64 | 42,155,480.00
δ (delta) | Total | 13,056 | |
γ (gamma) | Symmetrical | 6528 | 6641.20 | 43,353,771.00
γ (gamma) | Asymmetrical | 6528 | 6415.80 | 41,882,325.00
γ (gamma) | Total | 13,056 | |
θ (theta) | Symmetrical | 6528 | 6596.48 | 43,061,834.00
θ (theta) | Asymmetrical | 6528 | 6460.52 | 42,174,262.00
θ (theta) | Total | 13,056 | |
Band | Category | N | Mean Rank | Sum of Ranks
---|---|---|---|---
α (alpha) | Warm | 15,296 | 15,333.69 | 234,544,166.00
α (alpha) | Cold | 15,296 | 15,259.31 | 233,406,362.00
α (alpha) | Total | 30,592 | |
β (beta) | Warm | 15,296 | 15,345.02 | 234,717,493.00
β (beta) | Cold | 15,296 | 15,247.98 | 233,233,035.00
β (beta) | Total | 30,592 | |
δ (delta) | Warm | 15,296 | 15,319.84 | 234,332,287.00
δ (delta) | Cold | 15,296 | 15,273.16 | 233,618,241.00
δ (delta) | Total | 30,592 | |
γ (gamma) | Warm | 15,296 | 15,326.88 | 234,439,886.00
γ (gamma) | Cold | 15,296 | 15,266.12 | 233,510,642.00
γ (gamma) | Total | 30,592 | |
θ (theta) | Warm | 15,296 | 15,276.85 | 233,674,686.00
θ (theta) | Cold | 15,296 | 15,316.15 | 234,275,842.00
θ (theta) | Total | 30,592 | |
Band | Category | N | Mean Rank | Sum of Ranks
---|---|---|---|---
α (alpha) | Bright | 20,320 | 20,597.97 | 418,550,732.00
α (alpha) | Dark | 20,320 | 20,043.03 | 407,274,388.00
α (alpha) | Total | 40,640 | |
β (beta) | Bright | 20,320 | 20,567.33 | 417,928,061.50
β (beta) | Dark | 20,320 | 20,073.67 | 407,897,058.50
β (beta) | Total | 40,640 | |
δ (delta) | Bright | 20,320 | 20,532.43 | 417,218,904.00
δ (delta) | Dark | 20,320 | 20,108.57 | 408,606,216.00
δ (delta) | Total | 40,640 | |
γ (gamma) | Bright | 20,320 | 20,570.53 | 417,993,114.00
γ (gamma) | Dark | 20,320 | 20,070.47 | 407,832,006.00
γ (gamma) | Total | 40,640 | |
θ (theta) | Bright | 20,320 | 20,553.28 | 417,642,687.00
θ (theta) | Dark | 20,320 | 20,087.72 | 408,182,433.00
θ (theta) | Total | 40,640 | |
Band | Category | N | Mean Rank | Sum of Ranks
---|---|---|---|---
α (alpha) | Slow | 16,480 | 16,084.77 | 265,076,976.00
α (alpha) | Fast | 16,480 | 16,876.23 | 278,120,304.00
α (alpha) | Total | 32,960 | |
β (beta) | Slow | 16,480 | 16,066.68 | 264,778,816.00
β (beta) | Fast | 16,480 | 16,894.32 | 278,418,464.00
β (beta) | Total | 32,960 | |
δ (delta) | Slow | 16,480 | 16,079.20 | 264,985,174.00
δ (delta) | Fast | 16,480 | 16,881.80 | 278,212,106.00
δ (delta) | Total | 32,960 | |
γ (gamma) | Slow | 16,480 | 16,081.66 | 265,025,729.00
γ (gamma) | Fast | 16,480 | 16,879.34 | 278,171,551.00
γ (gamma) | Total | 32,960 | |
θ (theta) | Slow | 16,480 | 16,087.10 | 265,115,447.00
θ (theta) | Fast | 16,480 | 16,873.90 | 278,081,833.00
θ (theta) | Total | 32,960 | |
Layer (type) | Output Shape | Param # |
---|---|---|
Convolutional 2D | (None, 126, 30, 32) | 320 |
Convolutional 2D | (None, 124, 28, 32) | 9248 |
Batch Normalization | (None, 124, 28, 32) | 128 |
Dropout | (None, 124, 28, 32) | 0 |
Convolutional 2D | (None, 122, 26, 64) | 18,496 |
Convolutional 2D | (None, 120, 24, 64) | 36,928 |
Batch Normalization | (None, 120, 24, 64) | 256 |
Dropout | (None, 120, 24, 64) | 0 |
Max Pooling 2D | (None, 60, 12, 64) | 0 |
Convolutional 2D | (None, 58, 10, 64) | 36,928 |
Convolutional 2D | (None, 56, 8, 64) | 36,928 |
Batch Normalization | (None, 56, 8, 64) | 256 |
Dropout | (None, 56, 8, 64) | 0 |
Convolutional 2D | (None, 54, 6, 128) | 73,856 |
Convolutional 2D | (None, 52, 4, 128) | 147,584 |
Batch Normalization | (None, 52, 4, 128) | 512 |
Dropout | (None, 52, 4, 128) | 0 |
Max Pooling 2D | (None, 26, 2, 128) | 0 |
Flatten | (None, 6656) | 0 |
Dense | (None, 16) | 106,512 |
Dense | (None, 2) | 34 |
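The output shapes and parameter counts in the layer table above can be cross-checked with a small pure-Python calculator. The kernel and pooling sizes are inferred from the shapes (3×3 kernels, "valid" padding, stride 1, 2×2 max pooling, and a 128 × 32 × 1 input, i.e. time samples × EEG channels); they are reconstructions, not quoted from the paper.

```python
def conv2d(h, w, c_in, c_out, k=3):
    """Output height/width and parameter count of a 'valid' k x k convolution."""
    return h - k + 1, w - k + 1, (k * k * c_in + 1) * c_out

h, w, c = 128, 32, 1            # assumed input shape
conv_params = []
# (filters, max-pool afterwards?) for the eight convolutional layers.
for c_out, pool in [(32, False), (32, False), (64, False), (64, True),
                    (64, False), (64, False), (128, False), (128, True)]:
    h, w, p = conv2d(h, w, c, c_out)
    c = c_out
    conv_params.append(p)
    if pool:                    # 2x2 max pooling halves both spatial dims
        h, w = h // 2, w // 2

flat = h * w * c                # 26 * 2 * 128 = 6656, matching the Flatten row
dense_params = [flat * 16 + 16, 16 * 2 + 2]
```

Each Batch Normalization row contributes 4 parameters per channel (e.g. 32 × 4 = 128), which accounts for the remaining entries in the table.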
Balance | δ (Delta) | θ (Theta) | α (Alpha) | β (Beta) | γ (Gamma)
---|---|---|---|---|---
Mann-Whitney U | 20,844,824.000 | 20,863,606.000 | 20,675,717.000 | 20,654,487.000 | 20,571,669.000 |
Wilcoxon W | 42,155,480.000 | 42,174,262.000 | 41,986,373.000 | 41,965,143.000 | 41,882,325.000 |
Z | −2.148 | −2.061 | −2.933 | −3.032 | −3.417 |
Asymptotic Significance (2-tailed) | 0.032 | 0.039 | 0.003 | 0.002 | 0.001 |
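The U, W and Z values in these tables follow directly from the rank sums reported earlier: W is the rank sum of one group, U = W − n₁(n₁+1)/2, and Z comes from the normal approximation. A minimal sketch (tie correction omitted, so Z is approximate):

```python
import math

def mann_whitney_from_ranksum(w, n1, n2):
    """Mann-Whitney U and normal-approximation Z from one group's rank sum."""
    u = w - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return u, (u - mu) / sigma

# Balance, alpha band: the asymmetrical rank sum 41,986,373 with
# n1 = n2 = 6528 reproduces U = 20,675,717 and Z ~ -2.93, as in the
# alpha column of the table above.
u, z = mann_whitney_from_ranksum(41_986_373, 6528, 6528)
```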
Colour | δ (Delta) | θ (Theta) | α (Alpha) | β (Beta) | γ (Gamma)
---|---|---|---|---|---
Mann-Whitney U | 116,626,785.000 | 116,683,230.000 | 116,414,906.000 | 116,241,579.000 | 116,519,186.000 |
Wilcoxon W | 233,618,241.000 | 233,674,686.000 | 233,406,362.000 | 233,233,035.000 | 233,510,642.000 |
Z | −0.462 | −0.389 | −0.737 | −0.961 | −0.602 |
Asymptotic Significance (2-tailed) | 0.644 | 0.697 | 0.461 | 0.337 | 0.547 |
Light | δ (Delta) | θ (Theta) | α (Alpha) | β (Beta) | γ (Gamma)
---|---|---|---|---|---
Mann-Whitney U | 202,144,856.000 | 201,721,073.000 | 200,813,028.000 | 201,435,698.500 | 201,370,646.000 |
Wilcoxon W | 408,606,216.000 | 408,182,433.000 | 407,274,388.000 | 407,897,058.500 | 407,832,006.000 |
Z | −3.642 | −4.000 | −4.768 | −4.241 | −4.296 |
Asymptotic Significance (2-tailed) | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 |
Movement | δ (Delta) | θ (Theta) | α (Alpha) | β (Beta) | γ (Gamma)
---|---|---|---|---|---
Mann-Whitney U | 129,181,734.000 | 129,312,007.000 | 129,273,536.000 | 128,975,376.000 | 129,222,289.000 |
Wilcoxon W | 264,985,174.000 | 265,115,447.000 | 265,076,976.000 | 264,778,816.000 | 265,025,729.000 |
Z | −7.657 | −7.506 | −7.551 | −7.896 | −7.610 |
Asymptotic Significance (2-tailed) | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 |
Colour VDEP | Light VDEP | Balance VDEP | Movement VDEP | | | | |
---|---|---|---|---|---|---|---|---|
Warm | Cold | Bright | Dark | Symmetrical | Asymmetrical | Slow | Fast | |
Clip 1 | (1–16), (35–60) | (16–32) | (1–17), (33–60) | (17–33) | (3–5), (9–11), (23–24), (33–40), (45–60) | (1–10), (33–49) | ||
Clip 4 | (1–4), (9–12), (27–49), (53–60) | (4–9), (12–15), (49–53) | (1–60) | (1–60) | (1–60) | |||
Clip 5 | (1–60) | (1–60) | (1–60) | (16–32), (47–60) | ||||
Clip 6 | (52–53) | (1–49), (53–60) | (1–60) | (1–60) | (1–2), (7–11) | |||
Clip 7 | (8–15), (22–25), (26–30), (36–44), (55–60) | (1–3), (15–22), (24–25), (30–36), (44–55) | (1–60) | (27–29), (30–36), (47–55) | (1–27), (36–47), (55–60) | (1–8) | ||
Clip 9 | (1–7), (9–60) | (7–9) | (1–6), (9–60) | (6–9) | (7–9) | (1–7), (9–60) | (9–11), (13–15) |
Clip 12 | (1–60) | (1–60) | (1–60) | (23–41) | ||||
Clip 13 | (1–60) | (1–60) | (7–11), (15–23) | (1–7), (23–60) | (1–60) | |||
Clip 14 | (22–24) | (1–22) | (37–60) | (1–37) | (1–60) | (1–60) | ||
Clip 15 | (1–25), (29–60) | (25–29) | (1–60) | (25–38) | (1–25), (38–60) | (1–37), (43–60) | (37–43) | |
Clip 16 | (1–60) | (1–60) | (1–60) | (1–60) | ||||
Clip 17 | (2–6), (8–18) | (6–8), (18–28), (30–60) | (1–19), (34–60) | (19–34) | (1–60) | (1–60) | ||
Clip 19 | (1–60) | (1–60) | (1–60) | |||||
Clip 20 | (5–60) | (2–5) | (1–60) | (1–60) | ||||
Clip 22 | (18–22), (24–33), (55–60) | (1–18), (22–24), (33–55) | (1–60) | (4–6), (29–32) | (32–60) | (1–60) | ||
Clip 23 | (1–60) | (1–60) | (1–60) | (1–60) | ||||
Clip 24 | (10–12), (15–18), (20–22), (25–26), (27–28), (35–36), (38–44) | (1–10), (12–15), (18–20), (22–25), (26–27), (28–35), (37–38), (44–60) | (15–18), (33–60) | (1–10), (19–20), (22–33) | (1–3), (40–44) | (3–40), (44–60) | (1–60) | |
Clip 25 | (1–60) | (1–60) | (1–60) | |||||
Clip 26 | (12–13) | (1–12), (13–60) | (1–60) | |||||
Clip 27 | (1–9), (15–16), (19–22), (29–39), (41–44), (48–51), (52–53), (58–60) | (9–15), (16–19), (22–29), (39–41), (44–48), (51–52), (53–58) | (1–7), (15–39), (41–44), (47–60) | (7–15), (39–41), (44–47) | (43–60) | (1–43) | (1–60) | |
Clip 31 | (1–60) | (1–60) | (8–9), (11–12), (17–18), (33–34), (36–38), (44–46), (49–50), (54–60) | (1–8), (9–11), (12–17), (18–33), (34–36), (38–44), (46–49), (50–54) | (1–60) | |||
Clip 32 | (1–3), (7–9), (25–26), (29–32), (42–60) | (3–7), (9–25), (26–29), (32–42) | (1–60) | (35–36), (38–39), (40–41) | (1–35), (36–38), (39–40), (41–60) | (1–60) | ||
Clip 33 | (8–11), (37–38), (41–42), (48–60) | (1–8), (11–37), (38–41), (43–48) | (1–60) | (1–60) | (1–60) | |||
Clip 35 | (7–15), (19–23), (43–48), (54–55), (56–58) | (1–7), (15–19), (23–43), (48–54), (58–60) | (1–60) | (1–60) | ||||
Clip 36 | (1–60) | (1–60) | (1–60) | (1–60) | ||||
Clip 40 | (1–60) | (1–60) | (1–60) | (1–60) |
VDEP | Timestamps Class 1 | Timestamps Class 2 | Unclear Timestamps |
---|---|---|---|
Colour | 875 | 588 | 97 |
Balance | 1276 | 264 | 20 |
Movement | 645 | 535 | 380 |
Light | 795 | 654 | 111 |
VDEP Targets | AUC | Accuracy | PR-AUC |
---|---|---|---|
Light | 0.8873 | 0.8883 | 0.8484 |
Colour | 0.7556 | 0.7447 | 0.6940 |
Balance | 0.7584 | 0.7477 | 0.7241 |
Movement | 0.9698 | 0.9675 | 0.9569 |
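The AUC values reported above are, for a binary target, equivalent to a normalised Mann–Whitney U statistic: the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A minimal O(n·m) sketch with made-up scores (not the authors' evaluation code):

```python
def roc_auc(pos_scores, neg_scores):
    """ROC-AUC as the fraction of (positive, negative) pairs ranked
    correctly, counting ties as half a win."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# 8 of the 9 pairs are ranked correctly, so AUC = 8/9.
auc = roc_auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2])
```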
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Cabrera, F.E.; Sánchez-Núñez, P.; Vaccaro, G.; Peláez, J.I.; Escudero, J. Impact of Visual Design Elements and Principles in Human Electroencephalogram Brain Activity Assessed with Spectral Methods and Convolutional Neural Networks. Sensors 2021, 21, 4695. https://doi.org/10.3390/s21144695