Classification of Fashion Models’ Walking Styles Using Publicly Available Data, Pose Detection Technology, and Multivariate Analysis: From Past to Current Trendy Walking Styles
Abstract
1. Introduction
2. Materials and Methods
2.1. Selection of Video Resources
- The viewing angle must encompass the entire movement of the model for at least one gait cycle from the front to avoid sliding effects;
- The camera must have sufficient resolution (≥400 × 360) and speed (≥25 images/s);
- Models must walk on a flat surface and not wear clothing that is too heavy, which would reduce the accuracy of the skeletal detection;
2.2. Extraction of Skeletal Coordinates
- The 2D coordinates (x, y) of the following 13 landmarks were automatically extracted using the human pose detection library Pose Cap (Four Assist, Tokyo, Japan): head, shoulder center, hip center, right and left shoulders, right and left elbows, right and left hips, right and left knees, and right and left ankles. The default settings of the software were used for the pose detection. For skeletal landmarks with distinct outliers (e.g., when a part of the clothing was misidentified as a part of the human body), the principal investigator (Y.K.) made manual corrections using G-Dig v.2 software (Four Assist, Tokyo, Japan). The quality of the manual corrections was reviewed by a co-author (S.S.). We used this software because it allows for the manual correction of misidentified landmarks. This function was essential to the analysis of videos of fashion shows for which models sometimes wore flowy dresses;
- For each video resource, Y.K. manually detected the timing of the right- and left-heel contact events frame by frame, and the skeletal coordinates for one gait cycle were extracted. When one gait cycle was extracted from the contact event of the left heel, the skeletal coordinates were inverted to the left and right for the subsequent analysis. Y.K. verified the accuracy of the heel contact event detection, and another biomechanics expert (S.S.) performed the same analysis on 18 randomly selected videos (25% of the total video resources). The mean absolute error of the manual heel contact event detection between the investigators was 0.53 frames (Table A2 in Appendix C);
- Time, size, and location normalizations were performed for each data unit. For the time normalization, the skeletal coordinate data were linearly interpolated such that one gait cycle contained 51 frames (0–100%; 2% per frame). The cadence (steps/min), determined from the number of frames and the frame rate (images/s) between the heel contact events, was also recorded for subsequent analyses. For the size normalization, the distance from the neck to the center of the hip was set to one, and the size of the entire body was scaled accordingly. For the location normalization, the 2D coordinates of the hip joint at the first and last frames were both set to the origin, and the skeletal coordinate data between these events were linearly interpolated. These normalizations were necessary because the duration of one stride, the model size, and the walking locations varied among the models/video resources.
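The normalization steps above can be sketched in code. The following is a minimal NumPy illustration, not the authors' implementation; the function name and the landmark ordering (shoulder center at index 1, hip center at index 2) are assumptions for the example:

```python
import numpy as np

def normalize_cycle(coords, fps):
    """Time-, size-, and location-normalize one gait cycle.

    coords: (n_frames, 13, 2) array of (x, y) landmark positions, where
    frame 0 and the last frame are successive ipsilateral heel contacts.
    Returns the (51, 13, 2) normalized cycle and the cadence (steps/min).
    """
    n = coords.shape[0]
    # Cadence: one gait cycle contains two steps.
    cadence = 2 * 60.0 * fps / (n - 1)

    # Time normalization: resample to 51 frames (0-100% of the cycle, 2%/frame).
    t_old = np.linspace(0.0, 1.0, n)
    t_new = np.linspace(0.0, 1.0, 51)
    flat = coords.reshape(n, -1)
    out = np.stack([np.interp(t_new, t_old, flat[:, j])
                    for j in range(flat.shape[1])], axis=1).reshape(51, 13, 2)

    # Size normalization: mean neck-to-hip-center distance scaled to one.
    out /= np.linalg.norm(out[:, 1] - out[:, 2], axis=1).mean()

    # Location normalization: hip center at the first and last frames moved
    # to the origin, with a linear shift applied in between.
    start = out[0, 2].copy()
    out -= start
    out -= (np.linspace(0.0, 1.0, 51)[:, None] * out[-1, 2])[:, None, :]
    return out, cadence
```

For instance, a 31-frame cycle captured at 30 images/s gives a cadence of 2 × 60 × 30 / 30 = 120 steps/min.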
2.3. Data Analysis
- A 69 × 1327 input matrix was constructed (69 models × 1327 variables, where 1327 = 13 landmarks × 51 frames × 2 coordinates + 1 cadence value);
- Principal component analysis (PCA) was applied to the input matrix using a correlation matrix to reduce the data dimension;
- Hierarchical cluster analysis (HCA) was applied to the principal component scores (PCSs) of the principal component vectors (PCVs) with up to 80% cumulative variance to classify the data. The Euclidean distance and the Ward aggregation criterion were considered in the analysis. Dendrograms and cluster agglomeration schedules [21] were used to comprehensively determine the appropriate clusters for further analysis. Appendix A describes how to determine the appropriate clusters;
- To help with the interpretation of the walking styles, stick-figure animations representing the walking style of each cluster were generated from the reconstructed skeletal coordinates. The skeletal coordinates were reconstructed from the mean PCS of each cluster in each PCV and the mean and standard deviations of each data unit, as performed in previous studies [5,22];
- Furthermore, the cadences, years when the shows were held, and several kinematic parameters representing the walking styles of the clusters were compared statistically across the clusters. One-way analysis of variance (ANOVA) was applied when the normality and homoscedasticity assumptions were confirmed, and the Kruskal–Wallis test was applied when they were rejected. The Bonferroni method was used for multiple comparisons when a significant main effect was observed.
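The pipeline above (PCA on a correlation matrix, then Ward clustering of the scores of the components up to 80% cumulative variance) can be sketched as follows. This is an illustrative NumPy/SciPy version, not the authors' code; the function name and return values are invented for the example:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.stats import zscore

def classify_walking_styles(X, n_clusters=5, var_threshold=0.80):
    """PCA with a correlation matrix followed by Ward/Euclidean HCA.

    X: (n_models, n_features) input matrix, e.g., 69 x 1327
    (13 landmarks x 51 frames x 2 coordinates + cadence).
    """
    # PCA on the correlation matrix is PCA on column-standardized data.
    Z = zscore(X, axis=0, ddof=1)
    U, S, _ = np.linalg.svd(Z, full_matrices=False)
    eigvals = S**2 / (X.shape[0] - 1)
    explained = eigvals / eigvals.sum()

    # Retain components up to (and including) the cumulative-variance threshold.
    k = min(int(np.searchsorted(np.cumsum(explained), var_threshold)) + 1,
            S.size)
    scores = U[:, :k] * S[:k]  # principal component scores (PCSs)

    # Ward aggregation criterion with Euclidean distance on the retained scores.
    link = linkage(scores, method="ward", metric="euclidean")
    labels = fcluster(link, t=n_clusters, criterion="maxclust")
    return labels, link, k
```

The linkage matrix `link` plays the role of the agglomeration schedule that, together with the dendrogram, informs the choice of the number of clusters.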
3. Results
3.1. Classification of Walking Styles
3.2. Cadences and Years When Shows Were Held
3.3. Reconstructed Walking Styles
4. Discussion
4.1. Detailed Interpretations of Five Clusters
4.2. Limitations and Future Perspectives
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A. Detailed Consideration of Clusters
Appendix B. Verification of the Reliability of the PCA Results
1. Select 50 data units randomly from the 69 data units to create a data subset;
2. Repeat step 1 to create five different data subsets;
3. Apply PCA to each of the data subsets;
4. Calculate the correlation coefficients between the original and subset principal component loadings for principal components 1–13.
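As a sketch, the four steps above might look like the following (illustrative NumPy code, not the authors' implementation; the helper names are hypothetical, and taking absolute correlations to neutralize the arbitrary sign of eigenvectors is an assumption about how the comparison was made):

```python
import numpy as np

def pca_loadings(X):
    """Loadings (eigenvector x sqrt(eigenvalue)) of a correlation-matrix PCA."""
    eigvals, eigvecs = np.linalg.eigh(np.corrcoef(X, rowvar=False))
    order = np.argsort(eigvals)[::-1]  # descending by explained variance
    return eigvecs[:, order] * np.sqrt(np.clip(eigvals[order], 0.0, None))

def subset_reliability(X, n_subsets=5, subset_size=50, n_pcs=13, seed=0):
    """Correlate original vs. subset loadings for principal components 1..n_pcs."""
    rng = np.random.default_rng(seed)
    ref = pca_loadings(X)[:, :n_pcs]
    rows = []
    for _ in range(n_subsets):                      # steps 1-2: five subsets
        idx = rng.choice(X.shape[0], size=subset_size, replace=False)
        sub = pca_loadings(X[idx])[:, :n_pcs]       # step 3: PCA per subset
        rows.append([abs(np.corrcoef(ref[:, j], sub[:, j])[0, 1])
                     for j in range(n_pcs)])        # step 4: correlations
    return np.array(rows)  # shape (n_subsets, n_pcs)
```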
Appendix C. Supplemental Tables
Table A2. Frame numbers of the first (HC1) and second (HC2) heel contact events detected by the two investigators (Y.K. and S.S.) and their absolute differences (frames).

ID | Resources | HC1 Frame Number (Y.K.) | HC2 Frame Number (Y.K.) | HC1 Frame Number (S.S.) | HC2 Frame Number (S.S.) | HC1 Absolute Difference | HC2 Absolute Difference
---|---|---|---|---|---|---|---
1 | Dolce & Gabbana 2003 | 16 | 43 | 16 | 43 | 0 | 0 |
6 | Dolce & Gabbana 2013 | 21 | 46 | 21 | 46 | 0 | 0 |
8 | Dolce & Gabbana 2015 | 18 | 43 | 17 | 43 | 1 | 0 |
9 | Dolce & Gabbana 2017 | 27 | 54 | 27 | 54 | 0 | 0 |
14 | Dolce & Gabbana 2022 | 14 | 36 | 14 | 38 | 0 | 2 |
16 | Louis Vuitton 2003 | 25 | 49 | 26 | 49 | 1 | 0 |
18 | Louis Vuitton 2006 | 25 | 49 | 24 | 48 | 1 | 1 |
21 | Louis Vuitton 2015 | 25 | 51 | 26 | 51 | 1 | 0 |
30 | Valentino 1998 | 30 | 57 | 29 | 56 | 1 | 1 |
34 | Valentino 2005 | 32 | 59 | 32 | 59 | 0 | 0 |
36 | Valentino 2012 | 17 | 43 | 18 | 43 | 1 | 0 |
39 | Valentino 2015 | 15 | 42 | 16 | 42 | 1 | 0 |
41 | Valentino 2018 | 18 | 43 | 18 | 43 | 0 | 0 |
43 | Valentino 2021 | 21 | 46 | 20 | 45 | 1 | 1 |
49 | Versace 2011 | 12 | 36 | 12 | 36 | 0 | 0 |
51 | Versace 2013 | 14 | 39 | 14 | 39 | 0 | 0 |
54 | Versace 2016 | 16 | 40 | 17 | 41 | 1 | 1 |
68 | Saint Laurent 2019 | 16 | 41 | 14 | 39 | 2 | 2 |
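As a quick arithmetic check, the mean absolute error of 0.53 frames reported in Section 2.2 can be reproduced from the absolute differences listed above (18 videos × 2 heel contact events = 36 comparisons):

```python
# Absolute frame differences between investigators (Y.K. vs. S.S.) for the
# first (HC1) and second (HC2) heel contacts of the 18 re-analyzed videos.
hc1 = [0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 2]
hc2 = [0, 0, 0, 0, 2, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 2]

mae = (sum(hc1) + sum(hc2)) / (len(hc1) + len(hc2))
print(round(mae, 2))  # 0.53
```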
Eigenvalues of the principal components extracted from the input matrix.

Principal Component | Eigenvalue Total | % of Variance | Cumulative % | Principal Component | Eigenvalue Total | % of Variance | Cumulative %
---|---|---|---|---|---|---|---
1 | 261.631 | 19.776 | 19.776 | 31 | 4.733 | 0.358 | 94.764 |
2 | 200.914 | 15.186 | 34.962 | 32 | 4.315 | 0.326 | 95.090 |
3 | 182.836 | 13.820 | 48.782 | 33 | 4.228 | 0.320 | 95.409
4 | 101.470 | 7.670 | 56.451 | 34 | 4.013 | 0.303 | 95.713
5 | 68.524 | 5.179 | 61.631 | 35 | 3.877 | 0.293 | 96.006 |
6 | 52.543 | 3.972 | 65.602 | 36 | 3.421 | 0.259 | 96.264 |
7 | 44.647 | 3.375 | 68.977 | 37 | 3.241 | 0.245 | 96.509 |
8 | 39.104 | 2.956 | 71.933 | 38 | 3.019 | 0.228 | 96.738 |
9 | 31.303 | 2.366 | 74.299 | 39 | 2.892 | 0.219 | 96.956 |
10 | 28.842 | 2.180 | 76.479 | 40 | 2.788 | 0.211 | 97.167 |
11 | 22.418 | 1.694 | 78.173 | 41 | 2.702 | 0.204 | 97.371 |
12 | 19.439 | 1.469 | 79.643 | 42 | 2.532 | 0.191 | 97.563 |
13 | 19.056 | 1.440 | 81.083 | 43 | 2.502 | 0.189 | 97.752 |
14 | 17.008 | 1.286 | 82.368 | 44 | 2.324 | 0.176 | 97.927 |
15 | 16.480 | 1.246 | 83.614 | 45 | 2.153 | 0.163 | 98.090 |
16 | 15.797 | 1.194 | 84.808 | 46 | 1.977 | 0.149 | 98.240 |
17 | 14.927 | 1.128 | 85.936 | 47 | 1.830 | 0.138 | 98.378 |
18 | 13.764 | 1.040 | 86.977 | 48 | 1.756 | 0.133 | 98.511 |
19 | 12.733 | 0.962 | 87.939 | 49 | 1.734 | 0.131 | 98.642 |
20 | 10.693 | 0.808 | 88.747 | 50 | 1.606 | 0.121 | 98.763 |
21 | 9.769 | 0.738 | 89.486 | 51 | 1.520 | 0.115 | 98.878
22 | 9.515 | 0.719 | 90.205 | 52 | 1.431 | 0.108 | 98.986 |
23 | 8.678 | 0.656 | 90.861 | 53 | 1.344 | 0.102 | 99.088 |
24 | 8.190 | 0.619 | 91.480 | 54 | 1.273 | 0.096 | 99.184 |
25 | 8.062 | 0.609 | 92.089 | 55 | 1.137 | 0.086 | 99.270
26 | 7.134 | 0.539 | 92.629 | 56 | 1.112 | 0.084 | 99.354 |
27 | 6.474 | 0.489 | 93.118 | 57 | 1.050 | 0.079 | 99.433 |
28 | 5.943 | 0.449 | 93.567 | ||||
29 | 5.658 | 0.428 | 93.995 | ||||
30 | 5.441 | 0.411 | 94.406 |
Agglomeration schedule of the hierarchical cluster analysis.

Stage | Combined Cluster 1 | Combined Cluster 2 | Coefficients | Stage at Which Cluster 1 First Appears | Stage at Which Cluster 2 First Appears | Next Stage
---|---|---|---|---|---|---
1 | 39 | 53 | 0.994 | 0 | 0 | 16 |
2 | 22 | 64 | 2.608 | 0 | 0 | 11 |
3 | 31 | 51 | 4.603 | 0 | 0 | 15 |
4 | 23 | 67 | 6.677 | 0 | 0 | 16 |
5 | 52 | 65 | 9.029 | 0 | 0 | 32 |
6 | 15 | 21 | 11.692 | 0 | 0 | 19 |
7 | 11 | 20 | 14.360 | 0 | 0 | 22 |
8 | 36 | 57 | 17.112 | 0 | 0 | 29 |
9 | 19 | 55 | 20.687 | 0 | 0 | 29 |
10 | 12 | 50 | 24.324 | 0 | 0 | 17 |
11 | 22 | 68 | 28.195 | 2 | 0 | 28 |
12 | 41 | 69 | 32.124 | 0 | 0 | 35 |
13 | 17 | 46 | 36.383 | 0 | 0 | 53 |
14 | 4 | 7 | 40.649 | 0 | 0 | 23 |
15 | 31 | 49 | 44.916 | 3 | 0 | 37 |
16 | 23 | 39 | 49.184 | 4 | 1 | 48 |
17 | 10 | 12 | 53.572 | 0 | 10 | 38 |
18 | 58 | 66 | 58.131 | 0 | 0 | 42 |
19 | 15 | 47 | 62.808 | 6 | 0 | 24 |
20 | 8 | 14 | 67.498 | 0 | 0 | 44 |
21 | 2 | 29 | 72.260 | 0 | 0 | 41 |
22 | 11 | 54 | 77.202 | 7 | 0 | 48 |
23 | 4 | 5 | 82.210 | 14 | 0 | 58 |
24 | 15 | 44 | 87.270 | 19 | 0 | 46 |
25 | 61 | 62 | 92.967 | 0 | 0 | 50 |
26 | 9 | 32 | 98.941 | 0 | 0 | 36 |
27 | 13 | 33 | 105.013 | 0 | 0 | 39 |
28 | 22 | 37 | 111.342 | 11 | 0 | 31 |
29 | 19 | 36 | 117.783 | 9 | 8 | 32 |
30 | 6 | 48 | 124.437 | 0 | 0 | 38 |
31 | 22 | 45 | 131.447 | 28 | 0 | 46 |
32 | 19 | 52 | 138.884 | 29 | 5 | 53 |
33 | 16 | 60 | 146.627 | 0 | 0 | 43 |
34 | 27 | 43 | 154.417 | 0 | 0 | 49 |
35 | 41 | 42 | 162.358 | 12 | 0 | 37 |
36 | 1 | 9 | 171.666 | 0 | 26 | 58 |
37 | 31 | 41 | 181.129 | 15 | 35 | 47 |
38 | 6 | 10 | 191.068 | 30 | 17 | 51 |
39 | 13 | 59 | 201.125 | 27 | 0 | 45 |
40 | 30 | 38 | 211.187 | 0 | 0 | 52 |
41 | 2 | 3 | 221.706 | 21 | 0 | 47 |
42 | 56 | 58 | 232.444 | 0 | 18 | 57 |
43 | 16 | 63 | 243.376 | 33 | 0 | 50 |
44 | 8 | 35 | 254.721 | 20 | 0 | 54 |
45 | 13 | 34 | 267.346 | 39 | 0 | 51 |
46 | 15 | 22 | 280.375 | 24 | 31 | 62 |
47 | 2 | 31 | 293.487 | 41 | 37 | 65 |
48 | 11 | 23 | 306.696 | 22 | 16 | 59 |
49 | 26 | 27 | 320.456 | 0 | 34 | 59 |
50 | 16 | 61 | 337.771 | 43 | 25 | 60 |
51 | 6 | 13 | 355.717 | 38 | 45 | 57 |
52 | 30 | 40 | 374.133 | 40 | 0 | 62 |
53 | 17 | 19 | 392.680 | 13 | 32 | 54 |
54 | 8 | 17 | 412.592 | 44 | 53 | 63 |
55 | 24 | 25 | 433.653 | 0 | 0 | 61 |
56 | 18 | 28 | 455.566 | 0 | 0 | 64 |
57 | 6 | 56 | 478.205 | 51 | 42 | 64 |
58 | 1 | 4 | 501.157 | 36 | 23 | 60 |
59 | 11 | 26 | 530.219 | 48 | 49 | 61 |
60 | 1 | 16 | 559.464 | 58 | 50 | 65 |
61 | 11 | 24 | 590.135 | 59 | 55 | 67 |
62 | 15 | 30 | 621.827 | 46 | 52 | 63 |
63 | 8 | 15 | 660.884 | 54 | 62 | 66 |
64 | 6 | 18 | 700.994 | 57 | 56 | 68 |
65 | 1 | 2 | 744.246 | 60 | 47 | 66 |
66 | 1 | 8 | 788.376 | 65 | 63 | 67 |
67 | 1 | 11 | 835.548 | 66 | 61 | 68 |
68 | 1 | 6 | 884.000 | 67 | 64 | 0 |
References
- Koh, Y.; Lee, J. A study of color differences in women’s ready-to-wear collections from world fashion cities: Intensive study of the Fall/Winter 2010 collections from New York, London, Milan, and Paris. Color Res. Appl. 2013, 38, 463–468.
- Volonté, P. Modelling Practice: The Inertia of Body Ideals in the Fashion Industry. Sociologica 2019, 13, 11–26.
- Cao, Z.; Simon, T.; Wei, S.E.; Sheikh, Y. Realtime multi-person 2D pose estimation using part affinity fields. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 43, 172–186.
- Yamamoto, M.; Shimatani, K.; Ishige, Y.; Takemura, H. Verification of gait analysis method fusing camera-based pose estimation and an IMU sensor in various gait conditions. Sci. Rep. 2022, 12, 17719.
- Kobayashi, Y.; Hobara, H.; Heldoorn, T.A.; Kouchi, M.; Mochimaru, M. Age-independent and age-dependent sex differences in gait pattern determined by principal component analysis. Gait Posture 2016, 46, 11–17.
- Sawacha, Z.; Sartor, C.D.; Yi, L.C.; Guiotto, A.; Spolaor, F.; Sacco, I.C.N. Clustering classification of diabetic walking abnormalities: A new approach taking into account intralimb coordination patterns. Gait Posture 2020, 79, 33–40.
- Hobara, H.; Kobayashi, Y.; Mochimaru, M. Spatiotemporal variables of able-bodied and amputee sprinters in men’s 100-m sprint. Int. J. Sports Med. 2015, 36, 494–497.
- Tierney, G.J.; Gildea, K.; Krosshaug, T.; Simms, C.K. Analysis of ball carrier head motion during a rugby union tackle without direct head contact: A case study. Int. J. Sports Sci. Coach. 2019, 14, 190–196.
- Magnani, C.; Ait-Said, E.D. Geometrical analysis of motion schemes on fencing experts from competition videos. PLoS ONE 2021, 16, e0261888.
- Wu, C.H.; Wu, T.C.; Lin, W.B. Exploration of applying pose estimation techniques in table tennis. Appl. Sci. 2023, 13, 1896.
- Segal, Y.; Hadar, O.; Lhotska, L. Using EfficientNet-B7 (CNN), Variational Auto Encoder (VAE) and Siamese Twins’ Networks to Evaluate Human Exercises as Super Objects in a TSSCI Images. J. Pers. Med. 2023, 22, 874.
- TDPT Windows Version v0.6 Released! Available online: https://www.youtube.com/watch?v=Tfo8X86A6RI (accessed on 31 May 2024).
- Women’s Apparel—Worldwide. Available online: https://www.statista.com/outlook/cmo/apparel/women-s-apparel/worldwide (accessed on 7 April 2024).
- Men’s Apparel—Worldwide. Available online: https://www.statista.com/outlook/cmo/apparel/men-s-apparel/worldwide (accessed on 7 April 2024).
- Fashion Model Demographics and Statistics in the US. Available online: https://www.zippia.com/fashion-model-jobs/demographics/ (accessed on 7 April 2024).
- The Social Data Behind the Biggest Fashion Week Shows. 2024. Available online: https://www.brandwatch.com/blog/social-data-biggest-fashion-week-shows/ (accessed on 31 May 2024).
- Furukawa, T.; Miura, C.; Miyatake, K.; Watanabe, A.; Hasegawa, M. Quantitative trend analysis of luxury fashion based on visual impressions of young Japanese women. Int. J. Fash. Des. Technol. Educ. 2016, 10, 146–157.
- An, H.; Park, M. Approaching fashion design trend applications using text mining and semantic network analysis. Fash. Text. 2020, 7, 34.
- SimTK: OpenSim Project Home. Available online: https://simtk.org/projects/opensim (accessed on 31 May 2024).
- Delp, S.L.; Anderson, F.C.; Arnold, A.S.; Loan, P.; Habib, A.; John, C.T.; Guendelman, E.; Thelen, D.G. OpenSim: Open-source software to create and analyze dynamic simulations of movement. IEEE Trans. Biomed. Eng. 2007, 55, 1940–1950.
- Jauhiainen, S.; Pohl, A.J.; Äyrämö, S.; Kauppi, J.P.; Ferber, R. A hierarchical cluster analysis to determine whether injured runners exhibit similar kinematic gait patterns. Scand. J. Med. Sci. Sports 2020, 30, 732–740.
- Deluzio, K.J.; Astephen, J.L. Biomechanical features of gait waveform data associated with knee osteoarthritis: An application of principal component analysis. Gait Posture 2007, 25, 86–93.
- Cohen, J. Statistical Power Analysis for the Behavioral Sciences; Academic Press: Cambridge, MA, USA, 2013.
- Field, A. Discovering Statistics Using SPSS, 2nd ed.; Sage Publications: Washington, DC, USA, 2005.
- Why Do Models Walk Cross Legged? Available online: https://fashionrepublicmagazine.com/community/runway/why-do-models-walk-cross-legged/ (accessed on 7 April 2024).
- Guo, R.L.; Liu, B.; Huang, H.Y. The study of female fashion model’s basic walking posture. Adv. Mater. Res. 2011, 332–334, 1272–1275.
- An Analysis of the Most Iconic Supermodel Runway Walks: From Gigi Hadid to Hailey Bieber. 2016. Available online: https://www.elle.com/uk/fashion/celebrity-style/articles/a31869/an-analysis-of-the-most-iconic-supermodel-runway-walks/ (accessed on 7 April 2024).
- Gender Neutrality Becomes Fashion Reality. Available online: https://www.vogue.co.uk/article/gender-neutrality-becomes-fashion-reality (accessed on 7 April 2024).
- Press Release: LVMH and Kering Have Drawn up a Charter on Working Relations with Fashion Models and Their Well-Being. Available online: https://r.lvmh-static.com/uploads/2017/09/press-release-models-charter-kering-lvmh-en-def-09-06-17.pdf (accessed on 7 April 2024).
- Aoyagi, Y.; Yamada, S.; Ueda, S.; Iseki, C.; Kondo, T.; Mori, K.; Kobayashi, Y.; Fukami, T.; Hoshimaru, M.; Ishikawa, M.; et al. Development of Smartphone Application for Markerless Three-Dimensional Motion Capture Based on Deep Learning Model. Sensors 2022, 22, 5282.
- Goel, S.; Pavlakos, G.; Rajasegaran, J.; Kanazawa, A.; Malik, J. Humans in 4D: Reconstructing and Tracking Humans with Transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–3 October 2023.
- Tanabe, H.; Fujii, K.; Kaneko, N.; Yokoyama, H.; Nakazawa, K. Biomechanical strategies to maximize gait attractiveness among women. Front. Sports Act. Living 2023, 5, 1091470.
- Tanabe, H.; Yamamoto, K. Structural equation modeling of female gait attractiveness using gait kinematics. Sci. Rep. 2023, 13, 17823.
- Saito, S.; Saito, M.; Kondo, M.; Kobayashi, Y. Gait pattern can alter aesthetic visual impression from a third-person perspective. Sci. Rep. 2024, 14, 6602.
- YouTube Fair Use Policy. Available online: https://www.youtube.com/howyoutubeworks/policies/copyright/#copyright-exceptions (accessed on 7 April 2024).
- Japanese Law Translation, Copyright Act. Available online: https://www.japaneselawtranslation.go.jp/ja/laws/view/4207 (accessed on 7 April 2024).
- Clough, S.; Tanguay, A.F.; Mutlu, B.; Turkstra, L.S.; Duff, M.C. How Do Individuals With and Without Traumatic Brain Injury Interpret Emoji? Similarities and Differences in Perceived Valence, Arousal, and Emotion Representation. J. Nonverbal Behav. 2023, 47, 489–511.
- Hierarchical Cluster Analysis Using SPSS | Agglomeration Schedule | Data Analysis: Part 3. Available online: https://www.youtube.com/watch?v=bNemTjPGWlo (accessed on 16 April 2024).
- Validating a Hierarchical Cluster Analysis. Available online: https://www.youtube.com/watch?v=mSzk2KrbNfs (accessed on 16 April 2024).
 | Cluster 1 | Cluster 2 | Cluster 3 | Cluster 4 | Cluster 5
---|---|---|---|---|---
Number of resources | 11 | 9 | 14 | 23 | 12 |
Cadence (steps/min) | 129.6 (9.6) | 124.8 (16.8) | 124.8 (9.6) | 124.8 (4.8) | 122.4 (13.2)
Years when shows were held | 2006 (9) ** | 2011 (16) | 2014.5 (15.5) | 2014 (10) | 2018.5 (5.5) ** |
Classified shows | D&G 2003 | D&G 2005 | D&G 2013 | D&G 2015 | D&G 2019 |
D&G 2011 | D&G 2009 | D&G 2018 | D&G 2022 | LV 2009 | |
D&G 2012 | VAL 1998 | D&G 2020 | D&G 2023 | LV 2018 | |
D&G 2014 | VAL 2001 | D&G 2021 | LV 2004 | LV 2019 | |
D&G 2017 | VAL 2018 | LV 2006 | LV 2008 | LV 2020 | |
LV 2003 | VAL 2020 | VAL 1995 | LV 2015 | LV 2021 | |
VAL 2002 | VER 2011 | VAL 2004 | LV 2016 | LV 2022 | |
SL 2003 | VER 2013 | VAL 2005 | VAL 1999 | VAL 2015 | |
SL 2004 | SL 2020 | VER 2004 | VAL 2010 | VAL 2021 | |
SL 2006 | VER 2012 | VAL 2012 | VER 2015 | ||
SL 2008 | VER 2019 | VAL 2013 | VER 2016 | ||
VER 2022 | VAL 2014 | SL 2017 | |||
VER 2023 | VAL 2017 | ||||
SL 2016 | VAL 2022 | ||||
VAL 2023 | |||||
VER 2000 | |||||
VER 2002 | |||||
VER 2014 | |||||
VER 2017 | |||||
VER 2021 | |||||
SL 2009 | |||||
SL 2012 | |||||
SL 2019 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Kobayashi, Y.; Saito, S.; Murahori, T. Classification of Fashion Models’ Walking Styles Using Publicly Available Data, Pose Detection Technology, and Multivariate Analysis: From Past to Current Trendy Walking Styles. Sensors 2024, 24, 3865. https://doi.org/10.3390/s24123865