Article

Classification of Fashion Models’ Walking Styles Using Publicly Available Data, Pose Detection Technology, and Multivariate Analysis: From Past to Current Trendy Walking Styles

1 Human Augmentation Research Center, National Institute of Advanced Industrial Science and Technology (AIST), c/o Kashiwa II Campus, University of Tokyo, 6-2-3 Kashiwanoha, Kashiwa 277-0882, Japan
2 Liberal Arts and Sciences, Nippon Institute of Technology, 4-1 Gakuendai, Saitama 345-8501, Japan
3 TOKYO GAISHO Inc., #101 Heimat Daikanyama, 2-21-10 Ebisunishi, Tokyo 150-0021, Japan
* Author to whom correspondence should be addressed.
Sensors 2024, 24(12), 3865; https://doi.org/10.3390/s24123865
Submission received: 25 April 2024 / Revised: 4 June 2024 / Accepted: 12 June 2024 / Published: 14 June 2024

Abstract

Understanding past and current trends is crucial in the fashion industry to forecast future market demands. This study quantifies and reports the characteristics of the trendy walking styles of fashion models during real-world runway performances using three cutting-edge technologies: (a) publicly available video resources, (b) human pose detection technology, and (c) multivariate human-movement analysis techniques. The skeletal coordinates of the whole body during one gait cycle, extracted from publicly available video resources of 69 fashion models, underwent principal component analysis to reduce the dimensionality of the data. Then, hierarchical cluster analysis was used to classify the data. The results revealed that (1) the gaits of the fashion models analyzed in this study could be classified into five clusters, (2) there were significant differences in the median years in which the shows were held between the clusters, and (3) reconstructed stick-figure animations representing the walking styles of each cluster indicate that an exaggerated leg-crossing gait has become less common over recent years. Accordingly, we concluded that the level of leg crossing while walking is one of the major changes in trendy walking styles, from the past to the present, directed by the world’s leading brands.

1. Introduction

In the fashion industry, trend analysis is crucial to forecasting future market demands because trends change with the times [1]. Despite the importance given to the walking skills exhibited by models during fashion shows [2], trend analysis has not been applied to their walking styles. A Google Scholar search using the keywords “fashion model” and “gait analysis” returned only seven articles as of 30 May 2024. None of these studies focused on the walking styles of fashion models in a historical or evolutionary context. Fashion models need a high degree of technical athletic coordination to perfect their gaits on the runway [2]. Additionally, they need to understand sophisticated movement features that evolve based on the demands of the industry and society [2]. Hence, quantifying and reporting the features of trendy walking styles from the past to the present is extremely beneficial not only for models, their trainers, and agencies to increase models’ chances of being cast in runway shows, but also for designers and brands to sustain the development of the industry.
Until recently, it was virtually impossible to analyze the top performances of the world’s best performers, such as professional athletes, dancers, and fashion models, to understand the nature of their sophisticated techniques. However, such an analysis is now possible using three cutting-edge technologies: (1) video-sharing services, such as YouTube, in which high-resolution video resources (even at the 4K level) of the world’s best performers, including the recordings of the fashion shows of the world’s leading brands over the past 20 years, are available; (2) human pose detection technology, such as OpenPose [3], which is a novel sensing technology that can extract the skeletal coordinates of the human body from RGB images with remarkable accuracy [4]; (3) multivariate human-movement analysis techniques, which can classify the data into several groups based on the characteristics of whole-body movements [5,6]. Several researchers have used these technologies to evaluate the performances of the world’s best performers during actual competitions [7,8,9]. For example, Hobara et al. analyzed publicly available internet broadcasts to determine the running characteristics of able-bodied and amputee sprinters in actual 100 m races at world championships [7]. Recently, advanced pose detection technologies have been used in various industries, such as sports [10], healthcare [11], and entertainment [12]. Therefore, we concluded that the combination of these cutting-edge technologies, namely, the analysis of publicly available fashion show video resources using pose detection technology and multivariate analysis techniques, can clarify the characteristics of the sophisticated walking styles of the world’s leading fashion models that are modified over time based on the demands of the industry.
This study is the first to analyze the gaits of fashion models during actual fashion shows of the world’s leading brands using the abovementioned three cutting-edge technologies, aimed at quantifying and reporting the features of past and current trendy walking styles directed by the world’s leading brands. This study focused only on women’s fashion shows, as the market size of women’s apparel is 1.5 times larger than that of men’s apparel (USD 901.10 billion [13] vs. USD 568.90 billion [14] in 2023), and 77.7% of fashion models in the United States are women [15]. Furthermore, we focused only on (1) the spring/summer prêt-à-porter collections and (2) the first model to walk down the runway of each show. We took this approach because heavy clothing, often worn in autumn/winter, and haute-couture collections make skeletal detection less accurate with the current computer vision technology. Additionally, the media highlights the “first look” as the hottest model of the year, significantly reflecting the fashion trends of each period.

2. Materials and Methods

A flowchart indicating the overall research methodology is shown in Figure 1.

2.1. Selection of Video Resources

The principal investigator of this research (Y.K.) performed a search on YouTube from 1 October 2022 to 28 February 2023. The search terms used were “fashion show”, “spring/summer”, and “women’s” combined with the names of brands (e.g., Versace, Dolce & Gabbana, and Louis Vuitton) participating in either one of the two most popular fashion weeks in the world (Paris or Milan) [16]. Manual selection of the videos was performed according to the following four inclusion criteria:
  • The viewing angle must encompass the entire movement of the model for at least one gait cycle from the front to avoid sliding effects;
  • The camera must have sufficient resolution (≥400 × 360) and speed (≥25 images/s);
  • Models must walk on a flat surface and not wear clothing that is too heavy, which would reduce the accuracy of the skeletal detection;
  • At least 10 video resources from different years must be obtained from the same brand so that transitions over time can be determined. This criterion was determined based on previous studies that conducted trend analyses in the fashion industry [17,18].
Because the walking environments among the shows could not be perfectly matched, two expert biomechanical researchers (Y.K. and S.S.) independently reviewed the videos and selected those that met the criteria. Finally, 69 videos were accepted, as listed in Table A1 (Appendix C).

2.2. Extraction of Skeletal Coordinates

After selecting the videos, the following steps were performed to extract reliable skeletal coordinates of the models during a single gait cycle:
  • The 2D coordinates (x, y) of the following 13 landmarks were automatically extracted using the human pose detection library Pose Cap (Four Assist, Tokyo, Japan): head, shoulder center, hip center, right and left shoulders, right and left elbows, right and left hips, right and left knees, and right and left ankles. The default settings of the software were used for the pose detection. For skeletal landmarks with distinct outliers (e.g., when a part of the clothing was misidentified as a part of the human body), the principal investigator (Y.K.) made manual corrections using G-Dig v.2 software (Four Assist, Tokyo, Japan). The quality of the manual corrections was reviewed by a co-author (S.S.). We used this software because it allows for the manual correction of misidentified landmarks. This function was essential to the analysis of videos of fashion shows for which models sometimes wore flowy dresses;
  • A Butterworth low-pass filter with a cut-off frequency of 6 Hz (the default value in the biomechanical simulator OpenSim [19,20]) was used to smooth the time-series landmark signals;
  • For each video resource, Y.K. manually detected the timing of the right- and left-heel contact events frame by frame, and the skeletal coordinates for one gait cycle were extracted. When one gait cycle was extracted from the contact event of the left heel, the skeletal coordinates were inverted to the left and right for the subsequent analysis. Y.K. verified the accuracy of the heel contact event detection, and another biomechanics expert (S.S.) performed the same analysis on 18 randomly selected videos (25% of the total video resources). The mean absolute error of the manual heel contact event detection between the investigators was 0.53 frames (Table A2 in Appendix C);
  • Time, size, and location normalizations were performed for each data unit. For the time normalization, the skeletal coordinate data were linearly interpolated such that one gait cycle contained 51 frames (0–100%; 2% per frame). The cadence (steps/min), determined from the number of frames and the frame rate (images/s) between the heel contact events, was also recorded for subsequent analyses. For the size normalization, the distance from the neck to the center of the hip was set to one, and the size of the entire body was adjusted. For the location normalization, the 2D coordinates of the hip joint at the first and last frames were both set to the origin, and the skeletal coordinate data for the intervening frames were adjusted by linear interpolation between these two events. These normalization processes were necessary because the time of one stride, the model size, and the walking locations varied among the models/video resources. A code sketch illustrating the smoothing and normalization steps is provided after this list.
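The smoothing and normalization steps can be summarized in a short Python sketch. This is an illustrative reconstruction rather than the authors' code: the array shapes, the landmark indices (neck_idx, hip_idx), and the filter order are assumptions, while the 6 Hz cut-off, the 51-frame resampling, the neck-to-hip scaling, and the hip-anchored detrending follow the description above.

```python
# Illustrative sketch only (not the authors' code). Assumed: `cycle` is a
# (n_frames, 13, 2) array of landmark coordinates for one gait cycle, `fps`
# is the video frame rate, and neck_idx/hip_idx point to the shoulder-centre
# ("neck") and hip-centre landmarks. The filter order is an assumption.
import numpy as np
from scipy.signal import butter, filtfilt


def smooth_landmarks(coords, fps, cutoff_hz=6.0, order=2):
    """Zero-lag Butterworth low-pass filter (6 Hz cut-off) applied to each coordinate."""
    b, a = butter(order, cutoff_hz / (fps / 2.0))
    flat = coords.reshape(coords.shape[0], -1)            # (frames, 13 * 2)
    return filtfilt(b, a, flat, axis=0).reshape(coords.shape)


def normalize_cycle(cycle, neck_idx=1, hip_idx=2, n_out=51):
    """Time-, size-, and location-normalize one gait cycle (heel strike to heel strike)."""
    # Time normalization: resample to 51 frames (0-100% of the cycle, 2% per frame).
    t_old = np.linspace(0.0, 1.0, cycle.shape[0])
    t_new = np.linspace(0.0, 1.0, n_out)
    resampled = np.stack(
        [np.interp(t_new, t_old, cycle[:, j, d])
         for j in range(cycle.shape[1]) for d in range(2)],
        axis=1,
    ).reshape(n_out, cycle.shape[1], 2)

    # Size normalization: scale the whole body so the neck-to-hip-centre distance is 1.
    trunk = np.linalg.norm(resampled[:, neck_idx] - resampled[:, hip_idx], axis=1).mean()
    resampled /= trunk

    # Location normalization: subtract a linear drift so the hip centre sits at the
    # origin in both the first and the last frame.
    drift = np.linspace(resampled[0, hip_idx], resampled[-1, hip_idx], n_out)
    return resampled - drift[:, None, :]
```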

2.3. Data Analysis

The time-, size-, and location-normalized skeletal coordinate data, as well as the cadence data, were analyzed as follows:
  • A 69 × 1327 input matrix was constructed (69 models with 13 landmarks × 51 frames × 2D coordinates + cadence);
  • Principal component analysis (PCA) was applied to the input matrix using a correlation matrix to reduce the data dimension;
  • Hierarchical cluster analysis (HCA) was applied to the principal component scores (PCSs) of the principal component vectors (PCVs) covering up to 80% of the cumulative variance to classify the data. Euclidean distance and Ward's aggregation criterion were used. Dendrograms and cluster agglomeration schedules [21] were used to comprehensively determine the appropriate number of clusters for further analysis; Appendix A describes this procedure, and an illustrative sketch of the PCA and clustering workflow is given after this list;
  • To help with the interpretation of the walking styles, stick-figure animations representing the walking style of each cluster were generated from the reconstructed skeletal coordinates. The skeletal coordinates were reconstructed from the mean PCS of each cluster in each PCV and the mean and standard deviations of each data unit, as performed in previous studies [5,22];
  • Furthermore, the cadences, years when the shows were held, and several kinematic parameters representing the walking styles of the clusters were compared statistically across the clusters. One-way analysis of variance (ANOVA) was applied when the normality and homoscedasticity assumptions were confirmed, and the Kruskal–Wallis test was applied when they were rejected. The Bonferroni method was used for multiple comparisons when a significant main effect was observed.
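The following Python sketch illustrates the dimensionality-reduction and clustering workflow described above (the study itself used SPSS). The input matrix `X` is assumed to be available from the preprocessing step; the correlation-matrix PCA is implemented as an SVD of the z-scored columns, and the column ordering assumed for reshaping the reconstructed pattern is an illustrative choice.

```python
# Illustrative sketch only (the study used SPSS). Assumed: `X` is the 69 x 1327
# input matrix (13 landmarks x 51 frames x 2 coordinates + cadence per model).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Correlation-matrix PCA = PCA on z-scored columns, here via SVD.
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
explained = S**2 / np.sum(S**2)
n_pc = int(np.searchsorted(np.cumsum(explained), 0.80)) + 1   # PCs covering 80% of variance
scores = U[:, :n_pc] * S[:n_pc]                               # principal component scores

# Hierarchical clustering with Euclidean distance and Ward's criterion, cut at five clusters.
link = linkage(scores, method="ward", metric="euclidean")
labels = fcluster(link, t=5, criterion="maxclust")

# Reconstruct a representative pattern for one cluster from its mean component scores,
# undoing the column standardization (cf. the reconstruction step in [5,22]).
mean_scores = scores[labels == 1].mean(axis=0)
recon = mean_scores @ Vt[:n_pc] * X.std(axis=0, ddof=1) + X.mean(axis=0)
stick_figure = recon[:-1].reshape(13, 51, 2)   # last column is the cadence; ordering assumed
```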
SPSS software (IBM SPSS Statistics v.19, IBM Corp., Armonk, NY, USA) was used for all statistical analyses. Given the number of data units in this study, statistical significance was judged using both p-values and effect sizes, which avoids the risk of misinterpreting the results based on p-values alone. The effect sizes were the partial eta-squared (η2) value for the parametric tests and the r-value for the nonparametric tests. Based on previous studies [23,24], the criterion was set at p < 0.05 and a medium effect size (η2 > 0.06 or r > 0.30).
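As an illustration of these statistical criteria (the analyses themselves were run in SPSS), the sketch below applies either a one-way ANOVA with eta-squared or a Kruskal–Wallis test with an approximate post hoc effect size r to one per-model variable. The array names `values` and `labels`, the assumption checks, and the z-from-p approximation are illustrative choices, not the authors' exact procedure.

```python
# Illustrative sketch only (the analyses were run in SPSS). Assumed: `values` is a
# length-69 array of one gait parameter and `labels` the cluster labels (1-5).
import numpy as np
from scipy import stats

groups = [values[labels == c] for c in np.unique(labels)]

# Assumption checks: Shapiro-Wilk per group (normality) and Levene's test (homoscedasticity).
normal = all(stats.shapiro(g).pvalue > 0.05 for g in groups)
equal_var = stats.levene(*groups).pvalue > 0.05

if normal and equal_var:
    f, p = stats.f_oneway(*groups)
    # Eta-squared for a one-way design: SS_between / (SS_between + SS_within); > 0.06 = medium.
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    effect = ss_between / (ss_between + ss_within)
else:
    h, p = stats.kruskal(*groups)
    # Post hoc example for Clusters 1 vs. 5 with an approximate effect size r = |z| / sqrt(n);
    # a Bonferroni correction would multiply p_pair by the number of pairwise comparisons.
    a, b = groups[0], groups[4]
    u, p_pair = stats.mannwhitneyu(a, b, alternative="two-sided")
    z = stats.norm.ppf(p_pair / 2.0)              # approximate z-value from the two-sided p
    effect = abs(z) / np.sqrt(len(a) + len(b))    # > 0.30 = medium
```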

3. Results

3.1. Classification of Walking Styles

The PCA produced 57 PCVs with eigenvalues greater than one. Of these, the first 13 PCVs explained more than 80% of the cumulative variance (Table A3 in Appendix C). The reliability of the PCA results was verified as described in Appendix B. The HCA produced the dendrogram shown in Figure 2 and the agglomeration schedule coefficients presented in Table A4 (Appendix C). Based on these outputs and the detailed consideration described in Appendix A, we concluded that classifying the data into five clusters was the most appropriate for interpreting the results. Table 1 provides detailed information about each cluster.

3.2. Cadences and Years When Shows Were Held

For the cadence, normality was rejected. The Kruskal–Wallis test revealed no significant main effect on the cadence (K(4, 69) = 2.892, not significant). For the years when the shows were held, both normality and homoscedasticity were rejected. The Kruskal–Wallis test revealed a significant main effect on the years when the shows were held (K(4, 69) = 12.759, p = 0.013). Post hoc analyses revealed that the median show year was significantly earlier for Cluster 1 than for Cluster 5, with a medium effect size (p < 0.001, r = 0.41). The median (interquartile range, IQR) cadence (steps/min) and show year for each cluster are listed in Table 1.

3.3. Reconstructed Walking Styles

Figure 3a–e show the skeletal coordinates representing the walking styles of each cluster. Each subfigure represents a consecutive 10% segment of the gait cycle. The full animation movies can be downloaded from the Supplementary Materials. As shown, the stick figures in Cluster 1 crossed their legs most exaggeratedly, and the level of leg crossing decreased in the order of Clusters 2, 3, 4, and 5. In Clusters 4 and 5, the skeletal figures did not cross their legs. Therefore, we compared the medio-lateral distance between the left- and right-ankle landmarks in the first frame (at the timing of heel contact) among the clusters. After confirming both the normality and homoscedasticity of the data, a one-way ANOVA was applied. The results revealed a significant main effect with a large effect size (F(4, 69) = 5.429, p < 0.001, η2 = 0.253), and multiple comparisons indicated significant differences between Clusters 1 and 4 (p < 0.05) and Clusters 1 and 5 (p < 0.01), as shown in Figure 4a.
The models in Cluster 3 tended to walk with a swaying upper body. Therefore, we compared the range of the neck landmark motion in the medio-lateral direction during the entire gait cycle among the clusters. Because the normality of the data was rejected, we applied the Kruskal–Wallis test. A significant main effect (K(4, 69) = 9.794; p < 0.05) was confirmed. Multiple comparisons indicated a near-significant difference between Clusters 3 and 4 with a medium effect size (p = 0.63, r = 0.33), as shown in Figure 4b.
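For clarity, the two kinematic features compared above can be computed directly from the normalized coordinate data. In the sketch below, the array name `cycles`, the landmark indices, and the choice of the horizontal image axis as the medio-lateral direction are assumptions; the feature definitions (ankle separation at the first frame and the medio-lateral range of the neck landmark) follow the text.

```python
# Illustrative sketch only. Assumed: `cycles` is a (69, 51, 13, 2) array of the
# normalized coordinates, with the landmark indices below and axis 0 of the last
# dimension as the medio-lateral (horizontal image) direction.
import numpy as np

NECK, R_ANKLE, L_ANKLE = 1, 12, 11   # assumed positions in the 13-landmark set
ML = 0                               # assumed medio-lateral coordinate axis

# (a) Medio-lateral ankle separation at the first frame (heel contact).
ankle_gap = np.abs(cycles[:, 0, R_ANKLE, ML] - cycles[:, 0, L_ANKLE, ML])

# (b) Medio-lateral range of the neck landmark over the whole gait cycle (upper-body sway).
neck_sway = cycles[:, :, NECK, ML].max(axis=1) - cycles[:, :, NECK, ML].min(axis=1)
```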

4. Discussion

This study aimed to quantify and report the features of past and current trendy walking styles directed by the world’s leading brands. Therefore, we quantitatively analyzed the gaits of fashion models during real-world runway performances using publicly available video resources, human pose detection technology, and multivariate human-movement analysis techniques. Our results revealed that (1) the gaits of the fashion models analyzed in this study could be classified into five clusters; (2) the median year for the shows in each cluster became more recent in the order of Clusters 1, 2, 4, 3, and 5, with a significant difference observed between Clusters 1 and 5; and (3) the level of leg crossing has decreased in shows conducted more recently. Accordingly, we concluded that the level of leg crossing while walking is one of the major changes in trendy walking styles, from the past to the present, directed by the world’s leading brands. Detailed interpretations of each cluster are provided below.

4.1. Detailed Interpretations of Five Clusters

Cluster 1 was the oldest among the five clusters, comprising 11 videos from the early 2000s to the mid-2010s (Table 1). The reconstructed walking style clearly indicates that the models in Cluster 1 tended to walk with their legs crossed in the most exaggerated manner among the five clusters, with small trunk movements. According to the Fashion Republic forum [25], exaggerated cross-legged walking causes clothing to drape in a visually appealing, fluid manner. Indeed, several models classified in this cluster wore long skirts that moved along their legs as they walked. Accordingly, the walking style of Cluster 1 can be interpreted as a special walking technique that makes dresses, such as long skirts, appear more attractive. However, this walking style and the associated fashions may no longer be favored by the world’s leading brands, as evidenced by the fact that no shows after 2017 are included in this cluster.
Cluster 2 was the second oldest among the five clusters and the smallest, with only nine videos, as listed in Table 1. The reconstructed stick figures indicate that the models in Cluster 2 tended to walk as if following a straight line. Guo et al. [26] stated that “keeping the feet in a straight line of imagination” is one of the basic requirements of walking for fashion models. Therefore, the walking style of Cluster 2 can be interpreted as one of the typical walking styles of fashion models, who keep their feet on an imaginary straight line. Considering that this cluster is the smallest of the five and that no resources after 2020 are classified into it, the walking style of this cluster is likely becoming outdated.
Cluster 3 was the second most recent, with 14 videos included (Table 1). The reconstructed stick figures in this cluster are characterized by a large upper-body swing. Such a walking style appears to deviate from another basic requirement described by Guo et al. [26], namely, “upper-body relax, avoid swing range is too big”. However, in actual fashion shows, models occasionally seem to use their upper bodies to attract attention. Regarding the walking style of Karlie Kloss, one of the world’s most famous supermodels, the fashion magazine Elle [27] noted, “Karlie uses her hips to sashay down the catwalk in heels with exaggerated arm movements to ensure all eyes are on her”. Therefore, the models included in this cluster may have intentionally moved their upper bodies to attract the audience’s attention. This type of walking remains relevant today, as recent shows (2021, 2022, and 2023) are included in this cluster.
Cluster 4 was the third oldest and the largest among the five clusters (Table 1). The reconstructed stick figures indicate that the models in this cluster tended to walk with the smallest upper-body motion among the five clusters. As mentioned in the previous paragraph, “upper-body relax, avoid swing range is too big” is one of the basic requirements of the walking style of fashion models [26]. Therefore, the walking style of Cluster 4 can be interpreted as another typical walking style of fashion models, one that minimizes upper-body movements. Because recent shows (2022 and 2023) are also included in this cluster, this type of walking also remains relevant today.
Cluster 5 was the most recent of the five clusters; a significant difference in the median show year was observed between Cluster 1 and Cluster 5. This cluster comprised 12 videos, as listed in Table 1. Here, the models appeared to walk with a more natural gait, without completely crossing their legs. From the mid-2010s, the keywords “gender neutral” and “gender fluidity” began to attract attention in the fashion scene [28]. Around the same period, “the charter for the well-being of fashion models” was released by an industry coalition [29]. As clothes and shoes become genderless, the gaits of models may also become genderless. Therefore, the walking style of Cluster 5 can be interpreted as the latest trend, possibly influenced by recent social developments.

4.2. Limitations and Future Perspectives

Owing to the methodology employed, this study has several limitations. First, the image quality and camera angles were not consistent among the analyzed video resources. Although detailed criteria and validation procedures were employed to minimize the impact of these factors, readers should keep these points in mind when interpreting the results. Furthermore, we used only software based on fundamental two-dimensional human pose detection technology (Pose Cap and G-Dig v.2) because it allows misidentified landmarks to be corrected manually. This function was essential for this study, in which the analyzed fashion-show videos sometimes featured models wearing flowy dresses. The use of recently developed quasi-three-dimensional (or 4D) pose detection technologies [12,30,31] may provide a further understanding of trendy walking styles. Recent studies on impressive walking styles have consistently reported that pelvic posture in the sagittal plane also plays an important role in the aesthetic impression of gait [32,33,34].
The methodology employed also has some advantages. For example, the use of publicly available videos allows us to analyze the maximum performances of performers, who are conditioned to perform to their maximum potential in competition rather than in the laboratory. Additionally, it is difficult to mimic the atmosphere of a live environment (e.g., the excitement of an audience) in a laboratory setting. We will continue our research, applying the latest sensing technologies, to provide beneficial information to fashion models who aspire to the top.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/s24123865/s1, Video S1: CLST1_Animation.mp4, Video S2: CLST2_Animation.mp4, Video S3: CLST3_Animation.mp4, Video S4: CLST4_Animation.mp4, Video S5: CLST5_Animation.mp4.

Author Contributions

Y.K. provided the overall methodology for this study. Y.K. and S.S. performed the analyses. Y.K. drafted the manuscript. All authors contributed notes, edited the manuscript, and provided critical feedback for improving it. All authors have read and agreed to the published version of the manuscript.

Funding

This study was partially supported by the National Institute of Advanced Industrial Science and Technology internal research funds.

Institutional Review Board Statement

This study was deemed exempt from approval by the local institutional review board, as we analyzed only publicly available videos (confirmation ID: H2022-1313). All video data analyzed in this study were used in accordance with the “fair use” policy of YouTube [35] and the rules of the “copyright exception” for noncommercial research [36].

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets generated and/or analyzed in the current study can be obtained from the corresponding author upon reasonable request.

Conflicts of Interest

Tatsuya Murahori is the CEO of TOKYO GAISHO Inc. The remaining authors declare that this research was conducted in the absence of any commercial or financial relationships that could be interpreted as a potential conflict of interest.

Appendix A. Detailed Consideration of Clusters

Clusters consisting of only one or two resources were considered insufficient, and understanding their characteristics would contribute little to the study objectives. We therefore set the minimum number of data units per cluster at 7, which is approximately 10% of the total number of data units analyzed (i.e., 69 in total). Following previous studies and instructional materials [37,38,39], we determined the optimal number of clusters comprehensively from the percent changes of the agglomeration schedule coefficients at each stage, in combination with a visual inspection of the dendrogram.
The hierarchical cluster analysis produced the dendrogram shown in Figure 2 and the agglomeration schedule coefficients presented in Table A4 (Appendix C) as outputs. The greatest percent change was found between stages 62 and 63, which indicated that classification into six clusters was optimal. However, in this case, Cluster 6 would consist of only two shows (Louis Vuitton 2006 and Valentino 1995), violating the criterion for the minimum number of data units per cluster. Therefore, we considered classifying the data into five clusters. In this case, all the clusters met the criterion, and the visual inspection of the dendrogram supported this choice as well. Consequently, we concluded that classifying the data into five clusters was the most appropriate for interpreting the results.
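The sketch below illustrates one way to compute the percent changes of the agglomeration schedule coefficients from a SciPy linkage matrix, assuming `link` from the clustering sketch in Section 2.3. The window size and the mapping from the largest jump to a candidate cluster solution mirror the reasoning above but are assumptions, not the authors' exact SPSS procedure.

```python
# Illustrative sketch only, assuming `link` is the SciPy linkage matrix from the
# clustering sketch in Section 2.3. The window size and the jump-to-solution mapping
# are illustrative choices.
import numpy as np

coeffs = link[:, 2]                           # agglomeration coefficients for stages 1..68
pct = np.diff(coeffs) / coeffs[:-1] * 100.0   # percent change between consecutive stages
window = 10                                   # inspect only the final (largest) merges
i = len(pct) - window + int(np.argmax(pct[-window:]))
# A jump between stages i+1 and i+2 (1-based) points to the solution formed at stage i+2,
# i.e. 69 - (i + 2) clusters; this candidate is then checked against the minimum-size rule
# and the dendrogram before a final choice is made.
n_clusters = (len(coeffs) + 1) - (i + 2)
```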

Appendix B. Verification of the Reliability of the PCA Results

To verify the reliability of the PCA results in this study, we conducted the following test:
  • Select 50 data units randomly from 69 data units to create a data subset;
  • Repeat procedure 1 to create five different data subsets;
  • Apply PCA to each of the data subsets;
  • Calculate the correlation coefficients between the original and subset principal component loadings for principal components 1–13.
As a result, the average correlation coefficients for the five datasets were greater than 0.3 for principal components 1–13. This suggests that similar results could have been obtained from the PCA even if different datasets were used. Therefore, we can conclude that the results of the PCA reported in our study are reliable.
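A minimal sketch of this subset check is given below, reusing the assumed matrix `X` and the full-sample loadings `Vt` from the PCA sketch in Section 2.3. Matching subset components to full-sample components by index and taking absolute correlations (to ignore arbitrary sign flips) are simplifying assumptions.

```python
# Illustrative sketch only, reusing `X` and the full-sample loadings `Vt` from the PCA
# sketch in Section 2.3. Index-based component matching and absolute correlations
# (component signs are arbitrary) are simplifications.
import numpy as np

rng = np.random.default_rng(0)
corrs = []
for _ in range(5):                                        # five random data subsets
    idx = rng.choice(X.shape[0], size=50, replace=False)  # 50 of the 69 data units
    Zs = (X[idx] - X[idx].mean(axis=0)) / X[idx].std(axis=0, ddof=1)
    _, _, Vt_s = np.linalg.svd(Zs, full_matrices=False)
    corrs.append([abs(np.corrcoef(Vt_s[k], Vt[k])[0, 1]) for k in range(13)])

mean_corr_per_pc = np.array(corrs).mean(axis=0)           # compare against the 0.3 threshold
```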

Appendix C. Supplemental Tables

Table A1. List of selected resources. SS: spring/summer.
No. | Brand | Year/Season | Location | Link (all links accessed on 31 May 2024)
1 | Dolce & Gabbana | 2003 SS | Milan | https://www.youtube.com/watch?v=p_nZpTBYsaA
2 | Dolce & Gabbana | 2005 SS | Milan | https://www.youtube.com/watch?v=fsbW4GqRJN0
3 | Dolce & Gabbana | 2009 SS | Milan | https://www.youtube.com/watch?v=QTXiOStGz5w
4 | Dolce & Gabbana | 2011 SS | Milan | https://www.youtube.com/watch?v=g0_Y9CuETcM
5 | Dolce & Gabbana | 2012 SS | Milan | https://www.youtube.com/watch?v=kuJ4lIR99fs
6 | Dolce & Gabbana | 2013 SS | Milan | https://www.youtube.com/watch?v=iB2ZuXC7da0
7 | Dolce & Gabbana | 2014 SS | Milan | https://www.youtube.com/watch?v=l6PaWaREPPo
8 | Dolce & Gabbana | 2015 SS | Milan | https://www.youtube.com/watch?v=bTT3bWCPfxc
9 | Dolce & Gabbana | 2017 SS | Milan | https://www.youtube.com/watch?v=CPZmkiUsATU
10 | Dolce & Gabbana | 2018 SS | Milan | https://www.youtube.com/watch?v=n986zYUsDfg
11 | Dolce & Gabbana | 2019 SS | Milan | https://www.youtube.com/watch?v=T-xUKG_rUGA
12 | Dolce & Gabbana | 2020 SS | Milan | https://www.youtube.com/watch?v=I2AbpKwRR40
13 | Dolce & Gabbana | 2021 SS | Milan | https://www.youtube.com/watch?v=rVjAtPiW3MQ
14 | Dolce & Gabbana | 2022 SS | Milan | https://www.youtube.com/watch?v=RjIyaVwgryE
15 | Dolce & Gabbana | 2023 SS | Milan | https://www.youtube.com/watch?v=Gpvgfc1mmvo
16 | Louis Vuitton | 2003 SS | Paris | https://www.youtube.com/watch?v=LHDaKrpiV9o
17 | Louis Vuitton | 2004 SS | Paris | https://www.youtube.com/watch?v=KN_AaQqYHpo
18 | Louis Vuitton | 2006 SS | Paris | https://www.youtube.com/watch?v=a21WpJJDods
19 | Louis Vuitton | 2008 SS | Paris | https://www.youtube.com/watch?v=Bx8iBjohtig
20 | Louis Vuitton | 2009 SS | Paris | https://www.youtube.com/watch?v=JGPlKM-gDGI
21 | Louis Vuitton | 2015 SS | Paris | https://www.youtube.com/watch?v=c3ZDGxxgCao
22 | Louis Vuitton | 2016 SS | Paris | https://www.youtube.com/watch?v=IK_9fSslijU
23 | Louis Vuitton | 2018 SS | Paris | https://www.youtube.com/watch?v=k57CBLm8jCs
24 | Louis Vuitton | 2019 SS | Paris | https://www.youtube.com/watch?v=PJleMAHoKek
25 | Louis Vuitton | 2020 SS | Paris | https://www.youtube.com/watch?v=BKy5SJjKTZ0
26 | Louis Vuitton | 2021 SS | Paris | https://www.youtube.com/watch?v=6G7L4rpxQfI
27 | Louis Vuitton | 2022 SS | Paris | https://www.youtube.com/watch?v=D7NYdae-KlA
28 | Valentino | 1995 SS | Paris | https://www.youtube.com/watch?v=g0DArKBQrdY
29 | Valentino | 1998 SS | Paris | https://www.youtube.com/watch?v=cACLF2Jqr6A
30 | Valentino | 1999 SS | Paris | https://www.youtube.com/watch?v=A8SnyF7Wqpw
31 | Valentino | 2001 SS | Paris | https://www.youtube.com/watch?v=jffPfbM0Nok
32 | Valentino | 2002 SS | Paris | https://www.youtube.com/watch?v=OLCl_o7a3so
33 | Valentino | 2004 SS | Paris | https://www.youtube.com/watch?v=6gR-f0pxmjQ
34 | Valentino | 2005 SS | Paris | https://www.youtube.com/watch?v=yAboBLAJMmQ
35 | Valentino | 2010 SS | Paris | https://www.youtube.com/watch?v=9OAyyZqPcFg
36 | Valentino | 2012 SS | Paris | https://www.youtube.com/watch?v=QtpYH3qnB0k
37 | Valentino | 2013 SS | Paris | https://www.youtube.com/watch?v=WVuRiuI3wag
38 | Valentino | 2014 SS | Paris | https://www.youtube.com/watch?v=BqgX5C_k8Ps
39 | Valentino | 2015 SS | Paris | https://www.youtube.com/watch?v=4PenthS8Pmw
40 | Valentino | 2017 SS | Paris | https://www.youtube.com/watch?v=RidHtqKqz78
41 | Valentino | 2018 SS | Paris | https://www.youtube.com/watch?v=LR2qbYb_s2A
42 | Valentino | 2020 SS | Paris | https://www.youtube.com/watch?v=pSkfcbzebSc
43 | Valentino | 2021 SS | Milan | https://www.youtube.com/watch?v=RktAHdMZOAs
44 | Valentino | 2022 SS | Paris | https://www.youtube.com/watch?v=ZKofPZM2ipw
45 | Valentino | 2023 SS | Paris | https://www.youtube.com/watch?v=B4jrkcZP0vQ
46 | Versace | 2000 SS | Milan | https://www.youtube.com/watch?v=l-zCHv6gwnE
47 | Versace | 2002 SS | Milan | https://www.youtube.com/watch?v=c77CR_VYKsQ
48 | Versace | 2004 SS | Milan | https://www.youtube.com/watch?v=Mw4ieZtKjUY
49 | Versace | 2011 SS | Milan | https://www.youtube.com/watch?v=BypOv68hdBk
50 | Versace | 2012 SS | Milan | https://www.youtube.com/watch?v=waJjDVWGPPg
51 | Versace | 2013 SS | Milan | https://www.youtube.com/watch?v=tPLS2dR6Z10
52 | Versace | 2014 SS | Milan | https://www.youtube.com/watch?v=A3OJYERxFpI
53 | Versace | 2015 SS | Milan | https://www.youtube.com/watch?v=HmPTucTXyyY
54 | Versace | 2016 SS | Milan | https://www.youtube.com/watch?v=yeXZLSqUBqo
55 | Versace | 2017 SS | Milan | https://www.youtube.com/watch?v=KnDwRS-opLI
56 | Versace | 2019 SS | Milan | https://www.youtube.com/watch?v=SFdYNuWmnqo
57 | Versace | 2021 SS | Milan | https://www.youtube.com/watch?v=Ffm-DfOTtv8
58 | Versace | 2022 SS | Milan | https://www.youtube.com/watch?v=zsu2WRFaUoQ
59 | Versace | 2023 SS | Milan | https://www.youtube.com/watch?v=hoKDrFyQDy0
60 | Saint Laurent | 2003 SS | Paris | https://www.youtube.com/watch?v=eAtT8PGQuFw
61 | Saint Laurent | 2004 SS | Paris | https://www.youtube.com/watch?v=rha577L91Qs
62 | Saint Laurent | 2006 SS | Paris | https://www.youtube.com/watch?v=E1Lxk9Sylng
63 | Saint Laurent | 2008 SS | Paris | https://www.youtube.com/watch?v=AId0Za9azmw
64 | Saint Laurent | 2009 SS | Paris | https://www.youtube.com/watch?v=U2ebRPcvFk0
65 | Saint Laurent | 2012 SS | Paris | https://www.youtube.com/watch?v=QFbGAWyWQ_c
66 | Saint Laurent | 2016 SS | Paris | https://www.youtube.com/watch?v=JbQL-wkKuq0
67 | Saint Laurent | 2017 SS | Paris | https://www.youtube.com/watch?v=IdRQ4E9N2Oc
68 | Saint Laurent | 2019 SS | Paris | https://www.youtube.com/watch?v=gtSbb-Euswk
69 | Saint Laurent | 2020 SS | Paris | https://www.youtube.com/watch?v=RwxyYras96k
Table A2. Validation of manually detected heel contact events.
ID | Resource | HC1 Frame (Y.K.) | HC2 Frame (Y.K.) | HC1 Frame (S.S.) | HC2 Frame (S.S.) | HC1 Absolute Difference | HC2 Absolute Difference
1 | Dolce & Gabbana 2003 | 16 | 43 | 16 | 43 | 0 | 0
6 | Dolce & Gabbana 2013 | 21 | 46 | 21 | 46 | 0 | 0
8 | Dolce & Gabbana 2015 | 18 | 43 | 17 | 43 | 1 | 0
9 | Dolce & Gabbana 2017 | 27 | 54 | 27 | 54 | 0 | 0
14 | Dolce & Gabbana 2022 | 14 | 36 | 14 | 38 | 0 | 2
16 | Louis Vuitton 2003 | 25 | 49 | 26 | 49 | 1 | 0
18 | Louis Vuitton 2006 | 25 | 49 | 24 | 48 | 1 | 1
21 | Louis Vuitton 2015 | 25 | 51 | 26 | 51 | 1 | 0
30 | Valentino 1998 | 30 | 57 | 29 | 56 | 1 | 1
34 | Valentino 2005 | 32 | 59 | 32 | 59 | 0 | 0
36 | Valentino 2012 | 17 | 43 | 18 | 43 | 1 | 0
39 | Valentino 2015 | 15 | 42 | 16 | 42 | 1 | 0
41 | Valentino 2018 | 18 | 43 | 18 | 43 | 0 | 0
43 | Valentino 2021 | 21 | 46 | 20 | 45 | 1 | 1
49 | Versace 2011 | 12 | 36 | 12 | 36 | 0 | 0
51 | Versace 2013 | 14 | 39 | 14 | 39 | 0 | 0
54 | Versace 2016 | 16 | 40 | 17 | 41 | 1 | 1
68 | Saint Laurent 2019 | 16 | 41 | 14 | 39 | 2 | 2
HC: heel contact.
Table A3. Total variance of principal components.
Principal Component | Eigenvalue Total | % of Variance | Cumulative % | Principal Component | Eigenvalue Total | % of Variance | Cumulative %
1261.63119.77619.776314.7330.35894.764
2200.91415.18634.962324.3150.32695.090
3182.83613.8248.782334.2280.32095.409
4101.477.67056.451344.0130.30395.713
568.5245.17961.631353.8770.29396.006
652.5433.97265.602363.4210.25996.264
744.6473.37568.977373.2410.24596.509
839.1042.95671.933383.0190.22896.738
931.3032.36674.299392.8920.21996.956
1028.8422.18076.479402.7880.21197.167
1122.4181.69478.173412.7020.20497.371
1219.4391.46979.643422.5320.19197.563
1319.0561.44081.083432.5020.18997.752
1417.0081.28682.368442.3240.17697.927
1516.4801.24683.614452.1530.16398.090
1615.7971.19484.808461.9770.14998.240
1714.9271.12885.936471.8300.13898.378
1813.7641.04086.977481.7560.13398.511
1912.7330.96287.939491.7340.13198.642
2010.6930.80888.747501.6060.12198.763
219.7690.73889.486511.520.11598.878
229.5150.71990.205521.4310.10898.986
238.6780.65690.861531.3440.10299.088
248.1900.61991.480541.2730.09699.184
258.0620.60992.089551.1370.08699.27
267.1340.53992.629561.1120.08499.354
276.4740.48993.118571.0500.07999.433
285.9430.44993.567
295.6580.42893.995
305.4410.41194.406
Table A4. Agglomeration schedule provided by HCA.
Stage | Combined Clusters (Cluster 1 | Cluster 2) | Coefficients | Stage at Which the Cluster First Appears (Cluster 1 | Cluster 2) | Next Stage
139530.9940016
222642.6080011
331514.6030015
423676.6770016
552659.0290032
6152111.6920019
7112014.3600022
8365717.1120029
9195520.6870029
10125024.3240017
11226828.1952028
12416932.1240035
13174636.3830053
144740.6490023
15314944.9163037
16233949.1844148
17101253.57201038
18586658.1310042
19154762.8086024
2081467.4980044
2122972.2600041
22115477.2027048
234582.21014058
24154487.27019046
25616292.9670050
2693298.9410036
271333105.0130039
282237111.34211031
291936117.7839832
30648124.4370038
312245131.44728046
321952138.88429553
331660146.6270043
342743154.4170049
354142162.35812037
3619171.66602658
373141181.129153547
38610191.068301751
391359201.12527045
403038211.1870052
4123221.70621047
425658232.44401857
431663243.37633050
44835254.72120054
451334267.34639051
461522280.375243162
47231293.487413765
481123306.696221659
492627320.45603459
501661337.771432560
51613355.717384557
523040374.13340062
531719392.680133254
54817412.592445363
552425433.6530061
561828455.5660064
57656478.205514264
5814501.157362360
591126530.219484961
60116559.464585065
611124590.135595567
621530621.827465263
63815660.884546266
64618700.994575668
6512744.246604766
6618788.376656367
67111835.548666168
6816884.00067640

References

  1. Koh, Y.; Lee, J. A study of color differences in women’s ready-to-wear collections from world fashion cities: Intensive study of the Fall/Winter 2010 collections from New York, London, Milan, and Paris. Color Res. Appl. 2013, 38, 463–468. [Google Scholar] [CrossRef]
  2. Volonté, P. Modelling Practice: The Inertia of Body Ideals in the Fashion. Sociologica 2019, 13, 11–26. [Google Scholar]
  3. Cao, Z.; Simon, T.; Wei, S.E.; Sheikh, Y. Realtime multi-person 2D pose estimation using part affinity fields. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 43, 172–186. [Google Scholar] [CrossRef] [PubMed]
  4. Yamamoto, M.; Shimatani, K.; Ishige, Y.; Takemura, H. Verification of gait analysis method fusing camera-based pose estimation and an IMU sensor in various gait conditions. Sci. Rep. 2022, 12, 17719. [Google Scholar] [CrossRef] [PubMed]
  5. Kobayashi, Y.; Hobara, H.; Heldoorn, T.A.; Kouchi, M.; Mochimaru, M. Age-independent and age-dependent sex differences in gait pattern determined by principal component analysis. Gait Posture 2016, 46, 11–17. [Google Scholar] [CrossRef] [PubMed]
  6. Sawacha, Z.; Sartor, C.D.; Yi, L.C.; Guiotto, A.; Spolaor, F.; Sacco, I.C.N. Clustering classification of diabetic walking abnormalities: A new approach taking into account intralimb coordination patterns. Gait Posture 2020, 79, 33–40. [Google Scholar] [CrossRef] [PubMed]
  7. Hobara, H.; Kobayashi, Y.; Mochimaru, M. Spatiotemporal variables of able-bodied and amputee sprinters in men’s 100-m sprint. Int. J. Sports Med. 2015, 36, 494–497. [Google Scholar] [CrossRef]
  8. Tierney, G.J.; Gildea, K.; Krosshaug, T.; Simms, C.K. Analysis of ball carrier head motion during a rugby union tackle without direct head contact: A case study. Int. J. Sports Sci. Coach. 2019, 14, 190–196. [Google Scholar] [CrossRef]
  9. Magnani, C.; Ait-Said, E.D. Geometrical analysis of motion schemes on fencing experts from competition videos. PLoS ONE 2021, 16, e0261888. [Google Scholar] [CrossRef]
  10. Wu, C.H.; Wu, T.C.; Lin, W.B. Exploration of applying pose estimation techniques in table tennis. Appl. Sci. 2023, 13, 1896. [Google Scholar] [CrossRef]
  11. Segal, Y.; Hadar, O.; Lhotska, L. Using EfficientNet-B7 (CNN), Variational Auto Encoder (VAE) and Siamese Twins’ Networks to Evaluate Human Exercises as Super Objects in a TSSCI Images. J. Pers. Med. 2023, 22, 874. [Google Scholar] [CrossRef] [PubMed]
  12. TDPT Windows Version v0.6 Released! Available online: https://www.youtube.com/watch?v=Tfo8X86A6RI (accessed on 31 May 2024).
  13. Women’s Apparel—Worldwide. Available online: https://www.statista.com/outlook/cmo/apparel/women-s-apparel/worldwide (accessed on 7 April 2024).
  14. Men’s Apparel—Worldwide. Available online: https://www.statista.com/outlook/cmo/apparel/men-s-apparel/worldwide (accessed on 7 April 2024).
  15. Fashion Model Demographics and Statistics in the US. Available online: https://www.zippia.com/fashion-model-jobs/demographics/ (accessed on 7 April 2024).
  16. The Social Data Behind the Biggest Fashion Week Shows. 2024. Available online: https://www.brandwatch.com/blog/social-data-biggest-fashion-week-shows/ (accessed on 31 May 2024).
  17. Furukawa, T.; Miura, C.; Miyatake, K.; Watanabe, A.; Hasegawa, M. Quantitative trend analysis of luxury fashion based on visual impressions of young Japanese women. Int. J. Fash. Des. Technol. Educ. 2016, 10, 146–157. [Google Scholar] [CrossRef]
  18. An, H.; Park, M. Approaching fashion design trend applications using text mining and semantic network analysis. Fash. Text. 2020, 7, 34. [Google Scholar] [CrossRef]
  19. SimTK. OpenSim Project Home. Available online: https://simtk.org/projects/opensim (accessed on 31 May 2024).
  20. Delp, S.L.; Anderson, F.C.; Arnold, A.S.; Loan, P.; Habib, A.; John, C.T.; Guendelman, E.; Thelen, D.G. OpenSim: Open-source software to create and analyze dynamic simulations of movement. IEEE Trans. Biomed. Eng. 2007, 55, 1940–1950. [Google Scholar] [CrossRef]
  21. Jauhiainen, S.; Pohl, A.J.; Äyrämö, S.; Kauppi, J.P.; Ferber, R. A hierarchical cluster analysis to determine whether injured runners exhibit similar kinematic gait patterns. Scand. J. Med. Sci. Sports 2020, 30, 732–740. [Google Scholar] [CrossRef] [PubMed]
  22. Deluzio, K.J.; Astephen, J.L. Biomechanical features of gait waveform data associated with knee osteoarthritis: An application of principal component analysis. Gait Posture 2007, 25, 86–93. [Google Scholar] [CrossRef]
  23. Cohen, J. Statistical Power Analysis for the Behavioral Sciences; Academic Press: Cambridge, MA, USA, 2013. [Google Scholar]
  24. Field, A. Discovering Statistics Using SPSS, 2nd ed.; Sage Publications: Washington, DC, USA, 2005. [Google Scholar]
  25. Why Do Models Walk Cross Legged? Available online: https://fashionrepublicmagazine.com/community/runway/why-do-models-walk-cross-legged/ (accessed on 7 April 2024).
  26. Guo, R.L.; Liu, B.; Huang, H.Y. The study of female fashion Model’s basic walking posture. Adv. Mater. Res. 2011, 332–334, 1272–1275. [Google Scholar] [CrossRef]
  27. An Analysis of the Most Iconic Supermodel Runway Walks: From Gigi Hadid to Hailey Bieber. 2016. Available online: https://www.elle.com/uk/fashion/celebrity-style/articles/a31869/an-analysis-of-the-most-iconic-supermodel-runway-walks/ (accessed on 7 April 2024).
  28. Gender Neutrality Becomes Fashion Reality. Available online: https://www.vogue.co.uk/article/gender-neutrality-becomes-fashion-reality (accessed on 7 April 2024).
  29. Press Release LVMH and Kering Have Drawn up a Charter on Working Relations with Fashion Models and Their Well-Being. Available online: https://r.lvmh-static.com/uploads/2017/09/press-release-models-charter-kering-lvmh-en-def-09-06-17.pdf (accessed on 7 April 2024).
  30. Aoyagi, Y.; Yamada, S.; Ueda, S.; Iseki, C.; Kondo, T.; Mori, K.; Kobayashi, Y.; Fukami, T.; Hoshimaru, M.; Ishikawa, M.; et al. Development of Smartphone Application for Markerless Three-Dimensional Motion Capture Based on Deep Learning Model. Sensors 2022, 22, 5282. [Google Scholar] [CrossRef]
  31. Goel, S.; Pavlakos, G.; Rajasegaran, J.; Kanazawa, A.; Malik, J. Humans in 4D: Reconstructing and Tracking Humans with Transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–3 October 2023. [Google Scholar]
  32. Tanabe, H.; Fujii, K.; Kaneko, N.; Yokoyama, H.; Nakazawa, K. Biomechanical strategies to maximize gait attractiveness among women. Front. Sports Act. Living 2023, 5, 1091470. [Google Scholar] [CrossRef]
  33. Tanabe, H.; Yamamoto, K. Structural equation modeling of female gait attractiveness using gait kinematics. Sci. Rep. 2023, 13, 17823. [Google Scholar] [CrossRef]
  34. Saito, S.; Saito, M.; Kondo, M.; Kobayashi, Y. Gait pattern can alter aesthetic visual impression from a third-person perspective. Sci. Rep. 2024, 14, 6602. [Google Scholar] [CrossRef] [PubMed]
  35. YouTube Fair Use Policy. Available online: https://www.youtube.com/howyoutubeworks/policies/copyright/#copyright-exceptions (accessed on 7 April 2024).
  36. Japanese Law Translation, Copyright Act. Available online: https://www.japaneselawtranslation.go.jp/ja/laws/view/4207 (accessed on 7 April 2024).
  37. Clough, S.; Tanguay, A.F.; Mutlu, B.; Turkstra, L.S.; Duff, M.C. How do Individuals With and Without Traumatic Brain Injury Interpret Emoji? Similarities and Differences in Perceived Valence, Arousal, and Emotion Representation. J. Nonverbal Behav. 2023, 47, 489–511. [Google Scholar] [CrossRef]
  38. Hierarchical Cluster Analysis Using SPSS|Agglomeration Schedule| Data Analysis: Part 3. Available online: https://www.youtube.com/watch?v=bNemTjPGWlo (accessed on 16 April 2024).
  39. Validating a Hierarchical Cluster Analysis. Available online: https://www.youtube.com/watch?v=mSzk2KrbNfs (accessed on 16 April 2024).
Figure 1. Flowchart of the overall research methodology.
Figure 2. Dendrogram extracted using hierarchical cluster analysis. Based on the outputs and the detailed consideration described in Appendix A, we concluded that classifying the data into five clusters was the most appropriate for interpreting the results.
Figure 3. Reconstructed stick figures representing the walking styles of models in each cluster: (a) Cluster 1, (b) Cluster 2, (c) Cluster 3, (d) Cluster 4, and (e) Cluster 5. Each subfigure represents a consecutive 10% segment of the gait cycle. Full animation movies can be downloaded from Supplementary Material.
Figure 4. Box plot representing the gait features of each cluster: (a) medio-lateral distance between left- and right-ankle landmarks at the timing of heel contact; (b) amount of body sway in the medio-lateral direction. Pairs with significant or near-significant differences are marked in the figure.
Table 1. Detailed information for each cluster. For the cadence and years when the shows were held, the median (interquartile range) is reported. Statistical analyses revealed a significant main effect on the year when the show was held but not on the cadence.
 | Cluster 1 | Cluster 2 | Cluster 3 | Cluster 4 | Cluster 5
Number of resources | 11 | 9 | 14 | 23 | 12
Cadence (steps/min) | 129.6 (9.6) | 124.8 (16.8) | 124.8 (9.6) | 124.8 (4.8) | 122.4 (13.2)
Years when shows were held | 2006 (9) ** | 2011 (16) | 2014.5 (15.5) | 2014 (10) | 2018.5 (5.5) **
Classified shows:
Cluster 1: D&G 2003, D&G 2011, D&G 2012, D&G 2014, D&G 2017, LV 2003, VAL 2002, SL 2003, SL 2004, SL 2006, SL 2008
Cluster 2: D&G 2005, D&G 2009, VAL 1998, VAL 2001, VAL 2018, VAL 2020, VER 2011, VER 2013, SL 2020
Cluster 3: D&G 2013, D&G 2018, D&G 2020, D&G 2021, LV 2006, VAL 1995, VAL 2004, VAL 2005, VER 2004, VER 2012, VER 2019, VER 2022, VER 2023, SL 2016
Cluster 4: D&G 2015, D&G 2022, D&G 2023, LV 2004, LV 2008, LV 2015, LV 2016, VAL 1999, VAL 2010, VAL 2012, VAL 2013, VAL 2014, VAL 2017, VAL 2022, VAL 2023, VER 2000, VER 2002, VER 2014, VER 2017, VER 2021, SL 2009, SL 2012, SL 2019
Cluster 5: D&G 2019, LV 2009, LV 2018, LV 2019, LV 2020, LV 2021, LV 2022, VAL 2015, VAL 2021, VER 2015, VER 2016, SL 2017
D&G: Dolce & Gabbana; LV: Louis Vuitton; SL: Saint Laurent; VAL: Valentino; VER: Versace. ** Significant differences (p < 0.05) between the clusters with asterisks.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Kobayashi, Y.; Saito, S.; Murahori, T. Classification of Fashion Models’ Walking Styles Using Publicly Available Data, Pose Detection Technology, and Multivariate Analysis: From Past to Current Trendy Walking Styles. Sensors 2024, 24, 3865. https://doi.org/10.3390/s24123865

AMA Style

Kobayashi Y, Saito S, Murahori T. Classification of Fashion Models’ Walking Styles Using Publicly Available Data, Pose Detection Technology, and Multivariate Analysis: From Past to Current Trendy Walking Styles. Sensors. 2024; 24(12):3865. https://doi.org/10.3390/s24123865

Chicago/Turabian Style

Kobayashi, Yoshiyuki, Sakiko Saito, and Tatsuya Murahori. 2024. "Classification of Fashion Models’ Walking Styles Using Publicly Available Data, Pose Detection Technology, and Multivariate Analysis: From Past to Current Trendy Walking Styles" Sensors 24, no. 12: 3865. https://doi.org/10.3390/s24123865

APA Style

Kobayashi, Y., Saito, S., & Murahori, T. (2024). Classification of Fashion Models’ Walking Styles Using Publicly Available Data, Pose Detection Technology, and Multivariate Analysis: From Past to Current Trendy Walking Styles. Sensors, 24(12), 3865. https://doi.org/10.3390/s24123865
