Article

Detection of Fake Instagram Accounts via Machine Learning Techniques

by Stefanos Chelas 1, George Routis 1,2 and Ioanna Roussaki 1,2,*
1 School of Electrical and Computer Engineering, National Technical University of Athens, 15773 Athens, Greece
2 Institute of Communication and Computer Systems, 10682 Athens, Greece
* Author to whom correspondence should be addressed.
Computers 2024, 13(11), 296; https://doi.org/10.3390/computers13110296
Submission received: 20 September 2024 / Revised: 6 November 2024 / Accepted: 11 November 2024 / Published: 15 November 2024
(This article belongs to the Special Issue Recent Advances in Social Networks and Social Media)

Abstract

This paper focuses on the detection of fake accounts on Instagram and proposes a novel solution that aims to address this problem. More specifically, a machine-learning-based solution is introduced that can be employed by Instagram-based applications to combat this phenomenon. To accomplish this, publicly available data from Instagram users are collected and processed. After the necessary feature additions to and removals from these data, they are fed into machine learning algorithms with the aim of detecting fake Instagram accounts. The results presented in this study are comparable to those of other similar studies, even though fewer input features have been used. This addresses significant research challenges regarding the performance of, and confidence in, the results obtained in this specific problem domain.

1. Introduction

A social network is a networked platform or application that allows users to create personal profiles, connect with other users and interact with them via various features, such as text posts, images, video and comments. The aim of social networks is to establish virtual networks, making communication, information notification and interaction easier and accessible overall. Social networks have greatly affected our daily lives by shaping the way people communicate, get informed, and share their experiences. However, the evolution of social media introduces critical problems, such as fake news propagation and excessive digital exposure.
Some of the most popular social networks include Facebook, Instagram, X (Twitter), TikTok, YouTube, LinkedIn, etc. Instagram is a social network introduced by Kevin Systrom in October 2010 [1,2]. It offers its users the ability to share pictures and videos with other users or exchange messages. In the first week after its release, the application counted 100,000 new users [3], and in only 2 months it reached 1 million users [4]. In 2012, Facebook (now Meta) bought Instagram for USD 1 billion [3]. Although Instagram offers many facilities to its users, it mainly serves for sharing pictures and videos. Its users can process multimedia with various filters and organize them via hashtags and by location. Users can sign up by creating a new account or profile. This is completely free but requires potential users to register with some personal data, such as name, birthdate, biography, username, etc. Each account can be public or private. On a public profile, all posts, pictures and videos can be viewed by everybody, whereas on private profiles, the account owner can specify which users will have access to the data they upload. Based on the data posted on Instagram, trends are formed around topics or themes that are popular at a given time. Moreover, users can “follow” other users and comment on or “like” their posts. Each Instagram user has their personal homepage, where posts from the accounts they follow appear.
One of the Instagram aspects that has gained increased popularity is “Instagram Stories”, which was introduced in August 2016 [5]. This function allows users to create temporary content in the form of pictures or videos, which appear for 24 h and are then removed [2,5,6]. This operation has been quite popular, and it is a significant part of the user experience on Instagram. According to the company, over 500 million people [5] use this service every day.
As is the case with every other social network, Instagram is also used for commercial purposes. It hosts accounts of companies and business stakeholders in general, allowing the promotion of goods and services. Moreover, it provides tools for commercial activities. Influencers, i.e., people with high follower populations, often cooperate with companies, facilitating the promotion of goods and services. In this respect, they use audio and video to communicate commercial messages to their audience. Furthermore, politicians exploit Instagram to highlight their political positions, promote their campaigns and share snapshots of their everyday life. Instagram has thus evolved to be a valuable tool for politicians, enabling direct communication with the general public, facilitating the promotion of political views.
Instagram is undoubtedly a very useful, practical, and easy tool in the hands of millions of people. However, the increase in its popularity has led to the appearance of fake accounts, a phenomenon that is unfortunately gaining momentum. Accounts that use any fake information or content that is not authentic and, directly or indirectly, aim to deceive people are considered to be fake. It is estimated that about 10% of the 2.4 billion accounts on Instagram are fake (2024). Fake accounts may have the following aims:
  • Malicious activity: Some users create fake accounts to carry out malicious activities such as fraud, phishing or the blackmail of other users.
  • Spamming: These accounts are used to persuade users to buy a specific good/service that appears continuously in posts.
  • Mass influence: These accounts influence trends by publishing numerous messages of similar content with different phrasing in order not to be recognized.
  • Increased followers: In certain situations, fake accounts are created to increase the number of followers of another user, usually with the aim of boosting that user’s popularity.
  • Selling followers or likes: There is also the case of selling or buying fake followers or likes, and fake accounts are used to offer this service.
The last two goals are very common, as they serve as a means to establish influencers, whose popularity relies almost exclusively on the number of followers they have. The more followers they have, the more money they are paid to promote products or services. At the same time, as technology has evolved dramatically, it has become possible to generate content and exploit social media algorithms, manipulating metrics so that content is promoted to even more users. As far as the mass influence of social media is concerned, it is important to protect the public and prevent political or social views from being formed on the basis of fake information. Instagram takes several measures to detect fake accounts and encourages users to report suspicious activity. Nevertheless, the percentage of fake Instagram accounts is still very high, and more effective means need to be employed. This paper aims to present such means to facilitate the detection of fake Instagram accounts.
To build models supporting the above and then detect fake Instagram accounts on the fly, sufficient publicly available social media data need to be identified and processed. However, there are various limitations regarding the usage of public social media data [7]. From a technical perspective, the identified data are almost never ready for use and require heavy processing before becoming useful. There are major GDPR [8] and privacy implications in using such public data, while publicly self-reported data are often biased, erroneous or incomplete, leading to unreliable content. Data acquisition limitations are also a problem: depending on the platforms used, data services cannot aggregate data owned by third parties, such as social media platforms. In this respect, third-party data collection is an important concern for scientific reproducibility, as the raw queries generated from query logs are often proprietary and cannot be shared with other parties, or may be available for study for a certain period but not later. Moreover, a major drawback is the lack of control over social media data and the fact that user behavior often changes across social media, which negatively affects the process of data interpretation. Furthermore, there are major ethical concerns in using publicly available data. According to IRB guidelines, this work does not require informed consent to access and process publicly available data. As a result, the owners of public social media data may not be aware that their data are public or are obtained and used by external actors. This is a significant ethical problem stemming from the frequently unclear privacy management systems and policies in place, while, in the context of social media, the boundaries between public and private data and the extent to which public data can be used for research purposes are often unclear.
The work presented in this paper demonstrates a significant advancement in detecting fake Instagram accounts using a relatively small dataset. Several repositories were used, each with limited data. An effort was therefore made to identify as many datasets as possible, despite the fact that each contained very little data, and to combine them into a larger dataset by identifying and matching common features. This approach was not easy, since numerous modifications were required to construct a reliable dataset that could be used for the experiments eventually conducted. Despite the limited data, this study achieved higher accuracy scores than other studies that utilized larger datasets. This was accomplished through the application of various machine learning (ML) models, including Gaussian Naïve Bayes, Random Forest, Decision Trees, Logistic Regression, MLP, KNN, SVM, etc. The proposed approach involves feature engineering techniques, which enhance the performance of the ML models. As elaborated upon in the rest of this paper, the proposed method outperforms other studies in the field, indicating that effective fake account detection can be achieved with smaller datasets when advanced ML techniques are applied.
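The overall pipeline described above can be sketched as follows. This is an illustrative sketch only, not the authors' exact feature set or model configuration: the three features (followers, following, posts) and their synthetic distributions are hypothetical stand-ins for the real dataset.

```python
# Illustrative sketch: comparing several of the classifier families named
# above on synthetic stand-in data for Instagram account features.
# Feature names and distributions are hypothetical, not the paper's dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Hypothetical features per account: [followers, following, posts].
real = rng.normal(loc=[500, 300, 100], scale=[200, 100, 40], size=(200, 3))
fake = rng.normal(loc=[30, 900, 5], scale=[20, 300, 5], size=(200, 3))
X = np.vstack([real, fake])
y = np.array([0] * 200 + [1] * 200)  # 0 = real, 1 = fake

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

models = {
    "RandomForest": RandomForestClassifier(random_state=1),
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
}
scores = {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
```

Comparing several model families side by side on the same split, as above, is the usual way to support the kind of cross-algorithm performance claims made throughout this paper.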

2. Related State-of-the-Art Work

This section reviews the related state-of-the-art work on the detection of fake social media accounts. The first subsection focuses exclusively on Instagram, whereas the second addresses other social networks, such as Facebook, X (Twitter), etc.

2.1. Instagram

In [9], the researchers compared the performance, and more specifically the F1-score, of many machine learning algorithms, such as Naïve Bayes, Logistic Regression and SVM, as well as a neural network of their own, in order to identify fake accounts and automated bot accounts. Given the imbalance of the data used (20% fake accounts and 80% real accounts), the researchers produced synthetic accounts via the SMOTE-NC algorithm. They concluded that the SVM algorithm and their neural network showed the best performance, achieving 94%. They note that the oversampling improved their results compared to its absence, in which case only the SVM algorithm achieved its best F1-score of 89%.
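To illustrate the class-balancing idea behind [9], the sketch below balances a hypothetical 20/80 split using simple random oversampling from scikit-learn. Note this is not SMOTE-NC itself: SMOTE-NC synthesizes new minority samples (interpolating numeric features and voting on categorical ones) rather than duplicating existing rows, but the effect on the class ratio is the same.

```python
# Sketch: balancing a 20/80 fake/real split by randomly oversampling the
# minority class. Data are hypothetical; SMOTE-NC (used in [9]) would
# instead synthesize new minority samples.
import numpy as np
from sklearn.utils import resample

rng = np.random.default_rng(0)
X_real = rng.normal(size=(80, 4))   # 80% real accounts (majority)
X_fake = rng.normal(size=(20, 4))   # 20% fake accounts (minority)

# Randomly oversample the minority class up to the majority-class size.
X_fake_up = resample(X_fake, replace=True, n_samples=len(X_real), random_state=0)

X_bal = np.vstack([X_real, X_fake_up])
y_bal = np.array([0] * len(X_real) + [1] * len(X_fake_up))
```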
In [10], the researchers proposed a machine learning model based on the Bagging algorithm to solve the targeted problem. They built a web crawler for Instagram and, in combination with the Instagram API, collected information and classified users’ accounts. They compared the performance of their model with popular classification models, such as Random Tree, J48, SVM, RBF, MLP, Hoeffding Tree, and Naïve Bayes, and concluded that their model was the most efficient, since it achieved an accuracy of 98.45% with a very small percentage of misclassifications. Finally, they classified users based on each feature separately in order to identify the effect of each one on the final result.
In [11], the authors conducted extended research aiming to answer questions relating not only to the optimal way of separating accounts into fake and real, but also to the features of the accounts and the account type (for instance, metadata, media info, etc.). They collected data under very strict criteria, choosing to gather fake accounts from companies that sell such accounts, and they also gathered real accounts via web scraping, analyzing 24 Instagram accounts of private universities. The final labeling of the accounts—32,640 in total—was carried out by a three-member committee. They executed two experiments, using both the typical two-class classification (fake–authentic) and a four-class classification (authentic, active fake user, inactive fake user, spammer), with the help of the following algorithms: Random Forest, MLP, Logistic Regression, Naïve Bayes, J48 and Decision Tree. They found that, for both classification types, the Random Forest algorithm performed best, with 90.09% and 91.76%, respectively. At the same time, they concluded that metadata is the type of account feature that plays the most significant role in this kind of classification.
In [12], the authors tried to identify both fake and spam accounts. To identify spam accounts and estimate their effect, they proposed two algorithms, “InstaFake” and “InstaReach”, whose selected features were drawn from the “Instagram fake spammer genuine accounts” dataset. For the selection of the specific features, they conducted in-depth research and monitored their relations. In the same vein, they used all the account features they had to feed a Deep Learning Neural Network they had constructed to identify fake accounts. The results showed an accuracy of around 91%.
In [13], the researchers conducted a literature review, in which they described in detail what had been achieved regarding fake account identification on Instagram with the help of machine learning up to the publication date. They presented the various methods they had examined along with their results, without expressing a preference for any of the existing methods or proposing a new one.
The authors in [14] proposed a Deep Learning model for classifying accounts as real or fake. They used the entire “Instagram fake spammer genuine accounts” dataset and modeled a four-hidden-layer ANN with 34,752 trainable parameters. Each hidden layer was followed by a dropout layer with a rate of 0.3, and the model was trained for 20 epochs. The accuracy of the model reached 93.63% with a final loss of 0.18%.
The authors in [15] used regression algorithms to solve the fake account identification problem. Random Forest and Logistic Regression were the choices of the authors. They used “Instagram fake spammer genuine accounts” as a dataset, as well as the following metrics: accuracy, recall, precision and F1-score, which comprised the final results of the research. The authors claimed about 92.5% accuracy using Random Forest and 90.8% accuracy using Logistic Regression models, without showing a preference for one of the two algorithms.
In another study [16], the authors noticed that most of the implemented approaches targeted the Random Forest algorithm. They used and proposed a system that exploited gradient boosting, an algorithm similar to Random Forest which, according to the authors, has the basic advantage of being able to handle missing input values. They did not feed the metadata gathered via web scraping directly into the algorithm as final inputs, but rather features derived from them: engagement rate, artificial activity and spam commenting. The authors did not report the performance of their effort in raw numbers, but implied that their proposition is better than the existing propositions in the literature.

2.2. Other Social Networks

The author of [17] proposed a machine learning approach that consists of two fully developed models: one targeted at identity verification and account security, and the other at text analysis and harmfulness regarding users’ posts and comments. The first uses users’ IP and MAC addresses to check for the possible existence of multiple suspicious accounts and asks for authentication if needed, whereas the second employs SVM classifiers to count the frequency of “malicious/toxic” words in the text and uses Natural Language Processing (NLP) to handle foreign languages and data. The approach is evaluated across various social networks (Facebook, Instagram, Twitter, YouTube, WhatsApp) and indicates high accuracy in fake account identification.
The authors of [18] proposed a modern classification algorithm named SVM-NN, which combines the well-known ML algorithm SVM with a neural network designed to better classify a social network’s users into real and fake ones. More specifically, the training results of the SVM were used to train the neural network, and the testing results were used for evaluation. The “MIB” dataset was used, which, during processing, was divided into four non-overlapping subsets fed to the three evaluated systems: the SVM, the neural network and their combination, SVM-NN. The accounts in these subsets belonged to Twitter users. The authors observed that their SVM-NN model showed higher performance and better behavior with respect to outliers, with the classification accuracy of the examined accounts approaching 98%. They also noticed that the various subsets of the initial data did not behave the same in terms of final classification effectiveness, a result of the feature correlation each subset contained, as well as the nature of the algorithm chosen in each split.
The authors of [19] carried out extensive bibliographic research in November 2020 and presented in detail all initiatives focusing on fake account detection across all social networks. Given that there were numerous initiatives, they performed a targeted selection based on criteria that would guarantee the correctness and validity of the research, such as the popularity of the publisher, the documentation of the publication, comparison with previous works and the usage of algorithmic machine learning models. They then collected all of the features used in previous studies and proceeded to classify the publications based on these characteristics. They described the various methods examined, the related means and their results, without expressing a preference for any of the proposed methods. Moreover, they did not propose any new method.
In another research initiative [20], the authors compared the performance of three machine learning algorithms in classifying Twitter users into bots or real users. It is worth noting that, based on the dataset used in that research, the word “bot” can mean “fake profile”, which is the target of the current article. The following three algorithms were used: Logistic Regression, SVM and Random Forest. The dataset was a subset of the one used in the research work of [21]. It was preprocessed before being used to feed the algorithms, and 24 features were retained, separated into classes. The Logistic Regression and Random Forest models showed the highest performance, with the latter excelling and achieving an accuracy of 97.9%. The authors identified that, although the dataset and the algorithms could easily identify automated bot accounts, there was considerable difficulty in identifying human-operated fake accounts.
In another research initiative [22], the researchers compared supervised models that could classify fake accounts based on users’ emotions on Facebook. They implemented an extensive analysis of the emotions expressed by users in their posts, with very interesting results regarding users’ behavior on social networks. Twelve features were used to indicate emotions, such as anger, regret, happiness, fear, disgust, etc. The following models were used: SVM, Naïve Bayes, JRip and Random Forest, and the following metrics: accuracy, F-measure and AUROC. Finally, Random Forest showed the best performance in all three metrics and was the one chosen by the researchers.
The authors in [23] produced a model for the identification of fake accounts on LinkedIn. They claim that, in order to separate fake from real profiles on social networks, features of users’ accounts on these websites need to be identified. However, the access restrictions around private information made their work more difficult. For this reason, they experimented with a small dataset comprising 74 samples in total; specifically, 40 samples were real and 32 were fake. These were split into training and testing sets at a 1:1 ratio. The features were extracted via the PCA method and then fed to NN, SVM and Weighted Average (WA) classifiers in order to classify real and fake profiles. Based on their results, the researchers proposed the SVM model, as it reached an accuracy of 87.34%.
The authors of [24] proposed a model that uses a pattern-matching algorithm to identify fake accounts on Twitter. They gathered 62 million user profiles using crawlers, which were then processed to be easily manageable. During this process, 724,494 sets were produced and 6,958,523 accounts were retained in total. To refine the 724,494 sets, screen names were analyzed and identical ones were identified. Finally, the update time of an account with new activity (such as a new post), the creation time of the accounts, as well as the URL analysis of each account were very important in producing behavioral prototypes of the targeted users. The researchers completed the process of grouping the accounts with this method, based on features and sets with similar behavior. In conclusion, beyond the analysis that took place, a very reliable subset of fake accounts was specified through the use of map-reduce techniques and the pattern recognition presented.
The researchers of [25] tried to identify fake accounts in real time via the use of an extension/add-on to Google Chrome browser. They used machine learning techniques to achieve their goal, and they collected data by combining crawlers and Twitter APIs, as well as their personal research. The new idea proposed was based on the Random Forest and Bagging models, which demonstrated the best performance among the five algorithms they used in terms of the ROC and TP (True Positive) metrics, reaching a score of 99.4%.

3. Datasets Used in the Conducted Experiments

In this section, the datasets used for the extraction of the research results of the current work are presented. Two separate datasets with similar features and attributes describing account properties on Instagram are combined.

3.1. The Dataset “InstaFake”

The “InstaFake Dataset” [26] was collected by a specific research initiative [9], and the respective contributors have provided it to any researcher via a public repository. It carries information about user accounts, which are classified according to the authors’ criteria into the following four categories:
  • Fake—malicious;
  • Real;
  • Automated;
  • Non-automated.
For the problem addressed in the study presented in this paper, the accounts of the first category are of particular interest. As downloaded, the data are divided into two JSON files, each containing the accounts belonging to one of the two categories. Table 1 presents some quantitative elements for these two files.
Table 2 below depicts in detail the features (columns) of the specific accounts along with a short description.

3.2. Dataset “Instagram Fake Spammer Genuine Accounts”

The “Instagram fake spammer genuine accounts” dataset [27] was collected from a public repository. With a format and features similar to the ones previously analyzed, it seemed to be the ideal extension of “InstaFake Dataset”. It contains information about accounts which are classified in the following two classes:
  • Fake—malicious;
  • Real.
The data, as downloaded, are divided into two .csv files, “train.csv” and “test.csv”, which contain the training and testing data, respectively. Each file consists of accounts belonging to the two categories. The ratios of fake (malicious) to real accounts obviously differ depending on the solution/aim they are used for. Table 3, Table 4 and Table 5 below present some quantitative elements of these two files.

3.3. Final Dataset

To produce the final dataset used in the study presented in this paper, there was a need to modify the available datasets. These modifications aimed to achieve the optimum utilization of the common information. Table 6 presents the quantitative elements for the final dataset.
The modifications that took place in the two initial datasets were the following:
  • The columns “nums/length username” and “username_digit_count” were used to generate the new column “hasDigitsInUsername”. This occurred since the information that they provided was interesting but not compatible with both datasets.
  • The columns “username_length” and “username_digit_count” of the “InstaFake Dataset” were used to generate the column “nums/length username”, as follows:
    nums/length username = username_digit_count / username_length
  • The columns “fullname words”, “nums/length filename”, “name == username” and “external URL” of the total dataset “Instagram fake spammer genuine accounts” were deleted since there was not a method to assign the content to one of the columns of “InstaFake Dataset”.
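The harmonization steps above can be sketched in pandas. Column names follow the text, but the sample values are hypothetical and the code is an illustration of the described transformations, not the authors' actual script.

```python
# Sketch of the dataset-harmonization steps: deriving the shared columns
# "nums/length username" and "hasDigitsInUsername". Sample values are made up.
import pandas as pd

# "InstaFake"-style records expose explicit length and digit counts.
instafake = pd.DataFrame({
    "username_length": [8, 12, 6],
    "username_digit_count": [0, 4, 2],
})

# Derive "nums/length username" as digit count over total username length.
instafake["nums/length username"] = (
    instafake["username_digit_count"] / instafake["username_length"]
)

# Derive the shared boolean column "hasDigitsInUsername".
instafake["hasDigitsInUsername"] = instafake["username_digit_count"] > 0

# "Instagram fake spammer genuine accounts"-style records already carry the
# ratio; digits are present exactly when the ratio is non-zero.
spammer = pd.DataFrame({"nums/length username": [0.0, 0.25]})
spammer["hasDigitsInUsername"] = spammer["nums/length username"] > 0
```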
Table 7 presents in detail the final features of the specific accounts along with a synoptic description.
In Figure 1, Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10 presented below, the distributions of the features depicted in Table 7 are illustrated for real and fake accounts.

4. Fake Account Detection on Instagram

This section presents the proposed approach that aims to enable the detection of fake accounts on Instagram. Section 4.1 presents the preprocessing of the available dataset in order to feed the various machine learning models. Section 4.2 discusses the feature correlation in a correlation matrix. Section 4.3 analyzes the implementation of the algorithms followed by the authors of this paper. Finally, Section 4.4 elaborates on the performance of each algorithm tested.

4.1. Data Preprocessing

In order to achieve the highest possible success rate for the fake account detection methods used, a technique known as feature engineering was exploited, which uses existing features to produce new ones. Although it might seem that this adds nothing of benefit to the training procedure of a machine learning model, it has been observed that a suitable feature combination can significantly improve the final results of a study. Consequently, the study presented in this paper produced three new features. They were selected based on the current literature, and a significant increase in the success of the models used was observed.

4.1.1. Following/Followers Ratio

The following/followers ratio is a feature referring to the relationship between the number of accounts that someone follows and the number of accounts following them, which could provide some indication in relation to the authenticity of an account on a social network, but of course it is not an absolute indicator. The ratio is depicted in the following formula:
Following/Followers Ratio = Following / Followers
From a mathematical perspective, there is a constraint in the case where the number of followers equals zero, as the ratio is then undefined. Based on the statistical picture in Figure 11, it is obvious that real users have a very low following/followers ratio in comparison to fake ones. The best approach in this case was to assign such accounts the highest following/followers ratio appearing in the rest of the data (presented in the next sections), which equaled 2000. Consequently, this approach cannot affect the final result in any way, since only fake accounts exhibit such behavior in the available dataset, facilitating this mathematical modeling.
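The zero-followers rule described above can be sketched as follows. Column names and sample values are hypothetical; the 2000 substitute is the dataset-specific maximum ratio mentioned in the text.

```python
# Sketch: following/followers ratio with the zero-followers rule, where
# accounts with zero followers receive the dataset's maximum ratio (2000).
# Column names and sample values are hypothetical.
import numpy as np
import pandas as pd

accounts = pd.DataFrame({
    "following": [900, 1500, 40],
    "followers": [3, 0, 800],
})

# Division by zero yields inf, which is replaced by the ceiling value 2000.
ratio = accounts["following"] / accounts["followers"]
accounts["following_followers_ratio"] = ratio.replace(np.inf, 2000.0)
```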
Figure 11 reflects the fact that fake accounts are largely produced with the aim of increasing the follower counts of third-party accounts, in order to obtain benefits for the various reasons analyzed in Section 1. This result was highly expected, given that there are companies which produce and sell such accounts in bulk. Obviously, the production rate of such accounts makes account management difficult; this benefits the real accounts that “need” something like this in order to increase their number of followers.
Of course, this feature alone could indicate falsity, but a user’s profile cannot be removed simply because they choose to follow third-party posts on Instagram. Related literature indicates that a legitimate account typically follows 150 or more other users. In the studied dataset, approximately 63% of real accounts do not meet this criterion, which means they would have been falsely removed due to suspicious activity. To avoid this, further studies were carried out that led to the introduction of the three new features exploited here, which allow the inclusion of real user accounts that follow fewer than 150 other users. Details are provided in the remainder of this section. Moreover, it is often the case that some users create their accounts exclusively to attend/follow events for which social media accounts are created, e.g., online competitions, gatherings, various meetups, etc. At the same time, they follow others and make friends without being active every day (i.e., posting, commenting, liking, etc.). These actions could bring more followers.
As a result, considering the previous parameter, a small number of followers alone cannot indicate whether a given profile was created with legitimate intent, since the kinds of users previously described would otherwise be automatically blocked. Viewed from an arithmetic perspective, around 10% of real users in our sample would be blocked because of having a low number of followers.
Based on the following/followers ratio, one can more confidently tell the difference between real and fake accounts because of their distribution, since fake accounts disproportionately follow many more users than the number of followers they have.

4.1.2. Following/Posts Ratio

This is another criterion that can provide an indication of the authenticity of an account on a social network. It refers to the relation between the number of users that someone follows and the number of posts on their profile, as expressed by the following formula:
Following/Posts Ratio = Following / Posts
In this case too there is a mathematical issue, since many accounts have no posts, making the ratio undefined. Figure 12 shows that fake accounts tend to have higher values for this feature than real accounts. Contrary to the previous case, a unified approach could not be applied, since there were 48 real accounts that would likely be misclassified if assigned a uniformly high score. It was therefore decided that each account falling into this category should be assigned the mean value observed for its class: fake accounts take the value 524,399 and real accounts the value 41,542, which prevents the inconsistency and provides a satisfactory solution to the problem.
The analysis of this feature is similar to that of the following/followers ratio, and the same observations made in the “Following/Followers Ratio” section apply here as well. Different styles of social media use can lead to accounts with authentic content being misjudged if these features are considered in isolation; taken alone, this feature would suggest that 25% of real accounts should be deactivated for having a small number of posts (<10).

4.1.3. Followers/Posts Ratio

This refers to the relation between the number of followers of a user on a social network and the number of posts on their profile. This feature helps characterize the way a user manages their activity on the social network and provides an indication of their authenticity. It is defined by the following equation:
Followers/Posts Ratio = Followers / Posts
In this case, the same mathematical issue as in Section 4.1.2 can arise: accounts with no posts result in division by zero. In contrast to the previous two cases, Figure 13 shows that fake accounts tend to have lower values for this feature than real ones. As before, a unified policy could not be applied, since there were 294 fake accounts that would likely be misclassified if attributed the highest value occurring among real accounts. We therefore applied the same policy as in the previous subsection, defining as the feature value the average observed for the corresponding kind of account: fake accounts were assigned a value of 48.98 and real accounts a value of 179.90. As previously observed, each feature individually provides some view of account authenticity, but on its own the results would be precarious; combining the features yields better discrimination and offers the classifiers extra inputs that facilitate correct classifications.
The results depicted in Figure 13 may seem counterintuitive; however, fake accounts have very few posts and very small numbers of followers. Although the average value seems high, a standard practice of those who produce such accounts is to follow the fake accounts they create, an effort to project an appearance of authenticity without real content or activity. This does not compare with real accounts, where it is more common to have more followers than posts, especially for popular people. According to research conducted in 2022, an average user posts infrequently over a week (0–3 posts), particularly on Instagram, where stories are dominant; stories, of course, are not counted as posts.
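The class-conditional mean-imputation policy described above for zero-post accounts can be sketched as follows. This is a simplified illustration on hypothetical toy data; the imputed means here come from the toy sample, not from the paper's dataset:

```python
import numpy as np
import pandas as pd

# Hypothetical toy accounts; zero-post rows make the ratio undefined.
df = pd.DataFrame({
    "followers": [500, 120, 30, 8],
    "posts":     [25,  0,   4,  0],
    "isFake":    [0,   0,   1,  1],
})

# Raw ratio: division by zero yields inf for accounts with no posts.
df["followers/posts ratio"] = df["followers"] / df["posts"]

# Replace undefined values with the mean ratio of the same class
# (real or fake), mirroring the policy adopted in this section.
for cls in (0, 1):
    mask = df["isFake"] == cls
    finite = df.loc[mask, "followers/posts ratio"].replace([np.inf, -np.inf], np.nan)
    df.loc[mask, "followers/posts ratio"] = finite.fillna(finite.mean())

print(df["followers/posts ratio"].tolist())
```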

4.2. Correlation Matrix

As mentioned at the beginning of this paper, our target was to use as few input features as possible while obtaining the best possible result. Following preprocessing, the correlation matrix of all the features used can be seen below in Figure 14.
It is evident that, excluding the features “numberOfFollowers” and “followers/posts ratio”, all the other features take part in the identification of account authenticity, showing a clear (positive or negative) correlation with the feature “IsFake”. This procedure aims to achieve the highest possible success rates. Although weakly correlated with the label, “numberOfFollowers” and “followers/posts ratio” were nevertheless retained, since doing so allowed one additional fake account to be detected. As will be presented in a following section, this helps to extract better results and increases the metrics for most of the algorithms analyzed.
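A minimal sketch of such a correlation check, using a hypothetical toy frame; only the correlation of each feature with the “IsFake” label is printed, rather than the full matrix of Figure 14:

```python
import pandas as pd

# Hypothetical toy frame; "IsFake" is the label, the rest are features.
df = pd.DataFrame({
    "hasProfilePicture": [1, 1, 0, 0, 1, 0],
    "numberOfPosts":     [40, 25, 1, 0, 60, 2],
    "IsFake":            [0, 0, 1, 1, 0, 1],
})

# Pearson correlation of every feature with the label; features whose
# correlation is near zero are candidates for exclusion.
corr_with_label = df.corr()["IsFake"].drop("IsFake")
print(corr_with_label.round(2).to_dict())
```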

4.3. Realization of Algorithmic Experimental Evaluation in Four Phases

This study’s final results derive from a comparison of machine learning algorithms used to classify accounts into categories. The algorithms used were the following: Gaussian Naïve Bayes, Random Forest, Decision Trees, Logistic Regression, MLP, KNN and SVM. These algorithms were applied in four phases, as shown in Figure 15. The phases were defined based on the structure of our dataset, in a way that will be demonstrated in a following section. In each phase, the seven algorithms were fed the same data and produced results that were evaluated with the following metrics: accuracy, precision, recall, F1-score, AUC-ROC and log-loss. The nature and specificity of each algorithm affect the results as the phases progress.
Each phase includes a quick overview of the differences in the algorithms’ inputs, with the results given both in tables and in diagrams. As will be shown, the SVM algorithm yields poor results; for this reason, although it appears in each results table, it is omitted from the corresponding diagram, because its low performance would compress the scale and hinder the reader’s visual comparison of the other implementations’ results, something very important for such a presentation. Similarly, the log-loss metric behaves irregularly, taking values that exceed 1, which makes a conventional joint presentation of the results difficult; the metric’s behavior in each phase is therefore shown in a separate diagram.
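The per-phase evaluation loop can be sketched as follows. This is an illustrative stand-in: synthetic data replaces the account features, only two of the seven classifiers are shown, and the study's actual hyper-parameters and splits are not reproduced:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, log_loss)
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for the account features; two imbalanced classes (real/fake).
X, y = make_classification(n_samples=600, n_features=8, weights=[0.7, 0.3],
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

# Two of the seven classifiers, for brevity; the remaining five follow the
# same fit / predict / score pattern.
for name, clf in [("GaussianNB", GaussianNB()),
                  ("RandomForest", RandomForestClassifier(random_state=42))]:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    proba = clf.predict_proba(X_te)
    print(f"{name}: acc={accuracy_score(y_te, pred):.3f} "
          f"prec={precision_score(y_te, pred):.3f} "
          f"rec={recall_score(y_te, pred):.3f} "
          f"f1={f1_score(y_te, pred):.3f} "
          f"auc={roc_auc_score(y_te, proba[:, 1]):.3f} "
          f"logloss={log_loss(y_te, proba):.3f}")
```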

4.3.1. First Phase—Without Extra Features

In the first phase, the results shown in Table 8 were obtained by feeding the classifiers exclusively with the initial dataset, without adding any of the extra features referenced in Section 4.1. Figure 16 shows the aggregated results of the first-phase evaluation, and Figure 17 shows the log-loss evaluation of the first phase.

4.3.2. Second Phase—Adding Following/Followers Ratio

In the second phase, the results presented in Table 9, Figure 18 and Figure 19 were extracted by adding the “following/followers ratio” feature to the initial dataset.

4.3.3. Third Phase—Adding Following/Posts Ratio

In the third phase, the results (Table 10, Figure 20 and Figure 21) were derived by adding the “following/posts ratio” feature to the classifier input, on top of the dataset of the previous phase.

4.3.4. Fourth Phase—Adding Followers/Posts Ratio

In the fourth phase, the results (Table 11, Figure 22 and Figure 23) derive from adding the “followers/posts ratio” feature to the classifier input, on top of the dataset of the previous phase.
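Taken together, the four phases amount to an incremental feature schedule, which can be sketched as follows (the base feature list is abbreviated and illustrative; the ratio names are those introduced in Section 4.1):

```python
# Illustrative four-phase feature schedule: each phase appends one of the
# engineered ratio features to the classifier input.
base_features = ["hasProfilePicture", "numberOfFollowing", "numberOfPosts"]
extra_features = ["following/followers ratio",
                  "following/posts ratio",
                  "followers/posts ratio"]

phases = [base_features + extra_features[:k] for k in range(4)]
for i, cols in enumerate(phases, start=1):
    print(f"Phase {i}: {len(cols)} input features -> {cols}")
```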

4.4. Evaluation Results for Each Algorithm

4.4.1. Gaussian Naïve Bayes

Concerning the results of the Gaussian Naïve Bayes algorithm, as depicted in Figure 24 and Figure 25, the following can easily be observed:
  • In the first phase, the recall is very high, meaning that the model correctly identifies many fake accounts, but the low precision shows that many real accounts are evaluated as fake ones.
  • In the second phase, it can be observed that there is a sudden increase in the metrics’ values. The log-loss has significantly decreased, suggesting better probabilistic evaluation. The addition of the feature “following/followers” seems to seriously affect the performance of this algorithm, validating our decision to include it in our research.
  • In the third and fourth phases, the model shows high performance across the board, apart from a slight decrease in its metrics in the last phase.
  • Overall, the Gaussian Naïve Bayes algorithm tends to perform better across various metrics after the addition of features, making it more effective in identifying fake accounts.

4.4.2. Random Forest

As far as the Random Forest results are concerned, as depicted in Figure 26 and Figure 27, the results are the following:
  • The algorithm demonstrates stable and significant improvement in its performance during each phase, utilizing all extra features that were offered in order to improve its internal operations.
  • The algorithm demonstrates high accuracy, recall and F1-score, showing good classification capability as well as reliable fake-account identification.
  • The decrease in log-loss shows that the model improves its probabilistic evaluation and is optimal in producing probabilistic forecasting.
  • The AUC-ROC keeps high values (from 96.50% to 96.80%), thus showing a very good capability of the model to understand the classes.
  • Overall, it demonstrates significant total performance with stable improvement in various metrics. Its capability to provide high accuracy, stabilized precision and recall, as well as low log-loss makes it a powerful algorithmic model for the specific problem studied herein.
  • It is the best choice among the algorithms used for fake account identification.

4.4.3. Decision Trees

As far as the results of the Decision Trees algorithm are concerned, as depicted in Figure 28 and Figure 29, the following conclusions can be drawn:
  • The Decision Trees algorithm offers high performance in all metrics, with the capability to provide reliable forecasting and effectively separate the classes.
  • The stable trend of improving its metrics in the consecutive phases shows that the model makes good use of the additional features offered: it improves its internal procedures and produces better forecasting.
  • Overall, the Decision Tree seems to be an effective choice for the specific problem at hand, offering high accuracy and satisfactory balance between precision and recall.

4.4.4. Logistic Regression

Concerning the results of the Logistic Regression model, as depicted in Figure 30 and Figure 31, the following can be stated:
  • The model shows a stable increase in accuracy, ranging from 95.41% in the first phase to 97.00% in the fourth phase, offering a reliable and improved total classification.
  • It shows balanced performance between precision and recall (F1-score), demonstrating the model’s capability to find fake accounts while reducing incorrect predictions. The extra features added play a vital role in increasing the F1-score.

4.4.5. MLP

As far as the MLP results are concerned, as depicted in Figure 32 and Figure 33, the following conclusions can be drawn:
  • There is stable improvement across the phases, notably aided by the feature “followers/posts ratio”, even though Figure 14 showed it as not highly correlated with the authenticity of each account. In terms of the metrics’ values, this model is the second best, after the Random Forest model.
  • The model shows exceptional performance with high accuracy, precision, recall, F1-Score and AUC-ROC.
  • The log-loss decreases significantly (from 1.59 to 0.95), indicating improved probabilistic evaluation.

4.4.6. KNN

As far as the KNN algorithm results are concerned, as depicted in Figure 34 and Figure 35, the following should be noted:
  • There is a stable increase in the metrics’ values across the four phases, with the highest increase in the third phase, indicating that the “following/posts ratio” feature substantially improves the algorithm’s discrimination.
  • The log-loss takes considerably higher values than in the previous implementations, reflecting the fact that the KNN model requires a lot of data (entries, not features) in order to produce better results.

4.4.7. SVM

As far as the SVM results are concerned, as depicted in Figure 36 and Figure 37, the following should be noted:
  • In the first phase, the accuracy is 71.96%, an average value, but the low recall shows that there are many false negatives. This is problematic, given that the task is the classification of fake and real accounts.
  • In the next three phases, the values remain fairly stable, indicating that the additional features did not affect the model’s performance from the third phase onwards. The increased accuracy and the slightly increased recall, in comparison to the first phase, are positive signs that the addition of the “following/followers” feature helped considerably.
  • The AUC-ROC values remain around 0.53, implying only an average capability of the model to separate the classes. Log-loss is similarly problematic, remaining very high (close to 10) in all cases, possibly indicating a problem in probability estimation.
  • Overall, the SVM results are not satisfactory. The class imbalance of the dataset, in terms of the data volume in each class, contributes to the metrics falling to considerably low levels.

4.4.8. Generalizability of the Findings

Regarding the generalizability of the findings, SVMs are generally used when datasets can be linearly separated, but they can be configured for non-linear problems through the use of kernel functions.
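A brief sketch of this kernel configurability in scikit-learn, using toy XOR-like data that is not linearly separable (the parameter values are illustrative, not those of this study):

```python
from sklearn.svm import SVC

# XOR-like toy data: not linearly separable in the input space.
X = [[0, 0], [1, 1], [0, 1], [1, 0]]
y = [0, 0, 1, 1]

# kernel="linear" suits linearly separable data; the RBF kernel handles
# non-linear boundaries such as this one.
rbf_svm = SVC(kernel="rbf", gamma="scale").fit(X, y)

print(rbf_svm.predict([[0.9, 0.1], [0.05, 0.05]]))
```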
In Decision Trees, the interpretation of the importance values depends on the methodology selected, but in general, high values indicate that a feature is seriously engaged in the model’s prediction capability.
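This can be illustrated with scikit-learn's Decision Tree feature importances (the rows below are hypothetical toy accounts, not data from this study):

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical toy rows: [hasProfilePicture, numberOfFollowing, numberOfPosts]
X = [[1, 300, 40], [1, 250, 55], [0, 4000, 1], [0, 5200, 0]]
y = [0, 0, 1, 1]  # 0 = real, 1 = fake

tree = DecisionTreeClassifier(random_state=0).fit(X, y)

# Importances sum to 1; a high value means the feature drove the splits
# and hence the model's prediction capability. On this toy sample a single
# split separates the classes, so one feature takes all the importance.
print(tree.feature_importances_)
```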
In MLPs, the first layer generally has the same number of neurons as the number of input features. The in-between layers can contain varying numbers of neurons, depending on the complexity of the problem and the requirements of the model. The last layer has one neuron for each class or prediction target.
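A minimal sketch of this layer sizing with scikit-learn's MLPClassifier (the feature count, hidden-layer widths and toy data are illustrative choices, not the study's configuration):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

n_features = 9     # input layer width matches the number of input features
hidden = (16, 8)   # in-between layers, sized to the problem's complexity

# Toy binary task; for binary classification scikit-learn uses a single
# output neuron with a logistic activation.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, n_features))
y = (X[:, 0] > 0).astype(int)

mlp = MLPClassifier(hidden_layer_sizes=hidden, max_iter=500, random_state=0)
mlp.fit(X, y)

# coefs_ holds one weight matrix per layer-to-layer connection:
# (9, 16), (16, 8), (8, 1)
print([w.shape for w in mlp.coefs_])
```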

5. Comparison with Related State-of-the-Art Work

In [28], the authors make use of the features and parameters listed in Table 12, Table 13 and Table 14, which can be compared with the current work’s parameters in Section 3.1.
As far as a results comparison is concerned, Table 15 summarizes a comparison between two related works and the study presented in this paper:
As one can easily observe, the results presented in this paper demonstrate higher accuracy compared to similar approaches when Logistic Regression, KNN and Decision Trees algorithms are exploited. Nevertheless, lower accuracy is observed in terms of the SVM algorithm and Random Forest results.
The advantage of the current work over related research is that, despite the use of smaller public repositories, most of the accuracy results surpass the current year’s (2024) state-of-the-art results, as can easily be seen in Table 15. This is notable given the difficulty of using public repositories of fake accounts under new laws governing the use of private data.

6. Discussion

In the study presented in this paper, the goal was to identify fake accounts on Instagram with the help of machine learning algorithms. Given the general difficulty of finding such accounts, owing to the strict new legal rules related to data processing, we decided to follow an alternative approach. In contrast with previous research papers, which are based on a substantial number of features, the work presented here relies on a reduced number of publicly available features per user. This decision was taken with the future exploitation of the current work in mind, since even today it is necessary to use a limited amount of data in order to evaluate, with a high degree of success, a user’s identity in the enormous pool of a social network. The effort described above thus addresses, to a substantial degree, the current (2024) research gap in the detection of fake Instagram accounts via machine learning techniques.
The implications of feature selection on detection accuracy are substantial, as demonstrated in recent research on fake account detection. Careful feature engineering allows for a more accurate and computationally efficient model by isolating the attributes that best distinguish fake accounts. For instance, introducing ratios such as the following/followers ratio, the following/posts ratio, and the followers/posts ratio significantly improved classification accuracy by capturing behavioral patterns unique to fake profiles. In the preliminary experiments, each additional feature selection increased the model’s accuracy incrementally by up to 7%, highlighting the effectiveness of selective feature inclusion. Additionally, a correlation matrix analysis revealed that attributes like bio length and username digit count, which initially seemed relevant, had weaker associations with the “IsFake” label, and they were ultimately excluded to reduce noise. This systematic approach to feature selection not only streamlined the model, but also amplified its robustness, underscoring the critical role of carefully chosen features in achieving optimal detection accuracy for machine learning models in social media applications.

7. Conclusions and Future Work

In this section, the results of this article are summarized, providing suggestions, ideas and thoughts for future work related to the detection of fake accounts both on Instagram and on other social networks. In this paper, various machine learning approaches were employed to detect fake Instagram accounts. Since its appearance in 2010, Instagram has placed great emphasis on utilizing pictures and videos in the best possible ways in order to attract users. As time passed, the rapid development of technology and changes in society resulted in more users being active on Instagram: it provided a new means of communication, snapshots and the ability to publicize moments of everyday life, and it can be applied in the commercial sector to target millions of users. However, there are users who exploit this means of communication for illegal purposes. Fraud, bullying behavior, misinformation, influencing public opinion and the promotion of third parties are some of the reasons why such accounts are created.
Seven machine learning algorithms (Gaussian Naïve Bayes, Random Forest, Decision Trees, Logistic Regression, MLP, KNN and SVM) were used in order to recognize such kinds of accounts with high accuracy. In order to achieve this, publicly available data were collected and were gradually enriched with more features (characteristics). These features comprised combinations of already existing features, and it was proved in the final results that they played a decisive role in improving the metrics (accuracy, precision, recall, F1-Score, AUC-ROC, log-loss).
An overview of the main results elaborated upon in this paper is presented below:
  • Algorithms with similar internal operation (Random Forests, Decision Trees) demonstrated a similar response during the phases, responding similarly to the additionally proposed features.
  • Overall, the proposed features significantly increased the values of our metrics. This proves that by utilizing the features that a dataset already has, results can be improved without having to incorporate less popular features. The same result can be achieved with few but qualitative features, saving time during the models’ training.
  • There were models that did not necessarily improve in the fourth and last phase (Gaussian Naïve Bayes, Logistic Regression), showing that choosing many features per input is not always the ideal solution to a problem. Other models (Random Forest, MLP, KNN) achieved better results because of their different internal structures, confirming that the decision not to leave out the last additional feature was correct, even though it demonstrated almost zero correlation with the authenticity of the account.
  • The Random Forest model appears to be the best proposed solution for detecting fake accounts. With the exception of the precision value, it excelled in all other metrics, reaching a classification accuracy of almost 98% (97.71%). Similarly, the rest of the metrics reached high levels, with the F1-score reaching 95.74%. These values were obtained during the fourth phase, with all the features available to the algorithm.
The authors aim to evolve the presented work in several directions. Indications of their respective future plans are listed below:
  • The results presented in this paper can be used to produce frameworks and applications available to Instagram users, so that any such user can check the authenticity of the accounts they interact with on Instagram.
  • Specific results or models could be applied in similar studies on less popular social networks, where the available data are also limited in number, so that such malicious actions can be detected or prevented.
  • Another direction could be identifying fake accounts on review websites (e.g., for restaurants, hotels, etc.), where fake accounts also appear and behave in much the same way as on social networks.
  • A final option could be applying the algorithms used here with unsupervised learning to solve the same problem: large volumes of data exist, but labeling is a very demanding procedure that consumes time and labor hours.
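As one hedged illustration of such an unsupervised direction (not part of this study), an anomaly detector such as Isolation Forest could flag suspicious accounts without any labels; the data and thresholds below are entirely synthetic:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic, unlabeled account features: [following, followers, posts].
# Most rows look "normal"; a few mimic fake-account patterns
# (huge following, few followers, no posts).
rng = np.random.default_rng(0)
normal = rng.normal(loc=[300, 250, 30], scale=[50, 40, 10], size=(95, 3))
anomalous = rng.normal(loc=[4000, 20, 0], scale=[300, 10, 1], size=(5, 3))
X = np.vstack([normal, anomalous])

# Isolation Forest assigns -1 to outliers without requiring any labels.
iso = IsolationForest(contamination=0.05, random_state=0).fit(X)
labels = iso.predict(X)
print(int((labels == -1).sum()), "accounts flagged as anomalous")
```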
In summary, this research paper focused on the detection of fake Instagram accounts, proposing solutions to protect users from such malicious entities. Specifically, via the use of machine learning algorithms, solutions were proposed that can be exploited in technological implementations, thus developing a prevention mechanism against such behaviors.
To achieve this, datasets of public Instagram users were collected from the internet and processed by adding or removing features before being fed into the machine learning models. A key finding is that the research presented in this paper achieved results comparable to, if not better than, those of similar studies, while using smaller datasets. Although smaller datasets typically complicate the extraction of robust conclusions, our research successfully navigated this challenge. It is worth highlighting that the comparison was made mainly against approaches that use more features. Last but not least, the proposed approach, which couples six well-established selected features with three newly introduced ones, outperforms existing fake account detection approaches, achieving higher overall accuracy.

Author Contributions

Conceptualization, G.R. and I.R.; methodology, G.R. and S.C.; software, S.C.; validation, S.C.; formal analysis, G.R.; investigation, G.R. and S.C.; writing—original draft preparation, S.C. and G.R.; writing—review and editing, G.R. and I.R.; supervision, I.R.; project administration, I.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hu, Y.; Manikonda, L.; Kambhampati, S. What We Instagram: A First Analysis of Instagram Photo Content and User Types. In Proceedings of the International AAAI Conference on Web and Social Media, Ann Arbor, MI, USA, 1–4 June 2014; Volume 8, pp. 595–598. [Google Scholar]
  2. Meiselwitz, G. Social Computing and Social Media. Participation, User Experience, Consumer Experience, and Applications of Social Computing. In Proceedings of the 12th International Conference, SCSM 2020, Held as Part of the 22nd HCI International Conference, HCII 2020, Copenhagen, Denmark, 19–24 July 2020; Proceedings, Part II. Springer Nature: Berlin/Heidelberg, Germany, 2020. ISBN 978-3-030-49576-3. [Google Scholar]
  3. Yang, C. Research in the Instagram Context: Approaches and Methods. J. Soc. Sci. Res. 2021, 7, 15–21. [Google Scholar] [CrossRef]
  4. Instagram. Wikipedia 2024. Available online: https://en.wikipedia.org/wiki/Instagram (accessed on 20 August 2024).
  5. Bainotti, L.; Caliandro, A.; Gandini, A. From Archive Cultures to Ephemeral Content, and Back: Studying Instagram Stories with Digital Methods. New Media Soc. 2021, 23, 3656–3676. [Google Scholar] [CrossRef]
  6. How and Why Are Educators Using Instagram?—ScienceDirect. Available online: https://www.sciencedirect.com/science/article/pii/S0742051X20313408 (accessed on 30 October 2024).
  7. Biswas, S.; Chowdhury, C.; Acharya, B.; Liu, C.-M. (Eds.) Internet of Things Based Smart Healthcare: Intelligent and Secure Solutions Applying Machine Learning Techniques; Smart Computing and Intelligence; Springer Nature: Singapore, 2022; ISBN 978-981-19140-7-2. [Google Scholar]
  8. Lacroix, P. Big Data Privacy and Ethical Challenges. In Big Data, Big Challenges: A Healthcare Perspective: Background, Issues, Solutions and Research Directions; Househ, M., Kushniruk, A.W., Borycki, E.M., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 101–111. ISBN 978-3-030-06109-8. [Google Scholar]
  9. Akyon, F.C.; Esat Kalfaoglu, M. Instagram Fake and Automated Account Detection. In Proceedings of the 2019 Innovations in Intelligent Systems and Applications Conference (ASYU), Izmir, Turkey, 31 October–2 November 2019; pp. 1–7. [Google Scholar]
  10. Sheikhi, S. An Efficient Method for Detection of Fake Accounts on the Instagram Platform. Rev. D’intelligence Artif. 2020, 34, 429–436. [Google Scholar] [CrossRef]
  11. Purba, K.R.; Asirvatham, D.; Murugesan, R.K. Classification of Instagram Fake Users Using Supervised Machine Learning Algorithms. Int. J. Electr. Comput. Eng. 2020, 10, 2763. [Google Scholar] [CrossRef]
  12. Kaushik, K.; Bhardwaj, A.; Kumar, M.; Gupta, S.K.; Gupta, A. A Novel Machine Learning-based Framework for Detecting Fake Instagram Profiles. Concurr. Comput. 2022, 34, e7349. [Google Scholar] [CrossRef]
  13. Ezarfelix, J.; Jeffrey, N.; Sari, N. A Systematic Literature Review: Instagram Fake Account Detection Based on Machine Learning. Eng. Math. Comput. Sci. J. (EMACS) 2022, 4, 25–31. [Google Scholar] [CrossRef]
  14. Kesharwani, M.; Kumari, S.; Niranjan, V. Detecting Fake Social Media Account Using Deep Neural Networking. Int. Res. J. Eng. Technol. (IRJET) 2021, 8, 1191–1197. [Google Scholar]
  15. Dey, A.; Reddy, H.; Dey, M.; Sinha, N.; Joy, J. Detection of Fake Accounts in Instagram Using Machine Learning. Int. J. Comput. Sci. Inf. Technol. 2019, 11, 83–90. [Google Scholar] [CrossRef]
  16. Maniraj, S.P.; Krishnan, G.H.; Surya, T.; Pranav, R. Fake Account Detection Using Machine Learning and Data Science. Int. J. Innov. Technol. Explor. Eng. 2019, 9, 583–585. [Google Scholar]
  17. Raturi, R. Machine Learning Implementation for Identifying Fake Accounts in Social Network. Int. J. Pure Appl. Math. 2018, 118, 4785–4797. [Google Scholar]
  18. Khaled, S.; El-Tazi, N.; Mokhtar, H.M. Detecting Fake Accounts on Social Media. In Proceedings of the 2018 IEEE International Conference on Big Data (Big Data), Seattle, WA, USA, 10–13 December 2018; pp. 3672–3681. [Google Scholar]
  19. Roy, P.K.; Chahar, S. Fake Profile Detection on Social Networking Websites: A Comprehensive Review. IEEE Trans. Artif. Intell. 2020, 1, 271–285. [Google Scholar] [CrossRef]
  20. Kaubiyal, J.; Jain, A.K. A Feature Based Approach to Detect Fake Profiles in Twitter. In Proceedings of the 3rd International Conference on Big Data and Internet of Things, Melbourn, VIC, Australia, 22 August 2019; pp. 135–139. [Google Scholar]
  21. Cresci, S.; Di Pietro, R.; Petrocchi, M.; Spognardi, A.; Tesconi, M. The Paradigm-Shift of Social Spambots: Evidence, Theories, and Tools for the Arms Race. In Proceedings of the 26th International Conference on World Wide Web Companion, WWW ’17 Companion, Perth, Australia, 3–7 April 2017; pp. 963–972. [Google Scholar]
  22. Agarwal, N.; Jabin, S.; Hussain, S.Z. Analyzing Real and Fake Users in Facebook Network Based on Emotions. In Proceedings of the 2019 11th International Conference on Communication Systems & Networks (COMSNETS), Bengaluru, India, 7–11 January 2019; pp. 110–117. [Google Scholar]
  23. Adikari, S.; Dutta, K. Identifying Fake Profiles in LinkedIn. arXiv 2020, arXiv:2006.01381. [Google Scholar] [CrossRef]
  24. Gurajala, S.; White, J.; Hudson, B.; Voter, B.; Matthews, J. Profile Characteristics of Fake Twitter Accounts. Big Data Soc. 2016, 3, 1–13. [Google Scholar] [CrossRef]
  25. Sahoo, S.R.; Gupta, B.B. Real-Time Detection of Fake Account in Twitter Using Machine-Learning Approach. In Advances in Computational Intelligence and Communication Technology; Gao, X.-Z., Tiwari, S., Trivedi, M.C., Mishra, K.K., Eds.; Advances in Intelligent Systems and Computing; Springer: Singapore, 2021; Volume 1086, pp. 149–159. ISBN 9789811512742. [Google Scholar]
  26. Instafake-Dataset/Data/Fake-v1.0 at Master · Fcakyon/Instafake-Dataset. Available online: https://github.com/fcakyon/instafake-dataset/tree/master/data/fake-v1.0 (accessed on 30 July 2024).
  27. Instagram Fake Spammer Genuine Accounts. Available online: https://www.kaggle.com/datasets/free4ever1/instagram-fake-spammer-genuine-accounts (accessed on 30 July 2024).
  28. Shah, A.; Varshney, S.; Mehrotra, M. DeepMUI: A Novel Method to Identify Malicious Users on Online Social Network Platforms. Concurr. Comput. Pract. Exp. 2023, 36, e7917. [Google Scholar] [CrossRef]
  29. Mughaid, A.; Obeidat, I.; AlZu’bi, S.; Elsoud, E.A.; Alnajjar, A.; Alsoud, A.R.; Abualigah, L. A Novel Machine Learning and Face Recognition Technique for Fake Accounts Detection System on Cyber Social Networks. Multimed. Tools Appl. 2023, 82, 26353–26378. [Google Scholar] [CrossRef]
  30. Amankeldin, D.; Kurmangaziyeva, L.; Mailybayeva, A.; Glazyrina, N.; Zhumadillayeva, A.; Karasheva, N. Deep Neural Network for Detecting Fake Profiles in Social Networks. Comput. Syst. Sci. Eng. 2023, 47, 1091–1108. [Google Scholar] [CrossRef]
Figure 1. Distribution of “hasProfilePicture”.
Figure 2. Distribution of “hasDigitsInUsername”.
Figure 3. Distribution of “numberOfFollowers” (zoomed-in view).
Figure 4. Distribution of “numberOfFollowers” (zoomed-out view).
Figure 5. Distribution of “numberOfPosts” (zoomed-in view).
Figure 6. Distribution of “numberOfPosts” (zoomed-out view).
Figure 7. Distribution of “nums/length username”.
Figure 8. Distribution of “numberOfFollowing”.
Figure 9. Distribution of “descriptionLength”.
Figure 10. Distribution of “isPrivate”.
Figure 11. Distribution of “following/followers ratio” (zoomed-out view on the left; zoomed-in view on the right).
Figure 12. Distribution of “following/posts ratio” (zoomed-out view on the left; zoomed-in view on the right).
Figure 13. Distribution of “followers/posts ratio” (zoomed-out view on the left; zoomed-in view on the right).
Figure 14. Feature correlation matrix.
Figure 15. Realization diagram.
Figure 16. Aggregated performance of 1st-phase evaluation.
Figure 17. Log-loss evaluation of 1st phase.
Figure 18. Log-loss evaluation of 2nd phase.
Figure 19. Aggregated performance of 2nd-phase evaluation.
Figure 20. Log-loss evaluation of 3rd phase.
Figure 21. Aggregated results of 3rd-phase evaluation.
Figure 22. Log-loss evaluation of 4th phase.
Figure 23. Aggregated results of 4th-phase evaluation.
Figure 24. Aggregated results of Gaussian Naïve Bayes model.
Figure 25. Aggregated results for log-loss of Gaussian Naïve Bayes.
Figure 26. Aggregated results of Random Forest model.
Figure 27. Aggregated results for log-loss with Random Forest.
Figure 28. Aggregated results of Decision Trees model.
Figure 29. Aggregated results for log-loss of Decision Trees.
Figure 30. Aggregated results of Logistic Regression model.
Figure 31. Aggregated results for log-loss of Logistic Regression.
Figure 32. Aggregated results of MLP model.
Figure 33. Aggregated results for log-loss of MLP.
Figure 34. Aggregated results of KNN model.
Figure 35. Aggregated results for log-loss of KNN.
Figure 36. Aggregated results of SVM model.
Figure 37. Aggregated results for log-loss of SVM.
Table 1. Quantitative results for the “InstaFake Dataset”.

Number of fake (malicious) accounts: 200
Number of real accounts: 994
Total number of accounts: 1194
Number of features (columns): 9
Table 2. Presentation and description of the features of the “InstaFake Dataset”.

user_media_count: The total number of posts that the account contains.
user_follower_count: The total number of followers that the account/user has.
user_following_count: The total number of accounts that the user follows.
user_has_profile_pic: Whether the account has a profile image or not.
user_is_private: Whether the account is private or not.
user_biography_length: The number of characters in the account’s biography.
username_length: The number of characters in the account’s username.
username_digit_count: The number of digits in the account’s username.
is_fake: Whether the account is fake (malicious) or real.
Table 3. Quantitative results from the “train.csv” file of the “Instagram fake spammer genuine accounts” dataset.

Number of real accounts: 288
Number of fake (malicious) accounts: 288
Total number of accounts: 576
Number of features (columns): 12
Table 4. Quantitative results from the “test.csv” file of the “Instagram fake spammer genuine accounts” dataset.

Number of real accounts: 60
Number of fake (malicious) accounts: 60
Total number of accounts: 120
Number of features (columns): 12
Table 5. Presentation of the features of the “Instagram fake spammer genuine accounts” dataset.

profile_pic: Whether the account has a profile image or not.
nums/length username: Ratio of the number of digits in the account name to its length.
fullname words: The number of words in the user’s real name.
nums/length fullname: Ratio of the number of digits in the user’s real name to its length.
name==username: Whether the user’s real name and the user’s account name match.
description length: The number of characters displayed in the account’s biography.
external URL: Whether there is an external link (URL) or not.
Private: Whether the account is private or not.
#posts: The number of posts that the account has.
#followers: The number of accounts that follow the specific account/user.
#follows: The number of other accounts that the user follows.
Fake: Whether the account is fake/malicious or real.
Table 6. Quantitative results for the final dataset.

Number of fake (malicious) accounts: 548
Number of real accounts: 1342
Total number of accounts: 1890
Ratio of real accounts to the total number of accounts: 71%
Ratio of fake accounts to the total number of accounts: 29%
Number of features (columns): 9
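The totals in Table 6 follow directly from merging the two source datasets (Tables 1, 3 and 4). A quick arithmetic check in plain Python, using only the counts reported above:

```python
# Class counts reported for the two source datasets (Tables 1, 3 and 4).
instafake = {"fake": 200, "real": 994}       # "InstaFake Dataset"
spammer_train = {"fake": 288, "real": 288}   # train.csv
spammer_test = {"fake": 60, "real": 60}      # test.csv

fake = instafake["fake"] + spammer_train["fake"] + spammer_test["fake"]
real = instafake["real"] + spammer_train["real"] + spammer_test["real"]
total = fake + real

print(fake, real, total)                               # 548 1342 1890
print(round(real / total, 2), round(fake / total, 2))  # 0.71 0.29
```

The merged counts reproduce the 548/1342 split and the 71%/29% class ratio of the final dataset exactly.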
Table 7. Presentation and description of the final dataset’s features.

numberOfFollowers: The number of accounts that follow the specific account/user.
numberOfFollowing: The number of other accounts that the user follows.
nums/length username: Ratio of the number of digits in the account’s username to its length.
hasDigitsInUsername: Whether the account’s username contains digits.
hasProfilePicture: Whether the account has a profile picture or not.
descriptionLength: The number of characters in the account’s biography.
isPrivate: Whether the account is private or not.
numberOfPosts: The number of posts of the account.
isFake: Whether the account is fake (malicious) or real.
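The feature vector above can be derived from a handful of raw profile fields. A minimal sketch follows; the function name, signature, and example values are illustrative, since the paper does not publish its extraction code:

```python
def derive_features(username: str, biography: str, followers: int,
                    following: int, posts: int, has_picture: bool,
                    is_private: bool) -> dict:
    """Sketch of building the Table 7 feature vector from raw profile
    fields (illustrative only; not the authors' code)."""
    digits = sum(ch.isdigit() for ch in username)
    return {
        "numberOfFollowers": followers,
        "numberOfFollowing": following,
        "nums/length username": digits / len(username) if username else 0.0,
        "hasDigitsInUsername": int(digits > 0),
        "hasProfilePicture": int(has_picture),
        "descriptionLength": len(biography),
        "isPrivate": int(is_private),
        "numberOfPosts": posts,
    }

# Example: a 10-character username containing 4 digits.
features = derive_features("user1234ab", "hello", 10, 500, 2, False, True)
print(features["nums/length username"])   # 0.4
print(features["hasDigitsInUsername"])    # 1
```

Note that the two ratio features shown in Figures 11–13 (following/followers, following/posts, followers/posts) would need a guard against division by zero when the denominator count is 0.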
Table 8. Results of 1st-phase evaluation.

Model | Accuracy | Precision | Recall | F1 | roc_auc | Log-loss
Gaussian | 0.601 | 0.411 | 0.947 | 0.5745 | 0.7064 | 14.217
Random Forest | 0.963 | 0.9099 | 0.9701 | 0.939 | 0.965 | 1.336
Decision Trees | 0.9465 | 0.9014 | 0.9231 | 0.9123 | 0.94 | 1.913
Logistic Regression | 0.9541 | 0.9103 | 0.9221 | 0.9162 | 0.9441 | 1.653
MLP | 0.955 | 0.952 | 0.9035 | 0.927 | 0.9415 | 1.586
KNN | 0.9472 | 0.9014 | 0.8887 | 0.8951 | 0.9278 | 1.907
SVM | 0.7199 | 0.5975 | 0.0156 | 0.0343 | 0.5054 | 10.015
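The metrics reported in Tables 8–11 follow the standard binary-classification definitions. A self-contained sketch on a toy prediction set (the helper function and toy labels are illustrative, not taken from the paper; the formulas match the scikit-learn-style definitions the metric names imply):

```python
import math

def binary_metrics(y_true, y_pred, y_prob):
    """Accuracy, precision, recall, F1 and log-loss for binary labels
    (toy sketch; formulas follow the standard definitions)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    eps = 1e-15  # clip probabilities so log() stays finite
    log_loss = -sum(t * math.log(max(p, eps)) +
                    (1 - t) * math.log(max(1 - p, eps))
                    for t, p in zip(y_true, y_prob)) / len(y_true)
    return accuracy, precision, recall, f1, log_loss

# Toy example: 4 accounts, one fake account missed (false negative).
acc, prec, rec, f1, ll = binary_metrics(
    [1, 0, 1, 0], [1, 0, 0, 0], [0.9, 0.1, 0.4, 0.2])
print(round(prec, 2), round(rec, 2), round(f1, 2))  # 1.0 0.5 0.67
```

The toy example illustrates the SVM pattern visible in Table 8: high precision combined with very low recall collapses the F1 score, even while accuracy stays deceptively moderate.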
Table 9. Results of 2nd-phase evaluation.

Model | Accuracy | Precision | Recall | F1 | roc_auc | Log-loss
Gaussian | 0.8959 | 0.7436 | 0.9535 | 0.8365 | 0.9117 | 3.613
Random Forest | 0.9701 | 0.9452 | 0.9358 | 0.9419 | 0.9598 | 1.079
Decision Trees | 0.9505 | 0.8909 | 0.9363 | 0.913 | 0.9457 | 1.784
Logistic Regression | 0.9579 | 0.9085 | 0.9213 | 0.9149 | 0.9456 | 1.525
MLP | 0.9609 | 0.9457 | 0.909 | 0.9271 | 0.9446 | 1.399
KNN | 0.9471 | 0.9013 | 0.8887 | 0.895 | 0.9276 | 1.906
SVM | 0.734 | 0.9102 | 0.0592 | 0.1154 | 0.5312 | 9.654
Table 10. Results of 3rd-phase evaluation.

Model | Accuracy | Precision | Recall | F1 | roc_auc | Log-loss
Gaussian | 0.944 | 0.9238 | 0.8764 | 0.8987 | 0.9219 | 1.815
Random Forest | 0.9736 | 0.9486 | 0.9549 | 0.9519 | 0.9677 | 0.953
Decision Trees | 0.9507 | 0.9612 | 0.8712 | 0.9138 | 0.9278 | 1.784
Logistic Regression | 0.9662 | 0.9412 | 0.9468 | 0.944 | 0.9608 | 1.207
MLP | 0.9574 | 0.955 | 0.905 | 0.9271 | 0.9424 | 1.525
KNN | 0.9597 | 0.9033 | 0.926 | 0.9145 | 0.9432 | 1.779
SVM | 0.734 | 0.9102 | 0.0592 | 0.1154 | 0.5312 | 9.564
Table 11. Results of 4th-phase evaluation.

Model | Accuracy | Precision | Recall | F1 | roc_auc | Log-loss
Gaussian | 0.9294 | 0.9266 | 0.8207 | 0.8699 | 0.8959 | 2.404
Random Forest | 0.9771 | 0.9668 | 0.9479 | 0.9573 | 0.9679 | 0.828
Decision Trees | 0.9573 | 0.9739 | 0.8831 | 0.9264 | 0.9361 | 1.525
Logistic Regression | 0.97 | 0.971 | 0.9116 | 0.9404 | 0.9511 | 1.083
MLP | 0.9732 | 0.9663 | 0.9352 | 0.9508 | 0.9614 | 0.954
KNN | 0.9522 | 0.909 | 0.9259 | 0.9174 | 0.9443 | 1.716
SVM | 0.734 | 0.9102 | 0.6 | 0.1154 | 0.5273 | 9.564
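Reading Table 11 programmatically confirms that Random Forest leads the 4th phase on both F1 (higher is better) and log-loss (lower is better). The dictionary below is a direct transcription of the table, not new data:

```python
# F1 and log-loss per model, transcribed from Table 11 (4th phase).
phase4 = {
    "Gaussian":            {"f1": 0.8699, "log_loss": 2.404},
    "Random Forest":       {"f1": 0.9573, "log_loss": 0.828},
    "Decision Trees":      {"f1": 0.9264, "log_loss": 1.525},
    "Logistic Regression": {"f1": 0.9404, "log_loss": 1.083},
    "MLP":                 {"f1": 0.9508, "log_loss": 0.954},
    "KNN":                 {"f1": 0.9174, "log_loss": 1.716},
    "SVM":                 {"f1": 0.1154, "log_loss": 9.564},
}

best_f1 = max(phase4, key=lambda m: phase4[m]["f1"])
best_ll = min(phase4, key=lambda m: phase4[m]["log_loss"])
print(best_f1, best_ll)   # Random Forest Random Forest
```

MLP is the runner-up on both criteria (F1 0.9508, log-loss 0.954), with Logistic Regression close behind.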
Table 12. Features and their descriptions from paper [28] for comparison with the current work.

name: The name of the Twitter user.
statuses_count: The number of status updates a user posts on their timeline.
followers_count: The number of followers of the user.
favourites_count: The total number of tweets liked by the user.
friends_count: The number of accounts that the user is following.
follower_friend_Ratio: Ratio between follower_count and friend_count.
listed_count: The total number of public lists to which the user has been added.
lang_code: The language code through which the user communicates.
Table 13. More features and their explanations from paper [28] for comparison with the current work.

user_Media_Count: The amount of media a user has on their timeline.
user_Follower_Count: The number of accounts that are following the user.
user_Following_Count: The number of accounts that are followed by the user.
user_Has_Highligh_Reels: Whether the user has highlighted any reel on their wall.
user_Has_External_Url: Whether the user has any external URL on their wall.
user_Tags_Count: The number of times the user has been tagged.
user_Biography_Length: The user’s biography length.
username_Length: The length of the user’s username.
username_Digit_Count: The number of digits in the user’s username.
user_Follower_Following_Ratio: Ratio between user_Follower_Count and user_Following_Count.
username_Length_Digit_Count_Ratio: Ratio between username_Length and username_Digit_Count.
Table 14. Parameter selection for the neural networks used in paper [28] for comparison with the study presented in this paper.

LSTM memory cells: 64
CNN filters: 64
Batch size: 40
Learning rate: 0.001
Optimizer: Adam
Loss: Binary cross-entropy
Epochs: 40
Pool size: 3 (first pooling layer) and 2 (second pooling layer)
Dropout value: 0.1
L2 regularizer value: 0.01
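For reference, the Table 14 hyperparameters of the [28] baseline can be captured as a single configuration dictionary. This is a transcription of the table for reproducibility purposes, not the authors' code; the key names are illustrative:

```python
# Hyperparameters of the CNN-LSTM model in [28], transcribed from Table 14.
# Key names are illustrative; values are as reported in the table.
cnn_lstm_config = {
    "lstm_memory_cells": 64,
    "cnn_filters": 64,
    "batch_size": 40,
    "learning_rate": 0.001,
    "optimizer": "Adam",
    "loss": "binary_crossentropy",
    "epochs": 40,
    "pool_sizes": (3, 2),       # first and second pooling layers
    "dropout": 0.1,
    "l2_regularizer": 0.01,
}
```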
Table 15. Comparison of the current research work with two others.

Work | Logistic Regression | SVM (Basic) | SVM (Kernel) | KNN | Naïve Bayes (Basic) | Naïve Bayes (Kernel) | Neural Network | Random Forest | Decision Trees
Current work | 0.97 | 0.734 | - | 0.9522 | - | - | - | 0.9771 | 0.9573
[29] | 0.957 | 0.945 | 0.963 | 0.963 | 0.93 | 0.963 | 0.93 | - | -
[30] | - | 0.82 | - | - | - | - | 0.79 | 0.8 | -
Share and Cite

Chelas, S.; Routis, G.; Roussaki, I. Detection of Fake Instagram Accounts via Machine Learning Techniques. Computers 2024, 13, 296. https://doi.org/10.3390/computers13110296