Scientific Programming in Practical Symmetric Big Data

A special issue of Symmetry (ISSN 2073-8994).

Deadline for manuscript submissions: closed (31 March 2017) | Viewed by 82876

Special Issue Editors


Guest Editor
Department of Computer Software, SoonChunHyang University, Asan, Korea
Interests: symmetric big data; cloud computing; hybrid intelligence; cluster & parallel computing; multimedia service

Guest Editor
School of Computing and Information Sciences, Florida International University, Miami, FL, USA
Interests: big data; mobile multimedia; distributed database systems; multimedia data mining

Special Issue Information

Dear Colleagues,

This Special Issue on scientific programming for practical symmetric big data aims to gather new trends and recent methodological advances on the wide range of problems that arise in different fields when handling practical symmetric big data. Many computational methods have been applied successfully to optimization and classification problems, yet many practical problems in symmetric big data are still tackled with traditional methods that are difficult to solve experimentally at scale. More specifically, many computational problems in scientific programming are being addressed with Artificial Intelligence (AI), High-Performance Computing (HPC), large-scale data mining, and related approaches that handle practical big data. Submissions on scientific programming applied to optimization in practical big data are welcome. We invite researchers and practitioners to submit original research and theoretical articles.

Prof. Dr. Doo-soon Park
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Symmetry is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Heuristic algorithms for optimization problems of symmetric data
  • Genetic programming using symmetric data
  • Parallel and distributed design and implementations of symmetric data
  • Symmetric computational method using high performance computing
  • Optimization problems in symmetric life science
  • Nature inspired symmetric computing
  • Symmetric modeling and prediction of performance
  • Design and validation of applications
  • Symmetric programming and execution models for new architectures
  • Optimization applications on the cloud system
  • Large-scale optimization of symmetric data
  • Symmetric big data analysis
  • Performance evaluations of symmetric big data

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (14 papers)


Research


Article
Fair Dynamic Spectrum Allocation Using Modified Game Theory for Resource-Constrained Cognitive Wireless Sensor Networks
by Sang-Seon Byun and Joon-Min Gil
Symmetry 2017, 9(5), 73; https://doi.org/10.3390/sym9050073 - 16 May 2017
Cited by 14 | Viewed by 4349
Abstract
This paper considers the deployment of a cognitive radio scheme in wireless sensor networks to achieve (1) fair spectrum allocation, (2) maximum spectrum utilization, and (3) priority-based sensor transmissions, while (4) avoiding unnecessary spectrum handover (or handoff). This problem is modelled as a bi-objective optimization problem. We apply modified game theory and a cooperative approach to identify an approximate optimal solution in reasonable time. We perform a series of numerical experiments to show that our scheme achieves fair spectrum allocation (in terms of proportional fairness) while observing transmission priorities and minimizing unnecessary spectrum handover. Full article
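For readers unfamiliar with the fairness criterion used in the comparison, proportional fairness is commonly scored as the sum of logarithmic utilities over the allocated rates; the toy sketch below (not the authors' game-theoretic model; the allocations and function name are invented) shows how two allocations with the same total spectrum can still differ sharply in this score.

    import math

    def proportional_fairness(rates):
        # Sum of log rates: higher means a more proportionally fair allocation.
        return sum(math.log(r) for r in rates if r > 0)

    equal_split  = [2.0, 2.0, 2.0, 2.0]   # every sensor gets the same share
    greedy_split = [5.0, 1.0, 1.0, 1.0]   # same total, concentrated on one sensor

    print(proportional_fairness(equal_split))   # ~2.77
    print(proportional_fairness(greedy_split))  # ~1.61, less fair despite the equal total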

Article
Analysis of a Similarity Measure for Non-Overlapped Data
by Sanghyuk Lee, Jaehoon Cha, Nipon Theera-Umpon and Kyeong Soo Kim
Symmetry 2017, 9(5), 68; https://doi.org/10.3390/sym9050068 - 9 May 2017
Cited by 3 | Viewed by 5128
Abstract
A similarity measure is a measure evaluating the degree of similarity between two fuzzy data sets and has become an essential tool in many applications including data mining, pattern recognition, and clustering. In this paper, we propose a similarity measure capable of handling non-overlapped data as well as overlapped data and analyze its characteristics on data distributions. We first design the similarity measure based on a distance measure and apply it to overlapped data distributions. From the calculations for example data distributions, we find that, though the similarity calculation is effective, the designed similarity measure cannot distinguish two non-overlapped data distributions, thus resulting in the same value for both data sets. To obtain discriminative similarity values for non-overlapped data, we consider two approaches. The first one is to use a conventional similarity measure after preprocessing non-overlapped data. The second one is to take into account neighbor data information in designing the similarity measure, where we consider the relation to specific data and residual data information. Two artificial patterns of non-overlapped data are analyzed in an illustrative example. The calculation results demonstrate that the proposed similarity measures can discriminate non-overlapped data. Full article
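The limitation described above can be reproduced with a generic distance-based similarity measure; the sketch below uses a normalized Hamming distance between membership grades (an illustrative stand-in, not necessarily the measure designed in the paper) and shows that two non-overlapped pairs receive the same value regardless of how far apart they are.

    def similarity(mu_a, mu_b):
        # s(A, B) = 1 - d(A, B), with d the normalized Hamming distance of membership grades.
        d = sum(abs(a - b) for a, b in zip(mu_a, mu_b)) / len(mu_a)
        return 1.0 - d

    base = [0, 0, 1, 1, 0, 0, 0, 0, 0, 0]   # fuzzy set supported on x = 2..3
    near = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]   # non-overlapped set right next to it
    far  = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]   # non-overlapped set much farther away

    print(similarity(base, near))  # 0.6
    print(similarity(base, far))   # 0.6 -- identical, although 'far' is farther from 'base'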

Article
Analysis of Clustering Evaluation Considering Features of Item Response Data Using Data Mining Technique for Setting Cut-Off Scores
by Byoungwook Kim, JaMee Kim and Gangman Yi
Symmetry 2017, 9(5), 62; https://doi.org/10.3390/sym9050062 - 25 Apr 2017
Cited by 16 | Viewed by 7210
Abstract
The setting of standards is a critical process in educational evaluation, but it is time-consuming and expensive because it is generally conducted by a group of education experts. The purpose of this paper is to find a suitable cluster validity index that considers the features of item response data for setting cut-off scores. In this study, nine representative cluster validity indexes were used to evaluate the clustering results. Cohen’s kappa coefficient is used to check the agreement between the cut-off scores set using four clustering techniques and a cut-off score set by experts. We compared the cut-off scores obtained by each cluster validity index and by a group of experts. The experimental results show that the entropy-based method considers the features of item response data, so a clustering evaluation method can realistically be applied to the setting of standards in criterion-referenced evaluation. Full article
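As an illustration of the agreement check mentioned above, the sketch below computes Cohen's kappa between the pass/fail labels implied by a clustering-derived cut-off score and by an expert cut-off score; the scores and both cut-offs are invented for the example.

    from collections import Counter

    def cohens_kappa(labels_a, labels_b):
        n = len(labels_a)
        p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n      # observed agreement
        ca, cb = Counter(labels_a), Counter(labels_b)
        p_e = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)  # chance agreement
        return (p_o - p_e) / (1 - p_e)

    scores      = [35, 42, 48, 55, 61, 67, 74, 80, 88, 93]
    expert_cut  = 60    # cut-off score set by the expert group
    cluster_cut = 55    # cut-off score derived from a clustering of item responses

    by_expert     = ["pass" if s >= expert_cut else "fail" for s in scores]
    by_clustering = ["pass" if s >= cluster_cut else "fail" for s in scores]

    print(cohens_kappa(by_expert, by_clustering))   # about 0.78 on this toy data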

Article
A Fast K-prototypes Algorithm Using Partial Distance Computation
by Byoungwook Kim
Symmetry 2017, 9(4), 58; https://doi.org/10.3390/sym9040058 - 21 Apr 2017
Cited by 10 | Viewed by 7702
Abstract
The k-means algorithm is one of the most popular and widely used clustering algorithms; however, it is limited to numerical data only. The k-prototypes algorithm is well known for dealing with both numerical and categorical data, but there have been no studies on accelerating it. In this paper, we propose a new, fast k-prototypes algorithm that provides the same answers as the original k-prototypes algorithm. The proposed algorithm avoids unnecessary distance computations using partial distance computation. Our k-prototypes algorithm finds the minimum distance without computing the distances over all attributes between an object and a cluster center, which reduces its time complexity. Partial distance computation uses the fact that the maximum difference between two categorical attribute values is 1. Hence, if data objects have m categorical attributes, the maximum difference over the categorical attributes between an object and a cluster center is m. Our algorithm first computes distances with the numerical attributes only. If the difference between the minimum distance and the second smallest distance computed with the numerical attributes is greater than m, we can find the minimum distance between an object and a cluster center without computing the distances over the categorical attributes. The experimental results show that the computational performance of the proposed k-prototypes algorithm is superior to that of the original k-prototypes algorithm on our datasets. Full article
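The pruning idea is concrete enough to sketch. The code below is a hedged illustration, not the paper's implementation; it assumes a squared-Euclidean numerical part and a simple mismatch count for the categorical part, and skips the categorical computation whenever the gap between the best and second-best numerical distances already exceeds m.

    def nearest_prototype(num_x, cat_x, num_centres, cat_centres):
        m = len(cat_x)                                  # number of categorical attributes
        d_num = [sum((a - b) ** 2 for a, b in zip(num_x, c)) for c in num_centres]
        order = sorted(range(len(d_num)), key=d_num.__getitem__)
        best, second = order[0], order[1]

        if d_num[second] - d_num[best] > m:             # categorical part (at most m) cannot
            return best                                 # overturn the winner: skip it entirely

        # Otherwise fall back to the full dissimilarity (numerical + mismatch count).
        def full(i):
            mismatches = sum(a != b for a, b in zip(cat_x, cat_centres[i]))
            return d_num[i] + mismatches
        return min(range(len(d_num)), key=full)

    centres_num = [[0.0, 0.0], [10.0, 10.0]]
    centres_cat = [["red", "small"], ["blue", "large"]]
    print(nearest_prototype([0.5, 0.2], ["blue", "small"], centres_num, centres_cat))  # 0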

Article
3D Reconstruction Framework for Multiple Remote Robots on Cloud System
by Phuong Minh Chu, Seoungjae Cho, Simon Fong, Yong Woon Park and Kyungeun Cho
Symmetry 2017, 9(4), 55; https://doi.org/10.3390/sym9040055 - 14 Apr 2017
Cited by 12 | Viewed by 5891
Abstract
This paper proposes a cloud-based framework that optimizes the three-dimensional (3D) reconstruction of multiple types of sensor data captured from multiple remote robots. A working environment using multiple remote robots requires massive amounts of data processing in real-time, which cannot be achieved using a single computer. In the proposed framework, reconstruction is carried out in cloud-based servers via distributed data processing. Consequently, users do not need to consider computing resources even when utilizing multiple remote robots. The sensors’ bulk data are transferred to a master server that divides the data and allocates the processing to a set of slave servers. Thus, the segmentation and reconstruction tasks are implemented in the slave servers. The reconstructed 3D space is created by fusing all the results in a visualization server, and the results are saved in a database that users can access and visualize in real-time. The results of the experiments conducted verify that the proposed system is capable of providing real-time 3D scenes of the surroundings of remote robots. Full article
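As a rough illustration of the master/slave split described above (not the authors' framework; the partitioning, the per-slave step and all names are invented, and a local process pool stands in for the slave servers), the sketch below divides a point cloud, processes the parts in parallel and fuses the results.

    from multiprocessing import Pool

    def reconstruct_segment(points):
        # Stand-in for the per-slave segmentation/reconstruction step:
        # keep one point per integer voxel (a crude downsampling).
        return {(int(x), int(y), int(z)) for x, y, z in points}

    def master(points, n_slaves=4):
        chunks = [points[i::n_slaves] for i in range(n_slaves)]   # divide the sensor data
        with Pool(n_slaves) as pool:
            partial = pool.map(reconstruct_segment, chunks)       # "allocate" to slaves
        return set().union(*partial)                              # fuse for visualization

    if __name__ == "__main__":
        cloud = [(x * 0.5, x * 0.25, 1.0) for x in range(1000)]
        print(len(master(cloud)))    # number of occupied voxels in the fused result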

Article
Iterative Speedup by Utilizing Symmetric Data in Pricing Options with Two Risky Assets
by Dohyun Pak, Changkyu Han and Won-Tak Hong
Symmetry 2017, 9(1), 12; https://doi.org/10.3390/sym9010012 - 21 Jan 2017
Cited by 6 | Viewed by 4414
Abstract
The Crank–Nicolson method can be used to solve the Black–Scholes partial differential equation in one dimension when both accuracy and stability are of concern. In multiple dimensions, however, discretizing the computational grid with a Crank–Nicolson scheme requires significantly more storage than the widely adopted Operator Splitting Method (OSM). We found that symmetrizing the system of equations resulting from the Crank–Nicolson discretization lets us use a standard preconditioner for the iterative matrix solver and reduces the number of iterations needed to obtain accurate option values. In addition, the number of iterations required to solve the preconditioned system resulting from the proposed iterative Crank–Nicolson scheme does not grow with the size of the system. Thus, we can effectively reduce the order of complexity in multidimensional option pricing. The numerical results are compared with those of the implicit Operator Splitting Method (OSM) to show the effectiveness of the approach. Full article
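For reference, the standard Crank–Nicolson formulation for the one-dimensional Black–Scholes equation is sketched below (a textbook statement, not reproduced from the paper); in several dimensions the discretized operator couples all grid points, which is what motivates a preconditioned iterative solve instead of the dimension-by-dimension OSM sweeps.

    % With \tau = T - t and the Black--Scholes operator
    % \mathcal{L}V = \tfrac{1}{2}\sigma^2 S^2 V_{SS} + r S V_S - r V,
    % the Crank--Nicolson step from time level m to m+1 reads
    \[
      \frac{V^{m+1}-V^{m}}{\Delta\tau}
        = \frac{1}{2}\bigl(\mathcal{L}V^{m+1} + \mathcal{L}V^{m}\bigr)
      \;\Longleftrightarrow\;
      \Bigl(I-\tfrac{\Delta\tau}{2}\mathcal{L}\Bigr)V^{m+1}
        = \Bigl(I+\tfrac{\Delta\tau}{2}\mathcal{L}\Bigr)V^{m},
    \]
    % so every time step requires the solution of one large linear system.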

Article
DIaaS: Resource Management System for the Intra-Cloud with On-Premise Desktops
by Hyun-Woo Kim, Jaekyung Han, Jong Hyuk Park and Young-Sik Jeong
Symmetry 2017, 9(1), 8; https://doi.org/10.3390/sym9010008 - 9 Jan 2017
Cited by 7 | Viewed by 6410
Abstract
Infrastructure as a service with desktops (DIaaS) based on the extensible mark-up language (XML) is herein proposed to utilize surplus resources. DIaaS is a traditional surplus-resource integrated management technology. It is designed to provide fast work distribution and computing services based on user service requests as well as storage services through desktop-based distributed computing and storage resource integration. DIaaS includes a nondisruptive resource service and an auto-scalable scheme to enhance the availability and scalability of intra-cloud computing resources. A performance evaluation of the proposed scheme measured the clustering performance time for surplus resource utilization. The results showed improvement in computing and storage services in a connection of at least two computers compared to the traditional method for high-availability measurement of nondisruptive services. Furthermore, an artificial server error environment was used to create a clustering delay for computing and storage services and for nondisruptive services. It was compared to the Hadoop distributed file system (HDFS). Full article

Article
Unmanned Aerial Vehicle Flight Point Classification Algorithm Based on Symmetric Big Data
by Jeonghoon Kwak, Jong Hyuk Park and Yunsick Sung
Symmetry 2017, 9(1), 1; https://doi.org/10.3390/sym9010001 - 24 Dec 2016
Cited by 10 | Viewed by 5132
Abstract
Unmanned aerial vehicles (UAVs) with auto-pilot capabilities are often used for surveillance and patrol. Pilots set the flight points on a map in order to navigate to the imaging points where surveillance or patrolling is required. However, there are limits to denoting information such as absolute altitudes and angles, so this information must be set accurately. This paper therefore proposes a method to construct environmental symmetric big data using an unmanned aerial vehicle (UAV) during flight by designating the imaging and non-imaging points for surveillance and patrols. The K-Means-based algorithm proposed in this paper is then employed to divide the imaging points, which are set by the pilot, into K clusters, and K imaging points are determined using these clusters. Flight data are then used to set the points to which the UAV will fly. In our experiment, flight records were gathered by a UAV monitoring a stadium, and the imaging and non-imaging points were set using the proposed method and compared with the points determined by a traditional K-Means algorithm. With the proposed method, the cumulative distance between the cluster centroids and their members was reduced by 87.57% compared with the traditional K-Means algorithm. With the traditional K-Means algorithm, imaging points were not created at five of the points desired by the pilot, and two incorrect points were obtained. With the proposed method, two incorrect imaging points were also obtained, and because of these two incorrect imaging points, two of the points desired by the pilot were not generated. Full article
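For context, the baseline the proposed method is compared against is plain K-Means over the recorded flight points; a minimal sketch (invented coordinates, simple spread-out initialization) is given below.

    def kmeans(points, k, iters=50):
        step = max(1, len(points) // k)
        centroids = points[::step][:k]                    # simple spread-out initialization
        for _ in range(iters):
            clusters = [[] for _ in range(k)]
            for p in points:
                i = min(range(k), key=lambda c: (p[0] - centroids[c][0]) ** 2
                                              + (p[1] - centroids[c][1]) ** 2)
                clusters[i].append(p)
            centroids = [(sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
                         if cl else centroids[j] for j, cl in enumerate(clusters)]
        return centroids

    flight_log = [(0.1, 0.2), (0.2, 0.1), (5.0, 5.1), (5.2, 4.9), (9.8, 0.2), (10.1, -0.1)]
    print(kmeans(flight_log, k=3))    # three centroids, one per area the UAV loitered over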

Article
Continuous Learning Graphical Knowledge Unit for Cluster Identification in High Density Data Sets
by K.K.L.B. Adikaram, Mohamed A. Hussein, Mathias Effenberger and Thomas Becker
Symmetry 2016, 8(12), 152; https://doi.org/10.3390/sym8120152 - 14 Dec 2016
Cited by 2 | Viewed by 4910
Abstract
Big data are visually cluttered by overlapping data points. Rather than removing, reducing or reformulating overlap, we propose a simple, effective and powerful technique for density cluster generation and visualization, where point marker (the graphical symbol of a data point) overlap is exploited in an additive fashion in order to obtain bitmap data summaries in which clusters can be identified visually, aided by automatically generated contour lines. In the proposed method, the plotting area is a bitmap and the marker is a shape of more than one pixel. As the markers overlap, the red, green and blue (RGB) colour values of pixels in the shared region are added. Thus, a pixel of a 24-bit RGB bitmap can code up to 2^24 (over 16 million) overlaps. A higher number of overlaps at the same location changes the colour of that area, which can be identified by the naked eye. A bitmap is a matrix of colour values that can be represented as integers. The proposed method updates this matrix while adding new points. Thus, this matrix can be considered an up-to-date knowledge unit of the processed data. Results show the cluster generation, cluster identification, missing and out-of-range data visualization, and outlier detection capability of the newly proposed method. Full article
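The additive-overlap idea can be shown in a few lines; in the sketch below (invented grid size and synthetic points, with counts kept in a plain integer matrix rather than RGB channels) each point stamps a small square marker and the overlaps accumulate, so the densest cell stands out.

    import random

    W, H, MARKER = 80, 60, 2                    # bitmap size and square marker half-width
    bitmap = [[0] * W for _ in range(H)]        # one integer count per pixel; a 24-bit RGB
                                                # pixel could store counts up to 2**24
    random.seed(1)
    points = [(random.gauss(40, 4), random.gauss(30, 3)) for _ in range(2000)]

    for px, py in points:
        cx, cy = int(px), int(py)
        for y in range(max(0, cy - MARKER), min(H, cy + MARKER + 1)):
            for x in range(max(0, cx - MARKER), min(W, cx + MARKER + 1)):
                bitmap[y][x] += 1               # overlap is accumulated, not discarded

    peak = max(max(row) for row in bitmap)
    print("densest cell overlap count:", peak)  # the cluster core shows up as the maximum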

Article
Evaluation and Classification of Overseas Talents in China Based on the BWM for Intuitionistic Relations
by Qing Yang, Zaisheng Zhang, Xinshang You and Tong Chen
Symmetry 2016, 8(11), 137; https://doi.org/10.3390/sym8110137 - 23 Nov 2016
Cited by 27 | Viewed by 5147
Abstract
Efficient utilization of human resources is an important force for the sustainable development of society and the economy. Against the backdrop of the development of economic globalization, the Chinese Government is presently implementing the strategy of “Strengthening the Nation with Talent” to assist the exploitation and management of human resources. Overseas talents have recently become an important resource. How to scientifically evaluate and classify overseas talents has become an important research topic, and it is necessary to seek a systematic decision aid. This paper introduces a novel methodology to evaluate and classify overseas talents in China under the intuitionistic relations environment. Firstly, we determine the weighted values of decision makers and criteria through defining geometry consistency. Secondly, we construct a non-linear Best-Worst-Method (BWM) model with intuitionistic preference relations. A highlight of this BWM model for intuitionistic relations is taking both positive and negative aspects into consideration, which is different from the original BWM. Finally, the proposed methodology is applied to an illustrative example of overseas talent evaluation, indicating the simultaneous efficiency and practicability of the method. Full article
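For readers unfamiliar with the BWM, the original crisp weighting model that the paper extends is reproduced below in its standard textbook form (this is not the paper's intuitionistic variant); a_Bj and a_jW denote the best-to-others and others-to-worst comparison scores.

    \[
      \min_{w,\;\xi}\ \xi
      \quad\text{s.t.}\quad
      \left|\frac{w_B}{w_j}-a_{Bj}\right|\le\xi,\qquad
      \left|\frac{w_j}{w_W}-a_{jW}\right|\le\xi,\qquad
      \sum_j w_j = 1,\qquad w_j\ge 0 .
    \]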

Article
The Combination of a Fuzzy Analytical Hierarchy Process and the Taguchi Method to Evaluate the Malaysian Users’ Willingness to Pay for Public Transportation
by Hashem Salarzadeh Jenatabadi, Peyman Babashamsi and Nur Izzi Md Yusoff
Symmetry 2016, 8(9), 90; https://doi.org/10.3390/sym8090090 - 2 Sep 2016
Cited by 11 | Viewed by 6416
Abstract
This study is an attempt to overcome the lack of reliable estimates of Malaysian users' willingness to pay for public transportation (particularly buses) through a combined analysis of a fuzzy analytical hierarchy process (F-AHP) and the Taguchi method. This is a ground-breaking attempt to evaluate bus users' satisfaction factors with the F-AHP and to find the pattern of the users' willingness-to-pay (WTP) characteristic for reduced travel time with the Taguchi application. The data on public transportation users' intentions were collected in Kelang Valley, Kuala Lumpur, Malaysia. The results convinced us that complex data require flexible approaches that can adjust their combination methods to the properties of the analyzed datasets. This study initiates the use of a system-combination strategy to better understand the factors that motivate public transportation users to be willing to pay the public transportation fare. Full article
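One common way to turn F-AHP judgements into criterion weights is Buckley's fuzzy geometric-mean method; the sketch below uses it purely as an illustration (the judgements, criteria and defuzzification choice are invented and may differ from the paper's exact procedure).

    def fuzzy_ahp_weights(matrix):
        # Pairwise judgements are triangular fuzzy numbers (l, m, u).
        n = len(matrix)
        geo = []
        for row in matrix:                      # fuzzy geometric mean of each row
            l = m = u = 1.0
            for (a, b, c) in row:
                l *= a; m *= b; u *= c
            geo.append((l ** (1 / n), m ** (1 / n), u ** (1 / n)))
        sl, sm, su = (sum(g[i] for g in geo) for i in range(3))
        # fuzzy weight = row mean / total, defuzzified by the centroid, then normalized
        crisp = [(g[0] / su + g[1] / sm + g[2] / sl) / 3 for g in geo]
        total = sum(crisp)
        return [w / total for w in crisp]

    one = (1, 1, 1)
    comparisons = [                             # comfort vs. fare vs. travel time (toy values)
        [one,              (2, 3, 4),  (1, 2, 3)],
        [(1/4, 1/3, 1/2),  one,        (1/3, 1/2, 1)],
        [(1/3, 1/2, 1),    (1, 2, 3),  one],
    ]
    print(fuzzy_ahp_weights(comparisons))       # comfort receives the largest weight here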

Article
A Logistic Based Mathematical Model to Optimize Duplicate Elimination Ratio in Content Defined Chunking Based Big Data Storage System
by Longxiang Wang, Xiaoshe Dong, Xingjun Zhang, Fuliang Guo, Yinfeng Wang and Weifeng Gong
Symmetry 2016, 8(7), 69; https://doi.org/10.3390/sym8070069 - 21 Jul 2016
Cited by 8 | Viewed by 5100
Abstract
Deduplication is an efficient data reduction technique used to mitigate the problem of huge data volumes in big data storage systems. Content defined chunking (CDC) is the most widely used algorithm in deduplication systems. The expected chunk size is an important parameter of CDC, and it influences the duplicate elimination ratio (DER) significantly. We collected two realistic datasets to perform an experiment. The experimental results showed that the current practice of empirically setting the expected chunk size to 4 KB or 8 KB does not optimize the DER. Therefore, we present a logistic-based mathematical model that reveals the hidden relationship between the expected chunk size and the DER. This model provides a theoretical basis for optimizing the DER by setting the expected chunk size appropriately. We used the collected datasets to verify this model. The experimental results showed that the R² values, which describe the goodness of fit, are above 0.9, validating the correctness of this mathematical model. Based on the DER model, we discuss how to bring the DER close to the optimum by setting the expected chunk size appropriately. Full article
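To make the role of the expected chunk size concrete, the sketch below implements a bare-bones content-defined chunker (not the paper's implementation; the window length, hash and mask are illustrative): a boundary is declared whenever the low MASK_BITS bits of a window hash are zero, so the expected chunk size is roughly 2**MASK_BITS bytes.

    import hashlib, os

    MASK_BITS = 13                        # expected chunk size ~ 8 KiB (2**13 bytes)
    MASK = (1 << MASK_BITS) - 1
    WINDOW = 48                           # sliding window length in bytes

    def cdc_chunks(data):
        chunks, start = [], 0
        for i in range(WINDOW, len(data)):
            h = int.from_bytes(hashlib.blake2b(data[i - WINDOW:i], digest_size=8).digest(), "big")
            if h & MASK == 0:             # content-defined boundary
                chunks.append(data[start:i])
                start = i
        chunks.append(data[start:])
        return chunks

    data = os.urandom(1 << 18)            # 256 KiB of random data
    sizes = [len(c) for c in cdc_chunks(data)]
    print(len(sizes), sum(sizes) / len(sizes))   # the average size should land near 8 KiB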

Article
Optional Frame Selection Algorithm for Adaptive Symmetric Service of Augmented Reality Big Data on Smart Devices
by HwiRim Byun, Jong Hyuk Park and Young-Sik Jeong
Symmetry 2016, 8(5), 37; https://doi.org/10.3390/sym8050037 - 23 May 2016
Cited by 2 | Viewed by 5352
Abstract
Following recent technological advances in diverse mobile devices, including smartphones, tablets and smartwatches, in-depth studies aimed at improving the quality of augmented reality (AR) are currently ongoing. Smartphones feature the essential elements of AR implementation, such as a camera, a processor and a display, in a single device. As a result, additional hardware expansion for AR implementation has become unnecessary, popularizing AR technology at the user level. In the early stages, low-level AR technology was used mainly in limited fields, including simple road guides and marker-based recognition. Due to advances in AR technology, the range of usage has expanded as diverse technologies and purposes are combined. Users’ expectations of AR technology have also increased with this trend, and a high quality of service (QoS), with high-resolution, high-quality images, is now expected. However, there are limitations in terms of processing speed and graphics processing on smart devices, which, due to their small size, have inferior performance compared to the desktop environment when processing data for the implementation of high-resolution, high-quality images. This paper proposes an optional frame-selection algorithm (OFSA), which eliminates the unnecessary work involved with redundant frames during rendering for adaptive symmetric service of augmented reality big data on smart devices. Moreover, the memory read-write delay of the internally operating OFSA is minimized by adding an adaptive operation function. It is possible to provide adaptive common AR images at an improved frame rate on heterogeneous smart devices with different levels of performance. Full article
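The core idea of dropping redundant frames can be sketched with a simple difference threshold (this illustrates the general principle only, not the authors' OFSA; frames are flat grey-level lists and the threshold is arbitrary).

    def select_frames(frames, threshold=4.0):
        kept, last = [], None
        for idx, frame in enumerate(frames):
            if last is None:
                kept.append(idx)
                last = frame
                continue
            diff = sum(abs(a - b) for a, b in zip(frame, last)) / len(frame)
            if diff > threshold:          # the frame changed enough to be worth rendering
                kept.append(idx)
                last = frame
        return kept

    static = [10] * 64                    # a frame where nothing moved
    moved  = [10] * 32 + [200] * 32       # a frame where half the pixels changed
    frames = [static, static, static, moved, moved, static]
    print(select_frames(frames))          # [0, 3, 5]: the redundant repeats are skipped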

Review


Review
Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions
by Vacius Jusas, Darius Birvinskas and Elvar Gahramanov
Symmetry 2017, 9(4), 49; https://doi.org/10.3390/sym9040049 - 28 Mar 2017
Cited by 26 | Viewed by 8542
Abstract
Digital triage is the first investigative step of a forensic examination. It comes in two forms: live triage and post-mortem triage. The primary goal of live triage is the rapid extraction of intelligence from potential sources, and it raises legitimate concerns. Post-mortem triage is conducted in the laboratory, and its main goal is to rank the seized devices by the likelihood that they contain relevant evidence. Digital triage has the potential to quickly identify items that are likely to contain evidential data; therefore, it is a solution to the problem of case backlogs. However, existing methods and tools for digital triage have limitations, especially in the forensic context. Nevertheless, we have no better solution for the time being. In this paper, we critically review published research works and the proposed solutions for digital triage. The review is divided into four sections: live triage, post-mortem triage, mobile device triage, and triage tools. We conclude that many challenges await the developers of digital triage methods and tools if they are to keep pace with the development of new technologies. Full article