Dear Readers,
This Special Issue contains a series of excellent research works on telecommunications and signal processing, selected from the 41st International Conference on Telecommunications and Signal Processing (TSP 2018), held on 4–6 July 2018 in Athens, Greece. The conference was organized by seventeen universities from the Czech Republic, Hungary, Turkey, Taiwan, Japan, the Slovak Republic, Spain, Bulgaria, France, Slovenia, Croatia, and Poland, in cooperation with IEEE Region 8 (Europe, Middle East and Africa), the IEEE Greece Section, the IEEE Czechoslovakia Section, and the IEEE Czechoslovakia Section SP/CAS/COM Joint Chapter. It serves as a premier annual international forum for academics, researchers, and developers to promote the exchange of the latest advances in telecommunication technology and signal processing. The aim of the conference is to bring together both novice and experienced scientists, developers, and specialists to meet new colleagues, collect new ideas, and establish new cooperation between research groups from universities, research centers, and the private sector worldwide. It is our great pleasure to introduce a collection of 10 selected high-quality research papers; let us briefly present the works published in this Special Issue.
In the first paper of this Special Issue [1], written by G. Baldini et al., the authors address the problem of authentication and identification of wireless devices using physical properties derived from their radio frequency (RF) emissions. The technique is based on the concept that small differences in the physical implementation of wireless devices are carried over to their RF emissions and are significant enough to distinguish the devices with high accuracy. It can be used either to authenticate the claimed identity of a wireless device or to identify one wireless device among others. In the literature, this technique has been implemented by feature extraction in the 1D time domain, the 1D frequency domain, or the 2D time-frequency domain. This paper describes a novel application of the synchrosqueezing transform to the problem of physical-layer authentication. The idea is to exploit the capability of the synchrosqueezing transform to enhance the identification and authentication accuracy of RF devices from their actual wireless emissions. An experimental dataset of 12 cellular communication devices is used to validate the approach and to compare the different techniques. The results show that the accuracy obtained using the 2D synchrosqueezing transform (SST) is superior to that of conventional techniques from the literature based on the 1D time domain, the 1D frequency domain, or the 2D time-frequency domain.
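To give readers a flavor of this technique, the following minimal Python sketch extracts a coarse time-frequency fingerprint from the magnitude of a synchrosqueezed continuous wavelet transform and feeds it to an off-the-shelf classifier. It is an illustration only, not the authors' implementation: it assumes the third-party ssqueezepy package, and the bursts, labels, and binning scheme are hypothetical placeholders.

```python
# Illustrative SST-based RF fingerprinting sketch (not the authors' code).
import numpy as np
from ssqueezepy import ssq_cwt                # synchrosqueezed CWT
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def sst_features(burst, n_bins=32):
    """Compress the 2D SST magnitude into a fixed-length feature vector."""
    Tx, *_ = ssq_cwt(burst)                   # Tx: (n_freqs, n_samples), complex
    mag = np.abs(Tx)
    # Coarse time-frequency energy grid as the device "fingerprint".
    f_idx = np.array_split(np.arange(mag.shape[0]), n_bins)
    t_idx = np.array_split(np.arange(mag.shape[1]), n_bins)
    return np.array([[mag[np.ix_(fi, ti)].mean() for ti in t_idx]
                     for fi in f_idx]).ravel()

# Hypothetical data: random stand-ins for captured RF bursts of 12 devices.
rng = np.random.default_rng(0)
bursts = [rng.standard_normal(4096) for _ in range(24)]
labels = np.repeat(np.arange(12), 2)          # 12 devices, 2 bursts each
X = np.vstack([sst_features(b) for b in bursts])
print(cross_val_score(SVC(), X, labels, cv=2).mean())
```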
In the next paper [2], E. Erdogan et al. examine the interference alignment (IA) performance of a multiple-input multiple-output (MIMO) multi-hop cognitive radio (CR) network in the presence of multiple primary users. In the proposed architecture, it is assumed that linear IA is adopted at the secondary network to alleviate the interference between the primary and secondary networks. By doing so, the secondary source can communicate with the secondary destination via multiple relays without causing any interference to the primary network. Even though linear IA can suppress the interference in CR networks considerably, interference leakages may occur due to fast fading channels. To this end, the authors focus on the performance of the secondary network in two different cases: (i) the interference is perfectly aligned, and (ii) interference leakages are present. For both cases, closed-form expressions for the outage probability and ergodic capacity are derived. The results, which are validated by Monte Carlo simulations, show that interference leakages can considerably deteriorate both the system performance and the diversity gains.
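The validation strategy used in the paper, comparing closed-form expressions against Monte Carlo runs, can be illustrated at toy scale. The sketch below checks the textbook outage probability of a single Rayleigh-faded link, 1 − exp(−(2^R − 1)/γ̄), against an empirical estimate; it is a generic illustration of the methodology, not the paper's MIMO multi-hop system model.

```python
# Monte Carlo validation of a closed-form outage probability
# (single Rayleigh-faded hop, not the full MIMO multi-hop CR model).
import numpy as np

rng = np.random.default_rng(0)
R = 1.0                                         # target rate (bits/s/Hz)
for snr_db in range(0, 21, 5):
    g_bar = 10 ** (snr_db / 10)                 # average SNR
    gamma = g_bar * rng.exponential(size=1_000_000)  # Rayleigh -> exponential SNR
    p_mc = np.mean(np.log2(1 + gamma) < R)      # empirical outage
    p_cf = 1 - np.exp(-(2 ** R - 1) / g_bar)    # closed form
    print(f"{snr_db:2d} dB: MC={p_mc:.4f}  closed-form={p_cf:.4f}")
```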
In the paper [3], T. Horvath et al. present a numerical implementation of the activation process for gigabit, 10-gigabit next-generation, and Ethernet passive optical networks (PONs). The specifications are completely different because gigabit PON (GPON), next-generation PON (XG-PON), and next-generation PON stage 2 (NG-PON2) were developed by the International Telecommunication Union, whereas Ethernet PON was developed by the Institute of Electrical and Electronics Engineers. The speed of the activation process matters most in a blackout scenario, because end optical units run a timer after whose expiration their transmission parameters are discarded; proper implementation of the activation process is therefore crucial for eliminating unnecessary delay. An optical line termination chassis hosts several GPON (or other standard) cards, each card has up to eight or 16 GPON ports, and one GPON port can serve up to 64/128 optical network units (ONUs). The results indicate a shorter activation process (due to a shorter frame duration) in Ethernet-based PON, but its maximum split ratio is only 1:32, instead of up to 1:64/128 for gigabit PON and newer standards. The proposed optimization shortens the GPON activation process using the current physical layer operations, administration and maintenance (PLOAM) messages, with no changes in the transmission convergence layer; the activation time was reduced from 215 ms to 145 ms for 64 ONUs.
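As a rough, purely arithmetical illustration of why per-ONU overheads dominate the activation time, consider the toy model below. All timing constants are hypothetical placeholders chosen only to reproduce the quoted 215 ms and 145 ms totals; they are not taken from the ITU-T recommendations or from the paper.

```python
# Back-of-the-envelope model of GPON activation time (illustrative only;
# the real process has more states and message exchanges than shown here).
def activation_time_ms(n_onus, serial_window_ms, ranging_ms, overhead_ms):
    """Total time to activate n_onus that answer serial-number grants."""
    return overhead_ms + n_onus * (serial_window_ms + ranging_ms)

# Placeholder constants reverse-fitted to the quoted totals, NOT standard values.
baseline  = activation_time_ms(64, serial_window_ms=2.0, ranging_ms=1.0, overhead_ms=23.0)
optimized = activation_time_ms(64, serial_window_ms=1.0, ranging_ms=0.8, overhead_ms=29.8)
print(baseline, optimized)   # 215.0 vs 145.0 ms with these placeholder values
```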
In the paper [4] by D. Kubanek et al., fractional-order transfer functions approximating the passband and stopband ripple characteristics of a second-order elliptic lowpass filter are designed and validated. The necessary coefficients of these transfer functions are determined through a least-squares fitting process. The fittings are applied to symmetrical and asymmetrical frequency ranges to evaluate how the selected approximated frequency band impacts the determined coefficients and the transfer-function magnitude characteristics. MATLAB simulations of (1 + α)-order lowpass magnitude responses are given as examples, with fractional steps from α = 0.1 to α = 0.9, and compared to the second-order elliptic response. Further, MATLAB simulations of the (1 + α) = 1.25 and 1.75 order responses using all sets of coefficients are given as examples to highlight their differences. Finally, the fractional-order filter responses were validated using both SPICE simulations and experimental results, using two operational-amplifier topologies realized with approximated fractional-order capacitors for (1 + α) = 1.2 and 1.8 order filters.
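The least-squares fitting step can be sketched numerically. The snippet below fits a (1 + α)-order lowpass model to a second-order elliptic magnitude response; the particular model form H(s) = k/(s^(1+α) + b·s^α + c) is a common choice from the fractional-order filter literature and is assumed here, not necessarily the exact form used in the paper.

```python
# Least-squares fit of a (1+alpha)-order lowpass model to a 2nd-order
# elliptic magnitude response (illustrative model form, normalized frequency).
import numpy as np
from scipy import signal
from scipy.optimize import least_squares

alpha = 0.5                                        # fractional step
w = np.logspace(-2, 2, 400)                        # rad/s, normalized
b2, a2 = signal.ellip(2, 1, 40, 1, analog=True)    # 1 dB ripple, 40 dB stopband
_, h_target = signal.freqs(b2, a2, w)

def model_mag(p, w):
    k, b, c = p
    s = 1j * w
    return np.abs(k / (s ** (1 + alpha) + b * s ** alpha + c))

res = least_squares(lambda p: model_mag(p, w) - np.abs(h_target),
                    x0=[1.0, 1.0, 1.0])
print("fitted k, b, c:", res.x)
```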
The next paper [5] by J. Mucha et al. deals with Parkinson's disease (PD) dysgraphia, which affects the majority of PD patients and results from handwriting abnormalities mainly caused by motor dysfunctions. Several effective approaches to quantitative PD dysgraphia analysis, such as online handwriting processing, have been utilized. In this study, the authors aim to explore in depth the impact of advanced online handwriting parameterization based on fractional-order derivatives (FD) on PD dysgraphia diagnosis and monitoring. For this purpose, data from 33 PD patients and 36 healthy controls from the PaHaW database (PD handwriting database) are used. Partial correlation analysis (Spearman's and Pearson's) was performed to investigate the relationship between the newly designed features and the patients' clinical data. Next, the discriminative power of the FD features was evaluated by a binary classification analysis. Finally, regression models were trained to explore the new features' ability to assess the progress and severity of PD. The results were compared to a baseline based on conventional online handwriting features. In comparison with the conventional parameters, the FD handwriting features correlated more significantly with the patients' clinical characteristics and provided a more accurate assessment of PD severity (error around 12%). On the other hand, the highest classification accuracy (ACC = 97.14%) was obtained by the conventional parameters. The results of this study suggest that utilizing FD in combination with properly selected tasks (continuous and/or repetitive, such as the Archimedean spiral) could improve computerized PD severity assessment.
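The core parameterization can be illustrated with the Grünwald-Letnikov definition of the fractional derivative, a standard discretization in this line of work. The sketch below applies it to a synthetic pen-position signal; the order, sampling, and signal are illustrative assumptions, not the paper's configuration.

```python
# Grünwald-Letnikov fractional-order derivative of a sampled signal
# (simplified illustration of FD-based handwriting parameterization).
import numpy as np
from scipy.special import binom

def gl_fractional_derivative(x, alpha, h):
    """Approximate D^alpha x for a uniformly sampled signal with step h."""
    n = len(x)
    k = np.arange(n)
    w = (-1.0) ** k * binom(alpha, k)          # GL binomial weights
    d = np.array([np.dot(w[:i + 1], x[i::-1]) for i in range(n)])
    return d / h ** alpha

t = np.linspace(0, 1, 200)
x_pos = np.sin(2 * np.pi * 2 * t)              # stand-in for pen x-position
v_frac = gl_fractional_derivative(x_pos, alpha=0.5, h=t[1] - t[0])
print(v_frac[:5])
```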
In the paper [6], Z. Galaz et al. focus on hypokinetic dysarthria, which is associated with PD and affects several speech dimensions, including phonation. Although the scientific community has dealt with quantitative analysis of phonation in PD patients, comprehensive research revealing probable relations between phonatory features and the progress of PD is missing. Therefore, the aim of this study is to explore these relations and model them mathematically in order to estimate the progress of PD during a two-year follow-up. The authors enrolled 51 PD patients, who were assessed by three commonly used clinical scales; in addition, eight possible phonatory disorders in five vowels were quantified. To identify the relationship between baseline phonatory features and changes in clinical scores, a partial correlation analysis was performed. Finally, XGBoost models were trained to predict the changes in clinical scores during the two-year follow-up. Over the two years, the patients' voices became more aperiodic, with increased microperturbations of frequency and amplitude. The XGBoost models were able to predict changes in clinical scores with an error in the range of 11–26%. Although some significant correlations between changes in phonatory features and clinical scores were identified, they are less interpretable. This study suggests that it is possible to predict the progress of PD based on the acoustic analysis of phonation; moreover, it recommends utilizing the sustained vowel /i/ instead of /a/.
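Two of the microperturbation measures mentioned above, jitter (frequency) and shimmer (amplitude), have standard local definitions that are easy to state in code. The toy pipeline below computes them from synthetic period and amplitude tracks and fits an XGBoost regressor; the data, hyperparameters, and feature set are placeholders, not those of the study.

```python
# Toy phonation-feature pipeline: local jitter/shimmer + XGBoost regression.
import numpy as np
from xgboost import XGBRegressor

def local_jitter(periods):
    """Mean absolute difference of consecutive periods / mean period."""
    return np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def local_shimmer(amps):
    """Mean absolute difference of consecutive amplitudes / mean amplitude."""
    return np.mean(np.abs(np.diff(amps))) / np.mean(amps)

rng = np.random.default_rng(1)
# Hypothetical per-patient period/amplitude tracks and clinical score changes.
X = np.array([[local_jitter(1e-2 + 1e-4 * rng.standard_normal(100)),
               local_shimmer(1.0 + 0.05 * rng.standard_normal(100))]
              for _ in range(51)])
y = rng.standard_normal(51)                  # stand-in for clinical score change
model = XGBRegressor(n_estimators=50, max_depth=2).fit(X, y)
print(model.predict(X[:3]))
```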
In the paper [7], D. Luengo et al. describe an efficient method to construct an overcomplete and multi-scale dictionary for sparse electrocardiogram (ECG) representation using waveforms recorded from real-world patients. The ECG was the first biomedical signal to which digital signal processing techniques were extensively applied. By its very nature, the ECG is typically a sparse signal, composed of regular activations (QRS complexes and other waveforms, such as the P and T waves) and periods of inactivity (corresponding to isoelectric intervals, such as the PQ or ST segments), plus noise and interference. Unlike most existing methods (which require multiple alternating iterations of the dictionary learning and sparse representation stages), the proposed approach learns the dictionary first and then applies a fast sparse inference algorithm to model the signal using the constructed dictionary. As a result, the introduced method is computationally much more efficient than other existing algorithms, and thus amenable to dealing with long recordings from multiple patients. Regarding the dictionary construction, all the QRS complexes in the training database were located first; then the authors computed a single average waveform per patient; finally, the most representative waveforms were selected (using a correlation-based approach) as the basic atoms, which were resampled to construct the multi-scale dictionary. Simulations on real-world records from Physionet's PTB database show the good performance of the proposed approach.
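The learn-then-infer structure of the method can be mimicked at toy scale, as in the sketch below: average a beat per record, resample it to several scales to form the dictionary, and run a fast sparse solver. Here sklearn's orthogonal matching pursuit stands in for the paper's inference algorithm, and the synthetic ECG and crude peak detector are placeholders.

```python
# Toy "learn dictionary first, then sparse inference" pipeline for ECG.
import numpy as np
from scipy.signal import find_peaks, resample
from sklearn.linear_model import OrthogonalMatchingPursuit

def average_beat(ecg, fs, half_win):
    """Average waveform around detected R-peaks (toy QRS detector)."""
    peaks, _ = find_peaks(ecg, height=np.percentile(ecg, 99), distance=int(0.4 * fs))
    segs = [ecg[p - half_win:p + half_win] for p in peaks
            if p >= half_win and p + half_win <= len(ecg)]
    return np.mean(segs, axis=0)

fs, half_win = 500, 60
rng = np.random.default_rng(2)
ecg = np.tile(np.exp(-0.5 * ((np.arange(fs) - 250) / 10) ** 2), 8)  # synthetic beats
ecg += 0.01 * rng.standard_normal(ecg.size)

atom = average_beat(ecg, fs, half_win)
# Multi-scale dictionary: the atom resampled to several widths, zero-padded.
D = []
for scale in (0.5, 0.75, 1.0, 1.5):
    a = resample(atom, int(2 * half_win * scale))
    col = np.zeros(2 * half_win)
    m = min(len(a), 2 * half_win)
    col[:m] = a[:m]
    D.append(col / np.linalg.norm(col))
D = np.array(D).T                               # columns are atoms

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=2).fit(D, ecg[:2 * half_win])
print("sparse coefficients:", omp.coef_)
```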
In the work [8], written by M. Kolařík et al., a fully automatic method for high-resolution 3D volumetric segmentation of medical image data using a modern supervised deep learning approach is presented. The authors introduce the 3D Dense-U-Net neural network architecture, which implements densely connected layers. It has been optimized for graphics processing unit (GPU) accelerated high-resolution image processing on currently available hardware (Nvidia GTX 1080 Ti). The method has been evaluated on an MRI brain 3D volumetric dataset and a computed tomography (CT) thoracic scan dataset for spine segmentation. In contrast with many previous methods, the approach is capable of precise segmentation of the input image data in the original resolution, without any pre-processing of the input image. It processes image data in 3D and achieved an accuracy of 99.72% on the MRI brain dataset, outperforming the results achieved by a human expert; on the lumbar and thoracic vertebrae CT dataset, it achieved an accuracy of 99.80%. The architecture proposed in this paper can also be easily applied to any task that already uses a U-Net network as a segmentation algorithm, to enhance its results. The complete source code was released online under an open-source license.
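The structural idea, a U-Net whose convolutional blocks reuse all earlier feature maps through dense concatenation, can be sketched compactly in PyTorch. The miniature network below is only an illustration of that pattern (one encoder level, toy channel counts), far smaller than the architecture evaluated in the paper.

```python
# Miniature 3D Dense-U-Net-style network: dense blocks inside a one-level U-Net.
import torch
import torch.nn as nn

class DenseBlock3D(nn.Module):
    def __init__(self, in_ch, growth, n_layers=2):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv3d(ch, growth, 3, padding=1), nn.ReLU(inplace=True)))
            ch += growth                      # dense concatenation grows channels
        self.out_ch = ch

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

class TinyDenseUNet3D(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc = DenseBlock3D(in_ch, growth=8)
        self.down = nn.MaxPool3d(2)
        self.mid = DenseBlock3D(self.enc.out_ch, growth=8)
        self.up = nn.Upsample(scale_factor=2, mode='trilinear', align_corners=False)
        self.dec = DenseBlock3D(self.mid.out_ch + self.enc.out_ch, growth=8)
        self.head = nn.Conv3d(self.dec.out_ch, n_classes, 1)

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        d = self.dec(torch.cat([self.up(m), e], dim=1))  # U-Net skip connection
        return self.head(d)

net = TinyDenseUNet3D()
print(net(torch.randn(1, 1, 16, 16, 16)).shape)   # -> (1, 2, 16, 16, 16)
```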
Technological evolution in the remote sensing domain has allowed the acquisition of large archives of satellite image time series (SITS) for Earth observation. In this context, the need to interpret Earth observation image time series is continuously increasing, and the extraction of information from these archives has become difficult without adequate tools. In the paper [9], A. Radoi and C. Burileanu propose a fast and effective two-step technique for the retrieval of spatio-temporal patterns that are similar to a given query. The method is based on a query-by-example procedure whose inputs are evolution patterns provided by the end user and whose outputs are other, similar spatio-temporal patterns. The comparison between the temporal sequences and the queries is performed using the Dynamic Time Warping alignment method, whereas the separation between similar and non-similar patterns is determined via Expectation-Maximization. The experiments, performed on both short and long SITS, demonstrate the effectiveness of the proposed SITS retrieval method in different application scenarios. For the short SITS, two application scenarios were considered, namely, the construction of two accumulation lakes and flooding caused by heavy rain. For the long SITS, a database of 88 Landsat images was used, and the authors showed that the proposed method is able to retrieve similar patterns of land cover and land use.
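Both steps of the retrieval pipeline have compact textbook forms, illustrated in the sketch below: a dynamic-programming DTW distance between each pixel's temporal profile and the query, followed by an EM-fitted two-component Gaussian mixture that separates low-distance (similar) from high-distance (non-similar) patterns. The data and component labeling are illustrative assumptions, not the paper's experimental setup.

```python
# Query-by-example retrieval sketch: DTW distances + two-component GMM (EM).
import numpy as np
from sklearn.mixture import GaussianMixture

def dtw(a, b):
    """Classic dynamic-programming DTW distance between 1D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(3)
query = np.array([0.2, 0.4, 0.8, 0.9, 0.5])         # user-drawn evolution pattern
series = rng.random((200, 5))                       # 200 pixel time series
d = np.array([dtw(s, query) for s in series]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(d)
similar = gmm.means_.argmin()                       # low-distance component
hits = np.where(gmm.predict(d) == similar)[0]
print(f"{len(hits)} patterns flagged as similar to the query")
```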
In the last paper [10], X. Liu et al. discuss the time-interleaved analog-to-digital converter (TIADC), which is a good option for high-sampling-rate applications. However, the inevitable sample-and-hold (S/H) mismatches between channels introduce undesirable errors that degrade the TIADC's dynamic performance. Several calibration methods have been proposed for S/H mismatches, but they either need training signals or generalize poorly across different input signals and numbers of channels. This paper proposes a statistics-based calibration algorithm for S/H mismatches in M-channel TIADCs. Initially, the mismatch coefficients are identified by eliminating the statistical differences between channels. Subsequently, the mismatch-induced error is approximated by employing variable multipliers and differentiators in several Richardson iterations. Finally, the error is subtracted from the original output signal to approximate the expected signal. Simulation results illustrate the effectiveness of the proposed method, the selection of its key parameters, and its advantages over other methods.
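The "eliminate statistical differences between channels" idea can be shown on the simplest mismatch type. The sketch below blindly estimates per-channel gain errors in a simulated 4-channel TIADC by equalizing channel RMS power, without any training signal; the paper targets the harder S/H (bandwidth) mismatch with Richardson iterations, so this is only an illustration of the statistics-matching principle.

```python
# Blind, statistics-based gain-mismatch calibration for a toy M-channel TIADC.
import numpy as np

M, N = 4, 4096
rng = np.random.default_rng(4)
t = np.arange(N)
x = np.sin(2 * np.pi * 0.0123 * t) + 0.7 * np.sin(2 * np.pi * 0.171 * t)

gains = 1 + 0.05 * rng.standard_normal(M)       # unknown per-channel gains
y = x.copy()
for m in range(M):
    y[m::M] *= gains[m]                         # channel m takes every M-th sample

# Calibration: match every channel's RMS to channel 0's (no training signal).
rms = np.array([np.sqrt(np.mean(y[m::M] ** 2)) for m in range(M)])
y_cal = y.copy()
for m in range(M):
    y_cal[m::M] *= rms[0] / rms[m]

print("est. relative gains:", rms / rms[0])     # approximately matches truth
print("true relative gains:", gains / gains[0])
```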
In summary, this Special Issue contains a series of excellent research works on telecommunications and signal processing. We highly recommend this collection of 10 papers and believe it will interest, inspire, and motivate readers in their further research.