1. Introduction
Over the last ten years, the share of renewables in the energy mix has grown by 10%, mainly because a growing number of countries encourage their use in order to align with European environmental directives [1]. However, owing to their penetration (in terms of capacity and distribution) and to the random and dispersed nature of this growth, the electrical network has been affected to such an extent that assessing the supply has become a challenge that often requires tailor-made measurement solutions. Indeed, the intermittence (together with the deregulated connections) and the non-linearities (introduced by power electronic devices) have provoked new and hybrid disturbances that damage the more sensitive electric loads, such as IoT sensors, adjustable-speed drives and computers [2,3,4,5,6,7].
In this context, the confluence of statistical signal processing, IoT instrumentation and measurement techniques pursues the following goals:
Characterisation of the power system. The new trends in instrumentation campaigns lie in the strategic hypothesis that on-demand monitoring pursues better quality assurance and energy efficiency, and consequently reduces the carbon footprint [3].
Time and space scalability, in order to obtain flexibility, capable of paying attention to a specific zone or time interval [8].
Collecting energy usage data. Handling massive information (big data), especially from the burgeoning digitisation of the modern industry sector (e.g., the so-called Industry 4.0 and the human-centred Industry 5.0).
Eliminating redundant information, in order to save memory [9].
Detection of new PQ events. Characterisation of hybrid disturbances has fostered the development of new estimators beyond the traditional Gaussian-based (second-order).
To date, such guidelines have not been considered, because PQ has been associated with a high degree of expertise. They are explained in more detail hereinafter. Firstly, the design of emerging measurement systems based on new techniques aims to elaborate benchmarks with which to monitor daily supplies. Thus, a company can compare its data with others' and may improve its performance, settling new initiatives such as green and sustainable actions.
Secondly, the goal of time and space scalability stands out. Indeed, network analysis implies measurements at several points, which raises big-data handling issues. It is therefore necessary to eliminate repeated information by introducing specific measurements adapted to the customer's requirements, i.e., a more realistic network status [10,11]. Until now, measurement campaigns have been based mostly on instrumentation and strategies designed under the IEC 61000-4-30 standard [12]; in recent years, however, and in a complementary way, the reporting levels have been increasingly inspired by the well-known PQ triangle [13], the context in which the scalability term appears. The goal essentially consists of extracting precise information regarding the state of the network and the electrical disturbances that most likely prevail in the site under test, carrying out an ad hoc characterisation of the energy system. The analysis window (temporal scalability) and zone averaging (spatial scalability) are used together to offer a clear view of the PQ evolution. This strategy ensures efficient data management, which, in turn, improves the repeatability of the measurements and favours their traceability [14].
Thirdly, in order to put the aforementioned objectives into practice, network operational data must be refined through a feature extraction stage that guarantees fidelity (e.g., eliminating redundancy and outliers). This must occur not only in abnormal conditions but also during rated regimes [15,16]. In other words, measurements should be more realistic, so as to reduce the uncertainty of subsequent actions derived from the industrial process control [16].
Fourthly, new hybrid continuous events have appeared [7,17,18]. In the literature, real-time surveillance has not been improved enough to achieve these ends [19], and cases of long-term monitoring of hybrids and network statuses are generally missing [20]. Thus, current technology and measurement strategies do not involve new events within a network status operation [20,21]. The present work implements a solution to track hybrid PQ events based on 2D patterns of the indices.
In this context, new trends in measurement solutions manage information by conditioning raw data into statistical parameters and then calculating the difference from ideal patterns. For example, demand data can be mapped onto a 2D scatter plot [15], a representation that is friendly to customers. In this regard, this work goes one step further and uses 2D-HOS (cycle-to-cycle measurements) to characterise the voltage, compress the time series and track the 2D waveform magnitudes. It fulfils PQ goals regarding continuous monitoring, eliminating data redundancy and offering compressed views in time and location. In addition, the technique implemented in the present instrument goes beyond power spectrum and RMS measurements. In fact, complementary techniques (e.g., wavelets) have shown promising results in waveform characterisation but, so far, they are susceptible to noise and their computation cost is still high [17].
This paper is a logical continuation of the work in [22], and it presents a Higher-Order Statistics (HOS)-based non-intrusive measurement solution (physically distributed measurement) conceived to monitor the states of the electrical supply voltage at any point in the network, whether upstream or downstream. Being noise-immune and focused on the waveform, the proposed indicators help characterise the point under test and indicate whether the surveyed values fall outside the healthy operating conditions within the agreed limits. Additionally, two acquisition devices are proposed, in order to show the measurement solution's design flexibility.
From the point of view of technology maturity, the prototype successfully demonstrated operation in a real environment, which is indicative of Technology Readiness Level (TRL) 7.
For an optimum understanding, the highlights of the paper are listed as objectives:
Modern Power Quality visual measurements beyond the UNE-EN 50160 standard [23] and the Gaussian-based concept.
Power Quality equipment suited to the demands of big data and prosumers.
Clustering operational states to discover different daily supply conditions.
2D-HOS diagrams to detect non-Gaussian deviations of the supply voltage.
Graphic monitoring of the electrical network to allow less subjectivity and expertise.
The rest of the paper is organised as follows. Section 2 exposes the mathematical foundations of the measurement strategy, along with a comparison of the most similar and relevant existing methods (complementing the comparison given in this introduction). The measurement activities framework and the instrument design (hardware and software) are presented in Section 3.
Section 4 presents our results and discussion along with the instrument performance, browsing through the panels and tabs, in order to show its potential and practice. Then, a brief comparison with other statistical methods is conducted in
Section 5, with an associated discussion in
Section 6. Finally,
Section 7 summarises valuable theses derived from the practical performance and the visual results.
2. Continuous Monitoring Based on HOS
2.1. Second-Order vs. HOS-Based PQ Indices
The mathematical foundations lie in the hypothesis that HOS-based indices are far more sensitive to shape parameters; thus far neither gathered nor considered within current norms and standards, this potential can be used to extract voltage supply features. Traditionally, regulated measurements have been second-order ones, such as the VRMS or the THD (Total Harmonic Distortion), in the time and frequency domains, respectively [23]. However, new analytical tools are demanded, in order to look beyond the power change [16].
Background research has also elicited the need to track the power-line waveform continuously. Indeed, the variance targets power changes in sags and swells [16]. The sign of the skewness depends on the tail sizes of the Probability Density Function (PDF); non-symmetry in electrical events indicates an anomalous half-cycle. Therefore, the skewness is capable of detecting transients and the non-symmetry of the starting and ending parts of the events. As for the kurtosis, the flatter the top and bottom zones of a waveform, the lower the kurtosis [16].
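This flatness criterion can be checked numerically. The following sketch (illustrative only; the waveforms and the 500-sample cycle length are chosen to match the acquisition settings described later, and the non-excess kurtosis convention is assumed) compares a flat-topped square wave, a sine and an impulsive waveform:

```python
import numpy as np

def kurtosis(x):
    """Fourth standardised moment m4 / m2^2 (non-excess convention)."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2)**2

n = np.arange(500)                        # one 20 ms cycle at 25 kS/s
sine = np.sin(2 * np.pi * n / 500)
square = np.sign(sine)                    # flat top and bottom zones
spiky = np.zeros(500)
spiky[::50] = 1.0                         # impulsive waveform

# The flatter the top and bottom zones, the lower the kurtosis
print(kurtosis(square), kurtosis(sine), kurtosis(spiky))
```

The sine gives a kurtosis of 1.5, the flat-topped square wave falls below it, and the impulsive waveform rises far above it.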
In feature extraction, pre-processing should guarantee the maximum probability of detecting abnormal operation, which is the main goal of this paper. This requirement must be fulfilled in power recognition systems [15,16]. In background research, real-time events and characterisation were not improved [19] and characterisation of hybrid events was rarely found [20,21]. Alternative methods, such as wavelets [17], have provided good performance in shape correlation, but noise drastically reduces their effectiveness.
Table 1 compares the HOS' performance with that of second-order indices. Specifically, the variance (proportional to the square of the VRMS) and the Crest Factor (CF) (the ratio between the maximum peak level and the effective value during a predetermined time) were compared with the skewness and the kurtosis, which are third- and fourth-order statistics, respectively. Being the asymmetry coefficient and the tailedness coefficient of the statistical distributions of the signal amplitudes, respectively, these indicators constitute an important complement to the CF to confirm the presence of an anomalous operating state. The analysis was conducted on different time-domain voltage supply signals that lose their symmetry, their sinusoidal character or both: Symm–Sinusoidal, Symm–NonSinusoid and NonSymm–NonSinusoid, as, for example, in [24,25].
Compared to classical indicators, the calculation of both the VRMS and the variance is based on squaring the signal, so the results are almost identical: any change detected by one is also captured by the other. The CF, in turn, can be affected by noise or small impulses at the peaks. The kurtosis, however, is an estimator based on the distribution of points relative to the mean, indicating whether they are more concentrated or dispersed, and it is less sensitive to the variation of a single point on the crests. Finally, traditional indicators work on the premise that the wave is symmetrical, since this is a premise of normal operation; during transients, however, specific or temporary asymmetries may occur, which are detected by the skewness.
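The robustness of the kurtosis against isolated impulses can be verified with a short numerical sketch (an illustration with an arbitrary synthetic spike, not data from the instrument): corrupting a single sample of a 500-sample cycle moves the Crest Factor by tens of percent while barely altering the kurtosis.

```python
import numpy as np

def crest_factor(x):
    """Ratio of the peak level to the RMS (effective) value."""
    return np.max(np.abs(x)) / np.sqrt(np.mean(x**2))

def kurtosis(x):
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2)**2

n = np.arange(500)                        # one cycle, 500 samples
clean = np.sin(2 * np.pi * n / 500)
noisy = clean.copy()
noisy[0] = 1.5                            # one small impulse at a zero crossing

cf_change = abs(crest_factor(noisy) - crest_factor(clean)) / crest_factor(clean)
kurt_change = abs(kurtosis(noisy) - kurtosis(clean)) / kurtosis(clean)
print(f"CF change: {cf_change:.1%}, kurtosis change: {kurt_change:.1%}")
```

Here the CF jumps by roughly half its clean value, whereas the kurtosis shifts by about one percent.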
Table 1 includes the behaviour of the differential PQ index (variance, skewness and kurtosis), based on previous results [22], which took into account the absolute differences between the ideal and real statistics.
Based on the former definitions and strategy, hereinafter the subsequent design of the Virtual Instrument (VI) is explained. The solution for monitoring the network voltage is based on graphs with higher-order statistical indicators and a numerical index that in the case of an ideal supply is zero; deviations from this value show a change of state in the network.
2.2. Mathematical Foundations of HOS
Originally, HOS were proposed to filter blurred time series and to estimate non-Gaussian features [2]. Within the present context, the electrical disturbance is dealt with as non-Gaussian, while the background is considered symmetrically distributed.
If the focus is put on the waveform, HOS provide new information not issued by second-order statistics. Indeed, two signals can exhibit the same VRMS and yet different time patterns [2].
In the context of voltage quality, HOS are even more suitable, since they characterise the waveform itself, thus increasing the probability of discriminating among types of disturbances if the analysis window is small, or amidst operating states of the electrical network, in cases of focusing on longer observation intervals, such as in reliability studies. Such discrimination is only possible using Higher-Order Statistics, as described by the cumulants, which are defined hereinafter.
Let x(t) be an rth-order process (e.g., an instantaneous voltage line supply). The rth-order statistic is the self-correlation of the variables x(t), x(t + τ1), x(t + τ2), …, x(t + τr−1), i.e.,

mr(τ1, τ2, …, τr−1) = E[x(t) · x(t + τ1) ⋯ x(t + τr−1)],

where τ1, τ2, …, τr−1 are time-shift multiples of the sampling period, Ts; i.e., τj = j·Ts, with j ∈ ℕ. The first time shift, τ = 0, corresponds to the original signal without time shifting. For the second and third orders, we have the magnitudes defined in [2]. Due to the correlation nature, and depending on the statistics' orders, the interdependency will target specific states of the electrical network.
Thus, the instrument calculates three statistics (the variance, the skewness and the kurtosis) and the differences between the real measurements and the ideal triplet (0.5, 0.0, 1.5) [22].
Once the general PQ index and the benefits of the cumulants have been recalled from [22], the instrument design is described in detail.
3. Virtual Instrument: Context and Design
3.1. General Measurement Framework
The general framework is illustrated in the conceptual diagram of Figure 1. The Power Quality Analysers (PQAs) monitor the VRMS and the proposed magnitudes: the variance, the skewness and the kurtosis.
The measurements were collected at the University of Cádiz, Spain. A 50 Hz low-voltage (LV) supply was monitored at 25 kS/s (i.e., 500 samples per cycle), in order to test the shape of the voltage supply signal. Harmonic analysis (using a Fluke 435 analyser) did not reveal different network statuses.
3.2. Description of the Instrument Hardware
Figure 2 shows two versions of the measurement chain, ranging from the physical variable (the voltage line signal) to the computer, which emulates the control panel at the consumer's side. This single-phase version consists of the following elements. A voltage differential probe, model HZ-115 (Hameg Instruments), is connected to the voltage DAQ module card, model NI-9215, which is hosted in the bus-energised chassis NI-cDAQ-9191. The supply voltage is attenuated by the probe and then conditioned to a range of ±10 V.
Figure 3 shows the real-life differential probe (Figure 3a) used in the first version and the NI cDAQ chassis (Figure 3b) chosen for the instrument design. The probe has an overall accuracy of ±0.5% and is conceived for measuring from ±10 to ±500 V. The DAQ module NI-9215 has four inputs, a 16-bit resolution per channel and a maximum sampling frequency of 100 kS/s per channel (range ±10 V). The chassis establishes the physical or wireless link between the laptop computer and the DAQ board.
The interest of the first version lies in showing the design flexibility of the distributed instrumentation. Furthermore, in an advanced measurement chain, the differential probe is eliminated, in order to remove its ±0.5% gain error and to increase the sampling resolution up to 24 bits. In this case, the DAQ hardware is the C-Series voltage module NI-9225 (50 kS/s data rate, 24 bits, 300 VRMS), instead of the pair HZ-115/NI-9215 (16 bits, 100 kS/s).
From this point onwards, the software is described by navigating through its different panels (interfaces) and showing the most significant results and capabilities according to the results from data analysis.
3.3. Description of the Instrument Software
The software was designed and implemented using the LabVIEW 2020 SP1 environment. The Graphical User Interface (GUI) of the VI was specially designed to separate the hardware-control and data-acquisition settings from the displays and graphs, which, in turn, were designed to display the optimal spatial and temporal configuration, so as to extract and interpret the data. The GUI consists of five panels (or tabs): two for the acquisition and configuration set-up (the core of the system) and three for the statistical displays, called Acquisition, Config, HOS Fingerprint, HOS Histogram and Trajectories, respectively. In this sub-section, the first two are introduced.
Two VIs (or modules) work cooperatively. The main one includes the analysis and the core structure, as well as the following tabs: Config, HOS Fingerprint, HOS Histogram and Trajectories. The sub-VI, in turn, is responsible for acquiring data; it can be seen in the Acquisition tab, and it accumulates the data until the main VI is ready to analyse them. The data should be read from the acquisition card as soon as they are available (each data block is, e.g., 1 s long), to prevent the memory card from saturating and the acquisition from being interrupted. Two independent data reception systems ensure that this does not happen; in fact, the two programs communicate with each other through shared variables.
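This cooperation follows a classic producer–consumer pattern. A minimal text-language analogue (a sketch only: the actual instrument uses LabVIEW shared variables, and the block count and queue bound here are illustrative) shows how a bounded queue decouples the acquisition rate from the analysis rate so that no block is lost:

```python
import queue
import threading

signal_queue = queue.Queue(maxsize=8)   # bound chosen to avoid memory overflow

def acquire(n_blocks):
    """Producer: push each 1 s data block as soon as it is available."""
    for i in range(n_blocks):
        signal_queue.put(f"block-{i}")  # blocks if the analyser falls behind
    signal_queue.put(None)              # sentinel: acquisition finished

analysed = []

def analyse():
    """Consumer: pull blocks from the queue and process them."""
    while True:
        block = signal_queue.get()
        if block is None:
            break
        analysed.append(block)          # placeholder for the HOS computation

producer = threading.Thread(target=acquire, args=(20,))
consumer = threading.Thread(target=analyse)
producer.start(); consumer.start()
producer.join(); consumer.join()
print(len(analysed))                    # all 20 blocks analysed, none lost
```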
Figure 4 shows the Acquisition tab. Here, the reception of the data is supervised, indicating the card and the channel in DAQmx Physical Channel(s), the Sampling Frequency (in Sa/s) and the Duration (s) of each measurement. Continuous measurements are carried out without interruption. The maximum number of signals to be accumulated (waiting to be analysed) is indicated beforehand, to avoid overflowing the memory. The user can see the status of the analysis queue (signals waiting), the acquisition status and, if an error occurs, the reason why. To begin, the user presses "Start ACQ". There is the option to stop the acquisition, in case the parameters must be modified ("Stop ACQ") or an error must be cleared before continuing, as well as the option to eliminate the pending signals (Flush Queue).
Figure 5 illustrates the Configuration panel. It establishes the analysis set-up, i.e., the network frequency, nominal values and limits, as well as the size of the persistence-type graphs. This configuration predefines the 2D graphs that appear in the following tab, which, in turn, illustrate the subsequent fingerprints of the statistics for the given selected refreshing time. A case with five sub-graphs is indicated in Figure 6, and a situation with four is depicted in Figure 7. Thus, all subsequent stages consider different time horizons, which are defined here, indicating a number and a time unit. The case represented in Figure 7 corresponds to the last two hours, 1 h, 1 min and 10 s, respectively. In the right-hand area, the names of the graphs are configured, using formatted string wildcards.
Figure 5 also shows a graph (bottom-right corner) that monitors the current acquisition of the 50 Hz grid voltage, whose amplitude is normalised before applying the statistical computing algorithms. The real VRMS voltage is indicated above, and it is used to normalise the input voltage and adjust the scale factor of the probe. Normalisation allows the algorithm to extend its validity to any network under monitoring. A transformer is used as galvanic isolation in the substation.
For each 20 ms period, the dimension of the vector array is 500. The instrument calculates three statistics, using a cycle-by-cycle sliding 500-data window, with no overlap; these are compressed into four values (three statistics and a PQ differential index).
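This compression step can be sketched as follows (illustrative only: the record is synthetic, and the aggregation of the differential index into a sum of absolute deviations from the ideal triplet (0.5, 0.0, 1.5) is assumed). Each non-overlapping 500-sample cycle is reduced to four numbers:

```python
import numpy as np

SAMPLES_PER_CYCLE = 500                 # 25 kS/s on a 50 Hz supply (20 ms)
IDEAL = np.array([0.5, 0.0, 1.5])       # ideal variance, skewness, kurtosis

def compress(record):
    """Reduce each 20 ms cycle to (variance, skewness, kurtosis, PQ index)."""
    cycles = record.reshape(-1, SAMPLES_PER_CYCLE)
    x = cycles - cycles.mean(axis=1, keepdims=True)
    m2 = np.mean(x**2, axis=1)
    skew = np.mean(x**3, axis=1) / m2**1.5
    kurt = np.mean(x**4, axis=1) / m2**2
    stats = np.column_stack([m2, skew, kurt])
    index = np.abs(stats - IDEAL).sum(axis=1)   # assumed aggregation
    return np.column_stack([stats, index])

# One second of a normalised ideal 50 Hz supply: 50 cycles of 500 samples
t = np.arange(50 * SAMPLES_PER_CYCLE)
record = np.sin(2 * np.pi * t / SAMPLES_PER_CYCLE)
out = compress(record)
print(out.shape)    # 25,000 samples compressed to 50 rows of 4 values
```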
Section 4 presents the instrument’s behaviour, while describing the measurement campaign at the same time.
4. Virtual Instrument: Performance Results and Discussion
It is preferable to show the VI's real-life performance through an offline walkthrough of the main panels. Two-dimensional graphs are available in the different panels, emphasising the areas of signal persistence throughout different time durations. Hereinafter, the three tabs that allow graphical representations are exposed: the HOS fingerprint, the HOS histograms and the HOS trajectory panels, respectively.
4.1. The HOS Fingerprint Panel: Energy Traces
The tab corresponding to the layout of the footprint or voltage trace, the HOS fingerprint panel, allows the user to monitor the network status. In this example, the focus was on the variance vs. kurtosis time curves. Two design versions of the panel were selected, depicted in Figure 6 and Figure 7, respectively, each aiming to show a different network status.
The 50 Hz waveform is not ideal, because the measurements in the colour maps show a bias with respect to the ideal target, the solid cross. From left to right and top to bottom, the curves represent the statistics from the highest scale to the lowest. The colour map represents the cumulative probability. In the top-right zone of each sub-trace, the user finds the PQ index. If the waveform goes out of bounds, the outliers counter is increased.
The first representation, in Figure 6, gathers the one-week data (upper-left subgraph). Then, the data are down-scaled to one day, one hour, 10 min and 10 s, respectively. The solid cross is the ideal value of the variance–kurtosis pair. The first three 2D graphs account for the reliability of the voltage supply; the two graphs below inform about the PQ (shorter time intervals), looking for PQ events, as named in IEC 61000-4-30. As in the former case, a transformer is used as galvanic isolation in the substation.
Figure 7 is another version of the fingerprint panel, which contains four sub-graphs and incorporates the clusters' centre, marked with a dashed cross. This version is suitable for establishing the distance from the real centroid to the ideal centre, according to the Euclidean metric.
Both figures illustrate different experimental behaviours. For example, comparing the two hourly sub-graphs, greater network instability is observed in Figure 7 than in Figure 6. Taking the yellow centroid as the reference, in the first the variance ranges from 0.465 to 0.485, while the second presents a far smaller (negligible) dispersion around the clusters' centroid and the dashed cross (the global centroid).
The following programming flow diagram, Figure 8, illustrates the procedure conceived to obtain the two-dimensional plot of the voltage supply quality, the HOS footprint or energy trace, which explains the state of the electrical network. The analysis, as in the previous case, was arranged and performed in pairs, i.e., variance–kurtosis, variance–skewness or kurtosis–skewness.
As seen in Figure 8, the software first initialises a zero matrix with the dimensions of the representation; this two-dimensional array will later store the fingerprint. Then, each pair of HOS values is read and compared with the extremes assigned by the user, which are considered the network-condition limits. If either of the two variables in the pair is out of range (greater than the maximum or less than the minimum indicated for that axis), the point is considered an outlier, the outlier counter is incremented and the pair is discarded. Otherwise, an integer bin position between min and max is assigned to each value: the increment is calculated as the maximum minus the minimum divided by the number of points on the axis; then the axis minimum is subtracted from the current HOS value, the result is divided by the increment and the integer part of the division is kept. Doing this for both axes yields the matrix cell to which the analysed signal cycle corresponds, and that cell is incremented by one. After that, the next pair of HOS values is considered.
When all the HOS variables have been analysed, a two-dimensional representation is obtained, giving rise to a colour map in which the colours depend on the frequency with which the data are repeated, as can be seen in the colour bars in Figure 6 and Figure 7. In this panel, the area in which the data are located is also marked by means of the cursors (continuous axes), to check compliance with the Power Quality requirements, as can be seen in both figures.
4.2. The HOS Histograms Panel
Figure 9 shows the panel of the instrument corresponding to the histograms of variance, skewness and kurtosis, obtained according to different averaging times, i.e., 1 h, 1 min and 10 s, respectively. Since it is a real signal that is being monitored, these histograms are not centred on the ideal values. In all of them, it is worth remarking that the longer the averaging time (the time interval window), the more accurate the measurement of the most probable value, since the fluctuating statistics are compensated.
Hereinafter, the trajectory panel is presented.
4.3. The HOS Trajectory Panel
The panel shows the trajectory following the two-dimensional dynamic evolution of a pair of statistics previously selected by the user among those studied in the time domain. These statistics are calculated with a time resolution of one network signal cycle (20 ms), which is the minimum needed to carry out the operations; that is, each cycle yields a variance–skewness–kurtosis triplet. Thus, for example, Figure 11 shows the evolution of the kurtosis (Stat. #2) vs. the variance (Stat. #1). The monitored statistics in this figure have ideal values at the coordinate point (kurtosis = 1.5, variance = 0.5), and a rectangular error-margin zone has been represented, with ranges of 0.45–0.52 for the variance and 1.465–1.535 for the kurtosis, respectively, corresponding to the maximum values observed during a normal supply day. The trajectory, with a total duration of one hour, is given by the cloud of points, whose centroid is separated from the aforementioned ideal point (var = 0.5, kur = 1.5), which corresponds to the situation of a perfect voltage supply. In this case, the supply holds steady within the tolerance region established according to the empirical criteria previously outlined.
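The acceptance test behind this panel can be sketched numerically (with illustrative stand-in data: a few cycles of a slightly amplitude-modulated sine replace a real hour of supply). The centroid of the variance–kurtosis cloud is computed and checked against the tolerance rectangle:

```python
import numpy as np

VAR_BOX = (0.45, 0.52)      # empirical tolerance for the variance
KURT_BOX = (1.465, 1.535)   # empirical tolerance for the kurtosis

def cycle_stats(cycle):
    """Variance and (non-excess) kurtosis of one 500-sample cycle."""
    x = cycle - cycle.mean()
    m2 = np.mean(x**2)
    return m2, np.mean(x**4) / m2**2

# Stand-in trajectory: cycles with small deterministic amplitude fluctuations
n = np.arange(500)
amps = 1.0 + 0.01 * np.sin(np.linspace(0, np.pi, 20))   # up to 1% drift
points = np.array([cycle_stats(a * np.sin(2 * np.pi * n / 500)) for a in amps])

centroid = points.mean(axis=0)
inside = (VAR_BOX[0] <= centroid[0] <= VAR_BOX[1]
          and KURT_BOX[0] <= centroid[1] <= KURT_BOX[1])
print(centroid, inside)     # the supply holds steady within the tolerance region
```

Note that the kurtosis is invariant to amplitude scaling, so only the variance coordinate drifts with the modulation.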
The programming code shown in Figure 10 is the section designed to implement the graph for the trajectory. It takes the vectors containing the statistics in the order in which they have been calculated, and it later adds, as a second plot, the quadrangular area that delimits the validity limits of the correct supply. The union of the two plots is conducted in the "bundle"-type element, and the graphic representations are done in the "XY Graph"-type elements.
Figure 10. The programming code that implements the 2D trajectory tracing. The associated graphs are depicted in Figure 11.
The virtual sub-instrument block named Trajectory from HOS, shown in Figure 10 and whose internal programming code is shown in Figure 12, builds the Higher-Order Statistics vector transferred to the graphs.
The 3D-vector coordinates are the Higher-Order Statistics, where N corresponds to the number of voltage cycles analysed, with the variance–skewness–kurtosis triplet calculated for each cycle. The selector in Figure 12 allows selecting the pair of statistics to be represented. To do this, it receives a pair of indices, one for each axis of the 2D diagram; each index refers to a vector that contains all the values of its statistic measured in the selected time interval. These vectors are the inputs to the "XY Graph" building blocks. Underlying this process is the idea that the lowest temporal resolution used corresponds to one signal cycle.
The developed measurement system was tested in a public building of the university, where it was able to detect fluctuations, in 10 s traces, of 3.9% in variance and 0.6% in kurtosis, as shown in Figure 7.
5. Comparison with Other Statistical Methods
Two of the most important characteristics of the supply voltage wave were selected (the frequency and the VRMS), and their measurements were shown every 525 cycles, in order to check whether clusters had formed, which would demonstrate the existence of more than one state of the electrical network.
In this section, in order to make a comparison, statistical analyses of other variables are included, so as to establish whether de facto different functional states of the network can be identified. The first of these variables is the frequency of the electrical network. According to the UNE-EN 50160 standard, the nominal value measured over 10 s must lie within 50 Hz ± 1% (49.5–50.5 Hz). Consequently, there is no discrimination of possible different network states according to Figure 13.
Regarding the voltage variations, in Figure 14 the effective values of the supply voltage delivered to the final consumers must remain within ±7% of the rated value.
Once again, and according to the criterion observed in the standard UNE-EN 50160, no clusters are observed, so, as in the case of the grid frequency, it must be concluded that the VRMS remained stable.
As a complement, a discriminant analysis was carried out; this technique is used to predict membership in a group (dependent variable) from a set of predictors (independent variables). If canonical correlation and principal component techniques are used, it is called canonical discriminant analysis. A discriminant analysis seeks to differentiate the two groups as much as possible by combining the independent variables (grid frequency and VRMS measurements), but if the groups do not differ in the independent variables, a dimension in which the groups differ cannot be found. Once again, it was the variance and the kurtosis that offered the best results within this technique, but a differentiating axis was not found, since the overlap between the groups was excessive; that is, the centroids were located in almost the same place. In other words, since the centroids were very close, it was not possible to distinguish the members of either group.
Table 2 summarises the results of the two performance indicators for this purpose. The discrepancy between percentages was probably due to outliers.
Therefore, it must be concluded that higher-order statistical variables perform the optimal classification of power grid states.
6. Discussion
In this section, the instrument's performance is discussed and comparisons with other statistical methods are made.
The designed electronic instrument exhibits robust behaviour over time, since it is capable of developing continuous monitoring, managing internal memory efficiently and discarding those situations that do not provide truly novel information. The graphs incorporated in its different panels are capable of providing detailed information on numerous statistical parameters of the electrical network, which other traditional analysers do not incorporate. Those 2D graphs that include Higher-Order Statistics (HOS) perform discrimination between states of the quality of the electrical supply.
Two different configurations for hardware measurement chains have been proposed, which are very easy to implement, and which, together with the developed software, exhibit uninterrupted operation, giving the system robustness and reliability.
The graphical representations and the visual and statistical methods that incorporate traditional variables (such as the grid frequency and the VRMS) are designed for monitoring the electrical supply variables within the stipulated regulatory framework. Any deviation that appears in these graphs is considered allowable, and none of the observed differences is admitted as a new differential state of the network.
Service quality is the set of technical and commercial characteristics, inherent in the electricity supply, required by the subjects (consumers) and by the competent bodies of the Administration. The quality of service comprises the following:
Continuity of supply, relative to the number and duration of supply interruptions.
Product quality, relative to the characteristics of the voltage wave.
Quality of customer service and relationship, relative to the set of information, advice, contracting, communication and complaint actions.
This prototype instrument monitors or puts the focus on the second item, the quality of the product (voltage grid) based on the waveform. This waveform is distorted by numerous factors inherent in the modern electrical grid, which range from non-linear loads to the intermittency of renewable energies.
The traditional variables of the quality of the electricity supply, according to the UNE-EN 50160 standard, are not capable of discriminating different states of the modern electrical network on their own, perhaps because the energy sources have different origins and because increasingly many types of non-linear loads are being connected. This is why there are presumably new and hybrid disturbances that, although less intense, may, due to their form or persistence, damage sensitive electronic equipment or reduce its useful operating life.
Likewise, the states of the electrical network must be differentiated by new parameters or statistics that take into account the shape of the signal. The formation of clusters in the 2D diagrams demonstrates this circumstance. These groupings of measures are not standardised and their categorisation is the subject of incipient research, such as that demonstrated in this work.
On the other hand, consumers must take measures to ensure that the disturbances emitted by their facilities remain within limits. They must also establish measures that minimise the risks derived from a lack of quality. This instrument provides an easy and intuitive way to record supply deviations and allows the user to detect them in a short period of time. This record will allow the user to claim for damage to their equipment.
It can be stated that the proposed method, and the instrument that implements it, is capable of separating network states that cannot be observed through the observational parameters provided for in the current standard UNE-EN 50160.
7. Conclusions
This paper summarises the behaviour of an HOS-based measurement instrument that analyses big data for the evaluation of PQ. With new functions, the developed VI provides specific information that allows better tracking of domestic and industrial PQ. Taking the primary voltage measurements, it further calculates the statistics and it maps the compressed parameters into 2D traces. Aggregating cycle-over-cycle, this visualisation tool establishes the quality regions that characterise the behaviour of the energy system. Two possible hardware solutions have been shown.
It has been proven in a real case that 2D graphics based on Higher-Order Statistics are capable of discriminating different states of the electrical network and that it is possible to evaluate the resolution with which they are measured. These new states can be of great interest when we want to establish the causes of the bias of an electrical supply.
Two alternatives for hardware measurement chains have been proposed that are very easy to implement and that, together with the developed software, exhibit uninterrupted operation, giving the system robustness and reliability.
Future work is based on a double strategy. On the one hand, the programming code will be implemented in a simpler language. On the other hand, the design of the instrument will be diversified, depending on the type of energy being monitored.