1. Introduction
Classically, the concept of entropy arose from the analysis of physical problems in statistical physics and thermodynamics. From the beginning, it was a measure of the uncertainty in a physical system, and on this basis, C.E. Shannon [1] proposed to extend the concept to the analysis of complexity in signals, thus giving rise to the emerging information theory [2]. Several years after Shannon, A. Rényi showed that a valid measure of entropy has to be defined in accordance with a measure of diversity [3]. A step forward in this direction was taken by A. Kolmogorov (1958), who used the concept of entropy to define a fundamental measure of chaotic evolution and of the loss of information in the course of time [4].
Indeed, this entropy is an extension of a known concept of information entropy (to time-dependent processes), which is used to classify dynamical systems as regular, chaotic or purely random in their evolution.
Chaotic motions and, in particular, attractors can also be described by iterative maps, which belong to the fundamental methods of fractal geometry.
Chaos, complexity and fractals have many common features, and recently, they have attracted the interest of scholars for their application in science and engineering.
Fractal sets are abstract objects that cannot be physically implemented. However, some related geometries known as pre-fractals have been shown to be very useful in engineering and applied science [5,6]. In particular, some fractal models have been used to design fractal antennas with very special properties: a size of about one-tenth of a wavelength (p. 231, [7]) and a pre-fractal geometrical configuration.
An antenna is a complex device, characterized by different parameters (resonant frequency, gain, directivity, radiation pattern, etc.), which define the performance of the radiator. The chaoticity of the fractal antenna will be studied in the following by an entropy measure based on the computation of the fractal dimension, according to the analysis of a radiating structure given in [8,9].
Since the Rényi entropy $H_\alpha$ and the generalized fractal dimension $D_\alpha$ are connected by a well-known relation (see Equation (25)), in this paper, this relation was used to compute the Rényi entropy of a pre-fractal structure and to describe the electromagnetic behavior of an antenna together with the corresponding performance.
The results of Best [10,11] show that antenna geometry alone (pre-fractal or otherwise) is not a significant factor in determining the performance of small antennas. Yet, this may be a good clue.
In the literature, there are only a few articles about how the self-similarity property of a pre-fractal radiator can influence its performance (see again [8,9]).
In order to investigate in this direction, the values of the Rényi entropy for the classical Sierpinski gasket were determined through Equation (25) and numerically estimated (see Section 4.3).
2. Concept of Entropy
There are basically three definitions of entropy in this article. The Kolmogorov entropy K, which measures the chaoticity of a dynamical system (Chapter 6, [12]), can be estimated by the Rényi entropy of order two. In information theory, the Shannon entropy is a special case of the Rényi entropy for $\alpha \to 1$.
Definition 1 (Shannon entropy). The Shannon entropy [1,13] of a discrete-type RV X is defined as:

$$H(X) = -\sum_{i=1}^{N} p_i \log_b p_i, \qquad (1)$$

where N is the number of possible states and $p_i$ is the probability of the event $\{X = x_i\}$, and it is assumed that $\sum_{i=1}^{N} p_i = 1$; the most commonly used values of the base are $b = 2$ and $b = e$. This entropy may be defined in a simple way for a continuous-type RV as well [13]. Yet, this is not the scope of this article. As is well known, the Shannon entropy satisfies several properties, which will not be treated herein [14].
Moreover, it is possible to show that it represents the measure of uncertainty about a suitable partition [13].
A first generalization of this kind of entropy is the so-called Rényi entropy [3,15]: it represents a family of functionals describing the uncertainty or randomness of a given system.
Definition 2 (Rényi entropy). Let α be a positive real number, with $\alpha \neq 1$. The Rényi entropy of order α is defined as [3]:

$$H_\alpha(X) = \frac{1}{1-\alpha} \log_b \left( \sum_{i=1}^{N} p_i^\alpha \right),$$

where X is a discrete-type RV and $p_i$ is the probability of the event $\{X = x_i\}$; the most commonly used values of the base are the same as for the Shannon entropy. If the events are equiprobable, i.e., $p_i = 1/N$, then $H_\alpha(X)$ is maximal and $H_\alpha(X) = \log_b N$ for every α, i.e., the so-called Hartley entropy [16]. It is clear that, in this case, the entropy does not depend on the probabilities, but only on the number of events with non-zero probability.
In order to understand the meaning of the last definition, it is necessary to observe that at $\alpha = 1$, the quantity:

$$\frac{1}{1-\alpha} \log_b \left( \sum_{i=1}^{N} p_i^\alpha \right)$$

generates the indeterminate form $0/0$, since $\sum_{i=1}^{N} p_i = 1$. By L'Hôpital's rule [17], it is easy to show that:

$$\lim_{\alpha \to 1} H_\alpha(X) = -\sum_{i=1}^{N} p_i \log_b p_i = H(X),$$

i.e., the Shannon entropy, so $H_1(X) = H(X)$, as shown in Figure 1.
Therefore, the Rényi entropy may be considered as a generalization of the Shannon entropy. It can be shown that the Rényi entropies decrease as a function of
α [
3].
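The $\alpha \to 1$ limit above can be checked numerically; the following minimal sketch (Python; the probability vector and the logarithm base 2 are arbitrary choices made for this example) implements both definitions:

```python
import math

def shannon_entropy(p, base=2):
    """Shannon entropy H(X) = -sum p_i log_b p_i (terms with p_i = 0 are skipped)."""
    return -sum(pi * math.log(pi, base) for pi in p if pi > 0)

def renyi_entropy(p, alpha, base=2):
    """Renyi entropy of order alpha != 1: (1/(1-alpha)) log_b sum p_i^alpha."""
    return math.log(sum(pi ** alpha for pi in p if pi > 0), base) / (1.0 - alpha)

p = [0.5, 0.25, 0.125, 0.125]
H = shannon_entropy(p)              # 1.75 bits for this vector
H_near_1 = renyi_entropy(p, 1.000001)   # approaches H as alpha -> 1
```

Evaluating `renyi_entropy` at orders on either side of one also illustrates the nonincreasing behavior in α: here $H_{0.5} \ge H_1 \ge H_2$.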
Let $\mathbf{x}(t)$ be the trajectory of a dynamical system on a strange attractor. Let the d-dimensional phase space be partitioned into boxes of size $l^d$ and the state of the system be sampled at the discrete time intervals τ. Let $p(i_0, i_1, \ldots, i_n)$ be the joint probability that the trajectory $\mathbf{x}(t=0)$ is in the box $i_0$, $\mathbf{x}(t=\tau)$ is in the box $i_1$, ..., and $\mathbf{x}(t=n\tau)$ is in the box $i_n$ [18,19]; for example, $p(i_0, i_1)$ is the probability that the trajectory visits the box $i_0$ at $t=0$ and the box $i_1$ at $t=\tau$.
According to Equation (1), the quantity:

$$K_n = -\sum_{i_0, \ldots, i_n} p(i_0, \ldots, i_n) \ln p(i_0, \ldots, i_n)$$

gives the expected amount of information needed to locate the system on a special trajectory $i_0, \ldots, i_n$, i.e., if it is known a priori that our system was in the boxes $i_0, \ldots, i_n$, then $K_{n+1} - K_n$ is the information necessary to predict in which box $i_{n+1}$ this system will be included. Using the language of information theory, this means that $K_{n+1} - K_n$ measures the loss of information for the system from n to $n+1$ (Chapter 6, [12]). Therefore, the definition of this new kind of entropy can be provided at this point.
Definition 3 (Kolmogorov entropy). The Kolmogorov entropy K is defined as [19]:

$$K = -\lim_{\tau \to 0} \lim_{l \to 0} \lim_{N \to \infty} \frac{1}{N\tau} \sum_{i_0, \ldots, i_{N-1}} p(i_0, \ldots, i_{N-1}) \ln p(i_0, \ldots, i_{N-1}),$$

where l and τ have the same meaning as above. From this definition, it can be seen immediately that it is nothing other than the average rate of loss of information. K is independent of the particular partition (thanks to the limit $l \to 0$).
Figure 2 reveals how K represents a measure of chaos: indeed (Chapter 6, [12]), $K = 0$ for regular motion, $K \to \infty$ for purely random systems, and $0 < K < \infty$ for (deterministic) chaotic systems,
where the definition of a random system can be found in Chapter 3 of [20]. By describing the chaos in a dynamical system, the Kolmogorov entropy is expected to be strongly connected with the Lyapunov exponent λ; see [21]. For more information about the theoretical aspects of entropy, its generalizations and entropy-like measures, which can be used to measure the complexity of a system, see [22,23,24,25,26].
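As an illustration of how K can be estimated in practice, the following sketch (Python; the logistic map, the binary partition and the block lengths are choices made here for the example, not taken from this paper) approximates $K_{n+1} - K_n$ for the fully chaotic logistic map, whose Kolmogorov entropy is known to be $\ln 2$ per iteration:

```python
import math
from collections import Counter

def block_entropy(symbols, n):
    """Shannon entropy (in nats) of the empirical distribution of length-n blocks, K_n."""
    blocks = Counter(tuple(symbols[i:i + n]) for i in range(len(symbols) - n + 1))
    total = sum(blocks.values())
    return -sum((c / total) * math.log(c / total) for c in blocks.values())

# Symbolic trajectory of the fully chaotic logistic map x -> 4x(1 - x),
# read through the binary partition [0, 1/2) / [1/2, 1].
x, symbols = 0.3, []
for _ in range(200000):
    x = 4.0 * x * (1.0 - x)
    symbols.append(0 if x < 0.5 else 1)

# K is approximated by the difference K_{n+1} - K_n for moderately large n;
# the known value for this map is ln 2 = 0.693... nats per iteration.
n = 8
K_est = block_entropy(symbols, n + 1) - block_entropy(symbols, n)
```

The block length is kept moderate so that every block is observed many times; pushing n too high with a finite sample biases the entropy estimates downward.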
3. Remarks on Fractal Geometry
A fractal is characterized by the property that each enlargement of this set reveals further details, so it has a structure that is too irregular to be described by a classic mathematical theory (even if a fractal can often be described recursively). Furthermore:
it is self-similar, i.e., each very small portion of it is exactly or approximately similar to itself (this property has to be understood in the statistical or approximated sense, because a random element can be introduced in the construction of the fractal);
it is a space-filling curve [
28].
3.1. Hausdorff–Besicovitch and Box-Counting Dimensions
Among the different definitions of fractal dimension in use, the Hausdorff–Besicovitch dimension is probably the most important, even if it is not usually employed in experimental procedures to find the fractal dimensions of real objects.
Fractal dimensions are very important because they provide a measure of the degree to which new details are revealed at different scales. For example, the fractal dimension of the coastline of Great Britain is about 1.25 (Chapter 2, [29]). In order to define the Hausdorff–Besicovitch dimension, some remarks on fractal geometry are given [5,6].
Theorem 4 (See Figure 3). If $\mathcal{H}^s$ denotes the s-dimensional Hausdorff measure, $0 \le s < \infty$, and A is a bounded subset of the Euclidean metric space $\mathbb{R}^n$, then there exists a unique number $s_0$, such that:

$$\mathcal{H}^s(A) = \begin{cases} \infty, & 0 \le s < s_0, \\ 0, & s > s_0. \end{cases} \qquad (8)$$

Definition 5 (Hausdorff–Besicovitch dimension). Under the hypotheses of Theorem (4), the corresponding real number $s_0$ present in Equation (8) is called the Hausdorff–Besicovitch dimension of A, and it is generally indicated with $\dim_H A$.
From this last definition, it follows that the Hausdorff–Besicovitch dimension of a bounded subset $A \subset \mathbb{R}^n$ is a non-negative number $\dim_H A$, such that:

$$\dim_H A = \inf\{s \ge 0 : \mathcal{H}^s(A) = 0\} = \sup\{s \ge 0 : \mathcal{H}^s(A) = \infty\}.$$

Therefore, the Hausdorff measure of A, i.e., $\mathcal{H}^{s_0}(A)$ with $s_0 = \dim_H A$, might be equal to zero, to infinity or such that $0 < \mathcal{H}^{s_0}(A) < \infty$. In Figure 3, the plot of $\mathcal{H}^s(A)$ is presented as a function of s, which shows us that $\dim_H A$ is the critical value of the variable s in the jump of $\mathcal{H}^s(A)$ from ∞ to zero.
At this point, the definition of the fractal set can be provided. It is to be recalled that $\dim_T A \le \dim_H A$, where $\dim_T A$ represents the topological dimension of the bounded subset A of $\mathbb{R}^n$ (p. 3, [30]).
Definition 6 (Fractal set). A bounded subset $A \subset \mathbb{R}^n$ is fractal (in the sense of Mandelbrot) if it holds that $\dim_H A > \dim_T A$, where the difference $\dim_H A - \dim_T A$ is called the fractal degree of A.
The Hausdorff–Besicovitch dimension is not particularly useful in engineering or applied sciences, because its calculation is not very easy, so another definition of the fractal dimension, more suitable for computing the fractal dimension in problems of mathematical modeling, was introduced [6,31].
Definition 7 (Box-counting dimension). Let $(X, d)$ be a metric space and $A \in \mathcal{H}(X)$, where $\mathcal{H}(X)$ denotes the space of non-empty compact subsets of X. Let $N_\delta(A)$, $\delta > 0$, be the smallest number of closed balls of radius δ needed to cover A. The lower and upper box-counting dimensions of A, denoted $\underline{\dim}_B A$ and $\overline{\dim}_B A$, respectively, are defined as:

$$\underline{\dim}_B A = \liminf_{\delta \to 0} \frac{\log N_\delta(A)}{-\log \delta}, \qquad \overline{\dim}_B A = \limsup_{\delta \to 0} \frac{\log N_\delta(A)}{-\log \delta}.$$

When $\underline{\dim}_B A = \overline{\dim}_B A$, the following limit exists and is called the box-counting dimension of A, denoted $\dim_B A$:

$$\dim_B A = \lim_{\delta \to 0} \frac{\log N_\delta(A)}{-\log \delta}. \qquad (10)$$

The box-counting dimension of an object does not have to be exactly equal to the Hausdorff–Besicovitch dimension, even though they can be really close at times. This new definition of the fractal dimension is given by the minimum number of objects needed to cover the fractal set.
In
Figure 4, it is shown how the box-counting dimension works to compute the length of England’s coastline: looking at the first iterations, the meaning of the adjective box-counting is clear.
In general, the limit in Equation (10) might not exist; hence, the box-counting and Hausdorff–Besicovitch dimensions are linked by the following relation (Chapter 3, [6]):

$$\dim_H A \le \underline{\dim}_B A \le \overline{\dim}_B A.$$
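In practice, Equation (10) is approximated by counting occupied grid boxes over a range of scales and fitting a line; the sketch below (Python; the middle-third Cantor set, the construction depth and the triadic fitting scales are choices made for this example) illustrates the procedure:

```python
import math

def cantor_points(depth):
    """Midpoints of the 2**depth intervals of the middle-third Cantor set."""
    pts = [0.0]
    length = 1.0
    for _ in range(depth):
        length /= 3.0
        # Each interval [q, q + 3*length] splits into its left and right thirds.
        pts = [p for q in pts for p in (q, q + 2.0 * length)]
    return [p + length / 2.0 for p in pts]

def box_count(points, delta):
    """Number of grid boxes of side delta meeting the 1-D point set."""
    return len({math.floor(p / delta) for p in points})

pts = cantor_points(10)

# Least-squares fit of log N_delta(A) versus log(1/delta), delta = 3^-1 ... 3^-8.
xs = [m * math.log(3.0) for m in range(1, 9)]
ys = [math.log(box_count(pts, 3.0 ** (-m))) for m in range(1, 9)]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
# slope is close to log 2 / log 3 = 0.6309...
```

Midpoints of the construction intervals are used so that no sample point sits exactly on a box boundary, which keeps the counts clean at every fitted scale.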
3.2. Iterated Function System and Pre-Fractals
It is to be recalled that a contraction on a metric space $(X, d)$ is a transformation $f: X \to X$, such that:

$$d(f(x), f(y)) \le s\, d(x, y), \qquad \forall x, y \in X, \qquad (11)$$

where the number s, called the contractivity factor for f, belongs to $[0, 1)$. The famous contraction mapping theorem states that every contraction f on a complete metric space $(X, d)$ has exactly one fixed point $x_f \in X$, and the sequence of iterations $f^{\circ n}(x)$ converges to $x_f$ for every $x \in X$ (pp. 76–77, [5]). Clearly, any contraction is continuous. If the equality holds in Equation (11), f is called a contracting similarity, because it transforms sets into geometrically similar sets.
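As a small illustration of the contraction mapping theorem, the sketch below (Python; the map $x \mapsto \cos(x)/2$, a contraction on the real line with factor $s = 1/2$, is an arbitrary choice made for this example) iterates f until the fixed point is reached:

```python
import math

def iterate_to_fixed_point(f, x0, tol=1e-12, max_iter=1000):
    """Banach fixed-point iteration: x_{n+1} = f(x_n) converges to the unique
    fixed point of f when f is a contraction on a complete metric space."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

f = lambda x: 0.5 * math.cos(x)     # contraction on R with factor s = 1/2
x_star = iterate_to_fixed_point(f, 10.0)
```

Starting the iteration from any other point yields the same limit, which is exactly the uniqueness statement of the theorem.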
It is now time to give the definition of an important procedure concerning fractals.
Definition 8 (Iterated function system). The iterated function system (IFS) is a couple $(X; F)$, where F is defined through a finite family of contractions $f_1, \ldots, f_m$ on the complete metric space $(X, d)$, with $m \ge 2$, and $\mathcal{H}(X)$ denotes again the space of non-empty compact subsets of X. Moreover, the set $A \in \mathcal{H}(X)$ is called the attractor (or sometimes the invariant set) for the IFS if:

$$A = F(A) = \bigcup_{i=1}^{m} f_i(A). \qquad (12)$$

Technically speaking, the operator F given by Equation (12) is called the Hutchinson operator associated with the IFS $f_1$, $f_2$, ..., $f_m$ [5,32]. From the definition above, it is clear that the attractor for the IFS is also its unique fixed point. This is the fundamental property of an IFS, because this attractor is often a fractal. An IFS has a unique (non-empty compact) attractor (Chapter 9, [6]), but its introduction brings with it two main problems: the first one is how to represent a given set as the attractor of some IFS, while the second is how to reconstruct the IFS starting from its attractor (p. 126, [6]).
Both of these problems can often be solved by inspection, especially if F has a self-similar structure (see Figure 5).
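Once the contractions are known, the attractor itself can be approximated numerically; the sketch below (Python; the "chaos game", with triangle vertices and point counts chosen arbitrarily for this example) iterates randomly selected maps of a three-contraction IFS, so the iterates accumulate on the fixed set $A = F(A)$:

```python
import random

# Three contractions with factor 1/2, each pulling a point halfway
# toward one vertex of the starting triangle (an arbitrary choice here).
VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.75)]

def chaos_game(n_points, seed=0):
    """Approximate the IFS attractor by applying randomly chosen contractions;
    after a short transient, the orbit lies arbitrarily close to the attractor."""
    rng = random.Random(seed)
    x, y = 0.3, 0.3
    pts = []
    for i in range(n_points + 100):     # the first 100 iterates are discarded
        vx, vy = rng.choice(VERTICES)
        x, y = (x + vx) / 2.0, (y + vy) / 2.0
        if i >= 100:
            pts.append((x, y))
    return pts

pts = chaos_game(20000)
```

Plotting `pts` reproduces the familiar gasket-like attractor; since every map is a contraction with factor 1/2, each iterate halves the distance between the orbit and the attractor.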
For the majority of the fractals suitable for an application in antenna theory, the thesis of the Moran–Hutchinson theorem (pp. 130–132, [6]) holds true, so:

$$\sum_{i=1}^{m} c_i^{\dim_H A} = 1,$$

where A is the attractor of the IFS with contraction factors $c_1$, ..., $c_m$ and $\dim_H A = \dim_B A$.
This theorem provides us with the possibility to compute the fractal dimension of many self-similar fractals. Indeed, let us consider the von Koch curve and the middle third Cantor set (see Figure 5): for the first one, which consists of four similarities of ratio $1/3$, it is:

$$\dim_H A = \frac{\log 4}{\log 3} \approx 1.2619,$$

while for the other set, built from two similarities of ratio $1/3$, we get:

$$\dim_H A = \frac{\log 2}{\log 3} \approx 0.6309.$$
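The Moran equation can also be solved numerically when the contraction factors differ; a small sketch (Python, using bisection, written for this example) recovers the two dimensions above:

```python
import math

def moran_dimension(factors, tol=1e-12):
    """Solve sum(c_i ** D) = 1 for D by bisection; for contraction ratios
    c_i in (0, 1), the left-hand side is strictly decreasing in D."""
    lo, hi = 0.0, 10.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if sum(c ** mid for c in factors) > 1.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

d_koch = moran_dimension([1.0 / 3.0] * 4)    # ~ log 4 / log 3 ~ 1.2619
d_cantor = moran_dimension([1.0 / 3.0] * 2)  # ~ log 2 / log 3 ~ 0.6309
```

Bisection is used because the sum is strictly decreasing in D, so the Moran equation has exactly one root; this also handles IFSs whose ratios are not all equal.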
IFS can be applied to all self-similar structures, especially for the simulation of real objects with fractal properties, like fractal antennas.
It is well known that fractals are only mathematical abstractions (because it is impossible to iterate indefinitely in the real world). In addition, numerical simulations show how fractal modeling in antenna theory provides substantial advantages up to a certain iteration number (typically, for fractal antennas, not greater than six). Beyond this value, the benefits are negligible. It is clear that all self-similar structures in nature are nothing other than fractals arrested at a prefixed iteration, i.e., pre-fractals (geometrical objects characterized by a finite number of fractal iterations).
4. Fractal Antennas
In order to minimize the antenna size while holding a high radiation efficiency, a fractal approach to model its geometrical configuration can be considered.
The two fundamental properties of a fractal (i.e., self-similarity and space-filling) allow fractal antennas to have an efficient miniaturization and multiband characteristics.
The well-known log-periodic antennas, introduced by DuHamel and Isbell around the 1950s and closely paralleling the frequency-independent concept [7], might be considered the first fractal antennas in history. Another example of a self-similar antenna discovered in the same period is the spiral antenna (see Figure 6).
However, their true origin may be traced back to 1988, when Nathan L. Cohen, a Boston University radio astronomer, published a paper about this new type of antenna [33].
Fractal antennas have not only a large effective length, but also a simple matching circuit, thanks to the contours of their geometrical shape, which are able to generate a capacitance or an inductance. For instance, a quarter-wavelength monopole may be transformed into a smaller antenna using the von Koch curve (see Figure 5).
A big part of the research on fractal antennas has been done by Fractal Antenna Systems Inc., an American company founded by Cohen.
Carles Puente Baliarda (Polytechnic University of Catalonia) was the first to treat these antennas as multiband antennas. In 1998, he won the award for “innovative IT products with evident market potential” thanks to his pioneering research on fractal antennas (for a total of €200,000), while he and his company (Fractus S.A.) were finalists for the European Inventor Award 2014, showing the great potential of these antennas.
According to a report by BCC Research, fractal-based antenna units numbering in the billions had been supplied worldwide by 2011.
4.1. Sierpinski Gasket and Hilbert Antenna
The Sierpinski triangle T can be constructed from an equilateral triangle by the repeated removal of inverted equilateral triangles (see Figure 7). It is a fractal and an attractive fixed set. Considering Figure 7, it is:

$$\sum_{i=1}^{3} c_i^{\dim_H T} = 3 \left(\frac{1}{2}\right)^{\dim_H T} = 1,$$

since all of the contraction factors $c_1$, $c_2$, $c_3$ are equal to $1/2$. Therefore:

$$\dim_H T = \frac{\log 3}{\log 2} \approx 1.585.$$

This fractal may also be generated by an IFS (Chapter 9, [6]).
There exist different versions of the Sierpinski triangle. The shape can be modified in many ways, and these variants are often used in engineering and applied sciences.
The Sierpinski (gasket) antenna belongs to the class of multiband fractal antennas based on the Sierpinski triangle. The classical Sierpinski dipole is shown in Figure 7. It is probably the fractal antenna with the most applications, from wireless communication systems (GSM, UMTS and WLAN) through RF MEMS (radio frequency microelectromechanical systems) to space probe design and ANN (artificial neural network) theory [34,35].
The famous Hilbert curve is a continuous fractal space-filling curve,
i.e., it fills the plane without leaving any gaps. Hilbert introduced it as a modification of the Peano curve [
28].
There are important differences between these two curves. Indeed, it is not possible to construct the Hilbert curve H through an IFS (while for the Peano curve, this procedure is applicable). The reason is that the steps in the Hilbert curve's construction are not self-similar, i.e., they are not divisible into a number of parts similar to the initial figure.
The original construction of the Hilbert curve is extraordinarily elegant: it starts with a square, while in the first step (see Figure 7), the curve $H_1$ connects the centers of the four quadrants by three line segments (having a size of one). In the second step, four copies (reduced by $1/2$) of this initial stage are made and placed into the quarters (see again Figure 7). In this way, the first copy is rotated clockwise and the last one counter-clockwise by 90 degrees. After this, the start and end points of these four curves are connected using three line segments (of a size of $1/2$), and we call the resulting curve $H_2$.
In the third iteration, the scaling is done again by $1/2$, and four copies are placed into the quadrants of the square (as in the first step). They are again connected by three line segments (of a size of $1/4$), obtaining $H_3$, and so on.
In Figure 7, it can be noticed that each successive stage consists of four copies of the previous one, connected with additional line segments. Therefore, the curve is scaled down by the ratio $1/2$, and four copies are made; so:

$$\sum_{i=1}^{4} \left(\frac{1}{2}\right)^{\dim_H H} = 1,$$

hence:

$$\dim_H H = \frac{\log 4}{\log 2} = 2.$$
Naturally, the topological dimension of
H is one, since it consists only of line segments. Therefore, the Hilbert curve is a fractal for all intents and purposes.
An alternative procedure to IFS is that of the so-called L-systems [
36].
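For instance, a minimal L-system sketch (Python; the rewriting rules below are the classical Hilbert-curve L-system with turtle symbols F, + and −, an assumption consistent with the L-system approach of [36] but not taken from this paper) generates the vertices of the order-n stage:

```python
def hilbert_points(order):
    """Vertices of the order-n Hilbert curve via the classical L-system:
    axiom 'A', rules A -> +BF-AFA-FB+ and B -> -AF+BFB+FA-."""
    rules = {"A": "+BF-AFA-FB+", "B": "-AF+BFB+FA-"}
    s = "A"
    for _ in range(order):
        s = "".join(rules.get(ch, ch) for ch in s)
    # Turtle interpretation: F = one step forward, +/- = turn by 90 degrees.
    x, y, dx, dy = 0, 0, 1, 0
    pts = [(x, y)]
    for ch in s:
        if ch == "F":
            x, y = x + dx, y + dy
            pts.append((x, y))
        elif ch == "+":
            dx, dy = -dy, dx     # turn left
        elif ch == "-":
            dx, dy = dy, -dx     # turn right
    return pts

pts = hilbert_points(4)   # 4**4 = 256 vertices filling a 16 x 16 grid
```

Each rewriting step quadruples the number of curve segments while halving their relative size, which is exactly the scaling used above to obtain $\log 4 / \log 2 = 2$.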
In
Figure 7, a Hilbert dipole is also shown, where the feed source point is placed at the point of symmetry for these two pre-fractals.
The Hilbert antenna is especially used in spatial communications, like RF MEMS design [
37] and, generally speaking, in each (telecommunication) system where the space available for the antenna is limited [
38].
4.2. The Results of Best and HCR Conditions
A fractal approach is not the only way to miniaturize an antenna; indeed, there exist a few particular non-fractal modifications of the classical von Koch dipole that can achieve the same performance [10].
In addition, it is clear that fractal geometry does not uniquely determine the electromagnetic behavior of the antenna. The geometrical configuration alone (fractal or non-fractal) may not be the significant factor that determines the resonant behavior of wire antennas: indeed, a fractal configuration alone does not represent a guarantee of the highest antenna efficiency [11].
The same applies to the loop antennas. It is well known that the main advantage of the fractal loop antennas is that they have a high radiation resistance on a “small” physical area. In Figure 8 (top side), three examples of non-fractal antennas are shown. They offer similar or, in some cases, improved performance over their fractal-antenna counterparts, like the Minkowski antenna. The reason is that the radiation resistance of an electrically-small loop, given by [10]:

$$R_r \approx 20\pi^2 \left(\frac{C}{\lambda}\right)^4,$$

where C is the loop circumference and λ is the working wavelength, is generally not valid for a loop antenna with complex geometry. However, there are a few small non-fractal loop antennas with similar or better performance than their fractal counterparts (see Figure 8, top side).
In order to investigate the significance of self-similarity in determining the multiband behavior of the fractal antennas, Steven R. Best has presented a comparison of the multiband behavior of the Sierpinski gasket and several modified gaskets where the major portions of the self-similar structure were modified or eliminated [
11].
His numerical simulations reveal that many of the self-similar fractal gap structures can be eliminated from the Sierpinski gasket, without modifying its multiband behavior.
Best showed how the total self-similar fractal gap structure is not the primary factor that determines the multiband behavior, because the Sierpinski gasket and modified Parany gasket antenna have the same behavior [
11]. Therefore, for all of the Sierpinski-based antennas, the multiband behavior depends on the small isosceles trapezia located in the center of the modified Parany gasket antenna, as shown in
Figure 8 (bottom side).
It would seem that some non-fractal geometries could be a good substitute for their fractal counterparts, but this is manifestly untrue.
Indeed, the results obtained by Best represent only a very few special cases, and these antennas do not belong to a family of radiators. Furthermore, the so-called HCR conditions can be considered [39]: they provide a necessary and sufficient condition for frequency-independent antennas. This criterion reveals that an antenna satisfies this property if and only if the radiating structure is both self-similar and origin-symmetric about a point. It is clear that some non-fractal radiators might satisfy this second condition, potentially giving them the same performance as a fractal antenna that is non-symmetric about a point.
4.3. The Entropy of a Fractal Antenna
In addition to the box-counting dimension, another convenient way to estimate the fractal dimension is the so-called generalized fractal dimension (or Rényi dimension) $D_\alpha$, given by [40]:

$$D_\alpha = \lim_{\delta \to 0} \frac{1}{\alpha - 1} \frac{\log \sum_{i=1}^{N(\delta)} p_i^\alpha}{\log \delta}. \qquad (21)$$

In this definition, $N(\delta)$ is the total number of boxes of size δ with $p_i \neq 0$, where also here, the most commonly used values of the logarithm base are 2 and e.
Considering the definition of the Rényi entropy Equation (7), it is clear that:

$$D_\alpha = -\lim_{\delta \to 0} \frac{H_\alpha(\delta)}{\log \delta}. \qquad (22)$$
As $\alpha \to 0$,

$$D_0 = -\lim_{\delta \to 0} \frac{\log N(\delta)}{\log \delta},$$

which is nothing but the box-counting fractal dimension. The numerator of the last equation is simply the Hartley entropy. It can be shown similarly, as for the definition of the Rényi entropy, that

$$D_1 = \lim_{\alpha \to 1} D_\alpha = \lim_{\delta \to 0} \frac{\sum_{i=1}^{N(\delta)} p_i \log p_i}{\log \delta}.$$

Therefore, the dimension $D_\alpha$ is a generalization of $D_1$, which is called the information dimension. Indeed, $D_1$ characterizes the information required for the determination of the location of a point in some cell i.
According to Equation (21), it also follows that:

$$D_2 = \lim_{\delta \to 0} \frac{\log \sum_{i=1}^{N(\delta)} p_i^2}{\log \delta}.$$

This quantity is called the correlation dimension, because it is very useful to detect chaotic behavior.
Taking still into account Equation (21), $D_\alpha$ is clearly a nonincreasing function of α, i.e., $D_{\alpha'} \le D_\alpha$ at $\alpha' > \alpha$: in particular, $D_2 \le D_1 \le D_0$.
Therefore, the generalized fractal dimension provides a direct measurement of the fractal properties of an object. Several values of the moment order α correspond to well-known generalized dimensions.
Equation (22) cannot be applied practically, and it is only possible to get an approximation by fixing a small value of δ, strictly greater than zero. Therefore, in applied sciences and engineering, Equation (22) becomes:

$$H_\alpha(\delta) \approx D_\alpha \log \frac{1}{\delta}, \qquad (25)$$

where $0 < \delta \ll 1$.
This equation shows us that the entropy of a region of size δ is a function of the box-counting fractal dimension $D_B$: the entropies of the analyzed regions (with size δ) can be calculated from the three spatial dimensions through Equation (25) [16].
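To illustrate Equation (25) in practice, the following sketch (Python; the chaos-game sampling, the cell sizes and the use of a right-angle variant of the Sierpinski gasket, chosen so that its cells align with the dyadic grid, are all assumptions made for this example) estimates $D_\alpha$ as the slope of the Rényi entropy against the logarithm of the cell size:

```python
import math
import random
from collections import Counter

def renyi_box_entropy(points, alpha, delta):
    """Renyi entropy H_alpha (in nats) of the box-occupancy probabilities p_i
    obtained by partitioning the plane into cells of side delta."""
    boxes = Counter((math.floor(px / delta), math.floor(py / delta)) for px, py in points)
    total = sum(boxes.values())
    probs = [c / total for c in boxes.values()]
    if abs(alpha - 1.0) < 1e-9:                  # Shannon case (alpha -> 1)
        return -sum(p * math.log(p) for p in probs)
    return math.log(sum(p ** alpha for p in probs)) / (1.0 - alpha)

# Chaos-game sample of a right-angle Sierpinski gasket on the unit square.
rng = random.Random(1)
verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
x, y = 0.1, 0.1
pts = []
for i in range(200100):
    vx, vy = rng.choice(verts)
    x, y = (x + vx) / 2.0, (y + vy) / 2.0
    if i >= 100:                                 # discard a short transient
        pts.append((x, y))

def dim_estimate(alpha, m1=3, m2=6):
    """D_alpha as the slope of H_alpha(delta) versus log(1/delta),
    evaluated between the cell sizes delta = 2^-m1 and delta = 2^-m2."""
    h1 = renyi_box_entropy(pts, alpha, 2.0 ** -m1)
    h2 = renyi_box_entropy(pts, alpha, 2.0 ** -m2)
    return (h2 - h1) / ((m2 - m1) * math.log(2.0))

d0 = dim_estimate(0.0)   # box-counting dimension, close to log 3 / log 2
d2 = dim_estimate(2.0)   # correlation dimension
```

Because the gasket with uniform chaos-game measure is a monofractal, all of the estimated $D_\alpha$ cluster around $\log 3 / \log 2 \approx 1.585$; taking the slope over two cell sizes, rather than a single one, cancels the constant offset in Equation (25).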
Right now, the Rényi entropy has to be computed for the geometric configuration of each fractal antenna. It is easy to create an algorithm for its computation using Equation (25). This procedure consists of the classical algorithm for the numerical estimation of affine RIFS-invariant measures; see [41,42]. The Rényi entropy will be computed through Equation (25), considering the logarithm of the cell size (see Figure 9 below).
With this procedure, it is possible to compute the entropy for a fractal radiator, but it must be completely modified for each class of fractal antennas.
However, the general definition of entropy for a small antenna, in order to better understand how the chaoticity of the structure may affect its performance, remains an open problem.