1. Introduction
It is not necessary to emphasize the importance of the concept of causality in science, and in the natural sciences in particular. The concept traverses all disciplines, and it is the subject of extensive research fueled by the exponentially increasing amount of available scientific data and computational power. Revealing causal relations between systems from the time series they produce is one of the most attractive challenges. The first major advancement was due to Granger, who used an auto-regressive framework for a practical implementation of the predictive causality principle of Wiener [1].
The very popular Granger [2] method has some theoretical and practical limitations. It is not able to detect a hidden common cause and, instead, indicates a false directional causal relation between the observed systems (for details of all the pros and cons cf. [3]). Several further methods have appeared in the last two decades (for a concise review, see Runge [4] or [5,6]). One of the most prominent is the convergent cross mapping method developed by Sugihara [7] to investigate deterministic dynamic systems, which essentially utilizes Takens' embedding theorem [8]. Stark [9,10] generalized Takens' result and showed the theoretical limitations of using it for stochastic dynamic systems. For deterministic dynamics, a new approach was presented in a recent work [11], based on comparing the dimensions of the attractors of the given systems and of their joint observation.
The present paper investigates the causal relation of a pair of dynamic systems (which might be deterministic or stochastic). We reveal facts that, to the best of our knowledge, have escaped the attention of previous studies. We show that, if there is dependence between the systems at the smallest possible positive time lag, then the information injected by a common driver is an i.i.d. sequence, i.e., shared observational noise. We also show that, if the pair is located in a non-abstract physical space where the speed of information transfer is known, then the direct causation and common cause cases can be distinguished, which is, in general, theoretically impossible.
Basic Definitions
First, we provide the framework of our investigation. Our aim is to find the causal relationship between two stochastic dynamic systems X and Y, from which we observe the time series $x(t)$ and $y(t)$.
Assumption 1. We assume that there is a set of hidden systems driving $x$, another set driving $y$, and an external source of noise W, such that the joint process has a well-defined joint distribution. The hidden series are unobserved and are not i.i.d. (see Figure 1). In what follows, we label each hidden series according to whether it influences $x$ or $y$.
For brevity, we will use the multi-index $d$ collecting the dimensions of all the involved variables.
Assumption 2. The external noise is modeled by an unobserved i.i.d. sequence $W(t)$ and affects all the systems through independent components. Furthermore, $W(t+1)$ is independent of the past of all the systems.
Assumption 3. The process is stationary.
Assumption 4. The causal structure of the time series is time-invariant and non-random.
In what follows, we use the expression “drives” for all the terms “causes”, “influences” and “injects information” in relation to dynamic systems.
Following [12], we use the following model. The visible and invisible systems can be described by a $p$-order Structural Vector Auto-regressive (SVAR($p$)) model,
$$ z(t) = \sum_{k=1}^{p} A_k\, z(t-k) + w(t), \qquad (1) $$
where $z(t)$ collects the observed and hidden variables, $w(t)$ is the external noise, and the initial state follows the stationary distribution of the system. (It is a process of dimension $d$, where $d$ is the multi-index of the dimensions of the variables, and $p$ is the order of the auto-regression.) The recursion can clearly be transformed, with time delay embedding, into a higher-dimensional first-order one; in particular, with
$$ M(t) = \big(z(t), z(t-1), \dots, z(t-p+1)\big), $$
it reads, in short,
$$ M(t+1) = F\big(M(t), W(t+1)\big), \qquad (2) $$
with $W(t+1)$ collecting the noise terms entering at time $t+1$.
We make the same restriction as in [12] that (2) must be recursive in the variables, which ensures that there is no directed functional cycle. Variables written with capital letters denote the same time-delay embedding applied to the corresponding variables of (1).
Assumption 5. The process M is exact (cf. Definition 4.3.2 in [13]). Exactness means that, if the process is started from a set of positive probability, then, after a long time, the set in which it can be found has probability one. It is natural to assume exactness: we work with an observation, and the support of the observed process is, for us, the whole set on which the process can run, which consequently has probability one. On the other hand, exactness implies mixing for stationary processes, and, at the same time, a mixing stationary process is $\alpha$-mixing (or strong mixing). Let us note here that strong mixing also implies ergodicity, but we do not need that fact (see also [13]).
In our discovery scheme, we may allow instantaneous causation between all variables; however, we do not elaborate on that case here. For brevity, that is not reflected in (2). We note that a system like (2) with contemporaneous interaction but without a directed cycle can be rewritten into the form of (2) using time shifts thanks to the acyclic recursivity.
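To make the model setup concrete, here is a minimal simulation sketch (our own illustration, not code from the paper; the coefficients, the lags, and the AR(1) form of the hidden driver are arbitrary choices): a hidden, non-i.i.d. process L drives both observed series x and y with different delays, and independent i.i.d. noise enters every component, in line with Assumptions 1 and 2.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10_000

# Hidden driver L: an AR(1) process (not i.i.d., in line with Assumption 1).
# Observed x, y: first-order recursions driven by L with lags 1 and 3,
# plus independent i.i.d. noise (Assumption 2).
L = np.zeros(T)
x = np.zeros(T)
y = np.zeros(T)
for t in range(3, T):
    L[t] = 0.8 * L[t - 1] + rng.normal()
    x[t] = 0.5 * x[t - 1] + 0.7 * L[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.7 * L[t - 3] + rng.normal()

# x(t) and y(t+2) share the information injected by L, although x does not drive y.
print(np.corrcoef(x[: T - 2], y[2:])[0, 1])
```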
Definition 1. We will say that X drives Y if there is a function $f$ s.t.
$$ y(t+1) = f\big(Y(t), X(t), L(t), w(t+1)\big), \qquad (3) $$
where $L$ stands for the set of latent variables, $w$ is an i.i.d. sequence that is independent of $\big(X(s), Y(s), L(s)\big)_{s\le t}$, and $X(t)$ cannot be omitted without violating the validity of (3). Let us explain that key definition. It says that there is no function $g$ such that
$$ y(t+1) = g\big(Y(t), L(t), w(t+1)\big), $$
which would make explicit that $y(t+1)$ can be created without $X$. Here, one should also observe that the i.i.d. part is the same as in (3), and there is no possibility for an i.i.d. sequence to be hidden in $L$.
2. Causal Discovery Schemes
The literature on causal discovery is huge. This work has been inspired by two recent papers, with their strengths and limitations. First, we found the framework defined by Malinsky in [12] very appealing, while the complex set of assumptions and the algorithm suggested in [14] presented an essential challenge. The algorithm in [12] is an extension of [15,16]. It provides a theoretically complete recall of the underlying causal structure, at the price that some relations are marked undetermined and some causal relations are not, or only partially, revealed.
In [14], in addition to many other assumptions, it is assumed that all hidden processes that influence an observed one have no memory (Assumption A9 in [14]). That assumption and A6 in [14] cannot be checked. In [12], such restrictions are eliminated. That paper and most of the works based on Pearl's DAG analysis have theoretical limitations, as admitted in [12]. In what follows, we investigate some situations in which that limitation can be relaxed.
Information from X to Y can be transferred along a chain of direct causal links, that is, along a directed path from X to Y. The length of the path (the number of intermediate components plus one) is denoted by $\ell$. Such a path has a starting and an ending time (for an arbitrary starting time $t$); their difference is the time lag.
Assumption 6. We assume that, with some background information, the minimal lag $\tau_0$ between the systems X and Y can be determined.
We consider the smallest lag $\tau$ for which dependence can be detected in the "direction" X to Y:
$$ \tau = \min\big\{\, k>0:\ x(t) \text{ and } y(t+k) \text{ are not conditionally independent given the past of both series} \,\big\}. $$
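As a minimal illustration of how the smallest lag with detectable dependence might be located in practice (our own sketch; a plain correlation test stands in for the conditional tests used in the paper, and max_lag and alpha are arbitrary choices):

```python
import numpy as np
from scipy import stats

def smallest_dependent_lag(x, y, max_lag=20, alpha=0.01):
    """Return the smallest k > 0 at which x(t) and y(t+k) show dependence.

    A Pearson correlation test is used only as a cheap stand-in for the
    conditional mutual information tests discussed in the text.
    """
    for k in range(1, max_lag + 1):
        _, p = stats.pearsonr(x[:-k], y[k:])
        if p < alpha:
            return k
    return None
```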
2.1. The Decomposable Case
We introduce our notation. In order to save space, let $a$ stand for either of the observed series, $a=x$ or $a=y$, and let $b$ stand for the other one. Let $I$ stand for the Shannon entropy/differential entropy based mutual information; conditional mutual information between elements of the time series $a$ and $b$ (and similarly for other series) is defined accordingly. A segment from $k$ to $l$ of a time series $a$ is denoted by $a_k^l = \big(a(k), \dots, a(l)\big)$. Such segments are used in the condition, representing a part of or the full past. In order to investigate whether there is information transfer from $B$ to $A$ with a given time lag $k$, we use the conditional mutual information between $a(t+k)$ and $b(t)$ given the full past of both series, and we denote it by $I_{B\to A}(k)$. That is, we define the conditional mutual information
$$ I_{B\to A}(k) = I\big(a(t+k);\, b(t) \mid a_{-\infty}^{t+k-1},\, b_{-\infty}^{t-1}\big), $$
where we set $a_{-\infty}^{t} = \big(\dots, a(t-1), a(t)\big)$, and similarly for $B$ and other variables.
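As a minimal sketch of how the above quantities might be estimated (our own illustration; it uses the Gaussian, covariance-based approximation of conditional mutual information, and a finite history of a chosen length stands in for the full past):

```python
import numpy as np

def gaussian_cmi(a, b, c):
    """Gaussian approximation of I(a; b | c) in nats.

    a, b: 1-D sample arrays; c: 2-D array of conditioning variables
    with shape (n_samples, n_conditions).
    """
    def logdet_cov(*cols):
        m = np.column_stack(cols)
        cov = np.atleast_2d(np.cov(m, rowvar=False))
        return np.linalg.slogdet(cov)[1]

    return 0.5 * (logdet_cov(a, c) + logdet_cov(b, c)
                  - logdet_cov(c) - logdet_cov(a, b, c))

def cmi_transfer(x, y, lag, past=5):
    """Estimate I(y(t+lag); x(t) | last `past` values of both series)."""
    t = np.arange(past, len(x) - lag)
    cond = np.column_stack([x[t - j] for j in range(1, past + 1)]
                           + [y[t + lag - j] for j in range(1, past + 1)])
    return gaussian_cmi(y[t + lag], x[t], cond)
```

For Gaussian data this estimator is consistent; for general distributions, nearest-neighbor estimators are the usual choice.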
Proposition 1. Let, for $\{A,B\}=\{X,Y\}$, $I_{B\to A}$ denote the conditional mutual information defined above, evaluated at the smallest lag at which dependence is detected in the direction from B to A, and let $I_k = I\big(x(t);\,y(t+k)\mid x_{-\infty}^{t-1},\,y_{-\infty}^{t+k-1}\big)$ for $0<k<\tau_0$. Under Assumptions 1–6, the following implications hold.
Relation 1. Logical relations between conditional mutual information values and causal relations, where CD stands for Common Driver and $\to$ / $\nrightarrow$ stand for "drives" / "does not drive".
Left table (driving):
  $I_{B\to A}=0$ and $I_{A\to B}>0$ : $B \nrightarrow A$ and $A \to B$;
  $I_{B\to A}>0$ and $I_{A\to B}=0$ : $A \nrightarrow B$ and $B \to A$;
  $I_{B\to A}>0$ and $I_{A\to B}>0$ : $A \to B$ and $B \to A$;
  $I_{B\to A}=0$ and $I_{A\to B}=0$ : no driving in either direction.
Right table (common driver):
  $I_k=0$ for all $0<k<\tau_0$ : no CD;
  $I_k>0$ for at least one $0<k<\tau_0$ : CD.
In the right part of the table, $=0$ means that $I_k=0$ holds for all $0<k<\tau_0$, while $>0$ means that there is at least one such $k$ for which $I_k>0$.
The proposition summarizes the possible inferences in a concise way. In Relation 1, the headers contain the list of possible combinations and the possible causal scenarios. We have the direct product of the two lists of cases collected in the two tables. The header of each table contains, on the left, the quantities that are decisive and, on the right, the possible causal scenarios. As an example, in the left table, the first row shows that if and only if $I_{B\to A}=0$ but $I_{A\to B}>0$ (i.e., the latter differs significantly from zero), then B does not drive A but A drives B. In the right table, if $I_k=0$ holds for all $0<k<\tau_0$, that means that there is no common information between members of the series at lags below $\tau_0$, while, in the opposite case, there should be a common driver, given that there is shared information that cannot be attributed to driving with a lag below $\tau_0$. If $I_{X\to Y}>0$ (or $I_{Y\to X}>0$), then causation between X and Y and a common driver may coexist, and we cannot separate those models. In the next section, we provide some observations on that situation.
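The decision logic of Relation 1 can be sketched as follows (our reading of the table; the threshold eps is an ad hoc stand-in for a proper significance test):

```python
def causal_verdict(i_x_to_y, i_y_to_x, i_below_lag, eps=1e-3):
    """Combine estimated conditional mutual informations along the lines of Relation 1.

    i_x_to_y, i_y_to_x : CMI at the smallest detected lag in each direction.
    i_below_lag        : CMI values at lags below the minimal possible lag
                         (the common-driver indicator of the right table).
    """
    common_driver = any(v > eps for v in i_below_lag)
    verdict = []
    if i_x_to_y > eps:
        verdict.append("X drives Y" if not common_driver
                       else "X drives Y and/or a common driver (not separable)")
    if i_y_to_x > eps:
        verdict.append("Y drives X" if not common_driver
                       else "Y drives X and/or a common driver (not separable)")
    if not verdict:
        verdict.append("no driving detected")
    if common_driver:
        verdict.append("hidden common driver indicated")
    return verdict
```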
2.2. The Confounder Case
We assume that $I_{X\to Y}>0$ but $I_{Y\to X}=0$. If $\tau$ denotes the smallest lag at which this dependence is detected, we can investigate the common information between $x(t)$ and $y(t+\tau)$. Unfortunately, these variables may have a confounder; therefore, we cannot tell which causal relation is behind the dependence. However, some internal structure can be revealed. In line with the assumptions $I_{X\to Y}>0$ but $I_{Y\to X}=0$, we assume that
$$ I\big(x(t);\,y(t+k)\mid x_{-\infty}^{t-1},\,y_{-\infty}^{t+k-1}\big)=0 \quad\text{for all } k>0,\ k\neq\tau. \qquad (7) $$
Let $a(t)$ be the information that is passed from $x(t)$ to $y(t+\tau)$, or, in the common driver case, from an $L$ to both (whichever information transfer takes place). We also let $b(t)$ be the information passed from $L$ to $x(t)$, as Figure 2 shows.
From (7), we have that $a(t)$ is independent from the past of both series and that $a(t)$ is independent from $a(s)$ for $s\neq t$. Thus, we have that the information injected into $x$ and $y$ from L is an i.i.d. sequence. A similar argument shows that the information passed from $x(t)$ to $y(t+\tau)$ is independent from the past. We still cannot decide whether X drives Y or L drives both; however, in the latter situation, we may say that L emits observational noise for X and does not influence its evolution (the value of $x(t+1)$). Alternatively, we may consider $b$ as the "part" of X that is injected into Y. Let us note that L itself is not necessarily an i.i.d. sequence, but, from the point of view of its impact on X and Y, that is indifferent.
One may appeal to the Occam's razor principle (if other background knowledge does not dictate otherwise) and assume that L itself is an i.i.d. process. Whether $b$ is part of X or external noise cannot be decided without further knowledge; we may refer again to the Occam's razor principle and assume that there is no third system acting as a common driver, but rather that X injects an i.i.d. sequence into Y.
2.3. Geodesic Spaces
Now, we investigate the case when the subsystems of M are located in a geodesic metric space with unique geodesics between any pair of points. We assume that the information transfer speed is uniform and constant in the space, regardless of the location of the source and target. Under that assumption, we can speak interchangeably about distance in space and time: we write $t(A,B)$ for the information transfer time between the locations of A and B, which is proportional to their distance $d(A,B)$. In particular, the minimal lag $\tau_0$ of Assumption 6 is the geodesic transfer time $t(X,Y)$.
2.4. Strict Reversed Triangular Inequality
If the detected lag satisfies $\tau > t(X,Y)$, then a common driver $L$ producing the dependence between $x(t)$ and $y(t+\tau)$ would have to inject its information so that it reaches Y exactly $\tau$ later than X, i.e., $t(L,Y) = t(L,X) + \tau$, and therefore
$$ t(L,X) + t(X,Y) < t(L,Y), \qquad (8) $$
the reversed, strict triangular inequality. Hence, if there is information shared between $x(t)$ and $y(t+\tau)$, no L can be a common driver of X and Y (cf. Figure 3), so a direct driving should take place from X to Y.
2.5. Strict Triangular Inequality
On the other hand, if the detected lag satisfies $\tau < t(X,Y)$, then for any L that could produce it we have $t(L,Y) - t(L,X) = \tau < t(X,Y)$, i.e.,
$$ t(L,X) + t(X,Y) > t(L,Y), \qquad (10) $$
the strict triangular inequality. If $x(t)$ and $y(t+\tau)$ have positive conditional mutual information conditioned on the past, then only the common driver can produce it, not causation, because information from X cannot reach Y in less than $t(X,Y)$ (see Figure 4).
2.6. The Equality, the Confounder Case
Finally, if $\tau = t(X,Y)$, so that a common driver would have to satisfy $t(L,Y) = t(L,X) + t(X,Y)$, we have a confounder. If the metric space has a unique geodesic from L to Y, then X should lie on that geodesic of L and Y, and this means that the information from L either enters X along the path to Y or avoids it in a tricky way by an infinitesimal detour, as Figure 5 depicts.
In the former case, we have no confounder but the causal chain $L \to X \to Y$. This is a situation that, again, cannot be resolved without additional information about the actual systems under scrutiny. Economists call such an L an instrumental variable.
Now, let us recall that the inequalities (8) and (10) read, in terms of distances, as
$$ d(L,X) + d(X,Y) < d(L,Y) \qquad (13) $$
and
$$ d(L,X) + d(X,Y) > d(L,Y). $$
The latter one is the strict triangular inequality and the former one is its converse (both with strict inequality). Here, we arrive at the interpretation of causation in M. If it is a system in an abstract space without metric properties, there is no point in speaking about distances in it, and there is no link between information transfer time (lag, in short) and distances.
On the other hand, if the system M is located in a geodesic metric space, the geodesics are unique, the information propagates along the geodesics, and the information transfer has a constant speed, then distances are proportional to the delays, with the same constant factor for all members. The triangular inequality is thus inherited from distances to lags. In the case of a metric space with unique geodesics, like the Euclidean, hyperbolic and spherical ones (except when X and Y are positioned at opposite poles of the sphere), the triangular inequality holds, and thus (13) is impossible, and L cannot be a common driver that mimics driving or acts in parallel to a driving between X and Y. Let us note that the triangular inequality holds for space-like vectors in Minkowski space, while its converse holds for time-like positions. Finally, the case of exact equality needs further investigation.
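Sections 2.4–2.6 can be condensed into the following decision sketch (our own illustration for the Euclidean case; the coordinates, the uniform speed, and the tolerance are hypothetical inputs):

```python
import math

def classify_by_geometry(pos_x, pos_y, detected_lag, speed, tol=1e-6):
    """Classify the causal situation from the detected lag and the geometry.

    pos_x, pos_y : coordinates of X and Y in Euclidean space (tuples);
    detected_lag : smallest lag at which dependence between x and y is found;
    speed        : uniform information transfer speed along geodesics.
    """
    transfer_time = math.dist(pos_x, pos_y) / speed  # geodesic travel time X -> Y
    if detected_lag > transfer_time + tol:
        # A common driver would need the reversed triangle inequality (Section 2.4).
        return "driving from X to Y; a common driver is geometrically impossible"
    if detected_lag < transfer_time - tol:
        # Driving cannot beat the geodesic travel time (Section 2.5).
        return "only a hidden common driver can explain the dependence"
    # Equality: X lies on the geodesic from a candidate L to Y (Section 2.6).
    return "confounded: driving and a common driver cannot be separated"
```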
In the case of different transfer speeds, the picture is more complex, and the above geometric consideration is applicable only in particular settings. In the human brain, information transfer has different speeds depending on the transfer mode: via sequences of neural cells, long axon bundles, or volume or surface currents. The transfer speed also depends on the number of intermediate relay nodes of the network. Consequently, the causality analysis of brain regions needs detailed information on the connection type and speed between them. It is likely that in many other topical areas, like climate research and geophysics, specific knowledge of the metric properties and transfer speed may contribute to the success of causal discovery. In other areas, there is no information about the temporal arrangement of the unobserved factors, and consequently revealing a perfect description of the causal structure seems impossible.
2.7. Conditions and Mixing
Let us recall here that all the methods based on Pearl's DAG analysis use d-separation (or causal Markovness) based on a conditional independence test (CIT) in which the parents are the conditioning variables. As such, they need access to the parents, which is impossible if those are not observed, and the computation cost can be prohibitive for large networks. Note that d-separation uses the parents as a cut set in the DAG. In Section 2, we used the full past of both observed processes. In practice, it is impossible to put the whole past into the condition; therefore, we should work with a shorter history. Let us consider, as an example, the case when B does not drive A, which means that there is no information transfer from B to A, and investigate $I_d(k)$, the same conditional mutual information but computed with only the last $d$ values of both series in the condition. If, in addition, the below-lag conditional mutual informations vanish, i.e., there is no hidden common driver, one can show that
$$ I_d(k) \to 0 \quad \text{as} \quad d \to \infty, $$
where $I_d(k) = I\big(a(t+k);\, b(t) \mid a_{t+k-d}^{t+k-1},\, b_{t-d}^{t-1}\big)$. For the proof, see Appendix A.
With this argument, we have that the convergence of the conditional mutual information to a positive constant or to zero determines whether there is a driving between X and Y and whether there is a common driver (as indicated in Relation 1). Under Assumption 5, it is evident that, if there is a hidden common driver, the information is passed along a fixed-length path from the common cause to X and Y, and its effect on the dependency does not diminish. If there is no common driver, the exchanged information should traverse longer and longer paths, and the conditional mutual information (CMI) should go to zero as d goes to infinity.
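The convergence check suggested by this argument can be sketched as follows (a minimal illustration; cmi_estimator is any finite-history CMI estimate, e.g., the Gaussian sketch of Section 2.1, and the depths and threshold are arbitrary choices):

```python
def common_driver_check(x, y, lag, cmi_estimator, depths=(2, 4, 8, 16, 32), eps=1e-3):
    """Inspect how the truncated-history CMI behaves as the history length d grows.

    cmi_estimator(x, y, lag, past=d) must return an estimate of the conditional
    mutual information between y(t+lag) and x(t) given the last d values of both
    series. Decay towards zero suggests no hidden common driver; levelling off at
    a positive constant suggests that one is present (cf. Relation 1).
    """
    values = {d: cmi_estimator(x, y, lag, past=d) for d in depths}
    verdict = ("no common driver indicated" if values[max(depths)] < eps
               else "hidden common driver indicated")
    return values, verdict
```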
The conditional independence test (and proper estimate of CMI) has recently been the focus of research motivated by applications in machine learning and artificial intelligence. This is known to be a challenging task (cf. [6,12,17,18,19]).
3. Related Works and Discussion
There are numerous extensions and refinements of the original PC algorithm that Pearl developed [20]. This applies to the study of causal discovery of dynamic systems based on observed time series as well. We mention some prominent works [4,6,12,21,22] and their bibliographies for further reading (see also the extended surveys [5,23]). The recent works [12,14] (see also [23]) take a very similar approach to the present one. In particular, we also use the structural modeling framework; however, we limit our focus to the discovery of the causal relation between a pair of systems. The method can be extended to the study of many time series by considering vector-valued observations and/or many pairwise investigations.
The capabilities and limitations of causal discovery algorithms were investigated in detail in the seminal works [15,16,20,24] and recently in [14,21]. The recent generalizations are complete. They extend the labeling of the edge ends of classical DAGs, while completeness does not mean that all relations are well specified. Completeness means that all the possible MAGs (Maximal Ancestral Graphs) can be created.
In this paper, we used an essential assumption and two unavoidable approximations. First, we assumed that the continuous-time process can be inferred using a discrete-time, limited-resolution time series observation. Next, we assumed that the discrete-time process can be well approximated by an order-p SVAR model. Finally, if the processes contain continuous variables, the condition is not restricted to a single state value but to a set of them, and, as a consequence, it does not perfectly block the information flow between the marginal variables. This deficiency might be eliminated by the local permutation method proposed by Runge in [19].
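Since blocking the information flow with a continuous-valued condition is the hard step in practice, here is a sketch of a plain permutation surrogate test for conditional independence (our own simplified stand-in, not the local permutation scheme of [19]; cmi_estimator can be, e.g., the Gaussian sketch from Section 2.1):

```python
import numpy as np

def permutation_cmi_test(a, b, c, cmi_estimator, n_perm=200, seed=0):
    """Plain permutation test of H0: I(a; b | c) = 0.

    a, b : 1-D sample arrays; c : 2-D array of conditioning variables.
    cmi_estimator(a, b, c) must return an estimate of I(a; b | c).
    Note: globally shuffling b also destroys its dependence on c, which is why
    local (neighborhood-restricted) permutations are preferred in [19].
    """
    rng = np.random.default_rng(seed)
    observed = cmi_estimator(a, b, c)
    null = np.array([cmi_estimator(a, rng.permutation(b), c)
                     for _ in range(n_perm)])
    p_value = (1 + np.sum(null >= observed)) / (n_perm + 1)
    return observed, p_value
```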