1. Introduction
Within the topic “variation of fundamental constants”, the Varying Speed of Light (VSL) theories occupy a peculiar position, owing to a quite vivid debate about their soundness and usefulness [1,2,3]. Most of the concerns relate to what can be summarized as the “dimensional vs. dimensionless” question: the speed of light $c$ is a dimensional quantity and, as such, any investigation of its variation is arguably misleading and not well grounded, because one can always define a set of units of length and time in which $c$ is constant. Namely, the units (dimensions) might be changing, but not the speed of light itself. That is why the only fundamental constants whose variation can reasonably be investigated are dimensionless quantities, for example, the fine structure constant. The tricky point is that, conversely, by fixing the “correct” units, the speed of light could equally be made variable [4,5]. It is worth noting that many such concerns might be alleviated if the speed of light were properly introduced at the Lagrangian level as a scalar field; this is done in [6], but the issue is not completely solved, because $c$ also appears in the line element, alongside the time coordinate (with the consequences described in [7]).
With these preliminaries in mind, in [8,9] we found that a constant speed of light, or a more general VSL theory, can be intimately related to a dimensionless parameter which is strictly equal to 1 if the speed of light is constant, and different from 1 if it is varying. The most interesting point is that such a parameter can be easily measured, because it is strictly connected to the typical correlation length imprinted in the clustering of galaxies, and is thus related to Baryon Acoustic Oscillations (BAO) observations. In a BAO survey, this correlation length is seen subtending an angle; as such, we determine an angular diameter distance from it. Starting from its definition (see Equation (1) and [8,9]), the angular diameter distance has a peculiar property: it is small for close objects, tending to zero for redshift $z \to 0$; it grows with redshift as we move further from us; then it reaches a maximum and starts to decrease. Thus, objects located at redshifts higher than a certain “maximum redshift” $z_M$ appear to be bigger than other objects at smaller redshifts. While this could sound weird, it is the consequence of a combination of many aspects (metric, spatial curvature). The crucial point is that, at $z_M$, one can define a dimensionless parameter involving the angular diameter distance and the expansion rate (the Hubble function $H$) measured through BAO, and the value of the speed of light, namely $D_A(z_M)\,H(z_M)/c_0$, which is equal to 1 only if the speed of light is constant and equal to $c_0$. Thus, any deviation from such a value would point toward a VSL scenario. In [8,9], we explored the possibility of measuring the speed of light through this parameter at the maximum redshift $z_M$ only; here, we will show how to use BAO surveys to measure a possible VSL signal over an extended redshift range.
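To make the statement above concrete, the following is a minimal numerical sketch (in Python) of the maximum of the angular diameter distance and of the dimensionless parameter at $z_M$; the values of $H_0$ and $\Omega_m$ are illustrative placeholders, not the fiducial values used in our analysis.

```python
import numpy as np
from scipy.integrate import quad

C0 = 299792.458                  # speed of light [km/s], assumed constant here
H0, OM = 70.0, 0.3               # illustrative flat-LCDM values (placeholders)

def H(z):                        # expansion rate [km/s/Mpc]
    return H0 * np.sqrt(OM * (1 + z)**3 + 1 - OM)

def D_A(z):                      # angular diameter distance (see Equation (1))
    integral, _ = quad(lambda x: C0 / H(x), 0.0, z)
    return integral / (1.0 + z)

zs = np.linspace(0.5, 3.0, 1001)
DA = np.array([D_A(z) for z in zs])
zM = zs[np.argmax(DA)]                    # the "maximum redshift"
print(zM, D_A(zM) * H(zM) / C0)           # the parameter is ~1 for constant c
```

For these illustrative parameters, $z_M \approx 1.6$, and the printed parameter is consistent with 1, as expected when the speed of light is constant.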
2. Build Up the Method: The Theory
The BAO clustering length can be measured in the radial and tangential directions, as $\delta z = r_s(z_d)\, H(z)/c(z)$ and $\delta\theta = r_s(z_d)/\left[(1+z)\, D_A(z)\right]$, where $r_s(z_d)$ is the sound horizon at the dragging epoch. Thus, the main observational outcomes of a BAO survey are the angular diameter distance $D_A$ and the expansion rate $H$. From now on, we will denote the tangible numerical outputs of BAO measurements as $D^{obs}_A$ and $H^{obs}$.
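As a sketch of how the two BAO modes translate into the observables, one may simply invert the relations above; the value of $r_s(z_d)$ below is a placeholder, and the inversion is written with a constant $c$ for simplicity.

```python
C0 = 299792.458    # speed of light [km/s]
r_s = 147.0        # sound horizon at dragging epoch [Mpc], placeholder value

def H_obs(delta_z):
    """Radial mode: delta_z = r_s * H / c  ->  expansion rate [km/s/Mpc]."""
    return C0 * delta_z / r_s

def DA_obs(delta_theta, z):
    """Tangential mode: delta_theta = r_s / [(1+z) D_A]  ->  D_A [Mpc]."""
    return r_s / ((1.0 + z) * delta_theta)
```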
If we assume a Friedmann–Robertson–Walker metric, a spatially flat universe (in the last section, we will relax this hypothesis), and that the redshift is defined in VSL theories as in the standard scenario [4,10,11,12], we can generalize the definition of the angular diameter distance to
$$D_A(z) = \frac{1}{1+z} \int_0^z \frac{c(z')\,\mathrm{d}z'}{H(z')}, \qquad (1)$$
where $c(z)$ is the speed of light as an (unknown) function of redshift. It is straightforward that this same relation also holds between the observational data, namely,
$$D^{obs}_A(z) = \frac{1}{1+z} \int_0^z \frac{c(z')\,\mathrm{d}z'}{H^{obs}(z')}; \qquad (2)$$
from this, it is clear that we do not need to express $c(z)$ as a function of any cosmological parameter, because our method is based entirely on the outputs of BAO observations.
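A minimal sketch of Equation (1) with a non-constant speed of light follows; the functional form of $c(z)$ is purely illustrative and is not the ansatz adopted later in the paper.

```python
import numpy as np
from scipy.integrate import quad

C0, H0, OM = 299792.458, 70.0, 0.3       # illustrative values

def H(z):
    return H0 * np.sqrt(OM * (1 + z)**3 + 1 - OM)

def c_of_z(z, eps=0.01):
    # toy percent-level deviation, growing smoothly with redshift
    return C0 * (1.0 + eps * z / (1.0 + z))

def D_A(z):                               # Equation (1), generalized to c(z)
    integral, _ = quad(lambda x: c_of_z(x) / H(x), 0.0, z)
    return integral / (1.0 + z)
```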
The focal point is that we do not know whether the speed of light appearing in Equation (2) is constant or not; but we can find this out. In fact, on one hand, we have the direct data:
$$\mathcal{D}(z) \equiv \partial_z\!\left[(1+z)\, D^{obs}_A(z)\right] = \frac{c(z)}{H^{obs}(z)}, \qquad (3)$$
where the second equivalence derives directly from Equation (2). In our calculations, we will need only the central expression, making direct use of $D^{obs}_A$. Instead, the very last expression should be compared with the reconstructed set of data:
$$\mathcal{D}^{rec}(z) \equiv \frac{c_0}{H^{obs}(z)}, \qquad (4)$$
which directly involves $H^{obs}$. Of course, in Equation (4), we need to assume explicitly that the speed of light is constant. Thus, we now have two possibilities: if
$$\mathcal{D}(z) = \mathcal{D}^{rec}(z), \qquad (5)$$
then the assumption we made in Equation (4), i.e., a constant speed of light, is correct. Otherwise, if
$$\mathcal{D}(z) \neq \mathcal{D}^{rec}(z), \qquad (6)$$
such an assumption is wrong and we might have a VSL. Moreover, we can potentially reconstruct how the speed of light changes with redshift using the ratio
$$\frac{c(z)}{c_0} = \frac{\mathcal{D}(z)}{\mathcal{D}^{rec}(z)}. \qquad (7)$$
Note that, in this way, we also avoid the “dimensionless vs. dimensional” debate, because we have defined, and will work with, a dimensionless ratio.
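Continuing the sketch above, the consistency check of Equations (3)–(7) can be run numerically; here, the derivative is taken with a simple finite-difference scheme.

```python
import numpy as np

z = np.linspace(0.01, 2.5, 500)
DA = np.array([D_A(zi) for zi in z])        # from the sketch above
direct = np.gradient((1 + z) * DA, z)       # central expression of Equation (3)
recon = C0 / H(z)                           # Equation (4): assumes constant c
ratio = direct / recon                      # Equation (7): estimate of c(z)/c0
print(ratio[::100])                         # deviates from 1 if a VSL is present
```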
3. Build Up the Method: In Practice
As we have no real BAO data with which to test our method, we have to create mock data. Thus, we need to define a fiducial theoretical background. There is no unique way to develop a VSL theory, but this is unimportant for our purposes. We have chosen to follow the approach introduced in [13], where a VSL minimally coupled with gravity is considered, leading to modified versions of the first Friedmann equation and of the continuity equation:
$$H^2 = \frac{8\pi G}{3}\rho - \frac{k\,c^2(t)}{a^2}, \qquad \dot\rho + 3\frac{\dot a}{a}\left(\rho + \frac{p}{c^2(t)}\right) = \frac{3k\,c(t)\,\dot c(t)}{4\pi G\, a^2},$$
where $\rho$ and $p$ are, respectively, the mass density and the pressure of any fluid in the Universe; $a$ is the scale factor; $k$ is the curvature index; $G$ is the universal gravitational constant; and $c(t)$ is the speed of light as a function of time. The initial assumption of spatial flatness ($k = 0$) clearly implies no effective change in either the continuity equation or the first Friedmann equation; the only contribution from the VSL theory comes not from the dynamics, but from the metric (see Equation (1)). Obviously, as we are neglecting any dynamics of the parameter assumed to vary [14], we also need to give an ansatz for $c(a)$; from [15], we use a form specified by an amplitude parameter and a transition scale factor $a_c$, where $a$ is the scale factor, and $a_c$ sets the transition epoch from some early-time value of $c$ to $c_0$ (now).
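Since the explicit ansatz of [15] is not reproduced here, the following stand-in illustrates only the qualitative behaviour: a smooth transition, controlled by $a_c$, from an early-time value towards $c_0$ today. Both the functional form and the parameter values are hypothetical.

```python
def c_of_a(a, c0=299792.458, n=0.01, a_c=0.05):
    """Toy transition: c -> (1 + n) * c0 for a << a_c, c -> ~c0 for a >> a_c."""
    return c0 * (1.0 + n * a_c / (a + a_c))
```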
Then, the mock data have to be consistent with present observations. Our fiducial cosmological background is derived from a slightly modified version of the baseline ΛCDM model from the Planck 2015 release, the base_plikHM_TTTEEE_lowTEB_lensing_post_BAO model in the Planck Legacy Archive. As far as we are concerned, this model is fully characterized by the dimensionless matter density today, $\Omega_m$; but the introduction of a VSL signal changes the dynamics of the universe. Depending on the VSL, we have to change the value of $\Omega_m$ to accommodate the observations. We have considered two different VSL scenarios: one, defined by its values of the amplitude parameter and of $a_c$, corresponds to a stronger variation at redshift 1.5–1.6; the other, with a different choice of the same parameters, has a weaker variation at redshift 1.5–1.6. We use this redshift as a reference because it is the approximate redshift range where the present consensus cosmological model should exhibit a maximum in $D_A$. In Table I of [9], we show how, in order to have a ΛCDM+VSL signal consistent with current observations, we have to change the parameter $\Omega_m$ accordingly in each scenario.
The next ingredients for our analysis are the accuracies expected from future surveys. In [16], the errors on $D_A$ and $H$ are reported, in redshift bins, for BOSS, DESI, and WFIRST-2.4; we also add the SKA forecasts from [17].
Once we have $D^{obs}_A$ and $H^{obs}$ and the corresponding errors, we randomly generate our sets of $D^{obs}_A$ and $H^{obs}$ from a multivariate Gaussian centered on the fiducial values, with a total covariance matrix built from the errors defined for each survey, plus a correlation factor between $D_A$ and $H$ [18]. Finally, in order to draw statistically meaningful conclusions, we do not rely on only one randomly generated set of $H$ and $D_A$; instead, we consider a large ensemble of simulations, i.e., we produce many observational scenarios compatible with a ΛCDM model plus the VSL signals specified above.
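A sketch of this mock-generation step, assuming per-bin Gaussian errors, follows; the fiducial values, error sizes, and the correlation factor r are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def mock_bin(DA_fid, H_fid, sigma_DA, sigma_H, r=0.4, n_sims=1000):
    """Draw n_sims correlated (D_A^obs, H^obs) pairs for one redshift bin."""
    cov = np.array([[sigma_DA**2,            r * sigma_DA * sigma_H],
                    [r * sigma_DA * sigma_H, sigma_H**2]])
    return rng.multivariate_normal([DA_fid, H_fid], cov, size=n_sims)

draws = mock_bin(DA_fid=1800.0, H_fid=100.0, sigma_DA=18.0, sigma_H=1.0)
```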
The main obstacle to the use of Equation (7) is that it relies on the definition of $\mathcal{D}$ given by Equation (3): we need to calculate derivatives with respect to redshift of a quantity represented by a discrete set of points with an intrinsic dispersion. Both of these properties degrade the calculation of the derivatives and inexorably lead to blown-up errors on any quantity based on such numerically calculated derivatives. We can, however, alleviate this problem: since we do not need a cosmological fit of $H$ or $D_A$ involving any standard cosmological parameter, we can fit our mock data with any analytical function. For example, $D_A$ is well fitted by a sixth-order redshift polynomial, requesting that $D_A(z) \to 0$ for $z \to 0$; while $H$ prefers a Padé approximant, which also satisfies the required physical conditions (a finite, positive value for $z \to 0$). We additionally require that the approximant stays regular (pole-free) over the fitted redshift range.
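A sketch of this fitting step is given below; the Padé orders are a plausible low-order choice for illustration, not necessarily the ones adopted in our analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def DA_model(z, *p):                 # sixth-order polynomial with DA(0) = 0
    return sum(pk * z**(k + 1) for k, pk in enumerate(p))

def H_model(z, a0, a1, a2, b1):      # low-order Pade-like rational function
    return (a0 + a1 * z + a2 * z**2) / (1.0 + b1 * z)

# z, DA_data, H_data would be one mock realization; noiseless values used here
z = np.linspace(0.1, 2.0, 20)
DA_data = np.array([D_A(zi) for zi in z])     # from the earlier sketches
H_data = H(z)
pDA, covDA = curve_fit(DA_model, z, DA_data, p0=np.ones(6))
pH,  covH  = curve_fit(H_model,  z, H_data,  p0=[70.0, 50.0, 10.0, 0.5])
```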
Once we have fitted both $D_A$ and $H$ with such functions, we can simply propagate the errors from the related sets of parameters and obtain the analytically reconstructed $D_A$ and $H$, with corresponding errors; from them, we can derive the $c(z)/c_0$ ratios and their errors through Equation (7).
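A sketch of this propagation step: sample the fitted parameters from their covariances and push each draw through Equation (7) to obtain confidence bands on $c(z)/c_0$.

```python
import numpy as np

rng = np.random.default_rng(1)
zs = np.linspace(0.1, 2.0, 200)

def ratio_from_params(p_da, p_h):
    DA_fit = DA_model(zs, *p_da)
    direct = np.gradient((1 + zs) * DA_fit, zs)   # Equation (3) on the fit
    recon = C0 / H_model(zs, *p_h)                # Equation (4)
    return direct / recon                         # Equation (7)

samples = np.array([ratio_from_params(rng.multivariate_normal(pDA, covDA),
                                      rng.multivariate_normal(pH, covH))
                    for _ in range(500)])
lo, med, hi = np.percentile(samples, [16, 50, 84], axis=0)   # ~1-sigma band
```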
4. Results and Conclusions
For each one of the simulations, in each redshift bin, we calculate the residuals with respect to a constant speed of light, i.e., $c(z)/c_0 - 1$. Then, we count the normalized number of simulations for which such residuals are strictly positive and/or negative, implying a clear detection of a non-constant $c$. Such residuals are calculated using the $1\sigma$, $2\sigma$ and $3\sigma$ limits derived from Equation (7), indicating a detection of the VSL signal at, respectively, the $1\sigma$, $2\sigma$ and $3\sigma$ confidence levels. In most of our simulations, we can detect the stronger VSL signal at a statistically significant level in the redshift range covered by SKA. Among the surveys that we have considered, SKA has the best performance, and its forecast accuracy is the minimum required to achieve the detection of such a VSL signal, if there is any. We also find that the weaker signal will hardly be detected: its detection rate is maximal in an intermediate redshift range, but is achieved in only a small fraction of our simulations; thus, we cannot conclude whether its detection can really be achieved or not.
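A sketch of this bookkeeping: for each simulation and redshift bin, flag whether the residual $c(z)/c_0 - 1$ lies strictly beyond the k-sigma limit, then average over the ensemble of simulations.

```python
import numpy as np

def detection_fraction(ratios, sigmas, k=3.0):
    """ratios, sigmas: (n_sims, n_bins) arrays of c(z)/c0 and its 1-sigma error.

    Returns, per redshift bin, the fraction of simulations whose residual
    c(z)/c0 - 1 excludes zero at the k-sigma level.
    """
    detected = np.abs(ratios - 1.0) > k * sigmas
    return detected.mean(axis=0)
```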
Finally, as pointed out in the previous sections, we have to resolve the degeneracy between a VSL and spatial curvature. If we relax the spatial flatness hypothesis, the definition of the angular diameter distance becomes
$$D_A(z) = \frac{1}{1+z}\, \frac{D_H}{\sqrt{\Omega_k}}\, \sinh\!\left[\sqrt{\Omega_k}\, \frac{D_C(z)}{D_H}\right],$$
where $\Omega_k$ is the dimensionless curvature density parameter today; $D_H = c_0/H_0$ is the Hubble distance; and the comoving distance is defined as $D_C(z) = \int_0^z c(z')\,\mathrm{d}z'/H(z')$, where we have made use of the general ansatz $c(z) = c_0\, g(z)$, with $g(z) \to 1$ for $z \to 0$. Now, Equation (3) is modified into
$$\partial_z\!\left[(1+z)\, D^{obs}_A(z)\right] = \frac{c(z)}{H^{obs}(z)}\, \cosh\!\left[\sqrt{\Omega_k}\, \frac{D_C(z)}{D_H}\right].$$
It is straightforward to check that, even if the speed of light were constant, we would still have some contribution from the curvature terms if $\Omega_k \neq 0$. From the Planck Legacy Archive, we consider the extension of the baseline ΛCDM model with a free curvature parameter, named base_omegak_plikHM_TTTEEE_lowTEB_BAO_H070p6_JLA_post_lensing, which provides tight lower and upper limits on $\Omega_k$. With these numbers, a realistic contribution from the spatial curvature is small over the redshift ranges of interest, and even the upper limit on $\Omega_k$ gives only a modest contribution. Clearly, such curvature signals would be even smaller than the weaker VSL signal, which we have stated to be undetectable by the surveys that we have considered. We might also detect a total signal of a certain amount without being able to decode how much of it depends on the VSL and how much on curvature. Given the actual constraints on spatial curvature, however, for any total signal large enough to be detected, the VSL contribution would have to constitute the dominant fraction of it. Thus, curvature should play a negligible role; we would be unable to discriminate between the two sources only in the case of smaller signals which, however, represent signals that could possibly be detected by future BAO surveys.
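A numerical sketch of the curvature contamination implied by the modified Equation (3): even with a constant $c$, a nonzero $\Omega_k$ rescales the direct data by $\cosh(\sqrt{\Omega_k}\, D_C/D_H)$. The value of $\Omega_k$ below is a placeholder, not the Planck limit.

```python
import numpy as np
from scipy.integrate import quad

C0, H0, OM, OK = 299792.458, 70.0, 0.3, 0.005    # placeholder Omega_k

def H(z):
    return H0 * np.sqrt(OM * (1 + z)**3 + OK * (1 + z)**2 + 1 - OM - OK)

def curvature_factor(z):
    DC, _ = quad(lambda x: C0 / H(x), 0.0, z)    # comoving distance [Mpc]
    DH = C0 / H0                                 # Hubble distance [Mpc]
    return np.cosh(np.sqrt(OK) * DC / DH)

print(curvature_factor(1.5) - 1.0)   # spurious fractional shift in c(z)/c0
```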