1. Introduction
Symmetry plays a key role in physics and other sciences. One important example is the so-called Brownian motor (e.g., [1,2,3,4,5,6]), whose function hinges on the very presence of symmetry-breaking. As its name indicates, it produces uni-directed motion and works even in the absence of any net macroscopic forces and potential gradients via noise-induced transport [2]. Specifically, unlike man-made deterministic motors, where noise degrades performance, the Brownian motor works in a noisy environment far from equilibrium in the presence of spatial asymmetry; thermal fluctuations are preferentially rectified in one direction due to the asymmetry, which allows fluctuations in the favoured direction while blocking those in the opposite direction [4].
It is a useful mathematical model of molecular motors [5] of size O(1–100) nanometres in living organisms, which play a vital role in organising and orchestrating various transport processes and movement in cells. Due to their small size, the kinetic energy they produce is comparable to the thermal fluctuation energy, and consequently they have small inertia. Thus, their motion is approximated by an overdamped stochastic process. Important examples include myosin [7], responsible for muscle contraction in cardiac and skeletal muscles, or kinesin/dynein for pulling cargos (e.g., organelles). They have the capability of producing force directly rather than via an intermediate energy, by converting chemical energy, e.g., adenosine triphosphate (ATP), to kinetic energy.
One of the key questions is whether there is an optimal level of fluctuations that maximises the motor performance. Previous works have suggested that the answer is likely to depend on how the motor performance is defined, such as by the current, work or different types of efficiencies [1], and on whether an external force introduces an additional symmetry-breaking in time [8,9,10]. For instance, in the rocked thermal ratchet model, where a spatially periodic sawtooth (asymmetric) potential is rocked periodically in time by a symmetric square wave force, the optimal current was obtained analytically for a finite fluctuation level (temperature) D under the adiabatic approximation [8] in the limit of a very slow time-variation. Using a similar adiabatic approximation, ref. [9] showed that the peak of efficiency is different from the peak of current and that the efficiency degrades as the fluctuation increases. However, numerical simulations without the adiabatic approximation [10] showed different results, the efficiency being optimised for a finite D. On the other hand, even under the adiabatic approximation, the efficiency can be a peaked function of D if temporal asymmetry is introduced in the external force [11]. These are some examples demonstrating the importance of exploring a broad range of parameter values and calculating a fully time-dependent solution without making approximations such as slow or fast variation [12].
Given strong fluctuations, a Brownian motor constitutes an important example of non-equilibrium complex systems where traditional equilibrium thermodynamics or statistical physics do not hold. Therefore, a Brownian motor provides a useful framework in which to develop or consolidate/test newly emerging theories of non-equilibrium complex systems. In particular, far from equilibrium, statistical properties change with time and the time-evolution of a system does not obey time-symmetry (time-irreversibility), even when the external force is symmetric in time. As measures of time-irreversibility, entropy production, fluctuation theorems (e.g., see [13,14,15] and references therein), etc., have been investigated. Furthermore, since the proposal of Feynman's ratchet, there has been growing interest in information theory [13,16,17,18] to understand or else optimise a Brownian motor [19,20,21,22,23].
In this paper, we propose to investigate the dynamics of a Brownian motor from the point of view of geometry and distance (metric tensor) by utilising information geometric theory, that is, the application of differential geometry to probability and statistics [18,24,25,26]. It is a powerful tool for elucidating the disparity between different probabilities as well as for linking complexity and geometry (e.g., see [18,24] and references therein). To capture temporal variation, our focus will be on the path-dependent information geometric concepts (information length and rate) that quantify the time evolution of a system in terms of a dimensionless distance in a statistical space [18,27,28,29,30,31,32,33,34,35,36,37,38], or the change in information. Their key properties are summarised in Section 2.
The main aim of this paper is to investigate exact time-dependent solutions of a Brownian motor under different conditions by varying the values of the parameters over an unprecedentedly wide range. By utilising an asymmetric sawtooth periodic potential and three different types of periodic forcing G(t) (sinusoidal, square and sawtooth waves) with period T and amplitude A, we investigate the performance (mean current, Stokes efficiency) of a rocking ratchet in light of thermodynamic quantities (energetics, entropy production, entropy flow) and the path-dependent information geometric measures. For each G(t), we calculate exact time-dependent probability density functions (PDFs) by numerically solving the Fokker–Planck equation [39] under different conditions by varying T, A and the strength of the stochastic noise D. It is worth noting that this paper focuses on elucidating the role of the path-dependent information geometry in the efficiency of the Brownian motor and on comparing it with some of the popular measures of irreversibility (e.g., entropy production).
The remainder of this paper is organised as follows.
Section 2 provides our model and key thermodynamic measures, motor efficiency and the path-dependent informational geometry measures.
Section 3 provides our numerical methods and key diagnostics.
Section 4 and
Section 5 provide results and discussions. We conclude in
Section 6. We provide general thermodynamic relations in
Appendix A and
Appendix B to make the paper self-contained.
2. Model
To gain key insight, we consider a rocking ratchet model [8,9,10] governed by the following Langevin equation
$$\frac{dx}{dt} = f(x,t) + \xi(t), \qquad f(x,t) = -\partial_x U(x,t).$$
Here, f(x,t) is the force due to the potential U(x,t). The potential contains three parts, $U(x,t) = V(x) - x\,G(t) - Fx$: V(x) is a spatially periodic potential with period L, $-x\,G(t)$ is a time-dependent (rocking) potential given by a function G(t) that is periodic in time with period T, and $-Fx$ is a potential due to a constant force F. We note that T in this paper denotes the temporal periodicity of G and not temperature. The noise $\xi(t)$ is assumed to have a zero mean $\langle \xi(t)\rangle = 0$ and the strength D with the following property
$$\langle \xi(t)\,\xi(t')\rangle = 2D\,\delta(t-t'),$$
where the angular brackets denote the ensemble average over $\xi$. We note that we are using the notation D for the fluctuation in the heat bath (temperature) instead of T (the periodicity of the force G).
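For illustration, a minimal particle-level simulation of this Langevin equation is sketched below in Python, assuming a unit friction coefficient, a sinusoidal rocking force and an illustrative asymmetric sawtooth potential; the shape parameters (L = 1, asymmetry a, barrier height V0) and all numerical values are hypothetical choices for demonstration rather than the settings used later in the paper.

```python
import numpy as np

def sawtooth_force(x, L=1.0, a=0.8, V0=1.0):
    """Force f(x) = -V'(x) for an asymmetric sawtooth potential of period L that
    rises over [0, a*L] and falls over [a*L, L] (illustrative shape parameters)."""
    xm = np.mod(x, L)
    return np.where(xm < a * L, -V0 / (a * L), V0 / ((1.0 - a) * L))

def mean_velocity(T=1.0, A=4.0, D=0.1, F=0.0, n_periods=20, dt=1e-3,
                  n_particles=2000, seed=0):
    """Euler-Maruyama integration of dx/dt = -V'(x) + G(t) + F + xi(t) with
    <xi(t) xi(t')> = 2 D delta(t - t') and a sinusoidal rocking G(t)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, n_particles)   # initial positions within one spatial period
    x0 = x.copy()
    n_steps = int(round(n_periods * T / dt))
    for n in range(n_steps):
        G = A * np.sin(2.0 * np.pi * (n * dt) / T)
        drift = sawtooth_force(x) + G + F
        x = x + drift * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_particles)
    # net displacement per unit time estimates the mean velocity (current)
    return (x - x0).mean() / (n_steps * dt)

if __name__ == "__main__":
    print("estimated mean velocity:", mean_velocity())
```

An estimate of the mean velocity obtained in this way can serve as an independent cross-check of the Fokker–Planck calculations described next.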
The Fokker–Planck equation [39] for a time-dependent probability density function (PDF) p(x,t) corresponding to Equations (1) and (2) is
$$\frac{\partial p(x,t)}{\partial t} = -\frac{\partial J(x,t)}{\partial x},$$
where
$$J(x,t) = f(x,t)\,p(x,t) - D\,\frac{\partial p(x,t)}{\partial x}$$
is the probability current. Because of the spatial periodicity of f(x,t), we have $p(x+L,t) = p(x,t)$ and $J(x+L,t) = J(x,t)$. So, we normalise the PDF p(x,t) over L as
$$\int_0^L p(x,t)\,dx = 1$$
at any time t. The ensemble average of a variable, say, B(x,t), is then expressed in terms of the PDF of x as
$$\langle B(t)\rangle = \int_0^L B(x,t)\,p(x,t)\,dx.$$
We use the double angular brackets to denote the average over the space and time, for instance
$$\langle\langle B\rangle\rangle = \frac{1}{T}\int_0^T \langle B(t)\rangle\,dt.$$
For the current J, Equations (4) and (5) translate to
$$\langle J(t)\rangle = \int_0^L J(x,t)\,dx, \qquad \langle\langle J\rangle\rangle = \frac{1}{T}\int_0^T \langle J(t)\rangle\,dt.$$
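A minimal numerical sketch of how p(x,t), ⟨J(t)⟩ and ⟨⟨J⟩⟩ can be obtained is given below, assuming a simple explicit finite-difference discretisation on the periodic domain; the actual computations in this paper use a large number of Fourier modes (see Section 6), and the function name, grid size and time step here are illustrative assumptions.

```python
import numpy as np

def fp_rocking_ratchet(force, L=1.0, D=0.1, T=1.0, n_x=128,
                       steps_per_period=20_000, n_transient=5):
    """Minimal explicit finite-difference solver for dp/dt = -dJ/dx with
    J = f(x,t) p - D dp/dx on the periodic domain [0, L). `force(x, t)` returns
    the total force f(x,t). Transient periods are discarded so that p(x,t)
    approaches its time-periodic state before <<J>> is accumulated."""
    dx = L / n_x
    x = np.arange(n_x) * dx
    dt = T / steps_per_period
    p = np.ones(n_x) / L                               # uniform, normalised initial PDF

    def step(p, t):
        dp_dx = (np.roll(p, -1) - np.roll(p, 1)) / (2.0 * dx)
        J = force(x, t) * p - D * dp_dx                # probability current
        dJ_dx = (np.roll(J, -1) - np.roll(J, 1)) / (2.0 * dx)
        return p - dt * dJ_dx, J

    for k in range(n_transient * steps_per_period):    # discard transient periods
        p, _ = step(p, k * dt)

    JJ = 0.0                                           # accumulate <<J>> over one period
    t0 = n_transient * T
    for k in range(steps_per_period):
        p, J = step(p, t0 + k * dt)
        JJ += J.sum() * dx * dt
    return p, JJ / T
```

Combined with the illustrative sawtooth_force of the previous sketch, calling fp_rocking_ratchet(lambda x, t: sawtooth_force(x) + A * np.sin(2.0 * np.pi * t / T), D=D, T=T) returns the PDF in the time-periodic state together with the space- and time-averaged current.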
2.1. Thermodynamic Relations
Thermodynamic laws concern the energy conservation and entropy relations and are provided in
Appendix A and
Appendix B for a general overdamped process. When the force in Equation (1) is $f = -\partial_x U$ with $U(x,t) = V(x) - x\,G(t) - Fx$, the energy conservation in Equation (A3) can be expressed as
$$\frac{d}{dt}\langle V\rangle = \dot{E}_{in}(t) - \dot{Q}(t) - \dot{W}(t).$$
Here, $\langle V\rangle = \int_0^L V(x)\,p(x,t)\,dx$ is the mean potential energy, and $\dot{E}_{in}$ and $\dot{Q}$ are given by
$$\dot{E}_{in}(t) = G(t)\,\langle v(t)\rangle, \qquad \dot{Q}(t) = \int_0^L f(x,t)\,J(x,t)\,dx,$$
where $\langle v(t)\rangle = \int_0^L J(x,t)\,dx$ is the mean velocity. Here, $\dot{E}_{in}$ is the rate of energy input (e.g., by chemical agency, ATP, etc.) and $\dot{Q}$ the heat flow from the system to the environment. The rate at which a system does work against the external force is defined by
$$\dot{W}(t) = -F\,\langle v(t)\rangle.$$
Using Equations (9)–(11) in Equation (8) and integrating over one period T gives us
$$\langle V\rangle(T) - \langle V\rangle(0) = E_{in} - Q - W,$$
where $Q = \int_0^T \dot{Q}\,dt$ and $W = \int_0^T \dot{W}\,dt$. The total energy input over one period T is given by the time-integral of Equation (9) as
$$E_{in} = \int_0^T \dot{E}_{in}(t)\,dt = \int_0^T G(t)\,\langle v(t)\rangle\,dt.$$
Furthermore, over one period T, $\langle V\rangle(T) = \langle V\rangle(0)$. Thus, Equation (12) is simplified as
$$E_{in} = Q + W.$$
Q in Equation (14) is related to the entropy flow $\Delta S_{flow}$ from the system to the environment as $\Delta S_{flow} = Q/D$. It is also related to the entropy production $\Delta S_{tot}$ in Equation (A1) (see Appendix A for details), where $S(t) = -\int_0^L p(x,t)\ln p(x,t)\,dx$ is the differential entropy. Specifically, Equation (A2) reads
$$\frac{dS}{dt} = \dot{S}_{tot} - \dot{S}_{flow},$$
where
$$\dot{S}_{tot} = \int_0^L \frac{J(x,t)^2}{D\,p(x,t)}\,dx \quad \text{and} \quad \dot{S}_{flow} = \frac{1}{D}\int_0^L f(x,t)\,J(x,t)\,dx = \frac{\dot{Q}}{D}$$
denote the entropy production rate and entropy flow rate, respectively. Equation (15) shows that $\dot{S}_{tot} \ge 0$; $\dot{S}_{tot}$ serves as a measure of irreversibility far from equilibrium. $\dot{S}_{flow}$ is positive when the entropy flows from the system to the environment. Due to the periodicity of p(x,t), the total change over the cycle in the differential entropy is zero, $S(T) = S(0)$. Thus, we have
$$\Delta S_{tot} = \Delta S_{flow} = \frac{Q}{D},$$
where $\Delta S_{tot} = \int_0^T \dot{S}_{tot}\,dt$ and $\Delta S_{flow} = \int_0^T \dot{S}_{flow}\,dt$.
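For reference, the sketch below evaluates the two rates numerically from gridded p(x,t), J(x,t) and f(x,t), assuming the overdamped forms quoted above; the function name and the small regularisation of p are illustrative choices.

```python
import numpy as np

def entropy_rates(p, J, f, D, dx):
    """Entropy production rate and entropy flow rate at one instant, assuming the
    overdamped forms dS_tot/dt = int J^2/(D p) dx and dS_flow/dt = int f J dx / D.
    p, J and f are arrays sampled on a uniform grid of spacing dx."""
    p_safe = np.maximum(p, 1e-30)           # guard against division by a vanishing PDF
    s_tot_rate = np.sum(J**2 / (D * p_safe)) * dx
    s_flow_rate = np.sum(f * J) * dx / D    # equals (heat flow rate) / D
    return s_tot_rate, s_flow_rate
```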
2.2. Efficiency
Using the relations in Section 2.1, we have the instantaneous efficiency $\eta(t)$ and the cumulative efficiency $\eta_T$ defined over one cycle as follows:
$$\eta(t) = \frac{\dot{W}(t)}{\dot{E}_{in}(t)}, \qquad \eta_T = \frac{W}{E_{in}} = \frac{W}{W + Q} = \frac{W}{W + D\,\Delta S_{tot}}.$$
Equation (18) shows explicitly that the non-zero entropy production $\Delta S_{tot}$ or heat flow Q reduces the efficiency.
To take into account the viscous work ($\gamma\,\langle v\rangle^2$, with $\gamma = 1$ for Equation (1)), Equations (17) and (18) were generalised to include the Stokes efficiency [40,41] as follows
$$\eta_S(t) = \frac{\dot{W}(t) + \gamma\,\langle v(t)\rangle^2}{\dot{E}_{in}(t)}, \qquad \eta_S = \frac{W + \gamma\int_0^T \langle v(t)\rangle^2\,dt}{E_{in}},$$
where $\gamma$ is the friction coefficient ($\gamma = 1$ in our units).
We note that even when there is no external force (F = 0), the system can generate a non-zero current $\langle\langle J\rangle\rangle \neq 0$. A stopping force $F_s$ is the force that is required to make the current zero, above/below which the sign of the current changes ($\langle\langle J\rangle\rangle > 0$ for $F > F_s$ and $\langle\langle J\rangle\rangle < 0$ for $F < F_s$). To separate the additional effect of F, we will focus on F = 0 in Section 3, Section 4 and Section 5. Moreover, since in our problem (e.g., for a sinusoidal G(t)) $\dot{E}_{in}$ can take a zero value at certain times, making $\eta_S(t)$ singular, we will calculate the cumulative Stokes efficiency $\eta_S$ in Equation (20) in Section 3, Section 4 and Section 5.
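A sketch of the cumulative Stokes efficiency computed from time series of the mean velocity and the input power is given below; it assumes F = 0, γ = 1 and the Wang–Oster-type form described above, so the details may differ from the exact Equation (20).

```python
import numpy as np

def stokes_efficiency(v_mean, e_in_rate, dt, gamma=1.0):
    """Cumulative Stokes efficiency over one period, assuming F = 0 and the form
    eta_S = gamma * int <v(t)>^2 dt / int (dE_in/dt) dt. The arrays v_mean[k] and
    e_in_rate[k] hold <v(t_k)> and the input power at times t_k spaced by dt."""
    v_mean = np.asarray(v_mean, dtype=float)
    e_in_rate = np.asarray(e_in_rate, dtype=float)
    viscous_work = gamma * np.sum(v_mean**2) * dt    # period-integrated viscous work
    e_in = np.sum(e_in_rate) * dt                    # total energy input E_in
    return viscous_work / e_in
```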
2.3. Information Rate and Length
Information rate $\Gamma$ and length $\mathcal{L}$ are the information geometric measures that quantify how information unfolds in time-varying stochastic processes [18,27,28,29,30,31,32,33,34,35,36,37,38]. In a nutshell, $\Gamma$ is proportional to the square root of the infinitesimal (symmetric) relative entropy (Kullback–Leibler divergence), while $\mathcal{L}$ measures the total change in the information along its evolution path. For example, for a PDF of one variable x evolving in time t,
$$\Gamma^2(t) = \int dx\,\frac{1}{p(x,t)}\left[\frac{\partial p(x,t)}{\partial t}\right]^2, \qquad \mathcal{L}(t) = \int_0^t \Gamma(t_1)\,dt_1.$$
We note that $\Gamma$ in terms of Equation (21) is well defined for $p(x,t) > 0$. When the parameters of $p(x,t)$ are known as $\lambda_i$'s ($i = 1, 2, \dots, n$), $\Gamma$ can be expressed in terms of the metric tensor $g_{ij}$ as $\Gamma^2 = \sum_{i,j} g_{ij}\,\dot{\lambda}_i\,\dot{\lambda}_j$.
$\Gamma^{-1}$ has the dimension of time and is linked to the smallest timescale of fluctuations [38]. As a non-decreasing function of time, $\mathcal{L}$ is a dimensionless, path-dependent distance measuring the deviation from the initial state in terms of the total number of statistically different states that a system passes through. For a Gaussian PDF with a constant variance, $\mathcal{L} = 1$ when the mean moves from its initial value by one standard deviation, since the latter provides the uncertainty in measuring the PDF position. Even when a PDF returns to its initial form, $\mathcal{L}$ is sensitive to temporal changes at intermediate times and takes different values depending on the path. $\Gamma$ and $\mathcal{L}$ are invariant under a (time-independent) change of variables (unlike entropy), enabling us to compare the evolution of different variables.
We measure the change in information associated with the dynamics of a Brownian ratchet by calculating $\mathcal{L}(T)$ over the time interval of one period. Equation (23) then represents the total number of different statistical states that a ratchet passes through in one period.
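Given a numerically computed sequence of PDFs over one period, $\Gamma(t)$ and $\mathcal{L}(t)$ can be evaluated as sketched below; the finite-difference time derivative and the small lower bound on p are illustrative numerical choices.

```python
import numpy as np

def information_geometry(p_series, dt, dx):
    """Information rate Gamma(t) and information length L(t) from a time series of
    PDFs, p_series[k, :] = p(x, t_k) on a uniform x-grid, using
    Gamma^2 = int (dp/dt)^2 / p dx and L(t) = int_0^t Gamma dt'."""
    p = np.maximum(np.asarray(p_series, dtype=float), 1e-30)   # keep p > 0
    dp_dt = np.gradient(p, dt, axis=0)             # finite-difference time derivative
    gamma = np.sqrt(np.sum(dp_dt**2 / p, axis=1) * dx)
    length = np.cumsum(gamma) * dt                 # running integral of Gamma
    return gamma, length
```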
4. Results for Sinusoidal G(t)
We start with the sinusoidal rocking G(t) of Equation (24). We investigated a variety of periods T, ultimately focusing on four values, including T = 0.4, 1 and 2, to consider in detail. For each of these, we scanned over a range of amplitudes A, for three values of the noise D.
Figure 4 shows the current $\langle\langle J\rangle\rangle$ from Equation (7). One feature that immediately stands out is how $\langle\langle J\rangle\rangle$ tends to zero if A is too small, consistent with our discussion above regarding the dynamics of the ratchet mechanism and why there is a non-zero current at all. It is also noticeable that $\langle\langle J\rangle\rangle$ is much smaller for T = 0.4 than for the larger values. This is also as one might expect: if the system is rocked back and forth too rapidly, there simply is not enough time for much spill-over to occur, even if A is large enough that it otherwise would. For T even smaller than 0.4, the current becomes smaller still.
Regarding the variation with D, we note how the various peaks become increasingly sharp and distinct as D is reduced. The limit $D \to 0$ would consist of a set of peaks that vary discontinuously with A, being either zero or taking quantised values, for the deterministic dynamics (e.g., see [47]). To understand such a complicated pattern, and why the current does not vary monotonically with A, we recall that once A is sufficiently large for our particular V(x) here, the spill-over can also go to the left, the opposite of the ‘desired’ direction. The values of A where $\langle\langle J\rangle\rangle$ abruptly drops to zero are then values of A where this negative spill-over occurred, whereas $\langle\langle J\rangle\rangle$ jumps back to some non-zero value for those values of A where the positive spill-over extends one further well of V(x) than before.
Figure 5 shows the Stokes efficiency $\eta_S$ from Equation (20). If A is too small, it is relatively small, but for larger A it can approach 1, even for T = 0.4, where the current was small. As with the current, there are various up-and-down jumps that become sharper and more distinct for smaller D. Comparing Figure 4 and Figure 5, one can already see that the jumps in $\langle\langle J\rangle\rangle$ and $\eta_S$ occur at similar values of A; we will consider the phase relationship between these quantities in more detail below (Figure 8).
Figure 6 shows the information length $\mathcal{L}$ from Equation (23). The two features that immediately stand out are that $\mathcal{L}$ generally increases with increasing A and with decreasing D. This suggests collapsing the results by instead plotting $\mathcal{L}\sqrt{D}/A$. As shown in Figure 7, this does indeed remove the overall increase with A, allowing one to focus more on the jumps that are similar to those seen before in the current and efficiency. Figure 7 also suggests that in the $D \to 0$ limit, this combination might become independent of D, including the details of the various jumps. However, D would have to be reduced quite a bit further to fully clarify this question. The overall trend, though, that $\mathcal{L}$ is broadly proportional to $A/\sqrt{D}$ is very likely robust. Roughly speaking, this is (i) because $\mathcal{L}$ is a dimensionless distance that measures the change in the mean location of a PDF (mean value) with respect to the measurement error given by the standard deviation (width of a PDF), as long as the change in the mean (proportional to A) is not too small compared with the standard deviation, and (ii) because the change in the mean and the standard deviation increase with A and $\sqrt{D}$, respectively.
The three quantities presented so far, $\langle\langle J\rangle\rangle$, $\eta_S$ and $\mathcal{L}\sqrt{D}/A$, all exhibit similar up-and-down jumps as A is increased. To clarify the phase relationships between these quantities, Figure 8 super-imposes the three, for the smallest D case, where the jumps become clearest. The range in A is also restricted to focus on the first few jumps. Only two of the periods, including T = 1, are shown here; 0.4 and 2 exhibited similar behaviour. In particular, we see how the jumps in all three quantities are indeed very closely correlated, with maxima or minima in $\eta_S$ and $\mathcal{L}\sqrt{D}/A$ tending to line up with those A values where $\langle\langle J\rangle\rangle$ is changing most rapidly. This suggests that our (normalised) information geometric diagnostic $\mathcal{L}\sqrt{D}/A$ approximates the Stokes efficiency better than the current. We, however, notice that the phase relations seem to be less clear for larger A.
Based on Figure 8, the particular values of A chosen for Figure 9 represent one of the transitions from $\langle\langle J\rangle\rangle$ being large to essentially zero, and correspondingly $\eta_S$ and $\mathcal{L}\sqrt{D}/A$ are near a minimum there. The first two panels in Figure 9 then show the variation in time of the quantities $\langle J(t)\rangle$ and $\Gamma(t)$. Note how $\langle J(t)\rangle$ is positive for almost the entire cycle for the smallest of these A, which is why $\langle\langle J\rangle\rangle$ is so large at this A. For the larger values, up to A = 8, $\langle J(t)\rangle$ is if anything slightly greater for the first half of the cycle, but for the second half it is strongly negative. This is precisely the point noted above, that for sufficiently large A the spill-over can also go in the ‘wrong’ direction, reducing the average current $\langle\langle J\rangle\rangle$ to essentially zero at these values.
In comparison, $\Gamma(t)$ measures the rate at which the change in a PDF occurs, taking a large value when a PDF changes rapidly. Thus, a spill-over, regardless of its direction, can cause a sudden change in a PDF shape. This is seen in Figure 9, where $\Gamma$ tends to be large at points in the cycle where the absolute value of $\langle J(t)\rangle$ is also large. That is, $\Gamma$ is insensitive to the sign of any spill-over, but it does register that spill-over is occurring. Furthermore, the first peak of $\langle J(t)\rangle$ occurs at different times for the different A, while the peak of $\Gamma$ occurs at a similar time for all of them, earlier than that of $\langle J(t)\rangle$. This suggests that $\Gamma$ detects a rapid change in a PDF prior to the appearance of the peak in $\langle J(t)\rangle$.
Panels (c,d) in Figure 9 show the variation in time of the quantities dS/dt and $\dot{S}_{tot}$. A few points to note here are the following: First, dS/dt can be both positive and negative, see Equation (A1), and indeed integrates to zero due to the temporal periodicity of p(x,t), as noted in Section 2.2. In contrast, $\dot{S}_{tot}$ is strictly non-negative, as it must be according to Equations (15) and (16). Next, we can note that a typical magnitude of dS/dt is roughly 50 times smaller than a typical magnitude of $\dot{S}_{tot}$. Finally, if we compare dS/dt with $\langle J(t)\rangle$ in the panel directly above it, we notice that the jumps in dS/dt almost invariably line up with certain features in $\langle J(t)\rangle$, for each of the three values of A. It seems therefore that the entropy is also sensitive to the spill-over dynamics that cause variations in $\langle J(t)\rangle$. This is because the spill-over inevitably causes a change in a PDF, such as broadening or narrowing of the PDF width, and thus in the entropy.
Figure 10 shows the period-integrated quantity $D\,\Delta S_{tot}$, for the same parameter values as before in Figure 4, Figure 5, Figure 6 and Figure 7. There are again three curves in each panel, with the same colour-coding as before, but we see that the three curves almost completely overlap virtually everywhere. That is, the scaling $\Delta S_{tot} \propto 1/D$ holds so accurately that $D\,\Delta S_{tot}$ becomes almost completely identical for the three different D values. The other point to note is how the curves in all four panels follow parabolic profiles for all but the very smallest values of A. That is, once A is not too small, a quadratic fit $D\,\Delta S_{tot} \propto A^2$ becomes almost exact.
5. Results for Square and Sawtooth G(t)
Similar scans were performed for the square wave, Equation (25), and the sawtooth waves, Equation (27). We will only present one value of T here; other values were also considered and exhibited similar trends.
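The exact parameterisations in Equations (24), (25) and (27) are not reproduced here; a common way to generate such waveforms with period T and amplitude A is sketched below, where the use of scipy.signal and the width parameter distinguishing the two mirror-image sawtooth ramps are assumptions for illustration.

```python
import numpy as np
from scipy import signal

def sinusoidal(t, A, T):
    return A * np.sin(2.0 * np.pi * t / T)

def square(t, A, T):
    # switches between +A and -A with period T
    return A * signal.square(2.0 * np.pi * t / T)

def sawtooth(t, A, T, width=1.0):
    # width=1.0 gives a slow rise and abrupt fall; width=0.0 gives the mirror image,
    # corresponding to two sawtooth forcings of opposite ramp direction
    return A * signal.sawtooth(2.0 * np.pi * t / T, width=width)
```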
Figure 11 again shows the current $\langle\langle J\rangle\rangle$ as a function of A, for the same three D values as before. We note how the maximum value of $\langle\langle J\rangle\rangle$ for the square wave is twice what it was for the sinusoidal G(t) and also for the sawtooth waves. This simply reflects the fact that the square wave spends the whole cycle at full amplitude and therefore has a larger rms value than the sinusoidal or sawtooth G(t). Another interesting point is that $\langle\langle J\rangle\rangle$ for the square wave does not seem to drop back to zero between the various peaks, unlike for the sinusoidal or sawtooth waves. Finally, note how the two sawtooth waves yield virtually identical results for this particular diagnostic quantity $\langle\langle J\rangle\rangle$.
Figure 12 shows the Stokes efficiency $\eta_S$. The results are again broadly similar to the sinusoidal results in Figure 5, with efficiencies approaching one in all cases. Just as the particularly small currents for the sinusoidal G(t) with T = 0.4 had no particular effect on the efficiency before, here also the particularly large currents for the square wave have no discernible effect on its efficiency. Another point to note is that the results for the two sawtooth waves are still similar, but noticeably different, unlike in Figure 11, where the currents were virtually identical.
Finally, for the information length $\mathcal{L}$, we dispense with the plots of $\mathcal{L}$ itself and proceed directly to plots of $\mathcal{L}\sqrt{D}/A$, as shown in Figure 13. The overall pattern is again similar to what it was for the sinusoidal G(t). In particular, there is again at least the suggestion that this combination might be tending to a limit independent of D, but considerably smaller values of D would be required to confirm this. The last point to note is that the results for the two sawtooth waves are now significantly different, even more so than previously for the Stokes efficiency. The information length is evidently a very delicate diagnostic quantity that picks out differences in PDFs to which other diagnostics are far less sensitive.
6. Conclusions
We investigated exact time-dependent solutions of a Brownian motor under different conditions to elucidate the role of symmetry-breaking and information geometry. Specifically, we utilised an asymmetric sawtooth periodic potential and three different types of periodic forcing
given by sinusoidal, square and sawtooth waves with period
T and amplitude
A. For each
, we calculated exact time-dependent probability density functions (PDFs) by numerically solving the Fokker–Planck equation [
39] under different conditions by varying
T,
A and the strength of the stochastic noise
D in an unprecedentedly wide range. We showed that a non-differentiable potential led to a non-differentiable PDF and ensured that our solutions were well resolved by including a sufficiently large number of Fourier modes. From our time-dependent PDF, we performed a systematic investigation of comparing mean current, Stokes efficiency, entropy production and the path-dependent information geometric measures (information rate and information length). Interestingly, regardless of different forms of
, we found overall similar behaviours. This highlights the essential role of symmetry-breaking and robustness of the results that are less sensitive to the detailed temporal evolution of a motor (driven by different forcings
).
In particular, the current, Stokes efficiency and information rate normalised by A and D exhibit one or multiple local maxima and minima as A increases. However, the dependence of the current and Stokes efficiency on A can be quite different, while the information rate normalised by A and D tends to resemble the Stokes efficiency. In comparison, the irreversibility measured by a normalised entropy production is independent of A. The results thus suggest that the entropy production is not a good measure of motor efficiency. Instead, our information geometry provides a useful proxy for the Stokes efficiency. Whether this result holds in general when there is a non-zero constant external force F in our model Equation (1) will be investigated in the future. Future work will also address different types of Brownian motors, such as the (two-state) flashing ratchet [48] driven by dichotomous noise in addition to a short-correlated noise or Lévy-type noise. It will also be of interest to compare our path-dependent information geometric theory with other measures of irreversibility, such as the Kolmogorov eigenvalue or non-equilibrium Lyapunov function [49].