1. Problems in the Foundation of Classical Mathematics
The title of the famous Wigner paper [1] is “The unreasonable effectiveness of mathematics in the natural sciences”, and the paper concludes as follows:
“The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve. We should be grateful for it and hope that it will remain valid in future research and that it will extend, for better or for worse, to our pleasure, even though perhaps also to our bafflement, to wide branches of learning.”
Wigner is known mainly as a famous physicist, and from those words, one can see that he treated mathematics mainly as a powerful tool for applications (e.g., in physics, information theory, chemistry, biology, etc.). However, when I discussed the problem of the foundation of mathematics with mathematicians, I was surprised that many of them treat mathematics solely as an abstract science, and for them it is not important whether or not there are problems in the application of mathematics.
In principle, such an approach has the right to exist, and history shows that many mathematical results, which at one time were considered purely abstract, eventually found their application in physics and other sciences. But even if some results are not of practical use, they may have a purely aesthetic value. After all, we do not expect poetry or music to have any applications for describing nature. In poetry and music, the main thing is beauty, which cannot be expressed in words. In mathematics, as Dirac said, the main thing is the beauty of formulas. But there are certain criteria here. Under the influence of my professors of mathematics, I thought that the rigor of mathematical proofs was sacred to mathematicians and that they would never sacrifice it. But is that really so?
In a possible approach to the foundation of mathematics, which we will call Approach A, the question of whether mathematics should accurately describe nature is not considered. The goal of the approach is to find a complete and consistent set of axioms that will make it possible to conclude whether any mathematical statement is true or false. This problem is also formulated as the Entscheidungsproblem, which asks for an algorithm that takes a statement as input and answers “Yes” or “No” according to whether the statement is universally valid, i.e., valid in every structure satisfying the axioms.
One of the most famous mathematicians who supported Approach A was Hilbert. For example, he said: “No one shall expel us from the paradise that Cantor has created for us”. Hilbert believed that the issue of the foundation of mathematics would mainly be resolved within the scope of classical mathematics. One definition of classical mathematics found in the literature is that it is based on classical logic and the ZFC set theory. This forms the mainstream approach to mathematics, which is used in applications. In simpler words, one can say that classical mathematics involves all integers, all real numbers, continuity, infinitesimals, and infinitely large numbers. Alternatives to classical mathematics in Approach A are constructive mathematics and predicative mathematics, but they are almost never used in applications.
The problem of the foundation of classical mathematics is very difficult. Gödel’s incompleteness theorems state that mathematics involving the standard arithmetic of natural numbers is incomplete and cannot demonstrate its own consistency. The problem widely discussed in the literature is whether the problems posed by the theorems can be circumvented by nonstandard approaches to natural numbers, e.g., by treating them in the framework of Robinson arithmetic, finitistic arithmetic, transfinite numbers, etc. However, the results obtained by Tarski, Turing, and others show that, in Approach A, the problem of the foundation of mathematics remains, and this problem has not been resolved as yet.
Gödel’s works on the incompleteness theorems, which suggest that any mathematics involving the set of all natural numbers has foundational problems, are written in the highly technical terms of mathematical logic. However, this fact is obvious from the perspective of verificationism. In the 1920s, the Vienna Circle of philosophers under the leadership of Schlick developed an approach called logical positivism, which contains the verification principle:
A proposition is only cognitively meaningful if it can be definitively and conclusively determined to be either true or false (see, e.g., [2,3,4]). However, this principle does not work in classical mathematics. For example, from the point of view of verificationism, it cannot be determined whether the statement that $a + b = b + a$ for all natural numbers $a$ and $b$ is true or false.
However, in the scientific community, there are strong opponents of verificationism. For example, as noted by Grayling [5]: “The general laws of science are not, even in principle, verifiable, if verifying means furnishing conclusive proof of their truth. They can be strongly supported by repeated experiments and accumulated evidence but they cannot be verified completely”. So, from the point of view of standard mathematics and standard physics, the verification principle is too strong.
Also, Popper proposed the concept of falsificationism [6]: if no cases where a claim is false can be found, then the hypothesis is accepted as provisionally true. In particular, the statement that $a + b = b + a$ for all natural numbers $a$ and $b$ can be treated as provisionally true until one finds some numbers $a$ and $b$ for which $a + b \neq b + a$.
However, according to the philosophy of quantum theory, there should be no statements accepted without proof, based on belief in their correctness (i.e., axioms). The theory should contain only statements that can be verified, where by “verified”, physicists mean an experiment involving only a finite number of steps. So, the philosophy of quantum theory is similar to verificationism, not falsificationism. Note that Popper was a strong opponent of quantum theory and supported Einstein in his dispute with Bohr.
In particular, quantum theory should not be based on mathematics with foundational problems, but, according to Gödel’s incompleteness theorems, classical mathematics does have such problems. Quantum theory has made great progress in fulfilling its program. For example, in this theory, physical quantities are not abstract concepts, but only those that are described by well-defined operators. However, existing quantum theory is based on classical mathematics because it involves space-time coordinates for which there are no well-defined operators in relativistic quantum theory. Therefore, the current version of the most general quantum theory does not yet satisfy all the principles of this theory and, as a consequence, in this theory, some physical quantities are described by divergent integrals (see below).
From the point of view of verificationism and the philosophy of quantum theory, standard classical mathematics is not well-defined, and not only because it contains an infinite number of numbers. For example, consider the problem: does 10 + 20 equal 30? To answer it, we should describe an experiment that solves this problem. Such an experiment must use some kind of computing device. Therefore, the answer to this question should be given not from abstract considerations but from how the computing device works.
Any computing device can operate only with a finite amount of resources and cannot handle numbers greater than a certain number, $L$. If $a$ and $b$ are natural numbers, then we assume that our computing device gives the same results for $a + b$ and $ab$ as in standard mathematics if those numbers are less than $L$, but under no circumstances can it produce a number greater than $L$. Consider two possibilities (illustrated by the sketch after this list):
- (a) If those numbers are greater than $L$, then the result equals $L$. Say that $L = 40$; then the experiment will confirm that $10 + 20 = 30$, while if $L = 25$, then we have that $10 + 20 = 25$.
- (b) If those numbers are $\geq L$, then the result equals the standard result but modulo $L$. Say that $L = 40$; then the experiment will confirm that $10 + 20 = 30$, while if $L = 25$, then we have that $10 + 20 = 5$.
So, the statements that $10 + 20 = 30$ and even that $2 + 2 = 4$ are ambiguous because they do not contain information on how they should be verified.
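The two possibilities can be made concrete with a short sketch. The following Python snippet is a minimal illustration, assuming a device with resource bound L; the values 10 + 20 with L = 40 and L = 25 reproduce the examples above.

```python
# A minimal sketch of the two verification schemes discussed above.
# The resource bound L and the sample values are illustrative assumptions.

def add_capped(a: int, b: int, L: int) -> int:
    """Possibility (a): results exceeding L saturate at L."""
    return min(a + b, L)

def add_modular(a: int, b: int, L: int) -> int:
    """Possibility (b): results at or above L are reduced modulo L."""
    s = a + b
    return s % L if s >= L else s

# With L = 40 the device confirms 10 + 20 = 30 under both schemes.
print(add_capped(10, 20, 40), add_modular(10, 20, 40))  # 30 30
# With L = 25 the same question gets different answers: 25 under (a), 5 under (b).
print(add_capped(10, 20, 25), add_modular(10, 20, 25))  # 25 5
```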
In contrast to Approach A, we define Approach B as an approach to mathematics where mathematics not only correctly describes experimental data but also does not contain foundational problems. As is clear from Mark Burgin’s approach to the foundation of mathematics, described in the subsequent sections, he was a proponent of Approach B. Even in the spirit of the last example, the title of Mark Burgin’s paper [7], written in 1997, reads: “Non-Diophantine Arithmetics or is it Possible that $2 + 2$ is not Equal to 4?”. Meanwhile, in this section, we will still describe problems in Approach A.
We believe the following observation is very important: although classical mathematics (including its constructive version) is a part of everyday life, people typically do not realize that classical mathematics is implicitly based on the assumption that one can have any desired amount of resources. Classical mathematics is based on the implicit assumption that we can consider an idealized case where a computing device can operate with an infinite amount of resources. In other words, from the point of view of verificationism, standard operations with natural numbers are implicitly treated as limits of operations with a finite natural $L$ when $L \to \infty$. As a rule, every limit in mathematics is thoroughly investigated, but in the case of standard operations with natural numbers, it is not even mentioned that those operations are formal limits of operations with a finite $L$ when $L \to \infty$. In real life, such limits might not even exist if, for example, the universe contains a finite number of elementary particles.
One of the key concepts in classical mathematics is the concept of infinitesimals, which was proposed by Newton and Leibniz more than 300 years ago. Since that time, titanic efforts have been devoted to the foundation of classical mathematics. As noted above, this problem has not been solved to the present time, but for many mathematicians, the most important thing is not whether a rigorous foundation exists but that standard mathematics is a powerful tool for solving many problems.
The concept of infinitesimals originated from the belief that any macroscopic object can be divided into an arbitrarily large number of arbitrarily small parts, and, in the times of Newton and Leibniz, people did not know about the existence of atoms and elementary particles. But now we know that when we reach the level of atoms and elementary particles, then standard division loses its usual meaning, and in nature, there are no arbitrarily small parts and no continuity.
For example, the typical energies of electrons in modern accelerators are millions of times greater than the electron’s rest energy, and such electrons experience many collisions with different particles. If it were possible to break the electron into parts, then it would have been noticed long ago.
Another example is that if we draw a line on a sheet of paper and look at this line with a microscope, then we see that the line is strongly discontinuous, as it consists of atoms. That is why standard geometry (the concepts of continuous lines and surfaces) can only approximate nature when the sizes of atoms are disregarded; standard macroscopic theory can work well only in this approximation, etc. For example, differential geometry (DG) is used in general relativity, which is a purely classical (i.e., non-quantum) theory that uses standard continuum mathematics and does not take into account that matter consists of atoms and elementary particles. DG is also used in quantum field theories involving a curved space-time background. These theories not only face mathematical foundational challenges related to Gödel’s incompleteness theorems but also grapple with the problem of physical quantities being described by divergent integrals.
Of course, when we consider water in the ocean and describe it by the differential equations of hydrodynamics, this works well, but this is only an approximation since water consists of atoms. However, it seems unnatural that even quantum theory is based on continuous mathematics. Even the name “quantum theory” reflects a belief that nature is quantized, i.e., discrete, and this name has arisen because, in quantum theory, some quantities have discrete spectra (e.g., the spectrum of the angular momentum operator, the energy spectrum of the hydrogen atom, etc.). But this discrete spectrum has appeared in the framework of classical mathematics, i.e., mathematics that involves infinitesimals and has foundational problems.
I asked mathematicians whether, in their opinion, the indivisibility of the electron indicates that in nature there are no infinitesimals and that standard division may not always be applicable. Some of them say that, sooner or later, the electron will be divided, but, as a rule, mathematicians agree that the electron is indivisible and that in nature there are no infinitesimals. They say that, for example, in practice, $dx/dt$ should be understood as $\Delta x/\Delta t$, where $\Delta x$ and $\Delta t$ are small but not infinitesimal. I told them: but you work with $dx/dt$, not $\Delta x/\Delta t$. They replied that since mathematics with derivatives works well, there is no need to philosophize and develop something else.
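This reply can be made concrete with a short sketch: a minimal finite-difference approximation of $dx/dt$ by $\Delta x/\Delta t$, with an illustrative trajectory $x(t) = t^2$ (the function names and sample values are assumptions for illustration only).

```python
# A toy illustration of the reply quoted above: in practice a derivative
# dx/dt is computed as a finite ratio dx/dt ~ (x(t+dt) - x(t)) / dt with a
# small but nonzero dt.

def finite_difference(x, t: float, dt: float) -> float:
    """Approximate dx/dt at time t by the finite ratio (Delta x)/(Delta t)."""
    return (x(t + dt) - x(t)) / dt

x = lambda t: t ** 2  # sample trajectory x(t) = t^2, so dx/dt = 2t
for dt in (1e-1, 1e-3, 1e-6):
    # approaches the exact value 6.0 at t = 3.0 as dt shrinks, but dt is
    # always finite, never infinitesimal
    print(dt, finite_difference(x, t=3.0, dt=dt))
```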
One of the key problems of modern quantum theory is the problem of divergences: the theory produces divergent expressions for the S-matrix in perturbation theory. In renormalizable theories, these divergences are eliminated through the renormalization procedure, where finite observable quantities are formally expressed as products of singularities. Although this procedure is not well substantiated mathematically, in some cases, it results in excellent agreement with experiment. At the same time, in non-renormalizable theories, infinities cannot be eliminated by the renormalization procedure, and this is a great obstacle in several fundamental problems, e.g., for constructing quantum gravity based on quantum field theory. As the famous physicist and Nobel Prize laureate Steven Weinberg writes in his book [8]: “Disappointingly this problem appeared with even greater severity in the early days of quantum theory, and although greatly ameliorated by subsequent improvements in the theory, it remains with us to the present day”. The title of Weinberg’s paper [9] is “Living with Infinities”.
So, classical mathematics has foundational problems that so far have not been solved in spite of efforts from great mathematicians, such as Cantor, Fraenkel, Gödel, Hilbert, Kronecker, Russell, Zermelo, and others, and, as noted above, classical mathematics is problematic from the point of view of verificationism and the philosophy of quantum theory. The philosophy of those great mathematicians was implicitly based on macroscopic experiences, where the concepts of infinitely small/large, continuity, and standard division seem natural. However, as noted above, those concepts contradict the existence of elementary particles and are not natural in quantum theory. The illusion of continuity arises when one neglects the discrete structure of matter.
The above discussion suggests that the problem of the foundation of mathematics can only be solved if significant changes are made to existing mathematics. However, this does not mean that existing mathematics will be canceled. The history of science shows that new fundamental theories do not cancel existing theories that have proven themselves in many problems. New theories usually only show that existing theories are not universal because there are conditions under which they do not work. A notable example is the theory of relativity, which does not invalidate classical mechanics but shows that it works only when speeds are much less than the speed of light; when speeds are comparable to the speed of light, the theory of relativity must be applied. The next section describes the changes Mark Burgin proposed to address the foundational problem of mathematics.
2. Mark Burgin’s Approach to the Problem of the Foundation of Mathematics
Mark Burgin studied at the Faculty of Mechanics and Mathematics of Moscow University. During those years, this faculty was regarded as the Mecca of mathematics throughout the Soviet Union. Many students and professors at the faculty held the belief that other sciences were inferior to mathematics. Therefore, they believed that the problem of the foundation of mathematics should only be considered from the point of view of Approach A described in Section 1. However, Burgin believed that the applications of mathematics were equally important, and this is clear even from the title of his book with Czachor [10].
Apparently, Mark’s first paper on the foundation of mathematics was [11], written in 1977. It begins with the words: “Even at the very beginning of the emergence of mathematics as a science in ancient Greece, doubts arose about how true many basic mathematical logical abstractions were. In this case, the most important, apparently, are the concept of infinity and the construction of natural numbers... These problems were formulated in detail by P.K. Rashevsky [12], who pointed out the need to construct a natural series that differs significantly in its properties from the classic natural series”.
In his paper [12], Rashevsky writes that “… the natural series is still the only mathematical idealization of real counting processes. This monopoly position endows it with the aura of a certain truth in the ultimate instance, absolute, the only possible, recourse to which is inevitable in all cases when a mathematician works with counting objects. Moreover, since the physicist uses only the apparatus that mathematics offers him, the absolute power of the natural series extends also to physics and, through the number line, to a large extent predetermines the possibilities of physical theories”.
In [11], Mark defined concepts that later became basic in his approach to non-Diophantine arithmetic. Then, in [7,10,13,14,15], he built a non-Diophantine arithmetic of integer numbers using weak projectivity with the Diophantine arithmetic $\mathbf{Z}$ of all integer numbers, where the definition of weak projectivity is as follows.
Let us consider two abstract arithmetics, $A_1$ and $A_2$, and two mappings $g: A_1 \to A_2$ and $h: A_2 \to A_1$.
Definition 1. (a) An abstract arithmetic $A_1$ is called weakly projective with respect to an abstract arithmetic $A_2$ if the operations in $A_1$ are expressed through the operations in $A_2$ as $a \oplus b = h(g(a) + g(b))$ and $a \otimes b = h(g(a) \cdot g(b))$. (b) The mapping $g$ is called the projector and the mapping $h$ is called the co-projector for the pair $(A_1, A_2)$.
The functions $g$ and $h$ determine the weak projectivity between the arithmetic $A_1$ and the arithmetic $A_2$.
Informally, it means that, to perform an operation, e.g., addition or multiplication, in $A_1$ with two numbers $a$ and $b$, we map these numbers into $A_2$, perform this operation there, and map the result back to $A_1$.
For instance, one can take $A_1$ and $A_2$ to be two copies of the set of integers and choose appropriate functions $g$ and $h$. In such a way, these two functions, $g$ and $h$, define a non-Diophantine arithmetic of integer numbers. Note that it contains the same integer numbers as the conventional arithmetic $\mathbf{Z}$, but operations with them are defined in a different way.
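A minimal sketch of Definition 1 follows, based only on the informal recipe above; the particular pair $g(x) = x^3$ and $h(x) \approx x^{1/3}$ is a hypothetical illustration, not the example used in [7] or [10].

```python
# A sketch of weak projectivity: operate in A2 via the projector g, then
# map the result back to A1 via the co-projector h.

from typing import Callable

def weakly_projective_add(a, b, g: Callable, h: Callable):
    """a (+) b in A1: map to A2 via g, add there, map back via h."""
    return h(g(a) + g(b))

def weakly_projective_mul(a, b, g: Callable, h: Callable):
    """a (x) b in A1: map to A2 via g, multiply there, map back via h."""
    return h(g(a) * g(b))

# Hypothetical projector/co-projector pair, for illustration only:
g = lambda x: x ** 3
h = lambda x: round(x ** (1 / 3)) if x >= 0 else -round((-x) ** (1 / 3))

print(weakly_projective_add(2, 3, g, h))  # h(8 + 27) = h(35) = 3
print(weakly_projective_mul(2, 3, g, h))  # h(8 * 27) = h(216) = 6
```

With this hypothetical pair, $2 \oplus 3 = 3$ rather than $5$, showing how non-Diophantine operations on the same integers can differ from the conventional ones.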
In his papers and joint book with Czachor [10], Mark considers various choices of the functions $g$ and $h$ for various problems in non-Diophantine arithmetic. However, from the point of view of the foundation of mathematics, the set of all possible pairs $(g, h)$ must be significantly narrowed. As discussed in Section 1, from the point of view of verificationism and the philosophy of quantum theory, only those versions of mathematics that do not contain the concept of infinity can be substantiated. Those versions should necessarily contain a parameter, $L$, such that the theory does not contain numbers greater than $L$.
As noted in Section 1, from the point of view of verificationism and the philosophy of quantum theory, classical mathematics is not well-defined because it does not contain information on how all operations with numbers should be verified. They can be verified only by using computing devices, which can operate only with a finite amount of resources and cannot work with numbers greater than some number, $L$. Nevertheless, the prevailing mindset among most mathematicians and physicists is that fundamental mathematics and fundamental physics should not include such a number, $L$.
However, the laws of how fundamental mathematics correctly describes physics are determined by the universe in which we live, and this universe can be considered a computer that determines these laws. For example, if there is only a finite number of elementary particles in the universe, then the presence of L is mandatory in these laws. This number is determined by the state of the universe. Since this state is changing, the number L will vary at different stages of the evolution of the universe.
Based on these considerations, Mark proposed in [16] the following option for the functions $g$ and $h$. Let us take a natural number $L$ as the boundary parameter of the non-Diophantine arithmetic $\mathbf{Z}_L$. We build the non-Diophantine arithmetic $\mathbf{Z}_L$, using functions $g$ and $h$ to establish a weak projectivity between $\mathbf{Z}_L$ and $\mathbf{Z}$: $g$ is the identical embedding of $\mathbf{Z}_L$ into $\mathbf{Z}$, i.e., $g(n) = n$, and
$h(n) = n$ if $-L \le n \le L$, $h(n) = L$ if $n > L$, and $h(n) = -L$ if $n < -L$.
Then, the operations, that is, addition, subtraction, and multiplication, in $\mathbf{Z}_L$ are defined in the following way:
$a \oplus b = h(a + b)$, $a \ominus b = h(a - b)$, $a \otimes b = h(ab)$.
The number $L$ is called the upper boundary number of the arithmetic $\mathbf{Z}_L$. Note that, formally, the non-Diophantine arithmetic $\mathbf{Z}_L$ contains all integer numbers, but with the above choice of the functions $g$ and $h$, only the numbers between $-L$ and $L$ are accessible. All other integer numbers do not impact operations with accessible numbers. As a result, the arithmetic $\mathbf{Z}_L$ precisely models computer arithmetic with integer numbers [17,18]. It is also possible to suggest that the arithmetic $\mathbf{Z}_L$ will be useful for building finite physics based on sound and adequate mathematical structures.
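A minimal sketch of this arithmetic, assuming the clipping form of $g$ and $h$ described above (the value $L = 10$ is an illustrative assumption):

```python
# A sketch of the arithmetic Z_L: g embeds Z_L into Z identically, and h
# clips any integer to the interval [-L, L].

L = 10  # the upper boundary number; the value is an illustrative assumption

def h(n: int) -> int:
    """Co-projector: clip n to the interval [-L, L]."""
    return max(-L, min(L, n))

def add(a, b): return h(a + b)  # a (+) b
def sub(a, b): return h(a - b)  # a (-) b
def mul(a, b): return h(a * b)  # a (x) b

print(add(6, 7))  # 10  (13 is clipped to L)
print(sub(3, 8))  # -5  (same as in standard arithmetic)
print(mul(5, 4))  # 10  (20 is clipped to L)
```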
As shown in [16], the arithmetic $\mathbf{Z}_L$ becomes the standard arithmetic $\mathbf{Z}$ in the formal limit $L \to \infty$. However, as noted above, from the point of view of verificationism, the value of $L$ should be finite. For illustration, Mark considers examples of operations in the arithmetic $\mathbf{Z}_L$. If $\oplus$, $\ominus$, and $\otimes$ are used to denote addition, subtraction, and multiplication in this arithmetic, respectively, then, for instance, with $L = 10$: $6 \oplus 7 = 10$, $3 \ominus 8 = -5$, and $5 \otimes 4 = 10$.
The direct application of the definition of operations in the arithmetic $\mathbf{Z}_L$ gives the following result:
Proposition 1. For any natural numbers $L$ and $n$, we have the following identities in the arithmetic $\mathbf{Z}_L$: $L \oplus n = L$, $L \otimes n = L$, $(-L) \ominus n = -L$, and $(-L) \otimes n = -L$.
One can also prove [16] the following:
Theorem 1. For any natural number $L$, we have:
- (a) Addition and multiplication are commutative in the arithmetic $\mathbf{Z}_L$;
- (b) Addition in the arithmetic $\mathbf{Z}_L$ is not always associative;
- (c) Multiplication in the arithmetic $\mathbf{Z}_L$ is always associative;
- (d) Multiplication in the arithmetic $\mathbf{Z}_L$ is not always distributive with respect to addition;
- (e) The results of addition, subtraction, and multiplication in the arithmetic $\mathbf{Z}_L$ cannot be greater than $L$ or less than $-L$.
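The statements of Theorem 1 can be checked empirically with a short, self-contained sketch of the clipped arithmetic (again with the illustrative value $L = 10$):

```python
# Self-contained empirical checks of Theorem 1, assuming the clipped
# arithmetic Z_L sketched above.

L = 10
def h(n):      return max(-L, min(L, n))  # clip to [-L, L]
def add(a, b): return h(a + b)            # a (+) b
def mul(a, b): return h(a * b)            # a (x) b

rng = range(-L, L + 1)
# (a) addition and multiplication are commutative (brute force):
assert all(add(a, b) == add(b, a) and mul(a, b) == mul(b, a)
           for a in rng for b in rng)
# (c) multiplication is always associative (brute force):
assert all(mul(mul(a, b), c) == mul(a, mul(b, c))
           for a in rng for b in rng for c in rng)

# (b) addition is not always associative:
print(add(add(L, L), -L), add(L, add(L, -L)))          # 0 vs 10

# (d) multiplication is not always distributive over addition:
print(mul(2, add(6, -1)), add(mul(2, 6), mul(2, -1)))  # 10 vs 8
```

The brute-force asserts confirm (a) and (c) for all accessible numbers, while the printed pairs exhibit concrete violations of associativity of addition and of distributivity.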
The next section describes typical divergent integrals, which appear in quantum field theory (QFT) as a consequence of the fact that the standard approach to QFT is not well-defined. It then explains how, using Mark Burgin’s approach to non-Diophantine arithmetic as described above, divergent integrals in QFT can be treated without divergences in the framework of a consistent mathematical theory.
3. Elimination of Divergences in Quantum Electrodynamics
As noted in Section 1, one of the key problems of modern quantum theory is the problem of divergences. Typical divergences in QFT are similar to the divergences in quantum electrodynamics (QED). As shown in [16], after the integration over hyperspherical angular variables, the integrals of the Feynman diagrams describing the electron self-energy, the photon self-energy, and the electron–photon vertex take the form given in Equation (1). In standard theory, those integrals are divergent, and in the literature, this is sometimes illustrated as follows. Let $I_j(P)$ denote the integrals in Equation (1), where the upper limit is not $\infty$ but $P$. Then, a straightforward integration shows that, if $P$ is very large, the integrals grow with $P$ as in Equation (2).
From the point of view of the formal construction of QED, one should consider the limits of certain expressions as $P \to \infty$ because standard QED is based on standard mathematics, where there is no maximum for the momentum. However, these limits do not exist. This is an indication that, mathematically, QED is not well-defined. This raises the question of why QED describes experimental data with high accuracy.
The answer is as follows. Perturbation theory in QED starts from the bare electron mass $m_0$ and the bare electron electric charge $e_0$. However, the description of experiment should involve not those quantities but the real electron mass $m$ and the real electron charge $e$. It has been proven that, in each order of perturbation theory, all singularities related to the unknown quantities $m_0$ and $e_0$ and all singularities of QED perturbation theory are fully absorbed by $m$ and $e$, such that the resulting formulas expressed in terms of $m$ and $e$ do not contain singularities anymore.
This property of QED is expressed by saying that QED is a renormalizable theory. A very impressive property of QED is that it describes the electron and muon magnetic moments with an accuracy of up to eight decimal digits. This result has been achieved in the third order of perturbation theory, and, so far, no comparisons of theory and experiment in higher orders are possible. It has also been proved that both the electroweak theory and quantum chromodynamics are renormalizable. At the same time, QED and those theories cannot answer the question as to whether the perturbation series are convergent or asymptotic. Also, the existing versions of quantum gravity are not renormalizable.
Despite the successes of renormalizable theories in describing experimental data, the above discussion shows that those theories are not well-defined mathematically. One of the reasons, as indicated in well-known textbooks (see, e.g., [19]), is that they contain products of quantized fields at the same points. This is not a correct mathematical operation because quantized fields are distributions.
We now consider how the integrals in Equation (1) should be treated in non-Diophantine mathematics (NDM). We will use the version of NDM where the functions $g$ and $h$ are as described in the preceding section; as noted there, those functions were proposed by Mark Burgin in [16].
A detailed description of NDM is given in [10], and the very basic facts of NDM are described in Section 2. Let $A$ be a set of integer, rational, or real numbers $x$. Then, as follows from Theorem 1 in Section 2, in NDM, there always exists a number $L$ with the following properties: if $x_1, x_2 \in A$, then the results of addition, subtraction, and multiplication of $x_1$ and $x_2$ will be the same as in standard mathematics if $x_1$ and $x_2$ are much less than $L$, but can essentially differ from the results in standard mathematics if $x_1$ and/or $x_2$ are comparable to $L$. Therefore, we can say that Mark Burgin’s approach is applicable not only to the foundation of arithmetic but also, in the general case, to the foundation of mathematics.
Consider, for example, the integral in Equation (1) that describes the electron self-energy. The Riemann sums for this integral are defined as follows. We represent the interval $[0, \infty)$ as the union $\bigcup_i [p_i, p_{i+1})$, where $p_i = il$ for $i = 0, 1, 2, \ldots$ and $l > 0$. Then, the Riemann sum for the integral is given by Equation (3), and the integral is the limit of the sums (3) when $l \to 0$ and the number of terms in the sum tends to infinity.
Let us note that $p$ and $l$ are dimensional quantities, and their dimensions depend on the system of units. For example, in SI, the dimension of $p$ is $\mathrm{kg{\cdot}m/s}$, while in the system of units $c = \hbar = 1$, which is often used in particle theory, the dimension is $\mathrm{1/m}$. To obtain the corresponding descriptions in NDM, it is necessary to use non-Grassmannian linear spaces [10], where integer, rational, and real numbers are dimensionless. For this reason, one can define $p = ax$, where $a$ is a constant with the dimension of momentum and $x$ is a dimensionless variable. Then, the Riemann sum (3) can be rewritten in terms of the dimensionless variables as in Equation (4).
All the terms in the sum (4) are positive; therefore, in standard theory, this sum diverges when the number of terms tends to infinity. Since the integral is the limit of the sums (3) and (4) when $l \to 0$ and the number of terms tends to infinity, in standard theory these limits do not exist, such that the corresponding integrals are divergent.
To eliminate the unwelcome divergence of the considered integrals, one can use the non-Diophantine mathematics $\mathbf{R}_L$ obtained from $\mathbf{Z}_L$ by replacing $\mathbf{Z}$ with the set of real numbers $\mathbf{R}$ [10]. Then, operations with numbers and functions cannot go beyond the boundary number $L$ in the positive direction and the boundary number $-L$ in the negative direction. At the same time, the results of contemporary physics in general, and quantum theory in particular, which do not involve infinity in the form of divergence, are preserved in this new setting if we take $L$ to be sufficiently large because, for all numbers from $\mathbf{R}_L$ in the interval $(-L, L)$, all basic arithmetical operations are the same as in the conventional Diophantine mathematics $\mathbf{R}$ of real numbers. Note that, if $L$ is very large, the interval $(-L, L)$ is sufficiently large.
Contemporary quantum physics is based on Diophantine mathematics: all its numerical and functional operations adhere to the rules of this mathematical system. Using operations from non-Diophantine mathematics in QFT, we obtain ND quantum physics. When we utilize the non-Diophantine mathematics $\mathbf{R}_L$ to build ND quantum physics, the Riemann sum (3) is transformed into the non-Diophantine Riemann sum (5). Here, $\oplus$ denotes addition, $\ominus$ denotes subtraction, $\oslash$ denotes division, $\bigoplus$ denotes multiple addition, and $\otimes$ denotes multiplication in the non-Diophantine mathematics $\mathbf{R}_L$. Taking the limit of the sums (5) when $l \to 0$ and the number of terms tends to infinity, we obtain a non-Newtonian integral (6) [10]. In ND quantum physics based on the non-Diophantine mathematics $\mathbf{R}_L$, it is the counterpart of the integral in Equation (1) that describes the electron self-energy. By the construction of the non-Diophantine mathematics $\mathbf{R}_L$, the sum (5) cannot be greater than the number $L$. Consequently, the integral (6) also cannot be greater than the number $L$.
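The saturation mechanism can be illustrated numerically. In the following sketch, the same Riemann sums are accumulated once with ordinary operations and once with operations clipped to $[-L, L]$; the integrand $f(x) = x$, the step, and the value of $L$ are illustrative assumptions and do not reproduce the actual integrands of Equation (1).

```python
# A numerical sketch of a non-Diophantine Riemann sum: the ordinary partial
# sums grow without bound, while the clipped sums saturate at L.

L = 1000.0
def h(x):      return max(-L, min(L, x))  # clip to [-L, L]
def add(a, b): return h(a + b)            # a (+) b
def mul(a, b): return h(a * b)            # a (x) b

f = lambda x: x  # illustrative integrand whose integral over [0, inf) diverges
dx = 0.01

standard, clipped = 0.0, 0.0
for i in range(200_000):
    standard += f(i * dx) * dx                  # grows without bound
    clipped = add(clipped, mul(f(i * dx), dx))  # saturates at L

print(standard)  # ~2.0e6, and growing with the number of terms
print(clipped)   # 1000.0: the non-Diophantine sum cannot exceed L
```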
By the same technique as before, in non-Diophantine mathematics, one can transform the Riemann sum (4) into the non-Diophantine Riemann sum (7). Taking the limit of this sum when $l \to 0$ and the number of terms tends to infinity, we obtain a non-Diophantine integral (8) [10]. In ND quantum physics based on the non-Diophantine mathematics $\mathbf{R}_L$, the sum (7) is also the counterpart of the integral in Equation (1) that describes the electron self-energy. By construction, this sum cannot be larger than the number $L$. Consequently, the integral (8) cannot be larger than the number $L$.
As noted by Mark in [16], in a similar way, it is possible to demonstrate that, in the non-Diophantine mathematics $\mathbf{R}_L$, for all the integrals in Equation (1) (those describing the electron self-energy, the photon self-energy, and the electron–photon vertex), we have the inequalities in Equation (9). This shows that, in contrast to standard mathematics, where the values of those integrals are infinite, their counterparts in ND quantum physics based on the non-Diophantine mathematics $\mathbf{R}_L$ are finite. Therefore, in ND quantum physics, the renormalization procedure can be performed in a fully legitimate mathematical way.
4. Discussion
The considerations in Section 1 and Section 2 show that only those versions of mathematics that do not involve the concept of infinity can be free from foundational problems. One such possibility is the version of non-Diophantine mathematics where the functions $g$ and $h$ are as suggested by Mark Burgin in [16]. However, in the joint book by Mark Burgin and Marek Czachor [10], another version of non-Diophantine arithmetic without infinity is proposed. This is a version based on modular arithmetic, where the ring $\mathbf{Z}$ is replaced with the modular ring $\mathbf{Z}_p$, where $p$ is the characteristic of the ring. The rules of addition, subtraction, and multiplication in $\mathbf{Z}_p$ are inherited from the rules in $\mathbf{Z}$, but all operations are taken modulo $p$. The authors of [10] give an example showing that, in $\mathbf{Z}_p$, the result of an operation can differ from the result of the same operation in $\mathbf{Z}$.
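A minimal sketch of this modular alternative, with an illustrative modulus $p = 5$ (the concrete example given in [10] is not reproduced here):

```python
# A sketch of the modular alternative: operations in Z_p are inherited
# from Z but taken modulo p.

p = 5  # illustrative characteristic of the modular ring Z_p

def add_mod(a: int, b: int) -> int:
    """Addition inherited from Z, taken modulo p."""
    return (a + b) % p

def mul_mod(a: int, b: int) -> int:
    """Multiplication inherited from Z, taken modulo p."""
    return (a * b) % p

print(add_mod(2, 2))  # 4, as in ordinary arithmetic
print(add_mod(3, 4))  # 2, since 7 = 2 (mod 5)
print(mul_mod(3, 4))  # 2, since 12 = 2 (mod 5)
```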
In Section 1, when we discussed verificationism and the philosophy of quantum theory, we mentioned possibilities (a) and (b). The version of non-Diophantine mathematics discussed in Section 2, together with its applications discussed in Section 3, is in the spirit of possibility (a). The applications of modular arithmetic are in the spirit of possibility (b). Such applications are discussed in [20].
At present, it is unclear which version of mathematics will be more promising. The immediate advantage of Mark Burgin’s approach is that (as the discussion in Section 3 shows) its results can be immediately applied to various concrete problems, while applying the approach of [20] requires considerable preparatory work. In any case, in fundamental mathematics, the foundational problems must be solved, and, as follows from the principle of verificationism, such mathematics should not contain the concept of infinity. As can be seen from the discussion above, Mark Burgin made considerable contributions in this direction.
Now, I will describe a problem that Mark started to work on, but, unfortunately, did not finish.
Let A and B be two processes, such that:
The initial states in A and B are the same;
All particles in the final states of A and B have the same momenta and spins, but the final state of B contains an additional (soft) photon with very small energy $\epsilon$.
Then, as shown in the literature, with high accuracy, the differential cross-sections $d\sigma_A$ and $d\sigma_B$ of the processes A and B are related as shown in Equation (10), where $\alpha$ is the fine structure constant, $\mathbf{k}$ is the momentum of the soft photon, $o$ is the range of solid angles for the unit vector $\mathbf{k}/|\mathbf{k}|$, and $F$ is a function of the initial and final momenta in the process A. If we consider only the cases where the soft-photon energy is greater than or equal to $\epsilon$ and integrate over $o$ and the photon momentum, then we obtain Equation (11), where the function $G$ is of the order of unity and depends only on the initial and final momenta in the process A. As noted in the literature on QED, with logarithmic accuracy, the upper limit of the integration can be replaced by the energy $E$ of the particle that emits the soft photon, but the value of $\epsilon$ is not limited from below. Therefore, when $\epsilon \to 0$, the cross-section $d\sigma_B$ becomes infinite, and this situation is called the infrared catastrophe.
As explained in the literature, the reason for the infrared catastrophe is that the result (11) is obtained in perturbation theory over $\alpha$, while in fact, as follows from Equation (11), the parameter of the perturbation theory is $\alpha \ln(E/\epsilon)$, and this quantity is not less than unity when $\epsilon$ is small. It is also explained that the infrared catastrophe can be avoided if a lower cutoff for the soft-photon energy is introduced in the intermediate stage of the calculations, and this cutoff disappears at the final stage of the calculations. Although this rule is not well substantiated mathematically, in practice, it results in avoiding the infrared catastrophe.
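The logarithmic growth that drives the infrared catastrophe can be illustrated numerically. The sketch below assumes the soft-photon factor has the form $(\alpha/\pi)\,G\,\ln(E/\epsilon)$ suggested by Equation (11); the values of $G$ and $E$ are illustrative assumptions.

```python
# A hedged numerical illustration of the infrared catastrophe: the relative
# weight of soft-photon emission grows like ln(E/eps) and diverges as the
# lower cutoff eps tends to zero.

import math

alpha = 1 / 137.0  # fine structure constant (approximate)
G = 1.0            # function of order unity from Equation (11) (assumed)
E = 1.0            # energy of the emitting particle, illustrative units

def soft_photon_factor(eps: float) -> float:
    """(alpha/pi) * G * ln(E/eps); finite only for a nonzero cutoff eps."""
    return (alpha / math.pi) * G * math.log(E / eps)

for eps in (1e-3, 1e-9, 1e-30):
    print(eps, soft_photon_factor(eps))  # grows without bound as eps -> 0
```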
So, the situation is similar to the divergences in QED discussed in Section 3, but now the divergences appear not at very large momenta but rather at very small ones. However, Mark believed that well-defined theories should not contain divergences at all. His idea was that the problem of the infrared catastrophe could be rigorously solved within the framework of non-Diophantine mathematics, and he began working on this problem.
I never met Mark in person and knew nothing about his family, but we communicated by phone and email. I was very impressed that, unlike many mathematicians and physicists (who believe that foundational questions are not very important and that the main thing is that the theory works well in concrete problems), Mark believed that foundational questions are a very important part of fundamental mathematics and fundamental physics.