Bateson and Wright on Number and Quantity: How to Not Separate Thinking from Its Relational Context
Round 1
Reviewer 1 Report
I want to remain honest, and even stupid if you accept Cipolla’s laws, about my entire statistical way of thinking. I apply it even when I am reading or writing philosophical papers related to research. I believe in general principles of research, methodology, and modeling, including statistical symmetry. The statistical or methodological evaluation of symmetry must be done in relation not only to the measurement itself and to some major quantities or numbers, but especially to the idea of distributions or statistical complexity, to systematic errors, and, obviously, to the probabilities of realization of events in defining a possible future that brings many events together as close as possible to reality. Nassim Nicholas Taleb, Daniel Kahneman, and Amos Tversky shifted the accents from the obsolete symmetry of the Gaussian curve to the black swan and to the importance of knowing our biases, with an emphasis on errors and on the typology of slow or fast thinking. Without wishing to upset anyone, I accept and really understand the ideas in the article, but I believe that statistical thought, which is the foundation of any methodology, is much simplified in this article. I can accept the paper as it is, no problem, but it was not written at the end of the last century, nor merely as a reference to classic statistical thinking. Over the last three decades everything has changed: the residuals of models hide many, many aspects, and AI and robotics will detail them better if we can recognize the quality of modern statistical thinking, its incredible adaptive quality to reality extracted from the street (antifragile) and from outside academic ideas and theory, and its desire to understand any of the complex changes. If you accept this idea, please try to write about it, and if that is possible in your paper I will feel more satisfied reading it again. Otherwise, no problem; you can decide now, as the article is acceptable and publishable for me anyway. It was a real pleasure to read your paper.
Author Response
Thanks to the reviewer for the insightful questions and comments.
The reviewer seems not to share the article’s fundamental distinction between statistical and measurement approaches to quantification. The authors cited by the reviewer make no mention of this distinction in their work, and so contribute unknowingly to perpetuating the epistemological error identified by Bateson and elaborated in the article. The reviewer’s comment suggests the article does not sufficiently elaborate this main point of contrast between statistical and measurement approaches to quantification. Accordingly, the second full paragraph on page 3 has been modified so that it ends with the sentence concluding “…statistical methods in psychology and the social sciences still find themselves,” and a new paragraph has been inserted:
The double bind is one in which the value of mathematical abstraction is asserted and desired while, at the same time, the methods employed for realizing that value are completely inadequate to the task. Everyone is well aware of the truth of the point made by both Bateson and Wright, that numeric counts are not measured quantities: it is plain and obvious that someone holding ten rocks may not possess as much rock mass as another person holding one rock. The same principle plainly applies in the context of data from tests and surveys, such that counts of correct or agreeable answers to easy questions are not quantitatively equivalent to identical counts of correct or agreeable answers to difficult questions. The practicality, convenience, and scientific rigor of measurement models quantifying constructs in ways facilitating the equating of easy and hard tests have been documented and validated for decades, but have not yet been integrated into mainstream conceptions of measurement.
The first sentence of the following paragraph has then been modified, so that it starts with: “Even so, the resolution of the double bind Wright helped create is widely applied….”
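To make the equating claim in the inserted paragraph concrete, a minimal sketch may help. It assumes the dichotomous Rasch model, with which Wright’s work is closely associated; the symbols below are illustrative notation, not wording quoted from the article:

% Minimal sketch (illustrative, not quoted from the article): dichotomous Rasch model.
% \theta_n is person n's measure (ability) and \delta_i is item i's calibration (difficulty),
% both expressed on the same interval (logit) scale.
\[
  \Pr\{X_{ni} = 1 \mid \theta_n, \delta_i\}
  \;=\; \frac{\exp(\theta_n - \delta_i)}{1 + \exp(\theta_n - \delta_i)} .
\]

Because the person and item parameters enter only through their difference, a given count of correct answers on a harder set of items implies a higher measure than the same count on an easier set, and measures estimated from different item sets can be placed on one common interval scale; this is the sense in which easy and hard tests can be equated.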
Also incorporated into the paper in response to reviewer 1 is the contrast between Kahneman’s desire to have people take the trouble to think more slowly and the implication of the paper that we should instead provide the measurement infrastructure needed to help people think more clearly when they are thinking fast. The first full paragraph on page 13 has been newly inserted to address this distinction:
These infrastructures satisfy Whitehead’s [2] recognition that “Civilization advances by extending the number of important operations which we can perform without thinking about them” (p. 61). Given the complexity of the problem, it is unreasonable to expect individuals to master the technical matters involved. This insight contradicts those, like Kahneman [160], who recommend learning to think more slowly instead of allowing language’s automatic associations to play out. Whitehead [2] held that this position, “that we should cultivate the habit of thinking of what we are doing,” is “a profoundly erroneous truism.” Developmental psychology [161-164] and social studies of science [165-167] have arrived at much the same conclusion, noting that individual intelligence is often a function of the infrastructural scaffolding provided by linguistic and scientific standards. When these standards do not include the concepts and words needed for dealing with the concrete problems of the day, individuals and institutions must inevitably fail.
This point is reiterated in the second full paragraph from the bottom of the same page, where Hayek’s agreement with Whitehead is noted.
Given reviewer 1’s general acceptance of the paper, these changes should be satisfactory.
Reviewer 2 Report
This article is well written grammatically, but I am unable to appreciate which research question the author is trying to address. The paper reads like a commentary to me, and the major contribution is difficult to identify. I suggest that the author state the research question being addressed and outline the method being used to address it. The author appears to be trying to situate Wright's work in the context of Bateson's work, but the author's own major contribution is not visible to me.
Author Response
Thanks to the reviewer for the insightful questions and comments.
The last paragraph of the introduction states the research question being addressed (epistemological confusion) and outlines the method being used to address it (improved measurement). This has been clarified with the addition of a new, third sentence, and a modification of the final, concluding sentence, in that paragraph:
Important aspects of these solutions were anticipated by Bateson but came to fruition in the work of Benjamin Wright, a physicist and psychoanalyst who made a number of fundamental contributions in measurement modelling, experimental approaches to instrument calibration, parameter estimation, fit analysis, software development, professional development, and applications across multiple fields [4, 5]. After the problem of how confused logical types are implicated in mixed messages is described, Bateson's distinction between numeric counts and quantitative measures will be shown to set the stage for Wright's expansion on that theme. That is, the problem being addressed concerns the epistemological confusion of numeric counts and measured quantities, and the solution offered consists of improved measurement implemented via metrological infrastructures functioning at multiple levels of complexity. The specific contribution made by this article comes in situating measurement and instruments calibrated to reference standards in the context of larger epistemological issues; this connection opens up new possibilities for meeting the highly challenging technical demand for new forms of computability that are as local and socially situated as they are abstract and able to travel across time and space [6-8].
Reviewer 3 Report
See attached report.
Comments for author File: Comments.pdf
Author Response
Thanks to the reviewer for the insightful questions.
Concerning the definition of a double bind at 1.1.120, a new paragraph has been added on page 3 at lines 118-129:
The double bind is one in which the value of mathematical abstraction is asserted and desired while, at the same time, the methods employed for realizing that value are completely inadequate to the task. Everyone is well aware of the truth of the point made by both Bateson and Wright, that numeric counts are not measured quantities: it is plain and obvious that someone holding ten rocks may not possess as much rock mass as another person holding one rock. The same principle plainly applies in the context of data from tests and surveys, such that counts of correct or agreeable answers to easy questions are not quantitatively equivalent to identical counts of correct or agreeable answers to difficult questions. The practicality, convenience, and scientific rigor of measurement models quantifying constructs in ways facilitating the equating of easy and hard tests have been documented and validated for decades, but have not yet been integrated into mainstream conceptions of measurement.
Concerning how the distinction between counts and quantities can be built into knowledge infrastructures at 2.1.246, the sentence has been expanded to say:
These researchers suggest that systematic distinctions between numeric counts and measured quantities can be built into knowledge infrastructures in the same way that the standard kilogram unit for measuring mass has been globally adopted.
Concerning the lack of clarity at 3.1.382, the sentence at issue has been shortened, new text has been added, and existing text revised, as follows:
It also led to the systematic incorporation of unresolvable double binds in the messaging systems of virtually all major institutions. That is, the policies and practices of education, health care, government, and business all rely on nonlinear and ordinal numeric counts and percentages that are presented and treated as though they are linear, interval measured quantities.
What makes this a schizophrenic double bind? Just this: everyone involved is perfectly well aware that these counts of concrete events and entities (my ten small rocks or ten correct answers to easy questions, vs. your five large rocks or five correct answers to difficult questions) are not quantitatively comparable, but everyone nonetheless plays along and acts as if they are. We restrict comparisons to responses to the same set of questions as one way of covering over our complicity in the maintenance of the illusion. This results in the uncontrolled proliferation of thousands of different, incomparable ways of purportedly measuring the same thing. Everyone accepts this as a necessity that must be accommodated—though it is nothing of the sort.
The schizophrenic dissonance of these so-called metrics is in no way incontrovertibly written in stone as an absolutely unrevisable linguistic formula. Following on the emergence in the 1960s of key developments in measurement theory and practice, two highly reputable measurement theoreticians wrote in 1986 (a) that the same structure serving as a basis for measuring mass, length, duration, and charge also serves as a basis for measuring probabilities, and (b) that this broad scope of fundamental measurement theory’s applicability was “widely accepted” [105]. But if measurement approaches avoiding the epistemological error of confusing counts for quantities were “widely accepted” by 1986, why are communications infrastructures dependent on psychological and social measurements still so fragmented? Why have systematic implementations of the structures commonly found to hold across physical and psychosocial measurements not been devised so as to better inform the management of outcomes?
The answers to these questions likely involve the extreme difficulty of thinking, acting, and organizing in relation to multiple levels of complexity. As Wright concludes, the difference between counting right answers and constructing measures is the same as the difference between counting and weighing oranges. But neither Wright nor Bateson ever mentions the infrastructural issues involved in making it possible to weigh oranges in a quality-assured quantity value traceable to a global metrological standard. It is true that, just as one person with three oranges might have as much orange juice as someone else with six, so, too, might one person correctly answering three hard questions have more reading ability than someone else correctly answering six easier questions. But this point alone contributes nothing to envisioning, planning, incentivizing, resourcing, or staffing the development of the infrastructure needed to make the measurement of reading ability, for instance, as universally interpretable as the measurement of mass.
An alternative examination of the counts-quantities situation is provided by Pendrill and Fisher [84], who apply engineering and psychometric models to numeric count estimates obtained from members of a South American indigenous culture lacking number words greater than 3. The results clearly support Bateson's sense of the relation of counting to pattern recognition by demonstrating how intuitive visualizations approximate numeric tallies and so provide a basis for estimating quantitative measures. But Pendrill and Fisher expand the consideration of the problem to a level of generality supporting the viability of a new class of metrological knowledge infrastructure standards.
As to 4.1.422, the sentence quoted by reviewer 3 has been put at the start of a new paragraph and two explanatory sentences have been added:
Fit to the model provides empirical substantiation of the existence of unit quantities that stand as consistent, repeatable, and comparable differences. An explanatory theory successfully predicting the scale locations of items based on features systematically varied across them enables qualitative annotations to and interpretations of the quantitative scale [75, 119].
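As an illustration of what fit to the model involves in practice, the following standard residual-based statistics are sketched here; they are not quoted from the article, and they assume the dichotomous model form above, with model-expected probability of a correct response written as P:

% Illustrative sketch (not from the article): standardized residual for person n on item i,
% and the unweighted (outfit) mean-square fit statistic for item i across N persons.
\[
  z_{ni} \;=\; \frac{x_{ni} - P_{ni}}{\sqrt{P_{ni}\,(1 - P_{ni})}},
  \qquad
  \mathrm{Outfit\ MnSq}_i \;=\; \frac{1}{N}\sum_{n=1}^{N} z_{ni}^{2}.
\]

Mean squares near 1 indicate that the observed responses are consistent with the invariant unit the model requires; values much larger than 1 flag items or persons whose responses contradict it.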
I certainly agree as to the clarity of Wright’s statement, noted by the reviewer at 5.1.449, and aspire someday to approximate that level of expression myself.
Round 2
Reviewer 2 Report
I am satisfied with the clarification.
Reviewer 3 Report
The contents of the paper are still pretty obscure to me, but the paper has been improved. One suggestion is to include a very clear statement of what the paper accomplishes that is new. The statement on lines 74-79 doesn't enlighten much.