Newtonian physics and Lockean psychology
Lateral thinkers interested in the mind have long been inspired by the methods and results of physics.
For example, the British empiricist philosopher John Locke (*1632; †1704) was
imbued with the corpuscular theory of light (primarily formulated by his friend
Sir Isaac Newton) when he formulated his “corpuscular theory of ideas” in his
profoundly influential publication “An Essay Concerning Human Understanding”,
which appeared in 1690. Locke transferred and generalised the axioms of Newton’s
physical theory (which concerned the lawful behaviour of matter) to the
psychological (nonmaterial) domain. In other words, Locke committed himself to
a reductionist Newtonian science of the mind (Ducheyne,
2009). Corpuscularianism is an ontological theory which postulates that all
matter is composed of minute particles (Jacovides,
2002). This notion is similar to the theory of atomism, except that, in
contrast to atoms (from the Greek átomos, “that which is indivisible”)[1],
corpuscles can theoretically be further subdivided (ad infinitum).
According to Newton, these corpuscles are held together by a unifying force
which he termed “gravitation” (Rosenfeld,
1965). One of Locke’s primary concerns in this regard was: What are the most
elementary “particles” of human understanding (i.e., what are the “atoms of
thought”), where do they come from, and how are they held together? Locke
rejected the Cartesian notion of innate (God-given) ideas, but he accepted some
intuitive principles of the mind (e.g., the law of contradiction) which he
assumed must be in place a priori in order for any knowledge to arise.[2]
In addition to this kind of intuitive knowledge about propositional logic,
which he conceptualized as immediate, indubitably knowable and certainly true,
Locke also accepted some forms of demonstrative knowledge as certainly true,
for example the axioms of Euclidean geometry. In contrast to intuitive
knowledge, demonstrative knowledge requires a series of proofs in order to reach
a general conclusion which is true in all contexts and circumstances.[3] Having
defined these principles he pursued his initial question: What are the most
elementary “particles” of human cognition, where do they come from, and how are
they held together? Locke's answer is simple: Ideas come from experience and
are held together by associational forces (Halabi,
2005). That is, empirical knowledge which is accumulated diachronically during
the course of a lifetime forms the basis of thought. Locke argues that the most
elementary act is the sensory act and the most elementary contents of the mind
are sensations. He remarks: “For to imprint anything on the mind without the
mind's perceiving it, seems to me hardly intelligible” (Chapter 2 - On
innate ideas). In other words, what enters the mind comes through the sensorium
and these elementary sensations must be connected somehow. According to Newton,
the corpuscular components of reality are held together by gravitational forces,
i.e., Newton's law of universal gravitation which follows the inverse-square
law.[4] Locke
ingeniously applied this idea to elementary sensations and proposed the
principle of “association” as the mental counterpart to physical gravitation.[5] Ex
hypothesi, objects or events which are frequently experienced together are
connected by associative processes.[6] They thereby recombine to form simple
ideas. Out of simple ideas, increasingly complex ideas are hierarchically assembled
by the binding force of association – this is the Lockean associative “logic of
ideas” (Yolton,
1955). The Lockean associationist memetic[7] account is still viable today, as exemplified by
associative (Bayesian) neural networks in artificial intelligence research. Locke
was clearly far ahead of his time and the associative principles he formulated
were later partly confirmed experimentally by his scientific successors, e.g.,
Ivan Pavlov (Mackintosh,
2003) and later by the behaviourists in the context of S-R associations
(Skinner, Watson, Thorndike, Tolman, among others). Furthermore, the Newtonian/Lockean
theory of how ideas are composed in the mind forms the basis of the “British
Associationist School” with its numerous eminent members (David Hartley, Joseph
Priestley, James Mill, John Stuart Mill, Alexander Bain, David Hume, inter
alia). In England, the Associationist School exerted a unique influence
on science and art alike, and the principles of associationism and connectionism
are still widely applied in many scientific fields, for instance, in the
psychology of associative learning and memory (Rescorla,
1985) and in computer science (for instance, associative neural networks such as
state-of-the-art deep, multi-layered convolutional neural nets; Kivelä
et al., 2014; LeCun et al., 2015). Newton’s and Locke’s pervasive
influence on psychology is evident in the fact that Pavlov’s classical
and Skinner’s operant conditioning can both be classified as forms of
associationism, as can Hebbian learning, which is ubiquitously utilised in
science.
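To make the associationist principle concrete, the following minimal Python sketch simulates the growth and decay of a single stimulus–outcome association using the Rescorla–Wagner learning rule, a standard formalisation of Pavlovian association formation (the rule and the parameter values are an illustrative addition, not taken from the text or the cited sources):

```python
def rescorla_wagner(trials, alpha=0.3, beta=1.0, lam=1.0):
    """Simulate the associative strength V between a conditioned stimulus (CS)
    and an unconditioned stimulus (US) across a sequence of trials.

    trials : sequence of 1/0 flags indicating whether the US is present.
    alpha, beta : salience / learning-rate parameters (assumed values).
    lam : maximum associative strength the US can support.
    """
    V = 0.0
    history = []
    for us_present in trials:
        target = lam if us_present else 0.0
        V += alpha * beta * (target - V)   # delta rule: surprise drives learning
        history.append(V)
    return history

# 20 acquisition trials (CS paired with US) followed by 20 extinction trials (CS alone).
curve = rescorla_wagner([1] * 20 + [0] * 20)
print(round(curve[19], 3), round(curve[-1], 3))   # association builds up, then decays
```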
To this day, psychology and much of science operate on the basis of a
materialistic, mechanistic, and deterministic quasi-Newtonian paradigm.
The crucial point is that Locke's associationist (Newtonian) theory of mind is fundamentally deterministic (and consequently leaves no room for free will; cf. Conway & Kochen, 2011). Newton’s “Philosophiæ Naturalis Principia Mathematica” (Mathematical Principles of Natural Philosophy), originally published in 1687, is among the most influential works in the history of science, and Newton’s materialistic, mechanistic determinism shaped scientific hypothesising and theorising in multifarious ways. In 1814, Pierre Simon Laplace famously wrote in his seminal “Essai philosophique sur les probabilités” (A Philosophical Essay on Probabilities):
“We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.” (Laplace, 1814, p. 4)[8]
This deterministic view of reality remained extremely influential throughout the 19th century and is still implicitly or explicitly the ideological modus operandi for the clear majority of scientists today. However, in physics, anomalous data and inexplicable findings kept accumulating (e.g., the three-body problem and the results of Young’s double-slit experiment), and eventually a non-deterministic (stochastic) quantum perspective on physical reality evolved, as exemplified by the following concise statement of the uncertainty principle by Werner Heisenberg from “Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik” (On the perceptual content of quantum theoretical kinematics and mechanics):
“In a stationary state of an atom its phase is in principle indeterminate,” (Heisenberg, 1927, p. 177)[9]
One of the most eminent adversaries of this indeterministic theoretical approach, Albert Einstein, vehemently disagreed with the stochastic uncertainty inherent to quantum mechanics. For example, Einstein wrote in one of his letters to Max Born in 1944:
“We have become Antipodean in our scientific expectations. You believe in the God who plays dice, and I in complete law and order in a world which objectively exists, and which I, in a wildly speculative way, am trying to capture. I firmly believe, but I hope that someone will discover a more realistic way, or rather a more tangible basis than it has been my lot to find. Even the great initial success of the quantum theory does not make me believe in the fundamental dice-game, although I am well aware that our younger colleagues interpret this as a consequence of senility. No doubt the day will come when we will see whose instinctive attitude was the correct one.” (Born, 1973, p.149)[10]
Einstein’s special and general theories of relativity, radical though they were, explain natural phenomena in a Newtonian deterministic fashion, thereby leaving the established forms of reasoning, logic, and mathematics of the 19th century undisputed. By comparison, quantum theory completely changed the conceptual framework of science due to its fundamentally stochastic indeterminism. It has changed not just scientific concepts of physical reality but also our understanding of the most essential rationality principles in general; for instance, a new form of quantum logic was developed (Beltrametti & Cassinelli, 1973). Quantum theory is now by a large margin the most reliable theory science has ever developed because its quantitative predictions are extremely accurate and have been tested in countless domains. Despite this unmatched track record, contemporary psychology, the neurosciences, and the biomedical sciences[11] (and their associated statistical methods) are still modelled after the antiquated Newtonian/Lockean deterministic worldview, and these disciplines (and others) have not yet aligned themselves with the far-reaching implications of quantum theory. In other words, the revolutionary reformation of Newtonian mechanics has not yet reached psychology, which is still based on the premise of local realism inherited from classical physics. In fact, it can be cogently argued that the classical probability framework (which is used almost exclusively in cognitive modelling efforts) exhibits the defining characteristics of a tenacious Kuhnian paradigm. As Thomas Kuhn articulates in his influential book “The Structure of Scientific Revolutions”:
“... ‘normal science’ means research firmly based upon one or more past scientific achievements, achievements that some particular scientific community acknowledges for a time as supplying the foundation for its further practice. Today such achievements are recounted, though seldom in their original form, by science textbooks, elementary and advanced. These textbooks expound the body of accepted theory, illustrate many or all of its successful applications, and compare these applications with exemplary observations and experiments. Before such books became popular early in the nineteenth century (and until even more recently in the newly matured sciences), many of the famous classics of science fulfilled a similar function. Aristotle’s Physica, Ptolemy’s Almagest, Newton’s Principia and Opticks, Franklin’s Electricity, Lavoisier’s Chemistry, and Lyell’s Geology—these and many other works served for a time implicitly to define the legitimate problems and methods of a research field for succeeding generations of practitioners.” (Kuhn, 1962, p. 10)
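To make the contrast drawn above between the classical probability framework and the quantum formalism tangible, the following minimal numpy sketch (a purely illustrative toy example, not drawn from the cited literature) shows that for two incompatible, non-commuting yes/no questions the probability of answering “yes” to both depends on the order in which they are posed, an order effect that a single classical joint distribution cannot reproduce:

```python
import numpy as np

# Normalised state vector of a two-dimensional "belief" system (assumed toy state).
theta = np.pi / 6
psi = np.array([np.cos(theta), np.sin(theta)])

# Projectors representing two incompatible yes/no questions A and B.
A = np.array([[1.0, 0.0],
              [0.0, 0.0]])                  # "yes" to question A
plus = np.array([1.0, 1.0]) / np.sqrt(2)
B = np.outer(plus, plus)                    # "yes" to question B

def p_yes_then_yes(first, second, state):
    """P('yes' to the first question, then 'yes' to the second), via the projection rule."""
    collapsed = first @ state               # unnormalised post-measurement state
    return float(collapsed @ second @ collapsed)

print(round(p_yes_then_yes(A, B, psi), 3))  # P(A then B) = 0.375
print(round(p_yes_then_yes(B, A, psi), 3))  # P(B then A) ~ 0.467: the order matters
```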
Allen, K. (2010). Locke and the nature of ideas. In Archiv für Geschichte der Philosophie (Vol. 92, Issue 3, pp. 236–255). https://doi.org/10.1515/AGPH.2010.011
Beltrametti, E. G., & Cassinelli, G. (1973). On the Logic of Quantum Mechanics. Zeitschrift für Naturforschung - Section A Journal of Physical Sciences, 28(9), 1516–1530. https://doi.org/10.1515/zna-1973-0920
Conway, J. H., & Kochen, S. (2011). The strong free will theorem. In Deep Beauty: Understanding the Quantum World Through Mathematical Innovation. https://doi.org/10.1017/CBO9780511976971.014
Dowling, J. P., & Milburn, G. J. (2003). Quantum technology: the second quantum revolution. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 361(1809), 1655–1674. https://doi.org/10.1098/rsta.2003.1227
Ducheyne, S. (2009). The flow of influence: From Newton to Locke... and back. Rivista Di Storia Della Filosofia, 64(2).
Evans, J. S. B. T. (2003). In two minds: Dual-process accounts of reasoning. In Trends in Cognitive Sciences (Vol. 7, Issue 10, pp. 454–459). https://doi.org/10.1016/j.tics.2003.08.012
Evans, J. S. B. T., & Stanovich, K. E. (2013). Dual-Process Theories of Higher Cognition: Advancing the Debate. Perspectives on Psychological Science, 8(3), 223–241. https://doi.org/10.1177/1745691612460685
Greenwald, A. G., & Farnham, S. D. (2000). Using the Implicit Association Test to measure self-esteem and self-concept. Journal of Personality and Social Psychology, 79(6), 1022–1038. https://doi.org/10.1037/0022-3514.79.6.1022
Gröblacher, S., Paterek, T., Kaltenbaek, R., Brukner, Č., Zukowski, M., Aspelmeyer, M., & Zeilinger, A. (2007). An experimental test of non-local realism. Nature, 446(7138), 871–875. https://doi.org/10.1038/nature05677
Halabi, S. (2005). A useful anachronism: John Locke, the corpuscular philosophy, and inference to the best explanation. Studies in History and Philosophy of Science Part A, 36(2), 241–259. https://doi.org/10.1016/j.shpsa.2005.03.002
Hebb, D. O. (1949). The organization of behavior: A neuropsychological theory. Wiley.
Heisenberg, W. (1927). Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik. Zeitschrift für Physik, 43(3–4), 172–198. https://doi.org/10.1007/BF01397280
Heisenberg, W. (1958). Physics and philosophy: The revolution in modern science. Harper & Brothers.
Heylighen, F., & Chielens, K. (2008). Cultural evolution and memetics. Encyclopedia of Complexity and System Science, 1–27. https://doi.org/10.1007/978-0-387-30440-3
Jacovides, M. (2002). The epistemology under Locke's corpuscularianism. Archiv für Geschichte der Philosophie, 84(2), 161–189. https://doi.org/10.1515/agph.2002.008
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
Kendal, J. R., & Laland, K. N. (2000). Mathematical Models for Memetics. Journal Of Memetics, 4(2000), 1–9. http://cfpm.org/jom-emit/2000/vol4/kendal_jr&laland_kn.html
Kivelä, M., Arenas, A., Barthelemy, M., Gleeson, J. P., Moreno, Y., & Porter, M. A. (2014). Multilayer networks. Journal of Complex Networks, 2(3), 203–271. https://doi.org/10.1093/comnet/cnu016
Kuhn, T. S. (1962). The structure of scientific revolutions. University of Chicago Press.
Laplace, P. S. (1814). Essai philosophique sur les probabilités. Mme. Ve Courcier. https://doi.org/10.1017/CBO9780511693182
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444. https://doi.org/10.1038/nature14539
Mackintosh, N. J. (2003). Pavlov and Associationism. Spanish Journal of Psychology, 6(2), 177–184. https://doi.org/10.1017/S1138741600005321
Oppenheim, J., & Wehner, S. (2010). The uncertainty principle determines the nonlocality of quantum mechanics. Science, 330(6007), 1072–1074. https://doi.org/10.1126/science.1192065
Pullman, B. (2001). The atom in the history of human thought. Oxford University Press. https://books.google.co.uk/books/about/The_Atom_in_the_History_of_Human_Thought.html?id=IQs5hur-BpgC&redir_esc=y
Rasmussen, W. S. (2006). The shape of ancient thought. Philosophy East and West, 56(1), 182–191. https://doi.org/10.1353/pew.2006.0003
Rescorla, R. A. (1985). Associationism in animal learning. In Perspectives on learning and memory. (pp. 39–61).
Robertson, H. (1929). The uncertainty principle. Physical Review, 34, 163–164. https://doi.org/10.1103/PhysRev.34.163
Rosenfeld, L. (1965). Newton and the law of gravitation. Archive for History of Exact Sciences, 2(5), 365–386. https://doi.org/10.1007/BF00327457
Sriram, N., & Greenwald, A. G. (2009). The brief implicit association test. Experimental Psychology, 56(4), 283–294. https://doi.org/10.1027/1618-3169.56.4.283
Thompson, V. A. (2012). Dual-process theories: A metacognitive perspective. In In Two Minds: Dual Processes and Beyond. https://doi.org/10.1093/acprof:oso/9780199230167.003.0008
Yolton, J. W. (1955). Locke and the Seventeenth-Century Logic of Ideas. Journal of the History of Ideas, 16(4), 431–452. http://www.jstor.org/stable/2707503
[1] The idea behind the atom is that matter is composed of primordial material elements which are fundamental to all of existence. Etymologically, the Greek term átomos (ἄτομος) is a composite lexeme consisting of the negating prefix a- (“not”) and a stem derived from the verb témnein, “to cut”. Ergo, its literal meaning is “not cuttable”. In the memetic history of human thought, the term atom is ascribed to the Greek philosophers Leucippus and Democritus (Pullman, 2001), even though similar atomistic concepts were present in ancient Indian schools of thought (Rasmussen, 2006).
[2] The Greek term “Epistemonicon” (i.e., the cognitive ability by which humans comprehend universal propositions) provides an apposite semantic descriptor for this psychological faculty.
[3] From a modern dual-systems perspective on cognitive processes, automatic (associative) and effortless intuition is a System 1 process, whereas sequential and effortful logical reasoning is a System 2 process (Kahneman, 2011) (but see Appendix A7). Hence, Locke’s theory can be regarded as a predecessor of modern dual-process theories which are now ubiquitous in many fields of psychology and neuroscience (Evans, 2003; Evans & Stanovich, 2013; Thompson, 2012).
[4] The inverse-square law can be mathematically notated as follows: F = G·m₁·m₂/r², where F is the gravitational force between two bodies, m₁ and m₂ are their masses, r is the distance between them, and G is the gravitational constant.
In the context of Locke’s psychological theory, the term “gravitational intensity” can be replaced with “associational intensity”. While gravitation is the attraction between two physical objects, association describes the attraction between mental concepts (i.e., ideas). For instance, the “distance” between various concepts can be indirectly quantified by variations in reaction times in a semantic priming paradigm, for instance, the implicit association test (IAT) (Greenwald & Farnham, 2000; Sriram & Greenwald, 2009). The concepts “tree” and “roots” are more closely associated (i.e., the “associational intensity” is stronger) than the concepts “university” and “beer” (perhaps this is an unfortunate example, but it illustrates the general point).
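As a purely hypothetical illustration (invented reaction times and a toy scoring rule, not the actual IAT scoring algorithm), shorter reaction times for a prime–target pair could be mapped onto a larger “associational intensity” along the following lines:

```python
# Hypothetical reaction times (ms) for prime-target pairs; shorter RTs are taken
# to indicate a stronger association between the two concepts.
reaction_times_ms = {
    ("tree", "roots"): 480,
    ("tree", "university"): 690,
    ("university", "beer"): 610,
    ("university", "exam"): 520,
}

def associational_intensity(rt_ms, baseline_ms=800):
    """Toy index: the further below an (assumed) baseline RT, the stronger the association."""
    return max(baseline_ms - rt_ms, 0) / baseline_ms

for pair, rt in reaction_times_ms.items():
    print(pair, round(associational_intensity(rt), 2))
```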
[5] Interestingly, it has been noted by historians of philosophy and science that “Locke's attitude towards the nature of ideas in the Essay is reminiscent of Boyle's diffident attitude towards the nature of matter” (Allen, 2010, p. 236).
[6] This Lockean idea can be regarded as a predecessor of Hebbian engrams and cell-assembly theory – “cells that fire together wire together” (Hebb, 1949). In its simplest form, Hebb's postulate can be formalised as Δwᵢⱼ = η·xᵢ·xⱼ, where wᵢⱼ is the synaptic weight between neurons i and j, η is the learning rate, and xᵢ and xⱼ are the activations of the pre- and postsynaptic neurons.
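A minimal Python sketch of this update rule (an illustrative addition under simplifying assumptions, namely binary activations and a fixed learning rate, rather than code from any cited source): units that are frequently co-active end up with a disproportionately strong mutual connection.

```python
import numpy as np

rng = np.random.default_rng(0)

n_units, n_steps, eta = 4, 1000, 0.1
weights = np.zeros((n_units, n_units))               # synaptic weights, initially zero

for _ in range(n_steps):
    x = (rng.random(n_units) < 0.5).astype(float)    # binary activations
    if rng.random() < 0.9:
        x[1] = x[0]                                   # units 0 and 1 mostly "fire together"
    weights += eta * np.outer(x, x)                   # Hebb's rule: dw_ij = eta * x_i * x_j
np.fill_diagonal(weights, 0.0)                        # ignore self-connections

# The 0-1 connection ends up markedly stronger than the others ("wire together").
print(np.round(weights / n_steps, 3))
```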
[7] The science of memetics tries to (mathematically) understand the evolution of memes, analogous to the way genetics aims to understand the evolution of genes (Kendal & Laland, 2000). Locke’s early contributions are pivotal for the development of this discipline which is embedded in the general framework of complex systems theory (Heylighen & Chielens, 2008). Memetics is of great importance for our understanding of creativity and the longitudinal evolution of ideas in general. Memes reproduce, recombine, mutate, compete and only the best adapted survive in a given fitness landscape. Similar to genotypes, the degree of similarity/diversity between memes (and their associated fitness values) determines the topology of the fitness landscape.
[8] The full essay is available on the Internet Archive under the following URL: https://archive.org/details/essaiphilosophiq00lapluoft/page/n5
[9] The mathematical formulation of the Heisenbergian uncertainty principle is Δx·Δp ≥ ℏ/2, where Δ signifies standard deviation (spread or uncertainty), x and p signify the position and linear momentum of a given particle, and ℏ signifies the reduced Planck constant (Planck's constant divided by 2π).
That is, an accurate measurement of position disturbs momentum and vice versa (see
Robertson, 1929). For a discussion of the “inextricable” relation between non-locality and the uncertainty principle, see Oppenheim and Wehner (2010).
[10] The Einstein–Born letters are available
on the Internet Archive under the following URL:
https://archive.org/details/TheBornEinsteinLetters/
[11] It has been argued that the entire scientific endeavour has not yet come to terms with the radical revolution which has been set in motion by quantum physics (Dowling & Milburn, 2003; Heisenberg, 1958). Science wants to define itself as objective, detached, and neutral. Several findings from quantum physics challenge this identity. For instance, the observer effect questions the possibility of objective measurements and the violations of Bell inequalities challenge the notion of local realism which forms the basis of much of scientific theorising (Gröblacher et al., 2007).