What Scientific Understanding Implies About Causation

The last post discussed the idea that understanding is an epistemic achievement distinct from knowledge. In particular, some philosophers have made the case that understanding doesn’t require truth. I focused on one corollary of this idea: scientific understanding can be achieved via causal explanations that utilize idealization. This is possible because false characterizations in the explanations can stand in for features of the real causal structure of target phenomena. I then went on to infer a broader claim: understanding does not rely on an assumption of scientific realism, as long as causation is real and can be represented using idealized scientific descriptions.

I now wish to highlight how this picture favors a certain take on the nature of causation itself. Given that idealized scientific explanations can be vehicles for an epistemic connection between human cognizers and real causal structure, we can ask: what does this imply about the latter? As I’ll discuss below (in two parts), we should conclude that the kind of causation involved is singular and productive.

Like clockwork.

I

Generality and Explanation

When comparing explanations, we favor those that are broader in scope (all else equal).[1] The wider the scope of application, the greater the explanatory power.  The desire for generality is one of the reasons the representations used in scientific explanations include false elements.

The idea that idealization is employed in pursuit of broad explanatory power was a central theme of Nancy Cartwright’s classic How the Laws of Physics Lie (1983). More recently, Angela Potochnik has stressed the ubiquitous role of idealization throughout science, and lists the enabling of generality as one of several reasons to idealize (Potochnik, 2017, 48). The implication is that the striving for generality isn’t necessarily driven by the facts about the world, but rather by facts about scientists’ research aims and about human cognition.  Specifically, generalized causal explanations may not faithfully reflect the actual nature of the world’s causal structure.

Another philosopher who has reflected on this topic is Stuart Glennan.[2] Glennan studies mechanistic explanation and its connection to ontological questions about the role of mechanisms in nature (often with a focus on biology). Ontologically, mechanisms consist of “entities (parts) whose activities and interactions are organized so as to be responsible for the phenomenon” (Glennan, 2017, 17). Glennan sees the causal structure of the world as composed of these mechanisms. Importantly, they are “local, heterogeneous, and particular” (5). Generality is introduced when mechanisms are represented by mechanistic models: “Models are our source of generality” (62). They are used to represent multiple worldly targets. To represent a larger class of mechanisms, the models are generalized using abstraction and idealization. Glennan discusses the historical example of the study of nerve signals along axons. A giant squid axon (a concrete or material model) was studied because its size made it easier to manipulate. It was thought that its features could stand in for axons generally, and the results of experiments with squid led to the formulation of the Hodgkin-Huxley mathematical model describing the action potential (62-3). These models were taken to apply across a wide range of nerve cells. But this explanatory power comes at the cost of ignoring features that cause individual systems to differ from one another in their details. Now, this is an episode describing a great scientific success. But the process has consequences for how we think about the precise relationship between models and reality.

The Status of Scientific Kinds

As Glennan goes on to discuss, even when we do seek to depict particular systems, we inevitably describe them as instances of types or kinds (88). Our generalizing approach to modeling natural phenomena is reflected in the ways we sort things into scientific kinds.

Notably, this view makes realism about the scientific kind terms used in representations problematic, since abstraction and idealization introduce distortions. For his part, Glennan still wants to endorse a form of “weak realism” about kinds (89). The idea is that there are real natural kinds which science seeks to identify, but we acknowledge that our identifications may be fallible to various degrees. Glennan goes on to claim that generalizing models “can be used to represent these things in virtue of real similarities between features of the model and features of the targets” (99). On the other hand, later in his discussion, Glennan also asserts that successful explanation requires that the models utilized represent at least one particular target. As he puts it, until a generalization “is attached to particular cases, there is no explanation” (227). Generalized models will outrun our testing of individual cases (such as the giant squid axon above), but it is their ability to stand in for particular targets that tethers the model to reality and allows successful explanation.

Given this, one can just as well take an anti-realist stance about kinds, holding that their intended scope is a reflection, not of anything real, but rather of our cognitive make-up and research interests. In this picture, explanations with these fallible generalizing features can still succeed and foster understanding, in keeping with the discussion in the prior post. Scientific understanding only requires that these features can stand in for elements of singular causal structures. Of course, philosophers may go ahead and make additional metaphysical posits (kinds, laws, universals, etc.) that would support realism about various scientific generalizations, but there is no need to do so to make sense of science’s epistemic achievement.

Summary of Part I

To recap: understanding requires an epistemic connection between our cognitive processes and the worldly target of our understanding.  Scientific explanations (usually causal explanations) are vehicles that mediate this connection. When explanations feature generalized representations, however, their causal features may not track something real that actually unites the multiple particular targets to which the representation is ostensibly applied. Here, the striving for generality is serving human goals. But such explanations can still succeed in fostering understanding, by virtue of representing features of the causal structure of at least one singular target. The only real causal structure we need is singular.

II

Two Concepts of Causation in Scientific Explanation

When philosophers offer accounts of scientific explanation, they employ one of two concepts of causation. The first, production, emphasizes the generation of change via spatio-temporal connection, while the second, difference-making, concerns relevance or dependence relations.[3]

Mechanistic philosophers such as Stuart Glennan see production as the fundamental causal concept.[4] In mechanisms, persisting entities interact to create change via their spatio-temporal organization. Note that production fits well with a singular conception of causation. This is because the generation of change is intrinsic to the process—no reference to laws or regularities is required.

Other philosophers of explanation use the difference-making concept. This concept identifies causes by using a method of comparison: an actual sequence of events is compared to a counterfactual scenario that modifies the sequence in order to identify the elements (causes) that made a difference to a subsequent outcome.

An Example of Difference-Making Explanation

A prominent example is James Woodward’s interventionist approach to causal explanation.[5] Here causation is defined in terms of counterfactuals that represent how hypothetical ideal interventions or manipulations on one variable impact another given certain background conditions:

X is causally relevant (at the type level) to Y if and only if there is a possible intervention on X such that if such an intervention were to occur, the value of Y or the probability distribution of Y would change—in other words, some interventions on X make a difference for Y (Woodward, 2011, 412).

This approach to investigating causal patterns is very flexible and can be applied to a wide variety of phenomena. It provides a theoretical underpinning for approaches to causal modeling that feature structural equations or directed graphs to represent systems.
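To make the counterfactual comparison concrete, here is a minimal sketch using a toy structural-equation model. The variables, equations, and the `simulate` helper are my own invented illustration (not drawn from Woodward or from any particular causal-modeling library): an ideal intervention replaces a variable’s own equation with a fixed value, and X counts as causally relevant to Y if some such intervention changes Y’s value.

```python
# Toy structural-equation model (a hypothetical illustration):
#   Z is a common cause of X and Y;  X := 2Z  and  Y := 3X + Z.

def simulate(z, do_x=None):
    """Evaluate the structural equations, optionally intervening on X.

    An ideal intervention ("do") replaces X's own equation with a fixed
    value, severing X from its usual cause Z, while leaving Y's equation
    intact. Background conditions are held fixed via the value of z.
    """
    x = 2 * z if do_x is None else do_x   # X := 2Z, unless we intervene
    y = 3 * x + z                         # Y := 3X + Z
    return x, y

# Compare two hypothetical interventions under the same background (Z = 1):
_, y_low = simulate(z=1, do_x=0)   # do(X = 0)
_, y_high = simulate(z=1, do_x=1)  # do(X = 1)

# Some intervention on X changes the value of Y, so on the interventionist
# definition X is causally relevant (at the type level) to Y.
print(y_high - y_low)  # prints 3
```

Note the design choice that does the philosophical work: intervening on X cuts the arrow from Z into X, so any remaining covariation between X and Y reflects the X→Y dependence rather than their common cause.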

Note that idealizations enter into the framework in several ways. First, by design, these models can encompass hypothetical interventions that may not be possible in the actual world. Also, the use of variables to represent properties of a natural system obviously involves abstraction. It is questionable whether the type-level variables truly correspond to shared properties of real kinds, and also whether the range of values for the variables reflects what is actually possible. Spatio-temporal structure is typically ignored as well. So, as is always the case, if explanations featuring these idealizations are to facilitate scientific understanding, it will be because their strictly false elements are nevertheless able to represent elements of the actual singular causal structure of some natural phenomenon.

But when we turn to ontological questions about the nature of this causal structure, the framework is inapt. The notion of difference-making includes an element that cannot correspond to reality. This is the essential step where one removes or alters the candidate causal element from a sequence, evaluates the counterfactual scenario that results, and compares it to the actual result. The counterfactual scenario cannot represent, even imperfectly, any part of actuality. Difference-making models are tools for investigating causal structure that leverage one of our core cognitive causal notions. But difference-making does not provide an ontological account of actual causation.[6]

Summary of Part II

Causal explanations utilizing both difference-making and production play a role in science. For instance, a targeted mutagenesis experiment (e.g., a gene knockout) may have value for inferring a causal explanatory relationship between a gene and the expression of a certain phenotypic trait (the presence of the gene makes a crucial difference). On the other hand, when a detailed account is sought about how the gene’s presence brings about the development of the trait, a chain of productive connections will be traced.

Of the two, production is the concept we use when envisioning the basic fabric of the world described by the natural sciences. We trace antecedent causes through a series of interacting entities such as elementary particles, fields, molecules, cells, or organisms. Scientific understanding requires an epistemic connection between something in our cognitive toolkit and the actual causal structure of target phenomena. The production concept provides this match.

Final Thoughts

I want to make clear that these conclusions about causation should not be taken as somehow underplaying or belittling the importance of making generalizations or engaging in counterfactual reasoning. These typically play crucial roles in scientific understanding (although they need not: for example, a purely “one-off” causal explanation can lead to understanding as well). It is just that when thinking about scientific understanding using the framework from the prior post, we can assess which aspects of understanding plausibly stem from the human side of this epistemic achievement and which from the world side. If we accept that strictly false causal explanations can nevertheless lead to understanding, this points toward the conclusion that the epistemic value of generalizations and counterfactuals, which are sources of falsity, is due to our cognitive toolkit, not the world itself. Given this, and the necessity that there is real causation in the world (or there could be no understanding via causal explanation to begin with), we can conclude that real causation is singular and productive.

References

Cartwright, N. (1983). How the Laws of Physics Lie. Oxford: Oxford University Press.

Glennan, S. (2017). The New Mechanical Philosophy. Oxford: Oxford University Press.

Hall, N. (2004). Two Concepts of Causation. In J. Collins, N. Hall, & L. Paul (Eds.), Causation and Counterfactuals (pp. 225-276). Cambridge, MA: MIT Press.

Lombrozo, T. (2016). Explanatory Preferences Shape Learning and Inference. Trends in Cognitive Sciences, 20(10), 748-759.

Menzies, P., & Beebee, H. (2020). Counterfactual Theories of Causation. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2020 Edition): https://plato.stanford.edu/archives/win2020/entries/causation-counterfactual/

Potochnik, A. (2017). Idealization and the Aims of Science. Chicago: University of Chicago Press.

Woodward, J. (2003). Making Things Happen. Oxford: Oxford University Press.

Woodward, J. (2011). Mechanisms Revisited. Synthese, 183, 409-427.


[1] For a discussion of psychological research on this topic, see Lombrozo (2016).

[2] See especially his recent monograph: The New Mechanical Philosophy (2017).

[3] See Hall (2004) for a discussion of the two concepts. While all topics related to causation remain controversial (and my gloss here certainly oversimplifies things to some degree), I think the case for the existence of these two independent causal concepts is well supported by both philosophical analysis and relevant psychological research.

[4] Glennan discusses the two concepts in chapters 6 and 7 of his (2017).

[5] Woodward (2003) is the primary source. To be clear, Woodward (like many other philosophers of explanation) does not claim to provide an account of the metaphysics of causation (he is also aware that there is a degree of circularity in his definition of causal relevance, since the notion of intervention is causal in character).

[6] I am skipping detailed discussion of the research program which sought to analyze causation in terms of counterfactuals (note that taking this seriously as metaphysics requires the reality of other possible worlds). See the referenced Stanford Encyclopedia of Philosophy article. The effort faced various difficulties, but prominent among these was the inability to adequately account for certain scenarios that foreground the production concept.
