In our recent Review (Ross, L. N. & Bassett, D. S. Causation in neuroscience: keeping mechanism meaningful. Nat. Rev. Neurosci. 25, 81–90; 2024)1, we suggest that the term 'mechanism' lacks a clear and consistent definition across neuroscientific research. We outline challenges associated with the myriad meanings of this term, which guides decisions about grants and publications and is still viewed as a fundamental unit for understanding the brain. In their Correspondence, Tseng and Cheng largely agree with our assessment of these challenges and offer suggestions for moving the field forwards (Tseng, P. & Cheng, T. Causal prominence for neuroscience. Nat. Rev. Neurosci. https://doi.org/10.1038/s41583-024-00838-6; 2024)2.

Like us, Tseng and Cheng view higher-level causal systems as genuinely explanatory. In other words, causal circuits, networks and topologies can provide legitimate explanations that are not always improved by the inclusion of lower-level detail (such as molecular information). How can such a non-reductive view of causal explanation be supported? Their solution is to consider 'causal prominence', in which the explanatorily relevant causal system is the one that is most 'prominent' for, and has the most 'causal impact' on, the target of interest. Of course, exactly how prominence and causal impact are defined needs further clarification. One way to do this is to employ the notion of causal control as it is used in the philosophical literature. On this view, a prominent cause or causal system is one that, if hypothetically manipulated, provides control over the target of interest3,4. If hypothetical manipulations at a particular scale (molecules, circuits or networks) control an outcome of interest, then that scale is the most explanatory for that outcome. Thus, causal explanation "is not a game of how low you can go, but what gives you control"4.

These points capture basic notions of causal and explanatory relevance, but the mechanism concept remains a challenge. That circuits or networks are causally prominent for an outcome does not mean that these systems are mechanisms; it means that they are explanatory. Although circuits and networks differ from traditional, reductive mechanisms in their level of causal detail, there are other differences. Circuits and networks often involve constraints (fixed physical, anatomical or channelling factors) that are rarely present in standard, machine-like mechanisms. In fact, the causal systems studied in neuroscience are highly heterogeneous, differing with respect to both the types of cause involved and their causal organization. For example, causes can differ in their stability, strength, specificity, reversibility or material continuity4,5,6 and in whether they are necessary, sufficient, proximal, distal, deterministic, probabilistic and so on. In addition, causal systems can have varying organizations, including linear chains, feedback loops and final common pathway architectures. A framework for explanation in neuroscience should capture explanatory relevance in terms of causal level, but it should also capture this diversity of causes and causal systems. Whether all or some of these causal systems count as mechanisms, and why, should then be further clarified.

Thus, instead of Tseng and Cheng's question "When is a causal system explanatory enough to be termed a 'mechanism'?", we would ask the following questions: When is a causal system explanatory? And which causal systems are mechanisms?