The Question of Artificial Agency

I’m grateful for recent speculations about whether AI systems might achieve human-like intelligence (AGI) and/or sentience. These have led me to think more carefully about the ingredients that go into these capabilities. I think a necessary ingredient is agency, and understanding agency can sharpen our inquiry into the prospects for AI.

Image: Sad Robot (learning it is not an agent) by UnexpectedToy via Deviantart.com

What is agency and why is it a requirement for AGI and sentience?

Philosophers have debated the concept of agency and the various theories explaining it, but the basic notion is that of a capacity to act for goal-driven reasons. Human-style “general intelligence” is unfortunately ill-defined, but we can get some grip on it by looking at how it is usually characterized. The idea typically stresses a kind of multi-dimensional, flexible, and creative cognitive capacity, and is contrasted with the ability to perform highly specialized tasks. I believe agency is clearly present in the background of this discussion: human capabilities are varied and flexible because they serve a variety of personal goals. Further, these goals are not fixed, but can evolve through time amid changing circumstances.

With regard to sentience, a concept that is even more philosophically contentious, the presence of a point-of-view (subjectivity) seems to be an essential aspect. For sentient beings, worldly events are imbued with values and meaning that relate to their unique perspective.  Subjectivity and agency are complementary ideas—the possession of goals both shapes one’s point-of-view and guides one’s actions. Plausibly, a sentient entity must also be an agent.

What underlies human agency? Could an artificial entity be an agent?

Philosophers of agency have typically formed accounts that posit causal links between actions and reasons, which are themselves combinations of mental states such as desires and beliefs. Of course, if a theory of this sort is accepted, then one must turn one’s attention to the difficult task of explaining the nature of those states and links themselves, and whether they could be instantiated in artificial entities.

The arguably dominant approach in the philosophy of mind—functionalism—seems to hold promise for addressing this challenge. Functionalist theories propose that the nature of mental states is exhausted by their role in the right kind of (high-level) network or system. This means the system in question can be realized in more than one way: perhaps by neuro-biological “hardware” for humans, but in silicon for an artificial entity.  It is also popular to theorize that the specific functional framework for human cognition is in fact computational in nature. Then it is a short step to conclude that a computer-based entity could achieve capabilities similar to ours.

While long popular, this theoretical framework for mental phenomena is a promissory note that has gone uncollected. It is true that computational models can usefully depict facets of various cognitive processes. But these idealized, partial representations do not bring us closer to grasping how playing a role in an abstract computational system could account for the nature of distinctively mental phenomena, with no essential dependence on human biology (the putative “hardware”). In contrast to functionalism and computationalism, I believe that our cognitive capabilities are deeply rooted in our bio-physical makeup, not in some high-level “software” it is thought to support.

I acknowledge that computationalism still seems compelling to many people, so I will focus on the particular challenge posed by agency. A common criticism of traditional theories of agency is that they cannot adequately explain how we could know that a given action is agential. For every act, we can propose alternative causal histories that make no reference to goal-driven reasons. One might worry that our introspective sense that our reasons and intentions cause our actions is misleading or illusory. Certainly, skeptics about free will pursue this critique. A theory of agency needs to show how an entity’s actions are truly driven by internal goals. It should also shed light on how an agent can come to exist in a world whose basic building blocks are (presumably) non-agential. It isn’t clear that computationalism has the resources to answer these hard questions.

This concern becomes acute when we think about agency in artificial systems. In our own case, at least we have our (perhaps defeasible) introspective evidence. We must be especially wary about attributing agency to artificial entities that are not, in fact, truly acting according to internal goals. The bar is high: reasons for behavior must be intrinsic and not derived from elsewhere. In particular, they must not follow from the reasons of the manufacturer, programmer or user.

To understand how true intrinsic agency is possible in our own case, and to assess whether or how an artificial entity could be an agent, we need to approach these questions from a different angle.

What accounts for human agency?  Look to the capabilities of non-human organisms.

Even the simplest living things have an intrinsic purpose: to survive. We explain their behavior by referencing this goal, and we understand the workings of their component systems using related ideas such as norms and functions. When entities act to serve their own goals, they are agents. Now, a skeptic may worry that talk of goals and functions in biology is merely pragmatic. More theoretical work is needed to show why we should think organisms truly possess intrinsic agency—the topic of the next section.

But I note that there are some quick insights we can glean from this change of perspective on agency. If living things that lack big brains (and even lack nervous systems altogether) are excellent candidates for agents, then a lot of speculation about the possible agency of AI systems is likely misguided. This is because it focuses on the wrong analogies with human agency. Humans are not agents because of big, computationally complex brains.  Humans are not agents because of our mastery of language. Humans are agents due to features we share with a remarkable variety of evolutionary cousins.

Intrinsic Biological Agency: An Overview

There are two important features of the natural world’s causal web that play special roles in explaining agency: indeterminism and multi-scale causation.

Causation for the simplest physical systems is a combination of historical influence (that constrains subsequent events within certain boundaries) and indeterministic spontaneity (that selects one of the possible outcomes). Indeterminism in micro-physics can readily manifest at larger scales.

Causation happens at multiple scales because larger organized patterns of activity constrain and channel smaller-scale dynamics. We also see this phenomenon throughout the inorganic world.

Biological systems put these features to work in distinctive ways that enable agency. First, organisms have a special organization that gives them an unusual degree of autonomy within the multi-scale causal framework. Specifically, they are organized as cyclic networks of processes, connected by mutually enabling interactions: each component process is both a source and a beneficiary of the relations that sustain the network’s functioning.[1] This network defines an entity against the backdrop of an environment. The composing processes are precarious in the sense that they would cease outside the context of the autonomous network. The network is thus in a constant dynamic battle against decay and dissolution, and its interactions with the external environment determine successful persistence or failure—life or death.
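To make this organizational picture concrete, here is a deliberately simple toy simulation (in Python, with invented process names and constants). It is a sketch of mutual enablement under stated assumptions, not a biological model: each “process” decays on its own but is partly replenished by the others, so the whole persists only while the full cycle is intact.

```python
# Toy sketch of a mutually enabling process network. All process names
# and constants are invented for illustration.

DECAY = 0.3     # fraction of activity a process loses per step on its own
SUPPORT = 0.16  # replenishment received per unit of each other process's activity

def step(activity):
    """Advance every process one time step."""
    new = {}
    for name, level in activity.items():
        support = SUPPORT * sum(l for n, l in activity.items() if n != name)
        new[name] = min(1.0, level * (1 - DECAY) + support)
    return new

def run(names, steps=40):
    activity = {n: 1.0 for n in names}
    for _ in range(steps):
        activity = step(activity)
    return {n: round(l, 3) for n, l in activity.items()}

# The intact cycle sustains itself indefinitely...
print(run(["membrane", "metabolism", "repair"]))
# ...but remove any one process, and the survivors no longer receive
# enough mutual support: they decay toward dissolution.
print(run(["membrane", "metabolism"]))
```

The intact network holds steady, while any reduced network decays toward zero: a crude picture of the precariousness and dependence on the whole just described.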

How these autonomous systems manage inherent causal indeterminism is the second key to agency. Biological systems have features that not only withstand the noise that comes with indeterministic causation (such as redundancy), but also enable spontaneity in their external interactions to better adapt to changing conditions. There are examples throughout nature: they range from bacterial chemotaxis (e.g. tumbling motions in e. coli) all the way to the noisy neural dynamics involved in animal motor control.[2] This strategic use of spontaneity is probably our single best candidate for a “mark” of agency in nature.[3]
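The E. coli case can even be sketched in a few lines of code. In the classic run-and-tumble pattern, the cell cannot steer; it only modulates how often it tumbles (randomly reorients) depending on whether conditions are improving, so randomness itself becomes the steering mechanism. The Python sketch below uses an invented attractant gradient and uncalibrated parameters; it illustrates the logic, not the biology.

```python
import math
import random

def concentration(x, y):
    """Invented attractant gradient: higher toward the origin."""
    return -math.hypot(x, y)

def run_and_tumble(steps=2000, speed=0.05):
    x, y = 5.0, 5.0                      # start far from the "food source"
    heading = random.uniform(0, 2 * math.pi)
    prev = concentration(x, y)
    for _ in range(steps):
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
        now = concentration(x, y)
        # Tumble rarely when things are improving, often when they are not.
        p_tumble = 0.02 if now > prev else 0.4
        if random.random() < p_tumble:
            heading = random.uniform(0, 2 * math.pi)  # spontaneous reorientation
        prev = now
    return math.hypot(x, y)

random.seed(0)
print("final distance from source:", round(run_and_tumble(), 2))
# Typically well below the starting distance of ~7.07.
```

Even with no ability to aim, the biased use of spontaneous reorientation reliably carries the walker up the gradient; the noise is not merely withstood but exploited.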

Do Humans Have Free Will?

Before returning to the topic of artificial systems, I’ll digress to connect this understanding of biological agency back to the hoary debate over human free will. This debate has been difficult because our pre-theoretical notion of free will is too simple and thus easy to view skeptically. Considering an isolated decision at a single moment, it seems difficult to say that a choice is both freely made and under our causal control. The reality is more complicated (and more interesting).

As with other biological agents, our reason-driven actions are explained by a combination of causal influences (accumulated within us over a variety of time scales) and the frequent harnessing of indeterministic spontaneity. This interplay happens in a number of ways: for instance, causal influences shaped by past experience might suggest two equally favorable acts, and we spontaneously choose one. Or, reversing things, some new options for action arise spontaneously, and causal influences sway the choice. There is a recursive aspect, too. Whenever we act, the outcome becomes part of the causal backdrop, setting the stage for what comes next. The interplay forms an incredibly complex web of partly-chancy causal elements shaping our thoughts and actions.
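This recursive interplay can be mocked up in a toy model (Python again; the action names and numbers are invented). Accumulated influence weights the options, spontaneity makes the pick, and each outcome feeds back into the influences that shape the next choice.

```python
import random

random.seed(1)
influence = {"act_a": 1.0, "act_b": 1.0}   # two equally favorable acts at the outset

history = []
for _ in range(12):
    options = list(influence)
    # Spontaneity: a chancy pick, weighted by accumulated influence.
    choice = random.choices(options, weights=[influence[o] for o in options])[0]
    history.append(choice)
    influence[choice] += 0.5   # the outcome joins the causal backdrop

print(history)
print(influence)
```

Run it with different seeds: early chancy picks compound, so otherwise identical starting points settle into different tendencies. That path dependence is a cartoon of how a history of accumulated actions makes us who we are.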

Taken as a whole, this framework is certainly revisionary, but arguably not too far from the naïve conception. We are free because we strategically harness indeterminism. We control our decisions because our history of accumulated actions makes us who we are and exerts influence over what we do next.

Conclusion: The Outlook for Artificial Agency and Risk Implications

I have previously argued that present AI efforts are not on a path to AGI, but I think a focus on agency gets to the heart of many of the issues. We know that AGI and sentience are possible for embodied, autonomous entities acting according to intrinsic goals. It is unlikely that this is possible for AI systems as we currently understand them. The issue is not one of scaling: it is no use taking a non-agential system and adding more data, more computing power, and better algorithms.

We can certainly program AI systems using approaches that simulate some features of agential behavior (such as reinforcement learning). But for intrinsic agency, the goals themselves must evolve freely, and the values associated with environmental interactions (“inputs”) must change based on previous actions (“outputs”). True agents are capable of evolving beyond preset plans and boundaries. Along with their autonomy, the distinctive way organisms harness indeterminism helps them achieve this exceptional, open-ended freedom. For these reasons, I think our formalized simulations of agency will remain idealized and incomplete.
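To see the contrast, consider a minimal epsilon-greedy bandit sketch in Python (the actions, payoffs, and constants are all invented, and no particular AI system is being modeled). The agent adapts its behavior to maximize reward, but the reward function itself, its “goal,” is fixed in advance by the programmer and is never up for revision.

```python
import random

random.seed(0)

def reward(action):
    """Designer-specified goal: the agent cannot change this."""
    return random.gauss({"a": 0.2, "b": 0.8}[action], 0.1)

values = {"a": 0.0, "b": 0.0}   # the agent's estimates of each action's payoff
counts = {"a": 0, "b": 0}
EPSILON = 0.1                   # how often to explore at random

for _ in range(1000):
    if random.random() < EPSILON:
        action = random.choice(list(values))     # explore
    else:
        action = max(values, key=values.get)     # exploit
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    values[action] += (reward(action) - values[action]) / counts[action]

print({a: round(v, 2) for a, v in values.items()})
print(counts)   # behavior shifts toward "b", but the goal never moved
```

The behavior evolves toward whatever the designer rewarded; the goal itself never does. That is derived, simulated purposiveness rather than the intrinsic kind.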

The clearest path to intrinsic agency via artificial systems would be through attempts at artificial life: trying to build agency from the ground up, incorporating embodied autonomy.

I see two main implications. First, AGI and sentience in artificial systems are not on the immediate horizon. The risk posed by AI in the foreseeable future will be a more familiar (though still dangerous) kind of risk: that of immoral and negligent humans misusing their increasingly powerful tools. But importantly, reflecting on the possible path to agency via artificial life calls for a focus on the risks that could come from unleashing artificial or artificially modified organisms. Work on this topic is a natural extension of existing biorisk efforts.

References

Di Paolo, E., & Thompson, E. (2014). The enactive approach. In The Routledge Handbook of Embodied Cognition (pp. 68–78). Routledge.

Froese, T. (2023). Irruption Theory: A Novel Conceptualization of the Enactive Account of Motivated Activity. Entropy, 25(5), 748. https://doi.org/10.3390/e25050748

Mitchell, K. J. (2023). Free Agents: How Evolution Gave Us Free Will. Princeton: Princeton University Press.


[1] This is the notion of “operational closure” discussed by enactivist cognitive scientists like Di Paolo and Thompson (2014). See my prior post here for more.

[2] Kevin Mitchell’s recent book Free Agents collects a number of good examples of organisms’ strategically chancy behaviors. I discuss Mitchell’s book here.

[3] An exciting possibility is that we could use this fact to develop a theory for identifying agent-directed actions. One example is Tom Froese’s (2023) “irruption theory,” which proposes that end-motivated spontaneous action will be indirectly detectable as unexplained excess physiological noise/increased entropy production.
