How do you spot an optimistic pig? This isn’t the setup for a punchline; the question is genuine, and in the answer lies much that is revealing about our attitudes to other minds – to minds, that is, that are not human. If the notion of an optimistic (or for that matter a pessimistic) pig sounds vaguely comical, it is because we scarcely know how to think about other minds except in relation to our own.
Here is how you spot an optimistic pig: you train the pig to associate a particular sound – a note played on a glockenspiel, say – with a treat, such as an apple. When the note sounds, an apple falls through a hatch so the pig can eat it. But another sound – a dog-clicker, say – signals nothing so nice. If the pig approaches the hatch on hearing the clicker, all it gets is a plastic bag rustled in its face.
What happens now if the pig hears neither of these sounds, but instead a squeak from a dog toy? An optimistic pig might think there’s a chance that this, too, signals delivery of an apple. A pessimistic pig figures it will just get the plastic bag treatment. But what makes a pig optimistic? In 2010, researchers at Newcastle University showed that pigs reared in a pleasant, stimulating environment, with room to roam, plenty of straw, and “pig toys” to explore, show the optimistic response to the squeak significantly more often than pigs raised in a small, bleak, boring enclosure. In other words, if you want an optimistic pig, you must treat it not as pork but as a being with a mind, deserving the resources for a cognitively rich life.
We don’t, and probably never can, know what it feels like to be an optimistic pig. Objectively, there’s no reason to suppose that it feels like anything: that there is “something it is like” to be a pig, whether apparently happy or gloomy. Until rather recently, philosophers and scientists have been reluctant to grant a mind to any nonhuman entity. Feelings and emotions, hope and pain and a sense of self were deemed attributes that separated us from the rest of the living world. To René Descartes in the 17th century, and to behavioural psychologist BF Skinner in the 1950s, other animals were stimulus-response mechanisms that could be trained but lacked an inner life. To grant animals “minds” in any meaningful sense was to indulge a crude anthropomorphism that had no place in science.
Some caution was warranted. If other animals behave like us, that’s no basis to assume that they do so for the same reasons and with the same experiences and mental representations of the world. But as countless careful experiments like this study of pigs reveal ever more about the inner world of animals, there comes a point where it looks far more contrived to suppose that their behaviour just happens to look like ours in all kinds of ways while differing utterly in its explanation. Maybe, instead, they have minds that are not really so different after all. Primatologist Frans de Waal warns that, while we must avoid anthropomorphising other animals such as great apes, sometimes their actions “make little sense if we refuse to assume intentions and feelings”.
After all, as Charles Darwin pointed out, we all share an evolutionary heritage – and there is nothing in the evolutionary record to suggest that minds were a sudden innovation, let alone that such a thing occurred with the advent of humans. “There is,” Darwin wrote in The Descent of Man, “no fundamental difference between man and the higher mammals in their mental faculties”.
The challenge, then, becomes finding a way of thinking about animal minds that doesn’t simply view them as like the human mind with the dials turned down: less intelligent, less conscious, more or less distant from the pinnacle of mentation we represent. We must recognise that mind is not a single thing that beings have more or less of. There are many dimensions of mind: the “space of possible minds” (a concept first proposed in 1984 by computer scientist Aaron Sloman) has multiple coordinates, and we exist in some part of it, a cluster of data points that reflects our neurodiversity. We are no more at the centre of this mind-space than we are at the centre of the cosmos. So what else is out there?
Consider the often maligned bird brain. Compared with bird neurodiversity, humans are a monoculture. Birds’ minds are scattered widely in mind-space, their differences and specialities tremendously varied. Some birds excel at navigation, others at learning complex songs or making elaborate nests. Scrub jays are expert food storers, able to stash hundreds of caches around their habitat and find them all flawlessly, returning first to the most perishable items. They are cunning: cache thieving is common, and the birds might employ deceptive measures such as returning soon after depositing a store to move it to another location – but only if they know they were observed while caching it. Even more remarkably, they may only pretend to move it, a ruse that suggests they have what psychologists call a “theory of mind”: the capacity to acknowledge the existence of other agents with motives and knowledge different from their own. (Human children only acquire this at around the age of three or four.) In contrast to the common view that other animals live in a perpetual present, scrub jays may store food in anticipation of the circumstances they are likely to face later: experimenters found that they will do this when placed in a cage that the birds know from experience is likely to contain no food tomorrow.
We award pride of place in the hierarchy of bird minds to tool-using species, especially corvids (crows, ravens, rooks). The most masterful of them is the New Caledonian crow of the south Pacific, which will design and store custom-made hooks for foraging, and even make tools with multiple parts. Among animals, great apes, dolphins, sea otters, elephants and octopuses are among the few others known to use tools. The challenge is to figure out which qualities of mind corvids bring to bear on the task. Young children acquire an “intuitive physics”: they understand that objects don’t simply vanish, that they have properties like hardness and brittleness, and (eventually!) that cups that are tilted or precariously balanced may topple. They become able to execute multi-step tasks, keeping the end in mind at each stage.
It’s not yet clear what the “rules” are that govern a bird’s ability to deploy a tool. There’s a distinction, for example, between “ritualistic” and “mechanistic” thinking: “if I move the stick like this, bugs appear” versus “the stick pokes out the bugs”. Generally you need the latter view to adapt tools to new uses. You need a basic grasp of cause and effect.
We’re gradually teasing out what bird behaviour reveals about the way they represent the world internally: to what extent they, like us, can deduce the possibilities and affordances it offers for achieving goals. What’s harder to determine is how all this feels for the bird. It was long assumed that the anatomy of the bird brain (lacking a neocortex) is too different from ours to support any conscious experience, but recently those differences have been found to be less pronounced. How do you test, though, if another animal has a sense of self? One approach is to see if it shows an ability to assess its own state of knowledge: can a bird “look into” its own mind and acknowledge what it doesn’t know? Recent experiments with crows suggest they can.
If we’ve sometimes been ungenerous to birds, bees and other insects were often seen as the epitome of mindless automata blindly following programmes. Naturalists in the 18th century suggested that bees execute the perfect geometry of comb-building by “divine guidance and command”; today we recognise that this exquisite hexagonal mesh demands only that each bee follow simple construction rules. Hive-building is, then, no more a sign of advanced rationality in bees than chess-playing is in humans: computers can defeat us at chess without a glimmer of sentience.
Football, however, is another matter. Lars Chittka of Queen Mary University of London and his colleagues have trained bees to manipulate a small ball into a hole at the centre of the “pitch” for a sugary reward. Bees can train one another in the task, and can find better solutions than their demonstrator: they don’t blindly follow rules but adapt them to the circumstances.
Although animal communication can be subtle and complex, it’s generally thought that no animal besides a human uses symbolic communication, where one concept is represented by another, as it is in writing. None, that is, except perhaps the honeybee, which conveys information about a distant food source to its hive members by dancing. The bee treads out waggling movements on the comb, and watchers deduce the distance to the source from the number of waggles, and the direction to the source from the orientation of the dance relative to the downward pull of gravity. It sounds like a weird problem in a geometry exam, requiring protractors and conversion of units: “If each waggle equals 10 metres … ” and so on. The waggle-dance code even has regional dialects that might take into account the local terrain. And each bee interprets the instructions in the light of its own internal map of the surroundings, gathered and refined by previous forays. All this happens in a bee brain about the size of a large grain of sand.
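To make the “geometry exam” flavour of the decoding concrete, here is a toy sketch of the calculation a dance-watcher performs. The calibration constant (10 metres per waggle, borrowed from the hypothetical in the text) and the function name are illustrative inventions, not real bee biology: the true distance code varies between dialects.

```python
def decode_waggle_dance(n_waggles, dance_angle_deg, sun_azimuth_deg,
                        metres_per_waggle=10.0):
    """Toy decoder for the honeybee waggle dance.

    On the vertical comb, straight up (against gravity) stands in for
    the direction of the sun. The dance's angle from vertical gives the
    bearing of the food relative to the sun, and the number of waggles
    encodes the distance. The 10 m-per-waggle constant is purely
    illustrative.
    """
    distance_m = n_waggles * metres_per_waggle
    # Rotate the sun's compass bearing by the dance angle to get the
    # compass bearing of the food source.
    bearing_deg = (sun_azimuth_deg + dance_angle_deg) % 360
    return distance_m, bearing_deg

# A dance of 25 waggles, angled 40 degrees clockwise of vertical,
# while the sun sits at compass azimuth 120 degrees:
print(decode_waggle_dance(25, 40, 120))  # -> (250.0, 160)
```

The real marvel, of course, is that each watching bee then reconciles this bearing-and-distance pair with its own remembered map of the terrain.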
There may even be such a thing as an optimistic bee. The equivalent of the pig experiment uses flowers: blue ones carry a sugar reward, green ones don’t, but will a bee interpret an ambiguously blue-green flower optimistically or pessimistically? Again, behaviour seems to be coloured by experience: bees that have just been given an unexpected sugar treat seem more inclined to optimism, as though put in a good mood. Again, we can’t know if the behaviour is accompanied by a “feeling” – but machines don’t do stuff like this.
Time now to journey further afield in mind-space. Philosopher Peter Godfrey-Smith says that the octopus is “probably the closest we will come to meeting an intelligent alien.” Octopuses interact with us, apparently with neither fear nor aggression. But in contrast to a dog or a chimpanzee, it’s hard to fathom what the agenda might be. They are great problem-solvers: they unscrew jars, navigate mazes in the murky gloom of the seabed, and fuse bright laboratory lights with jets of water. They seem playful – but who knows, really, what the antics are for?
If this is perplexing, we should hardly be surprised. Our evolutionary lineage diverged from that of octopuses (which are molluscs) around the dawn of complex multicellular life 600m years ago; our common ancestor was a mere flatworm. So octopuses represent an entirely distinct evolutionary path to making a mind – and how different it looks! The octopus has a similar number of neurons to a dog, but instead of being mostly collected in a central brain they are distributed throughout the body in a ladder-like network. There is a centralised brain in the head, but more than half of the nervous system is in the arms, gathered into clusters called ganglia that seem to operate largely autonomously: the arms do things of their own accord while the brain watches, perhaps as if observing another creature.
If there is any kind of consciousness in the octopus mind – thanks to the advocacy of marine biologists and other experts, they were recently admitted into the category of sentient beings under UK law – it might not be unified. Some researchers suggest that octopuses have dual or even multiple consciousness: each individual creature might be a loosely integrated community of minds. “When I try to imagine this,” Godfrey-Smith says, “I find myself in a rather hallucinogenic place.”
Even this, however, might sound tame compared to the idea that plants have minds. Yet that proposition is no longer confined to the fringes of new-age belief; you can find it discussed (relatively) soberly in august scientific journals. There, it often goes by the name of “plant neurobiology” or, in a more extreme form, “biopsychism” – which supposes that every living being from bacteria up has sentience of a sort.
Plants don’t have a nervous system, or even neurons. But their cells, like many non-neural cells, do communicate with one another electrically, and there’s evidence that the vascular channels in plants known as phloem can transport not only sugars but also electrical pulses. Some plants show distinctly animal-like behaviour: witness the carnivorous jaws of the Venus flytrap, which even displays a primitive ability to “count” the number of impulses it senses before closing on an insect.
Plants can sense and respond to stimuli, as when flower heads move to track the progress of the sun across the sky. The South American “sensitive plant” Mimosa pudica, a member of the pea family, folds its leaves when touched and displays a kind of learning called habituation, where it eventually ignores a repeated stimulus that proves harmless. Pea plants can be trained in an almost Pavlovian fashion to associate a neutral signal such as the flow of air with a beneficial signal such as light.
All this justifies a view of plant behaviour as “cognitive”. Where it becomes more controversial is in suggestions that the plant behaviours are accompanied by sentience or feeling. On that, arguments still rage.
Conceiving of a universe of possible minds can discourage human hubris, and advises erring on the side of generosity in considering the rights and dignity of other beings. But it also enables a literally broad-minded view of what other minds could exist. Mindedness needn’t be a club with rigorously exclusive entry rules. We might not (and may never) agree about whether plants, fungi or bacteria have any kind of sentience, but they show enough attributes of cognition to warrant a place somewhere in this space. This perspective also promotes a calmer appraisal of artificial intelligence than the popular fevered fantasies about impending apocalypse at the hands of malevolent, soulless machines. There is no reason to suppose that today’s AI has any more sentience or experience than the rocks from which its silicon is extracted. But it, too, shows intelligence of a kind, including the ability to learn and predict.
To suppose that something like artificial consciousness will emerge simply by making computer circuits bigger and faster is, as one AI expert put it to me, like imagining that if we make an aeroplane fly fast enough, eventually it will lay an egg. Computers and AI are taking off in the “intelligence” direction of mind-space while gaining nothing on the “experience” axis: their trajectory is heading not towards us but somewhere else entirely. If we want AI to be more human-like, many experts believe we will need explicitly to build human qualities into it – which in turn requires that we better understand what those are and how they arise.
Likewise, most of our fantasies about advanced alien intelligence suppose it to be like us but with better tech. That’s not just a sci-fi trope; the scientific search for extraterrestrial intelligence typically assumes that ET carves nature at the same joints as we do, recognising the same abstract laws of maths and physics. But the more we know about minds, the more we recognise that they conceptualise the world according to the possibilities they possess for sensing and intervening in it; nothing is inevitable. We need to be more imaginative about what minds can be, and less fixated on ours as the default. As the biologist JBS Haldane once said: “The universe is not only queerer than we suppose, but queerer than we can suppose.” Our only hope of understanding the universe, he said, “is to look at it from as many different points of view as possible.” We may need those other minds.