
Anthropomorphisation

Refers to a fallacy by which inanimate objects, plants or animals are ascribed human characteristics or attributes, or are described in a context that would be more appropriate for human agents.

Explanation

Anthropomorphisation of things, phenomena and abstract concepts is an effect of the cognitive bias called anthropomorphism, which all humans commit (albeit to varying degrees).

This kind of projection only becomes problematic when an attempt is made to understand the behaviour of non-human actors as equivalents of human action.

Humanising plants and inanimate objects (e.g. stones) is also possible, but here the underlying fallacy is readily apparent; such humanisation makes sense at best in the context of mythology or religion.

For the sake of completeness, we should also mention the gambler who, after a long series of unfavourable results, assumes that the dice or the roulette table meant him “harm” …

However, the situation is different with animals or machines, which can at least outwardly give the appearance of human or human-like behaviour.

Examples

Anthropomorphisation of animals

It is particularly tempting to judge animals and their behaviour by human standards, since their behaviour and abilities often show, at least outwardly, clear similarities to human behaviour.

Certainly, most pet owners will at least occasionally interpret the behaviour of their dogs or cats in terms of human standards and attribute human-like thoughts or intentions to their actions – and there is little to be said against it, as long as one does not make subsequent decisions based on such anthropomorphisation.

It is not difficult to find an example of how such misinterpretations can lead to negative consequences (in this case for the animals):

This dolphin shows a smile similar to that of people who are happy.
Consequently, this dolphin is happy.

In fact, dolphins cannot help but “smile”, as this corresponds to the physiognomy of the dolphin's head. Whether they are really happy (or at least content) is hardly recognisable to us humans.

Whether the “happy” smiling dolphins in dolphin shows are really satisfied with their fate is hard to assess and should certainly not be judged on the basis of superficial similarities with human behaviour.

The same applies to negative feelings. Certainly, most humans would not feel comfortable if they were forced to perform tricks in front of an audience, as the dolphins mentioned above must do. On the other hand, most humans would hardly prefer the alternative either: having to hunt for their own food in the wild every day and constantly living with the risk of dying in agony in a fishing net.

Another example, from an article about the pioneering days of space exploration:

On 19 August 1960, two courageous dogs, Strelka and Belka, flew into space aboard Sputnik 5.

To attribute “courage” to the dogs, one would have to assume that they could really understand the dangers of their mission, which is rather unlikely. Presumably, they were not even given a choice whether to participate in the experiment or not.

Anthropomorphisation of machines

It is not only animals: machines, too, are often perceived as having human-like traits.

Basically, the more complex (and thus more difficult to understand) the behaviour or functioning of the apparatus is, and the greater its influence on our lives, the more frequently this phenomenon seems to occur. While hardly anyone would think of talking to e.g. their garlic press, precisely this kind of behaviour can frequently be observed when dealing with e.g. a car or a musical instrument.

The engine of my car won’t start.
I encourage the car by calling: “You can do it!”
The engine starts.
Consequently, the engine started because I gave it a good encouragement.

If, in such a situation, the engine starts up after you have called out to it, this can lead to a causal illusion in which you (unconsciously) link the two events together – even if you are consciously aware that they have nothing to do with each other.

However, even if persuasion does not really make the engine start faster, it is equally unlikely to cause any damage.

This becomes more problematic when such a mindset distracts from the real problem, as in the following expressions which surely everyone has heard before:

The car has parked on the cycle path.
The pedestrian was hit by the car.

Indeed, a car is able to “move itself” (hence the name “automobile”), but it is not able to decide for itself where to park or at what speed to drive: it is of course the driver who makes these decisions. It would therefore be more correct to say:

The driver parked the car on the cycle path.
The [careless] driver hit the pedestrian with the car.

This problem will only shift slightly if we one day have autonomous cars: the driver may no longer be responsible for possible wrongdoing – or at least not to the same extent as before – but the manufacturer responsible for programming the car will be. The car itself will still be subordinate to the commands that come from its programmed algorithms.

In general, the whole area of so-called “artificial intelligence” (AI) is riddled with potentially problematic cases of anthropomorphisation, which certainly warrants a whole section on this topic:

“Artificial intelligence” / autonomous vehicles

Today, the term “artificial intelligence” (AI) is used to describe various types of computer systems that attempt to emulate “intelligent” behaviour.

Due to the sometimes truly impressive achievements that have been made in this field in recent years – but also due to the often dramatically exaggerated “intelligence achievements” of supposed “AIs” in films and video games – there is a tendency to also ascribe human attributes such as “sensitivity”, “reason” or even “feelings” to them.

In fact, at least at the current state of the art, “artificial intelligences” are still simply computer programs that execute an algorithm. They differ from traditional software primarily in the way they are programmed – for example, using sample data and feedback mechanisms. This approach has opened up completely new application possibilities that would have been impossible or very difficult to achieve with other programming methods – but no “intelligent being” has been created, as some seem to believe.
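To make this difference concrete, here is a minimal, purely illustrative Python sketch (not taken from the article or from any real AI system; all names and numbers are assumptions chosen only for illustration): the same simple decision is implemented once as an explicitly written rule and once by a tiny program whose parameters are adjusted through feedback on sample data.

# Hypothetical illustration only: an explicit rule vs. a program "trained"
# from sample data via a feedback mechanism (a minimal perceptron).

# Traditional approach: the behaviour is written down explicitly as a rule.
def is_positive_rule(x, y):
    return x > 0 and y > 0

# "Learning" approach: the parameters are not hand-coded but adjusted by
# feedback (the prediction error on labelled sample data).
samples = [((1, 1), 1), ((1, -1), 0), ((-1, 1), 0), ((-1, -1), 0)]
w0, w1, bias = 0.0, 0.0, 0.0
learning_rate = 0.1

for _ in range(50):                               # repeated feedback loop
    for (x, y), target in samples:
        prediction = 1 if (w0 * x + w1 * y + bias) > 0 else 0
        error = target - prediction               # feedback signal
        w0 += learning_rate * error * x           # adjust the parameters
        w1 += learning_rate * error * y           # in the direction that
        bias += learning_rate * error             # reduces the error

def is_positive_learned(x, y):
    return (w0 * x + w1 * y + bias) > 0

print(is_positive_rule(2, 3), is_positive_learned(2, 3))      # True True
print(is_positive_rule(-2, -3), is_positive_learned(-2, -3))  # False False

Both functions end up giving the same answers here, but only the first was told what to do; the second merely found parameters that happen to reproduce the sample data – which is also why calling the result an “intelligent being” would be a considerable stretch.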

One could even argue that the term “intelligence” already describes a concept that is specifically tied to a human context. Even transferring this concept to animals can be seen as problematic – all the more so in the case of machines.

From the political discussion about how we should deal with such “intelligent” machines in the future comes the following stylistic flourish (formulated here in a somewhat exaggerated way):

The AI in an autonomous driving car behaves similarly to a human driver.
Human drivers are legally responsible for any accidents they cause.
Consequently, an AI should also bear its own legal responsibility for accidents.

Indeed, the AI systems in self-driving cars have been programmed to replicate the (idealised) behaviour of human drivers as closely as possible (and they often surpass it, thanks to better sensor technology and faster reaction times). However, from the ability to react to traffic situations according to the human ideal, it does not follow that these machines are able to understand and make moral decisions, nor that they could therefore be held legally responsible for wrong decisions (not to mention that it is very unclear what such responsibility would even look like).

The question of who can be held liable for the behaviour of such autonomous vehicles is interesting from both a legal and ethical perspective. However, simply offloading the blame to the machine itself does not do justice to the complexity of the matter and exposes manufacturers in particular to the suspicion that they themselves do not want to take responsibility for their products.

Pathetic fallacy

Another form of humanisation is the so-called “pathetic fallacy”. This term refers to the inappropriate association of emotions with inanimate objects or abstract concepts.

For more information on this, please see: Pathetic fallacy.
