Both “nature” and “the universe” are //abstract concepts// that are incapable of feeling //emotions// such as //maternal care// – even “indifference” is a concept that only makes sense on a human scale.

===== Other names =====
<div print-wide>
  * Humanization
  * Personification
  * [[glossary:reification|Reification]] / [[abstraction:hypostatization|Hypostatisation]]
  * [[abstraction:pathetic_fallacy|Pathetic Fallacy]]
  * Animism
</div>

It should be noted that it can be difficult to distinguish this fallacy from other //fallacies of abstraction//, particularly [[abstraction:hypostatization|Hypostatisation]]. The latter involves treating //abstract concepts// as though they were things that actually exist; from there, it is only a short step to attributing //human characteristics// to them, or even //personifying// them (see below; see also: <span maniculus "go to:">[[glossary:reification|Reification]]</span>).

===== Description =====

The //anthropomorphisation// of objects, phenomena and abstract concepts is a result of a [[psychology:cognitive_bias:index|cognitive bias]] known as [[psychology:cognitive_bias:anthropomorphism|anthropomorphism]], which everyone is subject to (albeit to varying degrees).

//Anthropomorphic// depictions of objects, animals or even phenomena are an integral part of most human cultures, whether in children’s stories, poems, tales or in the context of various religious practices. In fairy tales, for example, animals are regularly attributed human characteristics and described as “brave”, “clever” or “treacherous”, and many gods, including those of the Western tradition, are quite clearly personifications of natural phenomena (e.g. //lightning and thunder//: [[wp>Thor|Thor]], [[wp>Zeus|Zeus]]/[[wp>Jupiter (god)|Jupiter]], etc.) or of //abstract concepts// (e.g. //war//, represented by [[wp>Odin|Odin]], [[wp>Ares|Ares]]/[[wp>Mars (mythology)|Mars]], etc.).

==== Problem case ====

While there is no reason not to use such //anthropomorphism// as a literary device – particularly in contexts where a little more artistic licence is appropriate, such as in //poetry// or, of course, in //fables// – it can become a problem when we forget that it does not reflect reality. This can then lead to the behaviour of these entities being misinterpreted or to incorrect conclusions being drawn.

For example, children will have to learn – often during a visit to the zoo – that many of the animals they have come to know from children’s books as friendly creatures with human characteristics actually pose a serious danger even to their keepers. Bears and tigers, for instance, see humans – especially the smaller ones – primarily as //potential food//.

Fortunately, zoos today are designed in such a way that one would have to be extremely reckless to actually come into conflict with dangerous animals. No such safety measures exist, however, for most of the other things and concepts we tend to anthropomorphise. In such cases, one must rely on one’s own judgement and recognise for oneself the difference between the //anthropomorphised// idea on the one hand and //reality// on the other.

===== Examples =====
It is particularly tempting to judge animals and their behaviour by human standards, since their behaviour and abilities often show, at least outwardly, clear similarities to our own.

Certainly, most pet owners will at least occasionally interpret the behaviour of their dogs or cats by human standards and attribute human-like thoughts or intentions to their actions – and there is little to be said against this, as long as one does not base subsequent decisions on such anthropomorphisation.

It is not difficult to find an example of how such misinterpretations can lead to negative consequences (in this case for the animals):
> Smiling humans are happy.
> This dolphin is “smiling”.
> <s invalid>Consequently, this dolphin is happy.</s>

In fact, dolphins simply cannot help but “smile”, as this is part of the physiognomy of a dolphin’s head. If we apply human standards to their facial expressions, we might conclude that the animals are – in human terms – “happy”.

However, it is difficult for us to judge whether the dolphins in dolphin shows, with their “happy” smiles, are truly content with their situation, and this should under no circumstances be determined on the basis of superficial similarities with human behaviour.

The same applies to negative emotions. Most people would certainly not feel comfortable if, like the dolphins mentioned, they were forced to perform tricks in front of an audience. On the other hand, a human who refused would not face the dolphin’s alternative: having to chase food in the wild every day and constantly living with the risk of dying an agonising death in a fishing net.

Another example, from an article about the pioneering days of space exploration:

> On 19 August 1960, two <u questionable "Anthropomorphisation">courageous</u> dogs, Strelka and Belka, flew into space aboard Sputnik 5.

To attribute “courage” to the dogs, one has to assume that they were actually able to //understand//, and agreed to face, the dangers of their mission. However, this is rather unlikely. It is probable that they did not even have a choice as to whether or not to take part in the experiment.

==== Anthropomorphisation of machines ====
It is not only animals: //machines//, too, are often perceived as having human-like traits.

Generally speaking, this phenomenon seems to occur most frequently when the behaviour or functioning of a machine is more //complex// (and therefore harder to comprehend) and when the device has a greater impact on our lives. Whilst hardly anyone would think of talking to their //garlic press//, for example, this very behaviour can often be observed when dealing with a car or other more complex machines.

> The engine of my car won’t start.
> I encourage the car by calling: “You can do it!”
> The engine starts.
> <s invalid "very unlikely">Consequently, the engine started //because// I encouraged it.</s>

Similar phrasings, in which the car itself appears as the acting subject, are also common in everyday language:

> <s invalid>The car parked on the cycle path.</s>
> <s invalid>The pedestrian was hit by the car.</s>

In reality, whilst the car is indeed capable of “self-moving” (hence the name “automobile”), it cannot //decide// for itself where to park or at what speed to drive: it is, of course, the //driver// who makes these decisions. It would therefore be more accurate to say:

> The //driver// parked the car on the cycle path.
> The [careless] //driver// hit the pedestrian with the car.

==== “Artificial intelligence” ====

The term “artificial intelligence” (<abbr>AI</abbr>) is used today to describe various types of computer systems that attempt to replicate “intelligent” behaviour using the tools of modern information technology.

Given the truly impressive achievements made in this field in recent years – but also because of the way <abbr>AI</abbr> is portrayed in films and video games, where its capabilities are often dramatised far beyond reality – there is a tendency to attribute human characteristics such as “sensitivity”, “reason” or even “compassion” to it.

This can easily lead to a whole range of potential fallacies, of which only a few particularly interesting examples will be highlighted here:

=== Projection of human emotions ===

One form of <abbr>AI</abbr> that has attracted a great deal of attention recently is the so-called “large language model” (<abbr>LLM</abbr>), which makes it possible to communicate with a computer through a dialogue in natural language.

These chats often resemble those between human conversational partners in both form and structure, so that the <abbr>AI</abbr> can easily be perceived as at least “quasi-human”. Most people who chat with <abbr>LLM</abbr>s such as [[https://chatgpt.com/|ChatGPT]] or [[https://claude.ai/|Claude]] will therefore spontaneously start using polite phrases such as “please” or “thank you” in the conversation.

Sometimes, people may even get the impression that their conversation partner is “helpful”, or that he is being “difficult”. And even the use of the pronoun “he” in the previous sentence is a form of //anthropomorphism// that the earlier, command-based IT systems would rarely have invited.

Even current, //state-of-the-art// “artificial intelligences” are, in fact, still nothing more than algorithmic programmes (albeit highly complex ones) that differ from traditional software primarily in the way they have been programmed: not so much through explicit program code as through feeding them //sample data// and applying certain //feedback mechanisms//. That approach has opened up entirely new possibilities for application that would have been impossible or at least very difficult to achieve with other programming methods – the natural-language communication mentioned above is only one such example. However, it would be a mistake to assume that, because <abbr>AI</abbr> exhibits //certain// aspects of human behaviour, it must also be capable of others – particularly the experience of emotions (see also: <span maniculus "go to:">[[relevancy:false_analogy|False analogy]]</span>).
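
To make the difference between these two ways of “programming” concrete, here is a minimal, purely illustrative sketch in Python. It is not taken from any real <abbr>AI</abbr> system; the function names, sample texts and numbers are all invented for illustration. It contrasts a conventional, hand-written rule with a tiny program whose behaviour is spelled out nowhere in its code, but instead emerges from labelled sample data and a simple error-feedback loop:

<code python>
# Traditional software: the desired behaviour is written out explicitly.
def is_spam_rule_based(text: str) -> bool:
    return "money" in text.lower()


# "AI-style" software: the behaviour is not written out as rules. Instead,
# internal numbers (word weights) are adjusted in response to sample data
# and a feedback signal (whether the current guess was wrong).
def train_spam_scorer(samples, epochs=50, step=0.1):
    weights = {}
    for _ in range(epochs):
        for text, is_spam in samples:
            words = text.lower().split()
            score = sum(weights.get(w, 0.0) for w in words)
            if (score > 0.0) != is_spam:
                # Feedback: nudge the weights of the words just seen
                # towards the correct answer.
                direction = 1.0 if is_spam else -1.0
                for w in words:
                    weights[w] = weights.get(w, 0.0) + step * direction
    return weights


# Invented sample data: (text, is_spam) pairs.
samples = [
    ("win money now", True),
    ("free money offer", True),
    ("lunch at noon", False),
    ("meeting notes attached", False),
]

weights = train_spam_scorer(samples)
test = "free money"
score = sum(weights.get(w, 0.0) for w in test.lower().split())
print(f"'{test}' classified as spam: {score > 0.0}")
</code>

Nothing in the trained version “understands” anything, of course: the appearance of judgement is produced entirely by arithmetic over learned numbers – which is precisely why projecting human qualities onto such systems is a fallacy.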

Of course, there is certainly nothing wrong with always communicating in a polite and respectful manner – even if the conversation partner does not feel any joy at hearing a “please” or “thank you”. However, it can become a problem when the emotional aspect of such communication ceases to be a mere side issue and instead becomes the core of the relationship.

As these <abbr>LLM</abbr>s become more prevalent in our lives, we are hearing more and more about people forming “emotional bonds” with <abbr>AI</abbr> systems, sometimes perceiving these as a kind of friendship or even a romantic relationship – possibly one that, free from the baggage of complex human relationships, seems preferable to the real thing.

The risk of ultimately being disappointed is not even the biggest problem here. What is far more serious is that this can lead to a loss of the ability to form meaningful relationships with human partners, or perhaps prevent one from ever learning how to do so in the first place.

=== Accountability ===

A similar example of a //false analogy// between <abbr>AI</abbr> and humans can be found in the following gem, which, although somewhat simplified here, is based on a real-life take on artificial intelligence:

> The <abbr>AI</abbr> in an autonomously driving car behaves similarly to a human driver.
> Human drivers are legally responsible for any accidents they cause.
> <s invalid "Diffusion of accountability by anthropomorphisation">Consequently, an <abbr>AI</abbr> should also bear its own legal responsibility for accidents.</s>

Indeed, <abbr>AI</abbr> systems in self-driving cars have been programmed to replicate the (idealised) behaviour of human drivers as closely as possible (and they often even surpass them, thanks to better sensor technology and faster reaction times). However, the ability to react to traffic situations in accordance with human ideals does not imply that, like humans, they are also capable of making moral decisions (<span maniculus "see also:">[[relevancy:false_analogy|False analogy]]</span>).

So if such an <abbr>AI</abbr>-controlled car – to return once more to the examples given above – parks on a cycle path, or perhaps hits a pedestrian, it is hardly possible to “hold the vehicle itself to account” (what that would even look like is another question altogether). Instead, responsibility must lie with someone who is actually capable of making moral decisions – in other words, a human being: specifically, either the software manufacturer or a driver who monitors the software and has to intervene if necessary.

The question of who can be held liable for the behaviour of such autonomous vehicles is of interest from both a [[wp>Jurisprudence|legal]] and an [[wp>Ethics|ethical]] perspective. However, simply shifting the blame onto the machine itself does not do justice to the complexity of the issue and, in particular, leaves manufacturers open to the suspicion that they themselves are unwilling to take responsibility for their products.

<aside info>**Note:** When considering the question of whether or when artificial intelligence can be regarded as “intelligent” in the human sense, this also touches on the no less interesting topic of [[glossary:emergence|Emergence]].
</aside>

=== “National psyche” ===

The examples given so far might give the impression that anthropomorphism as a fallacy is primarily a problem when the underlying entities are not fully understood. Yet even experts tend to anthropomorphise the subjects within their own fields of specialisation.

This can happen, for example, when biologists give human names to the animals they are observing – a practice that is actually frowned upon in the discipline precisely because of the misinterpretations it invites, yet probably still widespread.

Even car mechanics can be heard speaking encouragingly to their cars, and <abbr>AI</abbr> professionals, too, sometimes lose their emotional distance from the models they have “trained” themselves.

However, some particularly striking examples of this anthropomorphism can be found in the field of history: according to some, Russia “craves” revenge, America has an “[[wp>Oedipus complex|Oedipus complex]]” towards Europe, whilst Germany, by contrast, suffers from an “[[wp>Inferiority complex|inferiority complex]]”. Such claims, and many similar ones, attempt to ascribe to a country or a nation – that is, to an abstract entity – attributes that are characteristic of human beings and not really applicable to nations or states.

This applies in particular to psychoanalytic terms (such as the “Oedipus complex” or “inferiority complex” in the examples above), which, when stretched so far beyond their intended scope, lose all meaning.

==== Pathetic fallacy ====

Finally, it is also worth mentioning the so-called “pathetic fallacy”, which involves attributing human emotions particularly to natural phenomena and inanimate objects.

The term is often rendered as “anthropomorphisation of nature”, but it refers primarily to a “false emotionality” conveyed through depictions of nature, particularly in literature.

For more information on this, please see: <span maniculus :en>[[abstraction:pathetic_fallacy|Pathetic fallacy]]</span>.

===== See also =====
  * [[abstraction:reification|Reification]]
  * [[psychology:cognitive_bias:anthropomorphism|Anthropomorphism (cognitive bias)]]
  * [[causality:teleological_fallacy|Teleological fallacy]]

===== More information =====

  * [[wp>Anthropomorphism|Anthropomorphism]] on //Wikipedia//