It’s how we think, not what we think, that makes all the difference

[Image: Excavator on Mountain]

‘Artificial Intelligence’: A misused metaphor?

Connell Fanning and Marija Laugalyte

Metaphors play a large role in how we think. The purpose of metaphors is to help us understand one thing (not yet understood) in terms of another thing (already understood). Hence, we need to pay attention to the metaphors we use to ensure they are appropriate to our purpose.

Currently, a pervasive metaphor being bandied about is ‘artificial intelligence’. This, like many other metaphors we use unthinkingly, such as the ubiquitous war metaphors, is a dangerous one. The idea of ‘artificial intelligence’ is to carry over something we understand in relation to human beings, namely ‘intelligence’ (in whatever sense we mean it), to computing machines and algorithmic software, with the qualifier ‘artificial’ marking the latter.

This is misleading in the way it prompts the idea that machines and algorithms have something akin to human intelligence. The idea is programmed into us by the kind of images invariably used to represent ‘AI’: a quick Google search will throw up hundreds of images of a human head or brain with mechanical elements superimposed.

The consequence is misunderstanding, and fears that run beyond the reality of ‘AI’. There is no doubt that this information technology is, or may be seen as, a threat to some occupations in which routine and repetitive processing of information is involved, such as many tasks in the legal profession and the generation of standard texts.

The threat, however, comes from what ‘AI’ really is when it is spelled out beyond the metaphor, that is, algorithmic statistical large language pattern modelling, a much less elegant and evocative phrase than so-called ‘artificial intelligence’. Properly described, this information technology is a different matter from human intelligence.

However powerful this information technology may be, it is only a means of ‘scraping’ the vast array of data on the internet and, using probability algorithms, remembering and compiling it according to instructions. As Meredith Broussard of New York University put it, “All generative AI does is remix and regurgitate stuff in its source material”.[1] So, it’s all about the source material.
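To make ‘statistical pattern modelling’ concrete, here is a minimal sketch, purely illustrative and not a description of any particular system: a toy bigram model that learns which words tend to follow which in a small body of ‘source material’ and then generates text by sampling from those learned frequencies. The corpus and function names are invented for illustration.

```python
import random
from collections import defaultdict, Counter

# Toy "source material" (illustrative only; real systems ingest web-scale data).
corpus = (
    "metaphors help us understand one thing in terms of another thing . "
    "we use metaphors to understand machines . machines remix the source material ."
).split()

# Learn bigram counts: how often each word follows each other word in the source.
transitions = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word][next_word] += 1

def generate(start_word, length=12):
    """Generate text by repeatedly sampling the next word in proportion to
    how often it followed the current word in the source material."""
    word, output = start_word, [start_word]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        words, counts = zip(*followers.items())
        word = random.choices(words, weights=counts)[0]
        output.append(word)
    return " ".join(output)

print(generate("metaphors"))
# Every word in the output comes from the source material; the model only
# recombines it according to the statistical patterns it has observed.
```

Large language models are incomparably larger and more sophisticated than this toy, but the point stands: the output is a statistical recombination of patterns found in the source material.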

Much of the conversation about artificial intelligence confuses making with creating. As Susanne Langer put it, “An automobile is not created on a conveyer belt, but manufactured. We don’t create bricks, aluminium pots, or toothpaste; we simply make such things. But we create works of art”. She continues:

“The difference between creation and other productive work is this: an ordinary object, say a shoe, is made by putting pieces of leather together; the pieces were there before. The shoe is a construction of leather. It has a special shape and use and name, but it is still an article of leather and is thought of as such. A picture is made by deploying pigments on a piece of canvas, but the picture is not a pigment-and-canvas structure. The picture that emerges from the process is a structure of space, and the space itself is an emergent whole of shapes, visible colored volumes. Neither the space nor the shapes in it were in the room before.”[2]

Much of the thinking people do is creative in nature.

The power of making, at least for now, is where the threat to people’s livelihoods lies. It is also where biases (gender, racial, and others) and other controversial issues arise from the source material on which the software is ‘trained’, as it is put.

Nevertheless, this is also where the opportunity lies: to release people from repetitive tasks so they can focus on what human intelligence is best at, that is, creativity and imagining, making and understanding meaning and truths, judgement-making, sentiment, joyfulness, significance, values, expectations, hopes, disappointments, decisions, and the psychological uncertainties that pervade people’s lives; in short, all that makes us human, not artificial. Information software and equipment, unlike human beings, obviously do not have bodies. That, for one, is the difference that makes a difference.

To manage our thinking better, we might consider alternative metaphors for algorithmic statistical large language pattern modelling and how they affect the way we think about it. For instance, what if we used the metaphor of ‘an excavator’ crawling over a large mountain of data?


[1] Nick Robins-Early. ‘Very wonderful, very toxic’: how AI became the culture war’s new frontier. The Guardian, 21 August 2023. Accessed at www.theguardian.com/us-news/2023/aug/21/artificial-intelligence-culture-war-woke-far-right?CMP=Share_iOSApp_Other on 22 August 2023, 15:00.

[2] Susanne Langer. Problems of Art: Ten Philosophical Lectures. Charles Scribner’s Sons, New York, 1957, pages 27-28.