“The easiest way to mismanage a technology is to misunderstand it.”
- Jaron Lanier
Jaron Lanier’s remark about the risks of mishandling technology was written in the context of artificial intelligence, a term he (and I) dislike and find misleading. But there is a wisdom here that applies to far more than AI. The problem with saying we go awry when we misunderstand a particular technology is that completely understanding any technology is impossible. This is one of the dirty secrets of our supposedly advanced age. We presume our superiority because our tools seem so powerful, even though many of our most important techniques, those we would not even consider ‘technology’, are now so atrophied that even adequately understanding a technology has become challenging.
A Silicon Valley engineer with a far more critical eye than most of his peers, Lanier comes from a family background as extraordinary as he is. His mother had to fast-talk her way out of a Nazi concentration camp, and almost all his paternal relatives were killed in the Ukrainian pogroms. His grandfather escaped to New York, where Jaron’s father, Sterling E. Lanier, was born. Sterling went on to become an editor for Chilton Books, and in 1965 was instrumental in persuading them to publish Frank Herbert’s Dune. A writer in his own right, his short story “A Father’s Tale” was shortlisted for the World Fantasy Award, and he was one of many writers who corresponded with J.R.R. Tolkien.
Nicknamed ‘the dismal optimist’, Jaron Lanier has been a popular voice raising concerns about the internet and virtual reality, including the warning that we are at great risk of becoming unintelligible to one another. As Lanier argues, ‘artificial intelligence’ doesn’t name some new kind of being; it is merely a label for the computerisation of certain narrow aspects of our cognitive functions. The greatest risk of AI, therefore, is not that it will destroy us: as Lanier has repeatedly warned, if AI has a role in our extinction, we will still have been the ones responsible. The risk he foregrounds is that its ever-growing potency will cause further damage to the social relationships at the heart of the human experience - that, in short, it will drive us insane. And this is something we have all seen play out around us in recent years.
The trouble with his warning that we will inevitably mismanage any technology we do not understand is that it embeds two problematic assumptions. The first is that to talk of ‘mismanaging’ a technology presumes that technology is something to be managed at all. This idea of ‘management’ is a relatively young one, a means of transferring the old social structures of feudalism onto new societies centred upon commerce. But you can only manage what you can understand, as Lanier warns, and in the vast majority of situations where we attempt management there is no possibility of understanding. Increasingly, we have given up even trying. The idea that complex situations need to be managed rests upon wild assumptions about the efficacy of managerial expertise, assumptions that are disconfirmed almost daily.
Beyond and behind even this, however, is the conceptual crisis of our time that was so presciently foreshadowed by the German philosopher Martin Heidegger. The essential quality of technology, the thing that makes tools into technology, is not itself technological. It lies instead in a palpable shift in our ways of thinking: a shearing of our worlds away from our previous means of understanding and into a new space where we can comfortably retrofit fire or printing into our new category of ‘technology’. The essence of technology is the transformation of our planet into resources to be harnessed, and the strip-mining of thought into mere information.
Remember that old saw about what people with hammers are apt to do...? People who think about their tools as technology see problems as something to be managed by acquiring and deploying resources, a term that ghoulishly includes people as well as everything from forests to minerals and even sunlight. This is what it means to live within technology, and it is impossible to understand this circumstance completely. Mismanagement of technology is thus inevitable, yet we never once question whether ‘managing’ is even what we should be doing. Until we understand this deeper aspect of our technological transformation, we are doomed to misunderstand everything.
For Matt
Well, I am late to the game on this thread, but I am motivated to leave a comment for the philosophers.
Since Technology encompasses applying Knowledge to do something, the question of misunderstanding technology reduces to misunderstanding the knowledge being applied to the action or task. Technology also arises from invention, and with every invention there are unintended consequences - which is another way of saying the Technology was misunderstood.
Usually the field of creating Technology is called Engineering. Tools are engineered devices for actionable tasks, while the technology also encompasses the recipes, process flows, instructions, etc. for completing the task.
Take for example the Technology for making a complex integrated circuit chip designed for AI systems. Fabricating such chips typically requires a process flow with an equipment set of 1,000 tools, each having a specific recipe. This process flow executes a design typically developed using a dozen or more software design tools, and printing tools create the patterns on the more than 100 layers that make up the chip. The chip is then typically assembled into a package with other chips by a process known as Heterogeneous Integration, creating a system in a package - and on and on, until you ask your question to ChatGPT...
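The distinction drawn here between tools and technology can be sketched in code, purely as an illustration: a tool alone does nothing, while the technology is the whole ordered flow of tool-and-recipe steps. All names below (`Step`, `process_flow`, the tool and recipe labels) are hypothetical, not taken from any real fabrication system.

```python
# Illustrative sketch only: "tools" are engineered devices, while the
# "technology" also encompasses the recipes and the process flow that
# sequences them. All names here are hypothetical.
from dataclasses import dataclass


@dataclass(frozen=True)
class Step:
    tool: str    # the engineered device
    recipe: str  # the tool-specific instructions for this step


# A process flow is an ordered sequence of (tool, recipe) steps;
# a real chip flow would have on the order of 1,000 such steps.
process_flow = [
    Step("lithography_scanner", "pattern_layer_1"),
    Step("etcher", "etch_layer_1"),
    Step("deposition_chamber", "deposit_layer_2"),
]


def run(flow):
    """Execute each step in order; the 'technology' is the whole flow,
    not any single tool in it."""
    return [f"{step.tool}:{step.recipe}" for step in flow]


print(run(process_flow))
```

The point of the sketch is simply that removing any single element, a tool, a recipe, or the ordering itself, breaks the technology even though every individual device still works.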
By "understand the problem" are you suggesting questioning the whole approach? Is there a problem inherent to this utilitarian approach?