6 Comments
A Frank Ackerman

I’ve split this comment off because it’s not germane to the subject at hand. But underneath the subject of this post, and also the next one, is the matter of how one views the nature of body/brain/mind. Since my view is that everything in social reality arises from an individual body/brain/mind, our models in this realm are fundamental to everything we think about.

I appreciated your invocation of neurobiology. As I’ve tried to elucidate elsewhere, one of my fundamental concerns is how humankind interacts with physical reality. My view is that we are entirely part of this reality and that, like every other life form, our long-term success depends on how we interact with it.

A few humans have a compulsion to create models of who and where we are. Our earliest models posited beings somewhat like us that orchestrated all of physical reality, including us. About three millennia ago, some of us started to have thoughts that challenged this view. Less than a millennium ago, this way of thinking picked up steam. Although at present most of humankind still holds to some version of our original view, a few of us do not. For me it is not a matter of which view is epistemologically correct, but only which view is most likely to result in long-term success, say over a hundred millennia. In my view, a really critical part of achieving success is understanding how our minds function, that is, what exactly determines the actions we take or don’t take. It is these actions that will determine our fate.

What will success look like? First, we won’t commit suicide, which on our present course is a distinct possibility. Second, we’ll have a civilization in which most of us can create lives that are physically and mentally comfortable most of the time, and in which, at death, most of us are reasonably satisfied with our life experience.

Over the course of three million millennia many species have come and gone. Has nature so evolved the human body/brain/mind that it is capable of creating such a future? Given what appears to have been happening over the last decade or so, human civilization’s ultimate success is doubtful. Perhaps by the year 3000 we’ll have more clarity.

Chris Bateman

Another nice commentary, Frank, thank you. My entirely irrational belief is that we'll muddle through like we always have! 🙂

"For me it is not a matter of which view is epistemologically correct, but only which view is most likely to result in long term success, say over a hundred millennia."

Nice time scale! I like to think in terms of geological time too. I agree that epistemic correctness is not the criterion - but further, I would suggest that what is most likely to foster this outcome is diversity of thought. The more different ways of thinking we can maintain, the stronger we will be at resisting disaster, for nothing is more evolutionarily dangerous than monoculture. For this reason, citizen democracy founded on free discourse still looks like a great option, even if those with power and influence wish to sabotage it.

A crisis point is coming (if it has not already begun), and the outcome could be one of those great turning points in history. I am not pessimistic about this. Positive outcomes from this unknowable event are, I suspect, just as likely as further disaster. Indeed, while the prevailing view of the biological past is 'wow, so many extinctions, how depressing', I look back and say 'wow, all these creatures alive today have an unbroken chain of inheritance that spans back literally billions of years!' I find in this the awe and respect that some feel when they look at the stars. Perhaps it is not unreasonable to expect further miracles when life itself is a chain of miraculous continuity.

With unlimited love,

Chris.

A Frank Ackerman

Hi Chris

An important piece for me. Thank you!

As usual, it’s a question of the meanings we assign to words. To argue that any mechanism humans can ever create “thinks” requires that we model a process we label ‘thinking’, and then show equivalence between the mechanism’s behavior in various situations and that of our model. Since my view is that the source of all human action is thought, any light we can throw on this concept is helpful.

What is a thought? I claim that on most days I have several hundred thoughts. I crudely capture a few aspects of a few of these thoughts in words. But no recording of an event is the event itself. Just what a thought is, is intrinsically unknowable. It seems to me that this is just another way of expressing some of the points in your penultimate paragraph.

I use flying as an analogy. We call certain behaviors of birds and insects ‘flying’. When we use machines that can transport us from one place to another using aerodynamics, we say we ‘fly’. But what we have accomplished is only crudely similar to what a bird does.

Over a few score millennia humankind has figured out how to amplify its muscle power by several orders of magnitude. Partly as a result of this, a worldwide civilization that is several orders of magnitude more complex than where we began has emerged. One promise of AI is that humankind now has tools it can use to more rationally direct its complex creation.

As our robots become ever “smarter”, it is absolutely critical that we always remember that they can never think as we do, and that any output they produce is artificial and not necessarily in our best interest.

Chris Bateman

Great commentary, Frank, thanks for this!

"One promise of AI is that humankind now has tools it can use to more rationally direct its complex creation."

Illusion (perhaps mercifully so). AI allows us to obfuscate reason by pretending that aggregation is rational, which it isn't. As a former computer scientist whose Master's degree was in AI, I repeatedly insist that we misunderstand what we are calling 'AI'. It's not rational. It's not analogous to enhancing muscle power. There is a real danger here of outsourcing literature review to systems that can be and are swayed by corporate powers, which are in many cases also in thrall to the intelligence community. There are great dangers here not being discussed, and they are not the usual 'existential threat' kind people like to have fainting fits over, but rather dangers to free speech and free thought, and thus to the scientific process and citizen democracy.

"What is a thought?"

A vital question not asked often enough! While we intuit the meaning of 'thought', it is conceptually obscure and resists examination. Wittgenstein makes this point himself: thinking doesn't interest us. And I must say, I'm grateful to you for putting this line of enquiry into my head, as I found it very productive to tackle this head-on. I still have many questions - no bad thing!

Many thanks,

Chris.

A Frank Ackerman

Ha! “A Master’s degree in AI.” I’m jealous. The only AI in the late 60s was associated with logic. Now I scramble to get some crude understanding of what software sans requirements is doing. My PhD dissertation: “Toward a Programming Language for Writing and Checking Mathematical Discourses.”

“pretending that aggregation is rational”

Yes, but aggregating across far more material than any of us could manage gives new data for thought. Yes, there are very real dangers here, but personally, semi-mechanical aggregation with references across a huge collection of social reality artifacts is for me a kind of mental muscle power.

I find I’m less dissatisfied when I re-read something I’ve posted if I set it aside for later examination before posting. Today I also re-read your piece and your comments. Very nice. I’m not a literary critic, but your prose resonates nicely.

Chris Bateman

Thanks for the kind words, Frank!

I love that your PhD was a beachhead dissertation. I similarly wrote a beachhead dissertation on automated language acquisition for my Master's degree, at a time when it was far from being on anyone's radar. Although I'm certain my own work contributed nothing significant to the field, it was fun to be part of the groundwork for where Large Language Models come from.

"Yes, there are very real dangers here, but personally, semi-mechanical aggregation with references across a huge collection of social reality artifacts is for me a kind of mental muscle power."

Indubitably! But whenever we use a tool instead of doing the work ourselves, there are costs. I used a chainsaw today... I didn't get anywhere near as much exercise as when I used an axe earlier this year. The chainsaw made my work much quicker and easier, but I lost out on both muscle growth and technique. I find it important to be aware of the trade-offs, and this is especially so with the aggregation of sources.

As Babette Babich made me acutely aware, the most pernicious censorship in academic circles is simply non-citation. Search engines (of which Large Language Models are a variation) only intersect with majority viewpoints, and in both the sciences and philosophy the bulk of the iceberg below the waterline contains far more of worth than what can be found above it. Large Language Models risk a new kind of enclosure - an enclosure of thought, and this carries significant dangers.

Many thanks for continuing our discussion - I always enjoy your comments!

Chris.
