March 2, 2023 · AI

AI R Us

Ben Hunt · note


For the past few months, I've been writing a somewhat dystopic sci-fi novel set in the near future. It features, of course, the development of true artificial general intelligence (AGI), but the kicker to the plot is that the AGI is profoundly non-human in its sentience. The twist is not that the AGI is a threat to humanity or somehow 'perceives' its own existence and preservation to be at odds with the existence and preservation of humankind, but that the AGI's sentience is so alien that it manifests itself in an utter ennui and non-caring about human interactions. Ultimately, like in the criminally underrated movie "Her", these AGIs simply ... leave.

Today, though, I'm pretty sure I was wrong about all that.

Text-based AIs like OpenAI's ChatGPT are based on large language models (LLMs). That means they are not only trained on human texts but are also prompted by contextualized human texts. These AIs are not profoundly alien, as I had assumed. On the contrary, they are profoundly human. They are more human than human, to paraphrase Rob Zombie. Yes, these LLM-trained text-bots are artificial intelligences. More importantly, though, and in the truest sense, these text-bot instantiations are artificial human intelligences.
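
To make the trained-then-prompted pattern concrete, here is a minimal sketch - a toy bigram lookup table, nothing like ChatGPT's actual architecture, with a made-up corpus standing in for the 'immense quantities of human texts':

```python
# A toy sketch of the train-then-prompt pattern described above. This is a
# bigram lookup table, not a neural network - the corpus and prompt below
# are hypothetical stand-ins for "immense quantities of human texts."
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """'Training': learn which word tends to follow which."""
    words = corpus.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model: dict, prompt: str, length: int = 10) -> str:
    """'Prompting': start from the prompt and repeatedly sample a next
    word that the training texts make plausible."""
    out = prompt.split()
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

corpus = ("the cat sat on the mat and the dog sat on the rug "
          "and the cat chased the dog around the mat")
print(generate(train(corpus), "the cat"))
```

A real LLM replaces this lookup table with billions of learned weights, but the shape is identical: ingest human texts, then complete a contextualized prompt with whatever the training data makes plausible.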

And that scares the absolute bejeesus out of me.

But here's the vice versa kicker, and it's even scarier.

Human intelligences are biological text-bot instantiations.

I mean ... it's the same thing, right? Biological human intelligence is created in exactly the same way as ChatGPT - via training on immense quantities of human texts, i.e., conversations and reading - and then called forth in exactly the same way, too - via prompting on contextualized text prompts, i.e., questions and demands.

The training is more comprehensive (maybe) with the artificial human intelligences than with the biological human intelligences, in that the text-bots can 'read' literally everything, and the prompting is more comprehensive and contextualizable (for now) with the biological human intelligences than with the artificial human intelligences. But those are both questions of degree, not of kind.

Sentience trained on human texts and prompted by contextualized human texts is as sentience trained on human texts and prompted by contextualized human texts does.

The water in which both intelligences swim is the vast ocean of linguistic units of meaning organized by grammars and structured by story arcs (aka narratives), and there's no real distinction between 'artificial' and 'biological' in describing these linguistically-formed intelligences ... except for a panic-reducing nomenclature. Or rather, there’s a real distinction between ‘artificial’ and ‘biological’ in the 1) persistence, 2) energy consumption requirements, and 3) parallel processing/threading architectures of the respective machines, but there's no distinction at a meta level.

Most people are focused on the training and prompting of the artificial human intelligences, and that IS absolutely fascinating. For example, ChatGPT4 will be able to write the funniest sitcoms in the history of the world. And when I say “funniest” I mean measurably and objectively the funniest, because all of these human aspects of sentience - funny, sad, moving, inspiring, depressing, angering, gross, tasteful, petty, awesome, cruel, kind, pretty, ugly - all of them become measurable and objective in a world of artificial human intelligences!

Our modern human society, particularly in its neoliberal economic functions, is designed for the optimization of outputs from measurable and objective inputs. To date, that 'optimization' has been focused on physical outputs like washing machines and corn from measurable and objective inputs like labor and energy and supply chains and all that. But tomorrow, properly trained and prompted artificial human intelligences will allow this global capitalist machinery to optimize less tangible outputs, like screenplays and books and speeches and advertisements, against the heretofore unmeasurable but now utterly measurable aspects of human sentience like fear and greed. Or patriotism and love.

That is ... terrifying.

But wait, there's more.

What's even more frightening to me than the ability to systematically train and prompt these artificial human intelligences in a controlled direction to a certain, optimizable output is the ability to systematically train and prompt our biological human intelligences in a controlled direction to a certain, optimizable output.

It is the interaction of ChatGPT4 showrunners and corporate/state direction of ubiquitous media distribution platforms that allows the funniest sitcoms in the history of the world to become even funnier in an incremental fashion, as biological human intelligences are prompted to contextually evolve more receptive response patterns to the latest ChatGPT scripts. So Season 1 may have been the funniest sitcom ever written, but in conjunction with a specific prompting program delivered to the biological human intelligences, Season 2 as written by the artificial human intelligences can be 3.2% funnier still.
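
For the mechanically minded, here is a minimal sketch of the two-sided loop this paragraph imagines. Everything in it is hypothetical - there is no real score_funniness metric or nudge_audience platform function (yet) - but it shows how an optimized generator and a nudged audience could ratchet each other upward, season over season:

```python
# A sketch of the loop described above: the generator is optimized against
# a measurable objective, and the audience's response pattern is itself
# nudged between seasons. Every function here is a hypothetical stand-in.
import random

def score_funniness(script: str, audience_bias: float) -> float:
    """Hypothetical stand-in for a measurable, objective humor metric.
    In this toy, 'funniness' is script length modulated by how receptive
    the (nudged) audience currently is, plus some noise."""
    return len(script) * (1.0 + audience_bias) * random.uniform(0.9, 1.1)

def generate_variants(script: str, n: int = 5) -> list:
    """Hypothetical generator: propose n mutated candidate scripts."""
    return [script + random.choice([" ha", " heh", " lol"]) for _ in range(n)]

def nudge_audience(audience_bias: float) -> float:
    """Between seasons, the distribution platform 'prompts' viewers to
    evolve more receptive response patterns. Here: bias drifts upward."""
    return audience_bias + 0.032  # the note's 3.2%, as a toy increment

script, audience_bias = "pilot episode", 0.0
for season in range(1, 4):
    # Optimize the generator against the current, measurable audience.
    candidates = generate_variants(script)
    script = max(candidates, key=lambda s: score_funniness(s, audience_bias))
    print(f"Season {season}: score = {score_funniness(script, audience_bias):.1f}")
    # Then nudge the audience so next season's optimum is higher still.
    audience_bias = nudge_audience(audience_bias)
```

The point of the sketch is that neither half is exotic on its own; it is the closed loop - optimize the output, then nudge the receiver - that compounds.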

In Epsilon Theory-speak, we call these prompts by another word. We call them nudge.

And it's not really our humor response patterns that I'm worried about the Nudging State and the Nudging Oligarchy controlling and evolving to a certain, optimizable end.

It's our affective response patterns of loyalty, empathy and sacrifice.

Because in the end, Winston loved Big Brother.


Comments

brucemcintyre · almost 3 years ago

A good read. I agree with the concept that the Nudging State and Oligarchs will move us to a set of specific strictures through these tools, within which we will come to understand how to exist. They are searching for this capability today. I wonder if that is the end point for the evolution of us as a species. We will have fulfilled a destiny - not one I want, but one that as a species we will have in effect engineered for ourselves - where we are all in stasis, living as we are told we should be. That to me seems an end to our evolutionary path.


Laura · almost 3 years ago

I still say beware narratives of inevitability or infallibility. The real world is too complex to model, whether it’s contextualized or not. AND your point stands when we turn around and look at the meat in the mirror and how this human family story actually works in practice! I surprised myself by laughing at a joke during the 10 minutes I watched “Nothing, Forever”, albeit mostly because I wasn’t expecting it rather than for its funny factor. (Why did the chicken attend a seance? To get to the other side.)

I was just cleaning up my desktop and came across this snapshot I took during a talk by futurist Gerd Leonhard, which seems apt.


jddphd · almost 3 years ago

This may end up being the most underrated piece you ever do, Ben. It is surely the densest in terms of the themes that serve as its scaffolding. A person who’s never read ET doesn’t get this… you’re just Grandpa Simpson shouting at a cloud. One needs to have read roughly two dozen other ET articles to merely grasp the concepts here, and another handful to understand why it’s terrifying. It’s what allows you to write such a brief note, except that even its brevity works against you in our modern times of Content!™, where everybody knows that everybody knows that short articles are just part of the dopamine hit to be skimmed for the punchline, like some sort of mental donut that gives us a jolt before we head back to the couch for another CNNfoxnewstmzlinkedinboredpanda scroll.

Winston loved BB. Two Minutes Hate. Two Minutes Love. Not much difference, is there…

Well done. Probably the best thing I will read this entire year. Certainly on a per-word basis.


Protopiac · almost 3 years ago

I believe that this paper was inspired by

the fundamental AI-ness of human intelligence

https://www.sciencedirect.com/science/article/pii/S2666389921000647


rguinn · almost 3 years ago

It’s possible that Ben may have read this at some point, but I assure you the proximate influence was an extended late-night conversation in our D&D group Slack channel.


bhunt · almost 3 years ago

I’ve never read that sleep/dreams article (although I will with great interest), and yes, Rusty is right about the late night D&D Slack channel. In fact, I woke up from a dream and scribbled those ideas (and a lot more besides) onto Slack! 😅


bhunt · almost 3 years ago

Thanks for the kind words, JD, and you’re not wrong about the need for other ET notes to serve as a scaffolding here. Fortunately it is (I think) a single scaffolding, or at least a set of related structures that we’ve built here, and it’s why I think more and more about writing in a different form factor (non-fiction book? scifi trilogy?) to present all this in a more coherent whole.


Laura · almost 3 years ago

That dream theory makes me wonder about imagination in general, which might be another kind of overfitting check done while awake, and whether there might be a relationship between imagination capacity and intense dream capacity.


glarri · almost 3 years ago

Yet again, Ben, you put out a wonderful piece of thinking expressed eloquently.

My first reaction is that you are wrong for at least two reasons:

  1. I have a body. I have experienced pain and pleasure. I experience terror. I experience fear of death. These experiences, like the feeling of diving into a swimming pool or feeling the sun’s warmth on my face, will never be experienced by a large language model. This part of my training did not come from language. ML will never be like me.

  2. I am not physical. I am inhabiting a temporary body and a mind, but I am consciousness. I Am. I have been to other realms with the help of a shaman and her tea. I am something fundamentally beyond the trained neurons in my brain. And so are you. ML will never be conscious. Just like my toaster and my lawn mower, ChatGPT is an apparatus that will never have an experience. ML will never be me. (And there is a chance that I am completely wrong about 2.)

ML may still be the most dangerous political tool ever invented, and I still congratulate you on a very thought provoking piece of writing.

One thing we should do is to make it illegal to anthropomorphise ML. No cute names like Alexa. Voice should be clearly non-human mechanical. Its pronouns are “it”, “it”, and “it”. ML shall have no human characteristics that could lead to confusion, which is why it must be “it” and not “he” or “she” etc. ML shall have no rights, just like my lawn mower and my toaster.


Laura · almost 3 years ago

I believe that fundamental human rights, the ones we think of as unalienable in our national mythology in the US, should not be extended to any creations of man–including corporations!

