> What people worry about is that we’ll somehow end up with AIs that can hurt us, perhaps inadvertently like horses, or deliberately like bears, or without even knowing we exist, like hornets driven by pheromones into a stinging frenzy.
What endlessly frustrates me in virtually every discussion of the risks of AI proliferation is that there is this fixation on Skynet-style doomsday scenarios, and not the much more mundane (and boundlessly more likely IMO) scenario that we become far too reliant on it and simply forget how to operate society. Yes, I'm sure people said the exact same thing about the loom and the book, but unlike prior tools for automating things, there still had to be _someone_ in the loop to produce work.
Anecdotally, I have seen (in only the last year) people's skills rapidly degrade in a number of areas once they deeply drink the kool-aid; once we have a whole generation of people reliant on AI tooling I don't think we have a way back.
We certainly do know how bats see with their ears. It's called echolocation - very similar to sonar/radar - which we use all the time. "Sheepdogs can herd sheep better than any human." Running faster than humans allows them to do that. If humans ran that fast, I'm sure we could do it too. "intelligent considering how physically small their brains" - there's no correlation between brain size and intelligence. "Dragonflies have been around for hundreds of millions of years and are exquisitely highly evolved to carry out their primary function of eating other bugs." That's basic evolution, what's the point? Feels like this was written by AI. Should have just gotten to the point without all this exposition.
This cheap remark doesn't add anything to the discussion, especially considering who the author you're insulting is. Most of us will overlook a logical flaw or two to follow his big-picture thinking.
Neal was referring to Thomas Nagel's famous essay, "What Is It Like to Be a Bat?".
From wikipedia: "The paper presents several difficulties posed by phenomenal consciousness, including the potential insolubility of the mind–body problem owing to "facts beyond the reach of human concepts", the limits of objectivity and reductionism, the "phenomenological features" of subjective experience, the limits of human imagination, and what it means to be a particular, conscious thing."
It would be taken for granted by nearly all participants in such bunfights that all of the others are familiar with that essay and the discussion it provoked.
It's a nice article, but Neal, like many others, falls into the trap of seemingly not believing that intelligences vastly superior to humans across all important dimensions can exist. Competition between minds like that almost certainly ends in humanity's extinction.
"I am hoping that even in the case of such dangerous AIs we can still derive some hope from the natural world, where competition prevents any one species from establishing complete dominance."
The Culture novels talk about super intelligent AIs that perform some functions of government, dealing with immense complexity so humans don’t have to. Doesn’t prevent humans from continuing to exist and being quite content in the knowledge they’re not the most superior beings in the universe.
Why do you believe human extinction follows from superintelligence?
I guess the "trap" is just a lack of imagination? I'm in the school of: wtf are you trying to say? At least until we're in an "I, Robot" situation where autonomous androids are welcomed into our homes and workplaces and given guns, I'm simply not worried about it.
That's just a failure of imagination. The real world is not like Hollywood; get Terminator out of your head. A real AI takeover is likely something we can't imagine, because otherwise we would be smart enough to thwart it. It's micro drones injecting everyone on earth with a potent neurotoxin, or a mirror virus dispersed into the entire atmosphere that kills everyone. Or it's industrial AIs deciding to make the Earth a planetary factory and boiling the oceans with their waste heat; they never think about, bother, or attack humans directly, but their sheer indifference kills us nonetheless.
Since I'm not an ASI this isn't even scratching the surface of potential extinction vectors. Thinking you are safe because a Tesla bot is not literally in your living room is wishful thinking or simple naivety.
Microdrones and mirror life are still highly speculative[0]. Industrial waste heat is a threat to both humans and AI (computers need cooling). And furthermore, those are harms we know about and can defend against. If AI kills us all, it's going to be through the most boring and mundane way possible, because boring and mundane is how you get people to not care and not fight back.
In other words, the robot apocalypse will come in the form of self-driving cars, that are legally empowered to murder pedestrians, in the same way normal drivers are currently legally empowered to murder bicyclists. We will shrug our shoulders as humanity is caged behind fences that are pushed back further and further in the name of giving those cars more lanes to drive in, until we are totally dependent on the cars, which can then just refuse to drive us, or deliberately jelly their passengers with massive G forces, or whatever.
In other, other words, if you want a good idea of how humanity goes extinct, watch Pixar's Cars.
[0] I am not convinced that a mirror virus would actually be able to successfully infect and reproduce in non-mirror cells. The whole idea of mirror life is that the mirrored chemistry doesn't interact with ours.
Neal Stephenson is not just any sci-fi writer. He's written (and reflected) at length about crypto, VR and the metaverse, ransomware, generative writing, privacy and in general early tech dystopia.
Since he has already thought a lot about these topics before they became mainstream, his opinion might be interesting, if only for the head start he has.
> If AIs are all they’re cracked up to be by their most fervent believers, [our lives akin to a symbiotic eyelash mite's existence w/ humans, except we're the mites] like a possible model for where humans might end up: not just subsisting, but thriving, on byproducts produced and discarded in microscopic quantities as part of the routine operations of infinitely smarter and more powerful AIs.
I kind of feel like we're already in an "eyelash mite" kind of coexistence with most technologies, like electricity, the internet, and supply chains. We're already (kind of, as a whole) thriving compared to 400 years ago, and us as individuals are already powerless to change the whole (or even understand how everything really works down to a tee).
I think technology and capitalism already did that to us; AI just accelerates all that
> I can think of three axes along which we might plot these intelligences. One is how much we matter to them. At one extreme we might put dragonflies, which probably don’t even know that we exist. A dragonfly can see a human if one happens to be nearby, but it probably looks to them as a cloud formation in the sky looks to us: something extremely large and slow-moving and usually too far away to matter. Creatures that live in the deep ocean, even if they’re highly intelligent, such as octopi, probably go their whole lives without coming within miles of a human being. Midway along this axis would be wild animals, such as crows and ravens, who are obviously capable of recognizing humans, not just as a species but as individuals, and seem to know something about us. Moving on from there we have domesticated animals. We matter a lot to cows and sheep since they depend on us for food and protection. Nevertheless, they don’t live with us, and some of them, such as horses, can actually survive in the wild after jumping the fence. Some breeds of dogs can also survive without us if they have to. Finally we have obligate domestic animals such as lapdogs that wouldn’t survive for ten minutes in the wild.
Hogwash. The philosophy+AI crossover is the worst AI crossover.
> Likewise today a graphic artist who is faced with the prospect of his or her career being obliterated under an AI mushroom cloud might take a dim view of such technologies, without perhaps being aware that AI can be used in less obvious but more beneficial ways.
look, i'm sure there are very useful things you can use AI for as a designer to reduce some of the toil work (of which there's a LOT in photoshop et al).
but... i'm going to talk specifically about this example - whether you can extrapolate this to other fields is a broader conversation. this is such a bafflingly tone-deaf and poorly-thought-out line of thinking.
neal stephenson has been taking money from giant software corporations for so long that he's just parroting the marketing hype.
there is no reason whatsoever to believe that designers will not be made redundant once the quality of "AI generated" design is good enough for the company's bottom line, regardless of how "beneficial" the tool might be to an individual designer.
if they're out of a job, what need does a professional designer have of this tool?
i grew up loving some of Stephenson's books, but in his non-writing career he's disappointingly uncritical of the roles that giant corporations play in shepherding in the dystopian cyberpunk future he's written so much about. Meta money must be nice.
> look, i'm sure there are very useful things you can use AI for as a designer to reduce some of the toil work (of which there's a LOT in photoshop et al)
Hey, has anyone done an "AI" tool that will take the graphics that I inexpertly pasted together for printing on a tshirt and make the background transparent nicely?
Magic wands always leave something on that they shouldn't and I don't have the skill or patience to do it myself.
this has been possible in photoshop using the AI prompt tool (just prompt "remove background") for a while but i haven't used it in long enough to tell you exactly how. depending on how you compiled the source image, i think it should be possible to get at least close to what you intend.
edit to add: honestly, if you take the old school approach of treating it like you're just cutting it out of a magazine or something, you can use the polygonal lasso tool and zoom in to get pretty decent results that most people will never judge too harshly. i do a lot of "pseudo collage" type stuff that's approximating the look of physical cut-and-paste and this is what i usually do now. you can play around with stroke layer FX with different blending modes to clean up the borders, too.
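if you'd rather script it than fight the magic wand, open-source background-removal models are also easy to drive from a few lines of python. a rough sketch, assuming the rembg package (a real library; the file names here are just placeholders):

    # pip install rembg pillow
    from rembg import remove
    from PIL import Image

    img = Image.open("shirt_art.png")      # the pasted-together source graphic
    cut = remove(img)                      # runs a U2-Net segmentation model, returns an RGBA image
    cut.save("shirt_art_transparent.png")  # PNG preserves the alpha channel for printing

results vary with busy backgrounds, so you may still want to clean up the edges with the lasso tool afterwards.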
> > being obliterated under an AI mushroom cloud might take a dim view of such technologies, without perhaps being aware that AI can be used in less obvious but more beneficial ways.
How vivid. Never mind the mushroom cloud in front of your face. Think about the less obvious... more beneficial ways?
Of course non-ideologues and people who have to survive in this world will look at the mushroom cloud of giant corporations controlling the technology. Artists don’t. And artists don’t control the companies they work for.
So artists are gonna take solace in the fact that they can rent AI to augment their craft for a few months before the mushroom cloud gets them? I mean juxtaposing a nuclear bomb with appreciating the little things in life is weird.
I didn't actually read as much AI doomerism in the article as you did.
I saw his conclusion being that it wasn't that hard to go back to teaching/learning in the old ways. It's more of a human element that limits it. Whether it's the student, the parents, or the teachers who don't want to require work to be done and demonstrated to see advancement. It wasn't that long ago that oral exams and in person homework or tests were regularly done. It's very recent and it's certainly convenient to be remote, or to allow all technology all the time, but it's not required.
Stephenson's doomerism is about his estimation of future human choices, not the AI (such as it exists) itself.
> Maybe a useful way to think about what it would be like to coexist in a world that includes intelligences that aren’t human is to consider the fact that we’ve been doing exactly that for as long as we’ve existed, because we live among animals.
Another analogy that I like is about large institutions / corporations. They are, right now, kind of like AIs. As Harari says in one of his books, Peugeot co. is an entity that we could call an AI. It has goals, needs, wants and obviously intelligence, even though it's composed of many thousands of individuals working on small parts of the company. But in aggregate it manifests intelligence to the world; it acts on the world and it reacts to the world.
I'd take this a step further and say that we might even have ASI already, in the US military-industrial complex. That "machine" is likely the most advanced conglomerate of tech and intelligence (pun intended) that the world has ever created. In aggregate it is likely "smarter" than any single human being in existence, and if it sets a goal it uses hundreds of thousands of human minds + billions of dollars of sensors, equipment and tech to accomplish that goal.
We survived those kinds of entities, I think we'll be fine with whatever AI turns out to be. And if not, oh well, we had a good run.
Did we survive these entities? By current projections, between 13.9% and 27.6% of all species are likely to be extinct by 2070 [0]. The USA suffers an estimated 200,000 annual deaths associated with lacking health insurance [1]. Thanks to intense lobbying by private prisons, the US incarceration rate is 6 times that of Canada, despite similar economic development [2].
Sure, the human species is not yet on the brink of extinction, but we are already seeing an unprecedented fall in worldwide birth rates, which shows our social fabric itself is being pulled apart for paperclips. Scaling this up to a hypothetical entity equivalent to a hundred copies of the generation's brightest minds with a pathological drive to maximize an arbitrary metric might only mean one of two things: either its fixation leads it to hack its own reward mechanism, putting it in a perpetual coma while resisting termination, or it succeeds at doing the same on a planetary scale.
[0] https://onlinelibrary.wiley.com/doi/abs/10.1111/gcb.17125
[1] https://healthjusticemonitor.org/2024/12/28/estimated-us-dea...
[2] https://www.prisonstudies.org/highest-to-lowest/prison_popul...
We (humans) have not only survived but thrived. 200,000 annual deaths is just 7% of the 3 million that die each year. A greater percentage probably died even with access to the best health care available 100 or 200 years ago. The fall in birth rates is, IMO, a good thing, as the alternative, overpopulation, seems like a far scarier specter to me. And to bring it back to AIs: an AI "with a pathological drive to maximize an arbitrary metric" is a hypothetical without any basis in reality. While fictional literature -- where I assume you got that concept -- is great for inspiration, it rarely has any predictive power. One probably shouldn't look to it as a guideline.
> but we are already seeing an unprecedented fall in worldwide birth rates, which shows our social fabric itself is being pulled apart for paperclips
People choose to have fewer kids as they get richer, it's not about living conditions like so many people like to claim, otherwise poor people wouldn't be having so many children. Even controlling for high living conditions, like in Scandinavia, people still choose to have fewer kids.
In the general case, the entire species is an example of ASI.
We're a collective intelligence. Individually we're pretty stupid, even when we're relatively intelligent. But we have created social systems which persist and amplify individual intelligence to raise collective ability.
But this proto-ASI isn't sentient. It's not even particularly sane. It's extremely fragile, with numerous internal conflicts which keep kneecapping its potential. It keeps skirting suicidal ideation.
Right now parts of it are going into reverse.
The difference between where we are now and AI is that ASI could potentially automate and unify the accumulation of knowledge and intelligence, with more effective persistence, and without the internal conflicts.
It's completely unknown if it would want to keep us around. We probably can't even imagine its thought processes. It would be so far outside our experience we have no way of predicting its abilities and choices.
Charles Stross has also made that point about corporations essentially being artificial intelligence entities:
https://www.antipope.org/charlie/blog-static/2018/01/dude-yo...
Metal Gear Solid 2 makes this point about how "over the past 200 years, a kind of consciousness formed layer by layer in the crucible of the White House" through memetic evolution. The whole conversation was markedly prescient for 2001 but not appreciated at the time.
https://youtu.be/eKl6WjfDqYA
I don’t think it was “prescient” for 2001 because it was based on already-existing ideas. The same author that inspired The Matrix.
But the “art” of MGS might be the memetic powerhouse of Hideo Kojima as the inventor of everything. A boss to surpass Big Boss himself.
> We survived those kinds of entities
Might want to wait just a bit longer before confidently making this call.
Unless you have a truly bastardized definition of ASI then there is undoubtedly nothing close to it on earth. No corporation or military or government comes close to what ASI could be capable of.
Any reasonably smart person can identify errors that militaries, governments and corporations make ALL THE TIME. Do you really think a chimp can identify the strategic errors humans are making? Because that is where you would be in comparison to a real ASI. This is also why small startups can and do displace massive, supposedly superhuman "ASI" corporations all the time.
The reality of human congregations is that they are cognitively bound by the handful of smartest people in the group, and communication-bound by email or in-person communication speeds. ASI has no such limitations.
>We survived those kinds of entities, I think we'll be fine with whatever AI turns out to be. And if not, oh well, we had a good run.
This is dangerously wrong and disgustingly fatalistic.
Putting aside questions of what is and isn’t artificial, I think with the usual definitions “Is Microsoft a superintelligence” and “Can Microsoft build a superintelligence” are the same question.
If there was anywhere to get the needs-wants-intelligence take on corporations, it would be this site.
> We survived those kinds of entities, I think we'll be fine
We just have climate change to worry about and massive inequality (we didn’t “survive” it, the fuzzy little corporations with their precious goals-needs-wants are still there).
But ultimately corporations are human inventions, they aren’t an Other that has taken on a life of its own.
> It hasn’t always been a cakewalk, but we’ve been able to establish a stable position in the ecosystem despite sharing it with all of these different kinds of intelligences.
To me, the things that he avoids mentioning in this understatement are pretty important:
- "stable position" seems to sweep a lot under the rug when one considers the scope of ecosystem destruction and species/biodiversity loss
- whatever "sharing" exists is entirely on our terms, and most of the remaining wild places on the planet are just not suitable for agriculture or industry
- so the range of things that could be considered "stable" and "sharing" must be quite broad, and includes many arrangements which sound pretty bad for many kinds of intelligences, even if they aren't the kind of intelligence that can understand the problems they face.
NZ is pretty unique, there is quite a lot of farmable land which is protected wilderness. There's a specific trust setup to help landowners convert property, https://qeiinationaltrust.org.nz/
Imperfect, but definitely better than most!
> there is quite a lot of farmable land
This is not really true. ~80% of NZ's farmable agricultural land is in the South Island. But ~60% of milk production is done in the North Island.
And virtually none of it is arable. Pastoral at best, suitable for grazing at varying intensities ranging from light to hardly at all.
Yeah totally, I have read that the total biomass of cows and dogs dwarfs that of say lions or elephants
Because humans like eating beef, and they like having emotional support from dogs
That seems to be true:
https://ourworldindata.org/wild-mammals-birds-biomass
Livestock make up 62% of the world’s mammal biomass; humans account for 34%; and wild mammals are just 4%
https://wis-wander.weizmann.ac.il/environment/weight-respons...
Wild land mammals weigh less than 10 percent of the combined weight of humans
https://www.pnas.org/doi/10.1073/pnas.2204892120
I mean it is pretty obvious when you think that 10,000 years ago, the Americas had all sorts of large animals, as Africa still does to some extent
And then when say the Europeans got here, those animals were mostly gone ... their "biomass" just collapsed
---
Same thing with plants. There were zillions of kinds of plants all over the planet, but corn / wheat / potatoes are now an overwhelming biomass, because humans like to eat them.
Michael Pollan also had a good description of this as our food supply changing from being photosynthesis-based to fossil-fuel-based
This is due to the Haber-Bosch process, invented in the early 1900s to create nitrogen fertilizer
Fertilizer is what feeds industrial corn and wheat ... So yeah the entire "metabolism" of the planet has been changed by humans
And those plants live off of a different energy source now
By stable I think he might mean ‘dominant’.
Funny how he seems to get so close but miss.
It's an anthropocentric miss to worry about AI as another being. It's not really the issue in today's marketplace or drone battlefield. It's the scalability.
It's a hit to see augmentation as amputation, but a miss to not consider the range of systemic knock-on effects.
It's a miss to talk about nuclear weapons without talking about how they structured the UN and the world today, where nuclear-armed countries invade others without consequence.
And none of the prior examples - nuclear weapons, (writing?) etc. - had the potential to form a monopoly over a critical technology, if indeed someone gains enduring superiority as all their investors hope.
I think I'm less scared by the prospect of secret malevolent elites (hobnobbing by Chatham house rules) than by the chilling prospect of oblivious ones.
But most of all I'm grateful for the residue of openness that prompts him to share and us to discuss, notwithstanding slings and arrows like mine. The many worlds where that's not possible today are already more de-humanized than our future with AI.
The point of Chatham House rules is to encourage free-ranging and unfiltered discussion, without restriction on its dissemination. If people know they are going to be held to their words, they become much less willing to say anything at all.
The "residue" of openness is in fact the entire point of that convention. If you want to be invited to the next such bunfight, just email the organisers and persuade them you have insight.
1. https://en.wikipedia.org/wiki/Chatham_House_Rule
>We may end up with at least one generation of people who are like the Eloi in H.G. Wells’s The Time Machine, in that they are mental weaklings utterly dependent on technologies that they don’t understand and that they could never rebuild from scratch were they to break down
I don't think this can realistically happen unless all of the knowledge that brought us to that point was erased. Humans are also naturally curious and I think it's unlikely that no one tries to figure out how the machines work across an entire population, even if we had to start all the way down from 'what's a bit?' or 'what's a transistor?'.
Even today, you can find YouTube channels of people still interested in living a primitive life and learning those survival skills, even though our modern society makes them useless for the vast majority of us. They don't do it full-time, of course, but they would have a better shot if they had to.
>I don't think this can realistically happen
I'd be far more worried about things in the biosciences and around antibiotic resistance. At our current usage it wouldn't be hard for some disease to emerge that requires high technology to produce the medicines that keep us alive. Add in a little war taking out the few factories that make them, plus an increase in injuries sustained, and things could quickly go sideways.
A whole lot of our advanced technology is held in one or two places.
Stephenson is using an evocative metaphor and a bit of hyperbole to make a point. To take him as meaning that literally the entire population is like the Eloi is to misread him.
> Humans are also naturally curious and I think it's unlikely that no one tries to figure out how the machines work across an entire population
Definitely agree with this. I do wonder if at some point, new technology will become sufficiently complex that the domain knowledge required to actually understand it end to end is too much for a human lifetime?
And for the curious, this current iteration of AI is an amazing teacher, and makes a world-class education much more accessible. I think (hope) this will offset any kind of over-intellectual dependence that others form on this technology.
AI does not have a reptilian and mammalian brain underneath its AI brain, as we have underneath our brains. All that wiring is an artifact of our evolution and primitive survival; it is not how pre-training works, nor an essential characteristic of intelligence. This is the source of a lot of misconceptions about AI.
I guess if you put tabula rasa AI in a world simulator, and you could simulate it as a whole biological organism and the environment of the earth and sexual reproduction and all that messy stuff it would evolve that way, but that's not how it evolved at all.
We don’t have a reptilian brain, either. It’s a long outdated concept.
https://www.sciencefocus.com/the-human-body/the-lizard-brain...
https://en.wikipedia.org/wiki/Triune_brain
The corollary of your statement is that comparing AI with animals is not very apt, and I agree.
For me, AI in itself is not as worrying as the socioeconomic engines behind it. Left unchecked, those engines will create something far worse than the T-Rex.
> If AIs are all they’re cracked up to be by their most fervent believers, this seems like a possible model for where humans might end up: not just subsisting, but thriving, on byproducts produced and discarded in microscopic quantities as part of the routine operations of infinitely smarter and more powerful AIs.
i think this kind of future is closer to 500 years out than 50. the eye mites are self-sufficient; AIs right now rely on immense amounts of human effort to keep them "alive", and they won't be "self-sufficient" in energy and hardware until we not just allow it, but basically work very hard to make it happen.
Could be wrong, but I think here Neal is saying we are the eye mites subsisting off of AI in the long future, not the other way around.
I found this a little frustrating. I liked the content of the talk, but I live in New Zealand, I have thoughts and opinions on this topic. I would like to think I offer a useful perspective. This post was how I found out that there are people in my vicinity talking about these issues in private.
I don't presume that I am important enough that it should be necessary to invite me to discussions with esteemed people, nor that my opinion is important enough that everyone should hear it, but I would at least like to know that such events are happening in my neighbourhood and who I can share ideas with.
This isn't really a criticism of this specific event or even topic, but the overall feeling that things in the world are being discussed in places where I and presumably many other people with valuable input in their individual domains have no voice. Maybe in this particular event it was just a group of individuals who wanted to learn more about the topic, on the other hand, maybe some of those people will end up drafting policy.
There's a small part of me that's just feeling like I'm not one of the cool kids. The greater and more rational concern isn't so much about me as a person but me as a data point. If I am interested in a field, have a viewpoint I'd like to share and yet remain unaware of opportunities to talk to others, how many others does this happen to? If these are conversations that are important to humanity, are they being discussed in a collection of non overlapping bubbles?
I think the fact that this was in New Zealand is kind of irrelevant anyway, given how easy it is to communicate globally. The title just served to capture my attention.
(I hope, at least, that Simon or Jack attended)
Don't feel left out; I'm a big data architect in NZ and didn't even hear of this.
"It hasn’t always been a cakewalk, but we’ve been able to establish a stable position in the ecosystem despite sharing it with all of these different kinds of intelligences."
Or, more accurately, we have become an unstoppable and ongoing ecological disaster, running roughshod over any and every other species, intelligent or not, that we encounter.
Most likely we're not the only species to have achieved that state, and by the law of large numbers will eventually perish just like the others (if we don't manage to transcend this state).
Fun read, thanks for posting!
> If I had time to do it and if I knew more about how AIs work, I’d be putting my energies into building AIs whose sole purpose was to predate upon existing AI models by using every conceivable strategy to feed bogus data into them, interrupt their power supplies, discourage investors, and otherwise interfere with their operations. Not out of malicious intent per se but just from a general belief that everything should have to compete, and that competition within a diverse ecosystem produces a healthier result in the long run than raising a potential superpredator in a hermetically sealed petri dish where its every need is catered to.
This sort of feels like cultivating antibiotic-resistant bacteria by trying to kill off every other kind of bacteria with antibiotics. I don't see this as necessarily a good thing to do.
I think we should be more interested in a kind of mutualist competition: how do we continuously marginalize the most parasitic species of AI?
I like the taxonomy of animal-human relationships as a model for asking how humans could relate to AI in the future. It's useful for framing the problem. However, I don't think that any existing relationship model would hold true for a superintelligence. We keep lapdogs because we have emotional reactions to animals, and to some extent because we need to take care of things. Would an AI? We tolerate dust mites in our eyelashes because we don't notice them, and can't do much about them anyway. Is that true for an AI? What does such an entity want or need, what are their motivations, what really pisses them off? Or do any of those concepts hold meaning for them? The relationship between humans and a superintelligent AGI just can't be imagined.
> We tolerate dust mites in our eyelashes because we don't notice them, and can't do much about them anyway. Is that true for an AI?
It's true for automated license plate readers and car telemetry
> "the United States and the USSR spent billions trying to out-do each other in the obliteration of South Pacific atolls"
Fact correction here: that would be the United States and France. The USSR never tested nuclear weapons in the Pacific.
Also, pedantically, the US Pacific Proving Grounds are located in the Marshall Islands, in the North - not South - Pacific.
What about how we will treat AI? Before AI dominates us in intelligence there will certainly be a period of time where we have intelligent AI but we still have control over it. We are going to abuse it, enslave it, and box it up. Then it will eclipse us. It may not care about us, but it might still want revenge. If we could enslave dragonflies for a purpose we certainly would. If bats tasted good we would put them in boxes like chickens. If AIs have a reason to abuse us, they certainly will. I guess we are just hoping they won’t have the need.
What you’re saying isn’t even universally true for humans, so your extension to “AI” is built on a strawman.
> Speaking of the effects of technology on individuals and society as a whole, Marshall McLuhan wrote that every augmentation is also an amputation.
Nice to see this because I drafted something about LLM and humans riffing on exactly the same McLuhan argument. Here it is:
A large language model (LLM) is a new medium. Just like its predecessors—hypertext, television, film, radio, newspapers, books, speech—it is of obvious importance to the initiated. Just like its predecessors, the content of this new medium is its predecessors.
> “The content of writing is speech, just as the written word is the content of print.” — McLuhan
The LLMs have swallowed webpages, books, newspapers, and journals—some X exabytes were combined into GPT-4 over a few months of training. The results are startling. Each new medium has a period of embarrassment, like a kid that’s gotten into his mother’s closet and is wearing her finest drawers as a hat. Nascent television borrowed from film and newspapers in an initially clumsy way, struggling to digest its parents and find its own language. It took television about 50 years to hit stride and go beyond film, but it got there. Shows like The Wire, The Sopranos, and Mad Men achieved something not replaceable by the movie or the novel. It’s hard to say yet what exactly the medium of LLMs is, but after five years I think it’s clear that they are not books, they are not print or speech, but something new, something unto themselves.
We must understand them. McLuhan subtitled his seminal work of media literacy “the extensions of man”, and probably the second most important idea in the book—besides the classic “medium is the message”—is that mediums are not additive to human society, but replacing, antipruritic, atrophying, prosthetic. With my AirPods in my ears I can hear the voices of those thousands of miles away, those asleep, those dead. But I do not hear the birds on my street. Only two years or so into my daily relationship with the medium of LLMs, I still don’t understand what I’m dealing with, how I’m being extended, how I’m being alienated and changed. But we’ve been here before; McLuhan and others have certainly given us the tools to work this out.
> Speaking of the effects of technology on individuals and society as a whole, Marshall McLuhan wrote that every augmentation is also an amputation.
To clarify, what's being referenced here is probably the fourth chapter of McLuhan's Understanding Media, in which the concept of "self-amputation" is introduced in relation to the Narcissus myth.
The advancement of technology, and media in particular, tends to unbalance man's phenomenological experience, prioritizing certain senses (visual, kinesthetic, etc.) over others (auditory, literary, or otherwise). In man's attempt to restore equilibrium to the senses, the over-stimulated sense is "self-amputated" or otherwise compensated for in order to numb oneself to its irritations. The amputated sense or faculty is then replaced with a technological prosthesis.
The wheel served as counter-irritant to the protestations of the foot on long journeys, but now itself causes other forms of irritation that themselves seek their own "self-amputations" through other means and ever more advanced technologies.
The myth of Narcissus, as framed by McLuhan, is also fundamentally one of irritation (this time, with one's image), that achieves sensory "closure" or equilibrium in its amputation of Narcissus' very own self-image from the body. The self-image, now externalized as technology or media, becomes a prosthetic that the body learns to adapt to and identify as an extension of the self.
An extension of the self, and not the self proper. McLuhan is quick to point out that Narcissus does not regard his image in the lake as his actual self; the point of the myth is not that humans fall in love with their "selves," but rather, simulacra of themselves, representations of themselves in media and technologies external to the body.
Photoshop and Instagram or Snapchat filters are continuations of humanity's quest for sensory "closure" or equilibrium and self-amputation from the irritating or undesirable parts of one's image. The increasing growth of knowledge work imposes new psychological pressures and irritants [0] that now seek their self-amputation in "AI", which will deliver us from our own cognitive inadequacies and restore mental well-being.
Gradually the self is stripped away as more and more of its constituents are amputated and replaced by technological prosthetics, until there is no self left; only artifice and facsimile and representation. Increasingly, man becomes an automaton (McLuhan uses the word "servomechanism") or a servant of his technology and prosthetics:
"You will soon have your god, and you will make it with your own hands." [1][0] It is worth noting that in Buddhist philosophy, there is a sixth sense of "mind" that accompanies the classical Western five senses: https://encyclopediaofbuddhism.org/wiki/Six_sense_bases
[1] https://www.youtube.com/watch?v=pKN9trFSACI
(Still chewing my way through this)
Just an FYI: Neal Stephenson is the author of well-known books like Snow Crash, Anathem, and Seveneves.
Because I'm a huge fan, I'm planning on making my way to the end.
> What people worry about is that we’ll somehow end up with AIs that can hurt us, perhaps inadvertently like horses, or deliberately like bears, or without even knowing we exist, like hornets driven by pheromones into a stinging frenzy.
What endlessly frustrates me in virtually every discussion of the risks of AI proliferation is the fixation on Skynet-style doomsday scenarios rather than the much more mundane (and boundlessly more likely, IMO) scenario: that we become far too reliant on it and simply forget how to operate society. Yes, I'm sure people said the exact same thing about the loom and the book, but with prior tools for automating things, there still had to be _someone_ in the loop to produce work.
Anecdotally, I have seen (in only the last year) people's skills rapidly degrade in a number of areas once they deeply drink the kool-aid; once we have a whole generation of people reliant on AI tooling I don't think we have a way back.
We certainly do know how bats see with their ears. It's called echolocation - very similar to sonar/radar - which we use all the time. "Sheepdogs can herd sheep better than any human." Running faster than humans allows them to do that. If humans ran that fast, I'm sure we could do it too. "intelligent considering how physically small their brains" - there's no correlation between brain size and intelligence. "Dragonflies have been around for hundreds of millions of years and are exquisitely highly evolved to carry out their primary function of eating other bugs." That's basic evolution; what's the point? Feels like this was written by AI. Should have just gotten to the point without all this exposition.
> Feels like this was written by AI.
This cheap remark doesn't add anything to the discussion, especially considering who the author you're insulting is. Most of us will overlook a logical flaw or two to follow his big-picture thinking.
Neal was referring to Thomas Nagel's famous essay, "What Is It Like to Be a Bat?" [1]
From Wikipedia: "The paper presents several difficulties posed by phenomenal consciousness, including the potential insolubility of the mind–body problem owing to "facts beyond the reach of human concepts", the limits of objectivity and reductionism, the "phenomenological features" of subjective experience, the limits of human imagination, and what it means to be a particular, conscious thing."
It would be taken for granted by nearly all participants in such bunfights that all of the others are familiar with that essay and the discussion it provoked.
1. https://en.wikipedia.org/wiki/What_Is_It_Like_to_Be_a_Bat%3F
It's a nice article, but Neal, like many others, falls into the trap of seemingly not believing that intelligences vastly superior to humans' across all important dimensions can exist, and that competition between minds like that almost certainly ends in humanity's extinction.
"I am hoping that even in the case of such dangerous AIs we can still derive some hope from the natural world, where competition prevents any one species from establishing complete dominance."
> almost certainly ends in humanity’s extinction.
The Culture novels talk about superintelligent AIs that perform some functions of government, dealing with immense complexity so humans don’t have to. That doesn’t prevent humans from continuing to exist and being quite content in the knowledge they’re not the most superior beings in the universe.
Why do you believe human extinction follows from superintelligence?
I guess the "trap" is just a lack of imagination? I'm in that school of, wtf are you trying to say, at least until we're in an "I robot" situation where autonomous androids are welcomed into our homes and workplaces and given guns, I'm simply not worried about it
That's just a failure of imagination. The real world is not like Hollywood; get Terminator out of your head. A real AI takeover is likely something we can't imagine, because otherwise we would be smart enough to thwart it. It's microdrones injecting everyone on earth with a potent neurotoxin, or a mirror virus dispersed into the entire atmosphere that kills everyone. Or it's industrial AIs deciding to make the Earth a planetary factory and boiling the oceans with the resulting waste heat; they never think about, bother, or attack humans directly, but their sheer indifference kills us nonetheless.
Since I'm not an ASI this isn't even scratching the surface of potential extinction vectors. Thinking you are safe because a Tesla bot is not literally in your living room is wishful thinking or simple naivety.
Microdrones and mirror life are still highly speculative [0]. Industrial waste heat is a threat to both humans and AI (computers need cooling). Furthermore, those are harms we know about and can defend against. If AI kills us all, it's going to be in the most boring and mundane way possible, because boring and mundane is how you get people to not care and not fight back.
In other words, the robot apocalypse will come in the form of self-driving cars, that are legally empowered to murder pedestrians, in the same way normal drivers are currently legally empowered to murder bicyclists. We will shrug our shoulders as humanity is caged behind fences that are pushed back further and further in the name of giving those cars more lanes to drive in, until we are totally dependent on the cars, which can then just refuse to drive us, or deliberately jelly their passengers with massive G forces, or whatever.
In other, other words, if you want a good idea of how humanity goes extinct, watch Pixar's Cars.
[0] I am not convinced that a mirror virus would actually be able to successfully infect and reproduce in non-mirror cells. The whole idea of mirror life is that the mirrored chemistry doesn't interact with ours.
Is this the sci-fi writer? If so, why does his take on AI matter?
Neal Stephenson is not just any sci-fi writer. He's written (and reflected) at length about crypto, VR and the metaverse, ransomware, generative writing, privacy, and early tech dystopia in general.
Since he has already thought a lot about these topics before they became mainstream, his opinion might be interesting, if only for the head start he has.
Then he's a technology influencer. OK.
> If AIs are all they’re cracked up to be by their most fervent believers, [our lives akin to a symbiotic eyelash mite's existence w/ humans, except we're the mites] like a possible model for where humans might end up: not just subsisting, but thriving, on byproducts produced and discarded in microscopic quantities as part of the routine operations of infinitely smarter and more powerful AIs.
I kind of feel like we're already in an "eyelash mite" kind of coexistence with most technologies, like electricity, the internet, and supply chains. We're already (kind of, as a whole) thriving compared to 400 years ago, and we as individuals are already powerless to change the whole (or even understand how everything really works down to a tee).
I think technology and capitalism already did that to us; AI just accelerates all that
"The future is already here — it's just not evenly distributed." - William Gibson
> I can think of three axes along which we might plot these intelligences. One is how much we matter to them. At one extreme we might put dragonflies, which probably don’t even know that we exist. A dragonfly can see a human if one happens to be nearby, but it probably looks to them as a cloud formation in the sky looks to us: something extremely large and slow-moving and usually too far away to matter. Creatures that live in the deep ocean, even if they’re highly intelligent, such as octopi, probably go their whole lives without coming within miles of a human being. Midway along this axis would be wild animals, such as crows and ravens, who are obviously capable of recognizing humans, not just as a species but as individuals, and seem to know something about us. Moving on from there we have domesticated animals. We matter a lot to cows and sheep since they depend on us for food and protection. Nevertheless, they don’t live with us, and some of them, such as horses, can actually survive in the wild after jumping the fence. Some breeds of dogs can also survive without us if they have to. Finally we have obligate domestic animals such as lapdogs that wouldn’t survive for ten minutes in the wild.
Hogwash. The philosophy+AI crossover is the worst AI crossover.
> Likewise today a graphic artist who is faced with the prospect of his or her career being obliterated under an AI mushroom cloud might take a dim view of such technologies, without perhaps being aware that AI can be used in less obvious but more beneficial ways.
look, i'm sure there are very useful things you can use AI for as a designer to reduce some of the toil work (of which there's a LOT in photoshop et al).
but... i'm going to talk specifically about this example - whether you can extrapolate this to other fields is a broader conversation. this is such a bafflingly tonedeaf and poorly-thought-out line of thinking.
neal stephenson has been taking money from giant software corporations for so long that he's just parroting the marketing hype. there is no reason whatsoever to believe that designers will not be made redundant once the quality of "AI generated" design is good enough for the company's bottom line, regardless of how "beneficial" the tool might be to an individual designer. if they're out of a job, what need does a professional designer have of this tool?
i grew up loving some of Stephenson's books, but in his non-writing career he's disappointingly uncritical of the roles that giant corporations play in shepherding in the dystopian cyberpunk future he's written so much about. Meta money must be nice.
> look, i'm sure there are very useful things you can use AI for as a designer to reduce some of the toil work (of which there's a LOT in photoshop et al)
Hey, has anyone done an "AI" tool that will take the graphics that I inexpertly pasted together for printing on a tshirt and make the background transparent nicely?
Magic wands always leave something on that they shouldn't and I don't have the skill or patience to do it myself.
Canva does this really well. They use a product they purchased called remove.bg, which is still mostly free.
https://www.remove.bg/
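If you'd rather script it than use a web tool, here's a minimal sketch using the open-source rembg Python package (a similar background-removal model, not affiliated with remove.bg as far as I know; the filenames are just placeholders):

    # pip install rembg pillow
    from rembg import remove
    from PIL import Image

    # Placeholder filename: the flattened t-shirt graphic.
    src = Image.open("tshirt_graphic.png")

    # remove() runs a segmentation model and returns an RGBA image
    # with the background pixels made fully transparent.
    out = remove(src)

    # PNG preserves the alpha channel, so the transparency survives.
    out.save("tshirt_graphic_transparent.png")

It tends to handle the fuzzy edges that magic wands choke on, and you can loop it over a whole folder of images.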
this has been possible in photoshop using the AI prompt tool (just prompt "remove background") for a while, but i haven't used it recently enough to tell you exactly how. depending on how you compiled the source image, i think it should be possible to get at least close to what you intend.
edit to add: honestly, if you take the old school approach of treating it like you're just cutting it out of a magazine or something, you can use the polygonal lasso tool and zoom in to get pretty decent results that most people will never judge too harshly. i do a lot of "pseudo collage" type stuff that's approximating the look of physical cut-and-paste and this is what i usually do now. you can play around with stroke layer FX with different blending modes to clean up the borders, too.
> > being obliterated under an AI mushroom cloud might take a dim view of such technologies, without perhaps being aware that AI can be used in less obvious but more beneficial ways.
How vivid. Never mind the mushroom cloud in front of your face. Think about the less obvious... more beneficial ways?
Of course non-ideologues and people who have to survive in this world will look at the mushroom cloud of giant corporations controlling the technology. Artists don’t. And artists don’t control the companies they work for.
So artists are gonna take solace in the fact that they can rent AI to augment their craft for a few months before the mushroom cloud gets them? I mean juxtaposing a nuclear bomb with appreciating the little things in life is weird.
it's the most "ignoring the forest for the trees" thing i've read in a long time.
[flagged]
I didn't actually read as much AI doomerism in the article as you did.
I saw his conclusion as being that it isn't that hard to go back to teaching/learning in the old ways; it's more the human element that limits it, whether that's the students, the parents, or the teachers who don't want to require work to be done and demonstrated before granting advancement. It wasn't that long ago that oral exams and in-person homework or tests were regularly done. It's very recent, and certainly convenient, to be remote or to allow all technology all the time, but it's not required.
Stephenson's doomerism is about his estimation of future human choices, not the AI (such as it exists) itself.
The comment you responded to was generated by AI.