Out of curiosity some years back, I checked out the virtual world “Second Life.” I signed up, and with my first login I descended into a tropical landscape populated with ridiculously buffed hipster avatars. In comparison, my avatar looked a bit bland. I wanted to stand out, so after figuring out the key commands for moving around I teleported my avatar to Second Life’s virtual shop to check out the available costumes. I selected a green tyrannosaurus suit with a rocket jetpack. Dorky but free.
At one point in my non-adventures putzing around in this multiplayer platform, I had a serious wardrobe malfunction. Somehow, a leg came off my dinosaur suit. As I flew around the Second Life island belching smoke from my jetpack, my avatar’s “real” leg dangled visibly from the dinosaur suit (that’s what you get for free). Whatever; I just figured it made me look even more entertainingly fucked up. A few days later, I received this email message:
A message from Second Life:
Your object ‘Object’ has been returned to your inventory lost and found folder by Rayne Keynes from parcel ‘Keynes Cove’ at Wingo 52.5, 207 due to parcel owner return.
Some member - or perhaps an automated script - had come across my costume leg and returned it for my retrieval. Intrigued, I logged in, retrieved my ‘object’ and stuck it back on.
Second Life was a big thing for a while in some circles. A mutual friend told me that my then-newspaper editor was not only a member, but had built a digital home there. In other words, he had constructed a house out of bytes to hang out in.
“Has Brian invited you over?” she asked. “No,” I responded, “Brian has never had me over to his real home, why would I be interested in visiting his unreal one?” Even with an invite, I couldn’t imagine my T-Rex avatar sitting across from whatever my editor looked like in his Jetsons-like abode, while tapping out pleasantries on a real-world keyboard. In any case, Second Life failed to hold my interest for long; it felt like Burning Man for shut-ins.
In the time since I dropped my digital dinosaur leg, online gaming and multiplayer platforms have seen enormous strides in computing power, with astoundingly immersive worlds absorbing the obsessive attention of millions.
The rise of “deepfakes” in photography, audio and video promises to merge reality with virtuality in disturbing new ways. Tesla and SpaceX CEO Elon Musk expects us to fall into an ever more sedentary relationship with digital media, echoing critical theorist Arthur Kroker’s line that American culture is a “civilization in recline.” Musk insists that if the rendering quality of games improves by only a few percent every year, they will at some point, with mathematical certainty, become indistinguishable from reality. And once that happens, all bets are off for human attention and interaction.
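Musk’s claim is really an observation about compounding. As a toy illustration — not his actual math, with a hypothetical 5% annual gain chosen purely for the example — even a modest yearly improvement crosses any fixed quality threshold in a finite, calculable number of years:

```python
# Toy compounding sketch (assumed numbers, not Musk's): a fixed
# few-percent annual gain in rendering quality eventually crosses
# any threshold, however high.
def years_to_reach(target_ratio: float, annual_gain: float = 0.05) -> int:
    """Years until quality has grown by target_ratio at a fixed annual gain."""
    quality, years = 1.0, 0
    while quality < target_ratio:
        quality *= 1 + annual_gain
        years += 1
    return years

print(years_to_reach(10))    # 48 -> a 10x improvement takes ~48 years at 5%/yr
print(years_to_reach(1000))  # 142 -> even 1000x arrives within ~142 years
```

The exact figures don’t matter; the point is that any steady percentage gain, however small, makes the crossing a matter of “when,” not “if.”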
AIs are already able to compete with photographers in rendering photorealistic images indistinguishable from real camera shots. Only a short time ago, many commentators would have deemed this capability some years away, if not impossible. How long before full-length feature films fall within the domain of Artificial Intelligence?
THE NOOSPHERE IS THE NOWSPHERE
In 2014, MIT physicist Max Tegmark coauthored a paper with Cambridge cosmologist Stephen Hawking and several other colleagues on the problematic aspects of AI. The paper was rejected by The New York Times and other media outlets before The Huffington Post accepted it. The publication set off a wave of commentary and media pieces on the perils of AI.
Tegmark was not enthusiastic about the apocalyptic clickbait that followed. Yet he recognized the need for a sober discussion on the ethics and logistics of containing and corralling increasingly sophisticated AI systems. In 2015 he and his colleagues organized a free conference in Puerto Rico and gave it “the most non-alarmist title” they could come up with: “The Future of AI: Opportunities and Challenges.” Media were banned from the event.
Some of the invited thinkers were optimistic about the social effects of advanced AI systems and some were not. “I have exposure to the very cutting-edge AI, and I think people should really be concerned about it,” said attendee Elon Musk a few years later. In passing, the Tesla and SpaceX CEO semi-jokingly referred to it as “summoning the demon,” a line that got much circulation in the press.
Toward the end of his life, Stephen Hawking observed that “success in creating effective AI could be the biggest event in the history of our civilization.” However, “unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization,” he warned. Like meeting hostile aliens we had invented ourselves.
A future AI threat is typically positioned as involving robots with consciousness. This simplistic scenario grabs headlines, but the main concern of actual AI researchers “isn’t with robots but with intelligence itself: specifically intelligence whose goals are misaligned with ours,” observed Tegmark in his 2017 book, Life 3.0.
“To cause us trouble, such misaligned intelligence needs no robotic body, merely an Internet connection,” he adds.
Back in the 1920s, the French philosopher and Jesuit priest Teilhard de Chardin prophesied the “Noosphere”: a future planetary membrane of collective consciousness. The Earth awakening, essentially. Hippie intellectuals enthusiastically resurrected the idea in the seventies, and two decades later, with Tim Berners-Lee’s World Wide Web coming online, something Noosphere-like began to take form.
The Internet is not any one thing, but its many pieces and parts are becoming increasingly integrated in form and function, with billions of artificial eyes and ears looking out into the world, in the form of smart phones, webcams, and microphones.
The concern among some AI researchers is what happens when some advanced, autonomous Artificial Intelligence escapes from the lab and onto the electronic networks that buttress the Internet. Yet in a sense, this has already happened. One reason social media sites have become so fractious is that Silicon Valley engineers have optimized the algorithms for outrage - because this nets the most “engagement” from users. The algorithms perform all on their own, helping push users into political red/blue silos while multiplying online fiefdoms of grievance.
In the Netflix documentary The Social Dilemma, a former Google “design ethicist” by the name of Tristan Harris uses a lab analogy: “We’re pointing these engines of AI back at ourselves to reverse-engineer what elicits responses from us. Almost like you’re stimulating nerve cells on a spider to see what causes its legs to respond.”
AIs aren’t just incredibly sophisticated tools, they are self-improving tools, notes Harris. Other tools, like televisions and automobiles, may improve linearly year by year, but only AI re-engineers its own code to improve exponentially.
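Harris’s contrast between tools and self-improving systems is, at bottom, the difference between adding a fixed increment each year and compounding on current capability. A minimal sketch, with invented numbers chosen only to show the shape of the curves:

```python
# Hypothetical illustration of Harris's contrast: a conventional tool
# gains a fixed increment per year, while a self-improving system
# gains a fixed fraction of whatever capability it already has.
def linear_improvement(start: float, step: float, years: int) -> float:
    """Capability of a tool that gains a fixed step each year."""
    return start + step * years

def exponential_improvement(start: float, rate: float, years: int) -> float:
    """Capability of a system that compounds at a fixed rate each year."""
    return start * (1 + rate) ** years

# Over 50 years, 10% compounding dwarfs a +1-per-year linear gain.
print(linear_improvement(1.0, 1.0, 50))        # 51.0
print(exponential_improvement(1.0, 0.10, 50))  # ~117.4
```

The linear tool ends up fifty-one times better; the compounding system, over a hundred — and the gap itself keeps widening every year after that.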
There’s a sobering moment in The Social Dilemma when Harris stands before a screen displaying a chart with a line curving upward. A point on the line marks the moment in the future when AI overtakes all human strengths. Yet there is a second point lower down, marking when AI overwhelms human weaknesses - that is, the capacity of users to actively or passively resist the engagement algorithms of social media. Harris insists that AI has already passed this point.
He’s probably right. On one side of a smart phone is a hairless primate with a damp, plodding brain that evolved at an Ice Age pace. On the other side are hectares of server farms with algorithms moving at near-light speeds, programmed to keep the user attached to his or her feed. We call it the “Cloud,” but it’s really the earthbound sandbox of a youthful and growing AI.
For some tech observers, notably the community of Silicon Valley “accelerationists” and “transhumanists,” it’s all good news. They believe computing technology, along with advances in genetic engineering and nanotechnology, will reach a stage where humanity and machines merge into new evolutionary entities. “Wearable technology” is only the first phase of the cybernetization of the human body-mind.
In this view, our phone-clutching habits will shade into closer relationships with technology, with our natural abilities augmented by microchipping and biotech upgrades. Yet even if things don’t go disastrously for our species long-term in this mooted merging, in the short term we can still expect a fair bit of what economists cheerfully call the “creative destruction” of capitalism.
“AI failures and premeditated malevolent AI incidents will increase in frequency and severity proportionate to AIs’ capability,” predicts computing professor Roman Yampolskiy. Which sounds a lot like that cosmic imperative we call Murphy’s Law.
WE’LL MAKE GREAT PETS
In December 1960, while space exploration was still in its infancy, the Brookings Institution released a NASA-commissioned study entitled Proposed Studies on the Implications of Peaceful Space Activities for Human Affairs. A final section addressed the possibility of a human encounter with a superior extraterrestrial civilization. The Brookings conclusion, drawing from historical examples of western civilization’s contacts with indigenous peoples, was not encouraging:
“Anthropological files contain many examples of societies, sure of their place in the universe, which have disintegrated when they have had to associate with previously unfamiliar societies espousing different ideas and different life ways; others that survived such an experience usually did so by paying the price of changes in values and attitudes and behaviour.”
There’s Tegmark’s “misaligned goals” again, in predigital form. Will the aliens humans meet one day be of our own making? If so, let’s hope their goals and ours are in acceptable alignment.