[ Envisioning possibilities ]

VE 2018

COVID-19 UPDATE

Dearest Future Frontiers community: Due to the COVID-19 Pandemic, Future Frontiers is cancelling the 2020 conference. We currently do not have a plan for a 2021 event. Please stay connected by joining the email list and following us on Facebook and Instagram.

GET BLOOMS PASSES (FOR SATURDAY NIGHT ACCESS ONLY)Register Now
Showcase Request

Do you have a tech demo, art installation or performance that you believe represents a flourishing world? Send us some info!

Fields marked with an * are required

Pay in Full

Click here if you wish to pay for your Future Frontiers 2020 ticket for full price today!

Register now

Pay in Full

Click here if you wish to pay for your Future Frontiers 2020 ticket for full price today!

Register now
Join the Future Frontiers Community

And receive all updates for Future Frontiers 2020 (including when tickets are going on sale)!

Get Your Free 2020 Guide to the Frontiers

Join the Future Frontiers Community & Receive Your FREE Copy of the Guide to the Frontiers!

Contact Us

Do you have questions? Interested in renting an expo booth? Or do you just want to say hi? Leave us a message and we will get back to you as soon as possible.

Fields marked with an * are required
Speaker submission

Future Frontiers is always looking to keep up to date with incredible, contrarian thought leaders operating at the fringes of possibility, and contributing to a flourishing world. Let us know who you think would be a perfect fit.

Fields marked with an * are required
Volunteer

Student tickets are $37 and require volunteer hours. If you are not a student, but would like to volunteer, please apply below:

Fields marked with an * are required
Friday Installment Plan

Can't pay the whole price for a pass today? No problem. We offer 4 month installment plans for all pass levels. Please see below for details.

Future Frontiers

Full Installment Plan

Can't pay the whole price for a pass today? No problem. We offer 2 month installment plans for all pass levels. Please see below for details.

Full Pass Installment Plan
Number of payments 2
Start payments At checkout
Due*Amount
At checkout$124.50 USD
Every 1 month (x 1)$124.50 USD
Total $249.00 USD
* We calculate payments from the date of checkout.
Sign up for
Mastermind Installment Plan

Can't pay the whole price for a pass today? No problem. We offer 2 month installment plans for all pass levels. Please see below for details.

Mastermind Pass Installment Plan
Number of payments 2
Start payments At checkout
Due*Amount
At checkout$399.50 USD
Every 1 month (x 1)$399.50 USD
Total $799.00 USD
* We calculate payments from the date of checkout.
Sign up for

Robot Sociopaths: Playing Devil’s Advocate with Steven Pinker on the Dangers of Artificial Intelligence

By Max Borders

To explore the implications of artificial intelligence, register for the Voice & Exit experience.

Should you be afraid of the robot apocalypse? (I don’t mean AI taking our jobs, I mean AI hurting or enslaving us.)

Steven Pinker has a cool little video helping to assuage concerns about the dangers of advanced AI. He even gets in a little dig about “alpha males” being the ones who fear it most.

He probably has a point. But does he make the case?

Here’s a summary passage of Pinker’s video from Big Think:

In reality we design AI, and if we place safeguards in our designs, we truly have nothing to fear. Machines are what we allow them to be. The dread of them turning evil really says more about our own psyches than it does about robots. Pinker believes an alpha male thinking pattern is at the root of our AI fears, and that it is misguided. Something can be highly intelligent and not have malevolent intentions to overthrow and dominate, Pinker says, it’s called women. An interesting question would be: does how aggressive or alpha you are as a person, affect how much you fear the robopocalypse? Although by this point the fear is contagious, not organic.

I consider myself neither an Alpha Male nor a pessimist. Indeed, I welcome our singularitarian future. But I would still like to be charitable to those who fear advanced AI. Shouldn’t we at least have some concerns about the kind of future sketched in the movie Ex Machina?

Let’s play devil’s advocate for a moment.

Assume a sentient AI. It has the capability to network itself. Also assume this intelligent being is at some level programmed to maximize benefit to itself. Simply said, it has preferences and it ranks those preference to some degree based on self-interest. (It doesn’t have to be a high degree.)

Unless you can program this advanced AI system explicitly not to harm others — or to experience pleasure or benefit from serving and protecting others — might it not include preferences based on utility maximization even if humans are in the way?

Pinker is right about the following: It’s at least theoretically possible to program super-intelligent robots with an ethic of care, just as nature programmed moms. But unless we only program empathy, ethics, servility, or some combination of these into an AI, shouldn’t we expect it’s at least possible for a utility-maximizing being to have sociopathic proclivities? 

It’s not enough to say we can simply introduce ethical programming. If it’s possible for proto-sociopathic programming to get into the codebase, we have reason to be concerned. The programmed, after all, will be reflections of the programmers.

There are a few reasons why it’s at least possible for some sociopathic AI to emerge despite ethical programming:

  • The first iteration of any advanced AI programming will be a reflection of some human ethical stance. Eventually, though, we could introduce code by Alpha Males or worse, by programmers who are less empathic and willing to step over others to win. In short, there are malicious coders out there.
  • There is also the possibility that an element of sociopathic programming is just inadvertently introduced into an enormously complicated code. Maybe it lies dormant, but then expresses itself later. 

Even a small self-interest program could be enough, because…

  • Future AI programming might not be static, but rather dynamic and self-correcting over time. If so, we could see an evolutionary process that we can neither predict nor fully control. We program this intelligence to allow for such evolution and we could see an unpredictable result, depending on the circumstances in which this being finds itself. (Humans can’t evolve on the fly, really. But robots might be able to because their code will likely be more malleable than our wetware.)
  • Networked AI might be smart enough to network itself and incorporate new, potentially malicious code from the cloud.

Suppose an AI is intelligent enough to understand human behavior based on repeated interactions with humans. The more intelligent it becomes, the more it will be able to see the benefits of trust and cooperation. But won’t it be smart enough to understand its own power relative to others — and see openings where exploitation is more “logical”? Or might it find an instance in which there’s a high probability of getting away with something? Pinker admits the AI will have a “goals” orientation. Won’t it continuously improve at reaching its goals? And what if a human is an obstacle on the path to one of those goals?

In a matrix of ethical weightings in which self-interest competes with more altruistic programming, there could be unpredictable results. Our own evolutionary programming gave us a nice cooperation strategy — with all the good feelings that come with that strategy. But as Pinker admits, that self-same programming also gave some people power lust. Still, claims Pinker, “there is no reason to think that it would naturally evolve in that direction.”

My worry is there no reason to think it would not.

The means and rate of evolution will likely be very different for AIs. A single AI might be able to evolve many times during a single human lifetime, just as a single AI (IBM’s Watson) was able to diagnose a rare form of leukemia in ten minutes.

So yes, you might want to program the being to seek consensus, care for people, respect human rights and generally to be kind to humans. But let’s first be clear that any such ethics will be programmed. And then let’s admit that if this being’s code can be altered over time, it can evolve away from the initial programming. (For example, we can imagine an AI getting lost in a hostile natural environment and adapting, suppressing its care, accelerating its hunter instincts, and becoming “feral.”)

Like evolution, emergent complexity doesn’t have an ethic in itself. And yet what we think of as ethical (or unethical) behavior could be an emergent property in an advanced, evolving mind.

I’ll note briefly that Pinker as a cognitive scientist is aware of the modular nature of the brain. He therefore knows that forms of intelligence such as mathematics, logic and reasoning happen mostly in a region of the frontal lobe and that other forms of intuition, social intelligence, and emotion occur more in other parts of the frontal lobe and brain. How analogous will the AI’s “brain” be to humans modular brains? Who knows? But it seems like we will be able to start reproducing in AI many of the functions that give rise to the stuff that really makes us human — like consciousness, and care and greed. For good or ill, we are born with competing ethics programs.

AIs will probably also require multiple ethics algorithms of various strengths and weightings, depending on the goals of the AI. At the very least we hope such ethics will include, say, a game theoretical rationale that emphasizes benefit from cooperation over time (as opposed to predation). If some AI becomes as intelligent or more intelligent than any human, it’s not clear its emergent mind will settle on a single ethical program at all — and indeed it’s not hard to imagine that the AI would need multiple ethics programs from the outset, to function effectively in different contexts (same as humans).

And finally, a super-intelligent being with the power to dominate us might also find that we are useful as pets, slaves, or victims.

Anyway, I’m no Luddite. But it’s important to respect the promethean nature of AI. Consider this headline: “Artificially Intelligent Russian Robot Makes a Run for It … Again” Even at this stage, AI is unpredictable. And so are its programmers. It makes sense to be optimistic, but cautious, about the future of AI.

On a perhaps more sanguine note: It might also be useful to think of our intelligence as merging with AI — so that the issue is not framed within the duality of ‘us’ versus ‘them.’ Advanced AI might co-evolve with humans. In that sense the difference between humans and robots might not be as stark in the future.

In a future post, I will shed my devil’s advocacy and talk about why and how I think we (humans and AI) can be both smarter and more ethical. 

Max Borders is co-founder of Voice & Exit.