
10 Questions About Conscious Machines

Artificial intelligence

How will we handle the rights of AI?

For thousands of years, people have watched the skies and wondered if our species is alone in the universe — and what it would mean if we’re not. But lately, we have reason to think that we will build the answer long before we get a call from the stars. What happens to our institutions when the machines wake up? Max Borders raises ten questions we’ll need to answer. -Ed.

In the past year or so, there have been a lot of films about artificial intelligence: Her, Chappie, and now there’s Ex Machina.

These films are good for us.

They stretch our thinking. They prompt us to ask serious questions about the possibility and prospects of conscious machines — the answers to which may be needed if we must someday co-exist with newly sentient beings. Some of these questions may sound far out, but they force us to think critically about important first principles.

Ten come to mind.

  1. Can conscious awareness arise from causal-physical stuff — like that assembled (or grown) in a laboratory — to make a sentient being?
  2. If such beings become conscious, aware, and have volition, does that mean they could experience pain, pleasure, and emotion too?
  3. If these beings have human-like emotions, as well as volition, does that mean they are owed humane and ethical treatment?
  4. If these beings ought to be treated humanely and ethically, does that also confer certain rights upon them — and are they equal to the rights that humans have come to expect from each other? Does the comparison even make sense?
  5. If these beings have rights, is it wrong to program them for the specific task of serving us? What if they derive pleasure from serving us, or are programmed to do so?
  6. If these beings have rights by virtue of their consciousness and volition, does that offer the philosophical basis of rights in general?
  7. If these beings do not have rights that people need to respect, could anything at all grant rights to them?
  8. If these beings have nothing that grants them ethical treatment or rights, what makes humans distinct in this respect?
  9. If we were able to combine human intelligence with AI — a hybrid, if you will, in which the brain was a mix of biological material and sophisticated circuitry — what would be the ethical/legal status of this being?
  10. If it turns out that humans are not distinct in any meaningful sense from robots, at least in terms of justifying rights, does that mean that rights are a social construct?

These questions might make some people uncomfortable. They should. I merely raise them; I do not purport to answer them here.

I invite your invective and your reasoned, tentative answers in the comments.

This article was originally posted at Anything Peaceful.