Who Needs Humans?
12-02-2012, 02:54 AM (This post was last modified: 12-02-2012, 03:19 AM by R.R.)
RE: Who Needs Humans?
Quote:The Cambridge Project for Existential Risk

Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole. Such dangers have been suggested from progress in AI, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change. The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake. (For a brief introduction to the issues in the case of AI, with links to further reading, see this recent online article by Huw Price and Jaan Tallinn.)

The Cambridge Project for Existential Risk — a joint initiative between a philosopher, a scientist, and a software entrepreneur — begins with the conviction that these issues require a great deal more scientific investigation than they presently receive. Our aim is to establish within the University of Cambridge a multidisciplinary research centre dedicated to the study and mitigation of risks of this kind. We are convinced that there is nowhere on the planet better suited to house such a centre. Our goal is to steer a small fraction of Cambridge's great intellectual resources, and of the reputation built on its past and present scientific pre-eminence, to the task of ensuring that our own species has a long-term future. (In the process, we hope to make it a little more certain that we humans will be around to celebrate the University's own millennium, now less than two centuries hence.)

We will be developing a prospectus for a Cambridge-based Centre for the Study of Existential Risk in coming months, and welcome enquiries and offers of support. (However, we regret that due to the volume of enquiries we now receive, we are not able to respond to all emails individually.)

April 2012


Huw Price
Bertrand Russell Professor of Philosophy, Cambridge

Martin Rees
Emeritus Professor of Cosmology & Astrophysics, Cambridge

Jaan Tallinn
Co-founder of Skype

Cambridge advisors

David Cleevely
Founding Director, Centre for Science and Policy

Tim Crane
Knightbridge Professor of Philosophy

Robert Doubleday
Executive Director, Centre for Science and Policy

Hermann Hauser
Co-founder, Amadeus Capital Partners

Jane Heal
Emeritus Professor of Philosophy

Sean Holden
Senior Lecturer, Computing Laboratory; Fellow of Trinity College

David Spiegelhalter
Winton Professor of the Public Understanding of Risk

External advisors

Nick Bostrom
Professor of Philosophy, Future of Humanity Institute, Oxford

David Chalmers
Professor of Philosophy, NYU & ANU

George M Church
Professor of Genetics, Harvard Medical School

Dana Scott
Emeritus Professor of Computer Science, Philosophy & Mathematical Logic, Carnegie Mellon University

Murray Shanahan
Professor of Cognitive Robotics, Imperial College, London

Max Tegmark
Professor of Physics, MIT

Jonathan B Wiener
Professor of Law, Environmental Policy & Public Policy, Duke University

Quote:Risk of robot uprising wiping out human race to be studied

26 November 2012 Last updated at 18:28

Cambridge researchers are to assess whether technology could end up destroying human civilisation.

The Centre for the Study of Existential Risk (CSER) will study dangers posed by biotechnology, artificial life, nanotechnology and climate change.

The scientists said that to dismiss concerns of a potential robot uprising would be "dangerous".

Fears that machines may take over have been central to the plot of some of the most popular science fiction films.

Perhaps most famous is Skynet, a rogue computer system depicted in the Terminator films.

Skynet gained self-awareness and fought back after first being developed by the US military.

'Reasonable prediction'

But despite being the subject of far-fetched fantasy, researchers said the concept of machines outsmarting us demanded mature attention.

"The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake," the researchers wrote on a website set up for the centre.

The CSER project has been co-founded by Cambridge philosophy professor Huw Price, cosmology and astrophysics professor Martin Rees and Skype co-founder Jaan Tallinn.

"It seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology," Prof Price told the AFP news agency.

"What we're trying to do is to push it forward in the respectable scientific community."

He added that as robots and computers become smarter than humans, we could find ourselves at the mercy of "machines that are not malicious, but machines whose interests don't include us".

Survival of the human race permitting, the centre will launch next year.

Quote:Humanity’s last invention and our uncertain future

November 25, 2012

A philosopher, a scientist and a software engineer have come together to propose a new centre at Cambridge to address developments in human technologies that might pose “extinction-level” risks to our species, from biotechnology to artificial intelligence.

In 1965, Irving John ‘Jack’ Good sat down and wrote a paper called Speculations Concerning the First Ultraintelligent Machine, published in Advances in Computers. Good, a Cambridge-trained mathematician, Bletchley Park cryptographer, pioneering computer scientist and friend of Alan Turing, wrote that in the near future an ultra-intelligent machine would be built.

This machine, he continued, would be the “last invention” that mankind will ever make, leading to an “intelligence explosion” – an exponential increase in self-generating machine intelligence. For Good, who went on to advise Stanley Kubrick on 2001: A Space Odyssey, the “survival of man” depended on the construction of this ultra-intelligent machine.

Fast forward almost 50 years and the world looks very different. Computers dominate modern life across vast swathes of the planet, underpinning key functions of global governance and economics, increasing precision in healthcare, monitoring identity and facilitating most forms of communication – from the paradigm shifting to the most personally intimate. Technology advances for the most part unchecked and unabated.

While few would deny the benefits humanity has received as a result of its engineering genius – from longer life to global networks – some are starting to question whether the acceleration of human technologies will result in the survival of man, as Good contended, or if in fact this is the very thing that will end us.

Now a philosopher, a scientist and a software engineer have come together to propose a new centre at Cambridge, the Centre for the Study of Existential Risk (CSER), to address these cases – from developments in bio and nanotechnology to extreme climate change and even artificial intelligence – in which technology might pose “extinction-level” risks to our species.

“At some point, this century or next, we may well be facing one of the major shifts in human history – perhaps even cosmic history – when intelligence escapes the constraints of biology,” says Huw Price, the Bertrand Russell Professor of Philosophy and one of CSER’s three founders, speaking about the possible impact of Good’s ultra-intelligent machine, or artificial general intelligence (AGI) as we call it today.

“Nature didn’t anticipate us, and we in our turn shouldn’t take AGI for granted. We need to take seriously the possibility that there might be a ‘Pandora’s box’ moment with AGI that, if missed, could be disastrous. I don’t mean that we can predict this with certainty, no one is presently in a position to do that, but that’s the point! With so much at stake, we need to do a better job of understanding the risks of potentially catastrophic technologies.”

Price’s interest in AGI risk stems from a chance meeting with Jaan Tallinn, a former software engineer who was one of the founders of Skype, which – like Google and Facebook – has become a digital cornerstone. In recent years Tallinn has become an evangelist for the serious discussion of ethical and safety aspects of AI and AGI, and Price was intrigued by his view:

“He (Tallinn) said that in his pessimistic moments he felt he was more likely to die from an AI accident than from cancer or heart disease. I was intrigued that someone with his feet so firmly on the ground in the industry should see it as such a serious issue, and impressed by his commitment to do something about it.”

For Tallinn, we Homo sapiens have become the dominant optimising force: we now control the future, having grabbed the reins from four billion years of natural evolution. Our technological progress has by and large replaced evolution as the dominant, future-shaping force.

We move faster, live longer, and can destroy at a ferocious rate. And we use our technology to do it. AI geared to specific tasks continues its rapid development – from financial trading to face recognition – and the power of computing chips doubles every two years in accordance with Moore’s law, as set out by Intel co-founder Gordon Moore in the same year that Good predicted the ultra-intelligent machine.
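The doubling the article invokes compounds quickly: a minimal, purely illustrative sketch (the function name is ours, not from the article) of how the stated two-year doubling period plays out over the roughly fifty years separating Good's 1965 paper from the article:

```python
def moores_law_factor(years, doubling_period=2):
    """Growth factor after `years`, assuming one doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# 50 years at one doubling per 2 years is 25 doublings:
print(moores_law_factor(50))  # prints 33554432.0, i.e. 2**25
```

A roughly 33-million-fold increase in transistor counts over that span is why the article treats the trend as a serious input to projections about machine intelligence.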

We know that ‘dumb matter’ can think, say Price and Tallinn – biology has already solved that problem, in a container the size of our skulls. That puts a fixed cap on the level of complexity required, and it seems irresponsible, they argue, to assume that the rising curve of computing complexity will not reach and even exceed that bar in the future. The critical point might come if computers reach human capacity to write computer programs and develop their own technologies. This, Good’s “intelligence explosion”, might be the point at which we are left behind – permanently – by a future-defining AGI.

“Think how it might be to compete for resources with the dominant species,” says Price. “Take gorillas for example – the reason they are going extinct is not because humans are actively hostile towards them, but because we control the environment in ways that suit us but are detrimental to their survival.”

Price and Tallinn stress the uncertainties in these projections, but point out that this simply underlines the need to know more about AGI and other kinds of technological risk.

In Cambridge, Price introduced Tallinn to Lord Martin Rees, former Master of Trinity College and President of the Royal Society, whose own work on catastrophic risk includes his books Our Final Century (2003) and From Here to Infinity: Scientific Horizons (2011). The three formed an alliance, aiming to establish CSER.

With luminaries in science, policy, law, risk and computing from across the University and beyond signing up to become advisors, the project is, even in its earliest days, gathering momentum. “The basic philosophy is that we should be taking seriously the fact that we are getting to the point where our technologies have the potential to threaten our own existence – in a way that they simply haven’t up to now, in human history,” says Price. “We should be investing a little of our intellectual resources in shifting some probability from bad outcomes to good ones.”

Price acknowledges that some of these ideas can seem far-fetched, the stuff of science fiction, but insists that that’s part of the point. “To the extent – presently poorly understood – that there are significant risks, it’s an additional danger if they remain for these sociological reasons outside the scope of ‘serious’ investigation.”

“What better place than Cambridge, one of the oldest of the world’s great scientific universities, to give these issues the prominence and academic respectability that they deserve?” he adds. “We hope that CSER will be a place where world class minds from a variety of disciplines can collaborate in exploring technological risks in both the near and far future.

“Cambridge recently celebrated its 800th anniversary – our aim is to reduce the risk that we might not be around to celebrate its millennium.”
