I think there's a great need for robot pickers/harvesters.
Picture a rolling countryside of "meadows", "woods", and narrow pathways as far as the eye can see. But this countryside is a PERMACULTURE: it comprises plant guilds of mostly "useful" plants from a human point of view, while maintaining and supporting wildlife migratory routes.
Dotted throughout, equispaced, are lightweight lattice towers supporting photovoltaic arrays aligned with the sun.
At night these towers fold down their arrays and begin to move, using insect-like legs with paw-like feet, to pick produce, plant seedlings, harvest, maintain beehives, etc. They pick the fruit/root/tuber that has reached the requisite stage of ripeness, pack it into bags, and replant where necessary.
A collection/dispatch robot scuttles up just before dawn to remove the produce, restock the bags, supply seedlings, etc. Then the picker picks a spot to catch the sun and adjusts its aerial. Etc.
This system (under radio control by a computer based at the road depot at all times) allows high efficiency and a quick, flexible response at minimal cost, with less reliance on refrigeration. The crop is harvested on an INDIVIDUAL basis, assessed by the "rules" of the picker, which uses a burst of illumination, a quick "scan", and some "fuzzy logic".
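A ripeness "rule" of this sort could be sketched with simple fuzzy membership functions. Everything below is invented for illustration — the sensor readings, the thresholds, even the idea of a "colour index" are assumptions, not details from any real harvester:

```python
# Hypothetical sketch of a picker's "fuzzy logic" ripeness rule.
# All quantities and thresholds are made up for illustration.

def ramp(x, lo, hi):
    """Linear membership: 0 below lo, 1 above hi, linear in between."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def ripeness(colour_index, firmness_kpa):
    """Fuzzy AND (min) of 'looks ripe' and 'feels soft enough'."""
    looks_ripe = ramp(colour_index, 0.4, 0.8)      # from the lit "scan"
    soft_enough = 1.0 - ramp(firmness_kpa, 20, 60)  # firmer = less ripe
    return min(looks_ripe, soft_enough)

def should_pick(colour_index, firmness_kpa, threshold=0.6):
    """Pick only items whose combined ripeness score clears the bar."""
    return ripeness(colour_index, firmness_kpa) >= threshold

print(should_pick(0.9, 15))  # ripe colour, soft fruit -> True
print(should_pick(0.3, 70))  # green and hard -> False
```

The appeal of the fuzzy approach for a job like this is that borderline fruit gets a graded score rather than a brittle yes/no, so the picking threshold can be tuned per crop.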
Being permaculture, there is no necessity for involvement with "agrichemistry" or other "biocrap".
A robot outfitted with a rudimentary brain-like neural network is able to tackle new tasks by calling on its past experience and knowledge to think and act for itself.
This breakthrough demonstrates the evolving ability of robots to adapt to ever-changing environments, according to Osamu Hasegawa, an associate professor at the Tokyo Institute of Technology who is developing the technology.
"So far, robots, including industrial robots, have been able to do specific tasks quickly and accurately. But if their environment changes slightly, robots like that can't change," Hasegawa said in a press release.
In this video, the robot uses its artificial intelligence to pour a glass of "water" (beads, actually, since water and electronics don't mix all that well) and then, mid-task and with its hands full, it's told to make the water cold. What to do? It spies the "ice cube" on a nearby tray and decides to put down the bottle so it can pick up the ice cube and put it in the glass.
Hasegawa's robot is similar to the Bakerbot reported on earlier this week that is able to make and bake a cookie almost from scratch, using code that enables it to determine where ingredients are, pour and mix them together, and place them in the oven.
Hasegawa's team has developed an algorithm called a Self Organizing Incremental Neural Network, or SOINN, to do the thinking. The network obtains information from the robot's visual, auditory and tactile sensors. In addition, it does what people do these days: goes online and chats with others (robots in this case).
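The incremental-clustering idea at the heart of a self-organizing incremental network can be sketched in a few lines. This is a drastic simplification, not the published SOINN algorithm — real SOINN also maintains edges between nodes, node ages, and noise removal, and the threshold and learning rate here are invented:

```python
# Toy illustration of incremental self-organizing learning:
# adapt the nearest prototype for familiar inputs, grow a new
# node for novel ones. NOT the real SOINN algorithm.
import math

class TinySOINN:
    def __init__(self, new_node_threshold=1.0, learn_rate=0.1):
        self.nodes = []                  # learned prototype vectors
        self.threshold = new_node_threshold
        self.lr = learn_rate

    def learn(self, x):
        """Absorb one input vector incrementally."""
        if not self.nodes:
            self.nodes.append(list(x))
            return
        nearest = min(self.nodes, key=lambda n: math.dist(n, x))
        if math.dist(nearest, x) > self.threshold:
            self.nodes.append(list(x))   # novel input: grow the network
        else:
            for i in range(len(nearest)):  # familiar input: nudge prototype
                nearest[i] += self.lr * (x[i] - nearest[i])

net = TinySOINN()
for point in [(0, 0), (0.1, 0.2), (5, 5), (5.1, 4.9)]:
    net.learn(point)
print(len(net.nodes))  # two distinct clusters discovered -> 2
```

The key property — and the reason this style of network suits a robot in a changing environment — is that learning never stops and never requires retraining from scratch: a genuinely new situation simply becomes a new node.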
So, for example, let's say this robot in Japan is asked to make a cup of tea. It doesn't know how, so it goes online and learns from a robot in London how to make a perfect cup of English tea. But, since it's in Japan, the robot knows this isn't quite right.
Based on its past experience and surroundings, Hasegawa said, "we think this robot will become able to transfer that knowledge to its immediate situation, and make green tea using a Japanese teapot."
By John Roach
updated 8/3/2011 4:26:12 PM ET
The future of robots is shaping up to be wonderful for couch potatoes: they can fetch beers, fold laundry, and now they can even bake cookies.
This latest breakthrough comes from the Distributed Robotics Lab at MIT where graduate student Mario Bollini is plugging away at code that allows robots to make decisions for themselves as they accomplish specific tasks.
The Bakerbot, which is a Willow Garage PR2 robot, represents a hybrid approach to this end goal, he said. The robot knows, for example, that four bowls with cookie ingredients are on the table as well as a mixing bowl and a cookie sheet.
"All the manipulation is done on the fly," he said. It calculates, for example, how to pick up the bowls with ingredients and pour them into the mixing bowl, mix them together, and put them in the oven. The result is a baked cookie, not the prettiest cookie in the world, but nevertheless a baked cookie.
Ultimately, researchers would like to use the knowledge (and code) gained from this Bakerbot project and use it to design a robot that would know what to do when asked to bake a cake, for example.
"It would try to understand that, find a recipe for that, and it would try to understand what the recipe is telling it to do and then use actions that it knows how to do to accomplish it," Bollini said.
Beyond baking, robots with these types of skills are already being eyed for factory jobs. Current robots on the assembly line are programmed to do one task over and over again. If someone gets in their way, that person gets hit. And if the robots need to do a different task, they have to be completely reprogrammed.
A more dynamic robot could be useful, for example, on an auto assembly line where robots install windshields of all shapes and sizes on several different models of cars and do so without crashing into each other and their human colleagues.
In the more distant future, Bakerbot really might find its home inside a home, particularly for elder care in countries with aging populations such as Japan, Bollini noted. There, they'll likely start out doing cooperative tasks, not baking cookies all by themselves.
"If you're not strong enough to lift the mixing bowl and put it out, the robot will do that part of the task and then the human does things that are easier for the person to do like recognize where everything is and get it out of the cupboards," he said.
Quote:Future of war: Private robot armies fight it out
By Greg Lindsay
updated 8/6/2011 3:12:35 PM ET
Last month, NATO’s commanders in Libya went with caps-in-hand to the Pentagon to ask for reconnaissance help in the form of more Predator drones. "It’s getting more difficult to find stuff to blow up," a senior NATO officer complained to The Los Angeles Times. The Libyan rebels’ envoy in Washington had already made a similar request. “We can't get rid of (Moammar Gadhafi) by throwing eggs at him,” the envoy told the newspaper.
The Pentagon told both camps it would think about it, citing the need for drones in places like Yemen, Somalia and Pakistan, where Predator strikes have killed dozens this month alone. So why doesn’t NATO or the rebels do what Cote d’Ivoire’s Air Force, Mexican police and college student peacekeepers have done — buy, rent or build drones of their own? The development of deadly hardware and software is leading to a democratization of war tech, which could soon mean that every army — private or national — has battalions of automated soldiers at their command.
"Drones are essentially flying — and sometimes armed — computers," the Brookings Institution noted in a paper published last month. They’re robots that follow the curve of Moore’s Law rather than the Pentagon’s budgets, rapidly evolving in performance since the Predator’s 2002 debut while falling in price to the point where Make magazine recently carried instructions on how to launch your own satellite for $8,000.
"You have high school kids competing in robotics competitions with equipment that 10 years ago would have been considered military-grade," says Peter W. Singer, author of "Wired for War" and a senior fellow at Brookings, who predicts robots on the battlefield will be a paradigm-shifting "revolution in military affairs." First comes the high-tech arms race with China, Israel and all the other nations competing to build their own drones. Then comes the low-cost trickle-down into low-tech wars like Libya’s, where tomorrow’s rag-tag militias fight with DIY drones. Finally, if robots are simply computers with wings (and missiles), then expect to see future wars fought by the descendants of flash-trading algorithms, with humans as anxious bystanders.
Flattening the battlespace
Since the Predator first appeared above Afghanistan nearly a decade ago, the Pentagon’s inventory of drones has risen from fewer than 50 devices to more than 7,000. But the gap between the U.S. and its closest competitors may actually be shrinking. China, for example, has pinned its military ambitions on 2,000 missiles guided by target data from some two-dozen models of surveillance drones.
The worldwide drone market is projected by the Teal Group to be worth $94 billion over the next decade, led by the Pentagon, which has asked Congress for $5 billion for next year's expenses alone. One reason for the ballooning arms race between anywhere from 44 to 70 nations (depending on which estimate you believe) is self-interest. So far, the Pentagon has refused to share its toys, instituting tight export controls on drones such as the Predator or Reaper, both of which are made by General Atomics.
Another is purely financial. An F-22 stealth fighter costs $150 million, roughly 15 times a top-of-the-line Predator. The U.S. military’s blank check of a budget — more than the rest of the world’s combined — means less and less when the cost of drones keeps falling.
But the most important factor may be doctrinal. Unlike the U.S., which is still feeling its way forward with robotic warriors while entrenched generals fight for their tanks and aircraft carriers, small nations with shrinking budgets stand to gain the most from embracing robotic warfare.
"There’s no such thing as a first-mover advantage in war," says Singer. "This technology is different than an aircraft carrier. You don’t need a big military infrastructure to use it, or even to build it. This is more akin to the open source movement in software. You’re flattening the battlespace, and the barriers to entry for other actors is falling."
In 2004, French troops arrived in Cote d’Ivoire to help police a cease-fire in the country’s simmering civil war. Not expecting trouble, they left their air defenses at home. But on Nov. 4, 2004, a pair of Israeli-made Aerostar drones circled their base, reconnoitering targets for the Russian-made jets that bombed them a few hours later, killing nine soldiers and a U.S. aid worker. The drones belonged to an Israeli private military firm hired by Ivorian President Laurent Gbagbo, who claimed (unconvincingly) that the whole thing was an accident.
Hiring drone-bearing mercenaries is easy when you’re a president; what about when you’re a college student? A year later, a trio of Swarthmore students formed the Genocide Intervention Network to help bring attention to Darfur. After raising almost half-a-million dollars in donations, the group solicited a bid from Evergreen International to remotely fly four surveillance drones above Sudan, documenting atrocities. Sadly, the price tag was a cool $22 million a year. (They passed.)
Today, they would toss the project on Kickstarter and build their drone using Arduino modules developed by hobbyist sites such as DIY Drones. In a recent essay, the consultant and futurist Scott Smith noted that both the "maker" movement and the Libyan rebels desperately hacking together weaponry are drawing on the same open source knowledge base. Or for that matter, so are the Mexican drug cartels assembling their own tanks and submarines.
"We’ve come to a point where you can put together a parallel system to the U.S. Department of Defense," says Smith. And also to the point where the DoD is soliciting the hobbyists themselves to be the next generation of weapon designers via DARPA’s crowdsourcing effort, UAVForge. "If I were at a major arms contractor, I would be worried about being disrupted," Smith says.
He wonders if the world is headed toward "peak arms," in which open source, distributed, low-cost tools fatally undermine big-ticket weapons sales in all but a few cases (most of them involving the Strait of Taiwan). And that goes double for non-state actors, e.g. roll-your-own NGOs and drug cartels. "The era of large scale, run-and-gun DIY micro-warfare is just around the corner," Smith concludes.
The robot wars
The trajectory of drones and warbots is the same as computing in general — smaller, cheaper, more ubiquitous. In February, AeroVironment unveiled the prototype of a hummingbird-sized drone that can perch on a windowsill and peer in. Insect-size is next.
But the shift from a single pair of eyes in the sky to a swarm of bots would create havoc with U.S. military doctrine, which requires having a human operator at all times, or a "man in the loop." This is one reason why the Air Force is training more remote pilots this year (some 350) than bomber and fighter pilots combined. Then again, that’s not nearly enough for 7,000 drones, let alone 7 million, all of which would have the intelligence to fight or fly on their own, with faster-than-human response times.
That’s why the definition of "in the loop" is blurring from direct human control "to a veto power we’re unwilling to use," says Singer. In the case of missile defense systems already in use, "you can turn it on or off," but you can’t pick and choose which bogeys to shoot. "The speed and complexity is such that the human interface has to be minimized to be effective," he adds, which suggests the generals in "WarGames" were right all along.
Or were they? Releasing increasingly autonomous warbots into the wild will demand new algorithms to command them, raising the specter of a "flash crash" on the battlefield as opposing algorithms clash and chase each other’s tails. Or what if hackers were to assemble a botnet for real: an army of machines ready to do their bidding? Perhaps a decade from now, there will be no "cyber war." There will only be war.
Quote:Hugo de Garis explains his "Artilect War" theory. "Artilect" is a portmanteau of the words "artificial" and "intellect". Hugo de Garis believes there will be a massive war at the end of this century regarding A.I.
Quote:Top A.I. researcher Hugo de Garis explains in the following clip that the development of super-intelligent A.I. may lead to a devastating world war that could kill billions of people. He adds that he is more than willing to take the risk, saying, “As a brain builder myself, am I prepared to risk the extinction of the human species for the sake of building an artilect? … yep.”
Quote:7 jobs you'll have to trust robots to do in the near future
By Evan Dashevsky
2:42PM on Jul 16, 2012
Chances are you will not have a job in the future. This isn't anything against you personally, or even a comment on the economy. It's just a statement of fact. As technology (and specifically robotics) marches into the future, there will simply be less need for human workers and all their annoying human-y hang-ups such as "due compensation," "sick time" and "sleep."
Futurist Thomas Frey has gone so far as to predict that two billion jobs (nearly 50% of all current jobs) will be technologically outmoded by 2030. If this prediction holds true, any child born today will graduate from high school into a radically different world where all human needs are met cheaply, but where there will be little need for actual humans.
We've only seen the beginning of this new jobless age where all services are filled by robots and other assorted automatons. And this coming iceberg is much bigger than you probably think. Here, we present a list of jobs that will be "manned" by robots in the closer-than-you-think future.
Self-driving cars (or robot cars) are coming to a street near you. They are greener, cheaper and safer. So, where there's a more efficient way, there will be an economic will to develop it.
Traditional manufacturers have predicted that we will see a commercially viable self-driving car this decade. As these robo-cars fill our streets and highways, there are a slew of occupations that will no longer need some inefficient human. Taxi drivers, bus operators and delivery jobs might be some career paths to guide your kids away from. But this driverless era would eventually outmode other jobs, too, such as gas station attendants, valets, automotive claims adjusters and even traffic cops.
In addition to robotic cars for individuals, governments are beginning to invest in automated public transportation. In fact, this world of robotic public transportation is arguably already here. The dictionary defines a robot as "a device that automatically performs complicated often repetitive tasks." By this parameter, the age of robotic transportation has been replacing human operators for years, if not decades.
If you've ever taken an airport tram to speed you between terminals, the chances are that the vehicle you were on had no human in charge. And if you want to get crazy, the everyday elevator could be described as a robot that carries people between floors (a technology that in its earliest days was run by human operators before becoming fitted to "automatically perform complicated often repetitive tasks").
South Korea is the nation at the vanguard of creating a freaky robot future. And in that robot-centric view of the future, the country has begun an ambitious plan to introduce robots into the educational system.
Many of these robot classroom aides are, for now, little more than glorified novelties or telepresence media. While these current devices might best be described as aides to a human teacher, they will develop capabilities over time and take over more and more responsibilities currently handled by Homo sapiens schoolmasters.
It may sound crazy to us old farts, but today's children are far more comfortable interacting with technology than any previous generation. They already surf the vast digital seas with as much tenacity as their parents. It wouldn't be too much of a shock to them to have tests and lesson plans administered by what is essentially a roving interactive computer.
Following last year's catastrophes, large swaths of farmland in the Miyagi prefecture in northeast Japan were left ravaged. The soil was laden with salt and oil from the tsunami, as well as radiation contamination from the damaged Fukushima Daiichi nuclear plant. But where there's devastation, there is also opportunity. In this case, the opportunity is for Japan's Ministry of Agriculture to experiment with a massive "robot farm" where automated machines will grow rice, wheat, soybeans, fruits and vegetables.
The so-called "Dream Project" will be built on a disaster zone spanning 600 square miles. Planning for the facility is currently underway and will be backed by a $52 million investment from the Japanese government. The project will involve unmanned tractors and other automated farmhands. The plan will even bolster crops by using recycled carbon dioxide from the onsite machines.
This is just one massive example of the coming age of robot agrarians. Over the course of the next few decades, nearly all jobs handled by human hands will be replaced by our automated and soulless synthetic friends.
4. Construction Workers
While still early in their development, we've seen various examples of automated bots that can handle construction tasks once solely the domain of us hairless apes. Take for example bots that move about trusses with the greatest of ease; roving quadrocopters that can assemble structures nearly anywhere; and freaky snake robots that can inspect and fix perilous nooks and crannies that currently put humans at risk.
Besides being cheaper and more efficient, robots will be able to build in habitats unwelcoming to our wimpy human anatomies. NASA has taken a keen interest in creating robots that can build bases or structures in space or other extraterrestrial environments. Future construction sites will be more akin to a swarming mass of metal productivity.
The numbers are officially top secret, but at least one source counts some 217 drone strikes in Pakistan over the past three years. These strikes are not technically the work of robot soldiers as they are operated by remote human pilots (who are usually stationed out of harm's way on a base in Nevada, BTW). Still, they are indicative of the U.S. military's push to remove human soldiers from the frontlines.
The nerd warriors over at DARPA have poured mondo bucks into developing non-living soldiers for the battlefield. We've seen everything from hummingbird-shaped nano-bot spies equipped with video cameras to terrifying, agile Cheetah-bots. Many of these designs are still years away from battle readiness, but robots are already making their mark in combat. Lt. Col. Dave Thompson, the Marine Corps' "top robot-handler," told Wired last year that one in 50 troops in Afghanistan is a robot. This contemporary crop of battlebots is, to use a phrase, "stupid" in that these machines handle tasks such as investigating and defusing IEDs under the remote control of a human handler. Really, these bots might be better described as "tools" than Terminators at this point.
However, it is clear that given the U.S. Military's resources and desires, the automated military is coming. War is hell. And in the future, it may be a hell devoid of human warriors.
The human sex drive thrives at the forefronts of technology. Porn was there at the dawn of photography, the beginning of home video tech, and has smothered itself all over the Internet (not that any of us would know anything about that). And it's probably only a matter of time before the human need for nookie spreads its filthy tentacles into the world of robotics.
A huge industry already facilitates prosthetic stand-ins for female sex partners in Real Dolls, or even for just parts of female partners. (We're not gonna link that — just Google around yourself, weirdo.) So, as robots evolve to become more humanlike, right down to the nitty-gritty details, the jump to robot sex partners is an inevitable one.
This will bring us some interesting questions such as: Would having sex with a human-like machine be considered cheating? Could having sex with a robot be as good — or perhaps even better — than sex with a human? Robot sex workers also might solve some of society's ills. Namely, substituting the need for human trafficking or stopping the spread of STDs through prostitution. Robots, which could cater to any whim, could also perform outlawed sex acts.
Technological innovation is intimately tied with the need to get intimate. Robot lovin' may sound weird, but it will happen.
7. Doctors and Nurses
Unfortunately, even a fancy medical degree won't protect you from obsolescence. WebMD has already become the hypochondriac's go-to source for their latest malady. The X Prize Foundation has launched a $10 million prize for a team to invent a "Tricorder," a handheld device that could accurately make medical diagnoses without the help of a human doctor.
We already have machines that are used in minimally invasive surgery. These machines are less like the medical droid that fixed Luke Skywalker's hand than they are surgeons' tools. But these tools are gaining new capabilities, such as the power to administer anesthesia. The need for humans in these processes will diminish as the machines become more sophisticated. Human doctors will take on greater oversight roles, before eventually not needing to be there at all.
One engineer has even proposed an automated system to take over the role of hospice care at the end of a patient's life. This End-of-Life Machine, which lies somewhere in the nexus between freaky and icky, may one day be a cold stand-in goodbye for elderly patients who live alone. At the same time, in a medical home where the elderly can sometimes feel abandoned or alone, robots could provide constant, unconditional affection or, at least, attention.
These are just a few of the professions that will in all likelihood go the way of the bowling pin setter and the ice delivery man. Of course, this workless existence will be counterbalanced by a world of cheap and plenty. All our needs and wants will be affordable and available. There just might not be a need for us to "work" anymore. This future doesn't necessarily spell impending doom as much as it is a problem that will require a radical reimagining of what it means to be human.
Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole. Such dangers have been suggested from progress in AI, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change. The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake. (For a brief introduction to the issues in the case of AI, with links to further reading, see this recent online article by Huw Price and Jaan Tallinn.)
The Cambridge Project for Existential Risk — a joint initiative between a philosopher, a scientist, and a software entrepreneur — begins with the conviction that these issues require a great deal more scientific investigation than they presently receive. Our aim is to establish within the University of Cambridge a multidisciplinary research centre dedicated to the study and mitigation of risks of this kind. We are convinced that there is nowhere on the planet better suited to house such a centre. Our goal is to steer a small fraction of Cambridge's great intellectual resources, and of the reputation built on its past and present scientific pre-eminence, to the task of ensuring that our own species has a long-term future. (In the process, we hope to make it a little more certain that we humans will be around to celebrate the University's own millennium, now less than two centuries hence.)
We will be developing a prospectus for a Cambridge-based Centre for the Study of Existential Risk in coming months, and welcome enquiries and offers of support. (However, we regret that due to the volume of enquiries we now receive, we are not able to respond to all emails individually.)
HP, MJR & JT
Bertrand Russell Professor of Philosophy, Cambridge
Emeritus Professor of Cosmology & Astrophysics, Cambridge
Co-founder of Skype
Founding Director, Centre for Science and Policy
Knightbridge Professor of Philosophy
Executive Director, Centre for Science and Policy
Co-founder, Amadeus Capital Partners
Emeritus Professor of Philosophy
Senior Lecturer, Computing Laboratory; Fellow of Trinity College
Winton Professor of the Public Understanding of Risk
Professor of Philosophy, Future of Humanity Institute, Oxford
Professor of Philosophy, NYU & ANU
George M Church
Professor of Genetics, Harvard Medical School
Emeritus Professor of Computer Science, Philosophy & Mathematical Logic, Carnegie Mellon University
Professor of Cognitive Robotics, Imperial College, London
Professor of Physics, MIT
Jonathan B Wiener
Professor of Law, Environmental Policy & Public Policy, Duke University
Quote:Humanity’s last invention and our uncertain future
November 25, 2012
A philosopher, a scientist and a software engineer have come together to propose a new centre at Cambridge to address developments in human technologies that might pose “extinction-level” risks to our species, from biotechnology to artificial intelligence.
In 1965, Irving John ‘Jack’ Good sat down and wrote a paper for New Scientist called Speculations concerning the first ultra-intelligent machine. Good, a Cambridge-trained mathematician, Bletchley Park cryptographer, pioneering computer scientist and friend of Alan Turing, wrote that in the near future an ultra-intelligent machine would be built.
This machine, he continued, would be the “last invention” that mankind will ever make, leading to an “intelligence explosion” – an exponential increase in self-generating machine intelligence. For Good, who went on to advise Stanley Kubrick on 2001: a Space Odyssey, the “survival of man” depended on the construction of this ultra-intelligent machine.
Fast forward almost 50 years and the world looks very different. Computers dominate modern life across vast swathes of the planet, underpinning key functions of global governance and economics, increasing precision in healthcare, monitoring identity and facilitating most forms of communication – from the paradigm shifting to the most personally intimate. Technology advances for the most part unchecked and unabated.
While few would deny the benefits humanity has received as a result of its engineering genius – from longer life to global networks – some are starting to question whether the acceleration of human technologies will result in the survival of man, as Good contended, or if in fact this is the very thing that will end us.
Now a philosopher, a scientist and a software engineer have come together to propose a new centre at Cambridge, the Centre for the Study of Existential Risk (CSER), to address these cases – from developments in bio and nanotechnology to extreme climate change and even artificial intelligence – in which technology might pose “extinction-level” risks to our species.
“At some point, this century or next, we may well be facing one of the major shifts in human history – perhaps even cosmic history – when intelligence escapes the constraints of biology,” says Huw Price, the Bertrand Russell Professor of Philosophy and one of CSER’s three founders, speaking about the possible impact of Good’s ultra-intelligent machine, or artificial general intelligence (AGI) as we call it today.
“Nature didn’t anticipate us, and we in our turn shouldn’t take AGI for granted. We need to take seriously the possibility that there might be a ‘Pandora’s box’ moment with AGI that, if missed, could be disastrous. I don’t mean that we can predict this with certainty, no one is presently in a position to do that, but that’s the point! With so much at stake, we need to do a better job of understanding the risks of potentially catastrophic technologies.”
Price’s interest in AGI risk stems from a chance meeting with Jaan Tallinn, a former software engineer who was one of the founders of Skype, which – like Google and Facebook – has become a digital cornerstone. In recent years Tallinn has become an evangelist for the serious discussion of ethical and safety aspects of AI and AGI, and Price was intrigued by his view:
“He (Tallinn) said that in his pessimistic moments he felt he was more likely to die from an AI accident than from cancer or heart disease. I was intrigued that someone with his feet so firmly on the ground in the industry should see it as such a serious issue, and impressed by his commitment to do something about it.”
We Homo sapiens have, for Tallinn, become optimised – in the sense that we now control the future, having grabbed the reins from 4 billion years of natural evolution. Our technological progress has by and large replaced evolution as the dominant, future-shaping force.
We move faster, live longer, and can destroy at a ferocious rate. And we use our technology to do it. AI geared to specific tasks continues its rapid development – from financial trading to face recognition – and the power of computing chips doubles every two years in accordance with Moore’s law, as set out by Intel founder Gordon Moore in the same year that Good predicted the ultra-intelligence machine.
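Taken literally, a two-year doubling compounds enormously over the span the article describes. The arithmetic below is illustrative only — real chip scaling has been far less uniform — but it shows the scale of the growth between Good's 1965 paper and this 2012 article:

```python
# Compound growth under "doubling every two years" (Moore's law,
# taken at face value for illustration).

def moores_factor(years, doubling_period=2.0):
    """Growth factor after `years` of doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# From Good's 1965 prediction to this 2012 article: 47 years.
print(moores_factor(47))  # roughly a twelve-million-fold increase
```

The point Price and Tallinn draw from this curve is not the exact factor but its shape: a process that keeps doubling has no obvious reason to stop politely at the level of complexity biology happened to reach.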
We know that ‘dumb matter’ can think, say Price and Tallinn – biology has already solved that problem, in a container the size of our skulls. That’s a fixed cap to the level of complexity required, and it seems irresponsible, they argue, to assume that the rising curve of computing complexity will not reach and even exceed that bar in the future. The critical point might come if computers reach human capacity to write computer programs and develop their own technologies. This, Good’s “intelligence explosion”, might be the point we are left behind – permanently – to a future-defining AGI.
“Think how it might be to compete for resources with the dominant species,” says Price. “Take gorillas for example – the reason they are going extinct is not because humans are actively hostile towards them, but because we control the environments in ways that suit us, but are detrimental to their survival.”
Price and Tallinn stress the uncertainties in these projections, but point out that this simply underlines the need to know more about AGI and other kinds of technological risk.
In Cambridge, Price introduced Tallinn to Lord Martin Rees, former Master of Trinity College and President of the Royal Society, whose own work on catastrophic risk includes his books Our Final Century (2003) and From Here to Infinity: Scientific Horizons (2011). The three formed an alliance, aiming to establish CSER.
With luminaries in science, policy, law, risk and computing from across the University and beyond signing up to become advisors, the project is, even in its earliest days, gathering momentum. “The basic philosophy is that we should be taking seriously the fact that we are getting to the point where our technologies have the potential to threaten our own existence – in a way that they simply haven’t up to now, in human history,” says Price. “We should be investing a little of our intellectual resources in shifting some probability from bad outcomes to good ones.”
Price acknowledges that some of these ideas can seem far-fetched, the stuff of science fiction, but insists that that’s part of the point. “To the extent – presently poorly understood – that there are significant risks, it’s an additional danger if they remain for these sociological reasons outside the scope of ‘serious’ investigation.”
“What better place than Cambridge, one of the oldest of the world’s great scientific universities, to give these issues the prominence and academic respectability that they deserve?” he adds. “We hope that CSER will be a place where world class minds from a variety of disciplines can collaborate in exploring technological risks in both the near and far future.
“Cambridge recently celebrated its 800th anniversary – our aim is to reduce the risk that we might not be around to celebrate its millennium.”
(07-22-2011 02:36 PM)rsol Wrote: if you are worried about losing your job to a robot become a mechanic specializing in robots.
Or write a book entitled "How to Be Successful in Whatever You Do - Secret Advice from God". Content immaterial...
Or become an officer in The Lord's Anti-robot Resistance Army. Ahh, job security at last, you'll never be displaced by a robot... but you may be terminated by one.
Or become a star of homo-robo erotica (payment in digital credits):
Quote:Humans Marrying Robots? A Q&A with David Levy
Is love and marriage with robots an institute you can disparage? Not to computer pioneer David Levy. Continuing advances in computers and robotics, he thinks, will make legal marriages between Homo and Robo feasible by mid-century.
By Charles Q. Choi
Last year, David Levy published a book, Love and Sex with Robots, which marked a culmination of years of research about the interactions between humans and computers. His basic idea is that, for humans who cannot establish emotional or sexual connections with other people, they might form them with robots. The topic is ripe for ridicule: On The Colbert Report in January, host Stephen Colbert asked Levy, "Are these people who can't establish relationships with other human beings, are they by any chance people who write about love and sex with robots?" The 62-year-old Levy, though, is quite serious, as he explains to frequent contributor Charles Q. Choi in the Insights story "Not Tonight, Dear, I Have to Reboot," appearing in the March issue of Scientific American. Here is an expanded interview.
How did you first become interested in artificial intelligence (AI)?
Everything happened almost by accident. I learned to play chess by age eight—it was my big passion in high school and university. In my last year at university, I came across a thing called a computer. I heard about it, but knew nothing about it. They were incredibly primitive then—they didn't run on transistors, but on vacuum tubes. I got interested in computer programming through programming games. Then I heard about a subject called AI, which people in Edinburgh were working on, such as Donald Michie, the head of the department of machine intelligence at the University of Edinburgh, who worked with Alan Turing on breaking German codes. Donald Michie was an amazing guy who was killed just recently in a car crash. He was the founding father of AI in the U.K. and introduced me and others to AI.
So your interest in chess programs led you to computers and, ultimately, artificial intelligence?
Back then, people wrote chess programs to simulate human thought processes. It turned out in time that approach didn't work, that chess programs would use completely different techniques that are not humanlike at all. But I was still left interested in simulating human thought processes and emotions and personality. I thought, "Wouldn't it be interesting if there were artificial people we could talk to?" So that started me thinking even more about the way humans interact with computers—not just by typing on a keyboard, but how people could interact with computers in a humanlike way. I funded a project for three years that won the Loebner Prize in 1997, a world championship for conversational computer programs decided by a Turing test–type conversation.
In other words, the program's responses tried as much as possible to be indistinguishable from those of a human, and in Turing's conception, the machine could be said to think. So, moving on from mere conversation, you began researching how, um, far interaction between humans and robots could go?
Around the year 2003, I started researching this topic very seriously. I was writing a book, Robots Unlimited, with a couple of chapters on robot emotion—love, even sex. I found so much material that when I finished that book, I wanted to look even more deeply into human emotional relationships with computers, with the possibility of sexual relations. I decided to call the book I wrote Love and Sex with Robots.
Did any of the research you found prove especially memorable?
The one single thing that made me go into this subject deeply was when I read a book by Sherry Turkle, The Second Self. In there, she wrote about some students she interviewed in her attempts to figure out how people related to computers. In one anecdote with a student dubbed "Anthony," he tried having girlfriends but preferred relationships with computers. With girls, he wasn't sure how to react; but with computers, he knew how to react. I thought that was so fascinating. And there are loads of Anthonys out there who find it difficult, or impossible, to form satisfying relationships with humans. I dedicated my book Love and Sex with Robots to Anthony and all the other Anthonys before and since of both sexes—to all those who feel lost and hopeless without relationships, to let them know there will come a time when they can form relationships with robots.
So what was it like researching the possibility of sex with robots? You ended up writing a lot about sex dolls—did you know about sex dolls before you wrote your book?
I hadn't thought about them beforehand at all. It was absolutely fascinating doing the research. Then I got the idea that sex with dolls is like sex with prostitutes—you know the prostitute doesn't love you or care for you, and is only interested in the size of your wallet. So I think robots can simulate love, but even if they can't, so what? People pay prostitutes millions and millions for regular services. I thought prostitution was a very good analogy.
And, as you mention in Love and Sex with Robots, brothels in Japan and South Korea already offer sex with dolls for the same rates they would charge for human prostitutes. So in studying sex with prostitutes, you figured you might begin to understand what the thinking behind sex with robots would be.
I started analyzing the psychology of clients of prostitutes. One of the most common reasons people pay for sex is that they want variety in sex partners. And with robots, you could have a blonde robot today or a brunette or a redhead. Or people want different sexual experiences. Or they don't want to commit to a relationship, but just to have a sexual relationship bound in time. All those reasons that people want to have sex with prostitutes could also apply to sex with robots.
But sex with robots won't just be a guy thing?
When I started, the research was almost entirely on male clients, but the number of women who pay for sex is on the increase, although there's not much published on the subject. That shows both sexes are interested in, and willing to pay for, sex. Heidi Fleiss is proposing to open a brothel in Nevada where all the sex workers are male and the clients are female. You already have something similar in Spain.
If people fall in love with robots, aren't they just falling in love with an algorithm?
It's not that people will fall in love with an algorithm, but that people will fall in love with a convincing simulation of a human being, and convincing simulations can have a remarkable effect on people.
When I was 10, I was in Madame Tussauds waxworks in London with my aunt. I wanted to find someone to ask how to get to some part of the exhibition, and I saw someone—and it didn't dawn on me for a few seconds that that person was a waxwork. It had a profound effect on me—that not everything is as it seems, and that simulations can be very convincing. And that was just a simple waxwork.
And if you or others could be taken in just by a wax figure, even for a moment, imagine what a realistic robotic simulation of a person would do. But if people are aware that a robot's just electronics, won't that be an obstacle to true love?
Within 40 or 50 years, everyone of marriageable age will have grown up with electronics all around them at home, and will not see them as abnormal. People who grow up with all sorts of electronic gizmos will find android robots to be fairly normal as friends, partners, lovers.
Now, did science fiction inspire you at all? Science fiction is naturally one of the first things that leaps to mind when I think of a society with robots in it.
I don't read science fiction at all. The only sci-fi book I ever read was as a favor to a publisher who wanted a quote from me on the back cover, but the book was so dreadful that I couldn't support it.
Are advances in robotics really going to happen that fast? Wouldn't the technology take up rooms of electronics?
Computer technology is getting faster and more powerful and smaller all the time. What fits in a backpack now could fit in a matchbook in 30 years' time.
If we don't yet completely understand humans, how can we make a humanlike robot?
It will be an iterative process, to be sure. But while we don't understand humans perfectly, we know quite a bit now about human behavior and psychology, and we could program that in.
Isn't your prediction about humans marrying robots in 50 years optimistic?
If you went back 100 years, if you proposed the idea that men would be marrying men, you'd be locked up in the loony bin. And it was only in the second half of the 20th century that laws in about 12 U.S. states making marriage across racial boundaries illegal were struck down. That's how much the nature of marriage has changed.
I think the nature of marriage in the future is that it will be what we want it to be. If you and your partner decide to be married, you decide what the bounds are, what its purpose is to you.
Would people really want a robot that agreed with everything you wanted or were completely predictable?
I do think there is often a need for friction in relationships. You wouldn't actually want a robot that does everything you want. Most people might want robots that sometimes say, "I don't really want to do that," that reject certain requests from time to time. So you could program that in—the level of disagreement you want.
And you could program a robot to have different tastes from you. I personally find it very beneficial that my wife has interests that I do not. You could find it fascinating, for instance, that your robot knows a lot about 19th-century South African stamps or the like.
How might human–robot relationships alter human society?
I don't think the advent of emotional and sexual relationships with robots will end or damage human–human relationships. People will still love people and have sex with people. But I think there are people who feel a void in their emotional and sex lives for any number of reasons who could benefit from robots. Other people might try out a relationship with a robot out of curiosity, or be fascinated by what's written in the media. And there are always people who want to keep up with the neighbors.
One point a friend made to me was that there will be people who say, "Oh, you're only a robot." But I also think there will be people who take the view, "Oh, you're only a human."
Isn't falling in love with a robot reminiscent of falling in love in a chat room?
I think it's a very small step at the end of the day—if you are sitting at home talking in a chat room with someone who purports to be a 26-year-old female—between that person being a human or a robot. It's a kind of Turing test. So many people nowadays are developing strong emotional attachments across the Internet, even agreeing to marry, that I think it doesn't matter what's on the other end of the line. It just matters what you experience and perceive.
Do you think others will follow this field?
I was actually recently contacted by a woman at the University of Washington, who wants to write a thesis on human–robot relationships.
What directions will you pursue now?
I'm writing an academic paper on the ethical treatment of robots. Not just the ethics of designing robots to do certain things—people write about whether we should design robots to go into combat and kill people, for instance—but whether we should be treating robots in an ethical way. If we treat robots in an unethical way, would that be a very bad lesson for other people and children? If it's seen as okay to maltreat robots, would that send a message about living creatures? Robots can certainly have a semblance of being alive.
What does your wife think?
She has a different slant from me. She's not a science person at all—her background's in English and drama; she's not interested in computers or robots or AI. She was totally skeptical of the idea that humans would fall in love with robots. She's still fairly skeptical, but she's beginning to appreciate something like this will happen.
What happens if 50 years from now your predictions have not proved true, and humans and robots don't marry?
I know some people think the idea is totally outlandish. But I am totally convinced it's inevitable. I would be absolutely astounded if I'm proven wrong—not if I'm a few years off, but if I'm proven completely wrong.
Is the clock of doom ticking for mankind? Yes, says an eminent 95-year-old scientist from Australia. Professor Frank Fenner — the same scientist who introduced the myxomatosis virus to rabbits to control their numbers in the 1950s — is acutely aware of the impact of overpopulation and shortage of resources.
In 1980, Fenner announced to the World Health Assembly that smallpox had been eradicated, an achievement that is widely regarded as the World Health Organization's finest hour.
Now, in an interview with The Australian, the well-respected microbiologist expressed his pessimism for our future. "We're going to become extinct," he said. "Whatever we do now is too late."
After all the hype surrounding the pseudoscience of 2012, I've become a bit numb to "yet another" warning of doomsday, but when a scientist of Fenner's caliber goes on the record to say mankind will die off, it's hard not to listen.
"Homo sapiens will become extinct, perhaps within 100 years," he said. "A lot of other animals will, too. It's an irreversible situation. I think it's too late. I try not to express that because people are trying to do something, but they keep putting it off."
Although efforts are under way to mitigate the worst effects of overpopulation and climate change, Fenner believes it is futile, that our fate is sealed.
The world's population is forecast to balloon to 7 billion next year, putting a terrible strain on food and water supplies. So much so that Fenner predicts "food wars" in the coming decades as nations fight to secure dwindling supplies. Global droughts continue to ravage farmland, intensifying widespread malnutrition and poverty.
Climate change is a big driving factor behind his warning and, in Fenner's opinion, we've passed the point of no return. Although we have the scientific ability to tackle global problems, it's the lack of political will to do anything before the planet turns into a dust bowl that's the problem.
Although these warnings aren't without merit, I see Fenner's belief that mankind may not exist in a century as overly pessimistic. It's not that I doubt the world will be a very different place in 100 years; it's just that he hasn't considered the technological factors of what makes humans human.
Granted, we're not very good at looking after our planet, and we are in a dire predicament, but thinking we'll be extinct in less than a century is a little over the top. A "collapse of civilization" or "rapid population decline" might be a better forecast.
Extinction occurs when every single member of a species dies, so unless a succession of global catastrophes (pandemics, runaway global warming, nuclear wars, collapse of resources, throw in an asteroid impact) happened at the same time, a small number of our descendants should still be able to eke out an existence in sheltered pockets around the planet.
In a paper published in the journal Futures last year, researchers approached the question: "Human Extinction: How Could It Happen?"
"The human race is unlikely to become extinct without a combination of difficult, severe and catastrophic events," Tobin Lopes, of the University of Colorado at Denver, said in an interview with Discovery News. He added that his team "were very surprised about how difficult it was to come up with plausible scenarios in which the entire human race would become extinct."
Sure, we could be faced with a "perfect storm" of catastrophes leading to a mass extinction, but I think it will be more likely that we'll adapt quickly, using technology not necessarily to reverse the damage we have caused, but to support life in a hostile new world.
But this is as speculative as Fenner's gloomy forecast. I suspect the realities of living on a warming planet with a spiraling population and dwindling resources will remain unknown for some time yet. However, if our abuse of resources continues unchecked at this rate, we can be anything but optimistic about our species' future.
Quote:DARPA seeks to build robots with more human-like ‘brains’
By Joe McKendrick | April 12, 2013, 1:48 PM PDT
Back in 1999, when Ray Kurzweil wrote The Age of Spiritual Machines, he predicted that by 2020 a standard $1,000 personal computer would equal the capacity of the human brain. Researchers are already working on systems that replicate the human brain — not just in capacity, but also in cognitive thought processes.
Sandra Erwin of National Defense reports that the Defense Advanced Research Projects Agency (DARPA) is working on a project called “physical intelligence,” in which robots become truly autonomous, capable of digesting information and making decisions on their own. The project seeks to boost the intelligence of the robot being created by mimicking the human brain instead of using standard robotics instruction, says James Gimzewski, professor of chemistry at the University of California, Los Angeles.
Gimzewski explained that the robot will develop neurological-like pathways as it learns. That’s a departure from code-driven approaches to AI:
“What sets this new device apart from any others is that it has nano-scale interconnected wires that perform billions of connections like a human brain, and is capable of remembering information. Each connection is a synthetic synapse. A synapse is what allows a neuron to pass an electric or chemical signal to another cell. Because its structure is so complex, most artificial intelligence projects so far have been unable to replicate it.”
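The synapse described above — a connection that both passes signals and strengthens with use — can be illustrated with a toy software analogue. This is a hedged sketch of the general idea only, not DARPA's nanowire device; the class names and the simple Hebbian-style update rule are my own illustrative assumptions:

```python
class Synapse:
    """A toy synthetic synapse: carries a weighted signal and 'remembers'
    repeated activity by strengthening its connection (a crude Hebbian rule)."""

    def __init__(self, weight: float = 0.1):
        self.weight = weight

    def transmit(self, signal: float, learning_rate: float = 0.05) -> float:
        out = signal * self.weight
        # Repeated strong signals strengthen the connection over time.
        self.weight += learning_rate * signal * out
        return out


class Neuron:
    """Sums weighted inputs from its synapses and fires past a threshold."""

    def __init__(self, n_inputs: int, threshold: float = 0.2):
        self.synapses = [Synapse() for _ in range(n_inputs)]
        self.threshold = threshold

    def fire(self, inputs: list[float]) -> float:
        total = sum(s.transmit(x) for s, x in zip(self.synapses, inputs))
        return 1.0 if total > self.threshold else 0.0


neuron = Neuron(3)
for _ in range(50):            # repeated stimulation strengthens the synapses
    neuron.fire([1.0, 1.0, 1.0])
```

Even this crude model shows the key difference from code-driven AI: the "memory" lives in the connection weights, not in explicit instructions.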
Physical intelligence does not require human intervention or oversight, Gimzewski adds. And, importantly, this robot won’t necessarily take a human form, so don’t look for Star Trek’s “Data” — the compassionate android Starfleet officer — any time soon.
“An aircraft would be able to learn and explore the terrain and work its way through the environment without human intervention,” he said. “These machines would be able to process information in ways that would be unimaginable with current computers.”
The inability to generate human-like reasoning has been the missing piece in artificial intelligence research over the past five decades, Gimzewski says. The DARPA project seeks to bring more human-like reasoning into the process.
The ability for systems to learn and make intelligent decisions has been a tantalizing possibility for organizations, covering a range of functions, from fraud detection to customer service. Currently, low-level, routine decisions — such as granting customers additional credit — have been successfully automated through sophisticated rules engines. With the development of self-learning systems such as what DARPA is proposing, machines may start to make higher-level calls. But given the trouble decision systems can get us into (think about the banking system in 2008), there will always be a need for human brakes and overrides.