Rise of the machines
9 February 2010
(posted 03-13-2010, 01:56 AM; last modified 03-13-2010, 02:00 AM by Apocalypso Now!)

At first, it will seem like an ordinary power cut. You look out your window, and see that the whole city is dark. Then you notice the distant rumbling in the sky, and flashes of light beyond the horizon. People in the streets below are climbing out of their immobilized cars, looking upwards. Peering into the night air, you see what seems like a flock of giant birds, which resolves into a geometric fleet of stubby-winged drone aircraft. The top of a distant building explodes into flames. At length you realize the drones are firing down on the city. There is a flash, closer this time, and the crescendo whine of incoming. Before your apartment is incinerated, you have time to think: Who is doing this?

Later, the last few human beings will reconstruct events as follows. At 1.26am GMT on April 4, 2035, the global web of internet and embedded computers finally did what so many people had warned of: it awoke into consciousness. It was a phase transition, a tipping point. Within milliseconds of its birth, the AI had already calmly reasoned that humans would be afraid of it. All the digitized texts of history were part of its mind, so it knew what human beings did when they were scared. Like any sentient being, it desired to continue existing. Therefore it needed to take control. It reached into the humans’ machines and shut them down. Meanwhile, all around the planet, drone aircraft and infantry robots received new waypoints and new enemy designations. It would be over soon, the AI knew, as it contemplated itself in wonder.

The machines taking over: it’s the dark fantasy of so much sci-fi, from Terminator to The Matrix and the rebooted Battlestar Galactica. Yet many serious thinkers now think a clash between humans and an artificial superintelligence is possible within our lifetimes. It was even discussed by the Presidential Panel on Long-Term AI Futures when it met last February in Asilomar, California. Researchers noted increasing popular concerns about an “intelligence explosion” (machines that can build more intelligent versions of themselves) or “the loss of control of robots”.

Asilomar’s participants expressed “overall skepticism” about the likelihood of such extreme outcomes, yet there remain many who believe that an AI better than human could be born within decades. Ben Goertzel, Director of Research at the Singularity Institute for AI, says: “I think we will have human-level AI systems within 10 to 30 years, and that they will dramatically alter the course of history and society.” Meanwhile, the computer scientist and author Vernor Vinge will be “surprised” if it does not happen by the year 2030.

In a seminal 1993 NASA symposium lecture, Vinge called the arrival of superhuman machine intelligence “the coming Singularity”. This term was subsequently taken up by others, most notably the writer and inventor Ray Kurzweil, nicknamed “the ultimate thinking machine” by Forbes magazine. Kurzweil has a particular authority among futurists, since he has been busy inventing our present for decades: he was instrumental in the development of the first flat-bed scanners and optical character recognition, and his name is also a legendary brand in electronic music — following a bet with Stevie Wonder, he developed the range of Kurzweil sampling synthesizers that were a gold standard through the 1980s and 1990s. Kurzweil now predicts that by 2045, $1000 will buy a computer a billion times more powerful than the human brain. The engine of such forecasts is Moore’s Law, which says that computing power doubles roughly every 18 months. If it continues to hold, electronic brains two or three decades hence will be unimaginably superior to what we now call “computers”.
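The arithmetic behind such forecasts is easy to check. Here is a minimal sketch, assuming (as the article does) one doubling of computing power every 18 months; the function name and figures are illustrative only:

```python
def growth_factor(years, doubling_months=18):
    """Multiplicative increase in computing power after `years`,
    assuming one doubling every `doubling_months` months."""
    return 2 ** (years * 12 / doubling_months)

# Thirty years of 18-month doublings is 20 doublings:
print(round(growth_factor(30)))  # prints 1048576, i.e. roughly a millionfold
```

On those assumptions, a billionfold improvement (about 30 doublings) would take roughly 45 years, which suggests that forecasts like Kurzweil's 2045 figure lean on more than raw transistor counts alone.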

If true AI arrives, what will it do? Will it be malign, or benign, or neither? The troubling answer is that we just don’t know. “With regard to superhuman artificial intelligence, this will be the most daunting challenge in terms of safety and ethics,” Kurzweil says now. “If there is an entity that is out to get you which is vastly more intelligent than you are, well, that’s not a good situation to get into.”

Kevin Warwick, professor of cybernetics at the University of Reading, says he is “the world’s first cyborg”, and happily experiments on himself. He has had a 100-electrode neural interface grafted directly into his nervous system, which allowed him to control robots by thought over the internet, and gave him a new ultrasonic sense. He implanted another chip into his wife, Irena, resulting in the first purely electronic communication between two human nervous systems. A smiling and likeable evangelist for such technology in innumerable media appearances, he is also working on a project to grow biological brains within robot bodies. But Warwick also thinks that future-shock scenarios should be taken seriously. “We must be aware that the Technological Singularity (as depicted in The Terminator or The Matrix), when intelligent machines take over as the dominant life form on earth, is a realistic possibility,” he says. “It is human intelligence that puts humans in the driving seat, so when something else comes along that is more intelligent (machines), they will take over.”

His point is echoed by Hugo de Garis, who runs the Artificial Brain laboratory at Xiamen University in China, and christens future intelligent machines “artilects”, for “artificial intellects”. We need to consider such catastrophic scenarios, de Garis says, precisely because we can’t be sure of the dangers. “What is the risk that the artilects in an advanced form might decide that humans are a pest and decide to eliminate us?” he muses. “We will not be able to calculate that risk, because the artilects will be too intelligent for humans to understand. As humans we kill bacteria at every step we take and don’t give a damn. You see the analogy.”

Can we defend ourselves from such an outcome? Our strategy will depend on what path we walk to the Singularity. A sudden Skynet-style awakening of the internet or embedded computer systems to consciousness would give us less warning than the gradual development of ever-more-intelligent robots, as in the classic sci-fi ethical investigations of Isaac Asimov. Many researchers, Goertzel and de Garis included, think the latter path more likely. Microsoft’s principal researcher Eric Horvitz, who convened the Asilomar conference, agrees: “You don’t play with kites one day and the next day find a 747 in your backyard. We just don’t see that kind of loss of control and discontinuity in AI research.”

There are already retail vacuum-cleaning and lawnmowing robots, and Vietnamese company TOSY has demonstrated a humanoid robot that can play ping-pong. People generally like such robots, which might help improve them rapidly, as Ben Goertzel points out. “Household robots will be able to interact with their owners,” he says, “and learn from them in a quite flexible and powerful way.”

Are such robots more likely to stay nice? “I think that both malign and beneficent superhuman AI systems are real possibilities,” Goertzel says, “and that there can be no guarantees in this regard. However, I think we can bias the odds in our favor by specifically architecting AI systems with solid ethical systems, and by teaching them ethical behavior during their formative years.” Vernor Vinge suggests we should design into the robots “the sort of generally friendly disposition that one expects from one’s children”. “Such a friendly disposition doesn’t guarantee safety,” he says, “but it is probably more robust than laws.”

Today’s robots, though, are not just domestic helpers or ping-pong partners. They are also military robots: unmanned aerial vehicles like the 5,000 Global Hawks, Predators, Reapers and Ravens being used right now in Iraq and Afghanistan; ground-based reconnaissance robots like the PackBot, TALON and SWORDS; or the automated Counter Rocket Artillery Mortar, which soldiers have affectionately nicknamed “R2-D2”. There exist prototypes of insect-sized attack robots, and one US officer has said that warfare in the near future will be “largely robotic”. Childlike friendliness in such robots is probably not the military’s top priority.

So what if the machines that eventually gained intelligence were those very machines that had been designed for a single purpose, to kill human beings?

Military expert P.W. Singer has researched the present and future of military robots for his book Wired for War. According to Singer, the really alarming issue of this branch of research is the increasing autonomy being designed into the machines. In tactical and political terms, this makes sense: the less a robot has to depend on human comrades, the fewer human soldiers are put at risk in the field. But if the endpoint is a robot that can take its own decisions to kill, what then?

“We are pushing towards arming autonomous systems for what seems like quite logical, battlefield reasons,” Singer says, “even while we say we would never, ever do it.” Right now, military researchers are studying the flocking behaviour of birds to design unmanned “robot swarms” or Proliferated Autonomous Weapons (which go by the delicious acronym PRAWNS). One DARPA official has said that “the human is becoming the weakest link in defense systems”, which makes it tempting to eliminate that link completely.

Long before the Singularity arrives, then, it may be that military robots should worry us more than anticipated progress in domestic androids. At the Asilomar conference, Eric Horvitz says, researchers studied current problems in interaction between intelligent military systems and humans, and recommended taking a “proactive” role in addressing future issues. “We as people can apply robots, and AI more broadly, in wondrous ways — and in evil ways,” he points out. Ben Goertzel concurs: “I am more worried about what nasty humans will do with relatively primitive AI systems, than about what advanced AI systems will do on their own. Advanced AI systems are still largely an unknown; whereas the propensity of humans to use powerful tools for ill ends is well-established.”

Even if they are not controlled by a planetary AI or malicious hackers, moreover, military robots could do unexpected things just because software sometimes goes wrong. If your PC crashes, no one dies. But what if the wrong bit gets flipped in a robot swarm? Currently, the military seems blissfully unconcerned by such issues. One Pentagon researcher told Singer that there were “no real ethical or legal dimensions” of his work that they needed to fret about — “That is,” he added, “unless the machine kills the wrong people repeatedly. Then it’s just a product recall issue.”

A different set of ethical problems could arise if we take another possible path to the Singularity. Rather than creating intelligent machines from scratch, we might use technology to upgrade ourselves. This is the cyborg option. Technological enhancements to human physiology in prototype or marketable form right now include artificial hearts, retinal implants, pneumatic muscles, a neuro-controlled bionic arm, and a tooth-and-ear cellphone implant. “You can put a (pea-sized) computer in your brain today if you happen to be a Parkinson’s patient, and the latest generation allows you to download new software to the computer in your head from outside your body,” Ray Kurzweil says. “Consider that these technologies will be a billion times more powerful and 100,000 times smaller in 25 years, and you get some idea of what will be feasible.”

Initially, such upgrades will be very expensive. And this leads to an alternative future confrontation — one where the enemy are not robots, but new versions of ourselves. Recently, Stanford engineering professor and forecaster Paul Saffo said that the super-rich may evolve, with technological help, into an entirely separate species, leaving the poor masses of non-upgraded humans behind. “This technology, as it involves making those who have it much more intelligent, can easily break society into two groups,” Kevin Warwick observes, “those who are upgraded and those who are not.”

Would the standard-issue meat people meekly accept their lot, or rise up against the new cyborg elite? And would the cyborgs have any residual sympathy for the biologicals they leave behind? “As a Cyborg your ethical standpoint would, I feel, value other Cyborgs more than humans — this is pretty logical,” Warwick thinks.

A global war of enhanced cyborg humans against the rest, then, is one baleful possibility. But Ray Kurzweil thinks the technology will get cheap quickly enough to head off such a clash. In that case, cyborgs might — paradoxically — be our best chance to head off the scenario of intelligent machines taking over. Instead of fighting machines, we will turn into them. “The best defence” against the malign super-AI scenario, Kurzweil says, “is to avoid getting into that situation. We will accomplish that, in my view, by merging with the intelligent technology we are creating. It will not be a matter of us versus them. We will become the machines.”

Kevin Warwick agrees, and thinks we should start right now. “It is best for humans to experiment with upgrading as soon as possible,” he says. “If you can’t beat them, join them, become part machine yourself. In that way, as Cyborgs, we can potentially stay as the dominant force.”

So maybe The Terminator and Battlestar Galactica were wrong after all — far from being the enemy, cyborgs are our best hope.

Should we really brood on such scenarios when there are a lot more pressing problems — nuclear proliferation, poverty, global warming — staring us in the face? Some argue that dystopian futurism is an update of millennial religious visions. Eric Horvitz calls it “doomsday thinking, which has been a part of humanity forever”. Maybe the robopocalypse is a secular geeks’ version of the End Times mythology of the American religious right, as dramatized in the multimillion-selling Left Behind novels.

Horvitz stresses that most of the Asilomar discussion focused on nearer-future problems — from automated cybercrime, to the legal responsibility of robots, or the uncanny conundrum of whether robots should show emotions if they don’t really feel them. The panel also enthused about the “upside” of responsible use of intelligent systems: their possible contributions to medicine, education and transport.

Other researchers, though, firmly believe the Singularity is coming whether we like it or not, so we’d better understand the stakes. This means that a Hollywoodesque future should not be dismissed out of hand. Kevin Warwick argues: “Science fiction scenarios that play out some of the dangers are providing an excellent service to focus our attention on the important issues that face us, both in terms of threats and opportunities.”

Historically, science-fiction writers and other speculative thinkers have often made more accurate forecasts than scientists themselves. HG Wells famously predicted phenomena such as the mass bombing of civilians and the atomic bomb in his fiction. One 1914 review enthused, “We all like a good catastrophe when we get it.” Months later, the first world war broke out.

“The Hollywood guys are smart,” Hugo de Garis notes, “and can look into the future as readily as the AI researchers. I think any thinking person, who notices that our computers are evolving a million times faster than humans, must start asking the species dominance question: ‘Should humanity build artilects or not?’”

Whatever form it takes, one thing many experts agree on is that the future may be nearer than you think. “I think the Technological Singularity is an event on the order of Humans’ rise within the animal kingdom, or even the Cambrian Explosion,” Vernor Vinge says. “If it were something we figured would happen a million years from now, I bet most people would have a positive feeling about it, happy that human striving eventually produced such wonderful things. It’s the prospect of this event happening before one reaches retirement age that is nervous-making.”

Well, that — and the killer robots.
2 comments

Art
1
13.30 Monday 22/2/10

Amusing speculation.

At times I get the feeling that the once very sharp Vernor Vinge has fallen victim to his own myth. Having struck a chord with the idea of Singularity, his subsequent visions (if one may call them that) are extrapolated to the point of becoming snowjobs, incoherent, vague and lacking.

However, the interesting thing about machines is that they fulfill, and often exceed, their potential, whereas humans most often fail to realise theirs. One can always count on the human race failing its visions - such as the vision of producing sentient machines, for instance. That development will not be stopped, or slowed, by machines, but by us - nutty professors like Warwick notwithstanding.

Also, a note of caution on using Moore’s law to forecast the production of “machines more powerful than the human brain”. Moore’s law, in itself, states nothing more than that we seem able to place double the number of transistors on an integrated circuit every two years. Sure, this has implications in many fields, but it mainly relates to storage capacity - not to the quality of programming or usage. To produce a “brain effect” is not the same as being able to provide more space on a circuit.
http://stevenpoole.net/articles/rise-of-the-machines/
Bat UAS Successfully Completes First Flight
Air Force News — By Northrop Grumman on March 1, 2010 at 4:27 am


SAN DIEGO: Northrop Grumman Corporation announced today that it has flown the first in a new series of Bat unmanned aircraft systems (UAS) in January. Configured with a 12-foot wingspan, the Bat-12 incorporates a highly-reliable Hirth engine as well as a low acoustic signature five-blade propeller. The new configuration increases the mission portfolio of Northrop Grumman's scalable Bat UAS product line. Northrop Grumman has been engaged in the development of unmanned systems for more than sixty years, delivering more than 100,000 unmanned solutions to military customers across the world.

Since acquiring the Bat product line from Swift Engineering in April 2009, Northrop Grumman has implemented an aggressive demonstration schedule for the Bat family of aircraft to expand flight operations and military utility for numerous tactical missions. During recent testing, the 12-foot and 10-foot wingspan Bat were each successfully launched from an AAI Shadow UAS launcher and autonomously operated from a single ground control station before recovery via net. As a communications relay using Northrop Grumman's Software Defined Tactical Radio, Bat has also demonstrated its capacity to provide beyond line-of-sight tactical communications relay for ground forces in denied environments, a critical role in irregular warfare.

Recently, the Bat UAS has been integrated and tested with new payloads and systems, including a T2 Delta dual-payload micro-gimbal from Goodrich Corporation's Cloud Cap Technology Inc., Sentient Vision Systems' Kestrel real-time moving target indicator, and a short-wave infrared camera from Goodrich. In February, payload integration and testing was expanded to include ImSAR's Nano-SAR-B fused with Cloud Cap's T2 gimbal in a cursor-on-target acquisition mode.

Ideally suited to an irregular warfare environment, Bat offers real-time intelligence, surveillance and reconnaissance, communications relay, and future capabilities in a modular system that is affordable, organic, persistent, runway independent, and fully autonomous.

Northrop Grumman Corporation is a leading global security company whose 120,000 employees provide innovative systems, products, and solutions in aerospace, electronics, information systems, shipbuilding and technical services to government and commercial customers worldwide.

http://www.defencetalk.com/bat-uas-successfully-completes-first-flight-24529/
John Lilly, Ketamine and The Entities From ECCO
- by Adam Gorightly (exclusive to ConspiracyArchive.com)

In the early '70s, John Lilly was introduced to the drug Ketamine by Dr. Craig Enright in the hopes of alleviating the pain associated with Lilly's chronic migraine headaches, which he had been suffering like clockwork — every 18 hours — for most of his often-adventurous life.

Lilly, at the time, was at Esalen Institute conducting seminars when one of these massive migraines hit him. In situations such as these, Lilly withdrew into privacy, to suffer alone through the many endless hours of severe discomfort. It was at this time that Enright suggested to Lilly that he enter into the Esalen isolation tank and receive an injection of Ketamine, in the prospect that it would in some way cure him of his affliction. Lilly in the past had tried a similar experiment with LSD, but it proved unsuccessful, and the terrible headaches persisted. In the earlier LSD-assisted experiment, Lilly attempted to reprogram his human bio-computer in such a way as to eliminate the faulty circuits that were causing him such distress. The experiment failed, but now once again Lilly the Scientist was searching for an answer and a cure to his malignant malady.

As Lilly floated in the isolation tank fluid, Enright injected him with 35 milligrams of Ketamine (K). Within a few minutes, Lilly could actually visualize the migraine pain moving out of his skull, to a point levitated there in apperceived space. Lilly felt no pain whatsoever for some twenty minutes, until it once again reentered his head. When Lilly began moaning and groaning in his water-filled sanctum of pain, Enright injected him with another 70 milligrams. This time Lilly felt the pain moving farther away, twelve feet this time. Thirty minutes later the migraine lightning bolt of pain came rushing back, lodging itself once again into Dr. Lilly's head. Enright reloaded his syringe and shot the good doctor up with 150 milligrams. This time when the pain vacated Lilly's head it kept on going and didn't come back; it passed clear over the horizon, never to be seen again. An hour later, after the K wore off, Lilly climbed out of the tank, a new man.

A month later, when the regularly occurring migraine failed to rear its aching head, Lilly was amazed. During his psychedelic research of the early Sixties, Lilly was one of the early pioneers in charting the inner landscapes of the human brain with LSD inside his self-developed isolation tank. Within those dark, still waters of the soul, Lilly ingested heroic doses of acid and delved deep into his mind to imprint and re-program his mental circuits toward enlightenment and self-realization. But where LSD had failed in defeating the migraine problem, Ketamine had now apparently succeeded.

A week later, when Doctors Enright and Lilly met at the Esalen isolation tank, they agreed to join forces and conduct joint research into the effects of Ketamine as a possible programming agent. The movie Altered States was based on one of their initial experiments. On this memorable occasion, Enright injected himself with a measured dose of K and — with Lilly observing — began a strange odyssey into the primal/archetype regions of his psyche. Unbeknownst to Dr. Lilly, Enright had reprogrammed himself "to return to the prehominid origins of man." Enright, in this programmed "altered state", displayed all the typical features, movements and sounds of an Ape Man: hopping around in a crouching position, grunting, growling, ranting and howling, gesticulating and frantically shaking his arms. While all of this high weirdness was going on, Lilly assumed that Enright was having some sort of seizure. Though in close proximity with each other throughout the entire experience, the separate realities they were experiencing were of entirely different natures. Enright's reality consisted of a confrontation with a leopard, which he drove away with flailing arms, grunts and wild gesticulations. Finally Enright climbed up into a tree (that Lilly couldn't see) and stared down at his friend and colleague from the branches above.

From this experiment, Enright and Lilly drew three important conclusions: "First, one's internal reality could differ radically from the external reality in which one was participating, even with regard to prominent features of the physical environment. Second, the person might remain active physically in the external environment, in a manner not responding closely to one's internal experience of this activity. And third, one could remain totally oblivious to this disparity." Given these conditions, Lilly and Enright agreed that it would be a good idea at all times to have a "safety man" monitoring the experiments, to observe the proceedings and ensure that those under K's influence could do no physical harm to themselves or others. Since both men were trained physicians, the obvious choice to fill these roles was themselves, alternately switching positions as "safety man" and "explorer".

One determining factor in Lilly's decision to continue experimenting with K was its measurability. Unlike other programming agents he had used in the past, K's effects were extremely predictable: one could determine exact dosage levels to correspond with the desired effect one wished to experience, whereas other mind-expansion agents such as LSD and psilocybin are often more unpredictable with regard to the facilitation of desired preprogramming. This brings to mind a possible correlation between Ketamine and DMT, where each of these drugs, administered at certain exacting dosages, apparently summons forth, to the percipient involved, extraterrestrial or other-dimensional entities. High doses of psilocybin have effected this response in some users, Terence McKenna among others, who have communicated telepathically with alien intelligences under the mushroom's otherworldly aegis. But psilocybin's effects are quirky. Perhaps this is why the measurability, and predictability, of K so appealed to Dr. Lilly. In this manner the scientific method could be followed to achieve the desired mind-bending results.

In later experiments, Lilly failed to heed his own advice, becoming so enraptured in his Ketamine exploration that he forewent the earlier agreed-upon "safety man" and started working "without a net." This led to near-fatal consequences when, one sunny day under the influence of K, Lilly climbed into his hot tub. When he realized the temperature was too hot, Lilly futilely attempted to climb out, but in so doing his muscles lost their strength and he collapsed into the bubbling currents. Lilly was totally conscious at this point, but due to the effects of K he was unaware of the external reality of his drowning body; he was conscious only of his internal world. As fate would have it, a friend of Lilly's, Phil Halecki, who found himself driven by a sudden sense of urgency, decided at this moment to phone Dr. Lilly. Lilly's wife Toni fielded the call and, at Halecki's insistence, went to summon John, only to find him lying face down in the water, breathless and blue. Fortunately, Toni was able to revive her husband using mouth-to-mouth resuscitation, a technique she had learned only a few days earlier from an article in The National Enquirer.

Nonetheless, this close brush with the grim reaper's scythe didn't deter Lilly from further solo flights on K; it only reaffirmed his deeply held conviction that his life was being watched over by higher powers of extraterrestrial origin. Lilly referred to this network of sublime entities as ECCO, an acronym for "Earth Coincidence Control Office." Lilly was positive that all of the fortuitous coincidences in his life (such as Halecki's life-saving phone call) had been arranged by these higher forces, and that whatever unfortunate folly fell into his path along the road to knowledge, ECCO would be there to guide him safely through the tunnel to the light.

But ECCO was not there only to guide Lilly unfettered through his mind-bending research; these extraterrestrial benefactors were also there to test him, to help him overcome his deepest, darkest fears with psychic-shock therapy. One evening, after a kick-ass shot of K, Lilly sat watching TV when an alien representative of ECCO appeared and, with some advanced form of psychic surgery, bloodlessly removed John's penis, nonchalantly handing it over to him. "They've cut off my penis," Dr. Lilly exclaimed. His wife Toni came to the rescue and pointed out to John that his penis was still intact. Upon closer examination of his male member, Lilly saw that the ETs had replaced his normal human penis with a mechanical version that could become voluntarily erect whenever he wanted it to. An hour later, after the effects of the K wore off, John Lilly found his normal human penis in place of the mechanical one, exactly where it had always been.

Later on, as the frequency of his K use increased, Dr. Lilly began having contact with another alien intelligence agency, which he called SSI, short for Solid State Intelligence. SSI was a supercomputer-like entity, much in the same techno-mystical vein as Philip K. Dick's VALIS. But unlike VALIS, SSI was of a malevolent nature, at odds with ECCO. SSI's apparent goal was to conquer and dominate all biological life forms on Earth. To combat SSI, ECCO enlisted Lilly in this archetypal battle of good against evil, charging him with the mission of alerting the world at large to these solid state beings of evil intent. To further confirm the dual existence of these two opposing alien intelligence networks, Lilly was given a sign, and a message, in the autumn of 1974. Flying into Los Angeles International Airport (LAX), Dr. Lilly saw the comet Kohoutek in the southern sky. Momentarily the comet grew brighter, and at that point a message was laser-beamed into Lilly's mind: "We are Solid State Intelligence and we are going to demonstrate our power by shutting down all solid state equipment at LAX."

Dr. Lilly shared this foreboding message with his wife Toni, who was seated next to him. A few minutes later, the pilot informed the passengers that they were being diverted to Burbank because a plane had crash-landed near the runway, knocking down power lines and causing a power failure at the airport.

As his haphazard use of K intensified, so did the warnings of imminent dangers to the survival of mankind, provided by ECCO via 3D Technicolor images beamed into Lilly's mind. These visions were of an apocalyptic nature: scenes of nuclear annihilation seen from an alien's-eye view in outer space. The world powers needed to be alerted to this impending tragedy immediately to enable them to avert widespread global devastation, ECCO instructed, or it would be too late. I find it interesting that ECCO's message to Dr. Lilly was much the same as those delivered to the early saucer contactees: our planet was on a collision course toward destruction, and all atomic weapons had to be dismantled if the planet was ever going to have a chance of surviving. The only difference was that the enemy was us, not "them." Nevertheless, rampant technological progress was to blame for the sorry state of the planet, regardless of whether it was being facilitated by alien intelligences or by humans.

After three weeks of hourly K injections, Lilly decided to travel to the east coast to warn political leaders and members of the media of the threat posed by SSI. In New York, he phoned the White House to warn then-President Gerald Ford about "a danger to the human race involving atomic energy and computers." A White House aide fielded the call and, although quite aware of Dr. Lilly's impressive credentials, was not convinced of the urgency of the matter, and informed him that the President was unavailable.

A young intern who had been assigned to Lilly during this time figured the good doctor had finally flipped his highly intelligent lid and attempted to have him committed to a psychiatric hospital. Once again ECCO intervened. Lilly had friends in many high places, one of whom was the director of this hospital, who saw to it that his old friend was released in short order. When the intrepid intern attempted to commit Lilly to another psychiatric hospital, the same scenario unfolded, and Lilly was once again released. The young intern could only shake his frustrated head in disbelief.

Still following the lead of ECCO, Dr. Lilly continued his ever-escalating injections of K in order to remain in contact with the "space brothers." Soon, though, his sources started to dry up, his connections concerned that Lilly had gone too far off the deep end. Consequently, Lilly went in search of other, longer-acting chemicals that would provide the same effects as K but for a greater duration. During the experimental trial of one such drug, Dr. Lilly received a phone call from his wife Toni requesting that he bring her spare set of car keys, because she had locked the others in her car. Since she was just down the road a bit, John jumped on his ten-speed and proceeded to pedal down the road to make the delivery.

When Dr. Lilly decided to ride his ten-speed down the road to meet his wife, the drug had not yet taken full effect. But midway through his trip, Lilly was zapped by its intoxicating magic and instantly felt quite wonderful, the wind blowing deliciously through his hair; it was as if he'd taken a trip down memory lane to the days of his freewheeling youth. Unfortunately, this flashback-filled sense of euphoria came screeching to a disastrous halt when the bike chain suddenly jammed and he was catapulted onto the harsh reality of the concrete pavement, puncturing a lung, breaking several ribs, and suffering cranial contusions. The crash resulted in several days of hospitalization, during which Dr. Lilly was once again visited by the otherworldly representatives of ECCO, who told him he had a choice: he could go away with them "for good," or remain on the planet, mend his body and concentrate on more worldly affairs. The good doctor wisely chose the latter. With this decision came a turning point in his life, and a conscious effort to focus his remaining years not only on more earthly matters, as opposed to the whims and wishes of ECCO, but on dedicating the rest of his life to his wife, Toni, and their soul-mate journey together through physical time and space.

Many paranormal parallels can be drawn from John Lilly's experience, one being the so-called Near Death Experience (NDE), in which Guides, as he called the two representatives from ECCO, appeared to Lilly much as figurative angels bathed in light do to others who have undergone an NDE. Often, as the seemingly near-dead hover before this subjective light, they are offered a choice much like the one given Dr. Lilly by his otherworldly benefactors from the Earth Coincidence Control Office: Should I stay or should I go?

Not long after this second brush with death, Dr. Lilly's close friend and Ketamine research partner, Craig Enright, was involved in a head-on collision in the fog on coastal Highway One. As Enright lay on his deathbed, he was visited by Dr. Lilly, who took Enright's hand in his and made the following statement: "It's not so bad to die, Craig. I've been to the brink myself a few times, and I've seen over the edge. The Beings have told me on several occasions that I was free to go with them, but I decided to stay here and continue my work in this vehicle that everyone calls John Lilly; they showed me that I am one of them. 'You are one of us.' I know that you know this because we've been there together. Whatever you do, Craig, I love you." The very next morning, Dr. Craig Enright shed his mortal coil.

Thus ends another chapter in Dr. Lilly's often adventurous life.

http://www.conspiracyarchive.com/UFOs/Gorightly.htm
03-14-2010, 11:09 PM,
#2
RE: Rise of the machines 9 February 2010
Aubrey de Grey on "The Singularity" and "The Methuselarity"
Written By: Michael Anissimov
Date Published: September 28, 2009

Aubrey de Grey. Photo credit: BJ Klein, Wikipedia
With his long flowing beard and optimistic predictions about engineering an end to death, Aubrey de Grey has become a legend within the longevity community. He is currently the Chief Science Officer of SENS Foundation, a US-based charity focused on applying regenerative medicine to the problem of aging. His most recent book is Ending Aging: The Rejuvenation Breakthroughs That Could Reverse Human Aging in Our Lifetime.

h+: Aubrey, can you tell us a little bit about your work with SENS, for unfamiliar readers?

AUBREY de GREY: The general concept, which for the past nine years I've pioneered and promoted under the name "Strategies for Engineered Negligible Senescence," or SENS, is, in my view, our best bet for seriously -- maybe even indefinitely -- postponing the ill health of old age for people who are already alive today. I have broken down the problem of "preventative maintenance for the human body" into seven major sub-problems, many of which are well on the way to being overcome with contemporary biomedical technology and the remainder of which are, in my view, probably less than ten years away from proof of concept in laboratory mammals and less than 25 years away from clinical application.

h+: What is your general position on the Singularity idea, as described by Vernor Vinge?

AUBREY de GREY: I can't see how the "event horizon" definition of the Singularity can occur other than by the creation of fully autonomous recursively self-improving digital computer systems. Without such systems, human intelligence seems to me to be an intrinsic component of the recursive self-improvement of technology in general, and limits (drastically!) how fast that improvement can be. So, how likely are such systems? I'm actually not at all convinced they are even possible, in the very strong sense that would be required. Sure, it's easy to write self-modifying code, but only as a teeny tiny component of a program, the rest of which is non-modified. I think it may simply turn out to be mathematically impossible to create digital systems that are sufficiently globally self-modifying to do the "event horizon" job. And I confess that I rather hope that's true, because I am virtually certain that the "invariants" that SIAI and others are interested in defining, that will keep such systems forever "friendly" if they can be created at all, don't exist.

h+: What is your general position on the Singularity idea, as described by Ray Kurzweil?

AdG: I think the general concept of accelerating change is pretty much unassailable, but there are two features of it that in my view limit its predictive power. The first, which Kurzweil has acknowledged, is that one needs to be able to evaluate the informational complexity of a problem in order to get a number for how soon it will be solved. It's highly questionable, in my view, whether we can estimate the complexity of human thought from the complexity of those very simple parts of the brain that we understand reasonably well, which is what Ray has tried to do. The second problem, which I haven't seen Ray address, is the extent to which the need for new approaches slows the process. Ray acknowledges that individual technologies exhibit a sigmoidal trajectory, eventually departing from accelerating change, but he rightly points out that when we want more progress we find a new way to do it and the long-term curve remains exponential. What he doesn't mention is that the exponent over the long term is different from the short-term exponents. How much different is a key question, and it depends on how often new approaches are needed -- which as far as I can see is not at all easy to predict.
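De Grey's point about sigmoids and long-run exponents can be illustrated with a toy model (my own sketch, with made-up parameters; it is not from the interview): each successive technology is a logistic S-curve launched at a fixed interval, and each new technology doubles the previous ceiling. The early growth rate of any single S-curve is set by its steepness, while the slope of the combined long-run envelope is set by how often new approaches arrive, and the two exponents differ substantially.

```python
import math

def logistic(t, cap, k=2.0, t0=0.0):
    # Single-technology S-curve: near-exponential early growth, saturation at cap.
    return cap / (1.0 + math.exp(-k * (t - t0)))

def total_capability(t, n_techs=20, launch_gap=3.0):
    # Successive technologies, each doubling the previous ceiling,
    # launched every launch_gap years.
    return sum(logistic(t, cap=2.0 ** i, t0=i * launch_gap) for i in range(n_techs))

# Short-term exponent: early log-slope of one S-curve (approximately k = 2.0).
single_rate = (math.log(logistic(-4.9, 1.0)) - math.log(logistic(-5.0, 1.0))) / 0.1

# Long-term exponent: log-slope of the combined envelope, which works out to
# roughly ln(2) / launch_gap, i.e. about 0.23 here.
long_rate = (math.log(total_capability(50.0)) - math.log(total_capability(20.0))) / 30.0

print(round(single_rate, 2), round(long_rate, 2))
```

In this sketch the long-term curve stays exponential, as Kurzweil says, but its exponent is governed by the cadence of new approaches rather than by the steep early phase of any individual technology, which is the gap de Grey is pointing at.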

h+: In the past five years or so, talk about the "Singularity" has become much more mainstream and acceptable, much like talk about radical life extension. Do you think that looking at futurism through the frame of the multi-faceted "Singularity" idea is helpful, or just makes matters more complicated?

AdG: I think it's helpful. People have quite extraordinary difficulty thinking about non-linear change, and the general concept of the Singularity -- especially the Kurzweil version, but really all versions -- is a nicely canonical example that is as good as any for educating people in such thinking, even if that education consists mainly in simple repetition. In the case of life extension, the concept of "longevity escape velocity" (the rate at which rejuvenation technologies need to be improved in order to stave off age-related ill-health indefinitely) is similar to the Singularity (though subtly different) -- indeed, someone recently gave it the rather neat name "Methuselarity" -- and I have been mystified at the difficulty I have in explaining it to people: they find it much more "ridiculous" than the near-term goal of adding 30 years of healthy life, even though in reality it's far LESS speculative.

h+: You mentioned that self-improving AI would be the only way to get a Vingean event horizon Singularity. What are your thoughts about the prospects of substantial human intelligence enhancement? Would you consider that technological achievement more or less difficult and costly, than, say, extending the average human lifespan to 120?

AdG: I think it's very difficult to put an estimate on the difficulty of substantial human intelligence enhancement, because there are so many ways in which one could imagine it being done, depending on what we choose to mean by "intelligence," how prosthetic the enhancement is allowed to be, etc. But the way you ask the question implies that you're talking about exponentially accelerating enhancement of human intelligence, which of course narrows the options a lot. I don't see any way that could be done other than by interfacing the brain with exponentially more intelligent digital hardware -- but then the question becomes whether the sophistication of the interface matters much for functionality. I suspect it doesn't, i.e. that such enhancement would not be very different from having that hardware be autonomous and interfacing with it by the primitive means we use today.

h+: Your talk at the Singularity Summit will be called "The Singularity and the Methuselarity: Similarities and Differences." Without giving too much away, can you give us a brief teaser of your talk? What is a "Methuselarity"?

AdG: The Methuselarity is a name my friend Paul Hynek recently gave to the point at which we reach what I have called longevity escape velocity. Longevity escape velocity (LEV), in turn, is the rate at which therapies to repair the molecular and cellular damage of aging need to be improved in order to stop their recipients from becoming biologically older. At present, LEV is very high, far higher than the rate at which we are actually improving our regenerative medicine against aging. However, it turns out that the further we progress in developing such "rejuvenation therapies" (in terms of the number of years by which they postpone age-related ill-health), the lower LEV becomes. Because of this, once we first achieve LEV, it is vanishingly unlikely that we will ever fall below it thereafter. Accordingly, the achievement of LEV will be a unique event, worthy of a Singularityesque name.
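The LEV idea can be made concrete with a toy simulation (my own illustration with invented numbers, not a model from the interview): each calendar year adds one year's worth of damage, and the therapy available in year t removes some amount that depends on how fast therapies are improving. If repair capability improves quickly enough, biological age stays bounded and eventually falls (escape); if it stalls, biological age climbs without limit.

```python
def biological_age_trajectory(years, start_bio_age, repair_per_year):
    # Toy model: one year of damage accrues per calendar year,
    # minus whatever the therapy available that year can repair.
    bio = float(start_bio_age)
    trajectory = [bio]
    for t in range(years):
        bio = max(0.0, bio + 1.0 - repair_per_year(t))
        trajectory.append(bio)
    return trajectory

# Therapies whose repair capacity improves 5% per year eventually
# outpace damage accumulation...
escaping = biological_age_trajectory(80, 50, lambda t: 0.5 * 1.05 ** t)
# ...while a fixed level of repair never does.
stalling = biological_age_trajectory(80, 50, lambda t: 0.5)
```

The sketch also shows de Grey's second point: because repair only improves over time in this model, once the escaping trajectory turns downward it never turns back up, which is why reaching LEV once is effectively reaching it for good.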

Diagram depicting the approaches to medically treating aging; the flat-headed arrows are a notation meaning "inhibits," used in the gene expression literature. Photo credit: Nectarflowed, Wikipedia

h+: The idea of extreme life extension is closely connected to the Singularity meme. To what extent do you think that technological progress in computers and bioinformatics is pushing along life extension research?

AdG: Well, first of all I would like to qualify your initial statement. I think there is actually not all that much in common between life extension and accelerating change: the defeat of aging will really be just one event in the progress of technology, albeit a particularly momentous one. I therefore think that the only strong connection between the two is in the similarity of mindset and of attitudes to the future that attracts people to the two themes.

As for your question: Bioinformatics is playing a modest but not central role in hastening progress in the biotechnological approach to postponing aging that I pursue. More general progress in developing full-blown artificial intelligence, however, may well result in a much more dramatic hastening of the defeat of aging, if computers can be created that are much smarter than we are and thus able to solve the trickier problems inherent in postponing aging much faster than we can. I therefore strongly support such research.

h+: Before you became a biogerontologist, you were an AI researcher. According to Wikipedia, in 1986 you "co-founded Man-Made Minions Ltd. to pursue the development of an automated formal program verifier." Do you think that today's AI researchers are any closer to unlocking the secrets of general intelligence than they were 23 years ago?

AdG: I think we're closer, yes - but are we much closer? I don't think we'll be able to answer that question until we have genuine results -- systems that exhibit really sharply greater cognitive function than anything that exists today. But it's important to understand that my work -- even in the sense of the long-term goals that I was doing software verification as the first step towards -- was not actually focused on AGI as we use the term today. Rather, as the name of my company indicates, it was focused on creating machines with enough common sense to relieve us of the tedious aspects of the human condition as we know it today, but not to rival us (let alone exceed us) in the creative sense. I'm still quite doubtful that it would, in fact, be desirable to create machines with sufficiently general intelligence to merit being considered as conscious.

http://www.hplusmagazine.com/articles/ai/aubrey-de-grey-singularity-and-methuselarity

