Who Vladimir Putin thinks will rule the world

(THIS ARTICLE IS COURTESY OF CNN)

 


Story highlights

  • The Russian President gives an “open lesson” to more than a million schoolchildren
  • “Whoever becomes the leader in this sphere will become the ruler of the world,” he says

(CNN) On the first day of the new school year in Russia, students learned an important lesson directly from their president — who he thinks will rule the world.

Speaking to students during a national “open lesson” from the city of Yaroslavl, northeast of Moscow, Russian President Vladimir Putin said the country that takes the lead in the sphere of computer-based artificial intelligence (AI) will rule.
“Artificial intelligence is the future not only of Russia but of all of mankind,” said Putin. “There are huge opportunities, but also threats that are difficult to foresee today.”
“Whoever becomes the leader in this sphere will become the ruler of the world,” he said, adding that it would be better to prevent any particular “pair of hands” from achieving a monopoly in the field.
If Russia becomes the leader in the development of artificial intelligence, “we will share our technology with the rest of the world, like we are doing now with atomic and nuclear technology,” said Putin.
More than a million schoolchildren around Russia were expected to watch the televised open lesson online, titled “Russia Focused on the Future,” according to the Kremlin.

Participants in the lesson also watched videos about large-scale innovation projects, including the development of a new generation of nuclear-powered icebreakers and a heavy-class space launch center.
The words of the Russian President echo what scientists in Russia and around the world have been mulling over for quite some time.
Work on developing drones and vehicles for military and civilian usage is well under way in Russia, according to state media.
The Russian military is also developing robots, anti-drone systems, and cruise missiles that would be able to analyze radars and make decisions on the altitude, speed and direction of their flight, according to state media.
While in Yaroslavl, Putin didn’t miss the opportunity to show off his hockey skills during a visit to a new school. Putin attended a training session of the children’s hockey team, talked to the young players and played some hockey himself.

Is Elon Musk a Genius, an Idiot, or Maybe Both?

by oldpoet56

 

I have never met Elon Musk, but I have read quite a bit about him over the past year or two. Since I do not know him personally, the best I can do is garner what I can from his quotes. Personally, I have no doubt that the man is a genius as far as his IQ is concerned. I have learned during my time here on this Earth that a person can be brilliant yet still do or believe things that are just plain stupid. I have also learned that a person with a very low IQ can sometimes come up with great ideas; sometimes things in life are simply defined by the angle, or the light, in which one looks at the issue in question. This article today contains my opinions, drawn from an article I read this morning on livescience.com, one that I reblogged earlier this morning if you wish to read it before or after this one. When I write articles, it is always my wish to get folks to think, to stretch their minds beyond their everyday plane, and this article will be no different. I am not saying that you need to agree with me, but I hope you will take a couple of moments to consider what I am laying out for you. This article concerns everyone’s life, and their Soul.

 

The science article I mentioned a moment ago concerns a company Mr. Musk owns called Neuralink. Mr. Musk’s ambition with this company is to develop an “Ultrahigh-Bandwidth Brain-Computer Interface.” Mr. Musk says one of its purposes is to “accelerate human evolution.” He is not seeking to create pure machines like the ones you see in the Terminator movies, or even in the Will Smith movie I, Robot. Mr. Musk says that he sees a real danger in artificial intelligence; he has called AI a “fundamental risk to the existence of human civilization.” I believe he is correct there, because science, which is often pushed by military government funding, seeks to build pilotless aircraft, not just drones but big jets too, and folks, the Navy has a sailorless warship! Of course, that will then lead to commercial airlines getting rid of all of their pilots. Think about it: driverless cars, tractor-trailer units, driverless trains. Oh yes, we already have these technologies, don’t we? Think about factory jobs for a moment, please. When I was in my teens, my Dad worked at a Chrysler assembly plant in northern Illinois; back then the assembly line had far more employees putting the units together than what you see these days. Now machines directed by computer brains have replaced most of those ‘human’ jobs. Machines and computers don’t have unions, and they don’t ask for pay raises, paid days off, overtime pay, or medical benefits; that list goes on and on. Why let a human do what a computer can do much cheaper, and in most cases, better?

 

Evidently Mr. Musk is concerned that we humans, starting with the poorest, weakest, and least educated, will come to be seen as only a burden on society (the wealthiest people): if you are not a positive to society, why should you be allowed to live off of someone else (the rich)? What was Arnold’s phrase, “You have been terminated”? Mr. Musk believes, and he is trying to accomplish this through his Neuralink company, “that the best way to keep pace with the machines’ intelligence is to upgrade human intelligence.” In the good ole days, wasn’t that called going to school and getting the best education you were able to get?

 

From a pure science perspective, Mr. Musk is correct on a couple of different planes. I believe he is correct about his concerns regarding AI. Do you not believe that the servant can become the master? Could the humble public servant (politicians, bureaucrats, police) ever dare to become the master over the people? We already have, and have had for many years now, the integration of computer chips into people. It started out with chips for our pets so that they don’t get lost from us. Then we went to chips for newborn babies, just in case they ever got lost or stolen. Then came chips for employees, for their convenience. We have had little ‘brain’ chips for well over a decade now. Neuralink and Mr. Musk are now simply trying to stretch the human-computer ‘interface,’ as he puts it. There will soon be a day when, if you are an employee or an office supervisor of importance, the company will require you to have mandated chip technology in your hand, or you can’t get the job or the promotion. If you don’t think that what I am saying is logical or true, my friend, it is you who are living in a fantasy world, not me.

 

This last paragraph is going to be from my Christian Biblical viewpoint. We are told several times in the book of Revelation about the ‘Mark of the Beast’ being put into our hand or into our head, and we are told that if we humans allow this, then when Christ and His Angels return we will die twice. The first death is when this body dies; the second death is when God severs His relationship with us and casts us into Hell for all eternity. Many will say things along the lines of, ‘What has the Mark of the Beast got to do with computer chips?’ I know that most folks still do not realize what ‘Armageddon’ really is. Scripture is very plain that Armageddon is when the nations of the Earth and their armies fight against God and His Angels at the Second Advent of Christ. We are also told that the people who are found to have the mark of the Beast in their hand or in their head will be totally crushed, as if in a wine-press. Friends, think about it for a moment: it is the governments, which at that time will be led by demons and Satan himself, that are going to fight against God, so yes, the governments will be even more wicked than they are now. Friends, the mark of the ‘Beast’ is not the number 666; nowhere does Scripture say that it is. Simply, there will come a time when ten governments will control almost all of the globe, and these ten governments will sit upon the seven continents. Then the power will be consolidated into three all-powerful governments, then into one. Six is the sign of man; three is the sign of God. The world will have three all-powerful governments ruled by three of Satan’s top generals: three men who will try to take the place of God, as if they were gods. Then they will give up their power to the one true Anti-Christ, Satan himself. Three men (6s) who would be God (3s) if they could. Friends, all I can say to you as I close this article today is: please, for no reason, ever allow anyone to put any kind of chip into you.

What the Rise of Sentient Robots Will Mean for Human Beings

(THIS ARTICLE IS COURTESY OF NBC MACH)

Science fiction may have us worried about sentient robots, but it’s the mindless ones we need to be cautious of. Conscious machines may actually be our allies.

June 19, 2017, 12:45 PM ET

The series T-800 robot in the “Terminator” movie franchise is an agent of Skynet, an artificial intelligence system that becomes self-aware. Paramount/Courtesy Everett Collection

Zombies and aliens may not be a realistic threat to our species. But there’s one stock movie villain we can’t be so sanguine about: sentient robots. If anything, their arrival is probably just a matter of time. But what will a world of conscious machines be like? Will there be a place in it for us?

Artificial intelligence research has been going through a recent revolution. AI systems can now outperform humans at playing chess and Go, recognizing faces, and driving safely. Even so, most researchers say truly conscious machines — ones that don’t just follow programs but have feelings and are self-aware — are decades away. First, the reasoning goes, researchers have to build a generalized intelligence, a single machine with the above talents and the capacity to learn more. Only then will AI reach the level of sophistication needed for consciousness.

But some think it won’t take nearly that long.

“People expect that self-awareness is going to be this end game of artificial intelligence when really there are no scientific pursuits where you start at the end,” says Justin Hart, a computer scientist at the University of Texas. He and other researchers are already building machines with rudimentary minds. One robot wriggles like a newborn baby to understand its body. Another robot babbles about what it sees and cries when you hit it. Another sets off to explore its world on its own.

No one claims that robots have a rich inner experience — that they have pride in floors they’ve vacuumed or delight in the taste of 120-volt current. But robots can now exhibit some qualities similar to those of the human mind, including empathy, adaptability, and gumption.

Related: Will Robots Take Over the World?

Beyond it just being cool to create robots, researchers design these cybernetic creatures because they’re trying to fix flaws in machine-learning systems. Though these systems may be powerful, they are simple. They work by relating input to output, like a test where you match items in column ‘A’ with items in column ‘B’. The AI systems basically memorize these associations. There’s no deeper logic behind the answers they give. And that’s a problem.
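To make that “column A to column B” picture concrete, here is a minimal sketch in Python (an illustration of the idea only, not code from any real system): the learner memorizes input-output pairs, and there is no deeper logic behind the answers it gives.

```python
# A minimal sketch of machine learning as pure association: memorize
# input-output pairs, with no deeper logic behind the answers.
# All data here is invented for illustration.

training_pairs = {
    "photo_001": "cat",
    "photo_002": "dog",
    "photo_003": "cat",
}

def predict(example):
    # Pure recall: return the memorized answer or give up. There is no
    # way to ask the system *why* it answered as it did.
    return training_pairs.get(example, "unknown")

print(predict("photo_002"))  # "dog"
print(predict("photo_999"))  # "unknown": no generalization, no explanation
```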

Humans can also be hard to read. We spend an inordinate amount of time analyzing ourselves and others, and arguably, that’s the main role of our conscious minds. If machines had minds, they might not be so inscrutable. We could simply ask them why they did what they did.

“If we could capture some of the structure of consciousness, it’s a good bet that we’d be producing some interesting capacity,” says Selmer Bringsjord, an AI researcher at the Rensselaer Polytechnic Institute in Troy, N.Y. Although science fiction may have us worried about sentient robots, it’s really the mindless robots we need to be cautious of. Conscious machines may actually be our allies.

Children interact with the programmable humanoid robot “Pepper,” developed by French robotics company Aldebaran Robotics, at the Global Robot Expo in Madrid in 2016. AFP-Getty Images

ROBOT, KNOW THYSELF

Self-driving cars have some of the most advanced AI systems today. They decide where to steer and when to brake by taking constant radar and laser readings and feeding them into algorithms. But much of driving is anticipating other drivers’ maneuvers and responding defensively — functions that are associated with consciousness.

“Self-driving cars will have to read the minds of what other self-driving cars want to do,” says Paul Verschure, a neuroscientist at Universitat Pompeu Fabra in Barcelona.

As a demonstration of how that might look, Hod Lipson, an engineering professor at Columbia University and co-author of a recent book on self-driving cars, and Kyung-Joong Kim at Sejong University in Seoul, South Korea, built the robotic equivalent of a crazy driver. The small round robot (about the size of a hockey puck) moves on a loopy path according to its own logic. A second robot was then given the goal of intercepting the first no matter where it started, so the pursuer couldn’t record a fixed route; it had to divine the moving robot’s logic.

Using a procedure that mimicked Darwinian evolution, Lipson and Kim crafted an interception strategy. “It had basically developed a duplicate of the brain of the actor — not perfect, but good enough that it could anticipate what it’s going to do,” Lipson says.
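A rough sketch of that kind of Darwinian search, with every detail invented for illustration: keep a population of candidate pursuit strategies, score each by how close it ends up to the target, and breed the best.

```python
import random

# Toy evolutionary search for an interception strategy. A strategy is a
# list of velocity commands; fitter strategies end up closer to the
# target's final position. All specifics are illustrative stand-ins,
# not the researchers' actual code.

TARGET_FINAL_POSITION = 5.0

def fitness(strategy):
    final_position = sum(strategy)
    return -abs(final_position - TARGET_FINAL_POSITION)

population = [[random.uniform(-1, 1) for _ in range(10)] for _ in range(30)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]              # selection: keep the best half
    offspring = []
    for parent in survivors:                 # mutation: perturb one "gene"
        child = parent[:]
        child[random.randrange(len(child))] += random.gauss(0.0, 0.2)
        offspring.append(child)
    population = survivors + offspring

print(f"best remaining error: {-fitness(population[0]):.4f}")
```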

Lipson’s team also built a robot that can develop an understanding of its body. The four-legged, spidery machine is about the size of a large tarantula. When switched on, its internal computer has no prior information about itself. “It doesn’t know how its motors are arranged, what its body plan is,” Lipson says.

But it has the capacity to learn. It makes all the actions it is capable of to see what happens: how, for example, turning on a motor bends a leg joint. “Very much like a baby, it babbles,” Lipson says. “It moves its motors in a random way.”

After four days of flailing, it realizes it has four legs and figures out how to coordinate and move them so it can slither across the floor. When Lipson unplugs one of the motors, the robot realizes it now has only three legs and that its actions no longer produce the intended effects.

“I would argue this robot is self-aware in a very primitive way,” Lipson says.
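A toy sketch of that motor babbling, with the sensor feedback faked for illustration: the program issues random motor commands, records which motors actually move anything, and infers its own body plan from the outcomes.

```python
import random

# Motor babbling in miniature: try random commands, observe the body's
# response, and build a crude self-model. body_response is a stand-in
# for real sensors; here motor 3 is "unplugged" and never moves.

def body_response(motor_id, command):
    return 0.0 if motor_id == 3 else command * random.uniform(0.8, 1.2)

self_model = {}
for motor_id in range(4):
    effects = [body_response(motor_id, random.choice([-1.0, 1.0]))
               for _ in range(100)]
    # A motor whose commands never move anything is treated as absent.
    self_model[motor_id] = any(abs(effect) > 1e-6 for effect in effects)

working_legs = [m for m, works in self_model.items() if works]
print(f"the robot infers it has {len(working_legs)} working legs")
```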

Related: Could Robots Create a ‘Jobless Future’ for Humans?

Another humanlike capability that researchers would like to build into AI is initiative. Machines excel at playing the game Go because humans directed the machines to solve it. They can’t define problems on their own, and defining problems is usually the hard part.

In a forthcoming paper for the journal “Trends in Cognitive Sciences,” Ryota Kanai, a neuroscientist and founder of the Tokyo-based startup Araya, discusses how to give machines intrinsic motivation. In a demonstration, he and his colleagues simulated agents driving a car in a virtual landscape that includes a hill too steep for the car to climb unless it gets a running start. If told to climb the hill, the agents figure out how to do so. Until they receive this command, the car sits idle.

Then Kanai’s team endowed these virtual agents with curiosity. They surveyed the landscape, identified the hill as a problem, and figured out how to climb it even without instruction.

“We did not give a goal to the agent,” Kanai says. “The agent just explores the environment to learn what kind of situation it is in by making predictions about the consequence of its own action.”
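A minimal sketch of that sort of intrinsic motivation, with made-up numbers: the agent’s reward is its own prediction error, so it gravitates toward whatever its forward model understands least.

```python
# Curiosity as intrinsic reward: prefer the action whose outcome the
# agent's own forward model predicts worst. Names and values are
# illustrative assumptions, not Kanai's actual setup.

forward_model = {"idle": 0.0, "drive_flat": 1.0, "climb_hill": 0.2}
observations = {"idle": 0.0, "drive_flat": 1.0, "climb_hill": 0.9}

def surprise(action):
    # Prediction error is the reward signal: big error = interesting.
    return abs(forward_model[action] - observations[action])

most_curious_choice = max(observations, key=surprise)
print(most_curious_choice)  # "climb_hill": the one thing the model gets wrong
```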

Related: This Robot Can Compose Its Own Music

The trick is to give robots enough intrinsic motivation to make them better problem solvers, but not so much that they quit and walk out of the lab. Machines can prove as stubborn as humans. Joscha Bach, an AI researcher at Harvard, put virtual robots into a “Minecraft”-like world filled with tasty but poisonous mushrooms. He expected the bots to learn to avoid the mushrooms. Instead, they stuffed their mouths.

“They discounted future experiences in the same way as people did, so they didn’t care,” Bach says. “These mushrooms were so nice to eat.” He had to instill an innate aversion into the bots. In a sense, they had to be taught values, not just goals.

PAYING ATTENTION

In addition to self-awareness and self-motivation, a key function of consciousness is the capacity to focus your attention. Selective attention has lately been an important area of AI research, not least at Google DeepMind, which developed the Go-playing computer.

China’s 19-year-old Go player Ke Jie prepares to make a move during the second match against Google’s artificial intelligence program. AFP-Getty Images

“Consciousness is an attention filter,” says Stanley Franklin, a computer science professor at the University of Memphis. In a paper published last year in the journal “Biologically Inspired Cognitive Architectures,” Franklin and his colleagues reviewed their progress in building an AI system called LIDA that decides what to concentrate on through a competitive process, as suggested by neuroscientist Bernard Baars in the 1980s. Competing processes watch for interesting stimuli — loud, bright, exotic — and vie for dominance. The one that prevails determines where the mental spotlight falls and informs a wide range of brain function, including deliberation and movement. The cycle of perception, attention, and action repeats five to 10 times a second.
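A toy sketch of attention as a competitive process, loosely in the spirit of that account (the salience numbers are invented): stimuli bid for the spotlight, the winner of each cycle is broadcast, and attended stimuli habituate so others can win later.

```python
# Attention as competition: the most salient stimulus wins each cycle,
# then habituates, letting other stimuli capture later cycles.
# Salience values are made up for illustration.

stimuli = {"loud noise": 0.9, "dim light": 0.2, "novel object": 0.7}

def attention_cycle(candidates):
    return max(candidates, key=candidates.get)

for _ in range(3):  # the cycle repeats several times a second
    focus = attention_cycle(stimuli)
    print("spotlight on:", focus)
    stimuli[focus] *= 0.5  # the attended stimulus habituates
```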

The first version of LIDA was a job-matching server for the U.S. Navy. It read emails and focused on pertinent information while juggling each job hunter’s interests, the availability of jobs, and the requirements of government bureaucracy.

Since then, Franklin’s team has used the system to model animals’ minds, especially behavioral quirks that result from focusing on one thing at a time. For example, LIDA is just as prone as humans are to a curious psychological phenomenon known as “attentional blink.” When something catches your attention, you become oblivious to anything else for about half a second. This cognitive blind spot depends on many factors and LIDA shows humanlike responses to these same factors.

Pentti Haikonen, a Finnish AI researcher, has built a robot named XCR-1 on similar principles. Whereas other researchers make modest claims about recreating some qualities of consciousness, Haikonen argues that his creation is capable of genuine subjective experience and basic emotions.

Related: Giant Robot Is Action Movies Come to Life

The system learns to make associations much like the neurons in our brains do. If Haikonen shows the robot a green ball and speaks the word “green,” the vision and auditory modules respond and become linked. If Haikonen says “green” again, the auditory module will respond and, through the link, so will the vision module. The robot will proceed as if it heard the word and saw the color, even if it’s staring into an empty void. Conversely, if the robot sees green, the auditory module will respond, even if the word wasn’t uttered. In short, the robot develops a kind of synesthesia.
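A toy sketch of that cross-modal linking (the module and function names are mine, not Haikonen’s actual design): sights and sounds that occur together get a Hebbian-style link, after which either one can evoke the other.

```python
# Hebbian-style cross-modal association: co-active vision and auditory
# percepts get linked, so later either module can reactivate the other.

links = {}  # (seen_percept, heard_word) -> association strength

def experience(seen=None, heard=None):
    if seen is not None and heard is not None:
        links[(seen, heard)] = links.get((seen, heard), 0.0) + 1.0

def evoke_from_word(word):
    # Hearing the word reactivates any linked visual percepts.
    return [seen for (seen, heard) in links if heard == word]

def evoke_from_sight(percept):
    # Seeing the color reactivates any linked words: a kind of synesthesia.
    return [heard for (seen, heard) in links if seen == percept]

experience(seen="green ball", heard="green")
print(evoke_from_word("green"))        # ['green ball']
print(evoke_from_sight("green ball"))  # ['green']
```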

“If we see a ball, we may say so to ourselves, and at that moment our perception is rather similar to the case when we actually hear that word,” Haikonen says. “The situation in the XCR-1 is the same.”

Things get interesting when the modules clash — if, for example, the vision module sees green while the auditory module hears “blue.” If the auditory module prevails, the system as a whole turns its attention to the word it hears while ignoring the color it sees. The robot has a simple stream of consciousness consisting of the perceptions that dominate it moment by moment: “green,” “ball,” “blue,” and so on. When Haikonen wires the auditory module to a speech engine, the robot will keep a running monolog about everything it sees and feels.

Haikonen also gives vibration a special significance as “pain,” which preempts other sensory inputs and consumes the robot’s attention. In one demonstration, Haikonen taps the robot and it blurts, “Me hurt.”

“Some people get emotionally disturbed by this, for some reason,” Haikonen says. (He and others are unsentimental about the creations. “I’m never like, ‘Poor robot,’” Verschure says.)

A NEW SPECIES

Building on these early efforts, researchers will develop more lifelike machines. We could see a continuum of conscious systems, just as there is in nature, from amoebas to dogs to chimps to humans and beyond. The gradual progress of this technology is good, because it gives us time to adjust to the idea that, one day, we won’t be the only advanced beings on the planet.

A child reaches out to a robotic dog at the World Robot Conference in Beijing in 2016. AP

For a long while, our artificial companions will be vulnerable — more pet than threat. How we treat them will hinge on whether we recognize them as conscious and as capable of suffering.

“The reason that we value non-human animals, to the extent that people do, is that we see, based on our own consciousness, the light of consciousness within them as well,” says Susan Schneider, a philosopher at the University of Connecticut who studies the implications of AI. In fact, she thinks we will deliberately hold back from building conscious machines to avoid the moral dilemmas it poses.

“If you’re building conscious systems and having them work for us, that would be akin to slavery,” Schneider says. By the same token, if we don’t give advanced robots the gift of sentience, it worsens the threat they may eventually pose to humanity because they will see no particular reason to identify with us and value us.

Related: Speedy Delivery Bots May Change Our Buying Habits

Judging by what we’ve seen so far, conscious machines will inherit our human vulnerabilities. If robots have to anticipate what other robots do, they will treat one another as creatures with agency. Like us, they may start attributing agency to inanimate objects: stuffed animals, carved statues, the wind.

Last year, social psychologists Kurt Gray of the University of North Carolina and the late Daniel Wegner suggested in their book “The Mind Club” that this instinct was the origin of religion. “I would like to see a movie where the robots develop a religion because we have engineered them to have an intentionality prior so that they can be social,” Verschure says. “But their intentionality prior runs away.”

These machines will vastly exceed our problem-solving ability, but not everything is a solvable problem. The only response they could have to conscious experience is to revel in it, and with their expanded ranges of sensory perception, they will see things people wouldn’t believe.

“I don’t think a future robotic species is going to be heartless and cold, as we sometimes imagine robots to be,” Lipson says. “They’ll probably have music and poetry that we’ll never understand.”

The Simple Economics of Machine Intelligence—How Soon Will Humans Not Be Needed?

(THIS ARTICLE IS COURTESY OF ‘DIGITOPOLY NEWS’)

[This post was co-written with Ajay Agrawal and Avi Goldfarb and appeared in HBR Blogs on 17 November 2016]

The year 1995 was heralded as the beginning of the “New Economy.” Digital communication was set to upend markets and change everything. But economists by and large didn’t buy into the hype. It wasn’t that we didn’t recognize that something changed. It was that we recognized that the old economics lens remained useful for looking at the changes taking place. The economics of the “New Economy” could be described at a high level: Digital technology would cause a reduction in the cost of search and communication. This would lead to more search, more communication, and more activities that go together with search and communication. That’s essentially what happened.

Today we are seeing similar hype about machine intelligence. But once again, as economists, we believe some simple rules apply. Technological revolutions tend to involve some important activity becoming cheap, like the cost of communication or finding information. Machine intelligence is, in its essence, a prediction technology, so the economic shift will center around a drop in the cost of prediction.

The first effect of machine intelligence will be to lower the cost of goods and services that rely on prediction. This matters because prediction is an input to a host of activities including transportation, agriculture, healthcare, energy, manufacturing, and retail.

When the cost of any input falls so precipitously, there are two other well-established economic implications. First, we will start using prediction to perform tasks where we previously didn’t. Second, the value of other things that complement prediction will rise.

Lots of tasks will be reframed as prediction problems

As machine intelligence lowers the cost of prediction, we will begin to use it as an input for things for which we never previously did. As a historical example, consider semiconductors, an area of technological advance that caused a significant drop in the cost of a different input: arithmetic. With semiconductors we could calculate cheaply, so activities for which arithmetic was a key input, such as data analysis and accounting, became much cheaper. However, we also started using the newly cheap arithmetic to solve problems that were not historically arithmetic problems. An example is photography. We shifted from a film-oriented, chemistry-based approach to a digital-oriented, arithmetic-based approach. Other new applications for cheap arithmetic include communications, music, and drug discovery.

The same goes for machine intelligence and prediction. As the cost of prediction falls, not only will activities that were historically prediction-oriented become cheaper — like inventory management and demand forecasting — but we will also use prediction to tackle other problems for which prediction was not historically an input.

Consider navigation. Until recently, autonomous driving was limited to highly controlled environments such as warehouses and factories, where programmers could anticipate the range of scenarios a vehicle might encounter and could program if-then-else-type decision algorithms accordingly (e.g., “If an object approaches the vehicle, then slow down”). It was inconceivable to put an autonomous vehicle on a city street because the number of possible scenarios in such an uncontrolled environment would require programming an infinite number of if-then-else statements.

Inconceivable, that is, until recently. Once prediction became cheap, innovators reframed driving as a prediction problem. Rather than programming endless if-then-else statements, they instead simply asked the AI to predict: “What would a human driver do?” They outfitted vehicles with a variety of sensors – cameras, lidar, radar, etc. – and then collected millions of miles of human driving data. By linking the incoming environmental data from sensors on the outside of the car to the driving decisions made by the human inside the car (steering, braking, accelerating), the AI learned to predict how humans would react to each second of incoming data about their environment. Thus, prediction is now a major component of the solution to a problem that was previously not considered a prediction problem.
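A minimal sketch of that reframing, using scikit-learn and invented data: instead of hand-coded rules, a model is fit to pairs of sensor readings and the actions a human took, and it then simply predicts what the human would do in a new situation.

```python
# Driving as prediction (behavioral cloning in miniature). The sensor
# log and actions below are invented for illustration.
from sklearn.neighbors import KNeighborsRegressor

# Each row: (distance to car ahead in meters, own speed in m/s);
# each label: the brake pressure a human driver chose in that situation.
sensor_log = [[50.0, 20.0], [10.0, 20.0], [5.0, 15.0], [80.0, 25.0]]
human_brake = [0.0, 0.6, 0.9, 0.0]

model = KNeighborsRegressor(n_neighbors=1).fit(sensor_log, human_brake)

# No if-then-else rules: for a new situation, the car just predicts the
# human's action.
print(model.predict([[8.0, 18.0]]))  # firm braking, learned rather than coded
```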

Judgment will become more valuable

When the cost of a foundational input plummets, it often affects the value of other inputs. The value goes up for complements and down for substitutes. In the case of photography, the value of the hardware and software components associated with digital cameras went up as the cost of arithmetic dropped because demand increased – we wanted more of them. These components were complements to arithmetic; they were used together.  In contrast, the value of film-related chemicals fell – we wanted less of them.

All human activities can be described by five high-level components: data, prediction, judgment, action, and outcomes. For example, a visit to the doctor in response to pain leads to: 1) x-rays, blood tests, and monitoring (data); 2) diagnosis of the problem, such as “if we administer treatment A, then we predict outcome X, but if we administer treatment B, then we predict outcome Y” (prediction); 3) weighing options: “given your age, lifestyle, and family status, I think you might be best with treatment A; let’s discuss how you feel about the risks and side effects” (judgment); 4) administering treatment A (action); and 5) full recovery with minor side effects (outcome).

As machine intelligence improves, the value of human prediction skills will decrease because machine prediction will provide a cheaper and better substitute for human prediction, just as machines did for arithmetic. However, this does not spell doom for human jobs, as many experts suggest. That’s because the value of human judgment skills will increase. Using the language of economics, judgment is a complement to prediction and therefore when the cost of prediction falls demand for judgment rises. We’ll want more human judgment.

For example, when prediction is cheap, diagnosis will be more frequent and convenient, and thus we’ll detect many more early-stage, treatable conditions. This will mean more decisions will be made about medical treatment, which means greater demand for the application of ethics, and for emotional support, which are provided by humans. The line between judgment and prediction isn’t clear cut – some judgment tasks will even be reframed as a series of predictions. Yet, overall the value of prediction-related human skills will fall, and the value of judgment-related skills will rise.

Interpreting the rise of machine intelligence as a drop in the cost of prediction doesn’t offer an answer to every specific question of how the technology will play out. But it yields two key implications: 1) an expanded role for prediction as an input to more goods and services, and 2) a change in the value of other inputs, driven by the extent to which they are complements to or substitutes for prediction. These changes are coming. The speed and extent to which managers should invest in judgment-related capabilities will depend on how fast the changes arrive.

Bill Gates on dangers of artificial intelligence: ‘I don’t understand why some people are not concerned’

(THIS ARTICLE IS COURTESY OF THE WASHINGTON POST)

 (THE END OF THE HUMAN RACE?)
January 29, 2015

Bill Gates is a passionate technology advocate (big surprise), but his predictions about the future of computing aren’t uniformly positive.

During a wide-ranging Reddit “Ask Me Anything” session — one that touched upon everything from his biggest regrets to his favorite spread to lather on bread — the Microsoft co-founder and billionaire philanthropist outlined a future that is equal parts promising and ominous.

Midway through the discussion on Wednesday, Gates was asked what personal computing will look like in 2045. Gates responded by asserting that the next 30 years will be a time of rapid progress.

“Even in the next 10 problems like vision and speech understanding and translation will be very good,” he wrote. “Mechanical robot tasks like picking fruit or moving a hospital patient will be solved. Once computers/robots get to a level of capability where seeing and moving is easy for them then they will be used very extensively.”

He went on to highlight a Microsoft project known as the “Personal Agent,” which is being designed to help people manage their memory, attention and focus. “The idea that you have to find applications and pick them and they each are trying to tell you what is new is just not the efficient model – the agent will help solve this,” he said. “It will work across all your devices.”

The response from Reddit users was mixed, with some making light of Gates’s revelation (“Clippy 2.0?,” wrote one user) — and others sounding the alarm.

“This technology you are developing sounds at its essence like the centralization of knowledge intake,” a Redditor wrote. “Ergo, whomever controls this will control what information people make their own. Even today, we see the daily consequences of people who live in an environment that essentially tunnel-visions their knowledge.”

Shortly after, Gates was asked how much of an existential threat superintelligent machines pose to humans.

The question has been at the forefront of several recent discussions among prominent futurists. Last month, theoretical physicist Stephen Hawking said artificial intelligence “could spell the end of the human race.”

[Why the world’s most intelligent people shouldn’t be so afraid of artificial intelligence]

Speaking at the MIT Aeronautics and Astronautics department’s Centennial Symposium in October, Tesla boss Elon Musk referred to artificial intelligence as “summoning the demon.”

I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.

British inventor Clive Sinclair has said he thinks artificial intelligence will doom mankind.

“Once you start to make machines that are rivaling and surpassing humans with intelligence, it’s going to be very difficult for us to survive,” he told the BBC. “It’s just an inevitability.”

After gushing about the immediate future of technology in his Reddit AMA, Gates aligned himself with the AI alarm-sounders.

“I am in the camp that is concerned about super intelligence,” Gates wrote. “First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Once he finished addressing the potential demise of humankind, Gates got back to answering more immediate, less serious questions, like revealing his favorite spread to put on bread.

“Butter? Peanut butter? Cheese spread?” he wrote. “Any of these.”

The Microsoft co-founder’s comments on AI came shortly after the managing director of Microsoft Research’s Redmond Lab said the doomsday declarations about the threat to human life are overblown.

“There have been concerns about the long-term prospect that we lose control of certain kinds of intelligences,” Eric Horvitz said, according to the BBC. “I fundamentally don’t think that’s going to happen. I think that we will be very proactive in terms of how we field AI systems, and that in the end we’ll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life.”

Horvitz noted that “over a quarter of all attention and resources” at Microsoft Research are focused on artificial intelligence.

This blog, trouthtroubles.com, is owned, written, and operated by oldpoet56. All articles, posts, and materials found here, except for those that I have pressed here from someone else’s blog for the purpose of showing off their work, are under copyright, and this website must be credited if my articles are re-blogged, pressed, or shared.

—Thank You, oldpoet56, T.R.S.
