What the Rise of Sentient Robots Will Mean for Human Beings

(THIS ARTICLE IS COURTESY OF NBC MACH)


Science fiction may have us worried about sentient robots, but it’s the mindless ones we need to be cautious of. Conscious machines may actually be our allies.

June 19, 2017, 12:45 PM ET

The Series T-800 robot in the “Terminator” movie franchise is an agent of Skynet, an artificial intelligence system that becomes self-aware. | Paramount / Courtesy Everett Collection

Zombies and aliens may not be a realistic threat to our species. But there’s one stock movie villain we can’t be so sanguine about: sentient robots. If anything, their arrival is probably just a matter of time. But what will a world of conscious machines be like? Will there be a place in it for us?

Artificial intelligence research has been going through a recent revolution. AI systems can now outperform humans at playing chess and Go, recognizing faces, and driving safely. Even so, most researchers say truly conscious machines — ones that don’t just follow programs but have feelings and are self-aware — are decades away. First, the reasoning goes, researchers have to build a generalized intelligence, a single machine with the above talents and the capacity to learn more. Only then will AI reach the level of sophistication needed for consciousness.

But some think it won’t take nearly that long.

“People expect that self-awareness is going to be this end game of artificial intelligence when really there are no scientific pursuits where you start at the end,” says Justin Hart, a computer scientist at the University of Texas. He and other researchers are already building machines with rudimentary minds. One robot wriggles like a newborn baby to understand its body. Another babbles about what it sees and cries when you hit it. A third sets off to explore its world on its own.

No one claims that robots have a rich inner experience — that they take pride in floors they’ve vacuumed or delight in the taste of 120-volt current. But robots can now exhibit some qualities similar to those of the human mind, including empathy, adaptability, and gumption.

Related: Will Robots Take Over the World?

Beyond the sheer coolness of creating robots, researchers design these cybernetic creatures because they’re trying to fix flaws in machine-learning systems. Though these systems may be powerful, they are simple. They work by relating input to output, like a test where you match items in column ‘A’ with items in column ‘B’. The AI systems basically memorize these associations. There’s no deeper logic behind the answers they give. And that’s a problem.
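To see what “no deeper logic” means in practice, here is a minimal Python sketch of a system that does nothing but memorize input-output associations; the class name and the sign-matching data are illustrative assumptions, not code from any system in the article. Ask it about anything it has not literally seen, and it has nothing to say.

```python
# A toy "learner" that only memorizes associations, like matching items in
# column A with items in column B. It cannot explain its answers, and it
# cannot answer at all for inputs it has never seen.

class AssociationMemorizer:
    def __init__(self):
        self.memory = {}                 # input -> output pairs seen in training

    def train(self, pairs):
        for x, y in pairs:
            self.memory[x] = y           # pure memorization, no deeper logic

    def predict(self, x):
        return self.memory.get(x, "unknown")

model = AssociationMemorizer()
model.train([("red octagon", "stop sign"), ("yellow triangle", "yield sign")])
print(model.predict("red octagon"))      # -> stop sign
print(model.predict("blue circle"))      # -> unknown: nothing to reason from
```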

Humans can also be hard to read. We spend an inordinate amount of time analyzing ourselves and others, and arguably, that’s the main role of our conscious minds. If machines had minds, they might not be so inscrutable. We could simply ask them why they did what they did.

“If we could capture some of the structure of consciousness, it’s a good bet that we’d be producing some interesting capacity,” says Selmer Bringsjord, an AI researcher at the Rensselaer Polytechnic Institute in Troy, N.Y. Although science fiction may have us worried about sentient robots, it’s really the mindless robots we need to be cautious of. Conscious machines may actually be our allies.

Image::Children interact with the programmable humanoid robot "Pepper," developed by French robotics company Aldebaran Robotics, at the Global Robot Expo in Madrid in 2016.|||[object Object]
Children interact with the programmable humanoid robot “Pepper,” developed by French robotics company Aldebaran Robotics, at the Global Robot Expo in Madrid in 2016. AFP-Getty Images

ROBOT, KNOW THYSELF

Self-driving cars have some of the most advanced AI systems today. They decide where to steer and when to brake by taking constant radar and laser readings and feeding them into algorithms. But much of driving is anticipating other drivers’ maneuvers and responding defensively — functions that are associated with consciousness.

“Self-driving cars will have to read the minds of what other self-driving cars want to do,” says Paul Verschure, a neuroscientist at Universitat Pompeu Fabra in Barcelona.

As a demonstration of how that might look, Hod Lipson, an engineering professor at Columbia University and co-author of a recent book on self-driving cars, and Kyung-Joong Kim at Sejong University in Seoul, South Korea, built the robotic equivalent of a crazy driver. The small round robot (about the size of a hockey puck) moves on a loopy path according to its own logic. A second robot was then given the goal of intercepting the first no matter where it started, so it couldn’t simply record a fixed route; it had to divine the moving robot’s logic.

Using a procedure that mimicked Darwinian evolution, Lipson and Kim crafted an interception strategy. “It had basically developed a duplicate of the brain of the actor — not perfect, but good enough that it could anticipate what it’s going to do,” Lipson says.
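As a rough sketch of how such a Darwinian procedure can work, the Python below evolves a population of tiny two-coefficient predictors of a target’s loopy path by mutation and selection. The path function, population sizes, and mutation rates are illustrative assumptions, not Lipson and Kim’s actual setup; as in their experiment, the surviving model is not perfect, just good enough to anticipate the target.

```python
# Darwinian sketch: candidate "mind models" of the target are mutated and
# selected on how well they predict its motion. The model is a linear
# predictor of the next position from the last two positions.
import math, random

random.seed(0)

def target_path(t):
    # The "crazy driver": a looping path unknown to the interceptor.
    return math.sin(0.3 * t) + 0.5 * math.sin(0.9 * t)

history = [target_path(t) for t in range(200)]

def prediction_error(weights):
    w1, w2 = weights
    err = 0.0
    for t in range(2, len(history)):
        pred = w1 * history[t - 1] + w2 * history[t - 2]
        err += (pred - history[t]) ** 2
    return err

# Evolutionary loop: keep the best predictors, refill with mutated copies.
population = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(30)]
for generation in range(100):
    population.sort(key=prediction_error)
    survivors = population[:10]
    population = survivors + [
        (w1 + random.gauss(0, 0.1), w2 + random.gauss(0, 0.1))
        for w1, w2 in survivors for _ in range(2)
    ]

best = min(population, key=prediction_error)
print("best model:", best, "total error:", prediction_error(best))
```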

Lipson’s team also built a robot that can develop an understanding of its body. The four-legged spidery machine is about the size of a large tarantula. When switched on, its internal computer has no prior information about itself. “It doesn’t know how its motors are arranged, what its body plan is,” Lipson says.

But it has the capacity to learn. It tries out every action it is capable of to see what happens: how, for example, turning on a motor bends a leg joint. “Very much like a baby, it babbles,” Lipson says. “It moves its motors in a random way.”

After four days of flailing, it realizes it has four legs and figures out how to coordinate and move them so it can slither across the floor. When Lipson unplugs one of the motors, the robot realizes it now has only three legs and that its actions no longer produce the intended effects.

“I would argue this robot is self-aware in a very primitive way,” Lipson says.
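Here is a minimal sketch of that babble-then-adapt loop, assuming a toy robot whose motor-to-leg wiring is hidden from its own controller. The wiring table, trial count, and repair rule are illustrative, not Lipson’s implementation; the point is that the self-model comes entirely from experience and gets revised when the body changes.

```python
# Motor babbling: the robot starts with no knowledge of which motor moves
# which leg, fires motors at random, and records what each one does. When a
# motor is later unplugged, its predictions stop matching observations, and
# the self-model is revised.
import random

random.seed(1)
TRUE_WIRING = {0: "front-left", 1: "front-right", 2: "back-left", 3: "back-right"}

def observe_effect(motor, unplugged=None):
    # What the robot's sensors report when a motor fires.
    return None if motor == unplugged else TRUE_WIRING[motor]

# Babbling phase: random actions build a self-model from experience.
self_model = {}
for _ in range(50):
    motor = random.randrange(4)
    self_model[motor] = observe_effect(motor)

print("learned body plan:", self_model)

# Damage phase: motor 2 is unplugged; prediction failures trigger re-learning.
for motor in range(4):
    expected, actual = self_model[motor], observe_effect(motor, unplugged=2)
    if expected != actual:
        print(f"motor {motor}: expected {expected!r}, got {actual!r} -- updating model")
        self_model[motor] = actual
```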

Related: Could Robots Create a ‘Jobless Future’ for Humans?

Another humanlike capability that researchers would like to build into AI is initiative. Machines excel at playing the game Go because humans directed them to solve it. They can’t define problems on their own, and defining problems is usually the hard part.

In a forthcoming paper for the journal “Trends in Cognitive Sciences,” Ryota Kanai, a neuroscientist and founder of Araya, a Tokyo-based startup, discusses how to give machines intrinsic motivation. In a demonstration, he and his colleagues simulated agents driving a car in a virtual landscape that includes a hill too steep for the car to climb unless it gets a running start. If told to climb the hill, the agents figure out how to do so. Until they receive this command, the car sits idle.

Then Kanai’s team endowed these virtual agents with curiosity. They surveyed the landscape, identified the hill as a problem, and figured out how to climb it even without instruction.

“We did not give a goal to the agent,” Kanai says. “The agent just explores the environment to learn what kind of situation it is in by making predictions about the consequence of its own action.”
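A minimal sketch of that idea in Python, assuming a one-dimensional toy world whose far end has noisy, hard-to-predict dynamics standing in for the hill. The agent’s only “reward” is its own prediction error, so curiosity alone pulls it toward what it cannot yet model; the states, policy, and bookkeeping are illustrative assumptions, not Kanai’s system.

```python
# Curiosity as intrinsic motivation: no external goal is given; the agent
# simply prefers the action whose outcome it predicts worst.
import random
from collections import defaultdict

random.seed(2)
N_STATES = 10

def step(state, action):                    # action is -1 (left) or +1 (right)
    nxt = max(0, min(N_STATES - 1, state + action))
    if nxt == N_STATES - 1:                 # the "hill": outcome is stochastic
        nxt = random.choice([N_STATES - 2, N_STATES - 1])
    return nxt

stats = defaultdict(lambda: [0, 0.0])       # (state, action) -> [tries, total error]
last_seen = {}                              # (state, action) -> last observed outcome

def surprise(state, action):
    tries, errors = stats[(state, action)]
    return 1.0 if tries == 0 else errors / tries   # optimistic about the untried

state, visits = 0, [0] * N_STATES
for _ in range(2000):
    # Curious policy: take the least predictable action, ties broken at random.
    action = max(random.sample([-1, 1], 2), key=lambda a: surprise(state, a))
    nxt = step(state, action)
    record = stats[(state, action)]
    record[0] += 1
    record[1] += 0.0 if last_seen.get((state, action)) == nxt else 1.0
    last_seen[(state, action)] = nxt
    state = nxt
    visits[state] += 1

print("visits per state:", visits)          # the noisy end gets the most attention
```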

Related: This Robot Can Compose Its Own Music

The trick is to give robots enough intrinsic motivation to make them better problem solvers, and not so much that they quit and walk out of the lab. Machines can prove as stubborn as humans. Joscha Bach, an AI researcher at Harvard, put virtual robots into a “Minecraft”-like world filled with tasty but poisonous mushrooms. He expected the bots to learn to avoid the mushrooms. Instead, they stuffed their mouths.

“They discounted future experiences in the same way as people did, so they didn’t care,” Bach says. “These mushrooms were so nice to eat.” He had to instill an innate aversion into the bots. In a sense, they had to be taught values, not just goals.
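The discounting effect behind that behavior fits in a back-of-the-envelope calculation. In the sketch below, the reward values, delay, and discount factors are illustrative numbers, not data from Bach’s experiment; with a steep discount, the delayed poison penalty all but vanishes next to the immediate treat.

```python
# Why heavily discounted agents "don't care": a delayed penalty barely
# registers against an immediate reward when gamma is small.

TASTY_NOW = 10.0        # immediate reward for eating the mushroom
POISON_LATER = -50.0    # penalty arriving DELAY steps in the future
DELAY = 5

def discounted_return(gamma):
    return TASTY_NOW + (gamma ** DELAY) * POISON_LATER

for gamma in (0.5, 0.95):
    value = discounted_return(gamma)
    verdict = "eats the mushroom" if value > 0 else "learns to abstain"
    print(f"gamma={gamma}: discounted return {value:+.1f} -> agent {verdict}")
```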

PAYING ATTENTION

In addition to self-awareness and self-motivation, a key function of consciousness is the capacity to focus attention. Selective attention has been an important area of AI research lately, not least at Google DeepMind, which developed the Go-playing computer.

Image::China's 19-year-old Go player Ke Jie prepares to make a move during the second match against Google's artificial intelligence program.|||[object Object]
China’s 19-year-old Go player Ke Jie prepares to make a move during the second match against Google’s artificial intelligence program. AFP – Getty Images / AFP Or Licensors

“Consciousness is an attention filter,” says Stanley Franklin, a computer science professor at the University of Memphis. In a paper published last year in the journal “Biologically Inspired Cognitive Architectures,” Franklin and his colleagues reviewed their progress in building an AI system called LIDA that decides what to concentrate on through a competitive process, as suggested by neuroscientist Bernard Baars in the 1980s. Background processes watch for interesting stimuli — loud, bright, exotic — and then vie for dominance. The one that prevails determines where the mental spotlight falls and informs a wide range of brain functions, including deliberation and movement. The cycle of perception, attention, and action repeats five to 10 times a second.
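A minimal sketch of such an attention filter, with hand-tuned salience weights and made-up stimuli standing in for LIDA’s far richer machinery; on each cycle the stimuli compete, and only the winner reaches the spotlight.

```python
# Competitive attention: feature detectors score each stimulus for salience,
# and the single winner becomes the "conscious" content of that cycle.

def salience(stimulus):
    score = 0.0
    if stimulus.get("loud"):   score += 2.0
    if stimulus.get("bright"): score += 1.5
    if stimulus.get("exotic"): score += 3.0   # novelty weighs heavily
    return score

def attention_cycle(stimuli):
    # Only the most salient stimulus reaches the mental spotlight.
    winner = max(stimuli, key=salience)
    return winner["name"]

stimuli = [
    {"name": "hum of the fridge", "loud": False},
    {"name": "flash of headlights", "bright": True},
    {"name": "unfamiliar beep", "loud": True, "exotic": True},
]

# Perception-attention-action cycles repeat several times per second.
for cycle in range(3):
    print(f"cycle {cycle}: spotlight on ->", attention_cycle(stimuli))
```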

The first version of LIDA was a job-matching server for the U.S. Navy. It read emails and focused on pertinent information while juggling each job hunter’s interests, the availability of jobs, and the requirements of government bureaucracy.

Since then, Franklin’s team has used the system to model animals’ minds, especially behavioral quirks that result from focusing on one thing at a time. For example, LIDA is just as prone as humans are to a curious psychological phenomenon known as “attentional blink”: when something catches your attention, you become oblivious to anything else for about half a second. This cognitive blind spot depends on many factors, and LIDA shows humanlike responses to the same factors.

Pentti Haikonen, a Finnish AI researcher, has built a robot named XCR-1 on similar principles. Whereas other researchers claim only to have recreated some quality of consciousness, Haikonen argues that his creation is capable of genuine subjective experience and basic emotions.

Related: Giant Robot Is Action Movies Come To Life

The system learns to make associations much like the neurons in our brains do. If Haikonen shows the robot a green ball and speaks the word “green,” the vision and auditory modules respond and become linked. If Haikonen says “green” again, the auditory module will respond and, through the link, so will the vision module. The robot will proceed as if it heard the word and saw the color, even if it’s staring into an empty void. Conversely, if the robot sees green, the auditory module will respond, even if the word wasn’t uttered. In short, the robot develops a kind of synesthesia.
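A minimal sketch of that “fire together, wire together” linking, assuming a single scalar link strength and a threshold in place of Haikonen’s actual neural hardware; after a few co-activations, the heard word alone is enough to re-activate the vision side.

```python
# Hebbian-style cross-modal linking: co-activation of the vision and
# auditory modules strengthens their connection, and once the link is strong
# enough, activity in one module re-activates the other.

link_strength = 0.0      # association between seeing green and hearing "green"
THRESHOLD = 0.5          # link strong enough to carry activation across

def co_activate(times):
    global link_strength
    for _ in range(times):
        link_strength = min(1.0, link_strength + 0.2)  # fire together, wire together

def hear(word):
    vision_active = (word == "green") and (link_strength >= THRESHOLD)
    print(f"heard {word!r}; vision module {'also fires' if vision_active else 'stays silent'}")

hear("green")        # before training: no cross-modal response
co_activate(3)       # show the green ball while saying "green" three times
hear("green")        # after training: the word alone evokes the color
```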


“If we see a ball, we may say so to ourselves, and at that moment our perception is rather similar to the case when we actually hear that word,” Haikonen says. “The situation in the XCR-1 is the same.”

Things get interesting when the modules clash — if, for example, the vision module sees green while the auditory module hears “blue.” If the auditory module prevails, the system as a whole turns its attention to the word it hears while ignoring the color it sees. The robot has a simple stream of consciousness consisting of the perceptions that dominate it moment by moment: “green,” “ball,” “blue,” and so on. When Haikonen wires the auditory module to a speech engine, the robot will keep a running monolog about everything it sees and feels.

Haikonen also gives vibration a special significance as “pain,” which preempts other sensory inputs and consumes the robot’s attention. In one demonstration, Haikonen taps the robot and it blurts, “Me hurt.”

“Some people get emotionally disturbed by this, for some reason,” Haikonen says. (He and others are unsentimental about the creations. “I’m never like, ‘Poor robot,’” Verschure says.)

A NEW SPECIES

Building on these early efforts, researchers will develop more lifelike machines. We could see a continuum of conscious systems, just as there is in nature, from amoebas to dogs to chimps to humans and beyond. The gradual progress of this technology is good because it gives us time to adjust to the idea that, one day, we won’t be the only advanced beings on the planet.

A child reaches out to a robotic dog at the World Robot Conference in Beijing in 2016. AP

For a long while, our artificial companions will be vulnerable — more pet than threat. How we treat them will hinge on whether we recognize them as conscious and as capable of suffering.

“The reason that we value non-human animals, to the extent that people do, is that we see, based on our own consciousness, the light of consciousness within them as well,” says Susan Schneider, a philosopher at the University of Connecticut who studies the implications of AI. In fact, she thinks we will deliberately hold back from building conscious machines to avoid the moral dilemmas it poses.

“If you’re building conscious systems and having them work for us, that would be akin to slavery,” Schneider says. By the same token, if we don’t give advanced robots the gift of sentience, it worsens the threat they may eventually pose to humanity because they will see no particular reason to identify with us and value us.

Related: Speedy Delivery Bots May Change Our Buying Habits

Judging by what we’ve seen so far, conscious machines will inherit our human vulnerabilities. If robots have to anticipate what other robots do, they will treat one another as creatures with agency. Like us, they may start attributing agency to inanimate objects: stuffed animals, carved statues, the wind.

Last year, social psychologists Kurt Gray of the University of North Carolina and the late Daniel Wegner suggested in their book “The Mind Club” that this instinct was the origin of religion. “I would like to see a movie where the robots develop a religion because we have engineered them to have an intentionality prior so that they can be social,” Verschure says. “But their intentionality prior runs away.”

These machines will vastly exceed our problem-solving ability, but not everything is a solvable problem. The only response they could have to conscious experience is to revel in it, and with their expanded ranges of sensory perception, they will see things people wouldn’t believe.

“I don’t think a future robotic species is going to be heartless and cold, as we sometimes imagine robots to be,” Lipson says. “They’ll probably have music and poetry that we’ll never understand.”


The Perfect Weapon: How Russian Cyberpower Invaded the U.S.

(THIS ARTICLE IS COURTESY OF THE NEW YORK TIMES)

WASHINGTON — When Special Agent Adrian Hawkins of the Federal Bureau of Investigation called the Democratic National Committee in September 2015 to pass along some troubling news about its computer network, he was transferred, naturally, to the help desk.

His message was brief, if alarming. At least one computer system belonging to the D.N.C. had been compromised by hackers federal investigators had named “the Dukes,” a cyberespionage team linked to the Russian government.

The F.B.I. knew it well: The bureau had spent the last few years trying to kick the Dukes out of the unclassified email systems of the White House, the State Department and even the Joint Chiefs of Staff, one of the government’s best-protected networks.

Yared Tamene, the tech-support contractor at the D.N.C. who fielded the call, was no expert in cyberattacks. His first moves were to check Google for “the Dukes” and conduct a cursory search of the D.N.C. computer system logs to look for hints of such a cyberintrusion. By his own account, he did not look too hard even after Special Agent Hawkins called back repeatedly over the next several weeks — in part because he wasn’t certain the caller was a real F.B.I. agent and not an impostor.


“I had no way of differentiating the call I just received from a prank call,” Mr. Tamene wrote in an internal memo, obtained by The New York Times, that detailed his contact with the F.B.I.

It was the cryptic first sign of a cyberespionage and information-warfare campaign devised to disrupt the 2016 presidential election, the first such attempt by a foreign power in American history. What started as an information-gathering operation, intelligence officials believe, ultimately morphed into an effort to harm one candidate, Hillary Clinton, and tip the election to her opponent, Donald J. Trump.

Like another famous American election scandal, it started with a break-in at the D.N.C. The first time, 44 years ago at the committee’s old offices in the Watergate complex, the burglars planted listening devices and jimmied a filing cabinet. This time, the burglary was conducted from afar, directed by the Kremlin, with spear-phishing emails and zeros and ones.

What is phishing?

Phishing uses an innocent-looking email to entice unwary recipients to click on a deceptive link, giving hackers access to their information or a network. In “spear-phishing,” the email is tailored to fool a specific person.

An examination by The Times of the Russian operation — based on interviews with dozens of players targeted in the attack, intelligence officials who investigated it and Obama administration officials who deliberated over the best response — reveals a series of missed signals, slow responses and a continuing underestimation of the seriousness of the cyberattack.

The D.N.C.’s fumbling encounter with the F.B.I. meant the best chance to halt the Russian intrusion was lost. The failure to grasp the scope of the attacks undercut efforts to minimize their impact. And the White House’s reluctance to respond forcefully meant the Russians have not paid a heavy price for their actions, a decision that could prove critical in deterring future cyberattacks.

The low-key approach of the F.B.I. meant that Russian hackers could roam freely through the committee’s network for nearly seven months before top D.N.C. officials were alerted to the attack and hired cyberexperts to protect their systems. In the meantime, the hackers moved on to targets outside the D.N.C., including Mrs. Clinton’s campaign chairman, John D. Podesta, whose private email account was hacked months later.

Even Mr. Podesta, a savvy Washington insider who had written a 2014 report on cyberprivacy for President Obama, did not truly understand the gravity of the hacking.


Charles Delavan, a Clinton campaign aide, incorrectly legitimized a phishing email sent to the personal account of John D. Podesta, the campaign chairman.

By last summer, Democrats watched in helpless fury as their private emails and confidential documents appeared online day after day — procured by Russian intelligence agents, posted on WikiLeaks and other websites, then eagerly reported on by the American media, including The Times. Mr. Trump gleefully cited many of the purloined emails on the campaign trail.

The fallout included the resignations of Representative Debbie Wasserman Schultz of Florida, the chairwoman of the D.N.C., and most of her top party aides. Leading Democrats were sidelined at the height of the campaign, silenced by revelations of embarrassing emails or consumed by the scramble to deal with the hacking. Though little-noticed by the public, confidential documents taken by the Russian hackers from the D.N.C.’s sister organization, the Democratic Congressional Campaign Committee, turned up in congressional races in a dozen states, tainting some of them with accusations of scandal.

In recent days, a skeptical president-elect, the nation’s intelligence agencies and the two major parties have become embroiled in an extraordinary public dispute over what evidence exists that President Vladimir V. Putin of Russia moved beyond mere espionage to deliberately try to subvert American democracy and pick the winner of the presidential election.

Many of Mrs. Clinton’s closest aides believe that the Russian assault had a profound impact on the election, while conceding that other factors — Mrs. Clinton’s weaknesses as a candidate; her private email server; the public statements of the F.B.I. director, James B. Comey, about her handling of classified information — were also important.

While there’s no way to be certain of the ultimate impact of the hack, this much is clear: A low-cost, high-impact weapon that Russia had test-fired in elections from Ukraine to Europe was trained on the United States, with devastating effectiveness. For Russia, with an enfeebled economy and a nuclear arsenal it cannot use short of all-out war, cyberpower proved the perfect weapon: cheap, hard to see coming, hard to trace.

Graphic: Following the Links From Russian Hackers to the U.S. Election. The Central Intelligence Agency concluded that the Russian government deployed computer hackers to help elect Donald J. Trump.

“There shouldn’t be any doubt in anybody’s mind,” Adm. Michael S. Rogers, the director of the National Security Agency and commander of United States Cyber Command, said at a postelection conference. “This was not something that was done casually, this was not something that was done by chance, this was not a target that was selected purely arbitrarily,” he said. “This was a conscious effort by a nation-state to attempt to achieve a specific effect.”

For the people whose emails were stolen, this new form of political sabotage has left a trail of shock and professional damage. Neera Tanden, president of the Center for American Progress and a key Clinton supporter, recalls walking into the busy Clinton transition offices, humiliated to see her face on television screens as pundits discussed a leaked email in which she had called Mrs. Clinton’s instincts “suboptimal.”

“It was just a sucker punch to the gut every day,” Ms. Tanden said. “It was the worst professional experience of my life.”

The United States, too, has carried out cyberattacks, and in decades past the C.I.A. tried to subvert foreign elections. But the Russian attack is increasingly understood across the political spectrum as an ominous historic landmark — with one notable exception: Mr. Trump has rejected the findings of the intelligence agencies he will soon oversee as “ridiculous,” insisting that the hacker may be American, or Chinese, but that “they have no idea.”

Mr. Trump cited the reported disagreements between the agencies about whether Mr. Putin intended to help elect him. On Tuesday, a Russian government spokesman echoed Mr. Trump’s scorn.

“This tale of ‘hacks’ resembles a banal brawl between American security officials over spheres of influence,” Maria Zakharova, the spokeswoman for the Russian Foreign Ministry, wrote on Facebook.


Over the weekend, four prominent senators — two Republicans and two Democrats — joined forces to pledge an investigation while pointedly ignoring Mr. Trump’s skeptical claims.

“Democrats and Republicans must work together, and across the jurisdictional lines of the Congress, to examine these recent incidents thoroughly and devise comprehensive solutions to deter and defend against further cyberattacks,” said Senators John McCain, Lindsey Graham, Chuck Schumer and Jack Reed.

“This cannot become a partisan issue,” they said. “The stakes are too high for our country.”

A Target for Break-Ins

Sitting in the basement of the Democratic National Committee headquarters, below a wall-size 2012 portrait of a smiling Barack Obama, is a 1960s-era filing cabinet missing the handle on the bottom drawer. Only a framed newspaper story hanging on the wall hints at the importance of this aged piece of office furniture.

“GOP Security Aide Among 5 Arrested in Bugging Affair,” reads the headline from the front page of The Washington Post on June 19, 1972, with the bylines of Bob Woodward and Carl Bernstein.

Andrew Brown, 37, the technology director at the D.N.C., was born after that famous break-in. But as he began to plan for this year’s election cycle, he was well aware that the D.N.C. could become a break-in target again.

There were aspirations to ensure that the D.N.C. was well protected against cyberintruders — and then there was the reality, Mr. Brown and his bosses at the organization acknowledged: The D.N.C. was a nonprofit group, dependent on donations, with a fraction of the security budget that a corporation its size would have.

“There was never enough money to do everything we needed to do,” Mr. Brown said.

The D.N.C. had a standard email spam-filtering service, intended to block phishing attacks and malware created to resemble legitimate email. But when Russian hackers started in on the D.N.C., the committee did not have the most advanced systems in place to track suspicious traffic, internal D.N.C. memos show.

Mr. Tamene, who reports to Mr. Brown and fielded the call from the F.B.I. agent, was not a full-time D.N.C. employee; he works for a Chicago-based contracting firm called The MIS Department. He was left to figure out, largely on his own, how to respond — and even whether the man who had called in to the D.N.C. switchboard was really an F.B.I. agent.

“The F.B.I. thinks the D.N.C. has at least one compromised computer on its network and the F.B.I. wanted to know if the D.N.C. is aware, and if so, what the D.N.C. is doing about it,” Mr. Tamene wrote in an internal memo about his contacts with the F.B.I. He added that “the Special Agent told me to look for a specific type of malware dubbed ‘Dukes’ by the U.S. intelligence community and in cybersecurity circles.”

Part of the problem was that Special Agent Hawkins did not show up in person at the D.N.C. Nor could he email anyone there, as that risked alerting the hackers that the F.B.I. knew they were in the system.


An internal memo by Yared Tamene, a tech-support contractor at the D.N.C., expressed uncertainty about the identity of Special Agent Adrian Hawkins of the F.B.I., who called to inform him of the breach.

Mr. Tamene’s initial scan of the D.N.C. system — using his less-than-optimal tools and incomplete targeting information from the F.B.I. — found nothing. So when Special Agent Hawkins called repeatedly in October, leaving voice mail messages for Mr. Tamene, urging him to call back, “I did not return his calls, as I had nothing to report,” Mr. Tamene explained in his memo.

In November, Special Agent Hawkins called with more ominous news. A D.N.C. computer was “calling home, where home meant Russia,” Mr. Tamene’s memo says, referring to software sending information to Moscow. “SA Hawkins added that the F.B.I. thinks that this calling home behavior could be the result of a state-sponsored attack.”

Mr. Brown knew that Mr. Tamene, who declined to comment, was fielding calls from the F.B.I. But he was tied up on a different problem: evidence suggesting that the campaign of Senator Bernie Sanders of Vermont, Mrs. Clinton’s main Democratic opponent, had improperly gained access to her campaign data.

Ms. Wasserman Schultz, then the D.N.C.’s chairwoman, and Amy Dacey, then its chief executive, said in interviews that neither of them was notified about the early reports that the committee’s system had likely been compromised.

Shawn Henry, who once led the F.B.I.’s cyber division and is now president of CrowdStrike Services, the cybersecurity firm retained by the D.N.C. in April, said he was baffled that the F.B.I. did not call a more senior official at the D.N.C. or send an agent in person to the party headquarters to try to force a more vigorous response.

“We are not talking about an office that is in the middle of the woods of Montana,” Mr. Henry said. “We are talking about an office that is half a mile from the F.B.I. office that is getting the notification.”

“This is not a mom-and-pop delicatessen or a local library. This is a critical piece of the U.S. infrastructure because it relates to our electoral process, our elected officials, our legislative process, our executive process,” he added. “To me it is a high-level, serious issue, and if after a couple of months you don’t see any results, somebody ought to raise that to a higher level.”

The F.B.I. declined to comment on the agency’s handling of the hack. “The F.B.I. takes very seriously any compromise of public and private sector systems,” it said in a statement, adding that agents “will continue to share information” to help targets “safeguard their systems against the actions of persistent cybercriminals.”

By March, Mr. Tamene and his team had met at least twice in person with the F.B.I. and concluded that Agent Hawkins was really a federal employee. But then the situation took a dire turn.

A second team of Russian-affiliated hackers began to target the D.N.C. and other players in the political world, particularly Democrats. Billy Rinehart, a former D.N.C. regional field director who was then working for Mrs. Clinton’s campaign, got an odd email warning from Google.

“Someone just used your password to try to sign into your Google account,” the March 22 email said, adding that the sign-in attempt had occurred in Ukraine. “Google stopped this sign-in attempt. You should change your password immediately.”

Mr. Rinehart was in Hawaii at the time. He remembers checking his email at 4 a.m. for messages from East Coast associates. Without thinking much about the notification, he clicked on the “change password” button and, half asleep, as best he can remember, typed in a new password.

I need help with a WordPress transferring issue!

Okay, I know very little about computers. Back when I was in school I only passed the class with a D-, and that was on my third try. I do not know if the stroke I had during my first try at that class has anything to do with my memory issues, but computer knowledge is a difficult thing for me to pick up or retain. The issue frustrating me right now is that when I read someone else’s article that I like and try to reblog it as a courtesy to them, I somehow manage to mess up the transfer over to my blog. Sometimes (it seems like a lot of the time) when I go back and look at their article from my Dashboard, I do not see a highlighted link back to their blog. Sometimes there is one, sometimes not.

I am trying to help people’s work get out to where more people see it; I am not trying to take credit for someone else’s material. I always put whose article it is, in all caps, at the top of the story, but I have been told that this is not the same. Can someone help me with this? As I said, sometimes the link transfers over and sometimes it does not, and I don’t know what I am doing differently from one time to the next. I do not want anyone to think I am trying to ‘steal’ anyone’s work; that is why I do the all-caps credit at the top. Sorry for the long note, I’m just frustrated with myself.

Thank You,
ted

Bill Gates on dangers of artificial intelligence: ‘I don’t understand why some people are not concerned’

(THIS ARTICLE IS COURTESY OF THE WASHINGTON POST)

(THE END OF THE HUMAN RACE?)
January 29, 2015

Bill Gates is a passionate technology advocate (big surprise), but his predictions about the future of computing aren’t uniformly positive.

During a wide-ranging Reddit “Ask me Anything” session — one that touched upon everything from his biggest regrets to his favorite spread to lather on bread — the Microsoft co-founder and billionaire philanthropist outlined a future that is equal parts promising and ominous.

Midway through the discussion on Wednesday, Gates was asked what personal computing will look like in 2045. Gates responded by asserting that the next 30 years will be a time of rapid progress.

“Even in the next 10 [years] problems like vision and speech understanding and translation will be very good,” he wrote. “Mechanical robot tasks like picking fruit or moving a hospital patient will be solved. Once computers/robots get to a level of capability where seeing and moving is easy for them then they will be used very extensively.”

He went on to highlight a Microsoft project known as the “Personal Agent,” which is being designed to help people manage their memory, attention and focus. “The idea that you have to find applications and pick them and they each are trying to tell you what is new is just not the efficient model – the agent will help solve this,” he said. “It will work across all your devices.”

The response from Reddit users was mixed, with some making light of Gates’s revelation (“Clippy 2.0?,” wrote one user) — and others sounding the alarm.

“This technology you are developing sounds at its essence like the centralization of knowledge intake,” a Redditor wrote. “Ergo, whomever controls this will control what information people make their own. Even today, we see the daily consequences of people who live in an environment that essentially tunnel-visions their knowledge.”

Shortly after, Gates was asked how much of an existential threat superintelligent machines pose to humans.

The question has been at the forefront of several recent discussions among prominent futurists. Last month, theoretical physicist Stephen Hawking said artificial intelligence “could spell the end of the human race.”

[Why the world’s most intelligent people shouldn’t be so afraid of artificial intelligence]

Speaking at the MIT Aeronautics and Astronautics department’s Centennial Symposium in October, Tesla boss Elon Musk referred to artificial intelligence as “summoning the demon.”

I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.

British inventor Clive Sinclair has said he thinks artificial intelligence will doom mankind.

“Once you start to make machines that are rivaling and surpassing humans with intelligence, it’s going to be very difficult for us to survive,” he told the BBC. “It’s just an inevitability.”

After gushing about the immediate future of technology in his Reddit AMA, Gates aligned himself with the AI alarm-sounders.

“I am in the camp that is concerned about super intelligence,” Gates wrote. “First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Once he finished addressing the potential demise of humankind, Gates got back to answering more immediate, less serious questions, like revealing his favorite spread to put on bread.

“Butter? Peanut butter? Cheese spread?” he wrote. “Any of these.”

The Microsoft co-founder’s comments on AI came shortly after the managing director of Microsoft Research’s Redmond Lab said the doomsday declarations about the threat to human life are overblown.

“There have been concerns about the long-term prospect that we lose control of certain kinds of intelligences,” Eric Horvitz said, according to the BBC. “I fundamentally don’t think that’s going to happen. I think that we will be very proactive in terms of how we field AI systems, and that in the end we’ll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life.”

Horvitz noted that “over a quarter of all attention and resources” at Microsoft Research are focused on artificial intelligence.

This blog, trouthtroubles.com, is owned, written, and operated by oldpoet56. All articles, posts, and materials found here, except for those that I have pressed here from someone else’s blog for the purpose of showing off their work, are under copyright, and this website must be credited if my articles are re-blogged, pressed, or shared.

—Thank You, oldpoet56, T.R.S.
