STEPHEN HAWKING AI WARNING: ARTIFICIAL INTELLIGENCE COULD DESTROY CIVILIZATION

(THIS ARTICLE IS COURTESY OF NEWSWEEK)

(JUST YESTERDAY, NOV. 6th, I WROTE AN ARTICLE TITLED ‘THE UNNEEDED POOR WILL BE EXTERMINATED’. THINGS I POINTED OUT IN THAT ARTICLE AGREE WITH WHAT MR. HAWKING IS SAYING HERE IN THIS ARTICLE TODAY. PLEASE CONSIDER READING YESTERDAY’S ARTICLE ALSO, THANK YOU.)

World-renowned physicist Stephen Hawking has warned that artificial intelligence (AI) has the potential to destroy civilization and could be the worst thing that has ever happened to humanity.

Speaking at a technology conference in Lisbon, Portugal, Hawking told attendees that mankind had to find a way to control computers, CNBC reports.

“Computers can, in theory, emulate human intelligence, and exceed it,” he said. “Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.”

Stephen Hawking sits onstage during an announcement of the Breakthrough Starshot initiative with investor Yuri Milner in New York City, on April 12, 2016. Hawking, the English physicist, warns humanity needs to become a multiplanetary species to ensure its survival. REUTERS/LUCAS JACKSON

Hawking said that while AI has the potential to transform society—it could be used to eradicate poverty and disease, for example—it also comes with huge risks.

Society, he said, must be prepared for that eventuality. “AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy,” he said.

This is not the first time Hawking has warned about the dangers of AI. In a recent interview with Wired, the University of Cambridge Professor said AI could one day reach a level where it outperforms humans and becomes a “new form of life.”

Artificial intelligence. GLAS-8/FLICKR

“I fear that AI may replace humans altogether,” he told the magazine. “If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans.”

Even if AI does not take over the world, either by destroying or enslaving mankind, Hawking still believes human beings are doomed. Over recent years, he has become increasingly vocal about the need to leave Earth in search of a new planet.

In May, he said humans have around 100 years to leave Earth in order to survive as a species. “I strongly believe we should start seeking alternative planets for possible habitation,” he said during a speech at the Royal Society in London, U.K. “We are running out of space on Earth and we need to break through the technological limitations preventing us from living elsewhere in the universe.”

The following month at the Starmus Festival in Norway, which celebrates science and art, Hawking told his audience that the current threats to Earth are “too big and too numerous” for him to be positive about the future.

“Our physical resources are being drained at an alarming rate. We have given our planet the disastrous gift of climate change. Rising temperatures, reduction of the polar ice caps, deforestation and decimation of animal species. We can be an ignorant, unthinking lot.

“We are running out of space and the only places to go to are other worlds. It is time to explore other solar systems. Spreading out may be the only thing that saves us from ourselves. I am convinced that humans need to leave Earth.”

Big data meets Big Brother as China moves to rate its citizens

(THIS ARTICLE IS COURTESY OF ‘WIRED’ MAGAZINE)

 

The Chinese government plans to launch its Social Credit System in 2020. The aim? To judge the trustworthiness – or otherwise – of its 1.3 billion residents


Kevin Hong

On June 14, 2014, the State Council of China published an ominous-sounding document called “Planning Outline for the Construction of a Social Credit System”. In the way of Chinese policy documents, it was a lengthy and rather dry affair, but it contained a radical idea. What if there was a national trust score that rated the kind of citizen you were?

Imagine a world where many of your daily activities were constantly monitored and evaluated: what you buy at the shops and online; where you are at any given time; who your friends are and how you interact with them; how many hours you spend watching content or playing video games; and what bills and taxes you pay (or not). It’s not hard to picture, because most of that already happens, thanks to all those data-collecting behemoths like Google, Facebook and Instagram or health-tracking apps such as Fitbit. But now imagine a system where all these behaviours are rated as either positive or negative and distilled into a single number, according to rules set by the government. That would create your Citizen Score and it would tell everyone whether or not you were trustworthy. Plus, your rating would be publicly ranked against that of the entire population and used to determine your eligibility for a mortgage or a job, where your children can go to school – or even just your chances of getting a date.

A futuristic vision of Big Brother out of control? No, it’s already getting underway in China, where the government is developing the Social Credit System (SCS) to rate the trustworthiness of its 1.3 billion citizens. The Chinese government is pitching the system as a desirable way to measure and enhance “trust” nationwide and to build a culture of “sincerity”. As the policy states, “It will forge a public opinion environment where keeping trust is glorious. It will strengthen sincerity in government affairs, commercial sincerity, social sincerity and the construction of judicial credibility.”

Others are less sanguine about its wider purpose. “It is very ambitious in both depth and scope, including scrutinising individual behaviour and what books people are reading. It’s Amazon’s consumer tracking with an Orwellian political twist,” is how Johan Lagerkvist, a Chinese internet specialist at the Swedish Institute of International Affairs, described the social credit system. Rogier Creemers, a post-doctoral scholar specialising in Chinese law and governance at the Van Vollenhoven Institute at Leiden University, who published a comprehensive translation of the plan, compared it to “Yelp reviews with the nanny state watching over your shoulder”.

For now, technically, participating in China’s Citizen Scores is voluntary. But by 2020 it will be mandatory. The behaviour of every single citizen and legal person (which includes every company or other entity) in China will be rated and ranked, whether they like it or not.

Kevin Hong

Prior to its national roll-out in 2020, the Chinese government is taking a watch-and-learn approach. In this marriage between communist oversight and capitalist can-do, the government has given a licence to eight private companies to come up with systems and algorithms for social credit scores. Predictably, data giants currently run two of the best-known projects.

The first is with China Rapid Finance, a partner of the social-network behemoth Tencent and developer of the messaging app WeChat with more than 850 million active users. The other, Sesame Credit, is run by the Ant Financial Services Group (AFSG), an affiliate company of Alibaba. Ant Financial sells insurance products and provides loans to small- to medium-sized businesses. However, the real star of Ant is AliPay, its payments arm that people use not only to buy things online, but also for restaurants, taxis, school fees, cinema tickets and even to transfer money to each other.

Sesame Credit has also teamed up with other data-generating platforms, such as Didi Chuxing, the ride-hailing company that was Uber’s main competitor in China before it acquired the American company’s Chinese operations in 2016, and Baihe, the country’s largest online matchmaking service. It’s not hard to see how that all adds up to gargantuan amounts of big data that Sesame Credit can tap into to assess how people behave and rate them accordingly.

So just how are people rated? Individuals on Sesame Credit are measured by a score ranging between 350 and 950 points. Alibaba does not divulge the “complex algorithm” it uses to calculate the number, but it does reveal the five factors taken into account. The first is credit history. For example, does the citizen pay their electricity or phone bill on time? Next is fulfilment capacity, which it defines in its guidelines as “a user’s ability to fulfil his/her contract obligations”. The third factor is personal characteristics, verifying personal information such as someone’s mobile phone number and address. But the fourth category, behaviour and preference, is where it gets interesting.

Under this system, something as innocuous as a person’s shopping habits become a measure of character. Alibaba admits it judges people by the types of products they buy. “Someone who plays video games for ten hours a day, for example, would be considered an idle person,” says Li Yingyun, Sesame’s Technology Director. “Someone who frequently buys diapers would be considered as probably a parent, who on balance is more likely to have a sense of responsibility.” So the system not only investigates behaviour – it shapes it. It “nudges” citizens away from purchases and behaviours the government does not like.

Friends matter, too. The fifth category is interpersonal relationships. What does their choice of online friends and their interactions say about the person being assessed? Sharing what Sesame Credit refers to as “positive energy” online, nice messages about the government or how well the country’s economy is doing, will make your score go up.
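The five factors described above amount to a weighted combination mapped onto the 350–950 range. Since Alibaba keeps the real algorithm secret, the sketch below is purely illustrative: the factor weights and the per-factor ratings are invented for the example, not disclosed values.

```python
# Hypothetical sketch of a five-factor trust score on the 350-950 scale.
# The five factor names come from the article; the weights are invented,
# since Sesame Credit's actual algorithm is not public.

FACTORS = [
    "credit_history",              # e.g. bills paid on time
    "fulfilment_capacity",         # ability to fulfil contract obligations
    "personal_characteristics",    # verified phone number, address, etc.
    "behaviour_and_preference",    # shopping habits, hours spent gaming
    "interpersonal_relationships", # what online friends say and do
]

# Hypothetical weights, one per factor, summing to 1.0.
WEIGHTS = [0.35, 0.20, 0.15, 0.15, 0.15]

MIN_SCORE, MAX_SCORE = 350, 950

def sesame_style_score(ratings):
    """Map five per-factor ratings in [0, 1] to a 350-950 score."""
    if len(ratings) != len(WEIGHTS):
        raise ValueError("expected one rating per factor")
    weighted = sum(w * r for w, r in zip(WEIGHTS, ratings))
    return round(MIN_SCORE + weighted * (MAX_SCORE - MIN_SCORE))

# Someone who rates well everywhere except "behaviour and preference"
# (the article's "idle" gamer) still lands a middling-to-good score:
print(sesame_style_score([0.9, 0.8, 1.0, 0.2, 0.7]))  # → 806
```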

Alibaba is adamant that, currently, anything negative posted on social media does not affect scores (we don’t know if this is true or not because the algorithm is secret). But you can see how this might play out when the government’s own citizen score system officially launches in 2020. Even though there is no suggestion yet that any of the eight private companies involved in the ongoing pilot scheme will ultimately be responsible for running the government’s own system, it’s hard to believe that the government will not want to extract the maximum amount of data from the pilots for its SCS. If that happens, and continues as the new normal under the government’s own SCS, it will result in private platforms acting essentially as spy agencies for the government. They may have no choice.

Posting dissenting political opinions or links mentioning Tiananmen Square has never been wise in China, but now it could directly hurt a citizen’s rating. But here’s the real kicker: a person’s own score will also be affected by what their online friends say and do, beyond their own contact with them. If someone they are connected to online posts a negative comment, their own score will also be dragged down.
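The mechanism just described, where one person’s negative post drags down the scores of everyone connected to them, is essentially penalty propagation through a social graph. A minimal sketch, assuming invented numbers throughout (the starting score, the penalty size, and the fraction of the penalty that friends absorb are all hypothetical; the article describes only the effect):

```python
# Hypothetical sketch: a negative post penalises not only the poster
# but every direct connection, at a reduced weight. All numeric values
# here are invented for illustration.

from collections import defaultdict

FRIEND_PENALTY_FACTOR = 0.5  # assumed: friends absorb half the hit

class SocialGraph:
    def __init__(self):
        self.scores = defaultdict(lambda: 650)  # everyone starts at 650
        self.friends = defaultdict(set)

    def befriend(self, a, b):
        # Friendship is mutual, so record the edge in both directions.
        self.friends[a].add(b)
        self.friends[b].add(a)

    def negative_post(self, user, penalty=20):
        # The poster takes the full penalty...
        self.scores[user] -= penalty
        # ...and each friend takes a damped share of it.
        for friend in self.friends[user]:
            self.scores[friend] -= penalty * FRIEND_PENALTY_FACTOR

g = SocialGraph()
g.befriend("alice", "bob")
g.negative_post("alice")
print(g.scores["alice"], g.scores["bob"])
```

The design point the article is making falls out of the code: once edges carry penalties, rational users start policing their friends’ posts, not just their own.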

So why have millions of people already signed up to what amounts to a trial run for a publicly endorsed government surveillance system? There may be darker, unstated reasons – fear of reprisals, for instance, for those who don’t put their hand up – but there is also a lure, in the form of rewards and “special privileges” for those citizens who prove themselves to be “trustworthy” on Sesame Credit.

If their score reaches 600, they can take out a Just Spend loan of up to 5,000 yuan (around £565) to use to shop online, as long as it’s on an Alibaba site. Reach 650 points, they may rent a car without leaving a deposit. They are also entitled to faster check-in at hotels and use of the VIP check-in at Beijing Capital International Airport. Those with more than 666 points can get a cash loan of up to 50,000 yuan (£5,700), obviously from Ant Financial Services. Get above 700 and they can apply for Singapore travel without supporting documents such as an employee letter. And at 750, they get fast-tracked application to a coveted pan-European Schengen visa. “I think the best way to understand the system is as a sort of bastard love child of a loyalty scheme,” says Creemers.
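The reward tiers listed above amount to a simple threshold lookup. The sketch below uses the thresholds and perks exactly as the article states them; the function itself is of course not Sesame Credit’s actual code, and since the article mixes “reaches” with “above”, an inclusive comparison is assumed throughout for simplicity.

```python
# Reward tiers as described in the article, lowest threshold first.
# The article's wording varies between "reaches" and "above"; >= is
# assumed for every tier here to keep the sketch simple.
REWARD_TIERS = [
    (600, "Just Spend loan of up to 5,000 yuan on Alibaba sites"),
    (650, "no-deposit car rental, faster hotel check-in, VIP airport check-in"),
    (666, "cash loan of up to 50,000 yuan from Ant Financial"),
    (700, "Singapore travel without supporting documents"),
    (750, "fast-tracked Schengen visa application"),
]

def privileges_for(score):
    """Return every privilege a given Sesame Credit score unlocks."""
    return [perk for threshold, perk in REWARD_TIERS if score >= threshold]

# A score of 680 clears the first three thresholds but not 700 or 750.
for perk in privileges_for(680):
    print(perk)
```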

Higher scores have already become a status symbol, with almost 100,000 people bragging about their scores on Weibo (the Chinese equivalent of Twitter) within months of launch. A citizen’s score can even affect their odds of getting a date, or a marriage partner, because the higher their Sesame rating, the more prominent their dating profile is on Baihe.

Sesame Credit already offers tips to help individuals improve their ranking, including warning about the downsides of friending someone who has a low score. This might lead to the rise of score advisers, who will share tips on how to gain points, or reputation consultants willing to offer expert advice on how to strategically improve a ranking or get off the trust-breaking blacklist.

Indeed, Sesame Credit is basically a big data gamified version of the Communist Party’s surveillance methods; the disquieting dang’an. The regime kept a dossier on every individual that tracked political and personal transgressions. A citizen’s dang’an followed them for life, from schools to jobs. People started reporting on friends and even family members, raising suspicion and lowering social trust in China. The same thing will happen with digital dossiers. People will have an incentive to say to their friends and family, “Don’t post that. I don’t want you to hurt your score but I also don’t want you to hurt mine.”

We’re also bound to see the birth of reputation black markets selling under-the-counter ways to boost trustworthiness. In the same way that Facebook Likes and Twitter followers can be bought, individuals will pay to manipulate their score. What about keeping the system secure? Hackers (some even state-backed) could change or steal the digitally stored information.

The new system reflects a cunning paradigm shift. As we’ve noted, instead of trying to enforce stability or conformity with a big stick and a good dose of top-down fear, the government is attempting to make obedience feel like gaming. It is a method of social control dressed up as a points-reward system. It’s gamified obedience.

In a trendy neighbourhood in downtown Beijing, the BBC news service hit the streets in October 2015 to ask people about their Sesame Credit ratings. Most spoke about the upsides. But then, who would publicly criticise the system? Ding, your score might go down. Alarmingly, few people understood that a bad score could hurt them in the future. Even more concerning was how many people had no idea that they were being rated.

Currently, Sesame Credit does not directly penalise people for being “untrustworthy” – it’s more effective to lock people in with treats for good behaviour. But Hu Tao, Sesame Credit’s chief manager, warns people that the system is designed so that “untrustworthy people can’t rent a car, can’t borrow money or even can’t find a job”. She has even disclosed that Sesame Credit has approached China’s Education Bureau about sharing a list of its students who cheated on national examinations, in order to make them pay into the future for their dishonesty.

Penalties are set to change dramatically when the government system becomes mandatory in 2020. Indeed, on September 25, 2016, the State Council General Office updated its policy entitled “Warning and Punishment Mechanisms for Persons Subject to Enforcement for Trust-Breaking”. The overriding principle is simple: “If trust is broken in one place, restrictions are imposed everywhere,” the policy document states.

For instance, people with low ratings will have slower internet speeds; restricted access to restaurants, nightclubs or golf courses; and the removal of the right to travel freely abroad with, I quote, “restrictive control on consumption within holiday areas or travel businesses”. Scores will influence a person’s rental applications, their ability to get insurance or a loan and even social-security benefits. Citizens with low scores will not be hired by certain employers and will be forbidden from obtaining some jobs, including in the civil service, journalism and legal fields, where of course you must be deemed trustworthy. Low-rating citizens will also be restricted when it comes to enrolling themselves or their children in high-paying private schools. I am not fabricating this list of punishments. It’s the reality Chinese citizens will face. As the government document states, the social credit system will “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step”.

According to Luciano Floridi, a professor of philosophy and ethics of information at the University of Oxford and the director of research at the Oxford Internet Institute, there have been three critical “de-centering shifts” that have altered our self-understanding: Copernicus’s model of the Earth orbiting the Sun; Darwin’s theory of natural selection; and Freud’s claim that our daily actions are controlled by the unconscious mind.

Floridi believes we are now entering the fourth shift, as what we do online and offline merge into an onlife. He asserts that, as our society increasingly becomes an infosphere, a mixture of physical and virtual experiences, we are acquiring an onlife personality – different from who we innately are in the “real world” alone. We see this writ large on Facebook, where people present an edited or idealised portrait of their lives. Think about your Uber experiences. Are you just a little bit nicer to the driver because you know you will be rated? But Uber ratings are nothing compared to Peeple, an app launched in March 2016, which is like a Yelp for humans. It allows you to assign ratings and reviews to everyone you know – your spouse, neighbour, boss and even your ex. A profile displays a “Peeple Number”, a score based on all the feedback and recommendations you receive. Worryingly, once your name is in the Peeple system, it’s there for good. You can’t opt out.

Peeple has forbidden certain bad behaviours, including mentioning private health conditions, profanity and sexism (however you objectively assess that). But there are few rules on how people are graded and no standards of transparency.

China’s trust system might be voluntary as yet, but it’s already having consequences. In February 2017, the country’s Supreme People’s Court announced that 6.15 million of its citizens had been banned from taking flights over the past four years for social misdeeds. The ban is being pointed to as a step toward blacklisting in the SCS. “We have signed a memorandum… [with over] 44 government departments in order to limit ‘discredited’ people on multiple levels,” says Meng Xiang, head of the executive department of the Supreme Court. Another 1.65 million blacklisted people cannot take trains.

Where these systems really descend into nightmarish territory is that the trust algorithms used are unfairly reductive. They don’t take into account context. For instance, one person might miss paying a bill or a fine because they were in hospital; another may simply be a freeloader. And therein lies the challenge facing all of us in the digital world, and not just the Chinese. If life-determining algorithms are here to stay, we need to figure out how they can embrace the nuances, inconsistencies and contradictions inherent in human beings and how they can reflect real life.

Kevin Hong

You could see China’s so-called trust plan as Orwell’s 1984 meets Pavlov’s dogs. Act like a good citizen, be rewarded and be made to think you’re having fun. It’s worth remembering, however, that personal scoring systems have been present in the west for decades.

More than 70 years ago, two men called Bill Fair and Earl Isaac invented credit scores. Today, companies use FICO scores to determine many financial decisions, including the interest rate on our mortgage or whether we should be given a loan.

The majority of Chinese people have never had credit scores, so they cannot get credit. “Many people don’t own houses, cars or credit cards in China, so that kind of information isn’t available to measure,” explains Wen Quan, an influential blogger who writes about technology and finance. “The central bank has the financial data from 800 million people, but only 320 million have a traditional credit history.” According to the Chinese Ministry of Commerce, the annual economic loss caused by lack of credit information is more than 600 billion yuan (£68bn).

China’s lack of a national credit system is why the government is adamant that Citizen Scores are long overdue and badly needed to fix what they refer to as a “trust deficit”. In a poorly regulated market, the sale of counterfeit and substandard products is a massive problem. According to the Organization for Economic Co-operation and Development (OECD), 63 per cent of all fake goods, from watches to handbags to baby food, originate from China. “The level of micro corruption is enormous,” Creemers says. “So if this particular scheme results in more effective oversight and accountability, it will likely be warmly welcomed.”

The government also argues that the system is a way to bring in those people left out of traditional credit systems, such as students and low-income households. Professor Wang Shuqin from the Office of Philosophy and Social Science at Capital Normal University in China recently won the bid to help the government develop the system that she refers to as “China’s Social Faithful System”. Without such a mechanism, doing business in China is risky, she stresses, as about half of the signed contracts are not kept. “Given the speed of the digital economy it’s crucial that people can quickly verify each other’s credit worthiness,” she says. “The behaviour of the majority is determined by their world of thoughts. A person who believes in socialist core values is behaving more decently.” She regards the “moral standards” the system assesses, as well as financial data, as a bonus.

Indeed, the State Council’s aim is to raise the “honest mentality and credit levels of the entire society” in order to improve “the overall competitiveness of the country”. Is it possible that the SCS is in fact a more desirably transparent approach to surveillance in a country that has a long history of watching its citizens? “As a Chinese person, knowing that everything I do online is being tracked, would I rather be aware of the details of what is being monitored and use this information to teach myself how to abide by the rules?” says Rasul Majid, a Chinese blogger based in Shanghai who writes about behavioural design and gaming psychology. “Or would I rather live in ignorance and hope/wish/dream that personal privacy still exists and that our ruling bodies respect us enough not to take advantage?” Put simply, Majid thinks the system gives him a tiny bit more control over his data.

Kevin Hong

When I tell westerners about the Social Credit System in China, their responses are fervent and visceral. Yet we already rate restaurants, movies, books and even doctors. Facebook, meanwhile, is now capable of identifying you in pictures without seeing your face; it only needs your clothes, hair and body type to tag you in an image with 83 per cent accuracy.

In 2015, the OECD published a study revealing that in the US there are at least 24.9 connected devices per 100 inhabitants. All kinds of companies scrutinise the “big data” emitted from these devices to understand our lives and desires, and to predict our actions in ways that we couldn’t even predict ourselves.

Governments around the world are already in the business of monitoring and rating. In the US, the National Security Agency (NSA) is not the only official digital eye following the movements of its citizens. In 2015, the US Transportation Security Administration proposed the idea of expanding the PreCheck background checks to include social-media records, location data and purchase history. The idea was scrapped after heavy criticism, but that doesn’t mean it’s dead. We already live in a world of predictive algorithms that determine if we are a threat, a risk, a good citizen and even if we are trustworthy. We’re getting closer to the Chinese system – the expansion of credit scoring into life scoring – even if we don’t know we are.

So are we heading for a future where we will all be branded online and data-mined? It’s certainly trending that way. Barring some kind of mass citizen revolt to wrench back privacy, we are entering an age where an individual’s actions will be judged by standards they can’t control and where that judgement can’t be erased. The consequences are not only troubling; they’re permanent. Forget the right to delete or to be forgotten, to be young and foolish.

While it might be too late to stop this new era, we do have choices and rights we can exert now. For one thing, we need to be able to rate the raters. In his book The Inevitable, Kevin Kelly describes a future where the watchers and the watched will transparently track each other. “Our central choice now is whether this surveillance is a secret, one-way panopticon – or a mutual, transparent kind of ‘coveillance’ that involves watching the watchers,” he writes.

Our trust should start with individuals within government (or whoever is controlling the system). We need trustworthy mechanisms to make sure ratings and data are used responsibly and with our permission. To trust the system, we need to reduce the unknowns. That means taking steps to reduce the opacity of the algorithms. The argument against mandatory disclosures is that if you know what happens under the hood, the system could become rigged or hacked. But if humans are being reduced to a rating that could significantly impact their lives, there must be transparency in how the scoring works.

In China, certain citizens, such as government officials, will likely be deemed above the system. What will be the public reaction when their unfavourable actions don’t affect their score? We could see a Panama Papers 3.0 for reputation fraud.

It is still too early to know how a culture of constant monitoring plus rating will turn out. What will happen when these systems, charting the social, moral and financial history of an entire population, come into full force? How much further will privacy and freedom of speech (long under siege in China) be eroded? Who will decide which way the system goes? These are questions we all need to consider, and soon. Today China, tomorrow a place near you. The real questions about the future of trust are not technological or economic; they are ethical.

If we are not vigilant, distributed trust could become networked shame. Life will become an endless popularity contest, with us all vying for the highest rating that only a few can attain.

This is an extract from Who Can You Trust? How Technology Brought Us Together and Why It Might Drive Us Apart (Penguin Portfolio) by Rachel Botsman, published on October 4. Since this piece was written, The People’s Bank of China delayed the licences to the eight companies conducting social credit pilots. The government’s plans to launch the Social Credit System in 2020 remain unchanged.

 

Who Vladimir Putin thinks will rule the world

(THIS ARTICLE IS COURTESY OF CNN)

 

Story highlights

  • The Russian President gives an “open lesson” to more than a million schoolchildren
  • “Whoever becomes the leader in this sphere will become the ruler of the world,” he says

(CNN)On the first day of the new school year in Russia, students learned an important lesson directly from their president — who he thinks will rule the world.

Speaking to students during a national “open lesson” from the city of Yaroslavl, northeast of Moscow, Russian President Vladimir Putin said the country that takes the lead in the sphere of computer-based artificial intelligence (AI) will rule.

“Artificial intelligence is the future not only of Russia but of all of mankind,” said Putin. “There are huge opportunities, but also threats that are difficult to foresee today.”

“Whoever becomes the leader in this sphere will become the ruler of the world,” he said, adding that it would be better to prevent any particular “pair of hands” from achieving a monopoly in the field.

If Russia becomes the leader in the development of artificial intelligence, “we will share our technology with the rest of the world, like we are doing now with atomic and nuclear technology,” said Putin.

More than a million schoolchildren around Russia were expected to watch the televised open lesson online, titled “Russia Focused on the Future,” according to the Kremlin.

Putin visits new hockey school in Yaroslavl.

Participants in the lesson also watched videos about large-scale innovative projects, including the development of a new generation of nuclear-powered icebreakers and a heavy-class space launch center.

The words of the Russian President echo what scientists in Russia and around the world have been mulling over for quite some time.

Work on developing drones and vehicles for military and civilian use is well under way in Russia, according to state media.

The Russian military is also developing robots, anti-drone systems, and cruise missiles that would be able to analyze radars and make decisions on the altitude, speed and direction of their flight, according to state media.

While in Yaroslavl, Putin didn’t miss the opportunity to show off his hockey skills during a visit to a new school. Putin attended a training session of the children’s hockey team, talked to the young players and played some hockey himself.

Is Elon Musk a Genius, an Idiot, or Maybe Both?

by oldpoet56

 

I have never met Elon Musk, but I have read quite a bit about him over this past year or two. I do not know him personally, so the best I can do is garner what I can from his quotes. Personally, I have no doubt that the man is a genius as far as his IQ is concerned. I have learned during my time here on this Earth that a person can be brilliant yet still do or believe things that are just plain stupid. I have also learned that a person with a very low IQ can sometimes come up with great ideas; sometimes things in life are simply defined by the angle, or the light, in which one looks at the issue in question. This article today gives my opinions, taken from an article that I read this morning on livescience.com. That article is one that I reblogged earlier this morning, if you wish to read it before or after you read this one. When I write articles it is always my wish and attempt to get folks to think, to stretch their minds beyond their everyday plane, and this article will be no different. I am not really saying that you need to agree with me, but I hope you will take a couple of moments to consider what I am laying out for you to think about. This article today concerns everyone’s life and their Soul.

 

The science article I mentioned a moment ago concerns a company Mr. Musk owns called Neuralink. Mr. Musk’s ambition with this company is to develop an “ultrahigh-bandwidth brain-computer interface.” Mr. Musk says one of its purposes is to “accelerate human evolution.” He is not seeking to create pure machines like you see in the Terminator movies, or even in the Will Smith movie I, Robot. Mr. Musk says that he sees a real danger in artificial intelligence; he has called AI a “fundamental risk to the existence of human civilization.” I believe that he is correct there, as science, often pushed by military government funding, seeks pilotless aircraft: not just drones, but also big jets. Folks, the Navy has a sailorless warship! Of course that will then lead to commercial airlines getting rid of all of their pilots. Think about it: driverless cars, tractor-trailer units, driverless trains. Oh yes, we already have these technologies, don’t we? Think about factory jobs for a moment, please. When I was in my teen years my Dad worked at a Chrysler assembly plant in northern Illinois; back then the assembly line had far more employees putting the units together than what you see these days. Now machines directed by computer brains have replaced most of those “human” jobs. Machines and computers don’t have unions; they don’t ask for pay raises, paid days off, overtime pay, or medical benefits, and that list goes on and on. Why let a human do what a computer can do much cheaper, and in most cases, better?

 

Evidently Mr. Musk is concerned that we humans, starting with the poorest, weakest, and least educated, will come to be seen as only a burden on society (the wealthiest people): if you are not a positive to society, why should you be allowed to live off of someone else (the rich)? What was Arnold's phrase, 'you have been terminated'? Mr. Musk believes (and he is trying to accomplish this through his Neuralink company) "that the best way to keep pace with machine intelligence is to upgrade human intelligence." In the good ole days, wasn't that called going to school and getting the best education that you were able to get?

 

From a pure science perspective, Mr. Musk is correct on a couple of different planes. I believe that he is correct about his concerns regarding AI. Do you not believe that the servant can become the master? Could the humble public servant (politicians, bureaucrats, police) ever dare to become the master over the people? We have already had, for many years now, the integration of computer chips into people. It started out with chips for our pets so that they don't get lost from us. Then we went to chips for newborn babies, just in case they ever got lost or stolen. Then came chips for employees and their convenience. We have had little 'brain' chips for well over a decade now. Neuralink and Mr. Musk are now simply trying to stretch the human-computer 'interface,' as he puts it. There will soon be a day when, if you are an employee or an office supervisor of importance, the company will require you to have mandated chip technology in your hand, or you can't get the job or the promotion. If you don't think that what I am saying to you is logical or true, my friend, it is you who are living in a fantasy world, not me.

 

This last paragraph is going to be from my Christian Biblical viewpoint. We are told several times in the book of Revelation about the 'Mark of the Beast' being put into our hand or into our head, and we are told that if we humans allow this, then when Christ and His Angels return we will die twice. The first death is when this body dies; the second death is when God severs His relationship with us and casts us into Hell for all eternity. Many will say things along the lines of, 'What has the Mark of the Beast got to do with computer chips?' I know that most folks still do not realize what 'Armageddon' really is. Scripture is very plain that Armageddon is when the Nations of the Earth and their Armies fight against God and His Angels at the Second Advent of Christ. We are also told that the people who are found to have the mark of the Beast in their hand or in their head will be totally crushed as if in a wine-press. Friends, think about it for a moment: it is the governments, which at that time will be led by Demons and by Satan himself, that are going to fight against God, so yes, the governments will be even more wicked than they are now. Friends, the mark of the 'Beast' is not the number 666; nowhere does Scripture say that it is. Simply put, there will come a time when 10 governments will control almost all of the globe, and these 10 governments will sit upon the 7 Continents. Then the power will be consolidated into 3 all-powerful governments, then into one. Six is the sign of man, three is the sign of God. The world will have 3 all-powerful governments ruled by 3 of Satan's top Generals: 3 men who will try to take the place of God, as if they were Gods. Then they will give up their power to the 1 true Anti-Christ, Satan himself. 3 men (6's) who would be God (3's) if they could. Friends, all I can say to you as I close this article today is: please, for no reason, ever allow anyone to put any kind of chip into you.

What the Rise of Sentient Robots Will Mean for Human Beings

(THIS ARTICLE IS COURTESY OF NBC MACH)

What the Rise of Sentient Robots Will Mean for Human Beings

Science fiction may have us worried about sentient robots, but it’s the mindless ones we need to be cautious of. Conscious machines may actually be our allies.

Jun.19.2017 / 12:45 PM ET

The series T-800 Robot in the “Terminator” movie franchise is an agent of Skynet, an artificial intelligence system that becomes self-aware. | Paramount/Courtesy of Everett Collection

Zombies and aliens may not be a realistic threat to our species. But there’s one stock movie villain we can’t be so sanguine about: sentient robots. If anything, their arrival is probably just a matter of time. But what will a world of conscious machines be like? Will there be a place in it for us?

Artificial intelligence research has been going through a recent revolution. AI systems can now outperform humans at playing chess and Go, recognizing faces, and driving safely. Even so, most researchers say truly conscious machines — ones that don’t just follow programs but have feelings and are self-aware — are decades away. First, the reasoning goes, researchers have to build a generalized intelligence, a single machine with the above talents and the capacity to learn more. Only then will AI reach the level of sophistication needed for consciousness.

But some think it won’t take nearly that long.

“People expect that self-awareness is going to be this end game of artificial intelligence when really there are no scientific pursuits where you start at the end,” says Justin Hart, a computer scientist at the University of Texas. He and other researchers are already building machines with rudimentary minds. One robot wriggles like a newborn baby to understand its body. Another robot babbles about what it sees and cries when you hit it. Another sets off to explore its world on its own.

No one claims that robots have a rich inner experience — that they take pride in floors they’ve vacuumed or delight in the taste of 120-volt current. But robots can now exhibit some qualities similar to those of the human mind, including empathy, adaptability, and gumption.

Related: Will Robots Take Over the World?

Researchers design these cybernetic creatures not merely because doing so is cool, but because they are trying to fix flaws in machine-learning systems. Though these systems may be powerful, they are simple. They work by relating input to output, like a test where you match items in column ‘A’ with items in column ‘B’. The AI systems basically memorize these associations. There’s no deeper logic behind the answers they give. And that’s a problem.
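One way to picture the flaw described above is a system that does nothing but memorize input-output pairs. The toy sketch below (illustrative only, not any real AI system) answers correctly for inputs it has seen and has nothing to say about anything else, because there is no logic behind its answers:

```python
# A machine-learning system at its simplest: memorized input-output
# associations, like matching column 'A' to column 'B' on a test.
class AssociativeMemory:
    def __init__(self):
        self.pairs = {}  # input -> memorized output

    def train(self, examples):
        for inp, out in examples:
            self.pairs[inp] = out

    def predict(self, inp):
        # Returns the memorized answer, or nothing at all for unseen input;
        # there is no deeper reasoning to fall back on.
        return self.pairs.get(inp, None)

model = AssociativeMemory()
model.train([("cat photo", "cat"), ("dog photo", "dog")])

print(model.predict("cat photo"))  # a memorized association: "cat"
print(model.predict("fox photo"))  # never seen, so no answer at all: None
```

Real systems interpolate between examples rather than doing a literal lookup, but the inscrutability the article describes is the same: the answer comes from stored associations, not from an explanation the machine could give you.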

Humans can also be hard to read. We spend an inordinate amount of time analyzing ourselves and others, and arguably, that’s the main role of our conscious minds. If machines had minds, they might not be so inscrutable. We could simply ask them why they did what they did.

“If we could capture some of the structure of consciousness, it’s a good bet that we’d be producing some interesting capacity,” says Selmer Bringsjord, an AI researcher at the Rensselaer Polytechnic Institute in Troy, N.Y. Although science fiction may have us worried about sentient robots, it’s really the mindless robots we need to be cautious of. Conscious machines may actually be our allies.

Children interact with the programmable humanoid robot “Pepper,” developed by French robotics company Aldebaran Robotics, at the Global Robot Expo in Madrid in 2016. AFP-Getty Images

ROBOT, KNOW THYSELF

Self-driving cars have some of the most advanced AI systems today. They decide where to steer and when to brake by taking constant radar and laser readings and feeding them into algorithms. But much of driving is anticipating other drivers’ maneuvers and responding defensively — functions that are associated with consciousness.

“Self-driving cars will have to read the minds of what other self-driving cars want to do,” says Paul Verschure, a neuroscientist at Universitat Pompeu Fabra in Barcelona.

As a demonstration of how that might look, Hod Lipson, an engineering professor at Columbia University and co-author of a recent book on self-driving cars, and Kyung-Joong Kim at Sejong University in Seoul, South Korea built the robotic equivalent of a crazy driver. The small round robot (about the size of a hockey puck) moves on a loopy path according to its own logic. Then a second robot is set with the goal of intercepting the first robot no matter where the first one started, so it couldn’t record a fixed route; it had to divine the moving robot’s logic.

Using a procedure that mimicked Darwinian evolution, Lipson and Kim crafted an interception strategy. “It had basically developed a duplicate of the brain of the actor — not perfect, but good enough that it could anticipate what it’s going to do,” Lipson says.
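The flavor of that evolutionary procedure can be sketched in a few lines. In this toy version (the rule, numbers, and method are illustrative assumptions, not the actual experiment), a "driver" follows a hidden linear rule, and an interceptor evolves a duplicate of that rule by mutating candidate parameters and keeping only mutants that better explain the driver's observed behavior:

```python
import random

# The "crazy driver" follows a secret rule the interceptor cannot see.
def hidden_driver(t, a=0.7, b=2.0):
    return a * t + b

observations = [(t, hidden_driver(t)) for t in range(20)]

def error(params):
    # How badly a candidate model mispredicts the driver's observed positions.
    a, b = params
    return sum((a * t + b - pos) ** 2 for t, pos in observations)

random.seed(0)
best = [random.uniform(-5, 5), random.uniform(-5, 5)]
for _ in range(5000):
    # Darwinian step: mutate the current best model, keep the mutant
    # only if it anticipates the driver better (survival of the fittest).
    mutant = [p + random.gauss(0, 0.1) for p in best]
    if error(mutant) < error(best):
        best = mutant

print(best)  # converges close to the hidden rule (0.7, 2.0)
```

The evolved parameters are "not perfect, but good enough" in exactly Lipson's sense: the interceptor never sees the driver's code, only its behavior, yet ends up with a working duplicate of its logic.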

Lipson’s team also built a robot that can develop an understanding of its body. The four-legged spidery machine is about the size of a large tarantula. When switched on, its internal computer has no prior information about itself. “It doesn’t know how its motors are arranged, what its body plan is,” Lipson says.

But it has the capacity to learn. It makes all the actions it is capable of to see what happens: how, for example, turning on a motor bends a leg joint. “Very much like a baby, it babbles,” Lipson says. “It moves its motors in a random way.”

After four days of flailing, it realizes it has four legs and figures out how to coordinate and move them so it can slither across the floor. When Lipson unplugs one of the motors, the robot realizes it now has only three legs and that its actions no longer produce the intended effects.

“I would argue this robot is self-aware in a very primitive way,” Lipson says.

Could Robots Create a ‘Jobless Future’ for Humans?

Another humanlike capability that researchers would like to build into AI is initiative. Machines excel at playing the game Go because humans directed the machines to solve it. They can’t define problems on their own, and defining problems is usually the hard part.

In a forthcoming paper for the journal “Trends in Cognitive Sciences,” Ryota Kanai, a neuroscientist and founder of the Tokyo-based startup Araya, discusses how to give machines intrinsic motivation. In a demonstration, he and his colleagues simulated agents driving a car in a virtual landscape that includes a hill too steep for the car to climb unless it gets a running start. If told to climb the hill, the agents figure out how to do so. Until they receive this command, the car sits idle.

Then Kanai’s team endowed these virtual agents with curiosity. They surveyed the landscape, identified the hill as a problem, and figured out how to climb it even without instruction.

“We did not give a goal to the agent,” Kanai says. “The agent just explores the environment to learn what kind of situation it is in by making predictions about the consequence of its own action.”
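A minimal sketch of that idea, under invented numbers (this is not Araya's code): the agent's only "reward" is its own prediction error, so it gravitates toward the part of the world it does not yet understand, and loses interest as its forward model improves:

```python
# The world maps actions to consequences; the agent does not know this map.
world = {"flat road": 1.0, "steep hill": 5.0}

predictions = {a: 0.0 for a in world}  # the agent's forward model
visits = {a: 0 for a in world}

for _ in range(10):
    # Curiosity: choose the action whose consequence is hardest to predict.
    action = max(world, key=lambda a: abs(world[a] - predictions[a]))
    visits[action] += 1
    # Learn from the surprise, shrinking future prediction error.
    predictions[action] += 0.5 * (world[action] - predictions[action])

print(visits)  # the surprising hill attracts more exploration than the road
```

No external goal is ever given; the hill becomes a "problem" only because the agent's own predictions fail there, which is the sense in which the agents identified the hill without instruction.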

Related: This Robot Can Compose Its Own Music

The trick is to give robots enough intrinsic motivation to make them better problem solvers, but not so much that they quit and walk out of the lab. Machines can prove as stubborn as humans. Joscha Bach, an AI researcher at Harvard, put virtual robots into a “Minecraft”-like world filled with tasty but poisonous mushrooms. He expected the bots to learn to avoid the mushrooms. Instead, they stuffed their mouths.

“They discounted future experiences in the same way as people did, so they didn’t care,” Bach says. “These mushrooms were so nice to eat.” He had to instill an innate aversion into the bots. In a sense, they had to be taught values, not just goals.
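Why the bots gorged themselves, and why an innate aversion fixes it, comes down to temporal discounting. In this toy calculation (all numbers invented), a small immediate pleasure outweighs a large but delayed harm once the harm is discounted, unless a built-in aversion changes the value at the moment of eating:

```python
def value_of_eating(taste, poison_damage, delay, discount=0.5, aversion=0.0):
    # Present value of eating the mushroom: immediate taste, minus the
    # future damage shrunk by discounting, minus any innate aversion.
    return taste - poison_damage * (discount ** delay) - aversion

# Tasty now (+1), very harmful five steps later (-10).
print(value_of_eating(1.0, 10.0, delay=5))                # positive: eat it
print(value_of_eating(1.0, 10.0, delay=5, aversion=2.0))  # negative: avoid it
```

This is the sense in which the bots had to be taught values, not just goals: no amount of bad future experience deters an agent that discounts the future steeply, so the deterrent has to act in the present.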

PAYING ATTENTION

In addition to self-awareness and self-motivation, a key function of consciousness is the capacity to focus your attention. Selective attention has been an important area in AI research lately, not least by Google DeepMind, which developed the Go-playing computer.

Image::China's 19-year-old Go player Ke Jie prepares to make a move during the second match against Google's artificial intelligence program.|||[object Object]
China’s 19-year-old Go player Ke Jie prepares to make a move during the second match against Google’s artificial intelligence program. AFP – Getty Images / AFP Or Licensors

“Consciousness is an attention filter,” says Stanley Franklin, a computer science professor at the University of Memphis. In a paper published last year in the journal “Biologically Inspired Cognitive Architectures,” Franklin and his colleagues reviewed their progress in building an AI system called LIDA that decides what to concentrate on through a competitive process, as suggested by neuroscientist Bernard Baars in the 1980s. The processes watch for interesting stimuli — loud, bright, exotic — and then vie for dominance. The one that prevails determines where the mental spotlight falls and informs a wide range of brain function, including deliberation and movement. The cycle of perception, attention, and action repeats five to 10 times a second.
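The competitive cycle described for LIDA can be caricatured in a few lines. In this sketch (the stimuli and salience scores are invented for illustration), candidate stimuli vie for dominance each cycle and the winner claims the mental spotlight:

```python
def attend(stimuli):
    # Each stimulus carries a salience score (loud, bright, exotic -> higher);
    # the most salient one wins the competition for the spotlight.
    return max(stimuli, key=stimuli.get)

cycle_1 = {"hum of fan": 0.2, "flash of light": 0.9, "itch": 0.4}
cycle_2 = {"hum of fan": 0.2, "itch": 0.4}

print(attend(cycle_1))  # the flash wins the spotlight
print(attend(cycle_2))  # with the flash gone, the itch prevails
```

In the real architecture the winner then informs deliberation and movement before the perception-attention-action cycle repeats, five to 10 times a second; the sketch shows only the filtering step.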

The first version of LIDA was a job-matching server for the U.S. Navy. It read emails and focused on pertinent information while juggling each job hunter’s interests, the availability of jobs, and the requirements of government bureaucracy.

Since then, Franklin’s team has used the system to model animals’ minds, especially behavioral quirks that result from focusing on one thing at a time. For example, LIDA is just as prone as humans are to a curious psychological phenomenon known as “attentional blink”: when something catches your attention, you become oblivious to anything else for about half a second. This cognitive blind spot depends on many factors, and LIDA shows humanlike responses to these same factors.

Pentti Haikonen, a Finnish AI researcher, has built a robot named XCR-1 on similar principles. Whereas other researchers make the modest claim that their machines recreate some quality of consciousness, Haikonen argues that his creation is capable of genuine subjective experience and basic emotions.


The system learns to make associations much like the neurons in our brains do. If Haikonen shows the robot a green ball and speaks the word “green,” the vision and auditory modules respond and become linked. If Haikonen says “green” again, the auditory module will respond and, through the link, so will the vision module. The robot will proceed as if it heard the word and saw the color, even if it’s staring into an empty void. Conversely, if the robot sees green, the auditory module will respond, even if the word wasn’t uttered. In short, the robot develops a kind of synesthesia.
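The cross-modal linking described here resembles Hebbian learning ("neurons that fire together wire together"). A toy sketch, not Haikonen's actual architecture: when the vision and auditory modules respond together, the link between them strengthens, so later either modality can activate the other:

```python
links = {}  # (seen, heard) -> association strength

def experience(seen, heard):
    # The two modules respond at the same time, so their link strengthens.
    links[(seen, heard)] = links.get((seen, heard), 0.0) + 1.0

def hear(word):
    # An activated auditory module excites every linked visual response,
    # even if nothing is actually in view.
    return [seen for (seen, heard), w in links.items() if heard == word and w > 0]

experience("green ball", "green")
experience("green ball", "green")

print(hear("green"))  # the word alone now evokes the sight: ['green ball']
print(hear("blue"))   # no link has formed, so nothing is evoked: []
```

This is the "kind of synesthesia" the article describes: once the link exists, hearing "green" produces a visual response with no green object present, and vice versa.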


“If we see a ball, we may say so to ourselves, and at that moment our perception is rather similar to the case when we actually hear that word,” Haikonen says. “The situation in the XCR-1 is the same.”

Things get interesting when the modules clash — if, for example, the vision module sees green while the auditory module hears “blue.” If the auditory module prevails, the system as a whole turns its attention to the word it hears while ignoring the color it sees. The robot has a simple stream of consciousness consisting of the perceptions that dominate it moment by moment: “green,” “ball,” “blue,” and so on. When Haikonen wires the auditory module to a speech engine, the robot will keep a running monolog about everything it sees and feels.

Haikonen also gives vibration a special significance as “pain,” which preempts other sensory inputs and consumes the robot’s attention. In one demonstration, Haikonen taps the robot and it blurts, “Me hurt.”

“Some people get emotionally disturbed by this, for some reason,” Haikonen says. (He and others are unsentimental about the creations. “I’m never like, ‘Poor robot,’” Verschure says.)

A NEW SPECIES

Building on these early efforts, researchers will develop more lifelike machines. We could see a continuum of conscious systems, just as there is in nature, from amoebas to dogs to chimps to humans and beyond. The gradual progress of this technology is good because it gives us time to adjust to the idea that, one day, we won’t be the only advanced beings on the planet.

A child reaches out to a robotic dog at the World Robot Conference in Beijing in 2016. AP / Copyright 2016 The Associated Press. All Rights Reserved.

For a long while, our artificial companions will be vulnerable — more pet than threat. How we treat them will hinge on whether we recognize them as conscious and as capable of suffering.

“The reason that we value non-human animals, to the extent that people do, is that we see, based on our own consciousness, the light of consciousness within them as well,” says Susan Schneider, a philosopher at the University of Connecticut who studies the implications of AI. In fact, she thinks we will deliberately hold back from building conscious machines to avoid the moral dilemmas it poses.

“If you’re building conscious systems and having them work for us, that would be akin to slavery,” Schneider says. By the same token, if we don’t give advanced robots the gift of sentience, it worsens the threat they may eventually pose to humanity because they will see no particular reason to identify with us and value us.

Related: Speedy Delivery Bots May Change Our Buying Habits

Judging by what we’ve seen so far, conscious machines will inherit our human vulnerabilities. If robots have to anticipate what other robots do, they will treat one another as creatures with agency. Like us, they may start attributing agency to inanimate objects: stuffed animals, carved statues, the wind.

Last year, social psychologists Kurt Gray of the University of North Carolina and the late Daniel Wegner suggested in their book “The Mind Club” that this instinct was the origin of religion. “I would like to see a movie where the robots develop a religion because we have engineered them to have an intentionality prior so that they can be social,” Verschure says. “But their intentionality prior runs away.”

These machines will vastly exceed our problem-solving ability, but not everything is a solvable problem. The only response they could have to conscious experience is to revel in it, and with their expanded ranges of sensory perception, they will see things people wouldn’t believe.

“I don’t think a future robotic species is going to be heartless and cold, as we sometimes imagine robots to be,” Lipson says. “They’ll probably have music and poetry that we’ll never understand.”

FOLLOW NBC MACH ON TWITTER, FACEBOOK, AND INSTAGRAM.

The Simple Economics of Machine Intelligence—How Soon Will Humans Not Be Needed?

(THIS ARTICLE IS COURTESY OF ‘DIGITOPOLY NEWS’)

[This post was co-written with Ajay Agrawal and Avi Goldfarb and appeared in HBR Blogs on 17 November 2016]

The year 1995 was heralded as the beginning of the “New Economy.” Digital communication was set to upend markets and change everything. But economists by and large didn’t buy into the hype. It wasn’t that we didn’t recognize that something changed. It was that we recognized that the old economics lens remained useful for looking at the changes taking place. The economics of the “New Economy” could be described at a high level: Digital technology would cause a reduction in the cost of search and communication. This would lead to more search, more communication, and more activities that go together with search and communication. That’s essentially what happened.

Today we are seeing similar hype about machine intelligence. But once again, as economists, we believe some simple rules apply. Technological revolutions tend to involve some important activity becoming cheap, like the cost of communication or finding information. Machine intelligence is, in its essence, a prediction technology, so the economic shift will center around a drop in the cost of prediction.

The first effect of machine intelligence will be to lower the cost of goods and services that rely on prediction. This matters because prediction is an input to a host of activities including transportation, agriculture, healthcare, energy, manufacturing, and retail.

When the cost of any input falls so precipitously, there are two other well-established economic implications. First, we will start using prediction to perform tasks where we previously didn’t. Second, the value of other things that complement prediction will rise.

Lots of tasks will be reframed as prediction problems

As machine intelligence lowers the cost of prediction, we will begin to use it as an input for problems to which we never previously applied it. As a historical example, consider semiconductors, an area of technological advance that caused a significant drop in the cost of a different input: arithmetic. With semiconductors we could calculate cheaply, so activities for which arithmetic was a key input, such as data analysis and accounting, became much cheaper. However, we also started using the newly cheap arithmetic to solve problems that were not historically arithmetic problems. An example is photography. We shifted from a film-oriented, chemistry-based approach to a digital-oriented, arithmetic-based approach. Other new applications for cheap arithmetic include communications, music, and drug discovery.

The same goes for machine intelligence and prediction. As the cost of prediction falls, not only will activities that were historically prediction-oriented become cheaper — like inventory management and demand forecasting — but we will also use prediction to tackle other problems for which prediction was not historically an input.

Consider navigation. Until recently, autonomous driving was limited to highly controlled environments such as warehouses and factories, where programmers could anticipate the range of scenarios a vehicle might encounter and could program if-then-else-type decision algorithms accordingly (e.g., “If an object approaches the vehicle, then slow down”). It was inconceivable to put an autonomous vehicle on a city street because the number of possible scenarios in such an uncontrolled environment would require programming an infinite number of if-then-else statements.

Inconceivable, that is, until recently. Once prediction became cheap, innovators reframed driving as a prediction problem. Rather than programming endless if-then-else statements, they simply asked the AI to predict: “What would a human driver do?” They outfitted vehicles with a variety of sensors – cameras, lidar, radar, etc. – and then collected millions of miles of human driving data. By linking the incoming environmental data from sensors on the outside of the car to the driving decisions made by the human inside the car (steering, braking, accelerating), the AI learned to predict how humans would react to each second of incoming data about their environment. Thus, prediction is now a major component of the solution to a problem that was previously not considered a prediction problem.
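The reframing can be caricatured in a few lines. Here a nearest-neighbor lookup over recorded human sensor/action pairs stands in for the learned model; the sensor features, log data, and actions are all invented for illustration (real systems learn a far richer mapping from millions of miles of data):

```python
# "Driving logs": (distance_to_obstacle_m, obstacle_speed) -> human action.
human_logs = [
    ((50.0, 0.0), "cruise"),
    ((12.0, 0.0), "brake"),
    ((30.0, -2.0), "slow down"),
]

def predict_human_action(sensors):
    # Instead of if-then-else rules, predict what a human would do by
    # finding the logged situation most similar to the current one.
    def similarity(log):
        (d, v), _ = log
        return -((sensors[0] - d) ** 2 + (sensors[1] - v) ** 2)
    return max(human_logs, key=similarity)[1]

print(predict_human_action((48.0, 0.0)))  # looks like open road: cruise
print(predict_human_action((10.0, 0.0)))  # looks like the braking case: brake
```

The point is structural: no scenario is ever enumerated by a programmer; every new situation is handled by predicting from examples, which is why the approach scales to uncontrolled city streets where rule-writing could not.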

Judgment will become more valuable

When the cost of a foundational input plummets, it often affects the value of other inputs. The value goes up for complements and down for substitutes. In the case of photography, the value of the hardware and software components associated with digital cameras went up as the cost of arithmetic dropped because demand increased – we wanted more of them. These components were complements to arithmetic; they were used together.  In contrast, the value of film-related chemicals fell – we wanted less of them.

All human activities can be described by five high-level components: data, prediction, judgment, action, and outcomes. For example, a visit to the doctor in response to pain leads to: 1) x-rays, blood tests, monitoring (data), 2) diagnosis of the problem, such as “if we administer treatment A, then we predict outcome X, but if we administer treatment B, then we predict outcome Y” (prediction), 3) weighing options: “given your age, lifestyle, and family status, I think you might be best with treatment A; let’s discuss how you feel about the risks and side effects” (judgment); 4) administering treatment A (action), and 5) full recovery with minor side effects (outcome).

As machine intelligence improves, the value of human prediction skills will decrease because machine prediction will provide a cheaper and better substitute for human prediction, just as machines did for arithmetic. However, this does not spell doom for human jobs, as many experts suggest. That’s because the value of human judgment skills will increase. Using the language of economics, judgment is a complement to prediction and therefore when the cost of prediction falls demand for judgment rises. We’ll want more human judgment.

For example, when prediction is cheap, diagnosis will be more frequent and convenient, and thus we’ll detect many more early-stage, treatable conditions. This will mean more decisions will be made about medical treatment, which means greater demand for the application of ethics, and for emotional support, which are provided by humans. The line between judgment and prediction isn’t clear cut – some judgment tasks will even be reframed as a series of predictions. Yet, overall the value of prediction-related human skills will fall, and the value of judgment-related skills will rise.
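The complements argument above can be made concrete with a toy calculation (all numbers invented): suppose each diagnosis consumes one unit of machine prediction plus one unit of human judgment, and a case is undertaken only when its value covers the cost of both inputs. When the price of prediction falls, more cases clear that bar, so demand for judgment rises:

```python
# Value of diagnosing each of six hypothetical patients.
case_values = [2, 4, 6, 8, 10, 12]

def judgment_demand(prediction_price, judgment_price=3.0):
    # Each undertaken case consumes one unit of human judgment, so the
    # number of worthwhile cases IS the demand for judgment.
    cost = prediction_price + judgment_price
    return sum(1 for v in case_values if v >= cost)

print(judgment_demand(prediction_price=9.0))  # expensive prediction: 1 case
print(judgment_demand(prediction_price=0.5))  # cheap prediction: 5 cases
```

This is the economics in miniature: prediction and judgment are used together, so cheaper prediction raises the quantity demanded of judgment, even as it destroys the market for human prediction itself.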

Interpreting the rise of machine intelligence as a drop in the cost of prediction doesn’t offer an answer to every specific question of how the technology will play out. But it yields two key implications: 1) an expanded role of prediction as an input to more goods and services, and 2) a change in the value of other inputs, driven by the extent to which they are complements to or substitutes for prediction. These changes are coming. The speed and extent to which managers should invest in judgment-related capabilities will depend on how fast the changes arrive.

Bill Gates on dangers of artificial intelligence: ‘I don’t understand why some people are not concerned’

(THIS ARTICLE IS COURTESY OF THE WASHINGTON POST)

Bill Gates on dangers of artificial intelligence: ‘I don’t understand why some people are not concerned’

 (THE END OF THE HUMAN RACE?)
January 29, 2015

Bill Gates is a passionate technology advocate (big surprise), but his predictions about the future of computing aren’t uniformly positive.

During a wide-ranging Reddit “Ask me Anything” session — one that touched upon everything from his biggest regrets to his favorite spread to lather on bread — the Microsoft co-founder and billionaire philanthropist outlined a future that is equal parts promising and ominous.

Midway through the discussion on Wednesday, Gates was asked what personal computing will look like in 2045. Gates responded by asserting that the next 30 years will be a time of rapid progress.

“Even in the next 10 problems like vision and speech understanding and translation will be very good,” he wrote. “Mechanical robot tasks like picking fruit or moving a hospital patient will be solved. Once computers/robots get to a level of capability where seeing and moving is easy for them then they will be used very extensively.”

He went on to highlight a Microsoft project known as the “Personal Agent,” which is being designed to help people manage their memory, attention and focus. “The idea that you have to find applications and pick them and they each are trying to tell you what is new is just not the efficient model – the agent will help solve this,” he said. “It will work across all your devices.”

The response from Reddit users was mixed, with some making light of Gates’s revelation (“Clippy 2.0?,” wrote one user) — and others sounding the alarm.

“This technology you are developing sounds at its essence like the centralization of knowledge intake,” a Redditor wrote. “Ergo, whomever controls this will control what information people make their own. Even today, we see the daily consequences of people who live in an environment that essentially tunnel-visions their knowledge.”

Shortly after, Gates was asked how much of an existential threat superintelligent machines pose to humans.

The question has been at the forefront of several recent discussions among prominent futurists. Last month, theoretical physicist Stephen Hawking said artificial intelligence “could spell the end of the human race.”

[Why the world’s most intelligent people shouldn’t be so afraid of artificial intelligence]

Speaking at the MIT Aeronautics and Astronautics department’s Centennial Symposium in October, Tesla boss Elon Musk referred to artificial intelligence as “summoning the demon.”

I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.

British inventor Clive Sinclair has said he thinks artificial intelligence will doom mankind.

“Once you start to make machines that are rivaling and surpassing humans with intelligence, it’s going to be very difficult for us to survive,” he told the BBC. “It’s just an inevitability.”

After gushing about the immediate future of technology in his Reddit AMA, Gates aligned himself with the AI alarm-sounders.

“I am in the camp that is concerned about super intelligence,” Gates wrote. “First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Once he finished addressing the potential demise of humankind, Gates got back to answering more immediate, less serious questions, like revealing his favorite spread to put on bread.

“Butter? Peanut butter? Cheese spread?” he wrote. “Any of these.”

The Microsoft co-founder’s comments on AI came shortly after the managing director of Microsoft Research’s Redmond Lab said the doomsday declarations about the threat to human life are overblown.

“There have been concerns about the long-term prospect that we lose control of certain kinds of intelligences,” Eric Horvitz said, according to the BBC. “I fundamentally don’t think that’s going to happen. I think that we will be very proactive in terms of how we field AI systems, and that in the end we’ll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life.”

Horvitz noted that “over a quarter of all attention and resources” at Microsoft Research are focused on artificial intelligence.
