What Company Boards Need To Know About AI

(THIS ARTICLE IS COURTESY OF HARVARD BUSINESS REVIEW)

 

What Boards Need to Know About AI

MAY 24, 2019


Being a board member is a hard job — ask anyone who has ever been one. Company directors have to understand the nature of the business, review documents, engage in meaningful conversation with CEOs, and give feedback while still maintaining positive relationships with management. These are all hard things to balance. But, normally, boards don’t have to get involved with individual operational projects, especially technical ones. In fact, a majority of boards have very few members who are comfortable with advanced technology, and this generally has little impact on the company.

This is about to change, thanks to machine learning and artificial intelligence.

More than half of technology executives in the 2019 Gartner CIO Survey say they intend to employ AI before the end of 2020, up from 14% today. If you’re moving too slowly, a competitor could use AI to put you out of business. But if you move too quickly, you risk taking an approach the company doesn’t truly know how to manage. In a recent report by NewVantage Partners, 75% of companies cited fear of disruption from data-driven digital competitors as the top reason they’re investing in AI.

The questions boards are going to have to ask themselves are similar to those they would ask in the face of any large opportunity investment: Why are we spending all this money? What’s the economic benefit? How does it impact our people and our long-term competitiveness?


Answering these questions requires expertise in technology. But you can’t just add a tech expert to the board and count on him or her to keep the rest of the board up to speed. Having served in that role, I have found it to be at best a useful half-step. Relying on a single techie is no replacement for having a full board mastering at least a basic understanding of AI and its disruptive potential.

Every board’s comfort level is going to differ depending on the industry. Manufacturers well understand how robots can free up people to do higher-order work by taking on repetitive and potentially dangerous jobs. Hospitals and health insurers are starting to deploy AI widely, but big successes have been elusive. By contrast, the financial services business is ripe for disruption by AI. Lenders have massive amounts of data and the potential to free up billions in cash flow by finding new efficiencies through applications that will, for example, help bankers make smarter lending decisions and create new revenue opportunities by offering customers better, more tailored products.

That said, here are four guideposts that board members in any industry can use to orient themselves when they begin the journey:

It’s math, not magic. Boards shouldn’t be intimidated by AI. Members don’t need to have degrees in computer engineering to understand the technology behind AI, just like they don’t need to be CPAs to understand the company’s balance sheet. Any good use of ML or AI is going to be an outgrowth of what the company is already doing, not some kind of universal all-knowing Skynet type of AI. Keeping that perspective at the forefront and gaining a basic understanding of AI will help boards better decide how to direct AI use.

Well-run AI projects should be easily understood. When evaluating if a project is right for their company, boards should feel confident enough to say when something doesn’t make sense. The best-run AI projects should be explainable in plain English. It should be clear how real groups of people, whether employees, customers or management, will be affected. If a vendor or internal team can’t explain how an AI project works, it may not be the right fit for your company. This is not unique to ML — it used to be true for ERP implementations — but ML is moving more quickly through the corporate world than ERPs did. For example, when I presented an ML-underwriting project to the board of one top credit-card issuer, I started with the economic impact to their business, the timeframe for delivery, what the roadblocks might be for IT and compliance, and who would need to get involved.

You don’t have to get creepy to get value out of data. Too often, companies assume that in order to make the most out of AI, they need to be like Facebook or Google and pull in every last bit of data they can find. But that can get creepy fast and, usually, there’s no need for that level of data. Our work developing machine learning-based credit underwriting models with banks and lenders has shown that social media data doesn’t provide such strong signals, anyway. Most companies are already sitting on a ton of pretty banal data that’s full of signal and insights that can be unlocked using ML.
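
The claim that ordinary tabular data is full of signal is easy to demonstrate in miniature. The sketch below is illustrative only (synthetic data and a hand-rolled logistic regression, nothing like ZestFinance's actual underwriting models): two banal columns, income and job tenure, predict a noisy repayment label far better than chance.

```python
# Illustrative sketch only: synthetic applicants, a hidden labeling rule,
# and a plain logistic regression trained by stochastic gradient descent.
import math, random

random.seed(0)

def make_applicant():
    income = random.uniform(20, 120)   # annual income, $ thousands
    tenure = random.uniform(0, 120)    # months at current job
    # Hidden rule that labels the synthetic data: stabler, higher-income
    # applicants repay more often (with noise).
    p_repay = 1 / (1 + math.exp(-(0.04 * income + 0.02 * tenure - 4)))
    label = 1 if random.random() < p_repay else 0
    return (income / 100, tenure / 100), label  # scale features to ~[0, 1]

data = [make_applicant() for _ in range(2000)]

# Plain logistic regression via stochastic gradient descent.
w1 = w2 = b = 0.0
lr = 0.1
for _ in range(50):
    for (x1, x2), y in data:
        p = 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))
        w1 -= lr * (p - y) * x1
        w2 -= lr * (p - y) * x2
        b -= lr * (p - y)

correct = sum(
    (1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b))) > 0.5) == (y == 1)
    for (x1, x2), y in data
)
print(f"training accuracy: {correct / len(data):.2f}")
```

The model recovers most of the signal the hidden rule put in, despite never seeing anything more exotic than two everyday columns.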

AI is an operating expense, not a capital investment. If management’s plan for getting on the AI bandwagon revolves around a big one-time investment, chances are they are going about it wrong. AI has the potential to enhance the bottom line by boosting revenue and cutting costs, but budget needs to be put aside to ensure the algorithms and models are functioning properly and are being rebuilt or refit as macro factors change and new sources of data emerge. Think of AI as you would a Formula 1 race car, which performs best when its support team has a real-time view of the vehicle’s health as it’s zipping around the track.

Widespread adoption of AI in business is still in its infancy. Boards that fail to get in front of this trend will pay the price.


Douglas Merrill is the CEO and founder of ZestFinance, a Los Angeles-based financial services technology company. He was previously CIO and VP of Engineering at Google.

China: AI Chip With World-Class Algorithm Launched

(THIS ARTICLE IS COURTESY OF SHANGHAI CHINA’S SHINE NEWS)

 

AI chip with world-class algorithm launched

Shanghai-based startup Yitu launched a server AI chip on Thursday, the world’s first AI chip made specifically for computer vision tasks such as facial recognition and smart driving.

It combines Yitu’s algorithm with Shanghai’s advanced integrated circuit (IC) ecosystem, reflecting Shanghai’s effort to establish itself as an “AI highland” nationally and globally in both the AI and IC sectors.

“It’s prime time for Shanghai to boost AI development and the integration of innovation with traditional industries,” said Chen Mingbo, deputy secretary general of the Shanghai government.

Zhu Long, co-founder and chief executive of Yitu, said the new questcore chip, with its “world-class algorithm,” is competitive enough to take on rivals, including current market leader Nvidia. He added that it marks the end of the Moore’s Law era, in which chip makers like Intel dominated the market.

In the new AI era, application, cost and energy consumption are priorities for users alongside raw computing capacity, analysts said.

In China, Huawei, Yitu and Cambricon have developed AI chips for servers, which are expected to become the new “brains and hearts” of modern society. Yitu’s questcore, or Qiusuo in Mandarin (meaning exploration), is the first AI server chip for computer vision processing.

Yitu demonstrated the chip in a real-time intelligent video analysis task — matching the faces of several hundred people against previously provided photos.

The result was an almost 100 percent success rate, all achieved within a second.

The chip offers three to five times the computer vision capacity of rivals like Nvidia while using less energy.

In the future, the chip could be used in airports and railway stations to process video from 10,000 cameras simultaneously, and even in autonomous cars, Zhu added.

The chip can also be used for motion capture, cancer diagnosis, bone examinations, vehicle management and autonomous driving. Yitu’s portfolio will cover software, algorithms, chips and servers.

Shanghai’s development of the IC industry also helped Yitu in developing the chip, Zhu said.

Industrial AI alliance debuts in Shanghai China

(THIS ARTICLE IS COURTESY OF SHANGHAI CHINA’S ‘SHINE’ NEWSPAPER)

 

Industrial AI alliance debuts in Shanghai

Zhu Shenshen / SHINE

An AI industrial alliance was founded in Shanghai on Thursday, with 22 members in the first batch.

An AI industrial alliance was founded in Shanghai on Thursday, another step toward a complete AI ecosystem in the city.

Shanghai AI Development Alliance (SAA) consists of 22 firms, including Baidu, DeepBlue and UCloud, along with the Microsoft Asia Research Center (Shanghai) and ABB, a Swiss-Swedish multinational operating in robotics and automation technology.

Unlike other tech alliances, SAA draws its players from a variety of industries, including automakers, banks, industrial zones and telecommunications carriers.

Shanghai is home to more than 1,000 AI firms. The city generates a huge volume of data each day, offering huge potential for AI firms, said Zhang Ying, chief engineer of the Shanghai Municipal Commission of Economy and Informatization.

The local AI industry generated 70 billion yuan (US$10.4 billion) in 2017. Core sectors are expected to produce more than 100 billion yuan by 2020.

Binjiang in Xuhui District and Zhangjiang of the Pudong New Area are leading AI innovation bases, with other developments in Yangpu, Changning, Minhang and Jing’an districts.

AIWIN, or Artificial Intelligence World Innovations, a globally oriented AI competition, also debuted on Thursday as part of the coming World Artificial Intelligence Conference (WAIC) in Shanghai this autumn. AIWIN has invited startups from Israel, France and the UK to attend the event.

When The Poor Serve No Need We Will Be Exterminated


 

Earlier I posted an article that came from the Government of China; it appeared in several of their news outlets and stated that by the year 2027, in China’s financial district alone, AI will cause the loss of 2.3 million jobs. Remember that their current President for life, Mr. Xi Jinping, is a devout follower of Chairman Mao. When Chairman Mao was in charge in China, the country’s population was about one billion people, and his policies were to let about half of the Nation starve to death. One of the main reasons he gave was the Central Government’s inability not only to control them but also to feed them. The population of the United States and of Russia combined today is about 470 million people; Mao was speaking of letting 500 million of his own people starve to death. There are many reasons that China went to its ‘one child’ policy for several decades, and these were two of the top ones.

 

There are those in China and elsewhere in the world who will argue that these things could not happen today because we are now much more civilized, and to this I have to say: oh, really? The United States is without a doubt a ‘surveillance State’ today; if you think otherwise, you are being quite naive. There are good things about living under constant surveillance, though. I have no doubt that the FBI, CIA, and NSA have stopped quite a few attacks upon the American people because of their secretive work. Yet how much freedom do the people give up for the sake of being safer? The more a government knows, the more easily it can totally control the lives of the people. When it comes to governing a Nation, the main building block of power is the ability to control the people. Lose control of the streets, and they lose their grip on power.

 

Now let’s get back to financials within a government. Unless you are oblivious to reality, you should know that the tail that wags the dog is money. Back in the mid-1970s I worked in a Chrysler assembly plant in northern Illinois for just a couple of weeks (I couldn’t stand the thought of spending 37 years on an assembly line putting cushions in car seats), so I quit. What I did notice was how many people worked on the different ‘lines.’ As the cars went down the assembly line, you had many people doing manual labor like spot welding and putting windshields into the car frames. Go there now and see how many of those jobs are still there and how many are being done by automation; the job loss is staggering. Think of stores like Wal-Mart, which are getting rid of their cashiers in favor of automation and self-checkouts. Now think about self-driving cars, trucks and even trains. Companies like Uber are killing the taxi industry. What do all of these things have in common, folks? Companies are trying to get rid of human employees, and the reason is simple: more profits for the top-end persons in these companies.

 

If you are old enough (I am 62), do you remember when we used to hear how technology was going to let workers work only four days a week, because with technology we could get five days’ work done in four? Some people were foolish enough to think their employer would pay them for five days’ work even though they only worked four. The reality was that employees still worked five days a week, but the companies demanded six or seven days of finished product in those five days, for no more pay. Then, of course, the companies could ‘let go’ some of their workforce because they didn’t need them anymore. The employment issue has only grown from there as more and more computers and machines have taken over jobs that humans used to do.

 

I have spoken of the world’s stock markets before, and how I believe they are nothing but a Ponzi scheme and a curse to the working class, the working poor who labor in the corporations listed on these ‘Markets.’ Some will argue that through years of buying and selling stocks and bonds they have been able to amass a ‘nice little retirement fund,’ yet in reality all the profits a person has amassed over the past thirty years can easily be wiped out in one or two hours on this same ‘Market scheme.’ Little people like us working-class folks at best get the crumbs that fall off the ‘Boss Man’s’ plate. We are no more than dogs licking their floor and their shoes. What takes you or me 30 years to amass, the ‘connected’ make in one 5-minute transaction.

 

When there are, let’s say, 4 billion working-age poor people (ages 10-75) but only 2 billion actual jobs that need a human’s hands to do, what will happen to the other 2 billion people, and all of their families, all of the children? The Republicans in the U.S. Congress often refer to things like Social Security, Medicare, Medicaid, Food Stamps, Aid For Dependent Children, unemployment checks, VA Disability checks and even the VA itself as “entitlements,” as “Welfare,” things that must be “defunded,” “stopped.” Why is this? The answer is simple: it takes away from the money that flows to the top end of the financial class. The Republicans say that they are the “Christian right,” yet their actions are as anti-Christian as you can get in American politics. Do not get me wrong, I am no fan of the Democratic Party either, with its platform of murdering babies (pro-abortion). Both ‘Parties’ are pure evil; they will both do everything they can to make sure the American people never get a viable 3rd or 4th political party, and the reason is simple: that would take away from their power, and they aren’t about to let that happen.

 

When there are not enough jobs for the poor to do, not even slave-labor jobs, who is going to house and feed these people if they can’t earn an income? Is the top 1% going to just ‘give’ these people money from their bank accounts? When there are 7 billion people on the planet but only enough food and clean drinking water for 6 billion, who is going to get that food and clean water, the poorest of the poor? Really? If you really think so, how naive you are, my friend! In this new world that is on our doorstep, indeed kicking down our doors right now, you are either the lead dog or you are daily looking up the lead dog’s ass, drinking their piss for water and licking up their shit for food. In this regard, for the poor, in this new world we are all hurtling into, thousands, then millions, then billions of people will be fighting for a position behind these lead dogs just so they can stay alive. Those who refuse will not be fed and housed; we will be exterminated!

 

Within Two Years China Will Be Ahead Of The U.S. In AI Technology

(THIS ARTICLE IS COURTESY OF THE SAN FRANCISCO CHRONICLE)

 

Race to develop artificial intelligence is one between Chinese authoritarianism and U.S. democracy

Monitors show facial recognition software in use at the headquarters of the artificial intelligence company Megvii, in Beijing, May 10, 2018. Beijing is putting billions of dollars behind facial recognition and other technologies to track and control its citizens.

Photo: Gilles Sabrie / New York Times
“In two years, China will be ahead of the United States in AI (artificial intelligence),” states Denis Barrier, CEO of global venture firm Cathay Innovation. Others say the same. If so, China will largely determine how this technology transforms the world. Today’s contest is more than a race for dominance in a new technology — it’s one between authoritarianism and democracy.
“AI is the world’s next big inflection point,” says Ajeet Singh, CEO of ThoughtSpot in Palo Alto. Artificial intelligence is built on machine learning, which learns tasks from data: the more data it gets, the more capable it becomes. It drives cars, recognizes individuals, diagnoses diseases and more. Like past information technologies, artificial intelligence will convey advantages to the nation that leads in its use — accelerating research, increasing productivity and enabling dominant military capabilities.
Hence, China’s race to dominate the technology.

In the United States, companies and agencies are pursuing artificial intelligence development in a decentralized manner. In China, the government has a focused national effort, following Google’s DeepMind artificial intelligence defeating the world’s top Go players in 2016-17. That defeat was China’s “Sputnik moment” (the moment that a technological achievement by a rival galvanized political resolve to invest in the technology) — one the U.S. has yet to have with artificial intelligence. And, unlike the United States, China has a national strategy for artificial intelligence, setting milestones that accelerate China’s pursuit of the technology:
2020: Be equal to the United States
2025: Surpass the United States
2030: Lead the world as an artificial-intelligence innovation center

“Research institutes, universities, private companies and the government all working together … I haven’t seen anything like it,” said Steven White, an associate professor at Tsinghua University, China’s MIT. In the race for artificial intelligence dominance, “the U.S. will lose because they don’t have the resources,” said White.
But needs are driving China, too. Its artificial intelligence strategy addresses:
Its shrinking labor force — A “national crisis,” says the National Committee of Chinese People’s Political Consultative Conference, which predicts China’s working population will drop from 631 million in 2020, to 523 million in 2035, and 424 million in 2050. China must also care for a growing elderly population. The United Nations estimates China’s over-65-age group will increase from about 160 million in 2020, to 360 million in 2050.
How to remain an economic power — China seeks to operate almost a million robots and produce 150,000 industrial ones in 2020.

Growing health care needs — China seeks a “rapid, accurate intelligent medical system,” including artificial intelligence-scanning imagery for cancer, robots providing medical references for doctors, and artificial intelligence-powered online consultations.
Military dominance of the East and South China seas, which allows access for China’s export-driven economy. China’s government seeks a civil-military fusion of artificial intelligence, enabling faster military decision-making, robotic submarines and large drone swarms that could overwhelm opposing forces.
Control by the Communist Party over China’s population. Internal unrest — coastal rich vs. interior poor; ethnically different regions like Tibet; an anxious middle class; and pro-democracy efforts — has long concerned authorities. China is using artificial intelligence to build an Orwellian state. Smart cities track people’s movements. China, netted with millions of cameras and facial and vehicle recognition systems, can rapidly identify individuals. Police wear facial recognition glasses that do the same. Biometric data provides even better identification. And people get social credit scores, which determine eligibility for loans, travel and more. This artificial-intelligence-enabled system enables political repression and strengthens autocratic rule.
Today, a divided America needs to “get [its] act together as a country” regarding artificial intelligence, said former Alphabet CEO Eric Schmidt. If it doesn’t, America’s greatness will pass, and so will hope for a free world order.
Thomas C. Linn is a U.S. Naval War College professor, a U.S. Army War College instructor, author of “Think and Write for Your Life — or Be Replaced by a Robot” and a retired U.S. Marine. The views expressed are his own.

Google Bars Using Artificial Intelligence Tech in Weapons, Unreasonable Surveillance

(THIS ARTICLE IS COURTESY OF THE SAUDI NEWS AGENCY ASHARQ AL-AWSAT)

 

Google Bars Using Artificial Intelligence Tech in Weapons, Unreasonable Surveillance

Friday, 8 June, 2018 – 09:45
FILE PHOTO: Google CEO Sundar Pichai speaks on stage during the annual Google I/O developers conference in Mountain View, California, U.S., May 8, 2018. REUTERS/Stephen Lam/File Photo
Asharq Al-Awsat
Google announced Thursday it would not allow its artificial intelligence software to be used in weapons or unreasonable surveillance efforts under new standards for its business decisions in the nascent field.

The Alphabet Inc (GOOGL.O) unit said the restriction could help Google management defuse months of protest by thousands of employees against the company’s work with the US military to identify objects in drone video.

Chief Executive Sundar Pichai said in a blog post: “We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas,” such as cybersecurity, training, or search and rescue.

Pichai set out seven principles for Google’s application of artificial intelligence, or advanced computing that can simulate intelligent human behavior.

He said Google is using AI “to help people tackle urgent problems” such as prediction of wildfires, helping farmers, diagnosing disease or preventing blindness, AFP reported.

“We recognize that such powerful technology raises equally powerful questions about its use,” Pichai said in the blog.

“How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right.”

He added that the principles also called for AI applications to be “built and tested for safety,” to be “accountable to people” and to “incorporate privacy design principles.”

The move came after the potential of AI systems to pinpoint drone strikes better than military specialists, or to identify dissidents through mass collection of online communications, sparked concerns among academic ethicists and Google employees, according to Reuters.

Several technology firms have already agreed to the general principles of using artificial intelligence for good, but Google appeared to offer a more precise set of standards.

The West is ill-prepared for the wave of “deep fakes” From AI

(THIS ARTICLE IS COURTESY OF THE BROOKINGS INSTITUTE)

 

ORDER FROM CHAOS

The West is ill-prepared for the wave of “deep fakes” that artificial intelligence could unleash

Chris Meserole and Alina Polyakova

Editor’s Note: To get ahead of new problems related to disinformation and technology, policymakers in Europe and the United States should focus on the coming wave of disruptive technologies, write Chris Meserole and Alina Polyakova. Fueled by advances in artificial intelligence and decentralized computing, the next generation of disinformation promises to be even more sophisticated and difficult to detect. This piece originally appeared on ForeignPolicy.com.

Russian disinformation has become a problem for European governments. In the last two years, Kremlin-backed campaigns have spread false stories alleging that French President Emmanuel Macron was backed by the “gay lobby,” fabricated a story of a Russian-German girl raped by Arab migrants, and spread a litany of conspiracy theories about the Catalan independence referendum, among other efforts.

Europe is finally taking action. In January, Germany’s Network Enforcement Act came into effect. Designed to limit hate speech and fake news online, the law prompted both France and Spain to consider counterdisinformation legislation of their own. More important, in April the European Union unveiled a new strategy for tackling online disinformation. The EU plan focuses on several sensible responses: promoting media literacy, funding a third-party fact-checking service, and pushing Facebook and others to highlight news from credible media outlets, among others. Although the plan itself stops short of regulation, EU officials have not been shy about hinting that regulation may be forthcoming. Indeed, when Facebook CEO Mark Zuckerberg appeared at an EU hearing this week, lawmakers reminded him of their regulatory power after he appeared to dodge their questions on fake news and extremist content.

The problem is that technology advances far more quickly than government policies.

The recent European actions are important first steps. Ultimately, none of the laws or strategies that have been unveiled so far will be enough. The problem is that technology advances far more quickly than government policies. The EU’s measures are still designed to target the disinformation of yesterday rather than that of tomorrow.

To get ahead of the problem, policymakers in Europe and the United States should focus on the coming wave of disruptive technologies. Fueled by advances in artificial intelligence and decentralized computing, the next generation of disinformation promises to be even more sophisticated and difficult to detect.

To craft effective strategies for the near term, lawmakers should focus on four emerging threats in particular: the democratization of artificial intelligence, the evolution of social networks, the rise of decentralized applications, and the “back end” of disinformation.

Thanks to bigger data, better algorithms, and custom hardware, in the coming years, individuals around the world will increasingly have access to cutting-edge artificial intelligence. From health care to transportation, the democratization of AI holds enormous promise.

Yet as with any dual-use technology, the proliferation of AI also poses significant risks. Among other concerns, it promises to democratize the creation of fake print, audio, and video stories. Although computers have long allowed for the manipulation of digital content, in the past that manipulation has almost always been detectable: A fake image would fail to account for subtle shifts in lighting, or a doctored speech would fail to adequately capture cadence and tone. However, deep learning and generative adversarial networks have made it possible to doctor images and video so well that it’s difficult to distinguish manipulated files from authentic ones. And thanks to apps like FakeApp and Lyrebird, these so-called “deep fakes” can now be produced by anyone with a computer or smartphone. Earlier this year, a tool that allowed users to easily swap faces in video produced fake celebrity porn, which went viral on Twitter and Pornhub.

Deep fakes and the democratization of disinformation will prove challenging for governments and civil society to counter effectively. Because the algorithms that generate the fakes continuously learn how to more effectively replicate the appearance of reality, deep fakes cannot easily be detected by other algorithms—indeed, in the case of generative adversarial networks, the algorithm works by getting really good at fooling itself. To address the democratization of disinformation, governments, civil society, and the technology sector therefore cannot rely on algorithms alone, but will instead need to invest in new models of social verification, too.
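
The adversarial dynamic described above can be sketched in miniature. The toy below is illustrative only, nothing like a real deep-fake model: a one-parameter "generator" shifts random noise toward a target distribution, while a tiny logistic "discriminator" tries to tell real samples from generated ones, and each update makes the other player's job harder.

```python
# Toy adversarial training on 1-D data (illustrative sketch only).
# Real data ~ N(3, 1); the generator produces noise + b and learns b.
import math, random

random.seed(1)

def sigmoid(t):
    return 1 / (1 + math.exp(-t))

b = 0.0          # generator parameter: fake sample = noise + b
w = c = 0.0      # discriminator: D(x) = sigmoid(w * x + c)
lr, batch = 0.05, 16

for _ in range(2000):
    real = [random.gauss(3, 1) for _ in range(batch)]      # "authentic" data
    fake = [random.gauss(0, 1) + b for _ in range(batch)]  # generated data

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    gw = gc = 0.0
    for x in real:
        d = sigmoid(w * x + c)
        gw += (d - 1) * x
        gc += d - 1
    for x in fake:
        d = sigmoid(w * x + c)
        gw += d * x
        gc += d
    w -= lr * gw / (2 * batch)
    c -= lr * gc / (2 * batch)

    # Generator step: shift the fakes so the just-updated discriminator
    # scores them as real; gradient of -log D(x) with dx/db = 1.
    gb = sum(-(1 - sigmoid(w * x + c)) * w for x in fake)
    b -= lr * gb / batch

print(f"learned shift b = {b:.2f} (real data is centered at 3)")
```

By the end, the generator has drifted to roughly match the real distribution, and the discriminator's advantage has largely evaporated; this is the sense in which the fakes "learn to replicate the appearance of reality" and cannot be reliably flagged by the very kind of classifier used to train them.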

At the same time, as artificial intelligence and other emerging technologies mature, legacy platforms will continue to play an outsized role in the production and dissemination of information online. For instance, consider the current proliferation of disinformation on Google, Facebook, and Twitter.

A growing cottage industry of search engine optimization (SEO) manipulation provides services to clients looking to rise in the Google rankings. And while, for the most part, Google is able to stay ahead of attempts to manipulate its algorithms through continuous tweaks, SEO manipulators are becoming increasingly savvy at gaming the system so that the desired content, including disinformation, appears at the top of search results.

For example, stories from RT and Sputnik—the Russian government’s propaganda outlets—appeared on the first page of Google searches after the March nerve agent attack in the United Kingdom and the April chemical weapons attack in Syria. Similarly, YouTube (which is owned by Google) has an algorithm that prioritizes the amount of time users spend watching content as the key metric for determining which content appears first in search results. This algorithmic preference results in false, extremist, and unreliable information appearing at the top, which in turn means that this content is viewed more often and is perceived as more reliable by users. Revenue for the SEO manipulation industry is estimated to be in the billions of dollars.
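
The ranking preference described above is easy to sketch. The scoring rule and the numbers below are illustrative inventions, not YouTube's actual algorithm: if total minutes watched is the metric, a lurid video that keeps viewers glued outranks sober content with far more views.

```python
# Illustrative sketch: rank videos purely by total watch time.
videos = [
    {"title": "Measured news report",   "views": 10_000, "avg_minutes": 1.5},
    {"title": "Lurid conspiracy video", "views": 4_000,  "avg_minutes": 9.0},
    {"title": "Dry official briefing",  "views": 12_000, "avg_minutes": 0.8},
]

def watch_time(video):
    # Total minutes watched is the (invented) ranking score.
    return video["views"] * video["avg_minutes"]

ranked = sorted(videos, key=watch_time, reverse=True)
for v in ranked:
    print(f'{watch_time(v):>9,.0f} min  {v["title"]}')
```

The conspiracy video (36,000 minutes) tops the 10,000-view news report (15,000 minutes) even though far fewer people chose to watch it, which is exactly the feedback loop the paragraph describes: the engaging-but-unreliable item surfaces first, gets watched more, and climbs further.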

On Facebook, disinformation appears in one of two ways: through shared content and through paid advertising. The company has tried to curtail disinformation across each vector, but thus far to no avail. Most famously, Facebook introduced a “Disputed Flag” to signify possible false news—only to discover that the flag made users more likely to engage with the content, rather than less. Less conspicuously, in Canada, the company is experimenting with increasing the transparency of its paid advertisements by making all ads available for review, including those micro-targeted to a small set of users. Yet, the effort is limited: The sponsors of ads are often buried, requiring users to do time-consuming research, and the archive Facebook set up for the ads is not a permanent database but only shows active ads. Facebook’s early efforts do not augur well for a future in which foreign actors can continue to exploit its news feed and ad products to deliver disinformation—including deep fakes produced and targeted at specific individuals or groups.

Although Twitter has taken steps to combat the proliferation of trolls and bots on its platform, it remains deeply vulnerable to disinformation campaigns, since accounts are not verified and its application programming interface, or API, still makes it possible to easily generate and spread false content on the platform. Even if Twitter takes further steps to crack down on abuse, its detection algorithms can be reverse-engineered in much the same way Google’s search algorithm is. Without fundamental changes to its API and interaction design, Twitter will remain rife with disinformation. It’s telling, for example, that when the U.S. military struck Syrian chemical weapons facilities in April—well after Twitter’s latest reforms were put in place—the Pentagon reported a massive surge in Russian disinformation in the hours immediately following the attack. The tweets appeared to come from legitimate accounts, and there was no way to report them as misinformation.

Blockchain technologies and other distributed ledgers are best known for powering cryptocurrencies such as bitcoin and ethereum. Yet their biggest impact may lie in transforming how the internet works. As more and more decentralized applications come online, the web will increasingly be powered by services and protocols that are designed from the ground up to resist the kind of centralized control that Facebook and others enjoy. For instance, users can already browse videos on DTube rather than YouTube, surf the web on the Blockstack browser rather than Safari, and store files using IPFS, a peer-to-peer file system, rather than Dropbox or Google Docs. To be sure, the decentralized application ecosystem is still a niche area that will take time to mature and work out the glitches. But as security improves over time with fixes to the underlying network architecture, distributed ledger technologies promise to make for a web that is both more secure and outside the control of major corporations and states.

If and when online activity migrates onto decentralized applications, the security and decentralization they provide will be a boon for privacy advocates and human rights dissidents. But it will also be a godsend for malicious actors. Most of these services have anonymity and public-key cryptography baked in, making accounts difficult to track back to real-life individuals or organizations. Moreover, once information is submitted to a decentralized application, it can be nearly impossible to take down. For instance, the IPFS protocol has no method for deletion—users can only add content, they cannot remove it.
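The add-only property described above follows from content addressing, the storage model IPFS is built on. A toy sketch in Python (a deliberate simplification: real IPFS uses multihash-encoded CIDs and chunked Merkle DAGs, and the dictionary below stands in for a swarm of peers):

```python
import hashlib

# Toy content-addressed store. In IPFS-style systems, data is looked up
# by the hash of its own bytes rather than by a location a server controls.
store = {}

def add(content: bytes) -> str:
    """Store content under the hash of its bytes; return that ID."""
    cid = hashlib.sha256(content).hexdigest()
    store[cid] = content
    return cid

def get(cid: str) -> bytes:
    """Any peer holding the bytes can serve them for this ID."""
    return store[cid]

# Note there is deliberately no delete(): the identifier *is* the
# content's hash, so as long as any node keeps a copy, the same ID
# keeps resolving, and no central party can revoke it.
cid = add(b"hello, decentralized web")
assert get(cid) == b"hello, decentralized web"
```

Because the address is derived from the content itself, “taking down” an item would require every node that cached it to discard its copy, which is why moderation models built around appealing to a central operator do not transfer.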

For governments, civil society, and private actors, decentralized applications will thus pose an unprecedented challenge, as the current methods for responding to and disrupting disinformation campaigns will no longer apply. Whereas governments and civil society can ultimately appeal to Twitter CEO Jack Dorsey if they want to block or remove a malicious user or problematic content on Twitter, with decentralized applications, there won’t always be someone to turn to. If the Manchester bomber had viewed bomb-making instructions on a decentralized app rather than on YouTube, it’s not clear who authorities should or could approach about blocking the content.

Over the last three years, renewed attention to Russian disinformation efforts has sparked research and activities among a growing number of nonprofit organizations, governments, journalists, and activists. So far, these efforts have focused on documenting the mechanisms and actors involved in disinformation campaigns—tracking bot networks, identifying troll accounts, monitoring media narratives, and tracing the diffusion of disinformation content. They’ve also included governmental efforts to implement data protection and privacy policies, such as the EU’s General Data Protection Regulation, and legislative proposals to introduce more transparency and accountability into the online advertising space.

While these efforts are certainly valuable for raising awareness among the public and policymakers, by focusing on the end product (the content), they rarely delve into the underlying infrastructure and advertising markets driving disinformation campaigns. Doing so requires a deeper examination of the “back end” of disinformation: the algorithms and industries behind the end product, namely the online advertising market, the SEO manipulation market, and data brokers. Increased automation paired with machine learning will transform this space as well.

To get ahead of these emerging threats, Europe and the United States should consider several policy responses.

First, the EU and the United States should commit significant funding to research and development at the intersection of AI and information warfare. In April, the European Commission called for at least 20 billion euros (about $23 billion) to be spent on research on AI by 2020, prioritizing the health, agriculture, and transportation sectors. None of the funds are earmarked for research and development specifically on disinformation. At the same time, current European initiatives to counter disinformation prioritize education and fact-checking while leaving out AI and other new technologies.

As long as tech research and counterdisinformation efforts run on parallel, disconnected tracks, little progress will be made in getting ahead of emerging threats. In the United States, the government has been reluctant to step in to push forward tech research as Silicon Valley drives innovation with little oversight. The 2016 Obama administration report on the future of AI did not allocate funding, and the Trump administration has yet to release its own strategy. As revelations of Russian manipulation of digital platforms continue, it is becoming increasingly clear that governments will need to work together with private sector firms to identify vulnerabilities and national security threats.

The EU and the U.S. government should also move quickly to prevent the rise of disinformation on decentralized applications. The emergence of decentralized applications presents policymakers with a rare second chance: When social networks were being built a decade ago, lawmakers failed to anticipate the ways in which they could be exploited by malicious actors. With such applications still a niche market, policymakers can respond before the decentralized web reaches global scale. Governments should form new public-private partnerships to help developers ensure that the next generation of the web isn’t as ripe for disinformation campaigns. A model could be the United Nations’ Tech Against Terrorism project, which works closely with small tech companies to help them design their platforms from the ground up to guard against terrorist exploitation.

Finally, legislators should continue to push for reforms in the digital advertising industry. As AI continues to transform the industry, disinformation content will become more precise and micro-targeted to specific audiences. AI will make it far easier for malicious actors and legitimate advertisers alike to track user behavior online, identify potential new users to target, and collect information about users’ attitudes, beliefs, and preferences.

In 2014, the U.S. Federal Trade Commission released a report calling for transparency and accountability in the data broker industry. The report called on Congress to consider legislation that would shed light on these firms’ activities by giving individuals access to, and information about, how their data is collected and used online. The EU’s General Data Protection Regulation goes a long way toward giving users control over their data and limits how social media platforms process users’ data for ad-targeting purposes. Facebook is also experimenting with blocking foreign ad sales ahead of contentious votes. Still, the digital ads industry as a whole remains a black box to policymakers, and much more can still be done to limit data mining and regulate political ads online.

Effectively tracking and targeting each of the areas above won’t be easy. Yet policymakers need to start focusing on them now. If the EU’s new anti-disinformation effort and other related policies fail to track evolving technologies, they risk being antiquated before they’re even introduced.


2.3 million – jobs lost to artificial intelligence in China’s financial sectors by 2027

(THIS ARTICLE IS COURTESY OF ANDY TAI’S CHINA DEBATE)

2.3 million – the number of jobs that could be lost to artificial intelligence in China’s financial sectors by 2027 – SCMP

About 2.3 million finance industry employees in mainland China are likely to either lose their jobs or be reassigned to new roles by 2027 as they fall victim to disruptive artificial intelligence technologies.
A study by Boston Consulting Group (BCG) found that 23 per cent of the total 9.93 million jobs in the country’s banking, insurance and securities sectors will be affected, with entry-level staff engaged in repetitious daily operations bearing the brunt of any cuts.
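The headline number is consistent with BCG’s figures: 23 per cent of 9.93 million jobs works out to roughly 2.3 million, as a quick check shows:

```python
# Quick check of the figures quoted above.
total_jobs = 9_930_000    # jobs in banking, insurance and securities (BCG)
affected_share = 0.23     # share BCG expects AI to affect by 2027
affected = total_jobs * affected_share
print(round(affected))    # 2283900, i.e. roughly 2.3 million
```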

“Many jobs, particularly those involving mechanical, repetitious operations, will gradually be replaced by AI,” David He, a BCG partner, said in a statement. “Consequently, some positions will be cancelled, but other positions will see improvements in efficiency, and new jobs will be created.”


Riyadh, Beijing Launch Digital Silk Road Initiative

(THIS ARTICLE IS COURTESY OF THE SAUDI NEWS AGENCY ASHARQ AL-AWSAT)

 

Riyadh, Beijing Launch Digital Silk Road Initiative

Tuesday, 12 December, 2017 – 12:15
Riyadh – Asharq Al-Awsat

Saudi Undersecretary of the Ministry of Communications and Information Technology for Planning and Development Dr. Mohammed al-Mishaigeh revealed that his country has embarked on transformation programs, is developing young talent, and is establishing innovation labs, during his participation in the World Internet Conference, which concluded Monday in Wuzhen, in eastern China.

Mishaigeh said that the Kingdom and China have also launched the Digital Silk Road initiative, and he called on the Chinese to boost partnerships and draw on Saudi investment and geographic advantages to transfer knowledge and achieve progress in the field of technology, which the Kingdom is betting on as a knowledge and economic resource.

He pointed out that the city of NEOM will be a global hub for artificial intelligence, automation, manufacturing and renewable energy.

Speaking at the conference, Mishaigeh said that his country has begun implementing the desired social and economic transformation led by Vision 2030, which aims to bring about profound changes across many aspects of life and to position the Kingdom as a leader in them.

Notably, the participating Saudi delegation visited the Huawei Research Center in Shanghai to learn about the latest technologies in infrastructure and smart cities, and about the Chinese experience in enabling the digital economy domestically.

On the sidelines of the conference, the delegation held a meeting with the National Development and Reform Commission, during which a mechanism was discussed to activate the terms of the memorandum of understanding signed between the two parties in January 2016 on promoting the development of the Digital Silk Road as well as a review of the Chinese experience in building smart cities.

The Kingdom of Saudi Arabia, represented by the Ministry of Communications and Information Technology, participated in the World Internet Conference, which was organized by the Cyberspace Administration of China and the government of Zhejiang Province, with the participation of leading figures from governments, international organizations, companies, and relevant technical-sector and non-governmental agencies.

STEPHEN HAWKING AI WARNING: ARTIFICIAL INTELLIGENCE COULD DESTROY CIVILIZATION

(THIS ARTICLE IS COURTESY OF NEWSWEEK)

(JUST YESTERDAY, NOV. 6, I WROTE AN ARTICLE TITLED ‘THE UNNEEDED POOR WILL BE EXTERMINATED’; THE POINTS I MADE THERE AGREE WITH WHAT MR. HAWKING IS SAYING IN THIS ARTICLE TODAY. PLEASE CONSIDER READING YESTERDAY’S ARTICLE AS WELL. THANK YOU.)

World-renowned physicist Stephen Hawking has warned that artificial intelligence (AI) has the potential to destroy civilization and could be the worst thing that has ever happened to humanity.

Speaking at a technology conference in Lisbon, Portugal, Hawking told attendees that mankind had to find a way to control computers, CNBC reports.

“Computers can, in theory, emulate human intelligence, and exceed it,” he said. “Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.”

Stephen Hawking sits onstage during an announcement of the Breakthrough Starshot initiative with investor Yuri Milner in New York City, on April 12, 2016. Hawking, the English physicist, warns humanity needs to become a multiplanetary species to ensure its survival. REUTERS/LUCAS JACKSON

Hawking said that while AI has the potential to transform society—it could be used to eradicate poverty and disease, for example—it also comes with huge risks.

Society, he said, must be prepared for that eventuality. “AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy,” he said.

This is not the first time Hawking has warned about the dangers of AI. In a recent interview with Wired, the University of Cambridge professor said AI could one day reach a level where it outperforms humans and becomes a “new form of life.”


“I fear that AI may replace humans altogether,” he told the magazine. “If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans.”

Even if AI does not take over the world, either by destroying or enslaving mankind, Hawking still believes human beings are doomed. Over recent years, he has become increasingly vocal about the need to leave Earth in search of a new planet.

In May, he said humans have around 100 years to leave Earth in order to survive as a species. “I strongly believe we should start seeking alternative planets for possible habitation,” he said during a speech at the Royal Society in London, U.K. “We are running out of space on Earth and we need to break through the technological limitations preventing us from living elsewhere in the universe.”

The following month at the Starmus Festival in Norway, which celebrates science and art, Hawking told his audience that the current threats to Earth are “too big and too numerous” for him to be positive about the future.

“Our physical resources are being drained at an alarming rate. We have given our planet the disastrous gift of climate change. Rising temperatures, reduction of the polar ice caps, deforestation and decimation of animal species. We can be an ignorant, unthinking lot.

“We are running out of space and the only places to go to are other worlds. It is time to explore other solar systems. Spreading out may be the only thing that saves us from ourselves. I am convinced that humans need to leave Earth.”

dawns-ad-lib.com®  
