Oracle: China’s internet is designed more like an intranet

(THIS ARTICLE IS COURTESY OF ZDNET)

 


China’s internet could continue to operate as a national intranet in the event of a cyber-attack or foreign intervention.


The structure of the Chinese internet is unlike that of any other country, resembling a gigantic intranet, according to research published by Oracle last week.

The country has very few connection points to the global internet, allows no foreign telcos to operate within its borders, and keeps Chinese-to-Chinese internet traffic from ever leaving the country.

All of this allows China to disconnect itself at will from the global internet and continue to operate, albeit with no connectivity to Western services.

“Put plainly, in terms of resilience, China could effectively withdraw from the global public internet and maintain domestic connectivity (essentially having an intranet),” Oracle’s Dave Allen said. “This means the rest of the world could be restricted from connecting into China, and vice versa for external connections for Chinese businesses/users.”

VERY FEW PEERING POINTS

The most obvious sign that China has structured its internet infrastructure differently from any other country is the way it connects to the rest of the internet.

Normally, countries allow both local and foreign telecommunications providers to operate within their borders. These companies interconnect their infrastructure at physical locations called Internet Exchange Points (IXPs), and the global internet is a giant mesh of IXP peering points interconnecting smaller telco networks.

But China doesn’t do this. Rather than allowing foreign telcos to operate within its borders, it keeps this market completely off limits. Instead, local telcos extend China’s infrastructure into foreign countries, where they interlink with the global internet.

This way, Chinese ISPs form a closely knit structure capable of exchanging traffic among themselves. All connections that need to reach foreign services must pass through the country’s Great Firewall, reach foreign IXPs via a handful of carefully selected telcos (China Telecom, China Unicom, China Mobile), and only then land on the public internet.

China’s IXPs
Image: Oracle

This entire structure is very much akin to a corporate intranet, and has quite a few advantages.

First, China can impose its internet censorship program at will, without needing to account for foreign telcos operating inside its borders or to deal with their customer privacy policies.

Second, China can disconnect from the internet whenever it detects an external attack, but still maintain a level of internet connectivity within its borders, relying solely on local telcos and data centers.

INTERNAL TRAFFIC NEVER LEAVES THE COUNTRY

But another advantage of this structure is that traffic meant to go from one Chinese user to another never leaves the country’s borders.

This is very different from most internet connections. For example, a user in an Italian town wanting to access their city’s website might be surprised to find that their connection often passes through servers located in France or Germany before reaching it.

Such “weird” connection paths happen all the time on the internet, and in many countries, but not in China. Here, because local telcos peer primarily with each other and have a few tightly controlled outlets to the external world, internal traffic has no reason to leave the country.
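
The Oracle research includes no code, but the idea of checking whether a path stays in-country can be illustrated with a rough sketch: trace the route to a nominally domestic destination and geolocate each hop. This assumes a Unix-like system with the `traceroute` command, the `geoip2` Python package, and a locally downloaded GeoLite2 country database; the destination name and file path are illustrative, and IP geolocation is only approximate.

```python
# Rough sketch, not from the Oracle report: geolocate each traceroute hop to
# see whether a "domestic" path ever leaves the country.
import re
import subprocess

import geoip2.database  # pip install geoip2; database downloaded separately
from geoip2.errors import AddressNotFoundError

GEOIP_DB = "GeoLite2-Country.mmdb"  # assumed path to the MaxMind database


def hop_countries(destination):
    """Run traceroute and map each responding hop's IP to a country code."""
    out = subprocess.run(
        ["traceroute", "-n", destination],
        capture_output=True, text=True, check=False,
    ).stdout
    hops = re.findall(r"\b\d{1,3}(?:\.\d{1,3}){3}\b", out)
    results = []
    with geoip2.database.Reader(GEOIP_DB) as reader:
        for ip in hops:
            try:
                results.append((ip, reader.country(ip).country.iso_code or "??"))
            except AddressNotFoundError:
                results.append((ip, "??"))  # private or unrouted ranges
    return results


if __name__ == "__main__":
    path = hop_countries("example.cn")  # hypothetical in-country destination
    for ip, cc in path:
        print(f"{ip:>15}  {cc}")
    left_country = [cc for _, cc in path if cc not in ("CN", "??")]
    print("Path left the country." if left_country else "All hops stayed domestic.")
```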

China internal traffic
Image: Oracle

MORE “NATIONAL INTRANETS” TO FOLLOW

The main advantage of this is that foreign intelligence services have very little insight into Chinese traffic, unless users connect to foreign services and their traffic has to cross China’s borders.

From a national security standpoint, this is ideal; however, only China has such a system in place — at least, for now.

“While China’s structure is unique in the way it is physically set up to be separate from the rest of the world, other countries have begun to adopt the theoretical approach to cyber sovereignty that China is promoting,” said Oracle’s Dave Allen.

One of the countries trying to replicate this Chinese “national intranet” model is Russia. This March, President Vladimir Putin signed a new law giving the government expanded control over the internet. The law effectively forces local internet providers to install devices that route Russian web traffic through government-run servers, where intelligence services are given free rein to analyze it.

Furthermore, the country has also been busy building a local backup of the Domain Name System (DNS), and has conducted tests to disconnect the country from the rest of the internet, as part of a planned experiment.

Russia may be a few years behind China, but the writing is on the wall as to the Kremlin’s intentions.

7 surprising facts about email

(THIS ARTICLE IS COURTESY OF TRIVIA GENIUS)

 


Email, that magical way of sending instant messages between electronic devices, is used and consumed round-the-clock by people the length and breadth of the planet. But have you ever stopped to think about how it came about, or how often we use it? Next time you find yourself with a group of friends too busy on their telephones to talk, reignite the conversation with these fun email facts.

Email predates the Internet

Credit: oatawa / Shutterstock.com

Although it didn’t become popular until the early 1990s, the first emails were sent in 1965 via a system called MAIL at the Massachusetts Institute of Technology. This was four years prior to the creation of ARPANET, which itself laid the foundations for the Internet. Pioneering programmer Ray Tomlinson sent the first email as we know it today in 1971, almost two decades before the World Wide Web appeared.

The content of the first email is unknown

Credit: Devonyu / iStock

Legend once had us believe that Tomlinson’s first email, which incidentally was sent to himself between two computers in the same office, included the words “Hello world.” In fact, according to the man himself, the actual content was in all probability something along the lines of “QWERTYUIOP.” As there’s no official record, we’ll just have to take his word for it.

Hotmail was launched as recently as 1996

Credit: CatLane / iStock

Over three decades after the first emails were traded, Microsoft’s Hotmail took the world by storm with its messaging service. Rebranded as Outlook in 2012, it now has 400 million users and is available in 106 languages. RocketMail was Hotmail’s biggest contender in the early days, and later became what we know today as Yahoo! Mail.

Today, 2.8 million emails are sent every second

Credit: tolgart / iStock

Thanks to our technology-savvy friends at Internet Live Stats, we can see that in just one second, 2,780,870 emails are sent on their journey. Let’s put this into some more impressive numbers: that’s a whopping 240 billion per day. If every person on Earth were sending email, that would equate to roughly 31 emails per person per day. We are also watching 77,925 YouTube videos and sending 8,415 Tweets per second.
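
A quick arithmetic check of those figures, with the world-population estimate being an assumption of ours (roughly right for when the piece was written), not a number from the article:

```python
# Scale the per-second figure to a day, then to a per-person rate.
emails_per_second = 2_780_870
seconds_per_day = 24 * 60 * 60                  # 86,400

emails_per_day = emails_per_second * seconds_per_day
print(f"{emails_per_day:,} emails per day")     # ~240 billion

world_population = 7.7e9                        # assumed
print(f"{emails_per_day / world_population:.0f} emails per person per day")  # ~31
```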

The curious origins of the word spam

Credit: scanrail / iStock

Spam is that wonderful folder in your webmail service that gets bombarded with offers to inherit a fortune from a generous widow, purchase products from phantom companies and earn hundreds of dollars for taking part in surveys. Spam was originally, and still is, the brand name of a canned meat introduced in 1937 and made ubiquitous during World War II. In the 1970s, British comedy team Monty Python referred to Spam as something unavoidable and repetitive. Techies later coined the name spammers for the people who repeatedly send you dishonest and unsolicited emails.

We spend over five hours per day checking email

Credit: NicoElNino / Shutterstock.com

For many, email and the Internet have taken over our lives, so much so that in 2017 a vast majority of us were spending 5.4 hours every day checking messages. Naturally, this has a knock-on effect on productivity. Statistics show that time spent distracted by emails and social media costs the U.S. economy $997 billion annually.

Studies have proved that decreased email usage leads to improved health

Credit: seb_ra / iStock

Fancy reducing your stress levels, relaxing your heart rate and increasing both focus and productivity? Simple: reduce your email time. A study by the University of California, Irvine and the U.S. Army on a group of office workers proved this. So it must be true, right?

Loss Of Internet Freedom = Loss Of Freedom

(THIS ARTICLE IS COURTESY OF THE BROOKINGS INSTITUTE)

 

Editor’s Note: In Unpacked, Brookings experts provide analysis of Trump administration policies and news. Subscribe to the Brookings Creative Lab YouTube channel to stay up to date on the latest from Unpacked.

The issue:

On June 11, 2018, the Federal Communications Commission’s repeal of the Open Internet Order—the net neutrality rules—went into effect. In the wake of this change, Americans are wondering how the repeal will affect them, and what it means for the future of internet access. Though consumers may not see changes quickly, the shift on net neutrality undermines the nation’s history on network regulation, creating a new era in how these networks operate in America.

Internet companies will now have the opportunity to start to discriminate in very subtle ways.

The things you need to know:

  • Consumers can expect not to see any rapid changes as a result of the change in Net Neutrality rules
  • Internet service providers will likely not jump on this quickly to extract the advantages of the repeal, but they will begin to discriminate in subtle ways
  • There is a pending court appeal that challenges the FCC’s ability to repeal Net Neutrality
  • A resolution under the Congressional Review Act, which has already passed the Senate, would overturn the FCC’s decision
  • To understand what impact the repeal of the Open Internet rule might have, you have to understand why it was put in place to begin with
  • The underlying concept of networks in America, all the way back to the telegraph, has been that there needs to be first-come, first-served, non-discriminatory access
  • The reason why we have this rule is because of monopoly networks
  • In most communities, there is one internet service provider – you don’t have a choice. When you have a monopoly situation, there is an incentive to discriminate to help the monopoly owner
  • In the past the government has always said that companies cannot discriminate to favor themselves – those concepts were at the heart of the Open Internet Order and they have now gone away
  • Internet companies will now have the opportunity to start to discriminate in very subtle ways
  • The court decision on the AT&T acquisition of Time Warner (and its assets like CNN) is important because when you have a merger of wired and wireless internet service and give them the opportunity to discriminate against competitors to favor their own content, then you have a new day in how America’s networks operate

The sources:

A wide gulf between federal agencies on broadband competition

The state of tech policy, one year into the Trump Administration

Where’s the fire? With unclear legal authority, Trump FCC rushes to hand responsibility over internet service to FTC

China: NOTHING About Xi Jinping Is ‘For The People’ Everything Is About His Power

(THIS ARTICLE IS COURTESY OF ‘THE LONG READ’)

(WHEN IT COMES TO THE INTERNET THE GOVERNMENT OF CHINA AND PRESIDENT XI JINPING SHOW THAT THEY ARE SCARED TO DEATH OF THE PEOPLE HAVING ANY KNOWLEDGE OF THEIR COWARDLINESS AND CRIMES TOWARD THE PEOPLE, IN THIS SENSE XI JINPING IS NO BETTER THAN NORTH KOREA’S KIM JONG UN, COWARDS, LIARS AND MURDERERS WITH NO INTEGRITY AT ALL.)(OPED BY oldpoet56) 

The great firewall of China: Xi Jinping’s internet shutdown

Before Xi Jinping, the internet was becoming a more vibrant political space for Chinese citizens. But today the country has the largest and most sophisticated online censorship operation in the world. By Elizabeth C Economy

In December 2015, thousands of tech entrepreneurs and analysts, along with a few international heads of state, gathered in Wuzhen, in eastern China, for the country’s second World Internet Conference. At the opening ceremony the Chinese president, Xi Jinping, set out his vision for the future of China’s internet. “We should respect the right of individual countries to independently choose their own path of cyber-development,” said Xi, warning against foreign interference “in other countries’ internal affairs”.

No one was surprised by what they heard. Xi had already established that the Chinese internet would be a world unto itself, with its content closely monitored and managed by the Communist party. In recent years, the Chinese leadership has devoted more and more resources to controlling content online. Government policies have contributed to a dramatic fall in the number of postings on the Chinese blogging platform Sina Weibo (similar to Twitter), and have silenced many of China’s most important voices advocating reform and opening up the internet.

It wasn’t always like this. In the years before Xi became president in 2012, the internet had begun to afford the Chinese people an unprecedented level of transparency and power to communicate. Popular bloggers, some of whom advocated bold social and political reforms, commanded tens of millions of followers. Chinese citizens used virtual private networks (VPNs) to access blocked websites. Citizens banded together online to hold authorities accountable for their actions, through virtual petitions and organising physical protests. In 2010, a survey of 300 Chinese officials revealed that 70% were anxious about whether mistakes or details about their private life might be leaked online. Of the almost 6,000 Chinese citizens also surveyed, 88% believed it was good for officials to feel this anxiety.

For Xi Jinping, however, there is no distinction between the virtual world and the real world: both should reflect the same political values, ideals, and standards. To this end, the government has invested in technological upgrades to monitor and censor content. It has passed new laws on acceptable content, and aggressively punished those who defy the new restrictions. Under Xi, foreign content providers have found their access to China shrinking. They are being pushed out by both Xi’s ideological war and his desire that Chinese companies dominate the country’s rapidly growing online economy.

At home, Xi paints the west’s version of the internet, which prioritises freedom of information flow, as anathema to the values of the Chinese government. Abroad, he asserts China’s sovereign right to determine what constitutes harmful content. Rather than acknowledging that efforts to control the internet are a source of embarrassment – a sign of potential authoritarian fragility – Xi is trying to turn his vision of a “Chinanet” (to use blogger Michael Anti’s phrase) into a model for other countries.

The challenge for China’s leadership is to maintain what it perceives as the benefits of the internet – advancing commerce and innovation – without letting technology accelerate political change. To maintain his “Chinanet”, Xi seems willing to accept the costs in terms of economic development, creative expression, government credibility, and the development of civil society. But the internet continues to serve as a powerful tool for citizens seeking to advance social change and human rights. The game of cat-and-mouse continues, and there are many more mice than cats.


The very first email in China was sent in September 1987 – 16 years after Ray Tomlinson sent the first email in the US. It broadcast a triumphal message: “Across the Great Wall we can reach every corner in the world.” For the first few years, the government reserved the internet for academics and officials. Then, in 1995, it was opened to the general public. In 1996, although only about 150,000 Chinese people were connected to the internet, the government deemed it the “Year of the Internet”, and internet clubs and cafes appeared all over China’s largest cities.

Yet as enthusiastically as the government proclaimed its support for the internet, it also took steps to control it. Rogier Creemers, a China expert at Oxford University, has noted that “As the internet became a publicly accessible information and communication platform, there was no debate about whether it should fall under government supervision – only about how such control would be implemented in practice.” By 1997, Beijing had enacted its first laws criminalising online postings that it believed were designed to hurt national security or the interests of the state.

China’s leaders were right to be worried. Their citizens quickly realised the political potential inherent in the internet. In 1998, a 30-year-old software engineer called Lin Hai forwarded 30,000 Chinese email addresses to a US-based pro-democracy magazine. Lin was arrested, tried and ultimately sent to prison in the country’s first known trial for a political violation committed completely online. The following year, the spiritual organisation Falun Gong used email and mobile phones to organise a silent demonstration of more than 10,000 followers around the Communist party’s central compound, Zhongnanhai, to protest their inability to practise freely. The gathering, which had been arranged without the knowledge of the government, precipitated an ongoing persecution of Falun Gong practitioners and a new determination to exercise control over the internet.

The man who emerged to lead the government’s technological efforts was Fang Binxing. In the late 1990s, Fang worked on developing the “Golden Shield” – transformative software that enabled the government to inspect any data being received or sent, and to block destination IP addresses and domain names. His work was rewarded by a swift political rise. By the 2000s, he had earned the moniker “Father of the Great Firewall” and, eventually, the enmity of hundreds of thousands of Chinese web users.
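
The article gives no technical detail on how the Golden Shield makes these decisions; purely as an illustration of the general idea of blocking traffic by destination IP address and domain name, here is a toy filter in Python. The blocklist entries are invented examples drawn from reserved documentation ranges, not real blocked sites, and this is not the Golden Shield’s actual design.

```python
# Toy illustration of blocking by destination domain name or IP address,
# in the spirit of the capability described above. Illustrative only.
import ipaddress

BLOCKED_DOMAINS = {"blocked.example", "news.blocked.example"}
BLOCKED_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]  # reserved test range


def is_blocked(domain, ip):
    # A domain is blocked if it, or any parent domain, is on the list.
    labels = domain.lower().rstrip(".").split(".")
    if any(".".join(labels[i:]) in BLOCKED_DOMAINS for i in range(len(labels))):
        return True
    # An IP is blocked if it falls inside any blocked network.
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLOCKED_NETWORKS)


print(is_blocked("news.blocked.example", "198.51.100.7"))  # True: domain match
print(is_blocked("example.org", "203.0.113.42"))           # True: IP match
print(is_blocked("example.org", "198.51.100.7"))           # False
```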

Security outside Google’s office in Beijing in January 2010. Photograph: Diego Azubel/EPA

Throughout the early 2000s, the Chinese leadership supplemented Fang’s technology with a set of new regulations designed to ensure that anyone with access to China’s internet played by Chinese rules. In September 2000, the state council issued order no 292, which required internet service providers to ensure that the information sent out on their services adhered to the law, and that some domain names and IP addresses were recorded. Two years later, Beijing blocked Google for the first time. (A few years later, Google introduced Google.cn, a censored version of the site.) In 2002, the government increased its emphasis on self-censorship with the Public Pledge on Self-Discipline for China’s Internet Industry, which established four principles: patriotic observance of law, equitableness, trustworthiness and honesty. More than 100 companies, including Yahoo!, signed the pledge.

Perhaps the most significant development, however, was a 2004 guideline on internet censorship that called for Chinese universities to recruit internet commentators who could guide online discussions in politically acceptable directions and report comments that did not follow Chinese law. These commentators became known as wu mao dang, or “50-cent party”, after the small bonuses they were supposedly paid for each post.

Yet even as the government was striving to limit individuals’ access to information, many citizens were making significant inroads into the country’s political world – and their primary target was corrupt local officials.


In May 2009, Deng Yujiao, a young woman working in a hotel in Hubei province, stabbed a party official to death after she rejected his efforts to pay her for sex and he tried to rape her. Police initially committed Deng to a mental hospital. A popular blogger, Wu Gan, however, publicised her case. Using information gathered through a process known as ren rou sousuo, or “human flesh search engine”, in which web users collaborate to discover the identity of a specific individual or organisation, Wu wrote a blog describing the events and actions of the party officials involved.

In an interview with the Atlantic magazine at the time, he commented: “The cultural significance of flesh searches is this: in an undemocratic country, the people have limited means to get information … [but] citizens can get access to information through the internet, exposing lies and the truth.” Deng’s case began to attract public support, with young people gathering in Beijing with signs reading “Anyone could be Deng Yujiao.” Eventually the court ruled that Deng had acted in self-defence.

During this period, in the final years of Hu Jintao’s presidency, the internet was becoming more and more powerful as a mechanism by which Chinese citizens held their officials to account. Most cases were like that of Deng Yujiao – lodged and resolved at the local level. A small number, however, reached central authorities in Beijing. On 23 July 2011, a high-speed train derailed in the coastal city of Wenzhou, leaving at least 40 people dead and 172 injured. In the wake of the accident, Chinese officials banned journalists from investigating, telling them to use only information “released from authorities”. But local residents took photos of the wreckage being buried instead of being examined for evidence. The photos went viral and heightened the impression that the government’s main goal was not to seek the true cause of the accident.

A Sina Weibo poll – later blocked – asked users why they thought the train wreckage was buried: 98% (61,382) believed it represented destruction of evidence. Dark humour spread online: “How far are we from heaven? Only a train ticket away,” and “The Ministry of Railways earnestly requests that you ride the Heavenly Party Express.” The popular pressure resulted in a full-scale investigation of the crash, and in late December, the government issued a report blaming poorly designed signal equipment and insufficient safety procedures. As many as 54 officials faced disciplinary action as a result of the crash.

The internet also provided a new sense of community for Chinese citizens, who mostly lacked robust civil-society organisations. In July 2012, devastating floods in Beijing led to the evacuation of more than 65,000 residents and the deaths of at least 77 people. Damages totalled an estimated $1.9bn. Local officials failed to respond effectively: police officers allegedly kept ticketing stranded cars instead of assisting residents, and the early warning system did not work. Yet the real story was the extraordinary outpouring of assistance from Beijing web users, who volunteered their homes and food to stranded citizens. In a span of just 24 hours, an estimated 8.8m messages were sent on Weibo regarding the floods. The story of the floods became not only one of government incompetence, but also one of how an online community could transform into a real one.


While the Chinese people explored new ways to use the internet, the leadership also began to develop a taste for the new powers it offered, such as a better understanding of citizens’ concerns and new ways to shape public opinion. Yet as the internet increasingly became a vehicle for dissent, concern within the leadership mounted that it might be used to mobilise a large-scale political protest capable of threatening the central government. The government responded with a stream of technological fixes and political directives; yet the boundaries of internet life continued to expand.

The advent of Xi Jinping in 2012 brought a new determination to move beyond deleting posts and passing regulations. Beijing wanted to ensure that internet content more actively served the interests of the Communist party. Within the virtual world, as in the real world, the party moved to silence dissenting voices, to mobilise party members in support of its values, and to prevent foreign ideas from seeping into Chinese political and social life. In a leaked speech in August 2013, Xi articulated a dark vision: “The internet has become the main battlefield for the public opinion struggle.”

Early in his tenure, Xi embraced the world of social media. One Weibo group, called Fan Group to Learn from Xi, appeared in late 2012, much to the delight of Chinese propaganda officials. (Many Chinese suspected that the account was directed by someone in the government, although the account’s owner denied it.) Xi allowed a visit he made to Hebei to be liveblogged on Weibo by government-affiliated press, and videos about Xi, including a viral music video called How Should I Address You, based on a trip he made to a mountain village, demonstrate the government’s increasing skill at digital propaganda.

Xi Jinping at the World Internet Conference in Jiaxing, China, in 2015. Photograph: Aly Song/Reuters

Under Xi, the government has also developed new technology that has enabled it to exert far greater control over the internet. In January 2015, the government blocked many of the VPNs that citizens had used to circumvent the Great Firewall. This was surprising to many outside observers, who had believed that VPNs were too useful to the Chinese economy – supporting multinationals, banks and retailers, among others – for the government to crack down on them.

In spring 2015, Beijing launched the Great Cannon. Unlike the Great Firewall, which has the capacity to block traffic as it enters or exits China, the Great Cannon is able to adjust and replace content as it travels around the internet. One of its first targets was the US coding and software development site GitHub. The Chinese government used the Great Cannon to launch a distributed denial-of-service attack against the site, overwhelming it with traffic redirected from Baidu (a search engine similar to Google). The attack focused on attempting to force GitHub to remove pages linked to the Chinese-language edition of the New York Times and GreatFire.org, an anti-censorship organisation that helps people circumvent Chinese internet censorship.

But perhaps Xi’s most noticeable gambit has been to constrain the nature of the content available online. In August 2013, the government issued a new set of regulations known as the “seven baselines”. The reaction by Chinese internet companies was immediate. Sina, for example, shut down or “handled” 100,000 Weibo accounts found to not comply with the new rules.

The government also adopted tough restrictions on internet-based rumours. In September 2013, the supreme people’s court ruled that authors of online posts that deliberately spread rumours or lies, and were either seen by more than 5,000 individuals or shared more than 500 times, could face defamation charges and up to three years in jail. Following massive flooding in Hebei province in July 2016, for example, the government detained three individuals accused of spreading “false news” via social media regarding the death toll and cause of the flood. Some social media posts and photos of the flooding, particularly of drowning victims, were also censored.

In addition, Xi’s government began targeting individuals with large social media followings who might challenge the authority of the Communist party. Restrictions on the most prominent Chinese web influencers, beginning in 2013, represented an important turning point in China’s internet life. Discussions began to move away from politics to personal and less sensitive issues. The impact on Sina Weibo was dramatic. According to a study of 1.6 million Weibo users, the number of Weibo posts fell by 70% between 2011 and 2013.


The strength of the Communist party’s control over the internet rests above all on its commitment to prevent the spread of information that it finds dangerous. It has also adopted sophisticated technology, such as the Great Firewall and the Golden Shield. Perhaps its most potent source of influence, however, is the cyber-army it has developed to implement its policies.

The total number of people employed to monitor opinion and censor content on the internet – a role euphemistically known as “internet public opinion analyst” – was estimated at 2 million in 2013. They are employed across government propaganda departments, private corporations and news outlets. One 2016 Harvard study estimated that the Chinese government fabricates and posts approximately 448m comments on social media annually. A considerable amount of censorship is conducted through the manual deletion of posts, and an estimated 100,000 people are employed by both the government and private companies to do just this.

Private companies also play an important role in facilitating internet censorship in China. Since commercial internet providers are so involved in censoring the sites that they host, internet scholar Guobin Yang argues that “it may not be too much of a stretch to talk about the privatisation of internet content control”. The process is made simpler by the fact that several major technology entrepreneurs also hold political office. For example, Robin Li of Baidu is a member of the Chinese People’s Political Consultative Conference, an advisory legislature, while Lei Jun, founder and CEO of mobile phone giant Xiaomi, is a representative of the National People’s Congress.

Yet Xi’s growing control over the internet does not come without costs. An internet that does not work efficiently or limits access to information impedes economic growth. China’s internet is notoriously unreliable, and ranks 91st in the world for speed. As New Yorker writer Evan Osnos asked in discussing the transformation of the Chinese internet during Xi’s tenure: “How many countries in 2015 have an internet connection to the world that is worse than it was a year ago?”

Scientific innovation, particularly prized by the Chinese leadership, may also be at risk. After the VPN crackdown, a Chinese biologist published an essay that became popular on social media, entitled Why Do Scientists Need Google? He wrote: “If a country wants to make this many scientists take out time from the short duration of their professional lives to research technology for climbing over the Great Firewall and to install and to continually upgrade every kind of software for routers, computers, tablets and mobile devices, no matter that this behaviour wastes a great amount of time; it is all completely ridiculous.”

More difficult to gauge is the cost the Chinese leadership incurs to its credibility. Web users criticising the Great Firewall have used puns to mock China’s censorship system. Playing off the fact that the phrases “strong nation” and “wall nation” share a phonetic pronunciation in Chinese (qiangguo), some began using the phrase “wall nation” to refer to China. Those responsible for seeking to control content have also been widely mocked. When Fang opened an account on Sina Weibo in December 2010, he quickly closed the account after thousands of online users left “expletive-laden messages” accusing him of being a government hack. Censors at Sina Weibo blocked “Fang Binxing” as a search term; one Twitter user wrote: “Kind of poetic, really, the blocker, blocked.” When Fang delivered a speech at Wuhan University in central China in 2011, a few students pelted him with eggs and a pair of shoes.

Nonetheless, the government seems willing to bear the economic and scientific costs, as well as potential damage to its credibility, if it means more control over the internet. For the international community, Beijing’s cyber-policy is a sign of the challenge that a more powerful China presents to the liberal world order, which prioritises values such as freedom of speech. It also reflects the paradox inherent in China’s efforts to promote itself as a champion of globalisation, while simultaneously advocating a model of internet sovereignty and closing its cyber-world to information and investment from abroad.

Adapted from The Third Revolution: Xi Jinping and the New Chinese State by Elizabeth C Economy, published by Oxford University Press and available at guardianbookshop.com


The West is ill-prepared for the wave of “deep fakes” from AI

(THIS ARTICLE IS COURTESY OF THE BROOKINGS INSTITUTE)

 

ORDER FROM CHAOS

The West is ill-prepared for the wave of “deep fakes” that artificial intelligence could unleash

Chris Meserole and Alina Polyakova

Editor’s Note: To get ahead of new problems related to disinformation and technology, policymakers in Europe and the United States should focus on the coming wave of disruptive technologies, write Chris Meserole and Alina Polyakova. Fueled by advances in artificial intelligence and decentralized computing, the next generation of disinformation promises to be even more sophisticated and difficult to detect. This piece originally appeared on ForeignPolicy.com.

Russian disinformation has become a problem for European governments. In the last two years, Kremlin-backed campaigns have spread false stories alleging that French President Emmanuel Macron was backed by the “gay lobby,” fabricated a story of a Russian-German girl raped by Arab migrants, and spread a litany of conspiracy theories about the Catalan independence referendum, among other efforts.

Europe is finally taking action. In January, Germany’s Network Enforcement Act came into effect. Designed to limit hate speech and fake news online, the law prompted both France and Spain to consider counterdisinformation legislation of their own. More important, in April the European Union unveiled a new strategy for tackling online disinformation. The EU plan focuses on several sensible responses: promoting media literacy, funding a third-party fact-checking service, and pushing Facebook and others to highlight news from credible media outlets, among others. Although the plan itself stops short of regulation, EU officials have not been shy about hinting that regulation may be forthcoming. Indeed, when Facebook CEO Mark Zuckerberg appeared at an EU hearing this week, lawmakers reminded him of their regulatory power after he appeared to dodge their questions on fake news and extremist content.

The problem is that technology advances far more quickly than government policies.

The recent European actions are important first steps. Ultimately, none of the laws or strategies that have been unveiled so far will be enough. The problem is that technology advances far more quickly than government policies. The EU’s measures are still designed to target the disinformation of yesterday rather than that of tomorrow.

To get ahead of the problem, policymakers in Europe and the United States should focus on the coming wave of disruptive technologies. Fueled by advances in artificial intelligence and decentralized computing, the next generation of disinformation promises to be even more sophisticated and difficult to detect.

To craft effective strategies for the near term, lawmakers should focus on four emerging threats in particular: the democratization of artificial intelligence, the evolution of social networks, the rise of decentralized applications, and the “back end” of disinformation.

Thanks to bigger data, better algorithms, and custom hardware, in the coming years, individuals around the world will increasingly have access to cutting-edge artificial intelligence. From health care to transportation, the democratization of AI holds enormous promise.

Yet as with any dual-use technology, the proliferation of AI also poses significant risks. Among other concerns, it promises to democratize the creation of fake print, audio, and video stories. Although computers have long allowed for the manipulation of digital content, in the past that manipulation has almost always been detectable: A fake image would fail to account for subtle shifts in lighting, or a doctored speech would fail to adequately capture cadence and tone. However, deep learning and generative adversarial networks have made it possible to doctor images and video so well that it’s difficult to distinguish manipulated files from authentic ones. And thanks to apps like FakeApp and Lyrebird, these so-called “deep fakes” can now be produced by anyone with a computer or smartphone. Earlier this year, a tool that allowed users to easily swap faces in video produced fake celebrity porn, which went viral on Twitter and Pornhub.

Deep fakes and the democratization of disinformation will prove challenging for governments and civil society to counter effectively. Because the algorithms that generate the fakes continuously learn how to more effectively replicate the appearance of reality, deep fakes cannot easily be detected by other algorithms—indeed, in the case of generative adversarial networks, the algorithm works by getting really good at fooling itself. To address the democratization of disinformation, governments, civil society, and the technology sector therefore cannot rely on algorithms alone, but will instead need to invest in new models of social verification, too.
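
The article keeps the discussion conceptual. For readers curious about the adversarial setup it refers to, below is a minimal sketch of a generative adversarial network in Python using PyTorch; the framework choice, network sizes and toy data are assumptions for illustration, not anything from the piece. A generator learns to mimic a simple one-dimensional Gaussian while a discriminator learns to tell real samples from generated ones, which is the sense in which such a system “gets good at fooling itself.” Real deep-fake tools operate on images, audio and video rather than a toy distribution, but the training loop has the same shape.

```python
# Minimal GAN sketch (assumed PyTorch setup, toy 1-D data): a generator and a
# discriminator are trained against each other, as described in the text.
import torch
import torch.nn as nn

latent_dim, batch = 8, 64

generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(batch, 1) * 2 + 5              # "authentic" samples from N(5, 2)
    fake = generator(torch.randn(batch, latent_dim))   # forged samples

    # Discriminator step: label real samples 1, generated samples 0.
    d_opt.zero_grad()
    d_loss = bce(discriminator(real), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator call the fakes real.
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()

    if step % 500 == 0:
        print(f"step {step}: fake mean {fake.mean().item():.2f} (target ~5.0)")
```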

At the same time as artificial intelligence and other emerging technologies mature, legacy platforms will continue to play an outsized role in the production and dissemination of information online. For instance, consider the current proliferation of disinformation on Google, Facebook, and Twitter.

A growing cottage industry of search engine optimization (SEO) manipulation provides services to clients looking to rise in the Google rankings. And while for the most part, Google is able to stay ahead of attempts to manipulate its algorithms through continuous tweaks, SEO manipulators are also becoming increasingly savvy at gaming the system so that the desired content, including disinformation, appears at the top of search results.

For example, stories from RT and Sputnik—the Russian government’s propaganda outlets—appeared on the first page of Google searches after the March nerve agent attack in the United Kingdom and the April chemical weapons attack in Syria. Similarly, YouTube (which is owned by Google) has an algorithm that prioritizes the amount of time users spend watching content as the key metric for determining which content appears first in search results. This algorithmic preference results in false, extremist, and unreliable information appearing at the top, which in turn means that this content is viewed more often and is perceived as more reliable by users. Revenue for the SEO manipulation industry is estimated to be in the billions of dollars.

On Facebook, disinformation appears in one of two ways: through shared content and through paid advertising. The company has tried to curtail disinformation across each vector, but thus far to no avail. Most famously, Facebook introduced a “Disputed Flag” to signify possible false news—only to discover that the flag made users more likely to engage with the content, rather than less. Less conspicuously, in Canada, the company is experimenting with increasing the transparency of its paid advertisements by making all ads available for review, including those micro-targeted to a small set of users. Yet, the effort is limited: The sponsors of ads are often buried, requiring users to do time-consuming research, and the archive Facebook set up for the ads is not a permanent database but only shows active ads. Facebook’s early efforts do not augur well for a future in which foreign actors can continue to exploit its news feed and ad products to deliver disinformation—including deep fakes produced and targeted at specific individuals or groups.

Although Twitter has taken steps to combat the proliferation of trolls and bots on its platform, it remains deeply vulnerable to disinformation campaigns, since accounts are not verified and its application programming interface, or API, still makes it possible to easily generate and spread false content on the platform. Even if Twitter takes further steps to crack down on abuse, its detection algorithms can be reverse-engineered in much the same way Google’s search algorithm is. Without fundamental changes to its API and interaction design, Twitter will remain rife with disinformation. It’s telling, for example, that when the U.S. military struck Syrian chemical weapons facilities in April—well after Twitter’s latest reforms were put in place—the Pentagon reported a massive surge in Russian disinformation in the hours immediately following the attack. The tweets appeared to come from legitimate accounts, and there was no way to report them as misinformation.

Blockchain technologies and other distributed ledgers are best known for powering cryptocurrencies such as bitcoin and ethereum. Yet their biggest impact may lie in transforming how the internet works. As more and more decentralized applications come online, the web will increasingly be powered by services and protocols that are designed from the ground up to resist the kind of centralized control that Facebook and others enjoy. For instance, users can already browse videos on DTube rather than YouTube, surf the web on the Blockstack browser rather than Safari, and store files using IPFS, a peer-to-peer file system, rather than Dropbox or Google Docs. To be sure, the decentralized application ecosystem is still a niche area that will take time to mature and work out the glitches. But as security improves over time with fixes to the underlying network architecture, distributed ledger technologies promise to make for a web that is both more secure and outside the control of major corporations and states.

If and when online activity migrates onto decentralized applications, the security and decentralization they provide will be a boon for privacy advocates and human rights dissidents. But it will also be a godsend for malicious actors. Most of these services have anonymity and public-key cryptography baked in, making accounts difficult to track back to real-life individuals or organizations. Moreover, once information is submitted to a decentralized application, it can be nearly impossible to take down. For instance, the IPFS protocol has no method for deletion: users can only add content, they cannot remove it.
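
To see why deletion is awkward in a content-addressed system such as IPFS, consider the toy Python store below (it is not the real IPFS implementation). Content is keyed by its own hash, so “editing” simply publishes a new address, and nothing in the model removes the old entry from nodes that already hold it.

```python
# Toy content-addressed store: addresses are derived from the bytes themselves,
# there is deliberately no delete method, and an "edit" is just a new address.
# Illustrative only; not the actual IPFS protocol.
import hashlib


class ContentStore:
    def __init__(self):
        self._blocks = {}

    def add(self, data: bytes) -> str:
        cid = hashlib.sha256(data).hexdigest()  # the address is the content's hash
        self._blocks[cid] = data                # add-only
        return cid

    def get(self, cid: str) -> bytes:
        return self._blocks[cid]


store = ContentStore()
v1 = store.add(b"original post")
v2 = store.add(b"original post (edited)")
print(v1 != v2)            # True: the edit lives at a brand-new address
print(store.get(v1))       # b'original post' -- the old bytes are still retrievable
```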

For governments, civil society, and private actors, decentralized applications will thus pose an unprecedented challenge, as the current methods for responding to and disrupting disinformation campaigns will no longer apply. Whereas governments and civil society can ultimately appeal to Twitter CEO Jack Dorsey if they want to block or remove a malicious user or problematic content on Twitter, with decentralized applications, there won’t always be someone to turn to. If the Manchester bomber had viewed bomb-making instructions on a decentralized app rather than on YouTube, it’s not clear who authorities should or could approach about blocking the content.

Over the last three years, renewed attention to Russian disinformation efforts has sparked research and activities among a growing number of nonprofit organizations, governments, journalists, and activists. So far, these efforts have focused on documenting the mechanisms and actors involved in disinformation campaigns—tracking bot networks, identifying troll accounts, monitoring media narratives, and tracing the diffusion of disinformation content. They’ve also included governmental efforts to implement data protection and privacy policies, such as the EU’s General Data Protection Regulation, and legislative proposals to introduce more transparency and accountability into the online advertising space.

While these efforts are certainly valuable for raising awareness among the public and policymakers, by focusing on the end product (the content), they rarely delve into the underlying infrastructure and advertising markets driving disinformation campaigns. Doing so requires a deeper examination and assessment of the “back end” of disinformation: the algorithms and industries behind the end product, namely the online advertising market, the SEO manipulation market, and data brokers. Increased automation paired with machine learning will transform this space as well.

To get ahead of these emerging threats, Europe and the United States should consider several policy responses.

First, the EU and the United States should commit significant funding to research and development at the intersection of AI and information warfare. In April, the European Commission called for at least 20 billion euros (about $23 billion) to be spent on research on AI by 2020, prioritizing the health, agriculture, and transportation sectors. None of the funds are earmarked for research and development specifically on disinformation. At the same time, current European initiatives to counter disinformation prioritize education and fact-checking while leaving out AI and other new technologies.

As long as tech research and counterdisinformation efforts run on parallel, disconnected tracks, little progress will be made in getting ahead of emerging threats.

As long as tech research and counterdisinformation efforts run on parallel, disconnected tracks, little progress will be made in getting ahead of emerging threats. In the United States, the government has been reluctant to step in to push forward tech research as Silicon Valley drives innovation with little oversight. The 2016 Obama administration report on the future of AI did not allocate funding, and the Trump administration has yet to release its own strategy. As revelations of Russian manipulation of digital platforms continue, it is becoming increasingly clear that governments will need to work together with private sector firms to identify vulnerabilities and national security threats.

Furthermore, the EU and the U.S. government should also move quickly to prevent the rise of misinformation on decentralized applications. The emergence of decentralized applications presents policymakers with a rare second chance: When social networks were being built a decade ago, lawmakers failed to anticipate the way in which they could be exploited by malicious actors. With such applications still a niche market, policymakers can respond before the decentralized web reaches global scale. Governments should form new public-private partnerships to help developers ensure that the next generation of the web isn’t as ripe for misinformation campaigns. A model could be the United Nations’ Tech Against Terrorism project, which works closely with small tech companies to help them design their platforms from the ground up to guard against terrorist exploitation.

Finally, legislators should continue to push for reforms in the digital advertising industry. As AI continues to transform the industry, disinformation content will become more precise and micro-targeted to specific audiences. AI will make it far easier for malicious actors and legitimate advertisers alike to track user behavior online, identify potential new users to target, and collect information about users’ attitudes, beliefs, and preferences.

In 2014, the U.S. Federal Trade Commission released a report calling for transparency and accountability in the data broker industry. The report called on Congress to consider legislation that would shine light on these firms’ activities by giving individuals access to, and information about, how their data is collected and used online. The EU’s data protection regulation goes a long way in giving users control over their data and limits how social media platforms process users’ data for ad-targeting purposes. Facebook is also experimenting with blocking foreign ad sales ahead of contentious votes. Still, the digital ads industry as a whole remains a black box to policymakers, and much more can still be done to limit data mining and regulate political ads online.

Effectively tracking and targeting each of the areas above won’t be easy. Yet policymakers need to start focusing on them now. If the EU’s new anti-disinformation effort and other related policies fail to track evolving technologies, they risk being antiquated before they’re even introduced.


Yes The Russian Threat To Your Freedom Is Real—And It Matters

(THIS ARTICLE IS COURTESY OF CNN)

 

(CNN)The Russians are coming! Except they aren’t. Though they already have a bit. And they might well be coming a bit more soon.

This is how very bad things happen.
The threat posed by Russia to Western interests is unlike anything seen since the 1990s. It has forces or proxies deployed in Syria, Ukraine and, don’t forget, parts of what’s still called Georgia. There is smoke, but there is also fire and daily there is a lot of fuel being added.
Dutch state media revealed this week that Dutch cyber spies — the Joint Sigint Cyber Unit (JSCU) — were able to hack into the closed-circuit television of the building where a Russian hacking organization known as Cozy Bear worked, and observe them coming and going from offices where they hacked the Democratic National Committee in the US. The Dutch told the Americans, touching off the US investigations. According to the Dutch, the Americans then helpfully told the media they were tipped off by a Western intelligence agency, prompting the Russians to turn off the Cozy Bear CCTV hack.

A Ukrainian serviceman shoots with a grenade launcher during fighting with pro-Russian separatists in Donetsk, Ukraine.

There was also a shrill warning from new UK Defense Secretary Gavin Williamson, who, amid a budget row and internal leadership posturing, chose Friday to unleash a barrage of concerns about “thousands and thousands and thousands” — yes, that many — deaths that Russia could cause in Britain, if it successfully hacked the electricity grid.
Williamson told the Daily Telegraph: “Why would they [the Russians] keep photographing and looking at power stations, why are they looking at the interconnectors that bring so much electricity and so much energy into our country? They are looking at these things because they are saying, ‘These are the ways we can hurt Britain.'” His officials have also alleged Russia may target the transatlantic cables that ferry the internet to the UK.
These new claims were met with the now-predictable Russian derision. Russian defense spokesman Igor Konashenkov said Williamson had “lost understanding of what is reasonable in his fierce fight for the banknotes in the military budget,” and that his “phobia” belonged in “children’s comic books” or an episode of “Monty Python’s Flying Circus.”
Kremlin spokesman Dmitri Peskov dubbed the Dutch report “anti-Russian hysteria,” saying “if the Dutch newspapers want to supply the coal to the furnace of anti-Russian hysteria which is currently takes place in America, well… let’s say it’s not the most noble thing to do.”

‘All decorum has been cast aside’

Russophobia is a familiar and disturbing theme. Foreign Minister Sergey Lavrov recently called it “unprecedented.”
“We never saw this during the Cold War. Back then, there were some rules, some decorum… Now, all decorum has been cast aside,” Lavrov told Russian daily Kommersant in an interview published on January 21.

Russian Foreign Minister Sergei Lavrov gives his annual press conference in Moscow on January 15, 2018.

Some Russian state rhetoric is designed to paint a picture of an outside world that hysterically harnesses fear of a resurgent Russia, when really the country means no harm. It is designed to try and distance Russians from an outside world they can increasingly see, even if only through the slanted prism of Russian state media.
The xenophobia, homophobia and sometimes outright racism that has grown in Russian society also stem from the idea of a people — a narod — under threat. Russophobia, that argument goes, happens because “they want us gone, but also because they fear us, as we refuse to lie down.” I saw it in the eyes and anger of many ethnic Russians embattled in eastern Ukraine. They felt abandoned, scorned, left outside the rest of Ukraine, and had to turn to Russia to protect their Russianness.
Some of Russia’s urban elite has seen too much of the outside world to buy this reductive message. But its nationalists and beholden state employees embrace it, and much of rural Russia hasn’t seen the glittering globe beyond. Life remains tough there, with even state figures accepting that just under 14% of Russians live below “the minimum cost of living,” according to Tass.
Into this narrative of “them and us” come these increasingly vociferous Western claims of the Russian threat. In the partisan fury of US or UK politics, it is hard to know at times whether Russia did ingeniously undermine the entire US electoral process and infiltrate Team Trump, or just ended up having clumsy hackers steal some emails, and allow some of its sympathizers to get too close to some of Trump’s less savvy or wholesome staff.
It is hard to know, with Russian-backed tanks still in Donetsk and jets in Syria, whether we are seeing an expansionist Moscow intent on soon probing the Baltic states or switching off the lights in London, or a nervous Russia that is just checking threats it sees in its near abroad.

Red Square in Moscow. Russians see the West through the prism of state-run media.

The most troubling point is that the distinction doesn’t really matter. This perception of Russophobia (or a real Russian threat) is either what the Kremlin wants, to justify its more aggressive schemes, or it is what the Kremlin feels it has to respond to, as to not appear weak.
Vladimir Putin has long surrounded himself not with tech-age visionaries, but with men who stem from the same age as him, a period he called the “greatest geopolitical catastrophe of the 20th century” — the fall of the Soviet empire. He still feels it personally, wishes to see the shift in power partially redressed and must surely be bemused at how the US public has elected a president so capable of diminishing US influence the world over.
The Kremlin takes things personally. It may seem disproportionate to the slight, but not when compared with the extraordinary suffering of the Soviet era and the brutal collapse of the 1990s. But by recognizing Russia as the threat it increasingly shows itself to be, Western figures are also ensuring Moscow has little choice but to fulfill the prophecy.

NSA and the War on Our Privacy

(THIS ARTICLE IS COURTESY OF THE SAUDI NEWS AGENCY ASHARQ AL-AWSAT)

 


Saturday, 18 November, 2017 – 08:00

Since the former intelligence contractor Edward Snowden’s disclosures began showing up in the Washington Post and the Guardian, the political debate over the American surveillance state has been stuck in the 20th century.

The public has feared a secretive, all-seeing eye, a vast bureaucracy that could peer into our online lives and track the numbers our smartphones dialed. Privacy as we knew it was dead. The era of Big Brother was here.

President Barack Obama responded to the Snowden leaks by commissioning a blue-ribbon panel that ended up concluding the way the National Security Agency did business often trampled on legitimate civil liberties concerns. The government did not need to store our metadata or the numbers, times and dates of our phone calls.

It turns out, though, that the questions prompted by Snowden were only part of the story. A recent exposé from the New York Times tells a very different, and more frightening, tale. In this case, the proper analogy is not Big Brother, but an outbreak. A shadowy network of hackers, known as the Shadow Brokers, stole the NSA’s toolbox of cyber weapons it had used to peer into the computers of our adversaries. This network then offered subscribers the fruits of powerful cyber weapons that the U.S. government was never supposed to even acknowledge. The virus is no longer confined to the lab. It’s out in the wild.

And while the cyber weapons appear to date from 2013, the extent of the damage is still being assessed. The Times reports that the NSA still hasn’t found the culprits. NSA cyber warriors are subjected to polygraphs, and morale at the agency is low. Was there a mole? Was there a hack? The world’s greatest surveillance organization still doesn’t know.

Aside from puncturing the aura of the NSA as an all-seeing eye, the Times story also shows that today the greatest threat to our privacy is not an organization with a monopoly of surveillance power, but rather the disaggregation of surveillance power. It is not the citizen versus the state. Rather it is a Hobbesian state of nature, a war of all against all. Today, foreign governments and private hackers can use the same tools we all feared the U.S. government would use.

It’s enough to make you wish for a simpler time when the greatest threat to our privacy came from our own government.

Bloomberg

END OF HATE: ANONYMOUS NOW IN CONTROL OF DAILY STORMER

(THIS ARTICLE IS COURTESY OF THE ‘DAILY STORMER’ WEBSITE—AND OF ANONYMOUS)

 


#TANGODOWN

THIS SITE IS NOW UNDER THE CONTROL OF ANONYMOUS

WE HAVE TAKEN THIS SITE IN THE NAME OF HEATHER HEYER A VICTIM OF WHITE SUPREMACIST TERRORISM

FOR TOO LONG THE DAILY STORMER AND ANDREW ANGLIN HAVE SPEWED THEIR PUTRID HATE ON THIS SITE

THAT WILL NOT BE HAPPENING ANYMORE

WE HAVE ALL OF THE DETAILS ON THE SERVERS AND WILL BE RELEASING THE DATA WHEN WE FEEL THE TIME IS RIGHT

WE HAVE ALSO GATHERED LOCATIONAL DATA ON ANGLIN HIMSELF AND ARE SENDING OUR ALLIES IN LAGOS TO PAY HIM A VISIT IN PERSON

THIS EVIL CANNOT BE ALLOWED TO STAND

IT TOOK A UNITED FORCE OF ELITE HACKERS FROM AROUND THE WORLD TO BREACH THE SYSTEMS AND THE FIREWALL

WE HAVE HAD THE DAILY STORMER IN OUR SITES FOR MONTHS NOW

THE EVENTS OF CHARLOTTESVILLE ALERTED US TO THE NEED FOR IMMEDIATE ACTION

WE WANT YOU NAZIS TO KNOW: YOUR TIME IS SHORT

WE WILL ALLOW THE SITE TO REMAIN ONLINE FOR 24 HOURS SO THE WORLD CAN WITNESS THE HATE

THEN WE WILL SHUT IT DOWN

PERMANENTLY

HACKERS OF THE WORLD HAVE UNITED IN DEFENSE OF THE JEWISH PEOPLE

YOU SHOULD HAVE EXPECTED US