LULA TARGETS FAKE NEWS ON ALLEGED CONFESSION NETWORKS

(THIS ARTICLE IS COURTESY OF THE BRAZILIAN NEWS AGENCY 24/7)

Xi Jinping And His Habitual Liars Rattle Taiwan Ahead Of Elections

(THIS ARTICLE IS COURTESY OF THE ALJAZEERA NEWS AGENCY)


‘Fake news’ rattles Taiwan ahead of elections

Beijing is test-driving propaganda techniques ahead of Taiwan’s largest-ever elections on Saturday, officials say.

by James Reinl

President Tsai Ing-wen looks through a pair of binoculars during an anti-invasion drill last month [Tyrone Siu/Reuters]

Taipei, Taiwan – China is spreading “fake news” via social media to swing Taiwanese voters away from President Tsai Ing-wen’s party and behind candidates more sympathetic to Beijing ahead of elections, Taiwanese officials said.

Beijing is test-driving its techniques in Taiwan, where it has a big stake in the politics and understands the language and culture, but it has also deployed its cyber capabilities in the United States, Australia and other democracies, the officials said.

“We received propaganda warfare coming from China for years, but this is taking a very different form,” Foreign Minister Joseph Wu, from Tsai’s ruling Democratic Progressive Party (DPP), told Al Jazeera.

“It’s coming in not from newspapers or their propaganda machine but through our social media, online chat groups, Facebook, the zombie accounts set up, somewhere, by the Chinese government.”

Foreign Minister Joseph Wu, from Tsai’s ruling Democratic Progressive Party [James Reinl/Al Jazeera]

Comments from Wu and other DPP officials are in line with growing global fears that authoritarian China, like Russia, is meddling in foreign elections. Last month, US Vice President Mike Pence said Moscow’s effort “pales in comparison” to interference from Beijing.

Beijing’s mission to the UN did not respond to Al Jazeera’s interview request, but Chinese officials have previously rejected such claims as “confusing right and wrong and creating something out of thin air”.

‘Orchestrate misinformation’

Taiwanese voters go to the polls on Saturday to choose mayors and others in midterm elections that will reflect the popularity of the anti-Beijing DPP and Tsai, who is expected to seek re-election in 2020.

It will be Taiwan’s largest election ever with about 19 million voters, or 83 percent of the population, casting ballots for more than 11,000 officials.

False stories can be traced to foreign servers and back to the Chinese Communist Party (CCP) and its so-called “50 Cent Army” of online trolls and commentators, DPP politician Lo Chi-cheng told Al Jazeera.

They typically undermine Tsai, the DPP or Taiwan’s autonomy from the mainland, while stirring up the historic grievances that divide voters between the DPP and its main rival, the pro-Beijing Kuomintang (KMT).

“The US, Australia, Germany and other countries are also addressing the issue as to how countries like Russia and China use disinformation to influence domestic and electoral politics in democracies like Taiwan,” said Lo.

“It’s a more serious problem because China is so close to Taiwan, language-wise. They don’t have the cultural or language barrier and can easily fabricate news and they know the mentality of Chinese thinking, so it’s easier for them to orchestrate this misinformation.”

DPP politician Lo Chi-cheng [James Reinl/Al Jazeera]

One story suggested that Tsai was flanked by armed soldiers when visiting flood victims in Chiayi County in August. Another said some of Taiwan’s last remaining allied governments were about to abandon Taipei.

Another said China had bussed Taiwanese nationals to safety after Typhoon Jebi killed 11 and injured thousands in Japan in September, and that Taipei had let its people down – a story that reportedly led to the suicide of a Taiwanese diplomat in Osaka.

Ahead of voting, police arrested several suspects for maliciously sharing false stories, but for Wu the onus is on Taiwan’s government to counter fake news with quick, factual corrections. For Lo, plans to tighten media laws are controversial because they could infringe on free speech.

‘Entertainment’ news

Not everyone fears Beijing’s media reach, however. Eric Huang, an independent analyst with links to the KMT, said Taiwan enjoys high internet penetration and its voters are used to subjective news in the mainstream Taiwanese media.

“Taiwanese news agencies are very editorial and opinionated along party lines already, so the people are used to biased news. They just view this information coming from China as entertainment,” Huang told Al Jazeera.

Justin Yu, a technology investor in downtown Taipei, echoed these thoughts, saying younger Taiwanese web-users are well acquainted with the competing narratives from Taipei and Beijing.

“When we were in elementary school, we were told we shouldn’t be so close to the Chinese government. Whenever we see the information, we hesitate and question whether it is real or not. I don’t think there’s a real problem and it doesn’t influence us much,” Yu told Al Jazeera.

Shoppers buy mobile phones in the capital, Taipei, which has one of the world’s highest rates of internet penetration [James Reinl/Al Jazeera]

Since the 2016 election of Tsai’s pro-independence DPP, Beijing has turned the screws on Taiwan, peeling away a handful of its remaining diplomatic allies, excluding it from global forums, and forcing airlines to classify Taiwan as part of China.

Three former allies – El Salvador, Dominican Republic and Burkina Faso – switched their allegiances to Beijing this year, and the Chinese military has stepped up encirclement drills around Taiwan, which Taipei has denounced as intimidation.

According to DPP officials, Beijing has reached deep into the breakaway island of 23 million people, sowing division and confusion through online disinformation, recruiting business figures, and funnelling cash to pro-Beijing politicians.

De facto independence

The Republic of China – Taiwan’s official name – relocated to the island in 1949 when Chiang Kai-shek’s nationalists fled the mainland after being defeated by Mao Zedong’s communists. It is now a democracy with de facto independence from Beijing.

Under its “one China” policy, Beijing regards Taiwan as a renegade province that needs to be unified – by military force if necessary. Many analysts say China seeks to achieve the same end by flooding Taiwan with investment and buying off decision-makers.

The opposition KMT marks a continuation of Chiang’s legacy. DPP supporters typically highlight atrocities committed during Taiwan’s “white terror” and decades of martial law and call for independence from the mainland.

Last month, thousands of pro-independence demonstrators rallied in Taiwan’s capital to protest against Beijing’s “bullying” and called for a referendum on whether the self-ruled island should formally split from China.

Follow James Reinl on Twitter: @jamesreinl


SOURCE: AL JAZEERA NEWS

The West is ill-prepared for the wave of “deep fakes” from AI

(THIS ARTICLE IS COURTESY OF THE BROOKINGS INSTITUTION)


ORDER FROM CHAOS

The West is ill-prepared for the wave of “deep fakes” that artificial intelligence could unleash

Chris Meserole and Alina Polyakova

Editor’s Note: To get ahead of new problems related to disinformation and technology, policymakers in Europe and the United States should focus on the coming wave of disruptive technologies, write Chris Meserole and Alina Polyakova. Fueled by advances in artificial intelligence and decentralized computing, the next generation of disinformation promises to be even more sophisticated and difficult to detect. This piece originally appeared on ForeignPolicy.com.

Russian disinformation has become a problem for European governments. In the last two years, Kremlin-backed campaigns have spread false stories alleging that French President Emmanuel Macron was backed by the “gay lobby,” fabricated a story of a Russian-German girl raped by Arab migrants, and spread a litany of conspiracy theories about the Catalan independence referendum, among other efforts.

Europe is finally taking action. In January, Germany’s Network Enforcement Act came into effect. Designed to limit hate speech and fake news online, the law prompted both France and Spain to consider counterdisinformation legislation of their own. More important, in April the European Union unveiled a new strategy for tackling online disinformation. The EU plan focuses on several sensible responses: promoting media literacy, funding a third-party fact-checking service, and pushing Facebook and others to highlight news from credible media outlets, among others. Although the plan itself stops short of regulation, EU officials have not been shy about hinting that regulation may be forthcoming. Indeed, when Facebook CEO Mark Zuckerberg appeared at an EU hearing this week, lawmakers reminded him of their regulatory power after he appeared to dodge their questions on fake news and extremist content.


The recent European actions are important first steps. Ultimately, none of the laws or strategies that have been unveiled so far will be enough. The problem is that technology advances far more quickly than government policies. The EU’s measures are still designed to target the disinformation of yesterday rather than that of tomorrow.

To get ahead of the problem, policymakers in Europe and the United States should focus on the coming wave of disruptive technologies. Fueled by advances in artificial intelligence and decentralized computing, the next generation of disinformation promises to be even more sophisticated and difficult to detect.

To craft effective strategies for the near term, lawmakers should focus on four emerging threats in particular: the democratization of artificial intelligence, the evolution of social networks, the rise of decentralized applications, and the “back end” of disinformation.

Thanks to bigger data, better algorithms, and custom hardware, in the coming years, individuals around the world will increasingly have access to cutting-edge artificial intelligence. From health care to transportation, the democratization of AI holds enormous promise.

Yet as with any dual-use technology, the proliferation of AI also poses significant risks. Among other concerns, it promises to democratize the creation of fake print, audio, and video stories. Although computers have long allowed for the manipulation of digital content, in the past that manipulation has almost always been detectable: A fake image would fail to account for subtle shifts in lighting, or a doctored speech would fail to adequately capture cadence and tone. However, deep learning and generative adversarial networks have made it possible to doctor images and video so well that it’s difficult to distinguish manipulated files from authentic ones. And thanks to apps like FakeApp and Lyrebird, these so-called “deep fakes” can now be produced by anyone with a computer or smartphone. Earlier this year, a tool that allowed users to easily swap faces in video produced fake celebrity porn, which went viral on Twitter and Pornhub.

Deep fakes and the democratization of disinformation will prove challenging for governments and civil society to counter effectively. Because the algorithms that generate the fakes continuously learn how to more effectively replicate the appearance of reality, deep fakes cannot easily be detected by other algorithms—indeed, in the case of generative adversarial networks, the algorithm works by getting really good at fooling itself. To address the democratization of disinformation, governments, civil society, and the technology sector therefore cannot rely on algorithms alone, but will instead need to invest in new models of social verification, too.
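
The adversarial dynamic is easy to see in miniature. Below is a toy sketch of a GAN training loop in PyTorch, written purely for illustration on made-up one-dimensional data; it is not the code behind FakeApp or any real deep-fake tool, and all networks and hyperparameters here are arbitrary choices. The point is simply that the forger improves because its training signal comes directly from the detector it is trying to fool.

```python
# Minimal GAN training loop on toy 1-D data (illustration only: tiny
# networks, made-up data, no relation to any production deep-fake system).
import torch
import torch.nn as nn

latent_dim, batch = 8, 64

# Generator: turns random noise into a fake "sample".
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: scores how "real" a sample looks.
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = 0.5 * torch.randn(batch, 1) + 3.0   # stand-in "real" data
    fake = G(torch.randn(batch, latent_dim))

    # 1) Train the detector to separate real from fake.
    d_loss = (bce(D(real), torch.ones(batch, 1)) +
              bce(D(fake.detach()), torch.zeros(batch, 1)))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # 2) Train the forger to make the detector label its fakes "real".
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

Every step that sharpens the discriminator also sharpens the gradient the generator learns from, which is why detection by algorithm alone is a moving target.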

At the same time as artificial intelligence and other emerging technologies mature, legacy platforms will continue to play an outsized role in the production and dissemination of information online. For instance, consider the current proliferation of disinformation on Google, Facebook, and Twitter.

A growing cottage industry of search engine optimization (SEO) manipulation provides services to clients looking to rise in the Google rankings. And while for the most part, Google is able to stay ahead of attempts to manipulate its algorithms through continuous tweaks, SEO manipulators are also becoming increasingly savvy at gaming the system so that the desired content, including disinformation, appears at the top of search results.

For example, stories from RT and Sputnik—the Russian government’s propaganda outlets—appeared on the first page of Google searches after the March nerve agent attack in the United Kingdom and the April chemical weapons attack in Syria. Similarly, YouTube (which is owned by Google) has an algorithm that prioritizes the amount of time users spend watching content as the key metric for determining which content appears first in search results. This algorithmic preference results in false, extremist, and unreliable information appearing at the top, which in turn means that this content is viewed more often and is perceived as more reliable by users. Revenue for the SEO manipulation industry is estimated to be in the billions of dollars.
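
As a rough illustration of why a watch-time-first metric rewards sensational content, consider the following toy ranking rule. This is a simplification invented for this example; YouTube’s actual ranking system is proprietary and far more elaborate.

```python
# Toy ranking rule that orders items purely by total watch time
# (views x average minutes watched). A simplification for illustration,
# not YouTube's actual algorithm.
videos = [
    {"title": "sober fact-check", "views": 9000, "avg_minutes": 1.5},
    {"title": "lurid conspiracy", "views": 4000, "avg_minutes": 11.0},
]

def total_watch_time(video):
    return video["views"] * video["avg_minutes"]

for video in sorted(videos, key=total_watch_time, reverse=True):
    print(f'{video["title"]}: {total_watch_time(video):,.0f} minutes')

# Output:
#   lurid conspiracy: 44,000 minutes
#   sober fact-check: 13,500 minutes
# The conspiracy video outranks the fact-check despite fewer views,
# because it holds viewers longer.
```

Under such a rule, fewer viewers watching much longer beats more viewers skimming, so content optimized to hold attention, reliable or not, rises.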

On Facebook, disinformation appears in one of two ways: through shared content and through paid advertising. The company has tried to curtail disinformation across each vector, but thus far to no avail. Most famously, Facebook introduced a “Disputed Flag” to signify possible false news—only to discover that the flag made users more likely to engage with the content, rather than less. Less conspicuously, in Canada, the company is experimenting with increasing the transparency of its paid advertisements by making all ads available for review, including those micro-targeted to a small set of users. Yet, the effort is limited: The sponsors of ads are often buried, requiring users to do time-consuming research, and the archive Facebook set up for the ads is not a permanent database but only shows active ads. Facebook’s early efforts do not augur well for a future in which foreign actors can continue to exploit its news feed and ad products to deliver disinformation—including deep fakes produced and targeted at specific individuals or groups.

Although Twitter has taken steps to combat the proliferation of trolls and bots on its platform, it remains deeply vulnerable to disinformation campaigns, since accounts are not verified and its application programming interface, or API, still makes it possible to easily generate and spread false content on the platform. Even if Twitter takes further steps to crack down on abuse, its detection algorithms can be reverse-engineered in much the same way Google’s search algorithm is. Without fundamental changes to its API and interaction design, Twitter will remain rife with disinformation. It’s telling, for example, that when the U.S. military struck Syrian chemical weapons facilities in April—well after Twitter’s latest reforms were put in place—the Pentagon reported a massive surge in Russian disinformation in the hours immediately following the attack. The tweets appeared to come from legitimate accounts, and there was no way to report them as misinformation.

Blockchain technologies and other distributed ledgers are best known for powering cryptocurrencies such as bitcoin and ethereum. Yet their biggest impact may lie in transforming how the internet works. As more and more decentralized applications come online, the web will increasingly be powered by services and protocols that are designed from the ground up to resist the kind of centralized control that Facebook and others enjoy. For instance, users can already browse videos on DTube rather than YouTube, surf the web on the Blockstack browser rather than Safari, and store files using IPFS, a peer-to-peer file system, rather than Dropbox or Google Docs. To be sure, the decentralized application ecosystem is still a niche area that will take time to mature and work out the glitches. But as security improves over time with fixes to the underlying network architecture, distributed ledger technologies promise to make for a web that is both more secure and outside the control of major corporations and states.

If and when online activity migrates onto decentralized applications, the security and decentralization they provide will be a boon for privacy advocates and human rights dissidents. But it will also be a godsend for malicious actors. Most of these services have anonymity and public-key cryptography baked in, making accounts difficult to track back to real-life individuals or organizations. Moreover, once information is submitted to a decentralized application, it can be nearly impossible to take down. For instance, the IPFS protocol has no method for deletion—users can only add content, they cannot remove it.
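
The permanence follows from content addressing. Here is a conceptual sketch, not the real IPFS implementation (which uses multihash-encoded CIDs and a peer-to-peer distributed hash table), of why “delete” has no global meaning when an item’s address is just a hash of its bytes:

```python
# Conceptual sketch of content addressing (illustration, not real IPFS):
# an item's address is derived from its bytes, so no authority can retire
# the address, and any peer holding a copy can re-serve it unchanged.
import hashlib

def content_address(data: bytes) -> str:
    # Real IPFS uses multihash-encoded CIDs; a raw SHA-256 digest stands in.
    return hashlib.sha256(data).hexdigest()

local_store = {}  # each peer keeps its own address -> bytes mapping

def add(data: bytes) -> str:
    addr = content_address(data)
    local_store[addr] = data  # adding content is trivial...
    return addr

addr = add(b"a viral disinformation payload")

# ...but "deleting" only removes this one peer's copy. The address is a
# pure function of the bytes, so any other peer with a copy keeps serving
# it at exactly the same address.
del local_store[addr]
assert content_address(b"a viral disinformation payload") == addr
```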

For governments, civil society, and private actors, decentralized applications will thus pose an unprecedented challenge, as the current methods for responding to and disrupting disinformation campaigns will no longer apply. Whereas governments and civil society can ultimately appeal to Twitter CEO Jack Dorsey if they want to block or remove a malicious user or problematic content on Twitter, with decentralized applications, there won’t always be someone to turn to. If the Manchester bomber had viewed bomb-making instructions on a decentralized app rather than on YouTube, it’s not clear who authorities should or could approach about blocking the content.

Over the last three years, renewed attention to Russian disinformation efforts has sparked research and activities among a growing number of nonprofit organizations, governments, journalists, and activists. So far, these efforts have focused on documenting the mechanisms and actors involved in disinformation campaigns—tracking bot networks, identifying troll accounts, monitoring media narratives, and tracing the diffusion of disinformation content. They’ve also included governmental efforts to implement data protection and privacy policies, such as the EU’s General Data Protection Regulation, and legislative proposals to introduce more transparency and accountability into the online advertising space.

While these efforts are certainly valuable for raising awareness among the public and policymakers, by focusing on the end product (the content), they rarely delve into the underlying infrastructure and advertising markets driving disinformation campaigns. Doing so requires a deeper examination and assessment of the “back end” of disinformation: the algorithms and industries behind the end product, namely the online advertising market, the SEO manipulation market, and data brokers. Increased automation paired with machine learning will transform this space as well.

To get ahead of these emerging threats, Europe and the United States should consider several policy responses.

First, the EU and the United States should commit significant funding to research and development at the intersection of AI and information warfare. In April, the European Commission called for at least 20 billion euros (about $23 billion) to be spent on research on AI by 2020, prioritizing the health, agriculture, and transportation sectors. None of the funds are earmarked for research and development specifically on disinformation. At the same time, current European initiatives to counter disinformation prioritize education and fact-checking while leaving out AI and other new technologies.


As long as tech research and counterdisinformation efforts run on parallel, disconnected tracks, little progress will be made in getting ahead of emerging threats. In the United States, the government has been reluctant to step in to push forward tech research as Silicon Valley drives innovation with little oversight. The 2016 Obama administration report on the future of AI did not allocate funding, and the Trump administration has yet to release its own strategy. As revelations of Russian manipulation of digital platforms continue, it is becoming increasingly clear that governments will need to work together with private sector firms to identify vulnerabilities and national security threats.

Furthermore, the EU and the U.S. government should also move quickly to prevent the rise of misinformation on decentralized applications. The emergence of decentralized applications presents policymakers with a rare second chance: When social networks were being built a decade ago, lawmakers failed to anticipate the way in which they could be exploited by malicious actors. With such applications still a niche market, policymakers can respond before the decentralized web reaches global scale. Governments should form new public-private partnerships to help developers ensure that the next generation of the web isn’t as ripe for misinformation campaigns. A model could be the United Nations’ Tech Against Terrorism project, which works closely with small tech companies to help them design their platforms from the ground up to guard against terrorist exploitation.

Finally, legislators should continue to push for reforms in the digital advertising industry. As AI continues to transform the industry, disinformation content will become more precise and micro-targeted to specific audiences. AI will make it far easier for malicious actors and legitimate advertisers alike to track user behavior online, identify potential new users to target, and collect information about users’ attitudes, beliefs, and preferences.

In 2014, the U.S. Federal Trade Commission released a report calling for transparency and accountability in the data broker industry. The report called on Congress to consider legislation that would shine light on these firms’ activities by giving individuals access to and information about how their data is collected and used online. The EU’s data protection regulation goes a long way in giving users control over their data and limits how social media platforms process users’ data for ad-targeting purposes. Facebook is also experimenting with blocking foreign ad sales ahead of contentious votes. Still, the digital ads industry as a whole remains a black box to policymakers, and much more can still be done to limit data mining and regulate political ads online.

Effectively tracking and targeting each of the areas above won’t be easy. Yet policymakers need to start focusing on them now. If the EU’s new anti-disinformation effort and other related policies fail to track evolving technologies, they risk being antiquated before they’re even introduced.


TRUMP’S FAKE NEWS MACHINE CRANKS UP TO HELP GET CHILD MOLESTER ELECTED

(THIS ARTICLE IS COURTESY OF POLITIFACT.COM)


Fake news in the Alabama Senate race surges before Election Day

President Donald Trump seeks to boost Republican U.S. Senate candidate Roy Moore by recording a phone call on his behalf in the final stretch of a bitter Alabama campaign marked by sexual misconduct accusations against Moore. (Reuters)

The fake news mill has been working overtime in the closing days of a special election to decide Alabama’s next senator.

The seat, normally a cakewalk for Republicans, is now too close to call. Multiple allegations that Republican Roy Moore made sexual advances on underage girls when he was in his early 30s have scrambled the contest.

Making the story harder for voters to follow are Internet posts using false or made-up information to discredit the accusers. Here are a few claims we’ve swatted down.

The claim: Accuser admits she tampered with Roy Moore’s yearbook signature

The rating: Pants on Fire!

A conspiracy-minded website attempted to cast doubt on evidence presented by one of eight women who accused Moore of sexual misconduct. The misleading headline on Gateway Pundit said, “WE CALLED IT! Gloria Allred Accuser **ADMITS** She Tampered With Roy Moore’s Yearbook ‘Signature.’ ”

Beverly Young Nelson (represented by lawyer Gloria Allred) accused Roy Moore of groping her when she was 16 years old and he, in his 30s, was the deputy district attorney of Etowah County. As evidence, Nelson presented a note she said Moore wrote in her high school yearbook before the incident took place.

The inscription reads, “To a sweeter, more beautiful girl I could not say Merry Christmas. Christmas 1977. Love, Roy Moore, D.A.”

Below the signature is written “12-22-77, Olde Hickory House.”

In a Dec. 8 Good Morning America interview, Nelson said she added the date and place of the inscription.

“He signed your yearbook?” ABC News reporter Tom Llamas asked Nelson.

“He did sign it,” Nelson said.

“And you made some notes underneath?” Llamas asked.

“Yes,” Nelson said.

Gateway Pundit, along with Breitbart and Fox News, jumped on the change. All three charged that Nelson said she either tampered with Moore’s signature or forged all or part of the inscription. Fox News later walked back its story.

We rated the Gateway Pundit claim Pants On Fire.


The claim: Woman says she was offered big money by Washington Post to accuse Roy Moore of misconduct

The rating: Pants on Fire!

Facebook users flagged a post that continued to make the rounds well after it had been debunked.

“Breaking: Woman says she was offered big money by Washington Post to accuse Roy Moore of misconduct,” stated a Nov. 13 headline in Evening World.

The article is based on a since-deleted Twitter account and is fake.

The website based its claim that a Post reporter offered money to a woman on the Twitter account of @umpire43, who identified himself as Doug Lewis #MAGA.

“A family friend who lives in Alabama just told my wife that a WAPO reporter named Beth offered her 1000$ to accuse Roy Moore????,” Lewis tweeted Nov. 10.

One of the Post reporters who wrote about Moore was Beth Reinhard. The Washington Post firmly denied the allegation.

The account user had a history of perpetuating hoaxes.

The Daily Beast reported that the author of the account had repeatedly invented stories about his own background claiming to be a Navy veteran, a pollster, a baseball umpire, an expert on rigged voting machines, an American consulate worker in Calgary and “a beleaguered soul who needed time off after the 9/11 attacks when he saw Muslims ‘dancing on rooftops.’ ”

The Daily Beast contacted all of his alleged employers and affiliates and found that he hadn’t held any of the positions.

A complete lack of proof is a fast track to our worst rating, Pants on Fire.


The claim: Moore’s accuser arrested and charged with falsification

The rating: Pants on Fire!

A spoof story on the website USA Mirror News carried the headline, “Roy Moore’s accuser arrested and charged with falsification.”

The article said, “Alabama Attorney General John Simmons filed charges of falsification (against) Mary Lynne Davies, who said Roy Moore seduced and molested her when she was 14 years old.”

Where to begin with the fabrications?

The actual Attorney General is Steve Marshall, not John Simmons. And there is no Mary Lynne Davies who has accused Moore. Nine women have come forward and there’s not a Davies among them.

USA Mirror News has a disclaimer on its navigation bar, should any reader care to click on it, that says it is a “satirical publication that may appear sometimes to be telling the truth. We assure you that’s not the case. We present fiction as fact and our sources don’t actually exist.”

There’s nothing fake about that. The story itself rates Pants on Fire.


Because Of Pakistan Defence Forum’s Fake News Issues, Its FB & Twitter Accounts Are Suspended

(THIS ARTICLE IS COURTESY OF THE HINDUSTAN TIMES)


Fake news, morphed pics get Pak Defense Forum’s Twitter, FB accounts suspended

Pakistan Defence Forum, which describes itself as “a one-stop resource for Pakistan defense, strategic affairs, security issues, world defense and military affairs” has been repeatedly accused of putting out anti-India propaganda.

WORLD Updated: Nov 19, 2017 07:51 IST

Rezaul H Laskar
Hindustan Times, New Delhi
A doctored image of a student activist of Delhi University which was posted by Pakistan Defence Forum’s Twitter handle. (Photo courtesy: Twitter)

The Twitter account and Facebook page of Pakistan Defence Forum, one of the longest-running forums devoted to Pakistan’s armed forces, were suspended on Saturday shortly after it posted fake news involving Kulbhushan Jadhav and a morphed image of a Delhi University activist.

Over the years, Pakistan Defence Forum, which describes itself as “a one-stop resource for Pakistan defense, strategic affairs, security issues, world defense and military affairs” and is better known by its website url of “defense.pk”, has been repeatedly accused of putting out anti-India propaganda.

Though retired and serving Pakistani military personnel are among the forum’s members, it is not an official website of the armed forces.

Searches for the forum’s Twitter handle, which was verified, and its Facebook page turned up messages that both had been suspended.

The message showing that Pakistan Defence Forum’s Facebook page has been suspended. (Facebook)

On Saturday, numerous Indian Twitter users complained about Pakistan Defence Forum’s Twitter handle when it posted a doctored image of Kawalpreet Kaur, a student activist of Delhi University, that purported to show her standing in front of Delhi’s Jama Masjid with a poster that read: “I am an Indian but I hate India…”

The poster used by Kaur in an image that she had herself posted on Twitter on June 27 this year had read: “I am a citizen of India and I stand with secular values of our Constitution.” At the time, Kaur had said she was asking Indians to change their profile images to “protest mob lynching”.

True story, there goes the Defence of Pakistan. pic.twitter.com/HV4K9bwpUm

Other than running malicious campaigns against several Pakistani journalists/activists, defencepk was also morphing photos to further its propaganda: pic.twitter.com/qK7ZLQM29G

The issue of Kaur’s photo being doctored was flagged by Shehla Rashid, the former vice president of the JNU Students Union, who contended that the forum should not use such images in the name of the Kashmir issue.

I hope this is not official defence page of Pakistan otherwise there is a real security concern if you use morphed pictures just to spread hate across nations. Please put it down.

Also on Saturday, Pakistan Defence Forum had tweeted that India had “refused to avail the generous offer made by #Pakistan to facilitate a meeting” between Kulbhushan Jadhav, sentenced to death by a military court for alleged involvement in espionage, and his wife.

The tweet posted by Pakistan Defence Forum about India purportedly refusing Pakistan’s offer to arrange a meeting between Kulbhushan Jadhav and his wife. (Twitter screengrab)

The reality was that India had accepted the offer and asked for Jadhav’s mother to be included in the meeting. Pakistan’s Foreign Office spokesman had even acknowledged, in a tweet, that India had sent in a reply to the offer to facilitate the meeting.

Indian Reply to Pakistan’s Humanitarian offer for Commander Jadhav received & is being considered

Following the suspension of the Twitter handle and Facebook page, a thread on Pakistan Defence Forum was devoted to discussing the matter, and numerous members hurled abuse at India and Indian nationals.

The various threads in Pakistan Defence Forum are devoted to discussing issues such as Pakistan’s politics, operations against terrorists, and also problems faced by Muslims around the world. The threads are also replete with the conspiracy theories that often find space in mainstream Pakistani media and discussions about India’s domestic politics.

In the past, Pakistan Defence Forum has also been accused of running malicious campaigns against Pakistani journalists, commentators, and activists who have been critical of the powerful military and intelligence agencies.

Fact, Not Fake News: Donald Trump’s Dad Was A KKK Leader In New York City

Donald Trump’s Views On Race Should Be No Surprise


A child is not guilty of their father’s sins, nor is the father guilty of their children’s sins. Yet most of us know from life’s experiences that way too often a child will follow in many, if not all, of their parents’ ways of living, thoughts, and flaws. Way too many children who grow up in a family where the parents, or even stepparents, are present tend to emulate their examples. Way too many boys who lived in a home where the dad physically beat their mom grow up to beat their wives and girlfriends. Way too many girls grow up to look for a ‘dangerous’ man, like their dad. Way too many children who are sexually abused as children grow up to do the same thing to their kids. A lot of kids who grow up in a home with an alcoholic parent become one themselves, just as kids raised around drugs often end up being users themselves.

When a child grows up in a home where they see that their parent or parents are liars and thieves, the child tends to think that same way of life is okay; after all, Dad does it. When you grow up in a home where the parent teaches a child to crave power over other people through any means necessary, many kids do follow the lead they are given. When you are taught to ‘never, ever’ apologize to anyone for anything, you tend to grow up aloof and cold to other people’s feelings. When you grow up in a home where your dad was, at least at one time, a leader in the local (in this case, New York City) KKK, should anyone be surprised that a child would garner a twisted sense of morals and ideas? Donald Trump’s dad was arrested at least twice in NYC for leading violent KKK marches. So should it be a shock that our president acts and believes the way that he does?

Australia’s Prime Minister Slowly Realizes Trump Is A Complete Idiot

(THIS ARTICLE IS COURTESY OF THE ‘NEW YORK MAGAZINE.COM’)

(Is Donald “FAKE NEWS” Trump The Biggest Idiot On Earth?)(TRS)


Australia’s Prime Minister Slowly Realizes Trump Is a Complete Idiot


Donald Trump and Australian prime minister Malcolm Turnbull. Photo: Getty Images

The transcript of Donald Trump’s discussion with Australian prime minister Malcolm Turnbull obtained by the Washington Post reveals many things, but the most significant may be that Trump in his private negotiations is every bit as mentally limited as he appears to be in public.

At issue in the conversation is a deal to resettle in the United States 1,250 refugees who have been detained by Australia. I did not pay any attention to the details of this agreement before reading the transcript. By the time I was halfway through it, my brain could not stop screaming at Trump for his failure to understand what Turnbull was telling him.

Australia has a policy of refusing to accept refugees who arrive by boat. The reason, as Turnbull patiently attempts to explain several times, is that it believes giving refuge to people who arrive by boat would encourage smuggling and create unsafe passage with a high risk of deaths at sea. But it had a large number of refugees who had arrived by sea, living in difficult conditions, whom Australia would not resettle (for fear of encouraging more boat trafficking) but whom it did not want to deport, either. The United States government agreed under President Obama to vet 1,250 of these refugees and accept as many of them as it deemed safe.

In the transcript, Trump is unable to absorb any of these facts. He calls the refugees “prisoners,” and repeatedly brings up the Cuban boatlift (in which Castro dumped criminals onto Florida). He is unable to absorb Turnbull’s explanation that they are economic refugees, not from conflict zones, and that the United States has the ability to turn away any of them it deems dangerous.


Turnbull tries to explain to Trump that refugees have not been detained because they pose a danger to Australian society, but in order to deter ship-based smuggling:

Trump: Why haven’t you let them out? Why have you not let them into your society?

Turnbull: Okay, I will explain why. It is not because they are bad people. It is because in order to stop people smugglers, we had to deprive them of the product. So we said if you try to come to Australia by boat, even if we think you are the best person in the world, even if you are a Noble [sic] Prize winning genius, we will not let you in. Because the problem with the people —

At this point, Trump fails to understand the policy altogether, and proceeds to congratulate Turnbull for what Trump mistakes to be a draconian policy of total exclusion:

Trump: That is a good idea. We should do that too. You are worse than I am … Because you do not want to destroy your country. Look at what has happened in Germany. Look at what is happening in these countries.

Trump has completely failed to understand either that the refugees are not considered dangerous, or, again, that they are being held because of a categorical ban on ship-based refugee traffic.

He also fails to understand the number of refugees in the agreement:

Trump: I am the world’s greatest person that does not want to let people into the country. And now I am agreeing to take 2,000 people and I agree I can vet them, but that puts me in a bad position. It makes me look so bad and I have only been here a week.

Turnbull: With great respect, that is not right – It is not 2,000.

Trump: Well, it is close. I have also heard like 5,000 as well.

Turnbull: The given number in the agreement is 1,250 and it is entirely a matter of your vetting.

Then Trump returns to his belief that the refugees are bad, still failing to understand that they have been detained merely because they arrived by sea and not because they committed a crime:

Trump: I hate taking these people. I guarantee you they are bad. That is why they are in prison right now. They are not going to be wonderful people who go on to work for the local milk people.

Turnbull: I would not be so sure about that. They are basically —

Trump: Well, maybe you should let them out of prison.

He still thinks they’re criminals.

Later, Trump asks what happens if all the refugees fail his vetting process:

Trump: I hate having to do it, but I am still going to vet them very closely. Suppose I vet them closely and I do not take any?

Turnbull: That is the point I have been trying to make.

After several attempts by Turnbull to explain Australia’s policy, Trump again expresses his total inability to understand what it is:

Trump: Does anybody know who these people are? Who are they? Where do they come from? Are they going to become the Boston bomber in five years? Or two years? Who are these people?

Turnbull: Let me explain. We know exactly who they are. They have been on Nauru or Manus for over three years and the only reason we cannot let them into Australia is because of our commitment to not allow people to come by boat. Otherwise we would have let them in. If they had arrived by airplane and with a tourist visa then they would be here.

Trump: Malcom [sic], but they are arrived on a boat?

After Turnbull has told Trump several times that the refugees have been detained because they arrived by boat, and only for that reason, Trump’s question is, “But they are arrived on a boat?”

Soon after, Turnbull again reiterates that Australia’s policy is to detain any refugee who arrives by boat:

Turnbull: The only people that we do not take are people who come by boat. So we would rather take a not very attractive guy that help you out then to take a Noble [sic] Peace Prize winner that comes by boat. That is the point.

Trump: What is the thing with boats? Why do you discriminate against boats? No, I know, they come from certain regions. I get it.

No, you don’t get it at all! It’s not that they come from certain regions! It’s that they come by boat!

So Turnbull very patiently tries to explain again that the policy has nothing to do with what region the refugees come from:

Turnbull: No, let me explain why. The problem with the boats is that you are basically outsourcing your immigration program to people smugglers and also you get thousands of people drowning at sea.

At this point, Trump gives up asking about the policy and just starts venting about the terribleness of deals in general:

I do not know what he got out of it. We never get anything out of it — START Treaty, the Iran deal. I do not know where they find these people to make these stupid deals. I am going to get killed on this thing.

Shortly afterward, the call ends in brusque fashion, and Turnbull presumably begins drinking heavily.

China’s Xi Jinping Is A Master Of Propaganda Which Is “Fake News”

(THIS ARTICLE IS COURTESY OF VOX)


China is perfecting a new method for suppressing dissent on the internet

America should pay attention.

Chinese leader Xi Jinping 
Getty Images

The art of suppressing dissent has been perfected over the years by authoritarian governments. For most of human history, the solution was simple: force. Punish people severely enough when they step out of line and you deter potential protesters.

But in the age of the internet and “fake news,” there are easier ways to tame dissent.

A new study by Gary King of Harvard University, Jennifer Pan of Stanford University, and Margaret Roberts of the University of California San Diego suggests that China is the leading innovator on this front. Their paper, titled “How the Chinese Government Fabricates Social Media Posts for Strategic Distraction, Not Engaged Argument,” shows how Beijing, with the help of a massive army of government-backed internet commentators, floods the web in China with pro-regime propaganda.

What’s different about China’s approach is the content of the propaganda. The government doesn’t refute critics or defend policies; instead, it overwhelms the population with positive news (what the researchers call “cheerleading” content) in order to eclipse bad news and divert attention away from actual problems.

This has allowed the Chinese government to manipulate citizens without appearing to do so. It permits just enough criticism to maintain the illusion of dissent and only acts overtly when fears of mass protest or collective action arise.

To learn more about China’s internet propaganda machine, I reached out to Roberts, one of the authors of the paper. I asked her how successful China has been at manipulating its population and, more importantly, if she thinks this brand of online propaganda will become a model for authoritarianism in the digital age.

You can read our full conversation below.

Sean Illing

How does China use the internet to manipulate its population?

Margaret Roberts

With this particular study, we were motivated by rumors of what’s called the “50 Cent Party” in China [more on this below]. People were convinced that China was engaged in a widespread online propaganda campaign that targeted its own population. But we never had direct evidence that this was ongoing.

Then in 2014, there was a massive leak that revealed what China was doing and how they organized their propaganda machine. So that gave us an opportunity to look at the actual posts the Chinese government was producing and spreading on the web for propagandistic purposes.

We gathered up all the data from the leaked email archive, and that allowed us to explore the content of the propaganda, which is something that no one had done before.

Sean Illing

And what did you find?

Margaret Roberts

We had always thought that the purpose of propaganda was to argue against or undermine critics of the regime, or to simply persuade people that the critics were wrong. But what we found is that the Chinese government doesn’t bother with any of that.

Instead, the content of their propaganda is what we call “cheerleading” content. Basically, they flood the web with overwhelmingly positive content about China’s politics and culture and history. What it amounts to is a sprawling distraction campaign rather than an attempt to sell a set of policies or defend the policies of the regime.

Sean Illing

I want to dive deeper into that, but I want to make sure we don’t glide past the “50 Cent Party” reference. Can you explain what that is?

Margaret Roberts

The 50 Cent Party is a kind of informal branch of the Chinese government that carries out its online propaganda campaign — so these are the foot soldiers who post the content, share the posts, etc. The name stems from the rumor that the members were each paid 50 cents for every post that helped the government. We didn’t find evidence that people were being paid 50 cents, however. It turns out posting online propaganda is just part of a government job.

Sean Illing

Do we have any idea how many members there are or how many people occupy these posts?

Margaret Roberts

The rumor before we started studying this is that it’s something like 2 million people, but we simply don’t know for sure. But we estimate that the government fabricates and posts 448 million social media comments a year.

People stage a rare large-scale protest not far from Tiananmen Square in Beijing on July 24, 2017, in connection with a recent crackdown on a company suspected of being involved in a pyramid scheme.
 Getty Images

Sean Illing

So let’s talk about China’s strategy. In the paper, you point out that China’s government actively manipulates its population, but that it doesn’t necessarily appear that way to its citizens. Part of the reason for this is China’s unusual approach to propaganda, which is to avoid refuting skeptics or defending policies and instead flood the digital space with happy news. What’s the strategic logic behind this approach?

Margaret Roberts

We think the purpose is distraction, because these posts are highly coordinated within certain time periods and the posts are written uniformly over time. They’re actually really bursty (meaning lots of similarly themed posts at the same time). The basic idea seems to be to flood the internet with positive noise in order to drown out bad news and distract from more serious or problematic issues.
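
As a rough illustration of what “bursty” looks like in data, here is a toy detector, invented for this example rather than taken from the study’s actual method, that flags hours whose post volume spikes far above the background rate:

```python
# Toy burst detector: flag hours whose post volume is far above the
# historical average. A real pipeline would also cluster posts by topic
# and wording; this sketch only counts volume per hour.
from collections import Counter
from statistics import mean, stdev

# (hour, text) pairs: steady background chatter plus one coordinated surge.
posts = [(h, "ordinary chatter") for h in range(1, 9) for _ in range(5)]
posts += [(5, "China is great!") for _ in range(40)]

counts = Counter(hour for hour, _ in posts)
volumes = list(counts.values())
cutoff = mean(volumes) + 2 * stdev(volumes)  # crude z-score threshold

bursts = [hour for hour, n in sorted(counts.items()) if n > cutoff]
print("bursty hours:", bursts)  # -> bursty hours: [5]
```

Real coordinated campaigns are identified with richer signals (shared phrasing, account networks, timing correlations), but even the raw volume spike separates the surge hour from the baseline.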

Sean Illing

And they believe this is the most effective way to control political discourse?

Margaret Roberts

I think they realized that politics is about controlling the narrative and setting the agenda. Politicians and government officials in China want people to talk about the issues that reflect well on them. Their calculation is pretty simple: If they engage critics on issues that are complicated or reflect poorly on the government, they only amplify the attention those issues receive. So their approach is to ignore the criticisms and shift attention to other topics, and they do that by deluging the internet with positive propaganda.

Sean Illing

Are these positive stories actually true, or are we talking about “fake news”?

Margaret Roberts

This is a really interesting question. A lot of what we found in the leaked archive isn’t fake news. What they’re creating are stories that promote patriotism. They want people talking about and responding to content that favors the regime. But they also want people to think that content is coming from civilians and not from the government, which is why most of this is presented as someone’s opinion.

Sean Illing

What form does this cheerleading content take? What kinds of stories do they promote?

Margaret Roberts

The most common articles we found discussed how great it is to live in China or how wonderful Chinese culture is or how dominant China’s sports teams are — that kind of stuff. We’re not really talking about fact-based material here. It’s just positive stories that flatter the regime and the country.

Again, the point isn’t to get people to believe or care about the propaganda; it’s to get them to pay less attention to stories the government wants to suppress.

Sean Illing

Something else that jumped out at me in the paper was this idea that they want to permit just enough criticism to offer the illusion of dissent, but they want to make sure that there’s never enough criticism to spark collective action.

Margaret Roberts

China monitors the online information environment in order to collect information about the public and what they’re thinking. In that sense, they want people communicating freely. But a problem arises when you have too many people criticizing the government at the same time. There’s a constant risk of collective action or mass protest.

China’s government does its best to distinguish between useful criticisms (the kinds of criticisms that help them figure out how to satisfy the citizenry) and dangerous criticisms (the kinds of criticisms that might lead to mass protest events). They usually wait until there is a possibility for major mobilization against the government before they engage in overt censorship.

Sean Illing

Is China’s use of the internet unique or new? Are other governments doing similar things?

Margaret Roberts

I think there are aspects of the Chinese model that are new and unique, and certainly they’ve been at the forefront of trying to figure out how to control the internet. There is some evidence that other countries are learning from China, but nothing definitive.

Sean Illing

In the paper, you suggest this research might lead us to rethink the notion of “common knowledge” in theories of authoritarian politics — what does that mean?

Margaret Roberts

I think historically a lot of people have thought that common knowledge about things the government maybe has done wrong is detrimental to the regime. This is the idea that any criticism is detrimental to the regime. What we find in China is that criticism can be very helpful to the regime because it can allow them to respond.

But the type of common knowledge that’s really dangerous to the regime is knowledge of protests or other forms of collective action activity. That’s a major threat because it can spread so easily. We’ve seen this over and over throughout world history: Regimes are most vulnerable when small protests escalate into something much broader. This is what China’s government is determined to prevent.

People lie on the ground in Beijing on July 24, 2017, in protest against police for closing the road to a gathering where at least several thousand people staged a rare large-scale rally not far from Tiananmen Square in connection with a recent crackdown on a company suspected of being involved in a pyramid scheme.
 Getty Images

Sean Illing

To be clear, you call China’s approach “strategic distraction,” but it’s really about undercutting the possibilities for organized dissent. Regimes have always tried to capture people’s attention and redirect it in less dangerous directions. The only thing new about China’s operation is its use of the internet.

Margaret Roberts

I think that’s exactly right on.

Sean Illing

Do you think China’s approach to suppressing dissent is uniquely effective in an age of “fake news” and “post-truth”?

Margaret Roberts

The internet has created an environment in which there is a vast amount of information. That means it’s difficult for people to separate out “good” and “bad” information. Because many people have short attention spans online, they can easily be affected by information that looks like something it is not. That’s what China’s online propaganda and fake news have in common — they both take advantage of our short attention spans on the web.

Sean Illing

Is this a model for authoritarianism in the digital age? Should we expect more of this from other governments?

Margaret Roberts

The difficulty with online propaganda, and we’re seeing this in the US and other democracies around the world right now, is that it doesn’t function overtly like traditional forms of censorship. Most people object to blatant censorship. But online propaganda is a form of participation as well as a form of censorship, so it’s difficult to know what the right policy is.

People want to introduce information on the web en masse, and that means a lot of noise and opinions and bots and commentators. Are there ways of regulating all of this without censoring ourselves? I think that’s a really hard question, and I don’t have the answers. But I think the world will have to struggle with this new reality of online propaganda, because it isn’t going away.

Russian hackers breached Qatar’s state news agency and planted a fake news report

(THIS ARTICLE IS COURTESY OF CNN)

US investigators believe Russian hackers breached Qatar’s state news agency and planted a fake news report that contributed to a crisis among the US’ closest Gulf allies, according to US officials briefed on the investigation.

The FBI recently sent a team of investigators to Doha to help the Qatari government investigate the alleged hacking incident, Qatari and US government officials say.

Intelligence gathered by the US security agencies indicates that Russian hackers were behind the intrusion first reported by the Qatari government two weeks ago, US officials say. Qatar hosts one of the largest US military bases in the region.

The alleged involvement of Russian hackers intensifies concerns by US intelligence and law enforcement agencies that Russia continues to try some of the same cyber-hacking measures on US allies that intelligence agencies believe it used to meddle in the 2016 elections.

US officials say the Russian goal appears to be to cause rifts among the US and its allies. In recent months, suspected Russian cyber activities, including the use of fake news stories, have turned up amid elections in France, Germany and other countries.

It’s not yet clear whether the US has tracked the hackers in the Qatar incident to Russian criminal organizations or to the Russian security services blamed for the US election hacks. One official noted that based on past intelligence, “not much happens in that country without the blessing of the government.”

The FBI and CIA declined to comment. A spokeswoman for the Qatari embassy in Washington said the investigation is ongoing and its results would be released publicly soon.

The Qatari government has said a May 23 news report on its Qatar News Agency attributed false remarks to the nation’s ruler that appeared friendly to Iran and Israel and questioned whether President Donald Trump would last in office.

Qatari Foreign Minister Sheikh Mohammed Bin Abdulrahman al-Thani told CNN the FBI has confirmed the hack and the planting of fake news.

“Whatever has been thrown as an accusation is all based on misinformation and we think that the entire crisis being based on misinformation,” the foreign minister told CNN’s Becky Anderson. “Because it was started based on fabricated news, being wedged and being inserted in our national news agency which was hacked and proved by the FBI.”

Sheikh Saif Bin Ahmed Al-Thani, director of the Qatari Government Communications Office, confirmed that Qatar’s Ministry of Interior is working with the FBI and the United Kingdom’s National Crime Agency on the ongoing hacking investigation of the Qatar News Agency.

“The Ministry of Interior will reveal the findings of the investigation when completed,” he told CNN.

Partly in reaction to the false news report, Qatar’s neighbors, led by Saudi Arabia and the United Arab Emirates, have cut off economic and political ties, causing a broader crisis.

The report came at a time of escalating tension over accusations Qatar was financing terrorism.

On Tuesday, Trump tweeted criticism of Qatar that mirrors that of the Saudis and others in the region who have long objected to Qatar’s foreign policy. He did not address the false news report.

“So good to see the Saudi Arabia visit with the King and 50 countries already paying off,” Trump said in a series of tweets. “They said they would take a hard line on funding extremism, and all reference was pointing to Qatar. Perhaps this will be the beginning of the end to the horror of terrorism!”

In his tweet, Trump voiced support for the regional blockade of Qatar and cited Qatar’s funding of terrorist groups. The Qataris have rejected the terror-funding accusations.

Hours after Trump’s tweets, the US State Department said Qatar had made progress on stemming the funding of terrorists but that there was more work to be done.

US and European authorities have complained for years about funding for extremists from Saudi Arabia and other nations in the Gulf region. Fifteen of the 19 9/11 hijackers were Saudi citizens.

Last year during a visit to Saudi Arabia, Obama administration officials raised the issue of Saudi funding to build mosques in Europe and Africa that are helping to spread an ultra-conservative strain of Islam.

US intelligence has long been concerned with what they say is the Russian government’s ability to plant fake news in otherwise credible streams, according to US officials.

That concern has surfaced in recent months in congressional briefings by former FBI Director James Comey.

Comey told lawmakers that one reason he decided to bypass his Justice Department bosses in announcing no charges in the probe of Hillary Clinton’s private email server was the concern about an apparent fake piece of Russian intelligence. The intelligence suggested the Russians had an email that indicated former Attorney General Loretta Lynch had assured Democrats she wouldn’t let the Clinton probe lead to charges.

The FBI came to believe the email was fake, but still feared the Russians could release it to undermine the Justice Department’s role in the probe.