Russian firms offer propaganda with a professional touch



The same kinds of digital dirty tricks used to interfere in the 2016 U.S. presidential election and beyond are now up for sale on underground Russian forums for as little as a few thousand dollars, according to a new report from an internet security company.

Businesses, individuals and politicians remain at risk of attack from rivals taking advantage of “disinformation for hire” services that are able to place seemingly legitimate articles on various websites and then spread links to them through networks of inauthentic social media accounts, warned researchers at Insikt Group, a unit of the Boston-area threat intelligence firm Recorded Future, in a report released Monday.

And to prove it, the researchers created a fake company — then paid one Russian group $1,850 to build up its reputation and another $4,200 to tear it down. The groups were highly professional, offering responsive, polite customer service, and a menu of services. Firms charged varying prices for services, such as $8 for a social media post, $100 per 10 comments made on an article or post and $65 for contacting a media source. Each firm the researchers hired claimed to have experience working on targets in the West.

One firm even had a public website with customer testimonials. Researchers said the disinformation firms offered the kind of professional responsiveness a company might expect from any contractor.

“This trolling-as-a-service is the expected next step of social media influence after the success of the Internet Research Agency,” said Clint Watts, a senior fellow at the Foreign Policy Research Institute and NBC News security analyst, referring to the Kremlin-linked digital manipulation firm accused in the Mueller indictments of disrupting the 2016 election. “There’s high demand for nefarious influence and manipulation, and trained disinformation operators who will seek higher profits.”

Politicians and companies have deployed and countered disinformation for centuries, but its reach has been vastly extended by digital platforms designed to promote hot-button content and sell targeted ads. Recently, businesses have been hit by fake correspondence and videos that have hurt their stock prices and sent executives scrambling to hire third-party firms to monitor for erroneous online headlines.

Previously, vendors of these kinds of malicious online influence campaigns focused on Eastern Europe and Russia. But since Russia’s playbook for social media manipulation became public after the 2016 election, sellers have proved willing to pursue other geographies and deploy their services in the West, Roman Sannikov, an analyst with Recorded Future, told NBC News.

“I don’t think social media companies have come up with an automated way to filter out this content yet,” Sannikov said.

He advised company executives to stay vigilant for false information being disseminated about their company and reach out to social media companies to get it taken down before it spreads.

“It’s really the symbiotic relationship between media and social media, where they can take an article that looks legit with a sensational headline and plop it into social to amplify the effect,” Sannikov said. “It’s this feedback loop that is so dangerous.”

The researchers created a fake company and hired two firms that advertised their services on Russian-language private marketplaces. One firm was hired to build the fake company’s reputation, the other to destroy it.

Because the company was fake, with no one following it or talking about it, there was no way to measure the campaign’s impact on real conversations. Activity about a fictitious company is also less likely to trigger moderation.

But for as little as $6,000, the researchers used the firms to plant four pre-written articles on websites, some of them lesser known. One of the websites belonged to a media organization that has been in existence for almost a century, according to the researchers, who withheld its name. One of the articles carried a paid content disclaimer.

Controlled accounts under fictitious personas then spread links to those articles on social media with hyped-up headlines. One of the firms first used more established accounts and then reposted the content with batches of newer accounts on a variety of platforms including Facebook and LinkedIn. One firm said it usually created several thousand accounts per campaign because only a few would survive being banned. The accounts also friended and followed other accounts in the target country.

The firms were also able to create social media accounts for the fake company that drew more than 100 followers, although it was impossible to determine whether any were real.

The security firm’s findings offer fresh evidence that even after years of crackdowns and tens of thousands of account removals by social media platforms, it’s still possible to create networks of phony digital personas and operate them in concert to try to spread false information online.

The firms claimed to use a network of editors, translators, search engine optimization specialists, hackers and journalists, some of them on retainer, as well as investigators on staff who could dig up dirt.

One firm even offered to lodge complaints accusing the company of involvement in human trafficking. It also offered reputation-cratering services that could set someone up at work, counter a disinformation attack or “sink an opponent in an election.”

“If our experience is any indication, we predict that disinformation as a service will spread from a nation-state tool to one increasingly used by private individuals and entities, given how easy it is to implement,” the researchers concluded.
