philip lelyveld The world of entertainment technology


Information Apocalypse

Technologist Aviv Ovadya

Today Ovadya and a cohort of loosely affiliated researchers and academics are anxiously looking ahead — toward a future that is alarmingly dystopian.

For Ovadya — now the chief technologist for the University of Michigan’s Center for Social Media Responsibility and a Knight News innovation fellow at the Tow Center for Digital Journalism at Columbia — the shock and ongoing anxiety over Russian Facebook ads and Twitter bots pales in comparison to the greater threat: Technologies that can be used to enhance and distort what is real are evolving faster than our ability to understand, control, or mitigate them.

Then there’s automated laser phishing, a tactic Ovadya notes security researchers are already whispering about. Essentially, it’s using AI to scan things like our social media presences and craft false but believable messages from people we know. The game changer, according to Ovadya, is that laser phishing would allow bad actors to target anyone and to create a believable imitation of them using publicly available data.

“Previously one would have needed to have a human to mimic a voice or come up with an authentic fake conversation — in this version you could just press a button using open source software,” Ovadya said.
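The "press a button" economics are the core of the threat. A toy sketch (no AI involved, just templating over invented "public profile" facts) shows why personalization, the expensive human step in older scams, collapses to almost nothing once it is automated:

```python
# Toy illustration only: templating over made-up "public profile" data,
# to show why personalized fakes are cheap at scale. All names and
# message text below are invented.

def personalize(template: str, profile: dict) -> str:
    """Fill a message template with facts scraped from a public profile."""
    return template.format(**profile)

profiles = [
    {"name": "Alex", "friend": "Sam", "interest": "rock climbing"},
    {"name": "Priya", "friend": "Lee", "interest": "birdwatching"},
]

template = ("Hi {name}, it's {friend} -- loved your {interest} photos! "
            "Can you take a look at this link for me?")

# One template, arbitrarily many believable-looking messages.
messages = [personalize(template, p) for p in profiles]
print(messages[0])
```

What Ovadya describes swaps the hand-written template for generative models trained on a target's actual writing and voice, which is what makes the imitation believable rather than merely personalized.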

That can lead to something Ovadya calls “reality apathy”: Beset by a torrent of constant misinformation, people simply start to give up. “People stop paying attention to news and that fundamental level of informedness required for functional democracy becomes unstable.”

Ovadya (and other researchers) see laser phishing as an inevitability. “It’s a threat for sure, but even worse — I don’t think there’s a solution right now,” he said. “There’s internet-scale infrastructure stuff that needs to be built to stop this if it starts.”

Ovadya’s premonitions are particularly terrifying given the ease with which our democracy has already been manipulated by the most rudimentary, blunt-force misinformation techniques.

For those paying close attention to developments in artificial intelligence and machine learning, none of this feels like much of a stretch. Software currently in development at the chip manufacturer Nvidia can already convincingly generate hyperrealistic photos of objects, people, and even some landscapes by scouring tens of thousands of images. Adobe also recently piloted two projects, Voco and Cloak — the first a “Photoshop for audio,” the second a tool that can seamlessly remove objects (and people!) from video in a matter of clicks.

... “generative adversarial network” (GAN), which is a neural network capable of learning without human supervision...GANs have both “imagination and introspection” and “can tell how well the generator is doing without relying on human feedback.”...
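The adversarial loop the excerpt describes can be sketched end to end. This is a minimal, hypothetical NumPy toy (a linear generator learning a 1-D Gaussian), nothing like Nvidia's image models; the point it illustrates is the quoted one — the generator's only training signal is the discriminator's score, with no human feedback anywhere in the loop:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from N(4, 1). The generator must learn to mimic it.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w, b = 1.0, 0.0   # generator: g(z) = w*z + b, with noise z ~ N(0, 1)
a, c = 0.1, 0.0   # discriminator: D(x) = sigmoid(a*x + c) = P(x is real)

lr, n = 0.01, 64
for _ in range(5000):
    z = rng.normal(0.0, 1.0, n)
    fake = w * z + b
    real = real_batch(n)

    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake)),
    # i.e. get better at telling real samples from generated ones.
    dr, df = sigmoid(a * real + c), sigmoid(a * fake + c)
    a += lr * (np.mean((1 - dr) * real) - np.mean(df * fake))
    c += lr * (np.mean(1 - dr) - np.mean(df))

    # Generator step: gradient ascent on log D(fake). Its only feedback
    # is the discriminator's score -- no human labels in the loop.
    df = sigmoid(a * fake + c)
    w += lr * np.mean((1 - df) * a * z)
    b += lr * np.mean((1 - df) * a)

print(f"fake samples now centered near {b:.2f} (real mean is 4.0)")
```

Each network improves only because the other does — the "introspection" in the quote is the generator reading how well it is fooling the discriminator off the discriminator's own output.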

“Whether it’s AI, peculiar Amazon manipulation hacks, or fake political activism — these technological underpinnings [lead] to the increasing erosion of trust,” computational propaganda researcher Renee DiResta said of the future threat. “It makes it possible to cast aspersions on whether videos — or advocacy for that matter — are real.” DiResta pointed out Donald Trump’s recent denial that it was his voice on the infamous Access Hollywood tape, citing experts who told him it’s possible it was digitally faked. “You don't need to create the fake video for this tech to have a serious impact. You just point to the fact that the tech exists and you can impugn the integrity of the stuff that’s real.”

Last week, the NYC Media Lab, which helps the city’s companies and academics collaborate, announced a plan to bring together technologists and researchers in June to “explore worst case scenarios” for the future of news and tech. The event, which they’ve named Fake News Horror Show, is billed as “a science fair of terrifying propaganda tools — some real and some imagined, but all based on plausible technologies.”

...the first step for researchers like Ovadya is a daunting one: Convince the greater public, as well as lawmakers, university technologists, and tech companies, that a reality-distorting information apocalypse is not only plausible, but close at hand.

“I think what you’re seeing now is an attack on the enlightenment — and enlightenment documents like the Constitution — by adversaries trying to create a post-truth society. And that’s a direct threat to the foundations of our current civilization.”

“I’m from the free and open source culture — the goal isn’t to stop technology but ensure we’re in an equilibria that’s positive for people. So I’m not just shouting ‘this is going to happen,’ but instead saying, ‘consider it seriously, examine the implications,’” Ovadya told BuzzFeed News.

That said, Ovadya does admit to a bit of optimism. ... "But the last few months have been really promising. Some of the checks and balances are beginning to fall into place." Similarly, there are solutions to be found — like cryptographic verification of images and audio, which could help distinguish what's real and what's manipulated.
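The cryptographic-verification idea can be sketched with nothing but the standard library. A real provenance system would use public-key signatures so anyone can verify; this HMAC toy (the key and media bytes are invented placeholders) only demonstrates the core property — any post-capture edit breaks the check:

```python
import hashlib
import hmac

# Hypothetical sketch of media provenance: a publisher tags content at
# capture time, and any later edit breaks verification. HMAC is used
# here only because it needs nothing beyond the standard library.

def sign_media(data: bytes, key: bytes) -> str:
    """Bind the exact media bytes to the signer's key."""
    digest = hashlib.sha256(data).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify_media(data: bytes, key: bytes, tag: str) -> bool:
    """True only if the bytes are exactly what was signed."""
    return hmac.compare_digest(sign_media(data, key), tag)

key = b"publisher-signing-key"           # invented placeholder key
original = b"...original audio bytes..."
tag = sign_media(original, key)

print(verify_media(original, key, tag))            # untouched clip verifies
print(verify_media(b"...doctored...", key, tag))   # any tampering fails
```

The hard part, as Ovadya's "internet-scale infrastructure" comment suggests, is not the cryptography but deploying keys, capture devices, and verification at the scale of the whole media ecosystem.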

See the full story here:
