philip lelyveld The world of entertainment technology


When AI hurts people, who is held responsible?

“Where society decides that AI is too beneficial to set aside, we will likely need a new regulatory paradigm to compensate the victims of AI’s use, and it should be one divorced from the need to find fault. This could be strict liability, it could be broad insurance, or it could be ex ante regulation,” the paper reads.

Secrecy in the AI industry is a major hurdle when it comes to accountability. Negligence law typically evolves over time to reflect common definitions of what constitutes reasonable behavior on the part of, for example, a doctor or driver accused of negligence. But corporate secrecy is likely to keep common occurrences that result in injury hidden from the public. As with Big Tobacco, some of that information may come into public view through whistleblowers, but a lack of transparency leaves people exposed in the interim. And AI’s rapid development threatens to overwhelm the pace of changes to negligence or tort law, exacerbating the situation.

“As a result of the secrecy, we know little of what individual companies have learned about the errors and vulnerabilities in their products. Under these circumstances, it is impossible for the public to come to any conclusions about what kinds of failures are reasonable or not,” the paper states.

See the full story here:


Using Rayleigh Waves in Making Touchable Holograms

The researchers discovered that Rayleigh waves travel through the skin and bone layers, where the body's touch receptor cells pick them up and relay the signal to the brain.

They used mathematical models of touch receptors to show how they respond to Rayleigh waves, a response that can vary across species. Still, the relationship between wavelength and receptor depth remains constant, enabling a universal law to be defined.
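As a rough illustration of the scaling relationship described above: a Rayleigh wave's wavelength follows λ = v/f, and the "universal law" idea is that the ratio of receptor depth to wavelength stays roughly constant across species. The sketch below uses made-up placeholder speeds, frequencies, and depths purely to show the arithmetic; they are not figures from the study.

```python
# Illustrative sketch of the Rayleigh-wave "universal law" idea:
# wavelength = wave_speed / frequency, and the claim is that the ratio
# receptor_depth / wavelength is roughly conserved across species.
# All numbers below are hypothetical placeholders, not values from the paper.

def rayleigh_wavelength(wave_speed_m_s: float, frequency_hz: float) -> float:
    """Wavelength of a surface wave from its propagation speed and frequency."""
    return wave_speed_m_s / frequency_hz

def depth_to_wavelength_ratio(receptor_depth_m: float,
                              wave_speed_m_s: float,
                              frequency_hz: float) -> float:
    """Dimensionless ratio the 'universal law' says is roughly constant."""
    return receptor_depth_m / rayleigh_wavelength(wave_speed_m_s, frequency_hz)

# Hypothetical comparison: a species with receptors twice as deep, paired
# with proportionally faster wave propagation, keeps the ratio unchanged.
species_a = depth_to_wavelength_ratio(0.002, 5.0, 250.0)   # 2 mm deep receptors
species_b = depth_to_wavelength_ratio(0.004, 10.0, 250.0)  # 4 mm deep receptors

print(species_a, species_b)  # both 0.1 under these placeholder inputs
```

Under these assumed inputs both ratios come out equal, which is the shape of the invariance the researchers describe across species with very different skin properties.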

Study lead author Dr. Tom Montenegro-Johnson of the University of Birmingham's School of Mathematics said that touch is an essential, primordial sense, and also the most complex, which makes it the least understood of the senses.

"For example, if you indent the skin of a rhinoceros by 5mm, they would have the same sensation as a human with a similar indentation - it's just that the forces required to produce the indentation would be different," Andrews said.

See the full story here:


Inside the strange new world of being a deepfake actor

To bring the experiment to life, they chose an equally provocative subject: they would create an alternative history of the 1969 Apollo moon landing. Before the launch, US president Richard Nixon’s speechwriters had prepared two versions of his national address—one designated “In Event of Moon Disaster,” in case things didn’t go as planned. The real Nixon, fortunately, never had to deliver it. But a deepfake Nixon could.

So Panetta, the creative director at MIT's Center for Advanced Virtuality, and Burgund, a fellow at the MIT Open Documentary Lab, partnered up with two AI companies. Canny AI would handle the deepfake video, and Respeecher would prepare the deepfake audio. With all the technical components in place, they just needed one last thing: an actor who would supply the performance.

“We needed to find somebody who was willing to do this, because it’s a little bit of a weird ask,” Burgund says. “Somebody who was more flexible in their thinking about what an actor is and does.”

For the visuals, Canny AI specializes in video dialogue replacement, which uses an actor’s mouth movements to manipulate someone else’s mouth in existing footage. The actor, in other words, serves as a puppeteer, never to be seen in the final product. The person’s appearance, gender, age, and ethnicity don’t really matter.

But for the audio, Respeecher, which transmutes one voice into another, said it’d be easier to work with an actor who had a similar register and accent to Nixon’s.

In some ways, there’s little difference between deepfake acting and CGI acting, or perhaps voice acting for a cartoon. Your likeness doesn’t make it into the final production, but the result still has your signature and interpretation. But deepfake casting can also go the other direction, with a person's face swapped into someone else’s performance.

While professionalized deepfakes have pushed the boundaries of art and creativity, their existence also raises tricky ethical questions. There are currently no real guidelines on how to label deepfakes, for example, or where the line falls between satire and misinformation.

See the full story here:


USPTO Report on Public Views on Artificial Intelligence and IP – Current Laws are Adequate, but Data is Key Issue

With respect to patent protection, the consensus remains that inventors must be human, but some differences arose as to what human activities should qualify as a contribution to the conception of an invention.  The Report notes that “activities such as designing the architecture of the AI system, choosing the specific data to provide to the AI system, developing the algorithm to permit the AI system to process that data, and other activities not expressly listed here may be adequate to qualify as a contribution to the conception of the invention.”  Report, p. 5.  But perhaps the USPTO will need to revisit the question of whether machines can be inventors if and when science agrees that machines can “think” on their own.

Finally, we will likely see further developments in the case law on the issue of copyright infringement through machine ingestion of data, and the applicability (or not) of the fair use defense.  

See the full story here:


HBR: How GPT-3 Is Shaping Our AI Future

[PhilNote: this is a 46-minute podcast.]

OpenAI stunned the world with the release of Generative Pre-trained Transformer 3 (GPT-3), the world’s most impressive language-generating AI. OpenAI CEO Sam Altman joins Azeem Azhar to reflect on the huge attention generated by GPT-3 and what it heralds for future research and development toward the creation of a true artificial general intelligence (AGI).

They also explore:

  • How AGI could be used both to reduce and exacerbate inequality.
  • How governance models need to change to address the growing power of technology companies.
  • How Altman’s experience leading Y Combinator informed his leadership of OpenAI.

Listen here:


When newly-minted Nobel Prize winner Roger Penrose riffed on very weird physics with Joe Rogan (2018)

This is a remarkable conversation!


Which A.I. planet do you live on?

For the past three years, two London-based investors have compiled an extremely comprehensive summary of the current “State of A.I.” It's the work of Ian Hogarth, who founded the concert discovery site Songkick and is now a prominent angel investor, and Nathan Benaich, a venture capitalist whose firm Air Street Capital focuses on startups built around applications of artificial intelligence.

This year’s report ( ) runs to 177 detailed PowerPoint slides. It’s a great way to take the pulse of the whole field.

One trend ...: a growing dichotomy between the priorities of A.I. researchers and those of A.I. practitioners who work in other kinds of businesses, such as healthcare and finance.

Here are some other key takeaways from “The State of A.I.”:

  • A.I. in healthcare and medicine is booming. 
  • Privacy-preserving machine learning is going to be huge. 
  • Demand for A.I. talent continues to far outstrip supply, despite a drop-off in job postings due to the Covid-19 pandemic. 
  • The U.S. remains the best place in the world for A.I. talent—but a lot of that talent is foreign-born. 
  • Regulators are finally starting to scrutinize the use of A.I.
  • The U.S. military is increasingly experimenting with cutting-edge A.I. techniques and incorporating A.I. into its arsenal.

See the full story here:


Augmented reality goggles for military working dogs could let handlers give them commands remotely

Military working dogs are directed via hand signals, voice commands, or laser pointers, all of which require the handler to remain close by. That can potentially endanger soldiers on missions that involve finding explosives and hazardous materials, or assisting in rescue operations, the Army statement said Tuesday.

The goggles developed by the Army and the Seattle-based company Command Sight show dogs where to go using a simulated laser pointer.

Initial feedback indicates “the system could fundamentally change how military canines are deployed in the future,” said A.J. Peper, the founder of Command Sight, as quoted in the Army’s statement.

See the full story here:


The Void Co-founder Unveils VR Skydiving Attraction ‘JUMP’, Locations Coming 2021

James Jensen, co-founder and creator of VR attraction The Void, recently unveiled his next VR startup which aims to bring the thrills of wingsuit skydiving to people particularly averse to jumping out of a perfectly good airplane.

The company, called JUMP, exited its two-year stint in stealth mode this past weekend. According to Jensen’s LinkedIn page, he’s been working as CEO of Jump since March 2018, or just a few months before he left his position as Chief Visionary Officer at The Void.

Academy Award-winning designer John Gaeta has signed onto the project as an advisor; Gaeta is best known for pioneering ‘Bullet Time’ for The Matrix films, his work on volumetric capture methods, and for co-founding Lucasfilm’s immersive skunkworks ILMxLAB.

See the full story here:



Neuroeconomist Daeyeol Lee discusses his new book and the development of artificial intelligence, asking 'Will AI ever surpass human intelligence?'

How might AI impact the relationship between humans and machines, or human civilization as a whole?

Increasingly powerful artificial intelligence and machines equipped with such AI will continue to develop, undoubtedly increasing the productivity for people who control such tools. While increased productivity is good, this process will unfold unevenly throughout society, amplifying already existing wealth inequality. I think this is something we have witnessed many times throughout history. Sharing the benefits of technological advances fairly among all the members of a society has always been a much harder problem than developing the technology itself, and we have frequently failed to find a good solution for everyone. In order to truly gain the most from technological advances, we also need to be aware of their limitations and potential for abuse. Reflecting on these and how we resolve them might even give us an opportunity to better understand human nature as the gap between our intelligence and AI continues to narrow.

See the full story here: