philip lelyveld The world of entertainment technology



1. Targeted Advertising

When it comes to online shopping, the use of personalized, AI-generated banners is now standard practice, with the global e-commerce giant Alibaba’s Luban algorithm generating hundreds of millions of unique ad banners on its site every single day.

2. Personalized Content

Netflix’s sophisticated Meson algorithm is able to analyze user behavior and sentiment to curate a bespoke selection of media content for each of its millions of account holders, even using different movie posters and trailer clips depending on who is watching.

3. Accessibility

AI is being used to improve content classification and flag potentially problematic or harmful content, often with the use of sentiment recognition to identify anything that might be described as being sensitive.

4. New Frontiers of Entertainment

One of the emerging shifts in entertainment that looks set to completely disrupt the industry is the growing use of VR (virtual reality) and AR (augmented reality), two mediums that rely heavily on AI processes to function and develop.

See the full story here:



Black creators are underrepresented in VR, AR and XR. Crux helps them secure funds and tell their stories.

Back in the 1980s and 1990s, a series of discoveries allowed geologists and seismologists to pinpoint one of the largest earthquakes in history down to its date, magnitude and duration, despite it having hit nearly 300 years prior. It was only years later that researchers decided to analyze Native American stories of earthquakes in the same region, handed down orally from generation to generation — studies that pointed them to a remarkably accurate date range.

Clearly, they should have considered those stories much sooner.

For Ruffin, the tale illustrates a failure on the part of researchers to expand their frameworks, to recognize relevance where it plainly stands. It’s a shortcoming she sees in her own field, as someone committed to helping VR, XR and AR creators from marginalized communities secure funding for their projects.

“When do we get to a place when Black, brown and queer creators don’t have to make our stories legible to white funders?” she said. “I think half the time people get no’s because white people can’t see themselves in the story, or they can’t see where the audience is.”

From Roger Ross Williams’ Traveling While Black to Hyphen Labs’ NeuroSpeculative AfroFeminism, virtual reality projects by Black creators have received notable coverage and accolades in recent years, but making those projects happen can be difficult.

The watershed 2011 report Fusing Art, Culture and Social Change found that distribution of funds to arts institutions was “demonstrably out of balance” with demographics. A 2017 follow-up determined that the problem had only worsened.

Crux takes a multi-pronged approach to supporting Black virtual reality creators. It’s part producer, part virtual events platform and part fundraiser.

Inequity manifests itself in a few ways in the virtual-reality world, Ruffin said, including hardware design that suits certain facial structures better than others and — in some VR games — lingering bugs that allow for abusive interactions. But funding disparities top her list of concerns.

Last year, the organization hosted two salons that brought together traditional philanthropic investors with Black artists. Crux has since been able to distribute close to a million dollars from funders to Black XR/VR creators. Among the projects in the works is a large-scale public-art AR piece, featuring living Black Panther Party members, in partnership with Metastage. Crux also functions as a cooperative, with a small percentage of member earnings going into a fund to support future projects.

See the full story here:


Frank Turner on playing the virtual reality version of Glastonbury’s Shangri-La: “It’s cool as shit!”

Viewers or punters – however you want to put it – go into this virtual reality version of Shangri-La and can walk around and stop in and see different stuff and one of them will be me, playing away as a hologram. You don’t have to have an actual VR headset to be a part of this either, that’s important to stress, you can just do it on an iPhone.

I went down to a warehouse in Bow to record my set, which was all very carefully socially distanced – and I’ve got to say, it was extremely nice to get out of the house to play some music. It was nice to remember that there are other people out there – my wife and I have been at home just the two of us for a really long time – and so nice to see people doing whatever they can to keep some form of live music going.

I would recommend some warm cans of lager, then put some wet trainers on and shit in a bucket – then you’ll be pretty close to the Glastonbury experience.

See the full story here:


Holographic Optics for Thin and Lightweight Virtual Reality

Abstract: We present a class of display designs combining holographic optics, directional backlighting, laser illumination, and polarization-based optical folding to achieve thin, lightweight, and high performance near-eye displays for virtual reality. Several design alternatives are proposed, compared, and experimentally validated as prototypes. Using only thin, flat films as optical components, we demonstrate VR displays with thicknesses of less than 9 mm, fields of view of over 90° horizontally, and form factors approaching sunglasses. In a benchtop form factor, we also demonstrate a full color display using wavelength-multiplexed holographic lenses that uses laser illumination to provide a large gamut and highly saturated color. We show experimentally that our designs support resolutions expected of modern VR headsets and can scale to human visual acuity limits. Current limitations are identified, and we discuss challenges to obtain full practicality.

6.2 Conclusion

Lightweight, high resolution, and sunglasses-like VR displays may be the key to enabling the next generation of demanding virtual reality applications that can be taken advantage of anywhere and for extended periods of time. We made progress towards this goal by proposing a new design space for virtual reality displays that combines polarization-based optical folding, holographic optics, and a host of supporting technologies to demonstrate full color display, sunglasses-like form factors, and high resolution across a series of hardware prototypes. Many practical challenges remain: we must achieve a full color display in a sunglasses-like form factor, obtain a larger viewing eye box, and work to suppress ghost images. In doing so, we hope to be one step closer to achieving ubiquitous and immersive computing platforms that increase productivity and bridge physical distance.

See the paper here:


Verizon Media Immersive Launches To Foster Next-Gen 5G, AR And VR Advertising Ecosystem

Today, Verizon Media announced the launch of a new extended reality (XR) toolset called Verizon Media Immersive, the largest XR platform for creating augmented, mixed, and virtual reality advertising and branded content with an emphasis on next-generation 5G experiences.

Verizon Media is a suite of brands that includes Yahoo, HuffPost, and TechCrunch, reaching nearly 900M unique monthly visitors worldwide. As a toolset, Verizon Media Immersive enables partners to create, distribute, and monetize XR content with authoring tools; a content library and search; and the ability to track performance through analytics. Through direct integration with Verizon Media Ad Platform, this branded content can then be deployed across different articles, search, and affiliate links in Verizon Media properties.

Because the platform incorporates a WebAR (sometimes referred to more broadly as “WebXR”) toolset, experiences created with VMI can be delivered directly from the browser, as opposed to needing to work exclusively from an app. Brands can integrate immersive experiences across the web, whether on desktop or mobile, opening new pathways for connecting with audiences.
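As a rough illustration of that browser-first delivery, the sketch below shows one way a page might pick between a full AR session and lighter fallbacks. The function and mode names are hypothetical and not part of Verizon Media Immersive; in a real page, the capability check would come from the WebXR Device API (e.g. `navigator.xr.isSessionSupported('immersive-ar')`).

```typescript
// Hypothetical sketch: choose how to deliver an XR ad experience
// based on what the visitor's browser supports. In a real page the
// two booleans would come from the WebXR Device API, e.g.:
//   const arSupported = await navigator.xr?.isSessionSupported('immersive-ar');
type DeliveryMode = 'immersive-ar' | 'inline-3d' | '2d-fallback';

function chooseDeliveryMode(hasWebXR: boolean, arSupported: boolean): DeliveryMode {
  if (hasWebXR && arSupported) return 'immersive-ar'; // full AR session
  if (hasWebXR) return 'inline-3d';                   // 3D content rendered in the page
  return '2d-fallback';                               // plain images or video
}

console.log(chooseDeliveryMode(true, true));   // "immersive-ar"
console.log(chooseDeliveryMode(false, false)); // "2d-fallback"
```

The graceful-degradation pattern is what lets the same campaign reach both headset users and ordinary desktop or mobile visitors.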

RYOT, which Verizon acquired in 2014 and which has since become the first and only 5G production studio in the U.S., was a driving force behind the development of Verizon Media Immersive. Drawing on the studio’s creative expertise across film/TV, VFX, and XR, Verizon Media intends the platform to let partners streamline content across delivery formats.

Of course, 5G plays a large role in Verizon Media Immersive’s functionality. Formats like volumetric capture and motion capture have traditionally been difficult to deliver across 4G LTE, but the advent of 5G will dramatically reduce the barrier to these mediums...

For more information, visit the official Verizon Media website.

See the full story here:


Punchdrunk is working in partnership with Niantic to develop the next generation of live experience.

Since the launch of Sleep No More, New York we’ve often heard audiences comparing the show to a game.

We believe that the future of interactive audience experience lies at the intersection of gaming and theatre.

Niantic’s Real World Platform will give us the opportunity to take our work outside of buildings and into the world around you. Across multiple projects, we want to bend the rules of genre and redefine the norms of mobile gaming. Our hope is that together, we can create something no one else can.

We can’t say more, but there is much to come.

See the full post here:


Measuring Augmented Reality Experiences

Dwell Time: “How long were you there?”

Engagement: “What did you do?”

Recall: “What did you see?”

Sentiment: “Did you have fun?”

To help you get started with measuring your experiences, we have recently updated our documentation with a new section on “Advanced Analytics,” beginning with how to integrate Google Analytics and Google Tag Manager into your WebAR projects.
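As a hypothetical sketch of what such an integration might look like, the snippet below shapes two of the metrics above as custom analytics events. The event and parameter names are made up for illustration, not an official schema; in a page with gtag.js loaded, each event would be sent with `gtag('event', name, params)`.

```typescript
// Illustrative only: package "dwell time" and "engagement" as custom
// analytics events. Names like 'ar_dwell' are invented here; in a live
// WebAR page you would dispatch each one via gtag('event', e.name, e.params).
interface ArEvent {
  name: string;
  params: Record<string, string | number>;
}

// Dwell time: how long the visitor stayed in the experience.
function dwellEvent(seconds: number): ArEvent {
  return { name: 'ar_dwell', params: { dwell_seconds: seconds } };
}

// Engagement: what the visitor did (tapped, rotated, shared, ...).
function engagementEvent(action: string): ArEvent {
  return { name: 'ar_engagement', params: { action } };
}

console.log(dwellEvent(42).params.dwell_seconds); // 42
console.log(engagementEvent('tap_model').name);   // "ar_engagement"
```

Keeping the event construction separate from the dispatch call makes the payloads easy to unit-test before wiring them to a real analytics property.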

See the full story here:


The startup taking on Apple and Snapchat in a mini-app war

Here's Dmitry Shapiro's big idea: apps that go inside your apps. Want to play a game inside a Reddit post, make a GIF without leaving the iMessage window, or pick theater seats with your friends? That's what Shapiro, his co-founder Sean Thielen, and their new company, Koji, want to do.

See the full story here:


Disney’s deepfake algorithm

Disney Research and ETH Zurich introduced an algorithm for automatic face-swapping in photos and videos, which has a high enough resolution for filmmaking. It's the first deepfake method with a megapixel resolution, meaning it's convincing enough to alter faces for TV and movies, according to the researchers.


  • The goal is to use the neural-network-based method to substitute actors' faces, such as in the de-aging process or when an actor has died.
  • Disney’s model can create deepfake videos with a 1024 x 1024 resolution, much higher than the open-source model DeepFaceLab's 256 x 256 pixels.
  • It's less time-consuming and expensive than Disney's current method for face-swapping, traditional VFX, which was used to re-create Carrie Fisher in Rogue One.
  • TechCrunch's Darrell Etherington described the results as "a lot less nightmare fuel" than other deepfake attempts, though he noted that the subjects in the sample video were primarily white.
  • The paper, “High-Resolution Neural Face Swapping for Visual Effects,” will be presented at this week's virtual 2020 Eurographics Symposium on Rendering.

See the full story here:


This German town replicated itself in VR to keep its tourism alive

Tourists may soon be able to explore the picturesque cross-timbered houses and historic churches of Herrenberg via virtual reality (VR), thanks to a digital twin developed with the High-Performance Computing Center Stuttgart (HLRS).

Building a digital twin

HLRS developed Herrenberg’s digital twin together with the Fraunhofer Institute, the University of Stuttgart, and Kommunikationsbüro Ulmer, starting with a concept called ‘space syntax’.

Dr. Fabian Dembski of HLRS said: “Just as the human skeleton provides a scaffolding for all of the other systems and functions of the human body, space syntax produces a 2D outline of physical grids in a city, offering a framework for performing spatial analysis, such as predicting the likely paths that car or pedestrian traffic might take to move from one point to another.”
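The path-prediction idea Dembski describes can be sketched on a toy street network. The graph below is invented for illustration; a breadth-first search then finds the shortest route a pedestrian might take between two points, which is the simplest form of the spatial analysis he mentions.

```typescript
// Illustrative sketch only: a made-up street network for a small town,
// with each node listing its directly connected neighbors.
const streets: Record<string, string[]> = {
  market: ['church', 'bridge'],
  church: ['market', 'school'],
  bridge: ['market', 'school'],
  school: ['church', 'bridge', 'station'],
  station: ['school'],
};

// Breadth-first search: returns the fewest-hops path from `from` to `to`.
function shortestPath(graph: Record<string, string[]>, from: string, to: string): string[] {
  const prev: Record<string, string> = {};
  const queue = [from];
  const seen = new Set([from]);
  while (queue.length > 0) {
    const node = queue.shift()!;
    if (node === to) {
      // Walk the predecessor chain back to the start.
      const path = [to];
      while (path[0] !== from) path.unshift(prev[path[0]]);
      return path;
    }
    for (const next of graph[node]) {
      if (!seen.has(next)) {
        seen.add(next);
        prev[next] = node;
        queue.push(next);
      }
    }
  }
  return []; // destination unreachable
}

console.log(shortestPath(streets, 'market', 'station'));
// [ 'market', 'church', 'school', 'station' ]
```

A real space-syntax model works on far richer data (street widths, visibility, land use), but the underlying framework is the same: a 2D graph of the city on which routes are computed and compared.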

We shouldn’t overestimate the technology, though, he noted. “Cities are not machines,” he commented. “A digital twin can be a great help in reducing complexity for cities, assessing measures at an early stage, and explaining interrelationships. But there are many aspects that are deeply human and cannot be reproduced in digital copies, such as culture, interpersonal relationships, joy, and happiness. A digital twin is a tool, not a solution.”

He added: “I also think it is important for cities to retain control over data and models,” urging co-operation between science, city administrations, businesses, and citizens.

See the full story here: