philip lelyveld The world of entertainment technology


Google’s AI learns how actions in videos are connected

...scientists at Google propose Temporal Cycle-Consistency Learning (TCC), a self-supervised AI training technique that taps “correspondences” between examples of similar sequential processes (like weight-lifting repetitions or baseball pitches) to learn representations well-suited for temporal video understanding. The codebase is available in open source on GitHub.

As the researchers explain, footage that captures certain actions contains key common moments — or correspondences — that exist independent of factors like viewpoint changes, scale, container style, or the speed of the event. TCC attempts to find such correspondences across videos by leveraging cycle-consistency.

...Moreover, they say that it can transfer metadata (like temporal semantic labels, sound, or text) associated with any frame in one video to its matching frame in another video, and that each frame in a given video could be used to retrieve similar frames by looking up the nearest neighbors in the embedding space.
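As a toy illustration of the cycle-consistency idea described above (plain NumPy, not Google's actual TCC codebase, and with made-up two-dimensional embeddings): a frame in one video is mapped to its nearest neighbor in a second video's embedding space, then back again; frames that "cycle" back to themselves are the correspondences the training objective encourages.

```python
import numpy as np

def nearest_neighbor(query, candidates):
    """Index of the candidate embedding closest to `query` (Euclidean)."""
    return int(np.argmin(np.linalg.norm(candidates - query, axis=1)))

def cycle_consistent_frames(emb_a, emb_b):
    """For each frame i of video A, map to its nearest frame in B and back.

    Returns the frame indices of A that cycle back to themselves --
    the kind of cross-video correspondence TCC's loss encourages.
    """
    consistent = []
    for i, frame in enumerate(emb_a):
        j = nearest_neighbor(frame, emb_b)       # A -> B
        k = nearest_neighbor(emb_b[j], emb_a)    # B -> A
        if k == i:
            consistent.append(i)
    return consistent

# Toy embeddings: two "videos" of 4 frames each in a 2-D embedding space.
emb_a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
emb_b = np.array([[0.1, 0.0], [1.1, 0.0], [2.1, 0.0], [3.1, 0.0]])
print(cycle_consistent_frames(emb_a, emb_b))  # -> [0, 1, 2, 3]
```

With a well-trained embedding, the same nearest-neighbor lookup is what lets metadata attached to one frame be transferred to its matching frame in another video.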

See the full story here:


Quantum system virtually cooled to half of its actual temperature

As the researchers explained, the results are based on the idea that there is a strong connection between temperature and quantum entanglement.

"A modern perspective in physics is that temperature is an emergent property of quantum entanglement," Cotler told "In other words, certain patterns of quantum entanglement give rise to the familiar notion of temperature. By purposefully manipulating the pattern of entanglement in a system, we can gain access to lower temperatures. While these remarkable ideas were previously understood theoretically, we figured out how to implement them experimentally."

"We may be able to use quantum virtual cooling to 'cross' what are called finite-temperature phase transitions," Cotler said. "This seems quite bizarre—it would be like taking two glasses of liquid water, and by making a quantum measurement, you learn about the properties of solid ice.

Due to their quantum properties, quantum simulators can perform certain tasks like this that are out of the reach of classical computers, which cannot leverage quantum entanglement and superposition.

See the full story here:


AI used to test evolution’s oldest mathematical model

The researchers found that different butterfly species act both as model and as mimic, 'borrowing' features from each other and even generating new patterns.

"We can now apply AI in new fields to make discoveries which simply weren't possible before," said lead author Dr. Jennifer Hoyal Cuthill from Cambridge's Department of Earth Sciences. "We wanted to test Müller's theory in the real world: did these species converge on each other's wing patterns and if so how much? We haven't been able to test mimicry across this evolutionary system before because of the difficulty in quantifying how similar two butterflies are."

See the full story here:


Google Maps Now Offers Augmented Reality Directions

In a nutshell, Live View displays virtual signs in the real world. So, you hold your phone in the air, and Google Maps will direct you to your destination using signs hanging in mid-air. This only works while walking for obvious reasons.

See the full story here:


Will Smith, Robert De Niro and the Rise of the All-Digital Actor

A believable, fully digital human is still considered among the most difficult tasks in visual effects. "Digital humans are still very hard, but it's not unachievable. You only see that level of success at the top-level companies," explains Chris Nichols, a director at Chaos Group Labs and key member of the Digital Human League, a research and development group. He adds that this approach can be "extraordinarily expensive. It involves teams of people and months of work, research and development and a lot of revisions. They can look excellent if you involve the right talent."

The VFX team must first create the "asset," effectively a movable model of the human. Darren Hendler, head of VFX house Digital Domain's digital human group, estimates that this could cost from $500,000 to $1 million to create. Then, he suggests, producers could expect to pay anywhere from $30,000 to $100,000 per shot, depending on the individual requirements of the performance in the scene.

More often, filmmakers use what has been broadly described as "digital cosmetics," which can be thought of as digital makeup applications — for instance, removing wrinkles for smoother skin. This means that age is becoming less of an issue when casting an actor. "It's safer and cheaper than plastic surgery," notes Nichols. Marvel's Avengers: Endgame involved the creation of roughly 200 such de-aging shots, with work on actors such as Robert Downey Jr. and Chris Evans, to enable its time-traveling story.

AI and machine learning, and the related category known as generative adversarial networks (GANs), a type of neural network, could advance this area even further. "I wouldn't be surprised if The Irishman and Gemini Man are the last fully digital human versions that don't use some sort of GANs as part of the process," Hendler says, adding that de-aging techniques and digital humans could start to appear in more films, and not just those with Marvel-size budgets. "I think we'll start to see some of this used on smaller-budget shows."

See the full story here:


How Netflix Is Using Its Muscle to Push Filmmaking Technology Boundaries

If you have used the "Netflix calibrated mode" on your Sony or Panasonic TV or seen the "Netflix Recommended TV" logo in a consumer electronics display, you've had a glimpse of how the streaming giant has exercised its clout to become the most influential entertainment company in the technology field, pushing boundaries (and occasionally ruffling feathers). Netflix's size has allowed it to touch and influence everything from hardware and software development to industry display standards.

"Netflix has hired some of the top industry engineers out of the top postproduction houses in Hollywood and beyond. I don't know any other company that has reached out more and demonstrated more respect for the post community than Netflix … It has had a profound influence on everything we do."

Case in point: Netflix wanted its original content to be delivered in 4K resolution with the Dolby Vision brand of high dynamic range (Netflix now offers about 1,000 hours of HDR across its catalog) and Dolby Atmos brand of immersive sound. Broadcasters are still entrenched in 2K, and many acknowledge that, if not for Netflix, adoption of the advanced formats might have stagnated.

To make sure its content is being produced how it wants, the streamer in September launched a Netflix Post Technology Alliance with MTI, Adobe, Sony and others.

Netflix also is involved in industry standardization and development efforts. For instance, it recently joined the Academy of Motion Picture Arts and Sciences' Academy Software Foundation, a forum for open source software developers.

See the full story here:


Dan Carlin’s WWI VR Experience ‘War Remains’ Opens in Austin

Fans of Dan Carlin’s “Hardcore History” podcast can now explore the drama of World War I in a historical immersive-reality experience narrated by the podcaster. “War Remains,” which was produced by MWM Immersive with development by Flight School Studio and audio design by Skywalker Sound, opened its doors in Austin, Texas, on Monday.

The experience allows participants to explore a set with physical cues, including rumbling floors and wind machines, that is tied to a story playing in VR. It transports viewers to the Western Front battlefield of World War I, complete with dark and scary trenches under fire from advancing enemy troops.

“War Remains” is just the latest location-based VR experience from MWM Immersive, which also produced “Chained: A Victorian Nightmare,” a piece that combined VR with immersive theater. MWM Immersive executive producer Ethan Stearns said the company never worried that “War Remains” is darker than most other location-based VR experiences. “There are lots and lots of people who are trying to seek out deeper understanding of historical events,” he said. “To us, it made sense.”

See the full story here:


Snap, in augmented reality push, launches new Spectacles version

Spectacles 3, which will begin shipping in the fall, will cost $380, almost twice the $200 cost of the previous version.

It will have dual cameras to add depth and dimension to photos and videos. After uploading the content to the messaging app Snapchat, users can add new lighting, landscapes and three-dimensional effects to the images, Snap said.

The company added 13 million users in the second quarter, of which 7 million to 9 million were from the new AR lenses, Snap said.

Last week, Snap said it would raise $1.1 billion in debt to fund further investments in AR, content and possible acquisitions.

See the full story here:


mkt res: Has Sony Captured 30% of VR Hardware Revenues?

Sony is the market share leader for VR hardware revenues according to the latest report from Strategy Insights. Specifically, the firm pegs PSVR’s revenue share at 30 percent. That’s followed by Oculus (presumably all variants) at 25 percent and HTC (same) at 22 percent.

Collectively, the high end of the market accounts for 77 percent of revenues, compared with the receding class of mobile VR headsets. That includes Google (11 percent), Samsung (5 percent), and others (6 percent). Google saw the biggest year-over-year drop, from 21 percent to 11 percent.

So Quest and PSVR are essentially the betting favorites for the next year of VR’s market-share horse race. PSVR has better specs but less portability. Game libraries and switching cost favor PSVR, but Quest is catching up with 50 games out of the gate and 100 by year-end.

See the full story here:


Light Field Lab Raises $28 Million For Huge Holographic Displays

San Jose, California-based Light Field Lab will use the money to scale its display technology from prototype to product. The aim is to create holographic objects that appear to be three dimensional and float in space without head-mounted gear such as augmented reality or virtual reality goggles.

Jon Karafin, CEO of Light Field Lab, told me in an interview in November that he wants to bring real-world holographic experiences to life with up to hundreds of gigapixels of resolution, including modular video walls for live events and large-scale installations.

“The ultimate goal is to enable the things that we all think of in science fiction as the hologram,” Karafin said. “There’s a lot of things out there, but you know, they say that flying cars and holograms are the two things that science fiction hasn’t yet quite delivered. And we’re going to at least get that started.”

Light Field Lab’s technology re-creates what optical physics calls a “real image” for off-screen projected objects by generating a massive number of viewing angles that correctly change with the point of view and location just like in the real world. This is accomplished with a directly emissive, modular, and flat-panel display surface coupled with a complex series of waveguides that modulate the dense field of collimated light rays. With this implementation, a viewer sees around objects when moving in any direction such that motion parallax is maintained, reflections and refractions behave correctly, and the eyes freely focus on the items formed in mid-air. The result is that the brain says, “this is real,” without having any physical objects. In other words, Light Field Lab creates real holograms with no headgear.
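The geometry behind that “real image” can be sketched in a few lines: each point on the flat panel steers a collimated ray so that all rays converge at a point floating in front of the screen, which is why the image holds up as the viewer moves. This is a toy geometric illustration only — the emitter positions and target point below are made up, and it ignores the waveguide optics entirely.

```python
import numpy as np

def emitter_ray_directions(emitters, point):
    """Unit ray directions each flat-panel emitter must steer light along
    so that all rays converge at `point`, forming a floating 'real image'."""
    dirs = point - emitters
    return dirs / np.linalg.norm(dirs, axis=1, keepdims=True)

# Two emitters on a panel at z = 0; the hologram point floats 0.3 m in front.
emitters = np.array([[-0.1, 0.0, 0.0],
                     [ 0.1, 0.0, 0.0]])
target = np.array([0.0, 0.0, 0.3])

d = emitter_ray_directions(emitters, target)

# Trace each ray out from its emitter by the emitter-to-target distance:
# both rays arrive at the same point in mid-air, so a viewer anywhere in
# front sees the point at a position consistent with motion parallax.
t = np.linalg.norm(target - emitters, axis=1)
recon = emitters + d * t[:, None]
print(np.allclose(recon, target))  # True
```

Scaling the same idea from two emitters to a dense, modular panel is what produces the massive number of viewing angles the company describes.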

The company plans to assemble smaller holographic image components into very large images. Back in November, it showed me a two-inch see-through holographic image, produced within a six-inch-by-four-inch space that serves as its core building block. There’s no head-tracking, no motion sickness, and no latency in the display.

“We want to make sure that the way we roll our technologies out is starting with the really big thing,” Karafin said, “using large-scale, high-value entertainment experiences, showing something that is transformational that nobody has ever seen before.”

In addition to holographic displays, Light Field Lab’s technology includes the hardware and software platform required for content distribution.

“Verizon’s new 5G network features the higher bandwidth, low latency, and speed/throughput to deliver next generation content,” said Kristina Serafim, investment director at Verizon Ventures, in a statement. “Light Field Lab’s innovative solution will help build the 5G future for Verizon’s consumer, business, network, and media customers.”

See the full story here: