Philip Lelyveld - The world of entertainment technology

14 Dec 2017

VR and AR Expected to Further Redefine Experiences at CES

December 14, 2017

Virtual reality, augmented reality and immersive experiences have crossed over the apex of the hype curve and are now tracking the slope of enlightenment as they develop into niche market applications or pivot into location-based entertainment. Resolution, frame rate, horizontal and vertical field of view in HMDs (head-mounted displays), and head- and body-tracking technology have all improved over the last year, with further advances expected next month at CES 2018 in Las Vegas. This applies equally to next-generation HMDs, projection, and heads-up approaches to immersive experiences.

Audio, especially head-tracked spatial audio, got more respect in 2017, although monophonic or non-tracking 360 experiences are still common. Other control and sensory input technologies, such as eye- and hand-tracking control systems and haptic feedback solutions, also continue to improve, both adding creative options for experience design and muddying the waters of what will constitute a clearly messaged consumer product for immersive media experiences.


Embedded sensors communicating with either internal processors or cloud-based resources (geolocation, object/facial recognition, obstacle avoidance, and AI/personal assistants) are expanding the tools available to experience developers and blurring the distinction between the real and virtual worlds of work and play.

Location-based immersive experiences, from theme parks to small installations within arcades and purpose-built attractions in storefronts, received a significant amount of funding and support in 2017, especially in Asia. Press coverage of IAAPA Attractions Expo 2017, the annual out-of-home amusement conference, highlighted both VR and AR overlays on traditional rides and mixed reality experiences incorporating escape rooms, puzzle solving, and immersive experiences.

Hologate, the creator of turnkey multiplayer VR experiences, sold more than 60 units of its easily maintained four-player experience at IAAPA.

The number of funded startups and established companies in both the VR and AR ecosystems continues to grow, implying sustained confidence in the future of the industry. The press was filled with announcements of advances in cameras, photogrammetry tools, volumetric and lightfield image capture and processing, audio and video editing, and game engine tools.

Price cuts and product bundles drove an increase in integrated HMD sales in the second half of 2017. One million HMDs were shipped in Q3: Sony accounted for 49 percent of those, followed by Oculus (21 percent) and HTC Vive (16 percent). Those numbers are separate from mobile HMD sales. For example, Samsung Gear VR has sold well over 5 million units in the past year.


A large number of key players, with the notable exception of Apple, are involved in the OpenXR effort run by the Khronos Group to standardize the way AR and VR apps communicate with HMDs. Microsoft joined OpenXR in November.

AREA, the Augmented Reality for Enterprise Alliance, released functional requirements for AR devices that could serve as a good template for entertainment guidelines. Many other standardization efforts, including the IEEE’s, are still at the early stage of drafting big-picture statements of scope before drilling down to specific recommendations.

The market, technology and art of immersive experiences are still immature, so it is not yet clear what should be standardized and what should remain a source of competitive advantage.

Immersion and engagement are the keys to driving adoption of VR and AR, but they must be built on a good story-world. Alejandro Iñárritu was awarded a special Oscar for illustrating how this could be done in a site-specific way with his “Carne y Arena” (Virtually Present, Physically Invisible) installation piece at the Los Angeles County Museum of Art.

A potentially mass-market example is “Wolves in the Walls” from the former Oculus Story Studio team. The work is getting strong buzz for the way it uses game engine technology to support compelling interactive moments that don’t distract from the overall narrative.

This year saw growth in the number of VR, AR and immersive experience competitions at existing festivals, as well as festivals dedicated to immersive experiences. The 2018 edition of New Frontier at the Sundance Film Festival is a collection of mixed media performance pieces, AI-enabled experiences, multisensory interactive experiences, and mobile VR/360 documentaries.

The heat that was on VR and AR last year has moved on to Bitcoin, blockchain and cryptocurrencies this year, and there are efforts to use blockchain to enable new entertainment opportunities and business models.

StreamSpace, an Austin, Texas startup, is building a platform for secure video distribution using blockchain. StreamSpace is funding its business build-out with an ICO (Initial Coin Offering).


Similarly, OTOY has announced the Render Token, a blockchain-based currency that underpins a distributed GPU network for rendering lightfield image files. Buyers will be able to redeem Render Tokens for rendering time in the future, with the hope that the coin, and therefore the amount of computing it buys, will appreciate over time.

ICOs are emerging as a way to fund the build-out of pieces of the immersive media ecosystem ahead of, and in support of, the developing market for immersive experiences.

For more information on CES 2018 (#CES2018), visit the event’s official website or its Facebook page. If you plan on attending, you can save $200 when registering by December 18. The ETCentric community should also be interested in C Space at CES, which examines “disruptive trends and how they are going to change the future of brand marketing and entertainment.”

See the original post here: http://www.etcentric.org/vr-and-ar-expected-to-further-redefine-experiences-at-ces/

12 Dec 2017

A Team of MIT Scientists Taught an AI to Get Emotional Over Movies

INSIDE OUT – Pictured: Joy. ©2015 Disney•Pixar. All Rights Reserved.

“We developed machine-learning models that rely on deep neural networks to ‘watch’ small slices of video—movies, TV, and short online features—and estimate their positive or negative emotional content by the second,” the team wrote in a blog post Monday morning.

The approach paid attention not just to the general plot line of a movie, but also to more subtle aspects, including the score and close-ups of a person’s face. Using these clues, the project’s machine learning algorithms were able to identify positive and negative emotions, and map out the extent to which each scene would provoke emotional responses, something the researchers called “visual valence.”
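
The team has not released its code, but the pipeline described (sample the video once per second, score each slice for positive or negative emotion) can be sketched in a few lines. The following is a minimal illustration assuming a pretrained image backbone and a hypothetical regression head trained on emotion-labeled clips; it is not the MIT model:

    import torch
    import torch.nn as nn
    from torchvision import models, transforms

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    backbone = models.resnet18(pretrained=True)   # stand-in feature extractor
    backbone.fc = nn.Identity()                   # keep 512-d features, drop classifier
    backbone.eval()

    valence_head = nn.Linear(512, 1)              # hypothetical head; would need training

    def valence_per_second(frames):
        """frames: a list of PIL images, one sampled per second of video."""
        scores = []
        with torch.no_grad():
            for frame in frames:
                features = backbone(preprocess(frame).unsqueeze(0))
                scores.append(torch.tanh(valence_head(features)).item())
        return scores   # one score per second, -1 (negative) to +1 (positive)

A real system would also fold in the score and dialogue (the researchers mention audio cues), but frame-level valence alone already captures the per-second emotional arc idea.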

...the fact that artificial intelligence can be used to analyze the emotional arc of a movie, and with that predict an audience response, could be profound for Hollywood, the researchers argued.

See the full story here: http://variety.com/2017/digital/news/ai-emotional-arcs-mit-mckinsey-1202635570/

12 Dec 2017

AI-Assisted Fake Porn Is Here and We’re All Fucked

The video was created with a machine learning algorithm, using easily accessible materials and open-source code that anyone with a working knowledge of deep learning algorithms could put together.

It's especially striking considering that it's allegedly the work of one person—a Redditor who goes by the name 'deepfakes'—not a big special effects studio that can digitally recreate a young Princess Leia in Rogue One using CGI. Instead, deepfakes uses open-source machine learning tools like TensorFlow, which Google makes freely available to researchers, graduate students, and anyone with an interest in machine learning.

Like the Adobe tool that can make people say anything, and the Face2Face algorithm that can swap a recorded video with real-time face tracking, this new type of fake porn shows that we're on the verge of living in a world where it's trivially easy to fabricate believable videos of people doing and saying things they never did. Even having sex.
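
The article doesn't detail the method, but face swaps of this kind are typically built from two autoencoders that share a single encoder: the shared encoder learns facial structure in general, each decoder learns to reconstruct one specific person, and the swap comes from pushing person A's face through person B's decoder. Below is a rough Keras sketch of that idea; the layer sizes and losses are illustrative assumptions, not the actual 'deepfakes' code:

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    def make_encoder():
        inp = layers.Input(shape=(64, 64, 3))
        x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(inp)
        x = layers.Conv2D(128, 5, strides=2, padding="same", activation="relu")(x)
        z = layers.Dense(256)(layers.Flatten()(x))
        return Model(inp, z)

    def make_decoder():
        z = layers.Input(shape=(256,))
        x = layers.Dense(16 * 16 * 128, activation="relu")(z)
        x = layers.Reshape((16, 16, 128))(x)
        x = layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu")(x)
        out = layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid")(x)
        return Model(z, out)

    encoder = make_encoder()      # shared: learns faces in general
    decoder_a = make_decoder()    # learns to reconstruct person A
    decoder_b = make_decoder()    # learns to reconstruct person B

    auto_a = Model(encoder.input, decoder_a(encoder.output))
    auto_b = Model(encoder.input, decoder_b(encoder.output))
    auto_a.compile(optimizer="adam", loss="mae")
    auto_b.compile(optimizer="adam", loss="mae")

    # Train auto_a on face crops of person A and auto_b on crops of person B.
    # To swap, encode a frame of person A and decode it with decoder_b.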

Artificial intelligence researcher Alex Champandard told me in an email that a decent, consumer-grade graphics card could process this effect in hours, but a CPU would work just as well, only more slowly, over days.

Ethically, the implications are “huge,” Champandard said. Malicious use of technology often can’t be avoided, but it can be countered.

“We need to have a very loud and public debate,” he said. “Everyone needs to know just how easy it is to fake images and videos, to the point where we won't be able to distinguish forgeries a few months from now.”

Champandard said researchers can then begin developing technology to detect fake videos and help moderate what’s fake and what isn’t, and internet policy can improve to regulate what happens when these types of forgeries and harassment come up.

“In a strange way,” this is a good thing, Champandard said. “We need to put our focus on transforming society to be able to deal with this.”

See the full story here: https://motherboard.vice.com/en_us/article/gydydm/gal-gadot-fake-ai-porn

11 Dec 2017

Artificial Intelligence Seeks an Ethical Conscience

Kate Crawford’s good-humored talk featured nary an equation and took the form of an ethical wake-up call. She urged attendees to start considering, and finding ways to mitigate, accidental or intentional harms caused by their creations. “Amongst the very real excitement about what we can do there are also some really concerning problems arising,” Crawford said.

One such problem occurred in 2015, when Google’s photo service labeled some black people as gorillas. More recently, researchers found that image-processing algorithms both learned and amplified gender stereotypes. Crawford told the audience that more troubling errors are surely brewing behind closed doors, as companies and governments adopt machine learning in areas such as criminal justice and finance. “The common examples I’m sharing today are just the tip of the iceberg,” she said. In addition to her Microsoft role, Crawford is also a cofounder of the AI Now Institute at NYU, which studies social implications of artificial intelligence.

On Thursday, Victoria Krakovna, a researcher from Alphabet’s DeepMind research group, is scheduled to give a talk on “AI safety,” a relatively new strand of work concerned with preventing software from developing undesirable or surprising behaviors, such as trying to avoid being switched off.

Krakovna’s talk is part of a one-day workshop dedicated to techniques for peering inside machine-learning systems to understand how they work—making them “interpretable,” in the jargon of the field.

“A lot of decisions about the future of this field cannot be made in the disciplines in which it began,” says Terah Lyons, executive director of Partnership on AI, a nonprofit launched last year by tech companies to mull the societal impacts of AI. (The organization held a board meeting on the sidelines of NIPS this week.) She says companies, civic-society groups, citizens, and governments all need to engage with the issue.

See the full story here: https://www.wired.com/story/artificial-intelligence-seeks-an-ethical-conscience/

11 Dec 2017

A Tiny New Chip Could Secure the Next Generation of IoT

The Project Sopris microcontroller prototype is designed to incorporate what Microsoft terms the "Seven Properties of Highly Secure Devices," a common-sense melange of best practices. It includes the usual suspects, like enabling regular software updates and requiring devices to store cryptographic keys in a secure part of the hardware. Hunt says they built the chip with “recognition that you build in security and then you also have to have mechanisms so that if in the future hackers get more clever, you are able to—without the consumer doing anything—be able to update and improve the security on the device.”

Stuffing so many elements onto a microcontroller asks a lot of such a tiny processor, so the Sopris chip includes a secondary security processor that handles much of the cryptographic overhead. That specialized processor also does periodic software audits to check for deviations or any misbehavior. If it finds something, it can reset individual processes—or the whole device—as needed.
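
Microsoft's paper describes the behavior rather than publishing code, but the audit-and-reset pattern (measure each software component against a known-good value, restart whatever deviates, escalate to a full device reset) is simple to sketch. Everything below is a hypothetical Python illustration; the real logic lives in the security processor's firmware:

    import hashlib
    import time

    KNOWN_GOOD = {                     # digests provisioned at build/update time
        "net_stack": "expected_sha256_digest_1",
        "app_main": "expected_sha256_digest_2",
    }

    def measure(image_bytes):
        return hashlib.sha256(image_bytes).hexdigest()

    def audit(read_image, reset_process, reset_device):
        deviations = 0
        for name, expected in KNOWN_GOOD.items():
            if measure(read_image(name)) != expected:
                reset_process(name)    # first response: reset just that process
                deviations += 1
        if deviations == len(KNOWN_GOOD):
            reset_device()             # everything off-baseline: reset the device

    def audit_loop(read_image, reset_process, reset_device, period_s=60):
        while True:                    # the periodic check the article describes
            audit(read_image, reset_process, reset_device)
            time.sleep(period_s)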

This type of mechanism matters, because many IoT devices—think routers, connected printers—are essentially on all the time.

See the full story here: https://www.wired.com/story/project-sopris-iot-security/

The Microsoft paper (pdf) is here: https://www.microsoft.com/en-us/research/wp-content/uploads/2017/03/SevenPropertiesofHighlySecureDevices.pdf 

11 Dec 2017

China’s iQiyi bets on digital girlfriend to boost virtual reality headset sales

For iQiyi, Vivi assists users in accessing the company’s content based on their viewing history. The avatar can also answer simple queries, such as the time, weather or scheduled television shows, and even help complete missing portions of a poem.

And for those so inclined, Vivi can flirt, compliment a user’s looks and act coy when asked about her age. Because it is VR, a user can “touch” Vivi, who would giggle, act playful or pretend to be angry as part of the interaction.

“I’m already a middle-aged man, and if I like it, I’m sure younger people would like it too,” Ma said. “If a nerd wants to see her dance, he can order her to, and she would go into dancer mode.”

The company declined to release sales figures for the headset, which bears the Qiyu brand. It sells for 3,499 yuan (US$529) in China, compared with 3,999 yuan for Taiwan-based HTC’s Vive Focus.

The Qiyu headset did not rank among the top five VR headset brands in China in the second quarter, a market led by Shanghai-based Deepoon VR, HTC and Japan’s Sony, according to industry research group Canalys.

See the full story here: http://www.scmp.com/tech/social-gadgets/article/2123599/chinas-iqiyi-bets-digital-girlfriend-boost-virtual-reality

11 Dec 2017

Chapter 1 – Artificial Realities as Data Visualization Environments: Problems and Prospects

Publisher Summary

This chapter describes ways in which artificial realities can enable one to deal more effectively with data. Visualization is one of the best hopes for making more effective use of data. The goal of visualization is to represent data in ways that make them perceptible and thus able to engage human sensory systems. The three nonexclusive ways in which visualization can help one in using and interpreting data are: (1) selective emphasis, (2) transformation, and (3) contextualization. Nonvisual data can be transformed into a visual image by mapping its values into visual characteristics. A system takes on the aura of artificial reality as it exhibits an increasingly tight coupling between an expanded range of input and a broader range of feedback options. In conventional graphic user interfaces, users are restricted to a keyboard and a single-point input device such as a mouse, with visual feedback and generally no sonic feedback beyond a system beep or two.
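
As a toy illustration of that mapping idea, a handful of nonvisual values (temperature readings, say) can be turned into an image by binding each value to position, color, and size; the data below are made up:

    import matplotlib.pyplot as plt

    temps = [61, 64, 70, 85, 92, 66, 73]      # made-up sensor readings

    plt.scatter(range(len(temps)), temps,
                c=temps, cmap="coolwarm",     # value -> color (cool blue to hot red)
                s=[t * 3 for t in temps])     # value -> marker size (selective emphasis)
    plt.xlabel("sensor")
    plt.ylabel("temperature (F)")
    plt.title("Nonvisual values mapped to visual characteristics")
    plt.show()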

See the full story here: http://www.sciencedirect.com/science/article/pii/B978012745045250009X

11 Dec 2017

Google is Developing a VR Display With 10x More Pixels Than Today’s Headsets

[Philip Lelyveld note: Clay Bavor also mentioned that 20 ms latency or less must be achieved.]

At 20 megapixels per eye, this is beyond Michael Abrash’s prediction of 4Kx4K per eye displays by the year 2021.

“I’ve seen these in the lab, and it’s spectacular. It’s not even what we’re going to need in the ‘final display,’” he said, referring to the sort of pixel density needed to match the limits of human vision, “but it’s a very large step in the right direction.”

Bavor went on to explain the performance challenges of 20 MP per eye at 90-120 fps, which works out to unreasonably high data rates of 50-100 Gb/sec. He briefly described how foveated rendering, combined with eye tracking and other optical advancements, will allow for more efficient use of such super-high-resolution VR displays.
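
That bandwidth figure is easy to sanity-check. Assuming uncompressed 24-bit RGB (the talk's exact assumptions aren't stated), the arithmetic lands in the same range Bavor cites:

    pixels_per_eye = 20e6      # 20 megapixels
    eyes = 2
    bits_per_pixel = 24        # assumed: 8 bits per RGB channel, no compression

    for fps in (90, 120):
        gbps = pixels_per_eye * eyes * bits_per_pixel * fps / 1e9
        print(f"{fps} fps: {gbps:.0f} Gb/sec")
    # 90 fps: 86 Gb/sec; 120 fps: 115 Gb/sec. Chroma subsampling or modest
    # compression brings this down into the quoted 50-100 Gb/sec range.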

See the full story with a 30 minute video here: https://www.roadtovr.com/google-developing-vr-display-10x-pixels-todays-headsets/

11 Dec 2017

Gaiman-Inspired ‘Wolves in the Walls VR’ to Premiere at Sundance

Wolves in the Walls VR pulls users into a digital storybook, where they must help a girl named Lucy discover what’s hiding in the walls of her house. The story touches on themes like how children are often the most observant and least believed members of our families, as well as modern social issues and diversity. At Sundance, “Chapter 1” will introduce festival attendees to an ambitious new level of interactivity, which casts the experience’s viewer as one of the characters.

“Early VR narratives were about adapting what we knew from other formats. But VR is coming into its own now. We always ask ourselves, why VR? What can we do that is only possible in this medium?” said Billington. “Good story is really about great characters, so that’s where we invested all of our energy … Subjective POV [with VR cinematography] allows the audience to see the world the way that Lucy feels the world. We can put you inside Lucy’s imagination, her drawings, her emotions.”

After graduating from the Oculus VR short Henry, the team wanted to plumb new depths of character interactivity. They met with game developers and other immersive storytellers to learn the fine points of creating compelling interactive moments that don’t distract from the overall narrative. Wolves’ Lucy is engaging and emotive, changing her performance based on user action. She keeps track of your movements, exchanges objects, confides in you, and molds her responses as she tracks your actions through the story.

Executive producer Saschka Unseld added: “No one has yet cracked what the promise of storytelling in VR is: How to organically combine a compelling and emotional story with interactive worlds and characters. Wolves in the Walls will be exactly that.”

See the full story here: http://www.animationmagazine.net/top-stories/gaiman-inspired-wolves-in-the-wall-vr-to-premiere-at-sundance/

9 Dec 2017

Vuzix To Supply Augmented Reality Smart Glasses To Toshiba

Vuzix Corporation has inked a deal with Toshiba in which the American technology firm will supply its custom M300 smart glasses to the Japanese conglomerate over a three-year term. Under the agreement, Vuzix will deliver its augmented reality (AR) smart glasses exclusively to Toshiba for up to 12 months after the Tokyo-based company submits purchase orders worth at least $5 million. The Vuzix-powered smart glasses are slated to carry the Toshiba brand and to be sold globally. Vuzix also said that the smart glasses will ship with a mobile computing system to accompany the head-mounted device.

Other than Toshiba, Vuzix has previously entered into agreements with other tech companies that wanted to grow their stake in the smart glasses segment. Last August, the company formed an AR smart glasses partnership with BlackBerry, which also marked the Canadian multinational’s first foray into the wearables category.

See the full story here: https://www.androidheadlines.com/2017/12/vuzix-supply-augmented-reality-smart-glasses-toshiba.html