philip lelyveld The world of entertainment technology

18 Apr 2019

‘High Fidelity’ Shifts Focus Towards Non-VR Due to Slow Growth

Philip Rosedale, CEO of High Fidelity and founder of Second Life, recently announced at a High Fidelity community meeting that the company will be scaling back its VR efforts and focusing on improving its desktop performance on PC and Mac. The company has also shut down all first-party user spaces and associated servers except for a single orientation room, which will be reserved for new users.

High Fidelity is still allowing individual users to run their own spaces and servers, although refocusing efforts to flesh out its desktop client and encourage user-generated spaces undoubtedly comes as a cost-saving maneuver by the company.

As first reported by New World Notes, Rosedale’s reasoning behind the move comes down to one big linchpin: VR headset sales. Headsets simply aren’t getting into enough hands to make the company profitable in its current form.

“One thing to do, which all the companies have been doing, is better support for desktop users. Because any assessment of the rate of progress on HMDs is a sobering one; they’re not selling enough to create a general-purpose community that is both interesting and profitable,” Rosedale said at the meeting. “So, it’s really important to recognize, that through no fault of our collective selves, it’s not working. This model is not working right now. The flat world, that is an open building environment, is not compelling enough as it stands right now for the number of HMDs that are out there to get lift off. And so we’ve gotta think hard about that.”

“High Fidelity was designed to be a platform anticipating the very broad use of VR across the Internet for things like this—public meetings, going to work, going to school—doing all kinds of different things.”

Rosedale says that while they’ve done their best to get that aspect of the business started, the company feels it has been a mistake to take an active role in the community by supporting first-party spaces. This, he says, comes down to the company’s inability to manage ban lists and moderate users.

“Let’s actually add all [users from NeosVR, Anyland, and Rec Room] together into one product. That company will not survive. There’s not enough revenue,” he said at the community meeting. “Everybody here that’s having such a good time: you guys need to pay us $10,000 a month for us to keep the company going indefinitely into the future, for us to basically be a positive cash-flow company, as we say here in The Valley. And everybody else in VR right now is faced by that.”

See the full story here: http://www.virtualrealitypulse.com/edition/daily-oculus-google-2019-04-17?open-article-id=10262741&article-title=-high-fidelity--shifts-focus-towards-non-vr-due-to-slow-growth&blog-domain=roadtovr.com&blog-title=road-to-vr

18 Apr 2019

Fraunhofer demonstrates 5G virtual reality video streaming software

Known worldwide for its contributions to the MPEG format — the compression technology used in MP3 audio files and MP4 videos — Germany’s Fraunhofer has recently turned its attention to the next frontier in media: virtual reality. After unveiling affordable VR headset microdisplay hardware last year, the company is now showing off next-generation video compression software using the new MPEG-OMAF standard, the first VR specification enabling 360-degree videos to stream over 5G networks.

Based in “significant” part upon Fraunhofer video compression technologies, MPEG-OMAF breaks wraparound videos into grids of tiles encoded at multiple resolutions.

Unlike traditional videos, which stream from servers at one user-selected resolution, these VR videos dynamically use high-resolution tiles where the viewer is currently looking, and low-resolution tiles for parts that are out of sight. As the user’s head position changes, the headset or display device requests a different mix of streamed tiles optimized for the user’s current focus area.
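
To make the mechanism concrete, here is a minimal sketch of viewport-adaptive tile selection in TypeScript. The tile grid, the field-of-view threshold, and the two-level “high”/“low” split are illustrative assumptions; MPEG-OMAF itself supports more elaborate configurations.

```typescript
// Illustrative sketch of viewport-adaptive tile selection (assumed
// two-level grid; not the MPEG-OMAF reference implementation).

interface Tile {
  yawCenter: number;   // horizontal center of the tile, in degrees
  pitchCenter: number; // vertical center of the tile, in degrees
}

// Angular distance between two yaw angles, wrapped to [0, 180] degrees.
function yawDistance(a: number, b: number): number {
  const d = Math.abs(a - b) % 360;
  return d > 180 ? 360 - d : d;
}

// Request high-resolution streams for tiles inside the viewer's field of
// view and low-resolution streams for everything out of sight.
function selectTileResolutions(
  tiles: Tile[],
  viewerYaw: number,
  viewerPitch: number,
  fovDegrees = 110
): Map<Tile, "high" | "low"> {
  const half = fovDegrees / 2;
  const plan = new Map<Tile, "high" | "low">();
  for (const tile of tiles) {
    const inView =
      yawDistance(tile.yawCenter, viewerYaw) <= half &&
      Math.abs(tile.pitchCenter - viewerPitch) <= half;
    plan.set(tile, inView ? "high" : "low");
  }
  return plan;
}
```

Each time the head pose changes, the player would re-run this selection and swap in the corresponding pre-encoded tile streams.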

This trick enables the entire 360-degree video to continue streaming while devoting maximum detail to whatever the user is viewing. It parallels the recent use of foveated rendering to maximize real-time 3D graphics performance for VR users, and it ensures that viewers who move their heads will always see something in their peripheral vision, even if at lower fidelity.

Fraunhofer is demonstrating the new technology using a combination of JavaScript, Apple’s Safari web browser, the WebGL API for rendering, and HEVC video support; a technical video is available here. Source code for the JavaScript player and instructions on creating standards-compliant content are available now on GitHub.

See the full story here: https://venturebeat.com/2019/04/17/fraunhofer-demonstrates-5g-virtual-reality-video-streaming-software/

18 Apr 2019

UC Davis to study whether virtual reality can help kids with ADHD navigate reality

Schweitzer, a UC Davis researcher, was looking for a way for kids to practice focusing that is accessible and doesn’t require going to a clinic. VR became the perfect solution because it immerses kids in realistic situations.

HOW IT WORKS

Researchers will send kids home with a VR headset and a phone programmed to put them through 25-minute daily training sessions in a virtual classroom. The idea is that if kids with ADHD are repeatedly exposed to distractions, they will become accustomed to them and therefore be less likely to lose focus when they encounter distractions in a real classroom, Schweitzer said.

During the training, kids will feel as though they are sitting in a classroom chair looking at a whiteboard. They will be asked to perform attention-demanding tasks, such as math problems, which will appear on the whiteboard.

But the classroom they are in will feature distractions like a loud bus driving by the window, kids talking and a teacher walking by with loud, clicking shoes. The kids in the classroom are always moving, even when they aren’t performing a specific distracting action, which makes the experience feel more like a real classroom.
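
Purely as illustration, a training session like the one described might randomize its distraction events along a 25-minute timeline. The event names, counts, and scheduling below are assumptions; the article does not describe the study’s actual software.

```typescript
// Toy sketch of a randomized distraction schedule for one 25-minute
// session. All names and numbers here are illustrative assumptions.

const SESSION_MINUTES = 25;

const DISTRACTIONS = [
  "bus drives past the window",
  "classmates whisper",
  "teacher walks by with clicking shoes",
];

interface ScheduledEvent {
  minute: number; // when the distraction fires, in minutes from start
  event: string;
}

// Build a chronologically ordered, randomized timeline of distractions.
function buildSessionTimeline(eventsPerSession = 8): ScheduledEvent[] {
  const timeline: ScheduledEvent[] = [];
  for (let i = 0; i < eventsPerSession; i++) {
    timeline.push({
      minute: Math.random() * SESSION_MINUTES,
      event: DISTRACTIONS[Math.floor(Math.random() * DISTRACTIONS.length)],
    });
  }
  return timeline.sort((a, b) => a.minute - b.minute);
}
```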

See the full story here: https://www.sanluisobispo.com/news/state/california/article229377014.html

18 Apr 2019

8th Wall Brings Image Targets to Its Web-Based Augmented Reality Platform

Release 11 of 8th Wall Web brings Image Targets, the company's take on image recognition for AR activations. The capability enables creators to define a 2D image as a marker and embed AR content, such as 3D models or video, which can be accessed through mobile web browsers on iOS and Android.

"Unlike other web-based image recognition technology, the image detection and tracking for 8th Wall Web is all performed directly on-device, in the mobile browser," the company announced in a blog post.
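
As a rough illustration of the developer-facing pattern, a web page might listen for a detection event raised by the AR engine and anchor content at the reported pose. The event name, payload shape, and showOverlay helper below are hypothetical; they are not 8th Wall’s documented API.

```typescript
// Hypothetical sketch: reacting to an image-target detection in the
// browser. Event name, payload, and showOverlay() are assumptions.

interface ImageTargetDetail {
  name: string;                                  // which registered marker was found
  position: { x: number; y: number; z: number }; // marker pose in world space
}

// Placeholder for attaching AR content (a 3D model or video) at the pose.
function showOverlay(target: ImageTargetDetail): void {
  console.log(`Anchoring content on "${target.name}" at`, target.position);
}

// Listen for detections raised by the AR engine running in the page.
window.addEventListener("imagetargetfound", (e: Event) => {
  showOverlay((e as CustomEvent<ImageTargetDetail>).detail);
});
```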

While Apple and Google both offer their own web AR options, Apple’s AR Quick Look only works on iOS and Google’s web-based capability for ARCore is still experimental, so 8th Wall maintains an advantage in being cross-platform.

See the full story here: https://mobile-ar.reality.news/news/8th-wall-brings-image-targets-its-web-based-augmented-reality-platform-0196339/

17 Apr 2019

Paul Allen’s legacy includes a virtual reality “Holodome”

The centerpiece of the dome movement is Holodome, which was a pet project of the late Microsoft co-founder.

  • At TED, Vulcan debuted 2 new experiences: one is a live-action film that takes you to the top of Mount Everest; the other steps you inside the Impressionist works of Claude Monet, placing you in the artist's world.
  • The video is not just all around you, but also above you and at your feet.

My thought bubble: Both new exhibits do exactly what good VR should — that is, convincingly take you to a place you couldn't go, somewhere either inaccessible, like Everest, or unreal, as with Monet.

Details: At the heart of the technology are 4 very-high-resolution projectors.

  • There are 2 Holodomes already in the wild: one used by consumers at the Museum of Pop Culture (MoPOP) in Seattle and another in Venice, California, where Vulcan has been showing the technology to creators.
  • At MoPOP, 40,000 consumers have been through the Holodome since last May.

See the full story here: https://www.axios.com/paul-allen-legacy-virtual-reality-holodome-e86d85d4-fb47-415a-8fa8-7349390b807b.html

17 Apr 2019

Virtual Reality Experience Takes Audience Back to 10,000 BC at This Year’s Tribeca Film Festival, April 24-May 5

The film experience moves forward while looking backward with the U.S. premiere of CAVE, a shared virtual-reality experience that transports audiences back thousands of years, April 24 through May 5 at the 2019 Tribeca Film Festival.

In this multi-faceted virtual reality experience—co-created by Ken Perlin, Kris Layng, and Sebastian Herscher at NYU’s Future Reality Lab—viewers journey to 10,000 BC, when stories were told around a campfire and the history of our ancestors was written on the walls of caves.

The piece whisks its audience into the past and drops them into a dilemma faced by Ayara, a young woman who is struggling to decide whether to accept her role as her tribe’s only emissary to the spirit world.

CAVE was designed from the ground up to challenge the status quo of how audiences collectively experience immersive arts and entertainment. The coming-of-age tale is told using the cutting-edge Parallux system, a fundamentally new kind of shared VR technology that allows virtual experiences to be shared by many people in the same location.

Unlike in conventional 360-degree VR, viewers see and hear the story—and one another—from a unique point of view within the same virtual environment, letting them feel as physically present in the shared world as they would at a live theater or concert event.

See the full story here: https://www.newswise.com/articles/virtual-reality-experience-takes-audience-back-to-10-000-bc-at-this-year-s-tribeca-film-festival-april-24-may-5

17 Apr 2019

AI Robot paints its own moonscapes in traditional Chinese style

https://www.reuters.tv/v/PF67/2019/04/17/insight-the-ai-robot-with-unique-artistic-flair

A.I. Gemini takes an average of 50 hours to create a blend of landscapes on traditional, fresh xuan paper made from bark and rice straw. The average price for a piece on sale in London is £10,000 ($13,000).

Artist Victor Wong designed the robot to use the ancient Chinese art of shuimo to create its paintings, using mainly black ink and water.

Randomness has been written into its algorithm, meaning Wong does not know what it will paint before it begins.

See the full story here: https://www.reuters.com/article/us-tech-robot-art/ai-robot-paints-its-own-moonscapes-in-traditional-chinese-style-idUSKCN1RS1MU

17 Apr 2019

Lightelligence releases prototype of its optical AI accelerator chip

Accelerator chips that use light rather than electrons to carry out computations promise to supercharge AI model training and inference. In theory, they could process algorithms at the speed of light — dramatically faster than today’s speediest logic-gate circuits — but so far, light’s unpredictability has foiled most attempts to emulate transistors optically.

Boston-based Lightelligence, though, claims it’s achieved a measure of success with its optical AI chip, which today debuts in prototype form. It says that latency is improved up to 10,000 times compared with traditional hardware, and it estimates power consumption at “orders of magnitude” lower.

The chip in question — which is about the size of a printed circuit board — packs photonic circuits similar to optical fibers that transmit signals. It requires only limited energy, because light produces less heat than electricity, and it is less susceptible to changes in ambient temperature, electromagnetic fields, and other noise. It’s designed to slot into existing machines at the network edge, like on-premises servers, and will eventually ship with a software stack compatible with algorithms in commonly used frameworks like Google’s TensorFlow, Facebook’s Caffe2 and PyTorch, and others.
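
The workload such accelerators typically target is the dense matrix multiplication that dominates neural-network inference; on a photonic chip, that computation happens as light propagates through the circuit. The sketch below shows the core operation in ordinary code purely for orientation; the article does not detail Lightelligence’s programming model.

```typescript
// Minimal sketch of the dense matrix-vector product that optical AI
// accelerators are generally built to speed up (an assumption here; the
// article does not specify which operations the prototype offloads).
function matVec(weights: number[][], input: number[]): number[] {
  return weights.map((row) =>
    row.reduce((sum, w, j) => sum + w * input[j], 0)
  );
}

// Example: a 2x3 weight matrix applied to a 3-element activation vector.
const y = matVec(
  [
    [0.5, -1.0, 2.0],
    [1.5, 0.25, -0.5],
  ],
  [1, 2, 3]
);
console.log(y); // [4.5, 0.5]
```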

See the full story here: https://venturebeat.com/2019/04/15/lightelligence-releases-prototype-of-its-optical-ai-accelerator-chip/

17 Apr 2019

New York City’s AI task force stalls

Nearly one year after its founding, the Automated Decision Systems Task Force hasn’t even agreed on the definition of an automated decision system.

In May 2018, Mayor Bill de Blasio announced the formation of the Automated Decision Systems Task Force, a cross-disciplinary group of city officials and experts in artificial intelligence (AI), ethics, privacy, and law. Established by Local Law 49, the ADS Task Force is charged with developing a process for reviewing algorithms the city uses—such as those for determining public school assignments, predicting which buildings should be inspected, and fighting tenant harassment—through the lens of equity, fairness, and accountability.

“Algorithms should be subject to the same scrutiny with which we treat any regulation, standard, rule, or protocol. It is essential that they are highly vetted, transparent, accurate and do not generate injurious, unintended consequences,” New York City Comptroller Scott Stringer wrote. “Without such oversight, misguided or outright inaccurate algorithms can fester and lead to increasingly problematic outcomes for city residents, employees, and contractors.”

This lack of progress to date reflects the overall difficulty of regulating technology, a field that’s coming under increased scrutiny at federal, state, and local levels. This month, the House and Senate introduced the Algorithmic Accountability Act, which, if passed, would require the FTC to create rules for assessing the impact of automated decision systems. HUD recently sued Facebook for housing discrimination in its ads, the New York Civil Liberties Union is suing ICE for its immigrant risk assessment algorithm, and a Connecticut judge recently ruled that tenant screening companies that use algorithmic risk assessments must comply with fair housing rules.

See the full story here: https://ny.curbed.com/2019/4/16/18335495/new-york-city-automated-decision-system-task-force-ai

17 Apr 2019

EU Votes For Copyright Rules Opposed by Nativist Groups

Following the European Parliament’s 348-to-274 vote in March, nineteen of the European Union’s 28 member countries voted to give final approval to reformed laws protecting content creators. Critics of the reform — including large tech companies — argue that the rules will reduce free speech online, with Articles 11 and 13 of particular concern. European Commission president Jean-Claude Juncker declared that the new copyright rules are “fit for the digital age.” In the lead-up to the vote, nativist groups in many countries worked to defeat the new rules.

VentureBeat reports that Article 11, the so-called link tax, “requires websites to pay publishers a fee if they display excerpts of copyrighted content” or even a link to it; and Article 13, the so-called upload filter, “makes digital platforms legally liable for any copyright infringements on their platform.”

See the full story here: http://www.etcentric.org/eu-votes-for-copyright-rules-opposed-by-nativist-groups/