Philip Lelyveld | The world of entertainment technology


Verses sets up blockchain-based property rights in augmented reality

Verses has launched a protocol for virtual and augmented reality to help people establish ownership of virtual real estate, 3D space, and other goods in the virtual world. The company likes to refer to this idea of owning things within the virtual world as Web 3.0.

Just as websites have a unique address, Verses is using blockchain technology to identify specific 3D spaces in augmented reality. That allows people to own parts of the virtual world, or Metaverse, if you will. And Los Angeles-based Verses has set up the nonprofit Verses Foundation to serve as arbiter for the protocol that will delineate this “spatial web.”

The Verses Spatial Web Protocol is a universal and open standard for the next generation of the web that exists in AR or VR, connecting people, places, things, and currencies into a single digital network.
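Verses has not published the protocol's internals, but the core idea, giving a bounded 3D volume a deterministic, URL-like identifier that a ledger can record ownership claims against, can be sketched in a few lines. The function name, the quantisation scheme, and the use of SHA-256 below are illustrative assumptions, not part of the actual standard.

```python
import hashlib

def spatial_address(lat, lon, alt_m, resolution=0.001):
    """Quantise a 3D position into a voxel at the given resolution, then
    derive a short deterministic ID for that voxel. A blockchain could
    record ownership against such IDs, much as DNS maps names to sites."""
    voxel = (round(lat / resolution),
             round(lon / resolution),
             round(alt_m / resolution))
    digest = hashlib.sha256(repr(voxel).encode("utf-8")).hexdigest()
    return digest[:16]

# Two points inside the same ~0.001-degree cell resolve to the same address,
# so an ownership claim covers the whole voxel, not a single point.
a = spatial_address(34.0522, -118.2437, 120.0)
b = spatial_address(34.05221, -118.24371, 120.0)
```

Because the address is derived purely from the coordinates, any client can recompute it offline and look up the owner, with no central registry assigning identifiers.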

Verses has sold $6 million worth of tokens to various investors, including Blockchain Industries, Decentra, and other parties, and the company plans to raise additional capital via its private token sale from the international community.

This is reminiscent of the World Wide Web Consortium (W3C) that sets web protocols, such as the Hypertext Transfer Protocol (HTTP).

The Verses technology works as an open standard, establishing a universal protocol that powers its key feature set for virtual assets across a multitude of applications. Use cases include virtual commerce, interconnected virtual reality spaces, smart city infrastructure, and secure holographic telecom.

The company currently has 35 staff and 15 partners.

See the full story here:


Virtual reality beginning to take off in the poultry house

The National Chicken Council (NCC) announced last month that it had developed a series of 360-degree virtual reality videos showing the various stages of a chicken’s life during modern, commercial production.

Other forms of virtual reality are being trialled at Iowa State University by Professor Austin Stewart, who has developed Second Livestock, which places virtual reality headsets on chickens.

This allows them to enjoy the free-range life wherever they are, even if they live their lives in a colony building.

Virtual Free Range combines the physical and psychological benefits of free range life with the safety and security of conventional agriculture. Chickens are free to roam, socialise and even eat “virtual food”, which appears in the virtual world where their real food trays are located.

Training can be boosted through virtual reality, particularly on processing lines. For example, virtual reality can be used to train line workers how to trim meat, how to walk through a house without disturbing the birds, or even how to check the animals.

Researchers at Georgia Tech’s Agricultural Technology Research Program have designed two systems that project graphical instructions from an automated inspection system onto birds on a processing line. These symbols tell workers how to trim or whether to discard defective products.

See the full story here:



A team from the UCLA Samueli School of Engineering in Los Angeles, California, has applied 3D printing to create a “seeing” device modeled on the human brain.

Appearing as a series of neatly stacked plastic plates, this device is capable of analyzing image data to identify objects such as items of clothing and handwritten characters.

By developing technologies based on the device, the scientists could have discovered a simpler way of teaching artificially intelligent (AI) products, like autonomous vehicles and “smart assistants”, to perceive the world around them.

Each plate in the UCLA device is patterned with artificial neurons, in the form of tiny pixels, each of which diffracts light in a different way.

When looking at an object, the device determines what it can see by the way light travels through the plates and what comes out the other side. “This is intuitively like a very complex maze of glass and mirrors,” explains UCLA principal investigator Aydogan Ozcan.

“This work opens up fundamentally new opportunities to use an artificial intelligence-based passive device to instantaneously analyze data, images and classify objects,” adds Ozcan. “This optical artificial neural network device is intuitively modeled on how the brain processes information.”

“It could be scaled up to enable new camera designs and unique optical components that work passively in medical technologies, robotics, security or any application where image and video data are essential.”
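The forward pass of such a diffractive network can be sketched numerically: each plate applies a per-pixel phase shift, and light diffracts freely between plates. The layer count, spacing, and terahertz-scale dimensions below are rough assumptions, and the random (untrained) phase plates only illustrate the mechanics, not the team's published, optimised design.

```python
import numpy as np

def propagate(field, dist, wavelength=0.75e-3, pixel=0.4e-3):
    """Free-space propagation of a complex optical field via the
    angular spectrum method (evanescent components are clamped)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dist)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def forward(image, phase_layers, spacing=3e-3):
    """Pass a field through stacked phase-only diffractive layers."""
    field = image.astype(complex)
    for phase in phase_layers:
        field = propagate(field, spacing)
        field = field * np.exp(1j * phase)   # each "neuron" shifts the phase
    field = propagate(field, spacing)
    return np.abs(field) ** 2                # a detector measures intensity

# Five random (untrained) 64x64 phase plates standing in for printed layers
rng = np.random.default_rng(0)
layers = [rng.uniform(0, 2 * np.pi, (64, 64)) for _ in range(5)]
intensity = forward(rng.random((64, 64)), layers)
```

In a trained device the phase values would be optimised so that each object class steers most of the light onto a different detector region; classification is then just a matter of finding the brightest region, with no electronic computation at inference time.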

“All-optical machine learning using diffractive deep neural networks” is published online in the journal Science. It is co-authored by Xing Lin, Yair Rivenson, Nezih T. Yardimci, Muhammed Veli, Yi Luo, Mona Jarrahi and Aydogan Ozcan.

See the full story here:


Fields places music/audio at specific geospatial coordinates

Though it’s currently only available for iOS, Fields allows you to experience audio in a new way. Instead of just walking around and listening to a song in your headphones, snippets of audio are placed around wherever you are. As you approach each of these, represented by a large globe on your iPhone’s screen, you’ll start to hear a specific audio track. As you move away, the track fades. Move your iPhone around, and the sound will shift its position within your headphones (and, yes, you’ll want to use this app with headphones).
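The two behaviours described, volume fading with distance and sound shifting between the ears as you turn, come down to simple geometry. The app's internals aren't public; the falloff radius, the linear falloff curve, and the function names below are illustrative assumptions.

```python
import math

def gain(listener, source, radius=30.0):
    """Linear volume falloff: full volume at the source,
    silent once the listener is more than `radius` metres away."""
    dx, dy = source[0] - listener[0], source[1] - listener[1]
    dist = math.hypot(dx, dy)
    return max(0.0, 1.0 - dist / radius)

def pan(listener, heading_deg, source):
    """Stereo pan in [-1, 1]: -1 is hard left, +1 is hard right,
    relative to the direction the listener is facing."""
    dx, dy = source[0] - listener[0], source[1] - listener[1]
    bearing = math.degrees(math.atan2(dx, dy))       # 0 deg = due north
    rel = (bearing - heading_deg + 180) % 360 - 180  # wrap to [-180, 180)
    return math.sin(math.radians(rel))

# Listener at the origin, facing north
volume = gain((0, 0), (0, 10))      # fades in as the 10 m gap shrinks
side = pan((0, 0), 0.0, (10, 0))    # a source due east pans hard right
```

Feeding `gain` and `pan` into a stereo mixer per audio source is enough to reproduce the basic effect; production apps would add head tracking from the phone's IMU and a more natural (e.g. inverse-square) falloff.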

Pitchfork puts it best:

“Imagine standing in a forest, hearing different animals and noises as you step through your living room, or being able to experience a choral performance while nestled among the singers. Fields makes that possible.”

See the full story here:


AR startup Ubiquity6 lands $27M Series B to build a more user-friendly augmented reality

Ubiquity6 is one of a handful of startups aiming to tackle the backlog of backend features missing from most AR experiences available today. The fast-growing company is building tools that will essentially enable users to create a cloud-based AR copy of the physical world, enabling persistent, dynamic multiplayer AR experiences.

A big focus of Ubiquity6’s efforts has been building 3D mesh maps of entire public areas, so that the onboarding process becomes effectively instantaneous.

This strategy works great for museums and much less well for your living room, but Ubiquity6 is hoping that the experiences available in their app can have episodic utility that ties them closely with events at public geographic locations.

The company’s partnership with the San Francisco Museum of Modern Art was previewed earlier this month with an activation inside the Magritte exhibit.


See the full story here:



SFMOMA is Putting the ‘AR’ in ‘Art’ With Augmented Reality Exhibits

Magritte's work is on display at the museum with an interactive gallery by frog design that's filled with digital puzzles based on the artist's paintings. As people roam through the exhibit snapping photos with their smartphones — a practice SFMOMA actually encourages — Coerver and his staff are considering making those phones a more central part of the museum experience.

In a massive one-night play test, San Francisco-based startup Ubiquity6 — which just announced $27 million in new funding — was invited to hand out a hundred Apple iPhones in the museum's lobby, loaded with new multiplayer augmented reality software. By looking through the phones' cameras, guests were invited to explore Magritte's bizarre universe, creating apples, pipes and bowler hats for others to find as they went along.

See the full story here:


StarVR debuts eye-tracking VR headset for enterprise use

Taipei-based StarVR today debuts a new virtual reality headset for the enterprise. With integrated eye-tracking and a so-called 100-percent human viewing angle, the StarVR One is a new addition to the growing field of enterprise-only head-mounted displays, a hardware category that's still making a case for itself as consumer VR tech gets more impressive.

The StarVR One's 210-degree horizontal field of view corresponds to the natural human field of vision.

Enterprise-only headsets like the DAQRI Smart Glasses and HTC's VIVE Pro are catering to that booming market. Google Glass has been reinvented and is having a tremendous second act as an enterprise-first AR headset.

See the full story here:


Shape-shifting Cypher sculpture by Ozel Office is controlled by motion

California architecture studio Ozel Office has created a blobby robotic sculpture that changes shape in the presence of people, or through the movements of those wearing a matching virtual reality headset.

Cypher is an interactive robotic sculpture that is controlled by sensors, scanners and virtual reality (VR) technology. The sculpture inflates and deflates when people or objects are in its proximity, and also based on commands given by a person wearing a connected VR helmet.

The "cyberphysical sculptural installation" was designed and developed by Ozel Office, a Los Angeles studio that explores the intersection of architecture and technology. The firm is led by architect Güvenç Özel, who is a faculty member at the University of California, Los Angeles (UCLA). The Cypher project was largely funded by a grant from Google's Artists and Machine Intelligence Program.

 Once publicly launched, Cypher will be the first robotic sculpture that is simultaneously controlled by physical sensors and VR, according to the team. The project aims to challenge notions of what is real and virtual and to merge domains that are typically viewed as distinct.

The sculpture has a T-slotted aluminium frame with 3D-printed steel joints. Within the frame, the team placed an air compressor and a computer, which serve as the "brain" of the sculpture. These components are connected to sensors, valves, actuators and other elements that play a role in the sculpture's movement.
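The installation's actual control logic isn't published, but the described behaviour, inflating when people approach and responding to a connected VR headset, amounts to a simple sensor-to-valve rule. The thresholds, the dead band, and the function name below are assumptions for illustration.

```python
def valve_command(nearest_m, vr_override=None,
                  inflate_below=1.5, deflate_above=3.0):
    """Map the nearest proximity reading (metres) to a pneumatic command.
    A connected VR headset can override the sensors, as in the installation."""
    if vr_override in ("inflate", "deflate"):
        return vr_override           # VR wearer's gesture wins over sensors
    if nearest_m < inflate_below:
        return "inflate"             # someone is close: swell toward them
    if nearest_m > deflate_above:
        return "deflate"             # room is empty: relax
    return "hold"                    # dead band between thresholds
```

The gap between the two thresholds acts as hysteresis: a visitor hovering near the boundary doesn't make the compressor valves chatter open and shut on every sensor reading.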

For the "skin", the team used flexible 3D-printed panels made of silicone and carbon fibre-infused thermoplastic. The spiky texture was influenced by the skin patterns of natural creatures, and is meant to challenge the aesthetic expectations of robots. The sculpture's dark hue also conveys a message and blurs one's reading of the object.

"The black glossy colour is used to enhance the mystique of the object further, therefore blurring the true morphological qualities of the sculpture through a play between the absence of light and variable reflection," said the team.

See the full story here:


Nvidia shares rise after chipmaker announces its next-generation graphics chips

"Turing is NVIDIA's most important innovation in computer graphics in more than a decade," Nvidia CEO and founder Jensen Huang said at the SIGGRAPH conference, according to a company release. "Hybrid rendering will change the industry, opening up amazing possibilities that enhance our lives with more beautiful designs, richer entertainment and more interactive experiences. The arrival of real-time ray tracing is the Holy Grail of our industry."

See the full story here:


Design Principles as applied to Virtual Reality

A short primer

See the full story here: