How Canon’s VR lens becomes part of the Metaverse experience

3D stills and VR video will soon become second nature to creators, just like ordinary digital photography. Affordable equipment like Canon's new VR lens equips professionals and consumers alike with the tools to capture the three-dimensional assets that will populate the emerging spatial internet.

The Canon RF 5.2mm F2.8L Dual Fisheye Lens was launched late last year and received further publicity at the Consumer Electronics Show in early 2022. It is an interchangeable lens designed for the EOS R5 and is listed at less than $2,000.

Canon RF 5.2mm F2.8L Dual Fisheye lens in use. Picture: Canon.

“A lens like this opens up the world for users to go from 2D to 180 VR stereoscopic 3D,” says Brandon Chin, senior technical specialist at Canon USA. “This means that the versatile use of the R5 can now be harnessed in a whole new medium to deliver images for future content creation purposes. Now you have virtual reality in your camera bag.”

VR videos can be posted immediately to apps like YouTube VR for viewing in headsets like the Oculus.

“You can imagine recording a concert and, instead of seeing it in a flat two-dimensional way, we are now able to see it in depth and look around with freedom of vision in a way that is not communicated by conventional 2D applications,” Chin said.

Most previous methods of capturing stereoscopic images relied on two cameras and two paired lenses on a rig that was not only expensive and complex, but also fraught with challenges in aligning the optics and then matching the files in post.

“The big difference is that this lens is made up of two separate optical systems mounted as one lens, so all the alignment that would normally require custom gear to achieve – this camera can do it on its own.”

Canon RF 5.2mm F2.8L Dual Fisheye Lens.
Picture: Canon.

The dual circular fisheye lenses on the front of the camera are mirrored by two circular images (left eye and right eye) on the rear display. Both images, however, are saved as a single file on a single card.

“Because you’re getting one file from one camera, the post-production process is much simpler. Optically, it does the job of two separate lenses.”

He also points out that since Canon manufactures lenses, sensors and software for the process, the previous difficulties of matching different third-party manufactured elements are eliminated.

The image sensor records up to 8K DCI, although the resolution captured per lens is slightly lower than 4K because the two image circles sit side by side on the sensor.
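The arithmetic behind that per-eye figure can be checked quickly. A back-of-the-envelope sketch in Python, assuming the EOS R5's 8K DCI frame of 8192 × 4320 pixels and a DCI 4K width of 4096 pixels (figures not stated in the article):

```python
# Assumed figures: EOS R5 8K DCI frame is 8192 x 4320; DCI 4K is 4096 px wide.
sensor_w = 8192
per_eye_w = sensor_w // 2    # the two image circles split the width evenly
dci_4k_w = 4096

print(per_eye_w == dci_4k_w)  # True: each eye gets at most a DCI-4K-wide strip
```

Each eye's half-frame is exactly DCI 4K wide, but the usable circular image is smaller than the rectangle it sits in, which is why the effective per-eye resolution lands slightly below 4K.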

The file can be processed using one of two applications: the new standalone EOS VR Utility for Mac and PC, or the EOS VR plug-in for Adobe Premiere Pro.

Both applications transform the side-by-side circular images into a 1:1 side-by-side equirectangular image and can output to different file types and resolutions.
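The core of that conversion is a per-pixel remap from fisheye to equirectangular coordinates. A minimal Python/NumPy sketch for a single eye, assuming an ideal equidistant fisheye model and nearest-neighbour sampling (the real EOS VR software additionally applies per-lens calibration, stereo alignment and higher-quality resampling):

```python
import numpy as np

def fisheye_to_equirect(fisheye, out_h, out_w, fov_deg=180.0):
    """Remap one circular (equidistant) fisheye image to a 180-degree
    equirectangular projection. Illustrative only: nearest-neighbour
    sampling, no lens calibration."""
    in_h, in_w = fisheye.shape[:2]
    cx, cy = in_w / 2.0, in_h / 2.0
    radius = min(cx, cy)                    # image-circle radius in pixels
    theta_max = np.radians(fov_deg) / 2.0   # half the field of view

    # Output grid: longitude/latitude both spanning [-90, +90] degrees,
    # which is why a 180-degree equirectangular frame is square (1:1).
    lon = (np.arange(out_w) / (out_w - 1) - 0.5) * np.pi
    lat = (0.5 - np.arange(out_h) / (out_h - 1)) * np.pi
    lon, lat = np.meshgrid(lon, lat)

    # Direction vector for each output pixel (z points along the lens axis).
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)

    # Equidistant fisheye model: radial distance is proportional to the
    # angle off the optical axis.
    theta = np.arccos(np.clip(z, -1.0, 1.0))
    r = radius * theta / theta_max
    psi = np.arctan2(y, x)                  # rotation around the axis

    u = np.clip(np.round(cx + r * np.cos(psi)), 0, in_w - 1).astype(int)
    v = np.clip(np.round(cy - r * np.sin(psi)), 0, in_h - 1).astype(int)
    return fisheye[v, u]
```

Running the same remap on the left and right image circles and placing the results side by side would produce the 1:1 side-by-side equirectangular layout the article describes.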

If you’re using the Premiere Pro plug-in, after conversion you can then drop clips into the timeline and do color correction in the usual way.

Obviously the parallax between the dual lenses is fixed, but some slight alignment adjustments can be made in post.

The camera does not support VR live streaming natively, but does have an HDMI port. Chin says he wouldn’t be surprised if someone in the market came out and “created some kind of ingest app that would let people see 180-degree images in super high resolution.”

When asked if Canon would look to add additional depth-sensing technology (such as LiDAR) to the system, Chin said Canon is looking for market feedback. The company aims for the adoption of virtual reality in many sectors such as training, travel, sports, live events and documentaries.

“VR innovators are trying to do things that are extremely difficult technologically. It’s a great new area that we haven’t explored yet. We receive all of this information and pass it on to Canon Inc (the manufacturer) on how best to support it.

“We’re very excited about what the future holds for immersive content and all the ways the metaverse will play into our lives.”

Footage for Canon’s new immersive VR video-calling platform, Kokomo, was captured using this lens.

Canon’s introductory video for Kokomo explains the app and how the company wants to combine the 3D experience of VR with the ease of video calling.

Currently in development but slated for launch this year, Kokomo will allow users to make real-time video calls “with their live appearance and expression, in a photo-real environment, while experiencing a premium VR setting in captivating locations like Malibu, New York, or Hawaii.”

The app uses cameras and Canon imaging technology to create realistic representations of users, so calls “feel like you’re interacting face-to-face, rather than through a screen or avatar”.

Mass creation of 3D assets

The creation of 3D assets is one of many bottlenecks in the development of the 3D internet, or metaverse. Some developers think this could be solved by the advent of consumer LiDAR. Newer phones (such as the iPhone 12 Pro) include LiDAR scanners, putting the technology in the average user's pocket.

Rumors abound that the iPhone 13 Pro could pack a second-generation LiDAR scanner, which, combined with machine learning algorithms, could transform the photos we take every day into three dimensions almost overnight.

“Many experts believe 3D capture is as inevitable as digital photography was in 2000,” reports TechRadar.

It’s not just still images either. LiDAR could hold the key to user-generated volumetric video. As AppleInsider pointed out, patents published by Apple in 2020 refer to compressing LiDAR spatial information in video using an encoder, “which could allow its ARM chip to simulate video bokeh based on LiDAR depth information, while shooting high-quality video”.
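The idea of steering blur with a depth map can be sketched in a few lines. A toy Python/NumPy example, assuming a grayscale image and a depth map in arbitrary units (real renderers model a proper per-pixel circle of confusion, and the Apple patent concerns hardware-side encoding rather than this algorithm):

```python
import numpy as np

def depth_bokeh(image, depth, focus, strength=3.0):
    """Toy depth-steered blur: pixels far from the focal plane get a
    wider box blur. Illustrates how a LiDAR-style depth map can drive
    a synthetic-bokeh effect."""
    h, w = image.shape
    coc = np.abs(depth - focus) * strength        # crude blur-radius map
    max_r = int(np.ceil(coc.max()))

    # Precompute box blurs at every integer radius, then pick per pixel.
    blurs = [image.astype(float)]
    for r in range(1, max_r + 1):
        k = 2 * r + 1
        padded = np.pad(image.astype(float), r, mode="edge")
        acc = np.zeros((h, w))
        for dy in range(k):
            for dx in range(k):
                acc += padded[dy:dy + h, dx:dx + w]
        blurs.append(acc / (k * k))

    radii = np.clip(np.round(coc).astype(int), 0, max_r)
    out = np.empty((h, w))
    for r in range(max_r + 1):
        mask = radii == r
        out[mask] = blurs[r][mask]
    return out
```

Pixels at the focal distance keep radius zero and pass through untouched, while out-of-focus regions are progressively smoothed, which is the essence of depth-driven synthetic bokeh.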

3D media management platforms such as Sketchfab and Polycam are built on interoperability standards such as glTF and already allow 3D models to be viewed and interactively manipulated in a web browser.
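Part of what makes browser-side viewing straightforward is that glTF describes a scene as plain JSON (with geometry in separate binary buffers). A minimal Python sketch, using a hypothetical hand-written asset, of how a client might enumerate the meshes in a glTF 2.0 document:

```python
import json

# A tiny hand-written glTF 2.0 document (hypothetical asset, no geometry
# buffers) -- real files also carry accessors, bufferViews and buffers.
GLTF_DOC = """
{
  "asset": {"version": "2.0"},
  "meshes": [{"name": "chair"}, {"name": "lamp"}],
  "nodes": [{"mesh": 0}, {"mesh": 1}],
  "scenes": [{"nodes": [0, 1]}],
  "scene": 0
}
"""

def list_meshes(gltf_json):
    """Return the mesh names referenced by the default scene."""
    doc = json.loads(gltf_json)
    scene = doc["scenes"][doc.get("scene", 0)]
    names = []
    for node_index in scene["nodes"]:
        node = doc["nodes"][node_index]
        if "mesh" in node:
            mesh = doc["meshes"][node["mesh"]]
            names.append(mesh.get("name", f"mesh_{node['mesh']}"))
    return names

print(list_meshes(GLTF_DOC))  # ['chair', 'lamp']
```

Because the scene graph is ordinary JSON, any JavaScript-capable browser can parse and manipulate it directly, which is why glTF has become the common interchange format for web-based 3D viewers.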

“LiDAR technology… now allows anyone with the latest iPhone to mass render the physical world, translate it into machine-readable 3D models, and convert them into tradable NFTs that could be uploaded into open virtual worlds very quickly by populating avatars, clothing, furniture, and even entire buildings and streets,” said Jamie Burke, CEO and Founder of Outlier Ventures, a London-based venture capital firm.
