In a new application titled “Processing of Equirectangular Object Data to Compensate for Distortion by Spherical Projections” published today by the U.S. Patent and Trademark Office, Apple has detailed methods for correcting errors and distortion that arise when combining multi-directional image data from videos (via AppleInsider).
Apple explains that videos are produced in ways that allow a subject or scene to be captured from multiple viewpoints, such as by using multiple cameras pointing at the same spot. However, most modern coding applications “are not designed to process such omnidirectional or multi-directional image content”.
The filing notes that such applications are designed on the assumption that image data is “flat” or captured from a single viewpoint, and do not account for distortions that can appear when processing these types of videos. Apple’s method suggests that the encoder split a video into pixel blocks and, for each block, compare it against reference data it holds about the scene in a reference picture.
“Using a prediction search on the search block and the reference data, the encoder could perform different actions on the pixel block, in order to make it look more appropriate for the viewer in the format it is being used within.
For example, a spherical video could be created for viewing by someone wearing a VR headset, produced specifically with that format in mind. At the same time, an identical view of what the user is seeing could be shown live on a second monitor, but with changes made so it appears as if it’s from a normal “flat” single-lens viewpoint, without any of the distortions required for it to appear correct in a spherical view.”
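The block-by-block prediction search described above is the standard motion-estimation idea used in video encoders, and the equirectangular distortion the patent targets comes from the projection stretching each row horizontally by 1/cos(latitude). A rough, hypothetical sketch of both pieces (not the patent’s actual algorithm; function names and the SAD cost metric are assumptions for illustration):

```python
import numpy as np

def block_match(block, reference, top, left, search_radius=4):
    """Exhaustive prediction search: find the (dy, dx) offset in the
    reference picture that minimizes sum-of-absolute-differences (SAD)
    against the given pixel block."""
    h, w = block.shape
    best, best_sad = (0, 0), np.inf
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            y, x = top + dy, left + dx
            # Skip candidate positions that fall outside the reference picture.
            if y < 0 or x < 0 or y + h > reference.shape[0] or x + w > reference.shape[1]:
                continue
            sad = np.abs(reference[y:y+h, x:x+w].astype(int) - block.astype(int)).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad

def equirect_stretch(row, height):
    """Horizontal over-sampling factor of a row in an equirectangular
    frame: content at latitude phi is stretched by 1/cos(phi), growing
    without bound toward the poles."""
    lat = (0.5 - (row + 0.5) / height) * np.pi  # phi in (-pi/2, pi/2)
    return 1.0 / np.cos(lat)
```

A distortion-aware encoder could use a factor like `equirect_stretch` to widen or warp the search window for blocks near the top and bottom of the frame, where equirectangular stretching makes a plain translational search a poor fit.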
Apple’s concept could be applied to videos produced with 360-degree cameras, which are expected to be a major content source for VR users in the future. However, as with any other patent application, there is no guarantee that the idea will actually make an appearance in a future consumer device.