I don’t have a volumetric display of my own, but I’ve seen some wild things done with the Looking Glass display.
Recently, software called REFRACT for the Looking Glass became available that allows games to be played on it. If you want to see what it looks like, here’s a link: https://www.youtube.com/watch?v=Q-iyzyBFLNA
Everything seems to be at an embryonic stage.
I’m curious if there is a way to do this with video. I also wanted to know if there is an open source alternative to rotoshapes, similar to what was done for the Jurassic Park 3D conversion.
Here is a link: https://www.youtube.com/watch?v=5Sp8tSNnKy4. In this montage you’ll see the rotoshapes influencing the depth map, and there is a lot more footage of how this was done, specifically from 2011, using Autodesk software.
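As a rough illustration of the rotoshape idea, a roto mask can push a region of a depth map forward, separating a subject from its background. This is a minimal NumPy sketch with made-up toy values, not the actual Autodesk workflow shown in the video:

```python
import numpy as np

def apply_roto_depth(depth, mask, offset):
    """Bring the masked region closer by adding a depth offset,
    the way a rotoshape separates a subject from the background."""
    out = depth.copy()
    out[mask] += offset
    return out

# Toy 4x4 depth map (larger = closer) with a 2x2 "subject" mask.
depth = np.zeros((4, 4), dtype=np.float32)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True

result = apply_roto_depth(depth, mask, offset=0.5)
print(result[1, 1], result[0, 0])  # masked pixel raised to 0.5, background stays 0.0
```

In a real pipeline the mask would come from an artist's roto spline (or a segmentation model) per frame, and the offset would be animated over time.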
My guess is that morphable models from neural nets could enhance depth maps in a way similar to rotoshapes. I’m hoping this could also be used to segment video, and that something like video epitomes could assist with inpainting and with enhancing depth maps.
I have a few ideas, but no experience in programming. If I had a GPU, I would be working on this.
There is a mix of software I wanted to use for this, most of it licensed under the GPL: a combination of depth maps and morphable models. I figured I could use morphable models, like Volker Blanz’s implementations, and PIFuHD to enhance depth maps, with wireframes to guide them. I’ve seen that in some cases very accurate geometry is encoded in a depth map; the prime example is a depth map of the Blender monkey. Several software projects can extract depth maps and reconstruct facial geometry (similar to Total Moving Face Reconstruction).
My end goal is to create software for converting traditional 2D content into holomovies, and I know that the computing power required would be intensive. I was hoping to use clips from Star Wars, LOST, or Person of Interest as test material, and to produce 5 minutes of footage.
Several code bases could be combined (if licenses allow it) into a research project and an open source tool.
3D Combine is also an app I’m interested in for generating animated depth maps.
I played back a depth map together with a monocular color image, and found that sometimes an accurate form of geometry could be produced. Other times things looked like mountain ranges or dioramas (similar to a histogram).
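The “mountain range” effect comes from treating each pixel’s depth value as a height. A minimal sketch of lifting a depth map plus color image into a colored point cloud (assuming an orthographic projection for simplicity; a real viewer would back-project through the camera intrinsics):

```python
import numpy as np

def depth_to_point_cloud(depth, color):
    """Lift an HxW depth map and an HxWx3 color image into an Nx6
    array of (x, y, z, r, g, b) points. Orthographic projection only;
    a real pipeline would use the camera's focal length and center."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    points = np.stack([xs.ravel(), ys.ravel(), depth.ravel()], axis=1)
    colors = color.reshape(-1, 3)
    return np.hstack([points, colors])

depth = np.arange(6, dtype=np.float32).reshape(2, 3)  # toy 2x3 depth map
color = np.zeros((2, 3, 3), dtype=np.float32)         # toy black image
cloud = depth_to_point_cloud(depth, color)
print(cloud.shape)  # (6, 6): one (x, y, z, r, g, b) row per pixel
```

Each row could then be rendered as a point or displaced vertex, which is roughly what happens when a depth map is viewed as relief geometry.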
Other projects that could be used here include Neural Scene Flow Fields and monocular depth estimators like MiDaS and LeReS; there are many similar projects that could be implemented.
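Per-frame depth estimators like MiDaS and LeReS tend to flicker when run on video, which is part of what video-aware methods like Neural Scene Flow Fields address. As a much cruder placeholder, an exponential moving average over consecutive depth frames can damp the flicker. This is a naive sketch, not what those projects actually do:

```python
import numpy as np

def smooth_depth_sequence(frames, alpha=0.7):
    """Exponential moving average over per-frame depth maps.
    alpha near 1 trusts the current frame; lower values smooth more
    (at the cost of ghosting on fast motion)."""
    smoothed = [frames[0]]
    for frame in frames[1:]:
        smoothed.append(alpha * frame + (1 - alpha) * smoothed[-1])
    return smoothed

# Two toy 2x2 depth "frames": a jump from 0 to 1 everywhere.
frames = [np.zeros((2, 2)), np.ones((2, 2))]
out = smooth_depth_sequence(frames)
print(out[1][0, 0])  # 0.7: the jump is partially absorbed
```

A real temporal filter would also warp the previous depth map by the optical flow before blending, so moving objects don’t smear.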
Television shows and movies also include a lot of shots that seem natural for 3D conversion, from camera shots that change focus within 15 frames to shots that seem potentially ripe for photogrammetry, although not perfect for it.
An example is this scene from LOST.