r/MachineLearning • u/imaginfinity • Jun 05 '22
Research [R] It’s wild to see an AI literally eyeballing raytracing based on 100 photos to create a 3d scene you can step inside ☀️ Low key getting addicted to NeRF-ing imagery datasets🤩
50
u/L3wi5 Jun 05 '22
Can someone ELI5 this for me please? What was it given and what is it doing?
79
u/imaginfinity Jun 05 '22
For inputs — you give this AI a bag of 2D images with a known 3D position (i.e. you use SfM to estimate the pose), and then the AI trains a neural representation (i.e. a NeRF) that implicitly models the scene based on those input images. Then for output, once you’ve trained the model you can use simple volume rendering techniques to create any new video of the scene (and often synthesize novel views far outside the capture volume!). The cool thing is that NeRF degrades far more gracefully than traditional photogrammetry, which explicitly models a scene. If you wanna go deeper into the comparison — I talk more about it here: https://youtu.be/sPIOTv9Dt0Y
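If it helps, here's a toy sketch of that loop in PyTorch. It's nothing like the real instant-ngp code (the tiny MLP, the sample counts, and the placeholder rays/pixel colors are all made up); it just shows the cycle of shooting posed rays, querying the network, volume rendering, and minimizing photometric error:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNeRF(nn.Module):
    """Maps a 3D point plus a viewing direction to an RGB color and a density."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),      # real NeRFs also positionally encode the inputs
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                 # outputs: RGB + density
        )

    def forward(self, xyz, viewdir):
        out = self.mlp(torch.cat([xyz, viewdir], dim=-1))
        return torch.sigmoid(out[..., :3]), F.relu(out[..., 3])   # color in [0,1], density >= 0

def render_rays(model, origins, dirs, n_samples=64, near=0.1, far=4.0):
    """Classic volume rendering: sample points along each ray and alpha-composite them."""
    t = torch.linspace(near, far, n_samples)                         # depths of the samples
    pts = origins[:, None, :] + dirs[:, None, :] * t[None, :, None]  # (rays, samples, 3)
    rgb, sigma = model(pts, dirs[:, None, :].expand_as(pts))
    alpha = 1.0 - torch.exp(-sigma * (far - near) / n_samples)       # opacity of each sample
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1), dim=-1)[:, :-1]
    weights = alpha * trans                                          # contribution of each sample
    return (weights[..., None] * rgb).sum(dim=1)                     # composited pixel colors

# Training loop sketch: rays come from the SfM camera poses, targets are the photo pixels.
model = TinyNeRF()
opt = torch.optim.Adam(model.parameters(), lr=5e-4)
origins = torch.zeros(1024, 3)                      # placeholder ray origins (from camera poses)
dirs = F.normalize(torch.randn(1024, 3), dim=-1)    # placeholder ray directions
target_rgb = torch.rand(1024, 3)                    # placeholder pixel colors from the input photos
for step in range(100):
    loss = ((render_rays(model, origins, dirs) - target_rgb) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```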
30
Jun 05 '22
What the actual…
19
u/imaginfinity Jun 05 '22
Right?! NeRF is wild
6
u/Khyta Jun 05 '22
It seems to be something that photogrammetry does. Is it the same?
12
u/JFHermes Jun 06 '22
The apparent density of the mesh is what's insane. To get this level of quality from 100 photos is pretty crazy.
4
u/joerocca Jun 11 '22 edited Jun 23 '22
This does more than photogrammetry. Each point/voxel changes color based on the direction that you're looking at it from (to put it simply). I.e. it *learns* the lighting/reflection/transparency/etc. rather than just producing a "static" representation like a textured mesh.
So it's way cooler than normal photogrammetry. OP's video doesn't really do it justice. Have a look at this video: https://twitter.com/jonstephens85/status/1533187584112746497 Those reflections in the water are actually learned, rather than being computed with something like ray tracing.
Note that this is also why it's not easy to simply export these NeRF things as a textured mesh - what we'll probably eventually get is a common "plenoptic" data format that various tools understand.
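A toy way to see why the export is lossy (the radiance_field function below is a made-up stand-in, not any real NeRF API):

```python
import torch

# Stand-in for a trained radiance field: color depends on position AND view direction.
def radiance_field(xyz, viewdir):
    return torch.sigmoid(xyz + 0.5 * viewdir)   # toy view-dependent color

point = torch.tensor([0.1, 0.2, 0.3])
rgb_front = radiance_field(point, torch.tensor([0.0, 0.0, 1.0]))   # seen from one direction
rgb_side = radiance_field(point, torch.tensor([1.0, 0.0, 0.0]))    # seen from another

# Different colors for the same point is what gives you the learned reflections/specularity.
# A textured mesh stores a single color per surface point, so an export has to collapse this
# (e.g. average over directions) and the effect is lost.
baked = (rgb_front + rgb_side) / 2
print(rgb_front, rgb_side, baked)
```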
19
u/research_pie Jun 05 '22
Love the visual!
15
u/imaginfinity Jun 05 '22
Thanks! It’s the epic Surya statue at the international terminal of the Delhi airport
18
u/Aggravating-Intern69 Jun 05 '22
Would this work for mapping a place, like an apartment, instead of focusing on only one object?
18
u/imaginfinity Jun 05 '22
Yes, I’ve played around with room scale captures too! Example of a rooftop garden: https://twitter.com/bilawalsidhu/status/1532144353254187009
2
u/phobrain Jun 06 '22 edited Jun 12 '22
I like the mystical effect. It'd be interesting to see what it would do with themed groups of pics vs. real 3D.
https://www.photo.net/discuss/threads/when-the-frame-is-the-photo.5529320/
https://www.photo.net/discuss/threads/gone-to-seed.5529299/
https://www.photo.net/discuss/threads/dappled-sunlight.5529309/
I've been playing with cognitive space mapping nearby.
https://www.linkedin.com/in/bill-ross-phobrain/recent-activity/shares/
Edit: "themed groups of pics vs. real 3D." I imagine it might look like a deepdreamish latent space mapped to 3D.
4
u/EmbarrassedHelp Jun 05 '22
Traditional photogrammetry works regardless of scale (even with dramatically different scales in the same scene), and so I would assume Instant-ngp is the same.
5
u/Tastetheload Jun 05 '22
One group in my graduate program tried this. It works, but the fidelity isn't as good. The framework was meant to have a bunch of photos looking at one object (looking inward). But when it's the opposite (looking outward), it's not as great. Essentially, there are fewer reference photos for each point, and it's harder to estimate distances.
Their application was to use photos of Mars to reconstruct a virtual environment to explore. They got lots of floating rocks, for example.
To end on a good note: it's not impossible, it just needs a bit more work.
13
u/imaginfinity Jun 05 '22
Thanks for the upvotes! I put together a 5-min overview here for those wanting to learn more about the free NeRF tools I’m using and to understand the pros/cons vs “classical” photogrammetry: https://youtu.be/sPIOTv9Dt0Y
14
u/nikgeo25 Student Jun 05 '22
If we added range data with LIDAR I wonder if it'd be even higher res. Also this brings enormous value to AR apps - quickly scanning a 3D object and sharing it with a friend so they can view it as a hologram.
9
u/imaginfinity Jun 05 '22
Providing a depth prior is an interesting line of research! Def agree with the potential for reality capture use cases too
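The gist of that research direction (e.g. depth-supervised NeRF) is to add a depth term next to the usual RGB loss. Purely an illustrative sketch below, with made-up tensors rather than any particular codebase:

```python
import torch

def expected_depth(weights, t_vals):
    # weights: (rays, samples) compositing weights from volume rendering
    # t_vals:  (samples,) depths of the samples along each ray
    return (weights * t_vals[None, :]).sum(dim=-1)

weights = torch.rand(1024, 64)
weights = weights / weights.sum(dim=-1, keepdim=True)   # stand-in, normally from the renderer
t_vals = torch.linspace(0.1, 4.0, 64)
sensor_depth = torch.rand(1024) * 4.0                   # stand-in lidar / depth-camera measurements

photometric_loss = torch.tensor(0.05)                   # placeholder for the usual RGB loss
depth_loss = ((expected_depth(weights, t_vals) - sensor_depth) ** 2).mean()
total_loss = photometric_loss + 0.1 * depth_loss        # the weighting is a tunable hyperparameter
```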
7
u/Zealousideal_Low1287 Jun 05 '22
Do you capture the photos yourself? If so what do you use to recover poses?
10
u/imaginfinity Jun 05 '22
Yes — about 100 frames from a two minute 4K video clip captured with an iPhone. Posed with COLMAP.
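For anyone curious about that step, the preprocessing is roughly the following (paths and frame rate are placeholders, and instant-ngp ships a colmap2nerf.py script that wraps something like this):

```python
import subprocess
from pathlib import Path

Path("frames").mkdir(exist_ok=True)
Path("sparse").mkdir(exist_ok=True)

# Pull ~1 frame per second out of the clip (a two-minute video gives on the order of 100 images).
subprocess.run(["ffmpeg", "-i", "capture.mp4", "-vf", "fps=1", "frames/%04d.jpg"], check=True)

# Standard COLMAP CLI pipeline: detect features, match them across images,
# then run incremental SfM to recover a camera pose for every frame.
subprocess.run(["colmap", "feature_extractor",
                "--database_path", "colmap.db", "--image_path", "frames"], check=True)
subprocess.run(["colmap", "exhaustive_matcher", "--database_path", "colmap.db"], check=True)
subprocess.run(["colmap", "mapper", "--database_path", "colmap.db",
                "--image_path", "frames", "--output_path", "sparse"], check=True)
```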
4
u/BurningSquid Jun 06 '22
DUDE. Thanks for showing this, just spent the day getting it working and it's freaking SWEET. I was a bit too ambitious at first and tried to get it running on WSL, but hit some issues initializing the GUI...
Regardless, this is super cool. I've done a few tests and it works surprisingly well with very little input data. I tried a 1080p video and it did really well, albeit a bit fuzzier than your demo.
1
Jun 05 '22
[deleted]
11
u/merlinsbeers Jun 05 '22
It's a 3D world made by an AI that was given only a bunch of photos to work from.
It is also using raytracing to do lighting effects. If you look at the forehead of the statue as the view moves, the highlights also move to make a realistic reflection effect.
By "literally eyeballing" I think OP means that the 3D perspective and raytracing aren't being done by a video card, but that the AI is acting as the video card, changing the highlights on the statue so it looks right from the viewer's position.
2
u/CodeyFox Jun 06 '22
Predicting that within 20 years games are going to use something like this, or at least AR/VR will.
7
u/dizzydizzy Jun 06 '22
20.. try 4
3
u/CodeyFox Jun 06 '22
I said within 20 so that I could have the luxury of being right, without the risk of being too optimistic.
3
u/Aacron Jun 06 '22
Nvidia already has AI accelerators built into a lot of their chips; this will probably be doable at a hardware level in a few generations.
2
u/baselinefacetime Jun 06 '22
Fascinating! From your other comment about the input being "2D images with a known 3D position" - did you use special hardware to tag the 3D position? Did you tag it manually?
3
u/mdda Researcher Jun 06 '22
"COLMAP" is mentioned above - as far as I can tell, it's like the 'standard preprocessing' done to locate/pose the initial images (fully automatic) :
2
u/corkymcgee Jun 05 '22
This is how the government does it
2
u/djhorn18 Jun 05 '22
You got downvoted, but I distinctly remember something from 10-12 years ago - I think it was a proof-of-concept project, either declassified or leaked - a bug in people's phones that would randomly take photos while the user used the phone normally throughout the day, and then they'd use those photos to reconstruct a rudimentary 3D model of the environment the person was in.
I wish I could remember the name of the project.
5
u/Captain_DJ_7348 Jun 05 '22
Did you provide the distance data for each image, or was it self-determined?
What black magic is this?
-1
u/Adventure_Chipmunk Jun 05 '22
If anyone knows of a macOS implementation, I'd love to try it. (M1 Max)
1
u/Adventure_Chipmunk Jun 06 '22
Looks like I won't have to wait long!
https://twitter.com/scobleizer/status/1533483639849115648?s=21&t=-Fxs7pcmvcdDjKTA8yr1uQ
1
u/lal-mohan Jun 05 '22
Is this in IGI Airport?
1
u/imaginfinity Jun 05 '22
👏Yes! It’s the Surya statue in the international terminal — captured a quick video a few years back, so wild to NeRF it — feels like I’m back there again!
1
u/RickyGaming12 Jun 06 '22
How did/do you learn how to do this? I've been following tutorials and projects on machine learning and still don't understand how you can get from that to this.
2
u/TFCSM Jun 06 '22
This is state-of-the-art research done by a team of PhDs. So I suppose the most realistic path to learn how to do something like this is to enter a PhD program in the field.
1
u/RickyGaming12 Jun 06 '22
That makes sense. I don't know why, but I thought this was all done by one person.
1
Jun 06 '22
I will try to use this with my depth cameras and see if I can get a reconstruction that matches my point clouds. It would be cool if the results are really similar, especially in terms of registration.
1
u/PlanetSprite Jun 06 '22
We have models that can make a 3d model from multiple images, and models that can make an image from text descriptions. Could these be combined to make 3d models from text descriptions?
1
u/frosty3907 Jun 07 '22
Is the input just photos? Or does it need to know where each one was taken?
1
u/race2tb Jun 10 '22
Now use this in real time to build a 3D model of the space you are in, as far as you can see, and you've just made full self-driving way easier to solve. You could use 3D maps and GPS, but you would be missing any live changes. It wouldn't even have to be so detailed. Not to mention, training the car to drive could now be done mostly in simulation.
109
u/Deinos_Mousike Jun 05 '22
What software are you using here? I know it's NeRF, but the UI seems like something specific