Hacker News
WorldGrow: Generating Infinite 3D World
Garlef:
It's about generating interesting virtual space!
otikik:
[1] https://www.challies.com/articles/no-mans-sky-and-10000-bowl...
nsxwolf:
Once you build a base or create some goal for yourself, it becomes interesting.
NBJack:
Minecraft is of course the poster child for very large worlds of interest these days.
Dwarf Fortress crafts an entire continent complete with a multi-century history, the results of which you can explore freely in adventure mode.
Most recent examples of 3D worlds like the one in the post tend to do it through wave function collapse.
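For reference, wave function collapse in its simplest form keeps a set of possible tiles per cell, repeatedly collapses the lowest-entropy cell, and propagates adjacency constraints. A minimal 1D sketch (the tile names and adjacency rules here are invented for illustration, not taken from any particular game):

```python
import random

# Hypothetical tiles: which tiles may appear immediately to the RIGHT of each.
ALLOWED_RIGHT = {
    "sea": {"sea", "beach"},
    "beach": {"sea", "beach", "land"},
    "land": {"beach", "land"},
}
# Derived: which tiles may appear immediately to the LEFT of each.
ALLOWED_LEFT = {t: {s for s, rights in ALLOWED_RIGHT.items() if t in rights}
                for t in ALLOWED_RIGHT}

def wfc_1d(length, seed=0):
    """Collapse a 1D strip of tiles, always picking the lowest-entropy cell."""
    rng = random.Random(seed)
    cells = [set(ALLOWED_RIGHT) for _ in range(length)]  # all options open

    def propagate():
        changed = True
        while changed:
            changed = False
            for i in range(length - 1):  # enforce constraints left-to-right
                allowed = set().union(*(ALLOWED_RIGHT[t] for t in cells[i]))
                if not cells[i + 1] <= allowed:
                    cells[i + 1] &= allowed
                    changed = True
            for i in range(length - 1, 0, -1):  # and right-to-left
                allowed = set().union(*(ALLOWED_LEFT[t] for t in cells[i]))
                if not cells[i - 1] <= allowed:
                    cells[i - 1] &= allowed
                    changed = True

    while any(len(c) > 1 for c in cells):
        # Lowest entropy = fewest remaining options among undecided cells.
        i = min((j for j, c in enumerate(cells) if len(c) > 1),
                key=lambda j: len(cells[j]))
        cells[i] = {rng.choice(sorted(cells[i]))}
        propagate()

    return [next(iter(c)) for c in cells]

strip = wfc_1d(10, seed=1)
print(strip)
```

Real 2D/3D implementations differ mainly in having neighbors in more directions and in needing backtracking when a cell's option set empties out.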
omnibrain:
Minecraft used to create very interesting worlds until they changed the algorithm and the landscapes became plain and boring. It took them about 10 years, until the Caves and Cliffs Update, to make world generation interesting again.
bogwog:
I know 'interesting' is subjective, but your comment is demonstrably false. Just type "mario 64 staircase" into YouTube and look at the hundreds (thousands? millions?) of videos and many millions of views.
f17428d27584:
Redefining “interesting” just so you can provide a completely irrelevant “correction” is bad faith trolling.
bogwog:
There's no secret formula to culture. Some programmers and AI people seem to think there is some magic AI model that will be able to produce cultural hits at the click of a button. If you're a boring person, you're not likely to "get" why something is interesting, or why that part can't just be automated away. No technology can help with that.
keyle:
And Valve, I think, used to have a series on level design, working from big to small with "anchor points", but I seem to have misplaced the link.
jpalomaki:
Maybe the idea is to create environments for AI robotics training.
analog8374:
Consider the patterns generated by cellular automata.
Both tend to stay interesting at the small scale but lose it to boring chaos at the large.
For this reason I think the better approach is to start with a simple level-scale form and then refine it into smaller parts, and then to refine those parts and so on.
(Vs. plugging away at tunnel-building like a mole)
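The coarse-to-fine idea can be sketched with classic midpoint displacement: start from the level-scale form (here just two endpoints), then recursively refine, halving the noise amplitude at each depth so detail is added without destroying the large-scale shape. All parameters are illustrative:

```python
import random

def midpoint_displace(left, right, depth, amplitude, rng):
    """Recursively refine a 1D height profile: coarse shape first, detail later."""
    if depth == 0:
        return [left, right]
    # Displace the midpoint, then refine each half with reduced amplitude.
    mid = (left + right) / 2 + rng.uniform(-amplitude, amplitude)
    a = midpoint_displace(left, mid, depth - 1, amplitude / 2, rng)
    b = midpoint_displace(mid, right, depth - 1, amplitude / 2, rng)
    return a + b[1:]  # drop the duplicated midpoint

rng = random.Random(42)
profile = midpoint_displace(0.0, 0.0, depth=5, amplitude=8.0, rng=rng)
print(len(profile))  # 2**5 + 1 = 33 points
```

Because the largest displacements happen first, the global form is fixed early and later recursion only fills in local texture, which is exactly the opposite of a cellular automaton growing structure bottom-up.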
nonethewiser:
I think that's a good way to put it. I started writing a reply before reading your comment entirely and arrived at basically the same conclusion, but more verbosely:
> For this reason I think the better approach is to start with a simple level-scale form and then refine it into smaller parts, and then to refine those parts and so on.
It seems hard to get away from having some sort of overarching goal, and then constantly looking back at it, at progressively smaller levels. Like, what is the universe of the thing you are generating randomly? Is it a dungeon in a roguelike? Is it meant to be one of many floors? Or is it a space inside a building? Is it a house? Is it an office? Is the office a standalone building or a skyscraper?
Perhaps a good algorithm would start big and go small:
- assume the universe to generate is a world
- pick a location and assign stuff to generate. Let's say it's a city
- pick a type of city thing to generate. Let's say it's a skyscraper
- etc., going smaller and smaller
- look at the city so far. Pick another type of city thing to generate based on what has been generated so far
- look at the world so far. Pick another type of thing to generate
Or maybe instead of looking back you could pre-divide into zones. But then, if you want to make an entire universe (as in multiple worlds), you need to just make random worlds, which leads back to your original problem (boring chaos at large scale), or go up another level to generate more intelligently.
Point being, you need some sort of top-down perspective on it.
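The steps above can be sketched as a recursive generator where each level decides the big thing first and then refines it, and each child choice can "look back" at the siblings generated so far. The taxonomy, budget, and look-back rule are all invented for illustration:

```python
import random

# Hypothetical taxonomy: what each kind of node may contain.
CHILDREN = {
    "world": ["city", "wilderness"],
    "city": ["skyscraper", "house", "office", "park"],
    "wilderness": ["forest", "lake"],
}

def generate(kind, rng, budget=4):
    """Top-down generation: pick the large-scale thing, then refine it
    into smaller parts, consulting what has been generated so far."""
    node = {"kind": kind, "children": []}
    for _ in range(budget):
        options = CHILDREN.get(kind)
        if not options:
            break  # leaf node: nothing smaller to refine into
        so_far = [c["kind"] for c in node["children"]]
        # "Look back": e.g. avoid repeating the immediately previous sibling.
        candidates = [o for o in options if not so_far or o != so_far[-1]]
        choice = rng.choice(candidates)
        node["children"].append(generate(choice, rng, budget - 1))
    return node

world = generate("world", random.Random(0))
print([c["kind"] for c in world["children"]])
```

The point carries over: coherence comes from the fact that every decision at a small scale is constrained by a decision that was already made at a larger scale.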
analog8374:
https://www.flickr.com/photos/jonathanmccabe/albums/72157622...
gcr:
I've dreamed of a NeRF-powered backrooms walking simulator for quite a while now. This approach is "worse" because the mesh seems explicit rather than the world just becoming what you look at, but that's arguably better for real-world use cases, of course.
embedding-shape:
> The code is being prepared for public release; pretrained weights and full training/inference pipelines are planned.
Any ideas of how it would be different from, and better than, "traditional" PCG? It seems like it'd give you more resource consumption, worse results, and less control, none of which is a benefit.
glenneroo:
> We tackle the challenge of generating the infinitely extendable 3D world — large, continuous environments with coherent geometry and realistic appearance. Existing methods face key challenges: 2D-lifting approaches suffer from geometric and appearance inconsistencies across views, 3D implicit representations are hard to scale up, and current 3D foundation models are mostly object-centric, limiting their applicability to scene-level generation. Our key insight is leveraging strong generation priors from pre-trained 3D models for structured scene block generation. To this end, we propose WorldGrow, a hierarchical framework for unbounded 3D scene synthesis. Our method features three core components: (1) a data curation pipeline that extracts high-quality scene blocks for training, making the 3D structured latent representations suitable for scene generation; (2) a 3D block inpainting mechanism that enables context-aware scene extension; and (3) a coarse-to-fine generation strategy that ensures both global layout plausibility and local geometric/textural fidelity. Evaluated on the large-scale 3D-FRONT dataset, WorldGrow achieves SOTA performance in geometry reconstruction, while uniquely supporting infinite scene generation with photorealistic and structurally consistent outputs. These results highlight its capability for constructing large-scale virtual environments and potential for building future world models.
hobofan:
Their block-by-block generation method seems to be too local in its considerations: each 3x3 section (i.e., the ones generated based on the immediate neighbors) looks a lot more coherent than the 4x4 sections and above. I think it might need to be extended to be less local, and in general it might also need to be paired with some sort of guidance system (e.g., one that, in the office example, would generate the overall floor layout first).
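One way to frame the locality concern: the conditioning context for each new block is a fixed-radius neighborhood, and widening that radius trades compute for coherence. A tiny sketch of the context-gathering step (this is a hypothetical illustration of the idea, not the paper's actual pipeline or API):

```python
def gather_context(grid, x, y, radius):
    """Collect already-generated neighbor blocks within `radius` (in blocks).
    radius=1 yields the 3x3 neighborhood the comment describes; a larger
    radius makes generation less local at higher cost."""
    context = {}
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            if (dx, dy) == (0, 0):
                continue  # the block being generated itself
            block = grid.get((x + dx, y + dy))
            if block is not None:
                context[(dx, dy)] = block
    return context

# Two blocks already generated; condition the block at (1, 1) on them.
grid = {(0, 0): "lobby", (1, 0): "hall"}
ctx = gather_context(grid, 1, 1, radius=1)
print(sorted(ctx.items()))
```

The guidance-system suggestion would amount to adding a second, global input alongside this local context, such as a precomputed floor-plan map that every block generation consults.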