Procedural Generation + HyperDec

Hyper Light Breaker

Enter the Overgrowth, a new realm in the world of Hyper Light. Play alone or with friends to explore open worlds, create new builds, rip through hordes and overcome the Crowns and the Abyss King.

[h2]HyperDec - Intro[/h2]

[img]{STEAM_CLAN_IMAGE}/42194569/73d23d4f97f88ddd69a622775b4944e2e1c74c30.gif[/img]

Originally, before it was called HyperDec, the procedural “decking” system was built to evaluate the height of the terrain at a given XY position and procedurally populate those spaces with props, using seed-informed deterministic random value selection for things like position, rotation, and scale, as well as parametric variation for things like spacing between props, maximum count, height and slope ranges, spawn area, etc. From there, we wanted to explore applying artistic intentionality to props and clusters of props, defining “child spawns” that would anchor themselves around spawned instances. Pieces had filters for what kinds of surfaces they could and couldn’t spawn on, as well as custom avoidance filters and world-aligned texture masks, so users could parameterize relational behaviors between types of props, all of which were piped into a global HISM array.

[previewyoutube=l5JrffdCexs;full][/previewyoutube]
[previewyoutube=pIWiaye4JMY;full][/previewyoutube]

After proving out simply laying out these pieces & giving them relational considerations, we moved on to zone targeting. In addition to randomized terrain on each run (more on terrain from Len below), we wanted distinctive zones with unique props in each. Thanks to some very clever programming from [url=https://www.linkedin.com/in/pehastings/]Peter Hastings[/url], Senior Gameplay Engineer, we were able to very efficiently read zone data encoded into world-aligned textures and filter placement accordingly.

[img]{STEAM_CLAN_IMAGE}/42194569/c99d9ac790347128f34bcae6c043d522c1abb745.gif[/img]
[img]{STEAM_CLAN_IMAGE}/42194569/7e4e47042e7e0cd6034bcd8b09b193ba98ba8f20.gif[/img]
[img]{STEAM_CLAN_IMAGE}/42194569/d51005ea9d87a457a71c0257578f91f631977133.gif[/img]

Artists and designers could create Data-Only-Blueprint assets that contained primary and secondary assets to spawn, along with their parameters for placement on the terrain. This workflow of randomized terrain with zone identifications became the foundation of our procedural decking paradigm. Initially, this paradigm worked out well, but over time we ran into issues when trying to implement it at scale.

[h2]A Setback[/h2]

As the system continued to grow, our implementation started to run into issues. Rather than only placing static props, we began using the system to place gameplay objects and applied more robust filtering for things like flatness detection. Our terrain evaluation was happening at runtime, per prop, with prop counts getting up into the 70K-100K range, which meant that the startup time for each run took longer and longer. We also ran into issues balancing density & variation against replication for multiplayer; all of these tens of thousands of objects needed to consistently show up on every player’s instance. Having all procedural placement done on the server and then passing that enormous amount of data to players on begin play was infeasible, so instead we had the server spawn only the gameplay-relevant pieces, and each connected client received a seed number from the server to feed into the client-side placement of props. Using the same seed across all clients meant that even though they were spawning objects locally, they would all spawn with the same transforms informed by the seed.
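If you’re curious why a single replicated number is enough to keep every client’s props in sync, here’s a tiny, engine-agnostic sketch of the idea. All names here are made up for illustration; the real pipeline is the Blueprint/C++ system described above.

[code]
// Minimal sketch (not our actual code): every machine that runs this with the
// same seed produces the exact same list of transforms.
#include <cstdint>
#include <random>
#include <vector>

struct PropTransform {
    float x, y, yaw, scale;   // position, rotation, uniform scale (hypothetical fields)
};

std::vector<PropTransform> PlaceProps(uint32_t seed, int count,
                                      float areaSize, float minScale, float maxScale)
{
    std::mt19937 rng(seed);                                   // deterministic for a given seed
    std::uniform_real_distribution<float> pos(0.f, areaSize);
    std::uniform_real_distribution<float> rot(0.f, 360.f);
    std::uniform_real_distribution<float> scl(minScale, maxScale);

    std::vector<PropTransform> out;
    out.reserve(count);
    for (int i = 0; i < count; ++i)
        out.push_back({ pos(rng), pos(rng), rot(rng), scl(rng) });  // braced lists evaluate left to right
    return out;   // the server only needs to replicate `seed`, never this whole array
}
[/code]

Because the random stream is fully determined by the seed, the only thing that ever has to travel over the network is that one integer; each client regenerates identical transforms on its own.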
While we were able to achieve a satisfying amount of variation and distinction, it became clear that the increasing generation time wouldn’t be sustainable long-term. [h2]Rethinking Our Design Paradigm[/h2] Tech Art & Engineering sat down and re-thought our design paradigm for procedurally generated content in the game, and wound up completely re-working our implementation from the ground up. We were able to move away from a solely-blueprint-driven pipeline for procedural decking, leveraging faster C++ execution, thanks to some awesome effort put in by [url=https://twitter.com/Burnrate_Dev]Justin Beales[/url], Senior Gameplay Engineer. We also moved the per-prop terrain evaluation from runtime to design-time. This allowed us to pre-determine placement of objects and then feed very simple data into a runtime system that grabbed the objects and their intended locations and place them accordingly. Each stage’s variants would have coinciding data to reference, and using a DataTable to layout objects & parameters, we could “pre-bake” candidate points for each object type in the editor, and then save that data for quick reference on begin play. So while there are a limited number of variants as a whole, the selection of candidate points from the list could be randomized with a seed, meaning that the same variant could have unique prop/gameplay layouts every time. [img]{STEAM_CLAN_IMAGE}/42194569/b5141d08e36a2d0c52b93ee6eaa9c28988baef23.gif[/img] Now that we had generation in a better spot, we set out to expand on the artistic intentionality of the pieces being spawned. It became clear over time that the use of anchor-clustering & avoidance distances would not be enough to make these levels look less like math in action and more like art. This idea and conversation led to the creation of HyperFabs, which are spawned just like regular props via HyperDec, but have some more advanced logic & artistic implications. [h2]HyperFabs[/h2] HyperFabs take the concept of Prefabs (prop or object arrangements saved as a new object for re-use) and add some additional utility & proceduralism to them. The overall idea is that artists can lay out arrangements/mesh assemblies, that are intended to represent a small part of what would normally be a hand-decorated level. They then can use a custom script we’ve built to store those meshes in a generated Blueprint asset, that can then be placed on the terrain. The center point of the actor will align to the terrain, but then based on rules exposed that artists can tweak and assign to components/groups of components using Tags, the individual pieces in the HyperFabs will also conform to the terrain surrounding the actor’s center point in the world. It takes our original idea of relational spawning, but allows artists to lay out these relations through traditional level design tools instead of strictly through DataTable parameters. [img]{STEAM_CLAN_IMAGE}/42194569/a2974f439a4202bc8c1c0b379ac3239e9272231a.gif[/img] [i]A boulder assembly turned into a HyperFab, made by Will in Enviro[/i] It doesnt have to just be for small arrangements though; entire city blocks have been baked into a HyperFab, which conforms to varying terrain as expected. 
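To give a rough sense of what “conforming” means here, below is a minimal sketch of the core idea. The names and the stand-in terrain query are hypothetical; the real rules live in the generated Blueprints and are driven by the artist-assigned Tags described above.

[code]
// Hypothetical sketch of per-piece terrain conforming: the actor's origin snaps to
// the terrain, and tagged pieces re-sample the height under their own XY offset so
// the assembly drapes over the surface instead of hovering on a single plane.
#include <algorithm>
#include <cmath>
#include <string>
#include <vector>

// Stand-in for however the terrain is actually queried (trace, heightfield lookup, etc.).
float SampleTerrainHeight(float x, float y) {
    return 50.f * std::sin(x * 0.01f) * std::cos(y * 0.01f);   // placeholder rolling hills
}

struct FabPiece {
    float localX, localY, localZ;        // authored offset from the HyperFab's origin
    std::vector<std::string> tags;       // e.g. "ConformToTerrain"
};

void ConformHyperFab(std::vector<FabPiece>& pieces, float originX, float originY)
{
    const float originZ = SampleTerrainHeight(originX, originY);   // center point aligns first
    for (FabPiece& p : pieces) {
        const bool conform =
            std::find(p.tags.begin(), p.tags.end(), "ConformToTerrain") != p.tags.end();
        if (conform) {
            // Tagged pieces follow the terrain under their own position, relative to the origin;
            // untagged pieces keep their authored offset exactly as laid out by the artist.
            float groundZ = SampleTerrainHeight(originX + p.localX, originY + p.localY);
            p.localZ += groundZ - originZ;
        }
    }
}
[/code]

The real rules are richer than this (per-tag behaviors, groups of components, and so on), but the heart of it is simply re-sampling the terrain under each tagged piece.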
[img]{STEAM_CLAN_IMAGE}/42194569/7ae3c7676493bbf6a0131c04050ecdd8ed6769ca.gif[/img]
[i]A city block assembly turned into a HyperFab, made by Wolf in Enviro[/i]

The script for baking HyperFabs from mesh assemblies is smart enough to know when to use static mesh components versus mesh instancing, and it also has a utility to merge stacked/co-dependent objects into new static mesh assets, which helps with performance & automation.

[h2]Other cool bits[/h2]

[h3]Shoreline Generation[/h3]

A neato bit of tech I worked on before we used terrain variants was shoreline generation. Since terrain was being generated using a voxel system, each playthrough generated terrain that was completely random (but also much harder to control and make look nice than our new approach!). This meant that we couldn’t pre-determine shoreline placement, whether through splines, decals, or shader work. After a bit of research, I learned about [url=https://www.froyok.fr/blog/2018-11-realtime-distance-field-textures-in-unreal-engine-4/]Jump Flooding[/url], an algorithm that can generate a distance texture from seed data in a source texture, working in UV space. In the case of shorelines, I captured an intersection range of the terrain and used that as a base mask. That mask was then jump-flooded to give us a gradient, which could be fed into the UV channel of a basic waves-mask texture running perpendicular to the direction of the wave lines. Using some additional time math and noise modulation, waves could pan along that distance gradient, with shape and body breakup, plus controls for distance from shore, wave count, and initial capture depth.

[previewyoutube=m1rCgoJO1pw;full][/previewyoutube]

[h3]Flatness Detection[/h3]

Another challenge we ran into for procedural placement was flatness-range detection; some objects had platform-like bases that needed an acceptable range of flatness so that they weren’t perched awkwardly on the side of a cliff or floating on little bumps in the terrain. The first iteration of flatness detection used traces from randomly selected points in a grid formation, comparing the average height offsets, with a variable failure tolerance and grid resolution, before determining if a point was flat enough.

[previewyoutube=-dw61ly_0fk;full][/previewyoutube]

While this approach did find flat areas, it was costly and prone to prolonged searching, which blocked the generation process while platforms found their place. After we moved candidate point determination to design time, we reworked the search function to use the terrain point data in a similar grid-check fashion, using grid space partitioning to speed up the referencing of bulk points. That led to this fun little clip of the proof-of-concept, showing an object finding approximate nearest neighbors with no collision/overlap checks, just location data.

[previewyoutube=NNM-6599l9o;full][/previewyoutube]

While this did divert the computational cost of determining flatness over distance from runtime to design time, it was still very slow and presented a blocker for design & environment when pre-baking asset candidate points. After a bit of research, jump flooding came to the rescue again. The workflow for flatness-range detection works in a series of steps. First you get a normal map of the terrain you’re evaluating and mask it by slope, with anything below a configurable slope value counting as flat, and anything above it as too steep.
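As a simplified illustration of that first masking step: our version works from the terrain normal map in shader work, but the same thing can be sketched CPU-side straight from a heightfield (all names here are just for illustration).

[code]
// Sketch of the slope-mask step: derive a slope value per texel and threshold it
// into a flat/steep mask. In practice this is a shader pass over the normal map.
#include <cmath>
#include <cstdint>
#include <vector>

// heightfield: row-major grid of terrain heights; cellSize: world units between samples.
std::vector<uint8_t> BuildFlatMask(const std::vector<float>& heightfield,
                                   int width, int height,
                                   float cellSize, float maxSlopeDegrees)
{
    const float maxTan = std::tan(maxSlopeDegrees * 3.14159265f / 180.f);
    std::vector<uint8_t> mask(width * height, 0);   // 1 = flat enough, 0 = too steep
                                                    // (border texels left marked steep for simplicity)
    for (int y = 1; y < height - 1; ++y) {
        for (int x = 1; x < width - 1; ++x) {
            // central differences give the local gradient of the heightfield
            float dzdx = (heightfield[y * width + (x + 1)] - heightfield[y * width + (x - 1)]) / (2.f * cellSize);
            float dzdy = (heightfield[(y + 1) * width + x] - heightfield[(y - 1) * width + x]) / (2.f * cellSize);
            float slope = std::sqrt(dzdx * dzdx + dzdy * dzdy);   // tangent of the slope angle
            mask[y * width + x] = (slope <= maxTan) ? 1 : 0;
        }
    }
    return mask;
}
[/code]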
[img]{STEAM_CLAN_IMAGE}/42194569/65686c54c4d0d4113a9c1b880d3b61b62a691f10.png[/img]
[i]White areas are flat, black areas are too steep or below shoreline height[/i]

We then invert this output to provide a sort of “cavity mask” of areas that are flat enough for placement. But we also needed to know how far a given point was from the nearest non-flat area, so that we didn’t pick a point that was flat at that exact spot but not flat across the full size/footprint of the object we were searching for. To solve this, we jump-flood that slope/cavity mask, and then convert the 0-1 values in the output texture’s UV space into their world-space equivalents, based on the size of the terrain. This gives us a distance mask that we can then threshold, returning us to the yes-or-no mask configuration that can be read at each point evaluation.

[img]{STEAM_CLAN_IMAGE}/42194569/80331df5ed456982c29887f5f2a8154efa3476a4.png[/img]
[img]{STEAM_CLAN_IMAGE}/42194569/40571c01099be69ccebf605df8e93b9f23d23ad1.png[/img]

Because all of these steps run as shader calculations instead of collision queries or trace loops, the time to find flat-range points for assets decreased so much that the generation time is nearly indistinguishable when baking points with and without flatness checks. Yay shaders! Here are some fun gifs of the values for distance & slope being changed when creating a flatness mask.

[img]{STEAM_CLAN_IMAGE}/42194569/7ed66f89858ce187485342e4e667f591e7200e47.gif[/img]
[img]{STEAM_CLAN_IMAGE}/42194569/b2ec1aab49e02a299ce4a9d534ee1a3af9f0ddca.gif[/img]
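And for anyone who wants to poke at the algorithm behind those distance values, here’s a simplified CPU-side sketch of a jump flood over a binary mask. Our version runs as shader passes (see the Jump Flood article in the sources at the end), and every name here is just for illustration.

[code]
// CPU sketch of jump flooding: seeds are the "not flat" texels; the output is,
// for every texel, the (approximate) distance to the nearest seed.
#include <cmath>
#include <cstdint>
#include <limits>
#include <vector>

std::vector<float> JumpFloodDistance(const std::vector<uint8_t>& seedMask, int size)
{
    const int INVALID = -1;
    std::vector<int> nearest(size * size, INVALID);       // index of nearest seed texel found so far
    for (int i = 0; i < size * size; ++i)
        if (seedMask[i]) nearest[i] = i;                   // seeds start pointing at themselves

    auto dist2 = [size](int a, int b) {
        float dx = float(a % size - b % size), dy = float(a / size - b / size);
        return dx * dx + dy * dy;
    };

    for (int step = size / 2; step >= 1; step /= 2) {      // halve the jump distance each pass
        for (int y = 0; y < size; ++y)
            for (int x = 0; x < size; ++x) {
                int self = y * size + x;
                for (int oy = -1; oy <= 1; ++oy)
                    for (int ox = -1; ox <= 1; ++ox) {
                        int nx = x + ox * step, ny = y + oy * step;
                        if (nx < 0 || ny < 0 || nx >= size || ny >= size) continue;
                        int cand = nearest[ny * size + nx];
                        if (cand == INVALID) continue;
                        if (nearest[self] == INVALID || dist2(self, cand) < dist2(self, nearest[self]))
                            nearest[self] = cand;          // adopt the closer seed
                    }
            }
    }

    std::vector<float> distance(size * size, std::numeric_limits<float>::max());
    for (int i = 0; i < size * size; ++i)
        if (nearest[i] != INVALID) distance[i] = std::sqrt(dist2(i, nearest[i]));
    return distance;                                       // scale by texel size for world units
}
[/code]

Threshold those distances at an object’s footprint radius and you’re back to the simple yes-or-no mask described above.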
[h3]Breaker Terrain Generation Basics[/h3]

The HyperDec terrain process generates the foundational landscapes upon which all other art and gameplay assets are placed. The ideal for a rogue-like would be that every run of every stage is completely unique in decking AND landscape. However, pretty early on we ruled out completely procedural landscape generation simply because of the R&D time it would have entailed. We also had the notion that our gameplay would require some very tight control over the kinds of hills, valleys, waterways, and other landscape features that emerged. In a fully procedural landscape system, running in seconds as a player waited, we might sometimes get landscapes that just plain broke gameplay; this was unacceptable. So we went with semi-procedural. Our approach is to generate a large, though not infinite, collection of terrain meshes offline that, when combined with our highly randomized HyperDecking system, can give the impression of limitless, fresh gameplay spaces every time you play. Initially we explored voxel-based terrain, since it was an artist-friendly way to quickly build interesting terrain shapes. This was eventually abandoned, as the run-time implications of voxels were an unknown and we didn’t have the R&D time available to ensure their success. Work continued with algorithmic island generation spearheaded by Peter Hastings. Many of the features present in this early work exist in our current terrain as well.

[img]{STEAM_CLAN_IMAGE}/42194569/e5346536d34dae912cbcf3305f0cd73445c73ee4.png[/img]
[i]Procedural Island Generation, Peter Hastings[/i]

At some point it was clear that iteration and experimentation would put serious strain on the purely algorithmic approach. This led to adopting the procedural tool Houdini as the master terrain generator. This was especially useful since we could directly translate all the algorithmic work into Houdini and then further refine the topology in later parts of the Houdini network. The algorithms were first rewritten directly in Python, and then later in Houdini’s native VEX language for speed. Further, Houdini is effective at generating lots of procedural variations once a network is producing solid results on a smaller scale. Our goal is to have at least 100 variations of each stage to draw from during play, and using Houdini allows a single network to drive all of those variations.

[img]{STEAM_CLAN_IMAGE}/42194569/9c9e1c107613687f03ad16f6e3233e15132acdae.png[/img]
[i]A bird’s eye view of a Houdini sub-network generating a component of the terrain[/i]

[img]{STEAM_CLAN_IMAGE}/42194569/7cc0b31e57e355659873a9b8014e528425c738e9.png[/img]
[i]One of the current terrain variants for one stage, without any HyperDecking[/i]

For many of our stages, each terrain is effectively an island composed of sub-islands, each of which is assigned a “Zone”. A Zone is basically like a biome in that it is intended to have a look and feel clearly distinct from other zones. Zones are intended to look good, but they also help the player navigate and get their bearings as they move around the play space. In order to provide these features in every terrain variant, the Houdini network combines noise fields with specific scattering of necessary topological features. Each stage has a different take on this basic formula, and R&D is ongoing into how to get more compelling, interesting caves, hills, nooks, and crannies without creating game-breaking problems (like inescapable pits, for example).

[img]{STEAM_CLAN_IMAGE}/42194569/42264cbdf5f3b6c9acc3b3f9cca16113cfe8116e.gif[/img]
[i]Visualizing a walk through the Houdini processing chain that converts a circle into terrain.[/i]

The animated image above shows one processing chain that starts with a basic circle geometry delineating the overall footprint of the island and, via a chain of surface operators, eventually ends up as playable terrain. Many of the operations involve random noise that contributes to the differences between variations. Both Houdini height fields (2D volumes) and mesh operators are employed at different points to achieve different results. The initial circle is distorted, then fractured, to yield the basis of a controllable number of separate sub-islands. Signed distance fields are calculated from the water’s edge (z=0) to produce the underwater-to-beach transition slopes. More specific mesa-type shapes are scatter-projected into the height field to yield controllable topology that plays well compared to purely noise-generated displacements. In the final section, geometry at the boundary area is projected into the height field as a mask, distorted via noise fields, and displaced to create the stage’s outer perimeter. The full chain of operations can generate a large number of unique terrains that all exist within constraints set out by game design. Another feature that exploits the fact that our terrains are not pure height fields is cave tunnels and caverns. These are generated as distorted tube-like meshes that are then subtracted from a volume representation of the above mesh. We are excited to push cave-tech (tm) in the future to generate some interesting areas for discovery for the player.
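In signed-distance terms, that subtraction boils down to a one-liner. Here’s a toy sketch with a simple capsule standing in for the tunnel; our actual tunnels are distorted meshes processed as volumes in Houdini, so treat this purely as an illustration of the boolean.

[code]
// Toy sketch of "subtract a tube from a volume" using signed distance values
// (negative = inside solid). Carving is just CSG subtraction: ground minus tunnel.
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  Sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Signed distance to a capsule (a straight "tube" from A to B with the given radius).
float CapsuleSDF(Vec3 p, Vec3 a, Vec3 b, float radius)
{
    Vec3 pa = Sub(p, a), ba = Sub(b, a);
    float h = std::clamp(Dot(pa, ba) / Dot(ba, ba), 0.f, 1.f);   // closest point on the segment
    Vec3 d = { pa.x - ba.x * h, pa.y - ba.y * h, pa.z - ba.z * h };
    return std::sqrt(Dot(d, d)) - radius;
}

// Carve the tunnel out of the terrain volume at point p:
// keep terrain everywhere except inside the tunnel.
float CarvedTerrainSDF(float terrainSDF, Vec3 p, Vec3 tunnelStart, Vec3 tunnelEnd, float tunnelRadius)
{
    return std::max(terrainSDF, -CapsuleSDF(p, tunnelStart, tunnelEnd, tunnelRadius));
}
[/code]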
Unfortunately, producing production-quality terrains requires increasing the resolution of the resulting mesh, which is starting to slow Houdini down compared to the early days when everything processed so briskly. These are relatively large meshes that get converted back and forth between mesh, height field, and voxel representations to get the job done. As production moves forward and we start generating all the variants needed for gameplay, the plan is to offload processing to a nightly job on a build machine so no one has to sit at their screen for hours watching the wheel spin.

[h3]Articles & Sources:[/h3]

Jump Flood Process in UE4: [url=https://www.froyok.fr/blog/2018-11-realtime-distance-field-textures-in-unreal-engine-4/]https://www.froyok.fr/blog/2018-11-realtime-distance-field-textures-in-unreal-engine-4/[/url]

Flatness Detection Abstract: [url=https://gamedev.stackexchange.com/questions/125902/is-there-an-existing-algorithm-to-find-suitable-locations-to-place-a-town-on-a-h]https://gamedev.stackexchange.com/questions/125902/is-there-an-existing-algorithm-to-find-suitable-locations-to-place-a-town-on-a-h[/url]

Grid Space Partition Process: [url=https://gameprogrammingpatterns.com/spatial-partition.html]https://gameprogrammingpatterns.com/spatial-partition.html[/url]

[h2]Wrap Up[/h2]

As you can see, our team has spent considerable effort on thoughtful procedural generation in order to make the flow of game levels feel coherent and intentional. Want more on procedural generation? Len also did [url=https://www.youtube.com/watch?v=NoW5yqPc11w]this talk on tech art in Solar Ash[/url]!

[h2]Let Us Hear From You![/h2]

What do you think of what you’ve seen (and heard) so far? Are you a tech artist or aspiring to be one? How would you have tackled these issues?