The ultimate climbing and bouldering simulation game. Explore and climb 250 real world routes, create your own routes and compete against your friends from the safety of your computer.
[img]{STEAM_CLAN_IMAGE}/43535104/dec419166cc2bc361f4a0a6d908abe2b4154f89d.jpg[/img]
As the final part of the photogrammetry process, it’s all about getting the model into the game. Even though we decimated the mesh in the previous step, it still has tens of millions of faces and comes in one big piece. If we just dropped it into the game it might technically still run, but most of the mesh doesn’t need that much definition most of the time.
The first solution you might think of is to decimate it further. That would be perfectly fine for a lot of games, but in New Heights the climbing system actually needs the full definition of the mesh to decide whether something is climbable. So we still need the dense mesh up close, but at a distance we can show a lower-resolution version. Using LODs (Levels of Detail) we can achieve this right in Unity.
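The core idea behind LODs can be sketched in a few lines. This is a minimal illustration, not Unity's actual LOD API: pick the mesh version based on how far the camera is, with the distance thresholds here being made-up example values.

```python
# Minimal sketch of distance-based LOD selection (illustrative only,
# not Unity's actual LODGroup API): return the index of the mesh
# version to render for a given camera distance.

def select_lod(distance, lod_thresholds):
    """Return the LOD index for a given camera distance.

    lod_thresholds: ascending distances at which each LOD switches over,
    e.g. [10.0, 50.0] means LOD0 under 10 m, LOD1 under 50 m, LOD2 beyond.
    """
    for lod, threshold in enumerate(lod_thresholds):
        if distance < threshold:
            return lod
    return len(lod_thresholds)  # past the last threshold: coarsest LOD

print(select_lod(5.0, [10.0, 50.0]))    # close by: LOD0, the dense mesh
print(select_lod(120.0, [10.0, 50.0]))  # far away: LOD2, the coarsest mesh
```

In Unity the engine handles this switching for you once a LOD group is set up; the sketch just shows the decision the engine is making every frame.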
The only problem with this approach is that when you get close to the mesh it still switches to the full-resolution version, which drops your framerate fast. Our meshes are also very large inside the game world, because they are actual scanned cliffs. This means that standing on one edge of the cliff would load the entire cliff as the dense mesh, even though most of it is still really far away.
With this in mind we decided it would be a good plan to cut the dense mesh into chunks. That way we can create individual LODs for each chunk and only show the dense version of a chunk when you’re actually close enough to climb on it.
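The chunking step boils down to assigning every triangle to a grid cell. Here is a hedged sketch of that idea in plain Python; the mesh representation (a vertex list plus a triangle index list) and the grid-over-the-ground-plane layout are our assumptions, standing in for whatever the real Blender script operates on.

```python
# Sketch of mesh chunking: bucket each triangle into an (x, z) grid cell
# based on its centroid, so every chunk can later get its own LOD chain.
# Vertices are (x, y, z) tuples; triangles are index triples into that list.

import math
from collections import defaultdict

def chunk_triangles(vertices, triangles, chunk_size):
    """Group triangle indices into grid cells of `chunk_size` units."""
    chunks = defaultdict(list)
    for tri_index, (a, b, c) in enumerate(triangles):
        # Centroid of the triangle on the ground plane (x and z axes).
        cx = (vertices[a][0] + vertices[b][0] + vertices[c][0]) / 3.0
        cz = (vertices[a][2] + vertices[b][2] + vertices[c][2]) / 3.0
        cell = (math.floor(cx / chunk_size), math.floor(cz / chunk_size))
        chunks[cell].append(tri_index)
    return chunks

# Two small triangles far apart end up in different chunks:
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0),
         (9.0, 0.0, 9.0), (10.0, 0.0, 9.0), (9.0, 0.0, 10.0)]
tris = [(0, 1, 2), (3, 4, 5)]
print(chunk_triangles(verts, tris, 5.0))  # two cells, one triangle each
```

A real cutter also has to deal with triangles straddling a cell boundary (split them or assign by centroid and accept slightly ragged edges); centroid assignment is the simplest choice and is what this sketch does.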
[img]{STEAM_CLAN_IMAGE}/43535104/130e4576821eec2c6e23a6cd68af27d63b911556.jpg[/img]
For cutting the cliff mesh and decimating each of the LODs, we decided it was time to switch programs and move to Blender. This allowed us to use the power of a full-blown 3D program together with Python to automate the process.
The Python script was built fully with our pipeline in mind: the input is always the textured mesh from MetaShape, and the output should be easy to integrate into Unity. Even though Python might not be the hardest language to work in, it took us quite a bit of testing, iterating and tweaking to get everything working and connecting properly. Any small misalignment would show up in the game as seams and ridges.
In the end we got the script dialed in, and now we can fully automatically cut the cliff mesh into chunks, create multiple decimated versions of each chunk (one per LOD) and structure them so that an import script in Unity can easily put them all back together.
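The batch structure of such a script can be sketched as a simple nested loop. This is our guess at the shape of it, not the actual pipeline code: the real script would drive something like Blender's Decimate modifier, while `export` here is a stand-in callback, and the per-LOD reduction factor is an example value.

```python
# Hedged sketch of the per-chunk/per-LOD driver loop: every chunk gets
# one export per LOD, with a progressively smaller face-count ratio.
# `export` stands in for the actual decimate-and-write-FBX step.

def process_cliff(chunk_ids, lod_count=3, factor=0.25, export=print):
    """Run the decimation/export step for every chunk and LOD level."""
    for chunk_id in chunk_ids:
        for lod in range(lod_count):
            ratio = factor ** lod  # LOD0 keeps 1.0, LOD1 0.25, LOD2 0.0625
            export(chunk_id, lod, ratio)

process_cliff([0, 1])  # prints one line per chunk/LOD combination
```

Keeping the loop dumb like this and pushing all the real work into one callable makes the script easy to re-run on a single chunk when something goes wrong.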
Blender’s output ended up being a root folder for the cliff containing a folder for each chunk. Inside each chunk folder are the FBX files for each of the LODs (three, in our case, for now). With that structure in mind we created an import script in Unity that takes the root folder as input and creates a GameObject for each chunk, with its LOD group filled with the correct meshes. In the end this gets us a nice prefab, ready for use!
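The import side is essentially a walk over that folder structure. The actual script is a Unity editor script, but the grouping logic can be sketched in Python terms; the folder and file naming here mirrors the structure described above and is our assumption, not the pipeline's actual convention.

```python
# Sketch of the import-side grouping: map each chunk folder under the
# cliff's root folder to its ordered list of LOD mesh files, which is
# exactly what an LOD group needs to be filled with.

from pathlib import Path

def collect_lod_groups(root):
    """Return {chunk_folder_name: [LOD0.fbx, LOD1.fbx, ...]} for a cliff."""
    groups = {}
    for chunk_dir in sorted(Path(root).iterdir()):
        if chunk_dir.is_dir():
            # Sorting by name puts LOD0 before LOD1 before LOD2, so the
            # list order matches the LOD order.
            groups[chunk_dir.name] = sorted(chunk_dir.glob("*.fbx"))
    return groups
```

The Unity script would then create one GameObject per dictionary key and assign each file list to that object's LOD group, densest mesh first.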
[img]{STEAM_CLAN_IMAGE}/43535104/404ae89f50256e50cfb599f3541dfa0cd1a72d19.gif[/img]
That is the overview of our full photogrammetry pipeline: the reasoning behind why we use it, testing whether it would even work, doing our first scans, creating our first models and getting those models functional in the game. As you can imagine, we are always looking to optimize the workflow by automating more, improving the quality of the final prefab, decreasing processing time and tweaking every step along the way. Currently we are looking into how to better remove noise and fill holes in the mesh.
We hope this overview of how we approach photogrammetry in games helps you when you want to use (meganormous) photogrammetry assets in your own game. Let me know if you need any help!