Druidstone Art Pipeline

This is the first post of our behind-the-scenes series, where we take a closer look at how games are made and what happens under the hood of a game engine.

In the game industry, an art pipeline is the process by which an art asset is designed and produced, with particular tools and methods, so that it is compatible with the game engine and design. I’ll walk you through the art pipeline we have created for Druidstone and show how the Dark Knight, one of the enemies facing the player, was created.

The art pipeline in Druidstone revolves mainly around Blender, a 3D application that is open source and free for anybody to use. Pipelines are usually built around the software that is used the most, or that has some advanced features essential to the production. In our case we needed software in which you can rig (build a digital skeleton for a character, plus controls for moving it) and animate objects. There are many all-rounder 3D tools available, but as they are highly complicated and specialized, they are also very expensive. We ended up trying Blender, and I must admit I was a bit skeptical at first, as it seemed to have a steeper learning curve than I was used to in other software. But in the end we got the hang of it and managed to push a test character all the way through our new Blender-based pipeline.

So what steps does an art pipeline usually involve? It can be broken into a couple of main sections: concept design, modeling, texturing, rigging and animation. All these steps are art forms in themselves, and bigger studios have full teams dedicated to each of them. But as an indie developer you have to have a profound understanding of all of them. Luckily we have Jyri on the team doing the rigging and animations, because that is definitely my weak point.

Basic flow of the Druidstone art pipeline.

 

Concept art is the initial design phase of pretty much any asset that goes into the game. In this case we first decide and plan what kind of enemy we need, what abilities and features it should have, and how it sits in the gameplay and story. Then I usually pick up Photoshop and draw or paint the concept. Concepts can be made in numerous ways; I think the majority of my concepts are just doodles on post-it notes. The number of iterations it takes to nail down a concept can vary greatly if the subject is very abstract or unusual. Drawing a concept is a great way of going through different ideas, and you get a more tangible vision of the subject than from, say, a written description. Sometimes, when I have a clear vision, I may skip the whole concept stage and wing it straight in 3D. That’s pretty rare and usually happens with weird organic monsters that are best blocked out in 3D. That’s the awesome part of being indie: I don’t have to approve designs in long meetings with a bunch of producers 🙂

 

After the concept is done, it’s time to transfer that idea into three dimensions. I use a variety of tools to do that, but mostly my tools of choice are ZBrush and Blender combined. First the character is modeled in high resolution. That means it’s highly detailed and is sometimes made up of millions of polygons (polygons are flat faces, typically triangles, in 3D space that form a surface when combined; they are the backbone of every 3D object). After the high-resolution model is done, a low-resolution version of the same model is made. That low-resolution model has far fewer polygons, so that the game engine can run it in real time at 60 fps. The high-resolution detail is then “baked” into the low-resolution model so that it looks like it has all the details, but with only a fraction of the polygons.
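
To make the baking idea concrete, here’s a toy one-dimensional sketch (my own illustration, not code from the Druidstone pipeline): a dense “high-res” surface carries fine bumps a coarse mesh can’t hold, so we sample its slopes into a small array that plays the role of a baked normal map the low-res model can look up at render time.

```python
import math

def high_res_height(x):
    # Dense detail: a large shape plus fine bumps the low-res mesh can't represent.
    return math.sin(x) + 0.05 * math.sin(40 * x)

def bake_detail(texture_size, x_min=0.0, x_max=math.pi):
    """Sample the high-res surface's slopes into a 1D 'normal map'."""
    texels = []
    for i in range(texture_size):
        x = x_min + (x_max - x_min) * i / (texture_size - 1)
        eps = 1e-4
        # Central difference: the local slope stands in for a surface normal here.
        slope = (high_res_height(x + eps) - high_res_height(x - eps)) / (2 * eps)
        texels.append(slope)
    return texels

normal_map = bake_detail(64)
print(len(normal_map))  # 64 texels of baked detail
```

The real thing works on 2D textures and full 3D normals (the baker ray-casts from the low-poly surface to the high-poly one), but the principle is the same: detail the geometry no longer has is stored in an image instead.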

 

When the game model is finished, it still needs textures to describe its different surface properties. But before textures can be applied, the model is UV mapped. That means the model is skinned and peeled into 2D space. Think of it like skinning a hide from an animal and then stretching it out flat on the floor. When the 2D and 3D data match, a texture can be painted on a 2D image and then projected onto the corresponding places on the 3D model. Nowadays you can paint directly on the 3D model, but usually the unwrapping is still needed. I use Allegorithmic’s Substance Painter and Designer on pretty much every object that goes into Druidstone. The Substance tools use a PBR (physically based rendering) workflow, while we use a tweaked version of the “older” diffuse and specular workflow, but with a little tweaking I’ve got a pretty good marriage of the two going on despite the differences.
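
Once every vertex carries a UV coordinate, fetching color from the 2D image is just an interpolated lookup. A minimal sketch of that lookup (hypothetical Python, not engine code; real renderers do this per-pixel on the GPU):

```python
def sample_texture(texture, u, v):
    """Bilinearly sample a 2D texture (a list of rows of values) at UV in [0, 1]."""
    h = len(texture)
    w = len(texture[0])
    # Map UV coordinates into continuous texel space.
    x = u * (w - 1)
    y = v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Blend the four surrounding texels.
    top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bottom = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

# A tiny 2x2 texture: black on the left column, white on the right.
tex = [[0.0, 1.0],
       [0.0, 1.0]]
print(sample_texture(tex, 0.5, 0.5))  # 0.5 (halfway between the columns)
```

The UV unwrap exists precisely so that every point on the 3D surface has such a (u, v) pair to feed into this lookup.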

Now that the model and texturing are done, it’s time to give the character life by animating it. In a nutshell: a skeleton is made, the bones are assigned to the correct areas of the mesh, and then control points are created that can move the skeleton around, but that is a blog post of its own. When the animations are done, the model and animations are exported in the .FBX file format, which the game engine in turn converts into its own files. That’s where things get a bit technical; maybe Petri will shed some light on the matter in some other blog post.
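
The “bones move the mesh” step is usually done with linear blend skinning: each vertex follows a weighted mix of the bones that influence it. A toy 2D sketch of the idea (my own simplification, not the engine’s actual code):

```python
def skin_vertex(rest_pos, influences):
    """Linear blend skinning: blend each bone's transform of the vertex by its
    weight. influences = [(weight, bone_transform), ...], where bone_transform
    maps a rest-pose point to its animated position, and weights sum to 1."""
    x = y = 0.0
    for weight, transform in influences:
        px, py = transform(rest_pos)
        x += weight * px
        y += weight * py
    return (x, y)

# Two toy "bones": one leaves the vertex alone, one shifts it up by 2 units.
identity = lambda p: p
shift_up = lambda p: (p[0], p[1] + 2.0)

# A vertex weighted half to each bone ends up halfway shifted.
print(skin_vertex((1.0, 0.0), [(0.5, identity), (0.5, shift_up)]))  # (1.0, 1.0)
```

In practice the transforms are 4x4 bone matrices and the weights are painted onto the mesh during the bone-assignment step described above, but the blend itself is exactly this weighted sum.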

As you may see, we have much more detail in the characters than can actually be seen in the standard game view. That leaves us with the option of doing special camera zooms on the characters when, for example, the player pulls off some cool special move. Having the details in the models also gives them that extra something. It’s like in the Lord of the Rings movies: when Weta Workshop made the props and armor pieces, they were riddled with minute details no one would ever see, but it was worth it and helped make the world more believable.

Here’s how the Dark Knight looks in the game.

 

4 Comments:

  1. Great, thanks. When will we see some ingame video? 🙂

    • So many things are still in pre-alpha state (e.g. lots of sounds missing, placeholder art etc.), so it will take some time before we are comfortable making a video. But we are working hard towards that important milestone!

  2. Game is looking great!

    I’d like to know more about how you feel about Blender as a production tool. You mentioned that the learning curve is steeper than in other software, and I find that true as well, but luckily there’s a LOT you can customize to fit it better to your personal workflow. How much, if at all, did you end up customizing the software or the default hotkeys, which I find absolutely horrible, as the most used shortcuts are scattered all over the keyboard?

    After learning Blender, how do you feel it compares to other software (Modo, Maya etc.) modeling- and animation-wise? Does it have all the tools you need, or did you have to lean on any 3rd-party add-ons? Also, do you find it any slower or faster to work with compared to other software once you’re comfortable with the toolset? Finally, have you tried the Blender sculpt mode, and if you have, how useful do you find it?

    Sorry for all the questions! I hope you have time to answer at least some of them. Can’t wait to see more of the game!

    • Our use of Blender as a production tool is a bit lighter compared to the offline-rendering world, since we mostly use just the modeling and texturing tools. But I’ve personally switched to Blender for my private projects, where I use a variety of its tools, from offline rendering to simulations. As a tool in our pipe, Blender seems to be as good as any in most respects (every software has its pros and cons).

      I don’t have a problem with Blender’s UI or anything like that. It just took me a while to grasp the fundamentally different way of thinking the software is built around. I don’t like to customize software too much, because if you change everything, it’s hard for you or a colleague to show things or work on each other’s machines. Of course I’ve made some shortcuts of my own, but that’s the case with every software. One major thing that helped me with Blender was binding shortcuts to the transform gizmos, which made Blender feel like a “normal” 3D application. I hate playing with axis shortcuts if I want to, say, move an object up.

      So far I’ve used 3ds Max, Softimage XSI, Modo and Blender in various production environments. Like I said earlier, every software has its pros and cons, but if I had to pick favorites, it’s Modo for modeling and Softimage for rigging and animation. We wanted both in the same software, so that I could work on both without needing to switch programs. I haven’t done enough rigging or animation to say whether it’s missing some big features. There are definitely some annoying things, but no game breakers. On the modeling side I’ve bought the HardOps plugin, and I use a couple of scripts that I’ve gotten from the community or written myself.

      Speed-wise, I haven’t found Blender to be faster or slower. In my case the bottleneck is my head rather than the software 🙂

      I’ve tried the sculpting tools, but I don’t use them that much because ZBrush is far superior. I sometimes use them as “proportional” editing tools to move big masses or smooth things out.
