From sketch to programmed particle

Here’s a little example of our process.

We start with a problem.  We have fountains in-game, and they’re meant to soak through and ruin the map.  The player can pop an umbrella open to protect themselves, but how can we best give feedback that this is happening?  In VR it’s more important than ever to give tons of feedback, since the platform is so new and the relationship between player input and game interaction is so unestablished.

I start with a sketch.  The simpler the better; the idea is just to get the motion or animation across.



Then I screenshot the game and paint directly over it.  This cuts out the wasted time of concepting something that then has to be translated artistically and molded to fit the camera and the game.


2D animation is more useful than you might think in a 3D VR game.  B, our resident 2D animator, translates my screenshot paintovers into something that technically tackles what I’m trying to do.


GIFs are our friends.  We post lots of these to prototype how an effect will play out before programming it.
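Once the GIF prototype feels right, the effect gets programmed.  As a rough illustration of what that step might look like, here is a minimal sketch of the scenario described above: droplets fall under gravity, droplets over the canopy are blocked at umbrella height, and the rest soak the ground.  Everything here — the names, the numbers, the flat-disc umbrella — is a hypothetical simplification, not the game's actual code.

```python
import random

# Hypothetical minimal particle sketch (not the game's real code):
# droplets fall under gravity and are removed when they hit either an
# umbrella canopy (modeled as a flat disc) or the ground.

GRAVITY = -9.8          # m/s^2, assumed units
UMBRELLA_Y = 1.8        # assumed canopy height above the ground
UMBRELLA_RADIUS = 0.6   # assumed canopy radius

def spawn_droplet():
    """Spawn one droplet at a random x position above the player."""
    return {"x": random.uniform(-1.0, 1.0), "y": 3.0, "vy": 0.0}

def step(droplets, dt=1 / 60):
    """Advance the simulation one frame; return droplets still alive."""
    survivors = []
    for d in droplets:
        d["vy"] += GRAVITY * dt
        d["y"] += d["vy"] * dt
        if d["y"] <= UMBRELLA_Y and abs(d["x"]) <= UMBRELLA_RADIUS:
            continue  # blocked by the umbrella: feedback FX would fire here
        if d["y"] <= 0.0:
            continue  # hit the map: this is where the soaking would happen
        survivors.append(d)
    return survivors

random.seed(0)
drops = [spawn_droplet() for _ in range(100)]
for _ in range(600):  # ~10 seconds at 60 fps
    drops = step(drops)
print(len(drops))  # every droplet has been blocked or has landed by now
```

The real effect work — the splash, the ripple on the canopy — would hang off those two `continue` branches; the point of the GIF stage is to nail down how that feedback should look before writing any of this.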