About Aceria (twitter: @SassyBotStudio)
The post mortem we wrote about our Ludum Dare 27 game (Bomb Defuse Simulator 2013 [click here to view]) made it to the front page of Gamasutra today!
We’re really proud of what we managed to accomplish within 72 hours and would love for more people to see what is possible in that time using existing technology.
Hello! We are Elwin and Tino from SassyBot Studio. This is our post-mortem for the Ludum Dare 27 jam, with the theme ‘10 seconds’. In this jam we created a game using projection mapping and head tracking, which allows for gameplay around a 3D surface with the illusion of depth remaining intact. The resulting game had us thinking both outside and inside the box. Without further delay we present you with Bomb Defuse Simulator 2013!
Bomb Defuse Simulator 2013 is a game where you have 10 seconds to defuse a bomb before it explodes. The game is played with experimental technology that lets the player walk around the game surface and look into the game scene. The player controls the camera by moving around in real space and controls the in-game scissors with an Xbox controller. Using both, the player has to defuse the bomb before it explodes.
Because the game is tied to the environment, and thus you cannot play it from home, we will try to explain how this works. To experience this from a spectator and player point of view we would recommend you watch this short video.
To start the game the player has to stand in a predefined spot in the room and press start on the Xbox controller. This calibrates the system so that the head tracking works properly. The game begins with an opening screen and verbal instructions explaining that the player must cut the correct wire or the bomb will explode.
When the opening sequence is over, the game creates three procedurally generated wires. All the wires stem from the ‘Bomb Logic Box’ in the bottom of the bomb case. One of these wires leads to the bomb whereas the rest lead to the timer. Once the wires are generated the player is prompted with a ‘3, 2, 1, Go!’, after which the timer starts counting down from ten to zero. During this time, the player has to walk around the bomb to the glass side of the case to see which wire leads to the bomb. Once the player knows which wire to cut, they need to walk to the open side of the bomb case to finally cut the wire using the Xbox controller.
If the player cuts the wrong wire, or the timer reaches zero, the bomb explodes and the player gets to defuse another bomb with newly generated wires. If the player cuts the correct wire the player is awarded with a cheering sound after which a new bomb is presented with an additional wire to increase the difficulty of the game.
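The round-to-round logic described above is tiny; here is a rough Python sketch of it (illustrative names, not the actual jam code):

```python
import random

def generate_wires(count):
    """Generate `count` wires; exactly one (chosen at random) leads to the bomb."""
    live_index = random.randrange(count)
    return [i == live_index for i in range(count)]

def next_wire_count(current_count, cut_correct_wire):
    """A correct cut adds a wire for the next bomb; a wrong cut or a
    timeout presents a fresh bomb with the same number of wires."""
    return current_count + 1 if cut_correct_wire else current_count
```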
Procedural Bomb Wires
The wires generated in Bomb Defuse Simulator 2013 are created procedurally to provide the player with a random challenge each time the game is played. The solution is a modification of an iTween script example (RandomPathGeneration), which can be obtained by purchasing the examples package of iTween.
With this script, Unity then follows the path and places slices of a cylinder along it. The result is something that looks like a wire. Sure, this is not a very efficient way to do this, but then again, who looks at optimisation and efficiency in a game jam?
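In rough Python terms, the wire boils down to a jittered path plus evenly spaced slice positions. This is a sketch of the idea rather than the iTween script itself, with straight-line interpolation standing in for iTween's spline:

```python
import random

def random_path(start, end, waypoints=3, jitter=0.5, rng=random.random):
    """Build a list of 3D points from start to end with random lateral offsets."""
    points = [list(start)]
    for i in range(1, waypoints + 1):
        t = i / (waypoints + 1)
        base = [s + (e - s) * t for s, e in zip(start, end)]
        points.append([c + (rng() - 0.5) * 2 * jitter for c in base])
    points.append(list(end))
    return points

def slice_positions(path, slices_per_segment=10):
    """Sample evenly along each straight segment; each sample is where
    one cylinder slice would be placed."""
    positions = []
    for a, b in zip(path, path[1:]):
        for i in range(slices_per_segment):
            t = i / slices_per_segment
            positions.append(tuple(s + (e - s) * t for s, e in zip(a, b)))
    positions.append(tuple(path[-1]))
    return positions
```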
The controls in the game are split up into physical input and Xbox controller input. With physical input the player moves around the bomb to see what is happening. This is literally done by walking around the real environment as you can see in the image below.
The Xbox controller input is used to manoeuvre the scissors to the correct wire before cutting it, because when you have 10 seconds to defuse a bomb, you obviously use scissors. Pictured below is how the controller buttons are mapped to the scissor behaviour.
To describe the act of creating a videogame to be played on a 3D surface with, or without, the addition of head tracking we would like to introduce the term ‘Videogame Mapping’. In this sense, to videogame map onto a cube would be to create a videogame that can be played on a cube.
The rest of this section will discuss the hardware, setup and provide insight into how videogame mapping works. The resources we used to make videogame mapping work consist of Unity 3.5.7f6, an Optoma EX532 projector, a Microsoft Kinect camera, a wireless Xbox controller and two cardboard boxes.
To understand more about how to project onto 3D geometry we kindly refer you to this page: http://vvvv.org/documentation/how-to-project-on-3d-geometry. It covers everything you need to know about the basics of setting up projection mapping.
The primary problem with projecting depth, rather than texture, onto a flat surface is that the illusion of depth will only look correct from the perspective of the projector. A Kinect camera that registers the position of the player in real space can be used to adjust the projection so that the illusion remains intact.
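In its simplest form the correction means the virtual camera mirrors the tracked head every frame, so the rendered perspective follows the viewer rather than staying locked to the projector. A hedged sketch of that principle (not our actual Unity script; names and units are assumptions):

```python
def virtual_camera_position(head_offset_m, world_origin_virtual, units_per_metre):
    """Map the head's offset from the real-world origin (in metres) to a
    virtual camera position, so the rendered perspective follows the viewer."""
    return tuple(origin + offset * units_per_metre
                 for origin, offset in zip(world_origin_virtual, head_offset_m))
```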
To set up a projection mapped construction it is recommended to start with taking measurements of the environment so that the metrics can be used to replicate the scene in 3D. This means measuring the dimensions of the projection surface, which can be boxes or other objects. Furthermore, it is useful to define a point in real space that acts as the world origin or world zero. Make sure to set a metric standard. For example, make one virtual Unity unit represent one centimetre in real space.
With the world origin defined it is easier to measure where the projector is in the scene relative to the world origin. Take relevant projector specifications into account such as field of view and lens-shift, and translate these values as closely as possible into values for the virtual camera. The setup for Bomb Defuse Simulator 2013 fortunately did not have to bother with lens-shift but be aware that this can be important.
Head tracking is crucial to making the illusion work. To get Kinect up and running we recommend you visit this page: http://wiki.etc.cmu.edu/unity3d/index.php/Microsoft_Kinect_-_Microsoft_SDK. After following the instructions to the letter our Kinect input was recognised by Unity. For the illusion to work, only the head position of the player is relevant meaning we simply turn everything else off.
The Kinect must be in a static and reasonably horizontal position relative to the player, making sure that the Kinect camera always covers the play space. Before the incoming Kinect values are usable, Unity needs to understand how these real world values translate into virtual space. Additionally, Unity needs to understand where the player starts in real and virtual space when the game is initialized.
To align the real world and virtual world distance using the Kinect, record the position of the player at two points in the room where one position is exactly a meter apart from the other. With these two positions in virtual space, compared to a meter in real space, it is fairly simple to create a modifier that can be used to make the Kinect’s input translate into useful virtual movement.
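That calibration step reduces to a single ratio. A sketch, assuming raw Kinect readings in the sensor's own units:

```python
import math

def kinect_scale_modifier(reading_a, reading_b, real_distance_m=1.0):
    """Given two raw Kinect readings recorded exactly `real_distance_m`
    apart in the room, return the factor that converts raw Kinect
    distances into real-world metres."""
    raw_distance = math.dist(reading_a, reading_b)
    if raw_distance == 0:
        raise ValueError("the two readings must be taken at different spots")
    return real_distance_m / raw_distance
```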
The player needs to stand in a specific spot in real space when starting the game so that the player’s starting position in virtual space is approximately identical each time. With the real and virtual positions of the player aligned, the Kinect should translate player movement reasonably accurately.
Advantages of Videogame Mapping
Videogame mapping certainly has its advantages and drawbacks. In this section we would like to briefly discuss a number of advantages that we see in games of this nature. If you can think of more advantages, we would love to hear them.
Hard to copy
A number of people have described Bomb Defuse Simulator 2013 as an alternate reality game where reality is enhanced by virtual reality. We would have to agree that without a similar environment and setup this game would be very hard or unappealing to play. With only a copy of the game executable there are still crucial pieces missing in order to play the game, such as hardware, environment, and measurements. Copying a mapped videogame and playing it at home would not provide the same experience that the game in its original setup provides.
The game is unique to the environment
In our case we projected onto cardboard boxes to prove the concept. In theory this concept can be applied to larger and more unconventional objects. Doing so will challenge the game designer to utilise the real space in order to create a game in virtual space. Without the object and hardware characteristics of the original videogame mapping, a replica will not be the same. Even when a videogame mapping is recreated, it will remain difficult to duplicate the environment in which the game is originally played. Because the mapping object and environment can vary, videogame mapping will often remain unique per game.
The position of the player’s head relative to the environment can matter for gameplay
We knew that the best way to showcase videogame mapping was to create a game that makes the depth in the object relevant to gameplay. Admittedly, in retrospect a bomb defuse game is not one of the most original concepts when compared to other Ludum Dare 27 submissions. However, it did make the gameplay fit well with the mapping object. Each and every new object a game is played on presents new gameplay possibilities. Imagine what game could be made when projecting onto a pyramid for example.
Make use of all that Unity has to offer
Bomb Defuse Simulator 2013 was created using Unity 3.5.7f6 and, as many of you know, Unity is capable of doing very impressive things with the right guidance. Similar videogame mapping results can probably be achieved in comparable 3D game engines, although we feel that without the ease of use that Unity offers this could not have been done within the timeframe of a game jam. Relying on Unity to create a game like we normally would, and projecting that back onto an object, makes game development much easier and more enjoyable.
Disadvantages of Videogame Mapping
As with many things in life, advantages sometimes also introduce disadvantages. We would like to go through a few of the downsides of games using videogame mapping. Of course, there are bound to be disadvantages we haven’t thought of, and we would like to hear which ones you can come up with too.
The illusion of perspective projected for one person will destroy the perception of perspective for most others
It is important to understand that the perspective projected from the position of one person does not look correct for any other person looking at the same object, or same section of an object. It should technically be possible to project the perspective of two different persons onto different sections of the same object using the same projector and Kinect, although that was beyond the scope of this jam project. Besides, the room was not large enough to accommodate this.
Specific hardware and ample space are required
Besides the computer required to execute the game there are also less accessible hardware pieces required in order to try this setup. Not everyone has a projector and Kinect lying around; in fact, we had to borrow the Kinect from a fellow game designer in order to build Bomb Defuse Simulator 2013.
Apart from the hardware there is also space to take into consideration. As can be seen in the picture of the setup earlier in this article, the Kinect was placed as far from the mapping object as the environment would allow. We found that crouching in certain places would cause the Kinect to lose its tracking. Without enough space to play in, the head tracking in videogame mapping can turn out to be rather tricky.
Unique objects require custom games
This point was also raised as an advantage of videogame mapping. Games of this nature have value for their exclusivity. The downside is that, unless you use the same object to play on, the game has to be adjusted before it can be played. These adjustments are very time consuming and may not even work depending on the object. Some game mechanics will work regardless of the object being projected onto, such as a memory game or shoot ’em up. However, the challenge is to create games whose mechanics make direct use of the characteristics that videogame mapping offers. This challenge is what makes the concept time consuming and inefficient to apply to various unique objects.
We believe that there is much more to be explored in this field and have only seen the tip of a very appealing iceberg. Avenues that we think are incredibly fascinating include:
- Networked videogame mapping multiplayer.
- Backside surface projection mapping.
- Using monitors as geometric faces instead of a projector to visualise virtual space.
- Enabling multiple players with a single Kinect and projector.
- Optimising the setup and calibration of the Kinect and mapping object.
- Prototyping concave videogame mapping.
- Prototyping immersive videogame mapping concepts.
Ludum Dare 27 has been another great experience that we have grown from on many different levels. Hopefully this article has been successful in explaining what kind of game Bomb Defuse Simulator 2013 is and how it is played. On a technical level we have provided insight into how the concept works and what is required to achieve it in a basic form. The nature of the game has its advantages and disadvantages, some of which we have described, although not all have been explored. Finally, we mentioned in which areas more investigation and experimentation is required. Thank you for your time and we hope this has been useful to the game development community.
We’ll just give you some hodor instead. There’s also a small preview of what’s to come, a generic bomb simulator game using some awesome new technology that we just finished making.
We’ll show more tomorrow.
Making a plan take shape
Let us start with an introduction. My name is Tino, and my role in the team was to provide art assets for whatever we would build in this jam. What you are reading is my personal perspective and experience, from start to finish, of LD26. Before you read on, I urge you to play the game first, as the explanation below will spoil the experience. Turn up the volume (preferably put on some headphones), take your time, and open yourself up to what you can find in this link:
Testing the waters (for ducks of course!)
Going into LD26 we already had the team together, and the designer had expressed that he would like to make a game heavy on the narrative side of game development. Personally, I like a challenge and looked forward to working on a type of game that I had not made before. When the designer carefully shared his preliminary thoughts, I thought his scope was pretty insane. On the first day of the jam I secretly hoped he had changed his mind, and I think he actually did, although as a result the scope seemed to have become even more daunting.
The plan was as follows. The designer had thought of a touching story told through exploring and interacting with the environment. This required four environments and characters in different poses. The environments are (in order) a park, an apartment, city streets, and an office. Each environment features a single character model twice, each in a different pose. The apartment is repeated and thus required additional poses, as does the park. This brought the total number of poses to eleven.
Leaps and bounds
When absorbing the task ahead I thought to myself that the scope was way too large, although I was curious to see how far we could get. If you have seen the game that Aceria and I made for LD24, you can see that it is a huge leap from boxes to a full-on, world-like experience. On top of that, characters are my biggest nemesis. My experiences with modelling and rigging a character were rushed, crude, and done in different software.
We are in this together
How on earth did we manage then? What I think allowed us to pull this off was a combination of great teamwork and long hours. For example, what made the environments really easy to make was the excellent assistance from the designer in providing me with reference material. The characters probably wouldn’t have looked half as presentable without the intervention of my anatomy-loving girlfriend. Additionally, she also assisted in the apartment scene and whipped up some furniture out of one of the objects I had initially placed. In terms of hours, I have counted about 47.5 hours of work with roughly 10 hours of sleep.
Long hours are nothing without a goal and focus
Because the scope was daunting I figured it was smartest, and safest, to model the scenes in passes, starting with the scene the player would begin in, to provide the programmer/designer with a space to be explored as quickly as possible. With the environments blocked out faster than I can remember, it was time for the second pass. This pass included all the key items in the environments that have story attached to them. The programmer had to wait for this pass for such a long time that he started working on easter eggs (perhaps you may find them eventually). Once the key items were placed in the environment it was time to get the character modelled. As mentioned before, I had some awesome assistance on this hurdle. Next up was rigging, which resulted in the crudest rig imaginable. With the character modelled and rigged, and the key narrative items in place in the blocked-out environments, I started to see the light at the end of the tunnel.
Prettiness and performance
The characters were rather simple to place, although getting the poses right took quite a bit of fiddling. With everything in place for a minimum viable product, it was up to the programmer to get it all to work from A to Z. In the meantime I got to focus on prop modelling to dress up the environments and give them more character. Initially we also had the idea to use Unity’s option to automatically generate UVs on import, and to bake shadows and ambient occlusion (fancy shadows) into the environments. A light bake during breakfast taught us that the park alone required 40MB of light map textures. This was not an option, and so we shifted from strain on the memory to strain on the processor. As if that wasn’t enough for the processor, I also really wanted to emphasise the mood and experience by using chromatic aberration (Google it) and a vignette. Without ambient occlusion and shadows the environment looks really flat. Luckily, the deferred rendering method and SSAO (screen space ambient occlusion) in Unity provided solutions to these problems. With a considerable amount of tweaking in Unity’s settings we feel we have come close to the sweet spot in terms of performance and strain.
We have seen the light
Because we were now using deferred rendering and SSAO, instead of baking shadows and AO, we could use many lights in the scene as the cherry on top. With the environments fully decked out in models and all the character poses in place, all that was left to do was place these lights. Keen eyes may have also noticed the blinking light on the messaging machines and the flickering candle lights.
Closing words on environments
Environments have a history and can also tell stories. For example, in the office you may have noticed a picture frame in the bin next to the door, or the books beside the filing cabinet. How this happened is speculation, and whatever reason your imagination may come up with, it adds to the experience of the world before you. Thank you for taking the time to play our game and listening to my words. If you have additional questions I would love to answer them.
My name is Dr. Mata Haggis and I was the narrative & game designer/producer on Fragments of Him, our entry to Ludum Dare 26.
I had never done a game jam before, so this was a new experience for me. I was fortunate enough to have two talented developers ask me to join their team. Between us we created a narrative game experience with one programmer, one artist (and his 3D modeller girlfriend for an evening), and myself in only 72 hours.
Below you will find out what a narrative designer does on a game jam, what a producer can do, and a little more about the decisions that I made when creating the title.
Before we go any further you should play the game:
Seriously. I mean it. This post will have a lot of spoilers very soon, so go play the game now and come back in ten minutes.
Okay, let’s talk about the process…
The theme was minimalism, and I wanted to do a game with a very prominent story in it. I worked on several ideas in my head, but the one that stuck was investigating why a person would choose to live in a minimalist style in their house. The result of this was the idea that the lead character had lost their partner and couldn’t stand to be reminded of the loss.
In narrative terminology, the loss of the partner would be referred to as ‘the inciting incident’. The process of creating minimalist spaces gave me a gameplay mechanic too.
I pitched the concept to the team – one artist and one programmer – and I was fortunate that they were enthusiastic about it. There were several key decisions that are worth examining here:
The theme was going to need a lot of art. Specifically, the world was going to need a lot of models in it. I immediately said that the world was going to be stylised and colour coded: the protagonist (the ‘hero’ of the story) would be blue, and yellow would be used to indicate the dead partner, along with objects related to that partner. Everything else in the game was going to be white – this would allow the artist to focus on building the space and objects without worrying about texturing any of it.
In terms of showing figures in the environment, again I needed to keep a close eye on the scope of the project. I asked the artist to create one generic character model which would be used for both the protagonist and the dead partner. In every scene, these figures would be posed in a tableau (a static pose that suggests narrative action). In this way we could give a powerful idea of character relationships without the difficulty of animating figures.
For the programmer, there were a few challenges that I had to consider. The audio and subtitles would need to be displayed, there would need to be highlighting and signposting of affordances, and the most difficult task was probably going to be the final scenes where the gameplay mechanic (clicking to remove objects) is reversed. By keeping these interactions very simple, I could limit the complexity of the task that I was giving to the programmer.
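The reversal mentioned above is conceptually one click handler with a direction flag. A hypothetical Python sketch of that interaction (the real game's code surely differs):

```python
class SceneObject:
    def __init__(self, name):
        self.name = name
        self.transparent = False

def click(obj, restoring=False):
    """Forward levels make clicked objects transparent; in the final
    scenes the same click restores them instead. Returns whether the
    click actually changed anything (False means negative feedback)."""
    if restoring:
        changed = obj.transparent
        obj.transparent = False
    else:
        changed = not obj.transparent
        obj.transparent = True
    return changed
```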
I voiced the lead character of the game, and I am male. In the game, the character talks about his dead boyfriend. I felt that the story would work perfectly with either a male or female partner character, but I also feel that non-heterosexual relationships are under-represented in gaming, and so I had a preference for making both characters male.
I described the outline of the story to the team and they had no feelings either way on the gender of the partner, and so we ended up with a story about coping with grief, where the lead characters happen to both be male.
When stories are told about the death of a gay man, they often focus on stereotypical perceptions of gay lifestyle choices: drugs, promiscuity, clubbing, and of course HIV/AIDS. I didn’t want to tell a gay love story; I just wanted to tell a love story.
I know that the audience for any story will be predominantly heterosexual, with smaller proportions of gay, bisexual, and transgender players. Part of my goal was to ignore the non-heterosexual elements and to write a story that anyone could relate to. I did this by choosing to focus on the small things in people’s lives.
I suspect that anyone who has had a break-up, especially after living with a partner, can relate to the quiet sadness of removing one towel from the bathroom. I think that feeling of sorrow is a universal experience that has nothing to do with sexual preferences, and that is what I wanted to convey in this game.
A benefit to the choice of going for a homosexual relationship was that the artist only needed to make one body and animation rig, saving him a lot of time!
How to write a good story quickly
I’ve been writing for several years and have cobbled together a system which works well for me. I’ve based it on several sources, but the main ones are ‘Save the Cat’ by Blake Snyder:
… and ‘Will Write for Shoes’ by Cathy Yardley.
Neither of these are high-brow books on how to create your epic masterpiece, but they are very focussed on creating a tight, enjoyable story.
I put ideas from the books together and now here’s what I use whenever I start writing:
|Story beat|Description|
|---|---|
|Before the inciting incident – ‘Save The Cat’|Show the life of the character before life goes wrong. The character does something that makes you like them (‘save the cat’).|
|5% – Inciting incident|Something changes that forces the protagonist to act.|
|25% – Plot point one – state the external motivation|What forces the protagonist to make this clear statement of their objective?|
|50% – Plot point two – the low mid-point|It appears impossible to complete the external motivation; the protagonist loses hope.|
|75% – Plot point three – Hope|The protagonist is given hope that they can fulfil their external motivation goal, but only if they truly dedicate themselves to it.|
|90–95% – “The Black Moment”|The external motivation appears impossible to fulfil.|
|95–100% – Resolution|The story concludes in a satisfying manner – this may be successful completion of the external and internal motivations (a happy ending), a failure on external motivation but a success in the internal motivation (common in comedies, romance, or tales of self-discovery), or success of external motivation but failure of internal motivation (common in tragedy and tales of self-discovery). It is not typical for a story to end with failure of both external and internal motivation – this is the total failure of the character to grow or succeed and makes an audience wonder why they spent their time with the character.|
I use this whenever I write and it’s working out pretty well for me so far!
In the case of Fragments of Him, as with other stories I write, I began from a feeling and worked back to an inciting incident. The feeling was a person clearing away all of their belongings to create a minimalist living space – why would they do this? This question led back to the inciting incident – the objects were related to grief at the sudden death of a partner.
From there I worked through the template, filling in the gaps. For Fragments of Him, it looks like this:
|Story beat|Fragments of Him|
|---|---|
|Before the inciting incident – ‘Save The Cat’|Scene – Park. Feeding ducks; the narrator talks about how good life is. Two characters – the protagonist is blue, the ex is yellow. All other objects are yellow too. The player clicks on objects (or parts of objects) in the scene and they turn transparent.|
|5% – Inciting incident|End of the first level – the player has been removing the polygons; when everything is transparent except for the main character, the partner dies.|
|25% – Plot point one – state the external motivation|Scene – House. The protagonist wants to remove all traces of the ex from his life.|
|50% – Plot point two – the low mid-point|Scene – Street. It appears impossible to remove everything – everywhere he goes there are reminders. He doesn’t want to go outside.|
|75% – Plot point three – Hope|Scene – Office. If he can remove everything from his interior spaces he feels like he might be able to cope.|
|90–95% – “The Black Moment”|Scene – back in the empty House. The protagonist is sitting on the floor. The ex appears behind… The protagonist feels like he will never be free of the memories, even when everything is gone.|
|95–100% – Resolution|As the player clicks to remove the ex from the House scene, the protagonist’s colour changes, blending the two into green – the ex has become part of the protagonist. The player clicks through the scenes, where the transparent objects are back; as the player clicks on the transparent objects they turn green too. This goes much faster than the removal. The protagonist understands that the ex is part of him. Things are different, but life will go on in a new way.|
As you can see, I’ve used the basic structure from the template to create a narrative arc that is satisfying and also that integrates with the gameplay – at each step I have made sure the interaction with the game adds to the plot.
I chose locations where it would be viable to have few characters in the scene because of the limitations on the art scope.
There are three variations of most of the instances of dialogue in the game. These are chosen randomly during each playthrough, giving a slightly different experience each time.
The dialogue was written with three key events in mind:
- The start of a level
- Removing a particular object or a percentage of objects
- The end of a level
The start and end triggers would give the main plot points, and the objects would trigger smaller memories.
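That trigger scheme maps naturally onto a lookup keyed by scene and event, with a random pick among the written variations. A sketch with placeholder line text (the keys and lines are invented for illustration):

```python
import random

# Hypothetical table: each (scene, event) trigger has three written variations.
DIALOGUE = {
    ("park", "level_start"): ["variant a", "variant b", "variant c"],
    ("park", "level_end"):   ["variant d", "variant e", "variant f"],
}

def line_for(scene, event, rng=random.choice):
    """Return one of the written variations for this trigger, so each
    playthrough sounds slightly different."""
    return rng(DIALOGUE[(scene, event)])
```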
I recorded the audio in a makeshift audio booth constructed from a pair of curtains and a clothes rack, on a €100 digital microphone. It’s not ideal, but it did the trick. I then edited the sound files into one- or two-sentence chunks with some antiquated software. This was very laborious and time consuming for me, but sometimes design requires this kind of repetitive grunt work to get the project done.
Ambient audio and music is absolutely essential in selling any emotional experience: I believed this going into this project and now I am utterly convinced of this. In every scene there are several audio triggers built into the environment that work in a very subtle way to make the spaces feel more believable.
I have a suspicion that the audio space of a game may be more important than the visual style when it comes to creating emotional resonance. Most of this work will never be noticed by the player on a conscious level, but that it exists in the mix is important. In the apartment, did you hear the muffled footsteps of a neighbour going down the hall? Or in the office, did you notice the sound of typing outside the room, or the noise of a plane flying past overhead? Probably not, but they’re there, and they help you feel that the space you are in is alive.
For the music, I found a wonderful website of free music: https://musopen.org/
Everyone on the team instinctively felt that a piano score would suit the mood. I tried several pieces and found a 14-minute piece by Chopin that fitted the feeling I wanted to create… Then I had the laborious task of trying to cut it into pieces that would loop naturally.
Sound effects were more difficult: I wanted to get musical notes again but failed to find a copyright free source for these, and so I used generated audio tones with echo effects on them. It’s not ideal, but it does have the advantage that they contrast clearly with the ambient and musical soundtrack of the game.
To begin with I found a lot of reference material for the spaces that I wanted to create for the game. I recently went on a trip to London (possibly the greatest city in the world for any form of storytelling) and had decided that the story would be set in a location similar to Knightsbridge and Hyde Park.
We set up a shared Dropbox folder and so all of the reference material was instantly shared across the team.
As the artist worked, we often talked about ‘exactly what kind of lampshade would he have in his office’ or ‘what does he have on the side in his hallway’. Every time I was asked a question like this I would always think back to how I imagined the characters to be, what kind of people they are, and what their priorities are in life. Wherever possible, the art was always created to support the characters: if it didn’t say something about the people that owned the object then we would keep looking until we found a better, more expressive choice.
Sometimes the artist would create a space that inspired these choices. He modelled the handles on a drawer for the apartment in a particular way that made me realise that my characters were a little old-fashioned in their choices, or the filing cabinets were placed away from the wall in the office, and I would see that and decide that the character might have dropped a book down there but felt too depressed to be bothered about the effort of retrieving it. This iterative loop was very rapid, where I fed the artist ideas, and his responses inspired me to understand my characters more.
We wanted to keep the interface minimal, but there needed to be various elements to help the player understand what they needed to do. These were designed by me, but implemented by the programmer on the team.
Here is a list of events and the feedback I designed:
Reticule – not on object: Reticule is white
Reticule – on clickable object – (before second house scene): Reticule is yellow
Reticule – on clickable object – (during second house scene): Reticule is green
Reticule – clicking on empty space: Reticule pulses blue
Note that I also designed negative feedback (when the player clicks but there is nothing to remove from the game at that point). It is always important to make sure that the player knows when they are doing something wrong! Audio cues were added to support the visual positive and negative feedback systems.
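The feedback table above boils down to a small state lookup. A minimal sketch in plain JavaScript – the states and colours come from the list, but the function itself is my illustration, not the game’s actual code:

```javascript
// Map the reticule's current situation to the feedback colour/behaviour
// designed above. Negative feedback (clicking empty space) wins over
// the passive hover states.
function reticuleFeedback(onClickable, secondHouseScene, clickedEmpty) {
  if (clickedEmpty) return "pulse-blue";           // negative feedback
  if (!onClickable) return "white";                // nothing under reticule
  return secondHouseScene ? "green" : "yellow";    // clickable object
}
```

Centralising the mapping in one function like this also makes it trivial to add the audio cues alongside the visual ones later.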
In the world, we had highlighting in yellow, the colour of the dead boyfriend, to show objects that could be clicked on. There was a yellow bar at the top of the screen that got smaller when the player removed more yellow objects, and on the title screen the buttons were highlighted yellow when you started the game.
After testing with a non-gamer, we found that this still wasn’t sufficiently clear, and so a pink outline was added to objects that could be clicked but were too far away…
Which still wasn’t enough, and so we added a timer that means that clickable objects pulse slowly yellow after the player has been on a level for two and a half minutes.
I created a spreadsheet of the bugs for the game, allowing us to test, iterate, and improve the whole game over the last twelve hours of the jam. It seemed trivial when I started the spreadsheet, but as the day wore on it proved to be invaluable in helping us focus our attention.
I need to upgrade my audio recording equipment and learn some more modern audio software. This part of the process was extremely time consuming and I feel that the quality of the audio could have been improved too.
Try and arrange to have an actor available for the weekend! My vocal performance is sufficient, but I think we could have added more emotion with a more powerful performance.
Convey the most difficult scenes more clearly to the programmer as early as possible. I don’t feel I did a good enough job of getting across my idea for the ending of the game at an early enough stage, resulting in pressure on the programmer at the end of the three days. If I had handled that better then maybe the ending could have been more polished.
Even with all of the highlighting on the interface, we still get comments that people find it frustrating to find the last object in a scene. What can I learn from this? Well, HIGHLIGHT ALL TEH THINGS! MOAR HIGHLIGHTING is certainly one way to go. Really – it’s hard to understand just how much highlighting is needed… But of course this has to be balanced so it doesn’t break the mood of the game. Tricky. The other option would be to allow players to progress after they have got most of the objects and not require 100% completion… I don’t know about that. Many people like playing to completion, but I suspect a narrative game needs to remember that the story is more important than 100% completion. Believe me when I say that this balance is something I will think deeply about on future projects.
Testing the game resulted in big improvements to the interface. I definitely want to do this for future games I make – I suspect it doesn’t happen often for game jam creations!
I’m very proud of what we have created. As the first few comments started rolling in about players starting to cry while going through our game, I couldn’t help but think of Steven Spielberg when he said that games would only be art ‘when somebody confesses that they cried at Level 17.’
I’ve made games where you rip the heads off of space marines, games where you slam cars into each other as fast as you can, or where you control a monkey in a spaceship collecting space-bananas. I’m proud of those games in their own ways, but this game was very personal for me.
We have all felt loss at times, or incapable of coping with grief, but this is normal. We are supposed to feel like that sometimes, and we just have to learn how to cope with it.
I wanted to convey a message about acceptance and hope, and ultimately make people feel that there is always something more to look forward to. I have always believed that games could do that, but Fragments of Him has proved this to be true, at least for some players. I feel incredibly inspired by this experience.
My thanks go to Tino and Elwin from the award winning SassyBot Studio (http://sassybot.com/) for asking me to join them on this adventure, and thank you for reading this post-mortem of our game.
About Dr. Mata Haggis:
I’m a games & narrative designer with over ten years of experience making both indie and AAA games, and writing for games, television, webcomics, and print. I am starting up a blog aimed at these things here: http://games.matazone.co.uk/
For nearly three years I have been teaching the next generation of games developers on the IGAD (International Games Architecture & Design) programme at NHTV University in Breda, The Netherlands. It’s a very highly rated course, taught entirely in English, and if you’re interested in learning more about games development then I highly recommend it: http://made.nhtv.nl/
First things first, here’s a shameless plug to our game, Fragments of Him:
I highly recommend playing the game before reading on; it’s a lovely experience which will possibly be ruined by reading this post.
Now let’s get going to the juicy stuff.
We’d had the thought of trying a narrative-based game at a game jam before, although we never got around to actually doing it. But apparently all the designers we know are totally into narrative, so this time we got to give it a try. I’ll leave the designer talk to him, though.
Our tool of choice was easy enough – Unity3D. I’ve grown to love the engine for its simplicity and how quickly it lets you prototype. How quickly? We decided on the design and I started with a first rough prototype: moving around in 3D space and clicking on objects. Since this was done within 5 minutes, it became clear that this was going to be a great game jam.
I threw in the default character controller and the default mouselook script, and wrote a script that does a raycast when you click on an object. When an object has a certain tag, it is made transparent. Total lines of code written so far: 20.
That was the easy part; the next part was telling a story. The story was to be told with individual lines of text, each with an audio file linked to it. Some of these would have to be played after each other, and some wouldn’t. Here’s my initial solution:
class Action {
    public var sound : AudioClip;
    public var text : String;
    public var isFinal : boolean;
    public var nextScene : String;
}

class ActionWrapper {
    public var actions : List.<Action> = new List.<Action>();
    public var percentage : int;
}

class ActionHolder {
    public var actions : List.<ActionWrapper> = new List.<ActionWrapper>();
}
An action always has a sound file and the text. The isFinal boolean is checked whenever a file should trigger the event to go to the next scene. The ActionWrapper and ActionHolder simply hold lists of data. This data is fed to a function which picks the correct data and adds it to a queue. The queue is checked all the time and plays the audio (and shows the correct text). Something that nobody has noticed so far is that each event has three different options to pick from. The writing is unique for each of these options and will always make sense, no matter which ones you get. Sweet.
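Stripped of the Unity audio calls, the queue logic described above can be sketched in plain JavaScript. Names other than text/isFinal/nextScene are mine, and the variant pick is shown with an explicit index rather than the game’s random selection:

```javascript
// Queue of narrative actions: events are queued in order, and the
// next action is pulled whenever the previous audio clip finishes.
function ActionQueue() {
  this.queue = [];
}

// Pick one of an event's (typically three) written variants and
// queue its actions in playback order.
ActionQueue.prototype.addEvent = function (variants, pickIndex) {
  var chosen = variants[pickIndex % variants.length];
  for (var i = 0; i < chosen.length; i++) this.queue.push(chosen[i]);
};

// Returns the next line to show/play, the scene to load if the
// action was final, or null when the queue is empty.
ActionQueue.prototype.next = function () {
  var action = this.queue.shift();
  if (!action) return null;
  if (action.isFinal) return { loadScene: action.nextScene };
  return { text: action.text }; // in Unity: also play action.sound
};

var q = new ActionQueue();
q.addEvent([[{ text: "a" }, { text: "b", isFinal: true, nextScene: "End" }]], 0);
```

Because the queue only ever advances when a clip finishes, lines can never talk over each other, which is the whole point of the structure.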
I’ve never written shaders in my life and I think I’ve only seen a shader file twice. Now we suddenly needed a shader that could draw an outline around objects. Switching shaders at runtime, no problem. Right? Luckily, Unity came to the rescue again! There’s a toon shader which gives objects an outline. That did the trick for a while, but it wasn’t what I wanted: the shader makes everything toony (obviously), which is a very different effect from our other shaders. So began my quest to make the perfect shader. I hacked together a shader by combining the default Diffuse shader that Unity provides with the Toon Outline shader. I have no clue how it works or why it works. But it works. Sadly I didn’t have time to make a similar shader for the final scene (but then for transparent objects). A shame, because that would’ve been very nice.
Looking back, I think we did well in all of the disciplines. The art looks great, the design was done well, we had great, fitting audio, and the tech side… well, the tech side works.
If you have any questions about how I fixed things I’ll be more than happy to reply. Just note that I’m not that great a programmer, but I somehow always manage to get things to work.
Here’s the difference between a screenshot that I made after about 3 hours into the jam and one I just made:
If you missed it, another shameless plug: http://www.ludumdare.com/compo/ludum-dare-26/?action=preview&uid=15100
We’ve been working very hard to get this far, but we’ve reached the point where we can announce our game: Duck Simulator 2013.
The AI is really getting along, the models are obviously WIP and will be improved tomorrow. The AI is insanely smart, and as you can see the ducks will flock towards each other and food sources. We’re still working on duck to duck interactions. We’ll be adding more exciting features in the morning!
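For the curious, "flocking towards each other" is usually built from boids-style steering rules. Here is a minimal sketch of the cohesion rule in plain JavaScript – my illustration of the standard technique only, not the actual Duck Simulator AI:

```javascript
// Cohesion: each duck steers a little towards the average position
// of the whole flock. Food sources could be added as extra targets.
function cohesionStep(ducks, strength) {
  var cx = 0, cy = 0;
  for (var i = 0; i < ducks.length; i++) { cx += ducks[i].x; cy += ducks[i].y; }
  cx /= ducks.length;
  cy /= ducks.length;
  return ducks.map(function (d) {
    return { x: d.x + (cx - d.x) * strength, y: d.y + (cy - d.y) * strength };
  });
}

var ducks = [{ x: 0, y: 0 }, { x: 10, y: 0 }];
var moved = cohesionStep(ducks, 0.1); // each duck drifts towards (5, 0)
```

Run once per frame with a small strength, this alone already produces convincing grouping behaviour.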
Time for bed now; we’ve been working for about 16 hours straight.
It’s not much yet, but the game is definitely getting there. A lot of things are working behind the scenes and we’re now casually adding content (voice acting, more content for the scene).
We started the October Challenge during LD #24 by participating in the Kongregate competition. We won 2nd place there (after a long wait), but I am more interested in getting everything sorted for obtaining ad revenue.
The problem is, you need a U.S. tax ID for that, which takes a very long time if you’re not from the U.S. Estimation? 4-8 weeks and a bunch of paperwork. I hate paperwork. I’ve filled in the documents, but I don’t have my passport nearby; the soonest I’ll be able to send it out is next Saturday, which is way too late to succeed in this challenge. But at least I set the wheels in motion, that has to count for something, right?
I’m now sorting out everything for the company I started in March this year: more paperwork and more reading. Have I mentioned my hatred of paperwork?
To round up, here’s the balance so far:
(Shameless self promotion) If you want to give it a go, here’s the link:
One of the first things we solved after getting the basic mechanics working was creating levels. We knew that if we wanted this to work, we had to have polished levels that were enjoyable. A quick level editor would be sweet.
Sadly, I didn’t have time to create a level editor. So we did the next best thing: we created the levels in Maya. This was amazing for 2 reasons:
1) The artist had more time than I did; I was kinda busy coding features.
2) Testing levels was blazing fast. Don’t like a level? Delete and rebuild.
What we did was create a default cube, the same one the artist would use in Maya. The script I wrote reads all the level meshes from a certain directory and loops over all the cubes in each mesh. If a cube is named “Sticky” or “Bouncy”, it is tagged in the script as its respective type. Simple and easily expandable. The only downside of my script is that it doesn’t allow a single level to be reloaded; it has to go over all the levels to rebuild them. The process only takes half a minute or so, so it wasn’t worth the time to fix.
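The naming convention at the heart of that importer is easy to sketch. In plain JavaScript (the tag names “Sticky” and “Bouncy” are from the post; everything else is my guess at the shape of the lookup, not the actual Unity script):

```javascript
// Known gameplay tags; cubes exported from Maya are matched by
// name prefix, so "Sticky_01" and "Sticky_02" both become "Sticky".
var TAGS = ["Sticky", "Bouncy"];

function tagForCube(cubeName) {
  for (var i = 0; i < TAGS.length; i++) {
    if (cubeName.indexOf(TAGS[i]) === 0) return TAGS[i];
  }
  return "Default"; // unmatched cubes are plain level geometry
}
```

Adding a new block type is then just a matter of appending to the tag list and agreeing on the name with the artist, which is what made the scheme so easily expandable.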
On average we went through 3-4 iterations of each level, with some exceptions (level 9 and 16 come to mind, they were probably changed at least 8 times).
In the rush of getting this done, we completely forgot about writing a journal.
This was our first Ludum Dare, and only our 2nd Game Jam. I personally think that we did a pretty fine job on making a polished game.
The core of the game was built in several hours on the first day. This allowed us to really quickly start building up levels and messing around with gameplay. This seems to be the way to go for any Game Jam. Simple mechanics that can be complex enough to be fun.
What will we improve on next time? Definitely a level editor. I wrote an importer that would read .fbx files into Unity and create the levels. While this wasn’t bad, it wasn’t optimal either (it required me to delete all the levels and regenerate them if I wanted to check one new thing). I would also like to get more feedback from other people during development. We only publicly released one other version (the one built during the first few hours), which made it difficult to understand how difficult/fun/frustrating/easy our game was.
If you’re interested in the first version, it’s still online here:
The team consisted of me, Elwin Verploegen (programming), and Tino van der Kraan (art & level design), with music from Boulders Below (a friend of the team).
Looking forward to the feedback, hopefully it’ll catch on, as we have a lot of awesome ideas to add to the current version!