It was this heavy.
The definition of our beta phase is ‘Pencils down, all work complete.’ As our projects approached beta, all the pie-in-the-sky, nice-to-have features fled from the minds of the team. When a project enters beta, you work with what you have. You don’t enter beta until you have what you need. But once you do, your goal is to polish, polish, polish. Every feature, every mechanic, every asset has to be made the best it can be. That means testing, bug-fixing, optimizing… anything that needs to be better than it already is, anything that needs to look good when real people see it.
Release is probably the most stressful part of the production process. You haven’t been able to do everything you’ve wanted to do, but you’ve (hopefully) done everything you’ve had to do. You polished and you polished and you polished. You tested, internally and maybe even externally; you killed bugs, you marketed, and you did everything you thought to do. And then you hope it’s enough to succeed.
My take is that Beta and Release are actually low-stress/high-intensity times for the project team. On these projects, with the program’s fixed end date, the teams got to experience a typical game development ‘hard deadline’ to ship. With a looming deadline, the stress about decisions evaporates – the hard reality of time pressure makes decisions self-evident.
As for release, something always goes wrong with release, and it is never what you expect. For the Nanoswarm team, we were actually delayed shipping the product to Apple because the students’ company, 80HD Games, was not yet officially formed. They were code and feature complete, but had to hold off until they got the official paperwork.
One of the most satisfying results for me of the SIP program was that the students got to see projects in a complete cycle, from Concept to Release. Many developers go years in the industry without seeing a complete cycle and our program got folks to see it in just one summer. – Walt Yarbrough & Oleg Brodskiy
Jim McCarthy, Artist, Self-Prettyfier.
I’m James McCarthy and I attend RPI, and this summer I’m working for 80HD Games on their award-winning game, Nanoswarm. Some of my work on Nanoswarm included UV mapping the 3D models. UV mapping is one of those jobs most people underestimate and tend to dislike, and while I admit it’s not my favorite part of the 3D process, it’s an important part nonetheless. UV mapping is essentially taking a 3D model and unfolding its sides, then laying it flat on a 2D plane, like the opposite of origami or papercraft. The name comes from the variables U and V, which represent X and Y on a 2D texture plane. X, Y, and Z are already used as variables in 3D space, so we use U and V to store this texture information separately. On average, it takes as long to UV map an object as it does to model it; the more complex the model, the longer it takes to map.
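To make the U/V idea concrete, here’s a minimal Python sketch (the function and data names are illustrative, not part of any real pipeline) of how a vertex carries both a 3D position and a 2D texture coordinate, and how normalized UVs map back onto pixels of a texture image:

```python
# Each vertex stores a 3D position (x, y, z) plus a 2D texture
# coordinate (u, v); u and v are conventionally normalized to [0, 1]
# across the width and height of the texture image.

def uv_to_pixel(u, v, width, height):
    """Convert normalized UV coordinates to pixel coordinates on a texture."""
    return (int(u * (width - 1)), int(v * (height - 1)))

# One vertex of a model: position in 3D space, UV on the 2D texture plane.
vertex = {"position": (1.0, 2.0, 0.5), "uv": (0.25, 0.75)}

# On a 256x256 texture, this vertex samples from pixel (63, 191).
print(uv_to_pixel(*vertex["uv"], 256, 256))
```

Unwrapping a model amounts to assigning a (u, v) pair like this to every vertex so that each face lands on its own patch of the 2D texture.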
Yes, that is a Game Boy. And yes, it is awesome.
The first thing I did before UVing was look at what type of game we were making, what the player will be able to see, and the game’s art style. For this particular project the camera is high above the nanoswarm and the art style is simple and stylized, which means the resolution of the textures doesn’t need to be large. Having the camera high up also means a model can have subtle seams (two parts of a texture that don’t transition nicely).
After planning the UV maps, I started unfolding the 3D models onto a 2D plane. Here’s an example of an unfolded crate.
No, it’s not the Companion Cube. We can all weep together.
I then added a placeholder texture, so that I could identify which side corresponds to which location in 2D. This texture also helps identify any stretching and shows how much resolution each side has assigned to it (the smaller the squares, the more resolution).
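A placeholder checker texture like the one described can be sketched in a few lines of Python (purely illustrative; in practice the placeholder is usually just an image file): the finer the checker squares appear on a face of the model, the more texture resolution that face has been assigned.

```python
# A hypothetical checkerboard generator: cells alternate between 0 and 1
# in square blocks, like the placeholder texture applied to the crate.

def checker(width, height, square=8):
    """Return a 2D grid of 0/1 values forming a checkerboard pattern."""
    return [[(x // square + y // square) % 2 for x in range(width)]
            for y in range(height)]

tex = checker(32, 32, square=8)
# The top-left 8x8 block is all 0s; the block to its right is all 1s.
print(tex[0][0], tex[0][8])
```

Squares that come out stretched or uneven on the model point straight at the UV shells that need reshaping.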
It’s the Psychedelic Cube!
As you can see, most of the shapes on the model and the 2D plane are identical, which means tiling wouldn’t really be noticeable, given the camera height and art style. When you lay two of these sides on top of each other on the 2D texture plane, they both hold identical information, with no variation between them. The model also showed some texture stretching (most visible on the flat corner at the center of the image), which I quickly remedied by slightly changing the shape on the 2D plane.
Now, it's the Cool Cube. Yes, I am sorry, thanks for asking.
You can now see that the identical sides on the object are the same color, and that a lot of space has been freed up on the texture sheet. This free space was used for other objects’ UV maps, so that multiple objects use the same texture, saving even more space. You can also see the seams I mentioned earlier where the triangle meets the edges, but once a texture is painted on the crate it will not be very noticeable. Now that the model is successfully UV mapped, I exported the 2D image you see and gave it to the texture artist, so that he can make the model look nice and pretty. – Jim McCarthy
Dan Cherkassky, seen here, fleeing from himself.
Hello, my name is Dan Cherkassky and I’m the artist working on Energy Drive at SIP. Coming from a background focused on digital sculpting and painting, switching over to meet the requirements for Energy Drive has been a bit of a challenge. Though the buildings we use are 3D models, they had to be adapted from the sprites used in the original game. This proved difficult because of the number of buildings, the need to keep their look varied, and the need for textures that are neither too realistic nor too cartoony.
For example, a regular solar panel is composed of a number of cells, which normally would make for a fair amount of detail. However, we tried to keep with the original game’s colors and initially the mix of the two ended up looking weird. Having few details on the model also made it look bland, so we looked for an in-between; we came up with a simple solar panel model, with the cells being made from a light overlay image, giving it just enough detail. It was not too real and not too cartoony, exactly what our content owners wanted.
Many other buildings were fairly straightforward, as a few details here and there were not difficult to maintain. The wind turbine, however, proved difficult because of how much simpler it was than most of the other models. Our models do not have fine details, such as bricks or paneling, but they do have larger depressions and extrusions, such as windows and pipes, neither of which are really present on a wind turbine. This resulted in a very simple-looking structure, even by the game’s art standards. Fortunately, that simplicity was remedied once the building was animated, as the spinning blades drew attention away from the tower and made it look more like a real wind turbine. Maybe it was the lack of spinning from the start that caused the visual disconnect, but whatever the reason, we felt better about the turbine once it was animated.
Overall it was great working with varying styles, new techniques, and simply getting things done. It’s great to work on a project like this, especially when it teaches you so much! – Dan Cherkassky
Scott Henderson discusses the social impact of social cause games.
When non-profiteers and do-gooders ask me for the one piece of advice I could give to help market their cause, here’s what I tell them:
“Not everyone cares about your cause. But someone does, and they can connect you to others who do, too.”
Let’s accept reality. The world is full of problems – some we can solve and others we need to figure out how to live with. In this hyper-connected age, you can be easily overwhelmed with stories of great causes from around the world and in our own backyard that need your help.
What’s true for causes is true for games. We’re competing in a very crowded marketplace and need to find a way to stand out and attract the people who care about our cause or game.
Most causes and games don’t have big Hollywood blockbuster-sized budgets to blanket TV, radio, and the Internet with advertisements. However, thanks to Facebook, Twitter, YouTube, and the rest of the social media ecosystem, everyone has a soapbox and megaphone to gather a crowd.
For those who want to bring social causes and gaming together, you have a range of approaches you can take to attract your crowd – some obvious and others not so much. Here are five that come to mind:
Zynga has helped a number of charities sell virtual goods and then donate the proceeds. This method started with a sweet potato seed packet, which benefitted relief charities in the aftermath of the Haiti earthquake.
Trying to educate players on the complexity of energy conservation? Want to teach health care providers ways to improve their patient diagnoses? Want to illustrate the challenge of those living on the edge of poverty? Create your game around the factors that drive those issues and challenge players to figure out how to win.
You can develop story arcs that help players have fun while gaining a deeper appreciation of the issue. It doesn’t have to be a morality tale or preaching. Maybe you create a quest for players to overcome. Or maybe you set your game in a developing country.
Want to challenge people’s stereotypes? Develop characters who are dealing with disability, homelessness, natural disaster, or some other issue. Make these characters relatable and portray them in a unique light. Even subtle character traits can make a huge difference.
Compassion Drives Action
The word “compassion” comes from roots that mean “suffer” and “together”. Your mind has mirror neurons. These neurons make it possible for you to look at others and feel what they’re feeling, as if you’re gazing in a mirror. When we sense pain in others, we are moved to take immediate, specific action to remove that pain.
However you choose to bring social causes and gaming together, make sure to give your players ways to support you in taking action. – Scott Henderson, Cause Shift
Ali Swei, hard at work.
My name is Ali Swei and I’m a student at Becker College and the lead artist at 80HD, winners of the MassDiGI Game Challenge. We’re currently working on our game NanoSwarm at SIP.
By trade, I’ve always been more of a character concept artist, and initially, that’s exactly what I was doing. But soon, the inevitable challenges of indie development meant that I needed to learn some new artistic abilities. Specifically, I transitioned into making textures for our models. For those who aren’t sure what a texture is, it’s the pretty stuff that makes a model look like more than a grayscale shape. Textures give color and definition to a model, like the skin on a person.
The transition was easier said than done. Concept art and texturing are two very different roles. To help myself with the switch, I approached texturing the same way I would a painting. When you paint, you have to constantly step back to gauge your overall progress; you need to be sure that the little detail you’ve been whiling away the hours on is actually benefitting the whole picture. For a game, this is especially important, as most cameras are fixed, just like the one in NanoSwarm. By knowing what the player can see, I can ensure that the player sees everything they need to. ‘Stepping away from the canvas’ gives me the understanding I need to do the work efficiently.
Because I had no real experience with textures before this project, I’ve had to focus a lot of time and energy to increase my skills. Looking at my textures from the beginning of the project and comparing them to the ones I have now, I can see a marked improvement in my abilities. This is only natural when you spend a lot of time doing something, but it’s really motivating to be able to look at my work now and see how much better I am than I was before.
Unfortunately, that’s all the time I have today. We’re only halfway into SIP, and we have so much work left to do! Still, I can’t wait to reveal more and more of this game; I think you’ll be just as excited about it as I am. – Ali Swei
Chris Gengler with an early NanoSwarm level design.
Hi, I’m Chris Gengler, a student at Becker College and lead programmer for 80HD Games, winners of the MassDiGI Game Challenge, and I’m working on our game NanoSwarm at SIP. When we won the Game Challenge, we still had a lot of questions floating about regarding how we would put the game together. What devices do we want the game to be on? Should we write our own game engine or use an existing one? What language would we use?
We soon found our answers in our requirements and constraints. We wanted to play the game with a touchscreen, meaning we were targeting smartphones and tablets, like the iPad. We also faced a limited timeframe in which to make the game, and since we wanted a prototype up and running quickly, writing our own engine was easily ruled out.
While looking at the available game engines, Unity was clearly our best option. Unity made it easy to release the game on Apple devices, while giving us the ability to easily port it to another platform, like Android. The Unity editor makes it easy to bring in assets like 3D models, textures, sound effects, and music, and then immediately see how they work together. Unity also appealed to me due to its use of C#, which I like quite a bit. Even better, two of our team members already had experience with iOS games in Unity!
From my experience with other engines, I was initially skeptical of Unity. I worried about potential limitations (most game engines are built for a specific kind of game, and it can be a real nightmare trying to force them to behave differently) and complexity (some engines essentially require you to research and understand their inner workings before you can use them effectively. *cough* Source *cough*). But Unity has proven to be remarkably flexible and surprisingly straightforward. There’s no need to mess around with loading file data and setting up model, world, and projection matrices just to get some graphics on the screen – you simply drop a 3D model into the scene. But what really impresses me is that Unity gives you the adaptability to add in as much or as little as you want. For example, you can write your own shaders to really take control of how your game looks. You can even extend the editor environment to better suit your needs, creating entire new panels and features. Here’s a screenshot from Unity showing some of the special editor extensions that I wrote for Nanoswarm.
The panel on the right is the Level Data Editor panel that I wrote just the other day – it lets us easily change the order and grouping of the levels in the game, along with some other nifty things. The red and blue lines are part of a system for communicating between puzzle pieces. I built it such that the level designer can build puzzles without having to write a single line of code!
Our choice of game engine was critical to making the game we wanted in the timeframe we have. Unity has definitely proven to be the right choice, and I would recommend that other small dev teams check it out. It’s perfect for quickly bringing your ideas to life, and it’s flexible enough that you won’t feel limited by it. I definitely will be using it again in the future! – Chris Gengler
80HD and Walt Yarbrough breaking down their design.
As students working on several game projects at SIP, we’ve found that making a video game isn’t as simple as drawing some art, assembling some code, and selling it. While working on our projects, we got the chance to workshop a project that Brian Kaskie, CEO of Social Media Applications LLC, and his team are working on.
Social Media Applications is interested in making a game that will teach its audience the joys and challenges of parenthood. They hoped to coordinate with businesses and charities to provide materials for the audience on what raising a family really entails. Kaskie pitched the game to SIP, but found that the questions and comments had less to do with their partnerships and more to do with the game itself.
When it comes to moving from concept to execution, the SIP participants have found that grasping the scope of the game is incredibly important. What is being attempted? Is it feasible? And, most importantly, is it fun? Even if the project is realistic, retaining perspective helps ensure that the game doesn’t suffer from feature creep and that project goals stay attainable.
One of the details the SIP interns focused on was that Kaskie’s team had put less thought into gameplay than into monetization. Understanding that without a fun game there would be no audience, the team went back to figure out the core fun of their game. The fun would drive the monetization.
Games are a method of escape, of release. The most important word in the phrase ‘sponsored game’ is ‘game’, not ‘sponsored’. Knowing that, the SIP interns thought of comparable games and were able to present an idea of how the game would play in the social gaming market. The group understood that while the social impact of the game was important, the focus had to be on building a good game first. As ideas were kicked around on the theme of raising virtual kids, we were able to come up with some neat mechanics and game concepts. We look forward to seeing what develops. – Oleg Brodskiy
Alex Harrington ponders the mysteries of the universe, and whiteboards.
My name is Alex Harrington, and I am the lead artist on OnCall and a student at Springfield College. My main challenge has been to present the interface in a very simple and clean fashion. The biggest problem is that the forms available to doctors and nurses are incredibly detailed; translating that information onto a tablet required massive amounts of simplification. In order to make these screens user-friendly, I implemented some simplified forms.
The first primary screen is called “The Whiteboard.” Here, the player can see their available cases, their character, and the code button. Additionally, the player can go to their character profile page, the shop, leaderboards, and options. Our challenge here was to find a graphical method that would present the information in a digestible manner. First, there was the “code” button. I wanted this button to act as a logo, imitating a beeper, drawing the player’s eye to the center of the screen. Normally it’s a cool dark blue, but when a code is called, it turns bright red and pulsates. The player can press the logo button to enter the case that called the code. The next step was to create the case buttons, separated by severity. These buttons would carry the patient’s name, age, and present condition, so they had to look similar to one another. I ended up color-coding the corner of each button along with a white cross. This way, the game can guide the player’s eye to the information that matters most. It also allows us to easily show the player an event when it occurs, by using a flash, for example.
If this post has shown anything, I hope it’s how important it is to develop your main menu. As the first screen the player looks at, making sure they can quickly and easily do what they want is a huge challenge. A challenge we believe we’ve solved. – Alex Harrington
My name is Oleg Brodskiy, and I’m the guy who has been running this blog from behind the scenes. I’ve written a few of the posts, but for the most part, I’ve tried to give you the view of the team through their eyes. Today, you’ll see these three projects, OnCall, Nanoswarm, and Energy Drive, through my eyes. You see, I’m not part of any of the teams, but I spend a large chunk of my week with them, both at work and at play. This gives me a unique perspective on the whole SIP program, one I will try to impart to you today.
OnCall is an interesting beast at SIP. It wasn’t a winner at the MassDiGI Game Challenge, so there isn’t a third-party ‘project owner’. That means the SIP team has a lot more flexibility in doing the things they need to do, because there’s not another layer of people they have to please. That said, because it’s a serious medical game, they have to spend a lot of time getting the medical facts just right. Since it’s being developed as a game-based medical training tool, the cases have to be realistic, otherwise the training falls flat. As you may have read in Cordell Zebrose’s dev diary, ‘Simulating an ER’, this isn’t the easiest thing to accomplish, even from just a case-pipeline standpoint.
The game has some serious challenges ahead of it, especially with time rapidly expiring. But the team has been rising to the challenge, and I can’t wait to get my hands on a playable build.
Nanoswarm isn’t Nanoswarm anymore! Unfortunately, that name was taken by a now-defunct government-sponsored game, so the team is soul-searching for a new name. However, that hasn’t stopped them from rapidly prototyping their game, with up to twelve basic level concepts designed already. The gameplay already feels impressive, with a lot of variety in the potential puzzle mechanics. The team has been working hard, and they’re getting ready to roll out into serious testing while they continue to crunch to add functionality.
Energy Drive has gone through a lot of revisions since it was called USB. As Brian Little discussed in ‘To Pollute or Not To Pollute’, one of the core mechanics is still in development. The game has started to enter small scale testing, and I’ve had the opportunity to play with one of the early builds. My experiences have been mixed (it is, essentially, a pre-alpha, so it’s still in transition), but I can already see the diamond in the rough. I’m excited to be able to talk about it, even the little of it that I can, but hopefully we can begin to release a few more teasers, like some screenshots, over the next few weeks.
Cordell Zebrose, deep in thought.
My name is Cordell Zebrose, I’m a senior at Worcester Polytechnic Institute, and I’m one of the three programmers currently working on OnCall at SIP. OnCall is an iOS game built to help communication between medical and nursing students. It does this by simulating an emergency room and forcing players to make decisions similar to what they would have to make in a real emergency room. One of our first challenges with OnCall was determining a way to simulate cases within the small scope of the project. It was a balance between the time we had and the realism we wanted. We wanted a realistic simulation which could viably be completed in a couple months.
Our first idea was to use decision trees to trace the progression of a case from start to finish. The problem was that the longer a case ran, the more the tree branched, creating exponentially more work. Every possible situation would have to be entered into the system manually. While this wasn’t necessarily a problem the first couple of times, it would be a big problem for future scalability.
Clearly, we needed another method. Our solution was to systematically generate symptoms from a list and then tie treatments to specific symptoms. We needed to create a database of symptoms for each condition, then generate the case so we could randomly pick which symptoms the patient had and which symptom was the patient’s chief complaint. Afterwards, we’d attach symptoms that the patient was experiencing to certain treatments which the nurse could administer. Once a treatment was given, the patient would be cured of some symptoms. A patient was cured once all the symptoms connected to his condition were cured. We liked the idea of systematically generating symptoms from a database, but the execution needed some work. The main problem was that it wasn’t a very good simulation and therefore wouldn’t work well as a teaching tool. So, we went back to the drawing board, with the idea of creating a symptoms database and possibly a database for treatments and tests too.
After some more brainstorming, we came up with a system that we think will meet all the requirements. We want to have a list of vitals and test results that will return either a positive/negative value or a number. Each vital has a range in which the patient is considered stable. Each test or treatment the nurse or doctor administers will increase certain vitals, decrease others, and leave some unchanged. In addition, there will be a protocol of treatments that the doctors must give the patient in order to cure him of his underlying condition(s). Over time, the patient’s vitals change, and the doctors need to stabilize them while treating the underlying condition. If the patient’s vitals ever leave the stable range, the patient codes. All available doctors and nurses can respond to the code in an attempt to save the patient.
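A minimal sketch of that vitals system in Python (the vital names, ranges, and treatment effects here are illustrative placeholders, not values from the actual OnCall design):

```python
# Hypothetical stable ranges for two vitals; a patient codes when any
# vital falls outside its range.
STABLE_RANGES = {"heart_rate": (60, 100), "systolic_bp": (90, 140)}

def is_coding(vitals):
    """True if any vital lies outside its stable range."""
    return any(not (lo <= vitals[name] <= hi)
               for name, (lo, hi) in STABLE_RANGES.items())

def apply_treatment(vitals, effects):
    """A treatment raises some vitals, lowers others, and leaves the rest."""
    return {name: value + effects.get(name, 0)
            for name, value in vitals.items()}

patient = {"heart_rate": 55, "systolic_bp": 120}
print(is_coding(patient))   # True: heart rate below the stable range

# A hypothetical drug that raises heart rate by 20 bpm.
patient = apply_treatment(patient, {"heart_rate": 20})
print(is_coding(patient))   # False: back inside the stable range
```

Modeling treatments as simple deltas on vitals keeps case authoring data-driven: a new case is just a new table of ranges, effects, and protocol steps, with no extra code.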
Now that we have a basic system in place for creating cases, we can start working on an electronic prototype of the game and understand the software architecture that we’ll need to implement the final version. – Cordell Zebrose