Star Wars: The Old Republic Interview

The editors at Gamasutra had the chance to interview BioWare Austin's principal designer Georg Zoeller on BioWare's production processes and tools used to create and iterate content for Star Wars: The Old Republic, in anticipation of his talk at GDC Online. Here's a snippet:
What would you say are the biggest challenges facing MMO content generation?

Achieving the required -- and expected -- volume of content without compromising quality. MMO players are pretty unforgiving when it comes to quality; you usually get one shot to get it right. Your launch sets the trajectory of where your game is headed, and the quality of content, even more than quantity, is a major contributing factor to success or failure.

Content-wise, these games are insanely large undertakings. For example, in Star Wars: The Old Republic, the planet Alderaan, which is one of 17 planets in the game, holds more creatures than the entirety of Dragon Age: Origins, a game offering 60-80 hours in a single playthrough that took us almost five years to create.

We have thousands of differently voiced characters in the game, all with dialogs and quests that not only need to be written, recorded, staged, scripted and animated, but also tested and validated -- the most engaging quest isn't going to keep a player around if it fails to work.

In order to make the creation and validation of that much content manageable, you not only need more people, you also need to be a lot smarter in your workflows and tools.
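As a rough illustration of the kind of tool-side automation this implies (not BioWare's actual pipeline or data model), a content validation pass might scan every quest definition for references that don't resolve -- a missing dialog asset, voice-over file, or script hook -- so broken content is flagged long before a tester hits it. The data structures and field names below are hypothetical stand-ins.

```python
# Hypothetical sketch of an automated content validation pass.
# QuestDefinition, its fields, and the index arguments are illustrative
# stand-ins, not the game's actual data model or tooling.
from dataclasses import dataclass, field

@dataclass
class QuestDefinition:
    quest_id: str
    dialog_files: list = field(default_factory=list)   # referenced dialog assets
    voice_files: list = field(default_factory=list)    # referenced VO recordings
    script_hooks: list = field(default_factory=list)   # referenced script entry points

def validate_quest(quest, asset_index, script_index):
    """Return human-readable problems found in a single quest."""
    problems = []
    for path in quest.dialog_files + quest.voice_files:
        if path not in asset_index:
            problems.append(f"{quest.quest_id}: missing asset '{path}'")
    for hook in quest.script_hooks:
        if hook not in script_index:
            problems.append(f"{quest.quest_id}: unresolved script hook '{hook}'")
    return problems

def validate_all(quests, asset_index, script_index):
    """Run the check over every quest and collect a report for the build."""
    report = []
    for quest in quests:
        report.extend(validate_quest(quest, asset_index, script_index))
    return report
```

A check of this shape can run as part of a nightly build, so broken references surface as a report the next morning rather than as failed quests in a playtest.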

...

How does user feedback influence content creation for The Old Republic? How do you gather this data?

Testing and the use of data generated from testing have been an integral part of our workflow for more than a year now and have been critical for us in validating the game design, rooting out problems, and improving the overall game.

Data is gathered via a broad set of methods, including automation, very high-detail metrics about user interaction with the game, professional focus testing, in-game player feedback systems, private testing forums, and direct contact with individual testers or entire groups via chat.

It's possible for us to drill down into the game interactions of every single tester and correlate their feedback directly with issues encountered in-game. By using several different data sources, we can eliminate a lot of the usual bias encountered in direct user feedback.
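To make that correlation concrete, here is a minimal sketch of how telemetry events could be joined with a tester's feedback report -- by pulling the events that tester generated in the minutes before filing it. The event schema, field names, and time window are invented for illustration, not the game's actual metrics system.

```python
# Hypothetical sketch: correlate a tester's feedback report with the
# gameplay events they generated shortly before filing it.
# The event/report schemas and field names are invented for illustration.
from datetime import timedelta

def events_near_report(events, report, window_minutes=10):
    """Return this tester's events logged within the window before the report."""
    window = timedelta(minutes=window_minutes)
    return [
        e for e in events
        if e["tester_id"] == report["tester_id"]
        and report["timestamp"] - window <= e["timestamp"] <= report["timestamp"]
    ]

def summarize(events):
    """Count event types (e.g. quest_failed, script_error) to spot patterns."""
    counts = {}
    for e in events:
        counts[e["event_type"]] = counts.get(e["event_type"], 0) + 1
    return counts
```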

High-detail user interaction metrics also help us analyze complex content issues, develop fixes, and, most importantly, validate the success of those fixes a few builds down the road.
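Validating a fix "a few builds down the road" usually comes down to tracking the same metric before and after the fix ships -- for example, how often a quest step fails per player, aggregated by build. The sketch below assumes a flat list of event records with placeholder field names; it is not the project's actual metrics code.

```python
# Hypothetical sketch: compute a failure-rate metric per build so the
# effect of a fix is visible once players are on the patched build.
# Record fields (build, quest_id, event_type, player_id) are placeholders.
from collections import defaultdict

def failure_rate_by_build(events, quest_id):
    """Failures per unique player, grouped by build, for one quest."""
    failures = defaultdict(int)
    players = defaultdict(set)
    for e in events:
        if e["quest_id"] != quest_id:
            continue
        players[e["build"]].add(e["player_id"])
        if e["event_type"] == "quest_step_failed":
            failures[e["build"]] += 1
    return {
        build: failures[build] / max(len(players[build]), 1)
        for build in players
    }
```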