Valve Hardware Day 2006 - Multithreaded Edition
by Jarred Walton on November 7, 2006 6:00 AM EST - Posted in Trade Shows
Gaming's Future, Continued
So what can be done besides making a world look better? One of the big buzzwords in the gaming industry right now is physics. Practically every new game title seems to be touting amazing new physics effects. Modern physics simulations may be more accurate, but having moderate amounts of physics in a gaming engine is nothing new. Games over a decade ago allowed you to do such things as pick up a stone and throw it (Ultima Underworld), or shoot a bow and have the arrow drop with distance. Even if the calculations were crude, that would still count as having "physics" in a game. Of course, there's a big difference between the amount of physics present in Half-Life and that present in Half-Life 2, and most people would agree that Half-Life 2 was a better experience due to the improved interaction with the environment.
Going forward, physics can only become more important. One of the holy grails of gaming is still the creation of a world with a fully destructible environment. Why do you need to find a stupid key to get through a wooden door when you're carrying a rocket launcher? Why can't you blow up the side of the building and watch the entire structure fall to the ground, perhaps taking out any enemies that were inside? How about the magical fence that's four inches too tall to jump over - why not just break it down instead of going around? It's true that various games have made attempts in this direction, but it's still safe to say that no one has yet created a gaming environment that lets you demolish everything as you could (within reason) in the real world. Gameplay still needs to play a role in what is allowed, but the more possibilities there are for interacting with the world, the more likely we are to see revolutionary gameplay.
Going along with physics and game world interactions, Valve spoke about the optimizations they've made in a structure called the spatial partition. The spatial partition is essentially a representation of the game world, and it is queried constantly to determine how objects interact. From what we could gather, it is used to allow rough approximations to take place where it makes sense, and it also helps determine where more complex (and accurate) mathematical calculations should be performed. One of the problems traditionally associated with multithreaded programming has been locking access to certain data structures in order to keep the world in a consistent state. For the spatial partition, the vast majority of the accesses are read operations that can occur concurrently, and Valve was able to use lock-free and wait-free algorithms in order to greatly improve performance. A read/write log is used to make sure the return values are correct, and Valve emphasized that the lock-free algorithms were a huge design win when it came to multithreading.
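Valve didn't show source code, but the pattern they described -- many threads reading the spatial partition concurrently while writers publish complete updates in a single atomic step -- can be sketched roughly as follows. This is a minimal illustration with hypothetical names (Cell, Snapshot, Replace), not Source engine code, and it glosses over details like the read/write log Valve mentioned:

```cpp
#include <memory>
#include <vector>

// Hypothetical sketch: one cell of a grid-based spatial partition whose
// contents can be read by many threads at once without taking a lock.
// A writer builds a fresh snapshot and publishes it with a single atomic
// pointer swap; readers only ever see a complete old or new snapshot.
struct Cell {
    using ObjectList = std::vector<int>;   // IDs of objects in this cell

    std::shared_ptr<const ObjectList> Snapshot() const {
        // Lock-free read path: grab the current snapshot atomically.
        return std::atomic_load(&objects_);
    }

    void Replace(std::shared_ptr<const ObjectList> fresh) {
        // Writer path: publish the new list in one atomic step.
        // Old snapshots stay alive until the last reader drops them.
        std::atomic_store(&objects_, std::move(fresh));
    }

private:
    std::shared_ptr<const ObjectList> objects_ =
        std::make_shared<const ObjectList>();
};

// A query against the spatial partition then reduces to reading the cells
// that overlap the query region, with no locking on the hot path.
```

The key property is that the common query path never blocks: readers always see either the old snapshot or the new one, never a half-updated cell.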
Another big area that stands to see a lot of improvement is artificial intelligence. Oftentimes, AI has a tacked-on feel in current games. You want your adversaries to behave somewhat realistically, but you don't want the game to spend so much computational power figuring out what they should do that everything slows to a crawl. It's one thing to wait a few seconds (or more) for your opponent to make a move in a chess match; it's a completely different story in an action game being rendered at 60 frames per second. Valve discussed the possibility of having a greater number of simplistic AI routines running alongside a few more sophisticated AI routines (e.g. Alyx in Episode One).
They had some demonstrations of swarms of creatures interacting more realistically with the environment, doing things like avoiding dangerous areas, toppling furniture, swarming opponents, etc. (The action was more impressive than the above screenshots might indicate.) The number of creatures could also be scaled according to CPU power (number of cores as well as clock speed), so where a Core 2 Quad might be able to handle 500 creatures, a single-core Pentium 4 could start to choke at only 80 or so.
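As a rough idea of how an engine might scale such a swarm to the hardware it finds, the creature count can be derived from the number of available hardware threads and the per-creature updates split across worker threads. The numbers and names below are purely illustrative, not Valve's:

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct Creature {
    float x = 0.0f, y = 0.0f;
    void Think() { x += 0.1f; }   // stand-in for real per-creature AI
};

// Hypothetical heuristic: more hardware threads -> a larger swarm,
// clamped to roughly the range Valve's demo suggested (80 to 500).
std::size_t SwarmSizeFor(unsigned hardware_threads) {
    const std::size_t proposed = static_cast<std::size_t>(hardware_threads) * 125;
    return std::clamp<std::size_t>(proposed, 80, 500);
}

// Update the swarm by giving each worker thread its own contiguous slice,
// so no creature is ever touched by two threads and no locks are needed.
void UpdateSwarm(std::vector<Creature>& swarm) {
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (swarm.size() + workers - 1) / workers;
    std::vector<std::thread> pool;

    for (unsigned w = 0; w < workers; ++w) {
        const std::size_t begin = w * chunk;
        const std::size_t end   = std::min(swarm.size(), begin + chunk);
        if (begin >= end) break;
        pool.emplace_back([&swarm, begin, end] {
            for (std::size_t i = begin; i < end; ++i)
                swarm[i].Think();   // path around hazards, pick targets, etc.
        });
    }
    for (auto& t : pool) t.join();
}
```

A real engine would hand these slices to a persistent thread pool rather than spawning threads every frame, but the division of the swarm into independent slices is the important part.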
In the past, getting other creatures in the game world to behave even remotely realistically was sufficient -- "Look, he got behind a rock to get shelter!" -- but there's so much more that can be done. With more computational power available to solve AI problems, we can only hope that more companies will decide to spend the time on improving their AI routines. Certainly, without having spare processor cycles, it is difficult to imagine any action games spending as much time on artificial intelligence as they spend on graphics.
There are a few less important types of AI that could be added as well. One of these is called "Out of Band AI" -- these are AI routines that are independent of the core AI. An example that was given would be a Half-Life 2 scene where Dr. Kleiner is playing chess. They could actually have a chess algorithm running in the background using spare CPU cycles. Useful? Perhaps not that example, unless you're really into chess, but these are all tools to create a more immersive game world, and there is almost certainly someone out there that can come up with more interesting applications of such concepts.
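The Dr. Kleiner example is easy to picture as a low-priority background task that chews on spare cycles while the main loop simply polls for a finished result each frame. Here is a minimal sketch of that idea, with a placeholder search standing in for a real chess engine (all names hypothetical, not Source code):

```cpp
#include <atomic>
#include <chrono>
#include <future>
#include <string>
#include <thread>

// Hypothetical "out of band" AI task: a chess search that runs on spare
// cycles, completely independent of the core game AI.
std::string SearchChessMove(std::atomic<bool>& cancel) {
    using namespace std::chrono_literals;
    // Stand-in for a real search; periodically checks for cancellation.
    for (int depth = 0; depth < 8 && !cancel; ++depth)
        std::this_thread::sleep_for(50ms);
    return "Nf3";   // placeholder "best move"
}

int main() {
    std::atomic<bool> cancel{false};
    auto move = std::async(std::launch::async, SearchChessMove, std::ref(cancel));

    // Simplified game loop: the frame never blocks on the search; it only
    // checks whether a move has been produced yet, and when one is ready
    // Dr. Kleiner actually plays it.
    for (int frame = 0; frame < 600; ++frame) {
        if (move.valid() &&
            move.wait_for(std::chrono::seconds(0)) == std::future_status::ready) {
            std::string best = move.get();   // e.g. animate the piece moving
            break;
        }
        // ... per-frame rendering and simulation work ...
    }
    cancel = true;   // scene over; tell the background task to wrap up
}
```

Because the game loop only ever asks "is the move ready yet?", the background search can take as long as it needs without ever stalling a frame.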
55 Comments
Nighteye2 - Wednesday, November 8, 2006 - link
Ok, so that's how Valve will implement multi-threading. But what about other companies, like Epic? How does the latest Unreal Engine multi-thread?
Justin Case - Wednesday, November 8, 2006 - link
Why aren't any high-end AMD CPUs tested? You're testing 2GHz AMD CPUs against 2.6+ GHz Intel CPUs. Doesn't Anandtech have access to faster AMD chips? I know the point of the article is to compare single- and multi-core CPUs, but it seems a bit odd that all the Intel CPUs are top-of-the-line while all AMD CPUs are low end.
JarredWalton - Wednesday, November 8, 2006 - link
AnandTech? Yes. Jarred? Not right now. I have a 5000+ AM2, but you can see that performance scaling doesn't change the situation. 1MB AMD chips do perform better than 512K versions, almost equaling a full CPU bin - 2.2GHz Opteron on 939 was nearly equal to the 2.4GHz 3800+ (both OC'ed). A 2.8 GHz FX-62 still isn't going to equal any of the upper Core 2 Duo chips.
archcommus - Tuesday, November 7, 2006 - link
It must be a really great feeling for Valve knowing they have the capacity and capability to deliver this new engine to EVERY customer and player of their games as soon as it's ready. What a massive and ugly patch that would be for virtually any other developer. Don't really see how you could hate on Steam nowadays considering things like that. It's really powerful and works really well.
Zanfib - Tuesday, November 7, 2006 - link
While I design software (so not so much programming as GUI design and whatnot), I can remember my University courses dealing with threading, and all the pain threading can bring. I predicted (though I'm sure many could say this and I have no public proof) that Valve would be one of the first to do such work; they are a very forward thinking company with large resources (like Google -- they want to work on ANYthing, they can...), a great deal of experience and, (as noted in the article) the content delivery system to support it all.
Great article about a great subject, goes a long way to putting to rest some of the fears myself and others have about just how well multi-core chips will be used (with the exception of Cell, but after reading a lot about Cell's hardware I think it will always be an insanely difficult chip to code for).
Bonesdad - Tuesday, November 7, 2006 - link
mmmmmmmmm, chicken and mashed potatoes....
Aquila76 - Tuesday, November 7, 2006 - link
Jarred, I wanted to thank you for explaining in terms simple enough for my extremely non-technical wife to understand why I just bought a dual-core CPU! That was a great progression on it as well, going through the various multi-threading techniques. I am saving that for future reference.
archcommus - Tuesday, November 7, 2006 - link
Another excellent article, I am extremely pleased with the depth your articles provide, and somehow, every time I come up with questions while reading, you always seem to answer exactly what I was thinking! It's great to see you can write on a technical level but still think like a common reader so you know how to appeal to them. With regards to Valve, well, I knew they were the best since Half-Life 1 and it still appears to be so. I remember back in the days when we weren't even sure if Half-Life 2 was being developed. Fast forward a few years and Valve is once again revolutionizing the industry. I'm glad HL2 was so popular as to give them the monetary resources to do this kind of development.
Right now I'm still sitting on a single core system with XP Pro and have lots of questions bustling in my head. What will be the sweet spot for Episode 2? Will a quad core really offer substantially better features than a dual core, or a dual core over a single core? Will Episode 2 be fully DX10, and will we need DX10 compliant hardware and Vista by its release? Will the rollout of the multithreaded Source engine affect the performance I already see in HL2 and Episode 1? Will Valve actually end up distributing different versions of the game based on your hardware? I thought that would not be necessary due to the fact that their engine is specifically designed to work for ANY number of cores, so that takes care of that automatically. Will having one core versus four make big graphical differences or only differences in AI and physics?
Like you said yourself, more questions than answers at this point!
archcommus - Tuesday, November 7, 2006 - link
One last question I forgot to put in. Say it was somehow possible to build a 10 or 15 GHz single core CPU with reasonable heat output. Would this be better than the multi-core direction we are moving towards today? In other words, are we only moving to multi-core because we CAN'T increase clock speeds further, or is this the preferred direction even if we could?saratoga - Tuesday, November 7, 2006 - link
You got it. A higher clock speed processor would be better, assuming performance scaled well enough anyway. Parallel hardware is less general than serial hardware at increasing performance because it requires parallelism to be present in the workload. If the work is highly serial, then adding parallelism to the hardware does nothing at all. Conversely, even if the workload is highly parallel, doubling serial performance still doubles performance. Doubling the width of a unit could double the performance of that unit for certain workloads, while doing nothing at all for others. In general, if you can accelerate the entire system equally, doubling serial performance will always double program speed, regardless of the program.
That's the theory anyway. Practice says you can only make certain parts faster. So you might get away with doubling clock speed, but probably not halving memory latency, so your serial performance doesn't scale like you'd hope. Not to mention increasing serial performance is extremely expensive compared to parallel performance. But if it were possible, no one would ever bother with parallelism. It's a huge pain in the ass from a software perspective, and it's becoming big now mostly because we're starting to run out of tricks to increase serial performance.
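The trade-off described in the comments above is usually summarized by Amdahl's law: if a fraction p of the work can run in parallel, n cores give a speedup of 1 / ((1 - p) + p/n), while a faster serial processor speeds up all of the work. A quick illustration with hypothetical numbers:

```cpp
#include <cstdio>

// Amdahl's law: with parallel fraction p and n cores,
// speedup = 1 / ((1 - p) + p / n).
double AmdahlSpeedup(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main() {
    // Even a workload that is 80% parallel tops out near 5x no matter how
    // many cores are added, whereas a (hypothetical) 2x-faster single core
    // would speed up 100% of the work.
    std::printf("2 cores:    %.2fx\n", AmdahlSpeedup(0.8, 2));     // ~1.67x
    std::printf("4 cores:    %.2fx\n", AmdahlSpeedup(0.8, 4));     // ~2.50x
    std::printf("1000 cores: %.2fx\n", AmdahlSpeedup(0.8, 1000));  // -> ~5x
}
```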