[gameprogrammer] Re: What do the new processors mean for game programming?

  • From: brianevans <brianevans@xxxxxxxxxxxxxxx>
  • To: gameprogrammer@xxxxxxxxxxxxx
  • Date: Thu, 03 Mar 2005 13:47:07 -0600

I've read through some of the slashdot thread.  What this man says seems 
insightful: http://slashdot.org/comments.pl?sid=141119&cid=11824786

----------- quote -----------
"In other words, the idea of dividing a program into semantically distinct 
"tasks" is totally separate from distributing workload across threads. 
Trying to accomplish both at the same time by multithreading will quickly 
fall apart if the number of cores per chip starts increasing exponentially.

I really think the future is in fine-grained parallelism that isn't even 
apparent to most application programmers. Just as current GPUs have 
multiple graphics pipelines and OpenGL programmers don't have to worry 
about it. So for instance there will be a multithreaded collision detection 
library that you call. An mp3 encoder will use a separate thread for each 
frame of audio. "
--------- end quote ---------

Let me speculate for a bit, as I have not actually done any of what follows 
and I'm essentially making this up as I go:

So my first instinct, to just split out subsystems (physics, audio, AI, 
etc.) into their own threads, is not, on second thought, the way to go 
about this.  It seems to me you would incur horrible penalties, in both 
ease of development and performance, synchronizing these subsystems so 
they can communicate well enough to make something that works.

In games, the scalability factor isn't really how complicated your physics 
are, how high quality your audio is, or how smart your AI is.  Though all 
of these things will increase your computation time per cycle, the actual 
thing you are scaling on is game objects.  Namely: rockets, bullets, 
players running around, objects on the map, parts of the map itself (tiles, 
how big the map is), tanks, airplanes, explosions, fires... you name it.

A game object will have to do all the things your subsystems currently 
do.  It will have to know how to draw itself, how to make noises, animate, 
bounce around (physics), attack or retreat (AI), etc.  So I would think 
that, instead of scaling on subsystems, you really want to scale on 
self-contained game objects that know how to do everything your subsystems 
would otherwise do for them.  Or, at the very least, objects that know how 
to calculate their own specific data so it can be handed off to the 
subsystems.  I think most current game engines already have this 
high-level view of a game object.
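
To make that a little more concrete, here is a rough sketch of what such 
a self-contained object interface might look like.  This is purely my own 
speculation -- the class and method names (GameObject, update, 
emitDrawCalls, emitSounds, RenderQueue, AudioQueue) are all made up:

  // Hypothetical interface: every object knows how to advance its own
  // state and hand the results off to the subsystems afterwards.
  class RenderQueue;   // placeholder types, filled in by the real engine
  class AudioQueue;

  class GameObject {
  public:
      virtual ~GameObject() {}

      // Advance this object's simulation by dt seconds: physics, AI,
      // animation, whatever it needs.  Runs on a worker thread.
      virtual void update(float dt) = 0;

      // Emit render/audio data for the frame just simulated.  Called by
      // the controller thread once all the updates are done.
      virtual void emitDrawCalls(RenderQueue& rq) const = 0;
      virtual void emitSounds(AudioQueue& aq) const = 0;
  };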

So you would generate a collection of these objects, and then split the 
collection into parts and assign each part to a CPU.  So now we're trying 
to scale on objects.
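
A minimal sketch of that split, assuming the GameObject interface above, 
that objects don't touch each other inside update() (a big assumption, see 
below), and using std::thread for brevity (in practice today you'd reach 
for pthreads or SDL threads instead):

  #include <algorithm>
  #include <functional>
  #include <thread>
  #include <vector>

  // Each worker advances one contiguous slice of the object array.
  void updateSlice(std::vector<GameObject*>& objects,
                   size_t begin, size_t end, float dt)
  {
      for (size_t i = begin; i < end; ++i)
          objects[i]->update(dt);
  }

  // Split the collection into roughly equal parts, one per CPU.
  // Assumes cpus >= 1.
  void updateAll(std::vector<GameObject*>& objects, float dt, unsigned cpus)
  {
      std::vector<std::thread> workers;
      size_t chunk = (objects.size() + cpus - 1) / cpus;   // ceiling divide
      for (unsigned c = 0; c < cpus; ++c) {
          size_t begin = c * chunk;
          size_t end   = std::min(objects.size(), begin + chunk);
          if (begin >= end) break;
          workers.push_back(std::thread(updateSlice, std::ref(objects),
                                        begin, end, dt));
      }
      for (size_t i = 0; i < workers.size(); ++i)
          workers[i].join();   // wait for every slice before reading results
  }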

But then what?  Somehow we need a method for signaling the objects 
themselves so we can get input into the system.  And then the objects need 
to interact with each other, so they also need a method for signaling each 
other.

I see two main synchronization problems for each time delta (call it a 
frame)... possibly two and a half or three depending on how you look at it:

1.0) Resolving input,
1.5) Resolving object interactions and inter-object communication,
2.0) Resolving output.

The first is getting input into the system.  Say your player shoots a 
rocket.  You signal "shoot rocket" to your player object.  Your player 
object must animate, make a noise and a flash, and then generate a rocket 
object.  The rocket object has to draw itself and move to where it should 
be by the end of the frame.  In some later frame, the rocket will hit 
something and explode.
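
As a sketch of how that input might reach the object -- again, the names 
(Signal, signal(), PlayerObject, wantsRocketSpawn) are entirely invented, 
and the class is trimmed down rather than deriving from the GameObject 
sketch above:

  #include <vector>

  // A tiny input event, handed to whichever thread owns the player object
  // before that frame's updates begin.
  struct Signal {
      enum Type { SHOOT_ROCKET, JUMP /* ... */ };
      Type type;
  };

  class PlayerObject {
  public:
      PlayerObject() : wantsRocketSpawn(false) {}

      // Called by the input code before update() runs for this frame.
      void signal(const Signal& s) { pending.push_back(s); }

      void update(float dt) {
          for (size_t i = 0; i < pending.size(); ++i) {
              if (pending[i].type == Signal::SHOOT_ROCKET) {
                  // Start the firing animation, queue the flash and the
                  // sound, and ask the controller to create the rocket
                  // object at the next sync point -- spawning mid-update,
                  // possibly into another thread's slice, is asking for
                  // trouble.
                  wantsRocketSpawn = true;
              }
          }
          pending.clear();
          // ... normal movement / physics for this frame, using dt ...
      }

      bool wantsRocketSpawn;   // read by the controller after the update
  private:
      std::vector<Signal> pending;
  };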

That rocket hitting something is a big problem right there: collision 
detection.  I have no ideas right now about how to distribute that 
effectively, though it seems like it should be possible.  What's worse is 
that you'll have to handle collisions between objects on different 
threads.  Maybe it would be possible to "schedule" objects, or place them 
into buckets, so that only the objects that have a chance of interacting 
end up in the same collection.  This is outside the scope of the current 
discussion, however, and I leave it to a later thread.
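
For what it's worth, one rough way to build those buckets would be a 
coarse spatial grid: hash each object's position into a cell and only 
test pairs that share a cell, so each bucket can go to a different 
thread.  Just a sketch, not a worked-out scheme -- the x()/y() position 
accessors are assumed, and neighbouring cells are ignored for brevity:

  #include <map>
  #include <vector>

  struct Cell {
      int x, y;
      bool operator<(const Cell& o) const {
          return x < o.x || (x == o.x && y < o.y);
      }
  };

  typedef std::map<Cell, std::vector<GameObject*> > Buckets;

  Buckets buildBuckets(const std::vector<GameObject*>& objects,
                       float cellSize)
  {
      Buckets buckets;
      for (size_t i = 0; i < objects.size(); ++i) {
          Cell c;
          c.x = (int)(objects[i]->x() / cellSize);   // x()/y() are assumed
          c.y = (int)(objects[i]->y() / cellSize);   // position accessors
          buckets[c].push_back(objects[i]);
      }
      return buckets;   // each bucket can then go to a different thread
  }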

Anyway, the point is that now there needs to be inter-object 
communication, and each object must signal all the objects it interacts 
with and resolve those interactions.
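
One hedged way to make that work across threads (again, my own invented 
names, building on the Signal struct above): never call into another 
object directly during update(); instead post a message to a per-thread 
outbox, which the controller merges and delivers between updates, so no 
object ever touches another object's state from a worker thread and no 
locks are needed during the update itself:

  #include <vector>

  struct Message {
      GameObject* target;
      Signal      payload;
  };

  struct Outbox {
      std::vector<Message> messages;   // one Outbox per worker thread
      void post(GameObject* target, const Signal& s) {
          Message m;
          m.target  = target;
          m.payload = s;
          messages.push_back(m);       // thread-local, no locking needed
      }
  };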

Finally, you must somehow lock all of these interactions to a specific 
time delta, so that when it comes time to draw the frame, all your objects 
know where they are and what they're doing at that time.  At that point, 
the worker threads compile the state of their respective objects and 
report back to the controller thread, which assembles it into a coherent, 
global world view that it can then dispatch to the subsystems to handle.
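
A rough sketch of one frame of that controller loop, tying the earlier 
sketches together.  Everything here is a stand-in, not a real API: 
deliverInput() and deliverMessages() are hypothetical helpers, and 
RenderQueue/AudioQueue are just placeholders for whatever the subsystems 
actually consume:

  #include <vector>

  class RenderQueue { /* accumulates draw calls */ };
  class AudioQueue  { /* accumulates sounds     */ };

  // Assumed helpers, defined elsewhere in this imaginary engine:
  void deliverInput(std::vector<GameObject*>& objects);             // 1.0
  void updateAll(std::vector<GameObject*>& objects, float dt,
                 unsigned cpus);                                    // 1.5
  void deliverMessages(std::vector<GameObject*>& objects);          // 1.5

  // Runs on the controller (main) thread, once per time delta.
  void runFrame(std::vector<GameObject*>& objects, float dt, unsigned cpus)
  {
      deliverInput(objects);            // 1.0  push this frame's input in

      updateAll(objects, dt, cpus);     // 1.5  parallel part of the frame
      deliverMessages(objects);         //      resolve cross-object signals

      RenderQueue rq;                   // 2.0  serial "compile" step: walk
      AudioQueue  aq;                   //      every object once and build
      for (size_t i = 0; i < objects.size(); ++i) {   // the global view
          objects[i]->emitDrawCalls(rq);
          objects[i]->emitSounds(aq);
      }
      // ...hand rq and aq off to the renderer and the mixer here...
  }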

At least, that is my intuition.  Hopefully I didn't make a fool out of 
myself, but apply salt liberally, as needed.

brian.

At 11:44 AM 3/3/2005, you wrote:

>Slashdot.org weighs in on the topic of games and multicore CPUs
>
>http://slashdot.org/article.pl?sid=05/03/02/1322206&tid=118
>
>                 Bob Pendleton
>



---------------------
To unsubscribe go to http://gameprogrammer.com/mailinglist.html

