[gameprogrammer] Re: FPS with distributed servers

  • From: "Kevin Martinet" <kmartinet@xxxxxxxxxxx>
  • To: gameprogrammer@xxxxxxxxxxxxx
  • Date: Fri, 04 Mar 2005 21:00:34 +0000

Blizzard is doing that with World of Warcraft. The world is divided into 2 
big continents, each one having its own server. But if one server is having 
problems, chances are you won't be able to move from one continent (server) 
to the other, which has happened a lot recently. The continent without 
problems will continue to work perfectly, though, even while people get 
kicked from the other one.

Kev

>On Fri, 2005-03-04 at 09:30 -0300, Daniel Cordeiro wrote:
> > On Thu, 3 Mar 2005 22:59:47 +0100, David Olofson <david@xxxxxxxxxxx> wrote:
> > > On Thursday 03 March 2005 22.32, Jacob Briggs wrote:
> > > > Personally I think having a distributed server would
> > > > introduce all sorts of weird latency issues, but that's a naive
> > > > assumption :) I would like to hear more on the subject.
> > >
> > > This depends entirely on the software, OS and network hardware. A
> > > decent switched network has latencies in the µs range, so as long as
> > > packets aren't too large in relation to the bandwidth, that shouldn't
> > > be much of an issue. (Well, that depends on the number of
> > > machine/machine roundtrips per engine cycle. We're still talking
> > > about *much* higher latencies than calling functions in a
> > > single-threaded server, of course, so you'll have to think about who
> > > you're talking to, and how often...)
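(Rough numbers of my own, not David's, just for a sense of scale: a 1 kB 
packet on gigabit Ethernet takes about 8 µs to put on the wire, and a decent 
switch adds a few µs more, so one machine-to-machine round trip is on the 
order of tens of µs. At a 30 Hz engine cycle you have about 33 ms per cycle, 
i.e. room for roughly a thousand such round trips before the wire itself 
becomes the bottleneck. The kernel's protocol stack usually costs more than 
the wire does.)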
> >
> > I think that maybe the most important problem is the synchronization of
> > the distributed servers.
> > Our first idea is to distribute the simulation of the game using the
> > location of the entities (i.e., anything that can move). For instance,
> > if I have 2 servers, I divide the map in 2: the first server
> > handles (i.e., receives/responds to messages and does the physics
> > simulation for) the clients on the left side of the map and the other
> > server handles the clients on the right side of the map.
> >
> > To do this I must use some kind of synchronization between the servers,
> > and that can add latency.
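A rough sketch of the split Daniel describes, just to make it concrete (the
two-server, left/right partition and all the names here are mine):

    #include <cstdint>

    struct Entity { uint32_t id; float x, y; };

    // Two servers split the map down the middle at x == 0:
    // server 0 simulates the left half, server 1 the right half.
    int owning_server(const Entity& e)
    {
        return (e.x < 0.0f) ? 0 : 1;
    }

    // Every time an entity crosses x == 0 it has to be handed over to
    // the other server -- which is exactly where Bob's objection below
    // comes in.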
>
>The edge conditions kill you. Think about two players having a sword
>fight, bouncing back and forth across the boundary. Every action causes
>data to move back and forth across the boundary and things get very slow.
>
>OTOH, if you make the world into a number of islands and force each
>island onto a single server you can make things work pretty well.
>
>Personally, I think that as soon as you try to map territory to servers
>you are dead. A copy of the map of the territory can be stored on each
>server. The location of players and objects can be updated as they move
>so each server knows where everything is. You can even build a special
>network just to synchronize the location data.
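A sketch of the kind of record such a location-sync network might carry
(the field names are invented for illustration):

    #include <cstdint>

    // Broadcast by the owning server for each object that moved this
    // tick; every other server applies it to its local copy of the
    // world map, so all servers always know roughly where everything is.
    struct LocationUpdate
    {
        uint32_t object_id;
        uint32_t owner_server;  // which server currently simulates it
        float    x, y, z;       // new position in world coordinates
    };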
>
>The data that represents a specific object should most likely reside on
>one server at a time, with a special object repository server (like a
>master copy of the database) being stored in one location. Changes to an
>object should be flushed back to the main object server. When a group of
>objects gets close enough to interact, they should all be moved to the
>same server (the one with the lowest current load) so they can interact
>without network latency problems.
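A sketch of that ownership scheme (the repository class and the load-based
server pick are invented names, just to show the shape):

    #include <cstdint>
    #include <map>
    #include <vector>

    struct GameObject { uint32_t id; /* authoritative state here */ };

    // The master copy of every object lives here; simulation servers
    // check an object out, mutate it locally, and flush changes back.
    class ObjectRepository
    {
        std::map<uint32_t, GameObject> master;
    public:
        GameObject checkout(uint32_t id) { return master[id]; }
        void flush(const GameObject& o)  { master[o.id] = o; }
    };

    // When a group of objects comes within interaction range, move them
    // all to the least-loaded simulation server so the whole interaction
    // runs in-process, with no network round trips.
    size_t pick_server(const std::vector<unsigned>& load)
    {
        size_t best = 0;
        for (size_t i = 1; i < load.size(); ++i)
            if (load[i] < load[best]) best = i;
        return best;
    }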
>
>All of this assumes that there is a front end that is routing messages
>from players to objects. The front end has to know what server the object
>is currently on and send messages there. Message passing between objects
>is another real problem, but it can be handled by throwing hardware at
>it. If all else fails, all messages can be sent to all servers and
>discarded if the object isn't currently on that server.
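The front end's routing table is then about this simple (a sketch; the
broadcast fallback is signalled here by returning -1):

    #include <cstdint>
    #include <map>

    class FrontEnd
    {
        std::map<uint32_t, int> where;  // object id -> server index
    public:
        // Called from the location-sync traffic as objects migrate.
        void moved(uint32_t id, int server) { where[id] = server; }

        // -1 means "unknown": fall back to sending the message to all
        // servers and letting the non-owners discard it.
        int route(uint32_t id) const
        {
            std::map<uint32_t, int>::const_iterator it = where.find(id);
            return (it == where.end()) ? -1 : it->second;
        }
    };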
>
>I've been thinking about this problem for a while and that is the best I
>can come up with.
>
> >
> > > Some potential problems:
> > >  * Network is shared with "normal" stuff, so irrelevant
> > >    network traffic may add significantly to latencies.
> > >  * You might have some stupid hub/switch/router/firewall
> > >    or something in the way, that can't be arsed to pipe
> > >    data through "instantly", but instead buffers entire
> > >    packets, and then holds on to them for random amounts
> > >    of time before passing them on.
> >
> > It's ok to assume that the servers are on a dedicated network, so this
> > is not a problem.
>
>You might want a network for each type of interaction. Depends on the
>volume of traffic.
>
> >
> > >  * The OS has a crappy protocol stack, that adds more
> > >    average and/or (worse) worst case latency than you
> > >    can handle.
> >
> > What OS has a crappy (TCP/IP?) protocol stack nowadays? Even Windows
> > uses a good protocol stack (Windows uses a modified version of the BSD
> > TCP/IP stack).
> >
> > We will use Linux and IBM K42 OSes in our work.
> >
> > >  * The OS has a crappy scheduler that just won't wake
> > >    your server threads up in time when they receive
> > >    data after blocking.
> >
> > If the server is dedicated to serving only the game, this is not a
> > problem, is it?
>
>No thread is ever guaranteed to wake up as long as there are other
>currently active threads. Of course, that assumes that all threads have
>equal priorities. But, there is no better way to hose yourself than to
>start depending on thread priorities.
>
>You have to make sure that all threads eventually wait for something so
>that you can ensure that all threads eventually get a chance to run.
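Concretely, that means every server thread should block in the kernel
(e.g. in select()) instead of spinning; a minimal sketch, assuming one
socket per worker thread:

    #include <sys/select.h>
    #include <unistd.h>

    // The thread sleeps in select() until its socket has data, so it
    // can never starve the other threads, and no fiddling with thread
    // priorities is needed.
    void serve(int sock)
    {
        for (;;)
        {
            fd_set readable;
            FD_ZERO(&readable);
            FD_SET(sock, &readable);
            if (select(sock + 1, &readable, 0, 0, 0) > 0)
            {
                char buf[1500];
                if (read(sock, buf, sizeof buf) > 0)
                {
                    /* handle one packet */
                }
            }
        }
    }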
>
> >
> > >  * Your OS thinks that very frequent calls to I/O APIs
> > >    (regardless of data size) mean that your thread is
> > >    a bandwidth hog, and penalizes it by suspending it for
> > >    extended periods of time as soon as there are any
> > >    other runnable threads in the system.
> >
> > Humm... I need to investigate that.
>
>Not a problem on server OSes. And, it is usually something you can tune.
>
> >
> > > So, basically, if you can pick a nice OS, decent hardware and use a
> > > dedicated, switched network, I think it could work just fine. Just
> > > installing a distributed game server on your average bunch of web
> > > servers might not work all that well sometimes, though...
> >
> > So, why don't I see any FPS games using distributed/p2p/clustered
> > servers? Any thoughts?
>
>You seem to have noticed that distributing a game server across multiple
>machines is not trivial :-) In fact, it is hard. Current commercial
>servers can support many thousands of players. Therefore it is cheaper
>to have multiple "shards" than to develop a fully scalable server.
>
>In other words, the small amount of money a commercial game company can
>make from solving the problem is less than it would cost to solve it.
>
>                       Bob Pendleton
>
>
> >
> >
> > Regards,
> > Daniel




---------------------
To unsubscribe go to http://gameprogrammer.com/mailinglist.html

