Re: When Not to Use virtual Methods for Performance (Was: game development layout question)

  • From: "treble" <lauraeaves@xxxxxxxxx>
  • To: <programmingblind@xxxxxxxxxxxxx>
  • Date: Sat, 13 Oct 2007 20:27:03 -0400

I'd like to add that virtual function calls do not carry very significant 
overhead -- the call involves dereferencing a couple of pointers and 
indexing into the virtual function table for the object's class, then 
pushing the arguments and calling the function through the resulting 
pointer.  In all, it is just a few lines of assembly.  I think the overhead 
is a reasonable balance between speed and size -- virtual tables do take 
space, since one has to be generated for every class containing a virtual 
function, even if the function is only declared in a base class.
But the call itself is quite fast.
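
For example (a minimal sketch of my own; the names are made up), the call 
through the base pointer below is the kind of dispatch I mean -- the 
generated code just loads the object's vtable pointer, picks the slot for 
area(), and calls through it:

#include <iostream>

struct Shape {
    virtual double area() const = 0;   // any class with a virtual gets a vtable
    virtual ~Shape() {}
};

struct Circle : Shape {
    double r;
    explicit Circle(double radius) : r(radius) {}
    double area() const { return 3.14159 * r * r; }
};

int main()
{
    Circle c(2.0);
    Shape* s = &c;
    // Dispatch: load s's vtable pointer, load the area() slot, call it.
    std::cout << s->area() << "\n";
    return 0;
}
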
But Will is right: if you are worried about speed, you can do simple 
optimizations such as moving virtual calls outside a loop where possible.
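
A rough sketch of what I mean (again my own illustrative names):

#include <cstddef>

struct Transform {
    virtual double scale() const = 0;
    virtual ~Transform() {}
};

void apply(double* data, std::size_t n, const Transform& t)
{
    // Naive version: one virtual call per element.
    // for (std::size_t i = 0; i < n; ++i)
    //     data[i] *= t.scale();

    // Hoisted version: the result cannot change inside the loop,
    // so make the virtual call once and reuse the value.
    double s = t.scale();
    for (std::size_t i = 0; i < n; ++i)
        data[i] *= s;
}
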
And as for RTTI (runtime type identification), it carries at least as much 
overhead as a virtual call.  RTTI came late in C++'s definition, whereas 
virtual functions have been around since before C++ was even called C++ (I 
believe the original name was C with Classes).  RTTI and exception handling 
both require a lot of bookkeeping behind the scenes, and in my view RTTI is 
a step backward rather than forward, making C++ behave more like a weakly 
typed than a strongly typed language.  But I suppose it has its uses and 
customer base.
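
For what it's worth, this is the kind of thing RTTI lets you write (a small 
sketch reusing the made-up Shape and Circle names from above); typeid and 
dynamic_cast both have to consult type information at run time, which is 
where the overhead comes from:

#include <iostream>
#include <typeinfo>

struct Shape  { virtual ~Shape() {} };
struct Circle : Shape {};
struct Square : Shape {};

void describe(Shape* s)
{
    // typeid inspects the object's RTTI data at run time.
    std::cout << "dynamic type: " << typeid(*s).name() << "\n";

    // dynamic_cast also walks the RTTI data; it yields null on mismatch.
    if (Circle* c = dynamic_cast<Circle*>(s)) {
        (void)c;  // only reached when s really points at a Circle
        std::cout << "it is a Circle\n";
    }
}

int main()
{
    Circle c;
    Square q;
    describe(&c);
    describe(&q);
    return 0;
}
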
--le

----- Original Message ----- 
From: "Will Pearson" <will@xxxxxxxxxxxxx>
To: <programmingblind@xxxxxxxxxxxxx>
Sent: Saturday, October 13, 2007 9:43 AM
Subject: Re: When Not to Use virtual Methods for Performance (Was: game 
development layout question)


Hi,

Performance can be a critical factor in some applications.  I would include
modern games in this category because they are bound to a particular frame
rate, typically 30Hz.  This means that all the collision detection, response,
sensory output, and other state updating has to be done within each frame's
time slice.  That deadline on producing a response is what makes modern games
realtime applications.
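
As a back-of-the-envelope illustration (not taken from any particular
engine; update_world() is just a made-up placeholder), 30Hz leaves roughly
1000 / 30, or about 33 milliseconds, per frame for everything:

#include <chrono>
#include <iostream>

void update_world() { /* collision, response, sensory output, ... */ }

int main()
{
    // About 33 ms of budget per frame at 30Hz.
    const std::chrono::milliseconds frame_budget(33);

    auto start = std::chrono::steady_clock::now();
    update_world();
    auto elapsed = std::chrono::steady_clock::now() - start;

    if (elapsed > frame_budget)
        std::cout << "missed the frame deadline\n";
    return 0;
}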

I agree that algorithm design can often provide the biggest performance
savings.  For example, an algorithm that runs in O(1) is much better than one
that runs in O(n^2); however, once you've got the most appropriate
algorithms, the only way to improve performance further is to optimise the
implementation or kill off features.  Although examining the implementation
takes quite a lot of time, you can usually find some good savings.  A
profiler will help pick out hot spots that are potential candidates for
optimisation.  You can then look at the code for the hot spots and evaluate
whether you can implement the same functionality in a way that uses less
memory or fewer CPU instructions, depending on whether you are memory or CPU
bound.  It's worth trying to reduce both the memory and CPU footprint, as
each can affect the other -- for example, a hard page fault costs around
10,000 cycles.
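
As one illustration of the memory side (a generic sketch, not tied to any
particular game): storing items by value in one contiguous block touches far
fewer pages than allocating each item separately and chasing pointers to
them, so a pass over the data causes fewer faults and cache misses:

#include <cstddef>
#include <list>
#include <vector>

struct Particle {
    float x, y, z;
    float vx, vy, vz;
};

// Pointer-chasing layout: every node is a separate allocation, so a pass
// over the particles can touch many scattered pages.
typedef std::list<Particle> ScatteredParticles;

// Contiguous layout: the same data packed into one block, so a pass over
// the particles walks memory sequentially.
typedef std::vector<Particle> PackedParticles;

void integrate(PackedParticles& ps, float dt)
{
    for (std::size_t i = 0; i < ps.size(); ++i) {
        ps[i].x += ps[i].vx * dt;
        ps[i].y += ps[i].vy * dt;
        ps[i].z += ps[i].vz * dt;
    }
}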

There are some well known optimisations that can be made.  Some of these
include loop unrolling, loop flipping, and code hoisting.  Some of these
optimisations do make the code more difficult to read, but it's often more
important to have something that works and performs well enough to be useful
than something that merely reads nicely.  You can find a fairly
comprehensive, although not exhaustive, list of low level optimisations here:
http://www.compileroptimizations.com/index.html
The optimisations are intended for compilers, but you can apply many of them
by hand in your own code.
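
For instance, here is what code hoisting and a small amount of loop
unrolling look like when done by hand (a generic sketch; a good compiler
will often do both for you):

#include <cmath>
#include <cstddef>

void scale(double* data, std::size_t n, double base, double exponent)
{
    // Code hoisting: pow(base, exponent) does not depend on the loop
    // variable, so compute it once outside the loop.
    const double factor = std::pow(base, exponent);

    // Loop unrolling: handle four elements per iteration to cut down on
    // loop-control overhead, then mop up the remainder.
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        data[i]     *= factor;
        data[i + 1] *= factor;
        data[i + 2] *= factor;
        data[i + 3] *= factor;
    }
    for (; i < n; ++i)
        data[i] *= factor;
}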

Will
__________
View the list's information and change your settings at
//www.freelists.org/list/programmingblind
