Re: When Not to Use virtual Methods for Performance (Was: game development layout question)

  • From: "Will Pearson" <will@xxxxxxxxxxxxx>
  • To: <programmingblind@xxxxxxxxxxxxx>
  • Date: Sat, 13 Oct 2007 14:57:56 +0100

Hi,

One more thing.

I wouldn't be overly concerned about the performance cost of virtual function calls unless you are making a very large number of them, for example in a tight inner loop.

What may be more of a concern than virtual functions is whether you intend to use a lot of branch statements, such as if, for, and while. Modern CPUs use an instruction pipeline, breaking processing up into stages. Because the outcome of a branch isn't known until the branch instruction is resolved late in the pipeline, the CPU has to predict the outcome of the branch in order to keep the earlier stages of the pipeline busy whilst the branch is being executed. If the CPU mispredicts the branch then it has to flush the instruction pipeline and repopulate it, which wastes CPU cycles. Most high performance computing applications keep branches in their inner loops to a minimum for this reason.
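
To make that concrete, here is a small hand-written C++ sketch (the function names are mine, not from any particular codebase) showing one common way to remove a data-dependent branch from an inner loop:

#include <cstdint>

// Branchy version: the CPU must predict the outcome of the
// comparison on every iteration. With randomly signed data the
// predictor fails roughly half the time, flushing the pipeline
// on each miss.
int64_t sum_positive_branchy(const int* data, int n)
{
    int64_t sum = 0;
    for (int i = 0; i < n; ++i)
    {
        if (data[i] > 0)
            sum += data[i];
    }
    return sum;
}

// Branchless version: the comparison is turned into a 0/1 value
// and multiplied in, so there is nothing for the predictor to
// mispredict. The loop branch itself remains, but it is taken
// the same way almost every iteration and predicts well.
int64_t sum_positive_branchless(const int* data, int n)
{
    int64_t sum = 0;
    for (int i = 0; i < n; ++i)
        sum += data[i] * (data[i] > 0);
    return sum;
}

The branchless version trades a multiply for the unpredictable branch, which is usually a good deal on data the predictor can't learn.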

Will
----- Original Message -----
From: "Will Pearson" <will@xxxxxxxxxxxxxxxxxxxxx>
To: <programmingblind@xxxxxxxxxxxxx>
Sent: Saturday, October 13, 2007 2:43 PM
Subject: Re: When Not to Use virtual Methods for Performance (Was: game development layout question)


Hi,

Performance can be a critical factor in some applications. I would include modern games in this category because they are usually bound by a particular frame rate, typically 30Hz. This means that all of the collision detection, collision response, sensory output, and other state updating has to be done within that frame's time window. This deadline by which a response must be produced is what makes modern games real-time applications.
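
As a rough sketch of what that budget looks like in C++ (update() and render() here are hypothetical placeholders, not a real engine API):

#include <chrono>

// Hypothetical stand-ins for a game's per-frame work.
void update() { /* collision detection, response, state updates */ }
void render() { /* sensory output: graphics, audio, and so on */ }

// At 30Hz every frame must fit inside 1/30 s, roughly 33.3ms.
void frame_loop()
{
    using clock = std::chrono::steady_clock;
    const auto budget = std::chrono::microseconds(33333);  // 1/30 s

    for (;;)  // sketch only: a real loop would have an exit condition
    {
        const auto start = clock::now();

        update();
        render();

        if (clock::now() - start > budget)
        {
            // Deadline missed: the frame rate drops and the game
            // stutters. A real engine would shed work here.
        }
    }
}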

I agree that algorithm design can often provide the biggest performance savings. For example, an algorithm that runs in O(1) is much better than one that runs in O(n^2); however, once you've got the most appropriate algorithms, the only ways left to improve performance are to optimise the implementation or to kill off features. Although examining the implementation takes quite a lot of time, you can usually find some good savings. A profiler will help pick out hot spots that are likely candidates for optimisation. You can then look at the code for each hot spot and evaluate whether you can implement the same functionality in a way that uses less memory or fewer CPU instructions, depending on whether you are memory or CPU bound. It's worth trying to reduce both the memory and the CPU footprint, as each can affect the other; for example, a hard page fault can cost on the order of 10,000 cycles or more.
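
As a small, self-contained illustration of that algorithmic gap (my own example, not from the thread): checking a container for duplicates by comparing every pair is O(n^2), whereas a hash set brings it down to roughly O(n):

#include <unordered_set>
#include <vector>

// O(n^2): compares every pair of elements.
bool has_duplicates_quadratic(const std::vector<int>& v)
{
    for (std::size_t i = 0; i < v.size(); ++i)
        for (std::size_t j = i + 1; j < v.size(); ++j)
            if (v[i] == v[j])
                return true;
    return false;
}

// Roughly O(n): each element is hashed and looked up once.
bool has_duplicates_linear(const std::vector<int>& v)
{
    std::unordered_set<int> seen;
    for (int x : v)
        if (!seen.insert(x).second)  // insert fails if x was already present
            return true;
    return false;
}

No amount of low-level tuning of the first function will catch the second once n gets large, which is why the algorithm should be fixed before the implementation.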

There are some well-known optimisations that can be made, including loop unrolling, loop flipping, and code hoisting. Some of these optimisations do make the code more difficult to read, but it's often more important to have something that works and meets its performance targets than something that reads cleanly. You can find a fairly comprehensive, although not exhaustive, list of low-level optimisations here:
http://www.compileroptimizations.com/index.html
The optimisations are intended for compilers, but you can apply many of them by hand in your own code.
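
For instance, here is a rough hand-written sketch of two of those transformations in C++ (the function names are my own):

#include <cmath>
#include <cstddef>

// Code hoisting: move a loop-invariant computation out of the loop.
void scale_by_sin(double* out, const double* in, std::size_t n, double angle)
{
    const double s = std::sin(angle);  // hoisted: computed once, not n times
    for (std::size_t i = 0; i < n; ++i)
        out[i] = in[i] * s;
}

// Loop unrolling: process several elements per iteration to cut the
// per-iteration branch and counter overhead. For brevity this assumes
// n is a multiple of 4; real code would handle the remainder.
double sum_unrolled(const double* in, std::size_t n)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < n; i += 4)
        sum += in[i] + in[i + 1] + in[i + 2] + in[i + 3];
    return sum;
}

Modern compilers will often do both of these for you at higher optimisation levels, so it's worth checking the generated code before rewriting by hand.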

Will
__________
View the list's information and change your settings at //www.freelists.org/list/programmingblind
