[ibis-macro] Re: Question about the sliding window algorithm

  • From: Mike Steinberger <msteinb@xxxxxxxxxx>
  • To: Arpad_Muranyi@xxxxxxxxxx
  • Date: Wed, 14 May 2008 12:45:49 -0500

Arpad-

Your observations are certainly valid. The question is therefore what actions should be/need to be taken. I'm not going to comment on that directly. However, I will make a few simple observations.

Recall that in the early days of our work on AMI, we had discussions, especially with IBM and TI, which indicated that time domain simulations typically needed to be run for between one million and ten million bits. The general guidance was one million bits for a channel by itself and ten million bits if the simulation included crosstalk. My experience since then has confirmed these numbers.

Now consider how much RAM would be required to hold a single block of one million bits. Let's suppose we're running the simulation at a modest eight samples per bit. Then we have eight million doubles in each of the wave_in and clock_t arrays. At eight bytes per double, that's 64Meg per array, for a total of 128Meg of RAM. Furthermore, if one adds crosstalk to the analysis, as people are going to want to do, the footprint multiplies again with the number of aggressor waveforms. I don't think that most users will willingly accept this kind of memory footprint. (One starts thinking of Big Foot...)

It was numbers like these that convinced us at SiSoft that we needed to implement the sliding window code in our example model, and support multi-block operation in our test program. We've always believed that multi-block operation was required for a practical solution, and have assumed from the beginning that this was fundamental to the design of the API, even as it was originally proposed.

As I say, however, it's not clear what we should therefore do, if anything.

Thanks for the observation.
Mike S.

Muranyi, Arpad wrote:

Hello IBIS-AMI gurus,

I am still experimenting with the SiSoft Tx and Cadence Rx
models, and I am noticing that there is a difference between
these two in the way the GetWave function can be called.

The SiSoft model implemented a special mechanism that allows
the caller to invoke GetWave multiple times, a feature which
keeps memory usage low regardless of how long the waveform
is.  (I call this the sliding window algorithm.)

The Cadence model doesn't have the additional code that
would let the caller use the sliding window.  Its GetWave
function must process the entire waveform in a single call
to produce correct results.

This raises a question.  How does the caller know whether
it can call GetWave multiple times with smaller segments
of the waveform, or whether it has to call it once and only
once with the entire waveform?  I tried to find the answer
in BIRD104, but so far I haven't been able to find anything.
(Please tell me if I missed it somewhere).

If this is not spelled out in our BIRD, it is a serious flaw,
because the EDA tool doesn't have a way to know how to call
GetWave, and if it applies the sliding window algorithm to a
model that doesn't have the extra code to allow that, the
results will be bogus (see yellow waveform below).

[attached image: ole0.bmp]

If my observation is correct we must write a BIRD to take
care of this problem.

Thanks,

Arpad
===========================================================


---------------------------------------------------------------------
IBIS Macro website  :  http://www.eda.org/pub/ibis/macromodel_wip/
IBIS Macro reflector:  http://www.freelists.org/list/ibis-macro
To unsubscribe send an email:
 To: ibis-macro-request@xxxxxxxxxxxxx
 Subject: unsubscribe
