[ibis-macro] Re: Question about the sliding window algorithm

  • From: "C. Kumar" <kumarchi@xxxxxxxxx>
  • To: Arpad_Muranyi@xxxxxxxxxx, IBIS-ATM <ibis-macro@xxxxxxxxxxxxx>
  • Date: Wed, 14 May 2008 16:48:20 -0700 (PDT)

The whole purpose of GetWave is to support multiple calls. That is one of the 
major reasons for using DLLs as opposed to executables. The EDA tool decides 
the block sizes and how many blocks there are.

As per my understanding, the Cadence interoperability tool kit is limited to 
one call. It is there to illustrate interoperability and is not meant as a 
full-fledged solution.
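
For illustration, multi-block operation from the EDA tool's side amounts to a 
loop along these lines. The AMI_GetWave prototype is written the way I read it 
in BIRD 104 (the header shipped with a model is authoritative), and the helper 
routines and the 64K block size are made up for the example:

   #include <stdlib.h>

   /* Prototype as I read it in BIRD 104; check the model's own header. */
   extern long AMI_GetWave(double *wave, long wave_size, double *clock_times,
                           char **AMI_parameters_out, void *AMI_memory);

   /* Hypothetical tool-side helpers: produce the next segment of the
    * stimulus and consume the equalized output. */
   extern long next_input_block(double *buf, long max_samples);
   extern void consume_output_block(const double *wave, long n,
                                    const double *clock_times);

   void run_getwave_in_blocks(void *AMI_memory)
   {
       const long block = 65536;            /* illustrative block size */
       double *wave   = malloc(block * sizeof *wave);
       double *clocks = malloc(block * sizeof *clocks);
       char   *params_out = NULL;
       long    n;

       /* Feed the model one block at a time; the model is expected to
        * carry its filter state in AMI_memory between calls. */
       while ((n = next_input_block(wave, block)) > 0) {
           AMI_GetWave(wave, n, clocks, &params_out, AMI_memory);
           consume_output_block(wave, n, clocks);
       }

       free(wave);
       free(clocks);
   }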



"Muranyi, Arpad" <Arpad_Muranyi@xxxxxxxxxx> wrote: Mike,

Thanks for your reply.

From your writing I get a sense of confirmation that BIRD 104
does not mention anything about this; is this correct?

If so, I see the following actions we could/should take:

1)  We need to decide whether this sliding window algorithm should
be a requirement or not.  From your examples, it would make sense to
make it required, but I would like to hear the opinion from others
(most specifically Cadence's) on this.

2)  If we agree that models should be required to have this capability,
we should add some text to the BIRD to make this clear, and request
a new model from Cadence which obeys this requirement.  From the
BIRD's perspective this would only be a "clarification".

3)  If we do not want to make this a requirement, then we need to
define how IBIS-AMI should handle this.  I am afraid that this would
be more than just a "clarification" BIRD, because it seems that it
would require a new parameter (perhaps a Boolean) to tell the EDA tool
whether the model can use the sliding window algorithm or not (a rough
sketch of what such a parameter might look like is below).
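
Purely for the sake of discussion, such a parameter might look something like
this in the model's .ami file.  The name Supports_Multiple_GetWave_Calls and
the exact placement in the tree are inventions for this sketch, not anything
defined in BIRD 104 today:

   (my_receiver
      (Reserved_Parameters
         (Supports_Multiple_GetWave_Calls
            (Usage Info)
            (Type Boolean)
            (Default False))
         ...
      )
      ...
   )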

Thanks,

Arpad
======================================================================

-----Original Message-----
From: Mike Steinberger [mailto:msteinb@xxxxxxxxxx] 
Sent: Wednesday, May 14, 2008 12:46 PM
To: Muranyi, Arpad
Cc: IBIS-ATM
Subject: Re: [ibis-macro] Question about the sliding window algorithm

Arpad-

Your observations are certainly valid. The question is therefore what 
actions should be/need to be taken. I'm not going to comment on that 
directly. However, I will make a few simple observations.

Recall that in the early days of our work on AMI, we had discussions, 
especially with IBM and TI, which indicated that time domain simulations 
typically needed to be run for between one million and ten million bits. 
The general guidance was one million bits for a channel by itself and 
ten million bits if the simulation included crosstalk. My experience 
since then has confirmed these numbers.

Now consider how much RAM would be required to describe a single block 
of one million bits. Let's suppose that we're running the simulation at 
a modest eight samples per bit. Then we have eight million doubles for 
both the wave_in and clock_t arrays. At eight bytes per double, that's 
64Meg per array, for a total of 128Meg of RAM. Furthermore, if one adds 
crosstalk to the analysis, as people are going to want to do, then the 
number goes up geometrically from there. I don't think that most users 
will willingly accept this kind of memory footprint. (One starts 
thinking of Big Foot...)
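
Spelling the arithmetic out (the figures are exactly the ones above, just made
explicit):

   #include <stdio.h>

   int main(void)
   {
       const double bits            = 1.0e6;  /* one million bits   */
       const double samples_per_bit = 8.0;
       const double bytes_per_dbl   = 8.0;

       /* 8 million doubles at 8 bytes each = 64e6 bytes per array. */
       double per_array = bits * samples_per_bit * bytes_per_dbl;

       printf("one array  : %.0f MB\n", per_array / 1.0e6);        /*  64 MB */
       printf("two arrays : %.0f MB\n", 2.0 * per_array / 1.0e6);  /* 128 MB */
       return 0;
   }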

It was numbers like these that convinced us at SiSoft that we needed to 
implement the sliding window code in our example model, and support 
multi-block operation in our test program. We've always believed that 
multi-block operation was required for a practical solution, and have 
assumed from the beginning that this was fundamental to the design of 
the API, even as it was originally proposed.
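
For readers who want to see what "sliding window code" amounts to inside a
model, the sketch below shows the general idea: a convolution whose input
history is carried across GetWave calls in the model's memory block.  The
structure and names are invented for this example and are not taken from the
SiSoft code:

   #include <string.h>

   /* Invented state structure, created by AMI_Init with history zeroed.
    * A model that convolves its input with an impulse response must
    * remember the last few input samples of each block so the next
    * block starts with the right filter history. */
   typedef struct {
       double *h;        /* impulse response, h_len taps              */
       long    h_len;
       double *history;  /* last (h_len - 1) inputs of previous block */
       double *scratch;  /* output staging buffer, >= block size      */
   } model_state_t;

   /* Filter one block, carrying state across calls.  Assumes the block
    * is at least h_len - 1 samples long, which keeps the history update
    * simple. */
   static void filter_block(model_state_t *st, double *wave, long n)
   {
       long i, k;

       for (i = 0; i < n; i++) {
           double acc = 0.0;
           for (k = 0; k < st->h_len; k++) {
               long idx = i - k;
               /* Samples before the start of this block come from the
                * history saved at the end of the previous block. */
               double x = (idx >= 0) ? wave[idx]
                                     : st->history[st->h_len - 1 + idx];
               acc += st->h[k] * x;
           }
           st->scratch[i] = acc;
       }

       /* Save the tail of this block's *input* as history for the next
        * call, then hand the filtered samples back in place. */
       memcpy(st->history, wave + n - (st->h_len - 1),
              (st->h_len - 1) * sizeof(double));
       memcpy(wave, st->scratch, n * sizeof(double));
   }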

As I say, however, it's not clear what we should therefore do, if anything.

Thanks for the observation.
Mike S.

Muranyi, Arpad wrote:
>
> Hello IBIS-AMI gurus,
>
> I am still experimenting with the SiSoft Tx and Cadence Rx
> models, and I am noticing that there is a difference between
> these two in the way the GetWave function can be called.
>
> The SiSoft model implemented a special mechanism to allow
> the caller of GetWave to call the function multiple times
> which is a feature to keep memory usage low regardless of
> how long the waveform is.  (I call this the sliding window
> algorithm).
>
> The Cadence model doesn't have the additional code to make
> the sliding window possible by the caller.  The GetWave
> function must process the entire waveform to get correct
> results.
>
> This raises a question.  How does the caller know whether
> it can call GetWave multiple times with smaller segments
> of the waveform or whether it has to call it once and only
> once with the entire waveform?  I tried to find the answer
> to that in BIRD 104, but so far I wasn't able to find anything.
> (Please tell me if I missed it somewhere).
>
> If this is not spelled out in our BIRD, it is a serious flaw,
> because the EDA tool doesn't have a way to know how to call
> GetWave, and if it applies the sliding window algorithm to a
> model that doesn't have the extra code to allow that, the
> results will be bogus (see yellow waveform below).
>
> [attached image: ole0.bmp]
>
> If my observation is correct we must write a BIRD to take
> care of this problem.
>
> Thanks,
>
> Arpad
> ===========================================================
>

---------------------------------------------------------------------
IBIS Macro website  :  http://www.eda.org/pub/ibis/macromodel_wip/
IBIS Macro reflector:  http://www.freelists.org/list/ibis-macro
To unsubscribe send an email:
  To: ibis-macro-request@xxxxxxxxxxxxx
  Subject: unsubscribe
