[ibis-macro] Re: IBIS-AMI Correlation and BIRD Update - comments

  • From: "Todd Westerhoff" <twesterh@xxxxxxxxxx>
  • To: "'IBIS-ATM'" <ibis-macro@xxxxxxxxxxxxx>
  • Date: Thu, 3 Apr 2008 00:39:09 -0400


You've made the interesting observation that the convolution performed by the 
EDA tool when
"Use_Init_Output = True" is a close parallel to the convolution performed by 
the original SiSoft TX
model based on SiSoft's original interpretation of the TX AMI_Getwave input 
waveform.  There's more
than a bit of serendipity in that ... it's a point that I didn't try to make 
explicitly in the
presentation on Tuesday, partly because it's subtle, and partly because it 
relies on intimate
knowledge of the reference model that not many people have.

The moral of the story is - convolution is convolution.  Whether it's done by 
the EDA tool or inside
the model's AMI_Getwave call, the answer should be the same - and that is, in
fact, what we saw.
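To put that point in concrete terms, here's a toy sketch (my own illustrative code, not the reference model): the same direct-form convolution is run once as "the tool" and once as "the model", and the outputs match sample for sample.

```c
#include <stddef.h>

/* Direct-form discrete convolution: y[n] = sum_k x[k]*h[n-k];
 * the output has nx + nh - 1 samples. */
static void conv(const double *x, size_t nx,
                 const double *h, size_t nh, double *y)
{
    for (size_t n = 0; n < nx + nh - 1; n++) {
        double acc = 0.0;
        for (size_t k = 0; k < nx; k++)
            if (n >= k && n - k < nh)
                acc += x[k] * h[n - k];
        y[n] = acc;
    }
}

/* Path 1: the "EDA tool" filters the stimulus before the model sees it. */
static void tool_side_filter(const double *stim, size_t ns,
                             const double *h, size_t nh, double *out)
{
    conv(stim, ns, h, nh, out);
}

/* Path 2: the "model" does the identical filtering inside its own
 * GetWave-style call.  Same arithmetic, different owner. */
static void model_side_filter(const double *stim, size_t ns,
                              const double *h, size_t nh, double *out)
{
    conv(stim, ns, h, nh, out);
}
```

Either way the samples that come out are identical, which is exactly what the correlation exercise showed.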

Thanks for your suggestions on clarifications for the BIRD - that's the next 
thing on my plate, and
I'll try to incorporate them.

As far as the persistent memory discussion goes - I don't think I'm a good 
judge of the merits of
one technical implementation or another.  The existing spec allows AMI_Init to 
allocate whatever
amount of memory it wants based on the way the model developer has written 
AMI_Getwave and the block
size in question.  There are other ways to allocate memory, as you point out, 
but I'm not sure if it
ends up being simpler than what we have now, or just different.  I don't claim 
to be a competent "C"
programmer, nor do I play one on TV.  I'm going to leave comments on different 
styles of memory
allocation to others who know more than I do.

I think we've come to the point where we need to be talking about a specific 
memory management
scheme and how it compares to what we have now.  Do you want to write something 
up for discussion?


Todd Westerhoff
VP, Software Products
6 Clock Tower Place, Suite 250
Maynard, MA 01754
(978) 461-0449 x24

-----Original Message-----
From: ibis-macro-bounce@xxxxxxxxxxxxx [mailto:ibis-macro-bounce@xxxxxxxxxxxxx] 
On Behalf Of Muranyi, Arpad
Sent: Wednesday, April 02, 2008 7:35 PM
Subject: [ibis-macro] Re: IBIS-AMI Correlation and BIRD Update - comments


I will attempt to reply to both of your messages incorporating
the main points of the phone conversation we had this morning.

First, thanks for your (honest) reply.  Regarding your high
level summary, I think I only have one major problem which is
the first one you mentioned, the use of "persistent memory".
I do not mind the modification of waveforms "in place" at all.

Regarding the accomplishments of IBIS-AMI, I agree, it is
defined, approved, works, and models exist.  However, your
presentation yesterday proposes a change to what has been
already approved, so I am thinking that given the fact that
we are talking about making changes, we should do our best
to get it right.  There are no hidden motivations between
the lines of my writing about delaying or stalling any
progress.  If anything, it is the "German precision" and
perfectionism in my blood that peeks through all this, not
vendor politics.  I appreciate and thank you for the warnings
about proper use of language, I certainly do not want to
offend anyone.  There are times when I try to lighten up
my style a little by using slang expressions (such as hacker,
back door...) thinking that it would give me a more friendly
tone of voice instead of being cut and dried, but it tends to
end up backfiring at me...  I apologize for that, and I will
try to refrain from using them.

Regarding the technical content of this discussion, my intent
is not to start over.  I feel that with relatively minor
changes we could make the life of the model maker a little
easier, make the spec better, make the models more portable,
and solve the problem you have identified in the IBIS-ATM
meeting yesterday.

Now, based on the lengthy telephone conversation you and I
had this morning, and re-reading your presentation multiple
times, I admit that my understanding of the Boolean
"Use_Init_Output" was a little different yesterday than it
is today.  The reason I had such a hard time changing gears
in my mind was that I was still very strongly influenced by
the example Tx code that came with your toolkit, and the
description of this proposed Boolean on pg. 18 was not quite
clear to me.

This is the way I would summarize it now:

The "Use_Init_Output" Boolean contains the answer to the
question you ask on pg. 16.  If it is TRUE, Init and GetWave
are chained, which is the lower half of pg. 16.  In this
case the output of Init and the stimulus waveform are convolved
by the EDA tool, and that result is fed into the GetWave
function (through the input argument called "wave_in").  To
make this crystal clear, I would make a small modification to
the proposed text (marked by ***...***) to emphasize that this
convolution is the responsibility of the caller of GetWave,
not GetWave itself (as I thought before):

| Use_Init_Output is of usage Info and type Boolean. When it
| is set to "True", the effects of the AMI_Init and AMI_GetWave calls
| are chained together ***in the EDA tool*** by convolving the impulse
| response returned by AMI_Init with the input waveform, which is then
| presented to the AMI_GetWave call.

On the other hand, when "Use_Init_Output" is FALSE, Init and
GetWave can be thought of as different views of the same
device, which is the upper half of pg. 16.  In this case,
the EDA tool is expected to use the (unmodified) impulse
response of the channel and convolve that with the stimulus
waveform and pass the result of that convolution to GetWave.
I completely misunderstood this because I thought that this
convolution would take place inside GetWave, and I thought
it would need an additional input (the impulse response of
the channel) which is not among the arguments of the function.
To make this thought crystal clear, I would change the following
proposed text:

| If the Reserved Parameter, Use_Init_Output, is set to "False", EDA
| tools will use the original (unfiltered) impulse response of the
| channel.
| The algorithmic model is expected to modify the waveform in place.

to:

| If the Reserved Parameter, Use_Init_Output, is set to "False", EDA
| tools will convolve the original (unfiltered) impulse response of the
| channel with the input waveform, which is then passed into the
| AMI_GetWave call.
| The algorithmic model is expected to modify the waveform in place.

If my understanding of the above is correct, and we all agree to
this interpretation, we have no problems so far, because everything
CAN BE passed in and out of Init and GetWave through the function
arguments and there is NO NEED FOR USING PERSISTENT MEMORY (for this).
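To sketch that interpretation in code (all the names here are mine, made up for illustration, and a single gain factor stands in for a full impulse-response convolution), the caller-side choice would look something like this:

```c
#include <stddef.h>

/* Stand-in for the tool's convolution engine: scales every sample.
 * A real tool would convolve with the full impulse response. */
static void apply_gain(const double *in, size_t n, double gain, double *out)
{
    for (size_t i = 0; i < n; i++)
        out[i] = gain * in[i];
}

/* Prepare the "wave_in" handed to AMI_GetWave.  When use_init_output
 * is nonzero (True), the tool filters the stimulus with the
 * Init-modified impulse response; otherwise it filters with the raw
 * (unfiltered) channel impulse response. */
static void prepare_wave_in(const double *stimulus, size_t n,
                            double raw_channel_gain,
                            double init_output_gain,
                            int use_init_output,
                            double *wave_in)
{
    double gain = use_init_output ? init_output_gain : raw_channel_gain;
    apply_gain(stimulus, n, gain, wave_in);
}
```

Either way, everything GetWave needs arrives through its "wave_in" argument, which is the whole point.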

There is one question I still don't see clearly regarding the filter
coefficients.  The "AMI_parameters_in" argument contains the tap
coefficients (among other info).  However, this is only passed into
the Init function.  Similarly, "sample_interval" and "bit_time"
are also passed only into Init.  Will the GetWave NEVER need any
of this info?  If it may need it, how do we pass it in?  (I know,
the answer here may be persistent memory).  If this is the case,
I would suggest eliminating the use of persistent memory for this
purpose and pass these parameters into GetWave on its calling
statement the same way as it is done for Init.  This will not
result in additional memory usage, because "AMI_parameters_in"
is already a pointer, and the two other parameters I mentioned
are no more than two doubles.
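For illustration only - this is NOT the approved prototype, and the parameter string contents are made up - a GetWave variant along the lines I am suggesting might look like:

```c
#include <stddef.h>

/* Hypothetical GetWave variant: sample_interval, bit_time and
 * AMI_parameters_in are passed on every call, the same way they are
 * passed to Init, so the model need not stash them in Init-allocated
 * persistent memory.  Returns 1 on success, 0 on failure. */
long AMI_GetWave_ex(double *wave, long wave_size,
                    double *clock_times,
                    double sample_interval, double bit_time,
                    const char *AMI_parameters_in,
                    char **AMI_parameters_out)
{
    (void)clock_times; (void)AMI_parameters_out;
    if (AMI_parameters_in == NULL ||
        sample_interval <= 0.0 || bit_time <= 0.0)
        return 0;
    /* toy body: a real model would parse tap coefficients out of
     * AMI_parameters_in; this stub just modifies the waveform in place */
    for (long i = 0; i < wave_size; i++)
        wave[i] = -wave[i];
    return 1;
}
```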

Regarding your second email (and Mike's recent email) in which you 
explained why persistent memory is needed: first, thanks for the
detailed explanation.  I think I understand why the sliding window
algorithm is needed, and why there is a need to pass data between
the calls of GetWave, but I don't believe that persistent memory
allocated by the Init function is the only way to do it.

My C programming is not good enough to outline something in
complete detail here, but I could see a way that the caller of
GetWave would allocate some memory space for this "scratch pad"
area as Mike called it, or "carry/borrow" material as I called
it in my previous message.  The address of this memory location
could be passed into GetWave by a pointer variable.  We could
possibly use the existing "*AMI_memory" argument for this
purpose without adding a new argument.   Before the first time
GetWave is called, this location could be initialized by the
caller, or GetWave could initialize it during the first 
call too.  The caller (EDA tool) certainly knows when it is
done calling GetWave, and at that time it could free up the
memory.

I know, the caller may not always know how much space to allocate,
but there are safe guesses.  The safest guess is the length of
the window, an arguably less safe but still reasonable guess
could be half of that.  Given the window size (vs. the full
waveform length) this is not going to become a terribly large
memory waste.
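As a rough sketch of this caller-allocated "scratch pad" idea (I am not a C expert, so the names and the trivially small filter are purely illustrative - the carry here is a single sample for a 2-sample moving sum):

```c
#include <stdlib.h>

/* The caller (EDA tool) allocates and zeroes the scratch pad, passes
 * it into every call through a void pointer, and frees it after the
 * last call.  The model uses it to carry the tail of one block into
 * the next, so y[i] = x[i] + x[i-1] works across block boundaries. */
static void getwave_block(double *wave, long n, void *AMI_memory)
{
    double *carry = (double *)AMI_memory;  /* caller-owned scratch pad */
    for (long i = 0; i < n; i++) {
        double cur = wave[i];
        wave[i] = cur + *carry;
        *carry = cur;
    }
}
```

The caller would do `void *mem = calloc(1, window_size);` before the first call and `free(mem);` after the last one.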

As far as I can tell, this mechanism would still allow us to
use the sliding window algorithm without giving any reason to
pass around entire waveforms (which I agree is not the way to
go).  As a result, the models would be less complicated and more
portable.

Another way I can see is to let GetWave allocate and initialize
the memory it needs for itself, and pass the pointer up to the
caller (the same way as we do **msg or **AMI_parameters_out) so
that after the caller (EDA tool) is done with the last call, it
could free that memory using the address in the pointer...  This
way GetWave would know exactly how much memory it needs to allocate,
and it will know when to do it, i.e. when it is called for the
first time, because the first time around we would have a NULL
pointer due to not having allocated it yet.
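Again only as an illustrative sketch (made-up names, and the same toy 2-sample moving sum standing in for a real sliding-window filter):

```c
#include <stdlib.h>

/* GetWave allocates its own state on the first call, when *AMI_memory
 * is NULL, and hands the address up through the double pointer so the
 * caller can free it after the last call.  Returns 0 if allocation
 * fails, 1 otherwise. */
static long getwave_self_alloc(double *wave, long n, void **AMI_memory)
{
    double *carry = (double *)*AMI_memory;
    if (carry == NULL) {                      /* first call: allocate */
        carry = (double *)calloc(1, sizeof *carry);
        if (carry == NULL)
            return 0;
        *AMI_memory = carry;                  /* hand address to caller */
    }
    for (long i = 0; i < n; i++) {            /* y[i] = x[i] + x[i-1] */
        double cur = wave[i];
        wave[i] = cur + *carry;
        *carry = cur;
    }
    return 1;
}
```

The attraction here is that GetWave knows exactly how much memory it needs, and the caller only has to remember to call `free()` at the end.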

I am sure there are other ways to do this too, but I am not an
expert at C programming, so I am not able to go any deeper.

In summary, the proposal you made in the last ATM meeting eliminates
the need for persistent memory for the purpose of handing the output
of Init to GetWave.  The only real technical reason I can see for
needing persistent memory is to allow the iterative sliding window
algorithm to work, but I think there are ways to achieve that without
using Init for allocating that memory space for GetWave.

I firmly believe that taking care of this in different ways
could help get more models made (which is our ultimate goal),
because it will be less difficult for the model maker, and
potentially more languages could be used.  And we could do this
with minimal changes to the BIRD we already approved.

Sorry for the long email.




From: Todd Westerhoff [mailto:twesterh@xxxxxxxxxx] 
Sent: Wednesday, April 02, 2008 8:54 AM
To: Muranyi, Arpad; 'IBIS-ATM'
Subject: RE: [ibis-macro] IBIS-AMI Correlation and BIRD Update -


If I were to come up with a high level summary of your concerns, it
would include:

- use of persistent memory between the AMI_Init and AMI_Getwave calls
- modifying time-domain waveforms in place

I know I'm simplifying, but I believe that's a reasonable 10,000-foot
view.

Here's what we've all accomplished with IBIS-AMI:

            - it's defined
            - it's approved
            - it works
            - we have working models in customers' hands

Does that mean everything associated with IBIS-AMI models is simple and
intuitive?  Nope.  Are there areas that we might do differently if we
had the chance to do it all over again?  Sure.  Is it worth opening up
a defined spec and asking vendors to change models they've already
developed (again)?  Probably not.

I'm not trying to minimize or ignore the impact on the model developer,
and I really do get that these techniques require time to understand.
I'm usually the "marketing guy" in any technical conversation, and
assume that if I can understand this stuff, it's accessible to others.

My point is this - we already have a standard and an infrastructure
that works, so why don't we turn our attention to developing reference
models that isolate the model developer from the details of the
simulator interface?  Isolate the impulse response and waveform
filtering section of the reference models, with a *your specific
filtering code goes here* approach.  I think that's entirely workable
within the existing specification.  We don't need to rework the
standard, we need to take the reference code we've published as part of
the standard and build on it.
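To sketch what I mean (the names are mine, purely illustrative), the reference code would own the simulator-facing loop once and for all, and the model author would only fill in a callback - the *your specific filtering code goes here* part:

```c
#include <stddef.h>

/* The model author supplies only this: a per-sample filter hook,
 * with an opaque pointer for whatever state their filter needs. */
typedef double (*sample_filter_fn)(double sample, void *user_state);

/* Reference-code layer: owns the GetWave-facing block handling, so
 * the author never touches the simulator interface directly. */
static void reference_getwave(double *wave, long n,
                              sample_filter_fn filter, void *user_state)
{
    for (long i = 0; i < n; i++)
        wave[i] = filter(wave[i], user_state);  /* author's code plugs in */
}

/* Example author-supplied filter: a simple 6 dB attenuator. */
static double halve(double s, void *unused)
{
    (void)unused;
    return 0.5 * s;
}
```

The interface plumbing gets written (and debugged) once, in the published reference code, instead of once per model.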

I'm therefore advocating that we work to upgrade reference models and
toolkits in response to your concerns.  I believe this addresses the
concerns you've raised without requiring anyone to rework their existing
models.

Which brings me to the unpleasant part.

I have to take exception to phrases like:

"Stuff is going in and out of the functions BEHIND the SCENES through
the backdoors"  

"yet we are already setting the stage for doing things in a kludge way"

"computer science back door trickery in order to achieve the fundamental
goal of this technology"

and my favorite example from your presentation at DesignCon:

If you're trying to work towards an open standard, these choices of
language and graphics aren't helping your case.  In my personal opinion,
this is consistent with what I'd expect from a vendor interested in
delaying progress.

I apologize for having to make what sounds like a personal accusation -
I don't mean it that way ... but I am requesting that we all choose our
words very carefully.

This is on the record, after all.


Todd Westerhoff
VP, Software Products
6 Clock Tower Place, Suite 250
Maynard, MA 01754
(978) 461-0449 x24
IBIS Macro website  :  http://www.eda.org/pub/ibis/macromodel_wip/
IBIS Macro reflector:  //www.freelists.org/list/ibis-macro
To unsubscribe send an email:
  To: ibis-macro-request@xxxxxxxxxxxxx
  Subject: unsubscribe
