[ibis-macro] Re: IBIS-AMI Correlation and BIRD Update - comments

  • From: Mike Steinberger <msteinb@xxxxxxxxxx>
  • To: twesterh@xxxxxxxxxx
  • Date: Wed, 02 Apr 2008 17:33:35 -0500

Folks-

As far as I'm concerned, Todd is exactly correct. In order to understand the nature of 
this "persistent memory", however, it might help to walk through an example.

Suppose I have a clock recovery loop in a receiver, and I wish to model its 
behavior in my GetWave function (as most model developers will want to do). 
That clock recovery loop model will at the very least need to remember what the 
latest recovered phase is. In addition, there will undoubtedly be some 
filtering in the loop, and the information stored in that filter will have to 
be remembered as well.

Furthermore, as Todd points out, the mode of simulation we've all agreed to, 
and for good reason, is that there will usually be multiple calls to GetWave in 
order to simulate a data sequence of desired length. We wouldn't want the model 
of the clock recovery loop to start with a recovered phase of zero every time 
GetWave is called. Rather, for each call to GetWave, we'd like the clock 
recovery model to start with the recovered phase it had at the end of the last 
GetWave call. A similar statement goes for any information that was stored in 
the clock recovery loop filters.
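
To make that concrete, here is a minimal sketch of the kind of state
such a model might carry from one GetWave call to the next. The struct
and field names are purely illustrative, not anything from the BIRD:

    /* Illustrative only: state a clock recovery model might keep
     * between GetWave calls. */
    typedef struct {
        double recovered_phase;   /* latest recovered clock phase     */
        double loop_integrator;   /* integrator state of loop filter  */
        double prev_phase_error;  /* memory for the proportional path */
    } cdr_state_t;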

So, how do we pass the recovered clock phase from one GetWave call to the next? Well, we could in principle have a separate variable for that and pass it explicitly in the function signature. However, would a similar approach work for the information stored in the clock recovery loop filters? That's a whole lot tougher because the information stored in one clock recovery loop's filters can be very different from that stored in another clock recovery loop. Are we going to somehow try to standardize that information so that it can be passed explicitly through the GetWave function's signature? I don't think so.

Suppose we provide the GetWave function's signature with a bunch of generic variables that the model developer can use any way they like? That would leave a lot less that had to be standardized and give the model developer more flexibility. But how do we know how many variables model developers will need? Are we somehow going to restrict the number of variables that can be passed? I don't think so.

OK, so how about giving the GetWave function a block of scratch pad memory that the model developer can use any way they like? That would certainly simplify the call to GetWave, offer more flexibility to the model developer, and further reduce the number of things to be standardized. That's a lot more manageable, but how do we know how big a block of memory to put in this scratch pad space? Are we going to somehow limit the amount of memory a model can use? I don't think so.

So, at the very least, the model needs to tell us how much memory it needs. We could ask the model developer to put that information into a file such as the AMI file. Given that the amount of memory required may change from one simulation to the next, however, some models would have to request a large amount of memory just in case they need it, even though they usually won't.

By the way, this block of scratch pad memory will need to be initialized the 
first time it gets used. Are we going to ask the GetWave function to initialize 
this memory the first time it gets called? How does GetWave know when it's been 
called for the first time? Are we going to put another variable in the function 
signature for that?

So one solution would be to get the size of the required scratch pad memory from a file, allocate the memory in the EDA platform, pass a variable to GetWave to let it know that it's being called for the first time, and de-allocate the memory in the EDA platform at the end of the simulation. That can work.
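
In rough C terms, that first-call-flag approach would look something
like this (hypothetical signature, for illustration only):

    /* The EDA tool owns the scratch memory and tells the model when
     * it is being called for the first time. */
    long GetWave_with_flag(double *wave, long wave_size,
                           void *scratch, long scratch_size,
                           int is_first_call);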

But here's another idea: Suppose we had another function to be supplied by the 
model developer that would allocate the memory and initialize it as well? (We 
could give it a really spiffy name like AMI_Init().) We'd also need another 
function to de-allocate the memory at the end of the simulation. (Maybe call 
that one AMI_Close().) These two additional functions might also allow the 
model developer to do some nifty things like LTI processing if they want to, or 
to generate some sort of model-specific report at the end of the simulation if 
they want to. One thing and another, this solution wouldn't be much more 
complex than a less flexible solution such as that described in the previous 
paragraph, and there are things to like about the increased flexibility.
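
To sketch the resulting call sequence from the EDA platform's side
(the argument lists here are stripped down to the bare minimum for
illustration; the BIRD defines the real signatures):

    #include <stddef.h>

    /* Simplified, illustrative signatures; the BIRD defines the real
     * argument lists. */
    long AMI_Init(void **ami_memory_handle);
    long AMI_GetWave(double *wave, long wave_size, void *ami_memory);
    long AMI_Close(void *ami_memory);

    /* EDA-platform side: one Init, many GetWave calls, one Close. */
    void run_simulation(double *wave, long total_size, long block_size)
    {
        void *handle = NULL;
        AMI_Init(&handle);           /* model allocates and seeds its state */
        for (long off = 0; off < total_size; off += block_size) {
            long n = total_size - off;
            if (n > block_size)
                n = block_size;
            AMI_GetWave(wave + off, n, handle);  /* state rides along in handle */
        }
        AMI_Close(handle);           /* model frees what it allocated */
    }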

This is how we got to where we are.

Hope this helps.
Mike S.



Todd Westerhoff wrote:
Arpad,

Thanks for your time on the phone this morning - I now understand that the 
graphic in the DesignCon
presentation was in reference to the comment "the truth shall set you free" in 
the SiSoft TX model.
I have no problem with the humor [indeed, how could I?] - and apologize for the 
misunderstanding.

I also understand that your concerns are mostly based on the "persistent
memory" created by
the AMI_Init call, and the fact that data is therefore passed to the 
AMI_Getwave code that doesn't
go through the AMI_Getwave function call.
Here's the thing: it's actually required by virtue of how the analysis is
performed.  I didn't realize
this while we were speaking on the phone, but I'll try to explain it now, for 
everyone's benefit.
I'll let the experts step in and correct me if I mess this up.

AMI_Getwave processes data in blocks.  This is done for efficiency's sake - if 
we were trying to run
a 10,000,000 bit simulation and had to compile the entire waveform before 
passing it into a model,
there would be a penalty in computer memory [and presumably, run time].  
Processing waveforms in
blocks allows us to process large data streams without having to reserve memory 
for the entire
waveform at different nodes in the circuit.  We save or process the portions of 
the waveforms we're
interested in at the node we care about, but the rest of the data just passes 
through the system.
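
To put rough numbers on it: at, say, 32 samples per bit and 8 bytes per
double-precision sample (illustrative figures only), a 10,000,000 bit
waveform occupies 10e6 x 32 x 8 = 2.56 GB at every node where it has to
be held, while a block of 64K samples is only 512 KB.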

Here's the rub:

The AMI_Getwave function for a model gets called many times to process a waveform.
If the AMI_Getwave
call were completely self-standing (i.e. no persistent memory from the AMI_Init 
call), the
AMI_Getwave code would have no way of knowing whether it was being called the 
1st, 2nd, 3rd or nth
time.

The data gets processed in blocks of an arbitrary size, but the filtering needs 
to be continuous.
Therefore, the AMI_Getwave call needs to have persistent memory between calls, 
because it's going to
need some of the waveform and model state data from the previous call to filter 
the waveform for the
current call.
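
As a concrete illustration (my own sketch, not code from any model): a
direct-form FIR filter applied block by block, with the delay line kept
in persistent state so the filtering stays continuous across call
boundaries:

    #include <string.h>

    typedef struct {
        const double *taps;   /* filter coefficients, length ntaps          */
        int           ntaps;
        double       *delay;  /* delay line, length ntaps; persists between
                               * calls - this is the "persistent memory"    */
    } fir_state_t;

    void fir_block(fir_state_t *s, double *wave, long n)
    {
        for (long i = 0; i < n; i++) {
            /* shift the delay line and insert the newest input sample */
            memmove(s->delay + 1, s->delay,
                    (s->ntaps - 1) * sizeof(double));
            s->delay[0] = wave[i];

            double acc = 0.0;
            for (int k = 0; k < s->ntaps; k++)
                acc += s->taps[k] * s->delay[k];
            wave[i] = acc;    /* result returned in place */
        }
    }

Because the delay line survives between calls, the first samples of each
block are filtered against the tail of the previous block, exactly as if
the whole waveform had been processed in one pass.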

This, I believe, was the thing that Adge was talking about at DesignCon when he
said something to
the effect of "it takes a little getting used to, but once you understand it, 
it's not that
difficult".

So now we have a conundrum: if there needs to be persistent memory between 
calls to AMI_Getwave, how
can that be accomplished?  And the answer is - it's AMI_Init's job.  
AMI_Getwave needs a block of
persistent memory, the size of which is both model-specific and based on the 
block size for the
simulation in question.  That's what 3.1.2.7 is getting at:

| 3.1.2.7 AMI_memory_handle
| =========================
|
| Used to point to local storage for the algorithmic block being modeled and
| shall be passed back during the AMI_GetWave calls. e.g. a code snippet may
| look like the following:
|
|   my_space = allocate_space( sizeof_space );
|   status = store_all_kinds_of_things( my_space );
|   *AMI_memory_handle = my_space;
|
| The memory pointed to by AMI_memory_handle is allocated and de-allocated
| by the model.
|
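
In concrete C terms, that snippet amounts to something like the
following (the state struct and its contents are hypothetical; only the
handle mechanism comes from the BIRD):

    #include <stdlib.h>

    /* Hypothetical model state; contents are up to the model developer. */
    typedef struct {
        double recovered_phase;
        /* ... filter states, leftover samples, etc. ... */
    } model_state_t;

    long AMI_Init(void **ami_memory_handle /* other arguments omitted */)
    {
        model_state_t *my_space = malloc(sizeof *my_space);
        if (my_space == NULL)
            return 0;                      /* failure */
        my_space->recovered_phase = 0.0;   /* initialize the state */
        *ami_memory_handle = my_space;     /* handed back on each GetWave call */
        return 1;                          /* success */
    }

AMI_Close then simply free()s the same pointer, which is what "allocated
and de-allocated by the model" means in practice.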

Thus - if we make AMI_Getwave self-standing (thereby eliminating the link
between AMI_Init and
AMI_Getwave that is your main concern), then we will lose persistent memory 
between AMI_Getwave
calls, and be forced into passing entire waveform streams around.
I'm guessing that would mean a big hit in both memory use and simulation speed, 
but that's something
I'll let the experts comment on.

Thus - I'm back to where I was this morning - thinking we need to keep the 
existing call structure,
but focus on reference models and model development tools.

Todd.

Todd Westerhoff
VP, Software Products
SiSoft
6 Clock Tower Place, Suite 250
Maynard, MA 01754
(978) 461-0449 x24
twesterh@xxxxxxxxxx
www.sisoft.com
-----Original Message-----
From: ibis-macro-bounce@xxxxxxxxxxxxx [mailto:ibis-macro-bounce@xxxxxxxxxxxxx] 
On Behalf Of Muranyi,
Arpad
Sent: Wednesday, April 02, 2008 1:00 AM
To: IBIS-ATM
Subject: [ibis-macro] IBIS-AMI Correlation and BIRD Update - comments

Hello IBIS-AMI experts,

I want to preface this message with a "warning" to eliminate the
possibilities of going off in a tangent of personal remarks, and
ending up hurting each other's feelings.  My intentions are NOT
to end up with a blood bath, but the seriousness of the issues I
want to raise could very easily take us there if we don't handle
the topic in a professional way.

First, I would like to comment on the presentation we saw today
in the IBIS-ATM meeting.


1)  I am fine up to pg. 9, but I have a little problem on
pg. 9.  This is really a small thing, but it can be confusing
considering the big picture.  Based on pg. 8, I gather that
the meaning of the arrow pointing to the black box from above
is "this is what's inside the box".  On pg. 9, however, the
same notation seems to mean "this is the input to the black
box".  (I am saying this based on the example Tx model).  As
I said, this is a small detail, but the reason I mention this
is because this leads me to something bigger later.


2)  On pg. 10 I am missing a statement that would clarify that:

   h_teg(t) = h_cr(t) * h_tei(t)

(where "_" stands for subscript and "*" stands for convolve),
or, on the right side of the bottom half I would have used the
same expression that is found on the left side of the bottom
half of pg. 11 for better clarity.  Again, I say this based on
what I see in the example Tx model.  (You actually show that
equivalence on pg. 12, but pg. 12 is still misleading somewhat
because it gives me the impression that all of that is inside
GetWave, when in reality that top arrow is an input to GetWave).

In terms of the drawing, I am missing an arrow indicating that
the output of the Init box is fed into the GetWave box.  I would
have drawn a similar arrow that you have on pg. 11, except
pointing to the GetWave box instead of the expression on the
left of the bottom half.


3)  I admit that these are nitpicky comments and I can understand
that Todd's busy schedule may have played a major role in missing
such minor details.


4)  However, as I am studying the example Tx model's source code,
I feel compelled to bring up a serious concern I have regarding
the coding style which may have an effect on this BIRD (and the
specification).  You may say, who am I to complain about coding
style when I made such a fool of myself (somewhat deliberately)
in my last DesignCon presentation, pretending that I don't know a
thing about C programming...  Please be patient and try to hear
me out despite that.


Here is my understanding of the structure of the example Tx model:

Init:
=====
- the impulse response is passed to Init via a pointer variable
  along with several other parameters, including the filter tap
  coefficients
- the code convolves the impulse response with the tap coefficients
  and returns the results "in place", i.e. in the same memory location
  where the impulse response came in
- in preparation for the convolution in GetWave, an integration
  is performed on the equalized impulse response to obtain a step
  response.  One could argue that this code would really belong
  in the GetWave function, but I can see it being here too, since
  it is related to the impulse response in some ways.
- NOTE: this step response is NOT returned in any of the function
  arguments to the caller.  It is just left in memory for GetWave
  assuming that no garbage collection is happening until we are
  done with GetWave.
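
My reading of that Init flow, expressed as a C sketch (all names are
hypothetical, error handling is omitted, and the file-scope variables
are precisely the "left in memory" behavior being described):

    #include <stdlib.h>
    #include <string.h>

    static double *step_response;  /* "left in memory" for GetWave to find */
    static long    step_len;

    void init_equalize(double *impulse, long n,
                       const double *taps, int ntaps, long samples_per_tap)
    {
        /* convolve the impulse response with the tap coefficients,
         * returning the result "in place" */
        double *eq = calloc((size_t)n, sizeof(double));
        for (long i = 0; i < n; i++)
            for (int k = 0; k < ntaps; k++) {
                long j = i - k * samples_per_tap;
                if (j >= 0)
                    eq[i] += taps[k] * impulse[j];
            }
        memcpy(impulse, eq, (size_t)n * sizeof(double));
        free(eq);

        /* integrate into a step response (sample-interval scaling
         * omitted) and leave it in static memory for GetWave */
        step_response = malloc((size_t)n * sizeof(double));
        step_len = n;
        double acc = 0.0;
        for (long i = 0; i < n; i++) {
            acc += impulse[i];
            step_response[i] = acc;
        }
    }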

GetWave:
========
- the stimulus waveform is passed to GetWave via a pointer variable
  along with some other parameters
- the output of Init that was left in memory is used AS THE SECOND
  INPUT to the convolution algorithm.  This second input is not
  passed into the function as a function argument "normally" as
  the other inputs are.
- additional code takes care of the block by block execution of this
  function using the same technique of leaving things in memory to
  pass left overs from a previous run as input for the next run.
- the result is returned "in place", i.e. in the same memory location
  where the stimulus waveform came in.

What bothers me the most about this example Tx model is that not all
of the input and output arguments are passed through the function call
interface.  Stuff is going in and out of the functions BEHIND the
SCENES through the backdoors!  Don't get me wrong, I think this is
a wonderful and clever engineering marvel for situations when there
is no other way to achieve things, hats off to whoever developed it.
But we are defining a new specification, we have all the freedom
to do it right, yet we are already setting the stage for doing
things in a kludgy way.

As far as I am concerned, each function should have all of its
inputs and outputs go through the function arguments (and returns).
I don't think this would have to result in memory penalty (due to
duplication of data when calling or exiting the functions) if
pointers are used appropriately.  Even for the block by block
repetitive execution of GetWave, I could see mechanisms for
passing and returning the "left overs" around the boundaries
between the calls.  Something similar to "carry out" and "borrow"
could be implemented on the function calls for that, but I could
see even better ways of doing that without having to write any
code in the GetWave function itself (to reduce the burden of the
model maker).
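
For illustration, a fully explicit interface along those lines might
look like this (entirely hypothetical, not something in the BIRD):

    /* Everything crosses the function boundary explicitly: the output
     * of Init arrives as an argument, and the inter-block "carry" state
     * is passed in and returned through the same buffer. */
    long GetWave_explicit(double *wave, long wave_size,
                          const double *init_output, long init_length,
                          double *carry_state, long carry_length);

With such a signature the caller, not the model, owns all of the
buffers, and each call is a function of its arguments alone.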


5)  Now, we could argue that this is just a coding style problem,
we could fix it by writing a better example Tx model.  Unfortunately
not so.  The reason being that Section 10 of the BIRD describes
each function with a precise list and description for each argument.
There are no provisions there to pass two inputs to GetWave as it
may be necessary if we use the SiSoft interpretation of how data
flows.  There are no provisions there to do a "carry out" and
"borrow" for running the GetWave multiple times for block by block
execution either.

But even more, pg. 18 of today's presentation proposes a new
parameter associated with GetWave: "Use_Init_Output".  How would
the caller of GetWave pass in the output of Init without an
additional function argument for the second input?  This is only
possible through the backdoor technique I described above!


6)  I think we should have a spec with a function interface which
provides all of the necessary inputs and outputs, so that model
makers would not need to resort to computer science back door
trickery in order to achieve the fundamental goal of this
technology.

A properly designed function interface would also make the use of
other languages easier, because the function interfaces are much
more similar between the languages than the backdoor capabilities.

I firmly believe that correcting these issues would make the life
of the model maker much easier.  People usually understand function
calls much more readily than back door tricks which rely on memory
management features of a specific language.  These types of things
are invented by experienced programmers, not electronic engineers...


In summary:
===========

I would like to take this opportunity to clean up a little bit as
follows:

a)  Each function should have all of its arguments on the function
interface

b)  Each function should be an independent function on its own, i.e. one
function should not depend on memory allocations in the other, other
than using pointer variables in the argument.

c)  The functions should not rely on stuff left in memory, i.e. no
back door data exchange should be allowed between functions (unless
someone is a hacker, just kidding).

d)  The caller of the functions should take care of passing arguments
around from one function's output to another function's input.

e)  The caller of the GetWave function should take care of breaking
up larger data blocks into smaller pieces and executing GetWave
repetitively without relying on any code related to this in the
GetWave function itself.

There may be more (or less), but I hope you all get the point.

I hope this will not result in a bunch of virtual rotten eggs and
tomatoes thrown at me...

Thanks,

Arpad
===================================================================
---------------------------------------------------------------------
IBIS Macro website  :  http://www.eda.org/pub/ibis/macromodel_wip/
IBIS Macro reflector:  //www.freelists.org/list/ibis-macro
To unsubscribe send an email:
  To: ibis-macro-request@xxxxxxxxxxxxx
  Subject: unsubscribe

