[SI-LIST] Re: Jitter transfer vs. accumulation

  • From: "Tang, George" <George.Tang@xxxxxxx>
  • To: <Chris.Cheng@xxxxxxxxxxxx>, <si-list@xxxxxxxxxxxxx>
  • Date: Wed, 14 Mar 2007 18:46:18 -0600

Chris,

You sound very confused.  A receiver core has 2 types of inputs -- a
reference clock input and an RX data channel input.  You have these 2
types of inputs mixed up completely.  Once you understand that they are
separate, most of your questions clear up automatically.

Thanks,

George

Note: Effective October 14, 2006, My LSI Logic Email address will change
to: george.tang@xxxxxxx

Please update address books and email lists accordingly.


-----Original Message-----
From: si-list-bounce@xxxxxxxxxxxxx [mailto:si-list-bounce@xxxxxxxxxxxxx]
On Behalf Of Chris Cheng
Sent: Tuesday, March 13, 2007 6:47 PM
To: si-list@xxxxxxxxxxxxx
Subject: [SI-LIST] Jitter transfer vs. accumulation

This stream of questions has been on my mind for the past few years, and
every time I went to DesignCon I ended up with more and more questions
rather than answers. So let me try to clear this up and let everyone
hammer me back down to the ground :-D.
Here it goes:

It is a well known phenomenon that PLLs suffer from jitter accumulation,
i.e., supply and substrate noise coupled into the ultra-high-gain VCO
results in jitter. Many ways have been invented to combat this,
including PLLVDD filters, on-chip regulators and, most importantly,
adjusting the PLL loop filter dynamics. It can be shown (and quite
intuitively) that when you decrease the lock time and increase the
tracking bandwidth, you allow the PLL to correct itself more quickly
against the induced jitter, thereby decreasing the peak-to-peak jitter.

It is also a well known phenomenon that in the presence of input jitter,
the loop dynamics need to be adjusted to minimize the propagation of that
jitter. Worse, there is a possibility of jitter peaking, where the jitter
may be amplified over a narrow band of frequencies. It can also be shown
(and again quite intuitively) that when you increase the lock time and
decrease the tracking bandwidth, you decrease the jitter tracking of the
PLL.
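
As a rough sketch of those two statements (a textbook second-order PLL
model in Python with made-up numbers, not any particular design), the
input-jitter and VCO-jitter paths share the same denominator but have
opposite shapes:

    import numpy as np

    # Illustrative second-order PLL numbers -- assumptions, not a real design.
    wn   = 2 * np.pi * 1.0e6        # natural frequency (assume ~1 MHz)
    zeta = 0.7                      # damping factor

    f   = np.logspace(3, 9, 601)    # 1 kHz to 1 GHz
    s   = 1j * 2 * np.pi * f
    den = s**2 + 2 * zeta * wn * s + wn**2

    H_in  = (2 * zeta * wn * s + wn**2) / den   # input jitter -> output: low-pass, with peaking
    H_vco = s**2 / den                          # VCO-induced jitter -> output: high-pass

    peak_db  = 20 * np.log10(np.abs(H_in).max())            # jitter peaking of the input path
    vco_leak = np.abs(H_vco[np.argmin(np.abs(f - 1e5))])    # 100 kHz VCO wander reaching the output
    print(f"input-path peaking ~{peak_db:.2f} dB, VCO-path gain at 100 kHz ~{vco_leak:.3f}")

Push wn down and the low-pass seen by input jitter shrinks, but so does
the high-pass that rejects the VCO's own wander -- which is exactly the
tension below.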

So from the above, it is clear that the solutions to the two problems
contradict each other. If you believe you have a problem with input
jitter, you slow down the loop dynamics (i.e., over-damp) in your PLL. If
you believe you have supply ripple or a noisy substrate, you speed up
your loop dynamics (well... at least make sure it is stable and not
under-damped).

To go a little deeper, what exactly is the input jitter we are talking
about? Well, let's use the conventional jitter definitions: Tj, Dj, Rj,
DDj, ISI, DCD, Pj, etc. For most of the multi-gigabit systems I've seen,
Dj seems to be the dominant component at the input in a system
environment. Within Dj, I believe the DDj part is relatively high-speed.
After all, your impulse-response pre- and post-cursors die down after a
few UIs, and your alternating rise/fall edge clock (DCD) happens within
a UI. The PLL loop dynamics have little or no benefit against such
high-speed jitter. Most (but not all) PLL loop dynamics are at least 20x
slower than the reference frequency just to be stable. Any jitter that
happens within a few cycles of the sampling frequency does not get the
benefit of low jitter transfer at any stable loop filter setting. So now
that we have knocked out the big, fast components of the input jitter,
what's left? My guess is the true Rj AND the jitter induced by the
transmit circuit's PLL (i.e., JITTER ACCUMULATION).
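
To put rough numbers on that timescale argument (illustrative values I am
assuming, not from any spec):

    bit_rate = 2.5e9            # assume a 2.5 Gb/s lane
    f_ref    = 100e6            # assume a 100 MHz reference clock
    f_loop   = f_ref / 20       # "at least 20x slower" -> ~5 MHz loop bandwidth
    f_ddj    = bit_rate / 5     # DDj from cursors that die out in ~5 UI -> ~500 MHz content
    print(f"DDj sits ~{f_ddj / f_loop:.0f}x above the loop bandwidth")

DDj lives roughly two orders of magnitude above anything a stable loop
can track, no matter how the filter is tuned.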

So when we try to fight jitter transfer by dropping the loop bandwidth,
aren't we actually INCREASING the jitter accumulation at the transmitting
end, so that we end up needing to cut the bandwidth downstream to
minimize jitter transfer? Does the solution we are implementing actually
create/worsen the problem?

Every designer sees their own devil, and I certainly would not want to
impose my point of view on this issue on anyone who at least understands
my points above, whether you agree with me or not. However, looking at
the latest FC or PCIe Gen II specs, it seems the group thinking has
already settled on minimizing jitter transfer being more important than
jitter accumulation. In fact, I am not even aware of any jitter
accumulation spec in FC or PCIe (please correct me if I am wrong). The
fact that Fc is set to something like 1/1667th of the bit rate means the
PLL is way over-damped.
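
To put a number on that corner (my own arithmetic, assuming the Gen I
rate of 2.5 Gb/s): Fc = 2.5 GHz / 1667, or roughly 1.5 MHz, i.e. the
recovery loop is only expected to track jitter below about 1.5 MHz.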

This seems counter to my own experience of characterizing PLLs, where
jitter accumulation is almost always larger than true jitter transfer. I
have to qualify that with the jitter definitions above: slow,
non-high-speed input jitter transfer is what I am talking about. Dj, and
specifically DDj, at the input is big but not within the bandwidth of our
discussion here.

I know there are many smart brains responsible for coming up with this
1/1667 bit-rate Fc, so somewhere along my logic I must be wrong. Can
someone point me to why jitter accumulation is less of a concern than
transfer in these standards?

Chris


------------------------------------------------------------------
To unsubscribe from si-list:
si-list-request@xxxxxxxxxxxxx with 'unsubscribe' in the Subject field

or to administer your membership from a web page, go to:
//www.freelists.org/webpage/si-list

For help:
si-list-request@xxxxxxxxxxxxx with 'help' in the Subject field


List technical documents are available at:
                http://www.si-list.net

List archives are viewable at:     
                //www.freelists.org/archives/si-list
or at our remote archives:
                http://groups.yahoo.com/group/si-list/messages
Old (prior to June 6, 2001) list archives are viewable at:
                http://www.qsl.net/wb6tpu
  
