Re: index with very high LIO

  • From: Greg Rahn <greg@xxxxxxxxxxxxxxxxxx>
  • To: Jeremy Schneider <jeremy.schneider@xxxxxxxxxxxxxx>
  • Date: Fri, 18 Nov 2011 16:12:41 -0800

The big problem I see here is simply that the lack of hardware
infrastructure is requiring what I refer to as "unnatural acts of
engineering" in order to do this extract.  While this system may not
require 10GB/s of scan bandwidth, less than 100MB/s is really
insufficient for any production system these days -- especially one
that requires multi-hundred-GB scans.  To put it in perspective, one
can get more than 100MB/s from the SSD drive in a laptop.
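
For a rough sense of the scan times involved (back-of-envelope only --
the table sizes come from your note, the rates are illustrative):

    # Rough scan-time estimate: two ~400GB tables at various scan rates.
    # All numbers here are illustrative, not measured from the system.
    data_gb = 2 * 400                    # ~800 GB to read off disk
    for rate_mb_s in (20, 100, 1000):    # observed, 1GbE ceiling, modest SAN
        hours = data_gb * 1024 / rate_mb_s / 3600
        print(f"{rate_mb_s:>5} MB/s -> {hours:.1f} hours")

At 20MB/s that's roughly 11 hours just to move the data; even at the
full 1GbE ceiling it's over two hours, which is why a 10-minute target
doesn't pencil out.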

I'd think that partitioning would be a big help here to reduce the
data required off disk (thinking txns & time series data), but getting
there means rebuilding two 400GB tables and their indexes over that
slow storage connection.
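
Rough arithmetic on why the pruning would pay off (the retention figure
below is a guess on my part, purely for illustration):

    # If the 400GB table holds ~90 days of history and the extract only
    # needs one day, a date-range partition scan touches ~1/90th of it.
    table_gb, days_retained = 400, 90    # assumed, not from the system
    print(f"~{table_gb / days_retained:.1f} GB per daily partition "
          f"vs {table_gb} GB for a full scan")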

BTW, I'm not sure why you need trace files from the PX servers to
calculate IO bandwidth -- just monitor the network throughput, or get a
rough number from the elapsed time and the amount of data read from disk.
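
Something like this is all I mean by a rough number (the inputs are
placeholders -- plug in the statement's actual bytes read and
wall-clock time):

    # Aggregate throughput = total bytes read off disk / elapsed wall time.
    bytes_read = 850 * 1024**3     # placeholder: physical bytes read
    elapsed_s  = 10 * 3600         # placeholder: elapsed wall-clock seconds
    print(f"~{bytes_read / elapsed_s / 1024**2:.0f} MB/s aggregate")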

On Fri, Nov 18, 2011 at 10:46 AM, Jeremy Schneider
<jeremy.schneider@xxxxxxxxxxxxxx> wrote:
> The reason for 20MB/s is the storage network.  They're using DNFS with a
> single (bonded-backup-mode) 1G network card.  (FYI, tcp window sizes are
> 16M.)  Last night I ran a parallel(64) query and I think that I was
> still only getting something around 30 MB/s throughput w/direct path...
> but it's tough to piece together 64 tracefiles to get an accurate total
> throughput measure.  (By another measure I saw 60 MB/s but I think that
> might have included some cache effect...)  At any rate the best
> theoretical throughput on a one-gig card is around 100MB/s.  I don't
> think it's possible to full scan two 400 GB tables over a 1Gbit
> network link in 10 minutes, is it?  Also, this is not a warehouse system... it's
> *heavily* operational, and the operational workload is running
> satisfactorily with 1G cards.  In fact this query is trying to extract a
> day's worth of data to get it INTO a warehouse.
>
> I do think the slow link skews the usual points where it becomes
> beneficial to switch from index to hash methods.  I'd wager that on this
> system, a couple million rows were probably faster with the index.
> (Whereas generally a hash might win for this many rows.)  Recently this
> query has inched toward 10 million, and perhaps now a hash join is going
> to be better -- but it's still going to take many hours.

-- 
Regards,
Greg Rahn
http://structureddata.org
--
//www.freelists.org/webpage/oracle-l

