Kerry - Thanks for your reply. It does not look like finer-grained numbers
of the kind I need are aggregated in the AWR. In a nutshell, I was hoping
to find statistics similar to the contents of DBA_HIST_SERVICE_STATS at a
more granular level. Instrumentation only goes so far - the AWR appears to
have MODULE/ACTION defined for only a few tables (DBA_HIST_SQLSTAT, for
example). A candidate table that appears to have these statistics,
DBA_HIST_SESSMETRIC_HISTORY, is not being populated in 10gR2. Looking at
DBA_HIST_METRIC_NAME for the group 'Service Metrics' shows the metrics
that would be useful for this table.

Jeff

On Fri, Sep 11, 2009 at 6:12 PM, Kerry Osborne <kerry.osborne@xxxxxxxxxxx> wrote:
> Jeff,
>
> Here's the basic script, pulled from statspack and then modified to use
> the equivalent AWR tables instead (it produces the same numbers as the
> standard AWR report). I modified it to sort by DB time, though, so I
> could see which periods were busiest in terms of DB time. It uses
> DBA_HIST_SYS_TIME_MODEL (which is loaded from V$SYS_TIME_MODEL), as
> mentioned by Lei. This view doesn't have user or session info, though.
> Nor do the underlying X$ structures (x$kewssmap and x$kewssysv).
> V$SESS_TIME_MODEL does have session info, so maybe you could pull
> something from there if you really need finer-grained info (i.e., by
> user). Anyway, here is the script.
> (There is a little more info on my blog about this script if you're
> interested.)
>
> set lines 155
> col dbtime for 999,999.99
> col begin_timestamp for a40
>
> select * from (
> select begin_snap, end_snap, timestamp begin_timestamp, inst,
>        a/1000000/60 DBtime from
> (
> select
> e.snap_id end_snap,
> lag(e.snap_id) over (order by e.snap_id) begin_snap,
> lag(s.end_interval_time) over (order by e.snap_id) timestamp,
> s.instance_number inst,
> e.value,
> nvl(value-lag(value) over (order by e.snap_id),0) a
> from dba_hist_sys_time_model e, DBA_HIST_SNAPSHOT s
> where s.snap_id = e.snap_id
> and e.instance_number = s.instance_number
> and to_char(e.instance_number) like
>     nvl('&instance_number',to_char(e.instance_number))
> and stat_name = 'DB time'
> )
> where begin_snap between nvl('&begin_snap_id',0) and
>       nvl('&end_snap_id',99999999)
> and begin_snap=end_snap-1
> order by dbtime desc
> )
> where rownum < 31
> /
>
> Kerry Osborne
> Enkitec
> blog: kerryosborne.oracle-guy.com
>
>
> On Sep 9, 2009, at 8:13 PM, Jeffery Thomas wrote:
>
>> Hello all -
>>
>> I'm composing a few historical reports from the AWR, using this link
>> as a guide:
>>
>> http://www.oracle.com/technology/products/manageability/database/pdf/owp_awr_historical_analysis.pdf
>>
>> One report I'm trying to develop is a "DB Time by User" trend report -
>> if at all practical - but I'm having difficulty determining which
>> DBA_HIST* tables are relevant with respect to aggregating by the user.
>>
>> Would anyone have a similar report, or at least know what tables may
>> contain the necessary information?
>>
>> Thanks -
>> Jeff
>> --
>> //www.freelists.org/webpage/oracle-l
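[Following Kerry's pointer to V$SESS_TIME_MODEL, one way to see DB time by
user for currently connected sessions is sketched below. This query is not
from the thread: the join to V$SESSION on SID and the grouping by USERNAME
are assumptions, and unlike the AWR-based script it only covers time
accumulated since each session logged on, not history.]

-- Sketch: current DB time by user, from session-level time model stats.
-- Covers only sessions still connected; values are since logon.
select n.username,
       round(sum(t.value)/1000000/60, 2) dbtime_min
from   v$sess_time_model t,
       v$session n
where  t.sid = n.sid
and    n.username is not null
and    t.stat_name = 'DB time'
group by n.username
order by dbtime_min desc;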