Hi Samuel,
I am not quite sure I understand your scenario correctly, as you seem to mix
two different things. So please let me get it straight first.
1) You suffered a storage problem
2) After this issue your execution plans changed from an index access (range
scan?) to a full table scan
3) As a consequence of the execution plan changes you are now suffering from
"direct path read" waits
4) You rebuilt the indexes and re-gathered statistics for the corresponding
tables
5) Step 4 did not solve the problem, so you faked the statistics to switch
from "direct path read" to buffered reads
If all of these steps are correct, then your root cause is somewhere in step 2.
Now the question is: what kind of storage problem did you have?
Was it a storage downtime, corruption or a performance problem? Did you by any
chance gather system statistics during this storage issue period? Do your
problematic SQLs use PX (parallel execution)?
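A quick way to check the system statistics question is to query the base table
sys.aux_stats$ - the DSTART/DSTOP rows show when the statistics were gathered,
so you can compare that with the date of the outage. A sketch, to be run with
DBA privileges:

```sql
-- Show current system statistics and their gathering window.
-- DSTART / DSTOP (in SYSSTATS_INFO) tell you when they were collected;
-- compare that with the date of the storage outage.
SELECT pname, pval1, pval2
FROM   sys.aux_stats$
WHERE  sname IN ('SYSSTATS_INFO', 'SYSSTATS_MAIN');
```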
Best Regards
Stefan Koehler
Freelance Oracle performance consultant and researcher
Homepage: http://www.soocs.de
Twitter: @OracleSK
samuel guiñales cristobal <samuelg.cristobal@xxxxxxxxx> wrote on 23 October
2016 at 20:31:
Hi all,
Recently we suffered a storage problem in RAC; all instances went down.
After recovery, we detected high "direct path read" waits on some
small tables in some databases, with OS I/O wait at 20-40% - a hard
performance problem.
Rebuilding indexes and gathering statistics on these tables did not solve it.
The execution plan changed to a full table scan, and the index is no longer
used as before. Maybe the optimizer is involved via the parameter
_small_table_threshold? But why do the physical reads start now?
Some info from one database, and our workaround:
version................: 11.2.0.4
OS.....................: Linux x86-64
info...................: no Tuning and Diagnostics Pack
_small_table_threshold.: 29022
Table with high physical reads (almost all, 95%):
- Table size...........: 273M
- Table blocks.........: 34730
db memory buffers......: 1451136
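For context on the numbers above: serial direct path reads are typically
considered once a segment's block count exceeds _small_table_threshold (the
exact heuristic is undocumented and version-dependent), and here the table's
34730 blocks are above the threshold of 29022. A sketch to read both values -
the x$ views require a SYS connection, and the owner/table name are
placeholders:

```sql
-- Current value of the hidden parameter (connect as SYS)
SELECT a.ksppinm  AS parameter,
       b.ksppstvl AS value
FROM   x$ksppi a, x$ksppcv b
WHERE  a.indx = b.indx
AND    a.ksppinm = '_small_table_threshold';

-- Block count recorded in the dictionary for the table
-- ('USER' / 'TABLE_NAME' are placeholders)
SELECT blocks
FROM   dba_tables
WHERE  owner = 'USER'
AND    table_name = 'TABLE_NAME';
```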
We applied a workaround: we faked the table stats so it looks like there are
only 20K blocks for that table, which avoids the physical reads:
EXEC DBMS_STATS.SET_TABLE_STATS('user','table_name',numblks=>20000);
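One caveat with this workaround (an assumption on my side, not something
stated in the thread): the next run of the automatic statistics gathering job
will overwrite the faked NUMBLKS value. Locking the table statistics keeps it
in place:

```sql
-- Keep the faked block count from being overwritten by the
-- automatic statistics gathering job
EXEC DBMS_STATS.LOCK_TABLE_STATS('user','table_name');

-- To revert later:
-- EXEC DBMS_STATS.UNLOCK_TABLE_STATS('user','table_name');
```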
Any idea how to solve that FTS? And why does it appear after the storage
problem?
Regards
---
Samuel Guiñales Cristobal <samuelg.cristobal@xxxxxxxxx>
«Let your food be your medicine, and your medicine be your food.» Hippocrates