I am evaluating a good table design to store data with the following
characteristics:

* Large number of records: >200 million rows.
* Each row has a unique "id" (generated using a sequence).
* The rows can be logically grouped per "user_id".
* The rows are primarily accessed per "user_id".
* The distribution of rows among "user_id" values is uneven: most users
  have <50 rows, but a few have >10,000 rows.
* The tables need some secondary indexes.
* The tables see "minimal" updates and "moderate" inserts.

To improve performance, I am considering the following options:

A) An IOT with a primary key like (user_id, id)
B) Clustering the table on "user_id" (index cluster)
C) Clustering the table on "user_id" (hash cluster)

Since the distribution of the data over "user_id" is not even, I'd like
to hear from folks who have found themselves in similar situations: what
solutions did they choose, and why?

Appreciate any input/comments.

--
Adi Alurkar
aalurkar@xxxxxxxxxxxx
--
//www.freelists.org/webpage/oracle-l
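
For readers comparing the three options, here is a rough DDL sketch of what
each would look like. All table, cluster, and column names (user_rows,
payload, the SIZE/HASHKEYS figures, etc.) are illustrative assumptions, not
taken from the actual schema:

```sql
-- A) Index-organized table keyed on (user_id, id):
-- rows for a user are physically co-located in the PK index.
CREATE TABLE user_rows_a (
  user_id  NUMBER NOT NULL,
  id       NUMBER NOT NULL,
  payload  VARCHAR2(200),
  CONSTRAINT user_rows_a_pk PRIMARY KEY (user_id, id)
) ORGANIZATION INDEX;

-- B) Index cluster on user_id:
-- requires a cluster index; rows sharing a user_id share blocks.
CREATE CLUSTER user_cluster_idx (user_id NUMBER) SIZE 1024;
CREATE INDEX user_cluster_idx_i ON CLUSTER user_cluster_idx;
CREATE TABLE user_rows_b (
  user_id NUMBER,
  id      NUMBER,
  payload VARCHAR2(200)
) CLUSTER user_cluster_idx (user_id);

-- C) Hash cluster on user_id:
-- no index lookup; the block is derived by hashing user_id.
-- HASHKEYS must be sized up front for the expected number of users.
CREATE CLUSTER user_cluster_hash (user_id NUMBER)
  SIZE 1024 HASHKEYS 4000000;
CREATE TABLE user_rows_c (
  user_id NUMBER,
  id      NUMBER,
  payload VARCHAR2(200)
) CLUSTER user_cluster_hash (user_id);
```

Note that for B and C the SIZE (bytes reserved per cluster key) and, for C,
HASHKEYS must be chosen in advance, which is exactly where the skewed
distribution (<50 rows for most users, >10,000 for a few) makes sizing
awkward: large users will chain across many blocks whichever value is picked.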