Unfortunately there aren't any common columns, which is a pity really. This is a data migration process and the person who modelled it has left the company :-( And yes, it reads millions of rows even though the entire tables are cached; I don't think this would work even with Exadata. Let me get some test data...

2011/10/21 David Fitzjarrell <oratune@xxxxxxxxx>

> That would be a merge join cartesian, which is not what you want: it
> returns 1,200,000,000,000 rows (8,000,000 x 150,000) with at most
> 8,000,000 rows of actually useful results (the result set should be
> somewhere between 1 and 8,000,000, depending upon how many rows in
> account match rows in tmp_account). There must be some common column
> between the two tables to effect a proper join. If not, then this query
> is useless and is simply consuming resources better used for more
> productive queries.
>
> Please provide create table statements for both tables and some sample
> data.
>
> David Fitzjarrell
>
> *From:* amonte <ax.mount@xxxxxxxxx>
> *To:* Oracle-L Group <oracle-l@xxxxxxxxxxxxx>
> *Sent:* Thursday, October 20, 2011 10:41 PM
> *Subject:* tune between query/join
>
> Hello everyone
>
> I am running a query which looks like the following (running on 10.2.0.4):
>
> select a.account_number, a.id, b.*
> from account a, tmp_account b
> where b.amount between a.lowest_amount and a.highest_amount
>
> account has 8,000,000 rows and tmp_account 150,000. The execution plan
> shows a sort merge join, as expected. The query takes around 5 hours to
> run; is there any way to improve this?
>
> Alex
>
> --
> //www.freelists.org/webpage/oracle-l
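If no common column can be found, one common mitigation for a pure range (BETWEEN) join is bucketing: expand each [low, high] range into one row per fixed-width bucket it spans, put each amount in its bucket, and only compare rows that share a bucket, so most of the work becomes an equality join. A minimal Python sketch of the idea follows; the table and column names mirror the query in the thread, but the data and the bucket width are made up for illustration:

```python
# Bucketing workaround for a range (BETWEEN) join, sketched in plain Python.
# Each (low, high) range is registered under every bucket it overlaps; each
# amount probes only its own bucket, then the exact BETWEEN check is applied.
from collections import defaultdict

BUCKET = 100  # bucket width; in practice, tune to the typical range width

def bucket_range_join(account, tmp_account):
    """account rows: (account_number, id, lowest_amount, highest_amount);
    tmp_account rows: (amount, ...)."""
    by_bucket = defaultdict(list)
    for acct in account:
        _, _, low, high = acct
        for b in range(low // BUCKET, high // BUCKET + 1):
            by_bucket[b].append(acct)

    out = []
    for row in tmp_account:
        amount = row[0]
        # Only candidates in the same bucket are examined, not all of account.
        for acct in by_bucket.get(amount // BUCKET, ()):
            if acct[2] <= amount <= acct[3]:
                out.append((acct[0], acct[1]) + tuple(row))
    return out

accounts = [("A1", 1, 0, 150), ("A2", 2, 200, 250)]
tmp = [(120, "x"), (240, "y"), (500, "z")]
print(bucket_range_join(accounts, tmp))
# [('A1', 1, 120, 'x'), ('A2', 2, 240, 'y')]
```

In SQL the same idea would be a join on trunc(amount / width) against a pre-expanded bucket column on the account side, keeping the original BETWEEN predicate as the exact filter; how well it pays off depends on how wide the ranges are relative to the bucket width.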