Re: How to select only columns having values..

  • From: Kim Berg Hansen <kibeha@xxxxxxxxx>
  • To: ric.van.dyke@xxxxxxxxxx
  • Date: Tue, 18 Feb 2014 15:20:31 +0100

Hi, all

I know it's dangerous to *guess* at the grander picture rather than getting
it from the OP, but just for fun and the sake of argument, here's a *guess*
at a use case ;-)


Suppose there is no app? No local structure to load data into and parse
through?

Suppose we are talking ad hoc queries, with some data analyst getting
requests to query data and return an Excel file.
So the data analyst writes a query in SQL Developer or Toad or whatever and
then saves/exports the data grid to Excel.

Suppose said data analyst retrieves data from a 200-column table/view/query
and the end user would not like him to return an Excel file with all 200
columns. Instead he must return only the non-null columns in the Excel file.

If said data analyst does not know beforehand which columns will be
non-null, he could select *, export everything to Excel, and then spend a
couple of minutes deleting the columns in Excel that are entirely empty.
If that adds a couple of minutes' work for every ad hoc query he has to make
Excel files for his end users, that could add up ;-)

Given such a scenario - what would be the alternative to dynamic SQL?
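
For illustration, a rough sketch of how that dynamic SQL might look. Just a
sketch - MY_TABLE is a placeholder for whatever table or view the analyst is
querying, and it assumes a plain table/view in the current schema:

-- Probe each column with COUNT(col), which counts only non-null values,
-- and print a SELECT listing just the columns worth exporting.
-- With 200 columns this is 200 probes of the table; a single-pass variant
-- could build one SELECT that counts every column at once.
SET SERVEROUTPUT ON
DECLARE
   l_tab  constant varchar2(128) := 'MY_TABLE';   -- placeholder table name
   l_cnt  number;
   l_cols varchar2(32767);
BEGIN
   FOR c IN (SELECT column_name
               FROM user_tab_columns
              WHERE table_name = l_tab
              ORDER BY column_id)
   LOOP
      EXECUTE IMMEDIATE
         'select count("' || c.column_name || '") from ' || l_tab
         INTO l_cnt;
      IF l_cnt > 0 THEN
         l_cols := l_cols
                || CASE WHEN l_cols IS NOT NULL THEN ', ' END
                || '"' || c.column_name || '"';
      END IF;
   END LOOP;
   -- the analyst runs the generated statement and exports the grid to Excel
   dbms_output.put_line('select ' || l_cols || ' from ' || l_tab);
END;
/

The output is just a SELECT statement the analyst can paste into SQL
Developer or Toad and export as usual.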


Regards


Kim Berg Hansen

http://dspsd.blogspot.com
kibeha@xxxxxxxxx
@kibeha



On Tue, Feb 18, 2014 at 2:56 PM, Ric Van Dyke <ric.van.dyke@xxxxxxxxxx> wrote:

> Roger that Mark.
>
>
>
> But why do any of that?  Why not just read the entire row, really the
> whole table, into some local structure in the app and parse through looking
> for what you need?  Now you hit the DB once and done.  No dynamic SQL, no
> hacking through the DD, and one parse/execute/fetch cycle.   Even using a
> PL/SQL block that processes each row one at a time would likely take a
> couple of seconds at worst.
>
>
>
> I agree that this part of the code shouldn't take minutes of time for what
> appears to be a small amount of data.  Of course "above 1000 records" is
> vague enough to mean just about any number of rows, 1001?  10,000,000?
>
>
>
> We don't know the grander picture of all this so it's really hard to say
> what is exactly the right solution.  I just don't see the need for such
> complex SQL at this point.
>
>
>
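
For comparison, a rough sketch along the lines Ric describes above - one
select * in a single round trip, then a loop that just notes which columns
ever held a value. Again MY_TABLE is a placeholder, and defining every
column as VARCHAR2 is a simplification that ignores LONGs and LOBs:

SET SERVEROUTPUT ON
DECLARE
   l_sql    constant varchar2(200) := 'select * from MY_TABLE';  -- placeholder
   l_cur    integer := dbms_sql.open_cursor;
   l_colcnt integer;
   l_desc   dbms_sql.desc_tab;
   l_val    varchar2(4000);
   l_dummy  integer;
   l_cols   varchar2(32767);
   type t_flags is table of boolean index by pls_integer;
   l_has_value t_flags;
BEGIN
   dbms_sql.parse(l_cur, l_sql, dbms_sql.native);
   dbms_sql.describe_columns(l_cur, l_colcnt, l_desc);

   -- define every column as VARCHAR2(4000); good enough for a null check
   FOR i IN 1 .. l_colcnt LOOP
      dbms_sql.define_column(l_cur, i, l_val, 4000);
      l_has_value(i) := FALSE;
   END LOOP;

   l_dummy := dbms_sql.execute(l_cur);

   -- walk every row once, remembering which columns ever had a value
   WHILE dbms_sql.fetch_rows(l_cur) > 0 LOOP
      FOR i IN 1 .. l_colcnt LOOP
         IF NOT l_has_value(i) THEN
            dbms_sql.column_value(l_cur, i, l_val);
            l_has_value(i) := l_val IS NOT NULL;
         END IF;
      END LOOP;
   END LOOP;
   dbms_sql.close_cursor(l_cur);

   FOR i IN 1 .. l_colcnt LOOP
      IF l_has_value(i) THEN
         l_cols := l_cols || CASE WHEN l_cols IS NOT NULL THEN ', ' END
                || '"' || l_desc(i).col_name || '"';
      END IF;
   END LOOP;
   dbms_output.put_line('Columns with values: ' || l_cols);
END;
/

Both are quick-and-dirty sketches, of course - which one is cheaper would
depend on row counts and round trips, which we don't know here.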
