Oracle SQL*Plus spool

Solution 1

Are you concatenating & delimiting your columns, or are you exporting fixed-width?

See this documentation on SQL*Plus Script Tuning. Specific to your script, here are a few possible ways to speed it up:

  1. Make sure LINESIZE is as small as possible. Add up your max column lengths (plus delimiters if you're not exporting fixed-width). This can have a dramatic effect on performance, as SQL*Plus allocates that amount of memory for every exported line. 410 isn't that big, but if you can decrease it, that would help. In my experience this has made a big difference.
  2. Don't turn TRIMSPOOL on. This can also have a big impact. Each line will then be padded out to LINESIZE, but with an optimal linesize, and depending on how you're using the file, that may be acceptable. However, if you want to eliminate trailing spaces entirely, it is often faster to trim them with other methods after the export.
  3. Play around with ARRAYSIZE. It may help (a little). It sets the fetch size for SQL*Plus; the default is 15 rows. Bumping it to, say, 100 may help, but going too large might decrease speed. A sketch combining these settings follows this list.
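
For example, here is a rough sketch of the script from the question with these suggestions applied. The linesize of 200 and arraysize of 100 are only placeholder values, not measured ones; set linesize to the sum of your actual column widths.

whenever sqlerror exit sql.sqlcode
set pagesize 0
-- placeholder: use the sum of your column widths plus delimiters here
set linesize 200
-- leave trimspool off so each line is not trimmed on the way out
set trimspool off
-- fetch 100 rows per round trip instead of the default 15
set arraysize 100
set heading on
set feedback off
set echo off
set termout off
spool file_to_dump_into.txt
select * from mytable;
spool off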

Hope this helps!

Solution 2

You might find it quicker to use UTL_FILE, but probably not that much faster.

In my test it was slightly faster over about 20k rows; scale that out to 14 million rows, though, and it might be worth it.

I believe that if you want to get any quicker than this, the way to go would be Pro*C, but I haven't got into that, so I can't really advise.

set pagesize 1000
set FLUSH OFF
drop user usera cascade;
create user usera default tablespace users identified by abc123;
grant create session to usera;
grant resource to usera;

create or replace directory testdir as '/tmp';
grant read,write on directory testdir to usera;
grant execute on UTL_FILE to usera;

connect usera/abc123;

set timing on

spool /tmp/spooltest.txt
select object_name from all_objects;
spool off

DECLARE
  v_file UTL_FILE.FILE_TYPE;
  TYPE t_col is table of all_objects.object_name%type index by PLS_INTEGER;
  v_object_names t_col;
BEGIN
  -- open the file for writing via the TESTDIR directory object
  v_file := UTL_FILE.FOPEN('TESTDIR','utlfiletext.txt','w');

  -- bulk collect the whole result set into the collection
  select object_name BULK COLLECT INTO v_object_names
  from all_objects;

  -- write the rows out one line at a time
  for idx IN 1 .. v_object_names.COUNT LOOP
    UTL_FILE.PUT_LINE(v_file, v_object_names(idx), FALSE);
  END LOOP;

  UTL_FILE.FCLOSE(v_file);
END;
/

The results are below: the top timing is from the SQL*Plus spool only, the bottom from UTL_FILE.

23931 rows selected.

Elapsed: 00:00:06.60

PL/SQL procedure successfully completed.

Elapsed: 00:00:05.45

Solution 3

With a typical query, 14M records amounts to at least several hundred megabytes of data to fetch out of the server, pass across the connection, and save to disk.

Given this, 12 minutes does not sound like too much to me.
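
As a rough back-of-the-envelope check, taking the figures from the question and comments (about 400 MB spooled in 12 minutes for roughly 14 million rows): 400 MB / 720 s is about 0.55 MB/s, and 14,000,000 rows / 720 s is roughly 19,000 rows per second sustained, which is a plausible rate for a fetch-plus-write through a client.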

However, it is still possible that your query can be optimized. Could you please post it here?

Solution 4

So is this going over the wire, or are you logged into the box that has the database? If you have access, maybe you can run your SQL*Plus session on the box where the database lives, zip the file up, and then send it to your local machine. It might be faster to send one big file over the wire instead of millions of smaller records. Of course this won't make it super fast, but it might shave some time off.

Also, with that much data, do you really need to spool it to a file? Can you do an export instead?
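
A minimal sketch of the run-it-on-the-server idea, using only SQL*Plus (the file name and table are taken from the question; gzip being available on the database host is an assumption about your environment):

-- run this in SQL*Plus on the database server itself
spool file_to_dump_into.txt
select * from mytable;
spool off
-- compress on the server before transferring (assumes gzip is installed there)
host gzip file_to_dump_into.txt

You then copy the single compressed file back to your machine instead of streaming millions of rows over the client connection.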

Author: CC. Updated on July 05, 2022

Comments

  • CC. almost 2 years

    I'm using SQL*Plus to execute a query (a select) and dump the result into a file, using the spool option. I have about 14 million lines, and it takes about 12 minutes to do the dump. I was wondering if there is something to make the dump faster?

    Here are my SQL*Plus options:

        whenever sqlerror exit sql.sqlcode
        set pagesize 0
        set linesize 410
        set trimspool on
        set heading on
        set feedback off
        set echo off
        set termout off
        spool file_to_dump_into.txt
        select * from mytable;
    

    Thanks.

  • CC. about 14 years
    The query is optimised; I've already worked on making it faster. So what is left is the dump. The file is about 400MB. I'm thinking of something like a buffer size for the dump, but I don't know if there is such an option.
  • CC. about 14 years
    The query itself takes 1.5 minutes, and the rest is the dump.
  • CC. about 14 years
    Using Toad Session Browser I can see the time it takes to do the query and the time it takes to do the dump.
  • CC. about 14 years
    Good question. Actually I need to do a spool because the select is not a simple select. I'm doing a small treatment on a field value based on the previous value of that field. At the end I'm generating a formatted file (not just the columns), so the spool is kind of mandatory. I'm doing my tests on a test machine. I will ask to see if in the production environment the Unix server is on the same machine as the Oracle server.
  • CC. about 14 years
    Yes, I just tried it, but got the same result. Thanks anyway.
  • AndyDan about 14 years
    Oh, never mind. I re-read your post, and see now that you already have that in your settings.
  • William Robertson over 8 years
    arraysize is the big one here. In my experience it can make a huge difference - I would expect an order of magnitude improvement for a 14 million row result set. I'll have to try out the no-trimspool thing.
  • Babika Babaka almost 8 years
    The link is dead now. I think this is the same page: SQL plus script tuning (URL July 2016)