The files get created in the directory I have set up for writing my text file out to. We are on 12.2.2.1. I am not sure whether they get deleted, because the actual data I want to write out is 3 million records, so I kill the job before it fills up the file system.
Oh, I don't think it is a good idea to write out millions of rows, especially ones containing a CLOB. Is the data in the CLOB column really that big, or was the column only defined as a CLOB in case it grows?
However, I think DI does something special for long columns. I know that if you process LONG columns into a table, DI defaults the commit size to 1, so it seems quite logical to me that it does something similar to get the data written to a file.
The files are named with the server name and the source table, plus a timestamp.
No, it is not ideal to do this, but it has to be done; there is no choice.
The only other way I am considering is PL/SQL and UTL_FILE, but it would be good to find out why it creates these files and how to stop it from happening.
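For what it is worth, a rough sketch of the UTL_FILE route might look like the block below. The table, column, directory and file names (src_table, tgt_table, OUT_DIR, extract.sql) are placeholders, not anything from this thread, and you would still have to work around UTL_FILE's 32,767-byte line limit for really large CLOB values (byte mode with PUT_RAW is the usual workaround).

```sql
-- Rough sketch only: assumes an Oracle directory object OUT_DIR and a
-- source table src_table(id NUMBER, payload CLOB); adjust names to suit.
DECLARE
  v_file   UTL_FILE.FILE_TYPE;
  v_len    PLS_INTEGER;
  v_offset PLS_INTEGER;
  c_chunk  CONSTANT PLS_INTEGER := 8000;  -- read the CLOB in manageable pieces
BEGIN
  v_file := UTL_FILE.FOPEN('OUT_DIR', 'extract.sql', 'w', 32767);
  FOR r IN (SELECT id, payload FROM src_table) LOOP
    -- emit one INSERT statement per row, streaming the CLOB in chunks
    UTL_FILE.PUT(v_file, 'INSERT INTO tgt_table (id, payload) VALUES ('
                         || r.id || ', ''');
    v_len    := DBMS_LOB.GETLENGTH(r.payload);
    v_offset := 1;
    WHILE v_offset <= v_len LOOP
      -- double up single quotes so the literal stays valid SQL
      UTL_FILE.PUT(v_file,
                   REPLACE(DBMS_LOB.SUBSTR(r.payload, c_chunk, v_offset),
                           '''', ''''''));
      UTL_FILE.FFLUSH(v_file);
      v_offset := v_offset + c_chunk;
    END LOOP;
    UTL_FILE.PUT_LINE(v_file, ''');');
  END LOOP;
  UTL_FILE.FCLOSE(v_file);
EXCEPTION
  WHEN OTHERS THEN
    IF UTL_FILE.IS_OPEN(v_file) THEN
      UTL_FILE.FCLOSE(v_file);
    END IF;
    RAISE;
END;
/
```

Bear in mind that a plain SQL string literal tops out at 4000 bytes (32767 only with extended string sizes), so very large CLOB values may not survive as single INSERT literals anyway.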
Processing of long data is done through files instead of loading the data into memory. There is a DSConfig.txt parameter that controls the size of long data that can be processed in memory; by default it is 2048:
Max_Long_Data_In_Memory=2048
In your case you can try doubling this value (4096), but I wouldn't recommend setting it very high, since the DF or job may run out of memory.
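If it helps, the edit is just a one-line change in DSConfig.txt on the job server. I believe the parameter sits in the [AL_Engine] section (section name from memory, so please verify against your own file), and you will probably need to restart the job server for it to take effect:

```
[AL_Engine]
Max_Long_Data_In_Memory=4096
```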
I need to check the lifetime of these files, whether they last until the load, the end of the DF, or the end of the job; in any case, if your job completes, these files should get deleted.
The idea is that BLOBs cannot be embedded in a column of a text file. What would a CSV file look like if column three contained image data in JPEG format?
Therefore the dataflow uses a binary writer instead, one where each row becomes its own file.
Thanks, that is interesting. I think the main problem is that it leaves the files where they are during the job; maybe I will have to shell out to the OS in a script and clean them down as I go.
I understand why you would not want to put a BLOB into a text file, but this is the one time we have no choice but to provide a text file (SQL INSERT statements, hence the CLOB in it).
This is not about writing the long data to a file as a target. From 12.x onwards, to handle long data DS internally writes the data to temp files (depending on the max size you can cache, default 2048), and once the job completes these internally created temp files are deleted.
The problem is that BODS does not always clean up after itself, or the file system reaches its limits first.
The table I am manipulating with a CLOB has 3 million rows, so this tries to create 3 million files, which breaks the OS limit on the number of files in a directory.
So the only way for the job to run is to clean up the files as I go, so that I do not hit the file system limits.