ORA-01652: unable to extend temp segment by 128 in tablespace

Hi,

There is a target table for which I simply want to load data into one specific column.
That column was originally mapped from a source table, but it was later realized that this source table was not the correct one; the column had to be mapped from a different source table instead.
In the process, I had to bring in that source table and also another source table to provide the join links.
As a result, there are now 3, 2, and 2 source tables joined in each of the query transforms, feeding a single target table.

On job execution, the job fails with “ORA-01652: unable to extend temp segment by 128 in tablespace TEMP”.
I had the DBA team make the necessary changes, but the problem still exists. I am using BODI 6.5. Please advise.


its_ranjan (BOB member since 2011-02-16)

Looks like the TEMP tablespace is full. Just ask your DBAs to increase the TEMP tablespace size, or optimize your job.


ganeshxp :us: (BOB member since 2008-07-17)

Hi Ganesh, the DBA team has increased that, but the problem still exists.
Regarding optimizing the job, I increased the DOP to 3 but had no success. Any more inputs, please?


its_ranjan (BOB member since 2011-02-16)

No, it is not about increasing the DOP.

The TEMP tablespace is a kind of scratch tablespace used to process the data; once the processing is done, the result is pushed to the actual tablespace for the table. It is something like the DI cache memory.

By the way, is your Oracle table itself sitting in the TEMP tablespace? Did you check that? What volume of data are you handling, and how much space is left in the TEMP tablespace?

This is purely an Oracle space problem. There is nothing DI can do about it.
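
To check the remaining TEMP space, something like this should do (a sketch, assuming Oracle 11g or later, where the DBA_TEMP_FREE_SPACE view exists, and that you have privileges on the DBA views):

-- Free and total space per temporary tablespace (sizes are in bytes)
SELECT tablespace_name,
       tablespace_size / 1024 / 1024 AS size_mb,
       free_space / 1024 / 1024 AS free_mb
FROM dba_temp_free_space;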


ganeshxp :us: (BOB member since 2008-07-17)

Hi Ganesh, thanks for the response.
The tables are quite large and reside in Oracle.
The problem occurs when I try to join the tables using the query transform to meet the requirement I mentioned earlier. Otherwise, the earlier approach works fine.


its_ranjan (BOB member since 2011-02-16)

If any of your tables have parallelism enabled, then turn that off. When using parallelism, Oracle has to accumulate the results of each parallel thread, and it uses temp space to do this. I’ve had this happen in a number of projects.

SELECT /*+ PARALLEL(<table>, 1) */ …

This will tell Oracle not to use parallelism on that table. Also check to see if Oracle is performing a HASH JOIN, as this also uses temp space. The trick to working around that is:

SELECT /*+ USE_NL(<table A>, <table B>) */ …

Where table A is the table you want driving the nested loop.

Sometimes Oracle does these things because it lacks statistics. The DBA should have gathered the system statistics already, but it’s good to check. Then gather statistics for your tables. If there are no statistics, Oracle has a difficult time understanding how large the query really is. Statistics don’t always solve the problem, and if that’s the case then you need to get a bit more creative in your ETL.
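
Gathering table statistics looks something like this (a sketch; the schema name is a placeholder, and RCV_SHIPMENT is one of the source tables mentioned in this thread):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'MY_SCHEMA',    -- placeholder: your schema
    tabname => 'RCV_SHIPMENT'  -- repeat for each source table in the join
  );
END;
/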


eganjp :us: (BOB member since 2007-09-12)

One of your dataflows is consuming the memory, which is causing the error. Could you please attach the .atl file so we can fine-tune it?


vjraj (BOB member since 2008-06-05)

Please find the attached DF.
I want to bring receipt_number from RCV_SHIPMENT, which I added to the flow.


its_ranjan (BOB member since 2011-02-16)

Saieshranjan, does your company know that you just posted the entire export of your repository to everyone on the Internet? If that was not your intention I suggest you remove your attachment. If the forum does not allow that then you need to contact a moderator and have them do that for you.

I don’t see what the problem is with adding receipt_number. Drag it into the output schema of Query_1.


eganjp :us: (BOB member since 2007-09-12)

Thanks, Jim, for letting me know. The attachment was the DF.
Anyway, I have deleted it.


its_ranjan (BOB member since 2011-02-16)

The receipt_num has already been mapped in the query transform, but with no success.
Are there settings we need to configure in DSConfig.txt or somewhere else?
The DBA team has already increased the TEMP size.


its_ranjan (BOB member since 2011-02-16)

Post the explain plan for the query that is causing the problem.
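
If it helps, you can generate one like this (a sketch; substitute the actual problem query for the placeholder):

EXPLAIN PLAN FOR
SELECT ...;  -- the query from the dataflow goes here

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);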


eganjp :us: (BOB member since 2007-09-12)

Attached is the query that causes the error.
Thanks.
df_optimised_SQL.txt (5.0 KB)


its_ranjan (BOB member since 2011-02-16)

I need to see the EXPLAIN PLAN for the query also.


eganjp :us: (BOB member since 2007-09-12)

The query seems fine to me. Based on the earlier email, it looks like the query returns a lot of data. Can you just do a count and see how long it takes? How many rows are you expecting?

If you are seeing an INSERT /*+ APPEND */, that is a really good thing, but because of the data volume all these rows cannot be committed as a single transaction. I don’t know if your DBA would want to make an exception for this load by increasing the temp space to 20/40 GB …

What I would recommend is to use the API bulk loader on the target table; that should get rid of the temp space issue, and your load should run fine. Set the commit size to 10,000 and the mode to ‘Truncate’ or ‘Append’ as needed!


data_guy :us: (BOB member since 2006-08-19)

I don’t think that will help. Temp space in Oracle is used for joins, DISTINCT, ORDER BY and GROUP BY type operations. In this case I think the size of the tables and the approach that Oracle is taking to perform the join (perhaps a hash join) is why temp space is exhausted. But it’s hard to tell without looking at the explain plan.
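
While the job is running, you can watch which session is burning temp space; a sketch using the V$TEMPSEG_USAGE view (called V$SORT_USAGE on older Oracle releases), assuming you have access to the V$ views:

-- Temp space consumed per session, largest first
SELECT s.sid, s.username, u.tablespace, u.segtype,
       u.blocks * t.block_size / 1024 / 1024 AS used_mb
FROM v$tempseg_usage u
JOIN v$session s ON s.saddr = u.session_addr
JOIN dba_tablespaces t ON t.tablespace_name = u.tablespace
ORDER BY used_mb DESC;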


eganjp :us: (BOB member since 2007-09-12)

Hi Jim,

I am still waiting for the EXPLAIN PLAN from the DBA team.
Thanks.


its_ranjan (BOB member since 2011-02-16)

Please find the attached explain plan.
explain plan.doc (50.0 KB)


its_ranjan (BOB member since 2011-02-16)

The MERGE JOIN CARTESIAN is causing the issue, but I tried to figure out a reason for it for a solid five minutes and failed.
Can you please check the join clause in your query again, comparing it to the primary keys of each table? Maybe one column is in the PK but not in the join because it was forgotten?

If not, the SQL statement used for the explain plan would help.

Or, if you have a good relationship with your dbas, show them the SQL and ask for suggestions.
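
To illustrate the kind of mistake to look for (hypothetical table and column names):

-- Suppose both tables are keyed on (ID, LINE_ID) but the join only uses ID.
-- Each ID on one side then matches every LINE_ID for that ID on the other
-- side, and the optimizer may fall back to a MERGE JOIN CARTESIAN:
SELECT *
FROM rcv_headers h
JOIN rcv_lines   l ON l.id = h.id;   -- LINE_ID forgotten in the join
-- The complete join would be:
-- JOIN rcv_lines l ON l.id = h.id AND l.line_id = h.line_id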


Werner Daehn :de: (BOB member since 2004-12-17)

Hi Werner,
Thanks for the response.

I wanted to state that the issue comes up only when I add the two source tables and connect them to the converging query transform.
The two source tables are first joined to each other on a common column, ID.
The result is then joined further in the converging query transform.
The problem starts from there, as a result of the huge sort it somehow tries to build.

The attached query works fine after removing those objects from the previously attached one.
DF_working_SQL.txt (5.0 KB)


its_ranjan (BOB member since 2011-02-16)