I've had a good look for information relating to Array Fetch Size and Array Bind Size and have found some comments, but nothing definitive. :?
I've been told by my DBA to increase the Array Bind Size so that it decreases the time to run a query and therefore has LESS of an impact on the Oracle server (CPU and RAM); currently the query is almost killing the server. I'm trying to determine what the parameter should be changed to, what relationship the parameters have to each other, and whether they depend on any other database settings, e.g. tnsnames.ora.
Using these definitions:
Array Bind Size - the area in memory where Connection Server stores a batch of data to be loaded (sent to the repository). When the bind array fills, it is transmitted to the database. Generally, the larger the bind array, the more rows (n) can be loaded in one operation and the better your performance.
Array fetch size - the maximum number of rows authorized with each fetch. If you enter 20, and your query retrieves 100 rows, the connection executes 5 fetches to retrieve your data. Five fetches of 20 rows each equals 100 rows of data. If you enter 1, the array fetch is deactivated and data is retrieved row by row.
Note: Deactivating array fetch is the safest way of retrieving your data but row-by-row retrieval slows down server performance. The greater the value in the Array fetch size option, the faster your rows are retrieved; you must, however, ensure you have adequate client system memory.
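The arithmetic in that fetch-size definition is easy to sketch. This is just an illustration of the round-trip count, not anything BO-specific (the function name `round_trips` is mine):

```python
import math

def round_trips(total_rows, array_fetch_size):
    """Number of fetch round trips needed to retrieve total_rows
    when the client pulls array_fetch_size rows per fetch."""
    if array_fetch_size <= 1:
        # Array fetch deactivated: one round trip per row
        return total_rows
    return math.ceil(total_rows / array_fetch_size)

# The example from the definition: 100 rows with a fetch size of 20
print(round_trips(100, 20))  # 5 fetches
# Row-by-row retrieval of the same result set
print(round_trips(100, 1))   # 100 fetches
```

Each round trip carries fixed network and server overhead, which is why a larger fetch size generally retrieves rows faster, at the cost of more client memory per fetch.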
Currently I have the Bind Size = 5 and Fetch Size = 20.
How best can I determine what they should be to increase performance? Does anyone have any ideas, comments, or formulas to work it out?
Your best bet is to continue to work with your DBA. The right bind and fetch sizes will really depend on your average usage.
Dangers involved: Having too large a FETCH size can cause FETCH OUT OF SEQUENCE errors when a received “packet” is corrupted and has to be re-transmitted. The results are then received out of order and can prevent you from getting an entire result set.
Other dangers are that your Middleware may decide that it has all rows of data and will ignore the final packet. This really used to be an issue with earlier versions of Oracle 8, but I did see this again with a later version (which I don’t recall at the moment). This is especially true when you have time limits around how long queries are allowed to run.
I believe that “ordering” or sorting the result set mitigated this somewhat, though there is a performance hit for this as well.
I believe my bind is at 10 and my fetch at 50. Those have worked and not really caused any issues, but again… it depends on the amount of data (how long the fetch sequence is) as well as how fast your network is.
I’ve seen Array Fetch Sizes of up to 500. I would try some experiments. Add a zero to the end of it - how does that compare? Try the maximum value you can - how does that compare? How does a value of 1 work? If you can, make sure the query isn’t cached, as this’ll skew your results. If you have TOAD you can use it to flush the cache.
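The "add a zero and compare" experiment above can be sketched as a small timing harness. This is a hypothetical helper (the names `compare_fetch_sizes` and `run_query` are mine); in practice `run_query` would be whatever refreshes your report or executes the query at a given fetch size:

```python
import time

def compare_fetch_sizes(run_query, sizes):
    """Time the same query at several candidate Array Fetch Size values."""
    timings = {}
    for size in sizes:
        start = time.perf_counter()
        run_query(size)  # plug in your own query execution here
        timings[size] = time.perf_counter() - start
    return timings

# Dummy stand-in that just simulates fixed per-round-trip overhead,
# so the harness can be demonstrated without a database connection.
def dummy_query(fetch_size, total_rows=1000, per_trip_cost=0.0001):
    trips = -(-total_rows // fetch_size)  # ceiling division
    time.sleep(trips * per_trip_cost)

results = compare_fetch_sizes(dummy_query, [1, 20, 200, 500])
for size in sorted(results):
    print(size, round(results[size], 4))
```

Run each size a few times and discard the first run (which may be skewed by caching, as noted above) before comparing averages.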
I am using BO XI R3 on an Oracle 10g database. I changed the Array Fetch Size from 1 to 100 and saw a BIG change in performance. Reports that were not running or were timing out started returning values within a few seconds in InfoView. So far I have not experienced any bad impact on data quality or any other issues. So change the fetch parameter for better performance.
We're on BO 4.0 64-bit and I'm currently tweaking the connection pool mode, Array Fetch Size and Array Bind Size. We're using the Oracle ODBC connector which was installed with BO.
After reading this thread I'm not sure what values to pick for the items mentioned above, so I've started testing to see what results come back. I'm comparing performance against TOAD.
The first test used a connection pool mode that disconnected after each transaction, a fetch size of 250 and a bind size of 32767. Performance was significantly slower than TOAD: the same query took 5 times longer in BO.
The second test used a connection pool that stayed alive for 60 minutes, a fetch size of 500 and a bind size of 32767. Performance was still significantly slower than in TOAD, though it only took 3 times longer in BO.
For the third test I used a connection pool which stayed active for 60 minutes, a fetch size of 1000 and a bind size of 32767. There was no improvement compared to the second test.
I know BO can take a little more time than TOAD to render results; however, I'm using the simplest query I could think of and used the same grouping strategy in TOAD as the one used in BO (DISTINCT in this case).
Does anyone have a clue how to tweak BO in such a manner that performance of queries is more equal to the performance of TOAD?
It turns out we had an incorrect join in a universe. Even so, the tweaking must have done something, because queries are now executing faster on the BO side than in TOAD.