I know that you can edit the Array Fetch Size parameter of a universe connection so that you can retrieve a larger number of rows each time you go to the database.
It seems the maximum size is 500 rows, but I don’t see why this shouldn’t be the default value. What is the advantage of having a smaller array fetch size?
I believe that on a bad (error-prone) network connection, smaller array fetch sizes benefit you because only a small packet has to be resent when a re-transmit occurs.
In other words: on a fast, reliable network with very few packet drops and low latency, a large array fetch size pays off.
With Array Fetch Size you need to experiment! BO recommends this, and so do I. Try a wide variety of values - it's just not as simple as 500 good and 20 bad - you need to find the value that suits your company's setup.
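To see what "fetching in batches" actually means, here is a minimal Python sketch. It uses sqlite3's `fetchmany` purely as a stand-in (BO and its middleware drivers handle this internally, and sqlite3 has no network layer), with a hypothetical `sales` table, to show how the batch size controls the number of fetch round trips for the same result set:

```python
import sqlite3

def fetch_in_batches(cursor, fetch_size):
    """Drain a cursor fetch_size rows at a time; return (rows, fetch_calls)."""
    rows, trips = [], 0
    while True:
        batch = cursor.fetchmany(fetch_size)
        trips += 1
        if not batch:
            break
        rows.extend(batch)
    return rows, trips

# In-memory table standing in for a real database (hypothetical data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [(i, i * 1.5) for i in range(1000)])

for size in (1, 20, 100, 500):
    cur = conn.execute("SELECT * FROM sales")
    rows, trips = fetch_in_batches(cur, size)
    print(f"fetch size {size:3d}: {len(rows)} rows in {trips} fetch calls")
```

Against a real database each fetch call is a network round trip, which is why going from 1 to a few hundred can matter so much, and why the sweet spot depends on your network and row width.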
I have seen that an Array Fetch Size of 1 works best for SQL Server. With an array fetch size larger than that, I used to get wrong values in my reports.
Array fetch size is an attempt to set the number of rows returned in a packet from the database. The issue - and the reason why it’s not set to 500 as a default - is that different databases and different networks can be defined with different packet sizes.
So if you say you want 500 rows at a time, there may be extra overhead on the network because your huge database packet has to be broken into smaller Ethernet (or Token Ring) packets, and you run the risk of dropped data when those packets are reassembled.
SQL Server was mentioned as working best with an array fetch setting of 1. This means every row is sent over the network in its own packet, which adds a lot to the network traffic. However, the SQL Server drivers don't seem to be very good at handling packet transmissions, and if you set it to 10 or higher the possibility of missing data is fairly high. So for that database, and for that reason, the array fetch size is set to 1.
It also depends on how "wide" your typical query is. If most of your queries involve 10 objects or fewer, you can get by with a larger fetch size and still use the same number of network packets. If your queries generally involve 100 or more objects, that's more data per row, and therefore your fetch size should probably be smaller.
Bottom line is that there is no “right” answer to this question.
That's been the same for all Oracle databases I've ever seen. I'd try experimenting, because when I've had the time to play, sometimes 350 or 450 benchmarked better than 500 - it's one of those things that is just completely specific to your environment.
I am using BO XI R3 on an Oracle 10g database. I changed the Array Fetch Size from 1 to 100 and saw a BIG change in performance. Reports that were not running, or were timing out, started returning values within a few seconds in InfoView. So far I have not seen any bad impact on data quality or any other issues. So change the fetch parameter for better performance.