BusinessObjects Board

Some advice on how to performance tune SAP BI BusinessObjects

Some advice on how to performance tune your SAP BI BusinessObjects solution:

Universes and DBMS:

  1. I believe that the order of tables in the generated SQL does not play a role in any modern DBMS (see “table weight” in the universe), since all modern DBMSs use cost-based optimization nowadays (instead of the old rule-based optimizers).
  2. Regarding aggregate tables:
    I would prefer to create aggregate tables in the database and then let the database handle the optimization “automagically”. This is possible, e.g., with Oracle, using materialized views for the aggregate tables together with the query_rewrite option. This is totally transparent to the universe, which means less work, and it benefits ANY frontend tool, regardless of whether that tool uses a universe or not (see the first sketch after this list).
  3. Regarding measures:
    There still seems to be a knowledge gap around SQL aggregation versus (local) projection. In brief, IMHO a measure is only a measure if it uses a SQL aggregate function; see my post here for a detailed explanation, and the second sketch after this list.
  4. Derived tables
    are essentially nothing but inline views, with the added feature of being able to incorporate the @PROMPT function. As such they can actually be quite fast, faster than you might think, if the @Prompt restricts the derived table enough and is based on an indexed database column. This is true in particular if you create a derived table with a SQL analytic function such as RANK() OVER (PARTITION BY …) combined with a @PROMPT that restricts it to the top x (prompted) rows, for example (see the third sketch after this list).
  5. Consider using the JOIN_BY_SQL universe parameter to “push down” the merging of multiple microcubes (of the same granularity) to the DBMS level.
    Multiple microcubes are generated when the SQL is split because of fan/chasm traps; see also universe contexts and alias tables, as well as Dave’s Adventures in Business Intelligence: Do I Have a Fan Trap?
  6. Do not forget that the result set must be transported from your data source/DBMS to your SAPBI4 platform, so check for network latency as well as bandwidth.
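
For point 2, here is a minimal Oracle sketch of an aggregate table implemented as a materialized view with query rewrite. All table and column names are made up for illustration, and the refresh options should match your ETL strategy:

  -- Aggregate table as a materialized view; the optimizer can transparently
  -- rewrite matching queries against SALES to hit this MV instead
  CREATE MATERIALIZED VIEW sales_by_month_mv
    BUILD IMMEDIATE             -- populate immediately
    REFRESH COMPLETE ON DEMAND  -- refresh as part of the ETL batch
    ENABLE QUERY REWRITE
  AS
  SELECT prod_id,
         TRUNC(sale_date, 'MM') AS sale_month,
         SUM(amount)            AS amount
  FROM   sales
  GROUP  BY prod_id, TRUNC(sale_date, 'MM');

  -- query rewrite must be enabled (it usually is by default)
  ALTER SESSION SET query_rewrite_enabled = TRUE;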
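
For point 3, a small sketch of the difference between SQL aggregation and local projection, again with made-up names:

  -- A real measure: the DBMS aggregates, only a few rows come back
  SELECT r.region_name, SUM(s.revenue) AS revenue
  FROM   sales s JOIN region r ON r.region_id = s.region_id
  GROUP  BY r.region_name;

  -- A "fake" measure: every detail row is shipped to the report, which
  -- then has to aggregate (project) the rows locally in the microcube
  SELECT r.region_name, s.revenue
  FROM   sales s JOIN region r ON r.region_id = s.region_id;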
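
And for point 4, a sketch of such a derived table. The object names are hypothetical, and the exact @Prompt signature may vary by BO version:

  -- Derived table: top-N customers per region, with N supplied at run time
  SELECT *
  FROM  (SELECT region_id,
                cust_id,
                SUM(revenue) AS revenue,
                RANK() OVER (PARTITION BY region_id
                             ORDER BY SUM(revenue) DESC) AS revenue_rank
         FROM   sales
         GROUP  BY region_id, cust_id) ranked
  WHERE  revenue_rank <= @Prompt('Top how many customers?','N',,Mono,Free)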

DBMS:

  1. Do not forget that DB indexes cannot be used if you apply a SQL function such as UPPER, SUBSTR, etc. to a table column in your WHERE clause. Look at function-based indexes instead (Oracle); see here for details, and the sketches after this list.
  2. Index any table columns used in joins and any columns that are often used in a restriction/query filter.
  3. Update your database statistics, or you run the risk that DB indexes will not be used by your DBMS optimizer!
  4. Consider using bitmap indexes (Oracle) in a dimensionally modelled database. As a rule of thumb, they perform better on low-cardinality table columns used in query filters.
  5. De-normalize your data model for optimal BI reporting/analytics; see the dimensional model à la R. Kimball. This is very important if you are using an OLTP system as a data source for your reports/analytics.
  6. Of course, consider using SAP HANA as your DBMS instead as well ;-)
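
A few Oracle sketches for points 1 through 4 above; schema, table, and column names are illustrative only:

  -- 1. Function-based index, so WHERE UPPER(cust_name) = :x can still use an index
  CREATE INDEX cust_name_upper_ix ON customer (UPPER(cust_name));

  -- 2. Plain b-tree index on a join/filter column
  CREATE INDEX sales_cust_id_ix ON sales (cust_id);

  -- 4. Bitmap index on a low-cardinality dimension column used in query filters
  CREATE BITMAP INDEX sales_region_id_bix ON sales (region_id);

  -- 3. Refresh optimizer statistics so the new indexes are actually considered
  EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'SALES');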

With respect to report design in Webi:

  1. Use the Query Stripping feature available with SAP BI 4.x, which now also works for relational universes!
  2. As a rule of thumb, avoid using multiple data providers; rather, consolidate them whenever possible. Data providers/queries are ALWAYS executed sequentially, never in parallel! Also avoid merging dimensions if possible, as this is a costly operation. For more detail on how to speed up your Webi reports, please see here.

With respect to the SAPBI4 platform:
Consider using a so-called “split deployment”, separating your Java web application server (e.g. Tomcat) from your web server (e.g. Apache), which serves the static web pages; see here for details: http://scn.sap.com/docs/DOC-6191
Also, with Web Intelligence the pure DHTML interface might often perform better than the Java applet; sadly, the two interface types do not offer full feature parity.

Finally, DO SIZE your SAPBI4 platform accordingly taking into consideration:
• number of active concurrent users (ACUs)
• type of users: Consumers, moderately active users, very active users/power users
• type of frontend tools (WebIntelligence, Crystal Reports, DesignStudio, etc.)
• Size and complexity of reports (small, medium, large or easy, medium, complex)
etc.
For more details on sizing, please see: SAP BusinessObjects BI4 Sizing Guide | SCN and Best Practices for SAPBO BI 4.0 Adaptive Processing Servers


Andreas :de: (BOB member since 2002-06-20)

Good stuff Andreas, maybe this would make a good sticky 8).


Mak 1 :uk: (BOB member since 2005-01-06)

Hi Andreas,
What document size is considered small, medium, and large? I couldn’t find this info anywhere, and based on a particular environment that estimate might be completely different. I have around 30% Webi docs that are <1MB, but I also have around 1% docs over 100MB, up to 300MB. In my case large would be a doc that is 100-300MB, but for someone else that might be XXL.
Also, which parameter is taken into consideration when talking about size? Is it SI_SIZE, which we can find in the repo DB, or maybe the size when unzipped in Webi processing server memory, or maybe its size in Webi processing server memory when the report is edited?

Any clue?


Marfi :poland: (BOB member since 2006-12-18)

Please look at p. 70 of the sizing guide :slight_smile:


Andreas :de: (BOB member since 2002-06-20)

Good points, Andreas!

Always keep a few things in mind: when it comes to the DB, work together with your DBA to optimize it. BI consultants are usually good at BI but quite often weak at DB performance tuning, even EIM consultants. DB optimization is an extremely important part, and if the DWH design is less than perfect, then nothing can fix that, not even HANA or DB2 BLU etc.
Another very important part is the web interface. As you may well know, there are several web application servers; SAP BI by default is deployed on Tomcat, which is definitely not the best option. So if your customer is running SAP, consider NetWeaver as the platform, or if there are no other options, consider a split deployment, moving the static content to Apache while the dynamic content stays on Tomcat; the internet is full of user guides for this.


Antremons (BOB member since 2006-03-21)

p. 70 says, for Webi:
100,000 rows = small doc,
250,000 rows = medium doc,
500,000 rows = large doc

but isn’t that too simplified? If you put in a lot of charts and components, then a report with fewer pages might be much bigger than a report with many pages. I think the physical size of the file should be considered. What do you think?


Marfi :poland: (BOB member since 2006-12-18)

Very much so.

Number of tabs, number of variables, components, and complexity of formatting should be considered. These tend to make a document a lot heavier and cause Webi to run slowly, due to rendering components, re-calculating variables when switching tabs, etc.


Mak 1 :uk: (BOB member since 2005-01-06)

Then the estimate numbers from the sizing doc are completely useless. Having thousands of documents, you need to be able to get some parameters from the audit or repo DBs and, based on those, say what categories your documents fall into. I found one document which has 4,000 rows, so it would be considered small by the sizing doc, but the file is 400MB; so if you open three such docs on a single Webi processing server you crash it, given that the size will be the same after unzipping in memory.


Marfi :poland: (BOB member since 2006-12-18)

@Marfi
Either the Webi doc you mentioned is corrupted, or you are using fake universe measures (ones without a SQL aggregate function, that is), or you are pulling back millions of detail rows but filtering afterward in your report…


Andreas :de: (BOB member since 2002-06-20)

We provide only the infrastructure to users, so they can put whatever they want into their reports, and they certainly do. That is why it’s very hard to calculate the size of a report. The only thing that comes to my mind is to get the SI_SIZE of each doc and, based on this, find out the number of reports in each size category. Would that be a valid estimation?


Marfi :poland: (BOB member since 2006-12-18)

You might go with an almost worst-case scenario, e.g. 80% large reports, 20% medium reports. Also, sizing is just to get you started; ongoing monitoring of the platform is key to re-adjusting your sizing accordingly.


Andreas :de: (BOB member since 2002-06-20)

Totally concur with this: size for the worst-case scenario and then tune according to what you find when monitoring the platform during usage.

Also, LoadRunner is a good solution for stress-testing your system before users start to use it.


Mak 1 :uk: (BOB member since 2005-01-06)

I agree, and this is what I plan to do; however, I have a chance to do it the way the SAP doc suggests and follow all the steps. Any idea how I can get these numbers for a large number of reports? Only Query Builder, the audit DB, or the SDK make sense in such a case.
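
For example, a minimal Query Builder sketch (run via the AdminTools web application); note that SI_SIZE is the size of the stored object in the FRS, so it is only a rough proxy:

  SELECT SI_ID, SI_NAME, SI_SIZE
  FROM   CI_INFOOBJECTS
  WHERE  SI_KIND = 'Webi'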


Marfi :poland: (BOB member since 2006-12-18)

This link definitely belongs here :wave:

http://scn.sap.com/docs/DOC-58571#jive_content_id_Tips_for_Optimizing_the_Performance_of_Web_Intelligence_Documents


Mak 1 :uk: (BOB member since 2005-01-06)

Mak, great link… finally I have something to point to for my argument that millions of rows in a Webi doc is a bad thing. :+1: :+1:

Some users were trying to use webi for data dumps :hb:


JonnySpec :us: (BOB member since 2008-08-01)

Data dumps? That’s nothing; schedule an export to Excel and you can use it for your full ETL process :slight_smile:

Just an update. SAP BI 4.2 SP3+ will execute queries in parallel, but it does have some limits, so keep the number of queries under 6.


pcgeekus :us: (BOB member since 2003-10-16)

And the limits are, please? And why stick to at most 6?


Andreas :de: (BOB member since 2002-06-20)

I think that’s the SAP default limit, but Alan Mayer said that you aren’t stuck with that and it can be set higher.


pcgeekus :us: (BOB member since 2003-10-16)

Great stuff Andreas.


Mathew.ed (BOB member since 2017-04-29)