BusinessObjects Board

Measuring performance of user interaction

In the project I am working on right now, we have been able to successfully measure the performance of scheduled reports and have very good metrics for that. However, the biggest concern we hear from our users is that the interaction time (time to log in, go to the folder, open the report) is way too long.

I found this post: FAQ: Reporter which is really helpful.

Are there any other ways to measure interaction time? Other techniques, etc?

Thanks!


jgorricho :colombia: (BOB member since 2005-10-21)

Performance-related questions depend heavily on the architecture and setup, so it is best to list the versions involved.

Generically, you have to separate and measure each component involved in the user interaction.
Basically, your core components are the network, the Repository / CMS and the BO installation.
We used http://www.loadtest.com.au/Technology/winrunner.htm
to do network response and load tests - this gave us an idea of the network traffic and response time for each click action etc. It was here that we bumped up the HTML packeting scenario.
The CMS is critical because it is the conduit for every action, i.e. ensure that it is optimal for data fetches etc.
The system itself has many options - from caching to process waits - that can enhance or delay your user responses.
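Since the interaction is a sequence of discrete steps (log in, navigate to the folder, open the report), one simple way to get baseline numbers is to time each step individually from a client-side script. A minimal Python sketch - the step functions here are hypothetical stand-ins for whatever driver you actually use (an HTTP client, the BO SDK, or a GUI automation tool):

```python
import time

def time_step(label, action, results):
    """Run one interaction step and record its wall-clock duration in seconds."""
    start = time.perf_counter()
    action()
    results[label] = time.perf_counter() - start

# Hypothetical stand-ins for the real steps; replace each sleep with a call
# into your actual client (HTTP request, SDK call, GUI automation, ...).
def login():
    time.sleep(0.02)       # simulate the login round trip

def open_folder():
    time.sleep(0.01)       # simulate listing the folder contents

def open_report():
    time.sleep(0.05)       # simulate fetching and rendering the report

results = {}
for label, action in [("login", login),
                      ("open_folder", open_folder),
                      ("open_report", open_report)]:
    time_step(label, action, results)

for label, seconds in results.items():
    print(f"{label}: {seconds:.3f}s")
print(f"total: {sum(results.values()):.3f}s")
```

Running the same script from different network segments (LAN vs. WAN, near vs. far from the web server) separates network cost from server cost per step.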

In summary - you have to measure all components and understand the configuration and use of the underlying architecture with respect to user numbers and core actions / requests.
Auditing within BO could be improved by the vendor: it gives you some generic figures without really providing detailed or well-explained measures of system performance. There are some third-party tools specific to BO, but for hard-core system evaluations I prefer to use industry tools and methods for each measurable component in the process flow.

I.e. map and define each component, and create / introduce a clearly defined method of measuring all of them - with test cases and result criteria that can form part of an ongoing process to be revisited periodically.
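The "result criteria" part can be as simple as an agreed time threshold per step that each test run is compared against. A sketch, with made-up threshold and measurement values:

```python
# Hypothetical baseline criteria, in seconds, per interaction step.
CRITERIA = {"login": 2.0, "folder_list": 1.5, "report_open": 10.0}

def evaluate(measurements, criteria):
    """Return True/False per step: did the measured time meet the criterion?"""
    return {step: measurements[step] <= limit
            for step, limit in criteria.items()}

# Hypothetical measurements from one test run.
measured = {"login": 1.2, "folder_list": 0.9, "report_open": 14.5}

verdict = evaluate(measured, CRITERIA)
print(verdict)  # report_open exceeds its 10 s limit, so it fails
```

Rerunning the same test cases after each configuration change (or periodically, as suggested above) turns a one-off firefight into an ongoing measure.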

I see many orgs embark on this process only when there are problems - and then, under pressure, the exercise is done without any view to it being performed again.
In BI we are fond of proposing metrics and KPIs but are often lacking in defining those for the architecture we offer up to the users.

There is a variety of toolsets that measure performance on the respective OSs - the trick is to locate and apply one that offers the best option for an enterprise-wide load test as well as possible day-to-day monitoring. Your users should never have to be the ones to inform you of degrading or sub-par service.

:wink:


MikeD :south_africa: (BOB member since 2002-06-18)

MikeD, what a nice reply, thank you very much.

I wholeheartedly agree with your comment about the lack of use of KPIs and BI tools within BI teams. We create very sophisticated solutions for our partners, yet manage our own teams with manual Excel spreadsheets.

Our environment is BOXI 3.0 on AIX with two WAS servers. Part of the performance perception issue is that this is a new implementation of an ERP and its reporting solutions. The amount of data is ten times bigger, and the reports are therefore larger. So the change management part of the project needs to fill that gap, and this is something that has been lacking.


jgorricho :colombia: (BOB member since 2005-10-21)

Sure - note that XI 3.0 had a few issues as well, so it is best to get to a suitable XI 3.1 version.

As to data expansion - look into smarter reporting (linked reports, cubes, etc.). Understand that increased data also bumps up traffic flow and data fetches, so optimise accordingly.
Caching gives some savings, but only in the multi-user / same-report-and-parameters scenario.
HTML rendering is the biggie, so ensure your web server is optimal.
User peaks should be monitored - it does not help if everyone wants their reports in the morning and you get process contention - look into the scheduling and delivery mechanisms.

In summary - yes, you should effectively monitor and constantly tweak for optimal user performance, but don't forget to look into effective usage methods using ALL the features and mechanisms available.
Anticipated impact on production systems is generally relegated to a thumbsuck process, where everyone just assumes the admins will let people know if the expected deliverable is going to cause problems - i.e. whatever happened to anticipated load and performance impact analysis?


MikeD :south_africa: (BOB member since 2002-06-18)