Size of documents in repository

I’ve noticed something interesting regarding the size of reports in the repository.

If I have a report that is perhaps 100k, retrieve it from the repository, then decide I want to add more data, perhaps through drilldown, the report obviously grows in size, perhaps up to 3Mb.

However, if I add more data, then decide that’s not what I want, rerun the query so that it is exactly the same as the first query, and re-export to the repository, the report retains the LARGEST size it ever reached. In my example above, it would be 3Mb, even though the cube contained the exact same fields as when it was 100k.

In other words, the cube does not shrink if it has grown.

I found a work-around: I saved my report as a template, then created a new report from that template, selected the exact same fields, and the report was back down to the original size. However, this is cumbersome.

Does anyone know of a way to shrink down a report? Size is also affected by the number of rows: a report that brings back only 1 or 2 rows (because of sparse data) is much smaller than one that brings back thousands. If you save a report in the repository, you obviously want it to be as small as possible so download times stay short. But if you ever run the report for a large amount of data and then store it in the repository, you have a large file, and nothing that I can find will shrink it.

Thanks for the help,

Ralph Slate
slater@rpi.edu


Listserv Archives (BOB member since 2002-06-25)

Ralph Slate wrote:

However, if I add more data, then decide that’s not what I want to do, rerun the query so that it was exactly the same as the first query, and re-export to the repository, the report will retain the LARGEST size that it ever was, so in my example above, it would be 3Mb, even though the cube contained the exact same fields as when it was 100k.

Did you also check that the ‘scope of analysis’ is reset in the query panel?

Walter

DI Walter Muellner
Delphi Software GmbH, Vivenotgasse 48, A-1120 Vienna / Austria Tel: +43-1-8151456-12, Fax: +43-1-8151456-21 e-mail: w.muellner@delphi.at, WEB: http://www.delphi.at


Listserv Archives (BOB member since 2002-06-25)


Ralph, I tested this out on our installation, which is 4.0.5.5. I varied the amount of data returned; by that I mean the number of rows, not extra columns. When I increased the number of rows and then decreased them, there was some “extra” space still left in the new file, but not very much: 5k on an 83k file.

OK, it’s good that this only happens when you add additional columns. But when you add columns and then remove them later, the document doesn’t shrink. I’m nearly 100% positive about that one.

One option you may want to try is purging the data from the document. You can do this by bringing up the Data Manager dialog box (choose View → Data from the menu). In the bottom right is a button called Purge, which will remove all the data from your document.

This didn’t help when the additional columns were added; I had already tried it. Thanks for the tip, though.

Ralph
slater@rpi.edu


Listserv Archives (BOB member since 2002-06-25)

Walter,
Out of curiosity, can you please explain what you mean by ‘scope of analysis’? I thought this related to OLAP reports only.

I too have seen occurrences similar to what Ralph is describing within BusinessObjects.

  1. A report was created and then found to contain an object which it shouldn’t have. The data was purged, the object removed from the query panel, and the report re-run. When the data cube was viewed, the object which was removed from the query panel was still in the cube.
  2. A query was created and run (step 1). The query was then linked to another universe and run (step 2). The query was then refreshed (not run). After the refresh, the data cube held data from the original query (step 1), not the second query (step 2).

I thought that the BusinessObjects data cube was supposed to be dynamic, but… I don’t think that it is. Maybe someone from Business Objects can point me to the appropriate documentation, white papers, etc. regarding how the cube functions.

Ralph Slate wrote:

However, if I add more data, then decide that’s not what I want to do, rerun the query so that it was exactly the same as the first query, and re-export to the repository, the report will retain the LARGEST size that it ever was, so in my example above, it would be 3Mb, even though the cube contained the exact same fields as when it was 100k.

Walter Muellner wrote:
Did you also check, that the ‘scope of analysis’ is reset in the query panel?


Listserv Archives (BOB member since 2002-06-25)

In a message dated 98-07-02 11:40:34 EDT, you write:

The only drawback is that when a user downloads them and looks at them, they look strange, because all you see is header cells and so on.

One way to get users around the “empty report when downloaded” problem is to set the document to refresh automatically when it is opened. If memory serves, this is one of the items on the Save tab of the Options dialog, found under the Tools menu.

Regards,
Dave Rathbun
Integra Solutions
www.islink.com


Listserv Archives (BOB member since 2002-06-25)

Ralph Slate wrote:

However, if I add more data, then decide that’s not what I want to do, rerun the query so that it was exactly the same as the first query, and re-export to the repository, the report will retain the LARGEST size that it ever was, so in my example above, it would be 3Mb, even though the cube contained the exact same fields as when it was 100k.

In other words, the cube does not shrink if it has grown.

Once again: did you check that you also reset the ‘scope of analysis’ in the query panel before re-running the query? If you expand the cube by “drilling through” your local cubes, i.e. including additional objects in the query, these objects cannot be seen in the query panel’s result window, BUT the ‘scope of analysis’ is changed to include them. If you do not reset the scope of analysis, the query looks like the original one BUT STILL contains objects that are ‘hidden from the user’; these will be retrieved from the database and therefore contribute to the cube’s size…

Walter

DI Walter Muellner
Delphi Software GmbH, Vivenotgasse 48, A-1120 Vienna / Austria Tel: +43-1-8151456-12, Fax: +43-1-8151456-21 e-mail: w.muellner@delphi.at, WEB: http://www.delphi.at


Listserv Archives (BOB member since 2002-06-25)

Crystal Golding wrote:

Walter,
Out of curiosity, can you please explain what you mean by ‘scope of analysis’? I thought this related to OLAP reports only.

There is actually no difference between ‘OLAP reports’ and other reports in terms of data providers (apart from handling and display). If you use drill-down techniques, you expand the cube dynamically by “drilling through” the local cubes. By doing this, additional objects are included in the query (obviously, this must be recorded somewhere); you cannot see the additional objects in the ‘results window’, but they do appear in the list of all selected objects (at the top of the query panel, to the right of the ‘Scope of analysis’ window).

  1. A report was created and then found to contain an object which it shouldn’t have. The data was purged, the object removed from the query panel, and the report re-run. When the data cube was viewed, the object which was removed from the query panel was still in the cube.

I have no idea what happened to this query… Seriously, what were the exact steps of this query?

  2. A query was created and run (step 1). The query was then linked to another universe and run (step 2). The query was then refreshed (not run). After the refresh, the data cube held data from the original query (step 1), not the second query (step 2).

Does ‘link to another universe’ mean that you changed the underlying universe (in the Data Manager)? And I do not understand why you have two queries; do you mean you have two data providers?

I thought that the BusinessObjects data cube was supposed to be dynamic, but… I don’t think that it is. Maybe someone from Business Objects can point me to the appropriate documentation, white papers, etc. regarding how the cube functions.

The data cubes are dynamic in the sense that you are able to ‘dynamically’ add or remove dimensions (i.e. dimension objects), either by adding them to the results window in the query panel, or by analyzing data on ‘objects not in the query’, in which case the cube is dynamically expanded.

Walter

DI Walter Muellner
Delphi Software GmbH, Vivenotgasse 48, A-1120 Vienna / Austria Tel: +43-1-8151456-12, Fax: +43-1-8151456-21 e-mail: w.muellner@delphi.at, WEB: http://www.delphi.at


Listserv Archives (BOB member since 2002-06-25)