Hi, has anyone seen this error and got a fix? This is on Sybase 12.
I've got a report that runs fine when I run it on my machine and returns 100k rows. If I schedule it, though, it shows as successful, but 0 rows returned when I look at the document tab via the console. And when I open the report there are rows returned, just not as many as when I run it directly on my machine.
I'm sure it's not the universe thresholds, as I've set those to not applicable. I also know the BCA process timeouts are high enough, because I know how long the refresh takes on my machine and it's nowhere near those limits. The report is saved to a directory.
I don't understand how it can show 0 rows in the BCA console, yet when I open the report it has rows but says partial results.
In which format is it saved? It may not be this, but I have had a similar problem using BCA on Unix with HTML flat files put in a directory, which had less data than the report refreshed in Reporter.
It was due to a lack of free space on the Unix directory, so the HTML files could not be completely generated…
It's just a standard .rep. I've checked the space on the drive it's being saved to and it's fine. I did notice though that one of the DPs returns over 1 million rows; I know that's a lot, but this is a report with a date range of two years.
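One way to confirm that figure independently of BO would be to run the DP's generated SQL straight against Sybase and count the rows it returns. A rough sketch in Python with pyodbc; the connection details, table and column names below are just placeholders for the real SQL from "View SQL":

```python
# Sketch only: run the data provider's SQL directly against the database
# and count rows, to confirm what Sybase returns outside of BO/BCA.
import pyodbc

# Hypothetical connection details -- substitute your own server/credentials.
conn = pyodbc.connect(
    "DRIVER={Adaptive Server Enterprise};"
    "SERVER=my_sybase_host;PORT=5000;"
    "DATABASE=sales_db;UID=report_user;PWD=secret"
)
cursor = conn.cursor()

# Hypothetical stand-in for the DP's generated SQL (two-year date range).
cursor.execute(
    "SELECT COUNT(*) FROM sales_fact "
    "WHERE trade_date BETWEEN '2002-01-01' AND '2003-12-31'"
)
row_count = cursor.fetchone()[0]
print(f"Rows returned by the database for this date range: {row_count}")

conn.close()
```

If that count matches the full-client refresh but not the scheduled copy, the data side is fine and the problem sits with the BCA.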
I noticed that when I open and save the report directly on my machine it takes forever. Does the BCA factor the time it takes to do the save into any of its timeouts?
I'm now managing to get all the results; however, the job still shows as successful on the scheduler, but with 0 rows returned for each DP. When I open the report I can see the rows in there, so it seems the BCA Console is lying. Things I have tried so far are:
Scan and repair on the repository.
Stopped and started the BCA.
Rebooted the box.
Deleted the copies of the universe file on the server.
Checked all connections.
Renamed the report, published it to the repository, imported it back, then scheduled it again.
Checked all the timeouts on the BCA and universes, plus the row-number restrictions; no problems there.
How can I check what's going on on the BCA Server machine in terms of memory usage etc.?
I'm still having issues with reports that are coming back with a 'Partial Data' error even when the .rep file is saved to a dir and not a directory.
The reports aren't failing on the BCA console; they show as successful.
I even sat with Task Manager open watching the memory usage as the report goes through and runs, but it never seems to max out. Could I be looking at the wrong thing?
Also, could too many BCAs be causing the problems? I have a queue for each group, which means 10 of them. Any suggestions greatly received.
Use Performance Monitor if you have it installed (perfmon.exe). Simply point it at your BCA server, then select a few appropriate 'measures'. There are a whole lot of memory-related ones, though I must admit I've never had much luck monitoring memory usage.
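If perfmon isn't telling you much, another rough option is a small logging script left running on the BCA server while the scheduled job refreshes. This is just a sketch, and it assumes you can put Python and the psutil package on the box, which BO obviously doesn't provide:

```python
# Quick-and-dirty memory/CPU logger -- run on the BCA server while the
# scheduled report refreshes and check whether available memory bottoms out.
import time
import psutil

LOG_FILE = "bca_mem_log.txt"   # hypothetical output file

with open(LOG_FILE, "w") as log:
    for _ in range(360):       # roughly 30 minutes at 5-second intervals
        mem = psutil.virtual_memory()
        cpu = psutil.cpu_percent(interval=None)
        line = (f"{time.strftime('%H:%M:%S')} "
                f"avail_mb={mem.available // (1024 * 1024)} "
                f"mem_used_pct={mem.percent} cpu_pct={cpu}")
        print(line)
        log.write(line + "\n")
        log.flush()
        time.sleep(5)
```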
How about regenerating your .key file? Also, the above paragraph doesn’t make sense - saved to a dir and not a directory?
Well, that’d likely show up with jobs showing as, oh crikey I can’t remember what it is…not failed or successful or suspended or waiting…I think it begins with a d…err? DELAYED! And of course you can test this quite easily?
Also, if you have direct access to the server, how about opening the report in the full client there, refreshing it, and seeing what happens? And I assume you are scheduling with the same username as you normally use on your PC?
Oops, should have said 'even when saving to a dir and not the repo'.
I don't understand: 'Well, that'd likely show up with jobs showing as, oh crikey I can't remember what it is…not failed or successful or suspended or waiting…I think it begins with a d…err? DELAYED! And of course you can test this quite easily?'
Yep, same username.
Can't open the report on the server as it's prod, so it's too hectic; it seems to work fine on the DR box though.
Sorry, what I meant was: if you think you're overloading the server, I'd expect to see jobs in a Delayed status. So why not wait for a quiet time and run the report while nothing else is running, to see if it's a resource issue?
To be honest I've got a sneaky feeling it might be a report issue, maybe corrupted somewhere, as other reports from the same universe are fine.
I'm holding out hope that it's not, though, as the person who built this report has spent a long time on it, and if ever a report could be someone's pride and joy, this is it :? Plus he is a big wig, sod's law eh :?
One thing: the BCA Server is using BO 5.1.7, but reports are being scheduled from 5.1.4. Are there any known compatibility issues?
It is a large report, 1.4 million rows; he required the data at the most granular level as he wants to be able to drill down. I suggested using drill-through instead, to avoid having to bring all the data back, but it was a no :(.
I'm thinking now that when I try to save a large .rep to my PC it hangs, in some cases for 10-15 minutes. It must be doing the same on the BCA. Is there a parameter that would kill the process of saving it down if it went past x? What about the BOManager timeout for inactive or timeout for interactive actions?
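One crude way to see whether the save itself is the slow part would be to time a plain file copy of a similarly sized file into the same output directory, outside of BO entirely. A sketch, with both paths made up for illustration:

```python
# Time a raw file copy into the BCA output directory. If this alone takes
# many minutes, the slow save is disk/share throughput, not the report engine.
import shutil
import time

SOURCE = r"C:\temp\big_report.rep"                  # hypothetical large file
DEST = r"\\bcaserver\reports\big_report_copy.rep"   # hypothetical output dir

start = time.time()
shutil.copyfile(SOURCE, DEST)
elapsed = time.time() - start
print(f"Copy took {elapsed:.1f} seconds")
```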
It was a bug in BCA 5.1.7, fixed in 5.1.8, not a memory issue. It only took BO 7 days to tell me :?
The bug number is #1066254 and it is fixed in the 5.1.8/2.7.4 release.
Subject: When you send several big FC (full-client) reports through WebI to be run at the same time, after several scheduled runs it appears that some tasks are successful but the reports contain partial results.
Description: Workflow to reproduce:
Take the attached report. Save the same report with 10 different names…
Send these ten reports to the BCA with the refresh option, one user each (each report sent to a different user).
Increasing the Interactive Heap Size to 2562 made the problem occur more frequently on the server (it was tested on a 4-CPU machine to reproduce the problem more quickly).
After 3 consecutive runs, at approximately the fourth run, we can see that several reports have partial results. (You can see it through the console when you view the Document properties of a successful task.)
Tested with WebIntelligence 2.7.2 and data in Oracle and in Sybase.