I have a big issue using the XI R2 SDK when I want to get the rights for an object (a universe or an overload, for example). When I start my process the memory used keeps growing, from around 20 MB up to more than 100 MB, and if I have too many objects the script hangs. Here is my code:
try {
    // Query all overload objects from the CMS application objects table
    String boQuery = "SELECT * FROM CI_APPOBJECTS WHERE SI_KIND='OVERLOAD'";
    Iterator<IOverload> itAllOverloads = BoxiRepositoryManager.getInstance().executeQuery(boQuery).iterator();

    while (itAllOverloads.hasNext()) {
        IOverload myOverload = itAllOverloads.next();
        System.out.println("Overload = " + myOverload.getTitle());

        // SI_OBTYPE of the overload, compared below against the high byte of each right ID
        int overload_SI_OBTYPE = ((Integer) myOverload.properties().getProperty("SI_OBTYPE").getValue()).intValue();

        ISecurityInfo securityInfo = myOverload.getSecurityInfo();
        Iterator<IObjectPrincipal> itPrincipals = securityInfo.getObjectPrincipals().iterator();
        while (itPrincipals.hasNext()) {
            IObjectPrincipal principal = itPrincipals.next();

            Iterator<ISecurityRight> itRights = principal.getRights().iterator();
            while (itRights.hasNext()) {
                ISecurityRight securityRight = itRights.next();
                // Retrieve the high byte of the security right ID and compare it with the SI_OBTYPE of the overload
                int securityID_highBytes = (int) securityRight.getID() / (256 * 256);
                if (securityID_highBytes == overload_SI_OBTYPE) {
                    System.out.println(" Principal = " + principal.getName());
                }
            }
        }
    }
} catch (Exception e) {
    e.printStackTrace();
}
The error:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.util.Hashtable.rehash(Unknown Source)
at java.util.Hashtable.put(Unknown Source)
at com.crystaldecisions.sdk.occa.security.internal.a.a(Unknown Source)
at com.crystaldecisions.sdk.occa.security.internal.f.new(Unknown Source)
at com.crystaldecisions.sdk.occa.security.internal.a.commit(Unknown Source)
at com.crystaldecisions.sdk.occa.infostore.internal.ap.a(Unknown Source)
at com.crystaldecisions.sdk.occa.infostore.internal.ar.if(Unknown Source)
at com.crystaldecisions.sdk.occa.infostore.internal.ar.getObjectPrincipals(Unknown Source)
at com.crystaldecisions.sdk.occa.infostore.internal.ar.getObjectPrincipals(Unknown Source)
To me it looks like there is a memory leak somewhere in the getObjectPrincipals() or getRights() methods, but maybe I am doing this the wrong way; I really don't understand why the memory is not released…
By “blocks”, I mean perhaps 500 or 1000 objects at a time. So your query would loosely look something like this pseudocode:
blockRetrieveCount = 1
lastProcessedID = 0
while (blockRetrieveCount > 0)
{
    query("select top " + blockSize + " * from ci_infoobjects where .... and si_id > " + lastProcessedID + " order by si_id")
    ...processing / record write / whatever you're doing with results...
    blockRetrieveCount = query.ResultSetSize
    lastProcessedID = queryResults.property(SI_ID)   // SI_ID of the last object in the block
}
I’d give something like this a shot to see if it helps with memory consumption. Also, make sure you’re explicitly nulling your query result objects when you’re done with them so they’re freed up for garbage collection.
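If it helps, here is roughly what that pseudocode looks like in Java against the SDK directly. This is only a sketch: it assumes you already hold an IInfoStore from your enterprise session, BLOCK_SIZE and processInBlocks are names I made up, and the WHERE clause is where your own filter conditions would go.

import java.util.Iterator;

import com.crystaldecisions.sdk.exception.SDKException;
import com.crystaldecisions.sdk.occa.infostore.IInfoObject;
import com.crystaldecisions.sdk.occa.infostore.IInfoObjects;
import com.crystaldecisions.sdk.occa.infostore.IInfoStore;

public class BlockRetrieval {
    private static final int BLOCK_SIZE = 500;    // 500-1000 objects per block

    static void processInBlocks(IInfoStore infoStore) throws SDKException {
        int lastProcessedID = 0;
        int rowsReturned = BLOCK_SIZE;            // prime the loop
        while (rowsReturned > 0) {
            // Ask the CMS for one block at a time, resuming after the last SI_ID we saw
            IInfoObjects block = infoStore.query(
                    "SELECT TOP " + BLOCK_SIZE + " * FROM CI_INFOOBJECTS"
                    + " WHERE SI_ID > " + lastProcessedID    // plus your own filter conditions
                    + " ORDER BY SI_ID");
            rowsReturned = block.size();
            for (Iterator it = block.iterator(); it.hasNext();) {
                IInfoObject obj = (IInfoObject) it.next();
                // ...processing / record write / whatever you're doing with results...
                lastProcessedID = obj.getID();    // remember the last SI_ID handled
            }
            block = null;                         // drop the reference so the block can be collected
        }
    }
}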
I tried your solution, retrieving only one row at a time in a loop and nulling all my objects after using them (even if in Java we don't normally need to do that thanks to the garbage collector), but it doesn't work: the memory usage still keeps growing. I'm pretty sure my code is fine, because if I don't use the getRights methods on the ISecurityInfo everything is OK and the memory usage is stable (around 30 MB; with the method it climbs to 120 MB and more… more… more…). I think I'm going to write to BO support.
I was having a similar problem to the OP. I wrote a Java app that would parse through all Web Intelligence documents on our company CMS and output to a flat file a list of which objects and classes were used by each report.
Eventually I would get out-of-memory errors and the application would crash.
I tried what crystal01 suggested, using blocks of 10 for each IInfoObjects query.
This has done the trick, and my memory usage has now stabilised instead of continuously creeping up as it was doing previously.
I'm guessing that the object containing the query must still hold references to every single report object, so Java garbage collection was not reclaiming the memory when I had finished with a report.
Thanks crystal01, I had spent two weeks trying to solve this issue with little success.
I think that is the problem. When you retrieve your IInfoObjects collection, try calling infoObjects.remove(x) instead of using an iterator. This would be in addition to nulling references to individual IInfoObject objects to allow for garbage collection.
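Roughly, I mean something like the fragment below. It is only a sketch: infoStore and boQuery stand for the same store and query from the original post, nothing is committed back to the CMS, and it is worth checking in your SDK version's javadoc that IInfoObjects really exposes a Collection-style remove().

IInfoObjects infoObjects = infoStore.query(boQuery);
while (infoObjects.size() > 0) {
    IInfoObject obj = (IInfoObject) infoObjects.get(0);
    // ...process obj (titles, rights, output, etc.)...
    infoObjects.remove(obj);   // shrink the collection instead of walking an iterator past the object
    obj = null;                // and null the local reference so it can be garbage collected
}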
Revisiting this old topic - did you by chance get confirmation that there is a memory issue with the getObjectPrincipals() method? I'm seeing some similar behaviour specifically with this method.
I got confirmation from SAP that there is indeed a memory leak in the getObjectPrincipals() method in the Java SDK. This leak was corrected in XI R2 SP5 for the COM and .NET SDKs, but apparently the Java version was not fixed. I'm not holding my breath that this will be corrected either, as XI R2 is nearing the end of its support life.
Has anyone gotten any further information from SAP on this memory leak or better yet how to code around it on the R2 platform? I have a SAP note open on the issue, but have not been able to get clear direction on how / if this can be worked around.
I have tried:
- nulling all relevant objects
- explicit garbage collection
- varying batch sizes for my IInfoObjects collection
with no success.
So long as getObjectPrincipals() is in my code, the available memory will be gradually consumed until a heap space error is thrown.
In case this might help anyone, I have found that forcing a periodic session disconnect / re-connect clears out the memory that is held by the getObjectPrincipals() method. It’s a bit of a hackish workaround, but at this point I don’t know that there is a better option.
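For anyone who wants to try it, this is roughly the shape of my loop, combining the block retrieval discussed earlier in the thread with the periodic re-logon. It is only a sketch: the credentials, CMS name, and block size are placeholders, and the actual getObjectPrincipals() processing is omitted.

import java.util.Iterator;

import com.crystaldecisions.sdk.framework.CrystalEnterprise;
import com.crystaldecisions.sdk.framework.IEnterpriseSession;
import com.crystaldecisions.sdk.occa.infostore.IInfoObject;
import com.crystaldecisions.sdk.occa.infostore.IInfoObjects;
import com.crystaldecisions.sdk.occa.infostore.IInfoStore;

public class RightsScanWithRelogon {
    // Placeholder connection details - substitute your own CMS values
    private static final String USER = "Administrator";
    private static final String PASSWORD = "";
    private static final String CMS = "cmsserver";
    private static final String AUTH = "secEnterprise";
    private static final int BLOCK_SIZE = 500;

    public static void main(String[] args) throws Exception {
        IEnterpriseSession session = CrystalEnterprise.getSessionMgr().logon(USER, PASSWORD, CMS, AUTH);
        IInfoStore infoStore = (IInfoStore) session.getService("InfoStore");

        int lastProcessedID = 0;
        while (true) {
            // Fetch the next block of overloads, resuming after the last SI_ID handled
            IInfoObjects block = infoStore.query(
                    "SELECT TOP " + BLOCK_SIZE + " * FROM CI_APPOBJECTS"
                    + " WHERE SI_KIND='OVERLOAD' AND SI_ID > " + lastProcessedID
                    + " ORDER BY SI_ID");
            if (block.size() == 0) {
                break;
            }
            for (Iterator it = block.iterator(); it.hasNext();) {
                IInfoObject obj = (IInfoObject) it.next();
                // ...getSecurityInfo() / getObjectPrincipals() processing goes here...
                lastProcessedID = obj.getID();
            }
            // Drop the session after each block and log on again; the memory held
            // by getObjectPrincipals() is released along with the old session
            session.logoff();
            session = CrystalEnterprise.getSessionMgr().logon(USER, PASSWORD, CMS, AUTH);
            infoStore = (IInfoStore) session.getService("InfoStore");
        }
        session.logoff();
    }
}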