BusinessObjects Board

Bad System Message and Signal 11 (Solved)

I worked with a client this morning on a Dataflow that suddenly started failing even though it had not been changed. A different Dataflow had been added to the job the day before. The errors were “bad system message” and “System Exception <Signal 11>”. This was in DS 14.1 on Linux.

[Screenshots attached: Signal11.jpg, BadSystemMessage.jpg]

I reimported the target table after seeing the first message. I then reran the job and got the second message. I reimported the source tables and ran the job again, which completed successfully.

As near as I can tell, the repository metadata had the DATETIME columns defined internally as datetime(9), and after the reimport they were datetime(0). It’s possible that the column in Oracle was changed from TIMESTAMP to DATE but never reimported into the repository.
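For anyone hitting the same thing: before reimporting, it can be worth checking what Oracle currently says the columns are, since that tells you whether the repository has drifted. A minimal sketch using cx_Oracle against the ALL_TAB_COLUMNS dictionary view; the connection details and the owner/table names below are hypothetical:

    # Minimal sketch: list the DATE/TIMESTAMP columns Oracle currently has
    # for a table, to compare against what the DS repository thinks.
    # Credentials, host, and owner/table names are hypothetical.
    import cx_Oracle

    conn = cx_Oracle.connect("scott", "tiger", "dbhost/orcl")
    cur = conn.cursor()
    cur.execute(
        """
        SELECT column_name, data_type
          FROM all_tab_columns
         WHERE owner = :own
           AND table_name = :tab
           AND (data_type = 'DATE' OR data_type LIKE 'TIMESTAMP%')
        """,
        own="SCOTT",
        tab="MY_TARGET_TABLE",
    )
    for column_name, data_type in cur:
        # A column that shows DATE here but a TIMESTAMP-style precision
        # in the repository is exactly the mismatch described above.
        print(column_name, data_type)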


eganjp :us: (BOB member since 2007-09-12)

As far as I understand it, Signal 11 (SIGSEGV, a segmentation fault) means the Linux kernel killed your process because it did something faulty with memory (or maybe it ran out of memory). So sadly enough, Signal 11 only reports that something is wrong, but not what.
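For reference, those numbers map to standard POSIX signals, which Python's standard library can name (just an illustration, nothing DS-specific):

    # Signal 11 is SIGSEGV (invalid memory access); signal 9 is SIGKILL,
    # which is what the Linux OOM killer sends when memory runs out.
    import signal

    print(signal.Signals(11).name)  # -> SIGSEGV
    print(signal.Signals(9).name)   # -> SIGKILL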

Nice tip about re-importing the tables. I would not have thought about that.


Johannes Vink :netherlands: (BOB member since 2012-03-20)

I don’t think either of those error messages is always going to be specific to table metadata synchronization problems. But so far that has been the solution twice.


eganjp :us: (BOB member since 2007-09-12)

Yep. Whenever I get weird messages that I can’t figure out, I start with a re-import. Then I usually export the job to ATL, delete all the objects, and reimport. If that doesn’t work, I rebuild from scratch.

  • E

eepjr24 :us: (BOB member since 2005-09-16)

I have to mention that we sometimes get this error from SAP: an unclear error in the BODS error log, while ST22 says Signal 11.

My totally unproven guess is that it has something to do with the load on the SAP system and the amount of memory the background job takes. If I, for example, move a DISTINCT from inside SAP to outside it, then suddenly it can work.
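A rough way to picture it (a conceptual sketch only, not how BODS or SAP actually execute a pushed-down DISTINCT): wherever the deduplication runs is where the memory for the distinct set gets consumed.

    # Conceptual sketch: the system that runs the dedup pays its memory cost.
    def dedup(stream):
        seen = set()  # the working set lives wherever this function runs
        for row in stream:
            if row not in seen:
                seen.add(row)
                yield row

    rows = [("MATNR1", "0001"), ("MATNR1", "0001"), ("MATNR2", "0001")]

    # DISTINCT pushed into SAP: dedup runs on the (already busy) source,
    # and fewer rows cross the network.
    print(list(dedup(rows)))

    # DISTINCT moved outside SAP: extract everything, dedup on the job
    # server; more rows shipped, but the memory pressure moves off SAP.
    extracted = list(rows)
    print(list(dedup(extracted)))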

But it falls under the category “weird”.


Johannes Vink :netherlands: (BOB member since 2012-03-20)

Two Dataflows in three different environments were failing randomly (but increasingly often). A restart of the job ran OK almost every time. In this case it was a Signal 9 error, not Signal 11.

I switched the Cache type in the properties of both Dataflows from Pageable to In-memory, and so far (two days in a row) there have been no failures.

Both of the Dataflows in question had been optimized to ensure they used very little job server memory, but the errors continued. SAP Tech Support had no useful suggestions up to the point where I made the change. Since the Dataflows used so little memory, I didn’t think there was much risk in changing the Cache type to In-memory.
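For anyone weighing the same change, the trade-off behind that setting, loosely: a pageable cache spills rows to disk so memory use stays bounded, while an in-memory cache keeps everything in RAM. A rough analogy in Python only (a dict versus the disk-backed shelve module; this says nothing about DS internals):

    import os
    import shelve
    import tempfile

    # In-memory cache: fastest, but bounded by available job server RAM.
    mem_cache = {"row:1": "cached row data"}

    # Pageable-style cache: file-backed, so memory stays small at the
    # cost of disk I/O (the pageable "pCache" code path mentioned below).
    path = os.path.join(tempfile.mkdtemp(), "pcache")
    with shelve.open(path) as disk_cache:
        disk_cache["row:1"] = "cached row data"
        assert disk_cache["row:1"] == mem_cache["row:1"]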

This was on DS 4.1.2.378 on Red Hat Linux, kernel 2.6.32-431.11.2.el6.x86_64 #1 SMP Mon Mar 3 13:32:45.

When I updated my incident to let Tech Support know about the success with In-memory they came back with this:

If changing the 'Cache type' to In-Memory does improve the issue, it sounds like one of the pCache bugs that we've fixed since DS 4.2.

eganjp :us: (BOB member since 2007-09-12)