We are currently in the middle of converting our Data Integrator (v11.7) repository from SQL Server to Oracle. At the same time, the databases holding the actual data are being switched from MySQL to Oracle.
Apart from the issues involved with migrating the data (this migration has taught me to never, ever use lowercase table names or field names longer than 30 characters again), I expected the conversion of the repository itself to be easy: just export the jobs and flows we need from one repository to the other.
I created the datastore in the new repo by hand, but imported the tables along with the flows/jobs, and then reimported them from the Oracle environment.
I just noticed that in all the dataflows where the “delete from table before loading” flag was set on the target table in the old repository, this flag is gone in the new repository!
So I now have the following questions:
Why do these flags disappear?
Should I be looking for more flags which might have disappeared?
Can I easily retrieve a list from my old repository of all the targets in dataflows where such a flag was set?
I can confirm we have the problem in 11.7.3.3 as well. It is not as frequent as in earlier versions, but we lose both the “Delete Before Load” flag and, occasionally, the Transactional status.
What exactly do you do then to get the system in order again? Do you have a list of all the data flows that are touched?
For the truncate flag I am now looking for a way to query the repository to come up with a list of dataflows, and so far I have come up with the following:
select distinct
       b.NAME
from BODI_REP.AL_LANGTEXT a
   , BODI_REP.AL_LANG b
where a.text_value like '%<loader_load_choice%'
  and a.parent_objid = b.object_key
  and substring(a.text_value, instr(a.text_value, '<loader_load_choice') + 20, 7) = 'replace'
;
Which gives me a list of 177 dataflows, which I will now go through to reset the flag…
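As a partial answer to the earlier question about other flags that might have disappeared: the same query can be made to show the raw text around each match, so you can eyeball the neighbouring loader options (transactional status and so on) instead of opening every dataflow. This is only a rough sketch, in the same dialect as the query above (it assumes instr, substring and greatest are available), and the 40/200 character window is an arbitrary choice you may need to widen:

select distinct
       b.NAME,
       -- show roughly 200 characters of loader options, starting a bit before the match
       substring(a.text_value,
                 greatest(instr(a.text_value, '<loader_load_choice') - 40, 1),
                 200) as loader_options_snippet
from BODI_REP.AL_LANGTEXT a
   , BODI_REP.AL_LANG b
where a.text_value like '%<loader_load_choice%'
  and a.parent_objid = b.object_key
;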
Querying the repository database will never be 100% accurate, because of line breaks. (That is, your search criterion ‘%<loader_load_choice%’ could get split across two rows in AL_LANGTEXT.)
You may get more reliable results by exporting to ATL and searching that.
Yes, I noticed that after I wrote my previous comment.
Unfortunately, I have a completely different issue with exporting my repository to an ATL: when I do it twice in a row, the resulting files differ in size! Sometimes the difference is as large as 2 MB vs. 8 MB.
But I did manage to create a new and improved query, which pastes two lines together and then does the search:
select distinct
b.NAME
from AL_LANGTEXT a
, AL_LANG b
, AL_LANGTEXT c
where a.text_value like '%loader_load_choice%'
and a.parent_objid = b.object_key
and c.parent_objid = a.parent_objid
and c.seqnum = a.seqnum - 1
and substring(concat(c.text_value, a.text_value), instr(concat(c.text_value, a.text_value), '<loader_load_choice')+20, 7) = 'replace'
and b.name like 'DF%'
and not exists (select 8 from AL_LANG d
where b.GUID = d.GUID
and d.VERSION > b.version)
;
Note, just for the record: my previous query also returned historic versions of the dataflows.
And the current query also returns results where a data transfer is being used as an intermediate.
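For completeness: pasting two consecutive lines together only repairs a split that falls across a single row boundary. A more general variant would glue all AL_LANGTEXT rows of an object back together before searching. This is only a sketch, assuming the repository database offers MySQL-style GROUP_CONCAT (on Oracle, LISTAGG ... WITHIN GROUP (ORDER BY seqnum) would play the same role), and it settles for a looser pattern match instead of the exact offset check:

select b.NAME
from AL_LANG b
   , (select parent_objid,
             -- glue all text rows of one object back together, in seqnum order,
             -- so a line break can never split the search string
             group_concat(text_value order by seqnum separator '') as full_text
      from AL_LANGTEXT
      group by parent_objid) t
where t.parent_objid = b.object_key
  and b.name like 'DF%'
  and t.full_text like '%<loader_load_choice%replace%'
  and not exists (select 1 from AL_LANG d
                  where d.GUID = b.GUID
                    and d.VERSION > b.version)
;

Note that GROUP_CONCAT truncates at group_concat_max_len (1024 by default), so that setting would have to be raised well beyond the size of your largest dataflow definition for this to be reliable.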
Another note on the duplicated content in the ATL: when I tried to run a job, it failed with an error.
The only way I could find to solve this was to open the ATL and go through all the targets in all the dataflows, removing the ODBC LDRConfiguration from each.
Apparently something went really wrong while converting the datastores.
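In case anyone wants to check their own converted repository for the same ODBC leftovers before a job fails, something along these lines might do it. It is only a sketch: I am assuming the loader configuration shows up in the language text with the literal strings 'LDRConfiguration' and 'ODBC', which you should verify against one known-bad dataflow first, and the same line-break caveat as above applies:

select distinct b.NAME
from AL_LANGTEXT a
   , AL_LANG b
where a.parent_objid = b.object_key
  and b.name like 'DF%'
  and a.text_value like '%LDRConfiguration%'
  and a.text_value like '%ODBC%'
  and not exists (select 1 from AL_LANG d
                  where d.GUID = b.GUID
                    and d.VERSION > b.version)
;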
All of this makes me wonder whether my approach was right. What I did, as I think I mentioned before, was start with an empty repository on Oracle, create the datastores in that empty repository (pointing to the new Oracle schemas), and then import the code, including the table definitions.
Would it perhaps have been better not to create the datastores manually, but instead to import them from the old repository (with the old ODBC definition) and then add a new loader configuration for Oracle?
I’m curious to know why you didn’t export the entire repository from the old system and import it into the new system. I use ATL files to go from dev to test to production using 11.5, 11.7.2.3 and 12.0.0.0. I haven’t seen any problems.
There were a couple of reasons for me not to do the complete export. One was that I was moving to a different database type for my datastores (MySQL -> Oracle), and I did not want all the old ODBC connection details in my new repository.
Another was a post by Werner in this forum, where he indicated that he never exports/imports datastores, but creates them manually. And he gave some good arguments, if I recall correctly.
Maybe I was wrong in doing it like this, but it still surprises me that I run into so many problems.
BTW, when you move from dev to test to production, I assume you have the same database type in all environments, which makes it a different case from this migration. I have also frequently used export/import to move between environments, and never experienced any problems.