BusinessObjects Board

Product direction: Local Repo versus Central Repo only!!!!

Agreed, my vote goes to keep local repos.


Nemesis :australia: (BOB member since 2004-06-09)

Keep local repositories. Don’t try to fix something that isn’t broken.

And improve the labelling: I really do not like the “free hand” text search for labels when constructing a release from the Central Repo. If anyone makes a typo, the get-by-label approach could miss out vital parts of a job.

Instead, I would like to see the Central Repo work more in line with the way Microsoft VSS deals with build releases, branches, etc.


ErikR :new_zealand: (BOB member since 2007-01-10)

I’d like to be more precise then.

Yes, we would keep the local repositories for QA and Prod (and other purposes), but in development you would no longer have to use local repositories. The central repo would have a job server of its own, and you would execute the latest checked-in version of each object that others created, while your own objects would be executed in the latest version you saved.

What would your answer be then? A useful change that would let you stop using local repos for development, or not useful at all, given that you would not have control over the versions being executed and their dependencies?


Werner Daehn :de: (BOB member since 2004-12-17)

This discussion reminds me of something I wrote up a while back (when I was managing a DS practice) on applying Martin Fowler’s Continuous Integration principles to team development in BODS. It’s directly related to this, and I’m very interested in what people think. In my experience, BODS developers tend to be lone wolf types…

See the file attachment for the full write-up. But here’s the bulk of it:

Introduction
In software development projects with multiple developers, things can go astray with disconcerting ease. This is as true for team development in SAP BusinessObjects Data Services (BODS) as for Java or C++.
If Tom makes two weeks’ worth of changes to his code units – which all work fine for him – and Kathy does likewise, when it comes time to integrate their pieces together, they will likely begin their descent into “integration hell.” To avoid hell requires religious adherence to a methodology that guards against the divergence of parallel streams of development. Our recommendation is for BODS development teams to follow the practice of Continuous Integration.

In a nutshell, Continuous Integration prevents divergence by requiring full daily builds of everybody’s committed code – and by requiring daily code commits. We didn’t create Continuous Integration – it’s an accepted methodology with an established literature – but we have attempted to add value by translating it here into BODS-specific terms. Continuous Integration informs much of the Automated Continuous Integration Testing (ACIT) facility in the practice of Agile Data Warehousing (ADW) as described by Ralph Hughes in his book of the same name.

Your understanding will benefit by consulting the following background material:
http://en.wikipedia.org/wiki/Continuous_integration 
Continuous Integration Best Practices with Rational Team Concert - Library: Articles - Jazz Community Site 
http://www.martinfowler.com/articles/continuousIntegration.html 
Integration Hell (this one is pretty funny)
http://www.martinfowler.com/articles/originalContinuousIntegration.html – a foundational article

Continuous Integration - General Principles
Continuous Integration enjoins the development team to adhere to the following principles:

  1. Maintain a Single Source Repository.
  2. Automate the Build.
  3. Make Your Build Self-Testing.
  4. Everyone Commits To the Mainline Every Day.
  5. Every Commit Should Build the Mainline on an Integration Machine.
  6. Keep the Build Fast.
  7. Test in a Clone of the Production Environment.
  8. Make it Easy for Anyone to Get the Latest Executable.
  9. Everyone can see what’s happening.
  10. Automate Deployment.

These principles work, but they need to be translated into terms and technologies specific to BODS. In the following, we’re ignoring testing, both in general and as handled in an ACIT facility, as well as how to automate certain operations; both are simply too large to be addressed here, where the focus is more on team dynamics and do’s/don’ts.

In what follows, we’ll use “commit” to mean “check-in”, and the terms “mainline” and “code base” will be synonymous with the latest version of the job or jobs in the BODS central repository.
Continuous Integration in BODS

  1. Use a Single Central Repository and Strive to Keep Everything There
    In a team working on a single project, we have to have a single DS central repository for code which contains everything necessary for a “build.” We recommend that teams create a central repository “BODS_CENTRAL” for the purpose.

There’s a lot of talk about “builds” in the literature. In DS terms, what runs is the job, so, for us, “build” = job. Some people say it should be a project, and that would be OK – the “build,” then, would be the set of jobs. This doesn’t affect the general discussion.

We can have multiple jobs in a single DS central repository, and those jobs can use shared components (like custom functions and tables), and that’s all fine as long as we have a single central.

We do not recommend creating multiple central repositories for different code life cycle phases (typically, dev, QA, and prod). Central repositories intrinsically maintain versions, and we can use the code labeling feature to label our code with version numbers if desired.

Martin Fowler emphasizes that the repository must contain everything, and it should be possible for a developer to start with a virgin local repository, get the latest job, and run it successfully, with no external dependencies. A Data Services central repository is not a full-featured, Subversion-style repository, and we can’t easily add Word documents, DDL scripts, etc. But by properly documenting the code that can get added, we can, at least, refer to such dependencies, and make our jobs self-documenting and self-contained to the maximum possible extent. The general rule should be: by performing a ‘Get latest version’ of a job, everything required to run the job should be either directly present or referred to within the job.

We encourage the practice of using BODS to create tables, vs. doing that with a modeling tool and DDL outside of BODS. Special “setup environment” or “create tables” jobs can be created, using template tables as the target, in lieu of external DDL scripts, and this helps keep everything self-contained in the central repository. BODS jobs can also be written to be self-checking and self-initializing, running scripts that check for the existence of objects (typically DBMS tables) and conditionally taking action to initialize those tables. Where you have need for advanced logical or physical data modeling, this won’t work, or will only work partially – it would be a stretch to write lots of advanced DDL script and execute it from BODS (although, yes, you could) – but for many purposes, regarding tables, all you need is the table and a primary key, which a BODS template table will handle just fine.
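For illustration only, here is a minimal Python sketch of that check-and-initialize pattern. In a real job this logic would live in a BODS script object against the datastore; the table name, connection, and DDL below are hypothetical.

```python
# Illustrative sketch only: the equivalent of a BODS "setup environment" script
# that checks whether a target table exists and conditionally initializes it.
# The connection, table name, and DDL are hypothetical placeholders.
import sqlite3  # stand-in for the target DBMS; a real job would use its datastore

DDL = """
CREATE TABLE DIM_CUST (
    CUST_ID   INTEGER PRIMARY KEY,
    CUST_NAME TEXT
)
"""

def ensure_table(conn: sqlite3.Connection, table: str, ddl: str) -> None:
    """Create the table only if it does not already exist."""
    exists = conn.execute(
        "SELECT 1 FROM sqlite_master WHERE type = 'table' AND name = ?", (table,)
    ).fetchone()
    if not exists:
        conn.execute(ddl)
        conn.commit()

if __name__ == "__main__":
    with sqlite3.connect("warehouse.db") as conn:
        ensure_table(conn, "DIM_CUST", DDL)
```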

  2. Centralize and Standardize System Configurations
    System configurations are not, unfortunately, objects we can put in a central repository, but they need to be treated like “code.” Designate a person to be the “keeper” or manager of system configurations, data store objects, data store configurations, and substitution parameters, which all work in concert. Post the ATL files for system configurations and substitution parameters in a central network share. Each developer on the team should, every day, do an import of the latest official system configuration and substitution parameter ATL files from this share.
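To make that daily step harder to forget, a small helper along these lines could stage the latest ATLs locally. This is just a sketch with hypothetical paths and file names; the import itself is still performed through the Designer.

```python
# Hypothetical helper: copy the latest "official" configuration ATLs from the
# team's network share to a local staging folder at the start of the day, so
# every developer imports the same files. Paths and file names are placeholders.
import shutil
from pathlib import Path

SHARE = Path(r"\\fileserver\bods_config")   # central network share
LOCAL = Path(r"C:\bods_config_import")      # local staging folder
FILES = ["system_configurations.atl", "substitution_parameters.atl"]

def stage_latest_config() -> None:
    LOCAL.mkdir(parents=True, exist_ok=True)
    for name in FILES:
        shutil.copy2(SHARE / name, LOCAL / name)
        print(f"Staged {name}; import it into your local repository via the Designer.")

if __name__ == "__main__":
    stage_latest_config()
```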

  3. Everybody Starts Fresh on Everything Every Morning
    Each developer must start the day, each day, every day, by performing a ‘Get Latest Version’ of the job (or jobs) in question and all dependents, for any job or jobs the developer intends to work on that day. Each developer should also import the latest ATL files for the system configurations and substitution parameters.

The point of ‘continuous integration’ is to continuously (at least daily) integrate-and-test to avoid serious divergence and speed overall efficiency and code quality. Will developers experience unpleasant surprises after doing complete “Get Latest Version” operations? Of course. But the surprises should always be from recent changes, and relatively easy to find and resolve. It is always easier to code in isolation in the short term, but the short-term productivity gains of ignoring coordination are paid for in spades later. Thus: developers are not allowed to pick-up where they left off on modifications to units which they’ve had continuously checked-out and uncommitted for days. At the beginning of each work day, each developer must re-align to the “mainline” or “code base,” and start from there – from an up-to-date base.

  4. Developers Always and Only Check-Out Units They Intend to Modify Soon
    Once a developer has a fresh code base, they check-out the specific unit or units they intend to modify soon – within a few hours. They do not preemptively check-out large branches of code, containing a number of units far in excess of what they could reasonably modify within “soon.”

If they want to make large “structural” changes to flow units such as workflows and conditionals, and want to perform check-out-with-dependents of the root workflow of a large branch to get everything at once for some major reorganization effort, then (in this typical example) they should immediately undo the checkout for all the dataflows within and any small workflows known to be irrelevant to this high-level restructuring.

Developers should think of checking-out an object as an act of communicating to their team members: “Hey, everybody – I’m actively working on this, right now.” If you check something out, but aren’t working on it, you’re misleading and confusing your team members.

  5. Avoid “Checkout Without Replacement”
    The “checkout without replacement” operation causes confusion, because it’s almost always used to get changes to a unit uploaded to the central repository in the absence of having properly checked-out the object beforehand.

Let’s say that on Tuesday, unbeknownst to each other, Tom and Kathy both decide to work on dataflow DF_ABC, but only Tom remembers to check it out. If Kathy had remembered, she would find that Tom already had DF_ABC out, and would be warned that whatever she intended to do in parallel would need to be manually merged with Tom’s changes – remember, a primary benefit of checking-out objects is to communicate. Indeed, Kathy should find something else to do – it makes little sense for her to work on DF_ABC if Tom’s got it checked-out and, presumably, is making changes she can’t see yet. But Kathy forgets to check it out, and forgets to check whether anybody else has DF_ABC checked-out, and starts making complicated adjustments to the dataflow. At around 3pm that day, Tom finishes with his changes and checks-in the code. At 6pm, Kathy is finished for the day, and, satisfied with her changes, needs to get her code committed to the central repository. Only then does she remember that she did the day’s work on an object she hadn’t checked-out.

What should she do? She can certainly perform a “checkout without replacement” and upload her new version of DF_ABC. But she worked off a “dirty” code base – her coding didn’t reflect Tom’s changes, which paralleled hers and were committed in advance. Tomorrow morning, when Tom performs a ‘Get Latest Version’ on the job, his changes to DF_ABC will all have disappeared in favor of Kathy’s, and after a round of recriminations and hurt feelings, they’ll need to manually piece through their parallel efforts, that is, will need to spend some time in integration hell.

Avoid “checkout without replacement.”

If a developer does forget to check out, however, all is not lost. The developer has two options:

  1. If the object in question has not been versioned by anybody else since the beginning of the day, then the developer can safely perform a ‘checkout without replacement’ and check the changes back in. No harm done.

  2. If the object has been versioned, the offending developer should go ahead and perform a ‘checkout without replacement’ and check-in, creating a new version, but then immediately use the comparison features in Data Services to see how the two versions differ and work with the other developer to integrate as necessary… expressing apologies.

  6. Always Test Against the Very Latest Code Before Checking-In
    Another way of saying this is “Never knowingly break the current job” or “Never commit from dirty code.”

Before checking-in changed units, developers are responsible for making sure the code passes both their own unit testing (of course) and “mini-integration testing,” i.e., running the relevant jobs successfully against up-to-the-minute latest code. If you work on a given unit till, say, 3pm, you can and should assume that other developers will have committed updates to other units earlier that day. Those updates haven’t been made in relation to your changes – you haven’t committed yet, so they’ve been working off the version as of that morning – but they’ve done their part to not break the job(s). Before you check-in your unit(s), you must perform a ‘Get Latest Version’ of the entire job again, as you did that morning, and make sure your units still work with the changes that have been committed so far from everybody else. (Your code units are in a checked-out state and will not be overwritten by a “Get Latest Version” operation.)

  7. Everyone Commits Everything Daily
    Developers should check-in their changed code at least once per day, and preferably more often. Code should not be left in a checked-out state overnight. The divergence that leads to integration hell grows the wider the longer code is modified and not returned to the mainline, and under Continuous Integration, a day’s worth of changes is the limit of tolerance for this divergence.

What if a developer checks-out a unit in the morning, works all day to make changes, and still doesn’t have it working by the end of the day? Then – in general – that developer is biting off more than they can chew, and needs to decompose the work into smaller pieces.

A developer may not check-in broken code to the mainline – period. If he finds himself at the end of the day with a broken, in-process unit, and there are no smaller pieces of it which can be committed, then he will simply need to make a duplicate of the code and undo the morning’s checkout operation on the still-broken unit. He is not allowed to retain it overnight in a checked-out state. In the morning, he’ll get a fresh copy of the unit in question (with no guarantee that someone else hasn’t modified it in the meantime), and will need to start afresh on the changes.
ETL Doctor Data Services Best Practice for Team Development and Continuous Integration.pdf (121.0 KB)


JeffPrenevost :us: (BOB member since 2010-10-09)

Is there a specific reason/motivation for wanting to remove the development local repositories? I personally find them very useful and wouldn’t want to see them removed, but I remember some discussions with the Max Attention team about making the DEV/TEST/PROD promotion process easier, and this resulted in the idea of removing local repositories.


Nemesis :australia: (BOB member since 2004-06-09)

No, I still want local repositories. Sometimes I write quick-n-dirty jobs that have no business being in a central repository.


eganjp :us: (BOB member since 2007-09-12)

That’s a good one, Jim.

No particular reason. I don’t like having to create so many repositories, one per developer, and copying objects back and forth when it isn’t needed is another overhead. But there’s no particular reason that local repos have to be removed. Just checking…


Werner Daehn :de: (BOB member since 2004-12-17)

At least, centralized control of datastores, system configurations, and substitution parameters might be nice. Local control over that set of objects frequently leads to a mess.

I think an architecture in which four people can easily have four uncoordinated, divergent copies of DF_DIM_CUST, four versions of the DIM_CUST table, etc. is questionable. You can avoid integration hell under a tight team protocol of check-out, check-in, etc., but the current architecture encourages divergence with its focus on local repositories. If, to make a change to DF_DIM_CUST, Bob has to open it from the central code base, and anybody else attempting to do so while he’s got it open has to do so in read-only mode (as though it were a Word document on a file share), with no such thing as a local repo, and no ability to “save local” in any way – that, in many team environments, would be a very good thing.


JeffPrenevost :us: (BOB member since 2010-10-09)

hi

I think I said this already in previous posts, but I really don’t like the use of databases as a place where code is written and maintained, and would much prefer a file-based approach.

I have a background in Java, and version control, code integration, promotion, etc. are much easier there, especially when you already have tools such as Ant, Maven, Subversion, etc. doing all this for you. If SDS were file-based then all these tools would be available to exploit – no need to reinvent the wheel.

The other advantage of being able to use file-based repos is that you can package all the other components of the project in the same repo, such as release notes, deployment guides and, hopefully in future, universes and reports if BusObj just used Subversion without having to wrap it in LCM.

As well as this, the other main advantages of file-based code are:

offline working
You may not be able to execute a job if you’re disconnected from the job server or source/target databases, but you can still do some basic work: code review, adding comments, minor edits, etc. I can also easily load an ATL file without having to worry about it overwriting content in my local repo, as I can just save it to another folder on my desktop.

offshore/remote development
When working with offshore teams, they cannot use a local install of SDS, as the latency between the offshore desktop and the onshore RDBMS server is too great – I’ve seen logon taking 2 minutes, retrieving the list of objects taking 5-10 minutes, etc. A workaround is to build a Citrix server hosting an SDS Designer install, or to replicate all databases offshore – again costly. If file-based, then code is written to the local file system and can then be submitted to a central server for execution.

I know this isn’t answering your question, but with file-based code your “local” repository is just the local hard disk of the developer. You still need a centralised location for integrated code, but that is then just any version control tool.
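Just to sketch what I mean, assuming each object were exported as its own .atl file into a working copy (the paths and the choice of Subversion are purely illustrative), committing to the mainline becomes an ordinary version-control operation:

```python
# Hypothetical sketch: if each DS object lived as its own .atl file on disk,
# "committing to the mainline" would just be a version-control operation.
# The paths and the use of Subversion here are assumptions for illustration.
import subprocess
from pathlib import Path

WORKING_COPY = Path(r"C:\ds_project")   # checked-out copy of the shared repo

def commit_changed_objects(message: str) -> None:
    """Add any new ATL files and commit the working copy in one step."""
    atl_files = [str(p) for p in WORKING_COPY.rglob("*.atl")]
    if atl_files:
        # --force lets svn skip files that are already under version control
        subprocess.run(["svn", "add", "--force", *atl_files], check=True)
    subprocess.run(["svn", "commit", "-m", message, str(WORKING_COPY)], check=True)

if __name__ == "__main__":
    commit_changed_objects("Daily commit of changed dataflows")
```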

AL


agulland :uk: (BOB member since 2004-03-17)

I understand the idea and goal. But how would the jobserver and engine execute something off your local files?


Werner Daehn :de: (BOB member since 2004-12-17)

And that right there is the Kiss of Death for using files for a repository.

A file would work well if it was on your local drive but it has to be accessible to the job server which may be on a different machine. Now you’re manipulating a potentially large file across the network. (Yes, I know that development can be done without connecting to a job server.)

No thank you, I’ll keep my repository in a database.

In my opinion, SAP should leverage the excellent desktop database from Sybase called SQL Anywhere for situations where a remote repository is to be used.


eganjp :us: (BOB member since 2007-09-12)

Ah that would be the technical challenge for SAP dev!

We know that the Designer can read log files generated on the job server, so one option would be something similar in reverse.

Or when the developer executes a job Designer submits the ATL (for that job and any dependencies) to the job server.

Or a local job server could run on the developer’s desktop.

Yes, it would be quite a big change, but I believe it would be worth it long term – to have a single repo for the full BI project containing everything: ETL, reports, universes, data models, release notes, other documentation, etc.

al


agulland :uk: (BOB member since 2004-03-17)

I was just thinking about this for a few minutes today…

I believe that the existing setup holds up very well, except for the CHECK-IN/CHECK-OUT process…

–> I have certain ugly situations where I have to aggressively solve the problem… In which case, as Jim said, I can’t write those funky little jobs in CENTRAL, and for that reason I need LOCAL.

–> Next, JeffPrenevost mentioned centralized datastores… Yes, I like it, but again, as a PROD issue solver, I need the right to create my own datastore, try loading something, and then delete it!

Maybe I can think of a collaborative approach: local repo development could be dropped in certain cases, with local repo rights given only to so-called firefighters, maybe? Having said that, it would create a mess around the working piece of code…

So the best way is to maintain your piece of code yourself. I will review it and take it up!
If code is created directly in CENTRAL, and a job is developed by, say, a developer fairly new to the team who doesn’t know the exact standards, all those non-standard jobs will sit there as one piece of crap. I had a code reviewer who would question the existence of every query transform… what would he do then…

So I vote against dropping the LOCAL repo concept!


ganeshxp :us: (BOB member since 2008-07-17)

I agree with ErikR - Keep local repo and central.

But, please… please… improve the functioning of Central with labelling, moving labels, branching, etc., like other versioning products. I’m using BODS 3.x and labelling is a pain for us, especially in a large deployment: after an initial label of all objects, if I then turn around and make a minor change to one of the objects, I cannot re-use the same label. ???
I should be able to re-use the same label on the latest version of the object.


chiha (BOB member since 2011-06-10)

Another welcome feature would be the ability to pull the latest version from the central repository using a command line option, for the purposes of building an automated Continuous Integration solution. For a local repo we have the al_engine command, but strangely this doesn’t exist for a central repository. This makes it impossible to script a Continuous Integration solution which automatically runs tests against the latest version of the central repository every night.

It seems a bit odd that one has to move away from the central repository, i.e. to a local repository in order to perform an export.

Please correct me if I have overlooked some feature, but from what I have read, and having talked to my BODS colleagues, the only way to get the latest version from the central repository is via the Data Integrator tool using manual steps.
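To make that concrete, here is a sketch of the kind of nightly driver we would like to be able to run. The al_engine and job-launch command lines are placeholders that would have to come from your version’s documentation, and since nothing comparable seems to exist for a central repository, this assumes exporting from an intermediate local repo.

```python
# Hypothetical sketch of a nightly CI driver. The command lines below are
# placeholders: the actual al_engine flags and job-launch syntax must be taken
# from the documentation for your DS version, and there is no supported option
# that pulls the latest version straight from a central repository.
import subprocess
import sys

# Placeholder command lines -- adjust to your environment and DS version.
EXPORT_FROM_LOCAL_REPO = [
    "al_engine", "<repo-connection-flags>", "<export-flags>", "nightly_build.atl"
]
RUN_TEST_JOB = ["<job-launch-command>", "JOB_TEST_DIM_CUST"]

def run(cmd: list[str]) -> None:
    """Run one step of the nightly build and stop on the first failure."""
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(f"CI step failed: {' '.join(cmd)}")

if __name__ == "__main__":
    run(EXPORT_FROM_LOCAL_REPO)   # export the code we are about to test
    run(RUN_TEST_JOB)             # execute the self-testing job(s)
```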


gje306 (BOB member since 2013-01-28)

OK, you’re wrong. :slight_smile: There is another way but it is not one that most people would care to explore. It is possible to create your own ATL file by directly reading the repository tables. This requires an advanced level of programming as well as a rather detailed knowledge of the repository tables.
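Very roughly, the idea would look something like the sketch below, with the caveat that the table and column names (AL_LANG, AL_LANGTEXT, etc.) are from memory and must be verified against your repository version, and the DSN is a placeholder:

```python
# Very rough sketch of the "read the repository tables yourself" approach.
# Table and column names (AL_LANG, AL_LANGTEXT, OBJECT_KEY, ...) are assumptions
# to be verified against your repository version; the DSN is a placeholder.
import pyodbc  # assumes an ODBC driver for the repository database

SQL_LATEST_VERSION = """
SELECT MAX(VERSION) FROM AL_LANG WHERE NAME = ?
"""
SQL_OBJECT_TEXT = """
SELECT TEXT_VALUE FROM AL_LANGTEXT
WHERE PARENT_OBJID = (SELECT OBJECT_KEY FROM AL_LANG WHERE NAME = ? AND VERSION = ?)
ORDER BY SEQNUM
"""

def export_object_atl(conn, object_name: str) -> str:
    """Stitch the stored language text of the latest object version into ATL."""
    cur = conn.cursor()
    version = cur.execute(SQL_LATEST_VERSION, object_name).fetchval()
    rows = cur.execute(SQL_OBJECT_TEXT, object_name, version).fetchall()
    return "".join(row.TEXT_VALUE for row in rows)

if __name__ == "__main__":
    with pyodbc.connect("DSN=DS_CENTRAL_REPO") as conn:
        print(export_object_atl(conn, "DF_DIM_CUST"))
```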


eganjp :us: (BOB member since 2007-09-12)

That doesn’t seem challenging enough to me, why not go down a level and reverse engineer the underlying db binary representation :wink:


gje306 (BOB member since 2013-01-28)

How about al_engine? Can this be relied upon to export a like-for-like .atl or .xml file as compared to exporting it through Data Integrator?


gje306 (BOB member since 2013-01-28)

Looks like you answered your own question here: Exporting Directly from Central Repository using AL_ENGINE


eganjp :us: (BOB member since 2007-09-12)

In my view, you are approaching this from the wrong angle. It isn’t a question of local vs. central; it’s a question of what functionality is needed.

The way I see it is: if you developed the central repository into a mature version control tool, you could do everything in the central repo without sacrificing any functionality.

Take this scenario:

I create dataflow X, which reads from table A, connects to query Q, and writes to table B.

If I create something, it should be implicitly checked out to me.

Now say you wish to write a dataflow that uses table B.

When you try to add table B to your flow, Data Services should warn you that it is checked out to me and then give you the option of branching, or risking accessing my copy in a read-only fashion.

Assuming I branch, there is no danger of your change breaking my flow. BUT the underlying table in the database will still be wrong for one of us, so there is no silver bullet.

The question is: who owns the checkout? Is it the user, or the project? For example, I may be working on a bug fix for a flow in one project, but also working on a future enhancement in another. Currently I’d need to have two separate repositories (and two Designer sessions) to do this. But if you implemented the version control appropriately, I could simply have two projects within the Designer.


Leigh Kennedy :australia: (BOB member since 2012-01-17)