We are just about to set up a highly available environment for e6 on AIX.
There is a backup system which starts up if the production server crashes. This backup server shares the filesystem (BO) with the production server; the filesystem is attached via NAS.
When we did a first test this weekend, we had a problem with the configuration: the scripts “wstart” and “wstop” (and others) generate the path “…/e6/nodes/HOSTNAME/…”, and the backup host has a different hostname than the production server.
What could I do to solve the problem?
I tried to make a link with the new hostname which points to “…/nodes/HOSTNAME/”, but that didn't work.
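Concretely, the link I tried was something like this (the installation root and hostnames below are only placeholders, not our real values):

    # Placeholder values -- the real shared installation root and hostnames differ.
    BO_ROOT=/shared/bo/e6      # hypothetical root of the shared BO installation
    PROD_HOST=prodhost         # hostname of the production server (placeholder)
    BACKUP_HOST=backuphost     # hostname of the backup server (placeholder)

    # Make the backup host's node directory resolve to the production node directory,
    # so wstart/wstop find the expected .../nodes/<hostname>/ path on either machine.
    ln -s "$BO_ROOT/nodes/$PROD_HOST" "$BO_ROOT/nodes/$BACKUP_HOST"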
You might be better off using a load balancer, such as one from Cisco, and running two WebI servers as Cluster Managers, with the load balancer redirecting traffic between them.
With two WebI servers running as Cluster Managers, you still need NAS to share user documents, configuration files and, if possible, the cache.
Did you succeed?
If you read the deployment guide, it tells you how to do this.
With the current architecture, you'll have to use a third-party switch of some sort; Cisco Local Director is the most commonly used. That's the only way to get automated failover. If you want manual failover, you can do it with clever DNS work and a cold-swap box.
In any event, you need to configure Shared Storage, so that files are not stored directly on the WebI server.
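As a rough sketch only (the NAS export and mount point below are invented, and the exact WebI setting that points the cluster at this location is described in the deployment guide), the shared storage would be mounted identically on both WebI servers, for example:

    # Hypothetical NAS export and mount point -- adjust to your environment.
    # Run the same mount on both WebI servers so they see identical storage.
    mkdir -p /bo_shared
    mount -o rw,bg,hard,intr nashost:/export/bo_shared /bo_shared

    # Then point the cluster's shared/personal storage at /bo_shared in the
    # WebI configuration (the deployment guide documents the exact setting).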
The only thing we were still wondering about is whether it is necessary to install BO on the second machine, or whether this can be done by mirroring the existing installation.
We already tried mirroring, which did not work because the two machines have different hostnames…
We now have two installations with shared storage and linked directories for the Apache/Tomcat resource files (roughly as sketched below).
This is working, but it is not really ideal. I'm currently preparing the migration to 6.5.1; to install on the backup installation we have to switch over to it in a separate service window.
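Roughly, the resource-file linking works like this (every path below is a placeholder; the real directories depend on your installation):

    # Placeholders -- substitute the real Apache/Tomcat resource directories.
    SHARED=/bo_shared/webapp_resources   # hypothetical location on the shared mount
    LOCAL=/usr/local/bo/e6/webapp        # hypothetical local resource directory

    # Copy the local resources to shared storage once, then link both servers to it.
    [ -d "$SHARED" ] || cp -Rp "$LOCAL" "$SHARED"
    mv "$LOCAL" "${LOCAL}.local.bak"
    ln -s "$SHARED" "$LOCAL"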
Hello,
We plan to use F5 as an IP redirector and have two WebI clusters on Sun Solaris servers which each access the same repository, so it sounds like we are pursuing a similar approach to the one you have implemented. However, we cannot use Unix clustering to provide the Shared Storage. Is there any way to implement this approach either without Shared Storage, or with it provided in another way? We would consider disabling the ability for users to save to Personal Storage if that were the only feature that necessitates Shared Storage, but it sounds like that is not the case. Thank you.
This is probably an obvious option to consider, but I’ll offer it anyway: Did you try doing a WebI install by using “localhost” or “127.0.0.1” (rather than the hostname)?
What about synchronization of the storage directory using a regularly scheduled script? I’ve used that approach before with WebI and it works fine (on Windows, not Unix though).
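On Unix, a minimal version of that kind of job might look like the sketch below (assuming rsync is available; the paths and hostname are placeholders, and since I only did this on Windows, treat the Unix variant as untested):

    #!/bin/sh
    # sync_bo_storage.sh -- periodic sync of the storage directory to the standby.
    # All paths and the hostname below are placeholders; adjust to your setup.
    SRC=/usr/local/bo/e6/storage/              # storage directory on the active server
    DST=backuphost:/usr/local/bo/e6/storage/   # same directory on the standby server

    # Mirror the directory; --delete keeps the two copies identical.
    rsync -a --delete "$SRC" "$DST"

    # Example crontab entry to run it every 15 minutes (schedule is arbitrary):
    # */15 * * * * /usr/local/bin/sync_bo_storage.sh >/dev/null 2>&1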
Thanks Chris, I thought I’d seen that suggestion somewhere. So is the main content of the shared storage whatever users have saved to ‘personal storage’? If so, periodic copying should be adequate. I saw mention of a few other things in the deployment guide and wondered if these had to always be totally consistent between the two machines.
I haven't dug too deeply into 6.5.1 yet, but I believe so - it is a combination of BO binary files (such as WebI documents) and configuration text files. There are system procedures that interact between the WebI server, BLOBs in the database, and personal storage (i.e., shared storage). It would be best to understand them more deeply before setting up a synchronization script. For example, a user may send a report to another user, but the report may only exist in the repository as a BLOB until the recipient opens it - only then is it copied to disk. There's a feature in Supervisor that allows deletion of “unreceived” inbox BLOBs - all files not yet copied to disk will be deleted.
BO has done a pretty good job with their documentation though and I’m sure you can find the details you need in their PDFs.