Wednesday, February 17, 2010

Migrating WebSphere MQ queue manager clusters

MQ Migration
========
Migrating queue managers is generally a simple process, because WebSphere MQ is designed to automatically migrate objects and messages, and support mixed version clusters. However, when planning the migration of a cluster, you need to consider a number of issues, which are described below.

Forward migration involves upgrading an existing queue manager to a later version and is supported on all platforms. You may wish to forward migrate in order to take advantage of new features or because the old version is nearing its end-of-service date.
Testing
-------
It is important to test any system change in a test or QA environment before rolling it out in production, especially when migrating software from one version to another. Ideally, an identical migration plan would be executed in both test and production to maximise the chance of finding potential problems in test rather than production. In practice, test and production environments are unlikely to be architected or configured identically, or to carry the same workloads, so the migration steps carried out in test are unlikely to match exactly those carried out in production. Even when the plans and environments are closely aligned, problems can still emerge when the production cluster queue managers are migrated.

Plan
----
When creating the migration plan, you need to consider general queue manager migration issues, clustering specifics, the wider system architecture, and change control policies. Document and test the plan before migrating production queue managers. Here is an example of a basic migration plan for a cluster queue manager; a scripted sketch of the repeatable object checks in steps 2, 3, 8 and 10 follows the list:

1. Suspend queue manager from the cluster.
* Issue SUSPEND QMGR CLUSTER(<cluster name>).
* Monitor traffic to the suspended queue manager. The cluster workload algorithm can choose a suspended queue manager if there are no other valid destinations available or an application has affinity with a particular queue manager.
2. Save a record of all cluster objects known by this queue manager. This data will be used after migration to check that objects have been migrated successfully.
* Issue DISPLAY CLUSQMGR(*) to view cluster queue managers.
* Issue DISPLAY QC(*) to view cluster queues.
3. Save a record of the full repositories view of the cluster objects owned by this queue manager. This data will be used after migration to check that objects have been migrated successfully.
* Issue DISPLAY CLUSQMGR(<qmgr name>) on the full repositories.
* Issue DISPLAY QC(*) WHERE(CLUSQMGR EQ <qmgr name>) on the full repositories.
4. Stop queue manager.
5. Take a backup of the queue manager.
6. Install the new version of WebSphere MQ.
7. Restart queue manager.
8. Ensure that all cluster objects have been migrated successfully.
* Issue DISPLAY CLUSQMGR(*) to view cluster queue managers and check output against the data saved before migration.
* Issue DISPLAY QC(*) to view cluster queues and check output against the data saved before migration.
9. Ensure that the queue manager is communicating with the full repositories correctly. Check that cluster channels to full repositories can start.
10. Ensure that the full repositories still know about the migrated cluster queue manager and its cluster queues.
* Issue DISPLAY CLUSQMGR(<qmgr name>) on the full repositories and check output against the data saved before migration.
* Issue DISPLAY QC(*) WHERE(CLUSQMGR EQ <qmgr name>) on the full repositories and check output against the data saved before migration.
11. Test that applications on other queue managers can put messages to the migrated cluster queue manager’s queues.
12. Test that applications on the migrated queue manager can put messages to the other cluster queue managers' queues.
13. Resume the queue manager.
* Issue RESUME QMGR CLUSTER(<cluster name>).
14. Closely monitor the queue manager and applications in the cluster for a period of time.
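
The snapshot-and-compare checks in steps 2, 3, 8 and 10 are good candidates for scripting, so that exactly the same checks run in test and in production. The following is a minimal sketch, not a supported tool: it assumes Python is available on a machine where the WebSphere MQ control commands are installed, and the queue manager name, snapshot directory, and helper function names are illustrative assumptions. It simply pipes MQSC commands into runmqsc and saves the output for later comparison.

    # Minimal sketch: capture and compare the cluster object view around a migration.
    # Assumes runmqsc is on the PATH; QM1 and the snapshot directory are illustrative.
    import subprocess
    from pathlib import Path

    SNAPSHOT_DIR = Path("cluster-snapshots")

    def run_mqsc(qmgr, command):
        """Pipe one MQSC command into runmqsc and return its output as text."""
        result = subprocess.run(
            ["runmqsc", qmgr],
            input=command + "\n",
            capture_output=True,
            text=True,
            check=False,   # MQSC errors appear in the output; inspect it rather than the return code alone
        )
        return result.stdout

    def save_snapshot(qmgr, label, commands):
        """Save the output of each MQSC command to a file named after the label."""
        SNAPSHOT_DIR.mkdir(exist_ok=True)
        path = SNAPSHOT_DIR / f"{qmgr}-{label}.txt"
        with path.open("w") as f:
            for cmd in commands:
                f.write(f"=== {cmd} ===\n")
                f.write(run_mqsc(qmgr, cmd))
        return path

    def snapshots_match(before, after):
        """Crude equality check between the pre- and post-migration views."""
        return before.read_text() == after.read_text()

    if __name__ == "__main__":
        qmgr = "QM1"   # hypothetical queue manager being migrated
        checks = [
            "DISPLAY CLUSQMGR(*)",   # cluster queue managers known to this queue manager
            "DISPLAY QCLUSTER(*)",   # cluster queues known to this queue manager
        ]
        before = save_snapshot(qmgr, "before", checks)
        # ... suspend, stop, back up, install the new version, restart ...
        after = save_snapshot(qmgr, "after", checks)
        if not snapshots_match(before, after):
            print("Cluster object view changed after migration - investigate before resuming.")

Note that the raw DISPLAY output includes attributes (date and time stamps, status fields) that legitimately change across a migration, so a real script would normally extract and compare only the object names.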

Backout plan
------------
A backout plan should be documented before migrating. It should detail what constitutes a successful migration, the conditions that trigger the backout procedure, and the backout procedure itself. The procedure could involve removing or suspending the queue manager from the cluster, migrating the queue manager back to the previous version, or keeping it offline until an external issue is resolved.
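
Where the backout decision is to take the queue manager out of service, the first actions are usually to suspend it from the cluster again and then end it until the problem is understood. A minimal sketch of that path is shown below, using the same runmqsc-from-Python approach as the earlier script; the queue manager and cluster names are hypothetical.

    # Minimal backout sketch: suspend the queue manager from the cluster, then end it.
    # QM1 and DEMO.CLUSTER are hypothetical names.
    import subprocess

    def backout_suspend_and_stop(qmgr, cluster):
        # Take the queue manager out of cluster workload balancing first.
        subprocess.run(
            ["runmqsc", qmgr],
            input=f"SUSPEND QMGR CLUSTER({cluster})\n",
            text=True,
            check=True,
        )
        # Then end the queue manager immediately (current MQI calls are allowed to
        # complete) and leave it offline until the issue is resolved or the queue
        # manager is backwards migrated.
        subprocess.run(["endmqm", "-i", qmgr], check=True)

    if __name__ == "__main__":
        backout_suspend_and_stop("QM1", "DEMO.CLUSTER")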
