Syncpoints in MQSeries for OS/2 Warp, WebSphere MQ for Windows, WebSphere MQ for iSeries, and WebSphere MQ on UNIX systems

Syncpoint support operates on two types of units of work: local and global.

A local unit of work is one in which the only resources updated are those of the WebSphere MQ queue manager. Here syncpoint coordination is provided by the queue manager itself using a single-phase commit procedure.

A global unit of work is one in which resources belonging to other resource managers, such as databases, are also updated. WebSphere MQ can coordinate such units of work itself. They can also be coordinated by an external commitment controller such as another transaction manager or the OS/400 commitment controller.

For full integrity, a two-phase commit procedure must be used. Two-phase commit can be provided by XA-compliant transaction managers and databases such as IBM's TXSeries and UDB and also by the OS/400 commitment controller. MQSeries or WebSphere MQ Version 5 products (except WebSphere MQ for iSeries and WebSphere MQ for z/OS) can coordinate global units of work using a two-phase commit process. WebSphere MQ for iSeries can act as a resource manager for global units of work within a WebSphere Application Server environment, but cannot act as a transaction manager.

Local units of work

Units of work that involve only the queue manager are called local units of work. Syncpoint coordination is provided by the queue manager itself (internal coordination) using a single-phase commit process.

To start a local unit of work, the application issues MQGET, MQPUT, or MQPUT1 requests specifying the appropriate syncpoint option. The unit of work is committed using MQCMIT or rolled back using MQBACK. However, the unit of work also ends when the connection between the application and the queue manager is broken, whether intentionally or unintentionally.
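
The following sketch, loosely in the style of the product C samples, puts a single message inside a local unit of work. The queue manager name QM1 and queue name TEST.QUEUE are illustrative only, and most error handling is omitted for brevity.

    #include <string.h>
    #include <cmqc.h>                        /* MQI definitions               */

    int main(void)
    {
      MQCHAR48 qmName  = "QM1";              /* illustrative queue manager    */
      MQHCONN  Hcon;                         /* connection handle             */
      MQHOBJ   Hobj;                         /* object handle                 */
      MQOD     od  = {MQOD_DEFAULT};         /* object descriptor             */
      MQMD     md  = {MQMD_DEFAULT};         /* message descriptor            */
      MQPMO    pmo = {MQPMO_DEFAULT};        /* put message options           */
      MQLONG   CompCode, Reason;
      char     buffer[] = "hello";

      MQCONN(qmName, &Hcon, &CompCode, &Reason);

      strncpy(od.ObjectName, "TEST.QUEUE", MQ_Q_NAME_LENGTH);
      MQOPEN(Hcon, &od, MQOO_OUTPUT | MQOO_FAIL_IF_QUIESCING,
             &Hobj, &CompCode, &Reason);

      pmo.Options |= MQPMO_SYNCPOINT;        /* the put starts a local unit of work */
      MQPUT(Hcon, Hobj, &md, &pmo, (MQLONG)strlen(buffer), buffer,
            &CompCode, &Reason);

      if (CompCode == MQCC_OK)
        MQCMIT(Hcon, &CompCode, &Reason);    /* commit: the message becomes visible */
      else
        MQBACK(Hcon, &CompCode, &Reason);    /* back out the uncommitted put        */

      MQCLOSE(Hcon, &Hobj, MQCO_NONE, &CompCode, &Reason);
      MQDISC(&Hcon, &CompCode, &Reason);
      return 0;
    }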

If an application disconnects (MQDISC) from a queue manager while a global unit of work coordinated by WebSphere MQ is still active, an attempt is made to commit the unit of work. If, however, the application terminates without disconnecting, the unit of work is rolled back as the application is deemed to have terminated abnormally.

Global units of work

Use global units of work when you also need to include updates to resources belonging to other resource managers. Here the coordination may be internal or external to the queue manager:

Internal syncpoint coordination

Queue manager coordination of global units of work is not supported by WebSphere MQ for iSeries or WebSphere MQ for z/OS. It is not supported in a WebSphere MQ client environment.

Here, the coordination is performed by WebSphere MQ. To start a global unit of work, the application issues the MQBEGIN call.

As input to the MQBEGIN call, you must supply the connection handle (Hconn), which is returned by the MQCONN or MQCONNX call. This handle represents the connection to the WebSphere MQ queue manager.

Again, the application issues MQGET, MQPUT, or MQPUT1 requests specifying the appropriate syncpoint option. This means that MQBEGIN can be used to initiate a global unit of work that updates local resources, resources belonging to other resource managers, or both. Updates made to resources belonging to other resource managers are made using the API of that resource manager. However, it is not possible to use the MQI to update queues that belong to other queue managers. MQCMIT or MQBACK must be issued before starting further units of work (local or global).

The global unit of work is committed using MQCMIT; this initiates a two-phase commit of all the resource managers involved in the unit of work. In the first phase, the resource managers (for example, XA-compliant database managers such as DB2(R), Oracle, and Sybase) are all asked to prepare to commit. Only if all are prepared are they asked to commit in the second phase; if any resource manager signals that it cannot commit, each is asked to back out instead. Alternatively, MQBACK can be used to roll back the updates of all the resource managers.
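
As an illustration, the sketch below shows the shape of such a unit of work. The connection and object handles are assumed to have been obtained with MQCONN (or MQCONNX) and MQOPEN as in the earlier example, and the database update appears only as a comment because it is made through the database manager's own API.

    #include <string.h>
    #include <cmqc.h>

    /* Sketch only: Hcon comes from MQCONN or MQCONNX, Hobj from MQOPEN      */
    void global_unit_of_work(MQHCONN Hcon, MQHOBJ Hobj)
    {
      MQBO    bo  = {MQBO_DEFAULT};          /* begin options                 */
      MQMD    md  = {MQMD_DEFAULT};
      MQPMO   pmo = {MQPMO_DEFAULT};
      MQLONG  CompCode, Reason;
      char    buffer[] = "order record";

      MQBEGIN(Hcon, &bo, &CompCode, &Reason);     /* start the global unit of work    */

      pmo.Options |= MQPMO_SYNCPOINT;             /* MQ update inside the unit of work */
      MQPUT(Hcon, Hobj, &md, &pmo, (MQLONG)strlen(buffer), buffer,
            &CompCode, &Reason);

      /* ... update the database here using its own API (for example,
         EXEC SQL INSERT ...); that update joins the same unit of work ...   */

      if (CompCode == MQCC_OK)
        MQCMIT(Hcon, &CompCode, &Reason);         /* drives the two-phase commit       */
      else
        MQBACK(Hcon, &CompCode, &Reason);         /* all resource managers back out    */
    }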

If an application disconnects (MQDISC) while a global unit of work is still active, the unit of work is committed. If, however, the application terminates without disconnecting, the unit of work is rolled back as the application is deemed to have terminated abnormally.

The output from MQBEGIN is a completion code and a reason code.

When MQBEGIN is used to start a global unit of work, all the external resource managers that have been configured with the queue manager are included. However, the call starts a unit of work but completes with a warning if one or more of those resource managers cannot be contacted. In this case, the unit of work should include updates to only those resource managers that were available when the unit of work was started.
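
A sketch of testing for that warning follows; MQRC_PARTICIPANT_NOT_AVAILABLE is one of the reason codes that can accompany it (see the MQBEGIN description in the WebSphere MQ Application Programming Reference for the complete list).

    #include <cmqc.h>

    /* Sketch: returns 1 if the global unit of work started cleanly, 0 if it
       started with a warning, and -1 if it could not be started at all.     */
    int begin_global_uow(MQHCONN Hcon)
    {
      MQBO   bo = {MQBO_DEFAULT};
      MQLONG CompCode, Reason;

      MQBEGIN(Hcon, &bo, &CompCode, &Reason);

      if (CompCode == MQCC_FAILED)
        return -1;
      if (CompCode == MQCC_WARNING)
        return 0;     /* for example MQRC_PARTICIPANT_NOT_AVAILABLE: limit the
                         unit of work to the resource managers that were
                         available when it started                           */
      return 1;
    }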

If one of the resource managers is unable to commit its updates, all of the resource managers are instructed to roll back their updates, and MQCMIT completes with a warning. In unusual circumstances (typically, operator intervention), an MQCMIT call can fail because some resource managers commit their updates while others roll them back; the unit of work is then deemed to have completed with a 'mixed' outcome. Such occurrences are recorded in the error log of the queue manager so that remedial action can be taken.

An MQCMIT of a global unit of work succeeds if all of the resource managers involved commit their updates.
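
The sketch below shows one way an application might distinguish these outcomes after MQCMIT; MQRC_BACKED_OUT and MQRC_OUTCOME_MIXED are the reason codes associated with the backed-out and mixed cases.

    #include <cmqc.h>

    /* Sketch of interpreting the outcome of committing a global unit of work */
    void commit_global_uow(MQHCONN Hcon)
    {
      MQLONG CompCode, Reason;

      MQCMIT(Hcon, &CompCode, &Reason);

      if (CompCode == MQCC_OK)
      {
        /* every resource manager committed its updates                       */
      }
      else if (Reason == MQRC_BACKED_OUT)
      {
        /* a resource manager could not commit, so all updates were backed out */
      }
      else if (Reason == MQRC_OUTCOME_MIXED)
      {
        /* some resource managers committed and some backed out; examine the
           queue manager error log and take remedial action                   */
      }
    }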

For a description of the MQBEGIN call, see WebSphere MQ Application Programming Reference.

External syncpoint coordination

This occurs when a syncpoint coordinator other than WebSphere MQ has been selected; for example, CICS, Encina, or Tuxedo. In this situation, MQSeries for OS/2 Warp, WebSphere MQ on UNIX systems, and WebSphere MQ for Windows register their interest in the outcome of the unit of work with the syncpoint coordinator so that they can commit or roll back any uncommitted get or put operations as required. The external syncpoint coordinator determines whether one- or two-phase commitment protocols are provided.

When an external coordinator is used, MQCMIT, MQBACK, and MQBEGIN cannot be issued; calls to these functions fail with the reason code MQRC_ENVIRONMENT_ERROR.

The way in which an externally coordinated unit of work is started depends on the programming interface provided by the syncpoint coordinator; an explicit call may or may not be required to start it. If an explicit call is required and you issue an MQPUT call specifying the MQPMO_SYNCPOINT option when a unit of work has not been started, the reason code MQRC_SYNCPOINT_NOT_AVAILABLE is returned.
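
For example, an application running under external coordination might check for this condition as in the following sketch (handles and names are illustrative):

    #include <string.h>
    #include <cmqc.h>

    /* Sketch: a put issued inside what the application expects to be an
       externally coordinated unit of work                                   */
    void put_in_external_uow(MQHCONN Hcon, MQHOBJ Hobj, char *text)
    {
      MQMD   md  = {MQMD_DEFAULT};
      MQPMO  pmo = {MQPMO_DEFAULT};
      MQLONG CompCode, Reason;

      pmo.Options |= MQPMO_SYNCPOINT;
      MQPUT(Hcon, Hobj, &md, &pmo, (MQLONG)strlen(text), text,
            &CompCode, &Reason);

      if (Reason == MQRC_SYNCPOINT_NOT_AVAILABLE)
      {
        /* no unit of work had been started through the external syncpoint
           coordinator before the put was issued                             */
      }
    }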

The scope of the unit of work is determined by the syncpoint coordinator. The state of the connection between the application and the queue manager affects only the success or failure of the MQI calls that the application issues; it does not affect the state of the unit of work. It is possible, for example, for an application to disconnect from and reconnect to a queue manager during an active unit of work and to perform further MQGET and MQPUT operations inside the same unit of work. This is known as a pending disconnect.

Interfaces to external syncpoint managers

MQSeries for OS/2 Warp, WebSphere MQ on UNIX systems, WebSphere MQ for iSeries, MQSeries for Compaq OpenVMS Alpha, and WebSphere MQ for Windows support coordination of transactions by external syncpoint managers which use the X/Open XA interface. This support is available only on server configurations. The interface is not available to client applications.

Some XA transaction managers (though not CICS on Open Systems or Encina) require each XA resource manager to supply its name; this is the string called name in the XA switch structure. The resource manager for WebSphere MQ on UNIX systems is named "MQSeries_XA_RMI"; on OS/400 it is named "MQSeries XA RMI". For further details on the XA interface, refer to the CAE Specification Distributed Transaction Processing: The XA Specification, published by The Open Group.

In an XA configuration, WebSphere MQ on UNIX systems, MQSeries for Compaq OpenVMS Alpha, MQSeries for OS/2 Warp, and WebSphere MQ for Windows fulfil the role of an XA Resource Manager. An XA syncpoint coordinator can manage a set of XA Resource Managers and synchronize the commit or backout of transactions across all of them. This is how it works for a statically registered resource manager:

  1. An application notifies the syncpoint coordinator that it wishes to start a transaction.
  2. The syncpoint coordinator issues a call to any resource managers that it knows of, to notify them of the current transaction.
  3. The application issues calls to update the resources managed by the resource managers associated with the current transaction.
  4. The application requests that the syncpoint coordinator either commit or roll back the transaction.
  5. The syncpoint coordinator issues calls to each resource manager using two-phase commit protocols to complete the transaction as requested.

The XA specification requires each Resource Manager to provide a structure called an XA Switch. This structure declares the capabilities of the Resource Manager, and the functions that are to be called by the syncpoint coordinator.

There are two versions of this structure:

MQRMIXASwitch          Static XA resource management
MQRMIXASwitchDynamic   Dynamic XA resource management
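
As an illustration only, the following sketch shows the shape of the switch structure as defined in the XA specification, and the sequence of calls a coordinator might make through the static switch for one transaction branch. The xa.h header, the XID type, and the TM* flag and XA_* return code definitions are assumed to come from the XA environment, MQRMIXASwitch is resolved from the WebSphere MQ XA library, and a real coordinator also checks every return code and performs recovery.

    #include <xa.h>     /* X/Open XA definitions: struct xa_switch_t, XID,
                           TMNOFLAGS, TMSUCCESS, XA_OK, XA_RDONLY, ...       */

    /* The switch exported by the WebSphere MQ XA library for static
       registration (MQRMIXASwitchDynamic is the dynamic equivalent)         */
    extern struct xa_switch_t MQRMIXASwitch;

    /* Sketch of the coordinator-side calls for one transaction branch       */
    void coordinate_one_branch(XID *xid, int rmid)
    {
      struct xa_switch_t *sw = &MQRMIXASwitch;
      int rc;

      /* xa_info is the queue manager name; blank selects the default
         queue manager (see below)                                           */
      sw->xa_open_entry("QM1", rmid, TMNOFLAGS);

      sw->xa_start_entry(xid, rmid, TMNOFLAGS);   /* associate work with the branch */
      /* ... the application issues MQPUT/MQGET with the syncpoint option ... */
      sw->xa_end_entry(xid, rmid, TMSUCCESS);     /* end the association            */

      rc = sw->xa_prepare_entry(xid, rmid, TMNOFLAGS);   /* phase one         */
      if (rc == XA_OK)
        sw->xa_commit_entry(xid, rmid, TMNOFLAGS);       /* phase two         */
      else if (rc != XA_RDONLY)
        sw->xa_rollback_entry(xid, rmid, TMNOFLAGS);     /* could not prepare */

      sw->xa_close_entry("QM1", rmid, TMNOFLAGS);
    }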

The structure is found in the following libraries:

mqmxa.lib                    UNIX, OS/2, and Windows XA library for Static resource management
mqmenc.lib                   AIX, HP-UX, Solaris, and Windows Encina XA library for Dynamic resource management
libmqmxa.a                   UNIX systems XA library (non-threaded)
libmqmxa_r.a                 UNIX systems XA library (threaded)
LIBMQMXA service program     OS/400 systems XA library (non-threaded)
LIBMQMXA_R service program   OS/400 systems XA library (threaded)

The method used to link these libraries to an XA syncpoint coordinator is defined by the coordinator; consult the documentation provided by that coordinator to determine how to enable WebSphere MQ to cooperate with it.

The xa_info structure that is passed on any xa_open call by the syncpoint coordinator should be the name of the queue manager that is to be administered. This takes the same form as the queue manager name passed to MQCONN or MQCONNX, and may be blank if the default queue manager is to be used.

Restrictions
  1. Global units of work are not allowed with a shared Hconn (as described in Shared (thread independent) connections with MQCONNX).
  2. WebSphere MQ for iSeries does not support dynamic registration of XA resource managers.

    The only transaction manager supported is WebSphere Application Server.

  3. On OS/2, all functions declared in the XA switch are declared as _System functions.
  4. On Windows systems, all functions declared in the XA switch are declared as _cdecl functions.
  5. Only one queue manager at a time can be administered by an external syncpoint coordinator. This is because the coordinator has an effective connection to each queue manager, and is therefore subject to the rule that only one connection is allowed at a time.
  6. All applications that are run using the syncpoint coordinator can connect only to the queue manager that is administered by the coordinator because they are already effectively connected to that queue manager. They must issue MQCONN or MQCONNX to obtain a connection handle and must issue MQDISC before they exit. Alternatively, they can use the CICS user exit 15 for CICS for OS/2 V2 and V3, and CICS for Windows NT V2, or the exit UE014015 for TXSeries for Windows NT V4 and CICS on Open Systems.

The features not implemented are:

Because CICS Transaction Server V4 is 32-bit, changes are required to the source of CICS user exits. The supplied samples have been updated to work with CICS Transaction Server V4 as shown in Table 8.

Table 8. Linking MQSeries for OS/2 Warp with CICS Version 3 applications

User exit   CICS V2 source   CICS V2 dll    TS V4 source   TS V4 dll
exit 15     amqzsc52.c       faaexp15.dll   amqzsc53.c     faaex315.dll
exit 17     amqzsc72.c       faaexp17.dll   amqzsc73.c     faaex317.dll

For CICS Transaction Server V4, the supplied user exits faaex315.dll and faaex317.dll should be renamed to the standard names faaexp15.dll and faaexp17.dll.



© IBM Corporation 1993, 2002. All Rights Reserved