
Replication over IP – an introduction to the GLVM


Adding “Gee” to the AIX LVM

Antony Steel,  Belisama

The Geographic Logical Volume Manager (GLVM) has found new users as customers struggle with moving data between environments when only an IP network is available. GLVM is not new and has been a part of AIX since the heady days of AIX 5L (2005).


While most of my customers have used GLVM with PowerHA Enterprise Edition (EE), where neither the application nor the storage subsystem can replicate data over the existing infrastructure, a couple have used it standalone to replicate data to a remote site, that is, without PowerHA managing it.


Initially, only synchronous mode replication was supported, but in 2008, asynchronous mode replication was introduced (taking advantage of the AIX Logical Volume Manager (LVM) mirror pools).


GLVM is well worth looking at if you are struggling to move data from on-premises to the cloud, whether for scaling, testing or redundancy/DR reasons. This is a growing area of interest for GLVM.


So what are the advantages of GLVM?
  • It is part of the robust AIX LVM and can be used standalone or installed with PowerHA EE for management, monitoring and automated recovery. If you have AIX installation media, you have the filesets and can use it standalone;

  • It is simple to set up; and

  • It allows Logical Volumes (and file systems on top) to be replicated to a remote site over IP. It is designed for instances where neither the application nor the storage subsystem supports remote replication over the available infrastructure.


What are the downsides?
  • It is strongly recommended to use PowerHA Enterprise Edition (PowerHA EE) to manage it - while this is not really a downside, it does add to the setup. PowerHA EE controls the status and direction of the replication to ensure the integrity of the data. GLVM itself has no understanding of the infrastructure, so in standalone mode GLVM can be started on either or both sites in either mode without any checking;

  • GLVM is very reliant on network bandwidth and latency if you choose the synchronous option. Most of the critsits I have worked on for GLVM (and its elder sibling, GeoRM) were due to the business growing while forgetting that the network is usually a fixed pipe;

  • It is not very granular, for example if the remote site is unavailable for a period of time, recovery will require the replication of every modified logical partition, even if only one bit has been changed in that logical partition;

  • Many applications are latency sensitive, so it is vital to ensure that neither the distance nor the quality of the networking equipment is going to be an issue;

  • While asynchronous mode is great for smoothing the peaks in I/O by using the local cache logical volume to store writes, the cached data represents the amount that can potentially be lost in a disaster;

  • Asynchronous mode also makes the recovery process more complex as issues such as data divergence must be carefully managed.


How does it work?

At a high level, the GLVM client (Remote Physical Volume Client, or RPV Client) runs on the active server and provides a local “pseudo” physical volume, which is a local representation of the physical volume attached to the remote server. This pseudo physical volume can be treated as a regular physical volume: it can be added to a volume group and have mirror copies of logical volumes defined on it. The local RPV Client device driver works with the remote RPV Server kernel extension to take the local pseudo physical volume I/O over the network and perform it on the remote (real) physical volume. Changing the direction of replication is just a matter of changing which site runs the RPV Client and which the RPV Server.
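The RPV pair described above can be sketched with a few mkdev commands. This is an illustrative sketch only — the device class/subclass names and attributes follow IBM’s GLVM documentation, but the PVID, IP addresses and disk names are placeholders, and you should verify the attributes on your AIX level (or use the smit glvm_utils menus):

```shell
# On the REMOTE node: define an RPV server for the real disk,
# accepting I/O only from the local node's address
mkdev -c rpvserver -s rpvserver -t rpvstype \
      -a rpvs_pvid=00c123d4e5f60789 -a client_addr=10.1.1.10 \
      -a auto_online=n
# creates an rpvserver device, e.g. rpvserver0

# On the LOCAL node: define the matching RPV client (the "pseudo" disk)
mkdev -c disk -s remote_disk -t rpvclient \
      -a server_addr=10.2.1.10 -a local_addr=10.1.1.10 \
      -a io_timeout=180
# appears as a new hdisk, e.g. hdisk8

# The pseudo disk can now join the volume group and carry a mirror copy
extendvg datavg hdisk8
mklvcopy datalv 2 hdisk8
syncvg -l datalv
```

From this point on, the LVM treats hdisk8 like any other physical volume; the RPV driver handles shipping its I/O to the remote site.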


GLVM supports both synchronous and asynchronous modes. For synchronous replication, the iodone() is not returned until the write to the remote physical volume is completed. For asynchronous replication, the write is stored in a local cache logical volume and the iodone() returned. At some later stage the write is sent to the remote server and the cache is cleared when that write is completed.
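Asynchronous mode is enabled per mirror pool: a cache logical volume of type aio_cache is created in the local pool, then the remote pool is switched to async with chmp. A sketch, assuming mirror pools mp_local and mp_remote on a volume group datavg — sizes and the high-water-mark value are illustrative, so check the chmp man page on your AIX level:

```shell
# Local cache LV (type aio_cache) that absorbs writes destined
# for the remote copy
mklv -y datavg_cache -t aio_cache -p copy1=mp_local datavg 16

# Convert the remote mirror pool to asynchronous mirroring;
# -h sets the cache high-water mark
chmp -A -h 80 -m mp_remote datavg
```

Size the cache LV against your peak write bursts: once it fills, GLVM falls back to synchronous behaviour until the backlog drains.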



GLVM has the following requirements:

  • Two sites with an AIX Server/LPAR at each - preferably running an up-to-date version of OS / firmware!;

  • The AIX LVM supports up to three copies, so one site can hold two copies;

  • Attached storage, which can be anything supported by AIX, so doesn’t need to be the same across sites;

  • A network connecting the sites with sufficient bandwidth and low enough latency; and

  • Know your application and size your I/O for network planning (AIX tools or gmdsizing from the PowerHA filesets).



Useful (and necessary) features

The AIX LVM is aware that remote physical volumes are slightly slower (due to latency) and less reliable (due to possible network packet loss) than local ones. So:

  • AIX will try to read from the local logical volume first;

  • If the remote physical volume(s) are not reachable, local I/O will continue, but each modified Logical Partition (LP) will be marked stale. When the remote site recovers, each stale LP will be replicated; and

  • Some customers want to have two copies at the primary site – this is supported.
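Once the remote site recovers, the stale partitions can be inspected and resynchronised with standard LVM and GLVM tools. A sketch, assuming the geographically mirrored volume group is datavg:

```shell
# Show LVs and their LP/PP state; stale partitions appear
# in the "LPs"/"PPs" accounting per LV
lsvg -l datavg

# GLVM-specific statistics: RPV client I/O, and cache usage if async
rpvstat
gmvgstat datavg

# Resynchronise all stale partitions in the volume group
syncvg -v datavg
```

Remember the downside noted earlier: resynchronisation copies every stale logical partition in full, however small the actual change was.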




The following should be considered when planning sites and connectivity:

  • As mentioned, network sizing is easy if we are dealing with an existing application – just use iostat or the gmdsizing tool (part of the PowerHA filesets) to work out peak and average I/O. Otherwise you will need to work with the application vendor to get an accurate estimate of your I/O patterns and growth.

  • Note: with asynchronous mode, if the local cache fills up, GLVM will switch to synchronous mode until the cache is cleared;

  • Latency will be controlled by the distance between the sites and the supporting network infrastructure. Application requirements may then restrict your choice of mode (sync or async); and

  • Network redundancy is important – balance the load across two separate providers / infrastructure for greater availability. GLVM can support up to 4 networks, or redundancy can be configured using AIX (EtherChannel or Network Interface Backup).
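For an existing application, a first-pass network sizing can come straight from iostat: sample the disks that will be mirrored and take the peak write rate, since reads stay local and only writes cross the link. The disk names and intervals below are illustrative:

```shell
# 60 samples, 60 seconds apart, on the disks to be replicated;
# the Kb_wrtn and tps columns give the write load the link must carry
iostat -d hdisk1 hdisk2 60 60
```

gmdsizing, shipped with the PowerHA filesets, produces a similar report tailored to GLVM planning and is worth running over a full business cycle, not just a quiet hour.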



Note the following recommendations and limitations for geographically mirrored volume groups:

  • An important choice is whether quorum is turned on or not. While quorum “on” is better for availability, “off” is better for data integrity and is recommended for PowerHA EE. For example, in a site split, if quorum is held at the wrong site the volume group can easily be activated there;

  • Mirror pools are required if using asynchronous mirroring and optional, but recommended, if using synchronous mode. The inter-disk allocation policy for the LVs must be set to “super strict”;

  • The rootvg cannot be geographically mirrored;

  • Remember that the AIX LVM only supports three copies. Each site must have one copy and only one site can have two copies;

  • The VGs must be configured as non-concurrent or enhanced concurrent;

  • It is recommended that mirror write consistency be turned off for PowerHA-managed enhanced concurrent VGs;

  • LV scheduling policy – no need to worry about the default being parallel, as it has been modified to recognise a geographically mirrored volume group and to read from the local copy if it is available; and

  • Don’t set the geographically mirrored volume group (GMVG) to automatically activate (varyon) on startup.
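Most of these recommendations map onto single LVM commands. A sketch, again assuming datavg with mirror pools mp_local and mp_remote — verify the flags against your AIX level:

```shell
# Quorum off, and no automatic varyon at boot
chvg -Qn datavg
chvg -an datavg

# An LV with two copies, superstrict allocation (-s s),
# and one copy pinned to each mirror pool
mklv -y datalv -c 2 -s s \
     -p copy1=mp_local -p copy2=mp_remote datavg 100
```

Superstrict allocation plus mirror pools is what guarantees that each complete copy stays on one site’s disks, which is essential for a clean site-level recovery.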



  • There are some further restrictions when using GLVM as part of PowerHA – contact me if you have questions.




Please feel free to contact me if you have any further questions or would like a demonstration.
