VNX Remote/Local Protection Suite quick notes

VNX Remote Protection Suite solutions:

VNX MirrorView


MirrorView/S provides synchronous replication over short distances. Since it is a synchronous solution, the RPO (Recovery Point Objective) is zero. The data flow of MirrorView/S is:

1. Host attached to the primary VNX system initiates a write;

2. The primary VNX replicates the data to the secondary VNX system;

3. The secondary VNX acknowledges the write as complete back to the primary VNX;

4. The primary VNX acknowledges the write as complete back to the host.

It is important to understand the data flow of MirrorView/S. Because every write must make the round trip before the host is acknowledged, the RTT (Round Trip Time) between the two VNX systems should be less than or equal to 10 ms. If the RTT is high, the host will see higher response times, because each write takes longer to acknowledge.
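The four steps above add the inter-array round trip to every host write, which is why the RTT limit matters. A minimal latency model of that path (function and parameter names are illustrative, not an EMC API; the symmetric-array assumption is mine):

```python
# Minimal model of the MirrorView/S synchronous write path.
# The step ordering and 10 ms RTT ceiling come from the notes above;
# assuming the secondary array writes as fast as the primary.

def sync_write_latency_ms(local_write_ms: float, rtt_ms: float) -> float:
    """Host-observed latency for one synchronous mirrored write.

    The host is acknowledged only after the secondary VNX has
    acknowledged the replicated write, so the inter-array round
    trip is added to every host write.
    """
    remote_write_ms = local_write_ms      # assume symmetric arrays
    return local_write_ms + rtt_ms + remote_write_ms

# With RTT at the recommended 10 ms ceiling, a 1 ms local write
# becomes a 12 ms host-observed write:
print(sync_write_latency_ms(1.0, 10.0))   # 12.0
```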



MirrorView/A provides replication over long distances. It can be used for replication between VNX arrays where the RTT (Round Trip Time) is high, but the RTT should not exceed 200 ms. MirrorView Asynchronous works on a periodic update model that tracks changes on the primary side, then applies those changes to the secondary at a user-determined RPO (Recovery Point Objective) interval.

With MirrorView/A replication, writes are acknowledged back to the host as soon as the primary VNX receives them, which has essentially no impact on the production environment (whereas with MirrorView/S every write has to be acknowledged by both the primary and the secondary VNX systems).

The data flow of MirrorView/A is:

1. Host attached to the primary VNX system initiates a write;

2. The primary VNX system sends an acknowledgment to the host;

3. The primary VNX system tracks the changes and replicates the data to the secondary VNX system at the user-defined RPO interval;

4. The secondary VNX system receives the data and sends an acknowledgment back to the primary VNX.
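The steps above can be sketched as a toy model: the host is acknowledged immediately, changed blocks are tracked, and a periodic update pushes only those changes. Class and method names are illustrative; the real product tracks changes in an on-array bitmap, not Python dictionaries.

```python
# Toy model of MirrorView/A's periodic-update cycle.

class AsyncMirror:
    def __init__(self):
        self.primary = {}      # block -> data on the primary LUN
        self.secondary = {}    # block -> data on the secondary LUN
        self.dirty = set()     # blocks changed since the last update

    def host_write(self, block, data):
        self.primary[block] = data
        self.dirty.add(block)
        return "ack"           # acknowledged before any replication

    def rpo_update(self):
        """Runs once per user-defined RPO interval."""
        for block in self.dirty:
            self.secondary[block] = self.primary[block]
        self.dirty.clear()

m = AsyncMirror()
m.host_write(0, "A")
m.host_write(1, "B")
print(m.secondary)   # {} -- nothing replicated between updates
m.rpo_update()
print(m.secondary)   # {0: 'A', 1: 'B'}
```

The write between two updates is the exposure window: anything in `dirty` when a disaster strikes is lost, which is exactly what the user-defined RPO bounds.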


MirrorView/A uses SnapView Snapshot technology for data protection on the primary and secondary systems.

VNX Replicator

VNX Replicator is an IP-based replication solution that produces a read-only, point-in-time copy of a source or production file system. The VNX Replication service periodically updates this copy, keeping it consistent with the production file system.

Replicator uses internal checkpoints to ensure availability of the most recent point-in-time copy.

These internal checkpoints are based on VNX SnapSure technology.

RecoverPoint/SE Remote Protection

A Data Mover is a component that runs its own operating system; it retrieves data from a storage device and makes it available to a network client. A Virtual Data Mover (VDM) is an EMC software feature that enables the grouping of Common Internet File System (CIFS) and/or Network File System (NFS) environments and servers into virtual containers. Each VDM has access only to the file systems mounted to that VDM, providing logical isolation between physical Data Movers and other VDMs on the VNX system.


VNX Local Protection Suite solutions:

VNX SnapView creates block-based logical point-in-time views of production information using snapshots and point-in-time copies using clones. Snapshots use only a fraction of the original disk space, while clones require the same amount of disk space as the source.


The Copy On First Write (COFW) mechanism uses pointers to track whether data resides on the source LUN or in the Reserved LUN Pool. These pointers are kept in SP memory, which is volatile, and could therefore be lost if the SP fails or the LUN is trespassed. The SnapView feature designed to prevent this loss of session metadata is session persistence, which stores the pointers on the Reserved LUN(s) for the session. All sessions are automatically persistent and the user cannot turn persistence off.
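The COFW mechanism can be sketched as follows: the first write to a chunk after a session starts copies the original data to the Reserved LUN Pool, and a pointer then redirects snapshot reads there. Names are illustrative, not SnapView internals.

```python
# Sketch of the Copy On First Write (COFW) mechanism.
# self.reserved stands in for the Reserved LUN Pool; the dict keys
# act as the pointers that record which chunks were preserved.

class CofwSession:
    def __init__(self, source):
        self.source = source        # chunk -> data (the source LUN)
        self.reserved = {}          # chunk -> preserved original data

    def write(self, chunk, data):
        if chunk not in self.reserved:                 # first write since session start?
            self.reserved[chunk] = self.source[chunk]  # copy on first write only
        self.source[chunk] = data                      # then apply the new data

    def snapshot_read(self, chunk):
        # Pointer check: preserved copy if the chunk changed, else the source.
        return self.reserved.get(chunk, self.source[chunk])

s = CofwSession({0: "old"})
s.write(0, "new")
s.write(0, "newer")            # second write: no further copy needed
print(s.snapshot_read(0))      # old  -- the snapshot still sees the original
print(s.source[0])             # newer
```

Note that every first write costs an extra read and write (the copy), which is the overhead VNX Snapshots avoid with redirect-on-write.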

VNX Snapshots create block-based logical point-in-time views of production information using snapshot technology. By taking a different approach to the way new writes to production data are handled, VNX Snapshots improve overall performance and consume less allocated storage space.


The technology behind VNX Snapshots is redirect-on-write (ROW): after a snapshot is taken, new writes to the primary LUN are redirected (written) to a new location within the storage pool.
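In contrast to COFW, redirect-on-write performs no extra copy: the new data goes to a fresh pool location and only the primary LUN's pointer moves, leaving the old block in place for the snapshot. A sketch (illustrative only, not the VNX on-disk layout):

```python
# Sketch of redirect-on-write (ROW), as used by VNX Snapshots.

class RowLun:
    def __init__(self, data):
        self.pool = {0: data}          # pool location -> block data
        self.primary_ptr = 0           # the primary LUN points here
        self.snap_ptr = None           # the snapshot's pointer

    def take_snapshot(self):
        self.snap_ptr = self.primary_ptr   # snapshot shares the current block

    def write(self, data):
        new_loc = max(self.pool) + 1
        self.pool[new_loc] = data      # one write to a new location, no copy
        self.primary_ptr = new_loc     # just redirect the primary's pointer

lun = RowLun("old")
lun.take_snapshot()
lun.write("new")
print(lun.pool[lun.primary_ptr])   # new
print(lun.pool[lun.snap_ptr])      # old
```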

Auto-delete is a background process that scans the pool for eligible primary LUNs, Snapshot Mount Points, Consistency Groups, snapshots, and snapshot sets. All eligible expired snapshots are deleted before snapshots with the 'auto-delete' option enabled are processed, and expired snapshots are deleted regardless of pool auto-delete thresholds.

Auto-delete is triggered by two independent thresholds:

– consumed pool space

– consumed snapshot space.

The process stops when the threshold conditions are met, when no eligible snapshots remain to be deleted, or when it is stopped manually.
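The threshold-driven loop above can be sketched as follows. The threshold values, tuple layout, and per-snapshot space figures are made up for illustration; the real process runs inside the array against pool accounting data.

```python
# Sketch of the snapshot auto-delete loop driven by the two thresholds:
# consumed pool space and consumed snapshot space.

def auto_delete(snapshots, pool_used_pct, snap_used_pct,
                pool_threshold=85.0, snap_threshold=25.0):
    """Delete the oldest eligible snapshots until both consumed-space
    figures drop below their thresholds (or nothing is left to delete)."""
    deleted = []
    # snapshots: oldest first, each (name, eligible, pool_pct, snap_pct)
    for name, eligible, pool_pct, snap_pct in snapshots:
        if pool_used_pct < pool_threshold and snap_used_pct < snap_threshold:
            break                      # threshold conditions met -> stop
        if not eligible:
            continue                   # auto-delete not enabled on this one
        deleted.append(name)
        pool_used_pct -= pool_pct      # reclaim this snapshot's space
        snap_used_pct -= snap_pct
    return deleted

snaps = [("snap1", True, 3.0, 10.0),
         ("snap2", False, 3.0, 10.0),
         ("snap3", True, 3.0, 10.0)]
print(auto_delete(snaps, pool_used_pct=90.0, snap_used_pct=30.0))
# ['snap1', 'snap3'] -- snap2 survives because auto-delete is disabled on it
```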

VNX SnapSure creates logical point-in-time views of production file systems using snapshots. SnapSure uses only a fraction of the original disk space used by the source file system.

RecoverPoint Local Protection

RecoverPoint local replication is a synchronous product that mirrors volumes in real time between one or more arrays at a local site. RecoverPoint maintains a history journal of all changes that can be used to roll back the mirrors to any point in time.

Journal Volumes:

The journal volumes hold snapshots of data to be replicated. Each journal volume holds as many point-in-time images as its capacity allows, after which the oldest image is removed to make space for the newest.
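The "oldest image removed to make space for the newest" behavior is a bounded queue, which `collections.deque` with `maxlen` models directly. A capacity measured in images is an illustrative stand-in for the journal volume's byte capacity.

```python
# Journal volume retention modeled as a bounded queue: when the
# journal is full, appending a new image silently evicts the oldest.

from collections import deque

journal = deque(maxlen=3)          # journal capacity: 3 point-in-time images
for image in ["t1", "t2", "t3", "t4"]:
    journal.append(image)

print(list(journal))   # ['t2', 't3', 't4'] -- t1 aged out
```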

There are two types of journal volumes:

1- Replica (Copy) Journal(s)

2- Production Journal(s) – largely unused during normal operation, but necessary: it is used when a failover promotes the secondary side to production.

Repository Volume:

Stores pertinent replication environment information. This volume must be visible only to the appliances at the same site as the volume.

Replication volumes:

Replication volumes, or Replicas, are the production storage volumes and their matching target volumes, which are used during replication.

Target volumes must be the same size or larger than the source volumes. Any excess size will not be replicated or visible to the host. This is an important design consideration for heterogeneous storage environments.

Local Replication           Remote Replication
SnapView Snapshot (COFW)    MirrorView/A
VNX SnapSure (ROW)          VNX Replicator

About Ahmad Sabry ElGendi