Sizing Zimbra Infrastructures — Part 2

Alert! This article is written for Zimbra OSE users. As of December 2023, Synacor will no longer be providing support for Zimbra OSE. You might want to consider trying out Carbonio Community Edition – Zextras’s free and open-source email and collaboration platform.

For additional guidance, check out our community articles detailing the process of migrating from your current platform to Carbonio CE.

Expanding Zimbra storage using HSM policies

In Part 1 of this series, we introduced the HSM architectural pattern. Let’s talk about how Zextras implements it in the Zimbra ecosystem.

Zextras HSM implementation for Zimbra

As you know, Zimbra supports three different types of volumes: Index, Primary, and Secondary. The first two are mandatory and are usually configured as:

Primary --> /opt/zimbra/store
Index   --> /opt/zimbra/index

We can define multiple instances of each volume type, but only one of each type is marked as “current”: this is where the server stores incoming email and new data, updating the corresponding metadata in the database.
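On a stock installation you can inspect this configuration with Zimbra’s own volume CLI; as the zimbra user, a minimal check looks like:

$> zmvolume -l

Each volume is listed with its type, path, and whether it is the current one for that type.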

And what about the Secondary type?

Zextras developed its own engine, designed to safely move items across a server’s volumes: the Zextras Storage Manager, named Powerstore.

Real Case Example: new server setup

Let’s use an example. Suppose we are sizing a server without using LVM. In a standard installation, all Zimbra data is stored under the mount point /opt/zimbra/.

Assume a scenario of 200 users, each with a 10 GB quota, with the server sending and receiving 5 GB of mail per day. Once all users have filled their accounts we will need approximately 2 TB, but we don’t really need all that space right now. We decide to size the server with:

/dev/sda1 - 50GB  mount-point / (S.O.)
/dev/sdb1 - 250GB mount-point /opt/zimbra
/dev/sdc1 - 250GB mount-point /mnt/tier1
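A quick back-of-envelope check of these figures:

200 users x 10 GB quota = ~2 TB needed once every mailbox is full
5 GB/day of new mail    = roughly 50 days of growth before a 250 GB disk fills up, even with no HSM at all

So 250 GB per volume is comfortable for the initial rollout, and we can add tiers as data accumulates.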

Our goal is to use sdb for metadata, index and incoming emails, and move older messages into sdc.

Let’s create the necessary directory and assign the right permissions by executing, as root:

$> mkdir /mnt/tier1/blobs && chown zimbra:zimbra /mnt/tier1/blobs
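A quick check that the path is in place and owned by the right user:

$> ls -ld /mnt/tier1/blobs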

Next we have to create the secondary store and make it current, executing as zimbra:

$> zxsuite powerstore doCreateVolume FileBlob  tier1_volume secondary /mnt/tier1/blobs volume_compressed true
$> zxsuite powerstore doUpdateVolume FileBlob  tier1_volume current_volume true
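You can verify the result, still as zimbra, with:

$> zxsuite powerstore getAllVolumes

tier1_volume should appear among the secondaries with isCurrent set to true (a full example of this output appears later in this article).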

The last step is to create a new policy that moves blobs older than 7 days to tier1, limiting the use of the faster sdb to approximately 35 GB of mail:

$> zxsuite powerstore setHsmPolicy "message,document:before:-7day"
$> zxsuite config server set mail.zextras.loc attribute ZxPowerstore_MoveSchedulingEnabled value true
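To confirm that the policy has been stored, run:

$> zxsuite powerstore getHsmPolicy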

That’s all.

Zimbra stores metadata and index on sdb, together with incoming email and the local cache. Each night (the default start time is 2 AM), Powerstore moves all items that match the policies, freeing space on the primary volume without downtime or any impact on connected users.
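If you don’t want to wait for the nightly run, recent Zextras Suite versions also let you apply the HSM policies on demand (check the zxsuite powerstore help on your version for the exact command set):

$> zxsuite powerstore doMoveBlobs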

Moreover, the files on the secondary store are compressed, optimizing used space and I/O throughput.

Adding more space

After a couple of months, we notice that sdc is filling up, and we decide to move older emails to an NFS server.

We start by creating the mount point, as root:

$> mkdir /mnt/tier2/blobs && chown zimbra:zimbra /mnt/tier2/blobs

… and adapting /etc/fstab. For example:

...
#NFSv4 example
nfs4.domain.local:/NFS/zimbra/server1/blobs        /mnt/tier2/blobs       nfs    rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,soft,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.0.10,local_lock=none,_netdev        0       0
...
#NFSv3 example
nfs3.domain.local:/NFS/zimbra/server1/blobs        /mnt/tier2/blobs       nfs        noauto,vers=3,wsize=32768,rsize=32768,_netdev,nouser,tcp,intr,nolock,soft 0 0
...
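The hostnames above are just placeholders for your NFS server. With the entry in /etc/fstab, we can mount the share and make sure the zimbra user can write to it, as root:

$> mount /mnt/tier2/blobs
$> df -h /mnt/tier2/blobs
$> sudo -u zimbra touch /mnt/tier2/blobs/.wtest && rm /mnt/tier2/blobs/.wtest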

… and defining a new volume:

$> zxsuite powerstore doCreateVolume FileBlob tier2_volume secondary /mnt/tier2/blobs volume_compressed true

This time we have not set the volume as current: we already have a policy that moves objects older than 7 days from the primary to the current secondary. What we need now is a second policy moving the oldest objects from /mnt/tier1 (the current secondary) to /mnt/tier2 (the NFS share).

Before going ahead, take a look at the output of zxsuite powerstore getAllVolumes:

$ zxsuite powerstore getAllVolumes
      primaries                               
             id              1
             name            message1
             path            /opt/zimbra/store
...
     secondaries
             id              3
             name            tier1_volume
             path            /mnt/tier1/blobs
             compressed      true
             storeType       LOCAL
             isCurrent       true
             volumeType      secondary

             id              4
             name            tier2_volume
             path            /mnt/tier2/blobs
             compressed      true
             storeType       LOCAL
             isCurrent       false
             volumeType      secondary
...

Next, add a new policy using the volume IDs from the previous output:

$> zxsuite powerstore +setHsmPolicy "message,document:before:-60day source:3 destination:4 policy_order:10"

Now we have two policies:

$ zxsuite powerstore getHsmPolicy
      policies                                
            message,document:before:-7day
            message,document:before:-60day source:3 destination:4 policy_order:10

The first one selects all emails and documents older than 7 days on the primary volumes and moves them to the current secondary volume.

The second one selects all emails and documents older than 60 days on volume 3 and moves them to volume 4 (our NFS share).

With this configuration:

  • The primary disk can be kept small, allowing you to choose a faster disk (such as NVMe or a full-flash partition) for optimal metadata and index performance. It can be sized using the “quota rule”: statistically, about 20% of the overall server quota (see the worked numbers after this list).
  • The primary volume stores at most 7 days (~35 GB) of email.
  • The additional local disk, sdc in our case, can be optimized for time-based or domain-based policies. In our scenario it will never fill up, storing messages for 53 days (~200 GB applying standard compression).
  • Older messages can be moved to additional mount points, local or remote.
  • The infrastructure can grow according to real user needs, allowing admins to reserve storage only when effectively needed.
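Running the numbers for this example (assuming traffic stays at 5 GB/day and the compression ratio quoted above):

metadata + index ceiling: 20% of 2 TB    = ~400 GB worst case (quota rule)
primary volume, days 0-7: 5 GB/day x 7   = ~35 GB
tier1, days 7-60:         5 GB/day x 53  = ~265 GB raw, ~200 GB compressed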

Obviously, this is just an example of what you can set up using HSM policies.

Download Zextras Suite for Zimbra OSE

