
Free inodes < 10% on volume /opt/zimbra/backup

(@victord)
Posts: 1
Topic starter
 

Hi guys

I've set up a Zimbra 8.6 test environment with Zextras. Last week I created and deleted a lot of accounts to test an application that drives Zimbra provisioning via SOAP. I may have created and deleted 7500 accounts three or four times on a single-store installation.
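For context, the churn looked roughly like this; a simplified sketch using zmprov (my application does the same thing over SOAP), with a placeholder domain and password:

# Rough equivalent of the provisioning churn, as a zmprov loop.
# example.com and the password are placeholders.
for i in $(seq 1 7500); do
    zmprov createAccount "test${i}@example.com" 'ChangeMe-123'
done
# ...followed by the matching cleanup pass:
for i in $(seq 1 7500); do
    zmprov deleteAccount "test${i}@example.com"
done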

This morning I got an alert because the number of free inodes on the Zextras backup volume is very low. Disk space usage is only 25%, but the folder /opt/zimbra/backup/zextras/accounts is 2.3 GB:

Filesystem        Size  Used Avail Use% Mounted on
lv_zimbra_backup  9.8G  2.3G  7.4G  24% /opt/zimbra/backup

Filesystem       Inodes IUsed IFree IUse% Mounted on
lv_zimbra_backup   640K  564K   77K   89% /opt/zimbra/backup
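
To see where the inodes actually go, a generic find pipeline like this one should list the directories holding the most files (nothing Zimbra-specific, just GNU find):

# Count files per directory under the backup mount, largest counts
# first -- these directories are the inode consumers.
find /opt/zimbra/backup -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head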

I have two questions:
- In production (8 stores), should I do something like creating the backup filesystems with a higher inode count (rough sketch below)?
- How can I reset the Zextras backup folder (losing all backups and starting from zero, but without losing the Zimbra mailstore data)? My naive idea follows below.
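
For the first question, what I had in mind is something like this at filesystem creation time; as far as I know the inode count of an existing ext filesystem cannot be changed afterwards, so it has to be set at mkfs time (the device path and ratios below are examples only):

# Example only: one inode per 4 KiB instead of the ext4 default of
# one per 16 KiB, i.e. four times as many inodes for the same size.
# The device path is a placeholder.
mkfs.ext4 -i 4096 /dev/vg_data/lv_zimbra_backup
# Alternatively, request an absolute inode count:
mkfs.ext4 -N 2621440 /dev/vg_data/lv_zimbra_backup

For the second question, my naive, untested idea (please tell me if this is unsafe) would be to disable the backup module, wipe the backup path, and re-initialize it:

# Untested sketch -- assumes the Zextras backup module is stopped first
su - zimbra
rm -rf /opt/zimbra/backup/zextras/*
# then rebuild the backup from the live mailstore data:
zxsuite backup doSmartScan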

# tune2fs -l /dev/mapper/vg_lxlyofcs320_data-lv_zimbra_backup
tune2fs 1.41.12 (17-May-2010)
Filesystem volume name:   
Last mounted on:          /opt/zimbra/backup
Filesystem UUID:          a9933b39-1c54-4980-a069-6e155d39ef57
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              655360
Block count:              2621440
Reserved block count:     26214
Free blocks:              1957365
Free inodes:              78513
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      639
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Flex block group size:    16
Filesystem created:       Mon Oct 12 16:46:30 2015
Last mount time:          Fri Apr 29 16:30:34 2016
Last write time:          Mon May  2 00:34:41 2016
Mount count:              1
Maximum mount count:      37
Last checked:             Fri Apr 29 16:30:16 2016
Check interval:           15552000 (6 months)
Next check after:         Wed Oct 26 16:30:16 2016
Lifetime writes:          12 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      eb6b90b9-953e-4dde-a882-bccb8fb37b33
Journal backup:           inode blocks
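
If I read this output correctly, the defaults give one inode per 16 KiB: 2621440 blocks × 4096 bytes of data for 655360 inodes, i.e. 10737418240 / 655360 = 16384 bytes per inode. So a backup folder full of small metadata files exhausts inodes long before blocks, which matches the 89% IUse% against 24% Use% above.

# Sanity check of the bytes-per-inode ratio from the tune2fs numbers
echo $(( 2621440 * 4096 / 655360 ))   # prints 16384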

Regards

Victor


Posted: 05/02/2016 09:26