12c ASM: Unable To Add New Disks With Dissimilar Size To 12.1.0.2 ASM Diskgroups

Hello,
Adding new disks to an existing ASM diskgroup (Normal or High Redundancy) on the ASM 12.1.0.2 release reports the following error:

SQL> alter diskgroup DATA add failgroup FG1 disk '/dev/mapper/oracleasm/DATA10' size 20489M failgroup FG2 disk '/dev/mapper/oracleasm/DATA11' size 20489M;
alter diskgroup DATA add failgroup FG1 disk '/dev/mapper/oracleasm/DATA10' size 20489M failgroup FG2 disk '/dev/mapper/oracleasm/DATA11' size 20489M
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15410: Disks in disk group DATA do not have equal size.

If you try to add the disks using ASMCA, it gives the following warning:
The size of the disks selected is not the same as to allow for an equal number of 1MB AU size blocks

Starting with the 12.1.0.2 ASM release, this constraint/validation is enforced. Disks with uneven capacity can create allocation problems that prevent full use of all of the available storage in the failgroup/diskgroup. The validation ensures that all disks in the same diskgroup have the same size, which provides more predictable overall performance and space utilization. If the disks are the same size, ASM spreads the files evenly across all of the disks in the diskgroup. This allocation pattern keeps every disk at the same capacity level and ensures that all of the disks in a diskgroup carry the same I/O load. Because ASM load-balances the workload among all of the disks in a diskgroup, different ASM disks should not share the same physical drive. This validation is enabled by default on the 12.1.0.2 Grid Infrastructure/ASM release and onwards.
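To see the sizes ASM has recorded for the existing disks, you can query V$ASM_DISK (OS_MB is the size reported by the operating system, TOTAL_MB is the size ASM actually uses); the diskgroup name DATA is taken from the example above:

SQL> select d.name, d.failgroup, d.os_mb, d.total_mb from v$asm_disk d, v$asm_diskgroup g where d.group_number = g.group_number and g.name = 'DATA';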

The affected diskgroup was created with the 'COMPATIBLE.ASM'='12.1.0.2.0' attribute and its existing disks are 10239 MB each, but the new candidate disks have a different size (20489 MB).
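The compatibility setting can be confirmed from V$ASM_ATTRIBUTE (attribute names are stored there in lowercase):

SQL> select value from v$asm_attribute where name = 'compatible.asm' and group_number = (select group_number from v$asm_diskgroup where name = 'DATA');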

Use the original bigger disks (20489 MB), but explicitly restrict the disk size at the ASM level to the required matching size (10239 MB), as follows:

sqlplus "/as sysasm"
SQL> alter diskgroup DATA add failgroup FG1 disk '/dev/mapper/oracleasm/DATA10' size 10239M failgroup FG2 disk '/dev/mapper/oracleasm/DATA11' size 10239M;
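Afterwards you can verify that ASM uses only 10239 MB of each 20489 MB disk by comparing OS_MB and TOTAL_MB in V$ASM_DISK:

SQL> select path, failgroup, os_mb, total_mb from v$asm_disk where path in ('/dev/mapper/oracleasm/DATA10', '/dev/mapper/oracleasm/DATA11');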

What is the compact phase during an Oracle ASM rebalance?

The compact phase is part of the Oracle ASM rebalance operation; it moves the data as close as possible to the outer tracks of the disks (the lower-numbered offsets).

The first time you run a rebalance on 11g, it can take a while if the diskgroup configuration was changed (especially via ADD DISK) while running 10g ASM. Subsequent manual rebalances without a configuration change should not take as long.

A diskgroup where the compact phase of rebalance has done a lot of work will tend to perform better than the pre-compact diskgroup: the data is clustered near the higher-performing tracks of the disk, resulting in less seek time. The compact phase is enabled by default from 11g onwards and generally takes place at the end of the rebalance operation.

Before 12c, the compact phase is not visible in the V$ASM_OPERATION view at the ASM level. If EST_MINUTES shows '0' and the operation keeps running for a long time, it is probably compacting. This can be confirmed with a system state dump at the ASM level: typically there is no blocking session and the waits are on "kfk: async IO".
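A quick way to spot this from SQL, before resorting to a system state dump, is to watch V$ASM_OPERATION; if the row stays in STATE = 'RUN' with EST_MINUTES = 0 for a long time, the rebalance is most likely in the compact phase:

SQL> select group_number, operation, state, power, sofar, est_work, est_minutes from v$asm_operation;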

From 12c onwards, the compact phase is visible as a separate operation. The compact phase can also be disabled: before 12c, use the hidden parameter _disable_rebalance_compact=true at the instance level.
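A sketch of disabling it on pre-12c, assuming the parameter is set in the ASM instance's spfile and picked up at the next restart (underscore parameters must be double-quoted, and should only be changed under Oracle Support guidance):

SQL> alter system set "_disable_rebalance_compact"=true scope=spfile sid='*';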

From 12c onwards, the _disable_rebalance_compact parameter is no longer available; instead, the diskgroup attribute _rebalance_compact can be used:

SQL> ALTER DISKGROUP DATA SET ATTRIBUTE '_rebalance_compact'='FALSE';
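In 12c you can then confirm whether a compact pass still runs by checking the PASS column of V$ASM_OPERATION (it reports values such as RESYNC, REBALANCE and COMPACT):

SQL> select group_number, operation, pass, state, est_minutes from v$asm_operation;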