Hi,
If the public network change involves a different subnet (netmask) or interface, you must delete the existing interface information from the OCR and add it back with the correct information. In the example here, the subnet is changed from 10.2.156.0 to 10.2.166.0:
% $GRID_HOME/bin/oifcfg delif -global eth0/10.2.156.0
% $GRID_HOME/bin/oifcfg setif -global eth0/10.2.166.0:public
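To confirm the registered interfaces after the change, list them with oifcfg getif (the cluster_interconnect line shown here is illustrative):
% $GRID_HOME/bin/oifcfg getif
eth0  10.2.166.0  global  public
eth1  192.168.1.0  global  cluster_interconnect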
Then make the change at the OS layer. There is no requirement to restart Oracle Clusterware unless the OS change requires a node reboot. This can be done in a rolling fashion.
Once the public network is changed, its associated VIP and SCAN VIP must also be changed.
Please ensure that the public network changes are made first: if there is a node reboot or Clusterware restart after the OS network change but before the VIP is modified, the VIP will not start.
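The SCAN VIP change is not detailed in this note. A minimal sketch, assuming a hypothetical SCAN name of orcscan whose DNS entries have already been updated to the new subnet:
as Grid Infrastructure owner:
$ srvctl stop scan_listener
$ srvctl stop scan
as root user (this re-reads the SCAN VIP addresses from DNS):
# srvctl modify scan -n orcscan
as Grid Infrastructure owner:
$ srvctl start scan
$ srvctl start scan_listener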
Gather Current VIP Configuration
as Grid Infrastructure owner:
$ srvctl config nodeapps -a
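Example output, matching the pre-change configuration in this example (the VIP name orcnode1-vip is assumed):
VIP exists.: /orcnode1-vip/10.2.156.21/255.255.255.0/eth0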
Verify the VIP status; it should show the VIPs are ONLINE:
$ crsctl stat res -t
$ ifconfig -a
– the VIP logical interface should be bound to the public network interface
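In the ifconfig -a output, the VIP typically appears as a logical interface on the public NIC, for example (values illustrative):
eth0:1    inet addr:10.2.156.21  Bcast:10.2.156.255  Mask:255.255.255.0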
Stopping Resources
Stop the nodeapps resources (and dependent resources such as ASM and database instances, only if required):
as Grid Infrastructure owner:
$ srvctl stop instance -d ORCDB -n orcnode1
$ srvctl stop vip -n orcnode1 -f
Verify that the VIP is now OFFLINE and its logical interface is no longer bound to the public network, for example:
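as Grid Infrastructure owner:
$ srvctl status vip -n orcnode1
$ crsctl stat res -t
$ ifconfig -a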
Modifying VIP and Its Associated Attributes
Determine the new VIP IP address, subnet, netmask, and/or VIP hostname. Make the network change at the OS level first, and ensure the new VIP is registered in DNS or updated in /etc/hosts. If the network interface is changing, ensure the new interface is available on the server before proceeding with the modification.
For example:
New VIP: 10.2.166.21 orcnode1-nvip
New subnet: 10.2.166.0
New netmask: 255.255.255.0
New interface: eth2
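If the new VIP is maintained in /etc/hosts rather than DNS, the entry would look like this (illustrative):
10.2.166.21   orcnode1-nvip
Name resolution can be confirmed with nslookup orcnode1-nvip (DNS) or getent hosts orcnode1-nvip (/etc/hosts).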
Modify the VIP resource, as the root user:
# srvctl modify nodeapps -n orcnode1 -A orcnode1-nvip/255.255.255.0/eth2
Verify the change:
$ srvctl config nodeapps -n orcnode1 -a
VIP exists.: /orcnode1-nvip/10.2.166.21/255.255.255.0/eth2
Start the nodeapps and the other resources:
as Grid Infrastructure owner:
$ srvctl start vip -n orcnode1
$ srvctl start listener -n orcnode1
$ srvctl start instance -d ORCDB -n orcnode1
Verify that the new VIP is ONLINE and bound to the public network interface:
$ crsctl stat res -t
$ ifconfig -a
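The new VIP should now appear as a logical interface on the new NIC, for example (values illustrative):
eth2:1    inet addr:10.2.166.21  Bcast:10.2.166.255  Mask:255.255.255.0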
Repeat the same steps on the remaining nodes in the cluster, if a similar change is required there.