Proxmox 4 Upgrade on OVH - Simple IP Failover Issues

Written by Kevin on November 02, 2015

You're using Proxmox 3 on an OVH server and you want to upgrade to Proxmox 4. You have a pretty simple setup and figure it will be a simple process. While this is mostly true, if you are used to just assigning one of your Failover IPs to a container via the Proxmox interface, you will be quite surprised when that no longer works.

So what's the problem? LXC does not handle networking the same way OpenVZ did (as far as I can tell). I was forced to manually configure the network inside each container.

The Process

Warning: Your server will be offline for some time.

First, you need to make a backup of all your Linux containers. You can do this from either the Proxmox web interface or via the command line. YOU MUST DO THIS
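From the command line, something like the following should work (a sketch; the container ID and dump directory are assumptions, adjust them for your setup):

vzdump 101 --dumpdir /var/lib/vz/dump --compress gzip  # repeat for each container ID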

Secondly, note down all the IP addresses assigned to each container. Write it on paper, save it in a notepad. YOU MUST DO THIS
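On Proxmox 3 you can pull this list straight from OpenVZ (a quick sketch; these are the output columns I care about here):

vzlist -a -o ctid,hostname,ip  # list every container with its assigned IPs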

Once all containers are backed up, go ahead and follow this guide.

Once you have finished that guide, go ahead and delete your data directory and all OpenVZ configs.

rm -f /etc/pve/openvz/<ct-id>.conf  # do this for each OpenVZ container
rm -R <storage-path>/private/*      # typically <storage-path> is /var/lib/vz

Now, restore each of the OpenVZ backups. Again, this can be done via the web interface or via the command line. Check out this guide
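From the command line, restoring a dump into a new LXC container looks roughly like this (a sketch; the container ID and archive path are assumptions based on the vzdump naming scheme):

pct restore 101 /var/lib/vz/dump/vzdump-openvz-101-<timestamp>.tar.gz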

Now, here is the part that is special to OVH. Before you add a network device to each container, go into your OVH control panel and add a virtual MAC to each IP address. Do not add one to the whole block unless you want one container to have the whole block. This will take some time, as you have to do them one at a time.

Once each IP has a virtual MAC, go to your Proxmox web interface and navigate to the Network tab for the desired container. Fill it out with the following properties:

ID: net0
Name: eth0
MAC Address: <virtual mac from OVH for desired Failover IP>
Bridge: vmbr0

Leave everything else blank.
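The same thing can be done from the host's shell with pct set (a sketch; the container ID and MAC are placeholders):

pct set 101 -net0 name=eth0,bridge=vmbr0,hwaddr=02:00:00:AA:BB:CC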

The next step requires root access to the container. I recommend logging into the host and using pct enter <CID>. Depending on the container's distribution, the following steps differ slightly.
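For example (run on the Proxmox host; 101 is a placeholder container ID):

pct enter 101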

Two IP addresses are needed. The first is your Failover IP. The second is the gateway for your server, which is your server's primary IP with the last octet replaced with 254. For example, if your server's primary IP is 192.0.1.1, then your gateway is 192.0.1.254.

For Debian

Update the interfaces file. You can ignore the warnings in that file; they were added by OpenVZ and are no longer relevant.

X.X.X.X = Failover IP
Y.Y.Y.254 = Gateway IP

/etc/network/interfaces:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address X.X.X.X
        netmask 255.255.255.255
        broadcast X.X.X.X
        dns-nameservers 8.8.8.8 8.8.4.4
        post-up route add Y.Y.Y.254 dev eth0
        post-up route add default gw Y.Y.Y.254
        post-down route del Y.Y.Y.254 dev eth0
        post-down route del default gw Y.Y.Y.254

Then reboot the container or issue ifup eth0.
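To sanity-check the result from inside the container, a few standard commands are worth running (nothing OVH-specific here):

ip addr show eth0  # should show X.X.X.X/32 on eth0
ip route           # should show the host route and the default via Y.Y.Y.254
ping -c 3 8.8.8.8  # confirms outbound connectivity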

For CentOS 7

First, you must remove the references to venet0 and venet0:0.

rm /etc/sysconfig/network-scripts/ifcfg-venet*

Then, create two new files:

X.X.X.X = Failover IP
Y.Y.Y.254 = Gateway IP

/etc/sysconfig/network-scripts/ifcfg-eth0:

DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
NETMASK=255.255.255.255
IPADDR=X.X.X.X
USERCTL=no

/etc/sysconfig/network-scripts/route-eth0:

Y.Y.Y.254/32 dev eth0
default via Y.Y.Y.254

Then reboot the container or issue ifup eth0.

For Other Distributions

The basic process is to statically set the IP to your Failover IP, then add two routes: one routing your gateway via eth0, and the other a default route through your gateway. See your distribution's documentation.
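If your distribution has no convenient config format, the equivalent one-off commands with iproute2 look like this (a sketch; you would still need to persist them through your distro's own mechanism):

ip addr add X.X.X.X/32 dev eth0     # the Failover IP with a /32 mask
ip route add Y.Y.Y.254 dev eth0     # host route to the gateway
ip route add default via Y.Y.Y.254  # default route through the gateway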

Notes about CentOS

By default, many services will not start because LXC does not support PrivateTmp. To remedy this, add the following to the [Service] section of each service that will not start.

PrivateTmp=false
NoNewPrivileges=true
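Rather than editing the unit files shipped by each package, a systemd drop-in override survives package updates. A sketch for httpd (the service name is a placeholder; repeat for each affected service):

mkdir -p /etc/systemd/system/httpd.service.d
cat > /etc/systemd/system/httpd.service.d/lxc.conf <<'EOF'
[Service]
PrivateTmp=false
NoNewPrivileges=true
EOF
systemctl daemon-reload
systemctl restart httpd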