Proxmox VyOS Migration
Last week I assembled a new desktop to run a second Proxmox server, since my other Proxmox server, which hosts a lot of services, has been running for a couple of years now with almost zero downtime, giving me no opportunity to clean the dust that has accumulated inside the cabinet. The old desktop acting as a server is an i5 7600K with 32 GB of DDR4 RAM and multiple SSDs for NAS and backups. The new server is based on an i3 12100 and an MSI PRO B760M-E DDR4 motherboard supporting PCIe Gen 4 M.2 NVMe. It is a cheap option for a budget build, particularly for a headless server type of application. The other, cheaper option was an H610 board, which I was initially looking at since I really don’t need a Gen 4 SSD for a server application. The price difference between the two boards is approximately 2.5 to 3K rupees, which is 10% of the total cost of the build. Besides this, the B760 has better power delivery, which again is not necessary for an i3 processor but will be useful if someday I upgrade to, say, a 14th-gen i7, which again is very unlikely.
This is the first time I assembled a PC from scratch, including fitting the processor, CPU cooler and all. It was a bit scary placing the CPU and pressing down the latch. But after placing the CPU, the EZ Debug LEDs on the motherboard indicated successful detection of the CPU, and after connecting the RAM and SSD, the BIOS screen on the monitor confirmed everything was all right. It was only after these tests that I got the confidence to disconnect the power supply and mount everything inside the cabinet, which takes quite a lot of time. Now that I knew everything would work, I completed the rest of the steps with a lot of patience, routing the Cooler Master MWE 450 Bronze V2 cables behind the motherboard tray and allowing room for the fans to draw cooler air from outside.
So far, my old server has hosted the virtual machines that run a VyOS firewall for my home internet connection, as well as many other services including my NAS server, web server, home automation server and other development servers.
The first thing I launched on the new server was an instance of the VyOS rolling release, freshly downloaded from the official VyOS website. A rolling release is not meant for production, since it may have bugs introduced by the latest development work, yet it is OK for a home lab, which is mostly a testing environment for hobbyists and professionals like me. After the launch, I needed to transfer my home’s internet load over from the old VyOS instance running on the old server. While doing this step, I committed a lot of mistakes and learned during the process. In this blog post I document the requirements of the migration that I learned the hard way.
Proxmox Network Configuration
The first thing to be done before migrating any VM from the old server to the new one, or launching new services on the Proxmox server, is to match the network configuration of the new server to the old one. This replicates the exact network environment expected by the migrated virtual machines. My existing network is a load-balancing configuration between two network operators. The load balancing is performed by the VyOS firewall, and the VLANs are managed by an Open vSwitch bridge.
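For context, a dual-WAN setup like this is expressed in VyOS roughly as below. This is a hedged sketch rather than my actual configuration: the interface roles (eth0 as LAN, eth1/eth2 as the two WAN links), the weights and the DHCP next-hops are placeholders, and the exact syntax can shift between rolling releases.

```
set load-balancing wan interface-health eth1 nexthop 'dhcp'
set load-balancing wan interface-health eth2 nexthop 'dhcp'
set load-balancing wan rule 10 inbound-interface 'eth0'
set load-balancing wan rule 10 interface eth1 weight '1'
set load-balancing wan rule 10 interface eth2 weight '1'
```

Equal weights split new connections evenly across the two operators; adjusting the weights skews traffic towards the faster link.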
First I installed Open vSwitch to configure the bridges. By default Proxmox uses a Linux bridge, and the openvswitch-switch package is not preinstalled. I prefer to manage bridges with Open vSwitch, which has a lot of customisation options.
apt install openvswitch-switch
Following this, I copied the /etc/network/interfaces file from the old server and used it on my new server, changing the name of the physical interface, which differs on the new server. Initially I was thinking of doing this from the Proxmox web GUI, but then I realised this would make the new server unreachable from the network, since it has only one physical interface, through which I was logging into the web GUI or SSH. Any temporary change in the bridging or layer-3 configuration would render it unreachable. I then realised that editing the configuration file followed by a reboot is the correct approach without any hassle, and it worked like a charm.
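The rename itself can be a single sed substitution over a copy of the file. A minimal sketch, assuming the old server's NIC was called eno1 and the new one enp2s0 (these names are examples; check the real ones with `ip link` on each machine):

```shell
# Stand-in for the interfaces file copied from the old server;
# on a real migration this would be the full file.
cat > /tmp/interfaces <<'EOF'
auto eno1
iface eno1 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr1
EOF

# Replace every whole-word occurrence of the old NIC name,
# then review the result before installing it as /etc/network/interfaces.
sed -i 's/\beno1\b/enp2s0/g' /tmp/interfaces
cat /tmp/interfaces
```

Reviewing the edited copy before moving it into place (and only then rebooting) avoids locking yourself out over a typo.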
The configuration creates an Open vSwitch bridge and assigns the server’s static IP to the bridge. The physical interface (enp2s0) is added to the bridge as a trunk port.
root@pve:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto enp2s0
iface enp2s0 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr1
    ovs_options trunks=0,101,103

auto vmbr1
iface vmbr1 inet static
    address 192.168.1.5/24
    gateway 192.168.1.1
    ovs_type OVSBridge
    ovs_ports enp2s0
#OVS Bridge

source /etc/network/interfaces.d/*
root@pve:~#
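After the reboot, the result can be verified from the console: `ovs-vsctl show` lists the bridge and its ports (enp2s0 should appear with the trunk VLANs), and `ip addr show vmbr1` confirms the static address landed on the bridge.

```
root@pve:~# ovs-vsctl show
root@pve:~# ip addr show vmbr1
```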
In the meantime, I was also trying to attach a second interface, a TP-Link Archer Wi-Fi dongle (rtw_8828au), but with no luck. I compiled the kernel module after installing the PVE headers:
apt install pve-headers
For this I had to add the pve no subscription repository:
root@pve:~# cat /etc/apt/sources.list.d/pve-no-subscription.list
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription
But Secure Boot didn’t allow me to load the kernel module. I will need to figure out how to do that later.
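For the record, with Secure Boot enabled the kernel refuses to load modules that aren't signed by a trusted key, so the usual options are either disabling Secure Boot in the firmware or signing the module with a Machine Owner Key enrolled via mokutil. The current state can be checked from the host:

```
root@pve:~# mokutil --sb-state
```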
VyOS installation
I downloaded and installed a fresh rolling release of the VyOS virtual router. The most important learning in replacing the old VyOS instance with the new one was to ensure that the MAC addresses of all the interfaces added to the new VyOS instance exactly match those on the old instance. This allows us to save the configuration from the old instance and load it on the new instance.
The interface names are tied to the MAC addresses to achieve a consistent naming scheme across reboots: VyOS records each Ethernet interface’s MAC address as a hw-id in the saved configuration. If the MAC addresses don’t match, configuration loading will fail at boot time.
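A quick way to double-check this before booting the new instance is to compare the MACs reported by `qm config <vmid>` on each Proxmox host. A sketch with stand-in config snippets (the VM IDs, MACs and VLAN tags below are made up for illustration):

```shell
# Stand-ins for what `qm config <vmid>` prints on the old and new hosts.
cat > /tmp/old-vm.conf <<'EOF'
net0: virtio=BC:24:11:2F:9A:01,bridge=vmbr1
net1: virtio=BC:24:11:2F:9A:02,bridge=vmbr1,tag=101
EOF
cat > /tmp/new-vm.conf <<'EOF'
net0: virtio=BC:24:11:2F:9A:01,bridge=vmbr1
net1: virtio=BC:24:11:2F:9A:02,bridge=vmbr1,tag=101
EOF

# Pull out just the MAC addresses from each config.
extract_macs() { grep -oE '([0-9A-F]{2}:){5}[0-9A-F]{2}' "$1"; }
extract_macs /tmp/old-vm.conf > /tmp/old.macs
extract_macs /tmp/new-vm.conf > /tmp/new.macs

# Empty diff means the NICs line up one-for-one.
diff /tmp/old.macs /tmp/new.macs && echo "MACs match"
```

If they differ, a NIC's MAC can be pinned on the new VM with `qm set <vmid> --net0 virtio=<MAC>,bridge=vmbr1` before the first boot.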