VM-based routers integration
Containerlab focuses on containers, but there are many routing products which are shipped only as virtual machines. Leaving containerlab users without the ability to create topologies with both containerized and VM-based routing systems would have been a shame.
Keeping this requirement in mind from the very beginning, we added kinds like ovs-bridge, which allows you to, ehm, bridge your containerized topology with other resources available via a bridged network, such as a VM-based router.

With this approach, you could connect VM-based routing systems by attaching their interfaces to the bridge you define in your topology; however, it doesn't let users define the VM-based nodes in the same topology file. With vrnetlab integration, containerlab is now capable of launching topologies with VM-based routers defined in the same topology file.
Vrnetlab essentially allows you to package a regular VM inside a container and makes it runnable and accessible as if it were a container image.
To make this work, vrnetlab provides a set of scripts that build the container image out of a user-provided VM disk. This enables containerlab to build topologies which consist of both native containerized NOSes and VMs:
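As an illustration, here is a minimal sketch of such a mixed topology file, pairing a native containerized NOS with a vrnetlab-built VM image. The node names, image tags, SR Linux type, and license path below are placeholders, not prescriptions:

```yaml
name: mixed

topology:
  nodes:
    srl:
      # native containerized NOS
      kind: srlinux
      image: ghcr.io/nokia/srlinux
      type: ixrd2
    sr1:
      # VM-based router packaged as a container image with hellt/vrnetlab
      kind: vr-sros
      image: vrnetlab/vr-sros:20.10.R1
      license: license.txt
  links:
    - endpoints: ["srl:e1-1", "sr1:eth1"]
```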
Make sure that the VM containerlab runs on has nested virtualization enabled to support vrnetlab-based containers.
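As a quick sanity check, nested virtualization support can be inspected via the KVM kernel module parameters:

```bash
# prints Y (or 1) when nested virtualization is enabled on Intel CPUs
cat /sys/module/kvm_intel/parameters/nested

# on AMD hosts, query the kvm_amd module instead
cat /sys/module/kvm_amd/parameters/nested
```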
To make vrnetlab images work with container-based networking in containerlab, we had to fork the vrnetlab project and implement the necessary improvements. This means that VM-based routers you intend to run with containerlab must be built with the hellt/vrnetlab project, not with the upstream vrnetlab.

Containerlab depends on hellt/vrnetlab, and sometimes features added in containerlab must be implemented in vrnetlab (and vice versa). This leads to a cross-dependency between the two projects.
The following table lists the version combinations that were validated:

| containerlab version | vrnetlab version | Notes |
| -------------------- | ---------------- | ----- |
|  |  | Initial release. Images: sros, vmx, xrv, xrv9k |
|  |  | added vr-veos, support for boot-delay, SR OS will have a static route to the docker network, improved XRv startup chances |
| -- |  | added a timeout for SR OS images to allow eth interfaces to appear in the container namespace; other images are not touched |
| -- |  | fixed serial (telnet) access to SR OS nodes |
| -- |  | set default cpu/ram for SR OS images |
|  |  | added support for Cisco CSR1000v via vr-csr |
| -- |  | enhanced SR OS boot sequence |
| -- |  | fixed SR OS CPU allocation and added Palo Alto PAN support |
|  |  | added support for Cisco Nexus 9000v via vr-n9kv |
|  |  | added experimental support for Juniper vQFX via vr-vqfx |
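To pick the right row of this matrix, check which containerlab release you are running:

```bash
# prints the containerlab release version, git commit, and build date
containerlab version
```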
Building vrnetlab images
To build a vrnetlab image compatible with containerlab, users first need to ensure that the versions of both projects follow the compatibility matrix.

- Clone hellt/vrnetlab and check out a version compatible with your containerlab release:

  ```bash
  git clone https://github.com/hellt/vrnetlab && cd vrnetlab

  # assuming we are running containerlab 0.11.0,
  # the latest compatible vrnetlab version is 0.2.3
  # at the moment of this writing
  git checkout v0.2.3
  ```
- Enter the directory for the image of interest
- Follow the build instructions from the README.md file in the image directory
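As a hedged illustration of those last two steps, a typical run looks roughly like this. SR OS is used as an example; the qcow2 file name is a placeholder, and the authoritative steps are always the ones in the image directory's README.md:

```bash
# enter the directory for the image of interest
cd sros

# copy the VM disk image obtained from the vendor into the image directory
cp ~/images/sros-vm-20.10.R1.qcow2 .

# build the container image; the resulting image tag is printed at the end
make
```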
Supported VM products
The images that work with containerlab will appear in the supported list gradually, as we implement the necessary integration.
| Product | Kind | Demo lab | Notes |
| ------- | ---- | -------- | ----- |
| Nokia SR OS | vr-sros | SRL & SR OS | When building an SR OS vrnetlab image for use with containerlab, do not provide the license during the image build process. The license shall be provided in the containerlab topology definition file[^1]. |
| Juniper vMX | vr-vmx | SRL & vMX |  |
| Juniper vQFX | vr-vqfx | Coming soon |  |
| Cisco XRv | vr-xrv | SRL & XRv |  |
| Cisco XRv9k | vr-xrv9k | SRL & XRv9k |  |
| Palo Alto PAN | vr-pan |  |  |
| Cisco Nexus 9000v | vr-n9kv |  |  |
Containerlab offers several ways in which VM-based routers can be connected with the rest of the docker workloads. By default, vrnetlab-integrated routers will use the tc backend[^2], which doesn't require any additional packages to be installed on the container host and supports transparent passage of LACP frames.
Any other datapaths?

The tc-based datapath should cover all the needed connectivity requirements; if other, bridge-like datapaths are needed, containerlab offers Open vSwitch and Linux bridge modes. Users can plug in those datapaths by setting the CONNECTION_MODE env variable:
```yaml
# the env variable can also be set in the defaults section
name: myTopo

topology:
  nodes:
    sr1:
      kind: vr-sros
      image: vrnetlab/vr-sros:20.10.R1
      env:
        CONNECTION_MODE: bridge # use `ovs` for openvswitch datapath
```
Simultaneous boot of many qemu nodes may stress the underlying system, which can sometimes result in a boot loop or a system halt. If the container host doesn't have enough capacity to bear the simultaneous boot of many qemu nodes, it is still possible to run them successfully by scheduling their boot times.

Delaying the boot process of certain nodes by a user-defined time allows the nodes to boot successfully while "gradually" loading the system. The boot delay can be set with the BOOT_DELAY environment variable, which the supported vr-xxxx kinds will recognize.
Consider the following example, where the first SR OS node boots immediately, whereas the second node sleeps for 30 seconds and then starts its boot process:
```yaml
name: bootdelay
topology:
  nodes:
    sr1:
      kind: vr-sros
      image: vr-sros:21.2.R1
      license: license-sros21.txt
    sr2:
      kind: vr-sros
      image: vr-sros:21.2.R1
      license: license-sros21.txt
      env:
        # boot delay in seconds
        BOOT_DELAY: 30
```
Typically a lab consists of a few types of VMs which are spawned and interconnected with each other. Consider a lab of 5 interconnected routers, where 1 router uses VM image X and 4 routers use VM image Y.

Effectively we run just two types of VMs in that lab, and thus we can apply a memory deduplication technique that drastically reduces the memory footprint of the lab. In Linux this can be achieved with technologies like UKSM/KSM. Refer to this article that explains the methodology and provides steps to get UKSM working on Ubuntu/Fedora systems.
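As a rough sketch of plain KSM (UKSM requires a patched kernel, so for it follow the referenced article instead), the in-kernel daemon can be toggled and observed through sysfs:

```bash
# start the KSM daemon; it merges identical memory pages of processes
# that opted in via madvise(MADV_MERGEABLE), which qemu does by default
echo 1 | sudo tee /sys/kernel/mm/ksm/run

# observe the effect: the number of pages currently shared between VMs
cat /sys/kernel/mm/ksm/pages_sharing
```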
[^1]: To have guaranteed compatibility, check out the mentioned tag and build the images.