Cumulus VX#

Cumulus VX is identified with the cvx kind in the topology file. The cvx kind defines the supported feature set and the startup procedure of cvx nodes.

CVX nodes launched with containerlab come up with:

  • the management interface eth0 configured with IPv4/6 addresses as assigned by either the container runtime or DHCP
  • root user created with password root

Mode of operation#

CVX supports two modes of operation:

  • Using Firecracker micro-VMs -- this mode runs Cumulus VX inside a micro-VM on top of the native Cumulus kernel. This mode uses the ignite runtime and is the default way of running CVX nodes.
  • Using only the container runtime -- this mode runs the Cumulus VX container image directly inside the container runtime (e.g. Docker). Due to the lack of Cumulus VX kernel modules, some features are not supported, the most notable being MLAG. To use this mode, add runtime: docker under the cvx node definition (see also this example).
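For illustration, a minimal topology snippet for the second mode is sketched below; the lab and node names are examples, only kind: cvx and runtime: docker carry the actual setting:

```yaml
name: cvx_docker_lab        # illustrative lab name
topology:
  nodes:
    sw1:                    # illustrative node name
      kind: cvx
      runtime: docker       # run the image in Docker instead of ignite
```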

Note

When running in the default ignite runtime mode, the only host OS dependency is /dev/kvm1, required to support hardware-assisted virtualisation. Firecracker VMs are spun up inside a special "sandbox" container that has all the right tools and dependencies required to run micro-VMs.

Additionally, containerlab creates a number of directories under /var/lib/firecracker for nodes running in ignite runtime to store runtime metadata; these directories are managed by containerlab.
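Before launching cvx nodes in the default mode, it can be useful to verify that the host exposes /dev/kvm. The check below is a simple sketch, not part of containerlab itself:

```shell
# Detect whether the KVM device needed by the ignite runtime is present.
if [ -e /dev/kvm ]; then
  kvm_status="available"    # ignite runtime can be used
else
  kvm_status="missing"      # fall back to runtime: docker for cvx nodes
fi
echo "KVM device: $kvm_status"
```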

Managing cvx nodes#

Cumulus VX node launched with containerlab can be managed via the following interfaces:

To attach to a bash shell of a running cvx container (only the container ID is supported):

docker attach <container-id> 
Use Docker's detach sequence (Ctrl+P+Q) to disconnect.

An SSH server is running on port 22:

ssh root@<container-name> 

A gNMI server will be added in future releases.

Info

Default user credentials: root:root

User defined config#

It is possible to make cvx nodes boot up with a user-defined config by passing any number of files along with their desired mount path:

name: cvx_lab
topology:
  nodes:
    cvx:
      kind: cvx
      binds:
        - cvx/interfaces:/etc/network/interfaces
        - cvx/daemons:/etc/frr/daemons
        - cvx/frr.conf:/etc/frr/frr.conf
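As an illustration, the bound files above could carry a minimal FRR setup. The contents below are a hedged sketch: the enabled daemon and the AS number are assumptions for the example, not values taken from the lab:

```
# cvx/daemons -- enable only the routing daemons you need
bgpd=yes
ospfd=no

# cvx/frr.conf -- minimal BGP configuration
router bgp 65001
```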

Note on configuration persistency#

When running inside the ignite runtime, all mount binds work one way -- from the host OS to the cvx node, but not the other way around. Currently, it is up to the user to manually update individual files if configuration updates need to be persisted. This will be addressed in future releases.
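Until this is addressed, one way to persist changes is to copy files back over SSH, which the node exposes on port 22. The sketch below only prints the command; the node name follows containerlab's clab-&lt;lab&gt;-&lt;node&gt; naming and is an assumption based on the earlier cvx_lab example:

```shell
# Hedged sketch: copy the FRR config from a running cvx node back to the
# host-side bind source. Run the printed command against a live lab.
node="clab-cvx_lab-cvx"     # assumed name for node "cvx" in lab "cvx_lab"
src="/etc/frr/frr.conf"
dst="cvx/frr.conf"
echo "scp root@${node}:${src} ${dst}"
```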

Lab examples#

The following labs feature the CVX node:

Known issues or limitations#

  • CVX in Ignite is always attached to the default docker bridge network

  1. This device is already part of the Linux kernel, so this can be read as "no external dependencies are needed for running cvx with the ignite runtime".