vr-xrv9k nodes launched with containerlab come up pre-provisioned with SSH, SNMP, NETCONF and gNMI (if available) services enabled.
XRv9k is a resource-hungry image. As of XRv9k 7.2.1, the minimum resources are 2 vCPU and 14 GB of RAM; these are the defaults containerlab sets for this kind.
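If the defaults need to be raised for a heavier lab, the resources can be set per node in the topology file. The snippet below is a minimal sketch assuming the `cpu` and `memory` node properties available in recent containerlab releases; the lab name, node name and image tag are placeholders, and the values simply restate the documented minimum.

```yaml
# hypothetical topology snippet; names and image tag are placeholders
name: xrv9k-lab
topology:
  nodes:
    xr1:
      kind: vr-xrv9k
      image: vrnetlab/vr-xrv9k:7.2.1
      cpu: 2        # documented minimum for XRv9k 7.2.1
      memory: 14Gb  # documented minimum for XRv9k 7.2.1
```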
The image takes about 25 minutes to fully boot, so be patient. You can monitor the boot progress with `docker logs -f <container-name>`.
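With containerlab's `clab-<lab-name>-<node-name>` container naming scheme, following the boot log for the hypothetical node from the sketch above could look like this:

```bash
# follow the boot log until the node reports it is ready
docker logs -f clab-xrv9k-lab-xr1
```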
## Managing vr-xrv9k nodes
Cisco XRv9k nodes launched with containerlab can be managed via the following interfaces:

- to connect to a `bash` shell of a running vr-xrv9k container: `docker exec -it <container-name/id> bash`
- to connect to the XRv9k CLI over SSH: `ssh clab@<container-name/id>`
- the NETCONF server runs on port 830: `ssh clab@<container-name> -p 830 -s netconf`
- gNMI, using the best-in-class gnmic gNMI client as an example: `gnmic -a <container-name/node-mgmt-address> --insecure -u clab -p clab@123 capabilities`
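Beyond the capabilities request shown above, other gnmic commands can be used with the same credentials. The example below is an illustrative sketch: the target name reuses the hypothetical node from earlier, and the OpenConfig path is an assumption rather than a path confirmed by this page.

```bash
# hypothetical gNMI Get against the node's management address
gnmic -a clab-xrv9k-lab-xr1 --insecure \
  -u clab -p clab@123 \
  get --path /interfaces/interface/state
```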
Default user credentials: `clab:clab@123`
The vr-xrv9k container can have up to 90 interfaces and uses the following mapping rules:

- `eth0` - management interface connected to the containerlab management network
- `eth1` - first data interface, mapped to the first data port of the XRv9k line card
- `eth2+` - second and subsequent data interfaces
When containerlab launches a vr-xrv9k node, it assigns IPv4/IPv6 addresses to the `eth0` interface. These addresses can be used to reach the management plane of the router.

Data interfaces `eth1+` need to be configured with IP addressing manually using the CLI or management protocols; a sketch follows below.
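For instance, addressing the first data interface from the XR CLI might look like the following; the hostname, interface name and IP address are illustrative assumptions, not values prescribed by containerlab.

```
RP/0/RP0/CPU0:xr1# configure
RP/0/RP0/CPU0:xr1(config)# interface GigabitEthernet0/0/0/0
RP/0/RP0/CPU0:xr1(config-if)# ipv4 address 192.0.2.1 255.255.255.0
RP/0/RP0/CPU0:xr1(config-if)# no shutdown
RP/0/RP0/CPU0:xr1(config-if)# commit
```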
## Features and options
vr-xrv9k nodes come up with a basic configuration: only the control plane and line cards are provisioned, along with the `clab` user and management interfaces such as NETCONF, SNMP and gNMI.
The following labs feature the vr-xrv9k node:
## Known issues and limitations
- LACP and BPDU packets are not propagated to/from vrnetlab based routers launched with containerlab.