vr-vmx nodes launched with containerlab come up pre-provisioned with SSH, SNMP, NETCONF and gNMI services enabled.
## Managing vr-vmx nodes
Containers with vMX inside take ~7 min to fully boot. You can monitor the progress with `docker logs -f <container-name>`.
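For scripting, the boot wait can be automated by polling the container log. A minimal sketch, assuming the vrnetlab launch script prints a line containing "Startup complete" once the VM has booted (check your image's log output if the pattern differs; `wait_for_boot` is an illustrative helper, not a containerlab command):

```bash
# wait_for_boot: poll a log-producing command until it reports boot completion.
# Assumption: vrnetlab images log a "Startup complete" line when the vMX is ready.
wait_for_boot() {
    until "$@" 2>&1 | grep -q "Startup complete"; do
        sleep 10
    done
}

# usage against a running node:
# wait_for_boot docker logs <container-name>
```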
A Juniper vMX node launched with containerlab can be managed via the following interfaces:
* to connect to a `bash` shell of a running vr-vmx container:

    ```bash
    docker exec -it <container-name/id> bash
    ```
* to connect to the vMX CLI:

    ```bash
    ssh admin@<container-name/id>
    ```
* NETCONF server is running over port 830:

    ```bash
    ssh admin@<container-name> -p 830 -s netconf
    ```
* using the best-in-class gnmic gNMI client as an example:

    ```bash
    gnmic -a <container-name/node-mgmt-address> --insecure \
        -u admin -p admin@123 \
        capabilities
    ```
Default user credentials: `admin:admin@123`
The vr-vmx container can have up to 90 interfaces and uses the following mapping rules:

* `eth0` - management interface connected to the containerlab management network
* `eth1` - first data interface, mapped to the first data port of the vMX line card
* `eth2+` - second and subsequent data interfaces, mapped to consecutive data ports
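Under these rules the data-port name can be computed mechanically. A small sketch, assuming the conventional vMX naming where `eth1` corresponds to `ge-0/0/0` (`eth_to_port` is an illustrative helper, not part of containerlab):

```bash
# eth_to_port: map a containerlab data interface name (ethN, N>=1) to the
# vMX line card port it is wired to.
# Assumption: eth1 -> ge-0/0/0, eth2 -> ge-0/0/1, and so on.
eth_to_port() {
    echo "ge-0/0/$(( ${1#eth} - 1 ))"
}

eth_to_port eth1   # -> ge-0/0/0
eth_to_port eth5   # -> ge-0/0/4
```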
When containerlab launches a vr-vmx node, it assigns IPv4/6 addresses to the `eth0` interface. These addresses can be used to reach the management plane of the router.

`eth1+` interfaces need to be configured with IP addressing manually using the CLI or management protocols.
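As a sketch of that manual step, the first data interface (`eth1`, i.e. `ge-0/0/0` under the conventional mapping) could be addressed over the SSH CLI like this; the node name and the address are placeholders, not containerlab defaults:

```bash
# Hypothetical example: push a minimal interface config over SSH.
# <container-name> and 10.0.0.1/24 are placeholders for your lab's values.
ssh admin@<container-name> <<'EOF'
configure
set interfaces ge-0/0/0 unit 0 family inet address 10.0.0.1/24
commit and-quit
EOF
```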
## Features and options
vr-vmx nodes come up with a basic configuration where only the control plane and line cards are provisioned, along with the `admin` user and management interfaces such as NETCONF, SNMP, and gNMI.
The following labs feature the vr-vmx node:
## Known issues and limitations
- when listing docker containers, the vr-vmx container will always report an unhealthy status. Do not rely on this status.
- LACP and BPDU packets are not propagated to/from vrnetlab-based routers launched with containerlab.
- vMX requires Linux kernel 4.17+.
- To check the boot log, use `docker logs -f <node-name>`.