Nokia SR Linux#
Managing SR Linux nodes#
There are many ways to manage SR Linux nodes, ranging from classic CLI management all the way up to gNMI programming. Here is a short summary of how to access those interfaces:
* to connect to a `bash` shell of a running SR Linux container:

```bash
docker exec -it <container-name/id> bash
```
* to connect to the SR Linux CLI:

```bash
docker exec -it <container-name/id> sr_cli
```
* using the best-in-class gnmic gNMI client as an example:

```bash
gnmic -a <container-name/node-mgmt-address> --skip-verify \
      -u admin -p admin \
      -e json_ietf \
      get --path /system/name/host-name
```
SR Linux has a JSON-RPC interface that is enabled on ports 80/443 for the HTTP/HTTPS schemas respectively. The HTTPS server uses the same TLS certificate as the gNMI server.
Default user credentials: `admin:admin`.
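As a sketch, a `get` request to the JSON-RPC endpoint could look like the following. The node address `clab-srl_lab-srl1` is a hypothetical example, and the request body follows SR Linux's JSON-RPC `commands` convention:

```bash
# sketch: fetch the host name over HTTPS JSON-RPC (node address is hypothetical)
curl -sk -u admin:admin https://clab-srl_lab-srl1/jsonrpc \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc": "2.0", "id": 0, "method": "get",
       "params": {"commands": [{"path": "/system/name/host-name", "datastore": "state"}]}}'
```

The `-k` flag skips certificate verification, matching the `--skip-verify` behavior of the gNMI example above.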
Features and options#
For SR Linux nodes, `type` defines the hardware variant that this node will emulate.

The available `type` values include `ixrd3`, which corresponds to a hardware variant of the Nokia 7250/7220 IXR chassis. If no type is provided, the `ixr6` type will be used by containerlab.
Based on the provided type, containerlab will generate the topology file that is mounted to the SR Linux container and makes it boot in the chosen HW variant.
SR Linux nodes have a dedicated `config` directory that is used to persist the configuration of the node. It is possible to launch nodes of the `srl` kind with a basic "empty" config or to provide a custom config file that will be used as a startup config instead.
Default node configuration#
When a node is defined without a `config` statement present, containerlab will generate an empty config from a built-in template and put it in that directory.
```yaml
# example of a topo file that does not define a custom config
# as a result, the config will be generated from a template
# and used by this node
name: srl_lab
topology:
  nodes:
    srl1:
      kind: srl
      type: ixr6
```
The generated config will be saved at the path `clab-<lab_name>/<node-name>/config/config.json`. Using the example topology presented above, the exact path to the config will be `clab-srl_lab/srl1/config/config.json`.
User defined startup config#
It is possible to make SR Linux nodes boot up with a user-defined config instead of the built-in one. With the `startup-config` property of the node/kind, a user sets the path to the config file that will be mounted to a container:
```yaml
name: srl_lab
topology:
  nodes:
    srl1:
      kind: srl
      type: ixr6
      image: ghcr.io/nokia/srlinux
      startup-config: myconfig.json
```
With such a topology file, containerlab is instructed to take the file `myconfig.json` from the current working directory, copy it to the lab directory for that specific node under the `config.json` name, and mount that file to the container. As a result, this config acts as the startup config for the node.
As was explained in the Node configuration section, SR Linux containers can make their config persistent, because the config files are provided to the containers from the host via a bind mount.
When a user configures an SR Linux node, the changes are saved into the running configuration stored in memory. To save the running configuration as a startup configuration, the user needs to execute the `tools system configuration save` CLI command. This will write the config to the `/etc/opt/srlinux/config.json` file that holds the startup config and is exposed to the host.
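An interactive save from the SR Linux CLI could look like the sketch below (the node name `srl1` and the prompt layout are assumptions for illustration):

```
--{ running }--[  ]--
A:srl1# tools system configuration save
/system:
    Saved current running configuration as initial (startup) configuration '/etc/opt/srlinux/config.json'
```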
SR Linux nodes also support the `containerlab save -t <topo-file>` command, which saves the running config on all the lab nodes. For SR Linux nodes, the `tools system configuration save` command will be executed:
```
❯ containerlab save -t quickstart.clab.yml
INFO Parsing & checking topology file: quickstart.clab.yml
INFO saved SR Linux configuration from leaf1 node. Output:
/system:
    Saved current running configuration as initial (startup) configuration '/etc/opt/srlinux/config.json'
INFO saved SR Linux configuration from leaf2 node. Output:
/system:
    Saved current running configuration as initial (startup) configuration '/etc/opt/srlinux/config.json'
```
By default, containerlab will generate TLS certificates and keys for each SR Linux node of a lab. The TLS-related files that containerlab creates are located in the so-called CA directory, which can be found at the `<lab-directory>/ca/` path. Here is a list of files that containerlab creates, relative to the CA directory:
- Root CA certificate - `root/root-ca.pem`
- Root CA private key - `root/root-ca-key.pem`
- Node certificate - `<node-name>/<node-name>.pem`
- Node private key - `<node-name>/<node-name>-key.pem`
The generated TLS files will persist between lab deployments. This means that if you destroy a lab and deploy it again, the TLS files from the initial lab deployment will be used.
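Because the files persist, a gNMI client can verify the server certificate against the generated root CA instead of skipping verification. A sketch, assuming the lab name `srl_lab`, a node reachable as `clab-srl_lab-srl1`, and the root CA certificate stored as `root/root-ca.pem` under the CA directory:

```bash
gnmic -a clab-srl_lab-srl1 \
      --tls-ca clab-srl_lab/ca/root/root-ca.pem \
      -u admin -p admin \
      -e json_ietf \
      get --path /system/name/host-name
```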
In case user-provided certificates/keys need to be used, the `<node-name>-key.pem` files must be copied to the paths outlined above for containerlab to take them into account when deploying a lab.
In case only the root CA files (such as `root-ca-key.pem`) are provided, the node certificates will be generated using these CA files.
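A minimal sketch of generating your own root CA with `openssl` and placing it where containerlab can pick it up; the lab name `srl_lab`, the `ca/root/` layout, and the certificate subject are assumptions based on the defaults described above:

```bash
# generate a self-signed root CA (key + certificate), valid for one year
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=clab-root-ca" \
  -keyout root-ca-key.pem -out root-ca.pem

# copy both files into the lab's CA directory (hypothetical path)
mkdir -p clab-srl_lab/ca/root
cp root-ca.pem root-ca-key.pem clab-srl_lab/ca/root/
```

With these files in place before deployment, the node certificates would be derived from this CA rather than a freshly generated one.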
SR Linux containers can run without any license. In that license-less mode the datapath is limited to 100 PPS, and the sr_linux process will reboot once a week. The license file lifts these limitations, and a path to it can be provided in the topology file.
To start an SR Linux NOS, containerlab uses the configuration that is described in the SR Linux Software Installation Guide:

```bash
sudo bash -c /opt/srlinux/bin/sr_linux
```

The following kernel parameters are set for the container:

```
net.ipv4.ip_forward = "0"
net.ipv6.conf.all.disable_ipv6 = "0"
net.ipv6.conf.all.accept_dad = "0"
net.ipv6.conf.default.accept_dad = "0"
net.ipv6.conf.all.autoconf = "0"
net.ipv6.conf.default.autoconf = "0"
```
When a user starts a lab, containerlab creates a lab directory for storing configuration artifacts. For the `srl` kind, containerlab creates directories for each node of that kind.
```
~/clab/clab-srl02
❯ ls -lah srl1
drwxrwxrwx+ 6 1002 1002   87 Dec  1 22:11 config
-rw-r--r--  1 root root 2.8K Dec  1 22:11 license.key
-rw-r--r--  1 root root 4.4K Dec  1 22:11 srlinux.conf
-rw-r--r--  1 root root  233 Dec  1 22:11 topology.clab.yml
```
The `config` directory is mounted to the container's `/etc/opt/srlinux/` directory in `rw` mode and will effectively contain the configuration that SR Linux runs with, as well as the other files that SR Linux keeps in this directory:
```
❯ ls srl1/config
banner  cli  config.json  devices  tls  ztp
```
The topology file that defines the emulated hardware type is driven by the value of the kind's `type` parameter. Depending on the specified `type`, the appropriate content will be populated into the `topology.yml` file that gets mounted at `/tmp/topology.yml` inside the container.
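To check which topology file a running node actually received, the mounted file can be read back from the container, reusing the same `docker exec` access shown earlier:

```bash
docker exec -it <container-name/id> cat /tmp/topology.yml
```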