Labs deployed with containerlab are endlessly flexible, largely because containerlab can spin up and wire up regular containers as part of the lab topology.
Nowadays more and more workloads are packaged as containers, and containerlab users can integrate them into their labs using a familiar docker-compose-like syntax. Within the networking domain, the most common use cases for bare linux containers are to introduce "clients" or traffic generators connected to the network nodes, or to host telemetry/monitoring stacks.
But, of course, you are free to choose which containers to add to your lab; there is no restriction on that!
## Using linux containers
As with any other node, a linux container is a node of a specific kind, `linux` in this case.
```yaml
# a simple topo of two alpine containers connected with each other
name: demo

topology:
  nodes:
    n1:
      kind: linux
      image: alpine:latest
    n2:
      kind: linux
      image: alpine:latest
  links:
    - endpoints: ["n1:eth1", "n2:eth1"]
```
With a topology file like that, the nodes will start and both containers will have the `eth1` link available.
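Assuming the topology above is saved as `demo.clab.yml`, a deployment and a quick link check could look roughly like the sketch below. The filename and IP addresses are illustrative; the container names follow containerlab's `clab-<lab-name>-<node-name>` convention:

```shell
# deploy the lab from the topology file (illustrative filename)
containerlab deploy -t demo.clab.yml

# verify the eth1 interface exists inside node n1
docker exec clab-demo-n1 ip link show eth1

# assign addresses on both ends and ping across the link
docker exec clab-demo-n1 ip addr add 192.168.0.1/24 dev eth1
docker exec clab-demo-n2 ip addr add 192.168.0.2/24 dev eth1
docker exec clab-demo-n1 ping -c 1 192.168.0.2
```

This is only a sanity check; any image with `ip` and `ping` available (alpine has both) will do.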
Containerlab aims to deliver the same level of flexibility in container configuration as docker-compose. With linux containers it is possible to use the following node configuration parameters:
- `image` - to set an image source for the container
- `binds` - to mount files from the host into the container
- `ports` - to expose services running in the container to the host
- `env` - to set environment variables
- `user` - to set the user that will be used inside the container
- `cmd` - to provide a command that will be executed when the container starts
- `publish` - to expose a container's service via the mysocket.io integration
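Putting several of these parameters together, a node definition might look like the sketch below. All paths, ports, and values here are illustrative assumptions, not prescriptive:

```yaml
topology:
  nodes:
    client:
      kind: linux
      image: alpine:latest
      # mount a host directory into the container (illustrative path)
      binds:
        - ./configs/client:/etc/client
      # expose container port 8080 on host port 18080
      ports:
        - 18080:8080
      # set environment variables inside the container
      env:
        MODE: lab
      # run as a specific user inside the container
      user: root
      # command executed when the container starts
      cmd: sleep infinity
```

This mirrors the docker-compose style of configuration, so moving an existing compose service definition into a containerlab node is usually a mechanical translation.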