This repository contains the Vigor verification toolchain and network functions (NFs).
Our scripts assume you are using Ubuntu 18.04, with an active Internet connection to download dependencies. Older Ubuntu versions, or other Debian-based distributions, may work but have not been tested.
As an alternative to installing the dependencies on your own machine, we provide a Docker image, which can be generated with the `Docker-build.sh` script. However, you must still use Ubuntu 18.04 as the host, since the container uses the host's kernel, and DPDK needs the kernel headers to compile.
Please note that running the `setup.sh` script can take an hour, since it downloads and compiles many dependencies; it also uses `sudo`, which will prompt for your credentials.
To compile and run Vigor NFs, you need:
To verify NFs using DPDK models, you need:
To verify NFs using hardware models, you need:
To benchmark Vigor NFs, you need:
- Two machines configured as described in the “Linux setup for performance” section below
- An edited `bench/config.sh` file matching the two machines’ configuration
There are currently six Vigor NFs:

| NF | Description |
|---|---|
| `vignat` | NAT according to RFC 3022 |
| `vigbridge` | Bridge with MAC learning according to IEEE 802.1Q-2014 sections 8.7 and 8.8 |
| `viglb` | Load balancer inspired by Google's Maglev |
| `vigpol` | Traffic policer whose specification we invented |
| `vigfw` | Firewall whose specification we invented |
| `vignop` | No-op forwarder |
There are additional “baseline” NFs, which can only be compiled, run, and benchmarked, each in its own folder:

| NF | Description |
|---|---|
| `click-nop` | Click-based no-op forwarder |
| `click-bridge` | Click-based MAC learning bridge |
| `click-lb` | Click-based load balancer (not Maglev) |
| `moonpol` | Libmoon-based traffic policer |
The Click- and Libmoon-based NFs use batching if the `VIGOR_USE_BATCH` environment variable is set to `true` when running the benchmark targets (see table below).
Pick the NF you want to work with by `cd`-ing to its folder, then use one of the following make targets:
| Target | Description | Approximate time |
|---|---|---|
| (default) | Compile the NF | <1min |
| | Run the NF using recommended arguments | <1min to start (stop with Ctrl+C) |
| `symbex validate` | Verify the NF only | <1min to symbex, <1h to validate |
| | Verify the NF + DPDK + driver | <1h to symbex, hours to validate |
| | Verify the NF + DPDK + driver + NFOS (full stack) | <1h to symbex, hours to validate |
| | Count LoC in the NF | <1min |
| | Count LoC in the specification | <1min |
| | Count LoC in the NFOS | <1min |
| | Count LoC in libVig | <1min |
| | Count LoC in DPDK (not drivers) | <1min |
| | Count LoC in the ixgbe driver | <1min |
| | Count LoC in KLEE-uClibc | <1min |
| | Benchmark the NF's throughput | <15min |
| | Benchmark the NF's latency | <5min |
| | Build an NFOS ISO image runnable in a VM | <1min |
| `nfos-multiboot1` | Build an NFOS image suitable for netboot | <1min |
| `nfos-run` | Build and run NFOS in a qemu VM | <1min to start |
To run with your own arguments, compile, then run `sudo ./build/app/nf -- -?`, which will display the command-line arguments you need to pass to the NF.
To verify using a pay-as-you-go specification, add `VIGOR_SPEC=paygo-your_spec.py` before a verification command; the spec name must begin with `paygo-` and end with `.py`, e.g. `VIGOR_SPEC=paygo-broadcast.py make symbex validate`.
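The naming rule can be expressed as a simple pattern match; the helper below is hypothetical, written only to illustrate the convention, and is not part of the Vigor toolchain:

```shell
# Check that a spec filename follows the paygo-*.py naming convention
# (illustrative helper only; not part of the Vigor toolchain)
is_paygo_spec() {
  case "$1" in
    paygo-*.py) echo "valid" ;;
    *)          echo "invalid" ;;
  esac
}

is_paygo_spec "paygo-broadcast.py"   # valid
is_paygo_spec "spec.py"              # invalid: missing the paygo- prefix
```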
Run `make new-nf` at the root of the repository, and answer the prompt.
The generated files contain inline comments describing how to write your NF code and your specification.
Besides the NF folders mentioned above, the repository contains:

- `.git*`: Git-related files
- `.clang-format`: Settings file for the clang-format code formatter
- `.travis*`: Travis-related files for continuous integration
- `Docker*`: Docker-related files to build an image
- `Makefile*`: Makefiles for the NFs
- `README.md`: This file
- `bench`: Benchmarking scripts, used by the benchmarking make targets
- `codegen`: Code generators, used as part of the Vigor build process
- `doc`: Documentation files
- `pxe-boot.sh`, `grub.cfg`, `linker.ld`: NFOS-related files
- `libvig`: The libVig folder, containing models and the NFOS
- `nf*`: Skeleton code for Vigor NFs
- `setup*`: Setup script and related files
- `template`: Template for new Vigor NFs (see “Create your own Vigor NF” above)
- `validator`: The Vigor Validator
Vigor includes an NF operating system (NFOS) that is simple enough to be symbolically executed, apart from its trusted boot code.
You can run NFOS either in a virtual machine, using qemu, or on bare metal, using PXE boot.
Note that, since NFOS cannot read command-line arguments, all NF arguments are compiled into the image during the build. You can set the NF arguments in the respective NF's Makefile.
In order to run NFOS inside a virtual machine, your kernel must allow direct device access through VFIO. For that, pass `intel_iommu=on iommu=pt` to your Linux kernel via the command-line arguments in your bootloader.
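On a GRUB-based Ubuntu system, this typically means editing the kernel command line in `/etc/default/grub` (a common default location; our assumption, as this document does not name the file) and regenerating the configuration:

```shell
# /etc/default/grub -- append the IOMMU flags to the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on iommu=pt"

# Then regenerate the GRUB configuration and reboot:
#   sudo update-grub && sudo reboot
```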
Further, you need to load the `vfio-pci` module to forward your NICs to the VM:
$ modprobe vfio-pci
Then, bind the NICs you intend the NF to use to `vfio-pci` (`RTE_SDK` is the path to your DPDK folder):
$ $RTE_SDK/usertools/dpdk-devbind.py -b vfio-pci <nic1> <nic2>
where `<nic1>` and `<nic2>` are the PCI addresses of the NICs you want to bind.
You can find the PCI addresses of your NICs using `lspci`.
:warning: Warning: after the next step your terminal will stop responding. Make sure you have a spare terminal open on this machine.
Finally, to run the NF with NFOS in a VM, go into the NF directory, e.g. `vigor/vignat`, and run:
$ make nfos-run
This will build the NF with DPDK, the device driver, and NFOS; produce a bootable ISO image; and start a qemu machine executing that image. Note that NFOS ignores any input, so your terminal will show the NFOS output and stop responding. You will need to kill the qemu process from a different terminal.
In order to run NFOS on bare metal, you will need an extra Ethernet connection on the machine intended to run NFOS (the DUT, for “device under test”, from now on) and a PXE server machine.
You will need the `nfos-x86_64-multiboot1.bin` image on the machine that will serve PXE requests. You can build it either directly on that machine, or build it on the DUT and copy it over.
To build the image, run:
$ make nfos-multiboot1
To serve the image, run on the machine intended as a PXE server:
$ ./pxe-boot.sh nfos-x86_64-multiboot1.bin
This will start a DHCP server and a PXE server and wait for network boot requests. As our image is larger than 64 KB, we use a two-step boot process: we first boot an `ipxe/undionly.kpxe` image, which then fetches the NFOS image and boots it.
In the BIOS, configure the DUT to boot from the network, using the interface connected to the PXE server. When you reboot it, you should see some activity in the PXE server output, and NFOS output on the DUT (printing the NF configuration). At this point you can stop the PXE boot server: NFOS is running!
To maximize and stabilize performance, we recommend the following Linux kernel switches in your bootloader's kernel command line:
These settings were obtained from experience and various online guides, such as Red Hat's low-latency tuning guides. They are designed for Intel-based machines; you may have to tweak some of them if you use AMD CPUs.
Please do not leave these settings enabled when you are not benchmarking, as they will effectively cause your machine to always consume a lot of power.
This table does not include the hugepages settings.
- Isolate the specified CPU cores so Linux won't schedule anything on them; replace with at least one core you will then use to run the NFs
- Disable ACPI interrupts
- Don't print backtraces when a process appears to be locked up (such as an NF that runs for a while)
- Do not allow the CPU to enter low-power states
- Do not allow the kernel to use a wait mechanism in the idle routine
- Ignore corrected errors that cause periodic latency spikes
- Prevent the Intel driver from managing power state and frequency
- Disable CPU idle time management
- Disable PCIe Active State Power Management, forcing PCIe devices to always run at full power
- Ignore BIOS warnings about CPU frequency
- Set the IOMMU to passthrough, required on some CPUs for DPDK's huge pages to run at full performance
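As an illustrative sketch only: the parameter names below are standard Linux kernel switches that we believe match the effects listed above, not names taken from this document (except `iommu=pt`, which also appears in the VFIO section), so verify each against your kernel's documentation before use:

```shell
# /etc/default/grub -- illustrative performance-tuning command line for an
# Intel machine. Every parameter name here is an assumption, not taken from
# this document; adjust the isolated core list (isolcpus) to your machine.
GRUB_CMDLINE_LINUX_DEFAULT="isolcpus=8 nosoftlockup mce=ignore_ce \
idle=poll intel_idle.max_cstate=0 intel_pstate=disable cpuidle.off=1 \
pcie_aspm=off iommu=pt"
```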
Q: DPDK says `No free hugepages reported in hugepages-1048576kB`; what did I do wrong?
A: Nothing wrong! This just means there are no 1GB hugepages; as long as it can find the 2MB ones, you're good.
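If you do need to reserve 2MB hugepages yourself, a common approach (our assumption; this document does not prescribe a specific method) is:

```shell
# Reserve 1024 2MB hugepages at runtime (requires root)
echo 1024 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

# Mount hugetlbfs so DPDK can map the pages
sudo mkdir -p /mnt/huge
sudo mount -t hugetlbfs nodev /mnt/huge
```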
Q: DPDK says `PMD: ixgbe_dev_link_status_print(): Port 0: Link Down`; what's up?
A: This doesn't mean the link is down; it just means the link was down at the moment DPDK checked. It usually comes up right after that, and the NF works fine.
Q: DPDK cannot reserve all of the hugepages it needs, though it can reserve some of them.
A: Reboot the machine. This is the only workaround that always works.
We depend on, and are grateful for:
This section details the justification for each figure and table in the SOSP paper; “references” are to sections of this file, and paths refer to this repository unless otherwise indicated.
The `lib` folder in the DPDK 17.11 repository
The `drivers/net/ixgbe` folder in the DPDK 17.11 repository
The `spec.py` files in each NF's folder
The `paygo-*.py` files in each NF's folder
The `symbex` make targets, as indicated in the “Vigor NFs” section
The `vigbridge` folder's code (besides `paygo-*.py`), and of
Run `patch -R < optimize.patch` in each NF's folder to revert the optimization if you want to reproduce these numbers; note that the reverted NFs no longer validate, sometimes because the specs assume that the NF always expires flows while the unoptimized NFs do not, and sometimes because the types of some variables changed slightly. Thus, we extrapolated the time of the “valid” and “assertion failed” traces to the “type mismatch” traces, since type mismatches take almost no time to detect compared to the time it takes to verify a trace.
The `fspec.ml` file of each NF; except that, to be consistent with the paper, `a < X & X < b` counts as 1.
The `paygo-*.py` files in each NF's folder
`parallel`, then divide by the number of paths.