OVS (Open vSwitch) 3.2 added support for SRv6 [1], so let’s try using it. Put simply, you create a port with type=srv6 as shown below. Since it is implemented on the same framework as existing tunneling protocols such as VXLAN and Geneve, you specify the two tunnel endpoints (corresponding to the SIDs on each side in SRv6) with options:remote_ip and options:local_ip. In addition, SRv6 has a dedicated option, options:srv6_segs, for setting intermediate routers as a segment list. Both IPv4 and IPv6 are supported as inner packets.

ovs-vsctl add-br br0
ovs-vsctl add-port br0 srv6_0 -- \
  set int srv6_0 type=srv6  \
  options:remote_ip=fc00:100::1 \
  options:srv6_segs="fc00:100::1,fc00:200::1,fc00:300::1"

OVS has two main types of datapath, kernel space and userspace, but SRv6 is only supported in the userspace datapath [2]. This means you need to deploy it with mechanisms like DPDK or AF_XDP [3].

Feature   Linux upstream   Linux OVS tree   Userspace
GRE       3.11             1.0              2.4
VXLAN     3.12             1.10             2.4
Geneve    3.18             2.4              2.4
SRv6      NO               NO               3.2
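
To double-check which datapath a bridge is actually using, you can query both OVSDB and the datapath layer. A minimal sketch, assuming a bridge named br0 already exists (substitute your own bridge name):

```shell
# Empty output means the default kernel datapath; "netdev" means userspace.
ovs-vsctl get bridge br0 datapath_type

# Lists the active datapaths: "system@ovs-system" is the kernel datapath,
# "netdev@ovs-netdev" is the userspace one.
ovs-appctl dpif/show
```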

So how do you run it? I simplified a mininet lab script [4] into a plain command sequence. Let’s try running it with the following configuration.

First, confirm that OVS 3.2 or later is running. Also, since this article uses AF_XDP, configure with ./configure --enable-afxdp at compile time.

ovs-vswitchd -V
# ovs-vswitchd (Open vSwitch) 3.2.0
# DPDK 22.11.1

Next, create two veth pairs. One corresponds to the underlay network, the other to the overlay network.

ip link add p1 type veth peer name ovs-p1
ip link add p2 type veth peer name ovs-p2

Next, create two bridges in OVS. Similarly, one corresponds to the underlay network, the other to the overlay network. Set datapath_type=netdev to run the datapath in userspace. Since we’ll experiment with hardcoded packets using Scapy this time, fix the MAC addresses.

ovs-vsctl -- add-br br1 -- set Bridge br1 protocols=OpenFlow10 \
  fail-mode=secure datapath_type=netdev
ovs-vsctl -- add-br br2 -- set Bridge br2 protocols=OpenFlow10 \
  fail-mode=secure datapath_type=netdev
ovs-vsctl set bridge br2 other_config:hwaddr=aa:55:aa:55:00:00

Assign ports to each bridge. This time we’re bringing packets to userspace with afxdp. Of course, you could use DPDK as well.

ovs-vsctl add-port br1 ovs-p1 -- set interface ovs-p1 type="afxdp"
ovs-vsctl add-port br2 ovs-p2 -- set interface ovs-p2 type="afxdp"

Create a type=srv6 port on bridge br1.

ovs-vsctl add-port br1 srv6_0 -- set interface srv6_0 type=srv6 \
  options:local_ip=fc00:100::100 options:remote_ip=fc00:100::1
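
If you also want the packet to traverse intermediate routers, the options:srv6_segs option from the beginning of the article should apply to this port as well. A hedged sketch with made-up segment addresses (the list order follows the intro example; confirm the exact semantics against the OVS documentation for your version):

```shell
# Hypothetical segment list; fc00:200::1 and fc00:300::1 stand in for
# intermediate SRv6 routers on the path to the remote endpoint.
ovs-vsctl set interface srv6_0 \
  options:srv6_segs="fc00:100::1,fc00:200::1,fc00:300::1"
```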

Port configuration is done. It should be in the following state.

ovs-vsctl show
# a1fee738-93fe-42ee-8420-a283807c3d87
#     Bridge br1
#         fail_mode: secure
#         datapath_type: netdev
#         Port br1
#             Interface br1
#                 type: internal
#         Port ovs-p1
#             Interface ovs-p1
#                 type: afxdp
#         Port srv6_0
#             Interface srv6_0
#                 type: srv6
#                 options: {local_ip="fc00:100::100", remote_ip="fc00:100::1"}
#     Bridge br2
#         fail_mode: secure
#         datapath_type: netdev
#         Port br2
#             Interface br2
#                 type: internal
#         Port ovs-p2
#             Interface ovs-p2
#                 type: afxdp
#     ovs_version: "3.2.0"

Next, inject flow rules. They simply forward traffic between ovs-p1 and srv6_0 on br1, and between ovs-p2 and LOCAL on br2.

ovs-ofctl add-flow br1 in_port=ovs-p1,actions=output:srv6_0
ovs-ofctl add-flow br1 in_port=srv6_0,actions=output:ovs-p1
ovs-ofctl add-flow br2 in_port=LOCAL,actions=output:ovs-p2
ovs-ofctl add-flow br2 in_port=ovs-p2,actions=output:LOCAL
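
You can verify that the rules were installed with ovs-ofctl dump-flows:

```shell
# Each bridge should list the two rules added above,
# along with their packet/byte match counters.
ovs-ofctl dump-flows br1
ovs-ofctl dump-flows br2
```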

Bring all ports up.

for iface in ovs-p1 ovs-p2 br1 br2 p1 p2; do
  ip link set dev $iface up
done

Finally, assign local_ip to bridge br2, and statically set the MAC address for remote_ip. Of course, if local_ip and remote_ip belong to different subnets, configure the route accordingly.

ip -6 addr add fc00:100::100/64 dev br2
ovs-appctl tnl/arp/set br2 fc00:100::1 aa:55:aa:55:00:01
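
The static neighbor entry set above can be inspected via the userspace tunnel neighbor cache. A sketch, assuming your build exposes tnl/neigh/show (in recent OVS releases tnl/arp/set is an alias of tnl/neigh/set):

```shell
# Dumps the tunnel neighbor cache; the fc00:100::1 -> aa:55:aa:55:00:01
# entry set above should appear here.
ovs-appctl tnl/neigh/show
```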

Now that we’re ready, let’s send a packet. Start a capture on p2 beforehand, and you can confirm that the packet is encapsulated in the p1 -> p2 direction, with the SRH inserted as expected. (Only the fields of interest are excerpted.)

python3 -c "from scapy.all import *; \
            pkt=Ether(dst='aa:55:aa:55:00:ff',src='aa:55:aa:55:00:ee') \
                /IP(dst='192.168.1.1',src='192.168.3.3')/ICMP(); \
            sendp(pkt, iface='p1')"

tcpdump -i p2 -w p2.pcap
tshark -V -r p2.pcap
# Frame 1: 106 bytes on wire (848 bits), 106 bytes captured (848 bits)
# Ethernet II, Src: aa:55:aa:55:00:00 (aa:55:aa:55:00:00), Dst: aa:55:aa:55:00:01 (aa:55:aa:55:00:01)
# Internet Protocol Version 6, Src: fc00:100::100, Dst: fc00:100::1
#     Routing Header for IPv6 (Segment Routing)
#         Next Header: IPIP (4)
#         Length: 2
#         [Length: 24 bytes]
#         Type: Segment Routing (4)
#         Segments Left: 0
#         Last Entry: 0
#         Flags: 0x00
#         Tag: 0000
#         Address[0]: fc00:100::1
# Internet Protocol Version 4, Src: 192.168.3.3, Dst: 192.168.1.1
# Internet Control Message Protocol

In the reverse direction, you can confirm it’s decapsulated.

python3 -c "from scapy.all import *; \
            pkt=Ether(src='aa:55:aa:55:00:01',dst='aa:55:aa:55:00:00') \
                /IPv6(src='fc00:100::1', dst='fc00:100::100') \
                /IPv6ExtHdrSegmentRouting(addresses=['fc00:100::100']) \
                /IP(dst='192.168.5.5',src='192.168.5.6')/ICMP(); \
            sendp(pkt, iface='p2')"

tcpdump -i p1 -w p1.pcap
tshark -V -r p1.pcap
# Frame 1: 42 bytes on wire (336 bits), 42 bytes captured (336 bits)
# Ethernet II, Src: 00:00:00_00:00:00 (00:00:00:00:00:00), Dst: 00:00:00_00:00:00 (00:00:00:00:00:00)
# Internet Protocol Version 4, Src: 192.168.5.6, Dst: 192.168.5.5
# Internet Control Message Protocol

That’s it. As noted above, the kernel datapath is not implemented. Microbenchmarks are discussed in a video [5]: roughly 20 Mpps in multi-flow cases and 4-7 Mpps in single-flow cases.