Using the IPv4 Code Option

This page is under construction.

Introduction

An SPP is a PlanetLab node that combines the high performance of Network Processors (NPs) with the programmability of general-purpose processors (GPEs). The Hello GPE World Tutorial page described how to use a GPE. This page describes the SPP's fastpath (NP) features using the IPv4 code option as an example. Those features include:

  • Bandwidth, queue, filter and memory resources
  • Logical interfaces (meta-interfaces) within each physical interface
  • Packet scheduling queues and their binding to meta-interfaces
  • Filters for forwarding packets to queues

Like any PlanetLab node, the SPP runs a server on each GPE that allows a user to allocate a subset of a node's resources called a slice. Although an SPP user can prototype a new router by writing a socket program for the GPE, the SPP's high performance can only be tapped by using an SPP's NP. That is, the user adopts a fastpath-slowpath packet processing paradigm, where the fastpath uses NPs to process data packets at high speed while the slowpath uses GPEs to handle control and exception packets. The SPP's IPv4 code option is an example of this fastpath-slowpath paradigm.
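
The split can be pictured as a simple classification step: data packets that match an installed filter are handled on the NP, while control and exception packets are handed to the GPE. The sketch below is illustrative only; the packet format and names are invented and are not part of the SPP software.

  # Illustrative sketch of the fastpath/slowpath split (not SPP code; the toy
  # packet format and destination set are invented for this example).
  FASTPATH_DESTS = {"10.1.1.2", "10.2.2.2"}   # destinations with installed filters

  def classify(packet):
      """packet: toy dict with 'version', 'dest' and 'is_control' keys."""
      if packet["is_control"]:
          return "slowpath (GPE): control traffic"
      if packet["version"] != 4:
          return "slowpath (GPE): exception - bad header"
      if packet["dest"] in FASTPATH_DESTS:
          return "fastpath (NP): forward at line rate"
      return "slowpath (GPE): exception - no matching filter"

  print(classify({"version": 4, "dest": "10.1.1.2", "is_control": False}))
  print(classify({"version": 4, "dest": "192.0.2.9", "is_control": False}))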

This page describes a simple use of the IPv4 code option and, in doing so, illustrates the fastpath-slowpath paradigm that would appear in any high-speed implementation. Configuring the SPP to process IPv4 packets using the IPv4 code option involves these steps:

  • Allocate (and configure) a fastpath (FP)
  • Create one or more meta-interfaces (MIs)
  • Create and configure packet queues, and bind each queue to an MI
  • Install filters to direct incoming packets to packet queues

A fastpath creation request specifies the SPP resources you want, such as interface bandwidths, queues, filters and memory. Once you are granted those resources, you define meta-interfaces within the fastpath and organize the meta-interfaces and other resources for packet forwarding.
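
Conceptually, such a request is just a small bundle of resource amounts. The sketch below gives one way to picture it; the field names and values are invented for illustration and do not correspond to actual scfg options.

  # Hypothetical picture of a fastpath reservation request.  The field names are
  # made up for illustration; they are NOT the actual scfg arguments.
  from dataclasses import dataclass

  @dataclass
  class FastpathRequest:
      code_option: str      # e.g., "IPv4"
      bandwidth_mbps: int   # total interface bandwidth requested
      num_queues: int       # packet queues
      num_filters: int      # forwarding-table entries
      memory_bytes: int     # filter/statistics memory

  request = FastpathRequest("IPv4", 100, 16, 32, 1 << 20)
  print(request)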

The SPP Fastpath

A network of nodes containing SPPs is formed by connecting the nodes with UDP tunnels. This network forms a substrate which can carry packets from one or more user slices. The nodes can be SPPs, hosts (PlanetLab and non-PlanetLab), or any packet processors that support this paradigm. A UDP tunnel has two endpoints, each defined by an (IP address, UDP port) pair. Note that this address-port pair is from the addressing domain of the substrate. Also, note that an SPP node:

  • Has multiple physical interfaces, each with an IP address.
  • Can support concurrent traffic from multiple SPP slices (users) at each interface.

[[Image:sub-net-packet.png|right|300px|border| Substrate Packet]]

A packet that travels through this network of SPPs has an outer (substrate) header, an inner (slice) header and a payload (packet content); i.e., a slice's packet is encapsulated in a substrate IP/UDP packet. If an SPP has been configured to process the packet using the fastpath, the packet is sent to the NP engine (NPE), where the substrate header is removed to expose the slice's packet. The NPE processes the slice's packet and re-encapsulates it in an IP/UDP packet before forwarding it out of one of its interfaces. In the case of an IPv4 slice, an IPv4 packet is encapsulated in another IPv4 packet.
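
To make the encapsulation concrete, the sketch below sends a hand-built inner IPv4 packet as the payload of an ordinary UDP datagram, letting the operating system add the outer (substrate) IP/UDP header; the receiver then sees the inner packet once the outer header is stripped. The loopback endpoint, helper names and header values are invented for illustration; this is not SPP code.

  # Minimal sketch of the encapsulation idea using ordinary UDP sockets.
  # The slice's IPv4 packet travels as the UDP payload; the kernel supplies the
  # outer (substrate) IP/UDP header.  The endpoint below is arbitrary.
  import socket, struct

  ENDPOINT = ("127.0.0.1", 5555)    # example (IP address, UDP port) tunnel endpoint

  def inner_ipv4_packet(src, dst, payload):
      """Build a toy 20-byte IPv4 header (checksum left 0 for brevity) plus payload."""
      ver_ihl, tos, total_len = 0x45, 0, 20 + len(payload)
      ident, frag, ttl, proto, csum = 0, 0, 64, 253, 0   # 253 = experimental protocol
      header = struct.pack("!BBHHHBBH4s4s", ver_ihl, tos, total_len, ident, frag,
                           ttl, proto, csum,
                           socket.inet_aton(src), socket.inet_aton(dst))
      return header + payload

  rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  rx.bind(ENDPOINT)                 # this side plays the role of a meta-interface
  tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  tx.sendto(inner_ipv4_packet("10.0.0.1", "10.0.0.2", b"hello"), ENDPOINT)

  data, _addr = rx.recvfrom(2048)   # the OS has already removed the outer IP/UDP header
  version, ihl = data[0] >> 4, data[0] & 0x0F
  print("inner packet: IPv%d, %d-byte header, payload %r" % (version, ihl * 4, data[ihl * 4:]))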

Meta-Interfaces, Filters and Queues

Since the primary function of a router is to forward incoming packets to the next destination (or next hop), a router has to have enough interfaces to accept packets from its neighboring nodes and forward them to the next node. A meta-interface (MI) is a logical interface that is bound to an endpoint, an (IP address, UDP port) pair. The IP address is the address of a physical interface, and the UDP port number is chosen to distinguish the user's traffic from other traffic that shares the same physical interface.

[[ Image:GEC-4-mi-R1-H1.png | thumb | right | 300px | Meta-Interfaces, Filters and Queues ]]

The figure (right) shows some of the paths that a packet from H1 can take through the FP of the R1 router. For simplicity, the diagram doesn't show resources used by packets from other nodes. The figure shows these FP features:

  • There are five meta-interfaces labeled m0-m4.
  • The blocks labeled f19, f6, f23, f0 and f1 are filters that direct slice packets to queues q48, q9, q3 and q6. (Queues q10, q11 and q0 are not used by packets coming from H1.)
  • More than one queue (e.g., q9, q10, q11) can be bound to one meta-interface (see the sketch after this list).
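
One way to picture these relationships is as three small tables: filters select a queue, each queue is bound to a meta-interface, and each meta-interface is an (IP address, UDP port) endpoint. The sketch below is a toy model with invented identifiers and addresses, not a real SPP configuration.

  # Toy model of the filter -> queue -> meta-interface relationship (illustrative
  # only; the identifiers, addresses and ports do not describe a real SPP).
  meta_interfaces = {                    # MI -> substrate (IP address, UDP port) endpoint
      "m1": ("192.0.2.10", 20001),
      "m3": ("192.0.2.10", 20003),
  }
  queues = {                             # queue -> (bound MI, share of that MI's bandwidth)
      "q0": ("m1", 0.5),
      "q9": ("m1", 0.5),                 # two queues bound to the same meta-interface
      "q1": ("m3", 1.0),
  }
  filters = {                            # filter -> (inner destination address, queue)
      "f1": ("10.1.1.2", "q0"),
      "f2": ("10.2.2.2", "q1"),
  }

  def lookup(inner_dest):
      """Return the (queue, MI endpoint) chosen for a packet with this inner destination."""
      for _fid, (dest, qid) in filters.items():
          if dest == inner_dest:
              mi, _share = queues[qid]
              return qid, meta_interfaces[mi]
      return None, None                  # no matching filter: exception traffic for the GPE

  print(lookup("10.2.2.2"))              # ('q1', ('192.0.2.10', 20003))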

A complete diagram would show other features:

  • The GPE can inject packets into the FP.
  • The complete set of filters forms the router's forwarding table.
  • Exception packets (e.g., bad slice packet header) are sent to the GPE for further processing.
  • Traffic through queues can be monitored using stats indices which in turn can be displayed.

This page shows how to:

  • Reserve FP resources and then create an FP for the IPv4 code option.
  • Create FP endpoints, that is, meta-interfaces (MIs), with bandwidth guarantees.
  • Create and configure queues with drop thresholds and bandwidth guarantees.
  • Bind queues to MIs.
  • Install IPv4 filters.

Utilities and Daemons

We will use several utilities and daemons:

  • scfg (slice configuration)
    • Used to get SPP information, reserve resources, configure queues and allocate/free resources
  • ip_fpc (IPv4 filter configuration)
    • Used to create IPv4 filters
  • ip_fpd (IPv4 daemon)
    • Used to create an IPv4 fastpath and process IPv4 local-delivery and exception traffic
  • sliced (slice statistics daemon)
    • Used to process statistics monitoring requests

The executables are in the directory /usr/local/bin/ on each SPP slice.

Example 1

[[Image:ipv4-slice-example-1.png|right|300px|border| IPv4 Slice Example 1]]

The figure (above) shows the main parts of the IPv4 slice described below. The symbols S, D, S' and D' are used to denote hosts and the traffic processes running on those hosts. The example includes one SPP (labeled R), which contains the GPE host D', and three other hosts (labeled S, S' and D). There are two traffic flows: 1) a unidirectional flow from S to D; and 2) a bidirectional flow between S' and D'. The slice traffic between S' and D' is ping traffic (ICMP echo request packets from S' to D', and ICMP echo reply packets from D' to S').

The IPv4 slice uses four meta-interfaces (m0, m1, m2, and m3), four queues (q0, q1, q8 and q9) and three filters (f0, f1 and f2). In the flow between S' and D':

  • An incoming packet containing an IPv4 ping packet from S' is stripped of its outer header and directed to queue q8 by filter f0.
  • The ip_fpd daemon process (labeled D') running on a GPE reads the echo request packet from queue q8, forms an echo reply packet, and injects the packet into the FP (a sketch of this step follows the list below).
  • The echo reply packet will be placed into queue q0 by filter f1.
  • The packet is encapsulated into an IP/UDP packet and sent out MI m1.
  • Finally, the encapsulated ICMP echo reply packet arrives back at S'.
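
The echo-reply step performed by ip_fpd can be understood from a small stand-alone example: take the ICMP portion of an echo request, change the type from 8 (echo request) to 0 (echo reply), and recompute the ICMP checksum. The sketch below illustrates just that step; it is not the actual ip_fpd implementation.

  # Illustrative sketch of forming an ICMP echo reply from an echo request
  # (stand-alone example, not ip_fpd code).
  import struct

  def icmp_checksum(data):
      if len(data) % 2:                  # pad to an even number of bytes
          data += b"\x00"
      total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
      total = (total & 0xFFFF) + (total >> 16)
      total = (total & 0xFFFF) + (total >> 16)
      return (~total) & 0xFFFF

  def echo_reply(request):
      """request: raw ICMP echo request (type 8, code 0), including its payload."""
      _type, code, _csum, ident, seq = struct.unpack("!BBHHH", request[:8])
      body = struct.pack("!BBHHH", 0, code, 0, ident, seq) + request[8:]   # type 0 = reply
      return body[:2] + struct.pack("!H", icmp_checksum(body)) + body[4:]

  # Example: a 'hello' echo request with id 1, seq 1 (checksum field left 0 here).
  req = struct.pack("!BBHHH", 8, 0, 0, 1, 1) + b"hello"
  print(echo_reply(req).hex())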

In the flow from S to D:

  • An incoming slice packet from S to MI m2 is stripped of its outer header and directed to queue q2 by filter f2.
  • The packet is encapsulated into an IP/UDP packet and sent out MI m3.
  • Finally, the encapsulated IP/UDP packet arrives at D.

This example will show the basics of using the IPv4 slice. It can be easily extended in a number of ways:

  • Include more flows and more variety of flows.
  • Add filters to further discriminate among packets when forwarding.
  • Add queues and configure filters to give some flows preferential treatment.
  • Add another SPP and more hosts to form a larger network.

Example 2 includes some of these extensions, and the exercises explore them further.

Preparation

XXXXX getting files, etc


Set Up the SPP

The setupIPex1.sh script is used to configure the SPP for this example:


>>>>> HERE <<<<<


XXXXX setup script setupFP1.sh

  • MIs
    • Implicit MI 0 for local delivery (LD) and exception (EX) traffic
    • One MI for each FP endpoint (EP), plus MI 0 (LD, EX)
  • Filters
    • direct incoming packet to queue
    • For LD: "--txdaddr 0 --txdport 0 --qid 0"
  • Queues
    • each queue is bound to an MI
    • each MI can have 1 or more queues
    • q$N for EX and q${N-1} for LD where $N is #queues
    • pkt scheduling: weighted fair queueing (see the sketch after this list)
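
The weighted fair queueing note above can be illustrated with a simplified, self-clocked variant: each queue has a weight, every packet is stamped with a virtual finish time on arrival, and the scheduler always serves the backlogged queue whose head-of-line packet finishes first. The sketch below is a generic illustration of that idea, not the SPP's actual scheduler.

  # Self-clocked approximation of weighted fair queueing (illustration only).
  from collections import deque

  class WfqScheduler:
      def __init__(self, weights):
          self.weights = weights                       # queue id -> weight
          self.queues = {q: deque() for q in weights}  # queue id -> (finish tag, length)
          self.finish = {q: 0.0 for q in weights}      # finish tag of last arrival per queue
          self.vtime = 0.0                             # scheduler's virtual time

      def enqueue(self, qid, pkt_len):
          start = max(self.vtime, self.finish[qid])
          self.finish[qid] = start + pkt_len / self.weights[qid]
          self.queues[qid].append((self.finish[qid], pkt_len))

      def dequeue(self):
          backlogged = [q for q in self.queues if self.queues[q]]
          if not backlogged:
              return None
          qid = min(backlogged, key=lambda q: self.queues[q][0][0])
          ftime, pkt_len = self.queues[qid].popleft()
          self.vtime = ftime                           # clock advances with the served packet
          return qid, pkt_len

  sched = WfqScheduler({"q0": 2, "q8": 1})             # q0 is promised twice q8's share
  for _ in range(3):
      sched.enqueue("q0", 1000)
      sched.enqueue("q8", 1000)
  print([sched.dequeue()[0] for _ in range(6)])        # q0 drains roughly twice as fast while both are backlogged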

Send Traffic

XXXXX

Tear Down the SPP

XXXXX teardown script teardownFP1.sh

Example 2

XXXXX

  • simplest complete example
  • all traffic
  • sliced