== Introduction ==

Like any ''PlanetLab node'', a user can allocate a subset of an SPP's resources called a ''slice''.
An SPP slice user can either use a ''fastpath-slowpath'' packet processing paradigm, which uses both a network processor (NPE) and a general-purpose processor (GPE), or a ''slowpath-only'' paradigm, in which packet processing is handled completely by a socket program running on a GPE.
This page describes how to use the GPE-only approach.
== Pinging SPP External Interfaces ==

Unlike most PlanetLab nodes, an SPP has multiple external interfaces.
In the GENI deployment, some of those interfaces have Internet2 IP addresses and some are attached to point-to-point links going directly to external interfaces of other SPPs.
This section introduces you to some of the Internet2 interfaces.

Let's try to ''ping'' some of those Internet2 interfaces.
Enter one of the following ''ping'' commands (omit the comments):
<pre>
    ping -c 3 64.57.23.210         # Salt Lake City interface 0
    ping -c 3 64.57.23.214         # Salt Lake City interface 1
    ping -c 3 64.57.23.218         # Salt Lake City interface 2
    ping -c 3 64.57.23.194         # Washington DC interface 0
    ping -c 3 64.57.23.198         # Washington DC interface 1
    ping -c 3 64.57.23.202         # Washington DC interface 2
    ping -c 3 64.57.23.178         # Kansas City interface 0
    ping -c 3 64.57.23.182         # Kansas City interface 1
    ping -c 3 64.57.23.186         # Kansas City interface 2
</pre>

For example, my output from the first ''ping'' command looks like this:

<pre>
    > ping -c 3 64.57.23.210
    PING 64.57.23.210 (64.57.23.210) 56(84) bytes of data.
    64 bytes from 64.57.23.210: icmp_seq=1 ttl=56 time=67.5 ms
    64 bytes from 64.57.23.210: icmp_seq=2 ttl=56 time=55.9 ms
    64 bytes from 64.57.23.210: icmp_seq=3 ttl=56 time=59.0 ms

    --- 64.57.23.210 ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 2002ms
    rtt min/avg/max/mdev = 55.949/60.823/67.511/4.895 ms
</pre>

Note that you may not be able to ''ping'' an SPP external interface.
Some reasons why it might fail are:

# Your host doesn't have ''ping'' installed.  This is not typical.
# The SPP interface is down.
# Your network blocks ''ping'' traffic.
# Your network provider doesn't route Internet2 addresses.

In the first case, you will get a ''command not found'' error message; the ''ping'' command is usually located at ''/bin/ping''.
See your system administrator if you can't find ''ping''.
In the other cases, your ''ping'' command will eventually return with a ''100% packet loss'' message.
In the last case, running the command ''traceroute 64.57.23.210'' will give a ''Network unreachable'' indication (the last router is marked ''!N'').

If you are unsuccessful with one interface, try to ''ping'' the interface of a different SPP.

However, you can always get around these problems (except for an SPP being down) by issuing the ''ping'' command from a PlanetLab node.
We discuss how to log into a PlanetLab node in ''[[Using the IPv4 Code Option]]''.
== DNS Names of SPP External Interfaces ==

{| border=1 cellspacing=0 cellpadding=3 align=right
! SPP || Ifn || IP Address || DNS Name
|- align="center"
| KANS || 0 || 64.57.23.178 || sppkans1.arl.wustl.edu
|- align="center"
|      || 1 || 64.57.23.182 || sppkans2.arl.wustl.edu
|- align="center"
|      || 2 || 64.57.23.186 || sppkans3.arl.wustl.edu
|- align="center"
| WASH || 0 || 64.57.23.194 || sppwash1.arl.wustl.edu
|- align="center"
|      || 1 || 64.57.23.198 || sppwash2.arl.wustl.edu
|- align="center"
|      || 2 || 64.57.23.202 || sppwash3.arl.wustl.edu
|- align="center"
| SALT || 0 || 64.57.23.210 || sppsalt1.arl.wustl.edu
|- align="center"
|      || 1 || 64.57.23.214 || sppsalt2.arl.wustl.edu
|- align="center"
|      || 2 || 64.57.23.218 || sppsalt3.arl.wustl.edu
|}

The SPP's external interfaces also have DNS names.
So, ''ping -c 3 sppsalt1.arl.wustl.edu'' works as well as ''ping -c 3 64.57.23.210''.
The table (right) shows the DNS names of the Internet external interfaces.
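If your resolver knows these names, you can repeat the ''ping'' test from the previous section using a DNS name instead of a numeric address; for example:

<pre>
    ping -c 3 sppsalt1.arl.wustl.edu       # Salt Lake City interface 0 (64.57.23.210)
</pre>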
  
 
== Logging Into an SPP's GPE ==

Now, let's try to log into the SPP interface that you were able to ''ping''.
The example below assumes that interface was 64.57.23.210; that is, interface 0 of the Salt Lake City SPP.
Note the following:

* You must use ''ssh'' to log into an SPP.
* When you ''ssh'' to an SPP's external interface, you will actually get logged into a GPE of the SPP.
* Furthermore, you will be logging into your slice in a GPE.
* Even if your network blocks your ''ping'' packets, you should be able to log into a GPE as long as there is a route to the SPP's external interface address.
* You can ''ssh'' to any of the SPP's external interfaces.

To log into a GPE at the Salt Lake City SPP, I would enter:

<pre>
    ssh pl_washu_sppDemo@64.57.23.210
</pre>

where my slice name is ''pl_washu_sppDemo''.
Thus, the general format is:

<pre>
    ssh YOUR_SLICE@SPP_ADDRESS
</pre>

where ''YOUR_SLICE'' is the slice you were assigned during account registration, and ''SPP_ADDRESS'' is the IP address of an SPP external interface.

During the login process, you will be asked to enter your RSA passphrase unless ''ssh-agent'' or an equivalent utility (e.g., ''keychain'', ''gnome-keyring-daemon'') is holding your private RSA key.

    host> ssh pl_washu_sppDemo@SPP_ADDRESS
    Enter passphrase for key '/home/.../LOGIN_NAME/.ssh/id_rsa':
        '''... Respond with your passphrase ...'''
    Last login:  ... Previous login information ...
    [YOUR_SLICE@SPP_ADDRESS ~]$

If the SSH daemon asks you for your password, you will have to call ''ssh'' using the ''-i KEY_FILE'' argument like this:

<pre>
    ssh -i ~/.ssh/id_rsa YOUR_SLICE@SPP_ADDRESS
        ... The SSH daemon will ask for your passphrase ...
</pre>

== Using ''ssh-agent'' ==

This section is a very brief explanation of how to use ''ssh-agent''.
You can skip this section if you are already using such an agent.
If you have never used such an agent, note that there are several alternatives to the procedure described below; our description is meant to be a simple cookbook procedure.
See the ''ssh-agent'' and ''ssh-add'' man pages or the web for more details.

The basic idea is to run ''ssh-agent'', a daemon process that caches private keys and listens for requests from SSH clients needing a private-key computation.
Then, run the ''ssh-add'' command to add your private key to your agent's cache.
This is only done once after you start the SSH agent.
The process will ask you for your passphrase, which is used to decrypt the private key; the decrypted key is then held in main memory by the agent.

For example:

<pre>
    eval `ssh-agent`        # Notice the backquotes
    ssh-add
        ... Enter your passphrase when it prompts for it ...
</pre>

Notice that we are using backquotes (which denote command substitution) in the first line, NOT the normal forward quote characters.

In the first line, ''ssh-agent'' outputs two commands to stdout, which are then evaluated by the ''eval'' command.
These two commands set the two environment variables ''SSH_AUTH_SOCK'' and ''SSH_AGENT_PID''.
Enter the command "''printenv | grep SSH_A''", and you will get output that looks like:

<pre>
    SSH_AUTH_SOCK=/tmp/ssh-sTNf2142/agent.2142
    SSH_AGENT_PID=2143
</pre>

which says that process 2143 is your ssh-agent and that it is listening for requests on the Unix domain socket ''/tmp/ssh-sTNf2142/agent.2142''.
The ''ssh-add'' command adds your private key to the list of private keys held by ''ssh-agent''.

You can now verify that you can ''ssh'' to an SPP without entering a password or passphrase.
In fact, any subshell of the current shell will not need to enter a password when logging into an SPP as long as the agent is running, because the SSH environment variables are passed to all children of the current shell, allowing them to communicate with the same agent.
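If you want to confirm that the agent is holding your key, ''ssh-add'' also has a listing option:

<pre>
    ssh-add -l          # list fingerprints of the keys currently held by the agent
</pre>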

== The SPP Configuration Command ''scfg'' ==

After you have logged into a GPE, you can use the ''scfg'' command to:

* Get information about the SPP
* Configure the SPP
* Make resource reservations

You can get help information from ''scfg'' by entering one of these forms of the command:

<pre>
    scfg --help all         # show help for all commands
    scfg --help info        # show help for information commands
    scfg --help queues      # show help for queue commands
    scfg --help reserv      # show help for reservation commands
    scfg --help alloc       # show help for resource alloc/free commands
</pre>

Try getting help on the information commands by entering:

<pre>
    scfg --help info
</pre>

Your output should look like this:

<pre>
    USAGE:
    INFORMATION CMDS:
      scfg --cmd get_ifaces
            Display all interfaces
      scfg --cmd get_ifpeer --ifn N
            Display the peer of interface num N
      ... other output not shown ...
</pre>

If you get a ''command not found'' message, try entering:

<pre>
    /usr/local/bin/scfg --help info
</pre>

If the command now runs, you need to add ''/usr/local/bin'' to your PATH environment variable.
The rest of this tutorial assumes that your PATH environment variable has been set to include the directory containing the ''scfg'' command.
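For example, assuming a Bourne-style shell such as ''bash'' in your GPE login environment (check your own setup), you could extend the PATH for the current session like this:

<pre>
    export PATH=$PATH:/usr/local/bin      # make scfg visible for this shell session
</pre>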

== Getting Information About External Interfaces ==

SPPs have multiple external interfaces.
To show the attributes of all external interfaces, enter:

<pre>
    scfg --cmd get_ifaces
</pre>

For example, running this command on the Salt Lake City SPP produces:

<pre>
    Interface list:
      [ifn 0, type  "inet", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 64.57.23.210]
      [ifn 1, type  "inet", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 64.57.23.214]
      [ifn 2, type  "inet", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 64.57.23.218]
      [ifn 3, type  "p2p", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 10.1.1.2]
      [ifn 4, type  "p2p", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 10.1.2.2]
      [ifn 5, type  "p2p", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 10.1.7.2]
      [ifn 6, type  "p2p", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 10.1.8.2]
</pre>

This output shows:

* There are seven external interfaces numbered from 0 to 6.
* ''type:'' There are two types of interfaces:  Internet (''inet'') and point-to-point (''p2p'').
* ''linkBW:'' The capacity of each interface is 1 Gbps (i.e., 1000000 Kbps).
* ''availBW:'' The available bandwidth of each interface is 899.232 Mbps (i.e., 899232 Kbps); that is, the portion of the capacity that hasn't already been allocated.
* ''ipAddr:'' The IP address of each interface.

== Getting Information About Peers ==

The ''type inet'' interfaces are physically connected to the Internet.
The ''type p2p'' interfaces are physically connected to other SPPs through point-to-point links.
That's why you can only ''ping'' interfaces with type ''inet'' from your host.

You can use the ''get_peer'' command to show the IP address of the interface at the other end of a point-to-point link.
For example, I would enter:

<pre>
    scfg --cmd get_peer --ifn 3
</pre>

to find out the IP address of interface 3's peer.
These seven commands will show the peer IP addresses of interfaces 0-6:

<pre>
    scfg --cmd get_peer --ifn 0
    scfg --cmd get_peer --ifn 1
    scfg --cmd get_peer --ifn 2
    scfg --cmd get_peer --ifn 3
    scfg --cmd get_peer --ifn 4
    scfg --cmd get_peer --ifn 5
    scfg --cmd get_peer --ifn 6
</pre>

Running these commands on the Salt Lake City SPP produces this output:

<pre>
    SPP Peer IP address: 0.0.0.0
    SPP Peer IP address: 0.0.0.0
    SPP Peer IP address: 0.0.0.0
    SPP Peer IP address: 10.1.1.1
    SPP Peer IP address: 10.1.2.1
    SPP Peer IP address: 10.1.7.1
    SPP Peer IP address: 10.1.8.1
</pre>

Notice that the ''p2p'' interfaces are the only ones with a peer IP address that is not 0.0.0.0.
Furthermore, these addresses have the same 10.1.x.y format as the other ''p2p'' interfaces.

== Constructing an SPP Interconnection Map ==

We can now build a complete interconnection map of the SPPs if we combine the output of the ''get_ifaces'' and ''get_peer'' commands from all SPPs.
This output is shown at the bottom of the [[The GENI SPP Configuration]] page.
The interconnection tables shown near the top of the [[The GENI SPP Configuration]] page were constructed from this output.

The Salt Lake City table is:

{| border=1 cellspacing=0 cellpadding=3 align=center
! Interface || Type || IP Address || Peer Address
|- align="center"
| 0 || inet || 64.57.23.210 || 0.0.0.0
|- align="center"
| 1 || inet || 64.57.23.214 || 0.0.0.0
|- align="center"
| 2 || inet || 64.57.23.218 || 0.0.0.0
|- align="center"
| 3 || p2p  || 10.1.1.2     || 10.1.1.1 (KC ifn 3)
|- align="center"
| 4 || p2p  || 10.1.2.2     || 10.1.2.1 (KC ifn 4)
|- align="center"
| 5 || p2p  || 10.1.7.2     || 10.1.7.1 (DC ifn 5)
|- align="center"
| 6 || p2p  || 10.1.8.2     || 10.1.8.1 (DC ifn 6)
|}

For example, the peer IP address of interface 3 is 10.1.1.1, which is the IP address of Kansas City's interface 3.
You can verify the labeling of the peer IP addresses for interfaces 4-6 by looking at the output at the bottom of the [[The GENI SPP Configuration]] page.
Below is a diagram of the SPP interconnection map:

[[Image:spp-interconnection-map.png|right|300px|border|SPP Interconnection Map]]

''scfg'' has other information commands and also commands for allocating/freeing SPP resources and managing queues.
The example below will describe some of these commands.
The page [[SPP Command Interface]] summarizes all of the commands.

== Hello GPE World ==

[[Image:hello-gpe-pkts.png|right|300px|border|Hello GPE World Packets]]

The first program we will run on a GPE is a variant of the UDP echo server.
You will run the client on your Linux host; the client will send a UDP packet containing the 6-byte C-string (including the terminating NUL byte) "hello" to the server.
The server listens on port 50000 for an incoming UDP packet.
When it receives a packet, it displays the content of the read buffer in both ASCII and hexadecimal formats and sends the "hello" string back to the client.

Going through this example should demonstrate to you that using a GPE is just like using any other general-purpose host, except that you need to set up and tear down the SPP.

Here are the steps involved in this example:

* Create the client and server executables.
* Copy the server executable and scripts to a GPE.
* Setup the SPP.
* Run the server and the client.
* Teardown the SPP.

== Create the Client and Server Executables ==

>>>>> How to get the tar file ??? <<<<<

In the command block below, we assume that you will extract the tar file into the directory ''~/hello-gpe'' in your home directory:

<pre>
    host> cd                                 # change directory to your home directory
    host> tar tf ~/Download/hello-gpe.tar    # see what is in the tar file
    host> tar xf ~/Download/hello-gpe.tar    # extract contents into ~/hello-gpe/ directory
    host> cat README                         # read about the example
    host> make                               # make the two executables
    ... Follow the instructions for doing a test using localhost ...
</pre>

You have now created two executables in the ~/hello-gpe/ directory:  ''hello-client'' and ''hello-server''.

== Copy the Server Executable and Scripts to a GPE ==

Now, create a tar file that contains the two above executables and the SPP scripts found in ~/hello-gpe/scripts/:

<pre>
    host> make spp-hello.tar
    host> scp spp-hello.tar YOUR_SLICE@SPP_ADDRESS:
    host> ssh YOUR_SLICE@SPP_ADDRESS
    GPE>  tar tf spp-hello.tar        # look at what is in the tar file
    GPE>  tar xf spp-hello.tar        # creates and populates ~/hello-gpe/
</pre>

The ''spp-hello.tar'' file contains the scripts from the ''hello-gpe.tar'' file and the two executables ''hello-server'' and ''hello-client'' that you just created.
We will first lead you through the process of setting up the SPP, running the executables and then tearing down the SPP in a step-by-step manner.
Afterwards, we will discuss how to script the setup and teardown procedures.

== Setup the SPP ==

Setting up the SPP so that packets from ''hello-client'' can get to your ''hello-server'' process running on a GPE of the Salt Lake City SPP involves these steps:

* Run the ''mkResFile4hello.sh'' script to create a resource reservation file.
* Submit the resource reservation.
* Claim the resources described by the resource reservation file.
** This allocates 1 Mbps of capacity from the 64.57.23.210 interface.
* Setup the endpoint (64.57.23.210, 50000) to handle 1 Mbps of UDP traffic.

=== Create a Resource Reservation File ===

Most users make a resource reservation file in one of two ways:

* Manual:  Copy an existing file and hand edit the file to meet their needs; or
* Script:  Run a script that generates the file.

You can hand edit the file ''~/hello-gpe/scripts/res.xml'' or generate one using the script ''~/hello-gpe/scripts/mkResFile4hello.sh''.
The ''res.xml'' file looks like this:

<pre>
    <?xml version="1.0" encoding="utf-8" standalone="yes"?>
    <spp>
      <rsvRecord>
        <!-- Date Format:  YYYYMMDDHHmmss -->
        <!-- That's year, month, day, hour, minutes, seconds -->
        <rDate start="20100304121500" end="20100404121500" />
        <plRSpec>
          <ifParams>
            <!-- reserve 1 Mb/s on one interface -->
            <ifRec bw="1000" ip="64.57.23.210" />
          </ifParams>
        </plRSpec>
      </rsvRecord>
    </spp>
</pre>

This file defines the following reservation:

* The reservation runs from 1215 on March 4, 2010 to 1215 on April 4, 2010.
** The hours (HH) are based on a 24-hour clock.
** This period must include the actual time period that you plan to use the resources.
* The ''plRSpec'' section defines the GPE (slowpath) resources.
** It specifies that you will be using 1000 Kbps (= 1 Mbps) of the interface with IP address 64.57.23.210.
* This reservation does not have an ''fpRspec'' component, which defines fastpath resources, because this example doesn't use the fastpath ([[The IPv4 Metanet Tutorial]] shows how to create a reservation file containing fastpath resources).

We don't really need 1 Mbps of bandwidth for this example since we are only sending a UDP packet with a 6-byte payload.

If you use the manual method to create the reservation file, you can edit the existing ''res.xml'' file that is in the tar file.
You will only need to edit the two date fields in the ''rDate'' tag and the bandwidth and IP address fields in the ''ifRec'' tag.
You can choose an arbitrary file name.

If you use the script method, the ''mkResFile4hello.sh'' script has been written specifically for this example.
You run the script on the GPE like this for the Salt Lake City SPP:

<pre>
    GPE>  cd ~/hello-gpe/scripts
    GPE>  ./mkResFile4hello.sh 64.57.23.210    # Salt Lake City SPP, interface 0 IP address
    +++ Making res.xml, 1 month reservation file starting from now:
        BEGIN = 20100304205900
        END   = 20100404205900
        SPP_ADDRESS = 64.57.23.210
    +++
    See res.xml file
</pre>

It will create a reservation file for a one-month period starting from today for the interface IP address entered as the first command-line argument.
It announces the date parameters (20100304205900 and 20100404205900) and the IP address (64.57.23.210) that it will put into the reservation file.

Our choice of a one-month reservation period was arbitrary.
You can modify the date fields in our ''res.xml'' file to suit your own needs.
Furthermore, note the following:

* You can make an ''advance reservation'' which covers a time period in the future.
* The time period can have a ''start'' date that is in the past.
* You can have only one reservation per time period; i.e., reservations can't overlap in time.
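For reference, a generator script along these lines can be quite small.
The sketch below is illustrative only (the actual ''mkResFile4hello.sh'' in the tar file may differ); it assumes GNU ''date'' for the one-month offset:

<pre>
#!/bin/sh
# Illustrative sketch: write a one-month, slowpath-only reservation for one interface.
# Usage: ./mkres.sh SPP_ADDRESS
ADDR=$1
BEGIN=`date +%Y%m%d%H%M%S`                  # now, in YYYYMMDDHHmmss format
END=`date -d "+1 month" +%Y%m%d%H%M%S`      # one month from now (GNU date)
cat > res.xml <<EOF
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<spp>
  <rsvRecord>
    <rDate start="$BEGIN" end="$END" />
    <plRSpec>
      <ifParams>
        <ifRec bw="1000" ip="$ADDR" />
      </ifParams>
    </plRSpec>
  </rsvRecord>
</spp>
EOF
echo "See res.xml file"
</pre>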

=== Submit the Reservation ===

Now, we use the ''scfg'' command ''--cmd make_resrv'' to submit the reservation:

<pre>
    GPE> scfg --cmd make_resrv --xfile res.xml
    Warning:  Your reservation has no fpRSpec
    Adding reservation:
    rDate: [3/4/2010 at 20:59:0, 4/4/2010 at 20:59:0]
      GPE: (ip=64.57.23.210 bw=1000 Kbps)

    Successfully added reservation
</pre>

Note that ''scfg'' outputs a warning that the reservation file doesn't have a ''fpRspec'' component; i.e., a fastpath specification.
Since this example is using only the slowpath, we can ignore the warning.

You can check for a reservation using one of the reservation management commands described in [[SPP Command Interface]].
The ''get_resrvs'' command will display all of your reservations:

<pre>
    Get all reservations:
    Successfully got reservations (1)
    0) rDate: [3/4/2010 at 20:59:0, 4/4/2010 at 20:59:0]
      GPE: (ip=64.57.23.210 bw=1000 Kbps)
</pre>
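The listing above comes from the reservation-listing form of ''scfg''; assuming it follows the same ''--cmd'' pattern as the other commands on this page, the invocation is:

<pre>
    scfg --cmd get_resrvs
</pre>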

=== Claim the Resources ===

The reservation only indicates your intent to use resources.
You use the ''scfg'' command ''--cmd claim_resources'' to actually allocate the resources specified by a reservation:

<pre>
    GPE> scfg --cmd get_ifattrs --ifn 0
    Interface attributes:
        [ifn 0, type  "inet", linkBW 1000000Kbps, availBW 864552Kbps, ipAddr 64.57.23.210]

    GPE> scfg --cmd claim_resources
    Successfully allocated GPE spec

    GPE> scfg --cmd get_ifattrs --ifn 0
    Interface attributes:
        [ifn 0, type  "inet", linkBW 1000000Kbps, availBW 863552Kbps, ipAddr 64.57.23.210]
</pre>

The second command (''claim_resources'') does the allocation of your active reservation:  1 Mbps of capacity from the 64.57.23.210 interface.
We can now configure slowpath endpoints that use portions of this 1 Mbps capacity.

The command block above shows that interface 0's available bandwidth has been reduced by 1000 Kbps.
The ''--cmd get_ifattrs'' command outputs the same information as ''--cmd get_ifaces'' but for only one interface.

=== Setup the Endpoint ===

We now use the capacity allocated by ''--cmd claim_resources'' by creating a slowpath endpoint within the 64.57.23.210 interface, using ''--cmd setup_sp_endpoint'':

<pre>
    GPE> scfg --cmd setup_sp_endpoint --bw 1000 --ipaddr 64.57.23.210 --port 50000 --proto 17
    Set up slow path endpoint: epInfo [ bw 1000 epoint { 64.57.23.210, 50000, 17 } ]
</pre>

This command example shows:

* The endpoint IP address, port number and protocol are 64.57.23.210, 50000 and 17 (UDP) respectively; and
* it will use all 1000 Kbps of the allocated capacity.

[[Image:setup_sp_endpoint.png|right|300px|border|scfg --cmd setup_sp_endpoint ...]]

Note that:

* The ''hello-client'' process will send UDP packets to (SPP_ADDRESS=64.57.23.210, UDP_PORT=50000).
* A slice can have multiple endpoints within an interface allocation as long as the sum of their bandwidth parameters does not exceed the allocated capacity (see the example after this list).
* Multiple slices can use the same IP address.
** Each slice will be guaranteed its allocated bandwidth.
* Slices cannot use the same port number.
* The ''--cmd setup_sp_endpoint'' command installed a filter in the linecard that will direct incoming traffic to any process your slice is running on a GPE.
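For example, with the 1000 Kbps allocation above, a slice could split the capacity across two endpoints on the same interface, as long as the bandwidth parameters sum to no more than the allocation (the second port number here is hypothetical):

<pre>
    scfg --cmd setup_sp_endpoint --bw 500 --ipaddr 64.57.23.210 --port 50000 --proto 17
    scfg --cmd setup_sp_endpoint --bw 500 --ipaddr 64.57.23.210 --port 50001 --proto 17
</pre>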

== Run the Server and the Client ==

You should have two windows: one to run the server and one to run the client.
In your GPE window, start the server on the GPE so that it listens on port 50000:

<pre>
    GPE> hello-server 50000
    udp-echo-srvr:  Listening on port 50000
        ... Waiting for UDP packet ...
</pre>

Actually, the default port number is 50000.
So, you could have also left off the 50000 argument.

Now, in your other window, start the client with the IP address and port number of the slowpath endpoint:

<pre>
    host> hello-client 64.57.23.210 50000
    udp-echo-cli:  Sending to port 50000 at 64.57.23.210
    send_dgram rcvd 6 char: <hello>
</pre>

The client sends one UDP packet containing the C-string "hello", waits for the returning packet and displays the contents of the packet it receives.
The server continues to run, waiting for another packet.
To send another packet, run ''hello-client'' again.

The GPE window shows that the server received a 6-byte datagram and displays the contents of a 10-byte buffer:

<pre>
    GPE>  hello-server 50000
    udp-echo-srvr:  Listening on port 50000
    echo_dgram rcvd 6 bytes (hex follows):
      68  65  6c  6c  6f  00  00  00  2c ffffff91
    ====================
    ... Wait for next UDP packet to port 50000 ...
     ... Enter ctrl-c to terminate ...
</pre>

The hexadecimal output shows that the first six bytes contain the correct hexadecimal representation for "hello", including the C-string terminating NUL (hex 00) byte.
For example, an ASCII 'h' is 68 in hexadecimal.
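You can reproduce these byte values on your own Linux host with standard tools; for example:

<pre>
    > printf 'hello\0' | od -An -tx1
     68 65 6c 6c 6f 00
</pre>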

== Teardown the SPP ==

After you are done using the GPE, you need to return the resources and cancel the reservation.
The teardown procedure is:

<pre>
    GPE> scfg --cmd free_sp_endpoint --ipaddr 64.57.23.210 --port 50000 --proto 17
    free_sp_endpoint completed successfully 0

    GPE> scfg --cmd free_sp_resources
    Successfully freed sp resources

    GPE> scfg --cmd cancel_resrv
    Get reservation for current time
    Successfully canceled reservation
</pre>

{| border=1 cellspacing=0 cellpadding=3 align=right
! Step || Setup || Teardown
|-
|align="center"| 1 || ''make_resrv''        || ''free_sp_endpoint''
|-
|align="center"| 2 || ''claim_resources''   || ''free_sp_resources''
|-
|align="center"| 3 || ''setup_sp_endpoint'' || ''cancel_resrv''
|}

As the table (right) shows, the teardown procedure is the reverse of the setup procedure.
The ''--cmd cancel_resrv'' command cancels the current active reservation.
You can also use it to cancel an advance reservation by supplying a date.
See [[SPP Command Interface]] for the details.
<br clear=all>

== The Setup and Teardown Scripts ==

The work of setting up and tearing down the SPP can be scripted so that the procedure can be repeated without manually entering all of the commands.
When you extracted the files from the ''spp-hello.tar'' file, the scripts ''setup4hello.sh'' and ''teardown4hello.sh'' were extracted into the ~/hello-gpe/scripts/ directory.

These scripts can be used in our example like this:

<pre>
    GPE> cd ~/hello-gpe/scripts
    GPE> ./setup4hello.sh 64.57.23.210 50000
    +++ Making res.xml, 1 month reservation file starting from now:
        BEGIN = 20100304213200
        END   = 20100404213200
        SPP_ADDRESS = 64.57.23.210
    +++
    See res.xml file
    Warning:  Your reservation has no fpRSpec
    Adding reservation:
    rDate: [3/4/2010 at 21:32:0, 4/4/2010 at 21:32:0]
      GPE: (ip=64.57.23.210 bw=1000 Kbps)

    Successfully added reservation
    Successfully allocated GPE spec
    Set up slow path endpoint: epInfo [ bw 1000 epoint { 64.57.23.210, 50000, 17 } ]

    GPE> cd ..          # now in ~/hello-gpe/
    GPE> ./hello-server 50000
    udp-echo-srvr:  Listening on port 50000
        ... Run hello-client at your host ...
    echo_dgram rcvd 6 bytes (hex follows):
      68  65  6c  6c  6f  00  00  00  2c  ffffff91
    ====================
        ... ctrl-c entered to terminate server ...

    GPE> cd scripts
    GPE> ./teardown4hello.sh 64.57.23.210 50000

    free_sp_endpoint completed successfully 0
    Successfully freed sp resources
    Get reservation for current time
    Successfully canceled reservation
</pre>
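For reference, a setup script along these lines only has to chain together the commands used earlier on this page.
The sketch below is illustrative only (the actual ''setup4hello.sh'' may differ); it assumes a reservation-file generator like the one sketched earlier:

<pre>
#!/bin/sh
# Illustrative setup sketch.  Usage: ./setup.sh SPP_ADDRESS PORT
ADDR=$1
PORT=$2
./mkResFile4hello.sh $ADDR                 # write res.xml for $ADDR (1 Mbps, one month)
scfg --cmd make_resrv --xfile res.xml      # submit the reservation
scfg --cmd claim_resources                 # claim the reserved capacity
scfg --cmd setup_sp_endpoint --bw 1000 --ipaddr $ADDR --port $PORT --proto 17
</pre>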
  
== Higher Speed Traffic ==

Now that you know how to set up and tear down the SPP, you should be able to run any application on a GPE.
For example, you could run a traffic generator such as ''iperf'' (http://sourceforge.net/projects/iperf/).

We installed ''iperf'' in our GPE slice and on our host.
Then, we used the ''setup4hello.sh'' script to set up the SPP and sent two seconds of UDP traffic to the server at 1.1 Mbps, 2 Mbps, 10 Mbps and finally 100 Mbps from our host.
The following timeline shows the commands issued in the GPE window and the host window:

<pre>
    GPE> iperf -s -u -p 50000
                                        host> iperf -c 64.57.23.210 -u -p 50000 -b 1.1m -t 2
                                        host> iperf -c 64.57.23.210 -u -p 50000 -b 2m -t 2
                                        host> iperf -c 64.57.23.210 -u -p 50000 -b 10m -t 2
                                        host> iperf -c 64.57.23.210 -u -p 50000 -b 100m -t 2
    GPE> ctrl-c    # terminate server
</pre>

The server window is shown on the left, and the client window is shown on the right.
The server arguments are '-s' (server mode), '-u' (UDP packets), and '-p 50000' (use port 50000).
The client arguments are '-c 64.57.23.210' (client mode with server at 64.57.23.210), '-u' (UDP packets), '-p 50000' (use port 50000), '-b Xm' (bandwidth X in Mbps), and '-t 2' (send for two seconds).

The server output (with # annotations) is shown below:

<pre>
    GPE> ./iperf -s -u -p 50000
    ------------------------------------------------------------
    Server listening on UDP port 50000
    Receiving 1470 byte datagrams
    UDP buffer size:   108 KByte (default)
    ------------------------------------------------------------
    [  4] local 64.57.23.210 port 50000 connected with 128.252.160.167 port 60988   # 1.1 Mbps
    [  4]  0.0- 2.0 sec    271 KBytes  1.09 Mbits/sec  0.033 ms    0/  189 (0%)
    [  3] local 64.57.23.210 port 50000 connected with 128.252.160.167 port 53234   # 2 Mbps
    [  3]  0.0- 2.0 sec    491 KBytes  1.99 Mbits/sec  0.085 ms    0/  342 (0%)
    [  4] local 64.57.23.210 port 50000 connected with 128.252.160.167 port 39372   # 10 Mbps
    [  4]  0.0- 3.3 sec  2.39 MBytes  6.07 Mbits/sec  1.290 ms    0/ 1702 (0%)
    [  3] local 64.57.23.210 port 50000 connected with 128.252.160.167 port 57153   # 100 Mbps
    [  3]  0.0- 9.9 sec  7.19 MBytes  6.08 Mbits/sec  0.820 ms 11963/17095 (70%)
    [  3]  0.0- 9.9 sec  1 datagrams received out-of-order
</pre>

The output shows that the ''iperf'' server received packets at a maximum input rate of around 6 Mbps.
It is interesting to note that even though the client sent at 10 Mbps and the server measured an average input rate of around 6 Mbps, there was no packet loss (0/1702 means 0 out of 1702 packets were lost) ... even though we only allocated 1 Mbps.

== Monitoring Traffic ==

The only way to monitor traffic in and out of a GPE process is to have the process record and display packet statistics.
The hello-gpe code could be modified to do that; we will later show how such statistics can be combined with our slice daemon ''sliced'' and a graphical interface to plot them.

== Exercises ==

This page has shown you the basics of using the GPE.
You can improve your skill at writing a slowpath software router by doing the exercises below.

<ol>
  <li> '''A Different Interface'''
      <ul>
        How would you change the commands shown in ''[[The Hello GPE World Tutorial#The Setup and Teardown Scripts]]'' to use 64.57.23.214 instead of 64.57.23.210?
      </ul>
  </li>
  <li> '''A Different SPP'''
      <ul>
        Show the sequence of commands needed to use the Kansas City SPP instead of the Salt Lake City SPP.
      </ul>
  </li>
  <li> '''Multiple Interfaces'''
      <ul>
        Suppose that you wanted ''hello-server'' to read UDP packets from 64.57.23.210 but send the response packet to the client using 64.57.23.214.
        Describe the changes to the client and server code, scripts and command usage that would be needed to make it possible to use the two interfaces.
      </ul>
  </li>
  <li> '''Forwarding Packets (1)'''
      <ul>
        Consider the communication pattern client->SPP/relay->server.  That is, the client sends a packet to a ''relay'' process running on a GPE, which then relays the packet to a server running on a host different from the one where the client is running.
        Note that the ''relay'' process acts like both the hello-gpe server and client.
        Also, the ''relay'' process should use two different interfaces: one for the client and one for the server.
        The server announces the reception of each packet and then drops the packet.
        Describe what changes would be needed to the hello-gpe code, the scripts, and the command usage to make this possible.
      </ul>
  </li>
  <li> '''Forwarding Packets (2)'''
      <ul>
        Consider the communication pattern client<->SPP/relay<->SPP'/server.  That is, the client sends a packet to a ''relay'' process running on a GPE, which then relays the packet to a ''server'' process running on a different GPE.  As before, the ''relay'' process should use two different interfaces: one for the client and one for the server, but it should use a point-to-point interface for packets to the server.
        The server should announce the reception of each packet and send the packet back to the ''relay'' process.
        The ''relay'' process should then forward the packet back to the client.
        Describe what changes would be needed to the hello-gpe code, the scripts, and the command usage to make this possible.
      </ul>
  </li>
</ol>
Latest revision as of 22:14, 15 March 2010

Template:Under Construction

Introduction

Like any PlanetLab node, a user can allocate a subset of an SPP's resources called a slice. An SPP slice user can either use a fastpath-slowpath packet processing paradigm which uses both a network processor (NPE) a general-purpose processor (GPE) or use a slowpath-only paradigm in which packet processing is handled completely by a socket program running on a GPE. This page describes how to use the GPE-only approach.

Pinging SPP External Interfaces

Unlike most PlanetLab nodes, an SPP has multiple external interfaces. In the GENI deployment, some of those interfaces have Internet2 IP addresses and some are interfaces attached to point-to-point links going directly to an external interfaces of other SPPs. This section introduces you to sone of the Internet2 interfaces.

Let's try to ping some of those Internet2 interfaces. Enter one of the following ping commands (omit the comments):

    ping -c 3 64.57.23.210         # Salt Lake City interface 0
    ping -c 3 64.57.23.214         # Salt Lake City interface 1
    ping -c 3 64.57.23.218         # Salt Lake City interface 2
    ping -c 3 64.57.23.194         # Washington DC interface 0
    ping -c 3 64.57.23.198         # Washington DC interface 1
    ping -c 3 64.57.23.202         # Washington DC interface 2
    ping -c 3 64.57.23.178         # Kansas City interface 0
    ping -c 3 64.57.23.182         # Kansas City interface 1
    ping -c 3 64.57.23.186         # Kansas City interface 2

For example, my output from the first ping command looks like this:

    > ping -c 3 64.57.23.210
    PING 64.57.23.210 (64.57.23.210) 56(84) bytes of data.
    64 bytes from 64.57.23.210: icmp_seq=1 ttl=56 time=67.5 ms
    64 bytes from 64.57.23.210: icmp_seq=2 ttl=56 time=55.9 ms
    64 bytes from 64.57.23.210: icmp_seq=3 ttl=56 time=59.0 ms

    --- 64.57.23.210 ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 2002ms
    rtt min/avg/max/mdev = 55.949/60.823/67.511/4.895 ms

Note that you may not be able to ping an SPP external interface. Some reasons why it might fail are:

  1. Your host doesn't have ping installed. This is not typical.
  2. The SPP interface is down.
  3. Your network blocks ping traffic.
  4. Your network provider doesn't route Internet2 addresses.

In the first case, you will get a command not found error message. The ping command is usually located at /bin/ping. See your system administrator if you can't find ping. In the other cases, your ping command will eventually return with a 100% packet loss message. In the last case, running the command traceroute 64.57.23.210 will give a Network unreachable indication (the last router is marked !N).

If you are unsuccessful with one interface, try to ping the interface of a different SPP.

However, you can always get around these problems (except for an SPP being down) by issuing the ping command from a PlanetLab node. We discuss how to log into a PlanetLab node in Using the IPv4 Code Option.

DNS Names of SPP External Interfaces

SPP Ifn IP Address DNS Name
KANS 0 64.57.23.178 sppkans1.arl.wustl.edu
1 64.57.23.182 sppkans2.arl.wustl.edu
2 64.57.23.186 sppkans3.arl.wustl.edu
WASH 0 64.57.23.194 sppwash1.arl.wustl.edu
1 64.57.23.198 sppwash2.arl.wustl.edu
2 64.57.23.202 sppwash3.arl.wustl.edu
SALT 0 64.57.23.210 sppsalt1.arl.wustl.edu
1 64.57.23.214 sppsalt2.arl.wustl.edu
2 64.57.23.218 sppsalt3.arl.wustl.edu

The SPP's external interfaces also have DNS names. So, ping -c 3 sppsalt1.arl.wustl.edu works as well as ping -c 3 64.57.23.210. The table (right) shows the DNS names of the Internet external interfaces.

Logging Into an SPP's GPE

Now, let's try to log into the SPP interface that you were able to ping. The example below assumes that interface was 64.57.23.210; that is, interface 0 of the Salt Lake City SPP. Note the following:

  • You must use ssh to log into an SPP.
  • When you ssh to an SPP's external interface, you will actually get logged into a GPE of the SPP.
  • Furthermore, you will be logging into your slice in a GPE.
  • Even if your network blocks your ping packets, you should be able to log into a GPE as long as there is a route to the SPP's external interface address.
  • You can 'ssh' to any of the SPP's external interfaces.

To log into a GPE at the Salt Lake City SPP, I would enter:

    ssh pl_washu_sppDemo@64.57.23.210

where my slice name is pl_washu_sppDemo. Thus, the general format is:

    ssh YOUR_SLICE@SPP_ADDRESS

where YOUR_SLICE is the slice you were assigned during account registration, and SPP_ADDRESS is the IP address of an SPP external interface.

During the login process, you will be asked to enter your RSA passphrase unless ssh-agent or an equivalent utility (e.g., keychain, gnome-keyring-daemon) is holding your private RSA key.

   host> ssh pl_washu_sppDemo@SPP_ADDRESS
   Enter passphrase for key '/home/.../LOGIN_NAME/.ssh/id_rsa':
       ... Respond with your passphrase ...
   Last login:  ... Previous login information ...
   [YOUR_SLICE@SPP_ADDRESS ~]$

If the SSH daemon asks you for your password, you will have to call ssh using the -i KEY_FILE argument like this:

    ssh -i ~/.ssh/id_rsa YOUR_SLICE@SPP_ADDRESS
        ... The SSH daemon will ask for your passphrase ...

Using ssh-agent

This section is a very brief explanation of how to use ssh-agent. You can skip this section if you are already using such an agent. If you have never used such an agent, note that there are several alternatives to the procedure described below and our description is meant to be a simple cookbook procedure. See the ssh-agent and ssh-add man pages or the web for more details.

The basic idea is to run ssh-agent which is a daemon process that caches private keys and listens for requests from SSH clients needing a private key related computation. Then, run the ssh-add command to add your private key to your agent's cache. This is only done once after you start the SSH agent. The process will ask you for your passphrase which is used to decrypt the private key which is then held in main memory by the agent.

For example,

    eval `ssh-agent`        # Notice the backquotes
    ssh-add
        ... Enter your passphrase when it prompts for it ...

Notice that we are using backquotes (which denotes command substitution) in the first line, NOT the normal forward quote characters.

In the first line, ssh-agent outputs two commands to stdout which is then evaluated by the eval command. These two commands set the two environment variables SSH_AUTH_SOCK and SSH_AGENT_PID. Enter the command "printenv | grep SSH_A", and you will get output that looks like:

    SSH_AUTH_SOCK=/tmp/ssh-sTNf2142/agent.2142
    SSH_AGENT_PID=2143

which says that process 2143 is your ssh-agent and it is listening for requests on the Unix Domain socket /tmp/ssh-sTNf2142/agent.2142. The ssh-add command adds your private key to the list of private keys held by ssh-agent.

You can now verify that you can ssh to an SPP without entering a password or passphrase. In fact, any subshell of the current shell will not need to enter a password when logging into an SPP as long as the agent is running because the SSH environment variables are passed to all children of the current shell allowing them to communicate with the same agent.

The SPP Configuration Command scfg

After you have logged into a GPE, you can use the scfg command to:

  • Get information about the SPP
  • Configure the SPP
  • Make resource reservations

You can get help information from scfg by entering one of these forms of the command:

    scfg --help all         # show help for all commands
    scfg --help info        # show help for information commands
    scfg --help queues      # show help for queue commands
    scfg --help reserv      # show help for reservation commands
    scfg --help alloc       # show help for resource alloc/free commands

Try getting help on the information commands by entering:

    scfg --help info

Your output should look like this:

    USAGE:
    INFORMATION CMDS:
      scfg --cmd get_ifaces
            Display all interfaces
      scfg --cmd get_ifpeer --ifn N
            Display the peer of interface num N
      ... other output not shown ...

If you get a command not found message, try entering:

    /usr/local/bin/scfg --help info

If the command now runs, you need to add /usr/local/bin to your PATH environment variable. The rest of this tutorial assumes that your PATH environment variable has been set to include the directory containing the scfg command.

Getting Information About External Interfaces

SPPs have multiple external interfaces. To show the attributes of all external interfaces, enter:

    scfg --cmd get_ifaces

For example, running this command on the Salt Lake City SPP produces:

    Interface list:
      [ifn 0, type  "inet", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 64.57.23.210]
      [ifn 1, type  "inet", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 64.57.23.214]
      [ifn 2, type  "inet", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 64.57.23.218]
      [ifn 3, type  "p2p", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 10.1.1.2]
      [ifn 4, type  "p2p", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 10.1.2.2]
      [ifn 5, type  "p2p", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 10.1.7.2]
      [ifn 6, type  "p2p", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 10.1.8.2]

This output shows:

  • There are seven external interfaces numbered from 0 to 7.
  • type: There are two types of interfaces: Internet (inet) and point-to-point (p2p).
  • linkBW: The capacity of each interface is 1 Gbps (i.e., 1000000 Kbps).
  • availBW: The available bandwidth of each interface is 899.232 Mbps (i.e., 899232 Kbps); that is, the portion of the capacity that hasn't already been allocated.
  • ipAddr: The IP address of each interface.

Getting Information About Peers

The type inet interfaces are physically connected to the Internet. The type p2p interfaces are physically connected to other SPPs through point-to-point links. That's why you can only ping interfaces with type inet from your host.

You can use the get_peer command to show the IP address of the interface at the other end of a point-to-point link. For example, you would enter:

    scfg --cmd get_peer --ifn 3

to find out the IP address of interface 3's peer. These seven commands will show the peer IP addresses of interfaces 0-6:

    scfg --cmd get_peer --ifn 0
    scfg --cmd get_peer --ifn 1
    scfg --cmd get_peer --ifn 2
    scfg --cmd get_peer --ifn 3
    scfg --cmd get_peer --ifn 4
    scfg --cmd get_peer --ifn 5
    scfg --cmd get_peer --ifn 6

Running these commands on the Salt Lake City SPP produces this output:

    SPP Peer IP address: 0.0.0.0
    SPP Peer IP address: 0.0.0.0
    SPP Peer IP address: 0.0.0.0
    SPP Peer IP address: 10.1.1.1
    SPP Peer IP address: 10.1.2.1
    SPP Peer IP address: 10.1.7.1
    SPP Peer IP address: 10.1.8.1

Notice that the p2p interfaces are the only ones with a peer IP address that is not 0.0.0.0. Furthermore, these addresses have the same 10.1.x.y format as other p2p interfaces.

Constructing an SPP Interconnection Map

We can now build a complete interconnection map of the SPPs by combining the output of the get_ifaces and get_peer commands from all SPPs. This output is shown at the bottom of The GENI SPP Configuration page; the interconnection tables shown near the top of that page were constructed from it.

The Salt Lake City table is:

    Interface   Type   IP Address      Peer Address
    0           inet   64.57.23.210    0.0.0.0
    1           inet   64.57.23.214    0.0.0.0
    2           inet   64.57.23.218    0.0.0.0
    3           p2p    10.1.1.2        10.1.1.1 (KC ifn 3)
    4           p2p    10.1.2.2        10.1.2.1 (KC ifn 4)
    5           p2p    10.1.7.2        10.1.7.1 (DC ifn 5)
    6           p2p    10.1.8.2        10.1.8.1 (DC ifn 6)

For example, the peer IP address of interface 3 is 10.1.1.1, which is the IP address of Kansas City's interface 3. You can verify the labeling of the peer IP addresses for interfaces 4-6 by looking at the output at the bottom of The GENI SPP Configuration page. Below is a diagram of the SPP interconnection map:

scfg has other information commands and also commands for allocating/freeing SPP resources and managing queues. The example below will describe some of these commands. The page SPP Command Interface summarizes all of the commands.

Hello GPE World

The first program we will run on a GPE is a variant of the UDP echo server. You will run the client on your Linux host; it sends a UDP packet containing the 6-byte C-string "hello" (including the terminating NUL byte) to the server. The server listens on port 50000 for an incoming UDP packet. When it receives a packet, it displays the content of the read buffer in both ASCII and hexadecimal formats and sends the "hello" string back to the client.

Going through this example should demonstrate that using a GPE is just like using any other general-purpose host, except that you need to set up and tear down the SPP.
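
The actual hello-server and hello-client sources come in the tar file described below. To give a feel for what the server involves, here is a minimal sketch of a UDP echo server in C; it is illustrative only (for brevity it prints just the hex dump), and the buffer size, messages, and error handling are assumptions rather than the shipped code:

    /* Minimal UDP echo server sketch (illustrative; not the shipped hello-server). */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    int main(int argc, char *argv[])
    {
        int port = (argc > 1) ? atoi(argv[1]) : 50000;   /* default port is 50000 */

        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);
        if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); return 1; }

        printf("udp-echo-srvr:  Listening on port %d\n", port);

        for (;;) {
            char buf[10];                      /* small buffer; the example payload is only 6 bytes */
            struct sockaddr_in cli;
            socklen_t clen = sizeof(cli);
            ssize_t n = recvfrom(sock, buf, sizeof(buf), 0, (struct sockaddr *)&cli, &clen);
            if (n < 0) { perror("recvfrom"); break; }

            /* Dump the read buffer in hexadecimal. */
            printf("echo_dgram rcvd %zd bytes (hex follows):\n ", n);
            for (int i = 0; i < (int)sizeof(buf); i++)
                printf(" %02x ", (unsigned char)buf[i]);
            printf("\n====================\n");

            /* Echo the datagram back to the sender. */
            sendto(sock, buf, (size_t)n, 0, (struct sockaddr *)&cli, clen);
        }
        return 0;
    }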

Here are the steps involved in this example:

  • Create the client and server executables.
  • Copy the server executable and scripts to a GPE.
  • Setup the SPP.
  • Run the server and the client.
  • Teardown the SPP.

Create the Client and Server Executables

>>>>> How to get the tar file ??? <<<<<

In the command block below, we assume that you will extract the tar file into the directory ~/hello-gpe in your home directory:

    host> cd                                 # change directory to your home directory
    host> tar tf ~/Download/hello-gpe.tar    # see what is in the tar file
    host> tar xf ~/Download/hello-gpe.tar    # extract contents into ~/hello-gpe/ directory
    host> cd hello-gpe                       # change into the extracted directory
    host> cat README                         # read about the example
    host> make                               # make the two executables
    ... Follow the instructions for doing a test using localhost ...

You have now created two executables in the ~/hello-gpe/ directory: hello-client and hello-server.

Copy the Server Executable and Scripts to a GPE

Now, create a tar file that contains the two above executables and the SPP scripts found in ~/hello-gpe/scripts/:

    host> make spp-hello.tar
    host> scp spp-hello.tar YOUR_SLICE@SPP_ADDRESS:
    host> ssh YOUR_SLICE@SPP_ADDRESS
    GPE>  tar tf spp-hello.tar        # look at what is in the tar file
    GPE>  tar xf spp-hello.tar        # creates and populates ~/hello-gpe/

The spp-hello.tar file contains the scripts from the hello-gpe.tar file and the two executables hello-server and hello-client that you just created. We will first lead you through the process of setting up the SPP, running the executables and then tearing down the SPP in a step-by-step manner. Afterwards, we will discuss how to script the setup and teardown procedures.

Setup the SPP

Setting up the SPP so that packets from hello-client can get to your hello-server process running on a GPE of the Salt Lake City SPP involves these steps:

  • Run the mkResFile4hello.sh script to create a resource reservation file.
  • Submit the resource reservation.
  • Claim the resources described by the resource reservation file.
    • This allocates 1 Mbps of capacity from the 64.57.23.210 interface.
  • Setup the endpoint (64.57.23.210, 50000) to handle 1 Mbps of UDP traffic.

Create a Resource Reservation File

Most users make a resource reservation file in one of two ways:

  • Manual: Copy an existing file and hand edit the file to meet their needs; or
  • Script: Run a script that generates the file.

You can hand edit the file ~/hello-gpe/scripts/res.xml or generate one using the script ~/hello-gpe/scripts/mkResFile4hello.sh. The res.xml file looks like this:

    <?xml version="1.0" encoding="utf-8" standalone="yes"?>
    <spp>
       <rsvRecord>
         <!-- Date Format:  YYYYMMDDHHmmss -->
         <!-- That's year, month, day, hour, minutes, seconds -->
         <rDate start="20100304121500" end="20100404121500" />
         <plRSpec>
           <ifParams>
             <!-- reserve 1 Mb/s on one interface -->
             <ifRec bw="1000" ip="64.57.23.210" />
           </ifParams>
         </plRSpec>
       </rsvRecord>
    </spp>

This file defines the following reservation:

  • The reservation runs from 1215 on March 4, 2010 to 1215 on April 4, 2010.
    • The hours field (HH) uses a 24-hour clock.
    • This period must include the actual time period that you plan to use the resources.
  • The plRSpec section defines the GPE (slowpath) resources.
    • It specifies that you will be using 1000 Kbps (= 1 Mbps) of the interface with IP address 64.57.23.210.
  • This reservation does not have an fpRSpec component, which would define fastpath resources, because this example doesn't use the fastpath (The IPv4 Metanet Tutorial shows how to create a reservation file containing fastpath resources).

We don't really need 1 Mbps of bandwidth for this example since we are only sending a UDP packet with a 6-byte payload.

If you use the manual method to create the reservation file, you can edit the existing res.xml file that is in the tar file. You will only need to edit the two date fields in the rDate tag and the bandwidth and IP address fields in the ifRec tag. You can choose an arbitrary file name.

If you use the script method, note that the mkResFile4hello.sh script was written specifically for this example. For the Salt Lake City SPP, you run the script on the GPE like this:

    GPE>  cd ~/hello-gpe/scripts
    GPE>  ./mkResFile4hello.sh 64.57.23.210    # Salt Lake City SPP, interface 0 IP address
    +++ Making res.xml, 1 month reservation file starting from now:
        BEGIN = 20100304205900
        END   = 20100404205900
        SPP_ADDRESS = 64.57.23.210
    +++
    See res.xml file

The script creates a reservation file for a one-month period starting from now, using the interface IP address given as the first command-line argument. It announces the date parameters (20100304205900 and 20100404205900) and the IP address (64.57.23.210) that it will put into the reservation file.

Our choice of a one month reservation period was arbitrary. You can modify the date fields in our res.xml file to suit your own needs. Furthermore, note the following:

  • You can make an advance reservation that covers a time period in the future.
  • The time period can have a start date that is in the past.
  • You can have only one reservation per time period; i.e., reservations can't overlap in time.

Submit the Reservation

Now, we use the scfg command --cmd make_resrv to submit the reservation:

    GPE> scfg --cmd make_resrv --xfile res.xml
    Warning:  Your reservation has no fpRSpec
    Adding reservation:
    rDate: [3/4/2010 at 20:59:0, 4/4/2010 at 20:59:0]
      GPE: (ip=64.57.23.210 bw=1000 Kbps)

    Successfully added reservation

Note that scfg outputs a warning that the reservation file doesn't have an fpRSpec component; i.e., a fastpath specification. Since this example uses only the slowpath, we can ignore the warning.

You can check for a reservation using one of the reservation management commands described in SPP Command Interface. The get_resrvs command will display all of your reservations:

    GPE> scfg --cmd get_resrvs
    Get all reservations:
    Successfully got reservations (1)
    0) rDate: [3/4/2010 at 20:59:0, 4/4/2010 at 20:59:0]
      GPE: (ip=64.57.23.210 bw=1000 Kbps)

Claim the Resources

The reservation only indicates your intent to use resources. You use the scfg command --cmd claim_resources to actually allocate the resources specified by a reservation:

    GPE> scfg --cmd get_ifattrs --ifn 0
    Interface attributes:
        [ifn 0, type  "inet", linkBW 1000000Kbps, availBW 864552Kbps, ipAddr 64.57.23.210]

    GPE> scfg --cmd claim_resources
    Successfully allocated GPE spec

    GPE> scfg --cmd get_ifattrs --ifn 0
    Interface attributes:
        [ifn 0, type  "inet", linkBW 1000000Kbps, availBW 863552Kbps, ipAddr 64.57.23.210]

The second command (claim_resources) performs the allocation for your active reservation: 1 Mbps of capacity from the 64.57.23.210 interface. We can now configure slowpath endpoints that use portions of this 1 Mbps capacity.

The command block above shows that interface 0's available bandwidth has been reduced by 1000 Kbps. The --cmd get_ifattrs command outputs the same information as --cmd get_ifaces, but for a single interface.

Setup the Endpoint

We now use the capacity allocated by --cmd claim_resources to create a slowpath endpoint on the 64.57.23.210 interface with --cmd setup_sp_endpoint:

    GPE> scfg --cmd setup_sp_endpoint --bw 1000 --ipaddr 64.57.23.210 --port 50000 --proto 17
    Set up slow path endpoint: epInfo [ bw 1000 epoint { 64.57.23.210, 50000, 17 } ]

This command example shows:

  • The endpoint IP address, port number and protocol are 64.57.23.210, 50000 and 17 (UDP) respectively; and
  • it will use all 1000 Kbps of the allocated capacity.

Note that:

  • The hello-client process will send UDP packets to (SPP_ADDRESS=64.57.23.210, UDP_PORT=50000).
  • A slice can have multiple endpoints within an interface allocation as long as the sum of their bandwidth parameters does not exceed the allocated capacity.
  • Multiple slices can use the same IP address.
    • Each slice is guaranteed its allocated bandwidth.
  • Slices cannot, however, use the same port number.
  • The --cmd setup_sp_endpoint command installs a filter in the line card that directs incoming traffic to any process your slice is running on a GPE.

Run the Server and the Client

You should have two windows: one to run the server and one to run the client. In your GPE window, start the server on the GPE so that it listens on port 50000:

    GPE> hello-server 50000
    udp-echo-srvr:  Listening on port 50000
        ... Waiting for UDP packet ...

Actually, the default port number is 50000, so you could also have left off the 50000 argument.

Now, in your other window, start the client with the IP address and port number of the slowpath endpoint:

    host> hello-client 64.57.23.210 50000
    udp-echo-cli:  Sending to port 50000 at 64.57.23.210
    send_dgram rcvd 6 char: <hello>

The client sends one UDP packet containing the C-string "hello", waits for the returning packet and displays the contents of the packet it receives. The server continues to run waiting for another packet. To send another packet, run hello-client again.
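
The client side follows the standard UDP socket pattern: send one datagram, then block until the echo comes back. The sketch below is illustrative only (not the shipped hello-client); the buffer size and messages are assumptions:

    /* Minimal UDP "hello" client sketch (illustrative; not the shipped hello-client). */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    int main(int argc, char *argv[])
    {
        if (argc < 3) { fprintf(stderr, "usage: %s server-ip port\n", argv[0]); return 1; }

        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) { perror("socket"); return 1; }

        struct sockaddr_in srv;
        memset(&srv, 0, sizeof(srv));
        srv.sin_family = AF_INET;
        srv.sin_port = htons(atoi(argv[2]));
        if (inet_pton(AF_INET, argv[1], &srv.sin_addr) != 1) { fprintf(stderr, "bad address\n"); return 1; }

        /* Send the 6-byte C-string "hello" (sizeof includes the terminating NUL). */
        const char msg[] = "hello";
        printf("udp-echo-cli:  Sending to port %s at %s\n", argv[2], argv[1]);
        if (sendto(sock, msg, sizeof(msg), 0, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
            perror("sendto"); return 1;
        }

        /* Block until the echoed datagram comes back, then display it. */
        char buf[64];
        ssize_t n = recvfrom(sock, buf, sizeof(buf) - 1, 0, NULL, NULL);
        if (n < 0) { perror("recvfrom"); return 1; }
        buf[n] = '\0';
        printf("send_dgram rcvd %zd char: <%s>\n", n, buf);
        return 0;
    }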

The GPE window shows that the server received a 6-byte datagram and displays the contents of a 10-byte buffer.

    GPE>  hello-server 50000
    udp-echo-srvr:  Listening on port 50000
    echo_dgram rcvd 6 bytes (hex follows):
      68  65  6c  6c  6f  00  00  00  2c  ffffff91
    ====================
    ... Wait for next UDP packet to port 50000 ...
    ... Enter ctrl-c to terminate ...

The hexadecimal output shows that the first six bytes contain the correct hexadecimal representation for "hello" including the C-string terminating NUL (hex 00) byte. For example, an ASCII 'h' is 68 in hexadecimal.

Teardown the SPP

After you are done using the GPE, you need to return the resources and cancel the reservation. The teardown commands are:

    GPE> scfg --cmd free_sp_endpoint --ipaddr 64.57.23.210 --port 50000 --proto 17
    free_sp_endpoint completed successfully 0

    GPE> scfg --cmd free_sp_resources
    Successfully freed sp resources

    GPE> scfg --cmd cancel_resrv
    Get reservation for current time
    Successfully canceled reservation
The teardown procedure is the reverse of the setup procedure:

    Step   Setup               Teardown
    1      make_resrv          free_sp_endpoint
    2      claim_resources     free_sp_resources
    3      setup_sp_endpoint   cancel_resrv

The --cmd cancel_resrv command cancels the current active reservation. You can also use it to cancel an advance reservation by supplying a date. See SPP Command Interface for the details.


The Setup and Teardown Scripts

The work of setting up and tearing down the SPP can be scripted so that the steps can be repeated without manually entering each command. When you extracted the files from the spp-hello.tar file, the scripts setup4hello.sh and teardown4hello.sh were extracted into the ~/hello-gpe/scripts/ directory.

These scripts can be used in our example like this:

    GPE> cd ~/hello-gpe/scripts

    GPE> ./setup4hello.sh 64.57.23.210 50000
    +++ Making res.xml, 1 month reservation file starting from now:
        BEGIN = 20100304213200
        END   = 20100404213200
        SPP_ADDRESS = 64.57.23.210
    +++
    See res.xml file
    Warning:  Your reservation has no fpRSpec
    Adding reservation:
    rDate: [3/4/2010 at 21:32:0, 4/4/2010 at 21:32:0]
      GPE: (ip=64.57.23.210 bw=1000 Kbps)

    Successfully added reservation
    Successfully allocated GPE spec
    Set up slow path endpoint: epInfo [ bw 1000 epoint { 64.57.23.210, 50000, 17 } ]

    GPE> cd ..          # now in ~/hello-gpe/
    GPE> ./hello-server 50000
    udp-echo-srvr:  Listening on port 50000
        ... Run hello-client at your host ...
    echo_dgram rcvd 6 bytes (hex follows):
      68  65  6c  6c  6f  00  00  00  2c  ffffff91
    ====================
        ... ctrl-c entered to terminate server ...

    GPE> cd scripts
    GPE> ./teardown4hello.sh 64.57.23.210 50000

    free_sp_endpoint completed successfully 0
    Successfully freed sp resources
    Get reservation for current time
    Successfully canceled reservation

Higher Speed Traffic

Now that you know how to set up and tear down the SPP, you should be able to run any application on a GPE. For example, you could run a traffic generator such as iperf (http://sourceforge.net/projects/iperf/).

We installed iperf in our GPE slice and on our host. We then used the setup4hello.sh script to set up the SPP and sent two seconds of UDP traffic from our host to the server at 1.1 Mbps, 2 Mbps, 10 Mbps, and finally 100 Mbps. The following timeline shows the commands issued in the GPE window and the host window:

    GPE> iperf -s -u -p 50000
                                        host> iperf -c 64.57.23.210 -u -p 50000 -b 1.1m -t 2
                                        host> iperf -c 64.57.23.210 -u -p 50000 -b 2m -t 2
                                        host> iperf -c 64.57.23.210 -u -p 50000 -b 10m -t 2
                                        host> iperf -c 64.57.23.210 -u -p 50000 -b 100m -t 2
    GPE> ctrl-c    # terminate server

The server window is shown on the left, and the client window is shown on the right. The server arguments are '-s' (server mode), '-u' (UDP packets), and '-p 50000' (use port 50000). The client arguments are '-c 64.57.23.210' (client mode with server at 64.57.23.210), '-u' (UDP packets), '-p 50000' (use port 50000), '-b Xm' (bandwidth X in Mbps), and '-t 2' (send for two seconds).

The server output (with # annotation) is shown below:

    GPE> ./iperf -s -u -p 50000
    ------------------------------------------------------------
    Server listening on UDP port 50000
    Receiving 1470 byte datagrams
    UDP buffer size:   108 KByte (default)
    ------------------------------------------------------------
    [  4] local 64.57.23.210 port 50000 connected with 128.252.160.167 port 60988  # 1.1 Mbps
    [  4]  0.0- 2.0 sec    271 KBytes  1.09 Mbits/sec  0.033 ms    0/  189 (0%)
    [  3] local 64.57.23.210 port 50000 connected with 128.252.160.167 port 53234  # 2 Mbps
    [  3]  0.0- 2.0 sec    491 KBytes  1.99 Mbits/sec  0.085 ms    0/  342 (0%)
    [  4] local 64.57.23.210 port 50000 connected with 128.252.160.167 port 39372  # 10 Mbps
    [  4]  0.0- 3.3 sec  2.39 MBytes  6.07 Mbits/sec  1.290 ms    0/ 1702 (0%)
    [  3] local 64.57.23.210 port 50000 connected with 128.252.160.167 port 57153  # 100 Mbps
    [  3]  0.0- 9.9 sec  7.19 MBytes  6.08 Mbits/sec  0.820 ms 11963/17095 (70%)
    [  3]  0.0- 9.9 sec  1 datagrams received out-of-order

The output shows that the iperf server received packets at a maximum input rate of around 6 Mbps. Interestingly, even though the client sent at 10 Mbps and the server measured an average input rate of around 6 Mbps, there was no packet loss (0/1702 means 0 of 1702 packets were lost), despite the fact that we only allocated 1 Mbps.

Monitoring Traffic

The only way to monitor traffic in and out of a GPE process is to have the process record and display packet statistics. The hello-gpe code could be modified to do that; we later show how such statistics can be combined with our slice daemon sliced and a graphical interface that plots them.
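
As a rough illustration of that approach, the sketch below extends a UDP receive loop with simple packet and byte counters and prints them about once per second. It is not part of the hello-gpe code; the counter names, buffer size, and reporting interval are assumptions made for illustration:

    /* Sketch: a UDP receiver that records and reports packet statistics.
     * Illustrative only; not part of the shipped hello-gpe code. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <time.h>

    int main(int argc, char *argv[])
    {
        int port = (argc > 1) ? atoi(argv[1]) : 50000;

        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);
        if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); return 1; }

        unsigned long pkts = 0, bytes = 0;            /* cumulative counters */
        time_t start = time(NULL), last_report = start;

        char buf[2048];
        for (;;) {
            ssize_t n = recvfrom(sock, buf, sizeof(buf), 0, NULL, NULL);
            if (n < 0) { perror("recvfrom"); break; }
            pkts++;
            bytes += (unsigned long)n;

            /* Print cumulative statistics roughly once per second. */
            time_t now = time(NULL);
            if (now > last_report) {
                double elapsed = difftime(now, start);
                printf("stats: %lu pkts, %lu bytes, %.1f pkts/s since start\n",
                       pkts, bytes, elapsed > 0 ? pkts / elapsed : 0.0);
                last_report = now;
            }
        }
        return 0;
    }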

Exercises

This page has shown you the basics of using the GPE. You can improve your skill at writing a slowpath software router by doing the exercises below.

  1. A Different Interface
      Show the changes to the scripts and commands needed to use a different external interface (for example, 64.57.23.214) instead of interface 0 (64.57.23.210).
  2. A Different SPP
      Show the sequence of commands needed to use the Kansas City SPP instead of the Salt Lake City SPP.
  3. Multiple Interfaces
      Suppose that you wanted hello-server to read UDP packets from 64.57.23.210 but send the response packet to the client using 64.57.23.214. Describe the changes to the client and server code, scripts and command usage that would be needed to make it possible to use the two interfaces.
  4. Forwarding Packets (1)
      Consider the communication pattern client->SPP/relay->server. That is, the client sends a packet to a relay process running on a GPE which then relays the packet to a server running on a host different from the one where the client is running. Note that the relay process acts like both the hello-gpe server and client. Also, the relay process should use two different interfaces: one for the client and one for the server. The server announces the reception of each packet and then drops the packet. Describe what changes would be needed to the hello-gpe code, the scripts, and the command usage to make this possible.
  5. Forwarding Packets (2)
      Consider the communication pattern client<->SPP/relay<->SPP'/server. That is, the client sends a packet to a relay process running on a GPE which then relays the packet to a server process running on a different GPE. As before, the relay process should use two different interfaces: one for the client and one for the server. But it should use a point-to-point interface for packets to the server. The server should announce the reception of each packet and send the packet back to the relay process. The relay process should then forward the packet back to the client. Describe what changes would be needed to the hello-gpe code, the scripts, and the command usage to make this possible.