= The Hello GPE World Tutorial =

[[Category: The SPP Tutorial]]
== Introduction ==

As on any ''PlanetLab node'', a user can allocate a subset of an SPP's resources, called a ''slice''.
An SPP slice user can use either a ''fastpath-slowpath'' packet processing paradigm, which uses both a network processor (NPE) and a general-purpose processor (GPE), or a ''slowpath-only'' paradigm, in which packet processing is handled entirely by a socket program running on a GPE.
This page describes how to use the GPE-only approach.
== Pinging SPP External Interfaces ==

Unlike most PlanetLab nodes, an SPP has multiple external interfaces.
In the GENI deployment, some of those interfaces have Internet2 IP addresses and some are attached to point-to-point links that go directly to external interfaces of other SPPs.
This section introduces you to some of the Internet2 interfaces.
Let's try to ''ping'' some of those Internet2 interfaces.
Enter one of the following ''ping'' commands (omit the comments):
<pre>
ping -c 3 64.57.23.210   # Salt Lake City interface 0
ping -c 3 64.57.23.214   # Salt Lake City interface 1
ping -c 3 64.57.23.218   # Salt Lake City interface 2
ping -c 3 64.57.23.194   # Washington DC interface 0
ping -c 3 64.57.23.198   # Washington DC interface 1
ping -c 3 64.57.23.202   # Washington DC interface 2
ping -c 3 64.57.23.178   # Kansas City interface 0
ping -c 3 64.57.23.182   # Kansas City interface 1
ping -c 3 64.57.23.186   # Kansas City interface 2
</pre>
+ | |||
+ | For example, my output from the first ''ping'' command looks like this: | ||
+ | |||
+ | <pre> | ||
+ | > ping -c 3 64.57.23.210 | ||
+ | PING 64.57.23.210 (64.57.23.210) 56(84) bytes of data. | ||
+ | 64 bytes from 64.57.23.210: icmp_seq=1 ttl=56 time=67.5 ms | ||
+ | 64 bytes from 64.57.23.210: icmp_seq=2 ttl=56 time=55.9 ms | ||
+ | 64 bytes from 64.57.23.210: icmp_seq=3 ttl=56 time=59.0 ms | ||
+ | |||
+ | --- 64.57.23.210 ping statistics --- | ||
+ | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms | ||
+ | rtt min/avg/max/mdev = 55.949/60.823/67.511/4.895 ms | ||
+ | </pre> | ||
+ | |||
Note that you may not be able to ''ping'' an SPP external interface.
Some reasons why it might fail are:

# Your host doesn't have ''ping'' installed. This is not typical.
# The SPP interface is down.
# Your network blocks ''ping'' traffic.
# Your network provider doesn't route Internet2 addresses.

In the first case, you will get a ''command not found'' error message.
The ''ping'' command is usually located at ''/bin/ping''.
See your system administrator if you can't find ''ping''.
In the other cases, your ''ping'' command will eventually return with a ''100% packet loss'' message.
In the last case, running the command ''traceroute 64.57.23.210'' will give a ''Network unreachable'' indication (the last router is marked ''!N'').

If you are unsuccessful with one interface, try to ''ping'' the interface of a different SPP.

However, you can always get around these problems (except for an SPP interface being down) by issuing the ''ping'' command from a PlanetLab node.
We discuss how to log into a PlanetLab node in ''[[Using the IPv4 Code Option]]''.
+ | |||
+ | == DNS Names of SPP External Interfaces == | ||
+ | |||
+ | {| border=1 cellspacing=0 cellpadding=3 align=right | ||
+ | ! SPP || Ifn || IP Address || DNS Name | ||
+ | |- align="center" | ||
+ | | KANS || 0 || 64.57.23.178 || sppkans1.arl.wustl.edu | ||
+ | |- align="center" | ||
+ | | || 1 || 64.57.23.182 || sppkans2.arl.wustl.edu | ||
+ | |- align="center" | ||
+ | | || 2 || 64.57.23.186 || sppkans3.arl.wustl.edu | ||
+ | |- align="center" | ||
+ | | WASH || 0 || 64.57.23.194 || sppwash1.arl.wustl.edu | ||
+ | |- align="center" | ||
+ | | || 1 || 64.57.23.198 || sppwash2.arl.wustl.edu | ||
+ | |- align="center" | ||
+ | | || 2 || 64.57.23.202 || sppwash3.arl.wustl.edu | ||
+ | |- align="center" | ||
+ | | SALT || 0 || 64.57.23.210 || sppsalt1.arl.wustl.edu | ||
+ | |- align="center" | ||
+ | | || 1 || 64.57.23.214 || sppsalt2.arl.wustl.edu | ||
+ | |- align="center" | ||
+ | | || 2 || 64.57.23.218 || sppsalt3.arl.wustl.edu | ||
+ | |- | ||
+ | |} | ||
+ | |||
+ | The SPP's external interfaces also have DNS names. | ||
+ | So, ''ping -c 3 sppsalt1.arl.wustl.edu'' works as well as ''ping -c 3 64.57.23.210''. | ||
+ | The table (right) shows the DNS names of the Internet external interfaces. | ||
+ | |||
== Logging Into an SPP's GPE ==

Now, let's try to log into the SPP interface that you were able to ''ping''.
The example below assumes that interface was 64.57.23.210; that is, interface 0 of the Salt Lake City SPP.
Note the following:

* You must use ''ssh'' to log into an SPP.
* When you ''ssh'' to an SPP's external interface, you will actually get logged into a GPE of the SPP.
* Furthermore, you will be logging into your slice on that GPE.
* Even if your network blocks your ''ping'' packets, you should be able to log into a GPE as long as there is a route to the SPP's external interface address.
* You can ''ssh'' to any of the SPP's external interfaces.

To log into a GPE at the Salt Lake City SPP, I would enter:

<pre>
ssh pl_washu_sppDemo@64.57.23.210
</pre>

where my slice name is ''pl_washu_sppDemo''.
Thus, the general format is:

<pre>
ssh YOUR_SLICE@SPP_ADDRESS
</pre>

where ''YOUR_SLICE'' is the slice you were assigned during account registration, and ''SPP_ADDRESS'' is the IP address of an SPP external interface.
During the login process, you will be asked to enter your RSA passphrase unless ''ssh-agent'' or an equivalent utility (e.g., ''keychain'', ''gnome-keyring-daemon'') is holding your private RSA key:

<pre>
host> ssh pl_washu_sppDemo@SPP_ADDRESS
Enter passphrase for key '/home/.../LOGIN_NAME/.ssh/id_rsa':
... Respond with your passphrase ...
Last login: ... Previous login information ...
[YOUR_SLICE@SPP_ADDRESS ~]$
</pre>

If the SSH daemon asks you for your password instead, you will have to call ''ssh'' with the ''-i KEY_FILE'' argument, like this:

<pre>
ssh -i ~/.ssh/id_rsa YOUR_SLICE@SPP_ADDRESS
... The SSH daemon will ask for your passphrase ...
</pre>
== Using ''ssh-agent'' ==

This section is a very brief explanation of how to use ''ssh-agent''.
You can skip this section if you are already using such an agent.
If you have never used one, note that there are several alternatives to the procedure described below; our description is meant to be a simple cookbook procedure.
See the ''ssh-agent'' and ''ssh-add'' man pages or the web for more details.

The basic idea is to run ''ssh-agent'', a daemon process that caches private keys and listens for requests from SSH clients that need a private-key computation.
Then, run the ''ssh-add'' command to add your private key to the agent's cache.
This is done only once, after you start the SSH agent.
''ssh-add'' will ask you for your passphrase, which is used to decrypt the private key; the decrypted key is then held in main memory by the agent.
For example:

<pre>
eval `ssh-agent`  # Notice the backquotes
ssh-add
... Enter your passphrase when it prompts for it ...
</pre>

Notice that the first line uses backquotes (which denote command substitution), not the normal forward quote characters.
In the first line, ''ssh-agent'' writes two commands to stdout, which are then evaluated by the ''eval'' command.
These two commands set the environment variables ''SSH_AUTH_SOCK'' and ''SSH_AGENT_PID''.
Enter the command "''printenv | grep SSH_A''", and you will get output that looks like:

<pre>
SSH_AUTH_SOCK=/tmp/ssh-sTNf2142/agent.2142
SSH_AGENT_PID=2143
</pre>

which says that process 2143 is your ''ssh-agent'' and that it is listening for requests on the Unix domain socket ''/tmp/ssh-sTNf2142/agent.2142''.
The ''ssh-add'' command adds your private key to the list of private keys held by ''ssh-agent''.
You can now verify that you can ''ssh'' to an SPP without entering a password or passphrase.
In fact, as long as the agent is running, no subshell of the current shell will need a password when logging into an SPP, because the SSH environment variables are passed to all children of the current shell, allowing them to communicate with the same agent.
== The SPP Configuration Command ''scfg'' ==

After you have logged into a GPE, you can use the ''scfg'' command to:

* Get information about the SPP
* Configure the SPP
* Make resource reservations

You can get help information from ''scfg'' by entering one of these forms of the command:

<pre>
scfg --help all     # show help for all commands
scfg --help info    # show help for information commands
scfg --help queues  # show help for queue commands
scfg --help reserv  # show help for reservation commands
scfg --help alloc   # show help for resource alloc/free commands
</pre>

Try getting help on the information commands by entering:

<pre>
scfg --help info
</pre>

Your output should look like this:

<pre>
USAGE:
INFORMATION CMDS:
    scfg --cmd get_ifaces
        Display all interfaces
    scfg --cmd get_ifpeer --ifn N
        Display the peer of interface num N
    ... other output not shown ...
</pre>

If you get a ''command not found'' message, try entering:

<pre>
/usr/local/bin/scfg --help info
</pre>

If the command now runs, you need to add ''/usr/local/bin'' to your ''PATH'' environment variable (e.g., ''export PATH=$PATH:/usr/local/bin'' for bash users).
The rest of this tutorial assumes that your ''PATH'' environment variable includes the directory containing the ''scfg'' command.
== Getting Information About External Interfaces ==

SPPs have multiple external interfaces.
To show the attributes of all external interfaces, enter:

<pre>
scfg --cmd get_ifaces
</pre>

For example, running this command on the Salt Lake City SPP produces:

<pre>
Interface list:
[ifn 0, type "inet", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 64.57.23.210]
[ifn 1, type "inet", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 64.57.23.214]
[ifn 2, type "inet", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 64.57.23.218]
[ifn 3, type "p2p", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 10.1.1.2]
[ifn 4, type "p2p", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 10.1.2.2]
[ifn 5, type "p2p", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 10.1.7.2]
[ifn 6, type "p2p", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 10.1.8.2]
</pre>

This output shows:

* There are seven external interfaces, numbered 0 through 6.
* ''type:'' There are two types of interfaces: Internet (''inet'') and point-to-point (''p2p'').
* ''linkBW:'' The capacity of each interface is 1 Gbps (i.e., 1000000 Kbps).
* ''availBW:'' The available bandwidth of each interface is 899.232 Mbps (i.e., 899232 Kbps); that is, the portion of the capacity that hasn't already been allocated.
* ''ipAddr:'' The IP address of each interface.

== Getting Information About Peers ==

The ''type inet'' interfaces are physically connected to the Internet.
The ''type p2p'' interfaces are physically connected to other SPPs through point-to-point links.
That's why you can only ''ping'' interfaces with type ''inet'' from your host.

You can use the ''get_peer'' command to show the IP address of the interface at the other end of a point-to-point link.
For example, I would enter:

<pre>
scfg --cmd get_peer --ifn 3
</pre>

to find out the IP address of interface 3's peer.
These seven commands will show the peer IP addresses of interfaces 0-6:
<pre>
scfg --cmd get_peer --ifn 0
scfg --cmd get_peer --ifn 1
scfg --cmd get_peer --ifn 2
scfg --cmd get_peer --ifn 3
scfg --cmd get_peer --ifn 4
scfg --cmd get_peer --ifn 5
scfg --cmd get_peer --ifn 6
</pre>

Running these commands on the Salt Lake City SPP produces this output:

<pre>
SPP Peer IP address: 0.0.0.0
SPP Peer IP address: 0.0.0.0
SPP Peer IP address: 0.0.0.0
SPP Peer IP address: 10.1.1.1
SPP Peer IP address: 10.1.2.1
SPP Peer IP address: 10.1.7.1
SPP Peer IP address: 10.1.8.1
</pre>

Notice that the ''p2p'' interfaces are the only ones with a peer IP address other than 0.0.0.0.
Furthermore, these peer addresses have the same 10.1.x.y format as the SPP's own ''p2p'' interface addresses.
== Constructing an SPP Interconnection Map ==

We can build a complete interconnection map of the SPPs by combining the output of the ''get_ifaces'' and ''get_peer'' commands from all SPPs.
This output is shown at the bottom of the [[The GENI SPP Configuration]] page.
The interconnection tables shown near the top of the [[The GENI SPP Configuration]] page were constructed from this output.

The Salt Lake City table is:
{| border=1 cellspacing=0 cellpadding=3 align=center
! Interface || Type || IP Address || Peer Address
|- align="center"
| 0 || inet || 64.57.23.210 || 0.0.0.0
|- align="center"
| 1 || inet || 64.57.23.214 || 0.0.0.0
|- align="center"
| 2 || inet || 64.57.23.218 || 0.0.0.0
|- align="center"
| 3 || p2p || 10.1.1.2 || 10.1.1.1 (KC ifn 3)
|- align="center"
| 4 || p2p || 10.1.2.2 || 10.1.2.1
|- align="center"
| 5 || p2p || 10.1.7.2 || 10.1.7.1
|- align="center"
| 6 || p2p || 10.1.8.2 || 10.1.8.1 (DC ifn 6)
|}
− | |||
− | + | For example, the peer IP address of interface 3 is 10.1.1.1 which is the IP address of Kansas City's interface 3. | |
− | + | You can verify the labeling of the peer IP addresses for interfaces 4-6 by looking at the output at the bottom of [[The GENI SPP Configuration]] page. | |
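In the tables above, the two ends of a point-to-point link always share the 10.1.x prefix and differ only in the last octet (.1 on one side, .2 on the other). A hypothetical helper that derives a peer address from a local ''p2p'' address under that assumption looks like this; it is an illustration of the addressing pattern only, not an SPP command:

```python
def peer_addr(local_ip):
    # Assumption (taken from the tables above): the two ends of an SPP
    # point-to-point link share the 10.1.x prefix, and their last
    # octets are always 1 and 2.
    a, b, c, d = local_ip.split(".")
    return ".".join([a, b, c, "1" if d == "2" else "2"])

print(peer_addr("10.1.1.2"))  # Salt Lake City ifn 3 -> its Kansas City peer
```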
Below is a diagram of the SPP interconnection map:

[[Image:spp-interconnection-map.png|right|300px|border|SPP Interconnection Map]]

''scfg'' has other information commands, as well as commands for allocating/freeing SPP resources and managing queues.
The examples below describe some of these commands.
The page [[SPP Command Interface]] summarizes all of the commands.

== Hello GPE World ==

[[Image:hello-gpe-pkts.png|right|300px|border|Hello GPE World Packets]]

The first program we will run on a GPE is a variant of the UDP echo server.
You will run the client on your Linux host; it sends a UDP packet containing the 6-byte C-string "hello" (including the terminating NUL byte) to the server.
The server listens on port 50000 for an incoming UDP packet.
When it receives a packet, it displays the content of the read buffer in both ASCII and hexadecimal formats and sends the "hello" string back to the client.

Going through this example should demonstrate that using a GPE is just like using any other general-purpose host, except that you need to set up and tear down the SPP.

Here are the steps involved in this example:

* Create the client and server executables.
* Copy the server executable and scripts to a GPE.
* Set up the SPP.
* Run the server and the client.
* Tear down the SPP.
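The server behavior described above can be sketched in a few lines. The real ''hello-server'' is a C program supplied in the tar file; this Python sketch only mirrors the behavior as described (receive one datagram, print it in hex, echo it back):

```python
import socket

def serve_one(host="127.0.0.1", port=50000):
    # Sketch of the described hello-server behavior: receive one UDP
    # datagram, show its bytes in hex, and echo it back to the sender.
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind((host, port))
    data, addr = srv.recvfrom(2048)
    print(" ".join(f"{b:02x}" for b in data))  # hex dump of the payload
    srv.sendto(data, addr)                     # echo it back
    srv.close()
    return data
```

A matching test client simply sends the 6-byte string (in Python, <code>b"hello\x00"</code>) to the same address and port and waits for the echoed datagram.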
+ | |||
+ | == Create the Client and Server Executables == | ||
+ | |||
+ | >>>>> How to get the tar file ??? <<<<< | ||
+ | |||
+ | In the command block below, we assume that you will extract the tar file into the directory ''~/hello-gpe'' in your home directory: | ||
+ | |||
+ | <pre> | ||
+ | host> cd # change directory to your home directory | ||
+ | host> tar tf ~/Download/hello-gpe.tar # see what is in the tar file | ||
+ | host> tar xf ~/Download/hello-gpe.tar # extract contents into ~/hello-gpe/ directory | ||
+ | host> cat README # read about the example | ||
+ | host> make # make the two executables | ||
+ | ... Follow the insructions for doing a test using localhost ... | ||
+ | </pre> | ||
+ | |||
+ | You have now created two executables in the ~/hello-gpe/ directory: ''hello-client'' and ''hello-server''. | ||
+ | |||
== Copy the Server Executable and Scripts to a GPE ==

Now, create a tar file that contains the two executables above and the SPP scripts found in ~/hello-gpe/scripts/:

<pre>
host> make spp-hello.tar
host> scp spp-hello.tar YOUR_SLICE@SPP_ADDRESS:
host> ssh YOUR_SLICE@SPP_ADDRESS
GPE> tar tf spp-hello.tar  # look at what is in the tar file
GPE> tar xf spp-hello.tar  # creates and populates ~/hello-gpe/
</pre>

The ''spp-hello.tar'' file contains the scripts from the ''hello-gpe.tar'' file and the two executables ''hello-server'' and ''hello-client'' that you just created.
We will first lead you through the process of setting up the SPP, running the executables, and tearing down the SPP in a step-by-step manner.
Afterwards, we will discuss how to script the setup and teardown procedures.
== Setup the SPP ==

Setting up the SPP so that packets from ''hello-client'' can reach your ''hello-server'' process running on a GPE of the Salt Lake City SPP involves these steps:

* Run the ''mkResFile4hello.sh'' script to create a resource reservation file.
* Submit the resource reservation.
* Claim the resources described by the resource reservation file.
** This allocates 1 Mbps of capacity from the 64.57.23.210 interface.
* Set up the endpoint (64.57.23.210, 50000) to handle 1 Mbps of UDP traffic.
=== Create a Resource Reservation File ===

Most users make a resource reservation file in one of two ways:

* Manual: Copy an existing file and hand-edit it to meet their needs; or
* Script: Run a script that generates the file.

You can hand-edit the file ''~/hello-gpe/scripts/res.xml'' or generate one using the script ''~/hello-gpe/scripts/mkResFile4hello.sh''.
The ''res.xml'' file looks like this:

<pre>
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<spp>
  <rsvRecord>
    <!-- Date Format: YYYYMMDDHHmmss -->
    <!-- That's year, month, day, hour, minutes, seconds -->
    <rDate start="20100304121500" end="20100404121500" />
    <plRSpec>
      <ifParams>
        <!-- reserve 1 Mb/s on one interface -->
        <ifRec bw="1000" ip="64.57.23.210" />
      </ifParams>
    </plRSpec>
  </rsvRecord>
</spp>
</pre>

This file defines the following reservation:

* The reservation runs from 1215 on March 4, 2010 to 1215 on April 4, 2010.
** The hour field (HH) is based on a 24-hour clock.
** This period must include the actual time period during which you plan to use the resources.
* The ''plRSpec'' section defines the GPE (slowpath) resources.
** It specifies that you will be using 1000 Kbps (= 1 Mbps) of the interface with IP address 64.57.23.210.
* This reservation does not have an ''fpRSpec'' component, which defines fastpath resources, because this example doesn't use the fastpath ([[The IPv4 Metanet Tutorial]] shows how to create a reservation file containing fastpath resources).

We don't really need 1 Mbps of bandwidth for this example, since we are only sending a UDP packet with a 6-byte payload.

If you use the manual method to create the reservation file, you can edit the existing ''res.xml'' file that is in the tar file.
You will only need to edit the two date fields in the ''rDate'' tag and the bandwidth and IP address fields in the ''ifRec'' tag.
You can choose an arbitrary file name.
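Generating such a file programmatically is also straightforward. The supported way is the ''mkResFile4hello.sh'' script described next; the Python sketch below is only an illustration, with the tag and attribute names (''rDate'', ''plRSpec'', ''ifParams'', ''ifRec'') taken from the ''res.xml'' example above:

```python
from datetime import datetime, timedelta

def make_res_xml(ip, bw_kbps=1000, days=30):
    # Illustrative generator for a res.xml-style reservation file
    # covering a period that starts now and ends `days` days later.
    fmt = "%Y%m%d%H%M%S"  # YYYYMMDDHHmmss, the rDate format
    start = datetime.now()
    end = start + timedelta(days=days)
    return (
        '<?xml version="1.0" encoding="utf-8" standalone="yes"?>\n'
        "<spp>\n"
        "  <rsvRecord>\n"
        f'    <rDate start="{start.strftime(fmt)}" end="{end.strftime(fmt)}" />\n'
        "    <plRSpec>\n"
        "      <ifParams>\n"
        f'        <ifRec bw="{bw_kbps}" ip="{ip}" />\n'
        "      </ifParams>\n"
        "    </plRSpec>\n"
        "  </rsvRecord>\n"
        "</spp>\n"
    )

print(make_res_xml("64.57.23.210"))
```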
+ | |||
+ | If you use the script method, the ''mkResFile4hello.sh'' script has been written specifically for this example. | ||
+ | You run the script on the GPE like this for the Salt Lake City SPP: | ||
+ | |||
+ | <pre> | ||
+ | GPE> cd ~/hello-gpe/scripts | ||
+ | GPE> ./mkResFile4hello.sh 64.57.23.210 # Salt Lake City SPP, interface 0 IP address | ||
+ | +++ Making res.xml, 1 month reservation file starting from now: | ||
+ | BEGIN = 20100304205900 | ||
+ | END = 20100404205900 | ||
+ | SPP_ADDRESS = 64.57.23.210 | ||
+ | +++ | ||
+ | See res.xml file | ||
+ | </pre> | ||
+ | |||
+ | It will create a reservation file for a one month period starting from today for the interface IP address entered as the first command-line argument. | ||
+ | It announces the date parameters (20100304205900 and 20100404205900) and the IP address (64.57.23.194) that it will put into the reservation file. | ||
+ | |||
+ | Our choice of a one month reservation period was arbitrary. | ||
+ | You can modify the date fields in our ''res.xml'' file to suit your own needs. | ||
+ | Furthermore, note the following: | ||
+ | |||
+ | * You can make an ''advanced reservation'' which covers a time period in the future. | ||
+ | * The time period can have a ''start'' date that is in the past. | ||
+ | * You can only one reservation per time period; i.e., reservations can't overlap in time. | ||
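The no-overlap rule above can be checked mechanically. The sketch below is illustrative only (the SPP enforces the rule itself); it relies on the fact that fixed-width YYYYMMDDHHmmss strings sort in chronological order, so no date parsing is needed:

```python
def reservations_overlap(r1, r2):
    # r1 and r2 are (start, end) pairs in YYYYMMDDHHmmss form.
    # Lexicographic comparison of these fixed-width digit strings
    # matches chronological comparison.
    return r1[0] < r2[1] and r2[0] < r1[1]
```

For example, a reservation ending exactly when another begins does not overlap it, but any shared interval does.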
+ | |||
+ | === Submit the Reservation === | ||
+ | |||
+ | Now, we use the ''scfg'' command ''--cmd make_resrv'' to submit the reservation: | ||
+ | |||
+ | <pre> | ||
+ | GPE> scfg --cmd make_resrv --xfile res.xml | ||
+ | Warning: Your reservation has no fpRSpec | ||
+ | Adding reservation: | ||
+ | rDate: [3/4/2010 at 20:59:0, 4/4/2010 at 20:59:0] | ||
+ | GPE: (ip=64.57.23.210 bw=1000 Kbps) | ||
+ | |||
+ | Successfully added reservation | ||
+ | </pre> | ||
+ | |||
+ | Note that ''scfg'' outputs a warning that the reservation file doesn't have a ''fpRspec'' component; i.e., a fastpath specification. | ||
+ | Since this example is using only the slowpath, we can ignore the warning. | ||
+ | |||
+ | You can check for a reservation using one of the reservation management commands described in [[SPP Command Interface]]. | ||
+ | The ''get_resrvs'' command will display all of your reservations: | ||
+ | |||
+ | <pre> | ||
+ | Get all reservations: | ||
+ | Successfully got reservations (1) | ||
+ | 0) rDate: [3/4/2010 at 20:59:0, 4/4/2010 at 20:59:0] | ||
+ | GPE: (ip=64.57.23.210 bw=1000 Kbps) | ||
+ | </pre> | ||
+ | |||
+ | === Claim the Resources === | ||
+ | |||
+ | The reservation only indicates your intent to use resources. | ||
+ | You use the ''scfg'' command ''--cmd claim_resources'' to actually allocate the resources specified by a reservation: | ||
+ | |||
+ | <pre> | ||
+ | GPE> scfg --cmd get_ifattrs --ifn 0 | ||
+ | Interface attributes: | ||
+ | [ifn 0, type "inet", linkBW 1000000Kbps, availBW 864552Kbps, ipAddr 64.57.23.210] | ||
+ | |||
+ | GPE> scfg --cmd claim_resources | ||
+ | Successfully allocated GPE spec | ||
+ | |||
+ | GPE> scfg --cmd get_ifattrs --ifn 0 | ||
+ | Interface attributes: | ||
+ | [ifn 0, type "inet", linkBW 1000000Kbps, availBW 863552Kbps, ipAddr 64.57.23.210] | ||
+ | </pre> | ||
+ | |||
+ | The third command does the allocation of your active reservation: 1 Mbps of capacity from the 64.57.23.210 interface. | ||
+ | We can now configure slowpath endpoints that use portions of this 1 Mbps capacity. | ||
+ | |||
+ | The command block above shows that interface 0's available bandwidth has been reduced by 1000 Kbps. | ||
+ | The ''--cmd get_ifattrs'' outputs the same information as ''--cmd get_ifaces'' but for only one interface. | ||
+ | |||
+ | === Setup the Endpoint === | ||
+ | |||
+ | We now use the capacity allocated by ''--cmd claim_resources'' by creating a slowpath endpoint within the 64.57.23.210 interface by using ''--cmd setup_sp_enpoint'': | ||
+ | |||
+ | <pre> | ||
+ | GPE> scfg --cmd setup_sp_endpoint --bw 1000 --ipaddr 64.57.23.210 --port 50000 --proto 17 | ||
+ | Set up slow path endpoint: epInfo [ bw 1000 epoint { 64.57.23.194, 50000, 17 } ] | ||
+ | </pre> | ||
+ | |||
+ | This command example shows: | ||
+ | |||
+ | * The endpoint IP address, port number and protocol are 64.57.23.210, 50000 and 17 (UDP) respectively; and | ||
+ | * it will use all 1000 Kbps of the allocated capacity. | ||
+ | |||
+ | [[Image:setup_sp_endpoint.png|right|300px|border|scfg --cmd setup_sp_endpoint ...]] | ||
+ | |||
+ | Note that: | ||
+ | |||
+ | * The ''hello-client'' process will send UDP packets to (SPP_ADDRESS=64.57.23.210, UDP_PORT=50000). | ||
+ | * A slice can have multiple endpoints within an interface allocation as long as the sum of their bandwidth parameters does not exceed the allocated capacity. | ||
+ | * Multiple slices can use the same IP address. | ||
+ | ** Each slice will be guaranteed their allocated bandwidth. | ||
+ | * Slices can not use the same port number. | ||
+ | * The ''--cmd setup_sp_endpoint'' command installed a filter in the linecard that will direct incoming traffic to any process your slice is running on a GPE. | ||
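The bandwidth-accounting rule in the notes above can be expressed as a simple check. This is illustrative only; the SPP itself enforces the rule when you issue ''setup_sp_endpoint'':

```python
def can_add_endpoint(existing_bw_kbps, new_bw_kbps, claimed_bw_kbps=1000):
    # A new endpoint fits only if the combined bandwidth of all of the
    # slice's endpoints on the interface stays within the capacity
    # claimed for it (all values in Kbps).
    return sum(existing_bw_kbps) + new_bw_kbps <= claimed_bw_kbps

print(can_add_endpoint([], 1000))     # the hello example uses the full 1000 Kbps -> True
print(can_add_endpoint([1000], 100))  # a second endpoint would not fit -> False
```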
+ | |||
+ | == Run the Server and the Client == | ||
+ | |||
+ | You should have two windows: one to run the server an done to run the client. | ||
+ | In your GPE window, start the server on the GPE so that it listens on port 50000: | ||
+ | |||
+ | <pre> | ||
+ | GPE> hello-server 50000 | ||
+ | udp-echo-srvr: Listening on port 50000 | ||
+ | ... Waiting for UDP packet ... | ||
+ | </pre> | ||
+ | |||
+ | Actually, the default port number is 50000. | ||
+ | So, you could have also left off the 50000 argument. | ||
+ | |||
+ | Now, in your other window, start the client with the IP address and port number of the slowpath endpoint: | ||
+ | |||
+ | <pre> | ||
+ | host> hello-client 64.57.23.210 50000 | ||
+ | udp-echo-cli: Sending to port 50000 at 64.57.23.210 | ||
+ | send_dgram rcvd 6 char: <hello> | ||
+ | </pre> | ||
+ | |||
+ | The client sends one UDP packet containing the C-string "hello", waits for the returning packet and displays the contents of the packet it receives. | ||
+ | The server continues to run waiting for another packet. | ||
+ | To send another packet, run ''hello-client'' again. | ||
+ | |||
+ | The GPE window shows that the server received a 6-byte datagram and displays the contents of a 10-byte buffer. | ||
+ | |||
+ | <pre> | ||
+ | GPE> hello-server 50000 | ||
+ | udp-echo-srvr: Listening on port 50000 | ||
+ | echo_dgram rcvd 6 bytes (hex follows): | ||
+ | 68 65 6c 6c 6f 00 00 00 2c ffffff91 | ||
+ | ==================== | ||
+ | ... Wait for next UDP packet to port 50000 ... | ||
+ | ... Enter ctrl-c to terminate ... | ||
+ | </pre> | ||
+ | |||
+ | The hexadecimal output shows that the first six bytes contain the correct hexadecimal representation for "hello" including the C-string terminating NUL (hex 00) byte. | ||
+ | For example, an ASCII 'h' is 68 in hexadecimal. | ||
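You can reproduce the expected first six bytes yourself; for example, in Python:

```python
msg = b"hello\x00"  # the 6-byte C-string, including the terminating NUL
print(" ".join(f"{b:02x}" for b in msg))  # -> 68 65 6c 6c 6f 00
```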
+ | |||
+ | == Teardown the SPP == | ||
+ | |||
+ | After you are done using the GPE, you need to return the resources and cancel the reservation. | ||
+ | The teardown procedure is the: | ||
+ | |||
+ | <pre> | ||
+ | GPE> scfg --cmd free_sp_endpoint --ipaddr 64.57.23.210 --port 50000 --proto 17 | ||
+ | free_sp_endpoint completed successfully 0 | ||
+ | |||
+ | GPE> scfg --cmd free_sp_resources | ||
+ | Successfully freed sp resources | ||
+ | |||
+ | GPE> scfg --cmd cancel_resrv | ||
+ | Get reservation for current time | ||
+ | Successfully canceled reservation | ||
+ | </pre> | ||
+ | |||
+ | {| border=1 cellspacing=0 cellpadding=3 align=right | ||
+ | ! Step || Setup || Teardown | ||
+ | |- | ||
+ | |align="center"| 1 || ''make_resrv'' || ''free_sp_endpoint'' | ||
+ | |- | ||
+ | |align="center"| 2 || ''claim_resources'' || ''free_sp_resources'' | ||
+ | |- | ||
+ | |align="center"| 3 || ''setup_sp_endpoint'' || ''cancel_resrv'' | ||
|} | |} | ||
− | = | + | The teardown procedure is the reverse of the setup procedure: |
+ | |||
+ | The ''--cmd cancel_resrv'' command cancels the current active reservation. | ||
+ | You can also use it to cancel an advance reservation by supplying a date. | ||
+ | See [[SPP Command Interface]] for the details. | ||
+ | |||
+ | <br clear=all> | ||
== The Setup and Teardown Scripts ==

The work of setting up and tearing down the SPP can be scripted so that it can be repeated without manually entering all of the commands.
When you extracted the files from the ''spp-hello.tar'' file, the scripts ''setup4hello.sh'' and ''teardown4hello.sh'' were extracted into the ~/hello-gpe/scripts/ directory.

These scripts can be used in our example like this:

<pre>
GPE> cd ~/hello-gpe/scripts

GPE> ./setup4hello.sh 64.57.23.210 50000
+++ Making res.xml, 1 month reservation file starting from now:
BEGIN = 20100304213200
END = 20100404213200
SPP_ADDRESS = 64.57.23.210
+++
See res.xml file
Warning: Your reservation has no fpRSpec
Adding reservation:
  rDate: [3/4/2010 at 21:32:0, 4/4/2010 at 21:32:0]
  GPE: (ip=64.57.23.210 bw=1000 Kbps)

Successfully added reservation
Successfully allocated GPE spec
Set up slow path endpoint: epInfo [ bw 1000 epoint { 64.57.23.210, 50000, 17 } ]

GPE> cd ..  # now in ~/hello-gpe/
GPE> ./hello-server 50000
udp-echo-srvr: Listening on port 50000
... Run hello-client at your host ...
echo_dgram rcvd 6 bytes (hex follows):
68 65 6c 6c 6f 00 00 00 2c ffffff91
====================
... ctrl-c entered to terminate server ...

GPE> cd scripts
GPE> ./teardown4hello.sh 64.57.23.210 50000
free_sp_endpoint completed successfully 0
Successfully freed sp resources
Get reservation for current time
Successfully canceled reservation
</pre>
− | + | == Higher Speed Traffic == | |
− | + | Now that you know how to setup and teardown the SPP, you should be able to run any application on a GPE. | |
− | + | You could run a traffic generator such as ''iperf'' (http://sourceforge.net/projects/iperf/). | |
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | + | We installed ''iperf'' in our GPE slice and our host. | |
+ | Then, we used the ''setup4hello.sh'' script to setup the SPP and then sent two seconds of UDP traffic to the server at 1.1 Mbps, 2 Mbps, 10 Mbps and finally 100 Mbps from our host. | ||
+ | The following timeline shows the commands issued in the GPE window and the host window: | ||
− | + | <pre> | |
+ | GPE> iperf -s -u -p 50000 | ||
+ | host> iperf -c 64.57.23.210 -u -p 50000 -b 1.1m -t 2 | ||
+ | host> iperf -c 64.57.23.210 -u -p 50000 -b 2m -t 2 | ||
+ | host> iperf -c 64.57.23.210 -u -p 50000 -b 10m -t 2 | ||
+ | host> iperf -c 64.57.23.210 -u -p 50000 -b 100m -t 2 | ||
+ | GPE> ctrl-c # terminate server | ||
+ | </pre> | ||
− | + | The server window is shown on the left, and the client window is shown on the right. | |
− | + | The server arguments are '-s' (server mode), '-u' (UDP packets), and '-p 50000' (use port 50000). | |
− | + | The client arguments are '-c 64.57.23.210' (client mode with server at 64.57.23.210), '-u' (UDP packets), '-p 50000' (use port 50000), '-b Xm' (bandwidth X in Mbps), and '-t 2' (send for two seconds). | |
− | |||
− | |||
− | |||
− | |||
− | |||
− | + | The server output (with # annotation) is shown below: | |
− | + | <pre> | |
− | + | GPE> ./iperf -s -u -p 50000 | |
− | + | ------------------------------------------------------------ | |
− | + | Server listening on UDP port 50000 | |
− | + | Receiving 1470 byte datagrams | |
− | + | UDP buffer size: 108 KByte (default) | |
− | + | ------------------------------------------------------------ | |
+ | [ 4] local 64.57.23.210 port 50000 connected with 128.252.160.167 port 60988 # 1.1 Mbps | ||
+ | [ 4] 0.0- 2.0 sec 271 KBytes 1.09 Mbits/sec 0.033 ms 0/ 189 (0%) | ||
+ | [ 3] local 64.57.23.210 port 50000 connected with 128.252.160.167 port 53234 # 2 Mbps | ||
+ | [ 3] 0.0- 2.0 sec 491 KBytes 1.99 Mbits/sec 0.085 ms 0/ 342 (0%) | ||
+ | [ 4] local 64.57.23.210 port 50000 connected with 128.252.160.167 port 39372 # 10 Mbps | ||
+ | [ 4] 0.0- 3.3 sec 2.39 MBytes 6.07 Mbits/sec 1.290 ms 0/ 1702 (0%) | ||
+ | [ 3] local 64.57.23.210 port 50000 connected with 128.252.160.167 port 57153 # 100 Mbps | ||
+ | [ 3] 0.0- 9.9 sec 7.19 MBytes 6.08 Mbits/sec 0.820 ms 11963/17095 (70%) | ||
+ | [ 3] 0.0- 9.9 sec 1 datagrams received out-of-order | ||
+ | </pre> | ||
− | + | The output shows that the ''iperf'' server received packets at a maximum input rate of around 6 Mbps. | |
+ | It is interesting to note that even though the client sent at 10 Mbps and the server measured an average input rate of around 6 Mbps, there was no packet loss (0/1702 means 0 out of 1702 packets were lost) ... even though we only allocated 1 Mbps. | ||
− | + | == Monitoring Traffic == | |
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | + | The only way to monitor traffic in and out of a GPE process is to have the process record and display packet statistics. | |
+ | The hello-gpe code could be modified to do that although we later show how that can be combined with | ||
+ | our slice daemon ''sliced'' and a graphical interface to plot the statistics. | ||
Exercises

This page has shown you the basics of using the GPE. You can improve your skill at writing a slowpath software router by doing the exercises below.

1. A Different Interface. How would you change the commands shown in The Setup and Teardown Scripts to use 64.57.23.214 instead of 64.57.23.210?

2. A Different SPP. Show the sequence of commands needed to use the Kansas City SPP instead of the Salt Lake City SPP.

3. Multiple Interfaces. Suppose that you wanted hello-server to read UDP packets from 64.57.23.210 but send the response packet to the client using 64.57.23.214. Describe the changes to the client and server code, scripts, and command usage that would be needed to make it possible to use the two interfaces.

4. Forwarding Packets (1). Consider the communication pattern client->SPP/relay->server. That is, the client sends a packet to a relay process running on a GPE, which then relays the packet to a server running on a host different from the one where the client is running. Note that the relay process acts like both the hello-gpe server and client. Also, the relay process should use two different interfaces: one for the client and one for the server. The server announces the reception of each packet and then drops the packet. Describe what changes would be needed to the hello-gpe code, the scripts, and the command usage to make this possible.

5. Forwarding Packets (2). Consider the communication pattern client<->SPP/relay<->SPP'/server. That is, the client sends a packet to a relay process running on a GPE, which then relays the packet to a server process running on a different GPE. As before, the relay process should use two different interfaces: one for the client and one for the server, but it should use a point-to-point interface for packets to the server. The server should announce the reception of each packet and send the packet back to the relay process. The relay process should then forward the packet back to the client. Describe what changes would be needed to the hello-gpe code, the scripts, and the command usage to make this possible.
Latest revision as of 22:14, 15 March 2010
Contents
- 1 Introduction
- 2 Pinging SPP External Interfaces
- 3 DNS Names of SPP External Interfaces
- 4 Logging Into an SPP's GPE
- 5 Using ssh-agent
- 6 The SPP Configuration Command scfg
- 7 Getting Information About External Interfaces
- 8 Getting Information About Peers
- 9 Constructing an SPP Interconnection Map
- 10 Hello GPE World
- 11 Create the Client and Server Executables
- 12 Copy the Server Executable and Scripts to a GPE
- 13 Setup the SPP
- 14 Run the Server and the Client
- 15 Teardown the SPP
- 16 The Setup and Teardown Scripts
- 17 Higher Speed Traffic
- 18 Monitoring Traffic
- 19 Exercises
Introduction
As on any PlanetLab node, a user can allocate a subset of an SPP's resources, called a slice. An SPP slice user can either use a fastpath-slowpath packet processing paradigm, which uses both a network processor (NPE) and a general-purpose processor (GPE), or use a slowpath-only paradigm in which packet processing is handled completely by a socket program running on a GPE. This page describes how to use the GPE-only approach.
Pinging SPP External Interfaces
Unlike most PlanetLab nodes, an SPP has multiple external interfaces. In the GENI deployment, some of those interfaces have Internet2 IP addresses, and some are attached to point-to-point links going directly to an external interface of another SPP. This section introduces you to some of the Internet2 interfaces.
Let's try to ping some of those Internet2 interfaces. Enter one of the following ping commands (omit the comments):
ping -c 3 64.57.23.210   # Salt Lake City interface 0
ping -c 3 64.57.23.214   # Salt Lake City interface 1
ping -c 3 64.57.23.218   # Salt Lake City interface 2
ping -c 3 64.57.23.194   # Washington DC interface 0
ping -c 3 64.57.23.198   # Washington DC interface 1
ping -c 3 64.57.23.202   # Washington DC interface 2
ping -c 3 64.57.23.178   # Kansas City interface 0
ping -c 3 64.57.23.182   # Kansas City interface 1
ping -c 3 64.57.23.186   # Kansas City interface 2
For example, my output from the first ping command looks like this:
> ping -c 3 64.57.23.210
PING 64.57.23.210 (64.57.23.210) 56(84) bytes of data.
64 bytes from 64.57.23.210: icmp_seq=1 ttl=56 time=67.5 ms
64 bytes from 64.57.23.210: icmp_seq=2 ttl=56 time=55.9 ms
64 bytes from 64.57.23.210: icmp_seq=3 ttl=56 time=59.0 ms

--- 64.57.23.210 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 55.949/60.823/67.511/4.895 ms
Note that you may not be able to ping an SPP external interface. Some reasons why it might fail are:
- Your host doesn't have ping installed. This is not typical.
- The SPP interface is down.
- Your network blocks ping traffic.
- Your network provider doesn't route Internet2 addresses.
In the first case, you will get a command not found error message. The ping command is usually located at /bin/ping. See your system administrator if you can't find ping. In the other cases, your ping command will eventually return with a 100% packet loss message. In the last case, running the command traceroute 64.57.23.210 will give a Network unreachable indication (the last router is marked !N).
If you are unsuccessful with one interface, try to ping the interface of a different SPP.
However, you can always get around these problems (except for an SPP being down) by issuing the ping command from a PlanetLab node. We discuss how to log into a PlanetLab node in Using the IPv4 Code Option.
DNS Names of SPP External Interfaces
SPP | Ifn | IP Address | DNS Name
---|---|---|---
KANS | 0 | 64.57.23.178 | sppkans1.arl.wustl.edu
KANS | 1 | 64.57.23.182 | sppkans2.arl.wustl.edu
KANS | 2 | 64.57.23.186 | sppkans3.arl.wustl.edu
WASH | 0 | 64.57.23.194 | sppwash1.arl.wustl.edu
WASH | 1 | 64.57.23.198 | sppwash2.arl.wustl.edu
WASH | 2 | 64.57.23.202 | sppwash3.arl.wustl.edu
SALT | 0 | 64.57.23.210 | sppsalt1.arl.wustl.edu
SALT | 1 | 64.57.23.214 | sppsalt2.arl.wustl.edu
SALT | 2 | 64.57.23.218 | sppsalt3.arl.wustl.edu
The SPP's external interfaces also have DNS names, so ping -c 3 sppsalt1.arl.wustl.edu works as well as ping -c 3 64.57.23.210. The table above shows the DNS names of the Internet external interfaces.
Logging Into an SPP's GPE
Now, let's try to log into the SPP interface that you were able to ping. The example below assumes that interface was 64.57.23.210; that is, interface 0 of the Salt Lake City SPP. Note the following:
- You must use ssh to log into an SPP.
- When you ssh to an SPP's external interface, you will actually get logged into a GPE of the SPP.
- Furthermore, you will be logging into your slice in a GPE.
- Even if your network blocks your ping packets, you should be able to log into a GPE as long as there is a route to the SPP's external interface address.
- You can 'ssh' to any of the SPP's external interfaces.
To log into a GPE at the Salt Lake City SPP, I would enter:
ssh pl_washu_sppDemo@64.57.23.210
where my slice name is pl_washu_sppDemo. Thus, the general format is:
ssh YOUR_SLICE@SPP_ADDRESS
where YOUR_SLICE is the slice you were assigned during account registration, and SPP_ADDRESS is the IP address of an SPP external interface.
During the login process, you will be asked to enter your RSA passphrase unless ssh-agent or an equivalent utility (e.g., keychain, gnome-keyring-daemon) is holding your private RSA key.
host> ssh pl_washu_sppDemo@SPP_ADDRESS
Enter passphrase for key '/home/.../LOGIN_NAME/.ssh/id_rsa':
... Respond with your passphrase ...
Last login: ... Previous login information ...
[YOUR_SLICE@SPP_ADDRESS ~]$
If the SSH daemon asks you for your password, you will have to call ssh using the -i KEY_FILE argument like this:
ssh -i ~/.ssh/id_rsa YOUR_SLICE@SPP_ADDRESS
... The SSH daemon will ask for your passphrase ...
Using ssh-agent
This section is a very brief explanation of how to use ssh-agent. You can skip this section if you are already using such an agent. If you have never used such an agent, note that there are several alternatives to the procedure described below and our description is meant to be a simple cookbook procedure. See the ssh-agent and ssh-add man pages or the web for more details.
The basic idea is to run ssh-agent which is a daemon process that caches private keys and listens for requests from SSH clients needing a private key related computation. Then, run the ssh-add command to add your private key to your agent's cache. This is only done once after you start the SSH agent. The process will ask you for your passphrase which is used to decrypt the private key which is then held in main memory by the agent.
For example,
eval `ssh-agent`   # Notice the backquotes
ssh-add
... Enter your passphrase when it prompts for it ...
Notice that we are using backquotes (which denotes command substitution) in the first line, NOT the normal forward quote characters.
In the first line, ssh-agent outputs two commands to stdout, which are then evaluated by the eval command. These two commands set the two environment variables SSH_AUTH_SOCK and SSH_AGENT_PID. Enter the command "printenv | grep SSH_A", and you will get output that looks like:
SSH_AUTH_SOCK=/tmp/ssh-sTNf2142/agent.2142
SSH_AGENT_PID=2143
which says that process 2143 is your ssh-agent and it is listening for requests on the Unix Domain socket /tmp/ssh-sTNf2142/agent.2142. The ssh-add command adds your private key to the list of private keys held by ssh-agent.
You can now verify that you can ssh to an SPP without entering a password or passphrase. In fact, as long as the agent is running, you will not need to enter a passphrase in any subshell of the current shell when logging into an SPP, because the SSH environment variables are passed to all children of the current shell, allowing them to communicate with the same agent.
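To see concretely what the eval line does, here is a sketch (in Python, for illustration only; the sample text mirrors the sh-style output ssh-agent typically prints, and the helper name parse_agent_env is mine) of the variable assignments that eval applies to your shell:

```python
import re

# Sample sh-style output of `ssh-agent`, matching the example above.
agent_output = """SSH_AUTH_SOCK=/tmp/ssh-sTNf2142/agent.2142; export SSH_AUTH_SOCK;
SSH_AGENT_PID=2143; export SSH_AGENT_PID;
echo Agent pid 2143;"""

def parse_agent_env(text):
    """Extract the VAR=value assignments that `eval` would apply."""
    env = {}
    for match in re.finditer(r'^(SSH_[A-Z_]+)=([^;]+);', text, re.MULTILINE):
        env[match.group(1)] = match.group(2)
    return env

env = parse_agent_env(agent_output)
print(env["SSH_AUTH_SOCK"])   # /tmp/ssh-sTNf2142/agent.2142
print(env["SSH_AGENT_PID"])   # 2143
```

The eval built-in does exactly this, except that it sets the variables in the current shell rather than returning them.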
The SPP Configuration Command scfg
After you have logged into a GPE, you can use the scfg command to:
- Get information about the SPP
- Configure the SPP
- Make resource reservations
You can get help information from scfg by entering one of these forms of the command:
scfg --help all      # show help for all commands
scfg --help info     # show help for information commands
scfg --help queues   # show help for queue commands
scfg --help reserv   # show help for reservation commands
scfg --help alloc    # show help for resource alloc/free commands
Try getting help on the information commands by entering:
scfg --help info
Your output should look like this:
USAGE:
INFORMATION CMDS:
scfg --cmd get_ifaces              Display all interfaces
scfg --cmd get_ifpeer --ifn N      Display the peer of interface num N
... other output not shown ...
If you get a command not found message, try entering:
/usr/local/bin/scfg --help info
If the command now runs, you need to add /usr/local/bin to your PATH environment variable. The rest of this tutorial assumes that your PATH environment variable has been set to include the directory containing the scfg command.
Getting Information About External Interfaces
SPPs have multiple external interfaces. To show the attributes of all external interfaces, enter:
scfg --cmd get_ifaces
For example, running this command on the Salt Lake City SPP produces:
Interface list:
[ifn 0, type "inet", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 64.57.23.210]
[ifn 1, type "inet", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 64.57.23.214]
[ifn 2, type "inet", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 64.57.23.218]
[ifn 3, type "p2p", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 10.1.1.2]
[ifn 4, type "p2p", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 10.1.2.2]
[ifn 5, type "p2p", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 10.1.7.2]
[ifn 6, type "p2p", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 10.1.8.2]
This output shows:
- There are seven external interfaces numbered from 0 to 6.
- type: There are two types of interfaces: Internet (inet) and point-to-point (p2p).
- linkBW: The capacity of each interface is 1 Gbps (i.e., 1000000 Kbps).
- availBW: The available bandwidth of each interface is 899.232 Mbps (i.e., 899232 Kbps); that is, that portion of the capacity that hasn't already been allocated.
- ipAddr: The IP addresses of each interface.
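The bracketed lines in the get_ifaces output are regular enough to pick apart mechanically. A sketch (Python, illustrative only; the field names come from the output shown above, and parse_ifaces is my name for the helper):

```python
import re

# Two lines copied from the get_ifaces output above.
sample = """[ifn 0, type "inet", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 64.57.23.210]
[ifn 3, type "p2p", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 10.1.1.2]"""

def parse_ifaces(text):
    """Turn each [ifn ...] line into a dict of its fields."""
    pattern = re.compile(
        r'\[ifn (\d+), type "(\w+)", linkBW (\d+)Kbps, '
        r'availBW (\d+)Kbps, ipAddr ([\d.]+)\]')
    ifaces = []
    for m in pattern.finditer(text):
        ifaces.append({
            "ifn": int(m.group(1)),
            "type": m.group(2),
            "linkBW_kbps": int(m.group(3)),
            "availBW_kbps": int(m.group(4)),
            "ipAddr": m.group(5),
        })
    return ifaces

for iface in parse_ifaces(sample):
    print(iface["ifn"], iface["type"], iface["ipAddr"])
```

Parsing like this is handy later for checking how much of your allocation remains (the availBW field) before setting up endpoints.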
Getting Information About Peers
The type inet interfaces are physically connected to the Internet. The type p2p interfaces are physically connected to other SPPs through point-to-point links. That's why you can only ping interfaces with type inet from your host.
You can use the get_peer command to show the IP address of the interface at the other end of a point-to-point link. For example, I would enter:
scfg --cmd get_peer --ifn 3
to find out the IP address of interface 3's peer. These seven commands will show the peer IP addresses of interfaces 0-6:
scfg --cmd get_peer --ifn 0
scfg --cmd get_peer --ifn 1
scfg --cmd get_peer --ifn 2
scfg --cmd get_peer --ifn 3
scfg --cmd get_peer --ifn 4
scfg --cmd get_peer --ifn 5
scfg --cmd get_peer --ifn 6
Running these commands on the Salt Lake City SPP produces this output:
SPP Peer IP address: 0.0.0.0
SPP Peer IP address: 0.0.0.0
SPP Peer IP address: 0.0.0.0
SPP Peer IP address: 10.1.1.1
SPP Peer IP address: 10.1.2.1
SPP Peer IP address: 10.1.7.1
SPP Peer IP address: 10.1.8.1
Notice that the p2p interfaces are the only ones with a peer IP address that is not 0.0.0.0. Furthermore, these peer addresses have the same 10.1.x.y format as the SPP's own p2p interface addresses.
Constructing an SPP Interconnection Map
We can now build a complete interconnection map of the SPPs by combining the output of the get_ifaces and get_peer commands from all SPPs. This output is shown at the bottom of The GENI SPP Configuration page. The interconnection tables shown near the top of The GENI SPP Configuration page were constructed from this output.
The Salt Lake City table is:
Interface | Type | IP Address | Peer Address
---|---|---|---
0 | inet | 64.57.23.210 | 0.0.0.0
1 | inet | 64.57.23.214 | 0.0.0.0
2 | inet | 64.57.23.218 | 0.0.0.0
3 | p2p | 10.1.1.2 | 10.1.1.1 (KC ifn 3)
4 | p2p | 10.1.2.2 | 10.1.2.1 (KC ifn 4)
5 | p2p | 10.1.7.2 | 10.1.7.1 (DC ifn 5)
6 | p2p | 10.1.8.2 | 10.1.8.1 (DC ifn 6)
For example, the peer IP address of interface 3 is 10.1.1.1, which is the IP address of Kansas City's interface 3. You can verify the labeling of the peer IP addresses for interfaces 4-6 by looking at the output at the bottom of The GENI SPP Configuration page. From these tables, one can draw a complete diagram of the SPP interconnection map.
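The pairing rule is simple: each point-to-point link joins a 10.1.x.1 address to a 10.1.x.2 address. A sketch of the map construction (Python, illustrative; the SALT entries come from the output above, and the KANS/WASH entries are taken from the "KC ifn" and "DC ifn" labels in the table):

```python
# Local p2p addresses per SPP, from the tables on this page.
p2p_local = {
    "SALT": {3: "10.1.1.2", 4: "10.1.2.2", 5: "10.1.7.2", 6: "10.1.8.2"},
    "KANS": {3: "10.1.1.1", 4: "10.1.2.1"},   # from the "KC ifn" labels
    "WASH": {5: "10.1.7.1", 6: "10.1.8.1"},   # from the "DC ifn" labels
}

# Invert the table: IP address -> (SPP, interface number).
owner = {ip: (spp, ifn)
         for spp, ifaces in p2p_local.items()
         for ifn, ip in ifaces.items()}

def peer_of(local_ip):
    """A p2p link pairs 10.1.x.1 with 10.1.x.2; return the far end's owner."""
    a, b, x, y = local_ip.split(".")
    far = ".".join([a, b, x, "1" if y == "2" else "2"])
    return owner.get(far)

print(peer_of("10.1.1.2"))   # ('KANS', 3): Salt Lake City ifn 3 peers with KC ifn 3
```

Running peer_of over every p2p address reproduces the Peer Address column of the table.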
scfg has other information commands and also commands for allocating/freeing SPP resources and managing queues. The example below will describe some of these commands. The page SPP Command Interface summarizes all of the commands.
Hello GPE World
The first program we will run on a GPE is a variant of the UDP echo server. You will run the client on your Linux host which will send a UDP packet containing the 6-byte C-string (including the terminating NUL byte) "hello" to the server. The server listens on port 50000 for an incoming UDP packet. When it receives a packet, it displays the content of the read buffer in both ASCII and hexadecimal formats and sends the "hello" string back to the client.
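The exchange just described is an ordinary UDP echo. Here is a self-contained sketch of one round of it (in Python, for illustration only; the real hello-client and hello-server are separate C programs, the helper name run_echo_once is mine, and this sketch runs both ends over the loopback interface rather than an SPP address):

```python
import socket

def run_echo_once(host="127.0.0.1"):
    """One round of the hello exchange: the client sends the 6-byte
    C-string "hello" and the server echoes it back."""
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind((host, 0))        # any free port; the real server uses 50000
    server.settimeout(5)
    port = server.getsockname()[1]

    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.settimeout(5)
    client.sendto(b"hello\x00", (host, port))   # 6 bytes, NUL included

    data, addr = server.recvfrom(1024)          # server side: read one datagram
    print("rcvd %d bytes: %s" % (len(data),
          " ".join("%02x" % b for b in data)))
    server.sendto(data, addr)                   # echo it back to the client

    reply, _ = client.recvfrom(1024)            # client side: read the echo
    server.close()
    client.close()
    return reply

print(run_echo_once())   # b'hello\x00'
```

The only thing the SPP adds to this picture is the setup and teardown around it: the packets must match a slowpath endpoint before they ever reach your server process.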
Going through this example should demonstrate to you that using a GPE is just like using any other general-purpose host except that you need to setup and teardown the SPP.
Here are the steps involved in this example:
- Create the client and server executables.
- Copy the server executable and scripts to a GPE.
- Setup the SPP.
- Run the server and the client.
- Teardown the SPP.
Create the Client and Server Executables
>>>>> How to get the tar file ??? <<<<<
In the command block below, we assume that you will extract the tar file into the directory ~/hello-gpe in your home directory:
host> cd                               # change directory to your home directory
host> tar tf ~/Download/hello-gpe.tar  # see what is in the tar file
host> tar xf ~/Download/hello-gpe.tar  # extract contents into ~/hello-gpe/ directory
host> cd hello-gpe                     # enter the extracted directory
host> cat README                       # read about the example
host> make                             # make the two executables
... Follow the instructions for doing a test using localhost ...
You have now created two executables in the ~/hello-gpe/ directory: hello-client and hello-server.
Copy the Server Executable and Scripts to a GPE
Now, create a tar file that contains the two above executables and the SPP scripts found in ~/hello-gpe/scripts/:
host> make spp-hello.tar
host> scp spp-hello.tar YOUR_SLICE@SPP_ADDRESS:
host> ssh YOUR_SLICE@SPP_ADDRESS
GPE> tar tf spp-hello.tar   # look at what is in the tar file
GPE> tar xf spp-hello.tar   # creates and populates ~/hello-gpe/
The spp-hello.tar file contains the scripts from the hello-gpe.tar file and the two executables hello-server and hello-client that you just created. We will first lead you through the process of setting up the SPP, running the executables and then tearing down the SPP in a step-by-step manner. Afterwards, we will discuss how to script the setup and teardown procedures.
Setup the SPP
Setting up the SPP so that packets from hello-client can get to your hello-server process running on a GPE of the Salt Lake City SPP involves these steps:
- Run the mkResFile4hello.sh script to create a resource reservation file.
- Submit the resource reservation.
- Claim the resources described by the resource reservation file.
- This allocates 1 Mbps of capacity from the 64.57.23.210 interface.
- Setup the endpoint (64.57.23.210, 50000) to handle 1 Mbps of UDP traffic.
Create a Resource Reservation File
Most users make a resource reservation file in one of two ways:
- Manual: Copy an existing file and hand edit the file to meet their needs; or
- Script: Run a script that generates the file.
You can hand edit the file ~/hello-gpe/scripts/res.xml or generate one using the script ~/hello-gpe/scripts/mkResFile4hello.sh. The res.xml file looks like this:
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<spp>
  <rsvRecord>
    <!-- Date Format: YYYYMMDDHHmmss -->
    <!-- That's year, month, day, hour, minutes, seconds -->
    <rDate start="20100304121500" end="20100404121500" />
    <plRSpec>
      <ifParams>
        <!-- reserve 1 Mb/s on one interface -->
        <ifRec bw="1000" ip="64.57.23.210" />
      </ifParams>
    </plRSpec>
  </rsvRecord>
</spp>
This file defines the following reservation:
- The reservation runs from 1215 on March 4, 2010 to 1215 on April 4, 2010.
- The hour field (HH) is based on a 24-hour clock.
- This period must include the actual time period that you plan to use the resources.
- The plRSpec section defines the GPE (slowpath) resources.
- It specifies that you will be using 1000 Kbps (= 1 Mbps) of the interface with IP address 64.57.23.210.
- This reservation does not have an fpRSpec component, which defines fastpath resources, because this example doesn't use the fastpath (The IPv4 Metanet Tutorial shows how to create a reservation file containing fastpath resources).
We don't really need 1 Mbps of bandwidth for this example since we are only sending a UDP packet with a 6-byte payload.
If you use the manual method to create the reservation file, you can edit the existing res.xml file that is in the tar file. You will only need to edit the two date fields in the rDate tag and the bandwidth and IP address fields in the ifRec tag. You can choose an arbitrary file name.
If you use the script method, the mkResFile4hello.sh script has been written specifically for this example. You run the script on the GPE like this for the Salt Lake City SPP:
GPE> cd ~/hello-gpe/scripts
GPE> ./mkResFile4hello.sh 64.57.23.210   # Salt Lake City SPP, interface 0 IP address
+++ Making res.xml, 1 month reservation file starting from now:
BEGIN = 20100304205900
END = 20100404205900
SPP_ADDRESS = 64.57.23.210
+++
See res.xml file
It will create a reservation file for a one-month period starting from today for the interface IP address entered as the first command-line argument. It announces the date parameters (20100304205900 and 20100404205900) and the IP address (64.57.23.210) that it will put into the reservation file.
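The script's core job — compute a reservation window starting now and fill in the template — can be sketched like this (Python, illustrative only; the real mkResFile4hello.sh is a shell script, make_res_xml is my name, and this sketch uses a 30-day window rather than a calendar month):

```python
from datetime import datetime, timedelta

# Template copied from the res.xml layout shown above (comments omitted).
TEMPLATE = """<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<spp>
  <rsvRecord>
    <rDate start="{begin}" end="{end}" />
    <plRSpec>
      <ifParams>
        <ifRec bw="{bw}" ip="{ip}" />
      </ifParams>
    </plRSpec>
  </rsvRecord>
</spp>
"""

def make_res_xml(ip, bw_kbps=1000, now=None):
    """Reservation starting now, dates formatted as YYYYMMDDHHmmss."""
    now = now or datetime.now()
    end = now + timedelta(days=30)   # roughly one month
    fmt = "%Y%m%d%H%M%S"
    return TEMPLATE.format(begin=now.strftime(fmt),
                           end=end.strftime(fmt),
                           bw=bw_kbps, ip=ip)

print(make_res_xml("64.57.23.210", now=datetime(2010, 3, 4, 21, 32, 0)))
```

Generating the file rather than hand editing it avoids the most common mistake: a reservation window that doesn't cover the time you actually run the experiment.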
Our choice of a one month reservation period was arbitrary. You can modify the date fields in our res.xml file to suit your own needs. Furthermore, note the following:
- You can make an advance reservation, which covers a time period in the future.
- The time period can have a start date that is in the past.
- You can have only one reservation per time period; i.e., reservations can't overlap in time.
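The no-overlap rule can be checked locally before submitting. A sketch (Python, illustrative; the overlaps helper is mine — note that YYYYMMDDHHmmss strings compare correctly as plain strings because the format is fixed-width):

```python
def overlaps(r1, r2):
    """True if two (start, end) reservations share any time."""
    return r1[0] < r2[1] and r2[0] < r1[1]

# The reservation made earlier on this page.
existing = [("20100304205900", "20100404205900")]

# A proposed second reservation that starts inside the first one.
proposed = ("20100401000000", "20100501000000")

conflict = any(overlaps(r, proposed) for r in existing)
print(conflict)   # True: the first few days of April are already reserved
```

If this prints True, make_resrv would be asked to create overlapping reservations, which the rule above forbids.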
Submit the Reservation
Now, we use the scfg command --cmd make_resrv to submit the reservation:
GPE> scfg --cmd make_resrv --xfile res.xml
Warning: Your reservation has no fpRSpec
Adding reservation:
rDate: [3/4/2010 at 20:59:0, 4/4/2010 at 20:59:0]
GPE: (ip=64.57.23.210 bw=1000 Kbps)
Successfully added reservation
Note that scfg outputs a warning that the reservation file doesn't have an fpRSpec component; i.e., a fastpath specification. Since this example is using only the slowpath, we can ignore the warning.
You can check for a reservation using one of the reservation management commands described in SPP Command Interface. The get_resrvs command will display all of your reservations:
Get all reservations:
Successfully got reservations (1)
0) rDate: [3/4/2010 at 20:59:0, 4/4/2010 at 20:59:0]
   GPE: (ip=64.57.23.210 bw=1000 Kbps)
Claim the Resources
The reservation only indicates your intent to use resources. You use the scfg command --cmd claim_resources to actually allocate the resources specified by a reservation:
GPE> scfg --cmd get_ifattrs --ifn 0
Interface attributes:
[ifn 0, type "inet", linkBW 1000000Kbps, availBW 864552Kbps, ipAddr 64.57.23.210]
GPE> scfg --cmd claim_resources
Successfully allocated GPE spec
GPE> scfg --cmd get_ifattrs --ifn 0
Interface attributes:
[ifn 0, type "inet", linkBW 1000000Kbps, availBW 863552Kbps, ipAddr 64.57.23.210]
The claim_resources command does the allocation of your active reservation: 1 Mbps of capacity from the 64.57.23.210 interface. We can now configure slowpath endpoints that use portions of this 1 Mbps capacity.
The command block above shows that interface 0's available bandwidth has been reduced by 1000 Kbps. The --cmd get_ifattrs outputs the same information as --cmd get_ifaces but for only one interface.
Setup the Endpoint
We now use the capacity allocated by --cmd claim_resources by creating a slowpath endpoint within the 64.57.23.210 interface using --cmd setup_sp_endpoint:
GPE> scfg --cmd setup_sp_endpoint --bw 1000 --ipaddr 64.57.23.210 --port 50000 --proto 17
Set up slow path endpoint: epInfo [ bw 1000 epoint { 64.57.23.210, 50000, 17 } ]
This command example shows:
- The endpoint IP address, port number and protocol are 64.57.23.210, 50000 and 17 (UDP) respectively; and
- it will use all 1000 Kbps of the allocated capacity.
Note that:
- The hello-client process will send UDP packets to (SPP_ADDRESS=64.57.23.210, UDP_PORT=50000).
- A slice can have multiple endpoints within an interface allocation as long as the sum of their bandwidth parameters does not exceed the allocated capacity.
- Multiple slices can use the same IP address.
- Each slice will be guaranteed their allocated bandwidth.
- Slices can not use the same port number.
- The --cmd setup_sp_endpoint command installed a filter in the linecard that will direct incoming traffic to any process your slice is running on a GPE.
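The rule that endpoint bandwidths must fit within the interface allocation amounts to a simple admission check, sketched here (Python, illustrative only; can_add_endpoint is my name, bandwidths in Kbps as in the scfg commands):

```python
def can_add_endpoint(allocated_kbps, endpoints, new_bw_kbps):
    """Admit a new slowpath endpoint only if the sum of endpoint
    bandwidths stays within the slice's interface allocation."""
    used = sum(bw for (_ip, _port, _proto), bw in endpoints.items())
    return used + new_bw_kbps <= allocated_kbps

endpoints = {}                              # (ip, port, proto) -> bw in Kbps
key = ("64.57.23.210", 50000, 17)           # the UDP endpoint set up above
if can_add_endpoint(1000, endpoints, 1000):
    endpoints[key] = 1000

print(can_add_endpoint(1000, endpoints, 100))   # False: the 1000 Kbps is spent
```

This is why the single endpoint in this example could take the full 1000 Kbps: it was the only one in the allocation.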
Run the Server and the Client
You should have two windows: one to run the server and one to run the client. In your GPE window, start the server on the GPE so that it listens on port 50000:
GPE> hello-server 50000
udp-echo-srvr: Listening on port 50000
... Waiting for UDP packet ...
Actually, the default port number is 50000. So, you could have also left off the 50000 argument.
Now, in your other window, start the client with the IP address and port number of the slowpath endpoint:
host> hello-client 64.57.23.210 50000
udp-echo-cli: Sending to port 50000 at 64.57.23.210
send_dgram rcvd 6 char: <hello>
The client sends one UDP packet containing the C-string "hello", waits for the returning packet and displays the contents of the packet it receives. The server continues to run waiting for another packet. To send another packet, run hello-client again.
The GPE window shows that the server received a 6-byte datagram and displays the contents of a 10-byte buffer.
GPE> hello-server 50000
udp-echo-srvr: Listening on port 50000
echo_dgram rcvd 6 bytes (hex follows):
68 65 6c 6c 6f 00 00 00 2c ffffff91
====================
... Wait for next UDP packet to port 50000 ...
... Enter ctrl-c to terminate ...
The hexadecimal output shows that the first six bytes contain the correct hexadecimal representation for "hello" including the C-string terminating NUL (hex 00) byte. For example, an ASCII 'h' is 68 in hexadecimal.
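You can reproduce those first six bytes directly; the remaining four bytes of the server's 10-byte buffer are just leftover memory contents (the ffffff91 is characteristic of a negative char being sign-extended when printed with C's %x). A one-liner to check the payload bytes:

```python
payload = b"hello\x00"   # the 6-byte C-string that hello-client sends

# Print each byte as two hex digits, as the server does.
print(" ".join("%02x" % byte for byte in payload))   # 68 65 6c 6c 6f 00
```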
Teardown the SPP
After you are done using the GPE, you need to return the resources and cancel the reservation. The teardown procedure is:
GPE> scfg --cmd free_sp_endpoint --ipaddr 64.57.23.210 --port 50000 --proto 17
free_sp_endpoint completed successfully 0
GPE> scfg --cmd free_sp_resources
Successfully freed sp resources
GPE> scfg --cmd cancel_resrv
Get reservation for current time
Successfully canceled reservation
Step | Setup | Teardown
---|---|---
1 | make_resrv | free_sp_endpoint
2 | claim_resources | free_sp_resources
3 | setup_sp_endpoint | cancel_resrv
As the table above shows, the teardown procedure is the reverse of the setup procedure.
The --cmd cancel_resrv command cancels the current active reservation. You can also use it to cancel an advance reservation by supplying a date. See SPP Command Interface for the details.
The Setup and Teardown Scripts
The work of setting up and tearing down the SPP can be scripted so that the commands can be repeated without manually entering all of the commands. When you extracted the files from the spp-hello.tar file, the scripts setup4hello.sh and teardown4hello.sh were extracted into the ~/hello-gpe/scripts/ directory.
These scripts can be used in our example like this:
GPE> cd ~/hello-gpe/scripts
GPE> ./setup4hello.sh 64.57.23.210 50000
+++ Making res.xml, 1 month reservation file starting from now:
BEGIN = 20100304213200
END = 20100404213200
SPP_ADDRESS = 64.57.23.210
+++
See res.xml file
Warning: Your reservation has no fpRSpec
Adding reservation:
rDate: [3/4/2010 at 21:32:0, 4/4/2010 at 21:32:0]
GPE: (ip=64.57.23.210 bw=1000 Kbps)
Successfully added reservation
Successfully allocated GPE spec
Set up slow path endpoint: epInfo [ bw 1000 epoint { 64.57.23.210, 50000, 17 } ]
GPE> cd ..                  # now in ~/hello-gpe/
GPE> ./hello-server 50000
udp-echo-srvr: Listening on port 50000
... Run hello-client at your host ...
echo_dgram rcvd 6 bytes (hex follows):
68 65 6c 6c 6f 00 00 00 2c ffffff91
====================
... ctrl-c entered to terminate server ...
GPE> cd scripts
GPE> ./teardown4hello.sh 64.57.23.210 50000
free_sp_endpoint completed successfully 0
Successfully freed sp resources
Get reservation for current time
Successfully canceled reservation
Higher Speed Traffic
Now that you know how to set up and tear down the SPP, you should be able to run any application on a GPE. For example, you could run a traffic generator such as iperf (http://sourceforge.net/projects/iperf/).
We installed iperf in our GPE slice and our host. Then, we used the setup4hello.sh script to setup the SPP and then sent two seconds of UDP traffic to the server at 1.1 Mbps, 2 Mbps, 10 Mbps and finally 100 Mbps from our host. The following timeline shows the commands issued in the GPE window and the host window:
<pre>
GPE> iperf -s -u -p 50000
                              host> iperf -c 64.57.23.210 -u -p 50000 -b 1.1m -t 2
                              host> iperf -c 64.57.23.210 -u -p 50000 -b 2m -t 2
                              host> iperf -c 64.57.23.210 -u -p 50000 -b 10m -t 2
                              host> iperf -c 64.57.23.210 -u -p 50000 -b 100m -t 2
GPE> ctrl-c                   # terminate server
</pre>
The server window is shown on the left, and the client window is shown on the right. The server arguments are '-s' (server mode), '-u' (UDP packets), and '-p 50000' (use port 50000). The client arguments are '-c 64.57.23.210' (client mode with server at 64.57.23.210), '-u' (UDP packets), '-p 50000' (use port 50000), '-b Xm' (bandwidth X in Mbps), and '-t 2' (send for two seconds).
The server output (with ''#'' annotations) is shown below:
<pre>
GPE> ./iperf -s -u -p 50000
------------------------------------------------------------
Server listening on UDP port 50000
Receiving 1470 byte datagrams
UDP buffer size:  108 KByte (default)
------------------------------------------------------------
[  4] local 64.57.23.210 port 50000 connected with 128.252.160.167 port 60988   # 1.1 Mbps
[  4]  0.0- 2.0 sec    271 KBytes  1.09 Mbits/sec  0.033 ms    0/  189 (0%)
[  3] local 64.57.23.210 port 50000 connected with 128.252.160.167 port 53234   # 2 Mbps
[  3]  0.0- 2.0 sec    491 KBytes  1.99 Mbits/sec  0.085 ms    0/  342 (0%)
[  4] local 64.57.23.210 port 50000 connected with 128.252.160.167 port 39372   # 10 Mbps
[  4]  0.0- 3.3 sec   2.39 MBytes  6.07 Mbits/sec  1.290 ms    0/ 1702 (0%)
[  3] local 64.57.23.210 port 50000 connected with 128.252.160.167 port 57153   # 100 Mbps
[  3]  0.0- 9.9 sec   7.19 MBytes  6.08 Mbits/sec  0.820 ms 11963/17095 (70%)
[  3]  0.0- 9.9 sec  1 datagrams received out-of-order
</pre>
The output shows that the ''iperf'' server received packets at a maximum input rate of around 6 Mbps, even though we only allocated 1 Mbps. It is interesting to note that when the client sent at 10 Mbps, the server measured an average input rate of only about 6 Mbps yet reported no packet loss (0/1702 means 0 out of 1702 packets were lost); at 100 Mbps, by contrast, 70% of the packets were lost.
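The reported rates can be checked against the datagram counts in the transcript, assuming ''iperf'''s default 1470-byte UDP datagrams. This quick arithmetic is only a sanity check; small differences come from ''iperf'' measuring over intervals slightly longer than the nominal 2.0 and 9.9 seconds.

```python
# Sanity-check two lines of the iperf server report.

# 1.1 Mbps run: 189 datagrams of 1470 bytes in ~2.0 s
mbps_low = 189 * 1470 * 8 / 2.0 / 1e6
print(round(mbps_low, 2))        # ~1.11, close to the reported 1.09 Mbits/sec

# 100 Mbps run: 17095 datagrams sent, 11963 lost, over ~9.9 s
received = 17095 - 11963
mbps_high = received * 1470 * 8 / 9.9 / 1e6
loss_pct = 100 * 11963 / 17095
print(round(mbps_high, 2))       # ~6.1, close to the reported 6.08 Mbits/sec
print(round(loss_pct))           # 70, matching the reported (70%) loss
```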
== Monitoring Traffic ==
The only way to monitor traffic in and out of a GPE process is to have the process itself record and display packet statistics. The ''hello-gpe'' code could be modified to do that; later we show how such statistics can be combined with our slice daemon ''sliced'' and a graphical interface that plots them.
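As an illustration of the kind of modification involved, here is a small Python sketch of a per-process statistics helper. It is a hypothetical example, not part of the ''hello-gpe'' distribution: a server's receive loop would call ''record()'' after each datagram, and the counters could later be printed or fed to a plotting tool.

```python
import time

class TrafficStats:
    """Packet/byte counters a GPE server process could keep itself,
    since nothing outside the process monitors its traffic.
    (Hypothetical helper, not part of the hello-gpe code.)"""

    def __init__(self):
        self.pkts = 0
        self.bytes = 0
        self.start = time.time()

    def record(self, nbytes):
        """Call once per datagram received (or sent)."""
        self.pkts += 1
        self.bytes += nbytes

    def mbps(self):
        """Average rate since the counters were created, in Mbit/s."""
        elapsed = max(time.time() - self.start, 1e-9)
        return self.bytes * 8 / elapsed / 1e6

# In an echo loop, after data, addr = sock.recvfrom(2048):
#     stats.record(len(data))
```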
== Exercises ==
This page has shown you the basics of using the GPE. You can improve your skill at writing a slowpath software router by working through the exercises below.
; A Different Interface
: How would you change the commands shown in [[The Hello GPE World Tutorial#The Setup and Teardown Scripts|The Setup and Teardown Scripts]] to use 64.57.23.214 instead of 64.57.23.210?
; A Different SPP
: Show the sequence of commands needed to use the Kansas City SPP instead of the Salt Lake City SPP.
; Multiple Interfaces
: Suppose that you wanted ''hello-server'' to read UDP packets from 64.57.23.210 but send the response packet to the client using 64.57.23.214. Describe the changes to the client and server code, the scripts, and the command usage that would be needed to make it possible to use the two interfaces.
; Forwarding Packets (1)
: Consider the communication pattern client -> SPP/relay -> server. That is, the client sends a packet to a relay process running on a GPE, which then relays the packet to a server running on a host different from the one where the client is running. Note that the relay process acts like both the ''hello-gpe'' server and client, and it should use two different interfaces: one for the client and one for the server. The server announces the reception of each packet and then drops the packet. Describe what changes would be needed to the ''hello-gpe'' code, the scripts, and the command usage to make this possible.
; Forwarding Packets (2)
: Consider the communication pattern client <-> SPP/relay <-> SPP'/server. That is, the client sends a packet to a relay process running on a GPE, which then relays the packet to a server process running on a different GPE. As before, the relay process should use two different interfaces, one for the client and one for the server, but it should use a point-to-point interface for packets to the server. The server should announce the reception of each packet and send the packet back to the relay process, which then forwards the packet back to the client. Describe what changes would be needed to the ''hello-gpe'' code, the scripts, and the command usage to make this possible.
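To help you start on the forwarding exercises, here is a minimal Python sketch of the two-socket structure a relay process might use. It is a hypothetical illustration only (it handles a single packet and does not solve the exercises): one socket faces the client, a second socket, bound to a different local interface, faces the server, modeling the two SPP interfaces.

```python
import socket

def relay_once(listen_port, server_addr, client_iface="", server_iface=""):
    """Receive one UDP datagram from a client on one socket and forward
    it to the server from a second socket. The two bind addresses stand
    in for the relay's two SPP interfaces. Returns the datagram length."""
    from_client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    from_client.bind((client_iface, listen_port))   # client-facing interface

    to_server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    to_server.bind((server_iface, 0))               # server-facing interface

    data, client = from_client.recvfrom(2048)
    to_server.sendto(data, server_addr)             # forward to the server
    from_client.close()
    to_server.close()
    return len(data)
```

A full solution would also loop, remember the client's address so the echoed packet can be sent back, and (for Forwarding Packets 2) bind the server-facing socket to a point-to-point interface address.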