The Hello GPE World Tutorial

Introduction

XXXXX

The SPP Components

XXXXX

Pinging SPP External Interfaces

Unlike most PlanetLab nodes, an SPP has multiple external interfaces. In the GENI deployment, some of those interfaces have Internet2 IP addresses and some are attached to point-to-point links going directly to external interfaces of other SPPs. This section introduces you to some of the Internet2 interfaces.

Let's try to ping some of those Internet2 interfaces. Enter one of the following ping commands (omit the comments):

    ping -c 3 64.57.23.210         # Salt Lake City interface 0
    ping -c 3 64.57.23.214         # Salt Lake City interface 1
    ping -c 3 64.57.23.218         # Salt Lake City interface 2
    ping -c 3 64.57.23.194         # Washington DC interface 0
    ping -c 3 64.57.23.198         # Washington DC interface 1
    ping -c 3 64.57.23.202         # Washington DC interface 2
    ping -c 3 64.57.23.178         # Kansas City interface 0
    ping -c 3 64.57.23.182         # Kansas City interface 1
    ping -c 3 64.57.23.186         # Kansas City interface 2

For example, my output from the first ping command looks like this:

    > ping -c 3 64.57.23.210
    PING 64.57.23.210 (64.57.23.210) 56(84) bytes of data.
    64 bytes from 64.57.23.210: icmp_seq=1 ttl=56 time=67.5 ms
    64 bytes from 64.57.23.210: icmp_seq=2 ttl=56 time=55.9 ms
    64 bytes from 64.57.23.210: icmp_seq=3 ttl=56 time=59.0 ms

    --- 64.57.23.210 ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 2002ms
    rtt min/avg/max/mdev = 55.949/60.823/67.511/4.895 ms

Note that you may not be able to ping an SPP external interface. Some reasons why it might fail are:

  1. Your host doesn't have ping installed. This is not typical.
  2. The SPP interface is down.
  3. Your network blocks ping traffic.
  4. Your network provider doesn't route Internet2 addresses.

In the first case, you will get a command not found error message. The ping command is usually located at /bin/ping. See your system administrator if you can't find ping. In the other cases, your ping command will eventually return with a 100% packet loss message. In the last case, running the command traceroute 64.57.23.210 will give a Network unreachable indication (the last router is marked !N).

If you are unsuccessful with one interface, try to ping the interface of a different SPP.

However, you can always get around these problems (except for an SPP being down) by issuing the ping command from a PlanetLab node. We discuss how to log into a PlanetLab node in The IPv4 Metanet Tutorial.

Logging Into an SPP's GPE

Now, let's try to log into the SPP interface that you were able to ping. The example below assumes that interface was 64.57.23.210; that is, interface 0 of the Salt Lake City SPP. Note the following:

  • You must use ssh to log into an SPP.
  • When you ssh to an SPP's external interface, you will actually get logged into a GPE of the SPP.
  • Furthermore, you will be logging into your slice in a GPE.
  • Even if your network blocks your ping packets, you should be able to log into a GPE as long as there is a route to the SPP's external interface address.
  • You can 'ssh' to any of the SPP's external interfaces.

To log into a GPE at the Salt Lake City SPP, I would enter:

    ssh pl_washu_sppDemo@64.57.23.210

where my slice name is pl_washu_sppDemo. Thus, the general format is:

    ssh YOUR_SLICE@IP_ADDRESS

where YOUR_SLICE is the slice you were assigned during account registration, and IP_ADDRESS is the IP address of an SPP external interface.

During the login process, you will be asked to enter your RSA passphrase unless ssh-agent or an equivalent utility (e.g., keychain, gnome-keyring-daemon) is holding your private RSA key.

Using ssh-agent

This section is a very brief explanation of how to use ssh-agent. You can skip this section if you are already using such an agent. If you have never used such an agent, note that there are several alternatives to the procedure described below and our description is meant to be a simple cookbook procedure. See the ssh-agent and ssh-add man pages or the web for more details.

The basic idea is to run ssh-agent, a daemon process that caches private keys and listens for requests from SSH clients that need a private-key computation. Then run the ssh-add command to add your private key to the agent's cache; this is done only once after you start the agent. ssh-add will prompt for your passphrase, which is used to decrypt the private key; the decrypted key is then held in main memory by the agent.

For example,

    eval `ssh-agent`
    ssh-add
    ... Enter your passphrase when it prompts for it ...

Notice that the first line uses backquotes (which denote command substitution), NOT the normal forward quote characters.

In the first line, ssh-agent writes two commands to stdout, which are then evaluated by the eval command. These two commands set the environment variables SSH_AUTH_SOCK and SSH_AGENT_PID. Enter the command "printenv | grep SSH_A", and you will get output that looks like:

    SSH_AUTH_SOCK=/tmp/ssh-sTNf2142/agent.2142
    SSH_AGENT_PID=2143

which says that process 2143 is your ssh-agent and it is listening for requests on the Unix Domain socket /tmp/ssh-sTNf2142/agent.2142. The ssh-add command adds your private key to the list of private keys held by ssh-agent.

You can now verify that you can ssh to an SPP without entering a password or passphrase. In fact, as long as the agent is running, any subshell of the current shell can log into an SPP without a passphrase, because the SSH environment variables are passed to all children of the current shell, allowing them to communicate with the same agent.

The SPP Configuration Command scfg

After you have logged into a GPE, you can use the scfg command to:

  • Get information about the SPP
  • Configure the SPP
  • Make resource reservations

You can get help information from scfg by entering one of these forms of the command:

    scfg --help all         # show help for all commands
    scfg --help info        # show help for information commands
    scfg --help queues      # show help for queue commands
    scfg --help reserv      # show help for reservation commands
    scfg --help alloc       # show help for resource alloc/free commands

Try getting help on the information commands by entering:

    scfg --help info

Your output should look like this:

    USAGE:
    INFORMATION CMDS:
      scfg --cmd get_ifaces
            Display all interfaces
      scfg --cmd get_ifpeer --ifn N
            Display the peer of interface num N
      ... other output not shown ...

If you get a command not found message, try entering:

    /usr/local/bin/scfg --help info

If the command now runs, you need to add /usr/local/bin to your PATH environment variable. The rest of this tutorial assumes that your PATH environment variable has been set to include the directory containing the scfg command.

Getting Information About External Interfaces

SPPs have multiple external interfaces. To show the attributes of all external interfaces, enter:

    scfg --cmd get_ifaces

For example, running this command on the Salt Lake City SPP produces:

    Interface list:
      [ifn 0, type  "inet", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 64.57.23.210]
      [ifn 1, type  "inet", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 64.57.23.214]
      [ifn 2, type  "inet", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 64.57.23.218]
      [ifn 3, type  "p2p", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 10.1.1.2]
      [ifn 4, type  "p2p", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 10.1.2.2]
      [ifn 5, type  "p2p", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 10.1.7.2]
      [ifn 6, type  "p2p", linkBW 1000000Kbps, availBW 899232Kbps, ipAddr 10.1.8.2]

This output shows:

  • There are seven external interfaces, numbered 0 through 6.
  • type: There are two types of interfaces: Internet (inet) and point-to-point (p2p).
  • linkBW: The capacity of each interface is 1 Gbps (i.e., 1000000 Kbps).
  • availBW: The available bandwidth of each interface is 899.232 Mbps (i.e., 899232 Kbps); that is, that portion of the capacity that hasn't already been allocated.
  • ipAddr: The IP address of each interface.

Getting Information About Peers

The type inet interfaces are physically connected to the Internet. The type p2p interfaces are physically connected to other SPPs through point-to-point links. That's why you can only ping interfaces with type inet.

You can use the get_ifpeer command to show the IP address of the interface at the other end of a point-to-point link. For example, I would enter:

    scfg --cmd get_ifpeer --ifn 3

to find out the IP address of interface 3's peer. These seven commands will show the peer IP addresses of interfaces 0-6:

    scfg --cmd get_ifpeer --ifn 0
    scfg --cmd get_ifpeer --ifn 1
    scfg --cmd get_ifpeer --ifn 2
    scfg --cmd get_ifpeer --ifn 3
    scfg --cmd get_ifpeer --ifn 4
    scfg --cmd get_ifpeer --ifn 5
    scfg --cmd get_ifpeer --ifn 6

Running these commands on the Salt Lake City SPP produces this output:

    SPP Peer IP address: 0.0.0.0
    SPP Peer IP address: 0.0.0.0
    SPP Peer IP address: 0.0.0.0
    SPP Peer IP address: 10.1.1.1
    SPP Peer IP address: 10.1.2.1
    SPP Peer IP address: 10.1.7.1
    SPP Peer IP address: 10.1.8.1

Notice that the p2p interfaces are the only ones with a peer IP address other than 0.0.0.0. Furthermore, these peer addresses have the same 10.1.x.y form as the SPP's own p2p interface addresses.

Constructing an SPP Interconnection Map

We can now build a complete interconnection map of the SPPs by combining the output of the get_ifaces and get_ifpeer commands from all SPPs. This output is shown at the bottom of The GENI SPP Configuration page. The interconnection tables shown near the top of that page were constructed from this output.

The Salt Lake City table is:

Interface  Type  IP Address    Peer Address
0          inet  64.57.23.210  0.0.0.0
1          inet  64.57.23.214  0.0.0.0
2          inet  64.57.23.218  0.0.0.0
3          p2p   10.1.1.2      10.1.1.1 (KC ifn 3)
4          p2p   10.1.2.2      10.1.2.1 (KC ifn 4)
5          p2p   10.1.7.2      10.1.7.1 (DC ifn 5)
6          p2p   10.1.8.2      10.1.8.1 (DC ifn 6)

For example, the peer IP address of interface 3 is 10.1.1.1, which is the IP address of Kansas City's interface 3. You can verify the labeling of the peer IP addresses for interfaces 4-6 by looking at the output at the bottom of The GENI SPP Configuration page. Below is a diagram of the SPP interconnection map:

scfg has other information commands and also commands for allocating/freeing SPP resources and managing queues. The example below will describe some of these commands. The page SPP Command Interface summarizes all of the commands.

Hello GPE World

The first program we will run on a GPE is a variant of the UDP echo server. You will run the client on your Linux host; it sends a UDP packet containing the 6-byte C string "hello" (including the terminating NUL byte) to the server. The server listens on port 50000 for an incoming UDP packet. When it receives a packet, it displays the contents of the read buffer in both ASCII and hexadecimal formats and sends the "hello" string back to the client.
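
To make the behavior concrete, here is a minimal sketch of a UDP server along the lines described above. It is NOT the hello-server source shipped in the tar file; only the port number (50000) and the echoed "hello" string come from the description, and everything else (buffer size, output format, handling a single packet) is illustrative.

    /* Sketch of a one-shot UDP echo server (illustrative, not the tutorial's source). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    #define SERVER_PORT 50000
    #define BUF_SIZE    1500

    int main(void) {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) { perror("socket"); exit(1); }

        /* Bind to UDP port 50000 on all local interfaces. */
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(SERVER_PORT);
        if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind"); exit(1);
        }

        char buf[BUF_SIZE];
        struct sockaddr_in client;
        socklen_t clen = sizeof(client);

        /* Wait for one UDP packet from the client. */
        ssize_t n = recvfrom(sock, buf, sizeof(buf) - 1, 0,
                             (struct sockaddr *)&client, &clen);
        if (n < 0) { perror("recvfrom"); exit(1); }
        buf[n] = '\0';

        /* Display the read buffer in ASCII and in hexadecimal. */
        printf("received %zd bytes: \"%s\"\n", n, buf);
        for (ssize_t i = 0; i < n; i++)
            printf("%02x ", (unsigned char)buf[i]);
        printf("\n");

        /* Echo the 6-byte "hello" string (including the NUL) back to the client. */
        const char reply[] = "hello";
        if (sendto(sock, reply, sizeof(reply), 0,
                   (struct sockaddr *)&client, clen) < 0) {
            perror("sendto"); exit(1);
        }
        close(sock);
        return 0;
    }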

Going through this example should demonstrate that using a GPE is just like using any other general-purpose host, except that you need to set up and tear down the SPP. The setup and teardown procedure is straightforward.

Here are the steps involved in this example:

  • Create the client and server executables.
  • Copy the server executable and scripts to a GPE.
  • Setup the SPP.
  • Run the server and the client.
  • Teardown the SPP.

Create the Client and Server Executables

>>>>> How to get the tar file ??? <<<<<

    host> cd SOME_DIRECTORY
    host> tar xf ~/Download/hello-gpe.tar    # extracts into hello-gpe/ directory
    host> cat README                         # read about the example
    host> make                               # make the two executables
    ... Follow the instructions for doing a test using localhost ...

Copy the Server Executable and Scripts to a GPE

    host> make spp-hello.tar
    host> scp spp-hello.tar YOUR_SLICE@SPP_ADDRESS:
    host> ssh YOUR_SLICE@SPP_ADDRESS
    GPE>  tar xf spp-hello.tar

For example,

    host> make spp-hello.tar
    host> scp spp-hello.tar pl_washu_sppDemo@64.57.23.210:
    host> ssh pl_washu_sppDemo@64.57.23.210
    GPE>  tar xf spp-hello.tar


Setup the SPP

    GPE>  cd hello-gpe
    GPE>  scripts/setup-hello-gpe.sh
    XXX what you should see XXX

Run the Server and the Client

    GPE>  hello-server
    XXX what you should see XXX
    host> hello-client
    XXX what you should see XXX
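
For reference, here is a minimal sketch of what a UDP client along these lines might look like. It is NOT the hello-client source from the tar file; in particular, taking the server's IP address as a command-line argument is an assumption made for this sketch, not the actual hello-client usage.

    /* Sketch of a matching UDP client, run on your Linux host (illustrative only). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    #define SERVER_PORT 50000

    int main(int argc, char *argv[]) {
        if (argc != 2) {
            fprintf(stderr, "usage: %s SERVER_IP_ADDRESS\n", argv[0]);
            exit(1);
        }

        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) { perror("socket"); exit(1); }

        /* Build the server address from the command-line argument. */
        struct sockaddr_in server;
        memset(&server, 0, sizeof(server));
        server.sin_family = AF_INET;
        server.sin_port = htons(SERVER_PORT);
        if (inet_pton(AF_INET, argv[1], &server.sin_addr) != 1) {
            fprintf(stderr, "bad IP address: %s\n", argv[1]);
            exit(1);
        }

        /* Send the 6-byte "hello" string (including the NUL) to the server. */
        const char msg[] = "hello";
        if (sendto(sock, msg, sizeof(msg), 0,
                   (struct sockaddr *)&server, sizeof(server)) < 0) {
            perror("sendto"); exit(1);
        }

        /* Wait for the echoed reply and print it. */
        char buf[1500];
        ssize_t n = recvfrom(sock, buf, sizeof(buf) - 1, 0, NULL, NULL);
        if (n < 0) { perror("recvfrom"); exit(1); }
        buf[n] = '\0';
        printf("received %zd bytes: \"%s\"\n", n, buf);

        close(sock);
        return 0;
    }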

Making a Reservation

Allocating Resources

Teardown the SPP

    GPE>  scripts/teardown-hello-gpe.sh
    XXX what you should see XXX

Freeing Resources

>>>>> HERE <<<<<