Running The GPE Forest Demo

From ARL Wiki

Revision as of 22:49, 19 November 2009


CAVEAT: THIS PAGE IS IN ROUGH FORM.

This page first explains how to run the 3R GPE Forest Demo, which uses three SPP GPEs as routers, three ordinary PlanetLab hosts as multicast traffic generators, and six PlanetLab hosts as multicast subscribers. It does not use NPEs. It is a canned demo in the sense that it uses specific SPPs and PlanetLab hosts. Next, it explains how to run the 2- and 1-router configurations (known as 2R and 1R, respectively). Finally, it explains how to choose different PlanetLab hosts as the multicast traffic generators and subscribers.

Overview

The components:

  • Three SPP GPEs for running the router software wuRouter
  • Nine PlanetLab hosts for running the traffic generator wuHost
  • A controlling host for orchestrating the Demo

There are three sets of windows:

  • A DEMO window for controlling the experiment/demo
  • Three router windows (DC, SLC, KC)
  • (Optional) Nine host windows
  • A traffic monitoring window

To run the Demo:

  • Get the GPE Forest tar file gpe-forest.tar
  • At each of the three SPP GPEs:
    • Extract the files from the gpe-forest.tar
    • Run the config-forest-router.sh script
  • At each of the nine PlanetLab hosts:
    • Extract the files from the gpe-forest.tar
    • Run the config-forest-host.sh script
  • At the host where you plan to control the experiment (Demo Central):
    • Extract the files from the gpe-forest.tar
    • Run the config-forest-main.sh script
  • At each of the three GPEs:
    • Configure the SPP using a setup script
  • At the Demo Central host:
    • Change directory to ~/wunetRuns
    • Find suitable PlanetLab hosts
    • Execute the run script

When done:

  • At each of the three GPEs:
    • Run the teardown script for that router

Preliminaries

Forest Name   Node Name                       IP Address
H11           vn4.cse.wustl.edu               128.252.19.19
H12           planetlab-04.cs.princeton.edu   128.112.139.27
H13           planetlab-05.cs.princeton.edu   128.112.139.28
H21           vn5.cse.wustl.edu               128.252.19.18
H22           planetlab5.flux.utah.edu        155.98.35.6
H23           planetlab2.unl.edu              129.93.229.139
H31           osiris.planetlab.cs.umd.edu     128.8.126.78
H32           planetlab7.flux.utah.edu        155.98.35.8
H33           planetlab6.flux.utah.edu        155.98.35.7

You must do the following before you try to run the entire Demo:

  • Have your PI assign you to an SPP slice and a PlanetLab slice.
  • Add the three SPPs DC, SLC and KC to your SPP slice and the nine hosts shown above to your PlanetLab slice.
  • Install version 1.6 of the Java Run-time Environment (JRE) on the host where you plan to install and run the Forest router software.

The JRE is only required if you plan to run the graphical interface for displaying traffic statistics.
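
As a quick sanity check, you can confirm that the java on your PATH reports version 1.6 before starting the GUI. The helper below is a hypothetical sketch (not part of the distributed software) that inspects the first line of "java -version" output:

```shell
# Hypothetical check: does a "java -version" banner line report a 1.6 JRE?
is_jre_16() {
    case $1 in
        *'version "1.6'*) return 0 ;;   # e.g. java version "1.6.0_20"
        *) return 1 ;;
    esac
}

# Typical use (assumes java is on the PATH):
#   banner=$(java -version 2>&1 | head -n 1)
#   is_jre_16 "$banner" || echo "need JRE 1.6 for the statistics GUI"
```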

Get the GPE Forest Software

XXXXX

Install the GPE Forest Software

The GPE Forest software must be installed on:

  • The GPEs of the three SPPs (SLC, KC, DC)
  • The nine PlanetLab hosts shown above
  • The host that will run the Demo

Here are the steps involved in installing the software:

  • Log into the host where you plan to run the Demo, and install the gpe-forest.tar tar file using the bash shell:
    demo>  bash                           # run the bash shell
    demo>  XXX gpe-forest.tar             # get tar file
    demo>  tar xvf gpe-forest.tar         # extract files
  • Read the README file ~/gpe-forest/README.
  • Install the software on the Demo host:
    demo>  cd gpe-forest/install
    demo>  ./install-gpe-forest-demo-host.sh    # install on the demo host
  • Install the software on the three GPEs:
    demo>  ./install-gpe-forest-routers.sh      # install on the routers (GPEs)
  • Install the software on the nine PlanetLab hosts:
    demo>  ./install-gpe-forest-hosts.sh        # install on the PlanetLab hosts

These three scripts install the software and perform some simple checks. The README file explains what each of these scripts does. If an error occurs, an error message preceded by "+++++ ERROR:" is displayed on stdout.
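
Because failures are flagged with the "+++++ ERROR:" prefix, a saved install log can be checked mechanically. A minimal sketch, assuming you redirected a script's output to a log file (the filename here is made up):

```shell
# Minimal check: did any "+++++ ERROR:" line appear in a saved log?
has_install_error() {
    grep -q '^+++++ ERROR:' "$1"
}

# Example (install.log is an assumed filename):
#   ./install-gpe-forest-demo-host.sh > install.log 2>&1
#   has_install_error install.log && echo "install failed; see install.log"
```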

If there are no errors,

  • XXXXX

Configure the SPPs

We set up the three SPPs by running the setup-all-SPPs.sh wrapper, which invokes the setup-spp.sh script for each SPP:

    demo>  cd ~/gpe-forest/scripts
    demo>  ./setup-all-SPPs.sh > setup-all-SPPs.out

which does the following on each GPE:

  • Make a reservation for an SPP that specifies the aggregate bandwidth needed by the data and monitoring traffic.
  • Acquire the GPE resources specified in the reservation.
  • Configure the Line Card to support two endpoints (EPs):
    • The data EP, which accepts 10 Mbps (peak) TCP traffic at port 30123.
    • The monitoring EP, which accepts 1 Mbps (peak) UDP traffic at port 3551.

You can also do this by logging into each of the SPPs and executing a setup script.
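
Driving the per-router setup from the Demo host amounts to repeating the same ssh-and-run sequence for each SPP. The sketch below only illustrates that loop; the command built here is taken from the per-router window instructions later on this page, and the actual setup-all-SPPs.sh may differ:

```shell
# Illustrative sketch: build the command run on each router (slc, kc, dc),
# matching the per-router window instructions on this page.
setup_cmd() {
    printf 'cd gpe-forest/demo && ./setup-%s.sh | tee setup-%s.out\n' "$1" "$1"
}

# e.g. for the SLC router (login variable as used elsewhere on this page):
#   ssh "$SLClogin" "$(setup_cmd slc)"
for r in slc kc dc; do
    setup_cmd "$r"
done
```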

Run the Demo

Host Windows (Optional)

This step is optional, but it makes it easy to see when the traffic generators (wuHost) are running. If you are running an xterm, the following command opens nine small windows labeled H11, H12, ... , H32, H33 near the northwest corner of the screen (one window for each PlanetLab host):

    xterm>  source ~/gec6-3R-xterms

Now run the =probeH script on each of the hosts:

    xterm>  =probeH

The =probeH script greps the output of "ps clax" for the wuHost process every five seconds. In the example below, the two successive "- -" lines indicate that wuHost was not running for the first 10 seconds; it then ran for about 10 seconds before terminating.

    - -
    - -
    0   505 21993 21988  25  0 344524 29432 -  R ?  0:03 wuHost
    - -
    0   505 21993 21988  25  0 344524 29432 -  R ?  0:08 wuHost
    - -
    - -
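
The probe behaviour can be sketched as a small shell loop. This is an assumed reimplementation for illustration, not the actual =probeH script:

```shell
# Assumed sketch of =probeH: report whether wuHost appears in "ps clax"
# output, printing "- -" when it does not.
probe_once() {
    # The [w] in the pattern stops grep from matching its own command line.
    line=$(ps clax 2>/dev/null | grep -w '[w]uHost' || true)
    if [ -n "$line" ]; then
        echo "$line"
    else
        echo "- -"
    fi
}

# The real script repeats this every five seconds:
#   while true; do probe_once; sleep 5; done
```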

Demo Central Window

Initialize the stats files at the routers:

    demo>  cd ~/gpe-forest
    demo>  ./clean.sh

The script ensures that the files ~/gpe-forest/${WUNET}/3R/r[123]/stats at each of the three routers are empty (0 bytes).
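
Truncating those stats files amounts to the following sketch, assuming the directory layout given above (the actual clean.sh may do more):

```shell
# Sketch: make each router's stats file empty (0 bytes), creating it if absent.
clean_stats() {
    base=$1                      # e.g. ~/gpe-forest/${WUNET}/3R
    for r in r1 r2 r3; do
        : > "$base/$r/stats"     # truncate the file to zero length
    done
}

# Usage: clean_stats ~/gpe-forest/${WUNET}/3R
```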

Traffic Monitor Window

Start the traffic monitor:

    xterm>  java -jar SPPmon.jar

The figure (below) shows the four monitor windows: the main RLI window and three traffic charts (one for each router).

[Figure Gec6-forest-traffic0.png: Forest Traffic Windows]

Demo Central Window

Now, run the wuRouter process at each of the three GPEs and the wuHost process at each of the nine PlanetLab hosts.

    demo>  cd ~/gpe-forest/wunet5s/3R
    demo>  ./run-forest.sh

The page Inside The run-forest.sh Script describes the script. In short, the script starts the three routers together; it then starts the three traffic sources (H11, H21, H31) together; finally, it starts the remaining hosts (the multicast subscribers), staggered by 5 seconds, beginning with H12. The subscribers join multicast flows with 5-second delays between new subscriptions and then unsubscribe following a similar pattern. Because this pattern runs for vnet 1 flows and is then repeated for vnet 2 flows, traffic to the subscribers appears as two staircase patterns.
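
The staggered start implies a simple schedule for the six subscribers. The helper below is purely illustrative; the order after H12 is an assumption, since the text only says the stagger begins with H12:

```shell
# Illustrative schedule: subscribers start 5 seconds apart, beginning
# with H12 (order of the remaining hosts assumed).
subscriber_schedule() {
    t=0
    for h in H12 H13 H22 H23 H32 H33; do
        echo "$h starts at t=${t}s"
        t=$((t + 5))
    done
}
subscriber_schedule
```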

The traffic charts (above) indicate:

  • The traffic sources (H11, H21, H31) are generating 100 pkts/sec (black line).
  • Traffic to the multicast subscribers directly-connected to router Rx (x=1,2,3) (i.e., Hx2, Hx3) (blue line):
    • XXXXX
    • XXXXX staircase up and down
  • At R1, traffic to routers R2 and R3 (red and pink lines) XXXXX
  • At R2, traffic to routers R1 and R3 (red and pink lines) XXXXX
  • At R3, traffic to routers R1 and R2 (red and pink lines) XXXXX

In the basic run script, no further intervention is necessary because wuRouter terminates after XXX seconds and wuHost terminates after XXX seconds. The whole Demo runs in about XXX seconds.

Tear Down The Demo

Now, the SPP resources must be freed and their reservations canceled. This is done by executing a teardown script in a similar manner to the setup script:

    demo>  teardown-all-SPPs.sh > teardown-all-SPPs.out

which undoes the setup on each GPE:

  • Free the two endpoints (EPs).
  • Free the GPE resources specified in the reservation.
  • Cancel the reservation for an SPP.

This script goes to each of the SPPs and runs a teardown script.

Demo Problems

Some of the more common problems are listed below with an explanation of how to check for and resolve the problem.

  • XXXXX
    XXXXX
  • XXXXX
    XXXXX

Running The 2R And 1R Configurations

Using Different PlanetLab Hosts

Extra Stuff

SLC Router Window

  • Place the window in the southwest corner of the screen
  • Title the window SLC
  • Ssh to the SLC router and set up the router
    xterm>  ssh -v $SLClogin
    SLCgpe> cd gpe-forest/demo
    SLCgpe> setup-slc.sh | tee setup-slc.out

Here is an example of the setup-slc.out file:

    XXX

The page Inside The setup-spp.sh Script describes the setup script.

Now, repeat the same procedure for the other two routers KC and DC.

KC Router Window

  • Place the window in the south side of the screen
  • Title the window KC
  • Ssh to the KC router and set up the router
    xterm>  ssh -v $KClogin
    KCgpe>  cd gpe-forest/demo
    KCgpe>  setup-kc.sh | tee setup-kc.out

DC Router Window

  • Place the window in the southeast corner of the screen
  • Title the window DC
  • Ssh to the DC router and set up the router
    xterm>  ssh -v $DClogin
    DCgpe>  cd gpe-forest/demo
    DCgpe>  setup-dc.sh | tee setup-dc.out