Running The GPE Forest Demo

From ARL Wiki

Latest revision as of 22:00, 20 November 2009


CAVEAT: THIS PAGE IS IN ROUGH FORM.

This page first explains how to run the 3R GPE Forest Demo, which uses three SPP GPEs as routers, three ordinary PlanetLab hosts as multicast traffic generators, and six PlanetLab hosts as multicast subscribers. It does not use NPEs. It is a canned demo in the sense that it uses specific SPPs and PlanetLab hosts. The page then explains how to run the 2-router and 1-router configurations (known as 2R and 1R, respectively). Finally, it explains how to choose different PlanetLab hosts as the multicast traffic generators and multicast subscribers.

Overview

The components:

  • Three SPP GPEs for running the router software wuRouter
  • Nine PlanetLab hosts for running the traffic generator wuHost
  • A controlling host for orchestrating the Demo

There are three sets of windows:

  • A DEMO window for controlling the experiment/demo
  • Three router windows (DC, SLC, KC)
  • (Optional) Nine host windows
  • A traffic monitoring window

To run the Demo:

  • Get the GPE Forest tar file gpe-forest.tar
  • At each of the three SPP GPEs:
    • Extract the files from the gpe-forest.tar
    • Run the config-forest-router.sh script
  • At each of the nine PlanetLab hosts:
    • Extract the files from the gpe-forest.tar
    • Run the config-forest-host.sh script
  • At the host where you plan to control the experiment (Demo Central):
    • Extract the files from the gpe-forest.tar
    • Run the config-forest-main.sh script
  • At each of the three GPEs:
    • Configure the SPP using a setup script
  • At the Demo Central host:
    • Change directory to ~/wunetRuns
    • Find suitable PlanetLab hosts
    • Execute the run script

When done:

  • At each of the three GPEs:
    • Run the teardown script for that router

Preliminaries

Forest Name   Node Name                       IP Address
H11           vn4.cse.wustl.edu               128.252.19.19
H12           planetlab-04.cs.princeton.edu   128.112.139.27
H13           planetlab-05.cs.princeton.edu   128.112.139.28
H21           vn5.cse.wustl.edu               128.252.19.18
H22           planetlab5.flux.utah.edu        155.98.35.6
H23           planetlab2.unl.edu              129.93.229.139
H31           osiris.planetlab.cs.umd.edu     128.8.126.78
H32           planetlab7.flux.utah.edu        155.98.35.8
H33           planetlab6.flux.utah.edu        155.98.35.7
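As a convenience on the Demo Central host, the table above can be captured as shell variables (the variable names mirror the Forest names; this sketch is not part of the distributed scripts):

```shell
# Forest-name to PlanetLab-hostname mapping, taken from the table above.
H11=vn4.cse.wustl.edu
H12=planetlab-04.cs.princeton.edu
H13=planetlab-05.cs.princeton.edu
H21=vn5.cse.wustl.edu
H22=planetlab5.flux.utah.edu
H23=planetlab2.unl.edu
H31=osiris.planetlab.cs.umd.edu
H32=planetlab7.flux.utah.edu
H33=planetlab6.flux.utah.edu

# List all nine hosts, e.g. for a quick reachability check:
for h in $H11 $H12 $H13 $H21 $H22 $H23 $H31 $H32 $H33; do
    echo "$h"
    # ping -c 1 -W 2 "$h" >/dev/null || echo "  unreachable"   # optional check
done
```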

You must do the following before you try to run the entire Demo:

  • Have your PI assign you to an SPP slice and a PlanetLab slice.
  • Add the three SPPs DC, SLC and KC to your SPP slice and the nine hosts (shown above) to your PlanetLab slice.
  • Install version 1.6 of the Java Run-time Environment (JRE) on the host where you plan to install and run the Forest router software.

The JRE is only required if you plan to run the graphical interface for displaying traffic statistics.

XXXXX


Install the GPE Forest Software

The GPE Forest software must be installed on:

  • The GPEs of the three SPPs (SLC, KC, DC)
  • The nine PlanetLab hosts shown above
  • The host that will run the Demo

Here are the steps involved in installing the software:

  • Log into the host where you plan to run the Demo, and install the gpe-forest.tar tar file using the bash shell:
    demo>  bash                           # run the bash shell
    demo>  XXX gpe-forest.tar             # get tar file
    demo>  tar xvf gpe-forest.tar         # extract files
  • Read the README file ~/gpe-forest/README.
  • Install the software on the Demo host:
    demo>  cd gpe-forest/install
    demo>  ./install-gpe-forest-demo-host.sh    # install on the demo host
  • Install the software on the three GPEs:
    demo>  ./install-gpe-forest-routers.sh      # install on the routers (GPEs)
  • Install the software on the nine PlanetLab hosts:
    demo>  ./install-gpe-forest-hosts.sh        # install on the PlanetLab hosts

These three scripts install the software and do some simple checks. The README file explains what each of these scripts does. If an error occurs, an error message preceded by "+++++ ERROR:" will be displayed on stdout.
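If you redirect a script's output to a file, that marker is easy to catch afterwards. A minimal sketch (the function name and log paths are illustrative, not part of the distribution):

```shell
# Return nonzero if a saved install log contains the "+++++ ERROR:" marker.
check_install_log() {
    if grep -qF '+++++ ERROR:' "$1"; then
        echo "errors found in $1:"
        grep -F '+++++ ERROR:' "$1"    # show the offending lines
        return 1
    fi
    echo "$1: no errors"
}
```

For example: `./install-gpe-forest-hosts.sh > hosts.out; check_install_log hosts.out`.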

If there are no errors,

  • XXXXX

Configure the SPPs

We set up the three SPPs by running the setup-all-SPPs.sh script:

    demo>  cd ~/gpe-forest/scripts
    demo>  ./setup-all-SPPs.sh > setup-all-SPPs.out

which does the following on each GPE:

  • Make a reservation for an SPP that specifies the aggregate bandwidth needed by the data and monitoring traffic.
  • Acquire the GPE resources specified in the reservation.
  • Configure the Line Card to support two endpoints (EPs):
    • The data EP which accepts 10 Mbps (peak) TCP traffic at port 30123.
    • The monitoring EP which accepts 1 Mbps (peak) UDP traffic at port 3551.

You can also do this by logging into each of the SPPs and executing a setup script.

Run the Demo

Host Windows (Optional)

This step is optional, but it makes it easy to see when the wuHost traffic generators are running. If you are running an xterm, the following command will open nine small windows labeled H11, H12, ..., H32, H33 near the northwest corner of the screen (one window for each PlanetLab host):

    xterm>  source ~/gec6-3R-xterms

Now run the =probeH script on each of the hosts:

    xterm>  =probeH

The =probeH script greps the output of "ps clax" for the wuHost process every five seconds. In the example below, the two successive lines of "- -" at the start indicate that wuHost was not running for the first 10 seconds; it then ran for about 10 seconds before terminating.

    - -
    - -
    0   505 21993 21988  25  0 344524 29432 -  R ?  0:03 wuHost
    - -
    0   505 21993 21988  25  0 344524 29432 -  R ?  0:08 wuHost
    - -
    - -
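A minimal version of such a probe loop might look like the following sketch. The sample-count parameter is an addition so the loop terminates; the real =probeH script polls indefinitely:

```shell
# Poll "ps clax" for wuHost, printing its process line when present
# and "- -" when it is not running (one sample every 5 seconds).
probe_wuhost() {
    n=${1:-1}    # number of samples to take (addition; =probeH loops forever)
    i=0
    while [ "$i" -lt "$n" ]; do
        # the [w] trick keeps grep from matching its own process entry
        line=$(ps clax 2>/dev/null | grep '[w]uHost' || true)
        if [ -n "$line" ]; then echo "$line"; else echo "- -"; fi
        i=$((i + 1))
        if [ "$i" -lt "$n" ]; then sleep 5; fi
    done
}
```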

Demo Central Window

Initialize the stats files at the routers:

    demo>  cd ~/gpe-forest
    demo>  ./clean.sh

The script ensures that the files ~/gpe-forest/${WUNET}/3R/r[123]/stats at each of the three routers are empty (0 bytes).
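Locally, the effect on one run directory can be sketched as follows (the real clean.sh does this over ssh on each router; the function and variable names here are illustrative):

```shell
# Create-or-truncate the per-router stats files to 0 bytes.
clean_stats() {
    base=$1                      # e.g. the local 3R run directory
    for r in r1 r2 r3; do
        : > "$base/$r/stats"     # ":" is a no-op; ">" truncates or creates
    done
}
```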

Traffic Monitor Window

Start the traffic monitor:

    xterm>  java -jar SPPmon.jar

The figure (below) shows the four monitor windows: the main RLI window and three traffic charts (one for each router).

[Image: gec6-forest-traffic0.png (Forest Traffic Windows)]

Demo Central Window

Now, run the wuRouter process at each of the three GPEs and the wuHost process at each of the nine PlanetLab hosts.

    demo>  cd ~/gpe-forest/wunet5s/3R
    demo>  ./run-forest.sh

The page Inside The run-forest.sh Script describes the script. In short, the script starts the routers together; it then starts the three traffic sources (H11, H21, H31) together; finally, it starts the remaining hosts (the multicast subscribers) staggered by 5 seconds, beginning with H12. The subscribers subscribe to multicast flows with 5-second delays between new subscriptions, and then unsubscribe in a similar pattern. Because this pattern is run for vnet 1 flows and then repeated for vnet 2 flows, traffic to the subscribers appears as two staircase traffic patterns.
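The start order described above can be sketched as follows. The start_on helper and the stagger parameter are illustrative stand-ins for the script's actual remote-start mechanism:

```shell
# Sketch of the start order used by run-forest.sh:
# routers together, sources together, subscribers staggered.
start_on() { echo "start $2 on $1"; }    # placeholder for the real remote start

run_forest_sketch() {
    stagger=${1:-5}                      # seconds between subscriber starts
    for r in R1 R2 R3; do start_on "$r" wuRouter; done     # routers together
    for h in H11 H21 H31; do start_on "$h" wuHost; done    # sources together
    for h in H12 H13 H22 H23 H32 H33; do                   # subscribers staggered
        start_on "$h" wuHost
        sleep "$stagger"
    done
}
```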

[Image: gec6-forest-traffic1.png (Forest Traffic)]

The traffic charts (above) indicate:

  • The traffic sources (H11, H21, H31) are generating 100 pkts/sec (black line).
  • Traffic to the multicast subscribers directly-connected to router Rx (x=1,2,3) (i.e., Hx2, Hx3) (blue line):
    • XXXXX
    • XXXXX staircase up and down
  • At R1, traffic to routers R2 and R3 (red and pink lines) XXXXX
  • At R2, traffic to routers R1 and R3 (red and pink lines) XXXXX
  • At R3, traffic to routers R1 and R2 (red and pink lines) XXXXX

In the basic run script, no further intervention is necessary because wuRouter terminates after XXX seconds and wuHost terminates after XXX seconds. The whole Demo runs in about XXX seconds.

Tear Down The Demo

Now, the SPP resources must be freed and their reservations canceled. This is done by executing a teardown script in a similar manner to the setup script:

    demo>  teardown-all-SPPs.sh > teardown-all-SPPs.out

which undoes the setup on each GPE:

  • Free the two endpoints (EPs).
  • Free the GPE resources specified in the reservation.
  • Cancel the reservation for an SPP.

This script goes to each of the SPPs and runs a teardown script.

Demo Problems

Some of the more common problems are listed below with an explanation of how to check for and resolve the problem.

  • XXXXX
    XXXXX
  • XXXXX
    XXXXX

Other Configurations

Running wuRouter and wuHost Separately

  • ex2/
  • run wuRouter forever (finTime = 0)
  • run wuHost on demand

Running The 2R And 1R Configurations

  • XXXXX

Using Different PlanetLab Hosts

  • XXXXX

Extra Stuff

SLC Router Window

  • Place the window in the southwest corner of the screen
  • Title the window SLC
  • SSH to the SLC router and set up the router
    xterm>  ssh -v $SLClogin
    SLCgpe> cd gpe-forest/demo
    SLCgpe> setup-slc.sh | tee setup-slc.out

Here is an example of the setup-slc.out file:

    XXX

The page Inside The setup-spp.sh Script describes the setup script.

Now, repeat the same procedure for the other two routers KC and DC.

KC Router Window

  • Place the window on the south side of the screen
  • Title the window KC
  • SSH to the KC router and set up the router
    xterm>  ssh -v $KClogin
    KCgpe>  cd gpe-forest/demo
    KCgpe>  setup-kc.sh | tee setup-kc.out

DC Router Window

  • Place the window in the southeast corner of the screen
  • Title the window DC
  • SSH to the DC router and set up the router
    xterm>  ssh -v $DClogin
    DCgpe>  cd gpe-forest/demo
    DCgpe>  setup-dc.sh | tee setup-dc.out
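The three router windows above run the same three commands with only the router name changed, so the sequence can be generated. The $SLClogin-style variables are the ones used on this page; the function itself is only a sketch:

```shell
# Print the three commands run in each router window; r is slc, kc, or dc.
router_setup_cmds() {
    r=$1
    R=$(printf '%s' "$r" | tr '[:lower:]' '[:upper:]')
    echo "ssh -v \$${R}login"
    echo "cd gpe-forest/demo"
    echo "setup-$r.sh | tee setup-$r.out"
}
```

For example, `for r in slc kc dc; do router_setup_cmds $r; done` prints the full sequence for all three windows.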