SPP Control Software

From ARL Wiki

Revision as of 21:47, 12 November 2009


The major control software components are shown in Figure 10. The SPP-PLC is a separate system that runs the PlanetLab Central software, providing an interface through which users can request new slices and instantiate those slices on one or more SPPs. The System Resource Manager is the top-level controller and coordinates the use of various resources by the different components of the architecture. The Resource Manager Proxy provides an interface through which user slices can request and configure resources. The Substrate Control Daemons (SCD) in the Line Card and NPE provide an interface through which the datapath software running in the network processors is configured. The SPP Login Manager (SLM) provides a mechanism that enables users to log in to the vServers for their individual slices, so they can install code, request and configure resources, and run experiments. More details of the various components are provided below.


Figure 10. Major Control Software Modules


System Resource Manager (SRM)

The SRM is the top-level controller for the SPP and provides several services. These include acquiring slice definitions from SPP-PLC, instantiating those definitions, reserving and assigning resources to slices, and coordinating the initialization of the whole system. The SRM implements the functions provided by the Node Manager on a conventional PlanetLab node, but must provide this functionality in the context of a system with a more complex internal structure and a richer set of resources.

The SRM polls SPP-PLC periodically to obtain new slice definitions. When a new slice is detected, the SRM selects one of the two GPEs on which to instantiate the slice. Slice instantiation involves creating a vServer on the selected GPE, initializing it, and configuring a login so that users can access their assigned vServer.

Once assigned to a vServer, a user can run programs that send and receive packets on the external interfaces. Outgoing connections are subjected to port number translation at the Line Cards, as described in Section 4. Users may also request the use of specific external port numbers in order to run servers that listen on specific ports. User requests are made through an interface provided by the RMP on the user’s assigned GPE. The RMP forwards these requests to the SRM which manages all system level resources, including external port numbers, physical interface bandwidth and NPE resources.
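The placement step above can be sketched as follows. This is a minimal illustration of the SRM's slice-instantiation loop, not the real implementation; the helper names and the "place on the less-loaded GPE" policy are assumptions for the sake of the example.

```python
def pick_gpe(gpe_load):
    """Place a new slice on the less-loaded of the two GPEs (assumed policy)."""
    return min(gpe_load, key=gpe_load.get)

def instantiate_slices(new_slices, gpe_load):
    """Assign each newly detected slice definition to a GPE and record
    where its vServer would be created and its login configured."""
    placement = {}
    for slice_name in new_slices:
        gpe = pick_gpe(gpe_load)
        gpe_load[gpe] += 1           # one more vServer on that GPE
        placement[slice_name] = gpe
    return placement

load = {"gpe0": 0, "gpe1": 0}
where = instantiate_slices(["ark", "beta", "gamma"], load)
```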

Resource Manager Proxy (RMP)

The RMP provides an API used by applications running in vServers. The API allows users to reserve resources in advance (such as external port bandwidth and NPE fastpaths), to acquire those resources when a reservation period starts and configure the resources as needed. The RMP is implemented as a daemon that runs in the root context and is accessed through a set of library routines. A command line interface is also provided so that users can reserve and configure resources interactively, or through a shell script. The command line interface converts the given commands to API calls.

The main API calls are listed below in topical sub-sections, along with a brief description of how each call is used. We use a representation that attempts to informally describe the interface semantics. More precise descriptions are given in the reference manual. We use an abstract interface syntax that has the form “R ← F(A1,…,An)” where F is the function name, Ai is the i-th argument, and R is the return value. Mnemonic names are used to convey usage while data type modifiers have been omitted. The following abbreviations and mnemonics are used in argument names and descriptions:

FP FastPath
EP EndPoint -- a logical interface used by a slice and mapped to a physical interface
LC LineCard
BW BandWidth
DB DataBase
Xdescr X description where X is Q, EP or FP for Queue, EndPoint, or FastPath
Xid X identifier where X is F, FP, MI, Q or S for Filter, FastPath, MetaInterface, Queue, or Slice

Interfaces

    ifList ← get_ifaces(ifList)
      Return a list of all physical interfaces of the SPP. Slices configure MIs using the information from this list. The returned list indicates for each physical interface the attributes of the interface; i.e., interface number, the interface type (Internet or peering), the IP address, the total bandwidth and the available bandwidth.

    ifNum ← get_ifn(EPaddr)

      Return the physical interface number of the EP.

    ifAttributes ← get_ifattrs(ifNum,ifAttributes)

      Return the attributes of the physical interface.

    IPaddr ← get_ifpeer(ifNum)

      Return the IP address of the physical interface.
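A slice might combine the calls above as sketched below: fetch the interface list and pick one with enough available bandwidth. The dictionary field names mirror the attributes get_ifaces is described as returning, but are assumptions.

```python
def pick_iface(if_list, needed_kbps, if_type="internet"):
    """Return the number of the first interface of the given type whose
    available bandwidth covers the request, or None if none qualifies."""
    for iface in if_list:
        if iface["type"] == if_type and iface["avail_bw_kbps"] >= needed_kbps:
            return iface["ifnum"]
    return None

# Example data shaped like a get_ifaces result (field names assumed)
ifaces = [
    {"ifnum": 0, "type": "internet", "ip": "10.0.0.1",
     "total_bw_kbps": 1_000_000, "avail_bw_kbps": 200_000},
    {"ifnum": 1, "type": "peering", "ip": "10.0.1.1",
     "total_bw_kbps": 1_000_000, "avail_bw_kbps": 900_000},
]
```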


GPE Interface Bandwidth

    rmpCode ← resrv_pl_ifbw(ifNum,BWkbps)
      Reserve bandwidth (Kbps) on the physical interface.

    rmpCode ← free_pl_ifbw(ifNum,BWkbps)

      Release bandwidth (Kbps) from the physical interface.
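The reserve/free pair above amounts to simple admission bookkeeping per interface, sketched here. The return-code strings and internal state layout are illustrative, not the real RMP's.

```python
class IfBwTable:
    """Toy model of resrv_pl_ifbw/free_pl_ifbw semantics: a reservation
    succeeds only if the interface still has that much bandwidth free."""

    def __init__(self, capacity_kbps):
        self.capacity = dict(capacity_kbps)        # ifNum -> total Kbps
        self.reserved = {n: 0 for n in capacity_kbps}

    def resrv_pl_ifbw(self, ifnum, kbps):
        if self.reserved[ifnum] + kbps > self.capacity[ifnum]:
            return "RMP_ERR_NO_BW"                 # over-subscription rejected
        self.reserved[ifnum] += kbps
        return "RMP_OK"

    def free_pl_ifbw(self, ifnum, kbps):
        self.reserved[ifnum] = max(0, self.reserved[ifnum] - kbps)
        return "RMP_OK"
```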

GPE Endpoints

    EPdescr ← alloc_endpoint(EPdescr)
      Given an EP description, allocate a new EP, and return a reference to the EP. A filter is installed in the LC to direct matching traffic to the GPE. For TCP or UDP, you can select the port number or have the system give you one.

    RMPcode ← free_endpoint(EPdescr)

      Free the endpoint, de-install the LC filter for the EP, and return the status.
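The port-selection behavior of alloc_endpoint can be sketched as below: the caller either names a specific TCP/UDP port or passes 0 to let the system assign one. The data layout, the dynamic-port base, and the failure convention are assumptions.

```python
class EndpointTable:
    """Toy model of endpoint allocation with system- or user-chosen ports."""

    def __init__(self, dynamic_base=33000):
        self.in_use = set()
        self.next_dyn = dynamic_base   # assumed start of system-assigned range

    def alloc_endpoint(self, proto, port=0):
        if port == 0:                  # ask the system for a port
            while self.next_dyn in self.in_use:
                self.next_dyn += 1
            port = self.next_dyn
        elif port in self.in_use:
            return None                # requested port already taken
        self.in_use.add(port)
        # the real RMP would also install an LC filter for the EP here
        return {"proto": proto, "port": port}
```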

FastPaths

    FPdescr ← alloc_fastpath(codeOpt,bwSpec,resSpec,memSpec,FPdescr)
      Given specifications for the aggregate bandwidth, other resource (filters, queues, buffers and stats) and memory, allocate a new FP for the code option, and return a reference to the FP description.

    free_fastpath(FPid)

      Free the resources of the FP.

FastPath Bandwidth

    RMPcode ← resrv_fpath_ifbw(FPid,ifNum,BWkbps)
      Reserve bandwidth (Kbps) on a physical interface for a FP.

    RMPcode ← free_fpath_ifbw(FPid,ifNum,BWkbps)

      Free the bandwidth (Kbps) of a FP from a physical interface, and return the status.

FastPath MetaInterfaces

    MIid ← alloc_udp_tunnel(FPid,EPdescr)
      Given a UDP tunnel EP description, allocate the EP for the FP, and return the MI identifier.

    RMPcode ← free_udp_tunnel(FPid,MIid)

      Free the MI of a FP, and return the status.

    EPdescr ← get_endpoint(FPid,MIid,EPdescr)

      Return the UDP tunnel EP description for a given MI of a FP.

FastPath Queue Management

    RMPcode ← bind_queue(FPid,MIid,qidListType,qidList)
      Associate the listed queues to the MI of the FP, and return the status.

    Qdescr ← get_queue_params(FPid,Qid,Qdescr)

      Return a description of the FP queue, including its parameters (threshold, bandwidth).

    BWkbps ← set_queue_params(FPid,Qid,Qdescr)

      Set the queue parameters (threshold, bandwidth) for the FP queue, and return the bandwidth of the queue.

    Qlen ← get_queue_len(FPid,Qid,Qlen)

      Return the length of the FP queue.
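The queue calls above can be modeled as a small table of per-queue parameters plus a mapping from meta-interfaces to their bound queues, as sketched here. The state layout and return-code string are assumptions.

```python
class FpQueues:
    """Toy model of FP queue management: each queue carries a discard
    threshold and a bandwidth share; bind_queue attaches queues to an MI."""

    def __init__(self):
        self.params = {}      # qid -> (threshold, bw_kbps)
        self.mi_queues = {}   # mi_id -> [qid, ...]

    def set_queue_params(self, qid, threshold, bw_kbps):
        self.params[qid] = (threshold, bw_kbps)
        return bw_kbps                      # mirrors the documented return value

    def get_queue_params(self, qid):
        return self.params[qid]

    def bind_queue(self, mi_id, qid_list):
        self.mi_queues.setdefault(mi_id, []).extend(qid_list)
        return "RMP_OK"
```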

Fastpath Filter Management

    rmpCode ← write_fltr(FPid,Fid,Fltr)
      Install a FP filter, and return the status.

    rmpCode ← update_result(FPid,Fid,Fltr)

      Modify the FP filter, and return the status.

    Fltr ← get_fltr_byfid(FPid,Fid,Fltr)

      Return the FP filter given the filter ID.

    Fltr ← get_fltr_bykey(FPid,key,Fltr)

      Return the FP filter that matches the key.

    fltrResult ← lookup_fltr(FPid,key,Fltr)

      Return the result part of the FP filter that matches the key.

    rmpCode ← rem_fltr_byfid(FPid,Fid)

      Remove the FP filter given the filter ID, and return the status.

    rmpCode ← rem_fltr_bykey(FPid,key)

      Remove the highest priority FP filter that matches the key, and return the status.

FastPath Stats Management

    statsRecord ← read_stats(FPid,statsId,flags,statsRecord)
      Return the FP stats record (counter group) for the stats ID. The flags argument selects which counters to return. You can select the byte or packet counter and whether the preQ or postQ counter is returned.

    rmpCode ← clear_stats(FPid,statsId,flags)

      Reset the FP stats counters for the stats ID. The flags argument selects which counters to reset.

    statsHandle ← create_periodic(FPid,statsId,period,historySize,flags)

      Create a periodic stats read event for the stats ID with the given period and history size, and return a handle for the operation. The flags argument indicates the retrieval method: either push the stats data to a registered port, or have the VM pull the data using the get_periodic command.

    rmpCode ← delete_periodic(FPid,statsHandle)

      Remove the periodic event, remove the callback state, and return the status.

    rmpCode ← set_callback(FPid,statsHandle,ipPortNum)

      Set up the callback for the periodic stats push model that sends stats records to the IP port number, and return the status.

    statsRecord ← get_periodic(FPid,statsHandle,statsRecord)

      Return the stats record associated with the stats handle.
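The pull side of the periodic model above can be sketched as a bounded history of counter samples: each timer expiry appends one sample, old samples fall off, and get_periodic returns what is buffered. Scheduling is simulated here by calling tick() by hand; the structure is an assumption.

```python
from collections import deque

class PeriodicStats:
    """Toy model of create_periodic/get_periodic with a pull retrieval model."""

    def __init__(self, counters, history_size):
        self.counters = counters                   # statsId -> current value
        self.history = deque(maxlen=history_size)  # oldest samples dropped

    def tick(self, stats_id):
        """Stand-in for one expiry of the periodic read event."""
        self.history.append(self.counters[stats_id])

    def get_periodic(self):
        return list(self.history)
```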

FastPath Memory

Each code option is provided with a block of SRAM. A slice can read/write to any location in this block. A code option may elect to provide library functions to manipulate control structures within this block. The valBuf argument to the read/write functions is a structure that includes the number of bytes in the buffer and the buffer itself.

    rmpCode ← mem_write(FPid,offset,valBuf)
      Write data to the SRAM starting at offset within the FP block, and return the status. The valBuf argument is a structure that includes the number of bytes and the data.

    valBuf ← mem_read(FPid,offset,nbytes,valBuf)

      Read bytes into the value buffer, and return a reference to the value buffer.
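The valBuf convention described above (a byte count plus the bytes themselves, addressing a per-FP SRAM block by offset) can be sketched as follows; the bounds-checking behavior and return strings are assumptions.

```python
class FpSram:
    """Toy model of mem_write/mem_read against a code option's SRAM block."""

    def __init__(self, size):
        self.block = bytearray(size)

    def mem_write(self, offset, val_buf):
        nbytes, data = val_buf                      # valBuf = (length, bytes)
        if offset + nbytes > len(self.block):
            return "RMP_ERR_RANGE"                  # write must fit the block
        self.block[offset:offset + nbytes] = data[:nbytes]
        return "RMP_OK"

    def mem_read(self, offset, nbytes):
        return (nbytes, bytes(self.block[offset:offset + nbytes]))
```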

Reservation Management

    rmpCode ← make_reservation(rsvRecord)
      Make a reservation, and return the status.

    rmpCode ← update_reservation(rsvRecord)

      Update a reservation.

    rmpCode ← cancel_reservation(date)

      Cancel the reservation that includes the specified date and time.
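The reservation calls above imply interval bookkeeping: a new reservation must not overlap an existing one for the same resource, and cancellation names any instant inside the reservation. This sketch assumes half-open time intervals and a per-resource table; the record layout is not the real rsvRecord.

```python
def overlaps(a, b):
    """Half-open intervals (start, end) overlap iff each starts before the
    other ends."""
    return a[0] < b[1] and b[0] < a[1]

class Reservations:
    def __init__(self):
        self.records = []   # list of (start, end)

    def make_reservation(self, start, end):
        if any(overlaps((start, end), r) for r in self.records):
            return "RMP_ERR_CONFLICT"
        self.records.append((start, end))
        return "RMP_OK"

    def cancel_reservation(self, instant):
        for r in self.records:
            if r[0] <= instant < r[1]:
                self.records.remove(r)
                return "RMP_OK"
        return "RMP_ERR_NO_MATCH"
```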

Substrate Control Daemons (SCD)

The SCDs run on the XScale processors of the Line Card and NPE. They provide a messaging interface through which other control software components can exercise control. The messages include operations to access traffic counters, add/remove TCAM packet filters, configure queue parameters (including WDRR weights and discard thresholds), read/write specific memory locations used for control and status registers, etc. These are described in more detail below. All functions have a context ID (contextId) as an argument. A context ID of 0 indicates a privileged operation performed by the substrate. Any other context ID indicates a user context and is either a fastpath ID or an internal slice ID.

Many of the functions (e.g., write_fltr) appear similar to ones in the RMP. This is expected, because an RMP operation must often be relayed to an SCD for evaluation, but with one important difference: the SCD has a substrate view of objects, whereas the RMP provides a higher level of abstraction.

The Line Card SCD allows the SRM to control various elements of the Line Card data path. This includes the TCAM-resident packet filters (on both input and output), interface addressing and bandwidth, NAT filter table configuration, and queueing parameters. The NPE SCD allows the SRM and the RMP to control various elements of the NPE data path. This includes fast path configuration data, per-slice packet filters resident in the TCAM, and queueing parameters.

Control Table Initialization

There are several tables and control blocks used by the control software.

    set_sched_params(contextId,Sid,ifNum,BWkbpsMax,BWkbpsMin,valBuf)
      Set the interface number and bandwidth characteristics for a Scheduler in the Per Scheduler Parameters table.

    set_encap_cb(contextId,Sid,srcIPaddr,dstMACaddr,valBuf)

      On the NPE, set the source IP Address and destination MAC Address associated with the specified scheduler.

    set_sched_mac(contextId,Sid,dstMACaddr,srcMACaddr,valBuf)

      On the LC, set the destination and source MAC Addresses for the specified scheduler.

    set_encap_gpe(contextId,FPid,GPEipAddr,NPEipAddr,valBuf)

      On the NPE, for a fast path, set the GPE IP Address and NPE IP Address to be used for communication between the GPE and NPE for local delivery and exceptions.

    set_fpmi_bw(contextId,FPid,Sid,MIid,BWkbps,valBuf)

      On the NPE, for a particular fast path, set the bandwidth for a MI using a particular scheduler.

    SCDcode ← set_src_hwaddr(contextId,MACaddr)

      On the NPE, set the NPE’s source MAC Address.

    SCDcode ← set_iface_table(contextId,ifTable)

      On the NPE, initialize the RX Interface ID table. This table translates the receive destination address on a packet to a 4-bit index which is used in the lookup key.
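The RX Interface ID translation can be sketched as below: a MAC-address-keyed table yields a 4-bit index that is folded into the lookup key. The key packing (index in the top nibble of a 32-bit key) is purely illustrative.

```python
class RxIfaceTable:
    """Toy model of the RX Interface ID table on the NPE."""

    def __init__(self):
        self.table = {}   # dst MAC -> 4-bit interface index

    def set_iface_table(self, entries):
        for mac, idx in entries.items():
            assert 0 <= idx < 16        # index must fit in 4 bits
            self.table[mac] = idx

    def lookup_key(self, dst_mac, rest_of_key):
        """Fold the 4-bit interface index into a 32-bit lookup key
        (packing is an assumption for illustration)."""
        return (self.table[dst_mac] << 28) | (rest_of_key & 0x0FFFFFFF)
```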

FastPath (NPE SCD Only)

    set_fast_path(contextId,FPid,codeOpt,vlanID,num_queues,num_filters,num_buffers,num_stats,SRAM_offset,SRAM_size,DRAM_offset,DRAM_size,valBuf)
      On the NPE, create a new fast path.

    rem_fast_path(contextId,FPid,valBuf)

      On the NPE, remove a fast path.

    SCDcode ← set_gpe_info(contextId,EXport,LDport,EXqid,LDqid)

      On the NPE, for a particular fast path, set the Local Delivery and Exception traffic port numbers and QIDs.

Memory

    write_sram(contextId,offset,valBuf)
      On the NPE, write to the SRAM block for a particular fast path.

    read_sram(contextId,offset,valBuf,count)

      On the NPE, read from the SRAM block for a particular fast path.

Queue Management

    SCDcode ← bind_queue(contextId,MIid,qidListType,qidVector)
      Associate the listed queues to the context’s MI, and return the status.

    BWkbps ← set_queue_params(contextId,Qid,threshold,BWkbps)

      Set the context’s queue parameters (threshold, bandwidth) for the queue, and return the bandwidth of the queue.

    get_queue_params(contextId,Qid,threshold,BWkbps)

      Return the context’s queue parameters (threshold, bandwidth) through the threshold and BWkbps arguments.

    get_queue_len(contextId,Qid,pktCnt,byteCnt)

      Return the length of the context’s queue through the pktCnt and byteCnt parameters.

    set_queue_sched(contextId,Qid,Sid,valBuf)

      Associate a specified queue with the specified scheduler.

NPE Filter Management

    SCDcode ← npe_write_fltr(contextId,Fid,substrateFltr)
      Install a context’s substrate (generic) filter with filter ID.

    SCDcode ← npe_update_result(contextId,Fid,result)

      Modify the result part of a context’s substrate (generic) filter with filter ID.

    substrateFltr ← npe_get_fltr_by_key(contextId,key,substrateFltr)

      Return the context’s substrate (generic) filter that matches the key.

    substrateFltr ← npe_get_fltr_by_fid(contextId,Fid,substrateFltr)

      Return the context’s substrate filter given the filter ID.

    substrateResult ← npe_lookup_fltr(contextId,key,substrateResult)

      Return the result part of the context’s substrate (generic) filter that matches the key.

    SCDcode ← npe_rem_fltr_by_key(contextId,substrateKey)

      Remove the context’s highest priority substrate filter that matches the key, and return the status.

    SCDcode ← npe_rem_fltr_by_fid(contextId,Fid)

      Remove the context’s substrate filter given the filter ID, and return the status.

Line Card Filter Management

There are two Line Card filter databases: ingress and egress. Ingress filters are used to determine which SPP component (e.g., NPE, GPE) should handle incoming packets. Egress filters are used to determine the output interface on which to send outgoing packets. The database ID (DBid) indicates the database to be used.
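The DBid dispatch described above can be sketched as two independent filter stores selected per call, with key/mask matching as in the TCAM. The database identifiers and first-match policy here are assumptions.

```python
INGRESS_DB, EGRESS_DB = 0, 1   # illustrative DBid values

class LcFilterDbs:
    """Toy model of the two Line Card filter databases."""

    def __init__(self):
        self.dbs = {INGRESS_DB: {}, EGRESS_DB: {}}  # DBid -> {fid: (key, mask, result)}

    def write_fltr(self, dbid, fid, key, mask, result):
        self.dbs[dbid][fid] = (key, mask, result)

    def lookup_fltr(self, dbid, key):
        """Return the result of the first filter whose masked key matches."""
        for k, mask, result in self.dbs[dbid].values():
            if key & mask == k & mask:
                return result
        return None
```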

    write_fltr(contextId,DBid,Fid,key,mask,result,valBuf)
      Install a context’s LC filter (key, mask, result) in the given database.

    update_result(contextId,DBid,Fid,result)

      Update a context’s LC filter result in the specified database.

    get_fltr_by_key(contextId,DBid,key,mask,result,keyLen,resultLen)

      Given the key, retrieve a filter from the specified database.

    get_fltr_by_fid(contextId,DBid,Fid,key,mask,result,keyLen,resultLen)

      Given the filter id, retrieve a filter from the specified database.

    lookup_fltr(contextId,DBid,key,result,resultLen)

      Given the key, retrieve the filter result from the specified database.

    rem_fltr_by_key(contextId,DBid,key,valBuf)

      Given the key, remove the filter from the specified database.

    rem_fltr_by_fid(contextId,DBid,Fid,valBuf)

      Given the filter id, remove the filter from a specified database.

Statistics Management

    statsRecord ← read_stats(contextId,statsId,flags,statsRecord)
      Return the context’s stats record (counter group) for the stats ID. The flags argument selects which counters to return. You can select the byte or packet counter and whether the preQ or postQ counter is returned.

    SCDcode ← clear_stats(contextId,statsId,flags)

      Reset the context’s stats counters for the stats ID, and return the status. The flags argument selects which counters to reset.

    statsHandle ← create_periodic(contextId,statsId,period,count,flags)

      Create a periodic stats read event for the stats ID of the context with the given period and history size, and return a handle for the operation. The flags argument indicates the retrieval method: either push the stats data to a registered port, or have the VM pull the data using the get_periodic command.

    SCDcode ← del_periodic(contextId,statsHandle)

      Remove the context’s periodic event, remove the callback state, and return the status.

    SCDcode ← set_callback(contextId,statsHandle,UDPport)

      Set up the context’s callback for the periodic stats push model that sends stats records to the UDP port number, and return the status.

    statsRecordVector ← get_periodic(contextId,statsHandle,statsRecordVector)

      Return the context’s stats record associated with the stats handle.

MicroEngine Management

    start_mes(contextId,valBuf)
      Start the MicroEngines on an NPU.

    stop_mes(contextId,valBuf)

      Stop the MicroEngines on an NPU.

NAT

    nat_filters(contextId,ingressStartFid,ingressEndFid,egressStartFid,egressEndFid)
      On the LC, initialize the NAT filter tables. This sets aside a block of the TCAM for the Ingress NAT filters and a block of the TCAM for the Egress NAT filters.
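The TCAM partitioning described above can be sketched as reserving two contiguous, disjoint filter-ID ranges, one per NAT direction. Treating the Fid arguments as inclusive range bounds is an assumption based on the description.

```python
def nat_filters(ingress_start, ingress_end, egress_start, egress_end):
    """Toy model of NAT filter table setup: carve two disjoint blocks of
    TCAM filter IDs, one for Ingress NAT and one for Egress NAT."""
    ingress = range(ingress_start, ingress_end + 1)
    egress = range(egress_start, egress_end + 1)
    if set(ingress) & set(egress):   # the two TCAM blocks must not overlap
        return None
    return {"ingress_nat": ingress, "egress_nat": egress}
```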

MetaInterface Management

    SCDcode ← create_mi(contextId,FPid,MIid,Sid)
      On the NPE, create a new meta-interface for a fast path.

    SCDcode ← delete_mi(contextId,FPid,MIid)

      On the NPE, delete the specified meta-interface for the specified fast path.

    SCDcode ← set_mi_bw(contextId,FPid,MIid,BWkbps)

      On the NPE, for the specified fast path, set the bandwidth for a meta-interface.

    SCDcode ← bind_queue_sched(contextId,Qid,Sid)

      On the NPE, bind a queue to a scheduler.

    SCDcode ← unbind_queue_sched(contextId,Qid)

      On the NPE, unbind a queue from a scheduler and release its bandwidth on that scheduler.

    SCDcode ← unbind_queue(contextId,Qid)

      On the NPE, unbind a queue from a meta-interface and release its bandwidth on that meta-interface.