Boot Manager Technical Documentation
The entire Boot Manager system consists of several components that are designed to work together to provide the functionality outlined in the Boot Manager PDN [1]. These consist of:
The previous implementation of the software responsible for installing and booting nodes consisted of a set of boot scripts that the boot cd would download and run, depending on the node's current boot state. Only the script needed for the current state was downloaded, and the logic that decided which script was sent to the node existed on the boot server in the form of PHP scripts. The intention with the new BootManager system, however, is to send the same script back for all nodes (consisting of the core BootManager code), in all boot states, each time the node starts. The boot manager then runs and determines which operations to perform on the node, based on its current boot state. All state-based logic for the node boot, install, debug, and reconfigure operations is contained in one place; there is no longer any boot-state-specific logic at PLC.

All BootManager source code is located in the repository 'bootmanager' on the PlanetLab CVS system. For information on how to access CVS, consult the PlanetLab website. Unless otherwise noted, all file references refer to this repository.

Most of the API calls available as part of the PlanetLab Central API are intended to be run by users, and thus authentication for these calls is done with the user's email address and password. However, the API calls described below will be run by the nodes themselves, so a new authentication mechanism is required. As is done with other PLC API calls, the first parameter to all BootManager-related calls will be an authentication structure, consisting of these named fields:
Authentication is successful if PLC is able to create the same hash from the values using its own copy of the node key. If the hash values do not match, then either the keys do not match or the values of the call were modified in transmission, and the node cannot be authenticated. Both the BootManager and the authentication software at PLC must agree on a method for creating the hash values for each call. This hash is essentially a fingerprint of the method call, and is created by this algorithm:
The implementation of this algorithm is in the function serialize_params in the file source/BootAPI.py. The same algorithm is located in the 'plc_api' repository, in the function serialize_params in the file PLC/Auth.py. The resultant string is fed into the HMAC algorithm with the node key, and the resulting hash value is used in the authentication structure. This authentication method makes a number of assumptions, detailed below.
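As an illustration of the fingerprint and HMAC steps above, the following is a minimal sketch of how such an authentication structure might be built. The exact serialization rules are those implemented in serialize_params; the simple flattening shown here, the use of SHA-1, and the field names in the returned structure are assumptions made for the purpose of the example:

    import hmac
    import hashlib

    def serialize_params(params):
        # Flatten all call arguments into a single string. The real
        # serialization rules live in source/BootAPI.py; a plain
        # concatenation is assumed here for illustration.
        values = []
        for param in params:
            if isinstance(param, (list, tuple)):
                values.append(serialize_params(param))
            elif isinstance(param, dict):
                for key in sorted(param):
                    values.append(str(param[key]))
            else:
                values.append(str(param))
        return "".join(values)

    def make_auth_struct(node_id, node_key, call_params):
        # Fingerprint the call with an HMAC keyed by the per-node key.
        # SHA-1 and the field names below are assumptions, not the
        # documented wire format.
        digest = hmac.new(node_key.encode(),
                          serialize_params(call_params).encode(),
                          hashlib.sha1).hexdigest()
        return {"AuthMethod": "hmac", "node_id": node_id, "value": digest}

PLC would perform the same computation with its stored copy of the node key and compare digests; any mismatch rejects the call.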
Full, up-to-date technical documentation of these functions can be found in the PlanetLab Central API documentation. They are listed here for completeness.
The Boot Manager core package, which is run on the nodes and contacts the Boot API as necessary, is responsible for the following major functional units:
Each node always has one of four possible boot states.
Below is a high-level flow chart of the boot manager, from the time it is executed to when it exits. This core state machine is located in source/BootManager.py.

The boot manager needs to be able to operate under all currently supported boot cds. The new 3.0 cd contains software that the current 2.x cds do not, including the Logical Volume Manager (LVM) client tools, RPM, and YUM, among other packages. Given this requirement, the boot cd will need to download the extra support files it needs to run, as necessary. Depending on the size of these files, they may only be downloaded by specific steps in the flow chart in figure 1, and thus are not mentioned. See the PlanetLab BootCD Documentation for more information about the current 3.x boot cds, how they are built, and what they provide to the BootManager.

To remain compatible with 2.x boot cds, the format and existing contents of the configuration files for the nodes will not change. There will be, however, the addition of three fields:
An example of a configuration file for a dhcp networked machine:

    IP_METHOD="dhcp"
    HOST_NAME="planetlab-1"
    DOMAIN_NAME="cs.princeton.edu"
    NET_DEVICE="00:06:5B:EC:33:BB"
    NODE_KEY="79efbe871722771675de604a227db8386bc6ef482a4b74"
    NODE_ID="121"

An example of a configuration file for the same machine, only with a statically assigned network address:

    IP_METHOD="static"
    IP_ADDRESS="128.112.139.71"
    IP_GATEWAY="128.112.139.65"
    IP_NETMASK="255.255.255.192"
    IP_NETADDR="128.112.139.64"
    IP_BROADCASTADDR="128.112.139.127"
    IP_DNS1="128.112.136.10"
    IP_DNS2="128.112.136.12"
    HOST_NAME="planetlab-1"
    DOMAIN_NAME="cs.princeton.edu"
    NET_DEVICE="00:06:5B:EC:33:BB"
    NODE_KEY="79efbe871722771675de604a227db8386bc6ef482a4b74"
    NODE_ID="121"

Existing 2.x boot cds will look for the configuration file only on a floppy disk, and the file must be named 'planet.cnf'. The new 3.x boot cds, however, will initially look for a file named 'plnode.txt' on either a floppy disk or burned onto the cd itself, falling back to the original file name, 'planet.cnf'. This initial file read is performed by the boot cd itself to bring the node's network online, so it can download and execute the Boot Manager. However, the Boot Manager will also need to identify the location of and read in the file, so it can get the extra fields not initially used to bring the network online (primarily node_key and node_id). Below is the search order that the BootManager will use to locate the file. Configuration file location search order:
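As a rough sketch of that search logic, the following assumes the floppy and cd are mounted at /mnt/floppy and /mnt/cdrom, and that the new file name is checked before the legacy one; the exact paths and ordering are assumptions for illustration:

    import os

    # Candidate locations, assumed for illustration: the new 3.x name
    # is tried at each mount point before the legacy 2.x name.
    CANDIDATES = [
        "/mnt/floppy/plnode.txt",
        "/mnt/cdrom/plnode.txt",
        "/mnt/floppy/planet.cnf",
    ]

    def find_node_config():
        # Return the first configuration file that exists, or None
        # if no configuration file can be located.
        for path in CANDIDATES:
            if os.path.isfile(path):
                return path
        return None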
New nodes are added to the system explicitly by either a PI or a tech contact, either directly through the API calls, or by using the appropriate interfaces on the website. As nodes are added, their hostname, network configuration method (dhcp or static), and any static settings must be entered. Regardless of the network configuration method, the IP address is required. When the node is brought online, the records at PLC will be updated with any remaining information.

After a node is added, the user has the option of creating a configuration file for that node. Once the node is added, the contents of the file are generated automatically, and the user is prompted to download and save the file. This file contains only the primary network interface information (necessary to contact PLC), the node id, and the per-node key.

The default boot state of a new node is 'inst', which requires the user to confirm the installation at the node by typing yes on the console. If this is not desired, as is the case with nodes at a co-location site, or for a large number of nodes being set up at the same time, the administrator can change the node state, after the entry is in the PLC records, from 'inst' to 'reinstall'. This will bypass the confirmation screen and proceed directly to reinstalling the machine (even if it already had a node installation on it).

If the primary node network address must be updated, for example if the node is moved to a new network, then two steps must be performed to successfully complete the move:
If the node IP address on the floppy does not match the record at PLC, then the node will not boot until they match, as authentication will fail. The intention here is to prevent a malicious user from taking the floppy disk, altering the network settings, and trying to bring up a new machine with the new settings. On the other hand, if a non-primary network address needs to be updated, then simply updating the record in the configuration file will suffice. The boot manager, at the next restart, will reconfigure the machine and update the PLC records to match the configuration file.

All run-time configuration options for the BootManager exist in a single file located at source/configuration. These values are described below.
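The file holds simple name=value settings that can be sourced by a shell or parsed directly. The option names in the fragment below are hypothetical placeholders chosen purely to illustrate the format; the actual settings are the ones described below:

    # source/configuration (illustrative names only)
    BOOT_SERVER="boot.planet-lab.org"
    TEMP_PATH="/tmp/bootmanager"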
When a node is being installed, the Boot Manager must identify which hardware the machine has that is applicable to a running node, and configure the node so it can boot properly post-install. The general procedure for doing so is outlined in this section. The process for identifying which kernel module needs to be loaded is:
This process is fairly straightforward, and is simplified by the fact that we currently do not need support for USB, sound, or video devices when the node is fully running. The boot cd itself uses a similar process, but includes USB devices. Consult the boot cd technical documentation for more information.

The creation of the PCI id to kernel module lookup table uses three different sources of information, merged together into a single table for easier lookups. With these three sources of information, a fairly comprehensive lookup table can be generated for the devices that PlanetLab nodes need to have configured. They include:
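Once built, the merged table maps a PCI (vendor, device) id pair to a module name, which can then be matched against the devices actually present in the machine. The following is a minimal sketch of that lookup, assuming a single modules.pcimap-style source file and the /sys/bus/pci interface; the file location and the single-source simplification are assumptions for illustration:

    import os

    def load_pcimap(path):
        # Parse a modules.pcimap-style file: each non-comment line
        # lists a module name followed by hex vendor and device ids.
        table = {}
        with open(path) as f:
            for line in f:
                if line.startswith("#"):
                    continue
                fields = line.split()
                if len(fields) < 3:
                    continue
                module, vendor, device = fields[0], fields[1], fields[2]
                table[(int(vendor, 16), int(device, 16))] = module
        return table

    def modules_for_present_devices(table):
        # Walk the PCI devices the kernel sees and collect the
        # modules the lookup table says those devices need.
        needed = set()
        sysfs = "/sys/bus/pci/devices"
        for dev in os.listdir(sysfs):
            with open(os.path.join(sysfs, dev, "vendor")) as f:
                vendor = int(f.read(), 16)
            with open(os.path.join(sysfs, dev, "device")) as f:
                device = int(f.read(), 16)
            if (vendor, device) in table:
                needed.add(table[(vendor, device)])
        return needed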
It should be noted here that SATA (Serial ATA) devices have been known to exist with both a PCI SCSI device class and a PCI IDE device class. Under linux 2.6 kernels, all SATA modules need to be listed in modprobe.conf under 'scsi_hostadapter' lines. This case is handled in the hardware loading scripts by assuming that if an IDE device matches a loadable module, it should be put in the modprobe.conf file, as 'real' IDE drivers are all currently built into the kernel and do not need to be loaded. SATA devices that have a PCI SCSI device class are easily identified.

It is essential that the modprobe.conf configuration file contain the correct drivers for the disks on the system, if they are present, as during kernel installation the creation of the initrd (initial ramdisk), which is responsible for booting the system, uses this file to identify which drivers to include in it. A failure to do this typically results in a kernel panic at boot with a 'no init found' message.
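For illustration, a node whose SATA controller is driven by the ata_piix module and whose network interface uses the e100 module (both assumed hardware for this example) would need modprobe.conf entries along these lines:

    # modprobe.conf fragment (example hardware)
    alias scsi_hostadapter ata_piix
    alias eth0 e100

During initrd creation, the 'scsi_hostadapter' line tells the build which disk driver must be embedded so the installed system can reach its root filesystem at boot.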
Given the large number of nodes in PlanetLab, and the lack of direct physical access to them, the process of updating all configuration files to include the new node id and node key will take a fairly significant amount of time. Rather than delay deployment of the Boot Manager until all machines are updated, alternative methods for acquiring these values are used for existing nodes.

First, the node id. For any machine already part of PlanetLab, there exists a record of its IP address and MAC address at PlanetLab Central. To get the node_id value when it is not located in the configuration file, the BootManager uses a standard HTTP POST request to a known php page on the boot server, sending the IP and MAC address of the node, as sketched below. This php page queries the PLC database, and returns a node_id if the node is part of PlanetLab, -1 otherwise.

Second, the node key. All Boot CDs currently in use, at the time they request a script from PLC to run, send in the request a randomly generated value called a boot_nonce, usually 32 bytes or larger. During normal BootManager operation, this value is ignored. However, in the absence of a node key, this value can be used instead. Although it is not as secure as a typical node key (because it is not distributed through external mechanisms, but is generated by the node itself), it can be used if we validate that the IP address of the node making the request matches the PLC record. This means that nodes behind firewalls can no longer be allowed in this situation.
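A minimal sketch of that fallback request follows; the page name getnodeid.php, the form field names, and plain urlencoded POST encoding are all assumptions for this example:

    import urllib.parse
    import urllib.request

    def lookup_node_id(boot_server, ip, mac):
        # POST the node's IP and MAC address to the boot server; the
        # page and field names here are hypothetical placeholders.
        data = urllib.parse.urlencode({"ip": ip, "mac": mac}).encode()
        url = "https://%s/getnodeid.php" % boot_server
        with urllib.request.urlopen(url, data) as response:
            node_id = int(response.read().decode().strip())
        # The page returns -1 for machines not part of PlanetLab.
        return node_id if node_id != -1 else None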
Below are common scenarios that the BootManager might encounter that would exist outside of the documented procedures for handling nodes. A full description of how they will be handled by the BootManager follows each.