Internals

Introduction

A ConPaaS service consists of three main entities: the manager, the agent and the frontend. The (primary) manager resides in the first VM, which is started by the frontend when the service is created; its role is to manage the service by provisioning agents, maintaining a stable configuration at all times and continuously monitoring the service’s performance. An agent resides on each of the other VMs, which are started by the manager. The agents do the actual work. Note that a service may contain one manager and multiple agents, or multiple managers that also act as agents.

To implement a new ConPaaS service, you must provide a new manager service, a new agent service and a new frontend service (we assume that each ConPaaS service can be mapped onto this three-entity architecture). To ease the process of adding a new ConPaaS service, we provide a framework which implements functionality common to all ConPaaS services. So far, the framework abstracts the IaaS layer (adding support for a new cloud provider should not require modifications to any ConPaaS service implementation) and the HTTP communication (we assume that HTTP is the preferred protocol for communication between the three entities).

ConPaaS directory structure

Below you can see the directory structure of the ConPaaS software. The core folder under src contains the ConPaaS framework; every service should make use of this code. It contains the manager HTTP server, which instantiates the Python manager class that implements the required service; the agent HTTP server, which instantiates the Python agent class (if the service requires agents); the IaaS abstractions; and other useful code.

A new service should be added as a new Python module under the ConPaaS/src/conpaas/services folder:

ConPaaS/  (conpaas/conpaas-services/)
│── src
│   │── conpaas
│   │   │── core
│   │   │   │── clouds
│   │   │   │   │── base.py
│   │   │   │   │── dummy.py
│   │   │   │   │── ec2.py
│   │   │   │   │── federation.py
│   │   │   │   │── opennebula.py
│   │   │   │   │── openstack.py
│   │   │   │── agent.py
│   │   │   │── controller.py
│   │   │   │── expose.py
│   │   │   │── file.py
│   │   │   │── ganglia.py
│   │   │   │── git.py
│   │   │   │── https
│   │   │   │── iaas.py
│   │   │   │── ipop.py
│   │   │   │── log.py
│   │   │   │── manager.py
│   │   │   │── manager.py.generic_add_nodes
│   │   │   │── misc.py
│   │   │   │── node.py
│   │   │   │── services.py
│   │   │── services
│   │       │── cds/
│   │       │── galera/
│   │       │── helloworld/
│   │       │── htc/
│   │       │── htcondor/
│   │       │── mapreduce/
│   │       │── scalaris/
│   │       │── selenium/
│   │       │── taskfarm/
│   │       │── webservers/
│   │       │── xtreemfs/
│   │── dist
│   │── libcloud -> ../contrib/libcloud/
│   │── setup.py
│   │── tests
│       │── core
│       │── run_tests.py
│       │── services
│       │── unit-tests.sh
│── config
│── contrib
│── misc
│── sbin
│── scripts

In the next paragraphs we describe how to add a new ConPaaS service.

Service Organization

Service’s name

The first step in adding a new ConPaaS service is to choose a name for it. This name will be used to construct, in a standardized manner, the file names of the scripts required by the service (see below). Therefore, the name must not contain spaces or other characters that are unsafe in file names.
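For illustration, such a naming rule can be checked with a tiny validator. This is a hypothetical helper, not part of ConPaaS; it assumes lowercase names that are safe in both file names and Python module paths, matching existing service names such as helloworld or mapreduce.

```python
import re

# Hypothetical rule: a lowercase identifier, usable both as a script-name
# prefix (e.g. helloworld-agent.cfg) and as a Python package name
# (conpaas/services/helloworld/).
NAME_RE = re.compile(r'^[a-z][a-z0-9_]*$')

def is_valid_service_name(name):
    """Return True if `name` is usable in script file names and module paths."""
    return bool(NAME_RE.match(name))
```

For example, `is_valid_service_name('helloworld')` accepts, while `'hello world'` is rejected because of the space.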

Scripts

To function properly, ConPaaS uses a series of configuration files and scripts. Some of them must be modified by the administrator (the ones concerning the cloud infrastructure); the others are used, ideally unchanged, by the manager and/or the agent. A newly added service would ideally function with the default scripts. If, however, the default scripts are not sufficient (for example, the new service needs to start something on the VM, like a memcache server), the developer must supply a new script or config file that is used instead of the default one. The new file’s name must be prefixed with the service’s chosen name (as described above); the frontend selects it at run time to generate the contextualization file for the manager VM. (If the frontend doesn’t find such a script or config file for a given service, it falls back to the default one.) Note that some files provided by a service do not replace the defaults; instead they are concatenated to them (see the agent and manager configuration files below).
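The fallback lookup described above can be sketched as follows. This is a simplified model in Python; the real selection is performed by the PHP frontend, and the helper name is made up for illustration.

```python
import os

def select_script(scripts_dir, service_name, script_suffix):
    """Pick the service-specific script if it exists, else the default one.

    E.g. select_script('scripts/manager', 'helloworld', 'manager-start')
    returns the path to helloworld-manager-start if that file exists,
    otherwise the path to default-manager-start.
    """
    specific = os.path.join(scripts_dir, '%s-%s' % (service_name, script_suffix))
    if os.path.exists(specific):
        return specific
    return os.path.join(scripts_dir, 'default-%s' % script_suffix)
```

The same rule applies symmetrically to config files such as service_name-manager.cfg versus default-manager.cfg.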

Below we explain the scripts and configuration files used by a ConPaaS service (there are other configuration files used by the frontend, but these are not relevant to the ConPaaS service). Basically, a service uses two scripts to boot itself up: the manager contextualization script, executed after the manager VM has booted, and the agent contextualization script, executed after the agent VM has booted. These scripts are composed of several parts, some of which can be customized to the needs of the new service.

In the ConPaaS home folder (CONPAAS_HOME), the config folder contains configuration files in INI format and the scripts folder contains executable bash scripts. Some of these files are specific to the cloud, others to the manager, and the rest to the agent. These files are concatenated into a single contextualization script, as described below.
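Conceptually, the contextualization material is assembled by simple concatenation. The sketch below models this with plain strings; it is a simplification, since the real generator also substitutes variables and handles per-cloud differences.

```python
def build_context_script(cloud_script, setup_script, start_script,
                         cloud_cfg, manager_cfg):
    """Model of the manager contextualization assembly.

    The bash parts (cloud script, manager-setup, manager-start) become one
    contextualization script; the INI parts (cloud config, manager config)
    become the config.cfg handed to the manager process.
    """
    context = '\n'.join([cloud_script, setup_script, start_script])
    config = '\n'.join([cloud_cfg, manager_cfg])
    return context, config
```

This mirrors the layout described below: scripts (2), (3) and (5) form the executable part, while configs (1) and (4) end up in ROOT_DIR/config.cfg.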

  • Files specific to the Cloud:

    (1) CONPAAS_HOME/config/cloud/cloud_name.cfg, where cloud_name refers to the clouds supported by the system (for now OpenNebula and EC2). So there is one such file for each cloud the system supports. These files are filled in by the administrator. They contain information such as the username and password to access the cloud, the OS image to be used with the VMs, etc. These files are used by the frontend and the manager, as both need to ask the cloud to start VMs.

    (2) CONPAAS_HOME/scripts/cloud/cloud_name, where cloud_name refers to the clouds supported by the system (for now OpenNebula and EC2). So, as above, there is one such file for each cloud the system supports. These scripts will be included in the contextualization files. For example, for OpenNebula, this file sets up the network.

  • Files specific to the Manager:

    (3) CONPAAS_HOME/scripts/manager/manager-setup, which prepares the environment by copying the ConPaaS source code on the VM, unpacking it, and setting up the PYTHONPATH environment variable.

    (4) CONPAAS_HOME/config/manager/service_name-manager.cfg, which contains configuration variables specific to the service manager (in INI format). If the new service needs any other variables (like a path to a file in the source code), it should provide an annex to the default manager config file. This annex must be named service_name-manager.cfg and will be concatenated to default-manager.cfg.

    (5) CONPAAS_HOME/scripts/manager/service_name-manager-start, which starts the manager server and any other programs the service manager might use.

    (6) CONPAAS_HOME/sbin/manager/service_name-cpsmanager, started by the service_name-manager-start script; it starts the manager server, which in turn starts the requested manager service.

    Scripts (1), (2), (3), (4) and (5) will be used by the frontend to generate the contextualization script for the manager VM. After this script executes, a configuration file containing the concatenation of (1) and (4) is placed in ROOT_DIR/config.cfg, and then (6) is started with the config.cfg file as a parameter, which is forwarded to the new service.

    Examples:

Listing 1: Script (1) ConPaaS/config/cloud/opennebula.cfg

[iaas]
DRIVER = OPENNEBULA

# The URL of the OCCI interface at OpenNebula. Note: ConPaaS currently
# supports only the default OCCI implementation that comes together
# with OpenNebula. It does not yet support the full OCCI-0.2 and later
# versions.
URL = 

# TODO: Currently, the TaskFarming service uses XMLRPC to talk to OpenNebula.
# This is the URL of the XMLRPC server (e.g. http://dns.name.or.ip:2633/RPC2)
XMLRPC = 

# Your OpenNebula user name
USER = 

# Your OpenNebula password
PASSWORD = 

# The image ID (an integer). You can list the registered OpenNebula
# images with the "oneimage list" command.
IMAGE_ID = 

# OCCI defines 4 standard instance types: small, medium, large and custom. This
# variable should be set to one of them. (The small, medium and large instances
# have predefined memory size and CPU, but the custom one permits customization
# of these parameters. The best option is "custom", as some services, like
# map-reduce and mysql, must be able to start VMs with a given amount of memory.)
INST_TYPE = custom

# The network ID (an integer). You can list the registered OpenNebula
# networks with the "onevnet list" command.
NET_ID = 

# The network gateway through which new VMs can route their traffic in
# OpenNebula (an IP address)
NET_GATEWAY = 

# The DNS server that VMs should use to resolve DNS names (an IP address)
NET_NAMESERVER = 

# The OS architecture of the virtual machines.
# (corresponds to the OpenNebula "ARCH" parameter from the VM template)
OS_ARCH = 

# The device that will be mounted as root on the VM. Most often it
# is "sda" or "hda" for KVM, and "xvda2" for Xen.
# (corresponds to the OpenNebula "ROOT" parameter from the VM template)
OS_ROOT = 

# The device on which the VM image disk is mapped. 
DISK_TARGET = 

# The device associated with the CD-ROM on the virtual machine. This
# will be used for contextualization in OpenNebula. Most often it is
# "sr0" for KVM and "xvdb" for Xen.
# (corresponds to the OpenNebula "TARGET" parameter from the "CONTEXT" 
# section of the VM template)
CONTEXT_TARGET = 

####################################################################
# The following values are only needed by the Task Farming service #
####################################################################

PORT = 

# A unique name used in the service to specify different clouds
HOSTNAME = 

# The accountable time unit. Different clouds charge at different
# frequencies (e.g. Amazon charges per hour = 60 minutes)
TIMEUNIT = 

# The price per TIMEUNIT of this specific machine type on this cloud
COSTUNIT = 

# The maximum number of VMs that the system is allowed to allocate from this
# cloud
MAXNODES = 
SPEEDFACTOR = 

Listing 2: Script (2) ConPaaS/scripts/cloud/opennebula

#!/bin/bash

if [ -f /mnt/context.sh ]; then
  . /mnt/context.sh
fi

/sbin/ifconfig eth0 $IP_PUBLIC netmask $NETMASK
/sbin/ip route add default via $IP_GATEWAY
echo "nameserver $NAMESERVER" > /etc/resolv.conf
echo "prepend domain-name-servers $NAMESERVER;" >> /etc/dhcp/dhclient.conf

HOSTNAME=`/usr/bin/host $IP_PUBLIC | cut -d' ' -f5 | cut -d'.' -f1`
/bin/hostname $HOSTNAME

########################################################################################
# Create the one_auth file from contextualization variable ONE_AUTH_CONTENT
# and set it as an environment variable for the JVM
# This is needed for services that use XMLRPC instead of OCCI

if [ -n "$ONE_AUTH_CONTENT" ]; then
  export ONE_AUTH=/root/.one_auth
  export ONE_XMLRPC
  echo "$ONE_AUTH_CONTENT" > $ONE_AUTH
fi

# PCI Hotplug Support is needed in order to attach persistent storage volumes
# to this instance
/sbin/modprobe acpiphp
/sbin/modprobe pci_hotplug

Listing 3: Script (3) ConPaaS/scripts/manager/manager-setup

#!/bin/bash

# This script is part of the contextualization file. It
# copies the source code to the VM, unpacks it, and sets
# the PYTHONPATH environment variable.

# Is filled in by the director
DIRECTOR=%DIRECTOR_URL%
SOURCE=$DIRECTOR/download
ROOT_DIR=/root
CPS_HOME=$ROOT_DIR/ConPaaS

LOG_FILE=/var/log/cpsmanager.log
ETC=/etc/cpsmanager
CERT_DIR=$ETC/certs
VAR_TMP=/var/tmp/cpsmanager
VAR_CACHE=/var/cache/cpsmanager
VAR_RUN=/var/run/cpsmanager

mkdir $CERT_DIR
mv /tmp/*.pem $CERT_DIR

wget --ca-certificate=$CERT_DIR/ca_cert.pem -P $ROOT_DIR/ $SOURCE/ConPaaS.tar.gz
tar -zxf $ROOT_DIR/ConPaaS.tar.gz -C $ROOT_DIR/
export PYTHONPATH=$CPS_HOME/src/:$CPS_HOME/contrib/

Listing 4: Script (4) ConPaaS/config/manager/default-manager.cfg

[manager]

# Service TYPE will be filled in by the director
TYPE = %CONPAAS_SERVICE_TYPE%

BOOTSTRAP = $SOURCE
MY_IP = $IP_PUBLIC

# These are used by the manager to
# communicate with the director to:
#  - decrement the number of credits the user has.
#    (they are used when a VM ran more than 1 hour)
#  - request a new certificate from the CA
# Everything will be filled in by the director
DEPLOYMENT_NAME = %CONPAAS_DEPLOYMENT_NAME%
SERVICE_ID = %CONPAAS_SERVICE_ID%
USER_ID = %CONPAAS_USER_ID%
APP_ID = %CONPAAS_APP_ID%
CREDIT_URL = %DIRECTOR_URL%/callback/decrementUserCredit.php
TERMINATE_URL = %DIRECTOR_URL%/callback/terminateService.php
CA_URL = %DIRECTOR_URL%/ca/get_cert.php

IPOP_BASE_NAMESPACE = %DIRECTOR_URL%/ca/get_cert.php
# The following IPOP directives are added by the director if necessary
# IPOP_BASE_IP = %IPOP_BASE_IP%
# IPOP_NETMASK = %IPOP_NETMASK%
# IPOP_IP_ADDRESS = %IPOP_IP_ADDRESS%
# IPOP_SUBNET  = %IPOP_SUBNET%

# This directory structure already exists in the VM (with ROOT = '') - see
# the 'create new VM script' so do not change ROOT unless you also modify 
# it in the VM. Use these files/directories to put variable data that
# your manager might generate during its life cycle
LOG_FILE = $LOG_FILE 
ETC = $ETC
CERT_DIR = $CERT_DIR
VAR_TMP = $VAR_TMP
VAR_CACHE = $VAR_CACHE
VAR_RUN = $VAR_RUN
CODE_REPO = %(VAR_CACHE)s/code_repo

CONPAAS_HOME = $CPS_HOME

# The default block device where the disks are attached to.
DEV_TARGET = sdb

# Add below other config params your manager might need and save a file as
# %service_name%-manager.cfg 
# Otherwise this file will be used by default

Listing 5: Script (5) ConPaaS/scripts/manager/default-manager-start

#!/bin/bash

# This script is part of the contextualization file. It 
# starts a python script that parses the given arguments
# and starts the manager server, which in turn will start
# the manager service. 

# This file is the default manager-start file. It can be
# customized as needed by the service.

$CPS_HOME/sbin/manager/default-cpsmanager -c $ROOT_DIR/config.cfg 1>$ROOT_DIR/manager.out 2>$ROOT_DIR/manager.err &
manager_pid=$!
echo $manager_pid > $ROOT_DIR/manager.pid

Listing 6: Script (6) ConPaaS/sbin/manager/default-cpsmanager

#!/usr/bin/python
'''
Copyright (c) 2010-2012, Contrail consortium.
All rights reserved.

Redistribution and use in source and binary forms, 
with or without modification, are permitted provided
that the following conditions are met:

 1. Redistributions of source code must retain the
    above copyright notice, this list of conditions
    and the following disclaimer.
 2. Redistributions in binary form must reproduce
    the above copyright notice, this list of 
    conditions and the following disclaimer in the
    documentation and/or other materials provided
    with the distribution.
 3. Neither the name of the Contrail consortium nor the
    names of its contributors may be used to endorse
    or promote products derived from this software 
    without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, 
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR 
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS 
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.


Created on Jul 4, 2011

@author: ielhelw
'''
from os.path import exists
from conpaas.core.https import client, server

if __name__ == '__main__':
  from optparse import OptionParser
  from ConfigParser import ConfigParser
  import sys
  
  parser = OptionParser()
  parser.add_option('-p', '--port', type='int', default=443, dest='port')
  parser.add_option('-b', '--bind', type='string', default='0.0.0.0', dest='address')
  parser.add_option('-c', '--config', type='string', default=None, dest='config')
  options, args = parser.parse_args()
  
  if not options.config or not exists(options.config):
    print >>sys.stderr, 'Failed to find configuration file'
    sys.exit(1)
  
  config_parser = ConfigParser()
  try:
    config_parser.read(options.config)
  except:
    print >>sys.stderr, 'Failed to read configuration file'
    sys.exit(1)

  """
  Verify some sections and variables that must exist in the configuration file
  """
  config_vars = {
    'manager': ['TYPE', 'BOOTSTRAP', 'LOG_FILE',
                'CREDIT_URL', 'TERMINATE_URL', 'SERVICE_ID'],
    'iaas': ['DRIVER'],
  }
  config_ok = True
  for section in config_vars:
    if not config_parser.has_section(section):
      print >>sys.stderr, 'Missing configuration section "%s"' % (section)
      print >>sys.stderr, 'Section "%s" should contain variables %s' % (section, str(config_vars[section]))
      config_ok = False
      continue
    for field in config_vars[section]:
      if not config_parser.has_option(section, field)\
      or config_parser.get(section, field) == '':
        print >>sys.stderr, 'Missing configuration variable "%s" in section "%s"' % (field, section)
        config_ok = False
  if not config_ok:
    sys.exit(1)
  
  # Initialize the context for the client   
  client.conpaas_init_ssl_ctx(config_parser.get('manager', 'CERT_DIR'),
                                    'manager', config_parser.get('manager', 'USER_ID'),
                                    config_parser.get('manager', 'SERVICE_ID'))

  # Start the manager server
  print options.address, options.port
  d = server.ConpaasSecureServer((options.address, options.port),
                                 config_parser,
                                 'manager',
                                 reset_config=True)
  d.serve_forever()

  • Files specific to the Agent:

    They are similar to the files described above for the manager, but this time the contextualization file is generated by the manager.

Scripts and config files directory structure

Below you can find the directory structure of the scripts and configuration files described above.

ConPaaS/  (conpaas/conpaas-services/)
│── config
│   │── agent
│   │   │── default-agent.cfg
│   │   │── galera-agent.cfg
│   │   │── helloworld-agent.cfg
│   │   │── htc-agent.cfg
│   │   │── htcondor.cfg
│   │   │── mapreduce-agent.cfg
│   │   │── scalaris-agent.cfg
│   │   │── web-agent.cfg
│   │   │── xtreemfs-agent.cfg
│   │── cloud
│   │   │── clouds-template.cfg
│   │   │── ec2.cfg
│   │   │── ec2.cfg.example
│   │   │── opennebula.cfg
│   │   │── opennebula.cfg.example
│   │── ganglia
│   │   │── ganglia_frontend.tmpl
│   │   │── ganglia-gmetad.tmpl
│   │   │── ganglia-gmond.tmpl
│   │── ipop
│   │   │── bootstrap.config.tmpl
│   │   │── dhcp.config.tmpl
│   │   │── ipop.config.tmpl
│   │   │── ipop.vpn.config.tmpl
│   │   │── node.config.tmpl
│   │── manager
│       │── default-manager.cfg
│       │── htc-manager.cfg
│       │── htcondor.cfg
│       │── java-manager.cfg
│       │── php-manager.cfg
│── sbin
│   │── agent
│   │   │── default-cpsagent
│   │   │── web-cpsagent
│   │── manager
│       │── default-cpsmanager
│       │── php-cpsmanager
│       │── taskfarm-cpsmanager
│── scripts
    │── agent
    │   │── agent-setup
    │   │── default-agent-start
    │   │── htc-agent-start
    │   │── htcondor-agent-start
    │   │── mapreduce-agent-start
    │   │── scalaris-agent-start
    │   │── selenium-agent-start
    │   │── taskfarm-agent-start
    │   │── web-agent-start
    │   │── xtreemfs-agent-start
    │── cloud
    │   │── dummy
    │   │── ec2
    │   │── federation
    │   │── opennebula
    │   │── openstack
    │── create_vm
    │   │── 40_custom
    │   │── create-img-conpaas.sh
    │   │── create-img-script.cfg
    │   │── create-img-script.py
    │   │── README
    │   │── register-image-ec2-ebs.sh
    │   │── register-image-ec2-s3.sh
    │   │── register-image-opennebula.sh
    │   │── scripts
    │       │── 000-head
    │       │── 003-create-image
    │       │── 004-conpaas-core
    │       │── 501-php
    │       │── 502-galera
    │       │── 503-condor
    │       │── 504-selenium
    │       │── 505-hadoop
    │       │── 506-scalaris
    │       │── 507-xtreemfs
    │       │── 508-cds
    │       │── 995-rm-unused-pkgs
    │       │── 996-user
    │       │── 997-tail
    │       │── 998-ec2
    │       │── 998-opennebula
    │       │── 999-resize-image
    │── manager
        │── cds-manager-start
        │── default-git-deploy-hook
        │── default-manager-start
        │── htc-manager-start
        │── htcondor-manager-start
        │── java-manager-start
        │── manager-setup
        │── notify_git_push.py
        │── php-manager-start
        │── taskfarm-manager-start

Implementing a new ConPaaS service using blueprints

Blueprints are service templates you can use to speed up the creation of a new service. You can use this blueprinting mechanism with create-new-service-from-blueprints.sh.

The conpaas-blueprints tree contains the following files:

conpaas-blueprints
│── conpaas-client
│   │── cps
│       │── blueprint.py
│── conpaas-frontend
│   │── www
│       │── images
│       │   │── blueprint.png
│       │── js
│       │   │── blueprint.js
│       │── lib
│           │── service
│           │   │── blueprint
│           │       │── __init__.php
│           │── ui
│               │── instance
│               │   │── blueprint
│               │       │── __init__.php
│               │── page
│                   │── blueprint
│                       │── __init__.php
│── conpaas-services
    │── scripts
    │   │── create_vm
    │       │── scripts
    │           │── 5xx-blueprint
    │── src
        │── conpaas
            │── services
                │── blueprint
                    │── agent
                    │   │── agent.py
                    │   │── client.py
                    │   │── __init__.py
                    │── __init__.py
                    │── manager
                        │── client.py
                        │── __init__.py
                        │── manager.py

Edit create-new-service-from-blueprints.sh and change the following lines to set up the script:

BP_lc_name=foobar                  # Lowercase service name in the tree
BP_mc_name=FooBar                  # Mixedcase service name in the tree
BP_uc_name=FOOBAR                  # Uppercase service name in the tree
BP_bp_name='Foo Bar'               # Selection name as shown on the frontend  create.php  page
BP_bp_desc='My new FooBar Service' # Description as shown on the frontend  create.php  page
BP_bp_num=511                      # Service sequence number for
                                   # conpaas-services/scripts/create_vm/create-img-script.cfg
                                   # Please look in conpaas-services/scripts/create_vm/scripts
                                   # for the first available number

Running the script in the ConPaaS root copies the files from the tree above to the appropriate places in the conpaas-client, conpaas-frontend and conpaas-services trees. In the process of copying, the keywords above are replaced by the values you entered, and files and directories named *blueprint* are renamed after the new service. Furthermore, the following files are adjusted similarly:

conpaas-services/src/conpaas/core/services.py
conpaas-frontend/www/create.php
conpaas-frontend/www/lib/ui/page/PageFactory.php
conpaas-frontend/www/lib/service/factory/__init__.php
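The keyword substitution can be modeled as follows. This is a Python sketch of what the bash script does; it assumes the tree uses the spellings blueprint, BluePrint and BLUEPRINT for the lowercase, mixed-case and uppercase placeholders (the file renaming step is omitted).

```python
def instantiate_blueprint(text, lc_name, mc_name, uc_name):
    """Replace the blueprint placeholders with the chosen service name,
    mirroring the BP_lc_name / BP_mc_name / BP_uc_name settings."""
    return (text.replace('BLUEPRINT', uc_name)
                .replace('BluePrint', mc_name)
                .replace('blueprint', lc_name))
```

For instance, with BP_lc_name=foobar, BP_mc_name=FooBar and BP_uc_name=FOOBAR, the placeholder text "blueprint BLUEPRINT" becomes "foobar FOOBAR".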

Now you are ready to set up the specifics for your service. In most newly created files you will find the following comment:

*TODO: as this file was created from a BLUEPRINT file, you may want to
change ports, paths and/or methods (e.g. for hub) to meet your specific
service/server needs*.

So it’s a good idea to do just that.

Implementing a new ConPaaS service by hand

In this section we describe how to implement a new ConPaaS service, using an example which can serve as a starting point. The new service is called helloworld and will just generate helloworld strings. Thus, the manager provides a method called get_helloworld which asks all the agents to return a 'helloworld' string (or another string chosen by the manager).

We start by implementing the agent. We create a class called HelloWorldAgent, which implements the required method, get_helloworld, and put it in conpaas/services/helloworld/agent/agent.py (note: create the directory structure as needed and provide empty __init__.py files so that the directories are recognized as Python packages). As you can see in Listing 7, this class uses functionality provided by the conpaas.core package. The conpaas.core.expose module provides a Python decorator (@expose) that exposes the methods that the agent server dispatches HTTP requests to. Behind the scenes, the decorator fills in a dictionary mapping the HTTP request types GET, POST and UPLOAD to the exposed methods; the built-in server in the conpaas.core package uses this dictionary to dispatch HTTP requests. The module conpaas.core.https.server contains useful classes, such as HttpJsonResponse and HttpErrorResponse, that are used to respond to the HTTP request dispatched to the corresponding method. In this class we also implement a method called startup, which only changes the state of the agent; it could be used, for example, to perform some initializations in the agent. We describe the use of the other method, check_agent_process, later.
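To illustrate the mechanism, here is a minimal re-implementation of such a decorator. This is for illustration only; the real conpaas.core.expose code differs in its details.

```python
# Dispatch table filled in as classes are defined; the server consults it
# to route an incoming request to the method with the matching name.
exposed_functions = {'GET': {}, 'POST': {}, 'UPLOAD': {}}

def expose(http_method):
    """Decorator: register the wrapped method under the given HTTP verb."""
    def decorator(func):
        exposed_functions[http_method][func.__name__] = func
        return func
    return decorator

class DemoAgent(object):
    @expose('GET')
    def get_helloworld(self, kwargs):
        return {'result': 'helloworld'}
```

After the class body executes, exposed_functions['GET'] contains an entry for get_helloworld, so a GET request naming that method can be dispatched without any explicit routing table in the service code.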

Listing 7: conpaas/services/helloworld/agent/agent.py

from conpaas.core.expose import expose

from conpaas.core.https.server import HttpJsonResponse, HttpErrorResponse

from conpaas.core.agent import BaseAgent

class HelloWorldAgent(BaseAgent):
    def __init__(self,
                 config_parser, # config file
                 **kwargs):     # anything you can't send in config_parser 
                                # (hopefully the new service won't need anything extra)
      BaseAgent.__init__(self, config_parser)
      self.gen_string = config_parser.get('agent', 'STRING_TO_GENERATE')

    @expose('POST')
    def startup(self, kwargs):
      self.state = 'RUNNING'
      self.logger.info('Agent started up')
      return HttpJsonResponse()

    @expose('GET')
    def get_helloworld(self, kwargs):
      if self.state != 'RUNNING':
        return HttpErrorResponse('ERROR: Wrong state to get_helloworld')
      return HttpJsonResponse({'result':self.gen_string})
 

Let’s assume that the manager wants each agent to generate a different string. The agent must then be told which string to generate. To do this, we could either implement a method inside the agent that receives the required string, or specify the string in the configuration file with which the agent is started. We opted for the second approach, both to illustrate how a service can make use of the config files and because some service agents/managers may need such information before they are started.

Therefore, we provide the helloworld-agent.cfg file (see Listing 8), which will be concatenated to the default-agent.cfg file. It contains a variable ($STRING) which will be replaced by the manager.

Listing 8: ConPaaS/config/agent/helloworld-agent.cfg

STRING_TO_GENERATE = $STRING
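The substitution of $STRING can be sketched with a standard template expansion, which is conceptually what the manager does when it builds each agent's contextualization file. This is a simplified model using Python's string.Template, not the actual ConPaaS code.

```python
from string import Template

def fill_context(template_text, replacements):
    """Replace $VAR placeholders in a config template, as the manager does
    when preparing an agent's contextualization file."""
    return Template(template_text).substitute(replacements)
```

For example, filling the Listing 8 template with {'STRING': 'helloworld'} yields the line "STRING_TO_GENERATE = helloworld" in the agent's config.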

Now let’s implement an HTTP client for this new agent server (see Listing 9). This client will be used by the manager as a wrapper to easily send requests to the agent. We use some helper methods from conpaas.core.https to send JSON objects to the agent server.

Listing 9: conpaas/services/helloworld/agent/client.py

import json
import httplib

from conpaas.core import https 

def _check(response):
  code, body = response
  if code != httplib.OK: raise Exception('Received http response code %d' % (code))
  data = json.loads(body)
  if data['error']: raise Exception(data['error'])
  else: return data['result']

def check_agent_process(host, port):
  method = 'check_agent_process'
  return _check(https.client.jsonrpc_get(host, port, '/', method))

def startup(host, port):
  method = 'startup'
  return _check(https.client.jsonrpc_post(host, port, '/', method))

def get_helloworld(host, port):
  method = 'get_helloworld'
  return _check(https.client.jsonrpc_get(host, port, '/', method))

Next, we implement the manager in the same manner: we write the HelloWorldManager class and place it in conpaas/services/helloworld/manager/manager.py (see Listing 10). To make use of the IaaS abstractions, we need to instantiate a Controller, which handles all requests to the clouds on which ConPaaS is running. Note the lines:

1: self.controller = Controller(config_parser)
2: self.controller.generate_context('helloworld')

The first line instantiates a Controller. The controller maintains a list of cloud objects generated from the config_parser file. The controller provides several functions, documented in the doxygen documentation of controller.py. The most important ones, which are also used in the Hello World service implementation, are: generate_context (generates a template of the contextualization file); update_context (takes the contextualization template and replaces the variables with the supplied values); create_nodes (requests additional nodes from the specified cloud, or the default one) and delete_nodes (deletes the specified nodes).

Note that the create_nodes function accepts as a parameter a function (in our case check_agent_process) that tests whether the agent process started correctly in the agent VM. If calls to this function keep raising exceptions for a given period of time, the manager assumes that the agent process did not start correctly and tries to start the agent process on a different agent VM.
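The retry behaviour can be sketched as a poll loop with a deadline. This is a simplified model; the actual timings and error handling inside create_nodes are internal to the controller, and the default values below are made up.

```python
import time

def wait_until_up(check, timeout=60.0, interval=2.0):
    """Call `check()` repeatedly until it succeeds or `timeout` elapses.

    `check` plays the role of check_agent_process: it raises while the
    agent is not yet reachable. Returns True on success, False if the
    agent never came up (at which point the manager would retry on a
    different VM).
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            check()
            return True
        except Exception:
            time.sleep(interval)
    return False
```

A check that succeeds on its third attempt makes the loop return True after two failed probes; a check that always raises makes it return False once the deadline passes.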

Listing 10: conpaas/services/helloworld/manager/manager.py

from threading import Thread

from conpaas.core.expose import expose
from conpaas.core.manager import BaseManager

from conpaas.core.https.server import HttpJsonResponse, HttpErrorResponse

from conpaas.services.helloworld.agent import client

class HelloWorldManager(BaseManager):

    # Manager states - Used by the Director
    S_INIT = 'INIT'         # manager initialized but not yet started
    S_PROLOGUE = 'PROLOGUE' # manager is starting up
    S_RUNNING = 'RUNNING'   # manager is running
    S_ADAPTING = 'ADAPTING' # manager is in a transient state - frontend will keep
                            # polling until manager out of transient state
    S_EPILOGUE = 'EPILOGUE' # manager is shutting down
    S_STOPPED = 'STOPPED'   # manager stopped
    S_ERROR = 'ERROR'       # manager is in error state

    AGENT_PORT = 5555

    def __init__(self, config_parser, **kwargs):
        BaseManager.__init__(self, config_parser)
        self.nodes = []
        # Setup the clouds' controller
        self.controller.generate_context('helloworld')
        self.state = self.S_INIT

    def _do_startup(self, cloud):
        startCloud = self._init_cloud(cloud)

        self.controller.add_context_replacement(dict(STRING='helloworld'))

        try:
            nodes = self.controller.create_nodes(1,
                client.check_agent_process, self.AGENT_PORT, startCloud)

            node = nodes[0]

            client.startup(node.ip, self.AGENT_PORT)

            # Extend the nodes list with the newly created one
            self.nodes += nodes
            self.state = self.S_RUNNING
        except Exception, err:
            self.logger.exception('_do_startup: Failed to create node: %s' % err)
            self.state = self.S_ERROR

    @expose('POST')
    def shutdown(self, kwargs):
        self.state = self.S_EPILOGUE
        Thread(target=self._do_shutdown, args=[]).start()
        return HttpJsonResponse()

    def _do_shutdown(self):
        self.controller.delete_nodes(self.nodes)
        self.nodes = []
        self.state = self.S_STOPPED

    @expose('POST')
    def add_nodes(self, kwargs):
        if self.state != self.S_RUNNING:
            return HttpErrorResponse('ERROR: Wrong state to add_nodes')

        if 'node' in kwargs:
            kwargs['count'] = kwargs['node']

        if 'count' not in kwargs:
            return HttpErrorResponse("ERROR: Required argument doesn't exist")

        if not isinstance(kwargs['count'], int):
            return HttpErrorResponse('ERROR: Expected an integer value for "count"')

        count = int(kwargs['count'])

        cloud = kwargs.pop('cloud', 'iaas')
        try:
            cloud = self._init_cloud(cloud)
        except Exception as ex:
            return HttpErrorResponse(
                "A cloud named '%s' could not be found" % cloud)

        self.state = self.S_ADAPTING
        Thread(target=self._do_add_nodes, args=[count, cloud]).start()
        return HttpJsonResponse()

    def _do_add_nodes(self, count, cloud):
        node_instances = self.controller.create_nodes(count,
                client.check_agent_process, self.AGENT_PORT, cloud)

        self.nodes += node_instances
        # Startup agents
        for node in node_instances:
            client.startup(node.ip, self.AGENT_PORT)

        self.state = self.S_RUNNING

    @expose('GET')
    def list_nodes(self, kwargs):
        if len(kwargs) != 0:
            return HttpErrorResponse('ERROR: Arguments unexpected')

        if self.state != self.S_RUNNING:
            return HttpErrorResponse('ERROR: Wrong state to list_nodes')

        return HttpJsonResponse({
              'helloworld': [ node.id for node in self.nodes ],
              })

    @expose('GET')
    def get_service_info(self, kwargs):
        if len(kwargs) != 0:
            return HttpErrorResponse('ERROR: Arguments unexpected')

        return HttpJsonResponse({'state': self.state, 'type': 'helloworld'})

    @expose('GET')
    def get_node_info(self, kwargs):
        if 'serviceNodeId' not in kwargs:
            return HttpErrorResponse('ERROR: Missing arguments')

        serviceNodeId = kwargs.pop('serviceNodeId')

        if len(kwargs) != 0:
            return HttpErrorResponse('ERROR: Arguments unexpected')

        serviceNode = None
        for node in self.nodes:
            if serviceNodeId == node.id:
                serviceNode = node
                break

        if serviceNode is None:
            return HttpErrorResponse('ERROR: Invalid arguments')

        return HttpJsonResponse({
            'serviceNode': {
                            'id': serviceNode.id,
                            'ip': serviceNode.ip
                            }
            })

    @expose('POST')
    def remove_nodes(self, kwargs):
        if self.state != self.S_RUNNING:
            return HttpErrorResponse('ERROR: Wrong state to remove_nodes')

        if 'node' in kwargs:
            kwargs['count'] = kwargs['node']

        if 'count' not in kwargs:
            return HttpErrorResponse("ERROR: Required argument doesn't exist")

        if not isinstance(kwargs['count'], int):
            return HttpErrorResponse('ERROR: Expected an integer value for "count"')

        count = int(kwargs['count'])
        self.state = self.S_ADAPTING
        Thread(target=self._do_remove_nodes, args=[count]).start()
        return HttpJsonResponse()

    def _do_remove_nodes(self, count):
        for _ in range(0, count):
            self.controller.delete_nodes([ self.nodes.pop() ])

        self.state = self.S_RUNNING

    @expose('GET')
    def get_helloworld(self, kwargs):
        if self.state != self.S_RUNNING:
            return HttpErrorResponse('ERROR: Wrong state to get_helloworld')

        messages = []

        # Just get_helloworld from all the agents
        for node in self.nodes:
            data = client.get_helloworld(node.ip, self.AGENT_PORT)
            message = 'Received %s from %s' % (data['result'], node.id)
            self.logger.info(message)
            messages.append(message)

        return HttpJsonResponse({ 'helloworld': "\n".join(messages) })

We can also implement a client for the manager server (see Listing 11). This allows us to send requests to the manager from the command line when the frontend integration is not available.

Listing 11: conpaas/services/helloworld/manager/client.py

import httplib
import json

from conpaas.core.http import HttpError, _jsonrpc_get, _jsonrpc_post, \
    _http_post, _http_get

def _check(response):
    code, body = response
    if code != httplib.OK:
        raise HttpError('Received http response code %d' % code)
    data = json.loads(body)
    if data['error']:
        raise Exception(data['error'])
    return data['result']

def get_service_info(host, port):
    method = 'get_service_info'
    return _check(_jsonrpc_get(host, port, '/', method))

def get_helloworld(host, port):
    method = 'get_helloworld'
    return _check(_jsonrpc_get(host, port, '/', method))

def startup(host, port):
    method = 'startup'
    return _check(_jsonrpc_get(host, port, '/', method))

def add_nodes(host, port, count=0):
    method = 'add_nodes'
    params = {'count': count}
    return _check(_jsonrpc_post(host, port, '/', method, params=params))

def remove_nodes(host, port, count=0):
    method = 'remove_nodes'
    params = {'count': count}
    return _check(_jsonrpc_post(host, port, '/', method, params=params))

def list_nodes(host, port):
    method = 'list_nodes'
    return _check(_jsonrpc_get(host, port, '/', method))

The last step is to register the new service with the ConPaaS core. One entry must be added to the file conpaas/core/services.py, as indicated in Listing 12. Because the Java and PHP services share the same agent code, there is a single entry in the agent services, called web, which is used by both web services.

Listing 12: conpaas/core/services.py

# -*- coding: utf-8 -*-

"""
    conpaas.core.services
    =====================

    ConPaaS core: map available services to their classes.

    :copyright: (C) 2010-2013 by Contrail Consortium.
"""

manager_services = {'php'    : {'class' : 'PHPManager', 
                                'module': 'conpaas.services.webservers.manager.internal.php'},
                    'java'   : {'class' : 'JavaManager',
                                'module': 'conpaas.services.webservers.manager.internal.java'},
                    'scalaris' : {'class' : 'ScalarisManager',
                                  'module': 'conpaas.services.scalaris.manager.manager'},
                    'hadoop' : {'class' : 'MapReduceManager',
                                'module': 'conpaas.services.mapreduce.manager.manager'},
                    'helloworld' : {'class' : 'HelloWorldManager',
                                    'module': 'conpaas.services.helloworld.manager.manager'},
                    'xtreemfs' : {'class' : 'XtreemFSManager',
                                  'module': 'conpaas.services.xtreemfs.manager.manager'},
                    'selenium' : {'class' : 'SeleniumManager',
                                  'module': 'conpaas.services.selenium.manager.manager'},
                    'taskfarm' : {'class' : 'TaskFarmManager',
                                  'module': 'conpaas.services.taskfarm.manager.manager'},
                    'galera' : {'class' : 'GaleraManager',
                                'module': 'conpaas.services.galera.manager.manager'},

#                    'htcondor' : {'class' : 'HTCondorManager',
#                                  'module': 'conpaas.services.htcondor.manager.manager'},
                    'htc' : {'class' : 'HTCManager',
                                  'module': 'conpaas.services.htc.manager.manager'},
                    'generic' : {'class' : 'GenericManager',
                                  'module': 'conpaas.services.generic.manager.manager'},

#""" BLUE_PRINT_INSERT_MANAGER 		do not remove this line: it is a placeholder for installing new services """
                    }

agent_services = {'web' : {'class' : 'WebServersAgent',
                           'module': 'conpaas.services.webservers.agent.internals'},
                  'scalaris' : {'class' : 'ScalarisAgent',
                                'module': 'conpaas.services.scalaris.agent.agent'},
                  'mapreduce' : {'class' : 'MapReduceAgent',
                                 'module': 'conpaas.services.mapreduce.agent.agent'},
                  'helloworld' : {'class' : 'HelloWorldAgent',
                                  'module': 'conpaas.services.helloworld.agent.agent'},
                  'xtreemfs' : {'class' : 'XtreemFSAgent',
                                'module': 'conpaas.services.xtreemfs.agent.agent'},
                  'selenium' : {'class' : 'SeleniumAgent',
                                'module': 'conpaas.services.selenium.agent.agent'},
                  'galera' : {'class' : 'GaleraAgent',
                              'module': 'conpaas.services.galera.agent.internals'},

#                  'htcondor' : {'class' : 'HTCondorAgent',
#                                'module': 'conpaas.services.htcondor.agent.agent'},
                  'htc' : {'class' : 'HTCAgent',
                                'module': 'conpaas.services.htc.agent.agent'},
                  'generic' : {'class' : 'GenericAgent',
                                'module': 'conpaas.services.generic.agent.agent'},
#""" BLUE_PRINT_INSERT_AGENT 		do not remove this line: it is a placeholder for installing new services """
                  }
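The core uses these dictionaries to locate and load the right class at service-creation time. The lookup amounts to a dynamic import keyed on the service type; the sketch below illustrates the mechanism (it is not the actual core code, and the stdlib class is only a self-contained stand-in for a real manager class):

```python
import importlib

# Hypothetical registry entry; collections.OrderedDict stands in for a
# real manager class so the example is self-contained.
registry = {'demo': {'class': 'OrderedDict',
                     'module': 'collections'}}

def get_service_class(service_type, services):
    # Resolve the module path and pull the named class out of it,
    # mirroring how entries in manager_services/agent_services are used.
    entry = services[service_type]
    module = importlib.import_module(entry['module'])
    return getattr(module, entry['class'])

print(get_service_class('demo', registry).__name__)  # OrderedDict
```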

Integrating the new service with the frontend

So far there is no easy way to add a new frontend service. Each service may require distinct graphical elements. In this section we explain how the Hello World frontend service has been created.

Manager states

As you have noticed in the Hello World manager implementation, we used a set of standard states, such as INIT and ADAPTING. By calling the get_service_info function, the frontend learns which state the manager is in. Why do we need these standardized states? If, for example, the manager is in the ADAPTING state, the frontend knows to draw a loading icon in the interface and to keep polling the manager.
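The polling behaviour can be sketched as follows. This is illustrative only: the real frontend is written in PHP, and poll_fn stands in for a get_service_info request to the manager (the function and parameter names here are hypothetical).

```python
import time

# States during which the frontend keeps polling (names from Listing 10).
TRANSIENT_STATES = ('PROLOGUE', 'EPILOGUE', 'ADAPTING')

# Sketch of the frontend's polling loop; poll_fn stands in for an HTTP
# call to the manager's get_service_info. Illustrative only: the actual
# frontend code is PHP.
def wait_for_stable_state(poll_fn, interval=0.0, max_polls=100):
    for _ in range(max_polls):
        state = poll_fn()['state']
        if state not in TRANSIENT_STATES:
            return state
        time.sleep(interval)
    raise RuntimeError('manager stuck in a transient state')
```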

Files to be modified

frontend
│── www
    │── create.php
    │── lib
        │── service
            │── factory
                │── __init__.php

Several lines of code must be added to the two files above for the new service to be recognized. If you look inside these files, where to add the lines and what they should contain is self-explanatory.

Files to be added

frontend
│── www
    │── lib
    |   │── service
    |   |   │── helloworld
    |   |       │── __init__.php
    |   │── ui
    |       │── instance
    |           │── helloworld
    |               │── __init__.php
    │── images
        │── helloworld.png