Internals¶
Introduction¶
A ConPaaS service consists of three main entities: the manager, the agent and the frontend. The (primary) manager resides in the first VM, which is started by the frontend when the service is created. Its role is to manage the service: it provisions and supports agents, maintains a stable configuration at all times and continuously monitors the service’s performance. An agent resides on each of the other VMs, which are started by the manager. The agents are the ones that do the actual work. Note that a service may contain one manager and multiple agents, or multiple managers that also act as agents.
To implement a new ConPaaS service, you must provide a new manager service, a new agent service and a new frontend service (we assume that each ConPaaS service can be mapped onto this three-entity architecture). To ease the process of adding a new ConPaaS service, we provide a framework which implements functionality common to all ConPaaS services. So far, the framework provides an abstraction for the IaaS layer (adding support for a new cloud provider should not require modifications to any ConPaaS service implementation) and an abstraction for HTTP communication (we assume that HTTP is the preferred protocol for communication between the three entities).
ConPaaS directory structure¶
You can see below the directory structure of the ConPaaS software. The core folder under src contains the ConPaaS framework. Any service should make use of this code. It contains the manager HTTP server, which instantiates the Python manager class that implements the required service; the agent HTTP server, which instantiates the Python agent class (if the service requires agents); the IaaS abstractions and other useful code.
A new service should be added in a new python module under the ConPaaS/src/conpaas/services folder:
ConPaaS/ (conpaas/conpaas-services/)
├── src
│   ├── conpaas
│   │   ├── core
│   │   │   ├── clouds
│   │   │   │   ├── base.py
│   │   │   │   ├── dummy.py
│   │   │   │   ├── ec2.py
│   │   │   │   ├── federation.py
│   │   │   │   ├── opennebula.py
│   │   │   │   └── openstack.py
│   │   │   ├── agent.py
│   │   │   ├── controller.py
│   │   │   ├── expose.py
│   │   │   ├── file.py
│   │   │   ├── ganglia.py
│   │   │   ├── git.py
│   │   │   ├── https
│   │   │   ├── iaas.py
│   │   │   ├── ipop.py
│   │   │   ├── log.py
│   │   │   ├── manager.py
│   │   │   ├── manager.py.generic_add_nodes
│   │   │   ├── misc.py
│   │   │   ├── node.py
│   │   │   └── services.py
│   │   └── services
│   │       ├── cds/
│   │       ├── galera/
│   │       ├── helloworld/
│   │       ├── htc/
│   │       ├── htcondor/
│   │       ├── mapreduce/
│   │       ├── scalaris/
│   │       ├── selenium/
│   │       ├── taskfarm/
│   │       ├── webservers/
│   │       └── xtreemfs/
│   ├── dist
│   ├── libcloud -> ../contrib/libcloud/
│   ├── setup.py
│   └── tests
│       ├── core
│       ├── run_tests.py
│       ├── services
│       └── unit-tests.sh
├── config
├── contrib
├── misc
├── sbin
└── scripts
In the next paragraphs we describe how to add a new ConPaaS service.
Service Organization¶
Service’s name¶
The first step in adding a new ConPaaS service is to choose a name for it. This name will be used to construct, in a standardized manner, the file names of the scripts required by the service (see below). Therefore, the name should not contain spaces or other characters that are not accepted in file names.
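For example, a service named helloworld provides its agent configuration in config/agent/helloworld-agent.cfg, and a service named taskfarm provides its manager start script as scripts/manager/taskfarm-manager-start (both appear in the directory tree shown later in this section).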
Scripts¶
To function properly, ConPaaS uses a series of configuration files and scripts. Some of them must be modified by the administrator (i.e. the ones concerning the cloud infrastructure), while the others are used, ideally unchanged, by the manager and/or the agent. A newly added service would ideally function with the default scripts. If, however, the default scripts are not sufficient (for example, the new service needs to start something on the VM, like a memcache server), then the developers must supply a new script/config file that is used instead of the default one. The name of this new script must be prefixed with the service’s chosen name (as described above); it will be selected by the frontend at run time to generate the contextualization file for the manager VM. (If the frontend doesn’t find such a script/config file for a given service, it uses the default one.) Note that some files provided for a service do not replace the default ones; instead, they are concatenated to them (see below the agent and manager configuration files).
Below we explain the scripts and configuration files used by a ConPaaS service (there are other configuration files used by the frontend, but these are not relevant to the ConPaaS service). Basically, a service uses two scripts to boot itself up: the manager contextualization script, which is executed after the manager VM has booted, and the agent contextualization script, which is executed after the agent VM has booted. These scripts are composed of several parts, some of which can be customized to the needs of the new service.
In the ConPaaS home folder (CONPAAS_HOME) there is the config folder, which contains configuration files in the INI format, and the scripts folder, which contains executable bash scripts. Some of these files are specific to the cloud, others to the manager and the rest to the agent. These files will be concatenated into a single contextualization script, as described below.
Files specific to the Cloud:
(1) CONPAAS_HOME/config/cloud/cloud_name.cfg, where cloud_name refers to the clouds supported by the system (for now OpenNebula and EC2). So there is one such file for each cloud the system supports. These files are filled in by the administrator. They contain information such as the username and password to access the cloud, the OS image to be used with the VMs, etc. These files are used by the frontend and the manager, as both need to ask the cloud to start VMs.
(2) CONPAAS_HOME/scripts/cloud/cloud_name, where cloud_name refers to the clouds supported by the system (for now OpenNebula and EC2). So, as above, there is one such file for each cloud the system supports. These scripts will be included in the contextualization files. For example, for OpenNebula, this file sets up the network.
Files specific to the Manager:
(3) CONPAAS_HOME/scripts/manager/manager-setup, which prepares the environment by copying the ConPaaS source code on the VM, unpacking it, and setting up the PYTHONPATH environment variable.
(4) CONPAAS_HOME/config/manager/service_name-manager.cfg, which contains configuration variables specific to the service manager (in INI format). If the new service needs any other variables (like a path to a file in the source code), it should provide an annex to the default manager config file. This annex must be named service_name-manager.cfg and will be concatenated to default-manager.cfg.
(5) CONPAAS_HOME/scripts/manager/service_name-manager-start, which starts the server manager and any other programs the service manager might use.
(6) CONPAAS_HOME/sbin/manager/service_name-cpsmanager (started by the service_name-manager-start script), which starts the manager server; the manager server in turn starts the requested manager service.
Files (1), (2), (3), (4) and (5) will be used by the frontend to generate the contextualization script for the manager VM. After this script executes, a configuration file containing the concatenation of (1) and (4) is placed in ROOT_DIR/config.cfg, and then (6) is started with the config.cfg file as a parameter, which is forwarded to the new service.
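As an illustration, the assembly just described could be sketched in Python as follows. This is a simplified sketch of the behaviour, not the actual frontend code; the function name and the exact fallback logic are assumptions:

import os

def build_manager_context(cps_home, cloud_name, service_name):
    """Hypothetical sketch: concatenate pieces (1)-(5) into one script."""
    def read(path):
        with open(path) as f:
            return f.read()

    def pick(default_path, service_path):
        # Service-specific scripts replace the default ones when present
        return service_path if os.path.exists(service_path) else default_path

    parts = [
        read(os.path.join(cps_home, 'config', 'cloud', cloud_name + '.cfg')),      # (1)
        read(os.path.join(cps_home, 'scripts', 'cloud', cloud_name)),              # (2)
        read(os.path.join(cps_home, 'scripts', 'manager', 'manager-setup')),       # (3)
        read(os.path.join(cps_home, 'config', 'manager', 'default-manager.cfg')),  # (4)
    ]
    annex = os.path.join(cps_home, 'config', 'manager', service_name + '-manager.cfg')
    if os.path.exists(annex):
        parts.append(read(annex))  # config annexes are concatenated, not substituted
    parts.append(read(pick(
        os.path.join(cps_home, 'scripts', 'manager', 'default-manager-start'),
        os.path.join(cps_home, 'scripts', 'manager',
                     service_name + '-manager-start'))))                           # (5)
    return '\n'.join(parts)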
Examples:
Listing 1: Script (1) ConPaaS/config/cloud/opennebula.cfg
[iaas]
DRIVER = OPENNEBULA
# The URL of the OCCI interface at OpenNebula. Note: ConPaaS currently
# supports only the default OCCI implementation that comes together
# with OpenNebula. It does not yet support the full OCCI-0.2 and later
# versions.
URL =
# TODO: Currently, the TaskFarming service uses XMLRPC to talk to Opennebula.
# This is the url to the server (Ex. http://dns.name.or.ip:2633/RPC2)
XMLRPC =
# Your OpenNebula user name
USER =
# Your OpenNebula password
PASSWORD =
# The image ID (an integer). You can list the registered OpenNebula
# images with the "oneimage list" command.
IMAGE_ID =
# OCCI defines 4 standard instance types: small, medium, large and custom. This
# variable should be set to one of these. (The small, medium and large instances
# have predefined memory size and CPU, but the custom one permits the
# customization of these parameters. The best option is to use the custom type,
# as some services, like map-reduce and mysql, must be able to start VMs with a
# given quantity of memory.)
INST_TYPE = custom
# The network ID (an integer). You can list the registered OpenNebula
# networks with the "onevnet list" command.
NET_ID =
# The network gateway through which new VMs can route their traffic in
# OpenNebula (an IP address)
NET_GATEWAY =
# The DNS server that VMs should use to resolve DNS names (an IP address)
NET_NAMESERVER =
# The OS architecture of the virtual machines.
# (corresponds to the OpenNebula "ARCH" parameter from the VM template)
OS_ARCH =
# The device that will be mounted as root on the VM. Most often it
# is "sda" or "hda" for KVM, and "xvda2" for Xen.
# (corresponds to the OpenNebula "ROOT" parameter from the VM template)
OS_ROOT =
# The device on which the VM image disk is mapped.
DISK_TARGET =
# The device associated with the CD-ROM on the virtual machine. This
# will be used for contextualization in OpenNebula. Most often it is
# "sr0" for KVM and "xvdb" for Xen.
# (corresponds to the OpenNebula "TARGET" parameter from the "CONTEXT"
# section of the VM template)
CONTEXT_TARGET =
####################################################################
# The following values are only needed by the Task Farming service #
####################################################################
PORT =
# A unique name used in the service to specify different clouds
HOSTNAME =
# The accountable time unit. Different clouds charge at different
# frequencies (e.g. Amazon charges per hour = 60 minutes)
TIMEUNIT =
# The price per TIMEUNIT of this specific machine type on this cloud
COSTUNIT =
# The maximum number of VMs that the system is allowed to allocate from this
# cloud
MAXNODES =
SPEEDFACTOR =
Listing 2: Script (2) ConPaaS/scripts/cloud/opennebula
#!/bin/bash
if [ -f /mnt/context.sh ]; then
    . /mnt/context.sh
fi
/sbin/ifconfig eth0 $IP_PUBLIC netmask $NETMASK
/sbin/ip route add default via $IP_GATEWAY
echo "nameserver $NAMESERVER" > /etc/resolv.conf
echo "prepend domain-name-servers $NAMESERVER;" >> /etc/dhcp/dhclient.conf
HOSTNAME=`/usr/bin/host $IP_PUBLIC | cut -d' ' -f5 | cut -d'.' -f1`
/bin/hostname $HOSTNAME
########################################################################################
# Create the one_auth file from contextualization variable ONE_AUTH_CONTENT
# and set it as an environment variable for the JVM
# This is needed for services that use XMLRPC instead of OCCI
if [ -n "$ONE_AUTH_CONTENT" ]; then
    export ONE_AUTH=/root/.one_auth
    export ONE_XMLRPC
    echo "$ONE_AUTH_CONTENT" > "$ONE_AUTH"
fi
# PCI Hotplug Support is needed in order to attach persistent storage volumes
# to this instance
/sbin/modprobe acpiphp
/sbin/modprobe pci_hotplug
Listing 3: Script (3) ConPaaS/scripts/manager/manager-setup
#!/bin/bash
# This script is part of the contextualization file. It
# copies the source code on the VM, unpacks it, and sets
# the PYTHONPATH environment variable.
# Is filled in by the director
DIRECTOR=%DIRECTOR_URL%
SOURCE=$DIRECTOR/download
ROOT_DIR=/root
CPS_HOME=$ROOT_DIR/ConPaaS
LOG_FILE=/var/log/cpsmanager.log
ETC=/etc/cpsmanager
CERT_DIR=$ETC/certs
VAR_TMP=/var/tmp/cpsmanager
VAR_CACHE=/var/cache/cpsmanager
VAR_RUN=/var/run/cpsmanager
mkdir -p $VAR_TMP
mkdir -p $VAR_CACHE
mkdir -p $VAR_RUN
mkdir $CERT_DIR
mv /tmp/*.pem $CERT_DIR
wget --ca-certificate=$CERT_DIR/ca_cert.pem -P $ROOT_DIR/ $SOURCE/ConPaaS.tar.gz
tar -zxf $ROOT_DIR/ConPaaS.tar.gz -C $ROOT_DIR/
export PYTHONPATH=$CPS_HOME/src/:$CPS_HOME/contrib/
Listing 4: Script (4) ConPaaS/config/manager/default-manager.cfg
[manager]
# Service TYPE will be filled in by the director
TYPE = %CONPAAS_SERVICE_TYPE%
BOOTSTRAP = $SOURCE
MY_IP = $IP_PUBLIC
# These are used by the manager to
# communicate with the director to:
# - decrement the number of credits the user has.
# (they are used when a VM has run for more than 1 hour)
# - request a new certificate from the CA
# Everything will be filled in by the director
DEPLOYMENT_NAME = %CONPAAS_DEPLOYMENT_NAME%
SERVICE_ID = %CONPAAS_SERVICE_ID%
USER_ID = %CONPAAS_USER_ID%
APP_ID = %CONPAAS_APP_ID%
CREDIT_URL = %DIRECTOR_URL%/callback/decrementUserCredit.php
TERMINATE_URL = %DIRECTOR_URL%/callback/terminateService.php
CA_URL = %DIRECTOR_URL%/ca/get_cert.php
IPOP_BASE_NAMESPACE = %DIRECTOR_URL%/ca/get_cert.php
# The following IPOP directives are added by the director if necessary
# IPOP_BASE_IP = %IPOP_BASE_IP%
# IPOP_NETMASK = %IPOP_NETMASK%
# IPOP_IP_ADDRESS = %IPOP_IP_ADDRESS%
# IPOP_SUBNET = %IPOP_SUBNET%
# This directory structure already exists in the VM (with ROOT = '') - see
# the 'create new VM script' so do not change ROOT unless you also modify
# it in the VM. Use these files/directories to put variable data that
# your manager might generate during its life cycle
LOG_FILE = $LOG_FILE
ETC = $ETC
CERT_DIR = $CERT_DIR
VAR_TMP = $VAR_TMP
VAR_CACHE = $VAR_CACHE
VAR_RUN = $VAR_RUN
CODE_REPO = %(VAR_CACHE)s/code_repo
CONPAAS_HOME = $CPS_HOME
# The default block device where the disks are attached to.
DEV_TARGET = sdb
# Add below other config params your manager might need and save a file as
# %service_name%-manager.cfg
# Otherwise this file will be used by default
Listing 5: Script (5) ConPaaS/scripts/manager/default-manager-start
#!/bin/bash
# This script is part of the contextualization file. It
# starts a python script that parses the given arguments
# and starts the manager server, which in turn will start
# the manager service.
# This file is the default manager-start file. It can be
# customized as needed by the service.
$CPS_HOME/sbin/manager/default-cpsmanager -c $ROOT_DIR/config.cfg 1>$ROOT_DIR/manager.out 2>$ROOT_DIR/manager.err &
manager_pid=$!
echo $manager_pid > $ROOT_DIR/manager.pid
Listing 6: Script (6) ConPaaS/sbin/manager/default-cpsmanager
#!/usr/bin/python
'''
Copyright (c) 2010-2012, Contrail consortium.
All rights reserved.
Redistribution and use in source and binary forms,
with or without modification, are permitted provided
that the following conditions are met:
1. Redistributions of source code must retain the
above copyright notice, this list of conditions
and the following disclaimer.
2. Redistributions in binary form must reproduce
the above copyright notice, this list of
conditions and the following disclaimer in the
documentation and/or other materials provided
with the distribution.
3. Neither the name of the Contrail consortium nor the
names of its contributors may be used to endorse
or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
Created on Jul 4, 2011
@author: ielhelw
'''
from os.path import exists

from conpaas.core.https import client, server

if __name__ == '__main__':
    from optparse import OptionParser
    from ConfigParser import ConfigParser
    import sys

    parser = OptionParser()
    parser.add_option('-p', '--port', type='int', default=443, dest='port')
    parser.add_option('-b', '--bind', type='string', default='0.0.0.0', dest='address')
    parser.add_option('-c', '--config', type='string', default=None, dest='config')
    options, args = parser.parse_args()

    if not options.config or not exists(options.config):
        print >>sys.stderr, 'Failed to find configuration file'
        sys.exit(1)

    config_parser = ConfigParser()
    try:
        config_parser.read(options.config)
    except:
        print >>sys.stderr, 'Failed to read configuration file'
        sys.exit(1)

    # Verify some sections and variables that must exist in the configuration file
    config_vars = {
        'manager': ['TYPE', 'BOOTSTRAP', 'LOG_FILE',
                    'CREDIT_URL', 'TERMINATE_URL', 'SERVICE_ID'],
        'iaas': ['DRIVER'],
    }
    config_ok = True
    for section in config_vars:
        if not config_parser.has_section(section):
            print >>sys.stderr, 'Missing configuration section "%s"' % (section)
            print >>sys.stderr, 'Section "%s" should contain variables %s' \
                % (section, str(config_vars[section]))
            config_ok = False
            continue
        for field in config_vars[section]:
            if not config_parser.has_option(section, field) \
                    or config_parser.get(section, field) == '':
                print >>sys.stderr, 'Missing configuration variable "%s" in section "%s"' \
                    % (field, section)
                config_ok = False
    if not config_ok:
        sys.exit(1)

    # Initialize the context for the client
    client.conpaas_init_ssl_ctx(config_parser.get('manager', 'CERT_DIR'),
                                'manager',
                                config_parser.get('manager', 'USER_ID'),
                                config_parser.get('manager', 'SERVICE_ID'))

    # Start the manager server
    print options.address, options.port
    d = server.ConpaasSecureServer((options.address, options.port),
                                   config_parser,
                                   'manager',
                                   reset_config=True)
    d.serve_forever()
Files specific to the Agent:
They are similar to the files described above for the manager, but this time the contextualization file is generated by the manager.
Scripts and config files directory structure¶
Below you can find the directory structure of the scripts and configuration files described above.
ConPaaS/ (conpaas/conpaas-services/)
├── config
│   ├── agent
│   │   ├── default-agent.cfg
│   │   ├── galera-agent.cfg
│   │   ├── helloworld-agent.cfg
│   │   ├── htc-agent.cfg
│   │   ├── htcondor.cfg
│   │   ├── mapreduce-agent.cfg
│   │   ├── scalaris-agent.cfg
│   │   ├── web-agent.cfg
│   │   └── xtreemfs-agent.cfg
│   ├── cloud
│   │   ├── clouds-template.cfg
│   │   ├── ec2.cfg
│   │   ├── ec2.cfg.example
│   │   ├── opennebula.cfg
│   │   └── opennebula.cfg.example
│   ├── ganglia
│   │   ├── ganglia_frontend.tmpl
│   │   ├── ganglia-gmetad.tmpl
│   │   └── ganglia-gmond.tmpl
│   ├── ipop
│   │   ├── bootstrap.config.tmpl
│   │   ├── dhcp.config.tmpl
│   │   ├── ipop.config.tmpl
│   │   ├── ipop.vpn.config.tmpl
│   │   └── node.config.tmpl
│   └── manager
│       ├── default-manager.cfg
│       ├── htc-manager.cfg
│       ├── htcondor.cfg
│       ├── java-manager.cfg
│       └── php-manager.cfg
├── sbin
│   ├── agent
│   │   ├── default-cpsagent
│   │   └── web-cpsagent
│   └── manager
│       ├── default-cpsmanager
│       ├── php-cpsmanager
│       └── taskfarm-cpsmanager
└── scripts
    ├── agent
    │   ├── agent-setup
    │   ├── default-agent-start
    │   ├── htc-agent-start
    │   ├── htcondor-agent-start
    │   ├── mapreduce-agent-start
    │   ├── scalaris-agent-start
    │   ├── selenium-agent-start
    │   ├── taskfarm-agent-start
    │   ├── web-agent-start
    │   └── xtreemfs-agent-start
    ├── cloud
    │   ├── dummy
    │   ├── ec2
    │   ├── federation
    │   ├── opennebula
    │   └── openstack
    ├── create_vm
    │   ├── 40_custom
    │   ├── create-img-conpaas.sh
    │   ├── create-img-script.cfg
    │   ├── create-img-script.py
    │   ├── README
    │   ├── register-image-ec2-ebs.sh
    │   ├── register-image-ec2-s3.sh
    │   ├── register-image-opennebula.sh
    │   └── scripts
    │       ├── 000-head
    │       ├── 003-create-image
    │       ├── 004-conpaas-core
    │       ├── 501-php
    │       ├── 502-galera
    │       ├── 503-condor
    │       ├── 504-selenium
    │       ├── 505-hadoop
    │       ├── 506-scalaris
    │       ├── 507-xtreemfs
    │       ├── 508-cds
    │       ├── 995-rm-unused-pkgs
    │       ├── 996-user
    │       ├── 997-tail
    │       ├── 998-ec2
    │       ├── 998-opennebula
    │       └── 999-resize-image
    └── manager
        ├── cds-manager-start
        ├── default-git-deploy-hook
        ├── default-manager-start
        ├── htc-manager-start
        ├── htcondor-manager-start
        ├── java-manager-start
        ├── manager-setup
        ├── notify_git_push.py
        ├── php-manager-start
        └── taskfarm-manager-start
Implementing a new ConPaaS service using blueprints¶
Blueprints are service templates you can use to speed up the creation of a new service. You can use this blueprinting mechanism with the create-new-service-from-blueprints.sh script.
The conpaas-blueprints tree contains the following files:
conpaas-blueprints
├── conpaas-client
│   └── cps
│       └── blueprint.py
├── conpaas-frontend
│   └── www
│       ├── images
│       │   └── blueprint.png
│       ├── js
│       │   └── blueprint.js
│       └── lib
│           ├── service
│           │   └── blueprint
│           │       └── __init__.php
│           └── ui
│               ├── instance
│               │   └── blueprint
│               │       └── __init__.php
│               └── page
│                   └── blueprint
│                       └── __init__.php
└── conpaas-services
    ├── scripts
    │   └── create_vm
    │       └── scripts
    │           └── 5xx-blueprint
    └── src
        └── conpaas
            └── services
                └── blueprint
                    ├── agent
                    │   ├── agent.py
                    │   ├── client.py
                    │   └── __init__.py
                    ├── __init__.py
                    └── manager
                        ├── client.py
                        ├── __init__.py
                        └── manager.py
Edit create-new-service-from-blueprints.sh and change the following lines to set up the script:
BP_lc_name=foobar # Lowercase service name in the tree
BP_mc_name=FooBar # Mixedcase service name in the tree
BP_uc_name=FOOBAR # Uppercase service name in the tree
BP_bp_name='Foo Bar' # Selection name as shown on the frontend create.php page
BP_bp_desc='My new FooBar Service' # Description as shown on the frontend create.php page
BP_bp_num=511 # Service sequence number for
# conpaas-services/scripts/create_vm/create-img-script.cfg
# Please look in conpaas-services/scripts/create_vm/scripts
# for the first available number
Running the script in the ConPaaS root will copy the files from the tree above to the appropriate places in the conpaas-client, conpaas-frontend and conpaas-services trees. In the process of copying, the above keywords will be replaced by the values you entered, and files and directories named *blueprint* will be replaced by the new service name. Furthermore, the following files will be adjusted similarly:
conpaas-services/src/conpaas/core/services.py
conpaas-frontend/www/create.php
conpaas-frontend/www/lib/ui/page/PageFactory.php
conpaas-frontend/www/lib/service/factory/__init__.php
Now you are ready to set up the specifics for your service. In most newly created files you will find the following comment
*TODO: as this file was created from a BLUEPRINT file, you may want to
change ports, paths and/or methods (e.g. for hub) to meet your specific
service/server needs*.
So it’s a good idea to do just that.
Implementing a new ConPaaS service by hand¶
In this section we describe how to implement a new ConPaaS service by providing an example which can be used as a starting point. The new service is called helloworld and will just generate helloworld strings. Thus, the manager will provide a method called get_helloworld, which will ask all the agents to return a ’helloworld’ string (or another string chosen by the manager).
We will start by implementing the agent. We will create a class called HelloWorldAgent, which implements the required method, get_helloworld, and put it in conpaas/services/helloworld/agent/agent.py (note: create the directory structure as needed and provide empty __init__.py files so that the directories are recognized as Python packages). As you can see in Listing 7, this class uses some functionality provided by the conpaas.core package. The conpaas.core.expose module provides a Python decorator (@expose) that can be used to expose the HTTP methods that the agent server dispatches. By using this decorator, a dictionary containing the methods for the HTTP requests GET, POST or UPLOAD is filled in behind the scenes. This dictionary is used by the built-in server in the conpaas.core package to dispatch the HTTP requests. The conpaas.core.https.server module contains some useful classes, like HttpJsonResponse and HttpErrorResponse, that are used to respond to the HTTP request dispatched to the corresponding method. In this class we also implemented a method called startup, which only changes the state of the agent. This method could be used, for example, to perform some initializations in the agent. We will describe later the use of the other method, check_agent_process.
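To give an intuition of what @expose does, here is a minimal sketch of such a decorator. This is an illustration of the mechanism described above, not the actual conpaas.core.expose code:

# Minimal sketch of an @expose-style decorator (illustrative only; the
# real implementation lives in conpaas.core.expose and may differ).
exposed_functions = {}

def expose(http_method):
    def decorator(func):
        # Remember which HTTP verb this method serves; the built-in
        # server consults this dictionary when dispatching requests.
        exposed_functions.setdefault(http_method, {})[func.__name__] = func
        return func
    return decorator

# Usage, as in Listing 7 below:
# @expose('GET')
# def get_helloworld(self, kwargs): ...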
Listing 7: conpaas/services/helloworld/agent/agent.py
from conpaas.core.expose import expose
from conpaas.core.https.server import HttpJsonResponse, HttpErrorResponse
from conpaas.core.agent import BaseAgent

class HelloWorldAgent(BaseAgent):
    def __init__(self,
                 config_parser,  # config file
                 **kwargs):      # anything you can't send in config_parser
                                 # (hopefully the new service won't need anything extra)
        BaseAgent.__init__(self, config_parser)
        self.gen_string = config_parser.get('agent', 'STRING_TO_GENERATE')

    @expose('POST')
    def startup(self, kwargs):
        self.state = 'RUNNING'
        self.logger.info('Agent started up')
        return HttpJsonResponse()

    @expose('GET')
    def get_helloworld(self, kwargs):
        if self.state != 'RUNNING':
            return HttpErrorResponse('ERROR: Wrong state to get_helloworld')
        return HttpJsonResponse({'result': self.gen_string})
Let’s assume that the manager wants each agent to generate a different string. The agent should be informed about the string that it has to generate. To do this, we could either implement a method inside the agent that receives the required string, or specify this string in the configuration file with which the agent is started. We opted for the second method, just to illustrate how a service can make use of the config files, and also because some service agents/managers may need such information before they are started.
Therefore, we will provide the helloworld-agent.cfg file (see Listing 8), which will be concatenated to the default-agent.cfg file. It contains a variable ($STRING) whose value will be filled in by the manager.
Listing 8: ConPaaS/config/agent/helloworld-agent.cfg
STRING_TO_GENERATE = $STRING
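At run time the manager substitutes this variable through the controller: as shown in Listing 10 below, it calls self.controller.add_context_replacement(dict(STRING='helloworld')) before creating the agent nodes.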
Now let’s implement an HTTP client for this new agent server (see Listing 9). This client will be used by the manager as a wrapper to easily send requests to the agent. We used some useful methods from conpaas.core.https to send JSON objects to the agent server.
Listing 9: conpaas/services/helloworld/agent/client.py
import json
import httplib

from conpaas.core import https

def _check(response):
    code, body = response
    if code != httplib.OK:
        raise Exception('Received http response code %d' % (code))
    data = json.loads(body)
    if data['error']:
        raise Exception(data['error'])
    else:
        return data['result']

def check_agent_process(host, port):
    method = 'check_agent_process'
    return _check(https.client.jsonrpc_get(host, port, '/', method))

def startup(host, port):
    method = 'startup'
    return _check(https.client.jsonrpc_post(host, port, '/', method))

def get_helloworld(host, port):
    method = 'get_helloworld'
    return _check(https.client.jsonrpc_get(host, port, '/', method))
Next, we will implement the manager in the same manner: we will write the HelloWorldManager class and place it in the file conpaas/services/helloworld/manager/manager.py (see Listing 10). To make use of the IaaS abstractions, we need to instantiate a Controller, which mediates all the requests to the clouds on which ConPaaS is running. Note the lines:
self.controller = Controller(config_parser)
self.controller.generate_context('helloworld')
The first line instantiates a Controller. The controller maintains a list of cloud objects generated from the config_parser file. The controller provides several functions, which are documented in the doxygen documentation of the file controller.py. The most important ones, which are also used in the Hello World service implementation, are: generate_context (which generates a template of the contextualization file); update_context (which takes the contextualization template and replaces the variables with the supplied values); create_nodes (which requests additional nodes from the specified cloud, or the default one) and delete_nodes (which deletes the specified nodes).
Note that the create_nodes function accepts as a parameter a function (in our case check_agent_process) that tests whether the agent process started correctly in the agent VM. If calls to this function keep raising exceptions for a given period of time, the manager assumes that the agent process didn’t start correctly and tries to start the agent process on a different agent VM.
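Conceptually, the controller uses the supplied check function along the following lines. This is a simplified sketch of the behaviour just described, not the actual controller.py code; the names and timeout values are illustrative:

import time

def wait_for_agent(check_agent_process, ip, port, timeout=300, interval=10):
    # Keep calling the check function until it stops raising exceptions
    # or until the timeout expires (illustrative values).
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            check_agent_process(ip, port)  # raises while the agent is not up
            return True                    # the agent answered: the node is usable
        except Exception:
            time.sleep(interval)
    return False  # caller gives up on this VM and tries another one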
Listing 10: conpaas/services/helloworld/manager/manager.py
from threading import Thread

from conpaas.core.expose import expose
from conpaas.core.manager import BaseManager
from conpaas.core.https.server import HttpJsonResponse, HttpErrorResponse

from conpaas.services.helloworld.agent import client

class HelloWorldManager(BaseManager):
    # Manager states - Used by the Director
    S_INIT = 'INIT'         # manager initialized but not yet started
    S_PROLOGUE = 'PROLOGUE' # manager is starting up
    S_RUNNING = 'RUNNING'   # manager is running
    S_ADAPTING = 'ADAPTING' # manager is in a transient state - frontend will
                            # keep polling until manager out of transient state
    S_EPILOGUE = 'EPILOGUE' # manager is shutting down
    S_STOPPED = 'STOPPED'   # manager stopped
    S_ERROR = 'ERROR'       # manager is in error state

    AGENT_PORT = 5555

    def __init__(self, config_parser, **kwargs):
        BaseManager.__init__(self, config_parser)
        self.nodes = []
        # Setup the clouds' controller
        self.controller.generate_context('helloworld')
        self.state = self.S_INIT

    def _do_startup(self, cloud):
        startCloud = self._init_cloud(cloud)
        self.controller.add_context_replacement(dict(STRING='helloworld'))
        try:
            nodes = self.controller.create_nodes(1,
                client.check_agent_process, self.AGENT_PORT, startCloud)
            node = nodes[0]
            client.startup(node.ip, self.AGENT_PORT)
            # Extend the nodes list with the newly created one
            self.nodes += nodes
            self.state = self.S_RUNNING
        except Exception, err:
            self.logger.exception('_do_startup: Failed to create node: %s' % err)
            self.state = self.S_ERROR

    @expose('POST')
    def shutdown(self, kwargs):
        self.state = self.S_EPILOGUE
        Thread(target=self._do_shutdown, args=[]).start()
        return HttpJsonResponse()

    def _do_shutdown(self):
        self.controller.delete_nodes(self.nodes)
        self.nodes = []
        self.state = self.S_STOPPED

    @expose('POST')
    def add_nodes(self, kwargs):
        if self.state != self.S_RUNNING:
            return HttpErrorResponse('ERROR: Wrong state to add_nodes')
        if 'node' in kwargs:
            kwargs['count'] = kwargs['node']
        if not 'count' in kwargs:
            return HttpErrorResponse("ERROR: Required argument doesn't exist")
        if not isinstance(kwargs['count'], int):
            return HttpErrorResponse('ERROR: Expected an integer value for "count"')
        count = int(kwargs['count'])
        cloud = kwargs.pop('cloud', 'iaas')
        try:
            cloud = self._init_cloud(cloud)
        except Exception as ex:
            return HttpErrorResponse(
                "A cloud named '%s' could not be found" % cloud)
        self.state = self.S_ADAPTING
        Thread(target=self._do_add_nodes, args=[count, cloud]).start()
        return HttpJsonResponse()

    def _do_add_nodes(self, count, cloud):
        node_instances = self.controller.create_nodes(count,
            client.check_agent_process, self.AGENT_PORT, cloud)
        self.nodes += node_instances
        # Startup agents
        for node in node_instances:
            client.startup(node.ip, self.AGENT_PORT)
        self.state = self.S_RUNNING
        return HttpJsonResponse()

    @expose('GET')
    def list_nodes(self, kwargs):
        if len(kwargs) != 0:
            return HttpErrorResponse('ERROR: Arguments unexpected')
        if self.state != self.S_RUNNING:
            return HttpErrorResponse('ERROR: Wrong state to list_nodes')
        return HttpJsonResponse({
            'helloworld': [ node.id for node in self.nodes ],
        })

    @expose('GET')
    def get_service_info(self, kwargs):
        if len(kwargs) != 0:
            return HttpErrorResponse('ERROR: Arguments unexpected')
        return HttpJsonResponse({'state': self.state, 'type': 'helloworld'})

    @expose('GET')
    def get_node_info(self, kwargs):
        if 'serviceNodeId' not in kwargs:
            return HttpErrorResponse('ERROR: Missing arguments')
        serviceNodeId = kwargs.pop('serviceNodeId')
        if len(kwargs) != 0:
            return HttpErrorResponse('ERROR: Arguments unexpected')
        serviceNode = None
        for node in self.nodes:
            if serviceNodeId == node.id:
                serviceNode = node
                break
        if serviceNode is None:
            return HttpErrorResponse('ERROR: Invalid arguments')
        return HttpJsonResponse({
            'serviceNode': {
                'id': serviceNode.id,
                'ip': serviceNode.ip
            }
        })

    @expose('POST')
    def remove_nodes(self, kwargs):
        if self.state != self.S_RUNNING:
            return HttpErrorResponse('ERROR: Wrong state to remove_nodes')
        if 'node' in kwargs:
            kwargs['count'] = kwargs['node']
        if not 'count' in kwargs:
            return HttpErrorResponse("ERROR: Required argument doesn't exist")
        if not isinstance(kwargs['count'], int):
            return HttpErrorResponse('ERROR: Expected an integer value for "count"')
        count = int(kwargs['count'])
        self.state = self.S_ADAPTING
        Thread(target=self._do_remove_nodes, args=[count]).start()
        return HttpJsonResponse()

    def _do_remove_nodes(self, count):
        for _ in range(0, count):
            self.controller.delete_nodes([ self.nodes.pop() ])
        self.state = self.S_RUNNING
        return HttpJsonResponse()

    @expose('GET')
    def get_helloworld(self, kwargs):
        if self.state != self.S_RUNNING:
            return HttpErrorResponse('ERROR: Wrong state to get_helloworld')
        messages = []
        # Just get_helloworld from all the agents
        for node in self.nodes:
            data = client.get_helloworld(node.ip, self.AGENT_PORT)
            message = 'Received %s from %s' % (data['result'], node.id)
            self.logger.info(message)
            messages.append(message)
        return HttpJsonResponse({ 'helloworld': "\n".join(messages) })
We can also implement a client for the manager server (see Listing 11). This will allow us to use the command line interface to send requests to the manager, if the frontend integration is not available.
Listing 11: conpaas/services/helloworld/manager/client.py
import httplib, json

from conpaas.core.http import HttpError, _jsonrpc_get, _jsonrpc_post, _http_post, _http_get

def _check(response):
    code, body = response
    if code != httplib.OK:
        raise HttpError('Received http response code %d' % (code))
    data = json.loads(body)
    if data['error']:
        raise Exception(data['error'])
    else:
        return data['result']

def get_service_info(host, port):
    method = 'get_service_info'
    return _check(_jsonrpc_get(host, port, '/', method))

def get_helloworld(host, port):
    method = 'get_helloworld'
    return _check(_jsonrpc_get(host, port, '/', method))

def startup(host, port):
    method = 'startup'
    return _check(_jsonrpc_get(host, port, '/', method))

def add_nodes(host, port, count=0):
    method = 'add_nodes'
    params = {}
    params['count'] = count
    return _check(_jsonrpc_post(host, port, '/', method, params=params))

def remove_nodes(host, port, count=0):
    method = 'remove_nodes'
    params = {}
    params['count'] = count
    return _check(_jsonrpc_post(host, port, '/', method, params=params))

def list_nodes(host, port):
    method = 'list_nodes'
    return _check(_jsonrpc_get(host, port, '/', method))
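For example, assuming the manager VM is reachable at the made-up address 192.0.2.10 on the default HTTPS port 443, a quick interactive session with this client might look as follows (the exact return payloads depend on the manager implementation shown in Listing 10):

# Hypothetical interactive session with the manager client; the IP
# address is an example value.
from conpaas.services.helloworld.manager import client

print client.get_service_info('192.0.2.10', 443)    # {'state': 'RUNNING', 'type': 'helloworld'}
print client.add_nodes('192.0.2.10', 443, count=1)   # start one extra agent
print client.get_helloworld('192.0.2.10', 443)       # strings generated by all agents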
The last step is to register the new service with the ConPaaS core. One entry must be added to the file conpaas/core/services.py, as indicated in Listing 12. Because the Java and PHP services use the same code for the agent, there is only one entry for them in the agent services, called web, which is used by both web services.
Listing 12: conpaas/core/services.py
# -*- coding: utf-8 -*-
"""
conpaas.core.services
=====================
ConPaaS core: map available services to their classes.
:copyright: (C) 2010-2013 by Contrail Consortium.
"""
manager_services = {
    'php':        {'class': 'PHPManager',
                   'module': 'conpaas.services.webservers.manager.internal.php'},
    'java':       {'class': 'JavaManager',
                   'module': 'conpaas.services.webservers.manager.internal.java'},
    'scalaris':   {'class': 'ScalarisManager',
                   'module': 'conpaas.services.scalaris.manager.manager'},
    'hadoop':     {'class': 'MapReduceManager',
                   'module': 'conpaas.services.mapreduce.manager.manager'},
    'helloworld': {'class': 'HelloWorldManager',
                   'module': 'conpaas.services.helloworld.manager.manager'},
    'xtreemfs':   {'class': 'XtreemFSManager',
                   'module': 'conpaas.services.xtreemfs.manager.manager'},
    'selenium':   {'class': 'SeleniumManager',
                   'module': 'conpaas.services.selenium.manager.manager'},
    'taskfarm':   {'class': 'TaskFarmManager',
                   'module': 'conpaas.services.taskfarm.manager.manager'},
    'galera':     {'class': 'GaleraManager',
                   'module': 'conpaas.services.galera.manager.manager'},
    # 'htcondor': {'class': 'HTCondorManager',
    #              'module': 'conpaas.services.htcondor.manager.manager'},
    'htc':        {'class': 'HTCManager',
                   'module': 'conpaas.services.htc.manager.manager'},
    'generic':    {'class': 'GenericManager',
                   'module': 'conpaas.services.generic.manager.manager'},
    #""" BLUE_PRINT_INSERT_MANAGER do not remove this line: it is a placeholder for installing new services """
}

agent_services = {
    'web':        {'class': 'WebServersAgent',
                   'module': 'conpaas.services.webservers.agent.internals'},
    'scalaris':   {'class': 'ScalarisAgent',
                   'module': 'conpaas.services.scalaris.agent.agent'},
    'mapreduce':  {'class': 'MapReduceAgent',
                   'module': 'conpaas.services.mapreduce.agent.agent'},
    'helloworld': {'class': 'HelloWorldAgent',
                   'module': 'conpaas.services.helloworld.agent.agent'},
    'xtreemfs':   {'class': 'XtreemFSAgent',
                   'module': 'conpaas.services.xtreemfs.agent.agent'},
    'selenium':   {'class': 'SeleniumAgent',
                   'module': 'conpaas.services.selenium.agent.agent'},
    'galera':     {'class': 'GaleraAgent',
                   'module': 'conpaas.services.galera.agent.internals'},
    # 'htcondor': {'class': 'HTCondorAgent',
    #              'module': 'conpaas.services.htcondor.agent.agent'},
    'htc':        {'class': 'HTCAgent',
                   'module': 'conpaas.services.htc.agent.agent'},
    'generic':    {'class': 'GenericAgent',
                   'module': 'conpaas.services.generic.agent.agent'},
    #""" BLUE_PRINT_INSERT_AGENT do not remove this line: it is a placeholder for installing new services """
}
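To illustrate how these tables are meant to be consumed, the following sketch shows how a service type could be resolved to its manager class. This is a hypothetical helper, not the actual lookup code in the ConPaaS core:

# Hypothetical helper: resolve a service type from services.py to its
# manager class. The real dispatch code in the ConPaaS core may differ.
from importlib import import_module

from conpaas.core.services import manager_services

def load_manager_class(service_type):
    entry = manager_services[service_type]      # e.g. 'helloworld'
    module = import_module(entry['module'])     # import the module by its dotted path
    return getattr(module, entry['class'])      # fetch the class object

ManagerClass = load_manager_class('helloworld') # -> HelloWorldManager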
Integrating the new service with the frontend¶
So far there is no easy way to add a new frontend service. Each service may require distinct graphical elements. In this section we explain how the Hello World frontend service has been created.
Manager states¶
As you have noticed in the Hello World manager implementation, we used some standard states, e.g. INIT, ADAPTING, etc. By calling the get_service_info function, the frontend knows in which state the manager is. Why do we need these standardized states? As an example, if the manager is in the ADAPTING state, the frontend knows to draw a loading icon on the interface and to keep polling the manager.
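For example, this polling behaviour can be reproduced from Python with the manager client of Listing 11 (the address is a made-up example and the returned payload is assumed to match Listing 10):

import time

from conpaas.services.helloworld.manager import client

# Poll get_service_info until the manager leaves the transient
# ADAPTING state, mirroring what the frontend does.
state = client.get_service_info('192.0.2.10', 443)['state']
while state == 'ADAPTING':
    time.sleep(5)
    state = client.get_service_info('192.0.2.10', 443)['state']
print 'manager is now in state', state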
Files to be modified¶
frontend
└── www
    ├── create.php
    └── lib
        └── service
            └── factory
                └── __init__.php
Several lines of code must be added to the two files above for the new service to be recognized. If you look inside these files, you’ll find that where to add the lines and what to add is self-explanatory.
Files to be added¶
frontend
└── www
    ├── lib
    │   ├── service
    │   │   └── helloworld
    │   │       └── __init__.php
    │   └── ui
    │       └── instance
    │           └── helloworld
    │               └── __init__.php
    └── images
        └── helloworld.png
Creating A ConPaaS Services VM Image¶
Various services require certain packages and configurations to be present in the VM image. ConPaaS provides facilities for creating specialized VM images that contain these dependencies. Furthermore, for the convenience of users, there are prebuilt images that contain the dependencies for all available services. If you intend to use these images and do not need a specialized VM image, then you can skip this section.
Configuring your VM image¶
The configuration file for customizing your VM image is located at conpaas-services/scripts/create_vm/create-img-script.cfg.
In the CUSTOMIZABLE section of the configuration file, you can define whether you plan to run ConPaaS on Amazon EC2, OpenStack or OpenNebula. Depending on the virtualization technology that your target cloud uses, you should choose either KVM or Xen as the hypervisor. Note that for Amazon EC2 this variable needs to be set to Xen. Please do not make the recommended size for the image file smaller than the default. The optimize flag enables certain optimizations that reduce the number of installed packages and the disk size. These optimizations allow for smaller VM images and faster VM startup.
In the SERVICES section of the configuration file, you have the opportunity to disable any service that you do not need in your VM image. If a service is disabled, its package dependencies are not installed in the VM image. Paired with the optimize flag, the end result will be a minimal VM image that runs only what you need.
Note that the configuration file also contains a NUTSHELL section. The settings in this section are explained in detail in ConPaaS in a Nutshell. However, in order to generate a regular customized VM image, make sure that both the container and nutshell flags in this section are set to false.
Once you are done with the configuration, you should run this command in the create_vm directory:

$ python create-img-script.py

This program generates a script file named create-img-conpaas.sh. The script is based on your specific configuration.
Creating your VM image¶
To create the image you can execute create-img-conpaas.sh on any 64-bit Debian or Ubuntu machine. Please note that you will need root privileges on such a system. In case you do not have root access to a Debian or Ubuntu machine, please consider installing a virtual machine using your favorite virtualization technology, or running a Debian/Ubuntu instance in the cloud.

Make sure your system has the following executables installed (they are usually located in /sbin or /usr/sbin, so make sure these directories are in your $PATH): dd, parted, losetup, kpartx, mkfs.ext3, tune2fs, mount, debootstrap, chroot, umount, grub-install.

It is particularly important that you use Grub version 2. To install it:

sudo apt-get install grub2
Execute create-img-conpaas.sh as root.

The last step can take a very long time. If all goes well, the final VM image is stored as conpaas.img. This file is later registered to your target IaaS cloud as your ConPaaS services image.
If things go wrong¶
Note that if anything fails during the image file creation, the script will stop and try to revert any changes it has made. However, it might not always reset your system to its original state. To undo everything the script has done, follow these instructions:
1. The image has been mounted as a separate file system. Find the mounted directory using the command df -h. The directory should be of the form /tmp/tmp.X.

2. There may be a dev and a proc directory mounted inside it. Unmount everything using:

sudo umount /tmp/tmp.X/dev /tmp/tmp.X/proc /tmp/tmp.X

3. Find which loop device you are using:

sudo losetup -a

4. Remove the device mapping:

sudo kpartx -d /dev/loopX

5. Remove the binding of the loop device:

sudo losetup -d /dev/loopX

6. Delete the image file.
Your system should be back to its original state.
Creating a Nutshell image¶
Starting with the release 1.4.1, ConPaaS is shipped together with a VirtualBox appliance containing the Nutshell VM image. This section explains how to create a similar image that can be deployed on a different virtualization technology (such as the other clouds supported by ConPaaS). The next section describes the procedure for recreating the VirtualBox image. If you are interested only in installing the standard VirtualBox image that is shipped with ConPaaS, you may skip this chapter entirely and only read the installation guide available here: ConPaaS in a Nutshell.
The procedure for creating a Nutshell image is very similar to the one for creating a standard customized image described in section Creating A ConPaaS Services VM Image. However, there are a few settings in the configuration file which need to be considered.
Most importantly, there are two flags in the NUTSHELL section of the configuration file, nutshell and container, which control the kind of image that is going to be generated. Since each of these two flags can take the value true or false, we distinguish four cases:
- nutshell = false, container = false: In this case a standard ConPaaS VM image is generated and the nutshell configurations are not taken into consideration. This is the default configuration, which should be used when ConPaaS is deployed on a standard cloud.
- nutshell = false, container = true: In this case the generated image will be an LXC container image. This image is similar to a standard VM one, but it does not contain a kernel installation.
- nutshell = true, container = false: In this case a Nutshell image is generated and a standard ConPaaS VM image is embedded in it. This configuration should be used for deploying ConPaaS in nested standard VMs within a single VM.
- nutshell = true, container = true: Similar to the previous case, a Nutshell image is generated, but this time a container image is embedded in it instead of a VM one. Therefore, in order to generate a Nutshell based on LXC containers, make sure to set the flags to this configuration. This is the default configuration for our distribution of the Nutshell.
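For instance, to generate our default Nutshell distribution (LXC containers inside the Nutshell), the two flags would be set as follows; the section and flag names are as described above, and all other settings are omitted:

[NUTSHELL]
nutshell = true
container = true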
Another important setting for generating the Nutshell image is the path to a directory containing the ConPaaS tarballs (cps*.tar.gz files). The rest of the settings specify the distro and kernel versions that the Nutshell VM will have. For the moment we have tested this only with Ubuntu 12.04 and kernel 3.5.0.
In order to run the image generation script, the procedure is almost the same as for a standard image. From the create_vm directory run:

$ python create-img-script.py
$ sudo ./create-img-nutshell.sh

Note that if the nutshell flag is enabled, the generated script file is called create-img-nutshell.sh. Otherwise, the generated script file is called create-img-conpaas.sh, as indicated previously.
Creating a Nutshell image for VirtualBox¶
As mentioned earlier, the Nutshell VM can also run on VirtualBox. In order to generate a Nutshell image compatible with VirtualBox, you have to set the cloud value to vbox in the CUSTOMIZABLE section of the configuration file. The rest of the procedure is the same as for other clouds. The result of the image generation script will be a nutshell.vdi image file, which can be used as a virtual hard drive when creating a new appliance on VirtualBox.
The procedure for creating a new appliance on VirtualBox is quite standard:
- Name and OS: You can choose a custom name for the appliance, but use Linux and Ubuntu (64 bit) for the type and version.
- Memory size: Since the Nutshell runs a significant number of services and also requires some memory for the containers, we suggest choosing at least 3 GB of RAM.
- Hard drive: Select “Use an existing virtual hard drive file”, browse to the location of the nutshell.vdi file generated earlier and press Create.
Preinstalling an application into a ConPaaS Services Image¶
A ConPaaS Services Image contains all the components needed to run the ConPaaS services. For deploying arbitrary applications using ConPaaS, the Generic service provides a mechanism to install and run the application, along with its dependencies. The installation, however, has to happen during the initialization of every new node that is started, for example in the init.sh script of the Generic service. If installing the application with its dependencies takes a long time or, in general, is not desired during every deployment of a new node, another option is available: preinstalling the application inside the ConPaaS Services Image. This section describes this process.
Download a ConPaaS Services Image appropriate for your computer architecture and virtualization technology. Here are the download links for the latest images:
- ConPaaS VM image for Amazon EC2 (x86_64):
- ConPaaS VM image for OpenStack with KVM (x86_64):
  MD5: 28299ac49cc216dde57b107000078c4f, size: 1.8 GB
- ConPaaS VM image for OpenStack with LXC (x86_64):
  MD5: 45296e4cfcd44325a13703dc67da1d0b, size: 1.8 GB
- ConPaaS VM image for OpenNebula with KVM (x86_64):
  MD5: 32022d0e50f3253b121198d30c336ae8, size: 2.0 GB
- ConPaaS VM image for OpenStack with LXC for the Raspberry Pi (arm):
  MD5: c29cd086e8e0ebe7f0793e7d54304da4, size: 2.0 GB
Warning
If you choose to use one of the images above, it is always a good idea to check its integrity before continuing to the next step. A corrupt image may result in unexpected behaviour which may be hard to trace. You can check the integrity by verifying the MD5 hash with the md5sum command.

Alternatively, you can also create such an image using the instructions provided in the section Creating A ConPaaS Services VM Image.
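For example, running md5sum on the downloaded Raspberry Pi image file should print the hash listed above, c29cd086e8e0ebe7f0793e7d54304da4.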
The following steps use the image for the Raspberry Pi as an example. For other architectures or virtualization technologies, the commands are the same.
Warning
The following steps need to be performed on a machine with the same architecture and a similar operating system. For the regular images, this means the 64-bit version of a Debian or Ubuntu system. For the Raspberry Pi image, the steps need to be performed on the Raspberry Pi itself (with a Raspbian installation, arm architecture). Trying to customize the Raspberry Pi image on an x86 system will not work!
Log in as root and change to the directory where you downloaded the image.
(Optional) If you need to expand the size of the image, you can do it right now. As the image is in the raw format, expanding the size can be done by increasing the size of the image file. For example, to increase the size by 1 GB:
root@raspberrypi:/home/pi# dd if=/dev/zero bs=4M count=256 >> conpaas-rpi.img
256+0 records in
256+0 records out
1073741824 bytes (1.1 GB) copied, 56.05551 s, 19 MB/s
If you have the package qemu-utils installed, you can also use qemu-img instead:

root@raspberrypi:/home/pi# qemu-img resize conpaas-rpi.img +1G
Image resized.
Map a loop device to the ConPaaS image:
root@raspberrypi:/home/pi# losetup -fv conpaas-rpi.img
Loop device is /dev/loop0
Warning
If you already have other loop devices in use, the output of this command may contain a different loop device. Take a note of it and replace loop0 with the correct device in the following commands.
If you increased the size of the image in step 3, you now need to also expand the file system. First, check the integrity of the file system with the following command:
root@raspberrypi:/home/pi# e2fsck -f /dev/loop0
e2fsck 1.42.9 (4-Feb-2014)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
root: 44283/117840 files (9.1% non-contiguous), 409442/470528 blocks
You can now expand the file system:
root@raspberrypi:/home/pi# resize2fs /dev/loop0
resize2fs 1.42.9 (4-Feb-2014)
Resizing the filesystem on /dev/loop0 to 732672 (4k) blocks.
The filesystem on /dev/loop0 is now 732672 blocks long.
Create a new directory and mount the image to it:
root@raspberrypi:/home/pi# mkdir conpaas-img
root@raspberrypi:/home/pi# mount /dev/loop0 conpaas-img/
Now you can access the contents of the image inside the conpaas-img directory.

Copy your application’s binaries and any other static content that you want to include in the image somewhere under the conpaas-img directory.

To install any prerequisites, you may want to change the root directory to conpaas-img. But first, you will need to mount /dev, /dev/pts and /proc in the conpaas-img directory (which will become the new root directory), or else the installation of some packages may fail:

root@raspberrypi:/home/pi# mount -obind /dev conpaas-img/dev
root@raspberrypi:/home/pi# mount -obind /dev/pts conpaas-img/dev/pts
root@raspberrypi:/home/pi# mount -t proc proc conpaas-img/proc
You can now execute the chroot:
root@raspberrypi:/home/pi# chroot conpaas-img
Your root directory is now the root of the image.
To use apt-get, you need to set a working DNS server:
root@raspberrypi:/# echo "nameserver 8.8.8.8" > /etc/resolv.conf
This example uses the Google Public DNS; you may, however, use any DNS server you prefer.
Check that the Internet works in this new environment:
root@raspberrypi:/# ping www.conpaas.eu
PING carambolier.irisa.fr (131.254.150.34) 56(84) bytes of data.
64 bytes from carambolier.irisa.fr (131.254.150.34): icmp_seq=1 ttl=50 time=35.8 ms
[... output omitted ...]
Use apt-get to install any packages that your application requires:
root@raspberrypi:/# apt-get update
Hit http://archive.raspbian.org wheezy Release.gpg
Hit http://archive.raspbian.org wheezy Release
[... output omitted ...]
root@raspberrypi:/# apt-get install <...>
Make the final configurations (if needed) and make sure that everything works.
Clean-up:
Exit the chroot:
root@raspberrypi:/# exit
exit
root@raspberrypi:/home/pi#
Unmount /dev, /dev/pts and /proc:

root@raspberrypi:/home/pi# umount conpaas-img/proc
root@raspberrypi:/home/pi# umount conpaas-img/dev/pts
root@raspberrypi:/home/pi# umount conpaas-img/dev
Unmount the image:
root@raspberrypi:/home/pi# umount conpaas-img
Remove the directory:
root@raspberrypi:/home/pi# rm -r conpaas-img
Delete the loop device mapping:
root@raspberrypi:/home/pi# losetup -d /dev/loop0
That’s it! Now the file conpaas-rpi.img contains the new ConPaaS image with your application pre-installed.
You can now register the new image to the cloud of your choice and update the ConPaaS Director’s settings to use the new image. Instructions are available in the Installation guide:
- For Amazon EC2:
- Registering your custom VM image to Amazon EC2
- For OpenStack:
- Registering your ConPaaS image to OpenStack
- For OpenNebula:
- Registering your ConPaaS image to OpenNebula