WP3-D3.2.1
CloudOpting
Migration of Open Public Services to the Cloud and Provision of Cloud Infrastructures
Deliverable D3.2.1, Mobility services, Environmental
& social Services, Open Data Services and Internet of
Things Services deployment implementation
methodology
Delivery date: 31st March 2015
Actual submission date: 31st March 2015
Lead beneficiary: TeamNET
CloudOpting - Project Nº 621146
Reference
Author(s) Ignacio Soler, Xavier Riera, Luca Gioppo, Guido Spadotto,
Xavier Xicota, Robert Furió, Gonzalo Cabezas, Boryana
Stamenova, Antonio Paradell, Genís Mauri, Fernando
Andrés, August Mabilon, Ignacio Garcia, Ciprian Pavel,
Manuel Gonzalez, Ernest Planas
Partner TeamNET
Work package WP3
Task T3.2, T3.3, T3.4, T3.5
Status-Version Deliverable - v1.0
Dissemination Public
Version List
Version | Date | Reason | Author
0.1 | 5 January 2015 | Initial document | Ciprian Pavel - Teamnet
0.2 | 27 January 2015 | Partners improvements added | Ciprian Pavel - Teamnet
0.3 | 06 March 2015 | Google Docs transformation | Ignacio Soler - SmartPartners; Ciprian Pavel - Teamnet
0.4 | 26 March 2015 | Partners contribution | Ignacio Garcia Vega - Wellness Telecom; Xavier Riera - Worldline; Guido Spadotto - Profesia; Luca Gioppo - CSI; Ciprian Pavel - Teamnet
0.5 | 28 March 2015 | Proof read | Stephen Mirkovic - Electric Corby
0.6 | 31 March 2015 | Formatting | Ciprian Pavel - Teamnet
Acronyms & Abbreviations
Concept | Definition
CRM | Customer Relationship Management: a system for managing a company's interactions with current and future customers. [1]
CMS | Content Management System: a computer application that allows publishing, editing, modifying, organizing and deleting content, as well as maintenance, from a central interface. [2]
DBMS | DataBase Management System: a collection of programs that enables you to store, modify and extract information from a database.
CSS | Cascading Style Sheets: a simple mechanism for adding style (e.g. fonts, colours, spacing) to Web documents.
TOSCA | Topology and Orchestration Specification for Cloud Applications.
SLA | Service Level Agreement: a formal, negotiated document that defines (or attempts to define), in quantitative (and perhaps qualitative) terms, the service being offered to a Customer. [3]
CloudOpting Service Catalogue | A catalogue of available services within a given CloudOpting platform instance (defined also in D3.1).
CloudOpting Blackbox | A set of software components and functionalities needed to Monitor, Deploy and Orchestrate CloudOpting Services in a centralised way (defined also in D3.1).

[1] From Wikipedia - http://en.wikipedia.org/wiki/Customer_relationship_management
[2] From Wikipedia - http://en.wikipedia.org/wiki/Content_management_system
[3] Source: http://www.knowledgetransfer.net/dictionary/ITIL/en/Service_Level_Agreement.htm
Table of Contents
Reference .................................................................................... 2
Version List ................................................................................. 3
Acronyms & Abbreviations ............................................................. 4
Table of Contents .......................................................................... 5
Executive Summary ...................................................................... 7
1. Introduction ..............................................................................8
1.1. Scope of the Document ....................................................... 8
1.2 Evolution of D3.2.1 and D3.2.2 .............................................. 8
2. Deployment methodology implementation .................................. 10
2.1 MIGRATION PHASE ........................................................... 12
2.2 ADD/DEPLOY PHASE ......................................................... 14
3. Actual deployment implementation ............................................ 17
3.1 EXP-01 – Clearò - Transparency Portal .................................. 17
3.2 EXP-02 – FixThis and City Incident Reporting ........................ 20
3.3 EXP-03 – City Agenda and Next2Me ...................................... 20
3.3.1 Next2Me Migration .................................................... 20
3.3.2 Next2Me Deployment ................................................. 23
3.4 EXP-04 – mobileID ............................................................. 26
3.5 EXP-05 – ASIA GUIA (Applied Integrated Systems Support) ...... 26
3.6 EXP-06 – MIB (Base Information Database) ........................... 32
3.7 EXP-07 – Provide and offering of PaaS and SaaS to other
municipalities ........................................................................... 32
3.8 EXP-08 – Energy Consumption and Generation Dashboard ...... 32
3.9 EXP-09 – Bus Portal ............................................................ 32
3.10 EXP-10 – Business Portal ................................................... 33
3.11 EXP-11 – Indicators Portal ................................................. 39
3.12 EXP-12 – Smart City Cloud Expert System ............................ 39
3.13 EXP-13 – Open Data .......................................................... 40
3.14 EXP-14 – Mobile Services - Interoperability Azure - Cloud Stack
................................................................................................ 40
4 Conclusions and Next Steps ........................................................ 41
4.1 Next Steps ......................................................................... 41
Table of images
Image 1-The CloudOpting Funnel Process......................................................... 9
Image 2-Relationships between CloudOpting Components (from D2.2) 10
Image 3-Docker Containers Hierarchy .............................................................. 11
Image 4-High-Level Migration Steps .................................................................. 12
Image 5-ADD phase steps ....................................................................................... 14
Image 6-DEPLOY phase steps .............................................................................. 15
Image 7-Current deployment diagram for CLEARO ..................................... 17
Image 8-CLEARO target deployment diagram ............................................... 18
Image 9-CLEARO migration steps ...................................................................... 19
Image 10-Next2Me municipalities deployment model ................................ 21
Image 11-Connected Citizen platform & services view ............................... 22
Image 12-Next2Me renewed user interfaces.................................................... 22
Image 13-Next2Me container and server map ................................................ 23
Image 14-Next2Me high level migration steps ............................................... 24
Image 15-ASIA experiment .................................................................................... 27
Image 16-Dockerised ASIA instance VM ........................................................... 28
Image 17-Migration steps to migrate ASIA. ..................................................... 28
Image 18-Github ASIA account screenshot ....................................................... 29
Image 19-Apache Dockerfile in Github ............................................................. 30
Image 20-Container creation order and linking map .................................... 31
Image 21-AZURE management portal ................................................................ 31
Image 22-Business Portal deployment overview ............................................ 33
Image 23-Business Portal migration steps ....................................................... 34
Image 24-Business Portal Docker containers usage ..................................... 34
Image 25-Business Portal Docker containers colocation ............................ 35
Image 26-Business Portal container creation order ...................................... 37
Image 27-Business Portal container source repository ................................ 38
Image 28-Business Portal container registry ................................................... 39
Executive Summary
This deliverable will describe the methodology that the CloudOpting project
has agreed to follow in order to provide the CloudOpting platform with an
initial set of applications that will be a proof of concept and validation for the
coverage of identified requirements.
Chapter 1 provides a brief introduction to the scope of this deliverable and
an explanation of why its content does not cover all the phases associated
with the funnels described in WP2.
In Chapter 2, the "generic" workflow related to a subset of the phases in the
Publishing Funnel (namely the "Migration" and "Add/Deploy" phases) is
described.
In Chapter 3, the generic process outlined in Chapter 2 is described in
further detail for the specific ported experiments (as of the time of
delivering this document, March 2015). This description is useful for seeing
how well the chosen technical architecture for CloudOpting suits the various
needs of pre-existing applications, and which aspects or tools have to be
developed to further ease the adoption of the CloudOpting platform by
potential Service Providers.
Finally, Chapter 4 lists the conclusions that the work performed so far has
allowed the consortium to reach, and the next steps required to extend the
functionalities and capabilities of the CloudOpting platform. Progress on
these aspects will be detailed in the next WP3 deliverable, D3.2.2.
1. Introduction
1.1 Scope of the Document
This deliverable, D3.2.1 (Deployment Implementation Methodology), due in
M13, comprises both this report and the four services that had been
migrated at the time of writing (M13). These migrated services will be
showcased during the first annual review. By M20 we plan to have all the
experiments migrated and described in D3.2.2. This report describes the
methodology that the CloudOpting project has adopted in order to provide
an initial set of available services to verify and showcase the platform's
capabilities, ease of use and overall usefulness.
It will not detail every aspect of the experiments' migration, as this
deliverable only aims to demonstrate the ongoing processes. The
partitioning of phases and funnels defined in D3.1 will be detailed in the
next section, together with the vision for the WP3 deliverables.
1.2 Evolution of D3.2.1 and D3.2.2
The main purpose of Work Package 3 is to perform the actual migration of
the proposed experiments using the CloudOpting architecture (as defined in
Work Package 2). The results of the migration will be documented into the
following deliverables:
- D3.2.1 - Mobility Services, Environmental & Social Services, Open Data
Services and Internet of Things Services deployment implementation
methodology - which is this deliverable, due in M13
- D3.2.2 - Mobility Services, Environmental & Social Services, Open Data
Services and Internet of Things Services deployment implementation
methodology - due in M20 at the end of Work Package 3
Taking into account the CloudOpting Double Funnel defined in the D3.1
deliverable, D3.2.1 will focus on the “Migration” Phase and on the
“Add/Deploy” Phase of the CloudOpting Publishing Funnel. This is
represented in a graphic form in the following image:
Image 1-The CloudOpting Funnel Process
D.3.2.2 will focus on both funnels, providing additional information - gained
by porting the complete set of experiments - to the publishing funnel.
Regarding the migrated services, D3.2.1 will focus on four experiments
while D.3.2.2 will detail the remaining services.
2. Deployment methodology implementation
All of the virtual machines created as a consequence of a service
subscription will be implemented as Docker hosts. Inside these hosts,
Docker containers will run with the specific set of tools and packages
needed both for the distinct functionality offered by the service and to
exploit the CloudOpting platform-specific functionalities. For this reason
these Docker containers will be called CloudOpting containers.
As defined in D2.2, the following image describes the relationships among
technical artifacts that will be used for deploying an application that is
compliant with the CloudOpting technical architecture:
Image 2-Relationships between CloudOpting Components (from D2.2)
The CloudOpting containers will be based on containers validated by the
Docker community and they will be built extending a set of available
containers that will have been guaranteed to meet CloudOpting’s
requirements as defined in WP2.
In order to achieve this goal a single CloudOpting root container will be
defined and any other container will be based upon, and extend this root
container. The final container hierarchy will be similar to the following non-
exhaustive diagram:
Image 3-Docker Containers Hierarchy
The base CloudOpting container, COBASE, will extend the official Docker
image of either CentOS or Ubuntu. The main extension will be the inclusion
of the Puppet binaries.
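A COBASE Dockerfile might look like the following (a minimal sketch under stated assumptions: the CentOS 6 base, the repository URL and the package installation steps are illustrative, not the project's actual build files):

```dockerfile
# COBASE: CloudOpting root container (illustrative sketch only)
# Start from the official CentOS image on Docker Hub.
FROM centos:6.6

# Main extension: include the Puppet binaries so that every descendant
# container can be configured via Puppet recipes.
# The repository RPM below is an assumption about how Puppet would be
# obtained on CentOS 6; adapt to the actual environment.
RUN rpm -ivh https://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm \
    && yum install -y puppet \
    && yum clean all

LABEL description="CloudOpting base container (COBASE)"
```

Every other CloudOpting container would then begin with `FROM` this image, inheriting the Puppet tooling from the single root.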
The next level of CloudOpting containers will extend the COBASE container
by adding functions and capabilities needed for their intended use, like
dedicated volumes for logging, data storage, etc.
Further down the hierarchy, the CloudOpting containers will have more
specialized functions and capabilities, and may also split into distinct
versions of the same product, such as Apache HTTP Server 2.2 and 2.4.
Container customization will be done via Puppet recipes. By using Docker
and Puppet together we will achieve a high degree of customization and
independence between the different layers of the application. This will also
increase the reuse of components, as will be detailed later in the
experiments chapter.
Each Service Subscriber will have its own virtual machine running a set of
dedicated CloudOpting containers for each technology. This will ensure
higher security between similar instances and it may also provide the
required degree of customization of the software packages (Apache HTTP
server, PHP module, WordPress core module, WordPress plugins, Tomcat
versions and libraries, etc.)
2.1 MIGRATION PHASE
As stated in D3.1, in the Migration Phase all the services made available
through the CloudOpting platform “will be transformed in order to implement
the requirements (both functional and nonfunctional) that the CloudOpting
platform requires in order to be integrated in the service container. The phase
will finish when we have a set of packages containing all the artifacts needed to
create the master copy of the service featuring all the minimum parameters in a
Virtual Machine including the application itself and a configuration of
resources”.
The following picture lists the steps that each Service Provider will have to
go through in order to migrate or implement its service:
Image 4-High-Level Migration Steps
Going into further detail:
0 - Port Application Logic: The application code will be amended to use the
APIs that the CO Platform exposes (logging, event processing, …), to
support multi-tenancy, to externalize the configuration parameters so that
they can be altered without recompiling the app, to adapt its data model to
the available DBMS and - generally speaking - to integrate the application
logic with the new features provided by the CloudOpting Platform;
1 - Define Containers: Depending on the deployment model of the
application, the relevant base images from the Cloudopting repository of
images are selected as baselines for extension in the following steps.
2 - Define Dockerfiles*: For each required Docker base image (identified
in the previous step), the corresponding Dockerfile is written (as a quick
reminder, a Dockerfile is “a text document that contains all the commands
you would normally execute manually in order to build a Docker image4”) to
create the simplest base image;
3 - Add Custom Packages*: Having defined the base image, statements to
add all additional required software components (web/application servers,
programming languages, firewalls, etc.) are added to the Dockerfile;
4 - Add Custom Resources*: Each application has its own set of custom
resources or assets (Images, Logos, CSSs, Data Definition (Service Data
Model) and Manipulation (Data Import/Export) Scripts for the Database). The
service provider has to identify and persist them so that they can be applied
to the “standard” containers by means of Dockerfiles or Puppet5 scripts;
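Steps 2 to 4 above might combine into a Dockerfile along these lines (an illustrative sketch only: the base image name, the packages and the file paths are assumptions, not artifacts of the project):

```dockerfile
# Illustrative Dockerfile for steps 2-4: extend a CloudOpting base
# image, add the software packages the service needs, then copy in the
# service-specific assets. All names below are hypothetical.
FROM cloudopting/cobase

# Step 3 - Add Custom Packages: web server and PHP for the service.
RUN yum install -y httpd php && yum clean all

# Step 4 - Add Custom Resources: application code, styling and
# database scripts identified by the Service Provider.
COPY app/ /var/www/html/
COPY assets/logo.png assets/style.css /var/www/html/static/
COPY db/schema.sql db/import.sql /opt/service/db/

EXPOSE 80
CMD ["httpd", "-D", "FOREGROUND"]
```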
5 - Define Puppet Scripts*: Puppet scripts are used to perform low-level
system configurations on the required nodes of the application that has to
be ported. Using Puppet together with Docker allows for increased
flexibility in defining the final state of the nodes;
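A Puppet manifest for this step might look like the following (a sketch only; the class, module and parameter names are assumptions, loosely modelled on the community puppetlabs-apache module):

```puppet
# Illustrative node configuration for a ported application.
# Low-level system state is declared here rather than hard-coded in
# the image, so Docker and Puppet complement each other.
class cloudopting::webnode (
  $docroot = '/var/www/html',
) {
  # Install and configure the web server via the community module.
  class { 'apache':
    default_vhost => false,
  }

  apache::vhost { 'service-frontend':
    port    => 80,
    docroot => $docroot,
  }

  # Dedicated directory for application logs.
  file { '/var/log/cloudopting':
    ensure => directory,
    mode   => '0755',
  }
}
```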
6 - Externalize Configuration Values*: Once all aspects for deployment
have been formalized using the relevant Domain Specific Language
(Dockerfiles, Puppet Manifest files, etc.), all configuration parameter values
will have to be replaced by placeholders. These placeholders will be parsed
and will contribute to the generation of the TOSCA Template file (see next
step);
7 - Generate TOSCA Template: The TOSCA Template file will preserve an
interoperable description of the application in which only the configuration
parameter values are replaced by placeholders. The CO Platform will parse
the TOSCA Template file to dynamically generate a form through which a
service subscriber will be able to provide all the required parameter values
for the service instances he is subscribing to. Besides technical parameters,
these include the required SLA levels and the “flavour” of the service that
best suits the subscriber's needs. The TOSCA Template will include snippets
of other Domain Specific Languages (Puppet, Dockerfiles, etc.) with
symbolic parameter values, so that it will be the only descriptor file required
for gathering all the required parameter values at activation time;
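Such a template with externalized values might look, schematically, like the following TOSCA-style YAML fragment (node types, input names and the exact TOSCA dialect are assumptions for illustration, not the project's actual template):

```yaml
# Illustrative fragment of a TOSCA-style template in which concrete
# configuration values have been replaced by placeholders (step 6);
# the platform parses these inputs to build the subscription form.
tosca_definitions_version: tosca_simple_yaml_1_0

topology_template:
  inputs:
    db_user:
      type: string
      description: Database user for the service instance
    db_password:
      type: string
      description: Database password (collected from the subscriber)
    sla_level:
      type: string
      description: Required SLA level (e.g. bronze, silver, gold)

  node_templates:
    service_db:
      type: tosca.nodes.Database
      properties:
        user: { get_input: db_user }
        password: { get_input: db_password }
```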
8 - Add to GitHub: Once all of the technical artifacts are ready, they will be
uploaded to GitHub, which is designated as the private repository for code
and technical artifacts.
4 https://docs.docker.com/reference/builder/
5 https://puppetlabs.com/puppet/what-is-puppet “Puppet is a configuration management system that
allows you to define the state of your IT infrastructure, then automatically enforces the correct
state.”
Steps marked with an asterisk in the previous list (steps 2 to 6) produce
outputs that will not be used directly by the CloudOpting orchestrator, but
are needed by Service Providers to check that their application can indeed
be deployed automatically. Based on the knowledge gained by defining the
relevant scripts for the various migration goals, the Service Provider will be
able to identify and embed the relevant script snippets in the final TOSCA
template which, as already stated, will be the only source of information for
the configuration and instantiation of the application.
At the time of writing this deliverable, several files still have to be written
by the Service Provider in order to describe the “structure” of the
application. The consortium is trying to make it possible to generate all the
required files from a single TOSCA Template file (one for each service) by
means of model transformations or - at least - to provide a set of automated
tools to ease the process of producing correct “descriptors”. Progress on
this goal will be described in the upcoming deliverable D3.2.2.
2.2 ADD/DEPLOY PHASE
After the successful MIGRATION of the service the next phase in the
deployment process is to ADD the service definition to the CloudOpting
Service Catalogue.
ADD phase
The ADD phase is where the elements of the service are transformed into a
set of images and scripts, resulting in a Master Service image. Once the
Master Service image has been created, CloudOpting will be able to deploy
the service in a customized and automated way for each Service Subscriber.
Image 5-ADD phase steps
The steps for the ADD phase that are going to be performed by the Service
Provider are the following:
0 - Upload TOSCA template: The service provider will upload the TOSCA
template and Puppet scripts for the service to be added, along with the
artifacts needed for the proper execution of the service.
1 - Storage of the TOSCA template: The CloudOpting platform will store the
TOSCA definition file, along with all the other artifacts uploaded by the
service provider, under an associated ID. It will be stored in the CloudOpting
Blackbox, which will not only store the master copies of migrated services
but will also provide a registry of running services, namely the CloudOpting
Service Catalogue.
2 - Storage of the service artifacts: The CloudOpting Service Catalogue will
store the service-specific artifacts (custom code, CSS, images, etc.) inside
the CloudOpting Blackbox.
DEPLOY phase
The DEPLOY phase is where the scripts to deploy the service are generated
and executed. In this phase the needed customizations are performed, and
the result is a running instance of the customized service.
Using the CloudOpting Service Catalogue, the Service Subscriber may
choose a service to add to his own portfolio; the following steps will then be
performed:
1. The CloudOpting Service Catalogue will show a web form generated by
reading the TOSCA template placeholders (described in the MIGRATION
phase, “Externalize Configuration Values” step).
2. The form will make it possible to collect the different customization
parameters from the service subscriber. It will enable non-technical users to
supply a configuration appropriate to their needs.
3. The form will generate a TOSCA instance file based on the TOSCA
template. This instance will have all the parameters needed to deploy the
service in a way that fulfils the service subscriber's decisions and needs.
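The placeholder-to-form-to-instance flow in the steps above can be sketched in a few lines of Python (a toy illustration only: the `${NAME}` placeholder syntax and the parameter names are assumptions, not the platform's real convention):

```python
import re

# Match placeholders of the (assumed) form ${SOME_NAME}.
PLACEHOLDER = re.compile(r"\$\{([A-Z_]+)\}")

def list_placeholders(template: str) -> list:
    """Return the placeholder names found in the template;
    these would become the fields of the generated web form."""
    return sorted(set(PLACEHOLDER.findall(template)))

def instantiate(template: str, values: dict) -> str:
    """Replace every placeholder with the value collected from the
    form, producing the TOSCA instance text."""
    missing = [n for n in list_placeholders(template) if n not in values]
    if missing:
        raise ValueError("missing parameter values: " + ", ".join(missing))
    return PLACEHOLDER.sub(lambda m: values[m.group(1)], template)

template = "db_user: ${DB_USER}\ndb_password: ${DB_PASSWORD}\n"
print(list_placeholders(template))   # fields the form would show
print(instantiate(template, {"DB_USER": "clero", "DB_PASSWORD": "secret"}))
```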
Once the TOSCA instance file has been saved the DEPLOY phase may start.
This phase will be executed by the CloudOpting Platform.
Image 6-DEPLOY phase steps
The steps of this phase are:
0 - Generate new VM: the CloudOpting orchestration engine will generate a
new virtual machine at the appropriate cloud provider.
1 - Manage VM in the cloud: the newly created VM is set up according to
the Service Subscriber's preferences and the service's defined rules in terms
of resource usage (CPU, memory, HDD). The network setup will be managed
by the cloud provider in order to offer network connectivity to the newly
created virtual machine.
2 - Manage VM in CloudOpting: the newly created virtual machine will be
started using the CloudOpting base image, which will contain the software
packages required by CloudOpting (monitoring, administration and logging
agents) as well as the Docker binaries, so that it can act as the Docker host
for the CloudOpting containers.
3 - Generate CloudOpting containers: based on the preferences set by the
Service Subscriber, the CloudOpting containers are generated automatically
and the corresponding Docker images are built.
4 - Deploy CloudOpting containers into the new VM: the CloudOpting
containers built in the previous step are copied and instantiated into the
newly created virtual machine. After the containers are started, the service
may be used by the final users.
3. Actual deployment implementation
Having defined the generic migration and implementation methodology in
the previous chapter, this chapter applies that methodology to each of the
experiments and describes the actions currently under way. Each step takes
into consideration the peculiarities of each experiment.
3.1 EXP-01 – Clearò - Transparency Portal
The Clearò portal is a multitenant application that currently runs in a legacy
environment.
The migration will follow two paths:
1) porting the current architecture to one compliant with the CloudOpting
architecture;
2) completing the portal with multi-language features and the additions
needed to release it as a service usable across Europe.
At the moment, activity has been focused on the first path. Once that is
solved, the evolution of the service can be deployed smoothly on the
CloudOpting platform.
Clearò architecture
The current architecture of the Clearò portal is summarised in the following
deployment diagram.
Image 7-Current deployment diagram for CLEARO
As can be seen, there are 3 virtual machines that contain the different
pieces of the distributed architecture.
All of these components are installed manually (by DevOps people), which
may generate problems in the process of migrating from a development to a
production environment, leading to issues that can only be fixed in
production.
The operating system of all the three machines is a CentOS distribution.
The proposed CloudOpting architecture keeps the required isolation at the
container level while optimizing resource expenditure: OS resource
consumption is reduced to just one instance, lowering the cost of operation
by freeing CPU and RAM in the datacenter.
Image 8-CLEARO target deployment diagram
Clearò installation instructions
The Docker base image that the service will use will be a CloudOpting
image based on the official CentOS 6.6 image taken from Docker Hub,
where an official repository for CentOS images resides. From this image all
the containers will be generated.
As seen from the previous diagram we have 5 containers: 3 for the
middleware and DB services and 2 for the Data Volumes used by the other
containers.
To make the installation we will use community-available Puppet modules
that are officially maintained by the Puppet vendor:

Component | Puppet module
Apache | puppetlabs-apache
PostgreSQL | puppetlabs-postgresql
Tomcat | puppetlabs-tomcat
Liferay | customization based on proteon/Liferay

For Liferay there is an existing module that has a dependency on a different
Tomcat module; the development to be done is to publish a forked module
that depends on the more actively developed Tomcat module from
puppetlabs.
Following this stage, another step will be required to add some CSI
customizations that transform the base Liferay portal into the Clearò portal.
This operation consists of deploying additional WAR files into Tomcat in the
proper order.
This will generate 5 Dockerfiles linked to one another.
All this information will be placed appropriately in the TOSCA file.
Particular care will be needed in the ordering of the deployed WARs, and
there will be some peculiarities arising from the fact that Liferay defines its
own WAR type; this will add complexity to the design of the TOSCA file and
its usage.
The advantage is that, through this service, the project will tackle an
example of higher architectural complexity, providing a real proof of
concept for a generic real-world deployment.
Since the Clearò service has been used by CSI as a proof of concept to
refine the orchestrator for the WP2 activity, the actual migration of the
service has been delayed in favour of solving peculiarities at the
orchestrator level and updating the whole project with potential changes in
the various definitions. For this reason the development team did not use
GitHub and Docker Hub, but decided to work locally, and has not yet
worked on the custom aspects of Clearò.
Image 9-CLEARO migration steps
For further details about the Clearò service, please refer to D3.1 (Appendix
A, chapter “EXP-01 - Clearò”) and D5.1 (Section 5.1 of the document).
3.2 EXP-02 – FixThis and City Incident Reporting
This experiment will be performed once the first mobile-applications
experiment (EXP-03 City Agenda and Next2Me), detailed below, is running
properly.
For further details about the Fix This service, please refer to D3.1 (Appendix
A, chapter “EXP-02 - Fix This”) and D5.1 (Section 5.2 of the document).
3.3 EXP-03 – City Agenda and Next2Me
As part of the scope of the project, the "legacy" mobile applications such as
Next2Me and Agenda are being completely redeveloped from scratch,
producing brand-new mobile apps that will reach the market with new
functionality, a totally refurbished user interface and a new business model
(which is partially defined but has yet to be developed in further detail).
In this chapter we will focus on describing the main features and the
development of the first mobile app that has already been migrated,
Next2Me:
- Migration: this section describes the main features of the Next2Me mobile
app and an outline of the underlying business model.
- Deployment: this section describes the main physical features and
components of the Next2Me mobile app and the process we have followed
to deploy and install the application in the CloudOpting platform.
3.3.1 Next2Me Migration
Next2Me Business Model
As a brief introduction to the business model, Next2Me is a mobile app with
a global purpose: the philosophy behind the app is to
create a single, unique application led by the Barcelona Municipality
(IMI) and provided by Atos Worldline. As said before, there will be a single
instance of the Next2Me mobile app, allowing third parties (in this case
municipalities) to join it. To do so, the only requisites
are to publish open data files describing their facilities and to sign a Subscriber contract
with the CloudOpting platform.
The following image shows the schema we will follow to deploy the Next2Me
app on the CloudOpting platform and how the different municipalities will
integrate with it by publishing open data sets.
Image 10-Next2Me municipalities deployment model
Further definition of the Next2Me business model will be provided in WP5.
Next2Me Architecture
The following image shows the overall architecture we are implementing for
the Next2Me mobile app and the platform of services that needs to be
integrated. We call this platform the “Connected Citizen Middleware”. As a
main feature, this platform will be replicated for all the mobile apps we
are going to develop within the CloudOpting project.
Among the main components of the Connected Citizen Middleware we can
highlight:
The Geolocalization Manager, providing the user location and the
contextual search of nearby facilities.
The Search Manager, providing semantic search.
The ETL Manager, providing the interfaces and functionality for data
loading and transformation.
The City Manager, providing back-office information on the subscribing
municipalities.
Image 11-Connected Citizen platform & services view
Next2Me User Interface
The following images show the totally renewed user interface we are
implementing for the new Next2Me mobile app. This user interface also
delivers a new user experience compared with the old legacy mobile
app. In the new one we are prioritising the user’s location (thanks to the
extensive use of Google Maps and geolocation) and the free search of
categories and facilities, thanks to a semantic search engine.
Image 12-Next2Me renewed user interfaces
3.3.2 Next2Me Deployment
This chapter describes the actual deployment of the Next2Me
experiment on the CloudOpting platform (the IMI Azure CloudOpting server).
Next2Me Physical Architecture
Next2Me is based on a three-component architecture in which each component
corresponds to a VM:
Rest Services. Includes the REST services developed to integrate with the
business logic libraries and middleware.
Database Server. Includes the DB engine and the ETL data load engines
and interfaces used to upload open data sources into the Next2Me app.
First release, MySQL. Following an iterative approach, we will first release a
prototype of the DB based on MySQL technology and a relational approach.
Second release, MongoDB. This will be a new DB engine following a NoSQL
structure. The aim of this DB is to transform the Next2Me DB into a
document structure that facilitates semantic searches.
Solr Server. This server includes the SOLR semantic index and search
engine (to be implemented in the second release).
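As an illustration of the document-oriented approach planned for the second release, a facility record might be stored as a single document similar to the sketch below; the field names and values are hypothetical and not the actual Next2Me schema:

```json
{
  "facilityId": "example-0001",
  "name": "Example Public Library",
  "municipality": "Barcelona",
  "categories": ["library", "culture"],
  "location": { "type": "Point", "coordinates": [2.17, 41.40] },
  "description": "Free-text description used by the semantic search engine"
}
```

Keeping the category labels and free-text description alongside the geolocation in one document is what would make it straightforward to feed the SOLR index and answer nearby or semantic queries in a single lookup.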
The Next2Me server map is described in the following image:
Image 13-Next2Me container and server map
Next2Me Technical Features
Next2Me is a mobile application with the following components:
- MySql
- Java (Business Library)
- JAX-RS (Rest Services)
- Apache Tomcat
- OS: CentOS Linux release 7.0.1406 (Core)
- Docker: Docker version 1.4.1, build 5bc2ff8
- Environment: CloudOpting.
Next2Me Deployment Process
This process describes the steps followed to deploy the current
release of the Next2Me mobile app (as of April 2015) on the CloudOpting
platform. The complexity of this deployment will grow in the future
as the planned functionalities are developed and integrated into the
app.
The next image shows the workflow with the main steps followed in order to
deploy it.
Image 14-Next2Me high level migration steps
Step 1: This step identifies the Docker containers we need for the
app. In this case two containers were created:
o DB MySQL container
o Apache Tomcat container
Step 2: This step defines and executes the commands needed to create
the Docker images:
o MySQL Docker image
o Apache Tomcat Docker image
Step 3: In this step we store the software libraries we need to install in
the Docker containers: MySQL and Apache Tomcat.
Step 4: In this step we create and deploy the packages in the Docker
containers.
Step 5: In this step we parametrize the Docker instances we have just
created.
Step 6: In this step we link the Docker images involved. The MySQL
instance IP is set in the application WAR, which is hosted in the Apache Docker
container.
Step 7: In this step we open the ports needed to access the MySQL
container.
Next2Me Deployment Instructions
Because Next2Me is still in a prototype phase, please note
that all the steps we followed to deploy and install the app were
performed manually.
Apache Tomcat Installation
To install Apache Tomcat we run the following command:
%> sudo docker run -it -d --name tomcat_test \
   -v /home/ubuntu/tomcat-volume:/usr/local/tomcat/webapps \
   -p 8888:8080 tomcat:8.0
- We name it "tomcat_test" with the '--name' option.
- We mount a volume pointing directly at the deployment folder inside the Docker
image, which allows us to place the applications we want to deploy directly from
our host. This is done with the '-v' option, giving the host path first and the
image path second, separated by a colon (:).
- We set the external port with the '-p' option.
- We specify the version of Tomcat we want to install.
MySQL Installation
To install MySQL we run the following command:
%> sudo docker run --name mysql_test \
   -v /home/ubuntu/mysql-volume:/usr/local/mysql-volume \
   -e MYSQL_ROOT_PASSWORD=1234 -e MYSQL_DATABASE=database_name \
   -e MYSQL_USER=user -e MYSQL_PASSWORD=password -d mysql
- We name it 'mysql_test' with the '--name' option.
- We mount a volume on a folder inside the Docker image, which allows us to load
scripts into the database. This is done with the '-v' option, giving the host
path first and the image path second, separated by a colon (:).
- With the '-e' option we pass parameters such as MYSQL_ROOT_PASSWORD,
MYSQL_DATABASE, MYSQL_USER and MYSQL_PASSWORD, which automatically create a
database with this information.
- We specify the mysql image.
After that, the MySQL database has to be populated with data, so we follow
these steps:
1. Connect to the Docker container by running the following command:
%> sudo docker exec -i -t mysql_test bash
2. Once inside the container, we execute the populate instruction, loading the
data file:
%> mysql -u user -p database_name < /usr/local/mysql-volume/Dump20150130.sql
Image IP & Networking
To find the IP address of the image, we execute:
%> sudo docker inspect mysql_test | grep 'IPAddress'
- This gives us the IP of the MySQL Docker container.
- We can now configure our application to point at this IP.
- Once the application is configured and packaged, we copy the
packaged file to the volume created earlier, and we have a
“dockerized” application up and running.
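As a variation on the grep above, Docker's inspect command also accepts a Go template via the '--format' flag, which returns the address alone. This is only a sketch, since it assumes the mysql_test container from the previous section is running:

```
%> sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' mysql_test
```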
3.4 EXP-04 – mobileID
This experiment will be performed once the first four experiments
detailed herein are running properly, using them as a guideline to
guarantee its success.
For further details about the mobileID service, please refer to D3.1
(Appendix A, chapter “EXP-04 - mobileID”) and D5.1 (Section 5.4 of the
document).
3.5 EXP-05 – ASIA GUIA (Applied Integrated Systems Support)
ASIA-GUIA (Applied Integrated Systems Support) is a corporate database
service that accumulates all data related to facilities and events in Barcelona
and its metropolitan area.
The main purpose of the service is to centralize the information about these
facilities and events into a single system to provide users with quality
information updated by different information and public attention channels at
the Barcelona City Council.
The ASIA architecture is described in the D3.1 deliverable. It is based on a
three-tier architecture: Apache, Solr and MySQL. This configuration has been
migrated to the CloudOpting standards, a process in which multiple steps and
actions have to be performed, requiring a deep knowledge of both the
application being migrated and the technological tools involved in the
platform.
Image 15-ASIA experiment
The distinctive features of each experiment, including the ASIA service,
make the process of migrating an application a learning path for fine-tuning
the requirements and dependency specifications needed to develop the
corresponding automation scripts.
The starting point of the ASIA experiment is a set of three hot VMware images
of the running ASIA servers:
Apache
Solr
mySQL
As a consequence, a reverse engineering approach is needed to
recover the source files of each server: configuration files,
databases, volumes, indexes, etc.
Every CloudOpting ASIA instance will be composed of a single virtual
machine with a Docker host and at least three dedicated containers running
an Apache web server, Solr and MySQL. Shared data containers and volumes
can also be created to ensure data persistence and cross-container
communication.
Image 16-Dockerised ASIA instance VM
The steps to migrate the ASIA experiment to the CloudOpting platform are
shown in the diagram below:
Image 17-Migration steps to migrate ASIA.
Green steps are already done; red steps are still to be done.
Extraction of source files:
Databases: After inspecting the ASIA MySQL server, a backup of the
databases has to be taken and saved to a single .sql file for later
import.
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| test               |
| guiabcn            |
| guiabcn_nova       |
+--------------------+
Solr: The most important files to back up for the Solr server are stored
under its home directory. In the ASIA image this directory is
/opt/solr/. The main files are:
/conf/schema.xml
/conf/solrconfig.xml
/data/index
/data/spellchecker
Apache: On CentOS, the server settings are stored in the following
folders:
/var/www
/etc/httpd/
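The extraction of the Solr files listed above can be sketched as a small backup script. The snippet below is only an illustration: it builds a throwaway directory standing in for the server layout, because on the real ASIA image SOLR_HOME would simply be /opt/solr.

```shell
#!/bin/sh
# Illustration only: create a stand-in for the Solr home directory.
# On the real ASIA server, set SOLR_HOME=/opt/solr instead.
SOLR_HOME=$(mktemp -d)
mkdir -p "$SOLR_HOME/conf" "$SOLR_HOME/data/index" "$SOLR_HOME/data/spellchecker"
touch "$SOLR_HOME/conf/schema.xml" "$SOLR_HOME/conf/solrconfig.xml"

# Bundle the configuration and index files into a single archive
# that can later be restored inside the new container.
tar -czf solr-backup.tar.gz -C "$SOLR_HOME" conf data

# List the archive contents to verify the backup.
tar -tzf solr-backup.tar.gz
```

The same pattern applies to the Apache folders (/var/www and /etc/httpd/), pointed at the corresponding paths.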
All the configuration files above then have to be copied and migrated,
and the databases restored, in the new containers. To provide a shared
point for storing the configuration files, a GitHub repository was created.
Image 18-Github ASIA account screenshot
Each server directory contains the corresponding configuration data files
and a Dockerfile to build the images of each Docker container.
The content can easily be downloaded from GitHub onto the Docker host with
the command:
git clone https://[email protected]/cloudopting-IMI/AsiaTest.git
The Dockerfiles build the container images from official OS repositories and
include the installation scripts for the custom packages required on each
container. Image 19 below shows the content of the Apache container
Dockerfile.
Image 19-Apache Dockerfile in Github
In this approach, the ASIA git repository is copied to the Docker host in order
to run the Dockerfiles locally and transfer the custom files from the host to
the container. The instruction ADD ./www ./var/www transfers the content of
the local directory ./www on the Docker host to the directory ./var/www
in the corresponding Docker container.
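Since Image 19 is not reproduced in this text, a hypothetical reconstruction of such an Apache Dockerfile is sketched below; the base image, package names and paths are assumptions for illustration, not the actual repository content:

```dockerfile
# Hypothetical sketch of the ASIA apache container Dockerfile.
# Base the image on an official OS repository image.
FROM centos:centos7

# Install the web server from the official OS repositories.
RUN yum install -y httpd

# Transfer the recovered custom files from the cloned git repository
# on the Docker host into the container, as described above.
ADD ./www /var/www
ADD ./httpd /etc/httpd

# Expose the web port and start Apache in the foreground.
EXPOSE 80
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
```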
When running the containers we have to link the services running
inside them by linking the containers, and then open the corresponding ports
on the containers and the Docker host. Links between containers allow them
to discover each other and securely transfer information from one container
to another.
The MySQL container has to be created first and attached to a shared volume to
ensure data persistence. The Solr container then has to be linked to the MySQL
container, and finally the Apache container has to be linked to both the MySQL
and Solr containers (Image 20).
Image 20-Container creation order and linking map
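Following the creation order in the linking map, the container start-up can be sketched with docker run commands; the image and container names below are illustrative, and the commands assume a Docker host with the three images already built:

```
# 1. MySQL first, attached to a host volume for data persistence
%> sudo docker run -d --name asia_mysql -v /opt/asia/mysql:/var/lib/mysql asia/mysql

# 2. Solr, linked to the MySQL container
%> sudo docker run -d --name asia_solr --link asia_mysql:mysql asia/solr

# 3. Apache last, linked to both, with the web port opened on the host
%> sudo docker run -d --name asia_apache --link asia_mysql:mysql \
   --link asia_solr:solr -p 80:80 asia/apache
```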
After configuring the Docker host and containers, we have to open the
corresponding ports on the Azure virtual machine. At present, ASIA is
installed on the cloudopting3 virtual machine, a Basic A3 standard-size VM.
Image 21 below shows the ports opened during the installation of ASIA;
note that some of them are used for testing purposes. Among others, the
SSH, HTTP and MySQL ports are shown.
Image 21-AZURE management portal
3.6 EXP-06 – MIB (Base Information Database)
This experiment will be performed once the first four experiments
detailed herein are running properly, using them as a guideline to
guarantee its success.
For further details about the MIB service, please refer to D3.1 (Appendix A,
chapter “EXP-06 - MIB”) and D5.1 (Section 5.6 of the document).
3.7 EXP-07 – Provide and offering of PaaS and SaaS to other
municipalities
Since the objective of this experiment is to demonstrate the deployment of a
new instance of Open Data (Experiment 13 of this project), that experiment
needs to be completed first, prior to performing this one.
For further details about the Provision and offering PaaS and SaaS to other
municipalities, please refer to D3.1 section “EXP-07 - Provide and offering of
PaaS and SaaS to other municipalities” and section “EXP-13 - Open Data”
with focus on the application.
3.8 EXP-08 – Energy Consumption and Generation Dashboard
Experiment 08 - Energy Consumption and Generation Dashboard does not
have a service to be migrated; instead, it will be built from scratch. It is
currently in the design and analysis phase.
This experiment will be performed once the first four experiments
detailed herein are running properly, using them as a guideline to
guarantee its success.
For further details about the Energy Consumption and Generation
Dashboard service, please refer to D3.1 (Appendix A, chapter “EXP-08 -
Energy Consumption and Generation Dashboard”) and D5.1 (Section 5.8 of
the document).
3.9 EXP-09 – Bus Portal
Experiment 09 - Bus Portal does not have a service to be migrated; instead,
it will be built from scratch. It is currently in the design and analysis phase.
This experiment will be performed once the first four experiments
detailed herein are running properly, using them as a guideline to
guarantee its success.
For further details about the Bus Portal service, please refer to D3.1
(Appendix A, chapter “EXP-09 - Bus Portal”) and D5.1 (Section 5.9 of the
document).
3.10 EXP-10 – Business Portal
The Business Portal from ElectricCorby is a WordPress site based on a
specific template and integrated with Alcium’s Evolutive CRM6 software.
Alcium CRM software is a COTS product provided as a service by Alcium
Software7 to allow public administrations to perform Customer Relationship
Management functions.
The current usage of the Business Portal experiment is also described in
deliverable D3.1 Technical and Legal Requirements, but will be briefly
summarized here:
- users connect to the WordPress site for general information
managed in the WordPress CMS
- users are shown an IFRAME for the specific Alcium CRM account of the
Public Administration
The deployment diagram of the Business Portal experiment (from
deliverable D3.1) is:
Image 22-Business Portal deployment overview
6 http://www.evolutive.co.uk/CRM - Evolutive is a specialist CRM solution for enquiry handling and
property/land recording, powered by Alcium Software, a software development and web design
studio based in Sheffield.
7 http://www.alciumsoftware.com/
For the migration of the Business Portal experiment to the CloudOpting
platform, custom migration steps will be performed as follows:
Image 23-Business Portal migration steps
These are the envisioned steps to achieve the full migration of the
experiment, fulfilling the CloudOpting platform requirements. The
green-coloured steps are already done, while the red-coloured steps are
due to be completed within the next period.
The steps already done are described below.
Containers
The containers will be based on the specific CloudOpting architecture
described in the previous chapter. This experiment will use the following
CloudOpting containers:
1. cowordpress
2. comysql
The full inheritance is shown in the following diagram:
Image 24-Business Portal Docker containers usage
The yellow containers are shown in order to depict the full inheritance and
dependencies of the Business Portal experiment.
The containers used by the Business Portal (the ones in green) are the following:
1. cowordpress-10 - this is the main container that will run Apache HTTP
server with PHP module and WordPress software. The database needed by
the WordPress component will run in another container providing advanced
security for the whole experiment.
2. comysql-10 - this is the database container.
Both containers will run in the same virtual machine, dedicated to each
customer. This increases security and simplifies administration. The container
diagram below illustrates the use of platform resources:
Image 25-Business Portal Docker containers colocation
Docker files
The Docker files used for these containers are the following:
1. cowordpress-10
This container is based on the cowordpress container:
FROM cloudopting/cowordpress
It adds a new configuration file from the Docker host to a temporary folder
inside the container:
ADD wp-config.php.tem /tmp/
It configures the Apache HTTP server with the PHP module and a virtual host
matching the Service Subscriber's preferences:
RUN puppet apply -e "class { 'apache':mpm_module => 'prefork'}
apache::vhost { 'first.example.com':docroot => '/var/www/first.example.com' }
class {'::apache::mod::php': }" --verbose --detailed-exitcodes || [ $? -eq 2 ]
It copies the WordPress installation into the virtual host created earlier:
RUN cp -r /tmp/wordpress/* /var/www/first.example.com/
It adds the new configuration file from the temporary folder, correcting its
path inside the container:
RUN cp /tmp/wp-config.php.tem /var/www/first.example.com/wp-config.php
The placeholder values (such as first.example.com) in the previous Dockerfile
commands will be replaced by the actual values specified by the Service
Subscriber. The replacement of the variables will be done by the CloudOpting
orchestrator service component developed in WP2, which will be used by
Service Providers to add new services and by Service Subscribers to opt in to
available services. This tool is detailed in chapter 5 of the D2.2
deliverable, where it is referred to as the CloudOpting Services Catalog.
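A minimal sketch of the kind of substitution the orchestrator performs is shown below, using sed to replace a token in a template. The token syntax (@HOSTNAME@) and file names are invented for illustration and are not the orchestrator's actual conventions:

```shell
#!/bin/sh
# Illustrative template file with an invented placeholder token.
printf "docroot => '/var/www/@HOSTNAME@'\n" > vhost.conf.tem

# Value that would come from the Service Subscriber's preferences.
SUBSCRIBER_HOST=first.example.com

# Replace the token with the subscriber-specific value.
sed "s/@HOSTNAME@/$SUBSCRIBER_HOST/g" vhost.conf.tem > vhost.conf

cat vhost.conf   # docroot => '/var/www/first.example.com'
```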
2. comysql-10
This container is based on the comysql container:
FROM cloudopting/comysql
The database configuration will be done in later steps. No special
configuration is needed for this container.
Custom packages
This experiment uses custom packages to enrich the default
functionality of the software packages and to provide immediate business
benefits for the Service Subscriber.
The custom packages are the following:
- WordPress iThemes Security - provides enhanced security for
WordPress deployments
- public WordPress themes and plugins - provide multiple look-and-feel
options for various needs
- custom database information - provides a ‘template’-based approach
to the business portal for the Service Subscriber
- prebuilt Alcium CRM integration - provides out-of-the-box integration
with the Alcium CRM solution
These custom packages are envisioned in this phase but the final content will
be available after the “Template-ize” phase.
Configure links between containers
Having two containers in the same virtual machine increases security but
also adds another layer of complexity to the configuration of the experiment,
as both containers must communicate with each other (the WordPress site has
to read data from and write data to the MySQL database).
Linking containers will allow secure connectivity from the master container
(in our case it is cowordpress-10) to the linked container (comysql-10)
without exposing network communication ports to the Docker host.
The container linking diagram is the following:
Image 26-Business Portal container creation order
Translate to Puppet
During the first lab experiments with this migration, Puppet8 technology
was not fully used; software packages were installed and configured
using OS utilities. After the experiment migration was stabilized, the
OS-specific procedures were translated into Puppet recipes for the installation
and configuration of the software packages. This enables reuse across platforms
and operating systems.
Using Puppet allows us to run different types of operating systems as
guest OS inside the virtual machines; they only have to be supported
by Puppet.
Right now the Apache HTTP web server is installed using Puppet by:
RUN puppet module install puppetlabs-apache
Also MySQL is installed by:
8 https://puppetlabs.com/ - IT automation technology
RUN puppet module install puppetlabs-mysql
These installations are done in the Docker hierarchy tree of the containers,
the configuration of the software being specific to each instance of the
container, based on the Service Subscriber's needs.
Use GitHub and DockerHub
The containers are based on the specific CloudOpting architecture
described in the previous chapter. The Docker files reside on GitHub
at https://github.com/CloudOpting.
Image 27-Business Portal container source repository
The corresponding Docker images are built by the public Docker Hub, available
at the address https://registry.hub.docker.com/repos/cloudopting/.
Image 28-Business Portal container registry
The remaining steps are foreseen to be needed for the successful migration
of the experiment onto the CloudOpting platform, but they are not completed
at the time of finalizing this deliverable; they will be covered in deliverable
D3.2.2, which is due in M20.
3.11 EXP-11 – Indicators Portal
This experiment will be performed once the first four experiments
detailed herein are running properly, using them as a guideline to
guarantee its success.
For further details about the Indicators Portal service, please refer to D3.1
(Appendix A, chapter “EXP-11 - Indicators Portal”) and D5.1 (Section 5.11 of
the document).
3.12 EXP-12 – Smart City Cloud Expert System
This experiment will be performed once the first four experiments
detailed herein are running properly, using them as a guideline to
guarantee its success.
For further details about the Smart City Cloud Expert System service, please
refer to D3.1 (Appendix A, chapter “EXP-12 - Smart City Cloud Expert
System”) and D5.1 (Section 5.12 of the document).
3.13 EXP-13 – Open Data
This experiment will be performed once the first four experiments
detailed herein are running properly, using them as a guideline to
guarantee its success.
For further details about the Open Data service, please refer to D3.1
(Appendix A, chapter “EXP-13 - Open Data”) and D5.1 (Section 5.13 of the
document).
3.14 EXP-14 – Mobile Services - Interoperability Azure -
Cloud Stack
Since the objective of this experiment is to demonstrate the deployment of a
new instance of the Mobile Services Platform and the Next2Me mobile
service in another technology environment (in this case CloudStack), those
deployments need to be in place first, prior to performing the experiment
itself.
For further details about the Mobile Services interoperability, please refer
to D3.1 section “EXP-14 - Mobile Services - Interoperability Azure - Cloud
Stack” and sections “EXP-03 “City Agenda” and EXP-04 “Next2Me””, with focus
on the applications.
4 Conclusions and Next Steps
During the laboratory work in a sandbox environment, and also during
migration work on an actual cloud environment, some technical issues were
raised that are worth mentioning:
- trial and error is good for learning purposes and for experimenting with
new technologies; setting up a process and defining a migration path or
roadmap is more efficient for long-term engagements.
- having a common base image for containers will slow things down at the
beginning, but later experiments will benefit from the experience, the
technical process and the resolution of errors encountered along the way.
- OS commands are easier to use at the beginning, but replacing them
with Puppet recipes is a must in order to be compliant with the CloudOpting
deployment requirements.
- data persistence in CloudOpting containers is tricky for large volumes
of data; HOST volumes and CONTAINER volumes are different options to
consider in the next phases of migration.
4.1 Next Steps
The next steps for Work Package 3 are to continue with the migration of
the rest of the experiments and to document all the migration steps in D3.2.2.
During the next migrations the experiments will also gain other
functionalities that fulfil the CloudOpting platform requirements, such as:
1. centralized logging with LogStash
2. centralized activity monitoring with ElasticSearch and Kibana
3. centralized administration with PuppetMaster
4. centralized monitoring with Zabbix
These functionalities will be transparently added to all migrated services by
utilizing the CloudOpting container hierarchies.
In the next period the CloudOpting orchestrator will be released for use in
migrations; it will also support the service subscription process by fully
automating it.
Automating the service subscription process will also help other
processes, such as:
- tailoring the service instance to each Service Subscriber through automation
- decommissioning a service instance or a Master Service.
The next deliverable will also provide more technical details on the
CloudOpting Publishing Funnel phases. The final conclusions and the final
deployment implementation methodology will be presented in deliverable
D3.2.2.