CloudSource: Cloud computing & open source
Issue 2 - October 2013
www.de-clunk.com [email protected]
Graphic concept, design & layout: Paul Davies

Cover: graphical concept and representation of the public sector as a system of nodes with interconnections. Some nodes also have sub-elements or partitions which co-exist with their parent. The graphic visualises the similarities between the public sector and the cloud concept.
Editorial Board

Prof. Alex Delis - SUCRE Coordinator, National & Kapodistrian University of Athens, Greece
Dr. Norbert Meyer - Head of the Supercomputing Department at the Poznan Supercomputing Center, Poland
Prof. Dr. Keith Jeffery - President of ERCIM, U.K.
Dr. Yuri Glikman - OCEAN project Coordinator, Fraunhofer Institute, Germany
Dr. Toshiyasu Ichioka - EU-Japan Centre for Industrial Cooperation, manager of the FP7 project JBILAT, Japan
Mrs. Cristy Burne - Scientific Editor and Journalist, Australia

Coordination by Giovanna Calabrò, Zephyr s.r.l., Italy and Mrs. Eleni Toli, National & Kapodistrian University of Athens, Greece.

This publication is supported by EC funding under the 7th Framework Programme for Research and Technological Development (FP7). This Magazine has been prepared within the framework of FP7 SUCRE - SUpporting Cloud Research Exploitation Project, funded by the European Commission (contract number 318204). The views expressed are those of the authors and the SUCRE consortium and are, under no circumstances, those of the European Commission and its affiliated organizations and bodies.

The project consortium wishes to thank the Editorial Board for its support in the selection of the articles, the DG CONNECT Unit E.2 – Software & Services, Cloud of the European Commission and all the authors and projects for their valuable articles and inputs.
Table of Contents

Editorial Board ... 2
Table of Contents ... 3
Goodbye to legacy software - ARTIST changing frumpy to glamorous ... 4
Open cloud software applications for the public sector ... 9
Education in the cloud ... 12
Cloud-sourcing: Positioning the cloud for disaster relief scenarios ... 14
Azure: designing modern applications using a hybrid cloud approach ... 18
MODAClouds: Model-driven engineering for the clouds ... 21
Synnefo: A Complete Open Source Cloud Stack ... 25
CELAR: Automatic, multi-grained elasticity provisioning for the cloud ... 29
The PaaSage project: the cloud was the limit ... 33
News & Events ... 37
Related International Events ... 38
Goodbye to legacy software - ARTIST changing frumpy to glamorous
Being stuck with ages-old software is a constant headache. It’s clunky, expensive to run and never quite works like it should. Worse, most legacy applications are unsuited to running on the cloud, and replacing them isn’t easy: migration can interfere with business performance, continuity and service offerings.
Clara Pezuela, Research and Innovation Group, Atos Spain SA, & the ARTIST project consortium
Yet traditional software and service providers must adapt to the new reality of the cloud, without disrupting business continuity for their customers.
Reverse-engineer, forward-engineer
Until now, legacy applications could only watch as their modern counterparts whizzed by, scaling up and down on demand, roaming the internet, hosted from sleek and swanky data centers.

Now, the EC-supported ARTIST project proposes a set of methods and tools that, like the flick of a wand, can reverse-engineer legacy apps to a meta-model version and then forward-engineer them to the desired platform. Remodeled applications can then take advantage of the latest technologies, including cloud computing, smart-phones and security features.

Working with ARTIST
But the decision to remodel isn’t always that simple. Companies must decide whether to migrate existing solutions, which represent significant prior investment, or whether to start from scratch. Moreover, they must do so in an environment where time-to-market is critical.

“The ARTIST project applies the latest scientific knowledge to a critical business issue that many European companies are currently facing,” said Clara Pezuela, the ARTIST project coordinator. “We aim to help companies evaluate whether their applications can be migrated to a cloud environment at a reasonable cost. If migration is possible, we’ll provide the tools to achieve it.”

Reduce software costs by 50%
Approximately 90% of software costs can be attributed to post-installation support, yet legacy applications rarely achieve the performance levels of more modern solutions. According to its partners, ARTIST will reduce costs by 50% when compared to traditional manual migration methods, which will make it possible to implement more frequent migration programmes to better and more cost-efficient platforms.

ARTIST’s software modernisation approach is based on model-driven engineering techniques, aiming to help with re-engineering legacy applications to platform-independent models suited to cloud computing. This will significantly reduce the risk, time and cost of software migration, which today represent major barriers for organisations wanting to take advantage of cloud-based technologies.
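In spirit, the reverse-engineer/forward-engineer pipeline is a pair of model transformations. The sketch below illustrates the idea only: the descriptor fields, model keys and target mappings are invented for illustration and are not ARTIST's actual metamodels or tooling.

```python
# Illustrative sketch: a plain dict stands in for the platform-independent
# model (PIM) that ARTIST-style tooling would derive from a legacy app.

def reverse_engineer(legacy_descriptor):
    """Lift a (hypothetical) legacy deployment descriptor into a PIM."""
    return {
        "components": [m["name"] for m in legacy_descriptor["modules"]],
        "storage": legacy_descriptor.get("database", "file"),
        "stateful": legacy_descriptor.get("sessions", "local") == "local",
    }

def forward_engineer(pim, target):
    """Project the PIM onto a target platform model."""
    model = {"platform": target, "services": list(pim["components"])}
    if pim["stateful"]:
        # Local session state blocks horizontal scaling; externalise it.
        model["session_store"] = "managed-cache"
    model["storage"] = "managed-db" if pim["storage"] != "file" else "object-store"
    return model

legacy = {"modules": [{"name": "billing"}, {"name": "reports"}],
          "database": "oracle", "sessions": "local"}
print(forward_engineer(reverse_engineer(legacy), "cloud"))
```

The point of the intermediate model is that the same PIM can be forward-engineered to different targets, which is what makes repeated, cheaper migrations plausible.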
www.artist-project.eu
“We’re committed to helping businesses revitalise the thousands of legacy applications that aren’t being used optimally.”
Large-scale topographical GIS database for the Hellenic Prefectural urban planning authorities
One of the greatest challenges that governments in the 21st century face is that of efficiently and cost-effectively keeping up with public demand. Increasingly, councils are under pressure to efficiently plan and manage resources, while keeping the public informed and opening the way for their participation.
In response to these challenges, the Hellenic Ministry of Interior Affairs initiated a large-scale project, entitled “Electronic Urban Planning: A Geographic Information System (GIS) for the Prefectural Urban Planning Authorities.” The project aimed to introduce a private cloud-based GIS database for planning, monitoring and managing urban planning and land development for the 185 Prefectural urban planning authorities across Greece.
SingularLogic was appointed to develop the system, tasked with providing the engineers and planners of the Prefectural urban planning offices – and the general public – with accurate, up-to-date information on urban planning legislation, and the tools to organise and manipulate this information accordingly.
Dimitrios Charalambakis – SingularLogic, Greece
Why geographic information systems?
Until recently, cartography and GIS had little to do with urban development: why invest a good deal of money in cartography, if the results are useful only to a handful of specialists, and of no direct benefit to internal management or public relations? Now, with evolving technology and improved information management, the answer is clear.

Regardless of the size of an urban development authority, land development planners must deal with a great volume of information: land-use data, addresses, transportation networks, housing, land acquisitions, accounts, and so on. A planner must study and track multiple urban and regional indicators, forecast future community needs, and plan strategically to guarantee quality of life for the community.

By harnessing the power of a tailored GIS, urban planning authorities can more efficiently plan and develop, more rapidly identify and respond to problems, and more effectively share outcomes with citizens. Further, basing a GIS in the cloud enables urban planners and the general public to participate in the process.

Doing more and spending less
Previously, the Hellenic urban planning lifecycle was predominantly manual. Information was retrieved using different registers and record books. The process was tedious and time consuming, leading to less objective decision-making. By implementing a private cloud-based GIS solution, the Ministry of Interior Affairs improved efficiency by:
- automating tasks,
- enabling prompt decision-making,
- optimising information retrieval, and
- providing on-time, accurate and complete data for decision-making.

As part of the project, the project team:
- re-engineered the processes and procedures of the Urban Planning Offices,
- developed a private cloud-based GIS,
- designed a GIS database containing current legislation, and
- implemented a portal to provide value-added services and information to the citizens.

Value-added services for citizens
The GIS system has been implemented over the governmental backbone (2500+ internal users) and internet (open to external users) via a private cloud. It is now used by town planners and central administration to automate the day-to-day functioning of all departments and offices.

Users can view maps on-screen, and can combine, manage and analyse geographically referenced data to support their decision making, all via the intranet or internet.

The success of this project has enabled Hellenic urban planning authorities to improve resource management and urban planning, enhance stakeholder communication, and update their geographic database.

The end result? The government has improved its services and dramatically cut costs.
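To illustrate the kind of geographically referenced query such a GIS supports, the classic ray-casting point-in-polygon test checks whether a location falls inside a land-use zone. This is a minimal sketch with made-up zone coordinates, not the deployed system's code:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the polygon (list of (x, y) vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray going right from (x, y).
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical land-use zone and candidate building location (map coordinates).
zone = [(0, 0), (4, 0), (4, 3), (0, 3)]
print(point_in_polygon(2, 1, zone))   # location inside the zone
print(point_in_polygon(5, 1, zone))   # location outside the zone
```

In a production GIS this query would run against a spatial database index rather than plain Python, but the underlying geometric test is the same.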
Figure 1: Virtualised system architecture of the cloud-based GIS solution (web, application, database and directory/backup layers over SAN storage, with a disaster recovery site, serving Citizen Service Offices, the Ministry of Interior/Prefectures and citizens over the internet and the government backbone)
Open cloud software applications for the public sector
In an era of economic crisis, the public sector is increasingly faced with budget cuts, and must offer more with ever less. Strategic thinking – and a deep knowledge of an organisation’s operations, constraints, capacities, ethics and political agenda – are required if IT investments are to flourish and catalyse cost reductions and operational improvements.
Androklis Mavridis, Athanasios Soulakis, and Spyros Skolarikis - B.open Open Business Software Systems Ltd, Greece
If the overriding goal is to maintain operational standards while squeezing resources, then cloud technology is the best-fit solution for public sector organisations (PSOs).

Exploiting clouds will help PSOs acquire new capacities and improve collaboration and awareness, all in a secure manner. To achieve this, PSOs need adaptable, extendable cloud solutions that will enable service integration and management, and accelerate development and deployment. It is exactly this need that b.Open Ltd covers with its jPlaton software application server and Comidor cloud application suite.

jPlaton Integrated Design, Development and Runtime Environment platform
Since its inception in 2004, jPlaton has evolved into an integrated design, development and runtime environment platform for distributed enterprise applications, tailored to cloud software application development.
jPlaton is independent from operating systems, databases, system architectures and underlying technologies. Any application built on jPlaton contains only plain XML files (no binary at all), can be installed on Windows, Linux and Mac, and can operate on relational database management systems, like MySQL, Oracle, SQL Server and so on. The innovation lies in jPlaton’s open, multi-layered, distributed architecture, which encourages collaborative software development: any number of developers can work on the same software project, upgrading, modifying, extending and integrating as required.
jPlaton takes modular architecture a step further, and a typical application consists of multiple parts, called program units. All the functionality of a program unit is contained in XML files that describe its objects and procedures, resulting in a multi-layered, homocentric environment. Any layer can be used to add new functionality, or to update or delete existing functionalities in its inner layers. The number and nature of the layers is tailored to a specific application.
This completely open and transparent architecture permits the flow of information between layers, facilitating integration, and its distributed multi-layered nature allows evolution and customization, all while preserving the inner (core) layers.
Figure: jPlaton’s layered architecture, from the Platform and System layers through Package and Group layers to individual Users
At execution time, information on a specific program unit is automatically collected and assembled, as per its specific installation and user settings.
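A rough sketch of how such layered assembly might work: definitions are merged from the innermost layer outwards, with outer layers overriding or extending what the inner layers provide. The XML element names and layer order below are assumptions for illustration, not jPlaton's actual schema.

```python
import xml.etree.ElementTree as ET

# Assumed layer order, inner (core) to outer; outer layers win on conflict.
LAYER_ORDER = ["platform", "system", "package", "group", "user"]

def assemble(unit_xml_by_layer):
    """Merge a program unit's <object> definitions across layers."""
    merged = {}
    for layer in LAYER_ORDER:                      # walk inner -> outer
        xml_text = unit_xml_by_layer.get(layer)
        if xml_text is None:
            continue
        for obj in ET.fromstring(xml_text).findall("object"):
            merged[obj.get("name")] = obj.get("value")   # override or add
    return merged

layers = {
    "platform": '<unit><object name="lang" value="en"/><object name="rows" value="20"/></unit>',
    "user":     '<unit><object name="rows" value="50"/></unit>',
}
print(assemble(layers))   # user layer overrides "rows", inherits "lang"
```

The appeal of this structure is that customisation lives entirely in the outer layers, so the inner (core) layers can evolve independently, as the article describes.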
Comidor cloud application suite
It is neither productive nor cost-effective for PSOs to purchase, operate and maintain numerous applications, just to cover their daily operational needs. This is where the Comidor application suite comes in. Developed on jPlaton, Comidor offers customer relationship management, and project and financial management capabilities, all on a state-of-the-art collaboration platform. More specifically, Comidor enables:

Enhanced collaboration
- E-mail integration
- Contacts management
- Accounts management
- Social networking
- Interactive calendar
- File creation and sharing of any file format
- Wikis for collaboration and knowledge consolidation
- Real-time text chatting, video, message threading and polls
- Ability to “follow” the real-time feeds of co-workers
- Version control (including who changed it, date/time of changes, and ability to make prior versions available for use, etc.)
- Organisation management (drag’n’drop groups and users on an organisational chart that can be restructured on demand)
- Stats and graphs (on-the-fly visual presentation of opportunities, projects, contacts, accounts, and much more)
- Report creation and customisation
- Web services for third-party systems with full interaction and data import/export tools
- Comidor mobile

Personalised customer care
- Case-specific collaboration and quality-focused services provision
- Unified knowledge base
- Leads and opportunities management
- Campaign creation and monitoring with advanced analytics and reports
- Performance indicators, statistics and reporting
- Forecasting based on financial information streams

Project management
- Resource planning (human, tangibles and intangibles including personnel, equipment, effort, knowledge, etc.)
- Schedule and task management (allowing managers to monitor budget and costs at any given time)
- Deliverables and milestones management, at the project, task and resource level
- Knowledge management

The G-cloud way
B.open’s cloud platform-as-a-service and software-as-a-service solutions provide an integrated cloud operational environment, supporting the adoption of cloud computing facilities and delivering fundamental changes to the way the public sector procures and operates ICT.

www.b-open.gr/
Education in the cloud
In an environment where online learning (e-learning) is increasingly popular, the application of cloud computing to education creates some remarkable opportunities. Our younger generation especially is increasingly using the internet in their search for knowledge, and pupils of any age can benefit from this “here and now” model of knowledge, using net browsers, exchanging ideas on social networks, and pursuing their education in a dynamic, immediate and personalised manner.
Przemyslaw Fuks - EFICOM S.A. European and Financial Consulting, EuroCloud, Poland
Low-cost, tailored e-learning
By raising awareness of cloud technologies as a new way of providing services, we can give schools and universities the chance to create their own low-cost e-learning systems.

Thanks to cloud technologies, the cost of expensive hardware necessary to create IT infrastructure, and the cost of software licenses for school laboratories, are no longer barriers.

The required computational power is now being served by IT providers and the owners of integrated learning environments, allowing the users of Learning Management Systems to create new educational content with only the aid of special applications for on-line content editing.

Dynamic lessons
Such solutions allow teachers and lecturers to make lessons more attractive, using the huge repositories of multimedia resources available in the cloud, without requiring the installation of additional software on local workstations. All teachers need is a web browser.

A policy of resource management allows teachers to share their educational content, leading to more attractive lessons as content is systematically enhanced with new, unique educational resources.

Personalised approach
Pupils can access educational resources – including lessons, courses, revision aids, and so on – without the pressures of time or barriers of geography. They can use simple devices, such as laptops, tablets and smartphones, in their own preferred way, learning at their own pace and in their own style. Some may prefer e-books or multimedia presentations, while others prefer videos, and so on.

Flexibility
Foremost of the advantages for schools or universities are the much lower costs of infrastructure maintenance when using cloud resources. Also, money spent is proportional to platform utilisation, so costs can be significantly reduced during holidays as compared to the school year or end-of-term examinations.

Case study: Lodz, Poland
An excellent example of the use of cloud solutions in education is the Educational Platform of Lodz, which has more than 150,000 users.

By leveraging cloud technologies on municipal servers, the city of Lodz has deployed this broad-scale educational platform, on which users dynamically manage their educational content without the need to buy expensive software or costly hardware. A+ for Lodz.
Cloud-sourcing: Positioning the cloud for disaster relief scenarios
Mark Roddy and Edel Jennings - TSSG, Ireland; Patrick Robertson - DLR, Germany; and Dingqi Yang - ITSud, France

“We saw the ability of digital natives and the networked world, using lightweight and easily iterated tools, to do something rapidly that a big organization or government would find difficult, if not impossible, to do.”[1]
- Richard Boly from the US State Department, advocating ‘crowd-sourcing’ as an effective disaster relief tool
Disasters, such as earthquakes or terrorist attacks, are by nature random and unpredictable. Disaster management teams must respond to unknown and dangerous situations using the best available and most reliable information.
In recent years, data coming from disaster situations via social media and ubiquitous technologies has increased, and open-source cloud-based technologies, such as Ushahidi[2] and Sahana Eden[3], have evolved to offer real-time insights that help relief agencies focus attention where it is most required.

Cloud-sourced wisdom tool
The EU FP7 open-source project, SOCIETIES[4], is investigating the use of novel ubiquitous and social cloud-based communications services to discover and organise people able to assist with specific disaster management information deficits. These cloud-based services propose harnessing the efforts of offsite crowd-sourced volunteers, selected for their relevant skills and trustworthiness. These intelligently orchestrated communities-of-interest will help research the answers to general or specialist requests from disaster response teams, thus generating a trustworthy cloud-based and crowd-sourced wisdom tool.

“In April 2011, the SOCIETIES research team discussed the proposed services with the European Union’s Civil Protection Mechanism (CPM), and through a storyboard evaluation were able to refine the service requirements.”

Service requirements and design
CPM’s disaster experts were presented with sample scenarios that attempted to describe how this cloud-based community might assist with disaster relief. For example, an offsite volunteer community could be asked to compare satellite images of a disaster zone, taken before and after the catastrophe, so that destroyed infrastructure, such as roads or bridges, could be identified and highlighted.

A fundamental question[5] derived from this research centered on trust: How could onsite experts trust results from the offsite community?

Further analysis of this trust issue enabled researchers to generate a fundamental design that included the following features (Figure 1):
- the service should be able to identify those offsite users most relevant to specific onsite requests,
- the service should allow for physical and virtual collaboration of offsite users, and
- onsite personnel should have sufficient confidence in the veracity of the cloud-sourced answers.

[1] http://www.fastcompany.com/1751308/state-department-trying-make-thousand-ushahidis-bloom “Fast Company”
[2] http://ushahidi.com/about-us “The Ushahidi Project”
[3] http://wiki.sahanafoundation.org/doku.php “The Sahana Foundation”
[4] http://www.ict-societies.eu/ “EU FP7 SOCIETIES Project”
[5] http://www.ict-societies.eu/files/2011/11/D8.1_public.pdf “SOCIETIES Paper Trial Evaluation Report”
The SOCIETIES team developed a user interface (Figure 2) that allows requests for help from disaster zones to be generated and uploaded, along with the skills profile of the expertise needed by the offsite volunteer. Disaster relief experts can then view the responses provided by ‘cloud-sourced’ volunteers, and filter and relay responses back to onsite relief teams.
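A toy sketch of the first design requirement, selecting the offsite users most relevant to a request: volunteers are ranked by skill overlap, weighted by a trust rating. The profile fields and scoring here are assumptions for illustration, not the SOCIETIES platform's actual matching algorithm.

```python
# Illustrative volunteer-matching sketch: rank offsite users against a
# request's skills profile; hypothetical names, skills and trust scores.

def rank_volunteers(request_skills, volunteers):
    """Return volunteer names sorted by trust-weighted skill overlap."""
    needed = set(request_skills)
    scored = []
    for name, skills, trust in volunteers:
        overlap = len(needed & set(skills)) / len(needed)
        scored.append((overlap * trust, name))   # weight overlap by trust
    # Highest score first; drop volunteers with no relevant skills at all.
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

volunteers = [
    ("ana",  {"gis", "satellite-imagery"}, 0.9),
    ("ben",  {"first-aid"},                0.8),
    ("chen", {"satellite-imagery"},        0.6),
]
print(rank_volunteers({"satellite-imagery", "gis"}, volunteers))
```

In the real platform the trust component would come from historical usage records and social-network data (see Figure 1), rather than a fixed number per volunteer.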
Figure 1: High-level architecture of the ‘cloud-sourcing’ platform (a request manager and an offsite user manager, backed by request and user-profile databases, connecting onsite and offsite users through web user interfaces)

Figure 2: Screenshot of the user interface for the offsite volunteer service
The cognitive walk-through experiments
In recent months, SOCIETIES researchers conducted a series of experimental cognitive walkthroughs, aimed at evaluating the worth of offsite volunteer services, and including real volunteer users (Figure 3).

The evaluations involved two user types: offsite volunteers selected by the system for their relevant expertise, and onsite relief teams and disaster victims. Researchers looked at several questions: What was the user experience of the offsite volunteers that were selected to collaborate? Did relief workers trust the results returned by offsite volunteers?

Figure 3: First cognitive walkthrough with researchers from SOCIETIES
The results[6] clearly showed that although the system is still a work-in-progress, it has potential to be further developed to achieve good usability and trustworthiness.

Future work
Issues for future consideration include integration of the service and system features with existing platforms, such as Ushahidi. Project partners have also used feedback from these first trials to make service improvements, and plan to perform an end-to-end experiment with sufficient offsite volunteers to enable full validation of the system.

[6] http://www.ict-societies.eu/project-deliverables/ “SOCIETIES First Prototype Evaluation Report”, released subject to CEC approval
Azure: designing modern applications using a hybrid cloud approach
Tomasz Kopacz, Microsoft - Poland

Managers want fast, flexible IT on a limited budget, so they’re increasingly turning to clouds. Economies of scale mean public clouds are more cost-effective, yet security concerns mean many companies require full control of their IT assets. Windows Azure provides businesses with a means to create their own tailored solutions, using hybrid cloud.

Cloud services are usually divided into three categories: IaaS (infrastructure as a service), PaaS (platform as a service) and SaaS (software as a service).
Infrastructure as a service
IaaS offers virtual computers, storage space, and some form of secure connection between the data center and customer, such as a virtual private network (VPN). Windows Azure is one example.

IaaS offerings usually include standardized services in predefined sets, hardware configurations, operating system (OS) images, and maybe some readymade applications, like Microsoft’s SQL Server, BizTalk Server, Sharepoint or even specialized Linux images with Django. Custom-made, specialized images can also be added, and architects will tailor these offerings to best suit a particular situation. Microsoft’s Hyper-V can be used to run the same OS image in Azure and on on-site servers. Thus, technically speaking, moving between public and private clouds is very easy in IaaS. Additionally, management tools such as Microsoft System Center allow users to control this mixed environment, including machines in both public and private clouds.

Platform as a service
PaaS is a very generic form of cloud in which providers offer developers sets of components, or higher level building blocks. In the case of Azure, offerings include specially designed websites (from simple sites in PHP or Node.js to complex portals in ASP.NET or web services); HDInsight (based on Hadoop), for processing gigantic datasets and building next-generation analytical systems; or Generic Worker, which allows developers to deploy background processing tasks.

Software as a service
SaaS includes all end-user solutions, from the simple – like Microsoft SkyDrive and Dropbox, or calendar services – to the more complicated – like SalesForce.com, Office 365 or Microsoft Dynamics CRM. Some SaaS are ready-to-use products, others are platforms that allow developers or business analysts to create specialized solutions using very high level languages, removing the burden of dealing with technology layers as in PaaS- or IaaS-based applications.

In summary, there are no clear boundaries between clouds. So the real challenge lies in choosing the components that best fit your needs. Have a look at the following example.

Building a sales automation system
Let’s assume we want to build a sales automation system that supports e-commerce and mobile devices.
Data warehouse
We should opt to invest in our own data warehouse, for two reasons: one, data will be kept in a privately managed IT facility; and two, investing in our own hardware and backup system will be more cost-effective in the long term. A VPN will be used to periodically download data from the cloud to our warehouse.

Website hosting
We won’t host our public website on our own servers, because of the cost of high-speed internet, denial-of-service protection, and so on. Our e-commerce site will be kept in Azure in a public cloud. We will use Azure Web Sites (for pure web presence) and Azure Worker Roles (for backend processing of orders). Communication between those components will be handled by Azure Service Bus. This architecture will allow us to scale up services as needed.
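The front-end/worker split can be sketched with a plain in-process queue standing in for Azure Service Bus. This is an illustrative pattern only, not the Azure SDK: the point is that the web tier returns immediately while workers drain the queue independently, which is what lets each side scale on its own.

```python
# Illustrative sketch: queue-decoupled web front end and worker role.
# queue.Queue is a stand-in for an Azure Service Bus queue.
import queue

order_bus = queue.Queue()          # stand-in for a Service Bus queue

def web_front_end(order):
    """Accept an order and return immediately; processing happens elsewhere."""
    order_bus.put(order)
    return "accepted"

def worker_role():
    """Drain pending orders (a real worker role would loop on this forever)."""
    processed = []
    while not order_bus.empty():
        order = order_bus.get()
        processed.append({"id": order["id"], "status": "shipped"})
    return processed

web_front_end({"id": 1, "items": ["book"]})
web_front_end({"id": 2, "items": ["pen"]})
print(worker_role())
```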
Security
When it comes to web services, security is critical. Clouds can deliver better security simply because they operate on a larger scale. This means that, technically speaking, by opting for cloud, we’re not only buying a service, but also a guaranteed service level agreement and excellent security. For our case study example, this means we’ll be using the cloud as a simple form of DMZ: we won’t be exposing anything directly from our datacenter to the internet.

Data storage
We’ll store our data in two forms: in a relational database, like SQL Database, which makes it simple to build an application using ORM libraries; and in Azure Storage, where we will keep all media (images, videos) relevant to our products. Such storage is inexpensive, and thanks to standard protocols (HTTP(S)), integration with a portal is trivial. Additionally, our marketing agency can work directly with stored material and easily update it as needed.

Mobile devices
Thanks to Azure Mobile Services, it’s easy to build a backend for every major mobile platform: Android, Windows Phone, Windows 8, iPad, and of course generic HTML5. Mobile Services provides secure access to data, authentication (including integration with Facebook, etc.), and push notification services. This one solution can service all types of devices!
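Because storage services are exposed over plain HTTP(S), uploading product media reduces to building a PUT request. The sketch below only assembles the URL and headers for a generic REST blob upload; the account name, host and auth header are hypothetical, invented for illustration rather than taken from any real provider's API.

```python
# Illustrative sketch of a generic REST blob upload request (not sent).
import hashlib

def build_blob_put(account, container, blob_name, data):
    """Assemble the URL and headers for a hypothetical blob-storage PUT."""
    url = f"https://{account}.example-storage.net/{container}/{blob_name}"
    headers = {
        "Content-Length": str(len(data)),
        "Content-MD5": hashlib.md5(data).hexdigest(),  # integrity check
        "x-auth-token": "<access-key>",                # placeholder credential
    }
    return url, headers

url, headers = build_blob_put("shopdemo", "media", "product-42.jpg", b"\xff\xd8")
print(url)
```

Real providers add their own authentication schemes on top, but the shape of the request is similar across services, which is why marketing tools can work directly against the stored media.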
In this way, we can build quite a complex system. This is, of course, only the beginning. We may also choose to use Azure Active Directory, to solve any authentication and authorization problems. In the future, we can expect many more of these standardized services.
For example, email is already a highly standardized, SaaS service. Maybe, in the future, other cloud-based services will reach similar standards.
Tailoring your hybrid cloud
Right now, each and every cloud provider uses similar REST-based application programming interfaces (APIs) to manage their services. From the client’s API perspective, there are also similarities between storage services. Of course, there are huge differences in terms of internal implementation.

The Windows Azure Pack provides the same core services on-site as those available in the Azure public cloud, which means customers can build complex systems using these same, known services, and then choose which parts will run on a public cloud, and which on a private cloud.
MODAClouds: Model-driven engineering for the clouds
Elisabetta Di Nitto (Politecnico di Milano, Italy), Dana Petcu (West University of Timisoara, Romania), Septimiu Nechifor (Siemens SRL, Romania)

Vendor lock-in and cloud outages still discourage migration to cloud-based solutions, especially in the public sector. A solution to both problems could be to support application developers and operators in the adoption of a multi-cloud approach, where applications are built to run and replicate on different clouds and can rapidly switch between them.
This multi-cloud approach requires proper cost risk analysis and advanced software engineering, guiding developers at design time and supporting providers at runtime. Thus a healthy marriage between new open-source cloud technologies and well-established model-driven engineering could increase trust in clouds and encourage cloud adoption.
The MODAClouds project aims to deliver a decision-support system, methods and an open-source integrated development environment and run-time environment for the high-level design, early prototyping, semi-automatic code generation, and automatic deployment of applications on multi-clouds with guaranteed quality of service.
Figure 1 sketches the general architecture of the MODAClouds’ solution.
At design time, MODAClouds aims to analyse business opportunities, determine the value of some cloud solutions over others (cost risk analysis), map the functional and non-functional design of multi-cloud applications, and analyse different design alternatives.
At runtime, MODAClouds focuses on three main issues:
The MODAClouds solution targets system developers and operators by providing tools to support the application or services life-cycle phases:
The MODAClouds’ approach
- Feasibility study and analysis: allows developers to analyse and compare cloud solutions.
- Design, implementation and deployment: supports the cloud-agnostic design of software systems, semi-automatic coding, and deployment to target clouds.
- Run-time monitoring and adaptation: allows system operators to oversee execution on multiple clouds, automatically triggers some adaptation actions (e.g., migrating system components to services offering better performance at that time), and provides runtime information to inform software system evolution.
1. execution management, intended as the set of operations to instantiate, run and stop services on the cloud;
2. monitoring the running application; and
3. self-adaptation to ensure quality of service.
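The three runtime concerns above form a control loop. As a purely illustrative sketch (not MODAClouds code; the metric, threshold and cloud names are invented), a monitoring read can trigger a self-adaptation action such as migrating a component to a better-performing cloud:

```python
import random

QOS_THRESHOLD_MS = 200  # maximum acceptable response time (assumed SLA)

def read_response_time(cloud: str) -> float:
    """Stand-in for the monitoring platform; returns a response time in ms."""
    return random.uniform(50, 400)

def adaptation_step(current_cloud: str, candidate_clouds: list) -> str:
    """One loop iteration: monitor the application, self-adapt on QoS violation."""
    observed = read_response_time(current_cloud)
    if observed <= QOS_THRESHOLD_MS:
        return current_cloud  # QoS satisfied: no adaptation needed
    # QoS violated: migrate to the candidate with the best observed metric
    best = min(candidate_clouds, key=read_response_time)
    print(f"QoS violation on {current_cloud} ({observed:.0f} ms); migrating to {best}")
    return best

cloud = adaptation_step("cloud-a", ["cloud-b", "cloud-c"])
```

In a real multi-cloud deployment, the decision would of course weigh migration cost and data-transfer time, not just a single instantaneous metric.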
Figure 1. General architecture of the MODAClouds’ solution
[Diagram: at design time, the MODAClouds IDE comprises the Decision Making Toolkit, the QoS modelling and analysis tool, the MODACloudML Functional Modelling Environment, the Data Mapping Component (DMC) and the MODACloudML Deployment and Provisioning Component. Through shared models (models@runtime, initial deployment model, QoS model, policies for self-adaptation), it feeds the Runtime Platform, which comprises a Runtime GUI, a Monitoring Platform with monitoring rules, and a Self-Adaptation Platform whose Self-Adaptation Reasoner maintains current and target models; deployable artefacts are pushed to the Execution Platform hosting the cloud applications.]
One MODAClouds case study is the Smart City Urban Safety Planner, for managing fire incidents in a high-density area served by a gas network. Gas detectors, traffic sensors, cameras, and electricity circuit-breakers are in place and managed by an Internet of Things platform.
The safety planner aims to
The infrastructure must be highly scalable, to manage the varying density of data flows and events. Data replication and migration mechanisms are needed to avoid data loss in case of failure of one of the application instances. MODAClouds supports this case study by offering design-time and run-time mechanisms to enable the use of multiple clouds, aiming to guarantee 24/7 availability. It also offers dedicated mechanisms for managing data replication and synchronization, thus allowing hot-switching between different replicas of the safety planner system.
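The hot-switching between replicas described above can be sketched in a few lines. This is an illustration only, with invented replica names, not the MODAClouds mechanism itself: requests are routed to the first healthy replica, so when the primary instance fails, traffic moves to a synchronized replica on another cloud.

```python
replicas = ["planner@cloud-a", "planner@cloud-b"]  # replicas kept in sync
healthy = {"planner@cloud-a": False, "planner@cloud-b": True}  # cloud-a is down

def route_request(replicas, is_healthy):
    """Return the first healthy replica, or raise if all are down."""
    for r in replicas:
        if is_healthy(r):
            return r
    raise RuntimeError("all replicas unavailable")

target = route_request(replicas, lambda r: healthy[r])
print(target)   # planner@cloud-b
```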
www.modaclouds.eu
Smart city safety planner
- predict the failure of gas detector sensors using sensor data,
- predict the impact of a fire by analysing video from nearby cameras,
- evaluate the best path for fire squads, and the best exit for traffic, by correlating run-time data with historical data from traffic sensors,
- determine the optimal gas pipes to isolate to limit impact, and
- send information to relevant authorities.
Synnefo
Vangelis Koukis, GRNET; Constantinos Venetsanopoulos, GRNET; Nectarios Koziris, CSLab, NTUA
In a nutshell
Synnefo is a complete open source cloud stack that provides Compute, Network, Image, Volume and Storage services, similar to the ones offered by AWS. Synnefo manages multiple Google Ganeti (https://code.google.com/p/ganeti/) clusters at the backend for the handling of low-level VM operations. Essentially, it provides the necessary layer around Ganeti to implement the functionality of a complete cloud stack. To boost third-party compatibility, Synnefo exposes the OpenStack APIs to users.
We have developed two standalone clients for its APIs: a rich Web UI and a command-line client. Synnefo has been running in large-scale production environments since 2011, powering private and public cloud services.
A Complete Open Source Cloud Stack
An overview of the Synnefo stack is shown in Fig. 1. Synnefo has three main components:
These components are implemented in Python using the Django framework. Each service exposes the associated OpenStack APIs to end users (OpenStack Compute, Glance, Cinder, Keystone, Object Storage). It scales out on a number of workers, uses its own private DB to hold cloud-level data, and issues requests to the cluster layer, as necessary.
When the need arises to provision and manage resources automatically and in bulk, the kamaki command-line tool can be used to perform low-level administrative tasks. kamaki is just another client accessing Synnefo over its RESTful APIs, targeted at advanced end users and developers.
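Because Synnefo exposes the OpenStack APIs, any OpenStack-compatible client can talk to it. As an illustration only (the endpoint URL and token below are placeholders; real deployments publish their own API endpoints and issue per-user tokens), an authenticated request for the server list can be built with nothing but the standard library:

```python
from urllib.request import Request

API_URL = "https://example.synnefo.org/compute/v2.0"  # placeholder endpoint
TOKEN = "your-user-token"                              # placeholder token

def list_servers_request(api_url: str, token: str) -> Request:
    """Build the authenticated GET request for the OpenStack server list."""
    return Request(
        url=f"{api_url}/servers/detail",
        headers={"X-Auth-Token": token, "Accept": "application/json"},
    )

req = list_servers_request(API_URL, TOKEN)
print(req.full_url)   # https://example.synnefo.org/compute/v2.0/servers/detail
```

Sending the request (e.g. with `urllib.request.urlopen`) against a real deployment returns the JSON server listing; the same token-in-header pattern applies to the other exposed APIs.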
Anyone using a cloud service powered by Synnefo, either public or private, has access to the following functionality:
Compute:
Network:
Storage:
Image:
Architecture
Key features
- Astakos is the common Identity/Account management service
- Pithos is the File/Object Storage service
- Cyclades is the Compute/Network/Image and Volume service
- Support for Windows Server 2008R2 and 2012, all major Linux distributions (Ubuntu, Debian, RHEL/CentOS/Scientific Linux, Fedora, Gentoo, ArchLinux, openSUSE), and FreeBSD Virtual Machines
- Spawning VMs from custom Images, uploaded by users
- Dynamic file injection upon VM creation for contextualization
- Per-VM CPU and Network statistics
- Easy and secure console access through the Web UI
- Public networking with full IPv4/IPv6 support
- Different firewall options for the public network
- Isolated Private networks (virtual L2/L3 networks) with automatic or manual IP allocation
- Ability to create arbitrary virtual network topologies, with multiple NICs per VM
- File uploads/downloads over Web UI and command-line clients
- Syncing between local files and the cloud with native Windows and Mac OS X clients
- File sharing among individual users and user groups, with per-file Access Control Lists
- Images are just files in the Storage Service
- Images can be shared with other users
- Support for custom, user-provided Images
- Automatic bundling tool transfers existing physical or virtual machines to a Synnefo cloud
Synnefo’s architecture decouples the cloud from the cluster layer, easing administration. It provides the following functionality at the backend:
Synnefo has been running in production powering GRNET’s ~okeanos public cloud service (http://okeanos.grnet.gr). As of this writing, ~okeanos runs more than 5k active VMs, for more than 3.5k users. Users have launched more than 130k VMs and more than 35k virtual networks.
Compute:
Network:
Storage:
Image:
Identity:
Backend functionality
Use case: The ~okeanos public cloud
- Management of multiple Ganeti clusters
- Support for VM live migrations with or without shared storage
- Support for multiple storage backends: LVM, DRBD, Files on local/shared directory (e.g., over NFS), RBD (Ceph/RADOS)
- Simple interface to plug into existing SAN/NAS deployments
- Easy integration into existing infrastructure using admin hooks
- Linear scaling with dynamic addition of Ganeti clusters
- Full IPv4/IPv6 support for Public and Private networks
- Scale to thousands of isolated private L2 segments over a single VLAN
- Support for pluggable networking configurations in the backend
- Currently supports VLAN-based isolation, MAC-based filtering over a single VLAN, and VXLAN-based virtual L2 segments
- Files are collections of blocks
- Content-based addressing for blocks
- Partial file transfers, deduplication, efficient syncing
- Optionally used as a single store for Files, Images and VM disks
- Secure deployment of custom Images, inside an isolated VM
- All contextualization done by Synnefo, with no need for special tools inside the Image
- Efficient syncing and sharing of Images as files
- Support for multiple login methods per user: classic username/password, LDAP/Active Directory, Google/Twitter/LinkedIn 3rd-party accounts, SAML 2.0 (Shibboleth) federated logins
- Fully customizable user sign-up process, with discrete verification/moderation steps
- Quota system for fine-grained per-user, per-resource limits, with associated UI
- Support for collaborative projects for sharing virtual resources among user groups
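The content-based addressing used by the storage backend is worth unpacking. In this toy sketch (not Pithos code; the tiny block size is for illustration, production systems use multi-megabyte blocks), files become lists of block hashes, identical blocks are stored only once, and syncing need only transfer the blocks the other side is missing:

```python
import hashlib

BLOCK_SIZE = 4  # tiny block size for the example only

store = {}  # block hash -> block contents

def put_file(data: bytes) -> list:
    """Split data into blocks, store each distinct block once, return the hash list."""
    hashes = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        h = hashlib.sha256(block).hexdigest()
        store[h] = block          # re-storing is a no-op: same hash, same content
        hashes.append(h)
    return hashes

a = put_file(b"aaaabbbbcccc")
b = put_file(b"aaaabbbbdddd")     # shares its first two blocks with the first file
print(len(store))                  # 4 distinct blocks stored for the two files
```

Comparing the two hash lists immediately reveals which blocks differ, which is what makes partial transfers and efficient syncing possible.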
Using Synnefo in production has enabled:
- Rolling software and hardware upgrades across all nodes. We have done numerous hardware and software upgrades (kernel, Ganeti, Synnefo), many requiring physical node reboots, without user-visible VM interruption.
- Moving the whole service to a different datacenter, with cross-datacenter live VM migrations, from Intel to AMD machines, without the users noticing.
- Scaling from a few physical hosts to multiple racks with dynamic addition of Ganeti backends.
- Overcoming limitations of the networking hardware regarding the number of VLANs, with multiple L2 segments on a single VLAN, with MAC-based filtering or VXLAN encapsulation.

Synnefo is open source. Source code, distribution packages, documentation, many screenshots and videos, as well as a test deployment open to all, can be found at http://www.synnefo.org.
CELAR:
Dimitrios Tsoumakos, National Technical University of Athens, Greece
Cloud computing possesses the inherent ability to support elasticity, namely the scaling of infrastructure or platform resources to meet the exact demand, performance or cost characteristics at runtime.
Automatic, multi-grained elasticity provisioning for the cloud
Optimal resource allocation is hugely important: users can experience wide variations in application workload across a year, a month, a day or even a few minutes. Static under-provisioning risks costly service unavailability at peak hours, while static provisioning for peak load incurs increased costs and underutilised resources (Figure 1, left graph).
Figure 1: Resource provisioning strategies
Elasticity can be applied so that application performance and cost are throttled in a controlled manner, bringing profits for both parties: service consumers can reduce task execution times without blowing their budget, and cloud providers maximise their financial gain by increasing their clientele and keeping their customers satisfied.
While many systems claim to offer elasticity, the throttling is usually performed manually, and users are required to define the conditions under which resources should be scaled up or down – a difficult task. Clients’ needs change dynamically, and different optimisations will be required at different times. Such coarse-grained elastic provisioning – and/or the scaling of a single resource (e.g., CPUs, storage or networking elements) – leads to suboptimal use and performance degradation (Figure 1, middle graph).
To harvest the benefits of elastic provisioning, it must be automated and fine-grained (Figure 1, right graph).
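The difference between coarse- and fine-grained elasticity can be made concrete: rather than one rule adding or removing whole VMs, each resource type is scaled independently against its own utilisation target. The resource names and thresholds below are illustrative assumptions, not CELAR's actual policy:

```python
TARGETS = {"cpu": 0.70, "storage": 0.80, "network": 0.60}  # utilisation targets

def scaling_decisions(utilisation: dict) -> dict:
    """Return +1 (scale up), -1 (scale down) or 0 for each resource type."""
    decisions = {}
    for resource, target in TARGETS.items():
        u = utilisation[resource]
        if u > target * 1.1:        # more than 10% above target: add capacity
            decisions[resource] = 1
        elif u < target * 0.5:      # far below target: release capacity
            decisions[resource] = -1
        else:
            decisions[resource] = 0
    return decisions

print(scaling_decisions({"cpu": 0.95, "storage": 0.30, "network": 0.65}))
# {'cpu': 1, 'storage': -1, 'network': 0}
```

Under a coarse-grained policy, the overloaded CPU would force a whole new VM, dragging unneeded storage and network capacity along with it; the fine-grained rules scale only what is actually saturated.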
The CELAR project is EU-funded and aims to enhance current cloud functionality to allow elastic resource provisioning. The project will develop open-source tools for applying and controlling elasticity in cloud-based applications, then apply this technology to two exemplary applications: one in online gaming, and the other in scientific computing, with an application requiring compute- and storage-intensive genome computations.
The first version of the proposed CELAR system architecture is depicted in Figure 2.
Automatic throttling
CELAR architecture
[Diagram: three graphs of resources versus time, plotted against demand, showing static provisioning (over- or under-provisioned), coarse-grained elastic provisioning, and fine-grained elastic provisioning.]
There are three main modules:
Application management platform: Modules and methods that enable developers to describe and deploy their applications. This layer will give users the ability to define their desired scaling policies and provide input to the inner CELAR modules. It will provide easy application deployment and real-time performance metrics. Modules will be implemented on top of the reliable Eclipse platform and exposed to end users via meaningful, user-friendly UIs.
Cloud information and performance monitor: A scalable, distributed subsystem that allows the collection and storage of statistics that relate to the running application and the resources it consumes over time. These metrics will be used to evaluate the current status of the application execution. Users will be able to evaluate the application by using existing metrics or by defining their own.
Elasticity platform: All the algorithms and modules required to automatically allocate (or free up) resources based on their characteristics, user preferences and the application load. The Decision Module is central to the platform: It views elasticity as a multi-dimensional property with three main dimensions: quality, cost and resources. This module then maps high-level elasticity requirements to low-level metric restrictions. For example, application cost can be broken down into the costs of running each virtual machine and the costs of I/O calls. The elasticity platform also maintains all necessary information for past and current application deployments, orchestrates addition or removal of resources (of different types and granularities), and ensures the robustness and availability of elastic operations. Finally, the platform features automated characterisation of an application’s behavior over a number of representative resource provisioning and load scenarios (profiling).
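The cost-dimension example above can be sketched numerically: a deployment's hourly cost decomposes into per-VM costs plus per-I/O-call costs, so a high-level budget constraint can be checked against low-level metrics. All flavour names and prices here are invented for illustration:

```python
VM_PRICE = {"small": 0.05, "large": 0.20}   # $/hour per VM flavour (assumed)
IO_PRICE = 0.01 / 1000                      # $ per I/O call (assumed)

def hourly_cost(vms: dict, io_calls_per_hour: int) -> float:
    """Break deployment cost into per-VM and per-I/O components."""
    vm_cost = sum(VM_PRICE[flavour] * count for flavour, count in vms.items())
    return vm_cost + io_calls_per_hour * IO_PRICE

def within_budget(vms, io_calls_per_hour, budget) -> bool:
    """The high-level constraint, checked against the low-level metrics."""
    return hourly_cost(vms, io_calls_per_hour) <= budget

cost = hourly_cost({"small": 4, "large": 1}, io_calls_per_hour=50_000)
print(f"${cost:.2f}/hour")   # $0.90/hour
```

A decision module working this way can invert the computation: given a budget, it derives the admissible combinations of VM counts and I/O rates, which become the low-level restrictions it enforces at runtime.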
Figure 2: CELAR system architecture v1
[Diagram: the Application Management Platform (application description and submission) and the Cloud Information and Performance Monitor (multi-level metrics evaluation, information system, monitoring system, CELAR database storage) feed the Elasticity Platform, whose Decision Module and Application Profiler drive the CELAR Manager (application orchestration, cloud orchestration, resource provisioner, interceptor). These sit on top of a cloud provider stack: custom applications and services (HBase, Cassandra, Hive, Hadoop) at the SaaS/PaaS level, over VMs, virtual storage and clusters at the IaaS and physical layers.]
Lifecycle of an application
CELAR imposes a general structure on the lifecycle of an application (Figure 3). Users describe, submit and deploy their application; it is then profiled and elastically managed by CELAR until its termination.
CELAR is committed to using standard APIs, open-source tools and platform-independent programming languages to ensure wide coverage of underlying platforms and encourage widespread adoption and use of CELAR. We aim to provide the first integrated version of the CELAR system by the end of 2013.
www.celarcloud.eu/
[Figure 3: the application lifecycle: Describe, Submit, Deploy, Profile, Monitor & Manage, Terminate.]
The PaaSage project:
Frode Finnes Larsen – EVRY, Norway and industrial partner in the PaaSage project
Want the power of multiple cloud platforms? But only when you need it? Want to avoid cloud vendor lock-in? Want to develop once, but deploy to multiple clouds? Or just want to cloudify your stuff?
PaaSage technology is for you!
the cloud was the limit
The project
Case study: the public sector
Core issue: handling application constraints
PaaSage architecture
PaaSage is an EC-funded project aimed at creating a development and deployment platform to help software engineers create new applications and migrate old applications to multiple cloud platforms. There are 19 partners involved; case studies for proof-of-concept come from industry and the public sector.
The public sector is facing a huge demographic challenge in Europe: our population is getting older, living longer, and requiring care for more extended periods. On top of this, there are the challenges of rising costs and economic crises.
Part of the solution lies in increased efficiency: the public sector must deliver more services using fewer resources. Technology, automation and self-service will play a crucial role in this.
In Norway we have 428 municipalities. These government bodies are autonomous: a huge number have their own legacy systems and have different ways of delivering the same service. Their systems are not harmonised and public interactions often require case-by-case management. The different legacy systems are heavily integrated into other systems and registers. Some services experience high-volume demand only once or twice a year. In this post-PRISM environment, privacy is becoming an even more important issue for the public sector: data must be stored nationally or within the EU.
The public sector needs a platform that supports the use of different clouds while reducing technology dependencies and orchestrating multiple data sources, governance and control.
There are several challenges when automatically deploying applications to multiple clouds, but the biggest issue is managing application constraints: not all cloud providers will be compliant with your application.
Constraints are also necessary to know when to take action: What should trigger automatic scale-out or deployment? What should trigger the application to scale down?
What price model do the different cloud providers employ? How many users must be supported? How many transactions? How much memory is required? How much CPU? How much storage? The nature of your application can affect the cost of its deployment.
Thus any platform for managing multiple clouds must make it possible to specify constraints on application availability, performance, cost, security and privacy. For reasons of data security, it must also support specifying constraints on the location of cloud providers.
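The compliance check described above amounts to filtering candidate providers against the application's constraints before deployment. As a minimal sketch, with provider data and thresholds invented for illustration (not PaaSage's actual model):

```python
PROVIDERS = [
    {"name": "cloud-x", "availability": 0.999,  "price": 0.12, "region": "EU"},
    {"name": "cloud-y", "availability": 0.9999, "price": 0.20, "region": "US"},
    {"name": "cloud-z", "availability": 0.9995, "price": 0.15, "region": "EU"},
]

def compliant(provider, min_availability, max_price, allowed_regions):
    """True if the provider satisfies all of the application's constraints."""
    return (provider["availability"] >= min_availability
            and provider["price"] <= max_price
            and provider["region"] in allowed_regions)

# Require 99.9% availability, at most $0.18/hour, and EU-only data location:
candidates = [p["name"] for p in PROVIDERS
              if compliant(p, 0.999, 0.18, allowed_regions={"EU"})]
print(candidates)   # ['cloud-x', 'cloud-z']
```

The same predicate, re-evaluated at runtime against monitored values rather than advertised ones, also answers the scaling questions: a violated constraint is exactly the trigger for scale-out or redeployment.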
The PaaSage platform aims to support deployment to multiple clouds, including private and community clouds as well as commercial cloud offerings. It employs three main elements: the integrated development environment (IDE), Executionware and Upperware.
IDE
The IDE is PaaSage’s front-end. It extends the popular open-source development platform Eclipse and supports the Cloud Modelling Language (CloudML). The IDE ensures model-based integration of functional components in a variety of application scenarios.
Executionware
Executionware provides platform-specific mapping and technical integration of applications to the cloud provider’s architecture and Application Programming Interfaces (APIs). It can be used to monitor, reconfigure and optimise running applications on a variety of platforms.

Upperware
Upperware is linked to the IDE and presents a collection of tools and components to assist developers when porting models or designing applications. At runtime, it integrates these models and applications with the Executionware to optimise performance.

PaaSage in action
PaaSage will help to modernise the public sector by:
- reducing platform and vendor dependencies, making strategic decisions easier,
- supporting adoption of the cloud, thereby reducing costs and providing extra power only when needed,
- managing data privacy, a hot topic in these post-PRISM days,
- managing multiple data sources and simplifying the handling of external data sources,
- managing governance and control at design-time, deploy-time and run-time,
- improving services by automatically monitoring application behavior, and
- migrating legacy applications to a cloud environment.

Figure 1: PaaSage architecture
[Diagram: the PaaSage model-based open platform combines an Integrated Development Environment and the Cloud Modelling Language with a speculative profiler, intelligent stochastic reasoning and extra-functional adaptation, exchanging PaaSage metadata with other PaaSage users. Through each provider's APIs and architecture it targets commercial offerings, private cloud solutions, existing cloud solutions and external service providers for data and storage.]
Figure 2: PaaSage workflow
[Diagram: new and legacy applications are captured in a CloudML application model (architectural, dependency, data-flow and extra-functional utility models). A design-time optimization loop (speculative profiler, intelligent reasoner, extra-functional adaptation) and an execution optimization loop (platform-specific mapping, execution monitoring, execution control) operate over the execution environments, while metadata is collected and shared with community expertise.]
NEWS & EVENTS
1st SUCRE Video about Cloud Computing and the Public Sector available on YouTube!
This first video highlights the benefits for the public sector of migrating its services to cloud computing. With a fresh, innovative and user-friendly style, this short cartoon offers insights into how cloud computing could really benefit public administrations and, ultimately, EU citizens. Enjoy it at http://www.youtube.com/watch?v=wNrM-617q70&feature=youtu.be
JOIN the SUCRE & OCEAN NETWORKING SESSION @ ICT 2013 IN VILNIUS
The joint SUCRE-OCEAN session, entitled Open Clouds in Europe and Japan: Tackling Interoperability through Collaboration, will engage stakeholders from Europe and Japan to present and discuss major interoperability challenges in Open Clouds, based also on the experiences drawn by the two projects during their first year of operation. The proposed session will tackle some of the major challenges of Horizon 2020 and the Digital Agenda. It is also in line with and supports the aims of the International Cooperation strategy and policy of the EC. This session will take place on Tuesday 7 November, at 16:00, Booth 8.
The SUCRE Young Researcher Forum outcomes
On September 23rd and 24th 2013, the SUCRE Consortium organised its Young Researcher Forum (YRF) as part of the 2nd International Summer School on Services at KIT in Karlsruhe, Germany. The main goals of the event were to expose participants to current technical issues in working with Open Cloud Computing platforms, to help familiarize junior researchers with open, contemporary problems in the area, and to offer a forum to interact, learn and explore. Find the main event outcomes on the SUCRE portal: www.sucreproject.eu
Innovative cloud research in action @ ICT 2013!
The FP7 SOCIETIES project is pleased to announce that it is about to deploy a conference service at ICT 2013, called the Relevance App, that exploits the project's innovations. Exhibitors are welcome to register their exhibit information, and delegates are encouraged to download and use the application.
For further information and to register please visit https://societies-trial.eventbrite.com/ and/or contact Mark Roddy at [email protected]
PaaSage consortium gets enlarged!
The FP7 PaaSage project (see www.PaaSage.eu) is welcoming three new partners from Poland (the Academic Computer Centre CYFRONET of the AGH University of Science and Technology in Krakow) and Cyprus (the Department of Computer Science of the University of Cyprus, and Intelligent Business Solutions Ltd, an SME). They bring in additional expertise to broaden the potential use of PaaSage in e-Science and financial services, to provide tools ensuring continuous quality improvement, to improve interoperability through gathering and integrating metadata, and to extend both the PaaSage Upperware and Executionware. The project therefore benefits from additional funding of €788k, corresponding to an additional investment of €1.08M. Total PaaSage investment reaches €7.4M over 4 years! More information about PaaSage at www.PaaSage.eu
SUCRE ALSO INVITES YOU TO JOIN THE “ENGINEERING APPLICATIONS FOR THE CLOUD” SESSION
The aim of this session is to cover the full range of technical challenges through themes such as: decision mechanisms for migration towards Clouds; model-driven engineering of applications for the Clouds; modelling of Cloud target platforms; and resource management in multiple Clouds. The session will also cover business aspects such as: business implications and impact of migrating to the Cloud, in particular in the public sector; validation and certification of applications migrated to the Cloud; and open-source middleware for the Cloud. The session also offers short snapshot presentations of state-of-the-art research results, with ample time for discussion. This session will take place on Tuesday 7 November, at 18:00, Room A. More information at http://ec.europa.eu/digital-agenda/events/cf/ict2013/item-display.cfm?id=11538