Translating legacy to cloud Manikandan Sekar (Principal Consultant, Ebillsoft LLC) Dec 08, 2016


Page 1: Translating from legacy to cloud

Translating legacy to cloud

Manikandan Sekar (Principal Consultant, Ebillsoft LLC)

Dec 08, 2016

Page 2: Translating from legacy to cloud

Intended audience

This presentation explains how one might translate a legacy software solution into a cloud-friendly model. It stays technically shallow on purpose, to keep the focus on the idea

So, if you’re new to cloud platforms, this presentation is for you!

A cloud-friendly solution is nothing but an optimized solution that is easy to run and maintain in the cloud. So what we really mean by “translating from legacy to cloud” is “optimizing a legacy solution”

So, if you’re looking to optimize a legacy solution, this presentation is for you too!

Page 3: Translating from legacy to cloud

Intention of this presentation

This presentation only intends to introduce the audience to the idea of optimization, so don’t expect in-depth suggestions on optimizing different types of legacy solutions here

However, at the end of this presentation I expect the audience will have a general idea on how they could approach the translation of their legacy solution

Page 4: Translating from legacy to cloud

Our legacy solution

Let’s pick a simple traditional architecture for this discussion

A frontend server that talks to a backend server which interacts with a database

Simple enough, but the servers in legacy systems are almost always monolithic in nature (a tightly coupled mixture of applications and support software)

[Frontend Server] → [Backend Server] → [DB Server]

Page 5: Translating from legacy to cloud

How to make it cloud friendly?

What if, we pushed the servers to cloud?

That’ll work, but, that isn’t really optimization of the solution, is it?

It is a general (but fair) notion that the cloud platform suits lighter solutions better, so how can we make our legacy solution lighter?

[Frontend Server] → [Backend Server] → [DB Server] (all on cloud)

Page 6: Translating from legacy to cloud

How to make it cloud friendly?

How about we package the applications and run them inside containers in a cloud cluster?

But, what is a cluster and what is a container? And how does that make our solution lighter?

[Frontend app container] → [Backend app container] → [DB server on cloud], with the app containers running inside a cloud cluster

Page 7: Translating from legacy to cloud

What is a cluster?

A cluster is a group of individual computers that act as a single logical system

[Cluster: one master node + member nodes]

Page 8: Translating from legacy to cloud

What are containers?

One expert says

Containers are sealed application packages

If we’re looking for a definition, here is something I nicked from an AWS summit lecture -

Containers are similar to hardware virtualization (like a VM); however, instead of partitioning a machine, containers isolate the processes that are running on a single OS by allocating a dedicated file system & resources

Containers are portable, scalable, OS independent, fast and – this is important – disposable

Page 9: Translating from legacy to cloud

How to make it cloud friendly?

Okay, but why are containers and clusters better than dedicated cloud servers?

[Frontend app container] → [Backend app container] → [DB server on cloud], with the app containers running inside a cloud cluster

Page 10: Translating from legacy to cloud

Why clusters?

Here are a few benefits of using a cluster

1) Decentralized system
2) Mitigates the risk of single-point hardware failure
3) The possibility of load balancing makes performance accessible
4) Easy to scale up or down (clusters can be scaled horizontally)
5) High availability
6) Pooled resources (computing is a team effort)

Though not entirely relevant, let’s quickly see in the next slide why horizontal scaling is better than vertical scaling, because horizontal scaling is one of the major advantages of using a cluster

Page 11: Translating from legacy to cloud

Horizontal scaling vs Vertical scaling

Horizontal scaling is the concept of increasing system capabilities by adding members to a cluster, whereas vertical scaling is the concept of increasing system capabilities by adding resources to a single machine

It doesn’t matter how good your “one person” team is, you’re not going to be productive if he or she is “out sick”

[Diagram: horizontal scaling (add machines) vs vertical scaling (add resources to one machine)]

Page 12: Translating from legacy to cloud

What good are the containers?

Before discussing the container, let’s check this architecture comparison between virtual machines and containers (because containers are similar to virtualization – only lighter)

Page 13: Translating from legacy to cloud

What good are the containers?

Containers offer an isolated application environment

1) So, we can run them anywhere without worrying about host environment’s influence. So, no more of this

Developer: I swear, it worked in my env!
QA: Ok, I believe you :)
Developer: No really, it did.

2) This makes it convenient when we want to make changes to the environment. So, no more of this too

Developer: We need Java 8 to support the new version of the UI
Ops: Sorry, we can’t upgrade to Java 8 now, our billing app isn’t compatible!

Page 14: Translating from legacy to cloud

What good are the containers?

Containers are OS independent! If the OS is capable of running the container engine, then it can host the container

I’m using a Mac, do I have to install Ubuntu to host your app?
No, just run the container.

Containers are fast

Dedicated FS? How long does it take to start a container?
A few seconds, usually. A minute or two, if huge.

Page 15: Translating from legacy to cloud

What good are the containers?

Containers are scalable

Can I scale horizontally? Yes!
Can I scale vertically? Yes!
Can I scale diagonally? If you can define it, yes!

Containers are disposable

Traditional Server: I need a couple of hours’ downtime for the 2.0 upgrade
Container Host: Oh, I don’t! I just start a 2.0 container & shut down 1.0
Traditional Server: I can’t upgrade often due to downtime concerns
Container Host: But I can! All I have to do is swap containers

Page 16: Translating from legacy to cloud

How to make it cloud friendly?

The benefits of using containers (portable, OS independent) can be directly related to our purpose (making the application lighter), but the benefits of using a cluster don’t seem necessary for making an application lighter – so why do we need a cluster?

[Frontend app container] → [Backend app container] → [DB server on cloud], with the app containers running inside a cloud cluster

Page 17: Translating from legacy to cloud

Containers inside a cluster

The applications running inside containers can increase or decrease their footprint at will (dynamic scaling)

Running such applications inside a system that can grow or shrink with the containers can only benefit the setup (consume what is needed, no more, no less), and a cluster is capable of doing just that

Add the standard benefits of a cluster to the equation (like, load balancing between nodes, high availability, less risk of single point h/w failure etc), then, we’re looking at a powerful combination of technologies that complement each other

Page 18: Translating from legacy to cloud

How to make it cloud friendly?

Right, the reason why containers and clusters can optimize an application is beginning to make sense now, but how do we achieve dynamic scaling of containers?

[Frontend app container] → [Backend app container] → [DB server on cloud], with the app containers running inside a cloud cluster

Page 19: Translating from legacy to cloud

How to scale dynamically?

By running it as a service with no downtime, of course!

Here is the oversimplified version of the concept: containers can be cloned to form a service, and as long as there is one active clone running, the service will be available

A service is like a static virtual node that exposes a single application to the user

[Frontend service 100.0.0.1:8080] → [Backend service 100.0.0.2:11001] → [DB server on cloud]; FE container replicas & BE container replicas inside a cloud cluster, managed by container orchestration

Page 20: Translating from legacy to cloud

What is a service?

[Diagram: a service fronting its containers]

The user defines the service using a container orchestration tool: “n” containers of “application A”, configured to interact with “service B”, make up “service A”

Service is assigned a virtual IP:Remains attached to the service when it is running

The containers get their own dynamic IPs: So, each container gets access to the entire port range

Service knows how to communicate with its containers:Containers’ dynamic IP addresses are resolved by service internally, not our problem!

The service is exposed to the user:So, the user always connects to a static IP (mimics a traditional server setup)
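As a toy sketch of the routing just described (all names and IPs here are hypothetical; real orchestrators implement this with kube-proxy, iptables rules or an internal DNS):

```python
from itertools import cycle

class Service:
    """Toy model of a service: a stable virtual IP fronting
    containers whose dynamic IPs may change at any time."""
    def __init__(self, name, virtual_ip, container_ips):
        self.name = name
        self.virtual_ip = virtual_ip           # stays fixed while the service runs
        self._backends = cycle(container_ips)  # containers' dynamic IPs, resolved internally

    def resolve(self):
        # The user only ever sees virtual_ip; the service picks a container.
        return next(self._backends)

frontend = Service("Frontend A", "100.0.0.1", ["10.0.1.7", "10.0.1.9", "10.0.1.12"])
# Three consecutive requests land on three different containers:
print([frontend.resolve() for _ in range(3)])
```

The user connects to 100.0.0.1 every time; which container actually answers is the service’s problem, not ours.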

Page 21: Translating from legacy to cloud

What is a service?

The service, when orchestrated using the right tool, can have self-healing capability – achieved by state comparison and resolution

The orchestration tool remembers the desired state from the service definition and constantly compares it with the actual state – if they differ, the orchestration tool takes the necessary action to resolve the difference

Say, one of the containers that make the service dies due to insufficient memory, the orchestration tool will recognize the event and spawn a new container to replace it

Or say, if I changed the desired state definition to decrease the # of containers that make up the service from “n” to “n – 2”, the orchestration tool will recognize the change in state definition and kill 2 containers to match the new desired state
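A minimal sketch of that compare-and-resolve loop (the spawn/kill callbacks are hypothetical stand-ins for what a real orchestration tool would do):

```python
def reconcile(desired_replicas, running_containers, spawn, kill):
    """Compare the actual state with the desired state and resolve the difference."""
    while len(running_containers) < desired_replicas:
        running_containers.append(spawn())      # replace dead / missing containers
    while len(running_containers) > desired_replicas:
        kill(running_containers.pop())          # scale down to the new desired state

# A container dies due to insufficient memory...
containers = ["c1", "c2", "c3"]
containers.remove("c2")
# ...and the next reconcile pass replaces it:
reconcile(3, containers, spawn=lambda: "c4", kill=lambda c: None)
print(containers)   # three containers again

# Desired state lowered from "n" to "n - 2": containers get killed to match
reconcile(1, containers, spawn=lambda: "c5", kill=lambda c: None)
print(containers)
```

Real orchestrators run this loop continuously, so the service converges back to its definition no matter which direction it drifted.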

Page 22: Translating from legacy to cloud

What is a service?

Services can be associated with load balancers to distribute the load across their containers (cluster-level load balancing covers load distribution among the nodes; this is another level of load balancing, among the containers that make up a service)

Services make the deployments easy

For instance, updating containers doesn’t require a new service - Say, if we want to update “service A” from version 1.0 to 2.0

We can deploy a new set of 2.0 containers and point “service A” to it (blue-green deployment), or we can do a rolling deployment (replace a limited # of containers at a time at periodic intervals), or we can partially upgrade some of the containers that make up “service A” to 2.0 to evaluate how the new version performs before doing a full 2.0 deployment (canary deployment)
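The rolling variant, for instance, can be sketched in a few lines (versions and batch size here are illustrative):

```python
def rolling_update(containers, new_version, batch_size=2):
    """Replace a limited number of containers per step, yielding the
    fleet after each batch; the service keeps running throughout."""
    for i in range(0, len(containers), batch_size):
        for j in range(i, min(i + batch_size, len(containers))):
            containers[j] = new_version
        yield list(containers)

fleet = ["1.0"] * 6
for step in rolling_update(fleet, "2.0"):
    # at every intermediate step, part of the fleet still serves traffic
    print(step)
```

Blue-green and canary differ only in how many containers switch at once and where user traffic is pointed, not in the underlying container-swap mechanics.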

Page 23: Translating from legacy to cloud

What is a service?

What if we want to make changes within the containers? Say, I need to reconfigure containers of “service A” to work with “service C” instead of “service B”?

We discussed that containers are like virtual machines, so, I can simply SSH into the containers and update the configurations

But, we also discussed containers are disposable, so, it is operationally easier to create new containers configured to work with “service C” and use them to replace the old containers

If we make changes to the containers that are persistent in nature, then, it is difficult to replicate the state when the containers are replaced, so, keeping containers stateless will help matters a lot

Remember, container deployments aren’t difficult

Page 24: Translating from legacy to cloud

How to make it cloud friendly?

Now that we’ve arrived at the optimized solution we want, let’s see how we can perform this optimization in detail

It’ll be easier if we pick the tools for our translation, so, let’s choose AWS for cloud platform, Docker for containerization and Kubernetes for container orchestration

[Frontend service 100.0.0.1:8080] → [Backend service 100.0.0.2:11001] → [DB server on cloud]; FE container replicas & BE container replicas inside a cloud cluster, managed by container orchestration

Page 25: Translating from legacy to cloud

Building Block 1: DB

Let’s go ground up and start with the database:

1) Having state makes it indispensable
2) So, let’s not run it within a container – but we can still run it in the cloud
3) Since we chose the AWS platform, let’s run it as an AWS RDS instance – inside a VPC, with a security group controlling external interactions

[Frontend Server] → [Backend Server] → [DB Server]

Page 26: Translating from legacy to cloud

Building blocks 2 & 3: Application Servers

When we look at the big picture, these applications may not be stateless (versions & customizations), but at a granular level they can be treated as stateless (say, a specific version or a specific configuration)

For example, version 1.0 of frontend app configured to run against a backend app instance running on server XYZ via port 12345

[Frontend Server] → [Backend Server] → [DB on AWS-RDS, inside a VPC with a security group]

Page 27: Translating from legacy to cloud

Building blocks 2 & 3: Application Servers

So, a given state of these two applications can be dockerized to run inside the containers

Let’s dockerize (containerize) our frontend and backend applications and run them as services in the same VPC that hosts our AWS-RDS database

[Frontend Server] → [Backend Server] → [DB on AWS-RDS, inside a VPC with a security group]

Page 28: Translating from legacy to cloud

Containerizing the applications

Before dockerizing the apps, we need to answer one important question

IMAGE or IMAGES?

1) Sure, we can set up a full working instance of the application and commit it as a single Docker image. But is that the best way?
2) Containers are efficient, but even they can be difficult to handle if they are huge blobs of moving components
3) So, we need to answer 2 questions before we build an image:
 a) Can the image be split further without complicating container interactions?
 b) Will the resulting image be functional within a container?
4) We should continue to granulate the solution until the answers to these 2 questions are “No” and “Yes” (in that order) for every single image that constructs the whole application

Confused by the terms “image” and “container”? An image records the state of the application, and we run the image inside a container (like a game disc)
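The two-question rule can be written as a recursive procedure (the can_split/split/functional checks are hypothetical, solution-specific judgments, not an API):

```python
def granulate(component, can_split, split, functional):
    """Keep splitting a component until no image can be split further
    (without complicating interactions) and every image is functional
    on its own inside a container."""
    if can_split(component):
        images = []
        for part in split(component):
            images.extend(granulate(part, can_split, split, functional))
        return images
    assert functional(component), f"{component} cannot run alone; merge it back"
    return [component]

# Toy example: a backend that splits into three layers, none of which splits further.
layers = {"backend": ["db-client", "3rd-party", "core-app"]}
images = granulate("backend",
                   can_split=lambda c: c in layers,
                   split=lambda c: layers[c],
                   functional=lambda c: True)
print(images)
```

In practice the two checks are engineering judgment calls, but treating them as a stopping condition keeps the splitting disciplined.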

Page 29: Translating from legacy to cloud

Building block 2: Backend Application

Backend application first (we’re going ground up), let’s make an argument for a multi image model -

Let’s assume our system uses an Oracle DB, so, the backend will require an Oracle DB client to interact with the DB

Let’s also assume the application requires a few 3rd party software packages (say, Python, Perl & Java)

That would make the backend application a 3-layer composite (Oracle DB client, 3rd party s/w and core backend application)

If we build a docker image for each of these 3 layers and make them communicate seamlessly, then, we’ll have our containerized application

Page 30: Translating from legacy to cloud

Building block 2: Backend Application

How do we make 3 different docker images, each running on its own container, work together to replicate the backend application?

Say, Image #1 has Oracle DB client installed in directory “/u01” Image #2 has all the necessary 3rd party s/w installed in directory “/u02” & Image #3 has core application installed in directory “/u03”

Now, Docker has a mechanism to expose a directory as a volume, which will be visible outside the container that created it (like a global variable: the volume is initialized by one container for other containers to import and use)

So, if we run images #1 & #2 in dedicated containers, expose directories /u01 & /u02 as Docker volumes, and run image #3 in its own container with access to those volumes, then the container running image #3 should behave like the whole backend app instance that includes the Oracle DB client, 3rd party s/w and core backend application (because it’ll have access to all 3 installation directories: /u01, /u02 & /u03)
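Here is a toy model of that volume wiring (the Container class below is a simulation, not the Docker API; the real mechanism is Docker volumes, e.g. the `--volumes-from` flag of `docker run`):

```python
# "Containers" expose directories as volumes, and the core-app container
# imports them; all names here are hypothetical.
class Container:
    def __init__(self, name, dirs, volumes_from=()):
        self.name = name
        self.fs = set(dirs)               # directories baked into the image
        for provider in volumes_from:     # like `docker run --volumes-from`
            self.fs |= provider.exported
        self.exported = set(dirs)         # what this container shares onward

db_client = Container("oracle-client", ["/u01"])
third_party = Container("3rd-party", ["/u02"])
core_app = Container("core-app", ["/u03"], volumes_from=[db_client, third_party])

# The core-app container now sees all three installation directories:
print(sorted(core_app.fs))
```

The point is that image #3 stays small: it carries only /u03 and borrows /u01 and /u02 at run time.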

Page 31: Translating from legacy to cloud

Building block 2: Backend Application

[Oracle DB client container, exposing /u01] + [3rd party s/w container, exposing /u02] → [Core backend app container] → [Oracle DB on AWS-RDS]

Page 32: Translating from legacy to cloud

How to create service?

We can run the backend application inside Docker containers now, but we need it as a service – how do we create one?

Before discussing the steps to create the service, let’s try to understand what a container orchestration tool does

A container orchestration tool deploys and manages the containers

There are a few options out there, but, Kubernetes and Docker Swarm Mode are our preferences at Ebillsoft

Here are the links to short and informative videos about these 2 tools1) Kubernetes: https://www.youtube.com/watch?v=4ht22ReBjno

2) Docker Swarm Mode: https://www.youtube.com/watch?v=KC4Ad1DS8xU

NOTE: AWS ECS is also an option for container orchestration within AWS

Page 33: Translating from legacy to cloud

Building block 2: Backend Application

Services are container aware – both Kubernetes & Docker Swarm Mode create services with all the necessities out of the box; all we need to do is write a service definition

Let’s define the service representing the backend app

1) Configure the orchestration tool to create an AWS EC2 cluster
2) Create a service definition as follows for the backend:
 a) Image: Oracle DB client; Replicas: 2; Expose volume: /u01; Service type: Load Balancer
 b) Image: 3rd party software; Replicas: 2; Expose volume: /u02; Service type: Load Balancer
 c) Image: Backend core app; Replicas: 6; Expose ports: 11001, 22; Access volumes: /u01 & /u02; Service type: Load Balancer; Name: Backend A
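Written out as plain data, the definition above looks like this (a sketch only; Kubernetes or Docker Swarm Mode would express the same fields in their own manifest formats):

```python
# Sketch of the backend service definition as plain data; field names
# are illustrative, not a real orchestrator schema.
backend_services = [
    {"image": "oracle-db-client", "replicas": 2,
     "expose_volumes": ["/u01"], "type": "LoadBalancer"},
    {"image": "3rd-party-software", "replicas": 2,
     "expose_volumes": ["/u02"], "type": "LoadBalancer"},
    {"name": "Backend A", "image": "backend-core-app", "replicas": 6,
     "expose_ports": [11001, 22], "access_volumes": ["/u01", "/u02"],
     "type": "LoadBalancer"},
]

# Sanity check: every volume the core app mounts is exposed by some image.
exposed = {v for s in backend_services for v in s.get("expose_volumes", [])}
needed = {v for s in backend_services for v in s.get("access_volumes", [])}
assert needed <= exposed, "a mounted volume is not exposed by any image"
print(sum(s["replicas"] for s in backend_services), "containers in total")
```

This kind of declarative definition is exactly the “desired state” the orchestration tool reconciles against.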

Page 34: Translating from legacy to cloud

Building block 3: Frontend Application

[Service: Backend A] = DB client containers + 3rd party s/w containers + backend containers (each behind a security group) → [Oracle DB on AWS-RDS], all inside the VPC; [Frontend Server] still outside

We have the backend service; let’s move on to the frontend application

If we can run it as a service and include it in the same VPC running the backend service and the RDS DB, then, we’re done here

Page 35: Translating from legacy to cloud

Building block 3: Frontend Application

We’ve seen the process of creating the backend application service in detail, so, let’s try to be short with the frontend service

Containerization first,

1) Let’s assume the frontend has some 3rd party software dependency (say, JRE8) and everything else is part of the core frontend installation package
2) So, we can either build 2 images (one for JRE8 and another for the frontend app) or just build one image (the frontend app on top of JRE8)

Ignoring technical details, let’s assume we exposed JRE8 as a Docker volume from a dedicated container (the 2-image solution)

Page 36: Translating from legacy to cloud

Building block 3: Frontend Application

[JRE8 container, exposing JRE8] → [Core frontend app container]

Page 37: Translating from legacy to cloud

Building block 3: Frontend Application

Now, let’s define the frontend service in the orchestration tool

a) Image: Core frontend app (connecting to service “Backend A” – the backend service we created); Replicas: 6; Expose port: 8040, 8080; Access Vol: jre8; Service Type: Load Balancer; Name: Frontend A

Page 38: Translating from legacy to cloud

The cloud friendly solution

[Service: frontend A] (JRE8 containers + core frontend containers) → [Service: backend A] (DB client containers + 3rd party s/w containers + core backend containers) → [Oracle DB on AWS-RDS], each behind its own security group, all inside the VPC

Page 39: Translating from legacy to cloud

Summary

To summarize, the 6 stages of translation are

Understand: know the legacy solution (how it works)
Analyze: study the legacy solution (what can be done to optimize)
Compartmentalize: split the legacy solution into logical blocks (and blocks into layers)
Cloud map: identify the right cloud-friendly model for each block and layer (containers, services, volumes etc.)
Construct: build the pieces individually
Assemble: assemble the pieces in a cloud platform