Documented by Prof. K.V. Reddy, Asst. Prof. at DIEMS (BamuEngine.com)

UNIT 1

EVOLUTION OF MODEL COMPUTING

Introduction to Mainframe architecture, Client-server architecture, Cluster Computing, Grid Computing, Parallel Computing and Distributed Computing, Evolution of sharing on the Internet, Introduction of Cloud Computing: Definition of cloud, Cloud Deployment Models, Cloud Service Models, Key Characteristics, Benefits and Risks in Cloud Computing, Service oriented architecture (SOA) and Cloud Computing Reference Architecture by IBM.

INTRODUCTION TO MAINFRAME ARCHITECTURE:

The mainframes we use today date back to April 7, 1964, with the announcement of the IBM System/360. The operating system of the System/360 family evolved into MVS (Multiple Virtual Storage). Later, IBM packaged MVS and many of its key subsystems together and called the result OS/390®, which is the immediate predecessor to z/OS.

Until the 80s, most mainframes used punched cards for input and tele-printers for output; these were later replaced by CRT (cathode ray tube) terminals. Typical (post-1980) mainframe architecture is depicted in Figure 1.1. A terminal-based user interface would display screens controlled by the mainframe server using the 'Virtual Telecommunications Access Method' (VTAM) for entering and viewing information.

VTAM (Virtual Telecommunications Access Method) is an IBM application program interface (API) for communicating with telecommunication devices and their users. VTAM was the first IBM program to allow programmers to deal with devices as "logical units" without having to understand the details of line protocol and device operation.

In IBM terminology, VTAM is access method software allowing application programs to read and write data to and from external devices. It is called 'virtual' because it was introduced at the time when IBM was introducing virtual storage by upgrading the operating systems of the System/360 series to virtual storage versions.

VTAM has since been renamed the SNA Services feature of Communications Server for OS/390. This software package also provides TCP/IP functions. VTAM supports several network protocols, including SDLC, Token Ring, start-stop, Bisync, local (channel-attached) 3270 devices, and later TCP/IP. VTAM became part of IBM's strategic Systems Network Architecture (SNA), which in turn became part of the more comprehensive Systems Application Architecture (SAA). Terminals communicated with the mainframe using the 'Systems Network Architecture' (SNA) protocol, instead of the ubiquitous TCP/IP protocol of today.

While these mainframe computers had limited CPU power by modern standards, their I/O bandwidth was (and is, to date) extremely generous relative to their CPU power. Consequently, mainframe applications were built using a batch architecture to minimize utilization of the CPU during data entry or retrieval. Thus, data would be written to disk as soon as it was captured and then processed by scheduled background programs, in sharp contrast to the complex business logic that gets executed during 'online' transactions on the web today. In fact, for many years, moving from a batch model to an online one was considered a major revolution in IT architecture, and large systems migration efforts were undertaken to achieve this. It is easy to see why: in a batch system, if one deposited money in a bank account it would usually not show up in the balance until the next day, after the 'end of day' batch jobs had run! Further, if there was incorrect data entry, a number of corrective measures would have to be triggered later, rather than the data being validated immediately.
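To make the batch model concrete, here is a minimal Python sketch (illustrative only; the file name and record format are assumptions, not anything from these notes) that separates cheap data capture from a scheduled 'end of day' job that applies the business logic:

```python
import json
from collections import defaultdict

CAPTURE_FILE = "transactions_today.jsonl"   # hypothetical capture file

def capture_transaction(account: str, amount: float) -> None:
    """Data entry: just write the record to disk, no business logic yet."""
    with open(CAPTURE_FILE, "a") as f:
        f.write(json.dumps({"account": account, "amount": amount}) + "\n")

def end_of_day_batch(balances: dict) -> dict:
    """Scheduled background job: validate and apply all captured records."""
    updated = defaultdict(float, balances)
    with open(CAPTURE_FILE) as f:
        for line in f:
            record = json.loads(line)
            if record["amount"] == 0:        # corrective action deferred to batch time
                continue
            updated[record["account"]] += record["amount"]
    return dict(updated)

# During the day a deposit is only captured...
capture_transaction("ACC-001", 500.0)
# ...and the balance changes only after the nightly run.
print(end_of_day_batch({"ACC-001": 100.0}))  # {'ACC-001': 600.0}
```

The point of the sketch is the division of work: data entry touches the CPU as little as possible, while all processing waits for the scheduled batch job.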

MVS (Multiple Virtual Storage) is an operating system from IBM that continues to run on many of IBM's mainframe and large server computers. MVS has been said to be the operating system that keeps the world going, and the same could be said of its successor systems, OS/390 and z/OS. The 'Virtual Storage' in MVS refers to the use of virtual memory in the operating system. Job Control Language (JCL) is a scripting language used on IBM mainframe operating systems to instruct the system on how to run a batch job or start a subsystem.

IMS (Information Management System) is a database and transaction management system that was first introduced by IBM in 1968. Since then, IMS has gone through many changes in adapting to new programming tools and environments. IMS is one of two major legacy database and transaction management subsystems from IBM that run on mainframe MVS (now z/OS) systems; the other is CICS. It is claimed that, historically, application programs that use either (or both) IMS or CICS services have handled, and continue to handle, most of the world's banking, insurance, and order entry transactions. IMS consists of two major components, the IMS Database Management System (IMS DB) and the IMS Transaction Management System (IMS TM). In IMS DB, the data is organized into a hierarchy: the data in each level is dependent on the data in the next higher level. The data is arranged so that its integrity is ensured, and the storage and retrieval process is optimized. IMS TM controls I/O (input/output) processing, provides formatting, logging, and recovery of messages, maintains communications security, and oversees the scheduling and execution of programs. TM uses a messaging mechanism for queuing requests. IMS's original programming interface was DL/1 (Data Language/1). Today, IMS applications and databases can be connected to CICS applications and DB2 databases, and Java programs can access IMS databases and services.
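A rough Python sketch of the hierarchical organization used by IMS DB: each record (segment) hangs off a single parent, so data at one level only makes sense in the context of the level above it. The segment names below are invented for illustration, not actual IMS definitions:

```python
# One root segment per customer; every lower level depends on its parent.
customer = {
    "CUSTOMER": {"id": "C100", "name": "A. Rao"},
    "ACCOUNTS": [                       # children of CUSTOMER
        {
            "ACCOUNT": {"number": "ACC-001", "type": "savings"},
            "TRANSACTIONS": [           # children of ACCOUNT
                {"date": "2020-03-01", "amount": 500.0},
                {"date": "2020-03-02", "amount": -120.0},
            ],
        }
    ],
}

# Retrieval always walks down from the root, level by level,
# in the same spirit as DL/1 calls navigating parent to child.
for acct in customer["ACCOUNTS"]:
    for txn in acct["TRANSACTIONS"]:
        print(acct["ACCOUNT"]["number"], txn["amount"])
```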

The storage subsystem in mainframes, called VSAM ('Virtual Storage Access Method'), provided built-in support for a variety of file access and indexing mechanisms, as well as sharing of data between concurrent users using record-level locking mechanisms. Early file-structure-based data storage, including networked and hierarchical databases, rarely included support for concurrency control beyond simple locking. The need for transaction control, i.e., maintaining consistency of a logical unit of work made up of multiple updates, led to the development of 'transaction-processing monitors' (TP-monitors), such as CICS (Customer Information Control System). CICS leveraged facilities of the VSAM layer and implemented commit and rollback protocols to support atomic transactions in a multi-user environment. CICS is still in use in conjunction with DB2 relational databases on IBM z-series mainframes. At the same time, the need for speed continued to see the exploitation of so-called 'direct access' methods where transaction control is left to application logic.
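The commit/rollback behaviour that a TP-monitor such as CICS provides can be illustrated with any transactional store. The sketch below uses Python's built-in sqlite3 module purely as a stand-in (it is not CICS or DB2) to show how a logical unit of work made up of multiple updates either commits as a whole or is rolled back:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id TEXT PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO account VALUES (?, ?)",
                 [("A", 1000.0), ("B", 200.0)])
conn.commit()

def transfer(conn, src, dst, amount):
    """One logical unit of work: debit + credit must succeed or fail together."""
    try:
        conn.execute("UPDATE account SET balance = balance - ? WHERE id = ?",
                     (amount, src))
        conn.execute("UPDATE account SET balance = balance + ? WHERE id = ?",
                     (amount, dst))
        if conn.execute("SELECT balance FROM account WHERE id = ?",
                        (src,)).fetchone()[0] < 0:
            raise ValueError("insufficient funds")
        conn.commit()            # both updates become visible atomically
    except Exception:
        conn.rollback()          # neither update survives
        raise

transfer(conn, "A", "B", 300.0)
print(conn.execute("SELECT * FROM account ORDER BY id").fetchall())
# [('A', 700.0), ('B', 500.0)]
```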

CLIENT-SERVER ARCHITECTURE:

The microprocessor revolution of the 80s brought PCs to business desktops as well as homes. At the same time, minicomputers such as the VAX family and RISC-based systems running the UNIX operating system and supporting the C programming language became available. It was now conceivable to move some data processing tasks away from expensive mainframes to exploit the seemingly powerful and inexpensive desktop CPUs. As an added benefit, corporate data became available on the same desktop computers that were beginning to be used for word processing and spreadsheet applications using emerging PC-based office-productivity tools. In contrast, terminals were difficult to use and typically found only in 'data processing rooms'. Moreover, relational databases, such as Oracle, became available on minicomputers, overtaking the relatively lukewarm adoption of DB2 in the mainframe world.

Finally, networking using TCP/IP rapidly became a standard, meaning that networks of PCs and minicomputers could share data. Corporate data processing rapidly moved to exploit these new technologies. Figure 1.2 shows the architecture of client-server systems. First, the 'forms' architecture for minicomputer-based data processing became popular. At first this architecture involved the use of terminals to access server-side logic in C, mirroring the mainframe architecture; later, PC-based forms applications provided graphical 'GUIs' as opposed to the terminal-based character-oriented 'CUIs'. The GUI 'forms' model was the first 'client-server' architecture.

The 'forms' architecture evolved into the more general client-server architecture, wherein significant processing logic executes in a client application, such as a desktop PC. Therefore the client-server architecture is also referred to as a 'fat-client' architecture, as shown in Figure 1.2. The client application (or 'fat client') directly makes calls (using SQL) to the relational database using networking protocols such as SQL/Net, running over a local area (or even wide area) network using TCP/IP. Business logic largely resides within the client application code, though some business logic can also be implemented within the database for faster performance, using 'stored procedures'.
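A minimal sketch of the fat-client pattern in Python, using sqlite3 as a stand-in for a remote relational database reached over SQL/Net on TCP/IP (the table and validation rule are invented for illustration): the validation and business logic run in the client program, and only SQL statements go to the database:

```python
import sqlite3

db = sqlite3.connect(":memory:")   # stand-in for a remote relational database
db.execute("CREATE TABLE orders (customer TEXT, item TEXT, qty INTEGER)")

def place_order(customer: str, item: str, qty: int) -> None:
    # Business logic and validation execute on the client (the "fat client")...
    if qty <= 0:
        raise ValueError("quantity must be positive")
    if not customer:
        raise ValueError("customer is required")
    # ...and only the resulting SQL is sent across the network.
    db.execute("INSERT INTO orders VALUES (?, ?, ?)", (customer, item, qty))
    db.commit()

place_order("C100", "widget", 3)
print(db.execute("SELECT * FROM orders").fetchall())  # [('C100', 'widget', 3)]
```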

The client-server architecture became hugely popular: mainframe applications which had been evolving for more than a decade were rapidly becoming difficult to maintain, and client-server provided a refreshing and seemingly cheaper alternative to recreating these applications for the new world of desktop computers and smaller Unix-based servers. Further, by leveraging the computing power on desktop computers to perform validations and other logic, 'online' systems became possible, a big step forward for a world used to batch processing. Lastly, graphical user interfaces allowed the development of extremely rich user interfaces, which added to the feeling of being 'redeemed' from the mainframe world.

In the early to mid-90s, the client-server revolution spawned and drove the success of a host of application software products, such as SAP R/3, the client-server version of SAP's ERP software for core manufacturing process automation, which was later extended to other areas of enterprise operations. Similarly, supply chain management (SCM) products, such as those from i2, and customer relationship management (CRM) products, such as those from Siebel, also became popular. With these products it was conceivable, in principle, to replace large parts of the functionality deployed on mainframes by client-server systems, at a fraction of the cost.

However, the client-server architecture soon began to exhibit its limitations as its usage grew beyond small workgroup applications to the core systems of large organizations. Since processing logic on the 'client' directly accessed the database layer, client-server applications usually made many requests to the server while processing a single screen. Each such request was relatively bulky as compared to the terminal-based model, where only the input and final result of a computation were transmitted. In fact, CICS and IMS even today support 'changed-data only' modes of terminal images, where only those bytes changed by a user are transmitted over the network. Such 'frugal' network architectures enabled globally distributed terminals to connect to a central mainframe even though network bandwidths were far lower than they are today. Thus, while the client-server model worked fine over a local area network, it created problems when client-server systems began to be deployed on wide area networks connecting globally distributed offices. As a result, many organizations were forced to create regional data centers, each replicating the same enterprise application, albeit with local data. This structure itself led to inefficiencies in managing global software upgrades, not to mention the additional complications posed by having to upgrade the 'client' applications on each desktop machine as well.

Finally, it also became clear over time that application maintenance was far costlier when user interface and business logic code were intermixed, as almost always became the case in the 'fat' client-side applications. Lastly, and in the long run most importantly, the client-server model did not scale; organizations such as banks and stock exchanges, where very high volume processing was the norm, could not be supported by the client-server model.

Thus, the mainframe remained the only means to achieve large-throughput, high-performance business processing.

CLUSTER COMPUTING:

A computer cluster consists of a set of loosely or tightly connected computers that work together so that, in many respects, they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software.

The components of a cluster are usually connected to each other through fast local area networks ("LANs"), with each node (a computer used as a server) running its own instance of an operating system. In most circumstances, all of the nodes use the same hardware and the same operating system, although in some setups (e.g., using Open Source Cluster Application Resources, OSCAR), different operating systems can be used on each computer, and/or different hardware.

Clusters are usually deployed to improve performance and availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability. Computer clusters emerged as a result of the convergence of a number of computing trends, including the availability of low-cost microprocessors, high-speed networks, and software for high-performance distributed computing. They have a wide range of applicability and deployment, ranging from small business clusters with a handful of nodes to some of the fastest supercomputers in the world, such as IBM's Sequoia. The range of applications is nonetheless limited, since the software typically needs to be purpose-built for each task; it is hence not practical to use computer clusters for casual, general-purpose computing tasks.

The desire to get more computing power and better reliability by orchestrating a number of low-cost commercial off-the-shelf computers has given rise to a variety of architectures and configurations. The computer clustering approach usually (but not always) connects a number of readily available computing nodes (e.g., personal computers used as servers) via a fast local area network. The activities of the computing nodes are orchestrated by "clustering middleware", a software layer that sits atop the nodes and allows the users to treat the cluster as, by and large, one cohesive computing unit, e.g. via a single system image concept.

Computer clustering relies on a centralized management approach which makes the nodes available as orchestrated shared servers. It is distinct from other approaches, such as peer-to-peer or grid computing, which also use many nodes but have a far more distributed nature.

A computer cluster may be a simple two-node system which just connects two personal computers, or may be a very fast supercomputer. A basic approach to building a cluster is that of a Beowulf cluster, which may be built with a few personal computers to produce a cost-effective alternative to traditional high-performance computing. An early project that showed the viability of the concept was the 133-node Stone Soupercomputer. The developers used Linux, the Parallel Virtual Machine toolkit and the Message Passing Interface library to achieve high performance at a relatively low cost.

Figure: A typical Beowulf cluster configuration.

Attributes of clusters

Figure: A load-balancing cluster with two servers and N user stations.

Computer clusters may be configured for different purposes, ranging from general-purpose business needs such as web-service support to computation-intensive scientific calculations. In either case, the cluster may use a high-availability approach. Note that the attributes described below are not exclusive, and a "computer cluster" may also use a high-availability approach, etc.

"Load-balancing" clusters are configurations in which cluster nodes share the computational workload to provide better overall performance. For example, a web server cluster may assign different queries to different nodes, so the overall response time will be optimized. However, approaches to load-balancing may significantly differ among applications; e.g., a high-performance cluster used for scientific computations would balance load with different algorithms from a web-server cluster, which may just use a simple round-robin method, assigning each new request to a different node.
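As an illustration of the simple round-robin policy mentioned above, here is a minimal Python sketch (the node names and request list are made up): each new request is handed to the next node in a fixed rotation.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Hands each incoming request to the next node in a fixed rotation."""
    def __init__(self, nodes):
        self._nodes = cycle(nodes)

    def assign(self, request):
        node = next(self._nodes)
        return node, request

balancer = RoundRobinBalancer(["node-1", "node-2", "node-3"])
for req in ["GET /a", "GET /b", "GET /c", "GET /d"]:
    print(balancer.assign(req))
# ('node-1', 'GET /a') ('node-2', 'GET /b') ('node-3', 'GET /c') ('node-1', 'GET /d')
```

A real web-server cluster would of course also track node health and current load, which is exactly where the application-specific balancing algorithms mentioned above come in.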

Computer clusters are also used for computation-intensive purposes, rather than handling IO-oriented operations such as web service or databases. For instance, a computer cluster might support computational simulations of vehicle crashes or weather. Very tightly coupled computer clusters are designed for work that may approach "supercomputing".

"High-availability clusters" (also known as failover clusters, or HA clusters) improve the availability of

the cluster approach. They operate by having redundant nodes, which are then used to provide service when

system components fail. HA cluster implementations attempt to use redundancy of cluster components to

eliminate single points of failure. There are commercial implementations of High-Availability clusters for

Page 7: INTRODUCTION TO MAINFRAME ARCHITECTURE · UNIT 1 EVOLUTION OF MODEL COMPUTING Introduction to Mainframe architecture, Client ... start-stop, Bisync, local (channel attached) 3270

1.7

Documented by Prof. K.V.Reddy Asst.Prof at DIEMS BamuEngine.com

Bamu

many operating systems. The Linux-HA project is one commonly used free software HA package for the

Linux operating system.

GRID COMPUTING:

A Grid is an infrastructure that involves the integrated and collaborative use of computers, networks, databases and scientific instruments owned and managed by multiple organizations. Grid applications often involve large amounts of data and/or computing resources that require secure resource sharing across organizational boundaries.

Grid computing is a form of distributed computing whereby a "super and virtual computer" is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks. Grid computing (Foster and Kesselman, 1999) is a growing technology that facilitates the execution of large-scale, resource-intensive applications on geographically distributed computing resources. It facilitates flexible, secure, coordinated large-scale resource sharing among dynamic collections of individuals, institutions, and resources, enabling communities ("virtual organizations") to share geographically distributed resources as they pursue common goals.

A Grid is a shared collection of reliable resources (tightly coupled clusters) and unreliable resources (loosely coupled machines), together with interactively communicating researchers from different virtual organizations (doctors, biologists, physicists). The Grid system controls and coordinates the integrity of the Grid by balancing the usage of reliable and unreliable resources among its participants, providing better quality of service.

Grid computing is a method of harnessing the power of many computers in a network to solve problems requiring a large number of processing cycles and involving huge amounts of data. Most organizations today deploy firewalls around their computer networks to protect their sensitive proprietary data. But the central idea of grid computing, namely enabling resource sharing across organizational boundaries, makes mechanisms such as firewalls difficult to use.

Types Of Grids

Computational Grid: "A computational grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities." It provides users with compute power for solving jobs, together with mechanisms that can intelligently and transparently select the computing resources capable of running a user's jobs, while allowing users to manage those computing resources independently.

Example: Science Grid (US Department of Energy).

Data Grid: A data grid is a grid computing system that deals with data: the controlled sharing and management of large amounts of distributed data. The data grid is the storage component of a grid environment. Scientific and engineering applications require access to large amounts of data, and often this data is widely distributed. A data grid provides seamless access to the local or remote data required to complete compute-intensive calculations.

Examples: Biomedical Informatics Research Network (BIRN), Southern California Earthquake Center (SCEC).

A TYPICAL VIEW OF GRID ENVIRONMENT:

A high-level view of the activities involved within a seamless and scalable Grid environment is shown in Figure 2. Grid resources are registered within one or more Grid information services. The end users submit their application requirements to the Grid resource broker, which then discovers suitable resources by querying the information services, schedules the application jobs for execution on these resources, and then monitors their processing until they are completed. A more complex scenario would involve more requirements, and therefore Grid environments involve services such as security, information, directory, resource allocation, application development, execution management, resource aggregation, and scheduling. Figure 3 shows the hardware and software stack within a typical Grid architecture. It consists of four layers: fabric, core middleware, user-level middleware, and applications and portals.
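The broker workflow described above (register resources, query an information service, schedule, monitor) can be sketched in a few lines of Python. This is a toy model under assumed names (InformationService, ResourceBroker), not any real Grid middleware such as Globus:

```python
class InformationService:
    """Registry where Grid resources advertise themselves."""
    def __init__(self):
        self.resources = []          # e.g. [{"name": "clusterA", "free_cpus": 16}]

    def register(self, resource):
        self.resources.append(resource)

    def query(self, min_cpus):
        return [r for r in self.resources if r["free_cpus"] >= min_cpus]

class ResourceBroker:
    """Discovers suitable resources and schedules user jobs onto them."""
    def __init__(self, info_service):
        self.info = info_service

    def submit(self, job):
        candidates = self.info.query(job["cpus"])                # discovery
        if not candidates:
            return {"job": job["name"], "status": "no suitable resource"}
        chosen = max(candidates, key=lambda r: r["free_cpus"])   # scheduling
        chosen["free_cpus"] -= job["cpus"]
        return {"job": job["name"], "resource": chosen["name"],
                "status": "running"}                             # monitoring would follow

info = InformationService()
info.register({"name": "clusterA", "free_cpus": 16})
info.register({"name": "clusterB", "free_cpus": 64})
broker = ResourceBroker(info)
print(broker.submit({"name": "simulation-1", "cpus": 32}))
# {'job': 'simulation-1', 'resource': 'clusterB', 'status': 'running'}
```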


The Grid Fabric layer consists of distributed resources such as computers, networks, storage devices and scientific instruments. The computational resources represent multiple architectures such as clusters, supercomputers, servers and ordinary PCs, which run a variety of operating systems (such as UNIX variants or Windows). Scientific instruments such as telescopes and sensor networks provide real-time data that can be transmitted directly to computational sites or stored in a database.

Core Grid middleware offers services such as remote process management, co-allocation of resources, storage access, information registration and discovery, security, and aspects of Quality of Service (QoS) such as resource reservation and trading. These services abstract the complexity and heterogeneity of the fabric level by providing a consistent method for accessing distributed resources.

User-level Grid middleware utilizes the interfaces provided by the low-level middleware to provide higher-level abstractions and services. These include application development environments, programming tools and resource brokers for managing resources and scheduling application tasks for execution on global resources.

Grid applications and portals are typically developed using Grid-enabled programming environments and interfaces, along with the brokering and scheduling services provided by user-level middleware. An example application, such as a parameter simulation or a grand-challenge problem, would require computational power and access to remote datasets, and may need to interact with scientific instruments. Grid portals offer Web-enabled application services, where users can submit their jobs to remote resources and collect results through the Web.

PARALLEL COMPUTING:

Parallel computing is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved in parallel. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multi-core and multi-processor computers having multiple processing elements within a single machine. Specialized parallel computer architectures are sometimes used alongside traditional processors for accelerating specific tasks. Parallel computer programs are more difficult to write than sequential ones, because communication and synchronization between the different subtasks are typically among the greatest obstacles to getting good performance.

Types of parallelism:

BIT-LEVEL PARALLELISM: From the advent of very-large-scale integration (VLSI) computer-chip fabrication technology in the 1970s until about 1986, speed-up in computer architecture was driven by doubling the computer word size, the amount of information the processor can manipulate per cycle. Increasing the word size reduces the number of instructions the processor must execute to perform an operation on variables whose sizes are greater than the length of the word. For example, where an 8-bit processor must add two 16-bit integers, the processor must first add the 8 lower-order bits from each integer using the standard addition instruction, then add the 8 higher-order bits using an add-with-carry instruction and the carry bit from the lower-order addition; thus, an 8-bit processor requires two instructions to complete a single operation, where a 16-bit processor would be able to complete the operation with a single instruction.
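The two-instruction example can be traced numerically. The short Python sketch below mimics an 8-bit ALU adding two 16-bit integers in two steps (low bytes first, then high bytes plus the carry); the operand values are arbitrary illustrations:

```python
def add16_on_8bit_alu(a: int, b: int) -> int:
    """Add two 16-bit integers using only 8-bit additions, as an 8-bit CPU would."""
    lo = (a & 0xFF) + (b & 0xFF)              # standard ADD on the low bytes
    carry = lo >> 8                           # carry bit out of the low addition
    hi = (a >> 8) + (b >> 8) + carry          # ADD-with-carry on the high bytes
    return ((hi & 0xFF) << 8) | (lo & 0xFF)

print(hex(add16_on_8bit_alu(0x12F0, 0x0420)))              # 0x1710
assert add16_on_8bit_alu(0x12F0, 0x0420) == (0x12F0 + 0x0420) & 0xFFFF
```

A 16-bit processor would produce the same result with a single addition instruction, which is the whole point of widening the word.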

INSTRUCTION-LEVEL PARALLELISM:

A computer program is, in essence, a stream of instructions executed by a processor. These instructions can be re-ordered and combined into groups which are then executed in parallel without changing the result of the program. This is known as instruction-level parallelism.

Figure: A canonical five-stage pipeline in a RISC machine (IF = Instruction Fetch, ID = Instruction Decode, EX = Execute, MEM = Memory access, WB = Register write back).

Task parallelism: Task parallelism is the characteristic of a parallel program that "entirely different calculations can be performed on either the same or different sets of data". This contrasts with data parallelism, where the same calculation is performed on the same or different sets of data. Task parallelism involves the decomposition of a task into sub-tasks and then allocating each sub-task to a processor for execution. The processors then execute these sub-tasks simultaneously and often cooperatively. Task parallelism does not usually scale with the size of a problem.
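A small Python sketch of task parallelism using the standard concurrent.futures module: two entirely different calculations (the functions below are illustrative placeholders) run simultaneously on the same input data, each allocated to its own worker process.

```python
from concurrent.futures import ProcessPoolExecutor

def total(data):          # sub-task 1: one kind of calculation
    return sum(data)

def spread(data):         # sub-task 2: an entirely different calculation
    return max(data) - min(data)

if __name__ == "__main__":
    data = list(range(1, 1001))
    # Each sub-task is allocated to a separate worker process; they run in parallel.
    with ProcessPoolExecutor(max_workers=2) as pool:
        f1 = pool.submit(total, data)
        f2 = pool.submit(spread, data)
        print(f1.result(), f2.result())   # 500500 999
```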

Memory and communication:

Main memory in a parallel computer is either shared memory (shared between all processing elements in a single address space) or distributed memory (in which each processing element has its own local address space). Distributed memory refers to the fact that the memory is logically distributed, but often implies that it is physically distributed as well. Distributed shared memory and memory virtualization combine the two approaches, where each processing element has its own local memory and access to the memory on non-local processors. Accesses to local memory are typically faster than accesses to non-local memory. A logical view of a Non-Uniform Memory Access (NUMA) architecture is shown in the figure below. Processors in one directory can access that directory's memory with less latency than they can access memory in another directory.

Computer architectures in which each element of main memory can be accessed with equal latency and bandwidth are known as Uniform Memory Access (UMA) systems. Typically, that can be achieved only by a shared memory system, in which the memory is not physically distributed. A system that does not have this property is known as a Non-Uniform Memory Access (NUMA) architecture. Distributed memory systems have non-uniform memory access.

CLASSES OF PARALLEL COMPUTERS:

Parallel computers can be roughly classified according to the level at which the hardware supports parallelism.

MULTICORE COMPUTING: A multicore processor is a processor that includes multiple execution units ("cores") on the same chip. These processors differ from superscalar processors, which can issue multiple instructions per cycle from one instruction stream (thread); in contrast, a multicore processor can issue multiple instructions per cycle from multiple instruction streams. Each core in a multicore processor can potentially be superscalar as well, that is, on every cycle each core can issue multiple instructions from one instruction stream. Simultaneous multithreading was an early form of pseudo-multicore: a processor capable of simultaneous multithreading has only one execution unit ("core"), but when that execution unit would otherwise be idling (such as during a cache miss), it is used to process a second thread.

SYMMETRIC MULTIPROCESSING: A symmetric multiprocessor (SMP) is a computer system with multiple identical processors that share memory and connect via a bus. Bus contention prevents bus architectures from scaling; as a result, SMPs generally do not comprise more than 32 processors. "Because of the small size of the processors and the significant reduction in the requirements for bus bandwidth achieved by large caches, such symmetric multiprocessors are extremely cost-effective, provided that a sufficient amount of memory bandwidth exists."

DISTRIBUTED COMPUTING: A distributed computer (also known as a distributed memory multiprocessor) is a distributed memory computer system in which the processing elements are connected by a network. Distributed computers are highly scalable.

PARALLEL PROCESSING: Processing of multiple tasks simultaneously on multiple processors is called parallel processing. A parallel program consists of multiple active processes (tasks) simultaneously solving a given problem. A given task is divided into multiple sub-tasks using a divide-and-conquer technique, and each sub-task is processed on a different central processing unit (CPU). Programming on a multi-processor system using the divide-and-conquer technique is called parallel programming.
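As a concrete (if tiny) illustration of divide-and-conquer parallel programming, the Python sketch below splits one task, summing a large list, into sub-tasks that are processed on different CPUs using the standard multiprocessing module; the chunk size and worker count are arbitrary choices:

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """Each worker CPU solves one sub-task."""
    return sum(chunk)

if __name__ == "__main__":
    numbers = list(range(1_000_000))
    # Divide: split the task into sub-tasks (chunks of the data).
    chunks = [numbers[i:i + 250_000] for i in range(0, len(numbers), 250_000)]
    # Conquer: process each sub-task on a different CPU, then combine the results.
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total == sum(numbers))   # True
```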

Hardware architectures for parallel processing

The core elements of parallel processing are CPUs. Based on the number of instruction and data streams that can be processed simultaneously, computing systems are classified into the following four categories:

Single-instruction, single-data (SISD) systems: An SISD computing system is a uniprocessor machine capable of executing a single instruction, operating on a single data stream (see Figure 2.2). In SISD, machine instructions are processed sequentially; hence computers adopting this model are popularly called sequential computers.

Single-instruction, multiple-data (SIMD) systems: An SIMD computing system is a multiprocessor machine capable of executing the same instruction on all the CPUs but operating on different data streams (see Figure 2.3). Machines based on an SIMD model are well suited to scientific computing since they involve lots of vector and matrix operations.
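SIMD-style data parallelism can be demonstrated with NumPy (assumed to be installed; this runs on an ordinary CPU, so it only mimics the programming model): one operation is written once and applied across whole data streams, in the same spirit as a vector instruction applied element-wise.

```python
import numpy as np

prices = np.array([100.0, 250.0, 80.0, 40.0])    # one data stream...
quantities = np.array([3, 1, 10, 25])             # ...and another

# The "single instruction" (multiply) is applied to every element pair at once,
# instead of looping over the data one element at a time.
revenue = prices * quantities
print(revenue)          # [ 300.  250.  800. 1000.]
print(revenue.sum())    # 2350.0
```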

Multiple-instruction, single-data (MISD) systems: An MISD computing system is a multiprocessor machine capable of executing different instructions on different PEs (processing elements), but with all of them operating on the same data set (see Figure 2.4). For instance, several statements can each perform a different operation on the same data set. Machines built using the MISD model are not useful in most applications; a few machines were built, but none of them were available commercially. They became more of an intellectual exercise than a practical configuration.

Multiple-instruction, multiple-data (MIMD) systems: An MIMD computing system is a multiprocessor machine capable of executing multiple instructions on multiple data sets (see Figure 2.5). MIMD machines are broadly categorized into shared-memory MIMD and distributed-memory MIMD.

Shared memory MIMD machines: In the shared memory MIMD model, all the processing elements are connected to a single global memory and they all have access to it (see Figure 2.6). Systems based on this model are also called tightly coupled multiprocessor systems.

Distributed memory MIMD machines: In the distributed memory MIMD model, all processing elements have a local memory. Systems based on this model are also called loosely coupled multiprocessor systems. The communication between processing elements in this model takes place through the interconnection network (the interprocess communication channel, or IPC).


DISTRIBUTED COMPUTING:

Distributed computing is a field of computer science that studies distributed systems. A distributed system is a software system in which components located on networked computers communicate and coordinate their actions by passing messages. The components interact with each other in order to achieve a common goal. A computer program that runs in a distributed system is called a distributed program, and distributed programming is the process of writing such programs. There are many alternatives for the message passing mechanism, including RPC-like connectors and message queues.

Distributed computing also refers to the use of distributed systems to solve computational problems. In distributed computing, a problem is divided into many tasks, each of which is solved by one or more computers, which communicate with each other by message passing.
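Message passing between cooperating nodes can be sketched with Python's multiprocessing queues standing in for the network (a real distributed system would use sockets, RPC, or a message queue): two worker "nodes" each solve part of a problem and send their results back as messages.

```python
from multiprocessing import Process, Queue

def worker(node_id, task, outbox: Queue):
    """A 'node': solves its sub-problem and communicates the result as a message."""
    result = sum(task)
    outbox.put({"from": node_id, "result": result})

if __name__ == "__main__":
    outbox = Queue()
    tasks = {"node-1": range(0, 500), "node-2": range(500, 1000)}
    procs = [Process(target=worker, args=(nid, t, outbox)) for nid, t in tasks.items()]
    for p in procs:
        p.start()
    # The coordinator only ever sees messages, never the workers' local memory.
    messages = [outbox.get() for _ in procs]
    for p in procs:
        p.join()
    print(sum(m["result"] for m in messages))   # 499500, the common goal
```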

INTRODUCTION:

The word distributed in terms such as "distributed system", "distributed programming", and "distributed algorithm" originally referred to computer networks where individual computers were physically distributed within some geographical area. The terms are nowadays used in a much wider sense, even referring to autonomous processes that run on the same physical computer and interact with each other by message passing. While there is no single definition of a distributed system, the following defining properties are commonly used:

• There are several autonomous computational entities, each of which has its own local memory.
• The entities communicate with each other by message passing.

Here, the computational entities are called computers or nodes.

A distributed system may have a common goal, such as solving a large computational problem. Alternatively, each computer may have its own user with individual needs, and the purpose of the distributed system is to coordinate the use of shared resources or provide communication services to the users.

General concepts and definitions

A distributed system is a collection of independent computers that appears to its users as a single coherent system. This definition is general enough to include various types of distributed computing systems that are especially focused on unified usage and aggregation of distributed resources. A distributed system is one in which components located at networked computers communicate and coordinate their actions only by passing messages. As specified in this definition, the components of a distributed system communicate with some sort of message passing, a term that encompasses several communication models.

Components of a distributed system: A distributed system is the result of the interaction of several components that traverse the entire computing stack from hardware to software. It emerges from the collaboration of several elements that, by working together, give users the illusion of a single coherent system. Figure 2.10 provides an overview of the different layers that are involved in providing the services of a distributed system.

At the very bottom layer, computer and network hardware constitute the physical infrastructure; these components are directly managed by the operating system, which provides the basic services for interprocess communication (IPC), process scheduling and management, and resource management in terms of the file system and local devices. Taken together, these two layers become the platform on top of which specialized software is deployed to turn a set of networked computers into a distributed system. The use of well-known standards at the operating system level, and even more at the hardware and network levels, allows easy harnessing of heterogeneous components and their organization into a coherent and uniform system. For example, network connectivity between different devices is controlled by standards, which allow them to interact seamlessly. At the operating system level, IPC services are implemented on top of standardized communication protocols such as Transmission Control Protocol/Internet Protocol (TCP/IP), User Datagram Protocol (UDP), or others.

The middleware layer leverages such services to build a uniform environment for the development and deployment of distributed applications. By relying on the services offered by the operating system, the middleware develops its own protocols, data formats, and programming languages or frameworks for the development of distributed applications. All of them constitute a uniform interface for distributed application developers that is completely independent from the underlying operating system and hides all the heterogeneities of the bottom layers.

The top of the distributed system stack is represented by the applications and services designed and developed to use the middleware. These can serve several purposes and often expose their features in the form of graphical user interfaces (GUIs) accessible locally or through the Internet via a Web browser. For example, in the case of a cloud computing system, the use of Web technologies is strongly preferred, not only to interface distributed applications with the end user but also to provide platform services aimed at building distributed systems. A very good example is constituted by Infrastructure-as-a-Service (IaaS) providers such as Amazon Web Services (AWS), which provide facilities for creating virtual machines, organizing them together into a cluster, and deploying applications and systems on top. Figure 2.11 shows an example of how the general reference architecture of a distributed system is contextualized in the case of a cloud computing system.

The Evolution of Cloud Computing: To understand what cloud computing is and is not, it is important to understand how this model of computing has evolved. As Alvin Toffler notes in his famous book, The Third Wave (Bantam, 1980), civilization has progressed in waves (three of them to date: the first wave was agricultural societies, the second was the industrial age, and the third is the information age). Within each wave, there have been several important sub-waves. In this post-industrial information age, we are now at the beginning of what many people feel will be an era of cloud computing.

As we noted earlier, within each wave there are sub-waves, and there have already been several within the information age, as Figure 1-1 shows. We started with mainframe computers and progressed to minicomputers, personal computers, and so forth, and we are now entering cloud computing.

Another view illustrates that cloud computing itself is a logical evolution of computing. Figure 1-2 displays cloud computing and cloud service providers (CSPs) as extensions of the Internet service provider (ISP) model.

In the beginning (ISP 1.0), ISPs quickly proliferated to provide access to the Internet for organizations and individuals. These early ISPs merely provided Internet connectivity for users and small businesses, often over dial-up telephone service. As access to the Internet became a commodity, ISPs consolidated and searched for other value-added services, such as providing access to email and to servers at their facilities (ISP 2.0). This version quickly led to specialized facilities for hosting organizations' (customers') servers, along with the infrastructure to support them and the applications running on them. These specialized facilities are known as collocation facilities (ISP 3.0). Such facilities are "a type of data center where multiple customers locate network, server, and storage gear and interconnect to a variety of telecommunications and other network service provider(s) with a minimum of cost and complexity." As collocation facilities proliferated and became commoditized, the next step in the evolution was the formation of application service providers (ASPs), which focused on a higher value-added service of providing specialized applications for organizations, and not just the computing infrastructure (ISP 4.0). ASPs typically owned and operated the software application(s) they provided, as well as the necessary infrastructure. Although ASPs might appear similar to the service delivery model of cloud computing that is referred to as software-as-a-service (SaaS), there is an important difference in how these services are provided, and in the business model.

FIGURE 1-2. Evolution of cloud computing

Although ASPs usually provided services to multiple customers (just as SaaS providers do today), they did so through dedicated infrastructures. That is, each customer had its own dedicated instance of an application, and that instance usually ran on a dedicated host or server. The important difference between SaaS providers and ASPs is that SaaS providers offer access to applications on a shared, not dedicated, infrastructure. Cloud computing (ISP 5.0) defines the SPI model, which is generally agreed upon as providing SaaS, platform-as-a-service (PaaS), and infrastructure-as-a-service (IaaS).

Evolution of sharing on the Internet


DEFINITION OF CLOUD COMPUTING:

Cloud computing is a model for enabling ubiquitous (anywhere and anytime), convenient, on-demand access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

Wikipedia: "Cloud computing is Web-based processing, whereby shared resources, software, and information are provided to computers and other devices (such as smart phones) on demand over the Internet."

Introduction to Cloud Computing:

Cloud computing takes the technology, services, and applications that are similar to those on the Internet and turns them into a self-service utility. The use of the word "cloud" makes reference to two essential concepts:

• Abstraction: Cloud computing abstracts the details of system implementation from users and developers. Applications run on physical systems that aren't specified, data is stored in locations that are unknown, administration of systems is outsourced to others, and access by users is ubiquitous.

• Virtualization: Cloud computing virtualizes systems by pooling and sharing resources. Systems and storage can be provisioned as needed from a centralized infrastructure, costs are assessed on a metered basis, multi-tenancy is enabled, and resources are scalable with agility.

THE NIST CLOUD MODEL:

Cloud Deployment Models

Cloud Service Models

Essential Characteristics of Cloud Computing


CLOUD DEPLOYMENT MODELS / TYPES OF CLOUD:

1. Public/External cloud.

2. Private/Internal cloud.

3. Hybrid/Integrated cloud.

4. Community/Vertical cloud.

1. Public/External cloud:

The cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services. A public cloud (also called an external cloud) is one based on the standard cloud computing model, in which a service provider makes resources, such as applications and storage, available to the general public over the Internet. Public cloud services may be free or offered on a pay-per-usage model.

A public cloud is hosted, operated, and managed by a third-party vendor from one or more data centers. In a public cloud, security management and day-to-day operations are relegated to the third-party vendor, who is responsible for the public cloud service offering.

Benefits

Cost Effective

Reliability

Flexibility

Location Independence

Utility Style Costing

High Scalability

Disadvantages

Low Security

Less customizable

Examples of public clouds include:

• Amazon Elastic Compute Cloud (EC2)

• IBM's Blue Cloud

• Google AppEngine

• Windows Azure Services Platform

2. Private/Internal cloud:

The cloud infrastructure is operated solely for a single organization. It may be managed by the organization or a third party, and may be on-premises or off-premises. A private cloud (also called an internal cloud) is a marketing term for a proprietary computing architecture that provides hosted services to a limited number of people behind a firewall.

Marketing media that uses the words "private cloud" is designed to appeal to an organization that needs or wants more control over its data than it can get by using a third-party hosted service such as Amazon's Elastic Compute Cloud (EC2) or Simple Storage Service (S3).

Benefits

Higher Security and Privacy

More Control

Cost and energy efficiency

Disadvantages

Restricted Area

Limited Scalability

3. Hybrid/Integrated cloud:

A hybrid cloud is a composition of at least one private cloud and at least one public cloud. A hybrid cloud is typically offered in one of two ways: a vendor has a private cloud and forms a partnership with a public cloud provider, or a public cloud provider forms a partnership with a vendor that provides private cloud platforms.

A hybrid cloud is a cloud computing environment in which an organization provides and manages some resources in-house and has others provided externally. For example, an organization might use a public cloud service, such as Amazon Simple Storage Service (Amazon S3), for archived data but continue to maintain in-house storage for operational customer data.

4. Community/Vertical clouds:

Community clouds are a deployment pattern suggested by NIST, where semi-private clouds are formed to meet the needs of a set of related stakeholders or constituents that have common requirements or interests. The cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, or compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises.

A community cloud may be private for its stakeholders, or may be a hybrid that integrates the respective private clouds of the members, yet enables them to share and collaborate across their clouds by exposing data or resources into the community cloud.

CLOUD SERVICE MODELS/ DELIVERY MODELS:

Infrastructure as a Service (IaaS)

Platform as a Service (PaaS)

Software as a Service (SaaS)

Infrastructure as a Service (IaaS):

This is the base layer of the cloud stack. It serves as the foundation for the other two layers, supporting their execution. The keyword behind this stack is virtualization: most large Infrastructure as a Service (IaaS) providers rely on virtual machine technology to deliver servers that can run applications.

The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources. The consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure, but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).

IaaS provides virtual machines, virtual storage, virtual infrastructure, and other hardware assets as resources that clients can provision. The IaaS service provider manages the entire infrastructure, while the client is responsible for all other aspects of the deployment. This can include the operating system, applications, and user interactions with the system.

Examples of IaaS service providers include:

• Amazon Elastic Compute Cloud (EC2)


• Eucalyptus

• GoGrid

• FlexiScale

• Linode

• RackSpace Cloud

• Terremark
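The provisioning capability described above is exposed through each provider's API. As a minimal sketch (assuming the boto3 SDK, configured AWS credentials, and a placeholder machine image ID), a consumer could provision a virtual server on Amazon EC2 as follows:

    import boto3

    # Launch one small virtual machine; the consumer chooses the OS image and
    # instance size but never touches the underlying physical infrastructure.
    ec2 = boto3.client("ec2", region_name="us-east-1")
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI (machine image) ID
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
    )
    print("Launched instance:", response["Instances"][0]["InstanceId"])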

Platform as a Service (PaaS):--

PaaS (Platform as a Service) is the middle layer of the cloud stack.
This layer is consumed mainly by developers.

The consumer does not manage or control the underlying cloud infrastructure including network,

servers, operating systems, or storage, but has control over the deployed applications and possibly

application hosting environment configurations.

PaaS provides virtual machines, operating systems, applications, services, development frameworks,

transactions, and control structures. The client can deploy its applications on the cloud infrastructure or

use applications that were programmed using languages and tools that are supported by the PaaS service

provider.

Examples of PaaS services are:

• Force.com

• Google AppEngine

• Windows Azure Platform.
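On a PaaS, the developer supplies only the application code and its configuration; the platform supplies and manages the runtime, servers, and scaling. Below is a minimal sketch of the kind of web application such a platform could host, assuming the Flask framework (the route and message are illustrative only):

    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        # The platform, not the developer, manages servers, OS patching, and scaling.
        return "Hello from an application deployed on a PaaS"

    if __name__ == "__main__":
        app.run()   # run locally; in production the platform hosts the app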

Software as a Service (SaaS):--

The capability provided to the consumer is to use the provider's applications running on a cloud

infrastructure.

The applications are accessible from various client devices through a thin client interface such as a

web browser (e.g., web-based email).

The consumer does not manage or control the underlying cloud infrastructure including network,

servers, operating systems, storage, or even individual application capabilities, with the possible

exception of limited user-specific application configuration settings.

Software as a Service (SaaS) is a cloud computing model that hosts various software

applications and makes them available to customers over the Internet or other network.

Examples of SaaS cloud service providers are:

• GoogleApps

• Oracle On Demand

• SalesForce.com

• SQL Azure


ESSENTIAL CHARACTERISTICS:--

On-demand self-service: A client can provision computer resources without the need for interaction

with cloud service provider personnel.

Broad network access: Access to resources in the cloud is available over the network using standard

methods in a manner that provides platform-independent access to clients of all types. This includes a

mixture of heterogeneous operating systems and thick and thin platforms such as laptops, mobile phones, and PDAs.

Resource pooling: The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand.

Rapid elasticity: Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out, and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.

Measured service: Cloud systems automatically control and optimize resource usage by leveraging a

metering capability at some level of abstraction appropriate to the type of service. Resource usage can

be monitored, controlled, and reported - providing transparency for both the provider and consumer of

the service.
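Metering turns resource usage into a pay-per-use charge. A toy illustration in Python, with made-up unit prices, of how a metered monthly bill might be computed:

    # Hypothetical metered usage for one month
    vm_hours = 730        # hours a virtual machine was running
    storage_gb = 50       # average gigabytes stored

    # Assumed unit prices, for illustration only
    price_per_vm_hour = 0.05
    price_per_gb_month = 0.02

    bill = vm_hours * price_per_vm_hour + storage_gb * price_per_gb_month
    print(f"Metered charge for the month: ${bill:.2f}")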

BENEFITS OF CLOUD COMPUTING:--

• Lower costs: Because cloud networks operate at higher efficiencies and with greater utilization, significant

cost reductions are often encountered.

• Ease of utilization: Depending upon the type of service being offered, you may find that you do not

require hardware or software licenses to implement your service.


• Quality of Service: The Quality of Service (QoS) is something that you can obtain under contract from

your vendor.

• Reliability: The scale of cloud computing networks and their ability to provide load balancing and failover

makes them highly reliable, often much more reliable than what you can achieve in a single organization.

• Outsourced IT management: A cloud computing deployment lets someone else manage your computing

infrastructure while you manage your business. In most instances, you achieve considerable reductions in IT

staffing costs.

• Simplified maintenance and upgrade: Because the system is centralized, you can easily apply patches

and upgrades. This means your users always have access to the latest software versions.

• Low Barrier to Entry: In particular, upfront capital expenditures are dramatically reduced. In cloud

computing, anyone can be a giant at any time.

RISKS IN CLOUD COMPUTING:--
Security
Compatibility
Availability
Compliance
Monitoring
Lock-in
Standardization

Service Oriented Architecture (SOA):

Service Oriented Architecture (SOA) describes a standard method for requesting services from

distributed components and managing the results. Because the clients requesting services, the components

providing the services, the protocols used to deliver messages, and the responses can vary widely, SOA

provides the translation and management layer in an architecture that removes the barrier for a client


obtaining desired services. With SOA, clients and components can be written in different languages and can

use multiple messaging protocols and networking protocols to communicate with one another. SOA

provides the standards that transport the messages and makes possible the infrastructure that supports them.

Introducing Service Oriented Architecture

Service Oriented Architecture (SOA) is a specification and a methodology for providing platform-

and language-independent services for use in distributed applications. A service is a repeatable task within a

business process, and a business task is a composition of services. Usually service providers and service

consumers do not pass messages directly to each other. Implementations of SOA employ middleware

software to play the role of transaction manager (or broker) and translator. That middleware can discover

and list available services, as well as potential service consumers, often in the form of a registry. Because SOA describes a distributed architecture, security and trust services are built directly into many of these products to protect communication.

The Universal Description Discovery and Integration (UDDI) protocol is the one most commonly

used to broadcast and discover available Web services, often passing data in the form of an Electronic

Business using eXtensible Markup Language (ebXML) documents. Service consumers find a Web service in

a broker registry and bind their service requests to that specific service; if the broker supports several Web

services, it can bind to any of the ones that are useful.

The most commonly used message-passing format is an Extensible Markup Language (XML)

document using Simple Object Access Protocol (SOAP), but many more are possible, including Web

Services Description Language (WSDL), Web Services Security (WSS), and Business Process Execution

Language for Web Services (WS-BPEL). WSDL is commonly used to describe the service interface, how to

bind information, and the nature of the component's service or endpoint. The Service Component Definition

Language (SCDL) is used to define the service component that performs the service, providing the

component service information that is not part of the Web service and that therefore wouldn't be part of

WSDL.

Figure 13.1 shows a protocol stack for SOA architecture and how those different protocols execute

the functions required in the Service Oriented Architecture. In the figure, the box labeled Other Services

could include Common Object Request Broker Architecture (CORBA), Representational State Transfer

(REST), Remote Procedure Calls (RPC), Distributed Common Object Model (DCOM), Jini, Data

Distribution Service (DDS), Windows Communication Foundation (WCF), and other technologies and

protocols. It is this flexibility and neutrality that makes SOA so singularly useful in designing complex

applications.

SOA provides the framework needed to allow clients of any type to engage in a request-response

mechanism with a service. The specification of the manner in which messages are passed in SOA, or in

which events are handled, is referred to as the contract. The term is meant to imply that the client engages

the service in a task that must be managed in a specified manner. In real systems, contracts may specifically


be stated with a Quality of Service parameter in a real paper contract. Typically, SOA requires the use of an

orchestrator or broker service to ensure that messages are correctly transacted. SOA makes no other

demands on either the client (consumer) or the components (provider) of the service; it is concerned only

with the interface or action boundary between the two. This is the earliest definition of SOA architecture.

FIGURE 13.1

A protocol stack for SOA showing the relationship of each protocol to its function

Components are often written to comply with the Service Component Architecture (SCA), a language and

technology-agnostic design specification that has wide, but not universal, industry support. SCA can use the

services of components that are written in the Business Process Execution Language (BPEL), Java,

C#/.NET, XML, or Cobol, and can apply to C++ and Fortran, as well as to the dynamic languages Python,

Ruby, PHP, and others. This allows components to be written in the easiest form that supports the business

process that the component is meant to service. By wrapping data from legacy clients written in languages

such as COBOL, SOA has greatly extended the life of many legacy applications.

FIGURE 13.2

SOA allows for different component and client construction.

The Enterprise Service Bus

In Figure 13.5, the three hypothetical applications mentioned earlier are shown interfaced with an authentication module through what has come to be called an Enterprise Service Bus (ESB). An ESB is not a physical bus in the sense of a network; rather, it is an architectural pattern composed of a set of network services that manage transactions in a Service Oriented Architecture. You may prefer to think of an ESB as a set of services that separate clients from components on a transactional basis; the use of the word


bus in the name indicates a high degree of connectivity or fabric quality to the system; that is, the system is

loosely coupled. Messages flow from client to component through the ESB, which manages these

transactions, even though the location of the services comprising the ESB may vary widely.

These typical features are found in ESBs, among others:

• Monitoring services aid in managing events.

• Process management services manage message transactions.

• Data repositories or registries store business logic and aid in governance of business processes.

• Data services pass messages between clients and services.

• Data abstraction services translate messages from one format to another, as required.

• Governance is a service that monitors compliance of your operations with governmental regulation, which

can vary from state to state and from country to country.

Defining SOA Communications

Message passing in SOA requires the use of two different protocol types: the data interchange format and

the network protocol that carries the message. A client (or customer) connected to an ESB communicates

over a network protocol such as HTTP, Representational State Transfer (REST), or Java Message Service

(JMS) to a component (or service). Messages are most often in the form of the eXtensible Markup Language

(XML) or in a variant such as the Simple Object Access Protocol (SOAP). SOAP is a messaging format

used in Web services that use XML as the message format while relying on Application layer protocols such

as HTTP and Remote Procedure Calls (RPC) for message negotiation and transmission.
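To make the SOAP/HTTP combination concrete, the sketch below posts a hand-built SOAP 1.1 envelope to a Web service using Python's requests library; the endpoint, namespace, and operation are hypothetical.

    import requests

    # A minimal SOAP 1.1 envelope carrying a single request message (XML payload).
    envelope = """<?xml version="1.0" encoding="UTF-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <GetQuote xmlns="http://example.com/stock">
          <Symbol>IBM</Symbol>
        </GetQuote>
      </soap:Body>
    </soap:Envelope>"""

    # SOAP rides on HTTP here; Content-Type and SOAPAction headers describe the call.
    response = requests.post(
        "http://example.com/StockService",                      # hypothetical endpoint
        data=envelope.encode("utf-8"),
        headers={
            "Content-Type": "text/xml; charset=utf-8",
            "SOAPAction": "http://example.com/stock/GetQuote",   # hypothetical action
        },
    )
    print(response.status_code)
    print(response.text)   # the service's XML (SOAP) response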

The software used to write clients and components can be written in Java, .NET, Web Service

Business Process Execution Language (WS-BPEL), or another form of executable code; the services that

they message can be written in the same or another language. What is required is the ability to transport and

translate a message into a form that both parties can understand. An ESB may require a variety of

combinations in order to support communications between a service consumer and a service provider. For

example, in WebSphere ESB, you might see the following combinations:

• XML/JMS (Java Message Service)

• SOAP/JMS

• SOAP/HTTP

• Text/JMS

• Bytes/JMS

The Web Service Description Language (WSDL) is one of the most commonly used XML protocols for

messaging in Web services, and it finds use in Service Oriented Architectures. Version 1.1 of WSDL is a

W3C standard, but the current version, WSDL 2.0 (formerly version 1.2), has yet to be ratified by the W3C. The significant difference between 1.1 and 2.0 is that version 2.0 has more support for RESTful (e.g., Web 2.0) applications, but much less support in the current set of software development tools.

transport for WSDL is SOAP, and the WSDL file usually contains both XML data and an XML schema.
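In practice, a client library can read the WSDL file and generate the calling interface automatically. A brief sketch, assuming the third-party zeep library and a hypothetical WSDL URL and operation:

    from zeep import Client

    # zeep downloads and parses the WSDL, then exposes its operations as methods.
    client = Client("http://example.com/StockService?wsdl")   # hypothetical WSDL URL
    result = client.service.GetQuote(Symbol="IBM")            # hypothetical operation
    print(result)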


REST offers some very different capabilities than SOAP. With REST, each URL is an object that you can query and manipulate. You use HTTP methods such as GET, POST, PUT, and DELETE to work with REST objects. SOAP uses a different approach to working with Web data, exposing Web objects through an API and transferring data using XML. The REST approach offers lightweight access using standard HTTP commands, is easier to implement than SOAP, and comes with less overhead. SOAP is often more precise and provides a more error-free consumption model, and it often comes with more sophisticated development tools. Many major Web services use REST, and some, especially newer ones, combine REST with SOAP to derive the benefits that both offer.
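A brief sketch of the REST style using Python's requests library; the URL and fields are hypothetical, but the four HTTP methods map directly onto create, read, update, and delete operations on a resource.

    import requests

    base = "https://api.example.com/customers"               # hypothetical REST resource

    requests.post(base, json={"name": "Acme Corp"})          # create a new customer
    r = requests.get(base + "/42")                            # read customer 42
    requests.put(base + "/42", json={"name": "Acme Inc"})     # update (replace) it
    requests.delete(base + "/42")                             # delete it

    print(r.status_code)                                      # e.g., 200 if the read succeeded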

Cloud Computing Reference Architecture by IBM

Roles:-The IBM Cloud Computing Reference Architecture defines three main roles:

Cloud Service Consumer, Cloud Service Provider and Cloud Service Creator.

Each role can be fulfilled by a single person, a group of people, or an organization.

The roles defined here intend to capture the common set of roles typically encountered in any

cloud computing environment.

Cloud Service Consumer

A cloud service consumer is an organization, a human being or an IT system that consumes (i.e.,

requests, uses and manages, e.g. changes quotas for users, changes CPU capacity assigned to a VM,

increases maximum number of seats for a web conferencing cloud service) service instances delivered

by a particular cloud service.

The service consumer may be billed for all (or a subset of) its interactions with cloud service and the

provisioned service instance(s).

Cloud Service Provider

The Cloud Service Provider has the responsibility of providing cloud services to Cloud Service

Consumers.

A cloud service provider is defined by the ownership of a common cloud management platform

(CCMP).

This ownership can be realized either by running a CCMP itself or by consuming one as a service.

Cloud Service Creator

The Cloud Service Creator is responsible for creating a cloud service, which can be run by a Cloud Service Provider and thereby exposed to Cloud Service Consumers.

Typically, Cloud Service Creators build their cloud services by leveraging functionality which

is exposed by a Cloud Service Provider.


Management functionality which is commonly needed by Cloud Service Creators is defined by the

CCMP architecture.

A Cloud Service Creator designs, implements and maintains runtime and management artifacts

specific to a cloud service.

1. What is cloud computing? What are the various driving forces for making use of cloud computing?

Cloud Computing is a technology that uses the internet and central remote servers to maintain data and

applications. Cloud computing allows consumers and businesses to use applications without installation and

access their personal files at any computer with internet access. Use of computing resources (hardware and

software) that are delivered as a service over a network (typically the Internet). The name comes from the

use of a cloud-shaped symbol as an abstraction for the complex infrastructure it contains in system

diagrams. Cloud computing entrusts remote services with a user's data, software and computation.

Reasons to Make the Switch to Cloud Computing

Saves time. Businesses that utilize software programs for their management needs are disadvantaged,

because of the time needed to get new programs to operate at functional levels. By turning to cloud

computing, you avoid these hassles. You simply need access to a computer with Internet to view the

information you need.

Fewer glitches. Applications serviced through cloud computing require fewer versions. Upgrades are

needed less frequently and are typically managed by data centers. Often, businesses experience problems

with software because they are not designed to be used with similar applications. Departments cannot share

data because they use different applications. Cloud computing enables users to integrate various types of


applications including management systems, word processors, and e-mail. The fewer glitches, the more
productivity expected from employees.

Going green. On average, individual personal computers are only used at approximately 10 to 20 percent

of their capacity. Similarly, computers are left idle for hours at a time, soaking up energy. Pooling resources

into a cloud consolidates energy use. Essentially, you save on costs by paying for what you use and

extending the life of your PC.

Fancy technology. Cloud computing offers customers more access to power. This power is not ordinarily

accessible through a standard PC. Applications now use virtual power. Users can even build virtual

assistants, which automate tasks such as ordering, managing dates, and offering reminders for upcoming

meetings.

Mobilization. From just about anywhere in the world, services that you need are available. Sales are

conducted over the phone and leads are tracked by using a cell phone. Cloud computing opens users up to a

whole new world of wireless devices, all of which can be used to access any applications. Companies are

taking sales productivity to a whole new level, while at the same time, providing their sales representatives

with high quality, professional devices to motivate them to do their jobs well.

Consumer trends. Business practices that are most successful are the ones that reflect consumer trends.

Currently, over 69 percent of Americans with internet access use a source of cloud computing. Whether it is

Web e-mail, data storage, or software, this number continues to grow. Consumers are looking to conduct

business with a modern approach.

Social media. Social networking is the wave of the future among entrepreneurs. Companies are using

social networking sites such as Twitter, Facebook, and LinkedIn to heighten their productivity levels. Blogs

are used to communicate with customers about improvements that need to be made within companies.

LinkedIn is a popular website used by business professionals for collaboration purposes.

Customize. All too often, companies purchase the latest software in hopes that it will improve their sales. Sometimes, programs do not quite meet the needs of a company. Some businesses require a personalized touch that ordinary software cannot provide. Cloud computing gives the user the opportunity to build

custom applications on a user-friendly interface. In a competitive world, your business needs to stand out

from the rest.

No need for hardware hiccups

IT staff cuts. When all the services you need are maintained by experts outside your business, there is

no need to hire new ones.

Low Barriers to Entry

A major benefit to cloud computing is the speed at which you can have an office up and running. Mordecai

notes that he could have a server functional for a new client within a few hours, although doing the research

work to assess a particular planner's needs and get them fully operating could take a week or two.


Improving Security

Obviously, the security of cloud computing is a major issue for anyone considering a switch.

"The data is secure because it is being accessed through encryption set up by people smarter than us," says

Dave Williams, CFP®, of Wealth Strategies Group in Cordova, Tenn. "Clients like accessing their data

through a cloud environment because they know it's secure, they know they can get access to it, and they

know we are able to bring together a lot of their records."

Increased Mobility

One of the major benefits in cloud computing for Lybarger is the instant mobility.

"I used to work with a large broker-dealer and when I was traveling, I sometimes would have difficulty

getting my computer connected to the Internet with all of the proprietary software on my laptop," he says.

"There were times when I was traveling when I wanted to be able to take care of a client's business on the

spot, but I wasn't able to. Now, I can do it in an instant."

Limitless Scalability

If you're looking to grow, the scalability of cloud computing could be a big selling point. With applications

software, you can buy only the licenses you need right now, and add more as needed. The same goes for

storage space, according to Lybarger.

Strong Compliance

Planners who are already in the cloud believe that their compliance program is stronger than it was before.

For Thornton, who is registered with the state of Georgia (not large enough to require registration with the

SEC), his business continuity plan includes an appendix that lists all the Web sites and his user names and

passwords so that, in his words, "If I get run over by a truck tomorrow, whoever comes in to take over can

access my business continuity plan and pretty much pick up where I left off."

2. What are the various barriers found in implementing cloud computing solutions?

There are several factors that you need to take into consideration before designing your own cloud-based

systems architecture, particularly if you're considering a multi-cloud/region architecture.

Cost - Before you architect your site/application and start launching servers, you should clearly

understand the SLA and pricing models associated with your cloud infrastructure(s). There are different

costs associated with both private and public clouds. For example, in AWS, data transferred between servers inside the same datacenter (Availability Zone) is free, whereas communication between servers in

different datacenters within the same cloud (EC2 Region) is cheaper than communication between servers in

different clouds or on-premise datacenters.

Complexity - Before you construct a highly customized hybrid cloud solution architecture, make sure you

properly understand the actual requirements of your application, SLA, etc. Simplified architectures will

always be easier to design and manage. A more complex solution should only be used if a simpler version

will not suffice. For example, a system architecture that is distributed across multiple clouds (regions)


introduces complexity at the architecture level and may require changes at the application level to be more

latency-tolerant and/or be able to communicate with a database that's migrated to a different cloud for

failover purposes.

Speed - The cloud gives you more flexibility to control the speed or latency of your site/application. For

example, you could launch different instance types based on your application's needs. For example, do you

need an instance type that has high memory or high CPU? From a geographic point of view, which cloud will

provide the lowest latency for your users? Is it necessary or cost effective to use a content distribution

network (CDN) or caching service? For user-intensive applications, the extra latency that results from cross-

cloud/region communication may not be acceptable.

Cloud Portability - Although it might be easier to use one of the cloud provider's tools or services, such

as a load balancing or database service, it's important to realize that if and when you need to move that

particular tier of your architecture to another cloud provider, you will need to modify your architecture

accordingly. Since ServerTemplates are cloud-agnostic, you can use them to build portable cloud

architectures.

Security - For MultiCloud system architectures, it's important to realize that cross-cloud/region

communication is performed over the public Internet and may introduce security concerns that will need to

be addressed using some type of data encryption or VPN technology.

Volatility: New cloud vendors appear almost on a daily basis. Some will be credible, well resourced, and

professional. Others, not so much. Some are adding cloud to their conventional IT services to stay in the

race, and others are new entrants that are, as they say, cloud natives, in which case they do not suffer the

pains and challenges of reengineering legacy business models and support processes to the cloud. How can a

CFO perform due diligence on a provider's viability if it's new to the market and backed by impatient startup capital that's expecting quick and positive returns? Are you concerned about the potentially complex nest of providers that sit behind your provider's cloud offering? That is, the cloud providers that store its

data, handle its transactions, or manage its network? In the event that your provider ceases to exist, can they

offer you protection in the form of data escrow?

The cloud ecosystem is far more complex than the on-premise world, even if it doesn't appear that way at

first blush. When you enter the cloud, have an exit strategy, and be sure you can execute it.