
EXPERIMENT-1

AIM: To study sharing of files and folders on a LAN.

REQUIREMENT:

• A computer running the Windows operating system (Windows XP, Vista, or 7)
• Files to be shared
• A working local area network (LAN) connection

THEORY:

File sharing is the public or private sharing of computer data or space in a network with various levels of access privilege. While files can easily be shared outside a network (for example, simply by handing or mailing someone your file on a diskette), the term file sharing almost always means sharing files in a network, even if in a small local area network. File sharing allows a number of people to use the same file or files by some combination of being able to read or view it, write to or modify it, copy it, or print it. Typically, a file sharing system has one or more administrators.

File sharing is the practice of distributing or providing access to digitally stored information, such as computer programs, multimedia (audio, images, and video), documents, or electronic books. It may be implemented in a variety of ways. Storage, transmission, and distribution models are common methods of file sharing that incorporate manual sharing using removable media, centralized file server installations on computer networks, World Wide Web-based hyperlinked documents, and the use of distributed peer-to-peer networking.

File sharing has been a feature of mainframe and multi-user computer systems for many years. With the advent of the Internet, a file transfer system called the File Transfer Protocol (FTP) has become widely used. FTP can be used to access (read and possibly write to) files shared among a particular set of users, who supply a password to gain access to files shared from an FTP server site.
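As an illustration of the FTP model just described, here is a minimal Java sketch that reads a file shared on an FTP server through the JDK's built-in ftp: URL handler. The host name, credentials, and file path are placeholders, not values taken from this experiment.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class FtpReadDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder server, credentials, and path -- replace with a real FTP share.
        URL url = new URL("ftp://user:password@ftp.example.com/shared/readme.txt");
        try (BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // print each line of the shared file
            }
        }
    }
}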

PROCEDURE:

WINDOWS XP

Follow these steps on any Windows XP computer to share file resources across a local network. Individual files, an entire folder, or an entire Windows drive can be shared with Windows XP network configuration.

1. Ensure Windows XP Simple File Sharing is enabled.

2. Open Windows Explorer (or My Computer).

3. Navigate to the file, folder, or drive to be shared, and click once on its icon to select it.


4. From either the File menu or the right-click menu, choose the "Sharing and Security..." option. A new Properties window appears. If this option did not appear on the menu, ensure that a valid file or folder was selected in the previous step.

5. Click the Network tab in the Properties window. If no Network tab appears in the window, but a Sharing tab appears instead, close this window and ensure the Simple File Sharing option was enabled in the earlier step before proceeding.

6. Click the Share This Folder option in the Properties window to enable sharing of this resource. This allows all other computers on the local network to access the file(s) but not modify them. To grant others permission to modify these files, check the "Allow Network Users to Change My Files" checkbox. Alternatively, if the Network tab is not available, make the required settings in the Sharing tab to configure the equivalent sharing: choose "Share this folder" to enable sharing.

7. Click Apply or OK to save these settings.

WINDOWS 7

Windows 7 provides an easy way to establish a network connection for mutual file and folder sharing. If you have been using the Microsoft OS platform and are acquainted with the procedure for sharing files and folders on previous Microsoft operating systems, then sharing files and folders in Windows 7 will be very easy for you; for novices it can still be a challenge. This section walks through the step-by-step procedure of establishing a network connection with other PCs, which lets you share files and folders instantly. The procedure is divided into two sections:

• Connecting via WiFi Router
• Connecting without WiFi Router (Ad-Hoc Connection)

Connecting via WiFi Router

To start off, all computers need to be connected to the same WiFi router. Once connected, click the Network icon in the system tray and click Open Network and Sharing Center.


Make certain that everyone in your vicinity with whom you need to share files is connected to the same wireless network connection. Under the wireless connection name, click Unidentified or Public Network. This brings up the Set Network Location dialog; select Home Network to join the HomeGroup.

If you want to configure advanced sharing settings, click View or change HomeGroup settings. This leads to the HomeGroup settings dialog, from which you can configure numerous sharing options. Click Save Changes.

Now your computer will be discoverable within the HomeGroup. All the computers that have joined this Home Network HomeGroup can view and share your computer's public files and folders. When any HomeGroup user tries to view or share files with you, he will be prompted for username and password credentials.


Apparently they are not authorized to access your public files and folders yet. To allow HomeGroup users to freely access your public folders, go to Network and Sharing Center and, from the left sidebar, click Change advanced sharing settings.


This leads to the Advanced sharing settings page. Scroll down to the Password protected sharing section, select Turn off password protected sharing, and click Save Changes.

Now, opening your public folder from the other end will not prompt that user for any account credentials. In layman's terms, your public folder becomes accessible to anyone who has joined the Home Network HomeGroup.


In Network and Sharing Center, click the WiFi network name to view the computers connected to the same HomeGroup. To view others' public files and folders, just double-click the desired computer name.

Without Router (Ad-Hoc Connection)

Windows 7, with an ad-hoc connection, offers the easiest way to connect with other computers when there is no wireless router around. It provides seamless connectivity with other computers that have WiFi support. In this procedure we will connect our two notebooks (PC1 and PC2) with each other, both having WiFi capability. To start off, first turn on your WiFi and, from the Network system tray button, open Network and Sharing Center.

When the Network and Sharing Center window appears, click Set up a new connection or network. In the dialog that comes up, scroll down, click Set up a wireless ad hoc network, and click Next.


In this step, enter an appropriate name for the ad-hoc network. If you want to create a security-enabled network with password protection, choose the desired option from Security type and enter a security key. Check Save this network to save this ad-hoc connection, then click Next to continue.

The ad-hoc connection is now successfully created.

To connect the second notebook, first turn on its WiFi and, from the system tray network menu, under the specified network name, click Connect.


Upon clicking, both notebooks (PC1 and PC2) will be connected. From the Network and Sharing Center of both PCs, change the network group to HomeGroup. If file sharing asks for username/password credentials, turn off Password protected sharing (as mentioned above). Open Network and Sharing Center and double-click the network connection; a Network window opens showing the connected computers. The connection is now established and you can freely access files and folders from each other's machines.


Apart from sharing public files and folders, you can also enable sharing of personal folders by going to their respective Properties and clicking Share under the Sharing tab.


EXPERIMENT-2

AIM: Write a program to implement client server application using TCP connection, i.e. Socket and ServerSocket class (for only one client).

REQUIREMENTS:

• A computer running the Windows operating system (Windows XP, Vista, or 7)
• A working local area network (LAN) connection

THEORY:

When we write our program, we are communicating with the application layer; we do not need to worry about the details of TCP and UDP. To work with these protocols, we can use the classes provided in the java.net package. To decide which class to use, it is important to understand the difference between the two types of service.

TCP:

TCP is a reliable service protocol. A connection is established before any data can be sent, and the data is guaranteed to be received at the receiver's end.

Example:

A telephone call can be compared to TCP: the call (connection) is established first, and then the voice (data) is transferred.

Services that use TCP include HTTP, FTP, and TELNET.

TCP is used in applications where the accuracy and the order (sequence) of the data matter. For example, HTTP is used to read data from a URL; the data must be received in the order in which it was sent, or we may end up with a jumbled HTML file, a corrupted zip file, or otherwise invalid data.

UDP:

UDP, in contrast, is an unreliable service protocol. It does not establish a connection; it sends independent packets of data called datagrams. Sending datagrams is much like sending letters through a postal service.

In some applications the strictness of a reliable service is not needed. The added reliability may increase overhead or may even defeat the purpose of the service altogether.

Example:

Time-of-day service: even when a packet is lost, it makes no sense to resend it, because by the time it was resent it would contain the wrong information.
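For contrast with the TCP programs used later in this experiment, the following minimal sketch sends and receives a single datagram with java.net.DatagramSocket. The port number and message are illustrative assumptions only.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class UdpDemo {
    public static void main(String[] args) throws Exception {
        // Receiver bound to an illustrative port (assumed free on this host).
        try (DatagramSocket receiver = new DatagramSocket(9876);
             DatagramSocket sender = new DatagramSocket()) {

            byte[] msg = "time please".getBytes();
            // Fire and forget: no connection is established and delivery is not guaranteed.
            sender.send(new DatagramPacket(msg, msg.length, InetAddress.getLoopbackAddress(), 9876));

            byte[] buf = new byte[256];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            receiver.receive(packet); // blocks until one datagram arrives
            System.out.println(new String(packet.getData(), 0, packet.getLength()));
        }
    }
}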

ServerSocket:

The ServerSocket class implements a server socket. This class waits for requests to come in over the network; when a request arrives, it performs some specific operation and returns the result to the requester.


Socket:

A Socket is one endpoint of a two-way communication link between two programs running on the network. A socket is bound to a port number so that the TCP layer can identify the application to which the data is destined to be sent.

Working:

Normally a server runs on a specific computer and has a socket bound to a specific port. Server keeps on listening to that specific port for any connection requests.

On the client side:

The client knows the address or hostname of the machine on which the server is running and the port number on which the server is listening. To make a request, the client connects to the server on that port. To do so, the client also needs an identification of its own, so it binds itself to a local port number, usually assigned by the system.

On the server side:

If everything goes well, the server accepts the connection and creates a new socket bound to the same port, while it keeps listening on the original socket for new connections and serves the requests of the currently connected client.

In this way a client and server can communicate by writing to or reading from their sockets. An endpoint, or socket, is a combination of an InetAddress and a port number, so any TCP connection has two endpoints. For this purpose Java provides the ServerSocket and Socket classes in the java.net package.


Client side:

import java.net.*;
import java.io.*;

public class Fileclient {
    public static void main(String[] args) throws IOException {
        int filesize = 6022386; // filesize temporarily hardcoded

        long start = System.currentTimeMillis();
        int bytesRead;
        int current = 0;

        // localhost for testing
        Socket sock = new Socket("127.0.0.1", 13267);
        System.out.println("Connecting...");

        // receive file
        byte[] mybytearray = new byte[filesize];
        InputStream is = sock.getInputStream();
        FileOutputStream fos = new FileOutputStream("amita1.txt");
        BufferedOutputStream bos = new BufferedOutputStream(fos);
        bytesRead = is.read(mybytearray, 0, mybytearray.length);
        current = bytesRead;

        // thanks to A. Cádiz for the bug fix
        do {
            bytesRead = is.read(mybytearray, current, (mybytearray.length - current));
            if (bytesRead >= 0) current += bytesRead;
        } while (bytesRead > -1);

        bos.write(mybytearray, 0, current);
        bos.flush();
        long end = System.currentTimeMillis();
        System.out.println(end - start);
        bos.close();
        sock.close();
    }
}


Server side:

import java.net.*;
import java.io.*;

public class Fileserver {
    public static void main(String[] args) throws IOException {
        // create server socket
        ServerSocket servsock = new ServerSocket(13267);
        while (true) {
            System.out.println("Waiting...");

            Socket sock = servsock.accept();
            System.out.println("Accepted connection : " + sock);

            // send file
            File myFile = new File("Amit.txt");
            byte[] mybytearray = new byte[(int) myFile.length()];
            FileInputStream fis = new FileInputStream(myFile);
            BufferedInputStream bis = new BufferedInputStream(fis);
            bis.read(mybytearray, 0, mybytearray.length);
            OutputStream os = sock.getOutputStream();
            System.out.println("Sending...");
            os.write(mybytearray, 0, mybytearray.length);
            os.flush();
            sock.close();
        }
    }
}


OUTPUT:

Server Side Console Output:-

Client side Console Output:-


EXPERIMENT-3

AIM: Write a program to implement multiple client single server model using TCP connection, i.e. Socket and ServerSocket class (for multiple clients).

REQUIREMENTS

• A computer running the Windows operating system (Windows XP, Vista, or 7)
• A working local area network (LAN) connection

THEORY:

Client/server systems provide access to a central application from one or more remote clients. For example, a server application may perform some measurement or automation function (such as test cell control), and client applications may provide operators with a user interface for monitoring the state or progress of that function. In multi-client applications, clients may connect and disconnect at random times. Often clients and servers communicate over a computer network on separate hardware, but both client and server may reside in the same system. A server machine is a host that runs one or more server programs which share their resources with clients. A client does not share any of its resources, but requests a server's content or service function; clients therefore initiate communication sessions with servers, which await incoming requests. In order to support multiple clients, the server software should be able to dynamically accept and service any number of incoming connections. A socket is an endpoint of a two-way communication link between two computers. A socket is bound to a port number so that the transport layer can identify the application to which the data is destined to be sent.

WORKING:

Normally a server runs on a specific computer and has a socket bound to a specific port. Server keeps on listening to that specific port for any connection requests.

On the client side:

The client knows the address or hostname of the machine on which the server is running and the port number on which the server is listening. To make a request, the client connects to the server on that port. To do so, the client also needs an identification of its own, so it binds itself to a local port number, usually assigned by the system.

On the server side:

If everything goes well, the server accepts the connection and creates a new socket bound to the same port, while it keeps listening on the original socket for new connections and serves the requests of the currently connected client. In this way a client and server can communicate by writing to or reading from their sockets. An endpoint, or socket, is a combination of an InetAddress and a port number, so any TCP connection has two endpoints. For this purpose Java provides the ServerSocket and Socket classes in the java.net package.


Client side:

import java.net.*;
import java.io.*;

public class Fileclient {
    public static void main(String[] args) throws IOException {
        int filesize = 6022386; // filesize temporarily hardcoded

        long start = System.currentTimeMillis();
        int bytesRead;
        int current = 0;

        // localhost for testing
        Socket sock = new Socket("127.0.0.1", 13267);
        System.out.println("Connecting...");

        // receive file
        byte[] mybytearray = new byte[filesize];
        InputStream is = sock.getInputStream();
        FileOutputStream fos = new FileOutputStream("amita1.txt");
        BufferedOutputStream bos = new BufferedOutputStream(fos);
        bytesRead = is.read(mybytearray, 0, mybytearray.length);
        current = bytesRead;

        // thanks to A. Cádiz for the bug fix
        do {
            bytesRead = is.read(mybytearray, current, (mybytearray.length - current));
            if (bytesRead >= 0) current += bytesRead;
        } while (bytesRead > -1);

        bos.write(mybytearray, 0, current);
        bos.flush();
        long end = System.currentTimeMillis();
        System.out.println(end - start);
        bos.close();
        sock.close();
    }
}


Server side:

import java.net.*;
import java.io.*;

public class Fileserver1 {

    public void Startserver() {
        try {
            ServerSocket ss = new ServerSocket(13267);
            boolean flag = true;
            while (flag) {
                // System.out.println("Waiting For Client's Request");
                final Socket s = ss.accept();
                Thread t = new Thread() {
                    public void run() {
                        logic(s);
                    }
                };
                t.start();
            }
            ss.close();
        } catch (Exception e) {
            // System.out.println("Unable To Connect");
        }
    }

    public void logic(Socket s) {
        try {
            File myFile = new File("Amit.txt");
            byte[] mybytearray = new byte[(int) myFile.length()];
            FileInputStream fis = new FileInputStream(myFile);
            BufferedInputStream bis = new BufferedInputStream(fis);
            bis.read(mybytearray, 0, mybytearray.length);
            OutputStream os = s.getOutputStream();
            System.out.println("Sending...");
            os.write(mybytearray, 0, mybytearray.length);
            os.flush();
            s.close();
        } catch (Exception e) {
            // System.out.println("Unable to Connect");
        }
    }

    public static void main(String args[]) {
        Fileserver1 server = new Fileserver1();
        server.Startserver();
    }
}


OUTPUT:

Server Side Console Output:-

Client side Console Output:-


EXPERIMENT-4

AIM :

To study the concept of RAID and its implementation, and to solve the given case study on the basis of this study.

THEORY:

Introduction to RAID

Storage systems preserve data that has been processed and data that is queued up to be processed, and they have become an integral part of the computer system. Storage systems have advanced just as other computer components have over the years. The RAID storage system was introduced over 15 years ago and has provided an excellent mass storage solution for enterprise systems. Let us look a little more at the history of the RAID concept and how it works.

History of RAID

RAID is an acronym for Redundant Array of Inexpensive Disks. The concept was conceived at the University of California, Berkeley, and IBM holds the intellectual patent on RAID level 5. The University of California, Berkeley researchers David A. Patterson, Garth Gibson, and Randy H. Katz worked to produce working prototypes of five levels of RAID storage systems. The result of this research has formed the basis of today's complex RAID storage systems. In 1987, the University of California, Berkeley published an article entitled A Case for Redundant Arrays of Inexpensive Disks (RAID). This article described various types of disk arrays, referred to by the acronym RAID. The basic idea of RAID was to combine multiple small, independent disk drives into an array of disk drives which yields performance exceeding that of a Single Large Expensive Drive (SLED). Additionally, this array of drives appears to the computer as a single logical storage unit or drive.

The Mean Time Between Failures (MTBF) of the array will be equal to the MTBF of an individual drive divided by the number of drives in the array; for example, an array of 100 drives, each with an MTBF of 500,000 hours, has an array MTBF of only 5,000 hours. Because of this, the MTBF of an array of drives would be too low for many application requirements. However, disk arrays can be made fault-tolerant by redundantly storing information in various ways.

Five types of array architectures, RAID-1 through RAID-5, were defined by the Berkeley paper, each providing disk fault-tolerance and each offering different trade-offs in features and performance. In addition to these five redundant array architectures, it has become popular to refer to a non-redundant array of disk drives as a RAID-0 array.

Today some of the original RAID levels (namely level 2 and 3) are only used in very specialized systems (and in fact not even supported by the Linux Software RAID drivers). Another level, "linear" has emerged, and especially RAID level 0 is often combined with RAID level 1.


Some of the design goals of the RAID storage system were to provide performance improvements, storage reliability and recovery, and scalability. The redundancy concept employed in the RAID system is unique and provides a method to recover if one drive should fail within the system. In fact, today's RAID controller cards have the ability to continue reading and writing data even if one drive is 'off-line.' So how does the RAID controller card manage the individual disks and provide fault tolerance?

RAID Overview

The heart of the RAID storage system is the controller card. This card is usually a SCSI hard disk controller card (however, IDE RAID controller cards are becoming quite common). The tasks of the controller card are to:

• Manage individual hard disk drives
• Provide a logical array configuration
• Perform redundant or fault-tolerant operations

Management of Individual Drives

The RAID controller translates and communicates directly with the hard disk drives. Some controller cards have additional utilities to work with the disk drives specifically, such as a surface scan function and a drive format utility. In the case of SCSI-based cards, these controllers provide additional options to manage the drives.

Logical Array Configuration

The configuration of the logical array stripes the data across all of the physical drives. This provides balanced data throughput to all of the drives instead of concentrating the load on a single drive.

Implementations:

There are two types of RAID implementation, hardware and software. Both have their merits and demerits and are discussed in this section.

Software RAID:

Software RAID uses host-based software to provide RAID functions. It is implemented at the operating system level and does not use a dedicated hardware controller to manage the RAID array.

Software RAID implementations offer cost and simplicity benefits when compared with hardware RAID. However, they have the following limitations.

Performance:

Software RAID affects overall system performance. This is due to the additional CPU cycles required to perform RAID calculations. The performance impact is more pronounced for complex implementations of RAID.


Supported Features:

Software RAID does not support all RAID levels.

Operating System compatibility:

Software RAID is tied to the host operating system; hence, upgrades to the software RAID or to the operating system should be validated for compatibility. This leads to inflexibility in the data processing environment.

Hardware RAID:

In hardware RAID implementations, a specialized hardware controller is implemented either on the host or on the array. These implementations vary in the way the storage array interacts with the host.

Controller card RAID is host based hardware RAID implementation in which a specialized RAID controller is installed in the host and HDDs are connected to it. The RAID controller interacts with the hard disks using a PCI bus. Manufacturers also integrate RAID controllers on motherboards. This integration reduces the overall cost of the system, but does not provide the flexibility required for high end storage systems.

The external RAID controller is an array-based hardware RAID. It acts as an interface between the host and the disks, presenting storage volumes to the host and managing the drives using the supported protocol. Key functions of RAID controllers are:

Management and control of disk aggregations

Translation of I/O requests between logical disks and physical disks

Data regeneration in the event of disk failures

Firmware/driver-based RAID

A RAID implemented at the level of an operating system is not always compatible with the system's boot process, and hardware RAID controllers are expensive and proprietary. To fill this gap, cheap "RAID controllers" were introduced that do not contain a dedicated RAID controller chip, but simply a standard drive controller chip with special firmware and drivers; during early stage bootup, the RAID is implemented by the firmware, and once the operating system has been more completely loaded, then the drivers take over control. Consequently, such controllers may not work when there is no driver support available for the host operating system.

Hot spares

Both hardware and software RAIDs with redundancy may support the use of hot spare drives, a drive physically installed in the array which is inactive until an active drive fails, when the system automatically replaces the failed drive with the spare, rebuilding the array with the spare drive included. This reduces the mean time to recovery (MTTR), but does not completely eliminate it.

RAID Array Components :

A RAID array is an enclosure that contains a number of HDDs and the supporting hardware and software to implement RAID. Its sub-enclosures, or physical arrays, hold a fixed number of HDDs and may also include other supporting hardware, such as power supplies.


A subset of disks within a RAID array can be grouped to form logical associations called logical arrays, also known as a RAID set or a RAID group. Logical arrays are composed of logical volumes (LVs). The operating system recognizes the LVs as if they were physical HDDs managed by the RAID controller.

The number of HDDs in a logical array depends on the RAID level used. Configurations could have a logical array spanning multiple physical arrays, or a physical array containing multiple logical arrays.

RAID Levels :

RAID levels are defined on the basis of striping, mirroring, and parity techniques. These techniques determine the data availability and performance characteristics of an array.

1. Striping: A RAID set is a group of disks. Within each disk, a predefined number of contiguously addressable disk blocks are defined as strips. The set of aligned strips that spans across all the disks within the RAID set is called a stripe.

2. Mirroring: Mirroring is a technique whereby data is stored on two different HDDs, yielding two copies of the data. In the event of one HDD failure, the data is intact on the surviving HDD, and the controller continues to service the host's data requests from the surviving disk of the mirrored pair.

3. Parity: Parity is a method of protecting striped data from HDD failure without the cost of mirroring. An additional HDD is added to the stripe width to hold parity, a mathematical construct that allows re-creation of the missing data. Parity is a redundancy check that ensures full protection of data without maintaining a full set of duplicate data.
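To make the parity idea concrete, here is a small Java sketch (an illustration, not part of the original experiment) that computes a parity block as the XOR of the data blocks and then re-creates a 'failed' block from the surviving blocks and the parity.

public class ParityDemo {
    // XOR the given blocks byte by byte; all blocks are assumed to have equal length.
    static byte[] xor(byte[]... blocks) {
        byte[] result = new byte[blocks[0].length];
        for (byte[] block : blocks) {
            for (int i = 0; i < result.length; i++) {
                result[i] ^= block[i];
            }
        }
        return result;
    }

    public static void main(String[] args) {
        byte[] d0 = "RAID".getBytes();
        byte[] d1 = "DATA".getBytes();
        byte[] d2 = "BLKS".getBytes();

        // Parity strip written to the additional disk.
        byte[] parity = xor(d0, d1, d2);

        // Suppose the disk holding d1 fails: XOR of the survivors and the parity rebuilds it.
        byte[] rebuilt = xor(d0, d2, parity);
        System.out.println(new String(rebuilt)); // prints "DATA"
    }
}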


Comparison between different RAID Levels:

Each RAID level is listed below with its minimum number of disks, storage efficiency (%), cost, read performance, write performance, and write penalty.

RAID 0: min. disks 2; storage efficiency 100%; cost low; read performance very good for both random and sequential reads; write performance very good; no write penalty.

RAID 1: min. disks 2; storage efficiency 50%; cost high; read performance good (better than a single disk); write performance good (slower than a single disk, as every write must be committed to all disks); write penalty moderate.

RAID 3: min. disks 3; storage efficiency (n-1)*100/n (n = number of disks); cost moderate; read performance good for random reads and very good for sequential reads; write performance good for random writes and very good for sequential writes; write penalty high.

RAID 4: min. disks 3; storage efficiency (n-1)*100/n (n = number of disks); cost moderate; read performance very good for random reads and good to very good for sequential reads; write performance good to very good for sequential writes; write penalty high.

RAID 5: min. disks 3; storage efficiency (n-1)*100/n (n = number of disks); cost moderate; read performance very good for random reads and good for sequential reads; write performance fair for random writes (slower due to parity overhead) and fair to good for sequential writes; write penalty high.

RAID 6: min. disks 4; storage efficiency (n-2)*100/n (n = number of disks); cost moderate but more than RAID 5; read performance very good for random reads and good for sequential reads; write performance good for small, random writes (has write penalty); write penalty very high.

RAID 0+1 and 1+0: min. disks 4; storage efficiency 50%; cost high; read performance very good; write performance good; write penalty moderate.


CASE STUDY

Acme Telecom provides mobile wireless service across the United States and has about 5,000 employees worldwide. This Chicago-based company has 7 regional offices across the country. Although Acme is doing well financially, it continues to feel competitive pressure. As a result, the company needs to ensure that its IT infrastructure takes advantage of fault-tolerance features.

Current situation / Issues:

The company uses a number of different applications for communication, management, and accounting. All applications are hosted on individual servers configured with RAID 0.

All financial activities are managed and tracked by a single accounting application. It is very important for the accounting data to be highly available.

The application performs around 15% write operations and remaining 85% are reads.

The accounting data is currently stored on a 5-disk RAID 0 set. Each disk has a capacity of 200 GB, and the total size of the files is 500 GB.

How would you suggest that the company restructure its environment?

RAID level to use:

RAID 0+1 (also called RAID 01) is a RAID level used for both replicating and sharing data among disks. The minimum number of disks required to implement this level of RAID is 3 (first, even-numbered chunks on all disks are built, as in RAID 0, and then every odd chunk number is mirrored with the next higher even neighbour), but it is more common to use a minimum of 4 disks.

The difference between RAID 0+1 and RAID 1+0 is the location of each RAID layer: RAID 0+1 is a mirror of stripes. Some manufacturers (e.g. Digital/Compaq/HP) used RAID 0+1 to describe striped mirrors; consequently this usage is now deprecated, and RAID 0+1 and RAID 1+0 are replaced by RAID 10, whose definition correctly describes the safe layout, i.e. striped mirrors. The usable capacity of a RAID 0+1 array is (N/2) × Smin, where N is the total number of drives (must be even) in the array and Smin is the capacity of the smallest drive in the array.

Advantages and Disadvantages of RAID 0+1:

For example, the maximum storage space of a six-drive RAID 0+1 array built from 120 GB drives is 360 GB, spread across two arrays. The advantage is that when a hard drive fails in one of the level 0 arrays, the missing data can be transferred from the other array. However, adding an extra hard drive to one stripe requires you to add an additional hard drive to the other stripe to balance out storage among the arrays.

It is not as robust as RAID 10 and cannot tolerate two simultaneous disk failures. When one disk fails, the RAID 0 array that it is in will fail also; the RAID 1 array will continue to work on the remaining RAID 0 array. If a disk from that array fails before the first failing disk has been replaced, the data will be lost. That is, once a single disk fails, each of the drives in the other stripe is a single point of failure. Also, once the single failed drive is replaced, all the disks in the array must participate in the rebuild in order to restore its data.

The exception to this is if all the disks are hooked up to the same RAID controller in which case the controller can do the same error recovery as RAID 10 as it can still access the functional disks in each RAID 0 set. Comparing the diagrams between RAID 0+1 and RAID 10, the only difference in this case is that the disks are swapped around. If the controller has a direct link to each disk it can do the same. In this single case there is no difference between RAID 0+1 and RAID 10.

Additionally, bit error correction technologies have not kept up with rapidly rising drive capacities, resulting in higher risks of encountering media errors. In the case where a failed drive is not replaced in a RAID 0+1 configuration, a single uncorrectable media error occurring on the mirrored hard drive would result in data loss.

Given these increasing risks with RAID 0+1, many business- and mission-critical enterprise environments are beginning to evaluate more fault-tolerant RAID setups: RAID 10, and formats such as RAID 5 and RAID 6 that provide a smaller improvement than RAID 10 by adding underlying disk parity but reduce overall cost. Among the more promising are hybrid approaches such as RAID 51 (mirroring above single parity) or RAID 61 (mirroring above dual parity), although neither of these delivers the reliability of the more expensive option of using RAID 10 with three-way mirrors.
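As a rough check on the recommendation for the case study above, the following small Java sketch (an illustration, not part of the original write-up) applies the (N/2) × Smin usable-capacity rule to Acme's figures of 200 GB disks and 500 GB of accounting data.

public class Raid01Sizing {
    public static void main(String[] args) {
        int diskSizeGb = 200; // capacity of each disk in the case study
        int dataGb = 500;     // total size of the accounting files

        // Disks needed in one stripe to hold the data, then doubled for the mirror.
        int disksPerStripe = (int) Math.ceil((double) dataGb / diskSizeGb);
        int totalDisks = disksPerStripe * 2;

        // Usable capacity of the RAID 0+1 set: (N/2) * Smin.
        int usableGb = (totalDisks / 2) * diskSizeGb;

        System.out.println("Disks per stripe: " + disksPerStripe);    // 3
        System.out.println("Total disks     : " + totalDisks);        // 6
        System.out.println("Usable capacity : " + usableGb + " GB");  // 600 GB, enough for 500 GB
    }
}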


EXPERIMENT-5

AIM:

A large company is considering a storage infrastructure—one that is scalable and provides high availability. More importantly, the company also needs performance for its mission-critical applications. Which storage topology would you recommend (SAN, NAS, IP SAN) and why?

THEORY:

Introduction:

IP-SAN is a convergence of technologies used in SAN and NAS. IP-SAN provides block-level communication across a local or wide area network (LAN or WAN), resulting in greater consolidation and availability of data.

Figure (A) illustrates the co-existence of FC and IP storage technologies in an organization where mission-critical applications are serviced through FC, and business-critical applications and remote office applications make use of IP SAN. Disaster recovery solutions can also be implemented using both of these technologies.

Two primary protocols that leverage IP as the transport mechanism are

1. iSCSI
2. Fibre Channel over IP (FCIP)


iSCSI: iSCSI is the host-based encapsulation of SCSI I/O over IP using an Ethernet NIC card or an iSCSI HBA in the host. As illustrated in Figure 5-2 (a), IP traffic is routed over a network either to a gateway device that extracts the SCSI I/O from the IP packets or to an iSCSI storage array. The gateway can then send the SCSI I/O to an FC-based external storage array, whereas an iSCSI storage array can handle the extraction and I/O natively.

FCIP:FCIP uses a pair of bridges (FCIP gateways) communicating over TCP/IP as the transport protocol. FCIP is used to extend FC networks over distances and/or an existing IP-based infrastructure, as illustrated in Figure 5-2 (b).

Because IP SANs are based on standard Ethernet protocols, the concepts, security mechanisms, and management tools are familiar to administrators. This has enabled the rapid adoption of IP SAN in organizations.


Advantages of IP SAN over SAN and NAS:

Traditional SAN environments allow block I/O over Fiber Channel, whereas NAS environments allow file I/O over IP-based networks. Organizations need the performance and scalability of SAN plus the ease of use and lower TCO of NAS solutions. The emergence of IP technology that supports block I/O over IP has positioned IP for storage solutions.

IP offers easier management and better interoperability. When block I/O is run over IP, the existing network infrastructure can be leveraged, which is more economical than investing in new SAN hardware and software. Many long-distance, disaster recovery (DR) solutions are already leveraging IP-based networks.

Many robust and mature security options are now available for IP networks. With the advent of block storage technology that leverages IP networks (the result is often referred to as IP SAN), organizations can extend the geographical reach of their storage infrastructure. IP SAN technologies can be used in a variety of situations.


EXPERIMENT NO-6

AIM: To study the concept of virtualization and also discuss the types of virtualization.

THEORY:

Virtualization allows multiple operating system instances to run concurrently on a single computer; it is a means of separating hardware from a single operating system. Each "guest" OS is managed by a Virtual Machine Monitor (VMM), also known as a hypervisor. Because the virtualization system sits between the guest and the hardware, it can control the guests' use of CPU, memory, and storage, even allowing a guest OS to migrate from one machine to another. By using specially designed software, an administrator can convert one physical server into multiple virtual machines. Each virtual server acts like a unique physical device, capable of running its own operating system (OS).

Virtualization is used for:-

• Consolidation

• Redundancy

• Segregation

• Legacy Hardware

• Migration

CONSOLIDATION

It's common practice to dedicate each server to a single application. If several applications only use a small amount of processing power, the network administrator can combine several machines into one server running multiple virtual environments. For companies that have hundreds or thousands of servers, the need for physical space can decrease significantly.

This saves on:

• Cost: around $10,000 in maintenance cost per machine

• Space: fewer servers, less space needed

• Energy: savings of up to 80%

• Environment: reduced CO2 emissions due to the decrease in the number of servers

REDUNDANCY

Server virtualization provides a way for companies to practice redundancy without purchasing additional hardware. Redundancy refers to running the same application on multiple servers. It's a safety measure: if a server fails for any reason, another server running the same application can take its place. This minimizes any interruption in service.


It wouldn't make sense to build two virtual servers performing the same application on the same physical server.

If the physical server were to crash, both virtual servers would also fail. In most cases, network administrators will create redundant virtual servers on different physical machines.

SEGREGATION

Virtual servers offer programmers isolated, independent systems in which they can test new applications or operating systems. Rather than buying a dedicated physical machine, the network administrator can create a virtual server on an existing machine. Because each virtual server is independent in relation to all the other servers, programmers can run software without worrying about affecting other applications.

LEGACY HARDWARE

Server hardware will eventually become obsolete, and switching from one system to another can be difficult. In order to continue offering the services provided by these outdated systems -- sometimes called legacy systems -- a network administrator could create a virtual version of the hardware on modern servers.

From an application perspective, nothing has changed. The programs perform as if they were still running on the old hardware. This can give the company time to transition to new processes without worrying about hardware failures, particularly if the company that produced the legacy hardware no longer exists and can't fix broken equipment.

MIGRATION

An emerging trend in server virtualization is called migration. Migration refers to moving a server environment from one place to another. With the right hardware and software, it's possible to move a virtual server from one physical machine in a network to another. Originally, this was possible only if both physical machines ran on the same hardware, operating system and processor.

It's possible now to migrate virtual servers from one physical machine to another even if both machines have different processors, but only if the processors come from the same manufacturer.

Types of Virtualization:-

• Full Virtualization

• Para-Virtualization

• OS-level Virtualization

Full Virtualization

Full virtualization uses a special kind of software called a hypervisor. The hypervisor interacts directly with the physical server's CPU and disk space. It serves as a platform for the virtual servers' operating systems.


The hypervisor keeps each virtual server completely independent and unaware of the other virtual servers running on the physical machine. Each guest server runs on its own OS -- you can even have one guest running on Linux and another on Windows.

Para-Virtualization

The para-virtualization approach is a little different from the full-virtualization technique: the guest servers in a para-virtualization system are aware of one another. A para-virtualization hypervisor doesn't need as much processing power to manage the guest operating systems, because each OS is already aware of the demands the other operating systems are placing on the physical server. The entire system works together as a cohesive unit.


OS-level Virtualization

An OS-level virtualization approach doesn't use a hypervisor at all. Instead, the virtualization capability is part of the host OS, which performs all the functions of a fully virtualized hypervisor. The biggest limitation of this approach is that all the guest servers must run the same OS. Each virtual server remains independent from all the others, but you can't mix and match operating systems among them. Because all the guest operating systems must be the same, this is called a homogeneous environment.

Forms of Virtualization:-


Memory Virtualization

Virtual memory makes an application appear as if it has its own contiguous logical memory independent of the existing physical memory resources. Since the beginning of the computer industry, memory has been and continues to be an expensive component of a host. It determines both the size and the number of applications that can run on a host. With technological advancements, memory technology has changed and the cost of memory has decreased. Virtual memory managers (VMMs) have evolved, enabling multiple applications to be hosted and processed simultaneously.

In a virtual memory implementation, a memory address space is divided into contiguous blocks of fixed-size pages. A process known as paging saves inactive memory pages onto the disk and brings them back to physical memory when required. This enables efficient use of available physical memory among different processes. The space used by VMMs on the disk is known as a swap file. A swap file (also known as a page file or swap space) is a portion of the hard disk that functions like physical memory (RAM) to the operating system. The operating system typically moves the least used data into the swap file so that RAM will be available for processes that are more active. Because the space allocated to the swap file is on the hard disk (which is slower than physical memory), access to this file is slower.
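Purely as an illustration of the paging behaviour described above (not something this experiment asks for), the following toy Java sketch keeps a fixed number of 'physical frames' in memory and evicts the least recently used page to a simulated swap area when a new page has to be brought in.

import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class PagingDemo {
    static final int PHYSICAL_FRAMES = 3; // tiny "RAM" for the demo
    static final Map<Integer, String> swap = new HashMap<>(); // simulated swap file

    // Access-ordered map: the eldest entry is the least recently used page.
    static final Map<Integer, String> ram =
            new LinkedHashMap<Integer, String>(PHYSICAL_FRAMES, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<Integer, String> eldest) {
                    if (size() > PHYSICAL_FRAMES) {
                        swap.put(eldest.getKey(), eldest.getValue()); // page out to swap
                        System.out.println("Paged out page " + eldest.getKey());
                        return true;
                    }
                    return false;
                }
            };

    static String access(int page) {
        String data = ram.get(page);
        if (data == null) { // page fault: bring the page back from swap (or load it fresh)
            data = swap.getOrDefault(page, "data-" + page);
            swap.remove(page);
            ram.put(page, data);
            System.out.println("Page fault on page " + page);
        }
        return data;
    }

    public static void main(String[] args) {
        // Accessing page 4 evicts page 2 (the least recently used); touching 2 again faults it back in.
        for (int page : new int[]{1, 2, 3, 1, 4, 2}) {
            access(page);
        }
    }
}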

Network Virtualization

Network virtualization creates virtual networks whereby each application sees its own logical network independent of the physical network. A virtual LAN (VLAN) is an example of network virtualization that provides an easy, flexible, and less expensive way to manage networks. VLANs make large networks more manageable by enabling a centralized configuration of devices located in physically diverse locations.

Consider a company in which the users of a department are spread over a metropolitan area with their resources centrally located at one office. In a typical network, each location has its own network connected to the others through routers. When network packets cross routers, latency influences network performance. With VLANs, users with similar access requirements can be grouped together into the same virtual network. This setup eliminates the need for network routing. As a result, although users are physically located at disparate locations, they appear to be at the same location, accessing resources locally. In addition to improving network performance, VLANs also provide enhanced security by isolating sensitive data from the other networks and by restricting access to the resources located within the networks.

Virtual SAN (VSAN)

A virtual SAN (VSAN), or virtual fabric, is a recent evolution of the SAN and, conceptually, functions in the same way as a VLAN. In a VSAN, a group of host or storage ports communicate with each other using a virtual topology defined on the physical SAN. VSAN technology enables users to build one or more virtual SANs on a single physical topology containing switches and ISLs. This technology improves storage area network (SAN) scalability, availability, and security. These benefits are derived from the separation of Fibre Channel services in each VSAN and the isolation of traffic between VSANs. Some of the features of VSAN are:

Fibre Channel ID (FC ID) of a host in a VSAN can be assigned to a host in another VSAN, thus improving scalability of SAN.

Every instance of a VSAN runs all required protocols, such as FSPF, domain manager, and zoning.


Fabric-related configurations in one VSAN do not affect the traffic in another VSAN. Events causing traffic disruptions in one VSAN are contained within that VSAN and are not propagated to other VSANs.

Server Virtualization

Server virtualization enables multiple operating systems and applications to run simultaneously on different virtual machines created on the same physical server (or group of servers). Virtual machines provide a layer of abstraction between the operating system and the underlying hardware. Within a physical server, any number of virtual servers can be established, depending on hardware capabilities.

Each virtual server seems like a physical machine to the operating system, although all virtual servers share the same underlying physical hardware in an isolated manner. For example, the physical memory is shared between virtual servers but the address space is not. Individual virtual servers can be restarted, upgraded, or even crashed, without affecting the other virtual servers on the same physical machine.

Storage Virtualization

Storage virtualization is the process of presenting a logical view of the physical storage resources to a host. This logical storage appears and behaves as physical storage directly connected to the host. Throughout the evolution of storage technology, some form of storage virtualization has been implemented. Some examples of storage virtualization are host-based volume management, LUN creation, tape storage virtualization, and disk addressing (CHS to LBA). The key benefits of storage virtualization include increased storage utilization, adding or deleting storage without affecting an application's availability, and nondisruptive data migration (access to files and storage while migrations are in progress).

Research into Virtualization:-

• Reduce the number of physical machines
• Isolate environments but share hardware
• Make better use of existing capacity
• Virtualize network and SAN interfaces to reduce infrastructure needs
• Ultimately save on maintenance and leases

Marketplace Offerings of Virtualization:-

Freely Available

• OpenVZ (Open Source)
• VMware Server (GSX)
• Xen 3.0 (Open Source)

Commercial

• Virtuozzo
• VMware ESX
• Xen Enterprise
• Microsoft Virtual Server
• Virtual Iron


Limitations of Virtualization:-

• For servers dedicated to applications with high demands on processing power, virtualization isn't a good choice.

• It's also unwise to overload a server's CPU by creating too many virtual servers on one physical machine. The more virtual machines a physical server must support, the less processing power each server can receive.

• Another limitation is migration. Right now, it's only possible to migrate a virtual server from one physical machine to another if both physical machines use the same manufacturer's processor.

Issues and concerns:

Supportability of Microsoft Server products running as Guest Operating Systems on a non-certified virtualization engine.

Managing load on virtualized systems can be more art than science.

EXPERIMENT NO-7

AIM: To study the concept of backup and disaster recovery management of data.

THEORY:

One of a database administrator's (DBA) major responsibilities is to ensure that the database is available for use. The DBA can take precautions to minimize failure of the system; in spite of these precautions, it is naive to think that failures will never occur. The DBA must make the database operational as quickly as possible in case of a failure and minimize the loss of data. To protect the data from the various types of failures that can occur, the DBA must back up the database regularly. Without a current backup, it is impossible for the DBA to get the database up and running after a file loss without losing data. Backups are critical for recovering from different types of failures. The task of validating backups cannot be overemphasized: assuming that a backup exists without actually checking its existence can prove very costly if it is not valid.

Defining a Backup and Recovery Strategy :

Business requirements

• Mean Time To Recover (MTTR)
• Mean Time Between Failures (MTBF)
• Evolutionary process

Business Impact:

You should understand the impact that down time has on the business. Management must quantify the cost of down time and the loss of data and compare this with the cost of reducing down time and minimizing data loss.

MTTR: Database availability is a key issue for a DBA. In the event of a failure, the DBA should strive to reduce the Mean Time To Recover (MTTR).


This strategy ensures that the database is unavailable for the shortest possible amount of time. Anticipating the types of failures that can occur and using effective recovery strategies, the DBA can ultimately reduce the MTTR.

MTBF: Protecting the database against various types of failures is also a key DBA task. To do this, a DBA must increase the Mean Time Between Failures (MTBF).

The DBA must understand the backup and recovery structures within an Oracle database environment and configure the database so that failures do not often occur.

Evolutionary process: A backup and recovery strategy evolves as business, operational, and technical requirements change.

It is important that both the DBA and appropriate management review the validity of a backup and recovery strategy on a regular basis.
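The MTTR and MTBF measures above can be combined into a single availability figure. The following small Java sketch uses the standard formula MTBF / (MTBF + MTTR), which is not stated in the text, and the numbers are made-up illustrations.

public class AvailabilityDemo {
    public static void main(String[] args) {
        double mtbfHours = 2000.0; // mean time between failures (illustrative)
        double mttrHours = 4.0;    // mean time to recover (illustrative)

        // Availability = MTBF / (MTBF + MTTR); raising MTBF or cutting MTTR pushes this toward 100%.
        double availability = mtbfHours / (mtbfHours + mttrHours);

        System.out.printf("Availability: %.4f (%.2f%%)%n", availability, availability * 100);
    }
}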

Operational requirements

• 24-hour operations

Backups and recoveries are always affected by the type of business operation that you provide, particularly in a situation where a database must be available 24 hours a day, 7 days a week for continuous operation. Proper database configuration is necessary to support these operational requirements because they directly affect the technical aspects of the database environment.

• Testing and validating backups

DBAs can ensure that they have a strategy that enables them to decrease the MTTR and increase the MTBF by having a plan in place to test the validity of backups regularly. A recovery is only as good as the backups that are available.

• Database volatility

Other issues that impact operational requirements include the volatility of the data and structure of the database

Technical considerations


• Physical image copies of the operating system files

• Logical copies of the objects in the database

• Database configuration

• Transaction volume which affects desired frequency of backups

Categories of Failures

Each type of failure requires a varying level of involvement by the DBA to recover effectively from the situation. In some cases, recovery depends on the type of backup strategy that has been implemented. For example, a statement failure requires little DBA intervention, whereas a media failure requires the DBA to employ a tested recovery strategy.

• Statement failure


• User process failure

• User error

• Instance failure

• Media failure

• Network failure

Data Protection and Recovery is the fourth Core Infrastructure Optimization capability. The following table lists the high-level challenges, applicable solutions, and benefits of moving to the Standardized level in Data Protection and Recovery.

The Standardized Level in the Infrastructure Optimization Model addresses key areas of Data Protection and Recovery, including Defined Backup and Restore Services for Critical Servers. It requires that your organization has procedures and tools in place to manage backup and recovery of data on critical servers.

Challenges

Business challenges:
• No standard data management policy, which creates isolated islands of data throughout the network on file shares, nonstandard servers, personal profiles, Web sites, and local PCs
• Poor or non-existent archiving and backup services make achieving regulatory compliance difficult
• Lack of a disaster recovery plan could result in loss of data and critical systems

IT challenges:
• Hardware failure or corruption equates to catastrophic data loss
• Server administration is expensive
• IT lacks tools for backup and restore management

Solutions (projects):
• Implement backup and restore solutions for critical servers
• Consolidate and migrate file and print servers to simplify backup and restoration
• Deploy data protection tools for critical servers

Benefits

Business benefits:
• An effective data management strategy drives stability in the organization and improves productivity
• Standards for data management enable policy enforcement and define SLAs, improving the business relationship with IT
• A strategic approach to data management enables better data recovery procedures, supporting the business with a robust platform
• The organization is closer to implementing regulatory compliance

IT benefits:
• Mission-critical application data are kept in a safe place outside of the IT location
• Basic policies have been established to guarantee access to physical media (tapes, optical devices) when necessary

Defined Backup and Restore Services for Critical Servers


Audience

You should read this section if you do not have a backup and restore solution for 80 percent or more of your critical servers. Backup and recovery technologies provide a cornerstone of data protection strategies that help organizations meet their requirements for data availability and accessibility. Storing, restoring, and recovering data are key storage management operational activities surrounding one of the most important business assets: corporate data.

Data centers can use redundant components and fault tolerance technologies (such as server clustering, software mirroring, and hardware mirroring) to replicate crucial data to ensure high availability. However, these technologies alone cannot solve issues caused by data corruption or deletion, which can occur due to application bugs, viruses, security breaches, or user errors.

There may also be a requirement for retaining information in an archival form, such as for industry or legal auditing reasons; this requirement may extend to transactional data, documents, and collaborative information such as e-mail. Therefore, it is necessary to have a data protection strategy that includes a comprehensive backup and recovery scheme to protect data from any kind of unplanned outage or disaster, or to meet industry requirements for data retention.

The following guidance is based on the Windows Server System Reference Architecture implementation guides for Backup and Recovery Services.

Phase 1: Assess

The Assess Phase examines the business need for backup and recovery and takes inventory of the current backup and recovery processes in place. Backup activities ensure that data are stored properly and available for both restore and recovery, according to business requirements. The design of backup and recovery solutions needs to take into account business requirements of the organization as well as its operational environment.

Phase 2: Identify

The goal of the Identify Phase of your backup and recovery solution is to identify the targeted data repositories and prioritize the critical nature of the data. Critical data should be defined as data required to keep the business running and to comply with applicable laws or regulations. Any backup and recovery solutions that are deployed must be predictable, reliable, and capable of complying with regulations and processing data as quickly as possible.

Challenges that you must address in managing data include:

• Managing growth in the volumes of data.
• Managing storage infrastructure to improve the quality of service (QoS) as defined by service level agreements (SLAs), while reducing complexity and controlling costs.
• Integrating applications with storage and data management requirements.
• Operating within short, or nonexistent, data backup windows.
• Supporting existing IT systems that cannot run the latest technologies.
• Managing islands of technology that have decentralized administration.
• Assessing data value so that the most appropriate strategies can be applied to each type of data.

While backing up and restoring all organizational data is important, this topic addresses the backup and restore policies and procedures you must implement for critical services to successfully move from the Basic level to the Standardized level.

Phase 3: Evaluate and Plan

In the Evaluate and Plan Phase, you should take into account several data points to determine the appropriate backup and recovery solution for your organization. These requirements can include:

• How much data to store.
• Projected data growth.
• Backup and restore performance.
• Database backup and restore needs.
• E-mail backup requirements.
• Timetables for backups and restores.
• Data archiving (off-site storage) requirements.
• Identification of constraints.
• Selection and acquisition of storage infrastructure components.
• Storage monitoring and management plan.
• Testing the backup strategy.

Backup Plan:

In developing a backup and recovery plan for critical servers, you need to consider these factors:

• Backup mode
• Backup type
• Backup topology
• Service plan

Microsoft’s Data Protection Manager (DPM) is a server software application that enables disk-based data protection and recovery for file servers in your network. The DPM Planning and Deployment Guide contains a wealth of information on setting up a backup and recovery plan.

Backup Modes

The backup mode determines how the backup is carried out in relation to the data that is being backed up. There are two ways in which data backups can take place:

• Online Backups. Backups are made while the data is still accessible to users.
• Offline Backups. Backups are made of data that is first rendered inaccessible to users.


Backup Types

Various types of backups can be used for online and offline backups. An individual environment’s SLA, backup window, and recovery time requirement determine which method or combination of methods is optimal for that environment.

• Full Backup. Captures all files on all disks.
• Incremental Backup. Captures files that have been added or changed since the last incremental backup.
• Differential Backup. Captures files that have been added or changed since the last full backup.
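As a rough sketch of the difference between the two change-based types (this is not any particular backup product; the directory path and reference times are assumptions), the following selects files by modification time relative to either the last full backup or the last backup of any kind.

# Sketch: selecting candidate files for differential vs. incremental backups.
# The directory and reference times are placeholders; real backup tools
# typically rely on archive bits or backup catalogs, not bare timestamps.
import os
from datetime import datetime, timedelta

def changed_since(root, reference):
    """Return files under `root` modified after `reference`."""
    selected = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if datetime.fromtimestamp(os.path.getmtime(path)) > reference:
                selected.append(path)
    return selected

last_full = datetime.now() - timedelta(days=7)         # weekly full backup
last_incremental = datetime.now() - timedelta(days=1)  # nightly incremental

differential_set = changed_since(r"C:\Data", last_full)        # since last full
incremental_set = changed_since(r"C:\Data", last_incremental)  # since last backup
print(len(differential_set), len(incremental_set))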

Backup Topologies

Originally, the only type of storage technology that required backup involved hard disks connected directly to storage adapters on servers. Today, this kind of storage is known as direct-attached storage, or DAS. The backup and recovery landscape has changed markedly with the development of technologies such as Storage Area Network (SAN) and Network Attached Storage (NAS). SAN environments in particular provide a significant opportunity to optimize and simplify the backup and recovery process.

Local Server Backup and Recovery (DAS). Each server is connected to its own backup device.

LAN-Based Backup and Recovery (NAS). This is a multi-tier architecture in which some backup servers kick off jobs and collect metadata about the backed-up data (also known as control data) while other servers (designated as media servers) perform the actual job of managing the data being backed up.


SAN-Based Backup and Recovery. In this topology you have the ability to move the actual backup copy operation from the production host to a secondary host system.

Service Plan

You have to consider many factors when designing your backup and recovery service. Among the factors to consider are:

• Fast backup and fast recovery priorities – Recovery Time Objective (RTO).
• The frequency with which data changes.
• Time constraints on the backup operation.
• Storage media.
• Data retention requirements.
• Currency of recovered data – Recovery Point Objective (RPO).
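A simple way to reason about the RTO and RPO factors is to compare the backup interval and the restore time against the targets; the sketch below uses illustrative figures that are not part of this document.

# Sketch: checking a backup schedule against RTO/RPO targets.
# All hour values are illustrative assumptions.

def meets_rpo(backup_interval_h: float, rpo_h: float) -> bool:
    # Worst-case data loss is the time since the most recent backup,
    # which is bounded by the backup interval.
    return backup_interval_h <= rpo_h

def meets_rto(restore_time_h: float, rto_h: float) -> bool:
    # The restore (including any media retrieval) must finish within the RTO.
    return restore_time_h <= rto_h

print(meets_rpo(backup_interval_h=24, rpo_h=4))  # nightly backups vs. 4-hour RPO -> False
print(meets_rto(restore_time_h=2, rto_h=8))      # 2-hour restore vs. 8-hour RTO  -> True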

Recovery Plan

Even the best backup plan can be ineffective if you don’t have a recovery plan in place. Following are some of the elements of a good data recovery plan.

Verify Backups

Verifying backups is a critical step in disaster recovery. You can't recover data unless you have a valid backup.
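One way to automate part of this check, sketched below under the assumption of a test-restore directory (both paths are placeholders), is to compare checksums of the source files against the restored copies.

# Sketch: verifying a test restore by comparing file checksums.
# The two directory paths are placeholders, not from this document.
import hashlib
import os

def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(source_dir, restored_dir):
    """Return relative paths whose restored copy is missing or differs."""
    mismatches = []
    for dirpath, _dirs, files in os.walk(source_dir):
        for name in files:
            original = os.path.join(dirpath, name)
            rel = os.path.relpath(original, source_dir)
            restored = os.path.join(restored_dir, rel)
            if not os.path.exists(restored) or sha256_of(restored) != sha256_of(original):
                mismatches.append(rel)
    return mismatches

print(verify(r"D:\Production\Data", r"E:\RestoreTest\Data"))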

Back Up Existing Log Files Before Performing Any Restoration

A good safeguard is to back up any existing log files before you restore a server. If data is lost or an older backup set is restored by mistake, the logs help you recover.
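A minimal sketch of this safeguard, assuming a generic log directory rather than any specific product, copies the logs to a timestamped folder before the restore begins.

# Sketch: preserving existing log files before attempting a restore.
# The directories are placeholders for whatever the server actually uses.
import os
import shutil
from datetime import datetime

def snapshot_logs(log_dir, safe_location):
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    destination = os.path.join(safe_location, f"logs-before-restore-{stamp}")
    shutil.copytree(log_dir, destination)  # raises if the destination already exists
    return destination

saved_to = snapshot_logs(r"C:\Logs\MailServer", r"D:\PreRestore")
print("Logs preserved at", saved_to)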

Perform a Periodic Fire Drill


A drill measures your ability to recover from a disaster and certifies your disaster recovery plans. Create a test environment and attempt a complete recovery of data. Be sure to use data from production backups, and to record how long it takes to recover the data. This includes retrieving data from off-site storage.

Create a Disaster Kit

Plan ahead by building a disaster kit that includes an operating system configuration sheet, a hard disk partition configuration sheet, a redundant array of independent disks (RAID) configuration, a hardware configuration sheet, and so forth. This material is easy enough to compile, and it can minimize recovery time—much of which can be spent trying to locate information or disks needed to configure the recovery system.

Phase 4: Deploy

After the appropriate storage infrastructure components are in place and the backup and recovery service plan is defined, your organization can install the storage solution and associated monitoring and management tools into the IT environment.

Operations

Monitoring and managing the storage management resources used for backup and recovery in the production environment are extremely important tasks. Whether the process is centralized or distributed, the technologies and procedures for backup and recovery must be managed. In the end, the capability to easily monitor and analyze the availability, capacity, and performance of the storage management systems should be available.

Storage resource management (SRM) is a key storage management activity focused on ensuring that important storage devices, such as disks, are formatted and installed with the appropriate file systems.
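As a small illustration of the monitoring side of this activity (it is in no way a substitute for a tool such as Data Protection Manager; the volume list and the 85 percent threshold are assumptions), the sketch below reports capacity usage for a set of volumes and flags those running low.

# Sketch: rudimentary capacity monitoring for storage volumes.
# The volume paths and the 85% threshold are illustrative assumptions.
import shutil

def report_capacity(volumes, threshold=0.85):
    for volume in volumes:
        usage = shutil.disk_usage(volume)
        used_fraction = usage.used / usage.total
        status = "WARN" if used_fraction >= threshold else "ok"
        print(f"{volume}: {used_fraction:.0%} used [{status}]")

report_capacity(["C:\\", "D:\\"])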

Typically, the tools used in the production environment to monitor and manage storage resources consist of functions provided as part of installed operating systems and/or those offered with other solutions, such as Microsoft Data Protection Manager.

Using a storage resource management system requires proper training and skills. An understanding of some of the basic concepts necessary for monitoring and managing storage resources successfully, and analyzing the results, is required. In addition, selecting the right tool for the right job increases the operations group’s ability to ensure data and storage resource availability, capacity, and performance.

Restore scenarios

• Data migration
• Development
• Testing
• Disaster recovery
• Recovering backups
• Standby servers


EXPERIMENT NO-8

AIM: To study the concept of Cloud Computing and its Architecture and framework in detail.

THEORY:

Cloud computing is rapidly emerging as a technology trend that almost every industry that provides or consumes software, hardware, and infrastructure can leverage. The technology and architecture that cloud service and deployment models offer are a key area of research and development for Esri in current and future iterations of the ArcGIS product platform solutions. Although there are several variations on the definition of cloud computing, some basic tenets characterize this emerging environment. Cloud computing furnishes technological capabilities, commonly maintained off-premise, that are delivered on demand as a service via the Internet. Since a third party owns and manages public cloud services, consumers of these services do not own assets in the cloud model but pay for them on a per-use basis. In essence, they are renting the physical infrastructure and applications within a shared architecture. Cloud offerings can range from data storage to end-user Web applications to other focused computing services.

Fig: Cloud Computing Stack

Cloud computing derives its name from the same line of thinking. Cloud computing is a style of computing that must cater to the following computing needs:

1. Dynamism
2. Abstraction
3. Resource Sharing

Dynamism: Your business is growing exponentially, and your computing needs and usage get bigger with every passing day. Would you add servers and other hardware to meet the new demand? Now assume a recession hits and your business is losing customers; the servers and hardware you added during last quarter's peak season are now idle. Will you sell them? Demand keeps changing with the world and regional economy, and sometimes with seasonal traffic bursts as well. That is where cloud computing comes to the rescue: you just need to configure what you need, and your provider takes care of the fluctuating demand.

Abstraction: Your business should focus on its core competency and should not worry about security, the operating system, the software platform, updates, patches, and so on. Leave these chores to your provider. From an end user's perspective, you do not need to care about the OS, the plug-ins, web security, or the software platform; everything should be in place without any worry.

Resource Sharing: Resource sharing is the beauty of cloud computing. This is the concept that helps cloud providers attain optimum utilization of resources. For example, a company dealing in gifts may require more server resources during the festive season, while a company dealing in payroll management may require more resources at the end or the beginning of the month.

Cloud computing is a type of computing environment in which business owners outsource their computing needs, including application software services, to a third party; when they need to use the computing power, or employees need to use application resources such as databases or e-mail, they access the resources via the Internet.

For instance, suppose you have a small business where you need a few small servers for databases, e-mail, applications, and so on. Normally, servers need higher computing power, whereas PCs or laptops need lower computing power and are much cheaper than servers. Moreover, to maintain a client-server environment you need a highly skilled network maintenance team.

If you decide to avoid purchasing servers, and thus cut off the need to keep an operations and maintenance team, then going for cloud computing is a very cost-effective solution, because in a cloud architecture you neither have to install nor maintain servers.

Just by paying a fixed monthly charge, you can outsource your IT infrastructure to a third-party managed-service data center.

The main advantage of using a cloud computing facility is that customers do not have to pay for infrastructure installation and maintenance. As a user of cloud computing, you pay service charges according to your usage of computing power and other networking resources.
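A rough, purely illustrative comparison of the two cost models follows; every figure is a made-up assumption, and none comes from this document.

# Sketch: comparing up-front on-premises cost with pay-as-you-go cloud cost.
# Every number is an assumption chosen only to illustrate the comparison.

def on_premises_cost(hardware: float, yearly_maintenance: float, years: int) -> float:
    return hardware + yearly_maintenance * years

def cloud_cost(monthly_charge: float, years: int) -> float:
    return monthly_charge * 12 * years

years = 3
print("On-premises:", on_premises_cost(hardware=20000, yearly_maintenance=5000, years=years))
print("Cloud      :", cloud_cost(monthly_charge=800, years=years))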

Moreover, you no longer have to worry about software updates, installation, e-mail servers, anti-virus software, backups, web servers, or the physical and logical security of your data. Thus, cloud computing can help you focus more on your core business competency.

ARCHITECTURE:

Infrastructure as a Service (IaaS)

This is the base layer of the cloud stack. It serves as the foundation for the other two layers and their execution. The keyword behind this layer is virtualization. Let us try to understand this using Amazon EC2. In Amazon EC2 (Elastic Compute Cloud), your application is executed on a virtual computer (an instance). You have a choice of virtual computers, where you can select a configuration of CPU, memory, and storage that is optimal for your application. The whole cloud infrastructure, i.e. servers, routers, hardware-based load balancing, firewalls, storage, and other network equipment, is provided by the IaaS provider. The customer buys these resources as a service on a need basis.
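As a hedged sketch of what buying such a resource on demand can look like, the example below uses the boto3 AWS SDK for Python to request a single EC2 instance; the AMI ID, instance type, and region are placeholders, and AWS credentials are assumed to be configured outside the script.

# Sketch: requesting a virtual computer (instance) from Amazon EC2 with boto3.
# The AMI ID, instance type, and region are placeholders; credentials are
# assumed to be configured in the environment.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image
    InstanceType="t3.micro",          # chosen CPU/memory configuration
    MinCount=1,
    MaxCount=1,
)

print("Launched instance:", response["Instances"][0]["InstanceId"])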


Fig: Cloud computing deployment Model

Platform as a Service (PaaS)

Now you do not need to invest millions of dollars to get that development foundation ready for your developers. The PaaS provider delivers the platform over the web, and in most cases you can consume the platform using your browser, with no need to download any software. PaaS has empowered small and mid-size companies, and even individual developers, to launch their own SaaS offerings by leveraging the power of these platform providers, without any initial investment.

PaaS layers:

• Cloud OS
• Cloud Middleware

Software as a Service (SaaS)

This is the topmost layer of the cloud computing stack, consumed directly by the end user: SaaS (Software as a Service). On-premise applications are quite expensive and affordable only to big enterprises. Why? Because on-premise applications have a very high upfront CapEx (capital expenditure), which results in a high TCO (total cost of ownership). On-premise apps also require a larger number of skilled developers to maintain the application. In its current avatar, SaaS is the best bet for SMEs/SMBs (small and mid-size businesses). Now they can afford the best software solution for their business without investing anything at all in infrastructure, a development platform, or skilled manpower. The only requirement for SaaS is a computer with a browser, which is quite basic. SaaS is a recurring subscription-based model delivered to the customer on demand: pay as you use.

The success of cloud computing is largely based on the effective implementation of its architecture. In cloud computing, architecture is not just about how the application will work with the intended users; cloud computing also requires an intricate interaction with the hardware, which is essential to ensure uptime of the application.

These two components (hardware and application) have to work together seamlessly or else cloud computing will not be possible. If the application fails, the hardware will not be able to push the data and implement certain processes.

On the other hand, hardware failure will mean stoppage of operations. For that reason, precautions have to be taken so that these components work as expected, and necessary fixes have to be implemented immediately, both for prevention and for quick resolution.

Cloud Computing Service Architecture:

Infrastructure as a service: the service provider bears all the cost of servers, networking equipment, storage, and backups. You just have to pay for the computing service, and the users build their own application software. Amazon EC2 is a great example of this type of service.

Platform as a service: the service provider provides a platform or a stack of solutions for your users, which helps users save investment on hardware and software. Google App Engine and Force.com provide this type of service.

Software as a service: the service provider gives your users the service of using its software, especially application software. Examples include Google (GOOG), Salesforce.com (CRM), and NetSuite (N).

FRAMEWORK:

Storage-as-a-service:

It is the ability to leverage storage that physically exists remotely but is logically a local storage resource to any application that requires storage. This is the most primitive component of cloud computing, and it is a component or pattern that is leveraged by most of the other cloud computing components. Storage-as-a-service providers include Amazon S3, Box.net, and Google Base.
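The sketch below illustrates the idea with Amazon S3 through boto3: a remote bucket is used almost like a local drive. The bucket name and file names are placeholders, and credentials are assumed to be configured.

# Sketch: using remote storage (Amazon S3) as if it were a local resource.
# The bucket name and file paths are placeholders; credentials are assumed
# to be configured in the environment.
import boto3

s3 = boto3.client("s3")

# Store a local file in the remote bucket...
s3.upload_file("report.pdf", "example-backup-bucket", "reports/report.pdf")

# ...and retrieve it again later as if it were local.
s3.download_file("example-backup-bucket", "reports/report.pdf", "report-copy.pdf")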

Database-as-a-service:

It provides the ability to leverage the services of a remotely hosted database, sharing it with other users and having it logically function as if the database were local. Different models are offered by different providers, but the power lies in leveraging database technology that would typically cost thousands of dollars in hardware and software licenses. Database-as-a-service providers include Amazon SimpleDB, Trackvia, and Microsoft SSDS.

Information-as-a-service:


It refers to the ability to consume any type of remotely hosted information through a well-defined interface such as an API, for example stock price information, address validation, or credit reporting. There are over 1,000 sources of information available these days, most of them listed at www.programmableweb.com.
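A minimal sketch of consuming such information through an API follows; the endpoint URL and the response field are hypothetical, invented only to illustrate the pattern.

# Sketch: consuming information-as-a-service through an HTTP API.
# The endpoint URL and the JSON field name are hypothetical.
import requests

def get_stock_quote(symbol: str) -> float:
    response = requests.get(
        "https://api.example.com/v1/quotes",  # hypothetical provider
        params={"symbol": symbol},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["price"]  # assumed response field

print(get_stock_quote("MSFT"))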

Process-as-a-service :

It refers to a remote resource that is able to bind many resources together, either hosted within the same cloud computing resource or remotely, to create business processes. These processes are typically easier to change than applications, and thus they provide agility to those who leverage these process engines delivered on demand.

Applications in Cloud Computing Architecture:

Enabling the capacity of the data centers is the software that does the processing. With the help of the data centers, processing is fast, since the speed of a transaction is determined by the hardware capabilities of the data center.

The application in cloud computing calls on the assistance of the hardware not only for processing but also for data gathering. Although data may come from another source, data centers usually house the data in their server farms for faster access and easier processing. The challenge for applications in cloud computing is largely based on the number of requests the application can handle. Although this capacity is strongly influenced by the data center, the application will usually hit a threshold if it is not properly written.

To deal with this concern, developers use metadata to enable personalized services for their users as well as data processing. Through metadata, individualized requests can be entertained and properly implemented. Metadata also helps maintain transaction uptime, since data requests can be slowed down if the developer chooses to do so.


CONCLUSION:

A new kind of application platform doesn't come along very often. But when a successful platform innovation does appear, it has an enormous impact. Think of the way personal computers and servers shook up the world of mainframes and minicomputers, for example, or how the rise of platforms for N-tier applications changed the way people write software. While the old world doesn't go away, a new approach can quickly become the center of attention for new applications. Cloud platforms don't yet offer the full spectrum of an on-premises environment. For example, business intelligence as part of the platform isn't common, nor is support for business process management technologies such as full-featured workflow and rules engines. This is all but certain to change, however, as this technology wave continues to roll forward. Cloud platforms aren't yet at the center of most people's attention. The odds are good, though, that this won't be true five years from now. The attractions of cloud-based computing, including scalability and lower costs, are very real. If you work in application development, whether for a software vendor or an end user, expect the cloud to play an increasing role in your future. The next generation of application platforms is here.
