
DATA INTEGRITY PROOFS IN CLOUD STORAGE

Project work

Submitted by

PIJUSH NATH

(Reg. No: 6112022006)

In partial fulfillment of the requirement for the award of the

Degree of

MASTER OF COMPUTER APPLICATION

Under the guidance of

Mrs. S. CHRISTY ANGELINE, M.C.A., M.Phil.

Asst. Professor

Trichy

DEPARTMENT OF COMPUTER APPLICATIONS

PRIST UNIVERSITY

Center for Higher Learning & Research

TRICHY CAMPUS, TAMIL NADU

MARCH-2014


ACKNOWLEDGEMENTS

I wish to thank the Almighty for giving me this opportunity, and I express my sincere gratitude to my parents, who stood steadfastly behind me in all my efforts towards the successful completion of this project, and who gave me moral strength, love and prayers throughout my studies.

I am deeply indebted to my guide and Head of the MCA Department, Mrs. S. Christy Angeline, MCA, M.Phil., Assistant Professor, PRIST University, Trichy, whose help, stimulating suggestions and encouragement supported me throughout this project.

I wish to thank all the staff members of our department, who helped me greatly in the successful completion of this project, and our lab assistants, who were patient and always helped me whenever I was in need.

Finally, I thank all my friends and well-wishers who helped me and who were a shoulder to lean on when I was discouraged.

PIJUSH NATH


PRIST UNIVERSITY

(U/s 3 of the UGC Act, 1956)

Trichy Campus

Mrs. S. Christy Angeline, MCA, M.Phil.
Assistant Professor
Department of Computer Applications
PRIST University
Trichy

BONAFIDE CERTIFICATE

This is to certify that the project work titled "Data Integrity Proofs in Cloud Storage" is the bonafide record of the project work done by Mr. PIJUSH NATH, Reg. No: 6112022006, in partial fulfillment of the requirements for the award of the Degree of Master of Computer Application during the academic year 2012-2014. The work was done under the guidance and supervision of Mrs. S. Christy Angeline, MCA, M.Phil.

Signature of the Head of the Department          Signature of the Guide
(Mrs. S. Christy Angeline)

External Examiner

CHAPTER 01


ABSTRACT:

Cloud computing has been envisioned as the de-facto solution to the rising storage costs of IT enterprises. With the high cost of data storage devices and the rapid rate at which data is being generated, it proves costly for enterprises or individual users to frequently update their hardware. Apart from reducing storage costs, data outsourcing to the cloud also helps in reducing maintenance. Cloud storage moves the user's data to large, remotely located data centers over which the user does not have any control. However, this unique feature of the cloud poses many new security challenges which need to be clearly understood and resolved. We provide a scheme which gives a proof of data integrity in the cloud, which the customer can employ to check the correctness of his data. This proof can be agreed upon by both the cloud and the customer and can be incorporated in the Service Level Agreement (SLA).

PROJECT PURPOSE:

In developing proofs of data possession at untrusted cloud storage servers, we are often limited by the resources at the cloud server as well as at the client. Given that the data sizes are large and stored at remote servers, accessing the entire file is expensive in I/O costs to the storage server. Transmitting the file across the network to the client also consumes heavy bandwidth. Since growth in storage capacity has far outpaced the growth in data access and network bandwidth, accessing and transmitting the entire archive even occasionally greatly limits the scalability of network resources. Furthermore, the I/O needed to establish the data proof interferes with the on-demand bandwidth the server uses for normal storage and retrieval.


PROJECT SCOPE:

Before storing its data file F at the cloud, the client should process it and create suitable metadata which is used in a later stage to verify the data integrity at the cloud storage. When checking for data integrity, the client queries the cloud storage and, based on its replies, concludes whether the integrity of the data stored in the cloud is maintained. In our data integrity protocol the verifier needs to store only a single cryptographic key, irrespective of the size of the data file F, and two functions which generate a random sequence. The verifier does not store any of the data with it. Before storing the file at the archive, the verifier preprocesses the file, appends some metadata to it and stores it at the archive.
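
To make the storage requirement concrete, the sketch below (hypothetical C# names, not the project's actual code) models the verifier's entire state: one secret key, from which the two pseudorandom sequences are derived on demand using HMAC-SHA256.

using System;
using System.Security.Cryptography;

// Illustrative sketch only: the verifier's whole state is one secret key.
// The two "functions which generate a random sequence" are modelled here
// with HMAC-SHA256, keyed by that single key.
class VerifierState
{
    private readonly byte[] key;   // the single cryptographic key

    public VerifierState(byte[] key) { this.key = key; }

    // First sequence: which block position to challenge in round i.
    public int BlockPosition(int i, int totalBlocks)
    {
        using (HMACSHA256 h = new HMACSHA256(key))
        {
            byte[] d = h.ComputeHash(BitConverter.GetBytes(i));
            return (int)(BitConverter.ToUInt32(d, 0) % (uint)totalBlocks);
        }
    }

    // Second sequence: the secret pad byte used to encrypt the metadata.
    public byte PadByte(int i)
    {
        using (HMACSHA256 h = new HMACSHA256(key))
        {
            return h.ComputeHash(BitConverter.GetBytes(~i))[0];
        }
    }
}

Note that nothing about the file itself is retained; both sequences are recomputed from the key at verification time.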

PRODUCT FEATURES:

Our scheme was developed to reduce the computational and storage overhead of the client as well as to minimize the computational overhead of the cloud storage server. We also minimized the size of the proof of data integrity so as to reduce network bandwidth consumption. Hence the storage at the client is very minimal compared to all other schemes that were developed, which makes this scheme advantageous to thin clients like PDAs and mobile phones.

The operation of encrypting data generally consumes a large amount of computational power. In our scheme the encryption is limited to only a fraction of the whole data, thereby saving on the client's computational time. Many of the schemes proposed earlier require the archive to perform tasks that need a lot of computational power to generate the proof of data integrity. In our scheme the archive just needs to fetch and send a few bits of data to the client.


INTRODUCTION:

Data outsourcing to cloud storage servers is a rising trend among many firms and users owing to its economic advantages. This essentially means that the owner (client) of the data moves its data to a third-party cloud storage server which is supposed to, presumably for a fee, faithfully store the data and provide it back to the owner whenever required.

As data generation is far outpacing data storage, it proves costly for small firms to frequently update their hardware whenever additional data is created. Maintaining the storage can also be a difficult task. Outsourcing data to cloud storage helps such firms by reducing the costs of storage, maintenance and personnel. It can also assure reliable storage of important data by keeping multiple copies, thereby reducing the chance of losing data through hardware failures.

Storing user data in the cloud, despite its advantages, has many interesting security concerns which need to be extensively investigated before it becomes a reliable solution to the problem of avoiding local storage of data. In this paper we deal with the problem of implementing a protocol for obtaining a proof of data possession in the cloud, sometimes referred to as a proof of retrievability (POR). This problem tries to obtain and verify a proof that the data stored by a user at a remote data storage in the cloud (called cloud storage archives, or simply archives) is not modified by the archive, thereby assuring the integrity of the data.

Such verification systems prevent the cloud storage archives from misrepresenting or modifying the data stored at them without the consent of the data owner, by using frequent checks on the storage archives. Such checks must allow the data owner to efficiently, frequently, quickly and securely verify that the cloud archive is not cheating the owner. Cheating, in this context, means that the storage archive might delete or modify some of the data.


CHAPTER 02

SYSTEM ANALYSIS:

PROBLEM DEFINITION:

Storing user data in the cloud, despite its advantages, has many interesting security concerns which need to be extensively investigated before it becomes a reliable solution to the problem of avoiding local storage of data. Many problems, like data authentication and integrity (i.e., how to efficiently and securely ensure that the cloud storage server returns correct and complete results in response to its clients' queries), outsourcing encrypted data, and the associated difficult problems of querying over an encrypted domain, have been discussed in the research literature.

EXISTING SYSTEM:

As data generation is far outpacing data storage, it proves costly for small firms to frequently update their hardware whenever additional data is created. Maintaining the storage can also be a difficult task. Transmitting the file across the network to the client can consume heavy bandwidth. The problem is further complicated by the fact that the owner of the data may be a small device, like a PDA (personal digital assistant) or a mobile phone, which has limited CPU power, battery power and communication bandwidth.

LIMITATIONS OF EXISTING SYSTEM:

• The main drawback of this scheme is the high resource cost it requires for implementation.

• Computing the hash value of even a moderately large data file can be computationally burdensome for some clients (PDAs, mobile phones, etc.).

• Encrypting large volumes of data is a disadvantage for small users with limited computational power (PDAs, mobile phones, etc.).


PROPOSED SYSTEM:

One of the important concerns that need to be addressed is to assure the customer of the integrity, i.e., the correctness, of his data in the cloud. As the data is not physically accessible to the user, the cloud should provide a way for the user to check whether the integrity of his data is maintained or compromised. In this paper we provide a scheme which gives a proof of data integrity in the cloud, which the customer can employ to check the correctness of his data. This proof can be agreed upon by both the cloud and the customer and can be incorporated in the Service Level Agreement (SLA). It is important to note that our proof of data integrity protocol just checks the integrity of the data, i.e., whether the data has been illegally modified or deleted.

ADVANTAGES OF PROPOSED SYSTEM:

• Apart from reducing storage costs, outsourcing data to the cloud also reduces maintenance.

• It avoids local storage of data.

• It reduces the costs of storage, maintenance and personnel.

• It reduces the chance of losing data through hardware failures.

• It assures the customer that the archive is not cheating the owner.


PROCESS FLOW DIAGRAMS FOR EXISTING AND PROPOSED SYSTEM:

FEASIBILITY STUDY:

The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis the feasibility study of the proposed system is carried out, to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.

The three key considerations involved in the feasibility analysis are:

ECONOMIC FEASIBILITY

TECHNICAL FEASIBILITY

SOCIAL FEASIBILITY

ECONOMIC FEASIBILITY:

This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system was well within the budget, and this was achieved because most of the technologies used are freely available; only the customized products had to be purchased.

TECHNICAL FEASIBILITY:

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would in turn place high demands on the client. The developed system has modest requirements, as only minimal or no changes are required for implementing it.


HARDWARE AND SOFTWARE REQUIREMENTS:

HARDWARE REQUIREMENTS:

• System : Pentium IV 2.4 GHz

• Hard Disk : 40 GB

• Floppy Drive : 1.44 MB

• Monitor : 15" VGA Colour

• Mouse : Logitech

• RAM : 512 MB

SOFTWARE REQUIREMENTS:

• Operating system : Windows XP.

• Coding Language : ASP.Net with C#

• Database : SQL Server 2005


FUNCTIONAL REQUIREMENTS:

Functional requirements specify which outputs should be produced from the given inputs. They describe the relationship between the inputs and outputs of the system. For each functional requirement, a detailed description of all data inputs and their sources, and the range of valid inputs, must be specified.

NON FUNCTIONAL REQUIREMENTS:

These describe user-visible aspects of the system that are not directly related to the functional behavior of the system. Non-functional requirements include quantitative constraints, such as response time (i.e., how fast the system reacts to user commands) or accuracy (i.e., how precise the system's numerical answers are).

PSEUDO REQUIREMENTS:

These requirements are imposed by the client and restrict the implementation of the system. Typical pseudo requirements are the implementation language and the platform on which the system is to be implemented. They usually have no direct effect on the user's view of the system.

LITERATURE SURVEY:

The literature survey is an important step in the software development process. Before developing the tool it is necessary to determine the time factor, the economy and the company's strength. Once these requirements are satisfied, the next step is to determine which operating system and language can be used for developing the tool. Once the programmers start building the tool, they need a lot of external support. This support can be obtained from senior programmers, from books or from websites. Before building the system, the above considerations were taken into account in developing the proposed system.


We now analyse an outline survey of cloud computing:

Cloud Computing

• Cloud computing provides unlimited infrastructure to store and execute customer data and programs. Customers do not need to own the infrastructure; they merely access or rent it, so they can forego capital expenditure and consume resources as a service, paying only for what they use.

Benefits of Cloud Computing:

• Minimized Capital expenditure

• Location and Device independence

• Utilization and efficiency improvement

• Very high Scalability

• High Computing power

Security a major Concern:

Security concerns arise because both customer data and programs reside on provider premises. Security is always a major concern in open system architectures.


Data centre security:

• Professional security staff utilizing video surveillance, state-of-the-art intrusion detection systems, and other electronic means.

• When an employee no longer has a business need to access the datacenter, his privileges to access it should be immediately revoked.

• All physical and electronic access to data centers by employees should be logged and audited routinely.

• Audit tools, so that users can easily determine how their data is stored, protected and used, and can verify policy enforcement.

Data Location:

When a user uses the cloud, the user probably won't know exactly where the data is hosted or in what country it will be stored. Data should be stored and processed only in specific jurisdictions as defined by the user.


The provider should also make a contractual commitment to obey local privacy requirements on behalf of its customers.

Data-centered policies, generated when a user provides personal or sensitive information, should travel with that information throughout its lifetime to ensure that the information is used only in accordance with the policy.

Backups of Data:

• Data stored in the provider's database should be redundantly stored in multiple physical locations.

• Data that is generated while running a program on instances is all customer data, and therefore the provider should not perform backups of it.

• Control of the administrator over databases.

Network Security:

• Denial of Service: servers and networks are brought down by a huge amount of network traffic, and users are denied access to a certain Internet-based service.

• Examples include DNS hacking, routing table "poisoning" and XDoS attacks.

• QoS violation: through congestion, delaying or dropping packets, or through resource hacking.


• Man-in-the-Middle Attack: to overcome it, always use SSL.

• IP Spoofing: spoofing is the creation of TCP/IP packets using somebody else's IP address.

• Solution: the infrastructure will not permit an instance to send traffic with a source IP or MAC address other than its own.

How secure is the encryption scheme?

• Is it possible for all of my data to be fully encrypted?

• What algorithms are used?

• Who holds, maintains and issues the keys?

Problem:

• Encryption accidents can make data totally unusable.

• Encryption can complicate availability.

Solution:

• The cloud provider should provide evidence that encryption schemes were designed and tested by experienced specialists.

Information Security:

Security related to the information exchanged between different hosts or between hosts and users. This includes issues pertaining to secure communication, authentication, and issues concerning single sign-on and delegation.

Secure communication issues include those security concerns that arise during the communication between two entities. These include confidentiality and integrity issues. Confidentiality indicates that all data sent by users should be accessible only to "legitimate" receivers, and integrity indicates that all data received should only be sent or modified by "legitimate" senders.

Solution: public key encryption, X.509 certificates, and the Secure Sockets Layer (SSL) enable secure authentication and communication over computer networks.


MODULES DESCRIPTION:

CLOUD STORAGE:

Data outsourcing to cloud storage servers is a rising trend among many firms and users owing to its economic advantages. This essentially means that the owner (client) of the data moves its data to a third-party cloud storage server which is supposed to, presumably for a fee, faithfully store the data and provide it back to the owner whenever required.

SIMPLY ARCHIVES:

This problem tries to obtain and verify a proof that the data stored by a user at a remote data storage in the cloud (called cloud storage archives, or simply archives) is not modified by the archive, thereby assuring the integrity of the data. The cloud archive must not cheat the owner; cheating, in this context, means that the storage archive might delete or modify some of the data.

SENTINELS:

In this scheme, unlike in the key-hash approach, only a single key can be used irrespective of the size of the file or the number of files whose retrievability it wants to verify. Also, the archive needs to access only a small portion of the file F, unlike the key-hash scheme, which required the archive to process the entire file F for each protocol verification. If the prover has modified or deleted a substantial portion of F, then with high probability it will also have suppressed a number of sentinels.
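
As a rough illustration of the sentinel idea (hypothetical names and key-derivation; the actual construction may differ), sentinel positions and values can both be derived from the single key, so the verifier stores none of them:

using System;
using System.Security.Cryptography;

// Illustrative sketch: each sentinel's position and value come from the key,
// so checking retrievability needs no per-file state at the verifier.
class SentinelVerifier
{
    private readonly byte[] key;
    private readonly long fileLength;

    public SentinelVerifier(byte[] key, long fileLength)
    {
        this.key = key;
        this.fileLength = fileLength;
    }

    // Position of the i-th sentinel inside the stored file.
    public long SentinelPosition(int i)
    {
        using (HMACSHA256 h = new HMACSHA256(key))
        {
            byte[] d = h.ComputeHash(BitConverter.GetBytes(i));
            return (long)(BitConverter.ToUInt64(d, 0) % (ulong)fileLength);
        }
    }

    // Expected value of the i-th sentinel.
    public byte SentinelValue(int i)
    {
        using (HMACSHA256 h = new HMACSHA256(key))
        {
            return h.ComputeHash(BitConverter.GetBytes(i))[8];
        }
    }

    // If the archive deleted a substantial part of F, with high probability
    // at least one of the returned bytes will not match.
    public bool Verify(int[] indices, byte[] returned)
    {
        for (int j = 0; j < indices.Length; j++)
            if (returned[j] != SentinelValue(indices[j])) return false;
        return true;
    }
}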

VERIFICATION PHASE:

The verifier, before storing the file at the archive, preprocesses the file, appends some metadata to it and stores it at the archive. At the time of verification the verifier uses this metadata to verify the integrity of the data. It is important to note that our proof of data integrity protocol just checks the integrity of the data, i.e., whether the data has been illegally modified or deleted. It does not prevent the archive from modifying the data.
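
A single round of this check might look like the following sketch (hypothetical helper, assuming the XOR-based metadata of Chapter 04): the verifier asks the archive for a data byte and its stored metadata byte, then recomputes the expected value from its key.

using System;
using System.Security.Cryptography;

// Illustrative sketch of one verification round under the XOR scheme.
static class IntegrityCheck
{
    // dataByte and storedMeta are returned by the archive for block i.
    public static bool VerifyBlock(int i, byte dataByte, byte storedMeta, byte[] key)
    {
        using (HMACSHA256 h = new HMACSHA256(key))
        {
            byte pad = h.ComputeHash(BitConverter.GetBytes(i))[0];
            // A match means block i is intact; a mismatch exposes tampering.
            return storedMeta == (byte)(dataByte ^ pad);
        }
    }
}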


CHAPTER 03

SYSTEM DESIGN:

Data Flow Diagram / Use Case Diagram / Flow Diagram:

The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system.

The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the system components: the system process, the data used by the process, the external entities that interact with the system, and the information flows in the system.

A DFD shows how information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data moves from input to output. A DFD may be used to represent a system at any level of abstraction and may be partitioned into levels that represent increasing information flow and functional detail.


SDLC:

SPIRAL MODEL:

PROJECT ARCHITECTURE:


UML DIAGRAMS:

USE CASE:


CLASS:


SEQUENCE:


ACTIVITY:


DFD DIAGRAMS:


CHAPTER 04

PROCESS SPECIFICATION (Techniques and Algorithms Used):

ALGORITHM:

META-DATA GENERATION:

Let the verifier V wish to store the file F with the archive. Let this file F, a typical data file the client wishes to store in the cloud, consist of n file blocks, each of m bits. We initially preprocess the file and create metadata to be appended to it.

The metadata derived from each data block mi is encrypted using a suitable algorithm to give new, modified metadata Mi. Without loss of generality we show this process using a simple XOR operation. The encryption method can be improved to provide still stronger protection for the verifier's data.

All the metadata bit blocks generated using the above procedure are concatenated together. This concatenated metadata is appended to the file F before storing it at the cloud server. The file F, along with the appended metadata F̃, is archived with the cloud.
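
The following sketch (hypothetical names; block and metadata sizes simplified to one byte per block) illustrates the preprocessing step just described: derive a metadata byte per block, encrypt it by XOR with a key-derived pad, concatenate, and append to F.

using System;
using System.Security.Cryptography;

// Illustrative sketch: build F~ = F || encrypted metadata.
static class MetadataGenerator
{
    public static byte[] AppendMetadata(byte[] file, int blockSize, byte[] key)
    {
        int n = (file.Length + blockSize - 1) / blockSize;       // number of blocks
        byte[] meta = new byte[n];
        using (HMACSHA256 h = new HMACSHA256(key))
        {
            for (int i = 0; i < n; i++)
            {
                byte mi = file[i * blockSize];                         // metadata from block i
                byte pad = h.ComputeHash(BitConverter.GetBytes(i))[0]; // key-derived pad
                meta[i] = (byte)(mi ^ pad);                            // encrypted metadata Mi
            }
        }
        byte[] archived = new byte[file.Length + n];             // F~ = F with metadata appended
        Buffer.BlockCopy(file, 0, archived, 0, file.Length);
        Buffer.BlockCopy(meta, 0, archived, file.Length, n);
        return archived;
    }
}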

SCREEN SHOTS:

The following application screens were captured (screenshot images omitted):

Owner Registration
Owner Login
Owner Main
File Upload
File Status
File Details
TPA Auditor Login
TPA Auditor Main
File Verification
File Direct Verification
Send Message
Owner File View
File Details
Admin Login
Admin Main
Email to Owner
Owner View

CHAPTER 05

TECHNOLOGY DESCRIPTION:

Software Environment

FEATURES OF. NET

Microsoft .NET is a set of Microsoft software technologies for rapidly building and

integrating XML Web services, Microsoft Windows-based applications, and Web

solutions. The .NET Framework is a language-neutral platform for writing programs that

can easily and securely interoperate. There’s no language barrier with .NET: there are

numerous languages available to the developer including Managed C++, C#, Visual

Basic and JScript. The .NET framework provides the foundation for components to

interact seamlessly, whether locally or remotely on different platforms. It standardizes

common data types and communications protocols so that components created in

different languages can easily interoperate.

“.NET” is also the collective name given to various software components built upon

the .NET platform. These will be both products (Visual Studio.NET and Windows.NET

Server, for instance) and services (like Passport, .NET My Services, and so on).

THE .NET FRAMEWORK

The .NET Framework has two main parts:

1. The Common Language Runtime (CLR).

2. A hierarchical set of class libraries.

The CLR is described as the “execution engine” of .NET. It provides the environment

within which programs run. The most important features are


• Conversion from a low-level assembler-style language, called Intermediate Language (IL), into code native to the platform being executed on.

• Memory management, notably including garbage collection.

• Checking and enforcing security restrictions on the running code.

• Loading and executing programs, with version control and other such features.

The following features of the .NET framework are also worth description:

Managed Code

The code that targets .NET, and which contains certain extra information ("metadata") to describe itself. Whilst both managed and unmanaged code can run in the runtime, only managed code contains the information that allows the CLR to guarantee, for instance, safe execution and interoperability.

Managed Data

With managed code comes managed data. The CLR provides memory allocation and deallocation facilities, and garbage collection. Some .NET languages use managed data by default, such as C#, Visual Basic.NET and JScript.NET, whereas others, namely C++, do not. Targeting the CLR can, depending on the language you're using, impose certain constraints on the features available. As with managed and unmanaged code, one can have both managed and unmanaged data in .NET applications: data that doesn't get garbage collected but instead is looked after by unmanaged code.

Common Type System

The CLR uses something called the Common Type System (CTS) to strictly enforce type-safety. This ensures that all classes are compatible with each other, by describing types in a common way. The CTS defines how types work within the runtime, which enables types in one language to interoperate with types in another language, including cross-language exception handling. As well as ensuring that types are only used in appropriate ways, the runtime also ensures that code doesn't attempt to access memory that hasn't been allocated to it.


Common Language Specification

The CLR provides built-in support for language interoperability. To ensure that you can

develop managed code that can be fully used by developers using any programming

language, a set of language features and rules for using them called the Common

Language Specification (CLS) has been defined. Components that follow these rules and

expose only CLS features are considered CLS-compliant.

THE CLASS LIBRARY:

.NET provides a single-rooted hierarchy of classes, containing over 7000 types. The root

of the namespace is called System; this contains basic types like Byte, Double, Boolean,

and String, as well as Object. All objects derive from System.Object. As well as objects,

there are value types. Value types can be allocated on the stack, which can provide useful

flexibility. There are also efficient means of converting value types to object types if and

when necessary.

The set of classes is pretty comprehensive, providing collections, file, screen, and

network I/O, threading, and so on, as well as XML and database connectivity.

The class library is subdivided into a number of sets (or namespaces), each providing

distinct areas of functionality, with dependencies between the namespaces kept to a

minimum.

LANGUAGES SUPPORTED BY .NET

The multi-language capability of the .NET Framework and Visual Studio .NET enables

developers to use their existing programming skills to build all types of applications and

XML Web services. The .NET framework supports new versions of Microsoft’s old

favorites Visual Basic and C++ (as VB.NET and Managed C++), but there are also a

number of new additions to the family.


Visual Basic .NET has been updated to include many new and improved language

features that make it a powerful object-oriented programming language. These features

include inheritance, interfaces, and overloading, among others. Visual Basic also now

supports structured exception handling, custom attributes and also supports multi-

threading.

Visual Basic .NET is also CLS compliant, which means that any CLS-compliant

language can use the classes, objects, and components you create in Visual Basic .NET.

Managed Extensions for C++ and attributed programming are just some of the

enhancements made to the C++ language. Managed Extensions simplify the task of

migrating existing C++ applications to the new .NET Framework.

C# is Microsoft’s new language. It’s a C-style language that is essentially “C++ for Rapid

Application Development”. Unlike other languages, its specification is just the grammar

of the language. It has no standard library of its own, and instead has been designed with

the intention of using the .NET libraries as its own.

Microsoft Visual J# .NET provides the easiest transition for Java-language developers

into the world of XML Web Services and dramatically improves the interoperability of

Java-language programs with existing software written in a variety of other programming

languages.

ActiveState has created Visual Perl and Visual Python, which enable .NET-aware

applications to be built in either Perl or Python. Both products can be integrated into the

Visual Studio .NET environment. Visual Perl includes support for ActiveState's Perl

Dev Kit.

Other languages for which .NET compilers are available include

FORTRAN

COBOL

Eiffel


Fig. 1: The .NET Framework stack. From top to bottom: ASP.NET and XML Web Services; Windows Forms; Base Class Libraries; Common Language Runtime; Operating System.

C#.NET is also compliant with the CLS (Common Language Specification) and supports structured exception handling. The CLS is a set of rules and constructs that are supported by the CLR (Common Language Runtime). The CLR is the runtime environment provided by the .NET Framework; it manages the execution of code and also makes the development process easier by providing services.

C#.NET is a CLS-compliant language. Any objects, classes, or components created in C#.NET can be used in any other CLS-compliant language. In addition, we can use objects, classes, and components created in other CLS-compliant languages in C#.NET. The use of the CLS ensures complete interoperability among applications, regardless of the languages used to create them.

CONSTRUCTORS AND DESTRUCTORS:

Constructors are used to initialize objects, whereas destructors are used to destroy them. In other words, destructors are used to release the resources allocated to the object. In C#.NET the sub finalize procedure is available. The sub finalize procedure is used to complete the tasks that must be performed when an object is destroyed, and it is called automatically when an object is destroyed. In addition, the sub finalize procedure can be called only from the class it belongs to or from derived classes.
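
For illustration, a minimal constructor/finalizer pair in C# (the finalizer is what the text calls the sub finalize procedure):

using System;

class Demo
{
    // Constructor: initializes the object.
    public Demo()
    {
        Console.WriteLine("Constructor: object initialized.");
    }

    // Finalizer (destructor): called automatically when the object is destroyed.
    ~Demo()
    {
        Console.WriteLine("Finalizer: releasing resources.");
    }
}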


GARBAGE COLLECTION

Garbage collection is another feature of C#.NET. The .NET Framework monitors allocated resources, such as objects and variables, and automatically releases memory for reuse by destroying objects that are no longer in use. In C#.NET, the garbage collector checks for objects that are not currently in use by applications. When the garbage collector comes across an object that is marked for garbage collection, it releases the memory occupied by the object.
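
A small demonstration of this behaviour, reusing the hypothetical Demo class above:

using System;

class GcDemo
{
    static void Main()
    {
        Demo d = new Demo();
        d = null;                        // no longer in use: eligible for collection
        GC.Collect();                    // request a collection (normally automatic)
        GC.WaitForPendingFinalizers();   // let the finalizer run before exiting
    }
}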

OVERLOADING

Overloading is another feature of C#. Overloading enables us to define multiple procedures with the same name, where each procedure has a different set of arguments. Besides using overloading for procedures, we can use it for constructors and properties in a class.
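
For example, an Add method can be overloaded on both the number and the type of its arguments:

class Calculator
{
    // Three procedures share one name; the compiler picks by argument list.
    public int Add(int a, int b) { return a + b; }
    public double Add(double a, double b) { return a + b; }
    public int Add(int a, int b, int c) { return a + b + c; }
}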

MULTITHREADING:

C#.NET also supports multithreading. An application that supports multithreading can handle multiple tasks simultaneously; we can use multithreading to decrease the time taken by an application to respond to user interaction.
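
A minimal example: a worker thread runs alongside the main thread, so the application can keep responding while the task proceeds.

using System;
using System.Threading;

class ThreadDemo
{
    static void Main()
    {
        Thread worker = new Thread(delegate ()
        {
            for (int i = 0; i < 3; i++) Console.WriteLine("worker: task " + i);
        });
        worker.Start();                         // runs concurrently with Main
        Console.WriteLine("main: still responsive");
        worker.Join();                          // wait for the worker to finish
    }
}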

STRUCTURED EXCEPTION HANDLING

C#.NET supports structured exception handling, which enables us to detect and remove errors at runtime. In C#.NET, we use Try...Catch...Finally statements to create exception handlers. Using them, we can create robust and effective exception handlers to improve the performance of our application.
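
A typical Try...Catch...Finally handler looks like this:

using System;

class ExceptionDemo
{
    static void Main()
    {
        try
        {
            int[] data = new int[2];
            data[5] = 1;                       // raises IndexOutOfRangeException
        }
        catch (IndexOutOfRangeException ex)
        {
            Console.WriteLine("Handled: " + ex.Message);
        }
        finally
        {
            Console.WriteLine("Finally block always runs.");  // cleanup goes here
        }
    }
}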

THE .NET FRAMEWORK

The .NET Framework is a new computing platform that simplifies application development in the highly distributed environment of the Internet.


FEATURES OF SQL-SERVER

The OLAP Services feature available in SQL Server version 7.0 is now called

SQL Server 2000 Analysis Services. The term OLAP Services has been replaced with the

term Analysis Services. Analysis Services also includes a new data mining component.

The Repository component available in SQL Server version 7.0 is now called Microsoft

SQL Server 2000 Meta Data Services. References to the component now use the term

Meta Data Services. The term repository is used only in reference to the repository

engine within Meta Data Services.

A SQL-SERVER database consists of the following types of objects:

1. TABLE

2. QUERY

3. FORM

4. REPORT

5. MACRO

TABLE:

A database is a collection of data about a specific topic.

VIEWS OF TABLE:

We can work with a table in two views:

1. Design View

2. Datasheet View

Design View

To build or modify the structure of a table we work in the table design view, where we can specify what kind of data the table will hold.

Datasheet View

To add, edit or analyse the data itself, we work in the table's datasheet view mode.


QUERY:

A query is a question that has to be asked of the data. Access gathers data that answers the question from one or more tables. The data that makes up the answer is either a dynaset (if you edit it) or a snapshot (which cannot be edited). Each time we run a query, we get the latest information in the dynaset. Access either displays the dynaset or snapshot for us to view or performs an action on it, such as deleting or updating.


CHAPTER 06

TYPES OF TESTING:

BLACK & WHITE BOX TESTING:

Black Box Testing

Black box testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.

White Box Testing

White box testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level.

UNIT TESTING:

Unit testing is usually conducted as part of a combined code and unit test phase of the

software lifecycle, although it is not uncommon for coding and unit testing to be

conducted as two distinct phases.

Test strategy and approach

Field testing will be performed manually and functional tests will be written in

detail.


Test objectives

All field entries must work properly.

Pages must be activated from the identified link.

The entry screen, messages and responses must not be delayed.

Features to be tested

Verify that the entries are of the correct format

No duplicate entries should be allowed

All links should take the user to the correct page.

SYSTEM TESTING:

The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test, and each test type addresses a specific testing requirement.

INTEGRATION TESTING:

Software integration testing is the incremental integration testing of two or more

integrated software components on a single platform to produce failures caused by

interface defects.

The task of the integration test is to check that components or software applications, e.g. components in a software system or, one step up, software applications at the company level, interact without error.

Test Results:

All the test cases mentioned above passed successfully. No defects encountered.


CHAPTER 07

CONCLUSION:

In this paper we have worked to facilitate the client in getting a proof of the integrity of the data which he wishes to store in cloud storage servers with bare minimum cost and effort. Our scheme was developed to reduce the computational and storage overhead of the client as well as to minimize the computational overhead of the cloud storage server. We also minimized the size of the proof of data integrity so as to reduce network bandwidth consumption. Many of the schemes proposed earlier require the archive to perform tasks that need a lot of computational power to generate the proof of data integrity. But in our scheme the archive just needs to fetch and send a few bits of data to the client.

LIMITATIONS & FUTURE ENHANCEMENTS :

• Apart from reducing storage costs, outsourcing data to the cloud also reduces maintenance.

• It avoids local storage of data.

• It reduces the costs of storage, maintenance and personnel.

• It reduces the chance of losing data through hardware failures.

• It assures the customer that the archive is not cheating the owner.


REFERENCE & BIBLIOGRAPHY:

Good teachers are worth more than a thousand books; we have them in our department.

References Made From:

1. Beginning ASP.NET 4: in C# and VB by Imar Spaanjaars.

2. ASP.NET 4 Unleashed by Stephen Walther.

3. Programming ASP.NET 3.5 by Jesse Liberty, Dan Maharry, Dan Hurwitz.

4. Beginning ASP.NET 3.5 in C# 2008: From Novice to Professional, Second Edition by Matthew MacDonald.

5. Amazon Web Services (AWS), online at http://aws.amazon.com.

6. Google App Engine, online at http://code.google.com/appengine/.

7. Microsoft Azure, http://www.microsoft.com/azure/.

8. A. Agrawal et al. WS-BPEL Extension for People (BPEL4People), Version 1.0, 2007.

9. M. Amend et al. Web Services Human Task (WS-HumanTask), Version 1.0, 2007.

Sites Referred:

http://www.asp.net.com

http://www.dotnetspider.com/

http://www.dotnetspark.com

Abbreviations:

POR - Proof of Retrievability

CLS - Common Language Specification

PDA - Personal Digital Assistant

SOURCE CODE

using System;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;

// Landing page: routes each role to its own login or registration page.
public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
    }

    protected void Button1_Click(object sender, EventArgs e)
    {
        Response.Redirect("AdminLogin.aspx");
    }

    protected void Button2_Click(object sender, EventArgs e)
    {
        Response.Redirect("OwnerLogin.aspx");
    }

    protected void Button3_Click(object sender, EventArgs e)
    {
        Response.Redirect("OwnerRegistration.aspx");
    }

    protected void Button4_Click(object sender, EventArgs e)
    {
        Response.Redirect("tpalogin.aspx");
    }
}

using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Data.SqlClient;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;

// Lists the files archived by the logged-in owner, filterable by extension.
public partial class OwnerFileDetails : System.Web.UI.Page
{
    SqlConnection con = new SqlConnection(ConfigurationManager.AppSettings["ConnectionString"]);

    protected void Page_Load(object sender, EventArgs e)
    {
        con.Open();
        if (!IsPostBack)
        {
            // Populate the extension filter with the owner's distinct file types.
            SqlDataAdapter adp = new SqlDataAdapter("Select distinct fext from filearchive where fowner='" + (string)Session["ownerid"] + "'", con);
            DataSet ds = new DataSet();
            adp.Fill(ds);
            for (int i = 0; i < ds.Tables[0].Rows.Count; i++)
            {
                DropDownList1.Items.Add(ds.Tables[0].Rows[i]["fext"].ToString());
            }
        }
        SqlDataAdapter adp1 = new SqlDataAdapter("Select * from filearchive where fowner='" + (string)Session["ownerid"] + "'", con);
        DataSet ds1 = new DataSet();
        adp1.Fill(ds1);
        GridView1.DataSource = ds1;
        GridView1.DataBind();

        Label20.Text = Convert.ToString("(" + ds1.Tables[0].Rows.Count + ")");

        con.Close();
    }

    protected void DropDownList1_SelectedIndexChanged(object sender, EventArgs e)
    {
        bindgrid();
    }

    protected void GridView1_PageIndexChanging(object sender, GridViewPageEventArgs e)
    {
        GridView1.PageIndex = e.NewPageIndex;
        bindgrid();
    }

    // Rebind the grid according to the selected extension filter.
    public void bindgrid()
    {
        if (DropDownList1.SelectedItem.Text == "All")
        {
            SqlDataAdapter adp1 = new SqlDataAdapter("Select * from filearchive where fowner='" + (string)Session["ownerid"] + "'", con);
            DataSet ds1 = new DataSet();
            adp1.Fill(ds1);
            GridView1.DataSource = ds1;
            GridView1.DataBind();
        }
        else
        {
            SqlDataAdapter adp = new SqlDataAdapter("Select * from filearchive where fext='" + DropDownList1.SelectedItem.Text + "' and fowner='" + (string)Session["ownerid"] + "'", con);
            DataSet ds = new DataSet();
            adp.Fill(ds);
            GridView1.DataSource = ds;
            GridView1.DataBind();
        }
    }
}

using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Data.SqlClient;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;

// Shows one archived file's details; releases the download only after the
// owner supplies the correct cryptographic key.
public partial class OwnerFilesView : System.Web.UI.Page
{
    SqlConnection con = new SqlConnection(ConfigurationManager.AppSettings["ConnectionString"]);
    string fileid, strmetadata;
    string enckey, enckey1;
    Cryptography cs = new Cryptography();

    protected void Page_Load(object sender, EventArgs e)
    {
        fileid = Request.Params["ID"];
        SqlDataAdapter adp = new SqlDataAdapter("Select * from filearchive where fid='" + fileid + "'", con);
        DataSet ds = new DataSet();
        adp.Fill(ds);
        Label4.Text = ds.Tables[0].Rows[0]["fid"].ToString();
        Label7.Text = ds.Tables[0].Rows[0]["ffilename"].ToString();
        Label10.Text = ds.Tables[0].Rows[0]["fsubject"].ToString();
        Label13.Text = ds.Tables[0].Rows[0]["fext"].ToString();
        Label16.Text = ds.Tables[0].Rows[0]["fsizeinkb"].ToString();
        Label19.Text = ds.Tables[0].Rows[0]["fdatetime"].ToString();
        Label23.Text = ds.Tables[0].Rows[0]["fverify"].ToString();
        strmetadata = ds.Tables[0].Rows[0]["fmetadata"].ToString();
        // Truncate long metadata for display.
        if (strmetadata.Length > 40)
        {
            Label26.Text = strmetadata.Substring(0, 40) + "..";
        }
        else
        {
            Label26.Text = strmetadata;
        }
        Label29.Text = ds.Tables[0].Rows[0]["keyrequest"].ToString();
        Session["key"] = ds.Tables[0].Rows[0]["fenccryptokey"].ToString();
        enckey = (string)Session["key"];
        Session["key1"] = cs.Decrypt(enckey);
        Session["path"] = ds.Tables[0].Rows[0]["filepath"].ToString();
    }

    protected void Button2_Click(object sender, EventArgs e)
    {
        Response.Redirect("OwnerFileDetails.aspx");
    }

    protected void LinkButton1_Click(object sender, EventArgs e)
    {
        Response.Redirect("OwnerFileDetails.aspx");
    }

    protected void ImageButton2_Click(object sender, ImageClickEventArgs e)
    {
        Page_Load(null, EventArgs.Empty);
    }

    protected void btncheck_Click(object sender, EventArgs e)
    {
        ModalPopupExtender1.Show();
        // Enable the download only if the supplied key matches the stored one.
        enckey1 = (string)Session["key"];
        if (TextBox2.Text == enckey1)
        {
            TextBox3.Text = (string)Session["key1"];
            TextBox2.Enabled = false;
            btndownload.Enabled = true;
        }
        else
        {
            string myStringVariable1 = "Cryptographic Key Error.";
            ClientScript.RegisterStartupScript(this.GetType(), "myalert", "alert('" + myStringVariable1 + "');", true);
        }
    }

    protected void LinkButton2_Click(object sender, EventArgs e)
    {
        ModalPopupExtender1.Show();
        TextBox2.Text = (string)Session["key"];
    }

    protected void btndownload_Click(object sender, EventArgs e)
    {
        Panel3.Visible = false;
        SqlCommand cmd = new SqlCommand("select * from filearchive where fid = '" + Label4.Text + "'", con);
        DataTable dt = GetData(cmd);
        if (dt != null)
        {
            download(dt);
        }
    }

    private DataTable GetData(SqlCommand cmd)
    {
        DataTable dt = new DataTable();
        SqlConnection con = new SqlConnection(ConfigurationManager.AppSettings["ConnectionString"]);
        SqlDataAdapter sda = new SqlDataAdapter();
        cmd.CommandType = CommandType.Text;
        cmd.Connection = con;
        try
        {
            con.Open();
            sda.SelectCommand = cmd;
            sda.Fill(dt);
            return dt;
        }
        catch
        {
            return null;
        }
        finally
        {
            con.Close();
            sda.Dispose();
            con.Dispose();
        }
    }

    // Streams the stored file bytes back to the browser as an attachment.
    private void download(DataTable dt)
    {
        Byte[] bytes = (Byte[])dt.Rows[0]["filebytes"];
        Response.Buffer = true;
        Response.Charset = "";
        Response.Cache.SetCacheability(HttpCacheability.NoCache);
        Response.ContentType = dt.Rows[0]["fext"].ToString();
        Response.AddHeader("content-disposition", "attachment;filename=" + dt.Rows[0]["ffilename"].ToString());
        Response.BinaryWrite(bytes);
        Response.Flush();
        Response.End();
    }
}

using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Data.SqlClient;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;

// Third-party auditor (TPA) view: lists every archived file for verification.
public partial class tpaverify : System.Web.UI.Page
{
    SqlConnection con = new SqlConnection(ConfigurationManager.AppSettings["ConnectionString"]);

    protected void Page_Load(object sender, EventArgs e)
    {
        bindgrid();
    }

    protected void GridView1_PageIndexChanging(object sender, GridViewPageEventArgs e)
    {
        GridView1.PageIndex = e.NewPageIndex;
        bindgrid();
    }

    public void bindgrid()
    {
        SqlDataAdapter adp1 = new SqlDataAdapter("Select * from filearchive", con);
        DataSet ds1 = new DataSet();
        adp1.Fill(ds1);
        GridView1.DataSource = ds1;
        GridView1.DataBind();
    }
}


using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using System.Xml.Linq;

// Master page for the TPA area: highlights the active menu link,
// tracking which page is current through session flags.
public partial class tpamaster : System.Web.UI.MasterPage
{
    string link1, link2, link3;
    string seslink1, seslink2, seslink3;

    protected void Page_Load(object sender, EventArgs e)
    {
        Label3.Text = "Welcome, " + "&nbsp&nbsp" + Session["tpa"] + " !";

        LinkButton1.BackColor = System.Drawing.ColorTranslator.FromHtml("#B40404");

        seslink1 = (string)Session["tpalink1"];
        seslink2 = (string)Session["tpalink2"];
        seslink3 = (string)Session["tpalink3"];

        if (seslink1 != null)
        {
            // Highlight link 1, reset the others.
            LinkButton1.BackColor = System.Drawing.ColorTranslator.FromHtml("#B40404");
            LinkButton2.BackColor = System.Drawing.ColorTranslator.FromHtml("#FE2E2E");
            LinkButton3.BackColor = System.Drawing.ColorTranslator.FromHtml("#FE2E2E");
            LinkButton4.BackColor = System.Drawing.ColorTranslator.FromHtml("#FE2E2E");
        }

        if (seslink2 != null)
        {
            LinkButton1.BackColor = System.Drawing.ColorTranslator.FromHtml("#FE2E2E");
            LinkButton2.BackColor = System.Drawing.ColorTranslator.FromHtml("#B40404");
            LinkButton3.BackColor = System.Drawing.ColorTranslator.FromHtml("#FE2E2E");
            LinkButton4.BackColor = System.Drawing.ColorTranslator.FromHtml("#FE2E2E");
        }

        if (seslink3 != null)
        {
            LinkButton1.BackColor = System.Drawing.ColorTranslator.FromHtml("#FE2E2E");
            LinkButton2.BackColor = System.Drawing.ColorTranslator.FromHtml("#FE2E2E");
            LinkButton3.BackColor = System.Drawing.ColorTranslator.FromHtml("#B40404");
            LinkButton4.BackColor = System.Drawing.ColorTranslator.FromHtml("#FE2E2E");
        }
    }

    protected void LinkButton1_Click(object sender, EventArgs e)
    {
        link1 = "yes";
        Session["tpalink1"] = link1;
        Session.Remove("tpalink2");
        Session.Remove("tpalink3");
        Response.Redirect("tpamain.aspx");
    }

    protected void LinkButton2_Click(object sender, EventArgs e)
    {
        link2 = "yes";
        Session["tpalink2"] = link2;
        Session.Remove("tpalink1");
        Session.Remove("tpalink3");
        Response.Redirect("tpaverify.aspx");
    }

    protected void LinkButton3_Click(object sender, EventArgs e)
    {
        link3 = "yes";
        Session["tpalink3"] = link3;
        Session.Remove("tpalink1");
        Session.Remove("tpalink2");
        Response.Redirect("tpafiledetails.aspx");
    }

    protected void LinkButton4_Click(object sender, EventArgs e)
    {
        Session.Remove("tpalink1");
        Session.Remove("tpalink2");
        Session.Remove("tpalink3");
        Response.Redirect("Default.aspx");
    }
}