Project Documentation

Upload: beeram-lekha

Posted on 06-Apr-2018

  • 8/3/2019 Project Documentation 1

    1/68

    TABLE OF CONTENTS

    Abstract

    1. Introduction

    1.1 Purpose of Project

    1.2 Scope of Project

    1.3 Goals/ Aim

    1.4 Features of our project ( Advantages)

    2. LITERATURE SURVEY

    2.1 Existing System

    2.2 Proposed System

    2.3 Modules Description

    2.4 Feasibility Study

    2.4.1 Technical Feasibility

    2.4.2 Operational Feasibility

    2.4.3 Economic Feasibility

    2.5 SDLC Model

    2.5.1 The Incremental, Iterative Software Engineering Life Cycle

    2.5.2 Waterfall Life Cycle Model

    2.5.3 Prototype Model

    3. ANALYSIS

    3.1 Software Requirement Specification

    3.1.1 Software requirement

    3.1.2 Hardware requirement

    3.1.3 Communications interfaces

    4. Languages of implementation

    4.1 Microsoft.NET Framework

    4.2 Features of the Common Language Runtime

    4.3 .NET Framework Class Library

    4.4 ADO.NET Overview

    5. SOFTWARE DESIGN

    5.1 Introduction


    5.2 Design Overview (Application architecture and Software architecture)

    5.3 UML (Unified Modeling Language)

    5.3.1 Introduction To UML

    5.3.2 Goals of UML

    5.3.3 Explanation of UML Diagrams

    5.3.4 UML DIAGRAMS

    5.4 DFD (data flow diagrams)

    6. Code Templates

    7. TESTING

    7.1 Testing Introduction

    7.2 Testing Strategies

    7.2.1 Unit Testing

    7.2.2 Integration Testing

    7.2.3 White Box Testing

    7.2.4 Black Box Testing

    7.2.5 System Testing

    7.3 Design of test cases and scenarios

    8. IMPLEMENTATION & RESULTS

    8.1 Running Applications

    8.2 Output Screens

    9. CONCLUSION: First Paragraph - Project Conclusion

    Second Paragraph - Future enhancement

    10. REFERENCES

    Abstract


    Handling digital images in their many forms is a difficult challenge in the present world. Image interpolation is a technique for changing the resolution of an image. There are many existing techniques for interpolating an image, but the edges in the resulting image are not very sharp and the blur is not removed. For this reason we are developing a technique called Soft-decision Adaptive Interpolation (SAI), with which interpolation of the image is very effective: blur in the image is reduced and the edges remain sharp.

    The main uses of this technique can be found in the areas below:

    1. Print magazines when they want to increase the quality of the image.

    2. Wall posters

    3. Home Based Images

    4. Consumer electronics

    5. Medical imaging

    1. Introduction


    This project is mainly intended to maintain image clarity even after the image is interpolated. For this we use the SAI (Soft-decision Adaptive Interpolation) technique. In existing systems two techniques are used, bilinear and bicubic. Their drawback is that after interpolation the image loses clarity and blur forms, which makes the edges of the image unclear. To overcome all these drawbacks the SAI technique is proposed, which is more effective than the existing system.

    1.1 Purpose of Project:

    The main purpose of this project is to reduce processing time and increase efficiency in maintaining the clarity of the image. Blurring in the image is reduced by using the SAI interpolation technique. The interpolation is done by considering four pixels at a time.

    1.2 Scope of Project:

    This project is used to interpolate an image into a higher resolution. Existing systems basically use two techniques, bilinear and bicubic. In these two techniques each and every pixel must be considered, which takes a lot of time. In this project, only a particular four pixels are considered at a time, which reduces the time very effectively.
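    As a rough illustration of the claim above, compare the number of source pixels read per output pixel: bicubic reads a 16-pixel neighbourhood while the proposed scheme reads 4. A back-of-the-envelope Python sketch (the image size and function name are illustrative assumptions, not the project's code):

```python
# Rough comparison between bicubic (16 source pixels per output pixel)
# and a 4-pixel scheme such as the one described above.
def pixel_reads(width, height, scale, pixels_per_output):
    """Total source-pixel reads needed to fill a (width*scale) x (height*scale) image."""
    return (width * scale) * (height * scale) * pixels_per_output

w, h, scale = 640, 480, 2
bicubic = pixel_reads(w, h, scale, 16)
four_px = pixel_reads(w, h, scale, 4)
print(bicubic // four_px)  # bicubic touches 4x as many source pixels
```

    This says nothing about the quality of the result, only about the per-pixel work; the quality argument for SAI is made in the literature survey below.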

    1.3 Goals / Aim:

    This project helps interpolate an image into higher resolutions effectively. The edges can be shown very effectively and the blur can be reduced to the maximum extent.

    1.4 Features of our project (Advantages):

    The features or advantages of our application are as follows:

    Considering four pixels at a time rather than considering 16 pixels at a time.

    Reducing the blur in the image after resizing.

    Edges will be shown very effectively.

    Time is reduced.


    2. LITERATURE SURVEY

    2.1 Existing System:

    In the existing system, two basic techniques are used: bilinear and bicubic. In bilinear interpolation, the pixels are placed linearly in the destination without any further manipulation of the image. In bicubic interpolation, 16 pixels are considered when calculating each destination pixel value, which increases the complexity of computing the destination image.
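    To make the comparison concrete, here is a minimal bilinear-interpolation sketch in Python (the image is represented as a plain 2-D list of grey values, which is an illustrative simplification, not the project's code):

```python
def bilinear(img, x, y):
    """Sample a 2-D list `img` at fractional coordinates (x, y)
    by linearly weighting the four surrounding pixels."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)  # clamp at the right/bottom border
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

img = [[0, 100],
       [100, 200]]
print(bilinear(img, 0.5, 0.5))  # 100.0, the average of the four corners
```

    Averaging like this is exactly what softens edges: a sharp black/white boundary becomes a grey ramp, which is the blur the SAI technique aims to avoid.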

    2.2 Proposed System:

    To overcome the drawbacks of the existing system, the SAI technique is proposed, in which 4 pixels are considered when calculating the destination pixel value. This reduces the time complexity of generating the interpolated image.

    2.3 Modules Description:

    LOGIN MODULE:

    The user provides a user name and password to use the application. New users can sign up.

    INPUT IMAGE MODULE:

    In this module we provide an image as input. The application takes the image from the location specified by the user. The provided image must be converted into a bitmap image so that the pixel values can be accessed for later modification. The image height and width are stored in separate variables for further manipulation.

    ANALYSE MISSING PIXELS MODULE:

    In this module a new image is created at the new resolution. The new height and width are calculated from the resize factor given by the user. The two images must then be locked, which ensures that no other process is accessing them.
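    The size calculation described here can be sketched in a few lines of Python (the names and sample sizes are illustrative assumptions, not the project's code):

```python
def new_dimensions(width, height, resize_factor):
    """Compute the destination image size from the user-supplied resize factor."""
    return int(width * resize_factor), int(height * resize_factor)

src_w, src_h = 320, 240
dst_w, dst_h = new_dimensions(src_w, src_h, 2)
# Allocate the destination image; None marks the pixels still to be interpolated.
dst = [[None] * dst_w for _ in range(dst_h)]
print(dst_w, dst_h)  # 640 480
```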

    PIXEL INTERPOLATION MODULE:


    To perform soft-decision estimation we follow the SAI technique which operates on the

    missing pixels of the destination image. Here the interpolation will be done considering four

    pixels at a time.
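    A simplified sketch of estimating one missing pixel from its four known diagonal neighbours follows. This illustrates only the four-pixel idea; the actual SAI technique chooses the weights adaptively by soft-decision estimation rather than using the fixed equal weights assumed here:

```python
def estimate_missing(img, x, y):
    """Estimate the missing pixel at (x, y) from its four diagonal
    neighbours using equal weights (a stand-in for SAI's adaptive weights)."""
    neighbours = [img[y - 1][x - 1], img[y - 1][x + 1],
                  img[y + 1][x - 1], img[y + 1][x + 1]]
    return sum(neighbours) / 4

# A 3x3 patch where the centre pixel is missing (None).
img = [[10, 0, 30],
       [0, None, 0],
       [50, 0, 70]]
print(estimate_missing(img, 1, 1))  # 40.0
```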

    2.4 FEASIBILITY STUDY:

    2.4.1 TECHNICAL FEASIBILITY:

    Evaluating technical feasibility is the trickiest part of a feasibility study. This is because, at this point in time, not much detailed design of the system exists, making it difficult to assess issues like performance and costs (on account of the kind of technology to be deployed).

    A number of issues have to be considered while doing a technical analysis.

    i) Understand the different technologies involved in the proposed system:

    Before commencing the project, we have to be very clear about which technologies are required for the development of the new system.

    ii) Find out whether the organization currently possesses the required technologies:

    Is the required technology available with the organization?

    If so, is the capacity sufficient?

    For instance

    Will the current printer be able to handle the new reports and forms required for the new

    system?

    2.4.2 OPERATIONAL FEASIBILITY:

    Proposed projects are beneficial only if they can be turned into information systems that will meet the organization's operating requirements. Simply stated, this test of feasibility asks whether the system will work when it is developed and installed. Are there major barriers to implementation? Here are questions that will help test the operational feasibility of a project:

    Is there sufficient support for the project from management and from users? If the current system is well liked and used to the extent that persons will not be


    able to see reasons for change, there may be resistance.

    Are the current business methods acceptable to the users? If they are not, users may welcome a change that will bring about a more operational and useful system.

    Have the users been involved in the planning and development of the project? Early involvement reduces the chances of resistance to the system in general and increases the likelihood of a successful project.

    Since the proposed system was to help reduce the hardships encountered in the existing manual system, the new system was considered operationally feasible.

    2.4.3 ECONOMIC FEASIBILITY:

    Economic feasibility attempts to weigh the costs of developing and implementing a new system against the benefits that would accrue from having the new system in place. This feasibility study gives top management the economic justification for the new system.

    A simple economic analysis that gives an actual comparison of costs and benefits is much more meaningful in this case. In addition, it proves to be a useful point of reference against which to compare actual costs as the project progresses. There can be various types of intangible benefits on account of automation. These include increased customer satisfaction, improvement in product quality, better decision making, timeliness of information, expedited activities, improved accuracy of operations, better documentation and record keeping, faster retrieval of information, and better employee morale.


    2.5 SDLC Model:

    2.5.1 THE INCREMENTAL, ITERATIVE SOFTWARE ENGINEERING LIFE CYCLE:

    Defining and constructing credit card validation systems will uncover many requirements that may be difficult to capture at the outset. Instead, knowledge of the system and its requirements grows as work progresses; the whole software engineering process is designed to uncover details and incompatibilities in the requirements that may not be obvious to customers and bankers at the outset.

    Software is developed in several increments, with additional increments built and delivered in succession; the system normally evolves as successive new versions are delivered. The development of the first version from scratch, called green-field development, is a special case of incremental development. The development of the first increment is an important activity, since it establishes the architectural base that must last for the entire system's lifetime.

    2.5.2 WATERFALL LIFECYCLE MODEL:

    The waterfall model states that the phases (analysis, design, coding, testing, support) are organized in a linear order, and each phase should be accomplished entirely before the next phase begins.

    The phases proceed step by step: first the analysis phase is completed and its output is produced at the end of that phase; this output is then given as input to the design phase, which generates all the design steps from it. In the same way all the phases are processed and produce their outputs, and at each stage we find out whether the project is on the right path or not. If it is not, the project may be discarded or some other action taken so that it can continue. This model is the most commonly used, and is also known as the linear sequential lifecycle model.


    ADVANTAGES:

    1. This model is very easy to use and implement.

    2. Each phase is completed and processed one at a time.

    3. This model works better for smaller projects, provided the requirements are well understood.

    4. Each phase has deliverables, and these must be reviewed.

    DISADVANTAGES:

    1. If the requirements gathered are inaccurate, then the final product is inaccurate, and the error only becomes known in the final phase of the model; such errors cannot be detected in any earlier phase.

    2. For long, object-oriented, complex and ongoing projects it is a poor model.

    3. This model carries high risk.


    Fig1: Waterfall Lifecycle Model

    (Source: http://www.freetutes.com/systemanalysis/sa2-waterfall-software-life-cycle.html)

    2.5.3 PROTOTYPE MODEL:

    In this model the requirements are gathered first, and a prototype is developed according to the requirements. This prototype is a quick design that goes through coding, design and testing, but these phases are not done in detail. On seeing this prototype the client gets the feel of a real system, and so understands the entire requirements of the system.

    ADVANTAGES:

    1. During the development process the developers are actively engaged.

    2. The prototype is used by the users to gain a good understanding of the methodology.

    3. User involvement is increased and improved.

    4. Flaws and faults are identified early.

    5. The users' opinion of the product is known early, which leads to an improved system.

    DISADVANTAGES:

    1. This model focuses on design rather than functionality.

    2. The model is built first and errors are evaluated later, which becomes a complex process.

    3. The model is also known as the throw-away prototype.

    4. More time is spent on developing the prototype, which results in delay of the final product.


    Fig2: Prototyping Methodology

    (Source: http://www.testingbrain.com/TOOLS/TESTING_TOOLS.html)

    (Figure labels: Requirements Gathering → Quick Design → Build Prototype → Customer Evaluation of the Prototype → Refine Requirements → Design → Implement → Test → Maintain)


    3. ANALYSIS

    3.1 SOFTWARE REQUIREMENT SPECIFICATION:

    3.1.1 SOFTWARE INTERFACES:

    Microsoft C# 3.0

    Microsoft Visual Studio 2008 IDE

    Microsoft Windows XP

    Microsoft .NET Framework 3.0

    3.1.2 HARDWARE INTERFACES:

    Processor: Pentium-III (or) Higher

    Ram: 1GB (or) Higher

    Hard disk: 40GB

    3.1.3 COMMUNICATIONS INTERFACES:

    Not applicable


    4. LANGUAGES OF IMPLEMENTATION

    4.1 Microsoft .NET Framework:

    The .NET Framework is a new computing platform that simplifies application

    development in the highly distributed environment of the Internet. The .NET Framework is

    designed to fulfill the following objectives:

    To provide a consistent object-oriented programming environment whether object code is

    stored and executed locally, executed locally but Internet-distributed, or executed

    remotely.

    To provide a code-execution environment that minimizes software deployment and

    versioning conflicts.

    Fig3: Microsoft.NET Framework


    To provide a code-execution environment that guarantees safe execution of code, including code created by an unknown or semi-trusted third party.

    To provide a code-execution environment that eliminates the performance problems of

    scripted or interpreted environments.

    To make the developer experience consistent across widely varying types of applications,

    such as Windows-based applications and Web-based applications.

    To build all communication on industry standards to ensure that code based on the .NET

    Framework can integrate with any other code.

    The .NET Framework has two main components: the common language runtime and the .NET

    Framework class library. The common language runtime is the foundation of the .NET

    Framework. You can think of the runtime as an agent that manages code at execution time,

    providing core services such as memory management, thread management, and remoting, while

    also enforcing strict type safety and other forms of code accuracy that ensure security and

    robustness. In fact, the concept of code management is a fundamental principle of the runtime.

    Code that targets the runtime is known as managed code, while code that does not target the runtime is known as unmanaged code. The class library, the other main component of the .NET

    Framework, is a comprehensive, object-oriented collection of reusable types that you can use to

    develop applications ranging from traditional command-line or graphical user interface (GUI)

    applications to applications based on the latest innovations provided by ASP.NET, such as Web

    Forms and XML Web services.

    The .NET Framework can be hosted by unmanaged components that load the common language

    runtime into their processes and initiate the execution of managed code, thereby creating a

    software environment that can exploit both managed and unmanaged features. The .NET

    Framework not only provides several runtime hosts, but also supports the development of third-

    party runtime hosts.


    For example, ASP.NET hosts the runtime to provide a scalable, server-side environment for

    managed code. ASP.NET works directly with the runtime to enable Web Forms applications and

    XML Web services, both of which are discussed later in this topic.

    Internet Explorer is an example of an unmanaged application that hosts the runtime (in the form

    of a MIME type extension). Using Internet Explorer to host the runtime enables you to embed

    managed components or Windows Forms controls in HTML documents. Hosting the runtime in

    this way makes managed mobile code (similar to Microsoft ActiveX controls) possible, but

    with significant improvements that only managed code can offer, such as semi-trusted execution

    and secure isolated file storage.

    The following illustration shows the relationship of the common language runtime and the class

    library to your applications and to the overall system. The illustration also shows how managed

    code operates within a larger architecture.

    4.2 Features of the Common Language Runtime:

    The common language runtime manages memory, thread execution, code execution, code safety

    verification, compilation, and other system services. These features are intrinsic to the managed

    code that runs on the common language runtime.

    With regards to security, managed components are awarded varying degrees of trust, depending

    on a number of factors that include their origin (such as the Internet, enterprise network, or local

    computer). This means that a managed component might or might not be able to perform file-

    access operations, registry-access operations, or other sensitive functions, even if it is being used

    in the same active application.

    The runtime enforces code access security. For example, users can trust that an executable

    embedded in a Web page can play an animation on screen or sing a song, but cannot access their personal data, file system, or network. The security features of the runtime thus enable legitimate

    Internet-deployed software to be exceptionally feature rich.

    The runtime also enforces code robustness by implementing a strict type- and code-verification

    infrastructure called the common type system (CTS). The CTS ensures that all managed code is


    self-describing. The various Microsoft and third-party language compilers generate managed

    code that conforms to the CTS. This means that managed code can consume other managed

    types and instances, while strictly enforcing type fidelity and type safety.

    In addition, the managed environment of the runtime eliminates many common software issues.

    For example, the runtime automatically handles object layout and manages references to objects,

    releasing them when they are no longer being used. This automatic memory management

    resolves the two most common application errors, memory leaks and invalid memory references.

    The runtime also accelerates developer productivity. For example, programmers can write

    applications in their development language of choice, yet take full advantage of the runtime, the

    class library, and components written in other languages by other developers. Any compiler

    vendor who chooses to target the runtime can do so. Language compilers that target the .NET

    Framework make the features of the .NET Framework available to existing code written in that

    language, greatly easing the migration process for existing applications.

    While the runtime is designed for the software of the future, it also supports software of today

    and yesterday. Interoperability between managed and unmanaged code enables developers to

    continue to use necessary COM components and DLLs.

    The runtime is designed to enhance performance. Although the common language runtime

    provides many standard runtime services, managed code is never interpreted. A feature called

    just-in-time (JIT) compiling enables all managed code to run in the native machine language of

    the system on which it is executing. Meanwhile, the memory manager removes the possibilities

    of fragmented memory and increases memory locality-of-reference to further increase

    performance.

    Finally, the runtime can be hosted by high-performance, server-side applications, such as

    Microsoft SQL Server and Internet Information Services (IIS). This infrastructure enables

    you to use managed code to write your business logic, while still enjoying the superior

    performance of the industry's best enterprise servers that support runtime hosting.


    4.3 .NET Framework Class Library:

    The .NET Framework class library is a collection of reusable types that tightly integrate with the

    common language runtime. The class library is object oriented, providing types from which your

    own managed code can derive functionality. This not only makes the .NET Framework types

    easy to use, but also reduces the time associated with learning new features of the .NET

    Framework. In addition, third-party components can integrate seamlessly with classes in the

    .NET Framework.

    For example, the .NET Framework collection classes implement a set of interfaces that you can

    use to develop your own collection classes. Your collection classes will blend seamlessly with

    the classes in the .NET Framework.

    As you would expect from an object-oriented class library, the .NET Framework types enable

    you to accomplish a range of common programming tasks, including tasks such as string

    management, data collection, database connectivity, and file access. In addition to these common

    tasks, the class library includes types that support a variety of specialized development scenarios. For example, you can use the .NET Framework to develop the following types of applications

    and services:

    Console applications.

    Scripted or hosted applications.

    Windows GUI applications (Windows Forms).

    ASP.NET applications.

    XML Web services.

    Windows services.

    For example, the Windows Forms classes are a comprehensive set of reusable types that vastly

    simplify Windows GUI development. If you write an ASP.NET Web Form application, you can

    use the Web Forms classes.


    4.4 ADO.NET Overview:

    4.4.1 ARCHITECTURE OF ADO.NET:

    Fig4: ADO.NET ARCHITECTURE

    ADO.NET is an evolution of the ADO data access model that directly addresses user

    requirements for developing scalable applications. It was designed specifically for the web with

    scalability, statelessness, and XML in mind.

    ADO.NET uses some ADO objects, such as the Connection and Command objects, and also

    introduces new objects. Key new ADO.NET objects include the DataSet, DataReader, and

    DataAdapter.

    The important distinction between this evolved stage of ADO.NET and previous data

    architectures is that there exists an object -- the DataSet -- that is separate and distinct from any data stores. Because of that, the DataSet functions as a standalone entity. You can think of the

    DataSet as an always disconnected record set that knows nothing about the source or destination

    of the data it contains. Inside a DataSet, much like in a database, there are tables, columns,

    relationships, constraints, views, and so forth.


    A DataAdapter is the object that connects to the database to fill the DataSet. Then, it connects

    back to the database to update the data there, based on operations performed while the DataSet

    held the data. In the past, data processing has been primarily connection-based. Now, in an effort to make multi-tiered apps more efficient, data processing is turning to a message-based approach

    that revolves around chunks of information. At the center of this approach is the DataAdapter,

    which provides a bridge to retrieve and save data between a DataSet and its source data store. It

    accomplishes this by means of requests to the appropriate SQL commands made against the data

    store.

    The XML-based DataSet object provides a consistent programming model that works with all

    models of data storage: flat, relational, and hierarchical. It does this by having no 'knowledge' of

    the source of its data, and by representing the data that it holds as collections and data types. No

    matter what the source of the data within the DataSet is, it is manipulated through the same set

    of standard APIs exposed through the DataSet and its subordinate objects.

    While the DataSet has no knowledge of the source of its data, the managed provider has detailed

    and specific information. The role of the managed provider is to connect, fill, and persist the

    DataSet to and from data stores. The OLE DB and SQL Server .NET Data Providers

    (System.Data.OleDb and System.Data.SqlClient) that are part of the .Net Framework provide

    four basic objects: the Command, Connection, DataReader and DataAdapter. In the

    remaining sections of this document, we'll walk through each part of the DataSet and the OLE

    DB/SQL Server .NET Data Providers explaining what they are, and how to program against

    them.

    The following sections will introduce you to some objects that have evolved, and some that are

    new. These objects are:

    Connections. For connection to and managing transactions against a database.

    Commands. For issuing SQL commands against a database.

    Data Readers. For reading a forward-only stream of data records from a SQL Server data source.

    Datasets. For storing, remoting and programming against flat data, XML data and relational data.

    Data Adapters. For pushing data into a DataSet, and reconciling data against a database.


    When dealing with connections to a database, there are two different options: SQL Server .NET

    Data Provider (System.Data.SqlClient) and OLE DB .NET Data Provider (System.Data.OleDb).

    In these samples we will use the SQL Server .NET Data Provider. These are written to talk

    directly to Microsoft SQL Server. The OLE DB .NET Data Provider is used to talk to any OLE

    DB provider (as it uses OLE DB underneath).

    Connections:

    Connections are used to 'talk to' databases, and are represented by provider-specific classes such as SqlConnection. Commands travel over connections, and result sets are returned in the form of streams which can be read by a DataReader object, or pushed into a DataSet object.

    Commands:

    Commands contain the information that is submitted to a database, and are represented by provider-specific classes such as SqlCommand. A command can be a stored procedure call, an UPDATE statement, or a statement that returns results. You can also use input and output parameters, and return values, as part of your command syntax; for example, a command might issue an INSERT statement against the Northwind database.

    DataReaders:

    The DataReader object is somewhat synonymous with a read-only/forward-only cursor over

    data. The DataReader API supports flat as well as hierarchical data. A DataReader object is

    returned after executing a command against a database. The format of the returned DataReader

    object is different from a recordset. For example, you might use the DataReader to show the

    results of a search list in a web page.

    DataSets and DataAdapters:

    DataSets:

    The DataSet object is similar to the ADO Recordset object, but more powerful, and with one

    other important distinction: the DataSet is always disconnected. The DataSet object represents a

    cache of data, with database-like structures such as tables, columns, relationships, and

    constraints. However, though a DataSet can and does behave much like a database, it is

    important to remember that DataSet objects do not interact directly with databases, or other


    source data. This allows the developer to work with a programming model that is always

    consistent, regardless of where the source data resides. Data coming from a database, an XML

    file, from code, or user input can all be placed into DataSet objects. Then, as changes are made

    to the DataSet they can be tracked and verified before updating the source data. The

    GetChanges method of the DataSet object actually creates a second DataSet that contains only

    the changes to the data. This DataSet is then used by a DataAdapter (or other objects) to update

    the original data source.

    The DataSet has many XML characteristics, including the ability to produce and consume XML

    data and XML schemas. XML schemas can be used to describe schemas interchanged via

    WebServices. In fact, a DataSet with a schema can actually be compiled for type safety and

    statement completion.

    DataAdapters (OLEDB/SQL):

    The DataAdapter object works as a bridge between the DataSet and the source data. Using the

    provider-specific SqlDataAdapter (along with its associated SqlCommand and

    SqlConnection) can increase overall performance when working with Microsoft SQL Server databases. For other OLE DB-supported databases, you would use the OleDbDataAdapter

    object and its associated OleDbCommand and OleDbConnection objects.

    The DataAdapter object uses commands to update the data source after changes have been

    made to the DataSet. Using the Fill method of the DataAdapter calls the SELECT command;

    using the Update method calls the INSERT, UPDATE or DELETE command for each changed

    row. You can explicitly set these commands in order to control the statements used at runtime to

    resolve changes, including the use of stored procedures. For ad-hoc scenarios, a

    CommandBuilder object can generate these at run-time based upon a select statement.

    However, this run-time generation requires an extra round-trip to the server in order to gather

    required metadata, so explicitly providing the INSERT, UPDATE, and DELETE commands at

    design time will result in better run-time performance.
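A minimal sketch of this pattern, assuming a SQL Server database with an Items(Id, Name) table (the connection string, database, and table are illustrative, not from the project):

```csharp
using System.Data;
using System.Data.SqlClient;

class AdapterDemo
{
    static void FillAndUpdate()
    {
        SqlConnection cn = new SqlConnection(
            "server=.;Integrated Security=true;database=Shop");
        SqlDataAdapter da = new SqlDataAdapter("SELECT Id, Name FROM Items", cn);

        // Supplying the UPDATE command explicitly at design time avoids the
        // extra metadata round-trip a CommandBuilder would need at run time.
        da.UpdateCommand = new SqlCommand(
            "UPDATE Items SET Name = @Name WHERE Id = @Id", cn);
        da.UpdateCommand.Parameters.Add("@Name", SqlDbType.VarChar, 50, "Name");
        da.UpdateCommand.Parameters.Add("@Id", SqlDbType.Int, 4, "Id");

        DataSet ds = new DataSet();
        da.Fill(ds, "Items");                        // runs the SELECT command
        ds.Tables["Items"].Rows[0]["Name"] = "Pen";  // change tracked in the DataSet
        da.Update(ds, "Items");                      // runs UPDATE for each changed row
    }
}
```

For ad-hoc scenarios, replacing the explicit command with `new SqlCommandBuilder(da)` would generate it automatically, at the run-time cost described above.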


1. ADO.NET is the next evolution of ADO for the .NET Framework.

    2. ADO.NET was created with n-Tier, statelessness and XML in the forefront. Two new objects,

    the DataSet and DataAdapter, are provided for these scenarios.

    3. ADO.NET can be used to get data from a stream, or to store data in a cache for updates.

    4. There is a lot more information about ADO.NET in the documentation.

    5. Remember, you can execute a command directly against the database in order to do inserts,

    updates, and deletes. You don't need to first put data into a DataSet in order to insert, update, or

    delete it.

    6. Also, you can use a DataSet to bind to the data, move through the data, and navigate data

    relationships.
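Point 5 above in practice: a hedged sketch (the database and table are illustrative) of executing a command directly against the database, with no DataSet involved:

```csharp
using System.Data.SqlClient;

class DirectCommandDemo
{
    static void DeleteItem(int id)
    {
        using (SqlConnection cn = new SqlConnection(
            "server=.;Integrated Security=true;database=Shop"))
        {
            cn.Open();
            // Inserts, updates and deletes can run directly against the
            // database; nothing needs to be loaded into a DataSet first.
            SqlCommand cmd = new SqlCommand("DELETE FROM Items WHERE Id = @Id", cn);
            cmd.Parameters.AddWithValue("@Id", id);
            cmd.ExecuteNonQuery();
        }
    }
}
```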

    ODBC Databases:

These are client-server databases that conform to the ODBC standard, such as

    Microsoft SQL Server.

    Data reports:

Data Reports is a powerful program for creating custom reports, lists and form

letters using data from existing databases. Data Reports is designed to work with all kinds

    of data such as numbers, currency, text and Boolean fields. It has a wide range of built-in

    tools for manipulating data with which it is possible to:

    Make calculations and comparisons of data values,

    Calculate grand total and subtotals of values,

    Test for the presence of specific values,

    Present data only if specific conditions are met,

    Evaluate logical relationship between values,

    Convert data from one type to another,

    Calculate group averages, count the records in a group and test for minimum and

    maximum values.

    The data can be placed at the required spot on the report, with special fonts and

    font sizes. Once a report has been designed it can be used as a template for creating other

similar reports, which saves a lot of time compared with creating new reports from scratch.


    5.SOFTWARE DESIGN

    5 .1 Introduction:

    The design phase begins with the requirements specification for the software to be

    developed. Design is the first step to moving from the problem domain towards the

    solution domain. Design is essentially the bridge between requirement specification and

the final solution for satisfying the requirements. It is the most critical factor affecting the

    quality of the software.

    The design process for software system has two levels.

    1. System Design or Top level design

2. Detailed Design or Logical Design

System Design:

In system design the focus is on deciding which modules are needed

for the system, the specifications of these modules, and how these modules should be

    interconnected.

    Detailed Design:

    In detailed design the interconnection of the modules or how the

    specifications of the modules can be satisfied is decided. Some properties for a software

    system design are

    Verifiability

    Completeness

    Consistency

    Traceability

    Simplicity / Understandability


    5 .2 Design Overview:

    1) Application Architecture:

    Fig5: Application Architecture

The application we are developing uses a one-tier (single-tier) architecture: the same tier

contains both the business functionality and the data access functionality. The front end is a

C# .NET Windows application; all front-end forms are developed using C#.NET in the .NET

environment. After developing the user interfaces, we write the code-behind to specify the

business logic. This coding is done in C#, where we write all the necessary business logic

code to perform the required functionality.

In the application diagram above, the User Interface block contains all the necessary front-end

screens, and the BAL (Business Logic Layer) block contains all the business logic required for the application.

(Diagram blocks: User Interface → Business Logic Layer → Database)


    2) Software Architecture:

Fig6: Software Architecture

(Diagram blocks: Users → Login → Select the image → Apply SAI Interpolation → Save the image / Copy/Paste the image / Print the image / Zoom In/Out the image)


    5 .3 UML (Unified Modeling Language):

    5 .3.1 Introduction :

    Modeling is an activity that has been carried out over the years in software

    development. When writing applications by using the simplest languages to the most powerful

    and complex languages, you still need to model. Modeling can be as straightforward as drawing

    a flowchart listing the steps carried out by an application.

    Why do we use modeling?

    Defining a model makes it easier to break up a complex application or a huge system into

    simple, discrete pieces that can be individually studied. We can focus more easily on the

    smaller parts of a system and then understand the "big picture." Hence, the reasons behind

    modeling can be summed up in two words:

    Readability

    Reusability

Readability brings clarity, that is, ease of understanding. Understanding a system is the first step in

    either building or enhancing a system. This involves knowing what a system is made up of,

    how it behaves, and so forth. Modeling a system ensures that it becomes readable and, most

    importantly, easy to document. Depicting a system to make it readable involves capturing the

    structure of a system and the behavior of the system.

    Reusability is the byproduct of making a system readable. After a system has been modeled

    to make it easy to understand, we tend to identify similarities or redundancy, be they in terms

    of functionality, features, or structure.


    INTRODUCTION TO UML:

    The Unified Modeling Language (UML) is a standard language for specifying, visualizing,

    constructing, and documenting the artifacts of software systems, as well as for business

    modeling and other non-software systems. The UML represents a collection of best

    engineering practices that have proven successful in the modeling of large and complex

    systems. The UML is a very important part of developing objects oriented software and the

    software development process. The UML uses mostly graphical notations to express the

    design of software projects. Using the UML helps project teams communicate, explore

    potential designs, and validate the architectural design of the software.

    The Unified Modeling Language, or UML, as it is popularly known by its TLA (three-letter

    acronym!), is the language that can be used to model systems and make them readable. This

    essentially means that UML provides the ability to capture the characteristics of a system by

    using notations. UML provides a wide array of simple, easy to understand notations for

    documenting systems based on the object-oriented design principles. These notations are

    called the nine diagrams of UML.

    Different languages have been used for depicting systems using object-oriented methodology.

The prominent among these were the Rumbaugh methodology, the Booch methodology, and the Jacobson methodology. The problem was that, although each methodology had its

    advantages, they were essentially disparate. Hence, if you had to work on different projects

    that used any of these methodologies, you had to be well versed with each of these

    methodologies. A very tall order indeed! The Unified Modeling Language is just that. It

    "unifies" the design principles of each of these methodologies into a single, standard,

    language that can be easily applied across the board for all object-oriented systems. But,

    unlike the different methodologies that tended more to the design and detailed design of

    systems, UML spans the realm of requirements, analysis, and design and, uniquely,

    implementation as well. The beauty of UML lies in the fact that any of the nine diagrams of

    UML can be used on an incremental basis as the need arises. Considering all these reasons, it

    is no wonder that UML is considered "the" language of choice.


    UML does not have any dependencies with respect to any technologies or languages. This

    implies that you can use UML to model applications and systems based on either of the

    current hot technologies; for example, J2EE and .NET. Every effort has been made to keep

    UML as a clear and concise modeling language without being tied down to any technologies.

    5 .3.2 Goals Of UML :

    The primary goals in the design of the UML were:

    Provide users with a ready-to-use, expressive visual modeling language so they can

    develop and exchange meaningful models.

    Provide extensibility and specialization mechanisms to extend the core concepts.

    Be independent of particular programming languages and development processes.

    Provide a formal basis for understanding the modeling language.

    Encourage the growth of the OO tools market.

    Support higher-level development concepts such as collaborations, frameworks,

    patterns and components.

    Integrate best practices.

    Why we use UML?

    As the strategic value of software increases for many companies, the industry looks for

    techniques to automate the production of software and to improve quality and reduce cost and

    time-to-market. These techniques include component technology, visual programming,

    patterns and frameworks. Businesses also seek techniques to manage the complexity of

    systems as they increase in scope and scale. In particular, they recognize the need to solve

    recurring architectural problems, such as physical distribution, concurrency, replication,

    security, load balancing and fault tolerance. Additionally, the development for the World

    Wide Web, while making some things simpler, has exacerbated these architectural problems.

    The Unified Modeling Language (UML) was designed to respond to these needs.


    5 .3.3 Explanation Of UML Diagrams:

    The underlying premise of UML is that no one diagram can capture the different

    elements of a system in its entirety. Hence, UML is made up of nine diagrams that can be

    used to model a system at different points of time in the software life cycle of a system.

    The nine UML diagrams are:

    Use case diagram:

    The use case diagram is used to identify the primary elements and processes that form the

    system. The primary elements are termed as "actors" and the processes are called "use

    cases." The use case diagram shows which actors interact with each use case.

    Class diagram:

    The class diagram is used to refine the use case diagram and define a detailed design of the

    system. The class diagram classifies the actors defined in the use case diagram into a set of

    interrelated classes. The relationship or association between the classes can be either an "is-a"

    or "has-a" relationship. Each class in the class diagram may be capable of providing certain

    functionalities. These functionalities provided by the class are termed "methods" of the class.

    Apart from this, each class may have certain "attributes" that uniquely identify the class.

    Object diagram:

    The object diagram is a special kind of class diagram. An object is an instance of a class. This

    essentially means that an object represents the state of a class at a given point of time while

    the system is running. The object diagram captures the state of different classes in the system

    and their relationships or associations at a given point of time.


    State diagram:

    A state diagram, as the name suggests, represents the different states that objects in the system

    undergo during their life cycle. Objects in the system change states in response to events. In

    addition to this, a state diagram also captures the transition of the object's state from an initial

    state to a final state in response to events affecting the system.

    Activity diagram:

    The process flows in the system are captured in the activity diagram. Similar to a state

    diagram, an activity diagram also consists of activities, actions, transitions, initial and final

    states, and guard conditions.

    Sequence diagram:

    A sequence diagram represents the interaction between different objects in the system. The

    important aspect of a sequence diagram is that it is time-ordered. This means that the exact

    sequence of the interactions between the objects is represented step by step. Different objects

    in the sequence diagram interact with each other by passing "messages".

    Collaboration diagram:

    A collaboration diagram groups together the interactions between different objects. The

    interactions are listed as numbered interactions that help to trace the sequence of the

    interactions. The collaboration diagram helps to identify all the possible interactions that each

    object has with other objects.

    Component diagram:

    The component diagram represents the high-level parts that make up the system. This diagram

    depicts, at a high level, what components form part of the system and how they are


    interrelated. A component diagram depicts the components culled after the system has

    undergone the development or construction phase.

    Deployment diagram:

    The deployment diagram captures the configuration of the runtime elements of the

    application. This diagram is by far most useful when a system is built and ready to be

    deployed.

    Now that we have an idea of the different UML diagrams, let us see if we can somehow group

    together these diagrams to enable us to further understand how to use them.

UML Diagram Classification: Static, Dynamic, and Implementation

    A software system can be said to have two distinct characteristics: a structural, "static" part and a

    behavioral, "dynamic" part. In addition to these two characteristics, an additional characteristic

    that a software system possesses is related to implementation. Before we categorize UML

    diagrams into each of these three characteristics, let us take a quick look at exactly what these

    characteristics are.

    Static:

    The static characteristic of a system is essentially the structural aspect of the system. The

    static characteristics define what parts the system is made up of.

    Dynamic:

    The behavioral features of a system; for example, the ways a system behaves in response

    to certain events or actions are the dynamic characteristics of a system.

    Implementation:

    The implementation characteristic of a system is an entirely new feature that describes

    the different elements required for deploying a system.


    The UML diagrams that fall under each of these categories are:

Static

o Use case diagram

o Class diagram

o Object diagram

Dynamic

o State diagram

o Activity diagram

o Sequence diagram

o Collaboration diagram

    Implementation

    o Component diagram

    o Deployment diagram

    Finally, let us take a look at the 4+1 view of UML diagrams.

    Views of UML Diagrams

    Considering that the UML diagrams can be used in different stages in the life cycle of a system,

    let us take a look at the "4+1 view" of UML diagrams. The 4+1 view offers a different

    perspective to classify and apply UML diagrams. The 4+1 view is essentially how a system can

    be viewed from a software life cycle perspective. Each of these views represents how a system

    can be modeled. This will enable us to understand where exactly the UML diagrams fit in and

    their applicability.


    The different views are:

    Design View:

    The design view of a system is the structural view of the system. This gives an idea of what

    a given system is made up of. Class diagrams and object diagrams form the design view of

    the system.

    Process View:

The dynamic behavior of a system can be seen using the process view. The different

    diagrams such as the state diagram, activity diagram, sequence diagram, and collaboration

    diagram are used in this view.

    Component View:

    Component view shows the grouped modules of a given system modeled using the

    component diagram.

    Deployment View:

    The deployment diagram of UML is used to identify the deployment modules for a given

    system.

    Use case View:

    Finally, we have the use case view. Use case diagrams of UML are used to view a system

    from this perspective as a set of discrete activities or transactions.


    5 .3.4 UML Diagrams:

    a) USECASE DIAGRAM:

(Actor: User. Use cases: Give the image, Interpolation, Save, Copy, Paste, Print, Zoom in/out)

    Fig7: USECASE DIAGRAM


    b) CLASS DIAGRAM:

(Classes and members:

ImageFunctions: imgWidth : int, imgHeight : int, newWidth : int, newHeight : int; Interpolation(), Copy(), Paste(), Print(), ZoomInOut()

User1: imgName : varchar, imgHeight : int, imgWidth : int; Login(), InputImage(), ImageInterpolation()

Login: username : varchar, password : varchar; Login()

The classes are linked by many-to-many (*..*) associations.)

    Fig8: CLASS DIAGRAM


    c) OBJECT DIAGRAM:

(Objects: Login → Provide Input Image → Analyze Missing Pixels → Provide Interpolation Factors → SAI Interpolation → View Compared Results)

    Fig9: OBJECT DIAGRAM


    d) STATE DIAGRAM:

(States: Login → Input Image → Analyze Image → Apply Interpolation → View Results)

    Fig10: STATE DIAGRAM


    e) SEQUENCE DIAGRAM:

(Lifelines: User, Application. Messages: Login; Verify and direct to Main Form; Select the image; Show the image on the form; Request for interpolation; Interpolate using SAI technique; Request for saving the image; Save the image; Request for copying the image; Copy the image; Request for pasting the image; Paste the image; Request for zoom in/out; Zoom in/out the image)

    Fig11: SEQUENCE DIAGRAM


    5 .4 DFD (Data Flow Diagrams):

Level 0: USER → Login

Level 1: USER → Login → Select the image

Level 2: USER → Login → Select the image → Interpolate using SAI technique

Level 3: USER → Login → Select the image → Interpolate using SAI technique → Perform different image operations


Fig12: Data Flow Diagrams

    6.CODE TEMPLATES

    Code Snippets (Important Features):

    First we need to login and the values of the textboxes must be validated and redirected to

    the MainForm.cs

For that, the following must be written in the button click event:

private void button1_Click(object sender, EventArgs e)
{
    if (textBox1.Text != "" && textBox2.Text != "")
    {
        SqlConnection cn = new SqlConnection(
            "server=.;Integrated Security=true;database=imginterpolation");
        cn.Open();
        // Use a parameterized query so the login values cannot inject SQL
        SqlCommand cmd = new SqlCommand(
            "select * from login where uname=@uname and pword=@pword", cn);
        cmd.Parameters.AddWithValue("@uname", textBox1.Text);
        cmd.Parameters.AddWithValue("@pword", textBox2.Text);
        SqlDataReader dr = cmd.ExecuteReader();
        if (dr.Read())
        {
            this.Hide();
            MainForm main = new MainForm();
            main.Show();
        }
        else
        {
            MessageBox.Show("The values are not correct");
        }
    }
    else
    {
        MessageBox.Show("Enter the values");
    }
}

    After redirecting it to the Main form

    The image must be loaded in to the form

    For this the click event for Open menu item must be written


private void OpenItem_Click(object sender, System.EventArgs e)
{
    OpenFile();
}

The OpenFile() method must be written using the following code.

private void OpenFile()
{
    if (ofd.ShowDialog() == DialogResult.OK)
    {
        ImageDoc imgDoc = null;
        try
        {
            // create image document
            imgDoc = new ImageDoc(ofd.FileName, (IDocumentsHost) this);
            imgDoc.Text = Path.GetFileName(ofd.FileName);
        }
        catch (ApplicationException ex)
        {
            MessageBox.Show(ex.Message, "Error",
                MessageBoxButtons.OK, MessageBoxIcon.Error);
        }

        if (imgDoc != null)
        {
            imgDoc.Show(dockManager);
            imgDoc.Focus();
            // set events
            SetupDocumentEvents(imgDoc);
        }
    }
}

The SetupDocumentEvents(imgDoc) method is as follows:

private void SetupDocumentEvents(ImageDoc doc)
{
    doc.DocumentChanged += new System.EventHandler(this.document_DocumentChanged);
    doc.ZoomChanged += new System.EventHandler(this.document_ZoomChanged);
    doc.MouseImagePosition += new ImageDoc.SelectionEventHandler(this.document_MouseImagePosition);
    doc.SelectionChanged += new ImageDoc.SelectionEventHandler(this.document_SelectionChanged);
}

Click event for the Reload menu item:

private void reloadFileItem_Click(object sender, System.EventArgs e)
{
    Content doc = dockManager.ActiveDocument;
    if ((doc != null) && (doc is ImageDoc))
    {
        try
        {
            ((ImageDoc) doc).Reload();
        }
        catch (ApplicationException ex)
        {
            MessageBox.Show(ex.Message, "Error",
                MessageBoxButtons.OK, MessageBoxIcon.Error);
        }
    }
}

Click event for the Save menu item in the menu bar. For this, write the following code.

private void saveFileItem_Click(object sender, System.EventArgs e)
{
    SaveFile();
}

The SaveFile() method for the above functionality:

private void SaveFile()
{
    Content doc = dockManager.ActiveDocument;
    if (doc != null)
    {
        // set initial file name
        if ((doc is ImageDoc) && (((ImageDoc) doc).FileName != null))
        {
            sfd.FileName = Path.GetFileName(((ImageDoc) doc).FileName);
        }
        else
        {
            sfd.FileName = doc.Text + ".jpg";
        }
        sfd.FilterIndex = 0;

        // show dialog
        if (sfd.ShowDialog(this) == DialogResult.OK)
        {
            ImageFormat format = ImageFormat.Jpeg;

            // resolve file format
            switch (Path.GetExtension(sfd.FileName).ToLower())
            {
                case ".jpg":
                    format = ImageFormat.Jpeg;
                    break;
                case ".bmp":
                    format = ImageFormat.Bmp;
                    break;
                default:
                    MessageBox.Show(this, "Unsupported image format was specified",
                        "Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
                    return;
            }

            // save the image
            try
            {
                if (doc is ImageDoc)
                {
                    ((ImageDoc) doc).Image.Save(sfd.FileName, format);
                }
            }
            catch (Exception)
            {
                MessageBox.Show(this, "Failed writing image file", "Error",
                    MessageBoxButtons.OK, MessageBoxIcon.Error);
            }
        }
    }
}

Click event for the Copy menu item:

// On "File->Copy" - copy image to clipboard
private void copyFileItem_Click(object sender, System.EventArgs e)
{
    CopyToClipboard();
}

// Copy image to clipboard
private void CopyToClipboard()
{
    Content doc = dockManager.ActiveDocument;
    if (doc != null)
    {
        if (doc is ImageDoc)
        {
            Clipboard.SetDataObject(((ImageDoc) doc).Image, true);
        }
    }
}

Method for applying SAI interpolation:

public Bitmap Apply(Bitmap srcImg)
{
    // get source image size
    int width = srcImg.Width;
    int height = srcImg.Height;

    if ((newWidth == width) && (newHeight == height))
    {
        // just clone the image
        return AForge.Imaging.Image.Clone(srcImg);
    }

    PixelFormat fmt = (srcImg.PixelFormat == PixelFormat.Format8bppIndexed) ?
        PixelFormat.Format8bppIndexed : PixelFormat.Format24bppRgb;

    // lock source bitmap data
    BitmapData srcData = srcImg.LockBits(
        new Rectangle(0, 0, width, height), ImageLockMode.ReadOnly, fmt);

    // create new image
    Bitmap dstImg = (fmt == PixelFormat.Format8bppIndexed) ?
        AForge.Imaging.Image.CreateGrayscaleImage(newWidth, newHeight) :
        new Bitmap(newWidth, newHeight, fmt);

    // lock destination bitmap data
    BitmapData dstData = dstImg.LockBits(
        new Rectangle(0, 0, newWidth, newHeight), ImageLockMode.ReadWrite, fmt);

    int pixelSize = (fmt == PixelFormat.Format8bppIndexed) ? 1 : 3;
    int srcStride = srcData.Stride;
    int dstOffset = dstData.Stride - pixelSize * newWidth;
    float xFactor = (float)width / newWidth;
    float yFactor = (float)height / newHeight;

    // do the job
    unsafe
    {
        byte* src = (byte*)srcData.Scan0.ToPointer();
        byte* dst = (byte*)dstData.Scan0.ToPointer();

        switch (method)
        {
            case InterpolationMethod.SAI:
            {
                // resize using SAI interpolation
                float ox, oy, dx1, dy1, dx2, dy2;
                int ox1, oy1, ox2, oy2;
                int ymax = height - 1;
                int xmax = width - 1;
                byte v1, v2;
                byte* tp1, tp2;
                byte* p1, p2, p3, p4;

                // for each line
                for (int y = 0; y < newHeight; y++)
                {
                    // Y coordinates
                    oy = (float)y * yFactor;
                    oy1 = (int)oy;
                    oy2 = (oy1 == ymax) ? oy1 : oy1 + 1;
                    dy1 = oy - (float)oy1;
                    dy2 = 1.0f - dy1;

                    // get temp pointers
                    tp1 = src + oy1 * srcStride;
                    tp2 = src + oy2 * srcStride;

                    // for each pixel
                    for (int x = 0; x < newWidth; x++)
                    {
                        // X coordinates
                        ox = (float)x * xFactor;
                        ox1 = (int)ox;
                        ox2 = (ox1 == xmax) ? ox1 : ox1 + 1;
                        dx1 = ox - (float)ox1;
                        dx2 = 1.0f - dx1;

                        // get four points
                        p1 = tp1 + ox1 * pixelSize;
                        p2 = tp1 + ox2 * pixelSize;
                        p3 = tp2 + ox1 * pixelSize;
                        p4 = tp2 + ox2 * pixelSize;

                        // interpolate using 4 points
                        for (int i = 0; i < pixelSize; i++, dst++, p1++, p2++, p3++, p4++)
                        {
                            v1 = (byte)(dx2 * (*p1) + dx1 * (*p2));
                            v2 = (byte)(dx2 * (*p3) + dx1 * (*p4));
                            *dst = (byte)(dy2 * v1 + dy1 * v2);
                        }
                    }
                    dst += dstOffset;
                }
                break;
            }
        }
    }

    // unlock both images
    dstImg.UnlockBits(dstData);
    srcImg.UnlockBits(srcData);

    return dstImg;
}


    7.TESTING

    7 .1 Testing Introduction :

    Software testing is a critical element of software quality assurance and represents

    the ultimate review of specification, design and coding. The increasing visibility of

software as a system element, and the costs associated with software failure, are

motivating forces for well planned, thorough testing. Testing is the process of executing a

    program with the intent of finding an error. The design of tests for software and other

    engineered products can be as challenging as the initial design of the product itself.

There are basically two types of testing approaches.

One is black-box testing: knowing the specified function that a product has been

designed to perform, tests can be conducted to demonstrate that each function is fully

operational.

The other is white-box testing: knowing the internal workings of the

product, tests can be conducted to ensure that the internal operation of the product performs according to specifications and all internal components have been

adequately exercised.

    White box and Black box testing methods have been used to test this

    package. All the loop constructs have been tested for their boundary and


    intermediate conditions. The test data was designed with a view to check for

    all the conditions and logical decisions. Error handling has been taken care of

    by the use of exception handlers.

    7 .2 Testing Strategies:

Testing is a set of activities that can be planned in advance and conducted

systematically. A strategy for software testing must accommodate low-level tests that

    are necessary to verify that a small source code segment has been correctly implemented

    as well as high-level tests that validate major system functions against customer

    requirements.

    Software testing is one element of verification and validation. Verification

refers to the set of activities that ensure that software correctly implements a specific

    function. Validation refers to a different set of activities that ensure that the software that

    has been built is traceable to customer requirements.

The objective of software testing is to uncover errors. To fulfill this objective, a

series of test steps (unit, integration, validation and system tests) is planned and executed.

Each test step is accomplished through a series of systematic test techniques that assist in

    the design of test cases. With each testing step, the level of abstraction with which

    software is considered is broadened.

    7.2.1 Unit Testing :


Unit testing focuses verification effort on the smallest unit of software design: the

    module. The unit test is always white box oriented. The tests that occur as part of unit testing

    are testing the module interface, examining the local data structures, testing the boundary

    conditions, executing all the independent paths and testing error-handling paths.
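As an illustration of a white-box unit test for a boundary condition, consider the following NUnit-style sketch. The test framework and the ImageFunctions overload shown (taking a bitmap and a target size) are assumptions made for the example; they are not part of the project code.

```csharp
using NUnit.Framework;
using System.Drawing;

[TestFixture]
public class ImageFunctionsTests
{
    [Test]
    public void Interpolation_SameTargetSize_KeepsDimensions()
    {
        // Boundary condition: target size equals source size,
        // so the module should return an image of identical dimensions.
        Bitmap src = new Bitmap(10, 10);
        Bitmap result = new ImageFunctions().Interpolation(src, 10, 10);
        Assert.AreEqual(10, result.Width);
        Assert.AreEqual(10, result.Height);
    }
}
```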

    7.2.2 Integration Testing :

    Integration testing is a systematic technique for constructing the program structure

    while at the same time conducting tests to uncover errors associated with interfacing.

    Scope of testing summarizes the specific functional, performance, and internal design

    characteristics that are to be tested. It employs top-down testing and bottom-up testing

    methods for this case.

    7.2.3 White Box Testing:

    The purpose of any security testing method is to ensure the robustness of a system in the face of

malicious attacks or regular software failures. White box testing is performed based on the knowledge of how the system is implemented. White box testing includes analyzing data flow,

    control flow, information flow, coding practices, and exception and error handling within the

    system, to test the intended and unintended software behavior. White box testing can be performed to validate whether code implementation follows intended design, to validate

    implemented security functionality, and to uncover exploitable vulnerabilities.

    White box testing requires access to the source code. Though white box testing can be performed

any time in the life cycle after the code is developed, it is a good practice to perform white box testing during the unit testing phase.

    White box testing requires knowing what makes software secure or insecure, how to think like an

    attacker, and how to use different testing tools and techniques. The first step in white box testing

is to comprehend and analyze source code, so knowing what makes software secure is a fundamental requirement. Second, to create tests that exploit software, a tester must think like an

    attacker. Third, to perform testing effectively, testers need to know the different tools and

    techniques available for white box testing. The three requirements do not work in isolation, buttogether.
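The "independent paths" idea behind white box testing can be illustrated with a tiny function whose basis paths are each covered by one test, including the error-handling path. The function and its names are illustrative assumptions, not project code.

```python
# White-box sketch: a function with three independent paths, so basis-path
# testing needs one test per path, including the error-handling path.

def scale_pixel(value, gain):
    if gain < 0:                      # path 1: error-handling path
        raise ValueError("gain must be non-negative")
    scaled = value * gain
    if scaled > 255:                  # path 2: saturation branch
        return 255
    return int(scaled)                # path 3: normal branch

assert scale_pixel(10, 2) == 20       # normal path
assert scale_pixel(200, 2) == 255     # saturation path
try:
    scale_pixel(10, -1)               # error path
    assert False, "expected ValueError"
except ValueError:
    pass
```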

    7.2.4 Black Box Testing:

Black box testing, also known as functional testing, is a software testing technique whereby the internal workings of the item being tested are not known to the tester. For example, in a black box test of a software design, the tester knows only the inputs and the expected outcomes, not how the program arrives at those outputs. The tester never examines the programming code and needs no knowledge of the program beyond its specifications.


    The advantages of this type of testing include:

    The test is unbiased because the designer and the tester are independent of each other.

    The tester does not need knowledge of any specific programming languages.

    The test is done from the point of view of the user, not the designer.

    Test cases can be designed as soon as the specifications are complete.
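A black-box sketch: the test cases below are derived purely from a stated specification ("zoom multiplies each dimension by the factor") and never look inside the implementation. The zoom() function here is a hypothetical stand-in for the feature under test, not the project's code.

```python
# Black-box sketch: cases come from the specification alone; the
# implementation of zoom() is treated as opaque by the tester.

def zoom(width, height, factor):
    return width * factor, height * factor   # opaque to the tester

spec_cases = [                # (inputs, expected output) from the spec alone
    ((100, 50, 2), (200, 100)),
    ((1, 1, 3), (3, 3)),
    ((640, 480, 1), (640, 480)),
]
for args, expected in spec_cases:
    assert zoom(*args) == expected
```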

7.2.5 System Testing:

System testing validates software once it has been incorporated into a larger system. The software is combined with other system elements, and a series of system integration and validation tests are conducted. System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system. Once the system has been developed it has to be tested. In the present system, care must be taken that property and assessment numbers are valid, i.e., no duplicate number should exist in either case. Care should also be taken that the appropriate data is retrieved in response to the queries.
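The uniqueness rule described above (no duplicate numbers) can be checked at the system level with a simple scan over the stored identifiers. The data and function name below are illustrative, not taken from the project's records.

```python
# System-level check sketch for the uniqueness rule: no identifier
# may appear more than once in the stored set.

def find_duplicates(numbers):
    seen, dupes = set(), set()
    for n in numbers:
        if n in seen:       # second occurrence: record as a duplicate
            dupes.add(n)
        seen.add(n)
    return dupes

assert find_duplicates([101, 102, 103]) == set()   # valid data passes
assert find_duplicates([101, 102, 101]) == {101}   # duplicate is flagged
```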

7.3 Design of Test Cases and Scenarios:

Authentication:

FUNCTION            | EXPECTED RESULTS                             | ACTUAL RESULTS                               | LOW PRIORITY | HIGH PRIORITY
Login               | Verify and authenticate                      | Verified and redirected to the main page     | ------------ | Yes
Select the image    | Select the image and show it on the form     | Image is selected and shown on the form      | ------------ | Yes
Apply Interpolation | SAI interpolation must be applied            | SAI interpolation is applied on the image    | ------------ | Yes
Save the image      | Image must be saved                          | Image is saved                               | ------------ | Yes
Zoom In/Out         | Zoom size is increased/decreased as required | Zoom size is increased/decreased as required | ------------ | Yes

Table 1: TEST CASES
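The manual test cases above can also be recorded as data and replayed by a small harness that compares expected and actual outcomes per function. The driver below is a hypothetical stand-in that simply returns the recorded outcome; a real harness would exercise the application's UI at this point.

```python
# Data-driven sketch of the test-case table: each entry pairs a function
# under test with its expected outcome, so a harness can report pass/fail.

test_cases = [
    ("Login", "redirected to main page"),
    ("Select the image", "image shown on the form"),
    ("Apply Interpolation", "SAI interpolation applied"),
    ("Save the image", "image saved"),
    ("Zoom In/Out", "zoom size changed"),
]

def run_case(function_name):
    # Stand-in driver: a real harness would drive the UI here and
    # return the observed result instead of the recorded one.
    return dict(test_cases)[function_name]

results = {name: run_case(name) == expected for name, expected in test_cases}
assert all(results.values())
```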

8. IMPLEMENTATION & RESULTS

8.1 Running Application:

In order to run the application, the steps to follow are listed below:

1) Open the Visual Studio IDE (Integrated Development Environment), that is, Visual Studio 2008 or another version.


Fig 13: Application 1

2) Click on File -> Open -> browse to the folder containing the project, then select the solution file of the project.


Fig 14: Application 2

3) Click on the Open option. Then, in the Solution Explorer, you will find all the forms and classes related to the project.


Fig 15: Application 3

4) Run the application by pressing F5 or clicking the debug button.

8.2 Output Screens:


    Fig 16: Login Form

    Fig 17: Login Form


    Fig 18: Login Form


    Fig 19: Image Interpolation


Fig 20: Image Interpolation


    Fig 21: Image Interpolation


    Fig 22: Image Interpolation


Fig 23: Image Interpolation


Fig 24: Image Interpolation


Fig 25: Image Interpolation


    Fig 26: Image Interpolation


    Fig 27: Image Interpolation


    Fig 28: Image Interpolation


9. CONCLUSION

This new image interpolation technique outperforms the existing bilinear and bicubic methods by preserving edges while reconstructing the HR image. Its complexity is lower than that of the existing system, and blur in the image is reduced, so the final image is very effective. Beyond this, the user can also save the image and has the option to print it.

FUTURE ENHANCEMENT

This project produces an interpolated image free of blur. It can be extended with additional features such as editing, cropping, etc.


10. REFERENCES

The following were referred to during the analysis and execution phases of the project:

Books:

SOFTWARE ENGINEERING - By Roger S. Pressman

MSDN 2002 - By Microsoft

Deitel, H., Deitel, P., et al., C#: How to Program, Prentice-Hall, Upper Saddle River, NJ

Professional C#, Wrox Publications

Advanced Reflection Concepts by George Lenn

Web pages:

http://msdn.microsoft.com/hi-in/default.aspx - Microsoft Developer Network webpage

Turtschi, A., et al., C# .NET Web Developer's Guide, Syngress, electronic volume at www.netlibrary.com

Workshop slides and tutorial materials can be downloaded from: http://cs.mwsu.edu/~stringfe/CCSCWorkshop
