cyber security new
TRANSCRIPT
8/9/2019 Cyber Security New
Introduction
Our project is a Cyber Security project in which we try to implement various IT security aspects along with a complete cyber café management and surveillance system. The system makes the cyber café administrator's job easier by providing the tools needed to maintain the café's records and to manage the services offered to customers, along with complete tracking of customer usage, activity and accounting. The software gives the administrator complete control over the systems in the network, so that they can be monitored and maintained remotely.
CHAPTER 1
SYSTEM DEVELOPMENT LIFE CYCLE
1. System Development Life Cycle
The basic idea of the software development life cycle (SDLC) is that there is a well-defined process by which an application is conceived, developed and implemented. The phases in the SDLC provide a basis for management and control because they define segments of the flow of work that can be identified for managerial purposes, and they specify the documents or other deliverables to be produced in each phase.
System development revolves around a life cycle that begins with the recognition of user needs. In order to develop good software, it has to go through different phases. There are various phases of the system development life cycle for the project, and different models for software development depict these phases. We decided to use the waterfall model, the oldest and most widely used paradigm for software engineering. The various relevant stages of the system life cycle of this application tool are depicted in the following flow diagram.
SYSTEM ANALYSIS
SYSTEM DESIGN
CODING
SYSTEM TESTING
SYSTEM IMPLEMENTATION
SYSTEM MAINTENANCE
Let us have a look at each of the above activities:
1. System Analysis
System Analysis is the process of diagnosing situations, done with a definite aim and with the boundaries of the system kept in mind, to produce a report based on the findings. Analysis uses fact-finding techniques in which problem definition, objectives, system requirement specifications, feasibility analysis and cost-benefit analysis are carried out. The requirements of both the system and the software are documented and reviewed with the user.
2. System Design
System Design is actually a multistep process that focuses on four distinct
attributes of a program: data structures, software architecture, interface
representations, and procedural (algorithmic) detail. System design is
concerned with identifying the software components (Functions, data streams,
and data stores), specifying relationships among components, specifying
software structure, maintaining a record of design decisions and providing a
blueprint for the implementation phase.
3. Coding
The coding step performs the translation of the design representations into an artificial language, resulting in instructions that can be executed by the computer. It thus involves developing computer programs that meet the system specifications of the design stage.
4. System Testing
The system testing process focuses on the logical internals of the software, ensuring that all statements have been tested, and on the functional externals, that is, conducting tests with various test data to uncover errors and to verify that the defined inputs produce actual results that agree with the required results.
5. System Implementation
System implementation is a process that includes all those activities that take place to convert an old system to a new one. The new system may be a totally new system replacing the existing one, or a major modification to the existing system. System implementation involves the translation of the design specifications into source code, followed by debugging, documentation and unit testing of the source code.
6. System Maintenance
Maintenance is the modification of a software product after delivery to correct faults, to improve performance, or to adapt the product to a new operating environment. Software maintenance cannot be avoided due to the wear and tear caused by use. Some of the reasons for maintaining software are:
1. Over a period of time, the software's original requirements may change.
2. Errors undetected during software development may be found during use and require correction.
3. With time, new technologies such as hardware and operating systems are introduced. The software must therefore be modified to adapt to the new operating environment.
Types of Software Maintenance
Corrective Maintenance: This type of maintenance, also called bug fixing, corrects errors reported while the system is in use.
Adaptive Maintenance: This type of maintenance is concerned with modifications required due to a change in environment, i.e. external changes such as use on a different hardware platform or with a different operating system.
Perfective Maintenance: Perfective maintenance refers to enhancements to the software product, adding support for new features or changing functionality according to customer demands, making the product better and faster, with more functions or reports.
Preventive Maintenance: This type of maintenance is done to anticipate future problems and to improve maintainability, providing a better basis for future enhancements or business changes.
1.1 SYSTEM ANALYSIS
1.1.1 Problem Definition
Our project, Cyber Security, tries to implement some important aspects of cyber café and system security. The project will relay information to the network administrator about what is happening on a network node, and the system will also provide notifications regarding events such as file deletion.
1.1.2 Proposed System
The Proposed system will have the following features:
Cyber Café Management: This feature allows the cyber café administrator to maintain the system chart, view which user is sitting at which computer, and maintain the accounting for that particular user. The user account can be topped up by the administrator with a particular amount. When the user sits at a particular system, he is not able to log on until he provides his username/password; as soon as he logs in, the system deducts money
from his account as per usage, similar to a prepaid mobile system.
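The prepaid accounting described above can be sketched as follows. This is an illustrative model only; the project itself is written in VB.NET, and the class name, rate and figures here are assumptions for the example, not part of the original design:

```python
# Minimal sketch of prepaid cyber-cafe accounting (illustrative only).
class PrepaidAccount:
    def __init__(self, username, password, balance=0):
        self.username = username
        self.password = password
        self.balance = balance          # money left on the account

    def top_up(self, amount):
        """Administrator adds money to the account."""
        self.balance += amount

    def login(self, password):
        """User may log on only with the right password and a positive balance."""
        return password == self.password and self.balance > 0

    def charge(self, minutes, rate_per_minute=1):
        """Deduct money for usage, like a prepaid mobile plan."""
        cost = min(self.balance, minutes * rate_per_minute)
        self.balance -= cost
        return cost

acct = PrepaidAccount("user1", "secret")
acct.top_up(30)
assert acct.login("secret")   # allowed: correct password, positive balance
acct.charge(25)               # 25 minutes at 1 per minute leaves a balance of 5
print(acct.balance)           # 5
```

Once the balance reaches zero, `login` refuses the user, mirroring the "cannot log on" behaviour described above.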
File Delete Notification System: The system will write notification events whenever a file is deleted from the system, along with attributes such as time and date. This module is very useful in a college lab, as we can track when a file was deleted from a system.
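One simple way to detect deletions, sketched here in Python (the project itself is VB.NET), is to compare periodic snapshots of a directory; the `DeleteWatcher` name and polling approach are assumptions for illustration, not the project's actual mechanism:

```python
# Illustrative sketch of a file-delete notifier: it snapshots a directory,
# then reports (with a timestamp) any files that disappear between checks.
import os
import datetime
import tempfile

class DeleteWatcher:
    def __init__(self, path):
        self.path = path
        self.known = set(os.listdir(path))    # last snapshot of the directory
        self.log = []                         # accumulated notification events

    def check(self):
        """Compare current contents with the last snapshot; log deletions."""
        current = set(os.listdir(self.path))
        for name in self.known - current:
            self.log.append({
                "file": name,
                "event": "deleted",
                "time": datetime.datetime.now().isoformat(),
            })
        self.known = current
        return self.log

# Example: watch a temporary directory and delete a file from it.
folder = tempfile.mkdtemp()
target = os.path.join(folder, "notes.txt")
open(target, "w").close()
watcher = DeleteWatcher(folder)
os.remove(target)
events = watcher.check()
print(events[0]["file"], events[0]["event"])   # notes.txt deleted
```

In the real module, `check` would run on a timer on each lab machine and the events would be forwarded to the administrator.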
User Activity Monitoring Module: This module will enable the administrator to keep track of the activities occurring on a user's system on a real-time basis. The system works in the following manner: every few minutes the screen of the user's desktop is captured as an image and transferred to the administrator, so that he can keep track of what is happening on the user's system.
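The capture-and-transfer loop just described can be sketched as below. Actual screen capture and network transfer are platform-specific, so they are passed in as functions; `capture`, `send` and the interval are placeholders for illustration, not APIs from the original project:

```python
# Sketch of the periodic activity monitor (illustrative only).
import time

def monitor(capture, send, interval_seconds=60, rounds=None):
    """Every `interval_seconds`, grab the user's screen and ship it
    (with a timestamp) to the administrator. `rounds=None` runs forever."""
    done = 0
    while rounds is None or done < rounds:
        image_bytes = capture()                           # e.g. a screenshot
        send({"time": time.time(), "image": image_bytes}) # deliver to admin
        done += 1
        if rounds is None or done < rounds:
            time.sleep(interval_seconds)

# Demonstration with fake capture/transfer functions:
inbox = []
monitor(capture=lambda: b"fake-screenshot",
        send=inbox.append,
        interval_seconds=0, rounds=3)
print(len(inbox))        # 3 screenshots delivered to the administrator
```

In deployment, `capture` would take a real screenshot and `send` would transmit it over the network to the administrator's console.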
Remote Desktop Login: This module enables the administrator to remotely log on to a particular system on the network to view what is happening on that remote computer.
1.1.3 Significance of Project
These days, information and data security have become very important concerns. This project will help the lab administrator to manage the network effectively and keep track of students, as it is becoming increasingly necessary to prevent misuse of the college IT infrastructure by students.
1.1.4 Advantages of the Proposed System
The various advantages of the proposed System are:
The cyber café is automated; there is no need to keep watching the clock or to maintain the café's records manually.
The system prevents and notifies about the deletion of any important files, so the culprit who deleted them can be tracked.
The administrator can keep watch on user activities, thus preventing further misuse.
1.1.5 REQUIREMENT ANALYSIS
Software requirement analysis is a software-engineering task that bridges the gap between system-level software allocation and software design. For developing our application, an in-depth analysis was done. The analysis was divided into the following three parts:
Problem Recognition
Evaluation and Synthesis
Specification & Review
Problem Recognition
The aim of the project was understood, and thorough research was done on the Internet to get a deep insight into how the proposed system would work. We studied existing cyber café management and monitoring tools and understood their working. We recorded the features that would be required when building our system, for example a database of users and systems, user accounting, and activity monitoring. All these features were noted down so that they could be incorporated in our application.
Evaluation and Synthesis
Problem evaluation and solution synthesis was the next major area of effort. It was in this step that all externally observable data objects were identified and the flow and content of information were evaluated and defined. It was decided in this phase how our application would look and work, what parameters it would take and what it would return.
Specification & Review
The main objective is to improve the quality of the software, which can be done by inspection or walkthrough in formal technical reviews. The main objectives are:
To uncover errors in function, logic or implementation.
To verify that the software under review meets its requirement specification.
To ensure that the software has been represented according to predefined standards.
To achieve software development in a uniform manner.
To make the project more meaningful.
1.1.6 FEASIBILITY STUDY
The feasibility study is carried out to test whether the proposed system is worth being implemented. Given unlimited resources and infinite time, all projects are feasible. Unfortunately, such resources and time are not available in real-life situations. Hence it becomes both necessary and prudent to evaluate the feasibility of the project at the earliest possible time, in order to avoid unnecessary wastage of time and effort and professional embarrassment over an ill-conceived
system. A feasibility study is a test of the proposed system regarding its workability, its impact on the organization, its ability to meet user needs and its effective use of resources.
The main objective of the feasibility study is to test the technical, operational and economic feasibility of developing the application.
The following feasibility studies were carried out for the proposed system:
Economic Feasibility: An evaluation of the development cost weighed against the ultimate benefits derived from the developed system. The proposed system is economically feasible if the benefits obtained in the long run outweigh the costs incurred in designing and implementing it. In this case the benefits outweigh the costs, which makes the system economically feasible.
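A cost-benefit comparison of this kind can be made concrete with a simple payback calculation. The figures below are made up for the example; they are not from the actual project budget:

```python
# Illustrative payback-period check for economic feasibility.
def payback_period_months(development_cost, monthly_benefit):
    """Months of operation needed before cumulative benefits cover the cost."""
    months = 0
    recovered = 0
    while recovered < development_cost:
        recovered += monthly_benefit
        months += 1
    return months

# Suppose development costs 60,000 and the system saves 5,000 per month
# of administrative effort: the cost is recovered within a year.
print(payback_period_months(60_000, 5_000))   # 12
```

A short payback period relative to the system's expected lifetime is one simple indicator of economic feasibility.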
Technical Feasibility: A study of function performance and constraints
that may affect the ability to achieve the acceptable system. A system
is technically feasible, if it can be designed and implemented within the
limitations of available resources like funds, hardware, software etc.
The considerations that are normally associated with technical
feasibility include development risk, resources availability and
technology. Management provides latest hardware and software
facilities for successful completion of the project.
The proposed system is technically feasible, as the technology we are using to implement the project (i.e. VB.NET) is fully capable of meeting the requirements identified in the analysis section.
Operational Feasibility: The project is operationally feasible, as it can be implemented easily in the college computer lab.
Schedule Feasibility: Evaluates the time required for the development of the project. The system was found to be feasible in terms of schedule.
1.2 SYSTEM DESIGN
1.2.1 DESIGN CONCEPTS
The design of an information system produces the details that state how a system will meet the requirements identified during system analysis. System specialists often refer to this stage as logical design, in contrast to the process of developing the program software, which is referred to as physical design.
System analysts begin the process by identifying the reports and other outputs the system will produce. Then the specifics of each are pinpointed. Usually, designers sketch the form or display as they expect it to appear when the system is complete. This may be done on paper or on a computer display, using one of the automated system tools available. The system design also describes the data to be input, calculated or stored. Individual data items and calculation procedures are written in detail. The procedures tell how to process the data and produce the output.
1.2.2 DESIGN OBJECTIVES
The following goals were kept in mind while designing the system:
To reduce the manual work required to be done in the existing system.
To avoid errors inherent in the manual working and hence make the outputs consistent and correct.
To improve the management of the permanent information of the computer centre by keeping it in properly structured tables, and to provide facilities to update this information as efficiently as possible.
To make the system completely menu-driven and hence user-friendly; this was necessary so that even non-programmers could use the system efficiently.
To make the system completely compatible, i.e., it should fit into the total integrated system.
To design the system in such a way as to reduce future maintenance and enhancement time and effort.
To make the system reliable, understandable and cost-effective.
1.2.3 DESIGN MODULES
Cyber Café Management: This feature allows the cyber café administrator to maintain the system chart, view which user is sitting at which computer, and maintain the accounting for that particular user. The user account can be topped up by the administrator with a particular amount. When the user sits at a particular system, he is not able to log on until he provides his username/password; as soon as he logs in, the system deducts money from his account as per usage, similar to a prepaid mobile system.
File Delete Notification System: The system will write notification events whenever a file is deleted from the system, along with attributes such as time and date. This module is very useful in a college lab, as we can track when a file was deleted from a system.
User Activity Monitoring Module: This module will enable the administrator to keep track of the activities occurring on a user's system on a real-time basis. Every few minutes the screen of the user's desktop is captured as an image and transferred to the administrator, so that he can keep track of what is happening on the user's system.
Remote Desktop Login: This module enables the administrator to remotely log on to a particular system on the network to view what is happening on that remote computer.
SYSTEM DESIGN
The design stage takes the final specification of the system from the analysis stage and finds the best way of fulfilling it, given the technical environment and previous decisions on the required level of automation.
The system design is carried out in two phases:
i) Architectural Design (High Level Design)
ii) Detailed Design (Low Level Design)
1.2.4 ARCHITECTURAL DESIGN
The high-level design maps the given system to logical data structures. Architectural design involves identifying the software components, decoupling and decomposing the system into processing modules and conceptual data structures, and specifying the interconnections among components. Good notation can clarify the interrelationships and interactions of interest, while poor notation can complicate and interfere with good design practice. A data-flow-oriented approach was used to design the project. This includes the Entity Relationship Diagram (ERD) and Data Flow Diagrams (DFD).
1.2.4.1 Entity Relationship Diagram
One of the best design approaches is Entity Relationship Method. This design
approach is widely followed in designing projects normally known as Entity
Relationship Diagram (ERD).
ERD helps in capturing the business rules governing the data relationships of
the system and is a conventional aid for communicating with the end users in
the conceptual design phase. ERD consists of:
Entity: The term used to describe any object, place, person, concept or activity that the enterprise recognizes in the area under investigation and about which it wishes to collect and store data. Entities are diagrammatically represented as boxes.
Attribute: Attributes are the data elements used to describe the properties that distinguish the entities.
Relationship: An association or connection between two or more entities. Relationships are diagrammatically represented as arrows.
A unary relationship is a relationship between instances of the same entity.
A binary relationship is a relationship between two entities.
An N-ary relationship is a relationship among N entities. It is defined only when the relationship does not have a meaning without the participation of all N entities.
Degree of Relationship: An important aspect of a relationship between two or more entities is its degree. The different relationships recognized among the various data stores in the database are:
One-to-One (1:1)
It is an association between two entities. For example, each student
can have only one Roll No.
One-to-Many (1:M)
It describes entities that may have one or more entities related to it.
For example, a father may have one or many children.
Many-to-Many (M:M)
It describes entities that may have relationships in both directions.
This relationship can be explained by considering items sold by
Vendors. A vendor can sell many items and many vendors can sell
each item.
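The three degrees above can be sketched with plain data structures; the names and sample values here are illustrative only, not entities from the project's actual ERD:

```python
# One-to-one (1:1): each student has exactly one roll number.
roll_no = {"Asha": "CS-01", "Ravi": "CS-02"}

# One-to-many (1:M): a father may have one or many children.
children = {"Mohan": ["Asha", "Ravi"]}

# Many-to-many (M:M): vendors and items, modelled as a set of
# (vendor, item) pairs, i.e. the link table a relational database would use.
sells = {("V1", "pen"), ("V1", "ink"), ("V2", "pen")}

# A vendor can sell many items, and many vendors can sell the same item:
items_of_v1 = {item for vendor, item in sells if vendor == "V1"}
vendors_of_pen = {vendor for vendor, item in sells if item == "pen"}
print(sorted(items_of_v1))      # ['ink', 'pen']
print(sorted(vendors_of_pen))   # ['V1', 'V2']
```

The link-table pattern in the M:M case is exactly how such relationships are later mapped to physical tables in the detailed design.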
The ERD representation of the project is given below. It follows Chen's convention, in which entities are represented as rectangles and relationships as diamonds.
Entity Relationship Diagram
1.2.4.2 Context Analysis Diagram
The Context Analysis Diagram (CAD) is the top-level data flow diagram, which depicts an overview of the entire system. The major external entities, a single process and the output data stores constitute the CAD. Though this diagram does not depict the system in detail, it presents the overall inputs, process and output of the entire system at a very high level. The Context Analysis Diagram of the project is given ahead.
Context-Level Data Flow Diagram
(Figure: Computer Users and the System Administrator interact with the Cyber Security System process.)
1.2.4.3 Data Flow Diagrams
A Data Flow Diagram (DFD) is a graphical tool used to describe and analyze the movement of data through a system, manual or automated, including the processes, stores of data and delays in the system. DFDs are central tools and the basis from which other components are developed. A DFD depicts the
transformation of data from input to output through processes and the
interaction between processes.
A logical DFD shows the transformation of data from input to output through processes, independent of the physical components. The physical DFD shows the actual implementation and movement of data between people, departments and workstations.
DFDs are an excellent mechanism for communicating with customers during requirement analysis and are widely used for representing external and top-level internal design specifications. In the latter situation, DFDs are quite valuable for establishing naming conventions and the names of system components such as subsystems, files and data links.
In a DFD there are four components:
1. Sources or destinations of data, such as humans or entities that interact with the system from outside the system boundary, form the source and the recipient of information. They are depicted as a closed rectangle.
2. Data flow is a packet of data. It is a pipeline through which information flows, depicted in a DFD as an arrow with the pointer indicating the direction of flow. This connecting symbol links entities, processes and data stores, and also specifies the sender and the receiver.
3. Process depicts a procedure, function or module that transforms input data into output data. It is represented as a circle or bubble with the procedure name and a unique number inside the circle.
4. Data stores are the physical areas on the computer's hard disk where a group of related data is stored in the form of files. They are depicted as an open-ended rectangle. A data store is used either for storing data into files or for reference purposes.
DFD 1
DFD 2
DFD 3
DFD 4
1.2.5 DETAILED DESIGN
The low-level design maps the logical model of the system to a physical database design. The entities and attributes identified for the system were mapped into physical tables, with the name of each entity taken as the table name.
During the detailed design phase, the database, if any, and the programming modules are designed, and detailed user procedures are documented. The interfaces between the system users and the computers are also defined.
1.2.5.1 APPLICATION DESIGN
After the detailed problem definition and system analysis of the problem, the design of the application was taken up. Simplicity is hard to design: it is difficult to design something that is technically sophisticated but appears simple to use. Any software product must be efficient, fast and functional, but more importantly it must be user-friendly and easy to learn and use. For designing a good interface we should use the following principles:
i) Clarity and consistency
ii) Visual feedback
iii) Understanding the people
iv) Good response
MODULES
The software has been designed in a modular manner. There is a separate module for every function of the system. These are then integrated to build an easy-to-use system.
The various Modules of the Software were identified as:
Cyber Café Management: This feature allows the cyber café administrator to maintain the system chart, view which user is sitting at which computer, and maintain the accounting for that particular user. The user account can be topped up by the administrator with a particular amount. When the user sits at a particular system, he is not able to log on until he provides his username/password; as soon as he logs in, the system deducts money from his account as per usage, similar to a prepaid mobile system.
File Delete Notification System: The system will write notification events whenever a file is deleted from the system, along with attributes such as time and date. This module is very useful in a college lab, as we can track when a file was deleted from a system.
User Activity Monitoring Module: This module will enable the administrator to keep track of the activities occurring on a user's system
on a real-time basis. Every few minutes the screen of the user's desktop is captured as an image and transferred to the administrator, so that he can keep track of what is happening on the user's system.
Remote Desktop Login: This module enables the administrator to remotely log on to a particular system on the network to view what is happening on that remote computer.
WORKING ENVIRONMENT
2.1 Technical Specifications
HARDWARE ENVIRONMENT
PC with the following Configuration
Processor - Pentium-IV 3.0 GHz
RAM - 256 MB DDR2 RAM
HARD DISK - 80 GB
SOFTWARE ENVIRONMENT
Operating System - Microsoft Windows XP.
Backend - Microsoft Access
Frontend - VB.NET
Case Tools - Microsoft Word 2003, MS FrontPage
Technology Used: VB.NET
VB.NET
VB.NET introduces many exciting new features to the VB developer, though these enhancements do cause some minor compatibility issues with legacy code. The new Integrated Development Environment (IDE) incorporates some of the best ideas of VB 6.0 and InterDev to make it easier and more intuitive to quickly create applications using a wider variety of development resources. The code developed in the IDE can then be compiled to work with the new .NET Framework, which is Microsoft's new technology designed to better leverage internal and external Internet resources. The compiler writes the code for the Common Language Runtime (CLR), making it easier to interact with other applications not written in VB.NET. It is now possible to use true inheritance with VB, which means that a developer can more efficiently leverage code and reduce application maintenance. Not only is the CLR used for stand-alone VB applications, it is also used for Web applications, which makes it easier to exploit the full feature set of VB from a scripted Web application. Another way in which security is enhanced is through enforcement of data-type compatibility, which reduces the number of crashes due to poorly designed code. Exploiting the new features of VB.NET is not a trivial task, and many syntax changes were introduced that will cause incompatibilities with legacy code. But many of these are identified, emphasized, and in some cases automatically updated by the IDE when a VB 6.0 project is imported into VB.NET.
.NET Architecture
The .NET Framework consists of three parts: the Common Language Runtime, the Framework classes, and ASP.NET, which are covered in the following sections. The components of .NET tend to cause some confusion.
ASP.NET
One major headache that Visual Basic developers have had in the past is trying to reconcile the differences between compiled VB applications and applications built in the lightweight interpreted subset of VB known as VBScript. Unfortunately, when Active Server Pages were introduced, the language supported for server-side scripting was VBScript, not VB. (Technically, other languages could be used for server-side scripting, but VBScript has been the most commonly used.) Now, with ASP.NET, developers have a choice. Files with the ASP extension are still supported for backwards compatibility, but ASPX files have been introduced as well. ASPX files are compiled when first run, and they use the same syntax that is used in stand-alone VB.NET applications. Previously, many developers went through the extra step of writing a simple ASP page that simply executed a compiled method, but now it is possible to run compiled code directly from an Active Server Page.
Framework Classes
Ironically, one of the reasons that VB.NET is now so much more powerful is because it does so much less. Up through VB 6.0, the Visual Basic compiler had to do much more work than a comparable compiler for a language like C++. This is because much of the functionality that was built into VB was provided in C++ through external classes. This made it much easier to update and add features to the language and to increase compatibility among applications that shared the same libraries. Now, in VB.NET, the compiler adopts this model. Many features that were formerly in Visual Basic directly are now implemented through Framework classes. For example, if you want to take a square root, instead of using the VB operator, you use a method in the System.Math class. This approach makes the language much more lightweight and scalable.
.NET Servers
We mention this here only to distinguish .NET servers from the .NET Framework. These servers support Web communication but are not necessarily themselves written in the .NET Framework.
Common Language Runtime
The CLR provides the interface between your code and the operating system, providing such features as memory management, a common type system and garbage collection. It reflects Microsoft's efforts to provide a unified and safe framework for all Microsoft-generated code, regardless of the language used to create it.
What Is the .NET Framework?
The .NET Framework is Microsoft's latest offering in the world of cross-development (developing both desktop and Web-usable applications), interoperability and, soon, cross-platform development. As you go through this chapter, you'll see just how .NET meets these development requirements.
However, Microsoft's developers did not stop there; they wanted to completely revamp the way we program. In addition to the more technical changes, .NET strives to be as simple as possible. .NET contains functionality that a developer can easily access. This same functionality operates within the confines of standardized data types and naming conventions. This internal functionality also encompasses the creation of special data within an assembly file that is vital for interoperability, .NET's built-in security, and .NET's automatic resource management.
Another part of the keep-it-simple philosophy is that .NET applications are geared to be copy-only installations; in other words, a special installation package for your application is no longer a requirement. The majority of .NET applications work if you simply copy them into a directory. This feature substantially eases the burden on the programmer. The CLR changes the way that programs are written, because VB developers won't be limited to the Windows platform. Just as with ISO C/C++, VB programmers are now capable of seeing their programs run on any platform with the .NET runtime installed.
Furthermore, if you delegate a C programmer to oversee future developments on your VB.NET program, the normal learning curve
for this programmer will be dramatically reduced by .NET's multilanguage capabilities.
Introduction to the Common Language Runtime
The CLR controls .NET code execution. The CLR is the step above COM, MTS, and COM+ and will, in due time, replace them as the Visual Basic runtime layer. To developers, this means that our VB.NET code will execute on par with other languages, while maintaining the same small file size.

The CLR is the runtime environment for .NET. It manages code execution as well as the services that .NET provides. The CLR knows what to do through special data (referred to as metadata) that is contained within the applications. This special data stores a map of where to find classes, when to load classes, when to set up runtime context boundaries, generate native code, enforce security, and determine which classes use which methods. Since the CLR is privy to this information, it can also determine when an object is used and when it is released. This is known as managed code.

Managed code allows us to create fully CLR-compliant code. Code that is compiled with COM and Win32 API declarations is called unmanaged code, which is what you got with previous versions of Visual Basic. Managed code keeps us from depending on obstinate dynamic link library (DLL) files (discussed in the "Ending DLL Hell" section later in this chapter). In fact, thanks to the CLR, we don't have to deal with the registry, globally unique identifiers (GUIDs), AddRef, HRESULTs, and all the macros and application programming interfaces (APIs) we depended on in the past. They aren't even available options in .NET. Removing all the excess also provides a more consistent programming model. Since the CLR encapsulates all the functions that we had with unmanaged code, we won't have to depend on any pre-existing DLL files residing on the hard drive. This does not mean that we have seen the last of DLLs; it simply means that the .NET Framework contains a system within it that can map out the location of all the resources we are using. We are no longer dependent upon VB runtime files being installed, or on certain pre-existing components.

Because CLR-compliant code is also Common Language Specification (CLS)-compliant code, it allows CLR-based code to execute properly. The CLS is a subset of the CLR types defined in the Common Type System (CTS), which is also discussed later in the chapter. CLS features are instrumental in the interoperability process, because they contain the basic types required for CLR operability. These combined features allow .NET to handle multiple programming languages. The CLR manages the mapping; all that you need is a compiler that can generate the code and the special
data needed within the application for the CLR to operate. This ensures that any dependencies your application might have are always met and never broken.
When you set your compiler to generate .NET code, it runs through the CTS and inserts the appropriate data within the application for the CLR to read. Once the CLR finds the data, it proceeds to run through it and lay out everything it needs within memory, declaring any objects when they are called (but not before). Any application interaction, such as passing values from classes, is also mapped within the special data and handled by the CLR.
Using .NET-Compliant Programming Languages

.NET isn't just a single, solitary programming language taking advantage of a multiplatform system. A runtime that allows portability, but requires you to use a single programming model, would not truly be delivering on its perceived value. If this were the case, your reliance on that language would become a liability when the language does not meet the requirements for a particular task. All of a sudden, portability takes a back seat to necessity: for something to be truly portable, you require not only a portable runtime but also the ability to code in what you need, when you need it. .NET solves that problem by allowing any .NET-compliant programming language to run. Can't get that bug in your class worked out in VB, but you know that you can work around it in C? Use C# to create a class that can be easily used with your VB application. Third-party programming language users don't need to fret for long, either; several companies plan to create .NET-compliant versions of their languages. Currently, the only .NET-compliant languages are the entire Microsoft flavor; for more information, check these out at http://msdn.microsoft.com/net:

- C#
- C++ with Managed Extensions
- VB.NET
- ASP.NET (although this one is more a subset of VB.NET)
- JScript.NET
Visual Basic for Windows is a little over ten years old. It debuted on March 20, 1991, at a show called Windows World, although its roots go back to a tool called Ruby that Alan Cooper developed in 1988.
Origin of .NET Technology
1. OLE Technology

Object Linking and Embedding (OLE) technology was developed by Microsoft in the early 1990s to enable easy interprocess communication and to embed documents from one application into another. This enabled users to develop applications which required interoperability between products such as MS Word and MS Excel.
2. COM Technology

Microsoft introduced a component-based model for developing software programs. In the component-based approach, a program is broken into a number of independent components, where each one offers a particular service. This reduces the overall complexity of software, enables distributed development across multiple organizations or departments, and enhances software maintainability.
3. .NET Technology

.NET technology is a third-generation component model. It provides a new level of interoperability compared to COM technology. COM provides a standard binary mechanism for inter-module communication; this mechanism is replaced by an intermediate language called Microsoft Intermediate Language (MSIL), or simply IL.
Introduction to the .NET Framework and Visual Studio .NET
The .NET Framework (pronounced "dot net framework") defines the
environment that you use to execute Visual Basic .NET applications and the services you can use within those applications. One of the main goals of this framework is to make it easier to develop applications that run over the Internet. However, this framework can also be used to develop traditional business applications that run on the Windows desktop. To develop a Visual Basic .NET application, you use a product called Visual Studio .NET (pronounced "Visual Studio dot net"). This is actually a suite of products that includes the three programming languages described below, among them Visual Basic .NET, which is designed for rapid application development. Visual Studio also includes several other components that make it an outstanding development product. One of these is the
Microsoft Development Environment, which you'll be introduced to in a moment. Another is the Microsoft SQL Server 2000 Desktop Engine (or MSDE). MSDE is a database engine that runs on your own PC so you can use Visual Studio for developing database applications that are compatible with Microsoft SQL Server. SQL Server in turn is a database management system that can be used to provide the data for large networks of users or for Internet applications.
The two other languages that come with Visual Studio .NET are C# and C++. C# .NET (pronounced "C sharp dot net") is a new language that has been developed by Microsoft especially for the .NET Framework. Visual C++ .NET is Microsoft's version of the C++ language that is used on many platforms besides Windows PCs.
Programming languages supported by Visual Studio .NET

Visual Basic .NET - Designed for rapid application development.
Visual C# .NET - A new language that combines the features of Java and C++ and is suitable for rapid application development.
Visual C++ .NET - Microsoft's version of C++ that can be used for developing high-performance applications.
Two other components of Visual Studio .NET

Microsoft Development Environment - The Integrated Development Environment (IDE) that you use for developing applications in any of the three languages.
Microsoft SQL Server 2000 Desktop Engine - A database engine that runs on your own PC so you can use Visual Studio for developing database applications that are compatible with Microsoft SQL Server.
Platforms that can run Visual Studio .NET
Windows 2000 and later releases of Windows
Platforms that can run Visual Studio .NET applications
Windows 98 and later releases of Windows, depending on which .NET components the application uses.
Visual Basic .NET Standard Edition
An inexpensive alternative to the complete Visual Studio .NET package that supports a limited version of Visual Basic .NET as its only programming language.
Description
The .NET Framework defines the environment that you use for executing Visual Basic .NET applications.
Visual Studio .NET is a suite of products that includes all three of the programming languages listed above. These languages run within the .NET Framework. You can develop business applications using either Visual Basic .NET or Visual C# .NET. Both are integrated with the design environment, so the development techniques are similar although the language details vary. Besides the programming languages listed above, third-party vendors can develop languages for the .NET Framework. However, programs written in these languages can't be developed from within Visual Studio .NET.
The components of the .NET Framework
The .NET Framework provides a common set of services that
application programs written in a .NET language such as Visual Basic .NET can use to run on various operating systems and hardware platforms. The .NET Framework is divided into two main components: the .NET Framework Class Library and the Common Language Runtime.

The .NET Framework Class Library consists of segments of pre-written code called classes that provide many of the functions that you need for developing .NET applications. For instance, the Windows Forms classes are used for developing Windows Forms applications. The ASP.NET classes are used for developing Web Forms applications. And other classes let you work with databases, manage security, access files, and perform many other functions. Although it's not apparent in this figure, the classes in the .NET Framework Class Library are organized in a hierarchical structure. Within this structure, related classes are organized into groups called namespaces. Each namespace contains the classes used to support a particular function. For example, the System.Windows.Forms namespace contains the classes used to create forms, and the System.Data namespace contains the classes you use to access data.

The Common Language Runtime, or CLR, provides the services that are needed for executing any application that's developed with one of the .NET languages. This is possible because all of the .NET languages compile to a common intermediate language, which you'll learn more about in the next figure. The CLR also provides the Common Type System that defines the data types that are used by all the .NET languages. That way, you can use more than one of the .NET languages as you develop a single application without worrying about incompatible data types. If you're new to programming, the diagram in this figure probably doesn't mean too much to you right now.
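As a rough sketch of how these namespaces appear in code (the table name and column here are invented for illustration; DataTable and Console are standard Framework classes):

```vb
' Importing a namespace lets its classes be used without full qualification.
Imports System
Imports System.Data

Module NamespaceDemo
    Sub Main()
        ' DataTable lives in the System.Data namespace.
        Dim table As New DataTable("Customers")
        table.Columns.Add("Name", GetType(String))
        table.Rows.Add("Alice")
        Console.WriteLine(table.Rows.Count)   ' prints 1
    End Sub
End Module
```

Without the Imports statements, the same code would have to spell out System.Data.DataTable in full.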
Description
.NET applications do not access the operating system or computer hardware directly. Instead, they use services of the .NET Framework, which in turn access the operating system and hardware.

The .NET Framework consists of two main components: the .NET Framework Class Library and the Common Language Runtime.

The .NET Framework Class Library provides pre-written code in the form of classes that are available to all of the .NET programming languages. This class library consists of hundreds of classes, but you can create simple .NET applications once you learn how to use just a few of them.

The Common Language Runtime, or CLR, is the foundation of the .NET Framework. It manages the execution of .NET programs by coordinating essential functions such as memory management, code execution, security, and other services. Because .NET applications are managed by the CLR, they are called managed applications.

The Common Type System is a component of the CLR that ensures that all .NET applications use the same basic data types regardless of what programming languages were used to develop the applications.
The Common Language Runtime
Visual Basic has always used a runtime, so it may seem strange to say that the biggest change to VB that comes with .NET is the change to a Common Language Runtime (CLR) shared by all .NET languages. The reason is that while on the surface the CLR is a runtime library just like the C runtime library, MSVCRTXX.DLL, or the VB runtime library, MSVBVMXX.DLL, it is much larger and has greater functionality. Because of its richness, writing programs that take full advantage of the CLR often seems like writing for a whole new operating system API. Since all languages that are .NET-compliant use the same CLR, there is no need for a language-specific runtime. What is more, CLR-compliant code can be written in any language and still be used equally well by all .NET CLR-compliant languages.

Your VB code can be used by C# programmers and vice versa with no extra work. Next, there is a common file format for .NET executable code, called Microsoft Intermediate Language (MSIL, or just IL). MSIL is a semi-compiled language that gets compiled into native code by the .NET runtime at execution time. This is a vast extension of what existed in all versions of VB prior to version 5. VB apps used to be compiled to p-code (or pseudo code, a machine language for a hypothetical machine), which was an intermediate representation of the final executable code. The various VB runtime engines interpreted the p-code when a user ran the program. People always complained that VB was too slow because of this, and therefore constantly begged Microsoft to add native compilation to VB. This happened starting in version 5, when you had a choice of p-code (small) or native code (bigger but presumably faster). The key point is that .NET languages combine the best features of a p-code language with the best features of compiled languages. By having all languages write to MSIL, a kind of p-code, and then compiling the resulting MSIL to native code, it is relatively easy to have cross-language compatibility. But by ultimately generating native code you still get good performance.
Completely Object Oriented
The object-oriented features in VB5 and VB6 were (to be polite) somewhat limited. One key issue was that these versions of VB could not automatically initialize the data inside a class when creating an instance of the class. This led to classes being created in an indeterminate (potentially buggy) state and required the programmer to exercise extra care when using objects. To resolve this, VB .NET adds an important feature called parameterized constructors.
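A minimal sketch of a parameterized constructor in VB .NET; the Account class and its field are invented for illustration:

```vb
Public Class Account
    Private _balance As Decimal

    ' Sub New replaces VB6's Class_Initialize and can take parameters,
    ' so an Account can never exist in an uninitialized state.
    Public Sub New(ByVal openingBalance As Decimal)
        _balance = openingBalance
    End Sub

    Public ReadOnly Property Balance() As Decimal
        Get
            Return _balance
        End Get
    End Property
End Class

' Usage: the object is fully initialized at the moment of creation.
' Dim acct As New Account(100D)
```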
Another problem was the lack of true inheritance. Inheritance is aform of code reuse where you use certain objects that are reallymore specialized versions of existing objects. Inheritance is thus theperfect tool when building something like a better textbox based onan existing textbox. In VB5 and 6 you did not have inheritance, soyou had to rely on a fairly cumbersome wizard to help make theprocess of building a better textbox tolerable.
Automatic Garbage Collection: Fewer Memory Leaks
Programmers who used Visual Basic always had a problem withmemory leaks from what are called circular references. (A circularreference is when you have object A referring to object B and objectB referring to object A.) Assuming this kind of code was not therefor a reason, there was no way for the VB compiler to realize that
this circularity was not significant. This meant that the memory forthese two objects was never reclaimed. The garbage collectionfeature built into the .NET CLR eliminates this problem of circularreferences using much smarter algorithms to determine whencircular references can be cut and the memory reclaimed. Ofcourse, this extra power comes at a cost.
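The circular-reference scenario can be sketched as follows (the Node class is invented for illustration):

```vb
' Two objects that refer to each other: under VB6's reference counting
' this cycle kept both alive forever; the CLR's tracing collector does not.
Public Class Node
    Public Partner As Node
End Class

Module GcDemo
    Sub Main()
        Dim a As New Node()
        Dim b As New Node()
        a.Partner = b          ' A refers to B...
        b.Partner = a          ' ...and B refers back to A.

        a = Nothing
        b = Nothing
        ' Both objects are now unreachable from any root; despite the
        ' cycle, the garbage collector can reclaim them.
        GC.Collect()
    End Sub
End Module
```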
Structured Exception Handling
All versions of Visual Basic use a form of error handling that dates
back to the first Basic written almost 40 years ago. To be charitable, it had problems. To be uncharitable (but, we feel, realistic), it is absurd to use On Error GoTo, with all the spaghetti-code problems that ensue, in a modern programming language. Visual Basic .NET adds structured exception handling, the most modern and most powerful means of handling errors.
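A minimal sketch of the Try...Catch...Finally construct that replaces On Error GoTo:

```vb
Module ExceptionDemo
    Sub Main()
        Try
            Dim divisor As Integer = 0
            ' Integer division by zero throws a DivideByZeroException.
            Dim result As Integer = 10 \ divisor
            Console.WriteLine(result)
        Catch ex As DivideByZeroException
            ' The handler is scoped and typed; no GoTo, no spaghetti.
            Console.WriteLine("Caught: " & ex.Message)
        Finally
            ' Runs whether or not an exception occurred.
            Console.WriteLine("Cleanup")
        End Try
    End Sub
End Module
```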
True Multithreading
Multithreaded programs seem to do two things at once. E-mailprograms that let you read old e-mail while downloading new e-mailare good examples. Users expect such apps, but you could not writethem very easily in earlier versions of VB.
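The e-mail scenario above can be sketched with the System.Threading classes (the method names and the Sleep delay are invented to stand in for real work):

```vb
Imports System.Threading

Module ThreadDemo
    Sub DownloadMail()
        Thread.Sleep(100)          ' simulate a slow network operation
        Console.WriteLine("New mail downloaded")
    End Sub

    Sub Main()
        ' Start the download on a worker thread...
        Dim worker As New Thread(AddressOf DownloadMail)
        worker.Start()

        ' ...while the main thread stays free for the user.
        Console.WriteLine("Reading old mail meanwhile...")

        worker.Join()              ' wait for the worker to finish
    End Sub
End Module
```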
How a Visual Basic application is compiled and run
The figure below shows how an application is compiled and run when
using Visual Basic .NET. To start, you use Visual Studio .NET tocreate a project, which is made of one or more source files that
contain Visual Basic statements. Most simple projects consist of just one source file, but more complicated projects can have more than one source file. A project may also contain other types of files, such as sound files, image files, or simple text files. As the figure shows, a solution is a container for projects, which you'll learn more about in a moment. You use the Visual Basic compiler, which is built into Visual Studio, to compile your Visual Basic source code into Microsoft Intermediate Language (or MSIL). For short, this can be referred to as Intermediate Language (or IL). At this point, the Intermediate Language is stored on disk in a file that's called an assembly. In addition to the IL, the assembly includes references to the classes that the application requires. The assembly can then be run on any PC that has the Common Language Runtime installed on it. When the assembly is run, the CLR converts the Intermediate Language to native code that can be run by the Windows operating system. Although the CLR is only available for Windows systems right now, it is possible that the CLR will eventually be available for other operating systems as well. In other words, the Common Language Runtime makes platform independence possible. If, for example, a CLR is developed for the Unix and Linux operating systems, Visual Basic applications will be able to run on those operating systems as well as Windows operating systems.
Description
1. The programmer uses Visual Studio's Integrated Development Environment to create a project, which includes one or more Visual Basic source files. In some cases, a project may contain other types of files, such as graphic image files or sound files. A solution is a container that holds projects. Although a solution can contain more than one project, the solution for most simple applications contains just one project, so you can think of the solution and the project as essentially the same thing.

2. The Visual Basic compiler translates, or builds, the source code into Microsoft Intermediate Language (MSIL), or just Intermediate Language (IL). This language is stored on disk in an assembly that also contains references to the classes that the application requires. An assembly is simply an executable file that has an .exe or .dll extension.

3. The assembly is then run by the .NET Framework's Common Language Runtime. The CLR manages all aspects of how the assembly is run, including converting the Intermediate Language to native code that can be run by the operating system, managing memory for the assembly, enforcing security, and so on.
The VB .NET IDE: Visual Studio .NET
The concept of a rapid application development (RAD) tool with controls that you drag onto forms is certainly still there, and pressing F5 will still run your program, but much has changed, and mostly for the better. For example, the horrid Menu Editor that has remained essentially unchanged since VB1 has been replaced by an in-place menu editing system that is a dream to use. Also, VB .NET, unlike earlier versions of VB, can build many kinds of applications other than just GUI-intensive ones. For example, you can build Web-based applications, server-side applications, and even console-based applications (in what looks like an old-fashioned DOS window). Moreover, there is finally a unified development environment for all of the Visual languages from Microsoft. The days when there were different IDEs for VC++, VJ++, Visual InterDev, Visual Basic, and DevStudio are gone. Another nice feature of the new IDE is the customization possible via an enhanced extensibility model. VS .NET can be set up to look much like the IDE from VB6, or any of the other IDEs, if you like those better.
VB .NET is the first fully object-oriented version of VB
Introduction to OOP
OOP is a vast extension of the event-driven, control-based model of programming used in early versions of VB. With VB .NET, your entire program will be made up of self-contained objects that interact. These objects are stamped out from factories called classes. These objects will:

- Have certain properties and certain operations they can perform.
- Not interact with each other in ways not provided by your code's public interface.
- Only change their current state over time, and only in response to a specific request. (In VB .NET this request is made through a property change or a method call.)

The point is that as long as the objects satisfy their specifications as to what they can do (their public interface) and thus how they respond to outside stimuli, the user does not have to be interested in how that functionality is implemented. In OOP-speak, you only care about what objects expose.
Classes As User-Defined Types
Another way to approach classes is to think of them as an extension of user-defined types where, for example, the data that is stored inside one can be validated before any changes take place. Similarly, a class is able to validate a request to return data before doing so. Finally, imagine a type that has methods to return data in a special form rather than simply spew out the internal representation. From this point of view, an object is then simply a generalization of a specific (data-filled) user-defined type with functions attached to it for data access and validation. The key point you need to keep in mind is that you are replacing direct access to data by various kinds of function calls that do the work.

For example, consider a user-defined type such as this:

Type EmployeeInfo
    Name As String
    SocialSecurityNumber As String
    Address As String
End Type

The pseudocode that makes this user-defined type "smart" would hide the actual data and have functions instead to return the values. The pseudocode might look like this:

EmployeeInfo as a CLASS
    (hidden) Name As String - instead has functions that validate, return, and change the name
    (hidden) SocialSecurityNumber As String - instead has functions that validate, return, and change the Social Security number
    (hidden) Address As String - instead has functions that validate, return, and change the address, and also return it in a useful form
End EmployeeInfo as CLASS
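The pseudocode above might be realized in VB .NET roughly like this; the validation rules are invented for illustration:

```vb
Public Class EmployeeInfo
    ' Instance fields are hidden (Private); all access goes through properties.
    Private _name As String
    Private _ssn As String

    Public Property Name() As String
        Get
            Return _name
        End Get
        Set(ByVal value As String)
            ' Validate before any change takes place.
            If value Is Nothing OrElse value.Length = 0 Then
                Throw New ArgumentException("Name cannot be empty")
            End If
            _name = value
        End Set
    End Property

    Public Property SocialSecurityNumber() As String
        Get
            ' Return the number in a useful form, not the raw internal string.
            Return _ssn.Insert(5, "-").Insert(3, "-")
        End Get
        Set(ByVal value As String)
            If value.Length <> 9 Then
                Throw New ArgumentException("SSN must be nine digits")
            End If
            _ssn = value
        End Set
    End Property
End Class
```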
How Should Objects Interact

One key practice in OOP is making each class (= object factory) responsible for carrying out only a small set of related tasks. You spend less time designing the class and debugging it when your classes are designed to build small objects that perform relatively few tasks, rather than architected with complex internal data along with many properties and methods to manipulate the internal data. If an object needs to do something that is not its responsibility, make a new class whose objects will be optimized for that task instead of adding the code to the first object and thus complicating the original object. If you give the first object access to the second type of object, then the first object can ask the second object to carry out the required task.
Abstraction
Abstraction is a fancy term for building a model of an object in code. In other words, it is the process of taking concrete day-to-day objects and producing a model of the object in code that simulates how the object interacts in the real world. For example, the first object-oriented language was called Simula, because it was invented to make simulations easier. Of course, the more modern ideas of virtual reality carry abstraction to an extreme. Abstraction is necessary because you cannot use OOP successfully if you cannot step back and abstract the key issues from your problem. Always ask yourself: what properties and methods will I need to mirror in the object's code so that my code will model the situation well enough to solve the problem?
Encapsulation
Encapsulation is the formal term for what we used to call data hiding. It means: hide the data, but define properties and methods that let people access it. Remember that OOP succeeds only if you manipulate data inside objects by sending requests to the object. The data in an object is stored in its instance fields. Other terms you will see for the variables that store the data are member variables and instance variables. All three terms are used interchangeably, and which you choose is a matter of taste; we usually use instance fields. The current values of these instance
fields for a specific object define the object's current state. Keep in mind that you should never, ever give anyone direct access to the instance fields.
Inheritance
As an example of inheritance, imagine specializing the Employee class to get a Programmer class, a Manager class, and so on. Classes such as Manager would inherit from the Employee class. The Employee class is called the base (or parent) class, and the Manager class is called the child class. Child classes are always more specialized than their base (parent) classes, and they have at least as many members as their parent classes (although the behavior of an individual member may be very different).
Polymorphism
Traditionally, polymorphism (from the Greek for "many forms") means that inherited objects know what methods they should use, depending on where they are in the inheritance chain. For example, as we noted before, an Employee parent class and, therefore, the inherited Manager class both have a method for changing the salary of their object instances. However, the RaiseSalary method probably works differently for individual Manager objects than for plain old Employee objects. The way polymorphism works in the classic situation where a Manager class inherits from an Employee class is that an Employee object would know if it were a plain old employee or really a manager. When it got the word to use the RaiseSalary method, then:

- If it were a Manager object, it would call the RaiseSalary method in the Manager class rather than the one in the Employee class.
- Otherwise, it would use the usual RaiseSalary method.
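The Employee/Manager example can be sketched in VB .NET like this, using Inherits for the parent/child relationship and Overridable/Overrides for the polymorphic method (the salary figures and raise percentages are invented):

```vb
Public Class Employee
    Protected _salary As Decimal

    Public Sub New(ByVal startingSalary As Decimal)
        _salary = startingSalary
    End Sub

    ' Overridable allows child classes to supply their own version.
    Public Overridable Sub RaiseSalary()
        _salary *= 1.05D            ' plain employees: 5 percent
    End Sub

    Public ReadOnly Property Salary() As Decimal
        Get
            Return _salary
        End Get
    End Property
End Class

Public Class Manager
    Inherits Employee               ' Manager is a specialized Employee

    Public Sub New(ByVal startingSalary As Decimal)
        MyBase.New(startingSalary)
    End Sub

    Public Overrides Sub RaiseSalary()
        _salary *= 1.1D             ' managers: 10 percent
    End Sub
End Class

' Dim e As Employee = New Manager(1000D)
' e.RaiseSalary()   ' runs Manager.RaiseSalary, even through an Employee variable
```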
Advantages to OOP
At first glance, the OOP approach that leads to classes and their associated methods and properties is much like the structured approach that leads to modules. But the key difference is that classes are factories for making objects whose states can diverge over time. Sound too abstract? Sound as though it has nothing to do with VB programming? Well, this is exactly what the Toolbox is! Each control on the Toolbox in earlier versions of VB was a little factory for making objects that are instances of that control's class. Suppose the Toolbox was not a bunch of little class factories waiting to churn out new textboxes and command buttons in response to your requests. Can you imagine how convoluted your VB code would have been if you needed a separate code
module for each textbox? After all, the same code module cannot be linked into your code twice, so you would have to do some fairly complicated coding to build a form with two identical textboxes whose states can diverge over time.
Windows Forms, Drawing, and Printing
EVERYTHING YOU HEAR ABOUT .NET development in the magazines or online seems to focus on features such as Web Services, using the browser as the delivery platform, ASP .NET, and other Web-based topics. The many, many improvements made to client-side Windows GUI development under .NET using the Visual Studio IDE are barely mentioned. This may sound strange to say of a Microsoft product, but GUI development in Visual Studio is under-hyped; there are, in fact, many improvements that VB programmers have long awaited! Although we agree that using the browser as a delivery platform is clearly becoming more and more important, we also feel pretty strongly that the traditional Windows-based client is not going away. In this chapter, we hope to counterbalance this general trend by showing you the fundamentals of the programming needed to build GUIs in VB .NET. We will not spend a lot of time on how to use the RAD (Rapid Application Development) features of the IDE, or the properties, methods, and events for the various controls in the Toolbox; doing this justice would take a book at least as long as this one. Instead, by concentrating on the programming issues involved, we hope to show you how GUI development in .NET works. At that point, you can look at the documentation as needed or wait for a complete book on GUI development to learn more. After discussing how to program with forms and controls, we take up the basics of graphics programming in VB .NET, which is quite a bit different than it was.
Form Designer Basics
For VB6 programmers, adjusting to how the VS .NET IDE handles forms and controls is pretty simple. You have a couple of new (and very cool) tools that we briefly describe later, but the basic idea of how to work with the Toolbox has not changed very much. (See the sections in this chapter on the Menu Editor and on how to change the tab order, for our two favorite additions.) For those who have never used an older version of the VB IDE, here is what you need to do to add a control to the Form window:

1. Double-click on a control, or drag it from the Toolbox to the form in the default size.
2. Position it by clicking inside it and then dragging it to the correct location.
3. Resize it by dragging one of the small square sizing boxes that the cursor points to.
You can also add controls to a form by following these steps:
1. In the Toolbox, click on the control you want to add to your form.
2. Move the cursor to the form. (Unlike earlier versions of VB, the cursor now gives you a clue about which control you are working with.)
3. Click where you want to position the top left corner of the control and then drag to the lower right corner position. (You can then use Shift+ an Arrow key to resize the control as needed.)
For controls without a user interface, such as timers, simply double-click on them. They end up in a tray beneath the form, thus reducing clutter. You can use the Format menu to reposition and resize controls once they are on the form. Of course, many of the items on the Format menu, such as the ones on the Align submenu, make sense only for a group of controls. One way to select a group of controls is to click the first control in the group and then hold down the Control key while clicking the other members you want in the group. At this point they will all show sizing handles, but only one control will have dark sizing handles.
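The designer steps above also have a code equivalent, which is worth seeing because it is what the designer generates behind the scenes. The following is a minimal sketch, not code from this project; the form and button names (frmMain, btnGo) are hypothetical:

```vbnet
Imports System.Windows.Forms
Imports System.Drawing

' Hypothetical form that adds a button in code rather than via the designer.
Public Class frmMain
    Inherits Form

    Public Sub New()
        Dim btnGo As New Button()
        btnGo.Text = "Go"
        ' Location and Size play the role of dragging in the designer.
        btnGo.Location = New Point(10, 10)
        btnGo.Size = New Size(75, 23)
        ' Adding the control to the Controls collection makes it appear on the form.
        Me.Controls.Add(btnGo)
    End Sub
End Class
```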
MDI Forms
In earlier versions of VB, Multiple Document Interface (MDI) applications required you to decide which form was the MDI parent
form at design time. In .NET, you need only set the IsMdiContainer property of the form to True. You create the child forms at design time or at run time via code, and then set their MdiParent properties to reference a form whose IsMdiContainer property is True. This lets you do something that was essentially impossible in earlier versions of VB: change an MDI parent/child relationship at run time. It also allows an application to contain multiple MDI parent forms.
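The two property assignments just described can be sketched as follows; the form variable names are hypothetical:

```vbnet
Imports System.Windows.Forms

Module MdiDemo
    Sub Main()
        Dim parentForm As New Form()
        parentForm.IsMdiContainer = True   ' makes this form an MDI container

        Dim childForm As New Form()
        childForm.MdiParent = parentForm   ' attaches the child at run time
        childForm.Show()

        ' Run the message loop with the parent as the main form.
        Application.Run(parentForm)
    End Sub
End Module
```

Because MdiParent is set at run time, the same child could later be reassigned to a different parent whose IsMdiContainer property is True.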
Database Access with VB .NET
ADO .NET
With each version of VB came a different model for accessing a database. VB .NET follows in this tradition with a whole new way of accessing data: ADO .NET. This means ADO .NET is horribly misnamed. Why? Because it is hardly the next generation of ADO! In fact, it is a completely different model for accessing data than classic ADO. In particular, you must learn a new object model based on a DataSet object for your results. (Because they are not tied to a single table, ADO .NET DataSet objects are far more capable than ADO RecordSet objects, for example.) In addition, ADO .NET:
- Is designed as a completely disconnected architecture (although the DataAdapter, Connection, Command, and DataReader classes are still connection-oriented).
- Does not support server-side cursors. ADO's dynamic cursors are no longer available.
- Is XML-based (which lets you work over the Internet, even if the client sits behind a firewall).
- Is part of the .NET System.Data.DLL assembly, rather than being language-based.
- Is unlikely to support legacy Windows 95 clients.
The other interesting point is that in order to have essential features such as two-phase commit, you need to use Enterprise Services (which is basically COM+/MTS with a .NET wrapper).
In VB6, a typical database application opened a connection to the database and then used that connection for all queries for the life of the program. In VB .NET, database access through ADO .NET usually depends on disconnected (detached) data access. This is a fancy way of saying that you most often ask for the data from a database and then, after your program retrieves the data, the connection is dropped. With ADO .NET, you are very unlikely to have a persistent connection to a data source. (You can continue to use persistent connections through classic ADO using the
COM/Interop facilities of .NET, with the attendant scalability problems that classic ADO always had.) Because data is usually disconnected, a typical .NET database application has to reconnect to the database for each query it executes. At first, this seems like a big step backward, but it really is not. The old way of maintaining a connection is not really practical for a distributed world: if your application opens a connection to a database and then leaves it open, the server has to maintain that connection until the client closes it. With heavily loaded servers pushing googols of bits of data, maintaining all those per-client connections is very costly in terms of bandwidth.
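The disconnected pattern described above can be sketched like this. This is a minimal illustration, assuming an OLE DB data source; the connection string, file name, and table name are hypothetical:

```vbnet
Imports System.Data
Imports System.Data.OleDb

Module DisconnectedDemo
    Sub Main()
        ' Hypothetical connection string and query.
        Dim connString As String = _
            "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=cafe.mdb"
        Dim conn As New OleDbConnection(connString)
        Dim adapter As New OleDbDataAdapter("SELECT * FROM Customers", conn)
        Dim ds As New DataSet()

        ' Fill opens the connection, runs the query, and closes the
        ' connection again: the DataSet is now fully disconnected.
        adapter.Fill(ds, "Customers")

        ' The cached rows can be read with no open connection.
        For Each row As DataRow In ds.Tables("Customers").Rows
            Console.WriteLine(row(0))
        Next
    End Sub
End Module
```

The key point is that the connection exists only for the duration of the Fill call; everything afterward works against the in-memory DataSet.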
System.Data.SqlClient
Retrieving data from a SQL Server database is similar: the syntax for the OleDb and SqlClient namespaces is almost identical. The key difference (aside from the different class names) is the form of the connection string, which assumes there is a test account with a password of apress on a server named Apress. The SQL Server connection string requires the user ID, password, server, and database name. We pass the connection string to get a connection object. Finally, as you can imagine, more complicated SQL queries are easy to construct: just build up the query string one piece at a time.
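A sketch of what such a connection string looks like, using the test account, apress password, and Apress server named in the text; the database and table names are hypothetical:

```vbnet
Imports System.Data.SqlClient

Module SqlDemo
    Sub Main()
        ' User ID, password, server, and database name, as described above.
        Dim connString As String = _
            "User ID=test;Password=apress;Server=Apress;Database=CyberCafe"
        Dim conn As New SqlConnection(connString)
        conn.Open()

        ' Build the query string one piece at a time.
        Dim query As String = "SELECT COUNT(*) " & "FROM Customers"
        Dim cmd As New SqlCommand(query, conn)
        Console.WriteLine(cmd.ExecuteScalar())

        conn.Close()
    End Sub
End Module
```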
========================================================
Software Engineering process
The attributes of web-based systems and applications have a profound influence on the web engineering process that is chosen. If immediacy and continuous evolution are the primary attributes of a web application, a web engineering team might choose an agile process model that produces web application releases in a rapid-fire sequence. On the other hand, if the web application is to be developed over a long time period (e.g., a major e-commerce application), an incremental process model can be chosen.
The network-intensive nature of applications in this domain suggests a population of users that is diverse (thereby making special demands on requirements elicitation and modeling) and an application architecture that can be highly specialized. Because web applications are often content-driven with an emphasis on aesthetics, it is likely that parallel development activities will be scheduled within the web application process and will involve a team of both technical and nontechnical people (e.g., copywriters, graphic designers).
Defining the framework
Any one of the agile process models (e.g., Extreme Programming, Adaptive Software Development, SCRUM) can be applied as a web engineering process.
To be effective, any engineering process must be
adaptive. That is, the organization of the project team,
the modes of communication among team members, the
engineering activities and tasks to be performed, the information that is collected and created, and the methods used to produce a high-quality product must all be adapted to the people doing the work, the project timeline and constraints, and the problem to be solved. Before we define a process framework for web engineering, we must recognize that:
1. Webapps are often delivered incrementally. That is, framework activities will occur repeatedly as each increment is engineered and delivered.
2. Changes will occur frequently. These changes
may occur as a result of the evaluation of a delivered
increment or as a consequence of changing business
conditions.
3. Timelines are short. This militates against the creation and review of voluminous engineering documentation, but it does not preclude the simple reality that critical analysis, design, and testing must be recorded in some manner.
========================================================
Software Model of the
Project
The software model used in our project is the Incremental Model. We used the incremental model because the project was done in increments or parts, and these parts were tested individually. For example, the Candidate Registration and music uploading page was developed first and tested thoroughly; then the other part, the registration module, was developed and tested individually.
The incremental model combines elements of the linear sequential model with the iterative philosophy of prototyping. The incremental model applies linear sequences in a staggered fashion as time progresses. Each linear sequence produces a deliverable increment of the software. For example, word processing software may deliver basic file management, editing, and document production functions in the first increment; more sophisticated editing and document production in the second increment; spelling and grammar checking in the third increment; advanced page layout in the fourth increment; and so on. The process flow for any increment can incorporate the prototyping model.
When an incremental model is used, the first increment is often a core product. Hence, basic requirements are met, but supplementary features remain undelivered. The client uses the core product. As a result of his evaluation, a plan is developed for the next increment. The plan addresses improvement of the core features and addition of supplementary features. This process
is repeated following delivery of each increment, until the complete product is produced. As opposed to prototyping, incremental models focus on the delivery of an operational product after every iteration.
Advantages:
- Particularly useful when staffing is inadequate for a complete implementation by the business deadline.
- Early increments can be implemented with fewer people. If the core product is well received,
additional staff can be added to implement the next increment.
- Increments can be planned to manage technical risks. For example, the system may require availability of some hardware that is under development. It may be possible to plan early increments without the use of this hardware, thus enabling partial functionality and avoiding unnecessary delay.
========================================================
Figure 1.6: The incremental model. (Following system/information engineering, each of increments 1 through 4 passes through Analysis, Design, Code, and Test in a staggered fashion across calendar time, ending with delivery of the 1st through 4th increments.)
Time Scheduling
Scheduling of a software project does not differ greatly from scheduling of any multitask development effort. Therefore, generalized project scheduling tools and techniques can be applied to software with little modification.
The program evaluation and review technique (PERT) and the critical path method (CPM) are two project scheduling methods that can be applied to software development. Both techniques use a task network description of a project, that is, a pictorial or tabular representation of tasks that must be accomplished from beginning to end of the project. The network is defined by developing a list of all tasks, sometimes called the project work breakdown structure (WBS), associated with a specific project, and a list of orderings (sometimes called a restriction list) that indicates in what order tasks must be accomplished.
Both PERT and CPM provide quantitative tools that allow the software planner to:
i) Determine the critical path, the chain of tasks that determines the duration of the project;
ii) Establish most likely time estimates for individual tasks by applying statistical models;
iii) Calculate boundary times that define a time window for a particular task.
Boundary time calculations can be very useful in software project scheduling. Riggs describes important boundary times that may be discerned from a PERT or CPM network:
- The earliest time that a task can begin, when all preceding tasks are completed in the shortest possible time.
- The latest time for task initiation before the minimum project completion time is delayed.
- The earliest finish: the sum of the earliest start and the task duration.
- The latest finish: the latest start time added to the task duration.
- The total float: the amount of surplus time or leeway allowed in scheduling tasks, so that the network critical path is maintained on schedule.
Boundary time calculations lead to a determination of the critical path and provide the manager with a quantitative method for evaluating progress as tasks are completed. The planner must recognize that effort expended on software does not terminate at the end of development. Maintenance effort, although not easy to schedule at this stage, will ultimately become the largest cost factor. A primary goal of software engineering is to help reduce this cost.
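The boundary times described above can be illustrated with a small worked sketch; the two-task network and its durations are hypothetical, not drawn from this project's plan:

```vbnet
Module BoundaryTimes
    Sub Main()
        ' Hypothetical network: tasks A (5 days) and B (3 days) run in
        ' parallel, and the project ends when both finish.
        Dim durA As Integer = 5
        Dim durB As Integer = 3
        Dim projectEnd As Integer = Math.Max(durA, durB)   ' critical path is A: 5 days

        ' Boundary times for the non-critical task B:
        Dim earliestStart As Integer = 0                        ' no predecessors
        Dim earliestFinish As Integer = earliestStart + durB    ' ES + duration = 3
        Dim latestFinish As Integer = projectEnd                ' must end by day 5
        Dim latestStart As Integer = latestFinish - durB        ' LF - duration = 2
        Dim totalFloat As Integer = latestStart - earliestStart ' 2 days of leeway

        Console.WriteLine("B: ES={0} EF={1} LS={2} LF={3} Float={4}", _
            earliestStart, earliestFinish, latestStart, latestFinish, totalFloat)
    End Sub
End Module
```

Task A, by contrast, has zero float: delaying it delays the whole project, which is exactly what it means to be on the critical path.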
Time-scheduling for our project will be like this:
Project Analysis: Two Weeks
GUI Designing: Three Weeks
Core Coding and Algorithm: Four Weeks
Testing and Debugging: Two Weeks
========================================================
TESTING PROCESSES
All software intended for public consumption should
receive some level of testing. The more complex or widely
distributed a piece of software is, the more essential
testing is to its success. Without testing, you have no assurance that software will behave as expected. The results in a public environment can be truly embarrassing.
For software, testing almost always means automated
testing. Automated tests use a programming language to
replay recorded user actions or to simulate the internal
use of a component. Automated tests are reproducible
(the same test can be run again and again) and
measurable (the test either succeeds or fails). These two
advantages are key to ensuring that software meets
product requirements.
Developing a Test Plan
The first step in testing is developing a test plan based
on the product requirements. The test plan is usually a
formal document that ensures that the product meets the
following standards:
- Is thoroughly tested. Untested code adds an unknown element to the product and increases the risk of product failure.
- Meets product requirements. To meet customer needs, the product must provide the features and behavior described in the product specification. For this reason, product specifications should be clearly written and well understood.
- Does not contain defects. Features must work within established quality standards.
Having a test plan helps you avoid ad hoc testing: the kind of testing that relies on the uncoordinated efforts of developers or testers to ensure that code works. The results of ad hoc testing are usually uneven and always unpredictable. A good test plan answers the following questions:
- How are tests written? Describe the languages and tools used for testing.
- Who is responsible for the testing? List the teams or individuals who write and perform the tests.
- When are the tests performed? The testing schedule closely follows the development schedule.
- Where are the tests and how are test results shared? Tests should be organized so that they can be rerun on a regular basis.
- What is being tested? Measurable goals with concrete targets let you know when you have achieved success.
========================================================
Types of Tests
The test plan specifies the different types of tests that will be performed to ensure that the product meets customer requirements and does not contain defects.
Types of Tests

Test type          Ensures that
Unit test          Each independent piece of code works correctly.
Integration test   All units work together without errors.
Regression test    Newly added features do not introduce errors to other features that are already working.
Load test          The product continues to work under extreme usage. (Also called stress test.)
Platform test      The product works on all of the target hardware and software platforms.

These test types build on each other, and the tests are usually performed in the order shown.
========================================================
The testing cycle
5.1 Unit Testing
A product unit is the smallest piece of code that can
be independently tested. From an object-oriented
programming perspective, classes, properties, methods,
and events are all individual units. A unit should pass its
unit test before it is checked into the project for
integration.
Unit tests are commonly written by the developer
who programmed the unit and are either written in the
same programming language as the product unit being
tested or in a similar scripting language, such as VBScript.
The unit test itself can be as simple as getting and setting
a property value, or it can be more complicated. For
instance, a unit test might take sample data and calculate
a result and then compare that result against the
expected result to check for accuracy.
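The compare-against-expected pattern just described can be sketched as follows; the Billing class and its sample values are hypothetical, chosen only to illustrate the shape of a unit test:

```vbnet
' Hypothetical unit under test: a class that computes a usage charge.
Public Class Billing
    Public Function Amount(ByVal minutes As Integer, _
                           ByVal ratePerMinute As Double) As Double
        Return minutes * ratePerMinute
    End Function
End Class

Module BillingTest
    Sub Main()
        ' Take sample data, calculate a result, and compare that result
        ' against the expected result to check for accuracy.
        Dim b As New Billing()
        Dim actual As Double = b.Amount(60, 0.5)
        Dim expected As Double = 30.0

        If actual = expected Then
            Console.WriteLine("PASS")
        Else
            Console.WriteLine("FAIL: expected {0}, got {1}", expected, actual)
        End If
    End Sub
End Module
```

Because the test either prints PASS or FAIL, it is reproducible and measurable, the two properties of automated tests noted earlier.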
5.2 Integration Testing
The first integration test always answers the
question, Does the application compile? At this point, a
compilation error in any of the components can keep the
integration testing from moving forward. Some projects
use nightly builds to ensure that the product will always
compile. If the build fails, the problem can be quickly
resolved the next morning.
The most common build problem occurs when one
component tries to use another component that has not
yet been written. This occurs with modular design because
the components are often created out of sequence. You
solve this problem by creating stubs. Stubs are
nonfunctional components that provide the class,
property, or method definition used by the other
component. Stubs are a kind of outline of the code you
will create later.
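A stub in this sense can be sketched as follows; the class and member names are hypothetical:

```vbnet
' Stub for a logging component that has not been written yet. It provides
' the class, property, and method definitions that other components
' compile against, but does nothing functional.
Public Class ActivityLogStub
    Public Sub Record(ByVal userName As String, ByVal action As String)
        ' Intentionally empty: real logging will be added later.
    End Sub

    Public ReadOnly Property EntryCount() As Integer
        Get
            Return 0   ' placeholder value
        End Get
    End Property
End Class
```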
When all of the build problems are resolved,
integration testing really becomes just an extension of
========================================================
-
8/9/2019 Cyber Security New
63/75
unit testing, although the focus is now whether the units
work together. At this point, it is possible to wind up with
two components that need to work together through a
third component that has not been written yet. To test
these two components, you create a driver. Drivers are
simply test components that make sure two or more
components work together. Later in the project, testing
performed by the driver can be performed by the actual
component.
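A driver in this sense can be sketched like this; the two components and their values are hypothetical stand-ins:

```vbnet
' Hypothetical components that must eventually cooperate through a
' reporting component that has not been written yet.
Public Class UsageTracker
    Public Function MinutesUsed() As Integer
        Return 45
    End Function
End Class

Public Class RateTable
    Public Function RatePerMinute() As Double
        Return 0.5
    End Function
End Class

' The driver wires the two components together directly, standing in for
' the missing third component so their interaction can be tested now.
Module IntegrationDriver
    Sub Main()
        Dim tracker As New UsageTracker()
        Dim rates As New RateTable()
        Dim bill As Double = tracker.MinutesUsed() * rates.RatePerMinute()
        Console.WriteLine("Computed bill: {0}", bill)
    End Sub
End Module
```

Later in the project, the real component replaces the driver, and the same checks are repeated.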
Top-down integration testing is an incremental
approach to construction of program structure. Modules
are integrated by moving downward through the control
hierarchy, beginning with the main control module.
Modules subordinate to the main module are incorporated
into the structure in steps.
The integration process is performed in five steps:
1. The main control module is used as a test driver and
stubs are substituted for all components directly
subordinate to the main module.
2. Depending on the integration approach selected,
subordinate stubs are replaced one at a time with actual
components, in breadth-first or depth-first order.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is
replaced by the real component.
5. Regression testing may be conducted to ensure that new errors have not been introduced.
The top-down strategy