REAL TIME INTERACTIVE LECTURE DELIVERY SYSTEM

Uploaded by sandhyadanturi on 07-Feb-2016

DESCRIPTION

The aim of this project is to build a system that facilitates feedback from students and interactive question-answer sessions via a mobile client with internet connectivity.


Table of Contents

1. Introduction
2. Phases
   2.1 Module 1
   2.2 Module 2
   2.3 Module 3
   2.4 Module 4
3. Purpose
   3.1 Scope
4. Application, Objective and Benefits
5. UML
   5.1 Structural Diagrams
   5.2 Deployment Diagrams
   5.3 Behavioral Diagrams
   5.4 Sequence Diagrams
   5.5 Collaboration Diagrams
   5.6 State Chart Diagrams
   5.7 Activity Diagrams
6. Architecture Diagram
7. Database
8. User Interface
9. Software Requirement Specification (SRS)
   9.1 Overview
   9.2 Product Perspective
   9.3 System Interfaces
       9.3.1 User Interfaces
       9.3.2 Hardware Interfaces
       9.3.3 Software Interfaces
       9.3.4 Memory Constraints
   9.4 Product Functions
   9.5 User Characteristics
   9.6 Constraints
   9.7 Logical Database Requirements
   9.8 Design Constraints
   9.9 Standards Compliance
   9.10 Reliability
   9.11 Availability
   9.12 Security
   9.13 Maintainability
   9.14 Portability
   9.15 System Mode
   9.16 Supporting Information
   9.17 Document Control
   9.18 Appendix A
   9.19 Specific Requirements
   9.20 Performance Requirements
   9.21 Software System Attributes
   9.22 Features
   9.23 Objects
   9.24 Stimulus
   9.25 Response
   9.26 Functional Hierarchy
   9.27 Additional Comments
10. Programming Code
11. Technology Features
   11.1 J2EE
   11.2 J2ME
   11.3 JDBC
   11.4 Web Services
   11.5 Flash Builder 4.5
   11.6 Android
12. Testing
   12.1 Black Box Testing
   12.2 White Box Testing
   12.3 Security Testing
   12.4 Compatibility Testing
13. Conclusions
14. Future Enhancements
15. References

1. Introduction:

Interactive lectures are an important way to enhance student learning, particularly in large classes. They help keep students' attention focused on the class, give students repeated opportunities to practice, and increase retention of lecture material. They also provide an easy way to experiment with different teaching techniques. In small classes (fewer than 30 students) most students are prepared to ask questions if they are struggling to understand what the lecturer is covering; however, they may be intimidated in larger groups. In small classes students have a sense of inclusion and involvement that enhances their learning and motivation, but this is lost in larger classes, leading to reduced satisfaction and less effective learning.

2. Phases:

Module 1: Create a web administration front end where a lecturer can activate/deactivate users, upload presentations, upload quizzes, and view statistics (results, feedback).

Module 2: Create an Android application for students to send their questions and answers to the professor. After validation, the results are presented to the students by this Android application.

Module 3: Create a lecturer module that receives students' doubts/questions while the lecture is being delivered and manages the similar and dissimilar doubts among them.

Module 4: Create a database to store all the quizzes and the answers to them, as well as the feedback given by the students regarding the lecture.

3. Purpose:

The purpose of this project is to develop a real-time Interactive Lecture Delivery System to address students directly in vast classrooms where hundreds of students gather for a session.

3.1 Scope:

The scope of this project is to make lectures interactive between students and lecturers, in order to clarify the students' doubts then and there during a session in large lecture halls.


4. Application, Objective and Benefits:

Application: The application will be utilized by the users of the system.

•Goal: The web site is intended to avoid server failure and virus attacks, and to decrease the infrastructure and maintenance costs incurred in maintaining the application.

•Objective:

Design is the first step in moving from the problem domain to the solution domain. Design is essentially the bridge between the requirements specification and the final solution.

The goal of the design process is to produce a model or representation of a system that can be used later to build that system. The produced model is called the 'Design of the System'. It is a plan for a solution of the system.

The objectives of the design phase are to:

Create the User Interface.

Create the Application.

5. UML:

Design patterns brought a paradigm shift in the way object oriented systems are designed. Instead of relying on knowledge of the problem domain alone, design patterns allow past experience to be utilized while solving new problems. Traditional object oriented design (OOD) approaches such as Booch, OMT, etc. advocated identification and specification of individual objects and classes. Design patterns, on the other hand, promote identification and specification of collaborations of objects and classes. However, much of the focus of recent research has been on identification and cataloging of new design patterns. The effort has been to assimilate knowledge gained from designing systems of the past, in various problem domains. The problem analysis phase has gained little benefit from this paradigm. Most projects still use traditional object oriented analysis (OOA) approaches to identify classes from the problem description. Responsibilities are then assigned to those classes based upon the obvious description of entities given in the problem definition.

Pattern Oriented Technique (POT) is a methodology for identifying interactions among classes and mapping them to one or more design patterns. However, this methodology also uses traditional OOA for assigning class responsibilities. As a result, its interaction oriented design phase (driven by design patterns) receives its input in terms of class definitions that might not lead to the best possible design.

The missing piece here is the lack of an analysis method that can help in identifying class definitions and the collaborations between them which would be amenable to application of interaction oriented design. There are two key issues here. First is to come up with good class definitions and the second is to identify good class collaborations.

It has been observed that even arriving at good class definitions from the given problem definition is non-trivial. The key to various successful designs is the presence of abstract classes (such as an event handler) which are not modeled as entities in the physical world and hence do not appear in the problem description. Anticipating change has been proposed as the method for identifying such abstract classes in a problem domain.

Another difficult task is related to the assignment of responsibilities to entities identified from the problem description. Different responsibility assignments could lead to completely different designs. Current approaches such as Coad and Yourdon, POT, etc. follow the simple approach of using entity descriptions in the problem statement to define classes and fix responsibilities. We propose to follow a flexible approach towards assigning responsibilities to classes so that the best responsibility assignment can be chosen.

The second issue is to identify class collaborations. Techniques such as POT analyze interactions among different sets of classes as specified in the problem description. Such interacting classes are then grouped together to identify design patterns that may be applicable. However, as mentioned earlier, only the interactions among obvious classes are determined currently. Other interactions involving abstract classes not present in the problem, or interactions that become feasible due to different responsibility assignments, are not considered. We present some techniques that enable the designer to capture such interactions as well.

Interaction Based Analysis and Design

Top-down approach

This approach is applicable to situations where the designer knows the solution to the given problem. This is true for problem domains that have well established high-level solutions, where different implementations vary only in low-level details (e.g. Enterprise Resource Planning (ERP) systems). Her main concern is to realize that solution in a way such that the implemented system has nice properties such as maintainability and reusability.

To achieve this goal, the system designer selects appropriate design patterns that form the building blocks of her solution. Having obtained this design template (design type), she maps the classes and objects participating in those patterns to the entities of the problem domain. This mapping implicitly defines the responsibilities of the various classes/objects that represent those entities.

To help clarify the concept, consider a scenario where an architect is assigned the task of building a flyover. Flyover construction is an established science and the architect knows the solution to the problem. She starts by identifying component patterns such as the road strip, support pillars, side railings and so on. Having done that, she maps the participating objects to actual entities in the problem domain. This would involve defining the length and width of the road strip based upon the space constraints specified in the problem. The height and weight of the pillars get decided based upon the load requirements specified. The entry and exit points get decided based upon the geography of the location, and so on. This results in a concrete design instance.

Some new classes or objects, not existing in the domain model, may also have to be introduced for a successful instantiation of the design template. For instance, the problem domain may not model an abstract entity such as an event handler which may be a participant in some portion of the design template. Such generic classes/objects may be drawn from a common repository of utility classes. The interaction driven analysis phase here is simple since the interactions (in the form of design patterns) are already well established and directly obtained from the knowledge base.

Bottom-up approach

This approach is applicable in scenarios where interactions in the problem domain are not well understood and need to be discovered and explored. This situation is a fundamental problem faced by the designers of object oriented systems. It relates to the fact that object oriented analysis (OOA) does not help much in creating a solution to the problem at hand. The analysis phase is mainly concerned with enhancing the understanding of the problem domain. This knowledge is then used by a problem solving approach to come up with a solution possessing good design properties. As a result, at the end of the analysis phase the designer has a set of well defined components that need to be assembled together to realize a solution. For instance, to build a route finder application the OOA phase helps in modeling the domain objects such as roads, vehicles, cities, addresses etc., but does not actually provide a solution for finding routes between two given addresses. This is similar to having the various pieces of a jigsaw puzzle while the puzzle still needs to be solved.

The problem in software systems is further complicated by the fact that there is generally no unique solution to a problem. There are always trade-offs at various stages, and the resulting designs are a reflection of the choices made at those stages. In the jigsaw puzzle example this is similar to the situation where different sets of the same puzzle are available, each differing from another in terms of the design of its component pieces. Some component designs may help in solving the puzzle faster and more efficiently than others.

The bottom-up approach helps in such situations, where the entities in the problem domain have been identified by traditional OOA techniques but multiple choices exist in terms of assigning responsibilities to those entities. Unlike the top-down approach, the mapping of responsibilities to entities is not dictated by the design solution specified by the designer. Instead, the task of the designer here is to try various responsibility assignments and create an interaction specification involving those objects. The objective of this interaction driven analysis is to obtain an interaction specification that helps in arriving at a solution with the best design characteristics possible.

Having identified the entities in the domain, the starting point for the designer is to identify the various alternatives available for assigning responsibilities to individual objects. Her domain knowledge helps her in this task. Given these alternatives for potential object definitions and standard utility objects (such as schedulers, event handlers etc.), the next step is to find compositions of these building blocks (i.e. interactions of these objects) that provide alternative solutions to the problem. This task is a non-trivial one, especially when done manually. There are just too many combinations to be considered for any human designer to obtain alternative solutions in a reasonable amount of time. We need to apply (semi-)automated software composition techniques based on some formal specification. Several such approaches have been investigated recently in the context of e-services. These include workflow based approaches and AI Planning based techniques. Other formal techniques for specifying composition include Petri-net based models, automata-based models, temporal logics, etc. from the verification community, and XQuery and XML constraint tool based techniques from the data management community.

The resulting candidate compositions (i.e. interaction specifications) then need to be compared with existing design patterns, either manually or automatically. It is not beyond imagination to visualize that, with advancement in automated composition techniques, new design patterns may get identified during this process. For instance, techniques such as Reinforcement Learning have resulted in novel solutions in various domains, such as playing Backgammon. In such a case, the resulting designs may need to be evaluated manually. The best design among the alternatives is then chosen for implementing the system.

Open Issues

Identifying interactions

This is a crucial step in the analysis phase, and the success of the remaining phases depends on it. The issue here is to identify interactions which are not evident from the problem description but may hold the key to an efficient design solution. The bottom-up approach proposed in this paper takes a step in this direction, but a lot more work is needed. The analysis method should be able to incorporate abstract classes such as event handlers, proxies, etc. Moreover, current analysis methods map entities to responsibilities of individual classes in terms of the services they provide and the methods they invoke on other classes. However, an entity may be realized by a set of classes. For instance, an adapter class hides the interface of an adaptee class, and together they provide the desired functionality. Similarly, an abstraction and its implementation provide a single functionality through separate classes, resulting in increased maintainability. The analysis method needs to be able to determine when it is appropriate to realize an entity responsibility by means of multiple interacting classes.

Representation of Class Responsibilities

Since we need to specify different alternative class responsibilities, as in the bottom-up approach, a mechanism is required to document them in a machine interpretable format. Some of these responsibilities would get captured in the form of the methods a class exports or the methods it invokes on other classes. However, other responsibilities with respect to its interaction with other classes need to be explicitly specified. These may include pre- and post-conditions for different method invocations, and other properties such as 'hasSameInterfaceAs <another class>', 'hidesInterfaceOf <another class>', etc. An existing specification language could be used as is, or extended, for this purpose.

Language for Specifying Design Patterns


The approaches for OO Design proposed in this paper favor automatic techniques over manual ones for reasons described earlier. This means that we need a mechanism to be able to express design patterns in a format amenable to be read and interpreted by programs. Some attempts have been made at defining such pattern description languages [14, 13]. One of these or some variation of these could be used to express design patterns in a formal language.

Comparison of Software Designs

Once alternative designs are available, they need to be compared to arrive at the best one. Each design may consist of multiple design patterns. The criterion here would not be to simply count the number of design patterns used, but to evaluate the interaction between patterns and also between the other design elements used. This would involve an understanding of good and bad design interactions and an ability to identify them in a given design. The final challenge would be to do this automatically.

5.1 Structural Diagrams

Class Diagram:

Class diagrams identify the class structure of a system, including the properties and methods of each class. Also depicted are the various relationships that can exist between classes, such as inheritance relationships. The Class diagram is one of the most widely used diagrams from the UML specification.

Class diagrams are given below to depict these interactions.


Student
  Attributes: username, password
  Methods: login(), sendQueries(), attemptQuiz(), sendFeedback()

Professor
  Attributes: username, password
  Methods: register(), login(), receiveQueries(), viewResults(), viewFeedback()

Admin
  Attributes: username, password
  Methods: login(), registerStudents(), managePresentations(), manageQuiz(), manageUserDetails()

Database
  Attributes: presentationTable, quizTable, usersTable
  Methods: sendsResponse()
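The class boxes above can be sketched as Java skeletons. The method bodies below are illustrative stubs, not the project's actual code:

```java
// Skeleton classes mirroring the class diagram; method bodies are
// illustrative stubs, not the project's actual implementation.
import java.util.ArrayList;
import java.util.List;

class Student {
    String username;
    String password;

    boolean login(String user, String pass) {
        return user.equals(username) && pass.equals(password);
    }
    void sendQueries(String query) { /* forward doubt to the lecturer module */ }
    void attemptQuiz(List<String> answers) { /* submit quiz answers for validation */ }
    void sendFeedback(String feedback) { /* store feedback in the database */ }
}

class Professor {
    String username;
    String password;
    List<String> queries = new ArrayList<>();

    void receiveQueries(String query) { queries.add(query); }
    List<String> viewResults() { return new ArrayList<>(); }   // stub
    List<String> viewFeedback() { return new ArrayList<>(); }  // stub
}

class Admin {
    void registerStudents(Student s) { /* activate a student account */ }
    void managePresentations(String file) { /* upload or remove a presentation */ }
    void manageQuiz(String quiz) { /* upload or edit a quiz */ }
}

public class LectureSystemSketch {
    public static void main(String[] args) {
        Student s = new Student();
        s.username = "alice";
        s.password = "secret";
        System.out.println(s.login("alice", "secret")); // prints true
    }
}
```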

Object Diagram:

Object diagrams model instances of classes. This type of diagram is used to describe the system at a particular point in time. Using this technique, you can validate the class diagram and its multiplicity rules against real-world data, and record test scenarios. From a notation standpoint, Object diagrams borrow elements from Class diagrams.

Component Diagram:

Component diagrams fall under the category of implementation diagrams, a kind of diagram that models the implementation and deployment of the system. A Component diagram, in particular, is used to describe the dependencies between various software components, such as the dependency between executable files and source files. This information is similar to that found in makefiles, which describe source code dependencies and can be used to properly compile an application.

5.2 Deployment Diagram:


Deployment diagrams are another model in the implementation diagram category. The Deployment diagram models the hardware used in implementing a system and the associations between those hardware components. Components can also be shown on a Deployment diagram to indicate where they are deployed. Deployment diagrams can also be used early in the design phase to document the physical architecture of a system.

5.3 Behavioral Diagrams:

Use Case Diagram:

Use Case diagrams identify the functionality provided by the system (use cases), the users who interact with the system (actors), and the association between the users and the functionality. Use Cases are used in the Analysis phase of software development to articulate the high-level requirements of the system. The primary goals of Use Case diagrams include:

• Providing a high-level view of what the system does
• Identifying the users ("actors") of the system
• Determining areas needing human-computer interfaces

Use Cases extend beyond pictorial diagrams. In fact, text-based use case descriptions are often used to supplement diagrams, and explore use case functionality in more detail.


Use case diagram (Professor): register, login, receiveQueries, viewResults, viewFeedback.

Use case diagram (Admin): adminLogin, registerStudents, managePresentations, manageQuiz, maintainUserDetails.

Use case diagram (Student): login, sendQueries, attemptQuiz, sendFeedback.

5.4 Sequence Diagram:

Sequence diagrams document the interactions between classes that achieve a result, such as a use case. The Sequence diagram lists objects horizontally and time vertically, and models the messages passed between objects over time.

Sequence diagram participants: A : admin, P : professor, S : student, Android : application, Web : application, DB : database.

Messages: login; register students; upload presentations, quiz, user details; store all files and details; login; presents list of presentations; get presentation; login; send doubts; presents a pop-up message on screen; update student status; display quiz; send answers for the quiz; sends answers for validation; get results.
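The quiz steps in the message flow above (display quiz, send answers for the quiz, sends answers for validation, get results) can be sketched as below. The list-of-strings answer-key representation is an assumption for illustration:

```java
import java.util.Arrays;
import java.util.List;

// Sketch of server-side quiz validation: compare a student's submitted
// answers against the stored answer key and return the score. The
// list-of-strings representation is an illustrative assumption, not the
// project's actual data model.
public class QuizValidator {

    // One point per answer matching the key (case-insensitive, trimmed).
    public static int score(List<String> key, List<String> submitted) {
        int points = 0;
        for (int i = 0; i < key.size() && i < submitted.size(); i++) {
            if (key.get(i).equalsIgnoreCase(submitted.get(i).trim())) {
                points++;
            }
        }
        return points;
    }

    public static void main(String[] args) {
        List<String> key = Arrays.asList("B", "C", "A");
        List<String> submitted = Arrays.asList("b", "D", "A");
        System.out.println(score(key, submitted)); // prints 2
    }
}
```

The "get results" message would then carry the returned score back to the Android application.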

5.5 Collaboration Diagram:

Collaboration diagrams model the interactions between objects. This type of diagram is a cross between an object diagram and a sequence diagram. It uses a free-form arrangement of objects, which makes it easier to see all interactions involving a particular object.

Collaboration diagram participants: A : admin, P : professor, S : student, Android : application, Web : application, DB : database.

1: login
2: register students
3: upload presentations, quiz, user details
4: store all files and details
5: login
6: presents list of presentations
7: get presentation
8: login
9: send doubts
10: presents a pop-up message on screen
11: update student status
12: display quiz
13: send answers for the quiz
14: sends answers for validation
15: get results

5.6 State chart Diagram:

State diagrams are used to document the various modes ("states") that a class can go through, and the events that cause a state transition.

5.7 Activity Diagram:

Activity diagrams are used to document workflows in a system, from the business level down to the operational level. The general purpose of Activity diagrams is to focus on flows driven by internal processing vs. external events.

6. Architecture Diagram:

The project architecture represents the components we are using as part of our project and the flow of request processing.


7. Database:

Real Time Interactive Lecture Delivery System: Database Tables

UserDetails Table
Doubts Table
FeedBack Table
ManagementDetails Table
Presentation Table
Quiz Table
Results Table

8. User Interface:

Version 1.0.0


Approvals Signature Block

Organization / Responsibility | Signature | Date

Customer / customer representative
Project Manager
Software Quality Assurance Leader
Software Configuration Management Leader
User Documentation Leader
User Training Leader

Definitions, Acronyms and Abbreviations

Term or Acronym: Definition

Storage: Generally, storage can be defined as the act of storing something.

Store Management: Store management can be simply defined as managing or maintaining the stored data properly.

J2ME Application: An application that is deployed and run on a mobile device.


9.1 Overall Description:

The Real Time Interactive Lecture Delivery System is motivated by the need to increase students' interaction with the lecturer in real time during a lecture. As the number of students in a class grows, interaction with students becomes increasingly difficult. This system comprises a website that maintains profiles of lecturers and lecture material (presentations, quizzes, student feedback, etc.). The aim of this project is to build a system that facilitates feedback from students and interactive question-answer sessions with the teacher via our mobile client, which runs on any Java-enabled mobile phone with internet connectivity. Lecturers manage the presentations of their respective subjects and also conduct quizzes.

9.2 Product Perspective:

The project is part of a larger system. Block diagram: the major components of the larger system are shown below.

9.3 System Interfaces:

The various interfaces used in the system are described in the subsequent paragraphs.

9.3.1 User Interfaces:

An interface between the software product and its users: user-friendly interfaces, as depicted below, will be used.

1. Screen formats are required to be created with the following features:
a) User friendly.
b) Indicate the mandatory fields with an asterisk (*).
c) Fill in default values wherever possible.
d) Provide combo boxes in all input screens.
e) User and data entry personnel details should be stored in the database through a "Save" button.
f) Authentication of users should be carried out wherever required.

2. Web page or window layouts. Each screen should have:

• Menu driven facility
• Uniformity
• Consistency

3. Outputs

Analytical outputs should be supported by graphs.
a) For each output there should be a provision for printing.
b) Emails should be well structured.
c) Each paper output should be formatted for A4 size paper.

4. DOs (Input)

a) Each input box should be supported by a label.
b) Give tool tips where required.
c) Give formats (mm/dd/yy).
d) Provide a tab index.
e) Input elements without visible labels should continue to contain text (search, login).
f) Password
g) Radio inputs should have one option checked as default.

5. DON'Ts (Input)

a) Don't distract users from their goals.
b) Don't use a dark background with dark font colors.
c) Don't use too many colors.

6. Error messages: give short error messages like "Incorrect Data", "Incorrect Date Format" or "Field is required", and supplement them with an "OK" button.
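The validation behind these error messages can be sketched in Java. The mm/dd/yy format comes from the DOs list above; the class and method names are illustrative assumptions:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;
import java.time.format.ResolverStyle;

// Sketch of the input validation behind messages such as "Incorrect Date
// Format" and "Field is required". The mm/dd/yy format comes from the DOs
// list; the class and method names are illustrative assumptions.
public class InputValidator {

    // Strict mm/dd/yy pattern; "uu" with STRICT rejects dates like 02/30/16.
    private static final DateTimeFormatter MMDDYY =
            DateTimeFormatter.ofPattern("MM/dd/uu").withResolverStyle(ResolverStyle.STRICT);

    // Return an error message for the given field value, or null if valid.
    public static String validateDate(String value) {
        if (value == null || value.trim().isEmpty()) {
            return "Field is required";
        }
        try {
            LocalDate.parse(value.trim(), MMDDYY);
            return null; // valid
        } catch (DateTimeParseException e) {
            return "Incorrect Date Format";
        }
    }

    public static void main(String[] args) {
        System.out.println(validateDate("02/07/16"));   // prints null (valid)
        System.out.println(validateDate("2016-02-07")); // prints Incorrect Date Format
    }
}
```

The returned message would be shown in a small dialog with an "OK" button, as the guideline above describes.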

9.3.2 Hardware Interfaces:

The following hardware interfaces are required:

1. 1 GB RAM

2. 80 GB hard disk

3. Pentium 4 processor

4. Internet connectivity

9.3.3 Software Interfaces:

The following software is required:

• Java
• Adobe Flash Builder 4.5.1
• MyEclipse 8.6
• JBoss 5.1
• MySQL 5.1
• Android SDK

9.3.4 Memory Constraints:

A minimum of 1 GB RAM and 40 GB of hard disk space.

Broad-level software requirements:

Requirement / Remarks

1. Allow users to register. Provide a registration and authentication user interface.
2. Allow users to upload files.
3. Allow users to view the files.
4. Allow users to download the files.

9.4 Product Functions:

Database Functions


1. The database should have the facility to:

a) insert records,
b) update records,
c) edit records,
d) delete records (restricted users only),
e) search records, and
f) sort records.
g) Create and edit master and transaction records.
h) All interaction to and from the database should be through web pages.
i) The database should have the following tables:

I. Data Entry Personnel: First Name, Last Name, Address, Contact Details, Qualification, Place of Work, Designation, etc.

II. Data Entry Points: user credentials.

2. Authentication: Users at all levels should be authenticated before being given access.

3. Analysis: Outputs of all analyses should be in the form of

a) data in tabular form, and
b) graphical representation of performance.

9.5 User Characteristics: The following are the characteristics of the intended users:

Educational Level: The level of the users ranges from educated to highly educated.
Experience: Experienced in their own domain, but training is needed in the proposed application.
Technical Expertise: Training required in the proposed application.

9.6 Constraints:

Site Adaptation Requirements: Only the system administrator and the DBA are authorized to carry out this task, jointly.

Assumptions and Dependencies: It is assumed that all the systems will have the basic hardware, software and communication interfaces available, and that the users are trained in using the application.

Apportioning of Requirements: Identify requirements that may be delayed until future versions of the system. All requirements will be met.

9.7 Logical Database Requirements:

Logical requirements connected with the database include:

a) Most of the values are string types, but counts are numeric.
b) Counts connected with a specific record are updated immediately after each record is entered.
c) Access rights are limited to authenticated users only.
d) Integrity constraints are maintained by setting up the relationships.

Functions: Validity checks on the inputs (by data entry operators).

a) Responses to abnormal situations, including:
• Overflow: periodic backups
• Communication facilities: Internet, telephone
• Error handling and recovery: periodic backups, error alerts, maintained error logs

b) Effect of parameters

9.8 Design Constraints

Hardware constraints:
• Pentium 4 processor
• Minimum 1 GB RAM
• 40 GB hard disk
• 100 Mbps modem

Design constraints:
• Charts have to be created using Flex.
• All user interfaces have to be created using Flex.
• Interaction between the user interface and the database should be through web services.

9.9 Standards Compliance: The following standards will be maintained:

• Report format
• Data naming (as per the naming conventions in organization policy)
• Accounting procedures
• Audit tracing (all changes made to student details will be traced in trace files with before and after values)

9.10 Reliability:

The software will be handed over to the client after carrying out extensive unit testing, integration testing and system testing, after performing periodic demonstrations to the end users on completion of each module, and after keeping a log of the errors/observations made by the users.

The number of errors during unit testing may be high, but it should show a decreasing trend during integration and system testing and should reduce to zero at the time of delivery.

Ensure strict compliance with the project plan.

9.11 Availability:

Specify the factors required to guarantee a defined availability level for the entire system such as checkpoint, recovery, and restart.

9.12 Security:

• The authentication module will ensure that only authorized users are granted access to the web site.
• Roles will be defined to impose restrictions on authorized users.
• Buffer overflow and integer overflow will be avoided.
• Whenever a user is deleted, that user's privileges will also be deleted.
• Periodic backups of the database will be carried out and a log maintained.
• Honey pots: some PCs that are vulnerable to hackers will be intentionally included in the network; they can be used to catch hackers or to fix vulnerabilities.
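As a minimal sketch of the rules above (authentication, role-based restrictions, and privilege removal on user deletion), an in-memory model such as the following could be used. The class and method names are hypothetical, not the system's actual API.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative in-memory access-control sketch, not the project's real module.
public class AccessControl {
    private final Map<String, String> passwords = new HashMap<>();
    private final Map<String, Set<String>> privileges = new HashMap<>();

    public void addUser(String user, String password, Set<String> roles) {
        passwords.put(user, password);
        privileges.put(user, new HashSet<>(roles));
    }

    // Only authenticated users are granted access.
    public boolean authenticate(String user, String password) {
        return password != null && password.equals(passwords.get(user));
    }

    // Roles impose restrictions on what an authorized user may do.
    public boolean hasPrivilege(String user, String role) {
        return privileges.getOrDefault(user, Set.of()).contains(role);
    }

    // When a user is deleted, the user's privileges are deleted too.
    public void deleteUser(String user) {
        passwords.remove(user);
        privileges.remove(user);
    }
}
```

In a production system the password check would of course compare salted hashes rather than plain strings.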

9.13 Maintainability

Keep a count of the number of lines of code. Although there is no fixed benchmark for the maximum lines of code in a subroutine, a higher line count indicates higher maintenance effort, and such routines should be split into child routines. Place every module in a try/catch/finally block to prevent a disgraceful exit.

• Avoid excessive complexity.
• Avoid excessive inheritance.
• Variable names should not match field names.
• Reduce the complexity of conditional branching.
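The try/catch/finally guideline above can be illustrated as follows; the module and method names are illustrative, and returning zero as the fallback is an assumption.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch of wrapping a module in try/catch/finally so that bad input
// cannot cause a "disgraceful exit".
public class GracefulModule {
    private static final Logger LOG = Logger.getLogger(GracefulModule.class.getName());

    // Parse an attendance count; return a fallback instead of crashing.
    public static int parseCount(String raw) {
        try {
            return Integer.parseInt(raw.trim());
        } catch (NumberFormatException | NullPointerException e) {
            LOG.log(Level.WARNING, "Invalid count: " + raw, e); // error alert
            return 0; // safe fallback keeps the module running
        } finally {
            LOG.fine("parseCount finished"); // cleanup/bookkeeping always runs
        }
    }

    public static void main(String[] args) {
        System.out.println(parseCount("42"));   // prints 42
        System.out.println(parseCount("oops")); // prints 0, no crash
    }
}
```

The finally clause is where resources such as database connections would be released, so cleanup happens whether or not the try body failed.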

9.14 Portability: Specify attributes of software that relate to the ease of porting the software to other host machines and/or operating systems. This may include:

a) Percentage of components with host-dependent code
b) Percentage of code that is host dependent
c) Use of a proven portable language
d) Use of a particular compiler or language subset
e) Use of a particular operating system

Once the relevant characteristics are selected, a subsection should be written for each, explaining the rationale for including this characteristic and how it will be tested and measured. A chart like this might be used to identify the key characteristics (rating them High, Medium, or Low), or ranking them in order of importance (1, 2, 3, etc.).

ID Characteristic Rank

1 Correctness

2 Efficiency

3 Flexibility

etc.

Organizing the Specific Requirements:

For anything but trivial systems the detailed requirements tend to be extensive. For this reason, it is recommended that careful consideration be given to organizing these in a manner optimal for understanding. There is no one optimal organization for all systems. Different classes of systems lend themselves to different organizations of requirements in section 3. Some of these organizations are described in the following subsections.

9.15 System Mode:

Some systems behave quite differently depending on the mode of operation. For example, a control system may have different sets of functions depending on its mode: training, normal, or emergency. When organizing by mode there are two possible outlines; the choice depends on whether interfaces and performance are dependent on mode. Since the web site is being developed in ASP.NET 2.0, it is compatible with most operating systems and web browsers.

9.16 Supporting Information:

The supporting information makes the SRS easier to use. It includes:

a) Table of Contents at the front of the document
b) Index
c) Appendices: definitions of important terminology are given in the Appendix

The Appendices are not always considered part of the actual requirements specification and are not always necessary. They may include:

(a) Sample I/O formats, descriptions of cost analysis studies, results of user surveys;
(b) Supporting or background information that can help the readers of the SRS;
(c) A description of the problems to be solved by the software;
(d) Special packaging instructions for the code and the media to meet security, export, initial loading, or other requirements.


When Appendices are included, the SRS should explicitly state whether or not the Appendices are to be considered part of the requirements.

Tables on the following pages provide alternate ways to structure section 3 on the specific requirements.

9.17 Document Control:

Change History

Revision | Release Date | Description [list of changed pages and reason for change]

Document Storage

This document was created using the standard IEEE SRS template. The file is stored by the Project Manager; one signed copy is handed over to the authorized representative of the customer, and a second copy is kept with the Administrator. The document establishes the basis for agreement with the client on what the software product is expected to do, as well as what it is not expected to do.

Document Owner

Project Manager is responsible for developing and maintaining this document.

9.18 Appendix A:

Definitions of the quality characteristics follow.

Correctness - extent to which the program satisfies its specifications and fulfills the user's mission objectives
Efficiency - amount of computing resources and code required to perform a function
Flexibility - effort needed to modify an operational program
Integrity/Security - factors that protect the software from accidental or malicious access, use, modification, destruction, or disclosure
Interoperability - effort needed to couple one system with another
Maintainability - ease of maintenance of the software itself
Portability - ease of porting the software to another host
Reliability - factors required to establish the required reliability of the system
Reusability - extent to which the software can be reused in another application
Testability - effort needed to test the software to ensure it performs as intended
Usability - effort required to learn, operate, prepare input for, and interpret output of the software
Availability - factors required to guarantee a defined availability level for the system

9.19 Specific Requirements

Unique ID | Requirement | Remarks

1.
2.
3.
4.
5.
6.
7.
8.

9.20 Performance Requirements:

Presently we are working on three terminals. It is expected that at any point in time three terminals will be in operation simultaneously. The information will be numerical and text oriented, and its volume will be limited.

9.21 Software System Attributes:

Analysis: This is an important module; it should be able to raise an alert by estimating the probability of a disease escalating to an epidemic.

Modular approach will be followed.

1. Create roles.
2. Manage the application.

Graphs and email generation should be asynchronous.
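A minimal sketch of making graph and email generation asynchronous, assuming a plain java.util.concurrent thread pool; the task bodies below are placeholders for the real chart-rendering and SMTP logic.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Runs graph generation and email delivery on worker threads so neither
// blocks the request thread. Task bodies are illustrative placeholders.
public class AsyncJobs {
    private final ExecutorService pool = Executors.newFixedThreadPool(2);

    public Future<String> generateGraph(String reportId) {
        return pool.submit(() -> "graph:" + reportId); // placeholder for chart rendering
    }

    public Future<String> sendEmail(String recipient) {
        return pool.submit(() -> "mailed:" + recipient); // placeholder for SMTP delivery
    }

    public void shutdown() throws InterruptedException {
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        AsyncJobs jobs = new AsyncJobs();
        Future<String> g = jobs.generateGraph("weekly-performance");
        Future<String> m = jobs.sendEmail("student@example.com");
        System.out.println(g.get() + " " + m.get()); // both ran off the main thread
        jobs.shutdown();
    }
}
```

The caller only blocks if and when it calls get(); fire-and-forget submission would simply discard the returned Future.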

9.22 Objects:

Following objects are being considered in this application:-

• Authorized users
• Performance report

9.23 Feature: The features are listed in the subsequent paragraphs in the form of Stimulus and Response.

9.24 Stimulus:
• User ID + password of the Data Entry Operators


9.25 Response:
• Performance report
• Dynamic graphs
• Authentication alert

9.26 Functional Hierarchy:

When none of the above organizational schemes proves helpful, the overall functionality can be organized into a hierarchy of functions grouped by common inputs, common outputs, or common internal data access. Data flow diagrams and data dictionaries can be used to show the relationships between and among the functions and data.

Draw DFDs and prepare data dictionaries.

9.27 Additional Comments:

Whenever a new SRS is contemplated, more than one of the organizational techniques given in 3.7 may be appropriate. In such cases, organize the specific requirements for multiple hierarchies tailored to the specific needs of the system under specification. Any additional requirements may be put in a separate section at the end of the SRS.

There are many notations, methods, and automated support tools available to aid in the documentation of requirements. For the most part, their usefulness is a function of organization. For example, when organizing by mode, finite state machines or state charts may prove helpful; when organizing by object, object-oriented analysis may prove helpful; when organizing by feature, stimulus-response sequences may prove helpful; when organizing by functional hierarchy, data flow diagrams and data dictionaries may prove helpful. In any of the outlines below, those sections called “Functional Requirement i” may be described in native language, in pseudocode, in a system definition language, or in four subsections titled Introduction, Inputs, Processing, and Outputs.

10. Programming Code

11. TECHNOLOGY

11.1 J2EE (Java 2 Platform, Enterprise Edition):

Introduction

Distributed Multi-tiered Applications

J2EE Components

Conclusion


The J2EE platform uses a distributed multitiered application model for enterprise

applications. Application logic is divided into components according to function, and the various

application components that make up a J2EE application are installed on different machines

depending on the tier in the multitiered J2EE environment to which the application component

belongs. Multitiered J2EE applications are divided into the tiers described in the following list:

Client-tier components run on the client machine.

Web-tier components run on the J2EE server.

Business-tier components run on the J2EE server.

Enterprise information system (EIS)-tier software runs on the EIS server.

Figure: Multi-tiered J2EE application.

J2EE Components:


J2EE applications are made up of components. A J2EE component is a self-contained

functional software unit that is assembled into a J2EE application with its related classes and

files and that communicates with other components. The J2EE specification defines the

following J2EE components:

Application clients and applets are components that run on the client.

Java Servlet and Java Server Pages™ (JSP™) technology components are Web

components that run on the server.

Enterprise JavaBeans™ (EJB™) components (enterprise beans) are business

components that run on the server.

J2EE components are written in the Java programming language and are compiled in the

same way as any program in the language. The difference between J2EE components and

“standard” Java classes is that J2EE components are assembled into a J2EE application, are

verified to be well formed and in compliance with the J2EE specification, and are deployed to

production, where they are run and managed by the J2EE server.

J2EE web components are either servlets or pages created using JSP technology (JSP pages).

Servlets are Java programming language classes that dynamically process requests and construct

responses. JSP pages are text-based documents that execute as servlets but allow a more natural

approach to creating static content.

Static HTML pages and applets are bundled with web components during application

assembly but are not considered web components by the J2EE specification. Server-side utility

classes can also be bundled with web components and, like HTML pages, are not considered

web components.

11.2 Java ME:

Java ME is a Java platform designed for embedded systems (mobile devices are one kind of such system).

Target devices range from industrial controls to mobile phones (especially feature phones) and

set-top boxes. Java ME was formerly known as Java 2 Platform, Micro Edition (J2ME).

Java ME was designed by Sun Microsystems, now a subsidiary of Oracle Corporation; the

platform replaced a similar technology, Personal Java. Originally developed under the Java

Community Process as JSR 68, the different flavors of Java ME have evolved in separate


JSRs. Sun provides a reference implementation of the specification, but has tended not to

provide free binary implementations of its Java ME runtime environment for mobile devices,

rather relying on third parties to provide their own.

Java ME source code is licensed under the GNU General Public License, and is released under

the project name phoneME.

As of 2008, all Java ME platforms are currently restricted to JRE 1.3 features and use that

version of the class file format (internally known as version 47.0). Should Oracle ever declare

a new round of Java ME configuration versions that support the later class file formats and

language features, such as those corresponding to JRE 1.5 or 1.6 (notably, generics), it will entail

extra work on the part of all platform vendors to update their JREs.

Java ME devices implement a profile. The most common of these are the Mobile Information

Device Profile aimed at mobile devices, such as cell phones, and the Personal Profile aimed at

consumer products and embedded devices like set-top boxes and PDAs. Profiles are subsets of

configurations, of which there are currently two: the Connected Limited Device Configuration

(CLDC) and the Connected Device Configuration (CDC).

There are more than 2.1 billion Java ME enabled mobile phones, but it is becoming an old technology, as it is not used on any of today's newest mobile platforms (e.g., iPhone, Android, Windows Phone 7, MeeGo, BlackBerry's new QNX).

Connected Limited Device Configuration (CLDC) contains a strict subset of the Java-class

libraries, and is the minimum amount needed for a Java virtual machine to operate. CLDC is

basically used for classifying myriad devices into a fixed configuration.

A configuration provides the most basic set of libraries and virtual-machine features that must

be present in each implementation of a J2ME environment. When coupled with one or more

profiles, the Connected Limited Device Configuration gives developers a solid Java platform

for creating applications for consumer and embedded devices.

Mobile Information Device Profile

Designed for mobile phones, the Mobile Information Device Profile includes a GUI, and a

data storage API, and MIDP 2.0 includes a basic 2D gaming API. Applications written for this

profile are called MIDlets. Almost all new cell phones come with a MIDP implementation,

and it is now the de facto standard for downloadable cell phone games. However, many


cellphones can run only those MIDlets that have been approved by the carrier, especially in

North America.

JSR 271: Mobile Information Device Profile 3 (Final release on 09 Dec, 2009) specified the

3rd generation Mobile Information Device Profile (MIDP3), expanding upon the functionality

in all areas as well as improving interoperability across devices. A key design goal of MIDP3

is backward compatibility with MIDP2 content.

Information Module Profile

The Information Module Profile (IMP) is a profile for embedded, "headless" devices such as

vending machines, industrial embedded applications, security systems, and similar devices

with either simple or no display and with some limited network connectivity.

Originally introduced by Siemens Mobile and Nokia as JSR-195, IMP 1.0 is a strict subset of

MIDP 1.0 except that it doesn't include user interface APIs — in other words, it doesn't

include support for the Java package javax.microedition.lcdui. JSR-228, also known as IMP-NG, is IMP's next generation that is based on MIDP 2.0, leveraging MIDP 2.0's new security

and networking types and APIs, and other APIs such as PushRegistry and platformRequest(),

but again it doesn't include UI APIs, nor the game API.

Connected Device Configuration

The Connected Device Configuration is a subset of Java SE, containing almost all the libraries

that are not GUI related. It is richer than CLDC.

Foundation Profile

The Foundation Profile is a Java ME Connected Device Configuration (CDC) profile. This

profile is intended to be used by devices requiring a complete implementation of the Java

virtual machine up to and including the entire Java Platform, Standard Edition API. Typical

implementations will use some subset of that API set depending on the additional profiles

supported. This document describes the facilities that the Foundation Profile provides to the

device and other profiles that use it. This specification was developed under the Java

Community Process.

Personal Basis Profile

The Personal Basis Profile extends the Foundation Profile to include lightweight GUI support

in the form of an AWT subset. This is the platform that BD-J is built upon.


Implementations

Sun provides a reference implementation of these configurations and profiles for MIDP and

CDC. Starting with the Java ME 3.0 SDK, a NetBeans-based IDE will support them in a

single IDE.

In contrast to the numerous binary implementations of the Java Platform built by Sun for

servers and workstations, Sun does not provide any binaries for the platforms of Java ME

targets with the exception of an MIDP 1.0 JRE (JVM) for Palm OS. Sun provides no J2ME

JRE for the Microsoft Windows Mobile (Pocket PC) based devices, despite an open-letter

campaign to Sun to release a rumored internal implementation of PersonalJava known by the

code name "Captain America". Third-party VMs like JBlend and JBed are widely used by Windows Mobile vendors like HTC and Samsung.

Operating systems targeting Java ME have been implemented by DoCoMo in the form of

DoJa, and by SavaJe as SavaJe OS. The latter company was purchased by Sun in April 2007

and now forms the basis of Sun's JavaFX Mobile. The company IS2T provides a Java ME virtual machine (MicroJvm) for any RTOS, and even without an RTOS, in which case it is qualified as bare metal. When running bare metal, the virtual machine is the OS/RTOS: the device boots into Java.

MicroEmulator provides an open source (LGPL) implementation of a MIDP emulator. It is a Java applet based emulator and can be embedded in web pages.

The open-source Mika VM aims to implement JavaME CDC/FP, but is not certified as such

(certified implementations are required to charge royalties, which is impractical for an open-source project). Consequently, devices which use this implementation are not allowed to claim

JavaME CDC compatibility.

11.3 JDBC

Java Database Connectivity (JDBC) is a set of Java classes that provide connectivity to relational databases. We can use JDBC from Java programs to access almost every SQL database, such as Oracle, Sybase, DB2, SQL Server, Access, and FoxBASE. JDBC drivers are available from Symantec, Intersolve, IBM, JavaSoft, and Borland/Visigenic, among others; with little effort we can connect to a database.

Origin of JDBC:

JDBC is not a new query language. It is simply a Java object interface to SQL. Our applications use JDBC to forward SQL statements to a DBMS. We write SQL statements in a Java program to perform database queries and updates. We can think of JDBC


as just a Java SQL wrapper. JDBC does not enhance or diminish the power of SQL; it is simply a mechanism for submitting SQL statements. JDBC can easily handle operations like connecting to a database, retrieving query results, and committing or rolling back transactions.

JDBC is based on the X/Open SQL CLI (Call Level Interface), which is also the basis for Microsoft’s ODBC interface. The CLI is not a new query language. It is simply a procedural interface to SQL.

JDBC Drivers:

JDBC drivers are either direct or ODBC-bridged. A direct driver sits on top of the DBMS's native interface; for example, Symantec provides direct drivers for Oracle 7.x, and IBM provides a native JDBC driver for its DB2 products. Direct means no translation layer sits between the JDBC program and the database, so direct drivers are faster and are used in real-time environments.

In contrast to direct drivers, bridged drivers are built on top of existing ODBC drivers. JDBC was created after ODBC, so a translation takes place between the two protocols; these bridge drivers are therefore slower in communication. JavaSoft and Intersolve provide JDBC-to-ODBC bridge drivers that make it easier to translate between JDBC and the various ODBC drivers. As a result, the JDBC applications we write are guaranteed to be portable across multi-vendor DBMSs. That is, a JDBC program is both platform-independent and database-independent.

All the classes and interfaces required for writing a JDBC program are included in the java.sql package, which is supplied with the core JDK itself. JDBC drivers provide the implementation for these classes and interfaces. The other responsibility of a driver is to maintain transaction reliability; drivers must provide the required synchronization protection.

Java Database Connectivity (JDBC)

JDBC is a set of classes and interfaces used for connecting to a database from applications developed in the Java language. To get a connection to the database, a driver, which is an implementation of the JDBC API, is loaded. The driver is used to create a Statement, an object used to execute SQL queries. The result of a statement is returned as a ResultSet.
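The driver, Statement, and ResultSet flow described above could be sketched as follows. The JDBC URL, credentials, and table name are placeholders, and a vendor's driver JAR must be on the classpath for rowCount to actually connect; only the query-building helper is pure.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class JdbcSketch {
    // Pure helper: build the query text (kept separate so it can be
    // exercised without a live database).
    static String countQuery(String table) {
        return "SELECT COUNT(*) FROM " + table;
    }

    // Classic JDBC flow: Connection, then Statement, then ResultSet.
    // The URL, user, password, and table name are illustrative placeholders.
    static int rowCount(String url, String user, String password, String table)
            throws SQLException {
        try (Connection con = DriverManager.getConnection(url, user, password);
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(countQuery(table))) {
            return rs.next() ? rs.getInt(1) : 0;
        } // try-with-resources closes ResultSet, Statement, and Connection
    }
}
```

For queries built from user input, a PreparedStatement with bound parameters would replace the string concatenation shown here.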

Page 38: Real Time Interactive Lecture Delivery System

JDBC Architecture

[Figure: Java applications call the JDBC Driver Manager, which routes requests to a JDBC-to-ODBC bridge driver (via an ODBC driver), a Sybase driver, an MS SQL Server driver, or an Oracle driver connecting to an Oracle database.]


11.4 Web Service

Web services can be defined as loosely coupled software components delivered over IP networks. The primary objective of Web services is to simplify and standardize application interoperability within and across companies, leading to increased operational efficiencies and tighter partner relationships.

Web service:

A Web service is a unit of managed code that can be remotely invoked using HTTP, that is, it

can be activated using HTTP requests.

Historically speaking, remote access to binary units required platform-specific and sometimes

language-specific protocols. For example, DCOM clients access remote COM types using

tightly coupled RPC calls. CORBA requires the use of a tightly coupled protocol, referred to as

Internet Inter-ORB Protocol (IIOP), to activate remote types. Enterprise JavaBeans (EJBs)

require the Remote Method Invocation (RMI) protocol and, by and large, a specific language

(Java). Thus each of these remote invocation architectures needs proprietary protocols, which

typically require a tight connection to the remote source.

One can access Web services using nothing but HTTP. Of all the protocols in existence today,

HTTP is the one specific wire protocol that all platforms tend to agree on. Thus, using Web services, a Web service developer can use any language they wish, and a Web service consumer can use standard HTTP to invoke the methods a Web service provides. The bottom line is that we

have true language and platform integration. Simple Object Access Protocol (SOAP) and

XML are also two key pieces of the Web services architecture.

What is a Web Service

Web services constitute a distributed computer architecture made up of many different

computers trying to communicate over the network to form one system. They consist of a set

of standards that allow developers to implement distributed applications - using radically

different tools provided by many different vendors - to create applications that use a

combination of software modules called from systems in disparate departments or from other

companies.


A Web service contains some number of classes, interfaces, enumerations and structures that

provide black box functionality to remote clients. Web services typically define business

objects that execute a unit of work (e.g., perform a calculation, read a data source, etc.) for the

consumer and wait for the next request. Web service consumer does not necessarily need to be

a browser-based client. Console-based and Windows Forms-based clients can consume a Web

service. In each case, the client indirectly interacts with the Web service through an

intervening proxy. The proxy looks and feels like the real remote type and exposes the same

set of methods. Under the hood, the proxy code really forwards the request to the Web service

using standard HTTP or optionally SOAP messages.

Web Service Standards

Web services are registered and announced using the following services and protocols. Many

of these and other standards are being worked out by the UDDI project, a group of industry

leaders that is spearheading the early creation and design efforts.

Universal Description, Discovery, and Integration (UDDI) is a protocol for describing

available Web services components. This standard allows businesses to register with an

Internet directory that will help them advertise their services, so companies can find one

another and conduct transactions over the Web. This registration and lookup task is done using

XML and HTTP(S)-based mechanisms.

Simple Object Access Protocol (SOAP) is a protocol for initiating conversations with a UDDI

Service. SOAP makes object access simple by allowing applications to invoke object methods

or functions, residing on remote servers. A SOAP application creates a request block in XML,

supplying the data needed by the remote method as well as the location of the remote object

itself.

Web Service Description Language (WSDL), the proposed standard for how a Web service is

described, is an XML-based service IDL (Interface Definition Language) that defines the

service interface and its implementation characteristics. WSDL is referenced by UDDI entries

and describes the SOAP messages that define a particular Web service.

ebXML (e-business XML) defines core components, business processes, registry and

repository, messaging services, trading partner agreements, and security.

Implementing Web Services


Here is a brief step-by-step description of how a Web service is implemented:

1. A service provider creates a Web service.
2. The service provider uses WSDL to describe the service to a UDDI registry.
3. The service provider registers the service in a UDDI registry and/or ebXML registry/repository.
4. Another service or consumer locates and requests the registered service by querying UDDI and/or ebXML registries.
5. The requesting service or user writes an application to bind to the registered service, using SOAP in the case of UDDI and/or ebXML.
6. Data and messages are exchanged as XML over HTTP.
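The final steps above (binding with SOAP and exchanging XML over HTTP) could be sketched in plain Java as follows. The envelope body, endpoint URL, and SOAPAction value are illustrative placeholders, not a real service's contract; in practice a generated proxy or a SOAP library would handle this plumbing.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SoapClient {
    // Build a minimal SOAP 1.1 envelope; the body payload is supplied
    // by the caller and is purely illustrative here.
    static String buildEnvelope(String body) {
        return "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
             + "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
             + "<soap:Body>" + body + "</soap:Body>"
             + "</soap:Envelope>";
    }

    // POST the envelope over plain HTTP, as the text describes; the
    // endpoint URL and SOAPAction header value are placeholders.
    static int post(String endpoint, String envelope) throws IOException {
        HttpURLConnection con = (HttpURLConnection) new URL(endpoint).openConnection();
        con.setRequestMethod("POST");
        con.setDoOutput(true);
        con.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        con.setRequestProperty("SOAPAction", "\"\"");
        try (OutputStream out = con.getOutputStream()) {
            out.write(envelope.getBytes(StandardCharsets.UTF_8));
        }
        return con.getResponseCode(); // HTTP success code or error code
    }
}
```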

Web Service Infrastructure

Even though Web services are being built using existing infrastructure, there exists a strong

necessity for a number of innovative infrastructures. The core architectural foundation of Web

services are XML, XML namespaces, and XML schema. UDDI, SOAP, WSDL, ebXML and

security standards are being developed in parallel by different vendors.

Web Services Technologies and Tools

There are a number of mechanisms for constructing Web services. Microsoft has come out

with a new object-oriented language C# as the development language for Web services

and .NET framework. Microsoft has an exciting tool called Visual Studio .NET in this regard.

The back end database can be Microsoft SQL Server 2000 in Windows 2000 Professional.

Sun Microsystems has its own set of technologies and tools for facilitating Web services

development. Java Servlets, Java Server Pages (JSPs), Enterprise JavaBeans (EJB)

architecture and other Java 2 Enterprise Edition (J2EE) technologies play a very critical role in

developing Web services.

There are a number of tools for developing Web services. They are Forte Java IDE, Oracle

JDeveloper, and WebGain Studio.

Sun Microsystems has taken an initiative called Sun ONE (Open Network Environment) and

is planning to push Java forward as a platform for Web services. It is developing Java APIs for

XML-based remote procedure calls and for looking up services in XML registries - two more

JAX family APIs: JAX/RPC (Java API for XML Remote Procedure Calls) and JAXR (Java

API for XML Registries). These will wrap up implementations of Web services standards,

such as SOAP and UDDI.


IBM also for its part has already developed a suite of early-access tools for Web services

development. They are Web Services Toolkit (WSTK), WSDL Toolkit, and Web Services

Development Environment (WSDE).

Apache Axis is an implementation of the SOAP ("Simple Object Access Protocol")

submission to W3C.

From the draft W3C specification:

SOAP is a lightweight protocol for exchanging structured information in a decentralized, distributed environment. It is an XML based protocol that consists of three parts: an envelope that defines a framework for describing what is in a message and how to process it, a set of encoding rules for expressing instances of application-defined datatypes, and a convention for representing remote procedure calls and responses.

Apache Axis is an Open Source SOAP server and client. SOAP is a mechanism for inter-

application communication between systems written in arbitrary languages, across the Internet.

SOAP usually exchanges messages over HTTP: the client POSTs a SOAP request, and

receives either an HTTP success code and a SOAP response or an HTTP error code. Open

Source means that you get the source, but that there is no formal support organization to help

you when things go wrong.

For the last few years, XML has enabled heterogeneous computing environments to share

information over the Web. It now offers a simplified means by which to share process as well.

From a technical perspective, the advent of Web services is not a revolution in distributed

computing. It is instead a natural evolution of XML application from structured representation

of information to structured representation of inter-application messaging.

Prior to the advent of Web services, enterprise application integration (EAI) was very difficult

due to differences in programming languages and middleware used within organizations. This

led to the situation where interoperability was cumbersome and painful. With the arrival of

Web services, any application can be integrated as long as it is Internet-enabled.

It is difficult to avoid the popularity and hype that is surrounding Web services. Each software

vendor has some initiative concerning Web services and there is always great speculation

about the future of the market for them. Whichever way it turns out, Web service architectures

provide a very different way of thinking about software development. From client-server to n-

tier systems, to distributed computing, Web service applications represent the culmination of

each of these architectures in combination with the Internet.


11.5 FLASH BUILDER 4.5:

With the rapid evolution of mobile computing platforms, new challenges have emerged for

application developers. Much like the early days of web and desktop computing, each platform

has its own development model, including programming language, framework, tools, and

different deployment options. These challenges add time, cost, and complexity to delivering

applications across the web, the desktop, and the many mobile device platforms.

The Adobe Flash Platform already enables developers to deliver consistent application

experiences across multiple browsers and operating systems. With the introduction of the

Adobe Flex 4.5 SDK and Adobe Flash Builder 4.5 software along with the availability of the

Adobe AIR runtime on mobile devices, developers can now build mobile Flex applications for

touch screen smart phones and tablets with the same ease and quality as on desktop platforms.

By providing a common path to creating applications for web, desktop, and multiple mobile

platforms, Flex and Flash Builder can significantly reduce the time and cost associated with


application development and testing while providing users with a consistent application

experience.

Mobile web components

Flex 4.5 expands the extensive existing web and desktop component library by adding 21 new

touch-enabled, optimized, and density-aware mobile components, accelerating mobile

application development. New mobile Spark components include View, ViewNavigator,

TabbedViewNavigator; MobileApplication, ViewNavigatorApplication; ViewMenu,

ViewMenuItem; Busy Cursor; SkinnablePopUpContainer; Scroller; List; ItemRenderer;

Button, CheckBox, RadioButton, ToggleButton; TextArea, TextInput; HSlider, VSlider; and

ActionBar.

In addition to the new mobile components, Flex 4.5 adds other important capabilities and

improvements for building mobile applications, including kinetic scrolling and elastic bounce/pull

effects for scrolling components, performance optimizations for scrolling and transitions, auto-

scaling based on device pixel density, default mobile themes including a light-on-dark color

scheme for mobile component skins, native support for keyboard input, splash screen support,

and multiresolution bitmap support.

Tooling for mobile development

Flash Builder 4.5 adds important new mobile development workflows to help you code,

debug, and optimize mobile applications. It features a new mobile project type, new design-

view per-device preview, new support for multidensity authoring, per-platform application

permissions editing, and a powerful new debugging workflow that allows you to debug on a

physical device or on the desktop using a device emulator. You can deploy, package, and sign

the required resources as a platform-specific installer file for upload to a mobile application

distribution site or store.

Mobile platform support

Build apps for Android, BlackBerry PlayBook, iPhone, and iPad

Both Flex 4.5 and Flash Builder 4.5 provide full support for building and deploying mobile

Flex and ActionScript applications for Android, BlackBerry Tablet OS, iPhone, and iPad.

Adobe Flash Builder (previously known as Adobe Flex Builder) is an integrated development

environment (IDE) built on the Eclipse platform meant for developing rich Internet applications

(RIAs) and cross-platform desktop applications, particularly for the Adobe Flash platform.


Support for cross-platform desktop applications was added in Flex Builder 3 with the

introduction of AIR.

Adobe Flash (formerly Macromedia Flash) is a proprietary multimedia platform used to add

animation, video, and interactivity to Web pages. Flash is frequently used for advertisements and

games. More recently, it has been positioned as a tool for "Rich Internet Applications".

Responsiveness:

When the application responds directly to a user’s command, or when the user can

directly manipulate elements on the screen, it engenders a feeling of connectedness and

responsiveness to the application. The user trusts the application, which feels more like a solid-

state machine than a harried librarian shuttling off to fetch the next volume at a user’s request.

Productivity:

Not having to subtly re-orient themselves with each new page, users can optimize their

focus and stay engaged with the tasks at hand. The feeling of a solid-state machine also

empowers the user to explore more, without fear of losing the page-oriented thread or having to

reload previous data-filled pages upon return.

User persistence:

The feeling of connectedness to the application, without page transitions that cause

attention gaps, keeps users more committed throughout the course of an application task. Each

attention gap provides an opening for a user to shift his or her attention and move to a different

task, application or site.

Integrated development environment:

An integrated development environment (IDE) also known as integrated design environment or

integrated debugging environment is a software application that provides comprehensive

facilities to computer programmers for software development. An IDE normally consists of:

Source code editor


Compiler and/or an interpreter

Build automation tools

Debugger

11.6 Android

Android is an operating system for mobile devices such as smartphones and tablet

computers, developed by the Open Handset Alliance led by Google.

Google purchased the initial developer of the software, Android Inc., in 2005. Android

was unveiled on 5 November 2007 alongside the founding of the

Open Handset Alliance, a consortium of 80 hardware, software, and telecommunication

companies devoted to advancing open standards for mobile devices. Google released most of

the Android code under the Apache License, a free software license. The Android Open

Source Project (AOSP) is tasked with the maintenance and further development of Android.

Android consists of a kernel based on the Linux kernel, with middleware, libraries and APIs

written in C and application software running on an application framework which includes

Java-compatible libraries based on Apache Harmony. Android uses the Dalvik virtual machine

with just-in-time compilation to run compiled Java code. Android has a large community of

developers writing applications ("apps") that extend the functionality of the devices.

Developers write primarily in Java. There are currently more than 250,000 apps available for

Android. Android Market is the online app store run by Google, though apps can also be

downloaded from third-party sites.

Foundation

Android, Inc. was founded in Palo Alto, California, United States in October 2003 by Andy

Rubin (co-founder of Danger), Rich Miner (co-founder of Wildfire Communications, Inc.),

Nick Sears (once VP at T-Mobile), and Chris White (headed design and interface development

at WebTV) to develop, in Rubin's words "...smarter mobile devices that are more aware of its

owner's location and preferences". Despite the obvious past accomplishments of the founders

and early employees, Android Inc. operated secretively, revealing only that it was working on

software for mobile phones.

That same year, Rubin ran out of cash. Steve Perlman brought him $10,000 in cash in an

envelope and refused a stake in the company.


Acquisition by Google

Google acquired Android Inc. in August 2005, making Android Inc. a wholly owned

subsidiary of Google Inc. Key employees of Android Inc., including Andy Rubin, Rich Miner

and Chris White, stayed at the company after the acquisition.

Not much was known about Android Inc. at the time of the acquisition, but many assumed that

Google was planning to enter the mobile phone market with this move.

Version history

Android has seen a number of updates since its original release. These updates to the base

operating system typically fix bugs and add new features. Generally, each new version of the

Android operating system is developed under a code name based on a dessert item. Past

updates included Cupcake and Donut. The code names are in alphabetical order (Cupcake,

Donut, Eclair, Froyo, Gingerbread, Honeycomb, and the upcoming Ice Cream Sandwich).

Below is a list of the most recent versions, and what they include:

2.0 (Eclair) included a new web browser, with a new user interface and support for HTML5 and the W3C Geolocation API. It also included an enhanced camera app with features like digital zoom, flash, color effects, and more.

2.1 (Eclair) included support for voice controls throughout the entire OS. It also included a new launcher, with 5 homescreens instead of 3, animated backgrounds, and a button to open the menu (instead of a slider). It also included a new weather app, and improved functionality in the Email and Phonebook apps.

2.2 (Froyo) introduced speed improvements with JIT optimization and the Chrome V8 JavaScript engine, and added Wi-Fi hotspot tethering and Adobe Flash support

2.3 (Gingerbread) refined the user interface, improved the soft keyboard and copy/paste features, and added support for Near Field Communication

3.0 (Honeycomb) was a tablet-oriented release which supports larger screen devices, introduces many new user interface features, and supports multicore processors and hardware acceleration for graphics. The Honeycomb SDK has been released, and the first device featuring this version, the Motorola Xoom tablet, went on sale in February 2011.

3.1 (Honeycomb) was announced at Google I/O on 10 May 2011. It allows Honeycomb devices to transfer content directly from USB devices.

3.2 (Honeycomb) is "an incremental release that adds several new capabilities for users and developers". Highlights include optimization for a broader range of screen sizes; new "zoom-to-fill" screen compatibility mode; capability to load media files directly from the SD card; and an extended screen support API, providing developers with more precise control over the UI.


Future releases that have been announced include:

4.0 (Ice Cream Sandwich) is said to be a combination of Gingerbread and Honeycomb into a "cohesive whole". It will be released in Q4 2011.

Design

Android's kernel is derived from the Linux kernel. Google contributed code to the Linux

kernel as part of their Android effort, but certain features, notably a power management

feature called wakelocks, were rejected by mainline kernel developers, so the Android kernel

is now a separate version or fork of the Linux kernel.

Google announced in April 2010 that they would hire two employees to work with the Linux

kernel community. Greg Kroah-Hartman, the current Linux kernel maintainer for the -stable

branch, said in December 2010 that he was concerned that Google was no longer trying to get

their code changes included in mainstream Linux. Some Google Android developers hinted

that "the Android team was getting fed up with the process", because they were a small team

and had more urgent work to do on Android.

Android does not have a native X Window System nor does it support the full set of standard

GNU libraries, and this makes it difficult to port existing GNU/Linux applications or libraries

to Android. However, support for the X Window System is possible.

Features


The Android Emulator default home screen (v1.5)

Architecture diagram

Current features and specifications:

Handset layouts

The platform is adaptable to larger VGA displays as well as traditional smartphone layouts, and provides a 2D graphics library and a 3D graphics library based on the OpenGL ES 2.0 specification.

Storage

SQLite, a lightweight relational database, is used for data storage purposes.

Connectivity

Android supports connectivity technologies including GSM/EDGE, IDEN, CDMA, EV-DO, UMTS, Bluetooth, Wi-Fi, LTE, NFC and WiMAX.

Messaging

SMS and MMS are available forms of messaging, including threaded text messaging. The Android Cloud to Device Messaging framework (C2DM) is now also part of Android's push messaging service.

Multiple language support

Android supports multiple human languages. The number of languages more than doubled with platform 2.3 (Gingerbread). Android still lacks font rendering for several languages (e.g., Hindi) even after official announcements of added support.


Web browser

The web browser available in Android is based on the open-source WebKit layout engine, coupled with Chrome's V8 JavaScript engine. The browser scores 93/100 on the Acid3 test.

Java support

While most Android applications are written in Java, there is no Java Virtual Machine in the platform and Java byte code is not executed. Java classes are compiled into Dalvik executables and run on Dalvik, a specialized virtual machine designed specifically for Android and optimized for battery-powered mobile devices with limited memory and CPU. J2ME support can be provided via third-party applications.

Media support

Android supports the following audio/video/still media formats: WebM, H.263, H.264 (in 3GP or MP4 container), MPEG-4 SP, AMR, AMR-WB (in 3GP container), AAC, HE-AAC (in MP4 or 3GP container), MP3, MIDI, Ogg Vorbis, FLAC, WAV, JPEG, PNG, GIF, BMP.

Streaming media support

RTP/RTSP streaming (3GPP PSS, ISMA), HTML progressive download (HTML5 <video> tag). Adobe Flash Streaming (RTMP) and HTTP Dynamic Streaming are supported by the Flash plugin. Apple HTTP Live Streaming is supported by RealPlayer for Mobile, and by the operating system in Android 3.0 (Honeycomb).

Additional hardware support

Android can use video/still cameras, touchscreens, GPS, accelerometers, gyroscopes, magnetometers, dedicated gaming controls, proximity and pressure sensors, thermometers, accelerated 2D bit blits (with hardware orientation, scaling, pixel format conversion) and accelerated 3D graphics.

Multi-touch

Android has native support for multi-touch which was initially made available in handsets such as the HTC Hero. The feature was originally disabled at the kernel level (possibly to avoid infringing Apple's patents on touch-screen technology at the time). Google has since released an update for the Nexus One and the Motorola Droid which enables multi-touch natively.

Bluetooth


Supports A2DP, AVRCP, sending files (OPP), accessing the phone book (PBAP), voice dialing and sending contacts between phones. Keyboard, mouse and joystick (HID) support is available in Android 3.1+, and in earlier versions through manufacturer customizations and third-party applications.

Video calling

Android does not support native video calling, but some handsets have a customized version of the operating system that supports it, either via the UMTS network (like the Samsung Galaxy S) or over IP. Video calling through Google Talk is available in Android 2.3.4 and later. Gingerbread allows Nexus S to place Internet calls with a SIP account. This allows for enhanced VoIP dialing to other SIP accounts and even phone numbers. Skype 2.1 offers video calling in Android 2.3, including front camera support.

Multitasking

Multitasking of applications is available.

Voice based features

Google search through voice has been available since initial release. Voice actions for calling, texting, navigation, etc. are supported on Android 2.2 onwards.

Tethering

Android supports tethering, which allows a phone to be used as a wireless or wired hotspot. Before Android 2.2 this was supported by third-party applications or manufacturer customizations.

Screen capture

Android does not support screenshot capture as of 2011. This is supported by manufacturer and third-party customizations. Screen Capture is available through a PC connection using the DDMS developer's tool.


12. Testing:

A primary purpose of testing is to detect software failures so that defects may be uncovered and corrected. This is a non-trivial pursuit: testing cannot establish that a product functions properly under all conditions, only that it does not function properly under specific conditions. The scope of software testing often includes examination of code as well as execution of that code in various environments and conditions, asking both whether the code does what it is supposed to do and whether it does what it needs to do. In the current culture of software development, a testing organization may be separate from the development team, and there are various roles for testing team members. Information derived from software testing may be used to correct the process by which software is developed.

Defects and failures

Not all software defects are caused by coding errors. One common source of expensive defects is requirements gaps, e.g., unrecognized requirements that result in errors of omission by the program designer. A common source of requirements gaps is non-functional requirements such as testability, scalability, maintainability, usability, performance, and security.

Software faults occur through the following process. A programmer makes an error (mistake), which results in a defect (fault, bug) in the software source code. If this defect is executed, in certain situations the system will produce wrong results, causing a failure. Not all defects will necessarily result in failures. For example, defects in dead code will never result in failures. A defect can turn into a failure when the environment is changed. Examples of these changes in environment include the software being run on a new hardware platform, alterations in source data or interacting with different software. A single defect may result in a wide range of failure symptoms.
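The defect-to-failure chain described above can be illustrated with a short sketch. The function and its specification are hypothetical; the point is that the fault produces a failure only when a particular input executes it:

```python
# A hypothetical function with a defect (fault): the wrong comparison
# operator is used, so the discount is skipped for orders of exactly 100.
# The defect only becomes a failure when that input actually occurs.
def total_price(amount):
    # Intended specification: orders of 100 or more get a 10% discount.
    # Defect: '>' should be '>='.
    if amount > 100:
        return amount * 0.9
    return float(amount)

# The defect stays hidden for most inputs...
assert total_price(50) == 50.0      # correct per the specification
assert total_price(200) == 180.0    # correct per the specification
# ...and surfaces as a failure only at the boundary:
print(total_price(100))  # prints 100.0, but the specification expects 90.0
```

Note that the fault exists in the code the whole time; only the boundary input turns it into an observable failure.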

Compatibility

A frequent cause of software failure is incompatibility with another application, a new operating system, or, increasingly, a new web browser version. A lack of backward compatibility can occur because the programmers wrote and tested the software only on the latest operating system they had access to, in isolation (with no other conflicting applications running at the same time), or under 'ideal' conditions ('unlimited' memory, a 'superfast' processor, the latest operating system with all updates applied, and so on). In effect, everything runs "as intended" only with one particular combination of software and hardware on one machine. These are some of the hardest failures to predict, detect, and test for, and many are therefore discovered only after release into the larger world, with its largely unknown mix of applications, software, and hardware. An experienced programmer will likely have had exposure to these factors through "co-evolution" with several older systems, be much more aware of potential future compatibility problems, and therefore tend to use tried and tested functions or instructions rather than always the latest available, which may not be fully compatible with earlier mixtures of software and hardware. This could be considered a prevention-oriented strategy that fits well with the latest testing phase suggested by Dave Gelperin and William C. Hetzel, cited below.

Input combinations and preconditions

A problem with software testing is that testing under all combinations of inputs and preconditions (initial state) is not feasible, even with a simple product. This means that the number of defects in a software product can be very large and defects that occur infrequently are difficult to find in testing. More significantly, non-functional dimensions of quality (how it is supposed to be versus what it is supposed to do) -- for example, usability, scalability, performance, compatibility, reliability -- can be highly subjective; something that constitutes sufficient value to one person may be intolerable to another.
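A back-of-the-envelope calculation makes the infeasibility concrete (the interface and the test rate are illustrative assumptions):

```python
# Why exhaustive input testing is infeasible: a function taking just
# three 32-bit integer parameters already has 2^96 input combinations.
combinations = (2 ** 32) ** 3
print(f"{combinations:.3e} possible input combinations")

# Even at a generous one billion test executions per second:
seconds = combinations / 1e9
years = seconds / (60 * 60 * 24 * 365)
print(f"roughly {years:.1e} years to run them all")
```

This is why test design techniques such as equivalence partitioning and boundary value analysis exist: they select a small, representative subset of the input space.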

Static vs. dynamic testing

There are many approaches to software testing. Reviews, walkthroughs, or inspections are considered static testing, whereas actually executing programmed code with a given set of test cases is referred to as dynamic testing. The former can be, and unfortunately in practice often is, omitted, whereas the latter takes place when programs begin to be used for the first time, which is normally considered the beginning of the testing stage. Dynamic testing may actually begin before the program is 100% complete in order to test particular sections of code (modules or discrete functions). For example, spreadsheet programs are, by their very nature, tested to a large extent "on the fly" during the build process, as the result of each calculation or text manipulation is shown interactively immediately after each formula is entered.

Software verification and validation

Software testing is used in association with verification and validation:

Verification: Have we built the software right (i.e., does it match the specification)? It is process based.

Validation: Have we built the right software (i.e., is this what the customer wants)? It is product based.

The software testing team

Software testing can be done by dedicated software testers. Until the 1980s the term "software tester" was used generally, but later testing was also seen as a separate profession. Reflecting the different periods and goals of software testing, different roles have been established: test lead/manager, test designer, tester, test automator/automation developer, and test administrator.


Software Quality Assurance (SQA)

Though controversial, software testing may be viewed as an important part of the software quality assurance (SQA) process. In SQA, software process specialists and auditors take a broader view of software and its development. They examine and change the software engineering process itself to reduce the number of faults that end up in the delivered software: the defect rate. What constitutes an acceptable defect rate depends on the nature of the software. An arcade video game designed to simulate flying an airplane would presumably have a much higher tolerance for defects than mission-critical software such as that used to control the functions of an airliner. Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies.

Software testing is a task intended to detect defects in software by contrasting a computer program's expected results with its actual results for a given set of inputs. By contrast, QA is the implementation of policies and procedures intended to prevent defects from occurring in the first place.

Testing methods

Software testing methods are traditionally divided into black box testing and white box testing. These two approaches are used to describe the point of view that a test engineer takes when designing test cases.

Black box testing

Black box testing treats the software as a black box without any knowledge of internal implementation. Black box testing methods include equivalence partitioning, boundary value analysis, all-pairs testing, fuzz testing, model-based testing, traceability matrix, exploratory testing and specification-based testing.

Specification-based testing

Specification-based testing aims to test the functionality according to the requirements. Thus, the tester inputs data and only sees the output from the test object. This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior) is the same as the expected value specified in the test case.

Specification-based testing is necessary but insufficient to guard against certain risks.
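A minimal sketch of this style in Python (the unit under test, leap_year, is a hypothetical example): the test cases are derived purely from the specification, and the tester only compares actual output with expected output.

```python
# Specification-based (black box) testing: the tester supplies inputs and
# checks outputs against expected values taken from the requirements,
# without looking at the implementation.
def leap_year(year):
    # Hypothetical unit under test.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Test cases derived from the specification, not from the code:
spec_cases = [
    (2000, True),   # divisible by 400 -> leap
    (1900, False),  # divisible by 100 but not 400 -> not leap
    (2012, True),   # divisible by 4 -> leap
    (2011, False),  # not divisible by 4 -> not leap
]

for given_input, expected in spec_cases:
    actual = leap_year(given_input)
    assert actual == expected, f"{given_input}: expected {expected}, got {actual}"
print("all specification-based cases passed")
```

Notice that nothing in the test cases depends on how leap_year is implemented; the same suite would be valid against any implementation of the same specification.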

Advantages and disadvantages

The black box tester has no "bonds" with the code, and a tester's perception is very simple: the code MUST have bugs. Using the principle "ask and you shall receive," black box testers find bugs where programmers don't. On the other hand, black box testing is like a walk in a dark labyrinth without a flashlight, because the tester doesn't know how the back end was actually constructed. That's why there are situations when:

1. A black box tester writes many test cases to check something that could be tested by only one test case, and/or

2. Some parts of the back end are not tested at all.


Therefore, black box testing has the advantage of an unaffiliated opinion on the one hand and the disadvantage of blind exploring on the other.

White box testing

White box testing, by contrast to black box testing, is when the tester has access to the internal data structures and algorithms (and the code that implements them).

Types of white box testing

The following types of white box testing exist:

API testing - testing of the application using public and private APIs.

Code coverage - creating tests to satisfy some criteria of code coverage. For example, the test designer can create tests to cause all statements in the program to be executed at least once.

Fault injection methods.

Mutation testing methods.

Static testing - white box testing includes all static testing.
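The coverage-driven approach can be sketched as follows. The function classify and its branch labels are hypothetical, and a real project would use a coverage tool rather than hand instrumentation; the sketch only shows the idea of designing inputs until every branch has executed.

```python
# A minimal sketch of white box, coverage-driven test design. Each branch
# of the (hypothetical) function records that it ran; the test designer
# adds inputs until every branch has been executed at least once.
covered = set()

def classify(n):
    if n < 0:
        covered.add("negative branch")
        return "negative"
    if n == 0:
        covered.add("zero branch")
        return "zero"
    covered.add("positive branch")
    return "positive"

# Designed with knowledge of the code: one input per branch.
for n in (-5, 0, 7):
    classify(n)

all_branches = {"negative branch", "zero branch", "positive branch"}
coverage = len(covered) / len(all_branches) * 100
print(f"branch coverage: {coverage:.0f}%")  # prints: branch coverage: 100%
```

A black box tester, not seeing the branches, might happen to exercise only one of them; the white box view shows exactly which inputs are still needed.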

Code completeness evaluation

White box testing methods can also be used to evaluate the completeness of a test suite that was created with black box testing methods. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested. Two common forms of code coverage are function coverage, which reports on the functions executed, and statement coverage, which reports on the number of lines executed to complete the test. Both return a coverage metric, measured as a percentage.

Grey Box Testing

In recent years the term grey box testing has come into common usage. This involves having access to internal data structures and algorithms for purposes of designing the test cases, but testing at the user, or black-box level. Manipulating input data and formatting output do not qualify as grey-box because the input and output are clearly outside of the black-box we are calling the software under test. This is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test. Grey box testing may also include reverse engineering to determine, for instance, boundary values or error messages.

Acceptance testing

Acceptance testing can mean one of two things:


1. A smoke test is used as an acceptance test prior to introducing a build to the main testing process.

2. Acceptance testing performed by the customer is known as user acceptance testing (UAT).

Regression Testing

Regression testing is any type of software testing that seeks to uncover software regressions. Such regressions occur whenever software functionality that was previously working correctly stops working as intended. Typically regressions occur as an unintended consequence of program changes. Common methods of regression testing include re-running previously run tests and checking whether previously fixed faults have re-emerged.
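The re-running of previously passing tests can be sketched as follows; slugify is a hypothetical function that has just been modified, and the suite of old input/expected pairs is kept to catch any regression.

```python
# A minimal regression testing sketch: a suite of previously passing
# tests is re-run after every change to the (hypothetical) function.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

# Previously passing tests, preserved and re-run after each modification:
regression_suite = [
    ("Hello World", "hello-world"),
    ("  Trimmed  ", "trimmed"),
    ("lower", "lower"),
]

failures = [(inp, exp, slugify(inp))
            for inp, exp in regression_suite
            if slugify(inp) != exp]
print("regressions found:" if failures else "no regressions", failures)
```

Because the suite is just data, it can be re-run automatically on every build, which is why regression tests are among the first tests teams automate.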

Non Functional Software Testing

Special methods exist to test non-functional aspects of software.

Performance testing checks whether the software can handle large quantities of data or users. This is generally referred to as software scalability, and this type of non-functional testing is oftentimes referred to as load testing.

Usability testing is needed to check whether the user interface is easy to use and understand.

Security testing is essential for software which processes confidential data and to prevent system intrusion by hackers.

Testing for internationalization and localization is needed to verify these aspects of the software, for which a pseudolocalization method can be used.

In contrast to functional testing, which establishes the correct operation of the software (correct in that it matches the expected behavior defined in the design requirements), non-functional testing verifies that the software functions properly even when it receives invalid or unexpected inputs. Software fault injection, in the form of fuzzing is an example of non-functional testing. Non-functional testing, especially for software, is designed to establish whether the device under test can tolerate invalid or unexpected inputs, thereby establishing the robustness of input validation routines as well as error-handling routines. Various commercial non-functional testing tools are linked from the Software fault injection page; there are also numerous open-source and free software tools available that perform non-functional testing.

Testing process

A common practice of software testing is performed by an independent group of testers after the functionality is developed before it is shipped to the customer. This practice often results in the testing phase being used as project buffer to compensate for project delays, thereby compromising the time devoted to testing. Another practice is to start software testing at the same moment the project starts and it is a continuous process until the project finishes.


In counterpoint, some emerging software disciplines, such as extreme programming and the agile software development movement, adhere to a "test-driven software development" model. In this process unit tests are written first, by the software engineers (often with pair programming in the extreme programming methodology). Of course these tests fail initially, as they are expected to. Then, as code is written, it passes incrementally larger portions of the test suites. The test suites are continuously updated as new failure conditions and corner cases are discovered, and they are integrated with any regression tests that are developed. Unit tests are maintained along with the rest of the software source code and generally integrated into the build process (with inherently interactive tests being relegated to a partially manual build acceptance process).

Testing can be done on the following levels:

Unit testing tests the minimal software component, or module. Each unit (basic component) of the software is tested to verify that the detailed design for the unit has been correctly implemented. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors.

Integration testing exposes defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system.

System testing tests a completely integrated system to verify that it meets its requirements.

System integration testing verifies that a system is integrated to any external or third party systems defined in the system requirements.
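The unit level described above can be sketched with Python's built-in unittest module; Stack is a hypothetical minimal component, and each test verifies one detail of its design, including the constructor.

```python
# A minimal unit test sketch using the standard unittest module.
import unittest

class Stack:
    """Hypothetical minimal component under test."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()
    def empty(self):
        return not self._items

class StackTest(unittest.TestCase):
    def test_new_stack_is_empty(self):      # exercises the constructor
        self.assertTrue(Stack().empty())
    def test_pop_returns_last_pushed(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)

# Run the unit tests and report the result without exiting the process.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(StackTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(f"tests run: {result.testsRun}, failures: {len(result.failures)}")
```

In practice such tests would live alongside the source code and run on every build, as described above.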

Before shipping the final version of software, alpha and beta testing are often done additionally:

Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.

Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.

Finally, acceptance testing can be conducted by the end-user, customer, or client to validate whether or not to accept the product. Acceptance testing may be performed as part of the hand-off process between any two phases of development.

Regression testing


After modifying software, either for a change in functionality or to fix defects, a regression test re-runs previously passing tests on the modified software to ensure that the modifications haven't unintentionally caused a regression of previous functionality. Regression testing can be performed at any or all of the above test levels. These regression tests are often automated.

More specific forms of regression testing are known as sanity testing, when quickly checking for bizarre behavior, and smoke testing when testing for basic functionality.

Benchmarks may be employed during regression testing to ensure that the performance of the newly modified software will be at least as acceptable as the earlier version or, in the case of code optimization, that some real improvement has been achieved.
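A regression benchmark of the kind described above can be sketched as follows: first confirm that an optimized version behaves identically to the original, then compare their timings. Both `sumSquares` implementations are hypothetical stand-ins for an old routine and its newly optimized replacement:

```java
public class BenchmarkSketch {
    // Hypothetical "old" implementation: straightforward loop.
    static long sumSquaresOld(int n) {
        long s = 0;
        for (int i = 1; i <= n; i++) s += (long) i * i;
        return s;
    }

    // Hypothetical "optimized" implementation: closed form n(n+1)(2n+1)/6.
    static long sumSquaresNew(int n) {
        long m = n;
        return m * (m + 1) * (2 * m + 1) / 6;
    }

    static long timeNanos(Runnable r) {
        long start = System.nanoTime();
        r.run();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        // A regression benchmark first checks correctness, then compares timings.
        if (sumSquaresOld(n) != sumSquaresNew(n))
            throw new AssertionError("optimized version changed behaviour");
        System.out.println("old: " + timeNanos(() -> sumSquaresOld(n)) + " ns");
        System.out.println("new: " + timeNanos(() -> sumSquaresNew(n)) + " ns");
    }
}
```

The correctness check is the regression-test part; the timing comparison is the benchmark part, which should show the new version performing at least as well as the old one.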

Testing Tools

Program testing and fault detection can be aided significantly by testing tools and debuggers. Types of testing/debug tools include features such as:

- Program monitors, permitting full or partial monitoring of program code, including:
  - Instruction set simulators, permitting complete instruction-level monitoring and trace facilities
  - Program animation, permitting step-by-step execution and conditional breakpoints at source level or in machine code
  - Code coverage reports
- Formatted dumps or symbolic debugging: tools allowing inspection of program variables on error or at chosen points
- Benchmarks, allowing run-time performance comparisons to be made
- Performance analysis (profiling) tools, which can help to highlight hot spots and resource usage

Some of these features may be incorporated into an integrated development environment (IDE).

Measuring software testing

Usually, quality is constrained to such topics as correctness, completeness, and security, but it can also include more technical requirements as described under the ISO standard ISO 9126, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability.

There are a number of common software measures, often called "metrics", which are used to measure the state of the software or the adequacy of the testing.

Testing artifacts

Software testing process can produce several artifacts.


Test case
A test case in software engineering normally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and actual result. Clinically defined, a test case is an input and an expected result. This can be as pragmatic as "for condition x the derived result is y", whereas other test cases describe the input scenario and the expected results in more detail. A test case can occasionally be a series of steps (though often the steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or expected outcome.

The optional fields are a test case ID, test step or order-of-execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result.

These steps can be stored in a word-processor document, spreadsheet, database, or other common repository. In a database system, you may also be able to see past test results, who generated the results, and the system configuration used to generate those results. These past results would usually be stored in a separate table.
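The fields listed above can be sketched as a plain data structure. The field names follow the text, while the example values (the test case ID, requirement reference, and steps) are hypothetical:

```java
import java.util.List;

// A sketch of a test case record; field names follow the description above.
class TestCase {
    String id;                 // unique identifier
    String requirementRef;     // reference into the design specification
    String precondition;
    List<String> steps;        // series of actions to follow
    String input;
    String expectedResult;
    String actualResult;       // filled in when the test is executed

    TestCase(String id, String requirementRef, String precondition,
             List<String> steps, String input, String expectedResult) {
        this.id = id;
        this.requirementRef = requirementRef;
        this.precondition = precondition;
        this.steps = steps;
        this.input = input;
        this.expectedResult = expectedResult;
    }

    // Clinically, a test case passes when the actual result matches the expected one.
    boolean passed() {
        return expectedResult.equals(actualResult);
    }
}

public class TestCaseDemo {
    public static void main(String[] args) {
        // Hypothetical example values for this application's upload feature.
        TestCase tc = new TestCase("TC-01", "SRS 9.4", "admin is logged in",
                List.of("open upload page", "choose a file", "click Upload"),
                "lecture1.ppt", "upload succeeds");
        tc.actualResult = "upload succeeds"; // recorded after executing the steps
        System.out.println(tc.id + " passed: " + tc.passed());
    }
}
```

In practice such records would be rows in the spreadsheet or database repository the text mentions, with past results kept in a separate table.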

Test script
The test script is the combination of a test case, a test procedure, and test data. Initially the term was derived from the work products created by automated regression test tools. Today, test scripts can be manual, automated, or a combination of both.

Test data
The most common tests, whether manual or automated, are retesting and regression testing. In most cases, multiple sets of values or data are used to test the same functionality of a particular feature. All the test values and changeable environmental components are collected in separate files and stored as test data. It is also useful to provide this data to the client along with the product or project.
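A minimal sketch of data-driven testing, where the test values are kept separate from the test logic (here in an in-memory table; in practice they would live in the separate files the text describes), might look like this. The roll-number validator and its format are invented for the example:

```java
public class DataDrivenSketch {
    // Hypothetical function under test: validates a student roll number
    // of the invented form "two digits, one capital letter, two digits".
    static boolean isValidRollNumber(String roll) {
        return roll != null && roll.matches("[0-9]{2}[A-Z][0-9]{2}");
    }

    public static void main(String[] args) {
        // Test data kept separate from the test logic: each row is
        // an input value and the expected result.
        String[][] data = {
            {"09A15", "true"},
            {"9A15",  "false"},
            {"09a15", "false"},
        };
        for (String[] row : data) {
            boolean expected = Boolean.parseBoolean(row[1]);
            if (isValidRollNumber(row[0]) != expected)
                throw new AssertionError("failed for input " + row[0]);
        }
        System.out.println("all data rows passed");
    }
}
```

Adding a new input set then only requires a new data row, not new test code, which is what makes separately stored test data useful for retesting and regression testing.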

Test suite
The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases. It definitely contains a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.
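A test suite can be sketched as a named collection of checks run together, with the system configuration recorded alongside the results. The individual test cases below are hypothetical stand-ins for real checks on this application:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BooleanSupplier;

public class TestSuiteSketch {
    // A suite is a named collection of test cases executed together;
    // each entry maps a test-case name to a check returning pass/fail.
    static Map<String, Boolean> runSuite() {
        int quizScore = 80;      // sample value a real test would fetch
        String role = "admin";   // sample login role

        Map<String, BooleanSupplier> cases = new LinkedHashMap<>();
        cases.put("login role is recognised",
                () -> role.equals("admin") || role.equals("professor"));
        cases.put("quiz score within 0..100",
                () -> quizScore >= 0 && quizScore <= 100);

        Map<String, Boolean> results = new LinkedHashMap<>();
        cases.forEach((name, check) -> results.put(name, check.getAsBoolean()));
        return results;
    }

    public static void main(String[] args) {
        // The suite report records the configuration used during testing.
        System.out.println("configuration: Flex client / Java middleware / SQL Server");
        runSuite().forEach((name, pass) ->
                System.out.println((pass ? "PASS " : "FAIL ") + name));
    }
}
```

Grouping cases this way keeps the prerequisite setup and the configuration note in one place, as the text describes.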

Test plan
A test specification is called a test plan. The test plans to be executed are made available to the developers in advance. This makes the developers more cautious when developing their code, and ensures that the developers' code is not confronted with any surprise test cases or test plans.

Test harness
The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness.

Software testing certification types


Certifications can be grouped into exam-based and education-based.

Exam-based certifications: these require passing an exam, which can also be prepared for by self-study, e.g. ISTQB or QAI.

Education-based certifications: these are instructor-led sessions, where each course has to be passed, e.g. IIST (International Institute for Software Testing).

Testing certifications:

- CATE, offered by the International Institute for Software Testing
- CBTS, offered by the Brazilian Certification of Software Testing (ALATS)
- Certified Software Tester (CSTE), offered by the Quality Assurance Institute (QAI)
- Certified Software Test Professional (CSTP), offered by the International Institute for Software Testing
- CSTP (TM) (Australian version), offered by K. J. Ross & Associates
- ISEB, offered by the Information Systems Examinations Board
- ISTQB Certified Tester, Foundation Level (CTFL), offered by the International Software Testing Qualifications Board
- ISTQB Certified Tester, Advanced Level (CTAL), offered by the International Software Testing Qualifications Board
- TMPF Next Foundation, offered by the Examination Institute for Information Science

Quality assurance certifications:

- CSQE, offered by the American Society for Quality
- CSQA, offered by the Quality Assurance Institute
- CQIA, offered by the American Society for Quality
- CMSQ, offered by the Quality Assurance Institute

Controversy

Some of the major software testing controversies include:

What constitutes responsible software testing? Members of the "context-driven" school of testing believe that there are no "best practices" of testing, but rather that testing is a set of skills that allow the tester to select or invent testing practices to suit each unique situation.

Agile vs. traditional: Should testers learn to work under conditions of uncertainty and constant change, or should they aim at process "maturity"? The agile testing movement has grown in popularity since 2006, mainly in commercial circles, whereas government and military software providers have been slower to embrace this methodology and mostly still hold to CMMI.

Exploratory vs. scripted: Should tests be designed at the same time as they are executed, or should they be designed beforehand?

Manual testing vs. automated: Some writers believe that test automation is so expensive relative to its value that it should be used sparingly. Others, such as advocates of agile development, recommend automating 100% of all tests. In particular, test-driven development holds that developers should write unit tests of the xUnit type before coding the functionality. The tests can then be considered a way to capture and implement the requirements.

Software design vs. software implementation: Should testing be carried out only at the end, or throughout the whole process?

Who watches the watchmen? The idea is that any form of observation is also an interaction, so the act of testing can itself affect that which is being tested.

12. Output Screens

Screen1: Home page of the application.



Screen2: When the “Get Login” link button is clicked, a login component is presented as in the screen below:


Screen3: Login as “admin”


Screen4: The admin home page is presented as shown in the screen below. This is where students are registered.


Screen5: The upload-presentation screen is presented as shown below. The admin uploads the presentations here.


Screen7: The “List of presentations” screen is shown below. All the uploaded presentations can be viewed here:


Screen8: Select the subject, and the list of presentations for the selected subject is presented as shown in the screen below:

Screen9: The add-quiz component screen; the quiz is uploaded here:


Screen10: Login as professor in the login component


Screen11: Professor home screen


Screen12: The list of presentations of the selected subject is presented as shown in the screen below


Screen13: The professor can view the registered students in the screen below


Screen14: When the Get Users List button is clicked, the enrolled students are displayed in the data grid as in the screen below


Screen15: The presentation screen is presented as shown below; the students’ doubts are displayed in the list on the right-hand side.


Screen16: The professor conducts the quiz as shown in this screen; the top three students’ information is displayed on the right-hand side.


13. Conclusions:

This software is developed using Flex as the front end, Java as the middleware, and SQL Server as the backend. The goals achieved by the software are:

• Instant access
• Improved productivity
• Optimum utilization of resources
• Efficient management of records
• Simplification of operations
• Less processing time to obtain the required information
• User friendly
• Portable and flexible for further enhancements


Great care has been taken by the development team to meet all the requirements of the client. Each module has undergone stringent testing procedures, and the integration testing activity has been performed by the team leaders of each module.

After completion, execution, and successful demonstration, we confirm that the product meets the client's requirements and is ready for launch. Finally, we conclude that the project upholds its sole motive of betterment of society and works for a social cause.

14. Future Enhancements:

It is not possible to develop a system that meets all the requirements of the user; user requirements keep changing as the system is used. Some of the future enhancements that can be made to this system are:

1) The system can be enhanced to improve the look and feel of the application.
2) The application can be scaled further.

15. References

List of Reference Documents

The Unified Modeling Language User Guide, by Grady Booch

Software Engineering: A Practitioner's Approach, by Roger S. Pressman

Software Project Management, by Walker Royce

The applicable IEEE standards as published in the 'IEEE Standards Collection', for the preparation of the SRS.

Backup policy and naming conventions as per Teleparadigm conventions.