ieee projects 2012 2013 - software engineering


Uploaded by k-sundaresh-ka on 12-Nov-2014


Page 1: Ieee projects 2012 2013 - Software Engineering

Elysium Technologies Private Limited Approved by ISO 9001:2008 and AICTE for SKP Training Singapore | Madurai | Trichy | Coimbatore | Cochin | Kollam | Chennai http://www.elysiumtechnologies.com, [email protected]

IEEE Final Year Projects 2012 | Student Projects | Software Engineering Projects

IEEE FINAL YEAR PROJECTS 2012 – 2013

Software Engineering

Corporate Office: Madurai

227-230, Church road, Anna nagar, Madurai – 625 020.

0452 – 4390702, 4392702, +9199447933980

Email: [email protected], [email protected]

Website: www.elysiumtechnologies.com

Branch Office: Trichy

15, III Floor, SI Towers, Melapudur main road, Trichy – 620 001.

0431 – 4002234, +919790464324.

Email: [email protected], [email protected].

Website: www.elysiumtechnologies.com

Branch Office: Coimbatore

577/4, DB Road, RS Puram, Opp to KFC, Coimbatore – 641 002.

+919677751577

Website: www.elysiumtechnologies.com, Email: [email protected]

Branch Office: Kollam

Surya Complex, Vendor junction, Kollam – 691 010, Kerala.

0474 – 2723622, +919446505482.

Email: [email protected].

Website: www.elysiumtechnologies.com

Branch Office: Cochin

4th Floor, Anjali Complex, near south over bridge, Valanjambalam, Cochin – 682 016, Kerala.

0484 – 6006002, +917736004002.

Email: [email protected], Website: www.elysiumtechnologies.com

Page 2: Ieee projects 2012 2013 - Software Engineering


SOFTWARE ENGINEERING 2012 - 2013

EGC 9201 | EGC 9202 | EGC 9203

Even though data warehousing (DW) requires huge investments, the data warehouse market is experiencing incredible growth. However, a large number of DW initiatives end up as failures. In this paper, we argue that the maturity of a data warehousing process (DWP) could significantly mitigate such large-scale failures and ensure the delivery of consistent, high quality, “single-version of truth” data in a timely manner. However, unlike software development, the assessment of DWP maturity has not yet been tackled in a systematic way. In light of the critical importance of data as a corporate resource, we believe that the need for a maturity model for DWP could not be greater. In this paper, we describe the design and development of a five-level DWP maturity model (DWP-M) over a period of three years. A unique aspect of this model is that it covers processes in both data warehouse development and operations. Over 20 key DW executives from 13 different corporations were involved in the model development process. The final model was evaluated by a panel of experts; the results strongly validate the functionality, productivity, and usability of the model. We present the initial and final DWP-M model versions, along with illustrations of several key process areas at different levels of maturity.

In the presence of an internal state, a sequence of function calls is often required to test software: to cover a particular branch of the code, a sequence of previous function calls might be required to put the internal state in the appropriate configuration. Internal states are not only present in object-oriented software, but also in procedural software (e.g., static variables in C programs). In the literature, there are many techniques to test this type of software. However, to the best of our knowledge, the properties related to the choice of the length of these sequences have received little attention in the literature. In this paper, we analyze the role that length plays in software testing, in particular branch coverage. We show that, on “difficult” software testing benchmarks, longer test sequences make their testing trivial. Hence, we argue that the choice of the length of the test sequences is very important in software testing. Theoretical analyses and empirical studies on widely used benchmarks and on industrial software are carried out to support our claims.
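The role of sequence length can be illustrated with a minimal sketch (not from the paper): a module whose internal state, like a static variable in a C program, makes one branch reachable only after several prior calls.

```python
# Illustrative sketch: a branch guarded by internal state that only a
# sufficiently long call sequence can reach.
class Counter:
    def __init__(self):
        self._state = 0          # internal state (like a static variable in C)
        self.branch_covered = False

    def step(self):
        self._state += 1
        if self._state >= 3:     # target branch: needs >= 3 prior calls
            self.branch_covered = True

def covered_by(length):
    c = Counter()
    for _ in range(length):
        c.step()
    return c.branch_covered

# No test sequence of length 2 can cover the branch; length 3 can.
assert covered_by(2) is False
assert covered_by(3) is True
```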

Project titles on this page:
- A Model of Data Warehousing Process Maturity
- A Theoretical and Empirical Analysis of the Role of Test Sequence Length in Software Testing for Structural Coverage
- An Autonomous Engine for Services Configuration and Deployment

Page 3: Ieee projects 2012 2013 - Software Engineering


EGC 9204 | EGC 9205

Mobile devices are getting more pervasive, and it is becoming increasingly necessary to integrate web services into applications that run on these devices. We introduce a novel approach for dynamically invoking web service methods from mobile devices with minimal user intervention that only involves entering a search phrase and values for the method parameters. The architecture overcomes technical challenges that involve consuming discovered services dynamically by introducing a man-in-the-middle (MIM) server that provides a web service whose responsibility is to discover needed services and build the client-side proxies at runtime. The architecture moves to the MIM server energy-consuming tasks that would otherwise run on the mobile device. Such tasks involve communication with servers over the Internet, XML-parsing of files, and on-the-fly compilation of source code. We perform extensive evaluations of the system performance to measure scalability as it relates to the capacity of the MIM server in handling mobile client requests, and device battery power savings resulting from delegating the service discovery tasks to the server.

It is inevitable that some concerns crosscut a sizeable application, resulting in code scattering and tangling. This issue is particularly severe for security-related concerns: it is difficult to be confident about the security of an application when the implementation of its security-related concerns is scattered all over the code and tangled with other concerns, making global reasoning about security precarious. In this study, we consider the case of access control in Java, which turns out to be a crosscutting concern with a nonmodular implementation based on runtime stack inspection. We describe the process of modularizing access control in Java by means of Aspect-Oriented Programming (AOP). We first show a solution based on AspectJ, the most popular aspect-oriented extension to Java, that must rely on a separate automata infrastructure. We then put forward a novel solution via dynamic deployment of aspects and scoping strategies. Both solutions, apart from providing a modular specification of access control, make it possible to easily express other useful policies such as the Chinese wall policy. However, relying on expressive scope control results in a compact implementation, which, at the same time, permits the straightforward expression of even more interesting policies. These new modular implementations allowed by AOP alleviate maintenance and evolution issues produced by the crosscutting nature of access control.
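The dynamic-deployment idea can be approximated outside AspectJ. The sketch below is illustrative only (it uses Python's contextvars rather than aspects): the access-control check lives in one module, and granted permissions are scoped dynamically, much as a dynamically deployed aspect would scope its advice.

```python
# Hedged sketch: access control as a modular, dynamically scoped concern.
import contextlib
import contextvars

_permissions = contextvars.ContextVar("permissions", default=frozenset())

@contextlib.contextmanager
def granted(*perms):
    # Dynamically "deploy" extra permissions for the enclosed scope only.
    token = _permissions.set(_permissions.get() | frozenset(perms))
    try:
        yield
    finally:
        _permissions.reset(token)

def check(perm):
    # The single, centralized access-control check.
    if perm not in _permissions.get():
        raise PermissionError(perm)

def read_file():
    check("file.read")           # crosscutting check kept in one module
    return "contents"

with granted("file.read"):
    assert read_file() == "contents"

denied = False
try:
    read_file()                  # outside the scope: permission revoked
except PermissionError:
    denied = True
assert denied
```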

Project titles on this page:
- Aspect-Oriented Refactoring of Legacy Applications: An Evaluation
- Aspectizing Java Access Control

Page 4: Ieee projects 2012 2013 - Software Engineering

EGC 9206 | EGC 9207

The primary claimed benefits of aspect-oriented programming (AOP) are that it improves the understandability and maintainability of software applications by modularizing crosscutting concerns. Before there is widespread adoption of AOP, developers need further evidence of the actual benefits as well as costs. Applying AOP techniques to refactor legacy applications is one way to evaluate costs and benefits. We replace crosscutting concerns with aspects in three industrial applications to examine the effects on qualities that affect the maintainability of the applications. We study several revisions of each application, identifying crosscutting concerns in the initial revision and also crosscutting concerns that are added in later revisions. Aspect-oriented refactoring reduced code size and improved both change locality and concern diffusion. Costs include the effort required for application refactoring and aspect creation, as well as a decrease in performance.

With the increase of energy consumption associated with IT infrastructures, energy management is becoming a priority in the design and operation of complex service-based systems. At the same time, service providers need to comply with Service Level Agreement (SLA) contracts which determine the revenues and penalties on the basis of the achieved performance level. This paper focuses on the resource allocation problem in multitier virtualized systems with the goal of maximizing the SLA revenues while minimizing energy costs. The main novelty of our approach is to address, in a unifying framework, service center resource management by exploiting as actuation mechanisms the allocation of virtual machines (VMs) to servers, load balancing, capacity allocation, server power state tuning, and dynamic voltage/frequency scaling. Resource management is modeled as an NP-hard mixed integer nonlinear programming problem, and solved by a local search procedure. To validate its effectiveness, the proposed model is compared to top-performing state-of-the-art techniques. The evaluation is based on simulation and on real experiments performed in a prototype environment. Synthetic as well as realistic workloads and a number of different scenarios of interest are considered. Results show that we are able to yield significant revenue gains for the provider when compared to alternative methods (up to 45 percent). Moreover, solutions are robust to service time and workload variations.
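The solution style, a local search over VM-to-server assignments, can be sketched as follows. The cost model is invented for illustration: a fixed energy cost per active server plus an overload penalty standing in for SLA violations.

```python
# Hedged sketch of local search for VM allocation (toy cost model).
def cost(assign, demand, cap, energy_per_server, penalty):
    used = {}
    for vm, srv in assign.items():
        used[srv] = used.get(srv, 0) + demand[vm]
    overload = sum(max(0, load - cap) for load in used.values())
    return energy_per_server * len(used) + penalty * overload

def local_search(assign, servers, demand, cap, e, p):
    best = dict(assign)
    improved = True
    while improved:
        improved = False
        for vm in list(best):
            for srv in servers:
                trial = dict(best, **{vm: srv})   # move one VM
                if cost(trial, demand, cap, e, p) < cost(best, demand, cap, e, p):
                    best, improved = trial, True
    return best

demand = {"vm1": 2, "vm2": 2, "vm3": 3}
start = {"vm1": "s1", "vm2": "s2", "vm3": "s3"}   # three servers powered on
sol = local_search(start, ["s1", "s2", "s3"], demand, cap=5, e=10, p=100)
# Consolidating onto two servers halves... rather, cuts energy cost to 20.
assert cost(sol, demand, 5, 10, 100) <= 20
```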

Pre/postcondition-based specifications are commonplace in a variety of software engineering activities that range from requirements through to design and implementation. The fragmented nature of these specifications can hinder validation as it is difficult to understand if the specifications for the various operations fit together well. In this paper, we propose a novel technique for automatically constructing abstractions in the form of behavior models from pre/postcondition-based specifications. Abstraction techniques have been used successfully for addressing the complexity of formal artifacts in software engineering; however, the focus has been, up to now, on abstractions for verification. Our aim is abstraction for validation and hence, different and novel trade-offs between precision and tractability are required. More specifically, in this paper, we define and study enabledness-preserving abstractions, that is, models in which concrete states are grouped according to the set of operations that they enable. The abstraction results in a finite model that is intuitive to validate and which facilitates tracing back to the specification for debugging. The paper also reports on the application of the approach to two industrial strength protocol specifications in which concerns were identified.
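The core grouping step of an enabledness-preserving abstraction can be sketched directly: concrete states are merged when they enable the same set of operations. The toy state space and preconditions below are invented for illustration.

```python
# Minimal sketch: group concrete states by the set of operations they enable.
# Concrete state space: a bounded counter 0..3; preconditions are illustrative.
ops = {
    "inc": lambda s: s < 3,
    "dec": lambda s: s > 0,
}
states = range(4)

abstraction = {}
for s in states:
    enabled = frozenset(op for op, pre in ops.items() if pre(s))
    abstraction.setdefault(enabled, set()).add(s)

# States 1 and 2 enable the same operations, so they collapse into one
# abstract state; the result is a small model a human can validate.
assert abstraction[frozenset({"inc", "dec"})] == {1, 2}
assert abstraction[frozenset({"inc"})] == {0}
assert abstraction[frozenset({"dec"})] == {3}
```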

Project titles on this page:
- Collaborative Testing of Web Services
- Automated Abstractions for Contract Validation

Page 5: Ieee projects 2012 2013 - Software Engineering


EGC 9208 | EGC 9209 | EGC 9210

Dynamic loading of software components (e.g., libraries or modules) is a widely used mechanism for improved system modularity and flexibility. Correct component resolution is critical for reliable and secure software execution. However, programming mistakes may lead to unintended or even malicious components being resolved and loaded. In particular, dynamic loading can be hijacked by placing an arbitrary file with the specified name in a directory searched before resolving the target component. Although this issue has been known for quite some time, it was not considered serious because exploiting it requires access to the local file system on the vulnerable host. Recently, such vulnerabilities have started to receive considerable attention as their remote exploitation became realistic. It is now important to detect and fix these vulnerabilities. In this paper, we present the first automated technique to detect vulnerable and unsafe dynamic component loadings. Our analysis has two phases: 1) apply dynamic binary instrumentation to collect runtime information on component loading (online phase), and 2) analyze the collected information to detect vulnerable component loadings (offline phase). For evaluation, we implemented our technique to detect vulnerable and unsafe component loadings in popular software on Microsoft Windows and Linux. Our evaluation results show that unsafe component loading is prevalent in software on both OS platforms, and that it is more severe on Microsoft Windows. In particular, our tool detected more than 4,000 unsafe component loadings in our evaluation, and some can lead to remote code execution on Microsoft Windows.
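The offline-phase idea can be sketched as a search-path check. This is a simplification of the paper's analysis, and the component name and paths below are hypothetical: any directory searched before the one that actually supplies the component is a potential planting point for a hijack.

```python
# Hedged sketch: flag directories an attacker could plant a file in.
def hijack_candidates(search_path, component, resolved_dir, exists):
    """exists(d, name) -> bool abstracts the file-system check."""
    candidates = []
    for d in search_path:
        if d == resolved_dir:
            break                        # resolution stops here
        if not exists(d, component):     # empty slot earlier in the path:
            candidates.append(d)         # an attacker could place a file here
    return candidates

# Hypothetical scenario: "ssl.dll" really lives in /system, but /app and
# /tmp are searched first and contain no such file.
fs = {("/system", "ssl.dll"): True}
path = ["/app", "/tmp", "/system"]
found = hijack_candidates(path, "ssl.dll", "/system",
                          lambda d, n: fs.get((d, n), False))
assert found == ["/app", "/tmp"]
```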

Dynamic specification mining observes program executions to infer models of normal program behavior. What makes us believe that we have seen sufficiently many executions? The TAUTOKO typestate miner (“tautoko” is the Maori word for “enhance, enrich”) generates test cases that cover previously unobserved behavior, systematically extending the execution space and enriching the specification. To our knowledge, this is the first combination of systematic test case generation and typestate mining, a combination with clear benefits: on a sample of 800 defects seeded into six Java subjects, a static typestate verifier fed with enriched models would report significantly more true positives and significantly fewer false positives than with the initial models.
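A minimal sketch of the typestate-mining half (not TAUTOKO itself): observed call pairs form the model, and transitions never observed become candidate violations for a verifier to report.

```python
# Sketch: mine a typestate model (an automaton over method calls) from traces.
def mine(traces):
    model = set()
    for trace in traces:
        prev = "start"
        for call in trace:
            model.add((prev, call))      # observed transition
            prev = call
    return model

traces = [["open", "read", "close"],
          ["open", "write", "close"]]
model = mine(traces)

assert ("open", "read") in model
# Never observed, so a verifier would flag e.g. reusing a closed resource;
# generating tests that exercise such transitions enriches the model.
assert ("close", "open") not in model
```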

Project titles on this page:
- Automatic Detection of Unsafe Dynamic Component Loadings
- Comparing Semi-Automated Clustering Methods for Persona Development
- Automatically Generating Test Cases for Specification Mining

Page 6: Ieee projects 2012 2013 - Software Engineering

EGC 9211 | EGC 9212 | EGC 9213

Current and future information systems require a better understanding of the interactions between users and systems in order to improve system use and, ultimately, success. The use of personas as design tools is becoming more widespread as researchers and practitioners discover its benefits. This paper presents an empirical study comparing the performance of existing qualitative and quantitative clustering techniques for the task of identifying personas and grouping system users into those personas. A method based on Factor (Principal Components) Analysis performs better than two other methods which use Latent Semantic Analysis and Cluster Analysis, as measured by similarity to expert manually defined clusters.

This study is a quasi-experiment comparing the software defect rates and implementation costs of two methods of software defect reduction: code inspection and test-driven development. We divided participants, consisting of junior and senior computer science students at a large Southwestern university, into four groups using a two-by-two, between-subjects, factorial design and asked them to complete the same programming assignment using either test-driven development, code inspection, both, or neither. We compared resulting defect counts and implementation costs across groups. We found that code inspection is more effective than test-driven development at reducing defects, but that code inspection is also more expensive. We also found that test-driven development was no more effective at reducing defects than traditional programming methods.

A predictive model is required to be accurate and comprehensible in order to inspire confidence in a business setting. Both aspects have been assessed in a software effort estimation setting by previous studies. However, no univocal conclusion as to which technique is the most suited has been reached. This study addresses this issue by reporting on the results of a large scale benchmarking study. Different types of techniques are under consideration, including techniques inducing tree/rule-based models like M5 and CART, linear models such as various types of linear regression, nonlinear models (MARS, multilayered perceptron neural networks, radial basis function networks, and least squares support vector machines), and estimation techniques that do not explicitly induce a model (e.g., a case-based reasoning approach). Furthermore, the aspect of feature subset selection by using a generic backward input selection wrapper is investigated. The results are subjected to rigorous statistical testing and indicate that ordinary least squares regression in combination with a logarithmic transformation performs best. Another key finding is that by selecting a subset of highly predictive attributes such as project size, development, and environment-related attributes, typically a significant increase in estimation accuracy can be obtained.
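The reported best performer, ordinary least squares on log-transformed data, amounts to fitting a power law effort = e^a * size^b. A minimal sketch with invented numbers:

```python
# Sketch: OLS on log(size) vs log(effort); the data points are made up.
import math

def fit_loglog(sizes, efforts):
    xs = [math.log(s) for s in sizes]
    ys = [math.log(e) for e in efforts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    # back-transform: effort = e^a * size^b
    return lambda size: math.exp(a) * size ** b

predict = fit_loglog([10, 100, 1000], [5, 50, 500])
# The toy data follow effort = 0.5 * size exactly, so OLS recovers it.
assert abs(predict(100) - 50) < 1e-6
```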

Project titles on this page:
- Comparing the Defect Reduction Benefits of Code Inspection and Test-Driven Development
- Runtime Enforcement of Web Service Message Contracts with Data
- Data Mining Techniques for Software Effort Estimation: A Comparative Study

Page 7: Ieee projects 2012 2013 - Software Engineering

EGC 9214 | EGC 9215 | EGC 9216

An increasing number of popular SOAP web services exhibit a stateful behavior, where a successful interaction is determined as much by the correct format of messages as by the sequence in which they are exchanged with a client. The set of such constraints forms a “message contract” that needs to be enforced on both sides of the transaction; it often includes constraints referring to actual data elements inside messages. We present an algorithm for the runtime monitoring of such message contracts with data parameterization. Their properties are expressed in LTL-FO+, an extension of Linear Temporal Logic that allows first-order quantification over the data inside a trace of XML messages. An implementation of this algorithm can transparently enforce an LTL-FO+ specification using a small and invisible Java applet. Violations of the specification are reported on-the-fly and prevent erroneous or out-of-sequence XML messages from being exchanged. Experiments on commercial web services from Amazon.com and Google indicate that LTL-FO+ is an appropriate language for expressing their message contracts, and that its processing overhead on sample traces is acceptable both for client-side and server-side enforcement architectures.
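A runtime message-contract monitor of this general shape can be sketched as follows. The two properties checked, a sequencing constraint and a distinct-id data constraint, are hand-written stand-ins for LTL-FO+ formulas, not the paper's algorithm.

```python
# Hedged sketch: monitor a trace of messages against a toy contract.
def monitor(trace):
    logged_in = False
    seen_ids = set()
    for msg in trace:
        if msg["op"] == "order":
            if not logged_in:                 # sequencing constraint
                return "out-of-sequence: order before login"
            if msg["id"] in seen_ids:         # data constraint: order ids
                return "duplicate order id"   # must be pairwise distinct
            seen_ids.add(msg["id"])
        elif msg["op"] == "login":
            logged_in = True
    return "ok"

assert monitor([{"op": "login"}, {"op": "order", "id": 1}]) == "ok"
assert monitor([{"op": "order", "id": 1}]).startswith("out-of-sequence")
```

Like the enforcement described above, the monitor rejects a violating message as soon as it appears in the trace rather than after the fact.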

We present a new technique for predicting the resource demand requirements of services implemented by multitier systems. Accurate demand estimates are essential to ensure the efficient provisioning of services in an increasingly service-oriented world. The demand estimation technique proposed in this paper has several advantages compared with regression-based demand estimation techniques, which many practitioners employ today. In contrast to regression, it does not suffer from the problem of multicollinearity, it provides more reliable aggregate resource demand and confidence interval predictions, and it offers a measurement-based validation test. The technique can be used to support system sizing and capacity planning exercises, costing and pricing exercises, and to predict the impact of changes to a service upon different service customers.
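For background, the classic utilization law that demand estimation builds on (this is standard queueing theory, not the paper's DEC technique) relates measured utilization U and throughput X to per-request demand D via U = X · D:

```python
# Utilization law: U = X * D, so demand D = U / X.
def demand(utilization, throughput):
    return utilization / throughput

# 60% CPU utilization at 30 requests/sec -> 0.02 sec of CPU per request.
assert abs(demand(0.60, 30.0) - 0.02) < 1e-12
```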

In this paper, we define and validate a new multidimensional measure of Open Source Software (OSS) project survivability, called Project Viability. Project viability has three dimensions: vigor, resilience, and organization. We define each of these dimensions and formulate an index called the Viability Index (VI) to combine all three dimensions. Archival data of projects hosted at SourceForge.net are used for the empirical validation of the measure. An Analysis Sample (n=136) is used to assign weights to each dimension of project viability and to determine a suitable cut-off point for VI. Cross-validation of the measure is performed on a hold-out Validation Sample (n=96). We demonstrate that project viability is a robust and valid measure of OSS project survivability that can be used to predict the failure or survival of an OSS project accurately. It is a tangible measure that can be used by organizations to compare various OSS projects and to make informed decisions regarding investment in the OSS domain.
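The shape of such an index can be sketched as a weighted combination with a cutoff. The weights and cutoff below are invented for the sketch; the paper fits its values empirically on the Analysis Sample.

```python
# Hypothetical illustration of a Viability-Index-style measure.
def viability_index(vigor, resilience, organization,
                    weights=(0.4, 0.3, 0.3), cutoff=0.5):
    # weights and cutoff are invented, not the paper's fitted values
    vi = sum(w * d for w, d in zip(weights, (vigor, resilience, organization)))
    return vi, vi >= cutoff      # (index value, predicted-to-survive?)

vi, viable = viability_index(0.8, 0.6, 0.7)
assert viable and abs(vi - 0.71) < 1e-9
```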

Project titles on this page:
- DEC: Service Demand Estimation with Confidence
- Evaluation and Measurement of Software Process Improvement—A Systematic Literature Review
- Defining and Evaluating a Measure of Open Source Project Survivability

Page 8: Ieee projects 2012 2013 - Software Engineering

EGC 9217 | EGC 9218

BACKGROUND: Software Process Improvement (SPI) is a systematic approach to increase the efficiency and effectiveness of a software development organization and to enhance software products. OBJECTIVE: This paper aims to identify and characterize evaluation strategies and measurements used to assess the impact of different SPI initiatives. METHOD: The systematic literature review includes 148 papers published between 1991 and 2008. The selected papers were classified according to SPI initiative, applied evaluation strategies, and measurement perspectives. Potential confounding factors interfering with the evaluation of the improvement effort were assessed. RESULTS: Seven distinct evaluation strategies were identified, wherein the most common one, “Pre-Post Comparison,” was applied in 49 percent of the inspected papers. Quality was the most measured attribute (62 percent), followed by Cost (41 percent) and Schedule (18 percent). Looking at measurement perspectives, “Project” represents the majority with 66 percent. CONCLUSION: The evaluation validity of SPI initiatives is challenged by the scarce consideration of potential confounding factors, particularly given that “Pre-Post Comparison” was identified as the most common evaluation strategy, and by the inaccurate descriptions of the evaluation context. Measurements to assess the short- and mid-term impact of SPI initiatives prevail, whereas long-term measurements in terms of customer satisfaction and return on investment tend to be less used.

Modern IDEs such as Eclipse offer static views of the source code, but such views ignore information about the runtime behavior of software systems. Since typical object-oriented systems make heavy use of polymorphism and dynamic binding, static views will miss key information about the runtime architecture. In this paper, we present an approach to gather and integrate dynamic information in the Eclipse IDE with the goal of better supporting typical software maintenance activities. By means of a controlled experiment with 30 professional developers, we show that for typical software maintenance tasks, integrating dynamic information into the Eclipse IDE yields a significant 17.5 percent decrease in time spent while significantly increasing the correctness of the solutions by 33.5 percent. We also provide a comprehensive performance evaluation of our approach.

Project titles on this page:
- Exploiting the Essential Assumptions of Analogy-Based Effort Estimation
- Exploiting Dynamic Information in IDEs Improves Speed and Correctness of Software Maintenance Tasks

Page 9: Ieee projects 2012 2013 - Software Engineering

EGC 9219 | EGC 9220 | EGC 9221

There are too many design options for software effort estimators. How can we best explore them all? Aim: We seek general principles of effort estimation that can guide the design of effort estimators. Method: We identified the essential assumption of analogy-based effort estimation, i.e., that the immediate neighbors of a project offer stable conclusions about that project. We test that assumption by generating a binary tree of clusters of effort data and comparing the variance of supertrees versus smaller subtrees. Results: For 10 data sets (from Coc81, Nasa93, Desharnais, Albrecht, ISBSG, and data from Turkish companies), we found: 1) the estimation variance of cluster subtrees is usually larger than that of cluster supertrees; 2) if analogy is restricted to the cluster trees with lower variance, then effort estimates have a significantly lower error (measured using MRE, AR, and Pred(25) with a Wilcoxon test, 95 percent confidence, compared to nearest neighbor methods that use neighborhoods of a fixed size). Conclusion: Estimation by analogy can be significantly improved by a dynamic selection of nearest neighbors, using only the project data from regions with small variance.
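The underlying analogy step can be sketched as nearest-neighbor estimation over a project feature space. The data are toy values, and the paper's dynamic variance-based neighbor selection is omitted from the sketch.

```python
# Sketch: estimate effort from the k nearest past projects (toy data).
def analogy_estimate(train, query, k=2):
    # train: list of ((feature, ...), effort) pairs; Euclidean distance
    by_dist = sorted(train, key=lambda p: sum((a - b) ** 2
                                              for a, b in zip(p[0], query)))
    nearest = by_dist[:k]
    return sum(effort for _, effort in nearest) / k   # mean of neighbors

train = [((1.0, 2.0), 10.0),
         ((1.1, 2.1), 12.0),
         ((9.0, 9.0), 90.0)]
# The two nearest analogues (efforts 10 and 12) drive the estimate.
assert analogy_estimate(train, (1.0, 2.0)) == 11.0
```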

In recent years, there has been significant interest in fault-localization techniques that are based on statistical analysis of program constructs executed by passing and failing executions. This paper shows how the Tarantula, Ochiai, and Jaccard fault-localization algorithms can be enhanced to localize faults effectively in web applications written in PHP by using an extended domain for conditional and function-call statements and by using a source mapping. We also propose several novel test-generation strategies that are geared toward producing test suites that have maximal fault-localization effectiveness. We implemented various fault-localization techniques and test-generation strategies in Apollo, and evaluated them on several open-source PHP applications. Our results indicate that a variant of the Ochiai algorithm that includes all our enhancements localizes 87.8 percent of all faults to within 1 percent of all executed statements, compared to only 37.4 percent for the unenhanced Ochiai algorithm. We also found that all the test-generation strategies that we considered are capable of generating test suites with maximal fault-localization effectiveness when given an infinite time budget for test generation. However, on average, a directed strategy based on path-constraint similarity achieves this maximal effectiveness after generating only 6.5 tests, compared to 46.8 tests for an undirected test-generation strategy.
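The unenhanced Ochiai metric that the paper starts from is standard: a statement's suspiciousness is failed(s) / sqrt(totalfailed · (failed(s) + passed(s))), so statements covered mostly by failing tests rank highest.

```python
# The standard Ochiai suspiciousness formula.
import math

def ochiai(failed_s, passed_s, total_failed):
    # failed_s / passed_s: failing / passing tests that execute statement s
    denom = math.sqrt(total_failed * (failed_s + passed_s))
    return failed_s / denom if denom else 0.0

# A statement covered by 2 of 2 failing tests and 1 passing test.
assert abs(ochiai(2, 1, 2) - 2 / math.sqrt(6)) < 1e-12
# A statement never executed by a failing test is not suspicious.
assert ochiai(0, 5, 2) == 0.0
```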

Worldwide, firms have made great efforts to implement Enterprise Resource Planning (ERP) systems. Despite these efforts, ERP adoption success is not guaranteed. Successful adoption of an ERP system also depends on proper system maintenance. For this reason, companies should follow a maintenance strategy that drives the ERP system toward success. However, in general, ERP maintenance managers do not know what conditions they should target to successfully maintain their ERP systems. Furthermore, numerous risks threaten these projects, but they are normally dealt with intuitively. To date, there has been limited literature published regarding ERP maintenance risks or ERP maintenance success. To address this need, we have built a dynamic simulation tool that allows ERP managers to foresee the impact of risks on maintenance goals. This research would help professionals manage their ERP maintenance projects. Moreover, it covers a significant gap in the literature.
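A fuzzy cognitive map update step, the machinery behind such simulation, can be sketched as a squashed weighted sum over concept activations. The concepts and weights below are invented for illustration, not the paper's augmented maps.

```python
# Toy fuzzy cognitive map step: A_i(t+1) = sigmoid(sum_j w_ji * A_j(t)).
import math

def fcm_step(state, weights):
    # weights[i] holds the incoming edge weights for concept i
    return [1 / (1 + math.exp(-sum(w * s for w, s in zip(col, state))))
            for col in weights]

# With no causal influence (all-zero weights), activations settle at 0.5.
assert fcm_step([1.0, 0.5], [[0.0, 0.0], [0.0, 0.0]]) == [0.5, 0.5]
```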

Forecasting Risk Impact on ERP Maintenance with Augmented Fuzzy Cognitive Maps

Fault Localization for Dynamic Web Applications

EGC 9222  EGC 9221  EGC 9223

GenProg: A Generic Method for Automatic Software Repair

This paper describes GenProg, an automated method for repairing defects in off-the-shelf, legacy programs without formal specifications, program annotations, or special coding practices. GenProg uses an extended form of genetic programming to evolve a program variant that retains required functionality but is not susceptible to a given defect, using existing test suites to encode both the defect and the required functionality. Structural differencing algorithms and delta debugging then reduce the difference between this variant and the original program to a minimal repair. We describe the algorithm and report experimental results of its success on 16 programs totaling 1.25M lines of C code and 120K lines of module code, spanning eight classes of defects, in 357 seconds on average. We analyze the generated repairs qualitatively and quantitatively to demonstrate that the process efficiently produces evolved programs that repair the defect, are not fragile input memorizations, and do not lead to serious degradation in functionality.
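The repair loop described above can be sketched as a search over program variants scored by a test suite. This is a highly simplified illustration, not GenProg's actual implementation: the toy "program" is a list of statements, the mutation operators and test suite are invented, and real repair works on C ASTs.

```python
import random

# Toy "program": a list of statements; the seeded bug is the extra "x = x - 1".
BUGGY = ["x = a + b", "x = x - 1"]

def run(program, a, b):
    env = {"a": a, "b": b}
    for stmt in program:
        exec(stmt, env)              # execute each statement in a shared environment
    return env["x"]

# The test suite encodes both the defect and the required functionality.
TESTS = [((2, 3), 5), ((0, 0), 0), ((-1, 4), 3)]

def fitness(program):
    score = 0
    for (a, b), expected in TESTS:
        try:
            if run(program, a, b) == expected:
                score += 1
        except Exception:
            pass                     # crashing variants simply score lower
    return score

def repair(program, generations=200, seed=0):
    rng = random.Random(seed)
    best = program
    for _ in range(generations):
        variant = list(best)
        if len(variant) > 1 and rng.random() < 0.5:
            variant.pop(rng.randrange(len(variant)))   # deletion mutation
        else:
            rng.shuffle(variant)                       # reordering mutation
        if fitness(variant) > fitness(best):
            best = variant
        if fitness(best) == len(TESTS):
            break
    return best

repaired = repair(BUGGY)
print(repaired, fitness(repaired))   # the repair drops the faulty statement
```

In the full technique, the variant that passes all tests is then minimized with structural differencing and delta debugging so only the necessary change remains.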

Search-Based Test Data Generation reformulates testing goals as fitness functions so that test input generation can be automated by a chosen search-based optimization algorithm. The optimization algorithm searches the space of potential inputs, seeking those that are "fit for purpose," guided by the fitness function. The search space of potential inputs can be very large, even for very small systems under test, and its size is a key factor affecting the performance of any search-based approach. However, despite the large volume of work on Search-Based Software Testing, the literature contains little on the performance impact of search space reduction. This paper proposes a static dependence analysis, derived from program slicing, that can be used to support search space reduction. The paper presents both a theoretical and an empirical analysis of the application of this approach to open source and industrial production code. The results provide evidence to support the claim that input domain reduction has a significant effect on the performance of local, global, and hybrid search, while a purely random search is unaffected.
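The core idea of fitness-guided input generation can be sketched with a classic branch-distance function. The target branch, the hill climber, and the variable names are all illustrative; the point is that a guided local search reaches the target in a number of evaluations proportional to the distance, whereas random sampling over an unreduced input domain (including variables the branch never reads) wastes effort.

```python
# Branch-distance fitness for the hypothetical target branch "if x == 5000:".
# A second input y would be irrelevant to this branch, which is exactly what
# the dependence analysis described above would detect and remove.

def branch_distance(x):
    return abs(x - 5000)             # 0 iff the target branch is taken

def hill_climb(start, max_evals=100000):
    x, evals = start, 0
    while branch_distance(x) > 0 and evals < max_evals:
        evals += 1
        # move to whichever neighbour has the lower branch distance
        x = min((x - 1, x + 1), key=branch_distance)
    return x, evals

x, evals = hill_climb(0)
print(x, evals)                      # reaches the target after 5000 evaluations
```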

Ajax-based Web 2.0 applications rely on stateful asynchronous client/server communication and client-side runtime manipulation of the DOM tree. This not only makes them fundamentally different from traditional web applications, but also more error-prone and harder to test. We propose a method for testing Ajax applications automatically, based on a crawler that infers a state-flow graph for all (client-side) user interface states. We identify Ajax-specific faults that can occur in such states (related to, e.g., DOM validity, error messages, discoverability, and back-button compatibility) as well as DOM-tree invariants that can serve as oracles to detect such faults. Our approach, called Atusa, is implemented in a tool offering generic invariant-checking components, a plugin mechanism for application-specific state validators, and generation of a test suite covering the paths obtained during crawling. We describe three case studies, comprising six subjects, evaluating the types of invariants that can be obtained for Ajax applications as well as the fault-revealing capability, scalability, required manual effort, and level of automation of our testing approach.
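The invariant-as-oracle idea can be sketched as follows: every DOM state the crawler infers is checked against a set of generic invariants, in the spirit of Atusa's pluggable state validators. The invariants, state names, and DOM strings here are invented for illustration.

```python
# Generic invariants over crawled UI states; each is a predicate on a DOM string.

def no_error_messages(dom):
    return "404" not in dom and "error" not in dom.lower()

def has_single_root(dom):
    return dom.count("<html") <= 1

GENERIC_INVARIANTS = [no_error_messages, has_single_root]

def check_states(states, invariants=GENERIC_INVARIANTS):
    """Return (state_id, invariant_name) pairs for every violation found."""
    violations = []
    for state_id, dom in states.items():
        for inv in invariants:
            if not inv(dom):
                violations.append((state_id, inv.__name__))
    return violations

# Two client-side states inferred by the crawler: one valid, one faulty.
states = {
    "index": "<html><body>Welcome</body></html>",
    "after_click": "<html><body>Error: item not found</body></html>",
}
print(check_states(states))          # flags 'after_click' for no_error_messages
```

Application-specific validators would simply be extra predicates appended to the invariant list.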

Input Domain Reduction through Irrelevant Variable Removal and Its Effect on Local, Global, and Hybrid Search-Based Structural Test Data Generation

Invariant-Based Automatic Testing of Modern Web Applications

EGC 9224  EGC 9226  EGC 9225

Formal specifications can help with program testing, optimization, refactoring, documentation, and, most importantly, debugging and repair. However, they are difficult to write manually, and automatic mining techniques suffer from 90-99 percent false positive rates. To address this problem, we propose to augment a temporal-property miner by incorporating code quality metrics. We measure code quality by extracting additional information from the software engineering process, using information both from code that is more likely to be correct and from code that is less likely to be correct. When used as a preprocessing step for an existing specification miner, our technique identifies which input is most indicative of correct program behavior, allowing off-the-shelf techniques to learn the same number of specifications using only 45 percent of their original input. As a novel inference technique, our approach has few false positives in practice (63 percent when balancing precision and recall, 3 percent when focused on precision), while still finding useful specifications (e.g., those that find many bugs) on over 1.5 million lines of code.
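A minimal, quality-weighted take on temporal-property mining can be sketched like this: candidate specifications are pairs (a, b) meaning "b should follow a," and each trace carries a weight standing in for the code quality metrics described above. All event names, weights, and the threshold are invented for illustration.

```python
def weighted_support(traces, a, b):
    """Weighted fraction of a-containing traces in which b follows a."""
    num = den = 0.0
    for events, quality in traces:
        if a in events:
            den += quality
            if b in events[events.index(a) + 1:]:
                num += quality
    return num / den if den else 0.0

traces = [
    (["open", "read", "close"], 0.9),   # trace from well-tested code
    (["open", "write", "close"], 0.9),
    (["open", "read"], 0.1),            # trace from likely-buggy code
]

# Unweighted support would be 2/3; quality weighting raises it to ~0.95,
# so the candidate "close follows open" survives a 0.9 acceptance threshold
# instead of being discarded as a likely false positive.
print(weighted_support(traces, "open", "close"))
```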

Model checking is a formal verification method widely accepted in the web service world because of its capability to reason about service behavior at the process level. It has been used as a basic tool in several scenarios, such as service selection, service validation, and service composition. The importance of semantics is also widely recognized; indeed, there are several solutions to the problem of providing semantics to web services, most of them relying on some form of Description Logic. This paper presents an efficient integration of model checking and semantic reasoning technologies, which can be considered a first step toward the use of semantic model checking in problems of selection, validation, and composition. The approach relies on a representation of services at the process level based on semantically annotated state transition systems (asts) and a representation of specifications based on a semantically annotated version of computation tree logic (anctl). This paper proves that the semantic model checking algorithm is sound and complete and runs in polynomial time. The approach has been evaluated with several experiments.

To assess the quality of test suites, mutation analysis seeds artificial defects (mutations) into programs; a nondetected mutation indicates a weakness in the test suite. We present an automated approach to generating unit tests that detect these mutations for object-oriented classes. This has two advantages: First, the resulting test suite is optimized toward finding defects modeled by mutation operators rather than merely covering code. Second, the state change caused by mutations induces oracles that precisely detect the mutants. Evaluated on 10 open source libraries, our μtest prototype generates test suites that find significantly more seeded defects than the original manually written test suites.
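The kill-or-survive logic of mutation analysis can be sketched in a few lines. This is not μtest; the function, the single mutation operator ('+' changed to '-'), and the test inputs are invented to show why a suite that merely covers code can still miss a mutant.

```python
def original(a, b):
    return a + b

def mutant(a, b):              # mutation operator: '+' replaced by '-'
    return a - b

def suite_detects(test_inputs, fn_orig, fn_mut):
    # A mutation is "killed" if any test observes a behavioral difference.
    return any(fn_orig(a, b) != fn_mut(a, b) for a, b in test_inputs)

weak_suite = [(0, 0)]            # covers the code, but 0+0 == 0-0: mutant survives
strong_suite = [(0, 0), (2, 3)]  # 5 != -1: mutant killed
print(suite_detects(weak_suite, original, mutant),
      suite_detects(strong_suite, original, mutant))   # False True
```

Generating tests to kill mutants, as described above, amounts to searching for inputs (and oracles) that turn the surviving case into the killed one.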

Measuring Code Quality to Improve Specification Mining

Mutation-Driven Generation of Unit Tests and Oracles

Model Checking Semantically Annotated Services

EGC 9227  EGC 9229  EGC 9228

The problem of deciding whether an observed behavior is acceptable is the oracle problem. When testing from a finite state machine (FSM), the oracle problem is easy to solve, and so it has received relatively little attention for FSMs. However, if the system under test has physically distributed interfaces, called ports, then in distributed testing we observe a local trace at each port and compare the set of local traces with the set of allowed behaviors (global traces). This paper investigates the oracle problem for deterministic and nondeterministic FSMs and for two alternative definitions of conformance for distributed testing. We show that the oracle problem can be solved in polynomial time for the weaker notion of conformance (⊆w) but is NP-hard for the stronger notion (⊆s), even if the FSM is deterministic. However, when testing from a deterministic FSM with controllable input sequences, the oracle problem can be solved in polynomial time, and similar results hold for nondeterministic FSMs. Thus, in some cases the oracle problem can be efficiently solved when using ⊆s, and where this is not the case, we can use the decision procedure for ⊆w as a sound approximation.
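The gap between the two conformance checks can be sketched with trace projections. The ports, events, and allowed traces below are invented; the stronger check is done by brute force over allowed global traces, which is exponential in general, consistent with the NP-hardness result above.

```python
def project(global_trace, port):
    """The local trace observed at one port (other ports' events are invisible)."""
    return [event for p, event in global_trace if p == port]

def weak_conforms(observed, allowed, ports):
    # Weaker check: each port's local trace may be explained by a
    # *different* allowed global trace.
    return all(any(project(g, p) == observed[p] for g in allowed)
               for p in ports)

def strong_conforms(observed, allowed, ports):
    # Stronger check: one single allowed global trace must explain
    # every port's observation at once.
    return any(all(project(g, p) == observed[p] for p in ports)
               for g in allowed)

allowed = [
    [("U", "a"), ("L", "x")],        # allowed global trace 1
    [("U", "b"), ("L", "y")],        # allowed global trace 2
]
observed = {"U": ["a"], "L": ["y"]}  # each port looks fine locally...

print(weak_conforms(observed, allowed, ["U", "L"]),
      strong_conforms(observed, allowed, ["U", "L"]))  # True False
```

The example shows exactly the failure mode distributed testing must worry about: every port's observation matches *some* allowed behavior, yet no single global trace explains them all.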

A declarative SQL-like language and a middleware infrastructure are presented for collecting data from different nodes of a pervasive system. Data management is performed while hiding the complexity due to the large underlying heterogeneity of devices, which can span from passive RFIDs to ad hoc sensor boards to portable computers. An important feature of the presented middleware is that it makes the integration of new device types easy through the use of device self-descriptions. Two case studies of PerLa usage are described, and a survey compares our approach with other projects in the area.

Pointcut fragility is a well-documented problem in Aspect-Oriented Programming; changes to the base code can lead to join points incorrectly falling in or out of the scope of pointcuts. In this paper, we present an automated approach that limits fragility problems by providing mechanical assistance in pointcut maintenance. The approach is based on harnessing arbitrarily deep structural commonalities between program elements corresponding to join points selected by a pointcut. The extracted patterns are then applied to later versions to suggest new join points that may require inclusion. To show that the motivation behind our proposal is well founded, we first empirically establish, by analyzing patterns extracted from 23 AspectJ programs, that join points captured by a single pointcut typically exhibit a significant amount of unique structural commonality. We then demonstrate the usefulness of our technique by rejuvenating pointcuts in multiple versions of three of these programs. The results show that our parameterized heuristic algorithm was able to accurately and automatically infer the majority of the new join points in subsequent software versions that were not captured by the original pointcuts.

Oracles for Distributed Testing

Pointcut Rejuvenation: Recovering Pointcut Expressions in Evolving Aspect-Oriented Software

PerLa: A Language and Middleware Architecture for Data Management and Integration in Pervasive Information Systems

EGC 9231  EGC 9230

A major challenge of dynamic reconfiguration is Quality of Service (QoS) assurance, which is meant to reduce application disruption to a minimum during the system's transformation. However, this problem has not been well studied. This paper investigates the problem for component-based software systems from three points of view. First, the whole spectrum of QoS characteristics is defined. Second, the logical and physical requirements for QoS characteristics are analyzed, and solutions to achieve them are proposed. Third, prior work is classified by QoS characteristics and then realized as abstract reconfiguration strategies. On this basis, a quantitative evaluation of the QoS assurance abilities of existing work and of our own approach is conducted in three steps. First, a proof-of-concept prototype called the reconfigurable component model is implemented to support the representation and testing of the reconfiguration strategies. Second, a reconfiguration benchmark is proposed to expose the whole spectrum of QoS problems. Third, each reconfiguration strategy is tested against the benchmark and the results are evaluated. The most important conclusion of our investigation is that the classified QoS characteristics can be fully achieved under some acceptable constraints.

A substantial amount of work has shed light on whether random testing is actually a useful testing technique. Despite its simplicity, several successful real-world applications have been reported in the literature. Although it will not solve all possible testing problems, random testing appears to be an essential tool in the hands of software testers. In this paper, we review and analyze the debate about random testing, discussing its benefits and drawbacks. We also present novel results addressing general questions about random testing, such as how long random testing needs, on average, to achieve testing targets (e.g., coverage), how it scales, and how likely it is to yield similar results if rerun on the same testing problem (predictability). Because its simplicity makes the mathematical analysis of random testing tractable, we provide precise and rigorous answers to these questions. The results show that there are practical situations in which random testing is a viable option. Our theorems are backed up by simulations, and we show how they can be applied to most types of software and testing criteria. In light of these results, we then assess the validity of empirical analyses reported in the literature and derive guidelines for both practitioners and scientists.

Random Testing: Theoretical Results and Practical Implications

QoS Assurance for Dynamic Reconfiguration of Component-Based Software Systems

EGC 9233  EGC 9232  EGC 9234

The exact performance analysis of large-scale software systems with discrete-state approaches is difficult because of the well-known problem of state-space explosion. This paper considers this problem with regard to the stochastic process algebra PEPA, presenting a deterministic approximation to the underlying Markov chain model based on ordinary differential equations. The accuracy of the approximation is assessed by means of a substantial case study of a distributed multithreaded application.
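The deterministic-approximation idea can be sketched on a toy client/server model (not a PEPA model; the rates and population are invented). Instead of a continuous-time Markov chain whose state space explodes with the number of clients, we track only the expected count of clients in each local state with a pair of ODEs, Euler-integrated to equilibrium.

```python
def simulate(n_clients=1000, think_rate=1.0, serve_rate=2.0, dt=1e-3, t_end=20.0):
    """Fluid approximation: expected counts of thinking vs. waiting clients."""
    thinking, waiting = float(n_clients), 0.0
    t = 0.0
    while t < t_end:
        flow_in = think_rate * thinking    # clients issuing a request
        flow_out = serve_rate * waiting    # requests completing
        thinking += dt * (flow_out - flow_in)
        waiting += dt * (flow_in - flow_out)
        t += dt
    return thinking, waiting

thinking, waiting = simulate()
# Equilibrium balances the flows: think_rate*T == serve_rate*W with T + W = 1000,
# so T -> 2000/3 and W -> 1000/3 regardless of the 2^N-style discrete state space.
print(round(thinking), round(waiting))
```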

Bad smells are signs of potential problems in code. Detecting and resolving bad smells, however, remain time-consuming for software engineers despite proposals for bad smell detection and refactoring tools. Numerous bad smells have been recognized, yet the sequences in which the detection and resolution of different kinds of bad smells are performed are rarely discussed, because software engineers do not know how to optimize such sequences or determine the benefits of an optimal sequence. To this end, we propose a detection and resolution sequence for different kinds of bad smells to simplify their detection and resolution. We highlight the necessity of managing bad smell resolution sequences with a motivating example, and recommend a suitable sequence for commonly occurring bad smells. We evaluate this recommendation on two nontrivial open source applications, and the results suggest that a significant reduction in effort, ranging from 17.64 to 20 percent, can be achieved when bad smells are detected and resolved using the proposed sequence.

Software development effort estimates are frequently too low, which may lead to poor project plans and project failures. One reason for this bias seems to be that the effort estimates produced by software developers are affected by information that has no relevance for the actual use of effort. We attempted to acquire a better understanding of the underlying mechanisms and the robustness of this type of estimation bias. For this purpose, we hired 374 software developers working in outsourcing companies to participate in a set of three experiments. The experiments examined the connection between estimation bias and developer dimensions: self-construal (how one sees oneself), thinking style, nationality, experience, skill, education, sex, and organizational role. We found that estimation bias was present along most of the studied dimensions. The most interesting finding may be that the estimation bias increased significantly with higher levels of interdependence, i.e., with stronger emphasis on connectedness, social context, and relationships. We propose that this connection may be enabled by an activation of one's self-construal when engaging in effort estimation, and by a connection between a more interdependent self-construal and an increased search for indirect messages, a lower ability to ignore irrelevant context, and a stronger emphasis on socially desirable responses.

Schedule of Bad Smell Detection and Resolution: A New Way to Save Effort

Scalable Differential Analysis of Process Algebra Models

Software Development Estimation Biases: The Role of Interdependence

EGC 9235  EGC 9237  EGC 9236

Dynamic analysis is increasingly attracting attention for debugging, profiling, and program comprehension. Ten to twenty years ago, many dynamic analyses investigated only simple method execution traces. Today, in contrast, many sophisticated dynamic analyses exist, for instance, for detecting memory leaks, analyzing ownership properties, measuring garbage collector performance, or supporting debugging tasks. These analyses depend on complex program instrumentations and analysis models, making it challenging to understand, compare, and reproduce the proposed approaches. While formal specifications and proofs are common in the field of static analysis, most dynamic analyses are specified using informal, textual descriptions. In this paper, we propose a formal framework based on operational semantics that allows researchers to precisely specify their dynamic analyses. Our goal is to provide an accessible and reusable basis on which researchers who may not be familiar with rigorous specifications of dynamic analyses can build. By extending the provided semantics, one can concisely specify how runtime events are captured and how this data is transformed to populate the analysis model. Furthermore, our approach provides the foundations for reasoning about the properties of a dynamic analysis.

Requirements elicitation is the software engineering activity in which stakeholder needs are understood. It involves identifying and prioritizing requirements, a process that is difficult to scale to large software projects with many stakeholders. This paper proposes StakeRare, a novel method that uses social networks and collaborative filtering to identify and prioritize requirements in large software projects. StakeRare identifies stakeholders and asks them to recommend other stakeholders and stakeholder roles, builds a social network with stakeholders as nodes and their recommendations as links, and prioritizes stakeholders using a variety of social network measures to determine their project influence. It then asks the stakeholders to rate an initial list of requirements, recommends other relevant requirements to them using collaborative filtering, and prioritizes their requirements using their ratings weighted by their project influence. StakeRare was evaluated by applying it to a software project for a 30,000-user system, alongside a substantial empirical study of requirements elicitation. Using data collected from surveying and interviewing 87 stakeholders, the study demonstrated that StakeRare predicts stakeholder needs accurately and arrives at a more complete and accurately prioritized list of requirements than the existing method used in the project, while taking only a fraction of the time.
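The prioritization pipeline described above can be sketched end to end. This is only an illustration: influence is taken here as simple in-degree centrality (the method evaluates a variety of network measures), the collaborative-filtering step is omitted, and the stakeholders, recommendations, and ratings are invented.

```python
from collections import defaultdict

# Stakeholders recommend each other; edges are (recommender, recommended).
recommendations = [("ann", "bob"), ("carol", "bob"), ("bob", "ann")]

influence = defaultdict(int)
for _, recommended in recommendations:
    influence[recommended] += 1          # in-degree as project influence

ratings = {                              # stakeholder -> {requirement: rating}
    "ann": {"login": 5, "export": 2},
    "bob": {"export": 4},
    "carol": {"login": 1},
}

# Requirement score = sum of ratings weighted by each rater's influence.
scores = defaultdict(float)
for person, reqs in ratings.items():
    for req, rating in reqs.items():
        scores[req] += influence[person] * rating

ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)   # 'export' outranks 'login': the influential rater backed it
```

The weighting is the key design choice: a highly recommended stakeholder's moderate rating can outweigh several ratings from peripheral stakeholders.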

Specifying Dynamic Analyses by Extending Language Semantics

A UML/MARTE Model Analysis Method for Uncovering Scenarios Leading to Starvation and Deadlocks in Concurrent Systems

StakeRare: Using Social Networks and Collaborative Filtering for Large-Scale Requirements Elicitation

EGC 9238

Concurrency problems such as starvation and deadlocks should be identified early in the design process. As larger, more complex concurrent systems are being developed, this is becoming increasingly difficult. We propose a general approach based on the analysis of specialized design models expressed in the Unified Modeling Language (UML) that uses a specifically designed genetic algorithm to detect concurrency problems. Though the current paper addresses deadlocks and starvation, we show how the approach can easily be tailored to other concurrency issues. Our main motivations are 1) to devise solutions that are applicable in the context of the UML design of concurrent systems without requiring additional modeling, and 2) to use a search technique to achieve scalable automation in concurrency problem detection. For the first objective, we show how all relevant concurrency information is extracted from system UML models that comply with the UML Modeling and Analysis of Real-Time and Embedded Systems (MARTE) profile. For the second objective, a tailored genetic algorithm searches for execution sequences exhibiting deadlock or starvation problems. Scalability in terms of problem detection is achieved by showing that the detection rates of our approach are, in general, high and are not strongly affected by large increases in the size of complex search spaces.

In collaborative software development projects, work items are used as a mechanism to coordinate tasks and track shared development work. In this paper, we explore how "tagging," a lightweight social computing mechanism, is used to communicate matters of concern in the management of development tasks. We present the results of two empirical studies, over 36 and 12 months respectively, on how tagging has been adopted and what role it plays in the development processes of several professional development projects with more than 1,000 developers in total. Our research shows that the tagging mechanism was eagerly adopted by the teams and has become a significant part of many informal processes. Different kinds of tags are used by various stakeholders to categorize and organize work items. The tags are used to support the finding of tasks, articulation work, and information exchange. Implicit and explicit mechanisms have evolved to manage the tag vocabulary. Our findings indicate that lightweight informal tool support, prevalent in the social computing domain, may play an important role in improving team-based software development practices.

Work Item Tagging: Communicating Concerns in Collaborative Software Development