
International Journal on Future Revolution in Computer Science & Communication Engineering, ISSN: 2454-4248, Volume 4, Issue 3, pp. 305-316

IJFRCSCE | March 2018, Available @ http://www.ijfrcsce.org

Personalization of Search Engine by Using Cache-Based Approach

Krupali Bhaware, Shubham Narkhede

Undergraduate Students,

Department of Computer Science & Engineering

GuruNanak Institute of Technology (G.N.I.T.)

Dahegaon, Kalmeshwar Road, Nagpur

Maharashtra 441501, India

E-Mail: [email protected]

E-Mail: [email protected]

Prof. Neeranjan Chitare

Assistant Professor, Department of Computer Science & Engineering

GuruNanak Institute of Technology (G.N.I.T.)

Dahegaon, Kalmeshwar Road, Nagpur

Maharashtra 441501, India

E-Mail: [email protected]

ABSTRACT: As the deep web grows at a rapid pace, there has been increased interest in techniques that help efficiently locate deep web interfaces. However, due to the large volume of web resources and the dynamic nature of the deep web, achieving wide coverage and high efficiency is a challenging problem. This project proposes a three-stage framework for efficient harvesting of deep web interfaces. In the first stage, the web crawler performs site-based searching for center pages with the help of search engines, avoiding visits to a large number of pages. To achieve more accurate results for a focused crawl, the crawler ranks websites to prioritize highly relevant ones for a given topic. In the second stage, the proposed system opens the pages inside the application with the help of the Jsoup API and preprocesses them; it then performs a word count of the queried web pages. In the third stage, the proposed system performs frequency analysis based on TF and IDF, and uses the TF*IDF combination for ranking web pages. To eliminate bias toward visiting some highly relevant links in hidden web directories, this project designs a link tree data structure to achieve wider coverage for a site. Experimental results on a set of representative domains show the agility and accuracy of our proposed crawler framework, which efficiently retrieves deep web interfaces from large-scale sites and achieves higher harvest rates than other crawlers by using Naive Bayes classification.

KEYWORDS: personalization; search engine; user interests; search; cache; web crawlers; frameworks

I. INTRODUCTION

The deep (or hidden) web refers to content that lies behind searchable web interfaces and cannot be indexed by search engines. Based on extrapolations from a study done at the University of California, Berkeley, it is estimated that the deep web contained around 91,850 terabytes of data in 2003, while the surface web held only around 167 terabytes. Later studies estimated that 1.9 zettabytes were reached and 0.3 zettabytes were consumed worldwide in 2007. An IDC report estimates that the total of all digital data created, replicated, and consumed was to reach 6 zettabytes in 2014. A significant portion of this huge amount of data is estimated to be stored as structured or relational data in web databases; the deep web makes up around 96% of all content on the Internet, which is 500-550 times larger than the surface web. This data contains a vast amount of valuable information, and entities such as Infomine, Clusty, and Books in Print may be interested in building an index of the deep web sources in a given domain (such as books). Because these entities cannot access the proprietary web indexes of search engines (e.g., Google and Baidu), there is a need for an efficient crawler that can accurately and quickly explore deep web databases.

It is challenging to locate deep web databases, because they are not registered with any search engines, are usually sparsely distributed, and keep constantly changing. To address this issue, past work has proposed two kinds of crawlers: generic crawlers and focused crawlers. Generic crawlers fetch all searchable forms and cannot focus on a specific topic. Focused crawlers, such as the Form-Focused Crawler (FFC) and the Adaptive Crawler for Hidden-web Entries (ACHE), can automatically search online databases on a specific topic. FFC is designed with link, page, and form classifiers for focused crawling of web forms, and is extended by ACHE with additional components for form filtering and an adaptive link learner. The link classifiers in these crawlers play a significant role in achieving higher crawling efficiency than the best-first crawler. However, these link classifiers are used to predict the distance to the page containing searchable forms, which is hard to estimate, especially for delayed benefit links (links that eventually lead to pages with forms). As a result, the crawler can be inefficiently led to pages without targeted forms. Besides efficiency, quality and coverage of relevant deep web sources are also challenging: a crawler must produce a large quantity of high-quality results from the most relevant content sources. For assessing source quality, SourceRank ranks the results from the selected sources by computing the agreement between them.

Web testing tools are mostly used to gather performance and reliability data about a web application running on a particular server. Web testing is divided into three main categories:


• Performance Testing

• Stability or Stress Testing

• Capacity Planning

The first task in performance testing is to use a tool to apply stress to the site and measure the maximum requests per second that the web server can handle. This is a quantitative measurement. The second task is to determine which resource keeps the requests per second from going higher, such as CPU, memory, or backend dependencies. This second task is more of an art than a measurement.

In most cases, the web server processor is the bottleneck. Increase the stress to the point where the requests per second begin to decrease, then back the stress off slightly: this is the maximum performance the site can achieve. Increasing the number of client machines will also produce a greater stress level.
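The quantitative half of this first task can be sketched in a few lines of Java. The probe below is a minimal illustration, not the tool used in this work; the target URL, thread count, and test duration are assumed placeholder values.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.atomic.AtomicLong;

    // Minimal sketch: apply stress to a target URL and estimate requests per second.
    public class StressProbe {
        public static void main(String[] args) throws InterruptedException {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:8080/")).build();
            AtomicLong completed = new AtomicLong();
            long durationMillis = 10_000;                       // illustrative test duration
            long deadline = System.currentTimeMillis() + durationMillis;
            Runnable worker = () -> {
                while (System.currentTimeMillis() < deadline) {
                    try {
                        client.send(request, HttpResponse.BodyHandlers.discarding());
                        completed.incrementAndGet();            // count each successful round trip
                    } catch (Exception e) {
                        // failures are expected near the saturation point; ignore in this sketch
                    }
                }
            };
            Thread[] threads = new Thread[16];                  // simulated client load
            for (int i = 0; i < threads.length; i++) { threads[i] = new Thread(worker); threads[i].start(); }
            for (Thread t : threads) t.join();
            System.out.printf("~%.1f requests/second%n", completed.get() * 1000.0 / durationMillis);
        }
    }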

The Web Server Performance Testing Tool is an application developed in Java to measure the performance of a web server or server-side application. The parameters used for measurement are:

• Request-Response Time
• Number of requests successfully handled by the web server
• Sent Requests: the number of requests sent to the server during a period
• Received Requests: the number of replies received from the server during a period
• Open Requests: the number of open requests at a given moment

Each user is simulated by a separate thread with its own session information (i.e., cookies for each simulated user are stored separately) and "surfs" the URLs independently of the other users, just as in real-world web usage. URLs can be parameterized for each user, and the sequence of URLs can be varied. POST and GET requests are supported, as well as BASIC HTTP authentication and several other settings. With the scripting functionality, even highly complex URL patterns can be created for large-scale web applications.
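A minimal Java sketch of this simulation model follows, assuming the standard java.net.http client: one thread per user, each with its own CookieManager so sessions stay separate. The URL list and user count are illustrative placeholders.

    import java.net.CookieManager;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.List;

    // One thread per simulated user; each thread's HttpClient carries its own cookie store.
    public class UserSimulation {
        public static void main(String[] args) {
            List<String> urls = List.of("http://localhost:8080/login", "http://localhost:8080/home");
            for (int user = 0; user < 5; user++) {
                new Thread(() -> {
                    // a private CookieManager keeps this user's session separate from the others
                    HttpClient client = HttpClient.newBuilder()
                            .cookieHandler(new CookieManager())
                            .build();
                    for (String url : urls) {
                        try {
                            HttpResponse<String> r = client.send(
                                    HttpRequest.newBuilder(URI.create(url)).GET().build(),
                                    HttpResponse.BodyHandlers.ofString());
                            System.out.println(Thread.currentThread().getName() + " -> " + url + " : " + r.statusCode());
                        } catch (Exception e) {
                            System.err.println(url + " failed: " + e.getMessage());
                        }
                    }
                }).start();
            }
        }
    }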

When selecting a relevant subset from the available content sources, FFC and ACHE prioritize links that bring an immediate return (links that directly point to pages containing searchable forms) over delayed benefit links. However, the set of retrieved forms is very heterogeneous; for example, across a set of representative domains, on average only 16% of the forms retrieved by FFC are relevant. Furthermore, little work has been done on the source selection problem when crawling more content sources. It is therefore crucial to develop smart crawling strategies that can quickly discover as many relevant content sources from the deep web as possible. In this project, we propose an effective deep web harvesting framework, namely Smart Crawler, for achieving both wide coverage and high efficiency for a focused crawler. Based on the observation that deep web sites usually contain a few searchable forms, most of them within a depth of three, our crawler is divided into two stages: site locating and in-site exploring. The site locating stage achieves wide coverage of sites for a focused crawler, and the in-site exploring stage efficiently performs searches for web forms within a site. Our main contributions are:

We propose a novel three-stage framework to address the problem of searching for hidden web resources. Our site locating technique employs a reverse searching strategy (e.g., using Google's "link:" facility to obtain pages pointing to a given link) and an incremental three-level site prioritizing technique for uncovering relevant sites, reaching more data sources. During the in-site exploring stage, we design a link tree for balanced link prioritizing, eliminating bias toward web pages in popular directories; a sketch of such a link tree follows.
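The paper does not give the link tree's concrete layout, so the Java sketch below is a hypothetical reading of it: URLs are inserted under their path segments, so links are grouped by directory and no single popular directory has to dominate the crawl order. All field and method names here are our own, and the balancing policy is deliberately simplistic.

    import java.net.URI;
    import java.util.ArrayDeque;
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Hypothetical link tree: one node per path segment (directory),
    // with the URLs that bottom out at that directory queued on the node.
    public class LinkTree {
        private final Map<String, LinkTree> children = new LinkedHashMap<>();
        private final ArrayDeque<String> urls = new ArrayDeque<>();

        public void insert(String url) {
            LinkTree node = this;
            for (String seg : URI.create(url).getPath().split("/")) {
                if (seg.isEmpty()) continue;
                node = node.children.computeIfAbsent(seg, k -> new LinkTree());
            }
            node.urls.add(url);
        }

        // Drain this node's own URLs first, then descend into child directories
        // in insertion order. A fuller implementation would rotate among siblings
        // to balance the crawl across directories even further.
        public String nextBalanced() {
            if (!urls.isEmpty()) return urls.poll();
            for (LinkTree child : children.values()) {
                String url = child.nextBalanced();
                if (url != null) return url;
            }
            return null;
        }
    }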

We propose an adaptive learning algorithm that performs online feature selection and uses these features to automatically construct link rankers. In the site locating stage, highly relevant sites are prioritized, and the crawling is focused on a topic using the contents of the root pages of sites, achieving more accurate results. During the in-site exploring stage, relevant links are prioritized for fast in-site searching. We performed an extensive performance evaluation of Smart Crawler over real web data in 12 representative domains and compared it with ACHE and a site-based crawler. The evaluation shows that our crawling framework is very effective, achieving substantially higher harvest rates than the state-of-the-art ACHE crawler. The results also demonstrate the effectiveness of the reverse searching and adaptive learning.

II. PROPOSED SYSTEM

2.1 Architecture Block Diagram of the System

The proposed work is planned to be carried out in the following way:


To efficiently and effectively discover deep web data sources, the crawler is designed with a three-stage architecture, site locating, in-site exploring, and result ranking, as shown in the architecture block diagram above. The first, site locating, stage finds the most relevant sites for a given topic; the second, in-site exploring, stage uncovers searchable forms from those sites; and the third stage applies Naive Bayes classification to rank the results.
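The ranking side of the third stage is described in the abstract as a TF*IDF combination. Below is a minimal Java sketch of that scoring, assuming pages have already been fetched and tokenized (e.g., by the Jsoup preprocessing step); the +1 smoothing in the IDF denominator is a common convention, not something the paper specifies.

    import java.util.List;
    import java.util.Map;

    // Minimal TF*IDF scorer: 'pages' maps a page id to its token list,
    // 'query' holds the query terms, 'pageTokens' is the page being scored.
    public final class TfIdfRanker {
        public static double score(List<String> pageTokens, List<String> query,
                                   Map<String, List<String>> pages) {
            if (pageTokens.isEmpty() || pages.isEmpty()) return 0.0;
            double s = 0.0;
            for (String term : query) {
                long tfCount = pageTokens.stream().filter(term::equals).count();
                double tf = (double) tfCount / pageTokens.size();      // term frequency
                long docsWithTerm = pages.values().stream()
                        .filter(doc -> doc.contains(term)).count();    // document frequency
                double idf = Math.log((double) pages.size() / (1 + docsWithTerm));
                s += tf * idf;                                         // TF*IDF combination
            }
            return s;
        }
    }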

Specifically, the site locating stage starts with a seed set of sites in a site database. Seed sites are candidate sites given to the crawler to start crawling; the crawler begins by following URLs from chosen seed sites to explore other pages and other domains. When the number of unvisited URLs in the database falls below a threshold during the crawling process, the crawler performs "reverse searching" of known deep web sites for center pages (highly ranked pages that have many links to other domains) and feeds these pages back to the site database. The Site Frontier fetches homepage URLs from the site database, which are ranked by the Site Ranker to prioritize highly relevant sites.
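The control flow just described can be summarized in a short hypothetical Java sketch. THRESHOLD and reverseSearch() are placeholders (a real implementation would issue something like the Google "link:" lookup mentioned earlier); this is not the paper's actual implementation.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.List;

    // Sketch of the site database logic: when unvisited URLs drop below a
    // threshold, reverse-search known deep sites for center pages and feed
    // the results back into the frontier.
    public class SiteDatabase {
        private static final int THRESHOLD = 100;              // illustrative value
        private final Deque<String> unvisited = new ArrayDeque<>();
        private final List<String> knownDeepSites;

        public SiteDatabase(List<String> seeds) {
            this.knownDeepSites = seeds;
            unvisited.addAll(seeds);
        }

        public String nextSite() {
            if (unvisited.size() < THRESHOLD) {
                for (String site : knownDeepSites) {
                    unvisited.addAll(reverseSearch(site));      // center pages fed back
                }
            }
            return unvisited.poll();
        }

        private List<String> reverseSearch(String site) {
            return List.of();                                   // placeholder: no real lookup here
        }
    }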

The framework proposes a two-stage system, namely Smart Crawler, for efficient harvesting of deep web interfaces. In the first stage, Smart Crawler performs site-based searching for center pages with the help of search engines, avoiding visits to a large number of pages. To achieve more accurate results for a focused crawl, Smart Crawler ranks websites to prioritize highly relevant ones for a given topic. In the second stage, Smart Crawler achieves fast in-site searching by uncovering the most relevant links with adaptive link ranking. To eliminate bias toward visiting some highly relevant links in hidden web directories, we design a link tree data structure to achieve wider coverage for a site. Our experimental results on a set of representative domains show the agility and accuracy of our proposed crawler framework, which efficiently retrieves deep web interfaces from large-scale sites and achieves higher harvest rates than other crawlers. We have shown that this approach achieves wide coverage of deep web interfaces while maintaining highly efficient crawling. Smart Crawler is a focused crawler consisting of two stages: efficient site locating and balanced in-site exploring. Smart Crawler performs site-based locating by reversely searching the known deep web sites for center pages, which can effectively find many data sources for sparse domains. By ranking collected sites and by focusing the crawling on a topic, Smart Crawler achieves more accurate results.

III. RESULTS AND DISCUSSION

3.1 System Simulation Snapshots

Snapshot 1: GUI Design for Crawling System


This is the first form of the design for the proposed system. On this page, the user enters a query, which is searched on the Google search engine with the help of the Google API framework. Using this, the first five results from the Google search engine can be retrieved.


Snapshot 2: Enter Search Element


This is an example of searching for a keyword on Google. As the keyword is entered, it is sent to the Google search engine using a query-based JSON API. The next form shows the result of searching for this keyword in the proposed system.
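The paper does not name the exact Google endpoint, so the sketch below assumes the publicly documented Custom Search JSON API; the API key and search engine id are placeholders.

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    // Sketch of sending a keyword to a query-based JSON search API and
    // receiving the ranked results as a JSON string.
    public class SearchClient {
        private static final String API_KEY = "YOUR_API_KEY";   // placeholder
        private static final String CX = "YOUR_ENGINE_ID";      // placeholder

        public static String search(String keyword) throws Exception {
            String url = "https://customsearch.googleapis.com/customsearch/v1?key=" + API_KEY
                    + "&cx=" + CX
                    + "&num=5"                                   // first five results, as described above
                    + "&q=" + URLEncoder.encode(keyword, StandardCharsets.UTF_8);
            HttpResponse<String> response = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(URI.create(url)).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            return response.body();                              // JSON with the ranked results
        }
    }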


Snapshot 3: Majority-Based Result

These are the results fetched from the Google search engine. Basic preprocessing is performed, since the Google API returns not only links but also page details such as meta tag titles, URLs, images, and promotional links. All of these are removed, and the original ranked results are shown.
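A minimal sketch of this preprocessing step with the Jsoup API (assuming the jsoup library on the classpath): the page is fetched and non-content elements are stripped before the visible text feeds the word count and TF/IDF stages. The selector list is an illustrative approximation of the cleanup described above.

    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;

    // Fetch a result page and keep only its visible text.
    public class PagePreprocessor {
        public static String cleanText(String url) throws Exception {
            Document doc = Jsoup.connect(url).get();
            doc.select("script, style, img, iframe").remove();  // drop non-content markup
            return doc.body().text();                           // visible text only
        }
    }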


IV. CONCLUSION AND FUTURE SCOPE

We propose an effective harvesting framework for deep web interfaces, namely SmartCrawler. We have shown that our approach achieves wide coverage for deep web interfaces while maintaining highly efficient crawling. SmartCrawlerV2 is a focused crawler consisting of two stages: efficient site locating and balanced in-site exploring. SmartCrawlerV2 performs site-based locating by reversely searching the known deep web sites for center pages, which can effectively find many data sources for sparse domains. By ranking collected sites and by focusing the crawling on a topic, SmartCrawlerV2 achieves more accurate results. The in-site exploring stage uses adaptive link ranking to search within a site, and we design a link tree for eliminating bias toward certain directories of a website, for wider coverage of web directories. Our experimental results on a representative set of domains show the effectiveness of the proposed two-stage crawler, which achieves higher harvest rates than other crawlers. In future work, we plan to combine pre-query and post-query approaches for classifying deep web forms to further improve the accuracy of the form classifier.

Future Scope

As future scope, the following can be done to improve the algorithm:

1) The algorithm can be extended with different types of efficient hybrid page ranking techniques, which can further strengthen the ranking procedures and thereby generate more accurate crawling results.

2) The algorithm can be improved to also crawl sub-child links and apply page ranking techniques to them. It can further be improved to perform intelligent time-based crawling, in which the application fires a search crawl within a specific time and also completes within a specific time, making the crawling process more efficient.

V. REFERENCES

[1]. Feng Zhao, Jingyu Zhou, Chang Nie, Heqing Huang, Hai Jin, "SmartCrawler: A Two-stage Crawler for Efficiently Harvesting Deep-Web Interfaces," IEEE Transactions on Services Computing, vol. 9, no. 4, July/August 2016.

[2]. Jianxiao Liu, Zonglin Tian, Panbiao Liu, Jiawei Jiang, "An Approach of Semantic Web Service Classification Based on Naive Bayes," in 2016 IEEE International Conference on Services Computing, September 2016.

[3]. Bo Tang, Steven Kay, and Haibo He, "Toward Optimal Feature Selection in Naive Bayes for Text Categorization," IEEE Transactions on Knowledge and Data Engineering, 9 February 2016.

[4]. Amruta Pandit, Prof. Manisha Naoghare, "Efficiently Harvesting Deep Web Interface with Reranking and Clustering," International Journal of Advanced Research in Computer and Communication Engineering, vol. 5, issue 1, January 2016.

[5]. Anand Kumar, Rahul Kumar, Sachin Nigle, Minal Shahakar, "Review on Extracting the Web Data through Deep Web Interfaces, Mechanism," International Journal of Innovative Research in Computer and Communication Engineering, vol. 4, issue 1, January 2016.

[6]. Sayali D. Jadhav, H. P. Channe, "Comparative Study of K-NN, Naive Bayes and Decision Tree Classification Techniques," International Journal of Science and Research, vol. 5, issue 1, January 2016.

[7]. Akshaya Kubba, "Web Crawlers for Semantic Web," International Journal of Advanced Research in Computer Science and Software Engineering, vol. 5, issue 5, May 2015.

[8]. Monika Bhide, M. A. Shaikh, Amruta Patil, Sunita Kerure, "Extracting the Web Data Through Deep Web Interfaces," in INCIEST-2015.

[9]. Y. He, D. Xin, V. Ganti, S. Rajaraman, and N. Shah, "Crawling deep web entity pages," in Proc. 6th ACM Int. Conf. Web Search and Data Mining, 2013, pp. 355-364.

[10]. Raju Balakrishnan, Subbarao Kambhampati, "SourceRank: Relevance and Trust Assessment for Deep Web Sources Based on Inter-Source Agreement," in WWW 2011, March 28-April 1, 2011.

[11]. D. Shestakov, "Databases on the web: National web domain survey," in Proc. 15th Symp. Int. Database Eng. Appl., 2011, pp. 179-184.

[12]. D. Shestakov and T. Salakoski, "Host-IP clustering technique for deep web characterization," in Proc. 12th Int. Asia-Pacific Web Conf., 2010, pp. 378-380.

[13]. S. Denis, "On building a search interface discovery system," in Proc. 2nd Int. Conf. Resource Discovery, 2010, pp. 81-93.

[14]. D. Shestakov and T. Salakoski, "On estimating the scale of national deep web," in Database and Expert Systems Applications. New York, NY, USA: Springer, 2007, pp. 780-789.

[15]. Luciano Barbosa, Juliana Freire, "An Adaptive Crawler for Locating Hidden-Web Entry Points," in WWW 2007.

[16]. K. C.-C. Chang, B. He, and Z. Zhang, "Toward large scale integration: Building a metaquerier over databases on the web," in Proc. 2nd Biennial Conf. Innovative Data Syst. Res., 2005, pp. 44-55.

[17]. M. K. Bergman, "White paper: The deep web: Surfacing hidden value," J. Electron. Publishing, vol. 7, no. 1, pp. 1-17, 2001.