Crawling and web indexes
Today’s lecture
Crawling
Connectivity servers
Basic crawler operation
Begin with known “seed” pages
Fetch and parse them; extract the URLs they point to; place the extracted URLs on a queue
Fetch each URL on the queue and repeat
20.2
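To make the loop concrete, here is a minimal Python sketch of the basic operation above; the seed URL, the regex-based link extraction, and the page limit are illustrative assumptions, and real crawlers add politeness, robots.txt checks, distribution, and error handling (all discussed below).

# Minimal sketch of the basic crawler loop (illustrative only).
from collections import deque
from urllib.parse import urljoin
from urllib.request import urlopen
import re

SEEDS = ["http://example.com/"]           # assumed seed pages
LINK_RE = re.compile(r'href="([^"#]+)"')  # crude link extraction for the sketch

def crawl(seeds, max_pages=10):
    frontier = deque(seeds)   # queue of URLs to fetch
    seen = set(seeds)         # URLs already placed on the queue
    pages = 0
    while frontier and pages < max_pages:
        url = frontier.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except (OSError, ValueError):
            continue                       # skip unreachable or malformed URLs
        pages += 1
        for link in LINK_RE.findall(html):
            absolute = urljoin(url, link)  # expand relative URLs
            if absolute not in seen:
                seen.add(absolute)
                frontier.append(absolute)  # place the extracted URL on the queue
        yield url, html                    # hand the page to the indexer

if __name__ == "__main__":
    for url, _ in crawl(SEEDS):
        print("crawled", url)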
Crawling picture
(figure: seed pages feed the URL frontier; pages from the Web are crawled and parsed; beyond them lies the unseen Web)
20.2
Simple picture – complications
Web crawling isn’t feasible with one machine; all of the above steps must be distributed
Even non-malicious pages pose challenges: latency/bandwidth to remote servers vary; webmasters’ stipulations (how “deep” should you crawl a site’s URL hierarchy?); site mirrors and duplicate pages
Malicious pages: spam pages; spider traps, including dynamically generated ones
Politeness – don’t hit a server too often
20.1.1
What any crawler must do
Be Polite: respect implicit and explicit politeness considerations; only crawl allowed pages; respect robots.txt (more on this shortly)
Be Robust: Be immune to spider traps and other malicious behavior from web servers
20.1.1
What any crawler should do
Be capable of distributed operation: designed to run on multiple distributed machines
Be scalable: designed to increase the crawl rate by adding more machines
Performance/efficiency: permit full use of available processing and network resources
20.1.1
What any crawler should do
Fetch pages of “higher quality” first
Continuous operation: Continue fetching fresh copies of a previously fetched page
Extensible: Adapt to new data formats, protocols
20.1.1
Updated crawling picture
(figure: as before – seed pages, URL frontier, URLs crawled and parsed, and the unseen Web – now served by a crawling thread)
20.1.1
URL frontier
Can include multiple pages from the same host
Must avoid trying to fetch them all at the same time
Must try to keep all crawling threads busy
20.2
Explicit and implicit politeness
Explicit politeness: specifications from webmasters on what portions of a site can be crawled (robots.txt)
Implicit politeness: even with no specification, avoid hitting any site too often
20.2
Robots.txt
Protocol for giving spiders (“robots”) limited access to a website, originally from 1994: www.robotstxt.org/wc/norobots.html
Website announces its request on what can(not) be crawled: for a URL, create a file URL/robots.txt
This file specifies access restrictions
20.2.1
Robots.txt example
No robot should visit any URL starting with "/yoursite/temp/", except the robot called “searchengine":
User-agent: *
Disallow: /yoursite/temp/
User-agent: searchengine
Disallow:
20.2.1
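Python's standard library includes a parser for exactly this format; a small sketch checking the policy above (the agent name "somebot" and the example URLs are hypothetical):

# Sketch: checking the example robots.txt with urllib.robotparser.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse("""
User-agent: *
Disallow: /yoursite/temp/

User-agent: searchengine
Disallow:
""".splitlines())

# The generic robot is excluded from /yoursite/temp/; "searchengine" is not.
print(rp.can_fetch("somebot", "http://example.com/yoursite/temp/x.html"))       # False
print(rp.can_fetch("searchengine", "http://example.com/yoursite/temp/x.html"))  # True

In a real crawler one would point the parser at the site's robots.txt with set_url(...) and read(), and cache the result per host (see “Filters and robots.txt” below).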
Processing steps in crawling
Pick a URL from the frontier (which one? – see the URL frontier discussion below)
Fetch the document at the URL
Parse the document: extract links from it to other docs (URLs)
Check if the URL has content already seen; if not, add to indexes
For each extracted URL: ensure it passes certain URL filter tests (e.g., only crawl .edu, obey robots.txt, etc.); check if it is already in the frontier (duplicate URL elimination)
20.2.1
Basic crawl architecture
(figure: URL frontier → Fetch (WWW, DNS) → Parse → Content seen? (doc FPs) → URL filter (robots filters) → Dup URL elim (URL set) → back to the URL frontier)
20.2.1
DNS (Domain Name Server)
A lookup service on the internet: given a URL (host name), retrieve its IP address
Service provided by a distributed set of servers – thus, lookup latencies can be high (even seconds)
Common OS implementations of DNS lookup are blocking: only one outstanding request at a time
Solutions: DNS caching; batch DNS resolver – collects requests and sends them out together
20.2.2
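A sketch of the caching idea, using the blocking OS resolver behind a simple thread-safe cache; the class name and interface are illustrative, and the batch resolver is not shown:

# Sketch: a thread-safe DNS cache in front of the (blocking) OS resolver.
import socket
import threading

class CachingResolver:
    def __init__(self):
        self._cache = {}              # host name -> IP address
        self._lock = threading.Lock()

    def resolve(self, host):
        with self._lock:
            if host in self._cache:
                return self._cache[host]
        ip = socket.gethostbyname(host)   # blocking lookup (can take seconds)
        with self._lock:
            self._cache[host] = ip
        return ip

resolver = CachingResolver()
print(resolver.resolve("example.com"))   # hits the OS resolver
print(resolver.resolve("example.com"))   # served from the cache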
Parsing: URL normalization
When a fetched document is parsed, some of the extracted links are relative URLs
E.g., at http://en.wikipedia.org/wiki/Main_Page
we have a relative link to /wiki/Wikipedia:General_disclaimer which is the same as the absolute URL http://en.wikipedia.org/wiki/Wikipedia:General_disclaimer
During parsing, must normalize (expand) such relative URLs 20.2.1
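In Python this expansion is what urllib.parse.urljoin does; a minimal sketch using the Wikipedia example above:

# Sketch: expanding a relative URL extracted during parsing.
from urllib.parse import urljoin

base = "http://en.wikipedia.org/wiki/Main_Page"
relative = "/wiki/Wikipedia:General_disclaimer"
print(urljoin(base, relative))
# -> http://en.wikipedia.org/wiki/Wikipedia:General_disclaimer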
Content seen?
Duplication is widespread on the web
If the page just fetched is already in the index, do not further process it
This is verified using document fingerprints or shingles
20.2.1
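A minimal sketch of the content-seen test, using a SHA-1 hash of the page text as a stand-in fingerprint (real systems use compact checksums or shingle-based sketches; the choice here is only for illustration):

# Sketch: skip further processing if the fetched content was already seen.
import hashlib

doc_fps = set()   # fingerprints of documents already indexed

def content_seen(text):
    fp = hashlib.sha1(text.encode("utf-8")).digest()  # stand-in fingerprint
    if fp in doc_fps:
        return True          # duplicate content: do not process further
    doc_fps.add(fp)
    return False

print(content_seen("same page body"))   # False (first time seen)
print(content_seen("same page body"))   # True  (duplicate)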
Filters and robots.txt
Filters – regular expressions for URLs to be crawled or not
Once a robots.txt file is fetched from a site, we need not fetch it repeatedly; doing so burns bandwidth and hits the web server
Cache robots.txt files 20.2.1
Duplicate URL elimination
For a non-continuous (one-shot) crawl, test to see if an extracted+filtered URL has already been passed to the frontier
For a continuous crawl – see details of frontier implementation
20.2.1
Distributing the crawler
Run multiple crawl threads, under different processes – potentially at different, geographically distributed nodes
Partition the hosts being crawled among the nodes; a hash of the host name is used for the partition
How do these nodes communicate?
20.2.1
Communication between nodes
The output of the URL filter at each node is sent to the Duplicate URL Eliminator at all nodes
(figure: the basic crawl architecture at each node, with a host splitter after the URL filter that sends URLs to other nodes and receives URLs from other nodes before duplicate URL elimination)
URL frontier: two main considerations
Politeness: do not hit a web server too frequently
Freshness: crawl some pages more often than others, e.g., pages (such as news sites) whose content changes often
These goals may conflict with each other. (E.g., a simple priority queue fails – many links out of a page go to its own site, creating a burst of accesses to that site.)
20.2.3
Politeness – challenges
Even if we restrict only one thread to fetch from a host, it can hit that host repeatedly
Common heuristic: insert a time gap between successive requests to a host that is >> the time for the most recent fetch from that host
20.2.3
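A small sketch of that heuristic, assuming a fixed multiplier (here 10) on the duration of the last fetch; the dictionary plays the role the back-queue heap plays in Mercator (next slides):

# Sketch: enforce a per-host gap >> the time of the most recent fetch.
import time

GAP_FACTOR = 10          # assumed multiplier on the last fetch duration
next_allowed = {}        # host -> earliest time the host may be hit again

def wait_for_host(host):
    now = time.time()
    t = next_allowed.get(host, 0.0)
    if now < t:
        time.sleep(t - now)          # politeness: hold off until allowed

def record_fetch(host, fetch_seconds):
    # The next request to this host must wait GAP_FACTOR times the
    # duration of the fetch that just completed.
    next_allowed[host] = time.time() + GAP_FACTOR * fetch_seconds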
URL frontier: Mercator scheme
(figure: incoming URLs pass through a prioritizer into K front queues; a biased front queue selector / back queue router moves them into B back queues, a single host on each; a back queue selector hands URLs to a crawl thread requesting a URL)
20.2.3
Mercator URL frontier
URLs flow in from the top into the frontier
Front queues manage prioritization
Back queues enforce politeness; each queue is FIFO
20.2.3
Front queues
(figure: the prioritizer appends URLs to front queues 1 … K; the biased front queue selector / back queue router pulls from them)
20.2.3
Front queues
Prioritizer assigns each URL an integer priority between 1 and K, and appends the URL to the corresponding queue
Heuristics for assigning priority: refresh rate sampled from previous crawls; application-specific (e.g., “crawl news sites more often”)
20.2.3
Biased front queue selector
When a back queue requests a URL (in a sequence to be described), it picks a front queue from which to pull a URL
This choice can be round robin biased to queues of higher priority, or some more sophisticated variant; it can be randomized
20.2.3
Back queues
(figure: the biased front queue selector / back queue router fills back queues 1 … B; a back queue selector, driven by a heap, picks the next back queue to serve)
Back queue invariants
Each back queue is kept non-empty while the crawl is in progress
Each back queue only contains URLs from a single host; maintain a table from hosts to back queues
(table: host name → back queue, e.g., one host assigned to queue 3, another to queue 1, …, another to queue B)
20.2.3
Back queue heap
One entry for each back queue; the entry is the earliest time t_e at which the host corresponding to that back queue can be hit again
This earliest time is determined from: the last access to that host; any time-buffer heuristic we choose
20.2.3
Back queue processing
A crawler thread seeking a URL to crawl: extracts the root of the heap; fetches the URL at the head of the corresponding back queue q (looked up from the table)
Checks if queue q is now empty – if so, pulls a URL v from the front queues
If there’s already a back queue for v’s host, appends v to that queue and pulls another URL from the front queues; repeats
Else adds v to q; when q is non-empty again, creates a heap entry for it
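A compressed Python sketch of this loop; the heap, back queues, host-to-queue table, fixed politeness gap, and pull_from_front_queues are simplified stand-ins for the Mercator structures, not the actual implementation:

# Sketch: a crawl thread obtaining its next URL from the back queues.
import heapq, time
from collections import deque
from urllib.parse import urlparse

heap = []            # entries (earliest_time, queue_id)
back_queues = {}     # queue_id -> deque of URLs (all from one host)
host_to_queue = {}   # host -> queue_id

def next_url(pull_from_front_queues, gap=2.0):
    earliest, q = heapq.heappop(heap)             # root of the heap
    time.sleep(max(0.0, earliest - time.time()))  # respect the earliest time
    url = back_queues[q].popleft()                # head of back queue q
    if not back_queues[q]:                        # queue drained: refill it
        del host_to_queue[urlparse(url).netloc]
        while True:
            v = pull_from_front_queues()
            v_host = urlparse(v).netloc
            if v_host in host_to_queue:           # host already has a back queue
                back_queues[host_to_queue[v_host]].append(v)
            else:                                 # assign v's host to queue q
                back_queues[q].append(v)
                host_to_queue[v_host] = q
                break
    heapq.heappush(heap, (time.time() + gap, q))  # new heap entry for q
    return url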
Number of back queues B
Keep all threads busy while respecting politeness
Mercator recommendation: three times as many back queues as crawler threads
20.2.3
Connectivity servers
20.4
Connectivity Server [CS1: Bhar98b; CS2 & CS3: Rand01]
Support for fast queries on the web graph: which URLs point to a given URL? Which URLs does a given URL point to?
Stores mappings in memory from URL to outlinks, URL to inlinks
Applications: crawl control; web graph analysis (connectivity, crawl optimization); link analysis
20.4
Champion published work
Boldi and Vigna http://www2004.org/proceedings/docs/1p595.pdf
WebGraph – a set of algorithms and a Java implementation
Fundamental goal – maintain node adjacency lists in memory; for this, compressing the adjacency lists is the critical component
20.4
Adjacency lists
The set of neighbors of a node; assume each URL is represented by an integer
E.g., for a 4 billion page web, need 32 bits per node
Naively, this demands 64 bits to represent each hyperlink
20.4
Adjacency list compression
Properties exploited in compression: similarity (between lists); locality (many links from a page go to “nearby” pages)
Use gap encodings in sorted lists
(figure: distribution of gap values)
20.4
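A minimal sketch of gap encoding on a sorted adjacency list (the variable-length codes that make small gaps actually cheap to store are omitted):

# Sketch: gap-encode a sorted adjacency list; small gaps are common
# because of locality (links tend to point to nearby pages).
def gaps(sorted_neighbors):
    prev, out = 0, []
    for n in sorted_neighbors:
        out.append(n - prev)   # store differences instead of absolute IDs
        prev = n
    return out

def ungaps(gap_list):
    total, out = 0, []
    for g in gap_list:
        total += g
        out.append(total)
    return out

adj = [1001, 1003, 1004, 1050]      # hypothetical neighbor IDs
print(gaps(adj))                    # [1001, 2, 1, 46]
print(ungaps(gaps(adj)) == adj)     # True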
Storage
Boldi/Vigna get down to an average of ~3 bits/link (URL-to-URL edge) for a 118M-node web graph
How?
Why is this remarkable?
20.4
Main ideas of Boldi/Vigna
Consider a lexicographically ordered list of all URLs, e.g.:
www.stanford.edu/alchemy
www.stanford.edu/biology
www.stanford.edu/biology/plant
www.stanford.edu/biology/plant/copyright
www.stanford.edu/biology/plant/people
www.stanford.edu/chemistry
20.4
Boldi/Vigna
Each of these URLs has an adjacency list
Main idea: due to templates, the adjacency list of a node is similar to one of the 7 preceding URLs in the lexicographic ordering
Express the adjacency list in terms of one of these
E.g., consider these adjacency lists:
1, 2, 4, 8, 16, 32, 64
1, 4, 9, 16, 25, 36, 49, 64
1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144
1, 4, 8, 16, 25, 36, 49, 64
Encode the last list as: reference (-2), remove 9, add 8
Why 7?
20.4
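A sketch of that reference encoding: express a list as a pointer to one of the 7 preceding lists plus the elements to remove and add (the copy-block machinery of the real WebGraph format is simplified away; the lists are the ones from the slide):

# Sketch: encode an adjacency list relative to one of the 7 preceding lists.
WINDOW = 7

def encode(lists, i):
    """Encode lists[i] against the cheapest of the previous WINDOW lists."""
    target = set(lists[i])
    best = None
    for back in range(1, min(WINDOW, i) + 1):
        ref = set(lists[i - back])
        removed, added = sorted(ref - target), sorted(target - ref)
        cost = len(removed) + len(added)
        if best is None or cost < best[3]:
            best = (-back, removed, added, cost)
    if best is None:
        return (0, [], sorted(target), len(target))   # no reference available
    return best

lists = [
    [1, 2, 4, 8, 16, 32, 64],
    [1, 4, 9, 16, 25, 36, 49, 64],
    [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144],
    [1, 4, 8, 16, 25, 36, 49, 64],
]
ref, removed, added, _ = encode(lists, 3)
print(ref, removed, added)   # -2 [9] [8], the encoding from the slide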
Duplicate/Near-Duplicate Detection
Duplication: exact match, detected with fingerprints
Near-duplication: approximate match
Overview: compute syntactic similarity with an edit-distance measure
Use a similarity threshold to detect near-duplicates
E.g., similarity > 80% => documents are “near duplicates”
Not transitive, though sometimes used transitively
Computing Near Similarity
Features: segments of a document (natural or artificial breakpoints) [Brin95]; shingles (word n-grams) [Brin95, Brod98], e.g., “a rose is a rose is a rose” => a_rose_is_a, rose_is_a_rose, is_a_rose_is
Similarity measures: TF-IDF [Shiv95]; set intersection [Brod98] (specifically, Size_of_Intersection / Size_of_Union, i.e., Jaccard similarity)
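A short sketch of word 4-gram shingling and the set-intersection (Jaccard) measure on the example above:

# Sketch: word 4-gram shingles and Jaccard (set-intersection) similarity.
def shingles(text, k=4):
    words = text.split()
    return {"_".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b)

s = shingles("a rose is a rose is a rose")
print(s)   # e.g., {'a_rose_is_a', 'rose_is_a_rose', 'is_a_rose_is'}
print(jaccard(s, shingles("a rose is a rose")))   # 2/3 for this pair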
Shingles + Set Intersection
Computing the exact set intersection of shingles between all pairs of documents is expensive and infeasible
Approximate it using a cleverly chosen subset of shingles from each document (a sketch)
Shingles + Set Intersection
Estimate size_of_intersection / size_of_union based on a short sketch [Brod97, Brod98]
Create a “sketch vector” (e.g., of size 200) for each document
Documents which share more than t (say 80%) corresponding vector elements are similar
For doc D, sketch[ i ] is computed as follows:
Let f map all shingles in the universe to 0..2^m (e.g., f = fingerprinting)
Let π_i be a specific random permutation on 0..2^m
Pick sketch[i] := min over all shingles s in D of π_i(f(s))
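A sketch of the sketch-vector computation, with m = 64, a hash standing in for the fingerprint function f, and random affine maps modulo 2^64 standing in for true permutations π_i (a common practical approximation):

# Sketch: min-hash sketch vectors and their element-wise agreement.
import hashlib, random

M = 2 ** 64
random.seed(0)
# 200 "permutations" approximated by random affine maps modulo 2^64.
PERMS = [(random.randrange(1, M, 2), random.randrange(M)) for _ in range(200)]

def fingerprint(shingle):                 # f: shingle -> 0..2^64
    return int.from_bytes(hashlib.sha1(shingle.encode()).digest()[:8], "big")

def sketch(shingle_set):
    fps = [fingerprint(s) for s in shingle_set]
    return [min((a * f + b) % M for f in fps) for a, b in PERMS]

def similarity(sk1, sk2):                 # fraction of agreeing positions
    return sum(x == y for x, y in zip(sk1, sk2)) / len(sk1)

d1 = sketch({"a_rose_is_a", "rose_is_a_rose", "is_a_rose_is"})
d2 = sketch({"a_rose_is_a", "rose_is_a_rose"})
print(similarity(d1, d2))   # estimates the Jaccard similarity (about 2/3)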
Computing Sketch[i] for Doc 1
(figure: start with the 64-bit shingles of Document 1 on the number line 0..2^64; permute them on the number line with π_i; pick the min value)
Test if Doc1.Sketch[i] = Doc2.Sketch[i]
(figure: the shingles of Document 1 and Document 2 on the number line 0..2^64, permuted with π_i; A and B are the respective minima – are they equal?)
Test for 200 random permutations: π_1, π_2, …, π_200
However…
(figure: the same two permuted shingle sets, with minima A and B)
A = B iff the shingle with the MIN value in the union of Doc 1 and Doc 2 is common to both (i.e., lies in the intersection)
This happens with probability Size_of_intersection / Size_of_union
Question
Is Document D1 = D2 iff size_of_intersection = size_of_union?
Mirror Detection
Mirroring is the systematic replication of web pages across hosts – the single largest cause of duplication on the web
Host1 and Host2 are mirrors iff for all (or most) paths p: when http://Host1/p exists, http://Host2/p exists as well, with identical (or near-identical) content, and vice versa.
Mirror Detection example
http://www.elsevier.com/ and http://www.elsevier.nl/
Structural Classification of Proteins:
http://scop.mrc-lmb.cam.ac.uk/scop
http://scop.berkeley.edu/
http://scop.wehi.edu.au/scop
http://pdb.weizmann.ac.il/scop
http://scop.protres.ru/
Repackaged mirrors: Auctions.msn.com and Auctions.lycos.com (Aug 2001)
Motivation
Why detect mirrors?
Smart crawling: fetch from the fastest or freshest server; avoid duplication
Better connectivity analysis: combine inlinks; avoid double-counting outlinks
Redundancy in result listings: “If that fails you can try: <mirror>/samepath”
Proxy caching
Maintain clusters of subgraphs
Initialize clusters of trivial subgraphs: group near-duplicate single documents into a cluster
Subsequent passes: merge clusters of the same cardinality and corresponding linkage; avoid decreasing cluster cardinality
To detect mirrors we need: adequate path overlap; contents of corresponding pages within a small time range
Bottom Up Mirror Detection [Cho00]
Can we use URLs to find mirrors?
(figure: two hosts, www.synthesis.org and synthesis.stanford.edu, each containing pages a, b, c, d)
www.synthesis.org/Docs/ProjAbs/synsys/synalysis.html
www.synthesis.org/Docs/ProjAbs/synsys/visual-semi-quant.html
www.synthesis.org/Docs/annual.report96.final.html
www.synthesis.org/Docs/cicee-berlin-paper.html
www.synthesis.org/Docs/myr5
www.synthesis.org/Docs/myr5/cicee/bridge-gap.html
www.synthesis.org/Docs/myr5/cs/cs-meta.html
www.synthesis.org/Docs/myr5/mech/mech-intro-mechatron.html
www.synthesis.org/Docs/myr5/mech/mech-take-home.html
www.synthesis.org/Docs/myr5/synsys/experiential-learning.html
www.synthesis.org/Docs/myr5/synsys/mm-mech-dissec.html
www.synthesis.org/Docs/yr5ar
www.synthesis.org/Docs/yr5ar/assess
www.synthesis.org/Docs/yr5ar/cicee
www.synthesis.org/Docs/yr5ar/cicee/bridge-gap.html
www.synthesis.org/Docs/yr5ar/cicee/comp-integ-analysis.html

synthesis.stanford.edu/Docs/ProjAbs/deliv/high-tech-…
synthesis.stanford.edu/Docs/ProjAbs/mech/mech-enhanced…
synthesis.stanford.edu/Docs/ProjAbs/mech/mech-intro-…
synthesis.stanford.edu/Docs/ProjAbs/mech/mech-mm-case-…
synthesis.stanford.edu/Docs/ProjAbs/synsys/quant-dev-new-…
synthesis.stanford.edu/Docs/annual.report96.final.html
synthesis.stanford.edu/Docs/annual.report96.final_fn.html
synthesis.stanford.edu/Docs/myr5/assessment
synthesis.stanford.edu/Docs/myr5/assessment/assessment-…
synthesis.stanford.edu/Docs/myr5/assessment/mm-forum-kiosk-…
synthesis.stanford.edu/Docs/myr5/assessment/neato-ucb.html
synthesis.stanford.edu/Docs/myr5/assessment/not-available.html
synthesis.stanford.edu/Docs/myr5/cicee
synthesis.stanford.edu/Docs/myr5/cicee/bridge-gap.html
synthesis.stanford.edu/Docs/myr5/cicee/cicee-main.html
synthesis.stanford.edu/Docs/myr5/cicee/comp-integ-analysis.html
Top Down Mirror Detection [Bhar99, Bhar00c]
E.g., www.synthesis.org/Docs/ProjAbs/synsys/synalysis.html vs. synthesis.stanford.edu/Docs/ProjAbs/synsys/quant-dev-new-teach.html
What features could indicate mirroring?
Hostname similarity: word unigrams and bigrams, e.g., { www, www.synthesis, synthesis, … }
Directory similarity: positional path bigrams, e.g., { 0:Docs/ProjAbs, 1:ProjAbs/synsys, … }
IP address similarity: 3- or 4-octet overlap (many hosts sharing an IP address => virtual hosting by an ISP)
Host outlink overlap
Path overlap
Potentially, path + sketch overlap
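A sketch of extracting two of these feature sets (hostname word unigrams/bigrams and positional path bigrams) from a URL; the feature spellings follow the slide's examples, the function names are ours:

# Sketch: hostname and positional-path features for candidate mirror detection.
from urllib.parse import urlparse

def hostname_features(host):
    words = host.split(".")
    unigrams = set(words)
    bigrams = {".".join(words[i:i + 2]) for i in range(len(words) - 1)}
    return unigrams | bigrams

def path_bigram_features(path):
    parts = [p for p in path.split("/") if p]
    return {f"{i}:{parts[i]}/{parts[i + 1]}" for i in range(len(parts) - 1)}

url = "http://www.synthesis.org/Docs/ProjAbs/synsys/synalysis.html"
u = urlparse(url)
print(hostname_features(u.netloc))
# e.g., {'www', 'synthesis', 'org', 'www.synthesis', 'synthesis.org'}
print(path_bigram_features(u.path))
# e.g., {'0:Docs/ProjAbs', '1:ProjAbs/synsys', '2:synsys/synalysis.html'}

Hosts sharing many of these features become candidate mirror pairs in Phase I below.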
Implementation
Phase I – Candidate Pair Detection: find features that pairs of hosts have in common; compute a list of host pairs which might be mirrors
Phase II – Host Pair Validation: test each host pair and determine the extent of mirroring
Check if 20 paths sampled from Host1 have near-duplicates on Host2 and vice versa
Use transitive inferences:
IF Mirror(A,x) AND Mirror(x,B) THEN Mirror(A,B)
IF Mirror(A,x) AND !Mirror(x,B) THEN !Mirror(A,B)
Evaluation
140 million URLs on 230,000 hosts (1999)
Best approach combined 5 sets of features
Top 100,000 host pairs had precision = 0.57 and recall = 0.86
Resources
www.seochat.com/
www.google.com/webmasters/seo.html
www.google.com/webmasters/faq.html
www.smartmoney.com/bn/ON/index.cfm?story=ON-20041215-000871-1140
research.microsoft.com/research/sv/sv-pubs/p97-fetterly/p97-fetterly.pdf
news.com.com/2100-1024_3-5491704.html
www.jupitermedia.com/corporate/press.html
Resources
IIR Chapter 20
Mercator: A scalable, extensible web crawler (Heydon et al. 1999)
A standard for robot exclusion
The WebGraph framework I: Compression techniques (Boldi et al. 2004)