INF 2914 Web Search
Lecture 2: Crawlers
Today’s lecture: Crawling
Basic crawler operation
- Begin with known “seed” pages
- Fetch and parse them
- Extract the URLs they point to
- Place the extracted URLs on a queue
- Fetch each URL on the queue and repeat
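A minimal single-machine sketch of this loop in Python (illustrative only — politeness, robots.txt, content deduplication, and distribution, all discussed below, are deliberately missing):

```python
import urllib.request
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects the href targets of <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seeds, max_pages=100):
    frontier = deque(seeds)      # queue of URLs waiting to be fetched
    seen = set(seeds)            # URLs already placed on the queue
    while frontier and max_pages > 0:
        url = frontier.popleft()
        try:
            page = urllib.request.urlopen(url, timeout=10)
            html = page.read().decode("utf-8", "replace")
        except (OSError, ValueError):
            continue             # skip unreachable or malformed URLs
        max_pages -= 1
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)   # expand relative links
            if absolute not in seen:
                seen.add(absolute)
                frontier.append(absolute)
```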
Crawling picture
[Diagram: seed pages initialize the URL frontier; URLs are crawled and parsed, feeding new URLs back into the frontier; beyond the frontier lies the unseen Web.]
Simple picture – complications
- Web crawling isn’t feasible with one machine: all of the above steps must be distributed
- Even non-malicious pages pose challenges:
  - Latency/bandwidth to remote servers vary
  - Webmasters’ stipulations: how “deep” should you crawl a site’s URL hierarchy?
  - Site mirrors and duplicate pages
- Malicious pages:
  - Spam pages
  - Spider traps – including dynamically generated ones
- Politeness – don’t hit a server too often
What any crawler must do
- Be polite: respect implicit and explicit politeness considerations for a website
  - Only crawl pages you’re allowed to
  - Respect robots.txt (more on this shortly)
- Be robust: be immune to spider traps and other malicious behavior from web servers
What any crawler should do
- Be capable of distributed operation: designed to run on multiple distributed machines
- Be scalable: designed to increase the crawl rate by adding more machines
- Performance/efficiency: permit full use of available processing and network resources
What any crawler should do
- Fetch pages of “higher quality” first
- Continuous operation: continue fetching fresh copies of previously fetched pages
- Extensible: adapt to new data formats and protocols
Updated crawling picture
[Diagram: as before, but multiple crawling threads now pull URLs from the URL frontier; seed pages start the crawl, crawled-and-parsed URLs feed back into the frontier, and the unseen Web lies beyond.]
URL frontier
- Can include multiple pages from the same host
- Must avoid trying to fetch them all at the same time
- Must try to keep all crawling threads busy
Explicit and implicit politeness
- Explicit politeness: specifications from webmasters on what portions of a site can be crawled (robots.txt)
- Implicit politeness: even with no specification, avoid hitting any site too often
Robots.txt
- Protocol for giving spiders (“robots”) limited access to a website, originally from 1994: www.robotstxt.org/wc/norobots.html
- The website announces its request on what can(not) be crawled
- For a site, place a file robots.txt at the root of its URL hierarchy
- This file specifies access restrictions
Robots.txt example
No robot should visit any URL starting with "/yoursite/temp/", except the robot called “searchengine”:

User-agent: *
Disallow: /yoursite/temp/

User-agent: searchengine
Disallow:
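Python’s standard library implements this protocol in urllib.robotparser; a small check against the file above (the paths are the slide’s hypothetical ones):

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /yoursite/temp/",
    "",
    "User-agent: searchengine",
    "Disallow:",
])
print(rp.can_fetch("*", "/yoursite/temp/page.html"))             # False
print(rp.can_fetch("searchengine", "/yoursite/temp/page.html"))  # True
```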
Processing steps in crawling
1. Pick a URL from the frontier (which one? – see the frontier discussion below)
2. Fetch the document at that URL
3. Parse the fetched document
   - Extract links from it to other docs (URLs)
4. Check if the document’s content has already been seen
   - If not, add it to the indexes
5. For each extracted URL:
   - Ensure it passes certain URL filter tests (e.g., only crawl .edu, obey robots.txt, etc.)
   - Check if it is already in the frontier (duplicate URL elimination)

(One iteration of this loop is sketched below.)
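A sketch of one iteration, with illustrative names for all the components (frontier, fetcher, indexer, and the two “already seen?” structures are assumptions, not a fixed API):

```python
def process_one(frontier, fetcher, indexer, doc_fps, url_seen, url_filter):
    url = frontier.pop()                  # 1. pick a URL from the frontier
    doc = fetcher.fetch(url)              # 2. fetch the document
    links = extract_links(doc, base=url)  # 3. parse; extract outgoing URLs
    fp = fingerprint(doc)                 # 4. content-seen test
    if fp in doc_fps:
        return                            #    duplicate content: stop here
    doc_fps.add(fp)
    indexer.index(url, doc)
    for link in links:                    # 5. per extracted URL:
        if url_filter(link) and link not in url_seen:  # filter + dup elim
            url_seen.add(link)
            frontier.add(link)
```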
Basic crawl architecture
[Diagram: the URL frontier feeds a fetch module, which consults DNS and the robots filters and pulls pages from the WWW; fetched pages flow through parse, the content-seen test (against doc FP’s), the URL filter, and duplicate URL elimination (against the URL set); surviving URLs return to the frontier.]
DNS (Domain Name System)
- A lookup service on the internet: given a host name, retrieve its IP address
- Service provided by a distributed set of servers – thus, lookup latencies can be high (even seconds)
- Common OS implementations of DNS lookup are blocking: only one outstanding request at a time
- Solutions (a caching-resolver sketch follows):
  - DNS caching
  - Batch DNS resolver – collects requests and sends them out together
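A minimal caching resolver that also keeps many lookups in flight by pushing the blocking OS call onto a thread pool (an illustrative sketch; production crawlers typically use a custom asynchronous resolver):

```python
import socket
from concurrent.futures import ThreadPoolExecutor

_cache = {}                                 # host -> IP address (DNS cache)
_pool = ThreadPoolExecutor(max_workers=32)  # many outstanding lookups at once

def _lookup(host):
    if host not in _cache:                  # miss: do the blocking OS lookup
        _cache[host] = socket.gethostbyname(host)
    return _cache[host]

def resolve(host):
    """Non-blocking from the caller's view: returns a Future for the IP."""
    return _pool.submit(_lookup, host)

# usage: fire off a batch of lookups, then collect the results
futures = {h: resolve(h) for h in ["example.org", "example.com"]}
ips = {h: f.result() for h, f in futures.items()}
```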
Parsing: URL normalization
- When a fetched document is parsed, some of the extracted links are relative URLs
- E.g., at http://en.wikipedia.org/wiki/Main_Page we have a relative link to /wiki/Wikipedia:General_disclaimer, which is the same as the absolute URL http://en.wikipedia.org/wiki/Wikipedia:General_disclaimer
- During parsing, we must normalize (expand) such relative URLs
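Python’s urllib.parse.urljoin performs exactly this expansion:

```python
from urllib.parse import urljoin

base = "http://en.wikipedia.org/wiki/Main_Page"
print(urljoin(base, "/wiki/Wikipedia:General_disclaimer"))
# http://en.wikipedia.org/wiki/Wikipedia:General_disclaimer
```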
Content seen?
- Duplication is widespread on the web
- If the page just fetched is already in the index, do not process it further
- This is verified using document fingerprints or shingles
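A fingerprint-based test in its simplest form (a hash of the full text catches only exact duplicates; shingling, not shown here, additionally catches near-duplicates):

```python
import hashlib

doc_fps = set()   # fingerprints of all indexed documents

def fingerprint(text):
    """64-bit fingerprint of the document text."""
    return hashlib.sha1(text.encode("utf-8")).digest()[:8]

def content_seen(text):
    fp = fingerprint(text)
    if fp in doc_fps:
        return True        # exact duplicate of an already-indexed page
    doc_fps.add(fp)
    return False
```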
Filters and robots.txt
- Filters – regular expressions for the URLs to be crawled or not
- Once a robots.txt file is fetched from a site, there is no need to fetch it repeatedly: doing so burns bandwidth and hits the web server
- Cache robots.txt files
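A combined URL filter with a per-host robots.txt cache (the regular expression and the agent name are illustrative assumptions):

```python
import re
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

ALLOWED = re.compile(r"^https?://[^/]+\.edu(/|$)")  # e.g., only crawl .edu
_robots_cache = {}                                   # host -> parsed robots.txt

def passes_filters(url, agent="mycrawler"):
    if not ALLOWED.match(url):
        return False
    host = urlparse(url).netloc
    if host not in _robots_cache:        # fetch robots.txt once per host
        rp = RobotFileParser("http://%s/robots.txt" % host)
        try:
            rp.read()
        except OSError:
            pass                         # unreachable robots.txt: treat as empty
        _robots_cache[host] = rp
    return _robots_cache[host].can_fetch(agent, url)
```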
Duplicate URL elimination
- For a non-continuous (one-shot) crawl, test whether an extracted and filtered URL has already been passed to the frontier
- For a continuous crawl – see the details of the frontier implementation
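For the one-shot case this is just membership in a set of already-admitted URLs (at web scale the set is itself stored as fingerprints, possibly distributed):

```python
url_set = set()   # canonical URLs already passed to the frontier

def admit(url, frontier):
    if url not in url_set:    # duplicate URL elimination
        url_set.add(url)
        frontier.add(url)
```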
Distributing the crawler
- Run multiple crawl threads, under different processes – potentially at different nodes
- Geographically distributed nodes
- Partition the hosts being crawled among the nodes
  - A hash function is used for the partition
- How do these nodes communicate? (a host-splitter sketch follows; see also the next slide)
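A sketch of that idea: hash the URL’s host to find the owning node, keep local URLs, and forward the rest (the node count and helper names are illustrative):

```python
import hashlib
from urllib.parse import urlparse

NUM_NODES = 4   # illustrative cluster size

def node_for(host):
    """Hash-partition hosts across crawler nodes."""
    h = int.from_bytes(hashlib.md5(host.encode()).digest()[:4], "big")
    return h % NUM_NODES

def route(url, my_node, frontier, send_to):
    owner = node_for(urlparse(url).netloc)
    if owner == my_node:
        frontier.add(url)      # local host: keep the URL here
    else:
        send_to(owner, url)    # remote host: forward to the owning node
```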
Communication between nodes
The output of the URL filter at each node is sent, via the host splitter, to the duplicate URL eliminators at the various nodes – each URL going to the node responsible for its host.

[Diagram: the basic crawl architecture (WWW, fetch, DNS, parse, content seen?, doc FP’s, URL filter, robots filters, URL set, URL frontier), extended with a host splitter after the URL filter that sends URLs to – and receives URLs from – the other hosts before duplicate URL elimination.]
URL frontier: two main considerations
- Politeness: do not hit a web server too frequently
- Freshness: crawl some pages more often than others – e.g., pages (such as news sites) whose content changes often
These goals may conflict with each other.
Politeness – challenges
- Even if we restrict only one thread to fetch from a host, we can still hit it repeatedly
- Common heuristic: insert a time gap between successive requests to a host that is >> the time taken by the most recent fetch from that host (sketched below)
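A sketch of the gap heuristic (the multiplier of 10 is an illustrative choice of “much greater than”):

```python
import time

GAP_FACTOR = 10    # wait ~10x the duration of the last fetch
next_allowed = {}  # host -> earliest time it may be hit again

def record_fetch(host, fetch_seconds):
    next_allowed[host] = time.time() + GAP_FACTOR * fetch_seconds

def may_fetch(host):
    return time.time() >= next_allowed.get(host, 0.0)
```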
URL frontier: Mercator scheme
[Diagram: incoming URLs pass through a prioritizer into K front queues; the biased front queue selector and back queue router move them into B back queues, with a single host on each; the back queue selector hands URLs to crawl threads requesting work.]
Mercator URL frontier
- Front queues manage prioritization
- Back queues enforce politeness
- Each queue is FIFO
Front queues
[Diagram: the prioritizer feeds front queues 1 through K; the biased front queue selector drains them toward the back queue router.]
Front queues
- The prioritizer assigns to each URL an integer priority between 1 and K and appends the URL to the corresponding queue
- Heuristics for assigning priority (a toy prioritizer is sketched below):
  - Refresh rate sampled from previous crawls
  - Application-specific (e.g., “crawl news sites more often”)
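A toy prioritizer combining both heuristics (K, the priority mapping, and the hint names are all illustrative assumptions):

```python
K = 5   # number of front queues; priority 1 is highest

def priority(url, change_rate=None, is_news=False):
    """Map a URL to a priority in 1..K using simple hints."""
    if is_news:
        return 1                              # application-specific rule
    if change_rate is not None:               # changes/day from past crawls
        return max(1, K - int(change_rate))   # faster-changing -> higher
    return K                                  # unknown pages: lowest priority

front_queues = {i: [] for i in range(1, K + 1)}

def enqueue(url, **hints):
    front_queues[priority(url, **hints)].append(url)
```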
Biased front queue selector
- When a back queue requests a URL (in a sequence described below): pick a front queue from which to pull a URL
- This choice can be round robin biased toward queues of higher priority, or some more sophisticated variant; it can be randomized (as sketched below)
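One randomized variant: draw a non-empty queue with probability proportional to its priority (the weighting scheme is an illustrative choice):

```python
import random

def select_front_queue(front_queues):
    """Queue i gets weight K - i + 1, so higher priorities win more often."""
    K = len(front_queues)
    nonempty = [i for i, q in front_queues.items() if q]
    weights = [K - i + 1 for i in nonempty]
    chosen = random.choices(nonempty, weights=weights, k=1)[0]
    return front_queues[chosen].pop(0)   # each queue is FIFO
```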
Back queues
[Diagram: the biased front queue selector feeds the back queue router, which fills back queues 1 through B; a heap of release times drives the back queue selector.]
Back queue invariants
- Each back queue is kept non-empty while the crawl is in progress
- Each back queue contains URLs from only a single host
- Maintain a table mapping hosts to back queues:

| Host name | Back queue |
|-----------|------------|
| …         | 3          |
| …         | 1          |
| …         | B          |
Back queue heap
- One entry for each back queue
- The entry is the earliest time t_e at which the host corresponding to that back queue can be hit again
- This earliest time is determined from:
  - the last access to that host
  - any time-buffer heuristic we choose
Back queue processing
A crawler thread seeking a URL to crawl:
- Extracts the root of the heap
- Fetches the URL at the head of the corresponding back queue q (looked up from the table)
- Checks if queue q is now empty – if so, pulls a URL v from the front queues:
  - If there’s already a back queue for v’s host, appends v to that queue and pulls another URL from the front queues; repeat
  - Else adds v to q
- When q is non-empty again, creates a new heap entry for it
(The sketch below puts these steps together.)
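A compact sketch of this procedure (the state layout, the fixed politeness gap, and pull_from_front are illustrative simplifications, not the Mercator code):

```python
import heapq
import time
from urllib.parse import urlparse

heap = []          # (t_e, queue id): earliest time each host may be hit again
back_queues = {}   # queue id -> list of URLs, all from a single host
host_table = {}    # host -> queue id
GAP = 10           # fixed time-buffer heuristic, in seconds

def next_url(pull_from_front):
    t_e, qid = heapq.heappop(heap)           # root: queue hittable soonest
    time.sleep(max(0.0, t_e - time.time()))  # politeness wait
    url = back_queues[qid].pop(0)            # head of back queue q
    t_next = time.time() + GAP               # same host: wait before next hit
    if not back_queues[qid]:                 # q drained: refill from front queues
        del host_table[urlparse(url).netloc]
        while True:
            v = pull_from_front()            # biased front queue selector
            host = urlparse(v).netloc
            if host in host_table:                       # host already queued
                back_queues[host_table[host]].append(v)  # append, pull again
            else:                                        # v's host takes q
                host_table[host] = qid
                back_queues[qid] = [v]
                t_next = time.time()                     # fresh host: no wait
                break
    heapq.heappush(heap, (t_next, qid))      # new heap entry for q
    return url
```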
Number of back queues B
- Keep all threads busy while respecting politeness
- Mercator recommendation: three times as many back queues as crawler threads
Determining the page refresh frequency
- Study update policies for a local database holding N pages
- The pages in the database are copies of pages found on the Web
- Difficulty: when a Web page is updated, the crawler is not informed
Framework: measuring how up to date the database is
- Freshness: F(e_i, t) = 1 if page e_i is up to date at time t, and 0 otherwise
  - F(S, t): average freshness of database S at time t
- Age: A(e_i, t) = 0 if e_i is up to date at time t, and t − t_m(e_i) otherwise, where t_m(e_i) is the time of the last modification of e_i
  - A(S, t): average age of database S at time t
Framework
The time-averaged freshness (and, analogously, age) is used to compare different page update policies:

$$\bar{F}(e_i) = \lim_{t \to \infty} \frac{1}{t} \int_0^t F(e_i, t)\, dt$$

$$\bar{F}(S) = \lim_{t \to \infty} \frac{1}{t} \int_0^t F(S, t)\, dt$$
Framework
- Poisson process N(t): a stochastic process counting the number of events that occur in the interval [0, t]
- Properties:
  - Memoryless: the number of events occurring in a bounded interval after time t is independent of the number of events that occurred before t
  - Events arrive at rate λ: $$\lim_{\Delta t \to 0} \frac{\Pr[N(t + \Delta t) - N(t) = 1]}{\Delta t} = \lambda$$
Framework
- Hypothesis: a Poisson process is a good approximation of how pages are modified
- Probability that a page is modified at least once in the interval (0, t]: $$1 - e^{-\lambda t}$$
- The larger the rate λ, the larger the probability of a change
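For instance (an illustrative number, not from the slides), a page with rate λ = 2 changes/day is modified at least once within half a day with probability

$$1 - e^{-2 \cdot 0.5} = 1 - e^{-1} \approx 0.63.$$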
Framework
Evolution of the database:
- Uniform modification rate (a single λ): reasonable when the parameter is not known per page
- Non-uniform modification rate (a different λ_i for each page)
Update policies: update frequency
- We must decide how many updates can be performed per unit of time
- We assume the N elements are updated every I units of time
- Decreasing I increases the update rate
- The ratio N/I depends on the number of crawlers, the available bandwidth, the bandwidth at the servers, etc.
Update policies: resource allocation
- After deciding how many elements to update, we must decide how often to update each element
- We can update all elements at the same rate, or update more frequently the elements that are modified more often
Update policies: example
- A database contains 3 elements {e1, e2, e3}
- Their modification rates are 4/day, 3/day, and 2/day, respectively
- 9 updates per day are allowed
- At what rates should the updates be made?
Update policies
- Uniform policy: 3 updates/day for every page
- Proportional policy: e1: 4 updates/day, e2: 3 updates/day, e3: 2 updates/day
- Which policy is better? (a numeric comparison follows)
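A numeric comparison, assuming the standard time-averaged freshness of a page with Poisson change rate λ refreshed f times per unit time, $\bar{F}(\lambda, f) = (f/\lambda)(1 - e^{-\lambda/f})$ — a closed form imported from the refresh-policy literature, not derived on these slides:

```python
from math import exp

def avg_freshness(lam, f):
    """Time-averaged freshness of a page with Poisson change rate `lam`
    that is refreshed every 1/f time units."""
    return (f / lam) * (1.0 - exp(-lam / f))

rates = [4.0, 3.0, 2.0]                # changes/day for e1, e2, e3
policies = {
    "uniform":      [3.0, 3.0, 3.0],   # 9 updates/day split evenly
    "proportional": [4.0, 3.0, 2.0],   # split in proportion to change rate
}
for name, freqs in policies.items():
    F = sum(avg_freshness(l, f) for l, f in zip(rates, freqs)) / len(rates)
    print("%-12s F(S) = %.3f" % (name, F))
# uniform      F(S) = 0.638
# proportional F(S) = 0.632   -> the uniform policy does slightly better
```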
Optimal policy
Computing the optimal update frequency: given the average frequency f = N/I and the average change rate λ_i of each page, we must compute the frequency f_i with which each page i should be updated:

Maximize
$$\bar{F}(S) = \frac{1}{N} \sum_{i=1}^{N} \bar{F}(e_i) = \frac{1}{N} \sum_{i=1}^{N} \bar{F}(\lambda_i, f_i)$$

subject to
$$\frac{1}{N} \sum_{i=1}^{N} f_i = f, \qquad f_i \ge 0.$$
Optimal policy
Consider a database with 5 elements whose modification rates are 1, 2, 3, 4, and 5 per day, assuming that 5 updates per day are possible.
Optimal policy: optimal update frequency
To maximize freshness, the optimal frequencies follow the curve shown on the slide (plot not reproduced here): the optimal frequency first rises and then falls as the change rate grows.
Optimal policy: maximizing freshness
- Pages that change rarely should be updated rarely
- Pages that change too often should also be updated rarely
- An update consumes resources yet keeps the updated page fresh for only a very short time
(A numeric solver for the optimal allocation is sketched below.)
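A numeric sketch of the optimal allocation, assuming the same closed-form freshness $\bar{F}(\lambda, f) = (f/\lambda)(1 - e^{-\lambda/f})$ as before: at the optimum, every page with f_i > 0 has the same marginal freshness gain (a Lagrange-multiplier condition), and bisection finds the multiplier that exactly spends the budget. This is a reconstruction for illustration, not code from the cited paper:

```python
from math import exp

def marginal_gain(lam, f):
    """d/df of (f/lam)(1 - exp(-lam/f)); equals (1 - e^{-u}(1+u))/lam, u = lam/f.
    Decreases as f grows, from 1/lam (f -> 0) down to 0 (f -> infinity)."""
    u = lam / f
    return (1.0 - exp(-u) * (1.0 + u)) / lam

def freq_for_multiplier(lam, mu):
    """The f with marginal_gain(lam, f) == mu, or 0 if unreachable
    (pages with 1/lam <= mu change too fast to be worth refreshing)."""
    if mu >= 1.0 / lam:
        return 0.0
    lo, hi = 1e-9, 1e6
    for _ in range(100):                  # bisection on f
        mid = (lo + hi) / 2.0
        if marginal_gain(lam, mid) > mu:
            lo = mid
        else:
            hi = mid
    return lo

def optimal_freqs(rates, budget):
    """Bisection on the multiplier mu until sum(f_i) matches the budget."""
    lo, hi = 0.0, max(1.0 / l for l in rates)
    for _ in range(100):
        mu = (lo + hi) / 2.0
        if sum(freq_for_multiplier(l, mu) for l in rates) > budget:
            lo = mu                       # spending too much: raise the bar
        else:
            hi = mu
    return [freq_for_multiplier(l, mu) for l in rates]

# the 5-element example: rates 1..5 changes/day, 5 updates/day in total
print(optimal_freqs([1, 2, 3, 4, 5], budget=5.0))
```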
Experiments
- 270 sites were chosen; from each of them, 3,000 pages are fetched every day
- Experiments ran from 9 PM to 6 AM over 4 months
- 10-second interval between requests to the same site
Estimating modification frequencies
The average modification interval of a page is estimated by dividing the period over which the experiment ran by the total number of modifications observed in that period.
Experiments: verifying the Poisson process
Select the pages that are modified at a given average rate (e.g., every 10 days) and plot the distribution of the intervals between changes.
Verifying the Poisson process
- Horizontal axis: time between two modifications
- Vertical axis: fraction of the modifications that occurred within the given interval
- The straight lines are the Poisson predictions
- For pages that change very often and for pages that change very rarely, no conclusions could be drawn
Resource allocation
Based on the estimated frequencies, the expected freshness is computed for each of the three policies considered, assuming it is possible to visit 1G pages per month.
Resources
- IIR Chapter 20
- See also: Cho & Garcia-Molina, “Effective Page Refresh Policies for Web Crawlers”
Assignment 2 – proposal
Study update policies for Web pages:
- Effective Page Refresh Policies for Web Crawlers
- Optimal Crawling Strategies for Web Search Engines
- User-centric Web crawling: http://www.cs.cmu.edu/~olston/publications/wic.pdf
Assignment 3 – proposal
Study policies for distributing crawlers:
- Parallel Crawlers
- Carry out a literature survey of the papers that cite this one
Parallel crawling
Why?
- Aggregate the resources of many machines
- Network load dispersion
Issues:
- Quality of pages crawled (as before)
- Communication overhead
- Coverage; overlap
Parallel crawling approach [Cho+ 2002]
- Partition URLs; each crawler node is responsible for one partition
  - Many choices for the partition function, e.g., hash(host IP)
- What kind of coordination among crawler nodes? Three options:
  - firewall mode
  - cross-over mode
  - exchange mode
Coordination of parallel crawlers
[Diagram: pages a–i and their links, partitioned between crawler node 1 and crawler node 2; the three modes – firewall, cross-over, and exchange – differ in how links that cross the partition are handled.]
Assignment 4 – proposal
- Google File System
- MapReduce
- http://labs.google.com/papers.html
Assignment 5 – proposal
XML Parsing, Tokenization, and Indexing