Improving the performance of focused web crawlers

Sotiris Batsakis, Euripides G.M. Petrakis, Evangelos Milios

Research output: Contribution to journal › Article › peer-review

110 Citations (Scopus)


This work addresses issues related to the design and implementation of focused crawlers. Several variants of state-of-the-art crawlers relying on web page content and link information for estimating the relevance of web pages to a given topic are proposed. Particular emphasis is given to crawlers capable of learning not only the content of relevant pages (as classic crawlers do) but also paths leading to relevant pages. A novel learning crawler inspired by a previously proposed Hidden Markov Model (HMM) crawler is also described. The crawlers have been implemented on the same baseline implementation (only the priority assignment function differs in each crawler), providing an unbiased framework for a comparative analysis of their performance. All crawlers achieve their maximum performance when a combination of web page content and (link) anchor text is used for assigning download priorities to web pages. Furthermore, the new HMM crawler improves the performance of the original HMM crawler and outperforms classic focused crawlers in searching for specialized topics.
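To make the shared baseline concrete: a focused crawler keeps a frontier of unvisited URLs ordered by a priority score, and the abstract notes that the best scores combine page-content relevance with anchor-text relevance. The sketch below is illustrative only and is not the paper's algorithm; the cosine-similarity scoring, the 50/50 weight `w`, and the example URLs and term vectors are all assumptions made for demonstration.

```python
import heapq
import itertools
import math
from collections import Counter

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def priority(topic: Counter, page_terms: Counter,
             anchor_terms: Counter, w: float = 0.5) -> float:
    """Download priority as a weighted mix of parent-page content relevance
    and link anchor-text relevance (w is an illustrative parameter)."""
    return (w * cosine_similarity(topic, page_terms)
            + (1 - w) * cosine_similarity(topic, anchor_terms))

# Best-first frontier: heapq is a min-heap, so scores are negated;
# the running counter breaks ties deterministically.
_tie = itertools.count()

def push(frontier: list, url: str, score: float) -> None:
    heapq.heappush(frontier, (-score, next(_tie), url))

def pop(frontier: list) -> tuple:
    neg_score, _, url = heapq.heappop(frontier)
    return url, -neg_score

# Hypothetical usage: the on-topic link is dequeued first.
topic = Counter("machine learning crawler".split())
frontier: list = []
push(frontier, "http://example.org/a",
     priority(topic, Counter("machine learning paper".split()),
              Counter("learning crawler".split())))
push(frontier, "http://example.org/b",
     priority(topic, Counter("cooking recipes".split()),
              Counter("food".split())))
url, score = pop(frontier)
```

Swapping in a different `priority` function while keeping the frontier logic fixed mirrors the evaluation setup the abstract describes, where only the priority assignment differs between crawlers.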

Original language: English
Pages (from-to): 1001-1013
Number of pages: 13
Journal: Data and Knowledge Engineering
Issue number: 10
Early online date: 21 Apr 2009
Publication status: Published - 1 Oct 2009
Externally published: Yes


