Miłosz Orzeł

.net, js, html, arduino, java... no rants or clickbaits.

Html Agility Pack - massive information extraction from WWW pages

Recently I needed to acquire a certain database. Unfortunately, it was published only as a website that presented 50 records per page. The whole database had more than 150 thousand records. What to do in such a situation? Click through 3000 pages, manually collecting data in a text file? One week and it's done! ;) Better to write a program (a so-called scraper) which will do the work for you. The program has to do three things:

  • generate a list of addresses from which data should be collected;
  • visit pages sequentially and extract information from HTML code;
  • dump data to local database and log work progress.

Address generation should be quite easy. For most sites pagination is built with plain links in which the page number is clearly visible in the main part of the URL (http://example.com/somedb/page/1) or in the query string (http://example.com/somedb?page=1). If pagination is done via AJAX calls the situation is a bit more complex, but let's not bother with that in this post... When you know the pattern for the page number parameter, all that's needed is a simple loop with something like:

string url = string.Format("http://example.com/somedb?page={0}", pageNumber);
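
For example, a minimal sketch of such a loop (totalPages is a hypothetical value you would read from the site's pagination; add using System.Collections.Generic):

// Hypothetical number of result pages (about 3000 in my case)
int totalPages = 3000;

var urls = new List<string>();
for (int pageNumber = 1; pageNumber <= totalPages; pageNumber++)
{
    urls.Add(string.Format("http://example.com/somedb?page={0}", pageNumber));
}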

Now it's time for something more interesting. How to extract data from a webpage? You can use the WebRequest/WebResponse or WebClient classes from the System.Net namespace to get page content. After that you can obtain information via regular expressions. You can also try to treat the downloaded content as XML and scrutinize it with XPath or LINQ to XML. These are not good approaches, however. For a complicated page structure writing a correct expression might be difficult, and one should also remember that in most cases webpages are not valid XML documents. Fortunately, the Html Agility Pack library was created. It allows convenient parsing of HTML pages, even those with malformed code (i.e. lacking proper closing tags). HAP goes through page content and builds a document object model that can later be processed with LINQ to Objects or XPath.

To start working with HAP you should install the NuGet package named HtmlAgilityPack (I was using version 1.4.6) and import the namespace with the same name. If you don't want to use NuGet (why?), download the zip file from the project's website and add a reference to the HtmlAgilityPack.dll file suitable for your platform (the zip contains separate versions for .NET 4.5 and Silverlight 5, for example). The documentation in the .chm file might be useful too. Attention! When I opened the downloaded file (on Windows 7), the documentation looked empty. The "Unblock" option from the file's properties screen helped to solve the problem.

Retrieving webpage content with HAP is very easy. You have to create an HtmlWeb object and use its Load method with the page address:

HtmlWeb htmlWeb = new HtmlWeb();
HtmlDocument htmlDocument = htmlWeb.Load("http://en.wikipedia.org/wiki/Paintball");

In return, you will receive an object of the HtmlDocument class, which is the core of the HAP library.

HtmlWeb contains a bunch of properties that control how a document is retrieved. For example, it is possible to indicate whether cookies should be used (UseCookies) and what the value of the User-Agent header included in the HTTP request should be (UserAgent). For me the AutoDetectEncoding and OverrideEncoding properties were especially useful as they let me correctly read a document with Polish characters.

HtmlWeb htmlWeb = new HtmlWeb() { AutoDetectEncoding = false, OverrideEncoding = Encoding.GetEncoding("iso-8859-2") };
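
The other properties can be set in the same way; for example, a sketch with cookies enabled and a made-up User Agent string:

HtmlWeb htmlWeb = new HtmlWeb() { UseCookies = true, UserAgent = "Mozilla/5.0 (compatible; MyScraper/1.0)" };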

StatusCode (of type System.Net.HttpStatusCode) is another very useful property of HtmlWeb. With it you can check the result of the latest request.
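
For example, a minimal sketch of such a check (reusing the htmlWeb object created earlier):

HtmlDocument htmlDocument = htmlWeb.Load("http://en.wikipedia.org/wiki/Paintball");
if (htmlWeb.StatusCode != System.Net.HttpStatusCode.OK)
{
    // The last request did not end with 200 OK - log it and skip the page
    Console.WriteLine("Request failed with status: " + htmlWeb.StatusCode);
}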

Having the HtmlDocument object ready, you can start to extract data. Here's an example of how to obtain link addresses and texts from the previously downloaded webpage (add using System.Linq):

IEnumerable<HtmlNode> links = htmlDocument.DocumentNode.Descendants("a").Where(x => x.Attributes.Contains("href"));
foreach (var link in links)
{
    Console.WriteLine("Link href={0}, link text={1}", link.Attributes["href"].Value, link.InnerText);
}

The DocumentNode property, of type HtmlNode, points to the page's root. The Descendants method is used to retrieve all links (a tags) that contain an href attribute. After that, texts and addresses are printed on the console. Quite easy, huh? A few other examples:

Getting HTML code of the whole page:

string html = htmlDocument.DocumentNode.OuterHtml;

Getting the element with "footer" id:

HtmlNode footer = htmlDocument.DocumentNode.Descendants().SingleOrDefault(x => x.Id == "footer");

Getting children of the div with "toc" id and displaying the names of child nodes whose type is different than Text:

IEnumerable<HtmlNode> tocChildren = htmlDocument.DocumentNode.Descendants().Single(x => x.Id == "toc").ChildNodes;
foreach (HtmlNode child in tocChildren)
{
    if (child.NodeType != HtmlNodeType.Text)
    {
        Console.WriteLine(child.Name);
    }
}

Getting list elements (li tags) that have the toclevel-1 class:

IEnumerable<HtmlNode> tocLiLevel1 = htmlDocument.DocumentNode.Descendants()
    .Where(x => x.Name == "li" && x.Attributes.Contains("class")
    && x.Attributes["class"].Value.Split().Contains("toclevel-1"));

Notice that the Where filter is quite complex. A simple condition:

Where(x => x.Name == "li" && x.Attributes["class"].Value == "toclevel-1")

is not correct! Firstly, there is no guarantee that each li tag will have the class attribute set, so we need to check if the attribute exists to avoid a NullReferenceException. Secondly, the check for toclevel-1 is flawed. An HTML element might have many classes, so instead of using == it's worthwhile to use Contains(). Plain Value.Contains is not enough though. What if we are looking for the "sec" class and the element has the "secret" class? Such an element would be matched too! Rather than Value.Contains you should use Value.Split().Contains. This way an array of strings will be checked with the equals operator (instead of searching a single string for a substring).
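
A quick illustration of the difference, with a made-up class attribute value (add using System.Linq):

string classValue = "secret item"; // element has two classes: "secret" and "item"
bool substringMatch = classValue.Contains("sec");           // true - a false positive
bool wholeClassMatch = classValue.Split().Contains("sec");  // false - whole class names are compared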

Getting the texts of all li elements which are nested in at least one other li element:

var nestedLiTexts = from node in htmlDocument.DocumentNode.Descendants()
                    where node.Name == "li" && node.Ancestors("li").Count() > 0
                    select node.InnerText;

Beyond LINQ to Objects, XPath might also be used to extract information. For example:

Getting a tags that have an href attribute value starting with # and longer than 15 characters:

IEnumerable<HtmlNode> links = htmlDocument.DocumentNode.SelectNodes("//a[starts-with(@href, '#') and string-length(@href) > 15]");

Finding li elements inside the div with id "toc" which are the third li within their parent element:

IEnumerable<HtmlNode> listItems = htmlDocument.DocumentNode.SelectNodes("//div[@id='toc']//li[3]");

XPath is a complex tool and it's impossible to show all its great capabilities in this post...

HAP lets you explore page structure and content, but it also allows page modification and saving. It has helper methods for detecting document encoding (DetectEncoding), removing HTML entities (DeEntitize) and more... It is also possible to gather validation information (e.g. to check whether the original document had proper closing tags). These topics are beyond the scope of this post.
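
Just to give a taste, a minimal sketch (the attribute change and the output file name are made up):

// Decode HTML entities in a string
string plainText = HtmlEntity.DeEntitize("Tom &amp; Jerry"); // "Tom & Jerry"

// Modify an attribute and save the whole document to a local file
HtmlNode footer = htmlDocument.DocumentNode.Descendants().SingleOrDefault(x => x.Id == "footer");
if (footer != null)
{
    footer.SetAttributeValue("class", "highlighted");
}
htmlDocument.Save("modified.html");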

While processing consecutive pages, dump useful information to the local store most suitable for your needs. Maybe a .csv file will be enough for you, maybe an SQL database will be required? For me a plain text file was sufficient.
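
In the simplest case something like this is enough (a sketch assuming the scraped records were collected as strings; the file name is made up; add using System.IO and System.Collections.Generic):

List<string> records = new List<string> { "1;John;Doe", "2;Jane;Doe" }; // hypothetical scraped rows
File.AppendAllLines("scraped_data.txt", records);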

The last thing worth doing is ensuring that the scraper properly logs information about its work progress (for sure you want to know how far your program got and whether it encountered any errors). For logging it is best to use a specialized library such as log4net. There are a lot of tutorials on how to use log4net, so I will not write about it here. But I will show you a sample configuration which you can use in a console application:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <configSections>
        <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net"/>          
    </configSections>
    <log4net>        
        <root>
            <level value="DEBUG"/>            
            <appender-ref ref="ConsoleAppender" />
            <appender-ref ref="RollingFileAppender"/>
        </root>
        <appender name="ConsoleAppender" type="log4net.Appender.ColoredConsoleAppender">
            <layout type="log4net.Layout.PatternLayout">
                <conversionPattern value="%date{ISO8601} %level [%thread] %logger - %message%newline" />
            </layout>
            <mapping>
                <level value="ERROR" />
                <foreColor value="White" />
                <backColor value="Red" />
            </mapping>
            <filter type="log4net.Filter.LevelRangeFilter">
                <levelMin value="INFO" />                
            </filter>
        </appender>         
        <appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender">
            <file value="Log.txt" />
            <appendToFile value="true" />
            <rollingStyle value="Size" />
            <maxSizeRollBackups value="10" />
            <maximumFileSize value="50MB" />
            <staticLogFileName value="true" />
            <layout type="log4net.Layout.PatternLayout">
                <conversionPattern value="%date{ISO8601} %level [%thread] %logger - %message%newline%exception" />
            </layout>
        </appender>
    </log4net>    
</configuration>

The config above contains two appenders: ConsoleAppender and RollingFileAppender. The first logs text to the console window, ensuring that errors are clearly distinguished by color. To reduce the amount of information, a LevelRangeFilter is set so only entries with INFO or higher level are presented. The second appender logs to a text file (even entries with DEBUG level go there). The maximum size of a single file is set to 50MB and the total number of files is limited to 10. The current log is always in the Log.txt file...
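
To make this config work in a console application, it is enough to configure log4net at startup and obtain a logger - a minimal sketch:

using log4net;
using log4net.Config;

class Program
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(Program));

    static void Main(string[] args)
    {
        XmlConfigurator.Configure(); // reads the log4net section from App.config

        Log.Info("Scraper started...");   // goes to console and file
        Log.Debug("Some detail...");      // goes to file only (console filters below INFO)
        Log.Error("Something failed!");   // shown in white on red in the console
    }
}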

And that's all, the scraper is ready! Run it and let it labor for you. No long hours of dull work - leave that for people who don't know how to program :)

Additionally, you can try a little exercise: instead of creating a list of all pages to visit, determine only the first page and find a link to the next page in the currently processed one...
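
A sketch of that approach (the "Next" link text and markup are obviously hypothetical and depend on the scraped site):

string nextUrl = "http://example.com/somedb?page=1";
while (nextUrl != null)
{
    HtmlDocument page = htmlWeb.Load(nextUrl);

    // ... extract data from the current page here ...

    // Look for a pagination link labeled "Next" (hypothetical markup)
    HtmlNode nextLink = page.DocumentNode.Descendants("a")
        .FirstOrDefault(x => x.Attributes.Contains("href") && x.InnerText.Trim() == "Next");

    nextUrl = nextLink != null ? nextLink.Attributes["href"].Value : null;
}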

P.S. Keep in mind that HAP works on the HTML code that was sent by the server (this code is used by HAP to build the document model). The DOM which you can observe in the browser's developer tools is often a result of script execution and might differ greatly from the one built directly from the HTTP response.

Update 08.12.2013: As requested, I created a simple demo (Visual Studio 2010 solution) of how to use Html Agility Pack and log4net. The app extracts some links from a wiki page and dumps them to a text file. The wiki page is saved to an htm file to avoid a dependency on a web resource that might change. Download