How fast is .NET Garbage Collector? Part 2.

by Miłosz Orzeł 12. June 2014 20:58

Read the first part of the article if you haven’t done so already. Part 1 gives a brief overview of what GC is and how it performs its magic. It contains a test of GC performance with regard to a large array of bytes. You can also find there detailed information about my test environment…

This part will focus on scenarios which put a lot more pressure on GC and appear more commonly in real-world applications. You will see that even a tree of more than 100 million objects can be handled quickly… But first let’s see how GC responds to a big array of type object:

static void TestSpeedForLargeObjectArray()
{
    Stopwatch sw = new Stopwatch();

    Console.Write(string.Format("GC.GetTotalMemory before creating array: {0:N0}. Press any key to create array...", GC.GetTotalMemory(true)));
    Console.Read();
    object[] array = new object[100 * 1000 * 1000]; // About 800 MB (3.2 GB when filled with object instances)
             
    sw.Start();
    for (int i = 0; i < array.Length; i++)
    {
        array[i] = new object();
    }
    Console.WriteLine("Setup time: " + sw.Elapsed);
    Console.WriteLine(string.Format("GC.GetTotalMemory after creating array: {0:N0}. Press Enter to set array to null...", GC.GetTotalMemory(true)));

    if (Console.ReadKey().Key == ConsoleKey.Enter)
    {
        Console.WriteLine("Setting array to null");
        array = null;
    }
    
    sw.Restart();
    GC.Collect();
    Console.WriteLine("Collection time: " + sw.Elapsed);
    Console.WriteLine(string.Format("GC.GetTotalMemory after GC.Collect: {0:N0}. Press any key to finish...", GC.GetTotalMemory(true)));

    Console.WriteLine(array); // To avoid compiler optimization...
    Console.ReadKey();
}

The above test creates an array of 100 million items. Initially such an array takes about 800 megabytes of memory (on the x64 platform). This part is allocated on the LOH. When the object instances are created, total heap allocation jumps to about 3.2 GB. The array items themselves are tiny, so they are part of the Small Object Heap and initially belong to Gen 0.
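A quick back-of-the-envelope check (assuming the typical x64 CLR object layout): 100,000,000 references × 8 bytes ≈ 800 MB for the array itself, plus 100,000,000 empty object instances × 24 bytes each (header, method table pointer and padding to the minimum object size) ≈ 2.4 GB – together the ~3.2 GB reported by GC.GetTotalMemory.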

Here are the test results for the situation when the array is set to null:

GC.GetTotalMemory before creating array: 41,736. Press any key to create array...
Press any key to fill array...
Setup time: 00:00:07.7910574
GC.GetTotalMemory after creating array: 3,200,057,616. Press Enter to set array to null...
Setting array to null
Collection time: 00:00:00.7481998
GC.GetTotalMemory after GC.Collect: 57,624. Press any key to finish...

It took only about 700 ms to reclaim over 3 GB of memory!

Take a look at this graph from Performance Monitor:

Managed memory counters for large object[] array; setting array to null.

You can see that while the program was filling the array, Gen 0 and Gen 1 changed size (notice though that the scale for these counters is 100x bigger than the scale for the others). This means that GC cycles were triggered while the items were being created - this is expected behavior. Notice how the Gen 2 and LOH sizes add up to the total bytes on the managed heap.
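If you prefer to confirm those collections in code rather than in Performance Monitor, GC.CollectionCount can be sampled around the fill loop – a minimal sketch (assuming the array from the test above):

int gen0Before = GC.CollectionCount(0);
int gen1Before = GC.CollectionCount(1);

for (int i = 0; i < array.Length; i++)
{
    array[i] = new object();
}

Console.WriteLine("Gen 0 collections during fill: " + (GC.CollectionCount(0) - gen0Before));
Console.WriteLine("Gen 1 collections during fill: " + (GC.CollectionCount(1) - gen1Before));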

What if, instead of setting the array reference to null, we set the array items to null?
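In code, that variant keeps the array reference alive and only releases the elements – a sketch of the modified fragment:

for (int i = 0; i < array.Length; i++)
{
    array[i] = null; // elements become garbage, but the ~800 MB array itself stays rooted
}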

Let’s see. Here’s the graph:

Managed memory counters for large object[] array; setting array items to null.

Notice that after GC.Collect is done, 800 MB are still allocated - this is the LOH memory held by the array itself…

Here are the results:

GC.GetTotalMemory before creating array: 41,752. Press any key to create array...
Press any key to fill array...
Setup time: 00:00:07.7707024
GC.GetTotalMemory after creating array: 3,200,057,632. Press Enter to set array elements to null...
Setting array elements to null
Collection time: 00:00:01.0926220
GC.GetTotalMemory after GC.Collect: 800,057,672. Press any key to finish...

Ok, enough with arrays. One can argue that, as contiguous blocks of memory, they are easier to handle than the complex object structures that are abundant in real-world programs.

Let’s create a very big tree of small reference types:

static int _itemsCount = 0;

class Item
{
    public Item ChildA { get; set; }
    public Item ChildB { get; set; }
    
    public Item()
    {
        _itemsCount++;
    }           
}

static void AddChildren(Item parent, int depth) 
{
    if (depth == 0)
    {
        return;
    }
    else
    {
        parent.ChildA = new Item();
        parent.ChildB = new Item();

        AddChildren(parent.ChildA, depth - 1);
        AddChildren(parent.ChildB, depth - 1);                
    }
}

static void TestSpeedForLargeTreeOfSmallObjects()
{
    Stopwatch sw = new Stopwatch();

    Console.Write(string.Format("GC.GetTotalMemory before building object tree: {0:N0}. Press any key to build tree...", GC.GetTotalMemory(true)));
    Console.ReadKey();

    sw.Start();
    _itemsCount = 0;       
    Item root = new Item();            
    AddChildren(root, 26);
    Console.WriteLine("Setup time: " + sw.Elapsed);
    Console.WriteLine("Number of items: " + _itemsCount.ToString("N0"));

    Console.WriteLine(string.Format("GC.GetTotalMemory after building object tree: {0:N0}. Press Enter to set root to null...", GC.GetTotalMemory(true)));

    if (Console.ReadKey().Key == ConsoleKey.Enter)
    {
        Console.WriteLine("Setting tree root to null");
        root = null;                
    }
    
    sw.Restart();
    GC.Collect();
    Console.WriteLine("Collection time: " + sw.Elapsed);
    Console.WriteLine(string.Format("GC.GetTotalMemory after GC.Collect: {0:N0}. Press any key to finish...", GC.GetTotalMemory(true)));
                
    Console.WriteLine(root); // To avoid compiler optimization...            
    Console.ReadKey();
}

The test presented above creates a tree with over 130 million nodes which takes almost 4.3 GB of memory.
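The numbers add up (assuming x64): AddChildren(root, 26) produces a full binary tree of 2^27 - 1 = 134,217,727 nodes, and each Item instance occupies 32 bytes (16 bytes of object header and method table pointer plus two 8-byte child references), so 134,217,727 × 32 B ≈ 4.29 GB.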

Here’s what happens when tree root is set to null:

GC.GetTotalMemory before building object tree: 41,616. Press any key to build tree...
Setup time: 00:00:14.3355583
Number of items: 134,217,727
GC.GetTotalMemory after building object tree: 4,295,021,160. Press Enter to set root to null...
Setting tree root to null
Collection time: 00:00:01.1069927
GC.GetTotalMemory after GC.Collect: 53,856. Press any key to finish...

Managed memory counters for large tree of small objects; setting tree root to null.

It took only 1.1 seconds to clear all the garbage! When the root reference was set to null, all the nodes below it became unreachable and, as defined by the mark and sweep algorithm, useless for the application… Notice that this time the LOH is not utilized, as no single object instance is over the 85 KB threshold.

Now let’s see what happens when the root is not set to null and all the objects survive the GC cycle:

GC.GetTotalMemory before building object tree: 41,680. Press any key to build tree...
Setup time: 00:00:14.3915412
Number of items: 134,217,727
GC.GetTotalMemory after building object tree: 4,295,021,224. Press Enter to set root to null...
Collection time: 00:00:03.7172580
GC.GetTotalMemory after GC.Collect: 4,295,021,184. Press any key to finish...

This time it took 3.7 sec (less than 28 nanoseconds per reference) for GC.Collect to run – remember that reachable references put more work on GC than dead ones!

There is one more scenario we should test. Instead of setting root = null, let's set root.ChildA = null. This way half of the tree becomes unreachable, and GC has a chance to reclaim memory and compact the heap to avoid fragmentation. Check the results:

GC.GetTotalMemory before building object tree: 41,696. Press any key to build tree...
Setup time: 00:00:15.1326459
Number of items: 134,217,727
GC.GetTotalMemory after building object tree: 4,295,021,240. Press Enter to set root.ChildA to null...
Setting tree root.ChildA to null
Collection time: 00:00:02.5062596
GC.GetTotalMemory after GC.Collect: 2,147,537,584. Press any key to finish...

Time for the final test. Let’s create a tree of over 2 million complex nodes that contain some object references, a small array and a unique string. Additionally, let’s fill some of the MixedItem instances with a byte array big enough to be put on the Large Object Heap.
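A rough size estimate of what we are about to build (assuming x64): AddChildren(root, 20) produces 2^21 - 1 = 2,097,151 nodes; each holds a 1000-byte array (~2.1 GB in total), every thousandth node also holds a 1,000,000-byte array (~2,097 of them, another ~2.1 GB, all on the LOH), and the rest is strings plus per-node object overhead – which lands close to the ~4.5 GB reported below.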

static int _itemsCount = 0;

class MixedItem
{
    byte[] _smallArray;
    byte[] _bigArray;
    string _uniqueString;

    public MixedItem ChildA { get; set; }
    public MixedItem ChildB { get; set; }

    public MixedItem()
    {
        _itemsCount++;

        _smallArray = new byte[1000];
        if (_itemsCount % 1000 == 0)
        {
            _bigArray = new byte[1000 * 1000];
        }

        _uniqueString = DateTime.Now.Ticks.ToString();
    }
}

static void AddChildren(MixedItem parent, int depth)
{
    if (depth == 0)
    {
        return;
    }
    else
    {
        parent.ChildA = new MixedItem();
        parent.ChildB = new MixedItem();

        AddChildren(parent.ChildA, depth - 1);
        AddChildren(parent.ChildB, depth - 1);
    }
}

static void TestSpeedForLargeTreeOfMixedObjects()
{
    Stopwatch sw = new Stopwatch();

    Console.Write(string.Format("GC.GetTotalMemory before building object tree: {0:N0}. Press any key to build tree...", GC.GetTotalMemory(true)));
    Console.ReadKey();

    sw.Start();
    _itemsCount = 0;
    MixedItem root = new MixedItem();
    AddChildren(root, 20);
    Console.WriteLine("Setup time: " + sw.Elapsed);
    Console.WriteLine("Number of items: " + _itemsCount.ToString("N0"));

    Console.WriteLine(string.Format("GC.GetTotalMemory after building object tree: {0:N0}. Press Enter to set root to null...", GC.GetTotalMemory(true)));

    if (Console.ReadKey().Key == ConsoleKey.Enter)
    {
        Console.WriteLine("Setting tree root to null");
        root = null;
    }

    sw.Restart();
    GC.Collect();
    Console.WriteLine("Collection time: " + sw.Elapsed);
    Console.WriteLine(string.Format("GC.GetTotalMemory after GC.Collect: {0:N0}. Press any key to finish...", GC.GetTotalMemory(true)));

    Console.WriteLine(root); // To avoid compiler optimization...
    Console.ReadKey();
}

How will GC perform when subjected to almost 4.5 GB of managed heap memory with such a complex structure? Test results for setting root to null:

GC.GetTotalMemory before building object tree: 41,680. Press any key to build tree...
Setup time: 00:00:11.5479202
Number of items: 2,097,151
GC.GetTotalMemory after building object tree: 4,496,245,632. Press Enter to set root to null...
Setting tree root to null
Collection time: 00:00:00.5055634
GC.GetTotalMemory after GC.Collect: 54,520. Press any key to finish...

Managed memory counters for large tree of mixed objects; setting tree root to null.

And in case you’re wondering, here's what happens when root is not set to null:

GC.GetTotalMemory before building object tree: 41,680. Press any key to build tree...
Setup time: 00:00:11.6676969
Number of items: 2,097,151
GC.GetTotalMemory after building object tree: 4,496,245,632. Press Enter to set root to null...
Collection time: 00:00:00.5617486
GC.GetTotalMemory after GC.Collect: 4,496,245,592. Press any key to finish...

So what does it all mean? The conclusion is that unless you are writing applications which require extreme efficiency or a total guarantee of uninterrupted execution, you should be really glad that .NET uses automatic memory management. GC is a great piece of software that frees you from mundane and error-prone memory handling. It lets you focus on what really matters: providing features for application users. I’ve been professionally writing .NET applications for the past 8 years (enterprise stuff, mainly web apps and Windows services) and I’m yet to witness [1] a situation when GC cost would be a major factor. Usually the performance bottleneck lies in things like: bad DB configuration, inefficient SQL/ORM queries, slow remote services, bad network utilization, lack of parallelism, poor caching, sluggish client side rendering etc. If you avoid basic mistakes like creating too many strings, you probably won’t even notice that there is a Garbage Collector :)

Update 31.08.2014: I've just run the most demanding test (big tree of small reference types with all objects surviving GC cycle) on my new laptop. The result is 3.3s compared to 3.7s result presented in the post. Test program: .NET 4.5 console app in Release mode run without debugger attached. Hardware: i7-4700HQ 2.4-3.4GHz 4 Core CPU, 8GB DDR3/1600MHz RAM. System: Windows 7 Home Premium x64.

1. I’ve seen some out of memory exceptions related to LOH fragmentation. The good thing is that LOH algorithms are improving and the x86 platform, which is especially susceptible to such errors, is becoming a thing of the past…

How fast is .NET Garbage Collector? Part 1.

by Miłosz Orzeł 29. May 2014 21:08

.NET GC is very fast! Well... I hope you need more than this reassuring statement; if so, read on :) I will show you some test results to prove that I’m not lying, but first I will give you a quick reminder about what GC is:

Garbage Collector is a fundamental component of the .NET CLR. It takes care of freeing managed heap memory so a programmer doesn’t have to think about deallocation. Contrary to popular belief, GC handles memory occupied by both reference and value types. Why? Because quite often space for things like structures or integers is allocated on the heap. It happens, for example, when a value type is an item of an array or is a field in a class instance.
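For example (a tiny illustration of value types ending up on the heap):

struct Point
{
    public int X;
    public int Y;
}

class Holder
{
    public Point Location; // struct data lives on the heap, inline inside the Holder instance
}

static void Example()
{
    Point[] points = new Point[1000]; // all 1000 Point values live inline in one heap-allocated array
    Holder holder = new Holder();     // holder.Location is heap memory too - and GC manages all of it
}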

GC in .NET uses a mark and sweep algorithm: it walks through the object graph starting from root elements (such as statics, references on the stack or in registers) and marks every object it can reach. When the walk is done, GC knows it can safely free the memory of objects that have not been marked – because, as unreachable, they are useless for the application.

For efficiency purposes GC supports the notion of generations. When an object is created it belongs to Gen 0 (except for so-called large objects). If a Gen 0 object survives a GC cycle it’s moved to Gen 1. If it survives one more cycle it goes to Gen 2 and stays there (there’s no Gen 3 [1]). Most objects are short lived – they don’t get past Gen 0 or Gen 1, so .NET GC tries to free memory from the lower generations first. It does Gen 2 collection (aka full collection) far less often than Gen 0 collection. If an object is big – above 85000 [2] bytes – it is allocated on the Large Object Heap (LOH) and goes straight to Gen 2. Treating big objects the same way as small objects would have a negative impact on heap defragmentation [3] algorithms because moving such objects is time-consuming.
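You can observe the promotions (and the LOH fast path) yourself with GC.GetGeneration – a quick sketch:

object o = new object();
Console.WriteLine(GC.GetGeneration(o)); // 0
GC.Collect();
Console.WriteLine(GC.GetGeneration(o)); // 1 - survived one collection
GC.Collect();
Console.WriteLine(GC.GetGeneration(o)); // 2 - and it stays there

byte[] big = new byte[200 * 1000]; // over the LOH threshold
Console.WriteLine(GC.GetGeneration(big)); // 2 - straight to Gen 2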

GC supports different modes for workstation and server configurations, is able to do some work in background threads, has to consider special cases like finalizers or pinned memory buffers, and its settings vary between platforms (e.g. CLR on a PC is not the same as the one on Xbox)… well, let’s stop here – I’ve promised a “quick reminder”!

Ok, time for the test! 

In this first post I will show you how fast .NET GC can handle large arrays of value (non-reference) types. The test will involve an array of bytes that occupies about two gigabytes of memory. Despite its large size, such an object does not put much pressure on the Garbage Collector. This is because the only reference GC has to check is the one to the array itself. If the array becomes unreachable, all memory occupied by its elements can be safely reused. Additionally, such an array, being part of the LOH, is not copied (to avoid heap fragmentation) when it survives a collection cycle… In the second installment of this article I will show you how GC can handle more complex scenarios. We will examine performance for an object array and, more importantly, a tree of objects with millions of references…

Note about the test environment: I’ve run the test code on a desktop PC with an Intel Core i5-2400 3.1 GHz 4 Core CPU, 12 GB of DDR3/1333 RAM, running Windows 7 Ultimate x64. The program was a .NET 4.0 console application compiled in Release mode and run without a debugger attached.

Here’s the test code:

static void TestSpeedForLargeByteArray()
{
    Stopwatch sw = new Stopwatch();

    Console.Write(string.Format("GC.GetTotalMemory before creating array: {0:N0}. Press any key to create array...", GC.GetTotalMemory(true)));
    Console.ReadKey();
    byte[] array = new byte[2000 * 1000 * 1000]; // About 2 GB
   
    sw.Start();
    for (int i = 0; i < array.Length; i++)
    {
        array[i] = 1; // Touch array elements to fill working set              
    }          
    Console.WriteLine("Setup time: " + sw.Elapsed);

    Console.WriteLine(string.Format("GC.GetTotalMemory after creating array: {0:N0}. Press Enter to set array to null...", GC.GetTotalMemory(true)));
    if (Console.ReadKey().Key == ConsoleKey.Enter)
    {
        Console.WriteLine("Setting array to null");
        array = null;
    }
               
    sw.Restart();
    GC.Collect();                 
    Console.WriteLine("Collection time: " + sw.Elapsed);
    Console.WriteLine(string.Format("GC.GetTotalMemory after GC.Collect: {0:N0}. Press any key to finish...", GC.GetTotalMemory(true)));

    Console.WriteLine(array); // To avoid compiler optimization...
    Console.ReadKey();
}

As you can see, the test is very simple. The code uses two methods of the static GC class: GC.GetTotalMemory and GC.Collect. The former returns the amount of allocated managed memory and the latter forces the Garbage Collector to do its job and free unused memory. The only thing that might be surprising is the loop that touches the array items. Without it you will witness a “strange” phenomenon: after the array is defined, GC.GetTotalMemory reports about 2 GB but you will not see a memory usage increase in Task Manager! This is because taskmgr.exe shows Working Set data. You can run a more advanced tool, Resource Monitor (resmon.exe), to see what’s happening:

This is the screenshot before the “touch”:

Memory usage before access to array items.

And this is the one after the loop is executed:

Memory usage after access to array items.

Here are the test results (yeah, finally):

GC.GetTotalMemory before creating array: 41,568. Press any key to create array...
Setup time: 00:00:01.0633903
GC.GetTotalMemory after creating array: 2,000,053,864. Press Enter to set array to null...
Setting array to null
Collection time: 00:00:00.1443391
GC.GetTotalMemory after GC.Collect: 53,800. Press any key to finish...

What you can see here is that it took GC just around 150 milliseconds to free about 2 GB of memory. Nice, huh? You can appreciate the speed especially if you compare it with the 1 second it took just to iterate over the array!

Below is a screenshot from Performance Monitor (perfmon.exe) running a couple of .NET memory counters:

Performance Monitor with managed memory counters.

Our array is a big object (well over the LOH threshold) – hence after it is defined the LOH size increases yet the Gen 0 heap size remains flat.

Below are the results for GC.Collect run when the reference to the array was not set to null:

GC.GetTotalMemory before creating array: 41,568. Press any key to create array...
Setup time: 00:00:01.0385779
GC.GetTotalMemory after creating array: 2,000,053,864. Press Enter to set array to null...
Collection time: 00:00:00.0001287
GC.GetTotalMemory after GC.Collect: 2,000,053,824. Press any key to finish...

A fraction of a millisecond. Totally negligible! Why am I even mentioning a test for the situation when memory is not collected? Well, in part two you will see that GC usually has more work to do when heap items survive the collection cycle…

I hope to post the second part of the article in about a week or two. No promises though – you know, life… ;)

Update 26.07.2014: I've changed the screenshot from perfmon (Gen 0 and Gen 1 scale is bigger now).

Update 24.06.2014: I wrote the second part of this article few days ago. Click here to read it.

1. You can use the GC.MaxGeneration property to check it.

2. The LOH threshold is an implementation detail. Most sources mention 85KB as the limit but it's not always the case - an array of doubles with as little as 1000 items goes on the LOH (on x86)...

3. .NET 4.5 introduces LOH improvements for better fragmentation prevention through balancing and enhanced free list usage. Future releases will probably have compaction option too.

jQuery UI Autocomplete in MVC 5 - selecting nested entity

by Miłosz Orzeł 21. April 2014 14:56

Imagine that you want to create an edit view for a Company entity which has two properties: Name (of type string) and Boss (of type Person). You want both properties to be editable. For Company.Name a simple text input is enough, but for Company.Boss you want to use the jQuery UI Autocomplete widget*. This widget has to meet the following requirements:

  • suggestions should appear when user starts typing person's last name or presses down arrow key;
  • identifier of person selected as boss should be sent to the server;
  • items in the list should provide additional information (first name and date of birth);
  • user has to select one of the suggested items (arbitrary text is not acceptable);
  • the boss property should be validated (with validation message and style set for appropriate input field).

The above requirements appear quite often in web applications. I've seen many over-complicated ways in which they were implemented. I want to show you how to do it quickly and cleanly... The assumption is that you have basic knowledge of jQuery UI Autocomplete and ASP.NET MVC. In this post I will show only the code which is related to the autocomplete functionality, but you can download the full demo project here. It’s an ASP.NET MVC 5/Entity Framework 6/jQuery UI 1.10.4 project created in Visual Studio 2013 Express for Web and tested in Chrome 34, FF 28 and IE 11 (in 11 and 8 mode).

So here are our domain classes:

public class Company
{
    public int Id { get; set; } 

    [Required]
    public string Name { get; set; }

    [Required]
    public Person Boss { get; set; }
}
public class Person
{
    public int Id { get; set; }

    [Required]
    [DisplayName("First Name")]
    public string FirstName { get; set; }
    
    [Required]
    [DisplayName("Last Name")]
    public string LastName { get; set; }

    [Required]
    [DisplayName("Date of Birth")]
    public DateTime DateOfBirth { get; set; }

    public override string ToString()
    {
        return string.Format("{0}, {1} ({2})", LastName, FirstName, DateOfBirth.ToShortDateString());
    }
}

Nothing fancy there, a few properties with standard attributes for validation and good-looking display. The Person class has a ToString override – the text from this method will be used in the autocomplete suggestions list.

The edit view for Company is based on this view model:

public class CompanyEditViewModel
{    
    public int Id { get; set; }

    [Required]
    public string Name { get; set; }

    [Required]
    public int BossId { get; set; }

    [Required(ErrorMessage="Please select the boss")]
    [DisplayName("Boss")]
    public string BossLastName { get; set; }
}

Notice that there are two properties for Boss-related data: BossId stores the identifier of the selected Person and BossLastName backs the visible input field.

Below is the part of the edit view that is responsible for displaying the input field with the jQuery UI Autocomplete widget for the Boss property:

<div class="form-group">
    @Html.LabelFor(model => model.BossLastName, new { @class = "control-label col-md-2" })
    <div class="col-md-10">
        @Html.TextBoxFor(model => model.BossLastName, new { @class = "autocomplete-with-hidden", data_url = Url.Action("GetListForAutocomplete", "Person") })
        @Html.HiddenFor(model => model.BossId)
        @Html.ValidationMessageFor(model => model.BossLastName)
    </div>
</div>

The form-group and col-md-10 classes belong to the Bootstrap framework which is used in the MVC 5 web project template – don’t bother with them. The BossLastName property is used for the label, the visible input field and the validation message. There’s a hidden input field which stores the identifier of the selected boss (Person entity). The @Html.TextBoxFor helper, which is responsible for rendering the visible input field, defines a class and a data attribute. The autocomplete-with-hidden class marks inputs that should obtain the widget. The data-url attribute value informs about the address of the action method that provides data for the autocomplete. Using Url.Action is better than hardcoding such an address in a JavaScript file because the helper takes into account routing rules, which might change.

This is the HTML markup that is produced by the above Razor code:

<div class="form-group">
    <label class="control-label col-md-2" for="BossLastName">Boss</label>
    <div class="col-md-10">
        <span class="ui-helper-hidden-accessible" role="status" aria-live="polite"></span>
        <input name="BossLastName" class="autocomplete-with-hidden ui-autocomplete-input" id="BossLastName" type="text" value="Kowalski" 
         data-val-required="Please select the boss" data-val="true" data-url="/Person/GetListForAutocomplete" autocomplete="off">
        <input name="BossId" id="BossId" type="hidden" value="4" data-val-required="The BossId field is required." data-val-number="The field BossId must be a number." data-val="true">
        <span class="field-validation-valid" data-valmsg-replace="true" data-valmsg-for="BossLastName"></span>
    </div>
</div>

This is the JavaScript code responsible for installing the jQuery UI Autocomplete widget:

$(function () {
    $('.autocomplete-with-hidden').autocomplete({
        minLength: 0,
        source: function (request, response) {
            var url = $(this.element).data('url');
   
            $.getJSON(url, { term: request.term }, function (data) {
                response(data);
            })
        },
        select: function (event, ui) {
            $(event.target).next('input[type=hidden]').val(ui.item.id);
        },
        change: function(event, ui) {
            if (!ui.item) {
                $(event.target).val('').next('input[type=hidden]').val('');
            }
        }
    });
})

The widget’s source option is set to a function. This function pulls data from the server with a $.getJSON call. The URL is extracted from the data-url attribute. If you want to control caching or provide error handling, you may want to switch to the $.ajax function. The purpose of the change event handler is to ensure that values for BossId and BossLastName are set only if the user selected an item from the suggestions list.

This is the action method that provides data for autocomplete:

public JsonResult GetListForAutocomplete(string term)
{               
    Person[] matching = string.IsNullOrWhiteSpace(term) ?
        db.Persons.ToArray() :
        db.Persons.Where(p => p.LastName.ToUpper().StartsWith(term.ToUpper())).ToArray();

    return Json(matching.Select(m => new { id = m.Id, value = m.LastName, label = m.ToString() }), JsonRequestBehavior.AllowGet);
}

value and label are standard properties expected by the widget. label determines the text which is shown in the suggestions list, value designates what data is presented in the input field on which the widget is installed. id is a custom property for indicating which Person entity was selected. It is used in the select event handler (notice the reference to ui.item.id): the selected ui.item.id is set as the value of the hidden input field - this way it will be sent in the HTTP request when the user decides to save Company data.

Finally this is the controller method responsible for saving Company data:

public ActionResult Edit([Bind(Include="Id,Name,BossId,BossLastName")] CompanyEditViewModel companyEdit)
{
    if (ModelState.IsValid)
    {
        Company company = db.Companies.Find(companyEdit.Id);
        if (company == null)
        {
            return HttpNotFound();
        }

        company.Name = companyEdit.Name;

        Person boss = db.Persons.Find(companyEdit.BossId);
        company.Boss = boss;
        
        db.Entry(company).State = EntityState.Modified;
        db.SaveChanges();
        return RedirectToAction("Index");
    }
    return View(companyEdit);
}

Pretty standard stuff. If you've ever used Entity Framework, the above method should be clear to you. If it's not, don't worry. For the purpose of this post, the important thing to notice is that we can use companyEdit.BossId because it was properly filled by the model binder thanks to our hidden input field.

That's it, all requirements are met! Easy, huh? :)

* You may be wondering why I want to use a jQuery UI widget in a Visual Studio 2013 project which by default uses Twitter Bootstrap. It's true that Bootstrap has some widgets and plugins, but after a bit of experimentation I've found that for some more complicated scenarios jQuery UI does a better job. The set of controls is simply more mature...

Html Agility Pack - massive information extraction from WWW pages

by Miłosz Orzeł 1. December 2013 22:56

Recently I needed to acquire some database. Unfortunately, it was published only as a website that presented 50 records per page. The whole database had more than 150 thousand records. What to do in such a situation? Click through 3000 pages, manually collecting data in a text file? One week and it's done! ;) Better to write a program (a so-called scraper) which will do the work for you. The program has to do three things:

  • generate a list of addresses from which data should be collected;
  • visit pages sequentially and extract information from HTML code;
  • dump data to local database and log work progress.

Address generation should be quite easy. For most sites pagination is built with plain links in which the page number is clearly visible in the main part of the URL (http://example.com/somedb/page/1) or in the query string (http://example.com/somedb?page=1). If pagination is done via AJAX calls the situation is a bit more complex, but let's not bother with that in this post... When you know the pattern for the page number parameter, all that's needed is a simple loop with something like:

string url = string.Format("http://example.com/somedb?page={0}", pageNumber);
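Expanded into a full loop, it could look like this (a sketch, assuming 3000 pages and the query string pattern above):

List<string> urls = new List<string>();
for (int pageNumber = 1; pageNumber <= 3000; pageNumber++)
{
    urls.Add(string.Format("http://example.com/somedb?page={0}", pageNumber));
}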

Now it's time for something more interesting. How to extract data from a webpage? You can use the WebRequest/WebResponse or WebClient classes from the System.Net namespace to get page content. After that you can obtain information via regular expressions. You can also try to treat the downloaded content as XML and scrutinize it with XPath or LINQ to XML. These are not good approaches, however. For a complicated page structure, writing a correct expression might be difficult, and one should also remember that in most cases webpages are not valid XML documents. Fortunately, the Html Agility Pack library was created. It allows convenient parsing of HTML pages, even those with malformed code (i.e. lacking proper closing tags). HAP goes through page content and builds a document object model that can later be processed with LINQ to Objects or XPath.

To start working with HAP you should install the NuGet package named HtmlAgilityPack (I was using version 1.4.6) and import the namespace with the same name. If you don't want to use NuGet (why?), download the zip file from the project's website and add a reference to the HtmlAgilityPack.dll file suitable for your platform (the zip contains separate versions for .NET 4.5 and Silverlight 5, for example). The documentation in the .chm file might be useful too. Attention! When I opened the downloaded file (in Windows 7), the documentation looked empty. The "Unlock" option from the file's properties screen helped to solve the problem.

Retrieving webpage content with HAP is very easy. You have to create an HtmlWeb object and use its Load method with the page address:

HtmlWeb htmlWeb = new HtmlWeb();
HtmlDocument htmlDocument = htmlWeb.Load("http://en.wikipedia.org/wiki/Paintball");

In return, you will receive an object of the HtmlDocument class, which is the core of the HAP library.

HtmlWeb contains a bunch of properties that control how the document is retrieved. For example, it is possible to indicate whether cookies should be used (UseCookies) and what the value of the User-Agent header included in the HTTP request should be (UserAgent). For me the AutoDetectEncoding and OverrideEncoding properties were especially useful, as they let me correctly read a document with Polish characters.

HtmlWeb htmlWeb = new HtmlWeb() { AutoDetectEncoding = false, OverrideEncoding = Encoding.GetEncoding("iso-8859-2") };

StatusCode (of type System.Net.HttpStatusCode) is another very useful property of HtmlWeb. With it you can check the result of the latest request processing.
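A sketch of how it can be used to skip or retry failed pages:

HtmlWeb htmlWeb = new HtmlWeb();
HtmlDocument htmlDocument = htmlWeb.Load(url);

if (htmlWeb.StatusCode != System.Net.HttpStatusCode.OK)
{
    // log the problem, then retry or skip this page...
}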

Having the HtmlDocument object ready, you can start to extract data. Here's an example of how to obtain link addresses and texts from the previously downloaded webpage (add using System.Linq):

IEnumerable<HtmlNode> links = htmlDocument.DocumentNode.Descendants("a").Where(x => x.Attributes.Contains("href"));
foreach (var link in links)
{
    Console.WriteLine(string.Format("Link href={0}, link text={1}", link.Attributes["href"].Value, link.InnerText));       
}

The DocumentNode property of type HtmlNode points to the page's root. The Descendants method is used to retrieve all links (a tags) that contain an href attribute. After that, texts and addresses are printed on the console. Quite easy, huh? A few other examples:

Getting HTML code of the whole page:

string html = htmlDocument.DocumentNode.OuterHtml;

Getting element with "footer" id:

HtmlNode footer = htmlDocument.DocumentNode.Descendants().SingleOrDefault(x => x.Id == "footer");

Getting children of the div with "toc" id and displaying names of child nodes whose type is different than Text:

IEnumerable<HtmlNode> tocChildren = htmlDocument.DocumentNode.Descendants().Single(x => x.Id == "toc").ChildNodes;
foreach (HtmlNode child in tocChildren)
{
    if (child.NodeType != HtmlNodeType.Text)
    {
        Console.WriteLine(child.Name);
    }
}

Getting list elements (li tag) that have toclevel-1 class:

IEnumerable<HtmlNode> tocLiLevel1 = htmlDocument.DocumentNode.Descendants()
    .Where(x => x.Name == "li" && x.Attributes.Contains("class")
    && x.Attributes["class"].Value.Split().Contains("toclevel-1"));

Notice that the Where filter is quite complex. A simple condition:

Where(x => x.Name == "li" && x.Attributes["class"].Value == "toclevel-1")

is not correct! Firstly, there is no guarantee that each li tag will have the class attribute set, so we need to check if the attribute exists to avoid a NullReferenceException. Secondly, the check for toclevel-1 is flawed. An HTML element might have many classes, so instead of using == it's worthwhile to use Contains(). Plain Value.Contains is not enough, though. What if we are looking for a "sec" class and an element has a "secret" class? Such an element will be matched too! Rather than Value.Contains you should use Value.Split().Contains. This way an array of strings will be checked via the equals operator (instead of searching a single string for a substring).

Getting texts of all li elements which are nested in at least one other li element:

var h1Texts = from node in htmlDocument.DocumentNode.Descendants()
              where node.Name == "li" && node.Ancestors("li").Count() > 0
              select node.InnerText;

Beyond LINQ to Objects, XPath might also be used to extract information. For example:

Getting a tags that have an href attribute value starting with # and longer than 15 characters:

IEnumerable<HtmlNode> links = htmlDocument.DocumentNode.SelectNodes("//a[starts-with(@href, '#') and string-length(@href) > 15]");

Finding li elements inside the div with id "toc" which are the third child of their parent element:

IEnumerable<HtmlNode> listItems = htmlDocument.DocumentNode.SelectNodes("//div[@id='toc']//li[3]");

XPath is a complex tool and it's impossible to show all its great capabilities in this post...

HAP lets you explore page structure and content, but it also allows page modification and saving. It has helper methods good for detecting document encoding (DetectEncoding), removing HTML entities (DeEntitize) and more... It is also possible to gather validation information (i.e. check if the original document had proper closing tags). These topics are beyond the scope of this post.

While processing consecutive pages, dump useful information to the local storage most suitable for your needs. Maybe a .csv file will be enough for you, maybe an SQL database will be required? For me a plain text file was sufficient.
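For example (a sketch with a hypothetical records collection of scraped entries, each with Id and Name fields):

// records is a hypothetical collection holding the fields extracted from one page
File.AppendAllLines("database.txt",
    records.Select(r => string.Format("{0};{1}", r.Id, r.Name)));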

The last thing worth doing is ensuring that the scraper properly logs information about its work progress (for sure you want to know how far your program got and whether it encountered any errors). For logging it is best to use a specialized library such as log4net. There are a lot of tutorials on how to use log4net, so I will not write about it. But I will show you a sample configuration which you can use in a console application:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <configSections>
        <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net"/>          
    </configSections>
    <log4net>        
        <root>
            <level value="DEBUG"/>            
            <appender-ref ref="ConsoleAppender" />
            <appender-ref ref="RollingFileAppender"/>
        </root>
        <appender name="ConsoleAppender" type="log4net.Appender.ColoredConsoleAppender">
            <layout type="log4net.Layout.PatternLayout">
                <conversionPattern value="%date{ISO8601} %level [%thread] %logger - %message%newline" />
            </layout>
            <mapping>
                <level value="ERROR" />
                <foreColor value="White" />
                <backColor value="Red" />
            </mapping>
            <filter type="log4net.Filter.LevelRangeFilter">
                <levelMin value="INFO" />                
            </filter>
        </appender>         
        <appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender">
            <file value="Log.txt" />
            <appendToFile value="true" />
            <rollingStyle value="Size" />
            <maxSizeRollBackups value="10" />
            <maximumFileSize value="50MB" />
            <staticLogFileName value="true" />
            <layout type="log4net.Layout.PatternLayout">
                <conversionPattern value="%date{ISO8601} %level [%thread] %logger - %message%newline%exception" />
            </layout>
        </appender>
    </log4net>    
</configuration>

The above config contains two appenders: ConsoleAppender and RollingFileAppender. The first logs text to the console window, ensuring that errors are clearly distinguished by color. To reduce the amount of information, LevelRangeFilter is set so only entries with INFO or higher level are presented. The second appender logs to a text file (even entries with DEBUG level go there). The maximum size of a single file is set to 50MB and the total number of files is limited to 10. The current log is always in the Log.txt file...

And that's all, the scraper is ready! Run it and let it labor for you. No dull, long hours of work - leave that for people who don't know how to program :)

Additionally, you can try a little exercise: instead of creating a list of all the pages to visit, determine only the first page and find a link to the next page in the currently processed one...

P.S. Keep in mind that HAP works on the HTML code that was sent by the server (this code is used by HAP to build the document model). The DOM which you can observe in a browser's developer tools is often a result of script execution and might differ greatly from the one built directly from the HTTP response.

Update 08.12.2013: As requested, I created a simple demo (Visual Studio 2010 solution) of how to use Html Agility Pack and log4net. The app extracts some links from a wiki page and dumps them to a text file. The wiki page is saved to an htm file to avoid a dependency on a web resource that might change. Download

Short but very useful regex – lookbehind, lazy, group and backreference

by Miłosz Orzeł 16. August 2013 10:04

Recently, I wanted to extract calls to an external system from log files and do some LINQ to XML processing on the obtained data. Here’s a sample log line (simplified, the real log was way more complicated but it doesn’t matter for this post):

Call:<getName seqNo="56789"><id>123</id></getName> Result:<getName seqNo="56789">John Smith</getName>

I was interested in XML data of the call:

<getName seqNo="56789">
  <id>123</id>
</getName>

Quick tip: a super-easy way to get such nicely formatted XML in .NET 3.5 or later is to invoke the ToString method on an XElement object:

var xml = System.Xml.Linq.XElement.Parse(someUglyXmlString);     
Console.WriteLine(xml.ToString());

When it comes to the log, some things were certain:

  • call’s XML will be logged after the “Call:” text at the beginning of a line
  • call’s root element name will contain only alphanumeric chars or underscores
  • there will be no line breaks in call’s data
  • call’s root element name may also appear in the “Result” section

Getting to the proper information was quite easy thanks to Regex class:

Regex regex = new Regex(@"(?<=^Call:)<(\w+).*?</\1>");
string call = regex.Match(logLine).Value;

This short regular expression has a couple of interesting parts. It may not be perfect but it proved really helpful in log analysis. If this regex is not entirely clear to you - read on, you will need to use something similar sooner or later.

Here’s the same regex with comments (RegexOptions.IgnorePatternWhitespace is required to process an expression commented this way):

string pattern = @"(?<=^Call:) # Positive lookbehind for call marker
                   <(\w+)      # Capturing group for opening tag name
                   .*?         # Lazy wildcard (everything in between)
                   </\1>       # Backreference to opening tag name";   
Regex regex = new Regex(pattern, RegexOptions.IgnorePatternWhitespace);
string call = regex.Match(logLine).Value;

Positive lookbehind

(?<=Call:) is a lookaround, or more precisely a positive lookbehind. It’s a zero-width assertion that lets us check whether some text is preceded by another text. Here “Call:” is the preceding text we are looking for. (?<=something) denotes positive lookbehind. There is also negative lookbehind, expressed by (?<!something). With negative lookbehind we can match text that doesn’t have a particular string before it. Lookaround checks a fragment of the text but doesn't become part of the match value. So the result of this:

Regex.Match("X123", @"(?<=X)\d*").Value

Will be "123" rather than "X123".

The .NET regex engine has lookaheads too. Check this awesome page if you want to learn more about lookarounds.

Note: In some cases (like in our log examination example), instead of using a positive lookaround we may use a non-capturing group...
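For example, a sketch of such an alternative – the XML is then read from a capturing group instead of the whole match value:

Regex alternative = new Regex(@"^(?:Call:)(<(\w+).*?</\2>)");
string call = alternative.Match(logLine).Groups[1].Value;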

Capturing group

<(\w+) will match the less-than sign followed by one or more characters from the \w class (letters, digits or underscores). The \w+ part is surrounded with parentheses to create a group containing the XML root name (getName for our sample log line). We later use this group to find the closing tag with the use of a backreference. (\w+) is a capturing group, which means that the results of this group's match are added to the Groups collection of the Match object. If you want to put part of the expression into a group but you don’t want to push results into the Groups collection, you may use a non-capturing group by adding a question mark and colon after the opening parenthesis, like this: (?:something)
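Continuing with the regex and logLine defined earlier, the group can be inspected like this:

Match match = regex.Match(logLine);
Console.WriteLine(match.Value);           // <getName seqNo="56789"><id>123</id></getName>
Console.WriteLine(match.Groups[1].Value); // getName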

Lazy wildcard

.*? matches all characters except newline (because we are not using RegexOptions.Singleline) in lazy (or non-greedy) mode thanks to the question mark after the asterisk. By default the * quantifier is greedy, which means that the regex engine will try to match as much text as possible. In our case, the default mode would result in too much text being matched:

<getName seqNo="56789"><id>123</id></getName> Result:<getName seqNo="56789">John Smith</getName>

Backreference

</\1> matches the XML closing tag where the element's name is provided by the \1 backreference. Remember the (\w+) group? This group has number 1 and by using the \1 syntax we are referencing the text matched by this group. So for our sample log, </\1> gives us </getName>. If a regex is complex it may be a good idea to ditch numbered references and use named references instead. You can name a group with the (?<name>…) or (?'name'…) syntax and reference it by using \k<name> or \k'name'. So your expression could look like this:

@"(?<=^Call:)<(?<tag>\w+).*?</\k<tag>>"

or like this:

@"(?<=^Call:)<(?'tag'\w+).*?</\k'tag'>"

The latter version is better for our purpose. Using < > signs while matching XML is confusing. In this case the regex engine will do just fine with the < > version, but keep in mind that source code is written for humans…

Regular expressions look intimidating, but do yourself a favor and spend a few hours practicing them - they are extremely useful (not only for quick log analysis)!

Fast pixel operations in .NET (with and without unsafe)

by Miłosz Orzeł 6. July 2013 21:18

The Bitmap class has GetPixel and SetPixel methods that let you acquire and change the color of chosen pixels. Those methods are very easy to use but are also extremely slow. My previous post gives a detailed explanation of the topic, click here if you are interested.

Fortunately, you don’t have to use external libraries (or resign from .NET altogether) to do fast image manipulation. The Framework contains a class called ColorMatrix that lets you apply many changes to images in an efficient manner. Properties such as contrast or saturation can be modified this way. But what about manipulation of individual pixels? It can be done too, with help from the Bitmap.LockBits method and the BitmapData class…

A good way to test individual pixel manipulation speed is color difference detection. The task is to find portions of an image that have a color similar to some chosen color. How to check if colors are similar? Think about a color as a point in three-dimensional space, where the axes are: red, green and blue. Two colors are two points. The difference between the colors is described by the distance between the two points in RGB space.

Colors as points in 3D space

diff = sqrt((C1R - C2R)² + (C1G - C2G)² + (C1B - C2B)²)

This technique is very easy to implement and gives decent results. Color comparison is actually a pretty complex matter, though. Different color spaces are better suited for the task than RGB, and human color perception should be taken into account (e.g. our eyes are more keen to detect differences in shades of green than in shades of blue). But let’s keep things simple here…
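To make the idea concrete, here is a minimal helper expressing the comparison used in the methods below (note that it compares squared values, so no square root is needed):

static bool IsSimilarColor(Color c1, Color c2, int tolerance)
{
    int diffR = c1.R - c2.R;
    int diffG = c1.G - c2.G;
    int diffB = c1.B - c2.B;

    // distance² <= tolerance² is equivalent to comparing the distances themselves
    return diffR * diffR + diffG * diffG + diffB * diffB <= tolerance * tolerance;
}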

Our test image will be this Ultra HD 8K (7680x4320, 33.1Mpx) picture* (on this blog it’s of course scaled down to save bandwidth):

Color difference detection input image (scaled down for blog)

This is a method that may be used to look for R=253 G=129 B=255 pixels (aka “pink bra”). It sets matching pixels to white (the rest will be black):

static void DetectColorWithGetSetPixel(Bitmap image, byte searchedR, byte searchedG, byte searchedB, int tolerance)
{
    int toleranceSquared = tolerance * tolerance;
    
    for (int x = 0; x < image.Width; x++)
    {
        for (int y = 0; y < image.Height; y++)
        {
            Color pixel = image.GetPixel(x, y);

            int diffR = pixel.R - searchedR;
            int diffG = pixel.G - searchedG;
            int diffB = pixel.B - searchedB;

            int distance = diffR * diffR + diffG * diffG + diffB * diffB;

            image.SetPixel(x, y, distance > toleranceSquared ? Color.Black : Color.White);
        }
    }
}

The above code is our terribly slow Get/SetPixel baseline. If we call it this way (named parameters for clarity):

DetectColorWithGetSetPixel(image, searchedR: 253, searchedG: 129, searchedB: 255, tolerance: 84);

we will receive the following outcome:

Color difference detection output image (scaled down)

The result may be OK, but having to wait over 84300 ms* is a complete disaster!

Now check out this method:

static unsafe void DetectColorWithUnsafe(Bitmap image, byte searchedR, byte searchedG, byte searchedB, int tolerance)
{
    BitmapData imageData = image.LockBits(new Rectangle(0, 0, image.Width, image.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
    int bytesPerPixel = 3;

    byte* scan0 = (byte*)imageData.Scan0.ToPointer();
    int stride = imageData.Stride;

    byte unmatchingValue = 0;
    byte matchingValue = 255;
    int toleranceSquared = tolerance * tolerance;

    for (int y = 0; y < imageData.Height; y++)
    {
        byte* row = scan0 + (y * stride);

        for (int x = 0; x < imageData.Width; x++)
        {
            // Watch out for actual order (BGR)!
            int bIndex = x * bytesPerPixel;
            int gIndex = bIndex + 1;
            int rIndex = bIndex + 2;

            byte pixelR = row[rIndex];
            byte pixelG = row[gIndex];
            byte pixelB = row[bIndex];

            int diffR = pixelR - searchedR;
            int diffG = pixelG - searchedG;
            int diffB = pixelB - searchedB;

            int distance = diffR * diffR + diffG * diffG + diffB * diffB;

            row[rIndex] = row[bIndex] = row[gIndex] = distance > toleranceSquared ? unmatchingValue : matchingValue;
        }
    }

    image.UnlockBits(imageData);
}

It does exactly the same thing but runs for only 230 ms – over 360 times faster!

The above code makes use of the Bitmap.LockBits method, which is a wrapper for the native GdipBitmapLockBits (GDI+, gdiplus.dll) function. LockBits creates a temporary buffer that contains pixel information in the desired format (in our case RGB, 8 bits per color component). Any changes to this buffer are copied back to the bitmap upon the UnlockBits call (therefore you should always use LockBits and UnlockBits as a pair). Bitmap.LockBits returns a BitmapData object (System.Drawing.Imaging namespace) that has two interesting properties: Scan0 and Stride. Scan0 returns the address of the first pixel data. Stride is the width of a single row of pixels (scan line) in bytes (with optional padding to make it divisible by 4).
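Put together, the address arithmetic described above looks like this (a sketch for the 24bpp format, where the bytes within a pixel are ordered B, G, R):

static unsafe Color GetPixelFast(BitmapData data, int x, int y)
{
    byte* row = (byte*)data.Scan0.ToPointer() + (y * data.Stride); // start of scan line y
    int bIndex = x * 3;                                            // 3 bytes per pixel

    return Color.FromArgb(row[bIndex + 2], row[bIndex + 1], row[bIndex]);
}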

BitmapData layout

Please notice that I don’t use calls to Math.Pow and Math.Sqrt to calculate the distance between colors. Writing code like this:

double distance = Math.Sqrt(Math.Pow(pixelR - searchedR, 2) + Math.Pow(pixelG - searchedG, 2) + Math.Pow(pixelB - searchedB, 2));

to process millions of pixels is a terrible idea. Such a line can make our optimized method about 25 times slower! Using Math.Pow with integer parameters is extremely wasteful, and we don’t have to calculate the square root to determine if the distance is longer than the specified tolerance.

The previously presented method uses code marked with the unsafe keyword. It allows a C# program to take advantage of pointer arithmetic. Unfortunately, unsafe mode has some important restrictions. The code must be compiled with the /unsafe option and executed as a fully trusted assembly.

Luckily, there is the Marshal.Copy method (from the System.Runtime.InteropServices namespace) that can move data between managed and unmanaged memory. We can use it to copy image data into a byte array and manipulate pixels very efficiently. Look at this method:

static void DetectColorWithMarshal(Bitmap image, byte searchedR, byte searchedG, byte searchedB, int tolerance)
{        
    BitmapData imageData = image.LockBits(new Rectangle(0, 0, image.Width, image.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);

    byte[] imageBytes = new byte[Math.Abs(imageData.Stride) * image.Height];
    IntPtr scan0 = imageData.Scan0;

    Marshal.Copy(scan0, imageBytes, 0, imageBytes.Length);
  
    byte unmatchingValue = 0;
    byte matchingValue = 255;
    int toleranceSquared = tolerance * tolerance;

    for (int i = 0; i < imageBytes.Length; i += 3)
    {
        byte pixelB = imageBytes[i];
        byte pixelR = imageBytes[i + 2];
        byte pixelG = imageBytes[i + 1];

        int diffR = pixelR - searchedR;
        int diffG = pixelG - searchedG;
        int diffB = pixelB - searchedB;

        int distance = diffR * diffR + diffG * diffG + diffB * diffB;

        imageBytes[i] = imageBytes[i + 1] = imageBytes[i + 2] = distance > toleranceSquared ? unmatchingValue : matchingValue;
    }

    Marshal.Copy(imageBytes, 0, scan0, imageBytes.Length);

    image.UnlockBits(imageData);
}

It runs for 280 ms, so it is only slightly slower than the unsafe version. It is CPU efficient but uses more memory than the previous method – almost 100 megabytes for our test Ultra HD 8K image in the RGB 24 format.
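(That figure checks out: 7680 pixels × 3 bytes = 23,040 bytes per row – already divisible by 4, so no padding – times 4320 rows ≈ 99.5 MB.)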

If you want to make pixel manipulation even faster you may process different parts of the image in parallel. You need to do some benchmarking first, because for small images the cost of threading may be bigger than the gains from concurrent execution. Here’s a quick sample of code that uses 4 tasks to process 4 parts of the image simultaneously. It yields a 30% time improvement on my machine. Treat it as a quick and dirty hint, this post is already too long…

static unsafe void DetectColorWithUnsafeParallel(Bitmap image, byte searchedR, byte searchedG, byte searchedB, int tolerance)
{
    BitmapData imageData = image.LockBits(new Rectangle(0, 0, image.Width, image.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
    int bytesPerPixel = 3;

    byte* scan0 = (byte*)imageData.Scan0.ToPointer();
    int stride = imageData.Stride;

    byte unmatchingValue = 0;
    byte matchingValue = 255;
    int toleranceSquared = tolerance * tolerance;

    Task[] tasks = new Task[4];
    for (int i = 0; i < tasks.Length; i++)
    {
        int ii = i;
        tasks[i] = Task.Factory.StartNew(() =>
            {
                int minY = ii < 2 ? 0 : imageData.Height / 2;
                int maxY = ii < 2 ? imageData.Height / 2 : imageData.Height;

                int minX = ii % 2 == 0 ? 0 : imageData.Width / 2;
                int maxX = ii % 2 == 0 ? imageData.Width / 2 : imageData.Width;                        
                
                for (int y = minY; y < maxY; y++)
                {
                    byte* row = scan0 + (y * stride);

                    for (int x = minX; x < maxX; x++)
                    {
                        int bIndex = x * bytesPerPixel;
                        int gIndex = bIndex + 1;
                        int rIndex = bIndex + 2;

                        byte pixelR = row[rIndex];
                        byte pixelG = row[gIndex];
                        byte pixelB = row[bIndex];

                        int diffR = pixelR - searchedR;
                        int diffG = pixelG - searchedG;
                        int diffB = pixelB - searchedB;

                        int distance = diffR * diffR + diffG * diffG + diffB * diffB;

                        row[rIndex] = row[bIndex] = row[gIndex] = distance > toleranceSquared ? unmatchingValue : matchingValue;
                    }
                }
            });
    }

    Task.WaitAll(tasks);

    image.UnlockBits(imageData);
}

* Originally I had some triangles and squares as an illustration, but Victoria's Secret models (source) are better, huh? :) 

* .NET 4 console app, executed on MSI GE620 DX laptop: Intel Core i5-2430M 2.40GHz (2 cores, 4 threads), 4GB DDR3 RAM, NVIDIA GT 555M 2GB DDR3, HDD 500GB 7200RPM, Windows 7 Home Premium x64.

Radio buttons for list items in MVC 4 – problem with id uniqueness

by Miłosz Orzeł 24. June 2013 20:37

Let's suppose that we have some model that has a list property and we want to render radio buttons for the items of that list. Take the following basic setup as an example.

Main model class with list:

using System.Collections.Generic;

public class Team
{
    public string Name { get; set; }
    public List<Player> Players { get; set; }
}

List item class:

public class Player
{
    public string Name { get; set; }
    public string Level { get; set; }
}

There are three accepted values for a player’s skill Level property: BEG (Beginner), INT (Intermediate) and ADV (Advanced), so we want three radio buttons (with labels) for each player in a team. Yup, normally we would rather use an enum instead of a string for the Level property, but let’s skip that here for the sake of simplicity…

Controller action method that returns sample data:

public ActionResult Index()
{
    var team = new Team() {
        Name = "Some Team",
        Players = new List<Player> {
               new Player() {Name = "Player A", Level="BEG"},
               new Player() {Name = "Player B", Level="INT"},
               new Player() {Name = "Player C", Level="ADV"}
        }
    };

    return View(team);
}

Here is our Index.cshtml view:

@model Team

<section>
    <h1>@Model.Name</h1>        

    @Html.EditorFor(model => model.Players)            
</section>

Notice that the markup for Players is not created inside a loop. Instead, an EditorTemplate is used. It’s a good practice since it makes code more clear and maintainable. The framework is smart enough to use the code from the template for each player on the list, because the Team.Players property implements the IEnumerable interface...

And here is the Players.cshtml EditorTemplate:

@model Player
<div>
    <strong>@Model.Name:</strong>

    @Html.RadioButtonFor(model => model.Level, "BEG")
    @Html.LabelFor(model => model.Level, "Beginner")

    @Html.RadioButtonFor(model => model.Level, "INT")
    @Html.LabelFor(model => model.Level, "Intermediate")

    @Html.RadioButtonFor(model => model.Level, "ADV")
    @Html.LabelFor(model => model.Level, "Advanced")        
</div>

The code looks fine: nice strongly typed helpers that rely on lambda expressions (for compile-time checking and easier refactoring)... but there's a catch: the HTML markup generated by this code is seriously flawed. Check this snippet of the page source generated for the first player:

<div>
    <strong>Player A:</strong>  
 
    <input name="Players[0].Level" id="Players_0__Level" type="radio" checked="checked" value="BEG">
    <label for="Players_0__Level">Beginner</label>
 
    <input name="Players[0].Level" id="Players_0__Level" type="radio" value="INT">
    <label for="Players_0__Level">Intermediate</label>
 
    <input name="Players[0].Level" id="Players_0__Level" type="radio" value="ADV">
    <label for="Players_0__Level">Advanced</label>        
</div>

The Players_0__Level id is used for three different radio buttons! This lack of uniqueness not only violates the HTML specification and makes scripting harder, but also breaks the label tags (clicking on them doesn't check the corresponding input element).

Fortunately the MVC framework contains the TemplateInfo class with a GetFullHtmlFieldId method. This method returns an id for a DOM element, constructed by appending the name provided as the method argument to an automatically determined prefix. The prefix takes the nesting level and the list item's index into account. Internally, GetFullHtmlFieldId uses the TemplateInfo.HtmlFieldPrefix property and the TagBuilder.CreateSanitizedId method, so even if you pass some illegal characters in the id suffix, they will be replaced.

Here is the modified editor template:
@model Player
<div>
    <strong>@Model.Name:</strong>

    @{string rbBeginnerId = ViewContext.ViewData.TemplateInfo.GetFullHtmlFieldId("rbBeginner"); }
    @Html.RadioButtonFor(model => model.Level, "BEG", new { id = rbBeginnerId })
    @Html.LabelFor(model => model.Level, "Beginner",  new { @for = rbBeginnerId} )

    @{string rbIntermediateId = ViewContext.ViewData.TemplateInfo.GetFullHtmlFieldId("rbIntermediate"); }
    @Html.RadioButtonFor(model => model.Level, "INT", new { id = rbIntermediateId })
    @Html.LabelFor(model => model.Level, "Intermediate",  new { @for = rbIntermediateId })

    @{string rbAdvancedId = ViewContext.ViewData.TemplateInfo.GetFullHtmlFieldId("rbAdvanced"); }
    @Html.RadioButtonFor(model => model.Level, "ADV", new { id = rbAdvancedId })
    @Html.LabelFor(model => model.Level, "Advanced",  new { @for = rbAdvancedId })
</div>

Calls to the ViewContext.ViewData.TemplateInfo.GetFullHtmlFieldId method let us obtain ids for the radio buttons, which are also used to set the for attributes of the labels. In MVC 3 there was no overload of the LabelFor extension method that accepted an htmlAttributes object. Luckily, version 4 has it built in.
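If you're stuck with MVC 3, a small custom helper along these lines could fill the gap (a hypothetical sketch – LabelWithAttributes is a made-up name, not a framework method):

using System.Web.Mvc;

public static class LabelExtensions
{
    // Renders a <label> element with arbitrary HTML attributes (e.g. a custom "for")
    public static MvcHtmlString LabelWithAttributes(this HtmlHelper html, string labelText, object htmlAttributes)
    {
        var tag = new TagBuilder("label");
        tag.MergeAttributes(HtmlHelper.AnonymousObjectToHtmlAttributes(htmlAttributes));
        tag.SetInnerText(labelText);
        return MvcHtmlString.Create(tag.ToString(TagRenderMode.Normal));
    }
}

It could then be called in the template as @Html.LabelWithAttributes("Beginner", new { @for = rbBeginnerId }).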

The modified template produces the following markup:

<div>
    <strong>Player A:</strong>
 
    <input name="Players[0].Level" id="Players_0__rbBeginner" type="radio" checked="checked" value="BEG">
    <label for="Players_0__rbBeginner">Beginner</label>
 
    <input name="Players[0].Level" id="Players_0__rbIntermediate" type="radio" value="INT">
    <label for="Players_0__rbIntermediate">Intermediate</label>
 
    <input name="Players[0].Level" id="Players_0__rbAdvanced" type="radio" value="ADV">
    <label for="Players_0__rbAdvanced">Advanced</label>
</div>

Now the input ids are unique and the labels properly reference the radio buttons via the for attribute. Alright :) 

BTW: the weird name “radio buttons” for mutually exclusive option elements comes from buttons on radio receivers that were used to switch between stations (pushing one in automatically pushed the others out).

Coordinate system in HTML5 Canvas, drawing with y-axis value increasing upwards

by Miłosz Orzeł 19. May 2013 10:06

The coordinate system in HTML5 Canvas is set up in such a way that its origin (0,0) is in the upper-left corner. This is nothing new in the world of screen graphics (the same goes for Windows Forms and SVG, for example). CRT monitors, which were the standard in the past, displayed picture lines from top to bottom, and the image within a line was created from left to right. Locating the origin (0,0) in the upper-left corner was therefore intuitive, and it made creating graphics hardware and software easier.

Unfortunately, the default coordinate system is sometimes impractical. Let's assume you want to create a projectile motion animation. It seems natural that for an ascending projectile the value of the y coordinate should increase, but this results in a weird inverted trajectory:

Default coordinate system (y value increases downwards)

You can get rid of this problem by modifying the y value passed to the drawing function:

context.fillRect(x, offsetY - y, size, size);

For y = 0, the projectile will be placed at a location determined by offsetY (to make y = 0 the very bottom of the canvas, set offsetY to the height of the canvas). The bigger the value of y, the higher the projectile will be drawn. The problem is that you can have hundreds of places in your code that use the y coordinate. If you forget to apply offsetY just once, the whole image may be ruined.

Luckily, canvas lets you modify the coordinate system by means of transformations. Two transformation methods will be useful here: translate(x, y) and scale(x, y). The former moves the origin to an arbitrary place; the latter changes the size of drawn objects, but it may also be used to invert coordinates.

A single execution of the following code will move the origin of the coordinate system to the point (0, offsetY) and make y-axis values increase towards the top of the screen:

context.translate(0, offsetY);
context.scale(1, -1);

Translation and scaling of the coordinate system.

But there's a catch: providing -1 as the second argument of scale means the whole image is created with the inverted y coordinate. This applies to text too (calling fillText will render letters upside-down). Therefore, before writing any text, you have to restore the default y-axis configuration. Because manually restoring canvas state is awkward, the save() and restore() methods exist. They push canvas state onto the stack and pop it from the stack, respectively. It is recommended to call save before doing transformations. Canvas state includes not only transformations but also values such as fill style or line width...

context.save();
 
context.fillStyle = 'red';
context.scale(2, 2);
context.fillRect(0, 0, 10, 10);
 
context.restore();
 
context.fillRect(0, 0, 10, 10);

The above code draws two squares: 

The first square is red and drawn at 2x scale. The second is drawn with the default canvas settings (black color, 1x scale). This happens because the canvas state was saved on the stack right before the scale and color changes, and restored before the second square was drawn.
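Putting both tricks together: while the flipped coordinate system is active, you can still draw upright text by flipping the y-axis back just for the fillText call. Here is a minimal sketch (assuming context, offsetY, x, y and size are already defined):

context.translate(0, offsetY);
context.scale(1, -1); // y-axis now grows upwards

context.fillRect(x, y, size, size); // projectile drawn in the flipped system

context.save();
context.scale(1, -1); // flip back so letters are not rendered upside-down
context.fillText('projectile', x, -y); // negate y to keep the original position
context.restore();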

TortoiseSVN pre-commit hook in C# - save yourself some troubles!

by Miłosz Orzeł 13. January 2013 19:24

Probably everyone who creates or debugs a program sometimes makes temporary changes to the code that make the current task easier but should never get into the repository. And probably everyone has accidentally put such code into the next revision. If you are lucky, the mistake is revealed quickly and the only result is a bit of shame; if not...

If only there was a way to mark “uncommitable” code...

You can do it and it’s pretty simple!

TortoiseSVN lets you set a so-called pre-commit hook. It's a program (or script) that is run when the user clicks the "OK" button in the "SVN Commit" window. A hook can, for example, check the content of modified files and block the commit when deemed appropriate. Tortoise hooks differ from Subversion hooks in that they are executed locally, not on the server that hosts the repository. You therefore don't have to worry whether your hook will be accepted by the admin or whether it works on the server (the server may not have .NET installed, for example), and you don't affect the experience of other users of the repository. Client-side hooks are quicker too.

A detailed description of hooks can be found in the "4.30.8. Client Side Hook Scripts" chapter of TortoiseSVN's help file.

Tortoise supports 7 kinds of hooks: start-commit, pre-commit, post-commit, start-update, pre-update, post-update and pre-connect. We are concerned with the pre-commit action. The essence of the hook is to check whether any of the added or modified files contains a temporary code marker. Our marker will be the "NOT_FOR_REPO" text put into a comment placed above the temporary code.
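For example, temporary code could be flagged like this (a made-up snippet; any line containing the marker text will trigger the hook):

// NOT_FOR_REPO - hardcoded connection string for local debugging
const string ConnectionString = "Data Source=localhost;Initial Catalog=TestDb;";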

This is the whole hook's code – a simple console application that may save your ass :)

using System;
using System.IO;
using System.Text.RegularExpressions;

namespace NotForRepoPreCommitHook
{
    class Program
    {
        const string NotForRepoMarker = "NOT_FOR_REPO";

        static void Main(string[] args)
        {              
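            // args[0] contains a path to a temp file that lists all paths affected by the commit (one per line)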
            string[] affectedPaths = File.ReadAllLines(args[0]);

            Regex fileExtensionPattern = new Regex(@"^.*\.(cs|js|xml|config)$", RegexOptions.IgnoreCase);

            foreach (string path in affectedPaths)
            {
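                // Filter by extension and skip paths that no longer exist – the list includes deleted files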
                if (fileExtensionPattern.IsMatch(path) && File.Exists(path))
                {
                    if (ContainsNotForRepoMarker(path))
                    {
                        string errorMessage = string.Format("{0} marker found in {1}", NotForRepoMarker, path);
                        Console.Error.WriteLine(errorMessage);    
                        Environment.Exit(1);  
                    }
                }
            }             
        }

        static bool ContainsNotForRepoMarker(string path)
        {
            StreamReader reader = File.OpenText(path);

            try
            {
                string line = reader.ReadLine();

                while (line != null)
                {
                    if (line.Contains(NotForRepoMarker))
                    {
                        return true;
                    }

                    line = reader.ReadLine();
                }
            }
            finally
            {
                reader.Close();
            }  

            return false;
        }
    }
}

TSVN calls the pre-commit hook with four parameters. We are interested only in the first one: it contains a path to a *.tmp file which lists the files affected by the current commit, one path per line. After loading the list, files are filtered by extension (useful if you don't want to process files of all types). Checking whether a file exists is also important – the list from the *.tmp file contains paths of deleted files too! Detection of the marker, represented by the NotForRepoMarker constant, is done by the ContainsNotForRepoMarker method. Despite its simplicity, it performs well: on my (mid-range) laptop, a 100 MB file takes less than a second to process. If the marker is found, the program exits with an error code (a value different than 0). Before quitting, information about which file contains the marker is written to the standard error output (via Console.Error). This message will be displayed in the Tortoise window.

The code is simple, isn't it? In addition, installing the hook is trivial!

To attach the hook, choose the "Settings" item from Tortoise's context menu, then select the "Hook Scripts" element and click the "Add…" button. The following window will appear:

TSVN hooks configuration window

Set "Hook Type" to "Pre-Commit Hook". Fill the "Working Copy Path" field with a path to the directory that contains the local copy of the repo (different folders can have different hooks). In the "Command Line To Execute" field, set the path to the application that implements the hook. Check the "Wait for the script to finish" and "Hide the script while running" options (the latter prevents the console window from showing). Press the "OK" button and voilà, the hook is installed!

Now mark some code with a "NOT_FOR_REPO" comment and try to commit. You should see something like this:

Operation blocked by pre-commit hook

Notice the "Retry without hooks" button – it allows the commit to be completed with hooks ignored.

We now have a hook that prevents temporary code from being committed. One may also want to create a hook that enforces a non-empty log message (see the sketch below), blocks commits of *.log files, etc. Your private hooks – you decide! And if some of the hooks turn out to be useful for the whole team, you can always remake them as Subversion hooks.
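For instance, the log message check could be a tiny console app like the one below. It's a minimal sketch: it assumes, based on the four-parameter pre-commit call described above, that the third argument is a path to the file containing the log message.

using System;
using System.IO;

class EmptyLogMessageHook
{
    static void Main(string[] args)
    {
        // Assumption: TSVN passes the log message file path as the third parameter
        string logMessage = File.ReadAllText(args[2]).Trim();

        if (logMessage.Length == 0)
        {
            Console.Error.WriteLine("Commit blocked: log message must not be empty.");
            Environment.Exit(1); // non-zero exit code blocks the commit
        }
    }
}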

Tested on TortoiseSVN 1.7.8/Subversion 1.7.6.

Update 24.03.2014: Added emphasis on checking the "Wait for the script to finish" option – without it the hook will not block commits!

Update 17.09.2013 (additional info): You may set the hook on a parent folder which contains multiple repository checkouts. If you are willing to sacrifice a bit of performance for added protection, you may skip filtering files by extension before checking for the NotForRepoMarker marker.

How to close pop-ups upon main window closure or logout?

by Miłosz Orzeł 4. November 2012 14:58

Imagine you have to support some really old web application. The app has one main window and pop-up windows that show sensitive information (a payroll list, for example). The client wants to ensure that all pop-ups are closed when the user leaves the main window or clicks the "logout" button in that window...

So... how to close all the windows opened with window.open?

On the web this question comes up very often. Unfortunately, the most common answer is quite naive. The proposed solution is based on keeping references to opened pop-ups and later invoking their close method:

var popups = []; 
 
function openPopup() {
    var wnd = window.open('Home/Popup', 'popup' + popups.length, 'height=300,width=300');
     
    popups.push(wnd);
}
 
function closePopups() {
    for (var i = 0; i < popups.length; i++) {
        popups[i].close();
    }
 
    popups = [];
}

In practice this doesn't work, because the array of references is cleared on every full page reload (for example after clicking a link or upon postback)...

Another suggested solution is to give the pop-up a unique name (using the second parameter of the open method) and to later reacquire a reference to the window:

var wnd = window.open('', 'popup0');
wnd.close();

This is based on the fact that the window.open method works in two modes:

  1. If a window with a given name doesn’t exist, it is created.
  2. If a window with the given name does exist, it will not be recreated; instead, a reference to that window will be returned (if a non-empty URL is passed to the open method, the pop-up will be reloaded).

The problem lies in point no. 1: if a pop-up window with the given name wasn't previously opened, the calls to the open and close methods will cause the pop-up to be briefly visible. It sucks…

But maybe a reference to pop-up can be retained between page reloads?

If there is no need to support older browsers (unlikely for an old application), we can try to put a reference to the pop-up window into localStorage. However, this will not work:

var popup = window.open('http://morzel.net', 'test');
localStorage.setItem('key', JSON.stringify(popup)); 
 
TypeError: Converting circular structure to JSON

Old tricks for keeping page state between reloads that are based on cookies or window.name will not work either.

So… what to do?

Even if you can’t afford to have a major change such as introducing frames, don’t give up :)

Pop-up windows have an opener property that points to the parent window (that is, the window in which the call to window.open was made). Pop-ups can therefore periodically check whether the main window is still open. Additionally, pop-ups can access variables from the parent window. This can be used to enforce pop-up closure when the main window is closed or when the user clicks the "logout" button in the parent window. When the user is logged in (and only then!), a marker variable (e.g. loggedIn) should be set in the main window.
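On the main window's side, managing the marker could be as simple as this (a hypothetical sketch; the logoutButton element id is an assumption):

// Run after a successful login:
window.loggedIn = true;

// Clear the marker when the user logs out (before any redirect):
document.getElementById('logoutButton').onclick = function () {
    window.loggedIn = false;
};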

And here is the JS code that should be placed on a page displayed in a pop-up:

window.setInterval(function () {
    try {
        if (!window.opener || window.opener.closed === true || window.opener.loggedIn !== true) {
            window.close();
        }
    } catch (ex) {
        window.close(); // FF may throw security exception when you try to access loggedIn (for external site)
    }
}, 1000);

Checking a variable from the opener window has another advantage: if the user moves away from our application in the main window (for example by clicking the back button or a link to an external website), the pop-up will detect the lack of the monitored variable in window.opener and close automatically.

Well, it's not the kind of code you enjoy writing, but it achieves the desired result despite the painful gaps in the browser API. If only we were provided with a window.exists('name') method...
