morzel.net

.net, js, html, arduino, java... no rants or clickbaits.

Fast pixel operations in .NET (with and without unsafe)

The Bitmap class has GetPixel and SetPixel methods that let you read and change the color of chosen pixels. Those methods are very easy to use but are also extremely slow. My previous post (reproduced at the end of this page) gives a detailed explanation of why, if you are interested.

Fortunately, you don't have to use external libraries (or abandon .NET altogether) to do fast image manipulation. The Framework contains a class called ColorMatrix that lets you apply many changes to images in an efficient manner; properties such as contrast or saturation can be modified this way. But what about manipulation of individual pixels? It can be done too, with help from the Bitmap.LockBits method and the BitmapData class…
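As a quick illustration of that ColorMatrix approach (a sketch of mine, not part of the original post; the luminance weights are the commonly used ones), here is a method that produces a grayscale copy of an image:

static Bitmap ToGrayscale(Bitmap source)
{
    // Each output R, G and B becomes 0.299*R + 0.587*G + 0.114*B; alpha is untouched.
    ColorMatrix grayscale = new ColorMatrix(new float[][]
    {
        new float[] { 0.299f, 0.299f, 0.299f, 0, 0 },
        new float[] { 0.587f, 0.587f, 0.587f, 0, 0 },
        new float[] { 0.114f, 0.114f, 0.114f, 0, 0 },
        new float[] { 0, 0, 0, 1, 0 },
        new float[] { 0, 0, 0, 0, 1 }
    });

    Bitmap result = new Bitmap(source.Width, source.Height);

    using (Graphics graphics = Graphics.FromImage(result))
    using (ImageAttributes attributes = new ImageAttributes())
    {
        attributes.SetColorMatrix(grayscale);
        graphics.DrawImage(source, new Rectangle(0, 0, source.Width, source.Height),
            0, 0, source.Width, source.Height, GraphicsUnit.Pixel, attributes);
    }

    return result;
}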

A good way to test individual pixel manipulation speed is color difference detection. The task is to find the portions of an image whose color is similar to some chosen color. How do we check if colors are similar? Think of a color as a point in three-dimensional space, where the axes are red, green and blue. Two colors are then two points, and the difference between them is the distance between those points in RGB space:

Colors as points in 3D space

diff = sqrt((C1R - C2R)² + (C1G - C2G)² + (C1B - C2B)²)

This technique is very easy to implement and gives decent results. Color comparison is actually a pretty complex matter, though: other color spaces are better suited for the task than RGB, and human color perception should be taken into account (e.g. our eyes are keener to detect differences in shades of green than in shades of blue). But let's keep things simple here…
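In C#, the whole comparison boils down to a few integer operations. Here is a small helper of mine (equivalent to what the methods below do inline); note that the square root can be skipped entirely by comparing squared values:

static bool IsSimilar(Color c1, Color c2, int tolerance)
{
    int diffR = c1.R - c2.R;
    int diffG = c1.G - c2.G;
    int diffB = c1.B - c2.B;

    // sqrt(d) <= t is equivalent to d <= t*t (for non-negative t), so no Math.Sqrt is needed.
    return diffR * diffR + diffG * diffG + diffB * diffB <= tolerance * tolerance;
}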

Our test image will be this Ultra HD 8K (7680x4320, 33.1Mpx) picture* (on this blog it’s of course scaled down to save bandwidth):

Color difference detection input image (scaled down for blog)

Here is a method that may be used to look for R=253 G=129 B=84 pixels (aka “pink bra”). It sets matching pixels to white (the rest become black):

static void DetectColorWithGetSetPixel(Bitmap image, byte searchedR, byte searchedG, byte searchedB, int tolerance)
{
    int toleranceSquared = tolerance * tolerance;
    
    for (int x = 0; x < image.Width; x++)
    {
        for (int y = 0; y < image.Height; y++)
        {
            Color pixel = image.GetPixel(x, y);

            int diffR = pixel.R - searchedR;
            int diffG = pixel.G - searchedG;
            int diffB = pixel.B - searchedB;

            int distance = diffR * diffR + diffG * diffG + diffB * diffB;

            image.SetPixel(x, y, distance > toleranceSquared ? Color.Black : Color.White);
        }
    }
}

The above code is our terribly slow Get/SetPixel baseline. If we call it this way (named parameters for clarity):

DetectColorWithGetSetPixel(image, searchedR: 253, searchedG: 129, searchedB: 84, tolerance: 64);

we will receive the following outcome:

Color difference detection output image (scaled down)

The result may be OK, but having to wait over 84300 ms* is a complete disaster!

Now check out this method:

static unsafe void DetectColorWithUnsafe(Bitmap image, byte searchedR, byte searchedG, byte searchedB, int tolerance)
{
    BitmapData imageData = image.LockBits(new Rectangle(0, 0, image.Width, image.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
    int bytesPerPixel = 3;

    byte* scan0 = (byte*)imageData.Scan0.ToPointer();
    int stride = imageData.Stride;

    byte unmatchingValue = 0;
    byte matchingValue = 255;
    int toleranceSquared = tolerance * tolerance;

    for (int y = 0; y < imageData.Height; y++)
    {
        byte* row = scan0 + (y * stride);

        for (int x = 0; x < imageData.Width; x++)
        {
            // Watch out for actual order (BGR)!
            int bIndex = x * bytesPerPixel;
            int gIndex = bIndex + 1;
            int rIndex = bIndex + 2;

            byte pixelR = row[rIndex];
            byte pixelG = row[gIndex];
            byte pixelB = row[bIndex];

            int diffR = pixelR - searchedR;
            int diffG = pixelG - searchedG;
            int diffB = pixelB - searchedB;

            int distance = diffR * diffR + diffG * diffG + diffB * diffB;

            row[rIndex] = row[bIndex] = row[gIndex] = distance > toleranceSquared ? unmatchingValue : matchingValue;
        }
    }

    image.UnlockBits(imageData);
}

It does exactly the same thing but runs in only 230 ms, over 360 times faster!

The above code makes use of the Bitmap.LockBits method, which is a wrapper for the native GdipBitmapLockBits function (GDI+, gdiplus.dll). LockBits creates a temporary buffer that contains pixel information in the desired format (in our case 24 bpp RGB, 8 bits per color component). Any changes to this buffer are copied back to the bitmap upon the UnlockBits call (therefore you should always use LockBits and UnlockBits as a pair). Bitmap.LockBits returns a BitmapData object (System.Drawing.Imaging namespace) that has two interesting properties: Scan0 and Stride. Scan0 holds the address of the first pixel's data. Stride is the width of a single row of pixels (a scan line) in bytes, padded if necessary so that it is divisible by 4.

BitmapData layout
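Put differently, the address of any pixel can be derived from Scan0 and Stride. Here is a sketch (the helper names are mine) for the 24 bpp format used in this post:

// Scan line length in bytes for a 24 bpp bitmap, rounded up to a multiple of 4.
static int Stride24bpp(int width)
{
    return (width * 3 + 3) & ~3;
}

// Address of pixel (x, y). Stride may be negative for bottom-up bitmaps,
// in which case Scan0 points to the row that is lowest in memory.
static unsafe byte* PixelAddress(byte* scan0, int stride, int x, int y)
{
    return scan0 + y * stride + x * 3; // [0] = B, [1] = G, [2] = R
}

For our 7680-pixel-wide test image, 7680 * 3 = 23040 bytes is already a multiple of 4, so there is no padding.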

Please notice that I don't use calls to Math.Pow and Math.Sqrt to calculate the distance between colors. Writing code like this:

double distance = Math.Sqrt(Math.Pow(pixelR - searchedR, 2) + Math.Pow(pixelG - searchedG, 2) + Math.Pow(pixelB - searchedB, 2));

to process millions of pixels is a terrible idea. Such a line can make our optimized method about 25 times slower! Using Math.Pow with integer arguments is extremely wasteful, and we don't have to calculate the square root to determine whether the distance exceeds the specified tolerance.

The previously presented method uses code marked with the unsafe keyword, which allows a C# program to take advantage of pointer arithmetic. Unfortunately, unsafe mode has some important restrictions: the code must be compiled with the /unsafe option and executed from a fully trusted assembly.
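For reference (my notes, not from the original post): with the command-line compiler this is the /unsafe switch, and in a project file it is the AllowUnsafeBlocks property (in Visual Studio: project properties, Build, “Allow unsafe code”):

csc /unsafe Program.cs

<PropertyGroup>
  <AllowUnsafeBlocks>true</AllowUnsafeBlocks>
</PropertyGroup>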

Luckily, there is the Marshal.Copy method (from the System.Runtime.InteropServices namespace) that can move data between managed and unmanaged memory. We can use it to copy the image data into a byte array and manipulate pixels very efficiently. Look at this method:

static void DetectColorWithMarshal(Bitmap image, byte searchedR, byte searchedG, byte searchedB, int tolerance)
{        
    BitmapData imageData = image.LockBits(new Rectangle(0, 0, image.Width, image.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);

    byte[] imageBytes = new byte[Math.Abs(imageData.Stride) * image.Height];
    IntPtr scan0 = imageData.Scan0;

    Marshal.Copy(scan0, imageBytes, 0, imageBytes.Length);
  
    byte unmatchingValue = 0;
    byte matchingValue = 255;
    int toleranceSquared = tolerance * tolerance;

    // Note: walking straight through the buffer assumes the stride has no padding
    // (true for this image: 7680 * 3 bytes is already a multiple of 4).
    for (int i = 0; i < imageBytes.Length; i += 3)
    {
        byte pixelB = imageBytes[i];
        byte pixelR = imageBytes[i + 2];
        byte pixelG = imageBytes[i + 1];

        int diffR = pixelR - searchedR;
        int diffG = pixelG - searchedG;
        int diffB = pixelB - searchedB;

        int distance = diffR * diffR + diffG * diffG + diffB * diffB;

        imageBytes[i] = imageBytes[i + 1] = imageBytes[i + 2] = distance > toleranceSquared ? unmatchingValue : matchingValue;
    }

    Marshal.Copy(imageBytes, 0, scan0, imageBytes.Length);

    image.UnlockBits(imageData);
}

It runs in 280 ms, so it is only slightly slower than the unsafe version. It is CPU efficient but uses more memory than the previous method: almost 100 megabytes for our test Ultra HD 8K image in 24 bpp RGB format.
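If the extra allocation is a concern, a compromise is to copy and process one scan line at a time, shrinking the temporary buffer to a single row. A sketch of mine (same detection logic as above):

static void DetectColorWithMarshalRow(Bitmap image, byte searchedR, byte searchedG, byte searchedB, int tolerance)
{
    BitmapData imageData = image.LockBits(new Rectangle(0, 0, image.Width, image.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);

    byte[] row = new byte[Math.Abs(imageData.Stride)];
    byte unmatchingValue = 0;
    byte matchingValue = 255;
    int toleranceSquared = tolerance * tolerance;

    for (int y = 0; y < imageData.Height; y++)
    {
        // Copy a single scan line in, process it, copy it back out.
        IntPtr rowStart = IntPtr.Add(imageData.Scan0, y * imageData.Stride);
        Marshal.Copy(rowStart, row, 0, row.Length);

        for (int x = 0; x < imageData.Width; x++)
        {
            int bIndex = x * 3;

            int diffR = row[bIndex + 2] - searchedR;
            int diffG = row[bIndex + 1] - searchedG;
            int diffB = row[bIndex] - searchedB;

            int distance = diffR * diffR + diffG * diffG + diffB * diffB;

            row[bIndex] = row[bIndex + 1] = row[bIndex + 2] = distance > toleranceSquared ? unmatchingValue : matchingValue;
        }

        Marshal.Copy(row, 0, rowStart, row.Length);
    }

    image.UnlockBits(imageData);
}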

If you want to make pixel manipulation even faster, you may process different parts of the image in parallel. Do some benchmarking first, because for small images the cost of threading may outweigh the gains from concurrent execution. Here's a quick sample of code that uses 4 tasks to process 4 quadrants of the image simultaneously. It yields a 30% time improvement on my machine. Treat it as a quick and dirty hint, this post is already too long…

static unsafe void DetectColorWithUnsafeParallel(Bitmap image, byte searchedR, byte searchedG, byte searchedB, int tolerance)
{
    BitmapData imageData = image.LockBits(new Rectangle(0, 0, image.Width, image.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
    int bytesPerPixel = 3;

    byte* scan0 = (byte*)imageData.Scan0.ToPointer();
    int stride = imageData.Stride;

    byte unmatchingValue = 0;
    byte matchingValue = 255;
    int toleranceSquared = tolerance * tolerance;

    Task[] tasks = new Task[4];
    for (int i = 0; i < tasks.Length; i++)
    {
        int ii = i; // capture the loop variable for the closure
        tasks[i] = Task.Factory.StartNew(() =>
            {
                // Tasks 0 and 1 take the top half of the image, tasks 2 and 3 the bottom half;
                int minY = ii < 2 ? 0 : imageData.Height / 2;
                int maxY = ii < 2 ? imageData.Height / 2 : imageData.Height;

                // even tasks take the left half, odd ones the right.
                int minX = ii % 2 == 0 ? 0 : imageData.Width / 2;
                int maxX = ii % 2 == 0 ? imageData.Width / 2 : imageData.Width;
                
                for (int y = minY; y < maxY; y++)
                {
                    byte* row = scan0 + (y * stride);

                    for (int x = minX; x < maxX; x++)
                    {
                        int bIndex = x * bytesPerPixel;
                        int gIndex = bIndex + 1;
                        int rIndex = bIndex + 2;

                        byte pixelR = row[rIndex];
                        byte pixelG = row[gIndex];
                        byte pixelB = row[bIndex];

                        int diffR = pixelR - searchedR;
                        int diffG = pixelG - searchedG;
                        int diffB = pixelB - searchedB;

                        int distance = diffR * diffR + diffG * diffG + diffB * diffB;

                        row[rIndex] = row[bIndex] = row[gIndex] = distance > toleranceSquared ? unmatchingValue : matchingValue;
                    }
                }
            });
    }

    Task.WaitAll(tasks);

    image.UnlockBits(imageData);
}
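A simpler alternative (my sketch, not from the original post) is to let Parallel.For distribute whole rows among worker threads; each scan line is independent, so no synchronization is needed:

static unsafe void DetectColorWithParallelFor(Bitmap image, byte searchedR, byte searchedG, byte searchedB, int tolerance)
{
    BitmapData imageData = image.LockBits(new Rectangle(0, 0, image.Width, image.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
    int bytesPerPixel = 3;

    byte* scan0 = (byte*)imageData.Scan0.ToPointer();
    int stride = imageData.Stride;
    int width = imageData.Width;

    byte unmatchingValue = 0;
    byte matchingValue = 255;
    int toleranceSquared = tolerance * tolerance;

    // Each iteration handles one scan line; rows never overlap, so this is thread-safe.
    Parallel.For(0, imageData.Height, y =>
    {
        byte* row = scan0 + (y * stride);

        for (int x = 0; x < width; x++)
        {
            int bIndex = x * bytesPerPixel;

            int diffR = row[bIndex + 2] - searchedR;
            int diffG = row[bIndex + 1] - searchedG;
            int diffB = row[bIndex] - searchedB;

            int distance = diffR * diffR + diffG * diffG + diffB * diffB;

            row[bIndex] = row[bIndex + 1] = row[bIndex + 2] = distance > toleranceSquared ? unmatchingValue : matchingValue;
        }
    });

    image.UnlockBits(imageData);
}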

* Originally I had some triangles and squares as an illustration, but Victoria's Secret models (source) are better, huh? :) 

* .NET 4 console app, executed on an MSI GE620 DX laptop: Intel Core i5-2430M 2.40GHz (2 cores, 4 threads), 4GB DDR3 RAM, NVIDIA GT 555M 2GB DDR3, HDD 500GB 7200RPM, Windows 7 Home Premium x64.

Coordinate system in HTML5 Canvas, drawing with y-axis value increasing upwards

The coordinate system in HTML5 Canvas is set up in such a way that its origin (0,0) is in the upper-left corner. This is nothing new in the world of screen graphics (the same goes for Windows Forms and SVG, for example). CRT monitors, which were the standard in the past, displayed picture lines from top to bottom and drew each line from left to right, so locating the origin (0,0) in the upper-left corner was intuitive and made creating hardware and software for handling graphics easier.

Unfortunately, the default coordinate system in canvas is sometimes a bit impractical. Let's assume that you want to create a projectile motion animation. It seems natural that for an ascending projectile the value of the y coordinate should increase. But this results in a weird effect of an inverted trajectory:

Default coordinate system (y value increases downwards)

You can get rid of this problem by modifying the y value that is passed to the drawing function:

context.fillRect(x, offsetY - y, size, size);

For y = 0, the projectile will be placed at a location determined by offsetY (to make y = 0 the very bottom of the canvas, set offsetY equal to the height of the canvas). The bigger the value of y, the higher the projectile will be drawn. The problem is that you can have hundreds of places in your code that use the y coordinate, and if you forget to apply offsetY just once, the whole image may get destroyed.

Luckily, canvas lets you make changes to the coordinate system by means of transformations. Two transformation methods will be useful for us: translate(x, y) and scale(x, y). The former moves the origin to an arbitrary place; the latter changes the size of drawn objects, but it may also be used to invert coordinates.

A single execution of the following code will move the origin of the coordinate system to the point (0, offsetY) and make y-axis values increase towards the top of the screen:

context.translate(0, offsetY);
context.scale(1, -1);

Translation and scaling of coordinate system

But there's a catch: providing -1 as scale's second argument means the whole image is drawn with the y coordinate inverted. This applies to text too: calling fillText will render letters upside-down. Therefore, before writing any text, you have to restore the default y-axis configuration. Because manually restoring canvas state is awkward, the methods save() and restore() exist; they push the canvas state onto a stack and pop it off again, respectively. It is a good habit to call save before doing transformations. Canvas state includes not only transformations but also values such as fill style or line width...

context.save();
 
context.fillStyle = 'red';
context.scale(2, 2);
context.fillRect(0, 0, 10, 10);
 
context.restore();
 
context.fillRect(0, 0, 10, 10);

The above code draws 2 squares:

The first square is red and drawn at 2x scale; the second is drawn with the default canvas settings (black color, 1x scale). This happens because right before any changes to scale and color the canvas state was saved on the stack, and it was restored just before the second square was drawn.
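Putting both tricks together, here is a small sketch of mine (assuming a canvas variable and its 2D context): shapes are drawn in the convenient y-up system, while text is drawn after a restore so it stays upright:

context.save();
context.translate(0, canvas.height); // origin moves to the bottom-left corner
context.scale(1, -1);                // y now grows upwards

context.fillRect(20, 0, 10, 10);     // a square sitting on the bottom edge

context.restore();                   // back to the default y-down system
context.fillText('ground level', 40, canvas.height - 15); // rendered upright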

Why the use of GetPixel and SetPixel is so inefficient!

The Bitmap class provides two simple methods, GetPixel and SetPixel, used respectively to retrieve the color of an image point (as a Color structure) and to set the color of a point. The following code illustrates how to retrieve and set all the pixels in a bitmap:

private void GetSetPixel(Bitmap image) {
   for (int x = 0; x < image.Width; x++) {
      for (int y = 0; y < image.Height; y++) {
         Color pixel = image.GetPixel(x, y);
         image.SetPixel(x, y, Color.Black);
      }
   } 
}

As shown, reading and modifying pixels is extremely simple. Unfortunately, behind the simplicity of this code lies a serious performance trap. For a small number of accesses to image points, the speed of GetPixel and SetPixel is good enough, but for larger images it is not. The graph presented below can serve as proof: it shows the results of 10 tests*, each consisting of a 10-fold invocation of the GetSetPixel method shown above, for images 100x100 and 1000x1000 pixels in size.

Results of speed tests of pixel operations on an image using the GetPixel and SetPixel methods of the Bitmap class.

The average test time for an image measuring 100 by 100 pixels was 543 milliseconds. This speed is acceptable if the image processing is not done frequently. The performance problem is, however, clearly visible when you try to use an image of 1000 by 1000 pixels: execution of the test in this case takes an average of more than 41 seconds, which is over 4 seconds for a single call to GetSetPixel (seriously!).
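Such a measurement can be put together with the Stopwatch class (System.Diagnostics). A sketch of mine, not the exact harness behind the chart; it assumes the GetSetPixel method shown above, made static:

static long MeasureGetSetPixel(int width, int height, int repetitions)
{
    using (Bitmap image = new Bitmap(width, height))
    {
        Stopwatch stopwatch = Stopwatch.StartNew();

        for (int i = 0; i < repetitions; i++)
        {
            GetSetPixel(image); // the method from the beginning of this section
        }

        stopwatch.Stop();
        return stopwatch.ElapsedMilliseconds;
    }
}

// Example: Console.WriteLine(MeasureGetSetPixel(1000, 1000, repetitions: 10));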

Why so slow?

The low efficiency is due to the fact that access to a pixel is not a simple reference to a memory area. Each get or set of a color involves a call to a .NET Framework method that is a wrapper for a native function contained in gdiplus.dll. The call goes through the P/Invoke (Platform Invocation) mechanism, which is used to communicate from managed code to an unmanaged API (an API outside of the .NET Framework). So for a bitmap of 1000x1000 pixels there will be 1 million calls to the GetPixel method which, besides validating its parameters, uses the native GdipBitmapGetPixel function. Before returning color information, the GDI+ function has to perform operations such as calculating the position of the bytes that describe the desired pixel… A similar situation occurs in the case of the SetPixel method.

Look at the following code of the Bitmap.GetPixel method, obtained with .NET Reflector (System.Drawing.dll, .NET Framework 2.0):

public Color GetPixel(int x, int y) {
   int argb = 0;
   if ((x < 0) || (x >= base.Width)) {
      throw new ArgumentOutOfRangeException("x", SR.GetString("ValidRangeX"));
   }
   if ((y < 0) || (y >= base.Height)) {
      throw new ArgumentOutOfRangeException("y", SR.GetString("ValidRangeY"));
   }

   int status = SafeNativeMethods.Gdip.GdipBitmapGetPixel(new HandleRef(this, base.nativeImage), x, y, out argb);
   if (status != 0) {
      throw SafeNativeMethods.Gdip.StatusException(status);
   }
   return Color.FromArgb(argb);
}

Here is the import of the GDI+ function:

[DllImport("gdiplus.dll", CharSet=CharSet.Unicode, SetLastError=true, ExactSpelling=true)]
internal static extern int GdipBitmapGetPixel(HandleRef bitmap, int x, int y, out int argb);

* Tests were run on the following laptop: HP Pavilion dv5, AMD Turion X2 Dual-Core Mobile RM-70, 3 GB RAM, Vista Home Premium.