morzel.net

.net, js, html, arduino, java... no rants or clickbait.

Detecting a Drone - OpenCV in .NET for Beginners (Emgu CV 3.2, Visual Studio 2017). Part 2

OVERVIEW

In Part 1 you learned what OpenCV is, what the role of the Emgu CV wrapper is, and how to create a Visual Studio 2017 C# project that utilizes the two libraries. In this part I will show you how to loop through frames captured from a video file. Check the first part to watch the demo video and find information about the sample project (all the interesting stuff is inside Program.cs - keep this file open in a separate tab, as its fragments will be shown in this post)...

 

STEP 1: Capturing video from file

Before any processing can happen we need to obtain a frame from the video file. This can easily be done using the VideoCapture class (many tutorials mention the Capture class instead, but it is not available in recent Emgu CV versions).

Check the Main method from our sample project:

private const string BackgroundFrameWindowName = "Background Frame";
// ...
private static Mat backgroundFrame = new Mat(); // Frame used as base for change detection
// ...

static void Main(string[] args)
{
    string videoFile = @"PUT A PATH TO VIDEO FILE HERE!";

    using (var capture = new VideoCapture(videoFile)) // Loading video from file
    {
        if (capture.IsOpened)
        {
            // ...

            // Obtaining and showing first frame of loaded video (used as the base for difference detection)
            backgroundFrame = capture.QueryFrame();
            CvInvoke.Imshow(BackgroundFrameWindowName, backgroundFrame);

            // Handling video frames (image processing and contour detection)
            VideoProcessingLoop(capture, backgroundFrame);
        }
        else
        {
            Console.WriteLine($"Unable to open {videoFile}");
        }
    }
}

VideoCapture has four constructor versions. The overload we are using takes a string parameter that is a path to a video file or a video stream. Other versions allow us to connect to cameras. If you design your program right, switching from file input to a webcam might be as easy as changing the new VideoCapture call!
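
For instance, a minimal sketch of grabbing a single frame from the default webcam could look like this (the int overload selects a camera instead of a file; index 0 is an assumption - your device may be enumerated differently):

using (var capture = new VideoCapture(0)) // Camera index instead of a file path
{
    if (capture.IsOpened)
    {
        Mat frame = capture.QueryFrame(); // Single frame from the camera
        CvInvoke.Imshow("Webcam", frame);
        CvInvoke.WaitKey();
    }
}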

Once a VideoCapture instance is created, we can confirm that opening went fine by checking the IsOpened property (maybe the path is wrong or codecs are missing?).

VideoCapture offers a few ways of acquiring frames, but the one I find most convenient is a call to the QueryFrame method. This method returns a Mat class instance (you know it already from Part 1) and moves to the next frame. If the next frame cannot be found, null is returned. We can use this fact to easily loop through the video.

 

STEP 2: Loading and presenting background frame

Our drone detection project is based on finding the difference between the background frame and other frames. The assumption is that we can treat the first frame obtained from the video as the background, hence the call to QueryFrame right after creating the VideoCapture object:

 backgroundFrame = capture.QueryFrame();

After the background is loaded we can check how it looks with a call to the Imshow method (you know it from Part 1 too):

CvInvoke.Imshow(BackgroundFrameWindowName, backgroundFrame);

Is finding a (meaningful!) difference in a video always as easy as subtracting frames? No, it isn't. First of all, the background might not be static (imagine that the drone was flying in front of trees moved by wind, or that the lighting in a room was changing significantly). The second challenge might come from movements of the camera. Having a fixed background and camera position keeps our drone detection task simple enough for a beginner's OpenCV tutorial, plus it's not completely unrealistic. Video detection/recognition is often used in a fully controlled environment, such as part of a factory... OpenCV is capable of handling more complex scenarios - you can read about background subtraction techniques and optical flow to get a hint...
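
To give you a taste of how simple the happy path is, here is a minimal sketch of the differencing idea (the real ProcessFrame implementation, with thresholding and noise removal, comes in the next post; rawFrame is the current-frame field you will meet in Step 3):

Mat diff = new Mat();
CvInvoke.AbsDiff(backgroundFrame, rawFrame, diff); // Per-pixel absolute difference
CvInvoke.Imshow("Difference", diff); // Bright areas are the pixels that changed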

 

STEP 3: Looping through video frames

We know that we can use QueryFrame to get a single frame (a Mat instance) and progress to the next frame, and we know that QueryFrame returns null if it can't go any further. Let's use this knowledge to build a method that goes through video frames in a loop:

private static void VideoProcessingLoop(VideoCapture capture, Mat backgroundFrame)
{
    var stopwatch = new Stopwatch(); // Used for measuring video processing performance

    int frameNumber = 1;
    while (true) // Loop video
    {
        rawFrame = capture.QueryFrame(); // Getting next frame into the static rawFrame field (null is returned if no further frame exists)

        if (rawFrame != null) 
        {
            frameNumber++;

            stopwatch.Restart();
            ProcessFrame(backgroundFrame, Threshold, ErodeIterations, DilateIterations); // Settings defined elsewhere in Program.cs
            stopwatch.Stop();

            WriteFrameInfo(stopwatch.ElapsedMilliseconds, frameNumber);
            ShowWindowsWithImageProcessingStages();

            int key = CvInvoke.WaitKey(0); // Wait indefinitely until key is pressed

            // Close program if Esc key was pressed (any other key moves to next frame)
            if (key == 27)
                Environment.Exit(0);
        }
        else
        {
            capture.SetCaptureProperty(CapProp.PosFrames, 0); // Move to first frame
            frameNumber = 0;
        }
    }
}

In each loop iteration a frame is grabbed from the video file. It is then passed to the ProcessFrame method, which does image differencing, noise removal, contour detection and drawing (it will be discussed in detail in the next post)... The call to ProcessFrame is surrounded with System.Diagnostics.Stopwatch usage - this way we can measure video processing performance. It took my laptop only about 1.5 ms to fully handle each frame - I've told you OpenCV is fast! :)

If QueryFrame returns null, the program moves back to the first frame by calling the SetCaptureProperty method on the VideoCapture instance (the video will then be processed again).

WriteFrameInfo puts text in the frame's upper-left corner with information about its number and how long it took to process it. ShowWindowsWithImageProcessingStages ensures that we can see the current (raw) frame, the background frame, intermediate frames and the final frame in separate windows... Both methods will be shown in the next post.
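
Just to give you an idea, overlaying text on a frame boils down to a single CvInvoke.PutText call - here is a rough sketch (position, font and color are my arbitrary picks, not necessarily what WriteFrameInfo uses):

CvInvoke.PutText(rawFrame, $"Frame: {frameNumber}, {stopwatch.ElapsedMilliseconds} ms",
    new Point(10, 20), FontFace.HersheyPlain, 1.0, new MCvScalar(0, 255, 0)); // BGR green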

The while loop is going to spin forever unless program execution is stopped by the Escape key being pressed in any of the windows that show frames (not the console window!). If 0 is passed as the WaitKey argument, the program waits indefinitely until some key is pressed. This lets you look at each frame as long as you want. If you pass another number to WaitKey, the program will wait until a key is pressed or the delay elapses. You might use it to automatically play the video at a specified frame rate:

int fps = (int)capture.GetCaptureProperty(CapProp.Fps);
int key = CvInvoke.WaitKey(1000 / fps); // E.g. 40 ms delay for a 25 fps video

Warning: one thing you might notice while processing videos is that moving through a file is not always as easy as setting CapProp.PosFrames to the desired number, and your experience may vary from format to format. This is because video files are optimized for playing forward at natural speed, and frames are not necessarily kept as a simple sequence of images. A Full HD (1920x1080) movie has over 2 million pixels in each frame. Now let's say we have an hour of video at 30 FPS -> 3,600 * 30 * 2,073,600 = 223,948,800,000 pixels. Independent frame compression is not enough to crush that number! No wonder some people dedicate their scientific/software careers to video compression...
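
If you do need to jump around a file anyway, the call itself is a one-liner - just verify where you actually land (a sketch; PosFrames is zero-based):

capture.SetCaptureProperty(CapProp.PosFrames, 100); // Try to jump to frame 100
Mat frame = capture.QueryFrame(); // For some codecs this may not be exactly frame 100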

Ok, enough for now - next part coming soon!

Detecting a Drone - OpenCV in .NET for Beginners (Emgu CV 3.2, Visual Studio 2017). Part 1

INTRO

Emgu CV is a .NET wrapper for OpenCV (Open Source Computer Vision Library) which is a collection of over 2500 algorithms focused on real-time image processing and machine learning. OpenCV lets you write software for:

  • face detection,
  • object identification,
  • motion tracking,
  • image stitching,
  • stereo vision,
  • and much, much more...

OpenCV is written in highly optimized C/C++ and supports multi-core execution as well as heterogeneous execution platforms (CPU, GPU, DSP...) thanks to OpenCL. The project was launched in 1999 by Intel Research and is now actively developed by open source community members and contributors from companies like Google, Microsoft or Honda...

My experience with Emgu CV/OpenCV comes mostly from working on a paintball turret project (which I use to take a break from "boring" banking stuff at work). I'm far from a computer vision expert, but I know enough to teach you how to detect a mini quadcopter flying in a room:

In the upper-left corner you can see the frame captured from the video file, followed by the background frame (a static background and camera make our task simpler)... Next to it are various stages of image processing run before the drone (contour) detection is executed. The last frame shows the original frame with the drone position marked. Job done! Oh, and if you are wondering what the "snow" seen in the video is: these are some particles I put in to make the video a bit more "noisy"...

I assume that you know a bit about C# programming but are completely new to Emgu CV/OpenCV.

By the end of this tutorial you will know how to:

  • use Emgu CV 3.2 in a C# 7 (Visual Studio 2017) application (most tutorials available online are quite outdated!),
  • capture frames from video,
  • find changes between images (diff and binary threshold),
  • remove noise with erode and dilate (morphological operations),
  • detect contours,
  • draw and write on a frame

Sounds interesting? Read on!

 

THE CODE

I plan to give a detailed description of the whole program (don't worry: it's just about 200 lines), but if you would like to jump straight to the code, visit this GitHub repository: https://github.com/morzel85/blog-post-emgucv. It's a simple console app - I've put everything into Program.cs so you can't get lost!

Mind that because the Emgu CV/OpenCV binaries are quite large, they are not included in the repo. This should not be a problem, as Visual Studio 2017 should be able to automatically download (restore) the packages...

Here you can download the video I've used for testing: http://morzel.net/download/emgu_cv_drone_test_video.mp4 (4.04 MB, MPEG4 H264 640x480 25fps).

 

STEP 0: Creating a project with Emgu CV

To start, let's use Visual Studio Community 2017 to create a new console application:

New project...

Now we need to add Emgu CV to our project. The easiest way to do it is to use NuGet to install the Emgu CV package published by Emgu Corporation. To do so, run the "Install-Package Emgu.CV" command in the Package Manager Console or use the Visual Studio UI:

Adding NuGet package...

If all goes well, packages.config and the DLL references should look like this (you don't have to worry about ZedGraph):

Packages and references...

Now we are ready to test if OpenCV's magic is available to us through the Emgu CV wrapper library. Let's do it by creating a super simple program that loads an image file and shows it in a window with the obligatory "Hello World!" title:

using Emgu.CV; // Contains Mat and CvInvoke classes

class Program
{
    static void Main(string[] args)
    {
        Mat picture = new Mat(@"C:\Users\gby\Desktop\Krakow_BM.jpg"); // Pick some path on your disk!
        CvInvoke.Imshow("Hello World!", picture); // Open window with image
        CvInvoke.WaitKey(); // Render image and keep window opened until any key is pressed
    }
}

Run it and you should see a window with the image you've selected. Here's what I got - visit Kraków if you like my picture :)

Window with image...

The above code loads a picture from a file into a Mat class instance. Mat is an n-dimensional dense array containing a pointer to the image matrix and a header describing this matrix. It supports a reference counting mechanism that saves memory when multiple image processing operations act on the same data... Don't worry if it sounds a bit confusing. All you need to know for now is that we can load images (from files, webcams, video frames etc.) into Mat objects. If you are curious, read this nice description of Mat.
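
If you want to peek inside a loaded Mat, a few of its properties come in handy - a quick sketch extending the Hello World program above:

Console.WriteLine($"Size: {picture.Width}x{picture.Height} px");
Console.WriteLine($"Channels: {picture.NumberOfChannels}"); // 3 for a color (BGR) image
Console.WriteLine($"Depth: {picture.Depth}"); // E.g. Cv8U = 8 bits per channel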

The other interesting thing you can see in the code is the CvInvoke class. You can use it to call OpenCV functions from your C# application without dealing with the complexity of operating on native code and data structures from managed code - Emgu CV, the wrapper, will do it for you through the P/Invoke mechanism.

Ok, so now you have some idea of what the Emgu CV/OpenCV libraries are and how to bring them into your application. Next post coming soon...

Fast pixel operations in .NET (with and without unsafe)

The Bitmap class has GetPixel and SetPixel methods that let you acquire and change the color of chosen pixels. These methods are very easy to use but are also extremely slow. My previous post gives a detailed explanation of the topic; click here if you are interested.

Fortunately, you don't have to use external libraries (or resign from .NET altogether) to do fast image manipulation. The Framework contains a class called ColorMatrix that lets you apply many changes to images in an efficient manner - properties such as contrast or saturation can be modified this way (see the sketch below). But what about manipulation of individual pixels? It can be done too, with help from the Bitmap.LockBits method and the BitmapData class…
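
As an aside, here is what a ColorMatrix-based adjustment might look like - a sketch that desaturates an image using the common 0.299/0.587/0.114 luminance weights (System.Drawing and System.Drawing.Imaging namespaces; this is only an illustration, not part of the benchmarks below):

static Bitmap Desaturate(Bitmap source)
{
    // Each row says how much one input component contributes to the output components
    var grayscale = new ColorMatrix(new float[][]
    {
        new float[] { 0.299f, 0.299f, 0.299f, 0, 0 }, // Red input
        new float[] { 0.587f, 0.587f, 0.587f, 0, 0 }, // Green input
        new float[] { 0.114f, 0.114f, 0.114f, 0, 0 }, // Blue input
        new float[] { 0, 0, 0, 1, 0 },                // Alpha stays intact
        new float[] { 0, 0, 0, 0, 1 }
    });

    var result = new Bitmap(source.Width, source.Height);
    using (var attributes = new ImageAttributes())
    using (var graphics = Graphics.FromImage(result))
    {
        attributes.SetColorMatrix(grayscale);
        graphics.DrawImage(source, new Rectangle(0, 0, source.Width, source.Height),
            0, 0, source.Width, source.Height, GraphicsUnit.Pixel, attributes);
    }
    return result;
}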

A good way to test individual pixel manipulation speed is color difference detection. The task is to find portions of an image that have a color similar to some chosen color. How to check if colors are similar? Think of a color as a point in three-dimensional space, where the axes are red, green and blue. Two colors are two points, and the difference between the colors is described by the distance between those two points in RGB space.

Colors as points in 3D space

diff = sqrt((C1.R - C2.R)² + (C1.G - C2.G)² + (C1.B - C2.B)²)

This technique is very easy to implement and gives decent results. Color comparison is actually a pretty complex matter, though. Different color spaces are better suited for the task than RGB, and human color perception should be taken into account (e.g. our eyes are more keen to detect differences in shades of green than in shades of blue). But let's keep things simple here…
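
In code it is enough to compare the squared distance against the squared tolerance, which also lets us skip the square root (a small helper sketch; the detection methods below inline the same math):

static bool IsSimilar(Color c1, Color c2, int tolerance)
{
    int diffR = c1.R - c2.R;
    int diffG = c1.G - c2.G;
    int diffB = c1.B - c2.B;
    return diffR * diffR + diffG * diffG + diffB * diffB <= tolerance * tolerance;
}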

Our test image will be this Ultra HD 8K (7680x4320, 33.1Mpx) picture* (on this blog it’s of course scaled down to save bandwidth):

Color difference detection input image (scaled down for blog)

This is a method that may be used to look for R=253 G=129 B=255 pixels (aka "pink bra"). It sets matching pixels to white (the rest will be black):

static void DetectColorWithGetSetPixel(Bitmap image, byte searchedR, byte searchedG, byte searchedB, int tolerance)
{
    int toleranceSquared = tolerance * tolerance;
    
    for (int x = 0; x < image.Width; x++)
    {
        for (int y = 0; y < image.Height; y++)
        {
            Color pixel = image.GetPixel(x, y);

            int diffR = pixel.R - searchedR;
            int diffG = pixel.G - searchedG;
            int diffB = pixel.B - searchedB;

            int distance = diffR * diffR + diffG * diffG + diffB * diffB;

            image.SetPixel(x, y, distance > toleranceSquared ? Color.Black : Color.White);
        }
    }
}

The above code is our terribly slow Get/SetPixel baseline. If we call it this way (named parameters for clarity):

DetectColorWithGetSetPixel(image, searchedR: 253, searchedG: 129, searchedB: 255, tolerance: 84);

we will receive the following outcome:

Color difference detection output image (scaled down)

The result may be OK, but having to wait over 84300 ms* is a complete disaster!

Now check out this method:

static unsafe void DetectColorWithUnsafe(Bitmap image, byte searchedR, byte searchedG, byte searchedB, int tolerance)
{
    BitmapData imageData = image.LockBits(new Rectangle(0, 0, image.Width, image.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
    int bytesPerPixel = 3;

    byte* scan0 = (byte*)imageData.Scan0.ToPointer();
    int stride = imageData.Stride;

    byte unmatchingValue = 0;
    byte matchingValue = 255;
    int toleranceSquared = tolerance * tolerance;

    for (int y = 0; y < imageData.Height; y++)
    {
        byte* row = scan0 + (y * stride);

        for (int x = 0; x < imageData.Width; x++)
        {
            // Watch out for actual order (BGR)!
            int bIndex = x * bytesPerPixel;
            int gIndex = bIndex + 1;
            int rIndex = bIndex + 2;

            byte pixelR = row[rIndex];
            byte pixelG = row[gIndex];
            byte pixelB = row[bIndex];

            int diffR = pixelR - searchedR;
            int diffG = pixelG - searchedG;
            int diffB = pixelB - searchedB;

            int distance = diffR * diffR + diffG * diffG + diffB * diffB;

            row[rIndex] = row[bIndex] = row[gIndex] = distance > toleranceSquared ? unmatchingValue : matchingValue;
        }
    }

    image.UnlockBits(imageData);
}

It does exactly the same thing but runs for only 230 ms - over 360 times faster!

The above code makes use of the Bitmap.LockBits method, which is a wrapper for the native GdipBitmapLockBits (GDI+, gdiplus.dll) function. LockBits creates a temporary buffer that contains pixel information in the desired format (in our case 24 bpp RGB - 8 bits per color component, stored in memory in BGR order). Any changes to this buffer are copied back to the bitmap upon the UnlockBits call (therefore you should always use LockBits and UnlockBits as a pair). Bitmap.LockBits returns a BitmapData object (System.Drawing.Imaging namespace) that has two interesting properties: Scan0 and Stride. Scan0 returns the address of the first pixel data. Stride is the width of a single row of pixels (a scan line) in bytes, with optional padding to make it divisible by 4.

BitmapData layout
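
Putting Scan0 and Stride together: the first byte of pixel (x, y) in a 24 bpp bitmap sits at a fixed offset, which is exactly what the loop above computes (a sketch; must run in an unsafe context):

byte* pixel = scan0 + y * stride + x * 3; // pixel[0] = B, pixel[1] = G, pixel[2] = R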

Please notice that I don't use calls to Math.Pow and Math.Sqrt to calculate the distance between colors. Writing code like this:

double distance = Math.Sqrt(Math.Pow(pixelR - searchedR, 2) + Math.Pow(pixelG - searchedG, 2) + Math.Pow(pixelB - searchedB, 2));

to process millions of pixels is a terrible idea. Such a line can make our optimized method about 25 times slower! Using Math.Pow with integer parameters is extremely wasteful, and we don't have to calculate the square root to determine whether the distance is longer than the specified tolerance.

The previously presented method uses code marked with the unsafe keyword, which allows a C# program to take advantage of pointer arithmetic. Unfortunately, unsafe mode has some important restrictions: the code must be compiled with the /unsafe option (in Visual Studio: project properties -> Build -> "Allow unsafe code") and executed in a fully trusted assembly.

Luckily, there is the Marshal.Copy method (from the System.Runtime.InteropServices namespace) that can move data between managed and unmanaged memory. We can use it to copy image data into a byte array and manipulate pixels very efficiently. Look at this method:

static void DetectColorWithMarshal(Bitmap image, byte searchedR, byte searchedG, byte searchedB, int tolerance)
{        
    BitmapData imageData = image.LockBits(new Rectangle(0, 0, image.Width, image.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);

    byte[] imageBytes = new byte[Math.Abs(imageData.Stride) * image.Height];
    IntPtr scan0 = imageData.Scan0;

    Marshal.Copy(scan0, imageBytes, 0, imageBytes.Length);
  
    byte unmatchingValue = 0;
    byte matchingValue = 255;
    int toleranceSquared = tolerance * tolerance;

    // Note: this assumes rows have no padding (true here: 7680 * 3 bytes is divisible by 4)
    for (int i = 0; i < imageBytes.Length; i += 3)
    {
        byte pixelB = imageBytes[i];
        byte pixelR = imageBytes[i + 2];
        byte pixelG = imageBytes[i + 1];

        int diffR = pixelR - searchedR;
        int diffG = pixelG - searchedG;
        int diffB = pixelB - searchedB;

        int distance = diffR * diffR + diffG * diffG + diffB * diffB;

        imageBytes[i] = imageBytes[i + 1] = imageBytes[i + 2] = distance > toleranceSquared ? unmatchingValue : matchingValue;
    }

    Marshal.Copy(imageBytes, 0, scan0, imageBytes.Length);

    image.UnlockBits(imageData);
}

It runs for 280 ms, so it is only slightly slower than the unsafe version. It is CPU efficient but uses more memory than the previous method - almost 100 megabytes for our test Ultra HD 8K image in 24 bpp RGB format (7680 * 3 bytes * 4320 rows ≈ 99.5 MB).

If you want to make pixel manipulation even faster, you may process different parts of the image in parallel. Do some benchmarking first, because for small images the cost of threading may be bigger than the gains from concurrent execution. Here's a quick sample of code that uses 4 tasks to process 4 quarters of the image simultaneously. It yields a 30% time improvement on my machine. Treat it as a quick and dirty hint - this post is already too long…

static unsafe void DetectColorWithUnsafeParallel(Bitmap image, byte searchedR, byte searchedG, byte searchedB, int tolerance)
{
    BitmapData imageData = image.LockBits(new Rectangle(0, 0, image.Width, image.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
    int bytesPerPixel = 3;

    byte* scan0 = (byte*)imageData.Scan0.ToPointer();
    int stride = imageData.Stride;

    byte unmatchingValue = 0;
    byte matchingValue = 255;
    int toleranceSquared = tolerance * tolerance;

    Task[] tasks = new Task[4];
    for (int i = 0; i < tasks.Length; i++)
    {
        int ii = i;
        tasks[i] = Task.Factory.StartNew(() =>
            {
                int minY = ii < 2 ? 0 : imageData.Height / 2;
                int maxY = ii < 2 ? imageData.Height / 2 : imageData.Height;

                int minX = ii % 2 == 0 ? 0 : imageData.Width / 2;
                int maxX = ii % 2 == 0 ? imageData.Width / 2 : imageData.Width;                        
                
                for (int y = minY; y < maxY; y++)
                {
                    byte* row = scan0 + (y * stride);

                    for (int x = minX; x < maxX; x++)
                    {
                        int bIndex = x * bytesPerPixel;
                        int gIndex = bIndex + 1;
                        int rIndex = bIndex + 2;

                        byte pixelR = row[rIndex];
                        byte pixelG = row[gIndex];
                        byte pixelB = row[bIndex];

                        int diffR = pixelR - searchedR;
                        int diffG = pixelG - searchedG;
                        int diffB = pixelB - searchedB;

                        int distance = diffR * diffR + diffG * diffG + diffB * diffB;

                        row[rIndex] = row[bIndex] = row[gIndex] = distance > toleranceSquared ? unmatchingValue : matchingValue;
                    }
                }
            });
    }

    Task.WaitAll(tasks);

    image.UnlockBits(imageData);
}

* Originally I had some triangles and squares as an illustration, but Victoria's Secret models (source) are better, huh? :) 

* .NET 4 console app, executed on an MSI GE620 DX laptop: Intel Core i5-2430M 2.40GHz (2 cores, 4 threads), 4GB DDR3 RAM, NVIDIA GT 555M 2GB DDR3, HDD 500GB 7200RPM, Windows 7 Home Premium x64.

Coordinate system in HTML5 Canvas, drawing with y-axis value increasing upwards

The coordinate system in HTML5 Canvas is set up in such a way that its origin (0,0) is in the upper-left corner. This solution is nothing new in the world of screen graphics (e.g. the same goes for Windows Forms and SVG). CRT monitors, which were the standard in the past, displayed picture lines from top to bottom, and the image within a line was created from left to right. So locating the origin (0,0) in the upper-left corner was intuitive, and it made creating hardware and software for handling graphics easier.

Unfortunately, the default coordinate system in canvas is sometimes a bit impractical. Let's assume that you want to create a projectile motion animation. It seems natural that for an ascending projectile the value of the y coordinate should increase, but that results in a weird effect of an inverted trajectory:

Default coordinate system (y value increases downwards)

You can get rid of this problem by modifying the y value that is passed to the drawing function:

context.fillRect(x, offsetY - y, size, size);

For y = 0 the projectile will be placed at a location determined by offsetY (to make y = 0 the very bottom of the canvas, set offsetY equal to the height of the canvas). The bigger the value of y, the higher the projectile will be drawn. The problem is that you may have hundreds of places in your code that use the y coordinate, and if you forget to apply offsetY just once, the whole image may get destroyed.

Luckily, canvas lets you make changes to the coordinate system by means of transformations. Two transformation methods will be useful for us: translate(x, y) and scale(x, y). The former allows us to move the origin to an arbitrary place; the latter is for changing the size of drawn objects, but it may also be used to invert coordinates.

A single execution of the following code will move the origin of the coordinate system to the point (0, offsetY) and establish y-axis values as increasing towards the top of the screen:

context.translate(0, offsetY);
context.scale(1, -1);

Translation and scaling of coordinate system.

But there's a catch: the result of passing -1 as scale's second argument is that the whole image is drawn with an inverted y coordinate. This applies to text too (calling fillText will render letters upside-down). Therefore, before writing any text you have to restore the default y-axis configuration. Because manually restoring the canvas state is awkward, the save() and restore() methods exist. These methods push the canvas state onto a stack and pop it off the stack, respectively. It is recommended to call save before doing transformations. Canvas state includes not only transformations but also values such as fill style or line width...

context.save();
 
context.fillStyle = 'red';
context.scale(2, 2);
context.fillRect(0, 0, 10, 10);
 
context.restore();
 
context.fillRect(0, 0, 10, 10);

The above code draws 2 squares:

The first square is red and drawn at 2x scale. The second square is drawn with default canvas settings (black color and 1x scale). This happens because right before any changes to scale and color the canvas state was saved on the stack, and it was later restored just before the second square was drawn.

Why the use of GetPixel and SetPixel is so inefficient!

The Bitmap class provides two simple methods, GetPixel and SetPixel, used respectively to retrieve a pixel of an image (as a Color structure) and to set a pixel of an image. The following code illustrates how to retrieve/set all the pixels in a bitmap:

private void GetSetPixel(Bitmap image) {
   for (int x = 0; x < image.Width; x++) {
      for (int y = 0; y < image.Height; y++) {
         Color pixel = image.GetPixel(x, y);
         image.SetPixel(x, y, Color.Black);
      }
   } 
}

As shown, reading and modifying pixels is extremely simple. Unfortunately, behind the simplicity of this code lies a serious performance trap. While for a small number of accesses to image points the speed at which GetPixel and SetPixel work is good enough, for larger images it is not the case. The graph presented below can serve as proof. It shows the results of 10 tests*, each consisting of 10 invocations of the GetSetPixel method shown above, for images 100x100 and 1000x1000 pixels in size.

Results of pixel operation speed tests using the GetPixel and SetPixel methods of the Bitmap class.

The average test time for an image measuring 100 by 100 pixels was 543 milliseconds. This speed is acceptable if the image processing is not done frequently. The performance problem is, however, clearly visible when you try to use an image of 1000 by 1000 pixels. Execution of the test in that case takes an average of more than 41 seconds - more than 4 seconds for a single call to GetSetPixel (seriously!).

Why so slow?

Low efficiency is due to the fact that access to a pixel is not a simple reference to a memory area. Each get or set of a color involves an invocation of a .NET Framework method, which is a wrapper for a native function contained in gdiplus.dll. The call goes through the P/Invoke (Platform Invocation) mechanism, which is used to communicate from managed code to unmanaged APIs (APIs outside of the .NET Framework). So for a bitmap of 1000x1000 pixels there will be 1 million calls to the GetPixel method, which besides validating its parameters uses the native GdipBitmapGetPixel function. Before returning color information, the GDI+ function has to perform operations such as calculating the position of the bytes that describe the desired pixel… A similar situation occurs in the case of the SetPixel method.

Look at the following code of the Bitmap.GetPixel method, obtained with .NET Reflector (System.Drawing.dll, .NET Framework 2.0):

public Color GetPixel(int x, int y) {
   int argb = 0;
   if ((x < 0) || (x >= base.Width)) {
      throw new ArgumentOutOfRangeException("x", SR.GetString("ValidRangeX"));
   }
   if ((y < 0) || (y >= base.Height)) {
      throw new ArgumentOutOfRangeException("y", SR.GetString("ValidRangeY"));
   }
   
   int status = SafeNativeMethods.Gdip.GdipBitmapGetPixel(new HandleRef(this, base.nativeImage), x, y, out argb);
   if (status != 0) {
      throw SafeNativeMethods.Gdip.StatusException(status);
   }
   return Color.FromArgb(argb);
}

Here is the import of the GDI+ function:

[DllImport("gdiplus.dll", CharSet=CharSet.Unicode, SetLastError=true,
ExactSpelling=true)]
internal static extern int GdipBitmapGetPixel(HandleRef bitmap, int x, int y, out int argb);

* Tested on the following laptop: HP Pavilion dv5, AMD Turion X2 Dual-Core Mobile RM-70, 3 GB RAM, Windows Vista Home Premium.