Miłosz Orzeł

.net, js, html, arduino, java... no rants or clickbaits.

Legacy Apps - Dealing With IFRAME Mess (Window.postMessage)

It's October 2018 so I should probably write something about React 16.5, Angular 7.0 or Blazor 0.6... But... How about some fun with an iframe-infested legacy application instead? ;)

TL;DR

You can pass a message to an embedded iframe with:

someIframe.contentWindow.postMessage({ a: 'aaa', b: 'bbb' }, '*');

and to the parent of an iframe like this:

window.parent.postMessage({ a: 'aaa', b: 'bbb' }, '*');

This is how you can receive the message:

window.addEventListener('message', function (e) {
    // Do something with e.data...         
});  

Mind the security! See my live example and its code on GitHub or go for the framebus library if you want some more features.

THE IFRAMES

Unless you work for a startup, chances are that part of your duties is to keep some internal web application alive, and this app remembers the glorious era of IE 6. It’s called work for a reason, right? ;) In the old days iframes were used a lot. Not only for embedding content from other sites, cross-domain ajax or hacking an overlay that covered selects, but also to provide boundaries between page zones or mimic a desktop-like window layout…

So let’s assume that you have a site with nested iframes, where you need to modify the state of one iframe based on an action that happened in another:

Page with many iframes...

In the example above, top iframe 0 has a text field, and if the Update User Name button is clicked we should modify the User Name labels in nested iframe 1a and iframe 1b. When the Update Account Number button is pressed, the Account Number in deeply nested iframe 2a should change. Clicking on Update News should modify text in iframe 2b. That last iframe contains a Clear News button, and when it's clicked a notification should be passed to top iframe 0...

DIRECT ACCESS (THE BAD WAY)

One way of implementing interactions between iframes is through direct access to a nested/parent iframe's DOM elements (if same-origin policy allows). The state of an element in a nested iframe can be modified by code such as this:

document.getElementById('someIframe').contentWindow.document.getElementById('someInput').value = 'test';

and reaching an element in the parent can be done with:

window.parent.document.getElementById('someInput').value = 'test';

The problem with this approach is that it tightly couples the iframes, and that’s unfortunate since the iframes were likely used to provide some sort of encapsulation. Direct DOM access has another flaw: it gets really nasty in the case of deep nesting: window.parent.parent.parent.document...

MESSAGING (THE GOOD WAY)

The Window.postMessage method was introduced into browsers to enable safe cross-origin communication between Window objects. The method can be used to pass data between iframes. In this post I’m assuming that the application with iframes is old but it can be run in Internet Explorer 11, which is the last version that Microsoft released (in 2013). From what I’ve seen it’s often the case that IE has to be supported, but at least it’s the latest version of it. Sorry if that assumption doesn’t work for you, I’ve suffered my share of old IE support...

Thanks to the postMessage method it’s very easy to create a mini message bus, so events triggered in one iframe can be handled in another if the target iframe chooses to take an action. Such an approach reduces coupling between iframes, as one frame doesn't need to know any details about the elements of the other...

Take a look at an example function that can send a message down to all directly nested iframes:

const sendMessage = function (type, value) {
    console.log('[iframe0] Sending message down, type: ' + type + ', value: ' + value);

    var iframes = document.getElementsByTagName('iframe');
    for (var i = 0; i < iframes.length; i++) {
        iframes[i].contentWindow.postMessage({ direction: 'DOWN', type: type, value: value }, '*');
    }
};

In the code above, iframes are found with document.getElementsByTagName and then a message is sent to each of them through a contentWindow.postMessage call. The first parameter of the postMessage method is the message (data) we want to pass. The browser will take care of its serialization and it's up to you to decide what needs to be passed. I've chosen to pass an object with 3 properties: the first designates in which direction the message should go (UP or DOWN), the second states the message type (UPDATE_USER for example) and the last one contains the payload of the message. In the case of our sample app it will be the text the user put into the input, which should affect elements in nested iframes.

The '*' value passed as the second parameter of postMessage determines how the browser dispatches the event. The asterisk means no restrictions - it's ok for our code sample, but in the real world you should consider providing a URI as the parameter value, so the browser will be able to restrict the event based on scheme, host name and port number. This is a must in case you need to pass sensitive data (you don't want to show it to any site that got loaded into an iframe)!
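For illustration, here's a minimal sketch of a locked-down exchange (https://app.example.com is just a placeholder for your real application's origin):

// Sender: the browser will deliver the message only if the iframe currently
// hosts a document from the expected origin (placeholder URI).
someIframe.contentWindow.postMessage({ type: 'UPDATE_USER', value: 'John' }, 'https://app.example.com');

// Receiver: ignore messages that arrive from unexpected origins.
window.addEventListener('message', function (e) {
    if (e.origin !== 'https://app.example.com') {
        return; // Drop messages from unknown senders
    }
    // Safe to work with e.data here...
});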

This is how the sendMessage function can be used to notify nested iframes about the need to update user info:

document.getElementById('updateUserName').addEventListener('click', function (event) {
    sendMessage('UPDATE_USER', document.getElementById('textToSend').value);
});

The code shown above belongs to iframe 0, which contains two nested iframes: 1a and 1b. Below is the code from iframe 1b, which can do one of two things: handle a message in case it is interested in it, or just pass it UP or DOWN:

window.addEventListener('message', function (e) {
    console.log('[iframe1b] Message received');

    if (e.data.type === 'UPDATE_USER') {
        console.log('[iframe1b] Handling message - updating user name to: ' + e.data.value);
        document.getElementById('userName').innerText = e.data.value;
    } else {
        if (e.data.direction === 'UP') {
            console.log('[iframe1b] Passing message up');
            window.parent.postMessage(e.data, '*');
        } else {
            console.log('[iframe1b] Passing message down');
            document.getElementById('iframe2b').contentWindow.postMessage(e.data, '*');
        }
    }               
});

You can see that messages can be captured by listening to the message event on the window object. The passed message is available in the event's data field, hence the check on e.data.type to see if the code should handle the message or just pass it along. Passing UP is done with window.parent.postMessage; passing DOWN works with contentWindow.postMessage called on an iframe element.

iframe 2b has a button with the following click handler:

document.getElementById('clearNews').addEventListener('click', function () {
    document.getElementById('news').innerText = '';

    console.log('[iframe2b] News cleared, sending message up, type: NEWS_CLEARED');
    window.parent.postMessage({ direction: 'UP', type: 'NEWS_CLEARED' }, '*');
});

It clears the news text and sends a notification to the parent window (iframe). This message will be received by iframe 1b and passed up to iframe 0, which will handle it by displaying the 'News cleared' text:

window.addEventListener('message', function (e) {
    console.log('[iframe0] Message received');

    if (e.data.type === 'NEWS_CLEARED') {
        console.log('[iframe0] Handling message - notifying about news clear');
        document.getElementById('newsClearedNotice').innerText = 'News cleared!';
    }
});

Notice that this time the message handler is quite simple. This is because in the case of top iframe 0 we don't need to pass received messages any further.

EXAMPLE

That's it. Here's a working sample of an iframe-"rich" page. Open the browser console to see how messages fly around. Check the repo to see the code; it's vanilla JS with no fancy features since we assumed that IE 11 has to be directly supported (checked also in Firefox 62, Chrome 69 and Edge 42).

Detecting a Drone - OpenCV in .NET for Beginners (Emgu CV 3.2, Visual Studio 2017). Part 3

OVERVIEW

Part 1 introduced you to OpenCV and its Emgu CV wrapper library plus showed the easiest way to create Emgu project in Visual Studio 2017. Part 2 was all about grabbing frames from video file. The third (and last) episode focuses on image transformations and contour detection...

In case you forgot: here's the complete code sample on GitHub (focus on the Program.cs file as it contains all the image acquisition and processing code). This is the app in action:

 

STEP 4: Difference detection and noise removal

In the previous post you've seen that the VideoProcessingLoop method invokes ProcessFrame for each frame grabbed from the video; here's the method:

// Determines boundary of brightness while turning grayscale image to binary (black-white) image
private const int Threshold = 5;

// Erosion to remove noise (reduce white pixel zones)
private const int ErodeIterations = 3;

// Dilation to enhance erosion survivors (enlarge white pixel zones)
private const int DilateIterations = 3;

// ...

// Containers for images demonstrating different phases of frame processing 
private static Mat rawFrame = new Mat(); // Frame as obtained from video
private static Mat backgroundFrame = new Mat(); // Frame used as base for change detection
private static Mat diffFrame = new Mat(); // Image showing differences between background and raw frame
private static Mat grayscaleDiffFrame = new Mat(); // Image showing differences in 8-bit color depth
private static Mat binaryDiffFrame = new Mat(); // Image showing changed areas in white and unchanged in black
private static Mat denoisedDiffFrame = new Mat(); // Image with irrelevant changes removed with opening operation
private static Mat finalFrame = new Mat(); // Video frame with detected object marked

// ...

private static void ProcessFrame(Mat backgroundFrame, int threshold, int erodeIterations, int dilateIterations)
{
    // Find difference between background (first) frame and current frame
    CvInvoke.AbsDiff(backgroundFrame, rawFrame, diffFrame);

    // Apply binary threshold to grayscale image (white pixel will mark difference)
    CvInvoke.CvtColor(diffFrame, grayscaleDiffFrame, ColorConversion.Bgr2Gray);
    CvInvoke.Threshold(grayscaleDiffFrame, binaryDiffFrame, threshold, 255, ThresholdType.Binary);

    // Remove noise with opening operation (erosion followed by dilation)
    CvInvoke.Erode(binaryDiffFrame, denoisedDiffFrame, null, new Point(-1, -1), erodeIterations, BorderType.Default, new MCvScalar(1));
    CvInvoke.Dilate(denoisedDiffFrame, denoisedDiffFrame, null, new Point(-1, -1), dilateIterations, BorderType.Default, new MCvScalar(1));

    rawFrame.CopyTo(finalFrame);
    DetectObject(denoisedDiffFrame, finalFrame);
}

AbsDiff and CvtColor

First the current frame (kept in rawFrame) is compared to the background with CvInvoke.AbsDiff. In other words: the current frame is subtracted from the background frame pixel by pixel (the absolute value is used to avoid negative results). After that the difference image is converted into grayscale with a CvInvoke.CvtColor call. We care only about the overall pixel difference (not its individual blue-green-red color components). The whiter the pixel is, the more its color has changed... Take a look at the picture below showing the background frame, the current frame and the grayscale difference:

Frames difference...

 

Threshold

The grayscale image is changed into an image with only black and white (binary) pixels with the use of CvInvoke.Threshold. Our intention is to mark changed pixels as white. The threshold value allows us to control change detection sensitivity. Below you can see how different thresholds produce various binary results:

Threshold difference...

The first image (left) was produced with Threshold = 1, so even the slightest change got marked - such an image is not a suitable input for contour detection. The image in the middle used Threshold = 5. The drone position is clearly marked and smaller white pixel zones can be removed with erosion... The last image (right) is the result of setting Threshold to 200. This time sensitivity was too low and we got just a couple of white pixels.

Erode and Dilate

It's hard to find a threshold that gives the desired difference detection for each video frame. If the threshold is too low then too many pixels are white; if it's too high then the drone might not be detected at all... It's best to pick a threshold which marks the change we need even if we get a few undesired white spots, as these can be safely removed with erosion followed by dilation. When combined, these two operations create the so-called opening operation, which can be treated as a noise removal method. Opening is a type of morphology transformation (read this article to learn more about them). These operations work by probing a pixel's neighborhood with a structuring element (aka kernel) and deciding the pixel value based on the values found in the neighborhood...

CvInvoke.Erode is meant to simulate a physical process in which an object's area is reduced due to the destructive effects of its surroundings. Detailed behavior depends on the parameters passed (structuring element, anchor, border type - never mind, this is a beginner's guide, right?) but the general idea is this: if a pixel is white but has a black pixel around it, then it should become black too. The more erode iterations are run, the more the white pixel zones get eaten away. Here's an example of image erosion:

Erosion...

On the left is the input image and on the right we have the result of erosion, which used a structuring element in the shape of a 3x3 square (this is the default used when null is passed for the element parameter in the Erode invocation). The value of an output pixel was decided by probing all neighboring pixels and checking for their minimal value. If a pixel was white in the input image but had at least one black pixel in its immediate surroundings, then it became black in the output image.

If erosion is used wisely we can get rid of irrelevant white pixels. But there's a catch: the white pixel zone that marks the drone is reduced too. Don't worry: dilation is going to help us! Just like the pupils in your eyes are enlarged (dilated) when it gets dark, the white pixel zones that survived erosion can get enlarged too... Again, details vary by CvInvoke.Dilate parameters but generally speaking: if a pixel is black but has a white neighbor, it becomes white too. The more iterations are run, the more the white zones are enlarged. This is an example of dilation:

Dilation...

On the left we have the same input image as used in the erosion example and on the right we can see the result of a single call to the Dilate method (again with a 3 by 3 kernel matrix). Notice how each pixel obtains the maximal value of its surroundings (if any neighboring pixel is white, it becomes white too)...

Erosion followed by dilation is such a common transformation that OpenCV has methods which combine the two into one opening operation, but using separate Erode and Dilate calls gives you a bit more control. Below you can see how opening cleared the noise and enhanced the white spot that marks the drone position (a single-call version is sketched after the picture):

Opening...
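For reference, here's a minimal sketch of the same opening expressed as a single CvInvoke.MorphologyEx call (my assumption of equivalent parameters - the sample itself sticks to separate calls; the 3x3 rectangular kernel mirrors the default used when null is passed to Erode/Dilate):

// Opening = erosion followed by dilation in one call (sketch, not part of the sample)
Mat kernel = CvInvoke.GetStructuringElement(ElementShape.Rectangle, new Size(3, 3), new Point(-1, -1));
CvInvoke.MorphologyEx(binaryDiffFrame, denoisedDiffFrame, MorphOp.Open, kernel,
    new Point(-1, -1), ErodeIterations, BorderType.Default, new MCvScalar(1));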

 

STEP 5: Contour detection

Once all the above image preparation steps are done, we have a binary image which is suitable input for the contour detection provided by the DetectObject method:

private static void DetectObject(Mat detectionFrame, Mat displayFrame)
{
    using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
    {
        // Build list of contours
        CvInvoke.FindContours(detectionFrame, contours, null, RetrType.List, ChainApproxMethod.ChainApproxSimple);

        // Selecting largest contour
        if (contours.Size > 0)
        {
            double maxArea = 0;
            int chosen = 0;
            for (int i = 0; i < contours.Size; i++)
            {
                VectorOfPoint contour = contours[i];

                double area = CvInvoke.ContourArea(contour);
                if (area > maxArea)
                {
                    maxArea = area;
                    chosen = i;
                }
            }

            // Draw on a frame
            MarkDetectedObject(displayFrame, contours[chosen], maxArea);
        }
    }
}

The method takes the binary difference image (detectionFrame) on which contours will be detected and another Mat instance (displayFrame) on which the detected object will be marked (it's a copy of the unprocessed frame)...

CvInvoke.FindContours takes the image and runs a contour detection algorithm to find boundaries between black (zero) and white (non-zero) pixels on an 8-bit single channel image - our Mat instance with the result of AbsDiff->CvtColor->Threshold->Erode->Dilate suits it just fine!

A contour is a VectorOfPoint; because an image might have many contours, we keep them inside a VectorOfVectorOfPoint. In case many contours get detected, we want to pick the largest of them. This is easy thanks to the CvInvoke.ContourArea method...

Read the docs about contour hierarchy and approximation methods if you are curious about the RetrType.List and ChainApproxMethod.ChainApproxSimple enum values seen in the CvInvoke.FindContours call. This is a good read too...

 

STEP 6: Drawing and writing on a frame

Once we've found the drone (that is, we have a contour that marks its position) it would be good to present this information to the user. This is done by the MarkDetectedObject method:

private static void MarkDetectedObject(Mat frame, VectorOfPoint contour, double area)
{
    // Getting minimal rectangle which contains the contour
    Rectangle box = CvInvoke.BoundingRectangle(contour);

    // Drawing contour and box around it
    CvInvoke.Polylines(frame, contour, true, drawingColor);
    CvInvoke.Rectangle(frame, box, drawingColor);

    // Write information next to marked object
    Point center = new Point(box.X + box.Width / 2, box.Y + box.Height / 2);

    var info = new string[] {
        $"Area: {area}",
        $"Position: {center.X}, {center.Y}"
    };

    WriteMultilineText(frame, info, new Point(box.Right + 5, center.Y));
}

The method uses CvInvoke.BoundingRectangle to find the minimal box (Rectangle) that surrounds the entire contour. The box is later drawn with a call to CvInvoke.Rectangle. The contour itself is plotted by the CvInvoke.Polylines method, which takes a list of points that describe the line. You can notice that both drawing methods receive the drawingColor parameter; it is an instance of MCvScalar defined this way:

private static MCvScalar drawingColor = new Bgr(Color.Red).MCvScalar;

The Bgr structure constructor can take 3 values that define its individual color components, or it can take a Color structure like in my example.
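For example, an equivalent definition with explicit components would look like this (note the blue-green-red order - red is the last value):

private static MCvScalar drawingColor = new Bgr(0, 0, 255).MCvScalar; // blue = 0, green = 0, red = 255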

Important: the Point, Rectangle and Color structures come from the System.Drawing assembly, which by default is not referenced in the new console application template, so you need to add a reference to System.Drawing.dll yourself.

Information about the detected object's location is written by the WriteMultilineText helper method (the same method is used to print info about frame number and processing time). This is the code:

private static void WriteMultilineText(Mat frame, string[] lines, Point origin)
{
    for (int i = 0; i < lines.Length; i++)
    {
        int y = i * 10 + origin.Y; // Moving down on each line
        CvInvoke.PutText(frame, lines[i], new Point(origin.X, y), FontFace.HersheyPlain, 0.8, drawingColor);
    }
}

In each invocation of the CvInvoke.PutText method, the y coordinate of the line is increased so the lines don't collide with each other...

This is how a frame captured from the video looks after drawing and writing is applied:

Drone marking...

 

STEP 7: Showing it all

In part 1 you've seen that the CvInvoke.Imshow method can be used to present a window with an image (a Mat instance). The method below is called for every video frame, so the user has a chance to see the various stages of image processing and the final result:

private static void ShowWindowsWithImageProcessingStages()
{
    CvInvoke.Imshow(RawFrameWindowName, rawFrame);
    CvInvoke.Imshow(GrayscaleDiffFrameWindowName, grayscaleDiffFrame);
    CvInvoke.Imshow(BinaryDiffFrameWindowName, binaryDiffFrame);
    CvInvoke.Imshow(DenoisedDiffFrameWindowName, denoisedDiffFrame);
    CvInvoke.Imshow(FinalFrameWindowName, finalFrame);
}

Displaying intermediate steps is a great debugging aid for any image processing application (I didn't show separate windows for erosion and dilation because only 6 windows showing my test video fit on a full HD screen):

All windows...

 

SUMMARY

This three part series assumed that you were completely new to image processing with OpenCV/Emgu CV. Now you have some idea what these libraries are and how to use them in a Visual Studio 2017 project while following the coding approach recommended for version 3 of the libs...

You've learned how to grab frames from video and how to prepare them for contour detection using fundamental image processing operations (difference, color space conversion, thresholding and morphological transformations). You also know how to draw shapes and text on an image. Good job!

Computer vision is a complex yet very interesting topic (its importance is constantly increasing), and you've just made your first step in this field - who knows, maybe one day I will ride in an autonomous vehicle powered by your software? :)

 

 

Detecting a Drone - OpenCV in .NET for Beginners (Emgu CV 3.2, Visual Studio 2017). Part 2

OVERVIEW

In Part 1 you learned what OpenCV is, what the role of the Emgu CV wrapper is, and how to create a Visual Studio 2017 C# project that utilizes the two libraries. In this part I will show you how to loop through frames captured from a video file. Check the first part to watch the demo video and find information about the sample project (all the interesting stuff is inside Program.cs - keep this file open in a separate tab as its fragments will be shown in this post)...

 

STEP 1: Capturing video from file

Before any processing can happen we need to obtain a frame from the video file. This can be easily done by using the VideoCapture class (many tutorials mention the Capture class instead, but it is not available in recent Emgu versions).

Check the Main method from our sample project:

private const string BackgroundFrameWindowName = "Background Frame";
// ...
private static Mat backgroundFrame = new Mat(); // Frame used as base for change detection
// ...

static void Main(string[] args)
{
    string videoFile = @"PUT A PATH TO VIDEO FILE HERE!";

    using (var capture = new VideoCapture(videoFile)) // Loading video from file
    {
        if (capture.IsOpened)
        {
            // ...

            // Obtaining and showing first frame of loaded video (used as the base for difference detection)
            backgroundFrame = capture.QueryFrame();
            CvInvoke.Imshow(BackgroundFrameWindowName, backgroundFrame);

            // Handling video frames (image processing and contour detection)
            VideoProcessingLoop(capture, backgroundFrame);
        }
        else
        {
            Console.WriteLine($"Unable to open {videoFile}");
        }
    }
}

VideoCapture has four constructor versions. The overload we are using takes a string parameter that is a path to a video file or video stream. Other versions allow us to connect to cameras. If you design your program right, switching from file input to a webcam might be as easy as changing the new VideoCapture call!
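For instance, something like this should do the trick (a sketch - index 0 typically selects the default system camera):

using (var capture = new VideoCapture(0)) // Camera index instead of a file path
{
    // The rest of the frame grabbing and processing code stays the same...
}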

Once the VideoCapture instance is created we can confirm that opening went fine by accessing the IsOpened property (maybe the path is wrong or codecs are missing?).

VideoCapture offers a few ways of acquiring frames, but the one I find most convenient is a call to the QueryFrame method. This method returns a Mat class instance (you know it already from part 1) and moves to the next frame. If the next frame cannot be found, then null is returned. We can use this fact to easily loop through the video.

 

STEP 2: Loading and presenting background frame

Our drone detection project is based on finding the difference between the background frame and other frames. The assumption is that we can treat the first frame obtained from the video as the background, hence the call to QueryFrame right after creating the VideoCapture object:

 backgroundFrame = capture.QueryFrame();

After the background is loaded we can check how it looks with a call to the Imshow method (you know it from part 1 too):

CvInvoke.Imshow(BackgroundFrameWindowName, backgroundFrame);

Is finding a (meaningful!) difference in a video always as easy as subtracting frames? No, it isn't. First of all, the background might not be static (imagine that the drone was flying in front of trees moved by wind, or that the lighting in a room was changing significantly). The second challenge might come from movements of the camera. Having a fixed background and camera position keeps our drone detection task simple enough for a beginner's OpenCV tutorial, plus it's not completely unrealistic. Video detection/recognition is often used in a fully controlled environment, such as part of a factory... OpenCV is capable of handling more complex scenarios - you can read about background subtraction techniques and optical flow to get a hint...
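As a teaser, here's a rough sketch of how an adaptive background model could be plugged in with Emgu's BackgroundSubtractorMOG2 (the parameter values are my assumptions and this approach is not used anywhere in this series):

using Emgu.CV;
using Emgu.CV.VideoSurveillance;

// Learns the background over time instead of relying on a single static frame
var subtractor = new BackgroundSubtractorMOG2(500, 16, true); // history, variance threshold, shadow detection

Mat foregroundMask = new Mat();
subtractor.Apply(rawFrame, foregroundMask); // White pixels mark areas that differ from the learned background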

 

STEP 3: Looping through video frames

We know that we can use QueryFrame to get a single frame image (Mat instance) and progress to the next frame, and we know that QueryFrame returns null if it can't go any further. Let's use this knowledge to build a method that goes through frames in a loop:

private static void VideoProcessingLoop(VideoCapture capture, Mat backgroundFrame)
{
    var stopwatch = new Stopwatch(); // Used for measuring video processing performance

    int frameNumber = 1;
    while (true) // Loop video
    {
        rawFrame = capture.QueryFrame(); // Getting next frame (null is returned if no further frame exists)

        if (rawFrame != null) 
        {
            frameNumber++;

            stopwatch.Restart();
            ProcessFrame(backgroundFrame, Threshold, ErodeIterations, DilateIterations);
            stopwatch.Stop();

            WriteFrameInfo(stopwatch.ElapsedMilliseconds, frameNumber);
            ShowWindowsWithImageProcessingStages();

            int key = CvInvoke.WaitKey(0); // Wait indefinitely until key is pressed

            // Close program if Esc key was pressed (any other key moves to next frame)
            if (key == 27)
                Environment.Exit(0);
        }
        else
        {
            capture.SetCaptureProperty(CapProp.PosFrames, 0); // Move to first frame
            frameNumber = 0;
        }
    }
}

In each loop iteration a frame is grabbed from the video file. It is then passed to the ProcessFrame method, which does image differencing, noise removal, contour detection and drawing (it will be discussed in detail in the next post)... The call to ProcessFrame is surrounded with System.Diagnostics.Stopwatch usage - this way we can measure video processing performance. It took my laptop only about 1.5 ms to fully handle each frame - I've told you OpenCV is fast! :)

If QueryFrame returns null, the program moves back to the first frame by calling the SetCaptureProperty method on the VideoCapture instance (the video will be processed again).

WriteFrameInfo puts text in the frame's upper-left corner with information about its number and how long it took to process it. ShowWindowsWithImageProcessingStages ensures that we can see the current (raw) frame, background frame, intermediate frames and final frame in separate windows... Both methods will be shown in the next post.

The while loop is going to spin forever unless program execution is stopped by the Escape key being pressed in any of the windows that show frames (not the console window!). If 0 is passed as the WaitKey argument, the program waits until some key is pressed. This lets you look at each frame as long as you want. If you pass another number to WaitKey, the program will wait until a key is pressed or the delay elapses. You might use it to automatically play the video at a specified frame rate:

int fps = (int)capture.GetCaptureProperty(CapProp.Fps);
int key = CvInvoke.WaitKey(1000 / fps); // e.g. 40 ms delay for the 25 fps test video

Warning: One thing you might notice while processing videos is that moving through a file is not always as easy as setting CapProp.PosFrames to the desired number. Your experience might vary from format to format. This is because video files are optimized for playing forward at natural speed and frames might not be simply kept as a sequence of images. A Full HD (1920x1080) movie has over 2 million pixels in each frame. Now let's say we have an hour of video at 30 FPS -> 3600 * 30 * 2,073,600 = 223,948,800,000 pixels. Independent frame compression is not enough to crush that number! No wonder some people need to dedicate their scientific/software careers to video compression...

Ok, enough for now - next part coming soon!

Update: Part 3 is ready!

Detecting a Drone - OpenCV in .NET for Beginners (Emgu CV 3.2, Visual Studio 2017). Part 1

INTRO

Emgu CV is a .NET wrapper for OpenCV (Open Source Computer Vision Library) which is a collection of over 2500 algorithms focused on real-time image processing and machine learning. OpenCV lets you write software for:

  • face detection,
  • object identification,
  • motion tracking,
  • image stitching,
  • stereo vision,
  • and much, much more...

OpenCV is written in highly optimized C/C++, supports multi-core execution and heterogeneous execution platforms (CPU, GPU, DSP...) thanks to OpenCL. The project was launched in 1999 by Intel Research and is now actively developed by open source community members and contributors from companies like Google, Microsoft or Honda...

My experience with Emgu CV/OpenCV comes mostly from working on a paintball turret project (which I use to take a break from "boring" banking stuff at work). I'm far from a computer vision expert, but I know enough to teach you how to detect a mini quadcopter flying in a room:

In the upper-left corner you can see the frame captured from the video file; following that is the background frame (a static background and camera make our task simpler)... Next to it are various stages of image processing run before drone (contour) detection is executed. The last frame shows the original frame with the drone position marked. Job done! Oh, and if you are wondering what the "snow" seen in the video is: these are some particles I threw in to make the video a bit more "noisy"...

I assume that you know a bit about C# programming but are completely new to Emgu CV/OpenCV.

By the end of this tutorial you will know how to:

  • use Emgu CV 3.2 in a C# 7 (Visual Studio 2017) application (most tutorials available online are quite outdated!),
  • capture frames from video,
  • find changes between images (diff and binary threshold),
  • remove noise with erode and dilate (morphological operations),
  • detect contours,
  • draw and write on a frame

Sounds interesting? Read on!

 

THE CODE

I plan to give a detailed description of the whole program (don't worry: it's just about 200 lines) but if you would like to jump straight to the code visit this GitHub repository: https://github.com/morzel85/blog-post-emgucv. It's a simple console app - I've put everything into Program.cs so you can't get lost!

Mind that because the Emgu CV/OpenCV binaries are quite large, they are not included in the repo. This should not be a problem because Visual Studio 2017 should be able to automatically download (restore) the packages...

Here you can download the video I've used for testing: http://morzel.net/download/emgu_cv_drone_test_video.mp4 (4.04 MB, MPEG4 H264 640x480 25fps).

 

STEP 0: Creating a project with Emgu CV

To start, let's use Visual Studio Community 2017 to create a new console application:

New project...

Now we need to add Emgu CV to our project. The easiest way to do it is to use NuGet to install the Emgu CV package published by Emgu Corporation. To do so, run the "Install-Package Emgu.CV" command in the Package Manager Console or utilize the Visual Studio UI:

Adding NuGet package...

If all goes well, packages.config and DLL references should look like this (you don't have to worry about ZedGraph):

Packages and references...

Now we are ready to test if OpenCV's magic is available to us through the Emgu CV wrapper library. Let's do it by creating a super simple program that loads an image file and shows it in a window with the obligatory "Hello World!" title:

using Emgu.CV; // Contains Mat and CvInvoke classes

class Program
{
    static void Main(string[] args)
    {
        Mat picture = new Mat(@"C:\Users\gby\Desktop\Krakow_BM.jpg"); // Pick some path on your disk!
        CvInvoke.Imshow("Hello World!", picture); // Open window with image
        CvInvoke.WaitKey(); // Render image and keep window opened until any key is pressed
    }
}

Run it and you should see a window with the image you've selected. Here's what I got - visit Kraków if you like my picture :)

Window with image...

The above code loads a picture from a file into a Mat class instance. Mat is an n-dimensional dense array containing a pointer to the image matrix and a header describing this matrix. It supports a reference counting mechanism that saves memory when multiple image processing operations act on the same data... Don't worry if it sounds a bit confusing. All you need to know now is that we can load images (from files, webcams, video frames etc.) into Mat objects. If you are curious, read this nice description of Mat.

The other interesting thing you can see in the code is the CvInvoke class. You can use it to call OpenCV functions from your C# application without dealing with the complexity of operating on native code and data structures from managed code - the Emgu wrapper will do it for you through the PInvoke mechanism.

Ok, so now you have some idea of what the Emgu CV/OpenCV libraries are and how to bring them into your application. Next post coming soon...

Update: Part 2 is ready!

Save Yourself Some Troubles With TortoiseGit pre-commit Hook

INTRO

Back in 2013, when I was using SVN, I wrote a post about creating a TortoiseSVN pre-commit hook that can prevent someone from committing code which is not meant to be shared (e.g. some hack done for troubleshooting). The idea was to mark “uncommittable” code with a comment containing the NOT_FOR_REPO text and block the commit if such text is found in any of the modified or added files… The technique saved me a few times and proved to be useful to others…

These days I’m mostly using Git, and with Git’s decentralized nature and cheap branching the above technique is less needed, but it might still be helpful. The good news is that the same hook can be used both in TortoiseSVN and in TortoiseGit (I like to do commits with the Tortoise UI and reserve the command line for things like interactive rebase)…

First I will show you how to implement a pre-commit hook (I will use C# but you can use anything that Windows can run) and then you will see how to set up the hook in TortoiseGit Settings...

 

TORTOISE PRE-COMMIT HOOK IN C#

You can find the full code sample in this GitHub repository (it's a C# 6 console project from Visual Studio 2015, targeting .NET 4.5.2). Below is the class that implements the hook:

using System;
using System.IO;
using System.Text.RegularExpressions;

namespace DontLetMeCommit
{
    class Program
    {
        const string NotForRepoMarker = "NOT_FOR_REPO";

        static void Main(string[] args)
        {
            string[] affectedPaths = File.ReadAllLines(args[0]);
                        
            foreach (string path in affectedPaths)
            {
                if (ShouldFileBeChecked(path) && HasNotForRepoMarker(path))
                {
                    string errorMessage = $"{NotForRepoMarker} marker found in {path}";
                    Console.Error.WriteLine(errorMessage); // Notice write to Error output stream!
                    Environment.Exit(1);
                }
            }
        }

        static bool ShouldFileBeChecked(string path)
        {
            // Here we are choosing to check only selected file types but you may want to check
            // all the files except specified types or skip filtering altogether...
            Regex filePattern = new Regex(@"^.*\.(cs|js|xml|config)$", RegexOptions.IgnoreCase);

            // List of files affected by the commit might include (re)moved files so we check if file exists...
            return File.Exists(path) && filePattern.IsMatch(path);
        }

        static bool HasNotForRepoMarker(string path)
        {
            using (StreamReader reader = File.OpenText(path))
            {
                string line = reader.ReadLine();

                while (line != null)
                {
                    if (line.Contains(NotForRepoMarker)) 
                        return true; // "Uncommittable" code marker found - let's block the commit!

                    line = reader.ReadLine();
                }
            }

            return false;
        }
    }
}

How does it work?

When Tortoise calls a pre-commit hook, it passes a path to a temporary file as the first argument (args[0]). Each line in that file contains a path to a file that is affected by the commit. The hook reads all the lines (paths) from the tmp file and checks whether the NOT_FOR_REPO text appears in any of the listed files. If that's the case, the commit is blocked by ending the program with a non-zero code (the call to Environment.Exit). Before that happens, a message is printed to the Error stream (Tortoise will present this message to the user).

The HasNotForRepoMarker method checks a file by reading it line by line (via StreamReader) and stopping as soon as the marker is found. On my laptop a full scan of a 100 MB text file with one million lines takes about half a second, so I guess it's fast enough :)

The ShouldFileBeChecked method is there to decide if a path is interesting for us. We definitely don't want to check paths of removed files, hence the File.Exists call. I've also added Regex file name pattern matching to show you that you can be quite picky about which files you wish to check... That's it - compile it and you can use it as a hook!

 

ENABLING THE HOOK IN TORTOISEGIT SETTINGS

To set up the hook, first right-click any folder and open the TortoiseGit | Settings menu (I'm using TortoiseGit 2.1.0.0):

Menu step 1

Then go to the Hook Scripts section and click the Add... button:

Menu step 2...

Now choose the Pre-Commit Hook type, next choose a Working Tree Path (select a folder which you want to protect with the hook - its subdirectories will be covered too!), and then choose the Command Line To Execute (in the case of a C# hook this is an *.exe file). Make sure that the Wait for the script to finish and Hide script while running checkboxes are ticked (the first checkbox makes sure that the commit is not going to complete until all files are scanned and the second prevents a console window from appearing). Here's how my settings look:

Menu step 3...

Now click OK and voila - you have a pre-commit hook. Let's test it...

 

TESTING THE HOOK 

To check if the hook is working, I've added a NOT_FOR_REPO comment in one of the files from the C:\Examples\Repos\blog-post-sonar Git repository:

namespace SonarServer
{
    class Program
    {
        const byte DataSampleStartMarker = 255; // NOT_FOR_REPO
        static List<byte> rawSonarDataBuffer = new List<byte>();
		

I also did some other modifications in a different file and removed one file, so my commit window looked like this:

Git commit...

After clicking the Commit button the hook did its job and blocked the commit:

Git commit blocked

Cool, and what if you really want to commit this even if the NOT_FOR_REPO marker is present? In that case you can do the commit through the Git command line, because a TortoiseGit hook is something different from a "native" Git hook (from .git/hooks).

And here's a proof that the same hook works when used with TortoiseSVN:

SVN commit blocked...

The TortoiseSVN window looks a bit nicer and has a Retry without hooks option...