
Can you detect my aircraft?

  • Writer: Leo Liu
  • Dec 10, 2018
  • 3 min read

Updated: May 29, 2019


Background

According to recent reports, a growing number of aviation safety accidents and incidents have occurred because the attention of pilots and tower controllers is overloaded. In 2016 in Shanghai, an aircraft taxied across a runway without noticing another aircraft already accelerating for takeoff on that very same runway; a collision would have occurred if the pilot of the departing aircraft had not pulled up promptly. An animation depicting the situation is shown below (See Gif 1).

Gif 1 - Shanghai Incident Demo

This type of incident could have been prevented, or at least mitigated at a very early stage, if an onboard aircraft detection system had been in place. That motivated me to use an image-segmentation neural network to help bridge this technology gap. I am also excited to develop a viable product that eases the workload of pilots and tower controllers, further enhances aviation safety, and improves ground traffic efficiency. Such a system could be integrated with Automatic Dependent Surveillance-Broadcast (ADS-B) to help pilots recognize aircraft that may trigger conflict events along the taxi path.

Data

Over 3,000 images containing aircraft were downloaded from the Microsoft Common Objects in Context (MS COCO) dataset, and their ground-truth masks were extracted with the API demonstrated in the COCO GitHub repository. These images include photographs of aircraft that vary in type, distance, flight phase, and perspective. Ideally, the dataset would be biased towards photographs of commercial aircraft taxiing on the ground. The dataset was split into 70% training and 30% testing.
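The 70/30 split can be sketched as a simple shuffled index split. The snippet below is illustrative only: the image count and random seed are stand-ins, not the exact values used in the project.

```python
import numpy as np

# Hypothetical sketch of a 70/30 train/test split over image indices.
# n_images and the seed are illustrative, not the project's exact values.
rng = np.random.default_rng(42)      # fixed seed for a reproducible split
n_images = 3000
indices = rng.permutation(n_images)  # shuffle all image indices

split = int(0.7 * n_images)          # 70% of the data goes to training
train_idx, test_idx = indices[:split], indices[split:]
```

The same index arrays can then be used to pick matching image/mask pairs, so an image and its ground-truth mask always land in the same split.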

Design

The general workflow is segmented into four steps: data collection, image preprocessing, model construction and real-world application (See Figure 1).


Figure 1 - General Workflow

In the image preprocessing step, all images and masks were resized to 256 × 256 pixels with scikit-image and verified with NumPy and Matplotlib so that the data was ready for modeling. To segment the images pixel by pixel, I built a U-Net convolutional neural network (See Figure 2) from scratch, with a batch normalization layer before each 2D convolution layer to prevent strong activations from being amplified through subsequent layers.
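The resizing step can be sketched as below. The image and mask here are synthetic stand-ins, not actual COCO data; the key details are the 256 × 256 target size and keeping the mask binary after resizing.

```python
import numpy as np
from skimage.transform import resize

# Illustrative only: synthetic stand-ins for a COCO photo and its mask.
image = np.random.rand(480, 640, 3)      # stand-in RGB image
mask = np.random.rand(480, 640) > 0.5    # stand-in binary ground-truth mask

# Resize the image to the 256 x 256 input size used by the model
image_small = resize(image, (256, 256), anti_aliasing=True)

# order=0 (nearest neighbour) keeps the resized mask strictly binary
mask_small = resize(mask.astype(float), (256, 256), order=0) > 0.5
```

Shapes can then be verified with NumPy (e.g. checking `image_small.shape`) before batching the arrays for training.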


Figure 2 - U-net Neural Network
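A rough sketch of this kind of architecture in Keras is shown below: a small U-Net with batch normalization before each 2D convolution, as described above. The filter counts and depth here are illustrative, not the exact configuration I trained.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Batch normalization before each 2D convolution, as in the text
    x = layers.BatchNormalization()(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(256, 256, 3)):
    inputs = layers.Input(input_shape)
    # Encoder: convolutions followed by downsampling
    c1 = conv_block(inputs, 16)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 32)
    p2 = layers.MaxPooling2D()(c2)
    # Bottleneck
    b = conv_block(p2, 64)
    # Decoder: upsampling with skip connections from the encoder
    u2 = layers.UpSampling2D()(b)
    c3 = conv_block(layers.concatenate([u2, c2]), 32)
    u1 = layers.UpSampling2D()(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), 16)
    # One sigmoid channel: per-pixel probability of "aircraft"
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)
    return Model(inputs, outputs)

model = build_unet()
```

The output is a single-channel 256 × 256 probability map, which can be thresholded to obtain the binary segmentation mask.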

Figure 3 - IOU Demonstration

Basically, the model takes in a full RGB image and generates a fully segmented image by examining each individual pixel to determine whether it belongs to an aircraft. I used intersection over union (IOU - See Figure 3) as my metric and 1 - IOU as my loss function. The model was trained for 70 epochs with a batch size of 16. Finally, the model was applied to detect aircraft in real-world, unlabeled videos, with MoviePy used to decode the videos and OpenCV to draw bounding boxes.
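For binary masks, the IOU metric and the 1 - IOU loss can be written in a few lines of NumPy. This is a plain NumPy sketch for clarity; in training, the same quantities would be computed with the deep-learning framework's tensor ops so they are differentiable.

```python
import numpy as np

def iou(y_true, y_pred, eps=1e-7):
    """Intersection over union for binary masks; eps avoids 0/0
    when both masks are empty."""
    y_true = y_true.astype(bool)
    y_pred = y_pred.astype(bool)
    intersection = np.logical_and(y_true, y_pred).sum()
    union = np.logical_or(y_true, y_pred).sum()
    return (intersection + eps) / (union + eps)

def iou_loss(y_true, y_pred):
    # The 1 - IOU loss described in the text: 0 for a perfect mask,
    # approaching 1 as the overlap vanishes.
    return 1.0 - iou(y_true, y_pred)
```
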

Results

Figure 4 - Histogram of Test Set IOU

The average IOU on the test set is 0.83. The distribution of test IOU is shown as a histogram (See Figure 4). Overall, my model predicts very well (See Figure 5), except for some false negatives when the aircraft is too far away (See Figure 6) or only partially captured (See Figure 7). In addition, some false positives occur when other objects have texture or color similar to an aircraft (See Figure 8).



Figure 5 - Excellent Prediction

Figure 6 - False Negative of Remote Aircraft

Figure 7 - False Negative of Partially Included Aircraft

Figure 8 - False Positive of Similar Texture

To further improve the model, more labeled images with the desired features need to be fed into the neural network. Separate models for separate tasks might also help. In any case, this is a good starting point for a product that could save lives, improve aviation safety, and raise airport ground efficiency.


Real World Implementation Demo

There are some small bounding boxes on the engine; these can be removed by refining the function that converts the fully segmented masks into bounding boxes.
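One such refinement can be sketched as below: extract one box per connected component of the mask and drop components below a minimum area, which would suppress small spurious boxes like those on the engine. The `min_area` threshold is an illustrative value, not one tuned in the project.

```python
import numpy as np
from scipy import ndimage

def mask_to_boxes(mask, min_area=50):
    """Convert a binary segmentation mask into (x0, y0, x1, y1) boxes,
    one per connected component, dropping tiny components."""
    labeled, _ = ndimage.label(mask)
    boxes = []
    for rows, cols in ndimage.find_objects(labeled):
        area = (rows.stop - rows.start) * (cols.stop - cols.start)
        if area >= min_area:  # skip spurious small detections
            boxes.append((cols.start, rows.start, cols.stop, rows.stop))
    return boxes
```

The surviving boxes can then be drawn on each video frame with OpenCV's `cv2.rectangle`, as in the demo above.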


Future Work

As discussed above, several aspects of this project are worth extra effort. First, I will simplify the model so that it runs responsively in real time. Then, more images with the desired features, such as remote and partially visible aircraft, will be collected and labeled, and the model will be trained extensively on them to address the false-negative issue. Finally, I will apply transfer learning to other safety-critical objects at the airport. For example, the model could be used to detect the runway centerline so that aircraft can land safely in low-visibility weather conditions.



©2018 by Leo's data science portfolio. Proudly created with Wix.com
