Object Detection and Tracking using Automated Image Annotation with Residual Network based Faster R-CNN Model


K. Vijiyakumar
V. Govindasamy

Abstract

Object detection and tracking have gained considerable importance among
researchers and practitioners, with applications in vehicle navigation, augmented
reality, surveillance, and related areas. Object tracking is a subfield of computer
vision that aims to follow objects as they move across a sequence of video frames.
Deep learning (DL) approaches are now widely used for object tracking because they
improve both the accuracy and the speed of the tracking process. This paper presents
a robust DL-based object detection and tracking algorithm, the Automated Image
Annotation with ResNet-based Faster R-CNN (AIA-FRCNN) model. The AIA-FRCNN method
performs image annotation using a Discriminative Correlation Filter with Channel
and Spatial Reliability tracker (DCF-CSRT). The AIA-FRCNN model uses Faster R-CNN
as the object detector and tracker, which comprises a region proposal network (RPN)
and Fast R-CNN. The RPN is a fully convolutional network that simultaneously
predicts bounding boxes and objectness scores; once trained, it generates
high-quality region proposals that Fast R-CNN uses for detection. In addition, a
Residual Network (ResNet-101) is used as the shared CNN backbone that generates the
feature maps. The performance of the ResNet-101 model is further improved with the
Adam optimizer, whose hyperparameters, namely the learning rate, batch size,
momentum, and weight decay, are tuned. Finally, a softmax layer classifies the
detected objects. The performance of the AIA-FRCNN method has been assessed on a
benchmark dataset and compared in detail with existing approaches. The experimental
results clearly indicate the superior performance of the AIA-FRCNN model under
diverse aspects.
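
As a rough illustration of the automated annotation step, OpenCV's CSRT tracker (a DCF tracker with channel and spatial reliability) can propagate a single hand-drawn box through a clip to produce weak labels for later training. This is a minimal sketch: the video path, initial box, and variable names are assumptions for illustration, not the configuration used in the paper.

```python
import cv2

# Read the first frame of the clip to be annotated ("video.mp4" is a placeholder).
cap = cv2.VideoCapture("video.mp4")
ok, frame = cap.read()

# Initial bounding box (x, y, width, height) for the object of interest (placeholder values).
init_box = (120, 80, 60, 90)

# DCF-CSRT tracker from opencv-contrib-python.
tracker = cv2.TrackerCSRT_create()
tracker.init(frame, init_box)

annotations = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, box = tracker.update(frame)
    if found:
        # Store the tracked box as a weak label for this frame.
        annotations.append(tuple(int(v) for v in box))

cap.release()
print(f"Auto-annotated {len(annotations)} frames")
```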
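
The detection side of the pipeline (shared ResNet-101 backbone, RPN, Fast R-CNN head with softmax classification, and Adam optimizer) can be sketched with torchvision's building blocks. This assumes torchvision 0.13 or later; the class count, hyperparameter values, and dummy input are illustrative placeholders, not the settings reported in the paper.

```python
import torch
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

# ResNet-101 feature extractor shared by the RPN and the Fast R-CNN head.
backbone = resnet_fpn_backbone(backbone_name="resnet101", weights=None)

# FasterRCNN wires together the RPN (region proposals + objectness scores) and
# the Fast R-CNN head (per-proposal softmax classification and box refinement).
model = FasterRCNN(backbone, num_classes=91)  # 91 classes is a COCO-style placeholder

# Adam with the hyperparameters named in the abstract; the values below are
# placeholders, not the tuned settings from the paper.
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-4,                 # learning rate
    betas=(0.9, 0.999),      # momentum-style moving-average coefficients
    weight_decay=5e-4,       # weight decay (L2 regularization)
)
# Batch size is controlled by the data loader feeding the model, not by Adam.

# Inference on a dummy frame: each result dict holds boxes, labels, and scores.
model.eval()
with torch.no_grad():
    detections = model([torch.rand(3, 480, 640)])
print(detections[0]["boxes"].shape, detections[0]["scores"][:5])
```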
