Feedback Optical Flow Convolutional Neural Network.
In IEEE Access, vol. 6, pp. 6048-6057, doi:
10.1109/ACCESS.2017.2771389.
Jiang, Z., Zhao, L., Li, S., Jia, Y. (2020). Real-time object detection method based on improved YOLOv4-tiny. arXiv preprint arXiv:2011.04244.
Krizhevsky, A., Sutskever, I., Hinton, G. E. (2012). ImageNet Classification with Deep Convolutional Neural Networks. In F. Pereira, C. J. C. Burges, L. Bottou, K. Q. Weinberger (Eds.), Advances in Neural Information Processing Systems 25 (pp. 1097-1105). Curran Associates, Inc.
Kumar, A., Kalia, A., Sharma, A. et al. (2021). A hybrid tiny YOLO v4-SPP module based improved face mask detection vision system. In Journal of Ambient Intelligence and Humanized Computing. https://doi.org/10.1007/s12652-021-03541-x
Lalitha, V. L., Raju, S. H., Sonti, V. K., Mohan, V. M.
(2021). Customized Smart Object Detection: Statistics
of detected objects using IoT. In International
Conference on Artificial Intelligence and Smart
Systems (ICAIS), pp. 1397-1405, doi:
10.1109/ICAIS50930.2021.9395913.
Lin, T., Dollár, P., Girshick, R., He, K., Hariharan, B.,
Belongie, S. (2017). Feature Pyramid Networks for
Object Detection. In 2017 IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), pp.
936-944, doi: 10.1109/CVPR.2017.106.
Lin, T., Goyal, P., Girshick, R., He, K., Dollár, P. (2020).
Focal Loss for Dense Object Detection. In IEEE
Transactions on Pattern Analysis and Machine
Intelligence, vol. 42, no. 2, pp. 318-327, doi:
10.1109/TPAMI.2018.2858826.
Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C. Y., Berg, A. (2016). SSD: Single Shot MultiBox Detector. arXiv preprint arXiv:1512.02325.
Liu, S., Qi, L., Qin, H., Shi, J., Jia, J. (2018). Path
Aggregation Network for Instance Segmentation. In
2018 IEEE/CVF Conference on Computer Vision and
Pattern Recognition, pp. 8759-8768, doi:
10.1109/CVPR.2018.00913.
Nowozin, S. (2014). Optimal decisions from probabilistic models: the intersection-over-union case. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 548-555. https://doi.org/10.1109/CVPR.2014.7.
Rane, S., Dubey, A., Parida, T. (2017). Design of IoT based
intelligent parking system using image processing
algorithms. In International Conference on Computing
Methodologies and Communication (ICCMC), pp.
1049-1053, doi: 10.1109/ICCMC.2017.8282631.
Redmon, J., Divvala, S., Girshick, R., Farhadi, A. (2016). You only look once: unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, pp. 779-788. https://doi.org/10.1109/CVPR.2016.91.
Redmon, J., Farhadi, A. (2017). YOLO9000: better, faster, stronger. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, pp. 6517-6525. https://doi.org/10.1109/CVPR.2017.690.
Redmon, J., Farhadi, A. (2018). YOLOv3: An incremental improvement. arXiv preprint arXiv:1804.02767.
Simonyan, K., Zisserman, A. (2015). Very Deep Convolutional Networks for Large-Scale Image Recognition. In 3rd International Conference on Learning Representations (ICLR 2015). https://arxiv.org/abs/1409.1556.
Srivastava, S., Divekar, A. V., Anilkumar, C. et al. (2021). Comparative analysis of deep learning image detection algorithms. In Journal of Big Data, vol. 8, article 66. https://doi.org/10.1186/s40537-021-00434-w
Uddin, M. I., Alamgir, M. S., Rahman, M. M., Bhuiyan, M.
S., Moral, M. A. (2021). AI Traffic Control System
Based on Deepstream and IoT Using NVIDIA Jetson
Nano. In 2nd International Conference on Robotics,
Electrical and Signal Processing Techniques
(ICREST), pp. 115-119, doi:
10.1109/ICREST51555.2021.9331256.
Wu, X., Xu, H., Wei, X., Wu, Q., Zhang, W., Han, X.
(2020). Damage Identification of Low Emissivity
Coating Based on Convolution Neural Network.
In IEEE Access, vol. 8, pp. 156792-156800, doi:
10.1109/ACCESS.2020.3019484.
Zhang, Y., Zhao, P., Li, D., Konstantin, K. (2020). Spatial
Attention Based Real-Time Object Detection Network
for Internet of Things Devices. In IEEE Access, vol. 8,
pp. 165863-165871, doi:
10.1109/ACCESS.2020.3022645.