A Literature Survey on Wildlife Camera Trap Image Processing using Machine Learning Techniques

Authors

  • Shreyas Sreedhar Dept. Of Computer Science & Engineering, Jyothy Institute of Technology, Bangalore, India
  • Sandesh S Dept. Of Computer Science & Engineering, Jyothy Institute of Technology, Bangalore, India
  • Prarthana P Dept. Of Computer Science & Engineering, Jyothy Institute of Technology, Bangalore, India
  • N Karthik Pranav Dept. Of Computer Science & Engineering, Jyothy Institute of Technology, Bangalore, India
  • Prabhanjan S Dept. Of Computer Science & Engineering, Jyothy Institute of Technology, Bangalore, India
  • Srinidhi K Dept. Of Computer Science & Engineering, Jyothy Institute of Technology, Bangalore, India

DOI:

https://doi.org/10.5281/zenodo.4902949

Keywords:

Wildlife Camera Trap Images, Convolutional Neural Network, Object Detection, Image Classification, Machine Learning

Abstract

Motion-triggered wildlife camera traps are increasingly being used to remotely monitor animals and support a wide range of ecological studies across the globe. The captured imagery enables the forest department of the respective country to keep track of critically endangered species, record their behaviour, and study environmental changes in order to devise suitable conservation methods. This equipment is typically deployed within forest areas in large numbers, resulting in millions of recorded images and videos. Going through such a dataset completely and identifying the captured animals normally takes days, if not months. In this paper, we study fauna image classifiers that use convolutional neural networks to process and identify the wildlife captured by these camera traps.
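As a concrete illustration of the classification step described above, the minimal sketch below passes a single camera-trap frame through a convolutional network. It is an illustrative example only and not the pipeline of any surveyed paper: it assumes PyTorch and torchvision are installed, uses an ImageNet-pretrained ResNet-18 as a stand-in for a model fine-tuned on wildlife species, and the file name camera_trap_frame.jpg is a hypothetical placeholder.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# ImageNet-style preprocessing: resize, centre-crop, convert to tensor, normalise.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ImageNet-pretrained ResNet-18 as a stand-in for a wildlife classifier;
# a real system would fine-tune it on labelled camera-trap species data.
model = models.resnet18(pretrained=True)
model.eval()

# "camera_trap_frame.jpg" is a hypothetical example image path.
image = Image.open("camera_trap_frame.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)  # shape (1, 3, 224, 224)

with torch.no_grad():
    probs = F.softmax(model(batch)[0], dim=0)

top_prob, top_class = probs.max(dim=0)
print(f"Predicted class index {top_class.item()} (p = {top_prob.item():.2f})")
```

In practice, the surveyed systems replace the ImageNet output layer with species classes from annotated camera-trap datasets and batch-process entire image archives rather than single frames.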

References

K. Ullas Karanth, Estimating tiger Panthera tigris populations from camera-trap data using capture—recapture models, Biological Conservation, Volume 71, Issue 3, 1995, Pages 333-338, ISSN 0006-3207, https://doi.org/10.1016/0006-3207(94)00057-W. (https://www.sciencedirect.com/science/article/pii/000632079400057W)

Newey, S., Davidson, P., Nazir, S. et al. Limitations of recreational camera traps for wildlife management and conservation research: A practitioner’s perspective. Ambio 44, 624–635 (2015). https://doi.org/10.1007/s13280-015-0713-1

Parker, D.B. (1985) Learning-Logic: Casting the Cortex of the Human Brain in Silicon. Technical Report Tr-47, Center for Computational Research in Economics and Management Science. MIT Cambridge, MA.

Lecun, Y. (1985). Une procedure d'apprentissage pour reseau a seuil asymmetrique (A learning scheme for asymmetric threshold networks). In Proceedings of Cognitiva 85, Paris, France (pp. 599-604)

Rumelhart, D., Hinton, G. & Williams, R. Learning representations by back-propagating errors. Nature 323, 533–536 (1986). https://doi.org/10.1038/323533a0

Kunihiko Fukushima, Sei Miyake, Neocognitron: A new algorithm for pattern recognition tolerant of deformations and shifts in position, Pattern Recognition, Volume 15, Issue 6, 1982, Pages 455-469, ISSN 0031-3203, https://doi.org/10.1016/0031-3203(82)90024-3. (https://www.sciencedirect.com/science/article/pii/0031320382900243)

Y. Le Cun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Handwritten Digit Recognition with a Back-Propagation Network. In Advances in Neural Information Processing Systems, pages 396–404. Morgan Kaufmann, 1990.

D. H. Hubel and T. N. Wiesel. Receptive fields and functional architecture of monkey striate cortex. The Journal of Physiology, 195(1):215–243, March 1968.

Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. arXiv:1311.2524 [cs], November 2013.

D. G. Lowe. Object recognition from local scale-invariant features. In The Proceedings of the Seventh IEEE International Conference on Computer Vision, 1999, volume 2, pages 1150–1157, 1999.

N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2005. CVPR 2005, volume 1, pages 886–893, June 2005.

Mark Everingham, S. M. Ali Eslami, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. The Pascal Visual Object Classes Challenge: A Retrospective. International Journal of Computer Vision, 111(1):98–136, June 2014.

Ross Girshick. Fast R-CNN. arXiv:1504.08083 [cs], April 2015.

Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. arXiv:1506.01497 [cs], June 2015.

Matthew D. Zeiler and Rob Fergus. Visualizing and Understanding Convolutional Networks. arXiv:1311.2901 [cs], November 2013.

Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv:1409.1556 [cs], September 2014.

Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. You Only Look Once: Unified, Real-Time Object Detection. arXiv:1506.02640 [cs], June 2015.

Yu, Xiaoyuan & Wang, Jiangping & Kays, Roland & Jansen, Patrick & Wang, Tianjiang & Huang, Thomas. (2013). Automated identification of animal species in camera trap images. EURASIP Journal on Image and Video Processing, 2013. doi:10.1186/1687-5281-2013-52.

Price Tack, Jennifer & West, Brian & McGowan, Conor & Ditchkoff, Stephen & Reeves, Stanley & Keever, Allison & Grand, James. (2016). AnimalFinder: A semi-automated system for animal detection in time-lapse camera trap images. Ecological Informatics. 36. 10.1016/j.ecoinf.2016.11.003.

Tabak, MA, Norouzzadeh, MS, Wolfson, DW, et al. Machine learning to classify animal species in camera trap images: Applications in ecology. Methods Ecol Evol. 2019; 10: 585–590. https://doi.org/10.1111/2041-210X.13120

Tabak, MA, Norouzzadeh, MS, Wolfson, DW, et al. Improving the accessibility and transferability of machine learning algorithms for identification of animals in camera trap images: MLWIC2. Ecol Evol. 2020; 10: 10374–10383. https://doi.org/10.1002/ece3.6692

Banupriya, N., S. Saranya, Rashmi Swaminathan, Sanchithaa Harikumar, and Sukitha Palanisamy. "Animal Detection Using Deep Learning Algorithm." Journal of Critical Reviews 7.1 (2020), 434-439. Print. doi:10.31838/jcr.07.01.85

Kellenberger, B, Tuia, D, Morris, D. AIDE: Accelerating image-based ecological surveys with interactive machine learning. Methods Ecol Evol. 2020; 11: 1716–1727. https://doi.org/10.1111/2041-210X.13489

Zamba Cloud, https://www.zambacloud.com/

Wild Me, https://www.wildme.org

Published

2021-06-04

How to Cite

[1]
S. Sreedhar, S. S, P. P, N. K. Pranav, P. S, and S. K, “A Literature Survey on Wildlife Camera Trap Image Processing using Machine Learning Techniques”, pices, vol. 5, no. 2, pp. 11-14, Jun. 2021.
