A Deep Learning Approach for Automatic Counting of Bales and Product Boxes in Industrial Production Lines

Abstract

Recent advances in machine learning and computer vision have led to widespread use of these technologies in the industrial sector, with quality control and production counting among the most important applications. This article describes a solution for counting products on an industrial production line. It consists of two main modules: i) a hardware infrastructure and ii) a software solution. The software solution comprises modules for image capture and product recognition using the YOLOv5 algorithm, as well as modules for tracking and counting products. The results show that our solution achieves 99.91% accuracy in product counting and classification. Furthermore, these results were compared with the manual counting system currently used in the industry considered in this study, demonstrating the feasibility of our solution in a real production environment.
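To make the detect-track-count pipeline described above concrete, the sketch below pairs YOLOv5 inference (loaded through torch.hub) with a virtual counting line. This is a minimal illustration, not the authors' implementation: the pretrained yolov5s weights, the video path production_line.mp4, the line position LINE_Y, and the naive nearest-centroid frame-to-frame association are all assumptions made here for brevity.

    import cv2
    import torch

    # Load a YOLOv5 model via torch.hub; the authors trained custom weights,
    # so the pretrained 'yolov5s' checkpoint here is only a placeholder.
    model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

    LINE_Y = 400           # hypothetical y-position of the virtual counting line (pixels)
    count = 0
    prev_centroids = []    # box centroids detected in the previous frame

    cap = cv2.VideoCapture('production_line.mp4')   # hypothetical input video
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = model(frame[..., ::-1])       # BGR -> RGB for the hub model
        dets = results.xyxy[0].cpu().numpy()    # rows: x1, y1, x2, y2, confidence, class
        centroids = [((x1 + x2) / 2, (y1 + y2) / 2)
                     for x1, y1, x2, y2, *_ in dets]
        # Naive association: match each centroid to its nearest predecessor and
        # count the product when it crosses the virtual line between two frames.
        for cx, cy in centroids:
            if prev_centroids:
                px, py = min(prev_centroids,
                             key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
                if py < LINE_Y <= cy:
                    count += 1
        prev_centroids = centroids
    cap.release()
    print(f'products counted: {count}')

In a production setting, a proper multi-object tracker (for example, SORT-style identity assignment) would replace the nearest-centroid matching to avoid double counting when boxes overlap or detections drop out for a frame.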

Keywords

Detection, Classification, Tracking, Counting, Machine learning, Deep learning, Industry 4.0

Notes

  1. https://semalo.com.br/.

Acknowledgments

This paper was only possible thanks to the help of Semalo Indústria e Comércio de Alimentos and its workers. We thank UFMS (Universidade Federal de Mato Grosso do Sul), PET (Programa de Educação Tutorial – FNDE), FUNDECT, Finep, and the Ministério da Ciência, Tecnologia, Inovações e Comunicações, funded by FNDCT, for their support. We also thank the INCT of the Future Internet for Smart Cities, funded by CNPq, proc. 465446/2014-0, the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior – Brasil (CAPES) – Finance Code 001, and FAPESP, proc. 2014/50937-1 and 2015/24485-9, for their support.

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of FUNDECT, Finep, FAPESP, CAPES and CNPq.

Author information

Corresponding author

Correspondence to Renato P. Ishii.

Source: https://link.springer.com/chapter/10.1007/978-3-031-10522-7_42