Surface roughness estimation and chatter vibration identification using vision-based deep learning

Achmad Pratama Rifai, Ryo Fukuda, Hideki Aoyama

Research output: Contribution to journal › Article › peer-review

11 Citations (Scopus)


This study presents the development of an automatic, low-cost, fast, and reliable non-contact system for surface roughness estimation and chatter vibration identification based on machine vision. The developed system focuses on predicting two critical problems in the machining process, surface roughness and chatter vibration, using images as input. Deep learning with convolutional neural networks is integrated into the system to bypass the feature extraction step traditionally used in conventional vision-based roughness and chatter predictions. Two systems are proposed: separate models that work specifically for each problem, and a combined model that aims to predict both problems in one process. Four deep learning architectures are proposed for both systems. The proposed models are built and tested on turned and milled surface datasets. A set of machining experiments is performed with various cutting conditions to generate the training and testing data. The results of the prediction models are then analyzed and compared with data measured using a contact-based profilometer. The results indicate that the proposed system performs favorably in terms of accuracy and processing time, thus offering a promising alternative for quick inspection of surface quality.
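One way to picture the combined model described in the abstract is a shared convolutional trunk feeding two output heads: a regression head for the roughness estimate and a classification head for chatter. The toy NumPy sketch below illustrates that layout only; it is not the authors' architecture, and all kernel sizes, weights, and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation) of a grayscale image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def forward(img, params):
    """Shared convolutional features, then two heads:
    - regression head  -> surface roughness estimate
    - classification head -> chatter probability."""
    feat = np.maximum(conv2d(img, params["kernel"]), 0.0)   # conv + ReLU
    pooled = feat.mean()                                    # global average pooling
    ra = params["w_ra"] * pooled + params["b_ra"]           # linear regression head
    logit = params["w_ch"] * pooled + params["b_ch"]
    p_chatter = 1.0 / (1.0 + np.exp(-logit))                # sigmoid classification head
    return ra, p_chatter

# Illustrative parameters and a stand-in for a machined-surface image.
params = {
    "kernel": rng.standard_normal((3, 3)),
    "w_ra": 0.5, "b_ra": 1.0,
    "w_ch": 2.0, "b_ch": -1.0,
}
img = rng.random((16, 16))
ra, p = forward(img, params)
```

Because both heads read the same pooled feature, one forward pass yields both predictions, which is the efficiency argument for the combined model over two separate ones.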

Original language: English
Pages (from-to): 658-666
Number of pages: 9
Journal: Seimitsu Kogaku Kaishi/Journal of the Japan Society for Precision Engineering
Issue number: 7
Publication status: Published - 2019 Jan 1


  • Chatter vibration
  • Deep learning
  • Machine vision
  • Surface roughness estimation

ASJC Scopus subject areas

  • Mechanical Engineering


