Objective: The purpose of this research was to develop a deep-learning model to assess radiographic finger joint destruction in RA.

Methods: The model comprises two steps: a joint-detection step and a joint-evaluation step. Among 216 radiographs of 108 patients with RA, 186 radiographs were assigned to the training/validation dataset and 30 to the test dataset. In the training/validation dataset, images of PIP joints, the IP joint of the thumb and MCP joints were manually clipped and scored for joint space narrowing (JSN) and bone erosion by clinicians, and these images were then augmented. As a result, 11 160 images were used to train and validate a deep convolutional neural network for joint evaluation, and 3720 selected images were used to train a machine-learning model for joint detection. These steps were combined into the assessment model for radiographic finger joint destruction. Performance of the model was examined on the test dataset, which was not included in the training/validation process, by comparing the scores assigned by the model with those assigned by clinicians.

Results: The model detected PIP joints, the IP joint of the thumb and MCP joints with a sensitivity of 95.3% and assigned scores for JSN and erosion. Accuracy (the percentage of exact agreement) reached 49.3-65.4% for JSN and 70.6-74.1% for erosion. The correlation coefficient between the scores assigned by the model and by clinicians per image was 0.72-0.88 for JSN and 0.54-0.75 for erosion.

Conclusion: Image processing with the trained convolutional neural network model is a promising approach for assessing radiographs in RA.
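The two-step pipeline described in the Methods (joint detection followed by per-joint evaluation) can be sketched as below. This is a minimal illustrative skeleton only: all function names, the fixed bounding boxes, and the stub scores are hypothetical, and the paper's actual detector and convolutional neural network are not reproduced here.

```python
# Hypothetical sketch of a two-step assessment pipeline:
# step 1 locates finger joints in a radiograph, step 2 scores each
# detected crop for joint space narrowing (JSN) and bone erosion.
# The stub logic (fixed boxes, zero scores) stands in for the trained
# detector and CNN, which are not described in code in the abstract.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class JointCrop:
    kind: str                        # "PIP", "IP" (thumb) or "MCP"
    box: Tuple[int, int, int, int]   # x, y, width, height in pixels


def detect_joints(image) -> List[JointCrop]:
    """Step 1: detect joints and return clipped regions (stub)."""
    # A real system would run a trained detector on the radiograph.
    return [JointCrop("MCP", (10, 10, 32, 32)),
            JointCrop("PIP", (50, 10, 32, 32))]


def score_joint(crop: JointCrop) -> Tuple[int, int]:
    """Step 2: assign (JSN, erosion) scores to one crop (stub)."""
    # A real system would run the joint-evaluation CNN here.
    return (0, 0)


def assess_radiograph(image) -> List[Tuple[str, int, int]]:
    """Combine both steps: detect joints, then score each crop."""
    return [(crop.kind, *score_joint(crop)) for crop in detect_joints(image)]
```

The split mirrors the abstract's design choice: training the detector and the evaluator separately lets each step be supervised with its own labels (joint locations vs. clinician-assigned scores).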