Dec 3, 2024
8:00pm - 10:00pm
Hynes, Level 1, Hall A
Xingquan Wang¹, Bowen Zheng¹, Zeqing Jin¹, Grace Gu¹
¹University of California, Berkeley
Fused filament fabrication (FFF), a widely used 3D printing process, faces print-quality challenges such as under- and over-extrusion. The accumulation of these anomalies can degrade the mechanical properties and surface quality of printed parts. Previous research has used computer vision to detect printing defects; however, traditional computer vision techniques are mostly based on supervised learning and therefore require intensive, costly manual data labeling. In this work, we combine self-supervised learning and transformers to detect anomalies in FFF 3D printing in an annotation-efficient manner. Self-supervised learning allows the model to learn from random, unlabeled examples, and transformers enable it to focus selectively on certain parts of its input. Using tens of thousands of unlabeled frames recorded by a camera near the nozzle as training data, our model can segment the printed area from the background and discover printing defects with no supervision or any segmentation-targeted objective. Additionally, the model is intrinsically interpretable, contributing to a higher level of image understanding. Our work presents a data-driven anomaly detection technique that is less dependent on labeled data, which may be important for domains where annotated images are scarce, such as additive manufacturing and medical imaging.
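
A minimal sketch of the kind of pipeline the abstract describes: a Vision Transformer pretrained with a self-supervised method (here DINO, an assumed stand-in; the abstract does not name the specific method), whose [CLS] self-attention maps can be thresholded into a printed-area/background segmentation without any segmentation labels. The model choice, image size, file name, and threshold below are illustrative, not taken from the authors' work.

# Hedged sketch (Python/PyTorch), not the authors' implementation.
# Uses a DINO-pretrained ViT from torch.hub as an assumed example of a
# self-supervised transformer; its [CLS] attention over image patches is
# thresholded into a rough foreground mask, with no segmentation objective.
import torch
from PIL import Image
from torchvision import transforms

# ViT-S/8 pretrained with DINO (self-supervised, no labels used in training).
model = torch.hub.load("facebookresearch/dino:main", "dino_vits8")
model.eval()

# Standard ImageNet normalization; 480x480 is an illustrative input size
# divisible by the 8-pixel patch size.
preprocess = transforms.Compose([
    transforms.Resize((480, 480)),
    transforms.ToTensor(),
    transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])

# Hypothetical frame from a camera near the nozzle area.
img = preprocess(Image.open("nozzle_frame.png").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    # Self-attention of the last block: shape (1, heads, tokens, tokens),
    # where tokens = 1 [CLS] token + 60*60 image patches.
    attn = model.get_last_selfattention(img)

patch = 8
h = w = 480 // patch
# [CLS]-to-patch attention, averaged over heads, reshaped to the patch grid.
cls_attn = attn[0, :, 0, 1:].mean(dim=0).reshape(h, w)

# Threshold the attention map to separate printed area from background;
# the 0.6 quantile is an illustrative cutoff, not a reported value.
mask = cls_attn > cls_attn.quantile(0.6)

Because the mask comes directly from the model's attention, the same map that drives the segmentation also serves as a visual explanation of what the model attends to, which is one plausible reading of the interpretability claim in the abstract.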