TY - JOUR
T1 - Closed-loop control of direct ink writing via reinforcement learning
AU - Piovarči, Michal
AU - Foshey, Michael
AU - Xu, Jie
AU - Erps, Timothy
AU - Babaei, Vahid
AU - Didyk, Piotr
AU - Rusinkiewicz, Szymon
AU - Matusik, Wojciech
AU - Bickel, Bernd
N1 - Publisher Copyright:
© 2022 ACM.
PY - 2022/7/22
Y1 - 2022/7/22
N2 - Enabling additive manufacturing to employ a wide range of novel, functional materials can be a major boost to this technology. However, making such materials printable requires painstaking trial-and-error by an expert operator, as they typically tend to exhibit peculiar rheological or hysteresis properties. Even in the case of successfully finding the process parameters, there is no guarantee of print-to-print consistency due to material differences between batches. These challenges make closed-loop feedback an attractive option where the process parameters are adjusted on-the-fly. There are several challenges for designing an efficient controller: the deposition parameters are complex and highly coupled, artifacts occur after long time horizons, simulating the deposition is computationally costly, and learning on hardware is intractable. In this work, we demonstrate the feasibility of learning a closed-loop control policy for additive manufacturing using reinforcement learning. We show that approximate, but efficient, numerical simulation is sufficient as long as it allows learning the behavioral patterns of deposition that translate to real-world experiences. In combination with reinforcement learning, our model can be used to discover control policies that outperform baseline controllers. Furthermore, the recovered policies have a minimal sim-to-real gap. We showcase this by applying our control policy in-vivo on a single-layer printer using low and high viscosity materials.
KW - additive manufacturing
KW - closed-loop control
KW - reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85137618464&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85137618464&partnerID=8YFLogxK
U2 - 10.1145/3528223.3530144
DO - 10.1145/3528223.3530144
M3 - Article
AN - SCOPUS:85137618464
SN - 0730-0301
VL - 41
JO - ACM Transactions on Graphics
JF - ACM Transactions on Graphics
IS - 4
M1 - 112
ER -