In the Training section of the README, it states the following for the MP-20 training:
When I train the model using the MP-20 data, the training does not stop until it reaches 900 epochs, which is the default value in
Hi @wigging, the statement in the README is only to help you determine whether your training job is performing reasonably. It is not meant to indicate that you should stop at this loss value. I am not aware of a pytorch lightning callback that lets you stop at a certain loss value (typically people use early stopping if the loss no longer decreases), but you should be able to write one in a few lines of code if you need training to stop at a certain loss value.
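A callback along those lines could look something like the sketch below. This is not from the repository; the metric name `val_loss`, the threshold value, and the class name are assumptions you would adapt to your setup. It is written standalone here so the logic is easy to follow, but in a real training job the class should subclass `pytorch_lightning.Callback`:

```python
# Hypothetical "stop at loss threshold" callback sketch.
# In practice, subclass pytorch_lightning.Callback; Lightning's hook
# names (on_validation_epoch_end), trainer.callback_metrics, and
# trainer.should_stop are the real API pieces used here.

class StopAtLossThreshold:
    def __init__(self, monitor="val_loss", threshold=0.5):
        # monitor: name of the logged metric to watch (assumed here)
        # threshold: stop once the metric drops to or below this value
        self.monitor = monitor
        self.threshold = threshold

    def on_validation_epoch_end(self, trainer, pl_module):
        value = trainer.callback_metrics.get(self.monitor)
        if value is not None and float(value) <= self.threshold:
            # Lightning checks this flag and ends training gracefully.
            trainer.should_stop = True
```

You would then pass an instance to the trainer, e.g. `pl.Trainer(callbacks=[StopAtLossThreshold(threshold=0.5)], ...)`, rather than setting anything in a YAML file.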
If I wanted the training to stop at a loss value of 0.5, is this what I would put in trainer/default.yaml?