SparseGAP implementation #76

Closed

agoscinski wants to merge 239 commits into extensions from sparse-gap

Conversation


@agoscinski agoscinski commented Feb 13, 2024

A draft of SparseGAP that can predict energies. It contains the code of PR lab-cosmo/equisolve#61. The utilities used in this PR will be merged into the corresponding metatensor-{operations,learn} modules and then replaced here, but for now this version is kept self-contained so that there is something working to try out.

TODOs

  • Fit with composition features before the Subset of Regressors step
  • Extend AggregateKernel so it can be used
    • for features with different properties, so we can support different species in the properties
    • for gradients
  • Support multiple outputs
  • Figure out whether to use the existing loss utilities; they do not seem well suited to models that are fitted in one shot

📚 Documentation preview 📚: https://metatensor-models--76.org.readthedocs.build/en/76/
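For context, the Subset of Regressors (SoR) solve at the core of a sparse GAP fit can be sketched as below. All names (`sor_weights`, `K_mm`, `K_nm`, `regularizer`) and the plain least-squares solve are illustrative assumptions, not the actual code in this PR:

```python
# Hypothetical sketch of a Subset of Regressors (SoR) solve for a sparse
# kernel model. Names and the regularization scheme are illustrative only.
import numpy as np


def sor_weights(K_mm, K_nm, y, regularizer=1e-3):
    """Solve the SoR normal equations for the sparse kernel weights.

    K_mm : (M, M) kernel between the M sparse (inducing) points
    K_nm : (N, M) kernel between the N training environments and sparse points
    y    : (N,) targets (e.g. energies)
    """
    # normal equations: (sigma^2 * K_mm + K_mn K_nm) w = K_mn y
    A = regularizer * K_mm + K_nm.T @ K_nm
    b = K_nm.T @ y
    # lstsq is more robust than a plain solve when A is ill-conditioned
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

The prediction for new environments is then `K_new_m @ w`, with `K_new_m` the kernel between the new environments and the sparse points.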

frostedoyster and others added 30 commits November 20, 2023 15:51
Co-authored-by: Philip Loche <ploche@physik.fu-berlin.de>
* Rename package to `metatensor-models`

* Rename module to `metatensor.models`
Co-authored-by: Philip Loche <ploche@physik.fu-berlin.de>
* Write train output to hydra's output directory

* Added evaluation function

* Add usage example for cli interface

* update train cli

* Disable MacOS tests

* Add cli skeleton for exporter

---------

Co-authored-by: frostedoyster <bigi.f@libero.it>
* Add gradient calculator
* Temporary losses
* Forces and stresses
* Support multiple model outputs in SOAP-BPNN

jobs:
  tests:
    runs-on: ${{ matrix.os }}

I don't think we need a matrix here; we only test on Ubuntu.

]

__maintainers__ = [
("Davide Tisi <davide.tisi@epfl.ch>", "@DavideTisi"),

We'll need to add @DavideTisi to the CODEOWNERS file as well.

@Luthaf (Member) left a comment


This can't be merged until #185 and #201 are merged.

Comment on lines +54 to +55
print(ref_output["mtm::U0"].block().values)
print(scripted_output["mtm::U0"].block().values)

Suggested change (remove the leftover debug prints):
print(ref_output["mtm::U0"].block().values)
print(scripted_output["mtm::U0"].block().values)

logger = logging.getLogger(__name__)


class Trainer:

I'm not sure I follow what's going on here. This seems to already be using #185 which is not merged yet.

@frostedoyster (Collaborator)

Note: we need a better error message for when the number of sparse points is greater than the number of atomic environments in the training set.
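A minimal sketch of what that clearer error could look like; the function and argument names here are hypothetical, not part of this PR:

```python
# Hypothetical early validation for the sparse-point count; fails with an
# actionable message instead of a cryptic linear-algebra error later on.
def check_sparse_points(n_sparse_points: int, n_environments: int) -> None:
    """Raise if more sparse points are requested than environments exist."""
    if n_sparse_points > n_environments:
        raise ValueError(
            f"requested {n_sparse_points} sparse points, but the training set "
            f"only contains {n_environments} atomic environments; reduce the "
            "number of sparse points or provide more training structures"
        )
```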

@frostedoyster (Collaborator)

Note: we may want to save GAP checkpoints

@frostedoyster (Collaborator)

Note: we need to fix `eval`, which still has the old `next(model.parameters())` issue.
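For context, `next(model.parameters())` raises `StopIteration` for a model like GAP that holds no trainable torch parameters. One possible workaround, sketched here as an assumption rather than the fix this PR will use, is to fall back to buffers and then to CPU:

```python
# Sketch of a device lookup that does not assume the model has trainable
# parameters (a GAP-style model may store its fitted state only in buffers).
import torch


def model_device(model: torch.nn.Module) -> torch.device:
    """Return the device of the first parameter or buffer, defaulting to CPU."""
    for tensor in model.parameters():
        return tensor.device
    for tensor in model.buffers():
        return tensor.device
    return torch.device("cpu")
```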

if n_sparse_point > environments
@DavideTisi (Contributor)

Note: we may want to save GAP checkpoints

What's a GAP checkpoint?

if the number of sparse env is bigger than the number of environments
@Luthaf (Member)

Luthaf commented May 30, 2024

What's a GAP checkpoint?

For GAP, the checkpoint is the same thing as the final model, but having checkpoints allows us to keep the same interface as the other architectures, especially for the final export step.
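A sketch of that idea: for a model fitted in one shot, the checkpoint hooks can simply save and restore the full fitted state. The class and method names below are hypothetical, not the metatensor-models interface:

```python
# Hypothetical GAP-like model where "checkpoint" and "final model" coincide:
# the checkpoint hooks just delegate to a full state save/restore.
import torch


class GapLikeModel(torch.nn.Module):
    def __init__(self, weights: torch.Tensor):
        super().__init__()
        # fitted sparse-kernel weights stored as a buffer (no gradients needed)
        self.register_buffer("weights", weights)

    def save_checkpoint(self, path: str) -> None:
        # the whole fitted state *is* the checkpoint
        torch.save(self.state_dict(), path)

    def load_checkpoint(self, path: str) -> None:
        self.load_state_dict(torch.load(path))
```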
