
Contrastive Language Image Pretraining and Astronomy #74

@rahulranjansah

Description


A suggestion for a long-term project. I recently found this interesting paper AstroCLIP: A Cross-Modal Foundation Model for Galaxies.

As far as I understand, the project develops a machine learning model that processes different types of astronomical observations (e.g., images, time series, spectral data). The model is trained contrastively: it learns a shared feature space in which representations of matching data points are pulled together and representations of non-matching ones are pushed apart.

By exposing the model to various forms of observational data (e.g., telescope images, spectroscopy, light curves), the pretraining process teaches it to align related information across modalities while distinguishing between distinct object classes (e.g., galaxies, stars, asteroids). Their code is available on GitHub: AstroCLIP.
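To make the "pull similar, push dissimilar" idea concrete, here is a minimal NumPy sketch of the symmetric CLIP-style contrastive (InfoNCE) objective over a batch of paired embeddings. The function name, embedding shapes, and temperature value are illustrative assumptions for this sketch, not taken from the AstroCLIP codebase:

```python
import numpy as np

def clip_contrastive_loss(img_emb, spec_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    Row i of img_emb and spec_emb are assumed to come from the same
    object (a positive pair); every other pairing in the batch acts
    as a negative. Shapes: (batch, dim) for both inputs.
    """
    # L2-normalize so dot products become cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    spec = spec_emb / np.linalg.norm(spec_emb, axis=1, keepdims=True)

    # Pairwise similarity matrix, scaled by the temperature
    logits = img @ spec.T / temperature

    def cross_entropy_diag(l):
        # Cross-entropy where the target for row i is column i
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # Average the image-to-spectrum and spectrum-to-image directions
    return 0.5 * (cross_entropy_diag(logits) + cross_entropy_diag(logits.T))
```

As a sanity check, identical embeddings for each pair should give a loss near zero, while unrelated random embeddings give a loss around log(batch_size); minimizing this objective is what aligns the two modalities in the shared feature space.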

I’d love your input to explore similar work and engage with coding tutorials or simplified versions of AstroCLIP to learn about the latest developments and technologies.
