
A risky method used in this repo may cause a deserialization vulnerability #1

@Rockstar292

Description


Hi, @jdmartinev, I'd like to report that a potentially risky method is being used in this project, which may pose a deserialization threat. Please check the following code example:

• Clase04/imageclassifier_fastapi/app.py

from huggingface_hub import hf_hub_download  # import required by the call below
from fastai.learner import load_learner

learner = load_learner(hf_hub_download("jdmartinev/intel_image_classification_fastai", "model.pkl"))

Issue Description

As shown above, in the imageclassifier_fastapi/app.py file, the file "model.pkl" is downloaded and loaded by the `load_learner` method from the fastai library.

This method is not secure: `load_learner` uses `torch.load` with Python's pickle module as its core loading mechanism, and the documentation for both explicitly warns that loading untrusted files may lead to arbitrary code execution.

The official documentation says: "load_learner uses Python's insecure pickle module, which can execute malicious arbitrary code when loading. Only load files you trust. If you only need to load model weights and optimizer state, use the safe Learner.load instead."

Related risk reports: the fastai `load_learner` documentation and the PyTorch `torch.load` documentation.
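To illustrate the mechanism, here is a minimal, self-contained sketch (hypothetical code, not taken from this repository): pickle lets a payload choose an arbitrary callable to run during deserialization via `__reduce__`, which is exactly the vector a malicious `model.pkl` would exploit.

```python
import pickle

# Hypothetical demonstration of why unpickling untrusted bytes is unsafe:
# __reduce__ lets the payload pick any callable to run at load time.
class Malicious:
    def __reduce__(self):
        # A real attacker would use os.system or similar; eval("2+2") is a
        # harmless stand-in that still proves code runs during loading.
        return (eval, ("2+2",))

payload = pickle.dumps(Malicious())
result = pickle.loads(payload)  # eval("2+2") executes here, before any check
print(result)  # 4 -- a value computed by code the payload chose
```

Merely calling `pickle.loads` (or anything built on it, like `torch.load` with full pickling enabled) already runs the attacker's code; no attribute access on the result is needed.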

Suggested Repair Methods

  1. I recommend using `load_model` or `Learner.load` from the fastai library, the officially recommended and safer loading functions, to handle model files instead.
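As a complementary, library-agnostic mitigation, here is a standard-library sketch of the allow-listing idea that safe loaders (e.g. `torch.load(..., weights_only=True)`) rely on. The class and function names below are illustrative, not part of fastai; the pattern follows the restriction example in the Python pickle documentation.

```python
import builtins
import io
import pickle

# Allow-list unpickler: only explicitly permitted globals can be resolved
# while loading, so payloads referencing eval, os.system, etc. are rejected.
class RestrictedUnpickler(pickle.Unpickler):
    SAFE_BUILTINS = {"list", "dict", "set", "tuple", "frozenset"}

    def find_class(self, module, name):
        if module == "builtins" and name in self.SAFE_BUILTINS:
            return getattr(builtins, name)
        # Anything outside the allow-list is treated as hostile.
        raise pickle.UnpicklingError(f"global '{module}.{name}' is forbidden")

def restricted_loads(data: bytes):
    """Unpickle `data` while refusing non-allow-listed globals."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Plain containers still round-trip (`restricted_loads(pickle.dumps({"a": [1, 2]}))` returns the dict), while a payload that references `eval` raises `pickle.UnpicklingError` instead of executing.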

As this is a popular open-source machine learning project, any potential risk could be propagated and amplified. Could you please address the issue above?

Thanks for your help~

Best regards,
Rockstars
