
Problem with combining quantisation and pruning #289

@DoubleZZeta

Description


Hi, I was trying to extend the tutorial6 code to support all types of linear layers and then pass the quantised model on to the next NAS search to find the best pruning configuration. However, when I try to pass a model that was quantised using torch.nn.Linear and LinearMinifloatDenorm layers to the pruning NAS search, I get the errors shown in the screenshot below.

[Screenshot: error traceback from the pruning NAS search]

For reference, here is how I extended the code to support LinearMinifloatDenorm (screenshot below, followed by a generic sketch of the pattern):

[Screenshot: code extension adding LinearMinifloatDenorm support]
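
Since the screenshot is not reproduced here, the following is a minimal, self-contained sketch (plain PyTorch, no MASE/chop imports) of the general pattern being described: replace every `nn.Linear` in a model with a minifloat-style quantised linear, then hand the result to a pruning step. All class and function names (`QuantLinear`, `fake_minifloat`, `quantise_linears`) are hypothetical illustrations, not the actual tutorial6 or chop API.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune


def fake_minifloat(x: torch.Tensor, exp_bits: int = 4, man_bits: int = 3) -> torch.Tensor:
    """Crude minifloat-style fake quantisation: clamp the exponent range and
    round the mantissa. Purely illustrative, not a faithful denorm format."""
    mantissa, exponent = torch.frexp(x)
    max_exp = 2 ** (exp_bits - 1) - 1
    exponent = exponent.clamp(-max_exp, max_exp)
    scale = 2.0 ** man_bits
    mantissa = torch.round(mantissa * scale) / scale
    return torch.ldexp(mantissa, exponent)


class QuantLinear(nn.Linear):
    """nn.Linear whose weights are fake-quantised on the forward pass."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return nn.functional.linear(x, fake_minifloat(self.weight), self.bias)


def quantise_linears(model: nn.Module) -> nn.Module:
    """Replace every nn.Linear with QuantLinear, copying weights across."""
    for name, child in model.named_children():
        if isinstance(child, nn.Linear):
            q = QuantLinear(child.in_features, child.out_features, bias=child.bias is not None)
            q.weight.data.copy_(child.weight.data)
            if child.bias is not None:
                q.bias.data.copy_(child.bias.data)
            setattr(model, name, q)
        else:
            quantise_linears(child)
    return model


if __name__ == "__main__":
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
    model = quantise_linears(model)
    # A pruning pass that checks isinstance(module, nn.Linear) still works here,
    # because QuantLinear subclasses it. A pass keyed on an exact type match or a
    # fixed whitelist of layer classes would not recognise the quantised modules,
    # which is one plausible source of the kind of error reported above.
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.5)
    print(model)
```
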

Thanks.
