PQuant is a library for training compressed machine learning models, developed at CERN as part of the [Next Generation Triggers](https://nextgentriggers.web.cern.ch/t13/) project.
To run the code, [HGQ2](https://github.com/calad0i/HGQ2) is also needed.
PQuant replaces the layers and activations it finds with a Compressed (in the case of layers) or Quantized (in the case of activations) variant. These automatically handle the quantization of the weights, biases and activations, and the pruning of the weights.
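The replacement pass can be pictured as a walk over the model's layers that swaps each supported layer or activation for its compressed counterpart. The sketch below is purely illustrative and does not use PQuant's actual API; every class and function name in it is hypothetical.

```python
# Illustrative sketch of a layer-replacement pass. NOT PQuant's actual API:
# all class names here (Linear, CompressedLinear, ...) are hypothetical.

class Linear:
    def __init__(self, in_features, out_features):
        self.in_features = in_features
        self.out_features = out_features

class ReLU:
    pass

class CompressedLinear(Linear):
    """Stand-in for a variant that quantizes and prunes its weights."""

class QuantizedReLU(ReLU):
    """Stand-in for a variant that quantizes its activations."""

# Map each supported layer/activation type to its compressed counterpart.
REPLACEMENTS = {Linear: CompressedLinear, ReLU: QuantizedReLU}

def replace_layers(layers):
    """Return a new layer list with supported layers swapped out."""
    replaced = []
    for layer in layers:
        new_cls = REPLACEMENTS.get(type(layer))
        if new_cls is CompressedLinear:
            # Copy the original layer's configuration into the new variant.
            replaced.append(new_cls(layer.in_features, layer.out_features))
        elif new_cls is not None:
            replaced.append(new_cls())
        else:
            replaced.append(layer)  # unsupported layers pass through unchanged
    return replaced

model = [Linear(16, 8), ReLU(), Linear(8, 2)]
compressed = replace_layers(model)
print([type(m).__name__ for m in compressed])
# → ['CompressedLinear', 'QuantizedReLU', 'CompressedLinear']
```

In the real library the compressed variants additionally carry the quantizers and pruning masks; this sketch only shows the type-driven swap itself.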
Both PyTorch and TensorFlow models are supported.

### Layers that can be compressed

- **PQConv\*D**: Convolutional layers
- **PQAvgPool\*D**: Average pooling layers
- **PQBatchNorm\*D**: BatchNorm layers
- **PQDense**: Linear layers
- **PQActivation**: Activation layers (ReLU, Tanh)

The various pruning methods involve different training steps, such as a pre-training step and a fine-tuning step. PQuant provides a training function: the user supplies functions that train and validate one epoch, and PQuant runs the overall loop while triggering the different training steps.
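The division of labour described above can be sketched as a driver that owns the phase schedule and calls the user's per-epoch callbacks. This is a minimal illustration of the pattern, not PQuant's actual training function; the names and phase labels are assumptions.

```python
# Sketch of a compression-aware training driver. The function name,
# signature, and phase labels are illustrative, not PQuant's actual API.

def run_compression_training(train_epoch, validate_epoch, schedule):
    """Run the given phases in order, calling the user's callbacks.

    `train_epoch` / `validate_epoch` come from the user; the driver
    decides when each compression phase (pre-training, pruning-aware
    training, fine-tuning) starts and ends.
    """
    log = []
    for phase, num_epochs in schedule:
        log.append(f"start:{phase}")  # e.g. enable pruning masks here
        for _ in range(num_epochs):
            train_epoch()
            validate_epoch()
        log.append(f"end:{phase}")    # e.g. freeze masks here
    return log

counts = {"train": 0, "val": 0}
log = run_compression_training(
    train_epoch=lambda: counts.__setitem__("train", counts["train"] + 1),
    validate_epoch=lambda: counts.__setitem__("val", counts["val"] + 1),
    schedule=[("pretrain", 2), ("prune", 3), ("finetune", 1)],
)
print(counts)  # → {'train': 6, 'val': 6}
```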
An example notebook can be found [here](https://github.com/nroope/PQuant/tree/main/examples). It covers:

3. Loading a default pruning configuration of a pruning method.
4. Using the configuration, the model, and the training and validation functions to call the training function of PQuant and train and compress the model.
5. Creating a custom quantization and pruning configuration for a given model (disabling pruning for some layers, using different quantization bitwidths for different layers).
6. Direct layer usage and layer-replacement approaches.
7. Usage of the fine-tuning platform.

### Pruning methods
A description of the pruning methods and their hyperparameters can be found [here](docs/pruning_methods.md).
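As a concrete illustration of one common family of approaches, magnitude pruning zeroes out the fraction of weights with the smallest absolute values. The sketch below is generic and not tied to any specific PQuant method or API.

```python
# Generic magnitude-pruning sketch; not a specific PQuant method.

def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with smallest |value|."""
    n_prune = int(len(weights) * sparsity)
    # Indices ordered from smallest to largest magnitude.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    to_zero = set(order[:n_prune])
    return [0.0 if i in to_zero else w for i, w in enumerate(weights)]

w = [0.5, -0.05, 1.2, 0.01, -0.9, 0.3]
print(magnitude_prune(w, 0.5))  # → [0.5, 0.0, 1.2, 0.0, -0.9, 0.0]
```

In practice the mask is usually applied during training (so pruned weights stay zero) rather than once after the fact, which is exactly why the training-step hooks described above matter.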
A description of the quantization parameters can be found [here](docs/quantization_parameters.md).
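To give a feel for what such parameters control, here is a generic signed fixed-point rounding sketch. The parameter names (`total_bits`, `frac_bits`) and defaults are assumptions for illustration, not PQuant's actual quantization parameters.

```python
# Generic fixed-point quantization sketch; parameter names and defaults
# are illustrative, not PQuant's actual configuration keys.

def quantize_fixed_point(x, total_bits=8, frac_bits=4):
    """Round x onto a signed fixed-point grid with `frac_bits` fractional bits."""
    scale = 2 ** frac_bits
    lo = -(2 ** (total_bits - 1)) / scale      # most negative representable value
    hi = (2 ** (total_bits - 1) - 1) / scale   # most positive representable value
    q = round(x * scale) / scale               # snap to the grid
    return min(max(q, lo), hi)                 # saturate to the representable range

print(quantize_fixed_point(0.30))   # → 0.3125
print(quantize_fixed_point(100.0))  # → 7.9375 (saturated)
```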

For detailed documentation, see the [PQuantML documentation](https://pquantml.readthedocs.io/en/latest/).