Awesome Knowledge Distillation
Updated Mar 22, 2026
Images to inference with no labeling (use foundation models to train supervised models).
🚀 PyTorch Implementation of "Progressive Distillation for Fast Sampling of Diffusion Models (v-diffusion)"
Mechanistically interpretable neurosymbolic AI (Nature Comput Sci 2024): losslessly compressing NNs to computer code and discovering new algorithms which generalize out-of-distribution and outperform human-designed algorithms
PyTorch Implementation of Matching Guided Distillation [ECCV'20]
Our open source implementation of MiniLMv2 (https://aclanthology.org/2021.findings-acl.188)
The Codebase for Causal Distillation for Language Models (NAACL '22)
AI Community Tutorial covering LoRA/QLoRA LLM fine-tuning, training GPT-2 from scratch, generative model architectures, content safety and control, model distillation techniques, DreamBooth, transfer learning, and more, with real projects for practice!
A framework for knowledge distillation using TensorRT inference on teacher network
Repository for the publication "AutoGraph: Predicting Lane Graphs from Traffic"
LUPI-OD - A novel methodology to improve object detection accuracy without increasing model size or complexity. | 2025 European Workshop on Visual Information Processing (EUVIP) | M.Sc. ICT (By Research) Dissertation | University of Malta
A repo for distilling a large teacher into a small vision-language model for efficient embodied spatial reasoning and action planning.
The Codebase for Causal Distillation for Task-Specific Models
AudioMuse-AI-DCLAP is a lightweight, high-speed distilled version of LAION CLAP, designed for fast and efficient text-to-music search
Few-step diffusion for audio-driven talking head generation, making diffusion models speak faster without losing their composure.
Awesome Deep Model Compression
Train cheap models on expensive ones. Automatically. With receipts.
Compress command outputs to reduce token use by up to 99% while preserving essential information for language models.
Use LLaMA to label data for use in training a fine-tuned LLM.
Autodistill Google Cloud Vision module for use in training a custom, fine-tuned model.
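Most of the repositories above center on teacher-student (soft-label) distillation. As a rough illustration of the core idea, here is a minimal PyTorch sketch of a standard distillation loss; the function name and the `temperature` and `alpha` parameters are illustrative choices, not taken from any project listed above.

```python
# Minimal sketch of a soft-label knowledge-distillation loss in PyTorch.
# Function name, `temperature`, and `alpha` are illustrative, not from any
# repository listed above.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 4.0,
                      alpha: float = 0.5) -> torch.Tensor:
    # Soft targets: KL divergence between temperature-softened distributions,
    # scaled by T^2 to keep gradient magnitudes comparable (Hinton et al., 2015).
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Typical use in a training step: the teacher runs without gradients.
# loss = distillation_loss(student(x), teacher(x).detach(), y)
```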