Category Archives: performance efficient fine tuning
Summary Of Adapter Based Parameter Efficient Fine Tuning (PEFT) Techniques For Large Language Models
The two most common transfer-learning techniques in NLP have been feature-based transfer (generating an input-text embedding from a pre-trained large model and using it as a feature in a custom model) and fine-tuning (fine-tuning the pre-trained model on custom … Continue reading
Posted in performance efficient fine tuning, Uncategorized
Tagged adapters, gpt, large language model, llama, lora, machine learning, nlp, peft
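The excerpt above contrasts full fine-tuning, which updates every pre-trained weight, with adapter-style PEFT methods such as LoRA (tagged on the post), which freeze the base weights and train only a small low-rank update. A minimal NumPy sketch of that idea is below; the matrix sizes and rank are illustrative assumptions, not values from the post.

```python
import numpy as np

# Illustrative sizes only; a real transformer layer would be larger.
d, r = 768, 8                          # hidden size, LoRA rank (assumed)

rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))        # frozen pre-trained weight matrix
A = rng.standard_normal((r, d)) * 0.01 # trainable low-rank factor (down-projection)
B = np.zeros((d, r))                   # trainable factor, initialized to zero so the
                                       # adapter starts as a no-op

x = rng.standard_normal(d)

# Adapter forward pass: frozen base output plus the low-rank update B @ A @ x.
y = W @ x + B @ (A @ x)

full_params = d * d                    # parameters a full fine-tune would update
lora_params = d * r + r * d            # parameters the adapter actually trains
print(full_params, lora_params)
```

Because `B` starts at zero, the adapted layer initially reproduces the frozen model exactly, and only `2 * d * r` parameters (about 2% of `d * d` here) are ever trained, which is the parameter-efficiency the post's title refers to.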