Presentation:
-