Dive Into LoRA Adapters
Exploring Parameter Efficient Finetuning (PEFT): Intuitively Understanding Finetuning Using LoRA
How to efficiently fine-tune your own open-source LLM using novel techniques – code provided
In this article I tune a base Llama 2 LLM to output SQL code, using Parameter Efficient Fine-Tuning techniques to optimise the process.
A Winding Road to Parameter Efficiency
Deliberately Exploring Design Decisions for Parameter Efficient Finetuning (PEFT) with LoRA
Bit-LoRA as an application of BitNet and 1.58 bit neural network technologies
Abstract: applying ~1-bit transformer technology to LoRA adapters allows us to reach comparable performance with full-precision LoRA...
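The abstract above compares quantized adapters against full-precision LoRA. For context, the adapter structure both variants share is just a frozen weight plus a scaled low-rank update; a minimal sketch follows (dimensions and variable names are illustrative, not taken from the linked article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes: d_out x d_in frozen weight, rank-r adapter.
d_out, d_in, r, alpha = 64, 128, 8, 16

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-initialised

def lora_forward(x):
    # Base path plus low-rank adapter path, scaled by alpha / r.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialised to zero, the adapter starts as an exact no-op.
assert np.allclose(lora_forward(x), W @ x)
```

Bit-LoRA-style approaches then quantize A and B (not W) toward ~1-bit precision while keeping this same forward structure.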
Is ReFT All We Needed?
Representation Finetuning - Beyond the PEFT Techniques for fine-tuning LLMs
Classifier-Free Guidance in LLMs Safety – NeurIPS 2024 Challenge Experience
LLM unlearning without model degradation is achieved through direct training on the replacement data and classifier-free guidance