threepi 9 hours ago

Author here. Happy to see this posted here. This is actually a series of blog posts:

1. Exploring LoRA — Part 1: The Idea Behind Parameter Efficient Fine-Tuning and LoRA: https://medium.com/inspiredbrilliance/exploring-lora-part-1-...

2. Exploring LoRA - Part 2: Analyzing LoRA through its Implementation on an MLP: https://medium.com/inspiredbrilliance/exploring-lora-part-2-...

3. Intrinsic Dimension Part 1: How Learning in Large Models Is Driven by a Few Parameters and Its Impact on Fine-Tuning https://medium.com/inspiredbrilliance/intrinsic-dimension-pa...

4. Intrinsic Dimension Part 2: Measuring the True Complexity of a Model via Random Subspace Training https://medium.com/inspiredbrilliance/intrinsic-dimension-pa...

Hope you enjoy reading the other posts too. Merry Christmas and Happy Holidays!
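For anyone skimming the thread before clicking through: the core idea the posts explore is that LoRA freezes the pretrained weight matrix W and learns only a low-rank update B·A. A minimal sketch in NumPy (all names and sizes here are illustrative, not taken from the posts):

```python
import numpy as np

# LoRA sketch: instead of updating a full weight matrix W (d_out x d_in),
# learn a low-rank update B @ A with rank r << min(d_in, d_out).
rng = np.random.default_rng(0)
d_in, d_out, r = 64, 32, 4

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weights
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, init 0

def lora_forward(x):
    # y = Wx + B(Ax); with B = 0 at init, this matches the base model exactly
    return W @ x + B @ (A @ x)

# Trainable parameter count: r*(d_in + d_out) vs d_in*d_out for full tuning
print(r * (d_in + d_out), d_in * d_out)  # prints: 384 2048
```

With r = 4 this trains 384 parameters instead of 2048, and because B starts at zero the adapted model initially behaves identically to the frozen one.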

  • 3abiton 2 hours ago

    Thanks for sharing. This got me thinking: why is Medium so widely used for technical articles like these? Especially since lots of articles have been disappearing behind a paywall for me recently.

jwildeboer 5 hours ago

(Not to be confused with LoRa (short for "long range"), a spread-spectrum modulation technique derived from chirp spread spectrum (CSS), which powers technologies like LoRaWAN and Meshtastic.)

  • FusspawnUK 4 hours ago

    Really wish they had come up with another name; googling gets annoying.

    • the__alchemist 13 minutes ago

      Contributing factors: they both use mixed capitalization, and they have partially overlapping audiences.