No. 0213
Section: Special Session
Session: (SS-18) Mathematics and AI
Time: 20th-C-10:30 -- 11:00
Title (Eng.): LoRA training in the NTK regime has no spurious local minima
Author(s): Uijeong Jang1, Jason D. Lee2, Ernest K. Ryu1
Seoul National University1, Princeton University2
Abstract: Low-rank adaptation (LoRA) has become the standard approach for parameter-efficient fine-tuning of large language models (LLMs), but our theoretical understanding of LoRA has been limited. In this work, we theoretically analyze LoRA fine-tuning in the neural tangent kernel (NTK) regime with $N$ data points, showing: (i) full fine-tuning (without LoRA) admits a low-rank solution of rank $r \lesssim \sqrt{N}$; (ii) using LoRA with rank $r \gtrsim \sqrt{N}$ eliminates spurious local minima, allowing gradient descent to find the low-rank solutions; (iii) the low-rank solution found using LoRA generalizes well.
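The setting the abstract analyzes can be made concrete with a minimal sketch of the LoRA parameterization: the pretrained weight $W_0$ is frozen and only a rank-$r$ update $BA$ is trained. The sketch below is illustrative only; the class name, initialization, and the omitted bias and scaling are assumptions, not the authors' code. Result (ii) suggests choosing the rank on the order of $\sqrt{N}$ for $N$ fine-tuning data points.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA layer: y = x (W0 + B A)^T with W0 frozen.

    Only A (r x d_in) and B (d_out x r) are trained, so the update
    W - W0 = B A has rank at most r. (Bias and the usual alpha/r
    scaling are omitted to keep the sketch minimal.)
    """

    def __init__(self, pretrained: nn.Linear, r: int):
        super().__init__()
        d_out, d_in = pretrained.weight.shape
        self.W0 = pretrained.weight.detach()  # frozen pretrained weight
        # Common LoRA-style init: B = 0, so training starts exactly at W0.
        self.A = nn.Parameter(torch.randn(r, d_in) / d_in**0.5)
        self.B = nn.Parameter(torch.zeros(d_out, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ (self.W0 + self.B @ self.A).T

# Hypothetical usage: with N data points, pick the rank on the order of
# sqrt(N), matching the condition r >~ sqrt(N) in result (ii).
N = 1024
layer = LoRALinear(nn.Linear(256, 256), r=int(N**0.5))
out = layer(torch.randn(8, 256))
```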
MSC number(s): 68T01
Keyword(s): Low-rank adaptation, deep learning theory, non-convex optimization
Language of Session (Talk): English