LLM personalization and long-context learning
Efficient fine-tuning and long-context understanding
This research thrust focuses on enabling personalization of foundation models through memory-efficient fine-tuning. Research outcomes of this project include LaMDA, AFLoRA, and efficient long-context understanding without forgetting in the middle.