Student Getting Research Boost Through Google Ph.D. Fellowship
A Georgia Tech Ph.D. candidate is getting a boost for his research into developing more efficient multi-task artificial intelligence (AI) models that don't rely on fine-tuning.
George Stoica is one of 38 Ph.D. students worldwide researching machine learning who were named 2025 Google Ph.D. Fellows.
Stoica is designing AI training methods that bypass fine-tuning, the process of adapting a large pre-trained model to perform new tasks. Fine-tuning is one of the most common ways engineers update large language models like ChatGPT, Gemini, and Claude to add new capabilities.
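For readers unfamiliar with the process, the sketch below illustrates one common variant of fine-tuning in PyTorch: the pre-trained portion of a network is frozen, and a small new component is trained on data for the new task. The network, layer sizes, and data here are hypothetical placeholders for illustration only, not Stoica's method or any specific production pipeline.

```python
import torch
import torch.nn as nn

# Stand-in for a large pre-trained backbone (in practice, weights
# would be loaded from a previously trained model).
backbone = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
)

# Freeze the pre-trained weights so only the new task head is updated.
for param in backbone.parameters():
    param.requires_grad = False

# New head for the new task (here, a hypothetical 10-class classifier).
head = nn.Linear(256, 10)
model = nn.Sequential(backbone, head)

# Synthetic stand-in data for the new task.
inputs = torch.randn(64, 128)
labels = torch.randint(0, 10, (64,))

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A few gradient steps adapt the head while the backbone stays fixed.
for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimizer.step()
```

Even this cheaper variant still needs labeled examples of the new task, which is the cost Stoica's research aims to reduce.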
If an AI company wants to give a model a new capability, it could build a new model from scratch for that specific purpose. However, if an existing model already has training relevant to the new task, fine-tuning it is cheaper.
Stoica argues that fine-tuning still uses large amounts of data, and that other methods can help models learn more effectively and efficiently.