It seems extremely likely that we will continue to use LLMs and similar models well into the future. So the fact that it's possible, in part by leveraging existing expensive models, to build smaller, cheaper, and less resource-intensive models with comparable capabilities seems great.