snek_case | 1 month ago

You can work on building LLMs that use less compute and run locally as well. There are some pretty good open models, and they could probably be made even more computationally efficient.
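For what "run locally" can look like in practice, here is a minimal sketch (my illustration, not something from the comment) using llama-cpp-python with a quantized GGUF build of an open model; the model path/filename is a placeholder for whatever open model you download:

    # Minimal local inference with a quantized open model (llama-cpp-python).
    # "models/your-open-model-q4_k_m.gguf" is a placeholder path, not a real file.
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/your-open-model-q4_k_m.gguf",  # placeholder GGUF file
        n_ctx=2048,          # context window
        n_threads=8,         # CPU threads; tune for your machine
    )

    out = llm("Explain quantization in one sentence.", max_tokens=64)
    print(out["choices"][0]["text"])

4-bit quantized weights plus CPU-only inference is one of the simpler ways today to trade a little quality for a large drop in compute and memory, which is the kind of efficiency work the comment is pointing at.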