bed147373429 | 2 years ago
Your commands assume the model is a .bin file, so I guess there must be a way to convert the PyTorch .pth model to a .bin file. How do I do this, and what is the difference between the two formats?
The Facebook repo provides commands for running the models, but they don't work on my Windows machine: "NOTE: Redirects are currently not supported in Windows or MacOs. [W ..\torch\csrc\distributed\c10d\socket.cpp:601] [c10d] The client socket has failed to connect to ...."
The Facebook repo doesn't say which OS it's meant for, so I assumed it would work on Windows too. But if it did work everywhere, why would anyone need ggerganov's llama code? I'm new to all of this and easily confused, so any help is appreciated.
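For intuition on the difference: a .pth file is a pickled PyTorch checkpoint (a dict of tensors plus Python metadata), while llama.cpp's .bin is a single flat binary that just lays the tensors out for fast memory-mapped loading. A toy sketch of that kind of flat serialization (the layout here is a simplified assumption for illustration, not the real ggml format):

```python
import struct

def write_tensor(buf, name, values):
    """Append one named float32 tensor to a flat binary buffer.
    Toy layout: name length, name bytes, value count, raw little-endian floats."""
    nb = name.encode("utf-8")
    buf += struct.pack("<I", len(nb)) + nb
    buf += struct.pack("<I", len(values))
    buf += struct.pack(f"<{len(values)}f", *values)
    return buf

def read_tensors(buf):
    """Parse the flat buffer back into {name: [floats]}."""
    tensors, off = {}, 0
    while off < len(buf):
        (nlen,) = struct.unpack_from("<I", buf, off); off += 4
        name = buf[off:off + nlen].decode("utf-8"); off += nlen
        (cnt,) = struct.unpack_from("<I", buf, off); off += 4
        vals = list(struct.unpack_from(f"<{cnt}f", buf, off)); off += 4 * cnt
        tensors[name] = vals
    return tensors

# Hypothetical tensor names, chosen for illustration only.
blob = bytearray()
blob = write_tensor(blob, "tok_embeddings.weight", [0.5, -1.25])
blob = write_tensor(blob, "norm.weight", [1.0])
print(read_tensors(blob))
```

In practice the llama.cpp repo ships conversion scripts that do this for you (reading the .pth checkpoints and emitting the .bin file), so you don't write this by hand.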
pdntspa | 2 years ago
https://www.reddit.com/r/LocalLLaMA/wiki/models#wiki_llama_2...