
orpheansodality | 2 years ago

The paper says LLaMA:

> We develop a large multimodal model (LMM), by connecting the open-set visual encoder of CLIP [36] with the language decoder LLaMA, and fine-tuning them end-to-end on our generated instructional vision-language data
