I switched from Llama 2 to Vicuna for my offline LLM task. I did not run any objective benchmarks, but subjectively it feels better.