top | item 41456567

JediPig | 1 year ago

I tested this out on my workload (SRE/DevOps/C#/Golang/C++). It started responding with nonsense to a simple "write me a boto Python script that changes x, y, z value" prompt.

Then I tried other questions from my past to compare... and I believe whoever built the LLM just trained on the questions from the benchmarks.

In one instance, after an hour of use (I stopped at that point), it answered a single question with four different programming languages, and with answers that were in no way related to the question.

tmikaeld | 1 year ago

I have the same experience: it hallucinates and rambles on and on about "solutions" that are not related.

Unfortunately, this has always been my experience with all open source code models that can be self-hosted.

Gracana | 1 year ago

It sounds like you are trying to chat with the base model when you should be using a chat model.
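To illustrate the distinction: a base model only continues text, while a chat (instruct-tuned) model expects the request wrapped in a conversation template. A minimal sketch, assuming a ChatML-style template (one common convention among open models; the actual template for any given model is defined on its model card, and the function names here are hypothetical):

```python
def raw_prompt(user_text: str) -> str:
    # Base models do plain next-token continuation: the prompt goes in as-is,
    # so the model may "continue" it in any direction instead of answering.
    return user_text


def chatml_prompt(user_text: str,
                  system: str = "You are a helpful coding assistant.") -> str:
    # Chat models are fine-tuned on turns delimited by special tokens;
    # wrapping the request in the template steers the model to answer
    # the question and then stop at the end-of-turn token.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user_text}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )


print(chatml_prompt("Write me a boto Python script."))
```

Sending the raw prompt to a base model often produces the rambling, off-topic continuations described above, which is why inference frameworks apply the model's chat template automatically when you pick the chat variant.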

tarruda | 1 year ago

Have you run the model in full FP16? It's possible that a lot of quality is lost when running quantized versions.
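The quality loss from quantization comes from mapping FP16 weights onto a small grid of discrete levels. A toy sketch of round-to-nearest 4-bit quantization (illustrative only, not any specific library's scheme) shows the per-weight error this introduces, bounded by half a quantization step:

```python
import random


def quantize_dequantize(weights, bits=4):
    """Map each weight to the nearest of 2**bits levels, then back to float."""
    levels = 2 ** bits - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / levels if hi > lo else 1.0
    return [round((w - lo) / scale) * scale + lo for w in weights]


random.seed(0)
w = [random.gauss(0, 0.02) for _ in range(1000)]  # toy "weight tensor"
wq = quantize_dequantize(w, bits=4)

# Worst-case rounding error is half the spacing between adjacent levels.
step = (max(w) - min(w)) / 15
worst = max(abs(a - b) for a, b in zip(w, wq))
print(f"step={step:.5f}, worst error={worst:.5f}")
```

Real quantizers (GPTQ, AWQ, k-quants, etc.) are much cleverer about grouping and scaling, but the basic trade-off is the same: fewer bits per weight means larger rounding error, and small models or aggressive quants can degrade noticeably versus FP16.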