model in its 4_1 quantization is almost as good as guanaco-65b

by onealeph0cc - opened

It performs very well in my tests, which consist of executing the chain:

(loop
  (-> get-task-description
      llm_generate_plan
      llm_convert_plan_to_json
      parse_command))

and what I measure is the number of steps until the first command that leads to a meaningful result.
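For anyone curious what that chain looks like in practice, here is a minimal Python sketch of the loop. The `llm_*` functions are hypothetical stubs standing in for real model calls, and `is_meaningful` is an assumed caller-supplied check; the metric is just the step count until the first command that passes it.

```python
import json

def llm_generate_plan(task):
    # stand-in for a real LLM call that drafts a plan from the task description
    return f"plan for: {task}"

def llm_convert_plan_to_json(plan):
    # stand-in for a real LLM call that reformats the plan as JSON
    return json.dumps({"command": plan})

def parse_command(plan_json):
    # extract the executable command from the JSON plan
    return json.loads(plan_json)["command"]

def steps_to_first_meaningful_command(task, is_meaningful, max_steps=10):
    # run the chain repeatedly; return the step count at the first
    # command judged meaningful, or None if the budget runs out
    for step in range(1, max_steps + 1):
        plan = llm_generate_plan(task)
        command = parse_command(llm_convert_plan_to_json(plan))
        if is_meaningful(command):
            return step
    return None

# with these stubs the very first command already qualifies
print(steps_to_first_meaningful_command("list files", lambda c: "plan" in c))
```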
