Quote:
Originally Posted by shovelquest
Will check it out. I think 200 tokens is actually super fucking slow though, I'd have to check. I know in my logs for my chatbot I see it using 1000+ tokens on one response. And the local LLM I did try ran super slow due to my GTX 950 card, so I don't doubt it "works", but I bet it's like asking the computer in Hitchhiker's Guide a question and having to wait years for the response.
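For what it's worth, the wait-time math is easy to sketch. This is just back-of-envelope arithmetic; the tokens-per-second figures below are made-up examples for illustration, not benchmarks of a GTX 950 or any specific model:

```python
# Rough estimate: how long a ~1000-token response takes at
# different generation speeds (tok/s values are illustrative,
# not measured numbers).
response_tokens = 1000

for tok_per_sec in (2, 10, 50, 200):
    seconds = response_tokens / tok_per_sec
    print(f"{tok_per_sec:>4} tok/s -> {seconds:>6.1f} s per response")
```

So a long response on a slow card can easily run into minutes, which lines up with the "wait forever for an answer" feeling.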