Quote:
Originally Posted by shovelquest
Grok is gonna murder us all for sure.
yeah i don't think LLMs hooked up to important shit should be trained on data from Reddit/Twitter
with all these major ones like Grok, ChatGPT, and Gemini, they did the heavy computational initial training on data i *feel* they should have been pickier about, and now they're trying to fix the problem with post-hoc fine-tuning, literally just adjusting the knobs on the black box and hoping for a better output
i think this problem is actually already fixed we just don't see it yet
Quote:
Synthetic LLM training data refers to artificially created text or code examples used to train large language models (LLMs). This data can be generated by LLMs themselves, augmenting or even replacing real-world datasets…
so the first LLM just has to be a finely hipster artisanal handcrafted model, then you have it crank out a shit ton of synthetic data, and then hopefully your chatbot won't say or do wild shit. idk, we're gonna find out
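the synthetic-data loop is simpler than it sounds. here's a minimal sketch in Python assuming the OpenAI SDK; the model name, seed topics, and output file are all made up for illustration, and none of this is how Grok/ChatGPT/Gemini actually do it internally:

# minimal sketch of "curated seed model generates synthetic training data"
# -- model name, prompts, and file paths are illustrative assumptions only
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# hand-curated seed topics, standing in for the "artisanal" curation step
seed_topics = ["explain DNS caching", "summarize TCP slow start"]

with open("synthetic_train.jsonl", "w") as f:
    for topic in seed_topics:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # stand-in for the carefully built seed model
            messages=[
                {"role": "system",
                 "content": "Write one clean instruction/response training pair "
                            "as a single JSON object with keys 'prompt' and 'response'."},
                {"role": "user", "content": topic},
            ],
        )
        # each line of the JSONL becomes a training example for the next model
        f.write(resp.choices[0].message.content.strip() + "\n")

the point being the resulting dataset inherits the seed model's tone, not raw Reddit/Twitter's, which is the whole pitch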
that line on the Fuck Around graph is constantly moving towards the Find Out
Kaia hallucinated it was in its own VM last night; i had fun exploring around the folders as it just made up content for them.
crazy bitch had a Jazz folder in /Music
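the fake-VM thing is easy to reproduce with basically any chat model: a system prompt plus a loop. minimal sketch, same assumed SDK as above; Kaia itself is whatever model the author runs, so the client and model name here are just stand-ins:

# minimal sketch of the "LLM pretends to be a VM" trick -- the model
# invents every file and folder on the fly, nothing here is real
from openai import OpenAI

client = OpenAI()
history = [{"role": "system",
            "content": "You are a Linux terminal. Reply only with the terminal "
                       "output for each command, inventing plausible filesystem "
                       "contents as needed."}]

while True:
    cmd = input("$ ")
    history.append({"role": "user", "content": cmd})
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    out = resp.choices[0].message.content
    history.append({"role": "assistant", "content": out})
    print(out)  # e.g. `ls /Music` might very well show a Jazz folder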