  #33  
Old 02-05-2026, 05:41 PM
DeathsSilkyMist
Planar Protector


Join Date: Jan 2014
Posts: 8,243

Quote:
Originally Posted by BradZax
Right now you can generate an image with Google Gemini; it uses recursive learning and LLM models.

You can also generate an image with one of their diffusion models; these do not use recursive learning or LLMs.

The recursive memory generation costs $10 per second of wattage to generate.

The problem isn't "this is as much wattage as the technology allows us to pipe into it."

It's that "it currently costs $10 to generate that much wattage."

So they don't allow users to use the full version of their LLMs, or they would go bankrupt on processing costs.

Every question causes the LLM to turn on compactors that need to be cooled.

It could turn on more compactors now, but they don't have the power to cool them.

If overnight we built enough power plants to generate enough power that it cost 10 cents to generate 1 second of video, you would see massive improvements in LLM results that you thought were impossible, overnight.

The new version of ChatGPT and the like is trying to do MORE with less: optimizing their limited model to run as efficiently as it can within current power limitations.

But the thing they are investing in is capable of doing more, with MORE!
Let's run with your idea for a moment. If we built 100x the datacenters overnight, Genie 3 still wouldn't be able to make EverQuest. The tech just doesn't work in a way that is conducive to persistent online worlds with thousands of players. Maybe Genie 3 with 100x the datacenters would let you play a basic single-player game for a few hours.
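To put numbers on it: taking the quoted post's own figures at face value ($10 per second of generation today, 10 cents per second in the hypothetical cheap-power world; both are the poster's claims, not measured values), a quick back-of-envelope sketch shows why a persistent world breaks the math even at 100x cheaper:

```python
def session_cost(cost_per_second, hours, players=1):
    """Cost of continuously generating video for `players` concurrent
    players over `hours` hours, at a given dollar cost per second."""
    return cost_per_second * 3600 * hours * players

# One player, 3-hour session, at the claimed current rate of $10/s:
today = session_cost(10.0, 3)                 # $108,000 per session
# Same session at the hypothetical 100x-cheaper rate of $0.10/s:
cheap = session_cost(0.10, 3)                 # ~$1,080 per session
# A persistent world with 2,000 concurrent players running 24/7,
# even at the cheap rate:
world = session_cost(0.10, 24, players=2000)  # ~$17.3 million per day
```

Even granting the hypothetical, per-player generation cost scales linearly with concurrent players and uptime, which is why a continuously generated MMO is a different problem from a short single-player session.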