Quote:
Originally Posted by BradZax
I think we're just waiting on more power. For example: Google video generation with Gemini using context costs about $3-5 in electricity to generate a 6-10 second clip, vs. their diffusion model for video generation, which is more like 5 cents or less.
That doesn't prove anything by itself, but it gives you an idea of how much wattage context adds to the equation!
I am hoping corporations running their own nuclear reactors will give us a big jump in usage.
https://www.latitudemedia.com/news/o...-nuclear-push/
Well, power is certainly a factor, but there's something more fundamental to the tech.
You have to understand "AI" is just math. Everything you tell the AI (i.e., all context) becomes part of its calculation. There are fundamental limits to how much context it can process. Throwing more processing power (which requires electrical power) at it can increase the "context window" some, but only so far.
It's a diminishing-returns thing baked into how the math works: each new token has to be weighed against everything that came before it, so the thousandth context token takes more power than the first, and the ten thousandth takes way more, and so on ... much like how level 60 requires way more effort than level 1.
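To make the scaling concrete, here's a toy back-of-envelope sketch (my own illustration, not anyone's real cost figures). It assumes the standard causal self-attention model, where token N has to attend to all N tokens so far, which makes the marginal cost of each token grow linearly and the total cost of a context grow roughly with the square of its length:

```python
# Toy model of causal self-attention cost (illustrative units, not real watts):
# token n attends to itself plus all n-1 earlier tokens, so its marginal
# cost is ~n, and a full context of length L costs ~L*(L+1)/2 total.

def marginal_attention_cost(n):
    """Relative cost of processing the n-th context token (1-indexed)."""
    return n

def total_attention_cost(length):
    """Relative cost of processing a whole context of `length` tokens."""
    return sum(marginal_attention_cost(n) for n in range(1, length + 1))

print(marginal_attention_cost(1))      # 1
print(marginal_attention_cost(1000))   # 1000 -> the 1000th token costs 1000x the 1st
print(total_attention_cost(1000))      # 500500
print(total_attention_cost(10000))     # 50005000 -> 10x the tokens, ~100x the cost
```

So doubling the context window roughly quadruples the work, which is why more electricity alone only stretches the window so far.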
You might pay a neighborhood kid to level to 2, but it will cost wayyyy more (more than 60x) to pay him to get to 60. Even if you inherit $100 more, it probably won't get you to 60.