Boom! It's Time To Call Bullshit!

With DeepSeek R1, the moat claimed by MBA monkeys and LinkedIn AI enthusiasts is collapsing, with Nvidia trading down 12% in premarket. For months, sensible, knowledgeable experts in the field (like Yann LeCun) have pointed out that the "scale-only" approach will never work. There is no magic here, and no sensible theory behind it. Yet people continue to consume the nonsense fed to them by MBA monkeys (Sam Altman, Elon Musk, Google, Meta, looking at you!). Trillions of dollars that could have been spent on meaningful advances in science and technology have instead been burned in data centers, just like the so-called "web3.0" crypto money was wasted.

Now it’s time to call out this bullshit!

How Do LLMs Work?

Simply put: they output garbage, the average of their training distribution, recovered through "interpolation." Want proof? Ask an LLM to generate an image of a clock and see if it ever shows anything other than 10:10. Request an image of a completely full glass of wine. Ask for a big prime number!
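Here is a minimal sketch of the mechanism, using a made-up caption corpus (the counts are hypothetical, and a real LLM is vastly bigger, but the principle is the same): when one pattern dominates the training data, greedy decoding can only ever parrot it back.

```python
# Toy "language model": a frequency table over a hypothetical corpus
# in which stock-photo clock captions overwhelmingly read 10:10.
from collections import Counter

corpus = ["10:10"] * 90 + ["3:25"] * 6 + ["7:48"] * 4  # made-up counts

counts = Counter(corpus)
probs = {time: n / len(corpus) for time, n in counts.items()}

def generate_clock_time():
    """Greedy decoding: always pick the most probable continuation."""
    return max(probs, key=probs.get)

print(probs)                  # {'10:10': 0.9, '3:25': 0.06, '7:48': 0.04}
print(generate_clock_time())  # '10:10', every single time
```

No understanding of clocks anywhere, just the dominant statistic of the training set echoed back.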

Why Will "Scale-Only" AI Never Work?

Because it’s merely a pattern-matching statistical information-retrieval machine. The universal approximation theorem tells us that even a network with a single hidden layer can "approximate" any reasonable function or distribution. But approximation buys you exactly one thing: interpolating known points of a static distribution. Perturb those points, or ask for extrapolation, and it fails miserably.
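Don't take my word for it. Here is a minimal sketch (a degree-9 polynomial fit stands in for any flexible approximator; that substitution is an assumption, not an LLM): fit sin(x) from samples on [-π, π], then query it inside and outside that range.

```python
# Interpolation vs. extrapolation: fit sin(x) on [-pi, pi], then
# query the model both inside and outside the training interval.
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(-np.pi, np.pi, 200)   # known points, static distribution
y_train = np.sin(x_train)

coeffs = np.polyfit(x_train, y_train, deg=9)  # flexible approximator (stand-in)

for x in [1.0, 3.0, 5.0, 8.0]:  # first two inside the range, last two outside
    pred, true = np.polyval(coeffs, x), np.sin(x)
    print(f"x={x:4.1f}  pred={pred:9.3f}  true={true:6.3f}")
```

Inside the training interval the predictions are near perfect; one step outside, the error explodes. Scale doesn't change that, it just widens the interval.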

Transformers lean on permutation invariance over sets to generalize functions with nonlinear models. In the real world, however, scale invariance is far more important. That means AI must interact with and understand the 3D world to build symbolic meaning. The architecture must also adapt its learning continuously and expand as information grows (continual learning), which requires sparse graph operations and storage, neither of which fits current computer architectures.
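To make that set-invariance claim concrete, below is a minimal sketch of self-attention (plain NumPy, no positional encoding, random weight matrices as stand-ins): permute the input tokens and the output rows simply permute with them, nothing more.

```python
# Bare self-attention treats its input as a set: permuting the tokens
# merely permutes the output rows (permutation equivariance).
import numpy as np

rng = np.random.default_rng(0)
d = 8
X = rng.normal(size=(5, d))                        # five "tokens"
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

def attention(X):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return weights @ V

perm = rng.permutation(5)
print(np.allclose(attention(X)[perm], attention(X[perm])))  # True
```

That property is convenient for sets of tokens; it says nothing about the scale invariance a system needs to reason about a 3D world.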

Anyone with a background in robotics also understands that our current mathematics and mechanics (kinematic singularities, rigid-body dynamics, control theory) are nowhere near sufficient for anything resembling human-like dexterity.
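A minimal sketch of one such wall, a kinematic singularity (a two-link planar arm with hypothetical unit link lengths): the Jacobian that maps joint velocities to end-effector velocities loses rank as the arm stretches out, and near that pose some motions demand unbounded joint speeds.

```python
# Kinematic singularity of a two-link planar arm: det(J) = L1*L2*sin(theta2),
# which vanishes when the arm is fully extended (theta2 = 0).
import numpy as np

L1, L2 = 1.0, 1.0  # hypothetical link lengths

def jacobian(t1, t2):
    """Maps joint velocities to end-effector (x, y) velocity."""
    return np.array([
        [-L1 * np.sin(t1) - L2 * np.sin(t1 + t2), -L2 * np.sin(t1 + t2)],
        [ L1 * np.cos(t1) + L2 * np.cos(t1 + t2),  L2 * np.cos(t1 + t2)],
    ])

for t2 in [1.0, 0.1, 0.01, 0.0]:
    det = np.linalg.det(jacobian(0.5, t2))
    print(f"theta2={t2:5.2f}  det(J)={det:8.5f}")  # -> 0 at the singularity
```

Every real manipulator has poses like this, and handling them gracefully, never mind full human dexterity, is exactly where current control theory strains.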

It’s time to stop listening to populist MBA monkeys!