@parthknowsai: Researchers at Meta are experimenting with the Free Transformer, which helps LLMs plan before generating text. #EduTok #Science #LearnOnTikTok #Tech #ai
But ain't that the same as the thinking models already out there?
2025-11-09 10:50:41
0
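For context, the Free Transformer paper the video describes adds a random latent variable Z to a decoder-only transformer: Z is injected into the residual stream partway up the stack and learned with a conditional-VAE objective, so the model can commit to global properties of its answer before it emits any tokens. Below is a minimal sketch of just the injection step, assuming PyTorch; the single global Gaussian latent, the module names, and the sizes are illustrative simplifications, not the paper's actual design.

```python
# Minimal sketch of a latent-"plan" decoder, NOT Meta's actual code.
# A latent Z is injected into the residual stream halfway up the stack;
# in the real paper Z is inferred by an encoder during training
# (the conditional-VAE part, omitted here) and simply sampled at inference.

import torch
import torch.nn as nn

class LatentPlanDecoder(nn.Module):
    def __init__(self, vocab=32000, d=512, n_layers=8, z_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        layer = lambda: nn.TransformerEncoderLayer(d, nhead=8, batch_first=True)
        half = n_layers // 2
        self.lower = nn.ModuleList(layer() for _ in range(half))            # blocks before Z
        self.upper = nn.ModuleList(layer() for _ in range(n_layers - half)) # blocks after Z
        self.z_proj = nn.Linear(z_dim, d)  # maps the latent plan into the residual stream
        self.head = nn.Linear(d, vocab)
        self.z_dim = z_dim

    def forward(self, tokens, z=None):
        B, T = tokens.shape
        mask = nn.Transformer.generate_square_subsequent_mask(T)  # causal mask
        h = self.embed(tokens)
        for blk in self.lower:
            h = blk(h, src_mask=mask)
        if z is None:
            # Inference: the "decision" is just sampled, before any text exists.
            z = torch.randn(B, self.z_dim)
        h = h + self.z_proj(z).unsqueeze(1)  # broadcast the plan to every position
        for blk in self.upper:
            h = blk(h, src_mask=mask)
        return self.head(h)

logits = LatentPlanDecoder()(torch.randint(0, 32000, (2, 16)))
print(logits.shape)  # torch.Size([2, 16, 32000])
```

That sampled Z is also what distinguishes this from the "thinking" models several comments below ask about: chain-of-thought models plan in generated text, token by token, while here the plan lives in a latent variable chosen before generation starts.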
fifi :
Isn't that exactly what "Thinking" for models is?
2025-11-04 13:10:50
35
𝙍𝙚𝙚𝙣𝙤𝙈𝙤𝙤𝙣 :
They all plan ahead; that doesn't mean intelligence. Grok, Gemini, and GPT all show their "decisions" as "thinking". This isn't new, Meta is just behind.
2025-11-04 23:52:20
5
ruvda :
"AI has been faking intelligence all this time " 🤔 is higher dimensional pattern recognition, transformation and extension faking intelligence or a higher form of intelligence than what humans can comprehend ? and is planning a merely encoder based architectural consequence or embodied experience based goal oriented actions🤔
2025-11-04 16:05:19
4
Vizlin :
That was interesting; it will be good to see how other developers integrate that into their models (if it isn't patented by Meta).
2025-11-04 13:09:02
2
K :
Just one author for a Meta paper is insane
2025-11-04 16:32:03
3
Janani Subramanian :
How is this different from the thinking text that generates when you use DeepSeek?
2025-11-04 14:24:22
2
Waschbar :
Isn’t this how you inject bias into the results? Can you “drift” the results by “hard coding” the Free Transformer?
2025-11-05 09:26:28
1
Masa Maeda :
That sounds like the equivalent of priming or anchoring.
2025-11-05 03:46:07
1
Circuit87 :
Every day there are reports of a breakthrough, but no real-world impact.
2025-11-05 03:10:04
1
Sudomike :
Just waiting to see how this is used to sway public opinion. Imagine an AI that puts political bias in before it starts thinking.
2025-11-04 23:10:14
1
Dr. Douglas Fartbox PhD Esq. :
I thought that's what "attention" was referring to in newer LLMs?
2025-11-04 18:07:01
1
lvoltamol :
Isn’t this how DeepSeek works?
2025-11-04 14:21:38
1
MartinJMS :
Paper?
2025-11-10 07:44:59
0
alanlille8 :
Can I get this model on Ollama or the Hugging Face Hub? Thank you!
2025-11-09 10:18:53
0
dyvphbbcv69c :
That was the initial plan.
"a tiny sticky note".
2025-11-08 15:25:34
0
Wethotshame :
Waiting on the continuous autoregressive language models breakdown. I’ll read the paper this weekend. What’s your take?!
2025-11-06 21:15:30
0
meekuai :
Better planning could improve AI responses a lot.
2025-11-05 15:43:09
0
666bbb.0 :
Meta 🔥
2025-11-05 03:44:00
0
Phrank :
it's still a language model... 😏
2025-11-05 02:09:26
0
Richard.Edits :
Not even gonna lie, Walter AI humanizer has been clutch lately.
2025-11-05 01:20:29
0
chloe.made1 :
You made AI so practical
2025-11-04 23:49:24
2
w_ill1 :
Crazy. Thanks for that information
2025-11-04 21:45:25
0
Fs :
So basically, if Anthropic adopts this new architecture, software development is done? Think Sonnet 4.5 but 50% better. Incredible.
2025-11-04 21:32:53
0
Lotta Tostada :
Love your videos! Feature request: it would be nice if your audio was clearer, like with a decent mic for your voice. Thanks.
2025-11-04 16:50:25
0