@parthknowsai: Researchers at Meta are experimenting with the Free Transformer, which helps LLMs plan before generating text #EduTok #Science #LearnOnTikTok #Tech #ai
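For everyone in the comments asking how this differs from the visible "thinking" text in DeepSeek, Grok, or Gemini: those models plan by generating chain-of-thought tokens one at a time. The Free Transformer (François Fleuret, Meta) instead samples a random latent variable Z, a hidden "plan", partway up the decoder stack before any output token is committed. Below is a minimal, hypothetical PyTorch sketch of that idea under one reading of the paper; the class and variable names are invented for illustration, and the paper's actual latent parameterization and non-causal encoder block differ from this simplification.

# Toy sketch: a decoder-only LM whose upper half is conditioned on a
# discrete latent Z injected mid-stack (the "plan before generating" idea).
# Hypothetical names; not the paper's reference implementation.
import torch
import torch.nn as nn

class ToyFreeDecoder(nn.Module):
    def __init__(self, vocab=256, d=64, n_heads=4, n_layers=4, z_dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        make = lambda: nn.TransformerEncoderLayer(d, n_heads, 4 * d, batch_first=True)
        self.lower = nn.ModuleList(make() for _ in range(n_layers // 2))
        self.upper = nn.ModuleList(make() for _ in range(n_layers // 2))
        self.q_head = nn.Linear(d, z_dim)  # approximate posterior Q(Z|X), training only
        self.z_proj = nn.Linear(z_dim, d)  # injects the sampled plan back into the stack
        self.lm_head = nn.Linear(d, vocab)

    def forward(self, tokens):
        causal = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.embed(tokens)
        for blk in self.lower:
            h = blk(h, src_mask=causal)
        # Sample a discrete Z with straight-through Gumbel-softmax so gradients
        # flow; the paper uses a different discrete parameterization.
        z_logits = self.q_head(h)
        z = nn.functional.gumbel_softmax(z_logits, hard=True)
        h = h + self.z_proj(z)  # the upper half now "knows the plan"
        for blk in self.upper:
            h = blk(h, src_mask=causal)
        return self.lm_head(h), z_logits  # z_logits feed the VAE-style KL penalty

logits, z_logits = ToyFreeDecoder()(torch.randint(0, 256, (2, 8)))
print(logits.shape)  # torch.Size([2, 8, 256])

At inference time Z would be drawn from a fixed prior rather than from q_head, with a KL term on z_logits keeping training honest. This also answers the "hard coding" worry raised below: clamping Z to a chosen value would indeed steer every completion toward that plan.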

parthknowsai
Region: US
Tuesday 04 November 2025 13:02:24 GMT


Comments

kinggg_mufasa
King Mufasa :
But ain't that the same as the thinking models out there?
2025-11-09 10:50:41
0
01_fifi_10
fifi :
Isn't that exactly what "Thinking" for models is?
2025-11-04 13:10:50
35
reenomoon3
𝙍𝙚𝙚𝙣𝙤𝙈𝙤𝙤𝙣 :
They all plan ahead; it doesn't mean intelligence. Grok, Gemini, and GPT all show their "decisions" as "thinking". This isn't new, Meta is just behind.
2025-11-04 23:52:20
5
daruvsta
ruvda :
"AI has been faking intelligence all this time " 🤔 is higher dimensional pattern recognition, transformation and extension faking intelligence or a higher form of intelligence than what humans can comprehend ? and is planning a merely encoder based architectural consequence or embodied experience based goal oriented actions🤔
2025-11-04 16:05:19
4
jvizlin
Vizlin :
That was interesting; it will be good to see how other developers integrate that into their models (if not patented by Meta).
2025-11-04 13:09:02
2
mr.kresse
K :
Just one author for a Meta paper is insane
2025-11-04 16:32:03
3
itsjaneats
Janani Subramanian :
How is this different than the thinking text that generates when you use DeepSeek?
2025-11-04 14:24:22
2
waschbars
Waschbar :
Isn’t this how you inject bias into the results? Can you “drift” the results by “hard coding” the Free Transformer?
2025-11-05 09:26:28
1
masa.maeda
Masa Maeda :
That’s sounds like the equivalent to priming or anchoring.
2025-11-05 03:46:07
1
circuit8723
Circuit87 :
Every day there are reports of a breakthrough, but no real-world impact.
2025-11-05 03:10:04
1
sudomike
Sudomike :
Just waiting to see how this is used to sway public opinion. Imagine an AI that puts political bias in before it starts thinking.
2025-11-04 23:10:14
1
titwhisker
Dr. Douglas Fartbox PhD Esq. :
I thought that's what "attention" was referring to in newer LLMs?
2025-11-04 18:07:01
1
lvoltamol
lvoltamol :
Isn’t this how deepseek works?
2025-11-04 14:21:38
1
martinjms0
MartinJMS :
Paper?
2025-11-10 07:44:59
0
alanus654
alanlille8 :
Can I get this model on Ollama or the Hugging Face Hub? Thank you!
2025-11-09 10:18:53
0
dyvphbbcv69c
dyvphbbcv69c :
That was the initial plan: "a tiny sticky note".
2025-11-08 15:25:34
0
wethotshame
Wethotshame :
Waiting on the continuous autoregressive language models breakdown. I’ll read the paper this weekend. What’s your take?!
2025-11-06 21:15:30
0
meeku81
meekuai :
Better planning could improve AI responses a lot.
2025-11-05 15:43:09
0
666bbb.0
666bbb.0 :
Meta 🔥
2025-11-05 03:44:00
0
phrankinscense
Phrank :
it's still a language model... 😏
2025-11-05 02:09:26
0
paigemiller391269
Richard.Edits :
not even gonna lie walter ai humanizer been clutch lately
2025-11-05 01:20:29
0
chloe.made1
chloe.made1 :
You made AI so practical
2025-11-04 23:49:24
2
w_ill159
w_ill1 :
Crazy. Thanks for that information
2025-11-04 21:45:25
0
fppsrc
Fs :
So basically, if Anthropic adopts this new architecture, software development is done? Think Sonnet 4.5 but 50% better. Incredible.
2025-11-04 21:32:53
0
lottatostada
Lotta Tostada :
Love your videos! Feature request... it would be nice if your audio were clearer, like a decent mic for your voice. Thx.
2025-11-04 16:50:25
0