I don't get it
Why did we draw the first lines like that?
Is there a reason, or is it just random?
2025-08-25 11:22:47
36
acidayan :
i don't think they teach this in cs
2025-08-25 00:02:30
139
? :
The visualizations go crazy. Would not have been able to understand any of that without the visuals. Great job
2025-08-25 01:52:54
474
Taylor Cates :
What are these words 😩
2025-08-26 03:45:04
40
alex_p_howard :
Can this be used in Economics and Econometrics?
2025-08-25 23:17:04
0
Jesus Christ • Creator :
What goes on when training it?
2025-08-25 05:51:59
1
🎀Aeden🎀 :
it kinda feels like integration to be honest, where continuously adding neurons brings the accuracy up
2025-08-25 01:34:22
6
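A minimal sketch of that analogy in Python (illustrative only; the random-feature least-squares fit below is a stand-in for actual gradient training, and all names are made up for this example): widening the hidden layer tends to shrink the approximation error, much like refining a Riemann sum.

# Sketch: fit sin(x) with n random ReLU "neurons" via least squares
# and watch the max error fall as n grows. Illustrative only: a real
# network would learn w and b by backprop instead of fixing them.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 500)
y = np.sin(x)

def relu_features(x, n):
    # n ReLU ramps with random slopes w and shifts b, plus a bias column
    w = rng.normal(size=n)
    b = rng.uniform(0.0, 2.0 * np.pi, size=n)
    h = np.maximum(0.0, np.outer(x, w) - b)
    return np.column_stack([h, np.ones_like(x)])

for n in (2, 8, 32, 128):
    H = relu_features(x, n)
    coef, *_ = np.linalg.lstsq(H, y, rcond=None)
    print(f"{n:4d} neurons -> max |error| = {np.max(np.abs(H @ coef - y)):.4f}")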
ogrussy🥀 :
neural nets are definitely one of the biggest step-change innovations in human history
2025-08-25 05:37:36
30
glitch01 :
This is a theoretical result. We don't know much about whether backprop gets us these clean classes.
2025-08-25 03:34:33
7
DT :
those are all definitely words.
2025-08-25 03:09:25
7
Trent Greenan, MS, MBA :
Deep learning is so cool!
2025-08-25 06:23:55
0
Memo Akten :
The universal approximation theorem only proves that for any given fn, there exists a single-hidden-layer ANN that can approximate it to any given precision. *Finding* that ANN is a different problem :)
2025-08-25 04:24:58
54
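For reference, one standard (Cybenko-style) form of the existence claim this comment describes, stated as a sketch:

\[
\forall f \in C([0,1]^d),\ \forall \varepsilon > 0,\ \exists N,\ \{v_i, b_i \in \mathbb{R},\ w_i \in \mathbb{R}^d\}:
\quad \sup_{x \in [0,1]^d} \Bigl| f(x) - \sum_{i=1}^{N} v_i\,\sigma(w_i^{\top} x + b_i) \Bigr| < \varepsilon,
\]

where \(\sigma\) is a fixed sigmoidal (or, more generally, non-polynomial) activation. The theorem guarantees existence only; it gives no procedure for finding \(N\) or the weights, which is exactly the gap the comment points out.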
atomicself001 🇨🇦 :
Could use this for space travel navigation.
2025-09-03 00:54:00
0
ding-dong :
This is absolutely excellent educational content
2025-08-25 03:47:13
50
Citlali :
Can you explain some use case/application? Why would I elect to have this information via the 2D neural network vs some other means of storing/accessing the data? Just better compared to the 1000+ version?
2025-08-25 22:40:08
0
Kevin :
You’re asking questions that we can't answer. Higher-dimensional math is true
2025-09-28 06:53:58
2
nosnej7 :
I was just about to say that
2025-08-27 14:05:40
5
Dysentery Gary :
Ready for part 2
2025-08-25 06:27:06
2
Joseph Louis Lagrange :
Slow convergence is still convergence. Fourier series are practically divergent for many waveforms simply because convergence is so slow. Idk anything about ML, but all you're doing is selecting a more efficient subsequence; that it gives the same limit is a trivial consequence of the first sequence converging in the first place
2025-09-02 17:03:09
1
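A concrete instance of the slow convergence being described (a standard textbook example, added here for reference): the square wave

\[
f(x) = \operatorname{sgn}(\sin x) = \frac{4}{\pi} \sum_{k=0}^{\infty} \frac{\sin\bigl((2k+1)x\bigr)}{2k+1}.
\]

The coefficients decay only like \(1/n\), so truncating after \(N\) terms leaves an \(L^2\) error of order \(1/\sqrt{N}\), and the Gibbs overshoot near each jump (about 9% of the jump height) never shrinks. This is the sense in which such series are "practically divergent" even though they converge.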
Evo-D 333 :
instead of a flat plane, each layer should be hyperbolic, and when it flips scale you'll get a whole other set of data with multiple intersecting flat planes
2025-09-30 17:30:28
0
mcdonald’s cup full of piss :
the universal approximation theorem is actually BAD news :-) there is no such thing as a free lunch
2025-09-15 20:35:49
0
Noor Kamal :
can i find these videos on youtube?
2025-09-19 01:34:06
0
fighter 23 :
The main problem is generalization, and the universal approximation theorem tells us nothing about that. By the way, from statistical learning theory, neural networks are just dumb approximators that anyone can use.
2025-08-25 05:52:23
0
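The standard learning-theory decomposition behind this point (notation illustrative): for a learned predictor \(\hat f\) chosen from a hypothesis class \(\mathcal{H}\) with best-in-class predictor \(f_{\mathcal{H}}^{*}\) and Bayes-optimal risk \(R^{*}\),

\[
\underbrace{R(\hat f) - R^{*}}_{\text{excess risk}}
= \underbrace{R(\hat f) - R(f_{\mathcal{H}}^{*})}_{\text{estimation (generalization)}}
+ \underbrace{R(f_{\mathcal{H}}^{*}) - R^{*}}_{\text{approximation}}.
\]

The universal approximation theorem only controls the approximation term; it says nothing about the estimation term, which is where generalization lives.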
quelque-chose :
so glad to see you back making content! your earlier videos on neural networks are golden and contain such deep insights
2025-08-25 07:58:10
0