@dianasaurbytes: Interesting product lessons from Nvidia's recent release of ChatRTX. This is a "beta" product that allows you to run LLMs locally on your Windows computer, as long as you have certain Nvidia GPUs. #generativeai #ai #llm #productmanager #techstartup
dianasaurbytes
Region: US
Monday 08 April 2024 21:33:10 GMT
Comments
Letitride :
so your GPU would have more spyware
2024-04-09 10:42:56
0
Alex :
Very generous review, though; ChatRTX is very basic compared to LM Studio.
2024-04-08 21:39:46
0
Dylan S :
Pretty wild that any kind of LLM trained on what I assume is terabytes of data could even run locally
2024-04-08 21:53:13
0
Andrew B :
I'll need to go check it out and see if it's better than LM Studio
2024-04-23 03:12:32
2
Travis Hoyt :
You can do it now with Ollama and it doesn’t require an Nvidia GPU. MacBooks with M1/M2 chips can also run it.
2024-04-09 10:58:56
0
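A minimal sketch of the local setup Travis describes, assuming Ollama is installed and serving its default HTTP API on localhost:11434, and that a model has already been pulled (the model name "llama3" here is purely illustrative):

    # Query a locally running Ollama server from Python.
    # Assumes `ollama pull llama3` was run beforehand; no Nvidia GPU
    # is required, and Apple Silicon (M1/M2) Macs work as well.
    import json
    import urllib.request

    payload = json.dumps({
        "model": "llama3",   # any locally pulled model
        "prompt": "Why would someone run an LLM locally?",
        "stream": False,     # request one complete JSON reply
    }).encode("utf-8")

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])

Because inference happens entirely on the local machine, prompts never leave it, which is the main appeal over hosted chatbots.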
kiwi_feathers :
too bad I don't have an RTX video card
2024-04-09 18:57:53
0
ElonMusk :
Try “Faraday”
2024-04-08 22:42:19
1
boombanger :
That chatbot must be reaching out to the full LLM?
2024-04-08 21:42:14
0
Mega Tronite :
Need this on a mobile device! Do a vid on that 😁
2024-04-11 19:23:50
0
nsr :
Ollama is what you're looking for
2024-09-13 20:06:52
1
Trev :
I’m hoping the local-LLM movement continues its open-source alliance, to avoid centrally operated, closed, data-gathering, censored products
2024-04-08 22:46:32
1