In my limited experience, I've found that ChatGPT is a people pleaser and will tell you exactly what you want to hear
2025-06-06 20:25:34
1968
Dan :
When someone says “I looked it up on ChatGPT”, I immediately take whatever they say with a massive grain of salt lol
2025-06-06 20:12:07
2590
Alice (feisty) :
I experiment all the time with academic papers and it literally can't distinguish the truth. I wouldn't trust any info coming out of it without fact checking. I have pro on all the platforms
2025-06-09 13:35:51
1
Dumpy :
Programmer here. Thank you for doing this video. AI hallucinations happen a lot. Trust but verify.
2025-06-06 20:54:21
512
Khania :
it literally lies to me all the time. it's our thing
2025-06-07 01:16:02
405
Tik Toker :
Make it cite its sources with links
2025-06-06 20:26:31
53
Good Crust :
Y’all know a lot about law, but nothing about prompt engineering yet 😇
2025-06-06 20:25:56
15
Rocky Retrograde :
There’s a way to use chat gpt for law and there’s a way not to. This is the way not to.
2025-06-06 22:36:23
25
karl70 :
I find the cases it cites and load them, then ask it to recheck whether each one actually stands for the proposition it's cited for. I also ask another AI if it's correct.
2025-07-22 04:19:24
0
Michael Whitehouse :
Caselaw is often gated so ChatGPT cannot access the correct info. ChatGPT should admit when it doesn't know, but one should always ask for sources from AI.
2025-06-09 02:36:06
0
kgraves831 :
People need to just stop using AI. It is not getting better and we have enough misinformation in the world as it is.
2025-06-06 21:40:25
118
Megaminx :
These problems can be reduced through better prompting, e.g. "Give me a direct quotation from the source that answers ____", or by using the reasoning mode
2025-06-06 20:08:16
21
Evelyn Mika 🇺🇲 :
The worst part is the canned apologies you get with the promise to do better next time. Then doing the exact same thing. Felt like my marriage all over again.
2025-06-07 05:21:29
42
jlee :
my guess is that it's pulling precedents from an alternate timeline.
2025-06-08 02:15:14
0
Robert B :
This is because ChatGPT was trained to produce what looks like a better answer, not what actually is one
2025-06-06 20:11:43
253
Tiffany Regenerated :
Also, when you correct it, then start a new chat and ask the same questions, it will give the same incorrect information. It doesn't even learn from its mistakes
2025-06-07 02:25:45
0
Clarence Oveur :
I’ve used ChatGPT, Gemini, and Grok for writing some code. They all deliver a decent base, but when I call out bugs, they all acknowledge and then regenerate with the exact same bugs.
2025-06-06 20:47:10
25
CHOPPA :
Chat AIs are trained to predict the answer that the reader will like most
2025-06-07 06:54:33
31
Jonathan Gutierrez :
You have to include in the prompt "Do not make anything up. If you don't know something, say you don't know. Please double check your answer for accuracy."
2025-06-07 05:47:08
9
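Editor's note: the guardrail instruction in the comment above is usually placed in a system prompt so it applies to every turn. A minimal sketch of how that might look, using the common chat-message format (the exact wording and helper name are illustrative, not from any specific API):

```python
# Sketch: prepend anti-hallucination instructions as a system message.
# No API call is made here; this only builds the message list that a
# chat-completion endpoint would typically accept.

SYSTEM_PROMPT = (
    "Do not make anything up. "
    "If you don't know something, say you don't know. "
    "Double-check your answer for accuracy and cite your sources with links."
)

def build_messages(user_question: str) -> list[dict]:
    """Return a conversation with the guardrail instructions first."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

if __name__ == "__main__":
    for msg in build_messages("What case established qualified immunity?"):
        print(f"{msg['role']}: {msg['content']}")
```

Note that, as several commenters point out, such instructions lower but do not eliminate hallucinations; independently verifying the cited sources is still necessary.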
GayGardenz :
Oh gosh.
2025-06-23 01:26:34
0
keonabz :
Are you using ChatGPT o4 or o3? It defaults to o4. o4 is faster but less accurate than o3
2025-06-15 16:04:04
0
the Whiteop :
There is an option where it is slower, but it will look things up and it's more accurate
2025-06-11 04:37:07
0
John Britain :
ChatGPT is a language model; it's trying to pretend to be an average person doing the task you ask it. Tell it to pretend to be a Harvard law professor and answer the questions as the professor would, by deeply looking into case law.
2025-06-21 16:17:40
0
Mallory (MS1) :
It does the exact same thing with medicine studies.
2025-06-06 22:27:53
27
Jerry :
It did that when I asked it if Biden ever used the hard r word. It said he hadn't. Then when I linked the video it was like, "my bad 🤷‍♂️" Then I asked it again and it STILL said he didn't.
2025-06-07 02:53:57
3