@77chiachi2: My private swimming pool is 6,000 ping (roughly 2 hectares)! Sorry about that, Crane King. #雲林7姐 #你的南部七辣 #魚塭 #泳裝 #chancy嘉義千禧家

你的南部七辣
Region: TW
Wednesday 27 November 2024 11:53:57 GMT
Plays: 209091
Likes: 11176
Comments: 117
Shares: 188


Comments

user7217093203600
豆豆 :
So much water
2024-11-27 15:54:03
82
boy453191
舟舟漢堡🍔👸 :
When did Kouhu move to Chiayi? 😆😆😆
2024-11-27 23:34:04
22
_8098631
雲林房仲小鎮姑娘 :
Love your swimsuit
2024-11-28 05:56:00
3
lihyunbin
Eric 楚🥃 :
I want to go fishing 🥰
2024-11-27 15:50:16
2
he_yao_cheng
小丞 :
This outfit looks really good
2024-11-27 12:23:16
20
userpjegsqmmu4
陳文 :
Yunlin's Kouhu or Chiayi's Kouhu? 😳
2024-11-28 15:46:49
0
dyfojckfr87v
漂泊人生 :
Such nice skin
2024-11-27 14:09:57
3
306_g8
306_g8 :
The ultimate gun-gripping gadget 🥰
2024-11-29 11:15:15
13
user3621728437773
梅川依福 :
Already set up, the kids love it
2024-11-27 14:58:44
8
ttn_5757
俊宇 :
How did Kouhu get moved to Chiayi? So is it Yunlin's Minxiong or Chiayi's Kouhu anyway?
2024-11-28 07:10:38
5
play22352
牛奶鍋必須死 :
Just wear this uniform for every video from now on 😋😋😋
2024-11-27 18:18:15
7
kkk__0418
杨小K :
When did my home move to Chiayi… 😂
2024-11-28 01:34:15
1
r6_joeliao13
亨. :
Kouhu is in Yunlin 😀😀😀
2024-11-27 15:57:41
3
418042_
軒 ☻ :
I love this attitude 🤣
2024-11-28 09:16:13
3
hideee5556
是誰 :
So beautiful
2024-11-28 09:32:18
3
zhangzhihong10
當愛在靠近✨️🧸 :
How do I buy one 🤣
2024-11-27 23:17:58
2
k79921785
原子洨精肛 :
I had no idea Kouhu had already moved to Chiayi…
2024-11-29 08:39:22
1
danny07373
Danny :
Sis, I don't want to work hard anymore
2024-11-28 08:12:12
1
hehongyi0
今晚住你家 :
Kouhu is in Yunlin
2024-11-28 08:19:47
1
kiso3000
陳雷公 :
Following yet? 😳
2024-11-28 06:19:59
1
wldpce
Wldpce :
There really is a fish pond
2024-11-28 01:29:29
1
user20326841884548
張立謀 :
Fun and pretty, Yunlin's one and only flower 🤣🤣
2024-11-27 23:23:12
1
banana881113
胡二天 :
I'm from Kouhu, how come I didn't know it's in Chiayi
2024-11-28 04:25:30
1
nickhsieh4
Nick Hsieh :
😅😅😅😅😅 Looks really cold, good work
2024-11-27 22:36:14
1
user457181005
新北芭樂 :
So loud
2024-11-30 11:22:49
1

Other Videos

How I Made an AI Music Video Using ChatGPT, Suno, Veo 3, Kling, Imagen 4, and Hedra

Alright, so this video is different. No voiceover, no intro: I just dropped the piece. But I wanted to break down how I actually made it, because it's one of the first times I've fully stacked all these AI tools together into something that feels like a real story.

It started with an idea, part of a personal story I've been working on. I went into ChatGPT, dropped the concept in, and we started riffing on lyrics. I told it I wanted something soulful with a surfer feel, and it helped shape the vibe. Once the lyrics were right, I took them over to Suno and generated the music. Simple, smooth, and on point.

From there, I already had a few B-roll visuals tied to this storyline, but I wanted to build a performance scene. So I went back into ChatGPT and crafted a prompt for a singer addressing a crowd, then dropped that into Veo 3, and it generated a great first shot. Then I asked for a second angle, same character, same scene, and Veo 3 kept everything consistent. That was a nice surprise.

Then I moved to world-building. I used ChatGPT's DALL·E and Imagen 4 to generate detailed stills for the other scenes. Some were moody, some warm, but all grounded in the same emotional tone. I brought those into Kling 2.1 to animate the images, with just enough movement to bring them to life without overdoing it.

Now for the key piece: lip-syncing. I took a screenshot of the singer from the Veo 3 video, loaded it into Hedra, then fed in the Suno track. Hedra synced it perfectly, and just like that, the still image became a believable performance.

This wasn't about making something perfect; it was about learning the rhythm of these tools, understanding what each one does best and how to chain them together. If you're curious about the full workflow, the prompts, the time, the tools, I'll put together a breakdown. Just drop a comment if you're into that.
#AIForRealLife #AIMusicVideo #SunoAI #HedraAI #KlingAI #Veo3 #AIStorytelling #chatgpt #imagen #google
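The workflow above is really a pipeline: each tool consumes what the previous one produced. As a minimal sketch (not any real API — the step names and descriptions just restate the chain from the write-up), it can be modeled as data:

```python
from dataclasses import dataclass

@dataclass
class Step:
    tool: str       # which service handles this stage
    takes: str      # what it consumes
    produces: str   # what it hands to the next stage

# The chain from the write-up: lyrics -> music -> footage ->
# stills -> animation -> lip-sync.
PIPELINE = [
    Step("ChatGPT", takes="story concept", produces="lyrics"),
    Step("Suno", takes="lyrics", produces="music track"),
    Step("Veo 3", takes="performance prompt", produces="singer footage"),
    Step("Imagen 4 / DALL-E", takes="scene prompts", produces="stills"),
    Step("Kling 2.1", takes="stills", produces="animated clips"),
    Step("Hedra", takes="singer still + music track",
         produces="lip-synced performance"),
]

def describe(pipeline):
    """Render the chain as 'tool: input -> output' lines."""
    return [f"{s.tool}: {s.takes} -> {s.produces}" for s in pipeline]

for line in describe(PIPELINE):
    print(line)
```

Seeing it laid out this way makes the key design choice obvious: Hedra sits last because it needs two earlier outputs (the Suno track and a frame from the Veo 3 footage) to exist before it can run.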
