AI Commerce Creator — Slide Deck v2
OPENING: What's Possible (Slides 1–8)
Slide 1: Title
🤖💰 AI-POWERED COMMERCE CREATOR
Make Content That Sells
[Your Name / Brand]
Slide 2: Today's Promise
By 5pm today, you will have:
✅ A 15-sec polished product ad
✅ A 30-sec organic UGC review
✅ 3 social media image posts
✅ Your own AI voice + music
✅ A ComfyUI automation workflow
No design skills. No video experience. No ideation paralysis.
Just follow the cases. Build as you learn.
Slide 3: This Is Happening Right Now
[6 viral content examples — one per slide transition]
🧴 Perfume bottle, liquid swirling in slo-mo, light refracting
☕ Coffee pour — same shot, 3 different moods (Higgsfield)
🎧 Earbuds floating in space, sound waves visible
📱 "I tried this skincare for 30 days" — UGC-style
🏠 Room transforms from morning to night in one shot
💄 Lipstick color changes with camera angle shift
Slide 4: The Tools Making This Possible
TEXT → ChatGPT, Claude
IMAGES → DALL·E 3, Bing Image Creator
VIDEO → Kling AI, Runway, Higgsfield Cinema Studio
AUDIO → ElevenLabs, Suno
ASSEMBLY → CapCut, DaVinci Resolve
AUTOMATION → ComfyUI
All have free tiers. You'll use every one today.
Slide 5: Higgsfield Cinema Studio — The Camera Revolution
50+ cinematic camera presets:
📹 Zoom 📹 Pan 🚁 Drone 🤚 Handheld 🎥 Dolly
Mood shift — same shot, different atmosphere:
🌅 Golden Hour 🌙 Neon Night ☁️ Moody Morning
Viral presets library:
→ Encodes what already works on social
→ One-click: "Make this product cinematic"
Single photo → full video with camera movement
Slide 6: Today's Approach
❌ No "think of an idea" (paralysis kills workshops)
✅ Pre-made cases — you just follow and build
✅ Competition — reverse-engineer an image, closest wins
✅ Real products, real formats, real platforms
You're not "learning AI."
You're making content that could sell TODAY.
Slide 7: The Flow
TEXT (copy + script)
↓
IMAGE (social post + UGC + storyboard) ← COMPETITION
↓
VIDEO (text-to-video, image-to-video, start/end frame)
↓
AUDIO (voiceover + music)
↓
ASSEMBLY (polished ad + organic UGC)
↓
AUTOMATION (ComfyUI — do it all in one click)
Slide 8: Ground Rules
1. Follow the pre-made cases — no ideation required
2. Ask questions anytime — this is a workshop, not a lecture
3. Share your screen if stuck — we debug together
4. Competition is friendly — the real win is learning
5. Export everything — you'll leave with a portfolio piece
MODULE 1: AI Text Generation (Slides 9–12)
Slide 9: Module 1 Title
MODULE 1
AI Text Generation
✍️ Captions, Ads, Scripts
Slide 10: Your Case
PRODUCT: HydraGlow Smart Water Bottle
PRICE: $49
FEATURES: Tracks hydration, glows to remind you,
keeps water cold 24 hours
AUDIENCE: Health-conscious millennials
YOU NEED:
1. Instagram caption (hook + body + CTA)
2. Ad headline + description (for paid ads)
3. 15-second video script (shot-by-shot + VO)
Slide 11: The Formulas
AIDA: Attention → Interest → Desire → Action
"You're dehydrated right now and don't know it." →
"This bottle tracks every sip." →
"Imagine never forgetting to drink water again." →
"Tap the link. Your body will thank you."
PAS: Problem → Agitation → Solution
"80% of adults are chronically dehydrated." →
"It's killing your energy, skin, and focus." →
"HydraGlow. The bottle that won't let you forget."
HOOK-FIRST: You have 1.5 seconds on social.
Open with: Question. Statistic. Bold claim. Controversy.
Slide 12: EXERCISE — Module 1
1. Open ChatGPT
2. Use the pre-made prompt (next slide)
3. Generate: 1 caption, 1 ad, 1 video script
4. Read your best hook aloud to the group
PRE-MADE PROMPT:
"You're a direct-response copywriter. Write 3 pieces for HydraGlow,
a $49 smart water bottle that tracks hydration, glows to remind you
to drink, and keeps water cold 24 hours. Target: health-conscious
millennials. Deliver: (1) Instagram caption with hook, (2) Facebook
ad headline + 2-line description, (3) 15-sec video script shot-by-shot
with voiceover script."
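(Speaker note: the pre-made prompt can be templated so attendees can reuse it for any product. A minimal Python sketch — the function name is illustrative, the wording matches the prompt above:)

```python
def build_copy_prompt(name, price, features, audience):
    """Fill the workshop's copywriting prompt template for any product."""
    return (
        f"You're a direct-response copywriter. Write 3 pieces for {name}, "
        f"a ${price} product that {features}. Target: {audience}. "
        "Deliver: (1) Instagram caption with hook, (2) Facebook ad headline "
        "+ 2-line description, (3) 15-sec video script shot-by-shot "
        "with voiceover script."
    )

prompt = build_copy_prompt(
    "HydraGlow Smart Water Bottle", 49,
    "tracks hydration, glows to remind you to drink, and keeps water cold 24 hours",
    "health-conscious millennials",
)
print(prompt)
```

Swap the four arguments and the same structure works for every product case in this deck.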
MODULE 2: AI Image Creation (Slides 13–20)
Slide 13: Module 2 Title
MODULE 2
AI Image Creation
📸 Social Posts, UGC Photos, Storyboards
Slide 14: Prompt Anatomy
A good image prompt has 5 parts:
1. SUBJECT — What's in the frame?
"Cold brew bottle, marble surface, coffee beans"
2. STYLE — What kind of image?
"Flat lay product photography" / "Casual candid photo"
3. LIGHTING — How is it lit?
"Warm morning light through window" / "Studio softbox"
4. COMPOSITION — Camera framing
"1:1 square" / "16:9 cinematic" / "Overhead shot"
5. QUALITY TAGS — Technical descriptors
"Photorealistic, 4K, shallow depth of field"
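(Speaker note: the five parts assemble mechanically — a quick Python sketch, with example values taken from the slide; the function name is illustrative:)

```python
def build_image_prompt(subject, style, lighting, composition, quality):
    """Join the five prompt-anatomy parts into one comma-separated image prompt."""
    return ", ".join([subject, style, lighting, composition, quality])

p = build_image_prompt(
    "cold brew bottle on marble surface, coffee beans scattered",
    "flat lay product photography",
    "warm morning light through window",
    "1:1 square, overhead shot",
    "photorealistic, 4K, shallow depth of field",
)
print(p)
```

The point for attendees: a prompt is five slots, not one blob. Fill each slot deliberately.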
Slide 15: Your Case
BRAND: Mornings Coffee
PRODUCTS: Cold brew, single-origin beans
AESTHETIC: Minimalist, warm, design-conscious
AUDIENCE: Coffee lovers, 25–40
YOU NEED:
1. Social media post (Instagram carousel-worthy)
2. UGC-style photo ("real person" holding product)
3. Storyboard frame (cinematic, for video ad)
Slide 16: The 3 Image Types
SOCIAL POST:
"Flat lay product photography, Mornings cold brew bottle
on marble surface, coffee beans scattered, warm morning
light, minimalist, 1:1 square, Instagram aesthetic"
UGC PHOTO:
"Casual photo of person holding Mornings coffee bottle,
sitting at kitchen table, morning sunlight through window,
candid, unposed, phone camera quality, natural skin texture"
STORYBOARD:
"Cinematic wide shot, cold brew pouring into glass with
ice, condensation on bottle, golden hour backlight,
shallow depth of field, 16:9, commercial quality"
Slide 17: COMPETITION — Reverse Prompt Engineering
THE CHALLENGE:
I'll show you a reference image.
You write a prompt to recreate it.
Closest match wins.
3 rounds. Increasing difficulty.
WHY THIS WORKS:
→ Forces precision (every word counts)
→ Teaches prompt anatomy better than any lecture
→ No "I don't know what to make" — the target is right there
→ Competition = memory = retention
Slide 18: Competition — Round 1
[Show reference image: Simple product on white background]
ROUND 1: Easy
A single product. Clean background. Studio lighting.
What to nail:
- The product itself (color, shape, details)
- The background (color, texture)
- The lighting direction
Write your prompt. Submit via [shared doc / chat].
Slide 19: Competition — Round 2
[Show reference image: Product in lifestyle setting]
ROUND 2: Medium
Product in a scene. Specific lighting. Props.
What to nail:
- Everything from Round 1, PLUS
- The setting (room type, surfaces, time of day)
- The lighting quality (golden hour, window light, shadows)
- The composition (angle, framing, depth of field)
Write your prompt. Submit.
Slide 20: Competition — Round 3
[Show reference image: UGC-style with "imperfections"]
ROUND 3: Hard
"Real person" holding product. Unpolished look.
What to nail:
- Everything from Round 2, PLUS
- "Phone camera" aesthetic (not professional)
- Natural skin texture (not airbrushed)
- Candid framing (not posed)
- Imperfections (slight blur, natural light, unposed)
Write your prompt. Submit.
MODULE 3: AI Video Generation (Slides 21–27)
Slide 21: Module 3 Title
MODULE 3
AI Video Generation
🎥 Text → Image → Video
Slide 22: Three Ways to Generate Video
1. TEXT-TO-VIDEO
Prompt only → AI creates everything
Fastest. Least control.
2. IMAGE-TO-VIDEO
Upload keyframe → AI animates it
Most control. Needs good starting image.
3. START + END FRAME
Upload image A AND image B → AI fills the transition
Best for before/after, transformation, reveals.
Slide 23: Your Case
PRODUCT: PulsePods Wireless Earbuds
PRICE: $129
FEATURES: 36hr battery, water resistant, spatial audio,
active noise cancellation
YOU NEED:
1. Text-to-video clip (abstract product shot)
2. Image-to-video clip (UGC person using product)
3. Start+end frame clip (case → in-ear transition)
Slide 24: Clip 1 — Text-to-Video
TOOL: Kling AI or Runway
INPUT: Text prompt only
"Cinematic close-up of wireless earbuds floating in
space, pulsing sound waves visible as blue light,
dark background with subtle particles. Smooth rotation.
5 seconds."
WHY: Abstract product shots are hard to photograph
but AI generates them perfectly. This is the "hero shot."
Slide 25: Clip 2 — Image-to-Video
TOOL: Kling AI or Runway
INPUT: Your UGC photo from Module 2 + prompt
"Person puts earbuds in, expression shifts to delight
as music starts, natural head nod, candid moment.
Handheld camera feel. 5 seconds."
WHY: The UGC photo sets the character. The prompt
animates the reaction. Together = authentic demo.
Slide 26: Clip 3 — Start + End Frame
TOOL: Kling AI or Runway
INPUT: Image A (case closed) + Image B (in ears) + prompt
"Earbud case opens smoothly, earbuds float upward,
rotate and slide into ears. Smooth tech aesthetic.
Clean transition. 5 seconds."
WHY: Transformation is the most engaging video format.
Viewers stay to see the "after." Perfect for product demos.
Slide 27: Higgsfield — Camera Without Complexity
SAME PRODUCT. 9 DIFFERENT LOOKS.
Camera presets (pick from menu):
🔍 Slow zoom in ↔️ Pan across 🚁 Drone rise
Mood presets (click to apply):
🌅 Golden Hour 🌙 Neon Night ☁️ Moody Morning
Single photo → cinematic video. One click.
[Show demo: same product image, 3 camera presets × 3 moods]
MODULE 4: AI Audio (Slides 28–31)
Slide 28: Module 4 Title
MODULE 4
AI Audio Generation
🗣️ Voiceover + 🎵 Music = Brand
Slide 29: The Two-Brand Test
SAME COPY. DIFFERENT VOICE. DIFFERENT BRAND.
Product A: LuxeGlow Serum ($89, premium skincare)
Voice: [Warm, sophisticated, intimate]
Pace: Slow, deliberate
Music: "Luxury spa ambient, soft piano, gentle strings, 70bpm"
Product B: BeatBuds ($49, budget tech)
Voice: [Energetic, excited, casual]
Pace: Fast, punchy
Music: "Upbeat tech pop, punchy drums, synth bass, 120bpm"
Voice IS your brand signal.
Slide 30: EXERCISE — Voice Cloning
1. Record 60 seconds of your voice (phone, quiet room)
2. ElevenLabs → VoiceLab → Add Voice → Instant Clone
3. Upload recording → name your voice
4. Generate LuxeGlow VO:
"[Warm, sophisticated] Your skin deserves more than hope."
5. Generate BeatBuds VO:
"[Energetic, excited] 36 hours of battery. Zero excuses."
LISTEN: Same person's voice. Two completely different brands.
That's what emotion tags do.
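(Speaker note: for attendees who ask about scripting this, the same two-brand effect can be driven through ElevenLabs' REST API by varying voice settings. The endpoint path and field names below follow ElevenLabs' public API at the time of writing — verify against their current docs; the voice ID and API key are placeholders. This sketch only builds the request, it does not send it:)

```python
def build_tts_request(text, voice_id, stability=0.5, similarity_boost=0.75):
    """Assemble an ElevenLabs text-to-speech request payload (not sent here)."""
    return {
        "url": f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
        "headers": {"xi-api-key": "YOUR_API_KEY", "Content-Type": "application/json"},
        "json": {
            "text": text,
            "model_id": "eleven_multilingual_v2",
            "voice_settings": {"stability": stability, "similarity_boost": similarity_boost},
        },
    }

# Higher stability reads steadier (LuxeGlow); lower reads more expressive (BeatBuds).
luxe = build_tts_request("Your skin deserves more than hope.", "my-cloned-voice", stability=0.8)
beat = build_tts_request("36 hours of battery. Zero excuses.", "my-cloned-voice", stability=0.3)
```

Same cloned voice, two settings profiles — the code version of the two-brand test.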
Slide 31: EXERCISE — Music
SUNO — 2 tracks, 2 vibes:
Track 1 (LuxeGlow skincare):
"Luxury spa ambient, soft piano, gentle strings,
70bpm, C major, no drums, warm atmosphere"
Track 2 (BeatBuds tech):
"Upbeat tech pop, punchy drums, synth bass,
120bpm, E minor, energetic, short"
MUSIC = MOOD.
Match the music to the product, not your personal taste.
MODULE 5: Commercial & UGC (Slides 32–36)
Slide 32: Module 5 Title
MODULE 5
Commercial & UGC Assembly
💰 Sell the product. Two ways.
Slide 33: Two Formats, One Product
POLISHED AD (15 sec):
Cinematic, color-graded
Professional VO
Brand-matched music
Key features + price
CTA: "Available now. Link in bio."
ORGANIC UGC (30 sec):
Phone-selfie, natural
Conversational VO
Trending/lo-fi music
Personal testimony
CTA: "I paid for these. Honest review. Link below."
YOU NEED BOTH.
Polished = credibility. UGC = trust.
Slide 34: Assembly — Polished Ad
TIMELINE (CapCut or Resolve):
0:00-0:03 Clip 1 (text-to-video abstract shot)
VO: "36 hours of battery."
0:03-0:06 Clip 3 (start→end frame transition)
VO: "Noise cancellation that actually works."
0:06-0:09 Clip 2 (person using product)
VO: "PulsePods. Hear what matters."
0:09-0:12 Product shot + text overlay: "$129"
0:12-0:15 Logo + "Link in bio."
Music: BeatBuds tech track. Duck during VO.
Slide 35: Assembly — Organic UGC
TIMELINE (CapCut or Resolve):
0:00-0:05 "Phone selfie" intro — person talking to camera
VO: "Okay so I just got the PulsePods..."
0:05-0:10 Clip 2 (person using, natural reaction)
VO: "...and honestly the battery is insane."
0:10-0:18 Clip 1 (abstract, overlaid with casual commentary)
VO: "I've been using them for a week. No issues."
0:18-0:25 Clip 3 (transition, product close)
VO: "Not sponsored. I paid $129. Link below."
0:25-0:30 Text: "Honest review ↓" + CTA
Music: Trending lo-fi or no music (natural feel).
Slide 36: Affiliate CTAs That Convert
WHERE TO PUT IT:
→ Last 2-3 seconds of video
→ Peak attention (right after the reveal or testimonial)
WHAT TO SAY:
"Link in bio 👆"
"Comment 'LINK' and I'll DM you"
"Tap the link to shop"
"Check the description"
WHAT NOT TO SAY:
❌ "Buy now" (too aggressive)
❌ "Limited time" (unless true)
❌ CTA at the START (nobody's ready yet)
MODULE 6: ComfyUI (Slides 37–41)
Slide 37: Module 6 Title
MODULE 6
ComfyUI Workflow Builder
⚡ Automate everything you did today
Slide 38: Why Automation Matters
What you did today (manual):
Prompt → generate image → download → upload to video tool
→ generate video → download → upload to editor → repeat
What ComfyUI does:
Prompt → [Workflow loads model → generates image
→ passes to video node → generates video → saves]
ONE CLICK.
50 product images/week, manually: ~5 hours.
50 product images/week, with ComfyUI: ~10 minutes.
Slide 39: The Node Paradigm
[Show ComfyUI interface screenshot]
Each box = one AI operation.
Connect with wires = define the flow.
SIMPLE WORKFLOW:
[Load Model] → [Type Prompt] → [Generate Image] → [Save]
PRODUCT AUTOMATION:
[Load Model] → [Type Prompt] → [Generate Image]
→ [AnimateDiff] → [Generate Video] → [Save Video]
Build once. Run 1000 times.
Slide 40: EXERCISE — Build Your First Workflow
1. Open ComfyUI (demo station or your GPU laptop)
2. Load pre-built workflow: product-image.json
3. Change the prompt to YOUR product
4. Click "Queue Prompt"
5. Watch the pipeline execute
You just automated product image creation.
Next: Load product-video.json
→ Same concept, adds AnimateDiff node
→ Static image → animated video in one click
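(Speaker note: for the "change the prompt" step, show that it can also be done programmatically. ComfyUI workflows saved in API format are plain JSON: node IDs mapping to a class_type and its inputs. A minimal sketch — it assumes the workflow has one CLIPTextEncode node to patch; real files like product-image.json may have several, so match on node ID or title there. The example workflow dict below is invented for illustration:)

```python
import json

def patch_prompt_text(workflow: dict, new_text: str) -> dict:
    """Return a copy of an API-format ComfyUI workflow with the prompt text replaced."""
    patched = json.loads(json.dumps(workflow))  # cheap deep copy
    for node in patched.values():
        if node.get("class_type") == "CLIPTextEncode":
            node["inputs"]["text"] = new_text
    return patched

workflow = {
    "3": {"class_type": "CLIPTextEncode", "inputs": {"text": "old prompt", "clip": ["1", 0]}},
    "4": {"class_type": "KSampler", "inputs": {"seed": 42}},
}
patched = patch_prompt_text(workflow, "PulsePods earbuds, studio softbox, 1:1")
# Queueing is then one HTTP POST of {"prompt": patched} to the local
# ComfyUI server, conventionally http://127.0.0.1:8188/prompt.
```

This is the bridge from "click Queue Prompt" to the batch systems on the next slide.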
Slide 41: Where This Leads
What professionals are building:
→ Batch systems: 100 product images from CSV of product names
→ Template pipelines: Change 1 prompt → full video ad updates
→ Multi-platform: Same workflow, different aspect ratios
→ API integration: Connect to Shopify, schedule posts
Today: You learned to make 1 ad.
ComfyUI: You learned to make 1000.
Figma Weave = cloud/polished version of same concept.
But ComfyUI is free, local, unlimited. Start here.
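(Speaker note: the "100 product images from a CSV" idea above is a short loop. A minimal sketch — the column names and prompt template here are invented for illustration; each generated prompt would feed the ComfyUI workflow one at a time:)

```python
import csv
import io

def prompts_from_csv(csv_text: str, template: str) -> list[str]:
    """Turn a CSV of product rows into one image prompt per product."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [template.format(**row) for row in rows]

catalog = """name,surface
Cold Brew Bottle,marble
PulsePods Earbuds,walnut desk
"""
prompts = prompts_from_csv(
    catalog,
    "{name} on {surface}, studio softbox lighting, photorealistic, 1:1",
)
print(len(prompts))  # one prompt per catalog row
```

Build the template once, then the catalog drives everything — that is the "build once, run 1000 times" promise in practice.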
CLOSING (Slides 42–44)
Slide 42: What You Built Today
In 8 hours, you:
✍️ Wrote copy for a real product
📸 Created social posts + UGC photos + storyboards
🎥 Generated video 3 different ways
🗣️ Cloned your voice + scored your ad
💰 Assembled a polished ad AND an organic review
⚡ Built an automation workflow in ComfyUI
This portfolio didn't exist this morning.
Slide 43: Keep Building
Tools to master next:
ComfyUI → Deep automation, custom pipelines
Higgsfield → Camera presets, viral templates
ElevenLabs → Voice design, multi-voice projects
Runway → Professional video generation
Kling → Longer, more complex video
Concepts to explore:
→ Start/end frame narratives (transformation sells)
→ UGC authenticity (imperfection = trust)
→ Prompt precision (the competition proved this)
→ Affiliate content systems (build once, sell repeatedly)
Slide 44: Thank You
💰🤖
The tools are ready.
The cases are real.
Go make content that sells.
[Contact info]
[Link to all workshop docs + cheat sheet]
BONUS: Lip Sync (Slides B1–B2)
Slide B1: When Commerce Needs Lip Sync
👄 + 🔊 = 🎬
Only sync lips when:
✅ Lips are clearly visible
✅ Shot is close-up
✅ Voice is meant to come from that person
Skip lip sync when:
❌ Voiceover over product shots
❌ Wide shots (mouths invisible)
❌ Cutaways, B-roll, text-only
TOOLS:
Sync.so → Upload video + audio, get synced result
HeyGen → Talking-head avatar, type text, it speaks
Slide B2: Demo
1. Take your UGC video (person using product)
2. Take your VO from Module 4
3. Upload both to Sync.so
4. Download synced result
5. Compare: unsynced voiceover vs lip-synced
The difference matters for talking-head content.
For product B-roll? Nobody notices.
Deck Production Notes
Media to Prepare
- 6 viral content screenshots (X/Instagram)
- Higgsfield demo video (1 product, 3 camera angles × 3 moods)
- 3 competition reference images (easy, medium, hard)
- ComfyUI screenshot (simple workflow)
- CapCut/Resolve timeline screenshot (polished ad)
- Before/after: unsynced vs synced (bonus module)
- Final "Aurora" or "PulsePods" demo video
Speaker Notes
- Transitions between modules
- Competition rules (verbatim, on screen)
- Timing warnings (visible only to presenter)
- FAQ answers (see appendix)
Appendix: FAQ
Q: Will AI replace content creators? A: It replaces the repetitive parts of creation — generating options, resizing formats, trying variations. It doesn't replace taste, strategy, or knowing what will resonate with an audience. The creators who thrive are the ones who learn these tools.
Q: Can I use AI-generated content for client work? A: Yes — with paid tiers. Free tiers are for learning. Check each tool's commercial terms. Voice cloning requires consent.
Q: How realistic is UGC-style AI content? A: Very — if you prompt for imperfections. The key is asking for "phone camera quality," "natural skin texture," and "candid, unposed." Perfect AI output looks fake. Imperfect AI output looks real.
Q: Why ComfyUI and not just keep using Kling/Runway? A: Per-use costs add up at scale. ComfyUI = unlimited generation on your own GPU. For professional content creators producing volume, it pays for itself almost immediately.
Q: What's the most important skill to develop after this workshop? A: Taste. The AI generates. You select. The difference between good content and great content is the human choosing which of the 10 AI-generated options is actually worth publishing.