
AI Scriptwriting While You Sleep for Faceless Channels
Brief an LLM with guardrails, run overnight, and wake up to clean drafts to polish.
You don’t need to stare at a blinking cursor at midnight anymore. With a clear plan and a few well‑chosen tools, you can brief an AI, go to bed, and wake up to hooks, outlines, and full scripts ready to polish. This guide shows you how to set that up—without getting overly technical. You’ll learn where AI shines, where humans still do better, how to choose and install a local model, how to steer it with guided prompts (no heavy “fine‑tuning” yet), and how to run an overnight script pipeline that hands you clean drafts in the morning.

Why use an LLM for scripting (and where humans still win)
Speed & structure. LLMs are tireless first‑draft machines. They can spit out 20 hook options, a clean 6‑ to 8‑beat outline, and a readable script in minutes.
Consistency. Give the model a simple template—title, hook, sections, CTA—and it will stick to it.
Brainstorming power. “Show me 10 ways to tell this story,” “write 5 counterintuitive openings.” It explores; you decide.
Where humans still win
- Taste, voice, and brand. Knowing what feels right for your audience.
- Originality. Fresh examples, personal anecdotes, unusual structures.
- Fact‑checking. Names, dates, numbers, and claims must be verified.
- Ethics & context. Sensitive topics need human judgment.

Healthy stance: let AI draft; let humans decide.
Two paths: web tools vs. local model
A) Web tools (ChatGPT/Claude/Gemini). Fast, zero setup. Great way to learn your prompts and shapes.
B) Local model (runs on your computer via Ollama). Private, predictable cost, offline after setup—perfect for overnight batches.
Beginner‑friendly local setup (Ollama)
- Install Ollama (Mac/Windows/Linux).
- Pull a small model to start; step up later if needed.
ollama pull deepseek-r1:7b
ollama run deepseek-r1:7b
Type: “Write five YouTube hooks about how to save money on groceries.” You should see usable ideas.
CPU vs GPU (plain talk)
CPU works everywhere (slower). GPU is faster if you have enough VRAM. If your laptop wheezes, use a smaller model first—speed beats size at the start.
“Training” without fine‑tuning: guided prompting
The model doesn’t learn your style permanently. It follows instructions you provide each time (your “voice card” + examples) while the session is open. That’s guided prompting.
Fine‑tuning (feeding lots of examples to change the model itself) is powerful but overkill for most creators. Nail guided prompts first.
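In practice, guided prompting is just string assembly: paste your voice card and the job brief ahead of every task, on every call. A minimal sketch, assuming the config/ and jobs/ file names from the folder layout later in this guide (build_prompt is a hypothetical helper, not part of any library):

# Sketch of guided prompting: the voice card and job brief are re-attached to
# every request, so the model is steered per call rather than retrained.
from pathlib import Path

def build_prompt(task: str) -> str:
    # File names follow the folder layout shown later in this guide.
    voice_card = Path("config/voice_card.txt").read_text(encoding="utf-8")
    brief = Path("jobs/2025-09-14_budget-mistakes.yaml").read_text(encoding="utf-8")
    return f"{voice_card}\n\nJOB BRIEF:\n{brief}\n\nTASK:\n{task}"

prompt = build_prompt("Write 20 YouTube hooks for the topic in the job brief.")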
Create your Voice & Format Card (paste into every job)
Voice (how it should sound)
Role: friendly expert narrator for YouTube explainers.
Audience: beginners; no jargon; define acronyms on first use.
Tone: warm, clear, slightly witty; never snarky.
Pacing: short sentences; one idea per line.
Reading level: Grade 7–8.
Must‑haves (every script includes)
- Hook in 1–2 lines promising a benefit or curiosity.
- 6–8 sections, each with 2–3 beats (fact, example, action tip).
- One on‑screen text cue per section in brackets, e.g., [TEXT: 3 rules to start].
- One CTA line.
Never include
Unverifiable claims, medical/legal advice, private info, or hype words (ban: “game‑changer,” “mind‑blowing,” “ultimate hack”).
Format template (the model must follow)
Title:
Hook (≤2 lines):
Sections:
1) [Heading]
- [Beat 1: fact/example]
- [Beat 2: tip/action]
- [Beat 3: optional stat/analogy]
2) ...
CTA (1 line):
Notes for editor (assets, captions):

Folder layout you can reuse
/ai-scripts
  /config
    voice_card.txt
    banned_phrases.txt
  /jobs
    2025-09-14_budget-mistakes.yaml
  /out
    (generated files go here by date/slug)
Job file (YAML) example
topic: 5 beginner mistakes in personal budgeting (and easy fixes)
audience: 18–34, first job, no jargon
length: 6–8 minutes (≈900–1200 words)
tone: warm, practical, slightly witty
must_include: one example per section; define acronyms; [TEXT:] cues
avoid: hype words; scare tactics; shaming
sources:
- simple 50/30/20 explainer
- 2024 consumer spending stat
deliverables: outline.json, script.txt, script.json, notes_for_editor.txt
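A small sketch of how the night run might load this job file and prepare its dated output folder. It assumes the PyYAML package is installed and derives the slug from the file name:

# Sketch: load one job file and create its dated output folder (assumes PyYAML).
from datetime import date
from pathlib import Path
import yaml

job_path = Path("jobs/2025-09-14_budget-mistakes.yaml")
job = yaml.safe_load(job_path.read_text(encoding="utf-8"))

slug = job_path.stem.split("_", 1)[-1]              # "budget-mistakes"
out_dir = Path("out") / date.today().isoformat() / slug
out_dir.mkdir(parents=True, exist_ok=True)

print(job["topic"], "→", out_dir)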
The overnight pipeline (bird’s‑eye view)
- Brief → the job file + your Voice Card.
- Hooks → generate 20, auto‑score for clarity/length, pick the best (a scoring sketch follows this list).
- Outline → 6–8 headings; validate count and uniqueness.
- Sections → expand one at a time with strict word windows and required elements.
- Assemble → title + hook + sections + CTA.
- Checks → banned words, sentence length, total words.
- Outputs → outline.json, script.txt, script.json, notes_for_editor.txt, run.log.
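The auto‑score step can be very simple. Here is one possible heuristic, not the only way: reward brevity and a concrete number, reject hype words.

# One possible hook-scoring heuristic: reward brevity and concrete numbers,
# disqualify hype words. Adjust the weights to taste.
import re

BANNED = {"game-changer", "mind-blowing", "ultimate hack"}

def score_hook(hook: str) -> float:
    if any(b in hook.lower() for b in BANNED):
        return -1.0
    score = 1.0 if len(hook) <= 70 else 0.0
    if re.search(r"\d", hook):                 # a number implies a concrete promise
        score += 1.0
    score += max(0, 70 - len(hook)) / 100.0    # small bonus for brevity
    return score

def pick_best(hooks: list[str]) -> str:
    return max(hooks, key=score_hook)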

Prompts you can copy
Hook list (Shorts or long‑form)
Using the Voice & Format Card below, write 20 YouTube hooks for the topic.
Rules: each ≤ 70 characters, no hype words, name a concrete result or timeframe.
Return just the list, numbered 1–20.
Outline (7 sections)
Using this chosen hook: "<best hook>"
Create a 7-section outline with concise, descriptive headings (≤ 60 chars).
No duplicates. No fluff sections. Return JSON: { "sections": ["...", "..."] }
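Models occasionally wrap the JSON in extra prose, so it pays to parse defensively and reject a bad outline before expanding sections. A minimal validation sketch (the section count of 7 mirrors the prompt above):

# Sketch: pull the JSON object out of the model's reply and validate the outline.
import json, re

def parse_outline(reply: str, want: int = 7) -> list[str]:
    match = re.search(r"\{.*\}", reply, re.DOTALL)   # tolerate stray prose around the JSON
    if not match:
        raise ValueError("no JSON object found in reply")
    sections = json.loads(match.group(0))["sections"]
    if len(sections) != want:
        raise ValueError(f"expected {want} sections, got {len(sections)}")
    if len(set(s.lower() for s in sections)) != len(sections):
        raise ValueError("duplicate headings")
    if any(len(s) > 60 for s in sections):
        raise ValueError("heading over 60 characters")
    return sections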
Section expansion (loop this per section)
You are a friendly expert narrator. Short sentences. Define acronyms.
Hook: <best hook>
Write Section <n>: "<heading>"
- ≤150 words
- Include one real-world example; one action tip; one [TEXT:] cue
- End with a 1-sentence bridge to the next section
Return only narration lines; no commentary.
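Looped per heading, the expansion step could look like the sketch below. The word window is checked after each call so an over‑long section can simply be regenerated; the ask argument is any prompt‑to‑text callable, such as the helper in the Python flow further down.

# Sketch: expand one section at a time and re-ask if the word window is blown.
SECTION_PROMPT = """You are a friendly expert narrator. Short sentences. Define acronyms.
Hook: {hook}
Write Section {n}: "{heading}"
- ≤150 words
- Include one real-world example; one action tip; one [TEXT:] cue
- End with a 1-sentence bridge to the next section
Return only narration lines; no commentary."""

def expand_sections(ask, hook, headings, max_words=150, retries=2):
    # `ask` is any prompt → text callable, e.g. the helper in the Python flow below.
    sections = []
    for n, heading in enumerate(headings, start=1):
        prompt = SECTION_PROMPT.format(hook=hook, n=n, heading=heading)
        for _ in range(retries + 1):
            text = ask(prompt)
            if len(text.split()) <= max_words:
                break
        sections.append(text.strip())
    return sections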
Final tightening
Cut 10–15% by removing filler. Keep all examples and action tips.
Replace any generic claims with concrete phrasing.
Simple JSON you’ll reuse
Beat‑sheet style (handy for captions/b‑roll generation)
{
  "title": "string",
  "hook": "string",
  "sections": [
    { "heading": "string", "beats": ["string", "string", "string"] }
  ],
  "cta": "string"
}
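Since a later guardrail is "reject outputs that miss required keys," a tiny key check against this shape can run right after generation. A sketch:

# Sketch: reject a beat-sheet that is missing required keys or has empty sections.
REQUIRED_KEYS = {"title", "hook", "sections", "cta"}

def validate_beat_sheet(data: dict) -> list[str]:
    problems = [f"missing key: {k}" for k in REQUIRED_KEYS - data.keys()]
    for i, sec in enumerate(data.get("sections", []), start=1):
        if not sec.get("heading"):
            problems.append(f"section {i}: empty heading")
        if not sec.get("beats"):
            problems.append(f"section {i}: no beats")
    return problems          # empty list means the draft passes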
A tiny Python flow (readable pseudo‑code)
# Local model endpoint (Ollama)
import requests

BASE = "http://localhost:11434/api/generate"
MODEL = "deepseek-r1:7b"

def ask(prompt, max_tokens=800):
    # Send one prompt to the local model and return the generated text.
    payload = {"model": MODEL, "prompt": prompt, "stream": False,
               "options": {"num_predict": max_tokens}}
    r = requests.post(BASE, json=payload, timeout=600)
    r.raise_for_status()
    return r.json()["response"]

# 1) Load voice card + job brief
# 2) Ask for 20 hooks → pick best by simple score (length + specificity)
# 3) Ask for outline → validate 6–8 unique headings
# 4) For each heading, ask for ≤150 words, with example + action + [TEXT:]
# 5) Stitch full script → run banned-phrases + sentence-length checks
# 6) Save script.txt, script.json, outline.json, notes_for_editor.txt, run.log
Tip: keep all prompts and responses for reproducibility. If a draft hits, you can redo it with the same inputs months later.
Length targets (so VO fits)
- Shorts (60–70s): ~150–170 words total.
- 3–4 minutes: ~500–700 words.
- 6–8 minutes: ~900–1,200 words.
- 10 minutes: ~1,400–1,700 words.
These are ballparks; pacing and pauses matter. Use a final word‑count check.
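One way to run that final check is to estimate speaking time from the word count, assuming roughly 150 spoken words per minute for a relaxed VO pace (adjust for your narrator):

# Sketch: rough duration estimate, assuming ~150 spoken words per minute.
def estimate_minutes(script: str, words_per_minute: int = 150) -> float:
    return len(script.split()) / words_per_minute

# e.g. a 1,050-word draft → ~7 minutes, inside the 6–8 minute target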
Quality guardrails (copy/paste)
Automatic
- Reject banned words (keep a short list).
- Flag lines > 28 words.
- Ensure each section includes an example and an action tip.
- CTA present; one sentence.
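A minimal sketch of those automatic checks, assuming the banned phrases live one per line in config/banned_phrases.txt and that the assembled draft keeps the "CTA" label from the format template:

# Sketch of the automatic checks: banned phrases, over-long lines, [TEXT:] cues, CTA label.
from pathlib import Path
import re

def run_checks(script: str) -> list[str]:
    issues = []
    banned = [p.strip().lower()
              for p in Path("config/banned_phrases.txt").read_text(encoding="utf-8").splitlines()
              if p.strip()]
    lower = script.lower()
    issues += [f"banned phrase: {p}" for p in banned if p in lower]
    for i, line in enumerate(script.splitlines(), start=1):
        if len(line.split()) > 28:
            issues.append(f"line {i} over 28 words")
    if "[TEXT:" not in script:
        issues.append("no [TEXT:] cues found")
    if not re.search(r"^CTA\b", script, re.MULTILINE | re.IGNORECASE):
        issues.append("CTA line missing")   # assumes the CTA label survives assembly
    return issues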
Manual (3‑minute skim)
- Does the hook make a clear promise?
- Does Section 1 begin to pay that promise fast?
- Any claims without sources? Mark them [CHECK].
- Any clichés or generic lines? Replace with a concrete example.
- Smooth flow? Add one signpost line if needed.

Common issues & quick fixes
Repetition & clichés. Enforce “one new idea per beat”; maintain a banned phrase list.
Hollow claims. Require either a concrete example or a source for assertions.
AI tone. Specify reading level; remove intensifiers (“really, very”); include a short style sample to mimic.
Drift from structure. Generate one section at a time; restate hook and purpose in each prompt.
Durations off. Put a word window per section; do a final total check; then “tighten 15%.”
Generic hooks. Ask for 20 and require a concrete noun or number in each.
Hallucinated details. Mark [CHECK] and verify in the morning; keep links in notes.
Messy formatting. Enforce your template. Reject outputs that miss required keys.
Scheduling the overnight run (lightweight)
macOS (LaunchAgents or cron): schedule a command that runs your script in the project folder.
Windows (Task Scheduler): create a basic task → “Start a program” → point to Python and your script; set “Run whether user is logged on or not.”
Keep logs in out/<date>/<slug>/run.log and review them over coffee.
Practical tip: keep the model “warm” by running a tiny ping at the top of the job (first call is often slower).
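Using the ask() helper from the Python flow above, the warm‑up can be a single throwaway call at the start of the run:

# Sketch: a throwaway call loads the model into memory so the first real request is fast.
ask("Reply with the single word: ready.", max_tokens=5)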
What the morning looks like
/out/2025-09-14/budget-mistakes/
  outline.json
  script.json
  script.txt
  notes_for_editor.txt
  run.log
Open script.txt and skim the hook plus the first two sections. If it sings, move to edit. If not, tweak the job file and rerun while you prep visuals.

Editor‑friendly extras
- Notes for editor: list 5–8 b‑roll cues and on‑screen text pulls per section.
- Captions: use your script.json beats to generate SRT later (see the sketch below).
- Thumbnail prompts: ask for 5 thumbnail lines derived from the hook (store in thumb_prompts.txt).
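Captions can come straight from the beat‑sheet. A rough sketch that gives every beat a fixed duration; real timings should come from the finished voiceover, and the file path is just the morning example above:

# Sketch: naive SRT from script.json beats, giving every beat a fixed duration.
# Real timings should come from the finished voiceover; this is only a starting point.
import json
from pathlib import Path

def srt_timestamp(seconds: float) -> str:
    ms = int(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def beats_to_srt(beat_sheet: dict, seconds_per_beat: float = 4.0) -> str:
    lines, t, idx = [], 0.0, 1
    for section in beat_sheet["sections"]:
        for beat in section["beats"]:
            start, end = t, t + seconds_per_beat
            lines += [str(idx), f"{srt_timestamp(start)} --> {srt_timestamp(end)}", beat, ""]
            idx, t = idx + 1, end
    return "\n".join(lines)

srt = beats_to_srt(json.loads(Path("out/2025-09-14/budget-mistakes/script.json").read_text()))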
Summary
You defined a voice and template, set up a simple local model, and built a boring, reliable night‑run: brief → hooks → outline → sections → checks → outputs. In the morning, you review and ship. Keep prompts tight, reuse your Voice & Format Card, and log everything for repeatability. That’s how you script while you sleep—and keep your faceless channel consistent, fast, and on‑brand.