Last updated: May 3, 2026
This page exists because most AI video reviews on the open web are repackaged press releases. We've tried to do something different: be specific about what we have and haven't done with each tool. If a review on this site says we tested HeyGen Avatar V on a real client video, we did. If it says a feature is based on vendor documentation, that's exactly what it means.
This document covers the scoring criteria we use, what scope labels mean in practice, the dated test log by tool, how often reviews are refreshed, and how affiliate revenue interacts with editorial choices. If something on this site doesn't match what you find when you sign up for a tool, email [email protected] and we'll fix it.
Reviews on aivideopicks.com are written and edited by Tom Tran, a Sydney-based business and data analyst with 8+ years' experience across banking, edtech, logistics, and CRM (TPBank, AFS, Trames, CMC Global on Starbucks/Highland Coffee accounts). Tom holds a Master of ICT (Cloud) from Western Sydney University and a Master of Business from the University of Plymouth (UK). The site is a one-person operation — there is no editorial team behind a stock photo.
The relevant skill for this site is software evaluation: cost per output, where the tool breaks under real load, whether it actually fits a workflow. AI video tools get judged the same way Tom would evaluate any other piece of business software in a procurement decision.
Every review on this site uses the same six-criterion framework, scored 1–10 each, weighted to produce the headline rating you see at the top of every review (e.g. "8.5 / 10"). Weights add up to 100%.
**Output quality.** Avatar realism, video resolution, lip-sync accuracy, voice naturalness, motion stability. The thing the customer actually receives.

**Use-case fit.** Does this tool actually solve the job a buyer is trying to do? A great avatar tool that can't export to your LMS scores low here.

**Pricing transparency.** Public pricing, no hidden credit traps, watermark policy on free tiers, fair annual-vs-monthly spread. Hidden costs lose points.

**Ease of use.** First-30-minutes friction, signup flow, empty-state UX, time to first usable output. Rated from a non-power-user perspective.

**Support and trust.** Response time on real tickets, billing transparency, account recovery, public Trustpilot/G2 patterns. Vendor lock-in counts against a tool here.

**Shipping velocity.** How fast the vendor ships meaningful improvements (model versions, language coverage, integrations) without breaking existing workflows.
Headline ratings are rounded to one decimal place. We do not award 10/10 — nothing in this category is perfect, and a perfect score would imply we'd run out of room to upgrade later when the next model lands.
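The arithmetic behind a headline rating is a plain weighted average of the six criterion scores, rounded to one decimal. A minimal sketch of that calculation follows; the weights and sub-scores here are illustrative placeholders, not the site's actual published weights:

```python
# Sketch of the weighted headline-rating calculation described above.
# Criterion names mirror the six on this page; the weights and scores
# below are ASSUMED for illustration only.

def headline_rating(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of 1-10 criterion scores, rounded to one decimal."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    raw = sum(scores[c] * weights[c] for c in weights)
    return round(raw, 1)

# Hypothetical weights summing to 100%
weights = {
    "output_quality": 0.25,
    "use_case_fit": 0.20,
    "pricing_transparency": 0.15,
    "ease_of_use": 0.15,
    "support_trust": 0.15,
    "shipping_velocity": 0.10,
}
# Hypothetical 1-10 sub-scores for one tool
scores = {
    "output_quality": 9,
    "use_case_fit": 8,
    "pricing_transparency": 7,
    "ease_of_use": 9,
    "support_trust": 7,
    "shipping_velocity": 8,
}
print(headline_rating(scores, weights))  # 8.1
```

With these assumed numbers the tool would carry an "8.1 / 10" headline; a different weighting scheme would shift the result accordingly.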
This is the part most affiliate sites skip. Three honest categories cover every tool on this site:
| Scope label | What it means | How we mark it |
|---|---|---|
| Hands-on, paid plan | We have an active paid subscription on the tool and use it on real client work or this site's own production. Reviews include first-hand observations. | Phrasing like "in our testing", "we ran X on Y" with the relevant month marker. Dated test log entry below. |
| Trial / free-tier sample | We have used the trial or free tier hands-on but have not maintained a paid subscription long enough to ship paid client work with it. | Phrasing like "we tested on the free plan", "trial behaviour suggests". Limits are flagged explicitly. |
| Vendor docs only | We do not have current access. The review pulls verifiable facts from the vendor's published pricing, product, and technical documentation, dated to the month of the source pull. | Phrasing like "based on vendor documentation as of {month YYYY}". No invented test observations, no fake screenshots. |
Most reviews on this site fall into one of these three buckets cleanly. A few are mixed — e.g. we paid for Tool X for three months but cancelled when our use case changed, so the review is "hands-on, paid plan (until {month})" plus "vendor docs for changes since". Mixed-scope reviews say so up front.
This is the load-bearing part of this page. It's a sparse, honest record of what we actually did with each tool and when. It will grow as we publish more reviews and revisit existing ones. If a tool you care about isn't here yet, the review for it is either (a) freshly published and the log entry is being added, or (b) flagged as vendor-docs-only on the review page itself.
| Date | Tool | Scope | What we tested / verified |
|---|---|---|---|
| 2026-04-08–ongoing | HeyGen | Hands-on, paid plan | Avatar V (launched 2026-04-08): 15-second avatar setup, 0.840 face similarity claim verified on HeyGen's own benchmark page, lip-sync tested on a 90-second product walkthrough script and a 3-minute training segment in English and Vietnamese. Creator plan ($24/mo annual) used for ongoing site work. Trustpilot 2.3/5 pattern noted — matches our experience: product is excellent, billing/support raise legitimate caution flags. |
| 2026-04-12–2026-04-30 | HeyGen Avatar V (deal page) | Hands-on, paid plan | April 2026 creator bonus program tracked through to its 2026-04-30 close. Bonus tier thresholds (YouTube 10K–250K views, TikTok/Reels/Shorts 100K–2M views) verified against HeyGen's published creator agreement at the time. Page now reflects the bonus program ended. |
| 2026-04-17–ongoing | Submagic | Hands-on, paid plan | Caption auto-generation tested on 4 short-form clips (TikTok and YouTube Shorts dimensions), Hormozi/MrBeast/Ali Abdaal preset comparisons, B-roll auto-suggestion accuracy, silence-removal pass quality. Active paid subscription used for this site's own short-form output. |
| 2026-04-08–ongoing | Movavi Video Editor Plus 2026 | Hands-on, paid plan | Yearly plan at $54.95 tested on a Windows desktop install, AI background removal on three different subject types, motion tracking on a horizontal pan, 4K H.264 export benchmarked. Annual code PTNAFFDIS010426ALLAFS15 verified at checkout (15% off, expires 2026-05-15). |
| 2026-04-17–ongoing | ElevenLabs | Hands-on, paid plan | Voice cloning tested on a 30-second sample of editor's own voice (English), v3 model output compared against Murf and Descript Overdub, Studio long-form generation tested on a 4-minute script with a single voice and with a multi-speaker dialogue. Used as the ongoing voiceover engine for this site's tutorial videos. |
| 2026-04-09–ongoing | Pictory | Hands-on, paid plan | Standard plan ($23/mo) tested on blog-to-video repurposing of three existing posts on this site. Stock footage match quality, auto-caption accuracy, and voiceover voice library spot-checked. Pictory affiliate slug confirmed working as ?ref=aivideopicks at this time. |
| 2026-04-23 (rejected) · ongoing | Descript | Hands-on, paid plan | Edit-by-transcript workflow on a 22-minute interview, filler-word removal accuracy, Studio Sound effect on noisy laptop-mic audio, Overdub voice cloning. Affiliate application rejected 2026-04-23 (low traffic per Descript's stated bar at the time); review remains hands-on regardless — we still pay for the Creator plan and use it. Affiliate parameter has been stripped from links until/unless re-approved. |
| 2026-04-29 · ongoing | Google Vids (Veo 3.1) | Trial / free-tier sample | 10 free Veo 3.1 clips/month confirmed on a fresh Workspace account. Clip quality on three prompt types (product demo, talking-head substitute, b-roll filler) compared against Sora export and Kling 3.0 output captured pre-shutdown. |
| 2026-04-26 · one-off | Sora — export & shutdown | Trial / free-tier sample | Sora consumer export (sora.chatgpt.com/exports/me) used to download our own assets before the 2026-04-26 shutdown. Export ZIP structure (videos, prompts, metadata) documented in the migration post. Sora API timeline (live until 2026-09-24) verified against OpenAI's published deprecation notice. |
| 2026-05-03 | Vidnoz | Trial / free-tier sample | Free tier (60 credits/day, 1,800+ avatars, 720p watermarked output) tested on three short avatar clips. Affiliate program status confirmed active via Vidnoz's affiliate-solutions page (50–70% commission tiers, Post Affiliate Pro platform). Pricing claims ($14.99/mo Starter) verified against the live pricing page at this date. |
| 2026-04-25 | Synthesia | Vendor docs only | Pricing tiers ($22/mo Starter on annual, 230+ avatars, 140+ languages, SOC 2 Type II, SCORM export) pulled from Synthesia's published plan page and security/compliance pages. No active paid subscription at this time — review explicitly says so. |
| 2026-04-25 | Runway Gen-4.5 | Trial / free-tier sample | 125-credit free allowance tested on three Gen-4.5 prompts at 1080p. Generation length cap (up to 40 sec on the free pool) and credit burn rate matched Runway's published table. |
| 2026-04-25 | Fliki | Trial / free-tier sample | Free 5 min/month plan tested on a blog-to-video conversion of one site post. Voice library breadth across 75+ languages spot-checked on English (US), Vietnamese, French, and Spanish. Standard pricing ($21/mo annual) verified at this date. |
Tools not listed above (Colossyan, Murf, DeepBrain, D-ID, Elai, HourOne, Kling, Hailuo, Luma, HappyHorse, Seedance, Veed, Kapwing, CapCut, Opus Clip, Zebracat, Arcads, MakeUGC, InVideo, Writesonic, Jasper, Copy.ai, and others covered on this site) are currently Vendor docs only — reviews lean on the vendor's published pricing and feature pages plus public benchmarks (Artificial Analysis Elo for video generators, G2 and Trustpilot rating patterns for SaaS). The review page for each one labels its scope inline. New hands-on entries will be added here as we cycle them onto the active subscription roster.
AI video tools change quietly: pricing shifts, plan limits move, model versions ship without changelogs. To keep reviews from rotting, we re-check and re-date them on an ongoing basis. Every post on this site shows its last-updated date in the body and in the `dateModified` field of its JSON-LD schema block, and `dateModified` bumps only when something material changes — not on cosmetic edits. As a general rule for this category, information older than ~90 days should be verified against the vendor's own page before you commit to a paid plan or annual contract — including on this site.
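For readers unfamiliar with JSON-LD: the relevant fields live in a small `<script type="application/ld+json">` block in each post's HTML. A sketch of the shape, with illustrative values rather than data pulled from a live page:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example review title",
  "datePublished": "2026-04-08",
  "dateModified": "2026-05-03"
}
```

When a review gets a material update (pricing change, new model version), `dateModified` moves; cosmetic edits leave it alone.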
Many links on this site are affiliate links. If you sign up for a tool through one and become a paying customer, we receive a commission — typically a one-off bounty or a small recurring share — at no extra cost to you. That money pays for the tool subscriptions used to research the reviews above and keeps the site running.
The honest version: affiliate revenue does not change the rating a tool receives. We have written negative observations about tools we earn from (HeyGen Trustpilot pattern, Vidnoz support concerns, Runway credit burn rate) and we have continued to recommend tools that pay no commission at all (D-ID, Kapwing, CapCut where the use case fits). Where we don't have an affiliate relationship, we say so. Where a program is dead or we were rejected (Murf 2026-04-13, Colossyan declined, Descript 2026-04-23), we strip the affiliate parameters and re-route to a current alternative or leave the link clean.
Full program-by-program detail is in the Affiliate Disclosure. Editorial principles and the broader site policy live in the Terms of Use and the editorial process section of About AI Video Picks.
If a claim on this site doesn't match what a tool actually does today — pricing has moved, a feature was removed, a benchmark was misread — email [email protected]. We aim to fix factual errors within a few business days and will note material corrections in the affected post.
For related documents see also our Affiliate Disclosure, Privacy Policy, and Terms of Use.