AI at GhostQuests
Our goal is simple: turn hours of paranormal TV and community submissions into structured, comparable evidence. The GhostQuests AI summarizes each episode, extracts and scores evidence, and then compares results across multiple shows filmed at the same location. This page explains exactly what that means and how it works.
See It Live: Villisca Axe Murder House
We’ve completed our first full AI rollout at the Villisca Axe Murder House. It’s our reference example for how per-episode AI summaries feed into a cross-show, per-location conclusion.
As of September 2025, AI location analysis is complete for Villisca. We are adding new episode analyses daily and releasing more cross-show location comparisons each week.
Per-Episode: AI Analysis & Episode Summary
Every covered episode receives a consistent, researcher-friendly breakdown. You’ll see:
- Detailed Episode Summary: A clear, time-ordered recap of the investigation and key moments.
- Evidence Summary: A concise list of captures (e.g., EVPs, REM-POD events, movement, thermal anomalies) with timestamps and context.
- AI Analysis: An assessment of evidential strength that checks for multi-device corroboration, controls/baselines, redundancy, and plausible alternative explanations.
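The per-episode breakdown above can be pictured as a simple record. This is a minimal sketch only; the class and field names are our illustration, not GhostQuests' actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceCapture:
    """One capture from an episode's Evidence Summary (illustrative fields)."""
    timestamp: str   # episode timecode, e.g. "00:42:17"
    kind: str        # e.g. "EVP", "REM-POD", "thermal anomaly"
    location: str    # room or area within the site
    context: str     # what was happening when it was captured

@dataclass
class EpisodeBreakdown:
    """Per-episode output: recap, evidence list, and AI analysis notes."""
    show: str
    episode: str
    summary: str
    evidence: list[EvidenceCapture] = field(default_factory=list)
    analysis_notes: list[str] = field(default_factory=list)
```

Keeping timestamps and context on every capture is what makes the later cross-show comparison possible: two teams' captures can only be matched if both are pinned to a place and a time.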
Per-Location: Cross-Show AI Comparison
After we analyze individual episodes, our AI compares results for any location that multiple teams have investigated:
- Best Evidence: The strongest, most corroborated captures across all shows at that site.
- Confirming Evidence: Patterns that repeat (e.g., same room, same device type, similar timing) across different teams and seasons.
- Conflicting Evidence: Findings that disagree or fail to replicate—plus likely reasons (environment, methodology, equipment, contamination).
- Judgment: A rolling verdict—Likely Haunted, Inconclusive, or Unlikely—that updates as new data arrives.
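One way a rolling verdict like this could be computed is by aggregating the per-capture scores described in the next section. The thresholds and function below are purely illustrative, an assumption for the sketch, not GhostQuests' published weighting:

```python
def rolling_verdict(scores):
    """Map a location's per-capture scores (each 0-10) to a verdict.

    Illustrative aggregation only: counts strong (>= 8) and weak (<= 3)
    captures and compares them. Re-running it as new scores arrive makes
    the verdict "rolling".
    """
    if not scores:
        return "Inconclusive"
    strong = sum(1 for s in scores if s >= 8)
    weak = sum(1 for s in scores if s <= 3)
    if strong >= 2 and strong > weak:
        return "Likely Haunted"
    if strong == 0 and weak > len(scores) / 2:
        return "Unlikely"
    return "Inconclusive"
```

For example, two corroborated strong captures outweigh a handful of middling ones, while a site with mostly weak captures and nothing strong trends toward Unlikely.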
How We Score Evidence
To keep things consistent, each capture is scored on a 0–10 rubric built from five criteria, each worth 0–2 points:
- Multi-Modality: Video, audio, and/or sensors agree at the same moment.
- Redundancy: Independent devices or angles record the same event.
- Controls & Baselines: Investigation space secured; pre/post baseline checks; noise/RF logs; time-synced devices.
- Specificity/Intelligence: Clear, relevant responses or behaviors over random noise.
- Alt-Explanations: Natural/technical causes actively considered and ruled out.
8–10 = Strong | 4–7 = Inconclusive | 0–3 = Weak. Scores feed episode summaries and the location-level judgment.
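The rubric arithmetic is straightforward to sketch. The function names here are ours, not an official GhostQuests API; the numbers follow the rubric above:

```python
def score_capture(multi_modality, redundancy, controls, specificity, alt_explanations):
    """Sum the five 0-2 criteria into a 0-10 evidence score."""
    criteria = (multi_modality, redundancy, controls, specificity, alt_explanations)
    if any(not 0 <= c <= 2 for c in criteria):
        raise ValueError("each criterion is scored 0-2")
    return sum(criteria)

def score_band(score):
    """Map a 0-10 score to the rubric's bands: Strong / Inconclusive / Weak."""
    if score >= 8:
        return "Strong"
    if score >= 4:
        return "Inconclusive"
    return "Weak"
```

So a capture with full marks for multi-modality, redundancy, and specificity but partial marks for controls and alt-explanations (2 + 2 + 1 + 2 + 1) scores 8 and lands in the Strong band.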
Data Sources & Safeguards
- TV Episodes: Time-aligned to segment markers where available; otherwise manual timestamping.
- Community Evidence: Photos, audio, video, and logs submitted by users (coming online in phases).
- Quality Controls: Duplicate device checks, RF scans (when provided), and reviewer notes.
- Transparency: Every claim links back to a source (episode/timecode or user upload ID).
- Privacy: User submissions are opt-in, watermarked, and moderated before inclusion.
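The transparency rule above, that every claim must trace back to an episode timecode or an upload ID, can be expressed as a small provenance record. This is a hypothetical sketch; the field names are assumptions, not the live schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class SourceRef:
    """Provenance link for a claim: an episode/timecode pair or an upload ID.

    Illustrative only. Rejects claims with no traceable source, mirroring
    the rule that every claim links back to its evidence.
    """
    episode: Optional[str] = None    # e.g. a show and episode label
    timecode: Optional[str] = None   # e.g. "00:42:17"
    upload_id: Optional[str] = None  # community submission identifier

    def __post_init__(self):
        has_episode = self.episode is not None and self.timecode is not None
        has_upload = self.upload_id is not None
        if not (has_episode or has_upload):
            raise ValueError("a claim needs an episode/timecode or an upload ID")
```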
Roadmap
- Automated timelines and heatmaps by room/device.
- Device reliability weighting across locations and shows.
- Cross-location pattern mining (e.g., EMF profiles, time-of-night clusters).
- Community evidence ingestion directly into AI comparisons.
Join SPIRIT: Society for Paranormal Investigation, Research & Information Tracking
Help us scale the dataset and keep analyses rigorous. Join the Society to assist with research, verify timestamps, and upload your own evidence to be included in future AI comparisons.
Questions or partnerships? Contact us.
Last updated: September 2025