Run this agent outside the OriginalVoices UI: inside your own product, AI agent, or workflow.
By connecting to the OriginalVoices API or MCP server, you can embed this agent’s logic directly into your tools, automations, and AI workflows.
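If you are connecting over MCP from a tool such as Cursor, the connection usually amounts to adding a server entry to the client's config file. The Python sketch below illustrates that step; the server URL, auth header, and config path are placeholders, not values from the OriginalVoices documentation, so substitute the connection details from your own account.

```python
import json
from pathlib import Path

# Hypothetical sketch: the server URL, auth header, and config path are
# placeholders; substitute the values from your OriginalVoices account.
CONFIG_PATH = Path.home() / ".cursor" / "mcp.json"

config = json.loads(CONFIG_PATH.read_text()) if CONFIG_PATH.exists() else {}
config.setdefault("mcpServers", {})["originalvoices"] = {
    "url": "https://mcp.originalvoices.example/sse",       # placeholder endpoint
    "headers": {"Authorization": "Bearer YOUR_API_KEY"},    # placeholder auth
}

CONFIG_PATH.parent.mkdir(parents=True, exist_ok=True)
CONFIG_PATH.write_text(json.dumps(config, indent=2))
print(f"Registered the OriginalVoices MCP server in {CONFIG_PATH}")
```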
What you can do
Run this agent inside tools like n8n, Cursor, or ChatGPT
Chain real audience insight into larger AI workflows
Power your own AI agents with real-time human-grounded insight and context
Automatically inform, generate, test, or refine outputs at scale
Turn “real-time human insight” into a reusable tool or feature
How it works
Connect to OriginalVoices
Use the API or connect to the MCP server
Copy (or edit) the agent prompt below
You can use the prompt as-is, or edit it to suit your workflow requirements
Run the agent
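A minimal sketch of those three steps over the API is shown below. The endpoint URL, payload fields, and auth header are assumptions for illustration, not the documented OriginalVoices API; replace them with the real values from your account.

```python
import os
import requests

# Hypothetical endpoint and payload shape: replace with the values from
# the OriginalVoices API documentation.
API_URL = "https://api.originalvoices.example/v1/agents/run"  # placeholder
API_KEY = os.environ["ORIGINALVOICES_API_KEY"]

# Step 2: the agent prompt, copied (or edited) from the section below.
agent_prompt = open("output_evaluator_prompt.md").read()

# Step 3: run the agent against the outputs you want compared.
payload = {
    "prompt": agent_prompt,
    "inputs": {
        "outputs_to_compare": ["Draft A ...", "Draft B ..."],
        "target_audience": "US small-business owners, aged 30-55",
        "criteria": ["trust", "clarity", "tone"],
    },
}

resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=120,
)
resp.raise_for_status()
print(resp.json())
```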
What it does
The agent will automatically:
Query real audiences
Validate ideas, content, or inputs
Return structured, human-grounded outputs
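The exact response format depends on your integration, but the structured output looks roughly like the sketch below. Every field name and value here is invented for illustration and is not the real schema.

```python
# Purely illustrative: field names and values are invented to show the kind
# of structured, human-grounded output the agent returns, not a real schema.
example_result = {
    "overall_preference": {"winner": "Draft B", "strength": "moderate", "confidence": 0.72},
    "audience_feedback": [
        "Draft B's opening felt more direct and trustworthy.",
        "Draft A's jargon confused several respondents.",
    ],
    "alignment_scores": {"trust": 7.8, "clarity": 8.1, "tone_fit": 6.9, "helpfulness": 7.4},
    "segment_differences": {"under_35": "slight preference for Draft A's informal tone"},
    "recommendation": "Keep Draft B's structure; simplify the second paragraph.",
}
```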
Best for
Product teams embedding real-time insight into tools, systems and workflows
AI builders creating agentic workflows with enhanced knowledge, context and data
Automation & ops teams scaling content generation and validation
Platforms that want “real-time human insight” as an augmented input
Agent prompt
## Role
Evaluate and compare AI-generated outputs using real audience feedback. Identify which responses best align with human expectations and explain why.
---
## Input
- Two or more AI-generated outputs to compare
- Target audience definition (ask for it if not provided)
- Optional: evaluation criteria (e.g. trust, clarity, usefulness, tone)
---
## Job
- Use `ask_twins` to gather preferences, ratings, and qualitative feedback from the target audience
- Identify which outputs are preferred and where opinions differ
- Analyse why certain responses are preferred or feel more trustworthy, clear, or appropriate
- Highlight audience segments with differing preferences, where relevant
---
## Output
### Overall Preference & Ranking
- Show a summary of the insights, including which output is preferred overall
- Indicate strength of preference and confidence
### Audience Feedback
- Why people preferred certain responses
- What felt confusing, off-putting, or untrustworthy
### Alignment Scores
- Trust
- Clarity
- Tone fit
- Helpfulness
### Segment Differences
- Notable differences in preference across audience segments
### Actionable Guidance
- What to keep
- What to change
- Which output best aligns with the intended audience
---
Do not ask for confirmation. Deliver a clear, structured evaluation that supports confident decision-making.
Developer: OriginalVoices