# Give AI Agents Real Consumer Research
Connect ChatGPT, Claude, Cursor, or any MCP-compatible agent to real consumer research. Preference checks, claim reactions, and message tests with real people. Results in 2-3 hours.
## Quick Start

### Claude Desktop / Claude Code

Add to your MCP config (`~/.claude.json` or Claude Desktop settings):
```json
{
  "mcpServers": {
    "userintuition": {
      "url": "https://mcp.userintuition.ai/mcp"
    }
  }
}
```
Then ask: "Run a preference check on these three headlines."
### ChatGPT

Settings → Connected Apps → Add MCP Server → enter:

`https://mcp.userintuition.ai`
OAuth will prompt on first use. Then ask ChatGPT to run any study.
### Cursor

Settings → MCP → Add Server → enter URL:

`https://mcp.userintuition.ai/mcp`
Test copy and messaging directly from your IDE as you build.
### Any MCP Client

Any tool that supports MCP (Streamable HTTP transport) can connect:

- Server URL: `https://mcp.userintuition.ai/mcp`
- Transport: Streamable HTTP
- Auth: OAuth (prompted on first use)
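Under the hood, a Streamable HTTP client POSTs JSON-RPC 2.0 messages to the server URL; invoking a tool uses the MCP `tools/call` method. The sketch below builds such a message locally (no network call, OAuth omitted) so you can see the wire shape an agent sends. The `ask_humans` tool name comes from this doc; the helper itself is illustrative, not part of any SDK.

```python
import json

SERVER_URL = "https://mcp.userintuition.ai/mcp"

def build_tool_call(tool_name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 tools/call message, the wire format
    an MCP client POSTs to a Streamable HTTP server."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

msg = build_tool_call("ask_humans", {
    "mode": "preference_check",
    "stimuli": ["Headline A", "Headline B"],
    "dry_run": True,  # estimate cost/ETA without launching
})
print(json.dumps(msg, indent=2))
```

In practice your MCP client library handles session setup and auth; this only shows the request body it ultimately sends.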
## Available Tools

### ask_humans

Launch a study with real people. Three modes:
- preference_check — Compare 2-5 options. Which do people prefer and why?
- claim_reaction — Test whether people believe a claim. Agreement scores, credibility data, skepticism triggers.
- message_test — Test what copy promises. Clarity scores, implied promise clusters, confusion drivers.
| Parameter | Type | Required | Description |
|---|---|---|---|
| mode | string | yes | preference_check, claim_reaction, or message_test |
| stimuli | array[string] | yes | The text options/claims/messages to test |
| audience | string | no | general_population (default) or email_list |
| sample_size | integer | no | Number of participants (default: 25) |
| dry_run | boolean | no | If true, returns cost/ETA estimate without launching |
Returns: the headline metric with a confidence note, top and bottom themes (what's working and what's not), the minority view with real quotes, and 1-3 recommended edits.
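The parameter table above can be mirrored by a small client-side check before calling the tool. This validator is a hypothetical helper (not part of the server API): it enforces the documented modes, the 2-5 option range for `preference_check`, the two audience values, and fills in the documented defaults.

```python
VALID_MODES = {"preference_check", "claim_reaction", "message_test"}
VALID_AUDIENCES = {"general_population", "email_list"}

def validate_ask_humans(args):
    """Sanity-check ask_humans arguments against the parameter table
    and apply documented defaults (hypothetical client-side helper)."""
    if args.get("mode") not in VALID_MODES:
        raise ValueError(f"mode must be one of {sorted(VALID_MODES)}")
    stimuli = args.get("stimuli")
    if not isinstance(stimuli, list) or not stimuli:
        raise ValueError("stimuli must be a non-empty list of strings")
    if args["mode"] == "preference_check" and not 2 <= len(stimuli) <= 5:
        raise ValueError("preference_check compares 2-5 options")
    if args.get("audience", "general_population") not in VALID_AUDIENCES:
        raise ValueError("audience must be general_population or email_list")
    # Documented defaults: sample_size=25, dry_run off unless requested.
    return {"sample_size": 25, "dry_run": False, **args}

launch = validate_ask_humans({
    "mode": "preference_check",
    "stimuli": ["Ship faster", "Research in hours"],
})
```

Running the validator with only the required fields yields a payload with the defaults applied, ready to pass as the tool's arguments.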
Retrieve results for a study by ID.
| Parameter | Type | Required | Description |
|---|---|---|---|
| study_id | string | yes | The study ID returned by ask_humans |
### list_studies

List all studies for the authenticated account.
| Parameter | Type | Required | Description |
|---|---|---|---|
| status | string | no | Filter: active, completed, cancelled |
Modify a study before it completes (e.g., extend sample size).
| Parameter | Type | Required | Description |
|---|---|---|---|
| study_id | string | yes | The study to modify |
Cancel an in-progress study.
| Parameter | Type | Required | Description |
|---|---|---|---|
| study_id | string | yes | The study to cancel |
## Example Workflows

### 1. Validate Headlines Before Launch
You: "I have three headline options for our landing page. Run a preference check."
Agent: Creates a preference_check study with 25 participants.
[2-3 hours later]
Results: "Option B won with 52% preference. Key reason: clarity of the value prop. Minority (16%) preferred Option A — they found B 'too salesy.' Recommended edit: soften the urgency language in B while keeping the clear value prop."
### 2. Test a Product Claim
You: "We want to say 'Cut onboarding time by 60%.' Will people believe it?"
Agent: Creates a claim_reaction study.
[2-3 hours later]
Results: "Agreement: 4.1/7. 38% found it credible, 29% skeptical. Main skepticism trigger: 'sounds too specific without proof.' Recommendation: add 'based on customer data' or soften to '50-60%' range."
### 3. Message-Test Landing Page Copy
You: "Test this landing page copy with 50 people."
Agent: Creates a message_test study with sample_size=50.
[2-3 hours later]
Results: "Clarity score: 6.2/10. 34% misunderstood the core promise — they thought it was a consulting service, not a platform. Confusion driver: 'we do the research for you' implies done-for-you. Recommended edit: 'Your team runs research on the platform — results in hours, not weeks.'"
## System Prompt Recommendations
If you're building an agent or workflow that uses User Intuition, add these rules to your system prompt:
```
## User Intuition MCP Rules

1. Before launching any study, use dry_run: true to show the user
   estimated cost and timeline. Only proceed after confirmation.
2. Default sample_size is 25. Recommend 50+ for high-stakes decisions
   (rebrands, pricing changes, launches).
3. When presenting results, always include:
   - The headline metric with confidence note
   - Top themes (what's working)
   - Bottom themes (what's not)
   - The minority objection (never bury dissent)
   - 1-3 recommended edits
4. Remind users that every study compounds in their intelligence hub.
   Before launching a new study, check if similar past research exists
   using list_studies.
5. For preference_check: always surface the "why" behind preferences,
   not just the distribution.
6. For claim_reaction: highlight skepticism triggers specifically —
   these are the most actionable findings.
7. For message_test: focus on what the copy *promises* vs. what the
   brand *intended* — the gap is where confusion lives.
```
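Rule 1 (dry-run first, then confirm) can be sketched as a small wrapper in agent code. Everything here is hypothetical scaffolding: `call_tool` stands in for however your framework invokes MCP tools, and `confirm` stands in for your user-confirmation step; only the `ask_humans` tool name and `dry_run` parameter come from this doc.

```python
def launch_with_confirmation(call_tool, args, confirm):
    """Dry-run first; launch the real study only after the user
    confirms the estimated cost and timeline (per rule 1)."""
    estimate = call_tool("ask_humans", {**args, "dry_run": True})
    if not confirm(estimate):
        return None  # user declined; nothing was launched
    return call_tool("ask_humans", {**args, "dry_run": False})

# Stubbed usage — no real study is launched here:
fake_call = lambda name, a: {"dry_run": a["dry_run"]}
result = launch_with_confirmation(
    fake_call,
    {"mode": "message_test", "stimuli": ["Copy v1"]},
    confirm=lambda estimate: True,
)
```

The same shape works for rule 4: call `list_studies` before the dry run and surface any similar past study in the confirmation prompt.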
## Key Facts
ISO 27001, GDPR, HIPAA compliant. Every study feeds a searchable Customer Intelligence Hub.