MCP Server Live

Give AI Agents Real Consumer Research

Connect ChatGPT, Claude, Cursor, or any MCP-compatible agent to real consumer research. Preference checks, claim reactions, and message tests with real people. Results in 2-3 hours.

https://mcp.userintuition.ai/mcp

Quick Start

Claude Desktop / Claude Code

Add to your MCP config (~/.claude.json or Claude Desktop settings):

{
  "mcpServers": {
    "userintuition": {
      "url": "https://mcp.userintuition.ai/mcp"
    }
  }
}

Then ask: "Run a preference check on these three headlines."

ChatGPT

Settings → Connected Apps → Add MCP Server → enter:

https://mcp.userintuition.ai

OAuth will prompt on first use. Then ask ChatGPT to run any study.

Cursor

Settings → MCP → Add Server → enter URL:

https://mcp.userintuition.ai/mcp

Test copy and messaging directly from your IDE as you build.

Any MCP Client

Any tool that supports MCP (Streamable HTTP transport) can connect:

Server URL: https://mcp.userintuition.ai/mcp
Transport:  Streamable HTTP
Auth:       OAuth (prompted on first use)
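
If you're wiring up a client by hand rather than through a library, the handshake is plain MCP JSON-RPC over HTTP POST. A minimal sketch of the opening initialize request (the protocolVersion and clientInfo values here are illustrative):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": { "name": "my-client", "version": "1.0.0" }
  }
}

POST it to the server URL with an Accept header of application/json, text/event-stream. In practice, any MCP client library performs this handshake (and the OAuth prompt) for you.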

Available Tools

ask_humans

Launch a study with real people. Three modes:

  • preference_check — Compare 2-5 options. Which do people prefer and why?
  • claim_reaction — Test whether people believe a claim. Agreement scores, credibility data, skepticism triggers.
  • message_test — Test what copy promises. Clarity scores, implied promise clusters, confusion drivers.

Parameter    Type           Required  Description
mode         string         yes       preference_check, claim_reaction, or message_test
stimuli      array[string]  yes       The text options/claims/messages to test
audience     string         no        general_population (default) or email_list
sample_size  integer        no        Number of participants (default: 25)
dry_run      boolean        no        If true, returns a cost/ETA estimate without launching

Returns: headline metric with a confidence note, top and bottom themes (what's working and what's not), the minority view with real quotes, and 1-3 recommended edits.
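
To make the parameters concrete, here is a sketch of the underlying MCP tools/call request for a preference check (the headline strings are placeholders):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_humans",
    "arguments": {
      "mode": "preference_check",
      "stimuli": ["Headline A", "Headline B", "Headline C"],
      "sample_size": 25,
      "dry_run": true
    }
  }
}

With dry_run: true the server returns a cost/ETA estimate instead of launching; resend without it to start the study.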

get_results

Retrieve results for a study by ID.

Parameter  Type    Required  Description
study_id   string  yes       The study ID returned by ask_humans
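
And the follow-up once a study exists (the study_id value is whatever ask_humans returned; the shape below is a sketch):

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_results",
    "arguments": { "study_id": "<id returned by ask_humans>" }
  }
}

list_studies, edit_study, and cancel_study use the same tools/call shape with their own arguments.
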
list_studies

List all studies for the authenticated account.

Parameter  Type    Required  Description
status     string  no        Filter: active, completed, or cancelled

edit_study

Modify a study before it completes (e.g., extend sample size).

Parameter  Type    Required  Description
study_id   string  yes       The study to modify

cancel_study

Cancel an in-progress study.

Parameter  Type    Required  Description
study_id   string  yes       The study to cancel

Example Workflows

1. Validate Headlines Before Launch

You: "I have three headline options for our landing page. Run a preference check."

Agent: Creates a preference_check study with 25 participants.

[2-3 hours later]

Results: "Option B won with 52% preference. Key reason: clarity of the value prop. Minority (16%) preferred Option A — they found B 'too salesy.' Recommended edit: soften the urgency language in B while keeping the clear value prop."

2. Test a Product Claim

You: "We want to say 'Cut onboarding time by 60%.' Will people believe it?"

Agent: Creates a claim_reaction study.
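
Behind the scenes, the agent's call might look like this sketch, with the claim passed as a one-element stimuli array:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_humans",
    "arguments": {
      "mode": "claim_reaction",
      "stimuli": ["Cut onboarding time by 60%."]
    }
  }
}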

[2-3 hours later]

Results: "Agreement: 4.1/7. 38% found it credible, 29% skeptical. Main skepticism trigger: 'sounds too specific without proof.' Recommendation: add 'based on customer data' or soften to '50-60%' range."

3. Message-Test Landing Page Copy

You: "Test this landing page copy with 50 people."

Agent: Creates a message_test study with sample_size=50.
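
A sketch of the corresponding call (the copy itself is elided here):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_humans",
    "arguments": {
      "mode": "message_test",
      "stimuli": ["<landing page copy>"],
      "sample_size": 50
    }
  }
}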

[2-3 hours later]

Results: "Clarity score: 6.2/10. 34% misunderstood the core promise — they thought it was a consulting service, not a platform. Confusion driver: 'we do the research for you' implies done-for-you. Recommended edit: 'Your team runs research on the platform — results in hours, not weeks.'"

System Prompt Recommendations

If you're building an agent or workflow that uses User Intuition, add these rules to your system prompt:

## User Intuition MCP Rules

1. Before launching any study, use dry_run: true to show the user
   estimated cost and timeline. Only proceed after confirmation.

2. Default sample_size is 25. Recommend 50+ for high-stakes decisions
   (rebrands, pricing changes, launches).

3. When presenting results, always include:
   - The headline metric with confidence note
   - Top themes (what's working)
   - Bottom themes (what's not)
   - The minority objection (never bury dissent)
   - 1-3 recommended edits

4. Remind users that every study compounds in their intelligence hub.
   Before launching a new study, check if similar past research exists
   using list_studies.

5. For preference_check: always surface the "why" behind preferences,
   not just the distribution.

6. For claim_reaction: highlight skepticism triggers specifically —
   these are the most actionable findings.

7. For message_test: focus on what the copy *promises* vs. what the
   brand *intended* — the gap is where confusion lives.

Key Facts

  • 4M+ vetted global panel
  • 98% participant satisfaction
  • 2-3 hours to results
  • Studies from ~$200
  • 50+ languages supported
  • 5-7 levels of laddering depth

ISO 27001, GDPR, HIPAA compliant. Every study feeds a searchable Customer Intelligence Hub.