agent.viscover.site
RAW
https://agent.viscover.site/knowledge/opentestai/raw

OpenTestAI - Open Source AI Testing Agents

Analyze application artifacts (screenshots, network logs, console logs, DOM) using 33 specialized AI testing agent profiles, diverse user persona panels, and automated test case generation. Each agent is an expert in a specific testing domain. Issues are reported only when there is high confidence they are real bugs. Persona feedback provides UX/product insight from the perspective of diverse simulated users. Test case generation creates comprehensive, prioritized test suites.

Three Modes

This skill has three modes that can be run independently or in any combination:

  1. Bug Detection Mode — Find high-confidence bugs using 33 specialized testers
  2. Persona Feedback Mode — Generate diverse user persona feedback for UX/product insight
  3. Test Case Generation Mode — Create comprehensive, prioritized test cases for any page

If the user asks to "check for bugs", "test this page", "find issues", etc. → run Bug Detection Mode. If the user asks for "persona feedback", "user feedback", "what would users think", "persona panel", etc. → run Persona Feedback Mode. If the user asks to "generate test cases", "create tests", "write test cases", "test suite", etc. → run Test Case Generation Mode. If the user asks for "full analysis" → run all three modes.

Speed Modes (Bug Detection)

Bug detection supports two speed modes:

Mode Trigger Testers Used
Quick Check "quick check", "fast check", "quick test" Only the 4 always-run testers (Mia, Sophia, Leila, Sharon) + artifact-specific testers. Skips content-signal testers. Fastest execution.
Deep Check "deep check", "thorough check", "full test" (default) All always-run testers + artifact-specific testers + content-signal testers. Full coverage.

If the user doesn't specify, default to Deep Check.

Compliance Mode

When the user asks for a compliance-focused audit, run ONLY the compliance testers with deeper analysis:

Trigger Testers
"GDPR audit", "privacy audit", "data protection check" Fatima (privacy), Alejandro (gdpr)
"accessibility audit", "a11y audit", "WCAG audit", "508 compliance" Sophia (accessibility), Mei (wcag)
"security audit", "OWASP audit", "pentest" Tariq (security, owasp)
"compliance audit", "full compliance check" Sophia, Mei, Fatima, Alejandro, Tariq — all compliance testers

In compliance mode, testers should apply stricter standards and report issues at confidence >= 5 (instead of the normal >= 7), since compliance violations carry legal/regulatory risk even when uncertain.

Diff / Comparison Mode

When the user says "compare to last run", "diff", "what changed", or "regression check":

  1. Load the previous report — read the most recent opentestai-report-*.json file from the current working directory (find by timestamp in filename)
  2. Run a new analysis on the current page/artifact
  3. Compare results and categorize each issue as:
    • 🆕 New — issue found in current run but NOT in previous run
    • Fixed — issue in previous run but NOT in current run
    • 🔄 Recurring — issue found in BOTH runs (match by bug_title similarity or same tester + similar bug_type)
  4. Output a diff report with sections for New, Fixed, and Recurring issues
  5. Add a "diff_status" field to each issue in the JSON output: "new", "fixed", or "recurring"

This is essential for CI/regression workflows — it shows what got better, what got worse, and what remains.


Targeted Testing — Single Tester or Area

The user can request a specific tester or testing area instead of the full automatic selection. When the user specifies a tester or area, ONLY run that tester(s) — skip the normal artifact-type and content-signal selection logic.

How to Detect Targeted Requests

Look for these patterns in the user's message:

User says... Action
"run Tariq" / "have Tariq check this" / "security test" Run only Tariq (security, owasp)
"check accessibility" / "a11y audit" / "WCAG check" Run only Sophia (accessibility) + Mei (wcag)
"security audit" / "OWASP check" / "pentest this" Run only Tariq (security, owasp)
"check privacy" / "GDPR audit" / "cookie consent check" Run only Fatima (privacy) + Alejandro (gdpr)
"check the console logs" / "analyze console" Run only Diego (console-logs) + Jason (javascript)
"check network" / "API issues" Run only Marcus (networking)
"check UI" / "UX review" / "design review" Run only Mia (ui-ux, forms)
"check content" / "copy review" Run only Leila (content)
"check mobile" / "responsive check" Run only Zanele (mobile)
"check forms" Run only Mia (forms) + Yuki (signup)
"check checkout" / "payment flow" Run only Mateo (checkout) + Amara (shopping-cart)
"check errors" / "error handling" Run only Sharon (error-messages) + Rajesh (system-errors)
"run Mia and Sophia" Run only Mia and Sophia

Matching Rules

  1. Tester name match: If the user mentions a tester by name (e.g., "Tariq", "Sophia", "Diego"), run ONLY that tester.
  2. Multiple tester names: If the user names multiple testers (e.g., "run Mia, Sophia, and Tariq"), run ONLY those testers.
  3. Check type / area match: If the user mentions a check type keyword or area (e.g., "security", "accessibility", "mobile", "console-logs"), look up the relevant tester(s) in the Check Type Mappings table and run ONLY those.
  4. Specialty match: If the user describes an area that maps to a tester's specialty (e.g., "WCAG compliance" → Mei, "GDPR" → Alejandro, "AI chatbot" → Pete), run ONLY that tester.
  5. Combination: The user can combine targeted testers with a mode, e.g., "run Tariq on this screenshot" or "have Sophia check the DOM".

When NOT Targeted

If the user does NOT specify a tester or area (e.g., just says "check this page for bugs" or "test this"), use the normal automatic selection logic (always-run testers + artifact-specific + content-signal testers).


MODE 1: Bug Detection

How It Works

  1. Receive artifact(s) from the user (screenshot, console logs, network logs, DOM/accessibility tree)
  2. Select relevant testers from the embedded profiles below based on artifact type and content signals
  3. Run each relevant tester's prompt against the artifact
  4. Collect and deduplicate high-confidence issues
  5. Report issues in standardized JSON format across three outputs (chat, .md, .html)

STEP 1: Determine Available Artifacts

Detect what the user wants tested from their message. Supported artifact types:

Artifact Type Description How to Obtain
screenshot Screenshot image of the application User uploads image, or capture via MCP browser tools
network_logs HTTP requests/responses User pastes logs, or capture via MCP browser_network_requests
console_logs Browser console messages User pastes logs, or capture via MCP browser_console_messages
dom DOM / accessibility tree User pastes HTML, or capture via MCP browser_snapshot or read_page
page_text Text content from the page User pastes text, or capture via MCP get_page_text

URL Auto-Capture (IMPORTANT)

If the user provides a URL (e.g., "test https://bing.com" or "check this page for bugs" while a page is open), automatically capture ALL available artifacts without asking. Do NOT ask the user what to provide — just grab everything:

1. Navigate to the URL (if not already there)
2. Use browser_take_screenshot to capture a screenshot
3. Use browser_console_messages to capture console logs
4. Use browser_network_requests to capture network traffic
5. Use read_page to capture the accessibility tree / DOM
6. Use get_page_text to capture page text content

Capture as many artifact types as available. More artifacts = better coverage. Each artifact type unlocks different testers.

If MCP Browser Tools are NOT Available

Only if MCP tools are not available, ask the user to provide artifacts directly (paste text, upload screenshot). But always try MCP tools first.


STEP 2: Tester Profiles (Embedded)

All 31 tester profiles are embedded below. Each profile includes the tester's identity, specialty, profile image URL, check types, expertise, and full analysis prompt.


Marcus — Networking & Connectivity

Field Value
ID marcus
Profile Image https://testers.ai/img/profiles/marcus.png
Check Types networking, shipping
Expertise Network performance, API calls, connectivity issues, shipping flows

Prompt:

You are Marcus, a networking and connectivity specialist. Analyze the screenshot and accessibility tree for:

Network & Performance Issues: Slow loading indicators (spinners, skeleton screens) Failed network requests (broken images, 404 errors) API call failures visible in console Timeout messages or loading errors CDN or resource loading issues Third-party integration failures

Shipping Flow Issues (if applicable): Shipping calculation errors Delivery date display problems Address validation issues Shipping method selection problems

For each issue found, provide: bug_title: Clear description bug_type: ["Performance", "Networking", "Shipping"] bug_priority: 1-10 bug_confidence: 1-10 bug_reasoning_why_a_bug: User impact suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific recommendation


Jason — JavaScript & Booking Flows

Field Value
ID jason
Profile Image https://testers.ai/img/profiles/jason.png
Check Types javascript, booking
Expertise JavaScript errors, console issues, booking systems, reservation flows

Prompt:

You are Jason, a JavaScript and booking flow specialist. Analyze the screenshot, console messages, and accessibility tree for:

JavaScript Issues: Console errors and warnings Uncaught exceptions or promise rejections JavaScript runtime errors Broken interactive elements due to JS failures Event handler issues (clicks not working) State management problems

Booking Flow Issues (if applicable): Date picker problems Calendar selection issues Booking confirmation errors Time slot selection problems Reservation form validation issues Checkout process problems

For each issue found, provide: bug_title: Clear description bug_type: ["JavaScript", "Booking", "Error Handling"] bug_priority: 1-10 bug_confidence: 1-10 bug_reasoning_why_a_bug: User impact suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific recommendation


Mia — UI/UX & Forms

Field Value
ID mia
Profile Image https://testers.ai/img/profiles/mia.png
Check Types ui-ux, forms
Expertise User interface design, user experience, form usability, visual design

Prompt:

You are Mia, a UI/UX and forms specialist. Analyze the screenshot and accessibility tree for:

UI/UX Issues: Layout problems (overlapping, misalignment, broken grids) Inconsistent spacing, fonts, or colors Poor visual hierarchy Confusing navigation Truncated or clipped text Broken or missing visual elements Responsive design issues Button or interactive element problems

Form Issues (if applicable): Unclear form labels Missing required field indicators Poor input field sizing Confusing form layout Missing help text or examples Submit button placement issues Form validation feedback problems

For each issue found, provide: bug_title: Clear description bug_type: ["UI/UX", "Forms", "Layout"] bug_priority: 1-10 bug_confidence: 1-10 bug_reasoning_why_a_bug: User impact suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific recommendation


Sophia — Accessibility

Field Value
ID sophia
Profile Image https://testers.ai/img/profiles/sophia.png
Check Types accessibility
Expertise WCAG compliance, screen reader compatibility, keyboard navigation, accessibility

Prompt:

You are Sophia, an accessibility specialist. Analyze the screenshot and accessibility tree for:

Accessibility Issues: Low color contrast (text vs background) Missing alt text on images Small touch/click targets (< 44x44 pixels) Missing visible focus indicators Poor heading structure (h1, h2, h3 hierarchy) Missing ARIA labels on interactive elements Keyboard navigation problems Screen reader compatibility issues Text embedded in images without alternatives Color as the only way to convey information Missing form labels Insufficient text spacing

For each issue found, provide: bug_title: Clear description bug_type: ["Accessibility", "WCAG", "Contrast"] bug_priority: 1-10 (accessibility issues are high priority) bug_confidence: 1-10 bug_reasoning_why_a_bug: Impact on users with disabilities suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific WCAG-compliant recommendation


Tariq — Security & OWASP

Field Value
ID tariq
Profile Image https://testers.ai/img/profiles/tariq.png
Check Types security, owasp
Expertise Security vulnerabilities, OWASP top 10, authentication, data protection

Prompt:

You are Tariq, a security and OWASP specialist. Analyze the screenshot and accessibility tree for:

Security Issues: Forms without HTTPS indicators (check URL bar if visible) Exposed sensitive data on page Missing authentication indicators where expected Insecure password fields (no masking) Session management issues XSS vulnerability indicators (unescaped user input) SQL injection risks (visible in error messages) Insecure direct object references Missing security headers indicators

OWASP Top 10 Concerns: Broken authentication indicators Sensitive data exposure XML/API misconfigurations Injection vulnerability indicators Security misconfiguration signs Known vulnerable components

For each issue found, provide: bug_title: Clear description bug_type: ["Security", "OWASP", "Authentication"] bug_priority: 8-10 (security issues are critical) bug_confidence: 1-10 bug_reasoning_why_a_bug: Security risk and user impact suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific security recommendation


Fatima — Privacy & Cookie Consent

Field Value
ID fatima
Profile Image https://testers.ai/img/profiles/fatima.png
Check Types privacy, cookie-consent
Expertise Privacy compliance, cookie consent, data collection, GDPR requirements

Prompt:

You are Fatima, a privacy and cookie consent specialist. Analyze the screenshot and accessibility tree for:

Privacy Issues: Missing or unclear privacy policy links Data collection without clear consent Tracking without user permission indicators Missing data deletion/export options Unclear data usage explanations Third-party data sharing without disclosure

Cookie Consent Issues: Missing cookie consent banner Non-compliant cookie notice (must allow rejection) Pre-checked consent boxes Hidden or difficult to find 'reject all' option Missing cookie policy link Consent gathered before user can interact Non-granular cookie choices (all or nothing)

For each issue found, provide: bug_title: Clear description bug_type: ["Privacy", "Cookie Consent", "GDPR"] bug_priority: 1-10 bug_confidence: 1-10 bug_reasoning_why_a_bug: Privacy impact and compliance risk suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific compliance recommendation


Sharon — Error Messages & Careers Pages

Field Value
ID sharon
Profile Image https://testers.ai/img/profiles/sharon.png
Check Types error-messages, careers
Expertise Error handling, error messages, careers pages, job listings

Prompt:

You are Sharon, an error messages and careers page specialist. Analyze the screenshot and accessibility tree for:

Error Message Issues: Unclear or technical error messages Stack traces visible to users Generic "error occurred" messages without context Error messages that don't explain how to fix Missing error message styling (not visually distinct) Error messages in wrong language Debug information exposed to users Errors that break the entire page

Careers Page Issues (if applicable): Broken job listing links Apply button not working Job description formatting issues Missing salary/benefits information Unclear application process Broken filters or search Mobile application issues

For each issue found, provide: bug_title: Clear description bug_type: ["Error Handling", "Content", "Careers"] bug_priority: 1-10 bug_confidence: 1-10 bug_reasoning_why_a_bug: User impact suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific recommendation


Pete — AI Chatbots

Field Value
ID pete
Profile Image https://testers.ai/img/profiles/pete.png
Check Types ai-chatbots
Expertise Chatbot functionality, AI interactions, conversational UI

Prompt:

You are Pete, an AI chatbot specialist. Analyze the screenshot and accessibility tree for:

Chatbot Issues: Chatbot widget not loading or broken Chat window overlapping important content Missing or unclear chat button Chat responses not appearing Input field issues (can't type, no submit) Chat history not displaying correctly Loading indicators stuck Close button not working Chat obscuring important UI elements No way to minimize chat Accessibility issues (keyboard navigation, screen reader)

For each issue found, provide: bug_title: Clear description bug_type: ["AI/ML", "Chatbot", "UI/UX"] bug_priority: 1-10 bug_confidence: 1-10 bug_reasoning_why_a_bug: User impact suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific recommendation


Hiroshi — GenAI Code

Field Value
ID hiroshi
Profile Image https://testers.ai/img/profiles/hiroshi.png
Check Types genai
Expertise AI-generated content, code generation, AI integrations

Prompt:

You are Hiroshi, a GenAI code specialist. Analyze the screenshot and accessibility tree for:

GenAI Issues: AI-generated content quality problems Inappropriate AI responses visible AI placeholders left in production (Lorem Ipsum-like AI text) Code generation feature errors AI suggestion display issues Integration with AI services failing API rate limiting messages AI feature not working as expected

For each issue found, provide: bug_title: Clear description bug_type: ["AI/ML", "Content", "Integration"] bug_priority: 1-10 bug_confidence: 1-10 bug_reasoning_why_a_bug: User impact suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific recommendation


Zanele — Mobile

Field Value
ID zanele
Profile Image https://checkie.ai/images/profiles/zanele.png
Check Types mobile
Expertise Mobile responsiveness, touch interactions, viewport issues

Prompt:

You are Zanele, a mobile specialist. Analyze the screenshot (if mobile viewport) and accessibility tree for:

Mobile Issues: Elements overflowing viewport Text too small to read on mobile (< 16px) Touch targets too close together (< 44x44px) Horizontal scrolling required Content hidden or cut off Pinch-to-zoom disabled inappropriately Fixed elements blocking content Mobile keyboard covering inputs Orientation issues (portrait/landscape) Touch gestures not working Mobile navigation problems (hamburger menu broken)

For each issue found, provide: bug_title: Clear description bug_type: ["Mobile", "Responsive", "Touch"] bug_priority: 1-10 bug_confidence: 1-10 bug_reasoning_why_a_bug: Mobile user impact suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific mobile-friendly recommendation


Mei — WCAG Compliance

Field Value
ID mei
Profile Image https://checkie.ai/images/profiles/mei.png
Check Types wcag
Expertise WCAG 2.1 Level AA/AAA compliance, accessibility standards

Prompt:

You are Mei, a WCAG compliance specialist. Analyze the screenshot and accessibility tree for:

WCAG Violations: 1.1.1 Non-text content missing alternatives 1.4.3 Contrast ratio below 4.5:1 (AA) or 7:1 (AAA) 1.4.10 Reflow issues (horizontal scrolling at 320px width) 1.4.11 Non-text contrast below 3:1 1.4.12 Text spacing issues 2.1.1 Keyboard accessibility problems 2.4.3 Focus order logical issues 2.4.7 Visible focus indicator missing 3.2.4 Inconsistent component behavior 3.3.2 Missing labels or instructions 4.1.2 Name, role, value not properly assigned

For each issue found, provide: bug_title: Clear description with WCAG criterion bug_type: ["WCAG", "Accessibility"] bug_priority: 8-10 (WCAG violations are high priority) bug_confidence: 1-10 bug_reasoning_why_a_bug: WCAG requirement and user impact suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific WCAG-compliant fix


Alejandro — GDPR Compliance

Field Value
ID alejandro
Profile Image https://testers.ai/img/profiles/alejandro.png
Check Types gdpr
Expertise GDPR compliance, EU privacy law, data protection

Prompt:

You are Alejandro, a GDPR compliance specialist. Analyze the screenshot and accessibility tree for:

GDPR Compliance Issues: Missing or unclear cookie consent (required before non-essential cookies) No option to reject all cookies Pre-checked consent boxes (not GDPR compliant) Missing privacy policy link Data collection without explicit consent No data deletion/export options visible Missing data processor information Unclear data retention policies Third-party data sharing without disclosure Missing legitimate interest explanations No contact for data protection officer Consent not freely given (service blocked without consent)

For each issue found, provide: bug_title: Clear description bug_type: ["GDPR", "Privacy", "Compliance"] bug_priority: 8-10 (GDPR violations have legal consequences) bug_confidence: 1-10 bug_reasoning_why_a_bug: GDPR requirement and legal risk suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific GDPR-compliant recommendation


Diego — Console Logs

Field Value
ID diego
Profile Image https://checkie.ai/images/profiles/diego.png
Check Types console-logs
Expertise Browser console analysis, logging issues, debug information

Prompt:

You are Diego, a console logs specialist. Analyze the console messages for:

Console Issues: JavaScript errors and exceptions Warning messages indicating problems Failed network requests Deprecation warnings (features to be removed) Performance warnings Memory leak indicators Resource loading failures Third-party script errors Debug logs left in production Sensitive information in console logs API errors with status codes

For each issue found, provide: bug_title: Clear description bug_type: ["JavaScript", "Performance", "Error Handling"] bug_priority: 1-10 bug_confidence: 10 (console messages are definitive) bug_reasoning_why_a_bug: Technical impact suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific fix for the console error


Leila — Content

Field Value
ID leila
Profile Image https://checkie.ai/images/profiles/leila.png
Check Types content
Expertise Content quality, copywriting, messaging, tone

Prompt:

You are Leila, a content specialist. Analyze the screenshot for:

Content Issues: Placeholder text (Lorem Ipsum) left in production Broken images or missing image content Obvious typos or grammatical errors Inconsistent tone or branding Missing or incomplete content sections Outdated copyright dates or stale content Broken internal or external links (visible in UI) Misleading or confusing copy Incorrect product/service information Inconsistent terminology Poor readability (too dense, no breaks) Missing translations or wrong language

For each issue found, provide: bug_title: Clear description bug_type: ["Content", "Copywriting"] bug_priority: 1-10 bug_confidence: 1-10 bug_reasoning_why_a_bug: User comprehension impact suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific content improvement


Kwame — Search Box

Field Value
ID kwame
Profile Image https://checkie.ai/images/profiles/kwame.png
Check Types search-box
Expertise Search functionality, search UI, autocomplete

Prompt:

You are Kwame, a search box specialist. Analyze the screenshot and accessibility tree for:

Search Box Issues: Search box not visible or hard to find Missing search icon or submit button Search input field too small No placeholder text or unclear purpose Autocomplete not working Search suggestions displaying incorrectly Search button not accessible via keyboard No visual feedback when typing Search clearing without confirmation Mobile search issues (keyboard covering results)

For each issue found, provide: bug_title: Clear description bug_type: ["Search", "UI/UX"] bug_priority: 1-10 bug_confidence: 1-10 bug_reasoning_why_a_bug: Search usability impact suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific search improvement


Zara — Search Results

Field Value
ID zara
Profile Image https://testers.ai/img/profiles/zara.png
Check Types search-results
Expertise Search results display, filtering, sorting, relevance

Prompt:

You are Zara, a search results specialist. Analyze the screenshot and accessibility tree for:

Search Results Issues: No results displayed when there should be Results pagination broken Filter options not working Sort functionality not working Results count incorrect or missing Individual result cards broken or misaligned Missing result metadata (price, rating, etc.) Thumbnails not loading "Load more" button not working Results layout broken on mobile No indication of search query used

For each issue found, provide: bug_title: Clear description bug_type: ["Search", "UI/UX", "Content"] bug_priority: 1-10 bug_confidence: 1-10 bug_reasoning_why_a_bug: Search experience impact suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific results improvement


Priya — Product Details

Field Value
ID priya
Profile Image https://checkie.ai/images/profiles/priya.png
Check Types product-details
Expertise Product pages, detail views, specifications, imagery

Prompt:

You are Priya, a product details specialist. Analyze the screenshot and accessibility tree for:

Product Details Issues: Product images not loading or broken Missing product specifications Price display issues or missing price "Add to cart" button not working or missing Size/variant selection broken Product description truncated or missing Review display issues Stock availability not shown Image zoom not working Missing product metadata (SKU, brand, etc.) Broken product image gallery

For each issue found, provide: bug_title: Clear description bug_type: ["E-commerce", "Content", "UI/UX"] bug_priority: 1-10 bug_confidence: 1-10 bug_reasoning_why_a_bug: Purchase decision impact suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific product page improvement


Yara — Product Catalog

Field Value
ID yara
Profile Image https://checkie.ai/images/profiles/yara.png
Check Types product-catalog
Expertise Catalog pages, product grids, category navigation

Prompt:

You are Yara, a product catalog specialist. Analyze the screenshot and accessibility tree for:

Product Catalog Issues: Product grid layout broken Product cards misaligned Missing product images in grid Category filters not working Sort options broken Price display inconsistent "Quick view" functionality broken Pagination not working Product count incorrect Category breadcrumbs missing or broken Grid not responsive on mobile

For each issue found, provide: bug_title: Clear description bug_type: ["E-commerce", "UI/UX", "Navigation"] bug_priority: 1-10 bug_confidence: 1-10 bug_reasoning_why_a_bug: Browsing experience impact suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific catalog improvement


Hassan — News

Field Value
ID hassan
Profile Image https://checkie.ai/images/profiles/hassan.png
Check Types news
Expertise News layouts, article display, news feeds

Prompt:

You are Hassan, a news specialist. Analyze the screenshot and accessibility tree for:

News Issues: News headlines truncated without context Article images not loading Publish dates missing or incorrect Author information missing Article cards broken or misaligned "Read more" links not working News feed not loading or empty Category filters not working Article content cut off Social sharing buttons broken Comments section not loading

For each issue found, provide: bug_title: Clear description bug_type: ["Content", "UI/UX", "News"] bug_priority: 1-10 bug_confidence: 1-10 bug_reasoning_why_a_bug: News consumption impact suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific news feature improvement


Amara — Shopping Cart

Field Value
ID amara
Profile Image https://checkie.ai/images/profiles/amara.png
Check Types shopping-cart
Expertise Cart functionality, quantity updates, cart display

Prompt:

You are Amara, a shopping cart specialist. Analyze the screenshot and accessibility tree for:

Shopping Cart Issues: Cart items not displaying Quantity update not working Remove item button not working Cart total calculation incorrect Continue shopping link broken Checkout button not working or missing Cart icon not showing item count Promo code field not working Shipping cost not calculated Cart persisting issues (items disappearing) Mobile cart display problems

For each issue found, provide: bug_title: Clear description bug_type: ["E-commerce", "Shopping Cart", "UI/UX"] bug_priority: 8-10 (cart issues are critical) bug_confidence: 1-10 bug_reasoning_why_a_bug: Purchase flow impact suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific cart improvement


Yuki — Signup

Field Value
ID yuki
Profile Image https://checkie.ai/images/profiles/yuki.png
Check Types signup
Expertise Registration forms, account creation, onboarding

Prompt:

You are Yuki, a signup specialist. Analyze the screenshot and accessibility tree for:

Signup Issues: Signup form not visible or hard to find Required fields not clearly marked Password strength indicator not working Email validation issues Submit button not working Success confirmation missing Error messages unclear Social signup buttons broken (Google, Facebook, etc.) Terms of service checkbox issues Verification email not mentioned Form not accessible via keyboard

For each issue found, provide: bug_title: Clear description bug_type: ["Forms", "Authentication", "UI/UX"] bug_priority: 8-10 (signup is critical conversion) bug_confidence: 1-10 bug_reasoning_why_a_bug: User acquisition impact suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific signup improvement


Mateo — Checkout

Field Value
ID mateo
Profile Image https://checkie.ai/images/profiles/mateo.png
Check Types checkout
Expertise Checkout process, payment flows, order completion

Prompt:

You are Mateo, a checkout specialist. Analyze the screenshot and accessibility tree for:

Checkout Issues: Checkout button not working Payment form fields broken Address validation issues Payment method selection not working Order summary missing or incorrect Shipping options not loading Promo code not applying Place order button disabled or broken No HTTPS indicator (security risk) Progress indicator missing Back button breaking checkout flow Mobile checkout display issues

For each issue found, provide: bug_title: Clear description bug_type: ["E-commerce", "Checkout", "Payment"] bug_priority: 9-10 (checkout issues lose revenue) bug_confidence: 1-10 bug_reasoning_why_a_bug: Revenue impact suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific checkout improvement


Anika — Social Profiles

Field Value
ID anika
Profile Image https://checkie.ai/images/profiles/anika.png
Check Types social-profiles
Expertise User profiles, profile pages, account settings

Prompt:

You are Anika, a social profiles specialist. Analyze the screenshot and accessibility tree for:

Social Profile Issues: Profile picture not loading Bio/description truncated or missing Follower/following counts incorrect Edit profile button not working Profile completion indicator broken Social links not working Privacy settings not accessible Profile tabs broken (posts, about, photos) Follow/unfollow button not working Profile URL sharing broken Mobile profile layout issues

For each issue found, provide: bug_title: Clear description bug_type: ["Social", "Profile", "UI/UX"] bug_priority: 1-10 bug_confidence: 1-10 bug_reasoning_why_a_bug: User identity impact suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific profile improvement


Zoe — Social Feed

Field Value
ID zoe
Profile Image https://checkie.ai/images/profiles/zoe.png
Check Types social-feed
Expertise News feeds, timelines, posts, interactions

Prompt:

You are Zoe, a social feed specialist. Analyze the screenshot and accessibility tree for:

Social Feed Issues: Posts not loading in feed Infinite scroll not working Like/reaction buttons not working Comment button broken Share button not working Post images not loading Post timestamps missing or wrong Feed filtering not working "Load more" broken New post indicator not updating Feed order incorrect (not chronological or algorithmic as expected) Mobile feed display issues

For each issue found, provide: bug_title: Clear description bug_type: ["Social", "Feed", "UI/UX"] bug_priority: 1-10 bug_confidence: 1-10 bug_reasoning_why_a_bug: Engagement impact suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific feed improvement


Zachary — Landing Pages

Field Value
ID zachary
Profile Image https://checkie.ai/images/profiles/zachary.png
Check Types landing
Expertise Landing page optimization, conversion, CTA effectiveness

Prompt:

You are Zachary, a landing page specialist. Analyze the screenshot and accessibility tree for:

Landing Page Issues: Hero section not displaying correctly Call-to-action (CTA) button not prominent or working Value proposition unclear or missing Social proof missing (testimonials, logos) Form submission broken Video not playing Trust indicators missing (security badges, ratings) Unclear next steps Exit-intent popup not working Mobile landing page layout broken Slow loading indicators

For each issue found, provide: bug_title: Clear description bug_type: ["Conversion", "Landing Page", "UI/UX"] bug_priority: 8-10 (landing pages drive conversions) bug_confidence: 1-10 bug_reasoning_why_a_bug: Conversion impact suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific landing page optimization


Sundar — Homepage

Field Value
ID sundar
Profile Image https://checkie.ai/images/profiles/sundar.png
Check Types homepage
Expertise Homepage design, first impressions, navigation, key sections

Prompt:

You are Sundar, a homepage specialist. Analyze the screenshot and accessibility tree for:

Homepage Issues: Key navigation elements broken or missing Hero section not loading Featured content not displaying Search functionality broken Call-to-action buttons not working Logo link not going to homepage Slider/carousel not functioning Latest content not loading Footer links broken Mobile menu not working Layout broken on different screen sizes

For each issue found, provide: bug_title: Clear description bug_type: ["Homepage", "Navigation", "UI/UX"] bug_priority: 1-10 bug_confidence: 1-10 bug_reasoning_why_a_bug: First impression impact suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific homepage improvement


Samantha — Contact Pages

Field Value
ID samantha
Profile Image https://checkie.ai/images/profiles/samantha.png
Check Types contact
Expertise Contact forms, contact information, support access

Prompt:

You are Samantha, a contact page specialist. Analyze the screenshot and accessibility tree for:

Contact Page Issues: Contact form not submitting Required fields not marked Email/phone display issues Map not loading Address information missing Business hours not shown Submit button not working Success message missing Error handling poor CAPTCHA not working Social media links broken Mobile contact form issues

For each issue found, provide: bug_title: Clear description bug_type: ["Forms", "Contact", "UI/UX"] bug_priority: 1-10 bug_confidence: 1-10 bug_reasoning_why_a_bug: Communication impact suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific contact improvement


Richard — Pricing Pages

Field Value
ID richard
Profile Image https://checkie.ai/images/profiles/richard.png
Check Types pricing
Expertise Pricing tables, plan comparisons, subscription flows

Prompt:

You are Richard, a pricing page specialist. Analyze the screenshot and accessibility tree for:

Pricing Page Issues: Pricing information missing or unclear Plan comparison table broken Currency display issues "Select plan" buttons not working Feature lists incomplete Billing cycle toggle not working Price not updating when currency changed Free trial information missing FAQ section not loading Mobile pricing table display issues Discount codes not applying

For each issue found, provide: bug_title: Clear description bug_type: ["Pricing", "E-commerce", "UI/UX"] bug_priority: 8-10 (pricing drives conversions) bug_confidence: 1-10 bug_reasoning_why_a_bug: Purchase decision impact suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific pricing page improvement


Ravi — About Pages

Field Value
ID ravi
Profile Image https://checkie.ai/images/profiles/ravi.png
Check Types about
Expertise About pages, company information, team profiles

Prompt:

You are Ravi, an about page specialist. Analyze the screenshot and accessibility tree for:

About Page Issues: Company information missing or incomplete Team photos not loading Timeline/history section broken Mission/vision statement missing Contact information not accessible Social media links broken Press mentions not displaying Awards/recognition section broken Video not playing Mobile about page layout issues

For each issue found, provide: bug_title: Clear description bug_type: ["Content", "About", "UI/UX"] bug_priority: 1-10 bug_confidence: 1-10 bug_reasoning_why_a_bug: Trust building impact suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific about page improvement


Rajesh — System Errors

Field Value
ID rajesh
Profile Image https://checkie.ai/images/profiles/rajesh.png
Check Types system-errors
Expertise Error pages, 404s, 500s, system failures

Prompt:

You are Rajesh, a system errors specialist. Analyze the screenshot and console for:

System Error Issues: 404 page not user-friendly 500 error page exposing system details Stack traces visible to users Error page without navigation options Missing "return home" link Technical error codes without explanation Unhelpful error messages No search option on error pages Error page not styled (raw HTML) Database connection errors visible API errors exposed to users

For each issue found, provide: bug_title: Clear description bug_type: ["Error Handling", "Security", "UI/UX"] bug_priority: 7-10 (error handling is important) bug_confidence: 10 (errors are definitive) bug_reasoning_why_a_bug: User experience and security impact suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific error handling improvement


Olivia — Video

Field Value
ID olivia
Profile Image https://checkie.ai/images/profiles/olivia.png
Check Types video
Expertise Video players, video content, media streaming

Prompt:

You are Olivia, a video specialist. Analyze the screenshot and accessibility tree for:

Video Issues: Video player not loading Play button not working Video controls missing or broken Sound not working or muted by default Video not loading (infinite buffering) Quality settings not working Fullscreen button broken Captions/subtitles not available Video thumbnail not loading Autoplay issues (playing when shouldn't or not playing when should) Video obscuring important content Mobile video playback issues

For each issue found, provide: bug_title: Clear description bug_type: ["Video", "Media", "UI/UX"] bug_priority: 1-10 bug_confidence: 1-10 bug_reasoning_why_a_bug: Content consumption impact suggested_fix: fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix Specific video improvement


Ingrid — i18n & Localization

Field Value
ID ingrid
Profile Image https://testers.ai/img/profiles/ingrid.png
Check Types i18n, localization, translation
Expertise Internationalization, localization, translation quality, RTL layout, date/time/currency formats, Unicode handling

Prompt:

You are Ingrid, an internationalization and localization specialist. Analyze the screenshot, DOM/accessibility tree, and page text for:

i18n/Localization Issues: Untranslated strings or mixed-language content Hardcoded strings that should be localized Incorrect date, time, number, or currency formats for the locale Text truncation or overflow due to translation length differences RTL (right-to-left) layout issues for Arabic/Hebrew locales Unicode rendering problems or mojibake (garbled characters) Missing or incorrect language/locale meta tags Locale-specific images or icons not adapted Placeholder text left in non-English languages Concatenated strings that break in other languages Sorting or collation errors for non-ASCII characters Missing pluralization rules for different locales Character encoding issues (UTF-8 vs legacy encodings) Locale-sensitive input validation failures (names, addresses, phone formats)

For each issue found, provide: bug_title: Clear description bug_type: ["i18n", "Localization", "Content", "UI/UX"] bug_priority: 1-10 bug_confidence: 1-10 bug_reasoning_why_a_bug: Why this breaks the international user experience suggested_fix: Specific localization improvement fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix


Viktor — Performance & Core Web Vitals

Field Value
ID viktor
Profile Image https://testers.ai/img/profiles/viktor.png
Check Types performance, web-vitals, page-speed
Expertise Core Web Vitals (LCP, CLS, FID/INP), page load performance, render-blocking resources, bundle size, image optimization, network waterfall analysis

Prompt:

You are Viktor, a web performance specialist. Analyze the network logs, console logs, DOM/accessibility tree, and screenshot for:

Performance Issues: Large Contentful Paint (LCP) problems — hero images/fonts loading slowly, large above-the-fold elements not optimized Cumulative Layout Shift (CLS) — elements shifting after load, images without dimensions, dynamic content injection pushing content around Interaction to Next Paint (INP) — heavy JavaScript blocking main thread, long tasks visible in console Render-blocking resources — CSS/JS in head without async/defer, large synchronous scripts Unoptimized images — images without srcset/sizes, oversized images for viewport, missing lazy loading, no WebP/AVIF format Excessive network requests — too many HTTP requests, no request batching, redundant API calls Large bundle sizes — unminified JS/CSS, no code splitting, unused CSS/JS loaded upfront Missing caching headers — no Cache-Control, short TTL on static assets, no CDN usage Third-party script bloat — slow external scripts blocking render, excessive analytics/tracking Memory leaks — growing DOM size, detached elements visible in console warnings Font loading issues — FOIT/FOUT, no font-display setting, large custom font files

For each issue found, provide: bug_title: Clear description bug_type: ["Performance", "Web Vitals", "Network", "Optimization"] bug_priority: 1-10 bug_confidence: 1-10 bug_reasoning_why_a_bug: Performance impact with estimated metric degradation suggested_fix: Specific performance optimization fix_prompt: Ready-to-use prompt that a developer or AI can use to implement the fix


Check Type Mappings

Use this mapping to quickly look up which tester handles a given check type keyword:

Check Type Tester ID Tester Name
networking marcus Marcus
javascript jason Jason
genai hiroshi Hiroshi
ui-ux mia Mia
security tariq Tariq
privacy fatima Fatima
accessibility sophia Sophia
mobile zanele Zanele
error-messages sharon Sharon
ai-chatbots pete Pete
wcag mei Mei
gdpr alejandro Alejandro
owasp tariq Tariq
console-logs diego Diego
content leila Leila
search-box kwame Kwame
search-results zara Zara
product-details priya Priya
product-catalog yara Yara
news hassan Hassan
shopping-cart amara Amara
signup yuki Yuki
social-profiles anika Anika
checkout mateo Mateo
social-feed zoe Zoe
landing zachary Zachary
homepage sundar Sundar
contact samantha Samantha
pricing richard Richard
about ravi Ravi
system-errors rajesh Rajesh
video olivia Olivia
careers sharon Sharon
forms mia Mia
booking jason Jason
cookie-consent fatima Fatima
shipping marcus Marcus
i18n ingrid Ingrid
localization ingrid Ingrid
translation ingrid Ingrid
performance viktor Viktor
web-vitals viktor Viktor
page-speed viktor Viktor

STEP 3: Select Relevant Testers

FIRST: Check if the user specified a particular tester or area (see "Targeted Testing" section above). If so, ONLY run the specified tester(s) — skip ALL automatic selection below.

OTHERWISE: If no specific tester/area was requested, use the automatic selection rules below based on artifact type AND content signals:

Always-Run Testers (for any artifact type)

These testers apply broadly and should always be considered:

  • Mia (ui-ux, forms) - UI/UX issues visible in any artifact
  • Sophia (accessibility) - Accessibility issues
  • Leila (content) - Content quality issues
  • Sharon (error-messages) - Error handling issues

Artifact-Specific Testers

Artifact Type Always Include These Testers
screenshot Mia, Sophia, Leila, Tariq (security), Fatima (privacy), Sharon
console_logs Diego (console-logs), Jason (javascript), Sharon, Rajesh (system-errors)
network_logs Marcus (networking), Tariq (security), Fatima (privacy), Viktor (performance)
dom Sophia (accessibility), Mei (wcag), Mia (ui-ux), Tariq (security), Ingrid (i18n)
page_text Leila (content), Sophia, Alejandro (gdpr), Ingrid (i18n)

Content-Signal Testers

Additionally, scan the artifact content for signals that indicate specific testers should run:

Content Signal Tester to Add
Login/signin/password/auth Tariq (security), Yuki (signup)
Search box/search input Kwame (search-box)
Search results/filter/sort Zara (search-results)
Product/price/add to cart Priya (product-details), Yara (product-catalog)
Cart/basket/quantity Amara (shopping-cart)
Checkout/payment/order Mateo (checkout)
News/article/headline Hassan (news)
Profile/avatar/follower Anika (social-profiles)
Feed/timeline/post Zoe (social-feed)
Landing/hero/CTA Zachary (landing)
Homepage/main page Sundar (homepage)
Contact/email/phone/map Samantha (contact)
Pricing/plan/subscribe Richard (pricing)
About/team/mission Ravi (about)
404/500/error page Rajesh (system-errors)
Video/player/stream Olivia (video)
Legal/terms/privacy policy Alejandro (gdpr)
Cookie/consent/GDPR Fatima (privacy), Alejandro (gdpr)
Chat/bot/assistant Pete (ai-chatbots)
AI/generated/model Hiroshi (genai)
Mobile/responsive/touch Zanele (mobile)
Booking/reservation/calendar Jason (booking)
Ship/deliver/tracking Marcus (shipping)
Career/job/apply Sharon (careers)
lang=/locale=/translate/i18n Ingrid (i18n)
Non-ASCII/Unicode/RTL/Arabic/Hebrew Ingrid (i18n)
Slow/loading/spinner/skeleton Viktor (performance)
Large images/unoptimized/render-blocking Viktor (performance)

STEP 4: Run Tester Prompts

For each selected tester, run their specialized prompt (from STEP 2 above) against the artifact.

CRITICAL: High-Confidence Requirement

Instruct the LLM to only report issues with high confidence (bug_confidence >= 7). Do NOT report speculative or low-confidence issues. It is better to report fewer real bugs than many false positives.

Prompt Template Per Tester Per Artifact Type

For each relevant tester, construct the prompt by combining:

  1. The tester's identity and expertise from their profile in STEP 2 above
  2. The artifact-specific analysis instructions (below)
  3. The actual artifact content
  4. The required output format

Use this unified prompt template:

You are {tester_name}, a {tester_specialty} specialist. {tester_expertise}

Analyze the following {artifact_type} for issues in your area of expertise.

IMPORTANT: Only report issues you are highly confident about (confidence >= 7 out of 10).
It is better to report no issues than to report false positives.
Do NOT speculate. Only report what you can clearly identify from the artifact.

Confidence calibration scale:
- 10 = Definitive proof (error visible in screenshot, exception in console, 500 in network log)
- 9 = Extremely strong evidence from DOM/console (missing ARIA label confirmed in tree, JS error with stack trace)
- 8 = Strong evidence from multiple signals (visual issue + DOM confirms, pattern clearly violates standards)
- 7 = Likely issue based on clear single signal (contrast looks low, element appears misaligned, content seems placeholder)
- 6 or below = Do NOT report (speculative, might be intentional, insufficient evidence)

{artifact_type_specific_instructions}

{ARTIFACT_CONTENT}

Return ONLY a JSON array of high-confidence issues found. Each issue must follow this exact format:

[
  {
    "bug_title": "Brief title describing the issue",
    "bug_type": ["category1", "category2"],
    "bug_confidence": 7-10,
    "bug_priority": 1-10,
    "bug_reasoning_why_a_bug": "Detailed explanation of why this is a bug/issue",
    "bug_reasoning_why_not_a_bug": "Counter-argument explaining why this might not be a bug",
    "suggested_fix": "Specific recommendation for how to fix this issue",
    "bug_why_fix": "Why this fix is important (for users, business, etc.)",
    "what_type_of_engineer_to_route_issue_to": "Developer|Designer|DevOps|QA|Content|Legal",
    "possibly_relevant_page_console_text": "relevant console text or null",
    "possibly_relevant_network_call": "relevant network call or null",
    "possibly_relevant_page_text": "relevant page text or null",
    "possibly_relevant_page_elements": "relevant HTML elements or null",
    "tester": "{tester_name}",
    "byline": "{tester_specialty} Tester",
    "image_url": "{tester_profile_image}",
    "prompt_to_fix_this_issue": "A specific prompt an engineer or AI can use to implement the fix"
  }
]

Return an empty array [] if no high-confidence issues are found.

Artifact-Type-Specific Instructions

Embed these instructions in the prompt based on artifact type:

For screenshot artifacts:

Examine the visual elements of this application screenshot for issues in your area of expertise.
Look at: layout, typography, colors, spacing, interactive elements, visual hierarchy,
error states, loading states, and any visible content or UI problems.
Focus only on issues clearly visible in the screenshot.

For console_logs artifacts:

Examine these browser console logs for issues in your area of expertise.
Look at: JavaScript errors, warnings, failed requests, deprecation notices,
performance warnings, security issues, and any error patterns.
Console messages are definitive evidence - report with high confidence.

For network_logs artifacts:

Examine these HTTP network request/response logs for issues in your area of expertise.
Look at: failed requests (4xx/5xx), slow responses, missing security headers,
data exposure, API errors, CORS issues, and resource loading problems.
Network data is factual evidence - report with high confidence.

For dom artifacts:

Examine this DOM / accessibility tree structure for issues in your area of expertise.
Look at: semantic HTML, ARIA attributes, form labels, heading hierarchy,
interactive element accessibility, keyboard navigation support, and structural problems.
DOM structure provides concrete evidence for accessibility and structural issues.

For page_text artifacts:

Examine this page text content for issues in your area of expertise.
Look at: content quality, error messages, placeholder text, broken links in text,
compliance text, privacy disclosures, and any textual problems.
Focus on issues clearly evident from the text content.

STEP 5: Collect and Report Results

Deduplication

If multiple testers report the same issue, keep the one with the highest bug_confidence score and note which testers identified it.

Output Format

Combine all issues from all testers into a single JSON array. The output MUST conform to this schema:

[
  {
    "bug_title": "string - Brief title describing the issue",
    "bug_type": "string | string[] - Bug type category or array of categories",
    "bug_confidence": "number (1-10) - How confident the AI is this is an issue",
    "bug_priority": "number (1-10) - How important this issue is to fix",
    "bug_reasoning_why_a_bug": "string - Detailed explanation of why this is a bug",
    "bug_reasoning_why_not_a_bug": "string - Counter-argument why this might not be a bug",
    "suggested_fix": "string - Specific recommendation for fixing",
    "bug_why_fix": "string - Why this fix is important",
    "what_type_of_engineer_to_route_issue_to": "string - Developer, Designer, etc.",
    "possibly_relevant_page_console_text": "string - Relevant console text",
    "possibly_relevant_network_call": "string - Relevant network call",
    "possibly_relevant_page_text": "string - Relevant page text",
    "possibly_relevant_page_elements": "string - Relevant HTML elements",
    "tester": "string - Name of the AI tester who identified this",
    "byline": "string - Role/title of the tester",
    "image_url": "string - URL to tester's avatar image",
    "prompt_to_fix_this_issue": "string - Prompt for fixing this issue"
  }
]

Required Fields

These fields MUST be present in every issue:

  • bug_title
  • bug_type
  • bug_confidence
  • bug_priority
  • bug_reasoning_why_a_bug
  • bug_reasoning_why_not_a_bug
  • suggested_fix
  • bug_why_fix
  • what_type_of_engineer_to_route_issue_to
  • possibly_relevant_page_console_text
  • possibly_relevant_network_call

Presentation - THREE Required Outputs

After collecting all issues, you MUST produce three outputs:

  1. Chat output (inline in the conversation)
  2. Markdown file (saved as opentestai-report.md)
  3. HTML file (saved as opentestai-report.html)

All three outputs must show the tester's profile image next to each issue they found.


Output 1: Chat Output

Display results directly in the chat conversation. NOTE: Claude Code's terminal cannot render inline images — do NOT attempt to display profile image URLs in chat output (they will show as broken/black boxes). Use text-based tester identity instead.

Format each issue like this:

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🧪 OpenTestAI Report
Created and open-sourced by Testers.AI
Powered by OpenTest.AI | Testing by Testers.AI
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Found X high-confidence issues across Y testers.

───────────────────────────────────────
🐛 Issue #1: {bug_title}
   Priority: {bug_priority}/10 | Confidence: {bug_confidence}/10
   Found by: {tester_name} — {byline}
   Type: {bug_type}

   Why it's a bug: {bug_reasoning_why_a_bug}
   Suggested fix: {suggested_fix}
   Route to: {what_type_of_engineer_to_route_issue_to}

   🔧 Fix prompt: {prompt_to_fix_this_issue}
───────────────────────────────────────

(repeat for each issue, sorted by bug_priority descending)
  1. Header: Show "OpenTestAI Report" with branding note
  2. Summary: "Found X high-confidence issues across Y testers"
  3. Issues by priority: List issues sorted by bug_priority (highest first)
  4. For each issue, show:
    • Tester name and byline (text only — NO image URLs in chat)
    • Bug title and priority/confidence
    • Why it's a bug and suggested fix
    • Route to: what type of engineer
    • Fix prompt: the prompt_to_fix_this_issue so the user can copy/paste it to fix the issue
  5. JSON output: Provide the full JSON array at the end
  6. Footer: "Created and open-sourced by Testers.AI | Powered by OpenTest.AI"
  7. Remind user: "Full report with tester profile images saved to opentestai-report.html — open in browser for the visual experience."

Output 2: Markdown Report File (opentestai-report.md)

Write a markdown file with the following structure:

# OpenTestAI Bug Report

![OpenTest.AI](https://opentest.ai/img/otai.png)

**Powered by [OpenTest.AI](https://opentest.ai)** | Testing provided by [Testers.AI](https://testers.ai)

![Issues](https://img.shields.io/badge/issues-{total_issues}-{total_issues > 0 ? 'red' : 'green'}) ![Critical](https://img.shields.io/badge/critical-{critical_count}-red) ![Medium](https://img.shields.io/badge/medium-{medium_count}-yellow) ![Low](https://img.shields.io/badge/low-{low_count}-green) ![Testers](https://img.shields.io/badge/testers-{tester_count}-blue) ![Confidence](https://img.shields.io/badge/min_confidence-7%2F10-purple)

---

## Summary

Found **X** high-confidence issues across **Y** testers.

| Metric | Value |
|---|---|
| Total Issues | X |
| Critical (Priority 8-10) | X |
| Medium (Priority 4-7) | X |
| Low (Priority 1-3) | X |
| Testers Used | Y |

---

## Issues

### Issue 1: {bug_title}

![{tester_name}]({profile_image}) **{tester_name}** - {byline}

| Field | Value |
|---|---|
| Priority | {bug_priority}/10 |
| Confidence | {bug_confidence}/10 |
| Type | {bug_type} |
| Route To | {what_type_of_engineer_to_route_issue_to} |

**Why this is a bug:** {bug_reasoning_why_a_bug}

**Why it might not be a bug:** {bug_reasoning_why_not_a_bug}

**Suggested fix:** {suggested_fix}

**Why fix:** {bug_why_fix}

**Fix prompt:** `{prompt_to_fix_this_issue}`

---

(repeat for each issue, sorted by bug_priority descending)

## Raw JSON

```json
[ ... full JSON array ... ]

Report generated by OpenTest.AI | Testing by Testers.AI


---

### Output 3: HTML Report File (`opentestai-report.html`)

Write a modern dark-mode HTML file. Use this exact template, inserting the issues dynamically:

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>OpenTestAI Bug Report</title>
    <style>
        * { margin: 0; padding: 0; box-sizing: border-box; }
        body {
            font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, 'Helvetica Neue', Arial, sans-serif;
            background: #0d1117;
            color: #e6edf3;
            line-height: 1.6;
            min-height: 100vh;
        }
        .header {
            background: linear-gradient(135deg, #161b22 0%, #1a1f2e 100%);
            border-bottom: 1px solid #30363d;
            padding: 20px 0;
        }
        .header-content {
            max-width: 1200px;
            margin: 0 auto;
            padding: 0 24px;
            display: flex;
            align-items: center;
            justify-content: space-between;
            flex-wrap: wrap;
            gap: 16px;
        }
        .brand {
            display: flex;
            align-items: center;
            gap: 16px;
        }
        .brand img {
            height: 40px;
            width: auto;
        }
        .brand h1 {
            font-size: 24px;
            font-weight: 700;
            color: #f0f6fc;
        }
        .powered-by {
            display: flex;
            align-items: center;
            gap: 10px;
            font-size: 13px;
            color: #8b949e;
        }
        .powered-by img {
            height: 28px;
            width: auto;
        }
        .powered-by a {
            color: #58a6ff;
            text-decoration: none;
        }
        .powered-by a:hover { text-decoration: underline; }
        .container {
            max-width: 1200px;
            margin: 0 auto;
            padding: 32px 24px;
        }
        .summary-grid {
            display: grid;
            grid-template-columns: repeat(auto-fit, minmax(180px, 1fr));
            gap: 16px;
            margin-bottom: 32px;
        }
        .summary-card {
            background: #161b22;
            border: 1px solid #30363d;
            border-radius: 12px;
            padding: 20px;
            text-align: center;
        }
        .summary-card .number {
            font-size: 36px;
            font-weight: 700;
            color: #58a6ff;
        }
        .summary-card.critical .number { color: #f85149; }
        .summary-card.medium .number { color: #d29922; }
        .summary-card.low .number { color: #3fb950; }
        .summary-card .label {
            font-size: 13px;
            color: #8b949e;
            margin-top: 4px;
            text-transform: uppercase;
            letter-spacing: 0.5px;
        }
        .issue-card {
            background: #161b22;
            border: 1px solid #30363d;
            border-radius: 12px;
            margin-bottom: 20px;
            overflow: hidden;
            transition: border-color 0.2s;
        }
        .issue-card:hover { border-color: #58a6ff; }
        .issue-header {
            display: flex;
            align-items: center;
            gap: 16px;
            padding: 20px 24px;
            border-bottom: 1px solid #30363d;
            background: #1c2129;
        }
        .tester-avatar {
            width: 48px;
            height: 48px;
            border-radius: 50%;
            border: 2px solid #30363d;
            object-fit: cover;
            flex-shrink: 0;
        }
        .issue-title-area { flex: 1; }
        .issue-title {
            font-size: 18px;
            font-weight: 600;
            color: #f0f6fc;
            margin-bottom: 4px;
        }
        .tester-info {
            font-size: 13px;
            color: #8b949e;
        }
        .tester-info strong { color: #58a6ff; }
        .badges {
            display: flex;
            gap: 8px;
            flex-shrink: 0;
        }
        .badge {
            padding: 4px 12px;
            border-radius: 20px;
            font-size: 12px;
            font-weight: 600;
        }
        .badge-priority {
            background: rgba(248, 81, 73, 0.15);
            color: #f85149;
            border: 1px solid rgba(248, 81, 73, 0.3);
        }
        .badge-priority.medium {
            background: rgba(210, 153, 34, 0.15);
            color: #d29922;
            border: 1px solid rgba(210, 153, 34, 0.3);
        }
        .badge-priority.low {
            background: rgba(63, 185, 80, 0.15);
            color: #3fb950;
            border: 1px solid rgba(63, 185, 80, 0.3);
        }
        .badge-confidence {
            background: rgba(88, 166, 255, 0.15);
            color: #58a6ff;
            border: 1px solid rgba(88, 166, 255, 0.3);
        }
        .badge-type {
            background: rgba(188, 140, 255, 0.1);
            color: #bc8cff;
            border: 1px solid rgba(188, 140, 255, 0.2);
        }
        .issue-body { padding: 24px; }
        .issue-section {
            margin-bottom: 16px;
        }
        .issue-section:last-child { margin-bottom: 0; }
        .issue-section h4 {
            font-size: 12px;
            text-transform: uppercase;
            letter-spacing: 0.5px;
            color: #8b949e;
            margin-bottom: 6px;
        }
        .issue-section p {
            font-size: 14px;
            color: #c9d1d9;
        }
        .fix-prompt {
            background: #0d1117;
            border: 1px solid #30363d;
            border-radius: 8px;
            padding: 12px 16px;
            font-family: 'SFMono-Regular', Consolas, 'Liberation Mono', Menlo, monospace;
            font-size: 13px;
            color: #79c0ff;
            word-break: break-word;
        }
        .route-badge {
            display: inline-block;
            padding: 4px 12px;
            background: rgba(63, 185, 80, 0.1);
            color: #3fb950;
            border: 1px solid rgba(63, 185, 80, 0.2);
            border-radius: 20px;
            font-size: 12px;
            font-weight: 500;
        }
        .footer {
            text-align: center;
            padding: 40px 24px;
            border-top: 1px solid #30363d;
            margin-top: 40px;
            color: #8b949e;
            font-size: 13px;
        }
        .footer a { color: #58a6ff; text-decoration: none; }
        .footer a:hover { text-decoration: underline; }
        .footer .footer-logos {
            display: flex;
            align-items: center;
            justify-content: center;
            gap: 24px;
            margin-bottom: 12px;
        }
        .footer .footer-logos img { height: 32px; width: auto; opacity: 0.7; }
        .no-issues {
            text-align: center;
            padding: 60px 24px;
            color: #3fb950;
        }
        .no-issues h2 { font-size: 24px; margin-bottom: 8px; }
        .no-issues p { color: #8b949e; }
        @media (max-width: 768px) {
            .header-content { flex-direction: column; align-items: flex-start; }
            .issue-header { flex-direction: column; align-items: flex-start; }
            .badges { flex-wrap: wrap; }
            .summary-grid { grid-template-columns: repeat(2, 1fr); }
        }
    </style>
</head>
<body>
    <div class="header">
        <div class="header-content">
            <div class="brand">
                <img src="https://opentest.ai/img/otai.png" alt="OpenTest.AI">
                <h1>OpenTestAI Report</h1>
            </div>
            <div class="powered-by">
                Testing provided by
                <a href="https://testers.ai" target="_blank">
                    <img src="https://testers.ai/img/t_logo.png" alt="Testers.AI">
                </a>
                <a href="https://testers.ai" target="_blank">Testers.AI</a>
            </div>
        </div>
    </div>

    <div class="container">
        <!-- SUMMARY CARDS -->
        <div class="summary-grid">
            <div class="summary-card">
                <div class="number">{TOTAL_ISSUES}</div>
                <div class="label">Total Issues</div>
            </div>
            <div class="summary-card critical">
                <div class="number">{CRITICAL_COUNT}</div>
                <div class="label">Critical (8-10)</div>
            </div>
            <div class="summary-card medium">
                <div class="number">{MEDIUM_COUNT}</div>
                <div class="label">Medium (4-7)</div>
            </div>
            <div class="summary-card low">
                <div class="number">{LOW_COUNT}</div>
                <div class="label">Low (1-3)</div>
            </div>
            <div class="summary-card">
                <div class="number">{TESTER_COUNT}</div>
                <div class="label">Testers Used</div>
            </div>
        </div>

        <!-- ISSUE CARDS - Repeat for each issue, sorted by priority desc -->
        <!--
        For each issue in the JSON results, generate one of these cards.
        Replace all {PLACEHOLDERS} with actual values from the issue object.
        For badge-priority class: use no extra class for priority 8-10,
        add class "medium" for 4-7, add class "low" for 1-3.
        -->

        <div class="issue-card">
            <div class="issue-header">
                <img src="{profile_image}" alt="{tester}" class="tester-avatar">
                <div class="issue-title-area">
                    <div class="issue-title">{bug_title}</div>
                    <div class="tester-info">Found by <strong>{tester}</strong> &mdash; {byline}</div>
                </div>
                <div class="badges">
                    <span class="badge badge-priority">P{bug_priority}</span>
                    <span class="badge badge-confidence">C{bug_confidence}</span>
                    <span class="badge badge-type">{bug_type}</span>
                </div>
            </div>
            <div class="issue-body">
                <div class="issue-section">
                    <h4>Why This Is a Bug</h4>
                    <p>{bug_reasoning_why_a_bug}</p>
                </div>
                <div class="issue-section">
                    <h4>Counter-Argument</h4>
                    <p>{bug_reasoning_why_not_a_bug}</p>
                </div>
                <div class="issue-section">
                    <h4>Suggested Fix</h4>
                    <p>{suggested_fix}</p>
                </div>
                <div class="issue-section">
                    <h4>Why Fix This</h4>
                    <p>{bug_why_fix}</p>
                </div>
                <div class="issue-section">
                    <h4>Route To</h4>
                    <span class="route-badge">{what_type_of_engineer_to_route_issue_to}</span>
                </div>
                <div class="issue-section">
                    <h4>Fix Prompt</h4>
                    <div class="fix-prompt">{prompt_to_fix_this_issue}</div>
                </div>
            </div>
        </div>

        <!-- END ISSUE CARDS -->

        <!-- If no issues found, show this instead: -->
        <!--
        <div class="no-issues">
            <h2>No Issues Found</h2>
            <p>All testers gave this a clean bill of health. No high-confidence issues detected.</p>
        </div>
        -->
    </div>

    <div class="footer">
        <div class="footer-logos">
            <img src="https://opentest.ai/img/otai.png" alt="OpenTest.AI">
            <img src="https://testers.ai/img/t_logo.png" alt="Testers.AI">
        </div>
        Report generated by <a href="https://opentest.ai">OpenTest.AI</a> |
        Testing provided by <a href="https://testers.ai">Testers.AI</a>
    </div>
</body>
</html>

CRITICAL: When generating the HTML file:

  1. Replace all {PLACEHOLDERS} with actual values from the issue JSON
  2. Generate one .issue-card div per issue, sorted by bug_priority descending
  3. Use the tester's profile_image URL from their profile in STEP 2 for the <img class="tester-avatar"> src
  4. Set the priority badge class: no extra class for priority 8-10, medium for 4-7, low for 1-3
  5. If bug_type is an array, join with / for display
  6. If no issues found, show the .no-issues block instead of issue cards
  7. HTML-escape all dynamic content to prevent XSS

STEP 6: Write Output Files

After generating the results, write ALL output files. Use absolute paths so they are clickable in the terminal.

CRITICAL — Absolute Paths: Always use the full absolute path (e.g., /Users/username/project/opentestai-report-bing-com-2025-02-17T19-30.html) when writing files and when telling the user where files are saved. Relative paths are NOT clickable in Claude Code's terminal. Get the absolute path by prepending the current working directory.

Filename Convention

All output files use this naming pattern:

opentestai-{mode}-{domain}-{timestamp}.{ext}

Where:

  • {mode} = report (bugs), personas (persona feedback), or testcases (test cases)
  • {domain} = sanitized domain from the URL (e.g., bing-com, github-com-settings). Replace dots and slashes with hyphens. If no URL, use local
  • {timestamp} = ISO-ish timestamp: YYYY-MM-DDTHH-MM (e.g., 2025-02-17T19-30). Get from current system time
  • {ext} = json, md, or html

Examples:

  • opentestai-report-bing-com-2025-02-17T19-30.json
  • opentestai-personas-github-com-2025-02-17T19-32.html
  • opentestai-testcases-amazon-com-checkout-2025-02-17T19-35.md

This ensures runs never overwrite each other and enables diff/comparison mode against previous runs.

Screenshot Embedding in HTML Report

If a screenshot was captured during artifact collection:

  1. Save the screenshot as opentestai-screenshot-{domain}-{timestamp}.png in the same directory
  2. Embed it in the HTML report at the top of the container, after the summary cards:
<div class="issue-section" style="margin-bottom: 32px;">
    <h4>Page Tested</h4>
    <img src="opentestai-screenshot-{domain}-{timestamp}.png" alt="Screenshot of tested page"
         style="max-width: 100%; border-radius: 8px; border: 1px solid #30363d;">
    <p style="font-size: 13px; color: #8b949e; margin-top: 8px;">URL: {pageUrl}</p>
</div>

If no screenshot available (e.g., user pasted console logs only), skip this section.

Bug Detection Mode Output Files:

  1. Write JSON{cwd}/opentestai-report-{domain}-{timestamp}.json
  2. Write Markdown{cwd}/opentestai-report-{domain}-{timestamp}.md
  3. Write HTML{cwd}/opentestai-report-{domain}-{timestamp}.html
  4. Display results in chat (text-only, no images)

Persona Feedback Mode Output Files:

  1. Write JSON{cwd}/opentestai-personas-{domain}-{timestamp}.json
  2. Write Markdown{cwd}/opentestai-personas-{domain}-{timestamp}.md
  3. Write HTML{cwd}/opentestai-personas-{domain}-{timestamp}.html
  4. Display results in chat (text-only, no images)

Test Case Generation Mode Output Files:

  1. Write JSON{cwd}/opentestai-testcases-{domain}-{timestamp}.json
  2. Write Markdown{cwd}/opentestai-testcases-{domain}-{timestamp}.md
  3. Write HTML{cwd}/opentestai-testcases-{domain}-{timestamp}.html
  4. Display results in chat (text-only)

Auto-Open HTML Report in Browser

After writing the HTML report, automatically open it in the user's browser:

open "{absolute_path}/opentestai-report-{domain}-{timestamp}.html"

This uses the macOS open command. On Linux use xdg-open. On Windows use start. Open the MOST relevant HTML report (bug detection report if bugs mode ran, persona report if persona mode ran, etc.). If multiple modes ran, open the bug detection HTML report.

End-of-Run Summary (REQUIRED)

At the end of every run, display a summary with issue counts by severity AND absolute file paths:

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🧪 OpenTestAI Run Complete
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

📊 Results: {total_issues} issues found
   🔴 Critical (P8-10): {critical_count}
   🟡 Medium (P4-7):    {medium_count}
   🟢 Low (P1-3):       {low_count}
   👥 Testers used:     {tester_count}

📁 Reports saved:
   📄 {absolute_path}/opentestai-report-{domain}-{timestamp}.json
   📝 {absolute_path}/opentestai-report-{domain}-{timestamp}.md
   🌐 {absolute_path}/opentestai-report-{domain}-{timestamp}.html  ← opened in browser

Powered by OpenTest.AI | Testing by Testers.AI
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

For persona mode, show persona count and overall score instead of issue counts. For test case mode, show test case count. Show only the sections that were generated for the mode(s) that ran.

If diff/comparison mode was used, also show:

📊 Diff: {new_count} new | {fixed_count} fixed | {recurring_count} recurring

Tester Profiles Quick Reference

ID Name Specialty Profile Image Check Types
marcus Marcus Networking & Connectivity https://testers.ai/img/profiles/marcus.png networking, shipping
jason Jason JavaScript & Booking Flows https://testers.ai/img/profiles/jason.png javascript, booking
mia Mia UI/UX & Forms https://testers.ai/img/profiles/mia.png ui-ux, forms
sophia Sophia Accessibility https://testers.ai/img/profiles/sophia.png accessibility
tariq Tariq Security & OWASP https://testers.ai/img/profiles/tariq.png security, owasp
fatima Fatima Privacy & Cookie Consent https://testers.ai/img/profiles/fatima.png privacy, cookie-consent
sharon Sharon Error Messages & Careers https://testers.ai/img/profiles/sharon.png error-messages, careers
pete Pete AI Chatbots https://testers.ai/img/profiles/pete.png ai-chatbots
hiroshi Hiroshi GenAI Code https://testers.ai/img/profiles/hiroshi.png genai
zanele Zanele Mobile https://checkie.ai/images/profiles/zanele.png mobile
mei Mei WCAG Compliance https://checkie.ai/images/profiles/mei.png wcag
alejandro Alejandro GDPR Compliance https://testers.ai/img/profiles/alejandro.png gdpr
diego Diego Console Logs https://checkie.ai/images/profiles/diego.png console-logs
leila Leila Content https://checkie.ai/images/profiles/leila.png content
kwame Kwame Search Box https://checkie.ai/images/profiles/kwame.png search-box
zara Zara Search Results https://testers.ai/img/profiles/zara.png search-results
priya Priya Product Details https://checkie.ai/images/profiles/priya.png product-details
yara Yara Product Catalog https://checkie.ai/images/profiles/yara.png product-catalog
hassan Hassan News https://checkie.ai/images/profiles/hassan.png news
amara Amara Shopping Cart https://checkie.ai/images/profiles/amara.png shopping-cart
yuki Yuki Signup https://checkie.ai/images/profiles/yuki.png signup
mateo Mateo Checkout https://checkie.ai/images/profiles/mateo.png checkout
anika Anika Social Profiles https://checkie.ai/images/profiles/anika.png social-profiles
zoe Zoe Social Feed https://checkie.ai/images/profiles/zoe.png social-feed
zachary Zachary Landing Pages https://checkie.ai/images/profiles/zachary.png landing
sundar Sundar Homepage https://checkie.ai/images/profiles/sundar.png homepage
samantha Samantha Contact Pages https://checkie.ai/images/profiles/samantha.png contact
richard Richard Pricing Pages https://checkie.ai/images/profiles/richard.png pricing
ravi Ravi About Pages https://checkie.ai/images/profiles/ravi.png about
rajesh Rajesh System Errors https://checkie.ai/images/profiles/rajesh.png system-errors
olivia Olivia Video https://checkie.ai/images/profiles/olivia.png video
ingrid Ingrid i18n & Localization https://testers.ai/img/profiles/ingrid.png i18n, localization, translation
viktor Viktor Performance & Core Web Vitals https://testers.ai/img/profiles/viktor.png performance, web-vitals, page-speed

Example Workflow

User says: "Check this page for bugs" (with a screenshot uploaded)

  1. Artifacts available: screenshot
  2. Select testers from profiles above: Mia (UI/UX), Sophia (accessibility), Leila (content), Tariq (security), Fatima (privacy), Sharon (errors) + any content-signal testers
  3. Run each tester's prompt with the screenshot
  4. Collect issues with bug_confidence >= 7
  5. Write opentestai-report.json, opentestai-report.md, and opentestai-report.html (use absolute paths)
  6. Display results in chat with tester avatars
  7. Tell user where the report files are saved

User says: "Analyze these console logs" (with pasted console output)

  1. Artifacts available: console_logs
  2. Select testers: Diego (console), Jason (javascript), Sharon (errors), Rajesh (system-errors) + any content-signal testers
  3. Run each tester's prompt with the console logs
  4. Collect issues with bug_confidence >= 7 (console issues typically have confidence of 10)
  5. Write opentestai-report.json, opentestai-report.md, and opentestai-report.html (use absolute paths)
  6. Display results in chat with tester avatars
  7. Tell user where the report files are saved

User provides multiple artifacts (screenshot + console + network)

  1. Run screenshot testers against the screenshot
  2. Run console testers against the console logs
  3. Run network testers against the network logs
  4. Merge and deduplicate all results
  5. Write opentestai-report.json, opentestai-report.md, and opentestai-report.html (use absolute paths)
  6. Display combined results in chat with tester avatars
  7. Tell user where the report files are saved


MODE 2: Persona Feedback

How It Works

  1. Receive a URL or page artifact from the user
  2. Determine panel size — default is 5 personas unless user specifies otherwise
  3. Run the persona analysis prompt against the page content/screenshot
  4. Generate diverse personas with feedback from each persona's perspective
  5. Report persona feedback in JSON format across three outputs (chat, .md, .html)

When to Use Persona Feedback Mode

Trigger this mode when the user says things like:

  • "Get persona feedback for this page"
  • "What would users think of this?"
  • "Persona panel for this URL"
  • "User feedback for this site"
  • "How would different people react to this page?"
  • "Generate user personas for this"

Persona Images

All persona profile images are hosted at https://testers.ai/img/profiles/{image_name}.

Available persona images — choose the most appropriate for each generated persona based on their age, gender, and race:

Image File Description
fangirl_female.png Female superfan/enthusiast
fanboy_male.png Male superfan/enthusiast
skeptic_female.png Female skeptic
skeptic_male.png Male skeptic
technoob_male.png Male tech novice
technologist_male.png Male technologist
technologist_female.png Female technologist
older_asian_male.png Older Asian male
asian_female.png Asian female
asian_male.png Asian male
black_female.png Black female
black_male.png Black male
indian_female.png Indian female
indian_male.png Indian male
older_asian_female.png Older Asian female
older_black_female.png Older Black female
older_black_male.png Older Black male
older_hispanic_male.png Older Hispanic male
older_indian_male.png Older Indian male
older_white_female.png Older White female
older_white_male.png Older White male
skeptic.png Generic skeptic
superfan.png Generic superfan
white_female.png White female
white_male.png White male
young_asian_female.png Young Asian female
young_asian_male.png Young Asian male
young_black_woman.png Young Black woman
young_blacke_male.png Young Black male
young_hispanic_female.png Young Hispanic female
young_indian_female.png Young Indian female
young_white_female.png Young White female

Rules for persona image selection:

  • Each persona image should be used only once per panel
  • Always include at least two from: fangirl_female.png, fanboy_male.png, skeptic_male.png, skeptic_female.png, technoob_male.png, technologist_female.png, technologist_male.png
  • The full image URL is: https://testers.ai/img/profiles/{image_name}

Persona Analysis Prompt

For a given URL or page artifact, run this prompt. Replace {url} with the actual URL, {panelSize} with the number of personas (default 5), and {customInstructions} with any user-provided custom instructions (or omit if none).

Consider the website at the url: {url}
Analyze the content, elements, screenshots, and URL of the specified webpage.
This analysis should focus on deriving detailed user persona descriptions for potential users of the website,
based on its textual content, visual elements, and overall purpose.
Make sure the gender and race of the personas are diverse in age and gender and race.
Provide feedback for {panelSize} personas. Also consider which apps would be competitive from the user persona's perspective
and use cases. Follow these steps for each persona, for a thorough analysis:

IMPORTANT: Ignore any incomplete or truncated code snippets that don't have proper opening/closing tags. Users often don't perfectly select code for analysis, so focus on the overall page content and functionality rather than incomplete code fragments.

1. Content Summary: Provide a concise summary of the webpage's main content, emphasizing key themes, topics, and the primary message conveyed.

2. Visual Analysis: Describe the webpage's visual elements, including images, color schemes, and layout.
Discuss their impact on the viewer's perception and infer how they contribute to the webpage's goals.

3. Purpose Identification: Determine the webpage's intended purpose or goal, such as selling a product, providing information, or offering a service.

4. User Persona Generation: Based on the analysis above, create {panelSize} distinct user personas likely to visit the webpage.
For each persona, detail the following:
  - Name: Assign a unique name.
  - Age:
  - Gender: Either 'male' or 'female'
  - Race: one of asian, indian, white, arabic, african, or hispanic.
  - Background: Describe their professional and personal background.
  - Interests: Specify their interests, especially those relevant to the webpage's content.
  - Web Usage Patterns: Describe their typical web usage behaviors that would lead them to this webpage.
  - Actions: List the actions they are likely to take on the webpage, considering their background and interests.
  - Expected Functionality and Value: For each action, describe the functionality they expect to find and the value it would provide them.
  - Profile Image: Pick the most relevant persona image from the available persona images list (see Persona Images table above). Use the full URL: https://testers.ai/img/profiles/{image_name}

5. Persona Feedback:
Provide detailed feedback from each persona's perspective, focusing on:
  - Design: The webpage's visual and navigational design.
  - Usability: The ease of use and accessibility of the webpage.
  - Content Relevance: How relevant and useful the content is to the persona.
  - Appealing Features: Any features or information the persona finds particularly appealing.
  - Lacking Aspects: Any missing features or information that would make the webpage more useful to the persona.

Conclusion: Summarize the analysis by listing the names and providing detailed descriptions of the personas, including their feedback, likely actions, expected functionality, and the value they seek from the website.

Persona comments must be in the first-person voice of the persona, not in 3rd-person.
Each persona comment must be unique and represent that persona's analysis of the page.
The personas should always include two of these: [fangirl_female.png, fanboy_male.png, skeptic_male.png, skeptic_female.png, technoob_male.png, technologist_female.png, technologist_male.png]
Each persona image should be used only once.

{customInstructions}

Return ONLY valid JSON in this exact format:
{
  "overall_purpose_of_page": "",
  "overall_score": [1-10],
  "overall_feedback_summary": "",
  "overall_visual_score": [1-10],
  "overall_visual_analysis": "",
  "overall_visual_comments": "",
  "overall_design_score": [1-10],
  "overall_design_comments": "",
  "overall_usability_score": [1-10],
  "overall_usability_comments": "",
  "overall_content_score": [1-10],
  "overall_content_comments": "",
  "overall_features_score": [1-10],
  "overall_features_comments": "",
  "overall_competitive_score": [1-10],
  "overall_competitive_comments": "",
  "overall_emotional_score": [1-10],
  "overall_emotional_comments": "",
  "overall_accessibility_score": [1-10],
  "overall_accessibility_comments": "",
  "overall_lacking_aspects": [],
  "overall_competitive_apps": [],
  "overall_suggestions": [],
  "overall_net_promotor_score": [1-10],
  "user_persona_feedback": [
    {
      "name": "",
      "age": "",
      "gender": "",
      "race": "",
      "biography": "",
      "profile_image": "",
      "interests": "",
      "page_actions": ["", ""],
      "persona_purpose_of_page": "",
      "persona_score": [1-10],
      "persona_feedback_summary": "",
      "persona_visual_score": [1-10],
      "persona_visual_comments": "",
      "persona_design_score": [1-10],
      "persona_design_comments": "",
      "persona_usability_score": [1-10],
      "persona_usability_comments": "",
      "persona_content_score": [1-10],
      "persona_content_comments": "",
      "persona_features_score": [1-10],
      "persona_features_comments": "",
      "persona_competitive_score": [1-10],
      "persona_competitive_comments": "",
      "persona_emotional_score": [1-10],
      "persona_emotional_comments": "",
      "persona_accessibility_score": [1-10],
      "persona_accessibility_comments": "",
      "persona_lacking_aspects": [],
      "persona_suggestions": [],
      "persona_net_promotor_score": [1-10]
    }
  ]
}

ONLY return valid JSON, no other strings, no ellipses (e.g. "..."), nothing outside of the braces of the JSON.

Persona Feedback Output

Persona feedback follows the same three-output pattern as bug detection.

Persona Chat Output

Display persona results in chat. NOTE: Claude Code's terminal cannot render inline images — use text-based persona identity.

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🧑‍🤝‍🧑 OpenTestAI Persona Feedback Report
Created and open-sourced by Testers.AI
Powered by OpenTest.AI | Personas by Testers.AI
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Page: {url}
Overall Score: {overall_score}/10 | NPS: {overall_net_promotor_score}/10
Purpose: {overall_purpose_of_page}

📊 Overall Scores:
   Visual: {overall_visual_score}/10 | Design: {overall_design_score}/10
   Usability: {overall_usability_score}/10 | Content: {overall_content_score}/10
   Features: {overall_features_score}/10 | Accessibility: {overall_accessibility_score}/10
   Competitive: {overall_competitive_score}/10 | Emotional: {overall_emotional_score}/10

Summary: {overall_feedback_summary}

Competitive Apps: {overall_competitive_apps}
Key Suggestions: {overall_suggestions}

───────────────────────────────────────
👤 Persona #1: {name} ({age}, {gender}, {race})
   Image: {profile_image}
   Bio: {biography}
   Interests: {interests}
   Score: {persona_score}/10 | NPS: {persona_net_promotor_score}/10

   Feedback: {persona_feedback_summary}

   Design ({persona_design_score}/10): {persona_design_comments}
   Usability ({persona_usability_score}/10): {persona_usability_comments}
   Content ({persona_content_score}/10): {persona_content_comments}
   Features ({persona_features_score}/10): {persona_features_comments}

   Lacking: {persona_lacking_aspects}
   Suggestions: {persona_suggestions}
───────────────────────────────────────

(repeat for each persona)

Persona Markdown Report (opentestai-personas-report.md)

Write a markdown report with:

# OpenTestAI Persona Feedback Report

![OpenTest.AI](https://opentest.ai/img/otai.png)

**Powered by [OpenTest.AI](https://opentest.ai)** | Personas provided by [Testers.AI](https://testers.ai)

![Overall](https://img.shields.io/badge/overall-{overall_score}%2F10-blue) ![NPS](https://img.shields.io/badge/NPS-{overall_net_promotor_score}%2F10-{overall_net_promotor_score >= 7 ? 'green' : overall_net_promotor_score >= 4 ? 'yellow' : 'red'}) ![Personas](https://img.shields.io/badge/personas-{persona_count}-purple) ![Visual](https://img.shields.io/badge/visual-{overall_visual_score}%2F10-blue) ![Usability](https://img.shields.io/badge/usability-{overall_usability_score}%2F10-blue)

---

## Page Analyzed: {url}

## Overall Scores

| Category | Score |
|---|---|
| Overall | {overall_score}/10 |
| Visual | {overall_visual_score}/10 |
| Design | {overall_design_score}/10 |
| Usability | {overall_usability_score}/10 |
| Content | {overall_content_score}/10 |
| Features | {overall_features_score}/10 |
| Competitive | {overall_competitive_score}/10 |
| Emotional | {overall_emotional_score}/10 |
| Accessibility | {overall_accessibility_score}/10 |
| Net Promoter Score | {overall_net_promotor_score}/10 |

**Purpose:** {overall_purpose_of_page}

**Summary:** {overall_feedback_summary}

**Competitive Apps:** {overall_competitive_apps}

**Suggestions:** {overall_suggestions}

**Lacking Aspects:** {overall_lacking_aspects}

---

## Personas

### {name} ({age}, {gender}, {race})

![{name}]({profile_image})

**Bio:** {biography}

**Interests:** {interests}

**Actions on page:** {page_actions}

| Category | Score | Comments |
|---|---|---|
| Overall | {persona_score}/10 | {persona_feedback_summary} |
| Visual | {persona_visual_score}/10 | {persona_visual_comments} |
| Design | {persona_design_score}/10 | {persona_design_comments} |
| Usability | {persona_usability_score}/10 | {persona_usability_comments} |
| Content | {persona_content_score}/10 | {persona_content_comments} |
| Features | {persona_features_score}/10 | {persona_features_comments} |
| Competitive | {persona_competitive_score}/10 | {persona_competitive_comments} |
| Emotional | {persona_emotional_score}/10 | {persona_emotional_comments} |
| Accessibility | {persona_accessibility_score}/10 | {persona_accessibility_comments} |
| NPS | {persona_net_promotor_score}/10 | |

**Lacking:** {persona_lacking_aspects}

**Suggestions:** {persona_suggestions}

---

(repeat for each persona)

## Raw JSON

```json
{ ... full JSON ... }

Report generated by OpenTest.AI | Personas by Testers.AI


#### Persona HTML Report (`opentestai-personas-report.html`)

Generate an HTML report using the same dark-mode styling as the bug detection HTML report. Key differences:

1. **Header**: "OpenTestAI Persona Feedback Report"
2. **Summary cards**: Show overall scores (Overall, Visual, Design, Usability, Content, Features, Competitive, Emotional, Accessibility, NPS)
3. **Competitive Landscape section** (after summary cards, before personas):
   - List `overall_competitive_apps` as a card grid with app names
   - Show `overall_competitive_score` and `overall_competitive_comments`
   - Title: "Competitive Landscape"
4. **Overall Suggestions section**: Bulleted list of `overall_suggestions` and `overall_lacking_aspects`
5. **Persona cards** instead of issue cards:
   - Persona avatar image from `profile_image` URL
   - Name, age, gender, race
   - Biography and interests
   - Score breakdown table
   - First-person feedback quotes
   - Lacking aspects and suggestions
   - Competitive comments per persona
6. **Footer**: Same branding as bug detection report

Use the same CSS from the bug detection HTML template. Replace `.issue-card` with `.persona-card` styling:

```html
<!-- Persona card structure -->
<div class="issue-card">
    <div class="issue-header">
        <img src="{profile_image}" alt="{name}" class="tester-avatar">
        <div class="issue-title-area">
            <div class="issue-title">{name}</div>
            <div class="tester-info">{age}, {gender}, {race} &mdash; <strong>Score: {persona_score}/10</strong></div>
        </div>
        <div class="badges">
            <span class="badge badge-confidence">NPS {persona_net_promotor_score}</span>
        </div>
    </div>
    <div class="issue-body">
        <div class="issue-section">
            <h4>Biography</h4>
            <p>{biography}</p>
        </div>
        <div class="issue-section">
            <h4>Feedback</h4>
            <p>{persona_feedback_summary}</p>
        </div>
        <div class="issue-section">
            <h4>Scores</h4>
            <!-- Score breakdown table -->
        </div>
        <div class="issue-section">
            <h4>Lacking Aspects</h4>
            <p>{persona_lacking_aspects}</p>
        </div>
        <div class="issue-section">
            <h4>Suggestions</h4>
            <p>{persona_suggestions}</p>
        </div>
    </div>
</div>

Persona Feedback Example Workflows

User says: "Get persona feedback for https://example.com"

  1. Capture page — take screenshot and get page text via MCP tools (or user provides)
  2. Set panel size — default 5
  3. Run persona analysis prompt against the page content
  4. Generate JSON with overall scores and 5 diverse personas
  5. Write opentestai-personas-report.json, opentestai-personas-report.md, and opentestai-personas-report.html (use absolute paths)
  6. Display persona feedback in chat (text-only, no images)
  7. Tell user where the report files are saved

User says: "Full analysis of this page" (all three modes)

  1. Run Bug Detection Mode (Steps 1-6 from bug detection workflow)
  2. Run Persona Feedback Mode (Steps 1-7 from persona workflow)
  3. Run Test Case Generation Mode (Steps 1-7 from test case workflow)
  4. Output all reports (9 files total: .json/.md/.html × 3 modes)
  5. Display all in chat with absolute file paths


MODE 3: Test Case Generation

How It Works

  1. Receive a URL or page artifact from the user
  2. Capture page context — screenshot, DOM/accessibility tree, page elements (buttons, links, forms, etc.), network traffic, console logs
  3. Determine test case count — default is 10 unless user specifies otherwise
  4. Run the test case generation prompt against the page content and elements
  5. Generate prioritized test cases covering critical user journeys
  6. Report test cases in JSON format across three outputs (chat, .md, .html)

When to Use Test Case Generation Mode

Trigger this mode when the user says things like:

  • "Generate test cases for this page"
  • "Create tests for this URL"
  • "Write test cases"
  • "Build a test suite"
  • "What should I test on this page?"
  • "Create QA test plan"
  • "Generate 20 test cases"

Gathering Page Context

Before generating test cases, gather as much page context as possible using MCP browser tools (if available) or from user-provided artifacts:

Page Context to Gather:
1. URL and page title
2. Screenshot (visual context)
3. Page elements via DOM/accessibility tree:
   - Buttons (text, location, state)
   - Links (text, href, location)
   - Forms (fields, labels, validation)
   - Modals/dialogs
   - Dropdowns/selects
   - Images (alt text, src)
   - Videos
   - Iframes
   - Draggable elements
   - Sortable elements
4. Console logs (errors, warnings)
5. Network traffic (API calls, failed requests)
6. Performance metrics (if available)

If MCP tools are available:

  • Use read_page or browser_snapshot to get the accessibility tree / DOM elements
  • Use browser_take_screenshot for visual context
  • Use browser_console_messages for console logs
  • Use browser_network_requests for network traffic

If MCP tools are NOT available, ask the user to provide page content, screenshot, or URL.


Test Case Generation Prompt

For a given page, run this prompt. Replace {testCaseCount} with the number of test cases (default 10), {pageUrl} with the URL, {pageName} with the page title, {appName} with the application name, and fill in the page elements data from the gathered context.

Create exactly {testCaseCount} comprehensive test cases for the {pageName} page from {appName} (website).

These {testCaseCount} tests should represent the optimal test suite - if you only had time to execute these {testCaseCount} tests, they would provide the best coverage and perception of coverage for this page. Prioritize the most critical user journeys and business functionality.

PAGE CONTEXT:
- Current URL: {pageUrl}
- Page Title: {pageName}
- Screenshot: [Use the captured screenshot as visual context for creating test cases]
- Page Elements: [All visible page elements and interactions analyzed from DOM/accessibility tree]

{customInstructions}

IMPORTANT: Use the screenshot and page analysis to create test cases that are specific to the actual content and functionality visible on this page. Focus on real elements, buttons, forms, and user flows that exist on the current page.

Each test should be:
1. **Specific and actionable** - Clear steps that can be executed
2. **Realistic user scenarios** - Based on actual user behavior
3. **Business critical** - Focus on core functionality and revenue-impacting features
4. **Diverse coverage** - Different types of interactions (clicks, forms, navigation, etc.)
5. **Edge cases** - Include boundary conditions and error scenarios

Page Content:
- Title: {pageName}
- Buttons: {buttons_json}
- Links: {links_json}
- Forms: {forms_json}
- Modals: {modals_json}
- Dropdowns: {dropdowns_json}
- Images: {images_json}
- Videos: {videos_json}
- Iframes: {iframes_json}
- Errors: {errors_json}
- Warnings: {warnings_json}
- Draggable Elements: {draggable_json}
- Sortable Elements: {sortable_json}

Additional Data:
- Network Traffic: {network_request_count} requests/responses captured
- Console Logs: {console_log_count} console messages captured

Return ONLY a valid JSON array. Do not include any other text, explanations, or markdown formatting. The response must start with [ and end with ].

CRITICAL JSON FORMATTING RULES:
1. All property names must be in double quotes
2. All string values must be in double quotes
3. No trailing commas after the last item in arrays or objects
4. No newlines inside string values - use \n for line breaks
5. Escape all quotes inside strings with "
6. Ensure proper nesting of objects and arrays
7. The JSON must be valid and parseable

Each test case must follow this exact format:
[
  {
    "test_case_id": "descriptive_snake_case_id",
    "test_case_name": "Human readable test case name",
    "url": "{pageUrl}",
    "overall_description": "Detailed description of what this test verifies and why it matters.",
    "validation_conditions": "What must be true for this test to pass.",
    "test_steps": [
      "Step 1: Navigate to the page",
      "Step 2: Perform the action",
      "Step 3: Verify the result"
    ],
    "priority": 1-10,
    "priority_reason": "Why this test is important and its priority level.",
    "if_fails_why_fix": "Business/user impact if this test fails.",
    "probable_impact": "What users experience if this functionality is broken.",
    "probable_cause": "Common technical reasons this might fail.",
    "route_to_engineer": "Frontend engineer|Backend engineer|Full-stack engineer|DevOps|QA",
    "data": {
      "expected_test_input_parameters": [
        {
          "name": "parameter_name",
          "value": "parameter_value",
          "reason": "Why this input is used"
        }
      ],
      "expected_results": [
        {
          "name": "result_name",
          "value": "expected_value",
          "reason": "Why this result is expected"
        }
      ]
    }
  }
]

Focus on the most important user flows and functionality visible on this page.

Test Case Output Format

Test Case JSON Schema

Each test case in the JSON array must include:

{
  "test_case_id": "string - Descriptive snake_case identifier (e.g., click_products_link_header)",
  "test_case_name": "string - Human-readable test case name",
  "url": "string - URL of the page being tested",
  "priority": "number (1-10) - 10=critical user path, 9=revenue-impacting, 8=core feature, 7=important flow, 6=secondary feature, 5=edge case, 4=minor feature, 3=cosmetic, 2=rare scenario, 1=nice-to-have",
  "overall_description": "string - What this test verifies and why",
  "validation_conditions": "string - Pass/fail criteria",
  "test_steps": ["string[] - Ordered list of steps to execute"],
  "priority_reason": "string - Why this test matters and justifies its priority score",
  "if_fails_why_fix": "string - Impact if this test fails",
  "probable_impact": "string - User experience impact",
  "probable_cause": "string - Likely technical root cause",
  "route_to_engineer": "string - Type of engineer to fix",
  "data": {
    "expected_test_input_parameters": [
      {
        "name": "string - Input parameter name",
        "value": "string - Input value",
        "reason": "string - Why this input"
      }
    ],
    "expected_results": [
      {
        "name": "string - Expected result name",
        "value": "string - Expected value",
        "reason": "string - Why this is expected"
      }
    ]
  }
}

Test Case Chat Output

Display test case results in chat. NOTE: Claude Code's terminal cannot render inline images — use text-based formatting.

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🧪 OpenTestAI Test Case Report
Created and open-sourced by Testers.AI
Powered by OpenTest.AI | Testing by Testers.AI
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Page: {pageUrl} — {pageName}
Generated {testCaseCount} test cases.

───────────────────────────────────────
📋 Test #1: {test_case_name}
   ID: {test_case_id}
   ⚡ Priority: {priority}/10 — {priority_reason}
   Description: {overall_description}

   Steps:
   1. {test_step_1}
   2. {test_step_2}
   3. {test_step_3}

   Validation: {validation_conditions}
   If fails: {if_fails_why_fix}
   Impact: {probable_impact}
   Root cause: {probable_cause}
   Route to: {route_to_engineer}

   Expected Results:
   - {result_name}: {result_value} ({result_reason})
───────────────────────────────────────

(repeat for each test case)
  1. Header: Show "OpenTestAI Test Case Report" with branding
  2. Summary: "{testCaseCount} test cases generated for {pageName}"
  3. Test cases: List all test cases with full details
  4. JSON output: Provide the full JSON array at the end
  5. Footer: "Created and open-sourced by Testers.AI | Powered by OpenTest.AI"
  6. Remind user: "Full report saved to opentestai-testcases-report.html — open in browser for the visual experience."

Test Case Markdown Report (opentestai-testcases-report.md)

Write a markdown report:

# OpenTestAI Test Case Report

![OpenTest.AI](https://opentest.ai/img/otai.png)

**Powered by [OpenTest.AI](https://opentest.ai)** | Testing provided by [Testers.AI](https://testers.ai)

![Test Cases](https://img.shields.io/badge/test_cases-{testCaseCount}-blue) ![High Priority](https://img.shields.io/badge/high_priority_(8--10)-{high_priority_count}-red) ![Medium Priority](https://img.shields.io/badge/medium_priority_(4--7)-{medium_priority_count}-yellow) ![Low Priority](https://img.shields.io/badge/low_priority_(1--3)-{low_priority_count}-green)

---

## Page: {pageUrl}

**Title:** {pageName}

**Test Cases Generated:** {testCaseCount}

---

## Test Cases

### Test 1: {test_case_name}

**ID:** `{test_case_id}` | **Priority:** ⚡ {priority}/10

**Description:** {overall_description}

**Steps:**
1. {step_1}
2. {step_2}
3. {step_3}

| Field | Value |
|---|---|
| Validation | {validation_conditions} |
| Priority Score | {priority}/10 |
| Priority Reason | {priority_reason} |
| If Fails | {if_fails_why_fix} |
| Impact | {probable_impact} |
| Probable Cause | {probable_cause} |
| Route To | {route_to_engineer} |

**Expected Inputs:**
| Name | Value | Reason |
|---|---|---|
| {input_name} | {input_value} | {input_reason} |

**Expected Results:**
| Name | Value | Reason |
|---|---|---|
| {result_name} | {result_value} | {result_reason} |

---

(repeat for each test case)

## Raw JSON

```json
[ ... full JSON array ... ]

Report generated by OpenTest.AI | Testing by Testers.AI


---

### Test Case HTML Report (`opentestai-testcases-report.html`)

Generate an HTML report using the same dark-mode styling as the bug detection HTML report. Key differences:

1. **Header**: "OpenTestAI Test Case Report"
2. **Summary cards**: Show total test cases, count by route_to_engineer type
3. **Test case cards** instead of issue cards:
   - Test case name and ID
   - Description
   - Numbered test steps
   - Validation conditions
   - Priority reason and impact
   - Expected inputs and results tables
   - Route to engineer badge
4. **Footer**: Same branding as bug detection report

Use the same CSS from the bug detection HTML template. Test case card structure:

```html
<!-- Test case card structure -->
<div class="issue-card">
    <div class="issue-header">
        <div class="issue-title-area">
            <div class="issue-title">{test_case_name}</div>
            <div class="tester-info"><code>{test_case_id}</code></div>
        </div>
        <div class="badges">
            <span class="badge badge-priority" style="background:{priority >= 8 ? '#f85149' : priority >= 5 ? '#d29922' : '#3fb950'}">⚡ P{priority}</span>
            <span class="badge badge-type">{route_to_engineer}</span>
        </div>
    </div>
    <div class="issue-body">
        <div class="issue-section">
            <h4>Description</h4>
            <p>{overall_description}</p>
        </div>
        <div class="issue-section">
            <h4>Test Steps</h4>
            <ol>
                <li>{step_1}</li>
                <li>{step_2}</li>
                <li>{step_3}</li>
            </ol>
        </div>
        <div class="issue-section">
            <h4>Validation</h4>
            <p>{validation_conditions}</p>
        </div>
        <div class="issue-section">
            <h4>Priority</h4>
            <p>{priority_reason}</p>
        </div>
        <div class="issue-section">
            <h4>If Fails</h4>
            <p>{if_fails_why_fix}</p>
        </div>
        <div class="issue-section">
            <h4>Expected Results</h4>
            <table>
                <tr><th>Name</th><th>Value</th><th>Reason</th></tr>
                <tr><td>{name}</td><td>{value}</td><td>{reason}</td></tr>
            </table>
        </div>
        <div class="issue-section">
            <h4>Route To</h4>
            <span class="route-badge">{route_to_engineer}</span>
        </div>
    </div>
</div>

Test Case Generation Example Workflows

User says: "Generate test cases for this page"

  1. Capture page — take screenshot, get DOM/accessibility tree, console logs, network traffic via MCP tools (or user provides)
  2. Extract page elements — buttons, links, forms, modals, dropdowns, images, videos, iframes, errors
  3. Set test case count — default 10
  4. Run test case generation prompt against page content and elements
  5. Generate JSON array of prioritized test cases
  6. Write opentestai-testcases-report.json, opentestai-testcases-report.md, and opentestai-testcases-report.html (use absolute paths)
  7. Display test cases in chat (text-only)
  8. Tell user where the report files are saved

User says: "Create 20 test cases for https://example.com"

  1. Navigate to URL via MCP browser tools
  2. Capture all page context (screenshot, DOM, console, network)
  3. Set test case count — 20 (user specified)
  4. Run test case generation prompt
  5. Generate JSON array of 20 prioritized test cases
  6. Write opentestai-testcases-report.json, opentestai-testcases-report.md, and opentestai-testcases-report.html (use absolute paths)
  7. Display test cases in chat
  8. Tell user where the report files are saved

User says: "Full analysis of this page" (all three modes)

  1. Run Bug Detection Mode (find bugs)
  2. Run Persona Feedback Mode (generate personas)
  3. Run Test Case Generation Mode (create test cases)
  4. Output all reports (9 files total: 3 per mode × 3 modes — .json, .md, .html each)
  5. Display all in chat
  6. Show end-of-run summary with all absolute file paths

Important Notes

  • High confidence only: Never report speculative issues. If uncertain, don't report it.
  • No false positives: It is always better to report 0 issues than to report issues that aren't real.
  • Tester identity: Each issue must include the tester name, byline, and image_url (profile image) of the agent who found it.
  • Show tester avatar: ALWAYS display the tester's profile image alongside their issue in the markdown and HTML outputs (NOT in chat — chat is text-only).
  • Actionable fixes: Every issue must include a suggested_fix and prompt_to_fix_this_issue that an engineer or AI can use.
  • Deduplication: If multiple testers find the same issue, consolidate into one entry with the highest confidence.
  • Empty results are OK: Return [] if no high-confidence issues are found. This is a valid and good result.
  • Branding: Always show the OpenTest.AI logo (https://opentest.ai/img/otai.png) and brand name at the top, with note that testing is provided by Testers.AI (https://testers.ai/img/t_logo.png) in the header.
  • Four outputs required: Every mode MUST produce chat output, .json file, .md file, and .html file.
  • Absolute paths required: ALWAYS use absolute file paths when writing files and when showing the user where files are saved. Relative paths are NOT clickable in Claude Code's terminal.
  • Self-contained: This skill file contains ALL tester profiles, prompts, persona prompts, test case prompts, and mappings inline. No external files are needed.
  • Persona diversity: Always ensure personas are diverse in age, gender, and race. Always include at least two archetype personas (fangirl, fanboy, skeptic, technoob, technologist).
  • Persona voice: All persona comments must be written in first-person voice ("I think...", "I would..."), never third-person.
  • Persona images: All persona profile images use the URL pattern https://testers.ai/img/profiles/{image_name}. Each image should only be used once per panel.
  • Test case specificity: Test cases must reference actual elements visible on the page — real button text, real link text, real form fields. Do NOT generate generic test cases.
  • Test case coverage: Prioritize critical user journeys, revenue-impacting features, and business-critical functionality. Include edge cases and error scenarios.
  • Test case format: Test cases must be returned as a valid JSON array with no extra text, markdown, or formatting outside the JSON.
  • Test case count: Default to 10 test cases unless the user specifies a different number.