The Easiest Way to Use AI in Testing: No Generic Prompts, Real Workflow
No generic prompts. No hallucinated test cases. A real workflow built on git diffs, MCP servers, and your actual codebase — with 5 structured prompts you can copy today.
The Problem
Most QA engineers never see the code behind the application they test. They get a Jira ticket, open the app, and start clicking.
They have no idea what the developer actually changed, which services were impacted, or where the real risk is. The result? Testing blind. Hours wasted on low-risk areas. Critical bugs slip to production.
Here's a real scenario: a developer refactors formatCurrency() — a shared utility used by 3 other features. Your ticket only says "checkout." You test checkout. It works. But after release, invoices show wrong amounts and order history is broken.
You tested the ticket. Not the impact.
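A single recursive grep would have exposed the blast radius of that refactor before testing even started. A minimal sketch (the sample files and import lines are invented for illustration; in real life you grep your actual frontend repo):

```shell
# Build a tiny sample src/ folder so the example runs anywhere
mkdir -p app/src
echo 'import { formatCurrency } from "./utils";' > app/src/checkout.ts
echo 'import { formatCurrency } from "./utils";' > app/src/invoices.ts
echo 'import { formatCurrency } from "./utils";' > app/src/order-history.ts

# Every consumer of the refactored utility, not just the one in the ticket
grep -rln "formatCurrency" app/src
```

Three features, one ticket. That gap is the whole problem.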
The Idea
What if, before touching the app, you already knew everything about what changed? Which components were affected, the risk level, which tests cover it, and what to test first?
No new tools. No expensive platform. Just your IDE, your repos, and 5 prompts.
Works for manual QA (what to test) and automation QA (what to automate). Tool-agnostic: Copilot, Claude Code, Cursor — any AI with repo access.
What You Need
- VS Code — free, with multi-root workspace support
- AI coding assistant — GitHub Copilot, Claude Code, Cursor, or similar
- Git access — clone rights to all your application repos
- At least 2 repos — Frontend + Backend, or Frontend + QA Tests
That's it. No paid tools required. 15 minutes to set up.
Step 1: The Multi-Repo Workspace
The key insight: your AI assistant can only help with what it can see. If you open one repo, it can't trace a frontend change to a backend service or check if your QA tests still cover it.
The fix is a VS Code multi-root workspace. One file that gives your AI visibility across all repos at once.
Clone your repos under one parent folder
~/projects/
├── your-frontend/
├── your-qa-tests/
├── your-backend-api/
└── your-search-service/
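Getting to that layout is a couple of shell commands. A sketch, with placeholder repo URLs (the `mkdir` stand-ins keep the example runnable offline; uncomment the clone line and substitute your organization's real URLs):

```shell
mkdir -p "$HOME/projects" && cd "$HOME/projects"

for repo in your-frontend your-qa-tests your-backend-api your-search-service; do
  # git clone "git@your-gitlab.example.com:team/$repo.git"   # <-- real command
  mkdir -p "$repo"                                           # offline stand-in
done

ls "$HOME/projects"
```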
Create the workspace file
{
  "folders": [
    { "name": "Frontend", "path": "./your-frontend" },
    { "name": "QA Tests", "path": "./your-qa-tests" },
    { "name": "Backend API", "path": "./your-backend-api" },
    { "name": "Search Service", "path": "./your-search-service" }
  ]
}
Open this file in VS Code. Your AI assistant now sees everything across all repos. One workspace. Full visibility.
How Repos Connect
When the AI can see all repos, it traces the full data flow for you:
data-testid="addToCart" (Frontend component)
→ CartPage → addToCart locator (QA Tests Page Object)
→ cart-service → addToCartResolver (Backend API)
One change in Frontend. The AI shows you every service it touches and every test that covers it. No more guessing.
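You can approximate that trace by hand with a recursive grep across the workspace. A self-contained sketch (the three sample files are invented stand-ins for the real repos; in practice you point the grep at your workspace folder):

```shell
# Minimal stand-ins for the three repos in the workspace
mkdir -p ws/your-frontend ws/your-qa-tests ws/your-backend-api
echo '<button data-testid="addToCart">Add</button>' > ws/your-frontend/cart-page.html
echo 'addToCart = page.getByTestId("addToCart");'   > ws/your-qa-tests/CartPage.ts
echo 'type Mutation { addToCart(id: ID!): Cart }'   > ws/your-backend-api/cart-service.graphql

# One selector, every repo that touches it
grep -rn "addToCart" ws/
```

The AI does the same thing, plus the reasoning about what the hits mean.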
The Git Diff Tells You Everything
Here's what makes this workflow powerful: you don't just read the code — you read the diff. The diff between a feature branch and main (or between two releases) is the single most valuable artifact for a QA engineer.
With MCP servers connected, your AI assistant can fetch the Merge Request directly from GitLab or GitHub, read the full git log and commit history, and trace the real impact across every repository in your workspace.
How it works in practice: You give the AI a ticket ID. It fetches the MR linked to that ticket, runs a git diff against main, groups every change by functional area, and flags which services, selectors, and user flows are affected — all before you open the app.
This is the shift: instead of testing what you can see in the UI, you're testing what actually changed in the code. The diff doesn't lie.
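Under the hood this is plain git. The sketch below builds a throwaway repo so it runs anywhere; on a real project you would run just the two `git diff` commands against your actual branches:

```shell
# Throwaway repo standing in for your real project (git >= 2.28 for `-b main`)
git init -q -b main demo-repo && cd demo-repo
git -c user.email=qa@example.com -c user.name=QA commit -q --allow-empty -m "init"

# A feature branch with one change
git checkout -qb feature/JIRA-4521
echo 'export const formatCurrency = (v) => v.toFixed(2);' > utils.js
git add utils.js
git -c user.email=qa@example.com -c user.name=QA commit -qm "refactor formatCurrency"

# What the AI reads: the changed files first, then the full patch
git diff --stat main...feature/JIRA-4521
git diff main...feature/JIRA-4521
```

The three-dot form (`main...branch`) diffs against the merge base, which is what you want for "what did this branch change".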
Optional: Configure MCP Servers
MCP (Model Context Protocol) servers extend what your AI assistant can do beyond reading code. They're optional, but they significantly increase accuracy and speed.
Atlassian MCP (Jira + Confluence)
Allows your AI to fetch Jira tickets, read acceptance criteria, comments, linked issues, and Confluence documentation — directly inside your IDE. No more copy-pasting ticket descriptions.
GitLab / GitHub MCP
Gives your AI direct access to Merge Requests, diffs, commit history, and branch metadata. When you say "analyze branch feature/JIRA-4521", the AI fetches the MR, reads every commit, and runs the analysis — without you opening a browser.
Chrome DevTools MCP
Enables live browser automation from your AI. Generate executable scenarios that navigate pages, click elements, fill forms, and take snapshots — all driven by the test selectors found in the code analysis.
All MCP servers are optional. The prompt workflow works with just your repos and any AI assistant. MCP servers make it faster and more accurate.
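For VS Code, MCP servers can be declared in a `.vscode/mcp.json` file. A hypothetical sketch: the server names, package names, and env keys below are illustrative placeholders, so check your AI assistant's MCP documentation for the real entries:

```json
{
  "servers": {
    "gitlab": {
      "command": "npx",
      "args": ["-y", "some-gitlab-mcp-server"],
      "env": { "GITLAB_TOKEN": "${env:GITLAB_TOKEN}" }
    },
    "atlassian": {
      "command": "npx",
      "args": ["-y", "some-atlassian-mcp-server"],
      "env": { "JIRA_BASE_URL": "https://your-company.atlassian.net" }
    }
  }
}
```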
Step 2: The 5-Prompt Workflow
Each prompt builds on the previous one. Run them in order on any feature branch or release. 10 minutes for full clarity.
First run takes longer. You'll refine the prompts for your project. After that, it gets fast.
Code Analysis
What changed? Which components? Which user flows are affected? Fetches the Jira ticket, runs a git diff against main, maps every change by functional area, traces data flow from frontend to backend, and inventories all test selectors.
Potential Issues
What could break? Identifies integration risks, API mismatches, UX gaps (missing loading/error/empty states), and edge cases derived from Jira ticket comments.
Automated Test Coverage
What's covered? What's exposed? Searches your QA repo for existing Page Objects and feature files that match the changed components. Categorizes: MUST RUN / SHOULD RUN / SKIP.
Manual Test Scenarios
What to test by hand, prioritized. Generates scenarios from acceptance criteria with exact test selectors, expected results, and a validation checklist.
Release Comparison
Full diff between two releases. Highlights feature toggle changes, test selector drift, deployment config changes, and prioritized regression recommendations.
Example: How the Git Diff Works
You give the AI a branch name and a Jira ticket. It fetches the MR, reads the git log, and analyzes every change:
"Analyze code changes in branch feature/JIRA-4521 vs main.
Fetch the MR from GitLab.
Read the git log and commit history.
For each changed file, identify:
- What component is affected
- What user flows are impacted
- Map any data-testid selectors
- Which backend services are touched"
What you get:
- ✓ Map of every change by functional area
- ✓ Frontend → Backend data flow trace
- ✓ Test selector inventory (new, changed, removed)
- ✓ Impacted user journeys list
- ✓ Risk level: HIGH / MEDIUM / LOW
- ✓ MR metadata: commits, reviewers, linked tickets
What AI Catches That You'd Miss
When the AI can see all repos and the full git diff at once, it catches things no manual review would:
- Test selector drift — a data-testid gets renamed in the frontend, but the Page Object in the QA repo still uses the old name. Your E2E tests will break silently.
- Shared utility side effects — a developer refactors a function used by 3 features. Your ticket only mentions one. The AI flags all 3.
- Feature toggle risks — a feature works in staging because the toggle is ON. In production it's OFF. The AI identifies which components are toggle-gated.
- Cross-repo dependency chains — Backend adds validation → API error shape changes → Frontend error handling breaks → E2E assertions fail. The AI traces the full chain.
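The first of these, selector drift, is easy to check mechanically even without AI. A sketch that compares the `data-testid` values defined in the frontend against the ones the QA repo references (the two sample files are invented; point the greps at your real folders and selector conventions):

```shell
# Two tiny stand-in repos: frontend renamed the selector, QA repo didn't follow
mkdir -p drift/your-frontend drift/your-qa-tests
echo '<button data-testid="addToCartBtn">' > drift/your-frontend/cart.html
echo 'page.getByTestId("addToCart");'      > drift/your-qa-tests/CartPage.ts

# Selector names defined in the frontend vs. names the QA repo still uses
grep -rhoE 'data-testid="[^"]+"'    drift/your-frontend | cut -d'"' -f2 | sort -u > defined.txt
grep -rhoE 'getByTestId\("[^"]+"\)' drift/your-qa-tests | cut -d'"' -f2 | sort -u > used.txt

# Anything printed here is a selector your tests use that no longer exists
comm -13 defined.txt used.txt   # prints: addToCart
```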
Bonus: Prompt 00 — Workspace Initialization
Before running prompts 01–05, run this once. It tells the AI to explore every repo, understand the tech stack, map relationships, and generate context files. After this, all future interactions are faster because the AI already knows your architecture.
The Prompt Templates
Important: The prompts below are intentionally generic. They provide a structured starting point and illustrate the workflow.
Once your multi-repo setup is complete and your AI has full visibility, you should refine, adapt, and experiment with your own project-specific prompts. The real power comes from tailoring prompts to your architecture, conventions, and risk areas.
Copy each template below, paste it into your AI assistant, and adapt the placeholders to your project.
00 — Workspace Initialization (run once)
01 — Code Analysis
02 — Potential Issues
03 — Automated Test Coverage
04 — Manual Test Scenarios
05 — Release Analysis
Adapting to Your Stack
The prompts are portable. Replace these placeholders with your project's values:
- {TICKET_ID} → Your Jira ticket (e.g., PROJ-1234)
- {BRANCH_NAME} → Your branch (e.g., feature/cart-refactor)
- {QA_REPO_NAME} → Your QA repo folder name
- {PATH_TO_PAGE_OBJECTS} → e.g., src/test/java/pages/
- {PATH_TO_FEATURE_FILES} → e.g., src/test/resources/features/
- data-testid → Your selector convention (data-testid, data-cy, etc.)
Works with any frontend (Angular, React, Vue), any test framework (Playwright, Cypress, Selenium), and any AI assistant that can read your repos.
Get Started in 15 Minutes
- Clone your repos under one parent folder
- Create the .code-workspace file
- Open it in VS Code
- Open your AI assistant chat
- Run Prompt 01 on any feature branch
Try it on one ticket. Once you see the results, you'll never go back to testing blind.
Written by Anass R. · Sr. QA Automation Engineer · DoQALand