Live training for QA engineers · Testers · Manual QAs · QA leads · SDETs · Automation Engineers

You're still testing like
AI doesn't exist.

This free session shows how AI can read the codebase of the application you're testing, tell you what changed functionally, and help you test only what matters. From workspace setup to automated test generation: presentation + live demo on an open-source app. Limited seats.


Claim your seat, it's free

For years, the bottleneck in QA was execution. Did the tests run? Did someone click through the critical paths before the release went out? CI/CD solved that. Automated pipelines run tests on every commit, every merge, every deploy. Execution is no longer the constraint.

Then the bottleneck shifted to automation coverage. Do we have enough automated tests? Are the right flows covered? Frameworks like Playwright and Cypress, combined with proper test infrastructure, solved that for most mature teams. Coverage is no longer the constraint either.

The bottleneck now is orchestration: knowing what to test, when to test it, and where to focus your effort when a release ships dozens of changes across multiple repos.

Every release is a signal. Changes across the frontend, backend, and shared services, each one carrying functional risk. The tester's real job isn't clicking buttons or writing scripts. It's reading that signal and directing effort where it matters. That's QA orchestration. And until recently, there was no scalable way to do it.

AI changes that. It reads every diff, maps functional impact across your repos, cross-references your acceptance criteria, and tells you exactly where to focus. You don't test less. You test smarter. The orchestration layer that was missing from QA now exists.

AI is changing QA. Most testers are watching.

You've seen the tools. You've read the posts. But you still don't know how to wire AI into your actual daily workflow: multiple repos, real tickets, real pipelines.

AI without context is just noise

Pasting snippets into ChatGPT doesn't give AI your repos, your tickets, or your diffs. AI without project context gives you generic output that doesn't map to your actual application.

Testing everything instead of what changed

A release ships dozens of changes across multiple repos. Testing everything is the default when nobody tells you where to focus. That's the orchestration gap AI can close.

Your AI sees one file, not your project

AI assistants work on fragments: one file, one prompt, no history. Real QA orchestration requires AI that sees your full project: all repos, all tickets, all test coverage.

QA reacts to builds instead of reviewing diffs

The default tester workflow: wait for a deployment, click through the app, file bugs. AI makes it possible to map functional risk before a single button is clicked.

Four pillars. One end-to-end workflow.

From connecting AI to your full project context, to browser automation, to diff-based functional review, to AI-assisted test generation: here's what the session covers.

The Multi-Repo AI Workspace

Everything starts with giving AI your full project context. Multiple repos, Jira, GitLab, Confluence: all connected through MCP servers in a single workspace. The .mcp.json file wires the tools. AGENTS.md tells the AI how your project works. Without this foundation, AI is just a chatbot. With it, AI becomes a project-aware collaborator.

workspace/
├── fe-repo/          ← Angular, React, Vue...
├── backend-repo/     ← API, services
├── qa-repo/          ← Playwright, Cypress tests
├── .mcp.json         ← Jira + GitLab connected
└── AGENTS.md         ← AI workflow instructions
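
A minimal .mcp.json sketch of what "Jira + GitLab connected" could look like. The server package names, URL, and env variables here are illustrative placeholders, not the exact configuration used in the session:

```json
{
  "mcpServers": {
    "jira": {
      "command": "npx",
      "args": ["-y", "example-jira-mcp-server"],
      "env": { "JIRA_BASE_URL": "https://your-org.atlassian.net" }
    },
    "gitlab": {
      "command": "npx",
      "args": ["-y", "example-gitlab-mcp-server"],
      "env": { "GITLAB_TOKEN": "${GITLAB_TOKEN}" }
    }
  }
}
```

Each entry registers one MCP server; once the workspace is opened, the AI can call those tools (tickets, merge requests) directly instead of relying on pasted context.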

Chrome MCP: AI Meets the Browser

The visual moment that lands for most testers: AI controlling a real browser through Chrome MCP. Navigating pages, checking elements, performing functional validations: the same things a manual tester does, but directed by AI. This is the bridge between terminal-based AI workflows and the visual testing world most QA engineers work in.

Chrome MCP in action
AI navigates to /checkout
AI checks: discount applied before tax? ✓
AI checks: error shown on invalid coupon? ✓
AI checks: order summary updates on qty change? ✓
→ Structured output: pass / fail / flag for manual

The Diff-First Approach: Functional Review with AI

The core methodology. Feed a release diff and Jira acceptance criteria to AI. Get back a functional impact report in tester language: what changed, what could break, what to test, and what has no coverage. The output isn't a list of files. It's a prioritised QA brief.

Example Output
⚠ cart-service.ts → Discount calculation now
  applies BEFORE tax instead of after.
  Risk: HIGH, impacts all checkout flows.
  Test: multi-item cart with % discount,
  fixed discount, stacked coupons.

✅ user-profile.component.ts → Display-only
  change (avatar border radius). LOW risk.

❌ AC gap: PROJ-456 acceptance criterion #3
  ("user sees confirmation email") has no
  corresponding code change in this release.

AI-Assisted Automation Testing

Taking the functional review output and using AI to write, update, or select tests. AI maps impacted flows to your existing Playwright or Cypress suite, flags gaps, and generates test scenarios matched to your codebase patterns, not generic boilerplate. The workflow goes from "AI identified a gap" to "AI wrote a test for it," validated against your real test structure.

Prompt Hint: Test Generation
"Based on the functional impact for cart-service,
scan /e2e for existing checkout tests. Identify
coverage gaps. Generate Playwright scenarios
matching the existing test patterns in this repo.
Output: new test file, gap summary, risk level."
→ Full details shared in the live session

The full system. Not tips & tricks.

Four modules covering my complete AI-powered QA workflow, from workspace setup to agentic automation.

The Multi-Repo AI Workspace

Set up your environment to work across FE, BE, and QA repos with full AI context. MCP servers, AGENTS.md, Claude Code / Copilot configuration. We set it up live on a demo app. You take the workflow to your own repos.

MCP Servers · Claude Code · Copilot · AGENTS.md
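
As a flavour of what the AGENTS.md foundation does, here is a hypothetical sketch (repo names and paths are placeholders matching the demo workspace layout, not the session's actual file):

```markdown
# AGENTS.md — illustrative sketch

## Repos in this workspace
- fe-repo: frontend application code
- backend-repo: API and services
- qa-repo: Playwright/Cypress e2e suite

## QA workflow rules
- Before proposing tests, read the release diff and the linked Jira ticket.
- Report functional impact in tester language, not as a list of files.
- New tests must follow the existing patterns in qa-repo.
```

The point is not the exact wording: it's that the AI reads these instructions on every task, so its output matches your project instead of generic boilerplate.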

Chrome MCP: AI in the Browser

See how AI interacts with a real application through Chrome MCP. Navigate pages, inspect elements, perform functional checks, all driven by AI. The bridge between terminal-based AI workflows and the visual testing world manual testers know.

Chrome MCP · Browser automation · Visual testing · Manual testing

The Diff-First Method: Functional Review with AI

The core methodology. Release analysis, diff analysis, AC compliance review. AI reads the code changes and translates them into testable functional impact. Prompt patterns and .prompt.md files built on a demo app, ready to adapt to any repo.

.prompt.md · Release analysis · Diff analysis · Functional review
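
To give a sense of the prompt-pattern idea, a hypothetical .prompt.md for diff review might read like this (structure and wording are illustrative, not the files built in the session):

```markdown
<!-- diff-review.prompt.md — illustrative sketch -->
You are reviewing a release diff for functional impact.
Inputs: the git diff and the Jira acceptance criteria.

For each changed file, report:
1. What changed functionally, in tester language.
2. Risk level (HIGH / MEDIUM / LOW) and the user flows affected.
3. Concrete test ideas for the impacted flows.
4. Any acceptance criterion with no matching code change.
```

Keeping the pattern in a reusable file means every release review follows the same structure, regardless of who runs it.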

AI-Assisted Automation Testing

From analysis to action. Smart test selection based on what changed. AI-generated test scenarios for Playwright and Cypress validated against your existing patterns. Coverage gap detection before bugs reach production.

Playwright · Cypress · Smart test selection · Test generation

Built in production. Taught from experience.

Anass R.


Senior QA Automation Engineer · DoQALand

10+ years in software engineering and QA. I use this system professionally every day on a multi-repo enterprise platform, helping teams improve release quality and test strategy.

QA engineers who are done waiting.

  • Manual testers who want to understand code changes without reading code, and catch issues before clicking
  • Automation engineers who want smarter test selection and AI-assisted test generation
  • QA leads, QA Managers, and CTOs who want to understand what AI-first QA looks like in practice, and make informed decisions about tooling, process, and team capability
  • Testers who use Jira, GitLab/GitHub, and Confluence and want AI connected directly to those tools
  • Anyone who's watched the 'AI for testing' talks but still runs the same workflow they ran 3 years ago
Not for you if

You're looking for a general 'intro to AI' overview or a coding course. This assumes you're already working as a tester or QA engineer and understand git, CI/CD, and test automation basics.

See the full system. Live. Free.

60 minutes. Live presentation on AI in QA, followed by a full demo of my workflow on a demo application. You walk away knowing how to set this up on your own repos. Limited seats per session.

3rd cohort · 2 sessions already delivered