AI without context is just noise
Pasting into ChatGPT doesn't give AI your repos, your tickets, or your diffs. AI without project context gives you generic output that doesn't map to your actual application.
Live training for QA engineers · Testers · Manual QAs · QA leads · SDETs · Automation engineers
This free session shows how AI can read the codebase of the application you're testing, tell you what changed functionally, and help you test only what matters. From workspace setup to automated test generation: presentation + live demo on an open-source app. Limited seats.
Claim your seat, it's free.

For years, the bottleneck in QA was execution. Did the tests run? Did someone click through the critical paths before the release went out? CI/CD solved that. Automated pipelines run tests on every commit, every merge, every deploy. Execution is no longer the constraint.
Then the bottleneck shifted to automation coverage. Do we have enough automated tests? Are the right flows covered? Frameworks like Playwright and Cypress, combined with proper test infrastructure, solved that for most mature teams. Coverage is no longer the constraint either.
The bottleneck now is orchestration: knowing what to test, when to test it, and where to focus your effort when a release ships dozens of changes across multiple repos.
Every release is a signal. Changes across the frontend, backend, and shared services, each one carrying functional risk. The tester's real job isn't clicking buttons or writing scripts. It's reading that signal and directing effort where it matters. That's QA orchestration. And until recently, there was no scalable way to do it.
AI changes that. It reads every diff, maps functional impact across your repos, cross-references your acceptance criteria, and tells you exactly where to focus. You don't test less. You test smarter. The orchestration layer that was missing from QA now exists.
You've seen the tools. You've read the posts. But you still don't know how to wire AI into your actual daily workflow, across multiple repos, real tickets, real pipelines.
A release ships dozens of changes across multiple repos. Testing everything is the default when nobody tells you where to focus. That's the orchestration gap AI can close.
AI assistants work on fragments: one file, one prompt, no history. Real QA orchestration requires AI that sees your full project: all repos, all tickets, all test coverage.
The default tester workflow: wait for a deployment, click through the app, file bugs. AI makes it possible to map functional risk before a single button is clicked.
From connecting AI to your full project context, to browser automation, to diff-based functional review, to AI-assisted test generation: here's what the session covers.
Everything starts with giving AI your full project context. Multiple repos, Jira, GitLab, Confluence: all connected through MCP servers in a single workspace. The .mcp.json file wires the tools. AGENTS.md tells the AI how your project works. Without this foundation, AI is just a chatbot. With it, AI becomes a project-aware collaborator.
workspace/
├── fe-repo/        ← Angular, React, Vue...
├── backend-repo/   ← API, services
├── qa-repo/        ← Playwright, Cypress tests
├── .mcp.json       ← Jira + GitLab connected
└── AGENTS.md       ← AI workflow instructions
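To make the wiring concrete, here is one possible shape for that `.mcp.json`, following the standard MCP server-configuration format. The server names, package identifiers, and environment variables are placeholders, not the session's actual setup; swap in the MCP servers your team uses.

```json
{
  "mcpServers": {
    "jira": {
      "command": "npx",
      "args": ["-y", "example-jira-mcp-server"],
      "env": {
        "JIRA_BASE_URL": "https://your-company.atlassian.net",
        "JIRA_API_TOKEN": "${JIRA_API_TOKEN}"
      }
    },
    "gitlab": {
      "command": "npx",
      "args": ["-y", "example-gitlab-mcp-server"],
      "env": {
        "GITLAB_TOKEN": "${GITLAB_TOKEN}"
      }
    }
  }
}
```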
The visual moment that lands for most testers: AI controlling a real browser through Chrome MCP. Navigating pages, checking elements, performing functional validations, the same things a manual tester does, but directed by AI. This is the bridge between terminal-based AI workflows and the visual testing world most QA engineers work in.
AI navigates to /checkout
AI checks: discount applied before tax? ✓
AI checks: error shown on invalid coupon? ✓
AI checks: order summary updates on qty change? ✓
→ Structured output: pass / fail / flag for manual
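The pass / fail / flag-for-manual triage above can be sketched as a small function. This is an illustrative sketch, not the session's tooling: the `CheckResult` shape is an assumption about what an AI-driven browser check might return.

```typescript
// Sketch: triage raw AI browser-check results into the structured
// pass / fail / flag output shown above. `CheckResult` is a
// hypothetical shape, not a real Chrome MCP API.
type Verdict = "pass" | "fail" | "flag-for-manual";

interface CheckResult {
  check: string;          // e.g. "discount applied before tax?"
  passed: boolean | null; // null = AI could not determine the result
}

function triage(results: CheckResult[]): Record<Verdict, string[]> {
  const out: Record<Verdict, string[]> = {
    pass: [],
    fail: [],
    "flag-for-manual": [],
  };
  for (const r of results) {
    if (r.passed === true) out.pass.push(r.check);
    else if (r.passed === false) out.fail.push(r.check);
    else out["flag-for-manual"].push(r.check); // undetermined → human review
  }
  return out;
}

// The three /checkout checks from above, plus one the AI cannot
// verify from the browser alone.
const report = triage([
  { check: "discount applied before tax?", passed: true },
  { check: "error shown on invalid coupon?", passed: true },
  { check: "order summary updates on qty change?", passed: true },
  { check: "confirmation email received?", passed: null },
]);
console.log(report);
```

The key design point: anything the AI cannot decide goes to a human, rather than being silently dropped.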
The core methodology. Feed a release diff and Jira acceptance criteria to AI. Get back a functional impact report in tester language: what changed, what could break, what to test, and what has no coverage. The output isn't a list of files. It's a prioritised QA brief.
⚠ cart-service.ts → Discount calculation now
applies BEFORE tax instead of after.
Risk: HIGH, impacts all checkout flows.
Test: multi-item cart with % discount,
fixed discount, stacked coupons.
✅ user-profile.component.ts → Display-only
change (avatar border radius). LOW risk.
❌ AC gap: PROJ-456 acceptance criterion #3
("user sees confirmation email") has no
corresponding code change in this release.
Taking the functional review output and using AI to write, update, or select tests. AI maps impacted flows to your existing Playwright or Cypress suite, flags gaps, and generates test scenarios matched to your codebase patterns, not generic boilerplate. The workflow goes from "AI identified a gap" to "AI wrote a test for it," validated against your real test structure.
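Smart test selection can be sketched as a mapping from changed files to the specs that exercise them. The file names and coverage map below are illustrative assumptions; in practice the map would be maintained for your real repos or derived by AI from your suite.

```typescript
// Sketch: select which existing Playwright/Cypress specs to run for a
// release, given the list of changed files. Entries are illustrative.
const coverageMap: Record<string, string[]> = {
  "cart-service.ts": ["checkout.spec.ts", "discounts.spec.ts"],
  "auth-service.ts": ["login.spec.ts"],
  "user-profile.component.ts": ["profile.spec.ts"],
};

function selectSpecs(changedFiles: string[]): {
  run: string[];       // specs to execute for this release
  uncovered: string[]; // changed files with no spec = coverage gap
} {
  const run = new Set<string>();
  const uncovered: string[] = [];
  for (const file of changedFiles) {
    const specs = coverageMap[file];
    if (specs) specs.forEach((s) => run.add(s));
    else uncovered.push(file); // no mapping → surface as a gap, don't skip silently
  }
  return { run: [...run].sort(), uncovered };
}

const { run, uncovered } = selectSpecs([
  "cart-service.ts",
  "payment-gateway.ts", // nothing maps here → flagged as a coverage gap
]);
console.log(run);
console.log(uncovered);
```

Note the same principle as the functional review output: a changed file with no matching spec is reported as a gap, which is exactly the input AI-assisted test generation then acts on.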
Four modules covering my complete AI-powered QA workflow, from workspace setup to agentic automation.
Set up your environment to work across FE, BE, and QA repos with full AI context. MCP servers, AGENTS.md, Claude Code / Copilot configuration. We set it up live on a demo app. You take the workflow to your own repos.
See how AI interacts with a real application through Chrome MCP. Navigate pages, inspect elements, perform functional checks, all driven by AI. The bridge between terminal-based AI workflows and the visual testing world manual testers know.
The core methodology. Release analysis, diff analysis, AC compliance review. AI reads the code changes and translates them into testable functional impact. Prompt patterns and .prompt.md files built on a demo app, ready to adapt to any repo.
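For a sense of what such a prompt file might contain, here is one possible shape for a functional-review `.prompt.md`. The headings and wording are illustrative, not the session's actual files.

```markdown
<!-- functional-review.prompt.md — illustrative sketch -->
# Functional review of a release diff

You are reviewing code changes for QA impact, not code quality.

Inputs:
- The diff between the last release tag and HEAD
- Jira acceptance criteria for the tickets in this release

For each changed file, report in tester language:
1. What changed functionally
2. Risk level (HIGH / MEDIUM / LOW) and which user flows it touches
3. Concrete test scenarios, including edge cases
4. Any acceptance criterion with no corresponding code change
```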
From analysis to action. Smart test selection based on what changed. AI-generated test scenarios for Playwright and Cypress validated against your existing patterns. Coverage gap detection before bugs reach production.
Anass R.
Senior QA Automation Engineer · DoQALand
10+ years in software engineering and QA. I use this system professionally every day on a multi-repo enterprise platform, helping teams improve release quality and test strategy.
Skip this session if you're looking for a general "intro to AI" overview or a coding course. It assumes you're already working as a tester or QA engineer and understand git, CI/CD, and test automation basics.
60 minutes. Live presentation on AI in QA, followed by a full demo of my workflow on a demo application. You walk away knowing how to set this up on your own repos. Limited seats per session.
3rd cohort · 2 sessions already delivered