TruGreen Automation Dashboard
Latest published QA automation reports from GitHub Actions.
Dashboard Context
A central place to review automation results, understand impact, and identify follow-up.
Smoke Tests
Status: Active. Daily core site health checks for the TruGreen automation pilot.
Checks: Confirms that key pages load and core site experiences are available before deeper testing begins.
Protects: This protects basic site health and provides an early release-confidence signal across the highest-value user journeys.
Review: If smoke fails, review the first broken page or assertion, confirm whether the issue is environment-specific, and route it to the owning dev or QA contact before relying on downstream suite results.
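For reference, a minimal Playwright smoke check might look like the sketch below. The page paths are illustrative placeholders, not the pilot's actual configuration, and the sketch assumes a baseURL is set in playwright.config.ts.

```ts
import { test, expect } from '@playwright/test';

// Hypothetical list of high-value pages; the real suite defines its own set.
const corePages = ['/', '/lawn-care', '/contact'];

for (const path of corePages) {
  test(`smoke: ${path} loads`, async ({ page }) => {
    // Fail fast on non-200 responses so broken pages surface immediately.
    const response = await page.goto(path);
    expect(response?.ok()).toBeTruthy();

    // A basic visibility check confirms the page actually rendered content.
    await expect(page.locator('body')).toBeVisible();
  });
}
```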
Accessibility Audit
Status: Active. Axe accessibility scan results and grouped common issues.
Checks: Runs automated accessibility checks to highlight violations, recurring patterns, and shared component-level issues.
Protects: This helps the team catch accessibility regressions early and compare recurring issues across releases and environments.
Review: Review the common issues view first, identify whether findings are new or recurring, and confirm whether the issue should be fixed in a shared component, content template, or page-specific implementation.
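A minimal sketch of an Axe scan run through Playwright is shown below, assuming the @axe-core/playwright package; the target page and rule tags are assumptions, not the suite's actual scope.

```ts
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('accessibility: homepage has no detectable violations', async ({ page }) => {
  await page.goto('/'); // assumes baseURL points at the environment under test

  // Restrict the scan to WCAG 2.0 A and AA rules, a common review baseline.
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa'])
    .analyze();

  // An empty violations list keeps the run green; each violation carries the
  // rule id and affected nodes, which is what the grouped issues view rolls up.
  expect(results.violations).toEqual([]);
});
```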
Analytics Validation
Status: Active. GA4, dataLayer, and outbound tracking validation results.
Checks: Validates whether critical analytics events fire with the expected tracking configuration.
Protects: This protects reporting integrity, analytics QA signoff, and confidence that releases do not break measurement.
Review: If analytics checks fail, confirm whether the event is missing, delayed, or misconfigured, then involve the dev or analytics owner with the failing report and affected tracking details.
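As a sketch of what a dataLayer assertion can look like, the example below checks for a GA4-style page_view push; the event name and shape are assumptions standing in for the tracking spec the real suite validates.

```ts
import { test, expect } from '@playwright/test';

test('analytics: page_view is pushed to the dataLayer', async ({ page }) => {
  await page.goto('/'); // assumes baseURL is configured for the target environment

  // Inspect the GTM/GA4 dataLayer inside the page context and return only a
  // boolean, which avoids serializing non-plain dataLayer entries.
  const hasPageView = await page.evaluate(() => {
    const dl = (window as any).dataLayer ?? [];
    return dl.some((entry: any) => entry && entry.event === 'page_view');
  });

  expect(hasPageView).toBeTruthy();
});
```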
API Validation
Status: Active. API integration checks for key TruGreen endpoints.
Checks: Verifies that key service endpoints respond successfully and return expected data for monitored scenarios.
Protects: This protects integration reliability and helps catch service-side issues that may not be obvious from UI behavior alone.
Review: If an API check fails, confirm the response status and payload behavior first, then determine whether the problem belongs to the upstream service, environment configuration, or downstream site dependency.
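A minimal Playwright request-fixture check is sketched below; the endpoint path, query parameter, and response field are hypothetical placeholders for the monitored TruGreen endpoints and their contracts.

```ts
import { test, expect } from '@playwright/test';

test('api: service availability endpoint responds with expected data', async ({ request }) => {
  // Relative paths resolve against baseURL from playwright.config.ts.
  const response = await request.get('/api/service-availability?zip=38120');

  // Confirm the endpoint responds successfully before inspecting the payload.
  expect(response.ok()).toBeTruthy();

  // Shape checks catch service-side regressions that the UI may mask.
  const body = await response.json();
  expect(body).toHaveProperty('available');
});
```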
Performance Audit
Status: Active. Performance scan results and historical page speed metrics.
Checks: Tracks Lighthouse-based performance results over time to show lab-measured speed and stability trends.
Protects: This helps the team spot regressions in loading and rendering behavior before they become visible customer experience issues.
Review: Review metric-level changes before focusing on the overall score, confirm whether the regression is lab-only or likely user-facing, and compare recent runs before escalating.
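The dashboard's Lighthouse runs remain the source of truth for performance. As a complementary lab-style sketch, the example below reads Navigation Timing data through Playwright; the budgets are illustrative assumptions, not agreed thresholds.

```ts
import { test, expect } from '@playwright/test';

test('performance: homepage load stays under budget', async ({ page }) => {
  await page.goto('/', { waitUntil: 'load' });

  // Pull lab timings from the browser's Navigation Timing API.
  const timing = await page.evaluate(() => {
    const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];
    return { domContentLoaded: nav.domContentLoadedEventEnd, load: nav.loadEventEnd };
  });

  // Budgets are placeholders; real thresholds should come from historical runs.
  expect(timing.domContentLoaded).toBeLessThan(3000);
  expect(timing.load).toBeLessThan(6000);
});
```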
Visual Regression
Status: Active. Visual comparison results for selected pages and components.
Checks: Compares current screenshots to approved baselines to detect layout, styling, and rendering changes.
Protects: This protects presentation quality on customer-facing pages and helps separate intended UI changes from unexpected regressions.
Review: If a visual diff appears, check whether the change matches an expected release update, then decide whether to accept a new baseline or raise a defect for unintended UI drift.
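A minimal screenshot-comparison sketch using Playwright's built-in snapshot assertion is shown below; the page, snapshot name, and tolerance are assumptions rather than the suite's actual settings.

```ts
import { test, expect } from '@playwright/test';

// Baselines live alongside the test and are updated intentionally with
// `npx playwright test --update-snapshots` when a UI change is approved.
test('visual: homepage matches approved baseline', async ({ page }) => {
  await page.goto('/');

  // A small diff tolerance absorbs anti-aliasing noise without hiding real drift.
  await expect(page).toHaveScreenshot('homepage.png', {
    fullPage: true,
    maxDiffPixelRatio: 0.01,
  });
});
```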
Storybook Visual Regression
Status: Active. Component snapshot results for published Storybook stories.
Checks: Runs visual snapshot checks against Storybook stories tagged for test coverage to catch component-level UI drift.
Protects: This protects shared UI building blocks before regressions spread into full pages and customer journeys.
Review: If a Storybook diff appears, compare it to the intended component change first, then decide whether to update the baseline or open a defect for unintended visual drift.
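One common pattern, sketched below under assumptions, loads each tagged story through Storybook's iframe view and snapshots it; the story IDs and URL pattern are hypothetical, and the real suite enumerates the published stories tagged for coverage.

```ts
import { test, expect } from '@playwright/test';

// Placeholder story IDs; the real list comes from tagged Storybook stories.
const stories = ['components-button--primary', 'components-card--default'];

for (const id of stories) {
  test(`storybook visual: ${id}`, async ({ page }) => {
    // Load the story in isolation via Storybook's iframe view.
    await page.goto(`/iframe.html?id=${id}&viewMode=story`);

    await expect(page).toHaveScreenshot(`${id}.png`, { maxDiffPixelRatio: 0.01 });
  });
}
```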
User Flow Validation
Status: Active. Sanity checks for critical user flows. Currently, the buy flow is covered as an example, but this section will scale to any user flow we want to monitor.
Checks: Exercises key steps in selected user flows (buy-online-e) to verify that users can move through important journeys without major blockers.
Protects: This protects high-value business paths tied directly to conversion, revenue, or other strategic goals.
Review: If a flow fails, identify the earliest blocked step, confirm business impact, and route it quickly because conversion-path failures usually need same-day triage. As more flows are added, review each for its specific business impact.
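A simplified flow sketch is shown below. The selectors, labels, and steps are illustrative placeholders for the buy flow, not the suite's real locators; each monitored flow would define its own steps and business-impact notes.

```ts
import { test, expect } from '@playwright/test';

test('flow: buy flow reaches the plan selection step', async ({ page }) => {
  await page.goto('/');

  // Step 1: start the purchase journey from the primary call to action.
  await page.getByRole('button', { name: /get a quote/i }).click();

  // Step 2: provide the service address details the flow requires.
  await page.getByLabel(/zip code/i).fill('38120');
  await page.getByRole('button', { name: /continue/i }).click();

  // Step 3: confirm the journey reached plan selection without a blocker.
  await expect(page.getByRole('heading', { name: /choose your plan/i })).toBeVisible();
});
```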
Link Validation
Status: Active. Homepage-driven link checks for internal and external URL health.
Checks: Scans for broken or unreachable links so the team can spot navigation and destination issues early.
Protects: This protects content quality, SEO-sensitive paths, and customer trust when moving through the site.
Review: When enabled, review failures for repeated patterns first, separate blocked third-party destinations from real site defects, and prioritize broken customer-path links for follow-up.
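A homepage-driven sweep can be sketched as below: collect every anchor href and confirm each URL resolves. This is an assumption about the approach, and third-party hosts that block automated requests may need an allowlist or a GET fallback.

```ts
import { test, expect } from '@playwright/test';

test('links: homepage links resolve', async ({ page, request }) => {
  await page.goto('/');

  // Collect absolute hrefs from every anchor on the homepage.
  const hrefs = await page.locator('a[href]').evaluateAll(anchors =>
    anchors
      .map(a => (a as HTMLAnchorElement).href)
      .filter(href => href.startsWith('http')),
  );

  const broken: string[] = [];
  for (const href of new Set(hrefs)) {
    // HEAD keeps the sweep cheap; treat network errors as failures too.
    const response = await request.head(href).catch(() => null);
    if (!response || response.status() >= 400) broken.push(href);
  }

  expect(broken).toEqual([]);
});
```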
Recommended Review Workflow
Use this sequence when reviewing automation output for release confidence, regression triage, or client-facing updates.
- Start with the suite that best matches the release area or reported issue.
- Read the report summary and latest status before opening raw Playwright details.
- If a run fails, confirm whether the issue is reproducible, environment-specific, or an expected product change.
- If a run is flaky or skipped, flag it for human review before using it as release confidence evidence.
- Escalate validated issues to the owning QA, dev, or product contact with the report link and a short impact summary.
Result Definitions
Use these definitions when deciding whether a run should be acknowledged, investigated, or escalated.
Passed
The checks completed successfully for that run. Review only if the result contradicts an expected product or release change.
Failed
At least one check found a problem or could not complete. Review the failing step first, confirm business impact, and decide whether to create or update a defect.
Flaky
The same test behaves inconsistently across runs. Treat this as an investigation item because it reduces trust in release signals even when the product is healthy.
Skipped
The test did not run by design or due to environment constraints. Verify whether the skip is expected before using the suite as release evidence.
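In Playwright reporting, a test is marked flaky when it fails and then passes on retry, so the retry setting directly shapes how often this status appears. The config sketch below shows where that comes from; the values are illustrative, not the pilot's actual settings.

```ts
// playwright.config.ts (sketch; retry count and reporter choice are assumptions)
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // A retried test that fails once and then passes is reported as "flaky".
  retries: process.env.CI ? 2 : 0,

  use: {
    // baseURL lets the suites above use relative paths like page.goto('/');
    // set per environment in CI rather than hard-coding a URL here.
    baseURL: process.env.BASE_URL,
  },

  // The published HTML report is what the dashboard links to.
  reporter: [['html', { open: 'never' }]],
});
```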