Context
Rendoo.ai operates multiple web and backend applications that evolve rapidly through continuous development. Ensuring stability and performance across environments is challenging. Manual QA and post-release debugging consume significant time and delay deployments.
Objectives
Design and implement a complete QA and debugging automation system that helps developers identify, reproduce, and fix issues faster. Centralize monitoring, automate test execution, and provide clear visualization of issues and their root causes.
Main Tasks / Methodology
- Audit & Benchmark: Analyze current QA/debugging workflows; benchmark tools (Playwright, Cypress, Sentry, Datadog, etc.)
- Monitoring & Logging: Implement unified logging and error tracking; add metrics for performance, uptime, and resource usage (see the error-tracking sketch after this list)
- Test Automation Framework: Design automated functional and regression tests; integrate them with CI/CD (GitHub Actions, GitLab CI, Jenkins) (see the Playwright test sketch below)
- Debugging Dashboard: Build a centralized interface for bugs, logs, and metrics; enable easy reproduction via snapshots and recorded sessions (see the tracing sketch below)
- Deployment & Validation: Deploy to staging and production environments; measure the impact on resolution time and release stability
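As a concrete illustration of the Monitoring & Logging task, the following is a minimal sketch of unified error tracking using Sentry's Node SDK. The SENTRY_DSN environment variable and the riskyOperation function are placeholders, not existing Rendoo.ai code; the actual tool choice would come out of the benchmark step.

```typescript
// Minimal error-tracking sketch with the Sentry Node SDK.
// SENTRY_DSN and riskyOperation() are placeholders, not Rendoo.ai specifics.
import * as Sentry from "@sentry/node";

Sentry.init({
  dsn: process.env.SENTRY_DSN,                        // per-project ingest URL
  environment: process.env.NODE_ENV ?? "development", // separates staging from production
  tracesSampleRate: 0.1,                              // sample 10% of transactions for performance data
});

// Stand-in for any application code that can fail.
function riskyOperation(): void {
  throw new Error("simulated failure");
}

try {
  riskyOperation();
} catch (err) {
  // The event reaches the central dashboard with a stack trace and environment tag.
  Sentry.captureException(err);
}
```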
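For the Test Automation Framework task, a single automated regression test could look like this Playwright sketch. The staging URL, form labels, and credentials are hypothetical; in CI the suite would run via `npx playwright test` on each push.

```typescript
// One automated regression test with Playwright.
// URL, labels, and credentials below are hypothetical placeholders.
import { test, expect } from "@playwright/test";

test("user can log in and reach the dashboard", async ({ page }) => {
  await page.goto("https://staging.example.com/login");
  await page.getByLabel("Email").fill("qa-bot@example.com");
  await page.getByLabel("Password").fill(process.env.QA_PASSWORD ?? "");
  await page.getByRole("button", { name: "Sign in" }).click();
  // A failed assertion fails the CI run and flags the regression.
  await expect(page).toHaveURL(/\/dashboard/);
});
```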
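For the Debugging Dashboard task, the recorded sessions used for reproduction could come from Playwright's built-in tracing, enabled in the project configuration. This is one possible setup under the assumption that Playwright is the chosen framework, not a prescribed implementation.

```typescript
// playwright.config.ts: capture traces, videos, and screenshots on failure
// so a debugging dashboard can link to them; the values are assumptions.
import { defineConfig } from "@playwright/test";

export default defineConfig({
  retries: 2,                    // retry flaky tests before failing the build
  use: {
    trace: "on-first-retry",     // full DOM/network trace when a test first fails
    video: "retain-on-failure",  // keep session recordings only for failures
    screenshot: "only-on-failure",
  },
});
```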
Expected Deliverables
- Functional QA and monitoring system
- Automation scripts and CI/CD integration
- Debugging and issue-tracking dashboard
- Documentation and test reports
Required Skills / Profile
- Solid software engineering or DevOps background
- Experience with CI/CD, Docker, Kubernetes
- Knowledge of Playwright/Cypress/Selenium
- Familiarity with observability tools (Grafana, Prometheus, Sentry, ELK)
Added Value for Rendoo.ai
Reduces production incidents, improves testing reliability, and accelerates delivery cycles to support future AI and B2B product development.