Testing and Debugging

Use this page when a documented call does not behave the way you expect.

Debug Order

  1. confirm chain type
  2. confirm portal
  3. confirm auth model
  4. confirm required headers
  5. confirm whether Secure Channel is required
  6. confirm whether the endpoint is actually API-exposed
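The debug order above can be sketched as a helper that walks the checks in the documented order and reports the first failing step. The `RequestDiagnostics` shape and field names are illustrative assumptions, not the real platform's types:

```typescript
// Illustrative sketch only; field names are assumptions, not platform types.
interface RequestDiagnostics {
  chainTypeConfirmed: boolean;
  portalConfirmed: boolean;
  authModelConfirmed: boolean;
  requiredHeadersPresent: boolean;
  secureChannelSatisfied: boolean; // not required, or required and active
  apiExposed: boolean;
}

// Walk the checks in the documented order; return the first failing step.
function firstFailingStep(d: RequestDiagnostics): string | null {
  const steps: Array<[string, boolean]> = [
    ["chain type", d.chainTypeConfirmed],
    ["portal", d.portalConfirmed],
    ["auth model", d.authModelConfirmed],
    ["required headers", d.requiredHeadersPresent],
    ["secure channel", d.secureChannelSatisfied],
    ["API exposure", d.apiExposed],
  ];
  for (const [name, ok] of steps) if (!ok) return name;
  return null;
}
```

Stopping at the first failure matters: a missing header often produces the same public error as a wrong portal, so checking in order avoids chasing the wrong cause.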

Web Chain Checklist

  1. correct portal entrypoint
  2. valid JWT if required
  3. valid X-Client-Hash
  4. valid X-SC-Session-Id if required
  5. matching user role and portal permissions
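The header names in the web chain checklist can be assembled as below. This is a sketch: the header names come from the checklist, but the token values are placeholders and the helper itself is hypothetical:

```typescript
// Hypothetical helper; header names from the checklist, values are placeholders.
function buildWebChainHeaders(opts: {
  jwt?: string;          // omit when the endpoint does not require a JWT
  clientHash: string;    // always required on the web chain
  scSessionId?: string;  // omit when Secure Channel is not required
}): Record<string, string> {
  const headers: Record<string, string> = {
    "X-Client-Hash": opts.clientHash,
  };
  if (opts.jwt) headers["Authorization"] = `Bearer ${opts.jwt}`;
  if (opts.scSessionId) headers["X-SC-Session-Id"] = opts.scSessionId;
  return headers;
}
```

Note that item 5 (matching user role and portal permissions) is enforced server-side and cannot be fixed by headers alone.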

API Chain Checklist

  1. path starts with /api/v1/**
  2. request is signed correctly
  3. timestamp and nonce are valid
  4. API key has the correct scope
  5. endpoint supports API-key access
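Items 2 and 3 of the API chain checklist (signature, timestamp, nonce) can be illustrated with an HMAC sketch. The canonical-string format `METHOD\nPATH\nTIMESTAMP\nNONCE` is an assumption for illustration; consult the platform's signing spec for the real scheme:

```typescript
import { createHmac, randomUUID } from "node:crypto";

// Sketch of signing a /api/v1/** request. The canonical string below is an
// assumed format, not the platform's documented one.
function signApiRequest(method: string, path: string, apiSecret: string) {
  const timestamp = Date.now().toString(); // servers typically reject stale timestamps
  const nonce = randomUUID();              // servers typically reject replayed nonces
  const canonical = [method.toUpperCase(), path, timestamp, nonce].join("\n");
  const signature = createHmac("sha256", apiSecret).update(canonical).digest("hex");
  return { timestamp, nonce, signature };
}
```

Even a correct signature fails if the key lacks the required scope (item 4) or the endpoint is web-only (item 5), so verify those last two independently.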

What To Capture In A Bug Report

  1. exact path
  2. chain type
  3. portal
  4. headers used
  5. whether Secure Channel was active
  6. HTTP status
  7. public error code and message

Simulation Environment (doc-verify)

The doc-verify simulation environment provides full-stack E2E testing against a real backend with a real database, Redis, and RabbitMQ. Unlike mocked tests, every request hits the actual Spring Boot application with live infrastructure, so test results closely reflect production behavior.

Components:

  • Docker services — MySQL, Redis, RabbitMQ, Mailpit (local SMTP trap for MFA codes)
  • Spring Boot backend — the full slaunchx-backend-platform application
  • MFA via Mailpit — verification codes are intercepted locally instead of sent to real email providers

Location: slaunchx-api-toolkit/doc-verify/

Test framework: Vitest + TypeScript

Running E2E Tests Locally

```bash
# 1. Start the simulation environment (Docker services + backend)
cd repos/devportal/slaunchx-api-toolkit/doc-verify
BACKEND_DIR=~/workspace/repos/core/slaunchx-backend-platform bash start.sh

# 2. Seed test data (creates 4 portals with auth fixtures)
source env.doc-verify && source logs/access-code.env
npx tsx scripts/seed-portals.ts

# 3. Run all tests
npx vitest run
```

The start.sh script brings up Docker containers, waits for readiness, then launches the backend. Seeding creates users, workspaces, API keys, and other fixtures across all four portals so that every test file has the auth context it needs.
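A per-portal fixture set might be shaped roughly as follows. This is a hypothetical sketch; the actual seed data lives in scripts/seed-portals.ts and will differ in both shape and values:

```typescript
// Hypothetical fixture shape; the real seed script's data differs.
type Portal = "system" | "tenant" | "partner" | "consumer";

interface PortalFixtures {
  users: { email: string; role: string }[];
  workspaces: string[];
  apiKeys: { name: string; scopes: string[] }[];
}

const fixtures: Record<Portal, PortalFixtures> = {
  system: {
    users: [{ email: "admin@example.test", role: "admin" }],
    workspaces: [],
    apiKeys: [],
  },
  tenant: {
    users: [{ email: "owner@example.test", role: "owner" }],
    workspaces: ["ws-default"],
    apiKeys: [{ name: "ci", scopes: ["read"] }],
  },
  partner: {
    users: [{ email: "partner@example.test", role: "member" }],
    workspaces: [],
    apiKeys: [],
  },
  consumer: {
    users: [{ email: "user@example.test", role: "user" }],
    workspaces: [],
    apiKeys: [],
  },
};
```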

Test Coverage

  • 926 E2E test cases
  • 105 controllers covered across 4 portals (system, tenant, partner, consumer)
  • 75 test files organized by portal and domain

Test categories include: auth, billing (recharge, exchange, withdrawal, settlement), workspace, security, file, export, profile, notifications, sandbox, secure-channel, API keys, and constants.

ExampleRecorder Pattern

E2E tests double as API documentation generators. Each test uses ExampleRecorder to capture the full request/response pair during execution. After a test run the recorded examples are flushed to data/examples/ as JSON files. These files serve as the source of truth for API behavior documentation — they prove that every documented example actually works against the real backend.
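The recorder pattern can be sketched as below: collect request/response pairs during a test, then flush them to data/examples/ as JSON. The class and method names here are illustrative; the real ExampleRecorder's API may differ:

```typescript
import { mkdirSync, writeFileSync } from "node:fs";
import { join } from "node:path";

// Sketch of the ExampleRecorder idea; class and method names are illustrative.
interface RecordedExample {
  name: string;
  request: { method: string; path: string; body?: unknown };
  response: { status: number; body: unknown };
}

class ExampleRecorder {
  private examples: RecordedExample[] = [];

  // Called inside a test after each request/response pair.
  record(example: RecordedExample): void {
    this.examples.push(example);
  }

  // Called after the run; writes one JSON file per example and returns paths.
  flush(dir = "data/examples"): string[] {
    mkdirSync(dir, { recursive: true });
    return this.examples.map((ex) => {
      const file = join(dir, `${ex.name}.json`);
      writeFileSync(file, JSON.stringify(ex, null, 2));
      return file;
    });
  }
}
```

Because the examples are captured from passing tests, a stale documented example shows up as a failing test rather than a silent doc drift.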

Common E2E Testing Issues

  • MFA email timing: Mailpit may take 1–2 seconds to receive verification codes. Tests that fetch codes immediately after requesting them can fail intermittently.
  • Auth session caching: sessions are cached per portal within a single test run. If a test mutates session state (e.g., changes password), subsequent tests in the same portal may see stale auth.
  • Storage unavailable: file upload tests may return HTTP 500 when MinIO is not configured. The simulation environment does not start MinIO by default.
  • Test ordering: some security tests (IP whitelist, session termination) alter global state that can affect subsequent tests when running the full suite. Run isolated tests with npx vitest run <file> to confirm.
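The MFA timing issue above is usually handled by polling rather than a single immediate fetch. A minimal retry sketch, where `fetchLatestCode` stands in for whatever helper your tests use to query Mailpit:

```typescript
// Poll until a code appears instead of fetching once immediately.
// fetchLatestCode is a placeholder for your Mailpit query helper.
async function waitForMfaCode(
  fetchLatestCode: () => Promise<string | null>,
  { attempts = 10, delayMs = 500 } = {},
): Promise<string> {
  for (let i = 0; i < attempts; i++) {
    const code = await fetchLatestCode();
    if (code) return code;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error(`No MFA code after ${attempts} attempts`);
}
```

A bounded retry keeps flaky timing out of test results while still failing fast when the code genuinely never arrives.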
If the failure is still unexplained, consult:

  1. Error Model
  2. the domain guide for the failing flow

SlaunchX Internal Documentation