Overview
Why Use Verification?
Verification helps you:
- Catch Issues Early - Detect problems before they impact users
- Automate Rollbacks - Trigger rollback policies when verification fails
- Build Confidence - Ensure deployments meet quality standards
- Gate Promotions - Block progression to production until QA verifies
- Environment-Specific Checks - Run different verifications per environment
Basic Configuration
Add a verification rule to your policy.
Environment-Specific Verifications
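As an illustrative sketch (every field name below is an assumption, not a verified Ctrlplane schema; see the metric properties later on this page), QA might run a quick smoke check while production watches an error rate:

```yaml
# Hypothetical schema: separate policies per environment
policies:
  - name: qa-verification
    selector: env == "qa"                 # selector syntax assumed
    verification:
      metrics:
        - name: smoke-test
          intervalSeconds: 10
          count: 3
          provider:
            type: http                    # assumed provider config shape
            url: https://qa.example.com/health
          successCondition: "result.ok && result.statusCode == 200"
  - name: prod-verification
    selector: env == "production"
    verification:
      metrics:
        - name: error-rate
          intervalSeconds: 60
          count: 10
          failureLimit: 2                 # tolerate transient blips in prod
```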
Different environments can have completely different verification requirements.
Reusable Verification with Selectors
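One way this might look, as a sketch (selector and field names are assumptions): a single policy whose selector matches every backend deployment, so each one inherits the same check:

```yaml
# Hypothetical schema: one policy, many release targets
policy:
  name: backend-health
  selector: deployment.metadata.tier == "backend"   # selector syntax assumed
  verification:
    metrics:
      - name: shared-health-check
        intervalSeconds: 30
        count: 5
        successCondition: "result.ok && result.statusCode == 200"
```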
Use policy selectors to apply the same verification across multiple deployments or environments.
Progressive Delivery Gates
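A gating sketch (field names assumed, as elsewhere on this page): a strict verification on staging, so a single failed measurement blocks promotion to production:

```yaml
# Hypothetical schema: verification on staging gates the promotion
policy:
  name: staging-gate
  selector: env == "staging"       # selector syntax assumed
  verification:
    metrics:
      - name: staging-error-rate
        intervalSeconds: 60
        count: 10
        failureLimit: 0            # no tolerance: fail on first failure
```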
Use verification to gate promotion through environments.
Metric Configuration
Metric Properties
- Name - Unique name for this verification metric, used for identification in logs and the UI.
- Interval (intervalSeconds) - Seconds between each measurement. For example, 30 means check every 30 seconds.
- Count - Total number of measurements to take. Combined with intervalSeconds, this determines the verification duration (for example, 10 measurements at 30-second intervals run for about 5 minutes).
- Provider - Configuration for the metric provider (HTTP, Datadog, etc.). See the provider-specific documentation for available options.
- Success condition - CEL expression evaluated against the provider response; returns true for success. Example: result.ok && result.statusCode == 200
- Failure condition - Optional CEL expression for explicit failure. If matched, verification fails immediately without waiting for more measurements.
- Failure limit (failureLimit) - Number of consecutive failures allowed before the metric is considered failed. Set to 0 for no tolerance (fail on first failure).
- Consecutive successes - Number of consecutive successes required before the metric is considered passed.
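Taken together, the properties above might combine into a single metric entry like this sketch (intervalSeconds and failureLimit appear in this document; the remaining field names are assumptions):

```yaml
verification:
  metrics:
    - name: api-health                  # unique metric name (assumed field)
      intervalSeconds: 30               # measure every 30 seconds
      count: 10                         # 10 x 30s = ~5 minutes total
      provider:
        type: http                      # provider config (assumed shape)
        url: https://api.example.com/health
      successCondition: "result.ok && result.statusCode == 200"
      failureCondition: "result.statusCode >= 500"   # fail fast on server errors
      failureLimit: 2                   # tolerate 2 consecutive failures
      consecutiveSuccesses: 3           # need 3 passes in a row (assumed field)
```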
Metric Providers
Ctrlplane supports multiple metric providers for collecting verification data. Each provider has its own configuration and capabilities:
- HTTP Provider - Query any HTTP endpoint that returns JSON
- Datadog Provider - Query metrics from Datadog’s Metrics API
- Sleep Provider - Wait for a specified duration before considering verification passed
- Terraform Cloud Run Provider - Verify Terraform Cloud run status for infrastructure deployments
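As a sketch of the simplest of these, a Sleep provider might just pause before declaring success, which can serve as a simple bake-time gate (the type and field names here are assumptions):

```yaml
provider:
  type: sleep             # assumed provider type name
  durationSeconds: 300    # assumed field: wait 5 minutes, then pass
```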
Template Variables
All provider configurations support Go templates with access to deployment context.
Storing Secrets in Variables
For sensitive values like API keys, use deployment variables:
1. Create a deployment variable.
2. Set the value.
3. Reference it in the verification config.
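The last step might look like this sketch, assuming a deployment variable named DATADOG_API_KEY and a Go-template path to deployment variables (both are assumptions, not the documented syntax):

```yaml
provider:
  type: datadog                                   # assumed provider type name
  apiKey: "{{ .variables.DATADOG_API_KEY }}"      # assumed template path
  query: "avg:trace.http.request.errors{service:my-app}"
```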
Success Conditions (CEL)
Success conditions are written in CEL (Common Expression Language). The measurement data is available as the result variable.
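A few illustrative conditions; the fields available on result depend on the provider's response shape, so treat these as assumptions:

```yaml
# HTTP provider: status code plus a convenience flag
successCondition: "result.ok && result.statusCode == 200"
# Threshold on a numeric metric value (field name assumed)
successCondition: "result.value < 0.01"
# Explicit failure condition: bail out immediately on server errors
failureCondition: "result.statusCode >= 500"
```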
Verification Lifecycle
1. Policy Evaluation
When a job completes, Ctrlplane evaluates policies to determine which verifications apply based on the policy selectors.
2. Verification Starts
If a matching policy has verification rules, Ctrlplane creates a verification record and starts the measurement process.
3. Measurements Taken
For each configured metric, measurements are taken at the specified interval until the configured count is reached.
4. Verification Result
- Passed: All measurements passed, or failures stayed below failureLimit
- Failed: Failures exceeded failureLimit
5. Policy Action
Based on the verification result, the policy can:
- Allow promotion to the next environment
- Trigger rollback to a previous version
- Block release until manual intervention
Verification Status
| Status | Description |
|---|---|
| running | Verification in progress, taking measurements |
| passed | All checks passed within acceptable limits |
| failed | Too many measurements failed |
| cancelled | Verification was manually cancelled |
Best Practices
Timing Recommendations
| Scenario | Recommended Interval | Recommended Count |
|---|---|---|
| Quick smoke test | 10-30s | 3-5 |
| Standard verification | 30s-1m | 5-10 |
| Extended soak test | 5m | 12-24 |
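For instance, the extended soak test row might translate to the following metric timing (field names assumed, as elsewhere on this page):

```yaml
- name: soak-error-rate
  intervalSeconds: 300    # 5-minute interval
  count: 24               # 24 x 5min = 2 hours of soak time
```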
Failure Limits
| Risk Tolerance | Failure Limit | Notes |
|---|---|---|
| Strict | 1 | Fail on first failure |
| Normal | 2-3 | Allow transient issues |
| Lenient | 5+ | For noisy metrics |
Environment-Specific Recommendations
| Environment | Verification Focus | Timing |
|---|---|---|
| QA | Smoke tests, E2E tests | Quick (1-3min) |
| Staging | Integration tests, error rates | Medium (5min) |
| Production | Error rates, latency, business KPIs | Extended (10min) |
Troubleshooting
Verification always fails
- Check if the provider can reach the target (network, DNS)
- Verify API credentials are correct
- Test the query manually
- Review measurement data for unexpected values
- Check if success condition is too strict
Verification not running
- Verify the policy selector matches the release target
- Check that the policy is enabled
- Review policy evaluation logs
- Ensure verification is configured in the policy rules
Wrong verification applied
- Review policy selectors
- Check policy priority/ordering
- Verify environment and metadata values
- Review which policies matched the release
Provider Documentation
For detailed information about each metric provider, see the provider-specific pages (HTTP, Datadog, Sleep, Terraform Cloud Run).
Next Steps
- Policies Overview - Learn about policy structure
- Gradual Rollouts - Control deployment pace
- Selectors - Deep dive into selector syntax