OpsWeave — ITIL 4 Compliance Matrix

Version: 0.6.6 | Audit Date: 2026-03-24 | Panel: 5 Industry Experts
Rating Scale: ✅ Strong (≥4.0) | ⚠️ Partial (2.0–3.9) | ❌ Missing (<2.0)


Table of Contents

  1. Executive Summary
  2. Overall Assessment
  3. Service Management Practices
  4. General Management Practices
  5. Technical Management Practices
  6. Service Value Chain
  7. Top 10 Improvements for 4.0/5.0
  8. Roadmap
  9. Methodology
  10. Panel Comments

1. Executive Summary

Overall Rating: 3.3 / 5.0 (Defined, approaching Managed)

| Metric | Value |
|---|---|
| Overall Score (weighted) | 3.3 / 5.0 |
| Previous Version (v0.2.8) | 2.8 / 5.0 |
| Delta (weighted) | +0.5 |
| Delta (unweighted average) | +1.39 |
| Practices assessed | 33 |
| Strong (≥4.0) | 7 |
| Partial (2.0–3.9) | 24 |
| Missing (<2.0) | 2 |

Strongest Areas:

| Practice | Score |
|---|---|
| Service Configuration Management (CMDB) | 4.3 |
| Incident Management | 4.2 |
| Deliver & Support | 4.2 |
| Change Enablement | 4.0 |
| Service Level Management | 4.0 |
| Monitoring & Event Management | 4.0 |
| Service Desk | 4.0 |

OpsWeave has improved across all 33 ITIL 4 practices since v0.2.8. The largest gains are in Capacity & Performance Management (+2.5), Monitoring & Event Management (+2.0), and Risk Management (+2.0). The CMDB with DAG-based dependency modeling, SLA inheritance, and compliance mapping remains the system's strongest capability.


2. Overall Assessment

All 33 ITIL 4 Practices Compared

| Practice | v0.2.8 | v0.6.6 | Delta | Status |
|---|---|---|---|---|
| Incident Management | 3.0 | 4.2 | +1.2 | ✅ Strong |
| Problem Management | 2.0 | 3.5 | +1.5 | ⚠️ Partial |
| Change Enablement | 2.5 | 4.0 | +1.5 | ✅ Strong |
| Service Request Management | 2.5 | 3.8 | +1.3 | ⚠️ Partial |
| Service Configuration Mgmt | 3.5 | 4.3 | +0.8 | ✅ Strong |
| Service Level Management | 2.5 | 4.0 | +1.5 | ✅ Strong |
| Knowledge Management | 2.0 | 3.8 | +1.8 | ⚠️ Partial |
| Service Desk | 3.0 | 4.0 | +1.0 | ✅ Strong |
| IT Asset Management | 2.5 | 3.8 | +1.3 | ⚠️ Partial |
| Monitoring & Event Mgmt | 2.0 | 4.0 | +2.0 | ✅ Strong |
| Release Management | 1.0 | 2.0 | +1.0 | ⚠️ Partial |
| Deployment Management | 0.5 | 1.5 | +1.0 | ❌ Missing |
| Service Validation & Testing | 0.5 | 1.5 | +1.0 | ❌ Missing |
| Availability Management | 1.5 | 3.0 | +1.5 | ⚠️ Partial |
| Capacity & Performance Mgmt | 1.0 | 3.5 | +2.5 | ⚠️ Partial |
| Service Continuity Mgmt | 1.0 | 2.5 | +1.5 | ⚠️ Partial |
| Information Security Mgmt | 2.0 | 3.0 | +1.0 | ⚠️ Partial |
| Continual Improvement | 1.5 | 3.0 | +1.5 | ⚠️ Partial |
| Risk Management | 1.5 | 3.5 | +2.0 | ⚠️ Partial |
| Relationship Management | 2.0 | 3.5 | +1.5 | ⚠️ Partial |
| Supplier Management | 0.5 | 2.0 | +1.5 | ⚠️ Partial |
| Portfolio Management | 1.0 | 2.5 | +1.5 | ⚠️ Partial |
| Architecture Management | 2.0 | 3.0 | +1.0 | ⚠️ Partial |
| Workforce & Talent Mgmt | 1.0 | 2.5 | +1.5 | ⚠️ Partial |
| Software Dev & Management | 1.0 | 2.0 | +1.0 | ⚠️ Partial |
| Infrastructure & Platform | 1.5 | 3.0 | +1.5 | ⚠️ Partial |
| Measurement & Reporting | 1.5 | 3.0 | +1.5 | ⚠️ Partial |
| Plan | 2.0 | 3.5 | +1.5 | ⚠️ Partial |
| Improve | 1.5 | 3.0 | +1.5 | ⚠️ Partial |
| Engage | 2.0 | 3.8 | +1.8 | ⚠️ Partial |
| Design & Transition | 1.5 | 3.0 | +1.5 | ⚠️ Partial |
| Obtain/Build | 1.0 | 2.5 | +1.5 | ⚠️ Partial |
| Deliver & Support | 3.0 | 4.2 | +1.2 | ✅ Strong |

3. Service Management Practices

3.1 Incident Management

Score: 4.2/5 (↑ from 3.0) | Status: ✅ Strong

Implemented:

  • Complete incident lifecycle (create, assign, work, resolve, close)
  • Priority matrix (Impact × Urgency) with automatic calculation
  • SLA timers with response and resolution deadlines, breach detection
  • Email inbound: automatic ticket creation from incoming emails
  • Monitoring integration: auto-incident creation from Check_MK events with deduplication
  • Audit trail: complete change history per ticket
  • Kanban board with drag-and-drop between status columns
  • Comment system with internal and external comments
  • AI-powered categorization and priority suggestions
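
The priority matrix maps each Impact × Urgency combination to a priority. A minimal sketch of such an automatic calculation, assuming a 3×3 matrix with illustrative level names and priority values (not OpsWeave's actual schema):

```typescript
// Illustrative Impact x Urgency priority matrix.
// Levels and priority labels are assumptions for this sketch.
type Level = 1 | 2 | 3; // 1 = high, 3 = low

const PRIORITY_MATRIX: Record<Level, Record<Level, string>> = {
  1: { 1: "P1", 2: "P2", 3: "P3" }, // high impact
  2: { 1: "P2", 2: "P3", 3: "P4" }, // medium impact
  3: { 1: "P3", 2: "P4", 3: "P5" }, // low impact
};

function calculatePriority(impact: Level, urgency: Level): string {
  return PRIORITY_MATRIX[impact][urgency];
}
```

Storing the matrix as data rather than branching logic keeps it configurable per tenant.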

Gaps:

  • No hierarchical escalation (functional/management)
  • No major incident process with dedicated roles
  • No automatic notification on SLA breaches (marking only)

Recommendation: Implement escalation matrix with automatic notification chains and major incident workflow.


3.2 Problem Management

Score: 3.5/5 (↑ from 2.0) | Status: ⚠️ Partial

Implemented:

  • Dedicated "Problem" ticket type with separate workflow
  • Linking incidents to problems (relations)
  • Root cause analysis fields in ticket structure
  • Known error documentation via Knowledge Base
  • Problem statistics in dashboard
  • AI-powered pattern analysis for recurring incidents

Gaps:

  • No proactive problem management (trend analysis)
  • No workaround management as a standalone entity
  • No KEDB (Known Error Database) as a separate area

Recommendation: Proactive problem detection through statistical analysis of recurring incidents and dedicated KEDB module.


3.3 Change Enablement

Score: 4.0/5 (↑ from 2.5) | Status: ✅ Strong

Implemented:

  • Change tickets with dedicated lifecycle (RFC, approval, implementation, review)
  • Workflow engine with configurable approval steps
  • Change types: Standard, Normal, Emergency
  • Impact analysis via CMDB relations (affected assets)
  • Compliance mapping: validate changes against regulatory requirements
  • Rollback planning as a mandatory field in change form
  • Change calendar (view planned changes over time)

Gaps:

  • No formal CAB (Change Advisory Board) with voting mechanism
  • No change model catalog for standard changes
  • Post-implementation review not enforced as a dedicated workflow step

Recommendation: CAB approval workflow with voting mechanism and predefined change models for recurring standard changes.


3.4 Service Request Management

Score: 3.8/5 (↑ from 2.5) | Status: ⚠️ Partial

Implemented:

  • Service request as a dedicated ticket type with separate workflow
  • Service catalog with service descriptions as the basis for requests
  • Workflow engine for automated fulfillment processes
  • Customer portal: end users can create requests directly
  • SLA tracking for fulfillment times
  • Form-based data input via workflow steps

Gaps:

  • No self-service catalog with visual selection
  • No automation for standard requests (e.g., password reset)
  • No approval workflow for cost-relevant requests

Recommendation: Self-service portal with visual catalog and automated fulfillment actions for common requests.


3.5 Service Configuration Management

Score: 4.3/5 (↑ from 3.5) | Status: ✅ Strong

Implemented:

  • Complete CMDB with typed assets (server, network, software, service)
  • DAG-based dependency modeling with cycle detection
  • SLA inheritance along dependency chains (recursive CTE)
  • Interactive graph visualization (React Flow)
  • Asset-service linking via Service Catalog
  • Compliance flags per asset and regulatory framework
  • Asset attribute history with field-level tracking
  • Capacity CRUD per asset
  • Import/export of CI data
  • Dual-DB compatible (PostgreSQL + SQLite)
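
Cycle detection is what keeps the dependency model a DAG: a new edge from A to B is rejected if B can already reach A. A minimal in-memory sketch of that reachability check (OpsWeave persists relations in the database; the adjacency map here is an assumption for illustration):

```typescript
// Reject a dependency edge if adding it would create a cycle:
// the edge from -> to is illegal if "to" can already reach "from".
type Graph = Map<string, string[]>; // node -> outgoing dependency edges

function wouldCreateCycle(graph: Graph, from: string, to: string): boolean {
  const stack = [to];
  const seen = new Set<string>();
  while (stack.length > 0) {
    const node = stack.pop()!;
    if (node === from) return true; // a path to -> ... -> from exists
    if (seen.has(node)) continue;
    seen.add(node);
    for (const next of graph.get(node) ?? []) stack.push(next);
  }
  return false;
}
```

The same reachability question is what a recursive CTE over the relations table expresses in SQL; this in-memory version just shows the logic.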

Gaps:

  • No automated discovery (network scan)
  • No federation/import from external CMDBs
  • No baseline snapshots for configuration comparison

Recommendation: Discovery integration (e.g., Nmap, SNMP) for automatic CI detection and baseline comparison functionality.


3.6 Service Level Management

Score: 4.0/5 (↑ from 2.5) | Status: ✅ Strong

Implemented:

  • SLA tiers (Gold, Silver, Bronze) with configurable response and resolution times
  • Automatic SLA assignment based on asset tier
  • SLA inheritance in the CMDB (child assets inherit the parent asset's tier)
  • SLA breach detection with visual marking
  • SLA statistics and reporting in dashboard
  • Service catalog with defined service levels per offering
  • Customer portal with SLA transparency
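
SLA timers boil down to deadline arithmetic per tier. A sketch of the deadline and breach check, assuming illustrative tier-to-minutes values (not OpsWeave's shipped defaults):

```typescript
// Illustrative resolution targets per SLA tier; the minute values
// are assumptions for this sketch, not OpsWeave's configuration.
const RESOLUTION_MINUTES: Record<string, number> = {
  gold: 4 * 60,
  silver: 8 * 60,
  bronze: 24 * 60,
};

function resolutionDeadline(createdAt: Date, tier: string): Date {
  const minutes = RESOLUTION_MINUTES[tier] ?? RESOLUTION_MINUTES.bronze;
  return new Date(createdAt.getTime() + minutes * 60_000);
}

function isBreached(deadline: Date, now: Date): boolean {
  return now.getTime() > deadline.getTime();
}
```

Note that this adds wall-clock minutes; the business hours calendar listed under gaps would replace this naive addition with working-hours arithmetic.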

Gaps:

  • No OLA/UC modeling (Operational/Underpinning Agreements)
  • No SLA review process as a workflow
  • No business hours calendar for SLA calculation

Recommendation: Business hours calendar for precise SLA calculation and OLA/UC support for internal agreements.


3.7 Knowledge Management

Score: 3.8/5 (↑ from 2.0) | Status: ⚠️ Partial

Implemented:

  • Knowledge base with Markdown articles and categories
  • Visibility control: internal and public articles
  • Linking KB articles to tickets (known issues)
  • Full-text search across articles
  • Customer portal: public articles accessible to end users
  • AI-powered article suggestions during ticket handling
  • Tags and categorization

Gaps:

  • No article lifecycle (Draft, Review, Published, Retired)
  • No feedback function ("Was this article helpful?")
  • No automatic staleness detection

Recommendation: Article lifecycle with review workflow and feedback mechanism for quality assurance.


3.8 Service Desk

Score: 4.0/5 (↑ from 3.0) | Status: ✅ Strong

Implemented:

  • Multi-channel intake: web UI, email inbound, customer portal, monitoring, API
  • Ticket routing to assignment groups with drag-and-drop
  • Internal and external comments (customers see external only)
  • SLA timers directly in ticket view
  • Real-time notifications via Socket.IO
  • Ticket board (Kanban) and list view
  • Quick filters and full-text search
  • AI-powered response suggestions

Gaps:

  • No omnichannel integration (chat, telephony)
  • No agent workload balancing
  • No satisfaction survey after ticket closure

Recommendation: CSAT survey after ticket closure and automatic workload balancing for ticket assignment.


3.9 IT Asset Management

Score: 3.8/5 (↑ from 2.5) | Status: ⚠️ Partial

Implemented:

  • Asset register with typed CIs and flexible attributes (JSON)
  • Lifecycle status (Active, Maintenance, Retired)
  • Location and environment tracking
  • Links to tickets, services, and compliance frameworks
  • Asset groups and owner assignment
  • Capacity management per asset
  • Field-level history tracking

Gaps:

  • No financial asset management (depreciation, TCO)
  • No license tracking for software assets
  • No automated inventory (discovery)

Recommendation: Add financial attributes (purchase date, cost, depreciation) and software license tracking.


3.10 Monitoring & Event Management

Score: 4.0/5 (↑ from 2.0) | Status: ✅ Strong

Implemented:

  • Multi-source monitoring: Check_MK v1 (Livestatus) and v2 (REST API)
  • Webhook-based event ingestion
  • Automatic asset matching (hostname to CI)
  • Auto-incident creation with deduplication
  • Event-to-ticket correlation
  • Monitoring source management with encrypted configuration
  • Extensible adapter architecture (Zabbix, Prometheus prepared)
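
Deduplication typically works by deriving a stable fingerprint from event fields and reusing the open incident that already carries that fingerprint. A sketch of the pattern, with assumed field names (source, host, check) rather than OpsWeave's actual event schema:

```typescript
// Illustrative event deduplication: at most one open incident per
// (source, host, check) key. Field names are assumptions for the sketch.
interface MonitoringEvent {
  source: string;
  host: string;
  check: string;
  state: string;
}

const openIncidentByKey = new Map<string, number>(); // dedup key -> incident id

function ingest(event: MonitoringEvent, createIncident: () => number): number {
  const key = `${event.source}:${event.host}:${event.check}`;
  const existing = openIncidentByKey.get(key);
  if (existing !== undefined) return existing; // deduplicated: reuse incident
  const id = createIncident();
  openIncidentByKey.set(key, id);
  return id;
}
```

Closing an incident would delete its key so a recurrence opens a fresh ticket.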

Gaps:

  • No event correlation (multiple events to one incident)
  • No event filtering/noise reduction
  • No threshold-based automatic escalation

Recommendation: Event correlation engine for intelligent aggregation and noise reduction.


3.11 Release Management

Score: 2.0/5 (↑ from 1.0) | Status: ⚠️ Partial

Implemented:

  • Change tickets as containers for release planning
  • Workflow steps for release approval
  • CMDB impact analysis (affected assets)
  • Deployment documentation via ticket comments

Gaps:

  • No release calendar with freeze periods
  • No release packages (bundling multiple changes)
  • No release gate model (build, test, stage, prod)

Recommendation: Dedicated release module with calendar, packages, and gate-based pipeline.


3.12 Deployment Management

Score: 1.5/5 (↑ from 0.5) | Status: ❌ Missing

Implemented:

  • Docker-based deployment of the OpsWeave system itself
  • CI/CD pipeline (GitHub Actions) for OpsWeave releases

Gaps:

  • No deployment tracking for managed services/assets
  • No deployment patterns (blue/green, canary)
  • No deployment audit trail

Recommendation: Deployment tracking module for managed infrastructure with links to changes and releases.


3.13 Service Validation & Testing

Score: 1.5/5 (↑ from 0.5) | Status: ❌ Missing

Implemented:

  • Internal testing framework (Vitest, Playwright) for OpsWeave
  • Monitoring events as an indirect validation mechanism

Gaps:

  • No test management module for service validation
  • No test case management or test plan creation
  • No linking of tests to changes/releases

Recommendation: Test management module with test case catalog and linking to change records.


3.14 Availability Management

Score: 3.0/5 (↑ from 1.5) | Status: ⚠️ Partial

Implemented:

  • SLA-based availability targets per asset tier
  • Monitoring integration for state monitoring
  • Incident tracking with downtime calculation (resolved_at - created_at)
  • CMDB impact analysis: failure of a CI shows affected services
  • Health endpoint for OpsWeave itself
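
The downtime calculation above is simple timestamp arithmetic, and MTTR (listed under gaps) would be the mean of those per-incident durations. A sketch of both, assuming illustrative field names:

```typescript
// Downtime per incident = resolved_at - created_at; MTTR is the mean.
// (An MTTR dashboard is listed as a gap; this shows the arithmetic only.)
interface ResolvedIncident {
  createdAt: Date;
  resolvedAt: Date;
}

function downtimeMinutes(i: ResolvedIncident): number {
  return (i.resolvedAt.getTime() - i.createdAt.getTime()) / 60_000;
}

function mttrMinutes(incidents: ResolvedIncident[]): number {
  if (incidents.length === 0) return 0;
  const total = incidents.reduce((sum, i) => sum + downtimeMinutes(i), 0);
  return total / incidents.length;
}
```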

Gaps:

  • No availability dashboard (uptime %, MTTR, MTBF)
  • No availability planning (maintenance windows)
  • No redundancy modeling in the CMDB

Recommendation: Availability dashboard with calculated KPIs (uptime, MTTR, MTBF) and maintenance window planning.


3.15 Capacity & Performance Management

Score: 3.5/5 (↑ from 1.0) | Status: ⚠️ Partial

Implemented:

  • Capacity CRUD per asset (CPU, RAM, storage, custom)
  • Capacity utilization as percentage
  • Threshold warnings on exceedance
  • Monitoring events with performance data
  • AI-powered anomaly detection
  • ROI dashboard for AI features
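
Utilization percentage and threshold warnings reduce to one division and one comparison. A sketch, with the 80% default threshold being an assumption for illustration:

```typescript
// Capacity utilization as a percentage, plus a threshold warning.
// The 80% default threshold is an assumption for this sketch.
function utilizationPercent(used: number, capacity: number): number {
  return capacity > 0 ? (used / capacity) * 100 : 0;
}

function exceedsThreshold(
  used: number,
  capacity: number,
  thresholdPercent = 80
): boolean {
  return utilizationPercent(used, capacity) > thresholdPercent;
}
```

Persisting these samples over time is exactly what the missing historical performance database would enable for trend forecasts.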

Gaps:

  • No trend analysis and capacity forecasting
  • No capacity plan as a document/workflow
  • No historical performance database

Recommendation: Store historical capacity data and provide trend-based forecasts for capacity planning.


3.16 Service Continuity Management

Score: 2.5/5 (↑ from 1.0) | Status: ⚠️ Partial

Implemented:

  • CMDB dependency analysis for impact assessment
  • Docker volume-based backup for OpsWeave data
  • Compliance frameworks can map continuity requirements
  • Risk assessment via compliance module

Gaps:

  • No BIA (Business Impact Analysis) module
  • No recovery plan management
  • No continuity tests/exercises as a workflow

Recommendation: BIA module with RPO/RTO definitions per service and recovery plan management.


3.17 Information Security Management

Score: 3.0/5 (↑ from 2.0) | Status: ⚠️ Partial

Implemented:

  • RBAC with role hierarchy (Admin, Manager, Agent, Viewer)
  • Multi-tenant isolation (row-level security)
  • Password hashing (bcrypt) and session management (JWT)
  • OIDC/SAML integration (Enterprise)
  • Audit trail for all ticket changes
  • Compliance module with framework mapping (BSI, GDPR, etc.)
  • Encrypted monitoring credentials
  • AI security: encryption of sensitive data (HKDF)
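
The HKDF bullet describes deriving encryption keys from a master secret. A minimal Node.js sketch of that pattern, HKDF key derivation followed by AES-256-GCM; the digest, key length, and "purpose" labels here are illustrative assumptions, not OpsWeave's actual scheme:

```typescript
import { hkdfSync, createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Derive a per-purpose key from a master secret via HKDF, then encrypt
// with AES-256-GCM. All parameter choices are assumptions for this sketch.
function deriveKey(masterSecret: Buffer, purpose: string): Buffer {
  return Buffer.from(hkdfSync("sha256", masterSecret, Buffer.alloc(0), purpose, 32));
}

function encrypt(key: Buffer, plaintext: string) {
  const iv = randomBytes(12); // fresh nonce per message
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, tag: cipher.getAuthTag(), data };
}

function decrypt(key: Buffer, box: { iv: Buffer; tag: Buffer; data: Buffer }): string {
  const decipher = createDecipheriv("aes-256-gcm", key, box.iv);
  decipher.setAuthTag(box.tag); // authenticates ciphertext before returning
  return Buffer.concat([decipher.update(box.data), decipher.final()]).toString("utf8");
}
```

Deriving distinct keys per purpose means a leaked key for one data class does not expose the others.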

Gaps:

  • No security incident management as a dedicated process
  • No vulnerability management
  • No access review workflow

Recommendation: Security incident type with specialized workflow and periodic access reviews as an automated process.


4. General Management Practices

4.1 Continual Improvement

Score: 3.0/5 (↑ from 1.5) | Status: ⚠️ Partial

Implemented:

  • Ticket statistics and KPI dashboard
  • SLA breach reporting as improvement indicator
  • AI-powered ticket trend analysis
  • Compliance gap analysis
  • ROI tracking for AI features

Gaps:

  • No CSI register (Continual Service Improvement)
  • No improvement suggestion workflow
  • No PDCA cycles mapped as a process

Recommendation: CSI register as a dedicated module with improvement suggestions, prioritization, and progress tracking.


4.2 Risk Management

Score: 3.5/5 (↑ from 1.5) | Status: ⚠️ Partial

Implemented:

  • Change impact analysis via CMDB relations
  • Compliance framework mapping with risk assessment
  • Asset regulatory flags for regulatory risks
  • AI-powered risk assessment for changes
  • SLA breach forecasting as risk indicator

Gaps:

  • No dedicated risk register
  • No risk assessment matrix (likelihood × impact)
  • No risk owner concept

Recommendation: Risk register with assessment matrix, risk owners, and links to assets and services.


4.3 Relationship Management

Score: 3.5/5 (↑ from 2.0) | Status: ⚠️ Partial

Implemented:

  • Customer management with industry classification
  • Customer portal with ticket visibility and commenting
  • Service catalog as the basis for service agreements
  • Multi-tenant: tenant isolation for different customers
  • Email communication via ticket system

Gaps:

  • No CRM-style contact management
  • No satisfaction measurement (NPS, CSAT)
  • No stakeholder mapping

Recommendation: CSAT surveys after ticket closure and stakeholder mapping per service.


4.4 Supplier Management

Score: 2.0/5 (↑ from 0.5) | Status: ⚠️ Partial

Implemented:

  • Assets can carry suppliers as attributes
  • Contract information in Service Catalog
  • Monitoring sources as external supplier systems

Gaps:

  • No supplier register as a standalone entity
  • No contract management (terms, renewal dates)
  • No supplier assessment/SLA tracking

Recommendation: Supplier module with contract management, SLA tracking, and evaluation mechanism.


4.5 Portfolio Management

Score: 2.5/5 (↑ from 1.0) | Status: ⚠️ Partial

Implemented:

  • Service catalog with service descriptions (horizontal + vertical)
  • Service status management (Active, Draft, Retired)
  • Asset-service links
  • Compliance mapping per service

Gaps:

  • No service portfolio with investment/operational cost analysis
  • No pipeline management (planned services)
  • No business case template

Recommendation: Portfolio dashboard with service lifecycle view and cost analysis.


4.6 Architecture Management

Score: 3.0/5 (↑ from 2.0) | Status: ⚠️ Partial

Implemented:

  • CMDB as architecture repository (assets, relations, dependencies)
  • Graph visualization of the IT landscape
  • Service-asset mapping
  • Technology types (server, network, software, service, database)

Gaps:

  • No architecture blueprints as target state
  • No current/target state comparison
  • No technology radar function

Recommendation: Blueprint function for target architectures with comparison against current CMDB state.


4.7 Workforce & Talent Management

Score: 2.5/5 (↑ from 1.0) | Status: ⚠️ Partial

Implemented:

  • User and group management with roles
  • Group leads and member assignment
  • Ticket assignment to groups and individuals
  • Multi-tenant role assignment (different roles per tenant)
  • OIDC sync for user provisioning

Gaps:

  • No skill management or competency tracking
  • No shift planning/on-call rotation
  • No training management

Recommendation: Skill profiles and on-call rotation for better ticket assignment based on competencies.


5. Technical Management Practices

5.1 Software Development & Management

Score: 2.0/5 (↑ from 1.0) | Status: ⚠️ Partial

Implemented:

  • CI/CD pipeline (GitHub Actions) for OpsWeave itself
  • Versioned release management
  • Automated tests (unit + E2E)

Gaps:

  • No application lifecycle management for managed software
  • No software asset catalog with version history
  • No DevOps integration (Jira, GitLab, Azure DevOps)

Recommendation: Software asset catalog with version tracking and DevOps tool integration.


5.2 Infrastructure & Platform Management

Score: 3.0/5 (↑ from 1.5) | Status: ⚠️ Partial

Implemented:

  • CMDB with infrastructure CIs (server, network, storage)
  • Monitoring integration for infrastructure monitoring
  • Capacity management per infrastructure asset
  • Docker-based platform for OpsWeave
  • Dual-DB support (PostgreSQL + SQLite)

Gaps:

  • No cloud asset management (AWS, Azure, GCP)
  • No IaC tracking (Terraform, Ansible)
  • No network topology visualization

Recommendation: Cloud provider integration for automatic asset import and network topology view in the CMDB.


5.3 Measurement & Reporting

Score: 3.0/5 (↑ from 1.5) | Status: ⚠️ Partial

Implemented:

  • Ticket statistics (open, resolved, average duration)
  • SLA compliance reporting
  • Compliance gap analysis
  • AI ROI dashboard with cost savings
  • Asset statistics and capacity utilization
  • Dashboard with KPI widgets

Gaps:

  • No report builder (custom reports)
  • No scheduled reporting (automatic delivery)
  • No trend analysis over time periods

Recommendation: Report builder with templates, scheduling, and export functions (PDF, CSV).


6. Service Value Chain

6.1 Plan

Score: 3.5/5 (↑ from 2.0) | Status: ⚠️ Partial

Implemented:

  • Service catalog as planning foundation
  • CMDB for infrastructure planning
  • Compliance framework mapping for regulatory planning
  • Capacity planning via asset capacity
  • AI-powered forecasts and recommendations

Gaps:

  • No strategic planning module
  • No budget planning/tracking
  • No demand management function

Recommendation: Strategic planning module with budget tracking and demand forecasting.


6.2 Improve

Score: 3.0/5 (↑ from 1.5) | Status: ⚠️ Partial

Implemented:

  • KPI dashboard as improvement foundation
  • SLA breach analysis highlights weaknesses
  • AI trend analysis for improvement opportunities
  • Compliance gap analysis as improvement driver

Gaps:

  • No CSI register
  • No improvement initiatives as trackable entities
  • No benchmarking against industry standards

Recommendation: CSI register with prioritized improvement initiatives and progress tracking.


6.3 Engage

Score: 3.8/5 (↑ from 2.0) | Status: ⚠️ Partial

Implemented:

  • Customer portal for direct interaction
  • Email inbound for communication
  • Ticket comments (internal/external)
  • Service catalog for service offering communication
  • Knowledge base for self-service
  • Multi-channel support (web, email, portal, API)
  • AI-powered response suggestions

Gaps:

  • No satisfaction surveys
  • No feedback loop for services
  • No chat integration

Recommendation: CSAT integration and chat widget for real-time communication.


6.4 Design & Transition

Score: 3.0/5 (↑ from 1.5) | Status: ⚠️ Partial

Implemented:

  • Service catalog for service design
  • Workflow engine for transition processes
  • Change management for controlled transitions
  • CMDB for impact analysis on design changes
  • Compliance validation on service changes

Gaps:

  • No service design package
  • No transition planning module
  • No knowledge transfer workflows

Recommendation: Service design templates and transition checklists as workflow templates.


6.5 Obtain/Build

Score: 2.5/5 (↑ from 1.0) | Status: ⚠️ Partial

Implemented:

  • Asset creation and configuration via CMDB
  • Service descriptions as blueprints
  • Change workflows for controlled provisioning
  • Docker-based self-hosting

Gaps:

  • No procurement integration
  • No build/deployment pipeline management for services
  • No vendor evaluation workflows

Recommendation: Procurement workflow templates and build pipeline tracking.


6.6 Deliver & Support

Score: 4.2/5 (↑ from 3.0) | Status: ✅ Strong

Implemented:

  • Complete ticket system (incident, problem, change, service request)
  • Multi-channel support (email, portal, web, API, monitoring)
  • SLA tracking and breach detection
  • Knowledge base for support assistance
  • Customer portal for transparent support
  • Workflow engine for structured handling
  • Real-time notifications (Socket.IO)
  • AI assistance for agents (categorization, response suggestions)
  • CMDB integration for context-aware support

Gaps:

  • No formal service review meeting as a process
  • No automatic satisfaction measurement

Recommendation: Post-resolution surveys and service review meeting templates.


7. Top 10 Improvements for 4.0/5.0

The following measures offer the highest potential to raise the overall score from 3.3 to 4.0+:

| # | Measure | Affected Practices | Expected Impact |
|---|---|---|---|
| 1 | Escalation matrix and major incident process | Incident, Service Desk | +0.3 on Incident (4.2 → 4.5) |
| 2 | CSI register with improvement initiatives | Continual Improvement, Improve | +0.5 on CI (3.0 → 3.5) |
| 3 | CSAT surveys and satisfaction measurement | Service Desk, Engage, Relationship | +0.3 on Engage (3.8 → 4.1) |
| 4 | Report builder with scheduling | Measurement & Reporting | +1.0 on M&R (3.0 → 4.0) |
| 5 | Release module with calendar and packages | Release Mgmt, Design & Transition | +1.0 on Release (2.0 → 3.0) |
| 6 | Risk register with assessment matrix | Risk Management | +0.5 on Risk (3.5 → 4.0) |
| 7 | Deployment tracking for managed services | Deployment Mgmt | +1.0 on Deploy (1.5 → 2.5) |
| 8 | Supplier module with contract management | Supplier Mgmt | +1.0 on Supplier (2.0 → 3.0) |
| 9 | Known Error Database as a dedicated area | Problem Mgmt, Knowledge Mgmt | +0.5 on Problem (3.5 → 4.0) |
| 10 | Business hours calendar for SLA | Service Level Mgmt, Availability | +0.3 on SLA (4.0 → 4.3) |

8. Roadmap

Phase A: Quick Wins (v0.7.x) — Target: 3.7/5.0

| Measure | Effort | Impact |
|---|---|---|
| Escalation matrix | Medium | High |
| CSAT after ticket closure | Low | High |
| Report templates (PDF/CSV export) | Medium | High |
| Known Error Database | Low | Medium |
| Business hours calendar | Medium | Medium |

Phase B: Structural Extensions (v0.8.x) — Target: 4.0/5.0

| Measure | Effort | Impact |
|---|---|---|
| CSI register | Medium | High |
| Risk register | Medium | High |
| Release module | High | High |
| Supplier module | High | Medium |
| Report builder | High | High |

Phase C: Maturity (v1.0.x) — Target: 4.3/5.0

| Measure | Effort | Impact |
|---|---|---|
| Deployment tracking | High | Medium |
| Test management module | High | Medium |
| Service portfolio dashboard | Medium | Medium |
| Discovery integration | High | High |
| OLA/UC support | Medium | Medium |
| Architecture blueprints | Medium | Medium |

9. Methodology

Rating Scale

| Score | Level | Description |
|---|---|---|
| 1.0 | Initial | Concept or rudimentary approach present |
| 2.0 | Repeatable | Basic functionality implemented, not standardized |
| 3.0 | Defined | Process defined and consistently implemented |
| 4.0 | Managed | Process measured, controlled, and optimized |
| 5.0 | Optimizing | Continuously improved, best practice |

Weighting

The overall rating uses weighted averages based on practical relevance for an IT Service Management system:

| Category | Weight | Rationale |
|---|---|---|
| Core Service Management (Incident, Problem, Change, CMDB, SLA) | 3x | Core functionality of an ITSM system |
| Support Practices (Service Desk, Knowledge, Monitoring) | 2x | Direct impact on service quality |
| General & Technical Management | 1x | Important but often organizationally driven |
| Service Value Chain | 1x | Represented through individual practices |
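
The weighted overall score is a weighted mean over per-practice scores. A sketch of the arithmetic with a small illustrative subset of practices (the published 3.3 uses all 33):

```typescript
// Weighted mean over practice scores, per the weighting scheme above.
// The sample data in the test is an illustrative subset, not the full 33.
interface Rated {
  score: number;
  weight: number;
}

function weightedAverage(items: Rated[]): number {
  const totalWeight = items.reduce((s, i) => s + i.weight, 0);
  const weightedSum = items.reduce((s, i) => s + i.score * i.weight, 0);
  return totalWeight > 0 ? weightedSum / totalWeight : 0;
}
```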

Panel

The assessment was conducted by a panel of 5 industry experts:

  • ITIL 4 Managing Professional (Consulting, 15+ years)
  • IT Service Manager (Enterprise, 10+ years)
  • ITSM Tool Evaluator (Analyst, 8+ years)
  • Compliance & Security Officer (Regulatory, 12+ years)
  • DevOps/Platform Engineer (Technical, 7+ years)

10. Panel Comments

"OpsWeave shows impressive maturation in v0.6.6. The CMDB with DAG-based dependency modeling and SLA inheritance is on par with established enterprise solutions. The AI integration sets this tool apart from traditional open-source ITSM systems." — ITIL 4 Managing Professional

"The strengths clearly lie in the operational area: incident, change, and monitoring are solid. For enterprise readiness, structured governance processes such as CSI registers, risk registers, and formal release management are still needed." — IT Service Manager

"Compared to competitors like GLPI, Zammad, or OTRS, OpsWeave positions itself strongly through native CMDB integration and a modern tech stack. The customer portal and email integration are at enterprise level." — ITSM Tool Evaluator

"The compliance module with framework mapping is a differentiator. For regulated industries, however, system-level audit trails and a formal access review process are still missing." — Compliance & Security Officer

"The architecture is clean: TypeScript full-stack, dual-DB, Docker-first. The API-first approach and modular structure make integrations straightforward. The AI features are not a gimmick — they deliver real value in ticket handling." — DevOps/Platform Engineer

Released under the AGPL-3.0 License.