Overview¶
The Enterprise Configuration Management Database (CMDB) is a highly scalable, event-driven platform built on a reactive architecture (Eclipse Vert.x) and backed by PostgreSQL. It provides a single system of record for IT infrastructure, supporting advanced multi-dataset reconciliation, AI-driven analytics (AIOps), high-availability clustering, and granular multi-tenant access control.
The platform is designed around a Zero-Trust, Multi-Tenant architecture, ensuring that every API request is cryptographically validated and strictly scoped. It transforms a passive database into an active automation orchestrator, utilising a real-time event bus, dynamic rule evaluation, and automated action dispatching.
Architecture¶
The LIKPI CMDB follows a decoupled, modular design. The backend is a reactive microservice, while the frontend is a modern Single Page Application (SPA).
Backend Core & Platform¶
The Java/Vert.x backend handles the fundamental operational logic, persistence, and platform security.
- Reactive HTTP/API Router: A non-blocking API gateway managing all inbound REST requests.
- PostgreSQL Persistence: Manages database interactions, schema-less JSONB operations, and table partitioning.
- Activity & Reconciliation Engine: The “brain” of the CMDB, processing merges from staging datasets (SANDBOX) into PRODUCTION by enforcing identification rules, resolving UUIDs, and calculating attribute precedence.
- RBAC & Multi-Tenant Security: Enforces Row-Level Security (RLS) and JWT-based Role-Based Access Control (RBAC) to separate data between tenants.
- License & Quota Service: Validates the platform’s license and enforces limits on HA nodes, CI counts, and users.
- HA Cluster Manager: Manages distributed state and coordinates background job execution via Hazelcast.
Frontend User Interface (UI)¶
The UI is a responsive SPA built with React, TypeScript, and Vite, styled with Tailwind CSS. Key modules include:
- Graph Explorer: An interactive canvas (React Flow) for visualising CI dependencies.
- Data Explorer: A tabular data management interface with an advanced query builder.
- Dashboard Factory: A no-code, drag-and-drop editor for building analytical dashboards.
- AIOps Assistant: A chat interface for natural language to SQL (NL2SQL) queries.
High-Level Topology¶
The system consists of the following tiers:
- Tier 1 – Frontend: End users and APIs access the React SPA.
- Tier 2 – Routing: An API Gateway / Load Balancer distributes traffic.
- Tier 3 – Compute: Multiple Vert.x application nodes form an active-active cluster, synchronised via a Hazelcast Grid.
- Tier 4 – Intelligence: A local Ollama model and a Cloud AI engine provide cognitive services.
- Tier 5 – Storage: A PostgreSQL Primary database with read-only replicas ensures data durability and read scalability.
Key Concepts¶
Core Data Architecture¶
The CMDB uses Single-Table Inheritance with Partitioning to manage Configuration Items (CIs) and their relationships.
- Configuration Items (CIs): Stored in ``cmdb_base_element``, they combine strict core attributes (Name, Status, Class) with fully flexible, schema-less extended attributes stored in a JSONB column.
- Directional Relationships: Connections between CIs (e.g., ``RUNS_ON``, ``DEPENDENCY``) are tracked in ``cmdb_base_relationship``, enforcing strict Source-to-Target cardinality.
- Ontology & Schema Dictionary: A native management system for CI Classes, custom attributes, and a taxonomy (Category, Type, Item, Model) to standardise infrastructure definitions.
- Dataset Isolation: Data is segregated into datasets (e.g., ``PRODUCTION``, ``SANDBOX``) via PostgreSQL table partitioning. This ensures massive read/write scalability and prevents data corruption during ingestion.
CI Class Inheritance¶
All CIs derive from a foundational hierarchy based on the DMTF CIM Schema. Below are the core properties of the base classes.
Foundation Classes (ManagedElement, LogicalElement, etc.)
| Property | Data Type | Description |
|---|---|---|
| InstanceID | string | Opaque unique identifier (OrgID:LocalID pattern) |
| Caption | string | Short textual description (max 64 chars) |
| Description | string | Detailed textual description |
| ElementName | string | User-friendly name for the instance |
| InstallDate | datetime | When the object was installed |
| Name | string | Label by which the object is known (max 256 chars) |
| Status | string | Current status: OK, Error, Degraded, Pred Fail… |
PhysicalElement
| Property | Data Type | Description |
|---|---|---|
| Manufacturer | string | Organization that produced the device |
| Model | string | Name by which the physical element is known |
| SerialNumber | string | Manufacturer-allocated serial number |
| Tag | string | Uniquely identifies the physical element |
| Version | string | Version of the physical element |
ComputerSystem
| Property | Data Type | Description |
|---|---|---|
| NameFormat | string | How the ComputerSystem Name is generated (e.g., IP, UUID) |
| Dedicated[] | uint16 array | Purpose(s) of the system (Switch, Firewall, …) |
| ResetCapability | uint16 | Hardware reset capability |
UnitaryComputerSystem
| Property | Data Type | Description |
|---|---|---|
| InitialLoadInfo[] | string array | Data to locate boot device |
| LastLoadInfo | string | Identifies device that last loaded the OS |
| PowerManagementCapabilities[] | uint16 array | Power management capabilities |
| PowerState | uint16 | Current power state (Full Power, Standby, …) |
The full DMTF CIM schema (v2.38) is used as a reference. MOF files are
located under the Schemas/CIM238/DMTF/ tree.
Reconciliation & Merge Engine¶
The Merge Engine is the “brain” of the CMDB, ensuring data from multiple, often conflicting sources is deduplicated and merged into a single “Golden Record”. It operates in three distinct phases.
Phase 1 – Identification (Match)¶
The engine uses configurable Identification Rules (stored in ``cmdb_ident_rules``), evaluated in order of priority.
- If a match is found (e.g., on ``unique_identifier``, ``serial_number + model``, or ``name + class_id``), the staging CI is linked to the production CI via a shared ``reconciliation_identity`` (UUID).
- If no match is found, the CI is treated as new and a fresh Golden Record is created.
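The matching pass above can be sketched as follows. This is an illustrative Python sketch, not the engine's actual (Java) code; the rule structure, the assumption that a lower ``priority`` number means higher precedence, and the field names are all hypothetical:

```python
# Illustrative sketch: evaluate identification rules in priority order
# (assuming lower number = evaluated first) and return the first
# production CI whose key attributes all match the staging CI.
def identify(staging_ci, production_cis, ident_rules):
    for rule in sorted(ident_rules, key=lambda r: r["priority"]):
        keys = rule["attributes"]          # e.g. ["serial_number", "model"]
        if any(staging_ci.get(k) in (None, "") for k in keys):
            continue                       # rule not applicable to this CI
        for prod in production_cis:
            if all(prod.get(k) == staging_ci.get(k) for k in keys):
                return prod                # match: reuse its reconciliation_identity
    return None                            # no match: create a new Golden Record
```

A ``None`` result corresponds to the "fresh Golden Record" branch described above.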
Phase 2 – Dynamic Precedence (Win)¶
Conflicts are resolved using a granular hierarchy:
- Dataset Level (Global): e.g., ``AWS_DISCOVERY`` (Priority 80) beats ``MANUAL_IMPORT`` (50).
- Class Level (Override): e.g., for ``NETWORKROUTER``, ``CISCO_API`` (90) may override the global default.
- Attribute Level (Micro-Override): A specific attribute, like ``cpu_count``, can be owned by ``VMWARE_DISCOVERY`` (100) regardless of any other priority.
The engine builds a “Frankenstein” record using the most trusted source for each attribute.
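The three-level lookup and per-attribute merge can be sketched in Python. This is illustrative only, assuming hypothetical in-memory priority tables; the real engine reads precedence from its database configuration:

```python
# Illustrative sketch of the three-level precedence lookup:
# attribute-level micro-override > class-level override > dataset-level default.
def effective_priority(dataset, class_id, attribute,
                       dataset_prio, class_prio, attr_prio):
    if (dataset, attribute) in attr_prio:
        return attr_prio[(dataset, attribute)]
    if (dataset, class_id) in class_prio:
        return class_prio[(dataset, class_id)]
    return dataset_prio.get(dataset, 0)

def merge_golden_record(candidates, dataset_prio, class_prio, attr_prio):
    # For each attribute, keep the value from the most trusted source.
    golden = {}
    for attr in {a for c in candidates for a in c["attrs"]}:
        best = max((c for c in candidates if attr in c["attrs"]),
                   key=lambda c: effective_priority(c["dataset"], c["class_id"],
                                                    attr, dataset_prio,
                                                    class_prio, attr_prio))
        golden[attr] = best["attrs"][attr]
    return golden
```

Using the documentation's example numbers, ``AWS_DISCOVERY`` (80) supplies the name over ``MANUAL_IMPORT`` (50), while an attribute-level entry lets ``VMWARE_DISCOVERY`` (100) own ``cpu_count``.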
Phase 3 – Time-Decay Aging (Freshness)¶
To prevent stale data from a once-trusted source from permanently locking attributes, the engine applies Time Decay.
When enabled (enable_time_decay = true), the effective priority of a
source is calculated as:
Days Old = Current Date – Last Update Timestamp
If Days Old <= grace_period_days, Effective Priority = Original Priority.
If Days Old > grace_period_days:
Penalty = (Days Old - grace_period_days) * decay_rate_per_day
Effective Priority = MAX(Original Priority - Penalty, priority_floor)
Example Scenario
| Parameter | Value |
|---|---|
| Original Priority | 800 |
| Grace Period | 14 days |
| Decay Rate | 10/day |
| Priority Floor | 400 |
| Incoming Source Priority | 500 |
| Day | Days Past Grace | Penalty | Effective Priority | Overwrite by Source (500)? |
|---|---|---|---|---|
| 1–14 | 0 | 0 | 800 | No |
| 24 | 10 | 100 | 700 | No |
| 45 | 31 | 310 | 490 | Yes |
This mechanism allows fresh data from a lower-priority source to naturally take over when the authoritative source goes silent.
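The decay formula can be sketched directly; parameter names mirror the configuration keys above, but this is an illustrative sketch rather than the engine's actual code:

```python
# Time-Decay sketch: priority stays intact during the grace period,
# then decays linearly down to a configured floor.
def decayed_priority(original_priority, days_old,
                     grace_period_days, decay_rate_per_day, priority_floor):
    if days_old <= grace_period_days:
        return original_priority
    penalty = (days_old - grace_period_days) * decay_rate_per_day
    return max(original_priority - penalty, priority_floor)
```

With the example parameters (800 original, 14-day grace, 10/day decay, floor 400), day 24 yields 700 and day 45 yields 490, matching the table above.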
Graph Traversal & Service Mapping¶
PostgreSQL is used as a graph database via Adjacency List modelling and Recursive Common Table Expressions (CTEs).
- Nodes: Configuration Items stored in ``cmdb_base_element``.
- Edges: Directional relationships in ``cmdb_base_relationship`` with attributes like ``source_instance_id``, ``target_instance_id``, ``class_id`` (verb), and ``dataset_id`` (always ``PRODUCTION``).
Impact Analysis (Top-Down)¶
When a server fails, this walk finds everything that depends on it.
WITH RECURSIVE graph_walk(root_id, parent_id, child_id, depth) AS (
SELECT
r.source_instance_id,
r.source_instance_id,
r.target_instance_id,
1
FROM cmdb_base_relationship r
JOIN cmdb_base_element ci ON ci.instance_id = r.source_instance_id
WHERE ci.name ILIKE '%SRV-01%'
AND ci.class_id = 'ComputerSystem'
AND ci.dataset_id = 'PRODUCTION'
AND ci.is_deleted = false
UNION
SELECT
gw.root_id,
r.source_instance_id,
r.target_instance_id,
gw.depth + 1
FROM cmdb_base_relationship r
JOIN graph_walk gw ON r.source_instance_id = gw.child_id
WHERE gw.depth < 6
AND r.dataset_id = 'PRODUCTION'
)
SELECT DISTINCT ON (gw.root_id, gw.child_id)
gw.depth,
gw.child_id as instance_id,
ci.name,
ci.class_id,
ci.ci_status
FROM graph_walk gw
JOIN cmdb_base_element ci ON gw.child_id = ci.instance_id
WHERE ci.is_deleted = false
ORDER BY gw.root_id, gw.child_id, gw.depth ASC;
Dependency Mapping (Bottom-Up)¶
When an application is slow, this walk identifies the foundational infrastructure supporting it.
WITH RECURSIVE graph_walk(root_id, parent_id, child_id, depth) AS (
SELECT
r.target_instance_id,
r.target_instance_id,
r.source_instance_id,
1
FROM cmdb_base_relationship r
JOIN cmdb_base_element ci ON ci.instance_id = r.target_instance_id
WHERE ci.name ILIKE '%DIG-BKS-RAC%'
AND ci.class_id = 'BusinessService'
AND ci.dataset_id = 'PRODUCTION'
AND ci.is_deleted = false
UNION
SELECT
gw.root_id,
r.target_instance_id,
r.source_instance_id,
gw.depth + 1
FROM cmdb_base_relationship r
JOIN graph_walk gw ON r.target_instance_id = gw.child_id
WHERE gw.depth < 6
AND r.dataset_id = 'PRODUCTION'
)
SELECT DISTINCT ON (gw.root_id, gw.child_id)
gw.depth,
gw.child_id as instance_id,
ci.name,
ci.class_id,
ci.ci_status
FROM graph_walk gw
JOIN cmdb_base_element ci ON gw.child_id = ci.instance_id
WHERE ci.is_deleted = false
AND ci.dataset_id = 'PRODUCTION'
ORDER BY gw.root_id, gw.child_id, gw.depth ASC;
Architectural Safeguards:
- Depth limiter (``gw.depth < 6``): Prevents infinite loops in cyclic topologies.
- DISTINCT ON: Returns only the shortest path to each CI.
- Outer-join metadata: Class names and statuses are fetched outside the recursive loop to avoid query plan degradation.
Security, Identity & IAM¶
The IAM engine is built on four pillars: Hybrid Authentication, Stateless Tokenization, Role-Based Access Control (RBAC), and Graph-Based Row-Level Security.
Hybrid Authentication¶
- Local Auth: Passwords are verified against Bcrypt hashes stored in ``sys_user``.
- LDAP/AD Auth: If the user is flagged as LDAP, the system binds to the enterprise directory over LDAPS, validates credentials, and dynamically syncs group memberships.
LDAP Configuration (``sys_config.ldap_config``)
{
"enabled": true,
"url": "ldaps://ad.company.internal:636",
"bind_dn": "cn=cmdb_service_account,ou=Services,dc=company,dc=internal",
"bind_password": "YourSecurePassword",
"user_base_dn": "ou=Users,dc=company,dc=internal",
"user_search_filter": "(sAMAccountName={0})",
"group_base_dn": "ou=Groups,dc=company,dc=internal",
"group_search_filter": "(member={0})"
}
- User Base DN restricts login scans to a specific organisational unit.
- Group Base DN tells the backend where to search for security roles.
Just-In-Time (JIT) Provisioning: On first login, a “Shadow Account”
is created in sys_user (with an empty password hash and
auth_source = LDAP). Group memberships in
sys_user_group_member are wiped and re-inserted at every login,
ensuring real-time synchronisation with the directory.
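The JIT provisioning flow can be sketched as follows. This is a hypothetical in-memory sketch; the table and column names mirror the ones mentioned above, but the structure of ``db`` is an assumption for illustration:

```python
# Hypothetical JIT provisioning sketch: create a shadow account on first
# LDAP login, then wipe and re-insert group memberships so they mirror
# the directory exactly (db is a simple in-memory stand-in).
def jit_provision(db, username, ldap_groups):
    user = db["sys_user"].get(username)
    if user is None:
        # Shadow account: empty password hash, external auth source.
        user = {"username": username, "password_hash": "", "auth_source": "LDAP"}
        db["sys_user"][username] = user
    # Replace all memberships with the directory's current view.
    db["sys_user_group_member"] = [m for m in db["sys_user_group_member"]
                                   if m["username"] != username]
    db["sys_user_group_member"] += [{"username": username, "group": g}
                                    for g in ldap_groups]
    return user
```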
Stateless JWT Sessions¶
After successful authentication, the server returns a signed JWT containing:
{
"sub": "jdoe",
"role": "editor",
"groups": ["GRP-HR-Admins", "GRP-IT-Viewers"],
"exp": 1711814400
}
Because the token carries all required claims, backend nodes never need to query the database for the user’s identity.
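The stateless pattern can be illustrated with a minimal HS256 sign/verify round-trip using only the Python standard library. This is a sketch of the JWT mechanism in general, not the platform's actual (Vert.x) signing code:

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

def verify_jwt(token: str, secret: bytes) -> dict:
    # All identity data lives in the signed payload: no DB lookup needed.
    header, payload, sig = token.split(".")
    expected = hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        raise ValueError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return claims
```

Any backend node holding the shared secret can validate the token and read ``sub``, ``role``, and ``groups`` directly from the claims.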
Role-Based Access Control (RBAC)¶
System roles are stored in the JWT and enforced on every request.
| Role | Capabilities |
|---|---|
| admin | Full control: manage config, rules, users; bypasses all Row-Level Security. |
| editor | CRUD on CIs and relationships if Row-Level access permits. |
| reader | Read-only access to authorised CIs, audit logs, and graphs. |
Multi-Tenant Row-Level Security¶
Access to CIs is granted by linking a user’s LDAP/Local group to an
Organisation CI via the sys_group_org_access table.
To avoid calculating graph dependencies on every query, a Visibility Caching Engine runs asynchronously:
- When a group is linked to an Organisation, a background worker performs a top-down graph walk.
- Every CI discovered beneath the Organisation is written to ``cache_group_visibility`` along with the group name.
- All API queries are transparently joined with this cache:
SELECT ci.* FROM cmdb_base_element ci
JOIN cache_group_visibility v ON ci.instance_id = v.instance_id
WHERE ci.class_id = 'ComputerSystem'
AND v.group_name = 'GRP-HR-Admins';
This delivers sub-millisecond responses while guaranteeing strict multi-tenant isolation.
Artificial Intelligence (AIOps)¶
A dual-model cognitive layer translates natural language into database queries and visual summaries.
- Heavy Engine (NL2SQL): Typically a cloud-based reasoning model (e.g., DeepSeek, GPT-4). It receives the user’s question and a dynamically recompiled system prompt describing the current CMDB schema. It generates a read-only SQL query.
- Fast Engine (Summarisation & Visualisation): A local model (e.g., Ollama Gemma) that consumes raw database results and produces a human-readable summary plus a widget configuration (SCALAR, PIE_CHART, BAR_CHART, DATA_TABLE). This keeps sensitive data on-premises.
Dynamic Schema Awareness¶
Whenever a new CI class or custom attribute is added, a
cmdb.schema.updated event is broadcast. Every AI Service node
intercepts it, rebuilds the system prompt, and stores the new version in
sys_ai_prompt_history. The AI is therefore always aware of the
latest ontology without a restart.
Example NL2SQL Interaction¶
User: “Show me a breakdown of all Physical Servers running in Production that currently have Critical vulnerabilities.”
Generated SQL:
SELECT ci.instance_id, ci.name, ci.ci_status, ci.vulnerability, ci.vulnerability_description, ci.class_id, ci.category, ci.type, ci.item, ci.model
FROM cmdb_base_element ci
WHERE ci.dataset_id = 'PRODUCTION'
AND ci.is_deleted = false
AND ci.class_id = 'COMPUTERSYSTEM'
AND ci.vulnerability = 'Yes'
ORDER BY ci.name
LIMIT 500;
AI Response (JSON sent to frontend):
{
...
"summary": "There is 1 server running in Production that currently has vulnerabilities.",
...
"data": [
{
"instance_id": "08b78964-f4ad-4fc8-bd0e-86b9c15558a7",
"name": "Test1",
"class_id": "COMPUTERSYSTEM",
"type": "Container",
"item": "",
"model": "AWS",
"ci_status": "UP",
"vulnerability": "Yes",
"vulnerability_description": "Vulnerability CVE-12345"
}
]
}
Stateful Chat Memory¶
Conversations are stored in sys_ai_chat_sessions and
sys_ai_chat_messages.
Event-Driven Automation (Trigger Engine)¶
The CMDB acts as a real-time automation orchestrator. It listens to the clustered Event Bus for CI/Relationship changes, evaluates rules, and dispatches actions.
Event Bus Architecture¶
Whenever a CI is created, updated, or deleted (even by background merges), a JSON message is published on the Event Bus:
{
"operation": "UPDATE",
"target_type": "CI",
"class_id": "COMPUTERSYSTEM",
"instance_id": "ce5de307-87ac-4fb1-9353-9608c19e72c5",
"payload": {
"source": "merge-engine",
"changes": {
"vulnerability": { "old": "No", "new": "Yes" },
"ci_status": { "old": "UP", "new": "MAINTENANCE" }
}
}
}
The TriggerEngine consumes these events and evaluates matching
rules.
Rule Evaluation¶
Rules are defined per CI class and contain condition arrays. Supported operators:
- ``EQUALS`` / ``NOT_EQUALS``
- ``CHANGED``
- ``CHANGED_TO``
- ``CONTAINS`` / ``REGEX``
If all conditions evaluate to TRUE, the configured actions are
executed.
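The evaluation step can be sketched against the event payload shown earlier. This is an illustrative Python sketch (the real TriggerEngine is Java), and the condition dictionary shape is an assumption:

```python
import re

# Illustrative evaluator for the supported operators, run against the
# "changes" map of an event payload and the CI's current attribute values.
def evaluate(conditions, changes, current):
    def check(cond):
        field, op, val = cond["field"], cond["operator"], cond.get("value")
        change = changes.get(field)
        if op == "EQUALS":     return current.get(field) == val
        if op == "NOT_EQUALS": return current.get(field) != val
        if op == "CHANGED":    return change is not None
        if op == "CHANGED_TO": return change is not None and change["new"] == val
        if op == "CONTAINS":   return val in str(current.get(field, ""))
        if op == "REGEX":      return re.search(val, str(current.get(field, ""))) is not None
        return False
    # Actions fire only if every condition passes.
    return all(check(c) for c in conditions)
```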
Action Dispatcher¶
The engine supports multiple native action types:
| Type | Description |
|---|---|
| WEBHOOK | POST/PUT/PATCH JSON payloads to external REST APIs (e.g., Ansible Tower, ServiceNow). Variable injection (${instance_id}) is supported. |
| SLACK_TEAMS | Posts formatted, colour-coded messages to collaboration channels. |
| EMAIL | Sends SMTP emails using the Vert.x Mail Client. Recipients can be dynamically pulled from CI attributes. |
Example Ransomware Containment Workflow:
- A Tenable Nessus scan updates a server’s ``vulnerability`` to “Yes” in SANDBOX.
- The Merge Engine publishes an ``UPDATE`` event.
- The trigger rule ``vulnerability CHANGED_TO "Yes"`` fires.
- A Slack alert is sent and a webhook posts the ``instance_id`` to Ansible Tower.
- Ansible isolates the server from the network.
All of this happens in milliseconds, outside the user’s request context.
Audit, Compliance & History¶
The CMDB captures an immutable, millisecond-precise record of every meaningful change.
Attribute-Level Diff Engine¶
Instead of copying an entire CI row for every update, the engine
calculates a precise JSON diff. It ignores ephemeral fields like
last_seen_date and only records real changes.
"changes": {
"ci_status": { "old": "MAINTENANCE", "new": "UP" },
"attr_ram_gb": { "old": "16", "new": "32" }
}
If no data actually changes, the audit entry is aborted.
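The diff calculation can be sketched like this. The sketch assumes a simple flat attribute map and an illustrative ephemeral-field list; the real engine operates on the CI's JSONB state:

```python
# Sketch of the attribute-level diff: compare old and new CI state,
# skip ephemeral fields, and return None when nothing really changed
# (so the audit entry can be aborted).
EPHEMERAL = {"last_seen_date"}

def diff_ci(old: dict, new: dict):
    changes = {}
    for key in set(old) | set(new):
        if key in EPHEMERAL:
            continue
        if old.get(key) != new.get(key):
            changes[key] = {"old": old.get(key), "new": new.get(key)}
    return changes or None
```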
Relational Neighborhood Snapshots¶
At the moment an audit record is written, the engine captures a snapshot
of all active inbound and outbound relationships (via the
reconciliation_identity), together with the health status of
neighbouring CIs. This “Topology Time Machine” allows engineers to see
exactly what depended on a failed CI at the time of the incident,
without complex temporal joins.
Example snapshot payload:
"neighborhood_snapshot": [
{
"relationship_id": "f50cd8f0-...",
"class_id": "RUNS_ON",
"direction": "INBOUND",
"related_ci_name": "Payroll-App-Prod",
"related_ci_status": "UP"
}
]
Immutable Metadata¶
Every audit record carries:
- ``audit_id`` (UUID)
- ``target_instance_id``, ``target_type`` (CI or RELATION)
- ``operation`` (CREATE, UPDATE, DELETE)
- ``changed_by`` (username or service account)
- ``dataset_id``
- ``change_time`` (timestamp)
Automated Data Retention¶
A background watchdog (runLogRetention) periodically purges aged
records from:
- ``sys_access_log`` (access and login records)
- ``sys_ai_chat_sessions`` (AI conversations, cascading to messages)
- ``sys_activity_logs`` (background job histories)
Retention periods are configurable in sys_config.
Enterprise High Availability & Operations¶
The platform is engineered for Active-Active High Availability with self-healing capabilities.
Hazelcast Clustering¶
Vert.x nodes discover each other (via multicast or explicit TCP/IP) and form a unified mesh. This enables:
- Distributed Event Bus: A trigger event generated on Node A can be consumed by Node B.
- Distributed Caching: The IAM Visibility Cache is shared across all cluster members.
If a node crashes, the remaining nodes instantly absorb its workload.
Self-Healing Watchdogs¶
Long-running jobs periodically write heartbeats. A watchdog sweeps the execution tables:
- If a job is ``RUNNING`` but its heartbeat is older than ``staleJobTimeoutMinutes`` (default 30), it is forcefully killed and set to ``FAILED``.
- Orphaned queues (jobs stuck for 24+ hours) are cleaned up, allowing the engine to resume processing.
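The sweep logic can be sketched as follows, assuming a hypothetical in-memory job list in place of the real execution tables:

```python
from datetime import datetime, timedelta

# Hypothetical watchdog sweep sketch: any RUNNING job whose last
# heartbeat is older than the stale timeout is marked FAILED.
def sweep_stale_jobs(jobs, now, stale_timeout_minutes=30):
    cutoff = now - timedelta(minutes=stale_timeout_minutes)
    killed = []
    for job in jobs:
        if job["status"] == "RUNNING" and job["last_heartbeat"] < cutoff:
            job["status"] = "FAILED"
            killed.append(job["id"])
    return killed
```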
Zero-Downtime Hot Reloading¶
Configuration changes (e.g., toggling audit logging, adding a new CI class) are broadcast via the clustered Event Bus. All nodes intercept the message and silently reload their in-memory settings without dropping requests.
Cryptographic Licensing¶
The platform expects a JWT-signed license key. Every 60 seconds it validates:
- Maximum HA nodes
- Maximum CIs in PRODUCTION
- Number of allowed Tenants and Write-Users
- AI entitlement (enables/disables the NL2SQL endpoints)
A node attempting to join a fully licensed cluster is gracefully rejected.
Using LIKPI¶
Installation¶
The CMDB can be installed in three ways, each suited to a different operating model.
Docker Compose (Recommended)¶
- Extract ``LIKPI-CMDB-Docker.zip``.
- Open a terminal in the extracted folder.
- Run ``docker-compose up -d``.
Docker automatically pulls a PostgreSQL image, executes the schema
script, starts the backend, and serves the UI via Nginx. Access the
application at http://localhost.
All-In-One Fat JAR¶
- Prerequisites: Java 21, PostgreSQL.
- Execute ``cmdb-schema.sql`` against your PostgreSQL instance.
- Extract ``LIKPI-CMDB.zip``.
- Run ``start-cmdb.bat`` (Windows) or ``./start-cmdb.sh`` (Linux/Mac).
- The backend APIs and the bundled React UI are available at ``http://localhost:8888``.
Decoupled 2-ZIP (Enterprise)¶
- Backend: Same as the All-In-One approach, but from the backend-only ZIP. The API server runs on port 8888.
- Frontend: Extract the frontend ZIP. Host the ``build/`` folder with Nginx or Apache.
- Configure the web server to reverse-proxy ``/api/*`` requests to the backend.
Configuring Enterprise LDAP/LDAPS¶
For production use, follow these steps to enable secure directory authentication.
Obtain the LDAP server’s public certificate (``ldap-server.crt``).
Create a Java TrustStore:
keytool -import -alias corp-ldap-cert -file ldap-server.crt -keystore cmdb-truststore.jks -storepass changeit -noprompt
Update the database configuration:
UPDATE sys_config SET config_value = '{ "enabled": true, "url": "ldaps://ad.company.internal:636", "bind_dn": "cn=cmdb_service_account,ou=Services,dc=company,dc=internal", "bind_password": "YourSecurePassword", "user_base_dn": "ou=Users,dc=company,dc=internal", "user_search_filter": "(sAMAccountName={0})", "group_base_dn": "ou=Groups,dc=company,dc=internal", "group_search_filter": "(member={0})" }'::jsonb WHERE config_key = 'ldap_config';
For OpenLDAP, use ``(uid={0})`` and ``(memberUid={0})``.
Start the JVM with the TrustStore:
java -cp ".:lib/*" \
  -Djavax.net.ssl.trustStore=/opt/cmdb/certs/cmdb-truststore.jks \
  -Djavax.net.ssl.trustStorePassword=changeit \
  com.cmdb.CmdbApp
Troubleshooting LDAP:
| Error | Root Cause | Solution |
|---|---|---|
| PKIX path building failed | Java does not trust the certificate | Ensure the .crt was imported into the .jks and the path is correct. |
| No subject alternative DNS name | Hostname in URL does not match the certificate | Use the exact FQDN. For testing, add -Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true. |
| Invalid Credentials (401) | Bind DN incorrect or User Base DN misconfigured | Double-check the credentials and that the user is inside the specified user_base_dn. |
| No LDAP groups found | Group Search Filter wrong | For AD use (member={0}), for OpenLDAP use (memberUid={0}). |
After the directory connection is established, map LDAP groups to
internal CMDB roles via the Admin UI or the
POST /api/admin/ldap/mappings API.
AI Assistant¶
- Open the chat panel and ask natural-language questions, e.g., “How many Tomcat servers are down?”
- The assistant will display a human-readable summary and a dynamic chart (Pie, Bar, Table).
- Follow-up questions are understood within the session context.
Building Custom Dashboards¶
- Navigate to the Dashboard Factory.
- Create a new dashboard, then Add Widget.
- Drag and resize the widget card on the CSS grid.
- Enter a PostgreSQL query that returns the data you need.
- Choose a visualisation type (Scalar, Pie, Bar, Histogram, Radar, Table) and configure colours, labels, and units.
- Save the dashboard. Use the Dashboard Viewer to display it to end users.
Configuring Triggers¶
- Go to Administration -> Triggers.
- Create a new trigger, specifying the target CI class and operations (CREATE, UPDATE, DELETE).
- Define conditions (e.g., ``ci_status CHANGED_TO "DOWN"``).
- Attach one or more actions:
- Webhook: URL, method, custom headers, and a payload template.
- Slack/Teams: Webhook URL and message template.
- Email: Recipients, subject, HTML body.
- JS Script: Custom remediation logic using the ``cmdbApi`` object.
Once saved, the trigger will fire automatically on matching events.
This document covers the complete feature set of the LIKPI Fast & Light CMDB. For developer-oriented API details and database schema references, consult the source code and inline comments.
Frontend Features¶
This section details the user-facing modules of the LIKPI CMDB, covering workspaces, configuration interfaces, and administrative tools.
Core Workspaces & Dashboards¶
1. Graph Explorer¶
The primary interactive canvas for visualising infrastructure topology and relationships (powered by React Flow).
- Visualise Topology: Drag, zoom, and pan the canvas for a bird’s-eye view of your architecture.
- Quick Locate: Built-in search bar to find a specific CI and instantly snap/zoom the camera to it.
- Algorithmic Layouts: Apply automatic directional arrangements (Top-to-Bottom, Bottom-to-Top, Left-to-Right, Right-to-Left) to untangle complex webs of CIs.
- Lasso Multi-Select: Draw a boundary box to select, move, or hide multiple CIs simultaneously.
- Dynamic Styling & Badges: Nodes dynamically render icons based on their class, and feature live status indicator badges (e.g., Red for DOWN, Orange for active Incidents). Edge lines change colour based on the relation type.
- Contextual Actions: Right-click any node or edge to access a context menu for editing, deletion, or triggering deep analytics.
2. Data Explorer¶
The advanced tabular interface for bulk data management and granular searching.
- Tabular View: Browse and sort CIs and Relations in a highly responsive, spreadsheet-like grid.
- Advanced Query Builder: Create complex ``AND``/``OR`` nested conditions across any core field or custom JSON attribute.
- Saved Filters: Save specific complex queries (e.g., “All Linux Servers in Production with Critical Vulnerabilities”) and load them with one click.
- Dynamic Columns: Toggle visibility of columns on the fly, seamlessly blending standard database columns with dynamic schema-less JSON attributes.
- Push to Graph: Select specific rows via checkboxes and push them straight to the Graph Explorer to instantly map their connections.
3. CI & Relation Editor¶
The unified inspection and modification panel for all configuration items and edges.
- Tabbed Interface: Logically organises vast amounts of data into categories: Core Identity, Classification, Status & Health, Lifecycle, People & Support, Extended Details, and Custom Attributes.
- Smart Validation Enforcement: Proactively fetches Identification Rules from the backend and blocks saving/promotion if mandatory identification keys (like ``unique_identifier``) are missing.
- Dynamic Input Rendering: Automatically renders the correct input types (Date pickers, Dropdowns, Number inputs, Boolean toggles) based on the specific Class’s custom attribute schema.
- Unmapped Data Handling: Safely captures and displays “Unmapped JSON Attributes” (data sent via API that doesn’t yet have a strict schema definition).
- Drafting & Locking: Enforces read-only views for unauthorised users, while editors can manipulate records safely in a draft state before promoting to production.
4. Dashboard Factory & Viewer¶
The no-code analytical engine for building and viewing operational intelligence.
- Drag-and-Drop CSS Grid: Visually design dashboards by dragging, dropping, and resizing widgets on a flexible grid.
- SQL-Driven Insights: Power widgets with raw PostgreSQL queries, utilising aliases (``value``, ``label``) to easily map complex database joins to visual UI elements.
- Rich Visualization Types: Supports multiple visual formats out of the box via Recharts: Single-value Scalars, Pie Charts, Bar Charts, Line/Curve Histograms, Radar Charts, and Tables.
- Visual Theming: Configure primary colour themes, add value suffixes (e.g., MB, %, CIs), and rely on automatically generated smart legends and tooltips.
5. Ontology & Configuration Management¶
The administrative suite that allows you to configure the CMDB’s data model without writing any code.
- Class Registry & Manager: Visually define new CI Classes, establish their hierarchical inheritance (all deriving from the ``BASE_ELEMENT`` root class), and assign custom JSON attributes to them.
- Dataset Manager: Create and govern isolated data environments (e.g., specific Discovery Sandboxes vs. the live Production dataset).
- Identification Rules Manager: Define the unique keys (e.g., ``serial_number``, ``unique_identifier``) required to deduplicate CIs during the reconciliation process. These rules automatically bind to the Configuration Item view to enforce mandatory fields.
- Relation Type Manager: Govern the valid types of connections between CIs (e.g., ``HostedService``, ``SystemDevice``) and attach custom attributes to the relationship edges.
- Attribute Precedence Manager: Configure which datasets are trusted most during merges (e.g., prioritising an AWS Discovery dataset for IP addresses over manual entry).
- Criteria Manager: Map your IT taxonomy, allowing the classification of CIs strictly by Category, Type, Item, and Model.
- Lifecycle & Trigger Management: Configure automated state transitions (like auto-retiring stale CIs) and set up database-level event triggers for external webhooks.
6. Advanced Analytics & Intelligence¶
Modules dedicated to analysing topological risks and leveraging AI for operations.
- Impact Analysis: Select a CI and visually calculate its “blast radius” to see exactly which downstream applications and business services will fail if that component goes down.
- Dependency Mapping: Select a CI and trace upstream to find its root-cause dependencies (e.g., figuring out which database server an application relies on).
- AIOps & Cognitive Engine (NL2SQL): An integrated AI Assistant window where users can type natural language questions (e.g., “Show me all Linux servers in Production that have open incidents”) and the AI will translate it into a complex PostgreSQL query, execute it, and return the data.
- Prompt Configuration: Admins can tweak the underlying System Prompts used by the AI to align its behaviour with internal company standards.
7. Auditing, Governance & User Management¶
Modules ensuring the CMDB remains secure, compliant, and traceable.
- User Management & RBAC: Create users and assign granular Role-Based Access Control (Admin, Editor, Viewer). The UI dynamically locks down “Edit Mode” and promotion capabilities based on these roles.
- CI Audit Timeline: A visual, chronological history log attached to every single CI in the database. It tracks exactly who changed what attribute (e.g., IP address changed from X to Y), and when, ensuring total accountability for infrastructure drift.
- Activity Manager: A real-time monitoring interface for background tasks. Allows admins to track the success, failure, and execution logs of asynchronous operations like Sandbox-to-Production merges.
- Access Logs: A centralised security view tracking API access, login attempts, and system interactions for compliance reviews.
8. Admin Access Management & System Configuration¶
The System Configuration Manager is a centralised, restricted administrative dashboard designed to manage the core behaviour, security parameters, background processing, and directory synchronisation of the CMDB platform. The interface is divided into two primary sections: Core Config and LDAP Mappings.
8.1 Core Configuration (Global System Settings)¶
This tab provides granular control over the backend Vert.x engine and background reconciliation tasks. Changes made here apply globally and trigger a hot-reload of the relevant backend services.
8.1.1 LDAP / Active Directory Engine¶
Configures the connection to enterprise directory services for user authentication.
- Enable LDAP Authentication: Global toggle to switch between internal database authentication and external LDAP authentication.
- LDAP URL: The connection URI for the directory server (e.g.,
ldaps://your-dc.company.local:636). - Bind DN & Bind Password: The credentials of the service account used to query the directory.
- User Base DN & Group Base DN: The root organisational units (OUs) where the system will search for users and security groups.
8.1.2 Security¶
Manages the platform’s API security and session lifecycle.
- JWT Session Timeout (Minutes): Defines the strict duration before an active user’s session token expires, requiring re-authentication.
- JWT Secret: The cryptographic key used by the backend to sign and verify authentication tokens.
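Together, these two settings imply an HS256-signed token whose `exp` claim is derived from the session timeout. The following is a minimal illustration using the JDK's `javax.crypto` API, not the platform's actual implementation:

```java
import java.nio.charset.StandardCharsets;
import java.time.Instant;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class JwtSketch {
    static String b64(byte[] in) {
        return Base64.getUrlEncoder().withoutPadding().encodeToString(in);
    }

    // Sign a minimal HS256 token whose `exp` claim is now + timeoutMinutes,
    // mirroring the "JWT Session Timeout" and "JWT Secret" settings.
    static String sign(String subject, String secret, long timeoutMinutes) throws Exception {
        long exp = Instant.now().getEpochSecond() + timeoutMinutes * 60;
        String header = b64("{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
        String payload = b64(("{\"sub\":\"" + subject + "\",\"exp\":" + exp + "}")
                .getBytes(StandardCharsets.UTF_8));
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        String sig = b64(mac.doFinal((header + "." + payload).getBytes(StandardCharsets.UTF_8)));
        return header + "." + payload + "." + sig;
    }

    public static void main(String[] args) throws Exception {
        // Token expires 30 minutes after issuance; the backend rejects it afterwards.
        String token = sign("alice", "change-me", 30);
        System.out.println(token.split("\\.").length); // header.payload.signature
    }
}
```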
8.1.3 Timers & Polling¶
Provides fine-grained control over the heartbeat and execution frequencies of the backend asynchronous tasks (measured in milliseconds).
- Queue Polling: The interval at which the standard job queue is checked for new tasks.
- Sandbox Polling: The interval for the staging fast-lane processor.
- Scheduler Check: How often the cron-based scheduled activity engine evaluates pending routines.
- Log Cleanup: The frequency of the routine that purges expired system logs.
- Zombie Job Check: The interval for the watchdog to sweep for stuck or hung background processes.
- Heartbeat Frequency: How often active cluster nodes announce their health to the HA registry.
- WebSocket Ping: The keep-alive interval to maintain persistent real-time UI connections.
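Assuming these intervals are persisted together as a configuration document, they might be represented as a fragment like the one below. The key names and values are illustrative assumptions; only the UI labels above are authoritative:

```json
{
  "timers": {
    "queuePollMs": 5000,
    "sandboxPollMs": 1000,
    "schedulerCheckMs": 60000,
    "logCleanupMs": 3600000,
    "zombieJobCheckMs": 300000,
    "heartbeatMs": 10000,
    "websocketPingMs": 30000
  }
}
```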
8.1.4 SMTP & Email Settings¶
Configures outbound email notifications for system alerts, password resets, and reports.
- Enable Notifications: Global toggle to turn the mail engine on or off.
- Connection Details: Configures the From Address, SMTP Host, and Port (e.g., 587).
- Security & Authentication: Selectable TLS strategies (NONE, STARTTLS, SSL/TLS) and Auth methods (NONE, LOGIN, PLAIN), along with the designated Username and Password.
8.1.5 Data Processing & Merge Limits¶
Governs the memory footprint and chunking behaviour of the Activity & Reconciliation Engine to prevent Out-Of-Memory (OOM) crashes during large-scale integrations.
- CI Batch Size: The maximum number of Configuration Items pulled into memory per merge cycle.
- Relation Batch Size: The total number of edges/relationships processed per data wave.
- Relation Chunk Size: The subset size defining how relations are chunked and parallelised during heavy processing.
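The chunking behaviour these limits describe amounts to partitioning each batch into fixed-size slices. A minimal sketch of that pattern (illustrative code, not the engine's actual implementation):

```java
import java.util.ArrayList;
import java.util.List;

public class RelationChunker {
    // Split one batch of relations into fixed-size chunks, the pattern the
    // "Relation Batch Size" / "Relation Chunk Size" settings describe.
    static <T> List<List<T>> chunk(List<T> batch, int chunkSize) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < batch.size(); i += chunkSize) {
            chunks.add(batch.subList(i, Math.min(i + chunkSize, batch.size())));
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<Integer> batch = new ArrayList<>();
        for (int i = 0; i < 10; i++) batch.add(i);
        // A batch of 10 relations with a chunk size of 4 yields chunks of 4, 4, and 2.
        System.out.println(chunk(batch, 4).size());
    }
}
```

Each chunk can then be processed (and garbage-collected) independently, which is what keeps the engine's peak memory bounded regardless of batch size.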
8.1.6 UI Display Limits¶
Prevents browser memory exhaustion by limiting payload sizes to the frontend.
- History Timeline Limit: The maximum number of audit logs the backend will fetch and render in the `CiAuditTimeline` view at one time.
8.1.7 Multi-Tenancy & Graph Traversal¶
- Cache Refresh Interval (ms): Defines how long multi-tenant configuration data is cached in memory before forcing a database refresh (defaults to 10 minutes).
- Max Graph Depth: A crucial circuit breaker that limits how many layers deep the recursive SQL queries will traverse when building topological maps or calculating Impact Analysis, preventing infinite loops.
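The circuit-breaker idea behind Max Graph Depth can be illustrated in plain Java: a breadth-first walk that stops after a fixed number of layers, so even cyclic dependency graphs terminate. (The real engine applies the same cap inside its recursive SQL queries; this standalone sketch only demonstrates the principle.)

```java
import java.util.*;

public class DepthLimitedTraversal {
    // Walk a dependency graph breadth-first, stopping after maxDepth layers —
    // the circuit-breaker idea the "Max Graph Depth" setting applies to
    // recursive SQL traversal.
    static Set<String> reachable(Map<String, List<String>> edges, String root, int maxDepth) {
        Set<String> seen = new HashSet<>(List.of(root));
        List<String> frontier = List.of(root);
        for (int depth = 0; depth < maxDepth && !frontier.isEmpty(); depth++) {
            List<String> next = new ArrayList<>();
            for (String node : frontier) {
                for (String child : edges.getOrDefault(node, List.of())) {
                    if (seen.add(child)) next.add(child); // visit each CI once
                }
            }
            frontier = next;
        }
        return seen;
    }

    public static void main(String[] args) {
        // app -> db -> host -> app forms a cycle; the depth cap still terminates.
        Map<String, List<String>> edges = Map.of(
            "app", List.of("db"),
            "db", List.of("host"),
            "host", List.of("app"));
        System.out.println(reachable(edges, "app", 1)); // only nodes within 1 hop of "app"
    }
}
```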
8.1.8 System Maintenance Policies (Orphan, Aging, Audit, Access Logs)¶
- Orphan Detection: Sets the Max Rows to Scan per sweep and allows admins to define Excluded Classes that should never be flagged as orphaned.
- Aging Policy: Allows administrators to define Excluded Classes that are immune to automatic stale-data retirement.
- Audit History: Features a global Enable Auditing toggle and an Excluded Classes tag input to prevent high-churn, low-value CIs from bloating the audit tables.
- Access Logs: Sets the Retention Days, automatically purging historical login and API access records older than the specified threshold.
- Job Scheduler: Configures the Stale Timeout (Minutes), marking hanging or locked background processes as failed if they exceed this limit.
8.2 Identity Synchronization (LDAP Mappings)¶
This tab translates external LDAP/Active Directory group memberships into internal CMDB Role-Based Access Control (RBAC) groups.
- Mapping Grid: Displays a live, persistent table mapping the LDAP Group DN to the internal Target CMDB Group.
- Create Mapping: Allows administrators to define a new synchronisation rule by inputting a valid LDAP DN (e.g., `CN=AppAdmins,OU=Groups,DC=company,DC=com`) and selecting an existing internal CMDB group from a dropdown list.
- Just-in-Time Provisioning: Upon successful login, the CMDB reads the user's LDAP group memberships, cross-references this mapping table, and automatically provisions the correct RBAC permissions (e.g., Admin, Editor, Viewer).
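The just-in-time provisioning step amounts to cross-referencing the user's LDAP group DNs against the mapping grid. A minimal sketch, with illustrative names (the real lookup runs against the persistent mapping table, not an in-memory map):

```java
import java.util.*;

public class JitProvisioner {
    // Cross-reference a user's LDAP groups against the mapping grid and
    // collect the internal CMDB groups to grant. Unmapped DNs are ignored.
    static Set<String> resolveRoles(List<String> ldapGroups, Map<String, String> mappings) {
        Set<String> roles = new LinkedHashSet<>();
        for (String dn : ldapGroups) {
            String role = mappings.get(dn);
            if (role != null) roles.add(role);
        }
        return roles;
    }

    public static void main(String[] args) {
        Map<String, String> grid = Map.of(
            "CN=AppAdmins,OU=Groups,DC=company,DC=com", "Admin",
            "CN=Ops,OU=Groups,DC=company,DC=com", "Editor");
        // A user in Ops plus an unmapped group is provisioned as Editor only.
        System.out.println(resolveRoles(
            List.of("CN=Ops,OU=Groups,DC=company,DC=com",
                    "CN=Unmapped,OU=Groups,DC=company,DC=com"), grid));
    }
}
```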
8.3 Hybrid AI Architecture Configuration¶
The Hybrid AI Architecture Configuration module allows administrators to configure an agnostic, multi-provider AI pipeline that drives the Natural Language to SQL (NL2SQL) and data summarisation features. This interface ensures the CMDB is not locked into a single AI vendor and can mix local inference with cloud APIs.
8.3.1 Provider & Model Selection (Dual-Engine Pipeline)¶
The system divides AI workloads into two distinct categories to optimise for cost, speed, and reasoning capability.
- Heavy Provider & Heavy Model: Dedicated to the highly complex task of interpreting the PostgreSQL schema and translating human questions into precise SQL queries.
- Configuration: Administrators can assign powerful reasoning models (e.g., OpenAI `gpt-4o`, DeepSeek `deepseek-coder`, or a local `llama3`) to this engine.
- Fast Provider & Fast Model: Dedicated to consuming the resulting JSON data from the database and quickly summarising it into human-readable text.
- Configuration: Administrators can assign fast, low-latency models (e.g., `gpt-3.5-turbo` or a local `mistral`) to process the final output without incurring high token costs.
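A dual-engine assignment could, for illustration, look like the fragment below. The keys and values are hypothetical; only the UI fields above are authoritative:

```json
{
  "ai": {
    "heavyProvider": "openai",
    "heavyModel": "gpt-4o",
    "fastProvider": "ollama",
    "fastModel": "mistral"
  }
}
```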
8.3.2 Connection & Authentication¶
Allows the platform to bridge between internal, air-gapped AI deployments and external SaaS providers.
- Ollama (Local/On-Premise): Fields for Ollama Host (e.g., `localhost`) and Port (e.g., `11434`). The UI dynamically connects to the Ollama `/api/tags` endpoint to fetch and populate the dropdown with all installed local models.
- External API (Cloud): Field for the External API Host (e.g., `api.openai.com`).
- AI API Key: A secure, hidden input to store the Bearer token required for external API authentication.
8.3.3 System Prompt Engineering¶
Provides embedded code editors (powered by Monaco Editor) for administrators to define the underlying behaviour, rules, and schema context given to the AI.
- SQL Generator Prompt (System Context): The critical prompt injected into the “Heavy” model. Administrators use this space to define the CMDB ontology, table structures (e.g., `cmdb_base_element`), JSONB attribute paths, and strict rules to prevent destructive operations (e.g., “Always use SELECT, never DROP”).
- Summarizer Prompt: The template injected into the “Fast” model. It utilises dynamic payload variables (`{QUESTION}` and `{JSON_DATA}`) to instruct the AI on how to format the final textual response for the end-user.
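The payload-variable substitution works like ordinary template replacement: before the prompt is sent to the “Fast” model, the placeholders are swapped for the live question and query result. A minimal sketch (not the platform's actual templating code):

```java
public class SummarizerTemplate {
    // Fill the {QUESTION} and {JSON_DATA} payload variables into the
    // Summarizer Prompt before it is sent to the "Fast" model.
    static String render(String template, String question, String jsonData) {
        return template.replace("{QUESTION}", question)
                       .replace("{JSON_DATA}", jsonData);
    }

    public static void main(String[] args) {
        String template = "Answer \"{QUESTION}\" using only this data: {JSON_DATA}";
        System.out.println(render(template, "How many servers?", "[{\"count\": 42}]"));
    }
}
```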
8.3.4 Version Control & Rollback Management¶
Because tweaking AI prompts can cause unexpected behavioural degradation (hallucinations or invalid SQL syntax), the module features an immutable version history system.
- Audit Trail: Every time a new configuration or prompt is published, the backend generates a new version number (e.g., v1, v2, v3), recording the timestamp and the administrator who made the change.
- Version Viewer: Administrators can click on any historical version in the timeline to inspect the exact prompts and models that were used at that time.
- Instant Rollback: If a newly published prompt breaks the AI Assistant, administrators can click “Restore” on a previous stable version. The system immediately reverts the active configuration to the selected historical state, ensuring zero downtime for end-users.