Using LIKPI¶
Installation¶
The CMDB can be installed in three ways, each suited to a different operating model.
Docker Compose (Recommended)¶
Deploying the Enterprise CMDB using Docker Compose is the recommended approach for production and staging environments. It builds and starts the entire multi-tier stack (database, backend, and frontend) in one step while ensuring data persistence and network isolation.
Note
Data Persistence Guarantee
You will not lose your data if you stop or destroy the database container. The docker-compose.yml is configured to use a persistent Docker Volume (pgdata). Even if the cmdb-postgres container is deleted, the volume remains safely on your host machine. Upon restart, Docker reconnects the volume, and your CMDB data remains intact.
Prerequisites¶
- Ensure Docker and Docker Compose are installed on the target server.
- Ensure the following TCP ports are available and not blocked by a firewall:
- 80 (Frontend Nginx Server)
- 8888 (Backend Vert.x API)
- 5433 (PostgreSQL Database mapping)
Step 1: Configure the Environment¶
Navigate into the cmdb-release directory. You will see a file named .env. This file contains the secure credentials and environment variables required by the database and backend.
Open the file and verify the variables match your deployment requirements.
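A minimal `.env` sketch is shown below. The variable names are illustrative assumptions based on this guide, not the exact keys shipped in your release; verify them against the file in `cmdb-release`:

```env
# PostgreSQL credentials consumed by the database and backend containers
POSTGRES_USER=cmdb
POSTGRES_PASSWORD=change-me
POSTGRES_DB=cmdb

# Host port mappings referenced in the Prerequisites above
DB_PORT=5433
BACKEND_PORT=8888
```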
Step 2: Build and Start the Application¶
From inside the cmdb-release directory (where the docker-compose.yml file is located), run the following command to deploy the stack:
docker-compose up -d --build
What this command does:
- -d (detached mode): Runs the containers in the background, allowing you to close your terminal session without stopping the application.
- --build: Forces Docker to read the backend/Dockerfile and frontend/Dockerfile to compile the Java JAR and Nginx assets into fresh images.
- Database Initialization: During the very first boot, the PostgreSQL container will automatically execute the 01-cmdb-schema.sql and 02-cmdb-data.sql scripts located in the db-init/ folder to build the tables and insert seed data.
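The persistence and initialization behaviour described above typically comes from service definitions like the following sketch (the image tag and host paths are assumptions for illustration, not the contents of the shipped docker-compose.yml):

```yaml
services:
  cmdb-postgres:
    image: postgres:16
    ports:
      - "5433:5432"                             # host 5433 -> container 5432
    volumes:
      - pgdata:/var/lib/postgresql/data         # survives container removal
      - ./db-init:/docker-entrypoint-initdb.d   # schema/data scripts run on first boot only

volumes:
  pgdata:                                       # the persistent volume kept by "down" (without -v)
```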
Step 3: Verify the Deployment¶
To ensure all three services (database, backend, and frontend) are running healthily, execute:
docker-compose ps
You should see cmdb-postgres, cmdb-backend, and cmdb-frontend listed with a status of Up.
Viewing Live Logs: If a service fails to start (e.g., a database connection timeout), you can stream the live console logs to diagnose the issue:
docker-compose logs -f
Step 4: Access the Application¶
Once the containers are running, the application is instantly available on your network:
- Frontend UI: Open a web browser and navigate to http://<your-server-ip> (port 80).
- Backend API: The REST API is listening on http://<your-server-ip>:8888/api.
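A quick smoke test can be run from the server itself. The exact API path below is an assumption; substitute a real endpoint from the API Reference:

```shell
# Frontend: should return the UI's index page (HTTP 200)
curl -I http://localhost/

# Backend: should answer on the API port
curl -i http://localhost:8888/api
```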
Application Lifecycle Management¶
Stopping the Application (Maintenance)
To safely stop the application without destroying the underlying containers, run:
docker-compose stop
Tearing Down the Stack
To completely remove the containers and network bridges (useful when upgrading to a new release), run:
docker-compose down
Warning
Running docker-compose down deletes the containers, but it does not delete the pgdata volume. Your data is perfectly safe. If you explicitly wish to wipe your database data and start from scratch, you must run docker-compose down -v.
In short, Docker Compose pulls the PostgreSQL image, runs the schema and seed scripts on first boot, starts the backend, and serves the UI via Nginx. Access the application at http://localhost.
All-In-One Fat JAR¶
- Prerequisites: Java 21, PostgreSQL.
- Execute cmdb-schema.sql against your PostgreSQL instance.
- Extract LIKPI-CMDB.zip.
- Run start-cmdb.bat (Windows) or ./start-cmdb.sh (Linux/Mac).
- The backend APIs and the bundled React UI are available at http://localhost:8888.
Decoupled 2-ZIP (Enterprise)¶
- Backend: Same as the All-In-One approach, but from the backend-only ZIP. The API server runs on port 8888.
- Frontend: Extract the frontend ZIP and host the build/ folder with Nginx or Apache.
- Configure the web server to reverse-proxy /api/* requests to the backend.
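A minimal Nginx sketch for this decoupled layout is shown below. The document root and backend address are assumptions; adjust them for your environment:

```nginx
server {
    listen 80;
    root /var/www/cmdb/build;    # the extracted frontend build/ folder
    index index.html;

    # Forward API calls to the backend ZIP's Vert.x server
    location /api/ {
        proxy_pass http://127.0.0.1:8888;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Serve the React UI, falling back to index.html for client-side routes
    location / {
        try_files $uri /index.html;
    }
}
```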
Configuring Enterprise LDAP/LDAPS¶
For production use, follow these steps to enable secure directory authentication.
Obtain the LDAP server’s public certificate (ldap-server.crt).

Create a Java TrustStore:
keytool -import -alias corp-ldap-cert -file ldap-server.crt -keystore cmdb-truststore.jks -storepass changeit -noprompt
Update the database configuration:
```sql
UPDATE sys_config
SET config_value = '{
    "enabled": true,
    "url": "ldaps://ad.company.internal:636",
    "bind_dn": "cn=cmdb_service_account,ou=Services,dc=company,dc=internal",
    "bind_password": "YourSecurePassword",
    "user_base_dn": "ou=Users,dc=company,dc=internal",
    "user_search_filter": "(sAMAccountName={0})",
    "group_base_dn": "ou=Groups,dc=company,dc=internal",
    "group_search_filter": "(member={0})"
}'::jsonb
WHERE config_key = 'ldap_config';
```
For OpenLDAP, use `(uid={0})` as the user search filter and `(memberUid={0})` as the group search filter.
Start the JVM with the TrustStore:
```shell
java -cp ".:lib/*" \
  -Djavax.net.ssl.trustStore=/opt/cmdb/certs/cmdb-truststore.jks \
  -Djavax.net.ssl.trustStorePassword=changeit \
  com.cmdb.CmdbApp
```
Troubleshooting LDAP:
| Error | Root Cause | Solution |
|---|---|---|
| PKIX path building failed | Java does not trust the certificate | Ensure the .crt was imported into the .jks and the path is correct. |
| No subject alternative DNS name | Hostname in URL does not match the certificate | Use the exact FQDN. For testing, add -Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true. |
| Invalid Credentials (401) | Bind DN incorrect or User Base DN misconfigured | Double-check the credentials and that the user is inside the specified user_base_dn. |
| No LDAP groups found | Group Search Filter wrong | For AD use (member={0}), for OpenLDAP use (memberUid={0}). |
After the directory connection is established, map LDAP groups to
internal CMDB roles via the Admin UI or the
POST /api/admin/ldap/mappings API.
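The mapping call might look like the following curl sketch. The JSON field names are assumptions; consult the interactive API Reference for the exact payload:

```shell
curl -X POST http://localhost:8888/api/admin/ldap/mappings \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"ldap_group": "cn=cmdb-admins,ou=Groups,dc=company,dc=internal", "cmdb_role": "ADMIN"}'
```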
AI Assistant¶
- Open the chat panel and ask natural-language questions, e.g., “How many Tomcat servers are down?”
- The assistant will display a human-readable summary and a dynamic chart (Pie, Bar, Table).
- Follow-up questions are understood within the session context.
Building Custom Dashboards¶
- Navigate to the Dashboard Factory.
- Create a new dashboard, then Add Widget.
- Drag and resize the widget card on the CSS grid.
- Enter a PostgreSQL query that returns the data you need.
- Choose a visualisation type (Scalar, Pie, Bar, Histogram, Radar, Table) and configure colours, labels, and units.
- Save the dashboard. Use the Dashboard Viewer to display it to end users.
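As an example of the query step above, a Pie widget might count CIs by status. The table and column names here are assumptions for illustration; use your actual CMDB schema:

```sql
-- One slice per status; "label"/"value" aliases are illustrative
SELECT ci_status AS label, COUNT(*) AS value
FROM ci
GROUP BY ci_status
ORDER BY value DESC;
```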
Configuring Triggers¶
- Go to Administration -> Triggers.
- Create a new trigger, specifying the target CI class and operations (CREATE, UPDATE, DELETE).
- Define conditions (e.g., ci_status CHANGED_TO "DOWN").
- Attach one or more actions:
    - Webhook: URL, method, custom headers, and a payload template.
    - Slack/Teams: Webhook URL and message template.
    - Email: Recipients, subject, HTML body.
    - JS Script: Custom remediation logic using the cmdbApi object.
Once saved, the trigger will fire automatically on matching events.
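A webhook payload template for such a trigger could look like this sketch. The `{{...}}` placeholder syntax and field names are assumptions, not the documented template language:

```json
{
  "event": "{{operation}}",
  "ci_class": "{{ci_class}}",
  "ci_name": "{{ci_name}}",
  "old_status": "{{old_value}}",
  "new_status": "{{new_value}}"
}
```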
This document covers the complete feature set of the LIKPI Fast & Light CMDB. For developer-oriented API details and database schema references, consult the source code and inline comments.
API Integration & Automation¶
The LIKPI CMDB is built with an “API-First” philosophy. Every action achievable in the UI can be fully automated via our comprehensive REST API.
Stateless Authentication
All API endpoints are secured using stateless JSON Web Tokens (JWT). Developers must first call the POST /api/login endpoint to retrieve a Bearer token, which is then passed in the Authorization header of subsequent requests.
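The login-then-call flow can be sketched in Python using only the standard library. The request body for POST /api/login, the response field `token`, and the `/api/ci` endpoint are assumptions based on this section; check the API Reference for the exact names:

```python
import json
import urllib.request


def bearer_headers(token: str) -> dict:
    """Build the Authorization header expected by the CMDB API."""
    return {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}


def login(base_url: str, username: str, password: str) -> str:
    """POST credentials to /api/login and return the JWT (field name assumed)."""
    body = json.dumps({"username": username, "password": password}).encode()
    req = urllib.request.Request(
        f"{base_url}/api/login",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["token"]


if __name__ == "__main__":
    # Example usage against a local deployment (endpoint path assumed)
    token = login("http://localhost:8888", "admin", "secret")
    req = urllib.request.Request(
        "http://localhost:8888/api/ci", headers=bearer_headers(token)
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))
```

The same pattern applies to every endpoint: acquire the token once, then reuse the Authorization header until the JWT expires.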
API Reference & Postman
The CMDB ships with a fully interactive OpenAPI 3.0 specification.
- Interactive Docs: Navigate to API Reference in the documentation sidebar to view the interactive Redoc UI, which includes payload examples for all 80+ endpoints.
- Postman Collection: Administrators can download the official Postman Collection from the developer portal, which includes pre-configured authentication scripts and payload templates for CI creation, graph traversal, and AIOps queries.