How to install Uptime Kuma for basic monitoring

Official Uptime Kuma project icon, from Louis Lam's public GitHub repository. Uptime Kuma is an open-source, self-hosted availability monitor written in Node.js, with a modern web interface for configuring HTTP, TCP, ping, DNS and database checks, designed to fit in a small container and be run by a small team without paid external services.

Uptime Kuma is probably the most popular open-source self-hosted uptime monitor available today. Born in 2021 as a personal project by Louis Lam, it has grown to more than 60,000 stars on GitHub, an active community, and production deployments in thousands of small and mid-sized organizations. Version 2.0, released in early 2025, brought significant architectural changes (optional PostgreSQL migration, a redesigned UI, improved distributed-monitoring support) and consolidated the project as a serious alternative to paid services like UptimeRobot, Pingdom or Better Stack for teams that prefer self-hosting. This guide covers how to install it sensibly with Docker, configure basic monitors, wire up Telegram and Discord notifications, and avoid the operational mistakes that are easy to make the first time.

What Uptime Kuma is and what to expect

Uptime Kuma is a web application written in Node.js that monitors service availability through periodic checks and notifies when it detects failures. It supports a wide variety of check types: HTTP and HTTPS with response-code and body-content validation, ICMP ping, open TCP ports, DNS queries, SQL queries against databases, gRPC endpoints, SSL certificates about to expire, and a dozen more. The web interface lets you group monitors, define public status pages, and configure over ninety notification channels.

The project’s pragmatic pitch is to fill the gap between “set up Prometheus plus Blackbox Exporter plus Alertmanager with hand-written rules” and “pay 50 euros a month to UptimeRobot.” For teams with between five and a hundred services to monitor, it’s the most reasonable choice. Installation takes ten minutes, configuration is visual without YAML or code, and a 200 MB container covers the whole use case with no additional pieces.

Prerequisites

You need a server with Docker installed. Any recent Linux distribution works; I recommend Debian 13 or Ubuntu 24.04 for stability. The Uptime Kuma container is light: with 100 active monitors it uses about 200 MB of RAM and under 5% of one CPU core. Any modest VPS (1 vCPU, 1 GB RAM) is enough.

Storage matters. Uptime Kuma stores all check history in a database (SQLite by default, PostgreSQL since 2.0). With 60-second checks across 50 monitors, the database grows about 100 MB per month. Plan for at least 10 GB of disk dedicated to Kuma if you want a long history.

If you’re going to expose Kuma to the internet to view status from outside, you need a domain, a reverse proxy (nginx, Caddy or Traefik) and valid SSL certificates. Don’t expose Kuma over plain HTTP and don’t leave the interface accessible without authentication: it’s an application with access to your internal infrastructure and deserves serious protection.
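As a sketch of the reverse-proxy piece, assuming Caddy and a hypothetical domain status.example.com, a Caddyfile of a few lines is enough; Caddy obtains and renews the TLS certificate automatically:

```
status.example.com {
    # Kuma listens on localhost only; Caddy terminates TLS in front of it
    reverse_proxy 127.0.0.1:3001
}
```

Kuma's own login then guards the interface behind HTTPS; with nginx or Traefik the equivalent setup needs a few more lines plus a certificate tool like certbot.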

Docker installation

The recommended way in 2025 is Docker Compose with a small configuration file. A file with the image pinned to louislam/uptime-kuma:1.23.16, a volume bound to local ./data, port 3001 exposed, a health check, and a 512M memory cap is enough. Save it as docker-compose.yml and bring it up with docker compose up -d. Open the browser at http://your-server:3001 and you’ll see the admin user creation screen. Pick a strong username and password; Uptime Kuma has no direct password recovery and regaining access requires editing the database manually.
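A minimal docker-compose.yml matching that description might look like the following; the healthcheck relies on the probe script the official image ships, so verify the path against the image version you pin:

```yaml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1.23.16
    container_name: uptime-kuma
    restart: unless-stopped
    ports:
      - "3001:3001"
    volumes:
      - ./data:/app/data
    healthcheck:
      test: ["CMD", "extra/healthcheck"]   # probe shipped in the official image
      interval: 60s
      timeout: 30s
      retries: 3
    mem_limit: 512m
```

The ./data bind mount is what you back up later; everything Kuma knows lives there.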

A note on versions: the 1.23.x branch is stable as of late 2025; 2.0 is available but with some details still in motion. For a fresh production install, 1.23 is the conservative choice. If you want to try 2.0, do it in a separate environment first and check whether the UI and database changes suit you.

Initial basic configuration

On first entry, three things are worth adjusting. First, the time zone under Settings → General: it defaults to UTC, so set it to your local zone (Europe/Madrid, for example). Second, two-factor authentication in Settings → Security with a compatible app (Google Authenticator, Authy, 1Password): an important defensive layer, given that Kuma keeps sensitive credentials for some monitor types. Third, if Kuma sits behind a reverse proxy, check “Trust proxy headers” in General; without it, the recorded client IPs are the proxy's rather than the real ones.

Creating the first monitors

The monitor-creation UI is self-explanatory but important decisions deserve detail. For each monitor, think in three axes: what I check (endpoint), how often (interval), and what counts as failure (conditions).

For public websites, the standard check is HTTP with a 60-second interval. Verify at least the response code and, optionally, that the body contains a specific string. The latter is key: a monitor that only checks for 200 doesn't detect when your application returns a pretty error page with code 200. For internal TCP-exposed services use the Port type; for SSL certificates about to expire, enable the certificate-expiry notification that hangs off the HTTPS monitor, where a 21-day threshold leaves enough margin to renew.
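To make the keyword point concrete, here is the logic an HTTP monitor with keyword validation actually asserts, written as a small shell function; Kuma performs this for you on every interval, this just makes the failure mode explicit:

```shell
# UP only if the status code is 200 AND the body contains the expected string
check_response() {
  local status="$1" body="$2" keyword="$3"
  [ "$status" = "200" ] && printf '%s' "$body" | grep -q "$keyword"
}

# A pretty error page that still answers 200 fails the keyword check:
if check_response 200 "<h1>Oops, something broke</h1>" "Welcome"; then
  echo "UP"
else
  echo "DOWN"
fi
```

A status-only monitor would have reported UP in that scenario, which is exactly the blind spot the keyword check removes.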

Group monitors with coherent tags by criticality (critical, important, informational) and by system. That way you can filter in dashboards and route distinct notifications to distinct channels by severity.

Configuring Telegram notifications

Telegram is one of the most convenient notification channels. Create a bot by talking to @BotFather on Telegram, follow the instructions, and note the token it gives you. Next you need the chat ID where you want notifications delivered. For a personal chat, open a conversation with the bot and send /start, then check https://api.telegram.org/bot<YOUR_TOKEN>/getUpdates in the browser to see the ID. For a group, add the bot to the group and repeat the procedure; note that group chat IDs are negative numbers, minus sign included.
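If you prefer the terminal to the browser, the same lookup can be done with curl; the token below is a made-up placeholder, and the jq line extracts the chat IDs from the getUpdates response:

```shell
# Hypothetical token; @BotFather gives you the real one after /newbot
BOT_TOKEN="123456:ABC-ExampleToken"
GET_UPDATES_URL="https://api.telegram.org/bot${BOT_TOKEN}/getUpdates"
echo "$GET_UPDATES_URL"
# With curl and jq installed, list the chat IDs of recent messages:
#   curl -s "$GET_UPDATES_URL" | jq '.result[].message.chat.id'
```

If getUpdates returns an empty result, send the bot another message first; it only reports updates it hasn't already delivered.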

In Uptime Kuma, go to Settings → Notifications → Add. Select Telegram, paste the token and chat ID, and test with the “Test” button. If everything is right, you’ll receive a test message. Then assign this notification to the monitors you want. Do it preferably from group tags, not monitor by monitor, to keep configuration manageable.

Configuring Discord notifications

Discord uses incoming webhooks. In your Discord server, go to channel settings → Integrations → Webhooks → New webhook. Copy the webhook URL. In Uptime Kuma, Notifications → Add → Discord, paste the URL and choose an optional name and avatar. Test and assign to desired monitors.
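It's worth testing the webhook itself once, independently of Kuma, to rule out a channel-permissions problem. A sketch with a placeholder URL; on success Discord answers 204 No Content:

```shell
# Hypothetical webhook URL; copy the real one from Integrations → Webhooks
WEBHOOK_URL="https://discord.com/api/webhooks/000000000000000000/example-token"
PAYLOAD='{"content": "Webhook test from Uptime Kuma setup"}'
echo "$PAYLOAD"
# Fire it once WEBHOOK_URL is real:
#   curl -s -H "Content-Type: application/json" -d "$PAYLOAD" "$WEBHOOK_URL"
```

If the manual test posts to the channel but Kuma's “Test” button doesn't, the problem is in Kuma's configuration, not in Discord.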

One detail worth knowing. Discord limits message frequency per webhook (roughly 30 per minute with allowed bursts). If you have 100 monitors and they all fail at once due to a network outage at the Kuma server, Discord may start rejecting messages and you’ll lose alerts. For serious production with many monitors, consider grouping notifications by time window using an intermediary like ntfy.sh or a dedicated on-call service.

Public status page

A useful feature is the public status page, where you can show the state of selected services to external users without giving them access to the full panel. In the sidebar, enter Status Pages → Add New. Choose which monitors to show, the theme, the domain (if you expose it on your own domain), and publish.

Recommended practice is to have at least two pages: a public one with user-facing services, and an internal one with all monitors including internal ones. That lets you communicate status to customers without revealing infrastructure detail.

Common mistakes to avoid

The first is not configuring backup. Uptime Kuma stores everything in /app/data; if that volume is lost, you lose all configuration and history. Schedule a daily copy of the data directory to another location with a simple cron.
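A minimal sketch of such a backup script, with assumed paths you should adjust to your layout; SRC is the directory mounted on /app/data in the container:

```shell
#!/bin/sh
# Daily backup of the Kuma data volume: dated tarball, two weeks retained.
SRC="./data"
DEST="./backups"
mkdir -p "$SRC"    # no-op on a real install; lets this sketch run standalone
mkdir -p "$DEST"
tar -czf "$DEST/kuma-$(date +%F).tar.gz" -C "$(dirname "$SRC")" "$(basename "$SRC")"
find "$DEST" -name 'kuma-*.tar.gz' -mtime +14 -delete   # prune old copies
echo "wrote $DEST/kuma-$(date +%F).tar.gz"
# Cron entry for a daily 03:30 run (adjust to wherever you save the script):
#   30 3 * * * /usr/local/bin/kuma-backup.sh
```

Note that copying a SQLite database while Kuma is writing to it can in principle produce an inconsistent snapshot; stopping the container briefly around the tar, or using sqlite3's .backup command, is the safer variant.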

The second is leaving the interface exposed without HTTPS or strong authentication. Kuma handles sensitive credentials (notification tokens, connection strings, internal URLs). Exposing it with a weak-password admin is handing over an entry point to your infrastructure.

The third is not validating notifications. After configuring Telegram or Discord, test with an artificial monitor that fails on purpose to confirm the alert arrives. Don’t trust that it’s working because you saw the test message; the real “monitor fails, alert sent” flow has more pieces. The fourth is using intervals that are too aggressive: checking 100 monitors every 10 seconds generates heavy load; 60 seconds is reasonable for most cases.

How to think about the decision

Uptime Kuma is a notably well-made tool for a bounded problem, and that’s its strength. It doesn’t try to be Grafana, it doesn’t try to be Prometheus with complex rules, it doesn’t manage application metrics. It does one thing (check if your things are alive and alert you if not) with enough simplicity that a small team can maintain it without spending significant time.

For a team just entering serious monitoring, Uptime Kuma is probably the best first piece to install before anything else. It gives you immediate basic coverage, teaches you the discipline of thinking about what deserves monitoring, and when the time comes to add deeper observability with Prometheus or a full stack, Kuma will keep covering its layer without interfering. It’s a tool that doesn’t compete with anything else; it simply does its job and disappears from the radar until something fails, which is exactly what you ask of an uptime monitor.
