Signals
Built-in product analytics and frontend diagnostics. Zero config, no cookies, GDPR-compliant by design.
The Code
Signals work out of the box. Every RPC call is automatically recorded on the server. On the client, ForgeProvider initializes a ForgeSignals tracker that captures page views and errors, and exposes an API for tracking custom events.
```js
// Svelte — accessed anywhere inside ForgeProvider
import { getForgeSignals } from '@forge-rs/svelte';

const signals = getForgeSignals();
signals.track('button_clicked', { button_id: 'signup' });
await signals.identify(userId, { plan: 'pro' });
signals.breadcrumb('Added item to cart', { item_id: '123' });
signals.captureError(new Error('Something broke'), { component: 'Cart' });
```
```rust
// Dioxus — accessed anywhere inside ForgeProvider
let signals = use_signals();
signals.track_with_properties("button_clicked", json!({"button_id": "signup"}));
signals.identify("user-uuid", json!({"plan": "pro"})).await;
signals.breadcrumb("Added item to cart", Some(json!({"item_id": "123"})));
signals.capture_error("Something broke", Some(json!({"component": "Cart"})));
```
What Happens
Forge captures analytics at two levels that get correlated automatically.
Server-side auto-capture: The function executor records every RPC call with its name, kind (query/mutation), duration, success/failure status, and the caller's identity. Jobs, crons, workflows, webhooks, and daemon runs emit server_execution events with the same shape. Auth failures and rate-limit rejections emit track events (auth.failed, rate_limit.exceeded) so dashboards surface attack patterns. All of this happens inside the framework, so your handler code stays clean.
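The per-execution record described above can be sketched as a small data type. This is a Python illustration only; the field names are assumptions, not the real Forge schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ServerExecutionEvent:
    """Illustrative shape of an auto-captured execution record.
    Field names are assumptions, not Forge's actual schema."""
    name: str                        # e.g. "create_order"
    kind: str                        # "query", "mutation", "job", "cron", ...
    duration_ms: float               # wall-clock execution time
    success: bool                    # completed without error
    caller_id: Optional[str] = None  # caller identity, when authenticated

# Auth failures and rate-limit rejections arrive as track events instead,
# e.g. "auth.failed" and "rate_limit.exceeded".
```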
Client-side tracking: The ForgeSignals class (Svelte) runs in the browser. The use_signals() hook (Dioxus) works across web, desktop, and mobile; browser-only auto-capture such as Web Vitals and window.onerror is enabled only on wasm32. Custom events and error reports work on every Dioxus target. Events are batched locally, persisted where platform storage is available, and flushed to the server periodically or when the batch fills up.
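The batch-and-flush behavior can be sketched in Python. This is illustrative only: the class and callback names are invented, and the defaults mirror the client options flushInterval = 5000 and maxBatchSize = 20 documented below.

```python
import time

class EventBatcher:
    """Sketch of client-side batching: flush when the batch fills up,
    or when the flush interval has elapsed. Not the real SDK."""

    def __init__(self, flush, max_batch_size=20, flush_interval_ms=5000):
        self.flush_cb = flush
        self.max_batch_size = max_batch_size
        self.flush_interval = flush_interval_ms / 1000.0
        self.queue = []
        self.last_flush = time.monotonic()

    def track(self, event, properties=None):
        self.queue.append({"event": event, "properties": properties or {}})
        if len(self.queue) >= self.max_batch_size:
            self.flush()  # early flush when the batch fills up

    def tick(self):
        # Called periodically; flushes when the interval has elapsed.
        if self.queue and time.monotonic() - self.last_flush >= self.flush_interval:
            self.flush()

    def flush(self):
        if self.queue:
            self.flush_cb(self.queue)
            self.queue = []
        self.last_flush = time.monotonic()
```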
Correlation: Every client-initiated RPC call includes an x-correlation-id header. This ID links the frontend event (user clicked a button) to the backend execution (the mutation that ran). Error reports include the last correlation ID and a trail of breadcrumbs for reproduction.
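A minimal sketch of how a client might mint and remember the correlation ID. This is a hypothetical helper, not the SDK; the header name comes from the docs, while the UUID format is an assumption.

```python
import uuid

class RpcClient:
    """Sketch of per-call correlation. The last ID is retained so it can
    be attached to a subsequent error report."""

    def __init__(self):
        self.last_correlation_id = None

    def headers_for_call(self):
        # Fresh ID per RPC call; the server logs the same ID on its side,
        # linking the frontend event to the backend execution.
        cid = str(uuid.uuid4())
        self.last_correlation_id = cid
        return {"x-correlation-id": cid}
```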
Sessions: The server manages sessions, not the client. On first contact, the server assigns a session_id and returns it. The client sends it back on subsequent requests via the x-session-id header. Sessions close after 30 minutes of inactivity (configurable). No cookies, no localStorage.
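The echo-back protocol is simple enough to sketch. Illustrative Python; only the x-session-id header name and the response shape are taken from the docs.

```python
class SessionState:
    """Client-side session continuity: echo whatever session_id the
    server last returned. No cookies, no localStorage."""

    def __init__(self):
        self.session_id = None  # unknown until the first response

    def request_headers(self):
        # First contact sends no session header; the server assigns one.
        return {"x-session-id": self.session_id} if self.session_id else {}

    def on_response(self, body):
        # Every signals endpoint responds with {"ok": true, "session_id": "..."}.
        self.session_id = body.get("session_id", self.session_id)
```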
Visitor identity: The server generates a daily-rotating visitor ID from SHA256(client_ip + user_agent + daily_salt). This gives you same-day uniqueness for metrics without persistent tracking. The salt rotates at midnight UTC, so the same visitor gets a different ID the next day.
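The visitor-ID scheme can be sketched with Python's hashlib. The exact salt derivation and concatenation order are assumptions; only the SHA256(client_ip + user_agent + daily_salt) shape comes from the text.

```python
import hashlib
from datetime import datetime, timezone

def visitor_id(client_ip: str, user_agent: str, secret: str) -> str:
    """Daily-rotating visitor ID sketch: same inputs hash to the same ID
    within one UTC day, and to a different ID the next day."""
    day = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    # Assumed derivation: the docs say the salt mixes a secret with the
    # current UTC date, but not the exact formula.
    daily_salt = hashlib.sha256(f"{secret}:{day}".encode()).hexdigest()
    return hashlib.sha256(f"{client_ip}{user_agent}{daily_salt}".encode()).hexdigest()
```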
Bot detection: User-Agent patterns identify 50+ known bots (search crawlers, social previews, monitoring tools, headless browsers). Bot events are stored with is_bot = true so dashboards can filter them out.
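UA-based tagging amounts to a pattern match. Here is a sketch with a small subset of patterns; the shipped list covers 50+ bots, and these regexes are illustrative, not Forge's actual list.

```python
import re

# Tiny illustrative subset of known-bot UA fragments.
BOT_PATTERNS = re.compile(
    r"(googlebot|bingbot|slurp|duckduckbot|facebookexternalhit|"
    r"twitterbot|slackbot|headlesschrome|phantomjs|pingdom|uptimerobot)",
    re.IGNORECASE,
)

def is_bot(user_agent: str) -> bool:
    """Tag, don't drop: matching events are stored with is_bot = true
    so dashboards can filter them out."""
    return bool(BOT_PATTERNS.search(user_agent or ""))
```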
Configuration
Add a [signals] section to forge.toml. Every field has a sensible default, so an empty section (or no section at all) enables signals with the defaults shown below.
```toml
[signals]
enabled = true              # master switch
auto_capture = true         # record RPC calls automatically
diagnostics = true          # accept frontend error reports
session_timeout_mins = 30   # inactivity before session closes
retention_days = 90         # drop old monthly partitions
anonymize_ip = false        # store hashed visitor ID instead of raw IP
batch_size = 100            # events per database flush
flush_interval_ms = 5000    # max milliseconds between flushes
excluded_functions = []     # function names to skip (exact match)
bot_detection = true        # tag bot traffic via UA patterns
```
Forge ships with an embedded DB-IP Country Lite database for IP-to-country resolution. Every signal event is automatically enriched with an ISO 3166-1 alpha-2 country code in the country column. No configuration or external files needed.
For city-level resolution, point geoip_db_path at a MaxMind GeoLite2-City MMDB file (free with a MaxMind account). This populates the city column alongside country. A bad path fails startup — if you explicitly ask for city-level data, Forge doesn't silently downgrade.
```toml
[signals]
geoip_db_path = "/etc/forge/GeoLite2-City.mmdb"
```
To disable signals entirely:
```toml
[signals]
enabled = false
```
To exclude noisy functions from auto-capture:
```toml
[signals]
excluded_functions = ["health_check", "get_feature_flags"]
```
Client Configuration
Both SDKs accept configuration through the provider.
Svelte
```svelte
<ForgeProvider signals={{ enabled: true, autoPageViews: true, autoCaptureErrors: true, flushInterval: 5000, maxBatchSize: 20 }}>
  <slot />
</ForgeProvider>
```
Pass signals={false} to disable client-side tracking entirely while keeping server-side auto-capture active.
Dioxus
```rust
ForgeProvider {
    // Signals are enabled by default inside ForgeProvider.
    // ForgeAuthProvider also initializes signals context.
}
```
| Option | Default | Description |
|---|---|---|
| `enabled` | `true` | Master switch for client-side collection |
| `autoPageViews` | `true` | Track navigation automatically |
| `autoCaptureErrors` | `true` | Capture window errors and unhandled rejections |
| `autoWebVitals` | `true` | Capture LCP, CLS, INP, FCP, TTFB, navigation timing, long tasks |
| `autoNetworkEvents` | `true` | Emit network.online / network.offline events and drain the offline queue on reconnect |
| `respectDnt` | `true` | Honor the `DNT: 1` and `Sec-GPC: 1` browser opt-outs (disables collection) |
| `persistQueue` | `true` | Mirror the pending queue to localStorage so events survive reloads |
| `flushInterval` | `5000` | Milliseconds between batch flushes |
| `maxBatchSize` | `20` | Events queued before triggering an early flush |
Client API
track(event, properties)
Record a custom event with arbitrary properties.
```js
signals.track('subscription_upgraded', { from: 'free', to: 'pro' });
```

```rust
signals.track("subscription_upgraded");
signals.track_with_properties("subscription_upgraded", json!({"from": "free", "to": "pro"}));
```
identify(userId, traits)
Link the current anonymous session to a known user. Call this after login. Traits are stored as JSONB in the forge_signals_users table.
```js
await signals.identify(user.id, { name: user.name, plan: user.plan });
```
breadcrumb(message, data)
Leave a trail for error reproduction. Breadcrumbs attach to the next error report.
```js
signals.breadcrumb('Opened settings modal', { tab: 'billing' });
```
captureError(error, context)
Report a frontend error with optional context. Auto-captured errors go through the same path.
```js
signals.captureError(new Error('Payment failed'), { orderId: '123' });
```

```rust
signals.capture_error("Payment failed", Some(json!({"order_id": "123"})));
```
page()
Manually record a page view. Usually not needed since auto page views track SPA navigation.
```js
await signals.page();
```
vital(name, value, extra?)
Emit a single Web Vitals / performance measurement. Use name values like lcp, cls, inp, fcp, ttfb, long_task, navigation, or any custom metric. value is numeric (ms for timings, unitless for CLS). Optional rating ("good" | "needs-improvement" | "poor") slots into the server's status column for quick filtering.
```js
signals.vital('hero_image_loaded', 412, { rating: 'good' });
```
The SDK calls this automatically for the standard Web Vitals when autoWebVitals is enabled.
Beacon Flush
When the user navigates away or closes the tab, pending events are flushed via the Beacon API (Svelte/WASM) or a synchronous request (desktop/mobile Dioxus). This prevents data loss on exit.
Endpoints
The server exposes five ingestion endpoints. These are added to quiet_paths internally so they don't generate their own telemetry noise.
| Endpoint | Method | Purpose |
|---|---|---|
| `/_api/signal/event` | POST | Batch custom events (max 50 per request) |
| `/_api/signal/view` | POST | Page view with referrer and UTM params |
| `/_api/signal/user` | POST | Identify user and store traits |
| `/_api/signal/report` | POST | Frontend error reports with breadcrumbs |
| `/_api/signal/vital` | POST | Web Vitals / performance metrics (max 50 per request) |
All endpoints return:
```json
{ "ok": true, "session_id": "uuid" }
```
Request headers the server looks at:
| Header | Purpose |
|---|---|
| `x-session-id` | Session continuity (value from previous response) |
| `x-forge-platform` | Device classification (`web`, `desktop-macos`, `desktop-windows`, `desktop-linux`, `ios`, `android`) |
| `x-correlation-id` | Links frontend events to backend RPC execution |
| `Authorization` | Optional; associates events with authenticated user |
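Putting the endpoint cap and request headers together, a client-side batch builder might look like the following. This is a Python sketch: the request body shape is an assumption, while the 50-event cap and header names come from the tables above.

```python
import json

MAX_EVENTS_PER_REQUEST = 50  # cap on /_api/signal/event, per the table above

def build_signal_requests(events, session_id=None, platform="web"):
    """Chunk events into /_api/signal/event requests that respect the
    per-request cap. The {"events": [...]} body shape is an assumption."""
    requests = []
    for i in range(0, len(events), MAX_EVENTS_PER_REQUEST):
        headers = {
            "content-type": "application/json",
            "x-forge-platform": platform,
        }
        if session_id:
            headers["x-session-id"] = session_id
        requests.append({
            "url": "/_api/signal/event",
            "headers": headers,
            "body": json.dumps({"events": events[i:i + MAX_EVENTS_PER_REQUEST]}),
        })
    return requests
```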
Storage
Signals use three PostgreSQL tables on the analytics connection pool.
forge_signals_events stores all events, partitioned by month. Each monthly partition is a separate table (e.g. forge_signals_events_2026_03). Old partitions are dropped automatically based on retention_days. Events are batch-inserted using PostgreSQL's UNNEST() for single-roundtrip writes.
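The UNNEST pattern turns N row inserts into one statement carrying parallel arrays. A sketch of building such a statement; the column names are illustrative, not the real forge_signals_events schema.

```python
import json

def unnest_insert(events):
    """Build a single-roundtrip batch insert in the UNNEST style
    described above. Returns (sql, params) for a driver like asyncpg.
    Column names here are assumptions, not the real schema."""
    sql = (
        "INSERT INTO forge_signals_events (name, session_id, properties) "
        "SELECT * FROM UNNEST($1::text[], $2::uuid[], $3::jsonb[])"
    )
    # One array per column, all the same length: one element per event.
    params = (
        [e["name"] for e in events],
        [e["session_id"] for e in events],
        [json.dumps(e.get("properties", {})) for e in events],
    )
    return sql, params
```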
forge_signals_sessions tracks server-managed sessions with entry/exit pages, device info, event counts, bounce detection, and duration.
forge_signals_users stores identified user profiles with traits, acquisition data (first referrer, UTM params), and lifetime counters.
Materialized views refresh every 5 minutes for dashboard queries:
| View | Content |
|---|---|
| `forge_signals_daily_stats` | DAU, sessions, events by day |
| `forge_signals_retention` | Weekly cohort retention |
| `forge_signals_function_stats` | Hourly function performance |
Grafana Dashboard
Forge ships with pre-built Grafana dashboards that query PostgreSQL directly. The OTEL-LGTM Docker image includes a PostgreSQL datasource configuration.
To enable in development, use the Forge OTEL-LGTM image in your docker-compose.yml and pass POSTGRES_* environment variables. The dashboards cover business metrics (users, sessions, acquisition, retention) and operations (function performance, error rates, bot traffic).
Privacy
Signals are designed around GDPR compliance without cookie banners:
- No cookies used for tracking; session IDs are in-memory only and server-managed
- `localStorage` is used only to buffer unsent events so they survive reloads, never for identity. The client SDK checks DNT/GPC before restoring the persisted queue, so flipping DNT on after a session has buffered events drops the queue rather than flushing it
- Visitor identity is a daily-rotating SHA-256 hash of `IP + UA + daily_salt`. The salt is derived from `auth.jwt_secret` and the current UTC date, so cross-day correlation is impossible without access to the secret. Rotate `auth.jwt_secret` to break correlation across deployments
- With `anonymize_ip = true`, the raw IP is hashed into the visitor ID and then zeroed before storage; the `client_ip` column always ends up empty
- `identify()` is opt-in and only links sessions you explicitly associate
- The server short-circuits `/signal/view`, `/signal/event`, `/signal/user`, and `/signal/vital` for requests carrying `DNT: 1` or `Sec-GPC: 1`. The client SDKs disable themselves when the browser has set those signals. Crash reports (`/signal/report`) still land so production errors from DNT users don't disappear, but they carry no `visitor_id` and no `user_id`
- Bot traffic is tagged but not filtered, so you retain full data for debugging
Data retention and right-to-delete
Forge does not enforce a global retention period; you set one with `retention_days` (default null, no auto-drop). The session reaper closes sessions older than `session_timeout_mins`. Operators are responsible for:

- Choosing a `retention_days` window that matches your privacy policy
- Running ad-hoc deletes for right-to-delete requests. Two flows:
  - By visitor: `DELETE FROM forge_signals_events WHERE visitor_id = $1; DELETE FROM forge_signals_sessions WHERE visitor_id = $1;`
  - By user: `DELETE FROM forge_signals_events WHERE user_id = $1; DELETE FROM forge_signals_users WHERE user_id = $1; DELETE FROM forge_signals_sessions WHERE user_id = $1;`
Visitor IDs rotate daily, so a "delete me" request without a `user_id` covers only the most recent calendar day for that browser fingerprint. If you need cross-day deletion, either ask users to authenticate first (so you can scope deletes by `user_id`) or accept that older daily-hashed identifiers can no longer be linked back to the user.
Multi-tenant data
`tenant_id` is captured on every event when the JWT carries one. Per-tenant queries scope by `WHERE tenant_id = $1`. There is no automatic row-level security on the signals tables: any query that omits the tenant filter will see all tenants' rows. If you operate signals on behalf of customers, either run a separate Forge deployment per tenant or wrap every query in a tenant-scoped layer.
GeoIP attribution
The default build embeds the DB-IP IP-to-Country Lite database. If you ship a binary that uses the embedded data (i.e., you don't override `signals.geoip_db_path`), include the DB-IP attribution required by their CC BY 4.0 license: "IP geolocation by DB-IP".

Setting `geoip_db_path` to a MaxMind MMDB file uses your own licensed data instead and removes the attribution requirement.
Limitations
Signals are designed for product analytics and diagnostics, not as a security audit log.
- Unauthenticated by design. The `/_api/signal/*` endpoints accept requests without authentication so client-side JavaScript can report events, page views, and errors. This means anyone can submit signal data. Don't rely on signals for access control decisions or billing.
- Approximate counts. Events are buffered in memory and batch-inserted. Under load, the bounded channel (configurable via `channel_capacity`) drops events rather than applying backpressure. Counts are approximate, not exact.
- Bot detection is UA-based. Forge filters known bot user-agent patterns, but sophisticated bots that mimic real browsers will pass through.
- Visitor IDs rotate daily. The daily-rotating `SHA256(ip + user_agent + daily_salt)` visitor ID is GDPR-friendly but means the same person gets a new ID each day. Long-term user tracking requires `identify()` calls from authenticated sessions.
- IP-dependent visitor IDs. Users behind shared NATs or VPNs may share a visitor ID. Users who change networks mid-session get a new one.
Troubleshooting
Events not appearing in Grafana: Check that `enabled = true` is set under `[signals]` in forge.toml, that the PostgreSQL datasource is configured in Grafana, and that the materialized views have had time to refresh (the first refresh is 5 minutes after startup). Verify events exist with `SELECT count(*) FROM forge_signals_events`.
High event volume dropping events: The collector uses a bounded channel with 10,000 capacity. If you see `signals collector channel full` in the logs, increase `batch_size` or decrease `flush_interval_ms` so the writer drains faster.
Session counts seem inflated: The default session_timeout_mins of 30 might be too short if users have long idle periods. Increase it to match your app's usage pattern.
Don't store PII in track() properties: Custom properties are stored as JSONB. Use identify() for user association instead of passing emails or personal data in event properties.