authdb (20260426+1) unstable; urgency=medium

  * file backend vacuum: check write/ftruncate/lseek return values,
    throw on failure instead of silently corrupting the database
  * file backend: check read/write return values in getRevesion()
    and newRevesion()
  * Fix -Wunused-result warnings: check mkdir/chown return values
    in authdb.cpp, log failures to stderr

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 26 Apr 2026 17:00:00 +0200

authdb (20260424+8) unstable; urgency=medium

  * Cluster: use multi-store protocol for session data (store_id=1).
    Eliminates session_intercepting_store in favor of native server
    routing. Session clients use set_store_id() for proper separation.
  * Scrub no longer needs session group filtering — store_id routing
    ensures domain and session data are fully isolated.

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 24 Apr 2026 00:00:00 +0200

authdb (20260424+7) unstable; urgency=medium

  * Client library: add ConnectionPool class for TCP connection reuse.
    PooledConnection RAII wrapper auto-returns connections to the pool.
  * Client library: add GPO result cache (5 min TTL) in GPOcheck() to
    avoid redundant network round-trips for repeated policy checks.

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 24 Apr 2026 00:00:00 +0200

authdb (20260423+6) unstable; urgency=high

  * Cluster health monitor: replace sentinel store+remove with filesystem
    probe to prevent blocks.bin pollution (thousands of dead records per
    day).
  * Cluster::scrub(): store data BEFORE removing old copies to prevent
    data loss when re-store fails after remove.
  * Win32 compatibility for health probe (io.h, fcntl.h).

 -- Jan Koester <jan.koester@tuxist.de>  Thu, 23 Apr 2026 00:00:00 +0200

authdb (20260419+5) unstable; urgency=high

  * Prevent stale manifest overwrite: pushManifestSync now compares local
    revision against cluster revision before pushing. A node with an older
    revision (e.g. after restart before sync) can no longer overwrite a
    freshly imported or updated domain manifest in the cluster.

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 19 Apr 2026 00:00:00 +0200

authdb (20260418+4) unstable; urgency=medium

  * Cluster: defer recovery_epoch bump until after scrub completes,
    preventing backends from fetching before data is rebalanced
  * Skip fetchFromCluster during active scrub/rebuild (use cached data)

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 18 Apr 2026 23:30:00 +0200

authdb (20260418+3) unstable; urgency=medium

  * Cluster: vacuumAllNodes acquires scrub lock for cluster-wide
    synchronisation, preventing concurrent vacuum/scrub/rebalance

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 18 Apr 2026 23:00:00 +0200

authdb (20260418+2) unstable; urgency=medium

  * Warmup all clients (pclient_, session_client_, scrub_client_) on init
    to eliminate first-request QUIC handshake latency

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 18 Apr 2026 22:00:00 +0200

authdb (20260418+1) unstable; urgency=medium

  * Dedicated scrub_client_ for scrub/rebalance to avoid blocking
    push_worker and health_monitor via pclient_ mutex contention
  * Propagate scrub_running status to all nodes via GET_STATUS protocol
  * Set server rebuilding flag during scrub so remote nodes report REBUILD
  * Expose per-node scrub_running in admin cluster status API
  * Propagate mark_node_dead to scrub_client_

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 18 Apr 2026 21:00:00 +0200

authdb (20260417+3) unstable; urgency=medium

  * Fix authdb becoming unreachable during auto-vacuum: release the
    domain cache mutex before vacuuming

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 17 Apr 2026 21:17:45 +0200

authdb (20260417+1) unstable; urgency=medium

  * Fix missing Username and GPO information in SessionData due to
    incomplete copy constructors
  * Add unit tests for Cluster Operations (session push, fetch, rebalance)

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 17 Apr 2026 12:00:00 +0200

authdb (20260416+4) unstable; urgency=medium

  * Increase initial cluster retrieval timeout to 30s to fix failures
    caused by startup delays

 -- Jan Koester <jan.koester@tuxist.de>  Thu, 16 Apr 2026 11:05:04 +0200

authdb (20260414+17) unstable; urgency=medium

  * Fix cluster retrieve blocking HTTP requests: fetchFromCluster now
    uses fire-and-forget async retrieve instead of synchronous 5s wait.
    Result is picked up on the next prefetch() call — no request ever
    blocks on a QUIC retrieve. First fetch after startup does a one-time
    synchronous wait so data is available for the first request.
  * Add revision guard: fetchFromCluster discards manifests with a
    revision lower than the cached revision, preventing data loss when
    a recovering node returns stale data.
  * Fix scrub data loss: scrub now validates retrieved data (size,
    AuthHeader marker) BEFORE calling remove(). Empty or corrupt
    retrieve results no longer trigger remove+store, which was deleting
    valid blocks from healthy nodes.
  * Fix import not persisted after restart: Import now calls flushSync()
    which does a synchronous push to the cluster, ensuring data is
    durably stored before returning the HTTP response. Previously
    pushManifest() was async (queued) and a restart would lose the data.
  * Add regression tests: PrefetchNeverBlocks, RevisionGuardNoDataLoss

 -- Jan Koester <jan.koester@tuxist.de>  Tue, 14 Apr 2026 22:00:00 +0200

authdb (20260412+16) unstable; urgency=medium

  * Dedicated session_read_client_ for session reads to avoid mutex
    contention with push worker holding session_client_->mutex_
  * Pre-connect read clients on startup (warmup) to eliminate
    first-request QUIC handshake + auth latency (2-3s per node)
  * Propagate dead node info to all read clients in health monitor

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 12 Apr 2026 23:00:00 +0200

authdb (20260412+15) unstable; urgency=medium

  * Dedicated read_client_ for domain reads to avoid mutex contention
    with push worker and health monitor
  * Non-blocking constructor: no network I/O, uses static domain cache
  * Push worker: 3-retry limit for data pushes, sessions processed first
  * Orphaned group cleanup in scrub via knownDomainGroupIds()
  * Manifest embeds full buffer data (single QUIC call per fetch)

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 12 Apr 2026 22:00:00 +0200

authdb (20260412+14) unstable; urgency=medium

  * Per-entity incremental cluster storage: only push changed
    entities instead of full domain buffer on unlock()
  * Manifest-based entity tracking with FNV-1a hash diffing
  * Support for queued entity removes in push worker

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 12 Apr 2026 19:00:00 +0200

authdb (20260412+13) unstable; urgency=medium

  * Use libparitypp distributed scrub lock to prevent concurrent
    scrub/rebalance across cluster nodes
  * Skip scrub lock group in group listing

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 12 Apr 2026 18:00:00 +0200

authdb (20260412+12) unstable; urgency=medium

  * Fix group distribution imbalance: session_intercepting_store no longer
    exposes session groups via list_groups/list_blocks/total_blocks/total_groups
    — prevents scrub/rebalance from writing session blocks into file store
  * Filter session groups in scrub() to avoid ghost group creation
  * Merge scrub + rebalance into single pass: remove() + store() for
    under-replicated groups fixes both missing and misplaced blocks
  * Add scrub_mutex_: scrub and rebalance cannot run concurrently
  * Add scrub_running_ status indicator to cluster status API and admin UI
  * Add admin API endpoint /api/clusterrebalance with Rebalance button
  * Add Scrub button to cluster status page

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 12 Apr 2026 16:00:00 +0200

authdb (20260412+11) unstable; urgency=medium

  * Add periodic rebalance in health monitor: every ~5 minutes while all
    nodes are healthy, run rebalance() to fix blocks misplaced by
    transient store_stripe timeouts that never triggered degraded state

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 12 Apr 2026 12:00:00 +0200

authdb (20260411+10) unstable; urgency=medium

  * Change outer API request lock from Exclusive to Shared — enables
    parallel read requests per domain
  * Add prefetch() to lock_shared() so cluster data stays fresh for reads
  * Fast-path prefetch: skip _FetchMutex entirely when data is fresh (<30s)
  * Add warmDomainCache() at startup: pre-loads all domain backends so
    API requests never need to lock _AdminBackend for domain resolution
  * Add evictDomainCache() called on domain removal to keep cache consistent
  * Integrate rebalance() into scrub to fix misplaced block distribution

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 11 Apr 2026 20:00:00 +0200

authdb (20260411+9) unstable; urgency=medium

  * Fix cluster node blocking: add per-domain _FetchMutex to serialize
    concurrent prefetch() calls (prevents thundering herd on retrieve)
  * prefetch() uses try_lock — if another thread is fetching, wait for
    its result instead of issuing a parallel cluster retrieve

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 11 Apr 2026 18:00:00 +0200

authdb (20260409+8) unstable; urgency=medium

  * Use immediate flush (threshold=0) for file_block_store to ensure
    blocks.bin durability after every write
  * Purge old session blocks from file store at startup by detecting
    SESB magic in block 0 shards, then vacuum to reclaim space

 -- Jan Koester <jan.koester@tuxist.de>  Thu, 09 Apr 2026 20:00:00 +0200

authdb (20260409+7) unstable; urgency=medium

  * Fix session data bloating blocks.bin: session_intercepting_store now
    routes session groups to memory-only session_store_ instead of
    file-backed inner store, using registered session group IDs
  * Fix session_client_ local node: use session_store_ instead of store_
    so self-node session shards stay in memory
  * Add startup vacuum to reclaim space from old session data in blocks.bin
  * Push worker drains all pending items per cycle instead of one at a
    time, and no longer spawns a std::async task per push (sequential
    processing)
  * Mark session group IDs in cleanup loop for interceptor routing
  * Store interceptor_ pointer in Cluster for group registration

 -- Jan Koester <jan.koester@tuxist.de>  Thu, 09 Apr 2026 18:00:00 +0200

authdb (20260408+6) unstable; urgency=medium

  * New release

 -- Jan Koester <jan.koester@tuxist.de>  Wed, 08 Apr 2026 00:00:00 +0200

authdb (20260407+5) unstable; urgency=medium

  * New release

 -- Jan Koester <jan.koester@tuxist.de>  Tue, 07 Apr 2026 00:00:00 +0200

authdb (20260405+4) unstable; urgency=medium

  * Install index.h public header (fixes mediadb/blogi build against
    installed authdb: session.h includes index.h)

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 05 Apr 2026 00:00:00 +0200

authdb (20260405+3) unstable; urgency=medium

  * Add cluster health monitor thread: probes peers every 10s, tracks
    degraded state, logs recovery
  * Skip cluster session fetch when degraded to avoid blocking logins
  * Reduce fetchSession lock timeout from 5s to 2s
  * pushSession synchronous again (cluster root cause fixed in paritypp)

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 05 Apr 2026 00:00:00 +0200

authdb (20260405+2) unstable; urgency=medium

  * Fix AdminController using value copy of AuthBackend instead of
    reference — wizard data was invisible to login handler
  * Make backend push and session push non-blocking (background threads)
    so cluster timeouts don't block HTTP responses
  * Don't delete local session when cluster fetch fails (peers unreachable)
  * Return JSON 503 for /api requests when backend is uninitialized
    instead of wizard HTML page (fixes "unexpected character" in clients)

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 05 Apr 2026 00:00:00 +0200

authdb (20260405+1) unstable; urgency=medium

  * Fix ClusterBackend constructor overwriting existing cluster data:
    no longer pushes empty header to cluster on startup (prevents
    wizard data loss when second node starts)
  * Retry fetchFromCluster up to 3 times during constructor init

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 05 Apr 2026 00:00:00 +0200

authdb (20260405) unstable; urgency=medium

  * Vacuum cluster block stores on all nodes when cluster mode is active

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 05 Apr 2026 00:00:00 +0200

authdb (20260404+3) unstable; urgency=medium

  * Rebuild against libparitypp 20260404+9 (stripe-based parity, 16KB chunks)

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 18:30:00 +0200

authdb (20260403+2) unstable; urgency=medium

  * Switch cluster from RAID1 mirroring to erasure coding (k+m shards)
  * Remove local in-memory session/data stores, use distributed storage
  * Sessions and data ops distributed via erasure-coded pclient->store/retrieve
  * listAllSessions enumerates groups across all cluster nodes

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 09:00:00 +0200

authdb (20260401) unstable; urgency=medium

  * Initial Debian packaging with multiarch support.

 -- Jan Koester <jan.koester@tuxist.de>  Wed, 01 Apr 2026 00:00:00 +0200
