mediadb (20260426+1) unstable; urgency=medium

  * Fix preview 30s stall: bulk_prefetch() now uses chunked range reads
    (read_media_data_range) instead of read_media_data(), avoiding the
    retrieve_range retry backoff loop that blocks preview generation
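
    The chunked-read pattern can be sketched as follows (function shape and
    names here are illustrative, not the project's actual API):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <functional>
#include <string>

// Illustrative sketch: fetch a file as fixed-size ranges instead of one
// whole-file read, so a slow stripe only stalls its own chunk and never
// drags the whole request into a retry/backoff loop.
std::string bulk_prefetch(std::size_t total_size, std::size_t chunk_size,
                          const std::function<std::string(std::size_t,
                                                          std::size_t)>& read_range) {
    std::string out;
    out.reserve(total_size);
    for (std::size_t off = 0; off < total_size; off += chunk_size)
        out += read_range(off, std::min(chunk_size, total_size - off));
    return out;
}
```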

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 26 Apr 2026 12:00:00 +0200

mediadb (20260426) unstable; urgency=high

  * Fix cluster data loss: skip orphan cleanup in vacuum_all_stores()
    when local database is empty — prevents deleting all cluster groups
    when a node starts with an empty or corrupt local DB
  * Fix crash-loop on bind failure: bind/listen HTTP sockets before
    starting cluster threads so a port-in-use error exits cleanly
    instead of causing "terminate called without an active exception"
  * Fix missing g_Cluster->setImportRunning(false) in start_import()
    and start_import_file() — cluster stayed in import mode permanently
    after synchronous import, blocking background sync
  * admin.html: fire PUT /api/import without awaiting, poll status
    immediately so the progress bar stays alive during long imports

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 26 Apr 2026 10:00:00 +0200

mediadb (20260425+3) unstable; urgency=medium

  * Rebuild against libparitypp 20260425+4 (interleaved stripe sends
    for faster cluster import throughput)

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 25 Apr 2026 22:00:00 +0200

mediadb (20260425+2) unstable; urgency=medium

  * Rebuild against libnetplus 20260425+5 (RSA Montgomery CIOS
    heap over-read fix, SHA-NI message schedule fix)

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 25 Apr 2026 21:00:00 +0200

mediadb (20260425+1) unstable; urgency=medium

  * ClusterMediaBackend::add_media: add missing replicate_index() call
    after replicate_store() — peers never saw the updated index after
    upload, causing perpetual "no index exists on any peer" on sync
  * add_media: add initial_sync_ok_ guard consistent with all other
    cluster mutators, preventing writes during unstable sync state

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 25 Apr 2026 18:00:00 +0200

mediadb (20260424+82) unstable; urgency=medium

  * Bulk-prefetch entire file (≤64 MB) in one parity round-trip before FFmpeg
    decode, instead of 52+ small range requests that each reconstruct the full
    parity group.  Reduces preview render from ~30s to ~2s for typical images.

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 24 Apr 2026 00:00:00 +0200

mediadb (20260424+81) unstable; urgency=medium

  * Fix preview inflight coalescing: use predicate-based wait_until to prevent
    lost CV notifications causing 30s timeouts on simultaneous requests.
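
    The lost-notification hazard and its fix can be sketched like this
    (struct and member names are illustrative, not the project's):

```cpp
#include <cassert>
#include <chrono>
#include <condition_variable>
#include <mutex>

// Waiters block until the in-flight render finishes or the deadline
// passes. The predicate is re-checked on every wakeup, so a notify that
// fires before the wait even starts (a "lost" notification) can no
// longer cause a spurious timeout.
struct InflightSet {
    std::mutex m;
    std::condition_variable cv;
    bool inflight = false;

    // Returns true if the render finished before the deadline.
    bool wait_done(std::chrono::milliseconds timeout) {
        std::unique_lock<std::mutex> lk(m);
        return cv.wait_until(lk, std::chrono::steady_clock::now() + timeout,
                             [this] { return !inflight; });
    }

    void finish() {
        { std::lock_guard<std::mutex> lk(m); inflight = false; }
        cv.notify_all();
    }
};
```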

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 24 Apr 2026 00:00:00 +0200

mediadb (20260424+80) unstable; urgency=medium

  * Fix video overlay: stop fetch on close (pause + removeAttribute('src'))
    to free H2 connection for other requests.

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 24 Apr 2026 00:00:00 +0200

mediadb (20260424+79) unstable; urgency=medium

  * Reduce MAX_CONCURRENT_PREVIEWS 6→2 in admin.html and gallery.html to
    avoid H2 stream serialisation timeouts on multiplexed connections.
  * Retry on network errors (TypeError), not only AbortError.

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 24 Apr 2026 00:00:00 +0200

mediadb (20260424+78) unstable; urgency=medium

  * Fix: guard PrefetchBuffer future .wait() calls with .valid() check.
    Prevents std::future_error "No associated state" crash after .get()
    consumed the future in read(), then drop/seek loops called .wait().
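
    A minimal sketch of the guard (the helper name is hypothetical): once
    get() has consumed a future's shared state, wait() throws
    std::future_error, so valid() must be checked first.

```cpp
#include <cassert>
#include <future>

// Returns false instead of throwing when the shared state was already
// consumed by an earlier get(); waits normally otherwise.
bool safe_wait(std::future<int>& f) {
    if (!f.valid())      // state already consumed: nothing to wait for
        return false;
    f.wait();
    return true;
}
```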

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 24 Apr 2026 00:00:00 +0200

mediadb (20260424+77) unstable; urgency=medium

  * Performance: reduce prefetch block size 256KB→64KB, look-ahead 2MB→256KB,
    probesize 1MB→256KB, analyze duration 2s→1s. ~8× less cluster I/O
    before first frame decode; significantly faster video preview and seek.

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 24 Apr 2026 00:00:00 +0200

mediadb (20260424+76) unstable; urgency=medium

  * Fix: remove render_pool_, run FFmpeg inline on HTTP worker threads.
    Separate render pool caused thread starvation and UI hangs. io_pool_
    now exclusively serves prefetch I/O; semaphore provides backpressure.

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 24 Apr 2026 00:00:00 +0200

mediadb (20260424+75) unstable; urgency=medium

  * Fix: split single io_pool into separate render_pool (CPU-heavy FFmpeg)
    and io_pool (prefetch I/O only), eliminating deadlock where render tasks
    starved prefetch threads. Previous semaphore-only fix was insufficient.

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 24 Apr 2026 00:00:00 +0200

mediadb (20260424+74) unstable; urgency=medium

  * Fix: prevent preview thread pool deadlock — render semaphore now capped
    below io_pool size so prefetch I/O tasks always have free threads.
  * Admin: preview fetch sends Authorization header; failed previews show
    warning icon with HTTP status instead of staying transparent.

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 24 Apr 2026 00:00:00 +0200

mediadb (20260424+73) unstable; urgency=medium

  * Performance: use authdb ConnectionPool for is_authorized(), check_gpo(),
    repopulate_session() and list_groups — eliminates TCP connection setup
    per request.
  * Performance: session lookups now validate TTL inline, preventing stale
    sessions from being treated as valid.
  * Performance: add domain_label_index_ for O(1) find_domain_by_label()
    instead of linear scan across all stores.
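
    The side-index idea can be sketched as follows (types and names are
    illustrative, not the project's actual structures):

```cpp
#include <cassert>
#include <optional>
#include <string>
#include <unordered_map>

// Instead of scanning every store for a matching domain label, keep a
// hash index from label to domain id, updated whenever domains change.
struct DomainLabelIndex {
    std::unordered_map<std::string, int> by_label;  // label -> domain id

    void add(const std::string& label, int id) { by_label[label] = id; }

    std::optional<int> find(const std::string& label) const {
        auto it = by_label.find(label);
        if (it == by_label.end()) return std::nullopt;
        return it->second;   // O(1) average instead of an O(n) scan
    }
};
```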

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 24 Apr 2026 00:00:00 +0200

mediadb (20260423+72) unstable; urgency=medium

  * Preview: flush FFmpeg decoder after read loop so codecs that buffer
    the single frame (e.g. WebP VP8X with XMP/EXIF chunks) emit it.
    Fixes "could not decode a frame" for extended-format WebP images.
  * Track premature EOF in PrefetchBuffer: when cluster data is
    unavailable, report "cluster data timed out" so the error triggers
    503 + Retry-After instead of a permanent 422.

 -- Jan Koester <jan.koester@tuxist.de>  Thu, 23 Apr 2026 20:10:00 +0200

mediadb (20260423+71) unstable; urgency=medium

  * Read client pool: replace single read_client_ with a pool of 4
    independent paritypp clients for cluster reads. Concurrent preview
    and download requests now round-robin across pool members, eliminating
    serialization on the paritypp client mutex.
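
    The round-robin selection can be sketched like this (the client type is
    a stand-in; the real pool holds paritypp clients):

```cpp
#include <array>
#include <atomic>
#include <cassert>
#include <cstddef>

// A relaxed atomic counter spreads concurrent readers across N
// independent clients, so no single client mutex serializes requests.
template <typename Client, std::size_t N>
struct ClientPool {
    std::array<Client, N> clients;
    std::atomic<std::size_t> next{0};

    Client& pick() {
        // fetch_add is lock-free; modulo maps the counter onto the pool
        return clients[next.fetch_add(1, std::memory_order_relaxed) % N];
    }
};
```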

 -- Jan Koester <jan.koester@tuxist.de>  Thu, 23 Apr 2026 00:00:00 +0200

mediadb (20260423+70) unstable; urgency=medium

  * Cluster data consistency: tag media listings with availability flag
    (cluster_available) so the frontend can indicate when media data is
    not yet replicated to the cluster.
  * Increase cluster fetch/fetch_range deadline from 10s to 30s to avoid
    premature timeouts on congested networks.
  * Return HTTP 503 with Retry-After header instead of 422/500 when
    preview rendering or media data fetch fails due to cluster timeouts
    or queue pressure — signals clients to retry.
  * Frontend: throttle preview fetches to 6 concurrent requests, fixing
    NS_BINDING_ABORTED in Firefox. Add AbortController for cancellation
    and automatic 503 retry with 2s backoff.
  * Frontend: dim unavailable media items in admin table with warning
    indicator.

 -- Jan Koester <jan.koester@tuxist.de>  Thu, 23 Apr 2026 00:00:00 +0200

mediadb (20260423+69) unstable; urgency=high

  * Fix tombstone infinite loop: load_tombstones() now REPLACES the local
    tombstone set with the cluster version instead of merging. Previously,
    cleared tombstones were re-inserted from the cluster on every sync
    cycle, permanently deleting stores that a successful import had already
    restored.
  * After successful import, clear ALL tombstones (not just matching store
    IDs). A full import replaces the entire dataset — stale tombstones must
    not suppress newly imported stores on other nodes.
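
    The replace-versus-merge distinction can be sketched as (container
    choice and function shape are illustrative):

```cpp
#include <cassert>
#include <set>
#include <string>

// The cluster's tombstone set must replace the local set wholesale.
// Merging (insert) would resurrect tombstones another node had already
// cleared, re-deleting restored stores on every sync cycle.
void load_tombstones(std::set<std::string>& local,
                     const std::set<std::string>& cluster) {
    local = cluster;  // replace: the cluster copy is authoritative
    // NOT: local.insert(cluster.begin(), cluster.end());  // merge bug
}
```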

 -- Jan Koester <jan.koester@tuxist.de>  Thu, 23 Apr 2026 00:00:00 +0200

mediadb (20260423+68) unstable; urgency=high

  * Cluster health monitor: replace sentinel store+remove with filesystem
    probe to prevent blocks.bin pollution (thousands of dead records per
    day).
  * Cluster::scrub(): store data BEFORE removing old copies to prevent
    data loss when re-store fails after remove.
  * Win32 compatibility for health probe (fcntl.h).

 -- Jan Koester <jan.koester@tuxist.de>  Thu, 23 Apr 2026 00:00:00 +0200

mediadb (20260422+67) unstable; urgency=low

  * Catch netplus::NetException in HttpServer::run() to prevent
    std::terminate / SIGABRT when event loop construction fails
    (e.g. cluster not ready at startup).

 -- Jan Koester <jan.koester@tuxist.de>  Wed, 22 Apr 2026 00:00:00 +0200

mediadb (20260422+66) unstable; urgency=low

  * Fix segfault in PrefetchBuffer (use-after-free):
    - Add destructor that waits for all in-flight io_pool futures
      before PrefetchBuffer members are destroyed.
    - Change lambda captures from `this` to by-value copies of db,
      cache, and media_id so tasks remain safe if PrefetchBuffer
      is destroyed while they are still queued or running.
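
    Both halves of the fix can be sketched together (types are stand-ins
    for the real db/cache members):

```cpp
#include <cassert>
#include <future>
#include <memory>
#include <string>
#include <vector>

// Tasks capture shared_ptr copies instead of `this`, and the destructor
// joins every in-flight future before members are torn down.
struct Prefetcher {
    std::shared_ptr<std::string> cache = std::make_shared<std::string>();
    std::vector<std::future<void>> inflight;

    void schedule(const std::string& media_id) {
        auto cache_copy = cache;                // by value, not [this]
        inflight.push_back(std::async(std::launch::async,
            [cache_copy, media_id] {            // safe even if *this dies
                *cache_copy += media_id;
            }));
    }

    ~Prefetcher() {                             // drain before members die
        for (auto& f : inflight)
            if (f.valid()) f.wait();
    }
};
```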

 -- Jan Koester <jan.koester@tuxist.de>  Wed, 22 Apr 2026 00:00:00 +0200

mediadb (20260422+65) unstable; urgency=low

  * Fix HTTP timeouts on /raw and /preview endpoints:
    - Add 10 s deadline to Cluster::fetch() and Cluster::fetch_range()
      so warmup retries and pclient fallback are skipped once the
      deadline expires instead of blocking for 30+ s per client.
    - Reduce MAX_RANGE_CHUNK from 8 MB to 4 MB (= one paritypp stripe)
      to avoid multi-stripe fetches that compound retry delays.
    - Rate-limit on-demand sync_from_cluster() in get_media() and
      get_media_size() to at most once every 10 s, preventing repeated
      full-cluster syncs from blocking HTTP requests.

 -- Jan Koester <jan.koester@tuxist.de>  Wed, 22 Apr 2026 00:00:00 +0200

mediadb (20260422+64) unstable; urgency=low

  * Speed up preview generation (webp/jpeg):
    - Set probesize/max_analyze_duration before avformat_open_input so
      format detection reads less data over cluster I/O.
    - Use JPEG lowres decoding when target dimensions are much smaller
      than source (avoids full-resolution decode of large photos).
    - Tune WebP encoder: compression_level=0, quality=80, preset=photo
      for significantly faster encoding with minimal quality loss.

 -- Jan Koester <jan.koester@tuxist.de>  Wed, 22 Apr 2026 00:00:00 +0200

mediadb (20260422+63) unstable; urgency=low

  * Speed up raw media data delivery:
    - Increase range-request chunk cap from 2 MB to 8 MB, reducing HTTP
      round-trips for video streaming by 4×.
    - Prefetch 2 chunks ahead (16 MB) instead of 1 for sequential
      playback, so the next request hits the cache.
    - Increase ResponseEvent streaming fill from 1 MB to 4 MB per event
      cycle for faster socket throughput.

 -- Jan Koester <jan.koester@tuxist.de>  Wed, 22 Apr 2026 00:00:00 +0200

mediadb (20260422+62) unstable; urgency=low

  * Speed up preview generation:
    - Increase prefetch block size from 64 KB to 256 KB and look-ahead
      from 4 to 8 blocks (2 MB total), reducing cluster I/O stalls.
    - Limit avformat probesize to 1 MB / 2s to avoid slow format
      detection over cluster network.
    - Enable multi-threaded FFmpeg decoding (thread_count=auto,
      FF_THREAD_FRAME | FF_THREAD_SLICE).
    - Increase AVIF cpu-used from 6 to 8 and crf from 30 to 32 for
      faster still-image encoding at acceptable quality.

 -- Jan Koester <jan.koester@tuxist.de>  Wed, 22 Apr 2026 00:00:00 +0200

mediadb (20260422+61) unstable; urgency=medium

  * Fix stores not visible on other nodes after import: sync_from_cluster
    now retries index and store metadata fetches with warmup when the
    first attempt fails (stale QUIC connections after heavy replication).
  * Forward ImportSession::progress() through ImportGuardSession so
    cluster import progress is properly reported on the status page.

 -- Jan Koester <jan.koester@tuxist.de>  Wed, 22 Apr 2026 00:00:00 +0200

mediadb (20260422+60) unstable; urgency=low

  * cluster.html: auto-poll every 2s while import is running so
    progress updates live without manual refresh.

 -- Jan Koester <jan.koester@tuxist.de>  Wed, 22 Apr 2026 00:00:00 +0200

mediadb (20260422+59) unstable; urgency=low

  * Fix stuck import state: clean up orphaned H2 import sessions on
    client disconnect so import_running_ is correctly reset.
  * Show import status (In Progress / Idle) on cluster status page.
  * Fix false DEGRADED during import: health probe retries once before
    counting a peer as offline; suppress DEGRADED transition while
    import_active_ is set so busy peers are not marked as down.
  * Notify Cluster of import state via setImportRunning() so the health
    loop can tolerate transient probe failures during replication.
  * Fix cluster status bar: use health_loop assessment (isDegraded/
    isCritical) instead of raw pclient snapshot which could show
    DEGRADED while the health loop had already suppressed it.
  * Add detailed import progress: ImportSession::progress() reports
    phase (starting/parsing/importing/replicating/finalizing), bytes
    fed, media count, and store count.  Exposed via /api/import/status
    and /api/cluster/status import_progress object.
  * cluster.html: show import detail card with phase, MB transferred,
    media count and store progress during active imports.

 -- Jan Koester <jan.koester@tuxist.de>  Wed, 22 Apr 2026 00:00:00 +0200

mediadb (20260421+58) unstable; urgency=low

  * Fix admin.html: navigating to albums did not hide domain/ACL
    sections, causing multiple panels to overlap.

 -- Jan Koester <jan.koester@tuxist.de>  Tue, 21 Apr 2026 00:00:00 +0200

mediadb (20260421+57) unstable; urgency=medium

  * Rewrite cluster import: synchronous per-entry replication so each
    store is visible on all nodes immediately after it is written.
  * Add Cluster::reserve_block() for streaming media writes with
    automatic retry and connection warmup on the import client.
  * Use separate QUIC clients: import_client_ for media data,
    pclient_ for store metadata and index — avoids stale connections.

 -- Jan Koester <jan.koester@tuxist.de>  Tue, 21 Apr 2026 00:00:00 +0200

mediadb (20260421+56) unstable; urgency=high

  * Fix SEGV crash: warmup_read_clients() on every replicate attempt
    raced with health_loop using pclient_ concurrently, causing
    use-after-free on QUIC connection internals. Now warmup only on
    retry, not on the first attempt.
  * Fix export writing wrong size: write actual fetched data size
    instead of padding/truncating to m.size_bytes — prevents corrupt
    zero-filled media in exports from degraded clusters.

 -- Jan Koester <jan.koester@tuxist.de>  Tue, 21 Apr 2026 00:00:00 +0200

mediadb (20260421+55) unstable; urgency=high

  * Fix cluster import not replicating stores to all nodes: warmup
    pclient/import_client QUIC connections before every replicate
    attempt — connections go stale during long streaming imports
    (QUIC idle timeout), causing silent replication failures.
  * Fix corrupted image/video files in export: export_db_to_buffer_impl
    now writes exactly size_bytes per media entry, truncating or padding
    when fetched data length differs from the recorded size (e.g. on a
    degraded cluster), preventing binary format corruption.

 -- Jan Koester <jan.koester@tuxist.de>  Tue, 21 Apr 2026 00:00:00 +0200

mediadb (20260420+54) unstable; urgency=high

  * Rebuild against libparitypp 20260420+4: sequential store_stripe sends
    instead of parallel std::async threads. Fixes OOM kills and SEGV
    crashes on memory-constrained cluster nodes during bulk import.

 -- Jan Koester <jan.koester@tuxist.de>  Mon, 20 Apr 2026 00:00:00 +0200

mediadb (20260420+53) unstable; urgency=high

  * CRITICAL: Fix apply_tombstones() deleting store metadata from the
    entire cluster instead of only locally. Every sync cycle on every
    node would call cluster_.remove("store:...") for tombstoned stores,
    causing a race condition where re-imported stores get deleted by
    other nodes before the cleared tombstone propagates. Only the
    originating delete_store() call should remove cluster blobs.

 -- Jan Koester <jan.koester@tuxist.de>  Mon, 20 Apr 2026 00:00:00 +0200

mediadb (20260420+52) unstable; urgency=medium

  * Add Cluster::rebalance() method for on-demand rebalancing.
  * Trigger automatic rebalance after import completes (both streaming
    and buffer import paths) to fix under-replicated groups from partial
    node failures during import.
  * Warmup read_client_ in addition to import_client_ after streaming
    import completes.

 -- Jan Koester <jan.koester@tuxist.de>  Mon, 20 Apr 2026 00:00:00 +0200

mediadb (20260420+51) unstable; urgency=medium

  * Fix fetch_range/fetch: skip warmup+retry for data-level errors
    (e.g. "stripe not found") — only retry on NetException (connection
    errors). Prevents 120s hang when stripes are missing.
  * Fix add_media: check cluster_.replicate() return value. Retry once
    after warmup on failure, rollback media record if replication still
    fails. Prevents silent data loss where metadata exists but binary
    data was never stored in the cluster.

 -- Jan Koester <jan.koester@tuxist.de>  Mon, 20 Apr 2026 00:00:00 +0200

mediadb (20260420+50) unstable; urgency=medium

  * Warmup read_client_ and pclient_ before each sync_from_cluster()
    cycle to prevent QUIC idle-timeout failures.
  * Log actual exception messages in fetch() retry catch blocks instead
    of silently swallowing errors.

 -- Jan Koester <jan.koester@tuxist.de>  Mon, 20 Apr 2026 00:00:00 +0200

mediadb (20260420+49) unstable; urgency=medium

  * Route store metadata and index replication through pclient_ (kept
    alive by health loop) instead of import_client_ which goes stale
    during long streaming imports. import_client_ now only used for
    media streaming.
  * Warmup all clients before sync_from_cluster after import completes.

 -- Jan Koester <jan.koester@tuxist.de>  Mon, 20 Apr 2026 00:00:00 +0200

mediadb (20260420+48) unstable; urgency=medium

  * Warmup import_client_ connections before each retry in replicate_fn
    and before each begin_replicate_import() streaming session start.
    Fixes store metadata and index replication failing after long imports
    due to stale QUIC connections.

 -- Jan Koester <jan.koester@tuxist.de>  Mon, 20 Apr 2026 00:00:00 +0200

mediadb (20260420+47) unstable; urgency=medium

  * Accumulate media data alongside streaming for fallback on finalize
    failure: on stripe timeout, warmup import_client_ connections and
    retry via one-shot replicate_import() with buffered data.
  * Add Cluster::warmup_import_client() to re-establish stale QUIC
    connections on the dedicated import client.

 -- Jan Koester <jan.koester@tuxist.de>  Mon, 20 Apr 2026 00:00:00 +0200

mediadb (20260420+46) unstable; urgency=medium

  * Route all import replication through import_client_ via new
    Cluster::replicate_import() — avoids contention with pclient_.
  * ReplicateFn now returns bool; deferred ops propagate errors back
    to the import session via fail(), aborting on replication failure.

 -- Jan Koester <jan.koester@tuxist.de>  Mon, 20 Apr 2026 00:00:00 +0200

mediadb (20260420+45) unstable; urgency=medium

  * Wait for replication per entry during import: restructure feed() to
    drain deferred ops (finalize/replicate) after each media item and
    store metadata, ensuring full cluster replication before proceeding.
  * Add retry with exponential backoff (3 attempts) to import replicate_fn.
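
    The retry policy can be sketched like this (the helper name is
    hypothetical; the 3-attempt budget matches the entry, the base delay
    is illustrative):

```cpp
#include <cassert>
#include <chrono>
#include <functional>
#include <thread>

// Runs `op` up to `attempts` times, sleeping base, 2*base, 4*base, ...
// between failures. Returns true if any attempt succeeded.
bool retry_with_backoff(const std::function<bool()>& op,
                        int attempts = 3,
                        std::chrono::milliseconds base =
                            std::chrono::milliseconds(100)) {
    for (int i = 0; i < attempts; ++i) {
        if (op()) return true;
        if (i + 1 < attempts)
            std::this_thread::sleep_for(base * (1 << i));  // exponential
    }
    return false;
}
```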

 -- Jan Koester <jan.koester@tuxist.de>  Mon, 20 Apr 2026 00:00:00 +0200

mediadb (20260420+44) unstable; urgency=medium

  * Stream import media data entry-by-entry directly to cluster via
    paritypp::store_session instead of accumulating full blobs in RAM.
    Uses begin_replicate / ReplicateSession for per-chunk streaming.
  * Add dedicated import_client_ (4th paritypp::client) so long-running
    imports do not block pclient_ used for normal writes.
  * Add Cluster::begin_replicate_import() using import_client_.

 -- Jan Koester <jan.koester@tuxist.de>  Mon, 20 Apr 2026 00:00:00 +0200

mediadb (20260419+43) unstable; urgency=critical

  * Fix streaming import: clear tombstones after import completes,
    then trigger sync_from_cluster() to reload RAM state from cluster.
    Previously, old tombstones would immediately delete freshly imported
    stores on the next sync cycle.
  * Add comprehensive import logging for diagnostics.

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 19 Apr 2026 00:00:00 +0200

mediadb (20260419+40) unstable; urgency=critical

  * Orphan cleanup in vacuum: vacuum_all_stores now collects all group IDs
    from the cluster, compares against known keys (index, tombstones,
    store:*, media:*), and removes orphan groups that are no longer
    referenced. This cleans up leftover data from failed imports or
    deleted stores that weren't properly cleaned up.
  * Add /api/cluster/vacuum endpoint and Vacuum button on cluster page.

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 19 Apr 2026 00:00:00 +0200

mediadb (20260419+39) unstable; urgency=critical

  * Fix root cause of missing stores/index after import: replicate_index()
    had an initial_sync_ok_ guard that silently dropped index replication
    whenever a node hadn't completed sync_from_cluster first. This meant
    create_store, add_media, and all CRUD operations never pushed the index
    to the cluster on a fresh node or after restart. Removed the guard;
    the empty-index check (local_count == 0) is sufficient protection.
  * Fix import hang: removed the post-import verification phase that fetched
    every media blob from the cluster under exclusive lock.
  * Fix stuck importing_ flag: RAII guard ensures it's always cleared.
  * Add cluster_import_roundtrip unit test that verifies the full
    export → import → fetch → sync_from_cluster cycle.
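
    The always-cleared importing_ flag can be sketched as a small RAII
    guard (names illustrative, not the project's):

```cpp
#include <atomic>
#include <cassert>

// The flag is set on construction and cleared on every exit path,
// including exceptions thrown mid-import.
struct ImportingGuard {
    std::atomic<bool>& flag;
    explicit ImportingGuard(std::atomic<bool>& f) : flag(f) { flag = true; }
    ~ImportingGuard() { flag = false; }   // runs on return AND on throw
    ImportingGuard(const ImportingGuard&) = delete;
    ImportingGuard& operator=(const ImportingGuard&) = delete;
};
```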

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 19 Apr 2026 00:00:00 +0200

mediadb (20260419+38) unstable; urgency=critical

  * Fix import + tombstone conflict: importing stores that were previously
    deleted caused tombstones to immediately re-delete them on the next sync.
    Import now clears tombstones for all imported store IDs and replicates the
    cleared tombstone list to the cluster. Also fixed replicate_tombstones()
    to push even when the set is empty, so cleared tombstones actually
    propagate to other nodes.

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 19 Apr 2026 00:00:00 +0200

mediadb (20260419+37) unstable; urgency=critical

  * Prevent stale index overwrite: replicate_index() now compares local
    store count against cluster store count and refuses to push a smaller
    index. This prevents nodes that haven't synced yet from overwriting
    a freshly imported index with their empty/stale version. delete_store
    uses force=true to bypass the guard when a store was intentionally
    removed. repair_replication only re-replicates the index if the
    local node actually has stores.

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 19 Apr 2026 00:00:00 +0200

mediadb (20260419+36) unstable; urgency=high

  * Streaming cluster import: each media blob is now written directly to
    the cluster as it is parsed from the import stream, instead of buffering
    all blobs in RAM first. Drastically reduces memory usage during import.
    Post-import verification ensures all keys are fully replicated.

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 19 Apr 2026 00:00:00 +0200

mediadb (20260419+35) unstable; urgency=high

  * Import verification: after replicating all blobs to the cluster, a new
    Phase 3 checks shard counts for every key (index, store metadata, media
    blobs). Under-replicated keys are re-replicated immediately, ensuring
    all nodes can serve imported media right away instead of waiting for the
    periodic repair cycle.

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 19 Apr 2026 00:00:00 +0200

mediadb (20260419+34) unstable; urgency=high

  * Fix index replication deadlock: repair_replication() no longer
    requires initial_sync_ok_ to re-replicate the index. Previously, if
    the index was only on 1/3 nodes after restart, all nodes were stuck:
    sync_from_cluster could not fetch the index, and repair_replication
    refused to run because initial_sync_ok_ was false. Now the index is
    always repaired when under-replicated, which unblocks the other
    nodes' sync.

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 19 Apr 2026 00:00:00 +0200

mediadb (20260419+33) unstable; urgency=medium

  * Fix startup hang: start_sync now has a 30-second total time budget.
    If the initial cluster sync does not complete in time (e.g. due to
    slow network or stripe retrieval retries in libparitypp), the server
    starts anyway and the background sync_loop continues retrying.

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 19 Apr 2026 00:00:00 +0200

mediadb (20260419+32) unstable; urgency=medium

  * Cluster-wide store deletion via tombstones: deleting a store on one
    node now replicates a tombstone record so all other nodes remove the
    store on their next sync cycle. Previously stores reappeared after
    being deleted on only one node.

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 19 Apr 2026 00:00:00 +0200

mediadb (20260419+31) unstable; urgency=critical

  * Fix critical data corruption during concurrent import + periodic
    repair/sync: `repair_replication()`, `sync_from_cluster()`, scrub,
    and rebalance are now blocked while an import is in progress.
    Previously these could push half-imported index/store metadata to the
    cluster, overwriting valid data with garbage.

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 19 Apr 2026 17:30:00 +0200

mediadb (20260419+30) unstable; urgency=medium

  * Add Repair Index button to cluster.html UI.

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 19 Apr 2026 17:15:00 +0200

mediadb (20260419+29) unstable; urgency=medium

  * Add `POST /api/cluster/repair` endpoint: scans all group IDs across
    cluster peers, identifies orphaned store metadata blobs by magic
    bytes, re-fetches and reloads all store metadata, and re-replicates
    the repaired index. Recovers stores that disappeared due to index
    corruption or failed syncs.

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 19 Apr 2026 16:45:00 +0200

mediadb (20260419+28) unstable; urgency=critical

  * Fix cluster startup deadlock: initial_sync_ok_ was only set on empty
    clusters but never after a successful index fetch, causing all nodes
    with real data to remain permanently locked in "syncing" state (5/5
    incomplete). Now properly set after load_index_from_buffer succeeds.

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 19 Apr 2026 16:15:00 +0200

mediadb (20260419+27) unstable; urgency=critical

  * Fix catastrophic cluster data loss ("stores disappeared") caused by
    nodes pushing an empty local memory state to the global index when
    network fetches timed out. Replication loops (`repair_replication`
    and edits) are now forcefully blocked (HTTP 503 /
    `std::runtime_error`) if a node has not yet successfully completed an
    initial sync (`initial_sync_ok_`).

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 19 Apr 2026 15:30:00 +0200

mediadb (20260419+26) unstable; urgency=critical

  * Fix preview std::future_error caused by reading an already-consumed
    future: resolved data is now stored locally in the asynchronous
    `PendingBlock` struct, so fragmented FFmpeg chunk reads no longer
    touch the expired future a second time.

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 19 Apr 2026 15:00:00 +0200

mediadb (20260419+25) unstable; urgency=critical

  * Fix preview render queue lock-up: an exception thrown during FFmpeg
    initialization unwound the stack past the manual release of
    `inflight_mutex_`, permanently deadlocking duplicate preview requests
    with a "timeout waiting for in-flight preview" error.
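
    The exception-safe locking pattern can be sketched as follows (the
    function and the simulated failure are illustrative):

```cpp
#include <cassert>
#include <mutex>
#include <stdexcept>

std::mutex inflight_mutex_;

// Holding the mutex through an RAII lock_guard means an exception thrown
// mid-render still releases the lock during stack unwinding, so the next
// request for the same preview does not deadlock.
void render_preview(bool fail) {
    std::lock_guard<std::mutex> lk(inflight_mutex_);  // released on throw
    if (fail)
        throw std::runtime_error("ffmpeg init failed");
}
```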

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 19 Apr 2026 14:00:00 +0200

mediadb (20260419+24) unstable; urgency=critical

  * Fix broken cluster fetch and index replication on startup: add a
    connection re-warmup fallback to the exception paths of `fetch()` and
    `fetch_range()` in the Cluster object, recovering connections lost to
    unexpected timeouts or forks ("stripe 0 not found", "index fetch
    failed").

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 19 Apr 2026 13:00:00 +0200

mediadb (20260419+23) unstable; urgency=critical

  * Fix cluster media import: always upload media data blobs
    ("media:<id>") to the cluster layer during .mdb imports, even if the
    node already knows the file from its index. Re-importing an .mdb
    archive now acts as a self-healing repair for nodes with missing or
    broken stripes, which previously skipped the upload and broke
    preview/playback.

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 19 Apr 2026 12:45:00 +0200

mediadb (20260419+22) unstable; urgency=critical

  * Fix cluster media import data loss/corruption: the import_db_from_buffer
    sync loop in ClusterMediaBackend now halts replication immediately
    (after up to 3 retries) if a media chunk fails to replicate over the
    network. Previously it silently ignored failed chunks and continued
    committing store:id and index blobs, leaving orphan keys on peer
    nodes (data not intact on all nodes).

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 19 Apr 2026 12:30:00 +0200

mediadb (20260419+21) unstable; urgency=low

  * Fix delete_store in ClusterMediaBackend to properly delete all
    associated media blobs from cluster storage.

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 19 Apr 2026 12:15:00 +0200

mediadb (20260419+20) unstable; urgency=low

  * Make MDB import synchronous without background thread.

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 19 Apr 2026 12:00:00 +0200

mediadb (20260419+19) unstable; urgency=low

  * Add cluster-wide Scrub & Rebalance status to admin UI:
    - Dedicated "Scrub & Rebalance" section with current status (Running/Idle),
      last scrub results (checked/repaired/failed + timestamp), and last
      rebalance results (rebalanced/already_ok/failed + timestamp).
    - Per-node scrub column in Node Health table showing Running/Idle.
    - Results persisted in Cluster object and exposed via /api/cluster/status
      (last_scrub, last_rebalance JSON objects).

 -- Jan Koester <jan.koester@tuxist.de>  Mon, 20 Apr 2026 00:00:00 +0200

mediadb (20260419+18) unstable; urgency=critical

  * Fix empty stores after restart: warmup() was called in init() (pre-fork),
    leaving QUIC connections dead in the child process. retrieve() silently
    returned empty data. warmup() now runs in start() (post-fork).
    fetch() and fetch_range() also fall back to pclient_ if read_client_
    fails, adding resilience against stale connections.

 -- Jan Koester <jan.koester@tuxist.de>  Mon, 20 Apr 2026 00:00:00 +0200

mediadb (20260419+17) unstable; urgency=high

  * Improve cluster error logging: Cluster::fetch and fetch_range now
    always log exceptions to stderr (was DBG_LOG only, invisible in
    release). sync_from_cluster distinguishes fetch failure from empty
    data. This reveals the real error when index sync fails at startup.

 -- Jan Koester <jan.koester@tuxist.de>  Mon, 20 Apr 2026 00:00:00 +0200

mediadb (20260419+16) unstable; urgency=critical

  * Fix preview endpoint hanging all HTTP threads: FFmpeg rendering now
    runs on io_pool_ with a 60s timeout instead of inline on the HTTP
    worker thread. A stuck FFmpeg render no longer blocks HTTP threads
    indefinitely, preventing the server from becoming unresponsive.

 -- Jan Koester <jan.koester@tuxist.de>  Mon, 20 Apr 2026 00:00:00 +0200

mediadb (20260419+15) unstable; urgency=critical

  * Fix import_db not replicating media data to cluster: imported pictures
    were only available on the importing node. Now delegates to
    import_db_from_buffer which replicates media:<id> keys correctly.

 -- Jan Koester <jan.koester@tuxist.de>  Mon, 20 Apr 2026 00:00:00 +0200

mediadb (20260419+14) unstable; urgency=high

  * Fix empty albums after rebuild: list_albums now triggers lazy sync
    when local result is empty (same pattern as get_store)
  * Improve start_sync retry: wait until albums are loaded, not just stores
  * Log store/album counts on initial sync success

 -- Jan Koester <jan.koester@tuxist.de>  Mon, 20 Apr 2026 00:00:00 +0200

mediadb (20260419+13) unstable; urgency=high

  * Fix empty stores after restart: retry initial cluster sync up to
    5 times with 2 s delay when index comes back empty
  * Add error logging to sync_from_cluster for index and store fetches
  * Catch exceptions in cluster fetch to prevent silent sync failures

 -- Jan Koester <jan.koester@tuxist.de>  Mon, 20 Apr 2026 00:00:00 +0200

mediadb (20260419+12) unstable; urgency=medium

  * Replace render thread pool with counting semaphore: FFmpeg now runs
    inline on the HTTP worker thread, eliminating context-switch overhead
  * Align PrefetchBuffer block size to libnetplus BLOCKSIZE (64 KB)

 -- Jan Koester <jan.koester@tuxist.de>  Mon, 20 Apr 2026 00:00:00 +0200

mediadb (20260419+11) unstable; urgency=high

  * Fix preview rendering blocking HTTP workers indefinitely: add 60 s
    timeout on render future and 30 s timeout on inflight-wait
  * Fix use-after-free risk: render lambda now captures by value so
    timed-out tasks cannot reference freed stack frames
  * Clean up inflight set on timeout so subsequent requests can retry

 -- Jan Koester <jan.koester@tuxist.de>  Mon, 20 Apr 2026 00:00:00 +0200

mediadb (20260419+10) unstable; urgency=medium

  * Fix non-Range full-file fetch blocking all HTTP workers: assemble
    response from cached 2 MB range chunks instead of single cluster fetch
  * Reduce non-Range size cap from 200 MB to 50 MB

 -- Jan Koester <jan.koester@tuxist.de>  Mon, 20 Apr 2026 00:00:00 +0200

mediadb (20260419+9) unstable; urgency=medium

  * Fix truncated images/videos: non-Range requests now serve the full file
    (up to 200 MB) instead of only the first 2 MB chunk
  * Remove redundant cache put in prefetch (cache_fetch_range caches internally)
  * Add prefetch queue depth limit (2× pool size) to prevent unbounded
    accumulation during heavy concurrent streaming

 -- Jan Koester <jan.koester@tuxist.de>  Mon, 20 Apr 2026 00:00:00 +0200

mediadb (20260419+8) unstable; urgency=medium

  * Scale io_pool_ dynamically: max(4, min(16, hardware_concurrency))
  * Add speculative prefetch for video Range streaming: next 2 MB chunk
    is fetched asynchronously into BlobCache via bounded io_pool_

 -- Jan Koester <jan.koester@tuxist.de>  Mon, 20 Apr 2026 00:00:00 +0200

mediadb (20260419+7) unstable; urgency=high

  * Replace unbounded std::async with dedicated bounded io_pool_ (4 threads)
    to prevent DoS via thread exhaustion on concurrent preview requests

 -- Jan Koester <jan.koester@tuxist.de>  Mon, 20 Apr 2026 00:00:00 +0200

mediadb (20260419+6) unstable; urgency=high

  * Fix thread-pool deadlock: prefetch I/O now uses std::async instead
    of the render pool, preventing starvation when all render threads
    are busy

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 19 Apr 2026 23:45:00 +0200

mediadb (20260419+5) unstable; urgency=medium

  * Prefetch blocks cached in shared BlobCache (BlobType::prefetch)
  * Repeated preview renders of the same media reuse cached I/O blocks

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 19 Apr 2026 23:30:00 +0200

mediadb (20260419+4) unstable; urgency=medium

  * Dedicated ThreadPool for FFmpeg preview rendering (configurable via
    /MEDIADB/PREVIEW/THREADS, default: hardware_concurrency)
  * Async prefetch buffer for streaming I/O: 256 KB blocks with 4-block
    look-ahead overlaps cluster fetches with FFmpeg decode

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 19 Apr 2026 23:00:00 +0200

mediadb (20260419+3) unstable; urgency=medium

  * Unified BlobCache: single LRU cache for both preview and data blocks
  * Added BlobType enum (data/preview) to BlobValue
  * Removed separate PreviewCache and DataCache types
  * Single config key /MEDIADB/CACHE/SIZE_MB (default 256 MB)

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 19 Apr 2026 22:00:00 +0200

mediadb (20260419+2) unstable; urgency=medium

  * Reduce redundant get_media/sync_from_cluster calls in raw handler
  * Large non-range GET serves first 2 MB instead of loading entire file
  * 128 MB LRU data cache for cluster range fetches (avoids repeated
    network round-trips on video seeking)

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 19 Apr 2026 14:00:00 +0200

mediadb (20260419+1) unstable; urgency=medium

  * Fix video streaming in clustered mode: cap open-ended HTTP range
    requests to 2 MB chunks to prevent timeout on large cluster fetches
  * Fix get_media_size to sync metadata from cluster when not found locally

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 19 Apr 2026 12:00:00 +0200

mediadb (20260418+4) unstable; urgency=medium

  * Cluster: waitForPeers before start_sync to prevent empty data on startup
  * load_store_metadata_from_buffer: merge-only (never delete existing
    albums/media), prevents stale cluster data from overwriting imports

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 18 Apr 2026 23:30:00 +0200

mediadb (20260418+3) unstable; urgency=medium

  * Cluster: vacuum_all_nodes now acquires scrub lock for cluster-wide
    synchronisation, preventing concurrent vacuum/scrub/rebalance

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 18 Apr 2026 23:00:00 +0200

mediadb (20260418+2) unstable; urgency=medium

  * Cluster: warmup all clients (pclient_, read_client_, scrub_client_)
    on init to eliminate first-request QUIC handshake latency
  * Cluster perf test: ClusterMediaBackend CRUD and preview benchmarks

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 18 Apr 2026 22:00:00 +0200

mediadb (20260418+1) unstable; urgency=medium

  * Cluster: dedicated scrub_client_ and read_client_ to avoid mutex
    contention on pclient_ during long scrub/rebalance operations
  * Cluster: set server rebuilding flag during scrub so status is
    propagated to all querying nodes via protocol
  * API: per-node scrub_running in cluster status endpoint

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 18 Apr 2026 00:00:00 +0200

mediadb (20260414+88) unstable; urgency=medium

  * Fix PreviewCache LRU: get() now promotes entries to front of LRU list
  * Add thundering-herd protection: deduplicate concurrent preview renders
    for the same key via inflight set + condition variable
  * Streaming video preview: video clip formats (mp4/av1/h265) now use
    range-based AVIO reads instead of loading the full file into RAM
  * Optimize transpose_frame: 32x32 tiling for 90°/270° rotations,
    row-level reversal for 180° rotation

 -- Jan Koester <jan.koester@tuxist.de>  Tue, 14 Apr 2026 21:00:00 +0200

mediadb (20260412+87) unstable; urgency=medium

  * Rebuild against libparitypp 20260412+43 (warmup API, 5s deadline)

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 12 Apr 2026 23:00:00 +0200

mediadb (20260412+86) unstable; urgency=medium

  * Use libparitypp distributed scrub lock to prevent concurrent
    scrub/rebalance across cluster nodes
  * Remove local lock implementation in favor of libparitypp API
  * Skip scrub lock group in group listing

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 12 Apr 2026 18:00:00 +0200

mediadb (20260412+85) unstable; urgency=medium

  * Add scrub_mutex_: scrub and rebalance cannot run concurrently
  * Add scrub_running_ status to cluster status API and UI
  * Scrub uses remove() + store() to fix both missing and misplaced blocks
  * Add periodic rebalance every ~5min while all nodes are healthy
  * Add /api/cluster/rebalance endpoint
  * Add Scrub and Rebalance buttons to cluster.html dashboard

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 12 Apr 2026 16:00:00 +0200

mediadb (20260411+84) unstable; urgency=medium

  * Rebuild against libparitypp 20260411+37 (retrieve mutex contention fix)

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 11 Apr 2026 18:00:00 +0200

mediadb (20260411+83) unstable; urgency=medium

  * Fix import_from_stream losing media data in memory-only mode
  * Fix incremental import session dropping media data in memory-only mode
  * Fix set_album_public + reload losing album-to-media linkage in MDS3
  * Add unit test suite (46 tests) for BinDb data integrity and memory safety

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 11 Apr 2026 00:00:00 +0200

mediadb (20260410+82) unstable; urgency=medium

  * Rebuild against libhttppp 20260410+7 (PEM/DER/P12 auto-detection, password support)

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 10 Apr 2026 00:00:00 +0200

mediadb (20260410+81) unstable; urgency=medium

  * Add cluster scrub method for verifying and repairing under-replicated groups
  * Add degraded/critical state tracking to health monitor
  * Add local store writability check in health loop
  * Auto-scrub on cluster recovery from degraded state
  * Add POST /api/cluster/scrub endpoint
  * Faster health polling when degraded (5s vs 30s)
  * Fix cluster store path permissions: chown before privilege drop

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 10 Apr 2026 00:00:00 +0200

mediadb (20260409+80) unstable; urgency=medium

  * Add startup vacuum for file_block_store on cluster init
  * Add health monitoring thread with 30s peer status updates
  * Add debug logging for cluster fetch/replicate failures
  * Initial peer status reports 0 online until health check runs

 -- Jan Koester <jan.koester@tuxist.de>  Thu, 09 Apr 2026 00:00:00 +0200

mediadb (20260409+79) unstable; urgency=medium

  * Rebuild against libnetplus 20260409+14 (BLOCKSIZE 65536)

 -- Jan Koester <jan.koester@tuxist.de>  Thu, 09 Apr 2026 00:00:00 +0200

mediadb (20260409+78) unstable; urgency=medium

  * Fix slowdown: session map now has 30min TTL and eviction at 10k entries
  * Fix preview cache contention: use shared_mutex for concurrent reads
    and shared_ptr to avoid copying preview bytes on every cache hit
  * Fix image corruption in cache: detect partial ifstream reads in
    read_media_data_range and return empty instead of zero-padded buffer

 -- Jan Koester <jan.koester@tuxist.de>  Thu, 09 Apr 2026 00:00:00 +0200

mediadb (20260408+77) unstable; urgency=medium

  * New release

 -- Jan Koester <jan.koester@tuxist.de>  Wed, 08 Apr 2026 00:00:00 +0200

mediadb (20260407+76) unstable; urgency=medium

  * Fix local storage (BinDb): close ofstream before calling mmap_store()
    in append_media_locked and append_album_locked — the open stream
    prevented MAP_PRIVATE from seeing the newly written data, causing
    uploaded images to be returned as corrupt/empty

 -- Jan Koester <jan.koester@tuxist.de>  Tue, 07 Apr 2026 00:00:00 +0200

mediadb (20260407+75) unstable; urgency=medium

  * Fix cluster replication repair: repair_replication() now also checks
    and re-replicates individual "media:" keys, not just store metadata
  * Activate repair_replication() in periodic sync loop (every ~30s)
  * Add BinDb::media_ids() helper for iterating all media keys

 -- Jan Koester <jan.koester@tuxist.de>  Tue, 07 Apr 2026 00:00:00 +0200

mediadb (20260407+74) unstable; urgency=medium

  * Fix import timeout: pause periodic sync_from_cluster during import to
    prevent sync from wiping in-progress import data and competing for
    cluster I/O

 -- Jan Koester <jan.koester@tuxist.de>  Tue, 07 Apr 2026 00:00:00 +0200

mediadb (20260407+73) unstable; urgency=medium

  * Fix cluster sync: load_store_metadata_from_buffer no longer seeks past
    non-existent inline media data, which caused all albums/media after the
    first to be lost on synced nodes

 -- Jan Koester <jan.koester@tuxist.de>  Tue, 07 Apr 2026 00:00:00 +0200

mediadb (20260407+72) unstable; urgency=medium

  * Cluster mode: re-enable periodic sync loop (every 5s) so all nodes
    stay up to date with index and store metadata
  * Add on-miss cluster sync for get_store, get_album, get_media and
    is_media_public — cache miss triggers immediate sync_from_cluster
  * read_media_data / read_media_data_range always fetch from cluster
    when local pending_data is empty, no early abort on metadata miss

 -- Jan Koester <jan.koester@tuxist.de>  Tue, 07 Apr 2026 00:00:00 +0200

mediadb (20260407+71) unstable; urgency=medium

  * Fix H1 streaming import 100% CPU: sleep 1ms when no data available
    instead of busy-looping in RequestEvent
  * Fix import status reporting "done" while cluster replication still
    running: defer done_/ok_ until all deferred ops have executed

 -- Jan Koester <jan.koester@tuxist.de>  Tue, 07 Apr 2026 00:00:00 +0200

mediadb (20260407+70) unstable; urgency=medium

  * Fix streaming import: store media binary data as separate "media:<id>"
    cluster keys instead of inlining into "store:" buffer, so read_media_data
    can fetch them back

 -- Jan Koester <jan.koester@tuxist.de>  Tue, 07 Apr 2026 00:00:00 +0200

mediadb (20260407+69) unstable; urgency=medium

  * New release

 -- Jan Koester <jan.koester@tuxist.de>  Tue, 07 Apr 2026 00:00:00 +0200

mediadb (20260405+68) unstable; urgency=medium

  * Reduce cluster lock timeouts: metadata/remove 5s, fetch 10s,
    replicate/vacuum 30s (was 30-120s)
  * Add -O2 -DNDEBUG Release build flags

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 05 Apr 2026 00:00:00 +0000

mediadb (20260405+67) unstable; urgency=medium

  * Fix import retry phase: keep original buffers for failed media:/store:
    keys instead of re-serializing with wrong format
  * Use save_store_metadata_to_buffer (not save_store_to_buffer) in retries

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 05 Apr 2026 00:00:00 +0000

mediadb (20260405+66) unstable; urgency=medium

  * Per-media cluster keys: store each media as its own key (media:<id>)
    instead of inlining data in store blobs
  * Store keys (store:<id>) now contain metadata only (no media data)
  * read_media_data_range uses Cluster::fetch_range for partial retrieval
    (supports large files without fetching full blob)
  * import_from_stream, add_media, delete_media, replicate_store, and
    export updated for per-media key scheme

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 05 Apr 2026 00:00:00 +0000

mediadb (20260405+65) unstable; urgency=medium

  * Remove background sync thread: start_sync() loads from cluster once at
    startup, no periodic overwrites of imported data
  * Cluster-only storage: read_media_data fetches store blob from cluster
    on demand instead of keeping all media in RAM
  * sync_from_cluster loads only metadata (skips media data) to save RAM
  * Gate all debug logging behind NDEBUG macro

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 05 Apr 2026 00:00:00 +0000

mediadb (20260405+64) unstable; urgency=medium

  * Add diagnostic stderr logging to import pipeline for debugging hangs:
    import_from_stream (store/album/media progress), ClusterMediaBackend
    (phase tracking), Cluster::replicate (lock/store/exception details)

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 05 Apr 2026 00:00:00 +0000

mediadb (20260405+63) unstable; urgency=medium

  * Fix albums disappearing after cluster import: keep cluster_op_mutex_
    held during replication to prevent sync_from_cluster from overwriting
    newly imported data with stale cluster state
  * Still avoid BinDb::mutex_ during network I/O by collecting replication
    buffers first, then sending after import_from_stream releases mutex_

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 05 Apr 2026 00:00:00 +0000

mediadb (20260405+62) unstable; urgency=medium

  * Fix cluster import blocking all reads: collect replication buffers under
    lock, then replicate outside cluster_op_mutex_ so API stays responsive
  * Fix partial import data loss: save and replicate index after each store
    (not just at the end) for crash consistency

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 05 Apr 2026 00:00:00 +0000

mediadb (20260405+61) unstable; urgency=medium

  * Use paritypp VACUUM protocol command to compact block stores on all
    cluster nodes (not just local) after vacuum

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 05 Apr 2026 00:00:00 +0000

mediadb (20260405+60) unstable; urgency=medium

  * Fix cluster vacuum: re-replicate compacted store files and index to
    cluster after local vacuum so parity blocks actually shrink
  * Vacuum the local file_block_store (blocks.bin) after re-replication
    to reclaim space from tombstoned records

 -- Jan Koester <jan.koester@tuxist.de>  Sun, 05 Apr 2026 00:00:00 +0000

mediadb (20260404+59) unstable; urgency=medium

  * Defer all cluster operations (replicate, lock acquire/release) outside
    db_.mutex_ to prevent import from blocking the entire database
  * Fine-grained locking in sync_from_cluster: hold cluster_op_mutex_ only
    during load, not during network fetch

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 00:00:00 +0000

mediadb (20260404+58) unstable; urgency=medium

  * Store all media data inline in per-store cluster blobs instead of
    separate per-media blobs (reduces cluster operations from N to 1 per store)
  * Remove replicate_media / fetch_media_from_cluster methods
  * read_media_data / read_media_data_range now delegate to local backend
  * Simplify import: no per-media begin_replicate callback needed

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 00:00:00 +0000

mediadb (20260404+57) unstable; urgency=medium

  * Hold cluster client lock per store instead of per media during import
    (eliminates hundreds of lock acquire/release cycles per store)
  * Add acquire_client_lock / replicate_unlocked / begin_replicate_unlocked
    to Cluster for caller-managed locking
  * Add store lifecycle callbacks (on_store_begin/on_store_end) to import
  * Add error_message() to ReplicateSession interface

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 00:00:00 +0000

mediadb (20260404+56) unstable; urgency=medium

  * Include underlying store_session error detail in cluster replication
    failure messages (stripe index, node failure count)
  * Delegate error_message() through LockedSession wrapper in cluster.cpp
  * Silence warn_unused_result warnings (freopen, chown, pwrite)

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 00:00:00 +0000

mediadb (20260404+55) unstable; urgency=medium

  * Fix use-after-move in load_store / load_store_from_buffer: album ID
    was read after std::move, causing phantom empty albums on restart/sync
  * Fix same use-after-move in MDS1/MDS2 legacy reader and add_media_metadata
  * Chown data directory to run_user before dropping privileges
  * Add error_message() to ImportSession; report write errors to client
  * Check store file open/write errors during import
  * Check cluster replication errors: begin_replicate, feed, finalize
    now abort import with error instead of silently losing media data
  * Defer media record visibility until all data is written (race fix)

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 00:00:00 +0000

mediadb (20260404+53) unstable; urgency=medium

  * Fix race: defer media record visibility until all data is written.
    Prevents concurrent readers from seeing media with incomplete data
    during streaming import (was masked by cerr logging acting as barrier).

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 00:00:00 +0000

mediadb (20260404+52) unstable; urgency=medium

  * Fix data race on import session done/ok flags: make atomic to ensure
    visibility across threads (H2/H3 handler reads without db mutex)
  * Remove diagnostic cerr logging from BinDbImportSession

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 00:00:00 +0000

mediadb (20260404+51) unstable; urgency=medium

  * Fix import status reporting: finish_import now stores ok/error result
    so /api/import/status returns correct success/failure instead of always
    reporting failure with empty error
  * Add diagnostic logging to BinDbImportSession for cluster debugging

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 00:00:00 +0000

mediadb (20260404+50) unstable; urgency=medium

  * Fix import data loss: vacuum store files after import to compact
    partial/duplicate records left by failed previous imports (ios::app)
  * Fix vacuum safety: skip raw data copy for media with data_offset==0
  * Fix spurious .mdb file: reject empty store_id in ensure_store_file
  * Fix client import: use PUT instead of POST for /api/import

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 00:00:00 +0000

mediadb (20260404+49) unstable; urgency=medium

  * Fix Alpine/musl fork: drop privileges before spawning threads so all
    threads inherit correct UID/GID (musl setuid is per-thread)
  * Fix empty DB after fork: defer ClusterMediaBackend sync_thread_ start
    to post-fork callback so sync runs in the child process after cluster start

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 00:00:00 +0000

mediadb (20260404+48) unstable; urgency=medium

  * Fix rwlock ownership violation: use per-call locking in BinDbImportSession
    instead of lifetime lock to avoid cross-thread unlock with H2 frame dispatch

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 00:00:00 +0000

mediadb (20260404+47) unstable; urgency=medium

  * Remove 'Cluster started' cerr to avoid race with paritypp server thread

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 00:00:00 +0000

mediadb (20260404+46) unstable; urgency=medium

  * Remove remaining std::cerr logging from cluster.cpp and backend.cpp
    to fully eliminate data races on _ZSt4cerr

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 00:00:00 +0000

mediadb (20260404+45) unstable; urgency=medium

  * Fix data race on std::cerr: remove verbose hot-path debug logging
    from HTTP handler threads (send_response, RequestEvent, import start)

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 23:00:00 +0200

mediadb (20260404+44) unstable; urgency=medium

  * Fix deadlock: LockedSession::finalize() releases client_mutex_ immediately
    so finish_store() metadata replicate() can acquire it

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 22:30:00 +0200

mediadb (20260404+43) unstable; urgency=medium

  * Web GUI: stream File directly in fetch() instead of arrayBuffer() preload
  * Rebuild against libparitypp 20260404+11 (pipelined store_stripe)

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 22:00:00 +0200

mediadb (20260404+42) unstable; urgency=medium

  * Fix segfault: hold client_mutex_ for lifetime of streaming store_session
    (LockedSession wrapper prevents concurrent QUIC connection access)

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 21:00:00 +0200

mediadb (20260404+41) unstable; urgency=medium

  * H2/H3 streaming import via onH2StreamHeaders/onH3StreamHeaders callbacks
  * Body data feeds directly into ImportSession per DATA frame (no buffering)
  * Requires libhttppp >= 20260404+1

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 20:00:00 +0200

mediadb (20260404+40) unstable; urgency=medium

  * Fix H2/H3: pass QUIC and ALPN h2 connections to base class immediately
  * H1 import: parse() ourselves then intercept PUT /api/import
  * H2/H3 import: handle_import uses ImportSession with buffered body
  * Restore handle_import route for H2/H3 framing layer

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 19:30:00 +0200

mediadb (20260404+39) unstable; urgency=medium

  * Fix: call parse() before base class to intercept PUT /api/import body
  * Cluster import: stream media to paritypp via ReplicateSession (16KB stripes)
  * Eliminate cluster_media_buf_ — peak RAM ~16KB instead of 100MB per media
  * Remove old handle_import buffering path from router

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 19:00:00 +0200

mediadb (20260404+38) unstable; urgency=medium

  * Import: streaming ImportSession state machine, no temp file needed
  * Import: RequestEvent feeds PUT body chunks directly into ImportSession
  * Import: zero-copy MemBuf streambuf for buffer-based imports
  * Cluster import: replicate media data immediately, skip existing via seekg
  * Import: save index once after all stores instead of per-store
  * Import: open store file once per store for all appends

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 02:00:00 +0200

mediadb (20260404+37) unstable; urgency=medium

  * Import: spool PUT body to temp file via RequestEvent(con&) override
  * Import: peak RAM ~64KB instead of 1GB for large archives
  * Import: import_db(path) reads from file, not from memory buffer
  * Import: clean up temp file on disconnect (orphaned spool)
  * Delete store/album: remove media data from cluster nodes

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 01:00:00 +0200

mediadb (20260404+36) unstable; urgency=medium

  * Cluster import: read media directly into final buffer, no double-copy
  * Cluster import: skip existing media via seekg instead of read loop

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 00:00:00 +0200

mediadb (20260403+35) unstable; urgency=medium

  * Import: remove background thread, run synchronously in request handler
  * Import: skip body copy in convert_request for /api/import (skip_body)
  * Import: fast-path in server.cpp reads directly from RecvData
  * Remove import_thread_ member and async import machinery

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 03 Apr 2026 22:00:00 +0200

mediadb (20260403+34) unstable; urgency=medium

  * Cluster import: replicate media individually during parsing, not after
  * Cluster import: store_buf contains metadata only (no raw media data)
  * Cluster import: don't keep pending_data in RAM after replication
  * Cluster import: eliminate redundant second media replication pass
  * Import: save_index_locked called once after all stores, not per-store

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 03 Apr 2026 21:00:00 +0200

mediadb (20260403+33) unstable; urgency=medium

  * Import: remove per-media stderr logging (major CPU/IO savings)
  * Import: write metadata directly to store file, no ostringstream allocs
  * Import: reusable I/O buffer across all media (single allocation)
  * Import: skip O(n²) duplicate check on media_ids during import
  * Import: release request body and import buffer immediately after parsing

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 03 Apr 2026 20:00:00 +0200

mediadb (20260403+32) unstable; urgency=medium

  * New mediadb-tools package with mds_convert CLI utility
  * mds_convert converts MDS1/MDS2 store files to MDS3 format
  * Import: zero-copy buffer (no double RAM), streaming media I/O
  * Import: single file handle per store instead of per-media open/close

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 03 Apr 2026 19:00:00 +0200

mediadb (20260403+31) unstable; urgency=medium

  * New MDS3 append-only store format: O(1) writes, O(1) deletes
  * Add media appends to end of store file instead of full rewrite
  * Delete marks record as empty (1 byte pwrite) instead of rewriting
  * Album edits (set_album_public) append new record + mark old as deleted
  * Import streams records directly to disk per-record (no full store rewrite)
  * Add vacuum: POST /api/vacuum reclaims space from deleted records
  * Backward-compatible: MDS1/MDS2 files still readable, new files use MDS3

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 03 Apr 2026 18:00:00 +0200

mediadb (20260403+30) unstable; urgency=medium

  * Cluster: no local data storage, direct replication per media
  * Upload replicates only the new media data (media:{id} key), not the
    entire store
  * Store replication is metadata-only (no raw bytes), drastically smaller
  * Reads fetch media data on-demand from cluster
  * Export reconstructs full archive by fetching each media from cluster
  * Import replicates each media individually to cluster
  * Repair checks individual media shard health

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 14:00:00 +0200

mediadb (20260403+29) unstable; urgency=medium

  * Make PUT /api/import synchronous: blocks until replication is confirmed
  * Natural backpressure via HTTP connection, no background thread

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 13:00:00 +0200

mediadb (20260403+28) unstable; urgency=medium

  * Increase cluster client_mutex timeouts: 120s replicate, 60s fetch, 30s misc
  * Import no longer aborts on first store failure; continues and retries only
    the failed keys
  * Import returns success even if some replicas fail (local data preserved)

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 12:00:00 +0200

mediadb (20260403+27) unstable; urgency=medium

  * Fix streaming import: keep media in pending_data for local access while
    also writing to cluster buffer, fixing broken pictures after import

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 11:00:00 +0200

mediadb (20260403+26) unstable; urgency=medium

  * Stream import directly to cluster: build store buffer on-the-fly without
    loading all media into memory, drastically reducing memory usage
  * Fix NetException not caught in Cluster (replicate, fetch, server_loop)
  * Fix double save_store_to_buffer call in import retry logic

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 10:00:00 +0200

mediadb (20260403+25) unstable; urgency=medium

  * Import replication: retry up to 3 times on failure, discard on final failure
  * Fix Node Health showing duplicate self node after set_local_node change
  * Switch cluster from RAID1 mirroring to erasure coding (k+m shards)
  * Remove local in-memory block store, rely on distributed erasure-coded storage
  * repair_replication checks shard count (k+m) instead of total peers

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 09:00:00 +0200

mediadb (20260403+22) unstable; urgency=medium

  * Cluster::replicate checks peer count, fails if 0 peers reached

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 08:00:00 +0200

mediadb (20260403+21) unstable; urgency=medium

  * Reject import when cluster is not writable instead of storing locally
  * Cluster::replicate returns bool, stores locally only on peer success

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 07:00:00 +0200

mediadb (20260403+20) unstable; urgency=medium

  * Rebuild against libparitypp 20260403+4 (periodic resync)

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 06:00:00 +0200

mediadb (20260403+19) unstable; urgency=medium

  * Fix fork-kills-thread bug: move cluster.start() to post-fork callback
    so QUIC server thread survives daemonization (matches authdb pattern)

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 05:00:00 +0200

mediadb (20260403+18) unstable; urgency=medium

  * Rebuild against libnetplus 20260403+2 / libparitypp 20260403+3
    (QUIC client handshake debug logging)

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 02:00:00 +0200

mediadb (20260403+17) unstable; urgency=medium

  * Rebuild against libnetplus 20260403+1 / libparitypp 20260403+2
    (udp::sendTo, clean quic::sendPacket)

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 01:00:00 +0200

mediadb (20260403+16) unstable; urgency=medium

  * Rebuild against libnetplus 20260403 / libparitypp 20260403+1
    (QUIC Initial packet and ServerHello debug logging)

 -- Jan Koester <jan.koester@tuxist.de>  Sat, 04 Apr 2026 00:10:00 +0200

mediadb (20260403+15) unstable; urgency=medium

  * Rebuild against libparitypp 20260403 (cert load debug logging)

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 03 Apr 2026 23:55:00 +0200

mediadb (20260403+14) unstable; urgency=medium

  * Generate DER format certificates and keys instead of PEM in postinst
  * Fix node health display: map pclient indices to config peer indices
    (self node no longer duplicates index 0)

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 03 Apr 2026 23:50:00 +0200

mediadb (20260403+13) unstable; urgency=medium

  * Add self node to cluster status endpoint (correct nodes_total/nodes_online)
  * Add debug logging to import: stores, albums, media counts and stream state

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 03 Apr 2026 23:30:00 +0200

mediadb (20260403+12) unstable; urgency=medium

  * Exclude local node from pclient QUIC connections — avoid auth/handshake errors
  * Query local store directly for list_peer_groups on self node
  * Map peer config indices to pclient node indices for correct remote queries

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 03 Apr 2026 23:00:00 +0200

mediadb (20260403+11) unstable; urgency=medium

  * Import replicates each store directly to cluster during parsing
  * No more two-step import-then-replicate — stores go to peers immediately

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 03 Apr 2026 22:00:00 +0200

mediadb (20260403+10) unstable; urgency=medium

  * Store locally before replicate_to_peers (match authdb pattern)
  * Remove Local column from cluster status HTML

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 03 Apr 2026 21:00:00 +0200

mediadb (20260403+9) unstable; urgency=medium

  * Remove Local column from cluster status — local node is now a regular peer
  * Fix total_nodes count: no more +1 double-counting

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 03 Apr 2026 20:00:00 +0200

mediadb (20260403+8) unstable; urgency=medium

  * Include local node in peer list — replicate directly to all nodes with parity
  * Remove is_self skip so local paritypp server participates as a regular peer

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 03 Apr 2026 19:00:00 +0200

mediadb (20260403+7) unstable; urgency=medium

  * Fix cluster status: check availability via cluster fetch instead of empty local store
  * local_ok was always false because store_ has no data (no local cache)

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 03 Apr 2026 18:00:00 +0200

mediadb (20260403+6) unstable; urgency=medium

  * Add debug logging to cluster replicate_store, replicate_index, import_db_from_buffer

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 03 Apr 2026 17:00:00 +0200

mediadb (20260403+5) unstable; urgency=medium

  * Cluster: write directly to nodes, no local store — replicate and remove via pclient only
  * Fix import: link new media to existing albums (not only new albums)
  * sync_from_cluster always fetches from peers for fresh data

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 03 Apr 2026 16:00:00 +0200

mediadb (20260403+4) unstable; urgency=medium

  * Remove all local paritypp store operations from Cluster (fetch, fetch_from_peers, remove)
  * All data goes exclusively through peers — no local cache reads or writes
  * Simplify sync_from_cluster: single fetch() call (now peers-only)
  * Remove local_ok status from cluster stores API endpoint

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 03 Apr 2026 15:00:00 +0200

mediadb (20260403+3) unstable; urgency=medium

  * Remove local paritypp cache write on replicate — data goes directly to peers only

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 03 Apr 2026 14:00:00 +0200

mediadb (20260403+2) unstable; urgency=medium

  * Fix intermittent store visibility: use fetch_from_peers in sync (like authdb)
  * Cluster::fetch() returned the stale local cache, never checking peers for updates
  * Merge-only index sync: do not delete locally-present stores missing from synced index

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 03 Apr 2026 13:00:00 +0200

mediadb (20260403+1) unstable; urgency=medium

  * Fix store sync ordering: replicate store data before index to prevent stale reads
  * Upgrade cluster_op_mutex_ to shared_mutex for concurrent read access
  * Add shared_lock to all ClusterMediaBackend read methods to prevent reads during sync

 -- Jan Koester <jan.koester@tuxist.de>  Fri, 03 Apr 2026 12:00:00 +0200

mediadb (20260402+5) unstable; urgency=medium

  * Fix race condition: sync_from_cluster could overwrite freshly written data
  * Add cluster_op_mutex_ to serialize all write+replicate operations vs sync

 -- Jan Koester <jan.koester@tuxist.de>  Thu, 02 Apr 2026 23:30:00 +0200

mediadb (20260402+4) unstable; urgency=medium

  * Fix import_db_from_buffer in cluster (memory-only) mode
  * Refactor import to stream-based import_from_stream, no temp file needed

 -- Jan Koester <jan.koester@tuxist.de>  Thu, 02 Apr 2026 23:00:00 +0200

mediadb (20260402+3) unstable; urgency=medium

  * Cluster mode: use paritypp block_store exclusively, no local files
  * Add memory-only BinDb mode for cluster backend
  * Buffer-based serialize/deserialize for index and store data

 -- Jan Koester <jan.koester@tuxist.de>  Thu, 02 Apr 2026 22:00:00 +0200

mediadb (20260402+2) unstable; urgency=medium

  * Fix cluster sync: fetch from peers instead of local store
  * Add comparison check to avoid unnecessary reloads

 -- Jan Koester <jan.koester@tuxist.de>  Thu, 02 Apr 2026 21:00:00 +0200

mediadb (20260402+1) unstable; urgency=medium

  * Fix cluster store replication: sync all store files, not just missing ones

 -- Jan Koester <jan.koester@tuxist.de>  Thu, 02 Apr 2026 20:00:00 +0200

mediadb (20260402) unstable; urgency=medium

  * Split monolithic main.cpp into modular source files
  * Service file updated

 -- Jan Koester <jan.koester@tuxist.de>  Thu, 02 Apr 2026 18:11:50 +0200

mediadb (20260401+nmu1) UNRELEASED; urgency=medium

  * Service file updated

 -- Jan Koester <jan.koester@tuxist.de>  Thu, 02 Apr 2026 18:11:50 +0200

mediadb (20260401) unstable; urgency=medium

  * Initial Debian packaging with multiarch support.

 -- Jan Koester <jan.koester@tuxist.de>  Wed, 01 Apr 2026 00:00:00 +0200
