* fix replication test flag name for big values
* fix a bug that triggers UB when RegisterOnChange is called from flows that iterate over the callbacks and preempt
* add a stress test for big value serialization
Signed-off-by: kostas <kostas@dragonflydb.io>
* serialize big slots in chunks (see the sketch below)
* allow preemption on large slots
* disable big entries serialization for RDB files
* add test
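A minimal sketch of the chunking idea, with hypothetical names standing in for the actual serializer sink and fiber API:
```cpp
#include <cstddef>
#include <functional>
#include <string_view>

// Hypothetical sketch: `write` stands in for the replication/RDB sink and
// `yield` for the fiber preemption point taken between chunks, so other
// fibers can run while a big value is being serialized.
void SerializeBigValue(std::string_view blob,
                       const std::function<void(std::string_view)>& write,
                       const std::function<void()>& yield) {
  constexpr size_t kChunkSize = 1 << 20;  // 1MiB chunks (illustrative size)
  for (size_t off = 0; off < blob.size(); off += kChunkSize) {
    write(blob.substr(off, kChunkSize));
    yield();  // preemption point between chunks
  }
}
```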
Signed-off-by: kostas <kostas@dragonflydb.io>
* feat(namespaces): Initial support for multi-tenant #3050
This PR introduces a way to create multiple, separate and isolated
namespaces in Dragonfly. Each user can be associated with a single
namespace, and will not be able to interact with other namespaces.
This is still experimental, and lacks some important features, such as:
* Replication and RDB saving completely ignore non-default namespaces
* Defrag and statistics either use the default namespace or all
namespaces without separation
To associate a user with a namespace, use the `ACL` command with the
`TENANT:<namespace>` flag:
```
ACL SETUSER user TENANT:namespace1 ON >user_pass +@all ~*
```
For more examples and up-to-date info, check
`tests/dragonfly/acl_family_test.py` - specifically the
`test_namespaces` function.
* chore: refactoring around tiered storage
1. Renamed ReportXXX callbacks to NotifyXXX
2. Pulled RecordDelete/RecordAdded out of TieredStorage::ShardOpManager.
3. Moved TieredStorage::ShardOpManager functions to private scope.
4. Streamlined code in TieredStorage::Delete
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* fix: Preserve expiry upon uploading external values
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* fix: properly clean tiered state upon flush
The bug was around pending IO entries that were not properly cleaned during flush.
This PR simplifies the logic around tiered storage handling during flush: the cleanup is now always performed in the synchronous part of the command.
In addition, this PR improves error logging in tests if the dragonfly process exits with an error.
Finally, a test is added that makes sure pending tiered items are flushed during the flush call.
Fixes #3252
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
In rare cases, the fuzzy cluster migration test detected missing keys.
It turns out that the missing keys were skipped on the source side due
to contention:
* The OnDbChange callback registered and got a `snapshot_id`
* It then blocked on a mutex, and could not add itself to the list of
callbacks
* When the mutex was released it finally registered, but it had missed
all changes that happened between obtaining the `snapshot_id` and the
moment it was actually added to the callback list
* fix: fix RegisterOnChange methods for journal and db_slice. Call db_slice and journal callbacks atomically. A hack was added to avoid a deadlock during SAVE
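A minimal sketch of the atomicity requirement, with hypothetical names: taking the snapshot id and joining the callback list must happen in one critical section, so no change can slip in between.
```cpp
#include <cstdint>
#include <functional>
#include <mutex>
#include <vector>

// Hypothetical sketch; the real code lives in db_slice/journal.
class ChangeRegistry {
 public:
  using Callback = std::function<void(uint64_t /*version*/)>;

  // Both steps happen under one lock: no change can occur between taking
  // the snapshot id and becoming visible in the callback list.
  uint64_t RegisterOnChange(Callback cb) {
    std::lock_guard lk(mu_);
    uint64_t snapshot_id = version_++;
    callbacks_.push_back(std::move(cb));
    return snapshot_id;
  }

 private:
  std::mutex mu_;
  uint64_t version_ = 0;
  std::vector<Callback> callbacks_;
};
```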
* add partial support for CLIENT CACHING TRUE (only to be used with TRACKING OPTIN); see the example below
* add OPTIN to CLIENT TRACKING command
* refactor client tracking to respect transactional atomicity
* fixed multi/exec and disabled squashing with client tracking
* add tests
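For illustration, the OPTIN flow looks roughly like this (per the commit wording; see the added tests for the authoritative syntax):
```
CLIENT TRACKING ON OPTIN
CLIENT CACHING TRUE
GET key1
```
Here `CLIENT CACHING TRUE` opts the next read command in, so `key1` becomes tracked and the client is later notified if it changes.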
Done in preparation to make ShardArgs a smart iterable type,
but currently it's just a wrapper around ArgSlice.
Also refactored common.{h,cc} into tx_base.{h,cc}
In addition, fixed a bug in key tracking, where we wrongly created weak_ref
in a shard thread instead of doing this in the coordinator thread.
Finally, identified another bug (not fixed yet) where we track all the arguments
instead of tracking keys only.
Besides this, no functional changes around the moved code.
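As a rough illustration of the current shape (types are illustrative, not the actual Dragonfly definitions):
```cpp
#include <string_view>
#include <vector>

// Illustrative: ArgSlice as a span of command arguments, ShardArgs as a thin
// wrapper that can later grow smart iteration without touching call sites.
using ArgSlice = std::vector<std::string_view>;

class ShardArgs {
 public:
  explicit ShardArgs(ArgSlice slice) : slice_(std::move(slice)) {}

  auto begin() const { return slice_.begin(); }
  auto end() const { return slice_.end(); }
  size_t Size() const { return slice_.size(); }

 private:
  ArgSlice slice_;
};
```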
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: get rid of lock keys
1. Introduce LockTag, a type representing the part of the key that is used for locking.
2. Hash keys once in each transaction.
3. Expose swap_memory_bytes metric.
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
The main change here is the introduction of the strong type LockTag,
which is distinct from a plain string_view key.
Also, some testing improvements to reduce the footprint of the next PR.
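A hedged sketch of the strong-type idea (the real LockTag lives in Dragonfly's transaction code and may store only the locked part of the key):
```cpp
#include <string>
#include <string_view>

// Illustrative sketch: LockTag wraps the part of the key used for locking,
// so it cannot be accidentally mixed up with a plain string_view key.
class LockTag {
 public:
  explicit LockTag(std::string_view key) : tag_(key) {}

  std::string_view View() const { return tag_; }

 private:
  std::string tag_;  // in the real code this may be only a part of the key
};
```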
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: LockTable tracks fingerprints of keys
It's a first step that will probably simplify dependencies in many places
where we currently need to keep key strings just for locking. A second step will
be to reduce the CPU load of multi-key operations like MSET by precomputing
fingerprints once.
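A minimal sketch of that direction, with std::hash standing in for the actual fingerprint function:
```cpp
#include <cstdint>
#include <functional>
#include <string_view>

// Illustrative: the lock table keys on a 64-bit fingerprint instead of the
// key string itself, so key strings need not be kept alive for locking.
using Fingerprint = uint64_t;

inline Fingerprint KeyFingerprint(std::string_view lock_tag) {
  return std::hash<std::string_view>{}(lock_tag);
}
```
For a multi-key command like MSET, each fingerprint would then be computed once up front and reused.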
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
A self-laundering iterator will enable us to, eventually, yield from fibers while holding an iterator. For example:
```cpp
auto it1 = db_slice.Find(...);
Yield(); // Until now - this could have invalidated `it1`
auto it2 = db_slice.Find(...);
```
Why is this a good idea? Because it will enable yielding inside PreUpdate(), which will allow breaking down writes of huge entries into small chunks to disk/network, eliminating the need to allocate huge chunks of memory just for serialization.
It will probably unlock future developments as well, as yielding can be useful in other contexts.
* fix(flushslots): Don't miss updates in `FLUSHSLOTS`
This PR registers for PreUpdate() from inside the `FLUSHSLOTS` fiber so
that any attempt to update a to-be-deleted key will work as expected
(first delete, then apply the change).
This fixes several issues:
* Any attempt to touch bucket B (like inserting a key), where another key
in B should be removed, caused us to _not_ remove the latter key
* Commands which use an existing value but do not completely override it,
like `APPEND` and `LPUSH`, did not treat the key as removed but instead
used the original value
Fixes #2771
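A hedged sketch of the hook's logic; names like SlotFlusher and KeySlot are hypothetical, and the slot mapping is a placeholder for the real CRC16 function:
```cpp
#include <functional>
#include <set>
#include <string_view>

// Illustrative: the FLUSHSLOTS fiber registers a PreUpdate hook; any key that
// belongs to a to-be-deleted slot is deleted first, so the update starts from
// a clean state instead of the stale value.
struct SlotFlusher {
  std::set<unsigned> pending_slots;  // slots scheduled for deletion

  static unsigned KeySlot(std::string_view key) {
    // placeholder for the real CRC16-based slot mapping
    return std::hash<std::string_view>{}(key) % 16384;
  }

  bool ShouldDeleteFirst(std::string_view key) const {
    return pending_slots.count(KeySlot(key)) > 0;
  }
};
```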
* fix flushslots syntax in test
* EXPECT_EQ(key:0, xxxx)
* dbsize
* chore: prevent crashing upon inconsistent expiry table
Also, introduce the "DFLY LOAD <filename>" command, in addition to "DEBUG LOAD",
as an official command to load snapshots into the running server.
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: add malloc-based stats and decommit
Provides more stats and control with the glibc-malloc based allocator.
For example,
with v1.15.0 (--proactor_threads=2) and an empty database, `info memory` returns
```
used_memory:614576
used_memory_human:600.2KiB
used_memory_peak:614576
used_memory_peak_human:600.2KiB
used_memory_rss:19922944
used_memory_rss_human:19.00MiB
```
then during `memtier_benchmark -n 300000 --key-maximum 100000 --ratio 0:1 --threads=30 -c 100` (i.e. GET-only with 3k connections):
```
used_memory:614576
used_memory_human:600.2KiB
used_memory_peak:614576
used_memory_peak_human:600.2KiB
used_memory_rss:59985920
used_memory_rss_human:57.21MiB
used_memory_peak_rss:59985920
```
Connection overhead grows RSS by ~39MB.
When the traffic stops, `used_memory_rss_human` becomes `30.35MiB`;
we do not know where 11MB got lost, and `MEMORY DECOMMIT` does not reduce the RSS.
With this change, `memory malloc-stats` returns the following during the memtier traffic:
```
malloc arena: 394862592
malloc fordblks: 94192
```
i.e. 395MB of virtual memory was allocated by malloc, and only 94KB sits in free chunks available for reuse.
395MB is arena virtual memory, not RSS obviously, but at least we have some visibility into malloc reservations.
The RSS usage is the same ~57MB; the difference between virtual memory and RSS is due to the fact that we reserve fiber stacks of 131KB each but touch less.
After the traffic stops, `arena` is reduced to 134520832 bytes and fordblks to 133016592, i.e. the majority of the reserved ranges are also free (available for reuse) in the malloc pools.
RSS goes down to ~31MB, similarly to before.
So far, this PR has only demonstrated increased visibility into the mmapped ranges reserved by glibc malloc.
The additional functional change is in `MEMORY DECOMMIT` that now trims malloc RSS usage from reserved but unused (fordblks) pages
by calling `malloc_trim`.
After the call, RSS is: `used_memory_rss_human:20.29MiB` which is almost the same as when we started the empty process.
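For reference, the glibc calls behind these numbers are mallinfo2() and malloc_trim(). mallinfo2 requires glibc >= 2.33; older glibc only offers the legacy mallinfo, which is likely why the next commit falls back to a legacy version there:
```cpp
#include <cstdio>
#include <malloc.h>

// Print the two numbers shown by `memory malloc-stats`: total arena bytes
// reserved by malloc and the bytes sitting in free chunks (fordblks).
void PrintMallocStats() {
  struct mallinfo2 mi = mallinfo2();  // glibc >= 2.33
  std::printf("malloc arena: %zu\n", mi.arena);
  std::printf("malloc fordblks: %zu\n", mi.fordblks);
}

// MEMORY DECOMMIT now additionally returns free malloc pages to the OS.
void TrimMalloc() {
  malloc_trim(0);
}
```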
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: fix build for older glibc environments
Disable these extensions for Alpine and use the legacy version
for older glibc libraries.
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
In the fiber we used to call `mi_heap_collect()` when we're done
deleting items. But since that fiber captures a `vector` of intrusive
pointers to `DbTable`s, it can't free all memory used by the tables
themselves.
A local test shows that this fix helps almost entirely: when occupying a
5gb DB, `FLUSHALL` will reduce RSS by 4.7gb, leaving 300mb still used. A
follow-up `MEMORY DECOMMIT` *will* indeed remove these 300mb, but I'm
still not sure why they are not released immediately. Still looking...
Addresses (1) of #2690
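A hedged sketch of the shape of the fix (std::shared_ptr stands in for the intrusive pointers, and DbTable here is a placeholder): the captured references must be dropped before collecting, or the tables' own memory stays reachable.
```cpp
#include <memory>
#include <vector>

struct DbTable { /* ... */ };  // placeholder for the real table type

// Illustrative: release the captured table references *before* asking the
// allocator to collect, so the tables' own memory can be returned too.
void DeletionFiber(std::vector<std::shared_ptr<DbTable>> tables) {
  tables.clear();  // drop the last references first ...
  // ... then collect, e.g. mi_heap_collect() in the mimalloc build.
}
```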
* fix: do not migrate during connection close
Fixes #2569
Before the change we had a corner case where Dragonfly would call
OnPreMigrateThread but would not call CancelOnErrorCb because OnBreakCb had already been called
(it resets break_cb_engaged_).
On the other hand, in OnPostMigrateThread we called RegisterOnErrorCb if breaker_cb_ was set, which resulted in double registration.
This change simplifies the logic by removing the break_cb_engaged_ flag, since CancelOnErrorCb is safe to call even if nothing is registered.
Moreover, we now skip the Migrate flow if the socket is being closed.
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
This should reduce allocations in the common (non-multi) case.
In addition, rename Transaction::args_ to kv_args_.
Signed-off-by: Vladislav Oleshko <vlad@dragonflydb.io>
Co-authored-by: Vladislav <vlad@dragonflydb.io>
* fix(server): mget crash on same key get
fix: #2465
The bug: in cache mode, MGET bumps up items. When executing MGET with the same key several times, i.e. `MGET key key`, we invalidate the iterator when we bump up the item in the dash table.
The fix: bump items up/down only once by tracking them in a bumped_items set.
This PR also reverts c225113
and updates the bumped stats and the bumped_items set only if the item was actually bumped.
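A minimal sketch of the dedup idea behind the bumped_items set (container choice is illustrative):
```cpp
#include <string_view>
#include <unordered_set>

// Illustrative: remember which keys were already bumped during this MGET so a
// repeated key (MGET key key) is bumped at most once, keeping earlier
// iterators into the dash table valid.
struct BumpTracker {
  std::unordered_set<std::string_view> bumped_items;

  bool ShouldBump(std::string_view key) {
    return bumped_items.insert(key).second;  // true only on first sighting
  }
};
```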
Signed-off-by: adi_holden <adi@dragonflydb.io>