Now rdb_load supports RDB_TYPE_STREAM_LISTPACKS, RDB_TYPE_STREAM_LISTPACKS_2 and RDB_TYPE_STREAM_LISTPACKS_3 formats.
rdb_save still saves in the RDB_TYPE_STREAM_LISTPACKS format - we want to first release a DF version that can load everything, and
then update the replication format in subsequent versions.
Also, update rdb_test.cc
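A minimal load-side dispatch sketch (illustrative only, not the actual rdb_load code; the opcode values follow Redis' rdb.h):

```cpp
#include <cstdint>

// Stream encoding opcodes as defined in Redis' rdb.h.
constexpr uint8_t RDB_TYPE_STREAM_LISTPACKS = 15;    // original encoding, still used on save
constexpr uint8_t RDB_TYPE_STREAM_LISTPACKS_2 = 19;  // Redis 7.0: richer group metadata
constexpr uint8_t RDB_TYPE_STREAM_LISTPACKS_3 = 21;  // Redis 7.2: consumer active-time

// The load path accepts all three encodings...
bool CanLoadStreamType(uint8_t opcode) {
  return opcode == RDB_TYPE_STREAM_LISTPACKS ||
         opcode == RDB_TYPE_STREAM_LISTPACKS_2 ||
         opcode == RDB_TYPE_STREAM_LISTPACKS_3;
}

// ...while the save path still emits only the oldest one for now.
constexpr uint8_t kStreamSaveType = RDB_TYPE_STREAM_LISTPACKS;
```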
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* fix: enforce load limits when loading snapshot
Prevent loading snapshots whose used memory is higher than the max memory limit.
1. Store the used-memory metadata only inside the summary file.
2. Load the summary file before loading anything else, and if the used memory recorded there is higher than the limit, abort the load (see the sketch below).
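A minimal sketch of the guard, assuming a summary header that records the snapshot's memory footprint (names are illustrative, not the real Dragonfly types):

```cpp
#include <cstdint>
#include <stdexcept>

struct SummaryHeader {
  uint64_t used_mem;  // used-memory metadata recorded when the snapshot was taken
};

void CheckLoadLimits(const SummaryHeader& hdr, uint64_t max_memory_limit) {
  // The summary file is read first, so we can refuse the load before any
  // shard file is touched and no partial state is created.
  if (hdr.used_mem > max_memory_limit)
    throw std::runtime_error("snapshot used-memory exceeds maxmemory; aborting load");
}
```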
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
When handling large values during snapshot loading/saving we sometimes allocate a lot of extra memory, so we may need to explicitly run a memory decommit for mimalloc to release memory pages back to the OS. This PR addresses that by running a memory decommit after each shard finishes loading or saving a snapshot.
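A sketch of the idea, assuming mimalloc is the allocator (mi_collect(true) asks mimalloc to return free pages to the OS; the real code may use a different entry point):

```cpp
#include <mimalloc.h>

// Called once per shard after it finishes loading or saving a snapshot.
void DecommitAfterSnapshot() {
  // Large-value serialization leaves big transient allocations behind;
  // force a collection so retained pages are decommitted back to the OS.
  mi_collect(/*force=*/true);
}
```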
---------
Signed-off-by: kostas <kostas@dragonflydb.io>
* feat(rdb_load): add support for loading huge hmaps
* feat(rdb_load): add support for loading huge zsets
* feat(rdb_load): log DFATAL when append fails
* feat(cluster): Allow appending RDB to existing store
The goal of this PR is to support loading multiple RDB files into a single server, such as when migrating from a Valkey cluster to Dragonfly with a different number of nodes.
It makes the following changes:
* Removes `DEBUG LOAD`, as we already have `DFLY LOAD`
* Adds an `APPEND` option to `DFLY LOAD` (i.e. `DFLY LOAD <filename> APPEND`) that loads an RDB file without first flushing the data store, overwriting existing keys
* Does not load keys belonging to unowned slots when in cluster mode (see the sketch below)
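A minimal sketch of the slot filter under assumed names (KeySlot, IsSlotOwnedLocally, and ShouldLoadKey are illustrative, not Dragonfly's real API):

```cpp
#include <cstdint>
#include <string_view>

// Toy stand-ins: real clusters map keys with CRC16(key) % 16384 and consult
// the cluster config for ownership.
uint16_t KeySlot(std::string_view key) {
  uint16_t h = 0;
  for (char c : key) h = static_cast<uint16_t>(h * 31u + static_cast<uint8_t>(c));
  return h % 16384;
}

bool IsSlotOwnedLocally(uint16_t slot) {
  return slot < 8192;  // pretend this node owns the first half of the slot space
}

// During an APPEND load in cluster mode, keys in unowned slots are skipped, so
// per-node RDB files from a differently sized cluster can be merged into one server.
bool ShouldLoadKey(std::string_view key, bool cluster_mode) {
  return !cluster_mode || IsSlotOwnedLocally(KeySlot(key));
}
```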
Fixes #2840
Now unit tests run the same Heartbeat fiber as in prod.
The whole feature turned out to be redundant - just a few explicit settings of maxmemory_limit were needed.
I succeeded in making all unit tests pass.
In addition, this change allows passing a global handler that is called by heartbeat from a single thread.
This is not used yet - it is preparation for the next PR, which will break hung replication connections on a master.
Finally, this change has some non-functional clean-ups and warning fixes to improve code quality.
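A sketch of the single-thread global handler shape (the registration API below is an assumption for illustration, not the actual interface):

```cpp
#include <functional>
#include <utility>

std::function<void()> g_global_handler;  // set once at startup, before heartbeats run

void SetHeartbeatGlobalHandler(std::function<void()> cb) {
  g_global_handler = std::move(cb);
}

// Each shard thread runs its own heartbeat, but only one designated thread
// invokes the global hook, so the callback never races with itself.
void HeartbeatTick(unsigned thread_index) {
  if (thread_index == 0 && g_global_handler)
    g_global_handler();
}
```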
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* feat(json): Deserialize ReJSON format
This PR adds support for Redis-based JSON RDB format deserialization.
Since Redis uses ReJSON as a module, serialization is slightly different
from other types, but overall it's not a big change once we know where
all the bits should be.
While this change knows how to _read_ Redis-based JSON keys, it does not
_save_ them in Redis format. That will be in a different PR.
This PR also ignores unknown (non-key) module data instead of failing the load, as sketched below.
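A sketch of the module-id handling this relies on; the 6-bits-per-character packing follows Redis' module type convention, while the function names are illustrative:

```cpp
#include <cstdint>
#include <string>

// Redis packs a 9-character module type name (6 bits per char) plus a 10-bit
// encoding version into the 64-bit module id stored in the RDB stream.
std::string ModuleTypeName(uint64_t module_id) {
  static const char kCharSet[] =
      "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";
  std::string name(9, ' ');
  uint64_t id = module_id >> 10;  // drop the encoding-version bits
  for (int i = 8; i >= 0; --i) {
    name[i] = kCharSet[id & 63];
    id >>= 6;
  }
  return name;
}

// ReJSON registers its type under this 9-character name; other module
// payloads that carry no keys can be skipped instead of failing the load.
bool IsReJson(uint64_t module_id) {
  return ModuleTypeName(module_id) == "ReJSON-RL";
}
```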
Fixes #2718
* Cleanup
* Add tests
* Skip unsupported modules
* Small refactor
* chore: prevent crashing upon inconsistent expiry table
Also, introduce a "DFLY LOAD <filename>" command, in addition to "DEBUG LOAD",
as an official command for loading snapshots into a running server.
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
entries_read and lag have been added to the output of XINFO GROUPS since Redis 7.0. This patch adds support for both to Dragonfly. It also fixes a bug that incorrectly set the initial value of entries_read when a consumer group is created.
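A sketch of the lag semantics as Redis 7 defines them (lag = entries_added - entries_read when the counters are still valid; otherwise XINFO GROUPS reports a nil lag):

```cpp
#include <cstdint>
#include <optional>

// entries_added counts entries ever appended to the stream; entries_read
// counts entries the consumer group has consumed.
std::optional<int64_t> GroupLag(int64_t entries_added, int64_t entries_read,
                                bool counters_valid) {
  if (!counters_valid)
    return std::nullopt;  // e.g. after certain XDEL/trim patterns
  return entries_added - entries_read;
}
```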
Fixes #1948
1. Cherry-pick changes from Redis 7 that encode integer scores more efficiently.
2. Introduce an optimization that first checks whether the new element should be the last
one for listpack-encoded sorted sets (see the sketch below).
3. Consolidate listpack-related constants and tighten listpack usage.
4. Introduce the MEMORY USAGE command.
5. Introduce a small delay before decommitting memory pages back to the OS.
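A sketch of the tail-insert check from item 2, on a toy in-memory representation (real listpacks are flat byte buffers; member-tie ordering is omitted for brevity):

```cpp
#include <string>
#include <utility>
#include <vector>

struct Entry {
  std::string member;
  double score;
};

void SortedInsert(std::vector<Entry>& lp, Entry e) {
  // Fast path: scores often arrive in increasing order, so first check
  // whether the new element belongs at the tail and skip the ordered scan.
  if (lp.empty() || e.score >= lp.back().score) {
    lp.push_back(std::move(e));
    return;
  }
  auto it = lp.begin();
  while (it != lp.end() && it->score < e.score) ++it;
  lp.insert(it, std::move(e));
}
```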
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* Bump up RDB_VERSION to 11
* Update RDB_JSON value to 30
* Fix HT being serialized to the wrong type
* Serialize HT as LIST_PACK
* Add support for deserializing SET_LISTPACK
fix: fix crash when inserting an empty value into a listpack.
We cannot pass null pointers to listpack, as the sketch below shows.
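A sketch of the guard (the lpAppend declaration mirrors Redis' listpack.h; SafeAppend is an illustrative wrapper):

```cpp
#include <cstdint>

extern "C" unsigned char* lpAppend(unsigned char* lp, unsigned char* ele,
                                   uint32_t size);  // from Redis' listpack.h

unsigned char* SafeAppend(unsigned char* lp, const char* val, uint32_t len) {
  static unsigned char kEmpty[1] = {0};
  // Never forward nullptr into listpack, even for a zero-length value.
  unsigned char* ele =
      val ? reinterpret_cast<unsigned char*>(const_cast<char*>(val)) : kEmpty;
  return lpAppend(lp, ele, len);
}
```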
Fixes #1305 and probably fixes #1290
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* feat(server): Add support for PFADD and PFCOUNT
This version does not create sparse-encoded HLLs; however, it is fully compatible with those created by Redis, as it converts them to the dense encoding.
Note that PFMERGE is not yet implemented.
* Set the small string optimization threshold to 2^13 instead of 2^15.
This ensures a dense-encoded HLL does *not* fit within the small string,
which keeps it contiguous in memory, so GetSlice() will not
allocate (see the arithmetic sketch below).
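The arithmetic behind the threshold, as a small worked example (the header size is approximate):

```cpp
#include <cstdio>

int main() {
  constexpr int kRegisters = 16384;  // Redis HLL uses 2^14 registers
  constexpr int kBitsPerRegister = 6;
  constexpr int kHeaderBytes = 16;   // "HYLL" header
  constexpr int kDenseBytes = kRegisters * kBitsPerRegister / 8 + kHeaderBytes;
  // ~12304 bytes: under the old 2^15 cap (so it fit the small string and was
  // split in memory), over the new 2^13 cap (so it stays contiguous).
  std::printf("dense HLL = %d bytes, 2^13 = %d, 2^15 = %d\n", kDenseBytes,
              1 << 13, 1 << 15);
  return 0;
}
```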
---------
Signed-off-by: chakaz <chakaz@chakaz>
Co-authored-by: chakaz <chakaz@chakaz>
* feat(server): Save snapshot on shutdown
* CR
* Change save on shutdown to be conditional on --dbfilename.
* Support SHUTDOWN [NO]SAVE and fix unit test
* Better wait for DB loading
* Fix DF format loading state bug
* Fix some fallout from auto save
1. Support tiered deletion.
2. Add notion of tiered entity in "DEBUG OBJECT" output.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* feat(core): Added DenseSet & StringSet types with docs
- Improved documentation by adding labels to chain types & a pointer tagging table (a tagging sketch appears after this list)
- Added potential improvements to the DenseSet types in the docs
- Added excalidraw save file for future editing
- Removed ambiguous overloaded types
- Renamed iterators to be clearer
* feat(core): Cleaned up DenseSet and Docs
* feat(core): Made DenseSet more ergonomic
* feat(server): Integration of DenseSet into Server
- Integrated DenseSet with CompactObj and the Set Family commands
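A minimal sketch of the pointer-tagging trick the docs describe (the tag bit and helper names here are illustrative; DenseSet's actual layout may differ):

```cpp
#include <cstdint>

// Allocations are at least 2-byte aligned, so the lowest pointer bit is free
// to mark whether a slot holds an object directly or a chain link.
constexpr uintptr_t kLinkBit = 1;

inline void* TagAsLink(void* p) {
  return reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(p) | kLinkBit);
}
inline bool IsLink(const void* p) {
  return reinterpret_cast<uintptr_t>(p) & kLinkBit;
}
inline void* Untag(void* p) {
  return reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(p) & ~kLinkBit);
}
```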
Signed-off-by: Braydn <braydn.moore@uwaterloo.ca>
Related to #159. Before this change, the rdb loading thread created all the Redis objects as well.
Now we separate rdb file parsing from object creation. The file parsing phase produces a load trace of one or more binary blobs.
Those blobs are then passed to the threads that are responsible for managing the objects.
The second phase is object creation based on the passed trace (sketched below); finally, the binary blobs are destroyed.
As a result, each thread creates objects using the memory allocator it owns and memory stats become consistent.
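A sketch of the two phases with illustrative types (the real load trace and shard dispatch are richer than this):

```cpp
#include <string>
#include <vector>

// Phase 1 output: parsed-but-not-materialized binary blobs. Produced by the
// rdb parsing thread without creating any value objects.
struct LoadTrace {
  std::vector<std::string> blobs;
};

// Phase 2, run on the shard thread that owns the keys: build objects with that
// thread's own allocator so per-thread memory stats stay consistent.
void CreateObjects(LoadTrace trace) {
  for (const std::string& blob : trace.blobs) {
    // ... decode blob into a value object via the shard-local allocator ...
    (void)blob;
  }
}  // trace and its blobs are destroyed here, after materialization
```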
Signed-off-by: Roman Gershman <roman@dragonflydb.io>