mirror of https://github.com/dragonflydb/dragonfly.git (synced 2025-05-11 18:35:46 +02:00)
fix(server): Don't apply memory limit when loading/replicating (#1760)
* fix(server): Don't apply memory limit when loading/replicating

When loading a snapshot created by the same server configuration (memory and number of shards), we may build a different dash table segment directory tree, because the tree's shape depends on the order in which entries were inserted. Therefore, when loading data from a snapshot or from replication, the conservative memory checks might fail, since the new tree can have more segments. Because we don't want loading a snapshot produced by the same server configuration to fail, these checks are disabled during loading and replication.

Signed-off-by: adi_holden <adi@dragonflydb.io>
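As a rough illustration of the idea, here is a minimal sketch (hypothetical names such as LoaderState and CanAllocateSegment, not the actual Dragonfly API) of skipping the conservative memory check while a snapshot load or replication full-sync is in progress:

#include <atomic>
#include <cstddef>

struct LoaderState {
  std::atomic<bool> loading{false};      // set while an RDB snapshot is being loaded
  std::atomic<bool> replicating{false};  // set during a replication full-sync
};

// Hypothetical helper: decides whether a new dash-table segment may be allocated.
bool CanAllocateSegment(const LoaderState& state, size_t used_bytes, size_t max_bytes) {
  // While loading/replicating, the segment directory may grow larger than on the
  // server that produced the data (its shape depends on insertion order), so the
  // conservative limit is skipped.
  if (state.loading.load() || state.replicating.load())
    return true;
  return used_bytes < max_bytes;
}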
This commit is contained in:
parent 7c16329ef3
commit ac3b8b2b74
3 changed files with 30 additions and 8 deletions
@@ -2289,6 +2289,8 @@ void RdbLoader::LoadItemsBuffer(DbIndex db_ind, const ItemsBuf& ib) {

void RdbLoader::ResizeDb(size_t key_num, size_t expire_num) {
  DCHECK_LT(key_num, 1U << 31);
  DCHECK_LT(expire_num, 1U << 31);
  // Note: To reserve space, it's necessary to allocate space at the shard level. We might
  // load with different number of shards which makes database resizing unfeasible.
}

error_code RdbLoader::LoadKeyValPair(int type, ObjSettings* settings) {
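The note in ResizeDb above explains why no space is reserved: the reservation would have to happen per shard, and the loading server may run with a different shard count than the one that wrote the snapshot. A hypothetical sketch (EstimatePerShardKeys is not a Dragonfly function) of why a per-shard estimate is unreliable:

#include <cstddef>
#include <vector>

// Naive even split of the total key count across shards.
std::vector<size_t> EstimatePerShardKeys(size_t key_num, size_t num_shards) {
  std::vector<size_t> estimate(num_shards, key_num / num_shards);
  // The real per-shard distribution depends on key hashing and on num_shards itself,
  // which can differ from the configuration that produced the snapshot, so pre-sizing
  // each shard's table from this estimate could over- or under-reserve memory.
  return estimate;
}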