fix(server): Don't apply memory limit when loading/replicating (#1760)

* fix(server): Don't apply memory limit when loading/replicating

When loading a snapshot created by the same server configuration (memory and
number of shards), we may still build a different dash table segment directory tree,
because the tree shape depends on the order in which entries were inserted. Therefore,
when loading data from a snapshot or from replication, the conservative memory checks
might fail, as the new tree might have more segments. Because we do not want to fail
loading a snapshot taken from the same server configuration, we disable these checks
during loading and replication.
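The gating described above can be sketched as follows. This is an illustrative sketch, not Dragonfly's actual code: the names `LoadingState` and `AllowInsert` are assumptions introduced here to show the idea that the conservative memory check is bypassed while the server is loading a snapshot or replicating.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical state enum; Dragonfly's real implementation differs.
enum class LoadingState { kActive, kLoading, kReplicating };

// Sketch of the check: during loading/replication the memory limit is not
// enforced, so a segment directory that grew in a different insertion order
// cannot cause the load itself to fail.
bool AllowInsert(size_t used_bytes, size_t max_bytes, LoadingState state) {
  if (state == LoadingState::kLoading || state == LoadingState::kReplicating)
    return true;  // skip the conservative check while restoring data
  return used_bytes < max_bytes;  // normal path: enforce the limit
}
```

Once loading finishes and the state returns to `kActive`, the usual limit applies again to new writes.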

Signed-off-by: adi_holden <adi@dragonflydb.io>
adiholden 2023-09-03 11:24:52 +03:00 committed by GitHub
parent 7c16329ef3
commit ac3b8b2b74
3 changed files with 30 additions and 8 deletions


@@ -2289,6 +2289,8 @@ void RdbLoader::LoadItemsBuffer(DbIndex db_ind, const ItemsBuf& ib) {
void RdbLoader::ResizeDb(size_t key_num, size_t expire_num) {
DCHECK_LT(key_num, 1U << 31);
DCHECK_LT(expire_num, 1U << 31);
// Note: to reserve space we would have to allocate at the shard level, but we
// might load with a different number of shards, which makes resizing the database here unfeasible.
}
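The comment's point can be illustrated with a small sketch. This is not Dragonfly code; `CountPerShard` is a hypothetical helper showing that per-shard key counts recorded under one shard count do not carry over to a server running with another shard count, because the hash-based key-to-shard mapping changes.

```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <vector>

// Count how many keys land on each shard under a simple hash % num_shards
// assignment (illustrative only). With a different num_shards, the same key
// set distributes differently, so snapshot-time per-shard sizes are useless
// for pre-reserving table capacity at load time.
std::vector<size_t> CountPerShard(const std::vector<std::string>& keys,
                                  size_t num_shards) {
  std::vector<size_t> counts(num_shards, 0);
  std::hash<std::string> hasher;
  for (const auto& key : keys)
    ++counts[hasher(key) % num_shards];
  return counts;
}
```

The total key count is preserved, but its split across shards is not, which is why `ResizeDb` above only keeps the `DCHECK`s and does not reserve space.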
error_code RdbLoader::LoadKeyValPair(int type, ObjSettings* settings) {