Make RedisParser::Buffer const, plus some minor changes in the dragonfly_connection code.
No functionality is changed.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* feat(set_family): Update TTL for existing fields in SADDEX
In SADDEX, a TTL is now also applied to existing fields: if a field
already exists in the set, its TTL is updated.
A new flag, legacy_saddex_keepttl, is introduced and is false by
default. If this flag is set to true, SADDEX keeps the legacy behavior.
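A minimal sketch of the new behavior, assuming the SADD-like syntax `SADDEX key seconds member [member ...]` and illustrative replies:
```
> SADDEX myset 60 a b
(integer) 2
> SADDEX myset 120 a
(integer) 0
```
With the default flag value, the second call refreshes the TTL of the existing member "a" to 120 seconds; with legacy_saddex_keepttl=true, the original TTL of "a" is kept.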
Signed-off-by: Abhijat Malviya <abhijat@dragonflydb.io>
---------
Signed-off-by: Abhijat Malviya <abhijat@dragonflydb.io>
Mainly comments and refactorings.
There are two functional differences:
1. Flush serialized entries once we have gathered at least K delayed
entries coming from tiered entities.
2. Allow loading snapshots larger than memory for tiered-enabled datastores.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Co-authored-by: Kostas Kyrimis <kostas@dragonflydb.io>
* chore: reproduce a bug related to #4663
Add various debug logs to help track the deadlock.
Add more assertions in helio and provide state time for fibers
during stacktrace printing.
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
1. Fix FreeMemWithEvictionStep, which could preempt under FiberAtomicGuard.
This could happen during the return from the inner loop. Now we exit
the guard first and then preempt in a safe place.
2. Rename LocalBlockingCounter to LocalLatch,
because it is a variation of a latch (see std::latch, for example).
3. Rename PreUpdate to PreUpdateBlocking to emphasize that it can block.
4. Fix mutation counting: count either insertions or changes to an existing entry.
Previously, we incremented this counter for misses as well.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
feat(rdb_load): Added a flag to ignore key expiry #3858.
Added a new flag, --rdb_ignore_expiry, to ignore key expiry when loading from an RDB snapshot. Also cached this flag in the RDBLoader object to reuse it.
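A hedged usage sketch (only the --rdb_ignore_expiry flag itself comes from this change; the boolean-flag syntax is the usual one):
```
dragonfly --rdb_ignore_expiry=true
```
With the flag enabled, expiry information stored in the snapshot is ignored while the keys are loaded.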
FormatInfoMetrics used 18KB of stack size in debug mode.
Each call to append increased the stack even though the calls were made
from scoped blocks. This PR overcomes this by moving the calls into lambda functions.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
We call PerformDeletion in an atomic block, and it in turn calls SendInvalidationTrackingMessage,
which could block. We fix this by separating out the blocking logic: the invalidation messages are moved into
a designated send queue and flushed later.
In addition, the functions are renamed to make it explicit that they are atomic (i.e. not blocking).
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
This function analyzes the compressibility of the keys
using a single Huffman tree. For example:
```
> debug POPULATE 1000000 keyabcdef 10
OK
> debug compression
1) max_symbol
2) (integer) 121
3) max_bits
4) (integer) 5
5) raw_size
6) (integer) 7861817
7) compressed_size
8) (integer) 4372270
9) ratio
10) "0.5561398847111297"
```
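Here the reported ratio is compressed_size / raw_size: 4372270 / 7861817 ≈ 0.556.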
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Support Cluster configuration for dfly_bench.
The load-tester client opens a connection to each shard:
`num_shards * FLAGS_c * FLAGS_proactor_threads` connections in total.
There is no support for MOVED responses at the moment, so only static clusters are supported.
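For example, with 3 shards, FLAGS_c=10 and FLAGS_proactor_threads=4, the client opens 3 * 10 * 4 = 120 connections.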
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
#4636 introduced a small race. The problem is that Start() might have failed (because a subsequent ReplicaOf command cancelled it). ReplicaOfInternal would ignore the error code and start the replication fiber via replica_->StartMainReplicationFiber, but the second ReplicaOf command that followed had already updated replica_, so we tried to start replication twice on the same replica instance, triggering a failed check that the fiber is already running.
---------
Signed-off-by: kostas <kostas@dragonflydb.io>
The latest helio fixes the bug in DnsResolve - fixes #4244.
Also, add some comments and perform minor clean ups in tiered codebase.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
HyperLogLog is an efficient data structure for approximate counting of unique elements.
We use it to sample the traffic via the "DEBUG KEYS ON/OFF" command.
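A hedged sketch of toggling the sampling (the exact replies are an assumption):
```
> debug keys on
OK
> debug keys off
OK
```
While sampling is on, the HyperLogLog approximately counts the unique keys observed in the traffic.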
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: allow sampling of topk hottest keys.
The core data structure code was added a long time ago; now
we allow using it via the `DEBUG TOPK ON/OFF` subcommand.
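A similarly hedged sketch for enabling and disabling topk sampling (the replies are an assumption; how the collected results are read back is not shown here):
```
> debug topk on
OK
> debug topk off
OK
```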
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
They were wrong, we usually do not need them,
and they complicated the code. The right way to do this is to add them to the OBJHIST statistics.
In addition, some other code improvements were made without functional changes.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
1. Cherry-pick valkey security fixes for stringmatchlen, whose handling of stars had exponential complexity.
See https://nvd.nist.gov/vuln/detail/cve-2022-36021
2. Fall back to stringmatchlen for short lengths.
3. Add another backend to GlobMatcher - PCRE2 - though it is not enabled at the moment.
While this backend is the fastest one, it requires an additional shared library dependency.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
We unconditionally flushed the replica even if replicaof failed to connect. We now flush only if replicaof connected successfully.
Signed-off-by: kostas <kostas@dragonflydb.io>
1. Reduce the number of allocations when creating new members.
2. Reverse the saving order, because with a bptree it's better to add elements
from smallest to largest.
3. Add a runtime flag, rdb_load_dry_run, that allows us to load the dataset
without actually adding entries (see the sketch below).
For a zset-rich dataset, the difference was 415s before vs 220s afterwards.
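A hedged sketch of the dry-run flag (only rdb_load_dry_run comes from this change; the rest of the invocation is illustrative):
```
dragonfly --rdb_load_dry_run=true --dbfilename dump
```
The snapshot is parsed end to end, but no entries are added to the datastore, which can be used, for example, to measure pure loading speed.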
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
1. The RDB loader now loads big strings in chunks.
2. Snapshot compression logic is disabled in case of a big buffer.
Signed-off-by: adi_holden <adi@dragonflydb.io>