The issue is that SCAN has high latency (in the seconds range) when keys are large and there are no matches. Iterating over 10k buckets and string-matching each of the keys is a potentially expensive operation whose cost depends on the keyspace and the number of actual matches.
* replace heuristic in scan command to throttle based on time
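A rough illustration of the time-based throttle follows. This is a sketch only: ScanOneBucket, ScanWithTimeBudget, and the budget parameter are hypothetical names, not Dragonfly's actual identifiers.

```cpp
#include <chrono>
#include <cstdint>

// Stand-in for the real per-bucket iteration: visit one bucket, string-match
// its keys against the pattern, and advance the cursor.
static void ScanOneBucket(uint64_t* cursor) {
  --*cursor;
}

// Returns true if the traversal finished, false if the time budget ran out
// and the caller should hand the partial cursor back to the client.
static bool ScanWithTimeBudget(uint64_t* cursor, std::chrono::microseconds budget) {
  const auto deadline = std::chrono::steady_clock::now() + budget;
  while (*cursor != 0) {
    ScanOneBucket(cursor);
    if (std::chrono::steady_clock::now() >= deadline)
      return false;  // throttle on elapsed time, not on a fixed bucket count
  }
  return true;
}
```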
Signed-off-by: kostas <kostas@dragonflydb.io>
* chore(hset_family): Support resp3 format for hrandfield
Return nested arrays if HRANDFIELD is used with the WITHVALUES option.
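The wire-level difference can be sketched as follows. This is an illustration of the RESP2/RESP3 encodings, not the actual reply-builder code in hset_family:

```cpp
#include <string>
#include <utility>
#include <vector>

// Serializes HRANDFIELD ... WITHVALUES results: a flat array of 2n bulk
// strings for RESP2, or an array of n two-element [field, value] arrays
// for RESP3.
void EncodeFieldValues(const std::vector<std::pair<std::string, std::string>>& pairs,
                       bool resp3, std::string* out) {
  auto bulk = [out](const std::string& s) {
    out->append("$").append(std::to_string(s.size())).append("\r\n");
    out->append(s).append("\r\n");
  };
  if (resp3) {
    out->append("*").append(std::to_string(pairs.size())).append("\r\n");
    for (const auto& [field, value] : pairs) {
      out->append("*2\r\n");  // nested two-element pair array
      bulk(field);
      bulk(value);
    }
  } else {
    out->append("*").append(std::to_string(pairs.size() * 2)).append("\r\n");
    for (const auto& [field, value] : pairs) {  // flat field, value, field, ...
      bulk(field);
      bulk(value);
    }
  }
}
```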
Signed-off-by: Abhijat Malviya <abhijat@dragonflydb.io>
* feat(dfly_bench): allow regulated throughput in 3 modes
1. Coordinated omission - with --qps=0, each request is sent, we wait for its response, and only then send the next one.
In pipeline mode, k requests are sent and we wait for all of them to return before sending another k.
2. qps > 0: we schedule requests at frequency "qps" per connection, but if the pending-request count crosses a limit
we slow down by throttling request sending. This mode enables gentle uncoordinated omission, where the schedule
converges to the real throughput capacity of the backend (if it is slower than the target throughput).
3. qps < 0: similar to (2), but the scheduling is never adjusted, so the benchmark may overload the server
if the target QPS is too high. A pacing sketch follows this list.
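A minimal sketch of modes (2) and (3), with hypothetical names (Conn, kPendingLimit, RunSchedule) standing in for the real dfly_bench internals:

```cpp
#include <chrono>
#include <thread>

struct Conn {
  int pending = 0;  // sent but not yet answered (a reply handler decrements this)
  void SendNext() { ++pending; /* write one request to the socket */ }
};

// adjust=true corresponds to mode (2), adjust=false to mode (3).
void RunSchedule(Conn& c, double qps, bool adjust, int total_requests) {
  using namespace std::chrono;
  constexpr int kPendingLimit = 30;  // illustrative throttle threshold
  const auto interval =
      duration_cast<steady_clock::duration>(duration<double>(1.0 / qps));
  auto next = steady_clock::now();
  for (int sent = 0; sent < total_requests;) {
    next += interval;                      // fixed schedule: "qps" requests/sec
    std::this_thread::sleep_until(next);
    if (adjust && c.pending >= kPendingLimit) {
      next = steady_clock::now();          // mode (2): back off until the backlog drains
      continue;
    }
    c.SendNext();                          // mode (3) keeps sending regardless
    ++sent;
  }
}
```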
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: change pipelining and coordinated omission logic
Before this change, uncoordinated omission worked only without pipelining.
Now, in pipelining mode we send a burst of P requests and then:
a) For coordinated omission - wait for all of them to complete before proceeding
further
b) For uncoordinated omission - we sleep to pace the per-connection throughput as
defined by the qps setting. A sketch of both paths follows below.
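Roughly, with hypothetical names (PipelinedConn, RunBursts) rather than the actual dfly_bench code:

```cpp
#include <chrono>
#include <thread>

// Hypothetical connection wrapper; WaitAllResponses blocks until every
// in-flight request of the burst has been answered.
struct PipelinedConn {
  void SendBurst(int p) { /* write p requests back to back */ }
  void WaitAllResponses() { /* block until the burst completes */ }
};

void RunBursts(PipelinedConn& c, int P, double qps, bool coordinated,
               int num_bursts) {
  using namespace std::chrono;
  // With P requests per burst, one burst every P/qps seconds hits the target.
  const auto burst_interval =
      duration_cast<steady_clock::duration>(duration<double>(P / qps));
  auto next = steady_clock::now();
  for (int b = 0; b < num_bursts; ++b) {
    c.SendBurst(P);
    if (coordinated) {
      c.WaitAllResponses();                 // (a) proceed only after the burst completes
    } else {
      next += burst_interval;               // (b) pace to qps no matter how fast
      std::this_thread::sleep_until(next);  //     the replies come back
    }
  }
}
```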
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* evict in heartbeat if expire table is not empty
* add metrics around heartbeat evictions (total evictions and total evicted bytes)
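A minimal sketch of the heartbeat step and the two counters, assuming hypothetical names (EvictionStats, EvictBatch, HeartbeatEvictionStep):

```cpp
#include <cstddef>
#include <cstdint>

struct EvictionStats {
  uint64_t heartbeat_evictions = 0;      // new metric: total evicted items
  uint64_t heartbeat_evicted_bytes = 0;  // new metric: total evicted bytes
};

struct BatchResult {
  uint64_t items = 0;
  size_t bytes = 0;
};

// Stand-in for evicting a small batch of items that have expiry set.
static BatchResult EvictBatch() { return {}; }

void HeartbeatEvictionStep(bool expire_table_empty, bool memory_pressure,
                           EvictionStats* stats) {
  // Evict only when the expire table has candidates and memory is tight.
  if (expire_table_empty || !memory_pressure)
    return;
  BatchResult r = EvictBatch();
  stats->heartbeat_evictions += r.items;
  stats->heartbeat_evicted_bytes += r.bytes;
}
```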
Signed-off-by: kostas <kostas@dragonflydb.io>
fix(set_family): Transfer TTL flag from link to object in delete
When extracting a DensePtr from a LinkObject we need to transfer the TTL flag
before this DensePtr is assigned.
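A simplified sketch of the fix; DensePtrSketch and LinkObjectSketch are stand-ins for the real DensePtr/LinkObject, which encode the flag differently:

```cpp
// Toy stand-ins: the TTL flag is modeled as a plain bool here.
struct DensePtrSketch {
  void* obj = nullptr;
  bool has_ttl = false;
};

struct LinkObjectSketch {
  DensePtrSketch next;   // rest of the chain
  DensePtrSketch ptr;    // the wrapped object
  bool has_ttl = false;  // TTL flag carried by the link while chained
};

// Extracts the object pointer during delete. Without the marked line the TTL
// bit stays on the discarded link and the object silently loses its expiry.
DensePtrSketch ExtractObject(LinkObjectSketch* link) {
  DensePtrSketch out = link->ptr;
  out.has_ttl = link->has_ttl;  // the fix: transfer TTL flag before assignment
  return out;
}
```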
Fixes #3915
Signed-off-by: mkaruza <mario@dragonflydb.io>
* chore: Make snapshotting more responsive
This should improve the situation around #4787 -
maybe not solve it completely, but improve it significantly.
In my tests, when snapshotting under read traffic against the master
(memtier_benchmark --ratio 0:1 -d 256 --test-time=400 --distinct-client-seed --key-maximum=2000000 -c 5 -t 2 --pipeline=3),
throughput dropped from 250K qps to 8K qps during the full sync phase.
With this PR, the throughput went up to 70-80K qps.
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Close the MONITOR connection if we overflow pipeline limits. It can happen
that a MONITOR connection dispatches messages more slowly than they are received,
causing memory to grow without bound (which can crash the process).
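Sketched below with hypothetical names (MonitorConn, OnTrafficForMonitor); in the real code the limit guards the connection's dispatch queue:

```cpp
#include <cstddef>

struct MonitorConn {
  size_t queued_bytes = 0;  // bytes waiting to be dispatched to the client
  bool open = true;
  void Close() { open = false; }
};

void OnTrafficForMonitor(MonitorConn* conn, size_t msg_bytes,
                         size_t pipeline_limit) {
  if (!conn->open)
    return;
  conn->queued_bytes += msg_bytes;
  if (conn->queued_bytes > pipeline_limit) {
    conn->Close();  // a MONITOR client that can't keep up is dropped,
                    // bounding memory instead of crashing the process
  }
}
```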
Signed-off-by: mkaruza <mario@dragonflydb.io>
The StringSet object doesn't update its time when FIELDEXPIRE is called; it
keeps using the base time from when the object was created. Update the object
time when we want to expire a field in the SET object.
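A simplified sketch of the fix, with StringSetSketch standing in for the real StringSet, which tracks expiry relative to its creation-time base:

```cpp
#include <cstdint>
#include <ctime>

struct StringSetSketch {
  uint32_t time_now = 0;  // seconds elapsed since the set's base time
  void set_time(uint32_t t) { time_now = t; }
};

void FieldExpireSketch(StringSetSketch* ss, uint32_t base_time_sec,
                       uint32_t ttl_sec) {
  uint32_t now = static_cast<uint32_t>(std::time(nullptr)) - base_time_sec;
  ss->set_time(now);  // the fix: refresh the object clock instead of
                      // computing expiry against the stale creation time
  uint32_t expiry_point = now + ttl_sec;
  (void)expiry_point;  // ... stored with the field as its expiry deadline ...
}
```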
Fixes #4894
Signed-off-by: mkaruza <mario@dragonflydb.io>
Remove the key argument in DbSlice::CallChangeCallbacks, which is used only
for DVLOG(2) logging. DbSlice::OnCbFinishBlocking needs to call this
function but doesn't have the key name, so it would have to retrieve it from
the iterator just to pass it along.
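A sketch of the resulting shape, with deliberately simplified types (PrimeIterator here is just a std::map iterator, and std::clog stands in for DVLOG(2)):

```cpp
#include <iostream>
#include <map>
#include <string>

using DbIndex = unsigned;
using PrimeIterator = std::map<std::string, std::string>::iterator;  // simplified

// No key parameter: when verbose logging needs the key, it is read from the
// iterator instead of being threaded through by every caller.
void CallChangeCallbacks(DbIndex db_ind, PrimeIterator it) {
  std::clog << "Running change callbacks for key " << it->first
            << " in dbid " << db_ind << '\n';
  // ... run the registered change callbacks against `it` ...
}
```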
Signed-off-by: mkaruza <mario@dragonflydb.io>
The pytest test_exit_on_s3_snapshot_load_err can raise an exception on start in
some test environments. Now we wait for the exception during instance start and
stop.
Signed-off-by: mkaruza <mario@dragonflydb.io>
When setting a TTL, the object is cloned and an extra 4 bytes is requested
during allocation. This can result in the object being allocated from a
larger page, e.g. moving from a 16-byte page to a 32-byte page. The page block
size is used to report the object allocation size. Currently this change in
size is not reflected in the total memory usage of the dense set, so it
remains 16 bytes while the object's allocated size is now 32 bytes.
If such an object is later replaced, we deduct the size of the object
from total memory usage of the set. Here we can run into an overflow,
because the size of the object is deducted from the tracked size of the
set, and the former is greater than the latter.
To avoid this, if during setting expiry time the new size is different
from old size, we update the set size.
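The accounting fix reduces to the following sketch; names like DenseSetAccounting and OnTtlClone are hypothetical:

```cpp
#include <cstddef>

// Hypothetical accounting state; in the real code this is the dense set's
// tracked total of member allocation sizes.
struct DenseSetAccounting {
  size_t obj_malloc_used = 0;
};

// Called when an object is cloned with 4 extra bytes to attach a TTL.
// old_alloc_sz / new_alloc_sz are the allocator block sizes (page classes),
// e.g. 16 and 32 bytes respectively.
void OnTtlClone(DenseSetAccounting* set, size_t old_alloc_sz,
                size_t new_alloc_sz) {
  if (new_alloc_sz != old_alloc_sz) {
    // Keep the tracked total in sync with the page-class jump, so a later
    // replacement can never deduct more than was added (the overflow above).
    set->obj_malloc_used += new_alloc_sz - old_alloc_sz;
  }
}
```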
Exit the process if an error is reported when we initially try to load a snapshot from
cloud storage or a local directory.
Fixes #4840
Signed-off-by: mkaruza <mario@dragonflydb.io>
When a search operation is performed on a hash set, expired fields are
removed as a side effect.
If at the end of such an operation the hash set becomes empty, its key
is removed from the database.
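A toy sketch of the cleanup, using plain std::unordered_map containers instead of the real hash-set and db-table types:

```cpp
#include <string>
#include <unordered_map>

// Toy model of the keyspace: key -> hash (field -> value).
using Hash = std::unordered_map<std::string, std::string>;
using Db = std::unordered_map<std::string, Hash>;

// After a search pass has erased expired fields as a side effect, drop the
// key itself if nothing is left in the container.
void CleanupAfterSearch(Db* db, const std::string& key) {
  auto it = db->find(key);
  if (it != db->end() && it->second.empty())
    db->erase(it);
}
```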
Signed-off-by: Abhijat Malviya <abhijat@dragonflydb.io>