thread_queue_backpressure is a global array of per-thread QueueBackpressure
objects. We referenced these objects incorrectly in 1.27.0-2.
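For illustration, a minimal sketch of the intended layout; the names and fields are placeholders, not the actual Dragonfly code:

```cpp
#include <vector>

struct QueueBackpressure {
  size_t pending_bytes = 0;  // simplified stand-in for the real fields
};

// One slot per proactor thread, sized once at startup.
static std::vector<QueueBackpressure> thread_queue_backpressure;

void InitBackpressure(unsigned num_threads) {
  thread_queue_backpressure.resize(num_threads);
}

QueueBackpressure& MyBackpressure(unsigned my_thread_id) {
  // Each thread must dereference only its own slot; indexing with another
  // thread's id (or before Init) is the kind of incorrect reference fixed here.
  return thread_queue_backpressure[my_thread_id];
}
```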
Fixes #4770
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* feat(server): Cluster MOVED response Prometheus metric
Count the reported MOVED errors and expose the count as a Prometheus metric.
Closes #4749
Signed-off-by: mkaruza <mario@dragonflydb.io>
* Update help string
---------
Signed-off-by: mkaruza <mario@dragonflydb.io>
The bug: when overriding an existing external string, we called
`TieredStorage::Delete` to delete the external reference. This function
called CompactObj::Reset, which cleared all the attributes on the value,
including its expiry.
The fix: preserve the mask but clear the external state from the object.
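A conceptual sketch of the fix with hypothetical names (the real CompactObj internals differ): flip off only the external bit, so the expiry flag and friends survive.

```cpp
#include <cstdint>

constexpr uint8_t kExpiryBit = 1 << 0;    // illustrative flag layout
constexpr uint8_t kExternalBit = 1 << 1;

struct Value {
  uint8_t mask = 0;
  void ResetAll() { mask = 0; }  // what the old Delete path effectively did
};

void ClearExternal(Value& v) {
  // FIX: clear only the external state; other attributes are preserved.
  v.mask &= ~kExternalBit;
}
```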
Fixes #4672
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
The bug: during loading, when appending to an existing object, the
ItAndUpdater scope did not account for the appended data; as a result,
`object_used_memory` and its variants did not account for streamed objects.
The fix: extend the scope of the ItAndUpdater object to cover the appends.
Added a sanity DCHECK that ensures object_used_memory is at least the memory
used by a single object. This DCHECK fails pre-fix.
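A conceptual sketch of the scoping idea with hypothetical names (not the loader's real API): memory accounting happens when the updater goes out of scope, so every append must occur inside that scope.

```cpp
#include <cstddef>
#include <string>

struct Stats { size_t object_used_memory = 0; };

class ItAndUpdater {
 public:
  ItAndUpdater(std::string* obj, Stats* stats)
      : obj_(obj), stats_(stats), seen_(obj->size()) {}
  // The destructor reconciles the accounting with the object's final size.
  ~ItAndUpdater() { stats_->object_used_memory += obj_->size() - seen_; }
  void Append(const std::string& chunk) { obj_->append(chunk); }

 private:
  std::string* obj_;
  Stats* stats_;
  size_t seen_;  // bytes already accounted for at scope entry
};

void LoadAppend(std::string* obj, Stats* stats, const std::string& chunk) {
  ItAndUpdater updater(obj, stats);
  updater.Append(chunk);  // pre-fix, appends ran outside the updater's scope
}  // accounting happens here, now covering the appended bytes
```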
Fixes #4773
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
The `pubsub.ssubscribe` function does not wait for any response from the
server. Fixed by executing SSUBSCRIBE commands directly in the client.
Signed-off-by: mkaruza <mario@dragonflydb.io>
* feat(metrics): Add label for main and other listeners
The stats collected per connection are now split by listener: metrics are
decorated with the label listener=main or listener=other.
The memcached listener is also labelled main.
Signed-off-by: Abhijat Malviya <abhijat@dragonflydb.io>
Add the ability to use the SAVE and BGSAVE commands with a dynamic cloud storage path.
The new syntax is:
SAVE [RDB|DF] [CLOUD_URI] [BASENAME], where CLOUD_URI should start with an S3 or GCS prefix.
For example, with the working directory pointing to a local folder, executing
`SAVE DF s3://bucket/snapshots my_snapshot` saves the snapshot to `s3://bucket/snapshots`
with basename `my_snapshot`.
Resolves #4660
---------
Signed-off-by: mkaruza <mario@dragonflydb.io>
The RESP protocol supports binary strings, so we now generate commands using
RESP, allowing binary strings to be passed (see the sketch below).
In addition, fixed the "done" metric, which did not account for the number of
shards in cluster mode.
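Why RESP helps here: bulk strings are length-prefixed, so a payload may contain any byte, including '\0' and "\r\n". A minimal encoder for one argument, illustrative rather than the actual dfly_bench code:

```cpp
#include <string>
#include <string_view>

std::string EncodeBulkString(std::string_view arg) {
  // "$<len>\r\n<bytes>\r\n" - the length prefix makes escaping unnecessary.
  std::string out = "$" + std::to_string(arg.size()) + "\r\n";
  out.append(arg.data(), arg.size());
  out += "\r\n";
  return out;
}

// EncodeBulkString(std::string_view("a\0b", 3)) carries the NUL byte intact.
```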
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* feat(hset_family): Add support for KEEPTTL to HSETEX
If specified, the KEEPTTL option ensures that the TTL is preserved for existing members.
Signed-off-by: Abhijat Malviya <abhijat@dragonflydb.io>
Channel store uses read-copy-update (RCU) to distribute changes of the channel store to all proactors. The problem is that we used memory_order_relaxed to load the new channel store pointer on each proactor, which *does not guarantee* that we fetch its latest value. The fix is to use sequential consistency to force fetching the latest value of the channel store.
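A minimal sketch of the pattern in standard C++ atomics; ChannelStore and current_store are placeholder names, not the actual code:

```cpp
#include <atomic>

struct ChannelStore { /* immutable snapshot of channel subscriptions */ };

std::atomic<ChannelStore*> current_store;

// Writer: publish a new copy (the RCU update step).
void Publish(ChannelStore* next) {
  current_store.store(next, std::memory_order_seq_cst);
}

// Reader (per proactor): with memory_order_relaxed there was no guarantee
// of observing the most recently published pointer; seq_cst provides the
// ordering the fix relies on.
ChannelStore* Fetch() {
  return current_store.load(std::memory_order_seq_cst);
}
```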
iouring allows registering a pool of predefined buffers with the kernel.
During a recv operation the kernel then chooses a buffer from the pool, copies
data into it and returns it to the application. This is in contrast to
preallocated buffers that must be passed to a regular Recv. So, for example,
if we have 10000 connections, today we preallocate 10000 buffers, even though
we may have only 100 in-flight requests.
This PR does not retire the old approach, but extends it with a new one, guarded by
the flag `--uring_recv_buffer_cnt=N` that specifies how many receive buffers per thread to preallocate.
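A minimal liburing sketch of ring-provided buffers, assuming a caller-allocated pool; the counts, group id, and helper names here are illustrative, not Dragonfly's code:

```cpp
#include <liburing.h>

constexpr unsigned kBufCount = 256;  // e.g. --uring_recv_buffer_cnt (power of 2)
constexpr unsigned kBufSize = 4096;
constexpr int kGroupId = 1;

void SetupProvidedBuffers(io_uring* ring, char* pool) {
  int err = 0;
  io_uring_buf_ring* br =
      io_uring_setup_buf_ring(ring, kBufCount, kGroupId, 0, &err);
  if (!br) return;  // err holds -errno
  for (unsigned i = 0; i < kBufCount; ++i) {
    io_uring_buf_ring_add(br, pool + i * kBufSize, kBufSize, i,
                          io_uring_buf_ring_mask(kBufCount), i);
  }
  io_uring_buf_ring_advance(br, kBufCount);
}

void QueueRecv(io_uring* ring, int fd) {
  io_uring_sqe* sqe = io_uring_get_sqe(ring);
  // No buffer is passed here; the kernel picks one from group kGroupId
  // only when data actually arrives.
  io_uring_prep_recv(sqe, fd, nullptr, kBufSize, 0);
  sqe->flags |= IOSQE_BUFFER_SELECT;
  sqe->buf_group = kGroupId;
}
```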
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
The debug compression command now runs over all the keys to build a byte histogram.
Based on the histogram it estimates the potential savings of Huffman-compressing the keyspace.
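A sketch of the estimation idea (not the actual implementation): given a 256-bin byte histogram, the classic merge trick yields the total encoded bit count without building explicit codes, since each merge adds one bit to every symbol beneath it.

```cpp
#include <cstdint>
#include <functional>
#include <queue>
#include <vector>

double EstimatedHuffmanBytes(const std::vector<uint64_t>& freq /* 256 bins */) {
  std::priority_queue<uint64_t, std::vector<uint64_t>, std::greater<>> pq;
  for (uint64_t f : freq)
    if (f) pq.push(f);
  uint64_t total_bits = 0;
  while (pq.size() > 1) {
    uint64_t a = pq.top(); pq.pop();
    uint64_t b = pq.top(); pq.pop();
    total_bits += a + b;  // sums to sum(freq[i] * code_length[i])
    pq.push(a + b);
  }
  // Estimated payload size; savings ~= raw_bytes - this (table overhead ignored).
  return total_bits / 8.0;
}
```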
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
fix: improve stack margin for S3 operations.
Our S3 code relies on the AWS SDK client, which is extremely stack-hungry.
This PR moves some of the S3 calls to one-off fibers with increased stacks,
which reduces stack usage for the connection fibers executing snapshot save/load operations.
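An illustrative pattern only, shown with plain Boost.Fiber rather than Dragonfly's own fiber wrapper: run the stack-hungry SDK call on a one-off fiber that owns a larger fixed-size stack.

```cpp
#include <boost/fiber/all.hpp>
#include <memory>

void SaveToS3WithBigStack() {
  constexpr std::size_t kStackSize = 1 << 20;  // 1 MiB vs. the small default
  boost::fibers::fiber fb(
      std::allocator_arg, boost::fibers::fixedsize_stack(kStackSize),
      [] { /* invoke the stack-hungry AWS SDK call here */ });
  fb.join();  // the caller blocks, but its own stack stays shallow
}
```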
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Add support for PUBSUB SHARDCHANNELS and PUBSUB SHARDNUMSUB, and report an
error if the subcommand is not allowed to run in non-cluster mode.
Resolves #847
Signed-off-by: mkaruza <mario@dragonflydb.io>
The bug: expiring keys during heartbeat must not preempt while writing to the journal, and we assert this with a FiberAtomicGuard. However, this atomicity guarantee was violated because the journal callback acquires a lock on a mutex that is already locked in OnJournalEntry(). The fix is to release the lock when OnJournalEntry() preempts.
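One common way to achieve what the fix describes, shown as a simplified illustration rather than the actual journal code: snapshot the callbacks under the mutex, release it, then invoke them, so a preempting callback never runs while the lock is held.

```cpp
#include <functional>
#include <mutex>
#include <vector>

std::mutex mu;
std::vector<std::function<void()>> journal_cbs;

void OnJournalEntry() {
  std::vector<std::function<void()>> local;
  {
    std::lock_guard<std::mutex> lk(mu);
    local = journal_cbs;  // copy under the lock
  }
  for (auto& cb : local)
    cb();  // may preempt the fiber; mu is already released
}
```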
Signed-off-by: kostas <kostas@dragonflydb.io>
When `KEEPTTL` is optionally supplied after the key, any existing members of the set preserve their TTL values.
Only new members get the TTL applied to them.
fix(dfly_bench): support DNS resolution for cluster hosts and multiple slot ranges.
Initial parsing of the MOVED response is done, but slot migration is not supported yet.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Make RedisParser::Buffer const, plus some minor changes in the dragonfly_connection code.
No functionality is changed.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* feat(set_family): Update TTL for existing fields in SADDEX
In SADDEX, a TTL is now also applied to existing fields: if a field
already exists in the set, its TTL is updated.
A new flag, legacy_saddex_keepttl, is introduced, which is false by
default. If this flag is set to true, SADDEX keeps the legacy behavior.
Signed-off-by: Abhijat Malviya <abhijat@dragonflydb.io>
---------
Signed-off-by: Abhijat Malviya <abhijat@dragonflydb.io>
Mainly comments and refactorings.
There are two functional differences:
1. Flush serialized entries once we have gathered at least K delayed
entries coming from tiered entities.
2. Allow loading snapshots larger than memory for tiered-enabled datastores.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Co-authored-by: Kostas Kyrimis <kostas@dragonflydb.io>