It's a bit more efficient than Boost.Fibers due to better integration
of fibers with the Proactor loop.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
feat: increase the flexibility of thread assignment in DF.
Specifically, introduce the `conn_io_threads` and `conn_io_thread_start` flags that choose
which threads handle I/O. In addition, introduce the `num_shards` flag that can override
how many database shards exist in the dragonfly process.
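For illustration only, a launch that confines I/O handling to a few threads and overrides the shard count might look like the line below; the exact flag syntax and these values are assumptions, not part of this change:

```
dragonfly --conn_io_thread_start=2 --conn_io_threads=4 --num_shards=8
```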
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* feat(server): Handle GET parameter for SET command.
Return the previous value, as per Redis, when GET is specified on SET.
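For reference, this follows the documented Redis semantics of `SET ... GET`: return the old value of the key, or nil if the key did not exist. An illustrative session:

```
> SET key v1
OK
> SET key v2 GET
"v1"
> SET missing v1 GET
(nil)
```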
---------
Signed-off-by: chakaz <chakaz@chakaz>
Co-authored-by: chakaz <chakaz@chakaz>
Remove mentions of Boost.Fibers and of fibers_ext.
Done in preparation for switching to the helio-native fb2 implementation.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
This change removes most mentions of boost::fibers or util::fibers_ext.
Instead it introduces a "core/fibers.h" file that incorporates most of
the primitives under the dfly namespace. This is done in preparation for
switching from Boost.Fibers to helio native fibers.
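A minimal sketch of the indirection this creates, assuming simple type aliases (the names and backend header below are illustrative, not the actual contents of core/fibers.h): call sites include one header and use dfly:: names, so only this file changes when the fiber backend is swapped.

```cpp
// Hypothetical sketch of core/fibers.h; the real aliases may differ.
#pragma once

#include <boost/fiber/all.hpp>  // current backend (assumption)

namespace dfly {

using Fiber = ::boost::fibers::fiber;
using Mutex = ::boost::fibers::mutex;
using CondVar = ::boost::fibers::condition_variable;

}  // namespace dfly
```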
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
This PR updates the container release workflow to delete the
unstripped packages so that the stripped packages are not
overwritten.
This should reduce the size of the container images significantly.
Implements RCU (read-copy-update) for updating the centralized channel store.
Contrary to the old mechanism of sharding subscriber info across shards, a centralized store avoids a hop when fetching subscribers. In general this only slightly improves latency, but under heavy traffic on a single channel it allows "spreading" the load: a single shard is no longer the bottleneck, which increases throughput by multiple times.
See the channel_store header for implementation details.
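To illustrate the RCU idea only (this is not the actual channel_store code; the names, layout, and the use of std::shared_ptr as the retirement mechanism are assumptions): readers take a lock-free snapshot of the current map, while an updater copies it, modifies the copy, and publishes the new version atomically.

```cpp
#include <memory>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical read-copy-update sketch over a centralized subscriber map.
using SubscriberMap = std::unordered_map<std::string, std::vector<int>>;

class ChannelStoreSketch {
 public:
  ChannelStoreSketch() : current_(std::make_shared<SubscriberMap>()) {}

  // Readers: lock-free snapshot; stays valid even if an update lands later.
  std::shared_ptr<const SubscriberMap> Read() const {
    return std::atomic_load(&current_);
  }

  // Updaters: copy, modify the copy, publish atomically.
  // Writers are assumed to be serialized externally.
  void AddSubscriber(const std::string& channel, int conn_id) {
    auto next = std::make_shared<SubscriberMap>(*std::atomic_load(&current_));
    (*next)[channel].push_back(conn_id);
    std::atomic_store(&current_, std::shared_ptr<SubscriberMap>(std::move(next)));
  }

 private:
  std::shared_ptr<SubscriberMap> current_;
};
```

Readers that grabbed the old snapshot keep a valid map until they drop their reference, which stands in for the grace period that RCU relies on.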
fix: improve connection affinity heuristic.
1. fix potential crash bug when traversing connections with client list
2. fetch cpu/napi id information when handling a new connection.
3. add thread id (tid) and irqmatch fields to client list command.
4. Implement a heuristic, behind a flag, that puts a connection on the
CPU that handles the IRQ queue of its socket (a sketch follows below).
However, if too wide a gap develops between the number of connections on
IRQ threads and on the other threads, we fall back to the other threads.
In my tests I saw 15-20% CPU reduction when this heuristic is enabled.
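A hypothetical sketch of the fallback logic in item 4, assuming a per-thread connection counter and an illustrative gap threshold (none of the names or values below come from the actual code):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Prefer the thread pinned to the CPU that services the socket's IRQ queue,
// but fall back to the least loaded thread if the IRQ-affine thread is
// already too far ahead.
constexpr int kMaxConnGap = 64;  // illustrative threshold, not the real flag value

size_t PickThread(size_t irq_thread, const std::vector<int>& conns_per_thread) {
  size_t least_loaded =
      std::min_element(conns_per_thread.begin(), conns_per_thread.end()) -
      conns_per_thread.begin();
  int gap = conns_per_thread[irq_thread] - conns_per_thread[least_loaded];
  return gap > kMaxConnGap ? least_loaded : irq_thread;
}
```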
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* Fix crash in ZPOPMIN
Crash was due to an attempt to access nullptr[-1], which is bad :)
* Add test to repro crash.
There are some leftover debugging statements; they're somewhat useful, so I
kept them since the bug is not yet fixed.
* Copy patch by romange to solve the crash
Also re-enable (uncomment) the test in utility.py.
Signed-off-by: chakaz <chakaz@chakaz>
---------
Signed-off-by: chakaz <chakaz@chakaz>
Signed-off-by: Chaka <chakaz@users.noreply.github.com>
Co-authored-by: chakaz <chakaz@chakaz>