chore: refactorings around deletions
Done as a preparation to introduce asynchronous deletions for sets/zsets/hmaps.
1. Restrict the interface around DbSlice::Del. It now requires the iterator to be valid, and the validity checks must
be explicit before the call. Most callers already provide a valid iterator (see the sketch after this list).
2. Some minor refactoring in compact_object_test.
3. Expose DenseSet::ClearStep to allow iterative deletions of elements.
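A minimal sketch of the resulting usage, assuming illustrative signatures for FindMutable, IsValid and ClearStep (they may differ from the actual ones):

```cpp
// Hedged sketch - the exact DbSlice/DenseSet signatures may differ.

// (1) Callers check iterator validity explicitly; Del() assumes it is valid.
auto it = db_slice.FindMutable(db_cntx, key).it;
if (IsValid(it)) {
  db_slice.Del(db_cntx.db_index, it);
}

// (3) Iterative deletion of a large set, one chunk of buckets per step.
// Assumption: ClearStep(start, count) clears up to `count` buckets and returns
// the cursor to continue from, 0 once the set is fully cleared.
uint32_t cursor = 0;
do {
  cursor = dense_set->ClearStep(cursor, 1024);
} while (cursor != 0);
```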
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
The problem: apparently, jsoncons uses strtod by default when parsing doubles.
On some platforms (alpine/musl) this function uses lots of stack, which potentially can lead to stack corruption.
This PR configures jsoncons to use std::from_chars, which is more efficient and less stack-hungry.
The single include point for consuming jsoncons/json.hpp should be "core/json_object.h".
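For reference, a minimal standalone comparison of the two parsing paths (plain C++17 `<charconv>`; this is not the jsoncons configuration hook itself):

```cpp
#include <charconv>
#include <cstdlib>
#include <string_view>

// Locale-aware; on musl it can use a surprisingly large stack buffer internally.
double ParseStrtod(const char* s) {
  return strtod(s, nullptr);
}

// Locale-independent and shallow on the stack (C++17, <charconv>).
double ParseFromChars(std::string_view s) {
  double value = 0;
  std::from_chars(s.data(), s.data() + s.size(), value);
  return value;
}
```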
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
For legacy mode:
1. For mutate commands, an empty result should throw an error
2. For read commands, it returns nil if the path was not found, but if the path matched only elements of the wrong type, it throws an error.
For non-legacy mode, objlen should throw an error for a non-existing key.
Simplified JsonCallbackResult a bit and made sure more fakeredis tests are passing.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* fix: JSON.STRAPPEND
JSON.STRAPPEND was completely broken.
First, it accepts exactly 3 arguments, i.e. a single value to append.
Second, the value must be a JSON string, not a regular string, meaning it must be wrapped in double quotes.
So, before we parsed: `JSON.STRAPPEND key $.field bar` and now we parse:
`JSON.STRAPPEND key $.field "bar"`
In addition, fixed the behavior of JSON.STRLEN to return a "no such key" error when the
json key does not exist and a path is specified.
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* feat: Support non-root paths for json.merge
Pass the path argument and rewrite the JSON.MERGE code similarly to OpToggle
and other mutating functions. Currently works only with --experimental_flat_json=false.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
* chore: comments
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
json.merge would throw an exception when the json object did not contain the element to replace, because the RecursiveMerge function used &dest->at(k_v.key()), which threw. Remove RecursiveMerge completely and use the merge-patch implementation from the jsoncons library.
* add test
* replace RecursiveMerge with mergepatch::apply_merge_patch
* add exception handling for that flow
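A minimal sketch of the replacement flow, assuming the jsoncons mergepatch extension header; the error handling around it is illustrative:

```cpp
#include <jsoncons/json.hpp>
#include <jsoncons_ext/mergepatch/mergepatch.hpp>

// Applies an RFC 7386 merge patch onto `target` in place. Unlike the removed
// RecursiveMerge, missing keys are created instead of at() throwing.
bool ApplyMergePatch(jsoncons::json& target, const jsoncons::json& patch) {
  try {
    jsoncons::mergepatch::apply_merge_patch(target, patch);
    return true;
  } catch (const std::exception&) {
    return false;  // illustrative: the real flow reports the error to the caller
  }
}
```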
---------
Signed-off-by: kostas <kostas@dragonflydb.io>
* Revert "Revert "chore: get rid of kv_args and replace it with slices to full_… (#3024)"
This reverts commit 25e6930ac3.
Fixes the performance bug caused by applying the equality operator to two spans
inside ShardArgs::Iterator::operator==.
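A hedged illustration of the issue (member names are hypothetical): span equality compares contents element by element, while comparing positions is constant time:

```cpp
#include <cstddef>
#include <string_view>
#include <absl/types/span.h>

struct IteratorSketch {
  absl::Span<const std::string_view> slice_;  // current sub-range of arguments
  size_t index_ = 0;                          // position within the full argument list

  // Before: `slice_ == o.slice_` compared the spans element-wise - O(n) per call.
  // After: compare positions only - O(1).
  bool operator==(const IteratorSketch& o) const { return index_ == o.index_; }
};
```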
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Signed-off-by: Vladislav Oleshko <vlad@dragonflydb.io>
Co-authored-by: Vladislav Oleshko <vlad@dragonflydb.io>
The main change is in tx_base.*, where we introduce a ShardArgs slice that
is only forward-iterable. It allows us to go over sub-ranges of the full arguments
slice or to read the index of any of its elements.
Since ShardArgs now provides indices into the original argument list, we no longer need to build the reverse index in transactions.
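A rough model of the idea, simplified relative to the real ShardArgs in tx_base.h:

```cpp
#include <cstdint>
#include <string_view>
#include <utility>
#include <vector>

// Simplified model: the full argument list plus the [start, end) index ranges
// that belong to one shard. Iteration is forward-only and exposes the original
// index of each argument, so no reverse index has to be built.
struct ShardArgsSketch {
  const std::vector<std::string_view>* full_args;
  std::vector<std::pair<uint32_t, uint32_t>> ranges;

  template <typename Cb> void ForEach(Cb&& cb) const {
    for (auto [start, end] : ranges)
      for (uint32_t i = start; i < end; ++i)
        cb((*full_args)[i], i);  // argument value and its index in the original list
  }
};
```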
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Done in preparation to make ShardArgs a smart iterable type,
but currently it's just a wrapper around ArgSlice.
Also refactored common.{h,cc} into tx_base.{h,cc}
In addition, fixed a bug in key tracking, where we wrongly created a weak_ref
in a shard thread instead of doing this in the coordinator thread.
Finally, identified another bug (not fixed yet) where we track all the arguments
instead of tracking keys only.
Besides this, no functional changes around the moved code.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
A self-laundering iterator will enable us to, eventually, yield from fibers while holding an iterator. For example:
```cpp
auto it1 = db_slice.Find(...);
Yield(); // Until now - this could have invalidated `it1`
auto it2 = db_slice.Find(...);
```
Why is this a good idea? Because it will enable yielding inside PreUpdate(), which will allow writing huge entries to disk/network in small chunks, eliminating the need to allocate huge blocks of memory just for serialization.
Also, it'll probably unlock future developments, as yielding can be useful in other contexts.
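A hypothetical sketch of what that unlocks (ChunkedSerializer, sink and the exact Find() arguments are illustrative, not existing APIs):

```cpp
auto it = db_slice.Find(cntx, key);
ChunkedSerializer ser(it);            // hypothetical helper that serializes piece by piece
while (ser.HasMore()) {
  sink->Write(ser.NextChunk(4096));   // flush a small chunk to disk/network
  ThisFiber::Yield();                 // safe: `it` stays valid across the yield
}
```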
* chore: implement path mutation for JsonFlat
This is done by converting the flat blob into a mutable c++ json object
and then building a new flat blob.
Also, add JsonFlat encoding to CompactObject class.
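A hedged outline of the mutation path; FromFlat, ToFlat and SetJson are illustrative names, not the actual API:

```cpp
jsoncons::json tmp = FromFlat(flat_blob);   // flat blob -> mutable c++ json
MutatePath(tmp, path, mutation_cb);         // reuse the regular json mutation path
std::string new_blob = ToFlat(tmp);         // mutable json -> new flat blob
compact_obj.SetJson(std::move(new_blob));   // store the rebuilt JsonFlat encoding
```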
---------
Signed-off-by: Roman Gershman <roman@dragonflydb.io>