mget in the cluster performance issue on the client side #953
Comments
Please provide a minimal complete verifiable example so we can trace what's happening here.
@mp911de thanks for your reply, I have added an example as feedback.
Please confirm the node's free memory; if the node has no free memory, the mset() command would put huge pressure on the cluster.
For this case, I can confirm the cluster nodes have free memory, and I tested with mget only, not mget/mset together. In addition, the new cluster only holds fewer than 100 thousand short string key-values.
According to the code here, all is as expected. The JVM properly uses resources because of Lettuce's non-blocking I/O layer. If you find ways we can streamline CPU usage with more lightweight activity (i.e. preventing unnecessary actions), I'm happy to assist you. As of now, there's nothing left to do.
Hi @LinGoWei, were you able to RCA this?
E-mail received, thanks!
We have a use case where we want to mget 100 keys in the cluster using RedisAdvancedClusterAsyncCommandsImpl.mget().
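For context, a minimal sketch of that kind of call with Lettuce's cluster API might look like the following (the endpoint, key names, and the blocking get() at the end are illustrative assumptions, not taken from the original report; the exact return type of mget can differ between Lettuce versions):

```java
import io.lettuce.core.KeyValue;
import io.lettuce.core.RedisFuture;
import io.lettuce.core.RedisURI;
import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;
import io.lettuce.core.cluster.api.async.RedisAdvancedClusterAsyncCommands;

import java.util.List;
import java.util.stream.IntStream;

public class ClusterMgetSketch {

    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; replace with a real cluster node address.
        RedisClusterClient client = RedisClusterClient.create(RedisURI.create("redis://localhost:7000"));
        StatefulRedisClusterConnection<String, String> connection = client.connect();
        RedisAdvancedClusterAsyncCommands<String, String> async = connection.async();

        // 100 keys without hash tags, so they spread over many slots and
        // Lettuce splits the MGET into per-slot commands dispatched concurrently.
        String[] keys = IntStream.range(0, 100)
                .mapToObj(i -> "key:" + i)
                .toArray(String[]::new);

        RedisFuture<List<KeyValue<String, String>>> future = async.mget(keys);
        List<KeyValue<String, String>> values = future.get(); // blocking only for this demo

        System.out.println("Fetched " + values.size() + " values");

        connection.close();
        client.shutdown();
    }
}
```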
When we run stress tests on a single virtual machine with 4 CPUs, making more than 100 mget(100 keys) calls per second, the VM's CPU idle drops quickly and context switching is high. The program runs on the edge of crashing.
As the official documentation (https://github.com/lettuce-io/lettuce-core/wiki/Redis-Cluster) says, regular Redis Cluster commands are limited to single-slot key operations, so an mget of 100 keys will be dispersed across roughly 100 slots; the commands are then sent to the Redis servers concurrently via Netty NIO, with one context switch for sending a command and the next for receiving the response.
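To illustrate the dispersion point, here is a small sketch that counts how many distinct hash slots 100 plain keys land in (it uses Lettuce's SlotHash helper; the key naming scheme is an assumption):

```java
import io.lettuce.core.cluster.SlotHash;

import java.util.HashSet;
import java.util.Set;

public class SlotDispersion {

    public static void main(String[] args) {
        // Count how many distinct hash slots 100 plain (untagged) keys land in.
        Set<Integer> slots = new HashSet<>();
        for (int i = 0; i < 100; i++) {
            slots.add(SlotHash.getSlot("key:" + i));
        }
        // Usually close to 100 distinct slots, so a cluster MGET over these keys
        // is split into roughly that many single-slot commands.
        System.out.println("100 keys map to " + slots.size() + " distinct slots");
    }
}
```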
Should I give up on Redis Cluster mode for this use case and use client-side partitioning instead?