Is it OK to have tens of thousands of TS keys?


I plan on using RTS (RedisTimeSeries) to store data from a very large number of sensors. I will use downsampling, so there shouldn't be much data per key, but there will be many different keys, one for each sensor (tens of thousands of them, across different types of sensors).
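For context, here is a minimal sketch of the setup I have in mind. The key names, retention values, and the hourly-average downsampling rule are made up for illustration; the sketch only builds the raw `TS.CREATE` / `TS.CREATERULE` commands per sensor rather than sending them:

```python
# Sketch: build the RedisTimeSeries commands for one key per sensor,
# each with a downsampling rule into a smaller aggregated series.
# Key names, retention values, and bucket sizes are illustrative only.

def commands_for_sensor(sensor_id: str) -> list:
    raw_key = f"sensor:{sensor_id}:raw"
    avg_key = f"sensor:{sensor_id}:avg_1h"
    return [
        # raw series: keep 1 day of data (retention in milliseconds)
        ["TS.CREATE", raw_key, "RETENTION", "86400000"],
        # downsampled series: keep 30 days of hourly averages
        ["TS.CREATE", avg_key, "RETENTION", "2592000000"],
        # downsample raw -> hourly average buckets (3600000 ms)
        ["TS.CREATERULE", raw_key, avg_key, "AGGREGATION", "avg", "3600000"],
    ]

# Tens of thousands of sensors -> tens of thousands of key pairs.
all_commands = [
    cmd for i in range(10_000) for cmd in commands_for_sensor(f"s{i}")
]
```

With a client like redis-py, these could then be sent in batches through a pipeline (e.g. `pipe.execute_command(*cmd)`), so key creation itself is cheap even at this scale.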

Is there any limit on the number of keys we can have? Or will there be no performance degradation from adding many different keys?


Hi there Arturo. With regard to benchmarking RedisTimeSeries, we've started the process of supporting TSBS (a common benchmark suite acknowledged by both InfluxDB and TimescaleDB). With that in mind, breaking the answer into two parts:
- Performance of adding data to a single time-series:

  • It is constant over time. We've benchmarked adding up to 800K data points to a single time-series, with performance remaining steady, stable, and below a millisecond. Those values were presented today at Redis Day Bangalore, and we should be able to provide links to the material discussed soon.
- Maximum number of keys a single RedisTimeSeries instance can hold:
  • In RedisTimeSeries each time-series is a key in Redis, so in theory the same limits as vanilla Redis apply to TS (2^32 keys per instance, multiplied by the number of instances in your setup). In practice, in our largest benchmark we've reached 1 million keys on a single OSS local instance. You can get to larger numbers; it is only a matter of extending the datasets we use for testing.
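Since the per-instance limit multiplies by the number of instances, one way to picture scaling out is hashing each sensor id to pick an instance. A rough sketch, assuming a hypothetical list of instance addresses (in a real deployment, Redis Cluster distributes keys across shards for you via hash slots):

```python
import hashlib

# Hypothetical RedisTimeSeries instances; addresses are illustrative only.
INSTANCES = ["ts-0:6379", "ts-1:6379", "ts-2:6379", "ts-3:6379"]

def shard_for(sensor_id: str) -> str:
    # Stable hash of the sensor id -> always the same instance,
    # so writes and reads for a given sensor land on one shard.
    digest = hashlib.sha1(sensor_id.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(INSTANCES)
    return INSTANCES[index]
```

The point is only that total key capacity grows linearly with the number of instances; the routing itself is something a clustered setup handles automatically.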

This is a really cool question! In the very short term we'll add a more detailed version of this answer to a section of the documentation on the performance of RedisTimeSeries.
I’ll keep this thread up to date.