Has the RESET option in the INCRBY command been removed from Redis TSDB?

I was just revisiting the documentation of Redis TSDB at https://oss.redislabs.com/redistimeseries/commands/ and could not find the ‘RESET’ option in the INCRBY command. Has there been any change regarding this, and if yes, what is the alternative for working with counters and gauges in Redis TSDB?

Hi,

Yes, we removed this feature since it was very confusing and was inconsistent with the rest of the API.

What did you use it for?

Technically, if you want similar functionality, you can achieve it by creating a downsampling rule with a similar time bucket and setting a minimal retention time on the original time series.

e.g.

```
TS.CREATE mykey RETENTION 1
TS.CREATE tokey
TS.CREATERULE mykey tokey AGGREGATION sum 10
```
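The aggregated values can then be read back from the destination key, for example:

```
# Latest aggregated sample only:
TS.GET tokey

# Full history; - and + mean the minimum and maximum timestamps in the series.
TS.RANGE tokey - +
```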

Guy

I used the RESET option to have the value of a metric increase over a specific duration of time and then reset after that. With the downsampling approach, the time bucket option will always give the value for the last ‘n’ minutes, which in turn won’t give me the correct value for a specific period and will give an overlapped, multiply-counted value, right? So one event would be tracked in ‘n’ different time instances.

You can use TS.ADD to add all these values and then TS.RANGE with AGGREGATION sum. If you used TS.DECRBY, you will have to make sure you add a negative value.
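A rough sketch of that approach, with made-up key names and timestamps:

```
TS.ADD requests 1000 1
TS.ADD requests 4000 1
# A decrement is just a negative sample.
TS.ADD requests 9000 -1

# Sum everything per 10-second (10000 ms) bucket over an explicit range.
TS.RANGE requests 0 10000 AGGREGATION sum 10000
```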

TS.ADD won’t do the job, because when I write different values at the same timestamp for a metric, the value will be overwritten. RESET helped in that case as it maintained the counter very well.

Due to the implementation of double delta compression we have disabled overwriting a sample. We may add an option to have multiple data points for one timestamp. Would that satisfy your use case?
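For example (the exact error text varies between versions):

```
TS.ADD mymetric 1548149180000 5

# A second sample at the same timestamp is rejected with an error
# rather than being overwritten or added to the existing value.
TS.ADD mymetric 1548149180000 3
```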

So is this option added in this release, or will it be added in the future? And what do we mean by double delta compression, and is it implemented in the current release?

If adding on top of an existing value at a timestamp becomes possible, that would work for me.

Guy,

What exactly does the TIMEBUCKET mean when creating the rule from the source key to the destination key? So, for example, if the value of the time bucket is 10 seconds, the destination key will have a value for every 10-second interval, which in turn will hold the aggregation (sum, avg, etc.) of that 10-second window at that timestamp. And no value will be counted twice in any time bucket, right?

What exactly does the TIMEBUCKET mean when creating the rule from the source key to the destination key?

So, for example, if the value of the time bucket is 10 seconds, the destination key will have a value for every 10-second interval, which in turn will hold the aggregation (sum, avg, etc.) of that 10-second window at that timestamp.

Right

And no value will be counted twice in any time bucket, right?

Right
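To make that concrete with the earlier example (timestamps and the bucket size are in milliseconds, so a 10-second bucket is 10000; the numbers are only illustrative):

```
TS.CREATERULE mykey tokey AGGREGATION sum 10000

TS.ADD mykey 1000  1
TS.ADD mykey 4000  1
TS.ADD mykey 12000 1

# Conceptually, tokey gets one aggregated sample per bucket:
#   bucket [0, 10000)     -> sum 2  (the samples at 1000 and 4000)
#   bucket [10000, 20000) -> sum 1  (the sample at 12000)
# Each source sample lands in exactly one bucket, so nothing is counted twice.
```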

@ariel, awaiting your response.

So is this option added in this release, or will it be added in the future? And what do we mean by double delta compression, and is it implemented in the current release?

Multiple values on one timestamp will not be included in this version.

Double delta compression saves memory and is faster. It is a win-win, and it is included in the coming v1.2 release.
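Roughly speaking, double delta (delta-of-deltas) encoding stores the difference between consecutive timestamp deltas instead of the full timestamps. For samples that arrive at a regular interval, that second difference is usually zero, which compresses extremely well. A simplified illustration (not the exact storage format):

```
timestamps:       1000   2000   3000   4000
deltas:                  1000   1000   1000
delta-of-deltas:                   0      0
```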