Redis Modules Commands

Accessing Redis module commands requires installation of the corresponding Redis module. For a quick start with Redis modules, try the Redismod Docker image.

RedisBloom Commands

These are the commands for interacting with the RedisBloom module. Below is a brief example, as well as documentation on the commands themselves.

Create and add to a bloom filter

import redis
r = redis.Redis()
r.bf().create("bloom", 0.01, 1000)
r.bf().add("bloom", "foo")

Create and add to a cuckoo filter

import redis
r = redis.Redis()
r.cf().create("cuckoo", 1000)
r.cf().add("cuckoo", "filter")

Create Count-Min Sketch and get information

import redis
r = redis.Redis()
r.cms().initbydim("dim", 1000, 5)
r.cms().incrby("dim", ["foo"], [5])
r.cms().info("dim")

Create a topk list, and access the results

import redis
r = redis.Redis()
r.topk().reserve("mytopk", 3, 50, 4, 0.9)
info = r.topk().info("mytopk")

class redis.commands.bf.commands.BFCommands[source]

Bloom Filter commands.

add(key, item)[source]

Add to a Bloom Filter key an item. For more information see BF.ADD.

create(key, errorRate, capacity, expansion=None, noScale=None)[source]

Create a new Bloom Filter key, where errorRate is the desired probability of false positives and capacity is the number of entries expected to be inserted. The default expansion value is 2. By default, the filter is auto-scaling. For more information see BF.RESERVE.

exists(key, item)[source]

Check whether an item exists in Bloom Filter key. For more information see BF.EXISTS.

info(key)[source]

Return capacity, size, number of filters, number of items inserted, and expansion rate. For more information see BF.INFO.

insert(key, items, capacity=None, error=None, noCreate=None, expansion=None, noScale=None)[source]

Add multiple items to a Bloom Filter key.

If nocreate is None and key does not exist, a new Bloom Filter key will be created with the desired probability of false positives error and the expected number of entries given by capacity. For more information see BF.INSERT.

loadchunk(key, iter, data)[source]

Restore a filter previously saved using SCANDUMP.

See the SCANDUMP command for example usage. This command will overwrite any bloom filter stored under key. Ensure that the bloom filter will not be modified between invocations. For more information see BF.LOADCHUNK.

madd(key, *items)[source]

Add multiple items to a Bloom Filter key. For more information see BF.MADD.

mexists(key, *items)[source]

Check whether items exist in Bloom Filter key. For more information see BF.MEXISTS.

scandump(key, iter)[source]

Begin an incremental save of the bloom filter key.

This is useful for large bloom filters which cannot fit into the normal SAVE and RESTORE model. The first time this command is called, the value of iter should be 0. This command will return successive (iter, data) pairs until (0, NULL) to indicate completion. For more information see BF.SCANDUMP.

class redis.commands.bf.commands.CFCommands[source]

Cuckoo Filter commands.

add(key, item)[source]

Add an item to a Cuckoo Filter key. For more information see CF.ADD.

addnx(key, item)[source]

Add an item to a Cuckoo Filter key only if item does not yet exist. This command may be slower than add. For more information see CF.ADDNX.

count(key, item)[source]

Return the number of times an item may be in the key. For more information see CF.COUNT.

create(key, capacity, expansion=None, bucket_size=None, max_iterations=None)[source]

Create a new Cuckoo Filter key with an initial capacity of capacity items. For more information see CF.RESERVE.

delete(key, item)[source]

Delete item from key. For more information see CF.DEL.

exists(key, item)[source]

Check whether an item exists in Cuckoo Filter key. For more information see CF.EXISTS.

info(key)[source]

Return size, number of buckets, number of filters, number of items inserted, number of items deleted, bucket size, expansion rate, and max iterations. For more information see CF.INFO.

insert(key, items, capacity=None, nocreate=None)[source]

Add multiple items to a Cuckoo Filter key, allowing the filter to be created with a custom capacity if it does not yet exist. items must be provided as a list. For more information see CF.INSERT.

insertnx(key, items, capacity=None, nocreate=None)[source]

Add multiple items to a Cuckoo Filter key only if they do not exist yet, allowing the filter to be created with a custom capacity if it does not yet exist. items must be provided as a list. For more information see CF.INSERTNX.

loadchunk(key, iter, data)[source]

Restore a filter previously saved using SCANDUMP. See the SCANDUMP command for example usage.

This command will overwrite any Cuckoo filter stored under key. Ensure that the Cuckoo filter will not be modified between invocations. For more information see CF.LOADCHUNK.

scandump(key, iter)[source]

Begin an incremental save of the Cuckoo filter key.

This is useful for large Cuckoo filters which cannot fit into the normal SAVE and RESTORE model. The first time this command is called, the value of iter should be 0. This command will return successive (iter, data) pairs until (0, NULL) to indicate completion. For more information see CF.SCANDUMP.

class redis.commands.bf.commands.CMSCommands[source]

Count-Min Sketch Commands

incrby(key, items, increments)[source]

Add/increase items in a Count-Min Sketch key by increments. Both items and increments are lists. For more information see CMS.INCRBY.

Example:

>>> r.cms().incrby('A', ['foo'], [1])

info(key)[source]

Return width, depth and total count of the sketch. For more information see CMS.INFO.

initbydim(key, width, depth)[source]

Initialize a Count-Min Sketch key to dimensions (width, depth) specified by user. For more information see CMS.INITBYDIM.

initbyprob(key, error, probability)[source]

Initialize a Count-Min Sketch key to characteristics (error, probability) specified by user. For more information see CMS.INITBYPROB.

merge(destKey, numKeys, srcKeys, weights=[])[source]

Merge numKeys of sketches into destKey. Sketches specified in srcKeys. All sketches must have identical width and depth. Weights can be used to multiply certain sketches. Default weight is 1. Both srcKeys and weights are lists. For more information see CMS.MERGE.

query(key, *items)[source]

Return count for an item from key. Multiple items can be queried with one call. For more information see CMS.QUERY.

class redis.commands.bf.commands.TOPKCommands[source]

Top-K Filter commands.

add(key, *items)[source]

Add one item or more to a Top-K Filter key. For more information see TOPK.ADD.

count(key, *items)[source]

Return count for one item or more from key. For more information see TOPK.COUNT.

incrby(key, items, increments)[source]

Add/increase items in a Top-K Sketch key by increments. Both items and increments are lists. For more information see TOPK.INCRBY.

Example:

>>> r.topk().incrby('A', ['foo'], [1])

info(key)[source]

Return k, width, depth and decay values of key. For more information see TOPK.INFO.

list(key, withcount=False)[source]

Return full list of items in Top-K list of key. If withcount set to True, return full list of items with probabilistic count in Top-K list of key. For more information see TOPK.LIST.

query(key, *items)[source]

Check whether one item or more is a Top-K item at key. For more information see TOPK.QUERY.

reserve(key, k, width, depth, decay)[source]

Create a new Top-K Filter key with the given k, width, depth and decay parameters. For more information see TOPK.RESERVE.


RedisGraph Commands

These are the commands for interacting with the RedisGraph module. Below is a brief example, as well as documentation on the commands themselves.

Create a graph, adding two nodes

import redis
from redis.commands.graph.node import Node

john = Node(label="person", properties={"name": "John Doe", "age": 33})
jane = Node(label="person", properties={"name": "Jane Doe", "age": 34})

r = redis.Redis()
graph = r.graph()
graph.add_node(john)
graph.add_node(jane)
graph.commit()

class redis.commands.graph.node.Node(node_id=None, alias=None, label=None, properties=None)[source]

A node within the graph.

class redis.commands.graph.edge.Edge(src_node, relation, dest_node, edge_id=None, properties=None)[source]

An edge connecting two nodes.

class redis.commands.graph.commands.GraphCommands[source]

RedisGraph Commands

bulk(**kwargs)[source]

Internal only. Not supported.

commit()[source]

Create the entire graph. For more information see CREATE.

config(name, value=None, set=False)[source]

Retrieve or update a RedisGraph configuration. For more information see GRAPH.CONFIG.

Args:

name : str
The name of the configuration
value :
The value we want to set (can be used only when set is on)
set : bool
Turn on to set a configuration. Default behavior is get.
delete()[source]

Delete the graph. For more information see DELETE.

explain(query, params=None)[source]

Get the execution plan for a given query. Returns an array of operations. For more information see GRAPH.EXPLAIN.

Args:

query:
The query that will be executed.
params: dict
Query parameters.
flush()[source]

Commit the graph and reset the edges and the nodes to zero length.

list_keys()[source]

Lists all graph keys in the keyspace. For more information see GRAPH.LIST.

merge(pattern)[source]

Merge pattern. For more information see MERGE.

profile(query)[source]

Execute a query and produce an execution plan augmented with metrics for each operation’s execution. Return a string representation of a query execution plan, with details on results produced by and time spent in each operation. For more information see GRAPH.PROFILE.

query(q, params=None, timeout=None, read_only=False, profile=False)[source]

Execute a query against the graph. For more information see GRAPH.QUERY.

Args:

q : str
The query.
params : dict
Query parameters.
timeout : int
Maximum runtime for read queries in milliseconds.
read_only : bool
Executes a readonly query if set to True.
profile : bool
Return details on results produced by and time spent in each operation.
slowlog()[source]

Get a list containing up to 10 of the slowest queries issued against the given graph ID. For more information see GRAPH.SLOWLOG.

Each item in the list has the following structure:
  1. A Unix timestamp at which the log entry was processed.
  2. The issued command.
  3. The issued query.
  4. The amount of time needed for its execution, in milliseconds.


RedisJSON Commands

These are the commands for interacting with the RedisJSON module. Below is a brief example, as well as documentation on the commands themselves.

Create a json object

import redis
r = redis.Redis()
r.json().set("mykey", ".", {"hello": "world", "i am": ["a", "json", "object!"]})

Examples of how to combine search and json can be found here.


RediSearch Commands

These are the commands for interacting with the RediSearch module. Below is a brief example, as well as documentation on the commands themselves.

Create a search index, and display its information

import redis
from redis.commands.search.field import TextField

r = redis.Redis()
r.ft().create_index([TextField("play", weight=5.0), TextField("ball")])
print(r.ft().info())

class redis.commands.search.commands.SearchCommands[source]

Search commands.

add_document(doc_id, nosave=False, score=1.0, payload=None, replace=False, partial=False, language=None, no_create=False, **fields)[source]

Add a single document to the index.

### Parameters

  • doc_id: the id of the saved document.
  • nosave: if set to true, we just index the document, and don’t
    save a copy of it. This means that searches will just return ids.
  • score: the document ranking, between 0.0 and 1.0
  • payload: optional inner-index payload we can save for fast
    access in scoring functions.
  • replace: if True, and the document already is in the index, we
    perform an update and reindex the document.
  • partial: if True, the fields specified will be added to the
    existing document. This has the added benefit that any fields
    specified with no_index will not be reindexed again. Implies
    replace.
  • language: Specify the language used for document tokenization.

  • no_create: if True, the document is only updated and reindexed

    if it already exists. If the document does not exist, an error will be returned. Implies replace

  • fields: kwargs dictionary of the document fields to be saved
    and/or indexed.

    NOTE: Geo points should be encoded as strings of “lon,lat”.

For more information: https://oss.redis.com/redisearch/Commands/#ftadd

add_document_hash(doc_id, score=1.0, language=None, replace=False)[source]

Add a hash document to the index.

### Parameters

  • doc_id: the document’s id. This has to be an existing HASH key
    in Redis that will hold the fields the index needs.
  • score: the document ranking, between 0.0 and 1.0
  • replace: if True, and the document already is in the index, we
    perform an update and reindex the document
  • language: Specify the language used for document tokenization.

For more information: https://oss.redis.com/redisearch/Commands/#ftaddhash

aggregate(query)[source]

Issue an aggregation query.

### Parameters

query: This can be either an AggregateRequest, or a Cursor

An AggregateResult object is returned. You can access the rows from its rows property, which will always yield the rows of the result.

For more information: https://oss.redis.com/redisearch/Commands/#ftaggregate

aliasadd(alias)[source]

Alias a search index - will fail if alias already exists

### Parameters

  • alias: Name of the alias to create

For more information: https://oss.redis.com/redisearch/Commands/#ftaliasadd

aliasdel(alias)[source]

Remove an alias from a search index

### Parameters

  • alias: Name of the alias to delete

For more information: https://oss.redis.com/redisearch/Commands/#ftaliasdel

aliasupdate(alias)[source]

Updates an alias - will fail if alias does not already exist

### Parameters

  • alias: Name of the alias to update

For more information: https://oss.redis.com/redisearch/Commands/#ftaliasupdate

alter_schema_add(fields)[source]

Alter the existing search index by adding new fields. The index must already exist.

### Parameters:

  • fields: a list of Field objects to add for the index

For more information: https://oss.redis.com/redisearch/Commands/#ftalter_schema_add

batch_indexer(chunk_size=100)[source]

Create a new batch indexer from the client with a given chunk size

config_get(option)[source]

Get runtime configuration option value.

### Parameters

  • option: the name of the configuration option.

For more information: https://oss.redis.com/redisearch/Commands/#ftconfig

config_set(option, value)[source]

Set runtime configuration option.

### Parameters

  • option: the name of the configuration option.
  • value: a value for the configuration option.

For more information: https://oss.redis.com/redisearch/Commands/#ftconfig

create_index(fields, no_term_offsets=False, no_field_flags=False, stopwords=None, definition=None, max_text_fields=False, temporary=None, no_highlight=False, no_term_frequencies=False, skip_initial_scan=False)[source]

Create the search index. The index must not already exist.

### Parameters:

  • fields: a list of TextField or NumericField objects.
  • no_term_offsets: If true, we will not save term offsets in
    the index.
  • no_field_flags: If true, we will not save field flags that
    allow searching in specific fields.
  • stopwords: If not None, we create the index with this custom
    stopword list. The list can be empty.
  • max_text_fields: If true, we will encode indexes as if there
    were more than 32 text fields, which allows you to add
    additional fields (beyond 32).
  • temporary: Create a lightweight temporary index which will
    expire after the specified period of inactivity (in seconds).
    The internal idle timer is reset whenever the index is
    searched or added to.
  • no_highlight: If true, disable highlighting support. Also
    implied by no_term_offsets.
  • no_term_frequencies: If true, we avoid saving the term
    frequencies in the index.
  • skip_initial_scan: If true, we do not scan and index.

For more information: https://oss.redis.com/redisearch/Commands/#ftcreate

delete_document(doc_id, conn=None, delete_actual_document=False)[source]

Delete a document from the index. Returns 1 if the document was deleted, 0 if not.

### Parameters

  • delete_actual_document: if set to True, RediSearch also deletes
    the actual document if it is in the index

For more information: https://oss.redis.com/redisearch/Commands/#ftdel

dict_add(name, *terms)[source]

Adds terms to a dictionary.

### Parameters

  • name: Dictionary name.
  • terms: List of items for adding to the dictionary.

For more information: https://oss.redis.com/redisearch/Commands/#ftdictadd

dict_del(name, *terms)[source]

Deletes terms from a dictionary.

### Parameters

  • name: Dictionary name.
  • terms: List of items for removing from the dictionary.

For more information: https://oss.redis.com/redisearch/Commands/#ftdictdel

dict_dump(name)[source]

Dumps all terms in the given dictionary.

### Parameters

  • name: Dictionary name.

For more information: https://oss.redis.com/redisearch/Commands/#ftdictdump

dropindex(delete_documents=False)[source]

Drop the index if it exists. Replaces drop_index in RediSearch 2.0. The default behavior was changed to not delete the indexed documents.

### Parameters:

  • delete_documents: If True, all documents will be deleted.

For more information: https://oss.redis.com/redisearch/Commands/#ftdropindex

explain(query)[source]

Returns the execution plan for a complex query.

For more information: https://oss.redis.com/redisearch/Commands/#ftexplain

get(*ids)[source]

Returns the full contents of multiple documents.

### Parameters

  • ids: the ids of the saved documents.

For more information https://oss.redis.com/redisearch/Commands/#ftget

info()[source]

Get info and stats about the current index, including the number of documents, memory consumption, etc.

For more information https://oss.redis.com/redisearch/Commands/#ftinfo

load_document(id)[source]

Load a single document by id

profile(query, limited=False)[source]

Performs a search or aggregate command and collects performance information.

### Parameters

  • query: This can be either an AggregateRequest, Query or string.
  • limited: If set to True, removes details of reader iterator.

search(query)[source]

Search the index for a given query, and return a result of documents

### Parameters

  • query: the search query. Either a text for simple queries with
    default parameters, or a Query object for complex queries. See RediSearch’s documentation on query format

For more information: https://oss.redis.com/redisearch/Commands/#ftsearch

spellcheck(query, distance=None, include=None, exclude=None)[source]

Issue a spellcheck query

### Parameters

  • query: search query.
  • distance: the maximal Levenshtein distance for spelling
    suggestions (default: 1, max: 4).
  • include: specifies an inclusion custom dictionary.
  • exclude: specifies an exclusion custom dictionary.

For more information: https://oss.redis.com/redisearch/Commands/#ftspellcheck

sugadd(key, *suggestions, **kwargs)[source]

Add suggestion terms to the AutoCompleter engine. Each suggestion has a score and string. If kwargs["increment"] is true and the terms are already in the server’s dictionary, we increment their scores.

For more information: https://oss.redis.com/redisearch/master/Commands/#ftsugadd

sugdel(key, string)[source]

Delete a string from the AutoCompleter index. Returns 1 if the string was found and deleted, 0 otherwise.

For more information: https://oss.redis.com/redisearch/master/Commands/#ftsugdel

sugget(key, prefix, fuzzy=False, num=10, with_scores=False, with_payloads=False)[source]

Get a list of suggestions from the AutoCompleter, for a given prefix.

Parameters:

prefix : str
The prefix we are searching. Must be valid ascii or utf-8
fuzzy : bool
If set to true, the prefix search is done in fuzzy mode. NOTE: Running fuzzy searches on short (<3 letters) prefixes can be very slow, and even scan the entire index.
with_scores : bool
If set to true, we also return the (refactored) score of each suggestion. This is normally not needed, and is NOT the original score inserted into the index.
with_payloads : bool
Return suggestion payloads
num : int
The maximum number of results we return. Note that we might return less. The algorithm trims irrelevant suggestions.

Returns:

list:
A list of Suggestion objects. If with_scores was False, the score of all suggestions is 1.

For more information: https://oss.redis.com/redisearch/master/Commands/#ftsugget

suglen(key)[source]

Return the number of entries in the AutoCompleter index.

For more information https://oss.redis.com/redisearch/master/Commands/#ftsuglen

syndump()[source]

Dumps the contents of a synonym group.

The command is used to dump the synonyms data structure. Returns a list of synonym terms and their synonym group ids.

For more information: https://oss.redis.com/redisearch/Commands/#ftsyndump

synupdate(groupid, skipinitial=False, *terms)[source]

Updates a synonym group. The command is used to create or update a synonym group with additional terms. Only documents which were indexed after the update will be affected.

Parameters:

groupid :
Synonym group id.
skipinitial : bool
If set to true, we do not scan and index.
terms :
The terms.

For more information: https://oss.redis.com/redisearch/Commands/#ftsynupdate

tagvals(tagfield)[source]

Return a list of all possible tag values

### Parameters

  • tagfield: Tag field name

For more information: https://oss.redis.com/redisearch/Commands/#fttagvals


RedisTimeSeries Commands

These are the commands for interacting with the RedisTimeSeries module. Below is a brief example, as well as documentation on the commands themselves.

Create a timeseries object with 5 second retention

import redis
r = redis.Redis()
r.ts().create(2, retention_msecs=5000)

class redis.commands.timeseries.commands.TimeSeriesCommands[source]

RedisTimeSeries Commands.

add(key, timestamp, value, **kwargs)[source]

Append (or create and append) a new sample to the series.

Args:

key:
time-series key
timestamp:
Timestamp of the sample. * can be used for automatic timestamp (using the system clock).
value:
Numeric data value of the sample
retention_msecs:
Maximum age for samples compared to last event time (in milliseconds). If None or 0 is passed then the series is not trimmed at all.
uncompressed:
Since RedisTimeSeries v1.2, both timestamps and values are compressed by default. Adding this flag will keep data in an uncompressed form. Compression not only saves memory but usually improves performance due to a lower number of memory accesses.
labels:
Set of label-value pairs that represent metadata labels of the key.
chunk_size:
Each time-series uses fixed-size memory chunks for time series samples. You can alter the default TSDB chunk size by passing the chunk_size argument (in bytes).
duplicate_policy:
Since RedisTimeSeries v1.4 you can specify the duplicate sample policy (what to do on a duplicate sample). Can be one of:
  • ‘block’: an error will occur for any out of order sample.
  • ‘first’: ignore the new value.
  • ‘last’: override with the latest value.
  • ‘min’: only override if the value is lower than the existing value.
  • ‘max’: only override if the value is higher than the existing value.
When this is not set, the server-wide default will be used.

For more information: https://oss.redis.com/redistimeseries/master/commands/#tsadd

alter(key, **kwargs)[source]

Update the retention and labels of an existing key.

The parameters are the same as TS.CREATE.

For more information: https://oss.redis.com/redistimeseries/commands/#tsalter

create(key, **kwargs)[source]

Create a new time-series.

Args:

key:
time-series key
retention_msecs:
Maximum age for samples compared to last event time (in milliseconds). If None or 0 is passed then the series is not trimmed at all.
uncompressed:
Since RedisTimeSeries v1.2, both timestamps and values are compressed by default. Adding this flag will keep data in an uncompressed form. Compression not only saves memory but usually improves performance due to a lower number of memory accesses.
labels:
Set of label-value pairs that represent metadata labels of the key.
chunk_size:
Each time-series uses fixed-size memory chunks for time series samples. You can alter the default TSDB chunk size by passing the chunk_size argument (in bytes).
duplicate_policy:
Since RedisTimeSeries v1.4 you can specify the duplicate sample policy (what to do on a duplicate sample). Can be one of:
  • ‘block’: an error will occur for any out of order sample.
  • ‘first’: ignore the new value.
  • ‘last’: override with the latest value.
  • ‘min’: only override if the value is lower than the existing value.
  • ‘max’: only override if the value is higher than the existing value.
When this is not set, the server-wide default will be used.

For more information: https://oss.redis.com/redistimeseries/commands/#tscreate

createrule(source_key, dest_key, aggregation_type, bucket_size_msec)[source]

Create a compaction rule from values added to source_key into dest_key, aggregating over buckets of bucket_size_msec, where aggregation_type can be one of [avg, sum, min, max, range, count, first, last, std.p, std.s, var.p, var.s].

For more information: https://oss.redis.com/redistimeseries/master/commands/#tscreaterule

decrby(key, value, **kwargs)[source]

Decrement (or create a time-series and decrement) the latest sample of a series. This command can be used as a counter or gauge that automatically gets history as a time series.

Args:

key:
time-series key
value:
Numeric data value of the sample
timestamp:
Timestamp of the sample. None can be used for automatic timestamp (using the system clock).
retention_msecs:
Maximum age for samples compared to last event time (in milliseconds). If None or 0 is passed then the series is not trimmed at all.
uncompressed:
Since RedisTimeSeries v1.2, both timestamps and values are compressed by default. Adding this flag will keep data in an uncompressed form. Compression not only saves memory but usually improves performance due to a lower number of memory accesses.
labels:
Set of label-value pairs that represent metadata labels of the key.
chunk_size:
Each time-series uses chunks of memory of fixed size for time series samples. You can alter the default TSDB chunk size by passing the chunk_size argument (in Bytes).

For more information: https://oss.redis.com/redistimeseries/master/commands/#tsincrbytsdecrby

delete(key, from_time, to_time)[source]

Delete data points for a given timeseries and interval range in the form of start and end delete timestamps. The given timestamp interval is closed (inclusive), meaning start and end data points will also be deleted. Returns the count of deleted items.

Args:

key:
time-series key.
from_time:
Start timestamp for the range deletion.
to_time:
End timestamp for the range deletion.

For more information: https://oss.redis.com/redistimeseries/master/commands/#tsdel

deleterule(source_key, dest_key)[source]

Delete a compaction rule.

For more information: https://oss.redis.com/redistimeseries/master/commands/#tsdeleterule

get(key)[source]

Get the last sample of key.

For more information: https://oss.redis.com/redistimeseries/master/commands/#tsget

incrby(key, value, **kwargs)[source]

Increment (or create a time-series and increment) the latest sample of a series. This command can be used as a counter or gauge that automatically gets history as a time series.

Args:

key:
time-series key
value:
Numeric data value of the sample
timestamp:
Timestamp of the sample. None can be used for automatic timestamp (using the system clock).
retention_msecs:
Maximum age for samples compared to last event time (in milliseconds). If None or 0 is passed then the series is not trimmed at all.
uncompressed:
Since RedisTimeSeries v1.2, both timestamps and values are compressed by default. Adding this flag will keep data in an uncompressed form. Compression not only saves memory but usually improves performance due to a lower number of memory accesses.
labels:
Set of label-value pairs that represent metadata labels of the key.
chunk_size:
Each time-series uses chunks of memory of fixed size for time series samples. You can alter the default TSDB chunk size by passing the chunk_size argument (in Bytes).

For more information: https://oss.redis.com/redistimeseries/master/commands/#tsincrbytsdecrby

info(key)[source]

Get information about key.

For more information: https://oss.redis.com/redistimeseries/master/commands/#tsinfo

madd(ktv_tuples)[source]

Append (or create and append) a new value to series key with timestamp. Expects a list of tuples as (key, timestamp, value). The return value is an array with the timestamps of the insertions.

For more information: https://oss.redis.com/redistimeseries/master/commands/#tsmadd

mget(filters, with_labels=False)[source]

Get the last samples matching the specific filter.

For more information: https://oss.redis.com/redistimeseries/master/commands/#tsmget

mrange(from_time, to_time, filters, count=None, aggregation_type=None, bucket_size_msec=0, with_labels=False, filter_by_ts=None, filter_by_min_value=None, filter_by_max_value=None, groupby=None, reduce=None, select_labels=None, align=None)[source]

Query a range across multiple time-series by filters in forward direction.

Args:

from_time:
Start timestamp for the range query. - can be used to express the minimum possible timestamp (0).
to_time:
End timestamp for range query, + can be used to express the maximum possible timestamp.
filters:
filter to match the time-series labels.
count:
Optional maximum number of returned results.
aggregation_type:
Optional aggregation type. Can be one of [avg, sum, min, max, range, count, first, last, std.p, std.s, var.p, var.s]
bucket_size_msec:
Time bucket for aggregation in milliseconds.
with_labels:
Include in the reply the label-value pairs that represent metadata labels of the time-series. If this argument is not set, an empty array is returned in the labels position by default.
filter_by_ts:
List of timestamps to filter the result by specific timestamps.
filter_by_min_value:
Filter result by minimum value (filter_by_max_value must also be specified).
filter_by_max_value:
Filter result by maximum value (filter_by_min_value must also be specified).
groupby:
Group the results by this label (reduce must also be specified).
reduce:
Apply a reducer function to each group. Can be one of [sum, min, max].
select_labels:
Include in the reply only a subset of the key-value pair labels of a series.
align:
Timestamp for alignment control for aggregation.

For more information: https://oss.redis.com/redistimeseries/master/commands/#tsmrangetsmrevrange

mrevrange(from_time, to_time, filters, count=None, aggregation_type=None, bucket_size_msec=0, with_labels=False, filter_by_ts=None, filter_by_min_value=None, filter_by_max_value=None, groupby=None, reduce=None, select_labels=None, align=None)[source]

Query a range across multiple time-series by filters in reverse direction.

Args:

from_time:
Start timestamp for the range query. - can be used to express the minimum possible timestamp (0).
to_time:
End timestamp for range query, + can be used to express the maximum possible timestamp.
filters:
Filter to match the time-series labels.
count:
Optional maximum number of returned results.
aggregation_type:
Optional aggregation type. Can be one of [avg, sum, min, max, range, count, first, last, std.p, std.s, var.p, var.s]
bucket_size_msec:
Time bucket for aggregation in milliseconds.
with_labels:
Include in the reply the label-value pairs that represent metadata labels of the time-series. If this argument is not set, an empty array is returned in the labels position by default.
filter_by_ts:
List of timestamps to filter the result by specific timestamps.
filter_by_min_value:
Filter result by minimum value (filter_by_max_value must also be specified).
filter_by_max_value:
Filter result by maximum value (filter_by_min_value must also be specified).
groupby:
Group the results by this label (reduce must also be specified).
reduce:
Apply a reducer function to each group. Can be one of [sum, min, max].
select_labels:
Include in the reply only a subset of the key-value pair labels of a series.
align:
Timestamp for alignment control for aggregation.

For more information: https://oss.redis.com/redistimeseries/master/commands/#tsmrangetsmrevrange

queryindex(filters)[source]

Get all the keys matching the filter list.

For more information: https://oss.redis.com/redistimeseries/master/commands/#tsqueryindex

range(key, from_time, to_time, count=None, aggregation_type=None, bucket_size_msec=0, filter_by_ts=None, filter_by_min_value=None, filter_by_max_value=None, align=None)[source]

Query a range in forward direction for a specific time-series.

Args:

key:
Key name for timeseries.
from_time:
Start timestamp for the range query. - can be used to express the minimum possible timestamp (0).
to_time:
End timestamp for range query, + can be used to express the maximum possible timestamp.
count:
Optional maximum number of returned results.
aggregation_type:
Optional aggregation type. Can be one of [avg, sum, min, max, range, count, first, last, std.p, std.s, var.p, var.s]
bucket_size_msec:
Time bucket for aggregation in milliseconds.
filter_by_ts:
List of timestamps to filter the result by specific timestamps.
filter_by_min_value:
Filter result by minimum value (filter_by_max_value must also be specified).
filter_by_max_value:
Filter result by maximum value (filter_by_min_value must also be specified).
align:
Timestamp for alignment control for aggregation.

For more information: https://oss.redis.com/redistimeseries/master/commands/#tsrangetsrevrange

revrange(key, from_time, to_time, count=None, aggregation_type=None, bucket_size_msec=0, filter_by_ts=None, filter_by_min_value=None, filter_by_max_value=None, align=None)[source]

Query a range in reverse direction for a specific time-series.

Note: This command is only available since RedisTimeSeries >= v1.4

Args:

key:
Key name for timeseries.
from_time:
Start timestamp for the range query. - can be used to express the minimum possible timestamp (0).
to_time:
End timestamp for range query, + can be used to express the maximum possible timestamp.
count:
Optional maximum number of returned results.
aggregation_type:
Optional aggregation type. Can be one of [avg, sum, min, max, range, count, first, last, std.p, std.s, var.p, var.s]
bucket_size_msec:
Time bucket for aggregation in milliseconds.
filter_by_ts:
List of timestamps to filter the result by specific timestamps.
filter_by_min_value:
Filter result by minimum value (filter_by_max_value must also be specified).
filter_by_max_value:
Filter result by maximum value (filter_by_min_value must also be specified).
align:
Timestamp for alignment control for aggregation.

For more information: https://oss.redis.com/redistimeseries/master/commands/#tsrangetsrevrange