Filter push-down (by key) when querying Redis using Spark (PySpark)

Hi

I'm loading a DataFrame from Redis using this code:

df = (spark.read.format("org.apache.spark.sql.redis")
      .option("table", "state_store_ready_to_sell")
      .option("key.column", "msid")
      .option("infer.schema", "true")
      .load())

and then I'm running a filter, for example:

ready_to_sell = df.filter("msid in ('12321', '12432')")
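
For reference, the same predicate can also be written with the DataFrame API; as far as I can tell, both forms compile to the same In filter in the logical plan:

from pyspark.sql.functions import col

# Equivalent to the SQL-string filter above: an In predicate on the key column.
ready_to_sell = df.filter(col("msid").isin("12321", "12432"))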

I looked at the Spark plan, and Spark does not push the msid filter down to Redis.

This means that all Redis records are loaded and then filtered in Spark memory (according to the SQL tab in the Spark UI).
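
For reference, here is a minimal sketch of how the plan can be inspected (using the ready_to_sell DataFrame from the snippet above). If the filter were pushed down, the scan node would typically list the msid predicate under PushedFilters instead of applying it in a separate Filter step:

# Print the parsed, analyzed, optimized, and physical plans.
ready_to_sell.explain(True)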

msid is the key.column in Redis, of course.

How do I make Spark push down the filter and fetch only the relevant records?

Thanks!

Almog