Better way to add all combinations of column values to a data frame in PySpark
I have a dataset with three columns: id, day, value. I need to add rows with value set to zero for every combination of id and day.
# Simplified version of my data frame
data = [("1", "2020-04-01", 5),
("2", "2020-04-01", 5),
("3", "2020-04-02", 4)]
df = spark.createDataFrame(data, ['id', 'day', 'value'])
What I came up with is:
# Create all combinations of id and day
ids = df.select('id').distinct()
days = df.select('day').distinct()
full = ids.crossJoin(days)
# Add the combinations back to df, filling value with zeros
df_full = (df.join(full, ['id', 'day'], 'rightouter')
             .na.fill(value=0, subset=['value']))
This gives the output I need:
>>> df_full.orderBy(['id','day']).show()
+---+----------+-----+
| id| day|value|
+---+----------+-----+
| 1|2020-04-01| 5|
| 1|2020-04-02| 0|
| 2|2020-04-01| 5|
| 2|2020-04-02| 0|
| 3|2020-04-01| 0|
| 3|2020-04-02| 4|
+---+----------+-----+
The problem is that both of these operations are very computationally expensive. When I run this on my full data, it produces a job an order of magnitude larger than jobs that normally take a few hours to run.
Is there a more efficient way to do this? Or is there something I'm missing?
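For reference, the physical plan behind this approach can be inspected with the DataFrame explain() method; a minimal sketch (the plan output for the full dataset is not reproduced here):
# Show the plans to see where the cross join and the right outer join
# introduce shuffles on the full data.
df_full.explain(True)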
Answer
This is how I would implement it. One note: the column pairs fed to stack must have matching types (which is why the zero is cast to bigint below), otherwise the stack function will raise an error.
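As a quick, self-contained illustration of what stack does (a sketch separate from the solution code; the literals are made up):
# stack(n, e1, e2, ...) spreads the listed expressions over n output rows;
# expressions that land in the same output column must share a type.
spark.sql("SELECT stack(2, 'a', 10, 'b', 20) AS (key, value)").show()
# +---+-----+
# |key|value|
# +---+-----+
# |  a|   10|
# |  b|   20|
# +---+-----+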
import pyspark.sql.functions as f
# Simplified version of my data frame
data = [("1", "2020-04-01", 5),
("2", "2020-04-01", 5),
("3", "2020-04-02", 4)]
df = spark.createDataFrame(data, ['id', 'day', 'value'])
# Creating a dataframe with all distinct days
df_days = df.select(f.col('day').alias('r_day')).distinct()
# Join each row against every other distinct day to find the missing combinations
df_final = df.join(df_days, on=df['day'] != df_days['r_day'])
# +---+----------+-----+----------+
# | id| day|value| r_day|
# +---+----------+-----+----------+
# | 1|2020-04-01| 5|2020-04-02|
# | 2|2020-04-01| 5|2020-04-02|
# | 3|2020-04-02| 4|2020-04-01|
# +---+----------+-----+----------+
# Unpivot: each joined row becomes two rows, (day, value) and (r_day, 0)
df_final = df_final.select('id', f.expr('stack(2, day, value, r_day, cast(0 as bigint)) as (day, value)'))
df_final.orderBy('id', 'day').show()
Output:
+---+----------+-----+
| id| day|value|
+---+----------+-----+
| 1|2020-04-01| 5|
| 1|2020-04-02| 0|
| 2|2020-04-01| 5|
| 2|2020-04-02| 0|
| 3|2020-04-01| 0|
| 3|2020-04-02| 4|
+---+----------+-----+
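As a follow-up note: if the set of distinct days is small, one option is to add a broadcast hint on df_days so Spark can use a broadcast nested loop join for the inequality condition instead of a shuffled cartesian product. This is a sketch under that assumption, not a benchmarked recommendation; broadcast, col and expr are the standard helpers from pyspark.sql.functions:
from pyspark.sql.functions import broadcast, col, expr

# Hint that the distinct-days frame is small enough to ship to every executor.
df_days = df.select(col('day').alias('r_day')).distinct()
df_final = (
    df.join(broadcast(df_days), on=df['day'] != df_days['r_day'])
      .select('id', expr('stack(2, day, value, r_day, cast(0 as bigint)) as (day, value)'))
)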