Error when importing PySpark ETL modules and running them as child processes with Python's subprocess

I'm trying to dynamically call a list of PySpark modules from a main.py Python script using importlib and subprocess. The child modules I'm calling don't return anything; they just perform their ETL operations. I want my main.py program to wait until each child process finishes. With the code below, every time I try to launch the child process I get the error "TypeError: 'NoneType' object is not iterable". A second problem: after starting subprocess.Popen, I expected the flow in main.py to continue to the next line until it hit j1.wait(), but the print statement right after it (print("etl_01_job is running")) never executes. What am I missing?

I googled and tried many other approaches, but nothing worked. Can anyone shed some light on what I'm doing wrong? Once I can launch the child process successfully, I need to add some conditions based on its return code, but for now I'd like to solve this problem. Thanks.
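For reference, my understanding from the docs is that subprocess.Popen takes a command to run as a new OS process (typically a list of strings), not a Python function call; the script path below is only illustrative:

import subprocess
import sys

# Launch a script as a separate process; the command is a list of strings.
p = subprocess.Popen([sys.executable, "etl_01_job.py"])  # illustrative path
p.wait()  # blocks until the child process exits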

Main file: main.py

import json
import importlib
import subprocess
from datetime import datetime
from pyspark import SparkContext, SparkConf
from pyspark.sql.session import SparkSession


def main():
    with open('C:/Pyspark/test/config/config.json', 'r') as config_file:
        config = json.load(config_file)

    spark = SparkSession.builder \
        .appName(config.get("app_name")) \
        .getOrCreate()

    job_module1 = importlib.import_module("etl_01_job")
    print("main calling time :", datetime.now())
    j1 = subprocess.Popen(job_module1.run_etl_01_job(spark, config))
    print("etl_01_job is running")
    j1.wait()  # I'm expecting main.py to wait until the child process finishes
    print("etl_01_job finished")

    job_module2 = importlib.import_module("etl_02_job")
    j2 = subprocess.Popen(job_module2.run_etl_02_job(spark, config))

if __name__ == "__main__":
    main()
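
A minimal, Spark-free sketch of the evaluation order involved (the function name is made up): the argument to Popen is evaluated first, so run_etl_01_job(spark, config) runs to completion in the parent process, sleep included, before Popen ever sees its return value.

import time
from datetime import datetime

def fake_job():
    # stands in for run_etl_01_job: does its work, then implicitly returns None
    time.sleep(2)
    print("job body finished:", datetime.now())

print("before the call:", datetime.now())
result = fake_job()  # executes fully here, in this process
print("after the call:", datetime.now(), "| return value:", result)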

Child pyspark job: etl_01_job.py (not the original code, just a sample script)

from datetime import datetime
import time
import sys

def etl_01_job(spark, config):
    print("I'm in 01etljob")
    print(config)
    print(config.get("app_name"))
    time.sleep(10)
    print("etljob 1 ending time :", datetime.now())
def run_etl_01_job(spark, config):
    etl_01_job(spark, config)
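
A quick check of what the wrapper above hands back (the config dict is a made-up stand-in, and spark is unused in this sample):

# run_etl_01_job has no return statement, so the call evaluates to None.
result = run_etl_01_job(None, {"app_name": "test"})
print(result)  # None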

The error I get is:

Traceback (most recent call last):
  File "C:/py_spark/src/main.py", line 49, in <module>
    main()
  File "C:/py_spark/src/main.py", line 38, in main
    p1 = subprocess.run(job_module1.run_etl_01_job(spark, config))
  File "C:\ProgramData\Anaconda3\lib\subprocess.py", line 489, in run
    with Popen(*popenargs, **kwargs) as process:
  File "C:\ProgramData\Anaconda3\lib\subprocess.py", line 854, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "C:\ProgramData\Anaconda3\lib\subprocess.py", line 1247, in _execute_child
    args = list2cmdline(args)
  File "C:\ProgramData\Anaconda3\lib\subprocess.py", line 549, in list2cmdline
    for arg in map(os.fsdecode, seq):
TypeError: 'NoneType' object is not iterable
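
The same traceback can be reproduced on Windows without Spark at all; a minimal sketch:

import subprocess

def run_job():
    pass  # no return statement, so the call evaluates to None

# The call runs first; Popen then receives None as its command and, on
# Windows, list2cmdline tries to iterate over it, raising the TypeError.
subprocess.Popen(run_job())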
