FastAPI Python code execution speed impacted by deployment with uvicorn vs gunicorn

I have written a FastAPI application. Now I am getting ready to deploy it, but I am running into strange, unexpected performance behaviour that seems to depend on whether I use uvicorn or gunicorn. In particular, all code (even standard-library pure Python code) seems to get slower if I use gunicorn. For performance debugging I wrote a small application that demonstrates this:

import asyncio, time
from fastapi import FastAPI, Path
from datetime import datetime

app = FastAPI()

@app.get("/delay/{delay1}/{delay2}")
async def get_delay(
    delay1: float = Path(..., title="Nonblocking time taken to respond"),
    delay2: float = Path(..., title="Blocking time taken to respond"),
):
    total_start_time = datetime.now()
    times = []
    for i in range(100):
        start_time = datetime.now()
        await asyncio.sleep(delay1)
        time.sleep(delay2)
        times.append(str(datetime.now()-start_time))
    return {"delays":[delay1,delay2],"total_time_taken":str(datetime.now()-total_start_time),"times":times}

I run the FastAPI app with the following command:

gunicorn api.performance_test:app -b localhost:8001 -k uvicorn.workers.UvicornWorker --workers 1

The response body of a GET to http://localhost:8001/delay/0.0/0.0 is consistently something like this:

{
  "delays": [
    0.0,
    0.0
  ],
  "total_time_taken": "0:00:00.057946",
  "times": [
    "0:00:00.000323",
    ...similar values omitted for brevity...
    "0:00:00.000274"
  ]
}

But using:

uvicorn api.performance_test:app --port 8001 

I consistently get timings like this:

{
  "delays": [
    0.0,
    0.0
  ],
  "total_time_taken": "0:00:00.002630",
  "times": [
    "0:00:00.000037",
    ...snip...
    "0:00:00.000020"
  ]
}

The difference becomes even more pronounced when I uncomment the await asyncio.sleep(delay1) statement.

So I am wondering what gunicorn/uvicorn does to the Python/FastAPI runtime that creates this factor-of-10 difference in code execution speed.

For what it's worth, I ran these tests using Python 3.8.2 on OS X 11.2.3 with an Intel I7 processor.

These are the relevant parts of my pip freeze output:

fastapi==0.65.1
gunicorn==20.1.0
uvicorn==0.13.4

Answer

I was unable to reproduce your results.

My environment: Ubuntu on WSL2 on Windows 10

The relevant parts of my pip freeze output:

fastapi==0.65.1
gunicorn==20.1.0
uvicorn==0.14.0

I modified the code a little bit:

import asyncio, time
from fastapi import FastAPI, Path
from datetime import datetime
import statistics

app = FastAPI()

@app.get("/delay/{delay1}/{delay2}")
async def get_delay(
    delay1: float = Path(..., title="Nonblocking time taken to respond"),
    delay2: float = Path(..., title="Blocking time taken to respond"),
):
    total_start_time = datetime.now()
    times = []
    for i in range(100):
        start_time = datetime.now()
        await asyncio.sleep(delay1)
        time.sleep(delay2)
        time_delta= (datetime.now()-start_time).microseconds
        times.append(time_delta)

    times_average = statistics.mean(times)

    return {"delays":[delay1,delay2],"total_time_taken":(datetime.now()-total_start_time).microseconds,"times_avarage":times_average,"times":times}

Except for the first load of the website, my results for both methods were almost the same.

Times were mostly between 0:00:00.000530 and 0:00:00.000620 for both methods.

The first attempt for each took longer: around 0:00:00.003000. However, after I restarted Windows and tried these tests again, I noticed that times no longer increased for the first requests after server startup (I think that is thanks to a lot of free RAM being available after the restart).

Non-first-run examples (3 attempts):

# `uvicorn performance_test:app --port 8083`

{"delays":[0.0,0.0],"total_time_taken":553,"times_avarage":4.4,"times":[15,7,5,4,4,4,4,5,5,4,4,5,4,4,5,4,4,5,4,4,5,4,4,5,4,4,4,5,4,4,5,4,4,5,4,4,4,4,4,5,4,5,5,4,4,4,4,4,4,5,4,4,4,5,4,4,4,4,4,4,5,4,4,5,4,4,4,4,5,4,4,5,4,4,4,4,4,5,4,4,5,4,4,5,4,4,5,4,4,4,4,4,4,4,5,4,4,4,5,4]}
{"delays":[0.0,0.0],"total_time_taken":575,"times_avarage":4.61,"times":[15,6,5,5,5,5,5,5,5,5,5,4,5,5,5,5,4,4,4,4,4,5,5,5,4,5,4,4,4,5,5,5,4,5,5,4,4,4,4,5,5,5,5,4,4,4,4,5,5,4,4,4,4,4,4,4,4,5,5,4,4,4,4,5,5,5,5,5,5,5,4,4,4,4,5,5,4,5,5,4,4,4,4,4,4,5,5,5,4,4,4,4,5,5,5,5,4,4,4,4]}
{"delays":[0.0,0.0],"total_time_taken":548,"times_avarage":4.31,"times":[14,6,5,4,4,4,4,4,4,4,5,4,4,4,4,4,4,5,4,4,5,4,4,4,4,4,4,4,5,4,4,4,5,4,4,4,4,4,4,4,4,5,4,4,4,4,4,4,5,4,4,4,4,4,5,5,4,4,4,4,4,4,4,5,4,4,4,4,4,5,4,4,5,4,4,5,4,4,5,4,4,4,4,4,4,4,5,4,4,5,4,4,5,4,4,5,4,4,4,4]}


# `gunicorn performance_test:app -b localhost:8084 -k uvicorn.workers.UvicornWorker --workers 1`

{"delays":[0.0,0.0],"total_time_taken":551,"times_avarage":4.34,"times":[13,6,5,5,5,5,5,4,4,4,5,4,4,4,4,4,5,4,4,5,4,4,5,4,4,4,4,4,5,4,4,4,4,4,5,4,4,4,4,4,4,4,5,4,4,5,4,4,4,4,4,4,4,4,5,4,4,4,4,4,4,4,5,4,4,4,4,4,4,4,4,4,5,4,4,5,4,5,4,4,5,4,4,4,4,5,4,4,5,4,4,4,4,4,4,4,5,4,4,5]}
{"delays":[0.0,0.0],"total_time_taken":558,"times_avarage":4.48,"times":[14,7,5,5,5,5,5,5,4,4,4,4,4,4,5,5,4,4,4,4,5,4,4,4,5,5,4,4,4,5,5,4,4,4,5,4,4,4,5,5,4,4,4,4,5,5,4,4,5,5,4,4,5,5,4,4,4,5,4,4,5,4,4,5,5,4,4,4,5,4,4,4,5,4,4,4,5,4,5,4,4,4,5,4,4,4,5,4,4,4,5,4,4,4,5,4,4,4,5,4]}
{"delays":[0.0,0.0],"total_time_taken":550,"times_avarage":4.34,"times":[15,6,5,4,4,4,4,4,4,5,4,4,4,4,4,5,4,4,5,4,4,5,4,4,4,4,4,5,4,4,4,4,5,5,4,4,4,4,5,4,4,4,4,4,5,4,4,5,4,4,5,4,4,5,4,4,5,4,4,5,4,4,4,4,4,4,5,4,4,5,4,4,4,4,4,4,4,4,4,5,4,4,5,4,4,4,4,4,4,4,4,5,4,4,5,4,4,4,4,4]}

Non-first-run examples with await asyncio.sleep(delay1) commented out (3 attempts):

# `uvicorn performance_test:app --port 8083`

{"delays":[0.0,0.0],"total_time_taken":159,"times_avarage":0.6,"times":[3,1,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0,0,1,1,1,1,1,0,0,1,1,0,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,1,1,1,1,1,0,0,1,0,0,0,0,0,1,1,1,1,1,1,1,1,1,0,0,0,0,1,1,1,1,1,1,1,0,0,0,0,1,1,1,1,1,1,0,0,0,0,0,1,1,1,1,1,0]}
{"delays":[0.0,0.0],"total_time_taken":162,"times_avarage":0.49,"times":[3,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,1,1,1,1,1,0,0,0,0,0,0,1,1,1,1,1,0,1,0,0,0,0,1,1,1,1,1,0,0,0,0,1,1,1,1,0,0,1,0,0,0,0,1,1,1,1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,1,0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0,1,1]}
{"delays":[0.0,0.0],"total_time_taken":156,"times_avarage":0.61,"times":[3,1,1,1,1,1,1,1,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,0,1,0,1,1,1,1,1,0,0,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,1,1,1,1,1,1,1,1,1,1,0,0,0,0,1,1,1,1,1,1,1,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,0,1,1,1,1,1,0,0,0,0,0,1,1,1,1,1]}


# `gunicorn performance_test:app -b localhost:8084 -k uvicorn.workers.UvicornWorker --workers 1`

{"delays":[0.0,0.0],"total_time_taken":159,"times_avarage":0.59,"times":[2,0,0,0,0,1,1,1,1,1,1,0,0,0,0,1,1,1,1,1,0,0,0,0,1,0,1,1,1,1,1,0,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,1,1,1,1,1,0,1,1,1,1,0,0,0,0,1,1,1,1,1,1,1,0,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,1,1,1,1,1,0,0,0,0,1,1,1,1,1,0,0]}
{"delays":[0.0,0.0],"total_time_taken":165,"times_avarage":0.62,"times":[3,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,1,0,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,0,0,0,0,0,0,1,1,1,1,1]}
{"delays":[0.0,0.0],"total_time_taken":164,"times_avarage":0.54,"times":[2,0,0,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,0,0,0,0,0,0,0,1,1,1,1,1,0,0,0,1,1,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,0,1,1,1,1,1]}

I made a Python script to benchmark those times more precisely:


import statistics
import requests
from time import sleep

number_of_tests=1000

sites_to_test=[
    {
        'name':'only uvicorn    ',
        'url':'http://127.0.0.1:8083/delay/0.0/0.0'
    },
    {
        'name':'gunicorn+uvicorn',
        'url':'http://127.0.0.1:8084/delay/0.0/0.0'
    }]


for test in sites_to_test:

    total_time_taken_list=[]
    times_avarage_list=[]

    requests.get(test['url']) # first request may be slower, so better to not measure it

    for a in range(number_of_tests):
        r = requests.get(test['url'])
        json= r.json()

        total_time_taken_list.append(json['total_time_taken'])
        times_avarage_list.append(json['times_avarage'])
        # sleep(1) # results are slightly different with sleep between requests

    total_time_taken_avarage=statistics.mean(total_time_taken_list)
    times_avarage_avarage=statistics.mean(times_avarage_list)

    print({'name':test['name'], 'number_of_tests':number_of_tests, 'total_time_taken_avarage':total_time_taken_avarage, 'times_avarage_avarage':times_avarage_avarage})

Results:

{'name': 'only uvicorn    ', 'number_of_tests': 2000, 'total_time_taken_avarage': 586.5985, 'times_avarage_avarage': 4.820865}
{'name': 'gunicorn+uvicorn', 'number_of_tests': 2000, 'total_time_taken_avarage': 571.8415, 'times_avarage_avarage': 4.719035}

Results with await asyncio.sleep(delay1) commented out:

{'name': 'only uvicorn    ', 'number_of_tests': 2000, 'total_time_taken_avarage': 151.301, 'times_avarage_avarage': 0.602495}
{'name': 'gunicorn+uvicorn', 'number_of_tests': 2000, 'total_time_taken_avarage': 144.4655, 'times_avarage_avarage': 0.59196}

I also made another version of the above script that changes the URL on every request (it gives slightly higher times; a sketch of that variant follows its results below):

Results:

{'name': 'only uvicorn    ', 'number_of_tests': 2000, 'total_time_taken_avarage': 589.4315, 'times_avarage_avarage': 4.789385}
{'name': 'gunicorn+uvicorn', 'number_of_tests': 2000, 'total_time_taken_avarage': 589.0915, 'times_avarage_avarage': 4.761095}

Results with await asyncio.sleep(delay1) commented out:

{'name': 'only uvicorn    ', 'number_of_tests': 2000, 'total_time_taken_avarage': 152.8365, 'times_avarage_avarage': 0.59173}
{'name': 'gunicorn+uvicorn', 'number_of_tests': 2000, 'total_time_taken_avarage': 154.4525, 'times_avarage_avarage': 0.59768}
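
The URL-changing variant itself is not shown above, so the following is only a sketch of how it might look, assuming it keeps the same measurement logic as the script above but alternates between the two servers on every request instead of finishing all requests against one server first:

import statistics
import requests

number_of_tests = 1000

sites_to_test = [
    {'name': 'only uvicorn    ', 'url': 'http://127.0.0.1:8083/delay/0.0/0.0'},
    {'name': 'gunicorn+uvicorn', 'url': 'http://127.0.0.1:8084/delay/0.0/0.0'},
]

# collect per-server measurements while alternating between servers on every request
results = {test['name']: {'total_time_taken_list': [], 'times_avarage_list': []}
           for test in sites_to_test}

for test in sites_to_test:
    requests.get(test['url'])  # first request may be slower, so better to not measure it

for a in range(number_of_tests):
    for test in sites_to_test:  # change the URL on every request
        r = requests.get(test['url'])
        body = r.json()
        results[test['name']]['total_time_taken_list'].append(body['total_time_taken'])
        results[test['name']]['times_avarage_list'].append(body['times_avarage'])

for test in sites_to_test:
    stats = results[test['name']]
    print({'name': test['name'],
           'number_of_tests': number_of_tests,
           'total_time_taken_avarage': statistics.mean(stats['total_time_taken_list']),
           'times_avarage_avarage': statistics.mean(stats['times_avarage_list'])})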

This answer should help you debug your results better.

If you share more details about your OS / machine, I think it may help to investigate your results.

Also, please restart your computer / server; it may have an impact.


Update 1:

I noticed that the uvicorn version I used, 0.14.0, is newer than the 0.13.4 stated in the question. I also tested with the older version 0.13.4, but the results were similar and I still could not reproduce your results.
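
A quick way to confirm which uvicorn version a given environment is actually running (a trivial check, not from the original answer):

import uvicorn
print(uvicorn.__version__)  # e.g. 0.13.4 or 0.14.0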


Update 2:

I ran some more benchmarks and noticed something interesting:

With uvloop in requirements.txt:

Whole requirements.txt:

uvicorn==0.14.0
fastapi==0.65.1
gunicorn==20.1.0
uvloop==0.15.2

Results:

Without uvloop in requirements.txt:

Whole requirements.txt:

uvicorn==0.14.0
fastapi==0.65.1
gunicorn==20.1.0

Results:
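
Since the only difference between the two requirements.txt files is uvloop, which changes the event loop implementation uvicorn runs on, it can help to confirm which loop each deployment actually uses. Below is a minimal sketch of such a check; the /loop route and the exact class names reported are illustrative and not part of the original benchmarks:

import asyncio
from fastapi import FastAPI

app = FastAPI()

@app.get("/loop")
async def which_loop():
    # Report the concrete event loop class the server is running on.
    # With uvloop installed this is typically uvloop.Loop; without it,
    # it is one of the standard asyncio loop classes.
    loop = asyncio.get_running_loop()
    return {"loop_class": f"{type(loop).__module__}.{type(loop).__qualname__}"}

Requesting /loop once under uvicorn and once under gunicorn with the UvicornWorker shows whether both setups ended up on the same loop implementation.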


Update 3:

I used only Python 3.9.5 in this answer.
