My Flask project uses the logging module with a TimedRotatingFileHandler, and gunicorn runs 32 worker processes.
The logging module is not process-safe; the official logging docs say as much:
Because there is no standard way to serialize access to a single file across multiple processes in Python. If you need to log to a single file from multiple processes, one way of doing this is to have all the processes log to a SocketHandler, and have a separate process which implements a socket server which reads from the socket and logs to file. (If you prefer, you can dedicate one thread in one of the existing processes to perform this function.)
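The docs stop short of showing code, so here is a minimal single-process sketch of the pattern they describe, assuming nothing beyond the stdlib: workers send records through a `SocketHandler`, and one dedicated listener (here a thread, in production a separate process) is the only thing that touches the log file (here a `StringIO` stands in for the real `TimedRotatingFileHandler`). The port, logger name, and format string are illustrative, not from the thread.

```python
import io
import logging
import logging.handlers
import pickle
import socketserver
import struct
import threading
import time

# Stand-in for the real file handler; only the listener ever writes to it.
sink = logging.StreamHandler(io.StringIO())
sink.setFormatter(logging.Formatter("%(process)d %(message)s"))

class LogRecordHandler(socketserver.StreamRequestHandler):
    def handle(self):
        while True:
            # SocketHandler sends each record as a 4-byte big-endian length
            # prefix followed by a pickled LogRecord attribute dict.
            chunk = self.connection.recv(4)
            if len(chunk) < 4:
                break
            slen = struct.unpack(">L", chunk)[0]
            data = self.connection.recv(slen)
            while len(data) < slen:
                data += self.connection.recv(slen - len(data))
            record = logging.makeLogRecord(pickle.loads(data))
            sink.emit(record)

# Listener: in a real deployment this is its own process owning the file.
server = socketserver.ThreadingTCPServer(("localhost", 0), LogRecordHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Worker side: swap the TimedRotatingFileHandler for a SocketHandler.
log = logging.getLogger("app")
log.setLevel(logging.INFO)
log.addHandler(logging.handlers.SocketHandler("localhost", server.server_address[1]))
log.info("hello from a worker")

time.sleep(1.0)  # give the listener thread a moment to drain the socket
print(sink.stream.getvalue())
```

With 32 gunicorn workers each would open its own socket connection, and serialization happens naturally because a single process drains them all.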
Yet the project has been running for half a year and I have never once seen garbled logs; every line comes out clean.
A Google search turned up a gunicorn issue: https://github.com/benoitc/gunicorn/issues/1272
I read through the whole issue but am still confused. The gunicorn author's explanation was:
The fd is shared between all workers, and until it isn't over the limit (depending on your system) alls logs will go over it.
I would appreciate a detailed explanation, thanks.
1
janxin 2016-11-15 13:14:32 +08:00 via iPhone
Because the child processes inherit the parent process's open file descriptors.
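This inheritance is easy to demonstrate (POSIX only). In the sketch below, both sides of a `fork()` hold the same open file description, so they share one kernel file offset and both lines land intact in one file; the path comes from `tempfile` and is purely illustrative. Opening with `O_APPEND` mirrors what `logging`'s file handlers do.

```python
import os
import tempfile

# Create a scratch file, then open it once *before* forking, the way
# gunicorn's master opens the log before spawning workers.
tfd, path = tempfile.mkstemp()
os.close(tfd)
fd = os.open(path, os.O_WRONLY | os.O_APPEND)

pid = os.fork()
if pid == 0:
    # Child: same fd, same open file description, same offset.
    os.write(fd, b"child\n")
    os._exit(0)

os.write(fd, b"parent\n")
os.waitpid(pid, 0)
os.close(fd)

with open(path) as f:
    print(f.read())  # both lines, neither overwriting the other
```

Because the offset lives in the shared open file description (and `O_APPEND` repositions to end-of-file on every write anyway), the two processes cannot clobber each other's bytes.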
2
9hills 2016-11-15 13:18:02 +08:00
Linux has a property: if you open the file descriptor in append mode (O_APPEND) and each write is smaller than PIPE_BUF, the system guarantees that writes from multiple processes do not clobber each other.
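A small stress test makes this concrete. Four processes each append 200 short lines through their own independently opened `O_APPEND` descriptors; if appends could interleave mid-write, some lines would come out torn or mixed. The line length and process count are arbitrary choices for the demo, well under PIPE_BUF (4096 bytes on Linux).

```python
import os
import tempfile
from multiprocessing import Process

path = tempfile.mkstemp()[1]

def writer(tag: str, n: int = 200) -> None:
    # Each process opens its own fd; O_APPEND makes every write()
    # atomically position itself at the current end of file.
    fd = os.open(path, os.O_WRONLY | os.O_APPEND)
    line = (tag * 10 + "\n").encode()  # one short, homogeneous line per write
    for _ in range(n):
        os.write(fd, line)
    os.close(fd)

procs = [Process(target=writer, args=(t,)) for t in "ABCD"]
for p in procs:
    p.start()
for p in procs:
    p.join()

lines = open(path).read().splitlines()
# Every one of the 4 * 200 lines should be intact: 10 copies of one letter.
print(len(lines), all(len(l) == 10 and len(set(l)) == 1 for l in lines))
```

This matches what the OP observes: each log record is a single small `write()`, so 32 workers appending to one file never produce torn lines even without any locking.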
3
wwqgtxx 2016-11-15 14:30:10 +08:00 via iPhone
Because the Linux console guarantees that output is process-safe.
4
ysymi 2018-02-13 15:03:17 +08:00
Has the OP never hit the problem where, when the log file rotates, some worker processes keep writing to the old file?
5
motianya211314 2019-04-12 11:46:00 +08:00
@ysymi With multiple processes, every process performs the rotation again, so logs that were already rotated get deleted. How did you solve it?
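One common fix for the rotation race described above: stop rotating inside the workers entirely. Let an external tool (e.g. logrotate) rename the file, and replace `TimedRotatingFileHandler` with the stdlib's `WatchedFileHandler`, which stats the path on every emit and reopens when the inode changes, so no worker ever deletes another worker's rotated output. The sketch below simulates the rotation with `os.rename`; the paths and logger name are illustrative.

```python
import logging
import logging.handlers
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.log")

log = logging.getLogger("app")
log.setLevel(logging.INFO)
# WatchedFileHandler reopens the file whenever the dev/inode it sees on
# disk no longer matches the one it has open.
log.addHandler(logging.handlers.WatchedFileHandler(path))

log.info("before rotation")
os.rename(path, path + ".1")  # simulate logrotate moving the file aside
log.info("after rotation")    # handler notices the rename and reopens `path`

print(open(path).read())        # only the post-rotation line
print(open(path + ".1").read()) # only the pre-rotation line
```

Since rotation is now a rename performed by one external process, the 32 workers never race each other over who deletes or truncates the old files.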