Event Handling Patterns
*Pattern-Oriented Software Architecture, Volume 2: Patterns for Concurrent and Networked Objects* summarizes four basic patterns for handling events in popular use today: the Reactor, the Proactor, the Asynchronous Completion Token, and the Acceptor-Connector.
- Reactor pattern. This pattern introduces a structure that lets an event-driven application demultiplex and dispatch service requests arriving from one or more clients. It inverts the application's flow of control, following the Hollywood Principle ("don't call us, we'll call you"): once an event is ready, the framework invokes the application's callback to handle it, so the application only needs to implement concrete event handlers that plug into the demultiplexing and dispatching machinery. Although the pattern is fairly intuitive, it has performance limits: in particular, it cannot by itself serve very large numbers of clients or long-running client requests, because all event handlers are serialized at the event-demultiplexing layer. Many Reactor variants exist to improve on this.
- Proactor pattern. This pattern lets an event-driven application efficiently demultiplex and dispatch the service requests triggered by the completion of asynchronous operations, which under the right conditions yields a concurrency advantage. The clients and completion handlers that make up the application act as proactive agents: unlike the Reactor, which passively waits for indication events to arrive and then reacts, they proactively initiate one or more asynchronous operation requests on an asynchronous operation processor, driving the application's internal control and data flow. When an asynchronous operation completes, the asynchronous operation processor cooperates with a designated proactor component to demultiplex the resulting completion event to the associated completion handler and to dispatch that handler's callback; after handling a completion event, the handler can proactively initiate the next asynchronous operation. The main limitation is that asynchronous operations require operating-system support; without it, they must be emulated, for example with multiple threads.
- Asynchronous Completion Token pattern. This pattern lets an application efficiently demultiplex and process the responses to asynchronous operations it invokes on services, improving the efficiency of asynchronous processing; it is essentially an optimization of the demultiplexing step in the Proactor pattern.
- Acceptor-Connector pattern. Often combined with the Reactor, this pattern decouples the connection and initialization of cooperating peer services in a networked system from the processing they perform once connected, allowing applications to configure their connection topologies independently of the services they provide.
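As a rough illustration of the proactor style described above (an analogy of mine using Python's asyncio, not the POSA2 implementation), the application initiates asynchronous operations and a completion handler is called back for each one when it finishes:

```python
import asyncio

completed = []

async def async_operation(n):
    # Stands in for an asynchronous I/O operation run by the event loop / OS.
    await asyncio.sleep(0)
    return n * n

def completion_handler(task):
    # Invoked by the event loop once the operation has completed.
    completed.append(task.result())

async def main():
    loop = asyncio.get_running_loop()
    # Proactively initiate several asynchronous operations ...
    tasks = [loop.create_task(async_operation(n)) for n in (1, 2, 3)]
    # ... and register a completion handler for each.
    for t in tasks:
        t.add_done_callback(completion_handler)
    await asyncio.gather(*tasks)
    await asyncio.sleep(0)  # let any remaining completion callbacks run

asyncio.run(main())
print(sorted(completed))  # [1, 4, 9]
```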
This article focuses on the most commonly used of these, the Reactor pattern, which demultiplexes events and then handles each kind of event with a different callback.
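The core idea can be shown in a minimal sketch (illustrative only, much simpler than the full server later in this article): a selector demultiplexes readiness events, and the loop dispatches each one to the callback registered for that socket.

```python
import selectors
import socket

sel = selectors.DefaultSelector()
received = []

def on_readable(conn, mask):
    # Callback: invoked by the loop when `conn` becomes readable.
    received.append(conn.recv(1024))

# A connected socket pair stands in for a real client connection.
a, b = socket.socketpair()
a.setblocking(False)
sel.register(a, selectors.EVENT_READ, on_readable)

b.sendall(b"hello reactor")        # makes the registered end readable
for key, mask in sel.select(timeout=1):
    key.data(key.fileobj, mask)    # inversion of control: the loop calls us

sel.unregister(a)
a.close()
b.close()
sel.close()
print(received[0])  # b'hello reactor'
```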
The Reactor Pattern
Most high-performance server implementations that address the C10k problem are built on the Reactor pattern, using multiplexed I/O to process requests. Handling a network request mainly involves operations such as connect, accept, read, and write: accepting connections, receiving data, and sending back results. Let us start with the most basic implementation of the pattern.
Single-threaded Reactor
The sequence diagram above gives a simplified picture of the Reactor pattern: every action is driven by either a read event or a write event.
The single-threaded server code is relatively simple, as shown below:
import selectors
import socket

selector = selectors.DefaultSelector()


def application():
    return "test response"


class RequestHandler(object):
    def __init__(self, stream, address, server):
        self.application = application
        self.stream = stream
        self.stream.setblocking(False)
        self.address = address
        self.server = server
        self._recv_buff = ""
        self._write_buff = b""
        self.state = selectors.EVENT_READ
        selector.register(self.stream, selectors.EVENT_READ, self._handle_event)

    def parse_request(self):
        try:
            response = self.application()
            resp = "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nContent-Length: {0}\r\n\r\n{1}".format(len(response), response)
        except Exception:
            response = "error"
            resp = "HTTP/1.1 500 Internal Server Error\r\nContent-Type: text/plain\r\nContent-Length: {0}\r\n\r\n{1}".format(len(response), response)
        self._write_buff += resp.encode(encoding="utf-8")

    def _handle_event(self, fd, mask):
        if mask & selectors.EVENT_READ:
            self._handle_read()
        elif mask & selectors.EVENT_WRITE:
            self._handle_write()
        # Re-register for whichever events still have pending work.
        state = 0
        if self._recv_buff:
            state |= selectors.EVENT_READ
        if self._write_buff:
            state |= selectors.EVENT_WRITE
        if state != 0 and state != self.state:
            self.state = state
            self.modify_state(state)

    def _handle_read(self):
        data = self.stream.recv(1024)
        if data:
            self._recv_buff += data.decode("utf-8")
            self.parse_request()
        else:
            # An empty read means the peer closed the connection.
            self._handle_close()

    def modify_state(self, state):
        selector.modify(self.stream, state, self._handle_event)

    def _handle_write(self):
        while self._write_buff:
            try:
                length = self.stream.send(self._write_buff)
                self._write_buff = self._write_buff[length:]
            except Exception as e:
                print("write error {0}".format(e))

    def _handle_close(self):
        print("handle close")
        selector.unregister(self.stream)
        try:
            self.stream.close()
        except Exception:
            pass


class Server(object):
    address_family = socket.AF_INET
    socket_type = socket.SOCK_STREAM
    request_queue_size = 5

    def __init__(self, server_bind, handle_class=RequestHandler):
        self.__shutdown_request = False
        self.allow_reuse_address = True
        self.socket = None
        self.handle_class = handle_class
        self.server_address = server_bind
        self.socket = socket.socket(self.address_family, self.socket_type)
        self.server_bind()

    def server_bind(self):
        if self.allow_reuse_address:
            self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.socket.bind(self.server_address)
        self.server_address = self.socket.getsockname()
        self.socket.listen(self.request_queue_size)

    def serve_forever(self, poll_interval=0.5):
        selector.register(self.socket, selectors.EVENT_READ, self._handle_request_noblock)
        while True:
            ready = selector.select(poll_interval)
            if self.__shutdown_request:
                break
            for key, mask in ready:
                callback = key.data
                callback(key.fileobj, mask)

    def _handle_request_noblock(self, fd, mask):
        try:
            conn, address = self.socket.accept()
        except Exception:
            return
        try:
            self.handle_class(conn, address, self)
        except Exception as e:
            print("handle_class Error {0}".format(e))


def main():
    server = Server(("127.0.0.1", 5555))
    server.serve_forever()


if __name__ == '__main__':
    main()
This is a minimal single-threaded implementation of the Reactor pattern. Run the script and then access http://127.0.0.1:5555 from a terminal or a browser, and you get the following response:
curl 127.0.0.1:5555
test response
That line is exactly what the application function in the script returns. Because the script only illustrates the principle, it does not parse the request according to the HTTP standard; it simply returns fixed data. The structure also shows that every request is handled to completion inside the event loop itself, so all processing blocks the event-driven framework while it runs. Let's benchmark the performance:
wrk -t4 -c100 -d90s -T5 --latency http://127.0.0.1:5555
Running 2m test @ http://127.0.0.1:5555
  4 threads and 1024 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    59.51ms   14.10ms 225.26ms   75.76%
    Req/Sec     3.88k    448.64     5.62k    72.11%
  Latency Distribution
     50%   63.10ms
     75%   66.94ms
     90%   73.14ms
     99%   83.63ms
  1388445 requests in 1.50m, 103.28MB read
  Socket errors: connect 0, read 1887, write 35, timeout 0
Requests/sec:  15421.53
Transfer/sec:      1.15MB
Improvement: Multiple Threads with a Single Event Loop
In the single-threaded Reactor, one thread both drives the events and runs the business logic in the process. Let's try handing incoming requests off to multiple worker threads while keeping a single event loop for event driving.
import selectors
import socket
import queue
from threading import Thread

selector = selectors.DefaultSelector()


def application():
    return "test response"


class RequestHandler(object):
    def __init__(self, stream, address, server):
        self.application = application
        self.stream = stream
        self.stream.setblocking(False)
        self.address = address
        self.server = server
        self._recv_buff = ""
        self._write_buff = b""
        self.state = selectors.EVENT_READ
        selector.register(self.stream, selectors.EVENT_READ, self._handle_event)

    def parse_request(self):
        try:
            response = self.application()
            resp = "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nContent-Length: {0}\r\n\r\n{1}".format(len(response), response)
        except Exception:
            response = "error"
            resp = "HTTP/1.1 500 Internal Server Error\r\nContent-Type: text/plain\r\nContent-Length: {0}\r\n\r\n{1}".format(len(response), response)
        self._write_buff += resp.encode(encoding="utf-8")

    def _handle_event(self, fd, mask):
        if mask & selectors.EVENT_READ:
            self._handle_read()
        elif mask & selectors.EVENT_WRITE:
            self._handle_write()
        # Re-register for whichever events still have pending work.
        state = 0
        if self._recv_buff:
            state |= selectors.EVENT_READ
        if self._write_buff:
            state |= selectors.EVENT_WRITE
        if state != 0 and state != self.state:
            self.state = state
            self.modify_state(state)

    def _handle_read(self):
        data = self.stream.recv(1024)
        if data:
            self._recv_buff += data.decode("utf-8")
            self.parse_request()
        else:
            self._handle_close()

    def modify_state(self, state):
        selector.modify(self.stream, state, self._handle_event)

    def _handle_write(self):
        while self._write_buff:
            try:
                length = self.stream.send(self._write_buff)
                self._write_buff = self._write_buff[length:]
            except Exception as e:
                print("write error {0}".format(e))

    def _handle_close(self):
        print("handle close")
        selector.unregister(self.stream)
        try:
            self.stream.close()
        except Exception:
            pass


class Server(object):
    address_family = socket.AF_INET
    socket_type = socket.SOCK_STREAM
    request_queue_size = 5

    def __init__(self, server_bind, handle_class=RequestHandler):
        self.__shutdown_request = False
        self.allow_reuse_address = True
        self.socket = None
        self.handle_class = handle_class
        self.server_address = server_bind
        self.socket = socket.socket(self.address_family, self.socket_type)
        self.server_bind()
        self.work_queue = queue.Queue()
        self.start_worker()

    def start_worker(self):
        for i in range(10):
            t = Thread(target=self.spawn_worker, args=(i, ))
            t.start()

    def spawn_worker(self, num):
        while not self.__shutdown_request:
            try:
                conn, address = self.work_queue.get()
            except Exception as e:
                print("spawn_worker get {0}".format(e))
                return
            print("worker thread num : {0}".format(num))
            try:
                self.handle_class(conn, address, self)
            except Exception as e:
                print("handle_class Error {0}".format(e))

    def server_bind(self):
        if self.allow_reuse_address:
            self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.socket.bind(self.server_address)
        self.server_address = self.socket.getsockname()
        self.socket.listen(self.request_queue_size)

    def serve_forever(self, poll_interval=0.5):
        selector.register(self.socket, selectors.EVENT_READ, self._handle_request_noblock)
        while True:
            ready = selector.select(poll_interval)
            if self.__shutdown_request:
                break
            for key, mask in ready:
                callback = key.data
                callback(key.fileobj, mask)

    def _handle_request_noblock(self, fd, mask):
        try:
            conn, address = self.socket.accept()
        except Exception:
            return
        # Hand the accepted connection to a worker thread.
        self.work_queue.put((conn, address))


def main():
    server = Server(("127.0.0.1", 5555))
    server.serve_forever()


if __name__ == '__main__':
    main()
Adding a thread pool addresses concurrent responses to clients, but because of Python's GIL, multithreading may not actually improve performance; moreover, the thread-safe queue introduced here adds lock-contention overhead across threads. The benchmark results after the change are shown below; compared with the single-threaded version, the multithreaded approach actually performs slightly worse.
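A separate wrinkle in the code above: the worker loop blocks forever in work_queue.get(), so __shutdown_request is never re-checked and the workers cannot exit. A common fix (a sketch of mine, not part of the article's code) is to push one sentinel value per worker, which wakes each thread and lets it exit cleanly:

```python
import queue
from threading import Thread

_SENTINEL = object()  # unique shutdown marker, never confused with real work

def worker(work_queue, results):
    while True:
        item = work_queue.get()
        if item is _SENTINEL:        # shutdown signal
            break
        results.append(item * 2)     # stand-in for handle_class(conn, ...)

work_queue = queue.Queue()
results = []
threads = [Thread(target=worker, args=(work_queue, results)) for _ in range(4)]
for t in threads:
    t.start()
for i in range(10):
    work_queue.put(i)                # enqueue "connections"
for _ in threads:                    # one sentinel per worker thread
    work_queue.put(_SENTINEL)
for t in threads:
    t.join()
print(sorted(results))  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```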
wrk -t4 -c1024 -d90s -T5 --latency http://127.0.0.1:5555
Running 2m test @ http://127.0.0.1:5555
  4 threads and 1024 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    58.89ms   17.19ms 242.36ms   74.64%
    Req/Sec     3.77k    446.73     5.53k    70.19%
  Latency Distribution
     50%   64.36ms
     75%   68.54ms
     90%   74.92ms
     99%   85.79ms
  1349814 requests in 1.50m, 100.41MB read
  Socket errors: connect 0, read 1970, write 60, timeout 0
Requests/sec:  14987.46
Transfer/sec:      1.11MB
Reactor with Multiple Event Loops and Multithreaded Handling
In this variant, several additional event loops are introduced: the main event loop only accepts new connections, and all subsequent interaction on each connection is driven by one of the sub event loops, aiming to dispatch events more efficiently than a single event loop can.
import selectors
import socket
import queue
from threading import Thread, Lock
import random

selector = selectors.DefaultSelector()


def application():
    return "test response"


class RequestHandler(object):
    def __init__(self, stream, address, server, sel):
        self.application = application
        self.stream = stream
        self.stream.setblocking(False)
        self.address = address
        self.server = server
        self._recv_buff = ""
        self._write_buff = b""
        self.state = selectors.EVENT_READ
        # Each handler registers with the sub-selector assigned to it.
        self.sel = sel
        self.sel.register(self.stream, selectors.EVENT_READ, self._handle_event)

    def parse_request(self):
        try:
            response = self.application()
            resp = "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nContent-Length: {0}\r\n\r\n{1}".format(len(response), response)
        except Exception:
            response = "error"
            resp = "HTTP/1.1 500 Internal Server Error\r\nContent-Type: text/plain\r\nContent-Length: {0}\r\n\r\n{1}".format(len(response), response)
        self._write_buff += resp.encode(encoding="utf-8")

    def _handle_event(self, fd, mask):
        if mask & selectors.EVENT_READ:
            self._handle_read()
        elif mask & selectors.EVENT_WRITE:
            self._handle_write()
        state = 0
        if self._recv_buff:
            state |= selectors.EVENT_READ
        if self._write_buff:
            state |= selectors.EVENT_WRITE
        if state != 0 and state != self.state:
            self.state = state
            self.modify_state(state)

    def _handle_read(self):
        data = self.stream.recv(1024)
        if data:
            self._recv_buff += data.decode("utf-8")
            self.parse_request()
        else:
            self._handle_close()

    def modify_state(self, state):
        self.sel.modify(self.stream, state, self._handle_event)

    def _handle_write(self):
        while self._write_buff:
            try:
                length = self.stream.send(self._write_buff)
                self._write_buff = self._write_buff[length:]
            except Exception as e:
                print("write error {0}".format(e))

    def _handle_close(self):
        print("handle close")
        self.sel.unregister(self.stream)
        try:
            self.stream.close()
        except Exception:
            pass


class Server(object):
    address_family = socket.AF_INET
    socket_type = socket.SOCK_STREAM
    request_queue_size = 5

    def __init__(self, server_bind, handle_class=RequestHandler):
        self.__shutdown_request = False
        self.allow_reuse_address = True
        self.socket = None
        self.handle_class = handle_class
        self.server_address = server_bind
        self.socket = socket.socket(self.address_family, self.socket_type)
        self.server_bind()
        # Create the sub-selector list and its lock before starting any
        # thread that might touch them.
        self.sels = []
        self.lock = Lock()
        self.work_queue = queue.Queue()
        self.start_worker()
        self.start_sels()

    def start_sels(self):
        for i in range(5):
            t = Thread(target=self.sub_forever)
            t.start()

    def start_worker(self):
        for i in range(10):
            t = Thread(target=self.spawn_worker, args=(i, ))
            t.start()

    def spawn_worker(self, num):
        while not self.__shutdown_request:
            try:
                conn, address = self.work_queue.get()
            except Exception as e:
                print("spawn_worker get {0}".format(e))
                return
            print("worker thread num : {0}".format(num))
            # Note: racy if a connection arrives before all five
            # sub-selectors have registered themselves in self.sels.
            rand_index_sel = random.randint(0, 4)
            print("random sels index : {0}".format(rand_index_sel))
            sel = self.sels[rand_index_sel]
            try:
                self.handle_class(conn, address, self, sel)
            except Exception as e:
                print("handle_class Error {0}".format(e))

    def server_bind(self):
        if self.allow_reuse_address:
            self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.socket.bind(self.server_address)
        self.server_address = self.socket.getsockname()
        self.socket.listen(self.request_queue_size)

    def sub_forever(self, poll_interval=0.5):
        # Each sub-thread runs its own independent event loop.
        selector_sub = selectors.DefaultSelector()
        print("start sub_selector_sub")
        with self.lock:
            self.sels.append(selector_sub)
        print("current sels ", self.sels)
        while True:
            ready = selector_sub.select(poll_interval)
            if self.__shutdown_request:
                break
            for key, mask in ready:
                print("sub ready : {0}".format(key))
                callback = key.data
                callback(key.fileobj, mask)

    def serve_forever(self, poll_interval=0.5):
        # The main event loop only accepts new connections.
        selector.register(self.socket, selectors.EVENT_READ, self._handle_request_noblock)
        while True:
            ready = selector.select(poll_interval)
            if self.__shutdown_request:
                break
            for key, mask in ready:
                print("main selector events : {0}".format(key))
                callback = key.data
                callback(key.fileobj, mask)

    def _handle_request_noblock(self, fd, mask):
        try:
            conn, address = self.socket.accept()
        except Exception:
            return
        self.work_queue.put((conn, address))


def main():
    server = Server(("127.0.0.1", 5555))
    server.serve_forever()


if __name__ == '__main__':
    main()
In this variant, several sub-threads are added, each initializing and independently running its own event loop; the sub event loops are independent of one another, with the goal of improving event-dispatch responsiveness.
wrk -t4 -c1024 -d90s -T5 --latency http://127.0.0.1:5555
Running 2m test @ http://127.0.0.1:5555
  4 threads and 1024 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   181.72ms   57.96ms 471.36ms   72.76%
    Req/Sec     1.19k    280.01     2.27k    71.15%
  Latency Distribution
     50%  194.47ms
     75%  216.44ms
     90%  239.99ms
     99%  303.04ms
  425025 requests in 1.50m, 31.62MB read
  Socket errors: connect 0, read 2101, write 221, timeout 0
Requests/sec:   4719.33
Transfer/sec:    359.48KB
Judging from the benchmark, this naive rewrite performs far worse. Using this pattern well would require many further optimizations, and the extra Python threads only add scheduling overhead. With more time, the responsiveness of this variant could be improved further.
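One concrete optimization among those hinted at above (my suggestion, not part of the article's code): hand new connections to the sub event loops round-robin instead of with random.randint, which spreads load evenly and avoids unlucky clustering on one sub-selector.

```python
import itertools
import threading

class SelectorPool:
    """Round-robin dispenser of sub-selectors for new connections."""

    def __init__(self, sub_selectors):
        self._cycle = itertools.cycle(sub_selectors)
        self._lock = threading.Lock()  # guard against concurrent worker access

    def pick(self):
        with self._lock:
            return next(self._cycle)

# Strings stand in for the selector objects created in sub_forever().
pool = SelectorPool(["sel-0", "sel-1", "sel-2"])
assignments = [pool.pick() for _ in range(6)]
print(assignments)  # ['sel-0', 'sel-1', 'sel-2', 'sel-0', 'sel-1', 'sel-2']
```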
Summary
This article surveyed some common examples of the Reactor pattern. The different variants respond to requests differently and therefore show different performance characteristics. The examples here only illustrate the principles; concrete optimizations were not considered in depth, and the sample code may contain mistakes. The single-threaded Reactor is currently the most widely used form; Redis's event loop, for example, uses this model. Given the limits of my knowledge, corrections are welcome.