Directory    : /proc/self/root/proc/self/root/lib64/python3.6/__pycache__/
Current File : //proc/self/root/proc/self/root/lib64/python3.6/__pycache__/asyncore.cpython-36.pyc
"""Basic infrastructure for asynchronous socket service clients and servers.

There are only two ways to have a program on a single processor do "more than one thing at a time". Multi-threaded programming is the simplest and most popular way to do it, but there is another very different technique that lets you have nearly all the advantages of multi-threading without actually using multiple threads. It is really only practical if your program is largely I/O bound. If your program is CPU bound, then pre-emptively scheduled threads are probably what you really need. Network servers are rarely CPU-bound, however.

If your operating system supports the select() system call in its I/O library (and nearly all do), then you can use it to juggle multiple communication channels at once, doing other work while your I/O is taking place in the "background". Although this strategy can seem strange and complex, especially at first, it is in many ways easier to understand and control than multi-threaded programming. The module documented here solves many of the difficult problems for you, making the task of building sophisticated high-performance network servers and clients a snap."""

(The remainder of the file is compiled CPython 3.6 bytecode for the standard-library asyncore module, not readable source. The legible fragments show the errno constants it imports -- EALREADY, EINPROGRESS, EWOULDBLOCK, ECONNRESET, EINVAL, ENOTCONN, ESHUTDOWN, EISCONN, EBADF, ECONNABORTED, EPIPE, EAGAIN, errorcode -- along with the module's class and function table: the ExitNow exception, the polling helpers and the loop() function, the dispatcher and dispatcher_with_send classes, and the POSIX-only file_wrapper and file_dispatcher classes.)
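For context, the docstring recovered above describes the select()-based event loop that asyncore provides. The sketch below shows the conventional way the module's documented dispatcher API is used to build a small echo server; the EchoHandler and EchoServer names and the localhost:8080 address are illustrative choices, not taken from the dumped file.

# Minimal asyncore echo server (sketch).  The dispatcher API calls used here
# (create_socket, set_reuse_addr, bind, listen, handle_accepted, handle_read,
# recv, send, asyncore.loop) are the ones the module documents; the class
# names, host, and port are assumptions made for the example.
import asyncore
import socket


class EchoHandler(asyncore.dispatcher_with_send):
    """Services one accepted connection; echoes back whatever it receives."""

    def handle_read(self):
        data = self.recv(8192)
        if data:
            self.send(data)


class EchoServer(asyncore.dispatcher):
    """Listening socket; hands each new connection to an EchoHandler."""

    def __init__(self, host, port):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.set_reuse_addr()
        self.bind((host, port))
        self.listen(5)

    def handle_accepted(self, sock, addr):
        # Called by the event loop for each incoming connection.
        EchoHandler(sock)


if __name__ == '__main__':
    server = EchoServer('localhost', 8080)
    asyncore.loop()  # run the select()-based event loop until all channels close

Every open socket is serviced from this single select() loop rather than from one thread per connection, which is exactly the trade-off the docstring argues for: simpler control flow for I/O-bound servers at the cost of being unsuitable for CPU-bound work.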