<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet href="/rss.xsl" type="text/xsl"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Rafiqul.dev</title><description>A Software Artisan, who attempts to share something to the world here</description><link>https://rafiqul.dev</link><item><title>Generic Function in Python with Singledispatch</title><link>https://rafiqul.dev/posts/generic-function-in-python-with-singledispatch</link><guid isPermaLink="true">https://rafiqul.dev/posts/generic-function-in-python-with-singledispatch</guid><description>An overview of how we can implement a generic function in Python with singledispatch</description><pubDate>Tue, 27 Mar 2018 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Imagine you could write different implementations of a function of the same name in the same scope, depending on the types of its arguments. Wouldn’t it be great? Of course, it would. There is a term for this: it is called a “generic function”. Python added support for generic functions in version 3.4 (&lt;a href=&quot;https://www.python.org/dev/peps/pep-0443/&quot; target=&quot;_blank&quot;&gt;PEP 443&lt;/a&gt;), in the form of the &lt;code&gt;@singledispatch&lt;/code&gt; decorator in the &lt;code&gt;functools&lt;/code&gt; module.&lt;/p&gt;
&lt;h2&gt;What is Singledispatch?&lt;/h2&gt;
&lt;p&gt;At this point, you may be wondering what &lt;code&gt;singledispatch&lt;/code&gt; is. Okay, let’s start with generic functions again.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;A generic function is composed of multiple functions implementing the same operation for different types. Which implementation should be used during a call is determined by the dispatch algorithm. When the implementation is chosen based on the type of a single argument, this is known as single dispatch.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;In Python, the implementation is chosen based on the type of the function’s first argument. Simply put, you define a default function and then register additional versions of that function depending on the type of the first argument.&lt;/p&gt;
&lt;h2&gt;Singledispatch in Action&lt;/h2&gt;
&lt;p&gt;Let’s see &lt;code&gt;singledispatch&lt;/code&gt; in action. There are a few steps to writing a generic function with &lt;code&gt;singledispatch&lt;/code&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Import &lt;code&gt;singledispatch&lt;/code&gt; from &lt;code&gt;functools&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Define a default, or fallback, function with the &lt;code&gt;@singledispatch&lt;/code&gt; decorator. This is our generic function.&lt;/li&gt;
&lt;li&gt;Then, register additional implementations by passing the type to the &lt;code&gt;register()&lt;/code&gt; attribute of the generic function. It’s a decorator, so you decorate your implementations like this: &lt;code&gt;@function_name.register(type)&lt;/code&gt;. You can also register lambdas and pre-existing functions.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Now, we will implement a generic function called &lt;code&gt;fprint&lt;/code&gt;, which will print its argument in a formatted way based on the type. For a &lt;code&gt;list&lt;/code&gt; it will print each index and value along with the type of the value, for a &lt;code&gt;dict&lt;/code&gt; it will print each key-value pair along with their types, and so on. By default, it will print the passed argument along with its type. Let’s define our default function first.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from functools import singledispatch


@singledispatch
def fprint(data):
    print(f&apos;({type(data).__name__}) {data}&apos;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I am not going to explain the implementation. It’s fairly basic. It’s the default or fallback implementation of our generic function. We define a function and decorate it with &lt;code&gt;@singledispatch&lt;/code&gt; decorator. If there is no registered implementation for a specific type, its method resolution order is used to find a more generic implementation. The original function decorated with &lt;code&gt;@singledispatch&lt;/code&gt; is registered for the base object type, which means it is used if no better implementation is found.&lt;/p&gt;
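&lt;p&gt;To see that MRO-based lookup concretely, here is a small standalone sketch, using a hypothetical &lt;code&gt;describe&lt;/code&gt; function (kept separate from our &lt;code&gt;fprint&lt;/code&gt;): since &lt;code&gt;bool&lt;/code&gt; is a subclass of &lt;code&gt;int&lt;/code&gt;, a boolean argument finds the &lt;code&gt;int&lt;/code&gt; implementation through the MRO when no &lt;code&gt;bool&lt;/code&gt;-specific one is registered.&lt;/p&gt;

```python
from functools import singledispatch


@singledispatch
def describe(data):
    # Default implementation, used when no better match exists
    return f'({type(data).__name__}) {data}'


@describe.register(int)
def _(data):
    return f'int-like: {data}'


# bool has no registered implementation, but its MRO
# (bool -> int -> object) reaches the int one.
print(describe(True))   # int-like: True
print(describe('hi'))   # (str) hi
```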
&lt;p&gt;Remember the next step? Now, it’s time for registering the overloaded implementations. Let’s implement for &lt;code&gt;list&lt;/code&gt; first.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;@fprint.register(list)
def _(data):
    formatted_header = f&apos;{type(data).__name__} -&amp;gt; index : value&apos;
    print(formatted_header)
    print(&apos;-&apos; * len(formatted_header))
    for index, value in enumerate(data):
        print(f&apos;{index} : ({type(value).__name__}) {value}&apos;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In case you are wondering why I am using &lt;code&gt;_&lt;/code&gt; as the name: it’s because I want only one public generic function. If you give the implementation a name, you get the option to use that specific function independently. Assume that we named the function above &lt;code&gt;list_print&lt;/code&gt; and didn’t decorate it with &lt;code&gt;@fprint.register(list)&lt;/code&gt;. We could then use &lt;code&gt;fprint.register()&lt;/code&gt; as a plain function, like this: &lt;code&gt;fprint.register(list, list_print)&lt;/code&gt;. We can also stack more than one decorator to cover multiple types, just like this.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;@fprint.register(list)
@fprint.register(set)
@fprint.register(tuple)
def _(data):
    formatted_header = f&apos;{type(data).__name__} -&amp;gt; index : value&apos;
    print(formatted_header)
    print(&apos;-&apos; * len(formatted_header))
    for index, value in enumerate(data):
        print(f&apos;{index} : ({type(value).__name__}) {value}&apos;)
&lt;/code&gt;&lt;/pre&gt;
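&lt;p&gt;For completeness, here is a minimal sketch of the functional registration form mentioned above, with the hypothetical &lt;code&gt;list_print&lt;/code&gt; name:&lt;/p&gt;

```python
from functools import singledispatch


@singledispatch
def fprint(data):
    print(f'({type(data).__name__}) {data}')


# A normal, independently callable function...
def list_print(data):
    for index, value in enumerate(data):
        print(f'{index} : ({type(value).__name__}) {value}')


# ...registered for list without the decorator syntax.
fprint.register(list, list_print)
```

&lt;p&gt;After this call, &lt;code&gt;fprint([1, 2])&lt;/code&gt; dispatches to &lt;code&gt;list_print&lt;/code&gt;, while &lt;code&gt;list_print&lt;/code&gt; itself remains usable on its own.&lt;/p&gt;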
&lt;p&gt;We have almost finished our generic function; only the &lt;code&gt;dict&lt;/code&gt; type remains. Here is our full code, along with the implementation for the &lt;code&gt;dict&lt;/code&gt; type.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from functools import singledispatch


@singledispatch
def fprint(data):
    print(f&apos;({type(data).__name__}) {data}&apos;)


@fprint.register(list)
@fprint.register(set)
@fprint.register(tuple)
def _(data):
    formatted_header = f&apos;{type(data).__name__} -&amp;gt; index : value&apos;
    print(formatted_header)
    print(&apos;-&apos; * len(formatted_header))
    for index, value in enumerate(data):
        print(f&apos;{index} : ({type(value).__name__}) {value}&apos;)


@fprint.register(dict)
def _(data):
    formatted_header = f&apos;{type(data).__name__} -&amp;gt; key : value&apos;
    print(formatted_header)
    print(&apos;-&apos; * len(formatted_header))
    for key, value in data.items():
        print(f&apos;({type(key).__name__}) {key}: ({type(value).__name__}) {value}&apos;)

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, it&apos;s time for the ultimate test. Let&apos;s call the same &lt;code&gt;fprint&lt;/code&gt; function with different data types.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt;&amp;gt;&amp;gt; fprint(&apos;hello&apos;)
(str) hello
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt;&amp;gt;&amp;gt; fprint(21)
(int) 21
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt;&amp;gt;&amp;gt; fprint(3.14159)
(float) 3.14159
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt;&amp;gt;&amp;gt; fprint([2, 3, 5, 7, 11])
list -&amp;gt; index : value
---------------------
0 : (int) 2
1 : (int) 3
2 : (int) 5
3 : (int) 7
4 : (int) 11
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt;&amp;gt;&amp;gt; fprint({2, 3, 5, 7, 11})
set -&amp;gt; index : value
--------------------
0 : (int) 2
1 : (int) 3
2 : (int) 5
3 : (int) 7
4 : (int) 11
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt;&amp;gt;&amp;gt; fprint((13, 17, 23, 29, 31))
tuple -&amp;gt; index : value
----------------------
0 : (int) 13
1 : (int) 17
2 : (int) 23
3 : (int) 29
4 : (int) 31
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt;&amp;gt;&amp;gt; fprint({&apos;name&apos;: &apos;John Doe&apos;, &apos;age&apos;: 32, &apos;location&apos;: &apos;New York&apos;})
dict -&amp;gt; key : value
-------------------
(str) name: (str) John Doe
(str) age: (int) 32
(str) location: (str) New York
&lt;/code&gt;&lt;/pre&gt;
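&lt;p&gt;One more note: since Python 3.7, &lt;code&gt;register()&lt;/code&gt; can also infer the type from the first argument’s annotation, so you don’t have to pass it explicitly. A small sketch with a hypothetical &lt;code&gt;tag&lt;/code&gt; function that returns instead of printing, for brevity:&lt;/p&gt;

```python
from functools import singledispatch


@singledispatch
def tag(data):
    return f'({type(data).__name__}) {data}'


# The dispatch type is inferred from the annotation on data.
@tag.register
def _(data: list):
    return f'list of {len(data)} items'


print(tag([1, 2, 3]))  # list of 3 items
print(tag(42))         # (int) 42
```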
&lt;h2&gt;Summing It Up&lt;/h2&gt;
&lt;p&gt;As you can see, Python provides a clear way to define and extend generic functions. It opens up some interesting possibilities for refactoring your code. If you are interested in learning more about &lt;code&gt;singledispatch&lt;/code&gt;, you should check out &lt;a href=&quot;https://www.python.org/dev/peps/pep-0443/&quot; target=&quot;_blank&quot;&gt;PEP 443&lt;/a&gt; and the &lt;code&gt;functools&lt;/code&gt; &lt;a href=&quot;https://docs.python.org/3/library/functools.html#functools.singledispatch&quot; target=&quot;_blank&quot;&gt;docs&lt;/a&gt;.&lt;/p&gt;
</content:encoded><author>Rafiqul Hasan</author></item><item><title>FastAPI Deconstructed: Anatomy of a Modern ASGI Framework</title><link>https://rafiqul.dev/posts/fastapi-deconstructed</link><guid isPermaLink="true">https://rafiqul.dev/posts/fastapi-deconstructed</guid><description>Written version of my talk at PyCon APAC 2024 in Indonesia</description><pubDate>Tue, 19 Nov 2024 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Recently I had the opportunity to talk about FastAPI under the hood at PyCon APAC 2024. The title of the talk was “FastAPI Deconstructed: Anatomy of a Modern ASGI Framework”. Then I thought, why not have a written version of the talk, something like a blog post? So, here it is.&lt;/p&gt;
&lt;p&gt;You can find the slides here: https://github.com/shopnilsazal/fastapi-deconstructed&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;FastAPI has quickly become one of the go-to frameworks for Python developers who need a high-performance, developer-friendly API framework. With support for asynchronous programming, dependency injection, and automatic OpenAPI documentation, FastAPI stands out for its speed and ease of use. This post will break down the core components of FastAPI, detailing how each part, from ASGI and Uvicorn to Starlette and Pydantic, works together to create a robust, modern web framework.&lt;/p&gt;
&lt;h3&gt;Hello World&lt;/h3&gt;
&lt;p&gt;Let’s begin with the fundamentals of a FastAPI application. A “Hello World” example in FastAPI is very straightforward.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from fastapi import FastAPI

app = FastAPI()

@app.get(&quot;/&quot;)
async def hello():
    return {&quot;message&quot;: &quot;Hello, World!&quot;}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With a simple setup like this, FastAPI takes care of:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Defining an asynchronous route.&lt;/li&gt;
&lt;li&gt;Parsing and validating requests.&lt;/li&gt;
&lt;li&gt;Serializing JSON responses.&lt;/li&gt;
&lt;li&gt;Generating automatic API docs.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Here’s how we can run this application.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;uvicorn main:app
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;hypercorn main:app
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;granian --interface asgi main:app
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As we can see, there are multiple ways to run our application. The main point is that we need an ASGI-compliant server; we can use any server that implements the ASGI protocol. For simplicity, in this post I will use &lt;code&gt;uvicorn&lt;/code&gt; as the example ASGI server when explaining related details.&lt;/p&gt;
&lt;h3&gt;Building Blocks&lt;/h3&gt;
&lt;p&gt;FastAPI’s functionality is layered on top of several powerful components:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;ASGI&lt;/strong&gt;: The asynchronous protocol layer that handles communication between the server and the application.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Uvicorn&lt;/strong&gt;: A high-performance ASGI server that serves FastAPI applications.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Starlette&lt;/strong&gt;: An ASGI framework providing routing, middleware, and request/response handling.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Pydantic&lt;/strong&gt;: A library for data validation and parsing, used in FastAPI to ensure data consistency and reliability.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Dependency Injection:&lt;/strong&gt; A built-in dependency injection system that makes it easy to inject dependencies such as database connections, services, or configuration.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Automatic API Doc:&lt;/strong&gt; Automatically generates an OpenAPI specification for the API, which provides detailed documentation and interactive features.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;ASGI - The Protocol Layer&lt;/h3&gt;
&lt;p&gt;ASGI, or the Asynchronous Server Gateway Interface, serves as the foundation of FastAPI, enabling asynchronous programming by providing a standardized interface between the application and the server. ASGI evolved from WSGI (Web Server Gateway Interface) to support real-time web features like WebSockets and many concurrent connections, allowing Python applications to handle high loads without blocking. Currently, the ASGI specification covers HTTP/1.1, HTTP/2, and WebSocket.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;flowchart LR
    A[Client] --&amp;gt;|Sends HTTP Request| B[ASGI Server]
    B --&amp;gt; |Parse and Translate &amp;lt;br/&amp;gt; to Scope and Events| C[ASGI App]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here’s what the request flow of an ASGI application looks like from a very high level. When the client sends an HTTP request, the ASGI server accepts the request and parses &amp;amp; translates it into a &lt;code&gt;scope&lt;/code&gt; and &lt;code&gt;events&lt;/code&gt; (we will see the details of &lt;code&gt;scope&lt;/code&gt; and &lt;code&gt;events&lt;/code&gt; a little bit later). Then, the ASGI app receives the &lt;code&gt;scope&lt;/code&gt; and &lt;code&gt;events&lt;/code&gt; and processes the request. Now let’s see some details about the ASGI protocol itself.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;ASGI Components:&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Scopes&lt;/strong&gt;: ASGI defines a &lt;code&gt;scope&lt;/code&gt; for each connection. This is a dictionary containing the connection’s metadata. For HTTP requests, this includes method, path, query string, headers, etc. Each request or connection is encapsulated in a unique scope.&lt;/p&gt;
&lt;p&gt;Example HTTP scope:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;scope = {
    &quot;type&quot;: &quot;http&quot;,  # The type of connection (&quot;http&quot;, &quot;websocket&quot;)
    &quot;http_version&quot;: &quot;1.1&quot;,  # HTTP version
    &quot;method&quot;: &quot;GET&quot;,  # HTTP method, like GET, POST
    &quot;path&quot;: &quot;/hello&quot;,  # URL path requested by the client
    &quot;query_string&quot;: b&quot;name=John&quot;,  # Query string in the request
    &quot;headers&quot;: [  # HTTP/Websocket headers
        (b&quot;host&quot;, b&quot;example.com&quot;),
        (b&quot;user-agent&quot;, b&quot;Mozilla/5.0&quot;),
        (b&quot;accept&quot;, b&quot;text/html&quot;),
    ],
    &quot;client&quot;: (&quot;127.0.0.1&quot;, 12345),  # Client IP address and port
    &quot;server&quot;: (&quot;127.0.0.1&quot;, 8000),  # Server IP address and port
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Events&lt;/strong&gt;: ASGI operates on events for handling requests. Events are async functions used to receive incoming data or send outgoing data:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Receive&lt;/strong&gt;: An &lt;code&gt;awaitable&lt;/code&gt; callable that the application calls to receive events (such as HTTP requests or WebSocket messages).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Send&lt;/strong&gt;: An &lt;code&gt;awaitable&lt;/code&gt; callable that the application uses to send responses back to the server.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Lifespan Events&lt;/strong&gt;: ASGI also supports lifespan events, which handle startup and shutdown operations. These events allow setup or cleanup tasks (such as initializing or closing a database connection) to run at the server start or stop.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This is what a simple ASGI app looks like. No framework, just a plain Python async function.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;async def app(scope, receive, send):
    assert scope[&apos;type&apos;] == &apos;http&apos;

    await send({
        &apos;type&apos;: &apos;http.response.start&apos;,
        &apos;status&apos;: 200,
        &apos;headers&apos;: [
            [b&apos;content-type&apos;, b&apos;text/plain&apos;],
        ],
    })
    await send({
        &apos;type&apos;: &apos;http.response.body&apos;,
        &apos;body&apos;: b&apos;Hello, world!&apos;,
    })
&lt;/code&gt;&lt;/pre&gt;
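&lt;p&gt;The lifespan events mentioned above follow the same send/receive pattern. Here is a rough sketch of how a bare ASGI app might handle them; the comments mark where real setup and cleanup code would go.&lt;/p&gt;

```python
async def app(scope, receive, send):
    # The server calls the app once with a lifespan scope,
    # and separately once per request with an http scope.
    if scope['type'] == 'lifespan':
        while True:
            message = await receive()
            if message['type'] == 'lifespan.startup':
                # e.g. open a database connection pool here
                await send({'type': 'lifespan.startup.complete'})
            elif message['type'] == 'lifespan.shutdown':
                # e.g. close the pool here
                await send({'type': 'lifespan.shutdown.complete'})
                return
```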
&lt;h3&gt;Uvicorn - The ASGI Server&lt;/h3&gt;
&lt;p&gt;Uvicorn is the ASGI server most commonly used to run FastAPI applications, though you could run a FastAPI app with any other ASGI server. Uvicorn is designed for speed and efficiency, making it an ideal choice for applications that require high concurrency. It is built on top of &lt;code&gt;uvloop&lt;/code&gt;, a high-performance implementation of the asyncio event loop, which enhances its ability to handle I/O-bound tasks efficiently.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Request Lifecycle in Uvicorn&lt;/strong&gt;:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Accept Connection&lt;/strong&gt;: Uvicorn accepts a connection and creates an ASGI scope for the incoming HTTP request, including metadata like headers and method.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Dispatch Request&lt;/strong&gt;: The scope is dispatched to the FastAPI application. Uvicorn uses &lt;code&gt;uvloop&lt;/code&gt; to asynchronously manage the flow.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Receive Data&lt;/strong&gt;: Uvicorn processes incoming request data through ASGI &lt;code&gt;receive&lt;/code&gt; events.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Send Response&lt;/strong&gt;: FastAPI responds with an ASGI &lt;code&gt;send&lt;/code&gt; event. Uvicorn packages the response (status code, headers, body) and returns it to the client.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Starlette - The ASGI Framework Layer&lt;/h3&gt;
&lt;p&gt;We can’t talk about FastAPI without Starlette. Starlette is a lightweight ASGI framework that provides FastAPI with its core functionality. Starlette serves as the backbone of FastAPI, handling the low-level routing, middleware, and ASGI compatibility, while FastAPI adds Pydantic validation, dependency injection, and additional tools for building APIs efficiently.&lt;/p&gt;
&lt;h3&gt;Lifecycle of a Web Request&lt;/h3&gt;
&lt;p&gt;Now, let’s visualize the full lifecycle of an HTTP request, using a &lt;code&gt;starlette&lt;/code&gt; hello world example as the ASGI app.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Starlette Application&lt;/strong&gt; (&lt;code&gt;app.py&lt;/code&gt;):&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;from starlette.applications import Starlette
from starlette.responses import JSONResponse
from starlette.routing import Route

# Route handler for &quot;/hello&quot;
async def hello(request):
    return JSONResponse({&apos;message&apos;: &apos;Hello, World!&apos;})

# Defining the routes
routes = [
    Route(&apos;/hello&apos;, hello),
]

# Creating the Starlette app
app = Starlette(debug=True, routes=routes)
&lt;/code&gt;&lt;/pre&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Running the App with Uvicorn&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;uvicorn app:app --host 127.0.0.1 --port 8000
&lt;/code&gt;&lt;/pre&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Client Request&lt;/strong&gt;:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;curl http://127.0.0.1:8000/hello
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will return the JSON response &lt;code&gt;{&quot;message&quot;: &quot;Hello, World!&quot;}&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Now, let’s follow the request step-by-step, from the moment the client sends an HTTP request to the response being returned.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 1: Client Sends HTTP Request&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The client sends an HTTP request to the server. For example, a &lt;code&gt;GET&lt;/code&gt; request to the &lt;code&gt;/hello&lt;/code&gt; endpoint.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;GET /hello HTTP/1.1
Host: 127.0.0.1:8000
User-Agent: curl/7.64.1
Accept: */*
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Step 2: Uvicorn Accepts the Request&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Uvicorn runs a socket server that listens for incoming TCP connections on the specified host/port (e.g., &lt;code&gt;127.0.0.1:8000&lt;/code&gt;). When an HTTP request arrives, Uvicorn:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Accepts the TCP connection.&lt;/li&gt;
&lt;li&gt;Parses the HTTP request from the raw TCP data using &lt;code&gt;h11&lt;/code&gt; (a pure-Python HTTP/1.1 library) or &lt;code&gt;httptools&lt;/code&gt; (a Python binding for the Node.js HTTP parser).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Here, Uvicorn will convert the incoming request into ASGI &lt;code&gt;scope&lt;/code&gt; and &lt;code&gt;events&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 3: Uvicorn Converts HTTP Request to ASGI Scope&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;When Uvicorn receives an HTTP request, it converts it into an ASGI &lt;code&gt;scope&lt;/code&gt; object.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;scope = {
    &quot;type&quot;: &quot;http&quot;,
    &quot;http_version&quot;: &quot;1.1&quot;,
    &quot;method&quot;: &quot;GET&quot;,
    &quot;path&quot;: &quot;/hello&quot;,
    &quot;query_string&quot;: b&quot;&quot;,
    &quot;headers&quot;: [
        (b&quot;host&quot;, b&quot;127.0.0.1:8000&quot;),
        (b&quot;user-agent&quot;, b&quot;curl/7.64.1&quot;),
        (b&quot;accept&quot;, b&quot;*/*&quot;),
    ],
    &quot;client&quot;: (&quot;127.0.0.1&quot;, 12345),
    &quot;server&quot;: (&quot;127.0.0.1&quot;, 8000),
}
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Type&lt;/strong&gt;: The type of connection, which is &lt;code&gt;http&lt;/code&gt; for an HTTP request.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;HTTP Version&lt;/strong&gt;: Version of the HTTP protocol (e.g., &lt;code&gt;1.1&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Method&lt;/strong&gt;: The HTTP method used in the request (&lt;code&gt;GET&lt;/code&gt;, &lt;code&gt;POST&lt;/code&gt;, etc.).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Path&lt;/strong&gt;: The URL path requested (e.g., &lt;code&gt;/hello&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Headers&lt;/strong&gt;: A list of header key-value pairs.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Client&lt;/strong&gt;: The client’s IP and port.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Server&lt;/strong&gt;: The server’s IP and port.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Step 4: Uvicorn Passes the Scope to the ASGI Application&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Once Uvicorn has created the ASGI scope, it will start the ASGI application (in this case, Starlette) by calling the application callable:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;async def app(scope, receive, send):
    ...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Uvicorn invokes the Starlette app, passing in the &lt;code&gt;scope&lt;/code&gt; object.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 5: Starlette Processes the Request&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Starlette, being an ASGI-compliant framework, takes over at this point. It matches the route (in this case, &lt;code&gt;/hello&lt;/code&gt;) and invokes the corresponding route handler.&lt;/p&gt;
&lt;p&gt;In this case, the &lt;code&gt;hello&lt;/code&gt; function is called when the &lt;code&gt;/hello&lt;/code&gt; route is requested. Starlette internally uses the ASGI &lt;code&gt;scope&lt;/code&gt; to match the incoming request’s method and path with the defined route.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Request Object&lt;/strong&gt;: Starlette creates an HTTP request object from the &lt;code&gt;scope&lt;/code&gt; and ASGI events received from Uvicorn.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Receiving Events (&lt;code&gt;receive&lt;/code&gt;):&lt;/strong&gt; Starlette receives events that represent parts of the HTTP request, including the request body.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code&gt;request_event = {
    &quot;type&quot;: &quot;http.request&quot;,
    &quot;body&quot;: b&quot;&quot;,  # Request body
    &quot;more_body&quot;: False,  # Indicates whether more data will be sent
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;body&lt;/code&gt; field contains the request body (in case of a POST request), and &lt;code&gt;more_body&lt;/code&gt; tells the application whether the request body is complete or more data will follow (useful for streaming large files).&lt;/p&gt;
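&lt;p&gt;This is why ASGI applications read the body in a loop. A minimal sketch of such a helper (a hypothetical &lt;code&gt;read_body&lt;/code&gt;, not Starlette’s actual internals):&lt;/p&gt;

```python
async def read_body(receive):
    # Concatenate http.request chunks until more_body is False.
    body = b''
    more_body = True
    while more_body:
        message = await receive()
        body += message.get('body', b'')
        more_body = message.get('more_body', False)
    return body
```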
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Response Handling&lt;/strong&gt;: The &lt;code&gt;hello&lt;/code&gt; route returns a &lt;code&gt;JSONResponse&lt;/code&gt;, which wraps the response data and sends it back as ASGI events.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Step 6: Starlette Returns the Response&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;After processing the request, Starlette sends back the response to Uvicorn by emitting ASGI events like &lt;code&gt;http.response.start&lt;/code&gt; and &lt;code&gt;http.response.body&lt;/code&gt;:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Starting the Response&lt;/strong&gt; (&lt;code&gt;http.response.start&lt;/code&gt;):&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;response_start_event = {
    &quot;type&quot;: &quot;http.response.start&quot;,
    &quot;status&quot;: 200,  # HTTP status code
    &quot;headers&quot;: [
        (b&quot;content-type&quot;, b&quot;application/json&quot;),
    ],
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This tells Uvicorn to begin sending the HTTP response headers, with a status code of &lt;code&gt;200&lt;/code&gt; and a content type of &lt;code&gt;application/json&lt;/code&gt;.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Sending the Response Body&lt;/strong&gt; (&lt;code&gt;http.response.body&lt;/code&gt;):&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;response_body_event = {
    &quot;type&quot;: &quot;http.response.body&quot;,
    &quot;body&quot;: b&apos;{&quot;message&quot;: &quot;Hello, World!&quot;}&apos;,  # JSON response body
    &quot;more_body&quot;: False,
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This sends the response body containing the JSON message &lt;code&gt;{&quot;message&quot;: &quot;Hello, World!&quot;}&lt;/code&gt;. The &lt;code&gt;more_body: False&lt;/code&gt; indicates that this is the final part of the body and that the response is complete.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 7: Uvicorn Sends the HTTP Response Back to the Client&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Uvicorn receives the ASGI events emitted by Starlette and translates them into HTTP responses. Specifically:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;http.response.start&lt;/code&gt;&lt;/strong&gt; triggers Uvicorn to send the HTTP status line and headers (e.g., &lt;code&gt;HTTP/1.1 200 OK&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;http.response.body&lt;/code&gt;&lt;/strong&gt; sends the response body (e.g., &lt;code&gt;{&quot;message&quot;: &quot;Hello, World!&quot;}&lt;/code&gt;) to the client.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Uvicorn closes the connection when it has sent all parts of the response.
Let&apos;s visualize the journey for better understanding:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;sequenceDiagram
    autonumber

    participant Client
    participant Server as ASGI Server
    participant App as ASGI App

    Client-&amp;gt;&amp;gt;Server: HTTP request (bytes)

    Note over Server: Parse HTTP &amp;lt;br/&amp;gt; Build ASGI scope
    Server-&amp;gt;&amp;gt;App: call app(scope, receive, send)

    alt Request body available
        App-&amp;gt;&amp;gt;Server: await receive()
        Server--&amp;gt;&amp;gt;App: http.request &amp;lt;br /&amp;gt;(body chunk)
    end

    App-&amp;gt;&amp;gt;App: Validate &amp;amp; process request

    alt Valid request
        App-&amp;gt;&amp;gt;Server: await send(http.response.start)
        App-&amp;gt;&amp;gt;Server: await send(http.response.body)
        Server--&amp;gt;&amp;gt;Client: HTTP response
    else Invalid request
        App-&amp;gt;&amp;gt;Server: await send(http.response.start)
        App-&amp;gt;&amp;gt;Server: await send(http.response.body (error))
        Server--&amp;gt;&amp;gt;Client: HTTP error response
    end
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;FastAPI - The High-Level Framework&lt;/h3&gt;
&lt;p&gt;FastAPI builds upon Starlette to create a framework that’s ideal for developing RESTful APIs. FastAPI’s focus on asynchronous programming, Pydantic integration for data validation, and dependency injection make it powerful and developer-friendly.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Starlette-based routing and request handling.&lt;/li&gt;
&lt;li&gt;Pydantic-based data validation.&lt;/li&gt;
&lt;li&gt;Dependency Injection system.&lt;/li&gt;
&lt;li&gt;Automatic OpenAPI and API documentation generation.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Pydantic - Data Validation and Serialization&lt;/h3&gt;
&lt;p&gt;FastAPI’s data validation relies on Pydantic, a library that simplifies the handling of complex data types and validation. Pydantic enables FastAPI to enforce strict data validation rules on incoming request data and outgoing response data.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Pydantic Model Example&lt;/strong&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from fastapi import FastAPI
from pydantic import BaseModel

# Initialize FastAPI app (this is like initializing Starlette)
app = FastAPI()

# Pydantic model for request data validation
class Item(BaseModel):
    name: str
    price: float
    is_offer: bool | None = None

# Route with path parameters and Pydantic request body validation
@app.post(&quot;/items/&quot;)
async def create_item(item: Item):
    return {&quot;item&quot;: item}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the example above:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Client sends a request:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;curl -X POST &quot;http://127.0.0.1:8000/items/&quot; -H &quot;Content-Type: application/json&quot; -d &apos;{&quot;name&quot;: &quot;Table&quot;, &quot;price&quot;: 150.0}&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;ol&gt;
&lt;li&gt;FastAPI validates the request body and returns:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;{
  &quot;item&quot;: {
    &quot;name&quot;: &quot;Table&quot;,
    &quot;price&quot;: 150.0,
    &quot;is_offer&quot;: null
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;ol&gt;
&lt;li&gt;If a required field (e.g., &lt;code&gt;name&lt;/code&gt;) is missing, FastAPI will return an automatic validation error with a &lt;code&gt;422&lt;/code&gt; HTTP status code:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;{
  &quot;detail&quot;: [
    {
      &quot;loc&quot;: [&quot;body&quot;, &quot;name&quot;],
      &quot;msg&quot;: &quot;field required&quot;,
      &quot;type&quot;: &quot;value_error.missing&quot;
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Pydantic also converts data types as needed, making it easier to handle complex data without extensive validation code.&lt;/p&gt;
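&lt;p&gt;For example, a numeric string in the payload is coerced to the declared field type. A minimal sketch, assuming Pydantic’s default (lax) coercion mode:&lt;/p&gt;

```python
from pydantic import BaseModel


class Item(BaseModel):
    name: str
    price: float


# The string '150' is coerced to the float 150.0.
item = Item(name='Table', price='150')
print(item.price)  # 150.0
```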
&lt;h3&gt;Dependency Injection in FastAPI&lt;/h3&gt;
&lt;p&gt;FastAPI’s dependency injection system allows modular, reusable code by injecting resources like database connections, authentication layers, or shared configurations directly into route functions.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Dependency Injection Example&lt;/strong&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from fastapi import Depends

def get_db():
    db = DatabaseConnection()
    try:
        yield db
    finally:
        db.close()

@app.get(&quot;/items/&quot;)
async def read_items(db=Depends(get_db)):
    return db.fetch_all_items()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With &lt;code&gt;Depends&lt;/code&gt;, FastAPI manages dependencies automatically, enabling clean, modular, and testable code. Dependency injection is especially useful for managing external services, as it allows centralized control of resource lifecycles.&lt;/p&gt;
&lt;h3&gt;OpenAPI and Swagger Documentation in FastAPI&lt;/h3&gt;
&lt;p&gt;FastAPI’s automatic documentation generation feature provides Swagger and ReDoc interfaces without additional setup. By using route definitions, parameter types, and data models, FastAPI creates real-time OpenAPI documentation, making it easy to test and integrate API endpoints.&lt;/p&gt;
&lt;p&gt;With documentation available at &lt;code&gt;/docs&lt;/code&gt; (Swagger UI) and &lt;code&gt;/redoc&lt;/code&gt; (ReDoc), FastAPI provides developers with a quick and interactive way to explore API routes, making it easier for teams and external developers to work with the API.&lt;/p&gt;
&lt;h3&gt;Request Lifecycle in FastAPI&lt;/h3&gt;
&lt;p&gt;Here’s a summary of how a request flows through FastAPI:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Client Sends Request&lt;/strong&gt;: The client sends an HTTP request to the server.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Uvicorn (ASGI Server)&lt;/strong&gt;: Uvicorn receives the request and creates an ASGI scope.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Starlette (Routing)&lt;/strong&gt;: Starlette routes the request to the correct endpoint based on path and method.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;FastAPI Endpoint&lt;/strong&gt;: FastAPI processes any dependencies, validates incoming data with Pydantic, and handles the request.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Response&lt;/strong&gt;: Uvicorn receives the response from FastAPI and sends it back to the client.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;FastAPI’s architecture combines multiple components to achieve a fast, reliable, and easy-to-use API framework:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;ASGI&lt;/strong&gt; is the backbone of modern Python web frameworks, enabling asynchronous operations.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Uvicorn&lt;/strong&gt; provides efficient connection handling.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Starlette&lt;/strong&gt; is the core web framework handling routing and middleware.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;FastAPI&lt;/strong&gt; extends Starlette with data validation via Pydantic, dependency injection &amp;amp; automatic API docs.&lt;/li&gt;
&lt;/ul&gt;
</content:encoded><author>Rafiqul Hasan</author></item><item><title>The Journey of an HTTP Request in Python: Part 1</title><link>https://rafiqul.dev/posts/python-http-journey-part-1</link><guid isPermaLink="true">https://rafiqul.dev/posts/python-http-journey-part-1</guid><description>First part of the Journey of an HTTP Request in Python: From Kernel to Runtime to Response</description><pubDate>Thu, 11 Dec 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;When you type a URL into your browser and hit enter, an extraordinary journey begins. Within milliseconds, your request traverses network cables, gets processed by operating system kernels, flows through Python runtimes, and eventually produces the response you see on screen. This journey is so fast and seamless that we rarely stop to think about the complex choreography happening beneath the surface.&lt;/p&gt;
&lt;p&gt;This two-part series traces the complete journey of an HTTP request in Python&apos;s ASGI ecosystem. &lt;strong&gt;Part 1&lt;/strong&gt; focuses on the operating system layer, how packets arrive at your network card, flow through the kernel&apos;s TCP/IP stack and land in socket buffers. &lt;a href=&quot;/posts/python-http-journey-part-2&quot;&gt;&lt;strong&gt;Part 2&lt;/strong&gt;&lt;/a&gt; will explore the application layer, how Python&apos;s &lt;code&gt;asyncio&lt;/code&gt; event loop, ASGI servers like Uvicorn and your web framework collaborate to turn those bytes into responses.&lt;/p&gt;
&lt;p&gt;By the end of this series, you&apos;ll understand exactly what happens when someone makes a request to your FastAPI application, and why ASGI is designed the way it is.&lt;/p&gt;
&lt;p&gt;Let&apos;s begin at the lowest level, where photons become &lt;a href=&quot;https://en.wikipedia.org/wiki/Network_packet&quot;&gt;packets&lt;/a&gt;, and hardware becomes software.&lt;/p&gt;
&lt;h2&gt;The Big Picture&lt;/h2&gt;
&lt;p&gt;Before we dive into details, let&apos;s visualize the complete journey from a high level:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;flowchart TD

    A[Browser sends HTTP Request over TCP] 
        --&amp;gt;|TCP Packet sent over network| B[Network Interface Card &amp;lt;br/&amp;gt; NIC - Physical Layer]

    B --&amp;gt;|Hardware Interrupt| C[Kernel Network Stack &amp;lt;br/&amp;gt; L2 → L3 → L4 processing]

    C --&amp;gt;|Packet validated &amp;amp; queued| D[TCP/IP: Reassembly, checksum, flow-control]

    D --&amp;gt;|Payload copied into &amp;lt;br/&amp;gt; socket receive buffer| E[Socket File Descriptor &amp;lt;br/&amp;gt; Kernel socket + buffers]

    E --&amp;gt;|system call &amp;lt;br/&amp;gt; epoll/select/recv| F[Python Runtime &amp;lt;br/&amp;gt; Interpreter + asyncio]

    F --&amp;gt;|asyncio selector &amp;lt;br/&amp;gt; ready FD detected| G[ASGI Server - Uvicorn&amp;lt;br/&amp;gt;HTTP parsing, ASGI lifespan handler]

    G --&amp;gt;|Invoke ASGI application &amp;lt;br/&amp;gt; scope/receive/send| H[ASGI Application - FastAPI &amp;lt;br/&amp;gt; Routing, middleware, handler execution]

    H --&amp;gt;|Return Response object &amp;lt;br/&amp;gt; body, headers, status| G

    G --&amp;gt;|Serialize &amp;amp; write response bytes send| E

    E --&amp;gt;|Kernel places data into socket send buffer| D

    D --&amp;gt;|TCP segmentation, ACK handling, window checks| C

    C --&amp;gt;|Hand packet to NIC driver| B

    B --&amp;gt;|Transmit TCP Packet over network| A
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This diagram shows the complete path, but it barely scratches the surface. Let&apos;s explore each layer in detail.&lt;/p&gt;
&lt;h2&gt;1. The Kernel Receives Data&lt;/h2&gt;
&lt;h3&gt;Network Interface and Interrupts&lt;/h3&gt;
&lt;p&gt;When data arrives at your server, the first component to know about it isn&apos;t your Python application or even the operating system&apos;s high-level network code. It&apos;s the &lt;a href=&quot;https://en.wikipedia.org/wiki/Network_interface_controller&quot;&gt;Network Interface Card (NIC)&lt;/a&gt; hardware itself.&lt;/p&gt;
&lt;p&gt;Here&apos;s what happens:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Packet Arrival&lt;/strong&gt;: The NIC receives electrical signals (or light pulses for fiber) representing your HTTP request&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;DMA Transfer&lt;/strong&gt;: The NIC uses &lt;a href=&quot;https://en.wikipedia.org/wiki/Direct_memory_access&quot;&gt;Direct Memory Access&lt;/a&gt; to write the packet data directly into a pre-allocated kernel memory buffer&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Hardware Interrupt&lt;/strong&gt;: The NIC triggers a &lt;a href=&quot;https://en.wikipedia.org/wiki/Interrupt#Hardware_interrupts&quot;&gt;hardware interrupt&lt;/a&gt; to notify the CPU that data has arrived&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Interrupt Handler&lt;/strong&gt;: The kernel&apos;s &lt;a href=&quot;https://en.wikipedia.org/wiki/Interrupt_handler&quot;&gt;interrupt handler&lt;/a&gt; is invoked, which schedules the network stack to process this data&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This all happens in microseconds, and it&apos;s happening for potentially thousands of packets per second on a busy server.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;sequenceDiagram
    participant NIC as Network Card
    participant KM as Kernel Memory
    participant CPU as CPU
    participant NS as Network Stack
    
    NIC-&amp;gt;&amp;gt;KM: DMA write packet data
    NIC-&amp;gt;&amp;gt;CPU: Hardware interrupt
    CPU-&amp;gt;&amp;gt;CPU: Save current state
    CPU-&amp;gt;&amp;gt;NS: Schedule packet processing
    NS-&amp;gt;&amp;gt;NS: Process TCP/IP layers
    NS-&amp;gt;&amp;gt;KM: Write to socket buffer
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;TCP/IP Stack Processing&lt;/h3&gt;
&lt;p&gt;Once the interrupt handler schedules packet processing, the kernel&apos;s network stack processes the packet through multiple protocol layers. This happens entirely in kernel space.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Ethernet Layer&lt;/strong&gt;: Strips off the Ethernet frame, validates the destination MAC address&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;IP Layer&lt;/strong&gt;: Validates IP header, checks destination IP, handles fragmentation if needed&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Transmission_Control_Protocol&quot;&gt;&lt;strong&gt;TCP Layer&lt;/strong&gt;&lt;/a&gt;: This is where the real work happens. The kernel:
&lt;ul&gt;
&lt;li&gt;Validates checksum: Ensures data integrity&lt;/li&gt;
&lt;li&gt;Looks up connection: Uses (src_ip, src_port, dst_ip, dst_port) to find the socket&lt;/li&gt;
&lt;li&gt;Checks sequence numbers: Handles out-of-order packets and duplicates&lt;/li&gt;
&lt;li&gt;Updates state machine: Manages TCP states (ESTABLISHED, FIN_WAIT, etc.)&lt;/li&gt;
&lt;li&gt;Performs flow control: Adjusts receive window based on buffer availability&lt;/li&gt;
&lt;li&gt;Sends ACK: Acknowledges received data automatically&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Delivering to the &lt;a href=&quot;https://en.wikipedia.org/wiki/Berkeley_sockets&quot;&gt;socket&lt;/a&gt;:&lt;/strong&gt; Once the kernel identifies the destination socket, it:
&lt;ul&gt;
&lt;li&gt;Copies payload data into the socket&apos;s receive buffer&lt;/li&gt;
&lt;li&gt;Updates buffer pointers and available byte count&lt;/li&gt;
&lt;li&gt;Wakes any process/coroutine waiting on this socket (via the wait queue)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Let&apos;s visualize the journey:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;sequenceDiagram
    participant NIC as Network Interface Card
    participant LinkL as Link Layer (Ethernet)
    participant IP as Network Layer (IP)
    participant TCP as Transport Layer (TCP)
    participant Socket as Socket Buffer
    
    NIC-&amp;gt;&amp;gt;LinkL: Packet arrives via DMA
    Note over LinkL: Strip Ethernet header (14 bytes)&amp;lt;br/&amp;gt;Validate MAC
    
    LinkL-&amp;gt;&amp;gt;IP: Pass to IP layer
    Note over IP: Strip IP header (20+ bytes)&amp;lt;br/&amp;gt;Validate checksum, TTL, destination IP
    
    IP-&amp;gt;&amp;gt;TCP: Pass to TCP layer
    Note over TCP: Strip TCP header (20+ bytes)&amp;lt;br/&amp;gt;Hash lookup: find socket&amp;lt;br/&amp;gt;Validate checksum
    
    alt Sequence in order
        TCP-&amp;gt;&amp;gt;Socket: Copy payload to recv buffer
        TCP-&amp;gt;&amp;gt;Socket: Update expected_seq
        TCP-&amp;gt;&amp;gt;NIC: Send ACK (automatic)
    else Sequence out of order
        TCP-&amp;gt;&amp;gt;TCP: Queue in reassembly buffer
        TCP-&amp;gt;&amp;gt;NIC: Send duplicate ACK
    end
    
    TCP-&amp;gt;&amp;gt;Socket: Update bytes_available
    TCP-&amp;gt;&amp;gt;Socket: Update receive window
    Note over Socket: Data ready for application&amp;lt;br/&amp;gt;Process woken via wait queue
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The entire journey from NIC interrupt to socket buffer typically takes 10-100 microseconds on modern hardware. This processing happens for every single packet, which is why kernel optimization is crucial for high-performance networking.&lt;/p&gt;
&lt;h3&gt;Socket Buffers: The Kernel-Userspace Bridge&lt;/h3&gt;
&lt;p&gt;Once the kernel identifies which socket the packet belongs to using the connection&apos;s four-tuple &lt;code&gt;source IP:port, dest IP:port&lt;/code&gt;, it writes the data into that socket&apos;s &lt;strong&gt;receive buffer&lt;/strong&gt;, a ring buffer allocated in kernel memory. This buffer is the critical interface between kernel space and user space.&lt;/p&gt;
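&lt;p&gt;This demultiplexing step can be pictured as a dictionary lookup keyed by the four-tuple; the kernel uses a hash table over the same key. A toy sketch (addresses and ports are made up):&lt;/p&gt;

```python
# Toy model: map a connection four-tuple to its receive buffer
sockets = {
    ("203.0.113.7", 54321, "198.51.100.1", 8000): bytearray(),
}

def deliver(src_ip, src_port, dst_ip, dst_port, payload):
    """Append a TCP segment's payload to the matching socket buffer."""
    buf = sockets.get((src_ip, src_port, dst_ip, dst_port))
    if buf is None:
        return False  # no matching connection: the kernel would send a RST
    buf.extend(payload)
    return True

assert deliver("203.0.113.7", 54321, "198.51.100.1", 8000, b"GET / HTTP/1.1\r\n")
```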
&lt;p&gt;Each socket maintains two buffers:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Kernel maintains these internally (simplified representation)
struct socket_buffers {
    char recv_buffer[SO_RCVBUF];  // Default: 208KB on Linux
    char send_buffer[SO_SNDBUF];  // Default: 208KB on Linux
    size_t recv_bytes_available;
    size_t send_bytes_used;
    wait_queue_head_t wait_queue; // Processes waiting on this socket
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When TCP segments arrive, the kernel:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Validates&lt;/strong&gt; sequence numbers and checksums&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Reassembles&lt;/strong&gt; out-of-order segments using sequence numbers&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Writes&lt;/strong&gt; data to the receive buffer&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Sends ACK&lt;/strong&gt; back to sender (automatic)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Wakes&lt;/strong&gt; any process/coroutine blocked on this socket&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The receive buffer size determines TCP&apos;s &lt;strong&gt;receive window&lt;/strong&gt;: how much unacknowledged data the sender can transmit. If your application doesn&apos;t read fast enough and the buffer fills, the kernel advertises a zero window, throttling the sender.&lt;/p&gt;
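&lt;p&gt;A simplified model of the advertised window, assuming a fixed buffer size and ignoring window scaling:&lt;/p&gt;

```python
SO_RCVBUF = 208 * 1024  # default receive buffer size on Linux

def advertised_window(bytes_buffered: int) -> int:
    """Window the receiver advertises: free space left in the buffer."""
    return max(SO_RCVBUF - bytes_buffered, 0)

print(advertised_window(0))          # 212992: buffer empty, full window
print(advertised_window(SO_RCVBUF))  # 0: buffer full, sender stalls
```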
&lt;p&gt;For sending, when you call &lt;code&gt;send()&lt;/code&gt;, data is copied from user space to the kernel&apos;s send buffer. The kernel then:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Segments&lt;/strong&gt; data into TCP packets (Maximum Segment Size typically 1460 bytes)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Transmits&lt;/strong&gt; segments with sequence numbers&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Retains&lt;/strong&gt; copies until ACKed (for potential retransmission)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Removes&lt;/strong&gt; acknowledged data from buffer&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;You can inspect and tune buffer sizes (note that Linux doubles the value passed to &lt;code&gt;setsockopt&lt;/code&gt; to leave room for bookkeeping overhead):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import socket

sock = socket.socket()
# Check current sizes
recv_buf = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
send_buf = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)

# Increase for high-bandwidth connections
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1048576)  # 1MB
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 1048576)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;These buffers decouple application read/write speed from network speed, enabling the efficient pipelining and flow control essential for TCP&apos;s reliability guarantees. Let&apos;s take a look at the full picture:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;sequenceDiagram
    participant Net as Network
    participant Kernel as Kernel Space&amp;lt;br/&amp;gt;(TCP + Buffers)
    participant Syscall as System Call&amp;lt;br/&amp;gt;Boundary
    participant App as Application&amp;lt;br/&amp;gt;(User Space)
    
    Note over Net,App: Receiving Data
    
    Net-&amp;gt;&amp;gt;Kernel: TCP segments arrive
    Kernel-&amp;gt;&amp;gt;Kernel: Validate &amp;amp; reassemble
    Kernel-&amp;gt;&amp;gt;Kernel: Write to receive buffer (208KB)
    Kernel-&amp;gt;&amp;gt;Net: Send ACK automatically
    Kernel-&amp;gt;&amp;gt;Kernel: Update bytes_available
    Kernel--&amp;gt;&amp;gt;App: Wake blocked coroutine
    
    App-&amp;gt;&amp;gt;Syscall: recv(4096)
    Note over Syscall: Copy from kernel to user space
    Syscall-&amp;gt;&amp;gt;App: Return data bytes
    
    Note over Net,App: Sending Data
    
    App-&amp;gt;&amp;gt;Syscall: send(response)
    Note over Syscall: Copy from user to kernel space
    Syscall-&amp;gt;&amp;gt;Kernel: Write to send buffer (208KB)
    
    Kernel-&amp;gt;&amp;gt;Kernel: Segment into TCP packets (MSS 1460)
    Kernel-&amp;gt;&amp;gt;Kernel: Retain copy until ACKed
    Kernel-&amp;gt;&amp;gt;Net: Transmit segments
    
    Net--&amp;gt;&amp;gt;Kernel: ACK received
    Kernel-&amp;gt;&amp;gt;Kernel: Remove acknowledged data from buffer
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;2. System Calls and File Descriptors&lt;/h2&gt;
&lt;h3&gt;The Socket File Descriptor&lt;/h3&gt;
&lt;p&gt;In Unix-like systems, the philosophy is &lt;a href=&quot;https://en.wikipedia.org/wiki/Everything_is_a_file&quot;&gt;&quot;everything is a file&quot;&lt;/a&gt;, including &lt;a href=&quot;https://en.wikipedia.org/wiki/Network_socket&quot;&gt;network sockets&lt;/a&gt;. When your application creates a socket, the kernel returns a &lt;a href=&quot;https://en.wikipedia.org/wiki/File_descriptor&quot;&gt;&lt;strong&gt;file descriptor&lt;/strong&gt;&lt;/a&gt;: a small non-negative integer that serves as an index into the process&apos;s file descriptor table.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
fd = sock.fileno()  # Returns an integer like 3, 4, 5, etc.
print(f&quot;Socket file descriptor: {fd}&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;What the file descriptor represents:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The kernel maintains a per-process table mapping file descriptors to kernel data structures. For sockets, this points to a &lt;code&gt;struct socket&lt;/code&gt; containing:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Socket buffers (send/receive)&lt;/li&gt;
&lt;li&gt;Connection state (ESTABLISHED, CLOSE_WAIT, etc.)&lt;/li&gt;
&lt;li&gt;Peer address (IP:port)&lt;/li&gt;
&lt;li&gt;Protocol-specific data (TCP sequence numbers, window size)&lt;/li&gt;
&lt;li&gt;File operations table (read, write, close functions)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Why file descriptors matter:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Every socket operation requires passing this file descriptor to the kernel:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import os

# All these operations use the file descriptor internally
data = sock.recv(4096)        # → recv(fd, buffer, 4096)
sock.send(b&quot;data&quot;)            # → send(fd, &quot;data&quot;, 4)
sock.close()                  # → close(fd)

# You can even use low-level os functions
os.read(fd, 4096)             # Works! Treats socket like a file
os.write(fd, b&quot;data&quot;)         # Also works!
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;File descriptor limits:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Each process has limits on open file descriptors (typically 1024 by default, configurable up to system limits). This matters for servers handling thousands of connections:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Check current limits
ulimit -n  # Soft limit: 1024

# Increase for production servers
ulimit -n 65536
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When you run out of file descriptors, &lt;code&gt;accept()&lt;/code&gt; fails with &quot;Too many open files&quot;, a common production issue. ASGI servers handle this by configuring appropriate limits and connection pooling.&lt;/p&gt;
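&lt;p&gt;From Python you can inspect, and within the hard limit raise, the same limit with the standard &lt;code&gt;resource&lt;/code&gt; module (Unix-only):&lt;/p&gt;

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft}, hard={hard}")

# An unprivileged process may raise its soft limit up to the hard limit
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```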
&lt;h3&gt;System Calls: Crossing the Boundary&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/System_call&quot;&gt;System calls&lt;/a&gt; are the mechanism for transitioning from user space (where your Python code runs) to kernel space (where the OS manages hardware and resources). This transition is expensive compared to normal function calls. It involves context switching, privilege level changes, and potentially copying data.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What happens during a system call:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This innocent-looking Python call triggers:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Python → C library wrapper&lt;/li&gt;
&lt;li&gt;C library → software interrupt (syscall instruction)&lt;/li&gt;
&lt;li&gt;CPU switches to kernel mode&lt;/li&gt;
&lt;li&gt;Kernel validates parameters, performs operation&lt;/li&gt;
&lt;li&gt;CPU switches back to user mode&lt;/li&gt;
&lt;li&gt;Return value propagated back to Python&lt;/li&gt;
&lt;/ol&gt;
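&lt;p&gt;You can get a rough feel for these costs with &lt;code&gt;timeit&lt;/code&gt;. This is not a rigorous benchmark, and numbers vary by machine, but it contrasts a pure user-space call with &lt;code&gt;os.getpid&lt;/code&gt;, which performs a real system call on each invocation:&lt;/p&gt;

```python
import os
import timeit

def plain():
    """A user-space function call: no kernel involvement."""
    pass

n = 100_000
call_time = timeit.timeit(plain, number=n)
syscall_time = timeit.timeit(os.getpid, number=n)  # getpid(2) on each call

print(f"plain function call: {call_time / n * 1e9:.0f} ns/call")
print(f"os.getpid syscall:   {syscall_time / n * 1e9:.0f} ns/call")
```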
&lt;p&gt;Here are the key system calls involved in our HTTP request journey:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import socket

# socket() - Create socket structure in kernel
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# bind() - Associate socket with address
sock.bind((&apos;0.0.0.0&apos;, 8000))

# listen() - Mark socket as passive, create accept queue  
sock.listen(128)

# accept() - Retrieve connection from queue (blocks if empty)
client, addr = sock.accept()

# recv() - Copy data from kernel buffer to user space
data = client.recv(4096)

# send() - Copy data from user space to kernel buffer
client.send(b&quot;HTTP/1.1 200 OK\r\n\r\n&quot;)

# close() - Release socket resources
client.close()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Let&apos;s look at what happens during a &lt;code&gt;recv()&lt;/code&gt; call:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;sequenceDiagram
    participant App as Python Application
    participant Lib as Python Socket Library
    participant Kernel as Linux Kernel
    participant Buffer as Socket Buffer
    
    App-&amp;gt;&amp;gt;Lib: sock.recv(4096)
    Lib-&amp;gt;&amp;gt;Kernel: recv() syscall
    Note over Kernel: Switch to kernel mode
    Kernel-&amp;gt;&amp;gt;Buffer: Check if data available
    alt Data available
        Buffer-&amp;gt;&amp;gt;Kernel: Return data
        Kernel-&amp;gt;&amp;gt;Lib: Copy to user space
        Lib-&amp;gt;&amp;gt;App: Return bytes
    else No data available
        Kernel-&amp;gt;&amp;gt;Kernel: Block process
        Note over Kernel: Process sleeps until data arrives
        Buffer-&amp;gt;&amp;gt;Kernel: Data arrives
        Kernel-&amp;gt;&amp;gt;Lib: Copy to user space
        Lib-&amp;gt;&amp;gt;App: Return bytes
    end
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Blocking vs Non-Blocking Sockets&lt;/h3&gt;
&lt;p&gt;Traditional socket programming is &lt;strong&gt;blocking&lt;/strong&gt;. When you call &lt;code&gt;recv()&lt;/code&gt; and there&apos;s no data available, your process goes to sleep until data arrives. This is fine for simple applications, but it&apos;s disastrous for servers that need to handle thousands of concurrent connections.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import socket

# Blocking socket (default)
sock = socket.socket()
sock.connect((&apos;example.com&apos;, 80))
data = sock.recv(4096)  # This BLOCKS until data arrives
print(data)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To handle multiple connections, you have three main options:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Multi-threading&lt;/strong&gt;: One thread per connection (expensive, doesn&apos;t scale to thousands)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Multi-processing&lt;/strong&gt;: One process per connection (even more expensive)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Non-blocking I/O with event loops&lt;/strong&gt;: Handle many connections in one thread (efficient, scalable)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;ASGI is built on non-blocking sockets and uses an &lt;a href=&quot;https://en.wikipedia.org/wiki/Event_loop&quot;&gt;event loop&lt;/a&gt; to efficiently multiplex between many connections.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import socket
import select

# Non-blocking socket
sock = socket.socket()
sock.setblocking(False)  # This is the key!

try:
    sock.connect((&apos;example.com&apos;, 80))
except BlockingIOError:
    pass  # Connect is in progress

# Use select/epoll to wait for socket to be ready
select.select([sock], [sock], [], 5.0)
# Now socket is ready for I/O
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;We&apos;ve journeyed from the physical layer to the kernel-userspace boundary, witnessing how an HTTP request traverses the operating system&apos;s networking stack. From the moment network packets arrive at your NIC, through hardware interrupts and DMA transfers, to TCP/IP processing and socket buffers, every step is orchestrated by the kernel with microsecond precision.&lt;/p&gt;
&lt;p&gt;The key insights from this exploration:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Hardware and kernel handle the heavy lifting&lt;/strong&gt;: Your application never sees individual packets, TCP handshakes, or retransmissions; the kernel manages all of this automatically&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Socket buffers are the critical interface&lt;/strong&gt;: These kernel-space ring buffers decouple network speed from application speed, enabling efficient flow control&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;System calls are expensive&lt;/strong&gt;: Every transition between user and kernel space involves context switches, which is why minimizing these calls matters for performance&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;At this point in our journey, data sits ready in socket receive buffers, waiting for your application to read it. The kernel has done its job ensuring reliable, ordered delivery of bytes. But how does Python efficiently monitor potentially thousands of these sockets without blocking? How does a single thread handle concurrent connections?&lt;/p&gt;
&lt;p&gt;In &lt;a href=&quot;/posts/python-http-journey-part-2&quot;&gt;&lt;strong&gt;Part 2&lt;/strong&gt;&lt;/a&gt;, we&apos;ll explore the application layer, where asyncio&apos;s event loop, epoll multiplexing, and the ASGI protocol come together to build the scalable web applications you write every day. We&apos;ll see how Python bridges the gap between low-level socket operations and high-level framework code, making concurrent programming both powerful and elegant.&lt;/p&gt;
</content:encoded><author>Rafiqul Hasan</author></item><item><title>The Journey of an HTTP Request in Python: Part 2</title><link>https://rafiqul.dev/posts/python-http-journey-part-2</link><guid isPermaLink="true">https://rafiqul.dev/posts/python-http-journey-part-2</guid><description>Second part of the Journey of an HTTP Request in Python: From Kernel to Runtime to Response</description><pubDate>Sat, 13 Dec 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;In &lt;a href=&quot;/posts/python-http-journey-part-1&quot;&gt;Part 1&lt;/a&gt;, we followed an HTTP request from the network interface card through the kernel&apos;s TCP/IP stack to socket buffers. We saw how hardware interrupts trigger kernel processing, how TCP ensures reliability, and how socket buffers bridge kernel and user space. But our journey isn&apos;t complete. The data is sitting in kernel memory and your Python application still hasn&apos;t touched it.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Part 2&lt;/strong&gt; picks up where we left off, at the boundary between the operating system and your application. We&apos;ll explore how Python&apos;s &lt;code&gt;asyncio&lt;/code&gt; &lt;a href=&quot;https://docs.python.org/3/library/asyncio-eventloop.html&quot;&gt;event loop&lt;/a&gt; efficiently monitors thousands of sockets using epoll, how ASGI servers like Uvicorn translate raw bytes into structured messages, and how your FastAPI application processes requests without blocking.&lt;/p&gt;
&lt;p&gt;This is where the elegance of ASGI shines. While &lt;a href=&quot;/posts/python-http-journey-part-1&quot;&gt;Part 1&lt;/a&gt; showed us the raw power of the kernel handling packets at microsecond speeds, Part 2 reveals how Python leverages that power to build scalable, concurrent web applications with clean, readable code.&lt;/p&gt;
&lt;p&gt;Let&apos;s cross the system call boundary and enter the world of async Python.&lt;/p&gt;
&lt;h2&gt;3. The Event Loop and I/O Multiplexing&lt;/h2&gt;
&lt;h3&gt;The Problem with Blocking I/O&lt;/h3&gt;
&lt;p&gt;Imagine a web server handling 10,000 concurrent connections. With blocking I/O, you&apos;d need 10,000 threads or processes. Each thread consumes memory (typically 1-8 MB of stack space), and context switching between thousands of threads destroys CPU cache efficiency.&lt;/p&gt;
&lt;p&gt;The solution is &lt;strong&gt;I/O multiplexing&lt;/strong&gt;: using a single thread to monitor many file descriptors and only processing them when they&apos;re ready.&lt;/p&gt;
&lt;h3&gt;Enter epoll/kqueue/select&lt;/h3&gt;
&lt;p&gt;Operating systems provide system calls for efficient I/O multiplexing:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Select_(Unix)&quot;&gt;&lt;strong&gt;select&lt;/strong&gt;&lt;/a&gt;: The oldest, works on all platforms, limited to ~1024 file descriptors&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;poll&lt;/strong&gt;: Similar to select but no hard limit on file descriptors&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Epoll&quot;&gt;&lt;strong&gt;epoll&lt;/strong&gt;&lt;/a&gt; (Linux): Highly efficient (uses a red–black tree), O(1) performance for ready events&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Kqueue&quot;&gt;&lt;strong&gt;kqueue&lt;/strong&gt;&lt;/a&gt; (BSD/macOS): Similar to epoll&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Python&apos;s &lt;code&gt;asyncio&lt;/code&gt; uses the most efficient mechanism available on your platform. On Linux, that&apos;s &lt;code&gt;epoll&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import select
import socket

# Create a listening socket
server = socket.socket()
server.setblocking(False)
server.bind((&apos;0.0.0.0&apos;, 8000))
server.listen(128)

# Create epoll object (Linux-only; use select.poll() or kqueue elsewhere)
epoll = select.epoll()
epoll.register(server.fileno(), select.EPOLLIN)

# Event loop
connections = {}
while True:
    # Wait for events (this is a blocking syscall, but efficient!)
    events = epoll.poll()
    
    for fd, event in events:
        if fd == server.fileno():
            # New connection
            client, addr = server.accept()
            client.setblocking(False)
            epoll.register(client.fileno(), select.EPOLLIN)
            connections[client.fileno()] = client
        else:
            # Data on existing connection
            client = connections[fd]
            data = client.recv(4096)
            print(data.decode())
            client.send(b&quot;HTTP/1.1 200 OK\r\n\r\nHello&quot;)
            epoll.unregister(fd)
            client.close()
            del connections[fd]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This pattern (register file descriptors with epoll, wait for events, handle ready sockets) is the foundation of all async Python frameworks.&lt;/p&gt;
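&lt;p&gt;You can check which mechanism your platform provides through the standard &lt;code&gt;selectors&lt;/code&gt; module, the same abstraction that &lt;code&gt;asyncio&lt;/code&gt;&apos;s default event loop builds on:&lt;/p&gt;

```python
import selectors

# EpollSelector on Linux, KqueueSelector on macOS/BSD,
# SelectSelector as the portable fallback
sel = selectors.DefaultSelector()
print(type(sel).__name__)
sel.close()
```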
&lt;h3&gt;Python&apos;s asyncio Event Loop&lt;/h3&gt;
&lt;p&gt;Python&apos;s &lt;code&gt;asyncio&lt;/code&gt; wraps this low-level epoll/kqueue machinery in a high-level API with &lt;a href=&quot;https://en.wikipedia.org/wiki/Coroutine&quot;&gt;coroutines&lt;/a&gt;, tasks, and futures. Here&apos;s a simplified view of what the event loop does:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;sequenceDiagram
    autonumber

    participant Kernel as Kernel (epoll/kqueue)
    participant FD as Socket FD
    participant eLoop as asyncio Event Loop
    participant Task as Coroutine / Future

    Note over eLoop: Event loop initialization
    eLoop-&amp;gt;&amp;gt;Kernel: epoll_ctl(ADD, FD)

    Note over eLoop,Kernel: Main event loop cycle
    eLoop-&amp;gt;&amp;gt;Kernel: epoll_wait()
    Kernel--&amp;gt;&amp;gt;eLoop: ready FD events

    alt FD readable
        eLoop-&amp;gt;&amp;gt;FD: non-blocking recv()
        FD--&amp;gt;&amp;gt;eLoop: data or EAGAIN
        eLoop-&amp;gt;&amp;gt;Task: schedule coroutine to handle data
    end

    alt FD writable
        Task-&amp;gt;&amp;gt;FD: non-blocking send()
        FD--&amp;gt;&amp;gt;Task: bytes_written or EAGAIN
        Task-&amp;gt;&amp;gt;eLoop: yield until FD writable
    end

    Note over eLoop: Loop repeats
    eLoop-&amp;gt;&amp;gt;Kernel: epoll_wait()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here&apos;s what this looks like in actual asyncio code:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import asyncio

async def handle_client(reader, writer):
    &quot;&quot;&quot;Handle a single client connection&quot;&quot;&quot;
    # reader.read() doesn&apos;t block the thread!
    # It yields control back to the event loop
    data = await reader.read(4096)
    
    # Process request
    response = b&quot;HTTP/1.1 200 OK\r\n\r\nHello World&quot;
    
    # writer.write() is synchronous, but drain() waits for the write to complete
    writer.write(response)
    await writer.drain()
    
    writer.close()
    await writer.wait_closed()

async def main():
    # Start server
    server = await asyncio.start_server(
        handle_client, 
        &apos;0.0.0.0&apos;, 
        8000
    )
    
    async with server:
        await server.serve_forever()

# Run the event loop
asyncio.run(main())
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When &lt;code&gt;await reader.read(4096)&lt;/code&gt; is called:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The &lt;code&gt;asyncio&lt;/code&gt; event loop attempts a non-blocking &lt;code&gt;recv()&lt;/code&gt; to see if data is already waiting in the kernel socket buffer&lt;/li&gt;
&lt;li&gt;If not, it registers interest in this socket with epoll (&lt;code&gt;epoll_ctl&lt;/code&gt;) and waits for readiness via &lt;code&gt;epoll_wait&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;The coroutine is suspended, event loop continues with other tasks&lt;/li&gt;
&lt;li&gt;When data arrives, kernel notifies epoll, event loop resumes the coroutine (performs &lt;code&gt;recv()&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Data is read from kernel buffer to user space (via &lt;code&gt;recv()&lt;/code&gt;) and returned&lt;/li&gt;
&lt;/ol&gt;
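&lt;p&gt;Steps 1 and 2 can be observed with a plain non-blocking socket, no asyncio required. This is a minimal sketch using a local &lt;code&gt;socketpair&lt;/code&gt;, so no network setup is needed:&lt;/p&gt;

```python
import errno
import select
import socket

# Two connected sockets; make one non-blocking, as asyncio does.
a, b = socket.socketpair()
b.setblocking(False)

# recv() on an empty kernel buffer doesn't block: it raises
# BlockingIOError (EAGAIN/EWOULDBLOCK). That error is the signal the
# event loop uses to suspend the coroutine and wait on epoll instead.
try:
    b.recv(4096)
except BlockingIOError as e:
    print(e.errno in (errno.EAGAIN, errno.EWOULDBLOCK))  # True

a.send(b"data")
select.select([b], [], [], 1)  # wait until b is readable (near-instant here)

# Once data sits in the kernel buffer, the same call succeeds at once.
print(b.recv(4096))  # b'data'

a.close()
b.close()
```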
&lt;p&gt;When &lt;code&gt;writer.write(response)&lt;/code&gt; is called (followed by &lt;code&gt;await writer.drain()&lt;/code&gt;):&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;code&gt;write()&lt;/code&gt; buffers the data in the transport; &lt;code&gt;drain()&lt;/code&gt; checks whether the kernel send buffer has space&lt;/li&gt;
&lt;li&gt;If the send buffer is full, the event loop registers interest in writable events with epoll (&lt;code&gt;epoll_ctl&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;The coroutine is suspended, and the event loop continues running other tasks&lt;/li&gt;
&lt;li&gt;When space frees up in the kernel buffer, the kernel notifies epoll (via &lt;code&gt;epoll_wait&lt;/code&gt;) and the event loop resumes the suspended coroutine&lt;/li&gt;
&lt;li&gt;&lt;code&gt;send()&lt;/code&gt; copies the data from user space into the kernel buffer and returns once the write is accepted&lt;/li&gt;
&lt;/ol&gt;
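&lt;p&gt;The register/wait cycle in both lists maps directly onto Python&apos;s standard &lt;code&gt;selectors&lt;/code&gt; module, which wraps epoll on Linux and kqueue on BSD/macOS. Here is a minimal sketch of the readiness loop that asyncio runs for us, again using a local &lt;code&gt;socketpair&lt;/code&gt; as a stand-in for a client connection:&lt;/p&gt;

```python
import selectors
import socket

# DefaultSelector picks the best mechanism for the platform (epoll/kqueue).
sel = selectors.DefaultSelector()

a, b = socket.socketpair()
a.setblocking(False)
b.setblocking(False)

# Register interest in readability: the epoll_ctl(ADD, fd) step.
sel.register(b, selectors.EVENT_READ)

a.send(b"hello")  # makes b readable

# Block until the kernel reports ready fds: the epoll_wait() step.
for key, mask in sel.select(timeout=1):
    print(key.fileobj.recv(4096))  # b'hello' -- guaranteed not to block now

sel.unregister(b)
sel.close()
a.close()
b.close()
```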
&lt;h2&gt;4. ASGI Protocol&lt;/h2&gt;
&lt;h3&gt;Why ASGI Exists&lt;/h3&gt;
&lt;p&gt;We&apos;ve now covered how data gets from the network card to the Python event loop. But there&apos;s still a gap: how does the event loop pass HTTP requests to your web application (FastAPI)? This is where ASGI comes in. ASGI is a &lt;a href=&quot;https://asgi.readthedocs.io/en/latest/specs/main.html&quot;&gt;&lt;strong&gt;specification&lt;/strong&gt;&lt;/a&gt;, an agreed-upon interface between web servers (like Uvicorn) and web applications (like FastAPI).&lt;/p&gt;
&lt;p&gt;Before ASGI, we had WSGI (Web Server Gateway Interface), which worked perfectly for synchronous Python. But WSGI has a fundamental limitation: it&apos;s synchronous and blocking. Every request blocks a worker thread. ASGI solves this by defining an async interface that allows servers and applications to communicate using coroutines.&lt;/p&gt;
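&lt;p&gt;For contrast, here is a minimal sketch of a WSGI application. Note that it is a plain synchronous callable: the worker thread is tied up for the entire request:&lt;/p&gt;

```python
# A minimal WSGI application: a synchronous callable that takes the
# request environment and a start_response callback, and returns an
# iterable of body chunks.
def wsgi_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from WSGI"]

# Drive it by hand, the way a WSGI server would.
statuses = []
body = wsgi_app({"REQUEST_METHOD": "GET"}, lambda s, h: statuses.append(s))
print(statuses[0])       # 200 OK
print(b"".join(body))    # b'Hello from WSGI'
```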
&lt;h3&gt;The ASGI Interface&lt;/h3&gt;
&lt;p&gt;At its core, ASGI defines two things:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Application&lt;/strong&gt;: A coroutine that receives events and can send events back&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Message format&lt;/strong&gt;: Standardized dictionaries for communication&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Here&apos;s the signature of an ASGI application:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;async def application(scope, receive, send):
    &quot;&quot;&quot;
    scope: dict - Information about the connection (HTTP or WebSocket)
    receive: coroutine - Receive messages from the server
    send: coroutine - Send messages to the server
    &quot;&quot;&quot;
    pass
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Let&apos;s build a simple ASGI application to see how this works:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;async def hello_world_app(scope, receive, send):
    &quot;&quot;&quot;Simplest possible ASGI application&quot;&quot;&quot;
    
    # scope contains connection metadata
    if scope[&apos;type&apos;] == &apos;http&apos;:
        # Wait for the HTTP request body (even if empty)
        await receive()
        
        # Send response start (status and headers)
        await send({
            &apos;type&apos;: &apos;http.response.start&apos;,
            &apos;status&apos;: 200,
            &apos;headers&apos;: [
                [b&apos;content-type&apos;, b&apos;text/plain&apos;],
            ],
        })
        
        # Send response body
        await send({
            &apos;type&apos;: &apos;http.response.body&apos;,
            &apos;body&apos;: b&apos;Hello, World!&apos;,
        })
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This looks simple, but notice what&apos;s happening:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The application is completely async&lt;/li&gt;
&lt;li&gt;It communicates with the server through the &lt;code&gt;receive&lt;/code&gt; and &lt;code&gt;send&lt;/code&gt; coroutines&lt;/li&gt;
&lt;li&gt;Messages are just dictionaries with standardized keys&lt;/li&gt;
&lt;/ul&gt;
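&lt;p&gt;Because the interface is just a coroutine plus two callables, you can drive an ASGI app by hand with stub &lt;code&gt;receive&lt;/code&gt; and &lt;code&gt;send&lt;/code&gt; coroutines, which is essentially what ASGI test clients do. The app is repeated here so the snippet is self-contained:&lt;/p&gt;

```python
import asyncio

async def hello_world_app(scope, receive, send):
    """Same minimal ASGI application as above."""
    if scope["type"] == "http":
        await receive()
        await send({"type": "http.response.start", "status": 200,
                    "headers": [[b"content-type", b"text/plain"]]})
        await send({"type": "http.response.body", "body": b"Hello, World!"})

async def main():
    sent = []

    # Stub server side: receive() hands over an empty request body,
    # send() just records whatever the app emits.
    async def receive():
        return {"type": "http.request", "body": b"", "more_body": False}

    async def send(message):
        sent.append(message)

    await hello_world_app({"type": "http"}, receive, send)
    return sent

messages = asyncio.run(main())
print(messages[0]["status"])  # 200
print(messages[1]["body"])    # b'Hello, World!'
```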
&lt;h3&gt;ASGI Server: Bridging Sockets and Applications&lt;/h3&gt;
&lt;p&gt;An ASGI server like Uvicorn sits between the socket layer and your application. Its job is to:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Accept connections from clients&lt;/li&gt;
&lt;li&gt;Parse HTTP requests from raw bytes&lt;/li&gt;
&lt;li&gt;Convert them to ASGI messages&lt;/li&gt;
&lt;li&gt;Call your application&lt;/li&gt;
&lt;li&gt;Convert ASGI response messages back to HTTP bytes&lt;/li&gt;
&lt;li&gt;Send them back through the socket&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Here&apos;s a simplified view of what Uvicorn does (this is fully working code; you can run and test it yourself):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import asyncio
import httptools


class ASGIServer:
    def __init__(self, app):
        self.app = app

    async def handle_connection(self, reader, writer):
        parser = httptools.HttpRequestParser(self)
        self.parser = parser  # attached to self so the httptools callbacks below can use it
        # NOTE: keeping per-connection state on self means concurrent
        # connections would clobber each other; fine for a demo, not production
        self.writer = writer
        self.reader = reader
        self.headers = []
        self.body = b&quot;&quot;
        self.url = None
        self.method = None
        self.complete = False
        self.upgrade = False

        try:
            while not self.complete:
                data = await reader.read(65536)
                if not data:
                    return  # client disconnected
                parser.feed_data(data)  # may raise HttpParserError → caught below

            # After on_message_complete(), we have a full request → run ASGI app
            await self._run_app()

        except httptools.HttpParserError:
            self._write_response(b&quot;HTTP/1.1 400 Bad Request\r\n\r\n&quot;)
        except Exception as e:
            print(&quot;Error:&quot;, e)
            self._write_response(b&quot;HTTP/1.1 500 Internal Error\r\n\r\n&quot;)
        finally:
            writer.close()
            await writer.wait_closed()

    # ─────── httptools callbacks ───────
    def on_url(self, url: bytes):
        self.url = url
        parsed = httptools.parse_url(url)
        self.path = parsed.path.decode() if parsed.path else &quot;/&quot;
        self.query_string = parsed.query or b&quot;&quot;

    def on_header(self, name: bytes, value: bytes):
        self.headers.append((name.lower(), value))

    def on_headers_complete(self):
        self.method = self.parser.get_method().decode()

    def on_body(self, body: bytes):
        self.body += body

    def on_message_complete(self):
        self.complete = True

    # ─────── Helper to send raw HTTP response (used only on errors) ───────
    def _write_response(self, data: bytes):
        try:
            self.writer.write(data)
            asyncio.create_task(self.writer.drain())
        except Exception:
            pass

    # ─────── Run the actual ASGI application ───────
    async def _run_app(self):
        scope = {
            &quot;type&quot;: &quot;http&quot;,
            &quot;asgi&quot;: {&quot;version&quot;: &quot;3.0&quot;, &quot;spec_version&quot;: &quot;2.3&quot;},
            &quot;http_version&quot;: &quot;1.1&quot;,
            &quot;method&quot;: self.method,
            &quot;scheme&quot;: &quot;http&quot;,
            &quot;path&quot;: self.path,
            &quot;raw_path&quot;: self.url,
            &quot;query_string&quot;: self.query_string,
            &quot;headers&quot;: self.headers,
            &quot;server&quot;: (&quot;127.0.0.1&quot;, 8000),
            &quot;client&quot;: self.writer.get_extra_info(&quot;peername&quot;),
        }

        # receive() – streams body if app asks for it
        async def receive():
            if self.body:
                body, self.body = self.body, b&quot;&quot;
                return {&quot;type&quot;: &quot;http.request&quot;, &quot;body&quot;: body, &quot;more_body&quot;: False}
            return {&quot;type&quot;: &quot;http.request&quot;, &quot;body&quot;: b&quot;&quot;, &quot;more_body&quot;: False}

        # send() – converts ASGI messages → raw HTTP via httptools
        async def send(message):
            if message[&quot;type&quot;] == &quot;http.response.start&quot;:
                status = message[&quot;status&quot;]
                headers = message.get(&quot;headers&quot;, [])

                # Build the status line + headers
                # (reason phrase left empty, which HTTP/1.1 permits)
                out = f&quot;HTTP/1.1 {status} \r\n&quot;.encode()
                for name, value in headers:
                    out += name + b&quot;: &quot; + value + b&quot;\r\n&quot;
                out += b&quot;\r\n&quot;

                self.writer.write(out)
                await self.writer.drain()

            elif message[&quot;type&quot;] == &quot;http.response.body&quot;:
                body = message.get(&quot;body&quot;, b&quot;&quot;)
                if body:
                    self.writer.write(body)
                    await self.writer.drain()

        # Run the ASGI app
        await self.app(scope, receive, send)

    async def serve(self, host=&quot;127.0.0.1&quot;, port=8000):
        server = await asyncio.start_server(self.handle_connection, host, port)

        print(f&quot;ASGI server running on http://{host}:{port}&quot;)
        async with server:
            await server.serve_forever()


# —————————— Simple application ——————————
async def app(scope, receive, send):

    await send(
        {
            &quot;type&quot;: &quot;http.response.start&quot;,
            &quot;status&quot;: 200,
            &quot;headers&quot;: [(b&quot;content-type&quot;, b&quot;text/plain&quot;)],
        }
    )
    await send({&quot;type&quot;: &quot;http.response.body&quot;, &quot;body&quot;: b&quot;Hello from ASGI server!\n&quot;})


# —————————— Run the server ——————————
if __name__ == &quot;__main__&quot;:
    server = ASGIServer(app)
    try:
        asyncio.run(server.serve(&quot;127.0.0.1&quot;, 8000))
    except KeyboardInterrupt:
        print(&quot;\nServer stopped&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;The Complete Flow&lt;/h3&gt;
&lt;p&gt;Let&apos;s trace a complete HTTP request through all layers:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;sequenceDiagram
    participant Client
    participant NIC
    participant Kernel
    participant EventLoop as asyncio Event Loop
    participant Server as Uvicorn
    participant App as FastAPI App
    
    Client-&amp;gt;&amp;gt;NIC: TCP packet with HTTP request
    NIC-&amp;gt;&amp;gt;Kernel: Hardware interrupt
    Kernel-&amp;gt;&amp;gt;Kernel: TCP/IP processing
    Kernel-&amp;gt;&amp;gt;Kernel: Write to socket buffer
    Kernel-&amp;gt;&amp;gt;EventLoop: epoll notification
    EventLoop-&amp;gt;&amp;gt;Server: Socket is readable
    Server-&amp;gt;&amp;gt;Kernel: recv() syscall
    Kernel-&amp;gt;&amp;gt;Server: Return HTTP bytes
    Server-&amp;gt;&amp;gt;Server: Parse HTTP request
    Server-&amp;gt;&amp;gt;App: Call app(scope, receive, send)
    App-&amp;gt;&amp;gt;App: Process request
    App-&amp;gt;&amp;gt;Server: send(http.response.start)
    App-&amp;gt;&amp;gt;Server: send(http.response.body)
    Server-&amp;gt;&amp;gt;Server: Build HTTP response bytes
    Server-&amp;gt;&amp;gt;Kernel: send() syscall
    Kernel-&amp;gt;&amp;gt;Kernel: Write to socket send buffer
    Kernel-&amp;gt;&amp;gt;Kernel: TCP/IP processing
    Kernel-&amp;gt;&amp;gt;NIC: Schedule packet transmission
    NIC-&amp;gt;&amp;gt;Client: TCP packet with HTTP response
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Wait! Did we miss something? What about the TCP handshake? Well, let&apos;s take a look at that briefly.&lt;/p&gt;
&lt;h4&gt;Three-Way Handshake&lt;/h4&gt;
&lt;p&gt;When a client connects, the kernel automatically completes a three-step handshake:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Client → Server (SYN)&lt;/strong&gt;: &quot;I want to connect, my sequence number is x&quot;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Server → Client (SYN-ACK)&lt;/strong&gt;: &quot;Acknowledged, my sequence number is y&quot;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Client → Server (ACK)&lt;/strong&gt;: &quot;Acknowledged, connection established&quot;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Your application doesn&apos;t see these packets—the kernel handles everything. When you call &lt;code&gt;accept()&lt;/code&gt;, you get a fully-established connection:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;client, addr = server.accept()  # Returns after handshake completes
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Data Transfer&lt;/h4&gt;
&lt;p&gt;Now HTTP data flows bidirectionally. Each &lt;code&gt;send()&lt;/code&gt; and &lt;code&gt;recv()&lt;/code&gt; moves data between your application and the kernel&apos;s socket buffers:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;request = client.recv(4096)      # Read HTTP request
client.send(b&quot;HTTP/1.1 200 OK&quot;)  # Send HTTP response
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The kernel handles TCP&apos;s reliability—retransmitting lost packets, reordering segments, and managing flow control.&lt;/p&gt;
&lt;h4&gt;Four-Way Teardown&lt;/h4&gt;
&lt;p&gt;Closing requires four steps because TCP is full-duplex (two independent pipes):&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Client sends FIN&lt;/strong&gt;: &quot;I&apos;m done sending&quot;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Server sends ACK&lt;/strong&gt;: &quot;Got it&quot;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Server sends FIN&lt;/strong&gt;: &quot;I&apos;m done too&quot;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Client sends ACK&lt;/strong&gt;: &quot;Got it, goodbye&quot;&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;client.close()  # Initiates graceful shutdown
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This graceful close ensures no data is lost. After closing, the side that initiated the close enters the &lt;code&gt;TIME_WAIT&lt;/code&gt; state (two maximum segment lifetimes, typically 60-120 seconds) to handle any delayed packets.&lt;/p&gt;
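&lt;p&gt;&lt;code&gt;TIME_WAIT&lt;/code&gt; is also why restarting a server immediately after stopping it can fail with &quot;Address already in use&quot;. The usual escape hatch is the &lt;code&gt;SO_REUSEADDR&lt;/code&gt; socket option, which (on Unix) asyncio&apos;s server helpers enable by default. A quick sketch with a plain socket:&lt;/p&gt;

```python
import socket

# SO_REUSEADDR lets bind() succeed even when the address is held by a
# connection lingering in TIME_WAIT from a previous run of the server.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
srv.listen()

# Confirm the option is set on the listening socket.
print(srv.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR) != 0)  # True
srv.close()
```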
&lt;p&gt;Here&apos;s the full trace of a complete HTTP request through all layers:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;sequenceDiagram
    participant Client
    participant NIC
    participant Kernel
    participant EventLoop as asyncio Event Loop
    participant Server as Uvicorn
    participant App as FastAPI App
    
    Note over Client,App: TCP 3-Way Handshake
    Client-&amp;gt;&amp;gt;NIC: SYN (seq=x)
    NIC-&amp;gt;&amp;gt;Kernel: Hardware interrupt
    Kernel-&amp;gt;&amp;gt;Kernel: TCP processing
    Kernel-&amp;gt;&amp;gt;NIC: SYN-ACK (seq=y, ack=x+1)
    NIC-&amp;gt;&amp;gt;Client: SYN-ACK
    Client-&amp;gt;&amp;gt;NIC: ACK (ack=y+1)
    NIC-&amp;gt;&amp;gt;Kernel: Hardware interrupt
    Kernel-&amp;gt;&amp;gt;Kernel: Connection ESTABLISHED
    Kernel-&amp;gt;&amp;gt;EventLoop: epoll notification (new connection)
    EventLoop-&amp;gt;&amp;gt;Server: accept() ready
    Server-&amp;gt;&amp;gt;Kernel: accept() syscall
    Kernel-&amp;gt;&amp;gt;Server: Return client socket fd
    
    Note over Client,App: HTTP Request/Response
    Client-&amp;gt;&amp;gt;NIC: TCP packet with HTTP request
    NIC-&amp;gt;&amp;gt;Kernel: Hardware interrupt
    Kernel-&amp;gt;&amp;gt;Kernel: TCP/IP processing
    Kernel-&amp;gt;&amp;gt;Kernel: Write to socket buffer
    Kernel-&amp;gt;&amp;gt;EventLoop: epoll notification (data ready)
    EventLoop-&amp;gt;&amp;gt;Server: Socket is readable
    Server-&amp;gt;&amp;gt;Kernel: recv() syscall
    Kernel-&amp;gt;&amp;gt;Server: Return HTTP bytes
    Server-&amp;gt;&amp;gt;Server: Parse HTTP request
    Server-&amp;gt;&amp;gt;App: Call app(scope, receive, send)
    App-&amp;gt;&amp;gt;App: Process request
    App-&amp;gt;&amp;gt;Server: send(http.response.start)
    App-&amp;gt;&amp;gt;Server: send(http.response.body)
    Server-&amp;gt;&amp;gt;Server: Build HTTP response bytes
    Server-&amp;gt;&amp;gt;Kernel: send() syscall
    Kernel-&amp;gt;&amp;gt;Kernel: Write to socket send buffer
    Kernel-&amp;gt;&amp;gt;Kernel: TCP/IP processing
    Kernel-&amp;gt;&amp;gt;NIC: Schedule packet transmission
    NIC-&amp;gt;&amp;gt;Client: TCP packet with HTTP response
    
    Note over Client,App: TCP Connection Close
    Client-&amp;gt;&amp;gt;Client: CLOSE call
    Note over Client: State: FIN-WAIT-1
    Client-&amp;gt;&amp;gt;NIC: FIN (seq=m)
    NIC-&amp;gt;&amp;gt;Kernel: Hardware interrupt
    Kernel-&amp;gt;&amp;gt;Kernel: TCP processing
    Note over Kernel: State: CLOSE-WAIT
    Kernel-&amp;gt;&amp;gt;EventLoop: epoll notification (FIN received)
    EventLoop-&amp;gt;&amp;gt;Server: Socket readable (EOF)
    Server-&amp;gt;&amp;gt;Kernel: recv() returns 0
    Kernel-&amp;gt;&amp;gt;NIC: ACK (ack=m+1)
    NIC-&amp;gt;&amp;gt;Client: ACK
    Note over Client: State: FIN-WAIT-2
    Note over Kernel: Server can still send data
    Server-&amp;gt;&amp;gt;Server: CLOSE call
    Note over Kernel: State: LAST-ACK
    Kernel-&amp;gt;&amp;gt;NIC: FIN (seq=n)
    NIC-&amp;gt;&amp;gt;Client: FIN
    Client-&amp;gt;&amp;gt;NIC: ACK (ack=n+1)
    Note over Client: State: TIME-WAIT (2MSL)
    NIC-&amp;gt;&amp;gt;Kernel: Hardware interrupt
    Note over Kernel: State: CLOSED
    Note over Client: After 2MSL → CLOSED
    
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;We&apos;ve completed our journey from hardware to handler, connecting the dots between &lt;a href=&quot;/posts/python-http-journey-part-1&quot;&gt;Part 1&lt;/a&gt;&apos;s kernel-level operations and Part 2&apos;s application-layer abstractions. In &lt;a href=&quot;/posts/python-http-journey-part-1&quot;&gt;Part 1&lt;/a&gt;, we saw how the kernel handles packets, manages TCP reliability, and fills socket buffers. Now we understand how Python&apos;s asyncio efficiently monitors those buffers, how ASGI servers translate bytes into structured messages, and how frameworks route requests to your code.&lt;/p&gt;
&lt;p&gt;The complete picture reveals why ASGI-based applications scale:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Efficient delegation&lt;/strong&gt;: The event loop delegates socket monitoring to the kernel&apos;s epoll, avoiding expensive polling&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Non-blocking operations&lt;/strong&gt;: Coroutines yield control instead of blocking threads, enabling massive concurrency&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Minimal overhead&lt;/strong&gt;: ASGI adds almost no latency; the expensive parts remain network transmission and system calls&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Clean abstractions&lt;/strong&gt;: Despite the complexity underneath, you write simple &lt;code&gt;async def&lt;/code&gt; handlers&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In future posts, we&apos;ll use this knowledge to build our own ASGI server from scratch, implementing every layer we&apos;ve discussed today. But for now, the next time you write &lt;code&gt;@app.get(&quot;/&quot;)&lt;/code&gt;, you&apos;ll know the incredible journey that happens when someone visits that endpoint. And that understanding makes you a better developer.&lt;/p&gt;
</content:encoded><author>Rafiqul Hasan</author></item></channel></rss>