SoFunction
Updated on 2024-11-14

Creating a PostgreSQL Database Connection Pool in Python

It is customary to create a connection pool before using a database, even in single-threaded applications: as long as several methods need a database connection, it is worth pooling even one or two connections. Connection pooling has several benefits:

  • 1) Repeatedly creating connections can be quite time-consuming.
  • 2) For an application that uses a single connection from start to finish, a pool avoids having to pass the connection object around.
  • 3) If you forget to close a connection, the pool can close it after a certain amount of time; of course, an application that fetches connections heavily without returning them will still exhaust the pool.
  • 4) The number of connections the application opens is kept under control.

Having used PostgreSQL from Python, it is natural to consider creating a connection pool as well: take a connection from the pool when you need one and return it when you are done, rather than establishing a new physical connection every time. There are two main packages for connecting Python to PostgreSQL, py-postgresql and psycopg2; the latter is used in the examples in this article.

Psycopg provides two connection pool implementations in the psycopg2.pool module, both of which inherit from the abstract class psycopg2.pool.AbstractConnectionPool.

The basic methods of this abstract class are

  • getconn(key=None): get a connection from the pool
  • putconn(conn, key=None, close=False): return a connection to the pool
  • closeall(): close all connections in the pool
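To make the getconn()/putconn()/closeall() contract concrete without needing a database, here is a minimal stand-in that mimics the pool interface described above (FakeConnection and FakePool are hypothetical names for illustration, not part of psycopg2):

```python
# Illustrative stand-in for the psycopg2.pool interface: getconn() hands out
# an idle connection or creates one up to maxconn, putconn() returns it to
# the idle list for reuse, and closeall() closes everything.
class FakeConnection:
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


class FakePool:
    def __init__(self, minconn, maxconn):
        self.maxconn = maxconn
        self._idle = [FakeConnection() for _ in range(minconn)]
        self._used = set()

    def getconn(self, key=None):
        if self._idle:
            conn = self._idle.pop()          # reuse an idle connection
        elif len(self._used) < self.maxconn:
            conn = FakeConnection()          # grow, up to maxconn
        else:
            raise RuntimeError('connection pool exhausted')
        self._used.add(conn)
        return conn

    def putconn(self, conn, key=None, close=False):
        self._used.discard(conn)
        if close:
            conn.close()
        else:
            self._idle.append(conn)          # back to the idle list

    def closeall(self):
        for conn in self._idle + list(self._used):
            conn.close()


demo_pool = FakePool(1, 3)
c1 = demo_pool.getconn()
c2 = demo_pool.getconn()
demo_pool.putconn(c1)        # returned, now reusable
c3 = demo_pool.getconn()     # reuses c1 rather than creating a new one
demo_pool.closeall()
```

The real pool classes add details (keyed connections, thread safety), but the lifecycle is the same: every getconn() should be paired with a putconn().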

The implementation classes for the two connection pools are

  • SimpleConnectionPool(minconn, maxconn, *args, **kwargs): for single-threaded applications
  • ThreadedConnectionPool(minconn, maxconn, *args, **kwargs): safe for multi-threaded use; it is essentially the same pool with a lock around getconn() and putconn()

So the safest choice is still ThreadedConnectionPool; in a single-threaded application SimpleConnectionPool is not noticeably more efficient than ThreadedConnectionPool anyway.

Here is a concrete connection pool implementation. It uses a context manager, which combines conveniently with the with keyword, so there is no need to explicitly call putconn() to return the connection after use.

db_helper.py

from psycopg2 import pool
from psycopg2.extras import RealDictCursor
from contextlib import contextmanager
import atexit


class DBHelper:
    def __init__(self):
        self._connection_pool = None

    def initialize_connection_pool(self):
        db_dsn = 'postgresql://admin:password@localhost/testdb?connect_timeout=5'
        self._connection_pool = pool.ThreadedConnectionPool(1, 3, db_dsn)

    @contextmanager
    def get_resource(self, autocommit=True):
        if self._connection_pool is None:
            self.initialize_connection_pool()

        conn = self._connection_pool.getconn()
        conn.autocommit = autocommit
        cursor = conn.cursor(cursor_factory=RealDictCursor)
        try:
            yield cursor, conn
        finally:
            cursor.close()
            self._connection_pool.putconn(conn)

    def shutdown_connection_pool(self):
        if self._connection_pool is not None:
            self._connection_pool.closeall()


db_helper = DBHelper()


@atexit.register
def shutdown_connection_pool():
    db_helper.shutdown_connection_pool()

A few notes:

  • The connection pool is created only on the first call to get_resource(), not when from db_helper import db_helper is executed.
  • The context manager yields two objects, cursor and connection; the connection is needed to manage transactions.
  • By default the cursor returns records as dictionaries rather than tuples (because of RealDictCursor).
  • Connections autocommit by default.
  • The final @atexit.register shutdown hook may be a bit redundant, since connections are closed anyway when the process exits, though without it the sockets may linger in TIME_WAIT a little longer.
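The reason putconn() is always reached, even when the with-body raises, is the finally clause in get_resource(). The following self-contained sketch reproduces that pattern with stand-in objects (FakePool, FakeConnection and FakeCursor are hypothetical, used only so the cleanup behavior can be shown without a database):

```python
from contextlib import contextmanager

# Stand-ins for the pool, connection and cursor, so the finally-based
# cleanup can be demonstrated without a real PostgreSQL server.
class FakeCursor:
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

class FakeConnection:
    def __init__(self):
        self.autocommit = False
    def cursor(self, cursor_factory=None):
        return FakeCursor()

class FakePool:
    def __init__(self):
        self.returned = []
    def getconn(self):
        return FakeConnection()
    def putconn(self, conn):
        self.returned.append(conn)

fake_pool = FakePool()

@contextmanager
def get_resource(autocommit=True):
    conn = fake_pool.getconn()
    conn.autocommit = autocommit
    cursor = conn.cursor()
    try:
        yield cursor, conn
    finally:
        cursor.close()            # always runs, even when the body raises
        fake_pool.putconn(conn)   # the connection still goes back to the pool

try:
    with get_resource() as (cursor, conn):
        raise RuntimeError('query failed')
except RuntimeError:
    pass

print(len(fake_pool.returned))  # → 1: the connection was returned despite the error
```

This is why forgetting to return a connection is hard with this helper: the only way to leak one is to bypass the context manager entirely.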

Usage.

If you do not use transactions

from db_helper import db_helper


with db_helper.get_resource() as (cursor, _):
    cursor.execute('select * from users')
    for record in cursor.fetchall():
        ... process record, record['name'] ...

If you need to use transactions

with db_helper.get_resource(autocommit=False) as (cursor, conn):
    try:
        cursor.execute('update users set name = %s where id = %s', ('new_name', 1))
        cursor.execute('delete from orders where user_id = %s', (1,))
        conn.commit()
    except Exception:
        conn.rollback()

While writing this article I checked the psycopg website and found that Psycopg 3.0 was officially released on 2021-10-13 (Psycopg 3.0 released), with better support for asynchronous use; asynchronous support first appeared in psycopg2 version 2.2. Note also that Psycopg is implemented in C, which makes it efficient; it is also why pip install psycopg2 often fails to build and pip install psycopg2-binary has to be used instead.

Adding keepalivesXxx parameters when creating the connection pool lets the server detect and break dead connections promptly; otherwise, on Linux it takes 2 hours by default for a dead connection to be dropped. Dead connections occur when a client exits abnormally (e.g. a power failure) and the previously established connection becomes defunct.

pool.ThreadedConnectionPool(1, 3, db_dsn, keepalives=1, keepalives_idle=30, keepalives_interval=10, keepalives_count=5)


On the server side, after a connection has been idle for tcp_keepalives_idle seconds, the PostgreSQL server actively sends up to tcp_keepalives_count keepalive probe packets, one every tcp_keepalives_interval seconds; if none of them are answered, the connection is considered dead and is dropped.
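For reference, those server-side settings live in postgresql.conf (the values below are illustrative; a value of 0 means fall back to the operating system default):

```
# postgresql.conf (illustrative values; 0 falls back to the OS default)
tcp_keepalives_idle = 60        # seconds of idle time before the first probe
tcp_keepalives_interval = 10    # seconds between probes
tcp_keepalives_count = 5        # unanswered probes before the connection is dropped
```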

That is all for this article on creating a PostgreSQL database connection pool in Python. For more on PostgreSQL and Python, please search my earlier posts or browse the related articles below, and I hope you will continue to support me!