Use the connection pool

The connection pool amortizes the ~11 ms login handshake across many queries and gives you a thread-safe / task-safe acquire-release API. Use it any time the same process makes more than a handful of queries.

```python
import informix_db

pool = informix_db.create_pool(
    host="db.example.com", port=9088,
    user="informix", password="...",
    database="mydb", server="informix",
    min_size=2,           # warm up at least 2 connections at create time
    max_size=10,          # hard cap; acquires beyond this block
    acquire_timeout=5.0,  # raise PoolTimeout if no connection in 5s
    max_idle=600.0,       # close connections idle longer than 10 min
)

with pool.connection() as conn:
    cur = conn.cursor()
    cur.execute("SELECT id, name FROM users WHERE id = ?", (42,))
    print(cur.fetchone())

pool.close()
```

The context manager guarantees the connection returns to the pool on normal exit and on exception. Connections returned to the pool get rolled back automatically — you never see a dirty connection from pool.connection().
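The return-and-rollback guarantee can be sketched with a minimal pool built on `contextlib` (a stub connection stands in for a real one; `MiniPool` and `StubConnection` are illustrative names, not informix_db internals):

```python
import contextlib
import queue

class StubConnection:
    """Stands in for a real connection; records rollbacks."""
    def __init__(self):
        self.rolled_back = 0

    def rollback(self):
        self.rolled_back += 1

class MiniPool:
    def __init__(self, size=2):
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(StubConnection())

    @contextlib.contextmanager
    def connection(self):
        conn = self._idle.get()
        try:
            yield conn
        finally:
            # Runs on normal exit AND on exception: roll back any
            # open transaction, then hand the connection back.
            conn.rollback()
            self._idle.put(conn)

pool = MiniPool()
try:
    with pool.connection() as conn:
        raise RuntimeError("query failed")
except RuntimeError:
    pass
# The connection was rolled back and returned despite the exception.
```

The key point is that the rollback and the return both live in the `finally` block, so an exception inside the `with` body cannot strand or dirty a connection.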

```python
import asyncio

from informix_db import aio

async def main():
    pool = await aio.create_pool(
        host="db.example.com",
        user="informix", password="...",
        database="mydb",
        min_size=2, max_size=10,
    )
    async with pool.connection() as conn:
        cur = await conn.cursor()
        await cur.execute("SELECT 1 FROM systables WHERE tabid = 1")
        print(await cur.fetchone())
    await pool.close()

asyncio.run(main())
```

Same semantics, async-aware. Acquire and release are cancellation-safe: a cancelled task does not leak an in-flight query onto a connection that later gets recycled to another task.
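The release-on-cancellation pattern can be illustrated with a toy async pool (a sketch of the general technique, not informix_db's actual internals): the release happens in `__aexit__`, which Python runs even while the task is unwinding from `CancelledError`.

```python
import asyncio

class MiniAsyncPool:
    """Toy pool showing cancellation-safe release (illustrative only)."""
    def __init__(self, size=2):
        self._idle = asyncio.Queue()
        for i in range(size):
            self._idle.put_nowait(f"conn-{i}")

    def connection(self):
        return _Lease(self)

class _Lease:
    def __init__(self, pool):
        self._pool = pool
        self._conn = None

    async def __aenter__(self):
        self._conn = await self._pool._idle.get()
        return self._conn

    async def __aexit__(self, *exc):
        # Runs even when the task is cancelled mid-query, so the
        # connection always goes back to the pool.
        self._pool._idle.put_nowait(self._conn)
        return False

async def main():
    pool = MiniAsyncPool()

    async def worker():
        async with pool.connection():
            await asyncio.sleep(3600)  # stands in for a long query

    task = asyncio.create_task(worker())
    await asyncio.sleep(0)  # let the worker acquire a connection
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass
    return pool._idle.qsize()  # connections returned despite the cancel

released = asyncio.run(main())
```

Note that `__aexit__` does only synchronous work (`put_nowait`), so the release itself cannot be interrupted by a second cancellation.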

A reasonable starting point: min_size = 2, max_size = (CPU cores) × 2. Most Informix workloads are I/O-bound, so the right size is “enough to saturate the network plus some headroom for spikes” — usually 8–16 for typical web/API services.
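That rule of thumb can be computed at startup. `suggested_pool_size` is a hypothetical helper combining the two heuristics above (cores × 2, clamped to the typical 8–16 band), not part of informix_db:

```python
import os

def suggested_pool_size(cores=None, floor=8, ceiling=16):
    """max_size = cores * 2, clamped to the 8-16 band that suits
    typical I/O-bound web/API workloads."""
    cores = cores or os.cpu_count() or 4
    return max(floor, min(cores * 2, ceiling))

print(suggested_pool_size(cores=4))   # 4 cores -> 8
print(suggested_pool_size(cores=16))  # 16 cores -> capped at 16
```

Treat the output as a starting point and tune from observed acquire-wait times, not as a fixed answer.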

max_size should be smaller than the server’s MAX_CONCURRENT_CONNECTIONS — the server fails new logins past its limit, and the pool will surface that as OperationalError after waiting acquire_timeout.
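The exhaustion-then-timeout behaviour is the same as a blocking get on a bounded queue of connections. A sketch using `queue.Queue` in place of the real pool:

```python
import queue

idle = queue.Queue()
idle.put("the-only-connection")  # behaves like max_size = 1

conn = idle.get()  # first acquire succeeds immediately

timed_out = False
try:
    idle.get(timeout=0.1)  # second acquire: pool is exhausted
except queue.Empty:
    # informix_db surfaces this situation as PoolTimeout /
    # OperationalError once acquire_timeout elapses.
    timed_out = True
```

If you see these timeouts in production, either `max_size` is too small for the workload or a code path is holding connections longer than it should.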

PEP 249 says: connections should not be shared between threads. The pool gives each thread its own connection naturally — pool.connection() returns a different connection each time and each one stays held until the context manager exits.
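The one-connection-per-thread property can be demonstrated with a generic queue-backed pool (a sketch, not the library's implementation): four threads each acquire, hold at a barrier so all four leases overlap, and every thread ends up with a distinct connection.

```python
import queue
import threading

pool_q = queue.Queue()
for i in range(4):
    pool_q.put(f"conn-{i}")

barrier = threading.Barrier(4)
seen = []
seen_lock = threading.Lock()

def worker():
    conn = pool_q.get()   # this thread's private connection
    try:
        barrier.wait()    # all four threads hold a connection at once
        with seen_lock:
            seen.append(conn)
    finally:
        pool_q.put(conn)  # returned only when this thread is done

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because a connection is removed from the queue while leased, no two concurrent holders can ever receive the same one.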

Phase 27 added a per-connection wire lock that makes accidental sharing safe (interleaved PDUs serialize correctly), but you should still give each thread its own connection. The lock is a backstop, not a license.
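The backstop can be pictured as a lock taken around each whole protocol message write, so a half-written message from one thread can never interleave with another's. This is a sketch of the idea only; informix_db's actual locking is internal:

```python
import threading

class WireLockedConnection:
    """Each PDU write happens under a per-connection lock, so two
    threads (mis)sharing one connection cannot interleave partial
    messages on the wire."""
    def __init__(self):
        self._wire_lock = threading.Lock()
        self.sent = []

    def send_pdu(self, pdu: bytes):
        with self._wire_lock:
            # The whole message goes out as one unit.
            self.sent.append(pdu)

conn = WireLockedConnection()
threads = [
    threading.Thread(target=conn.send_pdu, args=(f"pdu-{i}".encode(),))
    for i in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

All eight PDUs arrive whole, in some order; what the lock cannot do is give the threads a sensible interleaving of queries and results, which is why per-thread connections remain the rule.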