Compared to IfxPy

IfxPy is IBM’s official Python driver — a C extension that wraps the OneDB Client SDK (CSDK), which itself wraps the same SQLI wire protocol informix-driver speaks directly. It’s the reasonable comparison: same protocol, same server, same workload, different transport.

Numbers below are median + IQR over 10+ rounds, all against the same IBM Informix Developer Edition Docker container on the same host. Methodology and reproduction steps live in tests/benchmarks/compare/ in the repo.

| Benchmark | IfxPy 3.0.5 (C) | informix-driver (pure Python) | Result |
| --- | --- | --- | --- |
| Single-row SELECT round-trip | 118 µs | 114 µs | comparable |
| ~10-row server-side query | 130 µs | 159 µs | IfxPy 22% faster |
| Cold connect (login handshake) | 11.0 ms | 10.5 ms | comparable |
| executemany(1k) in transaction | 23.5 ms | 23.2 ms | tied |
| executemany(10k) in transaction | 259 ms | 161 ms | informix-driver 1.6× faster |
| executemany(100k) in transaction | 2376 ms | 1487 ms | informix-driver 1.6× faster |
| SELECT 1k rows | 1.34 ms | 1.72 ms | IfxPy 1.28× faster |
| SELECT 10k rows | 11.7 ms | 16.1 ms | IfxPy 1.38× faster |
| SELECT 100k rows | 116 ms | 169 ms | IfxPy 1.46× faster |

The clearest win is bulk insert throughput: executemany over 10,000 rows runs in 161 ms vs IfxPy’s 259 ms, so informix-driver is 1.6× faster.

The mechanism is pipelining. Phase 33 changed executemany to send all N BIND+EXECUTE PDUs back-to-back before draining any response. IfxPy’s C-level IfxPy.execute(stmt, tuple) makes one round-trip per row; at 10,000 rows, even ~10 µs of loopback latency per round-trip adds up to the ~100 ms gap.

# Both drivers
cur.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    rows,  # list of 10_000 tuples
)
# informix-driver: 161 ms — 10k PDUs sent, then 10k responses drained
# IfxPy: 259 ms — 10k round-trips, each blocking on response
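The pattern is easy to see in miniature with a toy echo peer over a socketpair (purely illustrative; this is not the driver’s wire protocol): write every request before draining any response, and the per-request wait disappears.

```python
import socket
import threading

def echo_server(conn, n):
    # Toy peer: read n fixed-size requests, acknowledge each one.
    for _ in range(n):
        conn.recv(4)
        conn.sendall(b"ok!\n")

def run_pipelined(n=1000):
    client, server = socket.socketpair()
    t = threading.Thread(target=echo_server, args=(server, n))
    t.start()
    # Pipelined: all n requests hit the wire before we read a single
    # ack, so we pay roughly one round-trip of latency, not n.
    for _ in range(n):
        client.sendall(b"req\n")
    buf = b""
    while buf.count(b"ok!\n") < n:
        buf += client.recv(65536)
    t.join()
    client.close()
    server.close()
    return buf.count(b"ok!\n")
```

Replacing the send loop with an interleaved send-then-recv per request models the one-round-trip-per-row pattern the C driver is stuck with.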

informix-driver ships as a 50 KB pure-Python wheel with zero runtime dependencies. Your Dockerfile is:

FROM python:3.13-slim
RUN pip install informix-driver

IfxPy’s deployment surface is dramatically larger:

  • 92 MB IBM OneDB Client tarball
  • setuptools < 58 build pin
  • LD_LIBRARY_PATH configuration for four directories
  • libcrypt.so.1 (deprecated 2018 — missing on Arch, Fedora 35+, RHEL 9)
  • C compiler in the build image

For slim images, multi-stage builds, FaaS deployments, or anywhere a build toolchain in the runtime image is friction, informix-driver is the only reasonable option.

IfxPy currently works only on Python ≤ 3.11. The C extension breaks on 3.12+ (PyConfig changes, the removal of _PyImport_AcquireLock, etc.).

informix-driver works unmodified on 3.10, 3.11, 3.12, 3.13, and 3.14. We’ve kept a CI matrix on every minor version since 3.10 from the start.

informix-driver ships an async API:

from informix_db import aio

async def main():
    pool = await aio.create_pool(...)
    async with pool.connection() as conn:
        cur = await conn.cursor()
        await cur.execute("SELECT ...")
        rows = await cur.fetchall()

IfxPy has no async support — every call blocks the event loop. Using IfxPy from FastAPI requires loop.run_in_executor() boilerplate, and the thread pool isn’t connection-aware, so you give up the natural fairness of an async pool.
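The workaround looks like this sketch, with a hypothetical `run_query_blocking` standing in for the blocking IfxPy call chain:

```python
import asyncio

def run_query_blocking(sql):
    # Hypothetical stand-in for the blocking IfxPy sequence
    # (connect / prepare / execute / fetch_tuple loop).
    return [(1, "widget")]

async def handler():
    # asyncio.to_thread (3.9+) is the modern spelling of
    # loop.run_in_executor(None, ...): the blocking driver call runs
    # on the default thread pool so the event loop stays responsive.
    return await asyncio.to_thread(run_query_blocking, "SELECT * FROM t")

rows = asyncio.run(handler())
```

Note that the executor’s thread count, not your connection pool, becomes the concurrency cap — which is the fairness problem noted above.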

For queries pulling 10k+ rows where per-row decode cost dominates, IfxPy is currently 5–15% faster. The C-level fetch_tuple decoder is ~1.1 µs/row; our Python parse_tuple_payload is ~2.0 µs/row after Phase 39 (down from ~2.7 before). At 100k rows the gap is ~80 ms wall-clock — meaningful but not disqualifying.

The gap is closing phase by phase:

| Phase | Bulk-fetch ratio vs IfxPy |
| --- | --- |
| Phase 36 | 2.40× slower |
| Phase 37 (per-column readers) | 2.10× slower |
| Phase 38 (codegen-inlined decoders) | 2.04× slower |
| Phase 39 (connection-scoped buffered reader) | 1.15× slower |
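To illustrate what “codegen-inlined decoders” means in general (this is not the driver’s actual code), one can generate a specialized row-decode function from a list of column formats, replacing per-column dispatch with a single compiled body:

```python
import struct

def make_row_decoder(col_fmts):
    # Generate one function whose body unpacks every fixed-width
    # column at a precomputed offset: no per-column loop or dispatch
    # remains at decode time.
    parts, ns, off = [], {}, 0
    for i, fmt in enumerate(col_fmts):
        ns[f"_u{i}"] = struct.Struct(fmt).unpack_from
        parts.append(f"_u{i}(buf, {off})[0]")
        off += struct.calcsize(fmt)
    exec(f"def decode(buf):\n    return ({', '.join(parts)},)", ns)
    return ns["decode"]

# Hypothetical two-column row: a little-endian int32 and a float64.
decode = make_row_decoder(["<i", "<d"])
row = decode(struct.pack("<id", 7, 1.5))  # -> (7, 1.5)
```

The generated function is built once per statement shape, so the codegen cost amortizes across every fetched row.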

If you’re running analytical reports that pull millions of rows in a single SELECT and the per-row decode overhead is a measurable cost, IfxPy may be marginally faster today. For most application workloads it isn’t.

If your existing code uses IBM-specific cursor extensions (cursor.callproc with named parameters, IBM’s specific scrollable cursor semantics around last/prior/relative, cursor.set_chunk_size for fetch tuning), the migration to informix-driver is straightforward but not zero-cost. We support the core PEP 249 surface plus our own scrollable cursor API — see the migration guide.
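The shared PEP 249 core is why the migration is mostly mechanical. A sketch of the portable shape, with sqlite3 standing in for any qmark-paramstyle driver (the helper and table here are hypothetical):

```python
import sqlite3  # stand-in for a qmark-paramstyle PEP 249 driver

def fetch_open_orders(connect):
    # Only the PEP 249 core surface: connect() -> cursor() ->
    # execute/executemany with qmark parameters -> fetchall().
    # Code written against this migrates by swapping the connect
    # callable for another driver's.
    conn = connect()
    try:
        cur = conn.cursor()
        cur.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
        cur.executemany("INSERT INTO orders VALUES (?, ?)",
                        [(1, "open"), (2, "closed"), (3, "open")])
        cur.execute("SELECT id FROM orders WHERE status = ? ORDER BY id",
                    ("open",))
        return [r[0] for r in cur.fetchall()]
    finally:
        conn.close()

open_ids = fetch_open_orders(lambda: sqlite3.connect(":memory:"))  # [1, 3]
```

Code that stays inside this surface needs no changes; only the IBM-specific extensions above require rework.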

Benchmarks are pytest-benchmark fixtures in tests/benchmarks/compare/ against the official icr.io/informix/informix-developer-database:15.0.1.0.3DE image, running on the same loopback as the Python process.

Reported numbers are median over 10+ rounds, with IQR included. Why median rather than mean: the first round of any run includes JIT warmup, page-cache misses, and a TCP slow-start round-trip, and those one-shot costs contaminate the mean in a way that misrepresents steady-state behavior.
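A quick illustration with the standard-library statistics module (the round timings below are made up):

```python
import statistics

# Ten hypothetical rounds (ms); the first carries the one-shot costs.
rounds = [412.0, 116.2, 115.8, 116.0, 116.4,
          115.9, 116.1, 116.3, 115.7, 116.0]

mean = statistics.mean(rounds)        # ~145.6, dragged up by round 1
median = statistics.median(rounds)    # ~116, the steady-state estimate
q1, _, q3 = statistics.quantiles(rounds, n=4)
iqr = q3 - q1                         # spread of the middle 50%
```

One warmup round shifts the mean by ~30 ms here while the median and IQR barely move, which is exactly why median + IQR is the reported statistic.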

IfxPy’s IQR on the 100k-row SELECT is ~21% (Docker→host loopback noise, plus the C extension’s allocation patterns); ours is ~0.2%. The headline ratio at 100k rows is therefore partly noise. A fair reading is “5–15% slower than IfxPy on large fetches”, and the lower bound may already be within measurement noise.

To reproduce:

git clone https://git.supported.systems/warehack.ing/informix-db
cd informix-db/tests/benchmarks/compare
make ifx-up
make compare

The Makefile handles the IfxPy install gauntlet (a Python ≤ 3.11 environment, setuptools < 58, the libcrypt.so.1 symlink, the OneDB CSDK download, the four LD_LIBRARY_PATH exports) so you don’t have to work through it by hand.

Use informix-driver when:

  • You’re writing new code in Python ≥ 3.10
  • Your workload is bulk-insert / ETL / log-shipping
  • You want async / FastAPI integration
  • You’re deploying in containers or to Python environments where build toolchains are friction
  • Your platform doesn’t have libcrypt.so.1

Use IfxPy when:

  • You have an existing IfxPy codebase
  • You’re running large analytical SELECTs and the 5–15% decode-side gap matters
  • You’re constrained to Python ≤ 3.11 anyway

For everything else, the cost-benefit favors pip install informix-driver.