Compared to IfxPy
IfxPy is IBM’s official Python driver — a C extension that wraps the OneDB Client SDK (CSDK), which itself wraps the same SQLI wire protocol informix-driver speaks directly. It’s the reasonable comparison: same protocol, same server, same workload, different transport.
Numbers below are median + IQR over 10+ rounds, all against the same IBM Informix Developer Edition Docker container on the same host. Methodology and reproduction steps live in tests/benchmarks/compare/ in the repo.
Headline numbers
| Benchmark | IfxPy 3.0.5 (C) | informix-driver (pure Python) | Result |
|---|---|---|---|
| Single-row SELECT round-trip | 118 µs | 114 µs | comparable |
| ~10-row server-side query | 130 µs | 159 µs | IfxPy 22% faster |
| Cold connect (login handshake) | 11.0 ms | 10.5 ms | comparable |
| executemany(1k) in transaction | 23.5 ms | 23.2 ms | tied |
| executemany(10k) in transaction | 259 ms | 161 ms | informix-driver 1.6× faster |
| executemany(100k) in transaction | 2376 ms | 1487 ms | informix-driver 1.6× faster |
| SELECT 1k rows | 1.34 ms | 1.72 ms | IfxPy 1.28× faster |
| SELECT 10k rows | 11.7 ms | 16.1 ms | IfxPy 1.38× faster |
| SELECT 100k rows | 116 ms | 169 ms | IfxPy 1.46× faster |
When informix-driver wins
Bulk inserts at scale
The clearest win is bulk insert throughput. executemany with 10,000 rows runs in 161 ms vs IfxPy’s 259 ms — informix-driver is 1.6× faster.
The mechanism is pipelining. Phase 33 changed executemany to send all N BIND+EXECUTE PDUs back-to-back before draining any response. IfxPy’s C-level IfxPy.execute(stmt, tuple) makes one round-trip per row — 10,000 loopback RTTs at ~10 µs each add up to the ~100 ms gap.
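That arithmetic can be sketched with a toy model. The RTT and per-row service times below are illustrative assumptions, not measured values, but the shape of the result matches the benchmark:

```python
# Toy model (not the driver's actual code): why pipelining wins at scale.
RTT = 10e-6      # assumed ~10 µs loopback round-trip
SERVICE = 5e-6   # assumed per-row server work

def per_row_round_trips(n_rows: int) -> float:
    """IfxPy-style: each EXECUTE blocks until its response arrives,
    so every row pays a full round-trip."""
    return n_rows * (RTT + SERVICE)

def pipelined(n_rows: int) -> float:
    """informix-driver-style: send all N PDUs, then drain responses.
    In this model only one RTT is exposed; server work dominates."""
    return RTT + n_rows * SERVICE

if __name__ == "__main__":
    n = 10_000
    print(f"per-row:   {per_row_round_trips(n) * 1e3:.1f} ms")
    print(f"pipelined: {pipelined(n) * 1e3:.1f} ms")
```

The absolute numbers are made up; the point is that the per-row strategy scales with N × RTT while the pipelined one pays the RTT roughly once.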
```python
# Both drivers
cur.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    rows,  # list of 10_000 tuples
)
# informix-driver: 161 ms — 10k PDUs sent, then 10k responses drained
# IfxPy: 259 ms — 10k round-trips, each blocking on response
```
Containerized deployment
informix-driver ships as a 50 KB pure-Python wheel with zero runtime dependencies. Your Dockerfile is:
```dockerfile
FROM python:3.13-slim
RUN pip install informix-driver
```
IfxPy’s deployment surface is dramatically larger:
- 92 MB IBM OneDB Client tarball
- setuptools < 58 build pin
- LD_LIBRARY_PATH configuration for four directories
- libcrypt.so.1 (deprecated since 2018 — missing on Arch, Fedora 35+, RHEL 9)
- C compiler in the build image
For slim images, multi-stage builds, FaaS deployments, or anywhere a build toolchain in the runtime image is friction, informix-driver is the only reasonable option.
Modern Python
IfxPy currently works only on Python ≤ 3.11. The C extension breaks on 3.12+ (PyConfig changes, the removal of _PyImport_AcquireLock, etc.).
informix-driver works unmodified on 3.10, 3.11, 3.12, 3.13, and 3.14. We’ve kept a CI matrix on every minor version since 3.10 from the start.
informix-driver ships an async API:
```python
from informix_db import aio

async def main():
    pool = await aio.create_pool(...)
    async with pool.connection() as conn:
        cur = await conn.cursor()
        await cur.execute("SELECT ...")
        rows = await cur.fetchall()
```
IfxPy has no async support — every call blocks the event loop. Using IfxPy from FastAPI requires loop.run_in_executor() boilerplate, and the thread pool isn’t connection-aware, so you give up the natural fairness of an async pool.
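The boilerplate in question looks roughly like this. `blocking_query` is a placeholder standing in for any synchronous driver call chain, not the IfxPy API:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

# Threads to ship blocking driver calls onto; the executor knows
# nothing about which thread holds which connection.
_pool = ThreadPoolExecutor(max_workers=4)

def blocking_query(sql: str) -> list[tuple]:
    # Placeholder for a blocking driver call (connect/execute/fetch).
    return [(1,)]

async def fetch(sql: str) -> list[tuple]:
    loop = asyncio.get_running_loop()
    # Every blocking call must be wrapped like this, or it stalls
    # the entire event loop for its duration.
    return await loop.run_in_executor(_pool, blocking_query, sql)

if __name__ == "__main__":
    print(asyncio.run(fetch("SELECT 1")))
```

With a driver-native async pool none of this wrapping is needed, and queueing for connections happens in the pool rather than in an unaware thread pool.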
When IfxPy wins
Large analytical fetches
For queries pulling 10k+ rows where per-row decode cost dominates, IfxPy is currently 30–45% faster. The C-level fetch_tuple decoder runs at ~1.1 µs/row; our Python parse_tuple_payload is ~2.0 µs/row after Phase 39 (down from ~2.7 before). At 100k rows the wall-clock gap is ~50 ms — meaningful but not disqualifying.
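As a sanity check, the quoted per-row costs imply a decode-side gap of roughly 90 ms at 100k rows, a bit above the measured wall-clock gap, plausibly because decoding overlaps with socket reads. A sketch using the figures above:

```python
# Back-of-envelope check on the per-row figures quoted above.
ifxpy_decode = 1.1e-6  # s/row, IfxPy's C fetch_tuple
ours_decode = 2.0e-6   # s/row, parse_tuple_payload after Phase 39

rows = 100_000
decode_gap_ms = (ours_decode - ifxpy_decode) * rows * 1e3
print(f"decode-side gap at {rows:,} rows: {decode_gap_ms:.0f} ms")
```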
The gap is closing phase by phase:
| Phase | Bulk-fetch ratio vs IfxPy |
|---|---|
| Phase 36 | 2.40× slower |
| Phase 37 (per-column readers) | 2.10× slower |
| Phase 38 (codegen-inlined decoders) | 2.04× slower |
| Phase 39 (connection-scoped buffered reader) | 1.46× slower |
If you’re running analytical reports that pull millions of rows in a single SELECT and the per-row decode overhead is a measurable cost, IfxPy is meaningfully faster today. For most application workloads it isn’t a measurable cost.
Workloads built around CSDK extensions
Section titled “Workloads built around CSDK extensions”If your existing code uses IBM-specific cursor extensions (cursor.callproc with named parameters, IBM’s specific scrollable cursor semantics around last/prior/relative, cursor.set_chunk_size for fetch tuning), the migration to informix-driver is straightforward but not zero-cost. We support the core PEP 249 surface plus our own scrollable cursor API — see the migration guide.
Methodology
Benchmarks are pytest-benchmark fixtures in tests/benchmarks/compare/, run against the official icr.io/informix/informix-developer-database:15.0.1.0.3DE image on the same host, reached over loopback from the Python process.
Reported numbers are the median over 10+ rounds, with IQR included. Why median rather than mean: the first round of any run includes JIT warmup, page-cache misses, and a TCP slow-start round-trip, and the mean is contaminated by these one-shot costs in a way that misrepresents steady-state behavior.
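The median/IQR computation can be sketched with the stdlib. The round timings here are invented for illustration; note how the slow first round drags the mean up while barely moving the median:

```python
import statistics

# Hypothetical benchmark rounds in ms; the first round (14.9) models
# one-shot warmup cost.
rounds_ms = [14.9, 11.8, 11.7, 11.9, 11.6, 12.1, 11.7, 11.8, 12.0, 11.7]

median = statistics.median(rounds_ms)
q1, _, q3 = statistics.quantiles(rounds_ms, n=4)  # quartile cut points
iqr = q3 - q1
mean = statistics.mean(rounds_ms)

# The warmup outlier inflates the mean but not the median,
# which is exactly why the median is the reported statistic.
print(f"median={median} ms  IQR={iqr:.2f} ms  mean={mean:.2f} ms")
```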
IfxPy’s IQR on the 100k-row SELECT is ~21% (Docker→host loopback noise, plus the C extension’s allocation patterns). Our IQR is ~0.2%. The headline 1.46× ratio at 100k rows is partly that noise — a fair reading is “roughly 30–45% slower than IfxPy on large fetches”, with a slice of that range being measurement noise rather than decoder cost.
To reproduce:
```sh
git clone https://git.supported.systems/warehack.ing/informix-db
cd informix-db/tests/benchmarks/compare
make ifx-up
make compare
```
The Makefile handles the IfxPy install gauntlet (a Python ≤ 3.11 environment, setuptools < 58, the libcrypt.so.1 symlink, the OneDB CSDK download, the four LD_LIBRARY_PATH exports) so you don’t have to work through it by hand.
Summary
Use informix-driver when:
- You’re writing new code in Python ≥ 3.10
- Your workload is bulk-insert / ETL / log-shipping
- You want async / FastAPI integration
- You’re deploying in containers or to Python environments where build toolchains are friction
- Your platform doesn’t have libcrypt.so.1
Use IfxPy when:
- You have an existing IfxPy codebase
- You’re running large analytical SELECTs where the decode-side gap matters
- You’re constrained to Python ≤ 3.11 anyway
For everything else — the cost-benefit favors pip install informix-driver.