WTF did you build this for?

The existing tools were not my style.

Every Informix driver in any language — IfxPy, the legacy informixdb, ODBC bridges, JPype/JDBC, Perl DBD::Informix — wraps either IBM’s C Client SDK or the JDBC JAR. To our knowledge informix-driver is the first pure-socket Informix driver in any language.

The IBM Informix Client SDK (CSDK), now packaged as part of OneDB Client, is a 92 MB tarball with a non-trivial install gauntlet:

  • Python ≤ 3.11 (IfxPy is broken on 3.12+)
  • setuptools < 58 (legacy build system)
  • Permissive CFLAGS for the C extension build
  • Manual download of the 92 MB ODBC tarball
  • Four LD_LIBRARY_PATH directories
  • libcrypt.so.1 — deprecated in 2018, missing on Arch, Fedora 35+, RHEL 9

In containerized deployments, ETL pipelines, FastAPI services, and anywhere else Python lives, that friction compounds. informix-driver's install is pip install informix-driver; the import name is informix_db (the distribution name sidesteps PyPI's 2008-vintage informixdb package, while the import name is what you'd expect). The wheel is ~50 KB, with zero runtime dependencies.

informix-driver opens a TCP socket to an Informix server's SQLI listener and speaks the wire protocol directly: the same protocol IBM's JDBC driver uses, and the same protocol the CSDK speaks under the hood. No native code is involved at any point.

The wire protocol was reverse-engineered through three sources:

  1. Decompiled IBM JDBC driver (com.informix.jdbc.IfxConnection and friends), used as a clean-room reference for PDU shapes and protocol semantics.
  2. Annotated socat captures of real client/server traffic against the IBM Informix Developer Edition Docker image.
  3. Differential testing against IfxPy — every codec path is tested against the C driver’s behavior on the same data.
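Speaking a binary protocol over a raw socket ultimately comes down to framing PDUs on the way out and parsing them on the way back. A minimal sketch of that pattern follows; the header layout here (opcode plus length prefix) is invented for illustration and is not the actual SQLI PDU format the driver reverse-engineered:

```python
import struct

# Illustrative length-prefixed PDU framing. The field names and sizes are
# invented for this sketch; the real SQLI wire format is not reproduced here.
def frame_pdu(opcode: int, payload: bytes) -> bytes:
    # 2-byte big-endian opcode, 4-byte big-endian payload length, then payload
    return struct.pack(">HI", opcode, len(payload)) + payload

def parse_pdu(buf: bytes) -> tuple[int, bytes]:
    opcode, length = struct.unpack_from(">HI", buf, 0)
    return opcode, buf[6:6 + length]

wire = frame_pdu(0x0001, b"select 1")
assert parse_pdu(wire) == (0x0001, b"select 1")
```

The framing is what makes differential testing tractable: any codec path can be exercised against captured byte sequences without a live server.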

The result is a PEP 249-compliant driver with a sync API, an async API (FastAPI / asyncio compatible), a connection pool, TLS support, smart-LOB read/write, scrollable cursors, fast-path stored procedure invocation, and bulk-insert / bulk-fetch performance within ~10–60% of the C driver depending on workload.
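PEP 249 compliance means calling code uses the familiar connect / cursor / execute / fetch surface. The sketch below shows that shape; the connect() arguments are invented for illustration, and a tiny in-memory stub stands in for informix_db and a live server so the snippet runs standalone:

```python
# Sketch of the PEP 249 call pattern. The stub classes below stand in for
# informix_db.connect(...) and a live Informix server; only the shape of the
# API is the point.
class _StubCursor:
    def execute(self, sql, params=()):      # PEP 249: execute(operation, parameters)
        self._rows = [(1, "alice"), (2, "bob")]
    def fetchall(self):
        return self._rows
    def close(self):
        pass

class _StubConnection:
    def cursor(self):
        return _StubCursor()
    def close(self):
        pass

def connect(**kwargs):                      # stand-in for informix_db.connect(...)
    return _StubConnection()

conn = connect(host="ifx.example.com", port=9088, database="stores")
cur = conn.cursor()
cur.execute("SELECT id, name FROM customers WHERE id <= ?", (2,))
rows = cur.fetchall()
assert rows == [(1, "alice"), (2, "bob")]
cur.close()
conn.close()
```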

The places where informix-driver is unambiguously the right choice:

  • ETL and bulk-load pipelines. Pipelined executemany (Phase 33) is 1.6× faster than IfxPy at scale because every BIND+EXECUTE PDU goes out before any responses are drained. IfxPy still pays one round-trip per IfxPy.execute(stmt, tuple) call.
  • Container deployments. The 50 KB wheel and absent native deps mean a slim base image works. No multi-stage build to compile the CSDK.
  • Modern Python. Works on 3.10 through 3.14 unmodified. IfxPy hasn’t shipped 3.12 wheels.
  • Async / FastAPI. Native async support via thread-pool wrapping. IfxPy is fully synchronous; using it from FastAPI requires run_in_executor boilerplate and gives up the connection pool’s natural async semantics.
  • Anywhere libcrypt.so.1 is missing. Modern Linux distributions ship libcrypt.so.2. IfxPy refuses to load without libcrypt.so.1. We don’t link against either.
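The pipelining win in the first bullet comes from amortizing round-trip latency: every BIND+EXECUTE goes out before any response is drained, so the batch pays one round trip instead of one per row. A toy cost model of the effect (the latency numbers are invented, and this is not the driver's actual Phase 33 code path):

```python
# Toy model: why pipelined executemany beats one round trip per execute.
# RTT_MS and SEND_MS are invented numbers, not measurements.
RTT_MS = 0.5    # one network round trip
SEND_MS = 0.01  # cost to serialize and send one BIND+EXECUTE PDU

def naive_executemany(n_rows: int) -> float:
    # send one PDU, wait for its response, repeat: n_rows round trips
    return n_rows * (SEND_MS + RTT_MS)

def pipelined_executemany(n_rows: int) -> float:
    # send all PDUs back-to-back, then drain all responses: one round trip
    return n_rows * SEND_MS + RTT_MS

assert pipelined_executemany(10_000) < naive_executemany(10_000)
```

In this model the naive path's cost grows with n_rows times the round trip, while the pipelined path's round-trip cost is a constant; that is the asymmetry the 1.6× figure reflects.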

Honesty matters here:

  • Large analytical fetches. IfxPy’s C-level fetch_tuple decoder is faster than our Python parse_tuple_payload (~1.1 µs/row vs ~2.0 µs/row after Phase 39). For workloads pulling 10k+ rows in a single SELECT where the per-row decode cost dominates, IfxPy is currently 5–15% faster. The gap is shrinking phase by phase.
  • Workloads built around the CSDK. If your existing code already uses IfxPy idioms (IfxPyDbi.connect_pooled, IBM’s specific cursor extensions), the migration to informix-driver is straightforward but not zero-cost.
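The per-row decode gap above is generic interpreter overhead: each fetched row pays for Python-level byte slicing and object construction, which a C decoder does natively. An illustrative fixed-layout row decoder showing where that cost lives; the layout is invented and is not the driver's actual parse_tuple_payload format:

```python
import struct

# Illustrative row decoder with an invented fixed layout:
# 4-byte int id, 10-byte padded name, 8-byte float balance.
ROW = struct.Struct(">i10sd")

def decode_row(payload: bytes) -> tuple[int, str, float]:
    # Per-row Python work: unpack, strip padding, build three objects.
    row_id, raw_name, balance = ROW.unpack(payload)
    return row_id, raw_name.rstrip(b"\x00").decode("ascii"), balance

payload = ROW.pack(7, b"alice".ljust(10, b"\x00"), 12.5)
assert decode_row(payload) == (7, "alice", 12.5)
```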

The honest summary table from the comparison page:

| Workload | Winner | Margin |
| --- | --- | --- |
| Bulk insert (executemany 10k–100k rows) | informix-driver | 1.6× faster |
| Bulk SELECT (10k–100k rows) | IfxPy | 1.05–1.15× faster |
| Single-row queries | tied | within noise |
| Cold connect | tied | within noise |
| Containerized deployment | informix-driver | no contest |
| Python 3.12+ | informix-driver | only option |

Every finding from a system-wide failure-mode audit (data correctness, wire safety, resource leaks, concurrency, async cancellation) has been addressed:

  • Pool no longer returns connections with open transactions
  • Per-connection wire lock prevents PDU interleaving from accidental sharing
  • Async cancellation cannot leak running workers onto recycled connections
  • _raise_sq_err no longer masks wire desync via bare-except
  • Cursor finalizers release server-side resources on mid-fetch raise
  • 5 medium-severity hardening items resolved
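The per-connection wire lock is the classic "serialize writers to a shared byte stream" pattern: every complete PDU write happens under one lock, so two threads that accidentally share a connection can never interleave bytes mid-frame. A minimal sketch with invented names, using an in-memory buffer in place of the real socket:

```python
import threading

# Sketch of a per-connection wire lock. A bytearray stands in for the socket;
# the byte-at-a-time loop makes interleaving likely if the lock were removed.
class Wire:
    def __init__(self):
        self._buf = bytearray()
        self._lock = threading.Lock()   # the "wire lock"

    def send_pdu(self, pdu: bytes):
        with self._lock:                # whole PDU goes out atomically
            for b in pdu:
                self._buf.append(b)

wire = Wire()
threads = [threading.Thread(target=wire.send_pdu, args=(bytes([marker]) * 100,))
           for marker in (0xAA, 0xBB)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With the lock held per PDU, each 100-byte frame arrives contiguous.
data = bytes(wire._buf)
assert data in (bytes([0xAA]) * 100 + bytes([0xBB]) * 100,
                bytes([0xBB]) * 100 + bytes([0xAA]) * 100)
```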

0 critical, 0 high, 0 medium audit findings remain. Every architectural change went through a Margaret Hamilton-style review focused on silent-failure modes, recovery paths, and documented invariants. Each documented invariant is paired with either a runtime guard or a CI tripwire test.

300+ tests across unit / integration / benchmark suites. Integration tests run against the official IBM Informix Developer Edition Docker image (15.0.1.0.3DE).