Optimize bulk SELECT
The connection-scoped buffered reader (Phase 39) is enabled by default as of 2026.05.05.12. For most workloads you don’t need to touch anything — the bulk-fetch gap against IfxPy is now ~5–15% rather than ~140%.
For the architectural rationale, see The buffered reader.
Disabling the buffered reader
```sh
IFX_BUFFERED_READER=0 python my_app.py
```

The flag is read once at connection construction. To flip behavior on existing connections, close and reopen the pool.
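A minimal sketch of that read-once behavior, with a hypothetical `Connection` class and attribute names (the real driver internals may differ):

```python
import os

os.environ.pop("IFX_BUFFERED_READER", None)  # start from the default

class Connection:
    """Illustrative only: the class name and internals are assumptions."""

    def __init__(self):
        # The env var is consulted exactly once, at construction time;
        # later changes to the environment do not affect this object.
        self._buffered_reader = os.environ.get("IFX_BUFFERED_READER", "1") != "0"

conn = Connection()                       # buffered reader enabled
os.environ["IFX_BUFFERED_READER"] = "0"   # too late for `conn`...
print(conn._buffered_reader)              # True
print(Connection()._buffered_reader)      # ...but a new connection sees it: False
```

Because the flag is latched per connection, a pooled application has to drain and recreate its pool for the change to take effect.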
There’s no expected reason to disable it in production. The flag exists so you can A/B-measure your own workload and so we can debug regressions if they appear.
A/B-measuring your workload
```sh
# Baseline: no buffered reader
IFX_BUFFERED_READER=0 python -m mybench

# With buffered reader
IFX_BUFFERED_READER=1 python -m mybench
```

For typical bulk-SELECT workloads expect a 30–40% wall-time reduction. For workloads dominated by single-row queries the impact is small (small queries are RTT-bound, not framing-bound).
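If you want to quantify the difference beyond eyeballing two wall clocks, a small harness can compute the reduction. `time_workload` and `reduction` here are illustrative helpers, not part of the driver or of `mybench`:

```python
import time

def time_workload(workload, repeats=3):
    """Run the workload several times and return the best wall time."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        workload()
        best = min(best, time.perf_counter() - start)
    return best

def reduction(baseline_s, buffered_s):
    """Wall-time reduction of the buffered run relative to baseline, in %."""
    return 100.0 * (baseline_s - buffered_s) / baseline_s

# e.g. a 10.0 s baseline dropping to 6.5 s is a 35% reduction,
# inside the 30–40% range quoted above.
print(round(reduction(10.0, 6.5)))  # 35
```

Taking the best of several repeats (rather than the mean) is the usual way to suppress warm-up and scheduling noise when the two runs are compared.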
When the speedup is largest
The speedup is largest for workloads where every column read makes ~4–5 small recv() calls: tabular data, narrow rows, large row counts. The buffered reader replaces N small recv() calls with one recv(64K) per ~64 KB of incoming data.
| Workload shape | Speedup |
|---|---|
| Wide row, single fetch (1 row × 100 cols) | minimal |
| Narrow row, large fetch (100k rows × 5 cols) | 30–40% |
| executemany response drain (1k inserts) | 25–30% |
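To make the mechanism concrete, the core loop can be sketched as a thin buffer over a socket. `BufferedReader`, `read_exact`, and the 64 KB chunk size are illustrative, not the driver's actual API:

```python
class BufferedReader:
    """Serve many small reads out of one large recv() per ~64 KB."""

    CHUNK = 64 * 1024

    def __init__(self, sock):
        self._sock = sock
        self._buf = bytearray()
        self._pos = 0

    def read_exact(self, n):
        # Refill from the socket only when the buffer runs dry, so many
        # small field reads collapse into one recv() per ~64 KB of data.
        while len(self._buf) - self._pos < n:
            chunk = self._sock.recv(self.CHUNK)
            if not chunk:
                raise EOFError("connection closed mid-read")
            self._buf += chunk
        out = bytes(self._buf[self._pos:self._pos + n])
        self._pos += n
        return out
```

Each small field read now hits the in-memory buffer; the socket is touched only when the buffer runs dry, which is what collapses ~4–5 recv() calls per column into one recv() per ~64 KB.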
Memory profile
The buffer is per-connection, sized to grow up to the largest single PDU it sees. Typical steady-state: 64 KB to a few hundred KB per connection. The buffer is freed when the connection closes; for long-lived pool connections the cost is amortized over the connection's lifetime.
If you’re running 10,000 connections at idle, the buffer cost is ~1–2 GB across the fleet. For typical pool sizes (10–50 connections) it’s ~1–10 MB total.
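The fleet-level estimate is straight multiplication; as a sanity check, assuming a steady-state of roughly 150 KB per connection (a midpoint of the 64 KB-to-a-few-hundred-KB range):

```python
def fleet_buffer_bytes(connections, per_conn_kib=150):
    """Total steady-state read-buffer memory across a connection fleet."""
    return connections * per_conn_kib * 1024

# 10,000 idle connections at ~150 KiB each:
print(fleet_buffer_bytes(10_000) / 2**30)  # ~1.4 GiB, i.e. the ~1–2 GB above
# A typical 50-connection pool:
print(fleet_buffer_bytes(50) / 2**20)      # ~7.3 MiB, inside the ~1–10 MB range
```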