
Configure a Live Trading Node

Set up a TradingNode for live market connectivity. For background on live trading architecture and reconciliation, see the Live trading concept guide.

Jupyter notebooks not recommended for live trading

Do not run live trading nodes in Jupyter notebooks. Event loop conflicts and operational risks make them unsuitable:

  • Jupyter runs its own asyncio event loop, which conflicts with TradingNode's event loop.
  • Workarounds like nest_asyncio are not production-grade.
  • Cells can run out of order, kernels can crash, and state can disappear.
  • Notebooks lack the logging, monitoring, and graceful shutdown needed for production trading.

Use Jupyter for backtesting, analysis, and experimentation. For live trading, run nodes as standalone Python scripts or services.

One TradingNode per process

Running multiple TradingNode instances concurrently in the same process is not supported due to global singleton state. Add multiple strategies to a single node, or run additional nodes in separate processes for parallel execution.

See Processes and threads for details.

Do not block the event loop

User code on the event loop thread (strategy callbacks, actor handlers, on_event methods) must return quickly. This applies to both Python and Rust. Blocking operations like model inference, heavy calculations, or synchronous I/O cause missed fills, stale data, and delayed order submissions. Offload long-running work to an executor or a separate thread/process.
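For example, a slow computation can be handed to a thread pool so the callback returns immediately. The following is a minimal stdlib sketch; `heavy_model_inference` and `on_quote` are hypothetical stand-ins, not NautilusTrader APIs:

```python
import concurrent.futures
import time

# Hypothetical stand-in for slow work (model inference, heavy calculation).
def heavy_model_inference(price: float) -> float:
    time.sleep(0.05)  # simulates a blocking computation
    return price * 1.01

# Create the pool once at startup, not per callback.
executor = concurrent.futures.ThreadPoolExecutor(max_workers=2)

def on_quote(price: float) -> None:
    # Submit the work and return immediately; the event loop stays responsive.
    future = executor.submit(heavy_model_inference, price)
    future.add_done_callback(
        lambda f: print(f"inference result: {f.result():.2f}")
    )
```

The same pattern applies inside a strategy: submit from the handler, and deliver the result back to the event loop thread rather than blocking on it.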

Platform differences

Windows signal handling differs from Unix-like systems. If you are running on Windows, please read the note on Windows signal handling for guidance on graceful shutdown behavior and Ctrl+C (SIGINT) support.

TradingNodeConfig

TradingNodeConfig inherits from NautilusKernelConfig and adds live-specific options:

from nautilus_trader.adapters.binance.config import BinanceDataClientConfig
from nautilus_trader.adapters.binance.config import BinanceExecClientConfig
from nautilus_trader.config import CacheConfig
from nautilus_trader.config import LiveDataEngineConfig
from nautilus_trader.config import LiveExecEngineConfig
from nautilus_trader.config import LiveRiskEngineConfig
from nautilus_trader.config import MessageBusConfig
from nautilus_trader.config import PortfolioConfig
from nautilus_trader.config import TradingNodeConfig

config = TradingNodeConfig(
    trader_id="MyTrader-001",

    # Component configurations
    cache=CacheConfig(),
    message_bus=MessageBusConfig(),
    data_engine=LiveDataEngineConfig(),
    risk_engine=LiveRiskEngineConfig(),
    exec_engine=LiveExecEngineConfig(),
    portfolio=PortfolioConfig(),

    # Client configurations
    data_clients={
        "BINANCE": BinanceDataClientConfig(),
    },
    exec_clients={
        "BINANCE": BinanceExecClientConfig(),
    },
)

Core configuration parameters

| Setting | Default | Description |
|---|---|---|
| trader_id | "TRADER-001" | Unique trader identifier (name-tag format). |
| instance_id | None | Optional unique instance identifier. |
| timeout_connection | 30.0 | Connection timeout in seconds. |
| timeout_reconciliation | 10.0 | Reconciliation timeout in seconds. |
| timeout_portfolio | 10.0 | Portfolio initialization timeout in seconds. |
| timeout_disconnection | 10.0 | Disconnection timeout in seconds. |
| timeout_post_stop | 5.0 | Post-stop cleanup timeout in seconds. |

Cache database configuration

from nautilus_trader.config import CacheConfig
from nautilus_trader.config import DatabaseConfig

cache_config = CacheConfig(
    database=DatabaseConfig(
        host="localhost",
        port=6379,
        username="nautilus",
        password="pass",
        timeout=2.0,
    ),
    encoding="msgpack",  # or "json"
    timestamps_as_iso8601=True,
    buffer_interval_ms=100,
    flush_on_start=False,
)

MessageBus configuration

from nautilus_trader.config import DatabaseConfig
from nautilus_trader.config import MessageBusConfig
from nautilus_trader.model.data import QuoteTick
from nautilus_trader.model.data import TradeTick

message_bus_config = MessageBusConfig(
    database=DatabaseConfig(timeout=2),
    timestamps_as_iso8601=True,
    use_instance_id=False,
    types_filter=[QuoteTick, TradeTick],  # Filter specific message types
    stream_per_topic=False,
    autotrim_mins=30,  # Automatic message trimming
    heartbeat_interval_secs=1,
)

Multi-venue configuration

A node can connect to multiple venues. This example configures both spot and futures markets for Binance:

from nautilus_trader.adapters.binance.common.enums import BinanceAccountType
from nautilus_trader.adapters.binance.config import BinanceDataClientConfig
from nautilus_trader.adapters.binance.config import BinanceExecClientConfig
from nautilus_trader.config import TradingNodeConfig

config = TradingNodeConfig(
    trader_id="MultiVenue-001",

    # Multiple data clients for different market types
    data_clients={
        "BINANCE_SPOT": BinanceDataClientConfig(
            account_type=BinanceAccountType.SPOT,
            testnet=False,
        ),
        "BINANCE_FUTURES": BinanceDataClientConfig(
            account_type=BinanceAccountType.USDT_FUTURES,
            testnet=False,
        ),
    },

    # Corresponding execution clients
    exec_clients={
        "BINANCE_SPOT": BinanceExecClientConfig(
            account_type=BinanceAccountType.SPOT,
            testnet=False,
        ),
        "BINANCE_FUTURES": BinanceExecClientConfig(
            account_type=BinanceAccountType.USDT_FUTURES,
            testnet=False,
        ),
    },
)

ExecutionEngine configuration

LiveExecEngineConfig controls order processing, execution events, and venue reconciliation. For full details see the API Reference.

Reconciliation

Recovers missed order and position events to keep system state consistent with the venue.

| Setting | Default | Description |
|---|---|---|
| reconciliation | True | Activate reconciliation at startup to align internal state with the venue. |
| reconciliation_lookback_mins | None | How far back (minutes) to request past events for reconciling uncached state. |
| reconciliation_instrument_ids | None | Include list of instrument IDs to reconcile. |
| filtered_client_order_ids | None | Client order IDs to skip during reconciliation (for venue-side duplicates). |

See Execution reconciliation for details.
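As a minimal sketch, the startup settings above map onto LiveExecEngineConfig like this (the values are illustrative, not tuned recommendations):

```python
from nautilus_trader.config import LiveExecEngineConfig

exec_engine_config = LiveExecEngineConfig(
    reconciliation=True,                # align internal state with the venue at startup
    reconciliation_lookback_mins=1440,  # request up to 24 hours of past events
)
```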

Order filtering

Controls which order events and reports the system processes, preventing conflicts across trading nodes.

| Setting | Default | Description |
|---|---|---|
| filter_unclaimed_external_orders | False | Drop unclaimed external orders so they do not affect the strategy. |
| filter_position_reports | False | Drop position status reports. Useful when multiple nodes trade one account. |

Order tagging behavior

Reconciliation tags orders by origin:

  • VENUE tag: external orders discovered at the venue (placed outside this system).
  • RECONCILIATION tag: synthetic orders generated to align position discrepancies.

When filter_unclaimed_external_orders is enabled, only VENUE-tagged orders are filtered. RECONCILIATION-tagged orders are never filtered, so position alignment always succeeds.

Continuous reconciliation

A background loop starts after startup reconciliation completes. It:

  • Monitors in-flight orders for delays exceeding a configured threshold.
  • Reconciles open orders with the venue at configurable intervals.
  • Audits internal own order books against the venue's public books.

The loop waits for startup reconciliation to finish before starting periodic checks. The reconciliation_startup_delay_secs parameter adds a further delay after startup reconciliation completes, giving the system time to stabilize.

When retries are exhausted, the engine resolves the order as follows:

In-flight order timeout resolution (venue does not respond after max retries):

| Current status | Resolved to | Rationale |
|---|---|---|
| SUBMITTED | REJECTED | No confirmation received from venue. |
| PENDING_UPDATE | CANCELED | Modification remains unacknowledged. |
| PENDING_CANCEL | CANCELED | Venue never confirmed the cancellation. |

Order consistency checks (when cache state differs from venue state):

| Cache status | Venue status | Resolution | Rationale |
|---|---|---|---|
| SUBMITTED | Not found | REJECTED | Order never confirmed by venue (e.g., lost during a network error). |
| ACCEPTED | Not found | REJECTED | Order does not exist at the venue; likely never successfully placed. |
| ACCEPTED | CANCELED | CANCELED | Venue canceled the order (user action or venue-initiated). |
| ACCEPTED | EXPIRED | EXPIRED | Order reached GTD expiration at the venue. |
| ACCEPTED | REJECTED | REJECTED | Venue rejected after initial acceptance (rare but possible). |
| PARTIALLY_FILLED | CANCELED | CANCELED | Order canceled at the venue with fills preserved. |
| PARTIALLY_FILLED | Not found | CANCELED | Order does not exist but had fills (reconciles fill history). |

Reconciliation caveats:

  • "Not found" resolutions only apply in full-history mode (open_check_open_only=False). Open-only mode (the default) skips these checks because venue "open orders" endpoints exclude closed orders by design, making it impossible to distinguish missing orders from recently closed ones.
  • Recent order protection: the engine skips reconciliation for orders whose last event falls within the open_check_threshold_ms window (default 5s). This prevents false positives from race conditions where the venue is still processing.
  • Targeted query safeguard: before marking an order REJECTED or CANCELED when "not found", the engine issues a single-order query to the venue. This catches false negatives from bulk query limitations or timing delays.
  • FILLED orders that are "not found" at the venue are silently ignored. Venues commonly drop completed orders from their query results.

Retry coordination and lookback behavior

The in-flight loop and the open-order loop share a single retry counter (_recon_check_retries), bounded by inflight_check_retries and open_check_missing_retries respectively; the stricter limit wins. Sharing one counter avoids duplicate venue queries for the same order state.

When the open-order loop exhausts retries, the engine issues one targeted GenerateOrderStatusReport probe before applying a terminal state. If the venue returns the order, reconciliation proceeds and the retry counter resets.

Single-order query protection: the engine caps single-order queries per cycle via max_single_order_queries_per_cycle (default: 10). Remaining orders are deferred to the next cycle. A configurable delay (single_order_query_delay_ms, default: 100ms) spaces out consecutive queries to avoid rate limits. This handles bulk query failures across hundreds of orders without overwhelming the venue API.

Orders older than open_check_lookback_mins rely on this targeted probe. Keep the lookback generous for venues with short history windows. Increase open_check_threshold_ms if venue timestamps lag the local clock, so recently updated orders are not marked missing prematurely.

| Setting | Default | Description |
|---|---|---|
| inflight_check_interval_ms | 2,000 ms | How often to check in-flight order status. Set to 0 to disable. |
| inflight_check_threshold_ms | 5,000 ms | Time before an in-flight order triggers a venue status check. Lower if colocated. |
| inflight_check_retries | 5 retries | Retry attempts to verify an in-flight order with the venue. |
| open_check_interval_secs | None | How often (seconds) to check open orders at the venue. None or 0.0 disables. Recommended: 5-10 s. |
| open_check_open_only | True | When true, query only open orders; when false, fetch full history (resource-intensive). |
| open_check_lookback_mins | 60 min | Lookback window (minutes) for order status polling; only orders modified within this window are polled. |
| open_check_threshold_ms | 5,000 ms | Minimum time since the last cached event before acting on venue discrepancies. |
| open_check_missing_retries | 5 retries | Max retries before resolving an order open in the cache but not found at the venue. |
| max_single_order_queries_per_cycle | 10 | Cap on single-order queries per cycle. Prevents rate-limit exhaustion. |
| single_order_query_delay_ms | 100 ms | Delay (ms) between single-order queries to avoid rate limits. |
| reconciliation_startup_delay_secs | 10.0 s | Delay (seconds) after startup reconciliation before continuous checks begin. |
| own_books_audit_interval_secs | None | Interval (seconds) between audits of own order books against public books. |
| position_check_interval_secs | None | Interval (seconds) between position consistency checks. On discrepancy, queries for missing fills. None disables. Recommended: 30-60 s. |
| position_check_lookback_mins | 60 min | Lookback window (minutes) for querying fill reports on position discrepancy. |
| position_check_threshold_ms | 5,000 ms | Minimum time since last local activity before acting on position discrepancies. |
| position_check_retries | 3 retries | Max attempts per instrument before the engine stops retrying that discrepancy. Once exceeded, an error is logged and the discrepancy is no longer actively reconciled until it clears. |

  • open_check_lookback_mins: do not reduce below 60 minutes. A short window triggers false "missing order" resolutions because orders fall outside the query range.
  • reconciliation_startup_delay_secs: do not reduce below 10 seconds in production. The delay lets the system stabilize after startup reconciliation before continuous checks begin.
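A minimal sketch combining the recommended values above (illustrative, not a tuned production configuration):

```python
from nautilus_trader.config import LiveExecEngineConfig

exec_engine_config = LiveExecEngineConfig(
    open_check_interval_secs=5.0,            # poll open orders every 5 seconds
    open_check_lookback_mins=60,             # keep the default lookback window
    reconciliation_startup_delay_secs=10.0,  # let the system stabilize first
    position_check_interval_secs=30.0,       # periodic position consistency checks
)
```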

Additional options

| Setting | Default | Description |
|---|---|---|
| allow_overfills | False | Allow fills exceeding order quantity (logs a warning). Useful when reconciliation races fills. |
| generate_missing_orders | True | Generate LIMIT orders during reconciliation to align position discrepancies (strategy EXTERNAL, tag RECONCILIATION). |
| snapshot_orders | False | Take order snapshots on order events. |
| snapshot_positions | False | Take position snapshots on position events. |
| snapshot_positions_interval_secs | None | Interval (seconds) between position snapshots. |
| debug | False | Enable debug logging for execution. |

Memory management

Periodically purges closed orders, closed positions, and account events from the in-memory cache, keeping memory bounded during long-running or HFT sessions.

| Setting | Default | Description |
|---|---|---|
| purge_closed_orders_interval_mins | None | How often (minutes) to purge closed orders from memory. Recommended: 10-15 min. |
| purge_closed_orders_buffer_mins | None | How long (minutes) an order must be closed before purging. Recommended: 60 min. |
| purge_closed_positions_interval_mins | None | How often (minutes) to purge closed positions from memory. Recommended: 10-15 min. |
| purge_closed_positions_buffer_mins | None | How long (minutes) a position must be closed before purging. Recommended: 60 min. |
| purge_account_events_interval_mins | None | How often (minutes) to purge account events from memory. Recommended: 10-15 min. |
| purge_account_events_lookback_mins | None | How old (minutes) an account event must be before purging. Recommended: 60 min. |
| purge_from_database | False | Also delete from the backing database (Redis/PostgreSQL). Use with caution. |

Setting an interval enables the purge loop; leaving it unset disables scheduling and deletion. Database records are unaffected unless purge_from_database is true. Each loop delegates to the cache APIs described in Cache.
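As a sketch, enabling the purge loops with the recommended values looks like this (values illustrative):

```python
from nautilus_trader.config import LiveExecEngineConfig

exec_engine_config = LiveExecEngineConfig(
    purge_closed_orders_interval_mins=15,     # purge loop cadence
    purge_closed_orders_buffer_mins=60,       # keep orders in memory 1h after close
    purge_closed_positions_interval_mins=15,
    purge_closed_positions_buffer_mins=60,
    purge_from_database=False,                # leave Redis/PostgreSQL records intact
)
```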

Queue management

| Setting | Default | Description |
|---|---|---|
| qsize | 100,000 | Size of internal queue buffers. |
| graceful_shutdown_on_exception | False | Gracefully shut down on unexpected queue processing exceptions (not user code). |

Strategy configuration

For a complete parameter list see the StrategyConfig API Reference.

Identification

| Setting | Default | Description |
|---|---|---|
| strategy_id | None | Unique strategy identifier. |
| order_id_tag | None | Unique tag appended to this strategy's order IDs. |

Order management

| Setting | Default | Description |
|---|---|---|
| oms_type | None | OMS type for position ID and order processing. |
| use_uuid_client_order_ids | False | Use UUID4 values for client order IDs. |
| external_order_claims | None | Instrument IDs whose external orders this strategy claims. |
| manage_contingent_orders | False | Automatically manage OTO, OCO, and OUO contingent orders. |
| manage_gtd_expiry | False | Manage GTD expirations for orders. |
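As a sketch, these settings are passed through a StrategyConfig; the identifier values below are illustrative:

```python
from nautilus_trader.config import StrategyConfig

strategy_config = StrategyConfig(
    strategy_id="MyStrategy-001",  # hypothetical strategy identifier
    order_id_tag="001",            # appended to this strategy's order IDs
    manage_gtd_expiry=True,        # let the strategy manage GTD expirations
)
```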

Windows signal handling

On Windows, asyncio event loops do not implement loop.add_signal_handler, so Unix-style signal integration is unavailable. The legacy TradingNode therefore does not receive OS signals via asyncio on Windows and will not stop gracefully unless you intervene. Use Ctrl+C (SIGINT) handling or programmatic shutdown; SIGTERM parity is not expected on Windows.

Recommended approaches:

  • Wrap run with try/except KeyboardInterrupt and call node.stop() then node.dispose(). Ctrl+C raises KeyboardInterrupt in the main thread, giving you a clean teardown path.
  • Publish a ShutdownSystem command programmatically (or call shutdown_system(...) from an actor/component) to trigger the same shutdown path.

The "inflight check loop task still pending" message appears because the normal graceful shutdown path is not triggered. This is tracked as #2785.

The v2 LiveNode already handles Ctrl+C via tokio::signal::ctrl_c() and a Python SIGINT bridge, so the runner and its tasks shut down cleanly.

Example pattern for Windows:

try:
    node.run()
except KeyboardInterrupt:
    pass
finally:
    try:
        node.stop()
    finally:
        node.dispose()
