Wine-NSPA – Client Scheduler Architecture

This page documents the client-side scheduler hosts and the current consumers routed through them.

Table of Contents

  1. Overview
  2. Thread model
  3. API surface
  4. Current consumers
  5. Validation and current controls
  6. Relationship to the rest of Wine-NSPA
  7. References

1. Overview

The architectural prerequisite was upstream spawn-main: the Unix bootstrap thread no longer becomes the application’s Win32 main thread. Instead, the bootstrap thread parks in sched_run() and becomes a per-process scheduler host, while the app main thread is created separately and continues through normal Win32 startup.

On top of that split, Wine-NSPA layers a client-side scheduler substrate.

The purpose is not to replace wineserver dispatch. Gamma remains wineserver-side. This scheduler is the client-process sidecar used to host small helper loops and timer dispatchers without adding more dedicated helper threads per subsystem.

Per-process thread model after spawn-main:

  - spawn-main separates the bootstrap scheduler host from the Win32 app main thread.
  - The application main thread runs Win32 startup and normal user code; it no longer doubles as the Unix bootstrap loop.
  - `wine-sched` is the default-class scheduler thread: SCHED_OTHER by design, hosting poll / timer / async / call work; always present after process bootstrap.
  - `wine-sched-rt` is the lazy RT-class scheduler thread: spawned on the first RT-class registration, SCHED_FIFO at `NSPA_RT_PRIO - 1`, RT-class work only.
  - Load-bearing invariant: the sched host stays separate from the app main thread, and gamma and wineserver remain separate server-side machinery.

2. Thread model

The thread model has two classes.

Class    Thread name     Spawn policy                            Scheduler policy                Purpose
Default  wine-sched      always present after spawn-main         SCHED_OTHER                     general poll/timer/async/call hosting
RT       wine-sched-rt   lazy, first RT-class registration only  SCHED_FIFO at NSPA_RT_PRIO - 1  precision timer consumers that used to own dedicated RT helper threads

Two details matter:

  - The default host is unconditional: wine-sched exists in every process after spawn-main, so default-class consumers never need a spawn path of their own.
  - The RT host is lazy: wine-sched-rt is created only on the first RT-class registration, so processes with no RT consumers never pay for an extra SCHED_FIFO thread.

The scheduler implementation itself uses:

  - Public entry points: `ntdll_sched_register_poll`, `ntdll_sched_register_timer`, `ntdll_sched_async`, `ntdll_sched_call`, and `ntdll_sched_cancel`.
  - Generation-tagged handles, making cancel ABA-safe across instances.
  - Self-call detection: a call issued from the sched thread itself runs inline.
  - A default instance hosting poll users and timer users, guarded by a PI mutex with a non-blocking wake pipe.
  - An RT instance exposing the same API on a separate sched instance, lazy-spawned only when needed.
  - A `wine-sched` loop built on poll(), dispatching callbacks outside producer locks; the `wine-sched-rt` loop shares the same dispatch core for RT-only consumers.
  - Cancel and wake discipline: generation-checked cancel avoids stale-handle ABA, and non-blocking wake writes avoid producer-side deadlock.
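The generation-tagged cancel discipline above can be sketched as follows. The packing of `sched_handle_t`, the slot table, and all function names here are illustrative assumptions, not the in-tree layout; the point is only how a generation check defeats the ABA case.

```c
/* Sketch: generation-tagged handles for ABA-safe cancel.
   Layout is illustrative, not the in-tree one. */
#include <stdint.h>

#define SCHED_SLOTS 64

typedef uint64_t sched_handle_t;

struct sched_slot
{
    uint32_t generation;   /* bumped every time the slot is reused */
    int      in_use;
};

static struct sched_slot slots[SCHED_SLOTS];

/* pack slot index + current generation into an opaque handle */
static sched_handle_t make_handle( uint32_t idx )
{
    return ((uint64_t)slots[idx].generation << 32) | idx;
}

/* a registration claims a slot under a fresh generation */
static sched_handle_t slot_register( uint32_t idx )
{
    slots[idx].generation++;
    slots[idx].in_use = 1;
    return make_handle( idx );
}

/* cancel succeeds only if the generation still matches: a handle
   left over from a previous occupant of the slot fails cleanly
   instead of cancelling the new registration (the ABA case) */
static int slot_cancel( sched_handle_t h )
{
    uint32_t idx = (uint32_t)h;
    uint32_t gen = (uint32_t)(h >> 32);

    if (idx >= SCHED_SLOTS || !slots[idx].in_use) return -1;
    if (slots[idx].generation != gen) return -1;   /* stale handle */
    slots[idx].in_use = 0;
    return 0;
}
```

A stale handle held across a cancel/re-register cycle is rejected rather than silently tearing down the slot's new occupant.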

3. API surface

The API surface is:


NTSTATUS ntdll_sched_register_poll( int fd, int events,
                                    poll_callback callback,
                                    void *private,
                                    sched_handle_t *handle );

NTSTATUS ntdll_sched_register_timer( const LARGE_INTEGER *timeout,
                                     async_callback callback,
                                     void *private,
                                     sched_handle_t *handle );

NTSTATUS ntdll_sched_async( async_callback callback, void *private );
NTSTATUS ntdll_sched_call( call_callback callback, void *private );
NTSTATUS ntdll_sched_cancel( sched_handle_t handle );

NSPA adds class routing through NTDLL_SCHED_CLASS_DEFAULT and NTDLL_SCHED_CLASS_RT. Consumers that need RT dispatch call the class-aware registration helpers; consumers that only need a general callback host stay on the default instance.

The important semantics:

  - Cancel is generation-checked: a stale handle from a previous registration fails cleanly instead of hitting the slot's new occupant (the ABA case).
  - A call issued from the sched thread itself runs inline rather than queueing, so the host cannot deadlock on its own queue.
  - Producer-side wake writes are non-blocking, so a registering thread never blocks on a full wake pipe.
  - Callbacks are dispatched outside producer locks.

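The self-call rule can be sketched with a deliberately simplified single-slot queue; the real path uses the PI mutex and wake pipe described above, and all names here are illustrative.

```c
/* Sketch: "self-call runs inline on the sched thread".
   Single-slot queue stands in for the real producer path. */
#include <pthread.h>

typedef void (*call_callback)( void *arg );

static pthread_t sched_thread;          /* set when the host thread starts */
static call_callback queued_cb;
static void *queued_arg;

static int calls_run;
static void count_call( void *arg ) { (void)arg; calls_run++; }

/* queue the call for the sched thread, unless we *are* the sched
   thread: then run inline so the host never waits on work that
   only it could dispatch */
static void sched_call( call_callback cb, void *arg )
{
    if (pthread_equal( pthread_self(), sched_thread ))
    {
        cb( arg );                      /* inline on the sched thread */
        return;
    }
    queued_cb = cb;                     /* real code: lock + wake pipe */
    queued_arg = arg;
}
```

From any other thread the same call lands in the queue and is drained by the host loop.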
4. Current consumers

4.1 Async close queue on wine-sched

The first real consumer is the local-file async close queue. For eligible fully-shareable local-file handles, NtClose no longer pays unix close() and server close_handle latency inline on the caller thread. Instead it pushes a bounded queue entry to the default sched thread.

The rules are conservative:

  - Only local-file handles with fully-shareable sharing modes are routed; anything with restrictive sharing closes inline on the caller thread, exactly as before.
  - The queue is bounded, and if routing is unavailable the close falls back to the legacy inline path.

This is a latency and consolidation feature, not a semantic change. Restrictive sharing closes still go inline immediately so any blocked opener sees the close at the same point as before.
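A minimal sketch of that shape, under simplified assumptions: eligibility collapses to a single full-share flag, the drain side on the sched thread is elided, and every name here is illustrative rather than the in-tree one.

```c
/* Sketch: bounded async close queue with conservative fallback. */
#include <unistd.h>

#define CLOSE_QUEUE_LEN 32

static int close_queue[CLOSE_QUEUE_LEN];
static unsigned int close_head, close_tail;

/* push an fd for the sched thread to close later; fail if full */
static int queue_async_close( int fd )
{
    if (close_tail - close_head == CLOSE_QUEUE_LEN) return -1;
    close_queue[close_tail++ % CLOSE_QUEUE_LEN] = fd;
    return 0;
}

/* restrictive sharing, or a full queue, falls back to the old
   inline close so any blocked opener sees the close at the same
   point as before */
static void nspa_close( int fd, int fully_shareable )
{
    if (!fully_shareable || queue_async_close( fd ) == -1)
        close( fd );                    /* legacy inline path */
}
```

The fallback arm is what makes this a latency feature rather than a semantic change: ineligible closes take exactly the pre-migration path.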

4.2 local_timer and local_wm_timer on wine-sched-rt

The next consumers are the timer dispatchers that used to own separate RT helper threads:

  - local_timer, which dispatches NtSetTimer precision timer expiries.
  - local_wm_timer, which dispatches WM_TIMER precision timer expiries and reposts.

When RT is available, both route onto the shared RT sched instance instead of running dedicated pthread loops. The priority class is unchanged from the legacy design: SCHED_FIFO at NSPA_RT_PRIO - 1. The win is consolidation and shared infrastructure, not a different scheduling policy.

When both migrations are active together, the process loses one helper thread relative to the pre-migration layout because two legacy loops collapse onto one shared wine-sched-rt host.
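The host's scheduling attributes can be sketched as below. NSPA_RT_PRIO's value here is illustrative; building the attribute object needs no privileges, and only an actual pthread_create with these attributes can fail with EPERM on an unprivileged system.

```c
/* Sketch: attributes for the lazy wine-sched-rt host,
   SCHED_FIFO at NSPA_RT_PRIO - 1. */
#include <pthread.h>
#include <sched.h>

#ifndef NSPA_RT_PRIO
#define NSPA_RT_PRIO 90                 /* illustrative value */
#endif

static int init_rt_host_attr( pthread_attr_t *attr )
{
    struct sched_param param = { .sched_priority = NSPA_RT_PRIO - 1 };

    if (pthread_attr_init( attr )) return -1;
    /* use our explicit attrs, not the creator's inherited policy */
    pthread_attr_setinheritsched( attr, PTHREAD_EXPLICIT_SCHED );
    pthread_attr_setschedpolicy( attr, SCHED_FIFO );
    pthread_attr_setschedparam( attr, &param );
    return 0;
}
```

PTHREAD_EXPLICIT_SCHED matters here: without it, the spawned host would silently inherit the creator's policy instead of SCHED_FIFO.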

4.3 Observability sampler on the default class

NSPA_SCHED_OBS_INTERVAL_MS enables a periodic sampler hosted on the default class. It is not a production fast path, but it is active and useful because it exercises the timer and cancel paths continuously with a real in-tree consumer.
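The env gate can be sketched as a simple parse-or-OFF helper; the function name is illustrative, and only the variable name NSPA_SCHED_OBS_INTERVAL_MS and the default-OFF behavior come from this page.

```c
/* Sketch: default-OFF env gate for the observability sampler. */
#include <stdlib.h>

static unsigned int obs_interval_ms( void )
{
    const char *val = getenv( "NSPA_SCHED_OBS_INTERVAL_MS" );
    if (!val || !*val) return 0;        /* unset or empty: sampler OFF */
    long ms = strtol( val, NULL, 10 );
    return ms > 0 ? (unsigned int)ms : 0;
}
```

A zero return means no timer is registered at all, so the sampler costs nothing unless explicitly enabled.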

Current built-in output is written to /dev/shm/nspa-obs.<pid> as a periodic stats snapshot.

This sampler remains default OFF.

Current consumer map:

  - `wine-sched` (default class): async close queue drain (eligible full-share closes enqueued by the local-file close path) and the observability timer (periodic stats snapshot when env-enabled).
  - `wine-sched-rt` (RT class): `local_timer` and `local_wm_timer` dispatch (`NtSetTimer` / `WM_TIMER` precision timer expiries and reposts).
  - Fallback path: the legacy dedicated thread, or an inline close, if routing is unavailable.

5. Validation and current controls

The public status here is based on targeted validation of the current consumers, not on a new full-suite publish.

The scheduler consumers no longer expose per-feature opt-out gates in the public surface. Async close routing, local_timer, and local_wm_timer all run on the normal path when their own eligibility checks pass and RT is available. The one remaining public control here is the optional sampler:

Item Default Purpose
NSPA_SCHED_OBS_INTERVAL_MS OFF opt-in scheduler-host sampler for observability only

Targeted 2026-05-02 results:


6. Relationship to the rest of Wine-NSPA

This page is client-side infrastructure. It composes with, but does not replace:

  - wineserver dispatch, which keeps ownership of cross-process semantics.
  - gamma, which remains wineserver-side machinery.

The main decomposition consequence is that the client side has a cleaner place to host helper loops. That shrinks the number of ad-hoc per-subsystem threads and moves more timing-sensitive work out of the wineserver process without changing wineserver ownership of cross-process semantics.


7. References