Wine 11.6 + NSPA RT patchset | Kernel 6.19.x-rt with NTSync PI | 2026-04-15 | Author: Jordan Johnston
nspa_rt_test.exe is a multi-subcommand PE binary that validates the Wine-NSPA real-time scheduling, priority inheritance, synchronization primitives, io_uring integration, memory management, and process creation paths. It is built with mingw as a static PE console application and runs under Wine.
The binary contains 12 subcommands, each targeting a specific NSPA component or kernel interaction. Tests are designed to run in two modes:
- **Baseline mode** (`WINEDEBUG=-all` only) – no RT promotion, all threads SCHED_OTHER. Establishes reference behavior.
- **RT mode** (`NSPA_RT_PRIO=80 NSPA_RT_POLICY=FF WINEPRELOADREMAPVDSO=force`) – full NSPA RT promotion active. TIME_CRITICAL threads become SCHED_FIFO, PI boost is active, the vDSO is remapped for RT-safe clock access.

The companion script `nspa/run_rt_tests.sh` orchestrates both modes, captures per-run logs, and produces a summary matrix comparing baseline vs RT results.
Build:

```sh
i686-w64-mingw32-gcc -O2 -static programs/nspa_rt_test/main.c -o nspa_rt_test.exe -lws2_32
```
Single subcommand, RT mode:

```sh
NSPA_RT_PRIO=80 NSPA_RT_POLICY=FF ./wine nspa_rt_test.exe cs-contention
```

Full matrix (baseline + RT):

```sh
nspa/run_rt_tests.sh
```
nspa_rt_test.exe exercises every layer of the Wine-NSPA stack, from the PE binary through Wine's ntdll Unix layer down to the Linux kernel. Each subcommand targets a specific cross-section of that stack:
| Test | ntdll component | Kernel mechanism | What breaks if the component regresses |
|---|---|---|---|
| priority | thread.c | sched_setattr | Wrong FIFO priorities, threads stay TS |
| cs-contention | sync.c (CS-PI) | futex_lock_pi | RT thread starved behind SCHED_OTHER holder |
| rapidmutex | sync.c (CS fast path) | futex_lock_pi | Throughput collapse, RT max_wait unbounded |
| philosophers | sync.c (transitive PI) | futex_lock_pi | Deadlock or starvation in PI chain |
| ntsync | ntsync client | /dev/ntsync | PI not firing, wrong wakeup order |
| socket-io | io_uring.c | io_uring | Async recv latency regression |
| signal-recursion | virtual.c | segv_handler | Deadlock in recursive mutex path |
| large-pages | virtual.c | hugetlbfs | Silent fallback to 4KB pages |
| fork-mutex | process.c | posix_spawn | Child hangs from corrupted mutex |
| srw-bench | sync.c (SRW) | futex | Acquire latency regression |
| # | Subcommand | Tests | Key Metrics | NSPA Component |
|---|---|---|---|---|
| 1 | priority | RT priority mapping (11 threads, 2 phases) | Thread scheduling class + FIFO priority | thread.c RT mapping |
| 2 | cs-contention | CS-PI under SCHED_FIFO vs SCHED_OTHER | wait time (ms), samples captured | sync.c CS-PI |
| 3 | rapidmutex | CS throughput stress (1 RT + N-1 load) | ops/sec, max_wait (us), counter integrity | sync.c CS fast path |
| 4 | philosophers | Dining philosophers with transitive PI | meals/phil, RT max_wait (us), spread | sync.c transitive PI |
| 5 | fork-mutex | Rapid CreateProcess stress (N spawns) | spawn time, exit code, success rate | process.c opt-out |
| 6 | signal-recursion | PAGE_GUARD fault stress (N threads) | iters completed, fault count, elapsed | virtual.c recursive mutex |
| 7 | large-pages | VirtualAlloc(MEM_LARGE_PAGES) 2MB + PAGEMAP | HugePages_Free delta, LargePage flag | virtual.c large pages |
| 8 | ntsync | 5 sub-tests: rapid mutex, PI, prio, chain, WFMO | per-sub PASS/FAIL, wait times | /dev/ntsync driver |
| 9 | socket-io | TCP loopback: immediate + deferred recv | latency (us) p50/p95/p99/max, msgs/sec | io_uring Phase 3 |
| 10 | srw-bench | SRW lock contention benchmark | acquire latency (ns) p50/p99/max, ops/sec | sync.c SRW |
| 11 | child-quickexit | Internal helper for fork-mutex | exit code 42 | (internal) |
| 12 | help | Usage display | – | – |
### priority – RT Priority Mapping

What it tests: The NSPA v1/v2 priority mapping that converts Win32 thread priorities to Linux scheduling classes. Spawns 11 worker threads across two phases, each sleeping long enough for external inspection via ps and chrt.
Phase 1 (3 threads, default process class – Tier 1 lenient path):
- P1-TC – TIME_CRITICAL, expected FF 80
- P1-MCSS – via avrt AvSetMmThreadCharacteristicsW hint, expected FF 80
- P1-NORM – NORMAL, expected TS (or FF 73 under RT class)
Phase 2 (8 threads, after SetPriorityClass(REALTIME_PRIORITY_CLASS)):
- IDLE (FF 65), LOWEST (FF 71), BELOW_NORMAL (FF 72), NORMAL (FF 73), ABOVE_NORMAL (FF 74), HIGHEST (FF 75), TIME_CRITICAL (FF 80)
Expected output (with NSPA_RT_PRIO=80):

```
[P1-TC]     -> FF 80   [P1-MCSS]   -> FF 80   [P1-NORM]    -> TS / FF 73
[P2-IDLE]   -> FF 65   [P2-LOWEST] -> FF 71   [P2-BELOW]   -> FF 72
[P2-NORMAL] -> FF 73   [P2-ABOVE]  -> FF 74   [P2-HIGHEST] -> FF 75
[P2-TC]     -> FF 80
```
PASS criteria: All threads spawned successfully and SetPriorityClass(REALTIME) succeeded. Priority values are informational – verified by external ps -eLo pid,tid,class,rtprio inspection.
Note: Skipped by the runner script by default (INCLUDE_PRIORITY=0) because it sleeps 10 seconds for the external observation window.
### cs-contention – CS-PI Validation

What it tests: Whether the CRITICAL_SECTION priority inheritance path is working. An RT (SCHED_FIFO) thread blocks on a CS held by a SCHED_OTHER thread while background SCHED_OTHER load threads compete for CPU. With PI, the kernel boosts the holder to FIFO for the duration of the hold, so the RT thread's wait time approximates the holder's uncontended work time. Without PI, the holder is preempted by load threads and the RT thread waits much longer.
Thread model:
- Load threads (default 8): SCHED_OTHER infinite busyloops, spawned before SetPriorityClass(REALTIME) so they stay OTHER.
- Holder thread: SCHED_OTHER, acquires CS, does ~1 second of CPU-bound work inside, releases. Repeats for CS_ITERATIONS (default 5).
- Waiter thread: SCHED_FIFO (TIME_CRITICAL), waits for the holder to enter the CS, then calls EnterCriticalSection. Measures wall-clock wait time.
Key metrics: min/max/avg wait time in ms.
Expected behavior:
- With PI: avg wait close to ~1 second (uncontended work time)
- Without PI: avg wait materially larger, scaling with load thread count
PASS criteria: All CS_ITERATIONS samples captured (no deadlock, no lost wakeup).
### rapidmutex – CS Throughput Stress

What it tests: CRITICAL_SECTION throughput under high contention. N threads (default 4) hammer a shared CS in a tight EnterCS/LeaveCS loop. Thread 0 is TIME_CRITICAL (SCHED_FIFO under RT mode); other threads are NORMAL.
Usage: rapidmutex [n_threads] [iters_per_thread] – default 4 threads, 500,000 iters each.
Key metrics:
- shared_counter integrity (must equal N * iters)
- Per-thread: max_wait_us, avg_wait_us, iters_done, elapsed_ms
- Aggregate: ops/sec throughput
Expected behavior:
- Without NSPA_RT_PRIO: all threads see comparable max-wait
- With NSPA_RT_PRIO + CS-PI: thread 0’s max-wait is bounded by CS body time, not by load-thread scheduling
PASS criteria: shared_counter == N * iters (CS atomicity holds) and no errors.
### philosophers – Dining Philosophers / Transitive PI

What it tests: Transitive priority inheritance through a chain of CriticalSections. 5 philosophers share 5 chopsticks (CS objects). Phil 0 is TIME_CRITICAL; phils 1-4 are load (NORMAL). Background busyloop threads starve the OTHER phils for CPU.
Usage: philosophers [meals_per_phil] [n_load_threads] – default 50 meals, 4 load threads.
Transitive PI chain: Phil 0 (RT) waits on chopstick[0] held by phil 1, who holds chopstick[1] and waits on chopstick[2] held by phil 2, etc. The PI boost must propagate transitively: RT -> phil 1 -> phil 2 -> … until the tail holder finishes eating and releases.
Key metrics:
- meals_done per philosopher (starvation check)
- RT max_wait (us) – phil 0’s worst-case chopstick acquire time
- spread – max_meals - min_meals (fairness measure)
- Per-philosopher: max_wait, avg_wait, eat_total, elapsed
PASS criteria: All philosophers complete their target meal count within the timeout (default 60 seconds). Timeout implies deadlock.
### fork-mutex – CreateProcess Stress

What it tests: Wine's process spawn path and the librtpi sweep's process.c opt-out. Spawns N copies of itself (default 100) via CreateProcess, each running the internal child-quickexit subcommand, waits for each, and verifies the exit code.
Usage: fork-mutex [count] – default 100, max 10,000.
What bugs it catches:
1. process.c accidentally converted by the librtpi sweep – child inherits corrupted pi_mutex with parent TID
2. pthread_atfork handler regression – spawn hangs in parent or child
3. Wine CreateProcess race or handle leak – later spawns fail
4. Wineserver process-table overflow – off-by-one or leak under load
5. ntdll/unix/loader.c posix_spawn regression under RT scheduling
Key metrics: spawn time (min/max/avg us), child total time, success/failure counts.
PASS criteria: All N children spawned, waited, and returned exit code 42.
### signal-recursion – Guard-Page Fault Stress

What it tests: Wine's virtual_mutex recursive locking and the signal dispatch path. N worker threads (default 4) repeatedly: allocate a 2-page region, set PAGE_GUARD on the first page, touch the guard page (triggering STATUS_GUARD_PAGE_VIOLATION -> SIGSEGV -> segv_handler -> virtual_handle_fault), catch the exception via VEH, verify the page is accessible, and free the region.
Usage: signal-recursion [n_threads] [iters_per_thread] – default 4 threads, 1,000 iters.
What bugs it catches:
1. virtual_mutex converted to pi_mutex without NSPA_RTPI_MUTEX_RECURSIVE flag – deadlock on self-re-entry
2. PAGE_GUARD clear-on-first-access broken – infinite fault loop
3. VirtualAlloc/VirtualFree race with fault handler
4. Signal delivery to wrong thread
Key metrics: iters completed, faults caught (VEH), per-thread elapsed time.
PASS criteria: All iterations completed within timeout (60 seconds). Fault count is informational only – Wine may handle PAGE_GUARD internally.
### large-pages – VirtualAlloc(MEM_LARGE_PAGES)

What it tests: End-to-end large page allocation using the NSPA RT v2.5 port. Exercises: RtlAdjustPrivilege(SE_LOCK_MEMORY_PRIVILEGE), VirtualAlloc(MEM_LARGE_PAGES), /proc/meminfo cross-check, page touch round-trip, and QueryWorkingSetEx PAGEMAP_SCAN validation.
Skip conditions (produce PASS):
- /proc/meminfo not readable
- HugePages_Total == 0
- GetLargePageMinimum returns 0
- RtlAdjustPrivilege fails
Validation sequence:
1. Read HugePages_Free from /proc/meminfo (before)
2. Enable SeLockMemoryPrivilege via RtlAdjustPrivilege
3. VirtualAlloc(NULL, 4 * page_size, MEM_LARGE_PAGES, PAGE_READWRITE)
4. Read HugePages_Free again – must have decremented by at least 4
5. Touch every page: write unique byte, read it back
6. K32QueryWorkingSetEx – verify LargePage flag is set (PAGEMAP_SCAN)
7. VirtualFree – verify HugePages_Free is restored
PASS criteria: Allocation succeeds, hugepages consumed in /proc/meminfo, pages accessible, LargePage flag set.
### ntsync – NTSync Kernel Driver Validation

What it tests: 5 sub-tests targeting the NTSync kernel driver's PI boost, priority-ordered waiter queues, transitive PI chain walk, and mixed WaitForMultipleObjects dispatch.
Usage: ntsync [chain_depth] [rapid_threads] [rapid_iters] [pi_iters] [prio_waiters]
NTSync detection: Probes CreateMutex handle range. Handles >= 2,080,000 indicate ntsync is active (client-side). Lower handles mean wineserver futex fallback.
**PI boost:** An RT waiter (TIME_CRITICAL) blocks on a kernel CreateMutex held by a SCHED_OTHER holder while background load threads compete. Measures wait time per iteration. With ntsync PI, wait times stay near uncontended work time. Without PI, they scale with load count.
**Rapid mutex throughput:** N threads (1 RT + N-1 load) hammer a CreateMutex in a tight acquire/release loop. Measures throughput (ops/sec), RT max_wait, and counter integrity. Catches excessive overhead from the ntsync raw_spinlock conversion or priority-ordered insertion.
**Priority-ordered wakeup:** Main thread holds a mutex. N waiter threads at different Win32 priorities (TIME_CRITICAL through IDLE) all block. Main releases. Verifies that TIME_CRITICAL wakes first and IDLE wakes last (timestamp ordering). Waiters are launched lowest-priority-first so that FIFO ordering would produce the wrong result. Standard priority levels tested: TC (FF 80), HIGH (FF 75), ABOVE (FF 74), NORMAL (FF 73), BELOW (FF 72), LOW (FF 71), IDLE (FF 65).
**Transitive PI chain:** Chain of N mutexes. Holder[0] holds mutex[0] and blocks on mutex[1]. Holder[1] holds mutex[1] and blocks on mutex[2]. … Holder[N-1] holds mutex[N-1] and does CPU work. The RT thread waits on mutex[0]. The PI boost must propagate from the RT thread all the way to holder[N-1] through the entire chain. Background load threads ensure that without transitive PI, the tail holder is starved.
Runner configurations:
- ntsync-d4: depth 4, 4 rapid threads, 100K iters, 8 PI iters, 5 prio waiters
- ntsync-d8: depth 8, 4 rapid threads, 100K iters, 3 PI iters, 10 prio waiters
- ntsync-d12: depth 12, 8 rapid threads, 50K iters, 3 PI iters, 16 prio waiters
**Mixed-object WFMO:** Tests ntsync WAIT_ANY and WAIT_ALL paths with heterogeneous object types:

- 5a: WFMO with 1 signaled mutex + 5 unsignaled objects – verifies correct index 0
- 5b: WFMO after signaling semaphore at index 2 – verifies semaphore wakeup
- 5c: WaitAll with 2 signaled objects (event + unowned mutex) – verifies wait-all atomicity
- 5d: Cross-thread signal into WFMO – verifies event wakeup timing

PASS criteria: All 5 sub-tests plus 5a-5d PASS. Aggregate: total PASS vs total FAIL.
### socket-io – Async TCP Loopback Latency

What it tests: io_uring Phase 3 socket I/O bypass. Creates a TCP loopback pair and measures per-message recv latency using overlapped WSARecv in two phases.
Phase A – Immediate recv: Sender sends data before receiver calls WSARecv. Exercises the fast path where try_recv succeeds immediately. Data is already buffered when recv is called.
Phase B – Deferred recv: Receiver calls WSARecv before sender sends. Forces the async wait path: WSARecv returns WSA_IO_PENDING, then the sender is signaled. This exercises the server epoll monitoring path (or the io_uring poll bypass when Phase 3 is active).
Key metrics per phase:
- Latency: min, avg, p50, p95, p99, max (us)
- Throughput: msgs/sec
- Pending count: how many recvs went async (Phase B only)
Usage: socket-io (no arguments, 2000 iterations per phase, 256-byte messages).
PASS criteria: Both phases complete without recv errors.
### srw-bench – SRW Lock Contention Benchmark

What it tests: SRWLOCK acquire latency under contention. N threads (default 4) acquire/release a shared SRWLOCK in exclusive mode in a tight loop, measuring per-acquire latency in nanoseconds.
Usage: srw-bench [threads] [iterations] – default 4 threads, 500,000 iterations each.
Key metrics per thread: avg, p50, p99, max acquire latency (ns), ops/sec.
PASS criteria: All threads complete. Outputs overall aggregate.
### child-quickexit – Internal Helper

Internal subcommand used by fork-mutex. Prints a marker line and exits with code 42. Not intended for direct use.
### help – Usage Display

Prints the list of subcommands, environment variables, and example invocations.
## nspa/run_rt_tests.sh

The runner script orchestrates the full test matrix. It runs every configured subcommand twice – once in baseline mode and once in RT mode – captures per-run logs, parses the binary's PASS/FAIL verdict line, and prints a summary.
The runner script defines the test list as an array. Each entry is: "display_name subcmd [args...]".
```sh
tests=(
    "rapidmutex rapidmutex 4 500000"
    "philosophers philosophers 50 4"
    "fork-mutex fork-mutex 100"
    "cs-contention cs-contention"
    "signal-recursion signal-recursion 4 500"
    "large-pages large-pages"
    "ntsync-d4 ntsync 4 4 100000 8 5"
    "ntsync-d8 ntsync 8 4 100000 3 10"
    "ntsync-d12 ntsync 12 8 50000 3 16"
    "socket-io socket-io"
)
```
The priority subcommand is included only when INCLUDE_PRIORITY=1.
The runner determines each test’s verdict with this priority:
1. Timeout (rc=124 or rc=137) -> TIMEOUT
2. `^ PASS\s*$` matched in stdout -> PASS
3. `^ FAIL` matched in stdout -> FAIL
4. rc=0 -> PASS*, otherwise -> FAIL* (rc=N)

The `*` marker in the summary distinguishes tests whose verdict came from the exit code alone from those that emitted an explicit PASS/FAIL line.
All logs are written to $LOG_DIR (default /tmp/nspa_rt_test_logs/):

```
/tmp/nspa_rt_test_logs/
    baseline_rapidmutex.log
    baseline_philosophers.log
    baseline_fork-mutex.log
    baseline_cs-contention.log
    baseline_signal-recursion.log
    baseline_large-pages.log
    baseline_ntsync-d4.log
    baseline_ntsync-d8.log
    baseline_ntsync-d12.log
    baseline_socket-io.log
    rt_rapidmutex.log
    rt_philosophers.log
    rt_fork-mutex.log
    rt_cs-contention.log
    ...
```
| Variable | Default | Description |
|---|---|---|
| WINE | /usr/bin/wine | Wine binary path |
| WINEPREFIX | /home/ninez/Winebox/winebox-master | Wine prefix |
| TEST_EXE | nspa_rt_test.exe | PE binary path (searched in Wine lib dirs) |
| LOG_DIR | /tmp/nspa_rt_test_logs | Per-run log output directory |
| TIMEOUT_SECS | 120 | Per-test timeout (seconds) |
| RT_PRIO | 80 | NSPA_RT_PRIO for RT mode |
| RT_POLICY | FF | NSPA_RT_POLICY for RT mode |
| INCLUDE_PRIORITY | 0 | Set to 1 to include the priority subcommand |
| Code | Meaning |
|---|---|
| 0 | All runs PASS |
| 1 | At least one FAIL, TIMEOUT, or UNKNOWN |
| 2 | Prerequisites missing (test binary not built, Wine not found) |
Every subcommand runs with a watchdog timer armed on entry. The watchdog is a dedicated thread at THREAD_PRIORITY_TIME_CRITICAL that calls ExitProcess(99) after a configurable timeout.
The timeout is configurable via the `NSPA_TEST_TIMEOUT=N` environment variable (seconds).

The watchdog is the inner safety net (inside the PE binary). The runner script's `timeout --kill-after=5` is the outer safety net (at the shell level). Together they guarantee that no test can hang indefinitely, even if SCHED_FIFO busyloop threads have saturated all cores.
A SetConsoleCtrlHandler callback is registered before any subcommand runs. On Ctrl+C:
1. All stop flags are set atomically: `g_global_abort`, `g_stop_load`, `phil_load_stop`, `nts_pi_stop_load`, `nts_chain_stop_load`
2. `ExitProcess(1)` is called

This ensures that pressing Ctrl+C cleanly terminates even tests with active SCHED_FIFO busyloop threads.
Each subcommand uses its own volatile LONG stop flag, checked by busyloop threads via InterlockedCompareExchange. All flags are set atomically by both the Ctrl+C handler and the watchdog’s ExitProcess path.
The runner script adds an additional safety layer:
- `cleanup_stale` runs between every test, using `pgrep -f '[n]spa_rt_test\.exe$'` with the bracket trick to avoid self-matching. First pass: SIGTERM. Second pass: SIGKILL.
- `timeout --kill-after=5 $TIMEOUT_SECS` wraps each Wine invocation. If the test ignores SIGTERM, SIGKILL arrives 5 seconds later.

## pi_cond_bench.c – Requeue-PI Condvar Benchmark

A native Linux benchmark (not a Wine program) that measures condvar signal-to-wake latency under RT priority contention. Located at nspa/tests/pi_cond_bench.c.
What it measures: An RT waiter (SCHED_FIFO) sleeps on a pi_cond. A normal-priority signaler signals it after a delay. Background load threads compete for CPU. With requeue-PI, the wake-to-mutex-reacquire is atomic (kernel-side). Without it, there is a gap where no PI boost is in effect.
Build (native Linux, not Wine):

```sh
gcc -O2 -o pi_cond_bench nspa/tests/pi_cond_bench.c -lpthread -I../../libs/librtpi
```

Run:

```sh
sudo chrt -f 80 ./pi_cond_bench [iterations] [load_threads]
```
Output: Wake latency histogram: avg, p50, p99, max in nanoseconds.
Purpose: Validates the underlying kernel requeue-PI mechanism that Wine-NSPA’s condvar path depends on. Running this benchmark outside Wine isolates kernel behavior from Wine’s ntdll layer.
The test harness is designed for easy extension. To add a new subcommand foo:
In programs/nspa_rt_test/main.c:
```c
static int cmd_foo(int argc, char **argv)
{
    print_banner("foo", "description of what foo tests");
    print_section("parameters");

    /* ... test logic ... */

    /* Verdict: explicit PASS/FAIL for the runner to parse */
    if (success) {
        print_verdict(1, NULL);                  /* prints " PASS" */
        return 0;
    } else {
        print_verdict(0, "reason for failure");  /* prints " FAIL: reason" */
        return 1;
    }
}
```
```c
static struct command commands[] = {
    /* ... existing entries ... */
    { "foo", "short description of foo", cmd_foo },
    { NULL, NULL, NULL }   /* sentinel */
};
```
In nspa/run_rt_tests.sh, add to the tests array:
```sh
tests=(
    # ... existing entries ...
    "foo foo [optional args]"
)
```
Format: "display_name subcmd [args...]".
Helper conventions for new subcommands:

- Use print_banner(), print_section(), print_kv(), print_verdict() for consistent output formatting
- Use print_worker_start() when spawning named threads
- Use enter_realtime_class() / leave_realtime_class() to switch to REALTIME_PRIORITY_CLASS
- Spawn load threads before enter_realtime_class() so they stay OTHER
- Use safe_load_count() to cap load threads at (n_cpus - 1) to avoid saturating the machine
- Check g_global_abort in long-running loops for Ctrl+C responsiveness
- Use now_us() / now_ms() for timing measurements (QPC-based)
- End with print_verdict() so the runner can parse verdicts without relying on exit codes

Environment variables read by the test binary:

| Variable | Default | Description |
|---|---|---|
| NSPA_RT_PRIO | (unset) | Enables v1 RT promotion. Sets the ceiling FIFO priority for TIME_CRITICAL threads. Typical value: 80. |
| NSPA_RT_POLICY | (unset) | Scheduler policy for the lower RT band. FF = SCHED_FIFO, RR = SCHED_RR, TS = SCHED_OTHER. |
| NSPA_TEST_TIMEOUT | 120 | Watchdog timeout in seconds. If the test has not exited after this many seconds, ExitProcess(99) is called. |
| Variable | Default | Description |
|---|---|---|
| WINE | /usr/bin/wine | Path to Wine binary |
| WINEPREFIX | /home/ninez/Winebox/winebox-master | Wine prefix directory |
| TEST_EXE | nspa_rt_test.exe | Path to the PE test binary |
| LOG_DIR | /tmp/nspa_rt_test_logs | Directory for per-run log files |
| TIMEOUT_SECS | 120 | Per-test timeout (shell-level, seconds) |
| RT_PRIO | 80 | NSPA_RT_PRIO value for RT mode passes |
| RT_POLICY | FF | NSPA_RT_POLICY value for RT mode passes |
| INCLUDE_PRIORITY | 0 | Set to 1 to include the priority subcommand |
| Variable | Value | Purpose |
|---|---|---|
| WINEDEBUG | -all | Suppress debug output (both modes) |
| WINEPREFIX | $WINEPREFIX | Wine prefix (both modes) |
| NSPA_RT_PRIO | $RT_PRIO | RT mode only – enables FIFO promotion |
| NSPA_RT_POLICY | $RT_POLICY | RT mode only – sets scheduler policy |
| WINEPRELOADREMAPVDSO | force | RT mode only – remap vDSO for RT-safe clock access |
| Requirement | Check | Purpose |
|---|---|---|
| ntsync module loaded | sudo modprobe ntsync | Required for ntsync sub-tests |
| Hugepages reserved | /proc/meminfo HugePages_Total > 0 | Required for large-pages test |
| RT-capable kernel | uname -r shows -rt | Required for SCHED_FIFO promotion |
| CAP_SYS_NICE or root | ulimit -r | Required for RT scheduling |