
Navigator HW Concurrency

From WHATWG Wiki
Revision as of 16:39, 11 May 2014 by Eligrey (talk | contribs) (spec version 1.0)

Proposed navigator.hardwareConcurrency API for smarter Worker pool allocation in parallel applications

Editor: Eli Grey

Version 1.0


This specification defines an API for reading the system's total number of logical processors available to the user agent.

The intended use of the API is to help developers size their worker thread pools when running parallel algorithms.

Developers can easily take advantage of this in existing parallel applications built on web workers by replacing code that does workers = # with workers = navigator.hardwareConcurrency || #, splitting parallel tasks across every logical core. The OS or user agent scheduler will handle balancing the load of these threads against everything else running on the system.
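A minimal sketch of that substitution, wrapped in a helper so it also degrades gracefully in contexts without a navigator object (the helper name and the fallback value 4 are illustrative, not part of this proposal):

```javascript
// Pick a worker pool size: use the reported logical core count when
// available, otherwise fall back to a hardcoded default.
function poolSize(fallback) {
  // In contexts without a `navigator` object, or in user agents that
  // predate this API, use the fallback count instead.
  if (typeof navigator === "undefined" || !navigator.hardwareConcurrency) {
    return fallback;
  }
  return navigator.hardwareConcurrency;
}

// Before: var workers = 4;
// After:
var workers = poolSize(4);
```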

Currently, highly parallel algorithms must prompt the user for their core count, but many users don't know this information or where to find it. Giving users control over thread count can also backfire when the user assumes the highest option is best. For example, this can result in 32 threads running on a user's dual-core laptop.

Example use cases

  • Image processing in online photo editors is highly parallelizable but often hardcoded to a specific worker count. For example, this recent blog post on image processing with worker threads in JavaScript suggests hardcoding the worker count to 4. All the author has to do is replace the 4 with navigator.hardwareConcurrency || 4 to increase performance on computers with more cores.
  • Using LZMA2 in JavaScript with as many cores as possible to compress data faster without having to prompt the user for their core count.
  • Physics engines for WebGL games: Many physics engines are highly parallelizable, but currently there is no method to determine how many threads to use without prompting the user for their core count.
  • Running realtime object/face/movement/etc. detection algorithms efficiently on webcam input or video file input, without prompting the user for their core count.
  • Multithreaded silent OCR: A current attempt at automatic silent OCR is http://projectnaptha.com/ (single-threaded). If Project Naptha is ever going to use the multithreaded Ocrad mode to increase performance, it must currently prompt the user for a core count. This defeats the purpose of a silent background processing script by interrupting the user with a prompt.
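All of the use cases above follow the same pattern: split a large input into roughly equal chunks, one per available core, and hand each chunk to a worker. A sketch of that splitting step (the chunkRanges helper is hypothetical, not part of any proposed API):

```javascript
// Split `length` items into up to `numWorkers` contiguous [start, end)
// ranges, one per worker, with chunk sizes differing by at most one item.
function chunkRanges(length, numWorkers) {
  var ranges = [];
  var base = Math.floor(length / numWorkers);
  var extra = length % numWorkers;
  var start = 0;
  for (var i = 0; i < numWorkers && start < length; i++) {
    var size = base + (i < extra ? 1 : 0);
    ranges.push([start, start + size]);
    start += size;
  }
  return ranges;
}

// Dividing 10 rows of pixels among 4 workers:
// chunkRanges(10, 4) → [[0, 3], [3, 6], [6, 8], [8, 10]]
```

Each range would then be posted to its own Worker, with navigator.hardwareConcurrency || 4 (or similar) supplying numWorkers.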


On getting, the hardwareConcurrency property should return the number of logical processors available to the user agent. For example, on OS X this should be equivalent to running sysctl -n hw.availcpu.

The number must be >= 1.


[NoInterfaceObject, Exposed=(Window,Worker)]
interface NavigatorCPU {
    readonly attribute unsigned long hardwareConcurrency;
};

Navigator implements NavigatorCPU;
WorkerNavigator implements NavigatorCPU;

Privacy considerations

The user agent MAY report fewer than the number of actual logical cores to reduce the efficacy of fingerprinting.
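One way a user agent could implement such a mitigation, clamped so the value still satisfies the >= 1 requirement above (the cap of 8 is an arbitrary illustration, not a spec requirement):

```javascript
// Hypothetical UA-side mitigation: report at most `cap` logical
// processors, but never fewer than 1 (the spec requires the value
// to be >= 1).
function reportedConcurrency(actualLogicalCores, cap) {
  return Math.max(1, Math.min(actualLogicalCores, cap));
}

// A 32-core machine with an example cap of 8:
// reportedConcurrency(32, 8) → 8
```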

The total number of cores available to the user agent can already be approximated with high accuracy, given enough time on a system under low to moderate load, using the polyfill in the appendix. Chrome also exposes it through PNaCl.


An open source O(log n) (in the number of cores) polyfill in JavaScript can be found at:


The polyfill works by running a timing attack: it measures the runtime of a worker thread pool that is resized according to a binary search, statistically sampling the measurements, until performance no longer increases with the number of threads.

The default configuration is tuned for medium accuracy so that the estimation finishes in a timely manner. If you care about accuracy more than runtime, increase the workload as you see fit.
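The search strategy can be sketched independently of the timing machinery. Here measureThroughput is a stand-in for the polyfill's real worker-pool benchmark (an assumption for illustration, not the actual implementation), and the 1.1 threshold approximates the statistical "still improving" test; real timings are noisy, so the actual polyfill averages repeated samples:

```javascript
// Binary search for the largest thread count at which throughput is
// still meaningfully improving. `measureThroughput(threads)` is a
// hypothetical benchmark returning work done per unit time.
function estimateCores(measureThroughput, maxThreads) {
  var lo = 1;
  var hi = maxThreads;
  while (lo < hi) {
    var mid = Math.ceil((lo + hi) / 2);
    // If going from mid-1 to mid threads still helps noticeably,
    // the core count is at least mid; otherwise throughput has
    // plateaued and the answer lies below mid.
    if (measureThroughput(mid) > measureThroughput(mid - 1) * 1.1) {
      lo = mid;
    } else {
      hi = mid - 1;
    }
  }
  return lo;
}

// Simulated 4-core machine: throughput scales linearly up to 4
// threads, then plateaus.
var simulated = function (threads) { return Math.min(threads, 4); };
// estimateCores(simulated, 32) → 4
```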