What DCP Is (and Isn't)
DCP is a compute-sharing platform designed to securely execute untrusted code across distributed compute networks. These networks may be global and public, or privately scoped and confined to on-prem environments. To enable this flexibility, DCP deliberately adopts the web execution model for both computation and communication. This choice provides strong isolation guarantees for host machines and allows workloads to run consistently across heterogeneous devices, operating systems, and environments, all without requiring trust in the code being executed.
That security and portability come with an important tradeoff. Because DCP uses the web stack, supported workloads today are limited to JavaScript, WebAssembly (Wasm), and WGSL (WebGPU Shading Language). Native runtimes, direct system access, and CUDA-based execution are intentionally out of scope.
However, the web ecosystem is rapidly closing this gap. Many widely used tools already have Wasm or WebGPU backends, including:
- Pyodide (Wasm builds of NumPy, SciPy, pandas, scikit-learn, and more)
- ONNX Runtime with WebGPU for model inference
- TensorFlow.js for running TensorFlow models in web environments
- And the list goes on…
Developers can also author distributed jobs in Python using the dcp pip package, which targets these web-native runtimes.
By definition, compute sharing across many dispersed machines forms a loosely coupled distributed system. As a result, DCP is best suited to data-parallel / embarrassingly parallel (map-reduce style) workloads, rather than tightly synchronized computation.
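The map-reduce shape this implies can be sketched in plain Python. The example below uses the standard library's thread pool as a local stand-in for DCP's distributed workers; the `work` function and inputs are purely illustrative, not DCP API code:

```python
from concurrent.futures import ThreadPoolExecutor

def work(x: int) -> int:
    # Each task is independent: no shared state and no cross-task
    # communication -- the property a loosely coupled scheduler relies on.
    return x * x

def run_job(inputs) -> int:
    # Map: fan the tasks out to independent workers.
    # On DCP, each call to `work` could land on a different machine.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(work, inputs))
    # Reduce: combine the independent results locally.
    return sum(results)

print(run_job(range(10)))  # → 285
```

The key design point is that `work` receives everything it needs as arguments and returns a plain value, so tasks can be scheduled in any order, on any worker, and retried independently.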
What This Enables
DCP is a strong fit for workloads such as:
- Classical machine learning (e.g., random forests, SVMs, linear models)
- Batch inference pipelines for vision, audio, OCR, NLP, and small LLMs
- Numerical modelling
- Digital twin parameter-space exploration
- Monte Carlo simulations
- Large-scale parameter sweeps
- Heavy statistical analyses
- And the list goes on…
These workloads benefit directly from horizontal scale without requiring low-latency coordination or shared GPU memory.
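As a concrete instance, a Monte Carlo estimate of π decomposes naturally into independent tasks, each parameterized only by a seed and a sample count, with a trivial reduction at the end. This is a minimal local illustration of the pattern, not DCP API code:

```python
import random

def estimate_pi_task(seed: int, samples: int) -> float:
    # One independent task: needs only its seed and sample count,
    # and returns a single scalar.
    rng = random.Random(seed)
    hits = sum(
        1
        for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * hits / samples

def estimate_pi(n_tasks: int, samples_per_task: int) -> float:
    # Each task could run on a different worker; only the scalar
    # estimates come back to be averaged.
    results = [estimate_pi_task(s, samples_per_task) for s in range(n_tasks)]
    return sum(results) / len(results)

print(estimate_pi(8, 50_000))  # prints a value close to 3.14
```

Because no task depends on another's output, adding workers shortens wall-clock time almost linearly, which is exactly the horizontal-scale benefit described above.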
What It Does Not Enable (Today)
DCP is not designed for:
- Tightly coupled distributed training
- CUDA-dependent workloads
- Deep learning training frameworks that require synchronized GPU execution (e.g., PyTorch with CUDA)
Some of these limitations may evolve as WebGPU matures, but they are intentionally outside DCP’s current design envelope.
Bottom Line
DCP does not attempt to make every workload distributable. Instead, it focuses on making secure, global, trustless computation possible. If your workload can be expressed in web-native runtimes and parallelized across independent tasks, DCP can distribute and execute it at scale—safely, anywhere.