About
Our Research Focus
ZSpaceLabs is a research group focused on accelerated Rust tensor and AI systems engineering.
Our goal is to enable Rust AI and tensor applications to saturate full machine throughput (CPU, GPU, memory, and network bandwidth) in both high-performance clusters and embedded systems.
Our primary tools are aggressively complete code coverage, purpose-built performance and allocation metrics and benchmarks, compliance burn-in tests, and end-to-end analysis.
The Context
The industry still largely depends on Python-based systems for research and development (R&D) in AI and machine learning. The language's flexibility and ease of use make it a great choice for sketching applications, but they introduce expensive challenges when scaling thread use across large multi-core systems and when processing high-volume asynchronous I/O, particularly for large datasets and networking.
Interpreted Python programs, even with hardware GPU acceleration, leave a large amount of throughput on the table. Compared to systems languages, Python struggles with asynchronous I/O and networking, fails to effectively exploit additional CPU cores, and lacks an optimizing compiler.
At present, the gap between available hardware performance and what Python development environments can deliver is measured in orders of magnitude, and it will not be closed simply by making the silicon faster.
Rust has become a promising migration target for the AI, ML, and tensor industry. Rust has many throughput advantages: in code efficiency, in the performance of its optimizing compiler, in its handling of threading and CPU resources, and in its ability to target embedded systems (and even WASM).
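As a minimal illustration of the threading point above (a sketch using only the standard library, not ZSpaceLabs code; the function name `parallel_sum` and the worker count are arbitrary), Rust's scoped threads let a program split work across CPU cores directly, with no interpreter or global lock in the way:

```rust
use std::thread;

// Sum a slice by splitting it into roughly equal chunks,
// one scoped thread per chunk. Scoped threads may borrow
// `data` because the scope guarantees they finish first.
fn parallel_sum(data: &[u64], workers: usize) -> u64 {
    let chunk = (data.len() + workers - 1) / workers; // ceiling division
    let chunk = chunk.max(1); // chunks() requires a nonzero size
    thread::scope(|s| {
        data.chunks(chunk)
            .map(|c| s.spawn(move || c.iter().sum::<u64>()))
            .collect::<Vec<_>>() // spawn all threads before joining
            .into_iter()
            .map(|h| h.join().expect("worker thread panicked"))
            .sum()
    })
}

fn main() {
    let data: Vec<u64> = (1..=1_000_000).collect();
    let total = parallel_sum(&data, 8);
    // Gauss: n(n+1)/2
    assert_eq!(total, 1_000_000 * 1_000_001 / 2);
    println!("{total}");
}
```

The same shape, with the chunking, scheduling, and work-stealing handled for you, is what libraries like rayon provide; the point here is only that the compiler enforces the borrowing and the threads run truly in parallel.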
However, Rust still lags in support machinery for the AI, ML, and tensor spaces. GPU acceleration frameworks exist, but much of the secondary ecosystem fabric is missing.
We aim to help fix that.