PyTorch team unveils framework for programming clusters

The PyTorch team at Meta, stewards of the PyTorch open source machine learning framework, has unveiled Monarch, a distributed programming framework intended to bring the simplicity of PyTorch to entire clusters. Monarch pairs a Python-based front end, which supports integration with existing code and libraries such as PyTorch, with a Rust-based back end, which provides performance, scalability, and robustness, the team said.

Announced October 22, Monarch is a framework based on scalable actor messaging that lets users program distributed systems the way they would program a single machine, hiding the complexity of distributed computing, the PyTorch team said. Monarch is currently in an experimental stage; installation instructions can be found at meta-pytorch.org.

Monarch organizes processes, actors, and hosts into a scalable multidimensional array, or mesh, that can be manipulated directly. Users can operate on entire meshes, or on slices of them, through simple APIs, while Monarch handles distribution and vectorization automatically. Developers can write code as if nothing fails, according to the PyTorch team; when something does fail, Monarch fails fast by stopping the whole program. Users can later add fine-grained fault handling where needed, catching and recovering from failures.
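
The mesh-and-actor model is easier to picture with a short example. The sketch below is illustrative only: it assumes Monarch's Python front end exposes names along the lines of those shown in the team's published examples (an `Actor` base class, an `endpoint` decorator, and `this_host().spawn_procs()` for creating a process mesh); the exact module paths and signatures in the released package may differ.

```python
# Illustrative sketch of the mesh/actor model described above.
# The names imported from monarch.actor are assumptions based on the
# project's announcement examples, not a definitive API reference.
import asyncio

from monarch.actor import Actor, current_rank, endpoint, this_host


class Greeter(Actor):
    """A simple actor; one instance runs in each process of the mesh."""

    def __init__(self) -> None:
        self.rank = current_rank().rank

    @endpoint
    async def hello(self, msg: str) -> str:
        return f"rank {self.rank} got: {msg}"


async def main() -> None:
    # Spawn a mesh of local processes, then an actor mesh on top of it.
    procs = this_host().spawn_procs(per_host={"procs": 4})
    greeters = procs.spawn("greeters", Greeter)

    # A single call fans out to every actor in the mesh, with Monarch
    # handling the distribution and message routing.
    replies = await greeters.hello.call("hello from the controller")
    print(replies)


asyncio.run(main())
```

In this style, the controlling script reads like single-machine Python, and failures surface as exceptions in that script, which is where the fine-grained fault handling the team describes would be added.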
