When we shop with a mobile app, we can quickly skim a list of recommendations and sense that the machine knows us, or at least is learning to. Machine learning (ML) has become ubiquitous as an emerging technology, with applications spanning everything from everyday consumer software to supercomputing.
Dedicated ML computers are thus developed at many different scales, but their productivity is limited: the workload and development costs are concentrated in their software stacks, which must be developed or reworked ad hoc for each scale.
To address the problem, researchers from the Chinese Academy of Sciences (CAS) proposed a fractal parallel computing model and published their research in Intelligent Computing on September 5.
"To solve the productivity problem, we proposed ML computers with fractal von Neumann architecture (FvNA)," said Yongwei Zhao, a researcher at the State Key Lab of Processors, Institute of Computing Technology, CAS.
"Fractality" is a concept borrowed from geometry that describes self-similar patterns repeated at every scale. According to the researchers, if a system is "fractal," it always runs the same program regardless of its scale.
FvNA, a multi-layered and parallelized von Neumann architecture, is not only fractal but also isostratal, which literally means "identical through layered structures."
That is, unlike conventional anisostratal ML computing architectures, FvNA adopts the same instruction set architecture (ISA) for every layer. "The bottom layer is fully controlled by the top layer, thus only the top layer is exposed to the programmer as a monolithic processor. Therefore, ML computers built with FvNA are programmable under an invariant, homogeneous, and sequential view," explained the researchers.
Although FvNA has been shown to apply to the ML domain and to mitigate the programming productivity problem while performing as efficiently as its ad hoc counterparts, some issues remain. The study addresses three questions:
- How could FvNA remain efficient enough with such a strict architectural constraint?
- Is FvNA also applicable to payloads from other domains?
- If so, what are the exact prerequisites?
To answer these questions, the researchers started by modeling the Fractal Parallel Machine (FPM), an abstract parallel computer derived from FvNA. FPM builds on Valiant's multi-BSP, a seamless multi-layer parallel model, with only minor extensions.
An instance of FPM is a tree of nested components; each component contains memory, a processor, and child components. Components run fracops, the pattern of payloads on fractal parallel computing systems: reading input data from external storage, performing calculations on the processor, and writing the output back to external storage.
"Compared to Valiant's multi-BSP, FPM has minimized parameters for simpler abstraction," the researchers said. "More importantly, FPM imposes explicit restrictions on programming by exposing only one processor to the programming interface. The processor is only aware of its parent component and its child components, but not of the overall system specification." In other words, the program never knows where it resides in the tree, so by definition FPM cannot be programmed to be scale-dependent.
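The scale-independent view described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the paper's implementation: the names `Component`, `even_split`, and the callback signatures are all invented here. The key property it demonstrates is that every component runs the identical program and sees only its own children, never the whole tree.

```python
class Component:
    """An FPM-style node: a processor plus child components."""

    def __init__(self, children=()):
        self.children = list(children)

    def run(self, split, compute, reduce_, data):
        # Leaf component: perform the calculation locally.
        if not self.children:
            return compute(data)
        # Internal component: hand shares of the payload to the children,
        # each of which executes this exact same program, then combine
        # the partial results. Nothing here depends on the tree's depth.
        parts = split(data, len(self.children))
        return reduce_(child.run(split, compute, reduce_, part)
                       for child, part in zip(self.children, parts))


def even_split(xs, n):
    """Split a list into at most n roughly equal contiguous chunks."""
    k = (len(xs) + n - 1) // n
    return [xs[i:i + k] for i in range(0, len(xs), k)]


# A two-layer machine: a root with two children, each with two leaf children.
leaf_pair = lambda: Component([Component(), Component()])
machine = Component([leaf_pair(), leaf_pair()])
total = machine.run(even_split, sum, sum, list(range(10)))
print(total)  # 45
```

Because `run` never inspects anything beyond its immediate children, the same call works unchanged on a one-node machine or a much deeper tree, which is the scale-invariance the researchers describe.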
Meanwhile, the researchers proposed two different ML-targeting FvNA architectures, the specialized Cambricon-F and the universal Cambricon-FR, and demonstrated FPM's fractal programming style by running several example general-purpose programs. The samples covered embarrassingly parallel, divide-and-conquer, and dynamic programming algorithms, all of which were found to be efficiently programmable.
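As a flavor of what a divide-and-conquer program looks like in this fractal style, here is a standalone sketch, with function and parameter names invented for illustration. At every level the same function splits the payload among a fixed number of "children" and recurses until a local base case, so the code never encodes the machine's scale.

```python
def fractal_sum(values, fanout=2, leaf_size=4):
    """Sum a list in a fractal divide-and-conquer style.

    `fanout` plays the role of the number of child components per node;
    `leaf_size` is the payload small enough to compute on one component.
    """
    # Base case: small enough to compute locally.
    if len(values) <= leaf_size:
        return sum(values)
    # Split into `fanout` roughly equal chunks; each recursive call
    # plays the role of a child component running the identical program.
    step = (len(values) + fanout - 1) // fanout
    chunks = [values[i:i + step] for i in range(0, len(values), step)]
    return sum(fractal_sum(chunk, fanout, leaf_size) for chunk in chunks)


print(fractal_sum(list(range(100))))  # 4950
```

Changing `fanout` or the input size changes how deep the recursion goes, but not the program itself, mirroring the scale-independence of FPM programs.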
"We clarified that, although originally developed from the field of ML, fractal parallel computing is quite generally applicable," the researchers concluded, drawing from their preliminary results that the cost-optimal, general-purpose FPM is as powerful as fundamental parallel computing models such as BSP and the alternating Turing machine. They also believe that fully implementing FPM could be useful in scenarios ranging from the entire global web down to microscale in vivo devices.
The researchers also pointed to a remarkable finding from the study: FPM limits the entropy of programming by constraining the control model of parallel computing systems. "Currently, fractal machines, such as Cambricon-F/FR, only take advantage of this entropy reduction to simplify software development," they observed. "Whether energy reduction can be achieved by introducing fractal control into conventional parallel machines is an interesting open question."
Yongwei Zhao et al, Fractal Parallel Computing, Intelligent Computing (2022). DOI: 10.34133/2022/9797623
Powered by Intelligent Computing
Quote: Fractal parallel computing, a geometry-inspired productivity booster (2022, December 5) Retrieved December 5, 2022 from https://techxplore.com/news/2022-12-fractal-parallel-geometry-inspired-productivity-booster.html
This document is subject to copyright. Except for fair use for purposes of private study or research, no part may be reproduced without written permission. The content is provided for information only.