The Flux Supercomputing Workload Manager: Improving on Innovation and Planning for the Future
By Scott Gibson
High-performance computing, or supercomputing, combined with new data-science approaches such as machine learning and artificial intelligence (AI) gives scientists the ability to explore phenomena at levels of detail that would otherwise be impossible or impractical—from tackling the most difficult physics calculations, to designing better drugs for cancer and COVID-19, to optimizing additive manufacturing, and more. Finding answers to these important and timely problems on supercomputing systems involves coordinating numerous interconnected computational tasks through complex scientific workflows.
“Nowadays, many science and engineering disciplines require far more applications than ever before in their scientific workflows,” said Dong H. Ahn, a computer scientist at Lawrence Livermore National Laboratory’s (LLNL’s) Livermore Computing (LC) and principal investigator (PI) of the Flux project. “In many cases, a single job needs to run multiple simulation applications at different scales along with data analysis, in situ visualization, machine learning, and AI.”
Scientific workflow requirements, combined with hardware innovations—for example, extremely heterogeneous resources such as GPUs, multitiered storage, AI accelerators, quantum computing of various configurations, and cloud computing resources—are increasingly rendering traditional resource management and scheduling software incapable of handling the workflows and adapting to emerging supercomputing architectures. The LC in-house system software team has had a clear vantage point from which to watch this challenge emerge.
Over many years, the team has developed products to run and manage science simulation codes across multiple compute clusters. Among the fruits of that labor is Slurm, a workload manager used worldwide. The team realized, however, that today’s workflow management and resource scheduling challenges called for a fundamental rethinking of software design that would transcend conventional solutions.
Overcoming Conventional Limitations
“We all knew our goal was a giant undertaking,” Ahn said. “But our vision for next-gen manager software was compelling enough and well-received by our stakeholders within the Department of Energy [DOE] Advanced Simulation and Computing [ASC] program and then later, the Exascale Computing Project [ECP].”
Computer scientists at LLNL devised an open-source, modular, fully hierarchical software framework called Flux that manages and schedules computing workflows to use system resources more efficiently and provide results faster. Flux’s modular development model enables a rich and consistent API that makes it easy to launch Flux instances from within scripts. Fully hierarchical means that every Flux “job step” can be a full Flux instance, with the ability to schedule more job steps on its resources.
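For a feel of what that hierarchical, API-driven model looks like in practice, the sketch below uses Flux’s Python bindings to submit two job steps from inside an existing Flux instance: an ordinary parallel job, and a job that starts its own nested Flux instance to run a workflow script. The binary and script names are hypothetical, and this is a minimal illustration rather than a complete workflow.

```python
import os

import flux
import flux.job
from flux.job import JobspecV1

handle = flux.Flux()  # connect to the enclosing Flux instance

# An ordinary job step: 8 tasks of a (hypothetical) simulation binary.
sim = JobspecV1.from_command(command=["./simulate"], num_tasks=8, cores_per_task=2)
sim.cwd = os.getcwd()
sim.environment = dict(os.environ)
sim_id = flux.job.submit(handle, sim)

# A job step that is itself a full Flux instance: the allocated node is handed
# to a child `flux start`, and the (hypothetical) workflow script can then
# schedule its own job steps on that node's resources.
nested = JobspecV1.from_command(
    command=["flux", "start", "./workflow.sh"],
    num_nodes=1,
    num_tasks=1,
    cores_per_task=16,
)
nested.cwd = os.getcwd()
nested.environment = dict(os.environ)
nested_id = flux.job.submit(handle, nested)

print(f"submitted job steps {sim_id} and {nested_id}")
```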
Flux offers the advantages of higher job throughput, better scheduler specialization, and portability to different computing environments, yet it manages complex workflows as simply as conventional ones.
Because traditional resource data models are largely ineffective at coping with computing resource heterogeneity, the Flux team adopted graph-based scheduling to manage complex combinations of extremely heterogeneous resources.
With a graph consisting of a set of vertices and edges, graph-based scheduling represents resources and the relationships between them on an equal footing, so complex scheduling policies can be expressed without changing the scheduler code. One case in which Flux’s graph approach meets a critical scheduling need is the use of HPE’s Rabbit multi-tiered storage modules on El Capitan, the upcoming exascale-class supercomputer at LLNL.
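As a rough illustration of the idea (and not Flux’s actual resource model, which is implemented in its Fluxion graph scheduler), the toy graph below treats every resource and every relationship uniformly as vertices and edges, so a near-node storage module such as Rabbit is just more vertices and edges rather than a special case in the scheduler code. The names and topology here are invented for illustration.

```python
import networkx as nx

# Toy resource graph: vertices are resources; directed edges encode
# "contains" or "is attached to" relationships between them.
g = nx.DiGraph()
g.add_edge("cluster0", "node0")
g.add_edge("cluster0", "node1")
for node in ("node0", "node1"):
    g.add_edge(node, f"{node}-cpu0")
    g.add_edge(node, f"{node}-gpu0")

# A multi-tiered storage module becomes just another vertex with edges to the
# nodes it serves; no scheduler code changes are needed to reason about it.
g.add_edge("cluster0", "rabbit0")
g.add_edge("rabbit0", "node0")
g.add_edge("rabbit0", "node1")

# A request such as "a node with a GPU and attached rabbit storage" then
# becomes a matching problem over this graph.
print(sorted(g.successors("rabbit0")))  # ['node0', 'node1']
```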
Flux users can push more work through a supercomputing system more quickly and spin up their own personalized Flux instances on the system. Additionally, workflows that must run at different sites no longer need to code to each site-specific scheduler—they can instead code to Flux and rely on it to handle the nuances of each individual site.
The Impact of Flux
Flux has showcased its innovations by enabling COVID-19 drug design, cancer research, and advanced manufacturing projects.
Using Flux, a highly scalable drug design workflow demonstrated the ability to rapidly produce potential COVID-19 drug molecules for further clinical testing. The paper documenting that work was one of four finalists for the Gordon Bell Prize at the SC20 supercomputing conference.
In cancer research, simulating RAS protein interactions at the molecular-dynamics scale is a critical aim of life science and biomedical researchers because, when mutated, RAS is implicated in one-third of human cancers. Flux enabled the Multiscale Machine-Learned Modeling Infrastructure (MuMMI) project to successfully execute its complex scientific workflow to simulate RAS on the pre-exascale Sierra supercomputer at LLNL. MuMMI paves the way for a new genre of multiscale simulation for cancer research, coupling multiple scales using a hypothesis-driven selection process.
ECP’s Exascale Additive Manufacturing (ExaAM) project is focused on accelerating the widespread adoption of additive manufacturing by enabling routine fabrication of qualifiable metal parts. By incorporating Flux into a portion of their ExaConstit workflow, the ExaAM team demonstrated a 4× job throughput performance improvement.
“Flux allowed them to bundle their small jobs into a large IBM LSF [short for load sharing facility] resource allocation, and they were able to run them all together with only about five lines of changes in their script,” Ahn said.
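That bundling pattern can be approximated in a few lines: once a Flux instance is started inside the outer LSF allocation, many small jobs can be submitted to it and packed onto the allocated nodes. The sketch below is not the ExaAM team’s actual script; it assumes the Flux Python bindings are available and uses a hypothetical ExaConstit-style command and input files.

```python
import concurrent.futures
import os

import flux.job
from flux.job import JobspecV1

# Build many small job specifications (hypothetical binary and inputs).
specs = []
for i in range(100):
    spec = JobspecV1.from_command(
        command=["./exaconstit_case", f"--input=case_{i}.toml"],
        num_tasks=4,
        cores_per_task=1,
    )
    spec.cwd = os.getcwd()
    spec.environment = dict(os.environ)
    specs.append(spec)

# Submit them all to the enclosing Flux instance, which packs them onto the
# resources of the single large outer allocation.
with flux.job.FluxExecutor() as executor:
    futures = [executor.submit(spec) for spec in specs]
    for fut in concurrent.futures.as_completed(futures):
        print("finished job", fut.jobid())
```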
To multiply such benefits across a wider range of ECP workflows, the Flux team has also played a major role in the ECP ExaWorks project as one of its four founding members. The goal of ExaWorks is to build a Software Development Kit, or SDK, for workflows and a community around common APIs and open-source functional components that can be used individually or in combination to create sophisticated dynamic workflows that take full advantage of exascale-class supercomputers.
Innovation and Collaboration
Flux won a 2021 R&D 100 Award in the Software/Services category.
“The reason Flux won is probably a mix of our past success and the future prospects,” Ahn said. “The scientific and enterprise computing community reviewed our technology, and I believe they saw it as future ready in terms of how we support the workflows and how a diverse set of specialty hardware needs to be supported in cloud computing as well as HPC. So, my guess is that all that factored in.”
Flux’s graph approach has positioned the product for the convergence of HPC and cloud computing on El Capitan and beyond, and it has fostered strategic multidisciplinary collaborations. Under a memorandum of understanding, the Flux team formed a partnership with Red Hat OpenShift and the IBM Thomas J. Watson Research Center, which led to the publication of key findings in two papers: one for the 2021 Smoky Mountain Conference (SMC21), and the other for the CANOPIE-HPC Workshop at the SC21 supercomputing conference.
The collaborations with Red Hat OpenShift and IBM informed Flux’s converged computing directions, setting the stage for the enablement and testing of HPE Rabbit storage. That accomplishment stemmed from scientists at T. J. Watson creating KubeFlux, which uses one of the core components of Flux to make intelligent, sophisticated pod-placement decisions in Kubernetes, an open-source container orchestration system for automating software deployment, scaling, and management. Pods are the smallest, most basic deployable container objects in Kubernetes.
“As we’ve collaborated more closely with IBM and now Red Hat OpenShift partners, we’ve gone several steps further with KubeFlux, more tightly integrated it, and provided a more feature-rich plugin,” said LLNL computer scientist and Flux team member Dan Milroy. “We take the scheduling component of Flux and plug that into Kubernetes, and then that is an engine that drives these very sophisticated scheduling decisions in terms of where to place pods on hardware resources.”
The Flux team’s SMC21 paper contains a description of the new KubeFlux plugin and details about their background research on Kubernetes and its scheduling framework, which allows third-party plugins to supply scheduling-decision information to Kubernetes.
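From an application’s point of view, selecting such a third-party scheduler is a small change to the pod specification: Kubernetes dispatches any pod whose schedulerName names a registered plugin to that plugin instead of the default scheduler. The sketch below, written with the standard Kubernetes Python client, assumes a KubeFlux plugin registered under the name "kubeflux" and uses a placeholder container image; both are illustrative assumptions rather than details from the paper.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gromacs-demo"),
    spec=client.V1PodSpec(
        scheduler_name="kubeflux",  # assumption: the KubeFlux plugin is registered under this name
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="gromacs",
                image="example.com/gromacs:latest",  # placeholder image
                resources=client.V1ResourceRequirements(
                    requests={"cpu": "4", "memory": "8Gi"},
                ),
            )
        ],
    ),
)

# Create the pod; Kubernetes hands placement to the named scheduler plugin.
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```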
“We discovered there is a pretty significant limitation in terms of the API that Kubernetes exposes and how third-party plugins can integrate with that, which can result in split-brain states for schedulers,” Milroy said.
Building on the SMC21 paper, the Flux team published at the CANOPIE-HPC Workshop, taking their work a step further to include studies comparing the performance of GROMACS, one of the most widely used open-source codes in chemistry, when it is scheduled either by the Kubernetes default scheduler or by KubeFlux.
The team found that KubeFlux makes far more sophisticated and intelligent decisions than the Kubernetes default scheduler about where to place pods on resources, enabling a 4× improvement in GROMACS performance under certain circumstances.
“Part of the reason behind this is that Kubernetes is designed to facilitate and declaratively manage microservices rather than high-performance applications, and now that we’re seeing a movement toward integrating HPC and cloud together, we’re seeing applications that demand more performance, but truly rely on the Kubernetes default scheduler,” Milroy said. “That’s exposing limitations in terms of its decision-making capabilities and its orientation toward microservices rather than toward more high-performance applications. The demand for greater performance is increasing, and this KubeFlux plugin scheduler that we’ve created is designed to meet that demand.”
Forward Motion
Among the next actions for the Flux project is enhancing the software’s system resource manager to ensure the product’s multi-user mode schedules jobs in a manner that gives multiple simultaneous users their fair share of access to system resources based on the amount they requested up front. Working on that aspect are LLNL computer scientists Mark Grondona, Jim Garlick, Al Chu, Chris Moussa, James Corbett, and Ryan Day.
Moussa handles job accounting, which he described as having two parts.
“One is balancing the order of jobs when they’re submitted in a multi-user environment, and then there’s just the general administration and management of users in the database,” Moussa said. “So that’s where most of my work is focused, and we’re continuing to make progress on that front in preparation for the system instances of Flux.”
Flux is composed of many communication brokers that create a tree-based overlay network, which must remain resilient so that messages can be passed between different parts of the system. Consequently, many issues in the software design revolve around resiliency.
“If brokers that are supposed to be routing messages go down, a lot of problems happen,” Grondona said. “So, we’re focusing on the initial system instances, keeping the tree-based network simple, and then adding a lot of functionality to deal with exceptional events like brokers going down or other problems happening. We have to be able to keep the system instances running for weeks or months without losing user jobs or data, and so we’re developing a lot of low-level testing to help us achieve those goals.”
A multi-user computing environment makes properly designed and implemented security features critical. The Flux team has learned from experience on many projects that if the security-significant bits—i.e., the ones that run with privilege—are isolated and that layer of code is kept small, harmful bugs that could lead to privilege attacks are less likely. A separate Flux security project contains all of the security-significant code, the main part of which is the IMP, or Independent Minister of Privilege, a security helper for Flux.
Flux itself runs as an unprivileged user across the network, so a compromise there does not lead to privilege escalation. The IMP process is used only during the transition to running work as a particular user.
“We use cryptographic signatures to ensure that the IMP only runs work that a user has requested,” Grondona said. “And then the plan is to make heavy use of Linux cgroups to isolate different users’ jobs and allocate them using a Linux feature. The users are given only the resources they’re allowed to have. At the system instance level, the plan now is to have every job that’s submitted spin up as a single user of Flux. Everything under that is contained. It’s running as that one user, and they have all the features of Flux within that one job. We feel pretty good about the security design in Flux.”
Flux’s security is designed such that the attack surface of the component that runs as root in privileged mode is very small.
“So, it drastically lowers the possibility of being compromised, whereas other products that run the whole thing as root can have bad things happen if even a small component of the product gets compromised,” Ahn said.
To propel Flux to its next stage of development, the project’s core team will deploy a system instance and a single-user mode instance running simultaneously. The next part of the plan of record is to replace the existing workload manager on large Linux clusters in Livermore systems.
“That means that once we get into Livermore systems, the bits will go into our sister laboratories that include Los Alamos and Sandia,” Ahn said. “So that’s one big area. Then we’re going to continue to support single-user mode, where users can use Flux’s single-user mode without waiting for their corresponding center to replace their workload manager with Flux. We’re going to support that mode of operation for a while. But as users use Flux, there will be more and more requests to the center to replace their workload manager. So, I can see two to three years down the road, there’ll be more system instances popping up at other high-end computing sites.”
With respect to cloud computing, the Flux team is in learning mode, researching the challenges and forming strategic R&D collaborations with the aim of pursuing that approach over the next two or three years to find product solutions that can be channeled into their R&D efforts.
“Red Hat recently told us they want to build a product around KubeFlux, so that’s going to be another interesting bit,” Ahn said. “And I’m very excited to see what the cloud guys say when KubeFlux is available on the cloud side, like Amazon, and when they run HPC workloads on Amazon AWS or Microsoft Azure.”
As part of the Flux team’s next big R&D effort, they are preparing a pitch that will offer a new perspective on how scientific applications are mapped to computing resources at large centers like LC. The aim is to counter the decade-old assumption that users can effectively prescribe every small detail concerning how their applications will be mapped to the computing resources at the center.
“Say I have a drug design workflow and some of the components are working really well on CPU-only supercomputers, while other components are working better on GPUs, and then I try to co-schedule those two things simultaneously with precise imperative prescriptions,” Ahn said. “That’s a very difficult thing to do at this point. And even if scientists can live with that kind of mapping complexity, when their recipes come to a center, the center cannot do a good job of mapping for optimization. So, I’m trying to start a project where we change the fundamental ways to map the scientific application to the entire center resources without having to ask and require users to prescribe every little detail.”
If the application-mapping project is approved and funded, users will have higher-level, more flexible idioms for describing their resource needs without specifying which supercomputers and nodes should be used, whether simultaneously or in a staggered way.
Flexibility for the Future
A descriptive rather than prescriptive approach to application mapping will become even more relevant once the exascale era is established and the convergence of HPC with cloud computing deepens.
“In a cloud software stack, users aren’t asked to prescribe every little detail,” Ahn said. “They select what we call the declarative-style idiom. They want this number of services, and they don’t care where the services are running. The cloud will take care of that. And if this kind of paradigm change is made at the HPC level, our stack will be an inch closer to being more compatible with cloud computing. Cloud computing is huge. It’s like an order of magnitude larger than the HPC market, and we want to make sure HPC software is completely compatible with the cloud, which will be very important for post-exascale.”
The Flux product is well-positioned for the HPC–cloud convergence.
“It’s designed such that it integrates very, very well with, and facilitates, resource dynamism,” Milroy said. “Part of that is the hierarchical nature of it, and the other is the graph-based resource representation. It turns out that in a cloud environment, resources can change. They can change not only in quantity but also in type and in time. Representing the resources in a graph and then having Flux instances be created hierarchically is extremely conducive to managing and scheduling cloud-based resources. And that’s going to be a key component of HPC and cloud convergence in the future, where we see Kubernetes merging even closer together with HPC resource managers.”
“To do that, you have to have a resource representation that considers all the flexibility of the cloud, and Flux already enables that, which is a huge advantage,” Milroy said. “One of the Flux subprojects is directed at using Flux to instantiate Kubernetes and then co-manage resources.”
Along with HPC’s convergence with the cloud, another expected trend is an era of highly specialized hardware.
“Gone are the days when HPC could get its high performance using a few homogeneous compute processors,” Ahn said. “Starting in 2018, new additions to the TOP500 list of the most powerful supercomputers drove more performance from specialized hardware, including GPUs, than from general-purpose hardware like CPUs. That trend will accelerate. Part of that is AI. If you look at the current industry, they are making specialized hardware. About 50 startups are working on ASICs, or application-specific integrated circuits, which include AI accelerators. LC has already put accelerators such as Cerebras and SambaNova in place, and this trend will happen more.”
Some of today’s systems apply heterogeneity through the use of multiple partitions containing different specialized hardware.
“One example is Perlmutter at the National Energy Research Scientific Computing Center, NERSC, which has two partitions, each with a different compute hardware type,” Ahn said. “And if you look at European supercomputers, they have a snowflake-like architecture with five or six different partitions within a supercomputer. And our users want to use different collections of hardware in their workflows. The mapping of their workflows, which consist of many applications across different specialized partitions and specialized hardware, will be very hard. Flux has enough flexibility, including its graph-based and API-based approaches, to help us overcome what I call this post-exascale crisis.”