Taiwan’s new recruit from NASA aims to enhance tech innovation and industrial transformation through HPC

High-performance computing technology has been widely adopted in advanced countries to facilitate technological innovation, commercial big-data analytics, and aerospace missions. Taiwan’s National Center for High-Performance Computing (NCHC), an institution under the National Applied Research Laboratories (NARLabs), likewise provides critical resources such as high-speed network infrastructure and computing capacity to support academia and industry in Taiwan. DIGITIMES interviewed NCHC’s new director-general Chau-Lyan Chang, a former senior researcher in the Computational Aeronautics Science Division at NASA Langley Research Center, who returned to Taiwan in April 2022 to lead the center. We asked him about his vision for the center and his perspective on the future of high-performance computing technology.

Q: You originally studied mechanical engineering but eventually specialized in aerospace fluid dynamics and engineering software design. What inspired you to take this cross-disciplinary career path? And why did you decide to leave your career in the US behind and return to Taiwan to take up this post?

A: I was born and grew up in Taiwan, and since I was young I have always wanted the opportunity to make a contribution here. When I learned there was an opening that suited me so well, I simply grabbed it. I could not wait until the usual retirement age at NASA, because by then I would probably be too old to make a difference. It’s now or never!

When I was a junior in the Department of Mechanical Engineering at National Taiwan University, my fluid mechanics professor recommended a popular science book called “Shape and Flow”, written by MIT professor A. H. Shapiro. That book is the main reason I fell in love with fluid mechanics.

As for why I started my career in software: I grew up at a time when computers were still rare, and I had never taken a programming course. Out of curiosity, I grabbed a book on Fortran programming and taught myself. After spending a month or so punching decks of cards back and forth, I managed to finish testing a simple number-guessing program, and I was very excited to get every single line of code right. In graduate school, I began studying programming for scientific computing in depth; I took it more seriously, felt a sense of accomplishment, and found my true interest. These two pursuits combined into what is known as computational fluid dynamics (CFD), which is exactly what I worked on when I went to the U.S. for my Ph.D. and later in my job at NASA.

Q: The vision of NCHC is to “become an international high-speed computing center for scientific discovery and technological innovation.” In the past, it has focused on cloud applications in areas such as environmental and disaster prevention, biomedical science, science and engineering, and digital culture and creativity, and has conducted research and development on forward-looking innovative application technologies and services such as deep learning and artificial intelligence (AI). In what other new areas do you think the center can act as an enabler? What areas of work will you promote in the future? What are your hopes and visions for the center?

A: Thanks to the leadership of my predecessors, NCHC has achieved remarkable accomplishments in IaaS (infrastructure as a service) and PaaS (platform as a service) while supporting the government’s technology policies. In the past few years, we have made outstanding contributions to digital transformation, AI applications, and information security. As a national laboratory, we hope to play the role of HPC and AI technology enabler: not only to provide IaaS and PaaS, but also to continue enhancing network capabilities and introducing tools for big-data analytics.

In terms of hardware, we have built the Taiwania 1, 2, and 3 supercomputers since 2017, and these clusters contain not only CPUs but also GPUs. I would like to emphasize that all three machines have ranked among the world’s top 500 fastest supercomputers for some time, which highlights Taiwan’s technological strength. Through collaborations with multiple government agencies, NCHC has provided platform services in environmental disaster prevention, biomedical science, and scientific and engineering research, offering valuable high-speed computing resources to academia. It has even provided digital services for culture and creativity in the humanities, which is quite unique in the world of high-performance computing.

There are several large high-speed networks in Taiwan, and we have found a way to link them together and smooth out the bottlenecks. The Executive Yuan’s forward-looking plan includes the establishment of a 5G network, as well as undersea cables and a 5G interconnection center to link the fiber-optic high-speed networks from south to north. This provides an alternative data transmission path to South and Southeast Asia, in hopes of attracting multinational corporations and international organizations to set up data centers in Taiwan. These are all areas where we can act as an enabler.

In addition to the six major directions declared by the Ministry of Science and Technology, which include digital transformation of industries, precision healthcare, 5G/6G networks, information security, and network infrastructure, we are researching encryption technology and how to apply Internet of Things (IoT) and blockchain technologies to various industries in Taiwan, and we travel widely to promote these technologies and assist industry stakeholders in adopting them. Following the success of international companies such as Amazon, Microsoft, and Google in cloud services, NCHC has also leased 50% of its Taiwania 2 supercomputer capacity to a private HPC cloud company in order to better serve industrial customers. This is a new model of collaboration between a national laboratory and a private company, and it is not easy to find similar examples elsewhere in the world.

In the future, I want to emphasize the importance of data, which will be a key to our future technological development and innovation. This can be seen from several perspectives. The first is how to collect data, for example with sensors in the civil biotechnology network, and how to use data in biomedicine. The second is how to generate data: in many applications, data generation relies heavily on simulations. HPC is therefore inseparable from modeling and big-data analysis, and this will be one of the key points we want to develop.

After acquiring the data, it is imperative to ensure its integrity. We therefore need information security as well as large-scale, high-performance data servers and high-speed networks to provide a complete service system for the data chain. This will be one of our future efforts.

Third, the development of artificial intelligence models has become relatively mature, and many algorithms have not changed much, so the quality of data has become the key to success. Large, consistent, high-quality datasets are more likely to produce the best models. We want to work in this direction: to obtain data, process it, protect it, and scale up the trained models in the most efficient way. The next century will probably be the era of data, and data will be one of the most important fields that all competitors fight for.

We have introduced a great deal of AI knowledge and many tools to different industries, and they have proven helpful; we hope to enhance these industries’ existing models in the future. Some industries never thought they could generate data; we will help them do so, and then enhance their productivity through the application of AI models.

Some of our previous successes with industry came when companies brought their needs to the table and commissioned projects. In the future, we would like to extend our cooperation with industry to reach more stakeholders. For example, we found that very few companies in Taiwan’s engineering and manufacturing fields use high-performance computing, and even those that do are limited to commercial software. We hope to help them by promoting more applications through HPC simulations.

We also work with academia and startups: we can provide IaaS, PaaS, and SaaS technologies and integration mechanisms in big data, AI, and high-speed computing, while they have more specific domain expertise. We complement each other in solving problems for industry.

Many industry players still have not realized that they can use HPC to design products and solve problems, or that making good use of data starts with asking the right questions. Taiwan needs a paradigm shift from merely manufacturing parts to integrating them into complete solutions, and we can play an important role in that.

In addition, Taiwan has just announced the composition of its Quantum Computing National Team, which covers hardware development, breakthroughs in cryogenic technology, and software. The Ministry of Science and Technology, Academia Sinica, and various university research units are joining the effort. But even with the hardware and quantum bits in place, a supercomputer is still needed to interpret the data, so NCHC will definitely play a role. We are also using concepts from quantum computing to develop innovative encryption technologies, and we hope to make great contributions to quantum computing in the future.

Q: Recently, low-Earth-orbit satellites have become a focus of Taiwan’s ICT industry and are seen as a new growth opportunity. How will the center help the industry make the best use of its resources for technological innovation?

A: The U.S. applies supercomputers, high-speed computing, AI, and big-data analysis in virtually all areas of engineering. For example, we have seen news reports about NASA’s exploration rovers going to the surface of Mars. First, when designing the launch rocket, supercomputers are needed for aerodynamic design, to enhance performance, reduce vibration, and minimize the loads on the payload during the launch phase. On the way to Mars, rocket engines are used for navigation and control, and computer simulations are carried out to model the dynamics of the vehicle. Finally, the vehicle must pass through the Martian atmosphere and land; high-performance computing is required to simulate the entire sequence from atmospheric entry through parachute deployment until the vehicle is safely on the Martian surface.

Therefore, we can say that the development of the entire aerospace industry relies very heavily on computer simulations using HPC technologies. This is why the US is the world leader in technology: its strength stems from its ability to apply supercomputing technology to innovation, and that is also the direction in which Taiwan should be heading.

In the U.S., there is no need to sell industrial stakeholders on what can be done with high-speed computing simulation, because its value is already a consensus. Some people here in Taiwan might think, “Why do we need to invest so much money in innovation or HPC?” This explains why most companies are satisfied with making parts but lack the ambition to build entire machines or turnkey solutions. Achieving an integrated solution or machine requires system integration and simulation, and this is a necessary step in digital transformation.

In addition, we have some preliminary ideas on how to use digital twins to solve complex problems in the future, and we hope to contribute here through active collaboration across government agencies.

Bio of Chau-Lyan Chang:

B.S. and M.S., Department of Mechanical Engineering, National Taiwan University; Ph.D., Pennsylvania State University, U.S.A. Chang was formerly a senior researcher at NASA’s Langley Research Center and is an associate fellow of the American Institute of Aeronautics and Astronautics (AIAA). His expertise is in computational fluid dynamics and high-performance parallel computation, mainly in laminar-turbulent transition and space-time conservation methods. He is well known for his ability to accurately predict the behavior of high-speed and complex fluid flows, especially in challenging aerospace applications such as supersonic and turbulent flows. The engineering software he developed is used not only at NASA but also widely by US industry, national laboratories, and academia.