VisIt
Lawrence Livermore National Laboratory
VisIt is an open-source, interactive, scalable visualization, animation, and analysis tool.
Earth System Sciences, Engineering, Life Sciences, Materials and Chemical Sciences, Other
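As a quick illustration of how VisIt is typically driven in batch, the sketch below uses VisIt's Python command-line interface (run inside `visit -cli`); the database name and plotted variable are placeholders, not part of this catalog entry.

```python
# Minimal sketch of VisIt's Python CLI (run inside `visit -cli`).
# "example.silo" and the variable "pressure" are placeholders; substitute
# a real database and variable from your own data.
OpenDatabase("example.silo")          # attach to a dataset
AddPlot("Pseudocolor", "pressure")    # color-map a scalar field
AddOperator("Slice")                  # cut the volume with a plane
DrawPlots()                           # render the plot in the viewer

# Save the rendered frame to a PNG image.
s = SaveWindowAttributes()
s.format = s.PNG
s.fileName = "frame"
SetSaveWindowAttributes(s)
SaveWindow()
```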
ParaView is an open-source, multi-platform data analysis and visualization application.
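ParaView can be scripted in much the same way. Below is a minimal sketch using its Python interface (run with `pvpython`); the built-in Wavelet source supplies test data, so no input file is needed, and the isovalue is an arbitrary choice.

```python
# Minimal sketch of ParaView's Python scripting interface (run with pvpython).
from paraview.simple import *

wavelet = Wavelet()                        # built-in synthetic volume source
contour = Contour(Input=wavelet)           # extract an isosurface
contour.ContourBy = ['POINTS', 'RTData']   # Wavelet's default scalar array
contour.Isosurfaces = [157.0]              # isovalue chosen arbitrarily here

Show(contour)
Render()
SaveScreenshot("wavelet_contour.png")      # write the rendered view to disk
```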
The AMD Optimizing C/C++ and Fortran Compilers (“AOCC”) are a set of production compilers optimized for software performance when running on AMD host processors using the AMD “Zen” core architecture.
A chemical reaction model, consisting of two gas-phase reactions and a surface reaction, for the deposition of copper from copper amidinate is investigated by comparing results of an efficient, reduced-order CFD model with experiments. The film deposition rate over a wide range of temperatures, 473 K to 623 K, is accurately captured, focusing specifically on the reported drop of the deposition rate at higher temperatures, i.e., above 553 K, which has not been widely explored in the literature. This investigation is facilitated by an efficient computational tool that merges equation-based analysis with data-driven reduced-order modeling and artificial neural networks. The hybrid computer-aided approach is necessary to address, in a reasonable time frame, the complex chemical and physical phenomena that develop in a three-dimensional geometry corresponding to the experimental set-up. It is through this comparison between the experiments and the derived simulation results, enabled by machine-learning algorithms, that the prevalent theoretical hypothesis is tested and validated, illuminating the possible underlying dominant phenomena.
Engineering, Materials and Chemical Sciences
High Performance Data Analysis
https://www.sciencedirect.com/science/article/pii/S0098135421000673?via%3Dihub
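The abstract above combines equation-based CFD with a neural-network surrogate. Purely as a loose illustration of the surrogate idea, and assuming scikit-learn is available, the sketch below fits a small network to synthetic temperature/rate samples; none of the paper's actual model or data is reproduced, and all numbers are made-up stand-ins.

```python
# Generic sketch of a data-driven surrogate: fit a small neural network to
# (temperature -> deposition rate) samples. All data here are synthetic
# stand-ins; the paper's CFD model and measurements are not used.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic "training data": an Arrhenius-like rise followed by a falloff
# at high temperature, loosely mimicking the reported drop above ~553 K.
T = np.linspace(473.0, 623.0, 200)                       # temperature [K]
rate = np.exp(-6000.0 / T) * 1e4 / (1.0 + np.exp((T - 553.0) / 10.0))
rate += 0.02 * rate.std() * rng.standard_normal(T.size)  # measurement noise

# Standardize the input; MLPs train poorly on raw Kelvin values.
X = ((T - T.mean()) / T.std()).reshape(-1, 1)

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(X, rate)

# Query the surrogate at a new temperature (e.g., 580 K).
x_new = np.array([[(580.0 - T.mean()) / T.std()]])
print("predicted rate at 580 K:", surrogate.predict(x_new)[0])
```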
The adoption of detailed mechanisms for chemical kinetics often poses two types of severe challenges: First, the number of degrees of freedom is large; and second, the dynamics is characterized by widely disparate time scales. As a result, reactive flow solvers with detailed chemistry often become intractable even for large clusters of CPUs, especially when dealing with direct numerical simulation (DNS) of turbulent combustion problems. This has motivated the development of several techniques for reducing the complexity of such kinetics models, where, eventually, only a few variables are considered in the development of the simplified model. Unfortunately, no generally applicable a priori recipe for selecting suitable parameterizations of the reduced model is available, and the choice of slow variables often relies upon intuition and experience. We present an automated approach to this task, consisting of three main steps. First, the low dimensional manifold of slow motions is (approximately) sampled by brief simulations of the detailed model, starting from a rich enough ensemble of admissible initial conditions. Second, a global parametrization of the manifold is obtained through the Diffusion Map (DMAP) approach, which has recently emerged as a powerful tool in data analysis/machine learning. Finally, a simplified model is constructed and solved on the fly in terms of the above reduced (slow) variables. Clearly, closing this latter model requires nontrivial interpolation calculations, enabling restriction (mapping from the full ambient space to the reduced one) and lifting (mapping from the reduced space to the ambient one). This is a key step in our approach, and a variety of interpolation schemes are reported and compared. The scope of the proposed procedure is presented and discussed by means of an illustrative combustion example.
Engineering, Materials and Chemical Sciences
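To make the DMAP step of the abstract above concrete, the sketch below computes a diffusion-map coordinate for points sampled near a one-dimensional manifold. The spiral data set and the kernel scale are illustrative assumptions only, not the combustion example or the interpolation schemes from the paper.

```python
# Minimal sketch of the Diffusion Map (DMAP) step: embed sampled states
# into a single "slow" coordinate. Data here are a synthetic noisy spiral,
# not trajectories of a chemical kinetics model.
import numpy as np

rng = np.random.default_rng(1)

# Sample points near a one-dimensional manifold (a noisy spiral in 3D).
t = rng.uniform(0.0, 3.0 * np.pi, 500)
X = np.column_stack([np.cos(t), np.sin(t), 0.2 * t])
X += 0.02 * rng.standard_normal(X.shape)

# Gaussian kernel on pairwise squared distances; eps sets the scale.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
eps = np.median(d2)
K = np.exp(-d2 / eps)

# Row-normalize to a Markov transition matrix and diagonalize it.
P = K / K.sum(axis=1, keepdims=True)
eigvals, eigvecs = np.linalg.eig(P)
order = np.argsort(-eigvals.real)

# The leading nontrivial eigenvector gives the first diffusion coordinate;
# for this dataset it should vary monotonically with the arclength t.
psi1 = eigvecs[:, order[1]].real
print("correlation with arclength:", abs(np.corrcoef(psi1, t)[0, 1]))
```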
The big data revolution has ushered in an era of ever-increasing volumes and complexity of data, requiring ever-faster computational analysis. During this very same era, CPU performance growth has been stagnating, pushing the industry either to scale computation horizontally using multiple nodes in datacenters, or to scale vertically using heterogeneous components to reduce compute time. Meanwhile, networking and storage continue to provide both higher throughput and lower latency, which allows for leveraging heterogeneous components deployed in data centers around the world. Still, the integration of big data analytics frameworks with heterogeneous hardware components such as GPGPUs and FPGAs is challenging, because there is a growing gap in the level of abstraction between analytics solutions developed with big data analytics frameworks and accelerated kernels developed for heterogeneous components. In this article, we focus on FPGA accelerators, which have seen wide-scale deployment in large cloud infrastructures. FPGAs allow the implementation of highly optimized hardware architectures, tailored exactly to an application and unburdened by the overhead associated with traditional general-purpose computer architectures. FPGAs implementing dataflow-oriented architectures with high levels of (pipeline) parallelism can provide high application throughput, often with high energy efficiency. Latency-sensitive applications can leverage FPGA accelerators by connecting directly to the physical layer of a network and performing data transformations without going through the software stacks of the host system. While these advantages of FPGA accelerators hold promise, difficulties associated with programming and integration limit their use. This article explores the existing practices in big data analytics frameworks, discusses the aforementioned gap in development abstractions, and provides some perspectives on how to address these challenges in the future.