A few weeks ago, San Francisco hosted the annual OSIsoft users conference, where multiple private and public entities shared their experiences with real-time data analytics. OSIsoft developed the PI server (that’s pi, as in 3.14159), which integrates electronic “tags” from multiple sensors into one platform for real-time situational awareness and historical analysis. Starting out in oil & gas and manufacturing, OSIsoft has expanded its domain to include a diverse set of fields, from renewable energy to High-Performance Computing (HPC) and sustainability metrics for industry. What OSIsoft has been doing, fairly quietly, is organizing the data needed for a “computational revolution.”
The evolution of computers has focused on faster and more precise processing. In the next few months, LLNL will be installing a 20 petaFLOPS computer, “Sequoia.” This massive machine will be able to perform 20 quadrillion (a quadrillion is a 1 followed by 15 zeros) floating-point operations per second. That is about a million times faster than a desktop computer. Sequoia will be used to solve models and equations with daunting accuracy, resolution, and speed.
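The “million times faster” comparison is easy to check with back-of-the-envelope arithmetic. The sketch below assumes a contemporary desktop sustains roughly 20 gigaFLOPS; that figure is an illustrative assumption, not a number from the article.

```python
# Back-of-the-envelope check of the "million times faster" claim.
# Assumption: a typical desktop CPU sustains roughly 20 gigaFLOPS
# (illustrative; actual desktop performance varies).
sequoia_flops = 20e15   # Sequoia: 20 petaFLOPS = 2 x 10^16 ops/sec
desktop_flops = 20e9    # assumed desktop: 20 gigaFLOPS

speedup = sequoia_flops / desktop_flops
print(f"Sequoia is roughly {speedup:,.0f}x a desktop")
```

With those numbers the ratio comes out to exactly one million, which is where the article’s comparison lands.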
The connection between companies like OSIsoft and HPC is twofold. First, LLNL is using OSIsoft’s PI server technology to meter and model its existing HPC facility. Every second, information about electrical demand, temperature, airflow, and the computer jobs running and in the queue is collected and integrated in one place for real-time management. An HPC facility is a fine-tuned instrument, yet one that is playing thousands of different songs at any one time. The data being collected are also used to calibrate models of future HPC facilities at LLNL. One of the most limiting factors in running an HPC facility is energy use: as computational load increases, cooling and electricity demand increase as well. The approach LLNL is taking is to model, design, and then build a new facility that leverages all of the lessons learned from earlier HPC facilities, so that the new facility will have the smallest possible electrical and greenhouse gas (GHG) footprint. And with tools like PI, the data are easily retrieved, analyzed, and visualized.
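The per-second integration described above can be sketched in a few lines. The record fields and tag names below are hypothetical illustrations of the idea, not actual PI tags or any OSIsoft API; the sketch simply collapses a batch of timestamped sensor readings into one latest-value snapshot per tag, the basic shape of a real-time situational-awareness view.

```python
from dataclasses import dataclass

# Hypothetical sensor reading; the tag names are illustrative,
# not real PI server tags.
@dataclass
class Reading:
    tag: str        # e.g. "power_kw", "inlet_temp_c", "airflow_cfm"
    value: float
    timestamp: int  # seconds since epoch

def latest_by_tag(readings):
    """Collapse a batch of readings into one snapshot: the most
    recent value seen for each tag."""
    snapshot = {}
    # Process in time order so later readings overwrite earlier ones.
    for r in sorted(readings, key=lambda r: r.timestamp):
        snapshot[r.tag] = r.value
    return snapshot

readings = [
    Reading("power_kw", 4800.0, 100),
    Reading("inlet_temp_c", 21.4, 100),
    Reading("power_kw", 4825.5, 101),  # newer reading replaces the older one
]
print(latest_by_tag(readings))
```

A production system layers much more on top (buffering, interpolation, history), but the latest-value-per-tag snapshot is the core of a real-time dashboard.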
The second connection between a business like OSIsoft and HPC is that OSIsoft is helping establish a beachhead in Big Data. We are now at a time when all of the data streams from multitudes of sensors can be used in novel ways. Using HPC, data integration tools will help close the loop from science to performance. For example, a simulation of an innovative windmill blade will only work so well using equations derived from “first principles,” the basic physics and engineering assumptions that describe established laws and phenomena. By integrating real data, possibly terabytes of data from live sensors, the real performance of a blade, a single windmill, or a whole wind farm can be captured and fed back to help calibrate the “true” performance of that blade. From smart meters in homes, to solar power output, to the numbers of cars plugged in to smart meters, all of these data will be invaluable in modeling, simulating, and managing the energy system of the near future. OSIsoft’s PI server, along with other approaches such as consistent data models, communication standards, and data consortia, provides an example of an invaluable data integration method. Through the use of these tools, we can innovate the operation of HPC facilities and harness the brewing data “storm” to achieve a low-carbon future.
For more on how HPC will change industry, read Noah Goldstein’s recent article in Sustainable Industries.
About the Author:
Noah Goldstein is a spatial scientist in LLNL’s Engineering Directorate, as well as the Laboratory’s Scientific Lead for Site Sustainability. He has a B.A. in biology from the University of California (UC) at Santa Cruz and an M.A. and Ph.D. in geography from UC Santa Barbara. He is an Accredited Professional under the U.S. Green Building Council’s Leadership in Energy and Environmental Design (LEED) program.