Google Cloud today announced new capabilities designed to accelerate scientific and technical research and lower the barriers to high performance computing (HPC) and artificial intelligence (AI).
New features include supercomputing-class infrastructure and state-of-the-art AI, giving researchers new capabilities and significantly increasing the computational power at their disposal to tackle some of the most complex problems in fields such as climate modeling and drug discovery.
A major highlight is the launch of new, higher-performance VMs. The HPC VMs, based on AMD EPYC processors and connected via Cloud RDMA, give researchers supercomputing-class clusters with superior scaling and parallel efficiency for tightly coupled workloads.
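One common way to reason about the scaling and parallel efficiency mentioned above is Amdahl's law, which bounds speedup by a workload's serial fraction. The sketch below is illustrative, not from the announcement; the 5% serial fraction is a hypothetical parameter, and real tightly coupled codes also pay communication costs, which low-latency interconnects such as RDMA reduce.

```python
def amdahl_speedup(n_nodes: int, serial_fraction: float) -> float:
    """Ideal speedup on n_nodes when serial_fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_nodes)

def parallel_efficiency(n_nodes: int, serial_fraction: float) -> float:
    """Speedup divided by node count; 1.0 means perfect linear scaling."""
    return amdahl_speedup(n_nodes, serial_fraction) / n_nodes

# Even a hypothetical 5% serial fraction caps 64 nodes well below 64x speedup,
# which is why per-node overheads and interconnect latency matter so much.
print(round(amdahl_speedup(64, 0.05), 2))       # → 15.42
print(round(parallel_efficiency(64, 0.05), 2))  # → 0.24
```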
In addition, the A-Series VMs powered by the latest NVIDIA GPUs offer the horsepower needed for AI, AI-assisted data analysis, genomics, computational fluid dynamics, finite element analysis, and other GPU-accelerated workloads. RDMA support enables faster communication across compute nodes, which is essential for running large-scale scientific applications efficiently.
Acknowledging the growing data-centric nature of contemporary research, Google Cloud also launched Managed Lustre, a high-performance, fully managed parallel file system.
This offering ensures researchers have the Input/Output (I/O) performance needed for even the most extreme-scale HPC and AI workloads. In addition, Google Cloud is broadening its range of compute options, from TPUs to custom silicon, letting scientists tailor computational resources more closely to the needs of their applications and maximize performance while controlling cost.
To make it easier to deploy and operate demanding HPC environments, Google Cloud released Blueprints in its Cluster Toolkit. These pre-built templates automate deployment of prerequisite infrastructure such as compute, storage, networking, and job scheduling services, which shortens deployment time and produces a stable, tuned environment for high-performance workloads.
By removing the need for researchers to assemble and configure complex HPC infrastructure by hand, Blueprints offer a proven starting point for lean, fast deployments.
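To give a sense of the approach, a Cluster Toolkit blueprint is a declarative YAML file that composes infrastructure modules. The sketch below is an illustrative assumption, not a recipe from the announcement; the project ID, module sources, and settings are placeholders that should be checked against the Cluster Toolkit documentation.

```yaml
# Hypothetical blueprint sketch: a VPC network plus a shared Filestore
# home directory. Module sources and values are illustrative only.
blueprint_name: hpc-demo

vars:
  project_id: my-project        # placeholder project
  deployment_name: hpc-demo
  region: us-central1
  zone: us-central1-a

deployment_groups:
  - group: primary
    modules:
      - id: network
        source: modules/network/vpc
      - id: homefs
        source: modules/file-system/filestore
        use: [network]
        settings:
          local_mount: /home
```

In this model, the toolkit's CLI expands the blueprint into the underlying deployment artifacts, so a working cluster comes from one file rather than hand-built infrastructure.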
Beyond raw computing power, Google Cloud also underscored the growing role AI agents will play in making research more productive.
AI-powered systems like these can help with a variety of tasks, from enhancing analysis to aiding discovery. By automating mundane tasks and surfacing intelligent insights, AI agents remove some of the barriers researchers face and let them focus on the creative and strategic parts of the job, potentially resulting in more impactful research.
To drive collaboration and knowledge sharing among its most advanced and specialized users, Google Cloud has additionally created the Advanced Computing Community.
The community will bring together specialists from Google, its technology partners, and HPC, AI, and quantum computing research centers and industry to discuss cloud technologies for scientific and technical computing.
These new developments further Google Cloud's ambition to become the world's leading platform for science and research, providing researchers with the tools, technology, and infrastructure to accelerate discovery and solve some of the world's most pressing problems.