Penguin Computing Announces Support for New 24GB NVIDIA Tesla M40 GPU…


Penguin Computing, a provider of high-performance computing, enterprise data center and cloud solutions, announced Open Compute Project (OCP)-based systems that reinforce both its continued collaboration with NVIDIA and new options in Penguin Computing’s Magna family of OpenPOWER-based servers.

“Customers benefit when we partner with exceptional organizations like NVIDIA, the OpenPOWER Foundation and the Open Compute Foundation in developing our systems,” said Jussi Kukkonen, Director of Product Management, Penguin Computing. “An essential part of our mission is to provide customers with form factor flexibility, choice of architecture and peak performance, which are all hallmarks of Penguin Computing.”

Penguin Computing introduced the company’s latest systems based on OpenPOWER architecture at the OpenPOWER Summit. The Penguin Magna 2002 combines the dual processor OpenPOWER platform with the NVIDIA® Tesla® Accelerated Computing Platform in a conventional EIA form factor. This new architecture option is a demonstration of the company’s continuing commitment and investment in accelerated computing and customer choice.

NVIDIA’s Tesla M40 GPU, the most powerful accelerator designed for training deep neural networks, now provides 24GB of GDDR5 memory. It is being validated on all Penguin Computing GPU host platforms, including both Intel x86 and OpenPOWER host architectures. Penguin Computing provides optimized systems for accelerated computing, with CPU-to-GPU ratios of 1:1, 1:2 and 1:4.

Penguin Computing also announced support for the NVIDIA Tesla M4 GPU accelerator in its OCP-based Tundra ES 1930g open compute server. The Tesla M4 GPU is a low-power, small form-factor accelerator for deep learning inference, as well as streaming image and video processing.

“Our hyperscale accelerator line enables developers to drive deep learning development in large data centers and create new classes of applications for artificial intelligence,” said Roy Kim, group product manager of Accelerated Computing at NVIDIA. “Penguin Computing offers rich deployment options for NVIDIA GPU technologies, including high-density, low TCO platforms supporting the Tesla M4 GPU, and systems with memory and I/O subsystem scalability designed for developing deep neural networks with our Tesla M40 GPUs.”

Visit http://www.penguincomputing.com or contact your local Penguin Computing Representative for more information and product availability on these Penguin Computing systems. Visit Penguin Computing’s booth #510 at the NVIDIA GPU Technology Conference (GTC) and booth #1409 at the co-located OpenPOWER Summit.

About Penguin Computing

Penguin Computing is one of the largest private suppliers of enterprise and high performance computing solutions in North America and has built and operates the leading specialized public HPC cloud service, Penguin Computing on Demand (POD). Penguin Computing pioneers the design, engineering, integration and delivery of solutions based on open architectures, comprising non-proprietary components from a variety of vendors. Penguin Computing is also one of a limited number of authorized Open Compute Project (OCP) solution providers, leveraging this Facebook-led initiative to bring the most efficient open data center solutions to a broader market, and has announced the Tundra product line, which applies the benefits of OCP to high performance computing. Penguin Computing has systems installed with more than 2,500 customers in 40 countries across eight major vertical markets. Visit http://www.penguincomputing.com to learn more about the company, and follow @PenguinHPC on Twitter.

Penguin Computing, Scyld ClusterWare, Scyld Cloud Manager, Scyld Cloud Workstation, Relion, Altus, Penguin Computing on Demand, POD, Tundra and Arctica are trademarks or registered trademarks of Penguin Computing, Inc.

Media Contact:

Phillip Bergman
Email: pbergman(at)viewstream(dot)com

Cell: 845-728-3984
