CS Digest Section: High Performance Computing
A new high-performance computing (HPC) initiative announced this week by the U.S. Department of Energy will help U.S. industry accelerate the development of new or improved materials for use in severe environments. Los Alamos National Laboratory, with a strong history in the materials science field, will be taking an active role in the initiative.
Argonne National Laboratory, a U.S. Department of Energy (DOE) science and energy lab located outside of Chicago, provides supercomputing resources aimed at accelerating the pace of discovery and innovation. It is home to Mira, currently the ninth fastest supercomputer in the world, and its new Theta system, which will serve as a bridge between Mira and its next-generation successor, Aurora.
The prototype now on display at The Atlantic's On the Launchpad: Return to Deep Space conference in Washington, D.C., features 1,280 high-performance microprocessor cores, each of which reads and executes program instructions in unison with the others, with access to a whopping 160 terabytes (TB), or 160 trillion bytes, of memory.
Many now consider simulation the third pillar of scientific inquiry, alongside the centuries-old pillars of theory and experiment.
It's no secret that Google has developed its own custom chips to accelerate its machine learning algorithms. The company first revealed those chips, called Tensor Processing Units (TPUs), at its I/O developer conference back in May 2016, but it never went into much detail about them beyond saying that they were optimized around the company's TensorFlow machine learning framework.
To find out whether quantum computers will work properly, scientists must simulate them on a classical computer. Now a record-breaking experiment has simulated the largest quantum computer yet.
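The reason such simulations set records is that the cost grows exponentially: an n-qubit state is a vector of 2^n complex amplitudes, so memory doubles with every added qubit. A minimal sketch of this statevector approach (illustrative only; the gate-application helper and the 3-qubit example are my own, not from the record-breaking experiment):

```python
import numpy as np

# Hadamard gate: puts a single qubit into an equal superposition.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def apply_gate(state, gate, qubit, n):
    """Apply a 2x2 single-qubit gate to `qubit` of an n-qubit statevector."""
    # Reshape the flat 2**n vector so the target qubit is its own axis,
    # contract that axis with the gate, then restore the axis order.
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [qubit]))
    psi = np.moveaxis(psi, 0, qubit)
    return psi.reshape(-1)

n = 3                                  # 3 qubits -> 2**3 = 8 amplitudes
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                         # start in |000>
for q in range(n):
    state = apply_gate(state, H, q, n) # Hadamard on every qubit
# All 8 basis states now have equal probability 1/8.
```

At 45+ qubits the statevector alone needs petabytes of memory, which is why these simulations push the limits of classical supercomputers.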
LEGO-style Building Method Has Potential for Making One-Dimensional Materials with Extraordinary Properties.
The Cray systems will be located at the U.S. Army Engineer Research and Development Center DoD Supercomputing Resource Center (ERDC DSRC) in Vicksburg, Mississippi.
The pod system, with a scalable, low size, weight, and power (SWaP) hardware architecture, is designed to enable on-board processing of large quantities of sensor data through high-performance embedded computing (HPEC).
Traditional programmable computers are facing fundamental limitations in terms of speed and size, inspiring increasing interest in alternative paradigms such as neuromorphic computing.