CS Digest Section: High Performance Computing
A new high-performance computing (HPC) initiative announced this week by the U.S. Department of Energy will help U.S. industry accelerate the development of new or improved materials for use in severe environments. Los Alamos National Laboratory, with a strong history in the materials science field, will be taking an active role in the initiative.
Argonne Steps up to the Exascale Computing Project
Argonne National Laboratory, a US Department of Energy (DOE) science and energy lab located outside of Chicago, provides supercomputing resources aimed at accelerating the pace of discovery and innovation. It is home to Mira, currently the ninth fastest supercomputer in the world, and its new Theta system, which will serve as a bridge between Mira and its future exascale system.
https://insidehpc.com/2017/08/argonne-steps-exascale-computing-project/
HPE Debuts Its Next-Gen Computer–Sans Much-Anticipated Memristors
The prototype, now on display at The Atlantic's On the Launchpad: Return to Deep Space conference in Washington, D.C., features 1,280 high-performance microprocessor cores, each of which reads and executes program instructions in unison with the others, with access to a whopping 160 terabytes (TB), or 160 trillion bytes, of memory.
ORNL Researchers Bridge the Gap Between R, HPC Communities
Many now consider simulation the third pillar of scientific inquiry, alongside the centuries-old pillars of theory and experiment.
https://www.hpcwire.com/off-the-wire/ornl-researchers-bridge-gap-r-hpc-communities/
Google Says Its Custom Machine Learning Chips Are Often 15-30x Faster Than GPUs and CPUs
It's no secret that Google has developed its own custom chips to accelerate its machine learning algorithms. The company first revealed those chips, called Tensor Processing Units (TPUs), at its I/O developer conference back in May 2016, but it never went into much detail about them, except to say that they were optimized around the company's TensorFlow machine learning framework.
Supercomputer Simulation Offers Peek at the Future of Quantum Computers
To find out whether quantum computers will work properly, scientists must simulate them on a classical computer. Now a record-breaking experiment has simulated the largest quantum computer yet.
https://www.technologyreview.com/s/604140/a-milestone-for-quantum-computing/
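What makes such simulations a supercomputer-scale problem is that the state vector doubles with every added qubit: an n-qubit state holds 2^n complex amplitudes, so 45 qubits already require roughly half a petabyte at 16 bytes per amplitude. A minimal state-vector sketch in Python with NumPy (function and variable names here are illustrative, not from the record-breaking study):

```python
import numpy as np

def apply_gate(state, gate, target, n_qubits):
    """Apply a 2x2 single-qubit gate to the `target` qubit of a state vector.

    Rather than building the full 2^n x 2^n operator, reshape the state
    into an n-axis tensor (one axis of size 2 per qubit) and contract the
    gate against the target axis only.
    """
    psi = state.reshape([2] * n_qubits)
    psi = np.tensordot(gate, psi, axes=([1], [target]))  # new axis lands first
    psi = np.moveaxis(psi, 0, target)                    # restore axis order
    return psi.reshape(-1)

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0  # start in |000>

# Hadamard gate; applying it to every qubit yields a uniform superposition
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
for q in range(n):
    state = apply_gate(state, H, q, n)

probs = np.abs(state) ** 2  # each of the 2^n basis states has probability 1/8
```

The reshape-and-contract trick is what real simulators use to keep the per-gate cost at O(2^n) instead of O(4^n); the memory wall, not the arithmetic, is what caps how many qubits a classical machine can mimic.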
Researchers Use World’s Smallest Diamonds to Make Wires Three Atoms Wide
The LEGO-style building method has potential for making one-dimensional materials with extraordinary properties.
Cray Awarded $26 Million Contract From the Department of Defense High Performance Computing Modernization Program
The Cray systems will be located at the U.S. Army Engineer Research and Development Center DoD Supercomputing Resource Center (ERDC DSRC) in Vicksburg, Mississippi.
USAF Lab Gets First Agile Condor Pod
The pod system, with scalable, low size, weight and power hardware architecture, is designed to enable on-board processing of large quantities of sensor data through high-performance embedded computing (HPEC).
https://www.shephardmedia.com/news/digital-battlespace/usaf-lab-gets-first-agile-condor-pod/
All-Phase-Change Neuromorphic Device Recognises Correlations
Traditional programmable computers are facing fundamental limitations in terms of speed and size, inspiring increasing interest in alternative paradigms such as neuromorphic computing.
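The correlation-recognition task in the headline can be illustrated in software, purely as a toy analogue and not as the device's phase-change mechanism: a single threshold neuron with plastic synapses, where a Hebbian rule potentiates synapses whose inputs coincide with an output spike and depresses the rest, so mutually correlated input streams end up with the strongest weights. All names, thresholds, and rates below are illustrative assumptions.

```python
import random

def detect_correlations(n_inputs=10, correlated=(0, 1, 2),
                        steps=5000, lr=0.002, seed=42):
    """Toy Hebbian correlation detector (illustrative sketch only).

    Inputs listed in `correlated` share a common random driver and so tend
    to spike together; the remaining inputs spike independently. A simple
    integrate-and-fire neuron spikes when its weighted input crosses a
    threshold, and each synapse is nudged up when its input coincides with
    an output spike and down when it stays silent during one.
    """
    rng = random.Random(seed)
    corr = set(correlated)
    w = [0.5] * n_inputs  # all synapses start equal
    for _ in range(steps):
        shared = rng.random() < 0.1  # common event driving the group
        x = [
            1 if (i in corr and (shared or rng.random() < 0.02))
            or (i not in corr and rng.random() < 0.1) else 0
            for i in range(n_inputs)
        ]
        # integrate-and-fire: spike when total drive crosses the threshold
        y = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 1.0 else 0
        if y:
            # Hebbian update: coincident synapses up, silent ones down
            w = [min(1.0, max(0.0, wi + lr * (xi - 0.3)))
                 for wi, xi in zip(w, x)]
    return w

weights = detect_correlations()
# the correlated group's weights end up clearly above the others
```

In the hardware described by the research, the synaptic weight is a phase-change cell's conductance rather than a floating-point number, but the learning principle, strengthening connections that fire together with the neuron, is the same.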