MLCommons
There has long been a desire for a series of industry-standard machine learning benchmarks, akin to the SPEC benchmarks for CPUs, in order to compare relative solutions. Over the past two years, MLCommons, an open engineering consortium, has been discussing and disclosing its MLPerf benchmarks for training and inference, with key consortium members releasing benchmark numbers as the series of tests has been refined. Today we see the full launch of MLPerf Inference v1.0, along with ~2000 results submitted to the database. Alongside this launch, a new MLPerf Power Measurement technique, which provides additional metadata on these test results, is also being disclosed.
Graphcore Series E Funding: $710m Total, $440m Cash-in-Hand
For those who aren’t following the AI industry, one of the key metrics to observe for a number of these AI semiconductor startups is the amount of funding they...
by Dr. Ian Cutress on 1/4/2021