Introduction to ARM Servers

"Intel does not have any competition whatsoever in the midrange and high-end (x86) server market". We came to that rather boring conclusion in our review of the Xeon E5-2600 v2. That date was September 2013.

At the same time, the number of announcements and press releases about ARM server SoCs based on the new ARMv8 ISA was almost uncountable. AppliedMicro announced its 64-bit ARMv8 X-Gene back in late 2011. Calxeda sent us a real ARM-based server at the end of 2012. Texas Instruments, Cavium, AMD, Broadcom, and Qualcomm all announced that they would challenge Intel in the server market with ARM SoCs. Today, the first retail products have finally appeared in the HP Moonshot server.

There has been no shortage of bold statements about ARM server SoCs. For example, Andrew Feldman, the founder of micro server pioneer SeaMicro and the former head of the server department at AMD, stated: "In the history of computers, smaller, lower-cost, and higher-volume CPUs have always won. ARM cores, with their low-power heritage in devices, should enable power-efficient server chips." One of the most infamous Silicon Valley insiders even went so far as to say, "ARM servers are currently steamrolling Intel in key high margin areas but for some reason the company is pretending they don't exist."

Rest assured, we will not stop at opinions and press releases. Once people started talking specifications, we really got interested. Let's see how the Cavium ThunderX, AppliedMicro X-Gene, Broadcom Vulcan, and AMD Opteron A1100 compare to the current and future Intel server chips. We are working hard to get all of these contenders into our lab, and we are having some success, but it is too soon for a full-blown shootout.

Micro Servers and Scale-out Servers

Micro servers were the first target of the ARM licensees. Typically, a discussion about micro servers quickly turns into a wimpy versus brawny core debate. One reason is that SeaMicro, the inventor of the micro server, first entered the market with Atom CPUs. A second reason is that Calxeda, the pioneer of ARM-based servers, had to work with the Cortex-A9, a wimpy core that could not cope with most server workloads. Wikipedia also associates micro servers with very low power SoCs: "Very low power and small size server based on System-on-Chip, typically centered around ARM processor".

Micro servers are typically associated with low-end servers that serve static HTML, cache web objects, and/or function as slow storage servers. It's true that you will not find a 150W high-end Xeon inside a micro server, but that does not mean that micro servers are defined by low power SoCs. In fact, the most successful micro servers are based on 15-45W Xeon E3s. SeaMicro clearly indicated that there was little interest in its low power Atom-based systems, but that sales spiked once it integrated Xeon E3s.

Currently, micro servers are still a niche market. But micro servers are definitely not hype; they are here to stay, although we don't think they will become as dominant as rack servers or even blade servers in the near future. To understand why we make such a bold statement, it is important to understand the real reason why micro servers exist.

Let us go back to the past decade (2005-2010). Virtualization was (and is) embraced as the best way to make enterprises with many heterogeneous applications running on underutilized servers more efficient. RAM capacity and core counts shot up. Networking and storage lagged, but caught up – more or less – as flash storage, 10 Gbit Ethernet, and SR-IOV became available. The trend to notice is that virtualization made servers more I/O feature rich: the number and speed of NICs and PCIe expansion slots for storage increased quickly. Servers based on the Xeon E5 and Opterons have become "software defined datacenters in a box" with virtual switching and storage. The main driver for buying complex servers with high processor counts and more I/O devices is simple: professionals want the benefits that highly integrated virtualization software brings. Faster provisioning, high availability (HA), live migration (vMotion), disaster recovery (DR), keeping old services alive (running on Windows 2000, for example): virtualization made everything so much easier.
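To make the SR-IOV point a bit more concrete: on a Linux host, an SR-IOV capable NIC advertises through sysfs how many virtual functions it can be split into, each of which can be handed to a VM directly instead of going through the hypervisor's software switch. Below is a minimal sketch that lists these counts (assuming a Linux host with the standard sysfs layout; on a machine without SR-IOV hardware it simply prints nothing):

```python
import glob
import os

# Every SR-IOV capable NIC exposes two sysfs files under its device node:
# sriov_totalvfs (how many virtual functions the hardware supports) and
# sriov_numvfs (how many are currently enabled).
for total_path in glob.glob("/sys/class/net/*/device/sriov_totalvfs"):
    nic = total_path.split("/")[4]  # interface name, e.g. "eth0"
    with open(total_path) as f:
        total = int(f.read())
    with open(os.path.join(os.path.dirname(total_path), "sriov_numvfs")) as f:
        enabled = int(f.read())
    print(f"{nic}: {enabled}/{total} virtual functions enabled")
```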

But what if you do not need those features because your application is spread among many servers and can tolerate a few hardware outages? What if you do not need complex hardware sharing features such as SR-IOV and VT-d? The prime example is an application like Facebook, but quite a few smaller web farms are in a similar situation. If you do not need the features that come with enterprise virtualization software, you are just adding complexity and (consultancy/training) costs to your infrastructure.
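What "can tolerate a few hardware outages" means in practice is that fault tolerance moves out of the hypervisor and into the application: instead of HA restarting a VM or vMotion evacuating a host, the client simply tries another replica. A minimal sketch of that pattern (the hostnames and two-replica setup are hypothetical):

```python
import urllib.request

# Hypothetical stateless web replicas: any node can answer a request,
# so a dead node is simply skipped rather than "failed over" by a hypervisor.
REPLICAS = ["http://web-01.example.com", "http://web-02.example.com"]

def fetch(path: str) -> bytes:
    last_error = None
    for base in REPLICAS:
        try:
            with urllib.request.urlopen(base + path, timeout=2) as resp:
                return resp.read()
        except OSError as err:  # URLError and timeouts both derive from OSError
            last_error = err    # node down or unreachable: try the next one
    raise RuntimeError(f"all replicas failed: {last_error}")

# Example: fetch("/health") succeeds as long as at least one replica is up.
```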

Unfortunately, as always, industry analysts came up with unrealistically high predictions for the new micro server market: by 2016, micro servers would be 10% of the market, no less than "a 50 fold jump"! (A 50-fold jump to 10% implies that micro servers held only about 0.2% of the market at the time.) The simple truth is that there is a lot of demand for "non-virtualized" servers, but they do not all have to be as dense and low power as the micro servers inside the Boston Viridis. The extremely dense micro servers with their very low power SoCs are not a good match for most workloads out there, with the exception of some storage and memcached machines. But there is a much larger market for servers that are denser than current rack servers yet less complex and cheaper than current blade servers, and there is demand for systems with a relatively strong SoC: currently, that means SoCs with a TDP in the 20W-80W range.

Not convinced? ARM and the ARM licensees are. The first thing that Lakshmi Mandyam, director of server systems at ARM, emphasized when we talked to her is that ARM servers will be targeting scale-out servers, not just micro servers. The big difference is that micro servers use (very) low power CPUs, while scale-out servers are simply servers that can run lots and lots of threads in parallel.

Comments

  • beginner99 - Tuesday, December 16, 2014

    Agree. I just don't see it. What wasn't mentioned (or I might have missed it) is Intel's Turbo technology. Does ARM have anything similar? Single-threaded performance matters. If a website takes twice as long to be built by the server, the user can notice it. And given the complexity of modern web sites, this is IMHO a real issue. Latency or "service time" is greatly affected by single-threaded performance. That's why virtualization is great: put tons of low-usage stuff on the same physical server, and yet each request profits from the single-threaded performance.

    Now these ARM guys are targeting this high single-threaded performance, but why would any company change? The whole software stack would have to change as well, and don't forget that the software usually costs way, way more than the hardware it runs on. So if you save 10% on the SoC, you maybe save less than 1% on the total BOM including software. They can't win on price, and on performance/watt Intel still has the best process. So no, I don't see it, except for niche markets like these MIPS SoCs from Cavium.
  • Ratman6161 - Wednesday, December 17, 2014

    "Xeon performance at ridiculous prices" I just don't get the "ridiculous prices" comment. To me, it seems like hardware these days is so cheap they are practically giving it away. I remember in the days of NT 4.0 Servers we paid $40K each for dual socket Dell systems with 16 GB Ram.

    A few years later we were doing Windows 2000 Server on Dell 2850's that were less than half the price.

    Then in 2007 we went the VMware route on Dell 2950's, where the price actually went up to $23K, but we were getting dual sockets/8 cores and 32GB of RAM, so they made the $40K servers we bought years before look like toys.

    Four years later we got R710's that were dual socket/12 cores and 64GB of RAM and made the $23K 2950's look like clunkers, but the price was once again almost halved at about $12K.

    Today we are looking at replacing the R710's with the latest generation, which will have even more cores and more RAM for about the same price.

    So to me, the prices don't seem ridiculous at all. The servers themselves now make up only a fraction of our hardware costs, with the expensive items being SAN storage. But that too is a lot cheaper. We are looking at going from our two SANs with 4Gb Fibre Channel connections to a single SAN with 10Gb Ethernet and more storage than the two old units combined... but still costing less than one of the old SANs did on its own. So storage is still expensive, but less than half of what we paid in 2007, and for more capacity.

    The real costs in the environment are in software licensing, and no, I'm not talking about Microsoft or even VMware. Licensing those products is chump change compared to the enterprise software crooks... that's where the real costs are. The infrastructure of servers, storage, and "plumbing" sorts of software like Windows Server and VMware are cheap in comparison.
  • mrdude - Tuesday, December 16, 2014

    Great article, Johan.

    I think the last page really describes why so many people, myself included, feel that ARM servers/vendors have a very good chance of entrenching themselves in the market. Server workloads are more complex and varied today than they have ever been, and it isn't just about high volume either: the Facebook example is a good one. These companies buy hardware by the truckload and can benefit immensely from customization that Intel may not have on offer.

    To add to that, what wasn't mentioned is that ARM, due to its 'license everything' business model, gives these same companies the opportunity to buy ready-made bits of uArch and, with a significantly smaller investment, build their own as-close-to-ideal SoC/CPU/co-processor that they need.

    Competition is a great thing for everyone.
  • JohanAnandtech - Tuesday, December 16, 2014

    True. Although it seems that only AMD really went for the "license almost everything" model of ARM.
  • mrdude - Tuesday, December 16, 2014

    Yep. And that's likely due to budget/timing constraints. I think they were gunning for the 'first to market' branding, but they couldn't meet their own timelines. Something of a trend with that company. I'm curious as to why we haven't heard a peep from AMD or its partners regarding performance or perf-per-watt. IIRC, we were supposed to see Seattle boards in Q3 of 2014.

    I also feel like ARM isn't going to stop at the interconnect. There's still quite a bit of opportunity for them to expand in this market.
  • cjs150 - Tuesday, December 16, 2014

    Ultimately, my interest in servers is limited, but I would like a simple home server that would tie together all my computers, NAS, tablets, and the other bits and bobs that a geek household has.
  • witeken - Tuesday, December 16, 2014

    Whoever is interested in Intel's data center strategy can watch Diane Bryant's recent presentation (including PDF): http://intelstudios.edgesuite.net/im/2014/live_im.... The Q&A from 2013 also has some comments about ARM servers: http://intelstudios.edgesuite.net/im/2013/live_im....
  • Kevin G - Tuesday, December 16, 2014

    "Now combine this with the fact that Windows on Alpha was available." - Except that Windows NT was available for Alpha. There was a beta for Windows 2000 in both 32 bit and 64 bit flavors for the curious.

    I disagree with the reason why Intel beat the RISC players. Two of the big players were defeated by corporate politics: Alpha and PA-RISC were under the control of HP, which was planning to migrate to Itanium. That leaves POWER, SPARC, MIPS, and Intel's own Itanium architecture at the turn of the millennium. Of those, POWER and SPARC are still around as they continue to execute. So the only two victims that can be claimed by better execution are MIPS and Intel's own Itanium.

    While IBM and Oracle are still executing on hardware, the Unix market as a whole has decreased in size. The software side isn't as strong as it used to be. Linux has risen and proven itself to be a strong competitor to the traditional Unix distributions. Open source software has emerged to fill many of the roles Unix platforms were used for. Furthermore, many of these applications, like Hadoop and Cassandra, are designed to be clustered and to tolerate node failures. There is no need to spend extra money on big iron hardware if the software doesn't need that level of RAS for uptime. The generally lower cost of Linux and open source software (though they're not free, due to the need for support), combined with further tightening of budgets during the Great Recession, has made many businesses reconsider their Unix platforms.
  • JohanAnandtech - Tuesday, December 16, 2014

    My main argument was that the RISC market was fragmented, and not comparable to what the x86 market is now (Intel dominating with a very large software base).

    While I agree with many of your points, you cannot say that SPARC is not a victim. In the '90s, Sun had a very broad product range, from entry-level workstations to high-end servers. The same is true for the POWER CPUs.

  • Kevin G - Wednesday, December 17, 2014

    The RISC market was fragmented on both hardware and software. The greatest example of this would be HP, which had HP-UX, Tru64, OpenVMS, and NonStop as operating systems and tried to get them all migrated to a common hardware platform: Itanium. How each platform handled backwards compatibility with its RISC roots differed (and Tru64 was killed in favor of HP-UX).

    The midrange RISC workstation suffered the same fate as the dual socket x86 workstation market: good enough hardware and software existed for less. The race to 1GHz between Intel and AMD cut out the performance advantage the RISC platforms carried. Not to say that the RISC chips didn't improve in performance, but vendors never took steps to improve their prices. Windows 2000 and the rise of Linux early in the 2000s gave x86 a software price advantage too, while offering good enough reliability.

    Sun's hardware business did suffer some horrible delays, which helped lead the company into Oracle's acquisition. Notable was the Rock chip, which featured out-of-order execution but also out-of-order instruction retirement. Sun was never able to validate any prototype silicon and ship it to customers.
