For users who have been following our recent coverage of Aquantia’s new multi-gigabit Ethernet solutions for consumers, the AQtion AQC107 and AQC108 controllers (along with their corresponding PCIe cards), the running theme through all of the >1 Gb Ethernet standards on RJ-45 has been the availability of switches. There are plenty of enthusiasts who would happily upgrade their home network infrastructure to something bigger than gigabit Ethernet if there were a realistically priced alternative. Current 10GBase-T solutions, for example, can cost >$150 per port for the systems and >$100 per port for the switch, whereas gigabit Ethernet is ~$2-5 per port. Aquantia is hoping to break that mould, and showed off some of the switch systems its partners are working to bring to market.

I should state at this point that what was on display were early prototypes – Aquantia is working with ODMs and OEMs on getting the fundamentals of such switches right first, before those partners actually come to market. Aside from the slew of typical enterprise players showing enterprise switches, Aquantia wasn’t prepared to state on record who it is partnering with in the consumer space for switches, although we were told to suspect the usual suspects. We were also told that any information we got from the meeting should be considered preliminary and non-final, with potentially large differences between now and the final products.

All that being said, we were told that Aquantia is working on three main solutions for ODMs to look into: a 4-port solution, a 5-port solution, and an 8-port solution. The heart of these platforms is Aquantia silicon supporting four ports, with the 5-port switch version using a 4-port plus 1-port silicon design. The models on display, and used as the top image in this news piece, were made in collaboration with Cameo, which will be one of the first vendors (if not the first) to come to market with a product.


An older reference design

Aquantia demonstrated basic iPerf performance over the network using the switch in 10G mode, with two Aquantia AQC107 add-in cards between two systems showing 9.5 Gbps of bandwidth in a basic test. The demo switch being used was by no means a final version in terms of looks or noise levels (it was overengineered for the demo), but this is something Aquantia expects OEMs to address rapidly.
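For anyone who wants to run a similar sanity check on their own hardware, below is a minimal sketch of a raw TCP throughput test in the spirit of the iPerf demo. This is not Aquantia’s test harness; the address, port, and transfer size are placeholder assumptions, and real measurements are better done with iperf itself.

# Minimal two-machine TCP throughput check (Python 3.8+).
# Not Aquantia's setup; HOST, PORT, and TOTAL_BYTES are placeholder assumptions.
import socket
import sys
import time

HOST = "192.168.1.50"   # placeholder address of the receiving machine
PORT = 5201             # iperf3's default port, reused here for convenience
CHUNK = 1 << 20         # send/receive 1 MiB at a time
TOTAL_BYTES = 4 << 30   # push 4 GiB so the measurement settles

def serve() -> None:
    """Receive data and report the achieved rate (run this on one machine)."""
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            received, start = 0, time.perf_counter()
            while True:
                buf = conn.recv(CHUNK)
                if not buf:
                    break
                received += len(buf)
            elapsed = time.perf_counter() - start
            print(f"{received * 8 / elapsed / 1e9:.2f} Gbit/s from {addr}")

def send() -> None:
    """Stream zero-filled buffers at the server (run this on the other machine)."""
    payload = bytes(CHUNK)
    with socket.create_connection((HOST, PORT)) as conn:
        sent, start = 0, time.perf_counter()
        while sent < TOTAL_BYTES:
            conn.sendall(payload)
            sent += len(payload)
        elapsed = time.perf_counter() - start
        print(f"{sent * 8 / elapsed / 1e9:.2f} Gbit/s sent")

if __name__ == "__main__":
    serve() if "-s" in sys.argv else send()

On a healthy 10GBase-T link the result should land in roughly the same ~9.5 Gbps ballpark Aquantia showed, with the remainder lost to TCP/IP and Ethernet framing overhead – though a single-threaded Python sender can bottleneck on CPU before saturating the link, which is exactly why purpose-built tools like iperf are the usual choice.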

Naturally, we asked about pricing and availability of the switches. With the aforementioned caveats, we were told that the switch vendors themselves will be the ones dictating pricing. That being said, after we suggested that pricing in the region of $250-$300 for an 8-port switch supporting Aquantia’s 10G solutions (so likely 5GBase-T and 2.5GBase-T as well) would be great, we were told that this was likely a good estimate. Previously in this price range, options were limited to a sole provider: ASUS’ XG-U1008, a switch with two 10GBase-T ports and six gigabit Ethernet ports for $200. Above that, some Netgear solutions were running almost $800 for an 8-port managed solution. So moving to eight full 10G ports in this price bracket would be amazing, and I told Aquantia to tell OEMs that at that price (roughly $30-$40 per port), those switches will fly off the shelves among enthusiasts who want to upgrade.
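To put that per-port figure in context, here is a quick back-of-the-envelope comparison using only the prices quoted in this piece; the Aquantia number is the suggested estimate rather than announced pricing, and the ASUS figure counts only its two 10G ports.

# Per-10G-port cost, using only the figures quoted in the article above.
# The Aquantia entry is the suggested $250-$300 estimate (midpoint shown), not announced pricing.
options = [
    ("Aquantia target, 8x 10GBase-T (~$275 midpoint)", 275, 8),
    ("Netgear managed, 8x 10GBase-T (~$800)", 800, 8),
    ("ASUS XG-U1008, counting only its 2x 10GBase-T ports ($200)", 200, 2),
]
for name, price, ten_gig_ports in options:
    print(f"{name}: about ${price / ten_gig_ports:.0f} per 10G port")

Even at the top of the suggested range, that works out to roughly a third of the per-10G-port cost of the existing options.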

Given the early nature of the designs on show, discussions on availability are expected to happen later this year, although Aquantia is likely to let partners announce their own products and time scales for the roll-outs. 

Comments

  • vladx - Monday, June 5, 2017 - link

    Great news, now I can finally make use of SSDs' full speeds at a reasonable price.
  • DanNeely - Monday, June 5, 2017 - link

    Would a 4+1 port setup be a good base to build a router on? If not, I'm not sure I see the logic in that sort of setup; just adding a 5th port to a switch seems kinda meh as a use case.
  • Black Obsidian - Monday, June 5, 2017 - link

    A router might make sense.
    Alternately, so would a switch; 4 10GbE ports for clients, then 1 additional for backhaul to a router or larger 1GbE switch.
  • CaedenV - Tuesday, June 6, 2017 - link

    It depends on how much overprovisioning is going on. A typical small-business-grade 1Gbps 16-port switch cannot get anywhere near 16Gbps of throughput, and that is probably how these are saving so much money too. You have 4 10gig ports... but that does not mean the controller chips can do 40Gbps of throughput (80Gbps bidirectional!). I would guess that these chips do somewhere in the neighborhood of 12-15Gbps on a single controller. Plenty to be substantially faster than a 1gig switch, but you would be hard pressed to get more than 2 heavy connections running at once.
    So, add a 5th port on a 2nd controller, and then you have a dedicated 10gig port that will always do 10gig of traffic to a server or larger switch. Then a few intermittent power users go on the not-so-dedicated 10gig ports that share a single controller.

    As for a router... that all depends on what you are doing. Most likely not. I mean, your home/business internet connection likely isn't going to be faster than 1gbps for a while yet. You would probably still be better served having a 1gbps router, and use this switch as a go-between for your server and power users.
  • SharpEars - Thursday, June 8, 2017 - link

    I beg to differ, almost all switches have fabric that handles full bidirectional throughput on all ports. This isn't the '90s.
  • nagi603 - Monday, June 5, 2017 - link

    Finally!
    I was looking at speeding up my local SOHO network and the 10Gbit prices are just ghastly (50x more expensive switches) compared to 1Gbit. Frankly, I was thinking about adding extra lines and using aggregation instead, as I would not need the full 10Gbit.
  • dgingeri - Monday, June 5, 2017 - link

    Link aggregation is for use on machines that have many connections. Each transfer is still limited to the speed of just one connection, regardless of the protocol.

    On the other hand, the D-Link DGS-1510 series is pretty inexpensive for 10G. I got a DGS-1510-28X, with 24 1G ports and 4 10G SFP+ ports, for just under $500 with the SFPs back in December. I just looked up the price, and it is a little cheaper now than when I bought it. The cards I use are available for under $100 each on eBay right now, including the SFPs. It's still a bit pricey, but not absurdly so anymore.
  • azazel1024 - Monday, June 5, 2017 - link

    No it isn't, and hasn't been for a while.

    Windows 8 supported SMB Multichannel, and so do 8.1 and 10. In fact, I think the latest versions of Samba may also support it in their flavor of SMB3+ as well.

    I have two GbE links to my switch for my server and for my desktop. I regularly get 238MiB/sec file transfers between both machines. It works across aggregated ports on both of my 16 port switches as well (though I currently have my desktop and server on the same switch, I did test it across switches).

    So to clarify: link aggregation for the CLIENT is limited to single-port speeds, because if I had my ports aggregated, SMB Multichannel would not work. However, for link aggregation across switches, THAT IS NOT THE CASE. Also, you CAN use multiple ports and get >single-port speed if you have the client support (with SMB Multichannel, anyway).

    I would love to see a "low cost" 2.5GbE 24 port switch to move my entire network up a step. Really only my desktop and server could leverage it right now, but wireless is moving towards being able to and hopefully USB3 (or 3.1) 2.5GbE adapters are coming as well as integrated 5/2.5GbE ports for laptops sometime.

    Right now some of the nicer 802.11ac 3x3 APs and clients can push around 90-100MiB/sec same-room performance. Stepping up to 160MHz channels on 802.11ac, or to 802.11ax in good conditions and/or with MU-MIMO, is absolutely going to saturate a 1GbE link.
  • dgingeri - Monday, June 5, 2017 - link

    That's a software side multiple link connection, and is exclusive to SMB 3. That's not link aggregation. Link aggregation is a different animal. LACP and Round Robin still just have one link per stream, so it is narrowed down. Even Microsoft's software network connection linking is narrowed to one connection per stream. The SMB 3 multilink transfers can only take place between machines capable of SMB 3, and only through a SMB 3 link. So, iSCSI or NFS won't be able to take advantage of that at all. That's a pretty narrow set of criteria.
  • nils_ - Thursday, June 8, 2017 - link

    I think it's also possible with the Linux bonding driver, but this needs compatible devices on both ends.
