3DoubleD - Friday, January 13, 2012 - link
I love high bandwidth network access as much as the next guy, but I recognize where it's needed. For example, I use a gigabit switch and Cat 6 cable for all my internal home networking, but my router is still 10/100 and connected by a Cat 5e cable.
Same goes for cell phones. Do you need 10+ Mbps to improve your phone experience? Generally I don't find a great difference between 3G and WiFi connected to a 26 Mbps cable modem. Whenever Anandtech writes about LTE, I keep thinking it's like they are comparing 9-9-9-18 RAM with 8-8-8-18 RAM... the real-world benefit just isn't tangible. Anandtech usually does a great job in this area. If you believe that faster cell phone network connections really improve the cell phone experience, show us in a real-world test, not a synthetic benchmark like Speedtest.
To me, LTE seems largely like marketing hype. Yes, it is good the technology is being implemented, and it is a step forward. But I refuse to get excited about it until there are no drawbacks compared with 3G, since there certainly aren't many tangible benefits. When battery life is competitive and there is wider coverage, LTE will be more attractive.
phantom505 - Friday, January 13, 2012 - link
I agree with you entirely.
I was really excited to see LTE take off; I thought it would finally mean high-speed internet out in the boondocks. Then I saw how they priced it and realized that rural America is still hosed.
Not only are they going to take forever to put LTE out there, they want a king's ransom for it too.
Pozz - Friday, January 13, 2012 - link
Well, the value of network speed greatly depends on usage. I suppose that if you tether a lot and/or use a lot of data-intensive applications, you could find some use for LTE. Still, I agree that if you just browse Facebook, 3G is plenty fine.
On a side note... any ETA for the Galaxy Nexus review? :D
phantom505 - Friday, January 13, 2012 - link
With the per-GB fees, tethering is expensive as hell. Sure, you could perhaps play games on it; too bad it'll cost you a small fortune to do so.
joncat - Friday, January 13, 2012 - link
I'd agree with you to an extent if all I did on my smartphone was check Facebook or lightly browse the web. But I want to be able to stream video, music, and remote desktop from anywhere I go, including in the car, train, etc. 3G just doesn't cut it for those use cases. AT&T's 3G almost does, but it's still not consistent enough to watch movies without occasionally buffering. Verizon's and Sprint's 3G is simply too slow. (I've used both and currently have both an AT&T and a Sprint smartphone.)
collegeguypat - Friday, January 13, 2012 - link
I've got the Thunderbolt and live in NYC. I can tell a huge real-world difference when out using my phone with 4G vs 3G.
Opening emails, having instantaneous picture loading vs 3-4 seconds for them to pop up on 3G. Little stuff like that.
I think it's more about what you use your phone for. Calls and texts won't see a difference, but web pages, downloading apps, email: it's all a lot more enjoyable with 10x the bandwidth to download the content. It's similar to always being on WiFi.
The closest thing I can compare it to is going from a slow DSL to a cable modem. Things are just overall more seamless and you don't have to sit and wait for anything to load.
Battery life is the biggest thing they need to improve upon IMHO.
name99 - Friday, January 13, 2012 - link
The issue that MATTERS is not so much the raw speed a particular user sees as how efficiently the limited total bandwidth is shared between multiple users.
The fact that I can get, say, 30 Mbps today on LTE is nice, but that's not going to last once LTE becomes standard. I see 30 Mbps when I'm the only person using the cell, but soon enough there will be 2, then 4, then 16 people sharing the cell with me.
So the big picture is that efficiency matters because we all win when resources are used more sensibly. Averaged across a wide range of environments, 3G has a spectral efficiency of around 0.5 b/s/Hz; 3.5G (throw MIMO in the mix) pulls that up to about 1. LTE increases that to about 1.6 --- which is pretty much what Brian is seeing.
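To put rough numbers on that, here is a back-of-the-envelope sketch in Python. It assumes the ~1.6 b/s/Hz figure above, a 10 MHz carrier, and a naive equal split among active users:

```python
# Rough sketch: per-user LTE throughput as a cell fills up.
# Assumptions: ~1.6 b/s/Hz average spectral efficiency (quoted above),
# a 10 MHz carrier, and an equal share per active user.

SPECTRAL_EFFICIENCY = 1.6   # b/s/Hz, averaged across environments
BANDWIDTH_HZ = 10e6         # 10 MHz LTE carrier

cell_capacity_mbps = SPECTRAL_EFFICIENCY * BANDWIDTH_HZ / 1e6
for users in (1, 2, 4, 16):
    print(f"{users:2d} active users: ~{cell_capacity_mbps / users:.1f} Mbps each")

# One user sees ~16 Mbps -- close to the ~15 Mbps average in the article;
# by 16 users it's ~1 Mbps each, which is why efficiency is what matters.
```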
The fundamental LTE technologies are:
- all-IP architecture (so lower latency, less obsolete overhead)
- OFDM (better utilization of large available frequency bands, with the ability to easily notch out frequency bands that are still off limits or subject to DFS)
- MACs (MU-MIMO and OFDMA) that are more efficient than the CDMA MAC of 3G
- MIMO (for higher throughput in certain environments, or further range in other environments).
These are ALL good things.
Coming up, the next stage of LTE deployment will focus on reducing interference through tower antennas that can use beam steering, along with dynamic negotiations between towers over frequency and beam angle allocation --- also good things.
I do wish that Brian's reporting would focus more on the underlying tech --- what's making this work, what have the carriers deployed in each location, what new chipsets are available at the carrier level, how many of the newest 3D directional antennas are being sold, etc. Likewise --- how common are LTE devices these days, and how much data is LTE data vs 3G data? How oversubscribed currently are cells in a large city? Do the carriers plan to switch soon from FDD to the more efficient (because you have better open-loop channel knowledge) TDD? Are they using the best available algorithms in running their new kit (e.g. waterfilling across all the OFDM channels; see the sketch below) or are they using just the basic algorithms to get the thing going?
etc etc
If he talks to the carriers as a journalist, they might tell him some of this stuff, and more of it is available in the trade press.
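For readers unfamiliar with the waterfilling reference above, a toy Python sketch of the idea; the per-subchannel gains and power budget are made-up numbers:

```python
import numpy as np

# Toy sketch of waterfilling power allocation across OFDM subchannels.
# Hypothetical channel gains and power budget, purely for illustration.

def waterfill(gains, total_power, noise=1.0):
    """Allocate power to maximize sum of log(1 + g*p/noise), sum(p) = total_power."""
    gains = np.asarray(gains, dtype=float)
    inv = noise / gains                 # "floor height" of each subchannel
    lo, hi = inv.min(), inv.max() + total_power
    for _ in range(100):                # bisect on the water level mu
        mu = (lo + hi) / 2
        if np.maximum(mu - inv, 0.0).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(mu - inv, 0.0)

gains = [2.0, 1.0, 0.25, 0.05]          # hypothetical subchannel gains
print(waterfill(gains, total_power=4.0).round(2))
# Strong subchannels get most of the power; the worst may get none.
```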
The obsession with "this is how fast my phone goes" rather than "this is the total capacity available to most users in [random large city]" is somewhat puerile, but, to be fair, it is an easy measurement to make, and it does say SOMETHING interesting --- in this case it states that the info we've been given regarding the expected spectral efficiency of this wave of LTE is actually right on the money.
(And depressingly low. 10 MHz with 64QAM, 2x2 MIMO, and 5/6 FEC would get you a nominal 10 × 6 × (5/6) × 2 = 100 Mbps throughput. In a clean WiFi environment you'd get close to that. [You'd see half that because of the WiFi MAC, but that's not relevant here, since the cell phone MAC is so different.]
We actually see an average of about 15 Mbps. It's not that WiFi is so much more smartly designed; it's that the cell phone job is just so much harder.
You can't use a wimpy 5/6 FEC; you have to use half your bits for error correction. Although 64QAM is in the spec (because it's easy), you have to be pretty close to the tower to use it. Most of the time you're using 16QAM or even 4QAM.
And your 2x2 antennas, much of the time, have to be used for diversity, not spatial multiplexing, to cope with fading.)
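A quick sketch of that arithmetic, using the nominal values stated above; the "typical" parameter set (16QAM, rate-1/2 FEC, one stream) is an illustrative assumption, not a measured value:

```python
# Sketch of the nominal-vs-realistic PHY rate arithmetic above.

def phy_rate_mbps(bw_mhz, bits_per_symbol, code_rate, streams):
    """Idealized PHY rate: bandwidth x modulation x FEC rate x MIMO streams."""
    return bw_mhz * bits_per_symbol * code_rate * streams

# Nominal best case: 10 MHz, 64QAM (6 bits/symbol), 5/6 FEC, 2 streams.
print(f"{phy_rate_mbps(10, 6, 5/6, 2):.0f} Mbps")  # 100 Mbps

# Typical case: 16QAM (4 bits/symbol), rate-1/2 FEC, diversity (1 stream).
print(f"{phy_rate_mbps(10, 4, 1/2, 1):.0f} Mbps")  # 20 Mbps, near the ~15 observed
```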
WiWavelength - Friday, January 13, 2012 - link
"Do the carriers plan to switch soon from FDD to the more efficient (because you have better open-loop channel knowledge) TDD[?]"No. Sprint/Clearwire is the only carrier in the US that currently has large scale plans for TD-LTE because it is the only carrier that has large swaths of unpaired spectrum (BRS/EBS 2500-2600 MHz, upwards of 100 MHz bandwidth per market).
Both AT&T (which just acquired its licenses from Qualcomm) and Dish Network also hold unpaired spectrum (Lower 700 MHz D/E block 6 MHz licenses). But their licenses are in much smaller blocks, hence less than ideal for TD-LTE. So they plan to combine their unpaired spectrum with their other paired spectrum for LTE-Advanced carrier aggregation (supplemental downlink).
All other commonly held mobile spectrum (Lower/Upper 700 MHz, Cellular 850 MHz, SMR 800/900 MHz, PCS 1900 MHz, AWS 2100+1700 MHz) is FDD paired spectrum, for which FCC regulations generally prohibit TDD unpaired operation.
AJ
name99 - Friday, January 13, 2012 - link
Why does the FCC prevent TDD unpaired operation for these spectrum allocations? What possible point is there to this?
WiWavelength - Friday, January 13, 2012 - link
The reason is simple: possibly disastrous adjacent channel interference between transmitters and receivers.
To illustrate, see the AWS 2100+1700 MHz band plan:
http://wireless.fcc.gov/services/aws/data/awsbandp...
Imagine that, in a given market, the carriers holding the AWS A 20 MHz and AWS C 10 MHz licenses follow standard FDD operation. The uplink is in each 1700 MHz segment, while the downlink is in each 2100 MHz segment. But the carrier holding the AWS B 20 MHz license decides to forgo standard FDD operation and, instead, to utilize TDD operation in both its 1700 MHz and 2100 MHz segments.
To continue, mobile station X operating in FDD mode attempts to receive from its serving base station in the AWS A 2100 MHz downlink, and the received signal at the mobile is -100 dBm. Concurrently, mobile station Y operating in TDD mode attempts to transmit/receive to/from its serving base station in the adjacent AWS B 2100 MHz segment (which is now operated as both uplink and downlink). Also, mobile station Y transmits at 23 dBm and is only 10 ft away from mobile station X. Do you now see the potential for adjacent channel interference?
Certainly, bandpass filters are supposed to keep adjacent channels from interfering with one another. However, when adjacent channels are at wildly different received power levels (upwards of 100 dB difference in my example above), even the best bandpass filters may not be able to attenuate adjacent channel emissions enough to prevent substantial interference. For this reason, traditional spectrum management policy almost always places uplink operations adjacent to other uplink operations of comparable power levels, and downlink operations adjacent to other downlink operations of comparable power levels, not to mention guard bands in between uplink and downlink operations.
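A rough link-budget sketch of that scenario in Python; the free-space path loss model and 2.1 GHz carrier are simplifying assumptions (real indoor propagation would differ, but the scale of the problem is the point):

```python
import math

# Rough free-space estimate of the adjacent-channel scenario above.
# Assumptions: 2.1 GHz carrier, 10 ft mobile-to-mobile separation,
# free-space path loss only.

def fspl_db(distance_m, freq_hz):
    """Free-space path loss: 20*log10(4*pi*d*f/c)."""
    c = 3e8
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

desired_dbm = -100                   # wanted downlink signal at mobile X
tx_dbm = 23                          # mobile Y transmitting next door
loss = fspl_db(10 * 0.3048, 2.1e9)   # 10 ft is about 3.05 m

interferer_dbm = tx_dbm - loss
print(f"path loss: {loss:.1f} dB")                           # ~48.6 dB
print(f"adjacent-channel signal: {interferer_dbm:.1f} dBm")  # ~-25.6 dBm
print(f"interferer above desired: {interferer_dbm - desired_dbm:.1f} dB")
# ~74 dB stronger -- even a filter with 60 dB of adjacent-channel
# rejection leaves the interference well above the wanted signal.
```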
As an aside, the LightSquared-GPS interference issue is a similar matter of wildly different received power levels in adjacent spectrum.
AJ
name99 - Friday, January 13, 2012 - link
Hmm. Very interesting. That was a great explanation.
It's sad when one hits these real-world issues that limit exactly how well things can work (I'm reminded of the PAPR issue with OFDM), but you're obviously correct. Given the history and the current mix of random users of adjacent spectrum, I guess we're stuck for a while, until, presumably, horse trading slowly rationalizes things, one band, in one market, at a time.
Brian Klug - Saturday, January 14, 2012 - link
Awesome write-up, and I completely agree with all of it - the carriers tell us some things, but are still very closed about discussing the actual network. Getting metrics from Field Test on devices is just about as far as we can go right now.
I don't know if you saw it, but we actually did take a decently in-depth look at LTE as a radio access technology and some of the bigger parts of eUTRAN when Verizon's deployment was just starting: http://www.anandtech.com/show/4289/verizon-4g-lte-...
The point of this little look under the Pipeline banner is just to show what LTE looked like in Las Vegas during one of the world's largest trade shows, with many other users on board. Last year only the elite had 4G LTE dongles on VZW; this time I saw practically everyone carrying around either a VZW LTE handset or a VZW LTE datacard like I did (Pantech UML290 and LG VL600, with MDM9600 and LG L2000 basebands, respectively). I don't think I'd say either network was unloaded, though obviously AT&T has far fewer devices on it.
-Brian
mcnabney - Friday, January 13, 2012 - link
You are comparing a Verizon LTE network that has millions of phones actually running on it (since it has been up for more than a year) with a network that has hardly any devices sold. Other users restrict throughput, and I can't think of any gathering of existing Verizon LTE users that would be greater than CES. I'm sure that AT&T's HSPA network is also a bit bogged down in Las Vegas.
phatboye - Friday, January 13, 2012 - link
If you read the article, you will see that they already pointed that out.
irev210 - Friday, January 13, 2012 - link
I think this is a rather misleading article... interesting, but misleading.
A) AT&T's LTE network is not loaded and not fully built out.
B) AT&T covers 1/3rd of the country with 10x10 @ 700 MHz, 1/3rd of the country with 5x5 @ 700 MHz, and the rest of the country has zero 700 MHz.
Verizon has 22 MHz of 700 MHz spectrum nationwide and 44 MHz in some urban markets.
The problem is, the wireless landscape is changing SO quickly, your results today will be 100% irrelevant by next CES.
Brian Klug - Friday, January 13, 2012 - link
This is all very true; however, at least for the Las Vegas market, both AT&T and Verizon are running 10 MHz FDD (10x10).
-Brian
umbrel - Friday, January 13, 2012 - link
There has been a lot of news about LTE lately; did it completely kill WiMAX in the US? What about the rest of the world?
ddrum2000 - Friday, January 13, 2012 - link
More or less. Sprint is moving away from WiMAX to LTE, but this just began in the past month or so.
jeffbui - Friday, January 13, 2012 - link
Were you able to use AT&T data throughout Vegas during CES? I can't even make phone calls there, let alone use data, whenever there's an event there.
JarredWalton - Friday, January 13, 2012 - link
The 4G connections are also far less congested than 3G, and I had a Verizon 4G mobile broadband card plugged into my laptop during the show. It was so much faster than any of the WiFi connections that were available, and disabling the 3G radio meant that I never had problems with dropping from 4G to 3G and then having to wait 15+ minutes (or sometimes more) for the connection to step back up to 4G. As more users get on 4G this probably won't be the case (e.g. CES in 2014?), but for this show 4G support was amazingly responsive everywhere I used it, including at the convention center.
Brian Klug - Friday, January 13, 2012 - link
AT&T WCDMA (3G) has been pretty much completely dead for me the entire show. Every time I go to Las Vegas I spend the entire time on GSM/EDGE to get anything done at all. It's awful.
-Brian
ZPrime - Friday, January 13, 2012 - link
I would take these results with an entire bag of salt.
I have the Galaxy Nexus, and I recently upgraded my home internet to 30/5 service. I can use speedtest.net's Flash site on my laptop and get the expected speed (dead on the 30/5). Testing from my phone, it shows that my cable connection is only good for ~3-4 Mbit downstream, while the upstream tests OK, right around 5.
I've seen higher speed tests from the GNex, at work and at home on WiFi, and also via LTE, so I'm not sure what the deal is here. I've personally witnessed it give very wrong results over multiple tests over a period of 30-some minutes in the dead of night, so I am not sure that I fully trust the app right now.
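One way to cross-check a flaky speed-test app is to time a plain HTTP download yourself. A minimal sketch, where the URL is a hypothetical placeholder for any large file on a server you trust not to be the bottleneck:

```python
import time
import urllib.request

# Minimal cross-check for a speed-test app: time a plain HTTP download.
# TEST_URL is a hypothetical placeholder, not a real endpoint.
TEST_URL = "http://example.com/100MB.bin"

start = time.time()
with urllib.request.urlopen(TEST_URL) as resp:
    total_bytes = 0
    while chunk := resp.read(64 * 1024):   # stream in 64 KB chunks
        total_bytes += len(chunk)
elapsed = time.time() - start

print(f"{total_bytes / 1e6:.1f} MB in {elapsed:.1f} s "
      f"= {total_bytes * 8 / elapsed / 1e6:.1f} Mbps")
```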
Brian Klug - Friday, January 13, 2012 - link
There's something weird with your WLAN configuration at your home - I get results on par with what I'd expect for the BCM4330 inside the GN at home on 5 GHz.
-Brian