I'm sorry for hijacking the thread, but why can air (LTE) handle so much more bandwidth than copper (i.e. DSL or DOCSIS)? I can easily get 150 Mbps per device over LTE, but can barely get 10 Mbps over DSL (and that's VDSL2!). The wire distance from the DSLAM to the DSL modem is less than the air distance between the base station / cell tower and the LTE modem. Does air have more bandwidth even though there are many more signals and users/subscribers than with DSL? I don't know, can anyone explain this to me?
The answer is super complex and many factors play a role in the technical bits - if we ignore the commercial bits.
DSL lines are capacitively coupled to ground over their entire run length. This limits the frequency at which the line will operate reliably (a typical DSL modem will perform a whole series of tests to figure out which bands are 'safe' to use).
Other factors are dodgy connections in the line, interference from other DSL lines in the same trunk cable, crappy termination causing impedance mismatch between the transmission lines and the driver hardware, and limited usable bandwidth for a carrier wave to superimpose your data on top of. The smaller the available frequency band(s), the less data you can send.
Think of 'POTS' (the Plain Old Telephone Service) wiring as just about the worst possible means of transporting data that you could find. All things considered, it's a small miracle that DSL works at all, and it took a good bit of engineering and some changes to the infrastructure (mostly the removal of the loading coils in the lines) to allow it to work.
LTE ('air' as you call it) has a huge advantage. It starts off as a GHz-range carrier signal with modulation bands up to 200 MHz wide (but more typically between 20 and 60 MHz). That's an awful lot of bandwidth. The lack of a physical connection is made up for by a fairly dense network of base stations.
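To put rough numbers on how much that extra bandwidth matters, Shannon's capacity formula C = B·log2(1 + SNR) gives an upper bound. A minimal sketch, where the bandwidths and SNR values are illustrative assumptions, not measured figures:

```python
import math

def shannon_capacity_mbps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon channel capacity C = B * log2(1 + SNR), returned in Mbps."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear) / 1e6

# Illustrative numbers only: ~1 MHz usable on a long ADSL loop vs a
# 20 MHz LTE channel; the SNR values are assumptions for the sketch.
dsl = shannon_capacity_mbps(1e6, snr_db=30)    # ~10 Mbps
lte = shannon_capacity_mbps(20e6, snr_db=20)   # ~133 Mbps
print(f"DSL-ish: {dsl:.0f} Mbps, LTE-ish: {lte:.0f} Mbps")
```

Even with a worse SNR, twenty times the spectrum dominates the outcome.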
Of course you need to share those base stations with everybody else, but as long as it's just you, you can see phenomenal speeds, far in excess of what your DSL line is capable of.
If your DSL line could use carrier frequencies in the hundreds of MHz without losing definition, then DSL could be just as fast. Unfortunately the physical medium is crap and there isn't much to be done about that short of replacing it with coax (and that's called cable; it's faster than DSL if you are on your own, but - again - you will share that run of coax with a lot of your neighbours).
> All things considered, it's a small miracle that DSL works at all, and it took a good bit of engineering and some changes to the infrastructure (mostly the removal of the loading coils in the lines) to allow it to work.
Indeed, brings to mind DSL over wet string[1] that was posted here a while back[2].
DSL speed has an inverse relationship with wire length. The marginal benefit of VDSL over ADSL is almost entirely gone at 1 km distance. Telcos that care about offering competitive speeds will install more VDSL hubs in neighborhoods to keep average loop length short, or better yet install fiber. Telcos that rely on monopoly status to stay in business, or think wireless is a more productive use of capital, will leave DSL to rot.
LTE service usually comes with very low usage quotas and punitive overage fees or throttling. 150 Mbps burst speed is very different from 150 Mbps service you can freely use. Several thousand users have to share that 150 Mbps!
At high frequencies the skin depth of copper decreases and the signal is confined to a thin layer of material, which presents a high ohmic resistance to the signal. So for copper there is always a maximum usable frequency, which limits the bandwidth. In air the signal propagates as a free EM wave, essentially in vacuum, and isn't band-limited by the material it's traveling through.
This is my understanding at least; I am open to correction/nuance.
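The skin depth itself is easy to compute from the textbook formula δ = sqrt(ρ / (π·f·μ)). A quick sketch for copper, using standard material constants (values are approximate):

```python
import math

RHO_CU = 1.68e-8           # resistivity of copper, ohm*m
MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m (copper is ~non-magnetic)

def skin_depth_m(freq_hz: float) -> float:
    """Skin depth delta = sqrt(rho / (pi * f * mu)) for a good conductor."""
    return math.sqrt(RHO_CU / (math.pi * freq_hz * MU_0))

# Roughly 2 mm at 1 kHz, 65 um at 1 MHz, 2 um at 1 GHz.
for f in (1e3, 1e6, 1e9):
    print(f"{f:>12.0f} Hz: {skin_depth_m(f) * 1e6:10.1f} um")
```

Skin depth falls as 1/sqrt(f), so the conducting cross-section shrinks and resistive loss climbs as you push the carrier frequency up.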
It kind of depends on your distance and the actual power used in those transmissions. You could get 100 Mbps or even 600 Mbps with G.fast DSL over a short distance with decent cable quality.
And it is also a matter of investment. With the 4G/smartphone evolution we are talking about tens of billions in monthly recurring revenue worldwide, and a percentage of that goes into making faster 4G, more capacity, and 5G. I think we sort of forget that 10 years ago, when the iPhone first came out, no one was really using 3G (apart from Japan). We literally went from the end of the 2G era to the start of 5G in a mere 10 years! Because consumers are willing to pay more for faster network access, better phones, better modem chips. You would have been called insane if you had told anyone 10 years ago that ~300M people on earth would spend an average of $750+ on a phone, EVERY YEAR.
A lot of this 4G/5G tech and innovation is actually moving into other areas. The upcoming 802.11ax WiFi standard is like a mini LTE, and G.fast for DSL and DOCSIS 3.1 for cable also take many learnings from 4G.
As a matter of fact, someone is trying to use wireless signals in the tiny space between twisted cable wiring to get you Terabit DSL.
The speeds are theoretical and plummet as spectrum utilization increases. If everybody replaced their xDSL/DOCSIS/FTTx connections with 4G/5G and utilized them the same, the speeds would be terrible.
Restrictive data transfer quotas and the proliferation of free Wifi access make existing 4G+ implementations look fast.
Wi-Fi is still a largely under-utilized resource. With the right incentives a lot of LTE usage could be off-loaded and hopefully create significant competitive pressure on cellular plans. Indoor usage should be all Wi-Fi, with LTE capacity saved for truly mobile scenarios like riding in a car or walking down the street.
Those incentives being a kickback/royalty payment to Wi-Fi operators who allow roaming and cheaper bills and higher quotas for customers. Right now I have a choice between $10/GB fast LTE and free but very slow store Wi-Fi. Free means the venue operator spends as little money as possible and users over-consume. A revenue stream for accepting off-loaded traffic would give them a business justification to subscribe to a decent connection and install additional access points.
To this day it seems like only the equipment manufacturers are profiting off Wi-Fi. Venue owners only indirectly profit by attracting more foot traffic. Cellular operators plug coverage holes with Wi-Fi calling, and address severely congested areas with efforts like attwifi, but they do not want to disrupt their main product.
I really wish WiFi to 4G handoff was truly seamless, since that's a lot of the reason I end up turning WiFi off. As I'm walking around an office building or a college campus, loading webpages is a subpar experience at best since the phone is constantly losing WiFi, switching to 4G, and re-establishing all of my connections. I have the same issue with sticky WiFi connections, where my phone stays connected for 10-20 seconds after I am out of range of the network, during which nothing works.
I would love to either have a vpn that connects over 4g and wifi and seamlessly redirects traffic depending on conditions, or have widespread support for something like multipath-tcp that supposedly fixes this issue.
1. WiFi, even the current 802.11ac, was never designed for this kind of usage. It was for a few people in a small area using it lightly, not constantly.
2. WiFi / unlicensed spectrum has limited power compared to licensed spectrum, which means full coverage is a few orders of magnitude harder.
3. I still think weak-signal handling and handoff are software and implementation issues. We should just disconnect WiFi if its signal is only 2 bars and not try to hang on at the edge of coverage.
The upcoming 802.11ax fixes that somewhat; it should be able to handle 32+ concurrent users, with up to 8x8 antennas in phase 1 (not confirmed yet) and hopefully 16x16 in phase 2. I am not sure if phase 1 and phase 2 are compatible in such a way that a phase 2 router would benefit phase 1 users as well; after all, these manufacturers are there to make you spend more money on upgrades. Another problem is that all of this looks good on paper, but once 802.11ax is mixed with 802.11ac, its performance and capacity drop dramatically. So I actually wish that in the far future we force more users onto ax, drop ac from the 5 GHz band, and keep N users on the 2.4 GHz band.
Or better yet, LAA (License Assisted Access). While technically superior, I am not entirely comfortable putting everything in the carriers' hands. And full LTE in unlicensed spectrum, MulteFire, is entirely a Qualcomm technology.
2.4 GHz is indeed congested. There are nine 20 MHz channels in the 5 GHz band in the USA. An additional 16 are available if client devices support DFS channels. Even if there's weather radar and you lose channels 120-128, that's 22 channels.
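The count can be checked by enumerating the standard US U-NII 20 MHz channel numbers (a sketch; exact availability depends on your regulatory domain and hardware):

```python
# US 5 GHz 20 MHz channel inventory.
non_dfs = [36, 40, 44, 48, 149, 153, 157, 161, 165]  # U-NII-1 and U-NII-3: 9 channels
dfs = [52, 56, 60, 64,                               # U-NII-2A
       100, 104, 108, 112, 116, 120, 124, 128,
       132, 136, 140, 144]                           # U-NII-2C: 16 DFS channels
weather_radar = [120, 124, 128]                      # may be lost near TDWR sites

usable = non_dfs + [ch for ch in dfs if ch not in weather_radar]
print(len(non_dfs), len(dfs), len(usable))  # 9 16 22
```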
> The speeds are theoretical and plummet as spectrum utilization increases. If everybody replaced their xDSL/DOCSIS/FTTx connections with 4G/5G and utilized them the same, the speeds would be terrible.
The provider A1 in Austria (around 45% marketshare for broadband) uses ADSL for most of its broadband lines, which is significantly slower than cable. A few years ago they merged with the largest wireless provider and now use hybrid-ADSL-LTE modems that use ADSL by default, but which automatically switch to LTE when higher bandwidths are required.
In addition there are several wireless providers (and MVNOs) that sell data flatrates which are cheaper and faster than ADSL. E.g., I recently replaced my ADSL connection with a LTE flatrate for 20 EUR a month that allows 20mbit down and 6mbit up (and almost all of the time it also reaches that speed).
I often hear the argument that LTE would be too congested if everyone uses it. But I can't see any evidence of that in places where LTE is heavily used.
> I recently replaced my ADSL connection with a LTE flatrate for 20 EUR a month that allows 20mbit down and 6mbit up
Jealous! Here in Australia I'm paying A$70/month (~44 EUR) for 12/1 4G home broadband service. Still better than the 3/0.5 service I was getting with ADSL for the same price...
DSL frequencies are usually between 0.25 and 1 MHz, as that's roughly what Cat 1 wiring was designed for, while LTE is usually sent over frequencies in the hundreds or low thousands of MHz. Shorter or higher-quality runs of DSL line can use slightly higher frequencies, but it's still not much.
It can't. Coax can easily support Gbps connections (I used to have an 800 Mbps connection where the last few meters ran over regular coax).
Now if you're asking "why can't my copper twisted pair support that bandwidth", then it's a function of quality (the twisted pair is likely old, heavily bent, subject to interference, and running over old technologies).
Bandwidth is essentially a function of two things: spectrum availability (measured in Hz) and spectral efficiency (measured in bits/second/Hz).
For example, if a carrier "owns" 20 MHz of spectrum (let's say, from 700 MHz to 720 MHz), split into two symmetrical up/down channels, with a spectral efficiency of 4 bps/Hz (a common value for LTE), and the cell towers have 3 sectors, each cell tower can provide 20 × 4 × 3 / 2 = 120 Mbps up and 120 Mbps down (although only 40 Mbps max per device, since a device can only be in one sector).
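That back-of-the-envelope calculation, spelled out:

```python
# Cell capacity from spectrum * spectral efficiency, per the example above.
spectrum_mhz = 20          # total licensed spectrum
up_down_split = 2          # symmetrical up/down channels
efficiency_bps_hz = 4      # typical LTE spectral efficiency
sectors = 3                # sectors per tower

per_sector = spectrum_mhz / up_down_split * efficiency_bps_hz  # Mbps, each direction
per_tower = per_sector * sectors
print(f"{per_sector:.0f} Mbps per sector, {per_tower:.0f} Mbps per tower, each direction")
# 40 Mbps per sector, 120 Mbps per tower, each direction
```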
In wired communications, spectrum is less of a limit: every single wire has the full spectrum available. And if you are using fiber, then there is no interference, either from outside the fiber or between different signals.
So air will never beat wired, because every single strand of wire has, in practice, the entire capacity of air.
PS: There are more things in play, such as attenuation, actual frequencies used (1 to 2 MHz has 1 MHz of spectrum; 1 to 2 GHz has 1000x more spectrum and therefore more capacity), etc.
IIRC the wireless modulations got more attention because of spectrum scarcity, and so are very efficient, coming close to the Shannon limit. I don't know if the same is true for wired; at least Ethernet uses a pretty dumb modulation and started out focusing on other useful features such as simple hardware, listen-before-talk, etc.
You don't really care that much about modulation when you have gigahertz of spectrum to work with, and the cost to add another wire (or fiber) is marginal.
The group velocity[0] of an electromagnetic wave in copper is about 95% of its speed in air, so that's going to add a few nanoseconds to the ideal latency.
[0] The speed of light, but not the speed of light.
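Taking that 95% velocity factor at face value, the copper penalty works out to under 200 ns per kilometer of run length. A quick sketch:

```python
C = 299_792_458            # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.95     # group velocity in copper, per the figure above

def extra_latency_ns(distance_m: float) -> float:
    """Extra one-way delay of copper vs free space over a given distance."""
    t_air = distance_m / C
    t_copper = distance_m / (VELOCITY_FACTOR * C)
    return (t_copper - t_air) * 1e9

print(f"{extra_latency_ns(1000):.0f} ns extra per km")  # ~176 ns
```

So over the tens of meters inside a building, the difference really is only a few nanoseconds.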
5G antennas are getting close to the AESA (active electronically scanned array) radars used by the military (X-band is 8 to 12 GHz). Unlike the radars in fighters, a 5G array can't cost millions per unit.
Which is great news; I'm looking forward to the cost of phased-array radar dropping for all applications (military, weather, communications, life safety, etc.), as well as improved colocation capabilities (for example, the next-gen NEXRAD weather network is going to be a combined phased-array weather and aircraft transportation surveillance [TSA & FAA] system enabled by solid-state phased-array hardware [+], which is going to save $5 billion over the system lifetime).
Self-driving will likely require multiple sensing modalities. One big draw of radar is that it works through rain, fog, dust, etc., which optical techniques (like our eyeballs) struggle with.
They don't. The power levels are less than 1% of military radars'. Plus mass production. They are talking $10,000 per unit, which isn't all that bad for cell equipment that can aim itself.
Military radars are for detecting quiet targets, not cellphones wanting internet. It is total apples and oranges in terms of energy levels.
Last time I looked, you need a huge number of elements to get good angular resolution. For WiFi, a few things:
1. All you care about is getting the ratio of multipath energy high enough that you can resolve independent signals.
2. Remember RF power is measured in dB while your radio's power consumption is in linear watts. Every 3 dB of extra signal margin means you can halve the transmit power. Useful for devices that run off batteries or have issues with heat dissipation.[1]
3. Less energy transmitted means your effective cell size gets smaller.
[1] A crusty old EE once pointed out to me that the difference between an 85% efficient power supply and a 90% one is about a third less heat to get rid of. Same thing is true of RF power amps.
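Redoing both bits of arithmetic in Python (assuming a fixed 100 W output load for the power-supply comparison):

```python
def db_to_ratio(db: float) -> float:
    """Convert a power ratio expressed in dB to a linear ratio."""
    return 10 ** (db / 10)

# 3 dB of extra link margin is a factor of ~2 in power.
print(f"3 dB = {db_to_ratio(3):.2f}x power")  # 2.00x

def heat_w(output_w: float, efficiency: float) -> float:
    """Heat dissipated by a supply delivering output_w at a given efficiency."""
    return output_w / efficiency - output_w

h85, h90 = heat_w(100, 0.85), heat_w(100, 0.90)
print(f"85%: {h85:.1f} W heat, 90%: {h90:.1f} W heat "
      f"({(1 - h90 / h85) * 100:.0f}% less)")  # roughly a third less heat
```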
Yeah, I've heard from ham types that it's really common to see near-seven-figure buy-outs for prime tower land (50' x 50' plus access) before even talking about the costs of putting up a 100'+ tower, power, and hardware.
Sure.
Now imagine if a WiFi router cost a company $10,000 instead of $100.
These devices would need a density closer to WiFi access points than to cellular towers.
An aside, but when it comes to electronic beam-steering, it's hard not to talk about LOFAR[1], a low-frequency radio interferometer for astronomy spread across Europe.