As Internet traffic grows, will satellites be able to compete with submarine cables?

Started by merlinraj, Aug 01, 2022, 05:24 AM


merlinraj (Topic starter)

Over the past year, global Internet traffic grew by only 26%, which, according to "experts", is the lowest figure in the last 15 years. Has Moore's law, understood here as the doubling of bandwidth over a fixed period to meet the needs of network users, been broken? Perhaps so, if not for one detail.
In the previous 5 years this figure averaged 28% per year. In other words, throughput has never actually doubled every 2 years the way transistor counts do, whatever anyone says, and the so-called slowdown in growth is within the margin of error.
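Just to sanity-check the arithmetic: doubling every 2 years would require compound growth of about 41% per year, well above the 26-28% actually observed. A couple of lines make this plain:

```python
# Annual growth rate implied by "bandwidth doubles every 2 years",
# compared with the observed 26-28% per year from the text above.
implied = 2 ** (1 / 2) - 1          # ~0.414, i.e. ~41.4% per year
observed_recent = 0.26              # last year's growth
observed_5yr = 0.28                 # average of the previous 5 years

print(f"doubling every 2 years implies {implied:.1%} per year")
print(f"observed: {observed_recent:.0%} (last year), {observed_5yr:.0%} (5-yr avg)")
```

So bandwidth growth has long been well short of a Moore's-law pace; the recent dip from 28% to 26% is small by comparison.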

Today the entire global traffic is in the range of 466 Tbit/s (466 million Mbit/s), and although the growth rate has slowed since 2017, the bandwidth used now is almost 3 times more than what was consumed 4 years ago.

Today, not surprisingly, the growth rate of traffic generated by our subscribers is largely declining. This is driven, first of all, by more complicated relations with copyright holders and by the global spread of the Internet. Content is now produced not only by professional studios but also by individuals who, to deliver it to viewers, use various broadcast services, including ones as large as YouTube.

Our main clients now are not tube sites but large and small businesses that deploy their solutions on the Internet: solutions that provide some kind of service or even deliver content from large aggregators, yet do not generate much traffic on their own, because they are basically a shell that provides access to, and monetization of, traffic from large market players.

Now the traffic of our subscribers is not growing so rapidly, and sometimes it is even decreasing, despite the growing number of servers. Many other hosting providers observe the same. Where, then, does the growth come from?

Global traffic is growing due to the connection of new subscribers to the network and the development of communication channels. A significant share of the overall growth comes from Africa and Asia. Africa saw the fastest expansion in international Internet bandwidth consumption, at 45% per annum in 2015-2019. Asia lagged only slightly behind, averaging 42% per year.

Interestingly, since monitoring of international bandwidth consumption began in 1999, the busiest route had been Europe - US and Canada, but it was later eclipsed by a new direction: Latin America - US and Canada. Capacity on this route exceeded that of the European-American route for the first time in 2013. Six years later, it was carrying more than twice the load of the Europe - US and Canada route. In 2019 alone, operators increased throughput by 9.5 Tbit/s, a 27% increase from a year earlier. Thus, at the moment, the total throughput of all connection lines in this direction is almost 43 Tbit/s.

But why such a major shift? The fact is that Latin America is connected mainly with North America, while Europe and Asia have a much greater variety of connections, as can be seen from the diagram. Of course, countries' desire to be independent of US monitoring has led to direct links being built between Latin America and Europe, but their total contribution to bandwidth is still small. At the same time, major US content providers have built their own fiber optic links to deliver their services to these regions: this is cheaper and faster than delivering content from more remote European data centers.

It is important to note that submarine cables and terrestrial links currently satisfy far more of the demand for bandwidth than satellites; we covered this back in 2015 in an interesting article, Messages in Depth: The Amazing Story of the Underwater Internet, which has not lost its relevance. Nevertheless, a lot has changed since then. Whereas the bandwidth available to individual satellite Internet subscribers used to be no more than a few megabits, offers with speeds up to 100 Mbit/s are already beginning to appear (for downloading, of course; the upload channel is an order of magnitude weaker, or even more).
For example, in 2017 the entire bandwidth of the operator Viasat was 230 Gbit/s, and by 2020 it planned to reach 1 Tbit/s thanks to just three satellites that will form the new Viasat-3 satellite network and supply many subscribers in hard-to-reach regions with high-quality Internet access. However, even these numbers are just a few percent of the bandwidth needed by all Internet users.

Can new satellites end the dominance of submarine cables? Obviously not, at least not in the near future: cables supply orders of magnitude more bandwidth than satellites. Nevertheless, about half of the inhabitants of the Earth still lack high-quality, high-speed access to the network. That is 4.6 billion potential Internet users, some of whom are not active users at all because of the high cost or complexity of access. As for those who are more or less active users of the network at the moment, that is 57% of the population, or 4.4 billion people. So there is clearly room for access to improve.

There is a huge difference between geostationary satellites the size of a truck and low-orbit satellites weighing 200-300 kg, which, thanks to advances in technology, are ten times smaller and much cheaper, but need more frequent replacement and a more complex control system.

An interesting fact: as of March 31, 2019, there were 2062 satellites worldwide, distributed by country as follows:

USA: 901
Russia: 153
China: 299
Others: 709

But what are these satellites?

LEO: 1338
MEO: 125
Elliptical: 45
GEO: 554

As you can see, most of them are low-orbit satellites, and over time their number will only grow. Elon Musk's Starlink project alone calls for putting up to 42,000 low-orbit satellites into service, which would supply fast and affordable Internet access around the world and, possibly, become more efficient than fiber optic lines in terms of latency.

The fact is that the speed of light in fiber optic lines is only about 60% of the speed of light in a vacuum. Transmission via satellite is free of this flaw. For a long time there was one other flaw: the geostationary orbit (where the satellite's orbital period matches the Earth's rotation, so the satellite stays above the same point on the surface) is too far from the Earth (35,786 km above sea level). Because of this, the minimum delay (ping) was about 477 ms, and in practice values reached 600 ms and higher.
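The 477 ms figure follows directly from the geometry: a request and its reply each traverse the ground-to-satellite leg twice, so the signal covers the geostationary altitude four times. A quick back-of-the-envelope check:

```python
# Minimum round-trip time (ping) via a geostationary satellite:
# ground -> satellite -> ground for the request, then the same for the reply,
# i.e. four legs of ~35,786 km each, at the vacuum speed of light.
C_KM_S = 299_792          # speed of light in vacuum, km/s
GEO_ALT_KM = 35_786       # geostationary altitude above sea level, km

min_ping_ms = 4 * GEO_ALT_KM / C_KM_S * 1000
print(f"minimum GEO ping: {min_ping_ms:.0f} ms")   # ~477 ms
```

Real-world pings of 600 ms and up then come from ground-station processing, routing, and the fact that users are rarely directly under the satellite.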

Low-orbit satellites will be located at altitudes from 1/105 to 1/30 of geostationary, and in this case the delay of the satellite segment of the network will be 25-35 ms, comparable to transmission delays over cable and optical networks. And given that the signal will travel not at 0.6 times the speed of light but at practically the speed of light, Europe and America could be connected by a link with a much lower ping than submarine lines currently provide.
Thus, customers looking for the lowest latency and highest performance may find satellite Internet ideal for their needs when the source and destination are separated by much more than twice the satellites' altitude. For example, the fastest fiber from London to Singapore provides a latency of 186 ms; a satellite system could reduce this to 112 ms, which greatly improves network performance.
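The intuition can be sketched numerically. The figures below are illustrative assumptions, not numbers from the article: a great-circle distance of roughly 10,850 km for London-Singapore, a 1.5x route factor for the real cable path, and a 550 km satellite altitude. Real inter-satellite routing adds hops, so the satellite number here is a lower bound:

```python
# Rough round-trip latency comparison: fiber vs. LEO satellites.
# All route numbers are assumptions for illustration only.
C = 299_792                 # km/s, speed of light in vacuum
GREAT_CIRCLE_KM = 10_850    # approx. London -> Singapore great circle (assumed)
FIBER_ROUTE_FACTOR = 1.5    # real cable paths are longer than the great circle
FIBER_SPEED = 0.6 * C       # light in fiber is ~60% of c, per the article

fiber_rtt_ms = 2 * GREAT_CIRCLE_KM * FIBER_ROUTE_FACTOR / FIBER_SPEED * 1000
sat_path_km = GREAT_CIRCLE_KM + 2 * 550       # add up/down legs at ~550 km altitude
sat_rtt_ms = 2 * sat_path_km / C * 1000

print(f"fiber RTT ~{fiber_rtt_ms:.0f} ms, satellite RTT ~{sat_rtt_ms:.0f} ms")
```

Even with generous allowances, the satellite path wins on long routes, simply because vacuum beats glass by a factor of about 1.67 and the cable route is longer than the great circle.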

The approximate capacity of each Starlink satellite is 21 Gbit/s, and up to 4425 satellites are planned for the near future (the first batch of 60 was put into orbit in May 2019). However, according to various forecasts, the project is unlikely to bite off a piece of the pie, in serving users, of more than 21 Tbit/s, which is 21/466 × 100 = 4.506% of the entire bandwidth of the Internet, or slightly less than what one pair of fibers of the MAREA submarine backbone provides.
Of course, when 42,000 satellites are brought into the network (not long ago the company received permission to commission 12,000, and then another 30,000 satellites), the system throughput could reach about 1 Pbit/s, which exceeds current connectivity needs roughly twofold. Nevertheless, this may happen no earlier than 2027, or even much later. And if we apply Moore's law, under which bandwidth doubles every 2 years, then even in the rosiest forecasts Starlink will not capture a share of Internet traffic exceeding 25%. The company itself, meanwhile, forecasts up to 50% of all Internet traffic and about 10% of traffic in densely populated cities.
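The "about 1 Pbit/s" figure is easy to verify from the per-satellite capacity quoted above:

```python
# Aggregate Starlink capacity at full constellation size,
# using the 21 Gbit/s per-satellite figure from the article.
PER_SAT_GBPS = 21
SATS = 42_000
CURRENT_TBPS = 466            # total Internet bandwidth cited earlier

total_tbps = PER_SAT_GBPS * SATS / 1000   # 882 Tbit/s, on the order of 1 Pbit/s
print(f"{total_tbps:.0f} Tbit/s, ~{total_tbps / CURRENT_TBPS:.1f}x current bandwidth")
```

So the raw multiplication gives 882 Tbit/s, just under a petabit and roughly double today's total, which matches the claim above.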

Nevertheless, when making forecasts, everyone forgets about the errors that occur in satellite transmission, which significantly affect the actual throughput of the system. If the error rate is one error per 1000 bits, a 4000-bit block will contain four errors on average, which means near-total unavailability when working with 4000-bit blocks (almost no block can be transmitted intact). If the error rate is one bit per million, you can (theoretically) send 999 out of every 1000 blocks of 1000 bits without errors, which means good throughput. What error rates satellite systems will deliver in reality is an open question; so far we have only bright and bold forecasts.
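Assuming independent bit errors with per-bit probability p, the chance that an n-bit block arrives intact is (1 - p)^n, which makes both cases easy to check:

```python
# Probability that an n-bit block arrives with no bit errors,
# assuming independent bit errors with per-bit probability p.
def block_ok(p: float, n: int) -> float:
    return (1 - p) ** n

# Error rate 1e-3, 4000-bit blocks: on average 4 errored bits per block,
# so almost no block gets through intact.
print(f"p=1e-3, n=4000: {block_ok(1e-3, 4000):.3f}")   # ~0.018

# Error rate 1e-6, 1000-bit blocks: roughly 999 of every 1000 blocks are clean.
print(f"p=1e-6, n=1000: {block_ok(1e-6, 1000):.3f}")   # ~0.999
```

In practice, of course, forward error correction trades some raw bandwidth for a far lower effective error rate, but that overhead is exactly the kind of thing the rosy capacity forecasts tend to omit.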

More than 90% of all world Internet traffic is currently carried by landlines, but we should not forget about local traffic, within cities and even individual networks. For example, in 2004, when a boom in building home networks began in Kyiv, many users did not need external world traffic at all; the Internet for many was represented by UA-IX and a movie server from the home network provider, or intranet torrents.
There were even buttons to disable external and Ukrainian Internet traffic. In our time, cheaper access and the development of social services have upset this balance: local and global traffic have changed places. Nevertheless, it is possible that in the near future a technology will appear for storing and processing data on user devices rather than data center servers, in which case the growth of global traffic consumption will slow and local traffic will regain relevance, provided the planet's inhabitants continue to interact mostly over the Internet with people living nearby, since the "borders" between network users are now practically not felt. Today, most submarine links have a capacity of 40 Gbit/s per channel, while the average satellite channel provides 40 Mbit/s per user, which is 3 orders of magnitude less.
We should also not forget that fiber optic lines (FOCL) keep developing too: the 100 Gbit/s Ethernet standard (100GbE) is already well established, which in no way narrows the "gap" between satellite and fiber-optic Internet and once again suggests that Starlink is more marketing than a project that will make a revolution. Yes, thanks to the project it will be possible to get online in hard-to-reach regions, but satellites will not be able to compete with FOCL in the coming years, and statements about a 50% traffic share are another myth that has already been dispelled.

It is worth noting that Starlink is not the only one developing low-orbit satellite solutions; there are many companies: OneWeb, Space Norway, Telesat, and even Facebook is looking in this direction. But it is worth recalling that not all such projects have succeeded, as they may fail to account for the needs of future subscribers and the economic efficiency of the solution, which is quite difficult to predict.
Recall, for example, Bill Gates's Teledesic project: he invested a billion dollars in it in the 90s, and it intended to launch 840 satellites to build an "Internet from the sky" solution, but it failed in 2003, when the project was curtailed after investments approached 9 billion dollars with no prospect of payback. Other similar projects also fell through due to economic inefficiency (Iridium, for example) and were abandoned. In any case, new projects are an unconditionally positive stage in the development of the Internet. Sometimes you need to go through a failed stage to arrive at a successful new technology. Perhaps today's ventures will be more fortunate.

Nevertheless, astronomers around the world, in particular the International Astronomical Union (IAU), have already expressed concern that new satellite systems are likely to interfere with astronomical observations. Whether commercialization will win over science this time remains an open question. But the most important thing is that this new race does not lead to irreversible consequences. There is the so-called Kessler syndrome, which astronomers recall more and more often.

According to this scenario, at a high density of satellites, the failure of just one element can produce debris that damages and destroys others, and as a result space would be closed off for some time, since any other object put into orbit would be struck by thousands of fragments flying in unpredictable directions at speeds of tens of thousands of kilometers per hour.
Of course, over time all the debris would fall to Earth or burn up in the atmosphere, but satellite connectivity could be lost for quite a long time. It is difficult to say how likely such an event is; possibly it is an invention of scientists meant to protect science.



The question is: why such containers, if, as we see from the reports, only a few percent of them are used? The answer is just as banal. People anticipate growth in traffic consumption and lay in enough capacity for future needs. The growth has already been calculated and is expected.
And this, as far as I understand, goes against the article.
As for whether Amazon has that much traffic: well, you don't work there and don't know, which means you're fantasizing. Maybe there is no such traffic and the fibers are not theirs. That's not a good look.

Amazon, in addition to retail, has a big layer of business: virtual private clouds. The article is about just such things, and there is a lot of traffic there.
It is the Internet, and yet not quite the Internet. These are virtual private networks, which are sometimes isolated from the Internet and sometimes not. Corporate traffic goes into them, and that explains the slowdown in the growth of public traffic.