The technology deployed on any network has a set limit, as anyone whose video slows to a crawl while their siblings are playing Call of Duty will attest.
This is referred to as the maximum capacity of the network, and most people have run into it at some point. Think back to trying to use your phone at a concert, or to send a text on New Year's Eve. There was a period when the internet was a niche thing and capacity was less of a problem (after 56k modems but before iPhones), because fewer people demanded its resources. Back then, a minority of users were responsible for the majority of traffic and could be fairly free with its use.
However, with the rise of social everything, “internet of things” and an abundance of random cat videos, more and more people are demanding video and rich media content across their network connections.
To be fair, the networks are improving to match. Mobile connections have moved from 2G's GPRS through 3G to 4G; in wired networks, 10G has turned into 40G and now 100G, and work is already underway on 400G. It cannot be denied that the internet itself is getting faster.
Yet for cities like London, with population density on the rise and the level of service people demand ever increasing, experts are predicting an upcoming “capacity bottleneck” – the amount of available capacity just won’t be enough for what people want to do with it.
This will of course eventually be solved by the next generation of technology, but that generation is not expected to arrive soon enough to prevent the immediate problem of delays and under-resourcing.
This, as I understand it, is part of the argument operators and ISPs use against net neutrality: the principle that all content on the internet should be equally accessible to anyone who wishes to access it. Network operators argue that this is impractical, since capacity issues already arise when many people stream content in the same local area (like trying to use social media during the Olympics). Since supply is limited and consumers demand a high-quality service, operators argue that they should be able to prioritise content more than they already do, by charging a company for its services to be delivered at premium speeds over the network. This is predicted to have broad implications for start-ups, which cannot afford the same priority treatment as a large content provider.
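The prioritisation operators describe can be pictured as a queue that always drains paid traffic first. The toy scheduler below is a minimal sketch of that idea only; the class, names, and tiers are my own invention, and real ISP traffic shaping is far more sophisticated than a two-tier priority queue:

```python
import heapq
from itertools import count

class PriorityScheduler:
    """Toy two-tier scheduler: paid 'premium' packets are dequeued first.

    Purely illustrative of paid prioritisation; not any real ISP's system.
    """

    PREMIUM = 0    # lower number = served first
    STANDARD = 1

    def __init__(self):
        self._queue = []
        self._seq = count()  # tie-breaker keeps FIFO order within a tier

    def enqueue(self, packet, premium=False):
        tier = self.PREMIUM if premium else self.STANDARD
        heapq.heappush(self._queue, (tier, next(self._seq), packet))

    def dequeue(self):
        if not self._queue:
            raise IndexError("no packets queued")
        return heapq.heappop(self._queue)[2]

# Under congestion, every premium packet is sent before any standard one.
sched = PriorityScheduler()
sched.enqueue("indie-video-1")                 # standard tier
sched.enqueue("bigco-video-1", premium=True)   # paid priority
sched.enqueue("indie-video-2")
sched.enqueue("bigco-video-2", premium=True)

order = [sched.dequeue() for _ in range(4)]
print(order)  # bigco packets drain first; indie traffic waits
```

The point the sketch makes is the one start-ups worry about: when the link is saturated, the non-paying tier is only served once the paying tier is empty.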
Although this idea has been widely discussed in the media, it hasn't gained the traction in the public mind needed to affect the debate. If it were rebranded as the potential "Monetisation of Capacity", it might benefit from more widespread discussion, since that framing ties the probable outcome to its effect on consumers.
It is true that capacity bottlenecking will become more prevalent as growth in content demand outpaces the development of the next technology standard (5G, 400G). The question is whether this justifies the approach.
Should operators be entitled to control access to content for financial gain, and to place restrictions on the wishes and decisions of their customers? There is a real question as to whether we can maintain the principle of free access, which provides much of the internet's strength as a platform for content sharing and idea generation.
This is what troubles me. Smaller indie platforms that can't buy priority speeds will be pushed aside by market forces, because consumers are fickle. If Netflix or Amazon is more reliable thanks to these agreements, that is what consumers will use, effectively creating monopolies in certain spaces where established content providers who already have the market share can dominate their smaller competitors. This concerns me because it could harm tech start-ups and innovation, as well as the freedom of content that has made the internet the powerful tool in society it is today.