Google Cloud Platform Network: Premium Tier vs Standard Tier

But there's a caveat: while optimizing our site is essential for improved speed, it's not the only factor. The network and hardware that support our website and connect it to our visitors matter too. A lot.
This week we'll look at why Google is investing so heavily in its networking infrastructure, and at some of the differences between Google Cloud Platform's premium tier and standard tier networks.
Bandwidth and Latency (Key Factors for Hosting Infrastructure Performance)
Before getting into the details of Google Cloud's network, it's important to understand two key concepts: bandwidth and latency.
Bandwidth is the throughput capacity of a network, measured in Mbps. Latency is the sum of all the delays that the different routers along the way add to our web requests and responses.
Metaphorically, bandwidth is often portrayed as a water hose's capacity to deliver a certain volume of water per second, while latency can be compared to the time it takes from the moment the water pipe is opened until the water begins to flow.
Because of the overhead of establishing the connection between routers, every "hop" along the way adds a small amount of latency to the end-to-end requests and responses.
So, the farther apart the visitor and the server hosting the website are, the greater the latency. And the more fragmented the network in between, the greater the latency.
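To get an intuition for how the two interact, here is a back-of-the-envelope sketch in Python (the page size, bandwidth, and RTT figures below are made-up illustrations, not measurements): total fetch time is roughly one round trip plus the time it takes to push the payload through the pipe.

```python
def transfer_time_ms(size_kb: float, bandwidth_mbps: float, rtt_ms: float) -> float:
    """Rough time to fetch a payload: one round trip plus serialization time."""
    # Dividing kilobits by megabits-per-second conveniently yields milliseconds.
    serialization_ms = size_kb * 8 / bandwidth_mbps
    return rtt_ms + serialization_ms

page_kb = 500  # hypothetical page weight
base = transfer_time_ms(page_kb, bandwidth_mbps=50, rtt_ms=100)             # 180.0 ms
more_bandwidth = transfer_time_ms(page_kb, bandwidth_mbps=100, rtt_ms=100)  # 140.0 ms
less_latency = transfer_time_ms(page_kb, bandwidth_mbps=50, rtt_ms=50)      # 130.0 ms
```

Note that at broadband speeds, halving the latency shaves off more time than doubling the bandwidth, which is why the distance and fragmentation of the network matter so much.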
We can visualize this with a tool called traceroute (or tracert on Windows). In the next screenshots, we used it to inspect the routing delays of two requests sent from Europe. Specifically:
One to weibo.com:

and another to bbc.co.uk:

As we anticipated, the number of hops to the website in China is almost twice as big as to the European one. That accounts for the added delay compared to a request to a website hosted in the United Kingdom.
The three columns traceroute displays represent three round-trip times (RTT) for each hop. Each row represents one of the routers along the way, and these often have hostnames that help us determine where a particular router is located.
The round trip to the routers in China / Hong Kong takes close to a third of a second.
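As a rough illustration of what traceroute reports, here is a small Python sketch that averages the three RTT columns per hop. The excerpt it parses is hypothetical and heavily trimmed, and real traceroute output varies by platform:

```python
import re

# A trimmed, made-up traceroute excerpt (hop number, router, three RTTs):
SAMPLE = """\
 4  72.14.215.85    11.2 ms  11.5 ms  11.3 ms
 9  209.85.241.193  98.7 ms  99.1 ms  98.9 ms
17  114.249.18.1   307.4 ms  309.0 ms  308.2 ms
"""

def avg_rtt_per_hop(text: str) -> dict[int, float]:
    """Map hop number -> average of the three round-trip times."""
    hops = {}
    for line in text.splitlines():
        m = re.match(r"\s*(\d+)\s+\S+((?:\s+[\d.]+ ms){3})", line)
        if m:
            rtts = [float(x) for x in re.findall(r"([\d.]+) ms", m.group(2))]
            hops[int(m.group(1))] = sum(rtts) / len(rtts)
    return hops
```

Running this over a full trace makes it easy to spot exactly at which hop the latency jumps, e.g. where the route crosses an ocean.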

These are the findings of Belshe, presented in a neat graph:

Networks vs Internet Peering vs Transit
To understand our topic a bit better, we need to explain the basics of the internet's topology. At its core, the internet is made up of multiple global, regional, and local networks.
As of 2018, there are more than 60,000 AS (Autonomous Systems). These networks belong to governments, universities, and ISPs.
Among these, we distinguish between Tier 1, Tier 2, and Tier 3 networks. These tiers represent each network's independence and position on the internet as a whole.
- Tier 1 networks are independent, in the sense that they don't have to pay to connect to any other point on the internet.
- Tier 2 networks have peering agreements with other ISPs, but they also pay for transit.
- Tier 3 networks, the lowest level, connect to the rest of the internet by buying transit from higher tiers. They are effectively like consumers who pay for access to the internet.
A peering relationship exists where two networks exchange traffic on an equal basis, so that neither pays the other for transit; each carries the other's traffic for free in return.
The main benefit of peering is drastically lower latency.

The arrows represent the journey of a web request. Dashed arrows represent transit connections, while solid-line arrows represent peering connections.
Once a Tier 1 provider is reached, its relationship to another provider on the same level is a peering connection. Tier 1 networks connect to each other and relay their requests exclusively through peering partners, reaching every network on the internet without paying for transit.
There is also an alternative scenario in which two Tier 2 providers have a peering agreement, marked in turquoise. In this scenario, the hop count is lower and the website will load much faster.
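The difference in hop count can be sketched with a toy topology (all network names below are made up): two Tier 3 networks reach each other either by climbing through transit links up to Tier 1, or via a direct Tier 2 peering link.

```python
from collections import deque

def hops(graph: dict[str, set[str]], src: str, dst: str) -> int:
    """Smallest number of inter-network hops from src to dst (plain BFS)."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    raise ValueError("unreachable")

# Transit-only topology: Tier 3 -> Tier 2 -> Tier 1 -> Tier 2 -> Tier 3
transit = {
    "t3-a": {"t2-a"}, "t2-a": {"t3-a", "t1"},
    "t1": {"t2-a", "t2-b"},
    "t2-b": {"t1", "t3-b"}, "t3-b": {"t2-b"},
}

# Same topology plus the "turquoise" Tier 2 peering shortcut
peered = {k: set(v) for k, v in transit.items()}
peered["t2-a"].add("t2-b")
peered["t2-b"].add("t2-a")
```

With the peering link in place, the route skips the Tier 1 provider entirely, saving a hop in this tiny model and avoiding transit fees in the real world.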
Border Gateway Protocol
BGP is a protocol rarely mentioned outside of highly technical contexts. It sits, however, at the very core of the internet as we know it today. It is fundamental to our ability to access almost anything online, and it is one of the weakest links in the web's protocol stack.
Border Gateway Protocol is defined in the IETF's Request for Comments #4271 from 2006, and it has been updated several times since. The RFC states:
"The primary function of a BGP speaking system is to exchange network reachability information with other BGP systems."
To put it simply, BGP is a protocol responsible for deciding the exact route of a network request across the hundreds of thousands of possible nodes to its destination.

We can picture every node as an Autonomous System: a network consisting of multiple routers, servers, and systems connected to it.
In the BGP protocol, there is no auto-discovery algorithm (a mechanism by which every new node can discover adjacent nodes to connect through). Instead, every BGP peer has to be specified manually. As for the path algorithm, in the words of a Cisco expert:
"BGP does not have a simple metric to decide which path is the best. Instead, it advertises an extensive set of attributes with each route and uses a complex algorithm consisting of up to 13 steps to decide which path is the best."
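To make the idea concrete, here is a deliberately tiny sketch of just two of those steps, highest LOCAL_PREF first, then shortest AS path as a tie-breaker. The real algorithm considers many more attributes, and all AS numbers below are from the private-use range:

```python
from typing import NamedTuple

class Route(NamedTuple):
    prefix: str
    as_path: tuple[int, ...]  # sequence of AS numbers the route traverses
    local_pref: int           # operator-assigned preference

def best_route(candidates: list[Route]) -> Route:
    """Toy slice of BGP best-path selection: highest LOCAL_PREF wins,
    then the shortest AS_PATH breaks ties."""
    return max(candidates, key=lambda r: (r.local_pref, -len(r.as_path)))

advertised = [
    Route("203.0.113.0/24", (64512, 64520, 64530), local_pref=100),
    Route("203.0.113.0/24", (64512, 64530), local_pref=100),
    Route("203.0.113.0/24", (64512, 64513, 64514, 64530), local_pref=90),
]
# best_route(advertised) picks the two-hop path with local_pref=100
```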
Autonomous Systems transmit routing data to their peers, but there are no strict rules that would enforce a particular path selection. BGP is a system implicitly based on trust, and this may be one of the biggest security flaws of today's internet. The 2018 theft, in which MyEtherWallet.com traffic was hijacked and more than 200 Ether were stolen (worth $152,000), exposed this vulnerability.
Development of Cloud Computing, CDNs, and the Edge Market
With the growing demands of the IT market, from the web industry and online gaming to the Internet of Things and beyond, it became obvious that there was market space for products and providers that solve the latency problem.
[email protected[email protected] created by Amazon is another instance of this pattern along with an Intel as well as Alibaba Cloud partnership to deliver Joint Edge Computing Platform targeting the IoT market.
GaaS is short for Gaming as a Service: cloud services that give users the ability to play games running on servers in the cloud. This piece compares some of the prominent products in the GaaS segment.
Everyone who has ever shopped for a TV or video projector, or spent time configuring Miracast or another casting connection between a television and another device, knows how critical latency is. Yet there are GaaS providers already offering game streaming at 4k resolution and a 60Hz refresh rate... and users don't need to buy special equipment.
The drama of the recent Huawei ban by the US brought attention to the issue of 5G networks and the urgent need for a clear path to upgrading the world's networking infrastructure.
Sensors relaying huge amounts of information in real time, with minimal latency, to coordinate smart cities, smart homes, and autonomous vehicles will depend on dense networks of edge devices. Latency is the current ceiling for things like self-driving cars, with their various sensors and LIDAR data, and the processing of that data together with data from other vehicles.
How Can Cloud Providers Solve the Latency Problem?

Back in 2000, Google was already way ahead of its competitors in laying out submarine backbones. A year before Amazon's first such venture, ITWorld published an article titled: "Google's data centers grow too fast for normal networks, so it builds its own".
Back in 2005, tech journalist Mark Stephens, aka Robert Cringely, wrote in his column for PBS.org, commenting on Google's buying spree of dark fiber (laid-out, but unused, fiber-optic infrastructure):
"This is more than another Akamai or even an Akamai on steroids. This is a dynamically-driven, intelligent, thermonuclear Akamai with a dedicated back-channel and application-specific hardware. There will be the Internet, and then there will be the Google Internet, superimposed on top."

In 2010, in an article on zdnet.com, Tom Foremski wrote:
"Google is one of the companies that owns a large chunk of the internet," and continued: "Google has focused on building the most efficient, lowest cost to operate, private internet. This infrastructure is key to Google, and is key to understanding Google."
At the time, Cringely's article raised concerns about Google trying to take over the internet, but things became clearer when the company launched Google Fiber, its attempt to conquer the ISP market in the largest US cities. The project has since slowed down, so much so that TechRepublic published a post-mortem of the project in 2016, but investments in infrastructure, now made on a global scale, did not slow down.
Google's latest investment, set to launch this year, is a backbone connecting Los Angeles in the US and Valparaiso in Chile, with a branch connecting to Panama.
"The internet is commonly described as a cloud. In reality, it's a series of wet, fragile tubes, and Google is about to own an alarming number of them." -- VentureBeat
Why Is Google Investing So Much in Its Network Infrastructure?

- The company owns the largest video platform in the world.
This is why Google needs the lowest latency and highest bandwidth possible. Google also wants to own the infrastructure itself, because its "insatiable hunger" for more bandwidth and lower latency puts Google, along with fellow large-scale companies like Amazon and Microsoft, in a position where they have to come up with completely custom hardware and software solutions.

Points of Presence, or edge PoP nodes, sit at the edges of Google's global private cable network. They serve as entry and exit points for traffic headed to and from Google's data centers.
Moore's Law is an observation by Gordon Moore, co-founder of Intel, that every two years the number of transistors on an integrated circuit roughly doubles. For decades this prediction held true, but the computing industry is now putting Moore's Law to a tough test, and it may be coming to an end in the near future. Notably, NVIDIA's CEO proclaimed Moore's Law dead earlier this year.
So how does this relate to the cloud industry and to Google's network infrastructure?
At the Open Networking Foundation Connect event in December 2018, Google's Vice President and TechLead for Networking, Amin Vahdat, confirmed the ending of Moore's Law and explained the company's conundrum:
"Our compute demand is continuing to grow at an astonishing rate. We're going to need accelerators and more tightly coupled compute. The network fabric is going to play a critical role in tying those two together."
One way for cloud providers to keep up with the ever-growing demand for compute power is clustering. Clustering, simply put, means pooling multiple computers to work on a single problem, to run the processes of a single application. An obvious precondition for benefiting from such a setup is low latency or serious network capacity.
When Google started designing its own hardware, in 2004, network hardware vendors were still thinking in terms of physical boxes, and routers and switches had to be managed individually, via the command line. Until then, Google had been buying clusters of switches from vendors like Cisco, spending a fortune per single switch. The equipment, however, couldn't keep up with the growth.
Google needed a different network architecture. Demand on Google's infrastructure was growing exponentially (a research paper by Google from 2015 claims their network capacity grew 100x in ten years), and their growth was so rapid that the cost of buying existing hardware also nudged them toward creating their own solutions. Google started building custom switches from commodity silicon chips and adopted a different, more modular network topology.
Google's engineers built on an old telephony network model called a Clos network, which reduces the number of ports required per switch:
"The advantage of the Clos network is you can use a set of identical and inexpensive devices to create the tree and gain high performance and resilience that would otherwise cost much more to construct." -- Clos Networks: What's Old Is New Again, Network World
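A back-of-the-envelope sketch of that idea (a generic two-tier folded-Clos / leaf-spine calculation, not Google's actual design): identical commodity switches, with each leaf splitting its ports evenly between servers below and spines above, yield a non-blocking fabric whose capacity grows with the square of the port count.

```python
def leaf_spine_capacity(ports_per_switch: int) -> dict[str, int]:
    """Capacity of a non-blocking two-tier leaf-spine (folded Clos) fabric
    built entirely from identical switches."""
    k = ports_per_switch
    spines = k // 2              # one spine per leaf uplink
    leaves = k                   # each spine port connects to one leaf
    servers = leaves * (k // 2)  # half of each leaf's ports face servers
    return {"spines": spines, "leaves": leaves,
            "servers": servers, "switches": spines + leaves}

# With hypothetical 48-port commodity switches:
fabric = leaf_spine_capacity(48)  # 72 identical switches serve 1152 servers
```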
For this new, modular hardware, Google's team also had to redefine existing protocols and build a custom Network Operating System. The challenge they faced was taking vast numbers of switches and routers and operating them as if they were a single system.
The custom networking stack, along with the need for redefined protocols, led Google to turn to Software Defined Networking (SDN). Here is a keynote by Amin Vahdat, Google Vice President, Engineering Fellow, and leader of the network infrastructure team, from 2015, explaining the challenges and the solutions they came up with:
For the most curious, there is also an interesting blog post worth reading.
Espresso
Espresso is the latest pillar of Google's SDN. It allows Google's network to go beyond the constraints of physical routers in learning about and controlling the data flowing in and out to Google's peering partners.
Espresso makes it possible for Google to measure the performance of connections in real time, and to base the decision about the best Point of Presence for a specific visitor on real-time data. This way, Google's network can react dynamically to congestion, outages, and slowdowns in its peering / ISP partners.
On top of that, Espresso makes it possible to use Google's distributed computing power to analyze all of its peers' network data. All the routing control and logic no longer resides in individual routers and the Border Gateway Protocol, but is instead transferred to Google's computing network.
"We leverage our large-scale computing infrastructure and signals from the application itself to learn how individual flows are performing, as determined by the end user's perception of quality." -- Espresso makes Google Cloud faster, 2017
How Is Any of This Relevant to the Google Cloud Network?
What we've covered so far serves to highlight all the issues and challenges (both hardware- and software-based) Google went through to build what is probably the best global private network available today.

When GigaOM published an article in 2014 comparing AWS and Google Cloud Platform, just a week later they followed up with another one, titled: "What I missed in the Google vs. Amazon cloud debate -- fiber!" where they acknowledge that Google is years ahead of Amazon in terms of infrastructure.
"Having big, fast pipes for your -- and your customers' -- traffic is a huge deal." -- Barb Darrow, GigaOM
Google's Premium Tier vs Standard Tier Networks

Google Cloud Platform offers two different network tiers that differ in both price and performance.
Google Premium Tier Network
With Premium Tier, all data traveling from the data center to the visitor is routed using a Cold Potato policy. Unlike the Hot Potato routing used on the Standard Tier network, where traffic is handed off (or dropped) to other ISPs as early as possible, Premium Tier routing keeps egress traffic on Google's own fiber for as long as possible, handing it off to peering or transit ISPs as close to the visitor as possible.
To put it in layman's terms: Premium Tier packets spend more time on Google's network, with less bouncing around, and thus perform better (but cost more).
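The difference between the two policies can be sketched as a toy egress decision (the PoP names and latency figures below are invented; real systems weigh far more signals): hot potato exits our network at the PoP cheapest for us, while cold potato exits at the PoP closest to the visitor.

```python
from typing import NamedTuple

class PoP(NamedTuple):
    name: str
    ms_from_origin: float  # latency from our data center to this PoP
    ms_to_visitor: float   # estimated latency from this PoP to the visitor

POPS = [
    PoP("pop-us-east", ms_from_origin=5.0, ms_to_visitor=140.0),
    PoP("pop-eu-west", ms_from_origin=85.0, ms_to_visitor=25.0),
    PoP("pop-eu-central", ms_from_origin=95.0, ms_to_visitor=12.0),
]

def hot_potato(pops: list[PoP]) -> PoP:
    """Hand traffic off as early as possible (cheapest for the origin network)."""
    return min(pops, key=lambda p: p.ms_from_origin)

def cold_potato(pops: list[PoP]) -> PoP:
    """Keep traffic on our own backbone until the PoP closest to the visitor."""
    return min(pops, key=lambda p: p.ms_to_visitor)
```

In this toy model, hot potato exits immediately at pop-us-east and leaves the long leg of the journey to public networks, while cold potato carries the traffic on the private backbone all the way to pop-eu-central before handing it off.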
For the sci-fi fans among us, it could be compared to a cosmic wormhole that transfers our traffic directly to its destination without roaming the wider internet.
We use Google Cloud's Premium Tier Network with all of our managed WordPress hosting plans. This means fewer hops and shorter distances, resulting in faster and more secure worldwide transport of your data.

Google Standard Tier Network
The Standard Tier Network, in contrast, hands traffic off at Points of Presence close to where our content or web app resides. Our visitors' traffic will travel through multiple networks, Autonomous Systems, and ISPs, and through many hops, before it reaches its destination. In this scenario, website speed suffers.
Content traveling on the Standard Tier will not be able to fully reap the benefits of Google's SDN and the vast computing power that calculates optimal routes in real time. The traffic will be subject to the BGP policies of all the systems between Google and the visitor.
In layman's terms: Standard Tier packets spend less time on Google's network and more time bouncing around public networks, and thus perform worse (but cost less).
In addition, Premium Tier uses Global Load Balancing, while Standard Tier offers only Regional Load Balancing, which brings more complexity and more "footwork" for customers on Standard.
The Premium Tier Network also offers a global Service Level Agreement (SLA), which means that Google accepts contractual responsibility for delivering a certain level of service. It's like a quality-guarantee badge. The Standard Tier network does not offer this kind of SLA.
For those interested in finding out more, there is an extensive comparison and documentation of the two tiers on the Google Cloud website. Google Cloud also provides a handy chart to help you determine which network tier is right for you:

Summary
For years, Google has been investing in creating a global networking infrastructure, with its own protocols and custom hardware and software networking stacks. In times when Moore's Law seems to grow weaker year by year, Google's infrastructure enables the company to keep up with the ever-growing demand for cloud resources.
We will likely see Google playing an important role in the growth of IoT, smart cities, and driverless cars, as the need for edge computing continues to grow.
Google Cloud Network Premium Tier is the first product to make use of Google's groundbreaking networking achievements. It allows customers to take advantage of Google's network and the entire stack for delivering content at premium speed, backed by Google's latency guarantees.