Gone are the days when we kept physical server computers to host a site on the web. Having a lone server, or perhaps two, holding a site was the norm from the dawn of the internet in the 1980s until around the late 20th century.
Up to that point, the amount of information on the internet was limited and the number of websites grew slowly, served by search engines like Yahoo and Excite, to name a few. Then came Google with its search engine, and there was a gold rush of information and data on the internet. With its motto of 'Don't be evil', Google began collecting and digitizing all the available data; this, along with optimized and accurate search results, created an unprecedented amount of data that made it necessary to scale servers into what are today called data centres. But physical servers in the early 21st century had their own pitfalls and downsides. A major one was power failure: what if the power fails? What if there is a natural disaster? What if there is a virus attack? And most of all, what if a component in one of the server computers fails? These were pressing questions for the companies and people who hosted their sites on these physical machines at the dawn of the information age.
All these worries led many brilliant minds to pick up their tools, set to work, and discover a new way of doing things that could give people solace from these worries. Their efforts gave rise to a new technology called 'Cloud Computing'. Cloud computing is the practice of using a network of remote servers hosted on the Internet to store, manage, and process data, rather than a local server or a personal computer. This brings many advantages. One of them is scalability: the overall capacity of the system can grow as its users and traffic grow, which also reduces the cost of upgrades. Others include greater flexibility, elasticity and optimal resource utilization.
Cloud computing offers many types of services, such as Infrastructure as a Service (IaaS) and Platform as a Service (PaaS).
When it comes to IaaS, using an existing infrastructure on a pay-per-use scheme seems an obvious choice for companies, saving them the cost of acquiring, managing and maintaining an IT infrastructure. There are also instances where organizations turn to PaaS for the same reasons, while also seeking to increase the speed of development on a ready-to-use platform for deploying applications. Examples of such offerings include Amazon AWS, Windows Azure, Google Compute Engine, Rackspace Open Cloud, IBM SmartCloud Enterprise and HP Enterprise Converged Infrastructure. All these IaaS services provide the same set of services with varying degrees of features and cost. At its core, IaaS is a cloud model that allows organizations to outsource computing equipment and resources such as servers, storage and networking, as well as services such as load balancing and content delivery networks. The IaaS provider owns and maintains the equipment while the organization rents the specific services it needs, usually on a "pay as you go" basis. Today, the question is less about whether or not to use IaaS services and more about which IaaS provider to use.
Now, with this background on what cloud computing is and what the big names offer, let us turn to the company that played a big role in making this technology common, cost-effective and mainstream: Google.
Google's offering, called Google Compute Engine, is also an IaaS service. Google Compute Engine is well suited for big data, data warehousing, high-performance computing and other analytics-focused applications. It is well integrated with other Google services, such as Google Cloud Storage, Google BigQuery and Google Cloud SQL. Although Google Compute Engine is still relatively new in the IaaS market, the fact that it runs on Google's global infrastructure, including the company's private global fiber network and high-efficiency data centres, sets it apart. Google announced Compute Engine on June 28, 2012, at Google I/O.
Google Compute Engine consists of resources. The beauty of these resources is that they are scoped, which means each is independent yet they work together as a cluster. They are:
1. Persistent Disks
Every Google Compute Engine instance starts with a disk resource called a persistent disk. Persistent disks provide straightforward, consistent and reliable storage at a predictable price.
2. Images
An image provides everything our instance needs to run. An image must be selected when creating an instance or when creating a root persistent disk. At the free tier, Google Compute Engine offers CentOS and Debian images. At the enterprise or premium level, there are Red Hat Enterprise Linux (RHEL) and Microsoft Windows Server 2008 R2.
3. Networks
A network defines the address range and gateway address of all instances connected to it. It defines how instances communicate with each other, with other networks, and with the outside world. Each instance belongs to a single network, and any communication between instances in different networks must go through a public IP address.
A network belongs to only one project, and each instance can only belong to one network. All Compute Engine networks use the IPv4 protocol. Compute Engine currently does not support IPv6. However, Google is a major advocate of IPv6 and it is an important future direction.
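As a rough illustration of how a network's address range determines whether two instances can reach each other internally, here is a small sketch using Python's standard `ipaddress` module. The CIDR ranges are made-up examples for illustration, not Compute Engine defaults:

```python
import ipaddress

def same_network(ip_a: str, ip_b: str, network_cidr: str) -> bool:
    """Return True if both instance IPs fall inside the given network range."""
    net = ipaddress.ip_network(network_cidr)
    return (ipaddress.ip_address(ip_a) in net and
            ipaddress.ip_address(ip_b) in net)

# Two instances inside the same (hypothetical) 10.240.0.0/16 network can use
# internal addresses; an instance outside it must be reached via a public IP.
print(same_network("10.240.0.2", "10.240.1.7", "10.240.0.0/16"))  # True
print(same_network("10.240.0.2", "10.241.0.5", "10.240.0.0/16"))  # False
```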
4. Firewalls
A firewall resource contains one or more rules that permit connections into or from an instance. Every firewall resource is associated with one and only one network.
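The idea that a connection is permitted only if some rule matches it (and blocked otherwise) can be sketched in a few lines of Python. This is a simplified model for illustration, not Compute Engine's actual rule format:

```python
from dataclasses import dataclass
import ipaddress

@dataclass
class FirewallRule:
    """A simplified firewall rule: allow a protocol/port from a source range."""
    source_range: str   # e.g. "0.0.0.0/0" means "from anywhere"
    protocol: str       # e.g. "tcp"
    port: int           # e.g. 80

def is_allowed(rules, src_ip: str, protocol: str, port: int) -> bool:
    """A connection is permitted only if some rule in the firewall matches it."""
    for rule in rules:
        if (protocol == rule.protocol and port == rule.port and
                ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule.source_range)):
            return True
    return False  # default deny: no matching rule, connection blocked

rules = [FirewallRule("0.0.0.0/0", "tcp", 80),      # allow HTTP from anywhere
         FirewallRule("10.240.0.0/16", "tcp", 22)]  # allow SSH from inside only

print(is_allowed(rules, "203.0.113.9", "tcp", 80))  # True
print(is_allowed(rules, "203.0.113.9", "tcp", 22))  # False
```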
5. Addresses
There are two types of address: an internal IP address and an optional external IP address. When an instance is created, an ephemeral external IP address is automatically assigned to it by default. This address stays attached for the life of the instance and is released once the instance is terminated.
6. Routes
Google Compute Engine offers a routes table to manage how traffic destined for a certain IP range should be routed. Similar to a physical router in a local area network, all outbound traffic is compared against the routes table and forwarded appropriately if the outbound packet matches a rule in the table.
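The matching behaviour described above can be sketched as a longest-prefix lookup, the convention physical routers use when a destination matches several routes. The route names and ranges here are hypothetical:

```python
import ipaddress

def pick_route(routes, dest_ip: str):
    """Compare a packet's destination against the routes table and return the
    next hop of the most specific (longest-prefix) matching route."""
    dest = ipaddress.ip_address(dest_ip)
    matches = [(ipaddress.ip_network(cidr), hop) for cidr, hop in routes
               if dest in ipaddress.ip_network(cidr)]
    if not matches:
        return None  # no matching route: the packet is dropped
    # The most specific route (longest prefix) wins, as in a physical router.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

routes = [("0.0.0.0/0", "default-internet-gateway"),  # catch-all route
          ("10.240.0.0/16", "internal-network")]      # internal traffic

print(pick_route(routes, "10.240.3.4"))  # internal-network
print(pick_route(routes, "8.8.8.8"))     # default-internet-gateway
```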
7. Regions & zones
A region refers to a geographic location of Google's infrastructure facility. Google Compute Engine is available in the central US, Western Europe and Asia East regions. A zone is an isolated location within a region. Zones have high-bandwidth, low-latency network connections to other zones in the same region.
8. Billing and discounts
If an instance is used for 50% of the month, one will get a 10% discount over the on-demand prices.
If an instance is used for 75% of the month, one will get a 20% discount over the on-demand prices.
If an instance is used for 100% of the month, one will get a 30% discount over the on-demand prices.
These discounts only take effect if the VM instance we created runs for more than 25% of the month.
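The discount tiers above can be sketched as a small Python function. Note that the tier boundaries follow this article's description, not Google's current published pricing:

```python
def discounted_price(on_demand_price: float, usage_fraction: float) -> float:
    """Apply the usage-based discount tiers described above:
    10% off at half-month usage, 20% at three-quarters, 30% for a full month.
    No discount applies at or below 25% usage."""
    if usage_fraction >= 1.0:
        discount = 0.30
    elif usage_fraction >= 0.75:
        discount = 0.20
    elif usage_fraction >= 0.50:
        discount = 0.10
    else:
        discount = 0.0  # below the 50% tier, no discount is earned
    return round(on_demand_price * (1 - discount), 2)

print(discounted_price(100.0, 0.50))  # 90.0
print(discounted_price(100.0, 0.75))  # 80.0
print(discounted_price(100.0, 1.00))  # 70.0
```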
9. Scope of resources
Remember how we mentioned earlier that in Google Compute Engine everything is scoped? The big picture of how this works is shown below.
(Diagram: resource scopes, showing zone-scoped resources such as machine types.)