The last time I built a server, I bought the hardware myself, installed Linux on it, and set it up the way I wanted. That was in the early 2000s, more than a decade ago, and it is my last memory of working with servers.
Since then, concepts such as hosting, virtual servers, utility computing, and the cloud have appeared one after another, and servers have grown increasingly unfamiliar to me. So I studied these concepts and organized them in my own way.
I have listed them in the order in which the concepts appeared.
1. Physical Server
In the beginning, when the client-server architecture first emerged, developers had to build servers themselves. This meant buying your own server hardware and setting it up on your own.
You installed a server OS on the purchased machine, then installed the servers you needed (for example, an HTTP server, an FTP server, and so on). Building a server this way incurred a large up-front cost.
Every time you created an independent service, you had to purchase an additional physical server.
For five independent services, five separate servers had to be purchased.
There is waste here: the five servers do not always run at 100% utilization. If a service keeps its physical server at 5% utilization, the remaining 95% of that server sits idle and is wasted.
And that wasted capacity is far from cheap.
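To put the waste in rough numbers, here is a small sketch using the 5% utilization figure from the example above (the figures are illustrative, not measurements):

```python
# Idle capacity across five dedicated physical servers,
# each running a single service at 5% utilization.
NUM_SERVERS = 5
UTILIZATION = 0.05  # each service uses 5% of its server

total_capacity = NUM_SERVERS * 1.0       # capacity in "servers' worth"
used = NUM_SERVERS * UTILIZATION
idle = total_capacity - used

print(f"used: {used:.2f} servers' worth, idle: {idle:.2f} servers' worth")
# of the 5 purchased servers, 4.75 servers' worth of capacity sits idle
```

In other words, you paid for five machines but are effectively using a quarter of one.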
To address this, virtual machines (VMs), which virtualize multiple servers on a single physical server, began to be introduced.
2. Virtual Server
To overcome the waste of physical servers, virtual servers were introduced. A virtualization layer on top of the physical server has the same effect as having multiple computers.
For example, one physical server can host five virtual servers, making it look as though five independent servers are running. (That is, there is one physical server, but from the outside it appears to be five.)
The advantages of doing this are as follows.
- Cost: compared with five separate physical servers, the cost drops to one-fifth.
- Performance: when utilization is low (for example, five virtual servers each using only 5%, or 25% in total), they can all run on one physical server with no performance degradation.
(However, if every virtual server needs 100% utilization, performance will be worse than running each on its own physical server.)
This lets server owners carve out idle resources through the virtualization layer and think about how to put the remaining capacity to use.
3. Utility Computing
Utility computing is defined as follows:
- A model in which you pay for computer hardware or software according to how much you use it.
The idea is to package idle resources as independent virtual servers and offer them as a paid service.
It is not a technical advance in itself, but a concept for monetizing the idle resources made available by virtual servers.
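The core of the model is metered, pay-per-use billing. A sketch of the idea, with a made-up rate and made-up usage figures:

```python
# Illustrative pay-per-use billing, the core idea of utility computing.
RATE_PER_CPU_HOUR = 0.05  # hypothetical price in dollars per CPU-hour

def bill(cpu_hours_used):
    """Charge for metered usage rather than for owned hardware."""
    return cpu_hours_used * RATE_PER_CPU_HOUR

print(f"${bill(120):.2f}")  # 120 CPU-hours -> $6.00
```

The customer pays only for what they consume, while the provider earns revenue from capacity that would otherwise sit idle.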
Virtual servers made it possible to use idle resources fairly efficiently, but constraints remained.
Because each virtual server's share of the machine is fixed, a virtual server capped at, say, 20% of the physical server cannot use more resources even when it needs them.
To overcome this, the cloud was introduced: a service that makes idle resources available flexibly depending on each server's load. For example, virtual servers fixed at 20% / 20% / 20% / 20% / 20% can never exceed 20% each, but with flexible balancing the allocation can shift to, say, 5% / 95% / 0% / 0% / 0% according to actual usage.
(Even if one virtual server is removed, the idle resources it leaves behind become available to the other virtual servers.)
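The contrast between fixed shares and cloud-style flexible balancing can be sketched as follows (both functions and their names are illustrative, not any real scheduler's API):

```python
# Fixed virtualization shares vs. flexible balancing on one physical host.
def fixed_allocation(demands, share=0.20):
    # Each virtual server is capped at its fixed share; excess demand is unmet.
    return [min(d, share) for d in demands]

def flexible_allocation(demands, capacity=1.0):
    # Virtual servers may draw on any idle capacity, up to the host's total.
    granted, remaining = [], capacity
    for d in demands:
        g = min(d, remaining)
        granted.append(g)
        remaining -= g
    return granted

demands = [0.05, 0.95, 0.0, 0.0, 0.0]
print(fixed_allocation(demands))     # the 95% demand is capped at 20%
print(flexible_allocation(demands))  # idle capacity covers the full 95%
```

Under fixed shares, the busy server is starved while its neighbors idle; under flexible balancing, the same hardware serves the full demand.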
Today, such cloud-based hosting has become mainstream. Amazon Web Services (AWS) and Microsoft Azure are two of the major offerings.