  1. #1

    What is the difference between virtualization & cloud?

    Hello Folks,

    I recently came across a forum thread where many people were talking about the advantages of cloud hosting. But after researching cloud for a while, I discovered that cloud is nothing but a cluster of multiple servers that scales resources according to requirements.

    But that can simply be done with, or simply called, virtualization, where we maintain a cluster and allocate hardware resources dynamically. So the question I had at that moment was: are the big cloud hosting giants fooling us by just creatively naming their products and selling them at a high cost, when such services could be offered by any hosting company with the required resources?

    No offense intended; I am not technically sound, but as a small customer who wants to upgrade to cloud, I had this thought/doubt about the service after researching a little. I hope your thoughts will help me understand cloud better.

    Thanks in advance.

  2. #2
    Join Date
    Mar 2017
    Posts
    67
    Quote Originally Posted by Olivia_Avid View Post
    So the question I had at that moment was: are the big cloud hosting giants fooling us by just creatively naming their products and selling them at a high cost, when such services could be offered by any hosting company with the required resources?
    Virtualization refers to the process of mimicking real physical hardware (a VPS relies on this). There are distinctions within that as well; for instance, para-virtualization relies on your operating system having drivers that facilitate using virtualized hardware. There are also things like "virtio", which are drivers for accessing I/O devices virtually. Basically, they're all trying to make your operating system run as efficiently as possible on virtual computer hardware.
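
    Just as a rough illustration (assuming a Linux guest; paths can vary), you can usually see whether a VM is using virtio by checking which kernel drivers its NICs and disks are bound to:

    Code:
    # Rough illustration (assumes a Linux guest): list which kernel drivers the
    # network interfaces and block devices are bound to. On a virtio-based VM you
    # would typically see "virtio_net" / "virtio_blk" show up here.
    import glob
    import os

    def device_drivers(pattern):
        for dev in sorted(glob.glob(pattern)):
            driver_link = os.path.join(dev, "device", "driver")
            if os.path.islink(driver_link):
                yield os.path.basename(dev), os.path.basename(os.path.realpath(driver_link))

    for name, driver in device_drivers("/sys/class/net/*"):
        print(f"net  {name}: {driver}")
    for name, driver in device_drivers("/sys/block/*"):
        print(f"disk {name}: {driver}")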

    As far as what "cloud" is... there's no set definition and there are plenty of hosts that use it wrong. I suppose the way I define the cloud is "what features and services are offered within the infrastructure the virtual machine runs in"... those things comprise "the cloud". A very basic example of this is how most of the big names have GUIs for adding a firewall in front of your VM (virtual machine). Even though that doesn't require additional physical equipment, it's still something that exists outside of the context of your virtual machine. I like to think that the "cloud" should be all of the various conveniences that come with deploying your virtualized machine within their infrastructure.

    Where does a VPS provider become a "Cloud Provider"? In my opinion it's just a matter of seeing how many extra "goodies" they add to improve the quality of your VPS hosting. Can you attach extra drives? Can you scale your VM yourself even if there isn't space on the current hardware? Can you deploy a private network interconnecting your VMs exclusively? The more of those types of things that make your VMs more easily integrated, the more "cloud"-like the service is, in my opinion (see the sketch at the end of this post).

    So in summary, "What is the difference between virtualization & cloud?"... virtualization refers to the job of presenting an operating system with everything it normally expects from physical hardware so that it can run. "Cloud" is, for the most part, a choose-your-own-adventure where you get to decide what is and is not a cloud.

    I think right now there are a small number of very expensive companies (AWS, Azure...) that offer so many extra goodies that they must surely be considered clouds. For every one of them, though, there are a lot that don't do anything special and just use the term for marketing (to be fair though, most consumers don't care).
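
    To make the "extra goodies via an API" idea concrete, here's a very rough sketch in Python. The base URL, endpoints and field names are all made up for illustration; real providers each have their own API, but the shape is usually along these lines:

    Code:
    # Hypothetical illustration only: the endpoints and fields below are made up,
    # but most "cloud" providers expose something along these lines.
    import requests

    API = "https://api.example-cloud.test/v1"   # placeholder base URL
    HEADERS = {"Authorization": "Bearer YOUR_TOKEN"}

    # Attach an extra 100 GB volume to an existing VM without touching the VM itself.
    requests.post(f"{API}/servers/vm-123/volumes",
                  json={"size_gb": 100}, headers=HEADERS, timeout=30)

    # Resize the VM to a bigger flavor; the platform decides which physical host it lands on.
    requests.post(f"{API}/servers/vm-123/resize",
                  json={"flavor": "4cpu-8gb"}, headers=HEADERS, timeout=30)

    # Put the VM on a private network shared only with your other VMs.
    requests.post(f"{API}/networks/private-net-1/attachments",
                  json={"server_id": "vm-123"}, headers=HEADERS, timeout=30)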

  3. #3
    Join Date
    Oct 2007
    Location
    9.9N 76.2E , Planet Earth
    Posts
    1,003
    Cloud is all about abstraction. You don't need to think about the underlying hardware or network. Think of it like needing 100 watts to light a bulb: you get it from the grid and you pay for what you use. Similarly, you have a computational job, you get compute power (network/compute, etc.) from the cloud, and you pay for what you use.

    Virtualization is a technology that enables the creation of a cloud.
    A U T O M 8 N . C O M
    High Available webstack for cPanel
    Active-Active redundancy and High Availability plugin for cPanel

  4. #4
    Join Date
    Mar 2017
    Posts
    67
    Quote Originally Posted by gnusys View Post
    You don't need to think about the underlying hardware or network.
    Of course you do. Cloud doesn't magically make those things irrelevant.

  5. #5
    Join Date
    Oct 2007
    Location
    9.9N 76.2E , Planet Earth
    Posts
    1,003
    Quote Originally Posted by slicie View Post
    Of course you do. Cloud doesn't magically make those things irrelevant.
    Unless you are a cloud service provider, you should not be bothered about the underlying hardware. It should be cloudy. That's the whole point!
    A U T O M 8 N . C O M
    High Available webstack for cPanel
    Active-Active redundancy and High Availability plugin for cPanel

  6. #6
    Join Date
    Mar 2017
    Posts
    67
    Quote Originally Posted by gnusys View Post
    Unless you are a cloud service provider, you should not be bothered about the underlying hardware. It should be cloudy. That's the whole point!
    Unfortunately what you think is both the popular opinion and definitively incorrect. I don't know how you think cloud platforms work, but the underlying hardware is the foundation for its performance. Maybe you're envisioning a sort of "magical" cloud in which hardware is no longer a concept, but the reality is that hardware makes a tremendous amount of difference. I don't mind explaining the more technical aspects of that point, but it's hard to know where to begin. Unless these things are artificially limited, disk (network and local) IO performance in the VM is directly tied to the physical hardware, network latency is directly tied to the physical network, CPU thread performance is directly tied to the physical CPU... everything.

  7. #7
    Join Date
    Oct 2007
    Location
    9.9N 76.2E , Planet Earth
    Posts
    1,003
    Quote Originally Posted by slicie View Post
    Unfortunately what you think is both the popular opinion and definitively incorrect. I don't know how you think cloud platforms work, but the underlying hardware is the foundation for its performance. Maybe you're envisioning a sort of "magical" cloud in which hardware is no longer a concept, but the reality is that hardware makes a tremendous amount of difference. I don't mind explaining the more technical aspects of that point, but it's hard to know where to begin. Unless these things are artificially limited, disk (network and local) IO performance in the VM is directly tied to the physical hardware, network latency is directly tied to the physical network, CPU thread performance is directly tied to the physical CPU... everything.
    I think you may be talking about cloud performance. But the OP's question is the difference between cloud and virtualization. When someone asks you "what's the difference between a dedicated server and a VPS", will you say a dedicated server with an SSD is better than one with SATA? Of course the SSD is better, but that was not the question.
    A U T O M 8 N . C O M
    High Available webstack for cPanel
    Active-Active redundancy and High Availability plugin for cPanel

  8. #8
    Join Date
    Nov 2004
    Location
    Switzerland
    Posts
    1,307
    I work every day with clouds and cloud providers, so I can try to answer that.

    A cloud always provides a layer of virtualisation.

    But

    Not all virtual systems are a cloud.

    Example 1:
    I have a server with ESXi or Hyper-V and 10 virtual machines (VMs) on top of it. This is clearly a virtualised system. Is it a cloud? No. We have a single point of failure (the server).

    Example 2:
    I have 20 ESXi / Hyper-V servers.
    They are set up to form a single virtualization cluster.
    They don't have internal storage. They all boot from the SAN (using iSCSI or Fibre Channel (FC)).
    The storage system has multiple controllers that are clustered (example: HP 3PAR).
    The storage system replicates in real time to a standby system (synchronous replication: an IO operation is not considered finished unless it's committed to both arrays).
    There are at least 2 network switches / fabric switches (FC) [usually 4 or more].
    Hosts are using multipathing / MPIO.
    There are people and vendors with strict SLAs monitoring the health of the various components in real time. Everything is covered by 2- to 4-hour onsite warranties, 24/7. In some cases, there are resident Field Engineers who literally live in the data center, ready to intervene when needed.
    There is another remote location with other servers ready to take over in case of failure of the primary site. VMware SRM (Site Recovery Manager) automates the failover from one site to the other.


    Example 2 is a cloud. This setup is typically what you find at the heart of banks, utility companies, big retailers' head offices, some IT providers, etc. You can see that the virtualization is just part of the story. The system has many layers of redundancy to make sure no single failure can cause an outage.
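
    For what it's worth, here's a toy sketch (plain Python, purely illustrative, nothing like how a real array implements it in firmware) of the synchronous-replication rule mentioned above, where a write is only acknowledged once both arrays have committed it:

    Code:
    # Toy model of synchronous replication: a write is only ACKed once BOTH
    # arrays have committed it. Purely illustrative.
    class Array:
        def __init__(self, name):
            self.name = name
            self.blocks = {}

        def commit(self, lba, data):
            self.blocks[lba] = data
            return True

    def synchronous_write(primary, replica, lba, data):
        ok_primary = primary.commit(lba, data)
        ok_replica = replica.commit(lba, data)
        # Only acknowledge the host once both commits succeeded.
        return ok_primary and ok_replica

    primary, standby = Array("site-A"), Array("site-B")
    acked = synchronous_write(primary, standby, lba=42, data=b"payload")
    print("write acknowledged:", acked)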

    Any part can fail without causing the whole infrastructure to go down.

    The clouds I described above I have mostly seen running as private clouds (they belong to a single company).

    But it's possible to have a provider running such a cloud and selling virtual machines in it. You can then pay to have your own slice of that cloud. If everything goes well, the uptime is 100%. The only issues you can sometimes have (and that's why they pay me) are performance issues. When companies pay for a cloud like that, it costs them millions and they want to push it to its limits. Or sometimes the nature of the business puts a lot of pressure on the cloud. Retailers are very nervous about their cloud around Christmas, Black Friday, etc., because they know a lot of transactions are coming and they don't want to lose any.


    This is, my friend, in a broad picture, the difference between virtualization and a cloud. Feel free to ask if you have further questions.
    Last edited by FIAHOST; 08-17-2017 at 02:46 AM. Reason: precision
    Enterprise Consultant
    CCNP Enterprise - CCNP Security
    .:. Travels From West to East .:.

  9. #9
    Join Date
    Mar 2017
    Posts
    67
    Quote Originally Posted by gnusys View Post
    When someone asks you "what's the difference between a dedicated server and a VPS", will you say a dedicated server with an SSD is better than one with SATA? Of course the SSD is better, but that was not the question.
    The answer to that question, and OP's, still should not involve the suggestion that hardware is irrelevant. You say "of course the SSD is better", but do you understand that some clouds are not run on SSDs (similarly that not all SSDs are equally fast)? So why would you suggest that you don't need to think about it when it comes to cloud?

    I answered OP's question. I also commented on your incorrect suggestion. The whole line of thinking you put out there: "Think like you need 100 watts to light a bulb, you get it from the grid and you pay for what you use" ... is just not the reality of cloud hosting. It's not just overly simplified, it's a false equivalency.

  10. #10
    Join Date
    Mar 2017
    Posts
    67
    Quote Originally Posted by FIAHOST View Post
    Example 2:
    I have 20 ESXi / Hyper-V servers.
    They are set up to form a single virtualization cluster.
    ...
    Any part can fail without causing the whole infrastructure to go down.
    Just to clarify though, if the physical server in that cluster goes down in an outage, the VMs it was virtualizing for are *booted* from the shared storage on a different physical server. It doesn't magically continue running as if nothing happened, the OS boots up just the same as if the power was unplugged from any other VM. The advantage to your setup (which is certainly great) is that the VM immediately boots elsewhere.

    What you've described in example 2 has its use cases (very important things that can't allow for extended downtime), but it's still not bullet-proof (which is why as you point out, engineers are on standby).

    All that said, what you've described is an awesome setup (albeit expensive), but that sort of thing is mostly deployed privately.

  11. #11
    Join Date
    Nov 2004
    Location
    Switzerland
    Posts
    1,307
    Quote Originally Posted by slicie View Post
    Just to clarify though, if the physical server in that cluster goes down in an outage, the VMs it was virtualizing for are *booted* from the shared storage on a different physical server.
    Yes, that's right. But if you know that a host/node is having issues, you can use vMotion to migrate its VMs to another one. The good thing with that is you don't need to reboot them. It's just their live image in RAM that is moved across the local network. As their storage (disks) is not node-dependent, you don't need to copy it. All nodes in the cluster see the same shared storage.

    There is a solution, I have seen a demo of it, which allows you to have the same VM running on 2 hosts at the very same time. It's amazing! You can move the mouse and see the mouse pointer moving on the other VM. Their memory is copied in real time. In this case, you can survive a host crash without having to reboot the VM. But it's a hassle to deploy. Most customers I have worked with don't use it.

    One of the issues also is the creeping complexity. As more and more vendors get on board, you will see more and more systems coming into the equation. Therefore, when the system has an issue, it can take time to understand what's going on.
    Enterprise Consultant
    CCNP Enterprise - CCNP Security
    .:. Travels From West to East .:.

  12. #12
    Join Date
    Nov 2004
    Location
    Switzerland
    Posts
    1,307
    Quote Originally Posted by slicie View Post
    The answer to that question, and OP's, still should not involve the suggestion that hardware is irrelevant. You say "of course the SSD is better", but do you understand that some clouds are not run on SSDs (similarly that not all SSDs are equally fast)? So why would you suggest that you don't need to think about it when it comes to cloud?

    I answered OP's question. I also commented on your incorrect suggestion. The whole line of thinking you put out there: "Think like you need 100 watts to light a bulb, you get it from the grid and you pay for what you use" ... is just not the reality of cloud hosting. It's not just overly simplified, it's a false equivalency.
    Most clouds I have come across use a mix of SAS disks + SSDs. SSDs are used mostly for caching. They are too costly to store inactive data. You can pin critical VMs to the cache, which means you run them directly from SSDs (you pin their volume = their datastore).

    In the HP StorageWorks XP (from Hitachi), they have a board with DIMMs (RAM sticks) that are used for cache. They are even faster than SSDs. If I remember correctly, you can pin a volume to that cache, which means you run virtual machines from a bank of RAM!

    They have massive batteries that keep the RAM bank under sufficient voltage for a week in case of a power outage, so you don't lose your data.

    Amazing stuff! But they are expensive.
    Enterprise Consultant
    CCNP Enterprise - CCNP Security
    .:. Travels From West to East .:.

  13. #13
    Join Date
    Mar 2017
    Posts
    67
    Quote Originally Posted by FIAHOST View Post
    Yes, that's right. But if you know that a host/node is having issues, you can use vMotion to migrate its VMs to another one. The good thing with that is you don't need to reboot them. It's just their live image in RAM that is moved across the local network. As their storage (disks) is not node-dependent, you don't need to copy it. All nodes in the cluster see the same shared storage.
    I agree that's super cool. KVM allows for live migrations as well.
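
    For the KVM side, a minimal live-migration sketch using the libvirt Python bindings might look like this (host and VM names are made up, shared storage is assumed, and the right flags depend on your setup):

    Code:
    # Minimal live-migration sketch using the libvirt Python bindings.
    # Hostnames and the VM name are hypothetical; shared storage is assumed,
    # so only the RAM image moves between hosts.
    import libvirt

    src = libvirt.open("qemu+ssh://node1.example/system")
    dst = libvirt.open("qemu+ssh://node2.example/system")

    dom = src.lookupByName("web-vm-01")
    flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PERSIST_DEST

    # Copies the guest's memory to node2 while it keeps running, then switches over.
    dom.migrate(dst, flags, None, None, 0)

    src.close()
    dst.close()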

    Quote Originally Posted by FIAHOST View Post
    There is a solution, I have seen a demo of it, which allows you to have the same VM running on 2 hosts at the very same time. It's amazing! You can move the mouse and see the mouse pointer moving on the other VM. Their memory is copied in real time. In this case, you can survive a host crash without having to reboot the VM. But it's a hassle to deploy. Most customers I have worked with don't use it.
    While that's very neat and I'm sure there are high-availability uses for it, you have to imagine there are performance impacts from copying memory changes (synchronously, if it's to be safe) over the network. Thinking about it, you'd probably need 3 hosts to avoid a split-brain dilemma during an outage.

    As far as the RAM storage idea is concerned, that's obviously really awesome... but as you've pointed out, those are extremely expensive relative to the performance/value of NVMe drives. I'd still love to run one though, lol. Do you have a link to that product? I can't seem to find anything about it.
    Last edited by slicie; 08-17-2017 at 04:29 AM.

  14. #14
    Join Date
    Nov 2004
    Location
    Switzerland
    Posts
    1,307
    Here is a document about the HP XP 9500:

    http://h20565.www2.hpe.com/hpsc/doc/...ocLocale=en_US

    Check page 59: "The P9500 can be configured with up to 512 GB of cache memory per controller chassis (1024
    GB for a two-module system). The cache is nonvolatile and is protected from data loss with onboard
    batteries to backup cache data into the onboard Cache Backup Memory Solid States Disk drive."

    So here, RAM is used as cache; then, in case of power loss, the system has batteries that give it time to copy the contents of the cache to SSD drives.

    When you reboot after a power loss, it's a MESS.

    Because the system has to copy data back from SSD to RAM, check its integrity, check the integrity of disks, volumes, etc. It could potentially take hours.
    Enterprise Consultant
    CCNP Enterprise - CCNP Security
    .:. Travels From West to East .:.

  15. #15
    Join Date
    Mar 2017
    Posts
    67
    Quote Originally Posted by FIAHOST View Post
    Here is a document about the HP XP 9500:

    http://h20565.www2.hpe.com/hpsc/doc/...ocLocale=en_US

    Check page 59: "The P9500 can be configured with up to 512 GB of cache memory per controller chassis (1024
    GB for a two-module system). The cache is nonvolatile and is protected from data loss with onboard
    batteries to backup cache data into the onboard Cache Backup Memory Solid States Disk drive."

    So here, RAM is used as cache; then, in case of power loss, the system has batteries that give it time to copy the contents of the cache to SSD drives.

    When you reboot after a power loss, it's a MESS.

    Because the system has to copy data back from SSD to RAM, check its integrity, check the integrity of disks, volumes, etc. It could potentially take hours.
    That's pretty crazy. Firstly, I'd think that for the physical size of such a device it could handle more than 512GB of data in RAM. Secondly, you'd not expect it to be so slow at recovering. You'd think for the cost it could have a couple of fast enterprise NVMe (or even just decent SSD) drives. Copying 512GB of data isn't that big of a deal, especially when done sequentially.

    Anyway, it's hard to imagine the use case for such a device. My only guess is they came up with this idea before NVMe drives got really fast.

  16. #16
    Join Date
    Nov 2004
    Location
    Switzerland
    Posts
    1,307
    Quote Originally Posted by slicie View Post
    That's pretty crazy. Firstly, I'd think that for the physical size of such a device it could handle more than 512GB of data in RAM. Secondly, you'd not expect it to be so slow at recovering. You'd think for the cost it could have a couple of fast enterprise NVMe (or even just decent SSD) drives. Copying 512GB of data isn't that big of a deal, especially when done sequentially.

    Anyway, it's hard to imagine the use case for such a device. My only guess is they came up with this idea before NVMe drives got really fast.

    NVMe disks are new to the market. They are not yet integrated into these massive data storage systems. The development cycle for the hardware is slow and there are many challenges when you want your system to be enterprise-ready.

    I have seen an array with 1 TB of RAM. But this RAM is not the storage, it's the cache. It caches frequently accessed areas of volumes in order to serve them faster. There are algorithms that detect patterns in seemingly random IO traffic and cache data or even read ahead. In a normal volume, usually less than 5% of the data is really hot and accessed frequently. Imagine you store this forum in a database that sits on top of a volume. Most people visit the main page and interact with new posts. This means that only a small part of the data that makes up this forum is "hot". The rest is less frequently accessed.

    In terms of storage, these systems can scale to over 1 petabyte!

    They are not supposed to crash, be stopped or get rebooted. However, if a crash happens, there are integrity checks that need to take place. That's why a recovery can take a long, long time. When I used to work on similar systems, we used to offer the customer the option to bypass the integrity checks. It's their responsibility if they want us to skip the checks. In 99% of cases, they skip them. I have never seen any issue or data corruption due to skipping integrity checks, but in theory it can happen.
    Enterprise Consultant
    CCNP Enterprise - CCNP Security
    .:. Travels From West to East .:.

  17. #17
    Join Date
    Mar 2017
    Posts
    67
    Quote Originally Posted by FIAHOST View Post
    I have seen an array with 1 TB of RAM. But this RAM is not the storage, it's the cache. It caches frequently accessed areas of volumes in order to serve them faster. There are algorithms that detect patterns in seemingly random IO traffic and cache data or even read ahead. In a normal volume, usually less than 5% of the data is really hot and accessed frequently. Imagine you store this forum in a database that sits on top of a volume. Most people visit the main page and interact with new posts. This means that only a small part of the data that makes up this forum is "hot". The rest is less frequently accessed.

    They are not supposed to crash, be stopped or get rebooted. However, if a crash happens, there are integrity checks that need to take place. That's why a recovery can take a long, long time. When I used to work on similar systems, we used to offer the customer the option to bypass the integrity checks. It's their responsibility if they want us to skip the checks. In 99% of cases, they skip them. I have never seen any issue or data corruption due to skipping integrity checks, but in theory it can happen.
    Well, so I guess the point is that when you said:

    "When you reboot after a power loss, it's a MESS. Because the system has to copy data back from SSD to RAM, check its integrity, check the integrity of disks, volumes, etc. It could potentially take hours."

    ... I doubt that it even needs a process for copying anything back into the cache (why bother, it will be rebuilt naturally during use). The likely reason it's a mess is that it's a huge storage system, and recovering from crashes of huge storage systems is always a mess. Do you think it's any slower because it has RAM in the cache vs having SSDs backing the cache?

    Really, I think the mess you're describing comes from having a system that is responsible for so much storage (petabytes, as you point out).

    I'm wondering what percentage of private clouds you see use exclusively shared storage. It seems like a lot of this redundancy is better handled at the application layer with local drives.

  18. #18
    Join Date
    Nov 2004
    Location
    Switzerland
    Posts
    1,307
    Quote Originally Posted by slicie View Post
    (why bother, it will be rebuilt naturally during use)
    It's a matter of commit, or of when you ACK the write to the OS layer.

    On these systems, once the write operation is in the cache, the OS gets back an acknowledgement that the data is committed to disk (search: write-through vs write-back). This accelerates performance, but it comes at a cost: if the cache is lost or corrupted, your volumes are potentially corrupted.

    That's why this cache is so critical. When the system is unexpectedly powered down, it has batteries to keep the cache running long enough for the system to copy the RAM to SSDs. It can't commit the cache to the main disks because they are not running at that time.

    Now, when you restart it, it has to copy data from SSD to RAM, check its integrity, commit it to the disks, check the integrity of the disks... and this can take time.

    On some other systems, the cache is not that critical. If it is lost, it will be discarded and rebuilt later. These systems work in write-through mode, meaning they don't ACK the write until it's effectively on the disks or in some non-volatile storage. They are slightly slower but more resilient.

    The write-back systems provide much lower latency, but with the risk of losing integrity in case of a crash. Usually they have batteries, redundant power supplies, redundant power feeds...
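
    A toy sketch of that trade-off (nothing to do with the real firmware, just to illustrate the ACK semantics):

    Code:
    # Toy illustration: with write-through, the write is only ACKed after it
    # reaches the disks; with write-back, it is ACKed as soon as it sits in the
    # (battery-backed) cache and is flushed later.
    class ToyArray:
        def __init__(self, write_back=True):
            self.write_back = write_back
            self.cache = {}      # fast, volatile without batteries
            self.disks = {}      # slow, persistent

        def write(self, lba, data):
            self.cache[lba] = data
            if self.write_back:
                return "ACK"     # low latency, but dirty data lives only in cache
            self.flush()         # write-through: pay the disk latency now
            return "ACK"

        def flush(self):
            self.disks.update(self.cache)

        def power_loss(self):
            # Whatever never made it to the disks is at risk if the cache is lost.
            return {lba: d for lba, d in self.cache.items() if self.disks.get(lba) != d}

    a = ToyArray(write_back=True)
    a.write(7, b"order#1001")
    print("dirty blocks at crash time:", a.power_loss())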

    An HP XP such as the P9500 can deliver more than 500'000 IO operations per second at a very low latency. You cannot approach that with local disks / raids.
    Enterprise Consultant
    CCNP Enterprise - CCNP Security
    .:. Travels From West to East .:.

  19. #19
    Join Date
    Mar 2017
    Posts
    67
    I think you're confusing which direction the data is copied in. When the system comes back up after losing power, it has to wait for the storage drives to be powered up before it can begin copying data from the RAM to the disks... not the other way around. Even if it "primes" the cache for reads, that would be a very fast part. I think you're just seeing how long it takes to do integrity tests on the RAID drives (and choosing to skip that). I don't think the fact that there's RAM caching in front of the drives slows it down.

    Quote Originally Posted by FIAHOST View Post
    An HP XP such as the P9500 can deliver more than 500'000 IO operations per second at a very low latency. You cannot approach that with local disks / raids.
    There's no way the P9500 has lower-latency IO than a local NVMe drive. As far as the 500k IO/sec, I think you're getting beaten there by local NVMe for a lot of use cases (where single-threaded synchronous IO is important). Even if you're doing shared storage over fibre, you're still going to bottleneck on things that NVMe drives outperform nowadays.

    It sounds like you're describing a lot of cost, at the expense of performance, to try to mitigate downtime for applications which should handle that at the application layer. I'd be willing to bet, for instance, that MongoDB would never recommend deploying a replica set on the P9500 over local NVMe (or even, frankly, regular SATA SSD) drives.

  20. #20
    Join Date
    Nov 2004
    Location
    Switzerland
    Posts
    1,307
    These systems are very complex. Let me clarify:

    - When there is a power loss, all the shelves with disks stop immediately. The SAS disks spin down and stop responding.

    The backup batteries provide power to:

    - The RAM cache
    - The recovery SSDs.

    The recovery SSDs are used only in case of power loss. They are not used otherwise. The reason SSDs are used is that they are fast to write to and require less battery power.

    Now, your array loses power. Immediately, the system starts copying the cache RAM (up to 1 TB of data or more) to the recovery SSDs.


    When you power up, the opposite happens: the data is copied from the SSDs back to the RAM. The RAM is repopulated. Now, you need to actually commit this data to the storage disks in order to make sure your volumes are consistent. You have to do this before you start your hosts.

    You can't replace that with an NVMe drive. We are talking about hundreds of TBs that are stored and need to be accessed within 1 ms. You need a system that is resilient in order for it to qualify as a cloud. A server with an NVMe disk is not a cloud.

    I don't know what the best practices for MongoDB are. But the customers we work with all the time run MS SQL (the majority), Oracle (a lot), and MySQL (rarely).
    Enterprise Consultant
    CCNP Enterprise - CCNP Security
    .:. Travels From West to East .:.

  21. #21
    Join Date
    Mar 2017
    Posts
    67
    Quote Originally Posted by FIAHOST View Post
    These systems are very complex.

    You can't replace that with an NVMe drive. We are talking about hundreds of TBs that are stored and need to be accessed within 1 ms. You need a system that is resilient in order for it to qualify as a cloud. A server with an NVMe disk is not a cloud.

    I don't know what the best practices for MongoDB are. But the customers we work with all the time run MS SQL (the majority), Oracle (a lot), and MySQL (rarely).
    (let's not imply 1ms is low latency these days)

    Applications can handle failing parts and eliminate single points of failure within the application layer. Databases do this with replica sets. So your point about resiliency is irrelevant for an application which handles it.

    As far as the problem of storing "hundreds of TB" of data, that's again solved with sharding in the application layer.

    I'd take a replica set, or sharded replica sets, spread across many different physical machines over what you're deploying. You'd get better performance for a lot less money, and a lot less complexity. There may be use cases where sharding is not possible, but I'd still maintain that that's almost certainly a result of poor design.
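
    For example (hostnames and the replica set name here are hypothetical), with pymongo the driver handles primary failover, and a majority write concern gives you the application-layer analogue of the array's synchronous replication:

    Code:
    # Sketch of handling redundancy at the application layer instead of the
    # storage layer: connect to a MongoDB replica set and let the driver fail
    # over to a new primary if a node (or its local disk) dies.
    from pymongo import MongoClient, WriteConcern

    client = MongoClient(
        "mongodb://db1.example:27017,db2.example:27017,db3.example:27017/"
        "?replicaSet=rs0"
    )

    # w="majority": the write is only acknowledged once a majority of members
    # have it, so losing one node (and its local drives) loses no acknowledged data.
    orders = client.get_database(
        "shop", write_concern=WriteConcern(w="majority")
    ).orders

    orders.insert_one({"order_id": 1001, "total": 49.90})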

    Now if your intention is to move the goal posts of what a cloud is to only systems with centralized shared storage, then we'll just have to agree to disagree. I can't think of a single cloud feature that requires shared storage.

    I think as a Cisco person you'd know the old "Keep it simple, stupid". I'm sure you can imagine how it applies to this storage method.

  22. #22
    Hello All,

    I want to thank you all for participating and helping me to understand the architecture of cloud.

    But still, my question remains unanswered. Should we really pay extra for a technology which is itself not clear? I think cloud (built using virtualization) is only the next version of web hosting technology, just like we moved to plasma and LED TV screens after CRTs.

    We should not pay five times the price of web hosting for this so-called cloud. It seems like just hype.

  23. #23
    Join Date
    Mar 2016
    Location
    Netherlands
    Posts
    32

    Quote Originally Posted by Olivia_Avid View Post
    ....
    But still, my question remains unanswered. Should we really pay extra for a technology which is itself not clear...
    We should not pay five times the price of web hosting for this so-called cloud. It seems like just hype.
    Hi. I will attempt to answer this for you. I have been doing infrastructure and operations for the last 20 years, and for the last 5 years I have been helping companies with cloud architecture and deployments. I will try to explain the whole cloud thing to you.

    So what is the difference between virtualization and cloud?
    The short answer would be .. cloud takes virtualization to a whole new level ..

    Think of rechargeable batteries .. .. and then Tesla ..

    Let me elaborate this further with an example ..

    pre-virtualization

    Say I am the manager of a typical software/game/SaaS company. Let's assume that I run something like wordpress.com, where people can come in, pay, make an account and use the product.

    I have, say, 2 developers (who make the games/software/SaaS) -- they know just PHP/MySQL
    I have, say, 1 sysadmin (who knows Linux and runs the server)
    I have, say, 1 network admin (who knows routers and switches)

    I pay them salaries as well ($$).

    Now say the developers need a test machine, or the sysadmin sees the server is full: we need to order another server ($$). Now I have to check my finances, place an order at Dell/HP/Supermicro, wait 1 month for shipment and delivery to the data center, then send the network admin to allocate ports and fix the networking for the new server(s), and then send the sysadmin to rack the servers and configure them. So 1 month just for this .. And hardware, once purchased, if it's a test machine, it is a test machine .. and if it's for production, it's for production .. I cannot mix and match .. so maybe I have to buy 2 machines now. So my test server is not used in the evenings or at weekends, as the developers are not working at those times. And hey .. depreciation is there as well.


    virtualization
    Now with virtualization, the developers can use VirtualBox to test systems on their own laptops. The sysadmins can use KVM/VMware on the server to slice it into multiple independent servers, and I do not have to buy more servers until the existing ones are properly full. There is no 1-month wait time. Everyone is happy.
    One server (hypervisor) can now run multiple virtual machines -- Windows, Linux, BSD, etc. This is CPU/compute virtualization.

    But you know .. human nature .. we are never happy .. we always want more and more ..

    So after CPU/compute virtualization, cloud developers started to check whether the other parts of the server could also be virtualized .. the storage and the network.

    For a server to be in service, it needs the OS/CPUs, hard disks and an IP address. Take VirtualBox as an example .. you can have multiple OSes .. but the disk and networking are still your laptop's. So developers moved on to virtualize storage and networking.

    cloud
    So the underlying concept of the cloud that open source software like OpenStack/CloudStack/Proxmox provides is virtualized compute, storage and networking.

    So what this does is: OpenStack/CloudStack/Proxmox presents you with the cloud view ..

    Assume you have 30 servers with 4 CPUs each and 2x 2TB disks = 120 CPUs .. say you want to sell at a 1:4 ratio = 120 x 4 = 480 vCPUs.
    Now what you get is a unified view of all the CPU and storage of the 30 servers combined.
    Cloud capacity: 480 vCPUs
    Cloud storage: 30 x 2 x 2 TB = 120 TB / 3 = 40 TB [I divide by 3 because you want to save data on at least 3 disks independently]

    So now you have 480 vCPUs and 40 TB of space that you can use. Based on which cloud software you select, you can create instances from one place and you no longer care which servers they live on. They may live on any of the 30 servers. If one of the servers dies, you do not care .. the instance simply migrates to another server and you can plan to send the networking and sysadmin guys to replace the server at a later time.
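
    Written out as a quick back-of-the-envelope calculation (same numbers as above):

    Code:
    # Same capacity math as above, just written out.
    servers = 30
    cpus_per_server = 4
    disks_per_server = 2
    disk_tb = 2
    oversell_ratio = 4        # sell 4 vCPUs per physical CPU
    replicas = 3              # keep every block on 3 independent disks

    vcpus = servers * cpus_per_server * oversell_ratio
    usable_tb = servers * disks_per_server * disk_tb / replicas

    print(f"{vcpus} vCPUs, {usable_tb:.0f} TB usable")   # 480 vCPUs, 40 TB usable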

    Now, this is from an operator perspective .. the thing explained above ^^ is for when you maintain the cloud infrastructure. As a manager, I could not be happier .. I can now have a dashboard where I can see and visualize all parts of my cloud platform, and none of the servers go idle .. as it's one big resource pool. An instance can be moved from one server to another .. and it brings its storage and IP address with it. Now, I am talking about just 30 servers here .. but think of this from a bigger company's perspective ... say 1000 servers in each of 3 different datacenters ..

    Is that all? Definitely not ... the best is yet to come ..

    So what a cloud platform like OpenStack provides is an API layer that allows you to treat infrastructure as code.
    For example: let's assume I sell a platform that always has 4 servers

    1x load balancer with public IP address
    3x web servers with mysql (galera) in internal network.

    Say I sell this .. that means the developers need to develop on this and the QA team needs to test on this.
    Assume I want the load balancer to be 10.10.10.10 and the 3 servers to be 10.10.10.11, 10.10.10.12 and 10.10.10.13.

    Even with virtualization and click-click-click, this is a lot of clicking to do. Create 4 servers, add SSH keys, create network ports, link the networks, run some Ansible when it's done, etc. Imagine I have to demo this to 30 people per day .. I have to find someone to click-click all day.

    What if I could write just 1 text file and it got created exactly the same .. every time, no matter who runs it? Well, that is the true power of the cloud .. as a power user.
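
    As a rough sketch of that idea using the OpenStack SDK for Python (the cloud name, image, flavor, network and key names are all hypothetical; in practice you would more likely hand a Heat or Terraform template to the API, but the principle is the same -- describe it once, recreate it anywhere):

    Code:
    # Rough sketch: describe the environment once and recreate it on demand.
    # Assumes a cloud named "mycloud" in clouds.yaml; names/IDs are hypothetical.
    import openstack

    conn = openstack.connect(cloud="mycloud")

    image = conn.compute.find_image("ubuntu-20.04")
    flavor = conn.compute.find_flavor("m1.small")
    network = conn.network.find_network("internal-net")

    servers = []
    for name in ["lb-01", "web-01", "web-02", "web-03"]:
        server = conn.compute.create_server(
            name=name,
            image_id=image.id,
            flavor_id=flavor.id,
            networks=[{"uuid": network.id}],
            key_name="demo-key",
        )
        servers.append(conn.compute.wait_for_server(server))

    print([s.name for s in servers])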

    <<snipped>>

    I hope I have been able to explain. Please feel free to ask if some points are not clear and I can explain more.


    Cheers
    Last edited by Postbox; 11-19-2017 at 11:01 PM.
    * cloudtoko - cloud consultation, setup, automation, integration
    * For ISPs and Hosting Companies - OpenStack cloud without recurring costs
    * No Vendor Lock-In, No Monthly cloud costs, No licensing costs -- all open source
    * We setup together with you, you take control .. you operate .. we support when you need us

  24. #24
    Join Date
    Mar 2016
    Location
    Netherlands
    Posts
    32
    continuing from above: [text snipped due to us being new to WHT and its rules]

    .. so another benefit of the cloud is Infrastructure as Code.

    So what this means is that instead of clicking around to create infrastructure, you just create one text file and, with 1 click .. your whole infrastructure is up.

    here is an example:

    Suppose you want to sell a managed services package.
    Where you sell

    1x virtual router (for VPN/gateway)
    1x cPanel/web server
    1x anti-spam gateway
    1x backup server
    1x VoIP server (PABX)

    where the default gateway of all the servers will be the virtual router...

    so with cloud .. you write 1 text file .. and then whenever an order comes in and is validated, the infrastructure is configured and delivered to the customer, automatically. So instead of people taking weeks to deliver, all they have to care about is perfecting 1 text file to automate this process. And this is the case of just 5 servers .. imagine company applications that can easily be 50 servers with complex interconnections between them. So with the cloud, your developers do not care about the underlying virtualization being used .. if there is a cluster of 50 servers to be created, well, based on which hypervisor they land on, some could be on Hyper-V, some on VMware, some on KVM, etc .. but you get the resources you want, in all places. This is thanks to standardization. Because the complexity of creating infrastructure is gone, you can now focus the time and money saved on the other stuff that you should really be focusing on.


    Now to the question.. should you buy a cloud server ?

    The answer is .. it depends on what aspects of the cloud you can and want to utilize and also what your goals are .. what you intend to develop on top of it.

    Regular cPanel hosting -- I would go with a dedicated server with SLAs.

    Dynamic website: if you have a website that gets, say, 100 visitors at night but 100,000 during the day .. then this is a perfect use case for cloud hosting, where at night you have one server and, as load grows on that server, new servers automatically pop up to handle it .. and when the visits start to decrease, the servers automatically destroy themselves .. while you are billed per minute for the whole thing. But you have to set this up yourself, and your cloud provider's platform must support auto-scaling.
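
    A rough sketch of the auto-scaling logic (the metric source and the scale-out/scale-in helpers are hypothetical stand-ins for whatever your provider or your own orchestration exposes):

    Code:
    # Sketch of the auto-scaling idea: watch a load metric and add/remove servers.
    import random
    import time

    def current_requests_per_server():
        # Hypothetical metric source; replace with your monitoring system.
        return random.randint(0, 1000)

    fleet_size = 1

    def scale_out(count):
        # Hypothetical stand-in for your provider's "create server" API call.
        global fleet_size
        fleet_size += count

    def scale_in(count):
        # Hypothetical stand-in for your provider's "delete server" API call.
        global fleet_size
        fleet_size = max(1, fleet_size - count)

    HIGH, LOW = 800, 200          # requests/sec per server thresholds

    for _ in range(5):            # a real loop would run forever
        load = current_requests_per_server()
        if load > HIGH:
            scale_out(1)          # a new server pops up; you pay per minute it exists
        elif load < LOW:
            scale_in(1)           # servers destroy themselves when traffic drops
        print(f"load={load} req/s, servers={fleet_size}")
        time.sleep(1)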


    PaaS/SaaS - cloud .. but do ask the cloud provider what they are using .. as there is not a lot of cloud platform software out there. The most popular open source one is OpenStack, followed by CloudStack. Minus the APIs and automation, next is Proxmox. And then there are commercial ones.

    Ask the provider what cloud platform they use and what features it has. This will help you understand what you get, so you can make an informed decision.


    Cheers
    * cloudtoko - cloud consultation, setup, automation, integration
    * For ISPs and Hosting Companies - OpenStack cloud without recurring costs
    * No Vendor Lock-In, No Monthly cloud costs, No licensing costs -- all open source
    * We setup together with you, you take control .. you operate .. we support when you need us

