This article offers a general overview of load balancing: what it is, how it works, and what benefits it brings.
What is a Load Balancer?
Load balancing is the process of distributing network traffic across multiple servers so that no single server has to take on too much work. By spreading work evenly, load balancing improves both the responsiveness of applications and their availability to users. A load balancer manages the flow of information between the servers and an end device (PC, laptop, tablet, or smartphone). The servers can be physical or virtual, on-premises, in a data center, or in the public cloud. The load balancer optimizes the use of application delivery resources and prevents server overload.
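The even distribution described above can be illustrated with a minimal round-robin sketch (the server addresses here are placeholders, and a real load balancer would of course forward actual network traffic rather than just pick a name):

```python
from itertools import cycle

# Hypothetical backend pool; the addresses are placeholders.
servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
next_server = cycle(servers)

def route_request() -> str:
    """Pick the next server in round-robin order."""
    return next(next_server)

# Six consecutive requests cycle evenly through the pool,
# so no single server receives a disproportionate share.
assignments = [route_request() for _ in range(6)]
print(assignments)
```

Round-robin is only the simplest strategy; weighted and least-connections variants refine it when servers differ in capacity.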
Load balancers perform continuous checks on servers to verify their health. If necessary, they remove failed servers from the pool until they recover.
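The health-check behavior can be sketched as follows; this is a toy model in which the probe is a caller-supplied function rather than a real network check:

```python
# Minimal sketch of a health-checked pool. The probe function is a
# stand-in for a real HTTP/TCP health check.
class HealthCheckedPool:
    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(servers)

    def run_checks(self, probe):
        """probe(server) -> bool; mark servers up or down accordingly."""
        for s in self.servers:
            if probe(s):
                self.healthy.add(s)      # recovered servers rejoin the pool
            else:
                self.healthy.discard(s)  # failed servers are excluded

    def available(self):
        return [s for s in self.servers if s in self.healthy]

pool = HealthCheckedPool(["web1", "web2", "web3"])
pool.run_checks(lambda s: s != "web2")  # simulate web2 failing its check
print(pool.available())                 # web2 is excluded until it recovers
```

Once a later check succeeds, the server is returned to the pool automatically, which matches the "exclude until restored" behavior described above.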
Security and Load Balancing
Load balancing plays an important role in security, especially in the cloud. The off-loading function of a load balancer defends an organization from DDoS attacks by diverting attack traffic from the corporate server to a public cloud provider. Software load balancers with cloud offloading thus provide efficient, cost-effective protection compared with more traditional defenses, such as a hardware firewall.
Load Balancing and SSL
We saw in the article "SSL Certificates" how the SSL/TLS protocol works. SSL traffic is often decrypted at the load balancer. Terminating SSL there spares the web servers the extra CPU cycles needed for decryption, significantly improving application performance.
The problem remains, however, that during this step the traffic between the load balancer and the web servers is no longer encrypted. This risk can be reduced by keeping the load balancer in the same data center as the web servers. An alternative is SSL pass-through, where the load balancer simply forwards the encrypted request to the web server, which performs the decryption itself. This uses more CPU power on the web server, but organizations that need stronger security may find the extra overhead worthwhile.
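The trade-off between the two modes can be made concrete with a toy model. To keep it self-contained, "encryption" here is just a string prefix, not real TLS; what matters is which component pays the decryption cost and whether the backend ever sees ciphertext:

```python
# Toy model contrasting SSL termination and SSL pass-through.
# The "enc:" prefix stands in for real encryption.
def decrypt(payload: str) -> str:
    return payload.removeprefix("enc:")

def backend(payload: str, encrypted: bool) -> str:
    if encrypted:
        payload = decrypt(payload)  # CPU cost paid by the web server
    return f"handled:{payload}"

def terminate_at_balancer(payload: str) -> str:
    """Balancer decrypts; backend receives plaintext. Less backend CPU,
    but balancer-to-backend traffic travels unencrypted."""
    plaintext = decrypt(payload)    # CPU cost paid by the balancer
    return backend(plaintext, encrypted=False)

def pass_through(payload: str) -> str:
    """Balancer forwards the ciphertext untouched; backend decrypts."""
    return backend(payload, encrypted=True)

print(terminate_at_balancer("enc:GET /"))  # handled:GET /
print(pass_through("enc:GET /"))           # handled:GET /
```

Both paths produce the same response; they differ only in where decryption happens and what the internal link carries, which is exactly the trade-off described above.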
Benefits of Load Balancing
Load balancing provides several benefits beyond managing network traffic. Software load balancers, for example, offer predictive analytics that identify traffic bottlenecks before they occur, an important step toward automation. As enterprises move toward the cloud, load balancer capacity is changing significantly, which presents both a challenge and an opportunity for infrastructure and operations leaders.
Hardware vs Software Load Balancers
Load balancers can operate as hardware devices or be software-defined. Hardware devices typically run proprietary software optimized for custom processors; as traffic increases, additional load balancing devices are added to manage the volume.
Software-defined load balancers, on the other hand, usually run on cheaper commodity hardware. In addition, installing the software in cloud environments often eliminates the need for a physical appliance altogether.
The benefits of the software solution include greater flexibility to adapt to different and changing needs, the ability to scale beyond initial capacity by adding software instances, lower costs from running on standard devices without purchasing and maintaining dedicated hardware, and the option of managing the solution through the cloud.
Conversely, scaling beyond the initial capacity may introduce minor delays while additional load balancer software is configured, and there are upgrade costs to consider.
A hardware solution offers greater raw speed, since the software runs on specialized processors, and increased security, since the organization physically controls the servers. On the other hand, it requires more specialized staff to configure the systems properly. There is also no ability to scale once the configured connection limit has been reached: further connections are rejected and service degrades until new machines are added. Finally, purchase and maintenance costs are clearly higher.
Types of Load Balancing
There are several types of load balancers. Let’s take a look at some of them together:
- SDN — Load balancing using software-defined networking (SDN) separates the control plane from the data plane for application delivery. This allows multiple load balancers to be controlled centrally and lets the network function like the virtualized versions of compute and storage. Network policies and parameters can be programmed directly, making networks more agile.
- UDP — A UDP load balancer makes use of the protocol of the same name: User Datagram Protocol (UDP). It is normally used in situations where speed matters, such as live broadcasts or online games.
- TCP — In this case, the Transmission Control Protocol (TCP) is used. It provides a reliable, error-checked stream of packets between IP addresses; without it, packets could easily be lost or corrupted.
- SLB — Server Load Balancing (SLB) uses a set of balancing algorithms to deliver network services and content, prioritizing responses to specific client requests on the network and ensuring consistent, high-performance application delivery.
- Virtual — Virtual load balancing aims to mimic software-driven infrastructure through virtualization.
- Elastic — Elastic Load Balancing scales traffic to an application as demand changes over time. Through system health checks it knows the status of application pool members and routes traffic in the most appropriate manner to available servers. It can manage failover to high-availability targets or automatically spin up additional capacity.
- Geographic — Geographic load balancing redistributes application traffic among data centers located in different locations to ensure maximum efficiency and security.
- Multi-site — Also known as global server load balancing (GSLB), it distributes traffic among servers located in multiple sites or locations around the world. The servers can be on-premises or hosted in a public or private cloud. It is particularly relevant for rapid disaster recovery and business continuity when a disaster at one location renders a server unusable.
- Load Balancer as a Service (LBaaS) — Load Balancer as a Service (LBaaS) meets the agility and application-traffic needs of enterprises deploying a private cloud infrastructure.
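The SLB entry above mentions balancing algorithms; least-connections is one classic example. The sketch below is illustrative (server names are placeholders), tracking active connections and routing each new request to the least-loaded server:

```python
# Sketch of the least-connections algorithm, one of the classic
# strategies a server load balancer might use.
class LeastConnections:
    def __init__(self, servers):
        # Track the number of active connections per server.
        self.active = {s: 0 for s in servers}

    def acquire(self) -> str:
        """Route a new request to the server with the fewest
        active connections (ties broken by insertion order)."""
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def release(self, server: str) -> None:
        """Called when a connection to the server closes."""
        self.active[server] -= 1

lb = LeastConnections(["web1", "web2"])
first = lb.acquire()   # both idle, so the first server is chosen
second = lb.acquire()  # web1 is now busier, so web2 is chosen
lb.release(first)      # web1's connection closes
third = lb.acquire()   # web1 is least loaded again
print(first, second, third)
```

Unlike plain round-robin, this strategy adapts to uneven request durations: a server stuck on a slow request naturally receives fewer new connections.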