If you like DNray Forum, you can support it by - BTC: bc1qppjcl3c2cyjazy6lepmrv3fh6ke9mxs7zpfky0 , TRC20 and more...

 

Choosing a platform for IT infrastructure virtualization

Started by missveronica, Mar 29, 2023, 07:19 AM

Previous topic - Next topic

missveronica (Topic starter)

The project aims to establish a well-managed and predictable platform for virtualizing basic IT services and processes within a small enterprise (10 Servers, 150 PCs). To accomplish this, I would appreciate advice from experienced individuals who have already implemented similar projects and encountered potential challenges.

Our company operates almost 24/7, with at least 50 active 1C users, 100 active Internet users, and additional specific services that are better discussed separately. These services are not policy-defining and can be implemented separately from the main infrastructure.

In terms of service criticality:
- All services, including 1C, should not experience more than one interruption per day and should be available within 5-10 minutes after a failure.
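To put that criticality target in perspective, a quick back-of-the-envelope calculation shows what availability it implies (function and variable names here are illustrative, not from the thread):

```python
# Rough availability implied by the stated requirement:
# at most one interruption per day, service restored within 5-10 minutes.
MINUTES_PER_DAY = 24 * 60

def implied_availability(outages_per_day: int, minutes_per_outage: float) -> float:
    """Fraction of the day the service is up, assuming worst-case recovery time."""
    downtime = outages_per_day * minutes_per_outage
    return 1 - downtime / MINUTES_PER_DAY

# Worst case: one 10-minute outage every single day.
worst = implied_availability(1, 10)
print(f"{worst:.4%}")  # about 99.3% availability
```

Roughly "two nines" of daily availability, which is achievable with plain HA restarts and does not by itself demand continuous fault tolerance.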

The planned hardware consists of two servers on the same platform, differing in processor count, memory, and storage. The first server will have 2x E5-2630 CPUs, 64 GB RAM, and an 8-channel controller with 4x SAS + 4x SSD drives; the second will have 1x E5-2630 CPU, 32 GB RAM, and an integrated 4x SATA controller.

Our network infrastructure is solid: a 1 Gbit/s server farm and three "user" switches (two 48-port 100 Mbit/s and one 24-port 1 Gbit/s). We also have a small number of wireless clients, mostly non-critical.

Currently, our structure includes several servers with various roles and our plans involve virtualization. These include:
1. Win Domain Controller, DHCP, DNS (mandatory candidate for virtualization)
2. Win Backup Domain Controller, Print Server, File Server (separate roles, but also required)
3. Win 1C Enterprise Server, Terminal Server (must be virtualized, separate roles)
4. Win 1C Enterprise Server (weaker machine) for small databases and multiple terminal clients
5. Win MSSQL database Server (currently running on a 4-year-old machine with a 6*SATA storage on a built-in controller)
(Note: There are questions regarding virtualization feasibility and the possibility of spreading the databases)
6. Enterprise 1C Linux Server (to be virtualized)
7. Win Server for routing with manufacturer's web services (less likely to be virtualized)
8. Plus, there are other scattered services awaiting their designated place.

Although the choices here may seem limited, there are numerous nuances that I would like to address in advance. I respectfully welcome any recommendations and would happily respond to further inquiries.


NoelJones

Regardless of the chosen virtualization platform, it is essential to use external storage. Without it, a failure in the host can result in a complete shutdown of virtual machines, with recovery only possible after the host is restored. The host's RAID system can take several hours to recover. Therefore, if there are strict requirements, external storage with two controllers and other features is necessary.

Additionally, if there are only two servers, they should have the same configuration. This ensures that if one node fails, the second server can handle the load by starting all the virtual machines and maintaining stability.
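One way to sanity-check this "identical nodes" advice is an N+1 sizing calculation: the surviving node must be able to host every VM alone. A minimal sketch (all figures are illustrative placeholders, not from the thread):

```python
# N+1 sizing check for a two-node cluster: one node must be able to run
# every VM by itself after a failover. RAM figures are made-up examples.
vm_ram_gb = {
    "dc-dns-dhcp": 4,
    "bdc-file-print": 8,
    "1c-enterprise": 16,
    "mssql": 24,
}
node_ram_gb = 64
hypervisor_overhead_gb = 4  # reserve for the host OS / hypervisor itself

required = sum(vm_ram_gb.values()) + hypervisor_overhead_gb
print(f"required on survivor: {required} GB of {node_ram_gb} GB")
assert required <= node_ram_gb, "one node cannot absorb a full failover"
```

The same check applies to CPU and storage throughput; if either node fails it, the cluster is not actually fault-tolerant, only redundant on paper.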

As for the virtualization platform, that ultimately comes down to preference, but under real-world conditions there are only two viable options: Hyper-V and VMware. In your case, VMware's Acceleration or Essentials Kits may be worth considering, though keep in mind that they tend to be somewhat more expensive than comparable Hyper-V setups.

etdigital

Using VMware vSphere, it is possible to build fault-tolerant systems without external storage by using the vSphere Storage Appliance (VSA). However, this requires specific hardware and a minimum of 4 Ethernet ports across 2 independent network cards on each node. It is crucial to verify that the purchased hardware, especially the second network card, is on VMware's ESXi hardware compatibility list.

This setup can be achieved with either 2 nodes plus an external vCenter Server, or 3 nodes; with only two servers and no external storage, fault tolerance is not achievable. VMware offers High Availability (the VM is restarted on the surviving node after the primary fails) and Fault Tolerance (a shadow copy of the VM keeps running on the second node, so the machine continues with almost no interruption). The latter really needs a 10-gigabit network and may slow down on gigabit links. Additionally, enabling Fault Tolerance consumes memory and CPU on both nodes simultaneously.

Alternatively, Hyper-V is an option, but it requires external storage, with no alternatives. On the other hand, compatible hardware includes anything that can run Windows Server. Note that, as far as I recall, Hyper-V offers only high availability, not fault tolerance.

missveronica (Topic starter)

A few years ago, I faced the decision of virtualizing our infrastructure and storage systems. After thorough consideration and testing, my conclusion was that vSphere is the most effective and justified solution for the investment.

For our VMware cluster, we opted for 2 nodes (with a minimum license for 3 to allow for future growth). The setup consists of two ProLiant DL360 servers and a P2000 shelf (expandable) with two SAS controllers. Our storage is split roughly 50/50 between SAS and SATA drives, with SAS prioritized for system images and SATA for data; you can adjust these proportions to your requirements. I also recently added a separate SSD-based LUN for caching, which noticeably improved overall VM performance.

Another advantage of vSphere is how well it trunks (aggregates) network interfaces, unlike Hyper-V: you can combine all interfaces into one link and give clients higher bandwidth.

By configuring features such as High Availability (HA) and load management, we achieved a flexible, productive, and fault-tolerant solution that is well worth the investment. It is crucial, however, to get the ratio of cores to RAM right to avoid performance degradation. VMware publishes sizing guidelines, such as limits on memory modules per channel, to ensure optimal performance. These considerations are essential when working with ESXi.
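A related sizing check worth running before buying licenses is the vCPU-to-physical-core overcommit ratio. The sketch below uses a 4:1 ceiling, which is a common rule of thumb rather than an official VMware limit, and the per-VM vCPU counts are invented for illustration:

```python
# Sketch of a vCPU overcommit sanity check. The 4:1 ceiling is a common
# rule of thumb, not an official VMware limit; tune it to your workloads.
def overcommit_ratio(total_vcpus: int, physical_cores: int) -> float:
    """How many virtual CPUs are mapped onto each physical core."""
    return total_vcpus / physical_cores

vcpus = 2 + 2 + 8 + 4 + 8   # illustrative per-VM vCPU allocations
cores = 2 * 6               # e.g. two 6-core E5-2630 CPUs in one node
ratio = overcommit_ratio(vcpus, cores)
print(f"vCPU:pCPU = {ratio:.1f}:1")
assert ratio <= 4.0, "overcommit exceeds a conservative 4:1 rule of thumb"
```

Latency-sensitive workloads such as a 1C or MSSQL server generally tolerate far less overcommit than file or print services, so it pays to compute the ratio per host, not just cluster-wide.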

ldhsuo

The primary benefit of virtualization is that it reduces IT infrastructure support costs by optimizing physical resources, automating processes, and improving business adaptability and scalability. It eliminates the need to purchase and maintain additional servers while maximizing resource utilization.
Moreover, virtualization provides reliability through easy data restoration using VM backup in critical situations. This process can be automated to ensure all relevant information is stored in backups, minimizing the risk of business downtime.
Virtualization platforms also create a flexible environment for testing various projects, such as software development, and establish the foundation for implementing cloud solutions that enhance business control over critical data.

In the market, there are numerous virtualization solutions available, including VMware products, vStack, Microsoft platforms, and others. Each solution has its own advantages suitable for addressing different business needs. Let's explore them further:

VMware:
VMware is a leading American company in the virtualization industry. Its offerings include vSphere (server virtualization software), vCenter Server (centralized server management), NSX Data Center (virtualized network and security services), and Horizon 7 (a virtual desktop and application platform). The company continually expands this functionality.

VMware products can be complex and expensive for small and medium-sized businesses unfamiliar with the technology; they are premium solutions at a premium price. In large enterprise environments with ample budgets spanning many platforms, operating systems, and architectures, however, VMware remains the top choice.

vStack:
vStack is a platform developed by ITGLOBAL.COM LABS that enables the implementation of virtual data centers using conventional and cost-effective equipment. It is a hyper-converged solution designed for enterprises utilizing open-source technologies. vStack offers accessibility without compromising performance when compared to enterprise storage and virtualization solutions like VMware.

vStack features include compatibility with consumer-grade devices, no vendor lock-in, development based on FreeBSD OS (UNIX family), ZFS file system for handling large amounts of data, and the bhyve hypervisor with UEFI interface, NVMe support, and high performance. It is an affordable alternative provided by a Russian supplier that can match popular Western solutions.

Citrix (Xen):
Citrix offers a range of cost-effective enterprise-level virtualization products that serve as alternatives to VMware. The company has long been a key driver of the Xen Project, a cross-platform open-source hypervisor.

Citrix features include secure application and desktop delivery, easy management through centralized consoles, support for multiple operating systems, and integration with third-party software.

Overall, choosing the right virtualization solution depends on specific business requirements and factors such as budget, expertise, scalability, and support needs. Each solution mentioned above has its unique set of features and advantages to address different use cases.

KayammaNony

I would recommend evaluating the storage architecture in more depth to ensure optimal performance and fault tolerance.
In terms of virtualizing your existing server roles, it's crucial to conduct a thorough assessment of each workload's resource utilization, I/O patterns, and dependencies. For instance, the MSSQL database server might require specialized considerations for virtualization due to its high I/O demands. Additionally, spreading databases across virtual machines should be approached cautiously, considering the potential impact on performance and data integrity.

Regarding service criticality, implementing a failover clustering or high availability solution would be essential to meet the stringent availability requirements for 1C and other key services. This would involve leveraging features such as Windows Server Failover Clustering or VMware High Availability to minimize downtime and ensure rapid recovery in the event of a hardware or software failure.

The network infrastructure plays a pivotal role in supporting the virtualized environment. Ensuring adequate bandwidth, low latency, and proper VLAN segmentation for different service types is imperative. Any potential integration with cloud services or off-site disaster recovery should also be factored into the network design.

Lastly, the gradual migration of services to the virtualized environment should be carefully planned, considering compatibility, performance testing, and user acceptance. It's essential to have a comprehensive rollback plan in case of unforeseen issues during the migration process.
The successful implementation of this virtualization project will rely on meticulous planning, collaboration with relevant stakeholders, and continuous monitoring and optimization post-deployment. I'd be more than happy to delve into the specifics and provide tailored recommendations based on your unique infrastructure and business requirements.

