How does networking work on a Dell M1000e with M630 nodes?

Started by kpripper, Jun 28, 2022, 05:19 AM


kpripper (Topic starter)

I am currently examining a Dell M1000e populated with M630 nodes, which support E5-2600 v4 CPUs. However, I have been unable to find any documentation or YouTube videos explaining how networking is connected to the blades. The system consists of 16 M630 blade servers, 9 cooling fans, 6 2000W power supplies, 2 CMC modules, and iKVM, along with a Dell Force10 MXL 10/40GbE switch module with 2x QSFP+ modules and two Dell Force10 M I/O Aggregator switches with 2x QSFP+ modules each.
I'm wondering whether I should simply get a router and connect it to the switches, how traffic actually reaches the server nodes, and whether it's possible to assign specific network speeds to individual nodes.

Sevad

Many people steer clear of large blade systems like this because they draw a lot of power unless the chassis is fully populated and you actually intend to use everything in it.
This is particularly true in regions where electricity is relatively expensive.

gstarspas

Hello there. We have several systems running with the Force10 MXL switches. With the correct daughter card installed in each M630 blade, each switch provides a 10 Gbps port to every blade along with 2x 40 Gbps uplinks. The required daughter card is the Dell 10GbE 57810S-K, which has two 10Gbps ports. The two Force10 MXL switches should be placed in fabrics A1 and A2.

The MXLs are L3-capable. In our case, we use a Clos topology where each MXL functions as a leaf switch, connecting to 2x Juniper QFX5100s acting as spines. Here is example output from one of the MXLs:

xxxx-pub-leaf#show interfaces status
Port     Description   Status  Speed       Duplex  Vlan
Te 0/1                 Up      10000 Mbit  Full    11
Te 0/2                 Up      10000 Mbit  Full    11
...
Fo 0/33  To spine1     Up      40000 Mbit  Full    --
Fo 0/37  To spine2     Up      40000 Mbit  Full    --

Ports 0/1 to 0/16 serve the 16x M630 blades, while 0/17 to 0/32 are reserved for quarter-height blades (which we don't use). Ports 0/33 and 0/37 are the 40 Gbps uplinks. To change the speed of a blade-facing interface, you can do the following:

xxxx-pub-leaf(conf)#interface tengigabitethernet 0/6
xxxx-pub-leaf(conf-if-te-0/6)#speed ?
100     100 Mbps
1000    1000 Mbps
10000   10000 Mbps
auto    Auto negotiation (default)
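
A minimal sketch of actually applying one of those speeds (the port number and the 1000 Mbps value are just examples; double-check the commands against the FTOS configuration guide for your firmware release):

xxxx-pub-leaf(conf)#interface tengigabitethernet 0/6
xxxx-pub-leaf(conf-if-te-0/6)#speed 1000
xxxx-pub-leaf(conf-if-te-0/6)#no shutdown
xxxx-pub-leaf(conf-if-te-0/6)#end
xxxx-pub-leaf#copy running-config startup-config

In practice most people leave the blade-facing ports on auto; forcing a lower speed is mainly useful if you want to cap what a particular blade can push onto the fabric.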

The Force10 M I/O Aggregator has the same port layout as the MXL, but it lacks L3 capabilities. If you want to use all 4 switches (2x MXL and 2x M I/O Aggregator), you will need an additional mezzanine card in each blade for fabric B. That configuration gives each blade 4x 10 Gbps ports.

Please let us know if there's anything else we can help you with.

Wiley Harding

The Dell PowerEdge M630 blade server is a high-performance server designed for heavy data processing, virtualization, and real-time business intelligence workloads. The M630 is compatible with the PowerEdge M1000e blade enclosure and the PowerEdge VRTX modular infrastructure.

The M630 uses Intel Xeon E5-2600 v4 processors with up to 22 cores per socket. Memory bandwidth is up roughly 28% over the previous generation, which helps when consolidating more virtual machines per host. The server provides 24 DIMM slots and supports up to 1.5 TB of RAM for demanding research and high-performance computing workloads.

The storage subsystem supports up to four local 1.8-inch SSDs, or up to two 2.5-inch drives, including 2.5-inch Express Flash PCIe SSDs. Dell EMC Select Network Adapters (the modular network daughter cards) let you choose the network fabric, speed, and vendor used to reach storage and the rest of the network.

For management and automation, the M630 relies on the Dell EMC Chassis Management Controller (CMC) at the chassis level and the integrated Dell Remote Access Controller (iDRAC) with Lifecycle Controller on each blade. iDRAC monitors server health and performance, while the Lifecycle Controller handles deployment and firmware updates, including template-based server configuration through an intuitive interface.

Overall, the PowerEdge M630 is a scalable, quick-to-deploy, easy-to-manage, and cost-effective server suited to medium and large businesses, educational institutions, and research centers.

arpitapatel9689

The Dell M1000e blade chassis provides networking connectivity to the server blades through switch modules installed in its I/O bays. In your specific setup, those are the Dell Force10 MXL 10/40GbE switch module and the Dell Force10 M I/O Aggregator switch module.

To connect networking to the server blades, you connect your external network (your router or upstream switches) to the external ports of the switch modules installed in the M1000e chassis. Internally, each module sits in a fabric slot and links to the blades' network daughter or mezzanine cards through the chassis midplane. The M I/O Aggregator is a simplified Layer 2 module designed to make this uplink connectivity easy to deploy; it aggregates bandwidth and simplifies cabling, while the MXL adds full switching and routing features.
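
As a rough sketch, bringing up one of the MXL's external QSFP+ ports as a plain Layer 2 uplink toward your router or upstream switch could look like the excerpt below. The port name matches the 40 Gbps ports mentioned earlier, but the description text and the choice of a switched (rather than routed) uplink are assumptions for illustration; verify the commands against the FTOS configuration guide for your firmware:

! external 40G uplink toward the router / upstream switch
interface fortyGigE 0/33
 description uplink-to-core
 no ip address
 switchport
 no shutdown

The device at the other end of the QSFP+ cable needs a matching configuration, and VLAN membership for the uplink is added separately (see the VLAN example further down).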

Once the upstream devices are cabled to the modules' external ports and the appropriate settings are configured on the switch modules, the server blades can communicate with the external network.

Regarding the ability to assign specific network speeds to individual server nodes, it typically depends on the capabilities and configuration options provided by the switch modules. The Dell Force10 MXL switch module supports 10GbE and 40GbE interfaces, so you may have some flexibility in assigning different network speeds to individual server nodes. However, the specific capabilities and configuration options will depend on the firmware and software features available on the switch modules.

The Dell M1000e is a modular blade enclosure that supports a variety of networking options. Rather than a single built-in switch, the chassis midplane provides three redundant I/O fabrics (A, B, and C) that connect each blade's network adapters to the corresponding I/O module bays; "FlexIO" refers to the modular expansion slots on I/O modules such as the MXL, which accept additional uplink modules (QSFP+, SFP+, or 10GBASE-T).

In your described setup, you have the Dell Force10 MXL 10/40GbE switch module and the Dell Force10 M I/O Aggregator switch module installed in the chassis. The Dell Force10 MXL is a blade switch that slots into the chassis I/O bays and provides high-performance networking connectivity to the server blades. It supports both 10GbE and 40GbE interfaces, allowing for high-speed data transfer.

The Dell Force10 M I/O Aggregator is a simplified I/O module aimed at easy deployment and I/O virtualization. It aggregates the internal blade-facing ports into a smaller number of external uplinks, which reduces cabling and simplifies networking configuration.

By using the Dell Force10 MXL and M I/O Aggregator switch modules, you can establish a flexible and scalable networking infrastructure for your blade servers. You can configure VLANs, link aggregation, Quality of Service (QoS), and other networking features to optimize performance and manage traffic effectively.
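
For example, tagging the blade traffic into a VLAN and bundling the two 40 Gbps uplinks into an LACP port-channel might look roughly like the excerpt below on the MXL. VLAN 11, port-channel 1, and the port lists are illustrative, the interface-range syntax can vary between FTOS releases, and this supersedes the single-port uplink sketch above (a standalone switchport has to lose its switchport setting before it can join a bundle), so treat it as a sketch to validate against the configuration guide:

! blade-facing ports must be plain L2 switchports before VLAN assignment
interface tengigabitethernet 0/1
 no ip address
 switchport
 no shutdown
! (repeat for Te 0/2 through 0/16, or use an interface range)
!
! LACP bundle of the two QSFP+ uplinks
interface fortyGigE 0/33
 no ip address
 port-channel-protocol LACP
  port-channel 1 mode active
 no shutdown
!
interface fortyGigE 0/37
 no ip address
 port-channel-protocol LACP
  port-channel 1 mode active
 no shutdown
!
interface port-channel 1
 no ip address
 switchport
 no shutdown
!
! VLAN carrying blade traffic: untagged toward the blades, tagged over the uplink bundle
interface vlan 11
 untagged tengigabitethernet 0/1-16
 tagged port-channel 1
 no shutdown

The upstream switch or router then needs a matching LACP bundle with VLAN 11 tagged on it.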

To assign specific network speeds to individual server nodes, you'll need to configure the networking settings on the switch modules accordingly. This typically involves configuring the network interfaces, assigning VLANs, and setting bandwidth limits or priorities for different traffic flows. The exact steps and options may vary depending on the firmware and software features provided by the switch modules.

Some additional details about the Dell M1000e and its networking capabilities:

1. Networking Flexibility: The Dell M1000e chassis is designed to offer flexibility in terms of networking options. It supports various switch modules from different vendors, allowing you to choose the modules that best meet your requirements.

2. Switch Module Compatibility: In addition to the Dell Force10 MXL and M I/O Aggregator switch modules you mentioned, the Dell M1000e also supports other switch module options such as Cisco Catalyst, Brocade, and others. You can select the modules that align with your networking infrastructure preferences.

3. Uplink Connections: The switch modules installed in the Dell M1000e chassis typically have uplink ports that connect to the external network. These uplink ports allow network traffic to flow between the server blades and the external network.

4. Internal Networking Fabric: The M1000e midplane provides the high-speed paths between each blade's network adapters and the I/O module bays (fabrics A, B, and C), plus separate management connectivity to the CMC modules. It is a passive interconnect rather than a switch; the actual switching happens in the I/O modules, which can be extended with FlexIO expansion modules for additional external ports.

5. Networking Configuration: To configure networking for the server blades, you set up the switch modules installed in the chassis. This involves configuring VLANs, link aggregation, port settings, and addressing, and possibly routing and firewall rules, depending on your specific requirements (a minimal routing sketch follows this list).

6. Network Speed Allocation: While the switch modules in the Dell M1000e chassis generally provide high-speed networking capabilities, the allocation of network speeds to individual server nodes may depend on various factors. This can include the number of available uplinks, the configuration of switch ports, the capacity of the switch modules, and the overall network design. It's advisable to consult the documentation or seek guidance from Dell or networking experts to implement the desired network speed allocation.
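
Tying point 5 together: since the MXL is L3-capable, a minimal sketch of giving the blade VLAN a gateway on the switch itself and pointing a default route at your external router could look like this. The VLAN ID, addresses, and next hop are placeholders for illustration only, so adapt them to your addressing plan and confirm the commands against the FTOS configuration guide:

! SVI for the blade VLAN, acting as the blades' default gateway
interface vlan 11
 ip address 192.0.2.1/24
 no shutdown
!
! default route toward the external router reachable over the uplink
ip route 0.0.0.0/0 192.0.2.254

Alternatively, if your router handles all routing, skip this and simply carry the VLAN to the router over the uplinks as in the earlier sketches.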