An Introduction to Dell PowerEdge VRTX

Webinar Transcription

GS: I’d like to start by welcoming everyone to our introduction to Dell PowerEdge VRTX. We’re really excited to present this great technology to you, and we thank you for tuning in.

Some of you know us, but for those of you who don’t: InterWorks is a total IT services and data consulting firm, specializing in business solutions that remove distractions and improve efficiency. Partnering with industry-leading names such as Dell, VMware, Ruckus Wireless, ShoreTel and ESET, InterWorks offers the hardware and know-how for any IT need.

Your presenters today are myself, Garrett Sauls, a Content Strategist here at InterWorks, and the gentleman to my left, who will be doing most of the heavy lifting today: our Account Executive, Russell Parker. Russell actually comes to us from Dell, where he spent the past two years implementing these solutions. So, he knows quite a bit about VRTX.

To give you an agenda of what we’re doing today: it’s pretty straightforward. We’re going to go through an overview of VRTX; then Russell is going to take us through a live demo of VRTX; then we’re going to round things off with an open Q&A session.

We encourage you guys to send us your questions, so while Russell is presenting feel free to submit any questions you have about VRTX using the GoTo Webinar interface. We’ll then answer these questions after the live demo, during the brief Q&A session.

If you have a question that requires immediate attention or if you need to elaborate, we can turn on your mic and let you speak. Just use the little “Raise Your Hand” feature. You’ll see that in your GoTo Webinar interface.

And of course, you can contact us afterwards. If for some reason we didn’t get to your questions, or if you have more questions, or if you simply want to know more about VRTX or any other of our Dell products, you can contact us through the methods below. That’s interworks.com, you can give us a call, find us on social media or you can just email Russell directly with your questions. Without further ado, I think I’m going to turn things over to Russell.

RP: We’re going to be running through the Dell PowerEdge VRTX today. We’re going to start, like Garrett said, with a brief slideshow and then we’re going to hop right into a live demo to try and make this as interactive as possible.

Some of the IT concerns: complexity, inefficiency, IT rigidity, the old boundaries of physical servers and physical footprints, and higher costs as you expand. I’m not going to read all of these slides to you; most of you out there know the IT struggles and roadblocks that we’re dealing with today. PowerEdge VRTX addresses a lot of these and can address a lot of different IT needs from a workload standpoint as well.

The VRTX is a truly revolutionary solution: the first of its kind brought to the marketplace by Dell. It’s fully integrated, easy to manage and optimized for the remote office as well as the data center, and it’s very versatile, with multiple use cases.

The dimensions, the acoustics and the secure bezel on the front all make it ideal for a remote office location where you don’t necessarily have a server room or a data center per se. It is virtualization ready and very scalable, and it has integrated storage, servers and networking, all in a single box.

We’re just going to take a look at some of the physical features here. Like the M1000e blade chassis from Dell, it comes with an LCD display screen on the front for easy management without even taking off the bezel. On the rear, you’re going to see dedicated remote management ports for the CMC, the Chassis Management Controller. There are eight external PCIe slots that can be mapped to individual blades. There is an option of up to four power supplies, all with standard plugs, so there are no power concerns: it can run on 120 or 208 V, but there are no special power requirements out of the box. Four hot-plug redundant blowers keep the acoustics down and make it office livable.

On the front, you’ll see your standard KVM ports, along with the shared DVD drive, which is optional. We’ve got four half-height server slots for up to four server nodes. All of these servers have access to the shared storage, which can come configured with either twelve 3.5 in. hard drives or twenty-five 2.5 in. hard drives, giving you a total of up to 48 TB of raw capacity.
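A quick note on where that 48 TB figure comes from: it matches the twelve-bay 3.5 in. configuration fully populated with 4 TB drives, i.e. 12 × 4 TB = 48 TB raw. (The 4 TB drive size is inferred from the arithmetic, not something stated in the webinar.)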

The two server nodes currently available to be installed and shipped with the VRTX are the PowerEdge M520 and the PowerEdge M620. These are twelfth-generation servers, both dual-processor. The M620 can now also include Intel’s latest v2 processors, the Ivy Bridge processors. The v2 processor set that will be compatible with the M520 will be coming out, I believe, later this year.

Both of those can be configured with hard drives internal to the blade or in a diskless configuration that boots from a redundant SD module, so the servers are very versatile in configuration as well.

The shared storage is truly shared. What the VRTX does is virtualize the PCIe traffic so it can share the storage between all four blades. You also have the ability to pin a specific RAID set or specific volume to an individual blade; we’ll go through that in a little more depth in the demo. Anything from SSDs to nearline SAS hard drives can be mixed and matched in here, so you’re not confined to a single hard drive type, size or speed.

The Dell OpenManage Essentials console: we’ll go through a little bit of this in the demo as well. From a management standpoint, it’s pretty much the same setup as the standard twelfth-generation rack servers or tower servers. All the blades include the iDRAC controller, which can be accessed directly from the CMC. We’ll go into that a little bit as well.

One thing that has been added to the Chassis Management Controller for the VRTX is a geographical view. This can give you a map of multiple VRTXs and where they reside throughout the country or throughout the globe. It also gives you, as you can see in the small picture, a quick health status: green, yellow or red. Hovering over those gives a little brief health information, and clicking on them takes you to that individual VRTX’s Chassis Management Controller GUI so that you can begin to work on the issue.

The PowerEdge VRTX also comes with an integrated 1 GbE switch. We’ve got the eight flexible PCIe slots, which can be mapped to individual blades; we’ll go through that as well. We’ve also got the option for the integrated switch, which currently is a 1 GbE internal switch module. It has 16 internal ports, four for each blade, and eight external ports, and it’s a fully functioning Layer 2, 1 GbE switch. The VRTX can also be configured with a 1 GbE Pass-Through Module instead.

As you can see, we’ve gone through a little bit of this: it’s very easy to tailor to any specific configuration. It can be configured as a 5U tower or in a 5U rackmount configuration. Again, it can go with twelve 3.5 in. drives or twenty-five 2.5 in. drives, it comes with an integrated 1 GbE switch, and obviously it takes up to four nodes; very versatile configuration.

Some of the benefits, some of the use cases we’ve seen for these, are VDI deployments and siloed applications that need multiple compute nodes and shared storage but that you may want to keep out of the general pool of your production environment. Branch office, remote office locations or small office locations that don’t have the infrastructure to run a full environment of server, storage and networking equipment: this is a great fit for those types of scenarios. Before we jump into the Q&A, I’m going to switch to my second monitor, and hopefully everyone can see the Chassis Management Controller.

What you’re looking at is just a standard VRTX with two server nodes and, it looks like, six hard drives populated. This main page is just your Chassis Overview page, giving you general information, critical alerts, informational messages and the status of the overall chassis itself. As you can see, it has a very familiar, very user-friendly feel to it, with the tree on the left, tabs at the top and some quick links down at the bottom left here that you can see. There are many different ways to get to different areas within the Chassis Management Controller.

We’ll start with power. This is a VRTX running in the Austin data center, so we are linked in remotely. The Power page is going to give you an overall view: you can look at real-time energy and power statistics, and you can set actual power limits from here.

Back to the Chassis Overview. As you can see, it’s got a graphical output of everything here. Hovering over anything will give you a brief status and information about either the server nodes or the storage components.

As you can see, with the configuration options, you can go with minimal power supplies if you’re only starting with two server nodes. These are hot-pluggable and field upgradeable, so more can be added in the future as you add server nodes or as you add storage.

Here’s a look at the integrated 1 GbE switch module, which we’ll go into further, and obviously there are the blowers on the back end. These are the PCIe slots; we have two of them populated. These can be mapped on a one-to-one basis to each server node, and we’ll go through that here in just a second as well.

Here’s the inside of the chassis: CMC slot one. As you can see, there is a second slot for a Chassis Management Controller. This one is set up non-redundantly for demo purposes, but you can set up a redundant Chassis Management Controller.

Here’s the health of your fans. As you can see, there’s appropriate power management: they’re only spinning and using as much power as is necessary for the two blades and the storage that the chassis currently contains.

Here’s the shared PERC card, which is the means by which the blades share the storage. Let’s hop into the Server Overview. While that’s pulling up, we can hop into OpenManage Essentials; as it pulls up, we’ll hop back. Here’s your Server Overview. Obviously, we’ve got two M620s. You can hover over each individual server.

It will give you the overall status, and your jump-to links here get you anywhere on the page. From this page, you can launch directly into the individual server’s iDRAC which, if you’re familiar with PowerEdge servers, is the Integrated Dell Remote Access Controller.

From here, it’s just like the management of any other server. You can cycle the power on and off, view logs, reset the iDRAC, launch the virtual console and actually manage the server remotely as well.

While that pulls up, we’ll just come back into OpenManage Essentials. This has got everything they have in their lab, by IP address. Here’s the demo VRTX, as you can see, discovered in inventory. We’ve only got the two servers and one CMC.

Back to the Chassis Management Controller. As you can see, via the Chassis Management Controller you can do everything within the server that you can with a normal rack server or iDRAC, without really leaving the same pane of glass. It just hops you into the iDRAC, and from there you can launch the virtual console and manage the server.

Take a look at the I/O Module Overview. Same thing here: you’re going to have the 1 GbE Ethernet switch. Straight from this console, you can launch the GUI for the switch itself. This will allow you to manage the switch, set policies and cycle ports on and off. As I said before, it’s a fully functioning Layer 2, 1 GbE switch, so the management is obviously going to look very similar to a Dell PowerConnect switch, if you’re familiar with those.

The M620s come with a 10 GbE onboard daughter card, so that’s why you see the internal ports. Those ports auto-negotiate down to 1 GbE for this 1 GbE switch, and that’s why only two of the four internal ports on those two server slots are being used.

The M520 comes with a 1 GbE onboard card, so it would be using all four of those ports. A 10 GbE switch module is on the roadmap; I do not have any information on when that will actually be released.

As you can see, there is the same familiar look: ports where you can configure jumbo frames, do any port or LAG configurations, enable spanning tree and set up your VLANs. It’s a fully functioning Layer 2 switch, as I said before.

This page just gives you a further overview of the switch without hopping into the GUI. Like I said before, there are several different methods of getting to different areas of the Chassis Management Controller, so everything is built to be user friendly and reachable from multiple areas.

In the PCIe Overview, you’re going to see all eight slots. There are three full-height slots and five half-height slots, and those can all be mapped on a one-to-one basis; we’re going to go through that. Opening one will give you the status and health check of each PCIe adapter.

This VRTX chassis has two 10 GbE dual-port adapters. You can do anything from 1 GbE NICs and 10 GbE NICs to SAS HBAs. Any PCIe card can be mapped on a one-to-one basis; you just have to remember that you have three full-height slots available but five half-heights.

So, we’ll hop into Setup. As you can see, this adapter is not mapped, and it’s powered off, so the first thing we’re going to do is map it. If that operation is successful, it’ll give you a notification that it is. Then we’ll take slot six and map this one to the second server. As you can see, they’re still powered off, but they’re now mapped to each individual server slot.
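To make that one-to-one rule concrete, here’s a minimal sketch in Python of the mapping constraint being demonstrated. It’s a conceptual model only, not Dell’s CMC interface, and the slot numbers and helper names are hypothetical.

```python
# Conceptual model of VRTX PCIe slot mapping (not Dell's CMC API).
# Each PCIe slot can be mapped to at most one server slot at a time.

PCIE_SLOTS = {  # slot number -> form factor, per the figures in the webinar
    1: "full-height", 2: "full-height", 3: "full-height",
    4: "half-height", 5: "half-height", 6: "half-height",
    7: "half-height", 8: "half-height",
}

mapping = {}  # pcie_slot -> server_slot

def map_slot(pcie_slot, server_slot):
    """Map a PCIe slot to a server slot, enforcing one-to-one assignment."""
    if pcie_slot not in PCIE_SLOTS:
        raise ValueError(f"no such PCIe slot: {pcie_slot}")
    if pcie_slot in mapping:
        raise ValueError(f"PCIe slot {pcie_slot} is already mapped")
    mapping[pcie_slot] = server_slot

# Mirror the demo: one adapter to server slot 1, another to server slot 2.
map_slot(5, 1)
map_slot(6, 2)
print(mapping)  # {5: 1, 6: 2}
```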

Now, we’ll take a look at the storage. This is shared storage between all four server nodes, or you can also map an individual server node to an individual RAID set or an individual disk. This page gives you the overview of the physical disks we have installed, the controllers and any recently logged events. As usual, you can hop to any controller by going to the left tree or just simply clicking Controllers, as I did.

If you go to Setup, Properties will give an overview as usual. Then you go to Physical Disks. This gives you the option of grouping them by controller, by enclosure or by virtual disk, if you have virtual disks already set up. There are currently no virtual disks set up on this machine, so the status of all of them is going to be Ready.

They’re all unassigned. You can set any disk in the VRTX chassis as a global hot spare. We only have a single type of disk in this demo chassis, so the only option is global hot spare. If there were different types of disks, you would be able to set specific hot spares. 

Enclosure is just going to give you an overview of the enclosure, much the same as the initial page of the storage section.

Now we’re going to create a virtual disk. We’re going to use the shared controller. You can name it or use auto-naming. Choose your RAID level, choose your read and write policies and any cache policy, and select which physical disks you would like to include.

We’re just going to include all of them and create one large RAID 5 volume, or RAID set.
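For a rough sense of what that buys you: a RAID 5 set yields about (n − 1) drives’ worth of usable space, since one drive’s worth of capacity goes to parity. Here’s a minimal sketch; the drive counts and sizes are illustrative assumptions, since the demo chassis’s drive sizes weren’t stated.

```python
# Back-of-the-envelope RAID 5 capacity: one drive's worth of space holds
# parity, so usable capacity is (n - 1) x drive size.

def raid5_usable_tb(drive_count, drive_size_tb):
    if drive_count < 3:
        raise ValueError("RAID 5 needs at least three drives")
    return (drive_count - 1) * drive_size_tb

print(raid5_usable_tb(6, 1.2))   # six 1.2 TB drives  -> 6.0 TB usable
print(raid5_usable_tb(12, 4.0))  # twelve 4 TB drives -> 44.0 TB usable
```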

The operation was completed, so let’s go back over here. We’ve got our virtual disk. We can rename it and edit any policies from here. Now we want to assign this virtual disk and give it full access. It has been assigned to virtual adapter one.

So, go to our Storage Overview. Now we have one virtual disk. You have your recently logged events showing that we created a disk. We’ve got our total used capacity within the chassis; we’re using all the hard drives within the chassis. The server is giving me a little bit of trouble trying to sign in, or I would assign that volume a drive letter in the server.

Here are our virtual adapters. We’ve already mapped our virtual disk to virtual adapter one. We’re going to map virtual adapter one to slot one, which is the first M620, and to slot two, the second one, we’re going to map virtual adapter two. Down below, you’ll see the option for single assignment, which allows a virtual disk to be assigned to a single virtual adapter at a time. We’re going to select multiple assignment. This is going to allow us to assign that virtual disk to both of the virtual adapters, so that both servers can see our RAID 5.

Now, we’ll go back to our virtual disk. We’ve already assigned it to virtual adapter one; we’re going to add virtual adapter two, now that it is mapped to the second server. In a virtual or cluster environment, for example, both of those M620s will have full read and write access to the RAID 5 that we’ve created.
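To recap the mechanism just demonstrated in code form, here’s a minimal Python sketch of single versus multiple assignment. It’s a conceptual model of the behavior described above, not Dell’s actual management interface, and the class and method names are hypothetical.

```python
# Conceptual model (not Dell's API): a virtual disk on the shared PERC is
# exposed to blades through per-slot virtual adapters. Single assignment
# pins it to one adapter; multiple assignment shares it, e.g. for a cluster.

class VirtualDisk:
    def __init__(self, name, multiple_assignment=False):
        self.name = name
        self.multiple_assignment = multiple_assignment
        self.adapters = set()  # virtual adapters granted full access

    def assign(self, adapter):
        if not self.multiple_assignment and self.adapters:
            raise ValueError(f"{self.name} is pinned to a single adapter")
        self.adapters.add(adapter)

# Pin a volume to the blade in server slot 1 only ...
pinned = VirtualDisk("VD0")
pinned.assign(1)

# ... or share the RAID 5 with slots 1 and 2, as in the demo, so both
# M620s get read/write access.
shared = VirtualDisk("VD1", multiple_assignment=True)
shared.assign(1)
shared.assign(2)
print(sorted(shared.adapters))  # [1, 2]
```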

Like I said, if their server console wasn’t giving me a fit, I would hop in and show you.

While I’m working on that; Garrett, do we have any questions in the queue?

GS: Yeah, we do actually. Let’s just jump to our first one while you’re figuring that out.

Our first question is: “Is this recommended for FCoE configurations?”

RP: For Fibre Channel over Ethernet?

GS: Correct.

RP: You can add the PCIe cards and map them to the servers the same way as with anything else. We’ve seen this where they’ve had HBAs or NICs mapped to a Fibre Channel, 1 GbE or 10 GbE storage area network outside of the VRTX. So, whether it’s a development environment, a siloed application or anything like that, they still have access to the central storage for backup or any other purposes.

GS: Great. We’ve got another question here. Someone wants to know: “Is there a 10 GbE option?”

RP: For the internal switch? Not currently; an internal 10 GbE option is on their roadmap.

GS: Another question here is asking: “Can the shared storage be extended with DAS?”

RP: It cannot; that is the short answer. There is currently no method of externally extending the shared storage. Direct-attached storage can be added to individual server nodes via HBAs in the PCIe slots, but there is no way of extending the shared storage itself, and it is not on the roadmap as far as I know.

GS: No luck signing in yet? Or, do you want to go ahead and take another one while you’re trying to figure that out?

RP: I’m trying right now. There we go.

So, we’re hopping into the Server Manager, I believe, of slot one. As you can see, you’ve got all the features and functionality of Server 2012. Let’s go to Storage. It looks like their password is about to expire on this. This cursor, I will apologize, is bouncing around on me, making it a little hard to grab what I’m trying to do.

It’s not picking it up yet. It’s seeing the internal storage, but it’s not picking up the new storage. Let’s return to questions while I’m trying to refresh that.

Garrett, do we have any additional questions?

GS: Yeah, our first one is: “What server models are available for the VRTX chassis?”

RP: Currently, it’s the M520 and M620, both dual-socket, half-height servers. The M520 has 12 memory slots, while the M620 has 24 memory slots. With the M620, you’ve got a total memory footprint of just about ¾ of a TB; about 768 GB, I believe, is the number.
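That figure checks out arithmetically if you assume 32 GB DIMMs: 24 slots × 32 GB = 768 GB, or just about ¾ of a TB. (The DIMM size is our inference from the math, not something stated in the webinar.)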

GS: Ok, our next question asks: “How is the storage shared between all four servers?”

RP: It’s shared via the shared PERC controller, which is an LSI PERC RAID controller. As you saw when we were assigning virtual adapters, what Dell has done in the VRTX is virtualize the PCIe traffic. They have assigned virtual adapters to each slot, and you can then assign that shared PERC to multiple virtual adapters. Each server has a virtual adapter that connects with the shared PERC, and the shared PERC controls the storage.

GS: So, we’ve got two more questions here. Do you want to go ahead and answer another one, or?

RP: Yeah go ahead. For some reason, this server, when I rescan the disk, is not picking up the storage yet.

GS: That’s ok. Our next question asks: “What parts are field replaceable or upgradeable?”

RP: Good question. All of your standard parts on the chassis, the same parts as on your standard rack servers, are hot-pluggable and hot-swappable, like your power supplies and the blowers. The fans inside are field replaceable; I do not believe they’re hot-swappable. I’m trying to think of anything else on the chassis that would be field upgradeable or field replaceable. The switch module will be field upgradeable because it is modular. Obviously, a second Chassis Management Controller could be ordered; it would take some downtime to put it in. PERC controllers are field upgradeable or field interchangeable with downtime, and that would be about it.

GS: Alright, the last question we have in the queue is asking: “Are there any single points of failure in the VRTX?”

RP: Currently, the only single point of failure would be the shared PERC card. I’m going to go back to the Chassis Management Overview to help answer this question. It is a single shared PERC card, just as you would have in a standard rack server, so it can currently be considered a single point of failure. As you can see, once this refreshes on the internal view of the chassis itself … well, right now you can see there are two shared PERC slots. Currently, the VRTX ships with only one shared PERC controller. There is a firmware upgrade coming that will allow you to have both slots populated and run an active/passive shared PERC. So, the answer is that the shared PERC is currently the single point of failure, but on the roadmap in the very near future, from what I hear, is the firmware update that will let you add the second PERC controller and remove that single point of failure.

GS: So, Russ, let’s go ahead and move into a brief scenario, since I’m not sure if we’ll be able to find out any more on that demo. I think we’re all out of questions too.

RP: I was going to apologize to everybody. I would walk through assigning the volume to the server, giving it a letter and showing the second server, but for some reason, whenever I rescan, it’s not showing up. We can hop into a scenario.

GS: Great. I’m just going to switch over to my screen right now. I’ve got the scenario pulled up.

Alright. So, if you can see my screen, Russell, the scenario we have is that of a small legal office, and the scenario is:

“A growing legal practice continues to add new staff and needs a more flexible IT solution that supports the growing practice. Specifically, they need to improve collaboration between associates and clients, improve case reporting and ensure confidentiality of client information.”

RP: As you can see, some of the problems there are limited space taken up by hardware sprawl, the need to adopt an IT platform that can grow with the business, boosting systems uptime, enhancing client service with faster, more reliable access and reducing the noise in the office. VRTX is the perfect solution for something like this. Your IT floor space is reduced dramatically. You don’t have a full, single-use switch; you don’t have four, five, six or seven one- or two-use servers; you don’t have multiple UPSs and multiple noisy blowers trying to keep the room cool. It frees up a lot of space for other purposes of the business, especially when you’re growing. It has easy compute and storage scalability. As you can see, the VRTX we demoed had two compute nodes. With the M620, you can get up to 12 cores per socket, which is 24 cores per server node, and that can grow with four-server-node capability. The storage goes up to 48 TB raw, and you can start with as few as three hard drives within the configuration and grow from there.
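A quick bit of arithmetic on the compute ceiling implied by those numbers: four server nodes × two sockets × 12 cores comes to 96 cores in a single 5U chassis. That’s a back-of-the-envelope figure derived from the per-node numbers above, not a spec quoted in the webinar.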

GS: Let’s go ahead and move on into another scenario, unless you’ve got anything else on that.

RP: No, the only other thing would be the easy management and geographical view if you’ve got multiple offices with this scenario.

GS: Alright, our next scenario is that of a regional healthcare organization. The scenario is: 

“A growing regional practice needs to meet compliance requirements for protecting patient information, increase collaboration for referrals, implement electronic records, and improve care and electronic access for patients.”

RP: Sorry about that; I muted myself. This is another one: trying to overcome hardware sprawl; accommodate large data growth while staying flexible; control operational expenses by enhancing the productivity of a very small staff; ensure delivery of patient care so your clients and customers are happy, with no issues or downtime; and implement virtualization to control the expenditures that come along with the growth of this kind of business. The VRTX’s scalability is great here. It avoids the costs of possibly having to relocate or find something that can accommodate more space, power or cooling needs.

The secure storage capacity keeps that storage available only to those four blades. That can be very nice for siloed applications in a healthcare office that needs total control and lockdown of those records, to make sure they’re not shared by any other servers or anyone else on the network. It unifies systems management, again with OpenManage Essentials, and the geographical viewer allows at-a-glance detection of any issues, remote systems management and remote server management.

One thing to remember is that this hardware is certified by Dell, and we have some technical documents that we can put on our website. Or, if you want to request them from me, I believe my information was at the beginning of the presentation; shoot me a note afterwards. These are certified and tested for virtualization. They can even be preconfigured and drop-shipped in a cluster with a clustered hypervisor. There are lots of things to remember if you’re considering one of these scenarios where remote management is key, easy deployment is key and virtualization is key.

GS: If that’s all we have on those scenarios, I think we can go ahead and wrap this up. 

Again, I want to thank everyone for tuning into the webinar. Like we said at the beginning of the presentation, if you have any questions or want to know more about VRTX or any of our other products, feel free to contact us at www.interworks.com, give us a call or find us on social media; however you want to get in touch with us.

With that said, I think that’s all we’ve got. Thanks for tuning in.
