Transcript
00:00 Here is NPIV. Okay, we'll take a quick break after this and then we'll do Fiber Channel
00:08 over Ethernet and then move on to other topics. So here's the basic idea behind NPIV.
00:18 VMware lists this as NPIV everywhere; technically, what the ESXi host is actually
00:31 doing is NPV, which is N-Port Virtualization. NPIV is N-Port ID Virtualization, and that has to be
00:43 enabled on the upstream Fiber Channel switch. Okay, the basic idea behind it is quite simply this: if
00:52 we go back over here to our MDS and we say
00:58 show flogi database, what the switch is expecting by default is one login:
01:12 in other words, one port name and one node name coming in on each physical interface.
01:21 It's expecting a one-to-one ratio between N-Ports and F-Ports, so one N-Port login per F-Port.
01:34 That's what we're talking about here on the slide where we said the ability to have a single
01:41 N-Port send multiple FLOGIs to the FC switch, the Fiber Channel switch. This is not the norm;
01:52 the norm is a one-to-one ratio. Now, in order to support this, the first thing you're
02:01 going to have to do, depending on your switch vendor, is enable
02:13 NPIV, and as you can see on this switch I've already done that. On the Cisco switches you literally
02:24 just have to type feature npiv, and then you can say show npiv status, which is about the only related command, and
02:34 it just says it's good, it's running. All you're really saying here is: it's okay to go ahead and accept
02:44 more than one FLOGI on an F-Port. Because if I say show interface brief and we look
02:54 at that fc1/1 interface that's connected to our host, notice that it is an F-Port administratively,
03:05 and it's operationally an F-Port. So the default behavior would be, again, to accept
03:13 one and only one FLOGI on that interface.
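To put the switch side in one place, here is roughly what that boils down to on a Cisco MDS running NX-OS; this is a minimal sketch, the interface name fc1/1 is just an example, and the exact output will vary by platform and version:

switch(config)# feature npiv
switch# show npiv status
NPIV is enabled
switch# show interface fc1/1 brief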
03:28 NPIV says no, we want multiple devices to be able to get to that interface. Why? Well, again, maybe we need the virtual machine to actually be able to see the storage
03:38 directly itself, and/or maybe we need the storage to be able to see the virtual machine itself. So for
03:46 whatever reason, if we need a virtual machine to get direct access to our storage, this is a way we
03:57 can do it, where the virtual machine will actually create its own FLOGI in order to do this.
04:05 All right, so let's take a look at this. It's actually incredibly simple to do. Again, remember
04:13 the requirements as we switch away here: ESXi can do NPV, and that's what we're going to turn on,
04:20 and it does require that the virtual machine use a raw device mapping. That's how this works.
04:26 I told you we were going to get into raw device mappings pretty soon today, so here we go.
04:33 Let's switch back over to our client, go over to Virtual Machines, and I actually
04:42 already have a virtual machine built to do this with, but it's not configured to get the
04:50 information directly yet. So here's what we do: we have to go into Edit Settings.
04:57 Step one is you have to give it a raw device mapping, so you come down here to the bottom,
05:06 New Device. Now, we could do a new hard disk; that would just be creating a
05:13 VMDK, and it would ask us what datastore to put it on. We could say existing hard disk, so if we had
05:22 a drive that already existed that we wanted to use, we would do it there. RDM Disk is pretty self-
05:30 explanatory; that's what we're going to do. A raw device mapping disk is what we want to create,
05:36 that's what we're here for, so we hit Add. The next thing the host is going to do, and this is sort
05:42 of important, is come up and say: here are the LUNs, here are the devices
05:52 that are being presented to me from storage, and in this case it's Fiber Channel.
05:59 Keep in mind that for a raw device mapping it does not have to be Fiber Channel;
06:05 it could also be iSCSI or locally connected, it doesn't matter. The important thing
06:13 is, for the feature we're looking at, of course it's got to be Fiber Channel, because this is a Fiber
06:19 Channel feature. But notice, by the way, on this list here that LUN 0 is not listed.
06:30 That's because the host is already using LUN 0; we already assigned LUN 0 to a datastore,
06:39 so of course it's not going to show it here because it's already in use. I'm going to go
06:47 ahead and choose the highest LUN, just in case we were to decide to use the other LUNs for something
06:54 else later. Obviously in the real world you would choose the appropriate LUN that you're trying
07:01 to do this with, of course. The other thing to keep in mind that's sort of important
07:07 about this list is that it's presenting the LUNs that the host can see. Very important: the host needs to be
07:20 able to see these LUNs. If the host couldn't see a LUN, it would not be on this list,
07:26 and therefore you could not assign it to a virtual machine. So one of the prerequisites here is that the
07:33 host first has to be able to see the LUN.
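If you want to check from the command line what the host itself can see, a couple of esxcli commands cover it; a sketch only, run from the ESXi shell:

esxcli storage core adapter list    # the HBAs/initiators the host has
esxcli storage core device list     # the LUNs/devices the host can currently see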
07:39 Now if you think that through, going back a couple of minutes to when we were talking about LUN masking and zoning, that means that, as cool as this
07:46 feature is, NPV and NPIV working together here, what you cannot do is go to your storage array
07:55 and/or the switch fabric, either one, with zoning or masking, and say
08:03 that only the worldwide node name and worldwide port name of the virtual machine can see this LUN.
08:13 I mean, it'd be cool if we could; that way you could say this LUN gets
08:21 presented only to this virtual machine. Well, the problem is that's not how it works, because the host
08:28 needs to see it first. So you really need to make sure that the host is allowed
08:36 access to the LUN as well as the virtual machines, and we'll see how that works here in just a minute.
08:44 But the host has to see it first, and that also means if you're going to do vMotion with this,
08:51 because this is all supported, you can do vMotion with a raw device mapping, if you're going to vMotion this
08:57 you also need to make sure that any host it's going to be vMotioned to can also see it.
09:04 So we're going to go ahead and choose five here, hit OK, and it's going to create this new hard disk.
09:11 In order to do NPV the compatibility mode has to be physical. Now, there's also virtual
09:27 compatibility mode, and again we'll get back to RDMs a little bit later, but the nutshell version,
09:36 the very quick version here, is that in virtual compatibility mode you can still do something
09:43 called snapshots, and in physical compatibility mode you cannot. In physical compatibility we're
09:50 basically just passing the SCSI commands directly through to the array. So this needs to be in
09:57 physical compatibility mode, and as you might have noticed when we came in, that was the default.
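By the way, the same thing can be done from the command line with vmkfstools; a minimal sketch, where the device identifier and datastore path are placeholders you would replace with your own:

# physical (pass-through) compatibility RDM pointer file
vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/datastore1/vm1/vm1-rdmp.vmdk
# virtual compatibility RDM (the mode that still allows snapshots)
vmkfstools -r /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/datastore1/vm1/vm1-rdmv.vmdk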
10:02 We're going to get into the shares and IOPS and all of that later today; we'll get back to
10:09 this stuff. So for now we're just going to say OK, it's going to go through the task, it's going to
10:15 reconfigure it, already done, and then we're going to go back into Edit Settings again. Now, the only
10:19 reason I'm coming in twice is that I have had issues with trying to do this second part when you don't
10:25 have the raw device mapping configured yet. I've had it work, I've had it not work, and I didn't want any
10:32 issues. So, the second step: as it is right now, if I were to boot this virtual machine it would have access
10:40 to that drive, it would work, but it would be accessing it through VMware, using the worldwide node name
10:53 and the worldwide port name of the host. That would not be NPV/NPIV. So to do NPV/NPIV you come over here to
11:04 VM Options, which is the second tab on this window, and you come down here to this option that says
11:10 Fibre Channel NPIV. Click on that and it expands. Now, typically when you come in here for the
11:18 first time, and I'll show you on one of the other virtual machines, we can actually show you here in
11:26 just a second on this other virtual machine that's running, but down here at the bottom, the WWN
11:32 assignments, this part down here, would typically be empty, and up here in the middle you would say
11:42 Generate New, or generate them for the first time, and you would generate these names. Now, for the sake of the
11:53 course here today, I already did this so that I could already have those node names, because,
12:00 again, real world, what would I have to do at this point? I would have to take those two numbers and
12:09 give them to my storage engineers and say: I need you to zone and mask these two addresses so that they can get to this
12:21LUN. So in order to get my, well, the underlying switch, as you saw, it's wide open, so the switch
12:30didn't matter. But in order to get my storage array to allow these two, well, specifically,
12:37it's the port worldwide name that I needed. But I had to assign that to my storage array
12:44to allow him to have access to these LUNs. Okay, so that's why these numbers are already here.
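Just to sketch what that zoning request might look like on an MDS, the configuration would be along these lines; the VSAN number and zone names here are placeholders, with the first member being the VM's generated port WWN and the second the array's target port:

switch(config)# zone name VM1_NPIV vsan 10
switch(config-zone)# member pwwn 28:00:00:00:00:00:00:01
switch(config-zone)# member pwwn 50:00:00:00:00:00:00:02
switch(config)# zoneset name FABRIC_A vsan 10
switch(config-zoneset)# member VM1_NPIV
switch(config)# zoneset activate name FABRIC_A vsan 10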
I generated them a while ago. Normally, they would not be there. You would have to generate them the first
12:56time you came in here. Okay, and then you really got to watch this checkbox right here. Okay, this
13:08can really get you because basically what you're doing is you're turning off NPIV.
13:16Now, it says temporarily disable, but it's also the default. So you could come in here and set this
13:26up. You generate your numbers and, you know, you give it to your storage guys and they set it all up
13:31and you're still not actually doing NPIV. You're not seeing the login on the storage array from the
13:39virtual machine. And you're wondering why. Because if you don't uncheck that, you're not doing NPIV.
13:47Okay, so what we're going to want to do here, of course, is to uncheck that box. Okay, this will
13:54enable NPIV. Okay, so it's sort of one of those, you know, negative checkboxes, right? You check it to
14:01disable it. You uncheck it to enable it. And then we hit okay. Reconfigures it. Okay. Now, if we switch
over to our MDS and take a look at our FLOGI database, we currently have two FLOGIs. One of them is our
14:29host. One of them is our storage array. Very simple environment. Okay. Now let's go over here and let's
14:37turn on this virtual machine.
14:52It should start booting up.
14:59Launch the console here. So we can actually see it boot up. While it's booting, we should actually be able to switch
over already. And see that it now has its own FLOGI to the fabric. That's what we're talking about
with NPIV right there. So from the switch's perspective, the same interface, fc1/1, now has two Fiber Channel IDs.
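The shape of what you see on the MDS at this point is simply an extra entry on the same interface; the FCIDs and WWNs below are placeholders, just to show the layout of show flogi database:

switch# show flogi database
INTERFACE  VSAN  FCID      PORT NAME             NODE NAME
fc1/1      10    0x010000  <host HBA pwwn>       <host HBA nwwn>
fc1/1      10    0x010001  <VM's NPIV pwwn>      <VM's NPIV nwwn>
fc1/2      10    0x010100  <storage array pwwn>  <storage array nwwn>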
15:32Okay. Fiber channel ID, if you're not real familiar with fiber channel, to put it in networking perspective
15:41for you, you could sort of consider that like a layer three address. Okay. That address is actually
15:47assigned to you by the fabric. Okay. So sort of like DHCP, so to speak. So that number gets assigned to the
15:57device from the fabric. And then these are, of course, like your layer two addresses or your very similar
16:04to MAC addresses. Okay. So we can see here that that machine is already directly accessing the SAN
16:13login on its own, going directly through, not using the same login as our host. And if we switch back over here,
here, and let's go into storage management and see if we see a 205 gig hard drive. There it is: 205 gig, Fiber Channel LUN 5. As you can see,
16:48I did have it already formatted for us. Okay. But it's already formatted NTFS, went ahead and grabbed it.
16:56And like we said, from the guest, from the Windows side here, this looks no different than any other hard drive, right?
17:05I can just come right in here, I can go into computer, and right there is my E drive. And I can make a new folder,
17:12you know, whatever I want to do. And shared files for the network, whatever. So from the guest perspective,
17:19they have no idea that this is going on. A lot of people, when they start messing with NPIV here,
17:28they do have a misconception. And I just want to make sure that we're very clear.
17:34Let me go back and... Actually, I could have stayed right in management, couldn't I? Sorry about that.
17:50Going to device manager. Okay. If we look at, you know, storage controllers,
there's not that much here. Notice it thinks it has an LSI SAS adapter. Now, first off, remember from earlier this morning,
18:07I said that's one of the adapters that VMware will simulate into the guest. So, that's the simulated
18:14VMware adapter. Okay? This is, of course, just the built-in Microsoft iSCSI initiator that I also mentioned earlier.
18:24If he wanted to talk directly to the iSCSI storage array, I could fire that up, and we could do that.
18:32And maybe I will later. We'll play it by ear, see how much time we end up with.
18:36Because, of course, we have an iSCSI array, so we can certainly do that.
18:41Not really a target of this course, though, because now we're talking simply about Windows
18:49talking to an iSCSI storage array. So, that's not, like I mentioned earlier,
18:54not really involving VMware aside from a networking perspective. Okay?
19:00But the main thing I want to show you in here is he does not see an HBA. Okay?
19:06So, when you turn on NPIV, you're not going to then start simulating a fiber channel HBA into the virtual machine.
19:17Okay? It doesn't go quite to that level. So, Windows here doesn't really know that it's using NPIV.
19:28All right? I have a question here. So, different VMs get different fiber channel IDs when you enable NPIV on those VMs.
19:41That's absolutely correct. Yes. Yes. If we switch back over here.
19:47Yeah. It gets a whole separate fiber channel ID. It's a whole separate Fabric login.
19:58And, in fact, if we say show the name server database, it shows up in there as a VMware-based initiator.
20:12So, from the Fabric's perspective, it looks like it's a whole separate HBA.
20:20Windows, the guest doesn't see it that way. But the Fabric does.
20:26So, again, if you wanted to do zoning or anything based on this, you could.
20:32But just keep in mind the basic rule, though, that the host has to be able to see it as well.
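On the MDS, the name server command is show fcns database, and the NPIV login shows up as its own initiator entry; roughly like this, with placeholder values:

switch# show fcns database
VSAN 10:
FCID      TYPE  PWWN              (VENDOR)   FC4-TYPE:FEATURE
0x010001  N     <VM's NPIV pwwn>  (VMware)   scsi-fcp:init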
20:40All right. So, what we'll do is we'll take a quick break here.
20:44So, I'll throw it up on the slide here. Quick 10-minute break.
20:48And we'll start back up when the timer expires in the session.
20:52We're just going to have to talk about this one rather than demo just because I don't have the equipment right here to do this one.
20:59The basic idea, very simple, fiber channel over Ethernet is nothing more than taking the fiber channel protocol that we just looked at and sending it over Ethernet as the Layer 2 transport.
So, it's going to add another Layer 2 encapsulation: you'll have the Ethernet header as well as the Fiber Channel header.
21:26All the same concepts are the same. You still have worldwide node names, worldwide port names.
21:32We still have fabric login. So, you still have floggies and all of that.
21:37It's handled a little bit differently.
We have to have something called the FCoE Initialization Protocol, or FIP, that handles those processes, since it's running over Ethernet instead of Fiber Channel.
21:51So, again, there's some technical differences, of course, as far as how those processes function.
22:00But overall, it's going to be the same basic idea.
22:04And probably the biggest thing to realize about FCOE is this big problem we have right here, which is Ethernet is by nature a lossy media.
22:18So, you know, buffers get filled on switches.
22:21You know, you have a 10-gig link going to a 1-gig link.
22:25Obviously, the 1-gig link is not going to be able to keep up.
22:28The buffers get full.
22:30Ethernet's behavior is simply to drop the frames.
22:33Now, if that was UDP running over that, that could potentially be a problem.
22:41If it was streaming video, voice communication, anything like that, you could have a dropout, missing frames.
22:50But generally speaking, if there was a minor dropout, the person might still be able to follow the conversation.
22:57If it was a drop in video, you lose a couple frames.
23:02Maybe there's a little bit of a jitter in the video.
23:04I guess my main point is life would still go on.
In voice, worst case, you would lose the call.
23:12You'd have to call back.
23:13And I'm not saying these are things we would want to have happen.
23:16I'm just saying that at the end of it, somebody could back up their video and watch it again.
23:23Somebody could make the phone call again.
23:25And in the case of storage, if you lose data, you have corrupted data on your storage.
23:34So this becomes far more important when we're talking about storage as far as not losing data.
23:44As you can see, the second part here says that, you know, Fiber Channel is a lossless media.
23:50It uses something called B2B credits or buffer-to-buffer credits for flow control through Fiber Channel.
23:59And again, you know, we don't have to go real deep into Fiber Channel here.
24:04But basically the way it works is that the sender, you know, if you're a machine and you're trying to save to your storage,
24:14the storage has a certain number of buffers that it can handle as well as every device in the middle has a certain number of buffers.
24:23And until you get a clear to send from all of those buffers, you're not allowed to send your data.
24:29So when you send data to the storage array or you're getting it from the storage array either way,
24:37but when the data is being sent over Fiber Channel, we already know before we even send that data
24:44that there's enough bandwidth and throughput on the Fiber Channel fabric to handle that traffic.
24:50This is established ahead of time. So there's no chance for loss because there's not going to be any buffers filling up
24:58because you can't send unless there is space in the receiver's buffers.
25:03And like I said, every device in the middle as well.
25:06So Fiber Channel is lossless. And as such, it has no error correcting. It's not like TCP.
25:15You know, TCP, of course, if we drop frames, we just do a TCP retransmit.
25:20In the case of Fiber Channel, that's not the case.
25:24You can't resend the data or anything along those lines.
25:30So the solution, of course, is that Ethernet has to be turned into a lossless fabric
25:37so that we can send the data across without losing data, okay?
25:46So to support this, we have a, it's actually a set of features called data center bridging or DCB.
25:54The most important of those we have listed at the bottom here, which is the priority flow control.
26:01This is the most important of the features, but certainly there are other ones as well that go along with this.
26:10I just wanted to hit the important stuff.
26:12Priority flow control works along the exact same lines as buffer-to-buffer credits, okay?
26:20We give strict priority. We don't allow drops from Fiber Channel traffic.
26:26And these are all things that are going to be required in order to do FCOE.
26:32This, of course, means that these features have to be supported by every switch in the transit path.
26:40So Nexus, of course, is Cisco's product line that supports this.
26:45So the Nexus switching platform, they all support data center bridging.
26:52Okay? But these are things that are going to be required in order to support this.
26:57And it's got to be supported end-to-end.
27:00Now, ESXi can support this either in hardware or in software.
27:07All right?
27:09Now, to do it in hardware, it's actually sort of simple.
27:15You need what's called a CNA or a converged network adapter.
27:20This is really nothing more than a network card that does FCOE and acts like a Fiber Channel HBA.
27:29So from VMware's perspective, it's going to see this CNA, which of course needs to be on the hardware compatibility list.
27:37And it's simply going to see that HBA as, sorry, the CNA, as both an HBA and a network card.
27:49So it's just going to see both.
27:51So for the HBA side, you're going to configure the Fiber Channel just like we did a couple minutes ago using just a standard Fiber Channel connection.
28:02So if we were running this on, you know, a Cisco UCS blade, which only does FCOE down to the blades,
28:12then the mezzanine adapter on the UCS blade is simply going to be presented as a Fiber Channel HBA and a NIC.
28:25It will just show up as two pieces of hardware.
28:29So from that perspective, that's where the last point comes in there.
28:34No special configuration on the host.
28:36It just, you know, all the configuration is going to happen on the CNA.
28:40Of course, in the case of Cisco, if we're talking about UCS, that would all happen in UCS Manager.
28:48Okay.
28:49For software, the only requirement is that you have to have a network card that supports data center bridging.
28:58And then, by the way, the data center bridging, that's negotiated for most equipment.
29:05You can turn it on manually, too, if you need to, but it's generally negotiated with LLDP, which is the industry standard equivalent of Cisco's CDP, Cisco Discovery Protocol.
Link Layer Discovery Protocol can negotiate data center bridging capabilities.
29:28Okay.
And ESXi also requires an I/O offload feature on the network card.
29:35So, of course, you can look at VMware.
29:38It's changing all the time, of course.
29:40But you can look at VMware's hardware compatibility list and find out what network cards support this and which ones do not.
29:51Okay.
So, for software FCoE, there are really just a couple of requirements. You have to disable spanning tree on the switch facing the host, at least on the interface facing the host.
The reason for this is that spanning tree can interfere with FIP; remember, I talked before about FIP, the FCoE Initialization Protocol.
It's FCoE's protocol for doing the FLOGIs.
It can interfere with that, so you need to turn off spanning tree.
You also need to enable priority flow control on the switch ports facing the host. Again, as I already said, priority flow control, in other words data center bridging, needs to be enabled the whole way through, the whole way from storage all the way to the host.
30:50So, that needs to be turned on on every single port.
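On a Nexus switch, the host-facing Ethernet port for software FCoE ends up looking something like the sketch below; treat it only as a sketch, since the exact commands vary by platform and NX-OS version, and the interface number is just an example:

switch(config)# interface ethernet1/5
switch(config-if)# switchport mode trunk
switch(config-if)# spanning-tree port type edge trunk
switch(config-if)# priority-flow-control mode auto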
You also have to have a completely separate VMkernel port for each adapter.
So, there has to be a one-to-one ratio between the VMkernel port and the storage adapter.
And VMkernel ports, we'll create some of these a little bit later.
We need them for iSCSI storage and some other things as well.
31:19Just some limits that ESX has.
31:23You can have up to four FCOE adapters in each host.
31:28And then, once you've created them, so you've put the appropriate hardware in, you've created the network for them, you've created the VM kernel port, tied it to just that one adapter.
31:41Then, the last step is you simply go into the storage adapters, where we were a couple minutes ago, looking at my fiber channel storage adapter, and you simply activate them.
31:55So, basically, I'll show you where you do it real quick.
31:59Like I said, I can't unfortunately do it with my hardware, but if I come in here and go back to my hosts, and we go to storage adapters.
32:13And if I were to hit add a new storage adapter here, it gives you two choices, software iSCSI adapter and software FCOE adapter, which, as you can see, is grayed out.
32:27Now, that would not be grayed out if I had met these previous requirements.
32:31It's just that my network cards do not support that.
32:34They have to be 10 gig just as a bare minimum to support data center bridging.
32:40So, like I said, mine do not support that.
32:45But that's where you would go to do it.
32:47You would simply add the FCOE adapter here.
32:50And, again, if your storage is configured correctly, the LUNs should just appear, and you treat them as any other LUN.
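The same activation can also be done from the command line; a quick sketch, assuming the DCB-capable NIC shows up as vmnic2:

esxcli fcoe nic list               # lists vmnics that are FCoE/DCB capable
esxcli fcoe nic discover -n vmnic2 # activates the software FCoE adapter on that vmnic
esxcli fcoe adapter list           # the new vmhba should show up here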
33:01Okay, next is booting from either one of these.
33:15So, from fiber channel and FCOE.
33:20Booting from this is relatively straightforward.
The first one, really for Fiber Channel or hardware FCoE, is that you simply have to configure the HBA.
33:37If it's a standalone machine and you just have something like I do, like a Qlogic HBA or something like that in there,
33:46then the device has its own BIOS, and you simply go into the BIOS.
33:53You know, obviously you have to consult your device's documentation on how to do that.
33:58But you simply go into the BIOS for the device, and you configure it to boot from SAM, point it to the correct LUN.
34:09Again, masking has to be correct and all that kind of stuff in your storage array and your storage network already.
34:16But as long as that's done, you should be able to boot from it.
34:21Again, if this was Cisco UCS, that's all done from UCS Manager, setting it up to boot from LUN.
34:28So, I mean, to a large extent, again, this particular topic really doesn't have anything to do with VMware.
34:38This is configuration of a piece of hardware as a boot device.
34:45So it's going to be done based on the BIOS and based on the equipment.
34:49It's just, you know, of course, when you install ESXi, you just have to make sure, you know, you boot from whatever, you know, local CD or, you know, if you have any kind of, you know, IPMI type card, you know, mount the CD-ROM remotely.
35:05However, however you're going to do it, when you're going through the ESXi installation process, you know, just, of course, make sure that you're installing VMware to this LUN that you're then going to boot from, of course.
35:18This one's sort of important.
35:20We talked earlier in the last section there about shared storage.
35:26Might have been the first section, but we talked about shared storage.
35:30Shared storage is very important for things like vMotion, high availability, fault tolerance, DRS, all of these things that need to be able to move a virtual machine from one device, you know, from one host to another.
35:45They have to have access to the same drives, all right, which of course implies that we are going to have multiple hosts accessing the exact same storage, which we're absolutely going to.
36:01This is not a problem at all because VMFS is a cluster-aware filing system, so it handles things like locking files and stuff, and we'll get to that a bit later.
36:13But it's not a problem.
36:15It's all handled, okay?
36:17But when it comes to booting, when it comes to the drive that ESX is installed on, then that has to be exclusive just to that host.
36:28And I'm sure, at least I'm hoping that everybody can understand why, right?
36:33Because, I mean, the configuration and all that kind of stuff for the host is stored on that drive.
36:39So, you know, it would be a little difficult if you had multiple devices booting up with the same IP addresses and MAC addresses and everything else.
36:49So that would obviously be a problem.
So, when it comes to boot LUNs, they, of course, have to be exclusive, okay?
Now, what if you want to be able to boot from the software FCoE adapter?
We'll see this same sort of concept,
with different terms, when it comes to booting from iSCSI; particularly, again, when you're talking about software, it's a little bit of a logic problem.
37:26I mean, if you think it through for a minute, you're going to boot from an adapter that needs to get its configuration from the operating system that you're trying to boot.
37:40So, I hope you can understand there's a little bit of a dilemma there.
The way FCoE handles it, we have it right on here for you: the FCoE boot table, that is, the FCoE Boot Firmware Table or FBFT, is a standard that was developed by Intel.
38:03They use it on their FCOE-capable network cards.
VMware has, I mean, I don't want to call it a standardized way, because VMware, of course, is not a standards body, but we'll just say a more generic way: instead of each vendor writing their own method of doing this, VMware wrote the FBPT, the FCoE Boot Parameter Table, to allow vendors to simply write to that standard,
38:35and then, you know, VMware doesn't have to get rewritten for every single network card that comes out that supports this, okay?
38:43So, there are two different ways of accomplishing the same thing, but essentially what it's doing is it's taking the configuration from VMware and writing it back to the network card.
38:57It's storing it on the NIC so that when it powers back on and the config on the NIC loads, it knows which fiber channel, or in this case, this is, of course, specifically for FCOE, which FCOE LUN to connect to and everything to boot the operating system.
39:15Once the operating system starts booting, it will, of course, switch over to the VMware kernel handling IO.
Okay, next up is iSCSI: taking SCSI commands and sending them over TCP/IP as the transport.
39:32So, this is, again, another, this would fall under the classification of SAN.
39:37Again, so it's going to be block level storage, okay?
39:42We're going to mount it just like, again, aside from the configuration changes, when we get to the actual LUNs,
39:51we're going to be treating this very much like we did the fiber channel, okay?
39:59So, the naming convention for this, we don't have, you know, worldwide node names and all that stuff with iSCSI.
40:08It uses something called an IQN or an iSCSI qualified name.
40:14That's what most storage devices use.
Just to throw it out there, it can also use something called the EUI, or extended unique identifier.
40:23For your reference, if you want to dig into these a little bit more, both of these are referred to in RFC 3721 and RFC 3722.
40:36So, if you want a little bit more reference on that.
40:39But, basically, the way this is formatted, it's a little bit interesting, I think, anyway.
40:46I find this just a little bit entertaining.
40:49It's the basic formatting for the IQN.
40:52I'm not going to go over the EUI so much because, like I said, you're not going to see it as much in VMware.
40:58But, basically, what you're looking at is the first part of this.
41:02The first part is simply IQN.
41:05And that, I think, is pretty self-explanatory since this is called an IQN.
And the next part is who made it; well, it's not literally the next part.
I know I'm jumping ahead and sort of talking about this all together, but it's the domain name of the vendor making the equipment.
41:27So, in this case, since, in this example that I'm showing you here, this is right from VMware.
41:34It's off of the VMware software iSCSI adapter.
41:38So, the vendor is VMware.com.
41:42So, it's the inverted domain name preceded by, I know we're going a little bit backwards here, but preceded by the year and month that that DNS name was registered for the first time.
41:58So, that's a lot of fun.
Followed, then, by the rest, which is basically just a unique identifier that on most platforms can be modified, changed, or programmed.
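So, putting that together, a VMware software iSCSI initiator name looks something like this; the part after the colon is just an example value:

iqn.1998-01.com.vmware:esxi01-1a2b3c4d

iqn              - the literal prefix
1998-01          - year and month the vendor's domain name was first registered
com.vmware       - the vendor's domain name, inverted (vmware.com)
esxi01-1a2b3c4d  - the unique identifier portion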
So, we'll see these names as we go; we'll see the names that get presented from my SAN and so on.
And just like FCoE, this also can be implemented either in software or hardware.
42:39If we're talking about software, this is simply a software stack that's implemented directly in the ESXi kernel.
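For reference, turning that software iSCSI stack on from the command line is about two commands; a sketch, run from the ESXi shell:

esxcli iscsi software set --enabled=true   # enable the software iSCSI initiator
esxcli iscsi adapter list                  # the new vmhba and its IQN appear here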
42:48So, that's the one that we're going to be spending most of our time on today just because, well, if you take a look at the hardware side of it, let's take a look here.
43:01Dependent hardware, now, there's two types of hardware iSCSI adapters, okay?
43:07If we're talking about a dependent hardware adapter, this means that it still gets its configuration from ESXi.
43:18It really just means that it supports specialized iSCSI hardware on the network card.
43:24So, it's really just a regular network card with some added functionality.
43:30I mean, obviously, things like TCP offload and things like that.
43:33So, basically, it just supports taking the workload of iSCSI and doing it in the hardware of the network card.
43:43But the configuration is still coming from ESXi itself.
43:48All right?
43:48So, this would be very similar to the example we just looked at a minute ago, which is the FCOE software adapter,
43:58where there's specialized hardware behind it, but the actual configuration is still being done in VMware.
44:07Independent hardware is basically just like an HPA.
44:12All right, this would be the equivalent of, you know, the hardware FCOE adapter or the regular FC adapter that, you know, I have in my host,
44:25where it's just a piece of hardware.
44:27That hardware has been configured in its BIOS for what it's supposed to connect to.
So, for all of the iSCSI information, you're going to go into the BIOS of that network card
44:45and tell it that it's supposed to connect to this iSCSI storage array, this LUN, and that's it.
44:56But at that point, again, we're not going to talk about independent hardware iSCSI beyond this
45:03because it's out of VMware's hands at that point.
45:07That is all being done in hardware.
45:09And again, as long as you configure the independent hardware correctly, VMware has nothing to do with it.
45:15It's going to see the LUNs that it's supposed to see.
45:17And at that point, it's just going to go right to formatting them or whatever it is we need to do with them.
45:21We have a question here.
45:24Fiber channel can only be implemented in hardware.
45:28Am I correct?
45:28Yes, you are absolutely correct.
45:32Just so we're clear, we're talking about FC.
45:36Fiber channel itself is hardware.
45:40FCOE, of course, can be either one.
45:44Okay, but fiber channel itself, I mean, unless there's a trick that I'm not aware of,
45:49which is always possible, but yeah, fiber channel itself is a hardware implementation.