Understanding Fibre Channel

Posted: March 15th, 2010 | Filed under: SAN 101

As we dive deeper into SAN technology, it’s Fibre Channel’s turn to be examined. FC is the underpinning of all SAN technologies these days, as it won the protocol war roughly 25 years ago.

FC wouldn’t be much use without something on top of it, namely SCSI. FC is the low-level transport that ships the data, but as far as hosts are concerned, they’re speaking SCSI. The hubs, switches, and HBAs in a SAN all speak FC, while the applications that use SAN storage continue to use familiar protocols, like SCSI.
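To make that concrete, here’s a minimal Python sketch of the kind of SCSI command a host issues. The `read10_cdb` helper is hypothetical, but the layout it packs is the standard 10-byte READ(10) CDB; FC (via the FCP mapping) simply ferries these bytes across the wire.

```python
import struct

def read10_cdb(lba: int, blocks: int) -> bytes:
    """Build a SCSI READ(10) CDB.

    Layout: opcode (0x28), flags byte, 4-byte logical block address,
    group byte, 2-byte transfer length, control byte -- 10 bytes total.
    """
    return struct.pack(">BBIBHB", 0x28, 0, lba, 0, blocks, 0)

# Ask for 8 blocks starting at LBA 2048; the SAN just moves these bytes.
cdb = read10_cdb(lba=2048, blocks=8)
```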

The idea behind FC was to create a high-throughput, low-latency, reliable, and scalable protocol. Ethernet wouldn’t quite cut it for highly available storage needs. FC can currently operate at speeds up to 10Gb/s (10GFC) for uplinks, and 4Gb/s for standard host connections. FC also provides small connectors. As silly as it sounds, SCSI cables become unruly over time, and small strands of fibre are certainly easier to manage. The equipment required to connect to a FC SAN (multiple HBAs for each host, fibre, and switches) is extremely expensive, and was the main reason SAN technologies took so long to become widely adopted.

In reality, the FC protocol behaves differently depending on the topology in use. The following three topologies are supported:

  • PTP (point to point): normally used for DAS configurations.
  • FC-AL (FC Arbitrated Loop): Fabric Loop ports, or FL ports on a switch, and NL_Ports (node loop) on an HBA, support loop operations.
  • FC-SW (FC Switched): the mode when operating on a switched SAN.

FC-AL operation is quite scary, but sometimes a device doesn’t support FC-SW operation, and there’s no choice. A hub can only operate in FC-AL mode, so attached hosts must as well. When a device joins an FC-AL, or when there’s any type of error or reset, the loop must reinitialize. All communication is temporarily halted during this process, which can cause problems for some applications. In theory, FC-AL is limited to 127 nodes by its addressing mechanism; in practice, the usable number is closer to 20. FC-AL is mostly relegated to niche uses now, such as internal disk array communications and internal storage for high-end servers.
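A toy model (not real loop initialization behavior) captures the two pain points: the 127-address ceiling, and the fact that every join forces a loop-wide reinitialization.

```python
# Toy model of FC-AL behavior: the addressing scheme caps the loop at
# 127 nodes, and every join forces a reinitialization that briefly
# halts all traffic on the loop.
MAX_NODES = 127

class Loop:
    def __init__(self):
        self.nodes = []
        self.reinit_count = 0  # each reinit is a brief, loop-wide outage

    def join(self, node):
        if len(self.nodes) >= MAX_NODES:
            raise RuntimeError("no loop addresses left")
        self.nodes.append(node)
        self.reinit_count += 1  # the whole loop reinitializes on a join
```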

FC switches can be connected any way you please, since the FC protocol avoids the possibility of a loop by nature. Ethernet isn’t so lucky. The addressing scheme used does impose a limit of 239 switches though. FC switches use FSPF, a link-state protocol like OSPF in the IP world, to ensure loop-free and efficient connectivity.
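FSPF’s core job, like OSPF’s, is a shortest-path calculation over link costs. Here’s a sketch using Dijkstra’s algorithm on a made-up three-switch fabric; the switch names and costs are invented for illustration.

```python
import heapq

def shortest_paths(graph, src):
    """Dijkstra's algorithm: the heart of a link-state protocol like FSPF.
    graph maps switch -> {neighbor: link_cost}."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

fabric = {"sw1": {"sw2": 1, "sw3": 4},
          "sw2": {"sw1": 1, "sw3": 1},
          "sw3": {"sw1": 4, "sw2": 1}}
# sw1 reaches sw3 via sw2 (cost 2), not over the direct cost-4 link
```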

FC networks are generally designed in one of two ways: either one big star, or one big star with edge switches hanging off it. These are commonly known as “core-only” and “core-edge” configurations. Normally a SAN will contain two of these networks, and each host’s HBA or storage device’s controller will attach to each. Keeping these networks separate isn’t as necessary as it is with FC-AL topologies, but even with FC-SW setups it still provides complete isolation and assurance that a problem in one fabric won’t impact the other. An FSPF recalculation, for example, could cause a brief interruption in service.
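A toy model of why dual fabrics help (the fabric names and the failure scenario are made up): the host has one HBA in each fabric, and I/O fails over when one fabric is disrupted.

```python
# Sketch of dual-fabric redundancy: one HBA per fabric, and I/O fails
# over to the other fabric when one is disrupted.
class Fabric:
    def __init__(self, name):
        self.name = name
        self.up = True

def send_io(fabrics):
    # Try each path in order; any healthy fabric carries the I/O.
    for fabric in fabrics:
        if fabric.up:
            return f"I/O sent via {fabric.name}"
    raise RuntimeError("all paths down")

a, b = Fabric("fabric-A"), Fabric("fabric-B")
a.up = False  # e.g. fabric A pauses during an FSPF recalculation
# send_io([a, b]) -> "I/O sent via fabric-B"
```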

As previously mentioned, there are different port types in a SAN, and it can get confusing. Let’s try to clear up some of that terminology:

  • N_Port: Node Port; the node connection point; end points for FC traffic
  • F_Port: Fabric Port; a switch-connected port that acts as the “middle point” between two N_Ports
  • NL_Port: Node Loop Port; connects directly to other NL_Ports, or to a switched fabric via an FL_Port
  • FL_Port: Fabric Loop Port; a shared point of entry into a fabric for arbitrated-loop devices; for example, NL_Port to FL_Port to F_Port to N_Port (through a switch)
  • E_Port: Expansion Port; used to connect multiple switches together via ISL (inter-switch links)
  • G_Port: Generic Port; can switch between F_Port and E_Port operation depending on how it’s connected
  • TE_Port: Trunked Expansion Port; link aggregation of multiple E_Ports for higher throughput

You’ll generally only see F_Ports and FL_Ports when looking at a single SAN switch, and knowing the difference helps. FL means that you’re talking FC-AL, and there’s a device attached that is either a hub, something that can’t do anything but FC-AL, or something strange. A port will automatically configure itself as an FL_Port if the attached device is loop-only; otherwise, it becomes an F_Port. It’s also worth noting that some brands of FC switches don’t allow you to have an E_Port unless you pay a higher licensing fee. It’s something to think about if you ever plan to connect multiple switches together.

FC Layers
FC has its own layers, so in fact, calling it “like Ethernet” isn’t quite accurate, even if it helps for understanding. They are:

  • FC-0: The interface to the physical media; cables, etc
  • FC-1: Transmission protocol or data-link layer, encodes and decodes signals
  • FC-2: Network Layer; the core of FC
  • FC-3: Common services, like hunt groups
  • FC-4: Everything! Protocol mappings for upper-level protocols: SCSI (via FCP), IP, and others

The bulk of FC is really in FC-2. FC-PH refers to FC-0 through FC-2, which are strangely dubbed the physical layers.

FC also supports its own naming and addressing mechanism, which sheds light on the previously mentioned limitations in FC-AL and FC-SW topologies. Next time, we’ll discuss the header format for FC-2 as well as FC address assignment and name resolution.

In a Nutshell:

  • FC is the transport mechanism, and SCSI or even IP ride atop FC
  • FC-AL is a loop, where all connected devices see each other, and a re-initialization briefly halts the entire loop
  • Port types reveal what is actually happening, and knowing what they stand for can aid in topology visualization when looking at a switch’s configuration

SAN 101: Intro to SANs and Storage

Posted: March 6th, 2010 | Filed under: SAN 101

Welcome! We begin our Storage Networking 101 series with an introduction to Storage Area Networks and storage technologies. In case you missed it, be sure to read the entire Networking 101 series (link coming soon) before embarking on the Storage journey—a solid understanding of various network protocols is required.

What is a storage network?
A storage network is any network that’s designed to transport block-level storage protocols. Hosts (servers), disk arrays, tape libraries, and just about anything else can connect to a SAN. Generally, one would use a SAN switch to connect all devices, and then configure the switch to allow friendly devices to pair up. The entire concept is about flexibility: in a SAN environment you can move storage between hosts, virtualize your storage at the SAN level, and obtain a level of redundancy that was never possible with direct-attached storage.
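The switch configuration that lets “friendly devices pair up” is called zoning. Syntax varies by switch vendor, but the underlying idea is simple set membership; in this hypothetical sketch, the WWN-style names are invented:

```python
# Minimal sketch of SAN zoning: only devices that share a zone may talk.
# The WWN-style names below are made up for illustration.
zones = {
    "zone_db":  {"10:00:00:00:c9:aa:bb:01", "50:06:01:60:12:34:56:01"},
    "zone_web": {"10:00:00:00:c9:aa:bb:02", "50:06:01:60:12:34:56:01"},
}

def can_communicate(wwn_a: str, wwn_b: str) -> bool:
    """True if some zone contains both devices."""
    return any(wwn_a in z and wwn_b in z for z in zones.values())
```

Note that the storage controller (`...56:01`) appears in both zones, while the two hosts cannot see each other: a common arrangement.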

A FC-SAN, or Fibre Channel SAN, is a SAN built on the Fibre Channel protocol. Think of Fibre Channel (FC) as an Ethernet replacement. In fact, Fibre Channel can transport other protocols, like IP, but it’s mostly used for transporting SCSI traffic. Don’t worry about the FC protocol itself for now; we’ll cover that in another article later on.

A fairly new type of SAN is the IP-SAN: an IP network that’s been designated as a storage network. Instead of using FC, an IP-SAN uses Ethernet with IP and TCP to transport iSCSI data. There’s nothing to stop you from shipping iSCSI data over your existing network, but an IP-SAN typically means that you’re using plumbing dedicated for the storage packets. Operating system support for the iSCSI protocol has been less than stellar, but the state of iSCSI is slowly improving.
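By convention, iSCSI targets listen on TCP port 3260, so a first sanity check against a target portal is plain TCP reachability. This sketch only tests the TCP handshake, not an actual iSCSI login:

```python
import socket

def iscsi_portal_reachable(host: str, port: int = 3260,
                           timeout: float = 2.0) -> bool:
    """Check that an iSCSI portal accepts a TCP connection.

    iSCSI targets listen on TCP 3260 by convention; this says nothing
    about whether an iSCSI login would actually succeed.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```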

Another term you’ll frequently see thrown around is NAS. Network Attached Storage doesn’t really have anything to do with SANs—they’re just file servers. A NAS device runs something like Linux, and serves files using NFS or CIFS over your existing IP network. Nothing fancy to see here; move along.

There is one important take-away from the NAS world, however. That is the difference between block-level storage protocols and file-level protocols. A block-level protocol is SCSI or ATA, whereas file protocols can be anything from NFS or CIFS to HTTP. Block protocols ship an entire disk block at once, and it gets written to disk as a whole block. File-level protocols could ship one byte at a time, and depend on the lower-level block protocol to assemble the bytes into disk blocks.
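The distinction is easy to see in code. In this hypothetical sketch, block-level access addresses storage by block number in whole-block units, while file-level access addresses a named file at arbitrary byte offsets:

```python
import os

BLOCK_SIZE = 512  # a common disk sector size

def read_block(fd: int, lba: int) -> bytes:
    """Block-level access: address storage by block number,
    whole blocks at a time."""
    return os.pread(fd, BLOCK_SIZE, lba * BLOCK_SIZE)

def read_bytes(path: str, offset: int, count: int) -> bytes:
    """File-level access: address a named file at any byte offset."""
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(count)
```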

Block-level protocols
A protocol always defines a method by which two devices communicate. Block storage protocols are no different: they define how storage interacts with storage controllers. There are two main block protocols used today: SCSI and ATA.

ATA operates in a bus topology, and allows for two devices on each bus. Your IDE disk drive and CD-ROM are, you guessed it, using the ATA protocol. There are many different ATA standards, but we’ll cover just the important ones here. ATA-2 was also known as EIDE, or Enhanced IDE. It was the first version of the ATA protocol as we know it today. ATA-4 introduced ATAPI, or the ATA Packet Interface, which lets CD-ROM drives speak SCSI-like commands on the same bus as a regular ATA device.

The neat thing about ATA is that the controllers are integrated. The only “traffic” sent over the ATA bus is plain electrical signals. The host operating system is actually responsible for implementing the ATA protocol, in software. This means that ATA devices will never, ever be as fast as SCSI, because the CPU has to do so much work to just talk to these devices. As far as SANs are concerned, ATA isn’t that important. There are some ATA-based devices that allow you to connect cheap disks, but they translate operations into SCSI before sending them out to the SAN.

SCSI, on the other hand, is very confusing. SCSI-1 and SCSI-2 devices were connected via a parallel interface to a bus that could support 8 or 16 devices, depending on the bus width. Don’t worry about the details unless you’re unfortunate enough to have some older SCSI gear lying around.

SCSI-3 separated the device-specific commands into a different category. The primary SCSI-3 command set includes the standard commands that every SCSI-3 device speaks, but the device-specific commands can be anything. This opened up a whole new world for SCSI, and it has been used to support many strange and wonderful new devices.

SCSI controllers normally contain a storage processor, and the commands are processed on-board, so the host operating system isn’t burdened with that work, as it is with ATA. Such a SCSI controller is called a Host Bus Adapter. In the SAN world, the FC card is always called an HBA.

The main thing to know about SCSI is that it operates in a producer/consumer manner. One SCSI device (the initiator) initiates communication with another device, known as the target. The roles can be reversed! Most people call this a command/response protocol, because the initiator sends a command to a target and awaits a response, but that’s not always the case: in asynchronous mode, the host (initiator) can simply blast the target with data until it’s done. The SCSI bus, parallel in nature, can only support a single communication at a time, so subsequent sessions must wait their turn. SAS, or Serial Attached SCSI, does away with this limitation by automatically switching back and forth.
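A toy model of the initiator/target exchange (nothing here is real SCSI wire format, and the class names are invented):

```python
# Toy model of SCSI's initiator/target (command/response) exchange.
class Target:
    def __init__(self, blocks):
        self.blocks = blocks  # block number -> data

    def handle(self, command, lba):
        if command == "READ" and lba in self.blocks:
            return ("GOOD", self.blocks[lba])
        return ("CHECK CONDITION", None)  # SCSI's "something went wrong"

class Initiator:
    def __init__(self, target):
        self.target = target

    def read(self, lba):
        # The initiator sends a command, then waits on the target's response.
        status, data = self.target.handle("READ", lba)
        return data if status == "GOOD" else None

disk = Target({0: b"boot sector"})
host = Initiator(disk)
# host.read(0) -> b"boot sector"; host.read(99) -> None
```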

SCSI is tremendously more complex, but that’s the gist of it.

We need to understand SCSI to know how our storage network is going to ship data. The SCSI protocol plays an enormous role in storage networking, so you may even want to look at it in more depth.

Next up, we’ll begin talking about Fibre Channel itself, which, as chance would have it, is much more complex than Ethernet. This is certainly going to be a fun journey.

In A Nutshell:

– A FC-SAN is a network that uses Fibre Channel at Layer 2, instead of Ethernet, and is dedicated to moving around SCSI commands.
– The SCSI initiator is generally a host’s storage controller, called an HBA; the SCSI target is most often the storage device you’re talking to.
– iSCSI can transport SCSI over your existing network, but a network dedicated to iSCSI is called an IP-SAN.