Storage

A storage area network (SAN) or storage network is a computer network that provides access to consolidated, block-level data storage. SANs are primarily used to enhance the accessibility of storage devices, such as disk arrays and tape libraries, to servers so that the devices appear to the operating system as locally attached devices. A SAN is typically a dedicated network of storage devices not accessible through the local area network (LAN) by other devices, thereby preventing LAN traffic from interfering with data transfer.

The cost and complexity of SANs dropped in the early 2000s to levels allowing wider adoption across both enterprise and small to medium-sized business environments.

A SAN does not provide file abstraction, only block-level operations. However, file systems built on top of SANs do provide file-level access, and are known as shared-disk file systems.
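
To make the distinction concrete: a SAN-attached disk typically shows up on the host as an ordinary block device, which the operating system reads and writes in fixed-size blocks rather than as files. The following minimal Python sketch reads one raw block from such a device; the device path /dev/sdb and the 4096-byte block size are assumptions for illustration, and opening a raw device normally requires administrator privileges.

```python
import os

DEVICE = "/dev/sdb"   # hypothetical SAN-attached block device
BLOCK_SIZE = 4096     # assumed logical block size

def read_block(device: str, block_number: int) -> bytes:
    """Read one raw block at an absolute offset, with no file-system layer."""
    fd = os.open(device, os.O_RDONLY)
    try:
        # os.pread takes (fd, byte_count, byte_offset); block-level access is
        # just positioned reads and writes of fixed-size blocks like this.
        return os.pread(fd, BLOCK_SIZE, block_number * BLOCK_SIZE)
    finally:
        os.close(fd)

if __name__ == "__main__":
    data = read_block(DEVICE, 0)  # block 0 often holds a partition table
    print(f"read {len(data)} bytes from block 0 of {DEVICE}")
```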


Storage architectures

Storage area networks (SANs) are sometimes referred to as the network behind the servers[1]:11 and historically developed out of a centralized data storage model, but with its own data network. A SAN is, at its simplest, a dedicated network for data storage. In addition to storing data, SANs allow for the automatic backup of data and the monitoring of the storage as well as the backup process.[2]:16–17 A SAN is a combination of hardware and software.[2]:9 It grew out of data-centric mainframe architectures, where clients in a network can connect to several servers that store different types of data.[2]:11 To scale storage capacities as the volumes of data grew, direct-attached storage (DAS) was developed, where disk arrays or just a bunch of disks (JBODs) were attached to servers. In this architecture, storage devices can be added to increase storage capacity. However, the server through which the storage devices are accessed is a single point of failure, and a large part of the LAN bandwidth is used for accessing, storing, and backing up data. To solve the single point of failure issue, a direct-attached shared storage architecture was implemented, where several servers could access the same storage device.[2]:16–17

DAS was the first network storage system and is still widely implemented where data storage requirements are not very high. Out of it developed the network-attached storage (NAS) architecture, where one or more dedicated file servers or storage devices are made available in a LAN.[2]:18 The transfer of data, particularly for backup, therefore still takes place over the existing LAN. If more than a terabyte of data was stored at any one time, LAN bandwidth became a bottleneck.[2]:21–22 SANs were therefore developed, in which a dedicated storage network is attached to the LAN, and terabytes of data are transferred over a dedicated high-speed, high-bandwidth network. Within the storage network, storage devices are interconnected. Transfer of data between storage devices, such as for backup, happens behind the servers and is meant to be transparent.[2]:22 While in a NAS architecture data is transferred using the TCP and IP protocols over Ethernet, distinct protocols were developed for SANs, such as Fibre Channel. Therefore, SANs often have their own network and storage devices, which have to be bought, installed, and configured. This makes SANs inherently more expensive than NAS architectures.[2]:29

Host layer

Servers that allow access to the SAN and its storage devices are said to form the host layer of the SAN. Such servers have host bus adapters (HBAs), cards that attach to slots on the server main board (usually PCI slots) and through which the operating system of the server can communicate with the storage devices in the SAN.[3]:26 A cable connects to the host bus adapter card through the gigabit interface converter (GBIC). These interface converters are also attached to switches and storage devices within the SAN, and they convert digital bits into light impulses that can then be transmitted over the Fibre Channel cables. Conversely, the GBIC converts incoming light impulses back into digital bits. The predecessor of the GBIC was the gigabit link module (GLM).[3]:27 This applies to Fibre Channel deployments only.
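
As an aside for Linux hosts: the kernel's Fibre Channel transport class exposes each HBA port under /sys/class/fc_host, including its World Wide Port Name (WWPN) and link state. The sketch below, assuming a Linux system with at least one Fibre Channel HBA present, enumerates those ports by reading standard sysfs attribute files; it is illustrative only.

```python
from pathlib import Path

FC_HOST_DIR = Path("/sys/class/fc_host")  # standard Linux sysfs location

def _attr(host: Path, name: str) -> str:
    """Read one sysfs attribute file of an fc_host entry."""
    return (host / name).read_text().strip()

def list_fc_hbas() -> list[dict]:
    """Return the WWPN, WWNN and link state of each Fibre Channel HBA port."""
    hbas = []
    for host in sorted(FC_HOST_DIR.glob("host*")):
        hbas.append({
            "host": host.name,
            "wwpn": _attr(host, "port_name"),    # World Wide Port Name
            "wwnn": _attr(host, "node_name"),    # World Wide Node Name
            "state": _attr(host, "port_state"),  # e.g. "Online"
        })
    return hbas

if __name__ == "__main__":
    for hba in list_fc_hbas():
        print(hba)
```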

Storage layer

The various storage devices in a SAN are said to form the storage layer. It can include a variety of hard disk and magnetic tape devices that store data. In SANs, disk arrays are joined through a RAID, which makes many hard disks look and perform like one big storage device.[3]:48 Every storage device, or even every partition on a storage device, has a logical unit number (LUN) assigned to it. This is a unique number within the SAN, and every node in the SAN, be it a server or another storage device, can access the storage through the LUN. The LUNs allow the storage capacity of a SAN to be segmented and access controls to be implemented. A particular server, or a group of servers, may, for example, only be given access to a particular part of the SAN storage layer, in the form of LUNs. When a storage device receives a request to read or write data, it will check its access list to establish whether the node, identified by its LUN, is allowed to access the storage area, also identified by a LUN.[3]:148–149 LUN masking is a technique whereby the host bus adapter and the SAN software of a server restrict the LUNs for which commands are accepted; LUNs that the server should in any case not access are thereby masked. Another method to restrict server access to particular SAN storage devices is fabric-based access control, or zoning, which has to be implemented on the SAN networking devices and the servers. Server access is thereby restricted to storage devices that are in a particular SAN zone.[4]
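
The access-list check described above can be sketched in a few lines. This is a toy model of the bookkeeping behind LUN masking, not any vendor's implementation: an access table maps each initiator (here identified by an invented WWPN) to the set of LUNs it may address, and a request is serviced only if the requested LUN is in that set.

```python
# Toy masking table: initiator WWPN -> LUNs it is allowed to access.
# The WWPNs and LUN numbers below are invented for illustration.
MASKING_TABLE: dict[str, set[int]] = {
    "10:00:00:90:fa:12:34:56": {0, 1},
    "10:00:00:90:fa:ab:cd:ef": {2},
}

def is_allowed(initiator_wwpn: str, lun: int) -> bool:
    """Check the access list before servicing a read or write request."""
    return lun in MASKING_TABLE.get(initiator_wwpn, set())

# A masked LUN is simply absent from the initiator's set, so requests
# from that server for the LUN are refused.
assert is_allowed("10:00:00:90:fa:12:34:56", 1)
assert not is_allowed("10:00:00:90:fa:12:34:56", 2)  # masked for this host
```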