
Home Server/RAID/ZFS

Tip: ZFS performs best when you spend time planning and researching your storage pool layout before building it, rather than jumping in and making mistakes.

ZFS is a long-standing, reliable file system and software RAID solution that works on *BSD and Linux. It supports the standard RAID levels as well as RAID-Z.

ZFS needs at least 8GB of RAM. If it is starved for memory, your transfer speeds will suffer; more RAM generally improves the overall performance of your pool. 1GB per TB is a good rule if you are poor, 5GB per TB if you are rich.
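
As a quick illustration, building a basic pool is a single command. This is only a minimal sketch: the pool name "tank" and the disk IDs are placeholders, and ashift=12 assumes drives with 4K sectors.

 # create a two-disk mirror using stable /dev/disk/by-id names (placeholder names)
 zpool create -o ashift=12 tank mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B
 # confirm the layout and health of the new pool
 zpool status tank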


Hardware RAID Cards

Tip: Hardware RAID cards are not compatible with ZFS.

Many enterprise servers come with a hardware RAID card. Even if you disable the RAID features and pass the drives through to the operating system, the card still sits between ZFS and the disks, and ZFS requires direct access to the drives. What you want is a controller running in IT mode, which presents the raw disks to the operating system; on most enterprise RAID cards this usually means flashing IT-mode firmware or swapping the card for a plain HBA.
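
Before building a pool, it's worth sanity-checking that the operating system really sees the raw disks. A rough checklist, assuming a Linux host (device names are examples only):

 lsblk -o NAME,SIZE,MODEL,SERIAL   # every physical disk should appear as its own device
 ls -l /dev/disk/by-id/            # stable names to use when creating the pool
 smartctl -a /dev/sda              # SMART data should be readable; RAID modes often hide it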


ECC RAM

ECC RAM is a type of RAM that can detect and correct the most common kinds of internal data corruption. It is recommended for obvious reasons and normally doesn't carry much of a premium over regular RAM.

ECC RAM isn't strictly required, but is an important component of reliable data storage. Here’s a quote from Matt Ahrens, one of the co-founders of the ZFS project at Sun Microsystems.

There's nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem. If you use UFS, EXT, NTFS, btrfs, etc without ECC RAM, you are just as much at risk as if you used ZFS without ECC RAM. Actually, ZFS can mitigate this risk to some degree if you enable the unsupported ZFS_DEBUG_MODIFY flag. This will checksum the data while at rest in memory, and verify it before writing to disk, thus reducing the window of vulnerability from a memory error.
I would simply say: if you love your data, use ECC RAM. Additionally, use a filesystem that checksums your data, such as ZFS.
—Matt Ahrens, Ars OpenForum


There is endless debate over the requirement of ECC RAM for ZFS. The TL;DR of these innumerable arguments boils down to this:

ECC RAM is not strictly necessary for ZFS, but it is highly recommended. ECC RAM can detect and correct common kinds of memory corruption, which complements ZFS's own data-protection features. Without ECC RAM, certain types of corruption may go undetected and uncorrected, potentially leading to data loss. In a homelab or small office environment with only a few servers, these errors are not much of a concern, but in large enterprise environments with many servers they can accumulate and introduce a noticeable amount of corruption.
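
If you're not sure whether a machine actually has (and is using) ECC RAM, a couple of standard Linux tools can tell you. A rough sketch; edac-util comes from the edac-utils package and needs a kernel with EDAC support:

 dmidecode -t memory | grep -i 'error correction'   # should report something like "Multi-bit ECC"
 edac-util --report=full                            # per-DIMM corrected/uncorrected error counts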

RAID-Z

RAID-Z, a data/parity distribution scheme similar to RAID 5, but using dynamic stripe width: every block is its own RAID stripe, regardless of blocksize, resulting in every RAID-Z write being a full-stripe write. This, when combined with the copy-on-write transactional semantics of ZFS, eliminates the write hole error. RAID-Z can also detect and correct silent data corruption, offering "self-healing data". RAID-Z does not require any special hardware, such as NVRAM for reliability, or write buffering for performance.
—Wikipedia, Non-standard RAID levels


RAID-Z has a number of benefits as well as drawbacks. Do your research and determine if RAID-Z is suitable for your use case.
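
For reference, a RAID-Z pool is created the same way as any other vdev layout; only the keyword changes (raidz, raidz2, raidz3 for single, double, and triple parity). A sketch with a hypothetical six-disk RAID-Z2 pool named "tank":

 zpool create -o ashift=12 tank raidz2 \
     /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B /dev/disk/by-id/ata-DISK_C \
     /dev/disk/by-id/ata-DISK_D /dev/disk/by-id/ata-DISK_E /dev/disk/by-id/ata-DISK_F
 zpool status tank   # shows the raidz2 vdev and all member disks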

L2ARC

ZFS uses two types of caching mechanisms to improve performance: ARC (Adaptive Replacement Cache) and L2ARC (Level 2 Adaptive Replacement Cache).

ARC is the primary cache in ZFS. It is stored in the system's main memory (RAM). The ARC dynamically adjusts the size of the cache to optimize the hit rate. It uses a sophisticated algorithm to predict which data blocks are likely to be accessed again, and keeps them in the cache. The ARC is very fast because it is in RAM, but its size is limited by the amount of available RAM.
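
On Linux with OpenZFS, the ARC defaults to using up to roughly half of system RAM, and its ceiling can be capped with the zfs_arc_max module parameter (value in bytes). A sketch for an 8 GiB cap; the numbers are just an example:

 # /etc/modprobe.d/zfs.conf
 options zfs zfs_arc_max=8589934592

 # check the current limit and usage without rebooting
 cat /sys/module/zfs/parameters/zfs_arc_max
 awk '/^(size|c_max) / {print $1, $3}' /proc/spl/kstat/zfs/arcstats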

L2ARC is a secondary cache in ZFS. It is used when the ARC is full and more caching is needed. The L2ARC is typically stored on a fast SSD. While it is slower than the ARC because it is not in RAM, it is still much faster than accessing the data from a hard drive. The L2ARC can be much larger than the ARC, because SSDs can have much larger capacities than RAM.

The main drawback of L2ARC is that it can actually decrease performance in certain scenarios. This is because when an L2ARC is present, ZFS tends to move data from the faster ARC to the slower L2ARC. This means that data which could have been served from the faster ARC is now being served from the slower L2ARC, which can result in slower read times.

The more L2ARC you have, the less RAM is available for the ARC. This can lead to a situation where important data is evicted from the ARC to make room for the L2ARC index, which can further degrade performance.

In general, it's often better to invest in more RAM for the ARC rather than setting up an L2ARC, as the ARC provides faster access to data and does not have these drawbacks.

However, there may be specific scenarios where an L2ARC can be beneficial, such as very large data sets that do not fit into the ARC and are accessed infrequently enough to be evicted from the ARC but not from the L2ARC. That said, if you think you need an L2ARC, the answer is probably "no".
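
If you do decide to experiment, a cache device can be attached to and detached from an existing pool at any time without affecting your data. A sketch, assuming a pool named "tank" and a spare NVMe drive (names are placeholders):

 zpool add tank cache /dev/disk/by-id/nvme-CACHE_SSD    # attach the SSD as an L2ARC device
 zpool iostat -v tank                                   # the cache device appears as its own row
 zpool remove tank /dev/disk/by-id/nvme-CACHE_SSD       # detach it again if it doesn't help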

SLOG

SLOG (Separate Intent LOG) is a dedicated device used by the ZFS file system to store data (the ZFS Intent Log [ZIL]) before it's written to the main storage pool. The SLOG is designed to improve the speed and efficiency of write operations.

The SLOG is beneficial in situations where synchronous write operations are frequently performed. Synchronous writes require each write operation to be confirmed as successful before the next one can begin. This can slow down the overall system performance. By using a SLOG, these write operations can be completed more quickly, as they're first written to the high-speed SLOG and then transferred to the main storage pool in the background.

This is particularly useful in environments with databases and virtual machines, or any application that requires a high number of Input/Output operations per second (IOPS). It can also be beneficial in a power loss situation, as the SLOG can help prevent data loss.

Data is read from the SLOG only in the event of a system crash or power failure. In normal operation, when data is successfully written to the main storage pool, the corresponding entries in the SLOG are discarded and the space is freed up for new write operations. Therefore, under normal conditions, data is never read from the SLOG.

When a synchronous write request is made, ZFS writes the data to the SLOG first. The data is then asynchronously written to the storage pool from the RAM, not the SLOG. The SLOG only exists to recover data that was not written to the pool.

A SLOG is useful for NFS shares. NFS uses synchronous writes, and since each write operation must be confirmed as successful before the next one can begin, this can slow down the performance of the share.
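
Whether a SLOG helps at all depends on how much synchronous write traffic a dataset actually sees, which is governed by the sync property. A sketch, with "tank/share" as a placeholder dataset name:

 zfs get sync tank/share           # "standard" honors application/NFS sync requests
 zfs set sync=always tank/share    # force every write through the ZIL/SLOG (safest, slowest)
 zfs set sync=standard tank/share  # back to the default behavior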

SLOG Hardware Suggestions

Tip: It's good practice to mirror your SLOG devices.

Intel Optane SSDs are an affordable choice for a SLOG device.
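
Adding a log vdev to an existing pool is a single command, and the mirroring mentioned above works the same way as for data vdevs. A sketch with two hypothetical Optane devices:

 zpool add tank log mirror /dev/disk/by-id/nvme-OPTANE_A /dev/disk/by-id/nvme-OPTANE_B
 zpool status tank   # the mirrored "logs" vdev should now be listed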

ZFS Storage Overhead

Also known as "slop space", this is extra storage space reserved by ZFS to ensure its smooth operation. It is typically 1/32 of the total pool capacity.

The slop space is not available for general storage use, and it helps to prevent the file system from completely filling up, which could lead to data corruption or other issues. It is automatically managed by ZFS and does not require any user intervention.
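
You can see the effect by comparing the raw pool size with the space the filesystem layer reports as usable; the difference includes the slop reservation (along with parity and metadata overhead). On Linux OpenZFS the reservation is controlled by the spa_slop_shift tunable:

 zpool list -p tank                                 # raw pool size in bytes
 zfs list -p tank                                   # USED + AVAIL comes out somewhat smaller
 cat /sys/module/zfs/parameters/spa_slop_shift      # default 5, i.e. 1/32 of the pool reserved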


Useful Reading