Installing and configuring an NFS server


Good afternoon, readers and guests. There was a very long break between posts, but I'm back in action). In today's article I will look at how the NFS protocol works and at setting up an NFS server and NFS client on Linux.

Introduction to NFS

NFS (Network File System) is, in my opinion, an ideal solution for a local network where fast data exchange is needed (faster than SAMBA and less resource-intensive than encrypted remote file systems such as sshfs or SFTP) and the security of the transmitted information is not the top priority. The NFS protocol allows you to mount remote file systems over the network into the local directory tree as if they were mounted disk file systems, so local applications can work with a remote file system as with a local one. But you need to be careful (!) with the NFS configuration, because with certain settings it is possible to freeze the client's operating system waiting for endless I/O. The NFS protocol is built on the RPC protocol, which is still beyond my understanding)) so the material in this article will be a little vague... Before you can use NFS, be it a server or a client, you must make sure that your kernel has support for the NFS file system. You can check this by looking for the corresponding lines in the file /proc/filesystems:

ARCHIV ~ # grep nfs /proc/filesystems
nodev   nfs
nodev   nfs4
nodev   nfsd

If the specified lines do not appear in /proc/filesystems, you need to install the packages described below. This will most likely also install the dependent kernel modules that support the required file systems. If, after installing the packages, NFS support is still not shown in this file, you will need to enable this functionality (recompile the kernel or load the modules, as sketched below).
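A quick check, assuming the stock nfs/nfsd kernel modules are available on your system (module names may differ on custom kernels):

# try to load the NFS client and server modules, then re-check
modprobe nfs
modprobe nfsd
grep nfs /proc/filesystems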

History of the Network File System

The NFS protocol was developed by Sun Microsystems and has had 4 versions in its history. NFSv1 was developed in 1989 and was experimental, running over the UDP protocol; version 1 is described in RFC 1094. NFSv2 was released in the same year, 1989, described by the same RFC 1094 and also based on UDP, while allowing no more than 2 GB of a file to be read. NFSv3 was finalized in 1995 and is described in RFC 1813. The main innovations of the third version were support for large files, support for the TCP protocol and large TCP packets, which significantly accelerated the technology. NFSv4 was finalized in 2000 and described in RFC 3010, revised in 2003 and described in RFC 3530. The fourth version included performance improvements, support for various authentication mechanisms (in particular Kerberos and LIPKEY via the RPCSEC GSS protocol) and access control lists (both POSIX and Windows types). NFS version 4.1 was approved by the IESG in 2010 and received number RFC 5661. An important innovation of version 4.1 is the specification of pNFS (Parallel NFS), a mechanism for parallel NFS client access to data on multiple distributed NFS servers. The presence of such a mechanism in the network file system standard helps build distributed "cloud" storage and information systems.

NFS server

Since NFS is a network file system, you first need a working network (you can also read the article on setting up a network). Next, you need to install the NFS packages. On Debian these are nfs-kernel-server and nfs-common; on RedHat it is nfs-utils. You also need to allow the daemon to run at the required OS runlevels (on RedHat: /sbin/chkconfig nfs on; on Debian: /usr/sbin/update-rc.d nfs-kernel-server defaults).
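For reference, a minimal installation sketch (assuming apt- and yum-based systems; adjust to your distribution):

# Debian/Ubuntu
apt-get install nfs-kernel-server nfs-common
update-rc.d nfs-kernel-server defaults

# RedHat/CentOS
yum install nfs-utils
chkconfig nfs on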

Installed packages in Debian are launched in the following order:

ARCHIV ~ # ls -la /etc/rc2.d/ | grep nfs
lrwxrwxrwx 1 root root 20 Oct 18 15:02 S15nfs-common -> ../init.d/nfs-common
lrwxrwxrwx 1 root root 27 Oct 22 01:23 S16nfs-kernel-server -> ../init.d/nfs-kernel-server

That is, nfs-common starts first, then the server itself, nfs-kernel-server. In RedHat the situation is similar, with the only exception that the first script is called nfslock and the server is called simply nfs. About nfs-common the Debian website tells us verbatim: shared files for the NFS client and server; this package must be installed on a machine that will act as an NFS client or server. The package includes the programs lockd, statd, showmount, nfsstat, gssd and idmapd. Looking at the contents of the startup script /etc/init.d/nfs-common, you can trace the following sequence of work: the script checks for the presence of the executable binary /sbin/rpc.statd, checks the files /etc/default/nfs-common, /etc/fstab and /etc/exports for parameters that require running the idmapd and gssd daemons, starts the /sbin/rpc.statd daemon, then, before launching /usr/sbin/rpc.idmapd and /usr/sbin/rpc.gssd, checks for the presence of those executables; for the /usr/sbin/rpc.idmapd daemon it checks for the presence of sunrpc, nfs and nfsd, as well as support for the rpc_pipefs file system in the kernel (that is, its presence in /proc/filesystems), and if everything succeeds it starts /usr/sbin/rpc.idmapd. Additionally, for the /usr/sbin/rpc.gssd daemon it checks for the rpcsec_gss_krb5 kernel module and starts the daemon.

If you look at the NFS server startup script on Debian (/etc/init.d/nfs-kernel-server), you can follow this sequence: at startup the script checks for the existence of /etc/exports, the presence of nfsd and NFS file system support in the kernel (that is, in /proc/filesystems); if everything is in place it starts the /usr/sbin/rpc.nfsd daemon, then checks whether the NEED_SVCGSSD parameter is set (in the server settings file /etc/default/nfs-kernel-server) and, if it is, starts the /usr/sbin/rpc.svcgssd daemon, and finally launches /usr/sbin/rpc.mountd. From this script it is clear that NFS server operation consists of the daemons rpc.nfsd and rpc.mountd and, if Kerberos authentication is used, the rpc.svcgssd daemon. In RedHat the rpc.rquotad and nfslogd daemons also run (for some reason I did not find information in Debian about this daemon and the reasons for its absence; apparently it was removed...).

From this it becomes clear that the Network File System server consists of the following processes (read: daemons), located in the /sbin and /usr/sbin directories:

  • rpc.statd - the network status monitor daemon (NSM), used to recover file locks after a restart; runs on both client and server.
  • lockd (nlockmgr) - the lock manager (NLM) that handles file locking requests.
  • rpc.nfsd - the main NFS server daemon that serves client requests.
  • rpc.mountd - the daemon that handles mount requests from clients.
  • rpc.idmapd - the NFSv4 daemon that maps user/group names to UIDs/GIDs and back.
  • rpc.rquotad and nfslogd (on RedHat) - the remote quota server and the NFS logging daemon.

In NFSv4, when using Kerberos, additional daemons are started:

  • rpc.gssd- The NFSv4 daemon provides authentication methods via GSS-API (Kerberos authentication). Works on client and server.
  • rpc.svcgssd- NFSv4 server daemon that provides server-side client authentication.

portmap and RPC protocol (Sun RPC)

In addition to the packages above, NFSv2 and v3 require the additional package portmap (replaced in newer distributions by rpcbind). This package is usually installed automatically together with NFS as a dependency and implements the RPC server, that is, it is responsible for dynamically assigning ports to certain services registered with the RPC server. Literally, according to the documentation, it is a server that converts RPC (Remote Procedure Call) program numbers into TCP/UDP port numbers. portmap operates on several entities: RPC calls or requests, TCP/UDP ports, protocol version (tcp or udp), program numbers and program versions. The portmap daemon is launched by the /etc/init.d/portmap script before the NFS services start.

In short, the job of an RPC (Remote Procedure Call) server is to process RPC calls (so-called RPC procedures) from local and remote processes. Using RPC calls, services register themselves with, or remove themselves from, the port mapper (aka portmap, aka portmapper, aka, in new versions, rpcbind), and clients use RPC calls to ask the port mapper for the information they need. User-friendly names of program services and their corresponding numbers are defined in the /etc/rpc file. As soon as a service has sent the corresponding request and registered itself with the RPC server (port mapper), the RPC server records the TCP and UDP ports on which the service is listening, together with the service name, its unique program number (according to /etc/rpc), the protocol and the service version, and provides this information to clients on request. The port mapper itself has program number 100000, version number 2, TCP port 111 and UDP port 111. Above, when listing the NFS server daemons, I gave the main RPC program numbers. I've probably confused you a little with this paragraph, so here is the essential point: the main function of the port mapper is to return, at the request of a client that supplies an RPC program number and version, the port on which the requested program is running. Accordingly, if a client needs to reach RPC with a specific program number, it must first contact the portmap process on the server machine and determine the port number for communicating with the RPC service it needs.
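For illustration, an abridged excerpt from /etc/rpc (the exact contents vary slightly between distributions) showing the program numbers that appear in the rpcinfo output further below:

portmapper      100000  portmap sunrpc rpcbind
nfs             100003  nfsprog
mountd          100005  mount showmount
nlockmgr        100021
status          100024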

The operation of an RPC server can be represented by the following steps:

  1. The port converter should start first, usually when the system boots. This creates a TCP endpoint and opens TCP port 111. It also creates a UDP endpoint that waits for a UDP datagram to arrive on UDP port 111.
  2. At startup, a program running through an RPC server creates a TCP endpoint and a UDP endpoint for each supported version of the program. (An RPC server can support multiple versions. The client specifies the required version when making the RPC call.) A dynamically assigned port number is assigned to each version of the service. The server logs each program, version, protocol, and port number by making the appropriate RPC call.
  3. When the RPC client program needs to obtain the necessary information, it calls the port resolver routine to obtain a dynamically assigned port number for the specified program, version, and protocol.
  4. In response to this request, the server returns a port number.
  5. The client sends an RPC call message to the port number obtained in step 4. If UDP is used, the client simply sends a UDP datagram containing the RPC call message to the UDP port on which the requested service is running; in response, the service sends a UDP datagram containing the RPC reply message. If TCP is used, the client performs an active open to the TCP port of the desired service and then sends an RPC call message over the established connection; the server responds with an RPC reply message over the same connection.

To obtain information from the RPC server, use the rpcinfo utility. With the -p host option it displays a list of all RPC programs registered on host; without a host it displays the services on localhost. Example:

ARCHIV ~ # rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  59451  status
    100024    1   tcp  60872  status
    100021    1   udp  44310  nlockmgr
    100021    3   udp  44310  nlockmgr
    100021    4   udp  44310  nlockmgr
    100021    1   tcp  44851  nlockmgr
    100021    3   tcp  44851  nlockmgr
    100021    4   tcp  44851  nlockmgr
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100005    1   udp  51306  mountd
    100005    1   tcp  41405  mountd
    100005    2   udp  51306  mountd
    100005    2   tcp  41405  mountd
    100005    3   udp  51306  mountd
    100005    3   tcp  41405  mountd

As you can see, rpcinfo displays (in columns from left to right) the registered program number, version, protocol, port and name. Using rpcinfo you can remove a program's registration or get information about a specific RPC service (more options in man rpcinfo). As you can see, portmapper daemons version 2 are registered on udp and tcp ports, rpc.statd version 1 on udp and tcp ports, NFS lock manager versions 1,3,4, nfs server daemon version 2,3,4, as well as the mount daemon versions 1,2,3.

The NFS server (more precisely, the rpc.nfsd daemon) receives requests from the client in the form of UDP datagrams on port 2049. Although NFS works with a port resolver, which allows the server to use dynamically assigned ports, UDP port 2049 is hardcoded to NFS in most implementations .

Network File System Protocol Operation

Mounting remote NFS

The process of mounting a remote NFS file system can be represented by the following diagram:

Description of the NFS protocol when mounting a remote directory:

  1. An RPC server is started on the server and on the client (usually at boot), handled by the portmapper process and registered on ports tcp/111 and udp/111.
  2. Services are started (rpc.nfsd, rpc.statd, etc.) that register with the RPC server and are bound to arbitrary network ports (unless a static port is specified in the service settings).
  3. The mount command on the client computer sends the kernel a request to mount a network directory, specifying the file system type, the host and the directory itself; the kernel forms and sends an RPC request to the portmap process on the NFS server on port udp/111 (unless the option to work over tcp is set on the client).
  4. The NFS server kernel queries RPC for the presence of the rpc.mountd daemon and returns to the client kernel the network port on which the daemon is running.
  5. mount sends an RPC request to the port on which rpc.mountd is running. The NFS server can now validate the client based on its IP address and port number to decide whether this client may mount the specified file system.
  6. The mount daemon returns a description of the requested file system (a file handle).
  7. The client's mount command issues the mount system call to associate the file handle obtained in step 6 with the local mount point on the client's host. The file handle is stored in the client's NFS code, and from now on any access by user processes to files on the server's file system will use the file handle as a starting point.

Communication between client and NFS server

A typical access to a remote file system can be described as follows:

Description of the process of accessing a file located on an NFS server:

  1. The client (user process) does not care whether it is accessing a local file or an NFS file. The kernel interacts with hardware through kernel modules or built-in system calls.
  2. Kernel module kernel/fs/nfs/nfs.ko, which performs the functions of an NFS client, sends RPC requests to the NFS server via the TCP/IP module. NFS typically uses UDP, however newer implementations may use TCP.
  3. The NFS server receives requests from the client as UDP datagrams on port 2049. Although NFS can work with a port resolver, which allows the server to use dynamically assigned ports, UDP port 2049 is hard-coded to NFS in most implementations.
  4. When the NFS server receives a request from a client, it is passed to a local file access routine, which provides access to the local disk on the server.
  5. The result of the disk access is returned to the client.

Setting up an NFS server

Setting up the server generally consists of specifying, in the /etc/exports file, the local directories that remote systems are allowed to mount. This action is called exporting a directory hierarchy. The main sources of information about exported directories are the following files:

  • /etc/exports- the main configuration file that stores the configuration of the exported directories. Used when starting NFS and by the exportfs utility.
  • /var/lib/nfs/xtab- contains a list of directories mounted by remote clients. Used by the rpc.mountd daemon when a client attempts to mount a hierarchy (a mount entry is created).
  • /var/lib/nfs/etab- a list of directories that can be mounted by remote systems, indicating all the parameters of the exported directories.
  • /var/lib/nfs/rmtab- a list of directories that are not currently unexported.
  • /proc/fs/nfsd- a special file system (kernel 2.6) for managing the NFS server.
    • exports- a list of active exported hierarchies and clients to whom they were exported, as well as parameters. The kernel gets this information from /var/lib/nfs/xtab.
    • threads- contains the number of threads (can also be changed)
    • using filehandle you can get a pointer to a file
    • and so on...
  • /proc/net/rpc- contains “raw” statistics, which can be obtained using nfsstat, as well as various caches.
  • /var/run/portmap_mapping- information about services registered in RPC

Note: in general there are many different interpretations and formulations of the purpose of the xtab, etab and rmtab files on the Internet, and I don't know whom to believe. Even on http://nfs.sourceforge.net/ the interpretation is not unambiguous.

Setting up the /etc/exports file

In the simplest case, the /etc/exports file is the only file that requires editing to configure the NFS server. This file controls the following aspects:

  • Which clients can access files on the server
  • Which directory hierarchies on the server each client can access
  • How remote user names are mapped to local user names

Each line of the exports file has the following format:

export_point client1(options) [client2(options) ...]

where export_point is the absolute path of the exported directory hierarchy; client1 .. clientN are the names or IP addresses of one or more clients, separated by spaces, that are allowed to mount export_point; options describe the mount rules for the client listed immediately before them.

Here's a typical one exports file configuration example:

ARCHIV ~ # cat /etc/exports
/archiv1  files(rw,sync) 10.0.0.1(ro,sync) 10.0.230.1/24(ro,sync)

In this example, the host files, the host 10.0.0.1 and the subnet 10.0.230.1/24 are allowed access to the export point /archiv1: files has read/write access, while 10.0.0.1 and the subnet 10.0.230.1/24 have read-only access.

Host descriptions in /etc/exports are allowed in the following format:

  • The names of individual nodes are described as files or files.DOMAIN.local.
  • A domain mask is described in the following format: *.DOMAIN.local includes all nodes of the DOMAIN.local domain.
  • Subnets are specified as IP address/mask pairs. For example: 10.0.0.0/255.255.255.0 includes all nodes whose addresses begin with 10.0.0.
  • Specifying the name of the @myclients network group that has access to the resource (when using an NIS server)

General options for exporting directory hierarchies

The exports file uses the following general options (options used by default on most systems are listed first, with the non-default alternatives in brackets); a combined example follows the list:

  • auth_nlm (no_auth_nlm) or secure_locks (insecure_locks)- specifies that the server should require authentication of lock requests (using the NFS Lock Manager protocol).
  • nohide (hide)- applies when the server exports two directory hierarchies, one of which is nested (mounted) within the other. The client has to mount the second (child) hierarchy explicitly, otherwise the child hierarchy's mount point appears as an empty directory. The nohide option makes the second hierarchy visible without an explicit mount. (note: I couldn't get this option to work...)
  • ro(rw)- Allows only read (write) requests. (Ultimately, whether it is possible to read/write or not is determined based on file system rights, and the server is not able to distinguish a request to read a file from a request to execute, so it allows reading if the user has read or execute rights.)
  • secure (insecure)- requires NFS requests to come from privileged ports (< 1024), so that a program without root privileges cannot mount the directory hierarchy.
  • subtree_check (no_subtree_check)- If a subdirectory of the file system is exported, but not the entire file system, the server checks whether the requested file is in the exported subdirectory. Disabling verification reduces security but increases data transfer speed.
  • sync (async)- specifies that the server should respond to requests only after the changes made by those requests have been written to disk. The async option tells the server not to wait for information to be written to disk, which improves performance but reduces reliability because In the event of a connection break or equipment failure, information may be lost.
  • wdelay (no_wdelay)- instructs the server to delay executing write requests if a subsequent write request is pending, writing data in larger blocks. This improves performance when sending large queues of write commands. no_wdelay specifies not to delay execution of a write command, which can be useful if the server receives a large number of unrelated commands.
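To tie the host formats and the options together, here is a sketch of a single /etc/exports line combining several of them (the path, host names and the @myclients netgroup are illustrative):

/archiv1  files(rw,sync,no_subtree_check)  *.DOMAIN.local(ro,async,all_squash)  10.0.230.0/255.255.255.0(ro,sync)  @myclients(rw,sync,root_squash)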

Exporting symbolic links and device files. When exporting a directory hierarchy containing symbolic links, the link target must be accessible from the client (remote) system; since symbolic links are resolved on the client side, that means either the target must lie within an exported and mounted hierarchy, or the same path must exist locally on the client.

A device file refers to a kernel interface, and when you export a device file, it is this interface that is exported. If the client system does not have a device of the same type, the exported device file will not work. On the client system, when mounting NFS objects, you can use the nodev option so that device files in the mounted directories cannot be used.

The default options may vary between systems and can be found in /var/lib/nfs/etab. After describing the exported directory in /etc/exports and restarting the NFS server, all missing options (read: default options) will be reflected in the /var/lib/nfs/etab file.

User ID display (matching) options

For a better understanding of what follows, I would advise you to read the article. Each Linux user has a UID and a primary GID, which are described in the files /etc/passwd and /etc/group. The NFS server assumes that the remote host's operating system has already authenticated the users and assigned them the correct UID and GID. Exporting files gives users of the client system the same access to those files as if they were logged in directly on the server. Accordingly, when an NFS client sends a request to the server, the server uses the UID and GID to identify the user on the local system, which can lead to some problems:

  • a user may not have the same identifiers on both systems and therefore may be able to access another user's files.
  • since the root user's ID is always 0, this user is mapped to a local user depending on the specified options.

The following options set the rules for displaying remote users in local ones:

  • root_squash (no_root_squash)- With the option specified root_squash, requests from the root user are mapped to the anonymous uid/gid, or to the user specified in the anonuid/anongid parameter.
  • no_all_squash (all_squash)- Does not change the UID/GID of the connecting user. Option all_squash sets the display of ALL users (not just root) as anonymous or specified in the anonuid/anongid parameter.
  • anonuid=UID and anongid=GID - Explicitly set the UID/GID for the anonymous user.
  • map_static= /etc/file_maps_users - Specifies a file in which you can set the mapping of remote UID/GID to local UID/GID.

Example of using a user mapping file:

ARCHIV ~ # cat /etc/file_maps_users
# User mapping
# remote    local    comment
uid 0-50    1002     # mapping users with remote UID 0-50 to local UID 1002
gid 0-50    1002     # mapping users with remote GID 0-50 to local GID 1002

NFS Server Management

The NFS server is managed using the following utilities:

  • nfsstat
  • showmount
  • exportfs

nfsstat: NFS and RPC statistics

The nfsstat utility allows you to view statistics of RPC and NFS servers. The command options can be found in man nfsstat.
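A few typical invocations, as a sketch (output omitted):

nfsstat -s     # server-side NFS and RPC statistics
nfsstat -c     # client-side statistics
nfsstat -r     # RPC statistics only
nfsstat -m     # list of mounted NFS file systems and their mount options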

showmount: Display NFS status information

The showmount utility queries the rpc.mountd daemon on a remote host about mounted file systems. By default it returns a sorted list of clients. Options:

  • --all- a list of clients and mount points is displayed indicating where the client mounted the directory. This information may not be reliable.
  • --directories- a list of mount points is displayed
  • --exports- a list of exported file systems is displayed from the point of view of nfsd

When you run showmount without arguments, it prints to the console information about the systems that are allowed to mount the local directories. For example, the ARCHIV host gives us a list of exported directories together with the IP addresses of the hosts that are allowed to mount them:

FILES ~ # showmount --exports archiv
Export list for archiv:
/archiv-big   10.0.0.2
/archiv-small 10.0.0.2

If you specify the hostname/IP in the argument, information about this host will be displayed:

ARCHIV ~ # showmount files
clnt_create: RPC: Program not registered
# this message tells us that the NFSd daemon is not running on the FILES host

exportfs: manage exported directories

This command manages the exported directories listed in /etc/exports; more precisely, it does not so much manage them as synchronize them with the /var/lib/nfs/xtab file and remove non-existent entries from xtab. exportfs is executed when the nfsd daemon starts, with the -r argument. On 2.6 kernels the exportfs utility communicates with the rpc.mountd daemon through files in the /var/lib/nfs/ directory and does not talk to the kernel directly. Without parameters it displays a list of the currently exported file systems; a few typical invocations are shown after the parameter list below.

exportfs parameters:

  • [client:directory-name] - add or remove the specified file system for the specified client
  • -v - display more information
  • -r - re-export all directories (synchronize /etc/exports and /var/lib/nfs/xtab)
  • -u - remove from the list of exported
  • -a - add or remove all file systems
  • -o - options separated by commas (similar to the options used in /etc/exports; i.e. you can change the options of already mounted file systems)
  • -i - do not use /etc/exports when adding, only current command line options
  • -f - reset the list of exported systems in kernel 2.6;
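A few typical invocations (the /archiv1 path and client address are taken from the earlier example):

exportfs -v                            # show everything currently exported, with options
exportfs -ra                           # re-read /etc/exports and re-export all directories
exportfs -o ro,sync 10.0.0.1:/archiv1  # export /archiv1 to 10.0.0.1 without editing /etc/exports
exportfs -u 10.0.0.1:/archiv1          # un-export /archiv1 for 10.0.0.1
exportfs -ua                           # un-export everything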

NFS client

Before accessing a file on a remote file system, the client (the client OS) must mount it and receive a pointer to it from the server. An NFS mount can be done using the mount command or with one of the proliferating automatic mounters (amd, autofs, automount, supermount, superpupermount). The mounting process is well demonstrated in the illustration above.

On NFS clients no daemons need to be run; the client functions are performed by the kernel module kernel/fs/nfs/nfs.ko, which is used when mounting a remote file system. Exported directories from the server can be mounted on the client in the following ways:

  • manually using the mount command
  • automatically at boot, when mounting file systems described in /etc/fstab
  • automatically using the autofs daemon

I will not consider the third method, autofs, in this article, due to the volume of information involved. Perhaps there will be a separate description in a future article.

Mounting the Network File System with the mount command

An example of using the mount command is presented in the post. Here I will look at an example of the mount command for mounting an NFS file system:

FILES ~ # mount -t nfs archiv:/archiv-small /archivs/archiv-small
FILES ~ # mount -t nfs -o ro archiv:/archiv-big /archivs/archiv-big
FILES ~ # mount
.......
archiv:/archiv-small on /archivs/archiv-small type nfs (rw,addr=10.0.0.6)
archiv:/archiv-big on /archivs/archiv-big type nfs (ro,addr=10.0.0.6)

The first command mounts the exported directory /archiv-small on the server archiv to the local mount point /archivs/archiv-small with default options (i.e. read/write). Although the mount command in recent distributions can figure out the file system type even without it being specified, it is still desirable to pass -t nfs. The second command mounts the exported directory /archiv-big on the server archiv to the local directory /archivs/archiv-big read-only (ro). The mount command without parameters clearly shows us the result. In addition to the read-only option (ro), other basic options can be specified when mounting NFS (a combined example follows the list):

  • nosuid- This option prohibits executing programs from the mounted directory.
  • nodev(no device - not a device) - This option prohibits the use of character and block special files as devices.
  • lock (nolock)- Allows NFS locking (default). nolock disables NFS locking (does not start the lockd daemon) and is useful when working with older servers that do not support NFS locking.
  • mounthost=name- The name of the host on which the NFS mount daemon is running - mountd.
  • mountport=n - Port used by the mountd daemon.
  • port=n- port used to connect to the NFS server (default is 2049 if the rpc.nfsd daemon is not registered on the RPC server). If n=0 (default), then NFS queries the portmap on the server to determine the port.
  • rsize=n(read block size - read block size) - The number of bytes read at a time from the NFS server. Standard - 4096.
  • wsize=n(write block size - write block size) - The number of bytes written at a time to the NFS server. Standard - 4096.
  • tcp or udp- To mount NFS, use the TCP or UDP protocol, respectively.
  • bg- If you lose access to the server, try again in the background so as not to block the system boot process.
  • fg- If you lose access to the server, try again in priority mode. This option can block the system boot process by repeating mount attempts. For this reason, the fg parameter is used primarily for debugging.

Options affecting attribute caching on NFS mounts

File attributes stored in inodes, such as modification time, size, hard links and owner, typically change infrequently for regular files and even less frequently for directories. Many programs, such as ls, access files read-only and do not change file attributes or content, yet waste system resources on expensive network operations. To avoid wasting resources, these attributes can be cached. The kernel uses a file's modification time to determine whether the cache is out of date, by comparing the modification time in the cache with the modification time of the file itself. The attribute cache is periodically refreshed according to the following parameters (a small example follows the list):

  • ac (noac) (attribute cache- attribute caching) - Allows attribute caching (default). Although the noac option slows down the server, it avoids stale attributes when multiple clients are actively writing information to a common hierarchy.
  • acdirmax=n (attribute cache directory file maximum- maximum attribute caching for a directory file) - The maximum number of seconds that NFS waits before updating directory attributes (default 60 sec.)
  • acdirmin=n (attribute cache directory file minimum- minimum attribute caching for a directory file) - Minimum number of seconds that NFS waits before updating directory attributes (default 30 sec.)
  • acregmax=n (attribute cache regular file maximum- attribute caching maximum for a regular file) - The maximum number of seconds that NFS waits before updating the attributes of a regular file (default 60 sec.)
  • acregmin=n (attribute cache regular file minimum- minimum attribute caching for a regular file) - Minimum number of seconds that NFS waits before updating the attributes of a regular file (default 3 seconds)
  • actimeo=n (attribute cache timeout- attribute caching timeout) - Overrides the values of all the above options. If actimeo is not specified, the above options take their default values.
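For example, to relax or disable attribute caching on a mount (the host and paths are the ones used earlier; the two commands are alternatives):

FILES ~ # mount -t nfs -o actimeo=120 archiv:/archiv-small /archivs/archiv-small   # cache attributes for up to 2 minutes
FILES ~ # mount -t nfs -o noac archiv:/archiv-small /archivs/archiv-small          # disable attribute caching entirely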

NFS Error Handling Options

The following options control what NFS does when there is no response from the server or when I/O errors occur:

  • fg(bg) (foreground- foreground, background- background) - Attempts to mount a failed NFS in the foreground/background.
  • hard (soft)- displays the message "server not responding" to the console when the timeout is reached and continues to attempt to mount. With option given soft- during a timeout, informs the program that called the operation about an I/O error. (it is recommended not to use the soft option)
  • nointr (intr) (no interrupt- do not interrupt) - Does not allow signals to interrupt file operations in a hard-mounted directory hierarchy when a large timeout is reached. intr- enables interruption.
  • retrans=n (retransmission value- retransmission value) - After n small timeouts, NFS generates a large timeout (default 3). A large timeout stops operations or prints a "server not responding" message to the console, depending on whether the hard/soft option is specified.
  • retry=n (retry value- retry value) - The number of minutes the NFS service will repeat mount operations before giving up (default 10000).
  • timeo=n (timeout value- timeout value) - The number of tenths of a second the NFS service waits before retransmitting in case of RPC or a small timeout (default 7). This value increases with each timeout up to a maximum of 60 seconds or until a large timeout occurs. If the network is busy, the server is slow, or the request is going through multiple routers or gateways, increasing this value may improve performance.

Automatic NFS mount at boot (description of file systems in /etc/fstab)
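A minimal /etc/fstab sketch for mounting at boot, reusing the archiv host and mount points from the earlier examples:

# device               mount point            fs    options                                 dump fsck
archiv:/archiv-small   /archivs/archiv-small  nfs   rw,hard,intr,bg,rsize=8192,wsize=8192   0    0
archiv:/archiv-big     /archivs/archiv-big    nfs   ro,hard,intr,bg                         0    0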

You can select the optimal timeo for a specific value of the transmitted packet (rsize/wsize values) using the ping command:

FILES ~ # ping -s 32768 archiv
PING archiv.DOMAIN.local (10.0.0.6) 32768(32796) bytes of data.
32776 bytes from archiv.domain.local (10.0.0.6): icmp_req=1 ttl=64 time=0.931 ms
32776 bytes from archiv.domain.local (10.0.0.6): icmp_req=2 ttl=64 time=0.958 ms
32776 bytes from archiv.domain.local (10.0.0.6): icmp_req=3 ttl=64 time=1.03 ms
32776 bytes from archiv.domain.local (10.0.0.6): icmp_req=4 ttl=64 time=1.00 ms
32776 bytes from archiv.domain.local (10.0.0.6): icmp_req=5 ttl=64 time=1.08 ms
^C
--- archiv.DOMAIN.local ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 0.931/1.002/1.083/0.061 ms

As you can see, when sending a packet of size 32768 (32Kb), its travel time from the client to the server and back floats around 1 millisecond. If this time exceeds 200 ms, then you should think about increasing the timeo value so that it exceeds the exchange value by three to four times. Accordingly, it is advisable to do this test during heavy network load.

Launching NFS and setting up Firewall

The note was copied from the blog http://bog.pp.ru/work/NFS.html, for which many thanks!!!

Running the NFS, mount, lock, quota and status servers on "correct" (fixed) ports (for the firewall)

  • It is advisable to first unmount all resources on clients
  • stop and disable rpcidmapd from starting if you do not plan to use NFSv4:
    chkconfig --level 345 rpcidmapd off
    service rpcidmapd stop
  • if necessary, allow the portmap, nfs and nfslock services to start:
    chkconfig --levels 345 portmap/rpcbind on
    chkconfig --levels 345 nfs on
    chkconfig --levels 345 nfslock on
  • if necessary, stop the nfslock and nfs services, start portmap/rpcbind, and unload the modules:
    service nfslock stop
    service nfs stop
    service portmap start    # service rpcbind start
    umount /proc/fs/nfsd
    service rpcidmapd stop
    rmmod nfsd
    service autofs stop      # it will have to be started again later
    rmmod nfs
    rmmod nfs_acl
    rmmod lockd
  • open ports in the firewall (an iptables sketch follows this list):
    • for RPC: UDP/111, TCP/111
    • for NFS: UDP/2049, TCP/2049
    • for rpc.statd: UDP/4000, TCP/4000
    • for lockd: UDP/4001, TCP/4001
    • for mountd: UDP/4002, TCP/4002
    • for rpc.rquota: UDP/4003, TCP/4003
  • for the rpc.nfsd server, add the line RPCNFSDARGS="--port 2049" to /etc/sysconfig/nfs
  • for the mount server, add the line MOUNTD_PORT=4002 to /etc/sysconfig/nfs
  • to configure rpc.rquota for new versions, you need to add the line RQUOTAD_PORT=4003 to /etc/sysconfig/nfs
  • to configure rpc.rquotad on older versions (you must have quota package 3.08 or newer), add to /etc/services:
    rquotad 4003/tcp
    rquotad 4003/udp
  • check that /etc/exports is sane
  • start the rpc.nfsd, mountd and rpc.rquotad services (rpcsvcgssd and rpc.idmapd will start at the same time if you forgot to remove them): service nfsd start, or in newer versions service nfs start
  • for the locking server on new systems, add the following lines to /etc/sysconfig/nfs:
    LOCKD_TCPPORT=4001
    LOCKD_UDPPORT=4001
  • for the lock server for older systems, add directly to /etc/modprobe[.conf]: options lockd nlm_udpport=4001 nlm_tcpport=4001
  • bind the rpc.statd status server to port 4000 by adding STATD_PORT=4000 to /etc/sysconfig/nfs (for older systems, run rpc.statd with the -p 4000 switch in /etc/init.d/nfslock)
  • start the lockd and rpc.statd services: service nfslock start
  • make sure that all ports are bound normally using "lsof -i -n -P" and "netstat -a -n" (some of the ports are used by kernel modules that lsof does not see)
  • if before the “rebuilding” the server was used by clients and they could not be unmounted, then you will have to restart the automatic mounting services on the clients (am-utils, autofs)
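A minimal iptables sketch for the ports fixed above (assuming you also restrict by source address or interface elsewhere in your rule set):

# RPC portmapper, NFS, statd, lockd, mountd, rquotad
for p in 111 2049 4000 4001 4002 4003; do
    iptables -A INPUT -p tcp --dport $p -j ACCEPT
    iptables -A INPUT -p udp --dport $p -j ACCEPT
done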

Example NFS server and client configuration

Server configuration

If you want to make your shared NFS directory public and writable, you can use the all_squash option in combination with the anonuid and anongid options. For example, to set permissions for the user "nobody" in the group "nobody", you could do the following:

ARCHIV ~ # cat /etc/exports
# Read and write access for client on 192.168.0.100, with rw access for user 99 with gid 99
/files 192.168.0.100(rw,sync,all_squash,anonuid=99,anongid=99)

This also means that if you want to allow access to the specified directory, nobody:nobody must own the shared directory:
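The command to do that, assuming /files from the example above is the shared directory, would be something like:

chown nobody:nobody /files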

man mount
man exports
http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.prftungd/doc/prftungd/nfs_perf.htm - NFS performance from IBM.

Best regards, McSim!

When administering Linux-based servers in an environment where Windows is the main client OS, from time to time you have to copy something from a client Windows system to a Linux system, or vice versa. Most often the SSH/SCP protocols are used for this, with tools such as the pscp.exe utility. But when you deal with Linux file servers that offer the NFS protocol, you may ask questions like "can a Windows client OS act as an NFS client?" and "is there some kind of built-in NFS client in the Windows client OS?". These were the questions I had around the time we were moving from Windows 8.1 to the first release of Windows 10. The information I could find at that time said that only the higher-end editions of Windows client operating systems, such as Windows 7 Ultimate/Enterprise, Windows 8/8.1 Enterprise and Windows 10 Enterprise, had an NFS client. In our case, however, the OS was Windows 10 Professional edition, so I had to abandon the idea.

Recently, while reading discussions on the TechNet forums, I came across information that at some point the NFS client functionality became available in the Windows 10 Professional edition. According to some sources, this capability appeared in Windows 10 version 1607 (10.0.14393 / Anniversary Update).

Deciding to check this on what I had at hand, Windows 10 1803 (10.0.17134 / April 2018 Update) Professional edition, I discovered that we do now have the ability to use this functionality.

To enable the NFS client, we can use the Programs and Features applet (appwiz.cpl). In the list of Windows features you will find "Services for NFS" available for installation.

After the installation is complete, the "Services for NFS" snap-in (nfsmgmt.msc) appears in Control Panel under "Administrative Tools"; there we can manage some parameters of the NFS client.

We assume that the permissions for access from the client system are already configured on the NFS server side, for example that access is explicitly allowed for the client's IP address. A simple example of installing and configuring an NFS server on CentOS Linux can be found in the Wiki article "Installing and configuring an NFS server and client in CentOS Linux 7.2".

After setting up access rights on the NFS server side, switch to Windows 10 and connect the network directory using the "mount" utility. The simplest example of an anonymous connection to a network directory looks like this:

mount -o anon \\KOM-FS01\mnt\vdo-vd1\ovirt-iso-domain I:
  • "-o anon" - connect with anonymous user rights;
  • "KOM-FS01" - NFS server name;
  • "mnt\vdo-vd1\ovirt-iso-domain" - local path to the directory on the NFS server;
  • "I" is the Windows drive letter

Other available parameters and switches of the utility can be viewed with the command "mount /?". For example, when connecting we can explicitly specify a user name and password for the NFS server.

When opening the properties of directories and files in a connected NFS directory, we will see a special tab " NFS Attributes" with the appropriate attributes, including information about the current permissions on the directory/file, which, if we have sufficient rights, we can manage.

When running the mount command again without parameters, we get information about the current NFS client connections and their properties:

Here we can see the UID and GID under which the connection was made. For anonymous connections these default to -2/-2. If for some reason we need to change these identifiers for all anonymous client connections, we can add two registry parameters of type DWORD (32-bit) that are missing by default:

  • AnonymousUid
  • AnonymousGid

to the registry key:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ClientForNFS\CurrentVersion\Default

In the values of the created parameters you can write the required UID and GID, which will be used for all anonymous connections. The screenshot below uses an example with the values 1000:1000 (decimal).
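The same parameters can be added from an elevated command prompt, for example (values 1000/1000 as in the screenshot; /d takes a decimal value for REG_DWORD):

reg add HKLM\SOFTWARE\Microsoft\ClientForNFS\CurrentVersion\Default /v AnonymousUid /t REG_DWORD /d 1000 /f
reg add HKLM\SOFTWARE\Microsoft\ClientForNFS\CurrentVersion\Default /v AnonymousGid /t REG_DWORD /d 1000 /f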

If we want all anonymous connections to use root identifiers, then in the corresponding registry parameters we need to set AnonymousUid = 0 and AnonymousGid = 0. Specifying the root identifiers can be useful if, for example, we need not only to read but also to write in the connected NFS directory, and the remote NFS server only allows writing for the root user and/or members of the root group.

For the changes to take effect, you will need to stop and restart the NFS Client service from the previously mentioned Services for NFS snap-in (nfsmgmt.msc).

Or, if restarting the computer is not a problem, then you can restart the client computer for the changes to take effect.

My attempts to restart the "Client for NFS" system service (NfsClnt) through the standard mechanisms, such as the services.msc snap-in or the "net" utility, showed that for some reason this makes it impossible to start the service again after it has been stopped. Therefore it is better to use the native snap-in to restart the NFS client. That said, it was also noticed that repeatedly stopping and starting the service in the "Services for NFS" snap-in can likewise leave the NFS client in a non-working state. As a result, for example, the "mount" utility may stop mounting NFS directories, returning a network error:

In such cases, the only thing that helps is rebooting the client computer, after which everything starts working again.

After the required changes have been made to the registry and the NFS client service has been successfully restarted, we again try to connect the NFS directory and check the connection information with the "mount" command.

As you can see, now the security identifiers are exactly those that we previously specified in the registry.

Disconnecting network resources connected via the NFS protocol is as simple as connecting them, only using another utility - "umount".

In general, it is good that now users of Windows 10 Professional edition have the standard ability to work with network file resources using the NFS protocol. We will use this in our work.

Everyone knows that on UNIX systems, a file system is logically a collection of physical file systems connected to a single point. One of the main advantages of such an organization, in my opinion, is the ability to dynamically modify the structure of an existing file system. Also, thanks to the efforts of the developers, today we have the opportunity to connect a file system of almost any type and in any convenient way. By “method”, I first of all want to emphasize the ability of the OS kernel to work with file systems via network connections.

Many network protocols provide us with the ability to work with remote files, be it FTP, SMB, Telnet or SSH. Thanks to the ability of the kernel to ultimately not depend on the type of file system being connected, we have the ability to connect anything and however we want using the mount program.

Today I would like to talk about NFS - Network File System. This technology allows you to connect individual file system points on a remote computer to the file system of the local computer. The NFS protocol itself allows you to perform file operations quite quickly, safely and reliably. What else do we need? :-)

What is needed for this to work

In order not to rant for a long time on the topic of NFS versions and their support in various kernels, we will immediately make the assumption that your kernel version is not lower than 2.2.18. In the official documentation, the developers promise full support for NFS version 3 functionality in this kernel and later versions.

Installation

To run the NFS server on my Ubuntu 7.10 (the Gutsy Gibbon), I needed to install the nfs-common and nfs-kernel-server packages. If you only need an NFS client, then nfs-kernel-server does not need to be installed.

Server Tuning

After all packages have been successfully installed, you need to check if the NFS daemon is running:

/etc/init.d/nfs-kernel-server status

If the daemon is not running, you need to start it with the command

/etc/init.d/nfs-kernel-server start

After everything has started successfully, you can begin exporting the file system. The process itself is very simple and takes minimal time.

The main NFS server configuration file is located in /etc/exports and has the following format:

Directory machine1(option11,option12) machine2(option21,option22)

directory— absolute path to the FS server directory to which you need to give access

machineX— DNS name or IP address of the client computer from which access is allowed

optionXX— FS export parameters, the most commonly used of them:

  • ro- file access is read-only
  • rw— read/write access is granted
  • no_root_squash— by default, if you connect to an NFS resource as root, the server, for the sake of security, on its side will access files as the nobody user. However, if you enable this option, files on the server side will be accessed as root. Be careful with this option.
  • no_subtree_check— by default, if you export not the entire partition on the server, but only part of the file system, the daemon will check whether the requested file is physically located on the same partition or not. If you are exporting the entire partition or the mount point of the exported file system does not affect files from other physical volumes, then you can enable this option. This will give you an increase in server speed.
  • sync— enable this option if there is a possibility of a sudden connection loss or server power outage. If this option is not enabled, there is a very high risk of data loss if the NFS server suddenly stops.

So, let's say we need to give access to the ashep-desktop computer to the /var/backups directory of the ashep-laptop computer. Directory access is required to copy backup files from ashep-desktop. My file turned out like this:

/var/backups ashep-desktop(rw,no_subtree_check,sync)

After adding the line to /etc/exports, you must restart the NFS server for the changes to take effect.

/etc/init.d/nfs-kernel-server restart

That's all. You can start connecting the exported FS on the client computer.
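Before mounting, you can also check from the client that the export is visible (assuming showmount from nfs-common is installed on the client):

showmount -e ashep-laptop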

Client setup

On the client side, the remote file system is mounted in the same way as all others - with the mount command. Also, no one forbids you to use /etc/fstab if you need to connect the FS automatically when the OS boots. So, the mount option will look like this:

mount -t nfs ashep-laptop:/var/backups/ /mnt/ashep-laptop/backups/

If everything went well and you need to connect to the remote FS automatically at boot, just add the line to /etc/fstab:

ashep-laptop:/var/backups /mnt/ashep-laptop/backups nfs auto 0 0

What else

So we have a practical, tiny overview of the capabilities of NFS. Of course, this is just a small part of what NFS can do. This is enough for use at home or in a small office. If this is not enough for you, I recommend reading first

When there is one main user on the NFS server and one user on the computer acting as the NFS client, who is also in the sudoers list, everything is simple: the NFS share is mounted using sudo, the UID and GID on the NFS server and the NFS client are the same, and there are no problems with read and write permissions.

I had a situation where the NFS client had a regular user without sudo access who had to be able to read and write to the mounted NFS share. Let's call this user reguser. There was also another user on this computer (the NFS client) who had sudo access. Let's call him admuser.

So, I had two tasks:

  1. Make sure that reguser can write to files and directories on the NFS server.
  2. Make it so that reguser can connect and disconnect the NFS partition itself.

How to allow writing to the NFS server by users from an NFS client that has a different UID from the UID of the user who owns the files on the NFS server

Actions are performed on the NFS server as the root user.
Edit /etc/exports:
nano /etc/exports
We insert or change the line that indicates which directory will be accessible (exported) via NFS:

/home/nfs 192.168.1.1/24(rw,async,no_subtree_check,all_squash,anonuid=1000,anongid=1000)

Where:

  • /home/nfs— directory that will be accessible (exported) to the NFS client;
  • 192.168.1.1/24 — IP address or, as in this case, the range of addresses from which you are allowed to connect to NFS;
  • rw— permission to read and write;
  • async— asynchronous mode of operation, in which responses to requests will occur immediately, without waiting for writing to disk. In this case, reliability is lower, however, performance is greater;
  • no_subtree_check- When allowing access to a subdirectory of the file system, rather than the entire file system, the server checks whether the requested file is in the exported subdirectory or not. no_subtree_check disables this check, which reduces security, however, increases data transfer speed;
  • all_squash— this option ensures that any NFS client users will be considered anonymous on the NFS server or those NFS server users whose identifiers are specified in anonuid and anongid;
  • anonuid— OS user identifier on the NFS server. Taken from /etc/passwd. For example, if you need the first non-system user (the one whose login was specified when installing the OS, in my case nfs) and in the file /etc/passwd there is a line " nfs:x:1000:1000:NFS:/home/nfs:/bin/bash» the value for anonuid will be the first number 1000;
  • anongid— OS group identifier on the NFS server. Taken from /etc/group. For example, if you need a group www-data and in the file /etc/group there is a line " www-data:x:33:» the value for anongid will be 33;

If you need to more accurately indicate which users on the NFS client correspond to users on the NFS server, you can enable user mapping by adding the option map_static=/etc/file_maps_users. File /etc/file_maps_users should look like this:

# Mapping users
# remote    local    comment
uid 0-33    1002     # mapping users with remote UID 0-33 to local UID 1002
gid 0-33    1002     # mapping users with remote GID 0-33 to local GID 1002

We restart the nfs daemon and this completes the server setup:
/etc/init.d/nfs-kernel-server restart

How to allow a regular user to mount and unmount an NFS partition

Create a directory in which we will mount:
sudo mkdir /media/nfs

Add to /etc/fstab mounting rule. Open the file:
sudo nano /etc/fstab
Add a rule:
192.168.1.50:/home/nfs /media/nfs nfs rw,noauto,user 0 0
Where:

  • 192.168.1.50 — IP address of the NFS server;
  • /home/nfs— the directory on the NFS server that we are mounting; it must be listed in /etc/exports on the NFS server;
  • /media/nfs— directory on the NFS client in which we mount the NFS partition;
  • nfs— file system type;
  • rw- with the right to write;
  • noauto— an option indicating that the partition does not need to be mounted automatically at boot;
  • user— an option that allows any user to mount and unmount this partition.
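To let reguser mount the share without sudo, a small mount script is used (a minimal sketch; the name ~/nfs.mount matches the chmod command below):
nano ~/nfs.mount
With code:
#!/bin/bash
mount /media/nfs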

To disconnect the NFS share:
nano ~/nfs.umount
With code:
#!/bin/bash
umount /media/nfs

Allow scripts to be executed:
chmod ug+x ~/nfs.mount ~/nfs.umount

And finally, connecting the NFS resource:
~/nfs.mount

Disabling an NFS resource:
~/nfs.umount

That's it, all tasks are completed.

When talking about computer networks, you can often hear mention of NFS. What does this abbreviation mean?

It is a distributed file system protocol originally developed by Sun Microsystems in 1984, allowing a user on a client computer to access files over a network, similar to accessing local storage. NFS, like many other protocols, is based on the Open Network Computing Remote Procedure Call (ONC RPC) system.

In other words, what is NFS? It is an open standard, defined by Request for Comments (RFC), allowing anyone to implement the protocol.

Versions and variations

The inventor used the first version only for his own experimental purposes. When the development team had added significant changes to the original NFS and released it outside Sun, they designated the new version v2, so that interoperability between distributions could be tested and a fallback created.

NFS v2

Version 2 initially worked only over the User Datagram Protocol (UDP). Its developers wanted to keep the server side stateless, with locking implemented outside the core protocol.

The virtual file system interface allows for a modular implementation reflected in a simple protocol. By February 1986, solutions had been demonstrated for operating systems such as System V release 2, DOS and VAX/VMS using Eunice. NFS v2 only allowed the first 2 GB of a file to be read due to 32-bit limitations.

NFS v3

The first proposal to develop NFS version 3 at Sun Microsystems was announced shortly after the release of the second distribution. The main motivation was to try to mitigate the performance problem of synchronous writes. By July 1992, practical improvements had resolved many of the shortcomings of NFS version 2, leaving only the lack of large-file support (64-bit file sizes and offsets). Version 3 added:

  • support for 64-bit file sizes and offsets to handle data larger than 2 gigabytes (GB);
  • support for asynchronous recording on the server to improve performance;
  • additional file attributes in many answers to avoid having to re-fetch them;
  • READDIRPLUS operation to obtain data and attributes along with file names when scanning a directory;
  • many other improvements.

With the introduction of version 3, support for TCP as a transport-layer protocol began to grow. Using TCP as the data transport made NFS over a WAN more feasible and allowed larger transfer sizes for reads and writes, overcoming the 8 KB limit imposed by the User Datagram Protocol (UDP).

What is NFS v4?

Version 4, influenced by the Andrew File System (AFS) and Server Message Block (SMB, also called CIFS), includes performance improvements, provides better security, and introduces a stateful protocol.

Version 4 was the first distribution developed by the Internet Engineering Task Force (IETF) after Sun Microsystems outsourced protocol development.

NFS version 4.1 aims to provide protocol support for leveraging clustered server deployments, including the ability to provide scalable parallel access to files distributed across multiple servers (pNFS extension).

The newest file system protocol, NFS 4.2 (RFC 7862), was officially released in November 2016.

Other extensions

With the development of the standard, corresponding tools for working with it appeared. For example, WebNFS, an extension for versions 2 and 3, allows the Network File System Access Protocol to more easily integrate into web browsers and enable work across firewalls.

Various third party protocols have also become associated with NFS. The most famous of them are:

  • Network Lock Manager (NLM) with byte-range locking support (added to support the UNIX System V file locking API);
  • Remote Quota (RQUOTAD), which allows NFS users to view storage quotas on NFS servers;
  • NFS over RDMA is an adaptation of NFS that uses remote direct memory access (RDMA) as the transmission medium;
  • NFS-Ganesha is an NFS server running in user space and supporting CephFS FSAL (File System Abstraction Layer) using libcephfs.

Platforms

Network File System is often used with Unix operating systems (such as Solaris, AIX, HP-UX), Apple's MacOS, and Unix-like operating systems (such as Linux and FreeBSD).

It is also available for platforms such as Acorn RISC OS, OpenVMS, MS-DOS, Microsoft Windows, Novell NetWare and IBM AS/400.

Alternative remote file access protocols include Server Message Block (SMB, also called CIFS), Apple Filing Protocol (AFP), NetWare Core Protocol (NCP), and OS/400 Server File System (QFileSvr.400).

This is due to the requirements of NFS, which are aimed mostly at Unix-like environments.

However, the SMB and NetWare (NCP) protocols are used more often than NFS on systems running Microsoft Windows. AFP is most common on Apple Macintosh platforms, and QFileSvr.400 is most common on OS/400.

Typical implementation

Assuming a typical Unix-style scenario in which one computer (the client) needs access to data stored on another (the NFS server):

  • The server implements Network File System processes, running by default as nfsd, to make its data publicly available to clients. The server administrator determines how to export directory names and settings, typically using the /etc/exports configuration file and the exportfs command.
  • Administering server security ensures that it can recognize and approve an authenticated client. Its network configuration ensures that eligible clients can negotiate with it through any firewall system.
  • The client machine requests access to the exported data, usually by issuing a mount command. It queries rpcbind on the server for the port that NFS is using and then connects to it.
  • If everything happens without errors, users on the client machine will be able to view and interact with the mounted file systems on the server within the permitted parameters.

It should also be noted that the Network File System mounting process can also be automated, for example using /etc/fstab and/or other similar tools.

Development to date

By the 21st century, competing protocols DFS and AFS had not achieved any major commercial success compared to the Network File System. IBM, which previously acquired all commercial rights to the above technologies, donated most of the AFS source code to the free software community in 2000. The Open AFS project still exists today. In early 2005, IBM announced the end of sales of AFS and DFS.

In turn, in January 2010, Panasas proposed NFS v 4.1 based on technology that improves parallel data access capabilities. The Network File System v 4.1 protocol defines a method for separating file system metadata from the location of specific files. So it goes beyond simple name/data separation.

What is NFS of this version in practice? The above feature distinguishes it from the traditional protocol, which contains the names of files and their data under one connection to the server. With Network File System v 4.1, some files can be shared across multi-node servers, but client involvement in sharing metadata and data is limited.

When implementing the fourth distribution of the protocol, the NFS server is a set of server resources or components; they are assumed to be controlled by the metadata server.

The client still contacts a single metadata server to traverse or interact with the namespace. As it moves files to and from the server, it can directly interact with a set of data owned by an NFS group.