This chapter describes how to install the first AFS machine in your cell, configuring it as both a file server machine and a client machine. After completing all procedures in this chapter, you can remove the client functionality if you wish, as described in Removing Client Functionality.
To install additional file server machines after completing this chapter, see Installing Additional Server Machines.
To install additional client machines after completing this chapter, see Installing Additional Client Machines.
The instructions in this chapter assume that you meet the following requirements.
You must make the following configuration decisions while installing the first AFS machine. To speed the installation itself, it is best to make the decisions before beginning. See the chapter in the IBM AFS Administration Guide about issues in cell administration and configuration for detailed guidelines.
This chapter is divided into three large sections corresponding to the three parts of installing the first AFS machine. Perform all of the steps in the order they appear. Each functional section begins with a summary of the procedures to perform. The sections are as follows:
In the first phase of installing your cell's first AFS machine, you install file server and database server functionality by performing the following procedures:
The first AFS machine you install must have sufficient disk space to store AFS volumes. To take best advantage of AFS's capabilities, store client-side binaries as well as user files in volumes. When you later install additional file server machines in your cell, you can distribute these volumes among the different machines as you see fit.
These instructions configure the first AFS machine as a database server machine, the binary distribution machine for its system type, and the cell's system control machine. For a description of these roles, see the IBM AFS Administration Guide.
Installation of additional machines is simplest if the first machine has the lowest IP address of any database server machine you currently plan to install. If you later install database server functionality on a machine with a lower IP address, you must first update the /usr/vice/etc/CellServDB file on all of your cell's client machines. For more details, see Installing Database Server Functionality.
Create the /usr/afs and /usr/vice/etc directories on the local disk, to house server and client files respectively. Subsequent instructions copy files from the AFS CD-ROM into them. Create the /cdrom directory as a mount point for CD-ROMs, if it does not already exist.
# mkdir /usr/afs
# mkdir /usr/vice
# mkdir /usr/vice/etc
# mkdir /cdrom
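Subsequent steps assume that the AFS CD-ROM is mounted on the /cdrom directory. The exact mount command depends on the operating system; as one hedged illustration only, on a Solaris system without the volume manager running, a command along the following lines mounts the CD-ROM (the device name is hypothetical):

# mount -F hsfs -o ro /dev/dsk/c0t6d0s0 /cdrom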
Several of the initial procedures for installing a file server machine differ for each system type. For convenience, the following sections group them together for each system type:
The kernel on every AFS file server and client machine must incorporate AFS extensions. On machines that use a dynamic kernel module loader, it is conventional to alter the machine's initialization script to load the AFS extensions at each reboot.
Every AFS file server machine must have at least one partition or logical volume dedicated to storing AFS volumes (for convenience, the documentation hereafter refers to partitions only). Each server partition is mounted at a directory named /vicepxx, where xx is one or two lowercase letters. By convention, the first 26 partitions are mounted on the directories called /vicepa through /vicepz, the 27th one is mounted on the /vicepaa directory, and so on through /vicepaz and /vicepba, continuing up to the index corresponding to the maximum number of server partitions supported in the current version of AFS (which is specified in the IBM AFS Release Notes).
The /vicepxx directories must reside in the file server machine's root directory, not in one of its subdirectories (for example, /usr/vicepa is not an acceptable directory location).
You can also add or remove server partitions on an existing file server machine. For instructions, see the chapter in the IBM AFS Administration Guide about maintaining server machines.
Note: | Not all file system types supported by an operating system are necessarily supported as AFS server partitions. For possible restrictions, see the IBM AFS Release Notes. |
To continue, proceed to the appropriate section:
Begin by running the AFS initialization script to call the AIX kernel extension facility, which dynamically loads AFS modifications into the kernel. Then use the SMIT program to configure partitions for storing AFS volumes, and replace the AIX fsck program helper with a version that correctly handles AFS volumes. If the machine is to remain an AFS client machine, incorporate AFS into the AIX secondary authentication system.
The AIX kernel extension facility is the dynamic kernel loader provided by IBM Corporation. AIX does not support incorporation of AFS modifications during a kernel build.
For AFS to function correctly, the kernel extension facility must run each time the machine reboots, so the AFS initialization script (included in the AFS distribution) invokes it automatically. In this section you copy the script to the conventional location and edit it to select the appropriate options depending on whether NFS is also to run.
After editing the script, you run it to incorporate AFS into the kernel. In later sections you verify that the script correctly initializes all AFS components, then configure the AIX inittab file so that the script runs automatically at reboot.
# cd /cdrom/rs_aix42/root.client/usr/vice/etc
# cp -rp dkload /usr/vice/etc
# cp -p rc.afs /etc/rc.afs
If the machine is not to function as an NFS/AFS Translator, set the NFS variable as follows.
NFS=$NFS_NONE
If the machine is to function as an NFS/AFS Translator and is running AIX 4.2.1 or higher, set the NFS variable as follows. Note that NFS must already be loaded into the kernel, which happens automatically on systems running AIX 4.1.1 and later, as long as the file /etc/exports exists.
NFS=$NFS_IAUTH
# /etc/rc.afs
Every AFS file server machine must have at least one partition or logical volume dedicated to storing AFS volumes. Each server partition is mounted at a directory named /vicepxx, where xx is one or two lowercase letters. The /vicepxx directories must reside in the file server machine's root directory, not in one of its subdirectories (for example, /usr/vicepa is not an acceptable directory location). For additional information, see Performing Platform-Specific Procedures.
To configure server partitions on an AIX system, perform the following procedures:
# mkdir /vicepxx
Also configure the partitions so that they are mounted automatically at each reboot. For more information, refer to the AIX documentation.
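As a sketch of what that configuration can look like, an /etc/filesystems stanza similar to the following mounts a JFS logical volume on /vicepa at each reboot; the logical volume and log device names here are assumptions for illustration only:

/vicepa:
        dev             = /dev/lv01
        vfs             = jfs
        log             = /dev/loglv00
        mount           = true
        check           = true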
In this section, you make modifications to guarantee that the appropriate fsck program runs on AFS server partitions. The fsck program provided with the operating system must never run on AFS server partitions. Because it does not recognize the structures that the File Server uses to organize volume data, it removes all of the data. To repeat:
Never run the standard fsck program on AFS server partitions. It discards AFS volumes.
On AIX systems, you do not replace the fsck binary itself, but rather the program helper file included in the AIX distribution as /sbin/helpers/v3fshelper.
# cd /sbin/helpers
# mv v3fshelper v3fshelper.noafs
# cp -p /cdrom/rs_aix42/root.server/etc/v3fshelper v3fshelper
Note: | If you plan to remove client functionality from this machine after completing the installation, skip this section and proceed to Starting the BOS Server. |
Follow the instructions in this section to incorporate AFS modifications into the AIX secondary authentication system.
# ls /usr/vice/etc
If the files do not exist, mount the AFS CD-ROM for AIX (if it is not already), change directory as indicated, and copy them.
# cd /cdrom/rs_aix42/root.client/usr/vice/etc
# cp -p afs_dynamic* /usr/vice/etc
registry = DCE
If the machine is an AFS client only, set the following value:
SYSTEM = "AFS OR (AFS[UNAVAIL] AND compat[SUCCESS])"
If the machine is both an AFS and a DCE client, set the following value (it must appear on a single line in the file):
SYSTEM = "DCE OR DCE[UNAVAIL] OR AFS OR (AFS[UNAVAIL] \ AND compat[SUCCESS])"
root:
        registry = files
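Taken together, the edited stanzas might look like the following sketch for a machine that is an AFS client only. The file name is an assumption here, since the excerpts above do not name it; /etc/security/user is the standard AIX location for the registry and SYSTEM attributes:

default:
        registry = DCE
        SYSTEM = "AFS OR (AFS[UNAVAIL] AND compat[SUCCESS])"

root:
        registry = files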
If you use the AFS Authentication Server (kaserver process):
DCE:
        program = /usr/vice/etc/afs_dynamic_auth
If you use a Kerberos implementation of AFS authentication:
DCE:
        program = /usr/vice/etc/afs_dynamic_kerbauth
If you use the AFS Authentication Server (kaserver process):
AFS:
        program = /usr/vice/etc/afs_dynamic_auth
If you use a Kerberos implementation of AFS authentication:
AFS:
        program = /usr/vice/etc/afs_dynamic_kerbauth
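For reference, here is a sketch of how the finished method stanzas might look when you use the AFS Authentication Server (kaserver). It assumes the stanzas live in the AIX authentication methods file, conventionally /etc/security/login.cfg, although that file is not named in the excerpts above:

DCE:
        program = /usr/vice/etc/afs_dynamic_auth

AFS:
        program = /usr/vice/etc/afs_dynamic_auth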
Begin by building AFS modifications into a new static kernel; Digital UNIX does not support dynamic loading. Then create partitions for storing AFS volumes, and replace the Digital UNIX fsck program with a version that correctly handles AFS volumes. If the machine is to remain an AFS client machine, incorporate AFS into the machine's Security Integration Architecture (SIA) matrix.
Use the following instructions to build AFS modifications into the kernel on a Digital UNIX system.
# cd /usr/sys/conf
# cp machine_name AFS
        .                  .
        .                  .
options         UFS
options         NFS
options         AFS
        .                  .
        .                  .
        .                  .
        .                  .
        .                  .
OPTIONS/nfs             optional nfs
OPTIONS/afs             optional afs
OPTIONS/nfs_server      optional nfs_server
        .                  .
        .                  .
        .                  .
        .                  .
        .                  .
        .                  .
        .                  .
#
MODULE/nfs_server       optional nfs_server Binary
nfs/nfs_server.c        module nfs_server optimize -g3
nfs/nfs3_server.c       module nfs_server optimize -g3
#
MODULE/afs              optional afs Binary
afs/libafs.c            module afs
#
        .                  .
        .                  .
#include <afs.h>
#if defined(AFS) && AFS
extern struct vfsops afs_vfsops;
#endif
        .                  .
        .                  .
        .                  .
        .                  .
        .                  .
&fdfs_vfsops,   "fdfs",         /* 12 = MOUNT_FDFS */
#if defined(AFS)
&afs_vfsops,    "afs",
#else
(struct vfsops *)0, "",         /* 13 = MOUNT_ADDON */
#endif
#if     NFS && INFS_DYNAMIC
&nfs3_vfsops,   "nfsv3",        /* 14 = MOUNT_NFS3 */
# cd /cdrom/alpha_dux40/root.client
# cp usr/vice/etc/afs.rc /sbin/init.d/afs
If the machine's kernel supports NFS server functionality:
# cp bin/libafs.o /usr/sys/BINARY/afs.mod
If the machine's kernel does not support NFS server functionality:
# cp bin/libafs.nonfs.o /usr/sys/BINARY/afs.mod
# doconfig -c AFS
# mv /vmunix /vmunix_noafs
# cp /sys/AFS/vmunix /vmunix
# cd /
# shutdown -r now
login: root
Password: root_password
Every AFS file server machine must have at least one partition or logical volume dedicated to storing AFS volumes. Each server partition is mounted at a directory named /vicepxx, where xx is one or two lowercase letters. The /vicepxx directories must reside in the file server machine's root directory, not in one of its subdirectories (for example, /usr/vicepa is not an acceptable directory location). For additional information, see Performing Platform-Specific Procedures.
# mkdir /vicepxx
/dev/disk /vicepxx ufs rw 0 2
The following is an example for the first partition being configured.
/dev/rz3a /vicepa ufs rw 0 2
# newfs -v /dev/disk
In this section, you make modifications to guarantee that the appropriate fsck program runs on AFS server partitions. The fsck program provided with the operating system must never run on AFS server partitions. Because it does not recognize the structures that the File Server uses to organize volume data, it removes all of the data. To repeat:
Never run the standard fsck program on AFS server partitions. It discards AFS volumes.
On Digital UNIX systems, the files /sbin/fsck and /usr/sbin/fsck are driver programs. Rather than replacing either of them, you replace the actual binary included in the Digital UNIX distribution as /sbin/ufs_fsck and /usr/sbin/ufs_fsck.
# cd /cdrom/alpha_dux40/root.server/etc
# cp vfsck /sbin/vfsck
# cp vfsck /usr/sbin/vfsck
# cd /sbin
# mv ufs_fsck ufs_fsck.noafs
# ln -s vfsck ufs_fsck
# cd /usr/sbin
# mv ufs_fsck ufs_fsck.noafs
# ln -s vfsck ufs_fsck
Note: | If you plan to remove client functionality from this machine after completing the installation, skip this section and proceed to Starting the BOS Server. |
On Digital UNIX systems, the AFS initialization script automatically incorporates the AFS authentication library file into the Security Integration Architecture (SIA) matrix on the machine, so that users with AFS accounts obtain a token at login. In this section you copy the library file to the appropriate location.
For more information on SIA, see the Digital UNIX reference page for matrix.conf, or consult the section on security in your Digital UNIX documentation.
Note: | If the machine runs both the DCE and AFS client software, AFS must start after DCE. Consult the AFS initialization script for suggested symbolic links to create for correct ordering. Also, the system startup script order must initialize SIA before any long-running process that uses authentication. |
Perform the following steps to enable AFS login.
# cd /cdrom/alpha_dux40/lib/afs
If you use the AFS Authentication Server (kaserver process) in the cell:
# cp libafssiad.so /usr/shlib
If you use a Kerberos implementation of AFS authentication, rename the library file as you copy it:
# cp libafssiad.krb.so /usr/shlib/libafssiad.so
Begin by building AFS modifications into a new kernel; HP-UX does not support dynamic loading. Then create partitions for storing AFS volumes, and install and configure the AFS-modified fsck program to run on AFS server partitions. If the machine is to remain an AFS client machine, incorporate AFS into the machine's Pluggable Authentication Module (PAM) scheme.
Use the following instructions to build AFS modifications into the kernel on an HP-UX system.
# cp /stand/vmunix /stand/vmunix.noafs
# cp /stand/system /stand/system.noafs
# cd /cdrom/hp_ux110/root.client
# cp usr/vice/etc/afs.rc /sbin/init.d/afs
# cp usr/vice/etc/afs.driver /usr/conf/master.d/afs
If the machine's kernel supports NFS server functionality:
# cp bin/libafs.a /usr/conf/lib
If the machine's kernel does not support NFS server functionality, change the file's name as you copy it:
# cp bin/libafs.nonfs.a /usr/conf/lib/libafs.a
# sam -display local_hostname:0
login: root
Password: root_password
# cd /stand/build
# mk_kernel
# mv /stand/build/vmunix_test /stand/vmunix
# cd /
# shutdown -r now
login: root
Password: root_password
Every AFS file server machine must have at least one partition or logical volume dedicated to storing AFS volumes. Each server partition is mounted at a directory named /vicepxx, where xx is one or two lowercase letters. The /vicepxx directories must reside in the file server machine's root directory, not in one of its subdirectories (for example, /usr/vicepa is not an acceptable directory location). For additional information, see Performing Platform-Specific Procedures.
# mkdir /vicepxx
In this section, you make modifications to guarantee that the appropriate fsck program runs on AFS server partitions. The fsck program provided with the operating system must never run on AFS server partitions. Because it does not recognize the structures that the File Server uses to organize volume data, it removes all of the data. To repeat:
Never run the standard fsck program on AFS server partitions. It discards AFS volumes.
On HP-UX systems, there are several configuration files to install in addition to the AFS-modified fsck program (the vfsck binary).
format_revision 1
fsck            0       m,P,p,d,f,b:c:y,n,Y,N,q,
# mkdir /sbin/fs/afs
# cd /sbin/fs/afs
# cp -p /cdrom/hp_ux110/root.server/etc/* .
# mv vfsck fsck
# chmod 755 *
The sixth line in the following example of an edited file shows an AFS server partition, /vicepa.
/dev/vg00/lvol1 /                hfs defaults 0 1
/dev/vg00/lvol4 /opt             hfs defaults 0 2
/dev/vg00/lvol5 /tmp             hfs defaults 0 2
/dev/vg00/lvol6 /usr             hfs defaults 0 2
/dev/vg00/lvol8 /var             hfs defaults 0 2
/dev/vg00/lvol9 /vicepa          afs defaults 0 2
/dev/vg00/lvol7 /usr/vice/cache  hfs defaults 0 2
Note: | If you plan to remove client functionality from this machine after completing the installation, skip this section and proceed to Starting the BOS Server. |
At this point you incorporate AFS into the operating system's Pluggable Authentication Module (PAM) scheme. PAM integrates all authentication mechanisms on the machine, including login, to provide the security infrastructure for authenticated access to and from the machine.
Explaining PAM is beyond the scope of this document. It is assumed that you understand the syntax and meanings of settings in the PAM configuration file (for example, how the other entry works, the effect of marking an entry as required, optional, or sufficient, and so on).
The following instructions explain how to alter the entries in the PAM configuration file for each service for which you wish to use AFS authentication. Other configurations possibly also work, but the instructions specify the recommended and tested configuration.
Note: | The instructions specify that you mark each entry as optional. However, marking some modules as optional can mean that they grant access to the corresponding service even when the user does not meet all of the module's requirements. In some operating system revisions, for example, if you mark as optional the module that controls login via a dial-up connection, it allows users to login without providing a password. See the IBM AFS Release Notes for a discussion of any limitations that apply to this operating system. Also, with some operating system versions you must install patches for PAM to interact correctly with certain authentication programs. For details, see the IBM AFS Release Notes. |
The recommended AFS-related entries in the PAM configuration file make use of one or more of three attributes, which appear in the examples below: try_first_pass, ignore_root, and setenv_password_expires.
Perform the following steps to enable AFS login.
# cd /usr/lib/security
If you use the AFS Authentication Server (kaserver process) in the cell:
# cp /cdrom/hp_ux110/lib/pam_afs.so.1 .
# ln -s pam_afs.so.1 pam_afs.so
If you use a Kerberos implementation of AFS authentication:
# cp /cdrom/hp_ux110/lib/pam_afs.krb.so.1 .
# ln -s pam_afs.krb.so.1 pam_afs.so
First edit the standard entries, which refer to the HP-UX PAM module (usually, the file /usr/lib/security/libpam_unix.1) in their fourth field. For each service for which you want to use AFS authentication, edit the third field of its entry to read optional. The pam.conf file in the HP-UX distribution usually includes standard entries for the login and ftp services, for instance.
If there are services for which you want to use AFS authentication, but for which the pam.conf file does not already include a standard entry, you must create that entry and place the value optional in its third field. For instance, the HP-UX pam.conf file does not usually include standard entries for the remsh or telnet services.
Then create an AFS-related entry for each service, placing it immediately below the standard entry. The following example shows what the Authentication Management section looks like after you have edited or created entries for the services mentioned previously. Note that the example AFS entries appear on two lines only for legibility.
login   auth  optional  /usr/lib/security/libpam_unix.1
login   auth  optional  /usr/lib/security/pam_afs.so   \
      try_first_pass ignore_root setenv_password_expires
ftp     auth  optional  /usr/lib/security/libpam_unix.1
ftp     auth  optional  /usr/lib/security/pam_afs.so   \
      try_first_pass ignore_root
remsh   auth  optional  /usr/lib/security/libpam_unix.1
remsh   auth  optional  /usr/lib/security/pam_afs.so   \
      try_first_pass ignore_root
telnet  auth  optional  /usr/lib/security/libpam_unix.1
telnet  auth  optional  /usr/lib/security/pam_afs.so   \
      try_first_pass ignore_root setenv_password_expires
dtlogin   auth  optional  /usr/lib/security/libpam_unix.1
dtlogin   auth  optional  /usr/lib/security/pam_afs.so   \
      try_first_pass ignore_root
dtaction  auth  optional  /usr/lib/security/libpam_unix.1
dtaction  auth  optional  /usr/lib/security/pam_afs.so   \
      try_first_pass ignore_root
To incorporate AFS into the kernel on IRIX systems, choose one of two methods:
Then create partitions for storing AFS volumes. You do not need to replace the IRIX fsck program because SGI has already modified it to handle AFS volumes properly. If the machine is to remain an AFS client machine, verify that the IRIX login utility installed on the machine grants an AFS token.
In preparation for either dynamic loading or kernel building, perform the following procedures:
# cd /cdrom/sgi_65/root.client
# cp -p usr/vice/etc/afs.rc /etc/init.d/afs
# uname -m
The ml program is the dynamic kernel loader provided by SGI for IRIX systems. If you use it rather than building AFS modifications into a static kernel, then for AFS to function correctly the ml program must run each time the machine reboots. Therefore, the AFS initialization script (included on the AFS CD-ROM) invokes it automatically when the afsml configuration variable is activated. In this section you activate the variable and run the script.
In later sections you verify that the script correctly initializes all AFS components, then create the links that incorporate AFS into the IRIX startup and shutdown sequence.
# mkdir /usr/vice/etc/sgiload
(You can choose to copy all of the kernel library files into the /usr/vice/etc/sgiload directory, but they require a significant amount of space.)
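If you do decide to copy all of them, a single command like the following sketch accomplishes it; it assumes you are still in the root.client directory you changed to above, and it is an illustration rather than a required step:

# cp -p usr/vice/etc/sgiload/* /usr/vice/etc/sgiload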
If the machine's kernel supports NFS server functionality:
# cp -p usr/vice/etc/sgiload/libafs.IPxx.o /usr/vice/etc/sgiload
If the machine's kernel does not support NFS server functionality:
# cp -p usr/vice/etc/sgiload/libafs.IPxx.nonfs.o  \
   /usr/vice/etc/sgiload
# /etc/chkconfig -f afsml on
If the machine is to function as an NFS/AFS Translator and the kernel supports NFS server functionality, activate the afsxnfs variable.
# /etc/chkconfig -f afsxnfs on
You can ignore any error messages about the inability to start the BOS Server or the Cache Manager or AFS client.
# /etc/init.d/afs start
Use the following instructions to build AFS modifications into the kernel on an IRIX system.
# cp -p bin/afs.sm /var/sysgen/system
# cp -p bin/afs /var/sysgen/master.d
If the machine's kernel supports NFS server functionality:
# cp -p bin/libafs.IPxx.a /var/sysgen/boot/afs.a
If the machine's kernel does not support NFS server functionality:
# cp -p bin/libafs.IPxx.nonfs.a /var/sysgen/boot/afs.a
# /etc/chkconfig -f afsml off
If the machine is to function as an NFS/AFS Translator and the kernel supports NFS server functionality, activate the afsxnfs variable.
# /etc/chkconfig -f afsxnfs on
# cp /unix /unix_noafs
# autoconfig
# cd /
# shutdown -i6 -g0 -y
login: root
Password: root_password
Every AFS file server machine must have at least one partition or logical volume dedicated to storing AFS volumes. Each server partition is mounted at a directory named /vicepxx, where xx is one or two lowercase letters. The /vicepxx directories must reside in the file server machine's root directory, not in one of its subdirectories (for example, /usr/vicepa is not an acceptable directory location). For additional information, see Performing Platform-Specific Procedures.
AFS supports use of both EFS and XFS partitions for housing AFS volumes. SGI encourages use of XFS partitions.
# mkdir /vicepxx
For an XFS partition or logical volume:
/dev/dsk/disk /vicepxx xfs rw,raw=/dev/rdsk/disk 0 0
For an EFS partition:
/dev/dsk/disk /vicepxx efs rw,raw=/dev/rdsk/disk 0 0
The following are examples of an entry for each file system type:
/dev/dsk/dks0d2s6 /vicepa  xfs rw,raw=/dev/rdsk/dks0d2s6  0 0
/dev/dsk/dks0d3s1 /vicepb  efs rw,raw=/dev/rdsk/dks0d3s1  0 0
For XFS file systems, include the indicated options to configure the partition or logical volume with inodes large enough to accommodate AFS-specific information:
# mkfs -t xfs -i size=512 -l size=4000b raw_device
For EFS file systems:
# mkfs -t efs raw_device
# /usr/afs/bin/xfs_size_check
Note: | If you plan to remove client functionality from this machine after completing the installation, skip this section and proceed to Starting the BOS Server. |
The standard IRIX command-line login program and the graphical xdm login program both automatically grant an AFS token when AFS is incorporated into the machine's kernel. However, some IRIX distributions use another login utility by default, and it does not necessarily incorporate the required AFS modifications. If that is the case, you must disable the default utility if you want AFS users to obtain AFS tokens at login. For further discussion, see the IBM AFS Release Notes.
If you configure the machine to use an AFS-modified login utility, then the afsauthlib.so and afskauthlib.so files (included in the AFS distribution) must reside in the /usr/vice/etc directory. Issue the ls command to verify.
# ls /usr/vice/etc
If the files do not exist, mount the AFS CD-ROM for IRIX (if it is not already), change directory as indicated, and copy them.
# cd /cdrom/sgi_65/root.client/usr/vice/etc
# cp -p *authlib* /usr/vice/etc
After taking any necessary action, proceed to Starting the BOS Server.
Begin by running the AFS initialization script to call the insmod program, which dynamically loads AFS modifications into the kernel. Then create partitions for storing AFS volumes. You do not need to replace the Linux fsck program. If the machine is to remain an AFS client machine, incorporate AFS into the machine's Pluggable Authentication Module (PAM) scheme.
The insmod program is the dynamic kernel loader for Linux. Linux does not support incorporation of AFS modifications during a kernel build.
For AFS to function correctly, the insmod program must run each time the machine reboots, so the AFS initialization script (included on the AFS CD-ROM) invokes it automatically. The script also includes commands that select the appropriate AFS library file automatically. In this section you run the script.
In later sections you verify that the script correctly initializes all AFS components, then activate a configuration variable, which results in the script being incorporated into the Linux startup and shutdown sequence.
# cd /cdrom/i386_linux22/root.client/usr/vice/etc
# cp -rp modload /usr/vice/etc
# cp -p afs.rc /etc/rc.d/init.d/afs
# /etc/rc.d/init.d/afs start
Every AFS file server machine must have at least one partition or logical volume dedicated to storing AFS volumes. Each server partition is mounted at a directory named /vicepxx, where xx is one or two lowercase letters. The /vicepxx directories must reside in the file server machine's root directory, not in one of its subdirectories (for example, /usr/vicepa is not an acceptable directory location). For additional information, see Performing Platform-Specific Procedures.
# mkdir /vicepxx
/dev/disk /vicepxx ext2 defaults 0 2
The following is an example for the first partition being configured.
/dev/sda8 /vicepa ext2 defaults 0 2
# mkfs -v /dev/disk
Note: | If you plan to remove client functionality from this machine after completing the installation, skip this section and proceed to Starting the BOS Server. |
At this point you incorporate AFS into the operating system's Pluggable Authentication Module (PAM) scheme. PAM integrates all authentication mechanisms on the machine, including login, to provide the security infrastructure for authenticated access to and from the machine.
Explaining PAM is beyond the scope of this document. It is assumed that you understand the syntax and meanings of settings in the PAM configuration file (for example, how the other entry works, the effect of marking an entry as required, optional, or sufficient, and so on).
The following instructions explain how to alter the entries in the PAM configuration file for each service for which you wish to use AFS authentication. Other configurations possibly also work, but the instructions specify the recommended and tested configuration.
The recommended AFS-related entries in the PAM configuration file make use of one or more of three attributes; the examples below use two of them, try_first_pass and ignore_root.
Perform the following steps to enable AFS login.
If you are using a Linux distribution from Red Hat Software:
# cd /lib/security
If you are using another Linux distribution:
# cd /usr/lib/security
If you use the AFS Authentication Server (kaserver process):
# cp /cdrom/i386_linux22/lib/pam_afs.so.1 .
# ln -s pam_afs.so.1 pam_afs.so
If you use a Kerberos implementation of AFS authentication:
# cp /cdrom/i386_linux22/lib/pam_afs.krb.so.1 .
# ln -s pam_afs.krb.so.1 pam_afs.so
Place the AFS entry below any entries that impose conditions under which you want the service to fail for a user who does not meet the entry's requirements. Mark these entries required. Place the AFS entry above any entries that need to execute only if AFS authentication fails.
Insert the following AFS entry if using the Red Hat distribution:
auth sufficient /lib/security/pam_afs.so try_first_pass ignore_root
Insert the following AFS entry if using another distribution:
auth sufficient /usr/lib/security/pam_afs.so try_first_pass ignore_root
The following example illustrates the recommended configuration of the configuration file for the login service (/etc/pam.d/login) on a machine using the Red Hat distribution.
#%PAM-1.0
auth       required     /lib/security/pam_securetty.so
auth       required     /lib/security/pam_nologin.so
auth       sufficient   /lib/security/pam_afs.so try_first_pass ignore_root
auth       required     /lib/security/pam_pwdb.so shadow nullok
account    required     /lib/security/pam_pwdb.so
password   required     /lib/security/pam_cracklib.so
password   required     /lib/security/pam_pwdb.so shadow nullok use_authtok
session    required     /lib/security/pam_pwdb.so
Begin by running the AFS initialization script to call the modload program distributed by Sun Microsystems, which dynamically loads AFS modifications into the kernel. Then create partitions for storing AFS volumes, and install and configure the AFS-modified fsck program to run on AFS server partitions. If the machine is to remain an AFS client machine, incorporate AFS into the machine's Pluggable Authentication Module (PAM) scheme.
The modload program is the dynamic kernel loader provided by Sun Microsystems for Solaris systems. Solaris does not support incorporation of AFS modifications during a kernel build.
For AFS to function correctly, the modload program must run each time the machine reboots, so the AFS initialization script (included on the AFS CD-ROM) invokes it automatically. In this section you copy the appropriate AFS library file to the location where the modload program accesses it and then run the script.
In later sections you verify that the script correctly initializes all AFS components, then create the links that incorporate AFS into the Solaris startup and shutdown sequence.
# cd /cdrom/sun4x_56/root.client/usr/vice/etc
# cp -p afs.rc /etc/init.d/afs
If the machine is running Solaris 2.6 or the 32-bit version of Solaris 7, its kernel supports NFS server functionality, and the nfsd process is running:
# cp -p modload/libafs.o /kernel/fs/afs
If the machine is running Solaris 2.6 or the 32-bit version of Solaris 7, and its kernel does not support NFS server functionality or the nfsd process is not running:
# cp -p modload/libafs.nonfs.o /kernel/fs/afs
If the machine is running the 64-bit version of Solaris 7, its kernel supports NFS server functionality, and the nfsd process is running:
# cp -p modload/libafs64.o /kernel/fs/sparcv9/afs
If the machine is running the 64-bit version of Solaris 7, and its kernel does not support NFS server functionality or the nfsd process is not running:
# cp -p modload/libafs64.nonfs.o /kernel/fs/sparcv9/afs
# /etc/init.d/afs start
When an entry called afs does not already exist in the local /etc/name_to_sysnum file, the script automatically creates it and reboots the machine to start using the new version of the file. If this happens, log in again as the superuser root after the reboot and run the initialization script again. This time the required entry exists in the /etc/name_to_sysnum file, and the modload program runs.
login: root
Password: root_password
# /etc/init.d/afs start
In this section, you make modifications to guarantee that the appropriate fsck program runs on AFS server partitions. The fsck program provided with the operating system must never run on AFS server partitions. Because it does not recognize the structures that the File Server uses to organize volume data, it removes all of the data. To repeat:
Never run the standard fsck program on AFS server partitions. It discards AFS volumes.
# mkdir /usr/lib/fs/afs
# cd /usr/lib/fs/afs
# cp /cdrom/sun4x_56/root.server/etc/vfsck fsck
# ln -s /usr/lib/fs/ufs/clri
# ln -s /usr/lib/fs/ufs/df
# ln -s /usr/lib/fs/ufs/edquota
# ln -s /usr/lib/fs/ufs/ff
# ln -s /usr/lib/fs/ufs/fsdb
# ln -s /usr/lib/fs/ufs/fsirand
# ln -s /usr/lib/fs/ufs/fstyp
# ln -s /usr/lib/fs/ufs/labelit
# ln -s /usr/lib/fs/ufs/lockfs
# ln -s /usr/lib/fs/ufs/mkfs
# ln -s /usr/lib/fs/ufs/mount
# ln -s /usr/lib/fs/ufs/ncheck
# ln -s /usr/lib/fs/ufs/newfs
# ln -s /usr/lib/fs/ufs/quot
# ln -s /usr/lib/fs/ufs/quota
# ln -s /usr/lib/fs/ufs/quotaoff
# ln -s /usr/lib/fs/ufs/quotaon
# ln -s /usr/lib/fs/ufs/repquota
# ln -s /usr/lib/fs/ufs/tunefs
# ln -s /usr/lib/fs/ufs/ufsdump
# ln -s /usr/lib/fs/ufs/ufsrestore
# ln -s /usr/lib/fs/ufs/volcopy
afs AFS Utilities
case "$2" in ufs) foptions="-o p" ;; afs) foptions="-o p" ;; s5) foptions="-y -t /var/tmp/tmp$$ -D" ;; *) foptions="-y" ;;
# For fsck purposes, we make a distinction between ufs and
# other file systems
#
if [ "$fstype" = "ufs" ]; then
        ufs_fscklist="$ufs_fscklist $fsckdev"
        saveentry $fstype "$OPTIONS" $special $mountp
        continue
fi
with the following section of code:
# For fsck purposes, we make a distinction between ufs/afs
# and other file systems.
#
if [ "$fstype" = "ufs" -o "$fstype" = "afs" ]; then
        ufs_fscklist="$ufs_fscklist $fsckdev"
        saveentry $fstype "$OPTIONS" $special $mountp
        continue
fi
Every AFS file server machine must have at least one partition or logical volume dedicated to storing AFS volumes. Each server partition is mounted at a directory named /vicepxx, where xx is one or two lowercase letters. The /vicepxx directories must reside in the file server machine's root directory, not in one of its subdirectories (for example, /usr/vicepa is not an acceptable directory location). For additional information, see Performing Platform-Specific Procedures.
# mkdir /vicepxx
/dev/dsk/disk /dev/rdsk/disk /vicepxx afs boot_order yes
The following is an example for the first partition being configured.
/dev/dsk/c0t6d0s1 /dev/rdsk/c0t6d0s1 /vicepa afs 3 yes
# newfs -v /dev/rdsk/disk
Note: | If you plan to remove client functionality from this machine after completing the installation, skip this section and proceed to Starting the BOS Server. |
At this point you incorporate AFS into the operating system's Pluggable Authentication Module (PAM) scheme. PAM integrates all authentication mechanisms on the machine, including login, to provide the security infrastructure for authenticated access to and from the machine.
Explaining PAM is beyond the scope of this document. It is assumed that you understand the syntax and meanings of settings in the PAM configuration file (for example, how the other entry works, the effect of marking an entry as required, optional, or sufficient, and so on).
The following instructions explain how to alter the entries in the PAM configuration file for each service for which you wish to use AFS authentication. Other configurations possibly also work, but the instructions specify the recommended and tested configuration.
Note: | The instructions specify that you mark each entry as optional. However, marking some modules as optional can mean that they grant access to the corresponding service even when the user does not meet all of the module's requirements. In some operating system revisions, for example, if you mark as optional the module that controls login via a dial-up connection, it allows users to login without providing a password. See the IBM AFS Release Notes for a discussion of any limitations that apply to this operating system. Also, with some operating system versions you must install patches for PAM to interact correctly with certain authentication programs. For details, see the IBM AFS Release Notes. |
The recommended AFS-related entries in the PAM configuration file make use of one or more of three attributes, which appear in the examples below: try_first_pass, ignore_root, and setenv_password_expires.
Perform the following steps to enable AFS login.
# cd /usr/lib/security
If you use the AFS Authentication Server (kaserver process):
# cp /cdrom/sun4x_56/lib/pam_afs.so.1 .
# ln -s pam_afs.so.1 pam_afs.so
If you use a Kerberos implementation of AFS authentication:
# cp /cdrom/sun4x_56/lib/pam_afs.krb.so.1 .
# ln -s pam_afs.krb.so.1 pam_afs.so
First edit the standard entries, which refer to the Solaris PAM module (usually, the file /usr/lib/security/pam_unix.so.1) in their fourth field. For each service for which you want to use AFS authentication, edit the third field of its entry to read optional. The pam.conf file in the Solaris distribution usually includes standard entries for the login, rlogin, and rsh services, for instance.
If there are services for which you want to use AFS authentication, but for which the pam.conf file does not already include a standard entry, you must create that entry and place the value optional in its third field. For instance, the Solaris pam.conf file does not usually include standard entries for the ftp or telnet services.
Then create an AFS-related entry for each service, placing it immediately below the standard entry. The following example shows what the Authentication Management section looks like after you have edited or created entries for the services mentioned previously. Note that the example AFS entries appear on two lines only for legibility.
login   auth optional  /usr/lib/security/pam_unix.so.1
login   auth optional  /usr/lib/security/pam_afs.so   \
      try_first_pass ignore_root setenv_password_expires
rlogin  auth optional  /usr/lib/security/pam_unix.so.1
rlogin  auth optional  /usr/lib/security/pam_afs.so   \
      try_first_pass ignore_root setenv_password_expires
rsh     auth optional  /usr/lib/security/pam_unix.so.1
rsh     auth optional  /usr/lib/security/pam_afs.so   \
      try_first_pass ignore_root
ftp     auth optional  /usr/lib/security/pam_unix.so.1
ftp     auth optional  /usr/lib/security/pam_afs.so   \
      try_first_pass ignore_root
telnet  auth optional  /usr/lib/security/pam_unix.so.1
telnet  auth optional  /usr/lib/security/pam_afs.so   \
      try_first_pass ignore_root setenv_password_expires
dtlogin    auth optional  /usr/lib/security/pam_unix.so.1
dtlogin    auth optional  /usr/lib/security/pam_afs.so   \
      try_first_pass ignore_root
dtsession  auth optional  /usr/lib/security/pam_unix.so.1
dtsession  auth optional  /usr/lib/security/pam_afs.so   \
      try_first_pass ignore_root
The first possible alteration is to add the -local flag to the existing command, so that it looks like the following:
find $dir -local -name .nfs\* -mtime +7 -mount -exec rm -f {} \;
Another alternative is to exclude any directories whose names begin with the lowercase letter a or a non-alphabetic character.
find /[A-Zb-z]* remainder of existing command
Do not use the following command, which still searches under the /afs directory, looking for a subdirectory of type 4.2.
find / -fstype 4.2 /* do not use */
You are now ready to start the AFS server processes on this machine. Begin by copying the AFS server binaries from the CD-ROM to the conventional local disk location, the /usr/afs/bin directory. The following instructions also create files in other subdirectories of the /usr/afs directory.
Then issue the bosserver command to initialize the Basic OverSeer (BOS) Server, which monitors and controls other AFS server processes on its server machine. Include the -noauth flag to disable authorization checking. Because you have not yet configured your cell's AFS authentication and authorization mechanisms, the BOS Server cannot perform authorization checking as it does during normal operation. In no-authorization mode, it does not verify the identity or privilege of the issuer of a bos command, and so performs any operation for anyone.
Disabling authorization checking gravely compromises cell security. You must complete all subsequent steps in one uninterrupted pass and must not leave the machine unattended until you restart the BOS Server with authorization checking enabled, in Verifying the AFS Initialization Script.
As it initializes for the first time, the BOS Server creates the following directories and files, setting the owner to the local superuser root and the mode bits to limit the ability to write (and in some cases, read) them. For a description of the contents and function of these directories and files, see the chapter in the IBM AFS Administration Guide about administering server machines. For further discussion of the mode bit settings, see Protecting Sensitive AFS Directories.
The BOS Server also creates symbolic links called /usr/vice/etc/ThisCell and /usr/vice/etc/CellServDB to the corresponding files in the /usr/afs/etc directory. The AFS command interpreters consult the CellServDB and ThisCell files in the /usr/vice/etc directory because they generally run on client machines. On machines that are AFS servers only (as this machine currently is), the files reside only in the /usr/afs/etc directory; the links enable the command interpreters to retrieve the information they need. Later instructions for installing the client functionality replace the links with actual files.
# cd /cdrom/sysname/root.server/usr/afs
# cp -rp * /usr/afs
# /usr/afs/bin/bosserver -noauth &
# ls -l /usr/vice/etc
If either or both of /usr/vice/etc/ThisCell and /usr/vice/etc/CellServDB do not exist, or are not links, issue the following commands.
# cd /usr/vice/etc
# ln -s /usr/afs/etc/ThisCell
# ln -s /usr/afs/etc/CellServDB
Now assign your cell's name. The chapter in the IBM AFS Administration Guide about cell configuration and administration issues discusses the important considerations, explains why changing the name is difficult, and outlines the restrictions on name format. Two of the most important restrictions are that the name cannot include uppercase letters or more than 64 characters.
Use the bos setcellname command to assign the cell name. It creates two files in the /usr/afs/etc directory: ThisCell, which defines this machine's cell membership, and CellServDB, which lists this machine as the cell's first database server machine.
Note: | In the following and every instruction in this guide, for the machine name argument substitute the fully-qualified hostname (such as fs1.abc.com) of the machine you are installing. For the cell name argument substitute your cell's complete name (such as abc.com). |
# cd /usr/afs/bin
# ./bos setcellname <machine name> <cell name> -noauth
Because you are not authenticated and authorization checking is disabled, the bos command interpreter possibly produces error messages about being unable to obtain tickets and running unauthenticated. You can safely ignore the messages.
# ./bos listhosts <machine name> -noauth
Cell name is cell_name
Host 1 is machine_name
Next use the bos create command to create entries for the four database server processes in the /usr/afs/local/BosConfig file and start them running. The four processes run on database server machines only: the Authentication Server (kaserver), the Backup Server (buserver), the Protection Server (ptserver), and the Volume Location (VL) Server (vlserver).
Note: | AFS's authentication and authorization software is based on algorithms and other procedures known as Kerberos, as originally developed by Project Athena at the Massachusetts Institute of Technology. Some cells choose to replace the AFS Authentication Server and other security-related protocols with Kerberos as obtained directly from Project Athena or other sources. If you wish to do this, contact the AFS Product Support group now to learn about necessary modifications to the installation. |
The remaining instructions in this chapter include the -cell argument on all applicable commands. Provide the cell name you assigned in Defining Cell Name and Membership for Server Processes. If a command appears on multiple lines, it is only for legibility.
# ./bos create <machine name> kaserver simple /usr/afs/bin/kaserver \
      -cell <cell name> -noauth
You can safely ignore the messages that tell you to add Kerberos to the /etc/services file; AFS uses a default value that makes the addition unnecessary. You can also ignore messages about the failure of authentication.
# ./bos create <machine name> buserver simple /usr/afs/bin/buserver \
      -cell <cell name> -noauth
# ./bos create <machine name> ptserver simple /usr/afs/bin/ptserver \
      -cell <cell name> -noauth
# ./bos create <machine name> vlserver simple /usr/afs/bin/vlserver \
      -cell <cell name> -noauth
Now initialize the cell's security mechanisms. Begin by creating two initial entries in the Authentication Database: one called afs, for the AFS server processes, and one called admin, for a generic administrative account.
After you complete the installation of the first machine, you can continue to have all administrators use the admin account, or you can create a separate administrative account for each of them. The latter scheme implies somewhat more overhead, but provides a more informative audit trail for administrative operations.
In Step 7, you also place the initial AFS server encryption key into the /usr/afs/etc/KeyFile file. The AFS server processes refer to this file to learn the server encryption key when they need to decrypt server tickets.
You also issue several commands that enable the new admin user to issue privileged commands in all of the AFS suites.
The following instructions do not configure all of the security mechanisms related to the AFS Backup System. See the chapter in the IBM AFS Administration Guide about configuring the Backup System.
# kas -cell <cell name> -noauth
ka>
Do not provide passwords on the command line. Instead provide them as afs_passwd and admin_passwd in response to the kas command interpreter's prompts as shown, so that they do not appear on the standard output stream.
You need to enter the afs_passwd string only in this step and in Step 7, so provide a value that is as long and complex as possible, preferably including numerals, punctuation characters, and both uppercase and lowercase letters. Also make the admin_passwd as long and complex as possible, but keep in mind that administrators need to enter it often. Both passwords must be at least six characters long.
ka> create afs
initial_password: afs_passwd
Verifying, please re-enter initial_password: afs_passwd
ka> create admin
initial_password: admin_passwd
Verifying, please re-enter initial_password: admin_passwd
ka> examine afs
User data for afs
  key (0) cksum is checksum . . .
ka> setfields admin -flags admin
ka> examine admin
User data for admin (ADMIN) . . .
ka> quit
# ./bos adduser <machine name> admin -cell <cell name> -noauth
Do not provide the password on the command line. Instead provide it as afs_passwd in response to the bos command interpreter's prompts, as shown. Provide the same string as in Step 2.
# ./bos addkey <machine name> -kvno 0 -cell <cell name> -noauth
Input key: afs_passwd
Retype input key: afs_passwd
# ./bos listkeys <machine name> -cell <cell name> -noauth
key 0 has cksum checksum
You can safely ignore any error messages indicating that bos failed to get tickets or that authentication failed.
If the keys are different, issue the following commands, making sure that the afs_passwd string is the same in each case. The checksum strings reported by the kas examine and bos listkeys commands must match; if they do not, repeat these instructions until they do, using the -kvno argument to increment the key version number each time.
# ./kas -cell <cell name> -noauth
ka> setpassword afs -kvno 1
new_password: afs_passwd
Verifying, please re-enter initial_password: afs_passwd
ka> examine afs
User data for afs
  key (1) cksum is checksum . . .
ka> quit
# ./bos addkey <machine name> -kvno 1 -cell <cell name> -noauth
Input key: afs_passwd
Retype input key: afs_passwd
# ./bos listkeys <machine name> -cell <cell name> -noauth
key 1 has cksum checksum
By default, the Protection Server assigns AFS UID 1 (one) to the admin user, because it is the first user entry you are creating. If the local password file (/etc/passwd or equivalent) already has an entry for admin that assigns it a UNIX UID other than 1, it is best to use the -id argument to the pts createuser command to make the new AFS UID match the existing UNIX UID. Otherwise, it is best to accept the default.
# ./pts createuser -name admin -cell <cell name> [-id <AFS UID>] -noauth
User admin has id AFS UID
# ./pts adduser admin system:administrators -cell <cell name> -noauth
# ./pts membership admin -cell <cell name> -noauth
Groups admin (id: 1) is a member of:
  system:administrators
# ./bos restart <machine name> -all -cell <cell name> -noauth
Start the fs process, which consists of the File Server, Volume Server, and Salvager (fileserver, volserver and salvager processes).
# ./bos create <machine name> fs fs /usr/afs/bin/fileserver   \
      /usr/afs/bin/volserver /usr/afs/bin/salvager   \
      -cell <cell name> -noauth
Sometimes a message about Volume Location Database (VLDB) initialization appears, along with one or more instances of an error message similar to the following:
FSYNC_clientInit temporary failure (will retry)
This message appears when the volserver process tries to start before the fileserver process has completed its initialization. Wait a few minutes after the last such message before continuing, to guarantee that both processes have started successfully.
You can verify that the fs process has started successfully by issuing the bos status command. Its output mentions two proc starts.
# ./bos status <machine name> fs -long -noauth
For the partition name argument, substitute the name of one of the machine's AFS server partitions (such as /vicepa).
# ./vos create <machine name> <partition name> root.afs   \
      -cell <cell name> -noauth
The Volume Server produces a message confirming that it created the volume on the specified partition. You can ignore error messages indicating that tokens are missing, or that authentication failed.
# ./vos syncvldb <machine name> -cell <cell name> -verbose -noauth
# ./vos syncserv <machine name> -cell <cell name> -verbose -noauth
You can ignore error messages indicating that tokens are missing, or that authentication failed.
Start the server portion of the Update Server (the upserver process), to distribute the contents of directories on this machine to other server machines in the cell. It becomes active when you configure the client portion of the Update Server on additional server machines.
Distributing the contents of its /usr/afs/etc directory makes this machine the cell's system control machine. The other server machines in the cell run the upclientetc process (an instance of the client portion of the Update Server) to retrieve the configuration files. Use the -crypt argument to the upserver initialization command to specify that the Update Server distributes the contents of the /usr/afs/etc directory only in encrypted form, as shown in the following instruction. Several of the files in the directory, particularly the KeyFile file, are crucial to cell security and so must never cross the network unencrypted.
(You can choose not to configure a system control machine, in which case you must update the configuration files in each server machine's /usr/afs/etc directory individually. The bos commands used for this purpose also encrypt data before sending it across the network.)
Distributing the contents of its /usr/afs/bin directory to other server machines of its system type makes this machine a binary distribution machine. The other server machines of its system type run the upclientbin process (an instance of the client portion of the Update Server) to retrieve the binaries.
The binaries in the /usr/afs/bin directory are not sensitive, so it is not necessary to encrypt them before transfer across the network. Include the -clear argument to the upserver initialization command to specify that the Update Server distributes the contents of the /usr/afs/bin directory in unencrypted form unless an upclientbin process requests encrypted transfer.
Note that the server and client portions of the Update Server always mutually authenticate with one another, regardless of whether you use the -clear or -crypt arguments. This protects their communications from eavesdropping to some degree.
For more information on the upclient and upserver processes, see their reference pages in the IBM AFS Administration Reference. The commands appear on multiple lines here only for legibility.
# ./bos create <machine name> upserver simple   \
      "/usr/afs/bin/upserver -crypt /usr/afs/etc   \
      -clear /usr/afs/bin" -cell <cell name> -noauth
Keeping the clocks on all server and client machines in your cell synchronized is crucial to several functions, and in particular to the correct operation of AFS's distributed database technology, Ubik. The chapter in the IBM AFS Administration Guide about administering server machines explains how time skew can disturb Ubik's performance and cause service outages in your cell.
The AFS distribution includes a version of the Network Time Protocol Daemon (NTPD) for synchronizing the clocks on server machines. If a time synchronization program is not already running on the machine, then in this section you start the runntp process to configure NTPD for use with AFS.
Note: | Do not run the runntp process if NTPD or another time synchronization protocol is already running on the machine. Some versions of some operating systems run a time synchronization program by default, as detailed in the IBM AFS Release Notes. Attempting to run multiple instances of the NTPD causes an error. Running NTPD together with another time synchronization protocol is unnecessary and can cause instability in the clock setting. |
If you run the runntp process and your cell has reliable network connectivity to machines outside your cell, then it is conventional to configure the first AFS machine to refer to a time source outside the cell. When you later install the runntp program on other server machines in the cell, it configures NTPD to choose a time source at random from among the database server machines listed in the /usr/afs/etc/CellServDB file. Time synchronization therefore works in a chained manner: this database server machine refers to a time source outside the cell, the database server machines refer to the machine among them that has access to the most accurate time (NTPD itself includes code for determining this), and each non-database server machine refers to a local database server machine chosen at random from the /usr/afs/etc/CellServDB file. If you ever decide to remove database server functionality from this machine, it is best to transfer responsibility for consulting an external time source to a remaining database server machine.
If your cell does not have network connectivity to external machines, or if the connectivity is not reliable, include the -localclock flag to the runntp command as indicated in the following instructions. The flag tells NTPD to rely on the machine's internal clock when all external time sources are inaccessible. The runntp command has other arguments that are possibly useful given your cell configuration; see the IBM AFS Administration Reference.
Choosing an appropriate external time source is important, but involves more considerations than can be discussed here. If you need help in selecting a source, contact the AFS Product Support group.
As the runntp process initializes NTPD, trace messages sometimes appear on the standard output stream. You can ignore them, but they can be informative if you understand how NTPD works.
# ./bos create <machine name> runntp simple \ "/usr/afs/bin/runntp <host>+" -cell <cell name> -noauth
# ./bos create <machine name> runntp simple \ "/usr/afs/bin/runntp -localclock" -cell <cell name> -noauth
# ./bos create <machine name> runntp simple \ "/usr/afs/bin/runntp -localclock <host>+" \ -cell <cell name> -noauth
The machine you are installing is now an AFS file server machine, database server machine, system control machine, and binary distribution machine. Now make it a client machine by completing the following tasks:
Before installing and configuring the AFS client, copy the necessary files from the AFS CD-ROM to the local /usr/vice/etc directory.
This step places a copy of the AFS initialization script (and related files, if applicable) into the /usr/vice/etc directory. In the preceding instructions for incorporating AFS into the kernel, you copied the script directly to the operating system's conventional location for initialization files. When you incorporate AFS into the machine's startup sequence in a later step, you can choose to link the two files.
On some system types that use a dynamic kernel loader program, you previously copied AFS library files into a subdirectory of the /usr/vice/etc directory. On other system types, you copied the appropriate AFS library file directly to the directory where the operating system accesses it. The following commands do not copy or recopy the AFS library files into the /usr/vice/etc directory, because on some system types the library files consume a large amount of space. If you want to copy them, add the -r flag to the first cp command and skip the second cp command.
# cd /cdrom/sysname/root.client/usr/vice/etc
# cp -p * /usr/vice/etc
# cp -rp C /usr/vice/etc
Every AFS client machine has a copy of the /usr/vice/etc/ThisCell file on its local disk to define the machine's cell membership for the AFS client programs that run on it. The ThisCell file you created in the /usr/afs/etc directory (in Defining Cell Name and Membership for Server Processes) is used only by server processes.
Among other functions, the ThisCell file on a client machine determines the following:
# cd /usr/vice/etc
# rm ThisCell
# cp /usr/afs/etc/ThisCell ThisCell
The /usr/vice/etc/CellServDB file on a client machine's local disk lists the database server machines for each cell that the local Cache Manager can contact. If there is no entry in the file for a cell, or if the list of database server machines is wrong, then users working on this machine cannot access the cell. The chapter in the IBM AFS Administration Guide about administering client machines explains how to maintain the file after creating it.
As the afsd program initializes the Cache Manager, it copies the contents of the CellServDB file into kernel memory. The Cache Manager always consults the list in kernel memory rather than the CellServDB file itself. Between reboots of the machine, you can use the fs newcell command to update the list in kernel memory directly; see the chapter in the IBM AFS Administration Guide about administering client machines.
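For example, to reset the kernel-resident list of database server machines for a foreign cell without rebooting, you might issue a command like the following; it uses the fictitious stateu.edu cell and addresses that appear in the CellServDB example later in this section:

# /usr/afs/bin/fs newcell stateu.edu 138.255.68.93 138.255.68.72 138.255.33.154

The change lasts only until the Cache Manager next initializes, at which point the afsd program rebuilds the kernel list from the CellServDB file.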
The AFS distribution includes the file CellServDB.sample, and you have already copied it to the /usr/vice/etc directory. It includes an entry for all AFS cells that agreed to share their database server machine information at the time your AFS CD-ROM was created. The AFS Product Support group also maintains a copy of the file, updating it as necessary. If you are interested in participating in the global AFS namespace, it is a good policy to consult the file occasionally for updates. Ask the AFS Product Support group for a pointer to its location.
The CellServDB.sample file can be a good basis for the client CellServDB file, because all of the entries in it use the correct format. You can add or remove cell entries as you see fit. Later (in Enabling Access to Foreign Cells) you perform additional steps that enable the Cache Manager actually to reach the cells.
In this section, you add an entry for the local cell to the local CellServDB file. The current working directory is still /usr/vice/etc.
# rm CellServDB
# mv CellServDB.sample CellServDB
# cat /usr/afs/etc/CellServDB >> CellServDB
Then open the file in a text editor to verify that there are no blank lines and that all entries have the required format, which is described below. The ordering of cells is not significant, but it can be convenient to have the client machine's home cell at the top; move it there now if you wish.
>cell_name #organization
where cell_name is the cell's complete Internet domain name (for example, abc.com) and organization is an optional field that follows any number of spaces and the number sign (#). By convention it names the organization to which the cell corresponds (for example, the ABC Corporation).
IP_address #machine_name
where IP_address is the machine's IP address in dotted decimal format (for example, 192.12.105.3). Following any number of spaces and the number sign (#) is machine_name, the machine's fully-qualified hostname (for example, db1.abc.com). In this case, the number sign does not indicate a comment; machine_name is a required field.
The following example shows entries for two cells, each of which has three database server machines:
>abc.com        #ABC Corporation (home cell)
192.12.105.3    #db1.abc.com
192.12.105.4    #db2.abc.com
192.12.105.55   #db3.abc.com
>stateu.edu     #State University cell
138.255.68.93   #serverA.stateu.edu
138.255.68.72   #serverB.stateu.edu
138.255.33.154  #serverC.stateu.edu
The Cache Manager uses a cache on the local disk or in machine memory to store local copies of files fetched from file server machines. As the afsd program initializes the Cache Manager, it sets basic cache configuration parameters according to definitions in the local /usr/vice/etc/cacheinfo file. The file has three colon-separated fields: the first names the local directory on which to mount the AFS filespace (conventionally /afs), the second names the local disk directory to use for the cache (conventionally /usr/vice/cache; the value is ignored for a memory cache but must still be present), and the third specifies the cache size as a number of kilobyte (1024-byte) blocks.
The values you define must meet the following requirements.
If you configure a memory cache that is larger than the amount of memory actually available, the afsd program exits without initializing the Cache Manager and produces a message like the following on the standard output stream:

afsd: memCache allocation failure at number KB
The number value is how many kilobytes were allocated just before the failure, and so indicates the approximate amount of memory available.
Within these hard limits, the factors that determine appropriate cache size include the number of users working on the machine, the size of the files with which they work, and (for a memory cache) the number of processes that run on the machine. The higher the demand from these factors, the larger the cache needs to be to maintain good performance.
Disk caches smaller than 10 MB do not generally perform well. Machines serving multiple users usually perform better with a cache of at least 60 to 70 MB. The point at which enlarging the cache further does not really improve performance depends on the factors mentioned previously and is difficult to predict.
Memory caches smaller than 1 MB are nonfunctional, and the performance of caches smaller than 5 MB is usually unsatisfactory. Suitable upper limits are similar to those for disk caches but are probably determined more by the demands on memory from other sources on the machine (number of users and processes). Machines running only a few processes possibly can use a smaller memory cache.
Note: | Not all file system types that an operating system supports are necessarily supported for use as the cache partition. For possible restrictions, see the IBM AFS Release Notes. |
To configure the disk cache, perform the following procedures:
# mkdir /usr/vice/cache
# echo "/afs:/usr/vice/cache:#blocks" > /usr/vice/etc/cacheinfo
The following example defines the disk cache size as 50,000 KB:
# echo "/afs:/usr/vice/cache:50000" > /usr/vice/etc/cacheinfo
To configure a memory cache, create the cacheinfo file to define the configuration parameters discussed previously. The following instruction shows the standard mount location, /afs, and the standard cache location, /usr/vice/cache (though the exact value of the latter is irrelevant for a memory cache).
# echo "/afs:/usr/vice/cache:#blocks" > /usr/vice/etc/cacheinfo
The following example allocates 25,000 KB of memory for the cache.
# echo "/afs:/usr/vice/cache:25000" > /usr/vice/etc/cacheinfo
By convention, the Cache Manager mounts the AFS filespace on the local /afs directory. In this section you create that directory.
The afsd program sets several cache configuration parameters as it initializes the Cache Manager, and starts daemons that improve performance. You can use the afsd command's arguments to override the parameters' default values and to change the number of some of the daemons. Depending on the machine's cache size, its amount of RAM, and how many people work on it, you can sometimes improve Cache Manager performance by overriding the default values. For a discussion of all of the afsd command's arguments, see its reference page in the IBM AFS Administration Reference.
The afsd command line in the AFS initialization script on each system type includes an OPTIONS variable. You can use it to set nondefault values for the command's arguments, in one of the following ways:
You use two variables in the AFS initialization script to specify the path to the options file: CONFIG and AFSDOPT. On system types that define a conventional directory for configuration files, the CONFIG variable indicates it by default; otherwise, the variable indicates an appropriate location.
List the desired afsd options on a single line in the options file, separating each option with one or more spaces. The following example sets the -stat argument to 2500, the -daemons argument to 4, and the -volumes argument to 100.
-stat 2500 -daemons 4 -volumes 100
Note: | Do not set the OPTIONS variable to $SMALL, $MEDIUM, or $LARGE on a machine that uses a memory cache. The arguments it sets are appropriate only on a machine that uses a disk cache. |
The script (or on some system types the afsd options file named by the AFSDOPT variable) defines a value for each of SMALL, MEDIUM, and LARGE that sets afsd command arguments appropriately for client machines of different sizes:
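The exact argument values differ between AFS versions and system types, so check the script installed on your machine rather than relying on the following, but as an illustration the definitions typically resemble these lines:

LARGE="-stat 2800 -dcache 2400 -daemons 5 -volumes 128"
MEDIUM="-stat 2000 -dcache 800 -daemons 3 -volumes 70"
SMALL="-stat 300 -dcache 100 -daemons 2 -volumes 50"
OPTIONS=$MEDIUM

Setting OPTIONS to one of these variables selects that collection of arguments; alternatively, you can set OPTIONS to an explicit list of arguments or use the options-file mechanism described previously.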
# mkdir /afs
afs 4 none none
# cp /usr/vice/etc/afs.conf /etc/sysconfig/afs
Use one of the methods described in the introduction to this section to add the following flags to the afsd command line. If you intend for the machine to remain an AFS client, also set any performance-related arguments you wish.
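The flags typically added at this stage include -memcache (on a machine that is to use a memory cache rather than a disk cache) and -verbose (to display a trace of the Cache Manager's initialization on the standard output stream); confirm against the choices you made earlier in this section which of them apply. As an illustration, an options-file line that combines them with the performance arguments shown earlier is:

-stat 2500 -daemons 4 -volumes 100 -memcache -verbose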
The machine is now configured as an AFS file server and client machine. In this final phase of the installation, you initialize the Cache Manager and then create the upper levels of your AFS filespace, among other procedures. The procedures are:
At this point you run the AFS initialization script to verify that it correctly invokes all of the necessary programs and AFS processes, and that they start correctly. The following are the relevant commands:
On system types that use a dynamic loader program, you must reboot the machine before running the initialization script, so that it can freshly load AFS modifications into the kernel.
If there are problems during the initialization, attempt to resolve them. The AFS Product Support group can provide assistance if necessary.
# /usr/afs/bin/bos shutdown <machine name> -wait
# ps appropriate_ps_options | grep bosserver
# kill bosserver_PID
On AIX systems:
# cd /
# shutdown -r now
login: root
Password: root_password
# /etc/rc.afs
On Digital UNIX systems:
# /sbin/init.d/afs start
On HP-UX systems:
# /sbin/init.d/afs start
On IRIX systems:
# cd /
# shutdown -i6 -g0 -y
login: root
Password: root_password
# /etc/chkconfig -f afsserver on
# /etc/chkconfig -f afsclient on
# /etc/init.d/afs start
On Linux systems:
# cd /
# shutdown -r now
login: root
Password: root_password
# /etc/rc.d/init.d/afs start
On Solaris systems:
# cd /
# shutdown -i6 -g0 -y
login: root
Password: root_password
# /etc/init.d/afs start
On machines that use a disk cache, it can take a while to initialize the Cache Manager for the first time, because the afsd program must create all of the Vn files in the cache directory. Subsequent Cache Manager initializations do not take nearly as long, because the Vn files already exist.
As a basic test of correct AFS functioning, issue the klog command to authenticate as the admin user. Provide the password (admin_passwd) you defined in Initializing Cell Security.
# /usr/afs/bin/klog admin
Password: admin_passwd

# /usr/afs/bin/tokens
Tokens held by the Cache Manager:
User's (AFS ID 1) tokens for afs@abc.com [Expires May 22 11:52]
   --End of list--
# /usr/afs/bin/bos status <machine name>
# cd /
# /usr/afs/bin/fs checkvolumes
Now that you have confirmed that the AFS initialization script works correctly, take the action necessary to have it run automatically at each reboot. Proceed to the instructions for your system type:
rcafs:2:wait:/etc/rc.afs > /dev/console 2>&1 # Start AFS services
# cd /usr/vice/etc
# rm rc.afs
# ln -s /etc/rc.afs

# cd /sbin/init.d
# ln -s ../init.d/afs /sbin/rc3.d/S67afs
# ln -s ../init.d/afs /sbin/rc0.d/K66afs

# cd /usr/vice/etc
# rm afs.rc
# ln -s /sbin/init.d/afs afs.rc

# cd /sbin/init.d
# ln -s ../init.d/afs /sbin/rc2.d/S460afs
# ln -s ../init.d/afs /sbin/rc2.d/K800afs

# cd /usr/vice/etc
# rm afs.rc
# ln -s /sbin/init.d/afs afs.rc

# cd /etc/init.d
# ln -s ../init.d/afs /etc/rc2.d/S35afs
# ln -s ../init.d/afs /etc/rc0.d/K35afs

# cd /usr/vice/etc
# rm afs.rc
# ln -s /etc/init.d/afs afs.rc
# /sbin/chkconfig --add afs
# cd /usr/vice/etc
# rm afs.rc afs.conf
# ln -s /etc/rc.d/init.d/afs afs.rc
# ln -s /etc/sysconfig/afs afs.conf

# cd /etc/init.d
# ln -s ../init.d/afs /etc/rc3.d/S99afs
# ln -s ../init.d/afs /etc/rc0.d/K66afs

# cd /usr/vice/etc
# rm afs.rc
# ln -s /etc/init.d/afs afs.rc
If you have not previously run AFS in your cell, you now configure the top levels of your cell's AFS filespace. If you have run a previous version of AFS, the filespace is already configured. Proceed to Storing AFS Binaries in AFS.
You created the root.afs volume in Starting the File Server, Volume Server, and Salvager, and the Cache Manager mounted it automatically on the local /afs directory when you ran the AFS initialization script in Verifying the AFS Initialization Script. You now set the access control list (ACL) on the /afs directory; creating, mounting, and setting the ACL are the three steps required when creating any volume.
After setting the ACL on the root.afs volume, you create your cell's root.cell volume, mount it as a subdirectory of the /afs directory, and set the ACL. Create both a read/write and a regular mount point for the root.cell volume. The read/write mount point enables you to access the read/write version of replicated volumes when necessary. Creating both mount points essentially creates separate read-only and read-write copies of your filespace, and enables the Cache Manager to traverse the filespace on a read-only path or read/write path as appropriate. For further discussion of these concepts, see the chapter in the IBM AFS Administration Guide about administering volumes.
Then replicate both the root.afs and root.cell volumes. This is required if you want to replicate any other volumes in your cell, because all volumes mounted above a replicated volume must themselves be replicated in order for the Cache Manager to access the replica.
When the root.afs volume is replicated, the Cache Manager is programmed to access its read-only version (root.afs.readonly) whenever possible. To make changes to the contents of the root.afs volume (when, for example, you mount another cell's root.cell volume at the second level in your filespace), you must mount the root.afs volume temporarily, make the changes, release the volume and remove the temporary mount point. For instructions, see Enabling Access to Foreign Cells.
Note that there is already an ACL entry that grants all seven access rights to the system:administrators group. It is a default entry that AFS places on every new volume's root directory.
# /usr/afs/bin/fs setacl /afs system:anyuser rl
For the partition name argument, substitute the name of one of the machine's AFS server partitions (such as /vicepa). For the cellname argument, substitute your cell's fully-qualified Internet domain name (such as abc.com).
# /usr/afs/bin/vos create <machine name> <partition name> root.cell
# /usr/afs/bin/fs mkmount /afs/cellname root.cell
# /usr/afs/bin/fs setacl /afs/cellname system:anyuser rl

# cd /afs
# ln -s full_cellname short_cellname
By convention, the name of a read/write mount point begins with a period, both to distinguish it from the regular mount point and to make it visible only when the -a flag is used on the ls command.
Change directory to /usr/afs/bin to make it easier to access the command binaries.
# cd /usr/afs/bin
# ./fs mkmount /afs/.cellname root.cell -rw

# ./vos addsite <machine name> <partition name> root.afs
# ./vos addsite <machine name> <partition name> root.cell

# ./fs examine /afs
# ./fs examine /afs/cellname

# ./vos release root.afs
# ./vos release root.cell

# ./fs checkvolumes
# ./fs examine /afs
# ./fs examine /afs/cellname
In the conventional configuration, you make AFS client binaries and configuration files available in the subdirectories of the /usr/afsws directory on client machines (afsws is an acronym for AFS workstation). You can conserve local disk space by creating /usr/afsws as a link to an AFS volume that houses the AFS client binaries and configuration files for this system type.
In this section you create the necessary volumes. The conventional location to which to link /usr/afsws is /afs/cellname/sysname/usr/afsws, where sysname is the appropriate system type name as specified in the IBM AFS Release Notes. The instructions in Installing Additional Client Machines assume that you have followed the instructions in this section.
If you have previously run AFS in the cell, the volumes possibly already exist. If so, you need to perform Step 8 only.
The current working directory is still /usr/afs/bin, which houses the fs and vos command suite binaries. In the following commands, it is possible you still need to specify the pathname to the commands, depending on how your PATH environment variable is set.
# vos create <machine name> <partition name> sysname
# vos create <machine name> <partition name> sysname.usr
# vos create <machine name> <partition name> sysname.usr.afsws

# fs mkmount -dir /afs/.cellname/sysname -vol sysname
# fs mkmount -dir /afs/.cellname/sysname/usr -vol sysname.usr
# fs mkmount -dir /afs/.cellname/sysname/usr/afsws -vol sysname.usr.afsws
# vos release root.cell
# fs checkvolumes

# cd /afs/.cellname/sysname
# fs setacl -dir . usr usr/afsws -acl system:anyuser rl
If you wish, you can set the volume's quota to a finite value after you complete the copying operation. At that point, use the vos examine command to determine how much space the volume is occupying. Then issue the fs setquota command to set a quota that is slightly larger.
# fs setquota /afs/.cellname/sysname/usr/afsws 0
# cd /afs/.cellname/sysname/usr/afsws
# cp -rp /cdrom/sysname/bin .
# cp -rp /cdrom/sysname/etc .
# cp -rp /cdrom/sysname/include .
# cp -rp /cdrom/sysname/lib .
# cd /afs/.cellname/sysname/usr/afsws
# fs setacl -dir etc include lib -acl system:authuser rl \
  system:anyuser none
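When the copying shown above is complete, you can apply the finite quota described earlier. The value 60000 below is only a placeholder; take the actual figure from the vos examine output and add a small margin:

# vos examine sysname.usr.afsws
# fs setquota /afs/.cellname/sysname/usr/afsws 60000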
# ln -s /afs/cellname/@sys/usr/afsws /usr/afsws
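If you want to confirm which value the Cache Manager substitutes for the @sys variable on this machine, you can issue the fs sysname command:

# /usr/afs/bin/fs sysname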
The AFS distribution includes the following documents:
The AFS CD-ROM for each system type has a top-level Documentation directory, with a subdirectory for each document format provided. The different formats are suitable for online viewing, printing, or both.
This section explains how to create and mount a volume to house the documents, making them available to your users. The recommended mount point for the volume is /afs/cellname/afsdoc. If you wish, you can create a link to the mount point on each client machine's local disk, called /usr/afsdoc. Alternatively, you can create a link to the mount point in each user's home directory. You can also choose to permit users to access only certain documents (most probably, the IBM AFS User Guide) by creating different mount points or setting different ACLs on different document directories.
The current working directory is still /usr/afs/bin, which houses the fs and vos command suite binaries you use to create and mount volumes. In the following commands, it is possible you still need to specify the pathname to the commands, depending on how your PATH environment variable is set.
If you wish, you can set the volume's quota to a finite value after you complete the copying operations. At that point, use the vos examine command to determine how much space the volume is occupying. Then issue the fs setquota command to set a quota that is slightly larger.
# vos create <machine name> <partition name> afsdoc -maxquota 0
# fs mkmount -dir /afs/.cellname/afsdoc -vol afsdoc
# vos release root.cell
# fs checkvolumes

# cd /afs/.cellname/afsdoc
# fs setacl . system:anyuser rl
# mkdir format_name
# cd format_name
# cp -rp /cdrom/Documentation/format .
If you choose to store the HTML version of the documents in AFS, note that in addition to a subdirectory for each document there are several files with a .gif extension, which enable readers to move easily between sections of a document. The file called index.htm is an introductory HTML page that contains a hyperlink to each of the documents. For online viewing to work properly, these files must remain in the top-level HTML directory (the one named, for example, /afs/cellname/afsdoc/html).
# ln -s /afs/cellname/afsdoc/format_name /usr/afsdoc
An alternative is to create a link in each user's home directory to the /afs/cellname/afsdoc/format_name directory.
You can also choose to store other system binaries in AFS volumes, such as the standard UNIX programs conventionally located in local disk directories such as /etc, /bin, and /lib. Storing such binaries in an AFS volume not only frees local disk space, but makes it easier to update binaries on all client machines.
The following is a suggested scheme for storing system binaries in AFS. It does not include instructions, but you can use the instructions in Storing AFS Binaries in AFS (which are for AFS-specific binaries) as a template.
Some files must remain on the local disk for use when AFS is inaccessible (during bootup and file server or network outages). The required binaries include the following:
In most cases, it is more secure to enable only locally authenticated users to access system binaries, by granting the l (lookup) and r (read) permissions to the system:authuser group on the ACLs of directories that contain the binaries. If users need to access a binary while unauthenticated, however, the ACL on its directory must grant those permissions to the system:anyuser group.
The following chart summarizes the suggested volume and mount point names for storing system binaries. It uses a separate volume for each directory. You already created a volume called sysname for this machine's system type when you followed the instructions in Storing AFS Binaries in AFS.
You can name volumes in any way you wish, and mount them at other locations than those suggested here. However, this scheme has several advantages:
Volume Name | Mount Point |
---|---|
sysname | /afs/cellname/sysname |
sysname.bin | /afs/cellname/sysname/bin |
sysname.etc | /afs/cellname/sysname/etc |
sysname.usr | /afs/cellname/sysname/usr |
sysname.usr.afsws | /afs/cellname/sysname/usr/afsws |
sysname.usr.bin | /afs/cellname/sysname/usr/bin |
sysname.usr.etc | /afs/cellname/sysname/usr/etc |
sysname.usr.inc | /afs/cellname/sysname/usr/include |
sysname.usr.lib | /afs/cellname/sysname/usr/lib |
sysname.usr.loc | /afs/cellname/sysname/usr/local |
sysname.usr.man | /afs/cellname/sysname/usr/man |
sysname.usr.sys | /afs/cellname/sysname/usr/sys |
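The chart does not repeat the creation commands; each of these volumes is created, mounted, and given an ACL with the same pattern used in Storing AFS Binaries in AFS. As a sketch, handling the sysname.usr.bin volume might look like the following (substitute your own machine, partition, and cell names, and grant the l and r permissions to system:anyuser instead if unauthenticated users need these binaries):

# vos create <machine name> <partition name> sysname.usr.bin
# fs mkmount -dir /afs/.cellname/sysname/usr/bin -vol sysname.usr.bin
# fs setacl -dir /afs/.cellname/sysname/usr/bin -acl system:authuser rl system:anyuser none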
In this section you create a mount point in your AFS filespace for the root.cell volume of each foreign cell that you want to enable your users to access. For users working on a client machine to access the cell, there must in addition be an entry for it in the client machine's local /usr/vice/etc/CellServDB file. (The instructions in Creating the Client CellServDB File suggest that you use the CellServDB.sample file included in the AFS distribution as the basis for your cell's client CellServDB file. The sample file lists all of the cells that had agreed to participate in the AFS global namespace at the time your AFS CD-ROM was created. As mentioned in that section, the AFS Product Support group also maintains a copy of the file, updating it as necessary.)
The chapter in the IBM AFS Administration Guide about cell administration and configuration issues discusses the implications of participating in the global AFS namespace. The chapter about administering client machines explains how to maintain knowledge of foreign cells on client machines, and includes suggestions for maintaining a central version of the file in AFS.
Note: | You need to issue the fs mkmount command only once for each foreign cell's root.cell volume. You do not need to repeat the command on each client machine. |
Substitute your cell's name for cellname.
# cd /afs/.cellname
# /usr/afs/bin/fs mkmount temp root.afs
Repeat the fs mkmount command for each foreign cell you wish to mount at this time.
# /usr/afs/bin/fs mkmount temp/foreign_cell root.cell -c foreign_cell
Issue the following commands only once.
# /usr/afs/bin/fs rmmount temp
# /usr/afs/bin/vos release root.afs
# /usr/afs/bin/fs checkvolumes
For each cell that does not already have an entry, complete the following instructions:
# /usr/afs/bin/fs newcell <foreign_cell> <dbserver1> \
     [<dbserver2>] [<dbserver3>]
# mkdir common
# mkdir common/etc
# cp /usr/vice/etc/CellServDB common/etc
# /usr/afs/bin/vos release root.cell
# ls /afs/foreign_cell
This section discusses ways to improve the security of AFS data in your cell. Also see the chapter in the IBM AFS Administration Guide about configuration and administration issues.
As on any machine, it is important to prevent unauthorized users from logging onto an AFS server or client machine as the local superuser root. Take care to keep the root password secret.
The local root superuser does not have special access to AFS data through the Cache Manager (as members of the system:administrators group do), but it does have the following privileges:
Following are suggestions for managing AFS administrative privilege:
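One simple practice, for example, is to review periodically who holds administrative privilege. The first command below lists the users in this machine's /usr/afs/etc/UserList file, and the second lists the members of the system:administrators group:

# /usr/afs/bin/bos listusers <machine name>
# /usr/afs/bin/pts membership system:administrators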
Some subdirectories of the /usr/afs directory contain files crucial to cell security. Unauthorized users must not read or write to these files because of the potential for misuse of the information they contain.
As the BOS Server initializes for the first time on a server machine, it creates several files and directories (as mentioned in Starting the BOS Server). It sets their owner to the local superuser root and sets their mode bits to enable writing by the owner only; in some cases, it also restricts reading.
At each subsequent restart, the BOS Server checks that the owner and mode bits on these files are still set appropriately. If they are not, it writes the following message to the /usr/afs/logs/BosLog file:
Bosserver reports inappropriate access on server directories
The BOS Server does not reset the mode bits, which enables you to set alternate values if you wish.
The following chart lists the expected mode bit settings. A question mark indicates that the BOS Server does not check that mode bit.
/usr/afs | drwxr?xr-x |
/usr/afs/backup | drwx???--- |
/usr/afs/bin | drwxr?xr-x |
/usr/afs/db | drwx???--- |
/usr/afs/etc | drwxr?xr-x |
/usr/afs/etc/KeyFile | -rw????--- |
/usr/afs/etc/UserList | -rw?????-- |
/usr/afs/local | drwx???--- |
/usr/afs/logs | drwxr?xr-x |
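To compare a machine's actual protections against the chart, you can list the relevant directories and files directly:

# ls -ld /usr/afs /usr/afs/backup /usr/afs/bin /usr/afs/db /usr/afs/etc /usr/afs/local /usr/afs/logs
# ls -l /usr/afs/etc/KeyFile /usr/afs/etc/UserList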
Follow the instructions in this section only if you do not wish this machine to remain an AFS client. Removing client functionality means that you cannot use this machine to access AFS files.
# cd /usr/vice/etc
# rm *
# rm -rf C

# ln -s /usr/afs/etc/ThisCell ThisCell
# ln -s /usr/afs/etc/CellServDB CellServDB
# /etc/chkconfig -f afsclient off
# cd /
# shutdown appropriate_options