Sun Certified System Administrator for Solaris* 2.6 (Part 2) - Cramsession
The Solaris 2.X network environment supports:
Server - A system that provides services to other systems in its networked environment. These services include OS services for diskless clients and AutoClients, Solaris CD image and JumpStart* directory access for remote installations, and file sharing via the Network File System (NFS) service.
Client - A system that uses remote services from a server. Clients with limited disk space such as diskless, AutoClient and JavaStation* systems require a server to function.
File Server - Provides access to application and user data via the Network File System (NFS) service.
AutoClient Server - A system that provides AutoClient systems with access to the operating system and applications via the network.
Standalone - A system that can operate autonomously and does not require a server to function. It has enough disk space to contain root (/), /usr, and /export/home file systems and swap space. Thus it has local access to operating system software, executables, virtual memory space and user created files.
All three configurations require a CPU, memory, monitor, mouse and keyboard. A network interface is required for the diskless and AutoClient systems and is optional for the standalone workstation. A disk is required for the AutoClient and standalone configurations. A CD-ROM drive is also required for the standalone.
The diskless client accesses file systems remotely from a server. The disk on the AutoClient is used for local swapping and caching the root (/) and /usr file systems obtained from the server. The disk on the standalone workstation is used for root (/), /usr, and /export/home file systems and swap space. Thus it has local access to operating system software, executables, virtual memory space and user created files.
The diskless client does not have a disk and must remotely access its root (/), /usr, /home and any other needed file systems from a server.
The AutoClient system requires a minimum of a 100 MB local disk for swapping and for caching the root (/) and /usr file systems downloaded from the AutoClient server. All other file systems must be remotely accessed from the server.
System configuration: There are two methods to preconfigure system information. The first involves the use of the sysidcfg file. The second involves using a name service.
System installation: There are four methods for installing Solaris. These are interactive, Web Start, JumpStart and Custom JumpStart.
Post Installation: Post installation consists of adding the appropriate patches or packages.
The SPARC software groups range from 281 MB to a maximum of 616 MB. Swap space must be a minimum of 32 MB.
Verify that the hardware is supported using the Hardware Compatibility List.
Software Package: A collection of files and directories required for a software product delivered in a standardized installable/removable format.
Software Cluster: A collection of related software packages that work together to provide a service or capability.
The software groups provide various clusters:
Solstice AdminSuite is a collection of GUI tools and commands used to perform administrative tasks such as managing users, groups, hosts, system files, printers, disks, file systems, terminals and modems. These tools and commands are faster than using numerous Solaris commands to perform the same tasks, update system files automatically (which eliminates the risk of editing errors) and allow systems to be managed remotely.
The AdminSuite consists of the following GUI tools:
In addition, several commands provide additional functionality such as software usage monitoring and halting/rebooting remote systems.
Installation Process:
Local Installation Process:
To add support for a standalone system, OS server or other type of system using the Host Manager:
The command line equivalent for adding a host uses the admhostadd command with the following arguments:
admhostadd -i client_ip_address -e client_ethernet_address specific_settings client
where specific_settings are arguments such as -x type=DATALESS, -x tz=US/Mountain, -x os=sparc.sun4c.Solaris_2.5, etc., and client is the system name of the client.
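For example, to add a dataless client (the addresses and host name shown are illustrative):
     admhostadd -i 129.152.221.35 -e 8:0:20:1a:2b:3c -x type=DATALESS -x tz=US/Mountain -x os=sparc.sun4c.Solaris_2.5 pluto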
The Storage Manager consists of the Load Context window, the File Manager and the Disk Manager tools. The Load Context window allows selecting the host to manage with the File Manager and the disk set to manage with the Disk Manager.
To view mount point information using the Storage Manager:
To view disk slice information using the Storage Manager:
The Database Manager is a graphical user interface for managing the various network-related (/etc) system files such as hosts, passwd, services and timezone. To view timezone information (the contents of /etc/timezone) using the Database Manager:
To view the characteristics of a serial port using the Serial Port Manager:
To add a user account using the User Manager:
To add a user to a group using the Group Manager:
The Printer Manager can be used to install both locally attached printers and network printers.
To install a local printer using the Print Manager:
To install a network printer using the Print Manager:
The nvalias command can be used to create a custom device alias. The format of the command is:
nvalias alias device-path
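For example, to create an alias named disk3 for a disk at a specific SBus address (the device path shown is illustrative):
     nvalias disk3 /sbus@1f,0/esp@0,40000/sd@3,0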
This command is stored in the nvramrc parameter. The contents of the nvramrc parameter is called the script. In addition to storing user-defined commands, this parameter is used by device drivers to save start-up configuration variables, to patch device driver code and bugs, and for installation-specific device configuration.
If the use-nvramrc parameter is set to true, then the script is executed during start-up. The script editor nvedit can be used to copy the contents of the script into a temporary buffer where it can be edited. After editing, the nvstore command can be used to copy the contents of the temporary buffer to nvramrc. The nvquit command is used to discard the contents of the temporary buffer.
The alias defined by the nvalias command remains in the script until either the nvunalias or set-defaults command is executed. The set-defaults command can be undone by the nvrecover command (if the script has not been edited).
Any aliases defined by the devalias command are lost during a reboot or system reset. Aliases defined by the nvalias command are not lost.
The nvunalias alias command deletes the specified alias from nvramrc.
System configuration parameters are stored in the system non-volatile RAM (NVRAM) otherwise known as EEPROM. These parameters determine the initial configuration and related communication characteristics of the system and retain their value even if the power to the system is shut off.
The value of these parameters can be viewed via the Forth Monitor (OpenBoot) printenv command and modified by using the setenv OpenBoot command.
The eeprom(1M) system command can be used to both view and modify parameter values.
To view a parameter, use the syntax:
     eeprom parameter
where parameter is the name of the NVRAM parameter.
To modify a parameter, use the command:
     eeprom parameter=value
where parameter is the name of the NVRAM parameter and value is the value to assign to the parameter.
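For example, to view and then disable the auto-boot? parameter (the ? may need quoting or escaping in some shells):
     eeprom auto-boot?
     eeprom auto-boot?=false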
Setting the diag-switch? parameter to true allows displaying of power-on initialization messages on TTYA.
The Stop A keyboard command or keyboard chord is used to abort the system and return to OpenBoot Monitor mode. The following table lists the available SPARC System Keyboard chords:
| COMMAND | DESCRIPTION |
|---|---|
| Stop | Bypass POST |
| Stop A | Abort |
| Stop D | Enter diagnostic mode |
| Stop F | Enter FORTH Monitor on TTYA instead of probing |
| Stop N | Reset contents of NVRAM to default values |
| Run Level | State | Functionality |
|---|---|---|
| 0 | Power-down | Safe to turn off power to the system. |
| 1 | Administrative Single-user | All available file systems with user logins allowed. The terminal from which you issue this command becomes the Console. |
| 2 | Multiuser | For normal operations. Multiple users can access the system and the entire file system. All daemons are running except for NFS server and syslog. |
| 3 | Multiuser w/ NFS | For normal operations with NFS resource-sharing available. |
| 4 | Alternative multiuser | This level is currently unavailable. |
| 5 | Power-down | Shutdown the system and automatically turn off system power (if possible). |
| 6 | Reboot | Shutdown the system to run level 0, and then reboot to multiuser state (or whatever level is the default in the inittab file). |
| s or S | Single-user | Single user mode with all file systems mounted and accessible. |
The boot sequence is: Boot PROM (SPARC) or BIOS (x86), then the boot programs (bootblk and ufsboot), then kernel initialization, followed by the init process.
The init program is a general process spawner. Its primary purpose is to create or stop processes based on the run level and the information stored in /etc/inittab. In addition, it sets the default environment variables defined in /etc/default/init.
The kernel consists of a small generic core with a platform-specific component and a set of modules. The system determines which devices are attached at boot time. Then the kernel configures itself dynamically, loading needed modules into memory. Device drivers are loaded automatically when devices are accessed. This dynamic loading is called autoconfiguration.
Autoconfiguration:
| Directory | Contains |
|---|---|
| /platform/`uname -m`/kernel | Platform-specific kernel modules |
| /kernel | Common kernel modules needed by all platforms for booting |
| /usr/kernel | Common kernel modules for all platforms within a particular instruction set |
The directories that the kernel searches for kernel modules can be changed by use of the moddir variable in the /etc/system file.
The /etc/system file is used to customize the way in which the kernel modules are loaded.
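For example, a moddir entry in /etc/system might look like the following sketch, where /opt/custom/kernel is a hypothetical additional module directory:
     moddir: /platform/sun4u/kernel /kernel /usr/kernel /opt/custom/kernel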
There are 10 commands that can be used to change the run level:
| Command | Path | Run Level(s) | Description |
|---|---|---|---|
| fastboot | /usr/ucb | 6 | Restart the operating system without checking the disks |
| fasthalt | /usr/ucb | 0 | Stop the processor without checking the disks |
| halt | /usr/sbin | 0 | Stop the processor |
| init | /sbin | 012356S | Process control initialization |
| poweroff | /usr/sbin | 5 | Stop the processor and power off the system (if possible) |
| reboot | /usr/sbin | 6 | Restart the operating system |
| shutdown | /usr/sbin | 012356S | Shutdown system |
| shutdown | /usr/ucb | 6S | Shutdown system at a given time |
| telinit | /etc | 012356S | Process control initialization |
| uadmin | /sbin | 056 | Administrative Control |
The init command can be used to change to any of the 8 run levels by executing the commands identified in /etc/inittab and sending a SIGTERM, and possibly a SIGKILL, to any processes not in /etc/inittab. Three pseudo-states (a, b, and c) can be defined to execute commands without actually changing run levels. For each run level there is an entry in /etc/inittab to run the appropriate /etc/rc? script, which in turn executes the scripts in the appropriate /etc/rc?.d directory.
The /usr/sbin/shutdown command provides a grace period and warning message capability along with executing the appropriate /etc/rc?.d scripts.
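For example, to shut down to run level 0 after a 60-second grace period with a broadcast warning (the message text is illustrative):
     /usr/sbin/shutdown -y -g 60 -i 0 "Going down for maintenance"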
The /usr/ucb/shutdown command shuts the system down to single user mode at the specified time. At intervals, a warning message is displayed on the terminals of logged in users. The time can be now to indicate immediate shutdown.
The telinit(1M) command is for compatibility and is actually linked to the init(1M) command.
The uadmin(1M) command provides basic administrative functions such as shutting down or rebooting a system.
The init(1M) and shutdown(1M) commands can be used to change to the various run levels. Both execute the commands in the /etc/rc?.d directories. The shutdown(1M) also provides a grace period and warning message.
When the system is booted, the kernel builds a device hierarchy referred to as the device tree to represent the devices attached to the system. This tree is a hierarchy of interconnected buses with the devices attached to the buses as nodes. The root node is the main physical address bus.
Each device node can have:
The full device path name identifies a device in terms of its location in the device tree by identifying a series of node names separated by slashes with the root indicated by a leading slash. Each node name in the full device path name has the form:
driver-name@unit-address:device-arguments
Where driver-name identifies the device name, @unit-address is the physical address of the device in the address space of the parent and :device-arguments defines additional information regarding the device software.
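For example, a full device path name for a SCSI disk on an SBus-based system might look like this (illustrative):
     /sbus@1f,0/esp@0,40000/sd@3,0:a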
Devices are referenced in three ways:
The physical device name of a device is the same as the full device path name. The physical device files are located under the /devices directory.
Logical device names are used to identify disk, tape and CD-ROM devices and provide either raw access (one character at a time) or block access (via a buffer for accessing large blocks of data). The logical names of SCSI devices identify the SCSI controller (bus), target (SCSI target ID), drive (almost always 0) and slice (partition).
For example: /dev/dsk/c1t2d0s3
dsk identifies the device as a block disk (rdsk would indicate a raw disk), addressed as SCSI controller 1, target 2, drive 0 and slice 3.
Logical device names are located under the /dev directory and are linked to the appropriate physical device name file under the /devices directory.
Logical device names are used by the following commands:
The format(1M) (logical device names) and dmesg(1M) (physical/instance names) commands can be used to display the disk devices.
An instance name is an abbreviated name for a device displayed by the dmesg(1M), sysdef(1M) and prtconf(1M) commands. For disks, it typically consists of a driver binding name and an instance number, such as sd0.
The prtconf(1M) command displays device information using both physical and instance names.
The function of the /etc/path_to_inst file is to map the full device path name of devices to the instance name (driver binding name and instance number) of those devices.
The format of the file is:
     "physical name"    instance number    "driver binding number"
To grow a file system, the following steps must be accomplished:
The metadevices are located under the /dev/md/rdsk directory.
A disk label or Volume Table of Contents (VTOC) is a special area of every disk set aside to store information about the disk controller, geometry and slices (partitions). The interactive format(1M) or the commands fmthard(1M) and fdisk(1M) can be used to create a VTOC.
The slices or partitions of a disk are defined by a slice number, a tag that identifies its intended use and the starting/ending cylinder numbers. These partitions are then formatted and mounted as file systems.
The prtvtoc(1M) command expects as an argument either a block disk name (/dev/dsk) or a raw disk name (/dev/rdsk) of an existing slice or partition.
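For example, to display the VTOC of a disk (device name illustrative):
     prtvtoc /dev/rdsk/c0t3d0s2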
File system inconsistencies caused by operator errors or defective hardware/software can result in the corruption and loss of data, the inability to perform operations or even system failure. The fsck command checks the integrity of the internal set of tables used by a file system to keep track of inodes used and available blocks and attempts to correct any discovered inconsistencies.
The fsck command is used to check and repair file systems. File systems are usually checked automatically as they are mounted during a system boot. Also, fsck can be executed manually whenever file system damage is suspected. The file system should be unmounted while it is being checked. The fsck(1M) command can check cachefs, s5fs and ufs file systems.
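For example, to manually check an unmounted file system (device name illustrative):
     fsck /dev/rdsk/c0t3d0s5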
The superblock includes the following parameters:
The following components of a UFS file system are checked by fsck:
There are three types of data blocks. Regular (or plain) data blocks, which contain the data of a file, symbolic-link data blocks which contain the path name associated with a symbolic-link and directory data blocks which contain directory entries. The fsck command can only check directory data blocks.
A file system state flag is used to record the condition of a file system:
| Value | Meaning |
|---|---|
| FSCLEAN | The file system was unmounted cleanly. Will not be checked during boot. |
| FSSTABLE | The file system has not changed since its last checkpoint. |
| FSACTIVE | The file system has been modified and may not be synchronized with the in-memory copy of the superblock. |
| FSBAD | The root file system was mounted when the state was not FSSTABLE or FSCLEAN. |
Disk-based file systems reside on hard disks, CD-ROMs and diskettes. They provide data storage and access for the system to which they are attached. The data is permanent in that when the system is shut down in an orderly manner, the data is not lost. The types of disk-based file systems are Unix (UFS), High Sierra (HSFS) and DOS-based (PCFS).
RAM-based or virtual file systems are in-memory file systems that provide access to special kernel information and facilities. When the system is shut down, the information is lost. The types of virtual file systems are Cache (CacheFS), Temporary (TMPFS), Loopback (LOFS), Process (PROCFS), Named Pipe (FIFOFS), File Descriptor (FDFS), Dynamic File Descriptors (NAMEFS), Special (SPECFS) and Swap (SWAPFS).
Network-based file systems are typically disk-based file systems that are accessible via a network and provide data storage and access for remote systems. The Network File System (NFS) is the only network-based file system available in the Solaris environment.
To create a UFS file system on a disk slice or partition, the slice is divided into one or more cylinder groups. A cylinder group is one or more consecutive disk cylinders. A disk cylinder is a set of tracks across a group of platters that are the same radial distance from the center of the platter.
The cylinder group is divided into blocks. There are four types of blocks: the boot block, the superblock, inode blocks, and data blocks. The boot block is used to store information when booting the system. The superblock is used to record information about the file system. Inode blocks store all the information about a file except its name (which his stored in a directory). Data blocks are used to store the data associated with files and directories.
When the file system is created, the size of the data blocks can be specified as either 4096 or 8192 (default) bytes. To reduce waste and make more efficient use of storage, a data block can be divided into a subunit called a fragment. The default fragment is 1024 bytes. Thus a single data block can be used to store data from more than one file. Note that only the last data block of a file can be a fragment. As data is added, the blocks are reallocated.
The mkfs(1M), mkfs_ufs or newfs(1M) commands can be used to create a new ufs file system. Although all types of file systems can be mounted and most can be checked using fsck(1M), the ufs file system is the only type that Solaris 2.6 can create.
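For example, to create a ufs file system on a raw disk slice (device name illustrative):
     newfs /dev/rdsk/c0t3d0s6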
The mount(1M) and umount(1M) commands are used to mount and unmount file systems. Mounted file systems are listed in the mount table (/etc/mnttab). Also, the mountall and umountall commands can be used to mount or unmount all file systems specified in the default file system table (/etc/vfstab).
When a file system is mounted using mount(1M) the type of file system is specified by the -F argument. The following types of file systems can be mounted: cachefs, hsfs, nfs, pcfs, s5fs, tmpfs and ufs.
By default, ufs file systems are mounted to support files that are larger than 2 GB in size. Support for largefiles can be disabled at mount time by specifying the -o nolargefiles option. However, if a file larger than 2 GB in size existed on the file system since the last time fsck(1M) was executed, then the mount will fail.
A local file system can be set up to mount automatically by adding an entry for the file system in the default file system table, /etc/vfstab.
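For example, an /etc/vfstab entry for a local ufs file system might look like this (device and mount point are illustrative):
     /dev/dsk/c0t3d0s7   /dev/rdsk/c0t3d0s7   /export/home   ufs   2   yes   -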
A swap file is created using the mkfile(1M) command. Then it is activated (made available) by using the swap(1M) command. In addition, an entry for the new swap file should be added to the default file systems table, /etc/vfstab.
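For example, to create and activate a 100 MB swap file (path illustrative):
     mkfile 100m /export/data/swapfile
     swap -a /export/data/swapfile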
The NFS service enables computers of different architectures running different operating systems to share files across a network. It allows multiple computers to access the same files, which eliminates redundancy and improves consistency while reducing administration.
The NFS server provides access to disk resources to other computers over the network. An NFS client is not required to have local disk storage space since it can access the resources shared by an NFS server on an as-needed basis.
The system must be at run level 3 (or NFS must be started manually at run level 2) and functioning properly. The system must be on a network and be accessible by other systems. The resources must be made available using the share(1M) command.
Resources are made available and unavailable using the share(1M) and unshare(1M) commands. The -d option of the share(1M) command can be used to specify a description of the share, which can be viewed using the dfshares(1M) command.
Also, if the resources are added to the /etc/dfs/dfstab file, they can be made available and unavailable using the shareall(1M) and unshareall(1M) commands.
By default, all file systems shared via NFS are available for WebNFS access.
To make modifications to the manner in which the resource is shared, edit the /etc/dfs/dfstab entry and restart NFS. To allow read/write access, remove the ro option (read only) if specified and include the rw option. To make URLs relative to the resource as opposed to the server's root directory, include the public option. To load an HTML file instead of listing the directory when an NFS URL is accessed, include the index option.
In addition, if the NFS server is separated from the Internet via a firewall, the firewall must be configured to allow TCP connections on port 2049.
Entries in /etc/dfs/dfstab are shared automatically whenever NFS is started. To enable the share of a resource, modify the /etc/dfs/dfstab file with any supported text editor and add a line consisting of a share(1M) command for the resource.
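For example, an /etc/dfs/dfstab line to share a directory read-only might be (the path and description are illustrative):
     share -F nfs -o ro -d "project docs" /export/docs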
If the system is not in run level 3, enter init 3 to start NFS. If NFS is already running, then stop and restart NFS to enable the new share:
     /etc/init.d/nfs.server stop
     /etc/init.d/nfs.server start
The dfshares(1M) command lists available resources shared by either the local or a remote system. Also, currently shared resources are listed in the /etc/dfs/sharetab file.
Use the mount(1M) command to mount a remote resource:
     mount -F nfs -o options server:resource mount-point
Where options are any desired NFS options, server is the host name or IP address of the remote system, resource is the shared directory name of the remote resource and mount-point is the local directory where the resource should be mounted.
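For example (server and path names illustrative):
     mount -F nfs -o ro server1:/export/docs /docs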
If the -F nfs argument is not used, then the mount(1M) command uses the default network FS type as specified in the /etc/dfs/fstypes file. Since Solaris only supports NFS, this default type is nfs.
Options include:
On the client NFS system, modify the /etc/vfstab file using any supported editor and add the following:
      server:resource - mount-point nfs - yes mode
Where server is the name of the NFS server, resource is the path name of the shared resource, the first - indicates there is no local device to fsck, mount-point is the directory on the client where the resource is to be mounted, nfs is the type of file system, the second - indicates no fsck pass, yes indicates mount at boot and mode is the access mode such as rw for read/write or ro for read only.
Automount is a client-side service that automatically mounts the appropriate file system when a client attempts to access a file system that is not mounted. This simplifies keeping track of which resources are needed or mounted at any particular time. It also eliminates the need to list remote file systems (NFS mounts) in /etc/vfstab, which allows faster booting and shutdown.
The three types of maps are:
After creating the direct map, edit the /etc/auto_master file to include an entry of the form:
     /-   direct_map   options
Where direct_map is the name of the direct map in the /etc directory and options are any desired mount options.
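Each entry in a direct map uses a full path name as its key. An illustrative entry (all names shown are assumptions) in a direct map such as /etc/auto_direct:
     /usr/local/tools   -ro   server1:/export/tools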
The automount(1M) program should be restarted when any changes occur to the auto_master map or when additions or deletions are made to a direct map. Modifications to existing entries in a direct map or any changes to an indirect map do not require restarting automount(1M).
Authentication is a way to restrict access to specific users when accessing a remote system, which can be set up at both the system and network level. For NIS+, every access request is authenticated by checking credentials.
Authorization is a way to restrict operations that the user can perform on the remote system once the user gains access. For NIS+, every component in the namespace specifies the type of operation it will accept and from whom.
| Security Level | Description |
|---|---|
| 0 | Designed for the initial setup and testing of a NIS+ namespace. The NIS+ server grants full access rights to everyone. |
| 1 | Not supported by NIS+ |
| 2 | Default. The highest security level. Authenticates all requests via a credential mechanism. |
| Access Right | Description |
|---|---|
| read | The principal can view the contents of the object. |
| modify | The principal can change the contents of the object. |
| destroy | The principal can delete the object. |
| create | The principal can create new tables in a directory or new columns or entries in tables. |
| Class | Description |
|---|---|
| owner | The principal is the owner of the object. |
| group | The principal is a member of the object's group. |
| world | The principal has been authenticated but is not the owner or a member of any group. |
| nobody | The principal has not been authenticated and gets no respect (The Rodney Dangerfields of NIS+). |
The name service switch is a file (/etc/nsswitch.conf) that controls how network information is obtained. Each system has a switch file. Entries in the file determine how a particular type of information is obtained: that is, which name services (NIS, NIS+, DNS, etc.) can be used to obtain which types of information (hosts, passwords, groups) and the order in which the name services should be queried.
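For example, illustrative /etc/nsswitch.conf entries that consult local files first and then a name service:
     hosts:     files dns
     passwd:    files nis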
A name service provides a centralized place to store the information necessary for users and systems to communicate with each other across a network. This includes:
Without a centralized service, each system would have to maintain its own copy of the information (for example, using the /etc files of the original UNIX naming system). A centralized service eliminates redundancy, improves consistency and reduces administration.
In addition, a naming service also:
Solaris 2.6 supports five name services:
The Domain Name System (DNS) is part of the TCP/IP protocol suite and is the name service used by the Internet. It provides host name to IP address resolution as well as IP address to host name resolution. The namespace is divided into domains, which in turn are divided into subdomains (or zones), where one or more DNS servers are responsible for providing resolution services. All the DNS servers work together to provide resolution services across the entire namespace. The DNS server provided with Solaris 2.6 is version 4.9.4 (patch level 1) of the Berkeley Internet Name Domain (BIND) program, which is referred to as the Internet name daemon (in.named). Included with BIND are several DNS utilities such as nslookup, dig and dnsquery.
The host name and IP address information is stored in a set of ASCII files using a predefined syntax known as records.
NIS is a distributed name service. It is a mechanism for identifying and locating network objects and resources. It provides a uniform storage and retrieval method for network-wide information in a transport protocol and media-independent fashion. The databases (called maps) can be distributed among NIS servers (master and slaves) and be updated from a central location in an automatic and reliable fashion.
To configure an NIS master:
To configure an NIS Slave:
To configure an NIS client:
NIS+ is a network name service that can be used to store and retrieve information about workstation addresses, security information, mail information, Ethernet interfaces, and network services in a central location where all workstations have access to it. As with most name services, it provides a centralized service that eliminates redundancy, improves consistency and reduces administration costs.
The following table summarizes the 16 preconfigured NIS+ tables:
| Table | Description |
|---|---|
| auto_home | Location of all user home directories |
| auto_master | Automounter map information |
| bootparams | Location of the root, swap, and dump partitions of every diskless client in the domain |
| cred | Credentials of the principals |
| ethers | Ethernet addresses of every workstation |
| group | Group name, ID, password and members of every UNIX group in the domain |
| hosts | Network address of every workstation |
| mail_aliases | Information about the mail aliases of users in the domain |
| netgroup | Network groups and their members |
| netmasks | Networks in the domain and their netmasks |
| networks | Networks in the domains and their canonical names |
| passwd | Password information about every user in the domain |
| protocols | List of IP protocols used in the domain |
| rpc | The RPC program numbers of RPC services available in the domain |
| services | Names of IP services used in the domain and their port numbers |
| timezone | Timezone of every workstation in the domain |
Support of network clients takes several forms:
For diskless clients and AutoClients, a server must provide the ability to remotely access the operating system and application file systems via NFS. Diskless clients access this information remotely, while AutoClients locally cache root (/) and /usr.
The Host Manager, a tool provided with Solstice AdminSuite, is used to add support for AutoClients, diskless clients, JavaStation clients and dataless clients. Adding support for all types of clients follows the same high-level procedure:
Using the Host Manager, a standalone system, dataless client or generic system can be converted to an OS Server.
The command admhostmod can be used instead of Host Manager to convert a system to an OS Server.
The following files can be modified by Host Manager:
JumpStart is a method to automatically install Solaris on a new SPARC system by inserting the Solaris Operating System CD-ROM in the drive and powering on the system. The software installed is determined by a default profile based on the system model and size of disk(s). All new SPARC systems have the JumpStart software pre-installed on their boot disks.
Custom JumpStart is a method to automatically install groups of identical systems. To customize JumpStart, a text file called rules must be created that lists one or more profiles. A profile is a text file that defines how Solaris is to be installed on a group of systems. Once these files are completed, they are validated using the check script. In a non-networked environment, the validated files are placed on a diskette in the JumpStart directory and the system is booted. Then the appropriate profile is selected to direct the installation of Solaris. In a networked environment, the JumpStart directory is located on a network server.
Note: Any of the four install methods (not just JumpStart) can be used when installing over the network.
The main components for setting up a network for automatic install are:
When a system is installed automatically, it needs to be able to locate network information about itself. In a NIS or NIS+ environment, it will attempt to use the name service to obtain this information. Use the Solstice Host Manager to add the information about the new client. If a name service is not being used, then the network information about the new client must be added to the /etc files of the install server or the boot server if required.
Another method to preconfigure system information is by creating a sysidcfg(4) file and making it available via the diskette drive or an NFS share.
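An illustrative sysidcfg file (the values shown are assumptions, not defaults):
     system_locale=en_US
     timezone=US/Mountain
     timeserver=localhost
     terminal=sun-cmd
     name_service=NONE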
On the system that will be the install server:
On the system that will be the boot server:
The custom JumpStart files can either be located on a diskette or on a server (referred to as a profile server) where they are shared via NFS. Preparing a custom JumpStart directory and files consists of:
On the system that will be the install server or boot server (if required):
If booting a standalone system, insert the Solaris CD in the CD-ROM drive and, if appropriate, the diskette with the JumpStart or other configuration information in the diskette drive. If booting a networked system set up to install over the network, verify that the system is attached to the network and that the install server and any other required servers are available along with any configuration information.
Power on the system and, if necessary, identify the CD or the network as the source of the install.
*Trademarks of Sun Microsystems, Inc.
Special Thanks to Darrell Ambro for writing this Cramsession.
Make sure to check out his extensive Solaris study guide at: http://ns.netmcr.com/~ambro/intro.htm