Install operating system

Here we assume you already have the necessary hardware for a cluster, as described in [[Acquire and deploy hardware]]. This article is part of a series called [[So you want to set up a lab]].
 
To begin, you will need either six computers to host the central services, a hypervisor to host six VMs, or some mixture of the two. We recommend the hypervisor if you can bear it, and the six physical computers if you can afford them (space, energy, money).


{{TOCright}}


== Hypervisor ==
We use libvirt, http://libvirt.org. Others also work well, including VirtualBox and VMware. See [[Hypervisor]].
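As a quick check that the hypervisor host is reachable and can enumerate guests, here is a minimal sketch using the libvirt Python bindings; the qemu:///system URI assumes a local KVM/QEMU host.

<pre>
# Minimal sketch: confirm libvirt is up and list defined guests.
# Assumes the libvirt Python bindings (libvirt-python) are installed
# and a local KVM/QEMU hypervisor (qemu:///system).
import libvirt

conn = libvirt.open("qemu:///system")
try:
    print("Hypervisor type:", conn.getType())
    for dom in conn.listAllDomains():
        state, _ = dom.state()
        running = (state == libvirt.VIR_DOMAIN_RUNNING)
        print(dom.name(), "running" if running else "not running")
finally:
    conn.close()
</pre>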


== Foreman ==
Foreman is the provisioning server, available from http://theforeman.org/. We recommend using the latest [[Centos]], currently 6.5.
Here is how to set one up: [[Foreman]]
== Rack Organization Planning ==
This becomes more important as your cluster grows. Put the managed switch in the middle of the rack for shorter cable runs. Buy power and Ethernet cables in several short lengths (e.g., 1.5' and 3'). Use an addressable PDU if you can afford it. Put disk enclosures, which are generally heavier, at the bottom and CPU machines at the top. Label machines front and back with their public and private IP addresses and names. Give every enclosure a name so you can refer to it.
== DHCP and Cluster DNS ==
This requires planning. Please see our [[Cluster IP planning worksheet]].
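To illustrate the kind of plan the worksheet captures, here is a small sketch that lays out names and addresses on a private subnet. The 10.20.0.0/24 range, host names, and node count are illustrative assumptions, not a prescription.

<pre>
# Sketch of a cluster address plan on a private subnet.
# The subnet, host names, and node count are placeholders;
# record the real plan in the Cluster IP planning worksheet.
import ipaddress

subnet = ipaddress.ip_network("10.20.0.0/24")
hosts = iter(subnet.hosts())

plan = {}
# Reserve the first addresses for the central services.
for name in ["gateway", "foreman", "auth", "nfs1", "sgemaster", "db"]:
    plan[name] = next(hosts)
# Then a block of compute nodes.
for i in range(1, 11):
    plan["node-%02d" % i] = next(hosts)

for name, addr in sorted(plan.items(), key=lambda kv: kv[1]):
    print("%-12s %s" % (name, addr))
</pre>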
== Set up provisioning services ==
* Create local repositories (a mirroring sketch is shown below)
* Automatic provisioning (PXE)
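For the local repository step, here is a hedged sketch that mirrors a yum repository with reposync and rebuilds its metadata with createrepo. The repo id and target directory are assumptions for illustration; adjust them to your mirror layout.

<pre>
# Sketch: mirror a yum repository locally and build its metadata.
# Assumes the reposync (yum-utils) and createrepo packages are
# installed; the repo id "base" and the target path are examples.
import subprocess

repo_id = "base"
target = "/var/www/html/repos"

# Download the packages for the chosen repo id into target/repo_id/.
subprocess.check_call(["reposync", "--repoid", repo_id,
                       "--download_path", target])
# (Re)build the repodata so clients can point a .repo file at it.
subprocess.check_call(["createrepo", "%s/%s" % (target, repo_id)])
</pre>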


== Authentication server ==
Set up the authentication server; we use 389 Directory Server. Other authentication systems, such as Kerberos, also work, but are beyond the scope of this tutorial.
 
* Create users. If you interoperate with another cluster, you may have to pay attention to name and/or UID collisions (see the sketch below).
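To make the collision check concrete, here is a minimal sketch that compares two passwd-format dumps (for example, the output of "getent passwd" from each cluster); the file names are placeholders.

<pre>
# Sketch: flag login-name and UID collisions between two clusters.
# Assumes passwd-format dumps from each cluster; file names are
# placeholders.
def read_passwd(path):
    users = {}
    with open(path) as fh:
        for line in fh:
            fields = line.rstrip("\n").split(":")
            if len(fields) >= 3:
                users[fields[0]] = int(fields[2])
    return users

ours = read_passwd("passwd.local")
theirs = read_passwd("passwd.remote")

for name in sorted(set(ours) & set(theirs)):
    print("name collision:", name, ours[name], theirs[name])

for uid in sorted(set(ours.values()) & set(theirs.values())):
    print("uid collision:", uid)
</pre>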


== NFS and Public DNS ==
We use EXT4 over NFS. We tend to hang several enclosures off a head node. Do not mix equipment from different vendors. We recommend SAS, which has finally come down in price, and RAID6 formatting. We currently use enclosures that host 12 disks of 4 TB each.
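As a rough capacity check for that configuration: RAID6 gives up two disks' worth of space to parity, so a 12 x 4 TB enclosure yields about 40 TB of raw usable space before filesystem overhead. A short sketch:

<pre>
# Rough usable capacity of a RAID6 enclosure (two disks of parity).
# Ignores filesystem overhead, hot spares, and TB vs TiB rounding.
disks, disk_tb = 12, 4
print("usable:", (disks - 2) * disk_tb, "TB per enclosure")   # 40 TB
</pre>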


== Disk planning ==


We suggest /nfs/home on a fast dedicated machine, /nfs/work for a large work area, and /nfs/store for an online archive.
Depending on your local environment, you may need to coordinate the use of public IP names and addresses with your ISP or department.
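As a small aid when sizing (and later monitoring) these areas, here is a sketch that reports total and free space for each suggested mount; it assumes the mounts exist on the machine where it runs.

<pre>
# Sketch: report capacity and free space for the suggested NFS areas.
# Assumes /nfs/home, /nfs/work, and /nfs/store are mounted locally.
import shutil

for path in ("/nfs/home", "/nfs/work", "/nfs/store"):
    usage = shutil.disk_usage(path)
    print("%-11s %6.1f TB total, %6.1f TB free"
          % (path, usage.total / 1e12, usage.free / 1e12))
</pre>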


== Set up queuing system ==
We recommend the free versions of Sun Grid Engine ([[SGE]]).
See our guidelines to [[get a queuing system working]]. A minimal submission test is sketched after the list below.
* Create the SGE master
* Set up SGE
* Provision sgehead
* Configure SGE hosts/groups
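Once the master and execution hosts are configured, a quick end-to-end check is to submit a trivial job. Below is a hedged sketch that writes a one-line job script and submits it with qsub from a submit host; the script name is a placeholder.

<pre>
# Sketch: submit a trivial test job to SGE and print qsub's reply.
# Assumes the SGE client tools (qsub) are on PATH on a submit host;
# the script name is a placeholder.
import subprocess

script = "hello_sge.sh"
with open(script, "w") as fh:
    fh.write("#!/bin/sh\n")
    fh.write("#$ -cwd\n")          # run in the submission directory
    fh.write("#$ -N hello_sge\n")  # job name
    fh.write("hostname; date\n")

out = subprocess.check_output(["qsub", script])
print(out.decode().strip())   # e.g. "Your job 123 ... has been submitted"
</pre>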
== Set up License server(s) ==
These will be used for PGF and Epik in the middleware step.


== Set up a database server ==
We use [[Psql]] and [[rdkit]], as well as [[MySQL]].
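As a quick smoke test once the database server is up, here is a sketch that connects to PostgreSQL with psycopg2 and parses a SMILES string with the RDKit Python bindings; the host, database, and user are placeholder assumptions.

<pre>
# Sketch: smoke-test the database server and the RDKit install.
# Assumes the psycopg2 and rdkit Python packages are installed;
# the host, database, and user below are placeholders.
import psycopg2
from rdkit import Chem

conn = psycopg2.connect(host="db", dbname="test", user="cluster")
with conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])
conn.close()

mol = Chem.MolFromSmiles("c1ccccc1O")   # phenol
print(Chem.MolToSmiles(mol))
</pre>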
== Portal and Security ==
We recommend setting up a portal and blocking all inbound access to all other computers. Use two portals at distinct geographical locations for added robustness. We recommend not using


== Add a new node to the cluster ==
[[How to spin up a new virtual machine]]
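If the new node is a VM, one way to create it so that it PXE-boots and gets picked up by Foreman is virt-install. Here is a hedged sketch; the guest name, sizes, and bridge name are placeholders.

<pre>
# Sketch: create a VM that boots via PXE so Foreman can provision it.
# Assumes virt-install is available on the hypervisor; the name,
# memory, disk size, and bridge below are placeholders.
import subprocess

subprocess.check_call([
    "virt-install",
    "--name", "node-11",
    "--ram", "4096",
    "--vcpus", "2",
    "--disk", "size=40",
    "--network", "bridge=br0",
    "--pxe",
    "--noautoconsole",
])
</pre>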
== Add new disk to the cluster ==
[[Configure new disk]]

== Deploy a workstation ==
[[Workstation Install]]


== Document ==
[http://wiki.bkslab.org Document] the system configuration, licenses, and access codes. We encourage you to set up your own wiki, WordPress, or other site for this purpose, but this is of course optional.


Return to [[So you want to set up a lab]].
