- VirtualBox 5 with the extension pack
- CentOS 7
- Warewulf 3.6.99 built from SVN source
- one manage node and three storage nodes (or client nodes)
- stateful provisioning
Manage node setup:
- 1 OS disk
- two network interfaces: the first as NAT, the second as an internal network
Storage node setup:
- each has 1 OS disk, 1 journal disk, and 3 OSD disks; all disks are 8 GB
- the OS disk uses ext4, the others use xfs (just for reference, no configuration needed)
- one network interface on the internal network, named eth1 (do not use eth0!)
- enable network boot and set it as the first boot choice
If you want to add more NICs to your client nodes, such as a NAT interface to reach the outside, my experience is to make the internal network first (named eth1) and NAT second; otherwise the client nodes can't find the manage node via DHCP or TFTP. You will also need to put /etc/sysconfig/network-scripts/ifcfg-eth1 (or ifcfg-enp0s8, or similar) in your Warewulf VNFS image.
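As a rough sketch, the file dropped into the VNFS image could look like the following (the values here are assumptions; adjust them to your own internal network):

```shell
# Hypothetical /etc/sysconfig/network-scripts/ifcfg-eth1 inside the VNFS image
# (DHCP from the Warewulf master on the internal network)
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp
```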
Here is how to do it:
1. install CentOS 7 on the manage node, accepting all defaults
2. edit /etc/sysconfig/network-scripts/ifcfg-enp0s? and enable networking on both interfaces; set the first to DHCP and the second to static with IP=172.16.2.250
3. systemctl restart network
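For reference, the two manage-node config files could look roughly like this (the interface names enp0s3/enp0s8 are assumptions; check yours with `ip link`):

```shell
# /etc/sysconfig/network-scripts/ifcfg-enp0s3  (first NIC: NAT, DHCP)
DEVICE=enp0s3
ONBOOT=yes
BOOTPROTO=dhcp

# /etc/sysconfig/network-scripts/ifcfg-enp0s8  (second NIC: internal, static)
DEVICE=enp0s8
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.16.2.250
NETMASK=255.255.255.0
```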
4. yum install git
5. git clone https://github.com/ruilinchu/warewulf_ceph
you will now have the warewulf_ceph folder, which contains bash scripts numbered in sequence. All you need to do is run them one by one.
Ok, let's get started:
6. run script 1. This prepares the system, installs Ceph on the manage node, and gets ready to build Warewulf; the system reboots when it finishes
7. run script 2. This builds and installs Warewulf; it takes a while
8. run script 3. This sets up Warewulf and builds the VNFS image; at the end it prompts you to boot the client nodes
9. power on the client nodes; Warewulf records them in its object store. After all nodes are recorded, press Ctrl-C to stop script 3
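To confirm the nodes really landed in the datastore, Warewulf 3's `wwsh` shell can list them (the node names you see will be whatever the script assigned):

```shell
# List the nodes Warewulf has recorded in its object store
wwsh node list
# Show the provisioning configuration for all nodes
wwsh provision list
```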
10. check that every node booted OK; at this point it is a stateless provision. You should be able to ssh to the client nodes but not log in directly
11. run scripts 4, 5, and 6 one by one. These install pdsh on the manage node, enable password-free ssh login, and install Ceph into the client node image
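A quick way to sanity-check that password-free ssh and pdsh are working is to run a command across all client nodes at once (the node-name pattern here is an assumption; use your own hostnames):

```shell
# Run 'uptime' on all three client nodes in parallel via pdsh
pdsh -w n00[00-02] uptime
```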
12. run script 7. This installs a kernel and GRUB into the client node image, then reboots all client nodes and statefully provisions them with CentOS 7. This will take a while; be patient
13. check that every client node booted and installed OK
14. run script 8. This sets the client nodes to boot from local disk and reboots them
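The boot-from-local-disk switch in step 14 presumably relies on Warewulf's bootlocal setting; a hand-run sketch might look like this (node names are assumptions, and you should verify the exact option against your Warewulf build):

```shell
# Tell the Warewulf master that these nodes now boot from their local disk,
# then reboot them (node names and the --bootlocal value are assumptions)
wwsh provision set n00[00-02] --bootlocal=normal
pdsh -w n00[00-02] reboot
```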
now the OS is in place on all nodes, and we are ready to deploy Ceph:
15. run script 9. This sets up the Ceph cluster, installing 3 Ceph monitors (1 on each client node) and 9 OSDs (3 on each). It takes a while.
The Ceph cluster is ready! If you see clock-skew warnings like in the picture, check that the ntp and ntpdate services are enabled and running, then restart the Ceph monitors and you will be fine.
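Roughly, the clock-skew fix looks like this on each node (the ceph-mon unit name is an assumption; it depends on how your Ceph release is packaged, and older releases use /etc/init.d/ceph instead of systemd units):

```shell
# Make sure time sync is enabled and running (CentOS 7 service names)
systemctl enable ntpd && systemctl start ntpd
systemctl status ntpd
# Restart the local monitor; the unit/instance name is an assumption
systemctl restart ceph-mon@$(hostname -s)
# Recheck cluster health afterwards
ceph -s
```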
Having some fun with block devices; I am also using the manage node as a Ceph file system client:
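A minimal block-device exercise from the manage node could look like the following (the image name and mount point are made up for illustration):

```shell
# Create a 1 GB RBD image in the default pool, map it, and mount it
rbd create test-img --size 1024
rbd map test-img            # typically appears as /dev/rbd0
mkfs.xfs /dev/rbd0
mkdir -p /mnt/rbd && mount /dev/rbd0 /mnt/rbd
df -h /mnt/rbd
```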