Saturday, June 20, 2015

OpenShift 3 (Fedora 22 Host + Vagrant with libvirt + OpenShift 3 Docker)

Red Hat recently introduced OpenShift 3, which differs from OpenShift 2 by adopting Docker + Kubernetes. The following doc covers some notes I took while experimenting with it, as there still seem to be some gaps between the online docs and my installation experience (I may just have missed something too).

Update (06/27/2015): With the latest binary v1.0.0 from OpenShift 3 and the latest CentOS patches (which include Docker 1.6.2 along with the necessary SELinux policy), the example seems to work okay. But I found that race conditions may happen when many images are pulled at once (though that may just be because I run a single all-in-one instance).
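
To verify those versions inside the VM later on, a quick sanity check (just the standard docker and rpm CLIs in the CentOS guest):

docker version
rpm -q docker selinux-policy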

The code can be found here:
https://github.com/openshift/origin

The OS in this doc is the Fedora 22 Workstation edition, so some settings will be Fedora-specific.

Install Vagrant + libvirt (one may check this doc for more detail http://fedoramagazine.org/running-vagrant-fedora-22/):
sudo dnf install vagrant vagrant-libvirt

Install the Vagrant plugin (details can be found here: https://github.com/openshift/vagrant-openshift)
vagrant plugin install vagrant-openshift

(Optional: Install Docker for local debug)
sudo dnf install docker
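
If Docker is installed locally, the daemon also needs to be started; on Fedora it is a regular systemd service:

sudo systemctl enable docker
sudo systemctl start docker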

Install the nfs-server packages (one may find the detailed docs here: http://www.server-world.info/en/note?os=Fedora_22&p=nfs)
sudo dnf install nfs-utils 
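
Vagrant needs the NFS server running on the host; if it is not started automatically, one can enable and start it by hand (nfs-server is the standard Fedora unit name):

sudo systemctl enable nfs-server
sudo systemctl start nfs-server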

Install firewall rules for the vagrant + libvirt NFS mount (this is a requirement, as vagrant + libvirt uses an NFS mount to share data between host and VM). The following commands must be run even if you choose to disable your firewall (I am not sure why, but without these rules NFS will not work even with the firewall off). A detailed explanation can be found here: http://fedoramagazine.org/running-vagrant-fedora-22/

sudo firewall-cmd --permanent --add-service=nfs && sudo firewall-cmd --permanent --add-service=rpc-bind && sudo firewall-cmd --permanent --add-service=mountd && sudo firewall-cmd --reload
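
To verify the rules took effect, list the enabled services:

sudo firewall-cmd --list-services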

Install git (for cloning the repository):
sudo dnf install git

For people who like single line command:
sudo dnf install -y vagrant vagrant-libvirt git docker nfs-utils &&
sudo firewall-cmd --permanent --add-service=nfs &&
sudo firewall-cmd --permanent --add-service=rpc-bind &&
sudo firewall-cmd --permanent --add-service=mountd &&
sudo firewall-cmd --reload && 
vagrant plugin install vagrant-openshift
Move to a folder where the OpenShift source can be stored and clone it down:

git clone https://github.com/openshift/origin.git

Move into the directory and create a CentOS 7 configuration file instead of the default Fedora 21 one (the reason is that Fedora 21 seems to be close to its end of life, so I thought to choose an edition with longer support):
cd origin
vagrant origin-init --stage inst --os centos7 openshift
Check the configuration:
cat .vagrant-openshift.json  | grep centos7
It should produce the following:
  "os": "centos7",
    "box_name": "centos7_inst",
    "box_url": "http://mirror.openshift.com/pub/vagrant/boxes/openshift3/centos7_virtualbox_inst.box"
    "box_name": "centos7_base",
    "box_url": "http://mirror.openshift.com/pub/vagrant/boxes/openshift3/centos7_libvirt_inst.box"
    "ssh_user": "centos"
    "ssh_user": "centos"
Proceed to stand up the vagrant box (in the origin folder above):
vagrant up
It should proceed to stand up the machine; enter your password as needed for the network settings and NFS mount. If it has errors on the NFS mount, please use the above docs and other info to help debug. Once the machine comes up successfully, proceed to ssh into the VM (and type the password as necessary):

vagrant ssh

Once ssh'd into the machine, we can pull the images to ensure they are the latest:

. /data/src/examples/sample-app/pullimages.sh
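
Since race conditions can show up when many images are pulled at once (see the update note above), one can wrap the same script in a simple retry loop; this is just a convenience sketch that assumes the script exits non-zero when a pull fails:

# Retry the pull script up to 3 times in case a pull fails mid-way
for i in 1 2 3; do
    . /data/src/examples/sample-app/pullimages.sh && break
    echo "pull failed, retrying ($i)..."
    sleep 10
done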

Download and start up the necessary binary for OpenShift v3:
# Download the binary
curl -L https://github.com/openshift/origin/releases/download/v1.0.0/openshift-origin-v1.0.0-67617dd-linux-amd64.tar.gz | tar xzv
# Change to root user (for below scripts)
sudo su -
# http://fabric8.io/guide/openShiftInstall.html
export OPENSHIFT_MASTER=https://$(hostname -I | cut -d ' ' -f1):8443
echo $OPENSHIFT_MASTER
export PATH=$PATH:$(pwd)
# Create the log directory in advance
mkdir -p /var/lib/openshift/
chmod 755 /var/lib/openshift
# Remove previously generated config, if any
rm -rf openshift.local.config
nohup openshift start \
        --cors-allowed-origins='.*' \
        --master=$OPENSHIFT_MASTER \
        --volume-dir=/var/lib/openshift/openshift.local.volumes \
        --etcd-dir=/var/lib/openshift/openshift.local.etcd \
        > /var/lib/openshift/openshift.log &
tail -f   /var/lib/openshift/openshift.log

Please wait until the log prints out "Listening" before proceeding to the next steps.

One may then use a web browser on the host machine to visit the OpenShift console at https://VM_ADDRESS:8443. One may find the VM_ADDRESS by using the following command in the VM:

ifconfig 

The output should look similar to the following for eth0 (the other interfaces are for Docker and OpenShift):

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.121.169  netmask 255.255.255.0  broadcast 192.168.121.255
        inet6 fe80::5054:ff:fec6:ba8d  prefixlen 64  scopeid 0x20<link>
        ether 52:54:00:c6:ba:8d  txqueuelen 1000  (Ethernet)
        RX packets 184375  bytes 442794303 (422.2 MiB)
        RX errors 0  dropped 11  overruns 0  frame 0
        TX packets 119825  bytes 15028821 (14.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
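
Alternatively, since the OPENSHIFT_MASTER export above already takes the first address from hostname -I, the same address can be printed directly:

hostname -I | cut -d ' ' -f1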
In the Vagrant VM, run the following commands:

# Link the current config
mkdir -p ~/.kube
ln -s `pwd`/openshift.local.config/master/admin.kubeconfig ~/.kube/config
export CURL_CA_BUNDLE=`pwd`/openshift.local.config/master/ca.crt
sudo chmod a+rwX openshift.local.config/master/admin.kubeconfig
sudo chmod +r openshift.local.config/master/openshift-registry.kubeconfig
 
# Create the local registry
oadm registry --create --credentials=openshift.local.config/master/openshift-registry.kubeconfig --config=openshift.local.config/master/admin.kubeconfig
# Check the status
oc describe service docker-registry --config=openshift.local.config/master/admin.kubeconfig

The above should print out the docker registry info (keep checking until the Endpoints field is no longer <none>).
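
For convenience, one can poll until the endpoint shows up instead of re-running the command by hand (the <none> match mirrors the describe output for this version and may differ in others):

while oc describe service docker-registry --config=openshift.local.config/master/admin.kubeconfig | grep -qi '<none>'; do
    echo "waiting for docker-registry endpoints..."
    sleep 5
done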

Use the following command to log in as a sample user:
oc login --certificate-authority=openshift.local.config/master/ca.crt -u test-admin -p test-admin
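
To confirm the login took effect, print the current user (assuming whoami is available in this oc build):

oc whoami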

Then use the following commands to create a Java EE sample application with WildFly (please check the original blog: http://blog.arungupta.me/openshift-v3-getting-started-javaee7-wildfly-mysql/)

# Create new project
oc new-project test --display-name="OpenShift 3 WildFly" --description="This is a test sample project to test WildFly on OpenShift 3"

# Create new app
oc new-app -f https://raw.githubusercontent.com/bparees/javaee7-hol/master/application-template-jeebuild.json

# Trace the build 
oc build-logs jee-sample-build-1

The above command should yield something similar to the following (you may need to wait a few minutes):
I0628 04:40:08.327578       1 sti.go:388] Copying built war files into /wildfly/standalone/deployments for later deployment...
I0628 04:40:08.352066       1 sti.go:388] Copying config files from project...
I0628 04:40:08.353790       1 sti.go:388] ...done
I0628 04:40:16.662515       1 sti.go:96] Using provided push secret for pushing 172.30.6.81:5000/test/jee-sample image
I0628 04:40:16.662562       1 sti.go:99] Pushing 172.30.6.81:5000/test/jee-sample image ...
I0628 04:40:24.410955       1 sti.go:103] Successfully pushed 172.30.6.81:5000/test/jee-sample
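
If build-logs complains that the build has not started yet, one can check the build status first:

oc get builds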


One should then be able to describe the services:
oc describe services

Output:
NAME       LABELS                                   SELECTOR        IP(S)           PORT(S)
frontend   template=application-template-jeebuild   name=frontend   172.30.88.128   8080/TCP
mysql      template=application-template-jeebuild   name=mysql      172.30.250.39   3306/TCP
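
For the tunnel below, the frontend IP can also be grabbed programmatically; this one-liner assumes the column layout shown above (IP in the fourth column):

FRONTEND_IP=$(oc get services | awk '$1 == "frontend" {print $4}')
echo $FRONTEND_IP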


One may use an SSH tunnel to expose the website on the local workstation by opening up a new terminal and moving into the origin code directory (the IP may be different from the above; please check the IP of the frontend service and change it as necessary).

From the local workstation (not the vagrant box), use the following:
cd origin 
vagrant ssh -- -L 9999:172.30.167.74:8080 
The IP and port depend on the output above; 9999 is an available port on the local machine. One may then open localhost:9999 in the local workstation's browser and it should show the Java app.
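
A quick check from the workstation that the tunnel works:

curl -s http://localhost:9999/ | head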

Clean up everything (be warned: this will remove everything):

oc delete pods,services,projects --all

Shut down the vagrant box by using the following command on the host workstation:

vagrant halt 

Or use the following command on the host workstation to destroy the VM (in case something goes wrong):

vagrant destroy --force