
Saturday, April 22, 2017

Kubernetes: Share Config Across Services

The following blog post shows one way to let one Kubernetes service generate configuration and share it with other services. It may not be the best approach, but it is one approach that allows the sharing to happen.

Example Scenario

One Kubernetes service (such as riak-cs or a MySQL image with a random password generator) that generates a random credential during its first startup. We will refer to it as the datastore service in the article below.

Another Kubernetes service (such as a web application) that needs the credential to access the datastore. We will refer to it as the client service in the article below.


There are several approaches:

  • Utilize another key-value service like Vault
    • The only downside is the need to manage another service, and it still somewhat remains a chicken-and-egg question: if you use an auto-generated credential, that credential still needs to be stored somewhere.
  • Store the credential somewhere in the Pod and then have an orchestrator use kubectl exec to pull the data from one service and present it as a secret to another
    • This does not seem to be a generic approach (attaching a process through exec never seems like a good idea for a production system), and the third-party orchestrator's parsing may not be reliable

So the solution I came up with is to utilize the Kubernetes Secrets API and share the secrets across services within the same namespace.

Prerequisites

A Kubernetes 1.6+ setup (required for RBAC authorization).
The following example will use minikube (0.18.0).


Setup

The datastore service will be created; its pod will auto-generate an account and push the account credential into a new Secret in the namespace.

The client service will be created with that Secret and will be able to consume the datastore service.


With a normal setup, a pod outside the kube-system namespace will not be able to access the Kubernetes Secrets API.

So we need to create an additional role to address this requirement. The reason we choose RBAC mode over the current default ABAC (1.6) is that ABAC does not allow dynamic role creation (it requires a restart, and the control over resources is not as fine-grained).

More documentation can be found here:
https://kubernetes.io/docs/admin/authorization/

Start up minikube with RBAC mode

minikube start --vm-driver=kvm --extra-config=apiserver.Authorization.Mode=RBAC
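As a quick sanity check that RBAC is actually enabled, the rbac API group should show up in the list of API versions:

kubectl api-versions | grep rbac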
Create a namespace with a role that can access all secrets info, in a YAML file

cat > create_default_namespace_role.yaml << END
kind: Namespace
apiVersion: v1
metadata:
  name: dummy
  labels:
    name: dummy
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: dummy
  name: dummy-default-role
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dummy-default-rolebinding
  namespace: dummy
subjects:
  - kind: ServiceAccount
    name: default
    namespace: dummy
roleRef:
  kind: Role
  name: dummy-default-role
  apiGroup: rbac.authorization.k8s.io
END
Create the namespace and default role
kubectl create -f  create_default_namespace_role.yaml
Prepare a Docker image for the datastore service. At the point in its startup script where the auto-generated credential is available, do the following:


DUMMY_ACCESS_KEY='keyid'
DUMMY_ACCESS_SECRET='keysecret'  

curl -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  -k https://${KUBERNETES_PORT_443_TCP_ADDR}/api/v1/namespaces/$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)/secrets \
  -d "{\"apiVersion\":\"v1\",\"data\":{\"DUMMY_ACCESS_KEY\":\"$(echo -n ${DUMMY_ACCESS_KEY} | base64)\",\"DUMMY_ACCESS_SECRET\":\"$(echo -n ${DUMMY_ACCESS_SECRET} | base64)\"},\"kind\":\"Secret\",\"metadata\":{\"name\":\"dummy-admin-credential\"}}"
Note: the data fields must be base64 encoded before they can be passed to the API.
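For example, a value can be encoded and checked from any shell; the -n matters so that a trailing newline does not end up inside the secret value:

echo -n 'keyid' | base64        # prints a2V5aWQ=
echo 'a2V5aWQ=' | base64 -d     # prints keyid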

If it is possible that the credential changes at a later stage, one can use PUT to update the secret (please note that the client service may need to be restarted to pick up the change):



DUMMY_ACCESS_KEY='keyid'
DUMMY_ACCESS_SECRET='keysecret'

curl -X PUT \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  -k https://${KUBERNETES_PORT_443_TCP_ADDR}/api/v1/namespaces/$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)/secrets/dummy-admin-credential \
  -d "{\"apiVersion\":\"v1\",\"data\":{\"DUMMY_ACCESS_KEY\":\"$(echo -n ${DUMMY_ACCESS_KEY} | base64)\",\"DUMMY_ACCESS_SECRET\":\"$(echo -n ${DUMMY_ACCESS_SECRET} | base64)\"},\"kind\":\"Secret\",\"metadata\":{\"name\":\"dummy-admin-credential\"}}"

Once the datastore service is up, from the outside one can use kubectl to check the generated credential:

kubectl get secret dummy-admin-credential -o yaml --namespace dummy

and base64 decode the DUMMY_ACCESS_KEY and DUMMY_ACCESS_SECRET values.
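For example, assuming GNU coreutils base64 is available on the workstation, a single field can be pulled and decoded directly with jsonpath:

kubectl get secret dummy-admin-credential --namespace dummy -o jsonpath='{.data.DUMMY_ACCESS_KEY}' | base64 -d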

In the client service, the Docker container can either do the following to get the credential (and then base64 decode + JSON process the response):

curl -X GET \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  -k https://${KUBERNETES_PORT_443_TCP_ADDR}/api/v1/namespaces/$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)/secrets/dummy-admin-credential
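If a JSON processor such as jq happens to be available in the client image, the response above can be reduced to the decoded values, for example:

SECRET_JSON=$(curl -s -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" -k https://${KUBERNETES_PORT_443_TCP_ADDR}/api/v1/namespaces/$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)/secrets/dummy-admin-credential)
DUMMY_ACCESS_KEY=$(echo "${SECRET_JSON}" | jq -r '.data.DUMMY_ACCESS_KEY' | base64 -d)
DUMMY_ACCESS_SECRET=$(echo "${SECRET_JSON}" | jq -r '.data.DUMMY_ACCESS_SECRET' | base64 -d)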


or do it the more proper way by attaching the Secret to the pod, as sketched below.
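A minimal sketch of that approach, using the same heredoc style as earlier (the image name below is just a placeholder), injects the two keys as environment variables through secretKeyRef:

cat > client_pod.yaml << END
apiVersion: v1
kind: Pod
metadata:
  name: dummy-client
  namespace: dummy
spec:
  containers:
    - name: client
      # Placeholder image; replace with the real client service image
      image: example/client:latest
      env:
        - name: DUMMY_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: dummy-admin-credential
              key: DUMMY_ACCESS_KEY
        - name: DUMMY_ACCESS_SECRET
          valueFrom:
            secretKeyRef:
              name: dummy-admin-credential
              key: DUMMY_ACCESS_SECRET
END
kubectl create -f client_pod.yaml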





Saturday, June 20, 2015

OpenShift 3 (Fedora 22 Host + Vagrant with libvirt + OpenShift 3 Docker)

Red Hat recently introduced OpenShift 3, which differs from OpenShift 2 by adopting Docker + Kubernetes. The following doc covers some notes I took while experimenting with it, as there still seem to be some gaps between the online docs and my installation experience (I may just have missed something too).

Update (06/27/2015): With the latest OpenShift 3 binary (v1.0.0) and the latest CentOS updates (which include Docker 1.6.2 and the necessary SELinux policy), the example seems to work okay. I did find that race conditions may happen when pulling a lot of images (but maybe that is just because I have a single all-in-one instance).

The code can be found here:
https://github.com/openshift/origin

The host OS in this doc is Fedora 22 Workstation, so some settings will be Fedora specific.

Install Vagrant + libvirt (one may check this doc for more detail http://fedoramagazine.org/running-vagrant-fedora-22/):
sudo dnf install vagrant vagrant-libvirt

Install Vagrant Plugin (details can be found at here: https://github.com/openshift/vagrant-openshift)
vagrant plugin install vagrant-openshift

(Optional: Install Docker for local debug)
sudo dnf install docker

Install nfs-server packages (one may find the detail docs at here: http://www.server-world.info/en/note?os=Fedora_22&p=nfs)
sudo dnf install nfs-utils 

Add firewall rules for the vagrant + libvirt NFS mount (vagrant + libvirt uses NFS to share data between the host and the VM, so this is a requirement). The following commands must be run even if you choose to disable your firewall (I am not sure why, but without these rules the NFS mount will not work even with the firewall off); a detailed explanation can be found at http://fedoramagazine.org/running-vagrant-fedora-22/:

sudo firewall-cmd --permanent --add-service=nfs && sudo firewall-cmd --permanent --add-service=rpc-bind && sudo firewall-cmd --permanent --add-service=mountd && sudo firewall-cmd --reload

Install git (for cloning the repository):
sudo dnf install git

For people who prefer a single command:
sudo dnf install -y vagrant vagrant-libvirt git docker nfs-utils &&
sudo firewall-cmd --permanent --add-service=nfs &&
sudo firewall-cmd --permanent --add-service=rpc-bind &&
sudo firewall-cmd --permanent --add-service=mountd &&
sudo firewall-cmd --reload && 
vagrant plugin install vagrant-openshift
Move to a folder where OpenShift source can be stored and clone it down:

git clone https://github.com/openshift/origin.git

Move into the directory and create a CentOS 7 configuration file instead of the default Fedora 21 one (Fedora 21 seems to be close to its end of life, so I chose a distribution with longer support):
cd origin
vagrant origin-init --stage inst --os centos7 openshift
Check the configuration:
cat .vagrant-openshift.json  | grep centos7
It should produce the following:
  "os": "centos7",
    "box_name": "centos7_inst",
    "box_url": "http://mirror.openshift.com/pub/vagrant/boxes/openshift3/centos7_virtualbox_inst.box"
    "box_name": "centos7_base",
    "box_url": "http://mirror.openshift.com/pub/vagrant/boxes/openshift3/centos7_libvirt_inst.box"
    "ssh_user": "centos"
    "ssh_user": "centos"
Proceed to stand up the vagrant box (in the origin folder above):
vagrant up
It should continue to stand up the machine; enter the password as needed for the network settings and the NFS mount. If there are errors on the NFS mount, please use the docs above and other info to help debug. Once the machine comes up successfully, proceed to ssh into the VM (and type the password as necessary):

vagrant ssh

Once inside the machine, we can pull the images to ensure they are the latest:

. /data/src/examples/sample-app/pullimages.sh

Download and start the necessary binaries for OpenShift v3:
# Download the binary
curl -L https://github.com/openshift/origin/releases/download/v1.0.0/openshift-origin-v1.0.0-67617dd-linux-amd64.tar.gz | tar xzv
# Change to root user (for below scripts)
sudo su -
# http://fabric8.io/guide/openShiftInstall.html
export OPENSHIFT_MASTER=https://$(hostname -I | cut -d ' ' -f1):8443
echo $OPENSHIFT_MASTER
export PATH=$PATH:$(pwd)
# Create the log directory in advance
mkdir -p /var/lib/openshift/
chmod 755 /var/lib/openshift
# Remove previously generated config, if any
rm -rf openshift.local.config
nohup openshift start \
        --cors-allowed-origins='.*' \
        --master=$OPENSHIFT_MASTER \
        --volume-dir=/var/lib/openshift/openshift.local.volumes \
        --etcd-dir=/var/lib/openshift/openshift.local.etcd \
        > /var/lib/openshift/openshift.log &
tail -f   /var/lib/openshift/openshift.log

Please wait until the log prints out "listening" before proceeding to the next steps.

One may then use a web browser on the host machine to visit the OpenShift console at VM_ADDRESS:8443. The VM_ADDRESS can be found by running the following command in the VM:

ifconfig 

It should show output similar to the following for eth0 (the other interfaces are for Docker and OpenShift):

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.121.169  netmask 255.255.255.0  broadcast 192.168.121.255
        inet6 fe80::5054:ff:fec6:ba8d  prefixlen 64  scopeid 0x20<link>
        ether 52:54:00:c6:ba:8d  txqueuelen 1000  (Ethernet)
        RX packets 184375  bytes 442794303 (422.2 MiB)
        RX errors 0  dropped 11  overruns 0  frame 0
        TX packets 119825  bytes 15028821 (14.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
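If you prefer a single value over scanning the ifconfig output, the primary address can also be printed directly (this is the same expression used for OPENSHIFT_MASTER above):

hostname -I | cut -d ' ' -f1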
In the Vagrant VM, run the following commands:

# Link the current config
mkdir -p ~/.kube
ln -s `pwd`/openshift.local.config/master/admin.kubeconfig ~/.kube/config
export CURL_CA_BUNDLE=`pwd`/openshift.local.config/master/ca.crt
sudo chmod a+rwX openshift.local.config/master/admin.kubeconfig
sudo chmod +r openshift.local.config/master/openshift-registry.kubeconfig
 
# Create local registry
oadm registry --create --credentials=openshift.local.config/master/openshift-registry.kubeconfig --config=openshift.local.config/master/admin.kubeconfig
# Check the status
oc describe service docker-registry --config=openshift.local.config/master/admin.kubeconfig

The above should print out the docker registry info (please keep checking until the Endpoints field is no longer <none>).
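One way to keep an eye on this is to simply re-run the check every few seconds, for example:

# Re-run until the Endpoints line shows an IP:port instead of <none>
watch -n 5 "oc describe service docker-registry --config=openshift.local.config/master/admin.kubeconfig | grep -i endpoint"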

Use the following command to log in as a sample user:
oc login --certificate-authority=openshift.local.config/master/ca.crt -u test-admin -p test-admin

Then use the following commands to create a Java EE sample application with WildFly (please check the original blog post: http://blog.arungupta.me/openshift-v3-getting-started-javaee7-wildfly-mysql/):

# Create new project
oc new-project test --display-name="OpenShift 3 WildFly" --description="This is a test sample project to test WildFly on OpenShift 3"

 # Create new app
oc new-app -f https://raw.githubusercontent.com/bparees/javaee7-hol/master/application-template-jeebuild.json

# Trace the build 
oc build-logs jee-sample-build-1

The above command should yield something similar to the following (it may take a few minutes):
I0628 04:40:08.327578       1 sti.go:388] Copying built war files into /wildfly/standalone/deployments for later deployment...
I0628 04:40:08.352066       1 sti.go:388] Copying config files from project...
I0628 04:40:08.353790       1 sti.go:388] ...done
I0628 04:40:16.662515       1 sti.go:96] Using provided push secret for pushing 172.30.6.81:5000/test/jee-sample image
I0628 04:40:16.662562       1 sti.go:99] Pushing 172.30.6.81:5000/test/jee-sample image ...
I0628 04:40:24.410955       1 sti.go:103] Successfully pushed 172.30.6.81:5000/test/jee-sample


One should then be able to describe the services:
oc describe services

Output:
NAME       LABELS                                   SELECTOR        IP(S)           PORT(S)
frontend   template=application-template-jeebuild   name=frontend   172.30.88.128   8080/TCP
mysql      template=application-template-jeebuild   name=mysql      172.30.250.39   3306/TCP


One may use an ssh tunnel to expose the website on the local workstation: open a new terminal and move into the origin code directory (the IP may differ from the above; please check the IP of the frontend service and change it as necessary).

From the local workstation (not the vagrant box), use the following:
cd origin 
vagrant ssh -- -L 9999:172.30.167.74:8080 
The IP and port depend on the output above; 9999 is any available port on the local machine. One may then open localhost:9999 in the local workstation's browser and it should show the Java app.

Clean up everything (it will remove everything, please be warned)

oc delete pods,services,projects --all

Shut down the vagrant boxes using the following command on the host workstation:

vagrant halt 

Or use the following command on host workstation to destroy the VM (in case something goes wrong):

vagrant destroy --force


Sunday, September 7, 2014

CentOS Docker Image with Tomcat 7

I have started to experiment with Docker. However, most blog posts currently use examples with Ubuntu and Tomcat. I am more interested in a Fedora/CentOS/RHEL example with Tomcat 7, so I decided to do my own.

Instead of using a pre-built image, I decided to experiment with building my own CentOS image.

Thanks to the contribution on GitHub from https://github.com/blalor/docker-centos-base

I modified it a little.

Following are the steps I took (these steps do not assume any particular base OS, since some people may want to produce an image without having CentOS installed on the hardware).

The overall summary is as follows:

Use Vagrant and VirtualBox to set up a minimal CentOS machine -> use that minimal virtual machine to build a supermin-style Docker base image with febootstrap -> use the host to build the Docker image.

Please download vagrant

https://www.vagrantup.com/

Please then download VirtualBox

https://www.virtualbox.org/

Install both.

Execute the following:
mkdir centos_docker_builder;

cd centos_docker_builder;

vagrant box add centos65-x86_64-20140116 https://github.com/2creatives/vagrant-centos/releases/download/v6.5.3/centos65-x86_64-20140116.box

cat << EOF > Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "centos65-x86_64-20140116"
#  config.vm.provision "shell", path: "auto_build_setup.sh"
  config.vm.synced_folder ".", "/vagrant", :mount_options => ["dmode=777","fmode=666"]
end
EOF

cat << 'EOFF' > start-tomcat.sh
#!/bin/bash

# From https://github.com/arcus-io/docker-tomcat7

ADMIN_USER=${ADMIN_USER:-admin}
ADMIN_PASSWORD=${ADMIN_PASSWORD:-admin}
MAX_UPLOAD_SIZE=${MAX_UPLOAD_SIZE:-52428800}

cat << EOF > /opt/apache-tomcat/conf/tomcat-users.xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Grant the manager/admin roles to the configured admin account
     (the role names here are the usual Tomcat 7 defaults) -->
<tomcat-users>
  <role rolename="manager-gui"/>
  <role rolename="manager-script"/>
  <role rolename="admin-gui"/>
  <user username="${ADMIN_USER}" password="${ADMIN_PASSWORD}" roles="manager-gui,manager-script,admin-gui"/>
</tomcat-users>
EOF

if [ -f "/opt/apache-tomcat/webapps/manager/WEB-INF/web.xml" ];
then
   chmod 664 /opt/apache-tomcat/webapps/manager/WEB-INF/web.xml
   sed -i "s^.*max-file-size.*^\t<max-file-size>${MAX_UPLOAD_SIZE}</max-file-size>^g" /opt/apache-tomcat/webapps/manager/WEB-INF/web.xml
   sed -i "s^.*max-request-size.*^\t<max-request-size>${MAX_UPLOAD_SIZE}</max-request-size>^g" /opt/apache-tomcat/webapps/manager/WEB-INF/web.xml
fi

/bin/sh -e /opt/apache-tomcat/bin/catalina.sh run
EOFF

wget http://www.carfab.com/apachesoftware/tomcat/tomcat-7/v7.0.55/bin/apache-tomcat-7.0.55.tar.gz

cat << 'EOF' > build_centos.sh
#!/bin/bash

set -e
## Following script is coming from GitHub from https://github.com/blalor/docker-centos-base
## Thanks for the code

## requires running as root because filesystem package won't install otherwise,
## giving a cryptic error about /proc, cpio, and utime.  As a result, /tmp
## doesn't exist.
[ $( id -u ) -eq 0 ] || { echo "must be root"; exit 1; }

tmpdir=$( mktemp -d )
trap "echo removing ${tmpdir}; rm -rf ${tmpdir}" EXIT

febootstrap \
    -u http://mirrors.mit.edu/centos/6.5/updates/x86_64/ \
    -i centos-release \
    -i yum \
    -i iputils \
    -i tar \
    -i which \
    -i http://mirror.pnl.gov/epel/6/i386/epel-release-6-8.noarch.rpm \
    centos65 \
    ${tmpdir} \
    http://mirrors.mit.edu/centos/6.5/os/x86_64/

febootstrap-run ${tmpdir} -- sh -c 'echo "NETWORKING=yes" > /etc/sysconfig/network'

## set timezone of container to UTC
febootstrap-run ${tmpdir} -- ln -f /usr/share/zoneinfo/Etc/UTC /etc/localtime

febootstrap-run ${tmpdir} -- yum clean all

## xz gives the smallest size by far, compared to bzip2 and gzip, by like 50%!
febootstrap-run ${tmpdir} -- tar -cf - . | xz > centos65.tar.xz
EOF

chmod a+x build_centos.sh

cat << EOF > Dockerfile
FROM scratch

MAINTAINER Danil Ko

ADD centos65.tar.xz /

# Need to update additional selinux packages due to https://bugzilla.redhat.com/show_bug.cgi?id=1098120
RUN yum -y install java-1.7.0-openjdk-devel wget http://mirror.centos.org/centos/6.5/centosplus/x86_64/Packages/libselinux-2.0.94-5.3.0.1.el6.centos.plus.x86_64.rpm http://mirror.centos.org/centos/6.5/centosplus/x86_64/Packages/libselinux-utils-2.0.94-5.3.0.1.el6.centos.plus.x86_64.rpm

# Use COPY to preserve the tar file without extracting it (ADD would auto-untar)
COPY apache-tomcat-7.0.55.tar.gz /tmp/

RUN cd /opt; tar -xzf /tmp/apache-tomcat-7.0.55.tar.gz; mv /opt/apache-tomcat* /opt/apache-tomcat; rm /tmp/apache-tomcat-7.0.55.tar.gz;

ADD start-tomcat.sh /usr/local/bin/start-tomcat.sh

EXPOSE 8080

CMD ["/bin/sh", "-e", "/usr/local/bin/start-tomcat.sh"]

EOF

vagrant up

vagrant ssh -c "cd /vagrant; sudo bash build_centos.sh"

vagrant destroy

# Move the  centos_docker_builder folder to a docker host machine, in this example, it is the same machine
# Also download the tomcat file and name it as apache-tomcat-7.0.55.tar.gz

docker build -t centosimage .

# Run as a background daemon process and with the container name webtier
docker run -d --name webtier centosimage

# In a separate terminal, run the following (this assumes only the webtier container is running; otherwise use docker ps -a to find the container id and run docker inspect against it)

docker inspect webtier | grep IP
        "IPAddress": "172.17.0.60",
        "IPPrefixLen": 16,

# On a web browser, do

172.17.0.60:8080

The Tomcat welcome page will show up.
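If you would rather verify from the terminal than a browser, a quick check against the same address reported by docker inspect above would be:

curl -I http://172.17.0.60:8080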



To clean up the container, one may run

docker stop webtier; docker rm webtier;


To clean up the image, after the above command one may run

docker rmi centosimage;

Sunday, July 13, 2014

OpenShift Git History Clean Up

Recently, I found that the git history started to fill up the allowed disk space in the gear, since I always deployed the built war files into the gear through git, and git stores the full binary delta every time there is a new push.
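To see how much space the history actually takes, the repository object store can be inspected in the local clone (and likewise in the gear over ssh), for example:

# Run inside the local clone
git count-objects -v
du -sh .git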

So I started to look around for a solution to the problem.

The first solution I encountered is from an OpenShift online answer that describes how to clean up the remote repo within the gear:

https://www.openshift.com/forums/openshift/how-to-erase-all-history-from-a-git-repository-on-openshift-and-start-over-with

This solution works rather well. It removes the entire history, so I used it for quite some time.
#!/bin/bash

# $gear_ssh_url and $gear_git_url hold the gear's ssh and git URLs (set them before running)

# Clean up gear info: wipe the bare repo on the gear and re-initialize it
ssh $gear_ssh_url "cd git/*.git; rm -rf *; git init --bare; exit"

# Create a fresh local repo to overwrite the history (or do a new git clone;
# I just found it faster to start a new repo and point it at the remote)
git init

git add -A .

current_date=`date`

git commit -m "Automatic Push as of $current_date due to code change"

git remote add origin $gear_git_url

git push origin master -u --force



However, recently I started to think that I should keep at least one commit of history as a backup, so I can roll back more easily (I found binary deployments seem to be harder to manage than git; maybe I did not understand the full feature yet). So I started to look for a git solution to clean up history.

The following is the other solution I found:

http://stackoverflow.com/questions/11929766/how-to-delete-all-git-commits-except-the-last-five

It works very well: it removes the older commit history (and therefore reduces the size) but also leaves at least one commit that I can revert back to.

One thing the solution does not spell out is that it only impacts the local repository; the remote is only updated once a new commit is pushed along with the cleaned-up repository data. So without a new commit, if somebody clones the repo again, all the history will still be there, because the change has not reached the remote repo yet.

So I made some adjustments:

#!/bin/bash

# Reference from the article http://stackoverflow.com/questions/11929766/how-to-delete-all-git-commits-except-the-last-five

current_branch="$(git branch --no-color | cut -c3-)" ;
current_head_commit="$(git rev-parse $current_branch)" ;
echo "Current branch: $current_branch $current_head_commit" ;

# A B C D (D is the current head commit), B is new_history_begin_commit
new_history_begin_commit="$(git rev-parse $current_branch~1)" ;
echo "Recreating $current_branch branch with initial commit $new_history_begin_commit ..." ;

git checkout --orphan new_start $new_history_begin_commit ;
git commit -C $new_history_begin_commit ;
git rebase --onto new_start $new_history_begin_commit $current_branch;
git branch -d new_start ;

git reflog expire --expire=now --all;
git gc --prune=now;

# Still require a push for remote to take effect, otherwise the push will not go through as there is no change
if [ -f .invoke_update ];
then
      rm -rf .invoke_update;
else
      touch .invoke_update;
fi

git add -A .;
current_date=`date`;
git commit -m "Force clean up history $current_date";
git push origin master --force;

It first cleans up the local repo as described in the post, then makes a dummy commit and pushes the change to the remote.
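After the script finishes, a quick way to confirm the cleanup took effect locally is, for example:

git log --oneline        # the history should now be only a couple of commits deep
git count-objects -v     # the repository size should have dropped after the gc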

Thanks.

Sincerely,
Danil

Sunday, February 10, 2013

Jersey + Multipart + Maven

I thought I should write about a problem that takes a long time for me to solve in case I forget about it.

I looked through tutorials and other articles to learn about Jersey multipart support. However, I ran into missing-dependency issues when following these tutorials. I finally solved it by setting up my Maven POM file in the following way:

<dependencies>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>3.8.1</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>com.sun.jersey</groupId>
    <artifactId>jersey-server</artifactId>
    <version>1.9</version>
  </dependency>
  <dependency>
    <groupId>com.sun.jersey.contribs</groupId>
    <artifactId>jersey-multipart</artifactId>
    <version>1.9</version>
  </dependency>
  <dependency>
    <groupId>com.sun.jersey</groupId>
    <artifactId>jersey-json</artifactId>
    <version>1.9</version>
  </dependency>
  <dependency>
    <groupId>org.jvnet.mimepull</groupId>
    <artifactId>mimepull</artifactId>
    <version>1.9</version>
  </dependency>
</dependencies>

In short, the problem seems to come from version differences across the Jersey libraries and the mimepull library. All of them should be the same version. So even though other Jersey libraries already had a 1.14 release, it was safer to use version 1.9, since that was the latest version of mimepull.

The missing dependency error messages were not very informative in this case.
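When hitting this kind of error, one way to see which artifact versions Maven actually resolved (and to spot a mismatch) is to print the dependency tree and filter it (assuming a Unix shell):

mvn dependency:tree | grep -E 'jersey|mimepull'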

Thanks to the authors of the original articles and their commenters who contributed to the solution.

Hope it helps others.

Chinese translation:

I recently spent quite some time solving a problem, so I want to write down the solution before I forget it.

Over the past few days I searched online for tutorials about Jersey multipart. After reading many tutorials I wrote a simple example myself, but ran into a missing dependency problem at deploy time. I finally solved it after making the following changes to the Maven POM file:


<dependencies>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>3.8.1</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>com.sun.jersey</groupId>
    <artifactId>jersey-server</artifactId>
    <version>1.9</version>
  </dependency>
  <dependency>
    <groupId>com.sun.jersey.contribs</groupId>
    <artifactId>jersey-multipart</artifactId>
    <version>1.9</version>
  </dependency>
  <dependency>
    <groupId>com.sun.jersey</groupId>
    <artifactId>jersey-json</artifactId>
    <version>1.9</version>
  </dependency>
  <dependency>
    <groupId>org.jvnet.mimepull</groupId>
    <artifactId>mimepull</artifactId>
    <version>1.9</version>
  </dependency>
</dependencies>


In summary, the Jersey libraries and mimepull must be the same version; otherwise the missing dependency problem appears. So even though the Jersey libraries already have a 1.14 release, version 1.9 has to be used here, because mimepull only goes up to 1.9.

I had not expected at all that a missing dependency error would be related to version mismatches.

I hope this article can help those who need it.

Original Tutorial Site:
http://www.mkyong.com/webservices/jax-rs/file-upload-example-in-jersey/

Sincerely,
Danil