An open-proxy application with a PHP backend was developed to demonstrate the importance of Internet privacy. Like any proxy, the application forwards requests on behalf of the user from its own IP address and returns the response it receives from the Internet, thereby hiding the user's IP address.
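The forwarding logic itself is not shown in this post; a minimal sketch of the idea in PHP with cURL might look like the following (the url query parameter, the default target, and the absence of header and error handling are simplifications for illustration, not the project's actual code):

<?php
// Minimal illustration of a forwarding proxy (not the project's exact code).
// The client requests e.g. /?url=https://example.com and the server fetches
// that page on the client's behalf, so the target site only sees this host's IP.
$target = $_GET['url'] ?? 'https://example.com';

$ch = curl_init($target);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);   // capture the response body
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);   // follow redirects
$body = curl_exec($ch);
curl_close($ch);

echo $body;   // relay the fetched content back to the client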
The proxy can be verified by exposing the local host to the Internet and accessing it from a computer on a different network.
The IP address reported when searching through the proxy differs from the one reported by a regular search, which confirms that the proxy server is working.
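One way to see the difference from the command line is to ask a public IP-echo service (api.ipify.org is used here only as an example) for your address directly and then through the proxy; the query-string form of the proxy URL carries over from the sketch above and is likewise an assumption:

$ curl https://api.ipify.org                                # your own public IP
$ curl "http://<proxy-host>/?url=https://api.ipify.org"     # the proxy's public IP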
Deployment
Docker
Docker packages software into containers, standardized units that isolate an application from its environment. For the uninitiated, resources on Docker can be found here.
Docker was used in this project to make the application more flexible to deploy and easier to migrate.
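The project's Dockerfile is not reproduced in this post; for a simple PHP application it would typically look something like the sketch below (the php:8-apache base image and the assumption that all sources live in the project root are illustrative, not taken from the repository):

# Illustrative Dockerfile for a PHP application served by Apache
FROM php:8-apache            # assumption: any PHP + Apache base image would do
COPY . /var/www/html/        # copy the application sources into Apache's web root
EXPOSE 80                    # Apache listens on port 80 inside the container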
We build our image by running this command in the application's root directory:
$ docker build -t kuberpy:latest .
Output:
This image can now be used to run a container:
$ docker run -p 8080:80 kuberpy:latest
The -p flag publishes port 80 in the container to port 8080 on the host machine.
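Once the container is running, a quick smoke test from the host (the exact response depends on the application) might look like:

$ docker ps                      # confirm the container is up
$ curl http://localhost:8080/    # the application should respond on the published port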
Kubernetes
Kubernetes is an open-source container orchestration tool. It provides self-healing, high availability, auto-scaling, and many other features.
Note: Since the application is deployed on a Raspberry Pi, we used k3s, a lightweight Kubernetes distribution built for edge computing.
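If k3s is not already installed on the Pi, the official quick-start script sets it up with a single command (check the k3s documentation for the currently recommended options):

$ curl -sfL https://get.k3s.io | sh -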
To deploy the Kubernetes infrastructure, go to the application's root directory and run:
$ kubectl create -f deployment.yml
$ kubectl create -f service.yml
Let us now look at these two files one by one.
deployment.yml
This file contains the configuration for deploying the application: the number of pods (replicas) and the ports used.
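The file itself is not reproduced here, but a minimal deployment.yml consistent with what is described in this post (one replica of the kuberpy:latest image under the name k3s-deploy; the app: kuberpy label is an assumption) would look roughly like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: k3s-deploy
spec:
  replicas: 1                    # a single pod, matching the listing below
  selector:
    matchLabels:
      app: kuberpy               # assumed label linking the deployment to its pods
  template:
    metadata:
      labels:
        app: kuberpy
    spec:
      containers:
      - name: kuberpy
        image: kuberpy:latest    # the image built earlier with docker build
        ports:
        - containerPort: 80      # the port the PHP application listens on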
Listing the deployments with kubectl get deployments shows that we have one deployment, k3s-deploy.
Each deployment consists of pods; a pod is the smallest deployable unit in Kubernetes and is where the application actually runs.
As we can see from kubectl get pods, there is one pod running as part of the k3s-deploy deployment.
We can get more detail by describing an individual pod.
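The pod name is generated by Kubernetes, so it should be taken from the kubectl get pods listing; describing the pod prints its containers, image, node, IP, and recent events:

$ kubectl describe pod <pod-name>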
Now we have a deployment and its pod running. However, they have no way of communicating with anything outside the cluster. This is where service.yml comes in.
service.yml
This file handles the network configuration of the application. We use a LoadBalancer-type service to expose the application outside the cluster; the default ClusterIP type is only reachable from within the cluster.
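Again, a minimal sketch rather than the project's exact file (the service name, the app: kuberpy selector, and the 8080-to-80 port mapping are assumptions; k3s's built-in service load balancer is what makes type: LoadBalancer work on a single Raspberry Pi):

apiVersion: v1
kind: Service
metadata:
  name: kuberpy-service          # assumed name
spec:
  type: LoadBalancer             # exposed outside the cluster by k3s's ServiceLB
  selector:
    app: kuberpy                 # must match the pod label in deployment.yml
  ports:
  - port: 8080                   # port exposed on the load balancer / node
    targetPort: 80               # port the container listens on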
Looking further into the service (kubectl get services):
And that's it. Our application is now accessible on the host at the specified port.
Working demonstration: