tunnel-tool: A dashboard and API to share services.

Easy and secure access to your team's services. By vicjicama

Introduction

This post is about a tool that helps developers make their services available where they need them, in an easy and secure way. It provides a dashboard that lets you control, share, and review which services are available on your devices. The dashboard is useful for developers without SSH tunnel experience who want to share their services, and equally helpful if you are very experienced with tunnels but need a tool to manage multiple services across devices.

The goal of this tool is to save time and help a team of remote developers easily share their services with other team members, IoT devices, Kubernetes clusters, or integration test environments in a secure and intuitive way.


Features

Here is a list of the highlighted features compared with other alternatives.


How it works

The tool is a helper for something you might already be doing to share your services: an ssh -R remote port-forward combined with an ssh -L local port-forward.

The tool consists of two parts: one for the server, which runs on the exit node, and one for the client/dashboard, which is executed on the target devices.

If you want to share your services with the public, you might be doing something like: an ssh -R remote port-forward plus a reverse proxy.
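As raw OpenSSH commands, the pattern the tool automates looks roughly like the following sketch; the user, exit-node hostname, and all ports are hypothetical placeholders:

```shell
# On the device that owns the service (here on local port 3000), publish it
# on the exit node's port 7099 with a remote forward; runs until interrupted:
ssh -N -R 7099:localhost:3000 user@exit-node.example.com

# On the device that wants to consume the service, map the exit node's
# port 7099 back down to a local port with a local forward:
ssh -N -L 6379:localhost:7099 user@exit-node.example.com
```

For public sharing, the -R forward is combined with a reverse proxy on the exit node instead of a second -L tunnel; the tool sets up, monitors, and tears down these connections for you.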

The tool manages and controls multiple endpoints, following these steps for all connections:

The tool uses containers to separate your current SSH configuration and ports from those used by the server and the clients.

The containers allow us to reuse the same port across devices; for example, you can access www.device.local:6379 and www.another-device.local:6379 from the same device.

Some additional considerations for the connections and executions are:


GraphQL API

The UI in the examples is just a way to present and control the underlying API for the devices, outlets, and inlets. Here is an example query to get the list of devices with their outlets and inlets, followed by examples of how a connection is started and stopped.

query List {
  viewer {
    devices {
      list {
        deviceid
        outlets {
          list {
            outletid
            src {
              host
              port
            }
            state {
              status
              worker {
                workerid
                ip
                port
              }
            }
          }
        }
        inlets {
          list {
            inletid
            dest {
              host
              port
            }
            state {
              status
            }
          }
        }
      }
    }
  }
}
      
mutation Start {
  viewer {
    devices {
      device(deviceid: "banana-pi") {
        inlets {
          inlet(inletid: "vicjicama-lap.local:7099") {
            state {
              start {
                deviceid
              }
            }
          }
        }
      }
    }
  }
}
          
mutation Stop {
  viewer {
    devices {
      device(deviceid: "banana-pi") {
        inlets {
          inlet(inletid: "vicjicama-lap.local:7099") {
            state {
              stop {
                deviceid
              }
            }
          }
        }
      }
    }
  }
}
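These operations can also be exercised directly over HTTP. Below is a minimal sketch that builds the JSON body for a trimmed-down List query and shows the curl call; the endpoint URL and port are assumptions about where your dashboard is running, not something the tool documents here:

```shell
# Hypothetical GraphQL endpoint of the dashboard; adjust host/port as needed.
ENDPOINT="http://localhost:4000/graphql"

# A trimmed-down version of the List query above.
QUERY='query List { viewer { devices { list { deviceid } } } }'

# Wrap the query in the standard GraphQL-over-HTTP JSON envelope.
BODY=$(printf '{"query":"%s"}' "$QUERY")
echo "$BODY"

# Send it (commented out here, since it needs a running server):
# curl -s -X POST -H "Content-Type: application/json" -d "$BODY" "$ENDPOINT"
```

The Start and Stop mutations can be posted the same way, with the mutation text in the "query" field.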
          

Getting Started

You need Node.js, docker-compose, and docker installed on the server and on client devices that have control access to the server. (For pure edge devices like a Raspberry Pi, all you need is Node.js, and there are no additional requirements in the case of a Kubernetes-deployed edge device.)


Server exit node

For the server side you only need to execute the startup script and allow inbound traffic on the port you selected for the sshd service; 25000 is the default (you can change this to 80 or 443, for example).
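How you allow the port depends on your environment; on a host running the ufw firewall, for example, it could look like the sketch below (the firewall choice is an assumption, use whatever your host or cloud provider requires, e.g. an AWS security group rule):

```shell
# Allow inbound TCP on the default sshd port used by the tool.
sudo ufw allow 25000/tcp
```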

In our example we are going to use an EC2 instance that can be reached at tunnels.repoflow.com.

cd ~/server #Use the path of your preference
curl -s "https://raw.githubusercontent.com/vicjicaman/tunnel-server/master/start.sh" > start.sh
bash start.sh

After you execute the script, the keys folder path is printed to the console; in this example the folder is /home/gn5/server/workspace/keys. We will need it later to add the client devices' public keys.


Local client device

Once you have the server up and running, you need to initialize the client device. First, get the startup script for the client:

cd ~/local #Use the path of your preference
curl -s "https://raw.githubusercontent.com/vicjicaman/tunnel-local/master/start.sh" > start.sh

To initialize the device, run the script with only the deviceid argument: bash start.sh DEVICEID

bash start.sh vicjicama-lap

This creates a key file that we need to copy to the keys folder on the server. For this particular example, we copy the file /home/victor/local/workspace/keys/vicjicama-lap/vicjicama-lap.json to the folder /home/gn5/server/workspace/keys on the server.
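One way to do the copy is scp; the paths below are the ones from this example, while the SSH user and the reachable hostname of the server are assumptions about your setup:

```shell
# Copy the client device's key file into the server's keys folder.
scp /home/victor/local/workspace/keys/vicjicama-lap/vicjicama-lap.json \
    gn5@tunnels.repoflow.com:/home/gn5/server/workspace/keys/
```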

Once you have the key file in place, run the start script again, this time with the target server hostname or IP: bash start.sh DEVICEID HOSTNAME|IP

bash start.sh vicjicama-lap tunnels.repoflow.com

In our example we are using the server name tunnels.repoflow.com, but you can use the public IP as well.

We are going to repeat the process for another client device named kube-node; after starting the server and both client devices, the UI will look like the next screen.

The next video shows the process of adding one outlet from kube-node to vicjicama-lap; in this example we forward a React app port.

If you add more devices, outlets, and inlets, your dashboard will look something like the next screen. A general dashboard with all this information is very useful once you start having multiple services across multiple environments and multiple developers.


Use cases

Here is a list of some useful service-forwarding scenarios that we use:

I will write more about related use cases and features, such as pure edge containers, integration with Kubernetes, and integration with a microservices workflow.


Support and development

This version of the tool is free. Its development and maintenance are funded through dedicated support, enterprise/custom features, managed exit nodes, and other services.


Conclusion

Thanks for reading! I hope you liked this presentation of the tool and that you give it a try. I am sure it will help you save time while working with multiple services, their versions, and environments.

The idea and motivation behind this tool came from feedback from users of the linker-tool, which is heavily integrated with the Kubernetes API and services. Many linker users asked whether the linker-tool could be used standalone without a Kubernetes cluster, and requested things like removing the user tokens, easier public shared ports, pure edge clients, and more. The tunnel-tool is the result of that feedback and those ideas; we will be moving these improvements into the linker-tool as well, so stay tuned!

If you have any feedback, questions, or use cases to discuss, or if you just want to reach out, don't hesitate to contact me at vic@repoflow.com.