book and code


HOW TO VPN FROM AWS

by Jamey

Wouldn’t it be nice if we could utilize the resources and bandwidth of AWS while maintaining the privacy of a VPN? Well, look no further, because I seem to have stumbled upon a solution, and this one is going to be a doozy. The process I’m about to outline may not be the easiest way to achieve this goal, and I’m sure other methods exist (probably involving some form of split tunneling). That said, this particular method worked flawlessly for my needs, and I figured I would share it to help those in need of this niche form of connectivity. I don’t see Amazon putting anything in place to monitor or stop this behavior, so I’m going to go ahead and put this out there to help spread the privacy love.


The problem

If you have ever used AWS for red team penetration testing or “grayhat” activities such as scanning the entire Internet with tools like masscan, you know it can be a real pain in the ass when you trigger their terms of service and have to explain what happened and what you did to resolve the issue before your account gets shut down. If you are familiar with AWS, you’ll know there is no shortage of information on setting up VPNs in AWS, but most of that documentation covers point-to-point or site-to-site VPNs.

If you want to hide your home network traffic from your ISP, you simply use a VPN client. But if you aren’t using split tunneling, or your VPN client doesn’t provide that capability, then all traffic goes out through the VPN interface, and you lose access to the host from your local network.

Such is the case when trying to use a VPN remotely. If you are connected via SSH, then as soon as you activate the VPN adapter, all traffic is forced through the VPN interface, including your SSH session, which either dies or hangs indefinitely, and you will be unable to reconnect to your remote instance until normal connectivity has been restored.
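To see why the session dies, look at the routing table. OpenVPN-based clients typically inject a pair of /1 routes that are more specific than your default route and therefore win, so the return path for your SSH packets shifts into the tunnel. A sketch of what this typically looks like (the interface names and gateway addresses below are illustrative, not from my actual instance):

```shell
# Before connecting: traffic leaves via the instance's primary interface.
ip route show
# default via 172.31.0.1 dev eth0

# After connecting: two /1 routes together cover all of IPv4 and beat the
# default route by prefix length, steering everything into the tunnel.
ip route show
# 0.0.0.0/1 via 10.8.0.1 dev tun0
# 128.0.0.0/1 via 10.8.0.1 dev tun0
# default via 172.31.0.1 dev eth0
```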


The solution

I’ll go ahead and provide a TL;DR up front before going into all the details: use an AWS Nitro-based instance, which provides you with browser-based access to the serial console.

In this example, we are going to use Ubuntu 20.04 on a c5n.4xlarge instance, which gives us a 25 Gbps network connection, 16 vCPUs (few enough that we don’t need to file an explicit request to increase our vCPU quota), and 42 GB of RAM. This should be enough to scan the Internet at a decent speed, although if you are scanning multiple ports, you may want to fill out the vCPU quota increase request, which unlocks the instance types with even larger network connections. The c5n.4xlarge currently runs you $0.864/hour (just under $650/month, not including traffic and storage), and if you are worried about that, then you are more than likely not thinking like a hacker. Should creating a throwaway account make you feel guilty? Not in the slightest. Bezos can afford us this simple pleasure in life, it doesn’t come without effort on the part of the user, and it has a negligible effect on my sense of morality. Anyways, we’ll set this c5n.4xlarge Ubuntu guy up with a 100GB io2 SSD, and we’re good to go.
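For reference, the console setup above can be approximated with the AWS CLI. This is only a sketch, not my exact procedure: the AMI ID, key pair name, and security group below are placeholders you’d substitute with your own, and --dry-run validates the call without actually launching (and billing you for) anything:

```shell
# Launch a c5n.4xlarge running Ubuntu 20.04 with a 100 GB io2 root volume.
# ami-XXXXXXXX, my-key, and sg-XXXXXXXX are placeholders -- look up the
# current Ubuntu 20.04 AMI for your region before running this for real.
aws ec2 run-instances \
    --dry-run \
    --image-id ami-XXXXXXXX \
    --instance-type c5n.4xlarge \
    --key-name my-key \
    --security-group-ids sg-XXXXXXXX \
    --block-device-mappings \
        '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":100,"VolumeType":"io2","Iops":3000}}]'
```

Note that io2 volumes require an explicit Iops value; 3000 here is just a reasonable starting point.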

If you want to maintain one of these high-bandwidth VPN instances long-term, the cheapest I saw was the a1.medium with an up-to-10 Gbps connection, 1 vCPU, and 2GB of RAM, currently running at $0.0255/hour (costing you just under $20/month, excluding traffic and storage).


Preparing for serial access

You will need a user with a password to access the serial console. For the purposes of this example, we are going to use the username serialuser and password password123$, so go ahead and SSH into the instance and create the user:

sudo adduser serialuser

Continue with all of the defaults, and then we need to add this guy to sudoers:

sudo usermod -aG sudo serialuser

On Amazon Linux (or other RHEL-based distros like CentOS), you would just replace the sudo group with the wheel group in the above command.

Finally, I like to make sure that everything is fully updated before I begin my fuckery, so let’s go ahead and get everything in order (if you want to add NOPASSWD:ALL in /etc/sudoers, now would be the time to run sudo visudo):

sudo apt update
sudo apt dist-upgrade
sudo apt autoremove
sudo reboot
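If you do opt for passwordless sudo, the line you’d add through sudo visudo would look something like this (scoping it to serialuser rather than the whole sudo group is my suggestion, not part of the steps above):

```shell
# /etc/sudoers -- edit only via: sudo visudo
serialuser ALL=(ALL) NOPASSWD:ALL
```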

At this point, you should be able to select your instance in the AWS EC2 console and click Connect. Select the “EC2 serial console” tab, make sure serial access is enabled, and click “Connect”. If you don’t see anything on the screen after a while, reboot the instance from the console and repeat the same procedure; you should see your instance booting and eventually be presented with a login prompt. Enter the credentials for serialuser that we created previously, and you’re good to go.
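Serial console access is an account-level, per-region setting, so if the console shows it disabled, you can also toggle it from the CLI. These two subcommands are my addition for convenience, not part of the walkthrough above:

```shell
# Check whether serial console access is enabled for this account/region.
aws ec2 get-serial-console-access-status

# Enable it (requires the appropriate IAM permissions).
aws ec2 enable-serial-console-access
```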


Example VPN setup

We’re going to use ProtonVPN as an example. I have the Plus plan, but for this example we’ll stick to what the free tier offers, so the connection location I choose may be different from the one you choose.

Let’s install the dependencies:

sudo apt install python3-pip openvpn dialog

We’ll be ignoring best practices during this example (hence installing pip3 as an OS package). We will also be installing protonvpn-cli from PyPI, because I like that version better than the official client and find it easier to use.

sudo -H pip3 install protonvpn-cli

Like I said, no best practices in sight. Using sudo -H will install protonvpn-cli as root in /usr/local/bin, which is already in our $PATH.

Next, run the following command to enter all of your ProtonVPN information and get it all set up:

sudo protonvpn init

Next, we’re going to connect to the VPN within a screen session so that we can do other stuff in the serial console.

screen -LS vpn
sudo protonvpn c

Choose your server and protocol, and you should be connected. Finally, let’s take care of some DNS stuff real quick, since resolvconf can really try to burn you:

sudo mv /etc/resolv.conf /etc/resolv.conf.bak
cat /etc/resolv.conf.bak | sudo tee /etc/resolv.conf

Before disconnecting from the VPN, you will want to restore your original resolv.conf by running sudo mv /etc/resolv.conf.bak /etc/resolv.conf. You can then disconnect from the VPN by running sudo protonvpn d, and your original DNS settings should be written back to /etc/resolv.conf.

Press Ctrl+A, then D to detach from the screen session and get back to the normal console, then confirm by getting your current external IP and checking its information, as in the example command/output below:

$ curl icanhazip.com
5.8.16.166

$ curl ipinfo.io/5.8.16.166
{
  "ip": "5.8.16.166",
  "city": "Saint Petersburg",
  "region": "St.-Petersburg",
  "country": "RU",
  "loc": "59.9386,30.3141",
  "org": "AS206804 EstNOC OY",
  "postal": "190000",
  "timezone": "Europe/Moscow",
  "readme": "https://ipinfo.io/missingauth"
}

Bingo-bango.


How to Backup FreeNAS to Google Drive Using Duplicati

by Jamey

We all have our own backup solutions, some better than others, but the standard is the 3-2-1 Backup Strategy, which suggests having at least (3) copies of your data (not including the production data itself), with (2) of those copies stored locally on different hard drives, and (1) copy stored somewhere offsite. Most of us datahoarders and homelabbers have some implementation of this rule in one form or another.

If you are just looking for the tutorial and want to skip through all of my personal backstory bullshit, just scroll on to the end, and don’t complain about it. This is a personal blog, not some Medium article. At the end, I will discuss how to set up incremental, versioned, block-level, encrypted backups to Google Drive on FreeNAS.

Note: the single caveat is that storage is only unlimited for G Suite Business accounts with 5 or more users (otherwise, you will be paying normal Google Drive storage fees).


Highly-Available, Scalable WordPress using ECS/Docker & RDS/MariaDB

by Jamey

The recent Amazon S3 outage showed us just how delicate the state of the web is, especially when you don’t utilize Amazon’s built-in redundancy features. My goal was to create a highly-available and scalable WordPress installation in AWS using Docker: auto-scaling Docker clusters in multiple Availability Zones running Nginx, PHP-FPM, and a Redis client, with the Docker config and WordPress install living on EFS volumes mounted in the Docker containers. I would use RDS MariaDB for the database backend and Redis-based ElastiCache for serving up the site blazing fast from memory.
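As a rough sketch of the database tier described above (every identifier and credential below is a placeholder, not a value from my actual build), a Multi-AZ MariaDB instance on RDS can be created with:

```shell
# Create a Multi-AZ MariaDB instance for the WordPress backend.
# wp-db, admin, and the password are placeholders -- substitute your own.
aws rds create-db-instance \
    --db-instance-identifier wp-db \
    --db-instance-class db.t3.medium \
    --engine mariadb \
    --allocated-storage 20 \
    --master-username admin \
    --master-user-password 'changeme123!' \
    --multi-az
```

The --multi-az flag is what buys you the redundancy the paragraph above is after: RDS keeps a synchronous standby in a second Availability Zone and fails over automatically.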
