I recently had the pleasure of attending DeveloperWeek in Oakland, CA. In addition to working at the Akamai booth, making new friends, and spreading the good news about cloud computing, my teammate, Talia, and I were tasked with creating a demo to showcase the new VPC product.
Background
A virtual private cloud (VPC) enables private communication between two cloud computing instances, isolating network traffic from other Internet users, thereby improving security.
So how did I choose to display this? By building a little Pokémon dashboard, of course.
I deployed two applications, each consisting of an application server and a database server (four servers in total). The first application-and-database pair is deployed normally; the second is configured to work inside a VPC.
The front end of each app is built with Qwik and uses Tailwind for styling. The server side is powered by Qwik City (Qwik’s official meta-framework) and runs on Node.js hosted on a shared Linode VPS. Applications also use PM2 for process management and Caddy as a reverse proxy and SSL provider. The data is stored in a PostgreSQL database that also runs on a shared Linode VPS. Applications communicate with the database using Drizzle, an object-relational mapper (ORM) for JavaScript. The entire infrastructure for both applications is managed by Terraform using the Terraform Linode provider, which was new to me but made provisioning and destroying infrastructure really quick and easy (once I learned how it all works).
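As an aside, the Caddy part of that stack is pleasantly small. A minimal Caddyfile for this kind of setup might look roughly like the following (the domain and port here are placeholders; the real values come from the Terraform variables covered below):

```
example.com {
    # Proxy incoming requests to the Node.js app; Caddy provisions HTTPS automatically
    reverse_proxy localhost:3000
}
```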
If you’re interested, you can find the full code here: github.com/AustinGil/linode-vpc-demo
Demo
As I mentioned before, the demo implements two identical applications. There’s nothing special about it, but here’s a screenshot.
(I had to change the Pokémon names for reasons…)
There is nothing special about the technology here. I chose these tools because I like them, not necessarily because they were the best tools for the job.
The interesting thing is the infrastructure.
Application #1 essentially consists of two servers in the Akamai cloud: one for the application and one for the database. When a user loads the application, the application server pulls the data from the database, constructs the HTML, and returns the result to the user.
The problem here is how the database connection is configured. In some cases, you can set up a database server without knowing the IP addresses of the computers you want to allow access from (such as an application server). In these cases, it is not uncommon to allow any computer with the right credentials to connect to the database. This presents a security vulnerability as it could allow a bad actor to connect to the database and steal sensitive data.
A bad actor would still need the database host, port, username and password to gain access, so it’s not trivial. And as I said, this is not an uncommon practice, but we can do better.
If you know the IP address of each computer that needs access, a good solution might be to set up a firewall or VLAN. But if your infrastructure is more dynamic, with servers coming and going, maintaining a list of IP addresses can be cumbersome. That's where VPCs shine. You can configure servers to live inside a VPC and allow communication to flow freely between the computers on that network, and only between them.
That’s how app #2 is set up. Users can connect to the application server, which allows traffic from the public Internet but also lives inside the VPC. The application server connects to the database, which is also in the VPC, and only allows connections within the same network. Then the application server takes the data, builds the HTML, and returns the page to the user.
For an ordinary user, the experience is identical. The browser loads the table with the modified Pokémon data just fine. The VPC doesn't change anything for normal users.
For bad actors, however, the experience is different. Even if they somehow manage to get credentials to access the database, they won’t be able to connect due to network isolation from the VPC. Here, the VPC acts as a virtual firewall, ensuring that only devices on the same network can access the database.
(This concept is sometimes called “segmentation”)
Evidence
It’s great to show a demo and talk about the infrastructure with cute diagrams, but I always want to prove, even to myself, that things are working as expected. So I thought a good way to test it would be to try connecting directly to both databases using my database client, DBeaver.
For database #1, I set up a Postgres connection using the host IP address from my Akamai dashboard and the port, username, and password I set in my Terraform script. The connection worked as expected.
For database #2, all I had to change was the IP address, since both databases were provisioned by the same Terraform script. The only difference was that the second database server was placed in the same VPC as its application server and configured to only allow connections from computers within the same network.
As expected, I got an error when trying to connect, even though I had all the correct information.
The error doesn’t mention anything about VPC. It just says my IP address is not whitelisted in the config file. This makes sense. I could explicitly add the IP address of my home and get access to the database if needed, but that’s beside the point.
The key point is that I never explicitly whitelisted any IP address for Postgres. Yet the app server connected just fine, and everyone else was blocked, thanks to the VPC.
The Code
The last thing I'll touch on is the Terraform code that implements this project. You can find the full file here: github.com/AustinGil/linode-vpc-demo/blob/main/terraform/terraform.tf
It’s also worth mentioning that I tried to make this Terraform file reusable for other people (or for future me). That required a few more variables and configuration settings based on a tfvars file: github.com/AustinGil/linode-vpc-demo/blob/main/terraform/terraform.tfvars.example
Anyway, I’ll just highlight the key parts.
1. Configure the Terraform Provider
First, since I used the Linode Terraform provider, it's worth showing how to set it up:
```hcl
terraform {
  required_providers {
    linode = {
      source  = "linode/linode"
      version = "2.13.0"
    }
  }
}

variable "LINODE_TOKEN" {}

provider "linode" {
  token = var.LINODE_TOKEN
}
```
This section sets up the provider, as well as a variable that Terraform will either prompt you for or that you can provide in a tfvars file.
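For reference, such a tfvars file might look something like this. The values below are placeholders (except the region and subnet values, which are the ones I actually used), and this is not the full list of variables the project defines:

```hcl
# Example values only; adjust for your own deployment.
LINODE_TOKEN  = "your-linode-api-token"
REGION        = "us-sea"
VPC_SUBNET_IP = "10.0.0.0/24"
DB_PRIVATE_IP = "10.0.0.3"
```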
2. Set up the VPC and VPC subnet
Then I set up the actual vpc resource along with the subnet resource. This part required a lot of learning on my part.
```hcl
resource "linode_vpc" "vpc" {
  label  = "${local.app_name}-vpc"
  region = var.REGION
}

resource "linode_vpc_subnet" "vpc_subnet" {
  vpc_id = linode_vpc.vpc.id
  label  = "${local.app_name}-vpc-subnet"
  ipv4   = "${var.VPC_SUBNET_IP}"
}
```
Servers can only be added to VPCs in the same region. At the time of writing, there are thirteen regions where VPCs are supported. For the most up-to-date details, see the docs: linode.com/docs/products/networking/vpc/.
I tried setting my servers to San Francisco and ran into errors a few times before I realized it was not an available region, so I went with Seattle ("us-sea") instead.
Subnets were also a learning point for me. As a web application developer, I haven’t done much networking, so I had to do some research when asked to provide the “IPv4 range of this subnet in CIDR format”.
It turns out that there are three ranges of IPv4 addresses that are reserved for private networks (such as a VPC):
10.0.0.0 – 10.255.255.255
172.16.0.0 – 172.31.255.255
192.168.0.0 – 192.168.255.255
You must choose one of these three ranges, and you must express it in CIDR format, which is a way of representing the range of IP addresses you want to use. Don’t ask me for more details, because that’s all I know. Akamai has more documentation on subnets. I just went with 10.0.0.0/24.
Each server in the private network will have an IPv4 address within that range.
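If CIDR is new to you too, here's a small JavaScript sketch (not part of the demo) showing what the notation means: an address belongs to a subnet when its network bits, determined by the /N prefix length, match the subnet's base address.

```javascript
// Convert a dotted IPv4 address to an unsigned 32-bit integer.
function ipToInt(ip) {
  return ip.split('.').reduce((acc, octet) => (acc << 8) | Number(octet), 0) >>> 0;
}

// Check whether an address falls inside a CIDR range like "10.0.0.0/24".
function inCidr(ip, cidr) {
  const [base, bits] = cidr.split('/');
  // A /24 keeps the top 24 bits as the network mask.
  const mask = bits === '0' ? 0 : (~0 << (32 - Number(bits))) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(base) & mask);
}

console.log(inCidr('10.0.0.3', '10.0.0.0/24')); // true
console.log(inCidr('10.0.1.3', '10.0.0.0/24')); // false
```

With /24, the first 24 bits are fixed, leaving the last 8 bits for hosts, so the subnet spans 10.0.0.0 through 10.0.0.255.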
3. Deploy application servers
To have Terraform deploy my application servers, I used the linode_instance resource. I also used a linode_stackscript resource to create a reusable script that installs and configures software. It's essentially a Bash script stored in the Akamai cloud dashboard that you can reuse on new servers.
I won’t include the code here, but it installs Node.js 20 via NVM, installs PM2, clones my project repo, runs the app, and installs Caddy. You can see the StackScript content in the source code, but I want to focus on the Terraform stuff.
```hcl
resource "linode_instance" "application1" {
  depends_on = [
    linode_instance.database1
  ]
  image           = "linode/ubuntu20.04"
  type            = "g6-nanode-1"
  label           = "${local.app_name}-application1"
  group           = "${local.app_name}-group"
  region          = var.REGION
  authorized_keys = [linode_sshkey.ssh_key.ssh_key]
  stackscript_id  = linode_stackscript.configure_app_server.id
  stackscript_data = {
    "GIT_REPO"      = var.GIT_REPO,
    "START_COMMAND" = var.START_COMMAND,
    "DOMAIN"        = var.DOMAIN1,
    "NODE_PORT"     = var.NODE_PORT,
    "DB_HOST"       = linode_instance.database1.ip_address,
    "DB_PORT"       = var.DB_PORT,
    "DB_NAME"       = var.DB_NAME,
    "DB_USER"       = var.DB_USER,
    "DB_PASS"       = var.DB_PASS,
  }
}

resource "linode_instance" "application2" {
  depends_on = [
    linode_instance.database2
  ]
  image           = "linode/ubuntu20.04"
  type            = "g6-nanode-1"
  label           = "${local.app_name}-application2"
  group           = "${local.app_name}-group"
  region          = var.REGION
  authorized_keys = [linode_sshkey.ssh_key.ssh_key]
  stackscript_id  = linode_stackscript.configure_app_server.id
  stackscript_data = {
    "GIT_REPO"      = var.GIT_REPO,
    "START_COMMAND" = var.START_COMMAND,
    "DOMAIN"        = var.DOMAIN2,
    "NODE_PORT"     = var.NODE_PORT,
    "DB_HOST"       = var.DB_PRIVATE_IP,
    "DB_PORT"       = var.DB_PORT,
    "DB_NAME"       = var.DB_NAME,
    "DB_USER"       = var.DB_USER,
    "DB_PASS"       = var.DB_PASS,
  }

  interface {
    purpose = "public"
  }
  interface {
    purpose   = "vpc"
    subnet_id = linode_vpc_subnet.vpc_subnet.id
  }
}
```
The two resources are configured almost identically, with just a few notable differences:
- Application #2 includes the interface configuration that adds it to the VPC.
- The StackScript needs an IP address for the database. Application #1 uses the public IP address of database #1 (linode_instance.database1.ip_address). Application #2 uses a variable (var.DB_PRIVATE_IP). This variable shows up again later; it holds the private IP address assigned to database #2, which runs inside the VPC. Since that address can be assigned manually, I set it to 10.0.0.3.
Also note that they are deployed in the same region as the VPC, for the reasons I stated above.
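To see how those StackScript values end up being used, here's a hypothetical sketch of how the app side might assemble its Postgres connection string from the injected values. The actual demo uses Drizzle, and this helper (and its argument values) are mine, not from the repo:

```javascript
// Hypothetical helper: build a Postgres connection URL from the
// values that the StackScript passes to the app server.
function buildDbUrl({ host, port, name, user, pass }) {
  // encodeURIComponent guards against special characters in the password
  return `postgres://${user}:${encodeURIComponent(pass)}@${host}:${port}/${name}`;
}

// App #1 would get the database's public IP here;
// app #2 gets the VPC-private one (var.DB_PRIVATE_IP).
console.log(buildDbUrl({
  host: '10.0.0.3',
  port: 5432,
  name: 'pokemon',
  user: 'admin',
  pass: 'p@ssword',
}));
```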
4. Set up the database servers
The databases are also set up using linode_instance and linode_stackscript resources. Once again, I'll skip over the StackScript content, which you can find in the repository. It installs Postgres, sets up the database and credentials, and applies some configuration options.
```hcl
resource "linode_instance" "database1" {
  image           = "linode/ubuntu20.04"
  type            = "g6-nanode-1"
  label           = "${local.app_name}-db1"
  group           = "${local.app_name}-group"
  region          = var.REGION
  authorized_keys = [linode_sshkey.ssh_key.ssh_key]
  stackscript_id  = linode_stackscript.configure_db_server.id
  stackscript_data = {
    "DB_NAME"      = var.DB_NAME,
    "DB_USER"      = var.DB_USER,
    "DB_PASS"      = var.DB_PASS,
    "PG_HBA_ENTRY" = "host all all all md5"
  }
}

resource "linode_instance" "database2" {
  image           = "linode/ubuntu20.04"
  type            = "g6-nanode-1"
  label           = "${local.app_name}-db2"
  group           = "${local.app_name}-group"
  region          = var.REGION
  authorized_keys = [linode_sshkey.ssh_key.ssh_key]
  stackscript_id  = linode_stackscript.configure_db_server.id
  stackscript_data = {
    "DB_NAME"      = var.DB_NAME,
    "DB_USER"      = var.DB_USER,
    "DB_PASS"      = var.DB_PASS,
    "PG_HBA_ENTRY" = "host all all samenet md5"
  }

  interface {
    purpose = "public"
  }
  interface {
    purpose   = "vpc"
    subnet_id = linode_vpc_subnet.vpc_subnet.id
    ipv4 {
      vpc = var.DB_PRIVATE_IP
    }
  }
}
```
As with the application servers, the two database servers are largely the same, with only a few key differences:
- The second database includes the configuration to add to the VPC.
- Different entries are written to the client authentication file (pg_hba.conf). Database #1 allows connections from anywhere ("host all all all md5"), while database #2 allows access only from the same network ("host all all samenet md5").
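For context, here's roughly what those entries look like once written into pg_hba.conf. Each server gets only one of the two "host" lines; the column headers are from the standard pg_hba.conf format:

```
# TYPE  DATABASE  USER  ADDRESS  METHOD

# Database #1: accept password auth from any address
host    all       all   all      md5

# Database #2: accept password auth only from addresses on the same subnet
host    all       all   samenet  md5
```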
It's also worth noting that we explicitly assign the server's private IP address when configuring the VPC interface (var.DB_PRIVATE_IP). This is the same static value given to the application server, so that it can connect to the database inside the VPC.
Closing Thoughts
I hope this post has opened your eyes to what VPCs are, why they're cool, and when you might want to consider using one. It's like having your own little private internet. It's not strictly a replacement for VLANs or firewalls, but it's a great addition to any existing security practices, or at least something to keep in mind.
Making the demo was interesting in itself and there were a lot of things that were completely new to me. I spent a lot of time learning:
- What are VPCs and how do they work?
- It was my first time using Terraform, so that included installation, usage, terminology, etc.
- I’ve used Postgres before, but never had to manually configure client access.
- This was my second project using Drizzle, and although my usage was very limited, the migration process was challenging.
- I learned more than I care to know about networking, computer interfaces, IP ranges and CIDR. I have much more respect for people who work at the network layer.
- Linode StackScripts are also super cool. They've become my preferred way of configuring servers with Terraform, and I want to explore other ways to use them.
There were also a few resources that I found particularly useful:
And in case you want to follow up on this or related topics, Talia has put together some great posts recently:
And of course, if you’re interested in trying VPC or any other Akamai cloud computing products, new users can sign up at linode.com/austingil and get $100 in free credits 🙂
Thank you very much for reading. If you liked this article and want to support me, the best way to do that is to share it, sign up for my newsletter, and follow me on Twitter.