Hosting of Static Sites at AWS

Started by sinelogixweb, Aug 05, 2022, 03:10 AM


sinelogixweb (Topic starter)

In this topic, I want to take a detailed look at hosting static sites on AWS. The subject is not especially complex, but there are plenty of nuances.
To configure everything manually, you need to wire together four or five services, and there are some interesting pitfalls to step on along the way.

Some time ago there was an official tutorial for this kind of manual setup. At times it seemed complicated, at times strange; most likely that was the price of versatility and of showing different hosting options. The tutorial has since been radically updated and now suggests using the AWS Amplify service instead. On the one hand this is convenient, but on the other hand you often need to understand what exactly is happening "under the hood". So here we will walk through setting everything up by hand.

Static sites

To begin with, in a nutshell: why a static website? Oddly enough, the fashion for static sites, which returned some 8-10 years ago, still has not gone away. And it is not just about the large number of sites on GitHub Pages. Static site generators such as Jekyll, Hugo, or the hipster Gatsby continue to ship new releases regularly and remain very much in demand.

Static sites have long since passed the test of time. But why are they cool? On a static website there is almost nothing to break: no login forms, no admin panel, no dynamic scripts that can be fooled. A static site is also very fast. You do not need to load the CPU processing heavy requests, and content can be cached all the way from the CDN down to the user's browser.

Website hosting or why AWS

There is almost nothing to break on a static website. But if you cannot break the site, you can break, for example, its hosting. Say you bought a virtual machine from DigitalOcean, installed nginx there, and uploaded the site. Now nginx and the other packages need to be updated periodically. A logical choice for reducing worries (aka maintenance) is therefore to use a cloud such as AWS. The OS and the web server do not go anywhere, but updating and protecting them is no longer your task.

As an example, I took the site that we (Plesk) use as a promo page for our open source projects on GitHub.

The site is static. It basically consists of one index.html page, a few images, a custom font, CSS styles, and some JavaScript.


We will use AWS S3 to host the website. The first thing we need is to create an S3 bucket; basically, a named place to store files. The name must be globally unique. I chose tech-plesk-space as the name of the bucket, to match the domain the site will live at (names with dots are also allowed).
Let the region be Frankfurt, eu-central-1. Frankfurt is home to Europe's largest traffic exchange point. In our case I do not plan to serve content directly from S3 (we will do that through CloudFront), so the choice of region is not too important. But in terms of latency and connectivity for consumers in Europe, Frankfurt is generally a very good location.

From the screenshot below, you can see that the "Block all public access" checkbox is checked. That is intentional: we will serve content not directly from S3 but through the CloudFront service, which is Amazon's Content Delivery Network.
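For those who prefer the CLI, the same bucket setup can be sketched roughly like this (bucket name and region are the ones from this walkthrough; adjust as needed):

```shell
BUCKET=tech-plesk-space
REGION=eu-central-1

# Create the bucket; outside us-east-1 an explicit LocationConstraint is required.
aws s3api create-bucket \
    --bucket "$BUCKET" \
    --region "$REGION" \
    --create-bucket-configuration "LocationConstraint=$REGION"

# Keep "Block all public access" enabled; CloudFront will front the bucket.
aws s3api put-public-access-block \
    --bucket "$BUCKET" \
    --public-access-block-configuration \
    "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
```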

Once created, the bucket is empty, and we need to upload the files we plan to serve. You can do this through the web interface, but I will use the AWS CLI. From the directory where the site files are located locally, run:

aws s3 sync --delete . s3://tech-plesk-space

What is going on here? We are syncing local files to the S3 bucket called tech-plesk-space. The --delete option removes files from the bucket if they no longer exist locally (you can preview the changes first by adding --dryrun).

If we had not restricted public access, the files could be viewed through the technical domain that Amazon creates for our convenience. The domain name consists of the bucket name, the S3 service name, the eu-central-1 region, and the amazonaws.com suffix for technical domains.
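That technical domain can be reconstructed from our bucket and region; a sketch (the resulting URL only answers while public access is allowed):

```shell
BUCKET=tech-plesk-space
REGION=eu-central-1

# <bucket>.s3.<region>.amazonaws.com is the virtual-hosted-style S3 endpoint.
ENDPOINT="${BUCKET}.s3.${REGION}.amazonaws.com"
echo "$ENDPOINT"   # tech-plesk-space.s3.eu-central-1.amazonaws.com
```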

Route 53

Before setting up public access to the website, let's deal with the domain name I chose for it.

We go to the Amazon console, open the Route 53 service and create a new public zone.

After the zone is created, we get a list of NS records. These are the DNS servers that will store information about our zone. If you have previously worked with small providers, it looked something like this: there are ns1 and ns2, and all domains are hosted on them. In the case of AWS, it is impossible to say in advance which DNS servers will serve the zone.

So, we have a list of NS records; what do we do with it? If you bought the domain yourself, go to the registrar's control panel and enter the received records.

In our case, the domain is a subdomain, and the parent zone is hosted in a DigitalOcean DNS account. To perform the delegation correctly, we need to add the corresponding NS records on the DigitalOcean side.

NS records alone are not enough to serve the website. We also need an A-record. We'll come back to this topic a bit later when we get to CloudFront, but before that, we need to do one more thing.


Certificates

Current trends dictate that the site should be accessible via HTTPS and have a valid certificate. AWS has a certificate management service called AWS Certificate Manager (ACM for short). We go there to create a certificate. It is absolutely free, but the secret part of the certificate (the private key) will not be given to you, so these certificates can only be used inside the Amazon infrastructure.

Another interesting point: for the needs of CloudFront, the certificate must be created in the us-east-1 (N. Virginia) region, otherwise you will not be able to use it with CloudFront. So we switch the region to us-east-1 and create the certificate, choosing DNS validation as the validation method.
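The same certificate request can be made from the CLI; a sketch (the domain here is a placeholder, since the real one is not shown above):

```shell
DOMAIN=example.com   # placeholder; substitute your own domain
REGION=us-east-1     # CloudFront only accepts certificates from us-east-1

aws acm request-certificate \
    --domain-name "$DOMAIN" \
    --validation-method DNS \
    --region "$REGION"
```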

The process may take up to 20 minutes, though in practice it usually takes much less. One caveat: keep the validation CNAME record in Route 53 after the certificate is issued. The certificate is issued for about a year and ACM renews it automatically, but automatic renewal with DNS validation relies on that record staying in place.


CloudFront

The last step is to configure CloudFront so that our site becomes available to users.

To do this, we go to the CloudFront interface and create a new entity called a Distribution. As the Origin Domain Name, select our S3 bucket: S3 will act as the origin, and CloudFront will distribute the content. For those unfamiliar with the CDN ideology: the origin is the repository of the original files, and edges are the servers that cache the origin's data and serve content directly to clients. For good measure, you can also set the Viewer Protocol Policy to "Redirect HTTP to HTTPS".

As Alternate Domain Names (CNAMEs) we specify our domain. Do not forget about the SSL Certificate: select Custom SSL Certificate and pick the certificate we created from the drop-down (which does not look like a drop-down at all).

Another thing to do is to set the Default Root Object; here we specify index.html. Without it, accessing the website without specifying a file (that is, simply typing the domain in the browser) returns a 403 error. In the context of web servers, this setting is usually called the Directory Index.
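For comparison, on an nginx setup like the one mentioned earlier, the equivalent of the Default Root Object is the index directive:

```nginx
# Serve index.html when a directory (e.g. "/") is requested.
index index.html;
```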

After the Distribution has been created, you need to slightly adjust access to the origin. Go to the corresponding tab for editing, enable the Restrict Bucket Access setting, create a new Origin Access Identity, and choose to have the bucket policy updated automatically.

Why was the above action necessary? Recall that when setting up S3, we left "Block all public access" enabled. Without granting read access to the Origin Access Identity, CloudFront would not be able to read content from the bucket.
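Under the hood, the "update the bucket policy" checkbox writes a bucket policy along these lines (the identity ID here is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXAMPLEID"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::tech-plesk-space/*"
    }
  ]
}
```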

The final touch: specify in Route 53 what exactly will serve as the A-record for the domain. Go to the Route 53 interface, create a new record of type A, but instead of entering an address, choose Alias and select the appropriate CloudFront distribution from the drop-down list.
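For reference, the same alias record can be created with the CLI by passing a change batch like this (the domain and distribution hostname are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID used by all CloudFront aliases):

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com.",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "dxxxxxxxxxxxxx.cloudfront.net.",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```

Saved as change.json, it would be applied with aws route53 change-resource-record-sets --hosted-zone-id <your zone ID> --change-batch file://change.json.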

Other points

I did not touch upon the topic of request logging. It is disabled by default but can be configured, with logs delivered to an S3 bucket. Keep in mind a peculiarity of the CDN approach (Amazon is not at all unique here): logging is not a real-time debugging tool.
Logs are delivered with noticeable delays, and the documentation says such delays can be up to 24 hours. From practice, I can say that connecting Google Analytics almost always removed questions about logging for such static websites.

Another point I wanted to briefly mention is price. With the free tier and a small load, the cost will be about 50 cents a month, and only for Route 53. Without the free tier it came out to somewhere around 1-3 dollars a month. That may even seem expensive for hosting a static site, but with AWS we get a lot of potential for project growth (scalability, pay-as-you-grow pricing, global availability, a large number of other convenient services, and much more).


As I mentioned at the beginning, the official tutorial from Amazon now advises using the AWS Amplify service. It really is more convenient: you can connect a Git repository and semi-automatically configure a number of the things above. But you frequently need to deal with already configured infrastructure or organize unusual hosting setups, and then you will have to handle all of these nuances yourself.

Another interesting take on the problem is services like Netlify. They simplify some things even further (although they sometimes complicate others), but, of course, using them will cost extra money.



I'm using both Netlify and Amplify. In my opinion, the services are practically identical; they differ only in the config format and pricing policy. Netlify has a free plan that is enough for a simple weblog.
At the same time, moving a static website to one of these hosting providers, regardless of the technology it is built with, is a matter of two minutes.

Apparently, it only makes sense to fiddle with setting up hosting for statics yourself if you care about portability and minimal dependence on a particular service; if you are paying Amazon anyway, it is better to choose the more convenient solution. All this is pure IMHO, of course.