June 29, 2018

Had a busy last couple of days, and I’ve felt exhausted around 9 pm the last few nights, so I haven’t done much of anything productive.  It’s also been busier at work because I’ve been given some projects to work on, so it’s been great to have more to do.

Wednesday I went to a Python/GitHub meetup and it was pretty cool.  I have some GitHub experience, but it was nice to have a walk-through with other people and gain from their experience.

June 25, 2018

After work and dinner and other things, I thought I’d start reading again.  I read about 10 pages and didn’t retain anything, so instead of forcing myself to read more, I’m just taking a break tonight.

Here’s a picture of my wife’s and my cat touching her food dispenser a little bit before food is scheduled to come out.  She’s so smart and cute!

Now I’m reading an AWS book

I’ve developed a process for going through learning material that seems to work for me.  I can’t remember where I got a lot of it from, but I remember doing something similar when I was getting my CCNA R&S.  What I’ve done is go through two video courses (acloud.guru and Linux Academy) independently, taking notes, doing their walk-throughs, etc.  After that, I did some practice exams and flash cards – although the flash cards don’t seem to be as helpful.

Now I’m going through a book, and even though it seems to be a little on the older side since it doesn’t have the new exam topics, it’s been very helpful.  At the end there are a ton of labs that I think will be really beneficial to go through – not just as exam prep, but to get more hands-on practice.  Once that’s done, I’ll compile my notes, move on to taking practice exams, and fill in the gaps in areas that I’ve missed.

This has seemed to work well so far.  I think it works for me because I’m a visual learner, so I need someone to show me and explain something I have little to no experience with in an easy, general way.  Once I’ve built on that, it’s helpful to hear it from someone else in a different way.  After that, reading about it is different because I can take it at my own pace, go back, dissect it, and whatnot.  Once I’ve gotten that far, I usually have a good understanding of the topic – provided I’ve actually done the examples and tried things out on my own.  I learn way better by doing than by theory alone.

HTTPS is working… but

So after looking around a bit, I followed some documentation here on how to get Let’s Encrypt set up on this type of image.  It’s working fine, but it looks like http://www.miles-smith.info is broken.  I think it’s because I didn’t add it as a domain, but I’m not sure at this point.  I’ll have to come back to this later; I’m going to relax the rest of the evening.


Actually, I decided to go ahead and try it again before I published this.  Here’s what I did:

I ended up re-running certbot a couple of times to add the www. subdomain, and after a few tries I got a message saying something like I’m only allowed to request certificates so many times in so many days (a rate limit).  It looked like it was fine though, so I left it at that.
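For reference, this is roughly the kind of command I was re-running – the exact invocation depends on the image (Bitnami ships its own tooling), so treat this as a sketch:

# roughly the command I was re-running to add the www. subdomain;
# the --apache plugin flag is an assumption and depends on the image
sudo certbot --apache -d miles-smith.info -d www.miles-smith.info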

I then changed the httpd-prefix.conf file, which had this:

RewriteCond %{HTTPS} !=on

RewriteRule ^/(.*) https://%{SERVER_NAME}/$1 [R,L]

to this:

RewriteCond "%{HTTPS}" !=on

RewriteRule "^/(.*) https://%{SERVER_NAME}/$1" [R,L]

and what I noticed is that with the double quotes, it DOESN’T force the redirect to HTTPS.  My guess is that quoting the pattern and the substitution together makes Apache read them as a single argument, so the rule never matches.  So I changed that back to not have double quotes – voilà!  It’s working as expected.
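For what it’s worth, I believe the quoted form should still work if each argument gets its own pair of quotes – something like this (I haven’t re-tested it on my setup):

RewriteCond "%{HTTPS}" "!=on"
RewriteRule "^/(.*)" "https://%{SERVER_NAME}/$1" [R,L]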


I did look at some other documentation that had the RewriteRule written differently:

RewriteRule "^/(.*) https://%{SERVER_NAME}/$1" [R,L]

RewriteRule "^/?(.*) https://%{SERVER_NAME}/$1" [R,L]

But this didn’t seem to have any effect on the way it handled the traffic.  Phew, now I’m done for the day.


Adding some DNS records

A few things I noticed today.  I was missing a record for www. in front of my domain name.  Not a big deal, but it wasn’t set up.


I also added a CNAME record for work.miles-smith.info to resolve to iss-office.ddns.net.  Not something that I needed to do, but I’ve been reading about adding VMs on an on-site machine and making them work with AWS, and that seems like something that would make sense.  If I were able to do something like get WordPress’s data into DynamoDB and then have a machine at work host it, that could be something cool.  Or set up a read replica there, or something else.  Or even just use it for extra storage.  Anyway, just thinking out loud for now.
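If you’d rather add that kind of record from the command line, here’s a rough sketch using the AWS CLI – the hosted zone ID below is a placeholder for mine:

# add the CNAME via Route 53 (hosted zone ID is a placeholder)
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "work.miles-smith.info",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "iss-office.ddns.net"}]
      }
    }]
  }'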


I also noticed I don’t have HTTPS set up yet (duh, since I haven’t done it).  I’ll have to use certbot to get the certificates issued for me, and then set up an entry in crontab to automatically renew them (Let’s Encrypt certs are only good for 3 months).
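The crontab entry I have in mind would look something like this – certbot renew only replaces certs that are close to expiring, so running it a couple of times a day is harmless:

# run twice daily; certbot only renews certs nearing expiration
0 0,12 * * * /usr/bin/certbot renew --quiet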

This site isn’t running on https yet

Making a few hosts behind a load balancer

I’ve been feeling like I should document some of the stuff that I’ve learned, and while it would be really cool to start with something awesome, I’m going to go the more basic route first.  So here are the steps for setting up 5 instances behind a load balancer running httpd with static content.

  1. Create the load balancer (this seems to take a bit, so I usually do it first)
  2. Create the instances.  There are a few steps here; I’ll only focus on one or two
  3. Attach the instances to the load balancer’s target group
  4. Go to the load balancer’s DNS name


Create the Load Balancer

Log in to the AWS Console and select EC2.  From there, on the lower left you’ll see Load Balancers.  Once that’s selected, on the top bar you’ll click “Create Load Balancer”.  Since this one is for HTTP traffic only, I’m going to select the Application Load Balancer.

For the name, I specified 5-Instance-Balancer; it’s internet-facing on regular IPv4.  The balancer is listening on port 80 (the default), and the VPC will be the default VPC that Amazon created for me.  It’s on 172.31.0.0/16, and I selected us-east-1[a-f].

Since it’s running HTTP, not HTTPS, it does tell you that it’s not secure.  For this setup, it’s fine.

In the Security Group configuration, I’ve previously set one up, but you could create a new one to specify what port(s) you want open – just port 80 in this example.

In step 4 you configure routing.  This is pretty easy: you give it a name – in this case I’ll call it MyTargetGroup – and we’ll be monitoring instances on port 80 (these are the default settings).  The protocol is left at HTTP, and I modified the health check path to /index.html, which will be the only file on the web servers we’re testing with.  There are some advanced health settings to change the threshold for whether a host is considered healthy or not, but in this example those were left at their defaults.  After this step, just click through until it’s complete.
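If you wanted to script the same thing, it would look roughly like this with the AWS CLI – the subnet, security group, and VPC IDs are placeholders for mine:

# create the ALB (subnet and security group IDs are placeholders)
aws elbv2 create-load-balancer --name 5-Instance-Balancer \
  --scheme internet-facing --type application \
  --subnets subnet-aaaa1111 subnet-bbbb2222 --security-groups sg-0123abcd

# create the target group with the /index.html health check
aws elbv2 create-target-group --name MyTargetGroup \
  --protocol HTTP --port 80 --vpc-id vpc-0123abcd \
  --health-check-path /index.html

# add an HTTP listener that forwards to the target group
aws elbv2 create-listener --load-balancer-arn <alb-arn-from-above> \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>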


Create the Instances

Navigate to the EC2 section of the AWS console and click Launch Instances.  I selected the standard Amazon Linux AMI, t2.micro type, and left everything up to this point default, with two exceptions: we are launching 5 instances, and under Advanced Details there is a section to pass commands to the image as it boots up.  This user data can be viewed from within a running instance by going to http://169.254.169.254/latest/user-data.  Here’s what I added.

#!/bin/bash
# install Apache
yum install httpd -y
# write a page that identifies this host, so the instances can be
# told apart behind the load balancer
cat > /var/www/html/index.html << EOF
<HTML>
<HEAD>
<TITLE>`hostname`</TITLE>
</HEAD>
<BODY>
<H1>HI! You have reached the following host: `hostname`</H1>
Here is some random data `hostname | sha256sum`
</BODY>
</HTML>
EOF
# start httpd now, and enable it on boot
/etc/init.d/httpd start
chkconfig httpd on


What this does is install the httpd daemon, put some identifying content into /var/www/html/index.html so the hosts can be differentiated behind the ELB, start up httpd, and enable it to start in the event that the instance is rebooted.
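As a sanity check, you can view the user data an instance booted with from inside the instance itself:

# view the user-data script from within a running instance
curl http://169.254.169.254/latest/user-data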

After that’s done, all the other settings are default, except make sure that you attach a security group that has port 80 open, then launch them.
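The CLI equivalent would be something like this, assuming the user-data script above is saved to a file – the AMI and security group IDs are placeholders:

# launch 5 t2.micro instances with the user-data script above
aws ec2 run-instances --image-id ami-0123456789abcdef0 \
  --count 5 --instance-type t2.micro \
  --security-group-ids sg-0123abcd \
  --user-data file://userdata.sh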


Target Groups

Go back to the main EC2 menu and click on Target Groups.  Select ‘MyTargetGroup’ and then go to Actions -> Register and deregister targets.  The instances should be running, and you should see 5 of them, so select all of them, click ‘Add to registered’, and then save.  It might take a few minutes for them all to show up as healthy, so give it some time.
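Via the CLI this step would be roughly the following – the instance IDs and the ARN are placeholders:

# register the 5 instances with the target group
aws elbv2 register-targets --target-group-arn <target-group-arn> \
  --targets Id=i-aaaa1111 Id=i-bbbb2222 Id=i-cccc3333 Id=i-dddd4444 Id=i-eeee5555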

Here are a few screenshots of the instances’ HTML output.  You’ll notice the browser is pointed at the ELB, and from there it’s directing traffic to the healthy instances.


That’s it!  Later I might add one with an autoscaling group.  Oh, one thing I noticed was that AWS put all the instances in the same AZ, which would be an issue in a production environment, since you want to spread instances across multiple AZs for resiliency.
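A quick way to check how the instances ended up distributed is a describe-instances query like this, which should print each instance alongside its AZ:

# list instance IDs alongside their availability zones
aws ec2 describe-instances \
  --query 'Reservations[].Instances[].[InstanceId,Placement.AvailabilityZone]' \
  --output table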


Billing changes to my AWS account

I must have skipped the section on logging into AWS as a non-root user, since I’ve always signed in to the root account with my email and didn’t think much of it.  Since that’s not best practice, I created an account with admin access and ran into some problems.


It turned out that I had created an account for myself previously but never logged into it, so I had to add permissions to it.  I knew that I needed the AdministratorAccess and AmazonS3FullAccess policies, so I attached those.  After looking at some options, I didn’t want to have to log into my root account just to look at the bill, so I added full access to Billing as well.  When I logged back into my non-root account, though, I had a problem – I still couldn’t see the bill, which I thought was weird.  I was presented with a link that I followed, and read the following.


To enable access to billing data on your AWS test account

Use your AWS account email address and password to sign in to the AWS Management Console as the AWS account root user.

  1. On the navigation bar, choose your account name, and then choose My Account.
  2. Next to IAM User and Role Access to Billing Information, choose Edit, and then select the check box to activate IAM user and federated user access to the Billing and Cost Management pages.
  3. Sign out of the console, and then proceed to Step 2: Create IAM Policies That Grant Permissions to Billing Data.


I had created the policy to allow my user access to billing, but hadn’t granted IAM users access to billing at the account level.  I guess it’s like I was granted access to the information, but that information was locked away and needed to be made available first.  After I flipped that setting as the root user, it was fine and I was able to view the bill from my non-root account.
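For reference, the permissions side of this can also be done from the CLI.  Here’s a rough sketch of attaching the managed policies and creating a billing-view policy – the user name is a placeholder, and this isn’t my exact policy document:

# attach the AWS managed policies to the IAM user (user name is a placeholder)
aws iam attach-user-policy --user-name my-admin-user \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
aws iam attach-user-policy --user-name my-admin-user \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

# sketch of a billing-view policy; aws-portal:ViewBilling is the key action
aws iam create-policy --policy-name MyBillingView \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["aws-portal:ViewBilling", "aws-portal:ViewUsage"],
      "Resource": "*"
    }]
  }'

Even with all of that attached, IAM access to billing still has to be activated from the root account, which was the part I was missing.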

Here’s the link that I followed to complete this setup.

https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_billing.html


Forgot to turn off some things from last week

I have been going through video courses on linuxacademy.com and acloud.guru, and they’ve been great.  Last week I was building test environments, setting up an ELB in front of a handful of instances, and testing things and whatnot while following the tutorials.  I took a little break from them this week to focus on reviewing my notes and going through some of the practice exams that I purchased as well.  I bought all this stuff over Memorial Day weekend, since they had a sale (they always have a sale).  Anyway, I forgot to turn off the ELB a week or so ago, and kept an Elastic IP around for a little over a month.  Oops.  Good thing it wasn’t that expensive.  I also paid $12 for the domain name; I was going to go with an .online domain, but it was $39 and didn’t seem to add any real value, so this one is fine.


Here’s a screenshot of my bill so far this month.

Route 53, Static IP, etc

So I got a few things set up – not really in this order, but this is the order that I’ll put them in.

The first thing I needed to do was get the Lightsail instance a static IP.  To do that, I needed to log into the Lightsail web GUI, go to Networking, and attach a static IP to my instance.  Here’s a clip of what you’d see.

After that, all I really needed was an A record in the DNS settings of Route 53 to point at it, but I also made a bucket called ‘miles-smith.info’ and changed the properties of the bucket to host a static website.  This part wasn’t really necessary, since I figured the instance would be up 24/7, but if it went down, I’d like to have it redirect to something – just for fun.

Once this was done, I needed to get into Route 53 and add DNS records to associate the domain name with the static public IP and set up the failover to the S3 bucket.  I ran into problems here because it made me associate the record with a health check, which I had forgotten to create.  This is what the health check looked like.
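Creating that health check from the CLI would look roughly like this – the IP is a placeholder for my static IP:

# HTTP health check against the instance's static IP (IP is a placeholder)
aws route53 create-health-check --caller-reference my-site-check-1 \
  --health-check-config IPAddress=203.0.113.10,Port=80,Type=HTTP,ResourcePath=/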

Pretty straightforward: it checks to see if the static IP is up.  If it’s up, awesome; if not, the failover can take place.  Once that was done, I was able to add the DNS records, and here’s what they looked like.

The first DNS record is an A record without an alias, since it associates the domain name with an IP address.  The failover is set to Primary, and it’s associated with the health check I had just made.

The second record is an alias, since it points to the S3 bucket’s index.html file (which basically says “This is a placeholder file” and says hi to my wife 🙂 ).  I shut down the instance, waited a few minutes for the health check to fail, and it sure did redirect to my static S3 index file.
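In CLI terms, the pair of failover records would look something like this sketch – the hosted zone IDs, health check ID, and the S3 website endpoint details are placeholders:

# primary: A record for the static IP, tied to the health check
aws route53 change-resource-record-sets --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "miles-smith.info", "Type": "A",
        "SetIdentifier": "primary", "Failover": "PRIMARY",
        "HealthCheckId": "<health-check-id>",
        "TTL": 60,
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    }]
  }'

# secondary: alias record pointing at the S3 website endpoint
aws route53 change-resource-record-sets --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "miles-smith.info", "Type": "A",
        "SetIdentifier": "secondary", "Failover": "SECONDARY",
        "AliasTarget": {
          "HostedZoneId": "<s3-website-hosted-zone-id>",
          "DNSName": "s3-website-us-east-1.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'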

I think that’s all I’ll do tonight.  I’ve thought about whether keeping this up is worth the $5 a month, and I’m not sure – since I don’t know if this is something I’ll keep doing, tbh.  But we’ll see.  I might also experiment with migrating the WordPress site to an old Raspberry Pi I have, just for fun.