06 Jul 2016
•
AWS
•
Custom IAM Policies
Restrict user activity to a specific region in order to minimize resource wastage. This policy ensures that all resources are created only in the region of your choice, so when it comes to resource cleanup and housekeeping, you'll save quite a lot of time.
Apart from restricting write access to a single region, it grants users read-only access to resources launched in other regions.
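A minimal sketch of such a policy, assuming EC2 as the service and us-east-1 as the permitted region (the statement IDs and region are illustrative; swap in your own):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "FullEC2AccessInChosenRegion",
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*",
      "Condition": {
        "StringEquals": { "ec2:Region": "us-east-1" }
      }
    },
    {
      "Sid": "ReadOnlyInOtherRegions",
      "Effect": "Allow",
      "Action": "ec2:Describe*",
      "Resource": "*"
    }
  ]
}
The ec2:Region condition key scopes the blanket allow to one region, while the unconditioned Describe* statement provides the cross-region read-only view.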
16 Jun 2016
•
AWS
•
Logging
Elasticsearch, Logstash and Kibana (ELK) is a stack widely used for log analytics. The AWS Elasticsearch service lets you run Elasticsearch and Kibana right out of the box, with some manual setup of Logstash required. In this walkthrough, we'll set up the ELK stack on AWS, enable ELB logs and analyze the logs to identify and visualize trends like traffic spikes, frequent visitor IPs, etc.
Setting Up Elasticsearch
- Choose a domain name that will become part of the Elasticsearch endpoint. For example, mysite.
- Select the number of instances in the cluster. Ideally, select at least 2 nodes so that sharding works. Leave the rest of the options as they are and click Next.
- Under Set up access policy, select Allow open access to the domain. This will allow anyone to access your endpoint. Since we are only testing this, you can go ahead with this option. However, a production environment would require more restrictive access policies. Click Next and then Confirm and Create.
- The domain will take around 5 to 10 minutes to get created.
- Meanwhile, enable Access Logs on ELB by following this guide, or use the CLI sketch below.
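If you prefer the command line, here is a hedged sketch using the Classic ELB API; the load balancer and bucket names are placeholders, and the bucket must carry a policy that permits ELB log delivery:
# Names below are placeholders; replace with your load balancer and bucket
aws elb modify-load-balancer-attributes \
  --load-balancer-name my-load-balancer \
  --load-balancer-attributes '{"AccessLog":{"Enabled":true,"S3BucketName":"my-elb-logs-bucket","EmitInterval":5}}'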
Setting Up Logstash
- Update and upgrade your system packages:
sudo apt-get update && sudo apt-get upgrade
- Logstash requires Java on the machine. Install Java:
sudo apt-get install default-jdk
- Add the Logstash package repositories to your sources list. Verify that you're getting the latest repository info on this page, then run:
echo "deb https://packages.elastic.co/logstash/2.3/debian stable main" | sudo tee -a /etc/apt/sources.list
- Finally, update your packages again and install Logstash:
sudo apt-get update && sudo apt-get install logstash
- Export your AWS IAM access key and secret key as environment variables:
export AWS_ACCESS_KEY_ID=AKIAACCESSKEY
export AWS_SECRET_ACCESS_KEY=tUx2SECRETKEYACk32
- Create a Logstash configuration file and enter the Elasticsearch endpoint and S3 bucket name in it (a sample configuration follows these steps). Store this file as /etc/logstash/conf.d/logstash.conf
- Run Logstash:
/opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf
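A minimal sketch of that configuration, assuming a hypothetical S3 bucket named my-elb-logs-bucket for the ELB access logs and a placeholder Elasticsearch endpoint; the grok filter uses the ELB_ACCESS_LOG pattern that ships with Logstash:
input {
  s3 {
    bucket => "my-elb-logs-bucket"  # placeholder: the bucket receiving your ELB access logs
    region => "us-east-1"
  }
}
filter {
  grok {
    # Parse each ELB access-log line into named fields
    match => { "message" => "%{ELB_ACCESS_LOG}" }
  }
}
output {
  elasticsearch {
    # Placeholder endpoint; copy yours from the Elasticsearch console
    hosts => ["https://search-mysite-abc123.us-east-1.es.amazonaws.com:443"]
    index => "elb_logs"
  }
}
The index name elb_logs matches the index pattern used in the Kibana step below.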
Setting Up Kibana
- Open Kibana from the URL provided on the Elasticsearch console.
- Use the index pattern elb_logs
- Use the timestamp @timestamp
- To get started with visualizing your data, read through the Visualize section of Elastic’s official documentation.
05 May 2016
•
Automation
•
AWS
•
Ops
Introduction
Ansible is one of the easiest Configuration Management tools to get started with due to its client-less architecture. Ansible is written in Python and uses YAML, a simple and human-readable syntax, for its configuration files. One of its most important features is idempotency: changes are applied only when required and skipped otherwise, which prevents inconsistencies in your infrastructure.
Installation
- The easiest and recommended way of installing is from the package repositories. Simply run:
sudo apt-get install ansible
- To test the installation, run the following. If you get a prompt asking for the SSH password, the installation was successful.
ansible all -m ping --ask-pass
- Open the file /etc/ansible/hosts and add the IPs of your servers to it. For example, you can add an EC2 instance's public IP, or its private IP if your control machine is in the same VPC as the servers. The control machine, as the name indicates, is the machine that sends commands to your servers.
- To test if the control machine is able to reach the servers, run the ping command. Key-based authentication is encouraged over passwords; use the --private-key flag to specify the path to your key file. If successful, it will return a JSON response containing the string SUCCESS.
ansible all -m ping --private-key ~/path/to/key -u ubuntu
- To log in as root, add the become flag (-b):
ansible all -m ping --private-key ~/path/to/key -u ubuntu -b
- Test the configuration further by running a live command on the nodes:
ansible all -a "/bin/echo skywide" --private-key ~/path/to/key -u ubuntu
- If you'd like a fully automated and unattended setup, it would be wise to disable the host key checking feature. When enabled, this feature prompts for confirmation of the key the first time Ansible tries to log into a server. Disabling host key checking has security implications, however, and you might want to fully understand them before doing so. Inside /etc/ansible/ansible.cfg, set:
host_key_checking = False
- This way, you are able to run commands on your nodes successfully. However, Ansible goes far beyond running simple commands. To make the most of it, we need to make use of playbooks.
Components
- Inventory
The inventory file is nothing but a file containing the list of servers you want to configure. This file by default exists at /etc/ansible/hosts. It is also possible to use multiple inventory files.
- Dynamic Inventory
A dynamic inventory pulls the list of hosts from an external system instead of a static file. For example, the host list can come from a cloud provider, LDAP, or another such source.
- Patterns
The hosts specified in an inventory file can be grouped into categories like web servers and DB servers using the appropriate syntax. A pattern refers to a set of groups (which are themselves sets of hosts). Take, for example, ansible all -m ping. Here all refers to all hosts, whereas in ansible webservers -m ping the command will only run on the hosts in the group webservers as defined in the inventory file.
- Variables
Variables, as the name suggests, are values used when you are working on systems that require varied configuration, such as a port number or a private IP address. Variables can be defined either in inventory files or in playbooks.
Example:
- hosts: all
  vars:
    http_port: 80
Here http_port is a variable.
- Ad-Hoc Commands
Ad-hoc commands are used to perform quick tasks that you don't want to write an entire playbook for. Take the analogy of running a command directly in the shell (ad-hoc command) versus running it through a bash script (playbook). ansible all -m ping is an example of an ad-hoc command.
- Playbooks
A playbook is a file containing instructions. A sample playbook that installs the Apache web server on your nodes is sketched after this list.
To run the playbook, use the following command:
ansible-playbook skywide.yaml --private-key ~/path/to/key.pem
If you want to avoid specifying the path to the key file every time, modify the file /etc/ansible/ansible.cfg and set the path in the private_key_file variable.
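A minimal sketch of that playbook, saved as skywide.yaml to match the run command above (the apt task assumes Ubuntu/Debian nodes):
---
- hosts: all
  become: yes
  tasks:
    - name: Install the Apache web server  # assumes apt-based nodes
      apt:
        name: apache2
        state: present
        update_cache: yes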
29 Apr 2016
•
Automation
•
AWS
•
Ops
The following is a bare-minimum CloudFormation template to launch an EC2 instance. It's bare-minimum in the sense that only the Parameters and Resources that are absolutely necessary are part of the template.
This template can also execute a bash script provided as User Data to the EC2 instance. User Data is a set of directives executed only once, during the first boot cycle of the instance. This feature is useful when you need to, say, launch several instances with the Apache web server installed on them. To do this you would simply supply the command apt-get install apache2 as user data. There is no need to add the sudo keyword, as the script runs with root privileges.
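A minimal sketch of such a template; the AMI ID is a placeholder (the User Data assumes an Ubuntu AMI), and only a single key pair parameter is declared:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Parameters": {
    "KeyName": {
      "Type": "AWS::EC2::KeyPair::KeyName",
      "Description": "Name of an existing EC2 key pair for SSH access"
    }
  },
  "Resources": {
    "WebServerInstance": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "ami-xxxxxxxx",
        "InstanceType": "t2.micro",
        "KeyName": { "Ref": "KeyName" },
        "UserData": {
          "Fn::Base64": {
            "Fn::Join": ["\n", [
              "#!/bin/bash",
              "apt-get update",
              "apt-get install -y apache2"
            ]]
          }
        }
      }
    }
  }
}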
In the above template, the custom script updates the list of packages (apt-get update) and installs Apache (apt-get install apache2). You can add or remove commands as required. Be sure to put each command inside double quotes and on its own line.
22 Apr 2016
•
AWS
•
Ops
Scan EC2 using OpenVAS
Scanning your EC2 instances periodically for vulnerabilities and security loopholes is something no Systems/DevOps engineer should miss out on. There are several scanning tools available for this purpose, but very few free ones. OpenVAS is a free and open-source tool which originated as a fork of the now-commercial Nessus scanner.
Follow these steps to quickly get started with OpenVAS:
- Launch an Ubuntu EC2 instance (how-to).
- Add the following PPA:
sudo add-apt-repository ppa:mrazavi/openvas
- Update apt-get:
sudo apt-get update
- Install OpenVAS:
sudo apt-get install openvas
- Run the following commands to update OpenVAS scripts and data:
sudo apt-get install sqlite3
sudo openvas-nvt-sync
sudo openvas-scapdata-sync
sudo openvas-certdata-sync
sudo service openvas-scanner restart
sudo service openvas-manager restart
sudo openvasmd --rebuild --progress
The above commands download large amounts of data from the internet and might take several minutes to complete, depending on your connection speed.
- After the downloads have finished, go to https://<instance-public-ip>:443 and log in. The default username and password are both admin.