Sunday, December 30, 2018

AWS-related stuff

1. Command Completion

On Unix-like systems, the AWS CLI includes a command-completion feature that enables you to use the TAB key to complete a partially typed command. This feature is not automatically installed, so you need to configure it manually.
Configuring command completion requires two pieces of information: the name of the shell you are using and the location of the aws_completer script.
Completion on Amazon Linux
Command completion is configured by default on instances running Amazon Linux.
Identify Your Shell
If you are not sure which shell you are using, identify it with one of the following commands:
echo $SHELL – shows the path of the current user's login shell. This usually matches the shell in use, unless you launched a different shell after logging in.

$ echo $SHELL
/bin/bash
ps – shows the processes running for the current user. The shell will be one of them.

$ ps
  PID TTY          TIME CMD
 2148 pts/1    00:00:00 bash
 8756 pts/1    00:00:00 ps
Locate the AWS Completer
The location can vary depending on the installation method used.
Package Manager – programs such as pip, yum, brew and apt-get typically install the AWS completer (or a symlink to it) to a standard path location. In this case, the which command will locate the completer for you.

$ which aws_completer
/usr/local/bin/aws_completer
Bundled Installer – if you used the bundled installer per the instructions in the previous section, the AWS completer will be located in the bin subfolder of the installation directory.

$ ls /usr/local/aws/bin
activate
activate.csh
activate.fish
activate_this.py
aws
aws.cmd
aws_completer
...
If all else fails, you can use find to search your entire file system for the AWS completer.

$ find / -name aws_completer
/usr/local/aws/bin/aws_completer
Enable Command Completion
Run a command to enable command completion. The command that you use to enable completion depends on the shell that you are using. You can add the command to your shell's RC file to run it each time you open a new shell.
  • bash – use the built-in command complete.
$ complete -C '/usr/local/bin/aws_completer' aws
Add the command to ~/.bashrc to run it each time you open a new shell. Your ~/.bash_profile should source ~/.bashrc to ensure that the command also runs in login shells.
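For example, assuming the completer is installed at /usr/local/bin/aws_completer (adjust the path to whatever which or find reported on your system), you can append the command to ~/.bashrc and reload it:

$ echo "complete -C '/usr/local/bin/aws_completer' aws" >> ~/.bashrc
$ source ~/.bashrc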
  • tcsh – complete for tcsh takes a word type and pattern to define the completion behavior.
> complete aws 'p/*/`aws_completer`/'
Add the command to ~/.tcshrc to run it each time you open a new shell.
  • zsh – source bin/aws_zsh_completer.sh
% source /usr/local/bin/aws_zsh_completer.sh
The AWS CLI uses bash compatibility auto completion (bashcompinit) for zsh support. For further details, refer to the top of aws_zsh_completer.sh.
Add the command to ~/.zshrc to run it each time you open a new shell.
Test Command Completion
After enabling command completion, type in a partial command and press tab to see the available commands.

$ aws s<TAB>
s3              ses             sqs             sts             swf
s3api           sns             storagegateway  support



2. AWS Control Tower


Control Tower automates the set-up of a well-architected, multi-account environment based on best practices, and guides you through a step-by-step process to customize Control Tower to your organization. It will automate the creation of an AWS Landing Zone with best practice blueprints including:
  • Configuring AWS Organizations to create a multi-account environment.
  • Providing identity management using AWS SSO users and groups.
  • Federating access using AWS Single Sign-On.
  • Centralizing logging using AWS CloudTrail and AWS Config.
  • Enabling cross-account security audits using AWS IAM.
  • Implementing network design using Amazon VPC.
  • Defining workflows for provisioning accounts using AWS Service Catalog.
In addition, it will put in place mandatory, curated guardrails, such as blocking accounts from creating an Internet gateway or ensuring that only encrypted S3 objects can be created. This dramatically shortens the time it takes to get going, building on curated best practices from the millions of customers who use AWS every day.
With AWS Control Tower, you pay only for the AWS services it enables, which include the setup of your AWS Landing Zone, mandatory guardrails, and customized options. You will incur costs for the AWS services configured during the setup of your Landing Zone and by mandatory and strongly recommended guardrails; however, no costs are incurred for strongly recommended guardrails that are preventive. The cost of each service will vary based on the number of Regions, accounts, hours used, and guardrails enabled. AWS Control Tower is now available in limited preview, and you can sign up here.
This then leads to one of the most perennial problems that has existed in enterprise IT for a long time—that of having a comprehensive view of your high-priority security alerts and compliance status across AWS accounts. This is where Security Hub comes in.

3. AWS Security Hub

The typical enterprise security landscape has a number of powerful security tools deployed, from firewalls and endpoint protection to vulnerability and compliance scanners. But oftentimes this leaves your team switching back and forth between tools to deal with hundreds, and sometimes thousands, of security alerts every day. With Security Hub you now have a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie, as well as from AWS Partner solutions. Your findings are visually summarized on integrated dashboards with actionable graphs and tables. You can also continuously monitor your environment using automated compliance checks based on the AWS best practices and industry standards your organization follows. This allows you to save time with aggregated findings, improve compliance with automated checks, and quickly take action on findings. AWS Security Hub is offered at no cost during the preview period and is available as a regional service in 15 of the current AWS Regions. Pricing will be finalized when the service becomes generally available.
These two powerful new offerings will enable enterprises to build new innovations for their customers and migrate their IT systems even faster. I look forward to seeing and hearing how customers use these services to keep up their momentum on the journey to the AWS Cloud.

Friday, April 27, 2018

Optimize cost efficiency on AWS in 7 ways

There are four ways in which using AWS can change the economic model of the IT services that run your applications and workloads:
  1. By running your application in AWS cloud, you substitute traditional up-front capital expenses with a low variable cost model.
  2. AWS operates at significant scale by virtue of the large number of customers whose workloads it supports. These economies of scale are continuously used to reduce costs, and customers benefit from the savings.
  3. AWS services are adaptable. You are not forced to use resources on the On-Demand, pay-as-you-go model alone. You pay only for the individual services you need, for as long as you use them, and the capacity your workload requires can also be reserved.
  4. Resources are available in AWS to save money as your workload grows. An example is Amazon S3, an object-based, simple key-value store. S3 offers tiered, volume-based pricing: when your usage crosses a specific volume threshold, the cost per gigabyte drops.
When you are building your system, you need to investigate and control the economy of your architecture. Think of a model where extensive changes are possible, driven by economics and the availability of new AWS services. Explore and take advantage of all the opportunities for optimizing costs that exist in AWS.
The seven ways to optimize for cost efficiency in AWS are:
1) Control Provisioned AWS Resources
It is crucial to control provisioned AWS resources. Think carefully about the individuals you allow to turn services on. A best practice is to have a group of owners who control the provisioning of resources for the various departments via IAM, and to provide tools to each team so they can own their cost optimization. To optimize cost, shut down test instances at the end of each working day and on weekends. You can also run workloads in Docker containers and quickly spin them up on the Amazon EC2 Container Service (ECS). Use DevOps tools like AWS OpsWorks and Elastic Beanstalk to quickly deploy applications without having to worry about the underlying infrastructure. Lastly, use AWS CloudFormation to create templates of your AWS resources so you can build your environments quickly.
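As a minimal sketch of the end-of-day shutdown, assuming your test instances carry a hypothetical Environment=test tag, a scheduled job (cron, for instance) could stop whatever is still running:

$ aws ec2 describe-instances \
    --filters "Name=tag:Environment,Values=test" "Name=instance-state-name,Values=running" \
    --query "Reservations[].Instances[].InstanceId" \
    --output text | xargs -r aws ec2 stop-instances --instance-ids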
2) Make use of the Appropriate Storage Classes
The five storage class tiers available in Amazon S3 are S3 Standard, S3-IA (Infrequent Access), S3 One Zone-IA, Reduced Redundancy Storage (RRS) and Glacier.
For S3-IA, if an object is smaller than 128 KB, Amazon S3 charges you for 128 KB. The cost of putting a file on S3 can be broken down into the actual storage cost, the cost of the HTTP PUT requests, the cost of the HTTP GET requests, and the cost of the data transfer.
Take advantage of S3-IA for data that is accessed less frequently but requires rapid access when needed. S3-IA's storage fee is lower than S3 Standard's, but you are charged a retrieval fee.
Just like S3-IA, One Zone-IA is also designed for long-lived but less frequently accessed data and you are charged for a minimum storage duration of 30 days. The differences are One Zone-IA is less expensive and stores objects in only one Availability Zone (AZ). Objects stored in One Zone-IA are not resilient to the physical loss of the AZ.
RRS is designed to provide 99.99% durability and 99.99% availability of objects over a given year, and is meant for data that is easily reproducible, such as thumbnails. Store images in a standard bucket and the thumbnails in RRS; if a thumbnail goes missing, you can simply regenerate it.
Amazon Glacier is great for archiving long-term backups of cold or old data. Glacier is just as durable as S3, but the tradeoff is that it takes 3-5 hours to restore data. The Glacier storage class is designed for data that is retained for more than 90 days.
Lastly, implement object lifecycle management so that your objects are stored in a cost-effective manner throughout their lifetime. With lifecycle rules, you can transition objects to S3-IA or One Zone-IA 30 days after creation, archive them to the Glacier storage class, or simply set up an expiration policy so S3 deletes expired objects on your behalf.
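As an illustrative sketch (the bucket name my-bucket and the day counts are placeholders), such a lifecycle policy can be applied with the AWS CLI:

$ aws s3api put-bucket-lifecycle-configuration --bucket my-bucket \
    --lifecycle-configuration '{
        "Rules": [{
            "ID": "tier-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"}
            ],
            "Expiration": {"Days": 365}
        }]
    }'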
3) Select the Right Instance Type
It is important to ensure that you are using the most cost-effective instances, because different instance families cost different amounts. Select the instance type that best suits your application workload, considering factors like vCPU count, memory, and the family's ideal use case. Reassess your instance choices at least twice a year to ensure they still match the reality of your workload, and optimize around the particular instance resource that delivers the best price performance.
Tagging of instances is imperative. The cost per hour of running systems can be monitored in real time and broken down using tags, and these results can drive the development team to optimize costs. To enforce tagging discipline in your organization, you can set up a "No tags? No instance" policy where instances without a tag are stopped. You can create a script that shuts down untagged instances (a sketch follows), but please be extremely cautious with it.
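As a cautious sketch of that script (the required tag key CostCenter is a hypothetical example), the following uses jq to list running instances that are missing the tag; only pipe the result into aws ec2 stop-instances once you have reviewed it:

$ aws ec2 describe-instances \
    --filters "Name=instance-state-name,Values=running" \
    --output json |
  jq -r '.Reservations[].Instances[]
         | select(((.Tags // []) | map(.Key) | index("CostCenter")) | not)
         | .InstanceId'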
4) Monitor, Track, and Analyze your Services Usage
Trusted Advisor and CloudWatch are monitoring and management tools you can use to assess your instance metrics; based on that assessment, you can scale your instance size up or down. Trusted Advisor is an excellent tool because it identifies idle resources by running configuration checks, and it provides real-time guidance to help you provision your resources following AWS best practices. With Trusted Advisor, stay up to date with your AWS resource deployments by getting weekly updates to increase security and performance and reduce your overall costs. You can also create alerts, monitor service limits, and automate actions with CloudWatch.
Match resources to your workload by using Amazon CloudWatch to gain system-wide visibility and keep your application running smoothly. CloudWatch can be used to set alarms, collect and monitor log files, and automatically react to changes in your resources, such as operational health. With Amazon CloudWatch, you can also monitor custom metrics generated by your own applications via a simple API, sending and storing the metrics that matter to your application's operational performance. Turn off non-production instances, and use Amazon CloudWatch and Auto Scaling to match capacity to demand.
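As a small illustration of custom metrics (the namespace MyApp and the metric name PageLoadTime are made-up examples), an application can publish a data point with a single CLI call:

$ aws cloudwatch put-metric-data \
    --namespace "MyApp" \
    --metric-name PageLoadTime \
    --unit Milliseconds \
    --value 187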
5) Use Auto Scaling
It is important to align your resources with demand. To handle demand or sudden traffic spikes, design dynamically for capacity by using Auto Scaling, which adds resources only when they are required and turns them off when they are not. The benefit of including Auto Scaling in your application's architecture isn't limited to better cost management; it also enables you to detect when an instance is unhealthy, terminate it, and relaunch another.
To set up Auto Scaling, you need the following (a CLI sketch follows the list):
  • A launch configuration that describes what Auto Scaling will create when adding instances.
  • An Auto Scaling group that defines the AZs in which to create the instances, with a minimum and maximum group size so the number of instances scales automatically.
  • An Auto Scaling policy that defines the parameters for performing a scaling action. Define a cool-down period to prevent the addition of large amounts of capacity at once; for a scale-up policy, you can add an instance in response to a particular event.
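A minimal CLI sketch of those three steps, with placeholder names (my-lc, my-asg), a placeholder AMI ID, and example AZs:

$ aws autoscaling create-launch-configuration \
    --launch-configuration-name my-lc \
    --image-id ami-12345678 \
    --instance-type t2.micro

$ aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name my-asg \
    --launch-configuration-name my-lc \
    --min-size 1 --max-size 4 \
    --availability-zones us-east-1a us-east-1b

$ aws autoscaling put-scaling-policy \
    --auto-scaling-group-name my-asg \
    --policy-name scale-up \
    --adjustment-type ChangeInCapacity \
    --scaling-adjustment 1 \
    --cooldown 300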
6) Consolidated Billing
Consolidated Billing enables you to see a combined view of all AWS charges incurred by all your accounts, i.e., you get one bill for multiple AWS accounts. Consolidated Billing is available at no additional charge, and one account is designated the Master Account. The Master Account pays the charges accumulated by all the other accounts in the consolidated billing family. Each account's charges can be easily tracked, and the cost data can also be downloaded in CSV format.
An example:
Let's consider two AWS accounts, Alice and Eve.
Alice transfers 8 TB of data and Eve transfers 6 TB.
Alice's consolidated bill covers Eve's account as well as her own, so Alice's account is the master account: she pays the charges incurred by both herself and Eve.
Suppose AWS charges $0.19 per GB for the first 10 TB of data transferred and $0.15 per GB for the next 40 TB (1 TB = 1024 GB).
For the first 10 TB: 0.19 * 1024 = $194.56 per TB
For the next 40 TB: 0.15 * 1024 = $153.60 per TB
For the 14 TB that Alice and Eve used, Alice (the master account) is charged:
($194.56 * 10 TB) + ($153.60 * 4 TB) = $1945.60 + $614.40 = $2560
The average cost per unit of data transfer for the month is therefore $2560 / 14 TB = $182.86 per TB. This average rate is shown on the Bills page and can be downloaded as a cost report for each account listed in the consolidated bill.
Without Consolidated Billing, all 14 TB would have fallen into the first pricing tier, so AWS would have charged Alice and Eve $194.56 per TB each for their usage, a total of ($194.56 * 14) = $2723.84.
Total cost savings with Consolidated Billing = ($2723.84 - $2560) = $163.84
7) Use Reserved and Spot Instances
Committing to Reserved Instances (RIs) provides real dollar savings: you can save up to 75% over equivalent On-Demand capacity. If you buy an RI and find you don't need it, you can sell it on the Reserved Instance Marketplace or buy a shorter-duration RI there instead. Reservations come with three payment options: All Upfront, Partial Upfront, and No Upfront; with Partial and No Upfront, you pay the remaining balance monthly over the term. Beyond EC2, Amazon RDS, DynamoDB, Redshift, and ElastiCache are other services where you can take advantage of reservations.
Spot Instances are a phenomenal way to save money on stateless workloads: you simply bid on EC2 capacity that is not currently in use. Spot Instances are ideal when you need access to large amounts of compute capacity but are not concerned about interruption, because you have a mechanism for dealing with it. The prices of Spot Instances vary over time based on current supply and demand.
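As an illustrative sketch (the AMI ID, instance type, and bid price are placeholders), a Spot request can be made from the CLI:

$ aws ec2 request-spot-instances \
    --spot-price "0.05" \
    --instance-count 2 \
    --launch-specification '{"ImageId": "ami-12345678", "InstanceType": "m4.large"}'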

Cost Explorer, the Billing Dashboard, and the Detailed Billing Report are additional examples of excellent AWS tools that can be used to determine your daily spend and maintain strict billing hygiene. You can also build your own monitoring solution, for example by developing a Lambda function that ingests the detailed billing file into Redshift. Always remember that you are charged not only for data transfer to the Internet but also between AZs, so instances that communicate with each other should be located in the same AZ.
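As a quick illustration (the date range is a placeholder), the Cost Explorer API can report your daily spend directly from the CLI:

$ aws ce get-cost-and-usage \
    --time-period Start=2018-04-01,End=2018-04-27 \
    --granularity DAILY \
    --metrics "UnblendedCost"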
KEY TAKEAWAYS
  1. The simplest way to save money on AWS is not to use services you don't need, and to investigate your unused infrastructure.
  2. Always select the right instance type
  3. Optimize your S3 consumption and make use of the appropriate S3 storage class
  4. Use CloudWatch and Trusted Advisor to monitor your daily costs
  5. Use Auto Scaling to align your resources with demand
  6. Benefit from cost savings by using Consolidated Billing
  7. Use Reserved and Spot Instances