Friday, April 27, 2018

Optimize cost efficiency on AWS in 7 ways

There are four reasons AWS can change the economic model of the IT services that run your applications and workloads:
  1. By running your application in AWS cloud, you substitute traditional up-front capital expenses with a low variable cost model.
  2. AWS operates at significant scale by virtue of the large number of customers whose workloads it supports. These economies of scale are continuously used to reduce costs, and customers benefit from the resulting savings.
  3. AWS services are adaptable. You are not forced to use resources on the OnDemand PAYG model. You only pay for the individual services you need, for as long as you use them and the capacity you require for your workload can also be reserved.
  4. Resources are available in AWS to save money as your workload grows. An example is Amazon S3, an object-based, simple key-value store. S3 offers volume-tiered pricing: once your usage passes a specific volume, the cost per gigabyte drops.
When you are building your system, you need to investigate and control the economy of your architecture. Think of a model where extensive changes are possible, driven by economics and the availability of new AWS services. Explore and take advantage of all the opportunities for optimizing costs that exist in AWS.
The seven ways to optimize for cost efficiency in AWS are:
1) Control Provisioned AWS Resources
It is crucial to control who can provision AWS resources. Think carefully about the individuals you allow to turn services on. Best practice is to have a group of owners who control the provisioning of resources for the various departments via IAM. Provide tools to each team so they can manage their own cost optimization. To optimize cost, shut down test instances at the end of each working day and on weekends. You can also package workloads into Docker containers and quickly spin them up on the Amazon EC2 Container Service. Use DevOps tools like AWS OpsWorks and Elastic Beanstalk to deploy applications quickly without having to worry about the underlying infrastructure. Lastly, use AWS CloudFormation to create templates of your AWS resources so you can rebuild your environments quickly.
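As a sketch of the "shut down test instances out of hours" advice, the helper below decides whether an instance should be stopped. The Environment=test tag name and the working-hours window are assumptions for illustration; in practice you would run this from a scheduled Lambda function and pass the matching instance IDs to EC2's stop-instances call.

```python
from datetime import datetime

# Assumed working hours for test instances: Monday-Friday, 08:00-18:59.
WORK_DAYS = range(0, 5)    # Monday=0 .. Friday=4
WORK_HOURS = range(8, 19)

def should_stop(tags: dict, now: datetime) -> bool:
    """Return True if a test instance should be stopped right now.

    Only instances tagged Environment=test are candidates; the tag
    key and value are hypothetical names for this sketch.
    """
    if tags.get("Environment") != "test":
        return False
    return now.weekday() not in WORK_DAYS or now.hour not in WORK_HOURS
```

A scheduled job would collect tags from ec2.describe_instances(), apply this check, and stop the instances it flags.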
2) Make use of the Appropriate Storage Classes
The five storage classes available in S3 are S3 Standard, S3 Standard-IA (Infrequent Access), S3 One Zone-IA, Reduced Redundancy Storage (RRS) and Glacier.
Note that the infrequent-access classes have a minimum billable object size: if an object stored in S3-IA or One Zone-IA is smaller than 128KB, Amazon S3 charges for 128KB. The cost of putting a file on S3 can be broken down into the actual storage cost, the cost of the HTTP PUT requests, the cost of the HTTP GET requests and the cost of data transfer.
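The four cost components above can be combined into a simple monthly estimator. The prices below are hypothetical placeholders, not current S3 rates; always check the pricing page before relying on an estimate.

```python
# Illustrative prices only (hypothetical; check the current S3 pricing page):
PRICE_PER_GB_MONTH = 0.023   # storage, per GB-month
PRICE_PER_1K_PUT   = 0.005   # HTTP PUT requests, per 1,000
PRICE_PER_10K_GET  = 0.004   # HTTP GET requests, per 10,000
PRICE_PER_GB_OUT   = 0.09    # data transfer out, per GB

def monthly_s3_cost(gb_stored, puts, gets, gb_out):
    """Sum the four cost components described above."""
    return (gb_stored * PRICE_PER_GB_MONTH
            + puts / 1000 * PRICE_PER_1K_PUT
            + gets / 10000 * PRICE_PER_10K_GET
            + gb_out * PRICE_PER_GB_OUT)
```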
Take advantage of S3-IA for data that is accessed less frequently, but requires rapid access when needed. S3-IA’s fee is lower than S3 but you are charged a retrieval fee.
Just like S3-IA, One Zone-IA is also designed for long-lived but less frequently accessed data and you are charged for a minimum storage duration of 30 days. The differences are One Zone-IA is less expensive and stores objects in only one Availability Zone (AZ). Objects stored in One Zone-IA are not resilient to the physical loss of the AZ.
RRS is designed to provide 99.99% durability and 99.99% availability of objects over a given year. RRS is intended for data that is easily reproducible, such as thumbnails. Store the source images in a Standard bucket and the thumbnails in RRS, so that if a thumbnail is lost you can regenerate it.
Amazon Glacier is great for archiving of long-term backups of cold or old data. Glacier is just as durable as S3 but the tradeoff is that it takes 3-5 hours to restore data. Glacier Storage Class is designed for data that is retained for more than 90 days.
Lastly, implement object lifecycle management to manage your objects so that they are stored in a cost-effective manner. For object lifecycle management, you can choose to transition objects to S3-IA or One Zone-IA 30 days after the creation date or archive to Glacier Storage Class or simply set up a delete policy where S3 should delete expired objects on your behalf.
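One possible shape for such a lifecycle policy, expressed as the Python structure that boto3's put_bucket_lifecycle_configuration accepts; the prefix and day counts are illustrative assumptions.

```python
# Lifecycle rule implementing the transitions described above. The "logs/"
# prefix and the 30/90/365-day thresholds are hypothetical examples.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-then-expire",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},  # to S3-IA after 30 days
                {"Days": 90, "StorageClass": "GLACIER"},      # archive after 90 days
            ],
            "Expiration": {"Days": 365},                      # delete after a year
        }
    ]
}
```

You would apply it with s3.put_bucket_lifecycle_configuration(Bucket="your-bucket", LifecycleConfiguration=lifecycle_configuration).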
3) Select the Right Instance Type
It is important to ensure that you are using the most cost-effective instances, because different instance families cost different amounts. Select the instance that best suits your application workload. Consider factors like vCPU count, memory and the intended use case to optimize the amount of money you spend. It is recommended that you reassess your instance choices at least twice a year to ensure they still match the reality of your workload, and optimize around the particular instance resource that delivers the best price performance.
Tagging of instances is imperative. The cost per hour of running systems can be monitored in real time and broken down using tags, and these results can drive the development team to optimize costs. To enforce tagging discipline in your organization, you can set up a “No tags? No instance” policy where instances without a tag are stopped. You can create a script that shuts down untagged instances, but please be extremely cautious.
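A cautious “No tags? No instance” sweep might start with a pure helper like this. The input shape is an assumption chosen so the logic stays testable; in a real script the data would come from ec2.describe_instances().

```python
def untagged_instance_ids(instances):
    """Given [(instance_id, tags_dict), ...], return the IDs with no tags.

    The (id, tags) tuple shape is hypothetical; a real script would build
    it from the ec2.describe_instances() response.
    """
    return [iid for iid, tags in instances if not tags]
```

Feed the result to ec2.stop_instances(InstanceIds=...) rather than terminating, so a missed tag costs a restart instead of data.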
4) Monitor, Track, and Analyze your Services Usage
Trusted Advisor and CloudWatch are monitoring and management tools that give you access to your instance metrics. Based on this assessment, you can scale your instance size up or down. Trusted Advisor is an excellent tool because it identifies idle resources by running configuration checks, and it provides real-time guidance to help you provision your resources following AWS best practices. With Trusted Advisor, stay up to date with your AWS resource deployment by getting weekly updates to increase security and performance and reduce your overall costs. You can also create alerts, monitor service limits and automate actions with CloudWatch.
Match resources to your workload by using AWS CloudWatch to gain system-wide visibility and keep your application running smoothly. CloudWatch can be used to set alarms, collect & monitor log files, and automatically react to changes in your resources such as operational health. With Amazon CloudWatch, you can also monitor custom metrics generated by your own applications via a simple API, sending and storing the metrics that matter to your application’s operational performance. Turn off non-production instances and use Amazon CloudWatch and Auto Scaling to match demand.
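Custom metrics are published through CloudWatch's PutMetricData API. The small builder below shapes one entry of its MetricData list; the metric and dimension names are hypothetical.

```python
def build_metric_datum(name, value, unit="Count", dimensions=None):
    """Build one entry for CloudWatch put_metric_data's MetricData list.

    Metric names, units and dimensions here are example values, not a
    prescribed schema for any particular application.
    """
    datum = {"MetricName": name, "Value": value, "Unit": unit}
    if dimensions:
        datum["Dimensions"] = [
            {"Name": k, "Value": v} for k, v in sorted(dimensions.items())
        ]
    return datum
```

You would publish with cloudwatch.put_metric_data(Namespace="MyApp", MetricData=[build_metric_datum("PageViews", 12, dimensions={"Service": "web"})]).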
5) Use Auto Scaling
It is important to align your resources with demand. To handle demand or sudden traffic spikes, you can design dynamically for capacity by using Auto Scaling to add resources only when required and to remove them when they are not. The benefit of adding Auto Scaling to your application’s architecture isn’t limited to better cost management: it also lets you detect when an instance is unhealthy, terminate it and relaunch a replacement.
To set up Auto Scaling, you need:
  • A launch configuration that describes what Auto Scaling will create when adding instances.
  • An Auto Scaling group that defines the AZs to launch instances into, plus the minimum and maximum size of the group used to automatically scale the number of instances.
  • An Auto Scaling policy that defines the parameters for performing a scaling action. Define a cool-down period to prevent the addition of large amounts of capacity at once. For a scale-up policy, you can add an instance in response to a particular event.
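The three pieces above can be sketched as the parameter sets a boto3 autoscaling client would receive; every name, AMI ID and size here is a hypothetical placeholder.

```python
# Step 1: launch configuration (for autoscaling.create_launch_configuration)
launch_configuration = {
    "LaunchConfigurationName": "web-lc",
    "ImageId": "ami-12345678",     # placeholder AMI ID
    "InstanceType": "t2.micro",
}

# Step 2: Auto Scaling group spanning two AZs (create_auto_scaling_group)
auto_scaling_group = {
    "AutoScalingGroupName": "web-asg",
    "LaunchConfigurationName": "web-lc",
    "AvailabilityZones": ["us-east-1a", "us-east-1b"],
    "MinSize": 2,
    "MaxSize": 10,
}

# Step 3: scale-up policy with a cool-down (put_scaling_policy)
scale_up_policy = {
    "AutoScalingGroupName": "web-asg",
    "PolicyName": "scale-up-on-load",
    "AdjustmentType": "ChangeInCapacity",
    "ScalingAdjustment": 1,        # add one instance per trigger
    "Cooldown": 300,               # seconds, to avoid adding too much at once
}
```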
6) Consolidated Billing
Consolidated Billing enables you to see a combined view of all AWS charges incurred by all your accounts, i.e. you get one bill for multiple AWS accounts. Consolidated Billing is available at no additional charge, and one account is designated the Master Account. The Master Account pays the charges accumulated by all the other accounts in the consolidated billing family. Each account’s charges can be easily tracked, and the cost data can also be downloaded in CSV format.
An example:
Let’s consider 2 AWS accounts named Alice and Eve.
Alice transfers 8TB of data and Eve transfers 6TB.
Alice’s consolidated bill consists of Eve’s account and her own account.
The master account is Alice’s account because she pays for the charges incurred by herself and Eve.
Suppose AWS charges $0.19 per GB for the first 10 TB of data transferred and $0.15 per GB for the next 40 TB (1 TB = 1024 GB).
Per-TB rate for the first 10 TB: 0.19 × 1024 = $194.56
Per-TB rate for the next 40 TB: 0.15 × 1024 = $153.60
For the 14 TB that Alice and Eve used together, Alice (the Master Account) is charged:
($194.56 × 10 TB) + ($153.60 × 4 TB) = $1945.60 + $614.40 = $2560.00
The average cost per unit of data transfer for the month is therefore $2560 / 14 TB = $182.86 per TB. This average rate is shown on the Bills page, and a cost report can be downloaded for each account listed in the consolidated bill.
Without Consolidated Billing, each account would have stayed within the first pricing tier, so AWS would have charged Alice and Eve $194.56 per TB for their combined 14 TB of usage: a total of 14 × $194.56 = $2723.84.
Total cost savings with Consolidated Billing = $2723.84 − $2560.00 = $163.84
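The tiered arithmetic above can be captured in a small function, which makes it easy to compare the consolidated bill with what the two accounts would pay separately.

```python
FIRST_TIER_TB = 10
PRICE_FIRST = 0.19 * 1024   # $/TB for the first 10 TB ($0.19 per GB)
PRICE_NEXT  = 0.15 * 1024   # $/TB for the next 40 TB ($0.15 per GB)

def transfer_cost(tb):
    """Tiered data-transfer charge for the two tiers used in the example."""
    first = min(tb, FIRST_TIER_TB)
    rest = max(tb - FIRST_TIER_TB, 0)
    return first * PRICE_FIRST + rest * PRICE_NEXT
```

Consolidated, the 14 TB is billed as one pool, so Alice's 8 TB and Eve's 6 TB together reach the cheaper second tier; billed separately, both stay entirely in the first tier.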
7) Use Reserved and Spot Instances
Committing to Reserved Instances (RIs) provides significant savings: up to 75% over equivalent On-Demand capacity. If you buy an RI and no longer need it, you can sell it on the Reserved Instance Marketplace, or buy a shorter-duration RI there instead. Reservations come with three payment options: All Upfront, Partial Upfront and No Upfront. With Partial and No Upfront, you pay the remaining balance monthly over the term. Beyond EC2, Amazon RDS, DynamoDB, Redshift and ElastiCache also let you take advantage of reservations.
Spot Instances are a phenomenal way to save money on stateless workloads: you simply bid on EC2 capacity that is not currently in use. Spot Instances are ideal when you need access to large amounts of compute capacity and are not concerned about interruption, because you have a mechanism for dealing with it. The prices of Spot Instances vary over time based on current supply and demand.

Cost Explorer, the Billing Dashboard and the Detailed Billing Report are additional excellent AWS tools for tracking your daily spend and maintaining strict billing hygiene. You can also build your own monitoring solution by developing a Lambda function that ingests the detailed billing file into Redshift. Always remember that you are charged not only for data transfer to the Internet but also between AZs, so instances that communicate with each other should be located in the same AZ.
KEY TAKEAWAYS
  1. The simplest way to save money on AWS is not to use services that you don’t need and to investigate your unused infrastructure
  2. Always select the right instance type
  3. Optimize your S3 consumption and make use of the appropriate S3 storage class
  4. Use Cloudwatch and Trusted Advisor to monitor your daily costs
  5. Use Auto Scaling to align your resources with demand
  6. Benefit from cost savings by using Consolidated Billing
  7. Use Reserved and Spot Instances
