AWS Cost Optimizations

Consider changing the storage class of S3 objects from Standard to Standard-Infrequent Access. In total, there are four storage classes in S3, and selecting the storage class appropriate to the use case can save costs. Changing the storage class of stored objects can be achieved using S3 Lifecycle policies.

Consider reserving EC2 instances. Cost savings are huge with reserved instances, even when no upfront payment is made.

Consider reserving RDS instances. It can lower costs significantly, and no upfront payment is required to reserve instances.

If multiple web applications are deployed on multiple EC2 instances, and every application is behind its own Classic Load Balancer, then consider using an Application Load Balancer instead. The Application Load Balancer allows host-based routing, so a single Application Load Balancer can route traffic to the different applications, which will save costs significantly (since the other load balancers can be removed). Every web application can be part of diffe…
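As a sketch of the first tip, a lifecycle rule that transitions objects to Standard-IA after 30 days can be declared as follows (the bucket name and `logs/` prefix are placeholders):

```json
{
  "Rules": [
    {
      "ID": "MoveToStandardIA",
      "Filter": { "Prefix": "logs/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" }
      ]
    }
  ]
}
```

Saved as `lifecycle.json`, this can be applied with `aws s3api put-bucket-lifecycle-configuration --bucket my-bucket --lifecycle-configuration file://lifecycle.json`.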

AWS: Increase Connection Timeout

Environment: Elastic Beanstalk - Java with Tomcat

Recently, we faced a situation where the Java/Tomcat server was taking more than 60 seconds to process requests, and we were getting 5xx Bad Gateway errors. For this particular use case we could not use an asynchronous communication protocol such as queues, and the business users were more than happy to wait for requests to complete, even if it took more than a minute.

For every request, a Classic Load Balancer maintains two connections: one from the client to the load balancer, and another from the load balancer to the EC2 instance. By default, the idle timeout of a Classic Load Balancer is set to 60 seconds, which means it can close the connection if no data is sent within 60 seconds. In order to handle requests that take more than 60 seconds to process, we can edit the idle timeout setting and increase it to any value up to 4000 seconds. However, if we want to handle requests that take more than 60 seconds to process, then the Load Balancer …
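For reference, the idle timeout can also be changed from the AWS CLI (the load balancer name and the 300-second value below are placeholders); a sketch:

```shell
# Raise the Classic Load Balancer idle timeout from 60 to 300 seconds
aws elb modify-load-balancer-attributes \
  --load-balancer-name my-load-balancer \
  --load-balancer-attributes "{\"ConnectionSettings\":{\"IdleTimeout\":300}}"
```

Keep in mind that the backend (e.g. Tomcat's connector timeout) must also allow long-running requests, or the connection will still be cut short on the instance side.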

Refactoring Steps

In continuation of the previous post: Refactoring
Following is a summary of some of the refactoring approaches that I frequently use and keep in mind while doing code reviews. Some of the following guidelines may seem like common sense, but it is important to revisit them nonetheless. Moreover, more often than not, you will come across code that has suffered design decay for a number of reasons and can use some basic refactoring approaches:

Extract Functions:
Functions should be atomic. Ideally, one function should be doing one task and doing it well. Split functions into smaller functions if the code is getting complex to understand. Code should look like an endless chain of delegation. Write small functions with meaningful names, such that every function shows its intent. Do not underestimate the importance of good names. I am not suggesting not having comments at all, but before adding a comment, consider whether splitting the function or giving it a meaningful name can avoid the need for the comment. A good name will allow yo…
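To illustrate with a contrived example (not from the original post): a checkout method that mixes validation, discount math, and formatting can be split into atomic functions whose names state their intent, leaving the public method as a chain of delegation:

```java
import java.util.Locale;

// Before: one long method mixed validation, discount math, and formatting.
// After: each concern is an atomic function whose name shows its intent.
public class PriceCalculator {

    public static String checkoutLabel(double price, int quantity) {
        validate(price, quantity);
        double total = applyBulkDiscount(price * quantity, quantity);
        return formatLabel(total);
    }

    private static void validate(double price, int quantity) {
        if (price < 0 || quantity < 1) {
            throw new IllegalArgumentException("invalid price or quantity");
        }
    }

    // Hypothetical business rule: 10% off for orders of 10 or more items.
    private static double applyBulkDiscount(double subtotal, int quantity) {
        return quantity >= 10 ? subtotal * 0.9 : subtotal;
    }

    private static String formatLabel(double total) {
        return String.format(Locale.US, "Total: $%.2f", total);
    }

    public static void main(String[] args) {
        System.out.println(checkoutLabel(5.0, 10)); // prints "Total: $45.00"
    }
}
```

Note that none of the private helpers needs a comment beyond the one stating the business rule; the names carry the intent.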


Like many programming concepts, refactoring is a broad concept, and for different people it may have different meanings. I am sticking with the definition of refactoring presented by Ken in his book. Refactoring can be defined as the process of:

- making code easier to understand
- making code cheaper to modify
- adding structure to code, if possible

At this point, I would also like to mention what refactoring is not. Refactoring is not optimization. I am sure many of us have been part of debates where engineers were hesitant to add structure to code, arguing that performance may deteriorate. In my experience, most of the time my own fears related to performance were baseless. Every application is different; however, the key is not to speculate but to have performance tests: set the benchmark, refactor the code, and run the performance tests again. Even if performance takes a hit, you might find it easier to optimize now that the code is refactored, and hopefully it will be easier to understand and cheaper to modify. Moreover, …

Connect Lambda to RDS using IAM credentials

It is safer to connect to RDS using IAM credentials rather than database user credentials. You can assign a role to the Lambda with permission to access the database. In order to access the database, you will need to generate a token, and this token is valid only for a very limited time. Thus, there is no need to encrypt and rotate passwords, and no need to worry about saving passwords in a vault and then managing vault access, etc.
The following steps are required to connect to RDS using an IAM role:

1. Modify the RDS instance and enable IAM authentication for it.
2. Create a database user, without a password, and assign the relevant privileges.
3. Create an IAM policy with permission to connect to the database.
4. Create an IAM role; this role will be assigned to the Lambda/EC2 instance.
5. Download the SSL certificates for RDS provided by AWS. These certificates are region specific.
6. Use the Java SDK to generate a token, and use this token to connect to RDS.

Enable IAM Authentication

Log into the AWS Management Console to modify the RDS instance settings. Select the RDS instance, …
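For step 3, the IAM policy grants the `rds-db:connect` action on the database user's resource ARN. A sketch, in which the region, account ID, DB resource ID, and user name are all placeholders to be replaced with your own values:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "rds-db:connect",
      "Resource": "arn:aws:rds-db:us-east-1:123456789012:dbuser:db-ABCDEFGHIJKL01234/db_user"
    }
  ]
}
```

Note that the ARN uses the instance's DbiResourceId (the `db-…` identifier visible in the RDS console), not the instance name, and that the trailing segment must match the database user created in step 2.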

AWS Parameter Store

A typical web application needs credentials to access different resources, such as credentials to connect to a database and tokens to communicate with other web services.
It is common practice to pass these secret parameters to applications via system properties or environment variables. For example, if you are using Elastic Beanstalk for a Java web application, then you can pass parameters (database URL, username, password, etc.) as properties. Every infrastructure is different, but in general this practice is neither secure nor manageable. Some of the common problems are: parameters are not encrypted; parameters might be available in plain text in EC2 instance EB scripts; and parameters are hard to rotate if they are shared by multiple applications. Moreover, if credentials are shared by different applications and multiple people are responsible for deployment, then all those people will need access to the credentials.
Instead of passing secret parameters as environment variables, se…
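The Parameter Store workflow can be sketched with the AWS CLI (the parameter name and value are placeholders):

```shell
# Store a secret, encrypted with the account's default KMS key
aws ssm put-parameter \
  --name /myapp/prod/db-password \
  --type SecureString \
  --value 's3cr3t'

# Read it back, decrypted; the caller needs ssm:GetParameter
# (and kms:Decrypt for the key used)
aws ssm get-parameter \
  --name /myapp/prod/db-password \
  --with-decryption
```

Applications can then fetch parameters at startup via the SDK instead of receiving them as environment variables, and access can be scoped per application with IAM policies on the parameter path.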

AWS Cross-Account RDS Backups

It is hard to deny the importance of cross-region copies of RDS snapshots. Recovery will be a very difficult process, if not entirely impossible, in the case of a disaster as small as the unavailability of an AWS region. However, even cross-region copies of backups are not sufficient. What if the AWS account credentials get compromised? Or some employee goes rogue and deletes the snapshots? These scenarios are not unheard of [1]. In any case, it is a good idea to have off-site backups; a backup is not a backup if it is not in a completely separate location. On AWS, these off-site backups can be snapshots stored in an entirely different AWS account. Unfortunately, AWS does not have a service you can use to create and store backups in a different account, but it is trivial to set this up yourself using scripts or Lambdas. Creating manual snapshots and saving these snapshots in a completely different AWS account will ensure data recovery in the majority of disaster scenarios. And this process can be automated easily u…
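The share-and-copy flow can be sketched with the AWS CLI (instance and snapshot identifiers, region, and the two account IDs are placeholders): create a manual snapshot, share it with the backup account, then copy it from within that account.

```shell
# In the source account: create a manual snapshot
aws rds create-db-snapshot \
  --db-instance-identifier mydb \
  --db-snapshot-identifier mydb-offsite-2018-01-01

# Share the snapshot with the backup account
aws rds modify-db-snapshot-attribute \
  --db-snapshot-identifier mydb-offsite-2018-01-01 \
  --attribute-name restore \
  --values-to-add 210987654321

# In the backup account: copy the shared snapshot so it is owned there
aws rds copy-db-snapshot \
  --source-db-snapshot-identifier arn:aws:rds:us-east-1:123456789012:snapshot:mydb-offsite-2018-01-01 \
  --target-db-snapshot-identifier mydb-offsite-copy
```

The copy in the backup account is the part that matters: a snapshot that is merely shared still lives in, and can be deleted from, the source account.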