Lambda Aliases and Versions

In the previous post, we looked at creating and deploying lambda functions. Deploying lambda functions is relatively straightforward. However, deployments can get tricky when there are many lambda functions, function code changes frequently, and releases need to be tracked appropriately. This is where Lambda Aliases and Lambda Versions can make life easier. There are three elements, and understanding them helps refine the lambda deployment process.

Lambda Function: A lambda function refers to the actual piece of code being deployed. This is the function which gets executed when the lambda is invoked. Every lambda function can be uniquely identified by its ARN.

Lambda Version: As the name suggests, a lambda version refers to a version of a lambda function. One lambda function can have many versions, and every version gets a unique ARN (formed by appending the version number as a suffix to the function's ARN). Every time you change a lambda function, you can publish a new version and add a relevant description.
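These relationships can be sketched with a small illustrative model (the ARN, version numbers, and alias names below are hypothetical, and this is plain Java, not an AWS SDK call):

```java
import java.util.HashMap;
import java.util.Map;

public class LambdaArnModel {
    // A version ARN is the function ARN with the version number appended as a suffix.
    static String versionArn(String functionArn, int version) {
        return functionArn + ":" + version;
    }

    public static void main(String[] args) {
        String functionArn = "arn:aws:lambda:us-east-1:123456789012:function:order-service";

        // An alias is just a named pointer to one specific version.
        Map<String, Integer> aliases = new HashMap<>();
        aliases.put("prod", 3);
        aliases.put("test", 4);

        // Resolving the "prod" alias yields the ARN of version 3.
        System.out.println(versionArn(functionArn, aliases.get("prod")));

        // Re-pointing the alias is how a release is promoted (or rolled back)
        // without callers ever changing the ARN they invoke.
        aliases.put("prod", 4);
        System.out.println(versionArn(functionArn, aliases.get("prod")));
    }
}
```

The point of the model is that callers bind to the alias ARN, while the alias owner decides which immutable version it resolves to.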

Deploy Java Lambda - AWS

This post walks through the steps required to create and deploy Java lambda functions using Eclipse. In the next post, we will have a look at release management and versioning of lambdas. The AWS Toolkit plugin is not mandatory for creating Java lambda functions, but it is handy to have installed; this post assumes the plugin is installed in Eclipse. Click File -> New -> Other -> AWS and select ‘AWS Lambda Java Project’. Enter the project name ‘LambdaExamples’ and click Finish. The lambda project has been created, and lambda functions can now be created in it. Again click File -> New -> Other -> AWS and select ‘AWS Lambda Function’. Enter the source folder, package name, and name of the lambda function. Also select the type of event this lambda will receive. Lambdas are usually triggered as part of some event, for example a new file uploaded to S3, or a new message arriving in an SNS topic. AWS passes the event details to the lambda function when it is invoked.
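As a sketch of what such a handler looks like: a real handler implements `RequestHandler` from the `aws-lambda-java-core` library, but here a minimal stand-in interface and a hypothetical S3-style event are defined locally so the example is self-contained and compiles on its own:

```java
public class LambdaHandlerSketch {
    // Stand-in for com.amazonaws.services.lambda.runtime.RequestHandler<I, O>,
    // declared locally so this sketch compiles without the aws-lambda-java-core jar.
    interface RequestHandler<I, O> {
        O handleRequest(I input);
    }

    // Hypothetical stand-in for the S3 event payload AWS would pass on an object upload.
    record S3EventStub(String bucket, String key) {}

    // The handler the Lambda runtime invokes: it receives the event and returns a result.
    static class S3UploadHandler implements RequestHandler<S3EventStub, String> {
        public String handleRequest(S3EventStub event) {
            return "processed s3://" + event.bucket() + "/" + event.key();
        }
    }

    public static void main(String[] args) {
        // Locally we can exercise the handler directly; in AWS the runtime does this.
        S3UploadHandler handler = new S3UploadHandler();
        System.out.println(handler.handleRequest(new S3EventStub("my-bucket", "report.csv")));
    }
}
```

In the real project generated by the AWS Toolkit, the event type would be the concrete class for the trigger you selected (for example, an S3 event class) rather than this stub.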

Setup SFTP server using AWS Transfer

AWS Transfer is a fully managed SFTP service, and it can be integrated with existing authentication systems such as LDAP. The following are the steps to stand up a basic SFTP server:

Users will need to connect to the SFTP server using a private key. If you do not already have a key, generate one. On Windows, you can generate a public-private key pair using PuTTYgen. Save the private key once the pair is generated; there is no need to save the public key, as it can be regenerated at any time from the private key.

Log into AWS and create an S3 bucket. The SFTP server will be mapped onto this bucket.

Using IAM, create a role which will be used by the AWS Transfer service to publish logs. Select ‘Transfer’ as the AWS service which will use this role. Attach the following two policies to the role: { "Version" : "2012-10-17" , "Statement" : [ { "Effect" : "Allow" , "Action" : [ "s3:ListAllMyBuckets" ,
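For reference, a policy of the kind the excerpt begins to show might look like the following sketch. The bucket name `my-sftp-bucket` and the exact action list are assumptions for illustration; check the AWS Transfer documentation for the permissions your setup actually needs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListAllMyBuckets", "s3:GetBucketLocation"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": [
        "arn:aws:s3:::my-sftp-bucket",
        "arn:aws:s3:::my-sftp-bucket/*"
      ]
    }
  ]
}
```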

AWS Cost Optimizations

Consider changing the storage class of S3 objects from Standard to Standard-Infrequent Access. S3 offers several storage classes, and selecting the class appropriate to the use case can save costs. Changing the storage class of stored objects can be achieved using S3 Lifecycle policies.

Consider reserving EC2 instances. Cost savings are substantial with reserved instances, even when no upfront payment is made.

Consider reserving RDS instances. This too can lower costs significantly, and no upfront payment is required to reserve instances.

If multiple web applications are deployed on multiple EC2 instances and every application sits behind its own Classic Load Balancer, consider using an Application Load Balancer instead. An Application Load Balancer supports host-based routing, so a single ALB can route traffic to the different applications, and the cost savings are significant since the other load balancers can be removed. Every web application can be part of a different target group.
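The storage-class transition mentioned above can be expressed as a lifecycle rule like the following sketch (the rule ID and 30-day threshold are illustrative; note that S3 requires objects to be stored at least 30 days before they can transition to Standard-IA):

```json
{
  "Rules": [
    {
      "ID": "move-to-infrequent-access",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" }
      ]
    }
  ]
}
```

A configuration of this shape can be applied to a bucket with the `aws s3api put-bucket-lifecycle-configuration` command or from the S3 console.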

AWS: Increase Connection Timeout

Environment: Elastic Beanstalk - Java with Tomcat.

Recently, we faced a situation where a Java/Tomcat server was taking more than 60 seconds to process a request, and we were getting 5xx Bad Gateway errors. For this particular use case we could not switch to an asynchronous communication mechanism such as queues, and the business users were more than happy to wait for requests to complete, even if they took more than a minute.

For every request, a Classic Load Balancer maintains two connections: one from the client to the load balancer, and another from the load balancer to the EC2 instance. By default, the idle timeout of a Classic Load Balancer is set to 60 seconds, which means it can close a connection if no data is sent within 60 seconds. To handle requests which take more than 60 seconds to process, we can edit the idle timeout setting and increase it to any value up to 3,600 seconds.
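As a sketch, the same change can be made outside the console with the `aws elb modify-load-balancer-attributes` command, passing an attributes payload like the one below (the 120-second value is illustrative, and the load balancer name would be supplied separately on the command line):

```json
{
  "LoadBalancerAttributes": {
    "ConnectionSettings": {
      "IdleTimeout": 120
    }
  }
}
```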

Refactoring Steps

In continuation of the previous post, Refactoring, the following is a summary of some of the refactoring approaches that I use frequently and that I keep in mind while doing code reviews. Some of these guidelines may seem like common sense, but it is important to revisit them nonetheless. Moreover, more often than not, you will come across code which has suffered design decay for any number of reasons and can use some basic refactoring:

Extract Functions: Functions should be atomic. Ideally, one function should do one task and do it well. Split a function into smaller functions if it is getting hard to understand; code should read like an endless chain of delegation. Write small functions with meaningful names, such that every function shows its intent. Do not underestimate the importance of good names. I am not suggesting having no comments at all, but before adding a comment, consider whether splitting a function or choosing a more meaningful name could remove the need for it. A good name will often make the comment unnecessary.
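As an illustration of Extract Functions, here is a hypothetical before/after (the invoice domain and all names are invented for the example):

```java
import java.util.List;
import java.util.Locale;

public class ExtractFunctionExample {
    // Before: one function validates, computes, and formats -- three intents in one place.
    static String invoiceBefore(List<Double> amounts, double taxRate) {
        if (amounts.isEmpty()) throw new IllegalArgumentException("no line items");
        double total = 0;
        for (double a : amounts) total += a;
        total = total * (1 + taxRate);
        return String.format(Locale.US, "Total due: %.2f", total);
    }

    // After: each helper does one task, and its name states its intent,
    // so the top-level function reads like a chain of delegations.
    static String invoiceAfter(List<Double> amounts, double taxRate) {
        requireLineItems(amounts);
        return formatTotal(applyTax(sum(amounts), taxRate));
    }

    static void requireLineItems(List<Double> amounts) {
        if (amounts.isEmpty()) throw new IllegalArgumentException("no line items");
    }

    static double sum(List<Double> amounts) {
        return amounts.stream().mapToDouble(Double::doubleValue).sum();
    }

    static double applyTax(double subtotal, double taxRate) {
        return subtotal * (1 + taxRate);
    }

    static String formatTotal(double total) {
        return String.format(Locale.US, "Total due: %.2f", total);
    }

    public static void main(String[] args) {
        // Both versions produce the same result; only the structure differs.
        System.out.println(invoiceAfter(List.of(40.0, 60.0), 0.10));
    }
}
```

The behavior is unchanged; the gain is that `invoiceAfter` now states *what* happens, while each helper's name removes the need for an explanatory comment.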


Like many programming concepts, refactoring is a broad concept, and it may mean different things to different people. I am sticking with the definition of refactoring presented by Ken in his book. Refactoring can be defined as the process of: making code easier to understand; making code cheaper to modify; and adding structure to code, where possible.

At this point, I would also like to mention what refactoring is not. Refactoring is not optimization. I am sure many of us have been part of debates where engineers were hesitant to add structure to code, arguing that performance might deteriorate. In my experience, most of the time my own fears about performance were baseless. Every application is different; the key, however, is not to speculate but to have performance tests: set the benchmark, refactor the code, and run the performance tests again. Even if performance takes a hit, you might find it easier to optimize now that the code is refactored, and hopefully it will be easier to understand and cheaper to modify.