Setting up an SFTP server using AWS Transfer

AWS Transfer is a fully managed SFTP service that can integrate with existing identity providers such as LDAP. The following steps set up a basic SFTP server:
  1. Users connect to the SFTP server with a private key. If you do not already have a key pair, generate one. On Windows, you can generate a public-private key pair with PuTTYgen. Save the private key once the pair is generated; there is no need to save the public key, since it can be re-derived from the private key at any time.
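    On Linux or macOS, OpenSSH's ssh-keygen does the same job as PuTTYgen. A quick sketch (the file name `sftp_key` is an arbitrary example):

    ```shell
    # Generate a 4096-bit RSA key pair with no passphrase (-N "").
    # Produces sftp_key (private) and sftp_key.pub (public).
    ssh-keygen -t rsa -b 4096 -f sftp_key -N "" -C "aws-transfer-demo"

    # The public key can always be re-derived from the private key:
    ssh-keygen -y -f sftp_key > sftp_key_rederived.pub
    ```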

  2. Log in to AWS and create an S3 bucket. The SFTP server will be mapped to this bucket.
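    If you prefer the command line, the bucket can be created with the AWS CLI (the bucket name and region below are example values; this requires configured AWS credentials):

    ```shell
    # Create the bucket that will back the SFTP server.
    # Bucket names must be globally unique.
    aws s3api create-bucket --bucket my-sftp-bucket --region us-east-1

    # Outside us-east-1 a location constraint is required, e.g.:
    # aws s3api create-bucket --bucket my-sftp-bucket --region eu-west-1 \
    #     --create-bucket-configuration LocationConstraint=eu-west-1
    ```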

  3. Using IAM, create a role that the AWS Transfer service will use to access the bucket and publish logs. Select ‘Transfer’ as the AWS service that will use this role, then attach the following policy to the role (a second, CloudWatch policy is attached in the next step):

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:ListAllMyBuckets",
                    "s3:GetBucketLocation"
                ],
                "Resource": "*"
            },
            {
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": "arn:aws:s3:::<YourBucket>"
            },
            {
                "Effect": "Allow",
                "Action": "s3:*",
                "Resource": "arn:aws:s3:::<YourBucket>/*"
            }
        ]
    }
    

    The above policy grants full access to the base bucket. In a real-world scenario, however, you will usually want different users to have different permissions; for example, each user might only be allowed to access their own folder. This is possible because users can be associated with scope-down policies that restrict access, as we will see in the following steps.

  4. Also attach the AWS managed policy ‘CloudWatchFullAccess’ to the role.

  5. Click on the role, select the ‘Trust relationships’ tab, and make sure the role has ‘transfer.amazonaws.com’ as a trusted entity. If not, attach the following trust policy document to establish the relationship:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": "transfer.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
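    Steps 3–5 can also be done with the AWS CLI. A sketch, assuming the access policy from step 3 is saved as `access-policy.json`, the trust policy above as `trust-policy.json`, and `sftp-transfer-role` / `sftp-bucket-access` as example names (the commands require credentials with IAM permissions):

    ```shell
    # Create the role with transfer.amazonaws.com as the trusted entity.
    aws iam create-role --role-name sftp-transfer-role \
        --assume-role-policy-document file://trust-policy.json

    # Attach the bucket-access policy from step 3 as an inline policy.
    aws iam put-role-policy --role-name sftp-transfer-role \
        --policy-name sftp-bucket-access \
        --policy-document file://access-policy.json

    # Attach the AWS managed CloudWatch policy from step 4.
    aws iam attach-role-policy --role-name sftp-transfer-role \
        --policy-arn arn:aws:iam::aws:policy/CloudWatchFullAccess
    ```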
    

  6. Create another IAM policy. This policy will be attached to individual users to scope down and restrict access. An example of such a policy is:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowListingOfUserFolder",
                "Action": [
                    "s3:ListBucket"
                ],
                "Effect": "Allow",
                "Resource": [
                    "arn:aws:s3:::${transfer:HomeBucket}"
                ],
                "Condition": {
                    "StringLike": {
                        "s3:prefix": [
                            "${transfer:UserName}/*",
                            "${transfer:UserName}"
                        ]
                    }
                }
            },
            {
                "Sid": "AWSTransferRequirements",
                "Effect": "Allow",
                "Action": [
                    "s3:ListAllMyBuckets",
                    "s3:GetBucketLocation"
                ],
                "Resource": "*"
            },
            {
                "Sid": "HomeDirObjectAccess",
                "Effect": "Allow",
                "Action": [
                    "s3:PutObject",
                    "s3:GetObject",
                    "s3:DeleteObjectVersion",
                    "s3:DeleteObject",
                    "s3:GetObjectVersion"
                ],
                "Resource": "arn:aws:s3:::${transfer:HomeDirectory}*"
            }
        ]
    }
    

  7. Before evaluating a user's access, the AWS Transfer service replaces the ${transfer:HomeBucket} and ${transfer:UserName} variables with the values for the user who is logging in. This lets us use the same generic policy for every user, since the variables are resolved by AWS before access is evaluated.
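    A rough local illustration of this substitution (the real replacement happens inside the service at policy-evaluation time; ‘demo’ and ‘my-sftp-bucket’ are made-up values):

    ```shell
    # Write a fragment of the scope-down policy containing the variables.
    cat > scope-down-fragment.json <<'EOF'
    {
      "Resource": "arn:aws:s3:::${transfer:HomeBucket}",
      "Condition": { "StringLike": { "s3:prefix": "${transfer:UserName}/*" } }
    }
    EOF

    # Simulate what AWS Transfer does for a user named "demo"
    # whose home bucket is "my-sftp-bucket".
    sed -e 's/${transfer:UserName}/demo/g' \
        -e 's/${transfer:HomeBucket}/my-sftp-bucket/g' \
        scope-down-fragment.json > resolved-fragment.json

    cat resolved-fragment.json
    ```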

  8. Go to the AWS Transfer service and create a new server. You can select a custom hostname, but for this demo select ‘Public’ as the endpoint type, ‘None’ as the hostname, and ‘Service Managed’ as the identity provider type. Select the role we created earlier; this allows the server to push logs to a CloudWatch stream.
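    The equivalent AWS CLI call, as a sketch (requires credentials; the account ID in the role ARN is a placeholder):

    ```shell
    # Create a public, service-managed SFTP server that logs via our role.
    aws transfer create-server \
        --endpoint-type PUBLIC \
        --identity-provider-type SERVICE_MANAGED \
        --logging-role arn:aws:iam::123456789012:role/sftp-transfer-role
    # Returns the new server's ID, e.g. {"ServerId": "s-0123456789abcdef0"}
    ```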

  9. Once the server is created, click ‘Add user’ and enter a username. This is the username you will use, along with the private key, to connect to the SFTP server. Attach the scope-down policy we created earlier. Select the bucket created earlier and, if applicable, a home-directory prefix. If selecting a prefix (sub-folder), make sure the prefix string contains the username; for example, if the username is ‘demo’, the prefix could be ‘demo-folder’. Paste the public key corresponding to the private key we generated earlier.
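    The same step via the AWS CLI, as a sketch (server ID, account ID, bucket, username, and file names are all example values; `scope-down-policy.json` is the policy from step 6):

    ```shell
    # Create the SFTP user. --role is the role the user assumes for S3
    # access, --policy is the scope-down policy, and the public key body
    # must match the private key we generated in step 1.
    aws transfer create-user \
        --server-id s-0123456789abcdef0 \
        --user-name demo \
        --role arn:aws:iam::123456789012:role/sftp-transfer-role \
        --policy file://scope-down-policy.json \
        --home-directory /my-sftp-bucket/demo-folder \
        --ssh-public-key-body "$(cat sftp_key.pub)"
    ```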

  10. Open WinSCP, or any other tool you would like to use, to test the SFTP connection. The server endpoint acts as the hostname, and you authenticate with your username and private key. The connection should succeed, and you should land in your prefix sub-folder. Also verify that you cannot PUT files anywhere outside that sub-folder.
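    You can also test from the command line with OpenSSH's sftp client. Note that OpenSSH cannot read PuTTY's .ppk format directly, so export the key in OpenSSH format from PuTTYgen first. The server ID and region below are examples:

    ```shell
    # The hostname follows the pattern
    # <ServerId>.server.transfer.<region>.amazonaws.com
    sftp -i sftp_key demo@s-0123456789abcdef0.server.transfer.us-east-1.amazonaws.com

    # Once connected, exercise the scope-down policy:
    #   put local.txt          -> should succeed inside your home folder
    #   cd .. ; put local.txt  -> should be denied outside it
    ```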

