DataMasque Portal

Cross Account Functionality for AWS

DataMasque supports cross-account AWS access, allowing your DataMasque instance to be in one AWS account while accessing and masking data in another AWS account. This can be done by assuming a role in the target account or by using resource-specific policies, such as an S3 bucket policy or KMS policy. In some cases, both methods can be combined to grant access.

How It Works

There are two ways that DataMasque can access resources in accounts other than the one in which it is running: resource policies or role assumption.

The method you use will depend on the resource type and your account setup. This guide uses the terms source account, meaning the AWS account in which DataMasque is deployed, and target account, meaning the AWS account containing the resources to be masked.

Resource Policies

Resource policies can only be applied to certain types of resources. The policy is specified directly on the resource in the target account, granting access to the role DataMasque has in the source account. For example, DataMasque in source Account A is running on an EC2 instance that has role RA attached. The policy on the S3 bucket in target Account B grants access to RA directly.

Resource policies are supported when masking S3 buckets (via an S3 Bucket Policy). When masking S3 buckets, access to a specific KMS key might also be required; this can be granted with a KMS Key Policy.

This method of cross-account access is described in Using S3 Bucket Policies.

Role Assumption

This is when DataMasque in one AWS account assumes a role in another AWS account. It is as if DataMasque has logged in to the other account as that role. Access to resources is based on the IAM policies assigned to the role. For example, DataMasque in Account A assumes the role RB in Account B. It will then have access to all resources based on RB's policy in Account B.

In this type of cross-account access, DataMasque requires the ARN of the role it is to assume.
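
For example, if RB is a role in Account B (account ID 222222222222), the ARN DataMasque needs would look like arn:aws:iam::222222222222:role/RB.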

Role assumption is supported when masking:

  • S3 buckets (S3 File Connections)
  • Amazon DynamoDB connections

Using S3 Bucket Policies

This section presents a guide on enabling cross-account access to S3 buckets. We use the following example scenario to demonstrate this functionality:

You have deployed DataMasque on an EC2 or an EKS cluster in Account A, and you want DataMasque to access buckets in Account B and Account C.

Figure 1.0 gives an overview of how this access is enabled.

Figure 1.0: AWS S3 Account Diagram

| Account Name | Account ID | Description |
| --- | --- | --- |
| Account A | 1111-1111-1111 | DataMasque is deployed on an EC2 or EKS instance in this account. |
| Account B | 2222-2222-2222 | This account has an S3 bucket to be accessed by the DataMasque instance in Account A. |
| Account C | 3333-3333-3333 | This account has an S3 bucket to be accessed by the DataMasque instance in Account A. |

Account A Setup

In Account A, perform the following setup:

  1. Create an IAM policy that allows reading/writing to all S3 buckets. While this policy allows the DataMasque instance to attempt to access any bucket, the level of access actually granted is determined by the bucket policy applied in the target account (where the bucket resides).

    The policy in the source account (Account A) should look like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SourceBucketRead",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Sid": "SourceBucketPermissionCheck",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketAcl",
        "s3:GetBucketPolicyStatus",
        "s3:GetBucketPublicAccessBlock"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    },
    {
      "Sid": "DestinationBucketWrite",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Sid": "DestinationBucketSecurityCheck",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketAcl",
        "s3:GetBucketPolicyStatus",
        "s3:GetBucketPublicAccessBlock",
        "s3:GetBucketObjectLockConfiguration"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    },
    {
      "Sid": "AllowEventLogging",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams",
        "logs:CreateLogGroup",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    },
    {
      "Sid": "ExportedDataAccess",
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketPublicAccessBlock",
        "s3:GetBucketPolicyStatus",
        "s3:GetBucketObjectLockConfiguration",
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetEncryptionConfiguration",
        "s3:ListBucket",
        "s3:GetBucketAcl",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    }
  ]
}
  2. Attach this policy to an IAM role, and attach that role to the EC2 instance or EKS cluster. Refer to AWS EC2 instances or AWS EKS clusters below for instructions on attaching the role.

Once the policy is created in Account A, further setup in Account B and Account C can be performed.

Target Accounts (Account B and Account C) Setup

You now need to create policies in the target account(s) that allow access to the buckets from the source account (Account A).

Create a bucket policy on the buckets in each target account to allow access from the specified role in Account A.

In this example, the bucket policy should be created in Account B and Account C. The role that is given access is DM-Role (ARN arn:aws:iam::111111111111:role/DM-Role).

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SourceBucketRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::111111111111:role/DM-Role"
        ]
      },
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket-name>",
        "arn:aws:s3:::<bucket-name>/*"
      ]
    },
    {
      "Sid": "SourceBucketPermissionCheck",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::111111111111:role/DM-Role"
        ]
      },
      "Action": [
        "s3:ListBucket*",
        "s3:GetBucketAcl",
        "s3:GetBucketPolicyStatus",
        "s3:GetBucketPublicAccessBlock"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket-name>",
        "arn:aws:s3:::<bucket-name>/*"
      ]
    },
    {
      "Sid": "DestinationBucketWrite",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::111111111111:role/DM-Role"
        ]
      },
      "Action": [
        "s3:PutObject",
        "s3:GetObject*"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket-name>",
        "arn:aws:s3:::<bucket-name>/*"
      ]
    },
    {
      "Sid": "DestinationBucketSecurityCheck",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::111111111111:role/DM-Role"
        ]
      },
      "Action": [
        "s3:ListBucket*",
        "s3:GetBucketAcl",
        "s3:GetBucketPolicyStatus",
        "s3:GetBucketObjectLockConfiguration"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket-name>",
        "arn:aws:s3:::<bucket-name>/*"
      ]
    },
    {
      "Sid": "AllowSSLRequestsOnly",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::<bucket-name>",
        "arn:aws:s3:::<bucket-name>/*"
      ],
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      }
    }
  ]
}
  • Replace the 111111111111 account ID with the actual account ID of your source account (Account A in this example).
  • Replace the role name DM-Role with the role name you have chosen.
  • The role must exist before creating the bucket policy.
  • Replace <bucket-name> with the name of the bucket you are providing access to.

Remember that access won't be available until roles are created and assigned to the DataMasque instances; continue reading below for more information.

Using Assume Role

AWS Assume Role can be used for S3 File Connections or Amazon DynamoDB connections.

With AWS Assume Role, DataMasque obtains temporary security credentials for a role in the target account.

This section presents a guide on enabling access to S3 buckets created in target AWS accounts using cross-account IAM roles. We will use the following example scenario to demonstrate this functionality:

You have deployed DataMasque on an EC2 or an EKS cluster in Account A and you want to allow DataMasque to access one or more S3 buckets in Account B and/or Account C.

Figure 2.0 gives an overview of how this access is enabled.

Figure 2.0: AWS S3 Account Diagram

Note that this scenario illustrates S3 bucket access only, but the same concepts apply for masking DynamoDB and Redshift. The specific policies required to access these will differ. Please refer to DynamoDB Policies and Redshift Policies below.

To effectively manage cross-account access control and audit S3 object permissions, we utilise cross-account IAM roles in the AWS accounts where the S3 buckets reside, plus another IAM role in the AWS account where DataMasque has been deployed on EKS or EC2. The following table shows how we refer to these accounts and resources in the instructions below:

The IAM ARNs/names mentioned in this document are for illustration purposes only. Please change them to the actual IAM role ARNs in the source and target AWS accounts based on your naming policies.

| Account Name | Account ID | IAM Policy | IAM Role | Description |
| --- | --- | --- | --- | --- |
| Account A | 1111-1111-1111 | Source-DM-Policy | Source-DM-Role | DataMasque is deployed on an EC2 or EKS instance in this account. |
| Account B | 2222-2222-2222 | Target-DM-Policy (Cross Account) | Target-DM-Role (Cross Account) | This account has an S3 bucket to be accessed by the DataMasque instance in Account A. |
| Account C | 3333-3333-3333 | Target-DM-Policy (Cross Account) | Target-DM-Role (Cross Account) | This account has an S3 bucket to be accessed by the DataMasque instance in Account A. |

Account A Setup

In Account A, perform the following setup:

  1. Set up an IAM role named Source-DM-Role within Account A, with the EC2 service trusted. Attach this role to either the EC2 instance or the EKS cluster hosting the DataMasque instance. Also, create an IAM policy with the specified permissions and link it to the Source-DM-Role.

Note: The policy allows sts:AssumeRole permission for target roles in two accounts. If needed for multiple accounts, consider using * instead of individual Account ID numbers.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": [
        "arn:aws:iam::222222222222:role/Target-DM-Role",
        "arn:aws:iam::333333333333:role/Target-DM-Role"
      ]
    }
  ]
}
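
For instance, if every target account uses the same role name, the wildcard option mentioned in the note above can be implemented by wildcarding the account ID position of the resource ARN. This is only a sketch: broadening the resource this way trades convenience for a wider set of assumable roles, so check it against your security requirements first.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::*:role/Target-DM-Role"
    }
  ]
}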
  2. Attach the Source-DM-Role IAM role created in step 1 to the EC2 instances where the DataMasque application is running.

Target Accounts (Account B and Account C) Setup

  1. Create an IAM role called Target-DM-Role with the trust policy defined below attached to it.

Note: This trust policy restricts the sts:AssumeRole action to the IAM role Source-DM-Role created in the account where DataMasque is deployed.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:role/Source-DM-Role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
  2. Create an IAM policy with the following permissions and attach it to the Target-DM-Role IAM role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BucketPermissionCheck",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketAcl",
        "s3:GetBucketPolicyStatus",
        "s3:GetBucketPublicAccessBlock",
        "s3:GetBucketObjectLockConfiguration"
      ],
      "Resource": [
        "arn:aws:s3:::<source-bucket-name>",
        "arn:aws:s3:::<target-bucket-name>"
      ]
    },
    {
      "Sid": "BucketReadWrite",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::<source-bucket-name>/*",
        "arn:aws:s3:::<target-bucket-name>/*"
      ]
    }
  ]
}

If the bucket's contents are encrypted using a custom KMS key, the IAM role will need the following KMS permissions in addition to the S3 permissions defined above:

{
  "Effect": "Allow",
  "Action": [
    "kms:Decrypt",
    "kms:Encrypt",
    "kms:GenerateDataKey"
  ],
  "Resource": "arn:aws:kms:region:account-id:key/key-id"
}
  3. Attach the IAM policy created in step 2 to the IAM role created in step 1.

Before updating your DataMasque connection config, it is recommended to test these new delegated access policies in the AWS IAM policy simulator to confirm that the policies work as expected.

Updating Connections to Use Assume Role

You can specify the AWS Role ARN for any DataMasque S3 File or Amazon DynamoDB Connection in one of two ways:

Through the DataMasque UI

  1. Navigate to the File Masking page or Database Masking page.
  2. Either create a new Connection or select Edit for an existing Connection which you wish to update with an Assume Role ARN.
  3. Check the Cross account access checkbox to enable the IAM Role ARN input box.
  4. Add the Assume Role ARN to the IAM Role ARN input box on the Connection details.
  5. Select Test Connection to confirm the Connection works with the input IAM Role ARN.
  6. A green snackbar alert should appear confirming that it works.
  7. Select Save And Exit to ensure the Connection retains the Assume Role functionality.

Through the DataMasque API

To update an S3 File Connection or Amazon DynamoDB connection config with an IAM Role ARN via the API, refer to the connection create API example.

Add the "iam_role_arn" key-value pair to the JSON request data when updating the connection config:

E.g.

{
  "version": "1.0",
  "name": "example-connection",
  …
  "iam_role_arn": "arn:aws:iam::123456789012:role/example-role"
}

Note: This means that you must include any other relevant existing key-value pairs in the request JSON; they have been omitted from the example above for brevity.

If the "role_arn" key value pair is not added to the connection config, DataMasque will attempt to connect to the S3 bucket without role assumption.

DynamoDB Policies

This section describes the policies needed for cross-account DynamoDB masking. Since DynamoDB requires the use of a staging S3 bucket to temporarily hold data for masking, you will also need to provide access to this S3 bucket with the policies you have created.

This section assumes you are familiar with how role assumption works, as described in Using Assume Role above.

  1. In the source account, set up a policy allowing sts:AssumeRole to the target accounts (refer to Account A Setup).

  2. In the target account(s), create a policy to allow the role to be assumed by the source account (refer to Target Accounts (Account B and Account C) Setup).

  3. Create an IAM policy with the following permissions and attach it to the Target-DM-Role IAM role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
        "Sid": "AllowEventLogging",
        "Effect": "Allow",
        "Action": [
            "logs:CreateLogStream",
            "logs:DescribeLogGroups",
            "logs:DescribeLogStreams",
            "logs:CreateLogGroup",
            "logs:PutLogEvents"
        ],
        "Resource": "*"
    },
    {
        "Sid": "ExportedDataAccess",
        "Effect": "Allow",
        "Action": [
            "s3:GetBucketPublicAccessBlock",
            "s3:GetBucketPolicyStatus",
            "s3:GetBucketObjectLockConfiguration",
            "s3:PutObject",
            "s3:GetObject",
            "s3:GetEncryptionConfiguration",
            "s3:ListBucket",
            "s3:GetBucketAcl",
            "s3:DeleteObject"
        ],
        "Resource": [
            "arn:aws:s3:::<bucket-name>",
            "arn:aws:s3:::<bucket-name>/*"
        ]
    },
    {
        "Sid": "SourceTableExport",
        "Effect": "Allow",
        "Action": [
            "dynamodb:DescribeTable",
            "dynamodb:DescribeContinuousBackups",
            "dynamodb:ExportTableToPointInTime"
        ],
        "Resource": [
            "arn:aws:dynamodb:<region>:<account-id>:table/<table-name>"
        ]
    },
    {
        "Sid": "ExportProcessStatus",
        "Effect": "Allow",
        "Action": [
            "dynamodb:DescribeExport"
        ],
        "Resource": [
            "arn:aws:dynamodb:<region>:<account-id>:table/<table-name>/*"
        ]
    },
    {
        "Sid": "TargetTableImportAndCleanup",
        "Effect": "Allow",
        "Action": [
            "dynamodb:DeleteTable",
            "dynamodb:DescribeTable",
            "dynamodb:ImportTable",
            "dynamodb:UpdateTable"
        ],
        "Resource": [
            "arn:aws:dynamodb:<region>:<account-id>:table/<table-name>*"
        ]
    },
    {
        "Sid": "ImportProcessStatus",
        "Effect": "Allow",
        "Action": [
            "dynamodb:DescribeImport"
        ],
        "Resource": [
            "arn:aws:dynamodb:<region>:<account-id>:table/<table-name>*/*"
        ]
    }
  ]
}

Note that <region> and <account-id> should be replaced with the region and account ID of the target account.
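
For example, assuming a hypothetical table named Customers in us-east-1 in target Account B, the SourceTableExport statement would become:

{
    "Sid": "SourceTableExport",
    "Effect": "Allow",
    "Action": [
        "dynamodb:DescribeTable",
        "dynamodb:DescribeContinuousBackups",
        "dynamodb:ExportTableToPointInTime"
    ],
    "Resource": [
        "arn:aws:dynamodb:us-east-1:222222222222:table/Customers"
    ]
}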

If the DynamoDB table or S3 Bucket have custom KMS keys configured, you will also need to add a Statement to allow access to the key(s). For example:

{
  "Effect": "Allow",
  "Action": [
    "kms:Decrypt",
    "kms:Encrypt",
    "kms:GenerateDataKey"
  ],
  "Resource": "arn:aws:kms:region:account-id:key/key-id"
}
  4. Attach the IAM policy created in step 3 to the IAM role created in step 2.

When setting up the DynamoDB connection in DataMasque, make sure to specify the ARN of the role to assume. In this example, you would specify the role in the target account, e.g. arn:aws:iam::222222222222:role/Target-DM-Role or arn:aws:iam::333333333333:role/Target-DM-Role.
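
As with the S3 example earlier, the role ARN is supplied via the "iam_role_arn" key in the connection config. A minimal sketch (the connection name is a placeholder, and the other required connection fields are elided as before):

{
  "version": "1.0",
  "name": "dynamodb-target-connection",
  …
  "iam_role_arn": "arn:aws:iam::222222222222:role/Target-DM-Role"
}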

Redshift Policies

When DataMasque masks a Redshift table, it exports the table to S3 as .parquet files and performs masking on those files. The files are then imported back into Redshift. This is achieved using the Redshift-specific SQL COPY and UNLOAD statements. Because of this, DataMasque does not need to assume a role; only S3 bucket policies and IAM role configuration on Redshift are required. DataMasque requires network access to the Redshift instance, just like any other database.

Redshift and S3 in the same account

In this scenario, Redshift and S3 are in the same account, which is different from the account where DataMasque is running.

| Account Name | Account ID | Description |
| --- | --- | --- |
| Account A | 1111-1111-1111 | DataMasque is deployed on an EC2 or EKS instance in this account. |
| Account B | 2222-2222-2222 | This account has the Redshift instance and an S3 bucket to be accessed by the DataMasque instance in Account A. |

  • Deploy Redshift as usual with an IAM policy allowing access to the individual S3 bucket (a sketch of such a policy follows below).
  • Set up DataMasque and S3 as described in Using S3 Bucket Policies above, to allow access to the bucket in target account Account B from source account Account A.
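
A minimal sketch of the S3 access policy attached to the Redshift cluster's IAM role might look like the following. The bucket name is a placeholder; UNLOAD and COPY need bucket-level list access as well as object-level read/write access.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RedshiftBucketAccess",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket-name>",
        "arn:aws:s3:::<bucket-name>/*"
      ]
    }
  ]
}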

DataMasque and S3 in the same account

Only Redshift is in a different account. DataMasque and the S3 bucket are in the same account.

| Account Name | Account ID | Description |
| --- | --- | --- |
| Account A | 1111-1111-1111 | DataMasque is deployed on an EC2 or EKS instance in this account, and the S3 bucket is also hosted here. |
| Account B | 2222-2222-2222 | This account has the Redshift instance. |

  • DataMasque needs access to S3 from within the same AWS account.
  • Redshift needs to assume a role in Account A in order to access S3.

This is documented in the AWS documentation COPY or UNLOAD data from another account in Amazon Redshift.

After following the document above, set the IAM Role on your Redshift connection in DataMasque to arn:aws:iam::Amazon_Redshift_Account_ID:role/RoleB,arn:aws:iam::Amazon_S3_Account_ID:role/RoleA (i.e. both role ARNs, separated by a comma).
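
For example, a minimal sketch of the trust policy on RoleA (in the account holding the S3 bucket) that lets RoleB chain to it, using the placeholder names from the AWS document above:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::Amazon_Redshift_Account_ID:role/RoleB"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}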

S3 server-side encryption and KMS Key access

When configuring server-side encryption (SSE) on an S3 bucket or its objects, there are three options for encryption type:

  • Server-side encryption with Amazon S3 managed keys (SSE-S3)
  • Server-side encryption with AWS Key Management Service keys (SSE-KMS)
  • Dual-layer server-side encryption with AWS Key Management Service keys (DSSE-KMS)

Depending on the option you select, you may have to configure access to the KMS key for the cross-account role.

SSE-S3

No extra configuration is required. Any role that has access to the S3 bucket will have access to this encryption key as well. This applies to both bucket policies and cross-account roles.

SSE-KMS or DSSE-KMS

If you use SSE-KMS or DSSE-KMS, you must configure the KMS keys to allow the necessary access. The configuration instructions below apply to both encryption methods.

Using Bucket Policy

Roles that have access to an S3 bucket via a bucket policy do not have access to custom KMS keys. Since bucket policies only provide access to the S3 bucket and not to any other AWS resources, you must use an alternative method to grant access to KMS keys.

Using Cross Account Roles

To grant cross-account access to KMS keys, you must attach a Key Policy directly to the KMS key. The source account role must be granted access to the KMS key with the appropriate permissions.

  1. In Account A (the source account), the IAM policy for the role must allow the kms:Decrypt, kms:Encrypt, kms:DescribeKey and kms:GenerateDataKey actions. For example:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "KMSPolicy",
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt",
        "kms:Encrypt",
        "kms:DescribeKey",
        "kms:GenerateDataKey"
      ],
      "Resource": "*"
    }
  ]
}

Ensure that the policy also includes the necessary S3 actions as listed above.

  2. On the KMS key in the target accounts (Account B and Account C), allow the specified actions from the source account role. For example:
{
  "Version": "2012-10-17",
  "Id": "key-policy",
  "Statement": [
    {
      "Sid": "Allow use of the key cross account",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:role/DM-Role"
      },
      "Action": [
        "kms:Decrypt",
        "kms:Encrypt",
        "kms:DescribeKey",
        "kms:GenerateDataKey"
      ],
      "Resource": "*"
    }
  ]
}

If different objects in the bucket use different keys, then you will need to apply this policy to each key.

AWS S3 Key Specified Directly

If your account has an AWS-managed key with the alias aws/s3 visible, and you directly specify its ARN as an encryption key in S3, cross-account access to this key will not be possible. This key is managed by AWS for general use, and custom policies cannot be applied to it, which prevents cross-account access.