DynamoDB, AWS’s fast, flexible, serverless NoSQL database service that provides single-digit-millisecond performance at any scale, has many useful built-in features.
In this article we are going to look at how the Time to Live (TTL) feature can be enabled, and at two methods of capturing the resulting events within an AWS CDK stack.
The methods shown in this article could easily be extended into an archiving mechanism: when items reach their TTL they could be archived into cold storage such as S3.

Method 1 – Filtered within the Lambda (Legacy)
Creating the DynamoDB table in CDK is fairly simple. For this example the billing mode is set to pay-per-request and the removal policy to destroy.
The partition key is a simple string attribute named “id”, and the table name is ttlTable1.
To capture the DynamoDB events we need to set the stream type; here it is set to emit both new and old images.
Finally, to enable DynamoDB’s TTL feature we set the time-to-live attribute. We are going to keep this simple and use TTL as the attribute name.
The table in method 2 is set up in the same way.
const ttlTable1 = new dynamodb.Table(this, 'ttlTable1', {
  billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
  removalPolicy: RemovalPolicy.DESTROY,
  partitionKey: { name: 'id', type: dynamodb.AttributeType.STRING },
  tableName: 'ttlTable1',
  stream: dynamodb.StreamViewType.NEW_AND_OLD_IMAGES,
  timeToLiveAttribute: 'TTL',
});
Next, we have the Lambda function. Nothing special here; just note that we grant it stream read permissions on the DynamoDB table.
const triggeredEventFn = new lambdaNode.NodejsFunction(this, 'TriggerEventHandler', {
  runtime: lambda.Runtime.NODEJS_14_X,
  entry: 'lambda/writeDeletedItemFromStreamMethod1.ts',
  handler: 'main',
  timeout: Duration.seconds(3),
  bundling: {
    externalModules: [],
    nodeModules: [],
    minify: true,
  },
  environment: {
    region: process.env.CDK_DEFAULT_REGION!,
  },
});
ttlTable1.grantStreamRead(triggeredEventFn);
Now to link up the event source for the Lambda. In the first method we are going to use a DynamoEventSource; this will trigger on every event written to the stream.
triggeredEventFn.addEventSource(new DynamoEventSource(ttlTable1, {
  startingPosition: lambda.StartingPosition.LATEST,
}));
As the DynamoEventSource triggers on all event types, we need to add some of our own logic to the Lambda itself. For items removed by TTL the event name will be REMOVE. (Note that manually deleted items also produce REMOVE events; if the distinction matters, TTL deletions can be identified by the userIdentity field that DynamoDB attaches to them.)
import { DynamoDBStreamEvent, DynamoDBRecord } from 'aws-lambda';

export async function main(event: DynamoDBStreamEvent): Promise<void> {
  try {
    event.Records.forEach((record: DynamoDBRecord) => {
      console.log(record);
      if (record.eventName === 'REMOVE') {
        // logic for archiving the removed item
      } else {
        // ignore this event type
      }
    });
  } catch (error) {
    console.log('error', error);
  }
}
Once deployed, we can see in the AWS console that, after a period of time, the item is deleted from the table.
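To try this out, write an item whose TTL attribute holds a Unix epoch timestamp (in seconds) a few minutes in the future; DynamoDB does not delete expired items at the exact expiry time, only some time afterwards. A minimal sketch using the v2 SDK’s DocumentClient, where the item shape is just an assumption for illustration:
import { DynamoDB } from 'aws-sdk';

const client = new DynamoDB.DocumentClient();

export async function putExpiringItem(): Promise<void> {
  // TTL values must be Unix epoch time in seconds, not milliseconds
  const ttl = Math.floor(Date.now() / 1000) + 5 * 60;
  await client.put({
    TableName: 'ttlTable1',
    Item: { id: 'example-item', TTL: ttl }, // hypothetical item
  }).promise();
}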

Method 2 – Filtered Events
At large scale the use of the DynamoEventSource can get quite costly, because the Lambda is invoked for every modification made to the table: INSERT, MODIFY and REMOVE events alike. This leads to unnecessary invocations and unnecessary cost.
However, another method now exists: we can use Lambda event filtering to avoid these unnecessary invocations.
In this method the code for both the DynamoDB table and the Lambda remains the same.
const ttlTable2 = new dynamodb.Table(this, 'ttlTable2', {
  billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
  partitionKey: { name: 'id', type: dynamodb.AttributeType.STRING },
  removalPolicy: RemovalPolicy.DESTROY,
  stream: dynamodb.StreamViewType.NEW_AND_OLD_IMAGES,
  tableName: 'ttlTable2',
  timeToLiveAttribute: 'TTL',
});
const triggeredEventMethod2Fn = new lambdaNode.NodejsFunction(this, 'TriggerEventHandlerFiltered', {
  runtime: lambda.Runtime.NODEJS_14_X,
  entry: 'lambda/writeDeletedItemFromStreamMethod2.ts',
  handler: 'main',
  timeout: Duration.seconds(3),
  bundling: {
    externalModules: [],
    nodeModules: [],
    minify: true,
  },
  environment: {
    region: process.env.CDK_DEFAULT_REGION!,
  },
});
ttlTable2.grantStreamRead(triggeredEventMethod2Fn);
Now, instead of using a DynamoEventSource, we are going to use an EventSourceMapping.
This links the events from the stream to the target Lambda function.
const sourceMapping = new EventSourceMapping(this, 'eventSourceMapping', {
  startingPosition: lambda.StartingPosition.LATEST,
  target: triggeredEventMethod2Fn,
  eventSourceArn: ttlTable2.tableStreamArn,
  batchSize: 10,
  retryAttempts: 10,
});
This alone would allow through all the same events as the previous method; however, we can add a property override with filter criteria so that only the REMOVE events are captured.
const cfnSourceMapping = sourceMapping.node.defaultChild as CfnEventSourceMapping;
cfnSourceMapping.addPropertyOverride('FilterCriteria', {
  Filters: [
    {
      // Only capture the REMOVE events
      Pattern: JSON.stringify({
        eventName: ['REMOVE'],
      }),
    },
  ],
});
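As noted earlier, manual deletes also arrive as REMOVE events. If only TTL expirations should be captured, the pattern can be narrowed using the userIdentity that DynamoDB attaches to TTL deletions. A sketch of the stricter pattern (the same override as above, just with extra fields):
cfnSourceMapping.addPropertyOverride('FilterCriteria', {
  Filters: [
    {
      // Match only REMOVE events generated by the TTL process itself;
      // TTL deletions carry a service principal in userIdentity
      Pattern: JSON.stringify({
        eventName: ['REMOVE'],
        userIdentity: {
          type: ['Service'],
          principalId: ['dynamodb.amazonaws.com'],
        },
      }),
    },
  ],
});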
The Lambda code can then be simplified, as it will now only see REMOVE events.
import { DynamoDBStreamEvent, DynamoDBRecord } from 'aws-lambda';

export async function main(event: DynamoDBStreamEvent): Promise<void> {
  try {
    event.Records.forEach((record: DynamoDBRecord) => {
      console.log(record);
      // archive logic
    });
  } catch (error) {
    console.log('error', error);
  }
}
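As mentioned in the introduction, this handler is a natural place to archive the expired items to S3. A minimal sketch, assuming a bucket (my-archive-bucket is a placeholder) that the Lambda has been granted write access to:
import { DynamoDBRecord } from 'aws-lambda';
import { S3 } from 'aws-sdk';

const s3 = new S3();

async function archiveRecord(record: DynamoDBRecord): Promise<void> {
  // The old image holds the item as it was before the TTL deletion
  await s3.putObject({
    Bucket: 'my-archive-bucket', // hypothetical bucket name
    Key: `archive/${record.dynamodb?.Keys?.id?.S ?? Date.now()}.json`, // hypothetical key scheme
    Body: JSON.stringify(record.dynamodb?.OldImage),
  }).promise();
}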
Check out this stack on GitHub.