Amazon Rekognition is a powerful image and video analysis service available in the AWS cloud. With it, you can detect objects, faces, emotions, and much more, which opens up tremendous potential for automating the analysis and understanding of visual data. In this article, I’ll introduce the basics of Amazon Rekognition and show how you can combine it with AWS Lambda to create automated image-analysis workflows. I’ll also include a link to my video, where I demonstrate real-world examples of these functions in action.
What is Amazon Rekognition?
Amazon Rekognition is a machine learning service provided by AWS that automatically analyzes visual content such as images and videos. It offers a variety of features, including object detection, facial recognition, text recognition, and scene analysis. Rekognition is used across multiple industries, such as security, marketing, and large-scale visual data analysis, thanks to its ability to process and analyze images at scale.
One of the key advantages of Rekognition is its seamless integration with AWS Lambda, allowing for automatic image processing as soon as a file is uploaded to an Amazon S3 bucket. This integration enables fully automated workflows, helping businesses save time and resources through automatic tagging, identification, and image analysis.
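To give a sense of what that trigger looks like in code, here is a minimal sketch of a handler that reads the bucket name and object key from the incoming S3 ObjectCreated event and passes them straight to Rekognition. The examples later in this article hard-code these values for clarity, so treat this purely as an illustration of the event-driven pattern.
import boto3

rekognition_client = boto3.client('rekognition')

def lambda_handler(event, context):
    # The S3 notification event carries the bucket and key of the uploaded object
    record = event['Records'][0]
    bucket = record['s3']['bucket']['name']
    key = record['s3']['object']['key']

    # Label the freshly uploaded image with Rekognition
    response = rekognition_client.detect_labels(
        Image={'S3Object': {'Bucket': bucket, 'Name': key}},
        MaxLabels=10,
        MinConfidence=80
    )
    return [label['Name'] for label in response['Labels']]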
Example 1: Using Lambda to Analyze an Image and Return Detected Objects

A simple but highly effective use case is setting up an AWS Lambda function that automatically analyzes an image using Amazon Rekognition and returns detailed information about the objects it detects. When an image is uploaded to a specified S3 bucket, the Lambda function is triggered to run, calling Rekognition to analyze the image and return a list of objects along with their confidence levels. This kind of automation is incredibly useful for tasks such as content moderation, image cataloging, and inventory management.
Below is an example of the code I used in the video. By changing the bucket and image_name variables, you can easily adapt it to your needs and test it.
import json
import boto3

def lambda_handler(event, context):
    client = boto3.client('rekognition')

    bucket = 'rekognition-bucket-00235'
    image_name = 'dog-8448345_1280.jpg'

    # Ask Rekognition to label the image stored in S3
    response = client.detect_labels(
        Image={
            'S3Object': {
                'Bucket': bucket,
                'Name': image_name,
            }
        },
        MaxLabels=10,
        MinConfidence=80
    )

    # Keep only the label name and confidence for each detected object
    labels = response['Labels']
    result = []
    for label in labels:
        result.append({
            'Name': label['Name'],
            'Confidence': label['Confidence']
        })

    return {
        'statusCode': 200,
        'body': json.dumps(result)
    }
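For reference, the body returned by this function is a JSON array of label and confidence pairs. For a photo of a dog it could look roughly like the following (these values are purely illustrative, not actual output from the video):
[
    {"Name": "Dog", "Confidence": 98.7},
    {"Name": "Pet", "Confidence": 98.7},
    {"Name": "Animal", "Confidence": 96.1}
]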
Example 2: Generating a New Image with Labeled Objects

In addition to analyzing and returning object data, AWS Lambda can also be used to create a new image with labels directly embedded on the detected objects. The Lambda function can take an image, analyze it using Rekognition, and then generate a new version of the image with bounding boxes and labels added. This can be useful for creating reports, visualizing analysis results, or highlighting specific details in images, such as in security or marketing applications.
import json
import boto3
from PIL import Image, ImageDraw, ImageFont
import random
import os

# Initialize S3 and Rekognition clients
s3_client = boto3.client('s3')
rekognition_client = boto3.client('rekognition')

def lambda_handler(event, context):
    # S3 parameters: bucket name and image key
    bucket_name = 'rekognition-bucket-00235'
    image_key = 'fashion-2208045_1280.jpg'

    # Download the image from S3 to the temporary directory in Lambda
    s3_client.download_file(bucket_name, image_key, '/tmp/image.png')

    # Open the downloaded image using PIL
    image = Image.open('/tmp/image.png')
    draw = ImageDraw.Draw(image)

    # Use Amazon Rekognition to detect labels in the image
    response = rekognition_client.detect_labels(
        Image={'S3Object': {'Bucket': bucket_name, 'Name': image_key}},
        MaxLabels=10  # Limit to 10 labels
    )

    # Set the font for the text to be drawn on the image
    try:
        # Use a larger font size for better visibility
        font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf", 50)
    except IOError:
        # Fall back to the default font if the TTF font is not available
        font = ImageFont.load_default()

    # Iterate through detected labels and draw them on the image
    for label in response['Labels']:
        # Draw a box for every instance of the label that has a bounding box
        for instance in label['Instances']:
            box = instance['BoundingBox']
            width, height = image.size

            # Convert the relative bounding box coordinates to pixel values
            left = box['Left'] * width
            top = box['Top'] * height
            right = left + (box['Width'] * width)
            bottom = top + (box['Height'] * height)

            # Generate a random color for the rectangle
            color = (random.randint(0, 255), random.randint(0, 255), random.randint(0, 255))

            # Draw the rectangle around the detected object
            draw.rectangle([left, top, right, bottom], outline=color, width=4)

            # Create the label text with the name and confidence
            text = f"{label['Name']} - {label['Confidence']:.2f}%"

            # Create a background rectangle for better text visibility
            # Using textbbox to get the bounding box of the text
            text_bbox = draw.textbbox((left, top), text, font=font)
            background_box = [text_bbox[0] - 5, text_bbox[1] - 5, text_bbox[2] + 5, text_bbox[3] + 5]
            draw.rectangle(background_box, fill=(255, 255, 255))  # White background behind the text

            # Draw the text at the top-left corner of the bounding box
            draw.text((left, top), text, fill=color, font=font)

    # Create the new file name with a suffix
    original_filename = os.path.splitext(image_key)[0]
    new_image_key = f"{original_filename}_labelled.jpg"

    # Save the updated image as JPEG with a quality setting for better output
    image.save('/tmp/labelled_image.jpg', 'JPEG', quality=95)

    # Upload the labelled image back to the S3 bucket with the new name
    s3_client.upload_file('/tmp/labelled_image.jpg', bucket_name, new_image_key)

    # Return a success response with a link to the new image
    image_url = f"https://{bucket_name}.s3.amazonaws.com/{new_image_key}"
    return {
        'statusCode': 200,
        'body': json.dumps({'message': 'Image processed successfully!', 'image_url': image_url})
    }
Example 3: Batch Processing and Analyzing Images in a Folder

Another powerful application is the ability to process multiple images stored in a specific S3 folder. AWS Lambda can be configured to scan all images within a folder and analyze each one, automatically generating reports on detected objects or other features, such as identifying whether individuals in the images are wearing glasses. This bulk-processing capability is ideal for use cases where a large number of images need to be processed at once, such as when analyzing security footage or organizing a database of customer images.
import json
import boto3
from PIL import Image, ImageDraw, ImageFont
import os

# Initialize S3 and Rekognition clients
s3_client = boto3.client('s3')
rekognition_client = boto3.client('rekognition')

def lambda_handler(event, context):
    # S3 parameters: bucket name and folder prefix (directory)
    bucket_name = 'rekognition-bucket-00235'
    folder_prefix = 'faces'

    # List all objects in the specified folder (prefix)
    objects = s3_client.list_objects_v2(Bucket=bucket_name, Prefix=folder_prefix)

    # Check if the folder contains any objects
    if 'Contents' not in objects:
        return {
            'statusCode': 400,
            'body': json.dumps({'message': f'No images found in {folder_prefix}'}, indent=4)
        }

    processed_images = []

    # Iterate through all the objects (images) in the folder
    for obj in objects['Contents']:
        image_key = obj['Key']

        # Skip non-image files (you can add more file type filters as needed)
        if not image_key.lower().endswith(('.jpg', '.jpeg', '.png')):
            continue

        # Download the image from S3 to the temporary directory in Lambda
        s3_client.download_file(bucket_name, image_key, '/tmp/image.jpg')

        # Open the downloaded image using PIL
        image = Image.open('/tmp/image.jpg')
        draw = ImageDraw.Draw(image)

        # Use Amazon Rekognition to detect faces and check for glasses
        response = rekognition_client.detect_faces(
            Image={'S3Object': {'Bucket': bucket_name, 'Name': image_key}},
            Attributes=['ALL']
        )

        # Set the font for the text to be drawn on the image
        try:
            font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf", 40)
        except IOError:
            font = ImageFont.load_default()

        # Track whether any face in the image is wearing glasses
        glasses_found = False

        # Loop through detected faces and check for glasses
        for face_detail in response['FaceDetails']:
            # Get the bounding box for each face
            box = face_detail['BoundingBox']
            width, height = image.size
            left = box['Left'] * width
            top = box['Top'] * height
            right = left + (box['Width'] * width)
            bottom = top + (box['Height'] * height)

            # Check if the person is wearing glasses
            if face_detail['Eyeglasses']['Value']:
                glasses_found = True
                text = 'Glasses Detected'
                text_color = (0, 255, 0)  # Green for glasses
            else:
                text = 'No Glasses'
                text_color = (255, 0, 0)  # Red for no glasses

            # Draw a rectangle around the face
            draw.rectangle([left, top, right, bottom], outline=text_color, width=3)

            # Draw the text below the face with a background for better readability
            text_bbox = draw.textbbox((left, bottom + 10), text, font=font)
            background_box = [text_bbox[0] - 5, text_bbox[1] - 5, text_bbox[2] + 5, text_bbox[3] + 5]
            draw.rectangle(background_box, fill=(255, 255, 255))
            draw.text((left, bottom + 10), text, fill=text_color, font=font)

        # Create the new file name with a prefix indicating glasses detection
        if glasses_found:
            new_image_key = f"glasses_{image_key}"
        else:
            new_image_key = f"no_glasses_{image_key}"

        # JPEG cannot store an alpha channel, so convert e.g. RGBA PNGs to RGB first
        if image.mode != 'RGB':
            image = image.convert('RGB')

        # Save the modified image as a JPEG
        image.save('/tmp/labelled_image.jpg', 'JPEG', quality=95)

        # Upload the labelled image back to the S3 bucket with the new name
        s3_client.upload_file('/tmp/labelled_image.jpg', bucket_name, new_image_key)

        # Add the processed image to the list
        processed_images.append(new_image_key)

    # Return a success response with pretty JSON formatting
    return {
        'statusCode': 200,
        'body': json.dumps({'processed_images': processed_images}, indent=4)
    }
IAM Role Permissions
To implement these examples, your Lambda function will require specific IAM role permissions. The function needs access to both Amazon S3 and Amazon Rekognition. Specifically, the execution role must be allowed to read objects from the S3 bucket (and, for Examples 2 and 3, to list the folder contents and write the processed images back), as well as to call Rekognition’s image analysis operations. Proper configuration of these permissions is crucial to ensuring that the Lambda function can operate without issues.
Exactly which permissions you grant depends on what you want your Lambda function to do; I recommend always granting only the permissions it actually needs.
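As a rough starting point (not a definitive policy), an inline policy for the Lambda execution role covering all three examples might look like the sketch below. The bucket name is the one used in the examples, and you should narrow the actions and resources to whatever your function actually does; basic CloudWatch Logs permissions (for example via the AWSLambdaBasicExecutionRole managed policy) are assumed to be attached separately.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::rekognition-bucket-00235/*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::rekognition-bucket-00235"
        },
        {
            "Effect": "Allow",
            "Action": ["rekognition:DetectLabels", "rekognition:DetectFaces"],
            "Resource": "*"
        }
    ]
}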
Lambda Layer Requirement for Image Processing
By default, AWS Lambda does not include the necessary libraries for image processing, such as Pillow (PIL) or fonts for rendering text. To enable these capabilities, you’ll need to add a custom Lambda layer that includes these dependencies. Without this additional layer, the function won’t be able to manipulate images or draw text properly.
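One way to build such a layer is sketched below, assuming an x86_64 Python 3.12 runtime; the layer and file names are just examples. Pillow is installed into a local python/ directory using wheels that match the Lambda Linux environment, zipped, and published as a layer:
# Install Pillow built for the Lambda Linux environment into a local python/ directory
mkdir -p python
pip install --platform manylinux2014_x86_64 --only-binary=:all: --target python pillow

# Package the directory and publish it as a Lambda layer
zip -r pillow-layer.zip python
aws lambda publish-layer-version --layer-name pillow \
    --zip-file fileb://pillow-layer.zip --compatible-runtimes python3.12
Note that the DejaVu font path used in the examples may not exist in the Lambda runtime; the code falls back to PIL’s default font in that case, or you can bundle a .ttf file with your deployment package and point ImageFont.truetype at it.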
Summary
Amazon Rekognition is a powerful tool that, when combined with other AWS services, offers incredible capabilities for visual data analysis. It’s definitely worth exploring how this service can improve various processes in your business or projects. In my next video, I will show you how to use the OCR function to read text from images – stay tuned for more!