You first need an AWS account; the application in this article stays well within the free tier. After logging in to AWS, go to CloudFormation and create a stack using template.json.
On the next screen, give the stack a name (I used “dl-lambda”). CloudFormation then builds the cloud infrastructure automatically.
Once the stack has been created successfully, go to the Outputs tab; the value shown there (in my case, d2yf3xcp7rorxr.cloudfront.net) is the URL for accessing the Lambda function.
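If you prefer scripting over the console, the same two steps — creating the stack from template.json and reading the URL from Outputs — can be sketched with boto3. This is a sketch under assumptions: the output key name ("DomainName") and the IAM capability flag are guesses you should match to your template.

```python
"""Create the CloudFormation stack and read its output URL (sketch)."""


def find_output(stack: dict, key: str) -> str:
    # Pull one value out of a describe_stacks-style stack record.
    for out in stack.get("Outputs", []):
        if out["OutputKey"] == key:
            return out["OutputValue"]
    raise KeyError(key)


def create_stack(stack_name: str = "dl-lambda") -> str:
    import boto3  # imported lazily so find_output is testable without the SDK

    cfn = boto3.client("cloudformation")
    with open("template.json") as f:
        cfn.create_stack(
            StackName=stack_name,
            TemplateBody=f.read(),
            # Assumption: the template creates IAM roles for the Lambda function.
            Capabilities=["CAPABILITY_IAM"],
        )
    # Block until the stack finishes, then return the CloudFront URL output.
    cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)
    stack = cfn.describe_stacks(StackName=stack_name)["Stacks"][0]
    return find_output(stack, "DomainName")  # "DomainName" is an assumed key
```

Running `create_stack()` does the same work as the console walkthrough above and returns the URL you would otherwise copy from the Outputs tab.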
Click that URL and you will land on the web page defined by frontend.html, stored in your S3 bucket dl-lambda-inferenceapp-*****. In this image, my friendly mailman carrying a package has triggered an event on my NVR. The NVR then 1) stores the video, 2) generates a photo, and 3) sends the photo to this Web API.
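For readers wondering what step 3 looks like on the NVR side, here is a minimal sketch of pushing a JPEG snapshot to the Web API using only the standard library. The endpoint path (`/detect`) and the JSON field name (`image`) are assumptions — match them to whatever your frontend and Lambda actually expect.

```python
"""Sketch: POST an NVR snapshot to the CloudFront URL as base64 JSON."""
import base64
import json
from urllib import request


def build_payload(jpeg_bytes: bytes) -> bytes:
    # A Lambda fronted by CloudFront/API Gateway typically receives binary
    # data base64-encoded, so encode the JPEG on the client side.
    encoded = base64.b64encode(jpeg_bytes).decode("ascii")
    return json.dumps({"image": encoded}).encode("utf-8")


def send_snapshot(
    jpeg_bytes: bytes,
    url: str = "https://d2yf3xcp7rorxr.cloudfront.net/detect",  # assumed path
) -> int:
    req = request.Request(
        url,
        data=build_payload(jpeg_bytes),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return resp.status
```

Many NVRs can instead call a webhook directly; in that case only the payload shape above matters.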
The photo with the bounding box around the mailman is then stored in the S3 bucket dl-lambda-image-outgoing.
Before starting this, I had written my object detection code in dl-lambda.ipynb in Jupyter. To keep this article simple, I use a pre-trained Single Shot Multibox Detector (SSD) based on ResNet from the Model Zoo. I then copy and paste the Python code from the notebook into inference.py, which implements the Lambda handler function. Note that I’ve clearly marked the sections of code to make porting easy.
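To illustrate the shape of inference.py, here is a hedged sketch of a handler that decodes the incoming photo, runs the SSD model, and writes the annotated result to the outgoing bucket. The exact model name (`ssd_512_resnet50_v1_voc`), the GluonCV calls, the output key, and the event body format are all assumptions, not the article's verbatim code; the event parsing is split out so it can be exercised without MXNet or AWS installed.

```python
"""Sketch of a Lambda handler in the spirit of inference.py."""
import base64
import json

OUTGOING_BUCKET = "dl-lambda-image-outgoing"


def decode_event(event: dict) -> bytes:
    # Recover raw JPEG bytes from a base64 JSON body (assumed format).
    body = json.loads(event["body"])
    return base64.b64decode(body["image"])


def handler(event, context):
    img_bytes = decode_event(event)
    # Lazy imports: the heavy dependencies live on the Lambda image/layer.
    import boto3
    from gluoncv import data, model_zoo, utils  # assumption: GluonCV model zoo
    import matplotlib
    matplotlib.use("Agg")  # headless backend for Lambda

    with open("/tmp/in.jpg", "wb") as f:
        f.write(img_bytes)

    # --- section ported from dl-lambda.ipynb: load model and run inference ---
    net = model_zoo.get_model("ssd_512_resnet50_v1_voc", pretrained=True)
    x, img = data.transforms.presets.ssd.load_test("/tmp/in.jpg", short=512)
    class_ids, scores, bboxes = net(x)

    # --- draw bounding boxes and save the annotated photo ---
    ax = utils.viz.plot_bbox(
        img, bboxes[0], scores[0], class_ids[0], class_names=net.classes
    )
    ax.figure.savefig("/tmp/out.jpg")

    # --- upload the result to the outgoing bucket ---
    boto3.client("s3").upload_file("/tmp/out.jpg", OUTGOING_BUCKET, "annotated.jpg")
    return {
        "statusCode": 200,
        "body": json.dumps({"bucket": OUTGOING_BUCKET, "key": "annotated.jpg"}),
    }
```

Loading the model at handler time, as sketched here, pays the download cost on every cold start; moving the `get_model` call to module scope is the usual optimization once the port is working.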