External Blogs

In addition to writing on my own website, I publish articles on DZone, Medium, InfoQ, and other professional websites:

DZone Blogs - 20+ articles with ~1 million overall views


Medium Blogs - 4 articles


InfoQ Blogs - just getting started


Lumigo Blogs - all of my latest serverless blogs can be found here



Rajesh Bhojwani December 23, 2019

YouTube Videos

In addition to writing blogs, I enjoy creating video blogs on YouTube.
So far, I have created the following videos:

PCF 2.4 Feature - Service Instance Sharing in PCF

This video explains the new feature in PCF 2.3 and 2.4 that allows you to share a PCF marketplace service instance across multiple orgs/spaces.

Running Batch Application in PCF

In this video, we will see how to build a Spring Batch or Spring Cloud Task application and run it on PCF, either manually or through the PCF Scheduler.

AWS Step Functions Lesson Learned

This video covers what Step Functions is, its benefits and limitations, and best practices around it.


This video covers: 1. the basics of serverless applications; 2. how AWS Lambda can be used to build serverless applications; 3. common problems developers face while designing Lambda functions; 4. best practices you can follow to mitigate those problems.


AWS Lambda Destinations vs Step Functions

This video walks through the Lambda Destinations feature introduced at AWS re:Invent 2019 and how it can replace some of the workloads currently served by Step Functions.


Introduction

AWS has recently launched its first-ever serverless event bus, AWS EventBridge. Some say it is an extension of CloudWatch Events; others say it provides the same features as the SNS service.

In this article, we will talk through what exactly the AWS EventBridge service is and where it can be used, and how it differs from the CloudWatch Events, SNS, and Kinesis services.

What is AWS EventBridge Service

EventBridge brings together your own (e.g., legacy) applications, SaaS (Software-as-a-Service) platforms, and AWS services. It can stream real-time data from event sources such as PagerDuty and Datadog and route it to targets (AWS services) such as SQS, Lambda, and others.
It supports 10 SaaS application partners and 90+ AWS services as event sources, and 17 AWS services as targets, a list that will grow over the years.

In simple terms, AWS EventBridge is an event bus that supports a publish/subscribe model. Applications publish events to it, and it fans them out to multiple target services. The obvious question is: what is new here? The event bus is an old concept, and AWS already provides that functionality through CloudWatch Events.
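For a concrete feel of the publish side, here is a minimal sketch of sending a custom event to the bus. The source name, detail payload, and helper function are illustrative, not from any specific application:

```python
import json

# A custom application event in the shape EventBridge's PutEvents API expects.
# Source, detail-type, and payload values are made up for illustration.
order_event = {
    "Source": "com.mycompany.orders",
    "DetailType": "OrderPlaced",
    "Detail": json.dumps({"orderId": "12345", "amount": 99.50}),
    "EventBusName": "default",
}

def publish(event):
    """Publish the event to EventBridge (requires boto3 and AWS credentials)."""
    import boto3  # imported lazily so the sketch loads without AWS set up
    return boto3.client("events").put_events(Entries=[event])
```

Rules subscribed on the bus then match the event and fan it out to their targets.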

Genesis of EventBridge

EventBridge was introduced mainly to address the problem of integrating SaaS platforms with AWS services. In today's cloud world, SaaS platforms such as CRMs and identity providers have become key partners. They generate a large amount of data and need to pass it to AWS for business processing. Before EventBridge, there were two main ways to send this event data to AWS:

Polling

In this solution, we generally set up a cron job or CloudWatch scheduler to call the SaaS APIs, check whether any data has changed, and pull it. Polling frequency can range from minutes to hours depending on the use case and the load the SaaS platform can bear. A typical flow looks like this:



This solution looks simple but it brings two major issues:

  • Data freshness - the scheduler may call the API only every few minutes or every hour, so the data is never real-time. There will always be a gap, which may not work in some business scenarios.
  • Cost and performance - to alleviate the freshness issue, we can shorten the polling interval, but that increases cost as the number of calls grows, and more resources are consumed on the SaaS platform, which can cause throttling and slow performance.
So, the overall recommendation is to avoid polling if you can.

SaaS Webhooks

This is another technique, and it eliminates the data-freshness issue. Here, we expose an HTTP endpoint on the AWS-hosted application that the SaaS platform can call to send event data. The SaaS platform sets up webhooks and sends real-time data whenever records change. A typical flow looks like this:


In this flow, we still have to manage a public application endpoint and protect it against security/DDoS attacks, and we also have to handle authentication. In AWS, this is mostly done with API Gateway or a WAF/ALB. We would also need to write code to handle the events.

So, looking at these shortcomings, AWS came up with the EventBridge service, which enables SaaS platforms to create a native event source on the AWS side and establish a secure connection just by sharing an account ID and region with the platform. It not only solves real-time data processing but also takes care of event ingestion and delivery, security, authorization, and error handling for you.



Source: https://d1.awsstatic.com/product-marketing/EventBridge/product-page-diagram-EventBridge_How-it-works_V2@2x.1a889967415e66231d0bb0bbfee14337d3fa5aa8.png

Now, let us look at the options available in AWS itself for event routing and compare them with EventBridge. Here are the options:
  • CloudWatch Events
  • SNS
  • EventBridge
  • Kinesis
CloudWatch Events vs EventBridge

CloudWatch Events can support only AWS services as event sources. It uses only the default event bus. Default event bus accepts events from AWS services, PutEvents API calls, and other authorized accounts. You can manage permissions on the default event bus to authorize other accounts.

EventBridge provides the option to create custom event buses and SaaS event buses on top of the default bus. A custom event bus handles custom events raised via the PutEvents API; a SaaS event bus channels events triggered by SaaS platforms.

For default bus, EventBridge leverages the CloudWatch Events API, so CloudWatch Events users can access their existing default bus, rules, and events in the new EventBridge console, as well as in the CloudWatch Events console.

SNS vs EventBridge

SNS is a well-known eventing service. It shines when throughput is very high, potentially millions of transactions per second, whereas EventBridge supports only around 400 requests per second by default.

However, the number of targets supported by SNS is limited compared to EventBridge.
For example, if an event needs to trigger Step Functions, SNS cannot do it directly, as Step Functions is not available as a target; it has to invoke a Lambda function, which in turn triggers Step Functions. EventBridge, on the other hand, supports 17 targets as of now, although each rule can be configured with a maximum of 5 targets.

SNS scales practically infinitely, but filtering is limited to message attributes, not message content. SNS also does not guarantee the ordering of messages.
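This is a key difference: an EventBridge rule can match on fields inside the event body itself. A hedged sketch, with an illustrative source and field name:

```python
import json

# Content-based filtering: match only events whose detail.amount exceeds 100.
# SNS subscription filter policies can inspect message attributes only,
# not the message body.
pattern = {
    "source": ["com.mycompany.orders"],
    "detail-type": ["OrderPlaced"],
    "detail": {"amount": [{"numeric": [">", 100]}]},
}

def create_rule(rule_name, target_arn):
    """Create the rule and attach one target (requires boto3 and credentials)."""
    import boto3  # lazy import so the sketch loads without AWS configured
    events = boto3.client("events")
    events.put_rule(Name=rule_name, EventPattern=json.dumps(pattern))
    events.put_targets(Rule=rule_name,
                       Targets=[{"Id": "1", "Arn": target_arn}])
```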

Kinesis vs EventBridge

Kinesis can be used for event routing as well as event storage. It is an ideal solution for processing real-time data at scale. It can fan out to multiple consumers; however, there is a limit on the number of consumers that can connect to a single stream, and each consumer is responsible for filtering out the messages it is not interested in.

Kinesis also provides ordering guarantees. However, it does not have an entirely usage-based pricing model, and it does not automatically scale to demand.

EventBridge, on the other hand, cannot buffer events; it needs SQS or Kinesis integration for event storage.

Use Cases

Let's take a couple of use cases and see how they will be implemented using SNS and EventBridge.

1.  Suppose we want to build a system where, if an EC2 instance goes down, it reboots the instance and also triggers a Lambda function to store the incident in a DynamoDB table.

If we build this using SNS as the event-routing service, we also need SQS, since EC2 cannot subscribe to SNS directly. Here is the design for this solution:


If we implement the same use case using EventBridge, the design will be like this:


We can see that the design is much simpler: we implement the same flow with fewer services.
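The EventBridge side of this design boils down to a single rule pattern matching EC2 state changes; one rule with this pattern can fan out to both targets (a reboot action and the Lambda that records the incident). A sketch of the pattern, assuming "stopped" is the state we react to:

```python
import json

# Standard shape of an EC2 instance state-change event pattern on the
# default event bus.
ec2_down_pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Instance State-change Notification"],
    "detail": {"state": ["stopped"]},
}

# Serialized form expected by put-rule's event-pattern argument
# (CLI `aws events put-rule --event-pattern ...` or boto3).
event_pattern_json = json.dumps(ec2_down_pattern)
```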

2.  Let's take another use case: an employee resigns from the organization, and their record is updated in the CRM tool. This needs to trigger different workflows for all the approvals in the exit checklist.

If we implement this use case using SNS, the design will look something like this:


If we use EventBridge, the design will be much simpler. It doesn't need polling, a CloudWatch scheduler, or extra Lambda functions. The design will look somewhat like this:


Things to Remember

Now that we understand what the EventBridge service is and how it can simplify our designs in AWS, let's keep a few things in mind while using it:
  • Pricing for EventBridge is the same as for CloudWatch Events: one dollar per million events published to the event bus.
  • CloudFormation support for custom/SaaS event buses has not been released yet; the default bus is supported.
  • EventBridge ensures delivery to targets; on failure, it retries for up to 24 hours before marking the event as failed. In the case of Lambda, "successful delivery" from EventBridge's perspective means that the asynchronous invoke call succeeded; beyond that point, you rely on the standard Lambda retry policy for failure handling within the async invocation flow.
  • EventBridge makes cross-account connections between AWS services seamless: it offers "event bus in another account" as a target option.
  • EventBridge needs SQS to bring resiliency (buffering), whereas Kinesis has that feature built in.
  • A word of warning on the event bus: it is very hard for consumers to use without some kind of event schema registry. A schema registry makes it possible to search for an event type and to version schemas so that consumers and publishers understand what they are working with.

    Summary

In this article, we saw how EventBridge helps solve SaaS platform integration with AWS services, and how integration between existing AWS services becomes much simpler and smoother. However, from a security perspective, there is not much documentation for SaaS platform integration. For enterprise-level companies this matters a lot, as we are handing our AWS account ID and region to the vendor. I hope the documentation will mature eventually.
Rajesh Bhojwani December 01, 2019
AWS brought the Serverless Application Model (SAM) to ease building serverless applications on AWS. At a high level, it is an extension of AWS CloudFormation. AWS SAM consists of two major components: the SAM template specification and the SAM CLI.

    In this article, we are going to talk about 5 tips to get the most out of AWS SAM templates:

    1. Reuse Common Pattern using Nested Applications

As serverless architectures grow, common patterns get reimplemented across teams and projects. This hurts development velocity and leads to wasted effort. To avoid that, AWS introduced nested applications in AWS SAM and the AWS Serverless Application Repository (SAR) to make these patterns shareable publicly and privately.

Nested applications build on an AWS CloudFormation concept called nested stacks: serverless applications are deployed as stacks that contain one or more other serverless application stacks.

Let's take an example. If we want to build an API with AWS Lambda and Amazon API Gateway, we can use AWS SAM to create the Lambda functions, configure API Gateway, and deploy and manage both. Now, suppose we want to secure this API. API Gateway has several methods for doing this; let's say we want to implement HTTP Basic Auth. Instead of building this functionality from scratch, we can search for it in SAR.


We can review the AWS SAM template, license, and permissions provided in SAR. If it meets our requirements, we just copy its SAM template and place it in our application's SAM template under the "Resources" section.
These applications have the type AWS::Serverless::Application.
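A sketch of what such a nested-application entry might look like; the ApplicationId ARN and version below are placeholders, to be copied from the application's page in SAR:

```yaml
Resources:
  # Nested SAR application providing the HTTP Basic Auth authorizer.
  BasicAuthApp:
    Type: AWS::Serverless::Application
    Properties:
      Location:
        # Placeholder ARN: use the real one from SAR.
        ApplicationId: arn:aws:serverlessrepo:us-east-1:123456789012:applications/basic-auth-authorizer
        SemanticVersion: 1.0.0
```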

    Now, we can check the output of this application and refer that in the main API Authorizer like below:


With this simple piece of code, we can reuse any existing code residing in SAR.

2.  Reduce Lines of Code Using Globals

Serverless architecture is all about breaking a large application into smaller functions. So, if we end up with 10-20 Lambda functions in an application and configure them in the SAM template, we will notice a great many function definitions under the "Resources" section.

In an AWS SAM template, Globals is a section for defining properties common to all serverless functions and APIs.

    All the AWS::Serverless::Function and AWS::Serverless::Api resources will inherit the properties defined here. Here is a typical example:
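A minimal sketch of a Globals section; the runtime, timeout, memory, and handler values are illustrative:

```yaml
Globals:
  Function:
    Runtime: python3.8
    Timeout: 30
    MemorySize: 256
  Api:
    Cors: "'*'"

Resources:
  # Both functions inherit Runtime/Timeout/MemorySize from Globals above,
  # so only their handlers need to be declared.
  CreateOrderFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: create_order.handler
      CodeUri: src/
  ListOrdersFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: list_orders.handler
      CodeUri: src/
```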




    3. Enable a feature using SAM Parameter and Mappings

Let's suppose we want to enable an application feature based on a flag set in an environment variable. This can get complicated if we want to do it per deployment environment (testing/prod).
For example, say we have a "Download PDF" feature that we want to enable only in the test environment for now, and not publish to prod until the business approves it. The flow is: an environment value (testing/prod) is passed as a parameter; a collection such as a map holds the status value (on/off) for each environment; and logic retrieves the status dynamically and sets it in an environment variable so the application behaves accordingly.
With the SAM Parameters and Mappings sections, this can be done very easily. See the SAM template below for how it is implemented.
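A sketch of such a template; the parameter, map, and variable names follow this section's description, while the handler and runtime are illustrative:

```yaml
Parameters:
  DocumentEnvironment:
    Type: String
    AllowedValues: [testing, staging, prod]

Mappings:
  DownloadPDFFeature:
    testing:
      status: "on"
    staging:
      status: "off"
    prod:
      status: "off"

Resources:
  DownloadFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler      # illustrative handler/runtime
      Runtime: python3.8
      CodeUri: src/
      Environment:
        Variables:
          # Resolves to "on" only when DocumentEnvironment is "testing".
          Download_Feature1: !FindInMap [DownloadPDFFeature, !Ref DocumentEnvironment, status]
```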


Here, !FindInMap looks up the DocumentEnvironment parameter value (testing/staging/prod) in the DownloadPDFFeature map, retrieves the "status" field, and assigns it to the "Download_Feature1" variable.

    4. Safe Deployment using AutoPublishAlias and DeploymentPreference

AWS Lambda has a versioning and alias feature that helps with incremental deployment of functions. When a function is enhanced, it can be published as a bumped-up version, and an alias can point to the version we want to expose to consumers. Below is an example:

In SAM, we can bump up this version by using just one property: AutoPublishAlias.
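A hedged sketch combining AutoPublishAlias with a DeploymentPreference; the function properties are illustrative, and the alias name comes from an assumed ENVIRONMENT parameter:

```yaml
Parameters:
  ENVIRONMENT:
    Type: String
    Default: prod

Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.8
      CodeUri: src/
      # Publishes a new version on each code change and repoints the alias.
      AutoPublishAlias: !Ref ENVIRONMENT
      DeploymentPreference:
        # Shift 10% of traffic to the new version every 10 minutes.
        Type: Linear10PercentEvery10Minutes
```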
     

AWS SAM will do the following three tasks behind the scenes:

    • Detect when new code is being deployed based on changes to the Lambda function's Amazon S3 URI.
    • Create and publish an updated version of that function with the latest code.
    • Create an alias with the name provided through the ENVIRONMENT parameter and point it to the updated version of the Lambda function.

Apart from this, the DeploymentPreference property enables the canary and linear deployment strategies, which ensure that if there is any problem with the newer version of the function, it can be rolled back. For example, in the code above, only 10% of traffic is redirected to the new version for 10 minutes, and the share then keeps increasing by 10% every 10 minutes, so we can monitor the new version and roll back if any issues are found.

    5. Enable Security using Policy Templates

For any Lambda function, the most important thing is to define what it is allowed to execute and who can invoke it; that is where we define the security around it. SAM has a concept called policy templates: just two lines of code in the template correspond to a complete IAM policy being keyed in.
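For instance, a function needing full CRUD access to one DynamoDB table can declare it in two lines; the table and function names here are illustrative:

```yaml
Resources:
  OrdersTable:
    Type: AWS::Serverless::SimpleTable

  OrderFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.8
      CodeUri: src/
      Policies:
        # Expands into a full IAM policy granting CRUD on OrdersTable only.
        - DynamoDBCrudPolicy:
            TableName: !Ref OrdersTable
```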


In the above example, we used the DynamoDBCrudPolicy template, which corresponds to the IAM policy below.

Using this feature, we can reduce the complexity of security configuration and standardize the security of Lambda functions.

    Summary

AWS SAM was introduced to reduce the complexity of building serverless applications, and that is evident from the basic tips above. There are several other features available in SAM, but we will park them for the next article. That's all for now.

Rajesh Bhojwani November 03, 2019
Most modern applications nowadays are developed using either serverless or container technology. However, it is often difficult to choose the one best suited to a particular requirement.

In this article, we will try to understand how these two differ from each other and in which scenarios to use one or the other.

    Let us first start with understanding the basics of Serverless and Container technology.

    What is Serverless?

    Serverless is a development approach that replaces long-running virtual machines with computing power that comes into existence on demand and disappears immediately after use.
    Despite the name, there certainly are servers involved in running your application. It’s just that your cloud service provider, whether it’s AWS, Azure, or Google Cloud Platform, manages these servers, and they’re not always running.
It tries to resolve the following issues:
• Unnecessary charges for keeping the server up even when no resources are being consumed.
• Overall responsibility for maintenance and uptime of the server.
• Responsibility for applying the appropriate security updates to the server.
• As usage scales, managing the scale-up of the server, and, as a result, scaling it down when usage drops.

What are Containers?

    A container is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, and settings.
Containers solve the problem of running software reliably when it is moved from one computing environment to another by essentially isolating it from its environment. For instance, containers allow you to move software from development to staging and from staging to production and have it run reliably regardless of differences between those environments.

    Comparison between Serverless vs Containers

To start with, it's worth saying that both serverless and containers point to an architecture that is designed for future change and for leveraging the latest innovations in cloud computing. While many people talk about serverless computing vs. Docker containers, the two have very little in common: they are not the same thing and serve different purposes. First, let's go over some common points:
1. Less overhead
2. High performance
3. Less interaction required at the infrastructure level for provisioning
Although serverless is a more recent technology than containers, both have their disadvantages and, of course, benefits that make them useful and relevant. So let's review the two.

Longevity
• Serverless: Lambda functions are short-lived; once execution completes, they spin down. Lambda has a timeout limit of 15 minutes, so long-running workloads cannot run on it. Step Functions can break long-running application logic into smaller steps (functions), but that might not apply to every kind of long-running application.
• Containers: ECS containers are long-running; they can run as long as you want.

Throughput
• Serverless: If an application has high throughput, say one million requests per day, Lambda can cost more than a container solution. It would need more resources such as memory, execution time would be high, and since Lambda charges by memory and execution time, cost grows by a multiplying factor. Also, one function can have a maximum of 3 GB of memory, so it might not handle the throughput alone and would need concurrent executions, which may introduce latency due to cold starts. For lower throughput, Lambda is a good choice in terms of cost, performance, and time to deploy.
• Containers: ECS uses EC2 instances to host applications. EC2 can handle high throughput more effectively than serverless functions, since different instance types can be chosen to match the throughput requirement. Its cost will be comparatively lower, and latency will also be better if a single EC2 instance can handle the load. EC2 also works very well for lower throughput, though the other factors in this comparison should be weighed against Lambda.

Scaling
• Serverless: Lambda has auto-scaling built in. It scales functions through concurrent executions, subject to a maximum limit at the account level (1,000 concurrent executions by default). Horizontal scaling is very fast, with only minimal latency due to cold starts.
• Containers: Containers have no inherent constraints on scaling. However, we need to forecast scaling requirements, and scaling has to be designed and configured manually or automated through scripts. Scaling containers is slower than scaling Lambda, and the more worker nodes we have, the more maintenance problems arise, such as latency and throttling issues.

Time to Deploy
• Serverless: Lambda functions are smaller and take significantly less time to deploy than containers: milliseconds, compared to seconds in the container case.
• Containers: Containers take significant time initially to configure and set up (system settings, libraries), but once configured, they deploy in seconds.

Cost
• Serverless: In a serverless architecture, infrastructure is not used unless the application is invoked, so you are charged only for the server capacity the application actually uses. This is cost-effective when the application is used rarely (once or twice a day), when throughput scales up and down frequently, or when the application needs few resources (since Lambda cost depends on memory and execution time). Compared with a container running 24 hours a day, Lambda wins in those scenarios.
• Containers: Containers run constantly, so cloud providers charge for the server space even when no one is using the application. If throughput is high, containers are more cost-effective than Lambda. Also, compared with an EKS cluster, an ECS cluster is free.

Security
• Serverless: For Lambda, system security is taken care of by AWS itself; we only need to handle application-level security using IAM roles and policies. However, if Lambda has to run in a VPC, VPC-level security also applies.
• Containers: With containers, we are also responsible for applying the appropriate security updates to the server, including patching the OS and upgrading software and libraries. ECS supports IAM roles for tasks, which is great for granting containers access to AWS resources (for example, S3, DynamoDB, SQS, or SES at runtime); EKS does not provide IAM-level security at the pod level.

Vendor Lock-in
• Serverless: Serverless functions bring vendor lock-in; moving a Lambda function to an Azure Function would require significant changes at the code and configuration level.
• Containers: Containers are designed to run on any cloud platform that supports container technologies, so they bring the benefit of building once and running anywhere. However, the services used for security (IAM, KMS, security groups, and others) are tightly coupled with AWS, so moving the workload to another platform still requires some rework.

Infrastructure Control
• Serverless: If a team doesn't have infrastructure skills, Lambda is a good option: the team can concentrate on business-logic development and let AWS handle the infrastructure.
• Containers: With containers, we get full control of the server, OS, and network components, which we can define and configure within the limits set by the cloud provider. If an application or system needs fine-grained control of its infrastructure, this solution works better.

Maintenance
• Serverless: Lambda doesn't need any maintenance work, as everything at the server level is taken care of by AWS.
• Containers: Containers need maintenance such as patching and upgrades, which requires skilled resources as well. Keep this in mind while choosing this architecture for deployment.

State Persistence
• Serverless: Lambda is designed to be stateless and short-lived, so it does not maintain any state. For this reason, we cannot use in-process caching, which may cause latency problems.
• Containers: Containers can leverage the benefits of caching.

Latency & Startup Time
• Serverless: For Lambda, cold-start and warm-start times are key factors to consider, as they can add latency as well as cost to function execution.
• Containers: Containers are always running, so they have no cold/warm start time, and latency can be reduced further using caching. Compared with EKS, ECS has no proxy concept at the node level; load balancing is simply between the ALB and the EC2 instances, so there is no extra hop of latency.

VPC & ENI
• Serverless: If Lambda is deployed in a VPC, its concurrent executions are limited by the ENI capacity of the subnets.
• Containers: The number of ENIs per EC2 instance is limited to between 2 and 15 depending on the instance type. In ECS, each task is assigned a single ENI, so we can have a maximum of 15 tasks per EC2 instance.

Monolithic Applications
• Serverless: Lambda is not a fit for monolithic applications; it cannot run complex applications of that type.
• Containers: ECS can be used to run a monolithic application.

Testing
• Serverless: Testing is difficult in serverless-based web applications, as it is often hard for developers to replicate the backend environment locally.
• Containers: Since containers run on the same platform wherever they are deployed, it is relatively simple to test a container-based application before deploying it to production.

Monitoring
• Serverless: Lambda monitoring can be done through CloudWatch and X-Ray. We need to rely on the cloud vendor for monitoring capabilities, but infrastructure-level monitoring is not required.
• Containers: Container monitoring requires capturing availability, system-error, performance, and capacity metrics to configure HA for container applications.


    When to use Serverless

    Serverless Computing is a perfect fit for the following use-cases: 
1. If the application team doesn't want to spend much time thinking about where the code runs and how.
2. If the team lacks skilled infrastructure resources and is worried about the cost of maintaining servers and the resources the application consumes, serverless is a great fit.
3. If the application's traffic pattern changes frequently, serverless handles it automatically, even scaling down to zero when there is no traffic at all.
4. Serverless websites and applications can be written and deployed without the work of setting up infrastructure, so it is possible to launch a fully functional app or website in days.
5. If a team needs a small batch job that can finish within Lambda's limits, it is a good fit.

    When to use Containers

Containers are best suited for application deployment in the following use cases:
• If the team wants to use an operating system of its own choice and have full control over the installed programming language and runtime version.
• If the team wants to use software with specific version requirements, containers are great to start with.
• If the team is okay bearing the cost of large, traditional servers for workloads such as web APIs, machine-learning computations, and long-running processes, they may also want to try containers (they will generally cost less than dedicated servers anyway).
• If the team wants to develop new container-native applications.
• If the team needs to refactor a very large, complicated monolithic application, containers are the better choice, as they suit complex applications.

    Summary

In a nutshell, we learned that both technologies are good and can complement each other rather than compete. They solve different problems and should be chosen wisely. If you need help designing and architecting your application, reach out to me.

Rajesh Bhojwani August 30, 2019

    Introduction

    AWS Lambda has gained good traction for building applications on AWS. But, is it really the best fit for all use cases? 
    Since its introduction in 2014, Lambda has seen enthusiastic adoption - by startups and enterprises alike. There is no doubt that it marks a significant evolution in cloud computing, leveraging the possibilities of the cloud to offer distinct advantages over a more traditional model such as EC2.  
    In this article, we are going to conduct a fair comparison between EC2 and Lambda covering various aspects of cloud-native features. Let's begin with a quick reminder of what these two services offer, and how they differ.

    What is AWS EC2?

    Amazon Elastic Compute Cloud (EC2) service was introduced to ease the provision of computing resources for developers. Its primary job is to provide self-service, on-demand, and resilient infrastructure.
    - It reduces the time required to spin up a new server to minutes from the days or weeks of work it might have taken in the on-premise world.
    - It can scale up and down instantly based on the computing requirement.
    - It provides an interface to configure capacity with minimal effort.
    - It allows complete admin access to the servers, making infrastructure management straightforward.
- It also enables monitoring and security, and supports multiple instance types (a wide variety of operating systems, memory, and CPU options).

    What is AWS Lambda?

    AWS Lambda was launched to eliminate infrastructure management of computing. It enables developers to concentrate on writing the function code without having to worry about provisioning infrastructure. We don't need to do any forecasting of the resources (CPU, Memory, Storage, etc.). It can scale resources up and down automatically. It is the epitome of Serverless Architecture.
    Before we start comparing the different features of both of these services, let's understand a few key things about Lambda:
    - Lambda was designed to be an event-based service which gets triggered by events like a new file being added to an S3 bucket, a new record added in a DynamoDB table, and so on. However, it can also be invoked through API Gateway to expose the function code as a REST API.
- It was introduced to reduce the idle time of computing resources when the application is not being used.
    - Lambda logs can be monitored the same way as EC2, through CloudWatch.
    - Lambda local development is generally done using AWS SAM or Serverless Framework. They use CloudFormation for deployment.
    - Unlike EC2, it is charged based on the execution time and memory used.

    Now, let’s take a deeper look at how Lambda and EC2 differ from each other in terms of performance, cost, security, and other aspects:

    Setup & Management

    For setting up a simple application on EC2, first, we need to forecast how much capacity the application would need. Then, we have to configure it to spin up the Virtual Machine. 
    After that, one needs to set up a bastion server to securely SSH into the VM and install the required software, web server, and so on. You'll also need to manage scaling by setting up an Auto Scaling group. And that's not all: an ALB also needs to be set up to load-balance traffic when the application runs on multiple EC2 instances.
    In the case of Lambda, you won't need to worry about provisioning VMs, installing software, scaling, or load balancing. It is all handled by the Lambda service. We just need to package the code and deploy it to the Lambda service. Scaling is automatic; we only configure the maximum number of concurrent executions we want to allow for a function. Load balancing is handled by Lambda itself.
    So here, we can see Lambda is a clear winner.
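    As a rough sketch of how small the deployment surface is, the following uses boto3's `create_function` call. The function name, IAM role ARN, and handler path are hypothetical placeholders, and real use requires configured AWS credentials; a client can be injected for testing:

```python
def deploy_function(zip_bytes, name, role_arn, client=None):
    """Deploy a Lambda function from an in-memory zip archive.
    `name` and `role_arn` are placeholders for your own values."""
    if client is None:            # allow injecting a stub client for tests
        import boto3              # requires AWS credentials when used for real
        client = boto3.client("lambda")
    return client.create_function(
        FunctionName=name,
        Runtime="python3.8",
        Role=role_arn,            # IAM role the function assumes at runtime
        Handler="app.handler",    # module.function inside the zip
        Code={"ZipFile": zip_bytes},
    )
```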

    On-Demand vs. Always Available

    For EC2, we essentially pay for the amount of time EC2 instances are up and running. For Lambda, we pay for the amount of time functions are running. The reason is that Lambda is spun up and torn down automatically based on event sources and triggers, which is something we don't get out of the box with EC2. So, while an EC2 instance is always available,
    Lambda is available per request invocation. The advantage goes to Lambda, since we are no longer paying for idle time between invocations, which can save a lot of money in the long run.

    Performance

    There are various aspects to cover when we take performance into consideration. Let's discuss them one by one. 

    1. Concurrency and Scaling

    With EC2, we have full control over concurrency and scaling. We can use EC2 Auto Scaling groups to define policies for scaling up and down. These policies involve defining conditions (average threshold limits) and actions (the number of instances to be added or removed).
    However, it requires a lot of effort to identify the threshold limits and accurately forecast the number of instances required. It can only be done by carefully monitoring metrics (CloudWatch, New Relic, etc.).
    However, with Lambda, concurrency and scaling are built in. We simply define the maximum number of concurrent executions a function is restricted to. There are a few limitations, though. A function can have at most 3 GB of memory, so a program that needs to scale vertically cannot go beyond that. For horizontal scaling, the default limit is 1,000 concurrent executions per account. If your Lambda is deployed in a VPC, it is restricted even further, based on the number of IP addresses available in the allocated subnets.
    So, EC2 gives you more flexibility but requires manual configuration and forecasting. Lambda is designed to do all of that by itself but has a few limitations.
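    Restricting a function's concurrency is a single call against Lambda's `PutFunctionConcurrency` API; a minimal sketch (the function name is a placeholder, and a client can be injected for testing):

```python
def cap_concurrency(function_name, limit, client=None):
    """Restrict a function to at most `limit` concurrent executions
    via Lambda's PutFunctionConcurrency API."""
    if client is None:           # allow injecting a stub client for tests
        import boto3             # requires AWS credentials when used for real
        client = boto3.client("lambda")
    return client.put_function_concurrency(
        FunctionName=function_name,
        ReservedConcurrentExecutions=limit,
    )
```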

    2. Dependencies

    It is nearly impossible to run an application without external libraries. With EC2, there is no constraint on the number of dependencies an application can have. However, the more dependencies an application has, the longer it will take to start, and the more burden it will put on the CPU.
    However, with Lambda, there are constraints in terms of the maximum size of a package - 50 MB (zipped, for direct upload) and 250 MB (unzipped, including layers). Sometimes, these sizes are not sufficient, especially for ML programs where we need a lot of third-party libraries. 
    AWS recommends using the /tmp directory to download and install dependencies at function runtime. However, downloading all the dependencies from scratch can take significant time when a new container is created. So, this option works well when your Lambda container stays warm most of the time; otherwise, it may cause a long cold start on each invocation. Also, the /tmp folder can hold a maximum of only 512 MB, so it is again restricted to limited use.
    EC2 is a clear winner here.
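    The /tmp workaround described above can be sketched as a small helper that installs packages once per container and reuses them on warm invocations. The directory name and sentinel file are my own choices, not an AWS convention:

```python
import os
import subprocess
import sys

DEPS_DIR = "/tmp/deps"  # Lambda's only writable path; the 512 MB cap applies

def ensure_deps(packages, deps_dir=DEPS_DIR):
    """Install `packages` into deps_dir the first time a container runs,
    then make them importable. Warm invocations reuse the cached install."""
    sentinel = os.path.join(deps_dir, ".installed")
    if not os.path.exists(sentinel):
        os.makedirs(deps_dir, exist_ok=True)
        subprocess.check_call(
            [sys.executable, "-m", "pip", "install", "--target", deps_dir, *packages]
        )
        open(sentinel, "w").close()   # mark this container as provisioned
    if deps_dir not in sys.path:
        sys.path.insert(0, deps_dir)  # make the cached packages importable
```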

    3. Latency

    Comparing the latency between EC2 and Lambda is not straightforward. It depends on the use cases. So, let’s try to get a clearer picture by going through a few examples.
    Let's take the first example where the application is used only a few times a day in the interval of 2-3 hours. 
    Now, if we use EC2, it will be running the whole day; latency for the first request will be high, but for all subsequent requests it will be comparatively low. The reason is that when the EC2 instance is provisioned, all the scripts run to set up the OS, software, EBS, and other things.
    If we use Lambda, the application doesn't need to run all day. A Lambda container can be spun up per request. However, that involves cold start time, which is insignificant compared to EC2 instance setup time. So, for this use case, Lambda will have more latency per request than EC2's subsequent requests, but significantly less than EC2's first request. To reduce this time, some teams create a Lambda function that periodically calls the application's Lambda functions to keep them warm. However, this increases the bill, so you need to strike a balance.
    Take another example, where the application needs to scale up and down frequently. In this case, EC2 will need to scale up to handle the increased volume of requests, which will impact request latency. With Lambda, scaling is comparatively fast and latency will be lower.
    So, latency will ultimately depend on the use cases and other local factors (cold start time for Lambda and resources setup time for EC2).
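    The keep-warm pattern mentioned above can be sketched as a scheduled "warmer" function that asynchronously pings the target functions. The payload shape is an assumption your handlers would need to recognize and short-circuit on:

```python
def warm(function_names, client=None):
    """Fire-and-forget invocations that keep the listed functions' containers
    warm. Typically wired to a CloudWatch Events schedule."""
    if client is None:           # allow injecting a stub client for tests
        import boto3             # requires AWS credentials when used for real
        client = boto3.client("lambda")
    for name in function_names:
        client.invoke(
            FunctionName=name,
            InvocationType="Event",        # async: don't wait for the result
            Payload=b'{"warmup": true}',   # handlers should return early on this
        )
```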

    4. Timeout

    Lambda has more timeout constraints than EC2. If we have long-running workloads, Lambda may not be the right choice, as it has a hard timeout limit of 15 minutes. EC2 has no such restriction.
    AWS has introduced Step Functions to overcome the Lambda timeout constraint. Also, if a Lambda function is exposed as a REST API through API Gateway, there is a 29-second timeout limit at the gateway as well.
    Timeouts don't occur only due to these limits, but also due to integrations with downstream systems. And that can happen to both EC2-hosted applications and Lambda functions.
    One more thing to note in the EC2 case is that if we don't configure security groups appropriately, it may also cause timeout errors.
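    For reference, a function's timeout is just a configuration field, bounded by the 15-minute (900-second) hard limit; a sketch using boto3's `update_function_configuration` (the function name is a placeholder, and a client can be injected for testing):

```python
def set_timeout(function_name, seconds, client=None):
    """Set a function's timeout. Lambda caps this at 900 s (15 minutes)."""
    if seconds > 900:
        raise ValueError("Lambda's hard timeout limit is 900 seconds")
    if client is None:           # allow injecting a stub client for tests
        import boto3             # requires AWS credentials when used for real
        client = boto3.client("lambda")
    return client.update_function_configuration(
        FunctionName=function_name,
        Timeout=seconds,
    )
```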

    Cost

    To understand how EC2 and Lambda compare on cost, let's run through a couple of examples:
    1. Let's assume an application has 5,000 hits per day, with each execution taking 100 ms at 512 MB of memory. The cost for the Lambda function will be about $0.16 per month.

    Now, for the same requirement, a t2.nano EC2 instance should suffice. If we look at the cost for this instance, it will be about $4.25 per month.

    We can see, the Lambda cost ($0.16) is just ~4% of the EC2 price ($4.25).
    2. Let's take a second scenario, where an application gets a lot of hits, say 5 million per month, and each execution takes 200 ms with 1 GB of memory. With Lambda, it will cost us $17.67.


    However, if we use EC2, a t3.micro should be able to handle this load, and it will cost us only $7.62.
    So, in this case, EC2 is cheaper than Lambda due to the high memory, request volume, and execution time requirements.
    3. Now, take an example where multiple EC2 instances would be required to handle the requests. In that case, EC2 would be costlier for two reasons. First, we need an ALB (Application Load Balancer) to handle load balancing between those instances, which adds to the cost. Second, each EC2 instance loses some of its allocated memory to overhead, and traffic is not always evenly distributed, so we would need more EC2 instances than anticipated. Lambda handles load balancing internally, so no extra cost is added while scaling.
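    The Lambda figures above follow directly from Lambda's published on-demand pricing at the time of writing ($0.20 per million requests plus $0.0000166667 per GB-second, with the free tier ignored); a small calculator reproduces both scenarios:

```python
# Lambda on-demand pricing at the time of writing (free tier ignored).
PRICE_PER_REQUEST = 0.20 / 1_000_000   # $0.20 per 1M requests
PRICE_PER_GB_SECOND = 0.0000166667     # $ per GB-second of execution

def lambda_monthly_cost(requests, duration_ms, memory_mb):
    """Monthly cost = request charge + compute charge (GB-seconds)."""
    gb_seconds = requests * (duration_ms / 1000) * (memory_mb / 1024)
    return requests * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# Scenario 1: 5,000 hits/day for 30 days, 100 ms at 512 MB -> ~$0.16
print(round(lambda_monthly_cost(5_000 * 30, 100, 512), 2))
# Scenario 2: 5M hits/month, 200 ms at 1 GB -> ~$17.67
print(round(lambda_monthly_cost(5_000_000, 200, 1024), 2))
```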

    Security

    When we talk about security for Lambda, most of the onus is on AWS's side, including OS patching, upgrades, and other infrastructure-level security concerns. Generally, malware sits idle on a server for a long time and then slowly starts spreading; that is not possible in Lambda, as it is stateless in nature.
    On the other hand, with EC2, we have full control to define system-level security. We need to configure security groups, network ACLs, and VPC subnet route tables to control traffic in and out of an instance. However, it's a tedious job to ensure the system is fully secure. Security groups can grow to meet business needs but can sometimes become overlapping and confusing.
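    As a taste of that configuration work, even a single inbound rule on a security group is an explicit API call; a sketch using boto3's `authorize_security_group_ingress` (the group id and CIDR are placeholders, and a client can be injected for testing):

```python
def allow_https_from(cidr, group_id, client=None):
    """Add one inbound rule (HTTPS from `cidr`) to a security group."""
    if client is None:           # allow injecting a stub client for tests
        import boto3             # requires AWS credentials when used for real
        client = boto3.client("ec2")
    return client.authorize_security_group_ingress(
        GroupId=group_id,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": cidr}],
        }],
    )
```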

    Monitoring

    Despite EC2's resiliency and elasticity, there are many operational concerns that require close tracking: capacity, predictability, and interdependence with other services. Let's talk about some of the important metrics to monitor for EC2.
    1. Availability - To avoid an outage in production, we need to know whether each EC2 instance running the application is healthy. EC2 has an "Instance State" that can be used to track this.
    2. Status Checks - AWS performs status checks on all running EC2 servers. System status checks monitor conditions like loss of network connectivity and hardware issues, which require AWS's involvement to fix. Instance status checks monitor conditions like exhausted memory or a corrupt file system, which require our involvement to fix. The best practice is to set a status check alarm to notify you when a status check fails.
    3. Resources Capacity - CPU and memory utilization are directly related to application responsiveness. If they get exhausted, we will not even have enough memory to SSH into the instance. The only option would be to reboot the instance, which can cause downtime and state loss. A good monitoring system will store metrics from the instance and show us resource usage increasing until it eventually hits a ceiling and the instance becomes unavailable.
    4. System Errors - System errors can be found in the system log file like /var/log/syslog. We can aggregate these logs to Amazon CloudWatch Logs by installing their agent, or we can use syslog to forward the logs to some other central location like Splunk or ELK.
    5. Human Errors - EC2 needs a lot of manual configuration and sometimes it may go wrong. So we need tracking of such activities, which can be done through CloudTrail audit logs.
    6. Performance Metrics - Through CloudWatch Logs, we can monitor CPU usage and disk usage. However, it doesn't provide any metrics for application performance monitoring. And that's where we would need to use APM tools.
    7. Cost Monitoring - EC2 instance count, EBS volume usage, and network usage are very important to monitor, as auto-scaling can pose a serious risk to the overall AWS bill. CloudWatch gives some information about an instance's network usage but doesn't show how many instances are in use overall. We would also need storage and network usage at the account level, and that is something missing from CloudWatch Logs.
    So, most of these metrics can be tracked using CloudWatch, CloudTrail, and X-Ray, but there are still a few gaps to be filled.
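    The status check alarm recommended above can be created with CloudWatch's `PutMetricAlarm` API; a sketch where the alarm name, thresholds, and SNS topic ARN are illustrative choices:

```python
def status_check_alarm(instance_id, sns_topic_arn, client=None):
    """Alarm when an EC2 status check fails for two consecutive minutes,
    notifying the given SNS topic."""
    if client is None:           # allow injecting a stub client for tests
        import boto3             # requires AWS credentials when used for real
        client = boto3.client("cloudwatch")
    return client.put_metric_alarm(
        AlarmName=f"status-check-{instance_id}",
        Namespace="AWS/EC2",
        MetricName="StatusCheckFailed",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        Statistic="Maximum",
        Period=60,                # evaluate every minute
        EvaluationPeriods=2,      # require two consecutive failures
        Threshold=1,
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        AlarmActions=[sns_topic_arn],
    )
```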
    Now, let's talk about Lambda monitoring. CloudWatch provides all the basic telemetry about the health of a Lambda function out of the box:
    - Invocation Count
    - Execution Duration
    - Error Count
    - Throttled Count

    In addition to these built-in metrics, we can also record custom metrics and publish them to CloudWatch Metrics. 
    However, there are a few limitations to these metrics. They don't cover concurrent execution metrics, which relate to one of Lambda's most important features. For cold start counts and downstream integration metrics, we have to rely on X-Ray. There are also no built-in metrics for memory usage, though that can be captured with custom metrics.
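    Recording memory usage as a custom metric, as suggested above, is a single `PutMetricData` call; a sketch where the namespace and metric name are my own hypothetical choices:

```python
def record_memory_used(function_name, used_mb, client=None):
    """Publish a custom memory-usage metric for a function."""
    if client is None:           # allow injecting a stub client for tests
        import boto3             # requires AWS credentials when used for real
        client = boto3.client("cloudwatch")
    client.put_metric_data(
        Namespace="Custom/Lambda",          # hypothetical namespace
        MetricData=[{
            "MetricName": "MemoryUsedMB",   # hypothetical metric name
            "Dimensions": [{"Name": "FunctionName", "Value": function_name}],
            "Value": used_mb,
            "Unit": "Megabytes",
        }],
    )
```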
    Now, we have a much better understanding of the major differences between EC2 and Lambda in various aspects. So, let's talk about which one to use for a given use case.

    Use Cases 

    There are certain use cases where there is no competition between the two. For example, If we need to set up a DB like Couchbase or MongoDB, we have to go for EC2 only. We can't do that in Lambda. Another example would be if we need a hosting environment for Backup & Disaster Recovery. Again, EC2 would be the only choice. 
    However, there are certain use cases where developers might be a little uncertain about which one to use. 
    1. High-Performance Computing Applications - Although Lambda claims it can perform real-time stream processing, it cannot handle processes that need high compute. Remember, Lambda has only 3 GB of memory, and heavy workloads may run long, leading to either timeout issues or a higher bill. EC2 has no such restriction and is an ideal fit for these kinds of requirements.
    2. Event-Based Applications - Lambda has been primarily designed for handling event-based functions and it does that best. So if a new record is added to DynamoDB or a new file added to an S3 bucket needs processing, Lambda is the best fit. It's very easy to set up and saves cost as well.
    3. Security Sensitive Applications - AWS claims that it takes good care of Lambda security. But remember one thing: Lambda functions run in a shared VPC that may be shared with other customers in a multi-tenancy setup. So, if an application has highly sensitive data and security is your primary concern, run the Lambda functions in a dedicated VPC or use EC2. And don't forget, running Lambda inside a VPC has its own challenges, like increased cold start time and limited concurrent executions.
    4. Less Accessed Applications or Scheduled Jobs - If the application is used very rarely, or should be invoked based on schedule, Lambda is the right fit for it. It will save money as there is no need to run the server all the time.
    5. DevOps and Local Testing - DevOps tooling for EC2 has been developed for years and has reached a good level of maturity, but Lambda is still on that journey. AWS SAM and the Serverless Framework are addressing those concerns. Local testing is another aspect to consider when using Lambda, as it has a few limitations regarding what can be done.

    Summary

    In this article, we have seen that if a team doesn't want to deal with infrastructure, Lambda is the right choice. However, it comes with limitations on the use cases it can run, and constant monitoring is a must to ensure a good balance between the ease it provides and the cost.
    EC2 has always been a standard choice for hosting applications and gives us full flexibility to configure our infrastructure. However, it is not best suited to all needs, and that's where Lambda comes into the picture.
    Keep using both services based on the considerations I have shared and do let me know your feedback.
    Rajesh Bhojwani August 17, 2019