An application running on Amazon EC2 needs to asynchronously invoke an AWS Lambda function to perform data processing. The services should be decoupled. Which service can be used to decouple the compute services?
A. AWS Config
B. Amazon MQ
C. AWS Step Functions
D. Amazon SNS
D. Amazon SNS
Explanation
You can use a Lambda function to process Amazon Simple Notification Service notifications. Amazon SNS supports Lambda functions as a target for messages sent to a topic. This solution decouples the Amazon EC2 application from Lambda and ensures the Lambda function is invoked.
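For illustration, a minimal boto3 sketch of this pattern is shown below. The topic and function ARNs are placeholders, and the Lambda function's resource policy must also allow SNS to invoke it:

import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:data-processing"  # placeholder ARN

# One-time setup: subscribe the Lambda function to the topic
sns.subscribe(
    TopicArn=TOPIC_ARN,
    Protocol="lambda",
    Endpoint="arn:aws:lambda:us-east-1:123456789012:function:process-data",
)

# The EC2 application publishes to the topic with no knowledge of the consumer
sns.publish(TopicArn=TOPIC_ARN, Message='{"job_id": "1234"}')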
CORRECT: "Amazon SNS" is the correct answer.
INCORRECT: "AWS Config" is incorrect. AWS Config is a service that is used for continuous compliance, not application decoupling.
INCORRECT: "Amazon MQ" is incorrect. Amazon MQ is similar to SQS but is used for existing applications that are being migrated into AWS. SQS should be used for new applications being created in the cloud.
INCORRECT: "AWS Step Functions" is incorrect. AWS Step Functions is a workflow service. It is not the best solution for this scenario.
References:
https://docs.aws.amazon.com/lambda/latest/dg/with-sns.html
https://aws.amazon.com/sns/features/
A web application is deployed in multiple regions behind an ELB Application Load Balancer. You need deterministic routing to the closest region and automatic failover. Traffic should traverse the AWS global network for consistent performance. How can this be achieved?
A. Place an EC2 Proxy in front of the ALB and configure automatic failover
B. Configure AWS Global Accelerator and configure the ALBs as targets
C. Create a Route 53 Alias record for each ALB and configure a latency-based routing policy.
D. Use a CloudFront distribution with multiple custom origins in each region and configure for high availability
B. Configure AWS Global Accelerator and configure the ALBs as targets
Explanation:
AWS Global Accelerator is a service that improves the availability and performance of applications with local or global users. You can configure the ALB as a target and Global Accelerator will automatically route users to the closest point of presence.
Failover is automatic and does not rely on any client side cache changes as the IP addresses for Global Accelerator are static anycast addresses. Global Accelerator also uses the AWS global network which ensures consistent performance.
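For illustration, a minimal boto3 sketch of this configuration (accelerator, TCP listener, and an endpoint group per Region) is below; all names and ARNs are placeholders:

import boto3

# Global Accelerator is a global service; its API is served from us-west-2
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="web-app", IpAddressType="IPV4", Enabled=True)

listener = ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# One endpoint group per Region, each targeting that Region's ALB
for region, alb_arn in [
    ("us-east-1", "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/web/abc"),
    ("eu-west-1", "arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/app/web/def"),
]:
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": alb_arn, "Weight": 128}],
    )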
CORRECT: "Configure AWS Global Accelerator and configure the ALBs as targets" is the correct answer.
INCORRECT: "Place an EC2 Proxy in front of the ALB and configure automatic failover" is incorrect. Placing an EC2 proxy in front of the ALB does not meet the requirements. This solution does not ensure deterministic routing the closest region and failover is happening within a region that does not protect against regional failure. Also, this introduces a potential bottleneck and lack of redundancy.
INCORRECT: "Create a Route 53 Alias record for each ALB and configure a latency-based routing policy" is incorrect. A Route 53 Alias record for each ALB with latency-based routing does provide routing based on latency and failover. However, the traffic will not traverse the AWS global network.
INCORRECT: "Use a CloudFront distribution with multiple custom origins in each region and configure for high availability" is incorrect. You can use CloudFront with multiple custom origins and configure for HA. However, the traffic will not traverse the AWS global network.
References:
https://aws.amazon.com/global-accelerator/
https://aws.amazon.com/global-accelerator/faqs/
https://docs.aws.amazon.com/global-accelerator/latest/dg/what-is-global-accelerator.html
A large media site has multiple applications running on Amazon ECS. A Solutions Architect needs to use content metadata to route traffic to specific services. What is the MOST efficient method to fulfil this requirement?
A. Use an AWS Classic Load Balancer with a host-based routing rule to route traffic to the correct service
B. Use the AWS CLI to update an Amazon Route 53 hosted zone to route traffic as services get updated
C. Use an AWS Application Load Balancer with a path-based routing rule to route traffic to the correct service
D. Use Amazon CloudFront to manage and route traffic to the correct service
C. Use an AWS Application Load Balancer with a path-based routing rule to route traffic to the correct service
Explanation:
An Application Load Balancer (ALB) supports content-based routing, including path-based and host-based routing rules. Requests can be routed to different target groups, and therefore to different ECS services, based on information in the request. This makes the ALB the most efficient way to route traffic to specific services.
CORRECT: "Use an AWS Application Load Balancer with a path-based routing rule to route traffic to the correct service" is the correct answer.
INCORRECT: "Use an AWS Classic Load Balancer with a host-based routing rule to route traffic to the correct service" is incorrect. The Classic Load Balancer does not support host-based or path-based routing.
INCORRECT: "Use the AWS CLI to update an Amazon Route 53 hosted zone to route traffic as services get updated" is incorrect. Manually updating DNS records is inefficient and Route 53 does not route based on content metadata.
INCORRECT: "Use Amazon CloudFront to manage and route traffic to the correct service" is incorrect. CloudFront is a content delivery network; an ALB is the appropriate service for routing requests to ECS services.
References:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html
When deciding whether to use SQS or Kinesis Data Streams to ingest data, which of the following should you take into account?
A. The frequency of data
B. The total amount of data
C. The number of consumers that need to receive the data
D. The order of data
C. The number of consumers that need to receive the data
Explanation:
SQS and Kinesis Data Streams are similar, but SQS is designed to temporarily hold a small message until a single consumer processes it, whereas Kinesis Data Streams is designed to provide durable storage and playback of large data streams to multiple consumers.
A company runs an internal browser-based application. The application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. The Auto Scaling group scales up to 20 instances during work hours, but scales down to 2 instances overnight. Staff are complaining that the application is very slow when the day begins, although it runs well by midmorning. How should the scaling be changed to address the staff complaints and keep costs to a minimum?
A. Implement a scheduled action that sets the desired capacity to 20 shortly before the office opens
B. Implement a scheduled action that sets the minimum and maximum capacity to 20 shortly before the office opens
C. Implement a target tracking action triggered at a lower CPU threshold, and decrease the cooldown period
D. Implement a step scaling action triggered at a lower CPU threshold, and decrease the cooldown period
C. Implement a target tracking action triggered at a lower CPU threshold, and decrease the cooldown period
Explanation:
Though this sounds like a good use case for scheduled actions, both answers using scheduled actions will have 20 instances running regardless of actual demand. A better option to be more cost effective is to use a target tracking action that triggers at a lower CPU threshold.
With this solution the scaling will occur before the CPU utilization gets to a point where performance is affected. This will result in resolving the performance issues whilst minimizing costs. Using a reduced cooldown period will also more quickly terminate unneeded instances, further reducing costs.
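A minimal boto3 sketch of such a policy is below, assuming a placeholder group name and an illustrative 40% CPU target:

import boto3

autoscaling = boto3.client("autoscaling")

# A lower CPU target scales out earlier during the morning ramp-up
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",  # placeholder name
    PolicyName="cpu40-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 40.0,
    },
    EstimatedInstanceWarmup=120,  # shorter warmup so new capacity counts sooner
)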
CORRECT: "Implement a target tracking action triggered at a lower CPU threshold, and decrease the cooldown period" is the correct answer.
INCORRECT: "Implement a scheduled action that sets the desired capacity to 20 shortly before the office opens" is incorrect as this is not the most cost-effective option. Note you can choose min, max, or desired for a scheduled action.
INCORRECT: "Implement a scheduled action that sets the minimum and maximum capacity to 20 shortly before the office opens" is incorrect as this is not the most cost-effective option. Note you can choose min, max, or desired for a scheduled action.
INCORRECT: "Implement a step scaling action triggered at a lower CPU threshold, and decrease the cooldown period" is incorrect as AWS recommend you use target tracking in place of step scaling for most use cases.
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html
A company runs a large batch processing job at the end of every quarter. The processing job runs for 5 days and uses 15 Amazon EC2 instances. The processing must run uninterrupted for 5 hours per day. The company is investigating ways to reduce the cost of the batch processing job. Which pricing model should the company choose?
A. Reserved Instances
B. Spot Instances
C. Dedicated Instances
D. On-Demand Instances
D. On-Demand Instances
Explanation:
Each EC2 instance runs for 5 hours a day for 5 days per quarter, or 20 days per year. This time duration is insufficient to warrant Reserved Instances as these require a commitment of a minimum of 1 year and the discounts would not outweigh the cost of the reservations sitting unused for a large percentage of the time. In this case, there are no options presented that can reduce the cost, and therefore On-Demand Instances should be used.
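A quick back-of-the-envelope calculation illustrates why a reservation would be wasteful here:

# instances x hours/day x days/quarter x quarters = 1,500 instance-hours per year
hours_used = 15 * 5 * 5 * 4
# a 1-year reservation bills all 131,400 instance-hours regardless of use
hours_reserved = 15 * 24 * 365
print(f"Utilization of reserved capacity: {hours_used / hours_reserved:.1%}")  # ~1.1%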
CORRECT: "On-Demand Instances" is the correct answer.
INCORRECT: "Reserved Instances" is incorrect. Reserved instances are good for continuously running workloads that run for a period of 1 or 3 years.
INCORRECT: "Spot Instances" is incorrect. Spot instances may be interrupted and this is not acceptable. Note that Spot Block is deprecated and unavailable to new customers.
INCORRECT: "Dedicated Instances" is incorrect. These do not provide any cost advantages and will be more expensive.
References:
https://aws.amazon.com/ec2/pricing/
An application is running on Amazon EC2 behind an Elastic Load Balancer (ELB). Content is being published using Amazon CloudFront and you need to restrict the ability for users to circumvent CloudFront and access the content directly through the ELB. How can you configure this solution?
A. Create an Origin Access Identity (OAI) and associate it with the distribution
B. Use signed URLs or signed cookies to limit access to the content
C. Use a Network ACL to restrict access to the ELB
D. Create a VPC Security Group for the ELB and use AWS Lambda to automatically update the CloudFront internal service IP addresses when they change
D. Create a VPC Security Group for the ELB and use AWS Lambda to automatically update the CloudFront internal service IP addresses when they change
Explanation:
The only way to get this working is by using a VPC Security Group for the ELB that is configured to allow only the internal service IP ranges associated with CloudFront. As these ranges are updated from time to time, you can use an AWS Lambda function to update the security group automatically. The function is subscribed to the SNS topic that AWS notifies whenever its published IP address ranges change.
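A simplified sketch of such a Lambda handler is shown below. The security group ID is a placeholder, and a production version would also revoke stale rules and account for security group rule limits:

import json
import urllib.request

import boto3

def handler(event, context):
    # Triggered by the SNS notification for AWS IP range changes
    with urllib.request.urlopen("https://ip-ranges.amazonaws.com/ip-ranges.json") as r:
        ranges = json.load(r)
    cloudfront = [p["ip_prefix"] for p in ranges["prefixes"]
                  if p["service"] == "CLOUDFRONT_ORIGIN_FACING"]

    ec2 = boto3.client("ec2")
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # placeholder ELB security group
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
            "IpRanges": [{"CidrIp": cidr} for cidr in cloudfront],
        }],
    )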
CORRECT: "Create a VPC Security Group for the ELB and use AWS Lambda to automatically update the CloudFront internal service IP addresses when they change" is the correct answer.
INCORRECT: "Create an Origin Access Identity (OAI) and associate it with the distribution" is incorrect. You can use an OAI to restrict access to content in Amazon S3 but not on EC2 or ELB.
INCORRECT: "Use signed URLs or signed cookies to limit access to the content" is incorrect. Signed cookies and URLs are used to limit access to files but this does not stop people from circumventing CloudFront and accessing the ELB directly.
INCORRECT: "Use a Network ACL to restrict access to the ELB" is incorrect. A Network ACL can be used to restrict access to an ELB but it is recommended to use security groups and this solution is incomplete as it does not account for the fact that the internal service IP ranges change over time.
A company runs an application in an Amazon VPC that requires access to an Amazon Elastic Container Service (Amazon ECS) cluster that hosts an application in another VPC. The company’s security team requires that all traffic must not traverse the internet. Which solution meets this requirement?
A. Create a Network Load Balancer in one VPC and an AWS PrivateLink endpoint for Amazon ECS in another VPC
B. Create a Network Load Balancer and AWS PrivateLink endpoint for Amazon ECS in the VPC that hosts the ECS cluster
C. Configure a gateway endpoint for Amazon ECS. Update the route table to include an entry pointing to the ECS cluster
D. Configure an Amazon Route 53 private hosted zone for each VPC. Use private records to resolve internal IP addresses in each VPC
A. Create a Network Load Balancer in one VPC and an AWS PrivateLink endpoint for Amazon ECS in another VPC
Explanation:
The correct solution is to use AWS PrivateLink in a service provider model. In this configuration a network load balancer will be implemented in the service provider VPC (the one with the ECS cluster in this example), and a PrivateLink endpoint will be created in the consumer VPC (the one with the company’s application).
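For illustration, a boto3 sketch of both sides of the configuration, using placeholder ARNs and IDs:

import boto3

ec2 = boto3.client("ec2")

# Service provider side: expose the NLB in the ECS cluster's VPC as an endpoint service
service = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/net/ecs-nlb/abc"
    ],
    AcceptanceRequired=False,
)

# Consumer side: create an interface endpoint in the application's VPC
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0aaa1111",  # consumer VPC
    ServiceName=service["ServiceConfiguration"]["ServiceName"],
    SubnetIds=["subnet-0bbb2222"],
    SecurityGroupIds=["sg-0ccc3333"],
)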
CORRECT: "Create a Network Load Balancer in one VPC and an AWS PrivateLink endpoint for Amazon ECS in another VPC" is the correct answer.
INCORRECT: "Create a Network Load Balancer and AWS PrivateLink endpoint for Amazon ECS in the VPC that hosts the ECS cluster" is incorrect. The endpoint should be in the consumer VPC, not the service provider VPC (see the diagram above).
INCORRECT: "Configure a gateway endpoint for Amazon ECS. Update the route table to include an entry pointing to the ECS cluster" is incorrect. You cannot use a gateway endpoint to connect to a private service. Gateway endpoints are only for S3 and DynamoDB.
INCORRECT: "Configure an Amazon Route 53 private hosted zone for each VPC. Use private records to resolve internal IP addresses in each VPC" is incorrect. This does not provide a mechanism for resolving each other’s addresses and there’s no method of internal communication using private IPs such as VPC peering.
An application launched on Amazon EC2 instances needs to publish personally identifiable information (PII) about customers using Amazon SNS. The application is launched in private subnets within an Amazon VPC. Which is the MOST secure way to allow the application to access service endpoints in the same region?
A. Use an Internet Gateway
B. Use AWS PrivateLink
C. Use a proxy instance
D. Use a NAT gateway
B. Use AWS PrivateLink
Explanation:
To publish messages to Amazon SNS topics from an Amazon VPC, create an interface VPC endpoint. Then, you can publish messages to SNS topics while keeping the traffic within the network that you manage with the VPC. This is the most secure option as traffic does not need to traverse the Internet.
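A minimal boto3 sketch of creating the interface endpoint for SNS, with placeholder IDs:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Instances in private subnets then reach SNS without any internet path
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0aaa1111",
    ServiceName="com.amazonaws.us-east-1.sns",
    SubnetIds=["subnet-0bbb2222"],
    SecurityGroupIds=["sg-0ccc3333"],
    PrivateDnsEnabled=True,  # sns.us-east-1.amazonaws.com resolves to the endpoint
)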
CORRECT: "Use AWS PrivateLink" is the correct answer.
INCORRECT: "Use an Internet Gateway" is incorrect. Internet Gateways are used by instances in public subnets to access the Internet and this is less secure than an VPC endpoint.
INCORRECT: "Use a proxy instance" is incorrect. A proxy instance will also use the public Internet and so is less secure than a VPC endpoint.
INCORRECT: "Use a NAT gateway" is incorrect. A NAT Gateway is used by instances in private subnets to access the Internet and this is less secure than an VPC endpoint.
References:
https://docs.aws.amazon.com/sns/latest/dg/sns-vpc-endpoint.html
An e-commerce application is hosted in AWS. The last time a new product was launched, the application experienced a performance issue due to an enormous spike in traffic. Management decided that capacity must be doubled this week after the product is launched. What is the MOST efficient way for management to ensure that capacity requirements are met?
A. Add a Step Scaling policy
B. Add a Simple Scaling policy
C. Add a Scheduled Scaling action
D. Add Amazon EC2 Spot instances
C. Add a Scheduled Scaling action
Explanation:
Scaling based on a schedule allows you to set your own scaling schedule for predictable load changes. To configure your Auto Scaling group to scale based on a schedule, you create a scheduled action. This is ideal for situations where you know when and for how long you are going to need the additional capacity.
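For illustration, a boto3 sketch of scheduled actions that double capacity for launch week; the group name and dates are hypothetical:

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="ecommerce-asg",
    ScheduledActionName="product-launch-scale-up",
    StartTime="2024-06-03T08:00:00Z",
    MinSize=4,
    MaxSize=40,
    DesiredCapacity=20,
)

# A second action restores the original capacity after launch week
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="ecommerce-asg",
    ScheduledActionName="product-launch-scale-down",
    StartTime="2024-06-10T08:00:00Z",
    MinSize=2,
    MaxSize=20,
    DesiredCapacity=10,
)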
CORRECT: "Add a Scheduled Scaling action" is the correct answer.
INCORRECT: "Add a Step Scaling policy" is incorrect. Step scaling policies increase or decrease the current capacity of your Auto Scaling group based on a set of scaling adjustments, known as step adjustments. The adjustments vary based on the size of the alarm breach. This is more suitable to situations where the load unpredictable.
INCORRECT: "Add a Simple Scaling policy" is incorrect. AWS recommend using step over simple scaling in most cases. With simple scaling, after a scaling activity is started, the policy must wait for the scaling activity or health check replacement to complete and the cooldown period to expire before responding to additional alarms (in contrast to step scaling). Again, this is more suitable to unpredictable workloads.
INCORRECT: "Add Amazon EC2 Spot instances" is incorrect. Adding spot instances may decrease EC2 costs but you still need to ensure they are available. The main requirement of the question is that the performance issues are resolved rather than the cost being minimized.
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scale-based-on-demand.html
An application running video-editing software is using significant memory on an Amazon EC2 instance. How can a user track memory usage on the Amazon EC2 instance?
A. Install the CloudWatch agent on the EC2 instance to push memory usage to an Amazon CloudWatch custom metric
B. Use an instance type that supports memory usage reporting to a metric by default
C. Call Amazon CloudWatch to retrieve the memory usage metric data that exists for the EC2 instance
D. Assign an IAM role to the EC2 instance with an IAM policy granting access to the desired metric
A. Install the CloudWatch agent on the EC2 instance to push memory usage to an Amazon CloudWatch custom metric
Explanation:
There is no standard metric in CloudWatch for collecting EC2 memory usage. However, you can use the CloudWatch agent to collect both system metrics and log files from Amazon EC2 instances and on-premises servers. The metrics can be pushed to a CloudWatch custom metric.
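The CloudWatch agent is the recommended mechanism, but the sketch below illustrates the underlying custom-metric push that the agent automates. It assumes the third-party psutil library and a placeholder instance ID:

import boto3
import psutil  # third-party library; the CloudWatch agent collects this for you

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_data(
    Namespace="CustomEC2",
    MetricData=[{
        "MetricName": "MemoryUtilization",
        "Value": psutil.virtual_memory().percent,
        "Unit": "Percent",
        "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    }],
)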
CORRECT: "Install the CloudWatch agent on the EC2 instance to push memory usage to an Amazon CloudWatch custom metric" is the correct answer.
INCORRECT: "Use an instance type that supports memory usage reporting to a metric by default" is incorrect. There is no such thing as an EC2 instance type that supports memory usage reporting to a metric by default. The limitation is not in EC2 but in the metrics that are collected by CloudWatch.
INCORRECT: "Call Amazon CloudWatch to retrieve the memory usage metric data that exists for the EC2 instance" is incorrect. As there is no standard metric for collecting EC2 memory usage in CloudWatch the data will not already exist there to be retrieved.
INCORRECT: "Assign an IAM role to the EC2 instance with an IAM policy granting access to the desired metric" is incorrect. This is not an issue of permissions.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/mon-scripts.html
An application has been deployed on Amazon EC2 instances behind an Application Load Balancer (ALB). A Solutions Architect must improve the security posture of the application and minimize the impact of a DDoS attack on resources. Which of the following solutions is MOST effective?
A. Create a custom AWS Lambda function that monitors for suspicious traffic and modifies a network ACL when a potential DDoS attack is identified
B. Enable VPC Flow Logs and store them in Amazon S3. Use Amazon Athena to parse the logs and identify and block potential DDoS attacks
C. Configure an AWS WAF ACL with rate-based rules. Enable the WAF ACL on the Application Load Balancer
D. Enable access logs on the Application Load Balancer and configure Amazon CloudWatch to monitor the access logs and trigger a Lambda function when potential attacks are identified. Configure the Lambda function to modify the ALBs security group and block the attack
C. Configure an AWS WAF ACL with rate-based rules. Enable the WAF ACL on the Application Load Balancer
Explanation:
A rate-based rule tracks the rate of requests for each originating IP address, and triggers the rule action on IPs with rates that go over a limit. You set the limit as the number of requests per 5-minute time span.
You can use this type of rule to put a temporary block on requests from an IP address that's sending excessive requests. By default, AWS WAF aggregates requests based on the IP address from the web request origin, but you can configure the rule to use an IP address from an HTTP header, like X-Forwarded-For, instead.
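For illustration, a boto3 sketch that creates a web ACL with a rate-based rule and associates it with an ALB; the names, limit, and ARNs are placeholders:

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# REGIONAL scope applies to ALBs; CLOUDFRONT scope is for distributions
acl = wafv2.create_web_acl(
    Name="alb-rate-limiting",
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "rate-limit",
        "Priority": 0,
        "Statement": {"RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}},
        "Action": {"Block": {}},
        "VisibilityConfig": {"SampledRequestsEnabled": True,
                             "CloudWatchMetricsEnabled": True,
                             "MetricName": "rate-limit"},
    }],
    VisibilityConfig={"SampledRequestsEnabled": True,
                      "CloudWatchMetricsEnabled": True,
                      "MetricName": "alb-rate-limiting"},
)

wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/web/abc",
)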
CORRECT: "Configure an AWS WAF ACL with rate-based rules. Enable the WAF ACL on the Application Load Balancer" is the correct answer.
INCORRECT: "Create a custom AWS Lambda function that monitors for suspicious traffic and modifies a network ACL when a potential DDoS attack is identified" is incorrect. There’s not description here of how Lambda is going to monitor for traffic.
INCORRECT: "Enable VPC Flow Logs and store them in Amazon S3. Use Amazon Athena to parse the logs and identify and block potential DDoS attacks" is incorrect. Amazon Athena is not able to block DDoS attacks, another service would be needed.
INCORRECT: "Enable access logs on the Application Load Balancer and configure Amazon CloudWatch to monitor the access logs and trigger a Lambda function when potential attacks are identified. Configure the Lambda function to modify the ALBs security group and block the attack" is incorrect. Access logs are exported to S3 but not to CloudWatch. Also, it would not be possible to block an attack from a specific IP using a security group (while still allowing any other source access) as they do not support deny rules.
References:
https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-type-rate-based.html
A new application will run across multiple Amazon ECS tasks. Front-end application logic will process data and then pass that data to a back-end ECS task to perform further processing and write the data to a datastore. The Architect would like to reduce interdependencies so failures do not impact other components. Which solution should the Architect use?
A. Create an Amazon Kinesis Firehose delivery stream and configure the front-end to add data to the stream and the back-end to read data from the stream
B. Create an Amazon SQS queue and configure the front-end to add messages to the queue and the back-end to poll the queue for messages
C. Create an Amazon Kinesis Firehose delivery stream that delivers data to an Amazon S3 bucket, configure the front-end to write data to the stream and the back-end to read data from Amazon S3
D. Create an Amazon SQS queue that pushes messages to the back-end. Configure the front-end to add messages to the queue
B. Create an Amazon SQS queue and configure the front-end to add messages to the queue and the back-end to poll the queue for messages
Explanation:
This is a good use case for Amazon SQS. SQS is a service that is used for decoupling applications, thus reducing interdependencies, through a message bus. The front-end application can place messages on the queue and the back-end can then poll the queue for new messages. Please remember that Amazon SQS is pull-based (polling) not push-based (use SNS for push-based).
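A minimal boto3 sketch of this pattern; the queue name and the process() function are hypothetical:

import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="frontend-to-backend")["QueueUrl"]

def process(body):
    print("processing", body)  # placeholder for the real back-end work

# Front-end task: enqueue work
sqs.send_message(QueueUrl=queue_url, MessageBody='{"record_id": 42}')

# Back-end task: long-poll for work, then delete each message once processed
resp = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=20, MaxNumberOfMessages=10)
for msg in resp.get("Messages", []):
    process(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])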
CORRECT: "Create an Amazon SQS queue and configure the front-end to add messages to the queue and the back-end to poll the queue for messages" is the correct answer.
INCORRECT: "Create an Amazon Kinesis Firehose delivery stream and configure the front-end to add data to the stream and the back-end to read data from the stream" is incorrect. Amazon Kinesis Firehose is used for streaming data. With Firehose the data is immediately loaded into a destination that can be Amazon S3, RedShift, Elasticsearch, or Splunk. This is not an ideal use case for Firehose as this is not streaming data and there is no need to load data into an additional AWS service.
INCORRECT: "Create an Amazon Kinesis Firehose delivery stream that delivers data to an Amazon S3 bucket, configure the front-end to write data to the stream and the back-end to read data from Amazon S3" is incorrect as per the previous explanation.
INCORRECT: "Create an Amazon SQS queue that pushes messages to the back-end. Configure the front-end to add messages to the queue " is incorrect as SQS is pull-based, not push-based. EC2 instances must poll the queue to find jobs to process.
References:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/common_use_cases.html
Which of the following may be included in an AWS Config delivery channel? (Choose all that apply.)
A. A CloudWatch log stream
B. The delivery frequency of the configuration snapshot
C. An S3 bucket name
D. An SNS topic ARN
B. The delivery frequency of the configuration snapshot
C. An S3 bucket name
D. An SNS topic ARN
Explanation:
The delivery channel must include an S3 bucket name and may specify an SNS topic and the delivery frequency of configuration snapshots. You can’t specify a CloudWatch log stream.
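For illustration, a boto3 sketch of a delivery channel specifying all three permitted elements; the bucket and topic are placeholders:

import boto3

config = boto3.client("config")

# The S3 bucket name is required; the SNS topic and snapshot frequency are optional
config.put_delivery_channel(
    DeliveryChannel={
        "name": "default",
        "s3BucketName": "my-config-bucket",
        "snsTopicARN": "arn:aws:sns:us-east-1:123456789012:config-topic",
        "configSnapshotDeliveryProperties": {"deliveryFrequency": "TwentyFour_Hours"},
    }
)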
A solutions architect is designing the infrastructure to run an application on Amazon EC2 instances. The application requires high availability and must dynamically scale based on demand to be cost efficient. What should the solutions architect do to meet these requirements?
A. Configure an Amazon API Gateway API in front of an Auto Scaling group to deploy instances to multiple Availability Zones
B. Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances to multiple Regions
C. Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances to multiple Availability Zones
D. Configure an Amazon CloudFront distribution in front of an Auto Scaling group to deploy instances to multiple Regions
C. Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances to multiple Availability Zones
Explanation:
The Amazon EC2-based application must be highly available and elastically scalable. Auto Scaling can provide the elasticity by dynamically launching and terminating instances based on demand. This can take place across availability zones for high availability. Incoming connections can be distributed to the instances by using an Application Load Balancer (ALB).
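For illustration, a boto3 sketch of an Auto Scaling group spanning three Availability Zones and registered with an ALB target group; all identifiers are placeholders:

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=12,
    # Subnets in three different AZs provide high availability
    VPCZoneIdentifier="subnet-0aaa,subnet-0bbb,subnet-0ccc",
    TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/123"],
    HealthCheckType="ELB",
)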
CORRECT: "Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances to multiple Availability Zones" is the correct answer.
INCORRECT: "Configure an Amazon API Gateway API in front of an Auto Scaling group to deploy instances to multiple Availability Zones" is incorrect as API gateway is not used for load balancing connections to Amazon EC2 instances.
INCORRECT: "Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances to multiple Regions" is incorrect as you cannot launch instances in multiple Regions from a single Auto Scaling group.
INCORRECT: "Configure an Amazon CloudFront distribution in front of an Auto Scaling group to deploy instances to multiple Regions" is incorrect as you cannot launch instances in multiple Regions from a single Auto Scaling group.
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html
https://aws.amazon.com/elasticloadbalancing/
A company provides a REST-based interface to an application that allows a partner company to send data in near-real time. The application then processes the data that is received and stores it for later analysis. The application runs on Amazon EC2 instances. The partner company has received many 503 Service Unavailable Errors when sending data to the application and the compute capacity reaches its limits and is unable to process requests when spikes in data volume occur. Which design should a Solutions Architect implement to improve scalability?
A. Use Amazon Kinesis Data Streams to ingest the data. Process the data using AWS Lambda functions
B. Use Amazon API Gateway in front of the existing application. Create a usage plan with a quota limit for the partner company
C. Use Amazon SQS to ingest the data. Configure the EC2 instances to process messages from the SQS queue
D. Use Amazon SNS to ingest the data and trigger AWS Lambda functions to process the data in near-real time
A. Use Amazon Kinesis Data Streams to ingest the data. Process the data using AWS Lambda functions
Explanation:
Amazon Kinesis enables you to ingest, buffer, and process streaming data in real-time. Kinesis can handle any amount of streaming data and process data from hundreds of thousands of sources with very low latencies. This is an ideal solution for data ingestion.
To ensure the compute layer can scale to process increasing workloads, the EC2 instances should be replaced by AWS Lambda functions. Lambda can scale seamlessly by running multiple executions in parallel.
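A minimal boto3 sketch of the ingestion and processing sides, with placeholder names:

import boto3

kinesis = boto3.client("kinesis")
lambda_client = boto3.client("lambda")

# Ingest: write each record from the REST layer to the stream
kinesis.put_record(
    StreamName="partner-data",
    Data=b'{"sensor": "a1", "value": 42}',
    PartitionKey="a1",
)

# Process: map the stream to a Lambda function, which scales per shard
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:kinesis:us-east-1:123456789012:stream/partner-data",
    FunctionName="process-partner-data",
    StartingPosition="LATEST",
    BatchSize=100,
)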
CORRECT: "Use Amazon Kinesis Data Streams to ingest the data. Process the data using AWS Lambda functions" is the correct answer.
INCORRECT: "Use Amazon API Gateway in front of the existing application. Create a usage plan with a quota limit for the partner company" is incorrect. A usage plan will limit the amount of data that is received and cause more errors to be received by the partner company.
INCORRECT: "Use Amazon SQS to ingest the data. Configure the EC2 instances to process messages from the SQS queue" is incorrect. Amazon Kinesis Data Streams should be used for near-real time or real-time use cases instead of Amazon SQS.
INCORRECT: "Use Amazon SNS to ingest the data and trigger AWS Lambda functions to process the data in near-real time" is incorrect. SNS is not a near-real time solution for data ingestion. SNS is used for sending notifications.
References:
https://aws.amazon.com/kinesis/
https://docs.aws.amazon.com/lambda/latest/dg/invocation-scaling.html
A Solutions Architect is designing a web application that runs on Amazon EC2 instances behind an Elastic Load Balancer. All data in transit must be encrypted. Which solution options meet the encryption requirement? (choose 2)
A. Use an Application Load Balancer (ALB) in passthrough mode, then terminate SSL on EC2 instances
B. Use a Network Load Balancer (NLB) with an HTTPS listener, then install SSL certificates on the NLB and EC2 instances
C. Use an Application Load Balancer (ALB) with a TCP listener, then terminate SSL on EC2 instances
D. Use a Network Load Balancer (NLB) with a TCP listener, then terminate SSL on EC2 instances
E. Use an Application Load Balancer (ALB) with an HTTPS listener, then install SSL certificates on the ALB and EC2 instances
D. Use a Network Load Balancer (NLB) with a TCP listener, then terminate SSL on EC2 instances
E. Use an Application Load Balancer (ALB) with an HTTPS listener, then install SSL certificates on the ALB and EC2 instances
Explanation:
You can pass encrypted traffic through an NLB with a TCP listener and terminate SSL on the EC2 instances, so this is a valid answer.
You can also use an HTTPS listener with an ALB and install certificates on both the ALB and the EC2 instances. This does not use passthrough; instead, the first SSL connection is terminated on the ALB and the traffic is then re-encrypted for the connection to the EC2 instances.
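For illustration, a boto3 sketch of an HTTPS listener on an ALB forwarding to an HTTPS target group; the certificate and ARNs are placeholders:

import boto3

elbv2 = boto3.client("elbv2")

# Traffic is decrypted at the ALB and re-encrypted to instances because the
# target group uses HTTPS and the instances hold their own certificates
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/web/abc",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/xyz"}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web-https/123",
    }],
)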
CORRECT: "Use a Network Load Balancer (NLB) with a TCP listener, then terminate SSL on EC2 instances" is the correct answer.
CORRECT: "Use an Application Load Balancer (ALB) with an HTTPS listener, then install SSL certificates on the ALB and EC2 instances" is the correct answer.
INCORRECT: "Use an Application Load Balancer (ALB) in passthrough mode, then terminate SSL on EC2 instances" is incorrect. You cannot use passthrough mode with an ALB and terminate SSL on the EC2 instances.
INCORRECT: "Use a Network Load Balancer (NLB) with an HTTPS listener, then install SSL certificates on the NLB and EC2 instances" is incorrect. You cannot use a HTTPS listener with an NLB.
INCORRECT: "Use an Application Load Balancer (ALB) with a TCP listener, then terminate SSL on EC2 instances" is incorrect. You cannot use a TCP listener with an ALB.
References:
https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html
An application running on Amazon ECS processes data and then writes objects to an Amazon S3 bucket. The application requires permissions to make the S3 API calls. How can a Solutions Architect ensure the application has the required permissions?
A. Create an IAM role that has read/write permissions to the bucket and update the task definition to specify the role as the taskRoleArn
B. Update the S3 policy in IAM to allow read/write access from Amazon ECS, and then relaunch the container
C. Create a set of Access Keys with read/write permissions to the bucket and update the task credential ID
D. Attach an IAM policy with read/write permissions to the bucket to an IAM group and add the container instances to the group
A. Create an IAM role that has read/write permissions to the bucket and update the task definition to specify the role as the taskRoleArn
Explanation:
With IAM roles for Amazon ECS tasks, you can specify an IAM role that can be used by the containers in a task. Applications must sign their AWS API requests with AWS credentials, and this feature provides a strategy for managing credentials for your applications to use, similar to the way that Amazon EC2 instance profiles provide credentials to EC2 instances. You define the IAM role to use in your task definitions, or you can use a taskRoleArn override when running a task manually with the RunTask API operation. Note that there are instance roles and task roles that you can assign in ECS when using the EC2 launch type. The task role is better when you need to assign permissions for just that specific task.
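A minimal boto3 sketch of a task definition specifying the taskRoleArn; the role and image are placeholders:

import boto3

ecs = boto3.client("ecs")

# Containers in this task sign S3 API calls with the task role's credentials
ecs.register_task_definition(
    family="data-writer",
    taskRoleArn="arn:aws:iam::123456789012:role/s3-readwrite-task-role",
    containerDefinitions=[{
        "name": "app",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest",
        "memory": 512,
    }],
)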
CORRECT: "Create an IAM role that has read/write permissions to the bucket and update the task definition to specify the role as the taskRoleArn" is the correct answer.
INCORRECT: "Update the S3 policy in IAM to allow read/write access from Amazon ECS, and then relaunch the container" is incorrect. Policies must be assigned to tasks using IAM Roles and this is not mentioned here.
INCORRECT: "Create a set of Access Keys with read/write permissions to the bucket and update the task credential ID" is incorrect. You cannot update the task credential ID with access keys and roles should be used instead.
INCORRECT: "Attach an IAM policy with read/write permissions to the bucket to an IAM group and add the container instances to the group" is incorrect. You cannot add container instances to an IAM group.
References:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html
A web application allows users to upload photos and add graphical elements to them. The application offers two tiers of service: free and paid. Photos uploaded by paid users should be processed before those submitted using the free tier. The photos are uploaded to an Amazon S3 bucket which uses an event notification to send the job information to Amazon SQS. How should a Solutions Architect configure the Amazon SQS deployment to meet these requirements?
A. Use one SQS FIFO queue. Assign a higher priority to the paid photos so they are processed first
B. Use a separate SQS Standard queue for each tier. Configure Amazon EC2 instances to prioritize polling for the paid queue over the free queue
C. Use one SQS standard queue. Use batching for the paid photos and short polling for the free photos
D. Use a separate SQS FIFO queue for each tier. Set the free queue to use short polling and the paid queue to use long polling
B. Use a separate SQS Standard queue for each tier. Configure Amazon EC2 instances to prioritize polling for the paid queue over the free queue
Explanation:
AWS recommend using separate queues when you need to provide prioritization of work. The logic can then be implemented at the application layer to prioritize the queue for the paid photos over the queue for the free photos.
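For illustration, a sketch of the application-layer prioritization logic using boto3 and two placeholder queue URLs:

import boto3

sqs = boto3.client("sqs")
PAID_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/paid-photos"
FREE_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/free-photos"

def next_job():
    """Drain the paid queue first; fall back to the free queue only when it is empty."""
    for queue_url in (PAID_QUEUE, FREE_QUEUE):
        resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
        if resp.get("Messages"):
            return queue_url, resp["Messages"][0]
    return None, None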
CORRECT: "Use a separate SQS Standard queue for each tier. Configure Amazon EC2 instances to prioritize polling for the paid queue over the free queue" is the correct answer.
INCORRECT: "Use one SQS FIFO queue. Assign a higher priority to the paid photos so they are processed first" is incorrect. FIFO queues preserve the order of messages but they do not prioritize messages within the queue. The orders would need to be placed into the queue in a priority order and there’s no way of doing this as the messages are sent automatically through event notifications as they are received by Amazon S3.
INCORRECT: "Use one SQS standard queue. Use batching for the paid photos and short polling for the free photos" is incorrect. Batching adds efficiency but it has nothing to do with ordering or priority.
INCORRECT: "Use a separate SQS FIFO queue for each tier. Set the free queue to use short polling and the paid queue to use long polling" is incorrect. Short polling and long polling are used to control the amount of time the consumer process waits before closing the API call and trying again. Polling should be configured for efficiency of API calls and processing of messages but does not help with message prioritization.
References:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-how-it-works.html
A company runs an application on six web application servers in an Amazon EC2 Auto Scaling group in a single Availability Zone. The application is fronted by an Application Load Balancer (ALB). A Solutions Architect needs to modify the infrastructure to be highly available without making any modifications to the application. Which architecture should the Solutions Architect choose to enable high availability?
A. Create an Amazon CloudFront distribution with a custom origin across multiple Regions
B. Modify the Auto Scaling group to use two instances across each of three Availability Zones
C. Create a launch template that can be used to quickly create more instances in another Region
D. Create an Auto Scaling group to launch three instances across each of two Regions
B. Modify the Auto Scaling group to use two instances across each of three Availability Zones
Explanation:
The only thing that needs to be changed in this scenario to enable HA is to split the instances across multiple Availability Zones. The architecture already uses Auto Scaling and Elastic Load Balancing so there is plenty of resilience to failure. Once the instances are running across multiple AZs there will be AZ-level fault tolerance as well.
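A minimal boto3 sketch of the change, assuming placeholder subnet IDs in three different Availability Zones:

import boto3

autoscaling = boto3.client("autoscaling")

# Spread the existing group's six instances across three AZs
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    VPCZoneIdentifier="subnet-0aaa,subnet-0bbb,subnet-0ccc",
    MinSize=6,
    DesiredCapacity=6,
)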
CORRECT: "Modify the Auto Scaling group to use two instances across each of three Availability Zones" is the correct answer.
INCORRECT: "Create an Amazon CloudFront distribution with a custom origin across multiple Regions" is incorrect. CloudFront is not used to create HA for your application, it is used to accelerate access to media content.
INCORRECT: "Create a launch template that can be used to quickly create more instances in another Region" is incorrect. Multi-AZ should be enabled rather than multi-Region.
INCORRECT: "Create an Auto Scaling group to launch three instances across each of two Regions" is incorrect. HA can be achieved within a Region by simply enabling more AZs in the ASG. An ASG cannot launch instances in multiple Regions.
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-add-availability-zone.html
A company requires a solution to allow customers to customize images that are stored in an online catalog. The image customization parameters will be sent in requests to Amazon API Gateway. The customized image will then be generated on-demand and can be accessed online. The solutions architect requires a highly available solution. Which solution will be MOST cost-effective?
A. Use Amazon EC2 instances to manipulate the original images into the requested customization. Store the original and manipulated images in Amazon S3. Configure an Elastic Load Balancer in front of the EC2 instances
B. Use AWS Lambda to manipulate the original images to the requested customization. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Elastic Load Balancer in front of the Amazon EC2 instances
C. Use Amazon EC2 instances to manipulate the original images into the requested customization. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Amazon CloudFront distribution with the S3 bucket as the origin
D. Use AWS Lambda to manipulate the original images to the requested customization. Store the original and manipulated images in Amazon S3. Configure an Amazon CloudFront distribution with the S3 bucket as the origin
D. Use AWS Lambda to manipulate the original images to the requested customization. Store the original and manipulated images in Amazon S3. Configure an Amazon CloudFront distribution with the S3 bucket as the origin
Explanation:
All solutions presented are highly available. The key requirement that must be satisfied is that the solution should be cost-effective and you must choose the most cost-effective option.
Therefore, it’s best to eliminate services such as Amazon EC2 and ELB as these require ongoing costs even when they’re not used. Instead, a fully serverless solution should be used. AWS Lambda, Amazon S3 and CloudFront are the best services to use for these requirements.
CORRECT: "Use AWS Lambda to manipulate the original images to the requested customization. Store the original and manipulated images in Amazon S3. Configure an Amazon CloudFront distribution with the S3 bucket as the origin" is the correct answer.
INCORRECT: "Use Amazon EC2 instances to manipulate the original images into the requested customization. Store the original and manipulated images in Amazon S3. Configure an Elastic Load Balancer in front of the EC2 instances" is incorrect. This is not the most cost-effective option as the ELB and EC2 instances will incur costs even when not used.
INCORRECT: "Use AWS Lambda to manipulate the original images to the requested customization. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Elastic Load Balancer in front of the Amazon EC2 instances" is incorrect. This is not the most cost-effective option as the ELB will incur costs even when not used. Also, Amazon DynamoDB will incur RCU/WCUs when running and is not the best choice for storing images.
INCORRECT: "Use Amazon EC2 instances to manipulate the original images into the requested customization. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Amazon CloudFront distribution with the S3 bucket as the origin" is incorrect. This is not the most cost-effective option as the EC2 instances will incur costs even when not used
References:
https://aws.amazon.com/serverless/
A web application runs in public and private subnets. The application architecture consists of a web tier and database tier running on Amazon EC2 instances. Both tiers run in a single Availability Zone (AZ). Which combination of steps should a solutions architect take to provide high availability for this architecture? (Select TWO.)
A. Create an Amazon EC2 Auto Scaling group and Application Load Balancer (ALB) spanning multiple AZs
B. Create new public and private subnets in the same AZ for high availability
C. Add the existing web application instances to an Auto Scaling group behind an Application Load Balancer (ALB)
D. Create new public and private subnets in the same VPC, each in a new AZ. Migrate the database to an Amazon RDS multi-AZ deployment
E. Create new public and private subnets in a new AZ. Create a database using Amazon EC2 in one AZ
A. Create an Amazon EC2 Auto Scaling group and Application Load Balancer (ALB) spanning multiple AZs
D. Create new public and private subnets in the same VPC, each in a new AZ. Migrate the database to an Amazon RDS multi-AZ deployment
Explanation:
To add high availability to this architecture both the web tier and database tier require changes. For the web tier an Auto Scaling group across multiple AZs with an ALB will ensure there are always instances running and traffic is being distributed to them.
The database tier should be migrated from the EC2 instances to Amazon RDS to take advantage of a managed database with Multi-AZ functionality. This will ensure that if there is an issue preventing access to the primary database a secondary database can take over.
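For illustration, a boto3 sketch of a Multi-AZ RDS deployment; all values are placeholders:

import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="web-db",
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    MultiAZ=True,  # synchronous standby in a second AZ with automatic failover
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    DBSubnetGroupName="private-subnets",  # subnet group spanning the new private subnets
)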
CORRECT: "Create an Amazon EC2 Auto Scaling group and Application Load Balancer (ALB) spanning multiple AZs" is the correct answer.
CORRECT: "Create new public and private subnets in the same VPC, each in a new AZ. Migrate the database to an Amazon RDS multi-AZ deployment" is the correct answer.
INCORRECT: "Create new public and private subnets in the same AZ for high availability" is incorrect as this would not add high availability.
INCORRECT: "Add the existing web application instances to an Auto Scaling group behind an Application Load Balancer (ALB)" is incorrect because the existing servers are in a single subnet. For HA we need to instances in multiple subnets.
INCORRECT: "Create new public and private subnets in a new AZ. Create a database using Amazon EC2 in one AZ" is incorrect because we also need HA for the database layer.
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/autoscaling-load-balancer.html
https://aws.amazon.com/rds/features/multi-az/
A web application is being deployed on an Amazon ECS cluster using the Fargate launch type. The application is expected to receive a large volume of traffic initially. The company wishes to ensure that performance is good for the launch and that costs reduce as demand decreases. What should a solutions architect recommend?
A. Use Amazon EC2 Auto Scaling with simple scaling policies to scale when an Amazon CloudWatch alarm is breached
B. Use Amazon EC2 Auto Scaling to scale out on a schedule and back in once the load decreases
C. Use an AWS Lambda function to scale Amazon ECS based on metric breaches that trigger an Amazon CloudWatch alarm
D. Use Amazon ECS Service Auto Scaling with target tracking policies to scale when an Amazon CloudWatch alarm is breached
D. Use Amazon ECS Service Auto Scaling with target tracking policies to scale when an Amazon CloudWatch alarm is breached
Explanation:
Amazon ECS uses the AWS Application Auto Scaling service to scale tasks. This is configured through Amazon ECS using Amazon ECS Service Auto Scaling. A target tracking scaling policy increases or decreases the number of tasks that your service runs based on a target value for a specific metric. For example, tasks can be scaled when the average CPU utilization breaches 80% (as reported by CloudWatch).
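For illustration, a boto3 sketch that registers an ECS service with Application Auto Scaling and attaches a target tracking policy; the cluster and service names are placeholders:

import boto3

appscaling = boto3.client("application-autoscaling")

# Register the service's desired task count as a scalable target
appscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/web-cluster/web-service",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=50,
)

# Target tracking: keep average CPU near 80%, scaling tasks out and back in
appscaling.put_scaling_policy(
    PolicyName="cpu80-target-tracking",
    ServiceNamespace="ecs",
    ResourceId="service/web-cluster/web-service",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 80.0,
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ECSServiceAverageCPUUtilization"},
    },
)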
CORRECT: "Use Amazon ECS Service Auto Scaling with target tracking policies to scale when an Amazon CloudWatch alarm is breached" is the correct answer.
INCORRECT: "Use Amazon EC2 Auto Scaling with simple scaling policies to scale when an Amazon CloudWatch alarm is breached" is incorrect
INCORRECT: "Use Amazon EC2 Auto Scaling to scale out on a schedule and back in once the load decreases" is incorrect
INCORRECT: "Use an AWS Lambda function to scale Amazon ECS based on metric breaches that trigger an Amazon CloudWatch alarm" is incorrect
References:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-auto-scaling.html
A solutions architect is designing an application on AWS. The compute layer will run in parallel across EC2 instances. The compute layer should scale based on the number of jobs to be processed. The compute layer is stateless. The solutions architect must ensure that the application is loosely coupled and the job items are durably stored. Which design should the solutions architect use?
A. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue
B. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on network usage
C. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on CPU usage
D. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of messages published to the SNS topic
A. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue
Explanation:
In this case we need to find a durable and loosely coupled solution for storing jobs. Amazon SQS is ideal for this use case and can be configured to use dynamic scaling based on the number of jobs waiting in the queue.
To configure this scaling you can use the backlog per instance metric with the target value being the acceptable backlog per instance to maintain. You can calculate these numbers as follows:
Backlog per instance: To calculate your backlog per instance, start with the ApproximateNumberOfMessages queue attribute to determine the length of the SQS queue (number of messages available for retrieval from the queue). Divide that number by the fleet's running capacity, which for an Auto Scaling group is the number of instances in the InService state, to get the backlog per instance.
Acceptable backlog per instance: To calculate your target value, first determine what your application can accept in terms of latency. Then, take the acceptable latency value and divide it by the average time that an EC2 instance takes to process a message.
This solution will scale EC2 instances using Auto Scaling based on the number of jobs waiting in the SQS queue.
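The sketch below illustrates computing and publishing the backlog-per-instance metric with boto3; the queue URL and group name are placeholders:

import boto3

sqs = boto3.client("sqs")
autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Queue length: messages available for retrieval
attrs = sqs.get_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/jobs",
    AttributeNames=["ApproximateNumberOfMessages"],
)
backlog = int(attrs["Attributes"]["ApproximateNumberOfMessages"])

# Running capacity: instances in the InService state
group = autoscaling.describe_auto_scaling_groups(AutoScalingGroupNames=["worker-asg"])
in_service = sum(1 for i in group["AutoScalingGroups"][0]["Instances"]
                 if i["LifecycleState"] == "InService")

# Publish as a custom metric for a target tracking policy to act on
cloudwatch.put_metric_data(
    Namespace="CustomSQS",
    MetricData=[{"MetricName": "BacklogPerInstance",
                 "Value": backlog / max(in_service, 1)}],
)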
CORRECT: "Create an Amazon SQS queue to hold the jobs that needs to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue" is the correct answer.
INCORRECT: "Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on network usage" is incorrect as scaling on network usage does not relate to the number of jobs waiting to be processed.
INCORRECT: "Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on CPU usage" is incorrect. Amazon SNS is a notification service so it delivers notifications to subscribers. It does store data durably but is less suitable than SQS for this use case. Scaling on CPU usage is not the best solution as it does not relate to the number of jobs waiting to be processed.
INCORRECT: "Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of messages published to the SNS topic" is incorrect. Amazon SNS is a notification service so it delivers notifications to subscribers. It does store data durably but is less suitable than SQS for this use case. Scaling on the number of notifications in SNS is not possible.
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-using-sqs-queue.html
A company hosts a multiplayer game on AWS. The application uses Amazon EC2 instances in a single Availability Zone and users connect over Layer 4. A Solutions Architect has been tasked with making the architecture highly available and also more cost-effective. How can the solutions architect best meet these requirements? (Select TWO.)
A. Configure a Network Load Balancer in front of the EC2 instances
B. Increase the number of instances and use smaller EC2 instance types
C. Configure an Auto Scaling group to add or remove instances in the Availability Zone automatically
D. Configure an Auto Scaling group to add or remove instances in multiple Availability Zones automatically
E. Configure an Application Load Balancer in front of the EC2 instances
A. Configure a Network Load Balancer in front of the EC2 instances
D. Configure an Auto Scaling group to add or remove instances in multiple Availability Zones automatically
Explanation:
The solutions architect must enable high availability for the architecture and ensure it is cost-effective. To enable high availability an Amazon EC2 Auto Scaling group should be created to add and remove instances across multiple availability zones.
In order to distribute the traffic to the instances the architecture should use a Network Load Balancer which operates at Layer 4. This architecture will also be cost-effective as the Auto Scaling group will ensure the right number of instances are running based on demand.
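A minimal boto3 sketch of the NLB and its TCP listener, with placeholder identifiers and a hypothetical game port:

import boto3

elbv2 = boto3.client("elbv2")

# NLB operating at Layer 4 across two AZ subnets
nlb = elbv2.create_load_balancer(
    Name="game-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0aaa", "subnet-0bbb"],
)

elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="TCP",
    Port=7777,  # hypothetical game port
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/game/123",
    }],
)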
CORRECT: "Configure a Network Load Balancer in front of the EC2 instances" is a correct answer.
CORRECT: "Configure an Auto Scaling group to add or remove instances in multiple Availability Zones automatically" is also a correct answer.
INCORRECT: "Increase the number of instances and use smaller EC2 instance types" is incorrect as this is not the most cost-effective option. Auto Scaling should be used to maintain the right number of active instances.
INCORRECT: "Configure an Auto Scaling group to add or remove instances in the Availability Zone automatically" is incorrect as this is not highly available as it’s a single AZ.
INCORRECT: "Configure an Application Load Balancer in front of the EC2 instances" is incorrect as an ALB operates at Layer 7 rather than Layer 4.
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/autoscaling-load-balancer.html