Digging for Artifacts
Contain Your Excitement
Oops! All CloudFormation
Oops! More CloudFormation
Free-For-All
100

Which of the following is an artifact repository that makes it easy for developers to find approved software packages they need to build their applications?

A. CodePipeline

B. CodeBuild

C. CodeDeploy

D. CodeArtifact

Correct Answer: D

Explanation: AWS CodeArtifact is an artifact repository service that makes it easy for organizations to securely store, publish, and share software packages used in their software development process.

100

Your company is shifting towards Elastic Container Service (ECS) to deploy applications. The process should be automated using the AWS CLI to create a service where at least ten instances of a task definition are kept running under the default cluster. Which of the following commands should be executed?


A. aws ecs create-service --service-name ecs-simple-service --task-definition ecs-demo --desired-count 10

B. aws ecs run-task --cluster default --task-definition ecs-demo

C. aws ecr create-service --service-name ecs-simple-service --task-definition ecs-demo --desired-count 10

D. docker-compose create ecs-simple-service

Correct Answer: A

Explanation: To create a new service, you would use the create-service command. It creates a service named ecs-simple-service in your default region, under the default cluster since no --cluster flag is specified. The service uses the ecs-demo task definition and maintains 10 instantiations of that task.
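The command from option A can be sketched in full with an explicit cluster flag (ECS uses the default cluster when --cluster is omitted). This assumes the AWS CLI is configured with credentials and that an ecs-demo task definition is already registered:

```shell
# Keep 10 copies of the ecs-demo task definition running as a service
aws ecs create-service \
    --cluster default \
    --service-name ecs-simple-service \
    --task-definition ecs-demo \
    --desired-count 10

# Confirm the desired count took effect
aws ecs describe-services \
    --cluster default \
    --services ecs-simple-service \
    --query "services[0].desiredCount"
```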

100

You are using CloudFormation to create a new S3 bucket. Which of the following sections would you use to define the properties of your bucket?

A. Outputs

B. Resources

C. Conditions

D. Parameters

Correct Answer: B

Explanation: The Resources section specifies the stack resources and their properties, such as an Amazon Elastic Compute Cloud instance or an Amazon Simple Storage Service bucket.
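A minimal template sketch showing where the bucket's properties are declared; the bucket name and versioning setting are illustrative:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  MyBucket:                       # logical ID
    Type: AWS::S3::Bucket
    Properties:                   # the bucket's properties go here
      BucketName: my-example-bucket
      VersioningConfiguration:
        Status: Enabled
```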

100

Which of the following approaches allows you to re-use pieces of CloudFormation code in multiple templates, for common use cases like provisioning a load balancer or web server?


A. Share the code using an EBS volume

B. Copy and paste the code into the template each time you need to use it

C. Store the code you want to re-use in an AMI and reference the AMI from within your CloudFormation template

D. Use a CloudFormation nested stack

Correct Answer: D

Explanation: Nested stacks are stacks created as part of other stacks. As your infrastructure grows, common patterns can emerge in which you declare the same components in multiple templates. You can separate out these common components and create dedicated templates for them. Then use the AWS::CloudFormation::Stack resource in your template to reference other templates, creating nested stacks. For example, assume that you have a load balancer configuration that you use for most of your stacks. Instead of copying and pasting the same configurations into your templates, you can create a dedicated template for the load balancer. Then, you just use the AWS::CloudFormation::Stack resource to reference that template from within other templates.
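A sketch of how a parent template references a shared load balancer template; the S3 URL and parameter are illustrative:

```yaml
Resources:
  LoadBalancerStack:
    Type: AWS::CloudFormation::Stack    # the nested stack resource
    Properties:
      TemplateURL: https://s3.amazonaws.com/example-templates/load-balancer.yaml
      Parameters:
        Environment: staging            # passed into the nested template
```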

100

Your company has moved to AWS so it can use "Infrastructure as Code". You would like to apply version control to your infrastructure, so that you can roll back infrastructure to a previous stable version if needed. You would also like to quickly deploy testing and staging environments in multiple regions. What services should you use to achieve this?


A. Elastic Beanstalk and CodeCommit.

B. CloudWatch and CodeCommit.

C. CloudFormation and CodeCommit.

D. OpsWorks and CodeCommit.

Correct Answer: C

Explanation: CloudFormation supports rolling back to a prior stack version and is AWS's go-to infrastructure-as-code solution. CodeCommit is a fully managed source control service for secure Git-based repositories.

200

Which of the following is not something suitable to store in CodeArtifact?

A. Compiled applications

B. Documentation relating to your applications

C. Log files generated by your application

D. Deployable packages

Correct Answer: C

Explanation: CodeArtifact is primarily designed for storing and managing software packages, dependencies, and artifacts used in the software development process. It is not intended for storing log files generated by applications.

200

What service should you use if you want to deploy containerized applications without worrying about provisioning servers?

A. ECR

B. ECS

C. EC2

D. Fargate

Correct Answer: D

Explanation: AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers. Moving tasks such as server management, resource allocation, and scaling to AWS not only improves your operational posture, but also accelerates the process of going from idea to production on the cloud, and lowers the total cost of ownership.

200

AWS CloudFormation helps model and provision all the cloud infrastructure resources needed for your business. Which of the following services rely on CloudFormation to provision resources? (Select two)


A. AWS Elastic Beanstalk

B. AWS Lambda

C. AWS Autoscaling

D. AWS Serverless Application Model (AWS SAM)

E. AWS CodeBuild

Correct Answers: A, D

Explanation: AWS Elastic Beanstalk - AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. Elastic Beanstalk uses AWS CloudFormation to launch the resources in your environment and propagate configuration changes.

AWS Serverless Application Model (AWS SAM) - You use the AWS SAM specification to define your serverless application. AWS SAM templates are an extension of AWS CloudFormation templates, with some additional components that make them easier to work with. AWS SAM needs CloudFormation templates as a basis for its configuration.

200

As an AWS certified developer associate, you are working on an AWS CloudFormation template that will create resources for a company's cloud infrastructure. Your template is composed of three stacks: Stack-A, Stack-B, and Stack-C. Stack-A will provision a VPC, a security group, and subnets for public web applications that will be referenced in Stack-B and Stack-C. After running the stacks, you decide to delete them. In which order should you do it?


A. Stack A, then Stack C, then Stack B

B. Stack A, then Stack B, then Stack C

C. Stack B, then Stack C, then Stack A

D. Stack C, then Stack A, then Stack B

Correct Answer: C

Explanation: All of the imports must be removed before you can delete the exporting stack or modify the output value. In this case, you must delete Stack B as well as Stack C, before you delete Stack A.

Reference: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-exports.html

200

You want to build applications and define and deploy AWS resources using a programming language of your choice. What should you use to accomplish this?


A. CloudFormation templates

B. Amplify

C. Elastic Beanstalk

D. CDKs

Correct Answer: D

Explanation: The AWS Cloud Development Kit (AWS CDK) is an open-source software development framework for defining cloud infrastructure as code with modern programming languages and deploying it through AWS CloudFormation.

300

What is the purpose of creating an external connection when configuring CodeArtifact?


A. An external connection is required to enable developers in your organization to connect to the CodeArtifact repository.

B. An external connection is a connection between a CodeArtifact repository and an external, public repository so that packages can be fetched from the external repository.

C. An external connection is used to integrate your CodeArtifact repository with your CI/CD pipeline.

D. An external connection is used to enable developers from outside your organization to access your application.

Correct Answer: B

Explanation: An external connection is a connection between a CodeArtifact repository and an external, public repository so that when you request a package from the CodeArtifact repository that's not already present in the repository, the package can be fetched from the external connection.

300

Your company has embraced cloud-native microservices architectures. New applications must be dockerized and stored in a registry service offered by AWS. The architecture should support dynamic port mapping and support multiple tasks from a single service on the same container instance. All services should run on the same EC2 instance. Which of the following options offers the best-fit solution for the given use-case?


A. Classic Load Balancer + Beanstalk

B. Classic Load Balancer + ECS

C. Application Load Balancer + ECS

D. Application Load Balancer + Beanstalk

Correct Answer: C

Explanation: Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage Docker containers on a cluster. You can host your cluster on a serverless infrastructure that is managed by Amazon ECS by launching your services or tasks using the Fargate launch type. For more control over your infrastructure, you can host your tasks on a cluster of Amazon Elastic Compute Cloud (Amazon EC2) instances that you manage by using the EC2 launch type.

An Application Load Balancer distributes incoming application traffic across multiple targets, such as EC2 instances, in multiple Availability Zones. A listener checks for connection requests from clients, using the protocol and port that you configure. The rules that you define for a listener determine how the load balancer routes requests to its registered targets. Each rule consists of a priority, one or more actions, and one or more conditions.

When you deploy your services using Amazon Elastic Container Service (Amazon ECS), you can use dynamic port mapping to support multiple tasks from a single service on the same container instance. Amazon ECS manages updates to your services by automatically registering and deregistering containers with your target group using the instance ID and port for each container.

The Classic Load Balancer doesn't allow you to run multiple copies of a task on the same instance. Instead, with the Classic Load Balancer, you must statically map port numbers on a container instance.

You can create Docker environments that support multiple containers per Amazon EC2 instance with the multi-container Docker platform for Elastic Beanstalk. However, ECS gives you finer control.

300

How can you prevent AWS CloudFormation from deleting successfully provisioned resources during a stack create operation, while allowing resources in a failed state to be updated or deleted upon the next stack operation?


A. Use the --enable-termination-protection flag with the AWS CLI.

B. Use the "--enable-rollback" flag with the AWS CLI.

C. Set Termination Protection to Enabled in the CloudFormation console.

D. In the CloudFormation console, for Stack failure options, select "Preserve successfully provisioned resources".

Correct Answer: D

Explanation: A create operation set to "Preserve successfully provisioned resources" preserves the state of successful resources, while failed resources stay in a failed state until the next update operation is performed.

Reference: Stack Failure Options (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stack-failure-options.html#stack-failure-options-console)

300

Part of your CloudFormation deployment fails due to a misconfiguration. By default, what will happen?


A. CloudFormation will ask you if you want to continue with the deployment

B. Failed components will remain available for debugging purposes

C. CloudFormation will rollback the entire stack

D. CloudFormation will rollback only the failed components

Correct Answer: C

Explanation: By default, the “automatic rollback on error” feature is enabled. This will direct CloudFormation to only create or update all resources in your stack if all individual operations succeed. If they do not, CloudFormation reverts the stack to the last known stable configuration.

300

You are a developer in a manufacturing company that has several servers on-site. The company decides to move new development to the cloud using serverless technology. You decide to use the AWS Serverless Application Model (AWS SAM) and work with an AWS SAM template file to represent your serverless architecture.

Which of the following is NOT a valid serverless resource type?


A. AWS::Serverless::UserPool

B. AWS::Serverless::SimpleTable

C. AWS::Serverless::Api

D. AWS::Serverless::Function

Correct Answer: A

Explanation: The AWS Serverless Application Model (SAM) is an open-source framework for building serverless applications. It provides shorthand syntax to express functions, APIs, databases, and event source mappings. With just a few lines per resource, you can define the application you want and model it using YAML.

SAM supports the following resource types:

AWS::Serverless::Api

AWS::Serverless::Application

AWS::Serverless::Function

AWS::Serverless::HttpApi

AWS::Serverless::LayerVersion

AWS::Serverless::SimpleTable

AWS::Serverless::StateMachine

UserPool belongs to the Amazon Cognito service, which provides authentication for mobile and web apps. There is no resource named UserPool in the Serverless Application Model.
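For reference, a minimal SAM template using two of the valid types above might look like this (handler, runtime, and paths are illustrative):

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function     # valid SAM resource type
    Properties:
      Handler: app.handler
      Runtime: python3.12
      CodeUri: src/
      Events:
        HelloApi:
          Type: Api                     # implicitly creates an AWS::Serverless::Api
          Properties:
            Path: /hello
            Method: get
  HelloTable:
    Type: AWS::Serverless::SimpleTable  # valid SAM resource type (DynamoDB table)
```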

400

Which of the following should you do if you want to allow developers to pull packages from an external public repository and make them available in a CodeArtifact repository?


A. Create an upstream CodeArtifact repository with an external connection to the external repository. Associate the upstream repository with the repository used by the developers.

B. Establish a peering connection with the external repository, enabling the developers to access the packages.

C. Create a Lambda function to poll the external repository for available packages and upload them to the CodeArtifact repository.

D. The developers should download all the packages they need directly from the external repository.

Correct Answer: A

Explanation: You can add a connection between a CodeArtifact repository and an external, public repository, so that when developers request a package from the CodeArtifact repository that's not already present in the repository, the package can be fetched from the external connection. This makes it possible to consume open-source dependencies used by your application.

400

A junior developer working on ECS instances terminated a container instance in Amazon Elastic Container Service (Amazon ECS) as per instructions from the team lead. But the container instance continues to appear as a resource in the ECS cluster.

As a Developer Associate, which of the following solutions would you recommend to fix this behavior?


A. You terminated the container instance while it was in the RUNNING state, which led to this synchronization issue

B. A custom software on the container instance could have failed and resulted in the container hanging in an unhealthy state till restarted again

C. The container instance has been terminated with AWS CLI, whereas, for ECS instances, Amazon ECS CLI should be used to avoid any synchronization issues

D. You terminated the container instance while it was in the STOPPED state, which led to this synchronization issue

Correct Answer: D

Explanation: If you terminate a container instance while it is in the STOPPED state, that container instance isn't automatically removed from the cluster. You will need to deregister your container instance in the STOPPED state by using the Amazon ECS console or AWS Command Line Interface. Once deregistered, the container instance will no longer appear as a resource in your Amazon ECS cluster.
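The fix can be sketched with the AWS CLI; the container instance ARN below is illustrative:

```shell
# Deregister the stale container instance so it no longer
# appears as a resource in the cluster
aws ecs deregister-container-instance \
    --cluster default \
    --container-instance arn:aws:ecs:us-east-1:123456789012:container-instance/default/0123456789abcdef0 \
    --force
```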

400

Your global organization has an IT infrastructure that is deployed using CloudFormation on AWS Cloud. One employee, in the us-east-1 Region, has created a stack 'Application1' and made an exported output with the name 'ELBDNSName'. Another employee has created a stack for a different application 'Application2' in the us-east-2 Region and also exported an output with the name 'ELBDNSName'. The first employee wanted to deploy the CloudFormation stack 'Application1' in us-east-2, but got an error. What is the cause of the error?


A. Exported Output Values in CloudFormation must have unique names within a single Region

B. Output Values in CloudFormation must have unique names across all Regions

C. Exported Output Values in CloudFormation must have unique names across all Regions

D. Output Values in CloudFormation must have unique names within a single Region

Correct Answer: A

Explanation: Using CloudFormation, you can create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you. A CloudFormation template has an optional Outputs section which declares output values that you can import into other stacks (to create cross-stack references), return in response to describe stack calls, or view on the AWS CloudFormation console. For example, you can output the S3 bucket name for a stack to make the bucket easier to find. You can use the Export Output Values to export the name of the resource output for a cross-stack reference. For each AWS account, export names must be unique within a Region. In this case, we would have a conflict within us-east-2.

400

A developer is designing an AWS CloudFormation template for deploying Amazon EC2 instances in numerous AWS accounts. The developer needs to select EC2 instances from a list of pre-approved instance types. What measures could the developer take to integrate the list of authorized instance types into the CloudFormation template?


A. Configure a pseudo parameter with the list of EC2 instance types as AllowedValues in the CloudFormation template

B. Configure a parameter with the list of EC2 instance types as AllowedValues in the CloudFormation template

C. Configure a mapping having a list of EC2 instance types as parameters in the CloudFormation template

D. Configure separate parameters for each EC2 instance type in the CloudFormation template

Correct Answer: B

Explanation: You can use the Parameters section to customize your templates. Parameters enable you to input custom values to your template each time you create or update a stack. AllowedValues refers to an array containing the list of values allowed for the parameter. When applied to a parameter of type String, the parameter value must be one of the allowed values. When applied to a parameter of type CommaDelimitedList, each value in the list must be one of the specified allowed values.
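A sketch of such a parameter; the instance types and AMI ID are illustrative:

```yaml
Parameters:
  InstanceType:
    Type: String
    Default: t3.micro
    AllowedValues:          # the pre-approved list
      - t3.micro
      - t3.small
      - m5.large
    Description: Must be one of the pre-approved EC2 instance types.

Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceType
      ImageId: ami-0123456789abcdef0    # illustrative AMI ID
```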

400

You are running your application using Docker provisioned with Elastic Beanstalk. You would like to upgrade the application to a new version. How should you approach this?


A. Create a new Docker image using the latest code, deploy it using Elastic Beanstalk, then terminate your old environment.

B. Bundle your code into a zip file, upload and deploy it using CloudFormation.

C. Create a new Docker image using the latest code, log in to the underlying EC2 instance and install the new Docker image.

D. Bundle your Dockerfile and the application code into a zip file, then upload and deploy it using the Elastic Beanstalk console.

Correct Answer: D

Explanation: Elastic Beanstalk supports the deployment of web applications as Docker containers. Each time you upload a new version of your application with the Elastic Beanstalk console or the EB CLI, Elastic Beanstalk creates an application version. When using the console you can choose to upload and deploy your code in a single step. Elastic Beanstalk will manage the previous version of your code so you can revert back to it if necessary.

500

You are an AWS Developer Associate who would like your application to utilize open-source packages from public repositories, and you intend to use CodeArtifact to accomplish this. You have an internal repository called my-repo in your domain my-domain. You would like to establish a connection to an upstream repository called npm-store to allow developers to pull packages from an external public repository. What AWS CLI command should you use to accomplish this?


A. aws codeartifact create-repository --domain my-domain --repository npm-store

B. aws codeartifact associate-external-connection --domain my-domain --repository npm-store --external-connection "public:npmjs"

C. aws codeartifact update-repository --repository my-repo --domain my-domain --upstreams repositoryName=npm-store

D. aws codeartifact set-upstream-repository --repository my-repo --domain my-domain --upstream npm-store

Correct Answer: C

Explanation: Option C is the correct syntax for associating an upstream repository with a repository in a specific domain. It updates your repository my-repo within my-domain to set its upstream repository to npm-store, which has an external connection that allows it to pull packages from public repositories.

Option A is the command to create an upstream repository for your my-repo repository. It does not associate your repository in your domain to the upstream repository.

Option B is the command to add an external connection to the npm public repository to your npm-store repository. It does not associate your repository in your domain to the upstream repository.

Option D is the incorrect syntax for the command. To set an upstream repository, you use aws codeartifact update-repository.
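The two-step setup can be sketched as follows, assuming the domain my-domain and both repositories already exist and the AWS CLI is configured:

```shell
# 1. Give npm-store an external connection to the public npm registry
aws codeartifact associate-external-connection \
    --domain my-domain \
    --repository npm-store \
    --external-connection "public:npmjs"

# 2. Set npm-store as the upstream of my-repo (the command from option C)
aws codeartifact update-repository \
    --domain my-domain \
    --repository my-repo \
    --upstreams repositoryName=npm-store
```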

500

A developer wants to package the code and dependencies for the application-specific Lambda functions as container images to be hosted on Amazon Elastic Container Registry (ECR). Which of the following options are correct for the given requirement? (Select two)


A. You can deploy Lambda function as a container image, with a maximum size of 15 GB

B. To deploy a container image to Lambda, the container image must implement the Lambda Runtime API

C. AWS Lambda service does not support Lambda functions that use multi-architecture container images

D. Lambda supports both Windows and Linux-based container images

E. You can test the containers locally using the Lambda Runtime API

Correct Answers: B, C

Explanation: To deploy a container image to Lambda, the container image must implement the Lambda Runtime API - The AWS open-source runtime interface clients implement the API. You can add a runtime interface client to your preferred base image to make it compatible with Lambda.

AWS Lambda service does not support Lambda functions that use multi-architecture container images - Lambda provides multi-architecture base images. However, the image you build for your function must target only one of the architectures. Lambda does not support functions that use multi-architecture container images.

Reference: https://docs.aws.amazon.com/lambda/latest/dg/images-create.html

500

Your company wants to move away from manually managing Lambda in the AWS console and wants to upload and update them using AWS CloudFormation. How do you declare an AWS Lambda function in CloudFormation? (Select two)


A. Upload all the code as a zip to S3 and refer the object in AWS::Lambda::Function block

B. Write the AWS Lambda code inline in CloudFormation in the AWS::Lambda::Function block as long as there are no third-party dependencies

C. Upload all the code to CodeCommit and refer to the CodeCommit Repository in AWS::Lambda::Function block

D. Upload all the code as a folder to S3 and refer the folder in AWS::Lambda::Function block

E. Write the AWS Lambda code inline in CloudFormation in the AWS::Lambda::Function block and reference the dependencies as a zip file stored in S3

Correct Answers: A, B

Explanation: Reference: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html

A. Upload all the code as a zip to S3 and refer the object in AWS::Lambda::Function block - You can upload all the code as a zip to S3 and refer to the object in the AWS::Lambda::Function block. The AWS::Lambda::Function resource creates a Lambda function. To create a function, you need a deployment package and an execution role. The deployment package contains your function code.

B. Write the AWS Lambda code inline in CloudFormation in the AWS::Lambda::Function block as long as there are no third-party dependencies - The other option is to write the code inline for Node.js and Python, as long as your code has no dependencies besides those already provided by AWS in the Lambda runtime (for example, the AWS SDK, such as boto3 for Python, and cfn-response are preloaded in Lambda instances).
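Both accepted options can be sketched in one template; the role, bucket, and key names are illustrative, and the inline variant assumes no third-party dependencies:

```yaml
Resources:
  InlineFunction:                     # option B: inline code via ZipFile
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.12
      Handler: index.handler
      Role: !GetAtt LambdaRole.Arn    # assumes an execution role defined elsewhere
      Code:
        ZipFile: |
          import json
          def handler(event, context):
              return {"statusCode": 200, "body": json.dumps("ok")}
  ZippedFunction:                     # option A: zip deployment package in S3
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.12
      Handler: index.handler
      Role: !GetAtt LambdaRole.Arn
      Code:
        S3Bucket: my-deploy-bucket
        S3Key: function.zip
```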

500

The development team at an IT company uses CloudFormation to manage its AWS infrastructure. The team has created a network stack containing a VPC with subnets and a web application stack with EC2 instances and an RDS instance. The team wants to reference the VPC created in the network stack into its web application stack.

As a Developer Associate, which of the following solutions would you recommend for the given use-case?


A. Create a cross-stack reference and use the Export output field to flag the value of VPC from the network stack. Then use Fn::ImportValue intrinsic function to import the value of VPC into the web application stack

B. Create a cross-stack reference and use the Outputs output field to flag the value of VPC from the network stack. Then use Ref intrinsic function to reference the value of VPC into the web application stack

C. Create a cross-stack reference and use the Export output field to flag the value of VPC from the network stack. Then use Ref intrinsic function to reference the value of VPC into the web application stack

D. Create a cross-stack reference and use the Outputs output field to flag the value of VPC from the network stack. Then use Fn::ImportValue intrinsic function to import the value of VPC into the web application stack

Correct Answer: A

Explanation: You can create a cross-stack reference to export resources from one AWS CloudFormation stack to another. For example, you might have a network stack with a VPC and subnets and a separate public web application stack. To use the security group and subnet from the network stack, you can create a cross-stack reference that allows the web application stack to reference resource outputs from the network stack. With a cross-stack reference, owners of the web application stacks don't need to create or maintain networking rules or assets. To create a cross-stack reference, use the Export output field to flag the value of a resource output for export. Then, use the Fn::ImportValue intrinsic function to import the value. You cannot use the Ref intrinsic function to import the value.

Reference: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/walkthrough-crossstackref.html
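The accepted pattern spans two templates; they are shown together here for illustration, with the export name chosen arbitrarily:

```yaml
# Network stack: export the VPC ID
Outputs:
  VpcId:
    Value: !Ref MyVPC
    Export:
      Name: NetworkStack-VpcId        # must be unique per account per Region

# Web application stack (separate template): import it
Resources:
  WebSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Web tier security group
      VpcId: !ImportValue NetworkStack-VpcId
```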

500

As a developer, you are working on creating an application using AWS Cloud Development Kit (CDK). Which of the following represents the correct order of steps to be followed for creating an app using AWS CDK?


A. Create the app from a template provided by AWS CloudFormation -> Add code to the app to create resources within stacks -> Build the app (optional) -> Synthesize one or more stacks in the app -> Deploy stack(s) to your AWS account

B. Create the app from a template provided by AWS CDK -> Add code to the app to create resources within stacks -> Build the app (optional) -> Synthesize one or more stacks in the app -> Deploy stack(s) to your AWS account

C. Create the app from a template provided by AWS CDK -> Add code to the app to create resources within stacks -> Synthesize one or more stacks in the app -> Deploy stack(s) to your AWS account -> Build the app

D. Create the app from a template provided by AWS CloudFormation -> Add code to the app to create resources within stacks -> Synthesize one or more stacks in the app -> Deploy stack(s) to your AWS account -> Build the app

Correct Answer: B

Explanation: The standard AWS CDK development workflow is similar to the workflow you're already familiar with as a developer. There are a few extra steps:

1. Create the app from a template provided by AWS CDK - Each AWS CDK app should be in its own directory, with its own local module dependencies. Create a new directory for your app. Now initialize the app using the cdk init command, specifying the desired template ("app") and programming language. The cdk init command creates a number of files and folders inside the created home directory to help you organize the source code for your AWS CDK app.

2. Add code to the app to create resources within stacks - Add custom code as is needed for your application.

3. Build the app (optional) - In most programming environments, after making changes to your code, you'd build (compile) it. This isn't strictly necessary with the AWS CDK, since the Toolkit does it for you so you can't forget. But you can still build manually whenever you want to catch syntax and type errors.

4. Synthesize one or more stacks in the app to create an AWS CloudFormation template - The synthesis step catches logical errors in defining your AWS resources. If your app contains more than one stack, you'd need to specify which stack(s) to synthesize.

5. Deploy one or more stacks to your AWS account - It is optional (though good practice) to synthesize before deploying. The AWS CDK synthesizes your stack before each deployment. If your code has security implications, you'll see a summary of these and need to confirm them before deployment proceeds. cdk deploy is used to deploy the stack using CloudFormation templates. This command displays progress information as your stack is deployed. When it's done, the command prompt reappears.
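The five steps above map onto the CDK Toolkit commands roughly as follows (project name and language are illustrative):

```shell
# 1. Create the app from an AWS CDK template
mkdir my-app && cd my-app
cdk init app --language typescript

# 2. Add code to the app (edit lib/my-app-stack.ts), then
# 3. optionally build it to catch syntax and type errors
npm run build

# 4. Synthesize the stack(s) into a CloudFormation template
cdk synth

# 5. Deploy to your AWS account (synthesizes again automatically)
cdk deploy
```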