Description
Genuine Exam Dumps For DOP-C02:
Prepare Yourself Expertly for DOP-C02 Exam:
Our team of highly skilled and experienced professionals is dedicated to delivering up-to-date and precise study materials in PDF format to our customers. We deeply value both your time and financial investment, and we have spared no effort to provide you with the highest quality work. We ensure that our students consistently achieve a score of more than 95% in the Amazon DOP-C02 exam. We provide only authentic and reliable study material, and our team of professionals works diligently to keep the material updated, notifying students promptly whenever there is any change in the DOP-C02 dumps file. The Amazon DOP-C02 exam question answers and DOP-C02 dumps we offer are as genuine as the actual exam content.
24/7 Friendly Approach:
You can reach out to our agents at any time for guidance; we are available 24/7. Our agents will provide the information you need and answer any questions you have. We are here to provide you with the complete study material file you need to pass your DOP-C02 exam with extraordinary marks.
Quality Exam Dumps for Amazon DOP-C02:
Pass4surexams provides trusted study material. If you want to achieve sweeping success in your exam, sign up for the complete preparation at Pass4surexams, and we will provide you with genuine material that will help you succeed with distinction. Our experts work tirelessly for our customers, ensuring a seamless journey to passing the Amazon DOP-C02 exam on the first attempt. We have already helped many students ace IT certification exams with our genuine DOP-C02 Exam Question Answers. Don't wait: join us today to collect your favorite certification exam study material and get your dream job quickly.
90 Days Free Updates for Amazon DOP-C02 Exam Question Answers and Dumps:
Enroll with confidence at Pass4surexams, and not only will you access our comprehensive Amazon DOP-C02 exam question answers and dumps, but you will also benefit from a remarkable offer: 90 days of free updates. In the dynamic landscape of certification exams, our commitment to your success doesn't waver. If there are any changes or updates to the Amazon DOP-C02 exam content during the 90-day period, rest assured that our team will promptly notify you and provide the latest study materials, ensuring you are thoroughly prepared for success in your exam.
Amazon DOP-C02 Real Exam Questions:
Quality is the heart of our service, which is why we offer our students real exam questions with 100% passing assurance on the first attempt. Our DOP-C02 dumps PDF has been crafted by experienced experts to closely mirror the real exam questions you will face when earning your certification.
Amazon DOP-C02 Sample Questions
Question # 1
A company has a mission-critical application on AWS that uses automatic scaling. The company wants the deployment lifecycle to meet the following parameters:
• The application must be deployed one instance at a time to ensure the remaining fleet continues to serve traffic.
• The application is CPU intensive and must be closely monitored.
• The deployment must automatically roll back if the CPU utilization of the deployment instance exceeds 85%.
Which solution will meet these requirements?
A. Use AWS CloudFormation to create an AWS Step Functions state machine and Auto Scaling lifecycle hooks to move one instance at a time into a wait state. Use AWS Systems Manager automation to deploy the update to each instance and move it back into the Auto Scaling group using the heartbeat timeout.
B. Use AWS CodeDeploy with Amazon EC2 Auto Scaling. Configure an alarm tied to the CPU utilization metric. Use the CodeDeployDefault.OneAtATime configuration as a deployment strategy. Configure automatic rollbacks within the deployment group to roll back the deployment if the alarm thresholds are breached.
C. Use AWS Elastic Beanstalk for load balancing and AWS Auto Scaling. Configure an alarm tied to the CPU utilization metric. Configure rolling deployments with a fixed batch size of one instance. Enable enhanced health to monitor the status of the deployment and roll back based on the alarm previously created.
D. Use AWS Systems Manager to perform a blue/green deployment with Amazon EC2 Auto Scaling. Configure an alarm tied to the CPU utilization metric. Deploy updates one at a time. Configure automatic rollbacks within the Auto Scaling group to roll back the deployment if the alarm thresholds are breached.
Explanation: https://aws.amazon.com/about-aws/whats-new/2016/09/aws-codedeploy-introduces-deployment-monitoring-with-amazon-cloudwatch-alarms-and-automatic-deployment-rollback/
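For illustration, the alarm-triggered rollback in option B can be wired up with a few API calls. A minimal boto3 sketch, assuming a CodeDeploy application, deployment group, and a CloudWatch alarm on CPU utilization already exist (all names here are hypothetical):

```python
import boto3

codedeploy = boto3.client("codedeploy")

# Hypothetical names; substitute your own application, deployment group, and alarm.
codedeploy.update_deployment_group(
    applicationName="MyCriticalApp",
    currentDeploymentGroupName="production-fleet",
    deploymentConfigName="CodeDeployDefault.OneAtATime",  # one instance at a time
    alarmConfiguration={
        "enabled": True,
        "alarms": [{"name": "cpu-over-85-percent"}],  # CloudWatch alarm on CPUUtilization > 85%
    },
    autoRollbackConfiguration={
        "enabled": True,
        "events": ["DEPLOYMENT_STOP_ON_ALARM"],  # roll back when the alarm fires mid-deployment
    },
)
```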
Question # 2
A company has 20 service teams. Each service team is responsible for its own microservice. Each service team uses a separate AWS account for its microservice and a VPC with the 192.168.0.0/22 CIDR block. The company manages the AWS accounts with AWS Organizations.
Each service team hosts its microservice on multiple Amazon EC2 instances behind an Application Load Balancer. The microservices communicate with each other across the public internet. The company's security team has issued a new guideline that all communication between microservices must use HTTPS over private network connections and cannot traverse the public internet.
A DevOps engineer must implement a solution that fulfills these obligations and minimizes the number of changes for each service team.
Which solution will meet these requirements?
A. Create a new AWS account in AWS Organizations. Create a VPC in this account and use AWS Resource Access Manager to share the private subnets of this VPC with the organization. Instruct the service teams to launch a new Network Load Balancer (NLB) and EC2 instances that use the shared private subnets. Use the NLB DNS names for communication between microservices.
B. Create a Network Load Balancer (NLB) in each of the microservice VPCs. Use AWS PrivateLink to create VPC endpoints in each AWS account for the NLBs. Create subscriptions to each VPC endpoint in each of the other AWS accounts. Use the VPC endpoint DNS names for communication between microservices.
C. Create a Network Load Balancer (NLB) in each of the microservice VPCs. Create VPC peering connections between each of the microservice VPCs. Update the route tables for each VPC to use the peering links. Use the NLB DNS names for communication between microservices.
D. Create a new AWS account in AWS Organizations. Create a transit gateway in this account and use AWS Resource Access Manager to share the transit gateway with the organization. In each of the microservice VPCs, create a transit gateway attachment to the shared transit gateway. Update the route tables of each VPC to use the transit gateway. Create a Network Load Balancer (NLB) in each of the microservice VPCs. Use the NLB DNS names for communication between microservices.
Explanation: https://aws.amazon.com/blogs/networking-and-content-delivery/connecting-networks-with-overlapping-ip-ranges/ PrivateLink is the best option because Transit Gateway does not support overlapping CIDR ranges.
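For context, option B's PrivateLink pattern leaves each team's 192.168.0.0/22 VPC unchanged because endpoint traffic never routes between the overlapping CIDRs. A rough boto3 sketch of the two sides of a PrivateLink connection, with hypothetical ARNs, IDs, and account numbers:

```python
import boto3

ec2 = boto3.client("ec2")

# Provider side: expose a microservice's NLB as a PrivateLink endpoint service.
service = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/net/svc-a/abc123"
    ],
    AcceptanceRequired=False,
)
config = service["ServiceConfiguration"]

# Allow a consumer account to connect to the endpoint service.
ec2.modify_vpc_endpoint_service_permissions(
    ServiceId=config["ServiceId"],
    AddAllowedPrincipals=["arn:aws:iam::222222222222:root"],
)

# Consumer side (run in the other account): create an interface endpoint to the service.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc1234",           # hypothetical consumer VPC
    ServiceName=config["ServiceName"],
    SubnetIds=["subnet-0abc1234"],  # hypothetical subnet in the consumer VPC
)
```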
Question # 3
A security team is concerned that a developer can unintentionally attach an Elastic IP address to an Amazon EC2 instance in production. No developer should be allowed to attach an Elastic IP address to an instance. The security team must be notified if any production server has an Elastic IP address at any time.
How can this task be automated?
A. Use Amazon Athena to query AWS CloudTrail logs to check for any associate-address attempts. Create an AWS Lambda function to disassociate the Elastic IP address from the instance, and alert the security team.
B. Attach an IAM policy to the developers' IAM group to deny associate-address permissions. Create a custom AWS Config rule to check whether an Elastic IP address is associated with any instance tagged as production, and alert the security team.
C. Ensure that all IAM groups associated with developers do not have associate-address permissions. Create a scheduled AWS Lambda function to check whether an Elastic IP address is associated with any instance tagged as production, and alert the security team if an instance has an Elastic IP address associated with it.
D. Create an AWS Config rule to check that all production instances have EC2 IAM roles that include deny associate-address permissions. Verify whether there is an Elastic IP address associated with any instance, and alert the security team if an instance has an Elastic IP address associated with it.
Explanation:
To prevent developers from unintentionally attaching an Elastic IP address to an Amazon EC2 instance in production, the best approach is to use IAM policies and AWS Config rules. By attaching an IAM policy that denies the associate-address permission to the developers' IAM group, you ensure that developers cannot perform this action. Additionally, creating a custom AWS Config rule to check for Elastic IP addresses associated with instances tagged as production provides ongoing monitoring. If the rule detects an Elastic IP address, it can trigger an alert to notify the security team. This method is proactive and enforces the necessary permissions while also providing a mechanism for detection and notification. References: Amazon DevOps sources.
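A minimal sketch of the deny policy from option B, using boto3 and a hypothetical group name; ec2:AssociateAddress is the IAM action that the associate-address operation maps to:

```python
import json

import boto3

iam = boto3.client("iam")

# Explicit deny attached to the developers' group; "Developers" is a hypothetical group name.
deny_associate_address = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["ec2:AssociateAddress"],
        "Resource": "*",
    }],
}

iam.put_group_policy(
    GroupName="Developers",
    PolicyName="DenyAssociateElasticIP",
    PolicyDocument=json.dumps(deny_associate_address),
)
```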
Question # 4
A company is using AWS CodePipeline to deploy an application. According to a new guideline, a member of the company's security team must sign off on any application changes before the changes are deployed into production. The approval must be recorded and retained.
Which combination of actions will meet these requirements? (Select TWO.)
A. Configure CodePipeline to write actions to Amazon CloudWatch Logs.
B. Configure CodePipeline to write actions to an Amazon S3 bucket at the end of each pipeline stage.
C. Create an AWS CloudTrail trail to deliver logs to Amazon S3.
D. Create a CodePipeline custom action to invoke an AWS Lambda function for approval. Create a policy that gives the security team access to manage CodePipeline custom actions.
E. Create a CodePipeline manual approval action before the deployment step. Create a policy that grants the security team access to approve manual approval stages.
Explanation:
To meet the new guideline for application deployment, the company can use a combination of AWS CodePipeline and AWS CloudTrail. A manual approval action in CodePipeline allows the security team to review and approve changes before they are deployed. This action can be configured to pause the pipeline until approval is granted, ensuring that no changes move to production without the necessary sign-off. Additionally, by creating an AWS CloudTrail trail, all actions taken within CodePipeline, including approvals, are recorded and delivered to an Amazon S3 bucket. This provides an audit trail that can be retained for compliance and review purposes.
References: AWS CodePipeline's manual approval action provides a way to ensure that a member of the security team can review and approve changes before they are deployed. AWS CloudTrail integration with CodePipeline allows for the recording and retention of all pipeline actions, including approvals, which can be stored in Amazon S3 for record-keeping.
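For reference, a manual approval action is declared like any other CodePipeline action. A sketch of the stage definition that could be passed to create_pipeline or update_pipeline as part of the "stages" list; the stage, action, and SNS topic names are hypothetical:

```python
# Manual approval stage inserted before the deployment stage of a pipeline definition.
approval_stage = {
    "name": "SecurityApproval",
    "actions": [{
        "name": "SecuritySignOff",
        "actionTypeId": {
            "category": "Approval",
            "owner": "AWS",
            "provider": "Manual",
            "version": "1",
        },
        "configuration": {
            # Optional SNS notification so the security team knows a review is pending.
            "NotificationArn": "arn:aws:sns:us-east-1:111111111111:security-approvals",
            "CustomData": "Review the change before production deployment.",
        },
        "runOrder": 1,
    }],
}
```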
Question # 5
A company has an AWS CodeDeploy application. The application has a deployment group that uses a single tag group to identify instances for the deployment of ApplicationA. The single tag group configuration identifies instances that have Environment=Production and Name=ApplicationA tags for the deployment of ApplicationA.
The company launches an additional Amazon EC2 instance with Department=Marketing, Environment=Production, and Name=ApplicationB tags. On the next CodeDeploy deployment of ApplicationA, the additional instance has ApplicationA installed on it. A DevOps engineer needs to configure the existing deployment group to prevent ApplicationA from being installed on the additional instance.
Which solution will meet these requirements?
A. Change the current single tag group to include only the Environment=Production tag. Add another single tag group that includes only the Name=ApplicationA tag.
B. Change the current single tag group to include the Department=Marketing, Environment=Production, and Name=ApplicationA tags.
C. Add another single tag group that includes only the Department=Marketing tag. Keep the Environment=Production and Name=ApplicationA tags with the current single tag group.
D. Change the current single tag group to include only the Environment=Production tag. Add another single tag group that includes only the Department=Marketing tag.
Explanation:
To prevent ApplicationA from being installed on the additional instance, the deployment group configuration needs to be more specific. By changing the current single tag group to include only the Environment=Production tag and adding another single tag group that includes only the Name=ApplicationA tag, the deployment process will target only the instances that match both tag groups. This ensures that only instances intended for ApplicationA with the correct environment and name tags will receive the deployment, thus excluding the additional instance with the Department=Marketing and Name=ApplicationB tags.
References: AWS CodeDeploy Documentation: Working with instances for CodeDeploy. AWS CodeDeploy Documentation: Stop a deployment with CodeDeploy. Stack Overflow Discussion: CodeDeploy Deployment failed to stop Application.
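The two-tag-group behavior comes from CodeDeploy's ec2TagSet, in which separate tag groups are ANDed together. A boto3 sketch, assuming the hypothetical application and deployment group names shown:

```python
import boto3

codedeploy = boto3.client("codedeploy")

# Two tag groups inside ec2TagSet are combined with AND: an instance must match
# BOTH Environment=Production AND Name=ApplicationA to be targeted.
codedeploy.update_deployment_group(
    applicationName="ApplicationA",
    currentDeploymentGroupName="applicationa-prod",
    ec2TagSet={
        "ec2TagSetList": [
            [{"Key": "Environment", "Value": "Production", "Type": "KEY_AND_VALUE"}],
            [{"Key": "Name", "Value": "ApplicationA", "Type": "KEY_AND_VALUE"}],
        ]
    },
)
```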
Question # 6
A company uses an organization in AWS Organizations to manage its AWS accounts. The company recently acquired another company that has standalone AWS accounts. The acquiring company's DevOps team needs to consolidate the administration of the AWS accounts for both companies and retain full administrative control of the accounts. The DevOps team also needs to collect and group findings across all the accounts to implement and maintain a security posture.
Which combination of steps should the DevOps team take to meet these requirements? (Select TWO.)
A. Invite the acquired company's AWS accounts to join the organization. Create an SCP that has full administrative privileges. Attach the SCP to the management account.
B. Invite the acquired company's AWS accounts to join the organization. Create the OrganizationAccountAccessRole IAM role in the invited accounts. Grant permission to the management account to assume the role.
C. Use AWS Security Hub to collect and group findings across all accounts. Use Security Hub to automatically detect new accounts as the accounts are added to the organization.
D. Use AWS Firewall Manager to collect and group findings across all accounts. Enable all features for the organization. Designate an account in the organization as the delegated administrator account for Firewall Manager.
E. Use Amazon Inspector to collect and group findings across all accounts. Designate an account in the organization as the delegated administrator account for Amazon Inspector.
Explanation: The correct answer is B and C. Option B is correct because inviting the acquired company's AWS accounts to join the organization and creating the OrganizationAccountAccessRole IAM role in the invited accounts allows the management account to assume the role and gain full administrative access to the member accounts. Option C is correct because using AWS Security Hub to collect and group findings across all accounts enables the DevOps team to monitor and improve the security posture of the organization. Security Hub can automatically detect new accounts as the accounts are added to the organization and enable Security Hub for them. Option A is incorrect because creating an SCP that has full administrative privileges and attaching it to the management account does not grant the management account access to the member accounts. SCPs are used to restrict the permissions of the member accounts, not to grant permissions to the management account. Option D is incorrect because using AWS Firewall Manager to collect and group findings across all accounts is not a valid use case for Firewall Manager. Firewall Manager is used to centrally configure and manage firewall rules across the organization, not to collect and group security findings. Option E is incorrect because using Amazon Inspector to collect and group findings across all accounts is not a valid use case for Amazon Inspector. Amazon Inspector is used to assess the security and compliance of applications running on Amazon EC2 instances, not to collect and group security findings across accounts.
References: Inviting an AWS account to join your organization. Enabling and disabling AWS Security Hub. Service control policies. AWS Firewall Manager. Amazon Inspector.
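A short sketch of the Security Hub side of this setup, with a hypothetical delegated administrator account ID; note that the first call runs in the Organizations management account and the second in the delegated administrator account:

```python
import boto3

securityhub = boto3.client("securityhub")

# Run in the Organizations management account: designate a delegated
# Security Hub administrator for the organization.
securityhub.enable_organization_admin_account(AdminAccountId="222222222222")

# Run in the delegated administrator account: auto-enable Security Hub
# for accounts as they join the organization.
securityhub.update_organization_configuration(AutoEnable=True)
```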
Question # 7
A company has an application and a CI/CD pipeline. The CI/CD pipeline consists of an AWS CodePipeline pipeline and an AWS CodeBuild project. The CodeBuild project runs tests against the application as part of the build process and outputs a test report. The company must keep the test reports for 90 days.
Which solution will meet these requirements?
A. Add a new stage in the CodePipeline pipeline after the stage that contains the CodeBuild project. Create an Amazon S3 bucket to store the reports. Configure an S3 deploy action type in the new CodePipeline stage with the appropriate path and format for the reports.
B. Add a report group in the CodeBuild project buildspec file with the appropriate path and format for the reports. Create an Amazon S3 bucket to store the reports. Configure an Amazon EventBridge rule that invokes an AWS Lambda function to copy the reports to the S3 bucket when a build is completed. Create an S3 Lifecycle rule to expire the objects after 90 days.
C. Add a new stage in the CodePipeline pipeline. Configure a test action type with the appropriate path and format for the reports. Configure the report expiration time to be 90 days in the CodeBuild project buildspec file.
D. Add a report group in the CodeBuild project buildspec file with the appropriate path and format for the reports. Create an Amazon S3 bucket to store the reports. Configure the report group as an artifact in the CodeBuild project buildspec file. Configure the S3 bucket as the artifact destination. Set the object expiration to 90 days.
Explanation: The correct solution is to add a report group in the AWS CodeBuild project buildspec file with the appropriate path and format for the reports. Then, create an Amazon S3 bucket to store the reports. You should configure an Amazon EventBridge rule that invokes an AWS Lambda function to copy the reports to the S3 bucket when a build is completed. Finally, create an S3 Lifecycle rule to expire the objects after 90 days. This approach allows for the automated transfer of reports to long-term storage and ensures they are retained for the required duration without manual intervention.
References: AWS CodeBuild User Guide on test reporting. AWS CodeBuild User Guide on working with report groups. AWS Documentation on using AWS CodePipeline with AWS CodeBuild.
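The 90-day retention piece of option B is a standard S3 Lifecycle rule. A minimal boto3 sketch with a hypothetical bucket name and prefix:

```python
import boto3

s3 = boto3.client("s3")

# Expire stored test reports 90 days after they are written.
s3.put_bucket_lifecycle_configuration(
    Bucket="codebuild-test-reports",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-reports-after-90-days",
            "Filter": {"Prefix": "reports/"},
            "Status": "Enabled",
            "Expiration": {"Days": 90},
        }]
    },
)
```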
Question # 8
An ecommerce company uses a large number of Amazon Elastic Block Store (Amazon EBS) backed Amazon EC2 instances. To decrease manual work across all the instances, a DevOps engineer is tasked with automating restart actions when EC2 instance retirement events are scheduled.
How can this be accomplished?
A. Create a scheduled Amazon EventBridge rule to run an AWS Systems Manager Automation runbook that checks if any EC2 instances are scheduled for retirement once a week. If the instance is scheduled for retirement, the runbook will hibernate the instance.
B. Enable EC2 Auto Recovery on all of the instances. Create an AWS Config rule to limit the recovery to occur during a maintenance window only.
C. Reboot all EC2 instances during an approved maintenance window that is outside of standard business hours. Set up Amazon CloudWatch alarms to send a notification in case any instance is failing EC2 instance status checks.
D. Set up an AWS Health Amazon EventBridge rule to run AWS Systems Manager Automation runbooks that stop and start the EC2 instance when a retirement scheduled event occurs.
Explanation: https://aws.amazon.com/blogs/mt/automate-remediation-actions-for-amazon-ec2-notifications-and-beyond-using-ec2-systems-manager-automation-and-aws-health/
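A sketch of the EventBridge rule from option D, matching AWS Health retirement events and targeting a Systems Manager Automation runbook. The rule name, account IDs, and role are hypothetical, and the automation-definition ARN format should be verified against the EventBridge target documentation:

```python
import json

import boto3

events = boto3.client("events")

# Match AWS Health events for scheduled EC2 instance retirement. The exact
# eventTypeCode can vary (persistent retirement applies to EBS-backed
# instances), so both codes are matched here.
events.put_rule(
    Name="ec2-retirement-scheduled",
    EventPattern=json.dumps({
        "source": ["aws.health"],
        "detail-type": ["AWS Health Event"],
        "detail": {
            "service": ["EC2"],
            "eventTypeCode": [
                "AWS_EC2_INSTANCE_RETIREMENT_SCHEDULED",
                "AWS_EC2_PERSISTENT_INSTANCE_RETIREMENT_SCHEDULED",
            ],
        },
    }),
)

# Target a Systems Manager Automation runbook such as AWS-RestartEC2Instance.
events.put_targets(
    Rule="ec2-retirement-scheduled",
    Targets=[{
        "Id": "restart-instance-runbook",
        "Arn": "arn:aws:ssm:us-east-1:111111111111:automation-definition/AWS-RestartEC2Instance:$DEFAULT",
        "RoleArn": "arn:aws:iam::111111111111:role/EventBridgeSsmAutomationRole",
        # The affected instance ID (from the event's resources field) would be
        # mapped to the runbook's InstanceId parameter with an input transformer.
    }],
)
```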
Question # 9
A DevOps engineer is using AWS CodeDeploy across a fleet of Amazon EC2 instances in an EC2 Auto Scaling group. The associated CodeDeploy deployment group, which is integrated with EC2 Auto Scaling, is configured to perform in-place deployments with CodeDeployDefault.OneAtATime. During an ongoing new deployment, the engineer discovers that, although the overall deployment finished successfully, two out of five instances have the previous application revision deployed. The other three instances have the newest application revision.
What is likely causing this issue?
A. The two affected instances failed to fetch the new deployment.
B. A failed AfterInstall lifecycle event hook caused the CodeDeploy agent to roll back to the previous version on the affected instances.
C. The CodeDeploy agent was not installed on the two affected instances.
D. EC2 Auto Scaling launched two new instances while the new deployment had not yet finished, causing the previous version to be deployed on the affected instances.
Explanation:
When AWS CodeDeploy performs an in-place deployment, it updates the instances with the new application revision one at a time, as specified by the deployment configuration CodeDeployDefault.OneAtATime. If a lifecycle event hook, such as AfterInstall, fails during the deployment, CodeDeploy will attempt to roll back to the previous version on the affected instances. This is likely what happened with the two instances that still have the previous application revision deployed. The failure of the AfterInstall lifecycle event hook triggered the rollback mechanism, resulting in those instances reverting to the previous application revision.
References: AWS CodeDeploy documentation on redeployment and rollback procedures. Stack Overflow discussions on re-deploying older revisions with AWS CodeDeploy. AWS CLI reference guide for deploying a revision.
Question # 10
A company is examining its disaster recovery capability and wants the ability to switch over its daily operations to a secondary AWS Region. The company uses AWS CodeCommit as a source control tool in the primary Region.
A DevOps engineer must provide the capability for the company to develop code in the secondary Region. If the company needs to use the secondary Region, developers can add an additional remote URL to their local Git configuration.
Which solution will meet these requirements?
A. Create a CodeCommit repository in the secondary Region. Create an AWS CodeBuild project to perform a Git mirror operation of the primary Region's CodeCommit repository to the secondary Region's CodeCommit repository. Create an AWS Lambda function that invokes the CodeBuild project. Create an Amazon EventBridge rule that reacts to merge events in the primary Region's CodeCommit repository. Configure the EventBridge rule to invoke the Lambda function.
B. Create an Amazon S3 bucket in the secondary Region. Create an AWS Fargate task to perform a Git mirror operation of the primary Region's CodeCommit repository and copy the result to the S3 bucket. Create an AWS Lambda function that initiates the Fargate task. Create an Amazon EventBridge rule that reacts to merge events in the CodeCommit repository. Configure the EventBridge rule to invoke the Lambda function.
C. Create an AWS CodeArtifact repository in the secondary Region. Create an AWS CodePipeline pipeline that uses the primary Region's CodeCommit repository for the source action. Create a cross-Region stage in the pipeline that packages the CodeCommit repository contents and stores the contents in the CodeArtifact repository when a pull request is merged into the CodeCommit repository.
D. Create an AWS Cloud9 environment and a CodeCommit repository in the secondary Region. Configure the primary Region's CodeCommit repository as a remote repository in the AWS Cloud9 environment. Connect the secondary Region's CodeCommit repository to the AWS Cloud9 environment.
Explanation: The best solution to meet the disaster recovery capability and allow developers to switch over to a secondary AWS Region for code development is option A. This involves creating a CodeCommit repository in the secondary Region and setting up an AWS CodeBuild project to perform a Git mirror operation of the primary Region's CodeCommit repository to the secondary Region's repository. An AWS Lambda function is then created to invoke the CodeBuild project. Additionally, an Amazon EventBridge rule is configured to react to merge events in the primary Region's CodeCommit repository and invoke the Lambda function. This setup ensures that the secondary Region's repository is always up to date with the primary repository, allowing for a seamless transition in case of a disaster recovery event.
References: AWS CodeCommit User Guide on resilience and disaster recovery. AWS Documentation on monitoring CodeCommit events in Amazon EventBridge and Amazon CloudWatch Events.
Question # 11
A company has a single developer writing code for an automated deployment pipeline. The developer is storing source code in an Amazon S3 bucket for each project. The company wants to add more developers to the team but is concerned about code conflicts and lost work. The company also wants to build a test environment to deploy newer versions of code for testing and allow developers to automatically deploy to both environments when code is changed in the repository.
What is the MOST efficient way to meet these requirements?
A. Create an AWS CodeCommit repository for each project, use the main branch for production code, and create a testing branch for code deployed to testing. Use feature branches to develop new features and pull requests to merge code to testing and main branches.
B. Create another S3 bucket for each project for testing code, and use an AWS Lambda function to promote code changes between testing and production buckets. Enable versioning on all buckets to prevent code conflicts.
C. Create an AWS CodeCommit repository for each project, and use the main branch for production and test code with different deployment pipelines for each environment. Use feature branches to develop new features.
D. Enable versioning and branching on each S3 bucket, use the main branch for production code, and create a testing branch for code deployed to testing. Have developers use each branch for developing in each environment.
Explanation:
Creating an AWS CodeCommit repository for each project, using the main branch for production code, and creating a testing branch for code deployed to testing will meet the requirements. AWS CodeCommit is a managed revision control service that hosts Git repositories and works with all Git-based tools. By using feature branches to develop new features and pull requests to merge code to testing and main branches, the developers can avoid code conflicts and lost work, and also implement code reviews and approvals. Option B is incorrect because creating another S3 bucket for each project for testing code and using an AWS Lambda function to promote code changes between testing and production buckets will not provide the benefits of revision control, such as tracking changes, branching, merging, and collaborating. Option C is incorrect because using the main branch for production and test code with different deployment pipelines for each environment will not allow the developers to test their code changes before deploying them to production. Option D is incorrect because enabling versioning and branching on each S3 bucket will not work with Git-based tools and will not provide the same level of revision control as AWS CodeCommit.
References: AWS CodeCommit. Certified DevOps Engineer – Professional (DOP-C02) Study Guide (page 182).
Question # 12
A company is using AWS to run digital workloads. Each application team in the company has its own AWS account for application hosting. The accounts are consolidated in an organization in AWS Organizations.
The company wants to enforce security standards across the entire organization. To avoid noncompliance because of security misconfiguration, the company has enforced the use of AWS CloudFormation. A production support team can modify resources in the production environment by using the AWS Management Console to troubleshoot and resolve application-related issues.
A DevOps engineer must implement a solution to identify in near real time any AWS service misconfiguration that results in noncompliance. The solution must automatically remediate the issue within 15 minutes of identification. The solution also must track noncompliant resources and events in a centralized dashboard with accurate timestamps.
Which solution will meet these requirements with the LEAST development overhead?
A. Use CloudFormation drift detection to identify noncompliant resources. Use drift detection events from CloudFormation to invoke an AWS Lambda function for remediation. Configure the Lambda function to publish logs to an Amazon CloudWatch Logs log group. Configure an Amazon CloudWatch dashboard to use the log group for tracking.
B. Turn on AWS CloudTrail in the AWS accounts. Analyze CloudTrail logs by using Amazon Athena to identify noncompliant resources. Use AWS Step Functions to track query results on Athena for drift detection and to invoke an AWS Lambda function for remediation. For tracking, set up an Amazon QuickSight dashboard that uses Athena as the data source.
C. Turn on the configuration recorder in AWS Config in all the AWS accounts to identify noncompliant resources. Enable AWS Security Hub with the --no-enable-default-standards option in all the AWS accounts. Set up AWS Config managed rules and custom rules. Set up automatic remediation by using AWS Config conformance packs. For tracking, set up a dashboard on Security Hub in a designated Security Hub administrator account.
D. Turn on AWS CloudTrail in the AWS accounts. Analyze CloudTrail logs by using Amazon CloudWatch Logs to identify noncompliant resources. Use CloudWatch Logs filters for drift detection. Use Amazon EventBridge to invoke the Lambda function for remediation. Stream filtered CloudWatch logs to Amazon OpenSearch Service. Set up a dashboard on OpenSearch Service for tracking.
Explanation:
The best solution is to use AWS Config and AWS Security Hub to identify and remediate noncompliant resources across multiple AWS accounts. AWS Config enables continuous monitoring of the configuration of AWS resources and evaluates them against desired configurations. AWS Config can also automatically remediate noncompliant resources by using conformance packs, which are a collection of AWS Config rules and remediation actions that can be deployed as a single entity. AWS Security Hub provides a comprehensive view of the security posture of AWS accounts and resources. AWS Security Hub can aggregate and normalize the findings from AWS Config and other AWS services, as well as from partner solutions. AWS Security Hub can also be used to create a dashboard for tracking noncompliant resources and events in a centralized location.
The other options are not optimal because they either require more development overhead, do not provide near real time detection and remediation, or do not provide a centralized dashboard for tracking.
Option A is not optimal because CloudFormation drift detection is not a near real time solution. Drift detection has to be manually initiated on each stack or resource, or scheduled using a cron expression. Drift detection also does not provide remediation actions, so a custom Lambda function has to be developed and invoked. CloudWatch Logs and dashboards can be used for tracking, but they do not provide a comprehensive view of the security posture of the AWS accounts and resources.
Option B is not optimal because CloudTrail logs analysis using Athena is not a near real time solution. Athena queries have to be manually run or scheduled using a cron expression. Athena also does not provide remediation actions, so a custom Lambda function has to be developed and invoked. Step Functions can be used to orchestrate the query and remediation workflow, but it adds more complexity and cost. A QuickSight dashboard can be used for tracking, but it does not provide a comprehensive view of the security posture of the AWS accounts and resources.
Option D is not optimal because CloudTrail logs analysis using CloudWatch Logs is not a near real time solution. CloudWatch Logs filters have to be manually created or updated for each resource type and configuration change. CloudWatch Logs also does not provide remediation actions, so a custom Lambda function has to be developed and invoked. EventBridge can be used to trigger the Lambda function, but it adds more complexity and cost. An OpenSearch Service dashboard can be used for tracking, but it does not provide a comprehensive view of the security posture of the AWS accounts and resources.
References: AWS Config conformance packs. Introducing AWS Config conformance packs. Managing conformance packs across all accounts in your organization.
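For scale, a conformance pack is deployed with a single API call per account. A minimal boto3 sketch with a hypothetical pack name and template location:

```python
import boto3

config = boto3.client("config")

# Deploy a conformance pack: a bundle of Config rules plus remediation actions,
# defined in a template stored in S3.
config.put_conformance_pack(
    ConformancePackName="security-baseline",
    TemplateS3Uri="s3://my-conformance-packs/security-baseline.yaml",
)
```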
Question # 13
A DevOps engineer manages a company's Amazon Elastic Container Service (Amazon ECS) cluster. The cluster runs on several Amazon EC2 instances that are in an Auto Scaling group. The DevOps engineer must implement a solution that logs and reviews all stopped tasks for errors.
Which solution will meet these requirements?
A. Create an Amazon EventBridge rule to capture task state changes. Send the event to Amazon CloudWatch Logs. Use CloudWatch Logs Insights to investigate stopped tasks.
B. Configure tasks to write log data in the embedded metric format. Store the logs in Amazon CloudWatch Logs. Monitor the ContainerInstanceCount metric for changes.
C. Configure the EC2 instances to store logs in Amazon CloudWatch Logs. Create a CloudWatch Contributor Insights rule that uses the EC2 instance log data. Use the Contributor Insights rule to investigate stopped tasks.
D. Configure an EC2 Auto Scaling lifecycle hook for the EC2_INSTANCE_TERMINATING scale-in event. Write the SystemEventLog file to Amazon S3. Use Amazon Athena to query the log file for errors.
Explanation:
The best solution to log and review all stopped tasks for errors is to use Amazon EventBridge and Amazon CloudWatch Logs. Amazon EventBridge allows the DevOps engineer to create a rule that matches task state change events from Amazon ECS. The rule can then send the event data to Amazon CloudWatch Logs as the target. Amazon CloudWatch Logs can store and monitor the log data, and also provides CloudWatch Logs Insights, a feature that enables the DevOps engineer to interactively search and analyze the log data. Using CloudWatch Logs Insights, the DevOps engineer can filter and aggregate the log data based on various fields, such as cluster, task, container, and reason. This way, the DevOps engineer can easily identify and investigate the stopped tasks and their errors.
The other options are not as effective or efficient as the solution in option A. Option B is not suitable because the embedded metric format is designed for custom metrics, not for logging task state changes. Option C is not feasible because the EC2 instances do not store the task state change events in their logs. Option D is not relevant because the EC2_INSTANCE_TERMINATING lifecycle hook is triggered when an EC2 instance is terminated by the Auto Scaling group, not when a task is stopped by Amazon ECS.
References: Creating a CloudWatch Events Rule That Triggers on an Event – Amazon Elastic Container Service. Sending and Receiving Events Between AWS Accounts – Amazon EventBridge. Working with Log Data – Amazon CloudWatch Logs. Analyzing Log Data with CloudWatch Logs Insights – Amazon CloudWatch Logs. Embedded Metric Format – Amazon CloudWatch. Amazon EC2 Auto Scaling Lifecycle Hooks – Amazon EC2 Auto Scaling.
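A sketch of the EventBridge rule from option A, filtering for stopped tasks and delivering the events to a CloudWatch Logs log group; all names and ARNs are hypothetical, and the log group needs a resource policy that allows EventBridge to write to it:

```python
import json

import boto3

events = boto3.client("events")

# Capture only ECS task state changes that report a stopped task.
events.put_rule(
    Name="ecs-stopped-tasks",
    EventPattern=json.dumps({
        "source": ["aws.ecs"],
        "detail-type": ["ECS Task State Change"],
        "detail": {"lastStatus": ["STOPPED"]},
    }),
)

# Deliver matching events to a log group so CloudWatch Logs Insights can
# query fields such as detail.stoppedReason.
events.put_targets(
    Rule="ecs-stopped-tasks",
    Targets=[{
        "Id": "stopped-task-logs",
        "Arn": "arn:aws:logs:us-east-1:111111111111:log-group:/ecs/stopped-tasks",
    }],
)
```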
Question # 14
A company has deployed a critical application in two AWS Regions. The application uses an Application Load Balancer (ALB) in both Regions. The company has Amazon Route 53 alias DNS records for both ALBs.
The company uses Amazon Route 53 Application Recovery Controller (Route 53 ARC) to ensure that the application can fail over between the two Regions. The Route 53 ARC configuration includes a routing control for both Regions. The company uses Route 53 ARC to perform quarterly disaster recovery (DR) tests.
During the most recent DR test, a DevOps engineer accidentally turned off both routing controls. The company needs to ensure that at least one routing control is turned on at all times.
Which solution will meet these requirements?
A. In Route 53 ARC, create a new assertion safety rule. Apply the assertion safety rule to the two routing controls. Configure the rule with the ATLEAST type with a threshold of 1.
B. In Route 53 ARC, create a new gating safety rule. Apply the assertion safety rule to the two routing controls. Configure the rule with the OR type with a threshold of 1.
C. In Route 53 ARC, create a new resource set. Configure the resource set with an AWS::Route53::HealthCheck resource type. Specify the ARNs of the two routing controls as the target resource. Create a new readiness check for the resource set.
D. In Route 53 ARC, create a new resource set. Configure the resource set with an AWS::Route53RecoveryReadiness::DNSTargetResource resource type. Add the domain names of the two Route 53 alias DNS records as the target resource. Create a new readiness check for the resource set.
Explanation:
The correct solution is to create a new assertion safety rule in Route 53 ARC and apply it to the two routing controls. An assertion safety rule is a type of safety rule that ensures that a minimum number of routing controls are always enabled. The ATLEAST type of assertion safety rule specifies the minimum number of routing controls that must be enabled for the rule to evaluate as healthy. By setting the threshold to 1, the rule ensures that at least one routing control is always turned on. This prevents the scenario where both routing controls are accidentally turned off and the application becomes unavailable in both Regions.
The other solutions are incorrect because they do not use safety rules to prevent both routing controls from being turned off. A gating safety rule is a type of safety rule that prevents routing control state changes that violate the rule logic. The OR type of gating safety rule specifies that one or more routing controls must be enabled for the rule to evaluate as healthy. However, this rule does not prevent a user from turning off both routing controls manually. A resource set is a collection of resources that are tested for readiness by Route 53 ARC. A readiness check is a test that verifies that all the resources in a resource set are operational. However, these concepts are not related to routing control states or safety rules. Therefore, creating a new resource set and a new readiness check will not ensure that at least one routing control is turned on at all times.
References: Routing control in Amazon Route 53 Application Recovery Controller. Viewing and updating routing control states in Route 53 ARC. Creating a control panel in Route 53 ARC. Creating safety rules in Route 53 ARC.
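A sketch of the ATLEAST assertion rule from option A, using boto3 against the Route 53 ARC control configuration API (hosted in us-west-2); the control panel and routing control ARNs are hypothetical:

```python
import boto3

arc = boto3.client("route53-recovery-control-config", region_name="us-west-2")

# ATLEAST assertion rule: any state change that would leave fewer than one of
# the asserted routing controls turned on is rejected.
arc.create_safety_rule(
    AssertionRule={
        "Name": "at-least-one-region-active",
        "ControlPanelArn": "arn:aws:route53-recovery-control::111111111111:controlpanel/abc123",
        "AssertedControls": [
            "arn:aws:route53-recovery-control::111111111111:controlpanel/abc123/routingcontrol/region-a",
            "arn:aws:route53-recovery-control::111111111111:controlpanel/abc123/routingcontrol/region-b",
        ],
        "RuleConfig": {"Type": "ATLEAST", "Threshold": 1, "Inverted": False},
        "WaitPeriodMs": 5000,
    }
)
```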
Question # 15
A company manages a multi-tenant environment in its VPC and has configured Amazon GuardDuty for the corresponding AWS account. The company sends all GuardDuty findings to AWS Security Hub.
Traffic from suspicious sources is generating a large number of findings. A DevOps engineer needs to implement a solution to automatically deny traffic across the entire VPC when GuardDuty discovers a new suspicious source.
Which solution will meet these requirements?
A. Create a GuardDuty threat list. Configure GuardDuty to reference the list. Create an AWS Lambda function that will update the threat list. Configure the Lambda function to run in response to new Security Hub findings that come from GuardDuty.
B. Configure an AWS WAF web ACL that includes a custom rule group. Create an AWS Lambda function that will create a block rule in the custom rule group. Configure the Lambda function to run in response to new Security Hub findings that come from GuardDuty.
C. Configure a firewall in AWS Network Firewall. Create an AWS Lambda function that will create a Drop action rule in the firewall policy. Configure the Lambda function to run in response to new Security Hub findings that come from GuardDuty.
D. Create an AWS Lambda function that will create a GuardDuty suppression rule. Configure the Lambda function to run in response to new Security Hub findings that come from GuardDuty.
Explanation: https://aws.amazon.com/blogs/security/automatically-block-suspicious-traffic-with-aws-network-firewall-and-amazon-guardduty/
Question # 16
A company recently deployed its web application on AWS. The company is preparing for a large-scale sales event and must ensure that the web application can scale to meet the demand.
The application's frontend infrastructure includes an Amazon CloudFront distribution that has an Amazon S3 bucket as an origin. The backend infrastructure includes an Amazon API Gateway API, several AWS Lambda functions, and an Amazon Aurora DB cluster.
The company's DevOps engineer conducts a load test and identifies that the Lambda functions can fulfill the peak number of requests. However, the DevOps engineer notices request latency during the initial burst of requests. Most of the requests to the Lambda functions produce queries to the database. A large portion of the invocation time is used to establish database connections.
Which combination of steps will provide the application with the required scalability? (Select TWO.)
A. Configure a higher reserved concurrency for the Lambda functions.
B. Configure a higher provisioned concurrency for the Lambda functions.
C. Convert the DB cluster to an Aurora global database. Add additional Aurora Replicas in AWS Regions based on the locations of the company's customers.
D. Refactor the Lambda functions. Move the code blocks that initialize database connections into the function handlers.
E. Use Amazon RDS Proxy to create a proxy for the Aurora database. Update the Lambda functions to use the proxy endpoints for database connections.
Explanation:
The correct answer is B and E. Configuring a higher provisioned concurrency for the Lambda functions will ensure that the functions are ready to respond to the initial burst of requests without any cold start latency. Using Amazon RDS Proxy to create a proxy for the Aurora database will enable the Lambda functions to reuse existing database connections and reduce the overhead of establishing new ones. This will also improve the scalability and availability of the database by managing the connection pool size and handling failovers. Option A is incorrect because reserved concurrency only limits the number of concurrent executions for a function; it does not pre-warm them. Option C is incorrect because converting the DB cluster to an Aurora global database will not address the issue of database connection latency, and may introduce additional costs and complexity. Option D is incorrect because moving the code blocks that initialize database connections into the function handlers will not improve the performance or scalability of the Lambda functions, and may actually worsen the cold start latency.
References: AWS Lambda Provisioned Concurrency. Using Amazon RDS Proxy with AWS Lambda. Certified DevOps Engineer – Professional (DOP-C02) Study Guide (page 173).
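The connection-reuse point in option E comes down to where the connection is created. A minimal Lambda handler sketch, assuming a PyMySQL driver is packaged with the function and hypothetical environment variables hold the RDS Proxy endpoint and credentials:

```python
import os

import pymysql  # assumes the PyMySQL driver is packaged with the deployment artifact

# Created once per execution environment, outside the handler, so warm invocations
# (including provisioned-concurrency environments) reuse the same connection.
connection = pymysql.connect(
    host=os.environ["PROXY_ENDPOINT"],  # the RDS Proxy endpoint, not the cluster endpoint
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    database="app",  # hypothetical database name
)

def handler(event, context):
    # The handler only runs queries; it never pays the connection-setup cost.
    with connection.cursor() as cursor:
        cursor.execute("SELECT id, name FROM products LIMIT 10")
        return cursor.fetchall()
```

Provisioned concurrency (option B) is then applied to a published version or alias of the function, for example with aws lambda put-provisioned-concurrency-config --function-name my-fn --qualifier live --provisioned-concurrent-executions 100, where the function name and alias are hypothetical.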
Question # 17
A company's security policies require the use of security-hardened AMIs in production environments. A DevOps engineer has used EC2 Image Builder to create a pipeline that builds the AMIs on a recurring schedule.
The DevOps engineer needs to update the launch templates of the company's Auto Scaling groups. The Auto Scaling groups must use the newest AMIs during the launch of Amazon EC2 instances.
Which solution will meet these requirements with the MOST operational efficiency?
A. Configure an Amazon EventBridge rule to receive new AMI events from Image Builder. Target an AWS Systems Manager Run Command document that updates the launch templates of the Auto Scaling groups with the newest AMI ID.
B. Configure an Amazon EventBridge rule to receive new AMI events from Image Builder. Target an AWS Lambda function that updates the launch templates of the Auto Scaling groups with the newest AMI ID.
C. Configure the launch template to use a value from AWS Systems Manager Parameter Store for the AMI ID. Configure the Image Builder pipeline to update the Parameter Store value with the newest AMI ID.
D. Configure the Image Builder distribution settings to update the launch templates with the newest AMI ID. Configure the Auto Scaling groups to use the newest version of the launch template.
Explanation: The most operationally efficient solution is to use AWS Systems Manager Parameter Store to store the AMI ID and reference it in the launch template. This way, the launch template does not need to be updated every time a new AMI is created by Image Builder. Instead, the Image Builder pipeline can update the Parameter Store value with the newest AMI ID, and the Auto Scaling group can launch instances using the latest value from Parameter Store.
The other solutions require updating the launch template or creating a new version of it every time a new AMI is created, which adds complexity and overhead. Additionally, using EventBridge rules and Lambda functions or Run Command documents introduces additional dependencies and potential points of failure.
References: AWS Systems Manager Parameter Store. Using AWS Systems Manager parameters instead of AMI IDs in launch templates. Update an SSM parameter with Image Builder.
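A sketch of option C's launch template configuration, where the ImageId is a reference to a Systems Manager parameter that the Image Builder pipeline keeps current; the template and parameter names are hypothetical, and this assumes SSM parameter resolution in launch templates is supported for your use of Auto Scaling:

```python
import boto3

ec2 = boto3.client("ec2")

# The ImageId is a reference to an SSM parameter rather than a literal AMI ID,
# so every launch resolves the parameter's current value. The Image Builder
# pipeline's distribution settings keep the parameter updated.
ec2.create_launch_template_version(
    LaunchTemplateName="hardened-web",
    SourceVersion="1",  # copy the remaining settings from an existing version
    LaunchTemplateData={
        "ImageId": "resolve:ssm:/golden-ami/web/latest",
    },
)
```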
Question # 18
A company requires its internal business teams to launch resources through pre-approved AWS CloudFormation templates only. The security team requires automated monitoring when resources drift from their expected state.
Which strategy should be used to meet these requirements?
A. Allow users to deploy CloudFormation stacks using a CloudFormation service role only. Use CloudFormation drift detection to detect when resources have drifted from their expected state.
B. Allow users to deploy CloudFormation stacks using a CloudFormation service role only. Use AWS Config rules to detect when resources have drifted from their expected state.
C. Allow users to deploy CloudFormation stacks using AWS Service Catalog only. Enforce the use of a launch constraint. Use AWS Config rules to detect when resources have drifted from their expected state.
D. Allow users to deploy CloudFormation stacks using AWS Service Catalog only. Enforce the use of a template constraint. Use Amazon EventBridge notifications to detect when resources have drifted from their expected state.
Explanation:
The correct answer is C. Allowing users to deploy CloudFormation stacks using AWS Service Catalog only and enforcing the use of a launch constraint is the best way to ensure that the internal business teams launch resources through pre-approved CloudFormation templates only. AWS Service Catalog is a service that enables organizations to create and manage catalogs of IT services that are approved for use on AWS. A launch constraint is a rule that specifies the role that AWS Service Catalog assumes when launching a product. By using a launch constraint, the DevOps engineer can control the permissions that the users have when launching a product. Using AWS Config rules to detect when resources have drifted from their expected state is the best way to automate the monitoring of the resources. AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. AWS Config rules are custom or managed rules that AWS Config uses to evaluate whether your AWS resources comply with your desired configurations. By using AWS Config rules, the DevOps engineer can track the changes in the resources and identify any non-compliant resources.
Option A is incorrect because allowing users to deploy CloudFormation stacks using a CloudFormation service role only is not the best way to ensure that the internal business teams launch resources through pre-approved CloudFormation templates only. A CloudFormation service role is an IAM role that CloudFormation assumes to create, update, or delete the stack resources. By using a CloudFormation service role, the DevOps engineer can control the permissions that CloudFormation has when acting on the resources, but not the permissions that the users have when launching a stack. Therefore, option A does not prevent the users from launching resources that are not approved by the company. Using CloudFormation drift detection to detect when resources have drifted from their expected state is a valid way to monitor the resources, but it is not as automated and scalable as using AWS Config rules. CloudFormation drift detection is a feature that enables you to detect whether a stack's actual configuration differs, or has drifted, from its expected configuration. To use this feature, the DevOps engineer would need to manually initiate a drift detection operation on the stack or the stack resources, and then view the drift status and details in the CloudFormation console or API.
Option B is incorrect because allowing users to deploy CloudFormation stacks using a CloudFormation service role only is not the best way to ensure that the internal business teams launch resources through pre-approved CloudFormation templates only, as explained in option A. Using AWS Config rules to detect when resources have drifted from their expected state is a valid way to monitor the resources, as explained in option C.
Option D is incorrect because enforcing the use of a template constraint is not the best way to ensure that the internal business teams launch resources through pre-approved CloudFormation templates only. A template constraint is a rule that defines the values or properties that users can specify when launching a product. By using a template constraint, the DevOps engineer can control the parameters that the users can provide when launching a product, but not the permissions that the users have when launching a product. Therefore, option D does not prevent the users from launching resources that are not approved by the company.
Using Amazon EventBridge notifications to detect when resources have drifted from their expected state is a less reliable and consistent solution than using AWS Config rules. Amazon EventBridge is a service that enables you to connect your applications with data from a variety of sources. Amazon EventBridge can deliver a stream of real-time data from event sources, such as AWS services, and route that data to targets, such as AWS Lambda functions. However, to use this solution, the DevOps engineer would need to configure the event source, the event bus, the event rule, and the event target for each resource type that needs to be monitored, which is more complex and error-prone than using AWS Config rules.
Question # 19
A company is building a web and mobile application that uses a serverless architecture powered by AWS Lambda and Amazon API Gateway. The company wants to fully automate the backend Lambda deployment based on code that is pushed to the appropriate environment branch in an AWS CodeCommit repository.
The deployment must have the following:
• Separate environment pipelines for testing and production
• Automatic deployment that occurs for test environments only
Which steps should be taken to meet these requirements?
A. Configure a new AWS CodePipeline service. Create a CodeCommit repository for each environment. Set up CodePipeline to retrieve the source code from the appropriate repository. Set up the deployment step to deploy the Lambda functions with AWS CloudFormation.
B. Create two AWS CodePipeline configurations for test and production environments. Configure the production pipeline to have a manual approval step. Create a CodeCommit repository for each environment. Set up each CodePipeline to retrieve the source code from the appropriate repository. Set up the deployment step to deploy the Lambda functions with AWS CloudFormation.
C. Create two AWS CodePipeline configurations for test and production environments. Configure the production pipeline to have a manual approval step. Create one CodeCommit repository with a branch for each environment. Set up each CodePipeline to retrieve the source code from the appropriate branch in the repository. Set up the deployment step to deploy the Lambda functions with AWS CloudFormation.
D. Create an AWS CodeBuild configuration for test and production environments. Configure the production pipeline to have a manual approval step. Create one CodeCommit repository with a branch for each environment. Push the Lambda function code to an Amazon S3 bucket. Set up the deployment step to deploy the Lambda functions from the S3 bucket.
Explanation:
The correct approach to meet the requirements for separate environment pipelines and automatic deployment for test environments is to create two AWS CodePipeline configurations, one for each environment. The production pipeline should have a manual approval step to ensure that changes are reviewed before being deployed to production. A single AWS CodeCommit repository with separate branches for each environment allows for organized and efficient code management. Each CodePipeline retrieves the source code from the appropriate branch in the repository. The deployment step utilizes AWS CloudFormation to deploy the Lambda functions, ensuring that the infrastructure as code is maintained and version-controlled.
References: Using AWS Lambda with Amazon API Gateway. Tutorial: Using Lambda with API Gateway. Set Up a Continuous Deployment Pipeline Using AWS CodePipeline. Walkthrough: Building a pipeline for test and production stacks.
Question # 20
A healthcare services company is concerned about the growing costs of software licensing for an application for monitoring patient wellness. The company wants to create an audit process to ensure that the application is running exclusively on Amazon EC2 Dedicated Hosts. A DevOps engineer must create a workflow to audit the application to ensure compliance.
What steps should the engineer take to meet this requirement with the LEAST administrative overhead?
A. Use AWS Systems Manager Configuration Compliance. Use calls to the put-compliance-items API action to scan and build a database of noncompliant EC2 instances based on their host placement configuration. Use an Amazon DynamoDB table to store these instance IDs for fast access. Generate a report through Systems Manager by calling the list-compliance-summaries API action.
B. Use custom Java code running on an EC2 instance. Set up EC2 Auto Scaling for the instance depending on the number of instances to be checked. Send the list of noncompliant EC2 instance IDs to an Amazon SQS queue. Set up another worker instance to process instance IDs from the SQS queue and write them to Amazon DynamoDB. Use an AWS Lambda function to terminate noncompliant instance IDs obtained from the queue, and send them to an Amazon SNS email topic for distribution.
C. Use AWS Config. Identify all EC2 instances to be audited by enabling Config Recording on all Amazon EC2 resources for the Region. Create a custom AWS Config rule that triggers an AWS Lambda function by using the "config-rule-change-triggered" blueprint. Modify the Lambda evaluateCompliance() function to verify host placement and return a NON_COMPLIANT result if the instance is not running on an EC2 Dedicated Host. Use the AWS Config report to address noncompliant instances.
D. Use AWS CloudTrail. Identify all EC2 instances to be audited by analyzing all calls to the EC2 RunCommand API action. Invoke an AWS Lambda function that analyzes the host placement of the instance. Store the EC2 instance IDs of noncompliant resources in an Amazon RDS for MySQL DB instance. Generate a report by querying the RDS instance and exporting the query results to a CSV text file.
Explanation:
The correct answer is C. Using AWS Config to identify and audit all EC2 instances based on their host placement configuration is the most efficient and scalable solution to ensure compliance with the software licensing requirement. AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. By creating a custom AWS Config rule that triggers a Lambda function to verify host placement, the DevOps engineer can automate the process of checking whether the instances are running on EC2 Dedicated Hosts or not. The Lambda function can return a NON_COMPLIANT result if the instance is not running on an EC2 Dedicated Host, and the AWS Config report can provide a summary of the compliance status of the instances. This solution requires the least administrative overhead compared to the other options.
Option A is incorrect because using AWS Systems Manager Configuration Compliance to scan and build a database of noncompliant EC2 instances based on their host placement configuration is a more complex and costly solution than using AWS Config. AWS Systems Manager Configuration Compliance is a feature of AWS Systems Manager that enables you to scan your managed instances for patch compliance and configuration inconsistencies. To use this feature, the DevOps engineer would need to install the Systems Manager Agent on each EC2 instance, create a State Manager association to run the put-compliance-items API action periodically, and use a DynamoDB table to store the instance IDs of noncompliant resources. This solution would also require more API calls and storage costs than using AWS Config.
Option B is incorrect because using custom Java code running on an EC2 instance to check and terminate noncompliant EC2 instances is a more cumbersome and error-prone solution than using AWS Config. This solution would require the DevOps engineer to write and maintain the Java code, set up EC2 Auto Scaling for the instance, use an SQS queue and another worker instance to process the instance IDs, use a Lambda function and an SNS topic to terminate and notify the noncompliant instances, and handle any potential failures or exceptions in the workflow. This solution would also incur more compute, storage, and messaging costs than using AWS Config.
Option D is incorrect because using AWS CloudTrail to identify and audit EC2 instances by analyzing the EC2 RunCommand API action is a less reliable and accurate solution than using AWS Config. AWS CloudTrail is a service that enables you to monitor and log the API activity in your AWS account. The EC2 RunCommand API action is used to execute commands on one or more EC2 instances. However, this API action does not necessarily indicate the host placement of the instance, and it may not capture all the instances that are running on EC2 Dedicated Hosts or not. Therefore, option D would not provide a comprehensive and consistent audit of the EC2 instances.
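A minimal sketch of the evaluateCompliance logic from option C, written as a Lambda handler for a configuration-change-triggered AWS Config rule. The tenancy check against the configuration item is the key step, and the field access assumes the standard EC2 instance configuration item shape:

```python
import json

import boto3

config = boto3.client("config")

def lambda_handler(event, context):
    """Configuration-change-triggered AWS Config rule: EC2 instances must run
    on Dedicated Hosts (placement tenancy == "host")."""
    invoking_event = json.loads(event["invokingEvent"])
    item = invoking_event["configurationItem"]

    # Deleted-resource and oversized-item handling is omitted for brevity.
    tenancy = (item.get("configuration") or {}).get("placement", {}).get("tenancy")
    compliance = "COMPLIANT" if tenancy == "host" else "NON_COMPLIANT"

    config.put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": item["resourceType"],
            "ComplianceResourceId": item["resourceId"],
            "ComplianceType": compliance,
            "OrderingTimestamp": item["configurationItemCaptureTime"],
        }],
        ResultToken=event["resultToken"],
    )
```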