Question # 1
A DevOps engineer used an AWS CloudFormation custom resource to set up AD Connector. The AWS Lambda function ran and created AD Connector, but CloudFormation is not transitioning from CREATE_IN_PROGRESS to CREATE_COMPLETE.
Which action should the engineer take to resolve this issue?
A. Ensure the Lambda function code has exited successfully.
B. Ensure the Lambda function code returns a response to the pre-signed URL.
C. Ensure the Lambda function IAM role has cloudformation:UpdateStack permissions for the stack ARN.
D. Ensure the Lambda function IAM role has ds:ConnectDirectory permissions for the AWS account.
B. Ensure the Lambda function code returns a response to the pre-signed URL.
Explanation:
Reference: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/crpg-ref-responses.html
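For context, here is a minimal Python sketch (not taken verbatim from AWS documentation) of how a custom resource handler can send the required response to the pre-signed URL that CloudFormation includes in the event; the helper name and response data are illustrative.
import json
import urllib.request

def send_cfn_response(event, context, status, data=None, physical_resource_id=None):
    # Build the response body that CloudFormation expects from a custom resource.
    body = json.dumps({
        "Status": status,  # "SUCCESS" or "FAILED"
        "Reason": "See CloudWatch log stream: " + context.log_stream_name,
        "PhysicalResourceId": physical_resource_id or context.log_stream_name,
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": data or {},
    }).encode("utf-8")
    # PUT the body to the pre-signed URL so the stack can leave CREATE_IN_PROGRESS.
    request = urllib.request.Request(
        event["ResponseURL"],
        data=body,
        method="PUT",
        headers={"Content-Type": "", "Content-Length": str(len(body))},
    )
    urllib.request.urlopen(request)

def handler(event, context):
    try:
        # ... create AD Connector here, for example with boto3's ds.connect_directory ...
        send_cfn_response(event, context, "SUCCESS", data={"Message": "AD Connector created"})
    except Exception as exc:
        send_cfn_response(event, context, "FAILED", data={"Error": str(exc)})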
Question # 2
A company plans to use Amazon CloudWatch to monitor its Amazon EC2 instances. The company needs to stop EC2 instances when the average of the NetworkPacketsIn metric is less than 5 for at least 3 hours in a 12-hour time window. The company must evaluate the metric every hour. The EC2 instances must continue to run if there is missing data for the NetworkPacketsIn metric during the evaluation period.
A DevOps engineer creates a CloudWatch alarm for the NetworkPacketsIn metric. The DevOps engineer configures a threshold value of 5 and an evaluation period of 1 hour.
Which set of additional actions should the DevOps engineer take to meet these requirements?
A. Configure the Datapoints to Alarm value to be 3 out of 12. Configure the alarm to treat missing data as breaching the threshold. Add an AWS Systems Manager action to stop the instance when the alarm enters the ALARM state.
B. Configure the Datapoints to Alarm value to be 3 out of 12. Configure the alarm to treat missing data as not breaching the threshold. Add an EC2 action to stop the instance when the alarm enters the ALARM state.
C. Configure the Datapoints to Alarm value to be 9 out of 12. Configure the alarm to treat missing data as breaching the threshold. Add an EC2 action to stop the instance when the alarm enters the ALARM state.
D. Configure the Datapoints to Alarm value to be 9 out of 12. Configure the alarm to treat missing data as not breaching the threshold. Add an AWS Systems Manager action to stop the instance when the alarm enters the ALARM state.
B. Configure the Datapoints to Alarm value to be 3 out of 12. Configure the alarm to treat missing data as not breaching the threshold. Add an EC2 action to stop the instance when the alarm enters the ALARM state.
Explanation:
To meet the requirements, the DevOps engineer needs to configure the CloudWatch alarm to stop the EC2 instances when the average of the NetworkPacketsIn metric is less than 5 for at least 3 hours in a 12-hour time window. This means that the alarm should trigger when 3 out of 12 datapoints are below the threshold of 5. The alarm should also treat missing data as not breaching the threshold, so that the EC2 instances continue to run if there is no data for the metric during the evaluation period. The DevOps engineer can add an EC2 action to stop the instance when the alarm enters the ALARM state, which is a built-in action type for CloudWatch alarms.
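As a hedged illustration of that configuration (instance ID and Region are placeholders), the alarm could be created with boto3 as follows:
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # placeholder Region
cloudwatch.put_metric_alarm(
    AlarmName="stop-idle-instance",
    Namespace="AWS/EC2",
    MetricName="NetworkPacketsIn",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder instance
    Statistic="Average",
    Period=3600,                      # evaluate the metric every hour
    EvaluationPeriods=12,             # look at a 12-hour window
    DatapointsToAlarm=3,              # alarm when 3 of the 12 hourly datapoints breach
    Threshold=5,
    ComparisonOperator="LessThanThreshold",
    TreatMissingData="notBreaching",  # missing data keeps the instance running
    AlarmActions=["arn:aws:automate:us-east-1:ec2:stop"],  # built-in EC2 stop action
)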
Question # 3
A DevOps engineer is designing an application that integrates with a legacy REST API. The application has an AWS Lambda function that reads records from an Amazon Kinesis data stream. The Lambda function sends the records to the legacy REST API.
Approximately 10% of the records that the Lambda function sends from the Kinesis data stream have data errors and must be processed manually. The Lambda function event source configuration has an Amazon Simple Queue Service (Amazon SQS) dead-letter queue as an on-failure destination. The DevOps engineer has configured the Lambda function to process records in batches and has implemented retries in case of failure.
During testing, the DevOps engineer notices that the dead-letter queue contains many records that have no data errors and that have already been processed by the legacy REST API. The DevOps engineer needs to configure the Lambda function's event source options to reduce the number of errorless records that are sent to the dead-letter queue.
Which solution will meet these requirements?
A. Increase the retry attempts.
B. Configure the setting to split the batch when an error occurs.
C. Increase the concurrent batches per shard.
D. Decrease the maximum age of record.
B. Configure the setting to split the batch when an error occurs
Explanation:
This solution will meet the requirements because it will reduce the number of errorless records that are sent to the dead-letter queue. When you configure the setting to split the batch when an error occurs, Lambda will retry only the records that caused the error, instead of retrying the entire batch. This way, the records that have no data errors and have already been processed by the legacy REST API will not be retried and sent to the dead-letter queue unnecessarily.
https://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.html
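A minimal sketch of how this setting could be applied to the existing Kinesis event source mapping with boto3 (the mapping UUID and queue ARN are placeholders):
import boto3

lambda_client = boto3.client("lambda")
lambda_client.update_event_source_mapping(
    UUID="11111111-2222-3333-4444-555555555555",  # placeholder mapping UUID
    BisectBatchOnFunctionError=True,              # split the batch when an error occurs
    MaximumRetryAttempts=2,                       # keep a bounded number of retries
    DestinationConfig={
        "OnFailure": {
            "Destination": "arn:aws:sqs:us-east-1:111122223333:record-dlq"  # placeholder DLQ ARN
        }
    },
)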
Question # 4
A company has developed a static website hosted on an Amazon S3 bucket. The website is deployed using AWS CloudFormation. The CloudFormation template defines an S3 bucket and a custom resource that copies content into the bucket from a source location.
The company has decided that it needs to move the website to a new location, so the existing CloudFormation stack must be deleted and re-created. However, CloudFormation reports that the stack could not be deleted cleanly.
What is the MOST likely cause and how can the DevOps engineer mitigate this problem for this and future versions of the website?
A. Deletion has failed because the S3 bucket has an active website configuration. Modify the CloudFormation template to remove the WebsiteConfiguration property from the S3 bucket resource.
B. Deletion has failed because the S3 bucket is not empty. Modify the custom resource's AWS Lambda function code to recursively empty the bucket when RequestType is Delete.
C. Deletion has failed because the custom resource does not define a deletion policy. Add a DeletionPolicy property to the custom resource definition with a value of RemoveOnDeletion.
D. Deletion has failed because the S3 bucket is not empty. Modify the S3 bucket resource in the CloudFormation template to add a DeletionPolicy property with a value of Empty.
B. Deletion has failed because the S3 bucket is not empty. Modify the custom resource's AWS Lambda function code to recursively empty the bucket when RequestType is Delete.
Explanation:
Step 1: Understanding the deletion failure. The most likely reason why the CloudFormation stack failed to delete is that the S3 bucket was not empty. AWS CloudFormation cannot delete an S3 bucket that contains objects, so if the website files are still in the bucket, the deletion fails.
Issue: The S3 bucket is not empty during deletion, which prevents the stack from being deleted.
Step 2: Modifying the custom resource to handle deletion. To mitigate this issue, modify the Lambda function associated with the custom resource so that it automatically empties the S3 bucket when the stack is being deleted. By adding logic that handles the RequestType: Delete event, the function can recursively delete all objects in the bucket before CloudFormation deletes it.
Action: Modify the Lambda function to recursively delete the objects in the S3 bucket when RequestType is set to Delete.
Why: This ensures that the S3 bucket is empty before CloudFormation tries to delete it, preventing the stack deletion failure.
Reference: AWS documentation on CloudFormation custom resources.
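A minimal sketch of the Delete-path logic, assuming the bucket name is passed to the custom resource as a ResourceProperties value (the property name is illustrative):
import boto3

s3 = boto3.resource("s3")

def handler(event, context):
    # The bucket name is assumed to arrive as a custom resource property.
    bucket_name = event["ResourceProperties"]["BucketName"]
    if event["RequestType"] == "Delete":
        bucket = s3.Bucket(bucket_name)
        # Remove every object and object version so CloudFormation can delete the bucket.
        bucket.object_versions.delete()
    # ... then send the usual SUCCESS/FAILED response to the pre-signed URL ...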
Question # 5
A company is using AWS Organizations to centrally manage its AWS accounts. The company has turned on AWS Config in each member account by using AWS CloudFormation StackSets. The company has configured trusted access in Organizations for AWS Config and has configured a member account as a delegated administrator account for AWS Config.
A DevOps engineer needs to implement a new security policy. The policy must require all current and future AWS member accounts to use a common baseline of AWS Config rules that contain remediation actions that are managed from a central account. Non-administrator users who can access member accounts must not be able to modify this common baseline of AWS Config rules that are deployed into each member account.
Which solution will meet these requirements?
A. Create a CloudFormation template that contains the AWS Config rules and remediation actions. Deploy the template from the Organizations management account by using CloudFormation StackSets.
B. Create an AWS Config conformance pack that contains the AWS Config rules and remediation actions. Deploy the pack from the Organizations management account by using CloudFormation StackSets.
C. Create a CloudFormation template that contains the AWS Config rules and remediation actions. Deploy the template from the delegated administrator account by using AWS Config.
D. Create an AWS Config conformance pack that contains the AWS Config rules and remediation actions. Deploy the pack from the delegated administrator account by using AWS Config.
D. Create an AWS Config conformance pack that contains the AWS Config rules and remediation actions. Deploy the pack from the delegated administrator account by using AWS Config.
Explanation:
The correct answer is D. Creating an AWS Config conformance pack that contains the AWS Config rules and remediation actions and deploying it from the delegated administrator account by using AWS Config will meet the requirements. A conformance pack is a collection of AWS Config rules and remediation actions that can be deployed as a single entity in an account and a Region, or across an organization in AWS Organizations. By using the delegated administrator account, the DevOps engineer can centrally manage the conformance pack and prevent non-administrator users from modifying it in the member accounts.
Option A is incorrect because creating a CloudFormation template that contains the AWS Config rules and remediation actions and deploying it from the Organizations management account by using CloudFormation StackSets will not prevent non-administrator users from modifying the AWS Config rules in the member accounts. Option B is incorrect because deploying the conformance pack from the Organizations management account by using CloudFormation StackSets will not use the trusted access feature of AWS Config and will require additional permissions and resources.
Option C is incorrect because creating a CloudFormation template that contains the AWS Config rules and remediation actions and deploying it from the delegated administrator account by using AWS Config will not leverage the benefits of conformance packs, such as simplified deployment and management.
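As a hedged sketch of option D, an organization-wide conformance pack could be deployed from the delegated administrator account with boto3 (the pack name and template location are placeholders):
import boto3

config = boto3.client("config")  # run in the delegated administrator account
config.put_organization_conformance_pack(
    OrganizationConformancePackName="baseline-config-rules",           # placeholder name
    TemplateS3Uri="s3://central-config-templates/baseline-pack.yaml",  # placeholder template URI
    # ExcludedAccounts=["111122223333"],  # optionally exclude specific accounts
)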
References:
Conformance Packs - AWS Config
Certified DevOps Engineer - Professional (DOP-C02) Study Guide (page 176)
Question # 6
A company's developers use Amazon EC2 instances as remote workstations. The company is concerned that users can create or modify EC2 security groups to allow unrestricted inbound access.
A DevOps engineer needs to develop a solution to detect when users create unrestricted security group rules. The solution must detect changes to security group rules in near real time, remove unrestricted rules, and send email notifications to the security team. The DevOps engineer has created an AWS Lambda function that takes a security group ID as input, removes rules that grant unrestricted access, and sends notifications through Amazon Simple Notification Service (Amazon SNS).
What should the DevOps engineer do next to meet the requirements?
A. Configure the Lambda function to be invoked by the SNS topic. Create an AWS CloudTrail subscription for the SNS topic. Configure a subscription filter for security group modification events.
B. Create an Amazon EventBridge scheduled rule to invoke the Lambda function. Define a schedule pattern that runs the Lambda function every hour.
C. Create an Amazon EventBridge event rule that has the default event bus as the source. Define the rule’s event pattern to match EC2 security group creation and modification events. Configure the rule to invoke the Lambda function.
D. Create an Amazon EventBridge custom event bus that subscribes to events from all AWS services. Configure the Lambda function to be invoked by the custom event bus.
C. Create an Amazon EventBridge event rule that has the default event bus as the source. Define the rule’s event pattern to match EC2 security group creation and modification events. Configure the rule to invoke the Lambda function.
Explanation:
To meet the requirements, the DevOps engineer should create an Amazon EventBridge event rule that has the default event bus as the source. The rule's event pattern should match EC2 security group creation and modification events, and it should be configured to invoke the Lambda function. This solution will allow for near real-time detection of security group rule changes and will trigger the Lambda function to remove any unrestricted rules and send email notifications to the security team.
https://repost.aws/knowledge-center/monitor-security-group-changes-ec2
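A hedged sketch of that EventBridge wiring with boto3; the rule name, the specific API calls matched, and the function ARN are illustrative, and the Lambda function also needs a resource-based permission that allows EventBridge to invoke it:
import json
import boto3

events = boto3.client("events")
events.put_rule(
    Name="detect-security-group-changes",
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["ec2.amazonaws.com"],
            "eventName": [
                "CreateSecurityGroup",
                "AuthorizeSecurityGroupIngress",
                "RevokeSecurityGroupIngress",
            ],
        },
    }),
    State="ENABLED",
)
events.put_targets(
    Rule="detect-security-group-changes",
    Targets=[{
        "Id": "remediation-lambda",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:sg-remediation",  # placeholder ARN
    }],
)
# The Lambda function additionally needs an add-permission statement allowing
# events.amazonaws.com to invoke it.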
Question # 7
A company has an organization in AWS Organizations. The organization includes workload accounts that contain enterprise applications. The company centrally manages users from an operations account. No users can be created in the workload accounts. The company recently added an operations team and must provide the operations team members with administrator access to each workload account.
Which combination of actions will provide this access? (Choose three.)
A. Create a SysAdmin role in the operations account. Attach the AdministratorAccess policy to the role. Modify the trust relationship to allow the sts:AssumeRole action from the workload accounts.
B. Create a SysAdmin role in each workload account. Attach the AdministratorAccess policy to the role. Modify the trust relationship to allow the sts:AssumeRole action from the operations account.
C. Create an Amazon Cognito identity pool in the operations account. Attach the SysAdmin role as an authenticated role.
D. In the operations account, create an IAM user for each operations team member.
E. In the operations account, create an IAM user group that is named SysAdmins. Add an IAM policy that allows the sts:AssumeRole action for the SysAdmin role in each workload account. Add all operations team members to the group.
B. Create a SysAdmin role in each workload account. Attach the AdministratorAccess policy to the role. Modify the trust relationship to allow the sts:AssumeRole action from the operations account.
D. In the operations account, create an IAM user for each operations team member.
E. In the operations account, create an IAM user group that is named SysAdmins. Add an IAM policy that allows the sts:AssumeRole action for the SysAdmin role in each workload account. Add all operations team members to the group.
Explanation:
https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html
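For illustration, the two policy documents behind options B and E might look like the following sketch; all account IDs and the role name are placeholders:
import json

# Trust policy on the SysAdmin role in each workload account: allow the operations
# account to assume the role.
sysadmin_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:root"},  # operations account (placeholder)
        "Action": "sts:AssumeRole",
    }],
}

# Policy attached to the SysAdmins group in the operations account: allow assuming
# the SysAdmin role in each workload account.
sysadmins_group_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": [
            "arn:aws:iam::222222222222:role/SysAdmin",  # workload account 1 (placeholder)
            "arn:aws:iam::333333333333:role/SysAdmin",  # workload account 2 (placeholder)
        ],
    }],
}

print(json.dumps(sysadmin_trust_policy, indent=2))
print(json.dumps(sysadmins_group_policy, indent=2))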
Question # 8
A company is hosting a static website from an Amazon S3 bucket. The website is available to customers at example.com. The company uses an Amazon Route 53 weighted routing policy with a TTL of 1 day. The company has decided to replace the existing static website with a dynamic web application. The dynamic web application uses an Application Load Balancer (ALB) in front of a fleet of Amazon EC2 instances.
On the day of production launch to customers, the company creates an additional Route 53 weighted DNS record entry that points to the ALB with a weight of 255 and a TTL of 1 hour. Two days later, a DevOps engineer notices that the previous static website is displayed sometimes when customers navigate to example.com.
How can the DevOps engineer ensure that the company serves only dynamic content for example.com?
A. Delete all objects, including previous versions, from the S3 bucket that contains the static website content.
B. Update the weighted DNS record entry that points to the S3 bucket. Apply a weight of 0. Specify the domain reset option to propagate changes immediately.
C. Configure webpage redirect requests on the S3 bucket with a hostname that redirects to the ALB.
D. Remove the weighted DNS record entry that points to the S3 bucket from the example.com hosted zone. Wait for DNS propagation to become complete.
D. Remove the weighted DNS record entry that points to the S3 bucket from the example.com hosted zone. Wait for DNS propagation to become complete.
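A hedged sketch of removing that weighted record with boto3; the hosted zone ID, set identifier, weight, and alias values are placeholders, and a DELETE change must match the existing record exactly:
import boto3

route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",  # placeholder hosted zone ID for example.com
    ChangeBatch={
        "Changes": [{
            "Action": "DELETE",
            "ResourceRecordSet": {  # must match the existing weighted record exactly
                "Name": "example.com.",
                "Type": "A",
                "SetIdentifier": "static-s3-website",  # placeholder set identifier
                "Weight": 0,                           # the weight of the existing record
                "AliasTarget": {
                    "HostedZoneId": "Z3AQBSTGFYJSTF",  # example S3 website endpoint zone ID
                    "DNSName": "s3-website-us-east-1.amazonaws.com.",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)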
Question # 9
A company uses Amazon RDS for all databases in its AWS accounts. The company uses AWS Control Tower to build a landing zone that has an audit and logging account. All databases must be encrypted at rest for compliance reasons. The company's security engineer needs to receive notification about any noncompliant databases that are in the company's accounts.
Which solution will meet these requirements with the MOST operational efficiency?
A. Use AWS Control Tower to activate the optional detective control (guardrail) to determine whether the RDS storage is encrypted. Create an Amazon Simple Notification Service (Amazon SNS) topic in the company's audit account. Create an Amazon EventBridge rule to filter noncompliant events from the AWS Control Tower control (guardrail) and notify the SNS topic. Subscribe the security engineer's email address to the SNS topic.
B. Use AWS CloudFormation StackSets to deploy AWS Lambda functions to every account. Write the Lambda function code to determine whether the RDS storage is encrypted in the account the function is deployed to. Send the findings as an Amazon CloudWatch metric to the management account. Create an Amazon Simple Notification Service (Amazon SNS) topic. Create a CloudWatch alarm that notifies the SNS topic when metric thresholds are met. Subscribe the security engineer's email address to the SNS topic.
C. Create a custom AWS Config rule in every account to determine whether the RDS storage is encrypted. Create an Amazon Simple Notification Service (Amazon SNS) topic in the audit account. Create an Amazon EventBridge rule to filter noncompliant events from the AWS Control Tower control (guardrail) and notify the SNS topic. Subscribe the security engineer's email address to the SNS topic.
D. Launch an Amazon EC2 instance. Run an hourly cron job by using the AWS CLI to determine whether the RDS storage is encrypted in each AWS account. Store the results in an RDS database. Notify the security engineer by sending email messages from the EC2 instance when noncompliance is detected.
A. Use AWS Control Tower to activate the optional detective control (guardrail) to determine whether the RDS storage is encrypted. Create an Amazon Simple Notification Service (Amazon SNS) topic in the company's audit account. Create an Amazon EventBridge rule to filter noncompliant events from the AWS Control Tower control (guardrail) and notify the SNS topic. Subscribe the security engineer's email address to the SNS topic.
Explanation:
Activate an AWS Control Tower guardrail: Use AWS Control Tower to activate the optional detective guardrail that checks whether RDS storage is encrypted.
Create an SNS topic for notifications: Set up an Amazon Simple Notification Service (Amazon SNS) topic in the audit account to receive notifications about noncompliant databases.
Create an EventBridge rule to filter noncompliant events: Create an Amazon EventBridge rule that filters events related to the guardrail's findings on noncompliant RDS instances, and configure the rule to send notifications to the SNS topic when noncompliant events are detected.
Subscribe the security engineer's email to the SNS topic: The security engineer's email address is subscribed to the SNS topic so that the engineer is notified when noncompliant databases are detected.
By using AWS Control Tower to activate a detective guardrail and setting up SNS notifications for non-compliant events, the company can efficiently monitor and ensure that all RDS databases are encrypted at rest.
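A hedged sketch of the notification wiring in the audit account; the topic name, rule name, email address, and the assumed rule-name prefix for the guardrail's underlying Config rule are placeholders:
import json
import boto3

sns = boto3.client("sns", region_name="us-east-1")       # audit account, placeholder Region
events = boto3.client("events", region_name="us-east-1")

topic_arn = sns.create_topic(Name="rds-encryption-noncompliance")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="security@example.com")  # placeholder
events.put_rule(
    Name="rds-storage-encrypted-noncompliant",
    EventPattern=json.dumps({
        "source": ["aws.config"],
        "detail-type": ["Config Rules Compliance Change"],
        "detail": {
            "newEvaluationResult": {"complianceType": ["NON_COMPLIANT"]},
            "configRuleName": [{"prefix": "rds-storage-encrypted"}],  # assumed rule-name prefix
        },
    }),
    State="ENABLED",
)
events.put_targets(
    Rule="rds-storage-encrypted-noncompliant",
    Targets=[{"Id": "notify-security", "Arn": topic_arn}],
)
# The SNS topic's access policy must also allow events.amazonaws.com to publish to it.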
References:
AWS Control Tower Guardrails
Amazon SNS
Amazon EventBridge
Question # 10
A company wants to set up a continuous delivery pipeline. The company stores application code in a private GitHub repository. The company needs to deploy the application components to Amazon Elastic Container Service (Amazon ECS), Amazon EC2, and AWS Lambda. The pipeline must support manual approval actions.
Which solution will meet these requirements?
A. Use AWS CodePipeline with Amazon ECS, Amazon EC2, and Lambda as deploy providers.
B. Use AWS CodePipeline with AWS CodeDeploy as the deploy provider.
C. Use AWS CodePipeline with AWS Elastic Beanstalk as the deploy provider.
D. Use AWS CodeDeploy with GitHub integration to deploy the application.
B. Use AWS CodePipeline with AWS CodeDeploy as the deploy provider.
Explanation:
https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-steps.html
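For illustration, the approval and deploy stages of such a pipeline could be declared as follows; the stage and action names, CodeDeploy application, deployment group, and artifact name are placeholders, and the source/build stages, pipeline role, and artifact store are omitted:
# Only the stages relevant to the question are shown.
approval_and_deploy_stages = [
    {
        "name": "Approval",
        "actions": [{
            "name": "ManualApproval",
            "actionTypeId": {"category": "Approval", "owner": "AWS", "provider": "Manual", "version": "1"},
            "runOrder": 1,
        }],
    },
    {
        "name": "Deploy",
        "actions": [{
            "name": "DeployToEC2",
            "actionTypeId": {"category": "Deploy", "owner": "AWS", "provider": "CodeDeploy", "version": "1"},
            "configuration": {
                "ApplicationName": "my-application",           # placeholder CodeDeploy application
                "DeploymentGroupName": "my-deployment-group",  # placeholder deployment group
            },
            "inputArtifacts": [{"name": "BuildOutput"}],       # placeholder artifact name
            "runOrder": 1,
        }],
    },
]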