DOP-C01 Guide

Actual Amazon-Web-Services DOP-C01 Practice Test Online

Exam Code: DOP-C01 (Practice Exam Latest Test Questions VCE PDF)
Exam Name: AWS Certified DevOps Engineer - Professional
Certification Provider: Amazon-Web-Services
Free Today! Guaranteed Training- Pass DOP-C01 Exam.

Check DOP-C01 free dumps before getting the full version:

NEW QUESTION 1
You have an OpsWorks stack set up in AWS. You want to install some updates to the Linux instances in the stack. Which of the following can be used to publish those updates? Choose 2 answers from the options given below.

  • A. Create and start new instances to replace your current online instances. Then delete the current instances.
  • B. Use Auto Scaling to launch new instances and then delete the older instances.
  • C. On Linux-based instances in Chef 11.10 or older stacks, run the Update Dependencies stack command.
  • D. Delete the stack and create a new stack with the instances and their relevant updates.

Answer: AC

Explanation:
As per AWS documentation.
By default, AWS OpsWorks Stacks automatically installs the latest updates during setup, after an instance finishes booting. AWS OpsWorks Stacks does not automatically install updates after an instance is online, to avoid interruptions such as restarting application servers. Instead, you manage updates to your online instances yourself, so you can minimize any disruptions.
We recommend that you use one of the following to update your online instances.
•Create and start new instances to replace your current online instances. Then delete the current instances.
The new instances will have the latest set of security patches installed during setup.
•On Linux-based instances in Chef 11.10 or older stacks, run the Update Dependencies stack command, which installs the current set of security patches and other updates
on the specified instances.
More information is available at: https://docs.aws.amazon.com/opsworks/latest/userguide/workingsecurity-updates.html

NEW QUESTION 2
You are working for a company that has an on-premises infrastructure. There is now a decision to move to AWS. The plan is to move the development environment first. There are a lot of custom-built applications that need to be deployed for the development community. Which of the following can help to implement the application for the development team?
Choose 2 answers from the options below.

  • A. Create Docker containers for the custom application components.
  • B. Use OpsWorks to deploy the Docker containers.
  • C. Use Elastic Beanstalk to deploy the Docker containers.
  • D. Use CloudFormation to deploy the Docker containers.

Answer: AC

Explanation:
The AWS documentation states the following for Docker containers on Elastic Beanstalk:
Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools), that aren't supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run.
For more information on Docker containers and Elastic Beanstalk, please visit the below URL: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html
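To make the Beanstalk Docker option concrete, the sketch below builds a minimal single-container `Dockerrun.aws.json` as a Python dict. The image name and port are invented for illustration, not values from the question.

```python
import json

# Hypothetical minimal Dockerrun.aws.json (version 1, single container)
# for an Elastic Beanstalk Docker environment. Image name and port are
# illustrative assumptions.
dockerrun = {
    "AWSEBDockerrunVersion": "1",
    "Image": {"Name": "my-account/custom-app:latest", "Update": "true"},
    "Ports": [{"ContainerPort": "8080"}],
}

print(json.dumps(dockerrun, indent=2))
```

Uploading a file of this shape (zipped with any supporting assets) as an application version is what lets Beanstalk pull and run the container.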

NEW QUESTION 3
Your company is planning to develop an application in which the front end is in .Net and the backend is in DynamoDB. There is an expectation of a high load on the application. How could you ensure the scalability of the application to reduce the load on the DynamoDB database? Choose an answer from the options below.

  • A. Add more DynamoDB databases to handle the load.
  • B. Increase the write capacity of DynamoDB to meet the peak loads.
  • C. Use SQS to assist, and let the application pull messages and then perform the relevant operation in DynamoDB.
  • D. Launch DynamoDB in a Multi-AZ configuration with a global index to balance writes.

Answer: C

Explanation:
When scalability is the requirement, SQS is the best option here. DynamoDB itself is scalable, but since a cost-effective way to reduce the load on the database is needed, queuing the writes in SQS can manage the situation described in the question.
Amazon Simple Queue Service (SQS) is a fully-managed message queuing service for reliably communicating among distributed software components and microservices - at any scale. Building applications from individual components that each perform a discrete function improves scalability and reliability, and is best practice design for modern applications. SQS makes it simple and cost- effective to decouple and coordinate the components of a cloud application. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be always available
For more information on SQS, please refer to the below URL:
• https://aws.amazon.com/sqs/
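The decoupling pattern the answer describes can be sketched locally. This is a stand-in, not real SQS or DynamoDB: a `queue.Queue` plays the SQS role and a dict plays the table, to show that the front end enqueues quickly while workers apply writes at their own pace.

```python
from queue import Queue

# Local stand-in for the SQS-based decoupling pattern: the front end
# enqueues write requests; a worker drains the queue and applies the
# writes to the table (a dict standing in for DynamoDB).
messages = Queue()
dynamodb_table = {}

def front_end_submit(item_id, payload):
    messages.put({"id": item_id, "payload": payload})  # fast, non-blocking

def worker_poll():
    while not messages.empty():
        msg = messages.get()
        dynamodb_table[msg["id"]] = msg["payload"]  # throttled write path

for i in range(5):
    front_end_submit(i, f"order-{i}")
worker_poll()
print(len(dynamodb_table))  # all 5 writes applied by the worker
```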

NEW QUESTION 4
You are currently using SQS to pass messages to EC2 instances. You need to pass messages which are greater than 5 MB in size. Which of the following can help you accomplish this?

  • A. Use Kinesis as a buffer stream for message bodies. Store the checkpoint id for the placement in the Kinesis stream in SQS.
  • B. Use the Amazon SQS Extended Client Library for Java and Amazon S3 as a storage mechanism for message bodies.
  • C. Use SQS's support for message partitioning and multi-part uploads on Amazon S3.
  • D. Use AWS EFS as a shared pool storage medium. Store filesystem pointers to the files on disk in the SQS message bodies.

Answer: B

Explanation:
The AWS documentation mentions the following
You can manage Amazon SQS messages with Amazon S3. This is especially useful for storing and consuming messages with a message size of up to 2 GB. To manage Amazon SQS messages with Amazon S3, use the Amazon SQS Extended Client Library for Java. Specifically, you use this library to:
• Specify whether messages are always stored in Amazon S3 or only when a message's size exceeds 256 KB.
• Send a message that references a single message object stored in an Amazon S3 bucket.
• Get the corresponding message object from an Amazon S3 bucket.
• Delete the corresponding message object from an Amazon S3 bucket.
For more information on SQS and sending larger messages please visit the link
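The extended-client pattern can be modeled without AWS at all. Below, a dict stands in for the S3 bucket and a list for the queue; the send path stores oversized payloads in "S3" and enqueues only a pointer, mirroring the 256 KB threshold behavior described above. This is a sketch of the pattern, not the real library's API.

```python
THRESHOLD = 256 * 1024  # the SQS payload limit the extended client works around

s3_bucket = {}   # stand-in for the S3 bucket holding large payloads
queue = []       # stand-in for the SQS queue

def send_message(body: str):
    if len(body.encode()) > THRESHOLD:
        key = f"msg-{len(s3_bucket)}"
        s3_bucket[key] = body
        queue.append({"s3_pointer": key})   # queue carries only a reference
    else:
        queue.append({"body": body})

def receive_message():
    msg = queue.pop(0)
    if "s3_pointer" in msg:
        return s3_bucket[msg["s3_pointer"]]  # fetch the real body from "S3"
    return msg["body"]

send_message("small")
send_message("x" * (5 * 1024 * 1024))  # a 5 MB payload goes via S3
print(receive_message())  # -> small
```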

NEW QUESTION 5
Which of the following CLI commands can be used to describe the stack resources?

  • A. aws cloudformation describe-stack
  • B. aws cloudformation describe-stack-resources
  • C. aws cloudformation list-stack-resources
  • D. aws cloudformation list-stack

Answer: C

Explanation:
This is given in the AWS Documentation list-stack-resources
Description
Returns descriptions of all resources of the specified stack.
For deleted stacks, ListStackResources returns resource information for up to 90 days after the stack has been deleted.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
list-stack-resources is a paginated operation. Multiple API calls may be issued in order to retrieve the entire data set of results. You can disable pagination by providing the --no-paginate argument. When using --output text and the --query argument on a paginated response, the --query argument must extract data from the results of the following query expressions: StackResourceSummaries.
For more information on the CLI command, please visit the below URL:
http://docs.aws.amazon.com/cli/latest/reference/cloudformation/list-stack-resources.html
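The `--query 'StackResourceSummaries[].LogicalResourceId'` style extraction mentioned above can be illustrated against a trimmed, hypothetical response shaped like the `list-stack-resources` output (the resource names are invented):

```python
import json

# A trimmed, hypothetical response shaped like
# `aws cloudformation list-stack-resources` output.
response = json.loads("""
{
  "StackResourceSummaries": [
    {"LogicalResourceId": "WebServer", "ResourceType": "AWS::EC2::Instance",
     "ResourceStatus": "CREATE_COMPLETE"},
    {"LogicalResourceId": "AppBucket", "ResourceType": "AWS::S3::Bucket",
     "ResourceStatus": "CREATE_COMPLETE"}
  ]
}
""")

# Equivalent of --query 'StackResourceSummaries[].LogicalResourceId'
ids = [r["LogicalResourceId"] for r in response["StackResourceSummaries"]]
print(ids)  # -> ['WebServer', 'AppBucket']
```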

NEW QUESTION 6
You have a web application hosted on EC2 instances. There are application changes which happen to the web application on a quarterly basis. Which of the following are examples of Blue Green deployments which can be applied to the application? Choose 2 answers from the options given below.

  • A. Deploy the application to an Elastic Beanstalk environment. Have a secondary Elastic Beanstalk environment in place with the updated application code. Use the swap URLs feature to switch onto the new environment.
  • B. Place the EC2 instances behind an ELB. Have a secondary environment with EC2 instances and an ELB in another region. Use Route53 with geo-location to route requests and switch over to the secondary environment.
  • C. Deploy the application using OpsWorks stacks. Have a secondary stack for the new application deployment. Use Route53 to switch over to the new stack for the new application update.
  • D. Deploy the application to an Elastic Beanstalk environment. Use the Rolling updates feature to perform a Blue Green deployment.

Answer: AC

Explanation:
The AWS Documentation mentions the following
AWS Elastic Beanstalk is a fast and simple way to get an application up and running on AWS. It's perfect for developers who want to deploy code without worrying about managing the underlying infrastructure. Elastic Beanstalk supports Auto Scaling and Elastic Load Balancing, both of which enable blue/green deployment.
Elastic Beanstalk makes it easy to run multiple versions of your application and provides capabilities to swap the environment URLs, facilitating blue/green deployment.
AWS OpsWorks is a configuration management service based on Chef that allows customers to deploy and manage application stacks on AWS. Customers can specify resource and application configuration, and deploy and monitor running resources. OpsWorks simplifies cloning entire stacks when you're preparing blue/green environments.
For more information on Blue Green deployments, please refer to the below link:
• https://d0.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf
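The "swap environment URLs" step in option A amounts to an atomic CNAME exchange between the two environments. The toy simulation below (environment names and CNAMEs are invented) shows why the cutover is instantaneous: only the DNS mapping flips, not the environments themselves.

```python
# Local simulation of the Elastic Beanstalk "Swap Environment URLs"
# blue/green step: the public CNAME flips between two environments in a
# single operation, so traffic cuts over atomically.
cnames = {
    "blue-env":  "myapp.elasticbeanstalk.com",          # live, old version
    "green-env": "myapp-staging.elasticbeanstalk.com",  # new version
}

def swap_environment_urls(env_a, env_b):
    cnames[env_a], cnames[env_b] = cnames[env_b], cnames[env_a]

swap_environment_urls("blue-env", "green-env")
print(cnames["green-env"])  # green now serves the production CNAME
```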

NEW QUESTION 7
You have deployed a CloudFormation template which is used to spin up resources in your account. Which of the following statuses in CloudFormation represents a failure?

  • A. UPDATE_COMPLETE_CLEANUP_IN_PROGRESS
  • B. DELETE_COMPLETE
  • C. ROLLBACK_IN_PROGRESS
  • D. UPDATE_IN_PROGRESS

Answer: C

Explanation:
AWS CloudFormation provisions and configures resources by making calls to the AWS services that are described in your template. After all the resources have been created, AWS CloudFormation reports that your stack has been created. You can then start using the resources in your stack. If stack creation fails, AWS CloudFormation rolls back your changes by deleting the resources that it created.
The below snapshot from CloudFormation shows what happens when there is an error in the stack creation.
[Exhibit: CloudFormation stack events showing a rollback]
For more information on how CloudFormation works, please refer to the below link: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-whatis-howdoesitwork.html
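A rough rule of thumb for reading stack statuses: any status containing ROLLBACK or FAILED indicates a create/update did not succeed. The helper below (a simplification, not an exhaustive status list) applies that rule to the four options:

```python
# Rough classification of CloudFormation stack statuses: any ROLLBACK or
# FAILED status means the create/update did not succeed.
def is_failure_status(status: str) -> bool:
    return "ROLLBACK" in status or "FAILED" in status

statuses = [
    "UPDATE_COMPLETE_CLEANUP_IN_PROGRESS",
    "DELETE_COMPLETE",
    "ROLLBACK_IN_PROGRESS",
    "UPDATE_IN_PROGRESS",
]
print([s for s in statuses if is_failure_status(s)])  # -> ['ROLLBACK_IN_PROGRESS']
```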

NEW QUESTION 8
You are a DevOps Engineer for your company. Your company is using an OpsWorks stack to roll out a collection of web instances. When the instances are launched, a configuration file needs to be set up prior to the launching of the web application hosted on these instances. Which of the following steps would you carry out to ensure this requirement gets fulfilled? Choose 2 answers from the options given below.

  • A. Ensure that the OpsWorks stack is changed to use the AWS-specific cookbooks.
  • B. Ensure that the OpsWorks stack is changed to use custom cookbooks.
  • C. Configure a recipe which sets up the configuration file and add it to the Configure LifeCycle Event of the specific web layer.
  • D. Configure a recipe which sets up the configuration file and add it to the Deploy LifeCycle Event of the specific web layer.

Answer: BC

Explanation:
This is mentioned in the AWS documentation:
Configure
This event occurs on all of the stack's instances when one of the following occurs:
• An instance enters or leaves the online state.
• You associate an Elastic IP address with an instance or disassociate one from an instance.
• You attach an Elastic Load Balancing load balancer to a layer, or detach one from a layer.
For example, suppose that your stack has instances A, B, and C, and you start a new instance, D. After D has finished running its setup recipes, AWS OpsWorks Stacks triggers the Configure event on A, B, C, and D. If you subsequently stop A, AWS OpsWorks Stacks triggers the Configure event on B, C, and D. AWS OpsWorks Stacks responds to the Configure event by running each layer's Configure recipes, which update the instances' configuration to reflect the current set of online instances. The Configure event is therefore a good time to regenerate configuration files. For example, the HAProxy Configure recipes reconfigure the load balancer to accommodate any changes in the set of online application server instances.
You can also manually trigger the Configure event by using the Configure stack command.
For more information on OpsWorks lifecycle events, please refer to the below URL:
• http://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-events.html
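The A/B/C/D walkthrough above can be modeled as a toy event loop: every time an instance enters or leaves the online state, all currently online instances re-run their Configure recipes (here, just an incremented counter). This is a simplification of the real lifecycle, for intuition only.

```python
# Toy model of the OpsWorks Configure lifecycle event: whenever an
# instance enters or leaves the online state, every online instance
# re-runs its Configure recipes (modeled here as a run counter).
online = set()
configure_runs = {}

def trigger_configure():
    for inst in online:
        configure_runs[inst] = configure_runs.get(inst, 0) + 1

def instance_online(name):
    online.add(name)
    trigger_configure()       # fires on existing instances and the new one

def instance_stopped(name):
    online.discard(name)
    trigger_configure()       # fires on the remaining instances

for inst in ["A", "B", "C"]:
    instance_online(inst)
instance_online("D")          # Configure runs on A, B, C and D
instance_stopped("A")         # Configure runs on B, C and D
print(sorted(online))  # -> ['B', 'C', 'D']
```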

NEW QUESTION 9
What is required to achieve gigabit network throughput on EC2? You already selected cluster-compute, 10-gigabit instances with enhanced networking, and your workload is already network-bound, but you are not seeing 10 gigabit speeds.

  • A. Enable biplex networking on your servers, so packets are non-blocking in both directions and there's no switching overhead.
  • B. Ensure the instances are in different VPCs so you don't saturate the Internet Gateway on any one VPC.
  • C. Select PIOPS for your drives and mount several, so you can provision sufficient disk throughput.
  • D. Use a placement group for your instances so the instances are physically near each other in the same Availability Zone.

Answer: D

Explanation:
A placement group is a logical grouping of instances within a single Availability Zone. Placement groups are recommended for applications that benefit from low network latency, high network throughput, or both. To provide the lowest latency, and the highest packet-per-second network performance for your placement group, choose an instance type that supports enhanced networking.
For more information on Placement Groups, please visit the below URL: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html

NEW QUESTION 10
Which of the following items are required to allow an application deployed on an EC2 instance to write data to a DynamoDB table? Assume that no security keys are allowed to be stored on the EC2 instance. Choose 2 answers from the options below

  • A. Create an IAM Role that allows write access to the DynamoDB table.
  • B. Add an IAM Role to a running EC2 instance.
  • C. Create an IAM User that allows write access to the DynamoDB table.
  • D. Add an IAM User to a running EC2 instance.

Answer: AB

Explanation:
The AWS documentation mentions the following
We designed IAM roles so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Instead of creating and distributing your AWS credentials, you can delegate permission to make API requests using IAM roles.
For more information on IAM Roles, please refer to the below URL: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

NEW QUESTION 11
Your company has an application hosted on an Elastic beanstalk environment. You have been instructed that whenever application changes occur and new versions need to be deployed that the fastest deployment approach is employed. Which of the following deployment mechanisms will fulfil this requirement?

  • A. All at once
  • B. Rolling
  • C. Immutable
  • D. Rolling with additional batch

Answer: A

Explanation:
The following table from the AWS documentation shows the deployment time for each deployment method.
[Exhibit: Elastic Beanstalk deployment policy comparison table]
For more information on Elastic Beanstalk deployments, please refer to the below link: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.deploy-existing-version.html

NEW QUESTION 12
Which of the following features of the Auto Scaling Group ensures that additional instances are neither launched nor terminated before the previous scaling activity takes effect?

  • A. Termination policy
  • B. Cool down period
  • C. Ramp up period
  • D. Creation policy

Answer: B

Explanation:
The AWS documentation mentions
The Auto Scaling cooldown period is a configurable setting for your Auto Scaling group that helps to ensure that Auto Scaling doesn't launch or terminate additional instances before the previous scaling activity takes effect. After the Auto Scaling group dynamically scales using a simple scaling policy, Auto Scaling waits for the cooldown period to complete before resuming scaling activities. When you manually scale your Auto Scaling group, the default is not to wait for the cooldown period, but you can override the default and honor the cooldown period. If an instance becomes unhealthy, Auto Scaling does not wait for the cooldown period to complete before replacing the unhealthy instance.
For more information on the cooldown period, please refer to the below URL:
• http://docs.aws.amazon.com/autoscaling/latest/userguide/Cooldown.html
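The cooldown behavior can be sketched as a tiny state machine: a scaling request arriving before the previous activity's cooldown expires is ignored. The 300-second value below is an assumption chosen for illustration (it matches the classic simple-scaling default).

```python
# Toy cooldown model: scaling requests that arrive before the previous
# activity's cooldown expires are dropped.
COOLDOWN = 300  # seconds (illustrative default)

class AutoScalingGroup:
    def __init__(self):
        self.capacity = 2
        self.last_scale_time = None

    def scale_out(self, now):
        if self.last_scale_time is not None and now - self.last_scale_time < COOLDOWN:
            return False          # still cooling down; request ignored
        self.capacity += 1
        self.last_scale_time = now
        return True

asg = AutoScalingGroup()
print(asg.scale_out(now=0))    # True  -> capacity 3
print(asg.scale_out(now=120))  # False -> within cooldown, ignored
print(asg.scale_out(now=400))  # True  -> capacity 4
```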

NEW QUESTION 13
You are a DevOps Engineer for a large organization. The company wants to start using CloudFormation templates to start building their resources in AWS. You are getting requirements for the templates from various departments, such as networking, security, application, etc. What is the best way to architect these CloudFormation templates?

  • A. Use a single CloudFormation template, since this would reduce the maintenance overhead on the templates itself.
  • B. Create separate logical templates, for example, a separate template for networking, security, application, etc. Then nest the relevant templates.
  • C. Consider using Elastic Beanstalk to create your environments since CloudFormation is not built for such customization.
  • D. Consider using OpsWorks to create your environments since CloudFormation is not built for such customization.

Answer: B

Explanation:
The AWS documentation mentions the following
As your infrastructure grows, common patterns can emerge in which you declare the same components in each of your templates. You can separate out these common components and create dedicated templates for them. That way, you can mix and match different templates but use nested stacks to create a single, unified stack. Nested stacks are stacks that create other stacks. To create nested stacks, use the AWS::CloudFormation::Stack resource in your template to reference other templates.
For more information on CloudFormation best practices, please visit the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html
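A hypothetical parent template using `AWS::CloudFormation::Stack` might look like the sketch below (expressed as a Python dict; the S3 template URLs and parameter wiring are invented placeholders):

```python
# Hypothetical parent template nesting dedicated networking and
# application templates via AWS::CloudFormation::Stack.
parent = {
    "Resources": {
        "NetworkStack": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {"TemplateURL": "https://s3.amazonaws.com/templates/network.yaml"},
        },
        "AppStack": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {
                "TemplateURL": "https://s3.amazonaws.com/templates/app.yaml",
                # wire the app stack to an output of the network stack
                "Parameters": {"VpcId": {"Fn::GetAtt": ["NetworkStack", "Outputs.VpcId"]}},
            },
        },
    }
}
nested = [name for name, res in parent["Resources"].items()
          if res["Type"] == "AWS::CloudFormation::Stack"]
print(nested)  # -> ['NetworkStack', 'AppStack']
```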

NEW QUESTION 14
Your application stores sensitive information on an EBS volume attached to your EC2 instance. How can you protect your information? Choose two answers from the options given below

  • A. Unmount the EBS volume, take a snapshot, and encrypt the snapshot. Re-mount the Amazon EBS volume.
  • B. It is not possible to encrypt an EBS volume; you must use a lifecycle policy to transfer data to S3 for encryption.
  • C. Copy the unencrypted snapshot and check the box to encrypt the new snapshot. Volumes restored from this encrypted snapshot will also be encrypted.
  • D. Create and mount a new, encrypted Amazon EBS volume. Move the data to the new volume. Delete the old Amazon EBS volume.

Answer: CD

Explanation:
These steps are given in the AWS documentation
To migrate data between encrypted and unencrypted volumes
1) Create your destination volume (encrypted or unencrypted, depending on your need).
2) Attach the destination volume to the instance that hosts the data to migrate.
3) Make the destination volume available by following the procedures in Making an Amazon EBS Volume Available for Use. For Linux instances, you can create a mount point at /mnt/destination and mount the destination volume there.
4) Copy the data from your source directory to the destination volume. It may be most convenient to use a bulk-copy utility for this.
To encrypt a volume's data by means of snapshot copying
1) Create a snapshot of your unencrypted EBS volume. This snapshot is also unencrypted.
2) Copy the snapshot while applying encryption parameters. The resulting target snapshot is encrypted.
3) Restore the encrypted snapshot to a new volume, which is also encrypted.
For more information on EBS Encryption, please refer to the below AWS documentation link: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
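The snapshot-copy path above can be summarized as a small state machine, with encryption carried through the snapshot/copy/restore chain. This is a simulation of the logic, not a real EC2 API call:

```python
# State-machine sketch of the snapshot-copy encryption path: snapshot an
# unencrypted volume, copy the snapshot with encryption enabled, then
# restore the copy into a new (encrypted) volume.
def create_snapshot(volume):
    return {"encrypted": volume["encrypted"]}        # inherits encryption state

def copy_snapshot(snapshot, encrypt=False):
    return {"encrypted": snapshot["encrypted"] or encrypt}

def restore_volume(snapshot):
    return {"encrypted": snapshot["encrypted"]}      # inherits encryption state

volume = {"encrypted": False}
snap = create_snapshot(volume)              # still unencrypted
snap_copy = copy_snapshot(snap, encrypt=True)
new_volume = restore_volume(snap_copy)
print(new_volume["encrypted"])  # -> True
```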

NEW QUESTION 15
Which of the following services allows you to easily run and manage Docker-enabled applications across a cluster of Amazon EC2 instances?

  • A. Elastic Beanstalk
  • B. Elastic Container Service
  • C. OpsWorks
  • D. CloudWatch

Answer: B

Explanation:
The AWS documentation provides the following information
Amazon EC2 Container Service (ECS) allows you to easily run and manage Docker-enabled applications across a cluster of Amazon EC2 instances. Applications packaged as containers locally will deploy and run in the same way as containers managed by Amazon ECS. Amazon ECS eliminates the need to install, operate, and scale your own cluster management infrastructure, and allows you to schedule Docker-enabled applications across your cluster based on your resource needs and availability requirements.
For more information on ECS, please visit the link:
• https://aws.amazon.com/ecs/details/

NEW QUESTION 16
Which of the following services can be used to provision an ECS cluster containing the following components in an automated way:
1) Application Load Balancer for distributing traffic among various task instances running in EC2 Instances
2) Single task instance on each EC2 running as part of auto scaling group
3) Ability to support various types of deployment strategies

  • A. SAM
  • B. OpsWorks
  • C. Elastic Beanstalk
  • D. CodeCommit

Answer: C

Explanation:
You can create Docker environments that support multiple containers per Amazon EC2 instance with the multi-container Docker platform for Elastic Beanstalk. Elastic Beanstalk uses Amazon Elastic Container Service (Amazon ECS) to coordinate container deployments to multi-container Docker environments. Amazon ECS provides tools to manage a cluster of instances running Docker containers. Elastic Beanstalk takes care of Amazon ECS tasks including cluster creation, task definition, and execution.
Please refer to the below AWS documentation: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_ecs.html

NEW QUESTION 17
You have a set of EC2 instances hosted in AWS. You have created a role named DemoRole and attached a policy to that role, but you are unable to use that role with an instance. Why is this the case?

  • A. You need to create an instance profile and associate it with that specific role.
  • B. You are not able to associate an IAM role with an instance.
  • C. You won't be able to use that role with an instance unless you also create a user and associate it with that specific role.
  • D. You won't be able to use that role with an instance unless you also create a user group and associate it with that specific role.

Answer: A

Explanation:
An instance profile is a container for an IAM role that you can use to pass role information to an EC2 instance when the instance starts.
Option B is invalid because you can associate a role with an instance.
Options C and D are invalid because using users or user groups is not a pre-requisite.
For more information on instance profiles, please visit the link:
• http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html

NEW QUESTION 18
Your company owns multiple AWS accounts. There is currently one development and one production account. You need to grant access to the development team to an S3 bucket in the production account. How can you achieve this?

  • A. Create an IAM user in the Production account that allows users from the Development account (the trusted account) to access the S3 bucket in the Production account.
  • B. When creating the role, define the Development account as a trusted entity and specify a permissions policy that allows trusted users to update the S3 bucket.
  • C. Use web identity federation with a third-party identity provider with AWS STS to grant temporary credentials and membership into the production IAM user.
  • D. Create an IAM cross account role in the Production account that allows users from the Development account to access the S3 bucket in the Production account.

Answer: D

Explanation:
The AWS Documentation mentions the following on cross account roles:
You can use AWS Identity and Access Management (IAM) roles and AWS Security Token Service (STS) to set up cross-account access between AWS accounts. When you assume an IAM role in another AWS account to obtain cross-account access to services and resources in that account, AWS CloudTrail logs the cross-account activity.
For more information on cross account roles, please visit the below URL:
• http://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html
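The key piece of a cross-account role is its trust policy, which names the trusted account as the principal allowed to call `sts:AssumeRole`. Below is a hypothetical trust policy (the 111111111111 account ID stands in for the Development account):

```python
import json

# Hypothetical trust policy for a cross-account role in the Production
# account; 111111111111 stands in for the Development account ID.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
        "Action": "sts:AssumeRole",
    }],
}
print(json.dumps(trust_policy, indent=2))
```

A separate permissions policy attached to the same role then grants the S3 bucket access; the trust policy only controls who may assume the role.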

NEW QUESTION 19
Which of the following will you need to consider so you can set up a solution that incorporates single sign-on from your corporate AD or LDAP directory and restricts access for each user to a designated user folder in a bucket? Choose 3 Answers from the options below

  • A. Setting up a federation proxy or identity provider
  • B. Using AWS Security Token Service to generate temporary tokens
  • C. Tagging each folder in the bucket
  • D. Configuring IAM role
  • E. Setting up a matching IAM user for every user in your corporate directory that needs access to a folder in the bucket

Answer: ABD

Explanation:
The below diagram showcases how authentication is carried out when having an identity broker. This is an example of a SAML connection, but the same concept holds true for getting access to an AWS resource.
[Exhibit: SAML-based federation flow diagram]
For more information on federated access, please visit the below links:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_federated-users.html
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-idp_saml.html?icmpid=docs_iam_console
https://aws.amazon.com/blogs/security/writing-iam-policies-grant-access-to-user-specific-folders-in-an-amazon-s3-bucket/

NEW QUESTION 20
Which of the following is the default deployment mechanism used by Elastic Beanstalk when the application is created via Console or EBCLI?

  • A. All at Once
  • B. Rolling Deployments
  • C. Rolling with additional batch
  • D. Immutable

Answer: B

Explanation:
The AWS documentation mentions
AWS Elastic Beanstalk provides several options for how deployments are processed, including deployment policies (All at once, Rolling, Rolling with additional batch, and Immutable) and options that let you configure batch size and health check behavior during deployments. By default, your environment uses rolling deployments if you created it with the console or EB CLI, or all at once deployments if you created it with a different client (API, SDK or AWS CLI).
For more information on Elastic Beanstalk deployments, please refer to the below link:
• http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rolling-version-deploy.html

NEW QUESTION 21
What is web identity federation?

  • A. Use of an identity provider like Google or Facebook to become an AWS IAM User.
  • B. Use of an identity provider like Google or Facebook to exchange for temporary AWS security credentials.
  • C. Use of AWS IAM User tokens to log in as a Google or Facebook user.
  • D. Use the STS service to create a user on AWS which will allow them to log in from a Facebook or Google app.

Answer: B

Explanation:
With web identity federation, you don't need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP) — such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP — receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don't have to embed and distribute long-term security credentials with your application.
For more information on Web Identity Federation please refer to the below link:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html

NEW QUESTION 22
You are creating a CloudFormation template which takes in a database password as a parameter. How can you ensure that the password is not visible when anybody describes the stack?

  • A. Use the password attribute for the resource
  • B. Use the NoEcho property for the parameter value
  • C. Use the hidden property for the parameter value
  • D. Set the hidden attribute for the CloudFormation resource.

Answer: B

Explanation:
The AWS Documentation mentions
For sensitive parameter values (such as passwords), set the NoEcho property to true. That way, whenever anyone describes your stack, the parameter value is shown as asterisks (*****).
For more information on CloudFormation parameters, please visit the below URL:
• http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html
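A hypothetical parameter block with NoEcho, plus a small helper mimicking the masking behavior that describe-stacks applies, might look like this (parameter names are invented):

```python
# Hypothetical CloudFormation parameters, one marked NoEcho, plus a helper
# mimicking how describe-stacks masks such parameter values.
parameters = {
    "DBPassword": {"Type": "String", "NoEcho": True},
    "DBUser": {"Type": "String"},
}

def describe_parameter(name, value):
    if parameters[name].get("NoEcho"):
        return "*****"           # shown as asterisks, never the real value
    return value

print(describe_parameter("DBPassword", "s3cret"))  # -> *****
print(describe_parameter("DBUser", "admin"))       # -> admin
```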

NEW QUESTION 23
You have a requirement to host a cluster of NoSQL databases. There is an expectation that there will be a lot of I/O on these databases. Which EBS volume type is best for high performance NoSQL cluster deployments?

  • A. io1
  • B. gp1
  • C. standard
  • D. gp2

Answer: A

Explanation:
Provisioned IOPS SSD should be used for critical business applications that require sustained IOPS performance, or more than 10,000 IOPS or 160 MiB/s of throughput per volume.
This is ideal for large database workloads, such as:
• MongoDB
• Cassandra
• Microsoft SQL Server
• MySQL
• PostgreSQL
• Oracle
For more information on the various EBS Volume Types, please refer to the below link:
• http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html

NEW QUESTION 24
You currently have an Auto Scaling Group that has the following settings:
Min capacity - 2
Desired capacity - 2
Maximum capacity - 2
Your launch configuration has AMIs which are based on the t2.micro instance type. The application running on these instances is now experiencing issues and you have identified that the solution is to change the instance type of the instances running in the Auto Scaling Group.
Which of the below solutions will meet this demand?

  • A. Change the Instance type in the current launch configuration. Change the Desired value of the Auto Scaling Group to 4. Ensure the new instances are launched.
  • B. Delete the current launch configuration. Create a new launch configuration with the new instance type and add it to the Auto Scaling Group. This will then launch the new instances.
  • C. Make a copy of the launch configuration. Change the instance type in the new launch configuration. Attach that to the Auto Scaling Group. Change the maximum and Desired size of the Auto Scaling Group to 4. Once the new instances are launched, change the Desired and maximum size back to 2.
  • D. Change the desired and maximum size of the Auto Scaling Group to 4. Make a copy of the launch configuration. Change the instance type in the new launch configuration. Attach that to the Auto Scaling Group. Change the maximum and Desired size of the Auto Scaling Group to 2.

Answer: C

Explanation:
You should make a copy of the launch configuration and change the instance type in the copy. Then attach the new launch configuration to the Auto Scaling group and change the Desired capacity of the group to 4 so that instances of the new instance type are launched. Once they are running, change the Desired capacity back to 2, so that Auto Scaling terminates the instances with the older configuration. Note that the assumption here is that the current instances are equally distributed across multiple AZs, because Auto Scaling will first use the AZRebalance process to decide where to terminate instances.
Option A is invalid because you cannot make changes to an existing launch configuration.
Option B is invalid because if you delete the existing launch configuration, your application will not be available. You need to ensure a smooth deployment process.
Option D is invalid because you should change the Desired size to 4 after attaching the new launch configuration.
For more information on Auto Scaling suspend and resume processes, please visit the below URL: http://docs.aws.amazon.com/autoscaling/latest/userguide/as-suspend-resume-processes.html
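The scale-out-then-scale-in trick above works because, with the default termination policy, Auto Scaling first rebalances across AZs and then terminates the instance using the oldest launch configuration. A minimal Python sketch of that selection logic (the instance records and field names here are hypothetical illustrations, not the boto3 API):

```python
from collections import Counter

def pick_instance_to_terminate(instances):
    """Simulate the default termination policy: prefer the AZ with the most
    instances (AZRebalance), then the instance whose launch configuration
    was created earliest."""
    az_counts = Counter(i["az"] for i in instances)
    busiest_az = max(az_counts, key=az_counts.get)          # rebalance AZs first
    candidates = [i for i in instances if i["az"] == busiest_az]
    # "lc_created" is the launch configuration's creation time (hypothetical field)
    return min(candidates, key=lambda i: i["lc_created"])

fleet = [
    {"id": "i-old-a", "az": "us-east-1a", "lc_created": 1},  # old t2.micro config
    {"id": "i-old-b", "az": "us-east-1b", "lc_created": 1},
    {"id": "i-new-a", "az": "us-east-1a", "lc_created": 2},  # new instance type
    {"id": "i-new-b", "az": "us-east-1b", "lc_created": 2},
]
victim = pick_instance_to_terminate(fleet)
```

When the Desired size drops back to 2, the instances from the older launch configuration are chosen first, which is exactly what the answer relies on.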

NEW QUESTION 25
Your company has multiple applications running on AWS. Your company wants to develop a tool that notifies on-call teams immediately via email when an alarm is triggered in your environment. You have multiple on-call teams that work different shifts, and the tool should handle notifying the correct teams at the correct times. How should you implement this solution?

  • A. Create an Amazon SNS topic and an Amazon SQS queue.
  • B. Configure the Amazon SQS queue as a subscriber to the Amazon SNS topic. Configure CloudWatch alarms to notify this topic when an alarm is triggered.
  • C. Create an Amazon EC2 Auto Scaling group with both minimum and desired instances configured to 0. Worker nodes in this group spawn when messages are added to the queue.
  • D. Workers then use Amazon Simple Email Service to send messages to your on-call teams.
  • E. Create an Amazon SNS topic and configure your on-call team email addresses as subscribers.
  • F. Use the AWS SDK tools to integrate your application with Amazon SNS and send messages to this new topic.
  • G. Notifications will be sent to on-call users when a CloudWatch alarm is triggered.
  • H. Create an Amazon SNS topic and configure your on-call team email addresses as subscribers.
  • I. Create a secondary Amazon SNS topic for alarms and configure your CloudWatch alarms to notify this topic when triggered.
  • J. Create an HTTP subscriber to this topic that notifies your application via HTTP POST when an alarm is triggered.
  • K. Use the AWS SDK tools to integrate your application with Amazon SNS and send messages to the first topic so that on-call engineers receive alerts.
  • L. Create an Amazon SNS topic for each on-call group, and configure each of these with the team member emails as subscribers.
  • M. Create another Amazon SNS topic and configure your CloudWatch alarms to notify this topic when triggered.
  • N. Create an HTTP subscriber to this topic that notifies your application via HTTP POST when an alarm is triggered.
  • O. Use the AWS SDK tools to integrate your application with Amazon SNS and send messages to the correct team topic when on shift.

Answer: D

Explanation:
Option D fulfils all the requirements:
1) Create an SNS topic for each on-call group, so that each team's members receive email at their subscribed addresses.
2) Configure CloudWatch alarms to publish to a separate alarms topic with an HTTP subscriber, so the application is notified via HTTP POST and can use the SDK to publish to the correct team's topic.
Option A is invalid because the SQS service is not required.
Options B and C are incorrect. As per the requirement, we need to notify only the on-call team working that particular shift when an alarm is triggered; the notification does not need to be sent to all the on-call teams of the company. With options B and C, since we are not configuring an SNS topic for each on-call team, the notifications will be sent to all the on-call teams. Hence these two options are invalid.
For more information on setting up notifications, please refer to the below document from AWS: http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/US_SetupSNS.html
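The per-team routing in option D amounts to a simple shift lookup that the application performs before publishing. A minimal sketch (the topic ARNs and the shift table are hypothetical; in a real system the application would pass the returned ARN to SNS's publish API):

```python
# Hypothetical shift schedule: each team covers a half-open range of UTC hours.
SHIFTS = [
    (0, 8, "arn:aws:sns:us-east-1:123456789012:oncall-apac"),
    (8, 16, "arn:aws:sns:us-east-1:123456789012:oncall-emea"),
    (16, 24, "arn:aws:sns:us-east-1:123456789012:oncall-amer"),
]

def topic_for_hour(hour_utc):
    """Return the SNS topic ARN for the team on shift at the given UTC hour."""
    for start, end, arn in SHIFTS:
        if start <= hour_utc < end:
            return arn
    raise ValueError(f"no team on shift at hour {hour_utc}")
```

Because each team has its own topic, only the subscribers of the on-shift team's topic receive the alert email.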

NEW QUESTION 26
You are in charge of designing CloudFormation templates for your company. One of the key requirements is to ensure that if a CloudFormation stack is deleted, a snapshot is created of the relational database that is part of the stack. How can you achieve this in the best possible way?

  • A. Create a snapshot of the relational database beforehand so that when the CloudFormation stack is deleted, the snapshot of the database will be present.
  • B. Use the UpdatePolicy attribute of the CloudFormation template to ensure a snapshot is created of the relational database.
  • C. Use the DeletionPolicy attribute of the CloudFormation template to ensure a snapshot is created of the relational database.
  • D. Create a new CloudFormation template to create a snapshot of the relational database.

Answer: C

Explanation:
The AWS documentation mentions the following:
With the DeletionPolicy attribute you can preserve or (in some cases) back up a resource when its stack is deleted. You specify a DeletionPolicy attribute for each resource that you want to control. If a resource has no DeletionPolicy attribute, AWS CloudFormation deletes the resource by default. Note that this capability also applies to update operations that lead to resources being removed.
For more information on the DeletionPolicy attribute, please visit the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html
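A minimal sketch of option C as a template fragment (the resource name and property values are illustrative, not from the question):

```yaml
Parameters:
  DBPassword:
    Type: String
    NoEcho: true

Resources:
  AppDatabase:
    Type: AWS::RDS::DBInstance
    DeletionPolicy: Snapshot   # CloudFormation takes a final DB snapshot on stack deletion
    Properties:
      Engine: mysql
      DBInstanceClass: db.t2.micro
      AllocatedStorage: "20"
      MasterUsername: admin
      MasterUserPassword: !Ref DBPassword
```

With `DeletionPolicy: Snapshot` on the DB instance, deleting the stack leaves behind a snapshot instead of destroying the data outright.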

NEW QUESTION 27
Of the six available sections of a CloudFormation template (Template Description Declaration, Template Format Version Declaration, Parameters, Resources, Mappings, Outputs), which is the only one required for a CloudFormation template to be accepted? Choose an answer from the options below.

  • A. Parameters
  • B. Template Declaration
  • C. Mappings
  • D. Resources

Answer: D

Explanation:
If you refer to the documentation, you will see that Resources is the only mandatory section. It specifies the stack resources and their properties, such as an Amazon Elastic Compute Cloud instance or an Amazon Simple Storage Service bucket.
For more information on CloudFormation templates, please refer to the below link:
• http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html
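The smallest template CloudFormation will accept therefore contains only a Resources section, for example (the logical name and bucket type are illustrative):

```yaml
Resources:
  LogBucket:
    Type: AWS::S3::Bucket
```

No Parameters, Mappings, or Outputs sections are needed for this template to deploy.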

NEW QUESTION 28
You have an application which consists of EC2 instances in an Auto Scaling group. During a particular time frame every day, there is an increase in traffic to your website, and users are complaining of poor response times from the application. You have configured your Auto Scaling group to deploy one new EC2 instance when CPU utilization is greater than 60% for 2 consecutive periods of 5 minutes. What is the least cost-effective way to resolve this problem?

  • A. Decrease the consecutive number of collection periods
  • B. Increase the minimum number of instances in the Auto Scaling group
  • C. Decrease the collection period to ten minutes
  • D. Decrease the threshold CPU utilization percentage at which to deploy a new instance

Answer: B

Explanation:
If you increase the minimum number of instances, they will keep running even when load on the website is low, so you incur cost when there is no need. Hence this is the least cost-effective option.
All of the remaining options are valid ways to increase the number of instances under high load.
For more information on on-demand scaling, please refer to the below link: http://docs.aws.amazon.com/autoscaling/latest/userguide/as-scale-based-on-demand.html
Note: the tricky part is that the question asks for the "least cost-effective way". You may get the design consideration right but still need to be careful about how the question is phrased.
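The alarm condition in the question ("CPU > 60% for 2 consecutive periods of 5 minutes") can be sketched as a simple evaluation over the most recent datapoints. This models the alarm semantics only, not the CloudWatch API:

```python
def alarm_breached(datapoints, threshold=60.0, evaluation_periods=2):
    """Return True when the last `evaluation_periods` datapoints
    (one per 5-minute period) all exceed the threshold."""
    if len(datapoints) < evaluation_periods:
        return False
    return all(v > threshold for v in datapoints[-evaluation_periods:])
```

Decreasing the threshold or the number of consecutive periods (options A and D) makes this condition trip sooner, which is why those are plausible fixes, just not the one the question asks about.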

NEW QUESTION 29
You work for an insurance company and are responsible for the day-to-day operations of your company's online quote system used to provide insurance quotes to members of the public. Your company wants to use the application logs generated by the system to better understand customer behavior. Industry regulations also require that you retain all application logs for the system indefinitely in order to investigate fraudulent claims in the future. You have been tasked with designing a log management system with the following requirements:
- All log entries must be retained by the system, even during unplanned instance failure.
- The customer insight team requires immediate access to the logs from the past seven days.
- The fraud investigation team requires access to all historic logs, but will wait up to 24 hours before these logs are available.
How would you meet these requirements in a cost-effective manner? Choose three answers from the options below

  • A. Configure your application to write logs to the instance's ephemeral disk, because this storage is free and has good write performance.
  • B. Create a script that moves the logs from the instance to Amazon S3 once an hour.
  • C. Write a script that is configured to be executed when the instance is stopped or terminated and that will upload any remaining logs on the instance to Amazon S3.
  • D. Create an Amazon S3 lifecycle configuration to move log files from Amazon S3 to Amazon Glacier after seven days.
  • E. Configure your application to write logs to the instance's default Amazon EBS boot volume, because this storage already exists.
  • F. Create a script that moves the logs from the instance to Amazon S3 once an hour.
  • G. Configure your application to write logs to a separate Amazon EBS volume with the "delete on termination" field set to false.
  • H. Create a script that moves the logs from the instance to Amazon S3 once an hour.
  • I. Create a housekeeping script that runs on a T2 micro instance managed by an Auto Scaling group for high availability.
  • J. The script uses the AWS API to identify any unattached Amazon EBS volumes containing log files.
  • K. Your housekeeping script will mount the Amazon EBS volume, upload all logs to Amazon S3, and then delete the volume.

Answer: CEF

Explanation:
Since all logs need to be stored indefinitely, Glacier is the best option for this. One can use lifecycle rules to move the data from S3 to Glacier.
Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows:
• Transition actions - In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation.
• Expiration actions - In which you specify when the objects expire. Then Amazon S3 deletes the expired objects on your behalf.
For more information on lifecycle configuration, please refer to the below link:
• http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
You can use scripts to put the logs onto a new volume and then transfer those logs to S3.
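The seven-day transition rule described here can be expressed as the following S3 lifecycle configuration (the rule ID and `logs/` prefix are illustrative):

```json
{
  "Rules": [
    {
      "ID": "ArchiveLogsAfterSevenDays",
      "Filter": { "Prefix": "logs/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 7, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```

This keeps the last seven days of logs in S3 for the customer insight team, while older logs move to Glacier, where the fraud team's 24-hour retrieval window is acceptable.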
Note:
Moving the logs from the EBS volume to S3 requires some custom scripts running in the background. To meet the minimum memory requirements of the OS and the applications running the script, a cost-effective EC2 instance can be used. Considering the computing resource requirements and the cost factor, a t2.micro instance can be used in this case.
The following link provides more information on the various T2 instances: https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/t2-instances.html
The question asks "How would you meet these requirements in a cost-effective manner? Choose three answers from the options below", so the user has to choose the 3 options that together fulfil the requirement. Of the given options, C, E and F do so.
"The EC2 instances use EBS volumes, and the logs are stored on EBS volumes that are marked for non-termination" - this is one way to fulfil the requirement, so this shouldn't be an issue.

NEW QUESTION 30
When using EC2 instances with the CodeDeploy service, which of the following are some of the prerequisites to ensure that the EC2 instances can work with CodeDeploy? Choose 2 answers from the options given below.

  • A. Ensure an IAM role is attached to the instance so that it can work with the CodeDeploy service.
  • B. Ensure the EC2 instance is configured with Enhanced Networking.
  • C. Ensure the EC2 instance is placed in the default VPC.
  • D. Ensure that the CodeDeploy agent is installed on the EC2 instance.

Answer: AD

Explanation:
As per the AWS documentation, an EC2 instance used with CodeDeploy must have the CodeDeploy agent installed and running, and must have an IAM instance profile attached that grants the permissions the agent needs (for example, read access to the Amazon S3 bucket holding the application revision).
For more information on instances for CodeDeploy, please visit the below URL:
• http://docs.aws.amazon.com/codedeploy/latest/userguide/instances.html
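For option A, the role in the instance profile must be assumable by EC2. A minimal trust policy looks like the following (permissions, such as S3 read access to the revision bucket, are attached to the role separately):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

Attaching a role with this trust policy, plus the CodeDeploy agent from option D, satisfies both prerequisites.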

NEW QUESTION 31
......

Recommend!! Get the Full DOP-C01 dumps in VCE and PDF From DumpSolutions.com, Welcome to Download: https://www.dumpsolutions.com/DOP-C01-dumps/ (New 116 Q&As Version)

