
Buy your braindumps confidently with our secure SSL certification and safe payment methods.
Download the demo of your desired dumps free with just one click before purchase. 100% sign-up-free demo.
Get your certification on the first attempt or get 100% of your payment back according to our refund policy.
Resolve your issues and queries quickly with our dedicated 24/7 live customer support team.
We at DumpsSure certify that our platform is one of the most authentic websites for Amazon SAP-C01 exam questions and their correct answers. Pass your Amazon SAP-C01 exam with flying colors, and with little effort. With the purchase of this pack, you will also get free demo question dumps. We ensure your 100% success in the SAP-C01 exam with the help of our provided material.
DumpsSure offers a unique Online Test Engine where you can fully practice your SAP-C01 exam questions. This is a one-of-a-kind feature that our competitors do not provide. Candidates can practice the way they would attempt questions at the real examination time.
DumpsSure also offers an exclusive 'Exam Mode' where you can attempt 50 random questions related to your SAP-C01 exam. This mode closely simulates the real SAP-C01 certification exam. Attempt all the questions within a limited time and test your knowledge on the spot. This mode will give you an edge in the real exam.
Our success rate over the past 6 years is above 96%, which is quite impressive, and we're proud of it. Our customers are able to build their careers in any field they wish. Dive right in and make the best decision of your life right now: choose the plan you want, download the SAP-C01 exam dumps, and start your preparation for a successful career.
DumpsSure provides free Amazon SAP-C01 questions and answers for your practice. To avail yourself of this facility, you just need to sign up for a free account on DumpsSure. Thousands of customers from around the world are using our SAP-C01 dumps. You can get high grades by using these dumps, with a money-back guarantee on the SAP-C01 dumps PDF.
Our experts have prepared material that can help you succeed in the Amazon SAP-C01 exam in a short time. They are so methodical and knowledgeable about the questions and their answers that you can get good marks in the Amazon SAP-C01 exam. So DumpsSure offers you the chance to earn excellent marks.
The basic aim of DumpsSure is to provide the most important and most accurate material for our users. You just need to stay connected to the internet to get updates, even on your mobile. After purchasing, you can download the Amazon SAP-C01 study material in PDF format and read it easily, wherever you want to study.
Our provided material is regularly updated with new questions and answers for the Amazon exam dumps, so that you can easily understand each question and its answer and succeed on your first attempt.
We are keen to provide our users with questions that are verified by Amazon professionals, who are extremely skilled and have spent many years in this field.
DumpsSure is so devoted to its customers that we provide the most important and latest questions to help you pass the Amazon SAP-C01 exam. If you have purchased the complete SAP-C01 dumps PDF file and have not received the promised facilities for the Amazon exams, you can either replace your exam or claim a refund under our money-back policy, which is simple. For more detail, visit the Guarantee Page.
A company is running an application in the AWS Cloud. The application consists of microservices that run on a fleet of Amazon EC2 instances in multiple Availability Zones behind an Application Load Balancer. The company recently added a new REST API that was implemented in Amazon API Gateway. Some of the older microservices that run on EC2 instances need to call this new API. The company does not want the API to be accessible from the public internet and does not want proprietary data to traverse the public internet.
What should a solutions architect do to meet these requirements?
A. Create an AWS Site-to-Site VPN connection between the VPC and the API Gateway. Use API Gateway to generate a unique API key for each microservice. Configure the API methods to require the key.
B. Create an interface VPC endpoint for API Gateway, and set an endpoint policy to only allow access to the specific API. Add a resource policy to API Gateway to only allow access from the VPC endpoint. Change the API Gateway endpoint type to private.
C. Modify the API Gateway to use IAM authentication. Update the IAM policy for the IAM role that is assigned to the EC2 instances to allow access to the API Gateway. Move the API Gateway into a new VPC. Deploy a transit gateway and connect the VPCs.
D. Create an accelerator in AWS Global Accelerator and connect the accelerator to the API Gateway. Update the route table for all VPC subnets with a route to the created Global Accelerator endpoint IP address. Add an API key for each service to use for authentication.
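For context on option B, here is a minimal boto3 sketch of creating an interface VPC endpoint for API Gateway. The VPC, subnet, security group, account, and API IDs are hypothetical placeholders, and the endpoint policy shown is illustrative only.
```python
import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Endpoint policy restricting the endpoint to one specific API (hypothetical API ID).
policy = {
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "execute-api:Invoke",
        "Resource": "arn:aws:execute-api:us-east-1:123456789012:a1b2c3d4e5/*",
    }]
}

# Interface endpoint for the API Gateway execute-api service.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",                     # hypothetical VPC ID
    ServiceName="com.amazonaws.us-east-1.execute-api",
    SubnetIds=["subnet-0aaa0aaa0aaa0aaa0"],            # hypothetical subnet
    SecurityGroupIds=["sg-0bbb0bbb0bbb0bbb0"],         # must allow HTTPS from the microservices
    PrivateDnsEnabled=True,
    PolicyDocument=json.dumps(policy),
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```
The API's resource policy and the switch to a private endpoint type would then be configured on the API Gateway side.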
A large company recently experienced an unexpected increase in Amazon RDS and Amazon DynamoDB costs. The company needs to increase visibility into details of AWS Billing and Cost Management. There are various accounts associated with AWS Organizations, including many development and production accounts. There is no consistent tagging strategy across the organization, but there are guidelines in place that require all infrastructure to be deployed using AWS CloudFormation with consistent tagging. Management requires cost center numbers and project ID numbers for all existing and future DynamoDB tables and RDS instances.
Which strategy should the solutions architect provide to meet these requirements?
A. Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID, and allow 24 hours for tags to propagate to existing resources.
B. Use an AWS Config rule to alert the finance team of untagged resources. Create a centralized AWS Lambda based solution to tag untagged RDS databases and DynamoDB resources every hour using a cross-account role.
C. Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID. Use SCPs to restrict creation of resources that do not have the cost center and project ID on the resource.
D. Create cost allocation tags to define the cost center and project ID, and allow 24 hours for tags to propagate to existing resources. Update existing federated roles to restrict privileges to provision resources that do not include the cost center and project ID on the resource.
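Tag Editor is a console feature, but the bulk tagging it performs can be sketched with the Resource Groups Tagging API. This is a minimal illustration with hypothetical tag values; the cost allocation tags would still be activated separately in the Billing console.
```python
import boto3

tagging = boto3.client("resourcegroupstaggingapi", region_name="us-east-1")

# Walk all DynamoDB tables and RDS DB instances and tag any that are
# missing the required cost allocation tags (values are hypothetical).
paginator = tagging.get_paginator("get_resources")
for page in paginator.paginate(ResourceTypeFilters=["dynamodb:table", "rds:db"]):
    for mapping in page["ResourceTagMappingList"]:
        existing = {tag["Key"] for tag in mapping.get("Tags", [])}
        if not {"CostCenter", "ProjectID"} <= existing:
            tagging.tag_resources(
                ResourceARNList=[mapping["ResourceARN"]],
                Tags={"CostCenter": "CC-1234", "ProjectID": "PRJ-0042"},
            )
```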
A company has an application that uses Amazon EC2 instances in an Auto Scaling group. The quality assurance (QA) department needs to launch a large number of short-lived environments to test the application. The application environments are currently launched by the manager of the department using an AWS CloudFormation template. To launch the stack, the manager uses a role with permission to use CloudFormation, EC2, and Auto Scaling APIs. The manager wants to allow testers to launch their own environments, but does not want to grant broad permissions to each user.
Which setup would achieve these goals?
A. Upload the AWS CloudFormation template to Amazon S3. Give users in the QA department permission to assume the manager's role and add a policy that restricts the permissions to the template and the resources it creates. Train users to launch the template from the CloudFormation console.
B. Create an AWS Service Catalog product from the environment template. Add a launch constraint to the product with the existing role. Give users in the QA department permission to use AWS Service Catalog APIs only. Train users to launch the template from the AWS Service Catalog console.
C. Upload the AWS CloudFormation template to Amazon S3. Give users in the QA department permission to use CloudFormation and S3 APIs, with conditions that restrict the permissions to the template and the resources it creates. Train users to launch the template from the CloudFormation console.
D. Create an AWS Elastic Beanstalk application from the environment template. Give users in the QA department permission to use Elastic Beanstalk permissions only. Train users to launch Elastic Beanstalk environments with the Elastic Beanstalk CLI, passing the existing role to the environment as a service role.
A financial services company in North America plans to release a new online web application to its customers on AWS. The company will launch the application in the us-east-1 Region on Amazon EC2 instances. The application must be highly available and must dynamically scale to meet user traffic. The company also wants to implement a disaster recovery environment for the application in the us-west-1 Region by using active-passive failover.
Which solution will meet these requirements?
A. Create a VPC in us-east-1 and a VPC in us-west-1. Configure VPC peering. In the us-east-1 VPC, create an Application Load Balancer (ALB) that extends across multiple Availability Zones in both VPCs. Create an Auto Scaling group that deploys the EC2 instances across the multiple Availability Zones in both VPCs. Place the Auto Scaling group behind the ALB.
B. Create a VPC in us-east-1 and a VPC in us-west-1. In the us-east-1 VPC, create an Application Load Balancer (ALB) that extends across multiple Availability Zones in that VPC. Create an Auto Scaling group that deploys the EC2 instances across the multiple Availability Zones in the us-east-1 VPC. Place the Auto Scaling group behind the ALB. Set up the same configuration in the us-west-1 VPC. Create an Amazon Route 53 hosted zone. Create separate records for each ALB. Enable health checks to ensure high availability between Regions.
C. Create a VPC in us-east-1 and a VPC in us-west-1. In the us-east-1 VPC, create an Application Load Balancer (ALB) that extends across multiple Availability Zones in that VPC. Create an Auto Scaling group that deploys the EC2 instances across the multiple Availability Zones in the us-east-1 VPC. Place the Auto Scaling group behind the ALB. Set up the same configuration in the us-west-1 VPC. Create an Amazon Route 53 hosted zone. Create separate records for each ALB. Enable health checks and configure a failover routing policy for each record.
D. Create a VPC in us-east-1 and a VPC in us-west-1. Configure VPC peering. In the us-east-1 VPC, create an Application Load Balancer (ALB) that extends across multiple Availability Zones in both VPCs. Create an Auto Scaling group that deploys the EC2 instances across the multiple Availability Zones in both VPCs. Place the Auto Scaling group behind the ALB. Create an Amazon Route 53 hosted zone. Create a record for the ALB.
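Option C's failover routing can be sketched in boto3 as two alias records, one PRIMARY and one SECONDARY. The hosted zone IDs, record name, and DNS names are hypothetical placeholders; an ALB's canonical hosted zone ID comes from its description.
```python
import boto3

route53 = boto3.client("route53")

def failover_record(set_id, role, alb_zone_id, alb_dns):
    """Build one failover alias record for app.example.com (hypothetical name)."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": set_id,
            "Failover": role,                  # PRIMARY or SECONDARY
            "AliasTarget": {
                "HostedZoneId": alb_zone_id,   # the ALB's canonical hosted zone ID
                "DNSName": alb_dns,
                "EvaluateTargetHealth": True,  # health checks drive the failover
            },
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE",                  # hypothetical hosted zone
    ChangeBatch={"Changes": [
        failover_record("us-east-1", "PRIMARY",
                        "Z35EXAMPLE", "alb-east-123.us-east-1.elb.amazonaws.com"),
        failover_record("us-west-1", "SECONDARY",
                        "Z36EXAMPLE", "alb-west-456.us-west-1.elb.amazonaws.com"),
    ]},
)
```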
A solutions architect is building a containerized .NET Core application that will run in AWS Fargate. The backend of the application requires Microsoft SQL Server with high availability. All tiers of the application must be highly available. The credentials used for the connection string to SQL Server should not be stored on disk within the .NET Core front-end containers.
Which strategies should the solutions architect use to meet these requirements?
A. Set up SQL Server to run in Fargate with Service Auto Scaling. Create an Amazon ECS task execution role that allows the Fargate task definition to get the secret value for the credentials to SQL Server running in Fargate. Specify the ARN of the secret in AWS Secrets Manager in the secrets section of the Fargate task definition so the sensitive data can be injected into the containers as environment variables on startup for reading into the application to construct the connection string. Set up the .NET Core service using Service Auto Scaling behind an Application Load Balancer in multiple Availability Zones.
B. Create a Multi-AZ deployment of SQL Server on Amazon RDS. Create a secret in AWS Secrets Manager for the credentials to the RDS database. Create an Amazon ECS task execution role that allows the Fargate task definition to get the secret value for the credentials to the RDS database in Secrets Manager. Specify the ARN of the secret in Secrets Manager in the secrets section of the Fargate task definition so the sensitive data can be injected into the containers as environment variables on startup for reading into the application to construct the connection string. Set up the .NET Core service in Fargate using Service Auto Scaling behind an Application Load Balancer in multiple Availability Zones.
C. Create an Auto Scaling group to run SQL Server on Amazon EC2. Create a secret in AWS Secrets Manager for the credentials to SQL Server running on EC2. Create an Amazon ECS task execution role that allows the Fargate task definition to get the secret value for the credentials to SQL Server on EC2. Specify the ARN of the secret in Secrets Manager in the secrets section of the Fargate task definition so the sensitive data can be injected into the containers as environment variables on startup for reading into the application to construct the connection string. Set up the .NET Core service using Service Auto Scaling behind an Application Load Balancer in multiple Availability Zones.
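The Secrets Manager injection described in option B maps to the secrets section of an ECS task definition. A minimal boto3 sketch, assuming hypothetical ARNs, family, and image names:
```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.register_task_definition(
    family="dotnet-frontend",                  # hypothetical family name
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    # The execution role must allow secretsmanager:GetSecretValue on the secret.
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "frontend",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/frontend:latest",
        "essential": True,
        "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        # ECS injects the secret value as an environment variable at startup,
        # so the credentials never land on disk inside the container.
        "secrets": [{
            "name": "DB_CREDENTIALS",
            "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:rds-credentials",
        }],
    }],
)
```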
A solutions architect is responsible for redesigning a legacy Java application to improve its availability, data durability, and scalability. Currently, the application runs on a single high-memory Amazon EC2 instance. It accepts HTTP requests from upstream clients, adds them to an in-memory queue, and responds with a 200 status. A separate application thread reads items from the queue, processes them, and persists the results to an Amazon RDS MySQL instance. The processing time for each item takes 90 seconds on average, most of which is spent waiting on external service calls, but the application is written to process multiple items in parallel.
Traffic to this service is unpredictable. During periods of high load, items may sit in the internal queue for over an hour while the application processes the backlog. In addition, the current system has issues with availability and data loss if the single application node fails. Clients that access this service cannot be modified. They expect to receive a response to each HTTP request they send within 10 seconds before they will time out and retry the request.
Which approach would improve the availability and durability of the system while decreasing the processing latency and minimizing costs?
A. Create an Amazon API Gateway REST API that uses Lambda proxy integration to pass requests to an AWS Lambda function. Migrate the core processing code to a Lambda function and write a wrapper class that provides a handler method that converts the proxy events to the internal application data model and invokes the processing module.
B. Create an Amazon API Gateway REST API that uses a service proxy to put items in an Amazon SQS queue. Extract the core processing code from the existing application and update it to pull items from Amazon SQS instead of an in-memory queue. Deploy the new processing application to smaller EC2 instances within an Auto Scaling group that scales dynamically based on the approximate number of messages in the Amazon SQS queue.
C. Modify the application to use Amazon DynamoDB instead of Amazon RDS. Configure Auto Scaling for the DynamoDB table. Deploy the application within an Auto Scaling group with a scaling policy based on CPU utilization. Back the in-memory queue with a memory-mapped file to an instance store volume and periodically write that file to Amazon S3.
D. Update the application to use a Redis task queue instead of the in-memory queue. Build a Docker container image for the application. Create an Amazon ECS task definition that includes the application container and a separate container to host Redis. Deploy the new task definition as an ECS service using AWS Fargate, and enable Auto Scaling.
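Option B decouples ingestion from processing with SQS. A minimal worker-loop sketch, assuming a hypothetical queue URL and processing function; deleting the message only after successful processing is what prevents data loss when a node fails.
```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"  # hypothetical

def process(body: str) -> None:
    """Placeholder for the extracted core processing logic (hypothetical)."""

while True:
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,      # long polling keeps API calls (and cost) low
        VisibilityTimeout=180,   # comfortably above the 90-second average processing time
    )
    for msg in resp.get("Messages", []):
        process(msg["Body"])
        # Delete only on success; otherwise the message reappears for another worker.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```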
A company runs an application in the cloud that consists of a database and a website. Users can post data to the website, have the data processed, and have the data sent back to them in an email. Data is stored in a MySQL database running on an Amazon EC2 instance. The database is running in a VPC with two private subnets. The website is running on Apache Tomcat in a single EC2 instance in a different VPC with one public subnet. There is a single VPC peering connection between the database and website VPCs.
The website has suffered several outages during the last month due to high traffic.
Which actions should a solutions architect take to increase the reliability of the application? (Select THREE.)
A. Place the Tomcat server in an Auto Scaling group with multiple EC2 instances behind an Application Load Balancer
B. Provision an additional VPC peering connection
C. Migrate the MySQL database to Amazon Aurora with one Aurora Replica
D. Provision two NAT gateways in the database VPC
E. Move the Tomcat server to the database VPC
F. Create an additional public subnet in a different Availability Zone in the website VPC
A large multinational company runs a timesheet application on AWS that is used by staff across the world. The application runs on Amazon EC2 instances in an Auto Scaling group behind an Elastic Load Balancing (ELB) load balancer, and stores data in an Amazon RDS MySQL Multi-AZ database instance.
The CFO is concerned about the impact on the business if the application is not available. The application must not be down for more than two hours, but the solution must be as cost-effective as possible.
How should the solutions architect meet the CFO's requirements while minimizing data loss?
A. In another Region, configure a read replica and create a copy of the infrastructure. When an issue occurs, promote the read replica and configure it as an Amazon RDS Multi-AZ database instance. Update the DNS record to point to the other Region's ELB.
B. Configure a 1-day window of 60-minute snapshots of the Amazon RDS Multi-AZ database instance. Create an AWS CloudFormation template of the application infrastructure that uses the latest snapshot. When an issue occurs, use the AWS CloudFormation template to create the environment in another Region. Update the DNS record to point to the other Region's ELB.
C. Configure a 1-day window of 60-minute snapshots of the Amazon RDS Multi-AZ database instance, which is copied to another Region. Create an AWS CloudFormation template of the application infrastructure that uses the latest copied snapshot. When an issue occurs, use the AWS CloudFormation template to create the environment in another Region. Update the DNS record to point to the other Region's ELB.
D. Configure a read replica in another Region. Create an AWS CloudFormation template of the application infrastructure. When an issue occurs, promote the read replica and configure it as an Amazon RDS Multi-AZ database instance, and use the AWS CloudFormation template to create the environment in another Region using the promoted Amazon RDS instance. Update the DNS record to point to the other Region's ELB.
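Option C's cross-Region snapshot copy can be sketched in boto3 by calling the destination Region with the source snapshot ARN; the Regions and identifiers here are hypothetical.
```python
import boto3

# Call the DR Region and reference the source snapshot by ARN.
rds_dr = boto3.client("rds", region_name="us-west-2")   # hypothetical DR Region

rds_dr.copy_db_snapshot(
    SourceDBSnapshotIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:snapshot:rds:mydb-2024-01-01-06-00"
    ),
    TargetDBSnapshotIdentifier="mydb-dr-copy-2024-01-01",
    SourceRegion="us-east-1",   # boto3 generates the required pre-signed URL
)
```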
A retail company runs a business-critical web service on an Amazon Elastic Container Service (Amazon ECS) cluster that runs on Amazon EC2 instances. The web service receives POST requests from end users and writes data to a MySQL database that runs on a separate EC2 instance. The company needs to ensure that data loss does not occur.
The current code deployment process includes manual updates of the ECS service. During a recent deployment, end users encountered intermittent 502 Bad Gateway errors in response to valid web requests. The company wants to implement a reliable solution to prevent this issue from recurring. The company also wants to automate code deployments. The solution must be highly available and must optimize cost-effectiveness.
Which combination of steps will meet these requirements? (Select THREE.)
A. Run the web service on an ECS cluster that has a Fargate launch type. Use AWS CodePipeline and AWS CodeDeploy to perform a blue/green deployment with validation testing to update the ECS service.
B. Migrate the MySQL database to run on an Amazon RDS for MySQL Multi-AZ DB instance that uses Provisioned IOPS SSD (io2) storage.
C. Configure an Amazon Simple Queue Service (Amazon SQS) queue as an event source to receive the POST requests from the web service. Configure an AWS Lambda function to poll the queue. Write the data to the database.
D. Run the web service on an ECS cluster that has a Fargate launch type. Use AWS CodePipeline and AWS CodeDeploy to perform a canary deployment to update the ECS service.
A finance company is storing financial records in an Amazon S3 bucket. The company persists a record for every financial transaction. According to regulatory requirements, the records cannot be modified for at least 1 year after they are written. The records are read on a regular basis and must be immediately accessible.
Which solution will meet these requirements?
A. Create a new S3 bucket. Turn on S3 Object Lock, set a default retention period of 1 year, and set the retention mode to compliance mode. Store all records in the new S3 bucket.
B. Create an S3 Lifecycle rule to immediately transfer new objects to the S3 Glacier storage tier. Create an S3 Glacier Vault Lock policy that has a retention period of 1 year.
C. Create an S3 Lifecycle rule to immediately transfer new objects to the S3 Intelligent-Tiering storage tier. Set a retention period of 1 year.
D. Create an S3 bucket policy with a Deny action for PutObject operations with a condition where the s3:x-amz-object-retention header is not equal to 1 year.
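A sketch of option A in boto3; the bucket name is a hypothetical placeholder. Object Lock must be enabled when the bucket is created, and compliance mode means not even the root user can shorten or remove the retention.
```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
BUCKET = "financial-records-example"   # hypothetical bucket name

# Object Lock can only be enabled at bucket creation time.
s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

# Every new object version is now locked in compliance mode for 1 year.
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 1}},
    },
)
```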
A company wants to allow its marketing team to perform SQL queries on customer records to identify market segments. The data is spread across hundreds of files. The records must be encrypted in transit and at rest. The team manager must have the ability to manage users and groups, but no team members should have access to services or resources not required for the SQL queries. Additionally, administrators need to audit the queries made and receive notifications when a query violates rules defined by the security team.
AWS Organizations has been used to create a new account and an AWS IAM user with administrator permissions for the team manager.
Which design meets these requirements?
A. Apply a service control policy (SCP) that allows access to IAM, Amazon RDS, and AWS CloudTrail. Load customer records in Amazon RDS MySQL and train users to run queries using the AWS CLI. Stream the query logs to Amazon CloudWatch Logs from the RDS database instance. Use a subscription filter with AWS Lambda functions to audit and alarm on queries against personal data.
B. Apply a service control policy (SCP) that denies access to all services except IAM, Amazon Athena, Amazon S3, and AWS CloudTrail. Store customer record files in Amazon S3 and train users to run queries using the CLI via Athena. Analyze CloudTrail events to audit and alarm on queries against personal data.
C. Apply a service control policy (SCP) that denies access to all services except IAM, Amazon DynamoDB, and AWS CloudTrail. Store customer records in DynamoDB and train users to run queries using the AWS CLI. Enable DynamoDB Streams to track the queries that are issued and use an AWS Lambda function for real-time monitoring and alerting.
D. Apply a service control policy (SCP) that allows access to IAM, Amazon Athena, Amazon S3, and AWS CloudTrail. Store customer records as files in Amazon S3 and train users to leverage the Amazon S3 Select feature and run queries using the AWS CLI. Enable S3 object-level logging and analyze CloudTrail events to audit and alarm on queries against personal data.
A company is deploying a third-party firewall appliance solution from AWS Marketplace to monitor and protect traffic that leaves the company's AWS environments. The company wants to deploy this appliance into a shared services VPC and route all outbound internet-bound traffic through the appliances.
A solutions architect needs to recommend a deployment method that prioritizes reliability and minimizes failover time between firewall appliances within a single AWS Region. The company has set up routing from the shared services VPC to other VPCs.
Which steps should the solutions architect recommend to meet these requirements? (Select THREE.)
A. Deploy two firewall appliances into the shared services VPC, each in a separate Availability Zone.
B. Create a new Network Load Balancer in the shared services VPC. Create a new target group, and attach it to the new Network Load Balancer. Add each of the firewall appliance instances to the target group.
C. Create a new Gateway Load Balancer in the shared services VPC. Create a new target group, and attach it to the new Gateway Load Balancer. Add each of the firewall appliance instances to the target group.
D. Create a VPC interface endpoint. Add a route to the route table in the shared services VPC. Designate the new endpoint as the next hop for traffic that enters the shared services VPC from other VPCs.
E. Deploy two firewall appliances into the shared services VPC, each in the same Availability Zone.
A company is migrating an on-premises content management system (CMS) to AWS Fargate. The company uses the CMS for blog posts that include text, images, and videos. The company has observed that traffic to blog posts drops by more than 80% after the posts are more than 30 days old.
The CMS runs on multiple VMs and stores application state on disk. This application state is shared across all instances across multiple Availability Zones. Images and other media are stored on a separate NFS file share. The company needs to reduce the costs of the existing solution while minimizing the impact on performance.
Which combination of steps will meet these requirements MOST cost-effectively? (Select TWO.)
A. Store media in an Amazon S3 Standard bucket. Create an S3 Lifecycle configuration that transitions objects that are older than 30 days to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class.
B. Store media on an Amazon Elastic File System (Amazon EFS) volume. Attach the EFS volume to all Fargate instances.
C. Store application state on an Amazon Elastic File System (Amazon EFS) volume. Attach the EFS volume to all Fargate instances.
D. Store application state on an Amazon Elastic Block Store (Amazon EBS) volume. Attach the EBS volume to all Fargate instances.
E. Store media in an Amazon S3 Standard bucket. Create an S3 Lifecycle configuration that transitions objects that are older than 30 days to the S3 Glacier storage class.
A solutions architect uses AWS Organizations to manage several AWS accounts for a company. The full Organizations feature set is activated for the organization. All production AWS accounts exist under an OU that is named "production". Systems operators have full administrative privileges within these accounts by using IAM roles.
The company wants to ensure that security groups in all production accounts do not allow inbound traffic for TCP port 22. All noncompliant security groups must be remediated immediately, and no new rules that allow port 22 can be created.
Which solution will meet these requirements?
A. Write an SCP that denies the CreateSecurityGroup action with a condition of ec2:ingress rule with value 22. Apply the SCP to the "production" OU.
B. Configure an AWS CloudTrail trail for all accounts. Send CloudTrail logs to an Amazon S3 bucket in the Organizations management account. Configure an AWS Lambda function in the management account with permissions to assume a role in all production accounts to describe and modify security groups. Configure Amazon S3 to invoke the Lambda function on every PutObject event on the S3 bucket. Configure the Lambda function to analyze each CloudTrail event for noncompliant security group actions and to automatically remediate any issues.
C. Create an Amazon EventBridge (Amazon CloudWatch Events) event bus in the Organizations management account. Create an AWS CloudFormation template to deploy configurations that send CreateSecurityGroup events to the event bus from all production accounts. Configure an AWS Lambda function in the management account with permissions to assume a role in all production accounts to describe and modify security groups. Configure the event bus to invoke the Lambda function. Configure the Lambda function to analyze each event for noncompliant security group actions and to automatically remediate any issues.
D. Create an AWS CloudFormation template to turn on AWS Config. Activate the INCOMING_SSH_DISABLED AWS Config managed rule. Deploy an AWS Lambda function that will run based on AWS Config findings and will remediate noncompliant resources. Deploy the CloudFormation template by using a StackSet that is assigned to the "production" OU. Apply an SCP to the OU to deny modification of the resources that the CloudFormation template provisions.
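The AWS Config piece of option D can be sketched with the managed rule identifier INCOMING_SSH_DISABLED; attaching the remediation Lambda function and deploying through a StackSet are omitted here.
```python
import boto3

config = boto3.client("config")

# Managed rule that marks security groups NONCOMPLIANT when they allow
# unrestricted inbound traffic on TCP port 22.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "restricted-ssh",
        "Source": {"Owner": "AWS", "SourceIdentifier": "INCOMING_SSH_DISABLED"},
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::SecurityGroup"]},
    }
)
```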
A company is launching a web-based application in multiple Regions around the world. The application consists of both static content stored in a private Amazon S3 bucket and dynamic content served by Amazon ECS containers behind an Application Load Balancer (ALB). The company requires that the static and dynamic application content be accessible through Amazon CloudFront only.
Which combination of steps should a solutions architect recommend to restrict direct content access to CloudFront? (Select THREE.)
A. Create a web ACL in AWS WAF with a rule to validate the presence of a custom header, and associate the web ACL with the ALB
B. Create a web ACL in AWS WAF with a rule to validate the presence of a custom header, and associate the web ACL with the CloudFront distribution
C. Configure CloudFront to add a custom header to origin requests
D. Configure the ALB to add a custom header to HTTP requests
E. Update the S3 bucket ACL to allow access from the CloudFront distribution only
F. Create a CloudFront Origin Access Identity (OAI) and add it to the CloudFront distribution. Update the S3 bucket policy to allow access to the OAI only
A company has more than 10,000 sensors that send data to an on-premises Apache Kafka server by using the Message Queuing Telemetry Transport (MQTT) protocol. The on-premises Kafka server transforms the data and then stores the results as objects in an Amazon S3 bucket.
Recently, the Kafka server crashed. The company lost sensor data while the server was being restored. A solutions architect must create a new design on AWS that is highly available and scalable to prevent a similar occurrence.
Which solution will meet these requirements?
A. Launch two Amazon EC2 instances to host the Kafka server in an active/standby configuration across two Availability Zones. Create a domain name in Amazon Route 53. Create a Route 53 failover policy. Route the sensors to send the data to the domain name.
B. Migrate the on-premises Kafka server to Amazon Managed Streaming for Apache Kafka (Amazon MSK). Create a Network Load Balancer (NLB) that points to the Amazon MSK broker. Enable NLB health checks. Route the sensors to send the data to the NLB.
C. Deploy AWS IoT Core, and connect it to an Amazon Kinesis Data Firehose delivery stream. Use an AWS Lambda function to handle data transformation. Route the sensors to send the data to AWS IoT Core.
D. Deploy AWS IoT Core, and launch an Amazon EC2 instance to host the Kafka server. Configure AWS IoT Core to send the data to the EC2 instance. Route the sensors to send the data to AWS IoT Core.
A company's AWS architecture currently uses access keys and secret access keys stored on each instance to access AWS services. Database credentials are hard-coded on each instance. SSH keys for command-line remote access are stored in a secured Amazon S3 bucket. The company has asked its solutions architect to improve the security posture of the architecture without adding operational complexity.
Which combination of steps should the solutions architect take to accomplish this? (Select THREE.)
A. Use Amazon EC2 instance profiles with an IAM role
B. Use AWS Secrets Manager to store access keys and secret access keys
C. Use AWS Systems Manager Parameter Store to store database credentials
D. Use a secure fleet of Amazon EC2 bastion hosts for remote access
E. Use AWS KMS to store database credentials
F. Use AWS Systems Manager Session Manager for remote access
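Options A, C, and F pair naturally: the instance profile removes static access keys, Parameter Store keeps the database credentials off disk, and Session Manager removes the need for SSH keys entirely. A minimal Parameter Store sketch with a hypothetical parameter name and placeholder value:
```python
import boto3

ssm = boto3.client("ssm")

# Store the database password encrypted with the account's default KMS key.
ssm.put_parameter(
    Name="/app/prod/db-password",   # hypothetical parameter name
    Value="example-password",       # placeholder value
    Type="SecureString",
    Overwrite=True,
)

# At runtime, an EC2 instance profile role with ssm:GetParameter (and kms:Decrypt)
# reads the credential back without it ever being written to disk.
param = ssm.get_parameter(Name="/app/prod/db-password", WithDecryption=True)
db_password = param["Parameter"]["Value"]
```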
A company has developed a new release of a popular video game and wants to make it available for public download. The new release package is approximately 5 GB in size. The company provides downloads for existing releases from a Linux-based, publicly facing FTP site hosted in an on-premises data center. The company expects the new release will be downloaded by users worldwide. The company wants a solution that provides improved download performance and low transfer costs, regardless of a user's location.
Which solution will meet these requirements?
A. Store the game files on Amazon EBS volumes mounted on Amazon EC2 instances within an Auto Scaling group. Configure an FTP service on the EC2 instances. Use an Application Load Balancer in front of the Auto Scaling group. Publish the game download URL for users to download the package.
B. Store the game files on Amazon EFS volumes that are attached to Amazon EC2 instances within an Auto Scaling group. Configure an FTP service on each of the EC2 instances. Use an Application Load Balancer in front of the Auto Scaling group. Publish the game download URL for users to download the package.
C. Configure Amazon Route 53 and an Amazon S3 bucket for website hosting. Upload the game files to the S3 bucket. Use Amazon CloudFront for the website. Publish the game download URL for users to download the package.
D. Configure Amazon Route 53 and an Amazon S3 bucket for website hosting. Upload the game files to the S3 bucket. Set Requester Pays for the S3 bucket. Publish the game download URL for users to download the package.
A company is planning to migrate its on-premises data analysis application to AWS. The application is hosted across a fleet of servers and requires consistent system time.
The company has established an AWS Direct Connect connection from its on-premises data center to AWS. The company has a high-precision stratum-0 atomic clock network appliance that acts as an NTP source for all on-premises servers.
After the migration to AWS is complete, the clock on all Amazon EC2 instances that host the application must be synchronized with the on-premises atomic clock network appliance.
Which solution will meet these requirements with the LEAST administrative overhead?
A. Configure a DHCP options set with the on-premises NTP server address. Assign the options set to the VPC. Ensure that NTP traffic is allowed between AWS and the on-premises networks.
B. Create a custom AMI to use the Amazon Time Sync Service at 169.254.169.123. Use this AMI for the application. Use AWS Config to audit the NTP configuration.
C. Deploy a third-party time server from the AWS Marketplace. Configure the time server to synchronize with the on-premises atomic clock network appliance. Ensure that NTP traffic is allowed inbound in the network ACLs for the VPC that contains the third-party server.
D. Create an IPsec VPN tunnel from the on-premises atomic clock network appliance to the VPC to encrypt the traffic over the Direct Connect connection. Configure the VPC route tables to direct NTP traffic over the tunnel.
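Option A amounts to a custom DHCP options set. A boto3 sketch with hypothetical IDs and NTP address; note that a custom options set replaces all options, so the DNS entry must be carried over.
```python
import boto3

ec2 = boto3.client("ec2")

# Custom DHCP options pointing instances at the on-premises NTP appliance.
options = ec2.create_dhcp_options(
    DhcpConfigurations=[
        {"Key": "ntp-servers", "Values": ["10.0.100.5"]},                 # hypothetical NTP IP
        {"Key": "domain-name-servers", "Values": ["AmazonProvidedDNS"]},  # keep VPC DNS working
    ]
)

ec2.associate_dhcp_options(
    DhcpOptionsId=options["DhcpOptions"]["DhcpOptionsId"],
    VpcId="vpc-0123456789abcdef0",                                        # hypothetical VPC ID
)
```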
A development team has created a new flight tracker application that provides near-real-time data to users. The application has a front end that consists of an Application Load Balancer (ALB) in front of two large Amazon EC2 instances in a single Availability Zone. Data is stored in a single Amazon RDS MySQL DB instance. An Amazon Route 53 DNS record points to the ALB.
Management wants the development team to improve the solution to achieve maximum reliability with the least amount of operational overhead.
Which set of actions should the team take?
A. Create RDS MySQL read replicas. Deploy the application to multiple AWS Regions. Use a Route 53 latency-based routing policy to route to the application.
B. Configure the DB instance as Multi-AZ. Deploy the application to two additional EC2 instances in different Availability Zones behind an ALB.
C. Replace the DB instance with Amazon DynamoDB global tables. Deploy the application in multiple AWS Regions. Use a Route 53 latency-based routing policy to route to the application.
D. Replace the DB instance with Amazon Aurora with Aurora Replicas. Deploy the application to multiple smaller EC2 instances across multiple Availability Zones in an Auto Scaling group behind an ALB.
A company is creating a sequel for a popular online game. A large number of users from all over the world will play the game within the first week after launch. Currently, the game consists of the following components deployed in a single AWS Region:
• Amazon S3 bucket that stores game assets
• Amazon DynamoDB table that stores player scores
A solutions architect needs to design a multi-Region solution that will reduce latency, improve reliability, and require the least effort to implement.
What should the solutions architect do to meet these requirements?
A. Create an Amazon CloudFront distribution to serve assets from the S3 bucket. Configure S3 Cross-Region Replication. Create a new DynamoDB table in a new Region. Use the new table as a replica target for DynamoDB global tables.
B. Create an Amazon CloudFront distribution to serve assets from the S3 bucket. Configure S3 Same-Region Replication. Create a new DynamoDB table in a new Region. Configure asynchronous replication between the DynamoDB tables by using AWS Database Migration Service (AWS DMS) with change data capture (CDC).
C. Create another S3 bucket in a new Region, and configure S3 Cross-Region Replication between the buckets. Create an Amazon CloudFront distribution and configure origin failover with two origins accessing the S3 buckets in each Region. Configure DynamoDB global tables by enabling Amazon DynamoDB Streams, and add a replica table in a new Region.
D. Create another S3 bucket in the same Region, and configure S3 Same-Region Replication between the buckets. Create an Amazon CloudFront distribution and configure origin failover with two origins accessing the S3 buckets. Create a new DynamoDB table in a new Region. Use the new table as a replica target for DynamoDB global tables.
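Adding the global-table replica from option C is a single update_table call once DynamoDB Streams is enabled; the table and Region names are hypothetical. A minimal sketch:
```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")
TABLE = "player-scores"   # hypothetical table name

# Global tables (version 2019.11.21) require streams with new and old images.
dynamodb.update_table(
    TableName=TABLE,
    StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_AND_OLD_IMAGES"},
)
dynamodb.get_waiter("table_exists").wait(TableName=TABLE)   # wait until ACTIVE again

# Add a replica in the second Region (hypothetical choice of eu-west-1).
dynamodb.update_table(
    TableName=TABLE,
    ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
)
```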
A company is migrating its marketing website and content management system from an on-premises data center to AWS. The company wants the AWS application to be deployed in a VPC with Amazon EC2 instances used for the web servers and an Amazon RDS instance for the database.
The company has a runbook document that describes the installation process of the on-premises system. The company would like to base the AWS system on the processes referenced in the runbook document. The runbook document describes the installation and configuration of the operating systems, network settings, the website, and content management system software on the servers. After the migration is complete, the company wants to be able to make changes quickly to take advantage of other AWS features.
How can the application and environment be deployed and automated in AWS, while allowing for future changes?
A. Update the runbook to describe how to create the VPC, the EC2 instances, and the RDS instance for the application by using the AWS Console. Make sure that the rest of the steps in the runbook are updated to reflect any changes that may come from the AWS migration.
B. Write a Python script that uses the AWS API to create the VPC, the EC2 instances, and the RDS instance for the application. Write shell scripts that implement the rest of the steps in the runbook. Have the Python script copy and run the shell scripts on the newly created instances to complete the installation.
C. Write an AWS CloudFormation template that creates the VPC, the EC2 instances, and the RDS instance for the application. Ensure that the rest of the steps in the runbook are updated to reflect any changes that may come from the AWS migration.
D. Write an AWS CloudFormation template that creates the VPC, the EC2 instances, and the RDS instance for the application. Include EC2 user data in the AWS CloudFormation template to install and configure the software.
A company has an application that runs on Amazon EC2 instances in an Amazon EC2 Auto Scaling group. The company uses AWS CodePipeline to deploy the application. The instances that run in the Auto Scaling group are constantly changing because of scaling events.
When the company deploys new application code versions, the company installs the AWS CodeDeploy agent on any new target EC2 instances and associates the instances with the CodeDeploy deployment group. The application is set to go live within the next 24 hours.
What should a solutions architect recommend to automate the application deployment process with the LEAST amount of operational overhead?
A. Configure Amazon EventBridge (Amazon CloudWatch Events) to invoke an AWS Lambda function when a new EC2 instance is launched into the Auto Scaling group. Code the Lambda function to associate the EC2 instances with the CodeDeploy deployment group.
B. Write a script to suspend Amazon EC2 Auto Scaling operations before the deployment of new code. When the deployment is complete, create a new AMI and configure the Auto Scaling group's launch template to use the new AMI for new launches. Resume Amazon EC2 Auto Scaling operations.
C. Create a new AWS CodeBuild project that creates a new AMI that contains the new code. Configure CodeBuild to update the Auto Scaling group's launch template to the new AMI. Run an Amazon EC2 Auto Scaling instance refresh operation.
D. Create a new AMI that has the CodeDeploy agent installed. Configure the Auto Scaling group's launch template to use the new AMI. Associate the CodeDeploy deployment group with the Auto Scaling group instead of the EC2 instances.
A solutions architect has been assigned to migrate a 50 TB Oracle data warehouse that contains sales data from on-premises to Amazon Redshift. Major updates to the sales data occur on the final calendar day of the month. For the remainder of the month, the data warehouse only receives minor daily updates and is primarily used for reading and reporting. Because of this, the migration process must start on the first day of the month and must be complete before the next set of updates occurs. This provides approximately 30 days to complete the migration and ensure that the minor daily changes have been synchronized with the Amazon Redshift data warehouse. Because the migration cannot impact normal business network operations, the bandwidth allocated to the migration for moving data over the internet is 50 Mbps. The company wants to keep data migration costs low.
Which steps will allow the solutions architect to perform the migration within the specified timeline?
A. Install Oracle database software on an Amazon EC2 instance. Configure VPN connectivity between AWS and the company's data center. Configure the Oracle database running on Amazon EC2 to join the Oracle Real Application Clusters (RAC). When the Oracle database on Amazon EC2 finishes synchronizing, create an AWS DMS ongoing replication task to migrate the data from the Oracle database on Amazon EC2 to Amazon Redshift. Verify the data migration is complete and perform the cutover to Amazon Redshift.
B. Create an AWS Snowball import job. Export a backup of the Oracle data warehouse. Copy the exported data to the Snowball device. Return the Snowball device to AWS. Create an Amazon RDS for Oracle database and restore the backup file to that RDS instance. Create an AWS DMS task to migrate the data from the RDS for Oracle database to Amazon Redshift. Copy daily incremental backups from Oracle in the data center to the RDS for Oracle database over the internet. Verify the data migration is complete and perform the cutover to Amazon Redshift.
C. Install Oracle database software on an Amazon EC2 instance. To minimize the migration time, configure VPN connectivity between AWS and the company's data center by provisioning a 1 Gbps AWS Direct Connect connection. Configure the Oracle database running on Amazon EC2 to be a read replica of the data center Oracle database. Start the synchronization process between the company's on-premises data center and the Oracle database on Amazon EC2. When the Oracle database on Amazon EC2 is synchronized with the on-premises database, create an AWS DMS ongoing replication task from the Oracle database read replica that is running on Amazon EC2 to Amazon Redshift. Verify the data migration is complete and perform the cutover to Amazon Redshift.
D. Create an AWS Snowball import job. Configure a server in the company's data center with an extraction agent. Use AWS SCT to manage the extraction agent and convert the Oracle schema to an Amazon Redshift schema. Create a new project in AWS SCT using the registered data extraction agent. Create a local task and an AWS DMS task in AWS SCT with replication of ongoing changes. Copy data to the Snowball device and return the Snowball device to AWS. Allow AWS DMS to copy data from Amazon S3 to Amazon Redshift. Verify that the data migration is complete and perform the cutover to Amazon Redshift.
During an audit, a security team discovered that a development team was putting IAM user secret access keys in their code and then committing it to an AWS CodeCommit repository. The security team wants to automatically find and remediate instances of this security vulnerability.
Which solution will ensure that the credentials are appropriately secured automatically?
A. Run a script nightly using AWS Systems Manager Run Command to search for credentials on the development instances. If found, use AWS Secrets Manager to rotate the credentials.
B. Use a scheduled AWS Lambda function to download and scan the application code from CodeCommit. If credentials are found, generate new credentials and store them in AWS KMS.
C. Configure Amazon Macie to scan for credentials in CodeCommit repositories. If credentials are found, trigger an AWS Lambda function to disable the credentials and notify the user.
D. Configure a CodeCommit trigger to invoke an AWS Lambda function to scan new code submissions for credentials. If credentials are found, disable them in AWS IAM and notify the user.
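A sketch of the Lambda handler behind option D, assuming the standard CodeCommit trigger event shape. The regex only catches access key IDs in the canonical AKIA... format, pagination of large diffs is omitted, and the notification step is left as a comment.
```python
import re
import boto3

codecommit = boto3.client("codecommit")
iam = boto3.client("iam")

ACCESS_KEY_RE = re.compile(rb"AKIA[0-9A-Z]{16}")  # shape of an IAM access key ID

def handler(event, context):
    """Invoked by a CodeCommit trigger on each push."""
    for record in event["Records"]:
        repo = record["eventSourceARN"].split(":")[5]
        for ref in record["codecommit"]["references"]:
            diffs = codecommit.get_differences(
                repositoryName=repo, afterCommitSpecifier=ref["commit"]
            )
            for diff in diffs["differences"]:
                blob = diff.get("afterBlob")
                if not blob:
                    continue  # file was deleted in this commit
                content = codecommit.get_blob(
                    repositoryName=repo, blobId=blob["blobId"]
                )["content"]
                for match in ACCESS_KEY_RE.findall(content):
                    key_id = match.decode()
                    user = iam.get_access_key_last_used(AccessKeyId=key_id)["UserName"]
                    iam.update_access_key(
                        UserName=user, AccessKeyId=key_id, Status="Inactive"
                    )
                    # notify_user(user, repo, blob["path"])  # e.g. via Amazon SNS (not shown)
```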
A company wants to deploy an API to AWS. The company plans to run the API on AWS Fargate behind a load balancer. The API requires the use of header-based routing and must be accessible from on-premises networks through an AWS Direct Connect connection and a private VIF.
The company needs to add the client IP addresses that connect to the API to an allow list in AWS. The company also needs to add the IP addresses of the API to the allow list. The company's security team will allow /27 CIDR ranges to be added to the allow list. The solution must minimize complexity and operational overhead.
Which solution will meet these requirements?
A. Create a new Network Load Balancer (NLB) in the same subnets as the Fargate task deployments. Create a security group that includes only the client IP addresses that need access to the API. Attach the new security group to the Fargate tasks. Provide the security team with the NLB's IP addresses for the allow list.
B. Create two new /27 subnets. Create a new Application Load Balancer (ALB) that extends across the new subnets. Create a security group that includes only the client IP addresses that need access to the API. Attach the security group to the ALB. Provide the security team with the new subnet IP ranges for the allow list.
C. Create two new /27 subnets. Create a new Network Load Balancer (NLB) that extends across the new subnets. Create a new Application Load Balancer (ALB) within the new subnets. Create a security group that includes only the client IP addresses that need access to the API. Attach the security group to the ALB. Add the ALB's IP addresses as targets behind the NLB. Provide the security team with the NLB's IP addresses for the allow list.
D. Create a new Application Load Balancer (ALB) in the same subnets as the Fargate task deployments. Create a security group that includes only the client IP addresses that need access to the API. Attach the security group to the ALB. Provide the security team with the ALB's IP addresses for the allow list.
A software company is using three AWS accounts for each of its 10 development teams. The company has developed an AWS CloudFormation standard VPC template that includes three NAT gateways. The template is added to each account for each team. The company is concerned that network costs will increase each time a new development team is added. A solutions architect must maintain the reliability of the company's solutions and minimize operational complexity.
What should the solutions architect do to reduce the network costs while meeting these requirements?
A. Create a single VPC with three NAT gateways in a shared services account. Configure each account VPC with a default route through a transit gateway to the NAT gateway in the shared services account VPC. Remove all NAT gateways from the standard VPC template.
B. Create a single VPC with three NAT gateways in a shared services account. Configure each account VPC with a default route through a VPC peering connection to the NAT gateway in the shared services account VPC. Remove all NAT gateways from the standard VPC template.
C. Remove two NAT gateways from the standard VPC template. Rely on the NAT gateway SLA to cover reliability for the remaining NAT gateway.
D. Create a single VPC with three NAT gateways in a shared services account. Configure a Site-to-Site VPN connection from each account to the shared services account. Remove all NAT gateways from the standard VPC template.
A company is migrating an application to the AWS Cloud. The application runs in an on-premises data center and writes thousands of images into a mounted NFS file system each night. After the company migrates the application, the company will host the application on an Amazon EC2 instance with a mounted Amazon Elastic File System (Amazon EFS) file system.
The company has established an AWS Direct Connect connection to AWS. Before the migration cutover, a solutions architect must build a process that will replicate the newly created on-premises images to the EFS file system.
What is the MOST operationally efficient way to replicate the images?
A. Configure a periodic process to run the aws s3 sync command from the on-premises file system to Amazon S3. Configure an AWS Lambda function to process event notifications from Amazon S3 and copy the images from Amazon S3 to the EFS file system.
B. Deploy an AWS Storage Gateway file gateway with an NFS mount point. Mount the file gateway file system on the on-premises server. Configure a process to periodically copy the images to the mount point.
C. Deploy an AWS DataSync agent to an on-premises server that has access to the NFS file system. Send data over the Direct Connect connection to an S3 bucket by using a public VIF. Configure an AWS Lambda function to process event notifications from Amazon S3 and copy the images from Amazon S3 to the EFS file system.
D. Deploy an AWS DataSync agent to an on-premises server that has access to the NFS file system. Send data over the Direct Connect connection to an AWS PrivateLink interface VPC endpoint for Amazon EFS by using a private VIF. Configure a DataSync scheduled task to send the images to the EFS file system every 24 hours.
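Option D's scheduled task is one DataSync API call once the NFS and EFS locations exist; the location ARNs here are hypothetical.
```python
import boto3

datasync = boto3.client("datasync", region_name="us-east-1")

# Replicate the on-premises NFS share to the EFS file system every 24 hours.
task = datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:123456789012:location/loc-0aaaaaaaaaaaaaaaa",
    DestinationLocationArn="arn:aws:datasync:us-east-1:123456789012:location/loc-0bbbbbbbbbbbbbbbb",
    Name="nightly-image-replication",
    Schedule={"ScheduleExpression": "rate(24 hours)"},
)
print(task["TaskArn"])
```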
A company is migrating its infrastructure to the AWS Cloud. The company must comply with a variety of regulatory standards for different projects. The company needs a multi-account environment.
A solutions architect needs to prepare the baseline infrastructure. The solution must provide a consistent baseline of management and security, but it must allow flexibility for different compliance requirements within various AWS accounts. The solution also needs to integrate with the existing on-premises Active Directory Federation Services (AD FS) server.
Which solution meets these requirements with the LEAST amount of operational overhead?
A. Create an organization in AWS Organizations. Create a single SCP for least privilege access across all accounts. Create a single OU for all accounts. Configure an IAM identity provider for federation with the on-premises AD FS server. Configure a central logging account with a defined process for log-generating services to send log events to the central account. Enable AWS Config in the central account with conformance packs for all accounts.
B. Create an organization in AWS Organizations. Enable AWS Control Tower on the organization. Review included guardrails for SCPs. Check AWS Config for areas that require additions. Add OUs as necessary. Connect AWS Single Sign-On to the on-premises AD FS server.
C. Create an organization in AWS Organizations. Create SCPs for least privilege access. Create an OU structure, and use it to group AWS accounts. Connect AWS Single Sign-On to the on-premises AD FS server. Configure a central logging account with a defined process for log-generating services to send log events to the central account. Enable AWS Config in the central account with aggregators and conformance packs.
D. Create an organization in AWS Organizations. Enable AWS Control Tower on the organization. Review included guardrails for SCPs. Check AWS Config for areas that require additions. Configure an IAM identity provider for federation with the on-premises AD FS server.
The study guide for SAP-C01 is quite updated at DumpsSure. Helped a lot in passing my exam without any trouble. Thank you DumpsSure. Got 91% marks.
Ansari
Real exam questions & answers were in the pdf file for SAP-C01. I achieved 96% marks by studying from them. It was that simple. Cheers to DumpsSure.
Sundararajan
Brilliant pdf files for exam Q&A by DumpsSure.com for the Amazon SAP-C01 exam. I recently passed my exam with excellent grades. Credit goes to DumpsSure. Keep up the good work guys.
Najari
I am glad that I passed my SAP-C01 certification exam with 95% marks, and it is all because of DumpsSure. I haven't seen such an all-inclusive training material. I am thankful to DumpsSure for this helpful learning material.
omax
Highly recommend exam dumps and online test engine by DumpsSure. Very similar to the real SAP-C01 exam. Passed with flying colors.
Negi
Awesome exam practice software for the SAP-C01 exam. DumpsSure helped me score 91% marks in the exam. I highly recommend everyone to use the exam practicing software and data dumps.
clark
I am totally satisfied with my purchase of DumpsSure's exam dumps. The performance and quality of Amazon SAP-C01 dumps PDF and exam engine was pretty awesome. It was an awesome experience learning and practicing in their 'exam mode'. I cleared my exam in one go, thank you!
Kostadinov