Secure Checkout

100% SECURE CHECKOUT

Buy your braindumps confidently with our SSL-secured checkout and safe payment methods.

Download Demo

DOWNLOAD 100% FREE DEMO

Download a free demo of your desired dumps with just one click before purchase. 100% signup-free demo.

Guarantee

100% MONEY BACK GUARANTEE

Get your certification on the first attempt or get 100% of your payment back according to our refund policy.

Customer Support

24/7 CUSTOMER SUPPORT

Resolve your issues and queries quickly with our dedicated 24/7 live customer support team.


Amazon SAP-C01 Dumps

We at Dumpssure certify that our platform is one of the most authentic websites for Amazon SAP-C01 exam questions and their correct answers. Pass your Amazon SAP-C01 exam with flying colors, and with little effort. With the purchase of this pack, you will also get free demo questions. We ensure your 100% success in the SAP-C01 exam with the help of our material.

DumpsSure offers a unique Online Test Engine where you can fully practice your SAP-C01 exam questions. This is a one-of-a-kind feature that our competitors do not provide. Candidates can practice the questions just as they would attempt them in the real examination.

Dumpssure also offers an exclusive 'Exam Mode' where you can attempt 50 random questions related to your SAP-C01 exam. This mode is exactly the same as the real SAP-C01 certification exam. Attempt all the questions within a limited time and test your knowledge on the spot. This mode will definitely give you an edge in the real exam.

Our success rate over the past 6 years is above 96%, which is quite impressive, and we're proud of it. Our customers are able to build their careers in any field they wish. Let's dive right in and make the best decision of your life right now. Choose the plan you want, download the SAP-C01 exam dumps, and start preparing for a successful professional career.

Why is Dumpssure the best choice for Amazon SAP-C01 exam preparation?

Dumpssure provides free Amazon SAP-C01 questions and answers for your practice. To avail this facility, you just need to sign up for a free account on Dumpssure. Thousands of customers from around the world are using our SAP-C01 dumps. You can get high grades by using these dumps, backed by a money back guarantee on the SAP-C01 dumps PDF.

A vital resource to help you pass your Amazon SAP-C01 exam

Our production experts have prepared material that can help you succeed in the Amazon SAP-C01 exam in as little as one day. They are so methodical and knowledgeable about the questions and their answers that you can get good marks in the Amazon SAP-C01 exam. DUMPSSURE is committed to helping you earn excellent marks.

Easy mobile access for users

The basic aim of Dumpssure is to provide the most important and most accurate material to our users. You just need to stay connected to the internet to get updates, even on your mobile. After purchasing, you can download the Amazon SAP-C01 study material in PDF format and read it easily wherever you wish to study.

Get Amazon SAP-C01 questions and answers instantly

Our material is regularly updated with new questions and answers for the Amazon exam dumps, so that you can easily review the questions and their answers and succeed on your first attempt.

Amazon SAP-C01 dumps are verified by diligent experts

We are keen to provide our users with questions that are verified by Amazon professionals, who are extremely skilled and have spent many years in this field.

Money Back Guarantee

Dumpssure is so devoted to our customers that we provide the most important and latest questions to help you pass the Amazon SAP-C01 exam. If you have purchased the complete SAP-C01 dumps PDF file and have not received the promised results on the Amazon exam, you can either replace your exam or claim a refund under our money back policy, which is very simple. For more detail, visit the Guarantee Page.

Amazon SAP-C01 Sample Questions

Question # 1

A car rental company has built a serverless REST API to provide data to its mobile app. The app consists of an Amazon API Gateway API with a Regional endpoint, AWS Lambda functions, and an Amazon Aurora MySQL Serverless DB cluster. The company recently opened the API to mobile apps of partners. A significant increase in the number of requests resulted, causing sporadic database memory errors. Analysis of the API traffic indicates that clients are making multiple HTTP GET requests for the same queries in a short period of time. Traffic is concentrated during business hours, with spikes around holidays and other events. The company needs to improve its ability to support the additional usage while minimizing the increase in costs associated with the solution. Which strategy meets these requirements?

A. Convert the API Gateway Regional endpoint to an edge-optimized endpoint. Enable caching in the production stage.
B. Implement an Amazon ElastiCache for Redis cache to store the results of the database calls. Modify the Lambda functions to use the cache.
C. Modify the Aurora Serverless DB cluster configuration to increase the maximum amount of available memory.
D. Enable throttling in the API Gateway production stage. Set the rate and burst values to limit the incoming calls.
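
For reference, stage-level response caching can be enabled through the API Gateway API. Below is a minimal boto3 sketch of the caching approach mentioned in option A; the API ID, stage name, cache size, and TTL are illustrative assumptions, not values from the question.

import boto3

apigw = boto3.client("apigateway")

# Provision a stage cache and cache all method responses, so repeated
# identical GET requests stop reaching Lambda and the Aurora cluster.
apigw.update_stage(
    restApiId="a1b2c3d4e5",   # hypothetical REST API ID
    stageName="prod",         # hypothetical stage name
    patchOperations=[
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},  # GB
        {"op": "replace", "path": "/*/*/caching/enabled", "value": "true"},
        {"op": "replace", "path": "/*/*/caching/ttlInSeconds", "value": "300"},
    ],
)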



Question # 2

A company is migrating its infrastructure to the AWS Cloud. The company must comply with a variety of regulatory standards for different projects. The company needs a multi-account environment. A solutions architect needs to prepare the baseline infrastructure. The solution must provide a consistent baseline of management and security, but it must allow flexibility for different compliance requirements within various AWS accounts. The solution also needs to integrate with the existing on-premises Active Directory Federation Services (AD FS) server. Which solution meets these requirements with the LEAST amount of operational overhead?

A. Create an organization in AWS Organizations. Create a single SCP for least privilege access across all accounts. Create a single OU for all accounts. Configure an IAM identity provider for federation with the on-premises AD FS server. Configure a central logging account with a defined process for log-generating services to send log events to the central account. Enable AWS Config in the central account with conformance packs for all accounts.
B. Create an organization in AWS Organizations. Enable AWS Control Tower on the organization. Review the included guardrails for SCPs. Check AWS Config for areas that require additions. Add OUs as necessary. Connect AWS Single Sign-On to the on-premises AD FS server.
C. Create an organization in AWS Organizations. Create SCPs for least privilege access. Create an OU structure, and use it to group AWS accounts. Connect AWS Single Sign-On to the on-premises AD FS server. Configure a central logging account with a defined process for log-generating services to send log events to the central account. Enable AWS Config in the central account with aggregators and conformance packs.
D. Create an organization in AWS Organizations. Enable AWS Control Tower on the organization. Review the included guardrails for SCPs. Check AWS Config for areas that require additions. Configure an IAM identity provider for federation with the on-premises AD FS server.



Question # 3

A company's solutions architect is designing a disaster recovery (DR) solution for an application that runs on AWS. The application uses PostgreSQL 11.7 as its database. The company has an RPO of 30 seconds. The solutions architect must design a DR solution with the primary database in the us-east-1 Region and the DR database in the us-west-2 Region. What should the solutions architect do to meet these requirements with minimal application change?

A. Migrate the database to Amazon RDS for PostgreSQL in us-east-1. Set up a read replica in us-west-2. Set the managed RPO for the RDS database to 30 seconds.
B. Migrate the database to Amazon RDS for PostgreSQL in us-east-1. Set up a standby replica in an Availability Zone in us-west-2. Set the managed RPO for the RDS database to 30 seconds.
C. Migrate the database to an Amazon Aurora PostgreSQL global database with the primary Region as us-east-1 and the secondary Region as us-west-2. Set the managed RPO for the Aurora database to 30 seconds.
D. Migrate the database to Amazon DynamoDB in us-east-1. Set up global tables with replica tables that are created in us-west-2.
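
As background for the managed RPO wording in the options: for an Aurora PostgreSQL global database, the recovery point objective is controlled by the rpo_lag_target DB cluster parameter (in seconds). A hedged boto3 sketch follows; the parameter group name is a placeholder assumption.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Set a 30-second managed RPO on the primary cluster's custom
# DB cluster parameter group ("aurora-pg-global-params" is assumed).
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="aurora-pg-global-params",
    Parameters=[
        {
            "ParameterName": "rpo_lag_target",
            "ParameterValue": "30",      # seconds
            "ApplyMethod": "immediate",
        }
    ],
)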



Question # 4

A solutions architect has been assigned to migrate a 50 TB Oracle data warehouse that contains sales data from on-premises to Amazon Redshift. Major updates to the sales data occur on the final calendar day of the month. For the remainder of the month, the data warehouse only receives minor daily updates and is primarily used for reading and reporting. Because of this, the migration process must start on the first day of the month and must be complete before the next set of updates occurs. This provides approximately 30 days to complete the migration and ensure that the minor daily changes have been synchronized with the Amazon Redshift data warehouse. Because the migration cannot impact normal business network operations, the bandwidth allocated to the migration for moving data over the internet is 50 Mbps. The company wants to keep data migration costs low. Which steps will allow the solutions architect to perform the migration within the specified timeline?

A. Install Oracle database software on an Amazon EC2 instance. Configure VPN connectivity between AWS and the company's data center. Configure the Oracle database running on Amazon EC2 to join the Oracle Real Application Clusters (RAC). When the Oracle database on Amazon EC2 finishes synchronizing, create an AWS DMS ongoing replication task to migrate the data from the Oracle database on Amazon EC2 to Amazon Redshift. Verify the data migration is complete and perform the cutover to Amazon Redshift.
B. Create an AWS Snowball import job. Export a backup of the Oracle data warehouse. Copy the exported data to the Snowball device. Return the Snowball device to AWS. Create an Amazon RDS for Oracle database and restore the backup file to that RDS instance. Create an AWS DMS task to migrate the data from the RDS for Oracle database to Amazon Redshift. Copy daily incremental backups from Oracle in the data center to the RDS for Oracle database over the internet. Verify the data migration is complete and perform the cutover to Amazon Redshift.
C. Install Oracle database software on an Amazon EC2 instance. To minimize the migration time, configure VPN connectivity between AWS and the company's data center by provisioning a 1 Gbps AWS Direct Connect connection. Configure the Oracle database running on Amazon EC2 to be a read replica of the data center Oracle database. Start the synchronization process between the company's on-premises data center and the Oracle database on Amazon EC2. When the Oracle database on Amazon EC2 is synchronized with the on-premises database, create an AWS DMS ongoing replication task from the Oracle database read replica that is running on Amazon EC2 to Amazon Redshift. Verify the data migration is complete and perform the cutover to Amazon Redshift.
D. Create an AWS Snowball import job. Configure a server in the company's data center with an extraction agent. Use AWS SCT to manage the extraction agent and convert the Oracle schema to an Amazon Redshift schema. Create a new project in AWS SCT using the registered data extraction agent. Create a local task and an AWS DMS task in AWS SCT with replication of ongoing changes. Copy data to the Snowball device and return the Snowball device to AWS. Allow AWS DMS to copy data from Amazon S3 to Amazon Redshift. Verify that the data migration is complete and perform the cutover to Amazon Redshift.



Question # 5

A financial services company loaded millions of historical stock trades into an Amazon DynamoDB table. The table uses on-demand capacity mode. Once each day at midnight, a few million new records are loaded into the table. Application read activity against the table happens in bursts throughout the day, and a limited set of keys are repeatedly looked up. The company needs to reduce costs associated with DynamoDB. Which strategy should a solutions architect recommend to meet this requirement?

A. Deploy an Amazon ElastiCache cluster in front of the DynamoDB table.
B. Deploy DynamoDB Accelerator (DAX). Configure DynamoDB auto scaling. Purchase Savings Plans in Cost Explorer.
C. Use provisioned capacity mode. Purchase Savings Plans in Cost Explorer.
D. Deploy DynamoDB Accelerator (DAX). Use provisioned capacity mode. Configure DynamoDB auto scaling.
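
For context on options C and D, switching a table from on-demand to provisioned capacity is a single API call. A minimal boto3 sketch, with a hypothetical table name and placeholder throughput values that would normally be derived from observed traffic:

import boto3

dynamodb = boto3.client("dynamodb")

# Move the table to provisioned capacity mode; auto scaling or
# Savings Plans would then be layered on top of this baseline.
dynamodb.update_table(
    TableName="historical-trades",   # assumed table name
    BillingMode="PROVISIONED",
    ProvisionedThroughput={
        "ReadCapacityUnits": 500,    # placeholder values
        "WriteCapacityUnits": 200,
    },
)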



Question # 6

A company is launching a web-based application in multiple Regions around the world. The application consists of both static content stored in a private Amazon S3 bucket and dynamic content served by Amazon ECS containers behind an Application Load Balancer (ALB). The company requires that the static and dynamic application content be accessible through Amazon CloudFront only. Which combination of steps should a solutions architect recommend to restrict content access to CloudFront only? (Select THREE.)

A. Create a web ACL in AWS WAF with a rule to validate the presence of a custom header, and associate the web ACL with the ALB.
B. Create a web ACL in AWS WAF with a rule to validate the presence of a custom header, and associate the web ACL with the CloudFront distribution.
C. Configure CloudFront to add a custom header to origin requests.
D. Configure the ALB to add a custom header to HTTP requests.
E. Update the S3 bucket ACL to allow access from the CloudFront distribution only.
F. Create a CloudFront Origin Access Identity (OAI) and add it to the CloudFront distribution. Update the S3 bucket policy to allow access to the OAI only.
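
To make the custom-header pattern in options A and C concrete, here is a hedged fragment of a CloudFront origin definition, expressed as a Python dict in boto3's DistributionConfig shape. The ALB domain, header name, and secret value are illustrative assumptions; a WAF rule on the ALB would then reject requests missing this header.

# Fragment of a CloudFront DistributionConfig "Origins" item.
alb_origin = {
    "Id": "alb-origin",
    "DomainName": "internal-alb-123.us-east-1.elb.amazonaws.com",  # assumed
    "CustomHeaders": {
        # CloudFront adds this header to every origin request.
        "Quantity": 1,
        "Items": [
            {"HeaderName": "X-Origin-Verify", "HeaderValue": "shared-secret"}
        ],
    },
    "CustomOriginConfig": {
        "HTTPPort": 80,
        "HTTPSPort": 443,
        "OriginProtocolPolicy": "https-only",
    },
}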



Question # 7

A company recently started hosting new application workloads in the AWS Cloud. The company is using Amazon EC2 instances, Amazon Elastic File System (Amazon EFS) file systems, and Amazon RDS DB instances. To meet regulatory and business requirements, the company must make the following changes for data backups:

• Backups must be retained based on custom daily, weekly, and monthly requirements.
• Backups must be replicated to at least one other AWS Region immediately after capture.
• The backup solution must provide a single source of backup status across the AWS environment.
• The backup solution must send immediate notifications upon failure of any resource backup.

Which combination of steps will meet these requirements with the LEAST amount of operational overhead? (Select THREE.)

A. Create an AWS Backup plan with a backup rule for each of the retention requirements.
B. Configure an AWS Backup plan to copy backups to another Region.
C. Create an AWS Lambda function to replicate backups to another Region and send a notification if a failure occurs.
D. Add an Amazon Simple Notification Service (Amazon SNS) topic to the backup plan to send a notification for finished jobs that have any status except BACKUP_JOB_COMPLETED.
E. Create an Amazon Data Lifecycle Manager (Amazon DLM) snapshot lifecycle policy for each of the retention requirements.
F. Set up RDS snapshots on each database.
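
To illustrate options A and B together, an AWS Backup plan rule can carry both a schedule and a cross-Region copy action. A hedged boto3 sketch with one daily rule (weekly and monthly rules would follow the same shape); vault names, the account ID, and retention periods are assumptions:

import boto3

backup = boto3.client("backup")

backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "regulatory-backups",
        "Rules": [
            {
                "RuleName": "daily",
                "TargetBackupVaultName": "primary-vault",
                "ScheduleExpression": "cron(0 5 * * ? *)",  # 05:00 UTC daily
                "Lifecycle": {"DeleteAfterDays": 35},
                "CopyActions": [
                    {
                        # Replicate each recovery point to a second Region.
                        "DestinationBackupVaultArn": (
                            "arn:aws:backup:us-west-2:111122223333:"
                            "backup-vault:dr-vault"
                        )
                    }
                ],
            }
        ],
    }
)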



Question # 8

An online magazine will launch its latest edition this month. This edition will be the first to be distributed globally. The magazine's dynamic website currently uses an Application Load Balancer in front of the web tier, a fleet of Amazon EC2 instances for web and application servers, and Amazon Aurora MySQL. Portions of the website include static content, and almost all traffic is read-only. The magazine is expecting a significant spike in internet traffic when the new edition is launched. Optimal performance is a top priority for the week following the launch. Which combination of steps should a solutions architect take to reduce system response times for a global audience? (Select TWO.)

A. Use logical cross-Region replication to replicate the Aurora MySQL database to a secondary Region. Replace the web servers with Amazon S3. Deploy S3 buckets in cross-Region replication mode.
B. Ensure the web and application tiers are each in Auto Scaling groups. Introduce an AWS Direct Connect connection. Deploy the web and application tiers in Regions across the world.
C. Migrate the database from Amazon Aurora to Amazon RDS for MySQL. Ensure all three of the application tiers (web, application, and database) are in private subnets.
D. Use an Aurora global database for physical cross-Region replication. Use Amazon S3 with cross-Region replication for static content and resources. Deploy the web and application tiers in Regions across the world.
E. Introduce Amazon Route 53 with latency-based routing and Amazon CloudFront distributions. Ensure the web and application tiers are each in Auto Scaling groups.



Question # 9

A company wants to migrate its workloads from on premises to AWS. The workloads run on Linux and Windows. The company has a large on-premises infrastructure that consists of physical machines and VMs that host numerous applications. The company must capture details about the system configuration, system performance, running processes, and network connections of its on-premises workloads. The company also must divide the on-premises applications into groups for AWS migrations. The company needs recommendations for Amazon EC2 instance types so that the company can run its workloads on AWS in the most cost-effective manner. Which combination of steps should a solutions architect take to meet these requirements? (Select THREE.)

A. Assess the existing applications by installing AWS Application Discovery Agent on the physical machines and VMs. 
B. Assess the existing applications by installing AWS Systems Manager Agent on the physical machines and VMs.
C. Group servers into applications for migration by using AWS Systems Manager Application Manager. 
D. Group servers into applications for migration by using AWS Migration Hub. 
E. Generate recommended instance types and associated costs by using AWS Migration Hub. 
F. Import data about server sizes into AWS Trusted Advisor. Follow the recommendations for cost optimization. 



Question # 10

A company has multiple business units. Each business unit has its own AWS account and runs a single website within that account. The company also has a single logging account. Logs from each business unit website are aggregated into a single Amazon S3 bucket in the logging account. The S3 bucket policy provides each business unit with access to write data into the bucket and requires data to be encrypted. The company needs to encrypt logs uploaded into the bucket using a single AWS Key Management Service (AWS KMS) CMK. The CMK that protects the data must be rotated once every 365 days. Which strategy is the MOST operationally efficient for the company to use to meet these requirements?

A. Create a customer managed CMK in the logging account. Update the CMK key policy to provide access to the logging account only. Manually rotate the CMK every 365 days.
B. Create a customer managed CMK in the logging account. Update the CMK key policy to provide access to the logging account and business unit accounts. Enable automatic rotation of the CMK.
C. Use an AWS managed CMK in the logging account. Update the CMK key policy to provide access to the logging account and business unit accounts. Manually rotate the CMK every 365 days.
D. Use an AWS managed CMK in the logging account. Update the CMK key policy to provide access to the logging account only. Enable automatic rotation of the CMK.
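
For the rotation requirement, note that automatic rotation of a customer managed CMK (as in option B) happens on a yearly cycle, which matches the 365-day policy. A minimal boto3 sketch; the key description is a placeholder and the cross-account key policy is omitted:

import boto3

kms = boto3.client("kms")

# Create a customer managed key in the logging account and enable
# automatic rotation (KMS rotates the key material every 365 days).
key = kms.create_key(Description="Central log bucket encryption key")
kms.enable_key_rotation(KeyId=key["KeyMetadata"]["KeyId"])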



Question # 11

A company is migrating an on-premises application and a MySQL database to AWS. The application processes highly sensitive data, and new data is constantly updated in the database. The data must not be transferred over the internet. The company also must encrypt the data in transit and at rest. The database is 5 TB in size. The company already has created the database schema in an Amazon RDS for MySQL DB instance. The company has set up a 1 Gbps AWS Direct Connect connection to AWS. The company also has set up a public VIF and a private VIF. A solutions architect needs to design a solution that will migrate the data to AWS with the least possible downtime. Which solution will meet these requirements?

A. Perform a database backup. Copy the backup files to an AWS Snowball Edge Storage Optimized device. Import the backup to Amazon S3. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3) for encryption at rest. Use TLS for encryption in transit. Import the data from Amazon S3 to the DB instance.
B. Use AWS Database Migration Service (AWS DMS) to migrate the data to AWS. Create a DMS replication instance in a private subnet. Create VPC endpoints for AWS DMS. Configure a DMS task to copy data from the on-premises database to the DB instance by using full load plus change data capture (CDC). Use the AWS Key Management Service (AWS KMS) default key for encryption at rest. Use TLS for encryption in transit.
C. Perform a database backup. Use AWS DataSync to transfer the backup files to Amazon S3. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3) for encryption at rest. Use TLS for encryption in transit. Import the data from Amazon S3 to the DB instance.
D. Use Amazon S3 File Gateway. Set up a private connection to Amazon S3 by using AWS PrivateLink. Perform a database backup. Copy the backup files to Amazon S3. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3) for encryption at rest. Use TLS for encryption in transit. Import the data from Amazon S3 to the DB instance.



Question # 12

A solutions architect is importing a VM from an on-premises environment by using the Amazon EC2 VM Import feature of AWS VM Import/Export. The solutions architect has created an AMI and has provisioned an Amazon EC2 instance that is based on that AMI. The EC2 instance runs inside a public subnet in a VPC and has a public IP address assigned. The EC2 instance does not appear as a managed instance in the AWS Systems Manager console. Which combination of steps should the solutions architect take to troubleshoot this issue? (Select TWO.)

A. Verify that Systems Manager Agent is installed on the instance and is running.
B. Verify that the instance is assigned an appropriate IAM role for Systems Manager.
C. Verify the existence of a VPC endpoint on the VPC.
D. Verify that the AWS Application Discovery Agent is configured.
E. Verify the correct configuration of service-linked roles for Systems Manager.



Question # 13

A company has automated the nightly retraining of its machine learning models by using AWS Step Functions. The workflow consists of multiple steps that use AWS Lambda. Each step can fail for various reasons, and any failure causes a failure of the overall workflow. A review reveals that the retraining has failed multiple nights in a row without the company noticing the failure. A solutions architect needs to improve the workflow so that notifications are sent for all types of failures in the retraining process. Which combination of steps should the solutions architect take to meet these requirements? (Select THREE.)

A. Create an Amazon Simple Notification Service (Amazon SNS) topic with a subscription of type "Email" that targets the team's mailing list.
B. Create a task named "Email" that forwards the input arguments to the SNS topic.
C. Add a Catch field to all Task, Map, and Parallel states that has a statement of "ErrorEquals": ["States.ALL"] and "Next": "Email".
D. Add a new email address to Amazon Simple Email Service (Amazon SES). Verify the email address.
E. Create a task named "Email" that forwards the input arguments to the SES email address.
F. Add a Catch field to all Task, Map, and Parallel states that has a statement of "ErrorEquals": ["States.Runtime"] and "Next": "Email".
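
To make the Catch wording in options C and F concrete, here is a hedged Amazon States Language fragment written as Python dicts. The state names, Lambda ARN, and SNS topic ARN are assumptions; "States.ALL" is the catch-all error name, which is why it captures every failure type:

# Task state whose failures are routed to an "Email" task.
retrain_state = {
    "Type": "Task",
    "Resource": "arn:aws:lambda:us-east-1:111122223333:function:retrain",
    "Catch": [
        {"ErrorEquals": ["States.ALL"], "Next": "Email"}  # match any error
    ],
    "Next": "EvaluateModel",
}

# The "Email" task publishes the failing step's input to an SNS topic.
email_state = {
    "Type": "Task",
    "Resource": "arn:aws:states:::sns:publish",  # SNS service integration
    "Parameters": {
        "TopicArn": "arn:aws:sns:us-east-1:111122223333:retrain-failures",
        "Message.$": "$",   # forward the input arguments
    },
    "End": True,
}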



Question # 14

A company needs to create a centralized logging architecture for all of its AWS accounts. The architecture should provide near-real-time data analysis for all AWS CloudTrail logs and VPC Flow Logs across all AWS accounts. The company plans to use Amazon Elasticsearch Service (Amazon ES) to perform log analyses in the logging account. Which strategy should a solutions architect use to meet these requirements?

A. Configure CloudTrail and VPC Flow Logs in each AWS account to send data to a centralized Amazon S3 bucket in the logging account. Create an AWS Lambda function to load data from the S3 bucket to Amazon ES in the logging account.
B. Configure CloudTrail and VPC Flow Logs to send data to a log group in Amazon CloudWatch Logs in each AWS account. Configure a CloudWatch subscription filter in each AWS account to send data to Amazon Kinesis Data Firehose in the logging account. Load data from Kinesis Data Firehose into Amazon ES in the logging account.
C. Configure CloudTrail and VPC Flow Logs to send data to a separate Amazon S3 bucket in each AWS account. Create an AWS Lambda function triggered by S3 events to copy the data to a centralized logging bucket. Create another Lambda function to load data from the S3 bucket to Amazon ES in the logging account.
D. Configure CloudTrail and VPC Flow Logs to send data to a log group in Amazon CloudWatch Logs in each AWS account. Create AWS Lambda functions in each AWS account to subscribe to the log groups and stream the data to an Amazon S3 bucket in the logging account. Create another Lambda function to load data from the S3 bucket to Amazon ES in the logging account.



Question # 15

A solutions architect is designing a solution to connect a company's on-premises network with all the company's current and future VPCs on AWS. The company is running VPCs in five different AWS Regions and has at least 15 VPCs in each Region. The company's AWS usage is constantly increasing and will continue to grow. Additionally, all the VPCs throughout all five Regions must be able to communicate with each other. The solution must maximize scalability and ease of management. Which solution meets these requirements?

A. Set up a transit gateway in each Region. Establish a redundant AWS Site-to-Site VPN connection between the on-premises firewalls and the transit gateway in the Region that is closest to the on-premises network. Peer all the transit gateways with each other. Connect all the VPCs to the transit gateway in their Region.
B. Create an AWS CloudFormation template for a redundant AWS Site-to-Site VPN tunnel to the on-premises network. Deploy the CloudFormation template for each VPC. Set up VPC peering between all the VPCs for VPC-to-VPC communication.
C. Set up a transit gateway in each Region. Establish a redundant AWS Site-to-Site VPN connection between the on-premises firewalls and each transit gateway. Route traffic between the different Regions through the company's on-premises firewalls. Connect all the VPCs to the transit gateway in their Region.
D. Create an AWS CloudFormation template for a redundant AWS Site-to-Site VPN tunnel to the on-premises network. Deploy the CloudFormation template for each VPC. Route traffic between the different Regions through the company's on-premises firewalls.
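
For the peering step in option A, transit gateways in different Regions are connected with a peering attachment that the remote side must accept. A hedged boto3 sketch; the gateway IDs and account ID are hypothetical:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request a peering attachment from the us-east-1 transit gateway to a
# transit gateway in eu-west-1 (same account in this sketch).
ec2.create_transit_gateway_peering_attachment(
    TransitGatewayId="tgw-0abc1234567890def",      # local TGW (assumed)
    PeerTransitGatewayId="tgw-0fed0987654321abc",  # remote TGW (assumed)
    PeerAccountId="111122223333",
    PeerRegion="eu-west-1",
)
# The eu-west-1 side then calls
# accept_transit_gateway_peering_attachment(...) to complete the peering.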



Question # 16

A company has an application that uses Amazon EC2 instances in an Auto Scaling group. The quality assurance (QA) department needs to launch a large number of short-lived environments to test the application. The application environments are currently launched by the manager of the department using an AWS CloudFormation template. To launch the stack, the manager uses a role with permission to use the CloudFormation, EC2, and Auto Scaling APIs. The manager wants to allow testers to launch their own environments, but does not want to grant broad permissions to each user. Which setup would achieve these goals?

A. Upload the AWS CloudFormation template to Amazon S3. Give users in the QA department permission to assume the manager's role, and add a policy that restricts the permissions to the template and the resources it creates. Train users to launch the template from the CloudFormation console.
B. Create an AWS Service Catalog product from the environment template. Add a launch constraint to the product with the existing role. Give users in the QA department permission to use AWS Service Catalog APIs only. Train users to launch the template from the AWS Service Catalog console.
C. Upload the AWS CloudFormation template to Amazon S3. Give users in the QA department permission to use CloudFormation and S3 APIs, with conditions that restrict the permissions to the template and the resources it creates. Train users to launch the template from the CloudFormation console.
D. Create an AWS Elastic Beanstalk application from the environment template. Give users in the QA department permission to use Elastic Beanstalk permissions only. Train users to launch Elastic Beanstalk environments with the Elastic Beanstalk CLI, passing the existing role to the environment as a service role.



Question # 17

A company has deployed an application to multiple environments in AWS, including production and testing. The company has separate accounts for production and testing, and users are allowed to create additional application users for team members or services, as needed. The security team has asked the operations team for better isolation between production and testing, with centralized controls on security credentials and improved management of permissions between environments. Which of the following options would MOST securely accomplish this goal?

A. Create a new AWS account to hold user and service accounts, such as an identity account. Create users and groups in the identity account. Create roles with appropriate permissions in the production and testing accounts. Add the identity account to the trust policies for the roles.
B. Modify permissions in the production and testing accounts to limit creating new IAM users to members of the operations team. Set a strong IAM password policy on each account. Create new IAM users and groups in each account to limit developer access to just the services required to complete their job function.
C. Create a script that runs on each account that checks user accounts for adherence to a security policy. Disable any user or service accounts that do not comply.
D. Create all user accounts in the production account. Create roles for access in the production account and testing accounts. Grant cross-account access from the production account to the testing account.



Question # 18

A company has a new security policy. The policy requires the company to log any event that retrieves data from Amazon S3 buckets. The company must save these audit logs in a dedicated S3 bucket. The company created the audit logs S3 bucket in an AWS account that is designated for centralized logging. The S3 bucket has a bucket policy that allows write-only cross-account access. A solutions architect must ensure that all S3 object-level access is being logged for current S3 buckets and future S3 buckets. Which solution will meet these requirements?

A. Enable server access logging for all current S3 buckets. Use the audit logs S3 bucket as a destination for audit logs.
B. Enable replication between all current S3 buckets and the audit logs S3 bucket. Enable S3 Versioning in the audit logs S3 bucket.
C. Configure S3 Event Notifications for all current S3 buckets to invoke an AWS Lambda function every time objects are accessed. Store Lambda logs in the audit logs S3 bucket.
D. Enable AWS CloudTrail, and use the audit logs S3 bucket to store logs. Enable data event logging for S3 event sources, current S3 buckets, and future S3 buckets.
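
To show how the approach in option D covers future buckets as well as current ones, CloudTrail data event selectors accept the bare "arn:aws:s3" value, which matches every bucket in the account. A hedged boto3 sketch; the trail name is an assumption:

import boto3

cloudtrail = boto3.client("cloudtrail")

# Log S3 object-level (data) events for all current and future buckets.
cloudtrail.put_event_selectors(
    TrailName="org-audit-trail",   # assumed trail name
    EventSelectors=[
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,
            "DataResources": [
                {"Type": "AWS::S3::Object", "Values": ["arn:aws:s3"]}
            ],
        }
    ],
)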



Question # 19

A company is running an application in the AWS Cloud. The company's security team must approve the creation of all new IAM users. When a new IAM user is created, all access for the user must be removed automatically. The security team must then receive a notification to approve the user. The company has a multi-Region AWS CloudTrail trail in the AWS account. Which combination of steps will meet these requirements? (Select THREE.)

A. Create an Amazon EventBridge (Amazon CloudWatch Events) rule. Define a pattern with the detail-type value set to AWS API Call via CloudTrail and an eventName of CreateUser. 
B. Configure CloudTrail to send a notification for the CreateUser event to an Amazon Simple Notification Service (Amazon SNS) topic. 
C. Invoke a container that runs in Amazon Elastic Container Service (Amazon ECS) with AWS Fargate technology to remove access.
D. Invoke an AWS Step Functions state machine to remove access. 
E. Use Amazon Simple Notification Service (Amazon SNS) to notify the security team. 
F. Use Amazon Pinpoint to notify the security team. 
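
The event pattern described in option A looks roughly like the following hedged boto3 sketch. The rule name is an assumption, and a target (for example, a Step Functions state machine, as in option D) would still need to be attached with put_targets:

import boto3, json

events = boto3.client("events")

# Match the CloudTrail record emitted when iam:CreateUser is called.
events.put_rule(
    Name="on-iam-create-user",   # assumed rule name
    EventPattern=json.dumps({
        "source": ["aws.iam"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["iam.amazonaws.com"],
            "eventName": ["CreateUser"],
        },
    }),
)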



Question # 20

A company has built a high performance computing (HPC) cluster in AWS for a tightly coupled workload that generates a large number of shared files stored in Amazon EFS. The cluster was performing well when the number of Amazon EC2 instances in the cluster was 100. However, when the company increased the cluster size to 1,000 EC2 instances, overall performance was well below expectations. Which collection of design choices should a solutions architect make to achieve the maximum performance from the HPC cluster? (Select THREE.)

A. Ensure the HPC cluster is launched within a single Availability Zone. 
B. Launch the EC2 instances and attach elastic network interfaces in multiples of four. 
C. Select EC2 instance types with an Elastic Fabric Adapter (EFA) enabled.
D. Ensure the cluster is launched across multiple Availability Zones. 
E. Replace Amazon EFS with multiple Amazon EBS volumes in a RAID array. 
F. Replace Amazon EFS with Amazon FSx for Lustre. 



Question # 21

A company is finalizing the architecture for its backup solution for applications running on AWS. All of the applications run on AWS and use at least two Availability Zones in each tier. Company policy requires IT to durably store nightly backups of all its data in at least two locations: production and disaster recovery. The locations must be in different geographic regions. The company also needs the backup to be available to restore immediately at the production data center, and within 24 hours at the disaster recovery location. Backup processes must be fully automated. What is the MOST cost-effective backup solution that will meet all requirements?

A. Back up all the data to a large Amazon EBS volume attached to the backup media server in the production region. Run automated scripts to snapshot these volumes nightly, and copy these snapshots to the disaster recovery region.
B. Back up all the data to Amazon S3 in the disaster recovery region. Use a lifecycle policy to move this data to Amazon Glacier in the production region immediately. Once the data is replicated, remove the data from the S3 bucket in the disaster recovery region.
C. Back up all the data to Amazon Glacier in the production region. Set up cross-region replication of this data to Amazon Glacier in the disaster recovery region. Set up a lifecycle policy to delete any data older than 60 days.
D. Back up all the data to Amazon S3 in the production region. Set up cross-region replication of this S3 bucket to another region, and set up a lifecycle policy in the second region to immediately move this data to Amazon Glacier.
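
Option D combines two S3 features that can be sketched in boto3 as follows. Both buckets would need versioning enabled for replication to work; the bucket names, role ARN, and rule IDs are assumptions:

import boto3

s3 = boto3.client("s3")

# Replicate the production backup bucket to a bucket in the DR Region.
s3.put_bucket_replication(
    Bucket="prod-backups",   # assumed source bucket (versioning enabled)
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-crr-role",
        "Rules": [
            {
                "ID": "replicate-all",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {"Prefix": ""},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::dr-backups"},
            }
        ],
    },
)

# In the DR bucket, transition replicated objects to Glacier immediately.
s3.put_bucket_lifecycle_configuration(
    Bucket="dr-backups",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-glacier",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [{"Days": 0, "StorageClass": "GLACIER"}],
            }
        ]
    },
)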



Question # 22

A company runs a serverless application in a single AWS Region. The application accesses external URLs and extracts metadata from those sites. The company uses an Amazon Simple Notification Service (Amazon SNS) topic to publish URLs to an Amazon Simple Queue Service (Amazon SQS) queue. An AWS Lambda function uses the queue as an event source and processes the URLs from the queue. Results are saved to an Amazon S3 bucket. The company wants to process each URL in other Regions to compare possible differences in site localization. URLs must be published from the existing Region. Results must be written to the existing S3 bucket in the current Region. Which combination of changes will produce a multi-Region deployment that meets these requirements? (Select TWO.)

A. Deploy the SQS queue with the Lambda function to other Regions.
B. Subscribe the SNS topic in each Region to the SQS queue.
C. Subscribe the SQS queue in each Region to the SNS topics in each Region.
D. Configure the SQS queue to publish URLs to SNS topics in each Region.
E. Deploy the SNS topic and the Lambda function to other Regions.
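
Because SNS supports cross-Region SQS endpoints, the fan-out in this scenario can be wired with a single subscribe call per Region. A hedged boto3 sketch with hypothetical ARNs:

import boto3

# Subscribe an SQS queue in another Region to the SNS topic in the
# existing Region.
sns = boto3.client("sns", region_name="us-east-1")   # existing Region

sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:111122223333:url-topic",   # assumed
    Protocol="sqs",
    Endpoint="arn:aws:sqs:eu-west-1:111122223333:url-queue",   # assumed
    ReturnSubscriptionArn=True,
)
# The queue's access policy must also allow SNS (sns.amazonaws.com) to
# send messages on behalf of this topic ARN.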



What Our Clients Say