An Easy & Quick Way to Pass Any Certification Exam.
Our Amazon SAP-C01 dumps are the key to success, with more than 80,000 success stories.
Dumpsspot offers the best SAP-C01 exam dumps, which come with 100% valid questions and answers. Prepared by our trained team of professionals, the SAP-C01 Dumps PDF is of the highest quality. Our course pack is affordable and guarantees a 98% to 100% passing rate on the exam. Our SAP-C01 test questions are specially designed for people who want to pass the exam in a very short time.
Most of our customers choose Dumpsspot's SAP-C01 study guide, which contains questions and answers that help them pass the exam on the first try. Many have passed with scores of 98% to 100% just by training online.
Dumpsspot puts forward the best SAP-C01 dump questions and answers for students who want to clear the exam on their first attempt. We provide a 100% assurance guarantee, so you will not have to worry about passing the exam; we are here to take care of that.
A company needs to store and process image data that will be uploaded from mobile devices using a custom mobile app. Usage peaks between 8 AM and 5 PM on weekdays, with thousands of uploads per minute. The app is rarely used at any other time. A user is notified when image processing is complete. Which combination of actions should a solutions architect take to ensure image processing can scale to handle the load? (Select THREE.)
A. Upload files from the mobile software directly to Amazon S3. Use S3 event notifications to create a message in an Amazon MQ queue.
B. Upload files from the mobile software directly to Amazon S3. Use S3 event notifications to create a message in an Amazon Simple Queue Service (Amazon SQS) standard queue.
C. Invoke an AWS Lambda function to perform image processing when a message is available in the queue.
D. Invoke an S3 Batch Operations job to perform image processing when a message is available in the queue.
E. Send a push notification to the mobile app by using Amazon Simple Notification Service (Amazon SNS) when processing is complete.
F. Send a push notification to the mobile app by using Amazon Simple Email Service (Amazon SES) when processing is complete.
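For context on the decoupled pattern that options B and C describe (S3 event notifications feeding an SQS standard queue, which in turn drives a Lambda function), here is a minimal boto3 sketch. The bucket name, queue ARN, and function name are placeholders, and the queue policy that allows S3 to send messages is assumed to already exist.

```python
import boto3

REGION = "us-east-1"  # placeholder Region
BUCKET = "example-image-uploads"  # placeholder bucket name
QUEUE_ARN = "arn:aws:sqs:us-east-1:123456789012:image-jobs"  # placeholder

# Wire S3 object-created events into the SQS standard queue (option B).
s3 = boto3.client("s3", region_name=REGION)
s3.put_bucket_notification_configuration(
    Bucket=BUCKET,
    NotificationConfiguration={
        "QueueConfigurations": [
            {"QueueArn": QUEUE_ARN, "Events": ["s3:ObjectCreated:*"]}
        ]
    },
)

# Drive the image-processing Lambda function from the queue (option C).
lambda_client = boto3.client("lambda", region_name=REGION)
lambda_client.create_event_source_mapping(
    EventSourceArn=QUEUE_ARN,
    FunctionName="process-image",  # placeholder function name
    BatchSize=10,
)
```

The queue absorbs the weekday upload spikes while Lambda scales its concurrency with queue depth, which is why this combination is usually paired with an SNS push notification on completion.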
A company is running an application distributed over several Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. The security team requires that all application access attempts be made available for analysis. Information about the client IP address, connection type, and user agent must be included. Which solution will meet these requirements?
A. Enable EC2 detailed monitoring, and include network logs. Send all logs through Amazon Kinesis Data Firehose to an Amazon Elasticsearch Service (Amazon ES) cluster that the security team uses for analysis.
B. Enable VPC Flow Logs for all EC2 instance network interfaces. Publish VPC Flow Logs to an Amazon S3 bucket. Have the security team use Amazon Athena to query and analyze the logs.
C. Enable access logs for the Application Load Balancer, and publish the logs to an Amazon S3 bucket. Have the security team use Amazon Athena to query and analyze the logs.
D. Enable Traffic Mirroring and specify all EC2 instance network interfaces as the source. Send all traffic information through Amazon Kinesis Data Firehose to an Amazon Elasticsearch Service (Amazon ES) cluster that the security team uses for analysis.
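ALB access log entries include the client IP and port, connection and request details, and the user agent, which is the capability option C relies on. A minimal boto3 sketch of turning the logs on follows; the load balancer ARN and bucket are placeholders, and the bucket is assumed to already have a policy allowing the regional Elastic Load Balancing log-delivery account to write to it.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

ALB_ARN = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "loadbalancer/app/my-alb/abc123"  # placeholder ALB ARN
)

# Enable access logging and point it at the S3 bucket Athena will query.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn=ALB_ARN,
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "example-alb-access-logs"},
        {"Key": "access_logs.s3.prefix", "Value": "prod"},
    ],
)
```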
A company has many services running in its on-premises data center. The data center is connected to AWS using AWS Direct Connect (DX) and an IPsec VPN. The service data is sensitive, and connectivity cannot traverse the internet. The company wants to expand into a new market segment and begin offering its services to other companies that are using AWS. Which solution will meet these requirements?
A. Create a VPC Endpoint Service that accepts TCP traffic, host it behind a Network Load Balancer, and make the service available over DX.
B. Create a VPC Endpoint Service that accepts HTTP or HTTPS traffic, host it behind an Application Load Balancer, and make the service available over DX.
C. Attach an internet gateway to the VPC, and ensure that network access control and security group rules allow the relevant inbound and outbound traffic.
D. Attach a NAT gateway to the VPC, and ensure that network access control and security group rules allow the relevant inbound and outbound traffic.
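Options A and B describe AWS PrivateLink: publishing a VPC endpoint service fronted by a Network Load Balancer so consumers reach it without internet exposure. A minimal boto3 sketch, with a placeholder NLB ARN:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

NLB_ARN = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "loadbalancer/net/my-nlb/abc123"  # placeholder NLB ARN
)

# Publish the service behind the NLB as a VPC endpoint service.
resp = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[NLB_ARN],
    AcceptanceRequired=True,  # manually approve each consumer connection
)

# Consumers create interface endpoints against this service name.
print(resp["ServiceConfiguration"]["ServiceName"])
```

Note that VPC endpoint services require a Network Load Balancer, which is the detail that separates option A from option B.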
A company is launching a new web application on Amazon EC2 instances. Development and production workloads exist in separate AWS accounts. According to the company's security requirements, only automated configuration tools are allowed to access the production account. The company's security team wants to receive immediate notification if any manual access to the production AWS account or EC2 instances occurs. Which combination of actions should a solutions architect take in the production account to meet these requirements? (Select THREE.)
A. Turn on AWS CloudTrail logs in the application's primary AWS Region. Use Amazon Athena to query the logs for AwsConsoleSignIn events.
B. Configure Amazon Simple Email Service (Amazon SES) to send email to the security team when an alarm is activated.
C. Deploy EC2 instances in an Auto Scaling group. Configure the launch template to deploy instances without key pairs. Configure Amazon CloudWatch Logs to capture system access logs. Create an Amazon CloudWatch alarm that is based on the logs to detect when a user logs in to an EC2 instance.
D. Configure an Amazon Simple Notification Service (Amazon SNS) topic to send a message to the security team when an alarm is activated.
E. Turn on AWS CloudTrail logs for all AWS Regions. Configure Amazon CloudWatch alarms to provide an alert when an AwsConsoleSignIn event is detected.
F. Deploy EC2 instances in an Auto Scaling group. Configure the launch template to delete the key pair after launch. Configure Amazon CloudWatch Logs for the system access logs. Create an Amazon CloudWatch dashboard to show user logins over time.
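The alert path that options D and E describe is a CloudWatch Logs metric filter on console sign-in events plus an alarm that notifies an SNS topic. A minimal boto3 sketch follows; it assumes a multi-Region CloudTrail trail already delivers to the named log group, and the log group, namespace, and topic ARN are placeholders.

```python
import boto3

logs = boto3.client("logs", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Count console sign-in events found in the CloudTrail log group.
logs.put_metric_filter(
    logGroupName="CloudTrail/logs",  # placeholder log group
    filterName="ConsoleSignInFilter",
    filterPattern='{ $.eventName = "ConsoleLogin" }',
    metricTransformations=[{
        "metricName": "ConsoleSignInCount",
        "metricNamespace": "Security",
        "metricValue": "1",
    }],
)

# Alarm on any sign-in and notify the security team's SNS topic.
cloudwatch.put_metric_alarm(
    AlarmName="console-sign-in-detected",
    Namespace="Security",
    MetricName="ConsoleSignInCount",
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:security-alerts"],
)
```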
A North American company with headquarters on the East Coast is deploying a new web application running on Amazon EC2 in the us-east-1 Region. The application should dynamically scale to meet user demand and maintain resiliency. Additionally, the application must have disaster recovery capabilities in an active-passive configuration with the us-west-1 Region. Which steps should a solutions architect take after creating a VPC in the us-east-1 Region?
A. Create a VPC in the us-west-1 Region. Use inter-Region VPC peering to connect both VPCs. Deploy an Application Load Balancer (ALB) spanning multiple Availability Zones (AZs) to the VPC in the us-east-1 Region. Deploy EC2 instances across multiple AZs in each Region as part of an Auto Scaling group spanning both VPCs and served by the ALB.
B. Deploy an Application Load Balancer (ALB) spanning multiple Availability Zones (AZs) to the VPC in the us-east-1 Region. Deploy EC2 instances across multiple AZs as part of an Auto Scaling group served by the ALB. Deploy the same solution to the us-west-1 Region. Create an Amazon Route 53 record set with a failover routing policy and health checks enabled to provide high availability across both Regions.
C. Create a VPC in the us-west-1 Region. Use inter-Region VPC peering to connect both VPCs. Deploy an Application Load Balancer (ALB) that spans both VPCs. Deploy EC2 instances across multiple Availability Zones as part of an Auto Scaling group in each VPC served by the ALB. Create an Amazon Route 53 record that points to the ALB.
D. Deploy an Application Load Balancer (ALB) spanning multiple Availability Zones (AZs) to the VPC in the us-east-1 Region. Deploy EC2 instances across multiple AZs as part of an Auto Scaling group served by the ALB. Deploy the same solution to the us-west-1 Region. Create separate Amazon Route 53 records in each Region that point to the ALB in the Region. Use Route 53 health checks to provide high availability across both Regions.
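The active-passive DNS piece that option B describes is a pair of Route 53 failover alias records, with a health check on the primary. A minimal boto3 sketch, where the hosted zone ID, record name, ALB DNS names, ALB canonical zone IDs, and health check ID are all placeholders:

```python
import boto3

route53 = boto3.client("route53")


def failover_change(role, alb_dns, alb_zone_id, health_check_id=None):
    """Build an UPSERT for a PRIMARY or SECONDARY failover alias record."""
    record = {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": f"app-{role.lower()}",
        "Failover": role,  # "PRIMARY" or "SECONDARY"
        "AliasTarget": {
            "HostedZoneId": alb_zone_id,  # the ALB's canonical zone ID
            "DNSName": alb_dns,
            "EvaluateTargetHealth": True,
        },
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": record}


route53.change_resource_record_sets(
    HostedZoneId="ZEXAMPLEZONE",  # placeholder hosted zone
    ChangeBatch={"Changes": [
        failover_change("PRIMARY", "use1-alb.example.aws", "ZALBZONE1",
                        health_check_id="hc-primary-id"),
        failover_change("SECONDARY", "usw1-alb.example.aws", "ZALBZONE2"),
    ]},
)
```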
A company is running a web application on Amazon EC2 instances in a production AWS account. The company requires all logs generated from the web application to be copied to a central AWS account for analysis and archiving. The company's AWS accounts are currently managed independently. Logging agents are configured on the EC2 instances to upload the log files to an Amazon S3 bucket in the central AWS account. A solutions architect needs to provide access for a solution that will allow the production account to store log files in the central account. The central account also needs to have read access to the log files. What should the solutions architect do to meet these requirements?
A. Create a cross-account role in the central account. Assume the role from the production account when the logs are being copied.
B. Create a policy on the S3 bucket with the production account ID as the principal. Allow S3 access from a delegated user.
C. Create a policy on the S3 bucket with access from only the CIDR range of the EC2 instances in the production account. Use the production account ID as the principal.
D. Create a cross-account role in the production account. Assume the role from the production account when the logs are being copied.
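The mechanics behind option A are an STS AssumeRole call followed by an S3 write with the temporary credentials. A minimal boto3 sketch, with placeholder role ARN, bucket, and paths:

```python
import boto3

# Assume the cross-account role that was created in the central account.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/central-log-writer",  # placeholder
    RoleSessionName="prod-log-copy",
)["Credentials"]

# Write to the central bucket with the temporary credentials.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.upload_file("/var/log/app/app.log", "central-log-archive", "prod/app.log")
```

Because the role lives in the central account, objects written this way are owned by that account, which preserves its read access to the log files.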
A company is planning on hosting its ecommerce platform on AWS using a multi-tier web application designed for a NoSQL database. The company plans to use the us-west-2 Region as its primary Region. The company wants to ensure that copies of the application and data are available in a second Region, us-west-1, for disaster recovery. The company wants to keep the time to fail over as low as possible. Failing back to the primary Region should be possible without administrative interaction after the primary service is restored. Which design should the solutions architect use?
A. Use AWS CloudFormation StackSets to create the stacks in both Regions with Auto Scaling groups for the web and application tiers. Asynchronously replicate static content between Regions using Amazon S3 cross-Region replication. Use an Amazon Route 53 DNS failover routing policy to direct users to the secondary site in us-west-1 in the event of an outage. Use Amazon DynamoDB global tables for the database tier.
B. Use AWS CloudFormation StackSets to create the stacks in both Regions with Auto Scaling groups for the web and application tiers. Asynchronously replicate static content between Regions using Amazon S3 cross-Region replication. Use an Amazon Route 53 DNS failover routing policy to direct users to the secondary site in us-west-1 in the event of an outage. Deploy an Amazon Aurora global database for the database tier.
C. Use AWS Service Catalog to deploy the web and application servers in both Regions. Asynchronously replicate static content between the two Regions using Amazon S3 cross-Region replication. Use Amazon Route 53 health checks to identify a primary Region failure and update the public DNS entry listing to the secondary Region in the event of an outage. Use Amazon RDS for MySQL with cross-Region replication for the database tier.
D. Use AWS CloudFormation StackSets to create the stacks in both Regions using Auto Scaling groups for the web and application tiers. Asynchronously replicate static content between Regions using Amazon S3 cross-Region replication. Use Amazon CloudFront with static files in Amazon S3, and multi-Region origins for the front-end web tier. Use Amazon DynamoDB tables in each Region with scheduled backups to Amazon S3.
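The DynamoDB global tables that option A relies on give a multi-active NoSQL database in both Regions, so failback needs no administrative action. Adding a replica Region to an existing table (global tables version 2019.11.21) is a one-call sketch; the table name is a placeholder:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-west-2")

# Add a us-west-1 replica to the existing table; writes in either
# Region replicate to the other automatically.
dynamodb.update_table(
    TableName="ecommerce-orders",  # placeholder table name
    ReplicaUpdates=[{"Create": {"RegionName": "us-west-1"}}],
)
```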
A development team has created a new flight tracker application that provides near-real-time data to users. The application has a front end that consists of an Application Load Balancer (ALB) in front of two large Amazon EC2 instances in a single Availability Zone. Data is stored in a single Amazon RDS MySQL DB instance. An Amazon Route 53 DNS record points to the ALB. Management wants the development team to improve the solution to achieve maximum reliability with the least amount of operational overhead. Which set of actions should the team take?
A. Create RDS MySQL read replicas. Deploy the application to multiple AWS Regions. Use a Route 53 latency-based routing policy to route to the application.
B. Configure the DB instance as Multi-AZ. Deploy the application to two additional EC2 instances in different Availability Zones behind an ALB.
C. Replace the DB instance with Amazon DynamoDB global tables. Deploy the application in multiple AWS Regions. Use a Route 53 latency-based routing policy to route to the application.
D. Replace the DB instance with Amazon Aurora with Aurora Replicas. Deploy the application to multiple smaller EC2 instances across multiple Availability Zones in an Auto Scaling group behind an ALB.
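Converting the existing DB instance to Multi-AZ, as option B describes, is a single modification call; the identifier below is a placeholder:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Add a synchronous standby in another AZ with automatic failover.
rds.modify_db_instance(
    DBInstanceIdentifier="flight-tracker-db",  # placeholder identifier
    MultiAZ=True,
    ApplyImmediately=True,  # apply now instead of the maintenance window
)
```

Multi-AZ requires no application changes, which is what keeps the operational overhead low compared with re-platforming the database.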
A large company in Europe plans to migrate its applications to the AWS Cloud. The company uses multiple AWS accounts for various business groups. A data privacy law requires the company to restrict developers' access to AWS European Regions only. What should the solutions architect do to meet this requirement with the LEAST amount of management overhead?
A. Create IAM users and IAM groups in each account. Create IAM policies to limit access to non-European Regions. Attach the IAM policies to the IAM groups.
B. Enable AWS Organizations, attach the AWS accounts, and create OUs for European Regions and non-European Regions. Create SCPs to limit access to non-European Regions and attach the policies to the OUs.
C. Set up AWS Single Sign-On and attach AWS accounts. Create permission sets with policies to restrict access to non-European Regions. Create IAM users and IAM groups in each account.
D. Enable AWS Organizations, attach the AWS accounts, and create OUs for European Regions and non-European Regions. Create permission sets with policies to restrict access to non-European Regions. Create IAM users and IAM groups in the primary account.
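A Region-restricting SCP, as in option B, typically denies everything when aws:RequestedRegion is outside the allowed list. A minimal boto3 sketch; the Region list and OU ID are placeholders, and a production policy would usually carve out global services (for example IAM) with NotAction:

```python
import json
import boto3

org = boto3.client("organizations")

# Deny all actions outside the allowed European Regions.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {
                "aws:RequestedRegion": ["eu-west-1", "eu-central-1"]
            }
        },
    }],
}

policy = org.create_policy(
    Name="EuropeanRegionsOnly",
    Description="Restrict developer access to European Regions",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-examp-12345678",  # placeholder OU ID
)
```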
A financial services company receives a regular data feed from its credit card servicing partner. Approximately 5,000 records are sent every 15 minutes in plaintext, delivered over HTTPS directly into an Amazon S3 bucket with server-side encryption. This feed contains sensitive credit card primary account number (PAN) data. The company needs to automatically mask the PAN before sending the data to another S3 bucket for additional internal processing. The company also needs to remove and merge specific fields, and then transform the record into JSON format. Additionally, extra feeds are likely to be added in the future, so any design needs to be easily expandable. Which solution will meet these requirements?
A. Trigger an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue. Trigger another Lambda function when new messages arrive in the SQS queue to process the records, writing the results to a temporary location in Amazon S3. Trigger a final Lambda function once the SQS queue is empty to transform the records into JSON format and send the results to another S3 bucket for internal processing.
B. Trigger an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue. Configure an AWS Fargate container application to automatically scale to a single instance when the SQS queue contains messages. Have the application process each record, and transform the record into JSON format. When the queue is empty, send the results to another S3 bucket for internal processing and scale down the AWS Fargate instance.
C. Create an AWS Glue crawler and custom classifier based on the data feed formats and build a table definition to match. Trigger an AWS Lambda function on file delivery to start an AWS Glue ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as JSON. Once complete, have the ETL job send the results to another S3 bucket for internal processing.
D. Create an AWS Glue crawler and custom classifier based upon the data feed formats and build a table definition to match. Perform an Amazon Athena query on file delivery to start an Amazon EMR ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as JSON. Once complete, send the results to another S3 bucket for internal processing and scale down the EMR cluster.
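The trigger step in option C (a Lambda function that starts a Glue ETL job on file delivery) can look like the sketch below, assuming the function is subscribed to the feed bucket's object-created events; the job name and argument key are placeholders:

```python
import boto3

glue = boto3.client("glue")


def lambda_handler(event, context):
    """Invoked by S3 on file delivery; kicks off the Glue ETL job that
    masks the PAN, merges fields, and writes JSON output."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        glue.start_job_run(
            JobName="mask-and-transform-feed",  # placeholder job name
            Arguments={"--input_path": f"s3://{bucket}/{key}"},
        )
```

New feeds can be accommodated by adding crawlers and classifiers rather than rewriting the pipeline, which is the expandability the question asks about.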
A company has an application that generates reports and stores them in an Amazon S3 bucket. When a user accesses their report, the application generates a signed URL to allow the user to download the report. The company's security team has discovered that the files are public and that anyone can download them without authentication. The company has suspended the generation of new reports until the problem is resolved. Which set of actions will immediately remediate the security issue without impacting the application's normal workflow?
A. Create an AWS Lambda function that applies a deny-all policy for users who are not authenticated. Create a scheduled event to invoke the Lambda function.
B. Review the AWS Trusted Advisor bucket permissions check and implement the recommended actions.
C. Run a script that puts a private ACL on all of the objects in the bucket.
D. Use the Block Public Access feature in Amazon S3 to set the IgnorePublicAcls option to TRUE on the bucket.
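Option D is a single API call. A minimal boto3 sketch with a placeholder bucket name; note that PutPublicAccessBlock replaces the whole configuration, so any flags omitted here default to false:

```python
import boto3

s3 = boto3.client("s3")

# Ignore existing public ACLs on the bucket and its objects; signed URLs
# keep working because they are authenticated requests, not public ones.
s3.put_public_access_block(
    Bucket="example-reports-bucket",  # placeholder bucket name
    PublicAccessBlockConfiguration={"IgnorePublicAcls": True},
)
```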
A financial services company logs personally identifiable information to its application logs stored in Amazon S3. Due to regulatory compliance requirements, the log files must be encrypted at rest. The security team has mandated that the company's on-premises hardware security modules (HSMs) be used to generate the CMK material. Which steps should the solutions architect take to meet these requirements?
A. Create an AWS CloudHSM cluster. Create a new CMK in AWS KMS using AWS CloudHSM as the source for the key material and an origin of AWS_CLOUDHSM. Enable automatic key rotation on the CMK with a duration of 1 year. Configure a bucket policy on the logging bucket that disallows uploads of unencrypted data and requires that the encryption source be AWS KMS.
B. Provision an AWS Direct Connect connection, ensuring there is no overlap of the RFC 1918 address space between on-premises hardware and the VPCs. Configure an AWS bucket policy on the logging bucket that requires all objects to be encrypted. Configure the logging application to query the on-premises HSMs from the AWS environment for the encryption key material, and create a unique CMK for each logging event.
C. Create a CMK in AWS KMS with no key material and an origin of EXTERNAL. Import the key material generated from the on-premises HSMs into the CMK using the public key and import token provided by AWS. Configure a bucket policy on the logging bucket that disallows uploads of non-encrypted data and requires that the encryption source be AWS KMS.
D. Create a new CMK in AWS KMS with AWS-provided key material and an origin of AWS_KMS. Disable this CMK, and overwrite the key material with the key material from the on-premises HSM using the public key and import token provided by AWS. Re-enable the CMK. Enable automatic key rotation on the CMK with a duration of 1 year. Configure a bucket policy on the logging bucket that disallows uploads of non-encrypted data and requires that the encryption source be AWS KMS.
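The import flow in option C has three KMS calls: create a CMK with an EXTERNAL origin, fetch the wrapping public key and import token, and import the wrapped material. A minimal boto3 sketch; the wrapped key bytes shown as a placeholder would be produced by the on-premises HSM encrypting the key material with the returned public key:

```python
import boto3

kms = boto3.client("kms", region_name="us-east-1")

# 1. Create a CMK that has no key material yet.
key = kms.create_key(Origin="EXTERNAL", Description="Log encryption key")
key_id = key["KeyMetadata"]["KeyId"]

# 2. Get the public wrapping key and import token for the HSM.
params = kms.get_parameters_for_import(
    KeyId=key_id,
    WrappingAlgorithm="RSAES_OAEP_SHA_256",
    WrappingKeySpec="RSA_2048",
)

# 3. Import the material the HSM wrapped with params["PublicKey"].
encrypted_material = b"..."  # placeholder: wrapped key bytes from the HSM
kms.import_key_material(
    KeyId=key_id,
    ImportToken=params["ImportToken"],
    EncryptedKeyMaterial=encrypted_material,
    ExpirationModel="KEY_MATERIAL_DOES_NOT_EXPIRE",
)
```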
A company plans to migrate to AWS. A solutions architect uses AWS Application Discovery Service over the fleet and discovers that there is an Oracle data warehouse and several PostgreSQL databases. Which combination of migration patterns will reduce licensing costs and operational overhead? (Select TWO.)
A. Lift and shift the Oracle data warehouse to Amazon EC2 using AWS DMS.
B. Migrate the Oracle data warehouse to Amazon Redshift using AWS SCT and AWS DMS.
C. Lift and shift the PostgreSQL databases to Amazon EC2 using AWS DMS.
D. Migrate the PostgreSQL databases to Amazon RDS for PostgreSQL using AWS DMS.
E. Migrate the Oracle data warehouse to an Amazon EMR managed cluster using AWS DMS.
A solutions architect needs to deploy an application on a fleet of Amazon EC2 instances. The EC2 instances run in private subnets in an Auto Scaling group. The application is expected to generate logs at a rate of 100 MB each second on each of the EC2 instances. The logs must be stored in an Amazon S3 bucket so that an Amazon EMR cluster can consume them for further processing. The logs must be quickly accessible for the first 90 days and should be retrievable within 48 hours thereafter. What is the MOST cost-effective solution that meets these requirements?
A. Set up an S3 copy job to write logs from each EC2 instance to the S3 bucket with S3 Standard storage. Use a NAT instance within the private subnets to connect to Amazon S3. Create S3 Lifecycle policies to move logs that are older than 90 days to S3 Glacier.
B. Set up an S3 sync job to copy logs from each EC2 instance to the S3 bucket with S3 Standard storage. Use a gateway VPC endpoint for Amazon S3 to connect to Amazon S3. Create S3 Lifecycle policies to move logs that are older than 90 days to S3 Glacier Deep Archive.
C. Set up an S3 batch operation to copy logs from each EC2 instance to the S3 bucket with S3 Standard storage. Use a NAT gateway with the private subnets to connect to Amazon S3. Create S3 Lifecycle policies to move logs that are older than 90 days to S3 Glacier Deep Archive.
D. Set up an S3 sync job to copy logs from each EC2 instance to the S3 bucket with S3 Standard storage. Use a gateway VPC endpoint for Amazon S3 to connect to Amazon S3. Create S3 Lifecycle policies to move logs that are older than 90 days to S3 Glacier.
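The lifecycle rule that options B and D hinge on is a single transition; Deep Archive satisfies the 48-hour retrieval window at the lowest storage cost. A minimal boto3 sketch with placeholder bucket and prefix:

```python
import boto3

s3 = boto3.client("s3")

# Keep logs in S3 Standard for 90 days, then move them to Glacier
# Deep Archive, where bulk retrieval completes within 48 hours.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-emr-logs",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "logs-to-deep-archive",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [{"Days": 90, "StorageClass": "DEEP_ARCHIVE"}],
        }]
    },
)
```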
A company has a three-tier application running on AWS with a web server, an application server, and an Amazon RDS MySQL DB instance. A solutions architect is designing a disaster recovery (DR) solution with an RPO of 5 minutes. Which solution will meet the company's requirements?
A. Configure AWS Backup to perform cross-Region backups of all servers every 5 minutes. Reprovision the three tiers in the DR Region from the backups using AWS CloudFormation in the event of a disaster.
B. Maintain another running copy of the web and application server stack in the DR Region using AWS CloudFormation drift detection. Configure cross-Region snapshots of the DB instance to the DR Region every 5 minutes. In the event of a disaster, restore the DB instance using the snapshot in the DR Region.
C. Use Amazon EC2 Image Builder to create and copy AMIs of the web and application server to both the primary and DR Regions. Create a cross-Region read replica of the DB instance in the DR Region. In the event of a disaster, promote the read replica to become the master and reprovision the servers with AWS CloudFormation using the AMIs.
D. Create AMIs of the web and application servers in the DR Region. Use scheduled AWS Glue jobs to synchronize the DB instance with another DB instance in the DR Region. In the event of a disaster, switch to the DB instance in the DR Region and reprovision the servers with AWS CloudFormation using the AMIs.
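The cross-Region read replica in option C replicates asynchronously and continuously, which is what makes a 5-minute RPO plausible where scheduled snapshots are not. A minimal boto3 sketch; the Regions and identifiers are placeholders, and the source must be referenced by its full ARN when creating a replica from another Region:

```python
import boto3

# Run the call from the DR Region's endpoint.
rds_dr = boto3.client("rds", region_name="us-west-2")  # placeholder DR Region

rds_dr.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica",
    SourceDBInstanceIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:db:app-db"  # placeholder ARN
    ),
)

# In the event of a disaster, promote the replica to a standalone master.
rds_dr.promote_read_replica(DBInstanceIdentifier="app-db-replica")
```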
A company runs an application that gives users the ability to search for videos and related information by using keywords that are curated from content providers. The application data is stored in an on-premises Oracle database that is 800 GB in size. The company wants to migrate the data to an Amazon Aurora MySQL DB instance. A solutions architect plans to use the AWS Schema Conversion Tool and AWS Database Migration Service (AWS DMS) for the migration. During the migration, the existing database must serve ongoing requests. The migration must be completed with minimum downtime. Which solution will meet these requirements?
A. Create primary key indexes, secondary indexes, and referential integrity constraints in the target database before starting the migration process.
B. Use AWS DMS to run the conversion report for Oracle to Aurora MySQL. Remediate any issues. Then use AWS DMS to migrate the data.
C. Use the M5 or C5 DMS replication instance type for ongoing replication.
D. Turn off automatic backups and logging of the target database until the migration and cutover processes are complete.
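For this "serve ongoing requests with minimum downtime" scenario, the usual DMS mechanism is a full-load-plus-CDC replication task, which keeps streaming source changes to Aurora until cutover. A minimal boto3 sketch; all ARNs are placeholders and the endpoints and replication instance are assumed to exist already:

```python
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

# Full load plus change data capture: the Oracle source keeps serving
# traffic while ongoing changes replicate to Aurora MySQL.
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INST",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```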
A company is running an application on Amazon EC2 instances in three environments: development, testing, and production. The company uses AMIs to deploy the EC2 instances. The company builds the AMIs by using custom deployment scripts and infrastructure orchestration tools for each release in each environment. The company is receiving errors in its deployment process. Errors appear during operating system package downloads and during application code installation from a third-party Git hosting service. The company needs deployments to become more reliable across all environments. Which combination of steps will meet these requirements? (Select THREE.)
A. Mirror the application code to an AWS CodeCommit Git repository. Use the repository to build EC2 AMIs.
B. Produce multiple EC2 AMIs, one for each environment, for each release.
C. Produce one EC2 AMI for each release for use across all environments.
D. Mirror the application code to a third-party Git repository that uses Amazon S3 storage.Use the repository for deployment.
E. Replace the custom scripts and tools with AWS CodeBuild. Update the infrastructure deployment process to use EC2 Image Builder.
A solutions architect is responsible for redesigning a legacy Java application to improve its availability, data durability, and scalability. Currently, the application runs on a single high-memory Amazon EC2 instance. It accepts HTTP requests from upstream clients, adds them to an in-memory queue, and responds with a 200 status. A separate application thread reads items from the queue, processes them, and persists the results to an Amazon RDS MySQL instance. The processing time for each item takes 90 seconds on average, most of which is spent waiting on external service calls, but the application is written to process multiple items in parallel. Traffic to this service is unpredictable. During periods of high load, items may sit in the internal queue for over an hour while the application processes the backlog. In addition, the current system has issues with availability and data loss if the single application node fails. Clients that access this service cannot be modified. They expect to receive a response to each HTTP request they send within 10 seconds before they will time out and retry the request. Which approach would improve the availability and durability of the system while decreasing the processing latency and minimizing costs?
A. Create an Amazon API Gateway REST API that uses Lambda proxy integration to pass requests to an AWS Lambda function. Migrate the core processing code to a Lambda function and write a wrapper class that provides a handler method that converts the proxy events to the internal application data model and invokes the processing module.
B. Create an Amazon API Gateway REST API that uses a service proxy to put items in an Amazon SQS queue. Extract the core processing code from the existing application and update it to pull items from Amazon SQS instead of an in-memory queue. Deploy the new processing application to smaller EC2 instances within an Auto Scaling group that scales dynamically based on the approximate number of messages in the Amazon SQS queue.
C. Modify the application to use Amazon DynamoDB instead of Amazon RDS. Configure Auto Scaling for the DynamoDB table. Deploy the application within an Auto Scaling group with a scaling policy based on CPU utilization. Back the in-memory queue with a memory-mapped file to an instance store volume and periodically write that file to Amazon S3.
D. Update the application to use a Redis task queue instead of the in-memory queue. Build a Docker container image for the application. Create an Amazon ECS task definition that includes the application container and a separate container to host Redis. Deploy the new task definition as an ECS service using AWS Fargate, and enable Auto Scaling.
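The backlog-driven scaling in option B is typically built from a simple scaling policy plus a CloudWatch alarm on the queue's ApproximateNumberOfMessagesVisible metric. A minimal boto3 sketch; the Auto Scaling group name, queue name, and threshold are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Simple scaling policy: add instances when the backlog grows.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="processing-asg",  # placeholder ASG name
    PolicyName="scale-out-on-backlog",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=2,
    Cooldown=120,
)

# Alarm on the number of visible messages waiting in the SQS queue.
cloudwatch.put_metric_alarm(
    AlarmName="sqs-backlog-high",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "work-items"}],  # placeholder
    Statistic="Average",
    Period=60,
    EvaluationPeriods=2,
    Threshold=100,  # placeholder backlog threshold
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```

Because API Gateway returns as soon as the message lands in SQS, clients get their response well within the 10-second timeout even when the backlog is deep.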
A company maintains a restaurant review website. The website is a single-page application where files are stored in Amazon S3 and delivered using Amazon CloudFront. The company receives several fake postings every day that are manually removed. The security team has identified that most of the fake posts are from bots with IP addresses that have a bad reputation within the same global region. The team needs to create a solution to help restrict the bots from accessing the website. Which strategy should a solutions architect use?
A. Use AWS Firewall Manager to control the CloudFront distribution security settings. Create a geographical block rule and associate it with Firewall Manager.
B. Associate an AWS WAF web ACL with the CloudFront distribution. Select the managed Amazon IP reputation rule group for the web ACL with a deny action.
C. Use AWS Firewall Manager to control the CloudFront distribution security settings. Select the managed Amazon IP reputation rule group and associate it with Firewall Manager with a deny action.
D. Associate an AWS WAF web ACL with the CloudFront distribution. Create a rule group for the web ACL with a geographical match statement with a deny action.
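Option B maps to a WAFv2 web ACL that includes the AWS-managed Amazon IP reputation list. A minimal boto3 sketch; the ACL and metric names are placeholders, and CLOUDFRONT-scoped web ACLs must be created through the us-east-1 endpoint:

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="restaurant-review-acl",  # placeholder name
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "ip-reputation",
        "Priority": 0,
        "Statement": {
            "ManagedRuleGroupStatement": {
                "VendorName": "AWS",
                "Name": "AWSManagedRulesAmazonIpReputationList",
            }
        },
        # Managed rule groups take OverrideAction; "None" keeps each
        # rule's own blocking action.
        "OverrideAction": {"None": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "ipReputation",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "restaurantReviewAcl",
    },
)
```

The returned web ACL ARN is then set on the CloudFront distribution to associate the two.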
An AWS customer has a web application that runs on premises. The web application fetches data from a third-party API that is behind a firewall. The third party accepts only one public CIDR block in each client's allow list. The customer wants to migrate their web application to the AWS Cloud. The application will be hosted on a set of Amazon EC2 instances behind an Application Load Balancer (ALB) in a VPC. The ALB is located in public subnets. The EC2 instances are located in private subnets. NAT gateways provide internet access to the private subnets. How should a solutions architect ensure that the web application can continue to call the third-party API after the migration?
A. Associate a block of customer-owned public IP addresses to the VPC. Enable public IP addressing for public subnets in the VPC.
B. Register a block of customer-owned public IP addresses in the AWS account. Create Elastic IP addresses from the address block and assign them to the NAT gateways in the VPC.
C. Create Elastic IP addresses from the block of customer-owned IP addresses. Assign the static Elastic IP addresses to the ALB.
D. Register a block of customer-owned public IP addresses in the AWS account. Set up AWS Global Accelerator to use Elastic IP addresses from the address block. Set the ALB as the accelerator endpoint.
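Option B describes the Bring Your Own IP (BYOIP) pattern: allocate Elastic IPs from the customer-owned pool and attach them to the NAT gateways so outbound calls come from the allow-listed block. A minimal boto3 sketch; the pool and subnet IDs are placeholders, and the BYOIP CIDR is assumed to be provisioned and advertised already (via ProvisionByoipCidr and AdvertiseByoipCidr):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allocate an Elastic IP from the customer-owned BYOIP pool.
eip = ec2.allocate_address(
    Domain="vpc",
    PublicIpv4Pool="ipv4pool-ec2-0example",  # placeholder BYOIP pool ID
)

# Create (or recreate) the NAT gateway with that allocation so egress
# traffic to the third-party API originates from the allow-listed block.
ec2.create_nat_gateway(
    SubnetId="subnet-0example",  # placeholder public subnet ID
    AllocationId=eip["AllocationId"],
)
```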