SAA-C03 Practice Test

Whether you're a beginner or brushing up on skills, our SAA-C03 practice exam is your key to success. Our comprehensive question bank covers all key topics, ensuring you're fully prepared.


Page 22 out of 193 Pages

Topic 4: Exam Pool D

A company has a workload in an AWS Region. Customers connect to and access the workload by using an Amazon API Gateway REST API. The company uses Amazon Route 53 as its DNS provider. The company wants to provide individual and secure URLs for all customers. Which combination of steps will meet these requirements with the MOST operational efficiency? (Select THREE.)


A. Register the required domain in a registrar. Create a wildcard custom domain name in a Route 53 hosted zone and a record in the zone that points to the API Gateway endpoint.


B. Request a wildcard certificate that matches the domains in AWS Certificate Manager (ACM) in a different Region.


C. Create hosted zones for each customer as required in Route 53. Create zone records that point to the API Gateway endpoint.


D. Request a wildcard certificate that matches the custom domain name in AWS Certificate Manager (ACM) in the same Region.


E. Create multiple API endpoints for each customer in API Gateway.


F. Create a custom domain name in API Gateway for the REST API. Import the certificate from AWS Certificate Manager (ACM).





A.
  Register the required domain in a registrar. Create a wildcard custom domain name in a Route 53 hosted zone and a record in the zone that points to the API Gateway endpoint.

D.
  Request a wildcard certificate that matches the custom domain name in AWS Certificate Manager (ACM) in the same Region.

F.
  Create a custom domain name in API Gateway for the REST API. Import the certificate from AWS Certificate Manager (ACM).

Explanation: To provide individual and secure URLs for all customers using an API Gateway REST API, you need to take the following steps:

A. Register the required domain in a registrar. Create a wildcard custom domain name in a Route 53 hosted zone and a record in the zone that points to the API Gateway endpoint. This lets you use a custom domain name for your API instead of the default one generated by API Gateway. A wildcard custom domain name means that any subdomain under your domain name (such as customer1.example.com or customer2.example.com) can be used to access your API. You need to register your domain name with a registrar (such as Route 53 or a third-party registrar) and create a hosted zone in Route 53 for your domain name. You also need to create an alias record in the hosted zone that points to the API Gateway endpoint.

D. Request a wildcard certificate that matches the custom domain name in AWS Certificate Manager (ACM) in the same Region. This secures your API with HTTPS using a certificate issued by ACM. A wildcard certificate matches any subdomain under your domain name (such as *.example.com). You need to request or import a certificate in ACM that matches your custom domain name and verify that you own the domain name. The certificate must be requested in the same Region as your API.

F. Create a custom domain name in API Gateway for the REST API. Import the certificate from AWS Certificate Manager (ACM). This associates your custom domain name with your API and uses the ACM certificate to enable HTTPS. You need to create a custom domain name in API Gateway for the REST API and specify the certificate ARN from ACM. You also need to create a base path mapping that maps a path from your custom domain name to your API stage.
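For concreteness, these three steps come down to a handful of API calls. Below is a minimal boto3 sketch, assuming hypothetical values for the Region, hosted zone ID, REST API ID, and the example.com domain (none of these come from the question), and assuming the certificate's DNS validation has completed before the custom domain name is created:

```python
import boto3

REGION = "us-east-1"        # hypothetical; must be the REST API's Region
DOMAIN = "*.example.com"    # wildcard covers customer1.example.com, etc.

acm = boto3.client("acm", region_name=REGION)
apigw = boto3.client("apigateway", region_name=REGION)
route53 = boto3.client("route53")

# Step D: request a wildcard certificate in the same Region as the API.
cert = acm.request_certificate(DomainName=DOMAIN, ValidationMethod="DNS")
# (Complete DNS validation before using the certificate below.)

# Step F: create the custom domain name and map it to an API stage.
domain = apigw.create_domain_name(
    domainName=DOMAIN,
    regionalCertificateArn=cert["CertificateArn"],
    endpointConfiguration={"types": ["REGIONAL"]},
)
apigw.create_base_path_mapping(
    domainName=DOMAIN,
    restApiId="abc123",     # hypothetical REST API ID
    stage="prod",
)

# Step A: one wildcard alias record pointing at the API Gateway domain.
route53.change_resource_record_sets(
    HostedZoneId="Z0HYPOTHETICAL",  # hypothetical hosted zone for example.com
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": DOMAIN,
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": domain["regionalHostedZoneId"],
                    "DNSName": domain["regionalDomainName"],
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```

A single wildcard certificate and record serve every customer subdomain, which is why this combination is more operationally efficient than per-customer hosted zones (option C) or per-customer API endpoints (option E).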

A global company runs its applications in multiple AWS accounts in AWS Organizations. The company's applications use multipart uploads to upload data to multiple Amazon S3 buckets across AWS Regions. The company wants to report on incomplete multipart uploads for cost compliance purposes. Which solution will meet these requirements with the LEAST operational overhead?


A. Configure AWS Config with a rule to report the incomplete multipart upload object count.


B. Create a service control policy (SCP) to report the incomplete multipart upload object count.


C. Configure S3 Storage Lens to report the incomplete multipart upload object count.


D. Create an S3 Multi-Region Access Point to report the incomplete multipart upload object count.





C.
  Configure S3 Storage Lens to report the incomplete multipart upload object count.

Explanation: S3 Storage Lens is a cloud storage analytics feature that provides organization-wide visibility into object storage usage and activity across multiple AWS accounts in AWS Organizations. S3 Storage Lens can report the incomplete multipart upload object count as one of the metrics that it collects and displays on an interactive dashboard in the S3 console. S3 Storage Lens can also export metrics in CSV or Parquet format to an S3 bucket for further analysis. This solution will meet the requirements with the least operational overhead, as it does not require any code development or policy changes.
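S3 Storage Lens is configured once for the organization (in the S3 console or through the s3control API) and requires no per-bucket code. For contrast, here is a minimal boto3 sketch of the manual check it replaces, with hypothetical bucket names; repeating this across every bucket, Region, and account is exactly the operational overhead Storage Lens eliminates:

```python
import boto3

s3 = boto3.client("s3")

def count_incomplete_multipart_uploads(bucket: str) -> int:
    """Count multipart uploads that were started but never completed or aborted."""
    paginator = s3.get_paginator("list_multipart_uploads")
    total = 0
    for page in paginator.paginate(Bucket=bucket):
        total += len(page.get("Uploads", []))
    return total

# Hypothetical bucket names for illustration.
for bucket in ["app-uploads-us-east-1", "app-uploads-eu-west-1"]:
    print(bucket, count_incomplete_multipart_uploads(bucket))
```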

A company needs to configure a real-time data ingestion architecture for its application. The company needs an API, a process that transforms data as the data is streamed, and a storage solution for the data.
Which solution will meet these requirements with the LEAST operational overhead?


A. Deploy an Amazon EC2 instance to host an API that sends data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data Firehose delivery stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the Kinesis Data Firehose delivery stream to send the data to Amazon S3.


B. Deploy an Amazon EC2 instance to host an API that sends data to AWS Glue. Stop source/destination checking on the EC2 instance. Use AWS Glue to transform the data and to send the data to Amazon S3.


C. Configure an Amazon API Gateway API to send data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data Firehose delivery stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the Kinesis Data Firehose delivery stream to send the data to Amazon S3.


D. Configure an Amazon API Gateway API to send data to AWS Glue. Use AWS Lambda functions to transform the data. Use AWS Glue to send the data to Amazon S3.





C.
  Configure an Amazon API Gateway API to send data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data Firehose delivery stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the Kinesis Data Firehose delivery stream to send the data to Amazon S3.
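Option C is serverless end to end: API Gateway exposes the API, Kinesis Data Streams ingests the records, and Kinesis Data Firehose invokes an AWS Lambda function to transform the data in flight before delivering it to Amazon S3, so there are no EC2 instances to patch or scale (the main drawback of options A and B). As a minimal sketch of the transformation piece, a Firehose transformation Lambda receives base64-encoded records and must return each one with a status and re-encoded data; the 'message' field below is a hypothetical payload field, not part of Firehose's contract:

```python
import base64
import json

def handler(event, context):
    """Firehose transformation Lambda: decode, transform, re-encode each record."""
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        # Placeholder transform: uppercase a hypothetical 'message' field.
        payload["message"] = payload.get("message", "").upper()
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",  # or "Dropped" / "ProcessingFailed"
            "data": base64.b64encode((json.dumps(payload) + "\n").encode()).decode(),
        })
    return {"records": output}
```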

A company runs multiple Amazon EC2 Linux instances in a VPC across two Availability Zones. The instances host applications that use a hierarchical directory structure. The applications need to read and write rapidly and concurrently to shared storage. What should a solutions architect do to meet these requirements?


A. Create an Amazon S3 bucket. Allow access from all the EC2 instances in the VPC.


B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system from each EC2 instance.


C. Create a file system on a Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volume. Attach the EBS volume to all the EC2 instances.


D. Create file systems on Amazon Elastic Block Store (Amazon EBS) volumes that are attached to each EC2 instance. Synchronize the EBS volumes across the different EC2 instances.





B.
  Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system from each EC2 instance.

Explanation: Amazon EFS allows the EC2 instances to read and write rapidly and concurrently to shared storage across two Availability Zones. Amazon EFS provides a scalable, elastic, and highly available file system that can be mounted from multiple EC2 instances at the same time. It supports high levels of throughput and IOPS with consistent low latencies, and it supports NFSv4 lock upgrading and downgrading, which enables high levels of concurrency.
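As a minimal sketch of the setup, assuming hypothetical subnet and security group IDs: the file system is created once, given a mount target in each Availability Zone, and then mounted over NFS from every instance.

```python
import boto3

efs = boto3.client("efs")

# Create one shared file system for all instances.
fs = efs.create_file_system(
    CreationToken="shared-app-storage",  # idempotency token
    PerformanceMode="generalPurpose",
    Encrypted=True,
)
fs_id = fs["FileSystemId"]

# One mount target per Availability Zone the instances run in.
for subnet_id in ["subnet-aaaa1111", "subnet-bbbb2222"]:  # hypothetical subnets
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],  # must allow NFS (TCP 2049)
    )

# Each instance then mounts the same file system, e.g.:
#   sudo mount -t nfs4 <fs_id>.efs.<region>.amazonaws.com:/ /mnt/shared
```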

A company has created an image analysis application in which users can upload photos and add photo frames to their images. The users upload images and metadata to indicate which photo frames they want to add to their images. The application uses a single Amazon EC2 instance and Amazon DynamoDB to store the metadata.

The application is becoming more popular, and the number of users is increasing. The company expects the number of concurrent users to vary significantly depending on the time of day and day of week. The company must ensure that the application can scale to meet the needs of the growing user base.

Which solution meets these requirements?


A. Use AWS Lambda to process the photos. Store the photos and metadata in DynamoDB.


B. Use Amazon Kinesis Data Firehose to process the photos and to store the photos and metadata.


C. Use AWS Lambda to process the photos. Store the photos in Amazon S3. Retain DynamoDB to store the metadata.


D. Increase the number of EC2 instances to three. Use Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volumes to store the photos and metadata.





C.
  Use AWS Lambda to process the photos. Store the photos in Amazon S3. Retain DynamoDB to store the metadata.

Explanation: This solution meets the requirements of scalability, performance, and availability. AWS Lambda can process the photos in parallel and scale up or down automatically depending on the demand. Amazon S3 can store the photos and metadata reliably and durably, and provide high availability and low latency. DynamoDB can store the metadata efficiently and provide consistent performance. This solution also reduces the cost and complexity of managing EC2 instances and EBS volumes.
Option A is incorrect because DynamoDB is a poor fit for storing the photos themselves: items are limited to 400 KB, and keeping binary image data in the table increases storage cost and constrains throughput. Option B is incorrect because Kinesis Data Firehose is not designed for processing photos; it streams data to destinations such as S3 or Redshift. Option D is incorrect because a fixed fleet of three EC2 instances with Provisioned IOPS SSD (io2) volumes cannot adapt to significant swings in concurrent users, and it increases the cost and complexity of managing the infrastructure.
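A minimal sketch of the processing path, assuming a hypothetical DynamoDB table named PhotoMetadata keyed on the S3 object key and a hypothetical output bucket; the event shape is the standard S3 notification that would trigger the function on each upload:

```python
import urllib.parse

import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("PhotoMetadata")  # hypothetical table

def handler(event, context):
    for rec in event["Records"]:
        bucket = rec["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(rec["s3"]["object"]["key"])

        # Look up which frame the user requested for this photo.
        item = table.get_item(Key={"photoKey": key}).get("Item", {})
        frame = item.get("frame", "none")

        # Placeholder for the real image work (e.g. compositing with Pillow).
        photo = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        framed = photo  # apply `frame` here in a real application

        s3.put_object(
            Bucket="processed-photos-example",  # hypothetical output bucket
            Key=f"framed/{key}",
            Body=framed,
        )
```

Because Lambda scales its concurrency automatically with the upload rate, this design absorbs the time-of-day and day-of-week swings in concurrent users without capacity planning.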

