DAS-C01 Practice Test



A company wants to use an automatic machine learning (ML) Random Cut Forest (RCF) algorithm to
visualize complex real-world scenarios, such as detecting seasonality and trends, excluding outliers, and
imputing missing values.
The team working on this project is non-technical and is looking for an out-of-the-box solution that will
require the LEAST amount of management overhead.
Which solution will meet these requirements?


A.

Use an AWS Glue ML transform to create a forecast and then use Amazon QuickSight to visualize the
data.


B.

Use Amazon QuickSight to visualize the data and then use ML-powered forecasting to forecast the key business metrics.


C.

Use a pre-built ML AMI from the AWS Marketplace to create forecasts and then use Amazon
QuickSight to visualize the data.


D.

Use calculated fields to create a new forecast and then use Amazon QuickSight to visualize the data.





Answer: B.

Use Amazon QuickSight to visualize the data and then use ML-powered forecasting to forecast the key business metrics.

QuickSight's ML-powered forecasting is built on the Random Cut Forest (RCF) algorithm and automatically detects seasonality and trends, excludes outliers, and imputes missing values, with no infrastructure for the non-technical team to manage. AWS Glue ML transforms support only the FindMatches transform and cannot create forecasts.



A smart home automation company must efficiently ingest and process messages from various connected
devices and sensors. Most of these messages arrive as a large number of small files. The messages are
ingested using Amazon Kinesis Data Streams and sent to Amazon S3 using a Kinesis data stream
consumer application. The Amazon S3 message data is then passed through a processing pipeline built on
Amazon EMR running scheduled PySpark jobs.
The data platform team manages data processing and is concerned about the efficiency and cost of
downstream data processing. They want to continue to use PySpark.
Which solution improves the efficiency of the data processing jobs and is well architected?


A.

Send the sensor and device data directly to a Kinesis Data Firehose delivery stream that delivers the data to Amazon S3 with Apache Parquet record format conversion enabled. Use Amazon EMR running
PySpark to process the data in Amazon S3.


B.

Set up an AWS Lambda function with a Python runtime environment. Process individual Kinesis data
stream messages from the connected devices and sensors using Lambda.


C.

Launch an Amazon Redshift cluster. Copy the collected data from Amazon S3 to Amazon Redshift and
move the data processing jobs from Amazon EMR to Amazon Redshift.


D.

Set up AWS Glue Python jobs to merge the small data files in Amazon S3 into larger files and transform them to Apache Parquet format. Migrate the downstream PySpark jobs from Amazon EMR to AWS Glue.





Answer: A.

Send the sensor and device data directly to a Kinesis Data Firehose delivery stream that delivers the data to Amazon S3 with Apache Parquet record format conversion enabled. Use Amazon EMR running
PySpark to process the data in Amazon S3.

Firehose buffers the many small records into larger objects and converts them to the columnar Parquet format before delivery, so the downstream PySpark jobs read fewer, larger, and more efficiently structured files with no custom consumer code to maintain.
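As a rough sketch of how this delivery stream could be wired up with boto3 (every stream, role, bucket, and Glue Data Catalog name below is a hypothetical placeholder), the call reads from the existing Kinesis data stream and converts incoming JSON records to Apache Parquet before writing to Amazon S3:

import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

firehose.create_delivery_stream(
    DeliveryStreamName="sensor-data-to-s3",          # hypothetical name
    DeliveryStreamType="KinesisStreamAsSource",
    KinesisStreamSourceConfiguration={
        "KinesisStreamARN": "arn:aws:kinesis:us-east-1:123456789012:stream/sensor-stream",
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
    },
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
        "BucketARN": "arn:aws:s3:::sensor-data-bucket",
        # Buffer the many small records into larger objects before delivery.
        "BufferingHints": {"SizeInMBs": 128, "IntervalInSeconds": 300},
        # Convert the incoming JSON records to Apache Parquet using a
        # table schema registered in the AWS Glue Data Catalog.
        "DataFormatConversionConfiguration": {
            "Enabled": True,
            "InputFormatConfiguration": {"Deserializer": {"OpenXJsonSerDe": {}}},
            "OutputFormatConfiguration": {"Serializer": {"ParquetSerDe": {}}},
            "SchemaConfiguration": {
                "DatabaseName": "sensor_db",
                "TableName": "sensor_events",
                "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
            },
        },
    },
)

With the data landing as larger Parquet objects, the existing EMR jobs can read it directly, for example with spark.read.parquet("s3://sensor-data-bucket/").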



A company analyzes its data in an Amazon Redshift data warehouse, which currently has a cluster of three
dense storage nodes. Due to a recent business acquisition, the company needs to load an additional 4 TB of
user data into Amazon Redshift. The engineering team will combine all the user data and apply complex
calculations that require I/O intensive resources. The company needs to adjust the cluster's capacity to support
the change in analytical and storage requirements.
Which solution meets these requirements?


A.

Resize the cluster using elastic resize with dense compute nodes.


B.

Resize the cluster using classic resize with dense compute nodes.


C.

Resize the cluster using elastic resize with dense storage nodes.


D.

Resize the cluster using classic resize with dense storage nodes.





Answer: B.

Resize the cluster using classic resize with dense compute nodes.

Dense compute nodes provide the SSD-backed, I/O-intensive performance the complex calculations require, and a classic resize is how the cluster is moved to a different node type.
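As a minimal sketch (the cluster identifier, node type, and node count are assumptions for illustration), a classic resize to dense compute nodes can be requested with boto3:

import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Classic resize (Classic=True) provisions a new cluster of the target
# node type and copies the data over, which is how a node-type change
# such as dense storage (ds2) to dense compute (dc2) is performed.
redshift.resize_cluster(
    ClusterIdentifier="analytics-cluster",  # hypothetical identifier
    ClusterType="multi-node",
    NodeType="dc2.8xlarge",                 # dense compute nodes
    NumberOfNodes=4,
    Classic=True,
)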



A hospital uses wearable medical sensor devices to collect data from patients. The hospital is architecting a near-real-time solution that can ingest the data securely at scale. The solution should also be able to remove the patient's protected health information (PHI) from the streaming data and store the data in durable storage.
Which solution meets these requirements with the least operational overhead?


A.

Ingest the data using Amazon Kinesis Data Streams, which invokes an AWS Lambda function using Kinesis Client Library (KCL) to remove all PHI. Write the data in Amazon S3.


B.

Ingest the data using Amazon Kinesis Data Firehose to write the data to Amazon S3. Have Amazon S3 trigger an AWS Lambda function that parses the sensor data to remove all PHI in Amazon S3.


C.

Ingest the data using Amazon Kinesis Data Streams to write the data to Amazon S3. Have the data
stream launch an AWS Lambda function that parses the sensor data and removes all PHI in Amazon S3.


D.

Ingest the data using Amazon Kinesis Data Firehose to write the data to Amazon S3. Implement a
transformation AWS Lambda function that parses the sensor data to remove all PHI.





Answer: D.

Ingest the data using Amazon Kinesis Data Firehose to write the data to Amazon S3. Implement a
transformation AWS Lambda function that parses the sensor data to remove all PHI.

Firehose can invoke a transformation Lambda function on records in flight, so the PHI is removed before the data ever lands in Amazon S3, and the fully managed Firehose-to-S3 delivery keeps operational overhead to a minimum.
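A minimal sketch of such a transformation function, following the record format Kinesis Data Firehose passes to its transformation Lambda (the PHI field names are hypothetical and would depend on the actual sensor payload):

import base64
import json

# Hypothetical PHI keys; adjust to match the real sensor payload.
PHI_FIELDS = {"patient_name", "ssn", "date_of_birth", "address"}

def lambda_handler(event, context):
    # Firehose batches records under event["records"]; each record's
    # data is base64-encoded. Strip the PHI keys and return the record
    # with result "Ok" so Firehose delivers the cleaned version to S3.
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        cleaned = {k: v for k, v in payload.items() if k not in PHI_FIELDS}
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(
                (json.dumps(cleaned) + "\n").encode("utf-8")
            ).decode("utf-8"),
        })
    return {"records": output}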



A global company has different sub-organizations, and each sub-organization sells its products and services in
various countries. The company's senior leadership wants to quickly identify which sub-organization is the
strongest performer in each country. All sales data is stored in Amazon S3 in Parquet format.
Which approach can provide the visuals that senior leadership requested with the least amount of effort?


A.

Use Amazon QuickSight with Amazon Athena as the data source. Use heat maps as the visual type.


B.

Use Amazon QuickSight with Amazon S3 as the data source. Use heat maps as the visual type.


C.

Use Amazon QuickSight with Amazon Athena as the data source. Use pivot tables as the visual type.


D.

Use Amazon QuickSight with Amazon S3 as the data source. Use pivot tables as the visual type.





Answer: A.

Use Amazon QuickSight with Amazon Athena as the data source. Use heat maps as the visual type.

QuickSight cannot read Parquet files from S3 directly, so Athena is needed to query the data in place, and a heat map of sub-organization by country color-codes the measure so the strongest performer in each country stands out at a glance.
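A sketch of the kind of Athena aggregation that would sit behind the heat map (the database, table, and column names are hypothetical), run here with boto3 for illustration:

import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Aggregate sales by country and sub-organization; the heat map would
# plot country against sub_organization with total_sales as the color.
query = """
SELECT country, sub_organization, SUM(sales_amount) AS total_sales
FROM sales
GROUP BY country, sub_organization
"""

response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "sales_db"},  # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://athena-query-results-bucket/"},
)
print(response["QueryExecutionId"])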



