DP-203 Practice Test


Topic 3: Mixed Questions

You have an Azure Synapse Analytics job that uses Scala.
You need to view the status of the job.
What should you do?


A. From Azure Monitor, run a Kusto query against the AzureDiagnostics table.

B. From Azure Monitor, run a Kusto query against the SparkLoggingEvent_CL table.

C. From Synapse Studio, select the workspace. From Monitor, select Apache Spark applications.

D. From Synapse Studio, select the workspace. From Monitor, select SQL requests.





Answer: C. From Synapse Studio, select the workspace. From Monitor, select Apache Spark applications.
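For context on the Azure Monitor distractors: Kusto queries only see Spark events if the workspace forwards its logs to a Log Analytics workspace, where they land in custom tables such as SparkLoggingEvent_CL (the exact table name depends on how diagnostic logging was configured). A minimal Python sketch of that route, assuming the azure-monitor-query package and a placeholder workspace ID:

```python
# Minimal sketch of the Kusto route, assuming Spark logs are exported to a
# Log Analytics workspace. WORKSPACE_ID and the table name are placeholders.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

response = client.query_workspace(
    workspace_id="WORKSPACE_ID",
    query="SparkLoggingEvent_CL | take 10",
    timespan=timedelta(hours=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```

In Synapse Studio itself, the Monitor hub's Apache Spark applications page shows the job status directly, with no log-forwarding setup required, which is why C is the answer.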



You have an Azure Databricks workspace named workspace1 in the Standard pricing tier.
You need to configure workspace1 to support autoscaling all-purpose clusters. The solution must meet the following requirements:
Automatically scale down workers when the cluster is underutilized for three minutes.
Minimize the time it takes to scale to the maximum number of workers.
Minimize costs.
What should you do first?


A. Enable container services for workspace1.

B. Upgrade workspace1 to the Premium pricing tier.

C. Set Cluster Mode to High Concurrency.

D. Create a cluster policy in workspace1.





Answer: B. Upgrade workspace1 to the Premium pricing tier.



Explanation:
For clusters running Databricks Runtime 6.4 and above, optimized autoscaling is used by all-purpose clusters in the Premium plan.
Optimized autoscaling:
Scales up from min to max in 2 steps.
Can scale down even if the cluster is not idle by looking at shuffle file state.
Scales down based on a percentage of current nodes.
On job clusters, scales down if the cluster is underutilized over the last 40 seconds.
On all-purpose clusters, scales down if the cluster is underutilized over the last 150 seconds.
The spark.databricks.aggressiveWindowDownS Spark configuration property specifies in seconds how often a cluster makes down-scaling decisions. Increasing the value causes a cluster to scale down more slowly. The maximum value is 600.
Note: Standard autoscaling
Starts with adding 8 nodes. Thereafter, scales up exponentially, but can take many steps to reach the max. You can customize the first step by setting the spark.databricks.autoscaling.standardFirstStepUp Spark configuration property.
Scales down only when the cluster is completely idle and it has been underutilized for the last 10 minutes.
Scales down exponentially, starting with 1 node.
Reference:
https://docs.databricks.com/clusters/configure.html
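To see how these pieces fit together after the Premium upgrade, here is a minimal sketch that creates an autoscaling all-purpose cluster through the Databricks Clusters API (2.0); the host, token, node type, and runtime values below are placeholders rather than values from the question:

```python
# Minimal sketch: create an autoscaling all-purpose cluster through the
# Databricks Clusters API. DATABRICKS_HOST, TOKEN, and the node/runtime
# values are placeholders.
import requests

DATABRICKS_HOST = "https://adb-1234567890123456.7.azuredatabricks.net"
TOKEN = "dapi-..."  # personal access token (placeholder)

cluster_spec = {
    "cluster_name": "autoscaling-demo",
    "spark_version": "11.3.x-scala2.12",
    "node_type_id": "Standard_DS3_v2",
    # Optimized autoscaling (Premium tier) scales between these bounds.
    "autoscale": {"min_workers": 2, "max_workers": 8},
    # Down-scaling decision window in seconds; 180 roughly matches the
    # question's "underutilized for three minutes" requirement.
    "spark_conf": {"spark.databricks.aggressiveWindowDownS": "180"},
}

resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=cluster_spec,
)
resp.raise_for_status()
print(resp.json()["cluster_id"])
```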

You have an Azure Data Factory instance named DF1 that contains a pipeline named PL1. PL1 includes a tumbling window trigger.
You create five clones of PL1. You configure each clone pipeline to use a different data source.
You need to ensure that the execution schedules of the clone pipelines match the execution schedule of PL1.
What should you do?


A. Add a new trigger to each cloned pipeline.

B. Associate each cloned pipeline to an existing trigger.

C. Create a tumbling window trigger dependency for the trigger of PL1.

D. Modify the Concurrency setting of each pipeline.





Answer: B. Associate each cloned pipeline to an existing trigger.
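A note on the mechanics: a tumbling window trigger's definition references exactly one pipeline, so in code the association between a pipeline and its schedule is expressed on the trigger definition itself. A rough sketch with the azure-mgmt-datafactory SDK, where the subscription, resource group, trigger name, and pipeline name are all placeholders:

```python
# Minimal sketch, assuming the azure-mgmt-datafactory SDK. All resource
# names below are placeholders, not values from the question.
from datetime import datetime

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    PipelineReference,
    TriggerPipelineReference,
    TriggerResource,
    TumblingWindowTrigger,
)

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# The trigger carries the window settings (frequency, interval, start time)
# that define the execution schedule a pipeline inherits.
trigger = TumblingWindowTrigger(
    pipeline=TriggerPipelineReference(
        pipeline_reference=PipelineReference(reference_name="PL1_clone1")
    ),
    frequency="Hour",  # same window settings as PL1's trigger (assumed)
    interval=1,
    start_time=datetime(2024, 1, 1),
    max_concurrency=1,
)

client.triggers.create_or_update(
    "<resource-group>", "DF1", "PL1_clone1_trigger", TriggerResource(properties=trigger)
)
```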



What should you recommend using to secure sensitive customer contact information?


A. data labels

B. column-level security

C. row-level security

D. Transparent Data Encryption (TDE)





Answer: B. column-level security



Explanation:
Column-level security restricts access to specific columns, so only authorized users can read the sensitive customer contact fields while everyone else can still query the rest of the table.
Scenario: All cloud data must be encrypted at rest and in transit.
Always Encrypted is a feature designed to protect sensitive data stored in specific database columns from access (for example, credit card numbers, national identification numbers, or data on a need-to-know basis). This includes database administrators or other privileged users who are authorized to access the database to perform management tasks, but have no business need to access the particular data in the encrypted columns. The data is always encrypted, which means the encrypted data is decrypted only for processing by client applications with access to the encryption key.
Reference:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-security-overview
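As a concrete illustration of option B, column-level security in a dedicated SQL pool is a column-scoped GRANT. A minimal sketch via pyodbc; the server, table, column, and user names are illustrative only:

```python
# Minimal sketch of column-level security: grant a user SELECT on only the
# non-sensitive columns. Connection details and names are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<server>.sql.azuresynapse.net;DATABASE=<pool>;"
    "UID=<user>;PWD=<password>"
)
cursor = conn.cursor()

# Analysts can read names but not the contact columns in the same table.
cursor.execute(
    "GRANT SELECT ON dbo.Customer (CustomerId, FirstName, LastName) TO Analyst;"
)
conn.commit()
```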

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this scenario, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Storage account that contains 100 GB of files. The files contain text and numerical values. 75% of the rows contain description data that has an average length of 1.1 MB.
You plan to copy the data from the storage account to an Azure SQL data warehouse.
You need to prepare the files to ensure that the data copies quickly.
Solution: You modify the files to ensure that each row is less than 1 MB.
Does this meet the goal?


A. Yes

B. No





Answer: A. Yes



Explanation:
PolyBase, the fastest way to load a SQL data warehouse, cannot load rows larger than 1 MB, so keeping each row under 1 MB allows the copy to use PolyBase and meets the goal.
When exporting data into an ORC file format, you might get Java out-of-memory errors when there are large text columns. To work around this limitation, export only a subset of the columns.
Reference:
https://docs.microsoft.com/en-us/azure/sql-data-warehouse/guidance-for-loading-data
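A minimal sketch of the preparation step itself, assuming the files are CSV with a long description column; the file names, column name, and byte budget are illustrative:

```python
# Minimal sketch: trim oversized fields so no serialized row exceeds
# PolyBase's 1 MB row limit. "description" and the budget are assumptions.
import csv

MAX_FIELD_BYTES = 900_000  # leave headroom under the 1 MB row limit

with open("input.csv", newline="", encoding="utf-8") as src, \
     open("prepared.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        desc = row["description"].encode("utf-8")
        if len(desc) > MAX_FIELD_BYTES:
            # Truncate on a byte budget, dropping any split multi-byte char.
            row["description"] = desc[:MAX_FIELD_BYTES].decode("utf-8", "ignore")
        writer.writerow(row)
```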

