
Professional-Data-Engineer Practice Test


Page 6 out of 23 Pages

Topic 5: Practice Questions

If you're running a performance test that depends upon Cloud Bigtable, all of the choices below except one are recommended steps. Which is NOT a recommended step to follow?

A. Do not use a production instance.
B. Run your test for at least 10 minutes.
C. Before you test, run a heavy pre-test for several minutes.
D. Use at least 300 GB of data.

Answer: A. Do not use a production instance.

Explanation
If you're running a performance test that depends upon Cloud Bigtable, follow these steps as you plan and execute your test:
- Use a production instance. A development instance will not give you an accurate sense of how a production instance performs under load.
- Use at least 300 GB of data. Cloud Bigtable performs best with 1 TB or more of data, but 300 GB is enough to provide reasonable results in a performance test on a 3-node cluster. On larger clusters, use 100 GB of data per node.
- Before you test, run a heavy pre-test for several minutes. This gives Cloud Bigtable a chance to balance data across your nodes based on the access patterns it observes.
- Run your test for at least 10 minutes. This lets Cloud Bigtable further optimize your data, and it helps ensure that you test reads from disk as well as cached reads from memory.
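The data-sizing rule above (300 GB minimum on a 3-node cluster, 100 GB per node on larger clusters) can be sketched as a small helper. This is a hypothetical function, not part of any Google client library:

```python
def recommended_test_data_gb(node_count: int) -> int:
    """Minimum data size (GB) for a meaningful Bigtable performance test.

    Hypothetical helper based on the guidance above: 300 GB is enough on a
    cluster of up to 3 nodes; larger clusters should use 100 GB per node.
    """
    if node_count <= 3:
        return 300
    return 100 * node_count

# A 3-node cluster needs at least 300 GB; a 10-node cluster needs 1,000 GB.
print(recommended_test_data_gb(3), recommended_test_data_gb(10))
```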

All Google Cloud Bigtable client requests go through a front-end server ______ they are sent to a Cloud Bigtable node.

A. before
B. after
C. only if
D. once

Answer: A. before

Explanation
In the Cloud Bigtable architecture, all client requests go through a front-end server before they are sent to a Cloud Bigtable node. The nodes are organized into a Cloud Bigtable cluster, which belongs to a Cloud Bigtable instance, a container for the cluster. Each node in the cluster handles a subset of the requests to the cluster. Adding nodes to a cluster increases the number of simultaneous requests the cluster can handle, as well as the maximum throughput for the entire cluster.
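The "front-end first, then a node" flow can be illustrated with a toy model. This is only a sketch: real Cloud Bigtable assigns contiguous row-key ranges (tablets) to nodes rather than hashing, and the class below is invented for illustration.

```python
import hashlib

class FrontEndServer:
    """Toy model (not the real Bigtable implementation): every client
    request passes through the front-end, which forwards it to the node
    responsible for the request's row key."""

    def __init__(self, node_count: int):
        self.node_count = node_count

    def route(self, row_key: str) -> int:
        # Deterministically map each row key to one node, so each node
        # handles only a subset of the requests. Adding nodes spreads
        # the same keyspace over more workers, raising total throughput.
        digest = hashlib.sha256(row_key.encode()).hexdigest()
        return int(digest, 16) % self.node_count

fe = FrontEndServer(node_count=3)
print(fe.route("user#123"))  # one of nodes 0, 1, or 2
```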

Which of the following is NOT true about Dataflow pipelines? (Choose one.)

A. Pipelines are a set of operations
B. Pipelines represent a data processing job
C. Pipelines represent a directed graph of steps
D. Pipelines can share data between instances

Answer: D. Pipelines can share data between instances

Explanation
The data and transforms in a pipeline are unique to, and owned by, that pipeline. While your program can create multiple pipelines, pipelines cannot share data or transforms.
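The ownership rule can be modeled with a toy `Pipeline` class (invented here for illustration; it is not the Apache Beam API): each pipeline instance holds its own collections, and nothing created in one pipeline is visible from another.

```python
class Pipeline:
    """Toy model of the rule above: a pipeline owns its data and
    transforms; two pipeline instances share nothing."""

    def __init__(self):
        self._collections = {}  # data owned exclusively by this pipeline

    def create(self, name, elements):
        self._collections[name] = list(elements)
        return self._collections[name]

p1, p2 = Pipeline(), Pipeline()
p1.create("words", ["a", "b"])

# p2 has no view of p1's data; the collections live inside p1 only.
print("words" in p2._collections)  # False
```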

What Dataflow concept determines when a window's contents should be output, based on certain criteria being met?

A. Sessions
B. OutputCriteria
C. Windows
D. Triggers

Answer: D. Triggers

Explanation
Triggers control when the elements for a specific key and window are output. As elements arrive, they are placed into one or more windows by a Window transform and its associated WindowFn, and then passed to the associated Trigger to determine whether the window's contents should be output.
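The window-then-trigger flow can be sketched with a simplified model: fixed 10-second windows (a stand-in for a WindowFn) and a toy element-count trigger that fires a window once three elements have arrived. The window size and count are arbitrary choices for this example, not Dataflow defaults:

```python
from collections import defaultdict

WINDOW_SIZE = 10   # seconds; fixed windows, a simplification of WindowFn
TRIGGER_COUNT = 3  # toy trigger: fire a window once 3 elements arrive

windows = defaultdict(list)
fired = []

def process(element, event_time):
    # Step 1: the windowing step assigns the element to a window
    # based on its event time.
    window_start = (event_time // WINDOW_SIZE) * WINDOW_SIZE
    pane = windows[window_start]
    pane.append(element)
    # Step 2: the trigger decides when that window's contents are output.
    if len(pane) == TRIGGER_COUNT:
        fired.append((window_start, list(pane)))

for t, e in [(1, "a"), (2, "b"), (12, "c"), (3, "d"), (14, "e")]:
    process(e, t)

print(fired)  # the [0, 10) window fires once its third element arrives
```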

What is the HBase shell for Cloud Bigtable?

A. The HBase shell is a GUI-based interface that performs administrative tasks, such as creating and deleting tables.
B. The HBase shell is a command-line tool that performs administrative tasks, such as creating and deleting tables.
C. The HBase shell is a hypervisor-based shell that performs administrative tasks, such as creating and deleting new virtualized instances.
D. The HBase shell is a command-line tool that performs only user account management functions to grant access to Cloud Bigtable instances.

Answer: B. The HBase shell is a command-line tool that performs administrative tasks, such as creating and deleting tables.

Explanation
The HBase shell is a command-line tool that performs administrative tasks, such as creating and deleting tables. The Cloud Bigtable HBase client for Java makes it possible to use the HBase shell to connect to Cloud Bigtable.
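For example, a short HBase shell session might look like the following. The table and column-family names are placeholders chosen for this sketch:

```
create 'my-table', 'cf1'                       # create a table with one column family
list                                           # list tables
put 'my-table', 'r1', 'cf1:c1', 'test-value'   # write a single cell
scan 'my-table'                                # read rows back
disable 'my-table'                             # a table must be disabled...
drop 'my-table'                                # ...before it can be dropped
```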

