CCA175 Practice Test



Problem Scenario 10 : You have been given the following MySQL database details as well as
other info.
user=retail_dba
password=cloudera
database=retail_db
jdbc URL = jdbc:mysql://quickstart:3306/retail_db
Please accomplish the following.
1. Create a database named hadoopexam and then create a table named departments in
it, with the following fields: department_id int, department_name string
e.g. the location should be
hdfs://quickstart.cloudera:8020/user/hive/warehouse/hadoopexam.db/departments
2. Import data into the existing table created above, from retail_db.departments into the hive
table hadoopexam.departments.
3. Import data into a non-existing table, meaning that while importing you create a hive table
named hadoopexam.departments_new






Answer: See the explanation for Step by Step Solution and configuration.
Explanation:
Solution :
Step 1 : Go to the hive interface and create the database.
hive
create database hadoopexam;
Step 2 : Use the database created in the above step.
use hadoopexam;
show tables;
Step 3 : Create the table in it.
create table departments (department_id int, department_name string);
show tables;
desc departments;
desc formatted departments;
Step 4 : Check that the following directory does not exist, otherwise the import will give an error.
hdfs dfs -ls /user/cloudera/departments
If the directory already exists, make sure it is not needed and then delete it.
This is the staging directory where Sqoop stores the intermediate data before pushing it into the
hive table.
hadoop fs -rm -R departments
Step 5 : Now import data into the existing table.
sqoop import \
--connect jdbc:mysql://quickstart:3306/retail_db \
--username retail_dba \
--password cloudera \
--table departments \
--hive-home /user/hive/warehouse \
--hive-import \
--hive-overwrite \
--hive-table hadoopexam.departments
Step 6 : Check whether the data has been loaded or not.
hive
use hadoopexam;
show tables;
select * from departments;
desc formatted departments;
Step 7 : Import data into a non-existing table in hive and create the table while importing.
sqoop import \
--connect jdbc:mysql://quickstart:3306/retail_db \
--username retail_dba \
--password cloudera \
--table departments \
--hive-home /user/hive/warehouse \
--hive-import \
--hive-overwrite \
--hive-table hadoopexam.departments_new \
--create-hive-table
Step 8 : Check whether the data has been loaded or not.
hive
use hadoopexam;
show tables;
select * from departments_new;
desc formatted departments_new;
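As an optional cross-check from the Spark shell (not required by the scenario), the imported Hive tables can also be queried through a HiveContext. This is only a sketch and assumes a Hive-enabled Spark build on the same quickstart cluster:
// Optional verification of the Sqoop/Hive import from spark-shell
import org.apache.spark.sql.hive.HiveContext
val hiveContext = new HiveContext(sc)
hiveContext.sql("SELECT * FROM hadoopexam.departments").show()
hiveContext.sql("SELECT * FROM hadoopexam.departments_new").show()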

Problem Scenario 28 : You need to implement a near real time solution for collecting
information when it is submitted in files with the below data.
echo "IBM,100,20160104" >> /tmp/spooldir2/.bb.txt
echo "IBM,103,20160105" >> /tmp/spooldir2/.bb.txt
mv /tmp/spooldir2/.bb.txt /tmp/spooldir2/bb.txt
After a few mins
echo "IBM,100.2,20160104" >> /tmp/spooldir2/.dr.txt
echo "IBM,103.1,20160105" >> /tmp/spooldir2/.dr.txt
mv /tmp/spooldir2/.dr.txt /tmp/spooldir2/dr.txt
You have been given the below directory location (if not available then create it): /tmp/spooldir2
As soon as a file is committed in this directory, it needs to be available in hdfs in
/tmp/flume/primary as well as in the /tmp/flume/secondary location. However, note that
/tmp/flume/secondary is optional: if a transaction that writes into this directory fails, it need
not be rolled back.
Write a flume configuration file named flume8.conf and use it to load data into hdfs with the
following additional properties.
1. Spool the /tmp/spooldir2 directory
2. The file prefix in hdfs should be events
3. The file suffix should be .log
4. If a file is not committed and is in use, then it should have _ as a prefix.
5. Data should be written as text to hdfs






Answer: See the explanation for Step by Step Solution and configuration.
Explanation:
Solution :
Step 1 : Create the directory: mkdir /tmp/spooldir2
Step 2 : Create the flume configuration file, with the below configuration for source, sinks and
channels, and save it as flume8.conf.
agent1.sources = source1
agent1.sinks = sink1a sink1b
agent1.channels = channel1a channel1b
agent1.sources.source1.channels = channel1a channel1b
agent1.sources.source1.selector.type = replicating
agent1.sources.source1.selector.optional = channel1b
agent1.sinks.sink1a.channel = channel1a
agent1.sinks.sink1b.channel = channel1b
agent1.sources.source1.type = spooldir
agent1.sources.source1.spoolDir = /tmp/spooldir2
agent1.sinks.sink1a.type = hdfs
agent1.sinks.sink1a.hdfs.path = /tmp/flume/primary
agent1.sinks.sink1a.hdfs.filePrefix = events
agent1.sinks.sink1a.hdfs.fileSuffix = .log
agent1.sinks.sink1a.hdfs.inUsePrefix = _
agent1.sinks.sink1a.hdfs.fileType = DataStream
agent1.sinks.sink1b.type = hdfs
agent1.sinks.sink1b.hdfs.path = /tmp/flume/secondary
agent1.sinks.sink1b.hdfs.filePrefix = events
agent1.sinks.sink1b.hdfs.fileSuffix = .log
agent1.sinks.sink1b.hdfs.inUsePrefix = _
agent1.sinks.sink1b.hdfs.fileType = DataStream
agent1.channels.channel1a.type = file
agent1.channels.channel1b.type = memory
Step 3 : Run the below command, which will use this configuration file and append data in hdfs.
Start the flume service:
flume-ng agent --conf /home/cloudera/flumeconf --conf-file
/home/cloudera/flumeconf/flume8.conf --name agent1
Step 4 : Open another terminal and create a file in /tmp/spooldir2/
echo "IBM,100,20160104" >> /tmp/spooldir2/.bb.txt
echo "IBM,103,20160105" >> /tmp/spooldir2/.bb.txt
mv /tmp/spooldir2/.bb.txt /tmp/spooldir2/bb.txt
After a few mins
echo "IBM,100.2,20160104" >> /tmp/spooldir2/.dr.txt
echo "IBM,103.1,20160105" >> /tmp/spooldir2/.dr.txt
mv /tmp/spooldir2/.dr.txt /tmp/spooldir2/dr.txt
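Once a few files have rolled into hdfs, the delivered events can also be inspected from the Spark shell. The lines below are only an illustrative check, assuming the sink has written plain CSV text under /tmp/flume/primary:
// Illustrative check: read the Flume output and parse (symbol, price, date)
val events = sc.textFile("/tmp/flume/primary/*")
val parsed = events.map(_.split(",")).map(f => (f(0), f(1).toDouble, f(2)))
parsed.collect().foreach(println)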

Problem Scenario 95 : You have to run your Spark application on yarn, with each executor's
maximum heap size set to 512MB, the number of processor cores to allocate on each
executor set to 1, and your main application requiring three values as input arguments: V1
V2 V3.
Please replace XXX, YYY, ZZZ
./bin/spark-submit --class com.hadoopexam.MyTask --master yarn-cluster --num-executors 3
--driver-memory 512m XXX YYY lib/hadoopexam.jar ZZZ






Answer: See the explanation for Step by Step Solution and configuration.
Explanation:
Solution
XXX : --executor-memory 512m
YYY : --executor-cores 1
ZZZ : V1 V2 V3
Notes : spark-submit on yarn options
--archives : Comma-separated list of archives to be extracted into the working directory of
each executor. The path must be globally visible inside your cluster; see Advanced
Dependency Management.
--executor-cores : Number of processor cores to allocate on each executor. Alternatively, you
can use the spark.executor.cores property.
--executor-memory : Maximum heap size to allocate to each executor. Alternatively, you can
use the spark.executor.memory property.
--num-executors : Total number of YARN containers to allocate for this application.
Alternatively, you can use the spark.executor.instances property.
--queue : YARN queue to submit to. For more information, see Assigning Applications and
Queries to Resource Pools. Default: default.
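As the notes mention, the same executor settings can also be supplied through the corresponding spark.* properties instead of command-line flags. A minimal Scala sketch for illustration only (the application name is reused from the question; this is not part of the required answer):
// Equivalent resource settings via SparkConf properties
import org.apache.spark.{SparkConf, SparkContext}
val conf = new SparkConf()
  .setAppName("com.hadoopexam.MyTask")
  .set("spark.executor.memory", "512m")    // same as --executor-memory 512m
  .set("spark.executor.cores", "1")        // same as --executor-cores 1
  .set("spark.executor.instances", "3")    // same as --num-executors 3
val sc = new SparkContext(conf)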

Problem Scenario 50 : You have been given the below code snippet (calculating an average
score), with intermediate output.
type ScoreCollector = (Int, Double)
type PersonScores = (String, (Int, Double))
val initialScores = Array(("Fred", 88.0), ("Fred", 95.0), ("Fred", 91.0), ("Wilma", 93.0),
("Wilma", 95.0), ("Wilma", 98.0))
val wilmaAndFredScores = sc.parallelize(initialScores).cache()
val scores = wilmaAndFredScores.combineByKey(createScoreCombiner, scoreCombiner,
scoreMerger)
val averagingFunction = (personScore: PersonScores) => {
  val (name, (numberScores, totalScore)) = personScore
  (name, totalScore / numberScores)
}
val averageScores = scores.collectAsMap().map(averagingFunction)
Expected output: averageScores: scala.collection.Map[String,Double] = Map(Fred ->
91.33333333333333, Wilma -> 95.33333333333333)
Define all three required functions, which are inputs for the combineByKey method, i.e.
(createScoreCombiner, scoreCombiner, scoreMerger), and help us produce the required
results.






Answer: See the explanation for Step by Step Solution and configuration.
Explanation:
Solution :
val createScoreCombiner = (score: Double) => (1, score)
val scoreCombiner = (collector: ScoreCollector, score: Double) => {
  val (numberScores, totalScore) = collector
  (numberScores + 1, totalScore + score)
}
val scoreMerger = (collector1: ScoreCollector, collector2: ScoreCollector) => {
  val (numScores1, totalScore1) = collector1
  val (numScores2, totalScore2) = collector2
  (numScores1 + numScores2, totalScore1 + totalScore2)
}
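The result can be cross-checked with a simpler aggregation that avoids combineByKey; the following sketch is only for verification and is not part of the required answer:
// Cross-check: sum and count per key with reduceByKey, then divide
val check = wilmaAndFredScores
  .mapValues(score => (1, score))
  .reduceByKey((a, b) => (a._1 + b._1, a._2 + b._2))
  .mapValues { case (count, total) => total / count }
check.collectAsMap()  // Map(Fred -> 91.33..., Wilma -> 95.33...)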

Problem Scenario 83 : In continuation of the previous question, please accomplish the following
activities.
1. Select all the records with quantity >= 5000 and name starting with 'Pen'
2. Select all the records with quantity >= 5000, price less than 1.24 and name starting with
'Pen'
3. Select all the records which do not have quantity >= 5000 and name starting with 'Pen'
4. Select all the products whose name is 'Pen Red' or 'Pen Black'
5. Select all the products which have price BETWEEN 1.0 AND 2.0 AND quantity
BETWEEN 1000 AND 2000.






Answer: See the explanation for Step by Step Solution and configuration.
Explanation:
Solution :
Step 1 : Select all the records with quantity >= 5000 and name starting with 'Pen'
val results = sqlContext.sql("""SELECT * FROM products WHERE quantity >= 5000 AND
name LIKE 'Pen %' """)
results.show()
Step 2 : Select all the records with quantity >= 5000, price less than 1.24 and name
starting with 'Pen'
val results = sqlContext.sql("""SELECT * FROM products WHERE quantity >= 5000 AND
price < 1.24 AND name LIKE 'Pen %' """)
results.show()
Step 3 : Select all the records which do not have quantity >= 5000 and name starting
with 'Pen'
val results = sqlContext.sql("""SELECT * FROM products WHERE NOT (quantity >= 5000
AND name LIKE 'Pen %') """)
results.show()
Step 4 : Select all the products whose name is 'Pen Red' or 'Pen Black'
val results = sqlContext.sql("""SELECT * FROM products WHERE name IN ('Pen Red',
'Pen Black') """)
results.show()
Step 5 : Select all the products which have price BETWEEN 1.0 AND 2.0 AND quantity
BETWEEN 1000 AND 2000.
val results = sqlContext.sql("""SELECT * FROM products WHERE (price BETWEEN 1.0
AND 2.0) AND (quantity BETWEEN 1000 AND 2000) """)
results.show()
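The same filters can also be written with the DataFrame column API instead of SQL strings. A hedged sketch for steps 1, 3 and 4, assuming the products table registered in the previous scenario is still available to sqlContext:
// DataFrame-API equivalents (illustrative only)
import sqlContext.implicits._
val productDF = sqlContext.table("products")
productDF.filter($"quantity" >= 5000 && $"name".startsWith("Pen")).show()
productDF.filter(!($"quantity" >= 5000 && $"name".startsWith("Pen"))).show()
productDF.filter($"name".isin("Pen Red", "Pen Black")).show()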

