CCA175 Practice Test



Problem Scenario 15 : You have been given the following MySQL database details as well as
other info.
user=retail_dba
password=cloudera
database=retail_db
jdbc URL = jdbc:mysql://quickstart:3306/retail_db
Please accomplish the following activities.
1. In the mysql departments table, insert the following record:
Insert into departments values(9999, '"Data Science"');
2. There is a downstream system which will process dumps of this file. However, that
system can only process files in which fields are enclosed in single quotes ('), fields are
separated by a dash (-), and lines are terminated by a colon (:).
3. If the data itself contains a double quote ("), it should be escaped by \.
4. Please import the departments table into a directory called departments_enclosedby such
that the file can be processed by the downstream system.






Answer: See the explanation for Step by Step Solution and configuration.
Explanation:
Solution :
Step 1 : Connect to the mysql database.
mysql --user=retail_dba --password=cloudera
show databases; use retail_db; show tables;
Insert the record:
Insert into departments values(9999, '"Data Science"');
select * from departments;
Step 2 : Import data as per the requirement.
sqoop import \
--connect jdbc:mysql://quickstart:3306/retail_db \
--username=retail_dba \
--password=cloudera \
--table departments \
--target-dir /user/cloudera/departments_enclosedby \
--enclosed-by \' --escaped-by \\ --fields-terminated-by '-' --lines-terminated-by :
Step 3 : Check the result.
hdfs dfs -cat /user/cloudera/departments_enclosedby/part*
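If the import worked as required, every field in the part files should be wrapped in single
quotes, fields should be joined by a dash, and each line should end with a colon; for
example, the record inserted in Step 1 should come out roughly as '9999'-'"Data Science"':
(the trailing colon being the line terminator).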

Problem Scenario 12 : You have been given the following MySQL database details as well as
other info.
user=retail_dba
password=cloudera
database=retail_db
jdbc URL = jdbc:mysql://quickstart:3306/retail_db
Please accomplish the following.
1. Create a table in retail_db with the following definition.
CREATE table departments_new (department_id int(11), department_name varchar(45),
created_date TIMESTAMP DEFAULT NOW());
2. Now insert records from the departments table into departments_new.
3. Now import data from the departments_new table to HDFS.
4. Insert the following 5 records in the departments_new table.
Insert into departments_new values(110, "Civil" , null);
Insert into departments_new values(111, "Mechanical" , null);
Insert into departments_new values(112, "Automobile" , null);
Insert into departments_new values(113, "Pharma" , null);
Insert into departments_new values(114, "Social Engineering" , null);
5. Now do the incremental import based on the created_date column.






Answer: See the explanation for Step by Step Solution and configuration.
Explanation:
Solution :
Step 1 : Log in to the mysql db.
mysql --user=retail_dba --password=cloudera
show databases;
use retail_db; show tables;
Step 2 : Create a table as given in the problem statement.
CREATE table departments_new (department_id int(11), department_name varchar(45),
created_date TIMESTAMP DEFAULT NOW());
show tables;
Step 3 : Insert records from the departments table into departments_new.
insert into departments_new select a.*, null from departments a;
Step 4 : Import data from the departments_new table to HDFS.
sqoop import \
--connect jdbc:mysql://quickstart:3306/retail_db \
--username=retail_dba \
--password=cloudera \
--table departments_new \
--target-dir /user/cloudera/departments_new \
--split-by department_id
Step 5 : Check the imported data.
hdfs dfs -cat /user/cloudera/departments_new/part*
Step 6 : Insert the following 5 records in the departments_new table.
Insert into departments_new values(110, "Civil" , null);
Insert into departments_new values(111, "Mechanical" , null);
Insert into departments_new values(112, "Automobile" , null);
Insert into departments_new values(113, "Pharma" , null);
Insert into departments_new values(114, "Social Engineering" , null);
commit;
Step 7 : Import the incremental data based on the created_date column.
sqoop import \
--connect jdbc:mysql://quickstart:3306/retail_db \
--username=retail_dba \
--password=cloudera \
--table departments_new \
--target-dir /user/cloudera/departments_new \
--append \
--check-column created_date \
--incremental lastmodified \
--split-by department_id \
--last-value "2016-01-30 12:07:37.0"
Step 8 : Check the imported value.
hdfs dfs -cat /user/cloudera/departments_new/part*
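Note : With --incremental lastmodified, Sqoop re-imports every row whose created_date is
newer than the supplied --last-value; combined with --append those rows are written as
additional part files, so rows updated after the first import can appear twice in the target
directory. If the downstream use case cannot tolerate such duplicates, Sqoop's
--merge-key <column> option can be used in place of --append to reconcile updated rows.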

Problem Scenario 32 : You have been given three files as below.
spark3/sparkdir1/file1.txt
spark3/sparkdir2/file2.txt
spark3/sparkdir3/file3.txt
Each file contains some text.
spark3/sparkdir1/file1.txt
Apache Hadoop is an open-source software framework written in Java for distributed
storage and distributed processing of very large data sets on computer clusters built from
commodity hardware. All the modules in Hadoop are designed with a fundamental
assumption that hardware failures are common and should be automatically handled by the
framework.
spark3/sparkdir2/file2.txt
The core of Apache Hadoop consists of a storage part known as Hadoop Distributed File
System (HDFS) and a processing part called MapReduce. Hadoop splits files into large
blocks and distributes them across nodes in a cluster. To process data, Hadoop transfers
packaged code for nodes to process in parallel based on the data that needs to be
processed.
spark3/sparkdir3/file3.txt
This approach takes advantage of data locality (nodes manipulating the data they have
access to) to allow the dataset to be processed faster and more efficiently than it would be
in a more conventional supercomputer architecture that relies on a parallel file system
where computation and data are distributed via high-speed networking.
Now write Spark code in Scala which will load all three files from HDFS and do the
word count, filtering out the following words. The result should be sorted by word count in
reverse order.
Filter words ("a","the","an", "as", "a","with","this","these","is","are","in", "for",
"to","and","The","of")
Also please make sure you load all three files as a single RDD (all three files must be
loaded using a single API call).
You have also been given the following codec:
import org.apache.hadoop.io.compress.GzipCodec
Please use the above codec to compress the file while saving it to HDFS.






Answer: See the explanation for Step by Step Solution and configuration.
Explanation:
Solution :
Step 1 : Create all three files in HDFS (we will do this using Hue). However, you can first
create them in the local filesystem and then upload to HDFS.
Step 2 : Load the content from all files.
val content = sc.textFile("spark3/sparkdir1/file1.txt,spark3/sparkdir2/file2.txt,spark3/sparkdir3/file3.txt") //Load the text files
Step 3 : Now split each line and create an RDD of words.
val flatContent = content.flatMap(word=>word.split(" "))
Step 4 : Remove the space after each word (trim it).
val trimmedContent = flatContent.map(word=>word.trim)
Step 5 : Create an RDD of all the words that need to be removed.
val removeRDD = sc.parallelize(List("a","the","an", "as",
"a","with","this","these","is","are","in", "for", "to","and","The","of"))
Step 6 : Filter the RDD, so it only keeps the words that are not present in removeRDD.
val filtered = trimmedContent.subtract(removeRDD)
Step 7 : Create a PairRDD, so we have (word,1) tuples.
val pairRDD = filtered.map(word => (word,1))
Step 8 : Now do the word count on the PairRDD.
val wordCount = pairRDD.reduceByKey(_ + _)
Step 9 : Now swap the PairRDD, because sortByKey orders by the key and we want to order
by the count.
val swapped = wordCount.map(item => item.swap)
Step 10 : Now sort the content in reverse order.
val sortedOutput = swapped.sortByKey(false)
Step 11 : Save the output as a text file.
sortedOutput.saveAsTextFile("spark3/result")
Step 12 : Save the compressed output.
import org.apache.hadoop.io.compress.GzipCodec
sortedOutput.saveAsTextFile("spark3/compressedresult", classOf[GzipCodec])

 

Problem Scenario 79 : You have been given a MySQL DB with the following details.
user=retail_dba
password=cloudera
database=retail_db
table=retail_db.orders
table=retail_db.order_items
jdbc URL = jdbc:mysql://quickstart:3306/retail_db
Columns of products table : (product_id | product_category_id | product_name |
product_description | product_price | product_image )
Please accomplish the following activities.
1. Copy the "retail_db.products" table to HDFS in a directory p93_products
2. Filter out all the empty prices
3. Sort all the products based on price in both ascending as well as descending order.
4. Sort all the products based on price as well as product_id in descending order.
5. Use the below functions to do data ordering or ranking and fetch the top 10 elements:
top(), takeOrdered(), sortByKey()






Answer: See the explanation for Step by Step Solution and configuration.
Explanation:
Solution :
Step 1 : Import the single table.
sqoop import --connect jdbc:mysql://quickstart:3306/retail_db --username=retail_dba --password=cloudera --table=products --target-dir=p93_products -m 1
Note : Please check that you don't have a space before or after the '=' sign. Sqoop uses the
MapReduce framework to copy data from the RDBMS to HDFS.
Step 2 : Read the data from one of the partitions created using the above command.
hadoop fs -cat p93_products/part-m-00000
Step 3 : Load this directory as an RDD using Spark and Python (open a pyspark terminal and
do the following).
productsRDD = sc.textFile("p93_products")
Step 4 : Filter out empty prices, if they exist.
#filter out empty price lines
nonempty_lines = productsRDD.filter(lambda x: len(x.split(",")[4]) > 0)
Step 5 : Now sort the data based on product_price in ascending order.
sortedPriceProducts = nonempty_lines.map(lambda line: (float(line.split(",")[4]), line.split(",")[2])).sortByKey()
for line in sortedPriceProducts.collect(): print(line)
Step 6 : Now sort the data based on product_price in descending order.
sortedPriceProducts = nonempty_lines.map(lambda line: (float(line.split(",")[4]), line.split(",")[2])).sortByKey(False)
for line in sortedPriceProducts.collect(): print(line)
Step 7 : Get the highest priced product's name.
sortedPriceProducts = nonempty_lines.map(lambda line: (float(line.split(",")[4]), line.split(",")[2])).sortByKey(False).take(1)
print(sortedPriceProducts)
Step 8 : Now sort the data based on product_price as well as product_id in descending order.
#Dont forget to cast the string
#Tuple as key ((price,id),name)
sortedPriceProducts = nonempty_lines.map(lambda line: ((float(line.split(",")[4]), int(line.split(",")[0])), line.split(",")[2])).sortByKey(False).take(10)
print(sortedPriceProducts)
Step 9 : Now sort the data based on product_price as well as product_id in descending order,
using the top() function.
#Dont forget to cast the string
#Tuple as key ((price,id),name)
sortedPriceProducts = nonempty_lines.map(lambda line: ((float(line.split(",")[4]), int(line.split(",")[0])), line.split(",")[2])).top(10)
print(sortedPriceProducts)
Step 10 : Now sort the data with product_price ascending and product_id ascending,
using the takeOrdered() function.
#Dont forget to cast the string
#Tuple as key ((price,id),name)
sortedPriceProducts = nonempty_lines.map(lambda line: ((float(line.split(",")[4]), int(line.split(",")[0])), line.split(",")[2])).takeOrdered(10, lambda tuple: (tuple[0][0], tuple[0][1]))
Step 11 : Now sort the data with product_price descending and product_id ascending,
using the takeOrdered() function.
#Dont forget to cast the string
#Tuple as key ((price,id),name)
#Using a minus (-) on the key gives descending ordering, but only for numeric values.
sortedPriceProducts = nonempty_lines.map(lambda line: ((float(line.split(",")[4]), int(line.split(",")[0])), line.split(",")[2])).takeOrdered(10, lambda tuple: (-tuple[0][0], tuple[0][1]))
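For reference, Steps 8 to 11 can be run back to back in a single pyspark session. The
following is a minimal sketch assuming the p93_products import from Step 1 and the column
positions used above (price at index 4, id at index 0, name at index 2); the variable name
keyed and the 10-element limits are only illustrative.
# Minimal sketch: consolidate Steps 8-11 around one keyed RDD.
productsRDD = sc.textFile("p93_products")
nonempty_lines = productsRDD.filter(lambda x: len(x.split(",")[4]) > 0)
# Key every record by (price, id) so both columns take part in the ordering.
keyed = nonempty_lines.map(lambda line: ((float(line.split(",")[4]), int(line.split(",")[0])), line.split(",")[2]))
# Price and id both descending, via sortByKey or top.
print(keyed.sortByKey(False).take(10))
print(keyed.top(10))
# Price and id both ascending, via takeOrdered.
print(keyed.takeOrdered(10, lambda t: (t[0][0], t[0][1])))
# Price descending, id ascending: negate the numeric price inside the key function.
print(keyed.takeOrdered(10, lambda t: (-t[0][0], t[0][1])))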

Problem Scenario 80 : You have been given a MySQL DB with the following details.
user=retail_dba
password=cloudera
database=retail_db
table=retail_db.products
jdbc URL = jdbc:mysql://quickstart:3306/retail_db
Columns of products table : (product_id | product_category_id | product_name |
product_description | product_price | product_image )
Please accomplish the following activities.
1. Copy the "retail_db.products" table to HDFS in a directory p93_products
2. Now sort the products data by product price per category; use the product_category_id
column to group by category.






Answer: See the explanation for Step by Step Solution and configuration.
Explanation:
Solution :
Step 1 : Import the single table.
sqoop import --connect jdbc:mysql://quickstart:3306/retail_db --username=retail_dba --password=cloudera --table=products --target-dir=p93_products
Note : Please check that you don't have a space before or after the '=' sign. Sqoop uses the
MapReduce framework to copy data from the RDBMS to HDFS.
Step 2 : Read the data from one of the partitions created using the above command.
hadoop fs -cat p93_products/part-m-00000
Step 3 : Load this directory as an RDD using Spark and Python (open a pyspark terminal and
do the following).
productsRDD = sc.textFile("p93_products")
Step 4 : Filter out empty prices, if they exist.
#filter out empty price lines
nonempty_lines = productsRDD.filter(lambda x: len(x.split(",")[4]) > 0)
Step 5 : Create a data set like (category_id, (id, name, price)).
mappedRDD = nonempty_lines.map(lambda line: (line.split(",")[1], (line.split(",")[0],
line.split(",")[2], float(line.split(",")[4]))))
for line in mappedRDD.collect(): print(line)
Step 6 : Now group all the records by category_id, which is the key of mappedRDD; it will
produce output like (category_id, iterable of all records for that category_id).
groupByCategroyld = mappedRDD.groupByKey()
for line in groupByCategroyld.collect(): print(line)
Step 7 : Now sort the data in each category based on price in ascending order.
# sorted is a function to sort an iterable; we can also specify the key on which we want
# to sort. In this case, it is the price on which the data needs to be sorted.
groupByCategroyld.map(lambda tuple: sorted(tuple[1], key=lambda tupleValue:
tupleValue[2])).take(5)
Step 8 : Now sort the data in each category based on price in descending order.
# sorted is a function to sort an iterable; we can also specify the key on which we want
# to sort. In this case, it is the price on which the data needs to be sorted.
groupByCategroyld.map(lambda tuple: sorted(tuple[1], key=lambda tupleValue:
tupleValue[2], reverse=True)).take(5)
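Note that the map() in Steps 7 and 8 returns only the sorted iterable and drops the category
key. If you want to keep the (category_id, sorted products) pairing, mapValues is a small
variation; the sketch below reuses the groupByCategroyld RDD from Step 6, and the name
sortedPerCategory is only illustrative.
# Sketch: keep the category id alongside its price-sorted (descending) products.
sortedPerCategory = groupByCategroyld.mapValues(lambda products: sorted(products, key=lambda p: p[2], reverse=True))
for line in sortedPerCategory.take(5): print(line)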

