linkedin-skill-assessments-quizzes

Hadoop

Q1. Partitioner controls the partitioning of what data?
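
For context, a custom Partitioner in the Hadoop 2 (org.apache.hadoop.mapreduce) API looks roughly like the sketch below; HashKeyPartitioner is an illustrative name, and the hash-and-modulo logic mirrors the default HashPartitioner.

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Routes each intermediate (key, value) pair emitted by the mappers
// to one of the reduce tasks, based on the key.
public class HashKeyPartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numPartitions) {
        // Mask off the sign bit so the modulo result is never negative.
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}
```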

Q2. SQL Windowing functions are implemented in Hive using which keywords?

Q3. Rather than adding a Secondary Sort to a slow Reduce job, it is Hadoop best practice to perform which optimization?

Q5. MapReduce jobs can be written in which language?

Q6. To perform local aggregation of the intermediate outputs, MapReduce users can optionally specify which object?
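
A minimal driver sketch showing where local aggregation of intermediate map output is plugged in; TokenizingMapper and IntSumReducer are placeholder names for the usual word-count mapper and reducer classes.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(TokenizingMapper.class);
        // The Combiner runs on each map node and pre-aggregates the
        // intermediate map output before it is shuffled to the reducers.
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```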

Q7. To verify job status, look for the value ___ in the ___.

Q8. Which line of code implements a Reducer method in MapReduce 2.0?
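
A typical Reducer in the MapReduce 2.0 (new) API overrides reduce() roughly as follows; this is the standard word-count style sum reducer used as a sketch.

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Sum all counts that were emitted for this key.
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}
```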

Q9. To get the total number of mapped input records in a map job task, you should review the value of which counter?
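
A small sketch of reading a built-in task counter from the driver after the job has finished; CounterInspector is an illustrative helper class, not part of the Hadoop API.

```java
import org.apache.hadoop.mapreduce.Counter;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.TaskCounter;

public class CounterInspector {
    // Prints the built-in counter that tracks how many input records
    // the map tasks consumed; call this after job.waitForCompletion(true).
    static void printMapInputRecords(Job job) throws Exception {
        Counter c = job.getCounters().findCounter(TaskCounter.MAP_INPUT_RECORDS);
        System.out.println(c.getDisplayName() + " = " + c.getValue());
    }
}
```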

Q10. Hadoop Core supports which CAP capabilities?

Q11. What are the primary phases of a Reducer?

Q12. To set up Hadoop workflow with synchronization of data between jobs that process tasks both on disk and in memory, use the ___ service, which is ___.

Q13. For high availability, use multiple nodes of which type?

Q14. DataNode supports which type of drives?

Q15. Which method is used to implement Spark jobs?

Q16. In a MapReduce job, where does the map() function run?

Q17. To reference a master file for lookups during Mapping, what type of cache should be used?
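
A minimal sketch of consuming cached files inside a Mapper; LookupMapper and the driver-side job.addCacheFile(...) call mentioned in the comment are illustrative.

```java
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class LookupMapper extends Mapper<LongWritable, Text, Text, Text> {
    private URI[] cacheFiles;

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        // Files registered in the driver with job.addCacheFile(new URI("..."))
        // are copied to every task node and listed here before map() runs.
        cacheFiles = context.getCacheFiles();
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // ... join each input record against the cached master file ...
    }
}
```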

Q18. Skip bad records provides an option where a certain set of bad input records can be skipped when processing what type of data?

Q19. Which command imports data to Hadoop from a MySQL database?

Q20. In what form is Reducer output presented?

Q21. Which library should be used to unit test MapReduce code?
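
A sketch of a mapper unit test written with MRUnit's MapDriver, assuming MRUnit and JUnit are on the test classpath and TokenizingMapper is the (hypothetical) mapper under test.

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mrunit.mapreduce.MapDriver;
import org.junit.Before;
import org.junit.Test;

public class TokenizingMapperTest {
    private MapDriver<LongWritable, Text, Text, IntWritable> mapDriver;

    @Before
    public void setUp() {
        // Wrap the mapper under test in an MRUnit driver.
        mapDriver = MapDriver.newMapDriver(new TokenizingMapper());
    }

    @Test
    public void emitsOneCountPerWord() throws Exception {
        // Feed one input record and assert the expected (key, value) output.
        mapDriver.withInput(new LongWritable(0), new Text("hadoop"))
                 .withOutput(new Text("hadoop"), new IntWritable(1))
                 .runTest();
    }
}
```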

Q22. If you started the NameNode, then which kind of user must you be?

Q23. State __ between the JVMs in a MapReduce job.

Q24. To create a MapReduce job, what should be coded first?

Q25. To connect Hadoop to AWS S3, which client should you use?

Q26. HBase works with which type of schema enforcement?

Q27. HDFS files are of what type?

Q28. A distributed cache file path can originate from what location?

Q29. Which library should you use to perform ETL-type MapReduce jobs?

Q30. What is the output of the Reducer?

The map function processes a single key-value pair and emits a set of intermediate key-value pairs; the reduce function processes the values grouped by the same key and emits another set of key-value pairs as output.
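
A minimal word-count style Mapper illustrating that flow, emitting one (word, 1) pair per token of each input line.

```java
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// For each input line, emit one (word, 1) pair; the framework then groups
// the pairs by word and hands each group of values to the reducer.
public class TokenizingMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer tokens = new StringTokenizer(value.toString());
        while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            context.write(word, ONE);
        }
    }
}
```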

Q31. To optimize a Mapper, what should you perform first?