Capital Analytics, a digital marketing research firm, plans to move its research archives to a Hadoop cluster. The cluster is configured with 1 NameNode and 3 DataNodes.

With a replication factor of 3 and a block size of 128 MB, what will happen if we store a 150 MB file called '2010_archive'?

(Select all acceptable answers.)

The '2010_archive' file will be split into 2 blocks.
Metadata for the '2010_archive' file will be stored on the DataNodes.
Metadata information at the NameNode will remain unchanged if we increase the replication factor from 3 to 5.
Failure of any one DataNode will result in the loss of data from the '2010_archive' file.
All requests to read/write files to the cluster will be received by the given single NameNode.
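The block-splitting arithmetic behind the first option can be sketched in Python. This is a minimal illustration of how HDFS divides a file into blocks and how much raw storage replication consumes; the helper function is hypothetical and not part of any Hadoop API.

```python
import math

def hdfs_block_layout(file_size_mb, block_size_mb=128, replication=3):
    """Sketch of how HDFS splits a file into blocks and replicates them.

    Returns (number of blocks, list of block sizes in MB,
    total raw storage used across the cluster in MB).
    """
    num_blocks = math.ceil(file_size_mb / block_size_mb)
    # Every block is full-size except possibly the last, which
    # occupies only the remaining bytes on disk.
    sizes = [block_size_mb] * (num_blocks - 1)
    sizes.append(file_size_mb - block_size_mb * (num_blocks - 1))
    total_stored = file_size_mb * replication
    return num_blocks, sizes, total_stored

blocks, sizes, total = hdfs_block_layout(150)
print(blocks)  # 2 blocks: one of 128 MB and one of 22 MB
print(sizes)   # [128, 22]
print(total)   # 450 MB of raw storage across the cluster
```

With 3 DataNodes and a replication factor of 3, each of the 2 blocks has a copy on every DataNode, which is why losing a single DataNode does not lose any data from the file.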

Tags: Hadoop, Hadoop File Blocks, Hadoop Replication Factor

