Overview
Your hiring company uses our platform to design its HackerRank Tests and assessments. After you complete and submit a HackerRank Test, the company that conducted the test owns your report and determines the scores. Sharing your results with you is at the company's discretion; HackerRank does not send test reports to candidates.
Your evaluators may use manual or automatic evaluation methods to assess your answers and assign relevant scores. Automatic evaluation is typically used for Coding, Multiple Choice, and Sentence Completion (Fill-in-the-Blanks) questions: your answer is compared against a preset answer to check for correctness, and a full score, partial score, or no score is assigned based on the comparison. Evaluators may also review your answers manually. Questions that require you to define flow diagrams or that involve subjective answers are typically reviewed and scored manually by the evaluators.
As a result, your overall Test score can be the sum of automatically and manually assigned scores.
This article helps you understand the general practices used by your test setter to evaluate your HackerRank Test answers.
Coding Questions Evaluation
Typically, your solutions to coding Questions, including Questions in various programming languages and database programming Questions, are evaluated automatically. Scores are assigned based on the number of test cases for which your solution returns the expected output, and you may be assigned a specific score for every successful test case.
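To illustrate how test-case-based scoring adds up, here is a minimal sketch, assuming one pass/fail result and one test-setter-defined score per test case (the function and variable names are hypothetical, not HackerRank's actual implementation):

```python
def score_coding_question(test_case_passed, test_case_scores):
    """Sum the scores of the test cases whose output matched the expected output.

    test_case_passed: list of booleans, one per test case (True if it passed).
    test_case_scores: per-test-case scores assigned by the test setter.
    """
    return sum(
        score
        for passed, score in zip(test_case_passed, test_case_scores)
        if passed
    )

# Example: 3 of 4 test cases pass and each is worth 5 points -> 15 points.
print(score_coding_question([True, True, False, True], [5, 5, 5, 5]))
```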
Refer to the How are my coding Questions graded or scored? topic for more information.
General Evaluation Methods
The following table lists the general methods used to evaluate answers for different types of Questions in HackerRank's assessments:
| Question Type | Evaluation Method |
| --- | --- |
| Coding | Automatic evaluation: Your code is run against each test case, and its output is compared with the expected output to determine whether the test case passed or failed. Your total score is the sum of the scores of all passed test cases. |
| Database Engineer | Automatic evaluation: The data retrieved by your query is compared with the correct answer. If they match, you receive the full score; if they do not match, you receive a zero score. |
| HTML/CSS/JavaScript | Manual evaluation: The evaluator reviews the final webpage rendered by your code to verify how the page appears and responds, and manually assigns a score based on that review. |
| DevOps | Automatic evaluation: The test setter provides a check script (written in Bash) that validates whether you performed the required tasks. A score is assigned automatically based on the output of the check script. |
| Java Project | Automatic evaluation: You are scored based on the number of unit tests that pass. Your score for a question = (Number of unit tests passed / Total number of unit tests) × Total score for the question (see the sketch after this table). |
| Approximate Solution | Automatic evaluation: The evaluator writes a custom checker that specifies the scoring logic for these questions. The custom checker runs on each test case to produce a score, and your total score is the sum of the scores returned for all test cases. |
| Multiple Choice | Automatic evaluation: Your responses are compared with the correct answers to score the question. By default, a multiple-choice question carries a score of 5. If a question has more than one correct answer, each correct answer is assigned an equal fraction of the total score (see the sketches after this table). The evaluator may also enable negative marking for wrong answers to discourage guessing. |
| Sentence Completion | Automatic evaluation: Your score for a question is the fraction of blanks answered correctly multiplied by the total score for the question. |
| Subjective, Diagram, and File Upload | Manual evaluation: The evaluator reads your answers and reviews your diagrams, uploaded files, and explanations to assign scores manually. |
| Front-end, Back-end, and Full-stack development | Automatic evaluation, based on JUnit-based test scoring or custom scoring: With JUnit-based scoring, each test case carries an equal score, and you receive a score proportional to the number of test cases your code passes. With custom scoring, the question setter specifies the scoring rules for a question in a script: an executable program or shell command that runs in the Ubuntu environment. |
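The Java Project and JUnit-based scoring above is proportional to the number of passing unit tests, and Sentence Completion uses the same fraction-based logic with blanks in place of tests. Here is a minimal sketch of that formula (a hypothetical helper for illustration, not HackerRank's actual scoring code):

```python
def proportional_score(passed, total, question_score):
    """Score = (passed / total) * question_score, e.g. unit tests passed or blanks filled correctly."""
    if total == 0:
        return 0.0
    return (passed / total) * question_score

# Java Project example: 8 of 10 unit tests pass on a 50-point question -> 40.0.
print(proportional_score(8, 10, 50))

# Sentence Completion example: 3 of 4 blanks correct on a 20-point question -> 15.0.
print(proportional_score(3, 4, 20))
```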
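For Multiple Choice questions with several correct answers, the table above describes an equal fraction of the score per correct option and optional negative marking for wrong selections. The exact penalty rule is not specified, so the sketch below assumes a fixed deduction per wrong option purely for illustration (hypothetical helper, not HackerRank's actual scoring code):

```python
def multiple_choice_score(selected, correct, question_score=5.0, penalty_per_wrong=0.0):
    """Award an equal fraction of the question score for each correct option selected.

    penalty_per_wrong is an assumed negative-marking rule; the actual deduction
    is configured by the test setter and is not specified in this article.
    """
    fraction_per_option = question_score / len(correct)
    earned = fraction_per_option * len(selected & correct)
    deducted = penalty_per_wrong * len(selected - correct)
    return earned - deducted

# Example: 2 of 3 correct options chosen plus 1 wrong option, 1-point negative marking.
print(multiple_choice_score({"A", "C", "D"}, {"A", "B", "C"}, penalty_per_wrong=1.0))
# -> (5/3) * 2 - 1 = 2.33...
```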