At Qualified we believe in assessing developers in a modern coding environment and testing their code in an intuitive format. This is why we use fully featured development environments, complete with unit testing, for our code challenges. Our IDE comes with a testing environment that integrates standard unit testing frameworks for each language. These language-specific testing frameworks (e.g., Mocha for JavaScript and unittest for Python) let developers run in-depth tests and let our skill assessments cover language- and framework-specific abilities.

This development environment gives developers an intuitive workflow and the familiarity of their favorite local IDE, resulting in immediate productivity and peak performance. The upside of these powerful testing environments is that developers have more control, just as they would when coding in the workplace. It also lets them debug the challenge much as they would in their own custom environment. There is no limit to what can be done in the environment, since the tests run inside the environment itself just like regular test specs would.

With all of this upside, the power and flexibility of language-specific testing environments also create potential drawbacks. For each coding challenge, test cases need to be written in the specific unit testing framework of the language it's run in. This makes it time-consuming and difficult to translate challenges into other languages, as each new language needs its own set of unit tests. While some challenges, especially framework-specific ones, need to be written in a specific language, there are general algorithmic and computer science challenges that transcend language syntax. These challenges can be created in a language-agnostic format simply by defining an entry point, some inputs, and the expected output. Once we know those details, creating a challenge that is supported across every programming language should be as simple as pushing a button. This is why we built the Language Generator for code challenges.


What is this Language Generator?

The Language Generator is a tool that allows code challenges on Qualified to support multiple programming languages instantly. Instead of writing unit tests for a custom code challenge in each language, simply create the challenge using our Code Challenge Language Generator and generate support across programming languages with the push of a button.

How does the Language Generator work?

Teams using Qualified can take advantage of the Language Generator to define their challenge in a language-agnostic format that is then used to generate unit tests across languages. This format uses a single YAML configuration file to define the entry point, return value, and inputs. This configuration file is then used to generate unit tests for the challenge in every language the generator supports, extending native support for the challenge across languages.

To use the Language Generator, start by designing the code challenge itself, without thinking about the configuration yet: the problem, the description, and what we want to assess. Typically the easiest way is to sketch it out in a programming language we know well.

To explain the Language Generator, let's walk through an example: the "Say Hello" challenge.

Example: The “Say Hello” challenge

For our example let's create a basic code challenge that requires a developer to write a simple function. We know JavaScript well and decide to start with that. So for our Say Hello challenge we want a function called sayHello which takes in a name and returns "Hello, [name]!".

So the setup code for the candidate might be:

function sayHello(name) {
}

The test cases in Mocha might look something like this:

let assert = require("chai").assert;
describe('Challenge', function() {
  it('says_hello', function() {
    assert.deepEqual(sayHello("Qualified"), "Hello, Qualified!");
  });
});
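
For reference, one solution that would satisfy this test might look like the following (a sketch; any implementation that returns the expected greeting would pass):

// One possible passing implementation
function sayHello(name) {
  return "Hello, " + name + "!";
}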

Essentially we’re checking to see if our input of a particular string matches our expected output of another string. We’ve gotten pretty far into designing this challenge, so let’s take a look at how the YAML configuration would be built:

entry_point: say_hello
return_type: String
parameters:
  - name: name
    type: String
test_cases:
  - it: says_hello
    assertions:
      - input_arguments:
        - type: String
          value: Qualified
        expected_output:
          type: String
          value: Hello, Qualified!
  - it: handles_empty_input
    assertions:
      - input_arguments:
        - type: String
          value:
        expected_output:
          type: String
          value: Hello there!
example_test_cases:
  - it: basic_test
    assertions:
      - input_arguments:
        - type: String
          value: Qualified
        expected_output:
          type: String
          value: Hello, Qualified!

Let's break it down quickly. First, we've got our entry_point of say_hello.

entry_point: say_hello

Since this is translated into multiple languages, it's written in snake_case here, but it will be translated into the appropriate casing for each language. In JavaScript, it would become sayHello automatically.

Notice we designated a return_type of String:

return_type: String

This must be specified, and is especially important when translating into strongly typed languages.

Next we have our parameters:

parameters:
  - name: name
    type: String

This is our list of parameters that will be sent to the entry point. The entry point is automatically set up for the candidate so they understand what kind of parameters to expect and in what order we'll be sending them. In the example above we're sending sayHello our string name.
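
The parameters list can contain more than one entry. As a purely hypothetical illustration (this is not part of the actual Say Hello challenge), a configuration with a second punctuation parameter might set up the JavaScript entry point like this:

// Hypothetical config (not part of this challenge):
//   parameters:
//     - name: name
//       type: String
//     - name: punctuation
//       type: String
// Generated JavaScript entry point (a sketch):
function sayHello(name, punctuation) {
}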

Finally we have our test_cases and example_test_cases. Example test cases are the tests available to the candidate immediately for testing their solution. These are used to help the candidate get their feet wet and grow confident in a testing environment that may be new to them. The test cases are hidden from the candidate, but they can be designed to send debugging information back in order to lead the candidate toward the correct solution.
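
For instance, the example_test_cases defined above would generate the candidate-visible sample tests. In JavaScript/Mocha they might look something like this (a sketch, following the same generated format shown further below):

let assert = require("chai").assert;
describe('Challenge', function() {
  it('basic_test', function() {
    assert.deepEqual(sayHello("Qualified"), "Hello, Qualified!");
  });
});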

Let's take a look at the test_cases:

test_cases:
  - it: says_hello
    assertions:
      - input_arguments:
        - type: String
          value: Qualified
        expected_output:
          type: String
          value: Hello, Qualified!
  - it: handles_empty_input
    assertions:
      - input_arguments:
        - type: String
          value:
        expected_output:
          type: String
          value: Hello there!

The configuration allows the challenge designer to create multiple it clauses, with as many assertions as they want within each clause (a sketch of this appears after the generated JavaScript below). Each assertion can have any number of input arguments, each with its own type and value. Then we can specify, based on those inputs, what the expected output should be.

In JavaScript this configuration would translate out to:

let assert = require("chai").assert;
describe('Challenge', function() {
  it('says_hello', function() {
    assert.deepEqual(sayHello("Qualified"), "Hello, Qualified!");
  });
  it('handles_empty_input', function() {
    assert.deepEqual(sayHello(""), "Hello there!");
  });
});
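
As mentioned above, an it clause isn't limited to a single assertion. If says_hello contained a second assertion, say for a hypothetical "World" input, the generated Mocha test would simply gain another assert (a sketch):

let assert = require("chai").assert;
describe('Challenge', function() {
  it('says_hello', function() {
    // Two assertions within a single `it` clause; the "World" input is a
    // hypothetical addition, not part of the configuration above
    assert.deepEqual(sayHello("Qualified"), "Hello, Qualified!");
    assert.deepEqual(sayHello("World"), "Hello, World!");
  });
});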

With that one configuration file, this challenge can be generated across all supported languages! That wraps it up for our simple "Say Hello" example. Now that you have the format down, it's time for you to try creating your own custom code challenge with the Language Generator.

If you're already on Qualified, get started by heading to the Challenge section and clicking "Create a Code Challenge"; the Language Generator will appear in the sidebar. If you haven't tried out Qualified.io yet, head over and claim your free trial to try creating code challenges with the Language Generator today!