
N4IDE provides test support by allowing different TestRunners to extend it with test-specific functionality. This makes it possible to support specialized and very different test requirements (e.g. Node.js-based tests, browser-based interactive UI tests, server integration tests).
Explanation of the main components:
- User project: contains production code (e.g. src folder) and test code (e.g. test folder); the test code may contain special language features contributed by the Test Library.
- N4IDE: manages the user project, including all test-related parts (e.g. supports test-related code, validates some test code constraints); hosts the runner and allows its UI contributions.
- Test Runner: contributes the necessary elements to N4IDE (test results view, user test selection, test start/stop actions); uses N4IDE mechanisms to access the test fragment of the user project (e.g. to discover tests); configures the Test Execution Environment; manages the test runtime (e.g. starts/stops the execution environment).
- Test Execution Environment: hosts (directly or indirectly) the JS engine in which tests are executed; executes the Test Library logic; is responsible for some test execution aspects (e.g. test isolation); deals with some execution errors (e.g. no callback from an async test, an infinite loop in a test).
- Test Library: provides the test API that users can use in their projects; coordinates scheduling and test code execution (e.g. order of tests, execution of setups/teardowns); creates test results.
The picture and listings below depict the components of the Test Runner in the IDE:
- Test Delegate
- Test Facade
- HTTP Server
- Resource Router Servlet
- REST Endpoint Logic
- Test Event Bus
- Test Finite State Machine Registry
- Test Finite State Machine
- Test Tree Registry
- Test UI
In this section and its subsections we specify the N4IDE support for testing with Mangelhaft.
Mangelhaft is the N4JS Test Library. It focuses on xUnit-style tests rather than other forms of testing (BDD, acceptance testing, functional UI testing).
The following test scenarios are supported on different Test Execution Environments (Node, browser, wrapper): plain tests, DOM tests, non-interactive UI tests, interactive UI (iUI) tests, (non-UI) server tests, and iUI server tests.
A special problem of JavaScript tests is controlling asynchronous tests and non-terminating tests.
Performance and test isolation are conflicting goals: perfect isolation would mean running every test in a separate JavaScript engine, which is not performant. For that reason, all tests are in general run by the same JS engine. A test has to notify the test runner when it has finished (successfully or with a failure). If it does not finish within a defined time (timeout), the Test Execution Environment or Mangelhaft needs to handle that (e.g. restart the Node VM in which the code is executed).
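One way such a timeout can be enforced on the runner side is sketched below. This is an illustrative sketch only, not Mangelhaft's actual implementation; `runWithTimeout` and `TestOutcome` are hypothetical names:

```typescript
// Sketch of a runner-side timeout guard: the test must settle its promise
// within `timeoutMs`, otherwise the runner records a timeout instead of
// waiting forever. Names (runWithTimeout, TestOutcome) are illustrative.
type TestOutcome = "PASSED" | "FAILED" | "TIMEOUT";

async function runWithTimeout(
  test: () => Promise<void>,
  timeoutMs: number
): Promise<TestOutcome> {
  let timer!: ReturnType<typeof setTimeout>;
  const timeout = new Promise<TestOutcome>(resolve => {
    timer = setTimeout(() => resolve("TIMEOUT"), timeoutMs);
  });
  const run = test().then<TestOutcome, TestOutcome>(
    () => "PASSED",
    () => "FAILED"
  );
  const outcome = await Promise.race([run, timeout]);
  clearTimeout(timer); // do not keep the engine alive after the race is over
  return outcome;
}

// A test that never calls back is reported as TIMEOUT instead of hanging:
runWithTimeout(() => new Promise<void>(() => {}), 50)
  .then(outcome => console.log(outcome)); // prints "TIMEOUT"
```

Note that the hanging test's promise itself is simply abandoned; actually recovering from it (e.g. restarting the VM) is up to the Test Execution Environment, as described above.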
The main concerns with running tests in parallel on the JS side are:
- Timeouts. Mangelhaft is supposed to track test timeouts. If tests run in a fake parallel mode achieved by cooperative multitasking, one running test eats up time for the other tests. This can cause tests to time out when running in parallel although they succeed when running sequentially.
- Mutability on the client. Tests running in parallel can affect each other by mutating the global state in which they operate. This can happen in sequential mode too, but it is much less likely.
- Mutable state on the server. Tests running in the same session/login are prone to affecting each other through server interaction (and/or mutating data on the server).
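The client-side mutability concern can be illustrated with a deliberately simplified cooperative scheduler (a sketch only; Mangelhaft does not schedule tests this way). Each "test" sets up shared state and then yields, as an async test would at an await point:

```typescript
// Shared mutable state that both "tests" touch.
let fixture = 0;

// Each test is a generator: `yield` marks a point where the scheduler
// may switch to another test (cooperative multitasking).
function* testA(): Generator<void, number> {
  fixture = 1;    // testA prepares its fixture
  yield;          // ...and yields control (like awaiting an async call)
  return fixture; // by now another test may have overwritten the state
}
function* testB(): Generator<void, number> {
  fixture = 2;
  yield;
  return fixture;
}

// A minimal round-robin scheduler running both tests "in parallel".
function runParallel(tests: Array<Generator<void, number>>): number[] {
  const results: number[] = new Array(tests.length);
  const pending = tests.map((t, i) => ({ t, i }));
  while (pending.length > 0) {
    const { t, i } = pending.shift()!;
    const step = t.next();
    if (step.done) results[i] = step.value;
    else pending.push({ t, i });
  }
  return results;
}

console.log(runParallel([testA(), testB()])); // → [ 2, 2 ]
```

Both tests observe testB's fixture value: testA's state was silently overwritten while it was suspended, which is exactly the interference that sequential execution makes far less likely.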
The xUnit API is the user-facing API for defining tests. It allows a test developer to define tests and configure some test execution aspects. N4IDE (via the Test Runner extension) supports the defined API by:
- gathering information via AST analysis and reflection
- presenting available actions to the user, based on the gathered information
- gathering user input and configuration for test execution
- generating proper data for the test infrastructure, based on user actions
A test group is a logical collection of tests. It is created by grouping N4ClassDeclarations that contain test methods, or by grouping test methods directly (see Test Method). These classes or individual methods can be assigned to a group by annotating them with the @Group annotation. The annotation takes a non-empty list of strings as parameter; the passed strings are used as category names (a category name acts like its ID).
Annotation:
'@Group'
(' $group+=$STRING ')?
AnnotatedElement
;
AnnotatedElement:
N4JSClassDeclaration | N4JSMethodDeclaration
;
@Group
properties
name → Group
targets → N4Method, N4Class
retention policy → RUNTIME
transitive → YES
repeatable → YES
arguments → Strings
arguments are optional → NO
Test Method marks a procedure that has to be executed by the Test Library.
Annotation:
'@Test'
AnnotatedElement
;
AnnotatedElement:
N4JSMethodDeclaration
;
@Test
properties
name → Test
targets → N4Method
retention policy → RUNTIME
transitive → NO
repeatable → NO
arguments → none
Additional Test Method constraints:
- allowed only in N4ClassDeclarations in the project's test fragment
- the method must be public
- the method takes no parameters
- the method return type is Promise?
- the method must not be referenced by other members of the owning class or by other classes (also no @override)
@BeforeAll
marks a method that will be executed once before all tests in a given test class are executed.
Annotation:
'@BeforeAll'
AnnotatedElement
;
AnnotatedElement:
N4JSMethodDeclaration
;
@BeforeAll
properties
name → BeforeAll
targets → N4Method
retention policy → RUNTIME
transitive → NO
repeatable → NO
arguments → none
The same constraints apply as for the test method, see Test Method Constraints.
@Before
marks a method that will be executed before each test in a given test class.
Annotation:
'@Before'
AnnotatedElement
;
AnnotatedElement:
N4JSMethodDeclaration
;
@Before
properties
name → Before
targets → N4Method
retention policy → RUNTIME
transitive → NO
repeatable → NO
arguments → none
The same constraints apply as for the test method, see Test Method Constraints.
@After
marks a method that will be executed after each test in a given test class.
Annotation:
'@After'
AnnotatedElement
;
AnnotatedElement:
N4JSMethodDeclaration
;
@After
properties
name → After
targets → N4Method
retention policy → RUNTIME
transitive → NO
repeatable → NO
arguments → none
The same constraints apply as for the test method, see Test Method Constraints.
@AfterAll
marks a method that will be executed once after all tests in a given test class have been executed.
Annotation:
'@AfterAll'
AnnotatedElement
;
AnnotatedElement:
N4JSMethodDeclaration
;
- allowed only in a class marked with @TestClass
- the method must be public
- the method takes no parameters
- the method return type is void
- the method must not be referenced by other members of the owning class
@AfterAll
properties
name → AfterAll
targets → N4Method
retention policy → RUNTIME
transitive → NO
repeatable → NO
arguments → none
The same constraints apply as for the test method, see Test Method Constraints.
@Ignore
properties
name → Ignore
targets → N4Method, N4Class
retention policy → RUNTIME
transitive → YES
repeatable → NO
arguments → String reason
arguments are optional → YES
@Ignore allows marking tests that should be skipped during test execution. This is the preferred way to temporarily disable tests without removing them (or commenting them out). Test developers may provide a reason for skipping to make the intention clearer.
This annotation is transitive, which means that a Test Method is considered marked with @Ignore either explicitly, when it is directly marked, or implicitly, when a container of the Test Method is marked.
If a class is marked with @Ignore, all of its contained test methods will be ignored.
When @Ignore occurs at class level in a test class hierarchy chain, the following rules apply. Assume the following test classes:
export public class A {
@Test
public aTest(): void {
console.log('A#aTest');
}
}
import { A } from "A"
@Ignore('Class B is ignored.')
export public class B extends A {
@Test
public b1Test(): void {
console.log('B#b1Test');
}
@Ignore("Method B#b2Test is ignored.")
@Test
public b2Test(): void {
console.log("B#b2Test");
}
}
import { B } from "B"
export public class C extends B {
@Test
public cTest(): void {
console.log('C#cTest');
}
}
When module A is tested, it is obvious that all test methods of A are executed; no methods are skipped at all.
When module B is tested, the inherited members of class A are included in the test tree, but all methods, including the ones inherited from class A in module A, are skipped. Nothing is tested.
When module C is tested, all members inherited from class B and class A are collected and included in the test tree. The @Ignore annotation declared at class level on B is disregarded, but the method-level @Ignore in class B is honored. In a nutshell, the following methods are executed:
A#aTest
B#b1Test
C#cTest
The behavior described above is identical to the behavior of JUnit 4 with respect to @Ignore annotation handling in case of test class inheritance.
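Under these rules, the collection logic can be sketched as follows. `TestClass`, `TestMethod` and `methodsToExecute` are hypothetical names introduced only for this sketch, and overridden methods are not modeled:

```typescript
// Sketch of the @Ignore rules described above: class-level @Ignore counts only
// on the class actually under test; inherited class-level @Ignore is
// discarded, while method-level @Ignore is always kept.
interface TestMethod { name: string; ignored: boolean; }
interface TestClass {
  name: string;
  parent?: TestClass;
  classIgnored: boolean;
  methods: TestMethod[];
}

function methodsToExecute(cls: TestClass): string[] {
  if (cls.classIgnored) return []; // everything skipped, incl. inherited tests
  const collected: string[] = [];
  for (let c: TestClass | undefined = cls; c; c = c.parent) {
    for (const m of c.methods) {
      if (!m.ignored) collected.push(`${c.name}#${m.name}`);
    }
  }
  return collected;
}

// The A / B / C hierarchy from the example above:
const A: TestClass = { name: "A", classIgnored: false,
  methods: [{ name: "aTest", ignored: false }] };
const B: TestClass = { name: "B", parent: A, classIgnored: true,
  methods: [{ name: "b1Test", ignored: false }, { name: "b2Test", ignored: true }] };
const C: TestClass = { name: "C", parent: B, classIgnored: false,
  methods: [{ name: "cTest", ignored: false }] };

console.log(methodsToExecute(B)); // → []
console.log(methodsToExecute(C)); // → [ 'C#cTest', 'B#b1Test', 'A#aTest' ]
```

Running module B yields no tests (its class-level @Ignore applies), while running module C executes everything except the method-level-ignored B#b2Test, matching the JUnit 4 behavior described above.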
@Timeout allows a test developer to set a custom timeout for executing given test code. This can be used to set a timeout for both Test Methods and Test Fixtures.
Annotation:
'@Timeout'
($timeout+=$INT)?
AnnotatedElement
;
AnnotatedElement:
N4JSClassDeclaration | N4JSMethodDeclaration
;
@Timeout
properties
name → Timeout
targets → N4Method, N4Class
retention policy → RUNTIME
transitive → YES
repeatable → NO
arguments → Number
arguments are optional → NO
@Description allows a test developer to provide a string describing a given test or test class; it can be used in the IDE test view or in the test report.
Annotation:
'@Description'
($desc+=$STRING)?
AnnotatedElement
;
AnnotatedElement:
N4JSClassDeclaration | N4JSMethodDeclaration
;
@Description
properties
name → Description
targets → N4Method, N4Class
retention policy → RUNTIME
transitive → YES
arguments → String
arguments are optional → NO
The Test Runtime Environment communicates with the Test Runner over HTTP. The communication is based on the protocol used between lupenrein and the old IDE. It is used to send information about test execution progress from the Test Runtime to the Test Runner. The information sent over this protocol is not equivalent to test results: the Test Runner interprets the progress it receives and generates test results from the gathered information. Under specific conditions the Test Runner may change a reported test status PASS into the test result FAILED and put this information into the test report, e.g. when a timeout happens (see the note on timeouts below).
Test Listener shows the communication flow expected by the Test Runner. When the Test Runner is started, it first waits for the Start Session message. Next, a Test Tree message is expected, which describes the list of all tests that are expected to be executed. For every test in the list, the Test Runner expects a Test Start and a Test End message. End Session is expected to be the last message of the test session. Ping messages can be sent multiple times in between other messages to manage synchronization issues between Test Runner and Test Runtime (see below).
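The expected flow can be sketched as a simple validity check over a message sequence. This is an illustrative sketch; `Msg` and `isValidSession` are names invented here, and the real transport is the REST API defined below:

```typescript
// Sketch of the message order the Test Runner expects within one session:
// StartSession first, EndSession last, a TestStart before the TestEnd of each
// test; Test Tree and Ping messages may occur in between.
type Msg =
  | { kind: "startSession" }
  | { kind: "testTree" }
  | { kind: "ping" }
  | { kind: "testStart"; testID: string }
  | { kind: "testEnd"; testID: string }
  | { kind: "endSession" };

function isValidSession(msgs: Msg[]): boolean {
  if (msgs.length < 2) return false;
  if (msgs[0].kind !== "startSession") return false;
  if (msgs[msgs.length - 1].kind !== "endSession") return false;
  const started = new Set<string>();
  for (const m of msgs.slice(1, -1)) {
    if (m.kind === "startSession" || m.kind === "endSession") return false;
    if (m.kind === "testStart") started.add(m.testID);
    if (m.kind === "testEnd" && !started.has(m.testID)) return false; // end before start
  }
  return true;
}

console.log(isValidSession([
  { kind: "startSession" },
  { kind: "testTree" },
  { kind: "testStart", testID: "A/A#aTest" },
  { kind: "ping" },
  { kind: "testEnd", testID: "A/A#aTest" },
  { kind: "endSession" },
])); // → true
```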
Since all communication is asynchronous, the IDE Test Runner must assume timeout values that define the standard wait time during communication:
- An initial 90 s timeout to wait for the Start Session message. It may be fixed or adjusted to the given environment (local/remote) and project (library/application).
- The default timeout between all other test messages is 10 seconds. The Test Runtime may notify the IDE Test Runner that it should wait longer with a Ping test message. This is a one-time extension; as soon as another message is received, the default timeout applies again.
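The wait rule above can be sketched as a small helper. The constant and function names are illustrative, not part of the protocol:

```typescript
// Sketch of the listener-side wait rule: 90 s before StartSession, 10 s
// between messages, and a Ping may lengthen the wait exactly once.
const INITIAL_TIMEOUT_MS = 90_000;
const DEFAULT_TIMEOUT_MS = 10_000;

interface TestMessage {
  kind: "startSession" | "testTree" | "ping" | "testStart" | "testEnd" | "endSession";
  timeout?: number; // Ping may carry a requested wait time
}

// How long to wait for the *next* message after receiving `msg`
// (or before any message has arrived, when `msg` is undefined).
function nextWaitMs(msg?: TestMessage): number {
  if (msg === undefined) return INITIAL_TIMEOUT_MS;   // waiting for StartSession
  if (msg.kind === "ping" && msg.timeout !== undefined) {
    return msg.timeout;                               // one-time extension
  }
  return DEFAULT_TIMEOUT_MS;                          // reset after any other message
}

console.log(nextWaitMs());                                 // → 90000
console.log(nextWaitMs({ kind: "ping", timeout: 30000 })); // → 30000
console.log(nextWaitMs({ kind: "testStart" }));            // → 10000
```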
Due to the asynchronous nature of the tests, status updates can arrive out of order from the Test Runtime Environment. The only guarantees are that every session begins with SessionStart and ends with SessionEnd, and that the TestStart is sent before the TestEnd for a particular test.
The IDE Test Runner waits for specific messages from the Test Runtime. We assume that the communication is done over HTTP. The Test Execution Environment should be configured by the Test Runner in such a way that the Test Runtime knows the address to which it has to send messages (see Test Runtime Configuration). The Test Runner exposes a RESTful API allowing it to receive messages. Below we define the parts of that API that enable the specific messages to be communicated.
When defining the Test Messages, we assume the following model of tests:
TestTree {
ID sessionId,
Array<TestSuite>? testSuites
}
TestSuite {
string name,
Array<TestCase>? testCases,
Array<TestSuite>? children
}
TestCase {
ID id,
string className,
string origin,
string name,
string displayName,
TestResult? result
}
TestResult {
TestStatus testStatus,
number elapsed,
string? expected,
string? actual,
string? message,
Array<string>? trace
}
enum TestStatus {
PASSED, SKIPPED, FAILED, ERROR
}
ID {
string value
}
The ID of a test case is referred to as testID in the following specifications.
This ID has the following structure:
testID: fqn '#' methodName
When used as part of a URL, the testID is percent-escaped as defined in RFC 3986 Section 2.1. This is necessary because the N4JS FQN delimiter / is a reserved character in URLs and cannot be used in its original form.
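In a JavaScript-based Test Runtime, the standard `encodeURIComponent` function produces exactly this escaping, since it percent-encodes both `/` and `#`:

```typescript
// The testID `fqn '#' methodName` contains URL-reserved characters: '/' from
// the N4JS FQN and '#'. Percent-escaping (RFC 3986, Section 2.1) makes the ID
// safe to embed as a single path segment; encodeURIComponent covers both.
const testID = "Test/C#qux"; // fqn "Test/C", test method "qux"
const escaped = encodeURIComponent(testID);
console.log(escaped); // → Test%2FC%23qux
console.log(`/n4js/testing/sessions/{sessionID}/tests/${escaped}/start`);
```

This matches the escaped testIDs (e.g. Test%2FC%23qux) that appear in the example requests later in this section.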
Signals the start of the test session. When the user triggers test execution and the IDETestRunnerCtrl has been configured, the IDE Listener waits for this message from the TestRunner.
StartSession :
uri : /n4js/testing/sessions/{sessionID}/start
method : POST
contentType : application/vnd.n4js.start_session_req.tm+json
accept: application/json
responses:
200:
400:
Start session request object MIME type application/vnd.n4js.start_session_req.tm+json:
{
map<string, string>? properties
}
Signals that the test runner is still busy and will report to the listener later.
PingSession :
uri : /n4js/testing/sessions/{sessionID}/ping
method : POST
contentType : application/vnd.n4js.ping_session_req.tm+json
accept: application/json
responses:
200:
400:
Ping session request object MIME type application/vnd.n4js.ping_session_req.tm+json:
{
number timeout,
string? comment
}
Signals the end of the test session. Notifies the IDE Listener that the session is finished and no further related TestMessages are expected. The IDE can stop listening and proceed with its own tasks (e.g. create a summary test report).
EndSession :
uri : /n4js/testing/sessions/{sessionID}/end
method : POST
responses:
200:
400:
Signals that a test run has started. Updates the state of the test reported with the test tree.
StartTest :
uri : /n4js/testing/sessions/{sessionID}/tests/{testID}/start
method : POST
contentType : application/vnd.n4js.start_test_req.tm+json
accept: application/json
responses:
200:
contentType : application/vnd.n4js.start_test_res.tm+json
400:
Start test request object MIME type application/vnd.n4js.start_test_req.tm+json:
{
number timeout,
map<string, string>? properties
}
Start test response object MIME type application/vnd.n4js.start_test_res.tm+json:
{
links : [
{
rel: "ping test",
uri: "/n4js/testing/sessions/{sessionID}/tests/{testID}/ping"
},
{
rel: "end test",
uri: "/n4js/testing/sessions/{sessionID}/tests/{testID}/end"
}
]
}
Signals that a test run has ended. Updates the state of the test reported with the test tree.
EndTest :
uri : /n4js/testing/sessions/{sessionID}/tests/{testID}/end
method : POST
contentType : application/vnd.n4js.end_test_req.tm+json
accept: application/json
responses:
200:
400:
End test request object MIME type application/vnd.n4js.end_test_req.tm+json:
{
TestResult result
}
Notifies the IDE that the TestRunner is doing something (e.g. running test setup/teardown code or a long-running test). Without this notification, the IDE might interpret a long pause between received messages as a timeout, a TestRunner crash or another issue (and in consequence might terminate the whole test execution environment).
PingTest :
uri : /n4js/testing/sessions/{sessionID}/tests/{testID}/ping
method : POST
contentType : application/vnd.n4js.ping_test_req.tm+json
accept: application/json
responses:
200:
400:
Ping test request object MIME type application/vnd.n4js.ping_test_req.tm+json:
{
number timeout,
string? comment
}
Assembles and returns the test catalog representing all tests available in the underlying IN4JSCore-specific workspace. The content of the test catalog is calculated dynamically and depends on the current build state of the workspace. If the workspace was cleaned and not yet built, a test catalog containing zero test suites (and test cases) is provided as the response. If the workspace is built and in a consistent state, a catalog containing all test cases is sent as the response body. The provided test catalog format complies with the Mangelhaft reporters.
TestCatalog :
uri : /n4js/testing/sessions/testcatalog
method : GET
contentType : application/vnd.n4js.assemble_test_catalog_req.tm+json
accept: application/json
responses:
200:
400:
The listing below represents an example of the test catalog format:
{
"endpoint": "http://localhost:9415",
"sessionId": "fc3a425c-b675-47d7-8602-8877111cf909",
"testDescriptors": [
{
"origin": "SysProjectA-0.0.1",
"fqn": "T/T",
"testMethods": [
"t"
]
},
{
"origin": "TestProjectA-0.0.1",
"fqn": "A/A",
"testMethods": [
"a"
]
},
{
"origin": "TestProjectA-0.0.1",
"fqn": "B/B",
"testMethods": [
"b1",
"b2"
]
},
{
"origin": "TestProjectB-0.0.1",
"fqn": "CSub1/CSub1",
"testMethods": [
"c1",
"c2"
]
},
{
"origin": "TestProjectB-0.0.1",
"fqn": "CSub2/CSub2",
"testMethods": [
"c1",
"c2",
"c3"
]
}
]
}
The example below demonstrates the expected HTTP requests and JSON structures for a simple test group.
class A {
@Test
public foo(): void {}
@Test
@Ignore
public bar(): void {}
}
class B {
@Test
public baz(): void {}
}
class C {
@Test
public qux(): void {}
}
Request method: POST
Request path: http://localhost:9415/n4js/testing/sessions/19f47a37-c1d1-4cb7-a514-1e131f26ab13/start/
Headers: Accept=*/*
Content-Type=application/vnd.n4js.start_session_req.tm+json; charset=ISO-8859-1
Request method: POST
Request path: http://localhost:9415/n4js/testing/sessions/19f47a37-c1d1-4cb7-a514-1e131f26ab13/tests/Test%2FC%23qux/start/
Headers: Accept=*/*
Content-Type=application/vnd.n4js.start_test_req.tm+json; charset=ISO-8859-1
Body:
{
"timeout": 1000
}
Request method: POST
Request path: http://localhost:9415/n4js/testing/sessions/19f47a37-c1d1-4cb7-a514-1e131f26ab13/tests/Test%2FB%23baz/start/
Headers: Accept=*/*
Content-Type=application/vnd.n4js.start_test_req.tm+json; charset=ISO-8859-1
Body:
{
"timeout": 1000
}
Request method: POST
Request path: http://localhost:9415/n4js/testing/sessions/19f47a37-c1d1-4cb7-a514-1e131f26ab13/tests/Test%2FA%23bar/start/
Headers: Accept=*/*
Content-Type=application/vnd.n4js.start_test_req.tm+json; charset=ISO-8859-1
Body:
{
"timeout": 1000
}
Request method: POST
Request path: http://localhost:9415/n4js/testing/sessions/19f47a37-c1d1-4cb7-a514-1e131f26ab13/tests/Test%2FA%23foo/start/
Headers: Accept=*/*
Content-Type=application/vnd.n4js.start_test_req.tm+json; charset=ISO-8859-1
Body:
{
"timeout": 1000
}
Request method: POST
Request path: http://localhost:9415/n4js/testing/sessions/19f47a37-c1d1-4cb7-a514-1e131f26ab13/tests/Test%2FA%23bar/ping
Headers: Accept=*/*
Content-Type=application/vnd.n4js.ping_test_req.tm+json; charset=ISO-8859-1
Body:
{
"timeout": 1000
}
Request method: POST
Request path: http://localhost:9415/n4js/testing/sessions/19f47a37-c1d1-4cb7-a514-1e131f26ab13/tests/Test%2FC%23qux/ping/
Headers: Accept=*/*
Content-Type=application/vnd.n4js.ping_test_req.tm+json; charset=ISO-8859-1
Body:
{
"timeout": 2000
}
Request method: POST
Request path: http://localhost:9415/n4js/testing/sessions/19f47a37-c1d1-4cb7-a514-1e131f26ab13/tests/Test%2FB%23baz/end/
Headers: Accept=*/*
Content-Type=application/vnd.n4js.end_test_req.tm+json; charset=ISO-8859-1
Body:
{
"message": "Some optional message.",
"trace": [
"trace_element_1",
"trace_element_2",
"trace_element_3"
],
"expected": "1",
"testStatus": "FAILED",
"elapsedTime": 100,
"actual": "2"
}
Request method: POST
Request path: http://localhost:9415/n4js/testing/sessions/19f47a37-c1d1-4cb7-a514-1e131f26ab13/tests/Test%2FC%23qux/end/
Headers: Accept=*/*
Content-Type=application/vnd.n4js.end_test_req.tm+json; charset=ISO-8859-1
Body:
{
"message": "Some failure message.",
"trace": [
"trace_element_1",
"trace_element_2",
"trace_element_3"
],
"expected": "4",
"testStatus": "FAILED",
"elapsedTime": 50,
"actual": "3"
}
Request method: POST
Request path: http://localhost:9415/n4js/testing/sessions/19f47a37-c1d1-4cb7-a514-1e131f26ab13/tests/Test%2FA%23foo/end/
Headers: Accept=*/*
Content-Type=application/vnd.n4js.end_test_req.tm+json; charset=ISO-8859-1
Body:
{
"expected": "2",
"testStatus": "PASSED",
"elapsedTime": 60,
"actual": "power of 2 for 2"
}
Request method: POST
Request path: http://localhost:9415/n4js/testing/sessions/19f47a37-c1d1-4cb7-a514-1e131f26ab13/tests/Test%2FA%23bar/end/
Headers: Accept=*/*
Content-Type=application/vnd.n4js.end_test_req.tm+json; charset=ISO-8859-1
Body:
{
"testStatus": "SKIPPED",
"elapsedTime": 0
}
Request method: POST
Request path: http://localhost:9415/n4js/testing/sessions/19f47a37-c1d1-4cb7-a514-1e131f26ab13/end/
Headers: Accept=*/*
Content-Type=application/vnd.n4js.end_session_req.tm+json; charset=ISO-8859-1
The Test Runner must gather relevant information and send it to the Test Environment to allow proper test execution:
- gathering user input and test options
- gathering information about the test code of the user project
- maintaining proper name mappings (e.g. if the project is minified, test names/references must be mapped correctly)
The Test Runner uses the N4IDE infrastructure to obtain information about the test fragment of the user project. Based on that information and on user input in the UI (e.g. triggering test execution on a whole project), the IDE can determine the Test Methods that should be executed. This test list, or Test Plan, is sent to the Test Environment and is expected to be executed by a Test Library.
TestPlan {
Array<TestProcedure> procedures
}
TestProcedure {
string functionName,
string functionType,
string functionContainer,
string containerModule
}
Additionally, the Test Runner sends further configuration options to the Test Environment:
- the base URL of the Test Runner's test communication protocol (baseURL)
For example, assume the user selects ProjectX for testing, which contains only one test class under the src/test/n4js/core path:
class MyTestClass {
@BeforeAll
public someOneTimeSetup(): void { /* setup code */ }
@Test
public testA(): void { /* some test code */ }
@Test
public testB(): void { /* some test code */ }
@Test
public testC(): void { /* some test code */ }
@After
public afterCleanup(): void { /* cleanup code */ }
}
The configuration sent to the Test Execution Environment would look like:
{
"baseURL" : "http://localhost:1234/",
"testPlan":
[
{
"functionName": "someOneTimeSetup",
"functionType": "@BeforeAll",
"functionContainer": "MyTestClass",
"containerModule": "test/n4js/core/MyTestClass"
},
{
"functionName": "testA",
"functionType": "@Test",
"functionContainer": "MyTestClass",
"containerModule": "test/n4js/core/MyTestClass"
},
{
"functionName": "afterCleanup",
"functionType": "@After",
"functionContainer": "MyTestClass",
"containerModule": "test/n4js/core/MyTestClass"
},
{
"functionName": "testB",
"functionType": "@Test",
"functionContainer": "MyTestClass",
"containerModule": "test/n4js/core/MyTestClass"
},
{
"functionName": "afterCleanup",
"functionType": "@After",
"functionContainer": "MyTestClass",
"containerModule": "test/n4js/core/MyTestClass"
},
{
"functionName": "testC",
"functionType": "@Test",
"functionContainer": "MyTestClass",
"containerModule": "test/n4js/core/MyTestClass"
},
{
"functionName": "afterCleanup",
"functionType": "@After",
"functionContainer": "MyTestClass",
"containerModule": "test/n4js/core/MyTestClass"
}
]
}
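The expansion from a test class into a flat testPlan of this shape can be sketched as follows. `buildTestPlan` and `TestClassInfo` are hypothetical helper names introduced for this sketch; the ordering (fixture setup once, then each test followed by the @After methods) mirrors the example configuration:

```typescript
// Sketch of how a runner could expand a test class into a flat test plan:
// @BeforeAll procedures once up front, then each @Test followed by the
// class's @After procedures.
interface TestProcedure {
  functionName: string;
  functionType: string;      // e.g. "@BeforeAll", "@Test", "@After"
  functionContainer: string; // owning class
  containerModule: string;   // module path of the container
}

interface TestClassInfo {
  name: string;
  module: string;
  beforeAll: string[];
  tests: string[];
  after: string[];
}

function buildTestPlan(cls: TestClassInfo): TestProcedure[] {
  const proc = (functionName: string, functionType: string): TestProcedure => ({
    functionName, functionType,
    functionContainer: cls.name,
    containerModule: cls.module,
  });
  const plan: TestProcedure[] = cls.beforeAll.map(n => proc(n, "@BeforeAll"));
  for (const t of cls.tests) {
    plan.push(proc(t, "@Test"));
    for (const a of cls.after) plan.push(proc(a, "@After"));
  }
  return plan;
}

const plan = buildTestPlan({
  name: "MyTestClass", module: "test/n4js/core/MyTestClass",
  beforeAll: ["someOneTimeSetup"],
  tests: ["testA", "testB", "testC"],
  after: ["afterCleanup"],
});
console.log(plan.map(p => p.functionName).join(","));
// → someOneTimeSetup,testA,afterCleanup,testB,afterCleanup,testC,afterCleanup
```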