1 Introduction to APP4MC

The goal of the project is the development of a consistent, open, expandable tool platform for embedded software engineering. It is based on a model-driven approach as its basic engineering methodology. The main focus is the optimization of embedded multi-core systems.

Most functions in a modern car are controlled by embedded systems, and more and more driver assistance functions are being introduced. This implies a continuous increase in computing power, accompanied by the demand to reduce energy consumption and costs. To meet these requirements, multi-core technology is permeating the control units in cars. This is one of the biggest challenges for automotive systems today. Existing applications cannot realize immediate benefit from these multi-core ECUs because they are not designed to run on such architectures. In addition, applications and systems have to be migrated to AUTOSAR-compatible architectures. Both trends imply the need for new development environments that cater to these requirements.

The tool platform shall be capable of supporting all aspects of the development cycle. This predominantly addresses the automotive domain, but it is also applicable to telecommunication via extensions that deal with such systems in their native environment and integrated in a car.

Future extensions will add support for visualization tools and graphical editors. Not only design aspects will be supported: verification and validation of the systems will also be taken into account, and support tools for optimal multi-core real-time scheduling and validation of timing requirements will be provided. In the course of this project, not all of the above aspects will be addressed in the same depth. Some will be defined and some will be implemented on a prototype basis, but the base platform and the overall architecture will be finalized as much as possible.

The result of the project is a tool platform that is open in two respects: on the one hand, it is published under the Eclipse Public License (EPL); on the other hand, it is open to integration with existing or new tools, either on a company-individual basis or with commercially available tools.

2 User Guide

2.1 Introduction

APP4MC comes with a predefined perspective available in the Eclipse menu under Window -> Open Perspective -> Other -> APP4MC. This perspective consists of the following elements:

  • AMALTHEA Model Explorer
  • Editor
    • Tree Editor showing the structure of the model content
    • Standard Properties Tab used to edit element attributes

The following screenshot shows this perspective and its contained elements.

2.1.1 Steps to create a new AMALTHEA model

APP4MC provides a standard wizard to create a new AMALTHEA model from scratch.

Step 1: Create a new general project

The scope of an AMALTHEA model is defined by its enclosing container (project or folder).
Therefore a project is required.

Step 2: Create a new folder inside of the created project

It is recommended to create a folder (although a project is also a possible container).

Step 3: Create a new AMALTHEA model

In the context menu (right mouse button) an entry for a new AMALTHEA model can be found.

Another starting point is File -> New -> Other

In the dialog you can select the parent folder and the file name.

2.1.2 AMALTHEA Editor

The AMALTHEA Editor shows either the entire model that contains sub models or one of the sub models.
The next screenshot shows the "New Child" menu with all its possibilities.

In addition the AMALTHEA Editor has two commands available at the top right of the Editor.

Show types of model elements

The Show types of elements button triggers the editor to show the direct type of each element in the tree editor using [element_type]. The following screenshot shows the toggle and the types marked with an underline.

Search for model elements

The editor provides the possibility to filter model elements by the available name attribute. This can be used, for example, to find all elements in the model that have abs in their name.

The search results are displayed in the Eclipse Search View.

An additional option is to toggle the search results to group them by type as shown in the next screenshot.

2.1.3 AMALTHEA Examples

The AMALTHEA tool platform comes with several examples. This section will describe how a new project based on these examples can be created.

Step 1

Click the "new" icon in the top left corner and select "Example...".

Step 2

The "New Example" wizard will pop up and show several examples.
Select one of the examples and continue.

Step 3

You will see a summary of the example projects that will be created.
Click "Finish" to exit this dialog.

You can now open the editor to inspect the models.

2.2 Concepts

2.2.1 Hardware

The AMALTHEA hardware model is used to describe hardware systems. The overall structure of this model is as follows:

  • A System contains one or more ECUs
  • An ECU contains one or more Microcontrollers
  • A Microcontroller contains one or more Cores
  • (Optional) Each of these abstractions can contain one or more memories, networks or peripheral elements.

Usually, this is also considered the minimum amount of information that must be present in the model.

Since the hardware model is meant to be utilized for a wide range of use cases, additional information might be required depending on the specific use case. Examples of its usage, as well as the respective minimal amount of required information, are provided in the example section (User Guide -> Examples -> Hwmodel Example).

The information required to determine a software-to-hardware mapping with the AMALTHEA Mapping Plugin is listed in Deliverable D3.4. Examples of hardware models for this use case will be generated during the mapping tutorials (see sections User Guide -> Tutorials -> Mapping Example 1/2).

2.2.2 Software (development)

The AMALTHEA System Model can also be used in early phases of the development process when only limited information about the resulting software is available.

Runnables

The Runnable element is the basic software unit that defines the behavior of the software in terms of runtime and communication. It can be described on different levels of abstraction:

  1. timing only (activation and runtime)
  2. including communication (in general)
  3. adding detailed call sequences

To allow a more detailed simulation, a description can also include statistical values like deviations or probabilities. This requires additional information that is typically derived from an already implemented function. The modeling of observed behavior is described in more detail in the chapter Software (runtime).

Process Prototypes

Process Prototypes are used to define the basic data of a task. They are another possibility to describe that a set of Runnables has a similar characteristic (e.g. the same periodic activation).
A prototype can then be processed and checked by different algorithms. Finally, a partitioning algorithm generates (one or more) tasks that are the runtime equivalents of the prototype.

This processing can be guided by specifications that are provided by the function developers:

  • The Order Specification is a predefined execution order that has to be guaranteed.
  • An Access Specification defines exceptions from the standard write-before-read semantics.

Constraints

In addition, the partitioning and mapping can be restricted by Affinity Constraints, which enforce the pairing or separation of software elements, and by Property Constraints, which connect hardware capabilities to the corresponding software requirements.
Timing Constraints will typically be used to check whether the resulting system fulfills all requirements.

Activations

Activations are used to specify the intended activation behavior of Runnables and ProcessPrototypes. Typically they are defined before the creation of tasks (and the runnable-to-task mappings). They are thus a way to cluster runnables and to document when the runnables should be executed.

The following activation patterns can be distinguished:

  • Single: single activation
  • Periodic: periodic activation with a specific frequency
  • Sporadic: recurring activation without following a specific pattern
  • Event: activation triggered by a TriggerEvent
  • Custom: custom activation (free textual description)

To describe a specific (observed) behavior at runtime there are Stimuli in the AMALTHEA model. They can be created based on the information of the specified activations.
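As a small illustration (plain Python, not part of the AMALTHEA model), a Periodic activation can be expanded into concrete activation times from its offset and recurrence:

```python
def periodic_activations(offset_ms, recurrence_ms, horizon_ms):
    """Yield the activation times of a periodic pattern within a time horizon."""
    t = offset_ms
    while t < horizon_ms:
        yield t
        t += recurrence_ms

# A periodic activation with offset 0 ms and recurrence 25 ms,
# observed over a 100 ms horizon:
print(list(periodic_activations(0, 25, 100)))  # [0, 25, 50, 75]
```

A Sporadic or Custom pattern cannot be expanded this way, since by definition it follows no fixed recurrence.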

2.2.3 Software (runtime)

During runtime, the dynamic behavior of the software can be observed. The following Gantt chart shows an excerpt of such a dynamic behavior.

To model the observed behavior in the AMALTHEA model there are schedulable units (Processes) that contain the basic software units (Runnables) and stimuli that describe when the processes are executed. Statistical elements like distributions (Gauss, Weibull, ...) are also available in the model. They allow describing the variation of values if there are multiple occurrences.

In the following sections, a high-level description of the individual elements of a software description that define the dynamic behavior is presented.

Processes (Tasks or ISRs)

Processes represent executable units that are managed by an operating system scheduler. A process is thus the smallest schedulable unit managed by the operating system. Each process also has its own name space and resources (including memory) protected against use by other processes. In general, two different kinds of processes can be distinguished: the task and the Interrupt Service Routine (ISR). The latter is a software routine called in case of an interrupt. ISRs normally have a higher priority than tasks and can only be suspended by another ISR with a higher priority than the one running. In the Gantt chart above, a task called 'TASK_InputProcessing' can be seen. All elements that run within the context of a process are described in the following sections.

Runnables

Runnables are basic software units. In general, a Runnable is comparable to a function. It runs within the context of a process and is described by a sequence of instructions. Those instructions can in turn represent different actions that define the dynamic behavior of the software. Possible actions are listed below:

  • Semaphore Access: request/release of a semaphore
  • Label Access: reading/writing a data signal
  • Instructions: number of unspecified instructions to be executed
  • ...

In the following sections, elements that can be of concern within a runnable are described in more detail.

Labels

Labels represent the system's view of data exchange. As a consequence, labels are used to represent communication in a flattened structure, with (at least) one label defined for each data element sent or received by a Runnable instance.

Semaphore

The main functionality of a semaphore is to control simultaneous use of a single resource by several entities, e.g. scheduling of requests, multiple access protection.

Stimulation

So far, we have described the dynamic behavior of a specific process instance. In general, however, a process is not activated only once but many times. The action of activating a process is called stimulation. The following stimulation patterns can be distinguished:

  • Single: single activation of a process
  • Periodic: periodic activation of a process with a specific frequency
  • Sporadic: recurring activation of a process without following a specific pattern
  • Event: activation triggered by a TriggerEvent
  • Custom: custom activation (free textual description)

2.2.4 General Concepts

Grouping of elements (Tags, Tag groups)

It is possible to use Tags for grouping elements of the model. Currently the following AMALTHEA sub models support Tags:

  1. Components
  2. Software (SW)
  3. Hardware (HW)

The corresponding model classes store the relevant Tags in a local list, available in:

  • org.eclipse.app4mc.amalthea.model.ComponentsModel
  • org.eclipse.app4mc.amalthea.model.SWModel
  • org.eclipse.app4mc.amalthea.model.HWModel

The reason for this is that Tags used in the different sub models mostly have a different technical background. It should be avoided that e.g. elements in the software model refer to Tags from a components model.

Custom Properties

The AMALTHEA model provides Custom Properties to enhance the model in a generic way. These can be used for different kinds of purposes:

  • Storing attributes that are relevant for your model but not yet available at the elements
  • Storing processing information of algorithms, e.g. to mark an element as already processed

2.2.5 Communication via channels

Channel

Sender and receiver communicate via a channel by issuing send and receive operations on ports; the read policy and transmission policy define the communication details.

As a basic thinking model, reading and writing to channels happens during runnable execution in distinct phases:

  • Receiving phase: The specified receive operations are performed for every read port.
  • Computing phase: A specified time passes, modeling processing of input data.
  • Sending phase: The specified send operations are performed for every write port.

A channel is specified by three attributes:

  • elementType: the type that is sent or received via connected ports.
  • defaultElements: number of elements initially in the channel (at start-up).
  • maxElements (integer): a buffer limit, that is, the channel depth.

In other words, no more than maxElements elements of the given type may be stored in the channel.

Channel Access

In the basic thinking model, all elements are stored as a sequential collection without buffer size limit (unlimited storage).

Sending

A runnable may send elements to a channel by issuing a send operation.
The send operation has a single parameter:

  • elements (integer): Number of elements that are written.

Receiving

A runnable may receive elements from a channel by issuing receive operations.
The operation has multiple aspects:

  • Port Type
    • A LIFO (last-in, first-out) port is chosen if processing the last written elements is the primary focus and thereby missing elements is tolerable.
    • A FIFO (first-in, first-out) port is chosen if every written element needs to be handled, that is, loss of elements is not tolerable.

The port type defines the direction in which the receive operations take effect: accesses via LIFO ports are from the top of the sequential collection, while accesses via FIFO ports are from the bottom.

  • Receive Operation
    • Read will return the received elements without modifying the channel
    • Take will additionally remove the received elements from the channel

Each operation has two parameters and several attributes specifying the exact behavior:

  • elements (integer): Maximum number n of elements that are received.
  • elementIndex (integer): Position (index i) in channel at which the operation is effective. Zero is the default and denotes the oldest (FIFO port) or newest element (LIFO port) in the channel.

Receive operations are then written in function notation, that is, Read(n, i) and Take(n, i). For convenience, the index may be left out for the default value '0' (Take(n,0) == Take(n)). See the next figure for examples of which elements are denoted by receive operations for FIFO and LIFO ports.
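The semantics of Read/Take on FIFO and LIFO ports can be sketched in plain Python (an illustration, not the AMALTHEA API):

```python
class Channel:
    """Sketch of AMALTHEA channel receive semantics (illustrative only).

    Elements form a sequential collection; a FIFO port accesses the oldest
    elements first, a LIFO port the newest elements first.
    """

    def __init__(self, elements=None):
        self.elements = list(elements or [])  # index 0 = oldest element

    def _view(self, port_type):
        # LIFO ports see the collection newest-first, FIFO ports oldest-first.
        return self.elements[::-1] if port_type == "LIFO" else self.elements

    def read(self, port_type, n, i=0):
        """Read(n, i): return up to n elements from index i; channel unchanged."""
        return self._view(port_type)[i:i + n]

    def take(self, port_type, n, i=0):
        """Take(n, i): like read, but removes the received elements
        (assumes distinct elements, for simplicity of the sketch)."""
        received = self.read(port_type, n, i)
        for e in received:
            self.elements.remove(e)
        return received

    def send(self, new_elements):
        """Send operation: append elements (newest at the end)."""
        self.elements.extend(new_elements)

ch = Channel([1, 2, 3, 4])  # 1 is the oldest element, 4 the newest
print(ch.read("FIFO", 2))   # [1, 2] -> the two oldest elements
print(ch.read("LIFO", 2))   # [4, 3] -> the two newest elements
print(ch.take("FIFO", 2))   # [1, 2], and removes them
print(ch.elements)          # [3, 4]
```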

Additional attributes further define the receive operation:

  • lowerBound (integer): Specifies the minimum number of elements returned by the operation. The value must be in the range [0,n], where n is the maximum number of elements received, as specified by the elements attribute of the receive operation. Default value is n.
  • dataMustBeNew (Boolean): Specifies whether the operation may only return elements that have not previously been read by this port. Default value is false.

Common Read Policies: Last, New, and AllNew

As three specific read policies are frequently used, they are introduced in the following.

Policy   Port Type  Operation              lowerBound  dataMustBeNew
Last(x)  LIFO       Read(x,0)              x           false
New(x)   LIFO       Read(x,0)              0           true
AllNew   FIFO       Take(sizeOfChannel,0)  0           true

An example of the different behavior of the New and Last policies is shown in the next figure. For demonstration purposes, periodic triggering of the receiving runnables R1 and R2 and sporadic data writing by runnable S are assumed. Note that while New may be used on an empty channel, Last would lead to an error (dashed boxes).
Left: Static connection between three runnables (S, R1, and R2). Right: Dynamic view of writing elements to channel over time (green arrows), runnable execution indicated as blue boxes, and the resulting received elements shown in square brackets.
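The difference between the Last and New policies can also be illustrated with a small self-contained sketch (hypothetical Python, not the AMALTHEA API) that mimics a channel port with read marks:

```python
class ChannelPort:
    """Sketch of the Last/New read policies (illustrative only)."""

    def __init__(self):
        self.elements = []       # index 0 = oldest element
        self.read_marks = set()  # positions already read via this port

    def send(self, value):
        self.elements.append(value)

    def last(self, x):
        # Last(x) = LIFO Read(x, 0) with lowerBound = x:
        # fails if fewer than x elements are available (the "empty channel" error).
        if len(self.elements) < x:
            raise RuntimeError("Last(x) on a channel with fewer than x elements")
        return self.elements[-x:][::-1]  # newest first

    def new(self, x):
        # New(x) = LIFO Read(x, 0) with lowerBound = 0 and dataMustBeNew = true:
        # returns only not-yet-read elements, possibly none at all.
        fresh = [i for i in range(len(self.elements)) if i not in self.read_marks]
        picked = fresh[-x:] if x > 0 else []
        self.read_marks.update(picked)
        return [self.elements[i] for i in picked][::-1]  # newest first

port = ChannelPort()
print(port.new(1))   # []          -> New tolerates an empty channel
port.send("a")
port.send("b")
print(port.new(2))   # ['b', 'a'] -> all unread elements, newest first
print(port.new(2))   # []          -> nothing new since the last call
print(port.last(1))  # ['b']       -> newest element, regardless of read marks
```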

Transmission Policy

To further specify how elements are accessed by a runnable in terms of computing time, an optional transmission policy may specify details for each receive and send operation. The intention of the transmission policy is to reflect computing demand (time) depending on data.

The transmission policy consists of the following attributes:

  • chunkSize: Size of a part of an element; the maximum is the element size.
  • chunkProcessingInstructions (integer): Number of instructions that are executed to send/receive one chunk.
  • transmitRatio (float): Specifies the fraction of each element that is actually transmitted by the runnable. The value must be in the range [0, 1]; the default value is 1.0.
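As a sketch of how these attributes could translate into computing demand (the exact formula is only given in the figure, so the chunking rule below is an assumption, not the normative definition):

```python
import math

def transmission_instructions(n_elements, element_size, chunk_size,
                              chunk_processing_instructions, transmit_ratio=1.0):
    """Sketch: instruction demand of sending/receiving n elements.

    Assumption: the transmitted part of each element
    (element_size * transmit_ratio) is split into chunks of chunk_size,
    and each chunk costs chunk_processing_instructions instructions.
    """
    chunks_per_element = math.ceil((element_size * transmit_ratio) / chunk_size)
    return n_elements * chunks_per_element * chunk_processing_instructions

# Two elements of 64 bytes, 16-byte chunks, 10 instructions per chunk:
print(transmission_instructions(2, 64, 16, 10))       # 80
# Only half of each element is actually transmitted:
print(transmission_instructions(2, 64, 16, 10, 0.5))  # 40
```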

Example of using a transmission policy to detail the receiving phase of a runnable execution: two elements are received, leading to the transmission time given in the formula. After the receiving phase, the runnable starts the computing phase.

2.2.6 Memory Sections

Purpose

Memory Sections are used to divide the memory (RAM/ROM) into different blocks and to allocate the "software" memory elements (e.g. Labels) and code accordingly inside them.
Each Memory Section has certain specific properties (e.g. faster access to the elements, storing constant values). By default, compiler vendors provide certain Memory Sections (e.g. .data, .text); additional Memory Sections can be created based on project needs by enhancing the linker configuration.

Definition

A "Memory Section" is a region in memory (RAM/ROM) that is addressed with a specific name. Multiple "Memory Sections" with different names can exist inside the same Memory (RAM/ROM). Memory Section names should be unique across the Memory (RAM/ROM).

Memory Sections can be of two types:

  • Virtual Memory Section
  • Physical Memory Section

Virtual Memory Section

"Virtual Memory Sections" are defined as part of the data specification and are associated with the corresponding Memory Elements (e.g. Labels) during the development phase of the software. The intention behind associating "Virtual Memory Sections" with Memory elements like Labels is to control their allocation in a specific Memory (e.g. Ram1 or Ram2) by the linker.

As part of the linker configuration, it is possible to specify that a "Virtual Memory Section" (e.g. mem.Sec1) can be part of certain Memories (e.g. Ram1/Ram2/SYSRAM but not Ram3).

Example:

The software should be built for a ManyCore ECU containing 3 cores (Core1, Core2, Core3). The following RAMs are associated with the cores: Ram1 – Core1, Ram2 – Core2, Ram3 – Core3; in addition, there is SYSRAM.

The Virtual Memory Section mem.sec1 (defined as part of the data specification) is associated with Label1 and Label2.

In Linker configuration it is specified that mem.sec1 can be allocated only in Ram1 or Ram2.

The diagram below represents the linker configuration content w.r.t. the possible physical allocation of mem.sec1 in various memories.

Based on the above configuration, the linker will allocate Label1 and Label2 in Ram1, Ram2, or SYSRAM, but not in Ram3/Ram4.

Physical Memory Section

"Physical Memory Sections" are generated by the linker. The linker allocates various memory elements (e.g. Labels) inside "Physical Memory Sections".

Each "Physical Memory Section" has the following properties:

  • Name – unique within each Memory
  • Start and end address – these represent the size of the "Physical Memory Section"
  • Associated physical Memory (e.g. Ram1 or Ram2)

Example: mem.sec1.py can exist inside Ram1 and also inside Ram2. But these are physically two different elements, as they are associated with different memories (Ram1 and Ram2) and have different start and end addresses.

The diagram below represents the information w.r.t. virtual memory sections (defined in the data specification and associated with memory elements) and physical memory sections (generated after the linker run).

Modeling Memory Section information in AMALTHEA

As described in the above concept section:

  • Virtual memory sections are used:
    • to specify constraints for the creation of Physical memory sections by the linker
    • to control the allocation of data elements (e.g. Labels) in a specific memory (e.g. Ram1/Ram2/SYSRAM)
  • Physical memory sections contain the data elements after the linker run (representing the software to be flashed into the ECU)

The figure below represents the modeling of "Memory Section" information (both virtual and physical) in the AMALTHEA model:

The following equivalent elements of the AMALTHEA model are used for modeling the Memory Section information:

  • Section
    • This element is equivalent to the Virtual Memory Section defined during the SW development phase.
    • As part of the data specification defined in the SW development phase, a Section object (with a specific name) is associated with Label and Runnable elements.
  • PhysicalSectionConstraint
    • This element is equivalent to the constraint specified in the linker configuration file, which instructs the linker regarding the allocation of Physical Memory Sections in specific Memories.
    • PhysicalSectionConstraint specifies the combination of a Virtual Memory Section and the Memories which can be considered by the linker for the generation of Physical Memory Sections.

Example: PhysicalSectionConstraint-1 specifies the following relation: "Section-1" <--> "Memory-1", "Memory-2". This means that the corresponding Physical Memory Section for "Section-1" can be generated by the linker in "Memory-1", in "Memory-2", or in both.

  • PhysicalSectionMapping
    • This element is equivalent to a Physical Memory Section generated during the linker run.
    • Each PhysicalSectionMapping element:
      • contains the Virtual Memory Section (e.g. Section-1) which is its source,
      • is associated with a specific Memory and contains the start and end memory address (the difference between start and end address represents the size of the Physical Memory Section),
      • contains the data elements (i.e. Labels and Runnables that are part of the final software).

Note: It is also possible to associate multiple Virtual Memory Sections, as the linker has a concept of grouping Virtual Memory Sections while generating a Physical Memory Section.

Example: For the same Virtual Memory Section (e.g. Section-1), the linker can generate multiple Physical Memory Sections in different Memories (e.g. PhysicalSectionMapping-1, PhysicalSectionMapping-2). Each PhysicalSectionMapping element is an individual entity, as it has a separate start and end memory address.
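The constraint relation in the example above can be sketched as a small check (the names and data structures are hypothetical, not the AMALTHEA API):

```python
# PhysicalSectionConstraint-1 from the example:
# "Section-1" may be placed in "Memory-1" or "Memory-2".
constraints = {
    "Section-1": {"Memory-1", "Memory-2"},  # memories the linker may use
}

def mapping_is_valid(virtual_section, memory, constraints):
    """A PhysicalSectionMapping is only valid if its Memory is allowed by the
    PhysicalSectionConstraint of its Virtual Memory Section."""
    return memory in constraints.get(virtual_section, set())

print(mapping_is_valid("Section-1", "Memory-1", constraints))  # True
print(mapping_is_valid("Section-1", "Memory-3", constraints))  # False
```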

2.3 Examples

2.3.1 Modeling Example 1

General information

Modeling Example 1 describes a simple system consisting of four tasks running on a dual-core processor.
The following figure shows the execution footprint in a Gantt chart:

In the following sections, the individual parts of the AMALTHEA model for Modeling Example 1 are presented followed by a short description of its elements.

Hardware Model

The hardware model of Modeling Example 1 consists, as already mentioned, of a dual-core processor.
The following gives a structural overview of the modeled elements.
The two cores, 'Core_1' and 'Core_2', each have a static processing frequency of 100 MHz, which is specified by the corresponding quartz oscillator 'Quartz'.

Operating System Model

The operating system (OS) model defines, in the case of Modeling Example 1, only the needed schedulers.
Since a dual-core processor has to be managed, two schedulers are modeled correspondingly.
In addition to the scheduling algorithm used by the schedulers, in this case OSEK, a delay of 100 ns is set, which is the presumed time the scheduler needs for context switches.
A summary of the modeled properties can be found in the following table:

Scheduler    Type              Algorithm  Delay
Scheduler_1  SchedulingHWUnit  OSEK       100 ns
Scheduler_2  SchedulingHWUnit  OSEK       100 ns

Mapping Model

The mapping model defines allocations between different model parts.
On the one hand, this is the allocation of processes to a scheduler.
In case of Example 1, 'Task_1' and 'Task_2' are managed by 'Scheduler_1', while the other tasks are managed by 'Scheduler_2'.
On the other hand, the allocation of cores to a scheduler is set.
For Modeling Example 1, two local schedulers were modeled.
As a consequence, each scheduler manages one of the processing cores.
A summary of the modeled properties can be found in the following tables:

Executable Allocation
Scheduler Process
Scheduler_1 Task_1
Scheduler_1 Task_2
Scheduler_2 Task_3
Scheduler_2 Task_4
Core Allocation
Scheduler Core
Scheduler_1 Core_1
Scheduler_2 Core_2

Software Model

Tasks

As already mentioned above, the software model of Modeling Example 1 consists of exactly four tasks, named 'Task_1' to 'Task_4'.
Each task is preemptive and has a priority assigned according to its deadline, meaning the one with the shortest deadline, 'Task_1', has the highest priority, and so on.
Each task also calls a fixed number of Runnables in a sequential order.
A summary of the modeled properties can be found in the following table:

Task    Priority  Preemption  MTA*  Deadline  Calls
Task_1  4         Preemptive  1     75 ms     1) Runnable_1_1
Task_2  3         Preemptive  1     115 ms    1) Runnable_2_1
                                              2) Runnable_2_2
Task_3  2         Preemptive  1     300 ms    1) Runnable_3_1
                                              2) Runnable_3_2
                                              3) Runnable_3_3
Task_4  1         Preemptive  1     960 ms    1) Runnable_4_1
                                              2) Runnable_4_2
                                              3) Runnable_4_3
                                              4) Runnable_4_4

*MTA = Multiple Task Activation Limit

Runnables

In addition to the tasks, the software model also contains a definition of Runnables.
For Modeling Example 1, ten individual Runnables are defined.
Their only function in this example is to consume processing resources.
Therefore, a constant number of instruction cycles is stated for each Runnable.
A summary of the modeled properties can be found in the following table:

Runnable InstructionCycles
Runnable_1_1 1500000
Runnable_2_1 1500000
Runnable_2_2 1500000
Runnable_3_1 1000000
Runnable_3_2 2000000
Runnable_3_3 1000000
Runnable_4_1 1000000
Runnable_4_2 2000000
Runnable_4_3 3000000
Runnable_4_4 2000000

Stimuli Model

The stimulation model defines the activations of tasks.
Since the four tasks of Modeling Example 1 are activated periodically, four stimuli according their recurrence are modeled.
A comprehension of the modeled properties can be found in the following table:

Stimulus Type Offset Recurrence
Stimulus_Task_1 Periodic 0 ms 180 ms
Stimulus_Task_2 Periodic 0 ms 200 ms
Stimulus_Task_3 Periodic 0 ms 300 ms
Stimulus_Task_4 Periodic 0 ms 1 s
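Assuming, purely for illustration, that one instruction executes per cycle on the 100 MHz cores, the task execution times and utilizations implied by the tables above can be computed:

```python
FREQ_HZ = 100_000_000  # 100 MHz cores; one instruction per cycle is an assumption

tasks = {
    # task: (summed runnable instruction cycles, period in ms from the stimuli)
    "Task_1": (1_500_000, 180),
    "Task_2": (1_500_000 + 1_500_000, 200),
    "Task_3": (1_000_000 + 2_000_000 + 1_000_000, 300),
    "Task_4": (1_000_000 + 2_000_000 + 3_000_000 + 2_000_000, 1000),
}

def utilization(instructions, period_ms):
    """Return (execution time in ms, fraction of one core used per period)."""
    exec_ms = instructions / FREQ_HZ * 1000
    return exec_ms, exec_ms / period_ms

for name, (instructions, period_ms) in tasks.items():
    exec_ms, u = utilization(instructions, period_ms)
    print(f"{name}: {exec_ms:.0f} ms every {period_ms} ms -> {u:.1%} of one core")
```

Under this assumption, the four tasks together use roughly 45% of one core, which the mapping model spreads over the two cores.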

2.3.2 Modeling Example 2

General information

Modeling Example 2 describes a simple system consisting of four tasks running on a single-core processor.
The following figure shows the execution footprint in a Gantt chart:

In the following sections, the individual parts of the AMALTHEA model for Modeling Example 2 are presented followed by a short description of its elements.

Hardware Model

The hardware model of Modeling Example 2 consists, as already mentioned, of a single-core processor.
The following gives a structural overview of the modeled elements.
The core, 'Core_1', has a static processing frequency of 600 MHz, which is specified by the corresponding quartz oscillator 'Quartz_1'.

Operating System Model

The operating system (OS) model defines, in the case of Modeling Example 2, only the needed scheduler.
Since only a single core has to be managed, a single scheduler is modeled correspondingly.
In addition to the scheduling algorithm used by the scheduler, in this case OSEK, a delay of 100 ns is set, which is the presumed time the scheduler needs for context switches.
A summary of the modeled properties can be found in the following table:

Scheduler    Type              Algorithm  Delay
Scheduler_1  SchedulingHWUnit  OSEK       100 ns

Mapping Model

The mapping model defines allocations between different model parts.
On the one hand, this is the allocation of processes to a scheduler.
Since there is only one scheduler available in the system, all four tasks are mapped to 'Scheduler_1'.
On the other hand, the allocation of cores to a scheduler is set.
As a consequence, the scheduler manages the only available processing core.
A summary of the modeled properties can be found in the following tables:

Executable Allocation
Scheduler Process
Scheduler_1 Task_1
Scheduler_1 Task_2
Scheduler_1 Task_3
Scheduler_1 Task_4
Core Allocation
Scheduler Core
Scheduler_1 Core_1

Software Model

Tasks

As already mentioned above, the software model of Modeling Example 2 consists of exactly four tasks, named 'Task_1' to 'Task_4'.
All tasks are assigned the same priority (10) to achieve cooperative scheduling.
'Task_2' to 'Task_4' call a fixed number of Runnables in a sequential order.
'Task_1' instead contains a call graph that models two different possible execution sequences.
In 70% of the cases the sequence 'Runnable_1_1', 'Runnable_1_2', 'Task_2', 'Runnable_1_4' is called, while in the remaining 30% the sequence 'Runnable_1_1', 'Runnable_1_3', 'Task_3', 'Runnable_1_4' is called.
As can be seen, the call graph of 'Task_1' also contains inter-process activations, which activate other tasks.
A summary of the modeled properties can be found in the following table:

Task    Priority  Preemption  MTA*  Deadline  Calls
Task_1  10        Preemptive  3     25 ms     1.1) Runnable_1_1
                                              1.2) Runnable_1_2
                                              1.3) Task_2
                                              1.4) Runnable_1_4
                                              2.1) Runnable_1_1
                                              2.2) Runnable_1_3
                                              2.3) Task_3
                                              2.4) Runnable_1_4
Task_2  10        Preemptive  3     25 ms     1) Runnable_2_1
Task_3  10        Preemptive  3     25 ms     1) Runnable_3_1
Task_4  10        Preemptive  3     25 ms     1) Runnable_4_1

*MTA = Multiple Task Activation Limit

Runnables

In addition to the tasks, the software model also contains a definition of Runnables.
For Modeling Example 2, seven individual Runnables are defined.
Their only function in this example is to consume processing resources.
Therefore, a number of instruction cycles is stated for each Runnable.
The number of instruction cycles is either constant or defined by a statistical distribution.
A summary of the modeled properties can be found in the following table:

Runnable      Type                  Instructions
Runnable_1_1  Constant              1000000
Runnable_1_2  Constant              2000000
Runnable_1_3  Constant              3000000
Runnable_1_4  Constant              4000000
Runnable_2_1  Uniform Distribution  min: 1000000, max: 5000000
Runnable_3_1  Gauss Distribution    mean: 1000000, sd: 50000, upper: 5000000
Runnable_4_1  Constant              4000000

Stimulation Model

The stimulation model defines the activations of tasks.
'Task_1' is activated periodically by 'Stimulus_Task_1'.
'Stimulus_Task_2' and 'Stimulus_Task_3' represent the inter-process activations for the corresponding tasks.
Finally, 'Task_4' is activated sporadically following a Gauss distribution.
A summary of the modeled properties can be found in the following table:

Stimulus         Type              Parameters
Stimulus_Task_1  Periodic          offset: 0 ms, recurrence: 25 ms
Stimulus_Task_2  Inter-Process     -
Stimulus_Task_3  Inter-Process     -
Stimulus_Task_4  Sporadic (Gauss)  mean: 30 ms, sd: 5 ms, upper: 100 ms
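The two non-trivial stimulus types can be sketched as generators of activation times (illustrative Python; the exact truncation handling of the Gauss inter-arrival times is an assumption):

```python
import random

def periodic_activations(offset, recurrence, horizon):
    """Activation times of a periodic stimulus within [0, horizon)."""
    t = offset
    while t < horizon:
        yield t
        t += recurrence

def sporadic_gauss_activations(mean, sd, upper, horizon, rng):
    """Sporadic stimulus: inter-arrival times drawn from a Gauss
    distribution, restricted here to (0, upper]."""
    t = 0.0
    while True:
        gap = rng.gauss(mean, sd)
        if not 0 < gap <= upper:
            continue        # redraw out-of-range gaps
        t += gap
        if t >= horizon:
            return
        yield t

# Stimulus_Task_1: offset 0 ms, recurrence 25 ms, over a 100 ms horizon
print(list(periodic_activations(0, 25, 100)))
```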

2.3.3 Modeling Example "Purely Periodic without Communication"

This system architecture pattern consists of a task set where each task is activated periodically and no data accesses are performed. The execution time for each task is determined by the called runnable entities, as specified in the table below. All tasks contain just one runnable except T7, which first calls R7,1 and then R7,2.

The table below gives a detailed specification of the tasks and their parameters. The tasks are scheduled according to fixed-priority, preemptive scheduling, and if not indicated otherwise, all events are active in order to get a detailed insight into the system's behavior.

Task  Priority  Preemption  MTA*  Activation                                 Execution Time
T1    7         FULL        1     Periodic (Offset = 0, Recurrence = 80)     R1: Uniform (Min = 9.95, Max = 10)
T2    6         FULL        1     Periodic (Offset = 0, Recurrence = 120)    R2: Uniform (Min = 29.95, Max = 30)
T3    5         FULL        1     Periodic (Offset = 0, Recurrence = 160)    R3: Uniform (Min = 19.95, Max = 20)
T4    4         FULL        1     Periodic (Offset = 0, Recurrence = 180)    R4: Uniform (Min = 14.95, Max = 15)
T5    3         FULL        1     Periodic (Offset = 0, Recurrence = 200)    R5: Uniform (Min = 29.95, Max = 30)
T6    2         FULL        1     Periodic (Offset = 0, Recurrence = 300)    R6: Uniform (Min = 39.95, Max = 40)
T7    1         FULL        1     Periodic (Offset = 0, Recurrence = 1000)   R7,1: Uniform (Min = 59.95, Max = 60)
                                                                             R7,2: Uniform (Min = 19.95, Max = 20)

*MTA = Multiple Task Activation Limit
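As a rough cross-check, the average utilization of this task set can be estimated from the table by summing mean execution time over period for each task (a sketch, assuming execution times and recurrences use the same time unit):

```python
# Period and mean execution time (midpoint of the uniform range)
# per task, taken from the table above.
tasks = {
    "T1": (80,   (9.95 + 10) / 2),
    "T2": (120,  (29.95 + 30) / 2),
    "T3": (160,  (19.95 + 20) / 2),
    "T4": (180,  (14.95 + 15) / 2),
    "T5": (200,  (29.95 + 30) / 2),
    "T6": (300,  (39.95 + 40) / 2),
    "T7": (1000, (59.95 + 60) / 2 + (19.95 + 20) / 2),  # R7,1 + R7,2
}

def utilization(active):
    """Average CPU utilization of the active tasks: sum of C_i / T_i."""
    return sum(c / p for p, c in (tasks[t] for t in active))

print(round(utilization(["T4", "T5", "T6", "T7"]), 3))  # smaller subset
print(round(utilization(list(tasks)), 3))               # all tasks active
```

Activating more of the tasks raises the total utilization, which is exactly the effect exercised by the variations below.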

In order to show the impact of changes to the model, the following consecutive variations are made to the model:

1) Initial Task Set
For this variation, the Tasks T4, T5, T6, and T7 of the table above are active.
2) Increase of Task Set Size I
For this variation, the Tasks T3, T4, T5, T6, and T7 are active. That way the utilization of the system is increased.
3) Increase of Task Set Size II
For this variation, the Tasks T1, T3, T4, T5, T6, and T7 are active. That way the utilization of the system is increased.
4) Increase of Task Set Size III
As from this variation on, all tasks (T1 - T7) are active. That way the utilization of the system is increased.
5) Accuracy in Logging
For this variation, just task events are active. That way, only a limited insight into the system's runtime behavior is available.
6) Schedule
As from this variation on, T7 is set to non-preemptive. That way, the timing behavior is changed, which results in extinct activations (see red mark in the figure below).
7) Activation
As from this variation on, the maximum number of queued activation requests for all tasks is set to 2. That way, the problem with extinct activations resulting from the previous variation is solved.
8) Schedule Point
For this variation, a schedule point is added to T7 between the calls of R7,1 and R7,2. That way, the timing behavior is changed at this particular point.
9) Scheduling Algorithm
For this variation, the scheduling algorithm is set to Earliest Deadline First. That way, the timing behavior is changed completely.
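The interplay of variations 6 and 7 can be sketched as a bounded activation queue (illustrative Python, not AMALTHEA API; "extinct" activations are the requests that do not fit into the queue while the task is still busy):

```python
from collections import deque

def activate(queue, limit, dropped):
    """Queue one activation request; requests beyond the multiple
    task activation limit are lost ('extinct' activations)."""
    if len(queue) < limit:
        queue.append("activation")
    else:
        dropped.append("activation")

# With limit 1, every request arriving while one is still pending is lost.
pending, lost = deque(), []
for _ in range(3):
    activate(pending, 1, lost)
print(len(pending), len(lost))
```

Raising the limit to 2, as variation 7 does, lets one additional request wait instead of being dropped.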

2.3.4 Modeling Example "Client-Server without Reply"

This system architecture pattern extends the modeling example "Purely Periodic without Communication" by adding a one-way communication between tasks. It consists of two tasks, T1 and T2. Task T1 sends a message to task T2 before runnable R1 is called. In 20% of the cases Message 1, in 30% of the cases Message 2, in 20% of the cases Message 3, in 15% of the cases Message 4, and in 15% of the cases any message other than the previously mentioned ones is sent. Task T2 reacts to the content of the message by calling different runnables: in case of Message 1 runnable R2,1, in case of Message 2 runnable R2,2, in case of Message 3 runnable R2,3, in case of Message 4 runnable R2,4, and in case of any other message runnable R2,x is called as a default.
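The message dispatch described above can be sketched as follows (illustrative Python; the weighted choice is one possible way to realize the stated probabilities):

```python
import random

# Message probabilities from the example ("other" = any other message)
MESSAGES = [("Message 1", 0.20), ("Message 2", 0.30), ("Message 3", 0.20),
            ("Message 4", 0.15), ("other", 0.15)]

HANDLERS = {"Message 1": "R2,1", "Message 2": "R2,2",
            "Message 3": "R2,3", "Message 4": "R2,4"}

def t1_send(rng):
    """T1 picks a message according to the modeled probabilities."""
    return rng.choices([m for m, _ in MESSAGES],
                       weights=[w for _, w in MESSAGES])[0]

def t2_receive(message):
    """T2 dispatches to a runnable based on the message content,
    with R2,x as the default."""
    return HANDLERS.get(message, "R2,x")

rng = random.Random(0)
msg = t1_send(rng)
print(msg, "->", t2_receive(msg))
```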

The table below gives a detailed specification of the tasks and their parameters. The tasks are scheduled according to fixed-priority, preemptive scheduling, and if not indicated otherwise, all events are active in order to get a detailed insight into the system's behavior.

Task  Priority  Preemption  MTA*  Activation                                              Execution Time
T1    2         FULL        1     Periodic (Offset = 0, Recurrence = 100 * 10^6)          R1: Uniform (Min = 9.9 * 10^6, Max = 10 * 10^6)
T2    1         FULL        1     Periodic (Offset = 15 * 10^6, Recurrence = 60 * 10^6)   R2,x: Uniform (Min = 99, Max = 100)
                                                                                          R2,1: Uniform (Min = 990, Max = 1 * 10^3)
                                                                                          R2,2: Uniform (Min = 49.5 * 10^3, Max = 50 * 10^3)
                                                                                          R2,3: Uniform (Min = 990 * 10^3, Max = 1 * 10^6)
                                                                                          R2,4: Uniform (Min = 39.6 * 10^6, Max = 40 * 10^6)

*MTA = Multiple Task Activation Limit

In order to show the impact of changes to the model, the following consecutive variations are made to the model:

1) Initial Task Set
As defined by the table above.
2) Exclusive Area
For this variation, all data accesses are protected by an exclusive area. Therefore, the data accesses in T1 as well as all runnables in T2 (R2,x, R2,1, R2,2, R2,3, and R2,4) are protected during their complete time of execution via a mutex and priority ceiling protocol. That way, blocking situations appear.
3) Inter-process Activation
As from this variation on, task T2 gets activated by an inter-process activation from task T1 instead of being activated periodically. The inter-process activation is performed right after the message is written to T2 and consequently before the runnable R1 is called. That way, a direct connection between T1 and T2 is established.
4) Priority Ordering
As from this variation on, the priority relation between task T1 and T2 is reversed. As a consequence, the priority of task T1 is set to 1 and the priority of task T2 is set to 2. That way, a switch from asynchronous to synchronous communication is considered.
5) Event Frequency Increase
As from this variation on, the periodicity of T1 is shortened. For this, the value for the period of task T1 is cut in half to 50 * 10^6 time units. That way, the utilization of the system is increased.
6) Execution Time Fluctuation
As from this variation on, the execution time distribution is widened for both tasks. Therefore, the maximum of every uniform distribution is increased by 1 percent so that they now vary by 2 percent. That way, the utilization of the system is increased, which results in extinct activations.
7) Activation
As from this variation on, the maximum number of queued activation requests for both tasks is set to 2. That way, the problem with extinct activations resulting from the previous variation is solved.
8) Accuracy in Logging of Data State I
For this variation, the data accesses in task T1 and task T2 are omitted. Instead, the runnable entities R2,x, R2,1, R2,2, R2,3, and R2,4, each representing the receipt of a specific message, are executed equally random, meaning each with a probability of 20%. That way, only a limited insight into the system's runtime behavior is available.
9) Accuracy in Logging of Data State II
For this variation, just task events are active. That way, only a limited insight into the system's runtime behavior is available.
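Variation 2 relies on the priority ceiling protocol. A minimal sketch of the immediate-ceiling variant (an assumption; the text does not state which variant is used) looks like this:

```python
def effective_priority(task_prio, held_resources, ceilings):
    """Immediate priority ceiling: while a task holds a resource it runs
    at least at the resource's ceiling priority (the highest priority
    among all tasks that access the resource)."""
    return max([task_prio] + [ceilings[r] for r in held_resources])

# T1 (priority 2) and T2 (priority 1) share one exclusive area,
# so the ceiling of that area is 2.
ceilings = {"exclusive_area": 2}
print(effective_priority(1, ["exclusive_area"], ceilings))  # T2 while holding
print(effective_priority(1, [], ceilings))                  # T2 otherwise
```

While T2 holds the exclusive area it runs at priority 2, so T1 cannot preempt it inside the protected region; this is the (bounded) blocking the variation makes visible.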

2.3.5 Modeling Example "State Machine"

In this system architecture pattern, the modeling example "Client-Server without Reply" is extended in such a way that the message-receiving task (T2) varies its dynamic behavior, and consequently also its execution time, not only according to the transmitted content but also depending on its internal state, i.e. the previously transmitted contents. To achieve this, task T1 sends a message with either 0 or 1 to task T2 before runnable R1 is called. The value 0 is used as message content in 75% of the cases and 1 in the remaining cases. Starting in state 0, T2 decreases or increases the state it is currently in depending on the content of the message, 0 or 1 respectively. The runnables R2,1, R2,2, and R2,3 then represent the three different states that the system can be in.
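T2's state handling can be sketched as follows (illustrative Python; that the state is clamped at 0 and 2, and the mapping of states 0/1/2 to R2,1/R2,2/R2,3, are assumptions based on the description):

```python
def t2_step(state, message):
    """Advance T2's internal state: message 1 increments, message 0
    decrements, clamped here to the three modeled states 0..2."""
    if message == 1:
        return min(state + 1, 2)
    return max(state - 1, 0)

# Assumed mapping of states to the runnables that represent them
STATE_RUNNABLE = {0: "R2,1", 1: "R2,2", 2: "R2,3"}

state = 0
for msg in (1, 1, 0, 1, 1):    # an example message sequence from T1
    state = t2_step(state, msg)
    print(msg, "->", STATE_RUNNABLE[state])
```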

The table below gives a detailed specification of the tasks and their parameters. The tasks are scheduled according to fixed-priority, preemptive scheduling, and if not indicated otherwise, all events are active in order to get a detailed insight into the system's behavior.

Task  Priority  Preemption  MTA*  Activation                                              Execution Time
T1    2         FULL        1     Periodic (Offset = 0, Recurrence = 100 * 10^6)          R1: Uniform (Min = 9.9 * 10^6, Max = 10 * 10^6)
T2    1         FULL        1     Periodic (Offset = 15 * 10^6, Recurrence = 60 * 10^6)   R2,1: Uniform (Min = 99, Max = 100)
                                                                                          R2,2: Uniform (Min = 99 * 10^3, Max = 100 * 10^3)
                                                                                          R2,3: Uniform (Min = 49.5 * 10^6, Max = 50 * 10^6)

*MTA = Multiple Task Activation Limit

In order to show the impact of changes to the model, the following consecutive variations are made to the model:

1) Initial Task Set
As defined by the table above.
2) Exclusive Area
For this variation, all data accesses are protected by an exclusive area. Therefore, the data accesses in T1 as well as all runnables in T2 (R2,1, R2,2, and R2,3) are protected during their complete time of execution via a mutex and priority ceiling protocol. That way, blocking situations appear.
3) Priority Ordering
As from this variation on, the priority relation between task T1 and T2 is reversed. As a consequence, the priority of task T1 is set to 1 and the priority of task T2 is set to 2. That way, the timing behavior is changed fully.
4) Inter-process Activation
As from this variation on, task T2 gets activated by an inter-process activation from task T1 instead of being activated periodically. The inter-process activation is performed right after the message is written by T1 and consequently before the runnable R1 is called. That way, a direct connection between T1 and T2 is established.
5) Event Frequency Increase
As from this variation on, the periodicity of T1 is shortened. For this, the value for the period of task T1 is halved to 50 * 10^6. That way, the utilization of the system is increased, which results in extinct activations.
6) Activation
As from this variation on, the maximum number of queued activation requests for both tasks is set to 2. That way, the problem with extinct activations resulting from the previous variation is solved.
7) Execution Time Fluctuation
As from this variation on, the execution time distribution is widened for both tasks. Therefore, the maximum of each uniform distribution is increased by 1 percent so that it now varies by 2 percent. That way, the utilization of the system is further increased.
8) Accuracy in Logging of Data State I
For this variation, the data write accesses in task T1 and task T2 are omitted. Instead, the runnables R2,1, R2,2, and R2,3, each representing the execution of a specific state, are executed with a probability of 60 %, 30 %, and 10 % respectively. That way, only a limited insight into the system's runtime behavior is available.
9) Accuracy in Logging of Data State II
For this variation, just task events are active. That way, only a limited insight into the system's runtime behavior is available.

2.3.6 Modeling Example "Feedback Loop"

In this architecture pattern, the task set of the modeling example "State Machine" is expanded further, with the result that messages are exchanged in a loop instead of just one way. To achieve this, task T1 sends a message u to task T2 before runnable R1 is called. The content of this message is 0 if the content of a previously received message e was 0, or 1 if it was 1. Task T2 then represents a state machine with three states that increases its state if message u is 1 and decreases it if u is 0. In each state, the messages y and w are set to state-specific values and sent to task T3 and task T4 respectively. In case of state 0, the messages y and w contain the value 0; in case of state 1, both contain 50; and in case of state 2, the value 100 is sent. These messages are written before runnable R2 is called. However, in 30% of the cases task T4 is activated via an inter-process activation before this runnable call happens. Task T3 varies its dynamic behavior, and consequently also its execution time, according to the transmitted content of message y. Task T4, finally, prepares the input for task T1 again. If the content received in message w is 0, then in 30% of the cases the content of message e is 0, otherwise 1. If message w is 50, message e is set to 0 with a probability of 50% and to 1 otherwise. Finally, if message w is 100, message e is set to 0 in 70% of the cases and to 1 in 30% of the cases. In addition to this feedback loop, other system architecture patterns are added to be executed concurrently in order to increase the complexity. The tasks T5 and T6 represent a client-server without reply and are equal to the tasks T1 and T2 respectively as described in the modeling example "Client-Server without Reply". T7 is a periodically activated task without any communication and identical to task T7 of the modeling example "Purely Periodic without Communication".
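T4's preparation of message e from message w follows the probabilities stated above and can be sketched as (illustrative Python, not AMALTHEA API):

```python
import random

def t4_prepare_e(w, rng):
    """T4 derives the content of message e from message w, using the
    probabilities described in the text."""
    if w == 0:
        return 0 if rng.random() < 0.30 else 1
    if w == 50:
        return 0 if rng.random() < 0.50 else 1
    if w == 100:
        return 0 if rng.random() < 0.70 else 1
    raise ValueError(f"unexpected message content: {w}")

rng = random.Random(5)
print([t4_prepare_e(100, rng) for _ in range(5)])
```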

The table below gives a detailed specification of the tasks and their parameters. The tasks are scheduled according to fixed-priority, preemptive scheduling, and if not indicated otherwise, all events are active in order to get a detailed insight into the system's behavior.

Task  Priority  Preemption  MTA*  Activation                                              Execution Time
T1    3         FULL        1     Periodic (Offset = 0, Recurrence = 600 * 10^6)          R1: Uniform (Min = 9.9 * 10^6, Max = 10 * 10^6)
T2    2         FULL        1     Periodic (Offset = 20 * 10^6, Recurrence = 300 * 10^6)  R2: Uniform (Min = 99 * 10^4, Max = 100 * 10^4)
T3    3         FULL        1     Periodic (Offset = 50 * 10^6, Recurrence = 500 * 10^6)  R3,0: Uniform (Min = 99 * 10^4, Max = 100 * 10^4)
                                                                                          R3,1: Uniform (Min = 99 * 10^5, Max = 100 * 10^5)
                                                                                          R3,2: Uniform (Min = 99 * 10^6, Max = 100 * 10^6)
T4    1         FULL        1     Inter-process Activation                                R4: Uniform (Min = 99 * 10^5, Max = 100 * 10^5)
T5    5         FULL        1     Periodic (Offset = 0, Recurrence = 100 * 10^6)          R5: Uniform (Min = 99 * 10^5, Max = 100 * 10^5)
T6    4         FULL        1     Periodic (Offset = 15 * 10^6, Recurrence = 60 * 10^6)   R6,x: Uniform (Min = 99, Max = 100)
                                                                                          R6,1: Uniform (Min = 990, Max = 1 * 10^3)
                                                                                          R6,2: Uniform (Min = 49.5 * 10^3, Max = 50 * 10^3)
                                                                                          R6,3: Uniform (Min = 990 * 10^3, Max = 1 * 10^6)
                                                                                          R6,4: Uniform (Min = 49.5 * 10^6, Max = 50 * 10^6)
T7    0         FULL        1     Periodic (Offset = 0, Recurrence = 1000 * 10^6)         R7,1: Uniform (Min = 59.4 * 10^6, Max = 60 * 10^6)
                                                                                          R7,2: Uniform (Min = 19.8 * 10^6, Max = 20 * 10^6)

*MTA = Multiple Task Activation Limit

In order to show the impact of changes to the model, the following consecutive variations are made to the model:

1) Initial Task Set
For this variation, the tasks T1, T2, T3, and T4 of the table above are active.
2) Increase of Task Set Size I
For this variation, the Tasks T1, T2, T3, T4, T5, and T6 are active. That way the utilization of the system is increased.
3) Increase of Task Set Size II
As from this variation on, all tasks are active. That way the utilization of the system is increased.
4) Inter-process Activation
As from this variation on, task T2 gets activated by an inter-process activation from task T1, task T3 by an inter-process activation from task T2, and task T6 by an inter-process activation from task T5, instead of being activated periodically. The inter-process activation in task T1 is performed right after the message u is written to T2 and consequently before the runnable R1 is called; in task T2, it is performed right before task T4 is activated; and in task T5, task T6 is activated right before runnable R5. That way, a direct connection between these tasks is established.
5) Event Frequency Increase
As from this variation on, the periodicity of the tasks T1, T5, and T7 is shortened. For task T1, the value for the period is set to 450 * 10^6, for task T5 to 60 * 10^6, and for task T7 to 575 * 10^6. That way, the information density is increased.
6) Execution Time Fluctuation
As from this variation on, the execution time distribution is widened for all tasks. Therefore, the maximum of each uniform distribution is increased by 1 percent so that it now varies by 2 percent. That way, the utilization of the system is increased, which results in extinct activations.
7) Activation
As from this variation on, the maximum number of queued activation requests for all tasks is set to 2. That way, the problem with extinct activations resulting from the previous variation is solved.
8) Accuracy in Logging of Data State I
For this variation, the data accesses in all tasks are omitted. Instead, the runnable entities R3,0, R3,1, and R3,2, are executed with a probability of 50 %, 30 %, and 20 % respectively, and the runnable entities R6,x, R6,1, R6,2, R6,3, and R6,4 are executed with a probability of 15 %, 20 %, 30 %, 20 %, and 15 % respectively. That way, only a limited insight into the system's runtime behavior is available.
9) Accuracy in Logging of Data State II
For this variation, just task events are active. That way, only a limited insight into the system's runtime behavior is available.

2.3.7 Modeling Example "State Machine Feedback Loop"

In this architecture pattern, the task set of the modeling example "State Machine" is expanded further by combining the ideas behind the modeling examples "State Machine" and "Feedback Loop". This means that messages are exchanged in a loop and each sender/receiver is also a state machine. To achieve this, task T1 has two different internal states, 0 and 1, and task T2 manages three consecutive states, 0, 1, and 2. The state task T1 is currently in is sent via a message to task T2 before runnable R1 is called. If the content of the message sent by task T1 is 1, task T2 increases its internal state, e.g. from state 0 to 1, and if it is 0, task T2 decreases its internal state accordingly. Then, depending on the state task T2 is currently in, the corresponding runnable (R2,0 for state 0, etc.) is executed. If the maximum or minimum state of task T2 is reached but the received message from task T1 tells task T2 to further increase or, respectively, decrease its internal state, task T2 sends a message to task T1. This message causes task T1 to toggle its internal state, which consequently results in a switch from increasing to decreasing or vice versa. In addition to this state machine feedback loop, other system architecture patterns are added to be executed concurrently in order to increase the complexity. The tasks T3 and T4 represent a client-server without reply and are equal to the tasks T1 and T2 respectively as described above in the modeling example "Client-Server without Reply". T5 is a periodically activated task without any communication and identical to task T7 in the modeling example "Purely Periodic without Communication".
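The toggle behavior at the state boundaries can be sketched as follows (illustrative Python; that T2 stays in its boundary state during the exchange that toggles T1 is an assumption based on the description):

```python
def step(t1_dir, t2_state):
    """One exchange between T1 and T2: T1 sends its direction bit,
    T2 moves; at the 0/2 boundary T2 replies and T1 toggles."""
    nxt = t2_state + 1 if t1_dir == 1 else t2_state - 1
    if 0 <= nxt <= 2:
        return t1_dir, nxt
    # boundary reached: T2's reply makes T1 reverse direction
    return 1 - t1_dir, t2_state

direction, state = 1, 0
trace = []
for _ in range(8):
    direction, state = step(direction, state)
    trace.append(state)
print(trace)   # T2's state oscillates between 0 and 2
```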

The table below gives a detailed specification of the tasks and their parameters. The tasks are scheduled according to fixed-priority, preemptive scheduling, and if not indicated otherwise, all events are active in order to get a detailed insight into the system's behavior.

Task  Priority  Preemption  MTA*  Activation                                              Execution Time
T1    2         FULL        1     Periodic (Offset = 0, Recurrence = 300 * 10^6)          R1: Uniform (Min = 9.9 * 10^6, Max = 10 * 10^6)
T2    1         FULL        1     Periodic (Offset = 15 * 10^6, Recurrence = 250 * 10^6)  R2,0: Uniform (Min = 99, Max = 100)
                                                                                          R2,1: Uniform (Min = 99 * 10^3, Max = 100 * 10^3)
                                                                                          R2,2: Uniform (Min = 49.5 * 10^6, Max = 50 * 10^6)
T3    4         FULL        1     Periodic (Offset = 0, Recurrence = 100 * 10^6)          R3: Uniform (Min = 99 * 10^5, Max = 100 * 10^5)
T4    3         FULL        1     Periodic (Offset = 15 * 10^6, Recurrence = 60 * 10^6)   R4,x: Uniform (Min = 99, Max = 100)
                                                                                          R4,1: Uniform (Min = 990, Max = 1 * 10^3)
                                                                                          R4,2: Uniform (Min = 49.5 * 10^3, Max = 50 * 10^3)
                                                                                          R4,3: Uniform (Min = 990 * 10^3, Max = 1 * 10^6)
                                                                                          R4,4: Uniform (Min = 49.5 * 10^6, Max = 50 * 10^6)
T5    0         FULL        1     Periodic (Offset = 0, Recurrence = 1000 * 10^6)         R5,1: Uniform (Min = 59.4 * 10^6, Max = 60 * 10^6)
                                                                                          R5,2: Uniform (Min = 19.8 * 10^6, Max = 20 * 10^6)

*MTA = Multiple Task Activation Limit

In order to show the impact of changes to the model, the following consecutive variations are made to the model:

1) Initial Task Set
For this variation, the tasks T1 and T2 of the table above are active.
2) Increase of Task Set Size I
For this variation, the tasks T1, T2, T3, and T4 are active. That way the utilization of the system is increased.
3) Increase of Task Set Size II
As from this variation on, all tasks are active. That way the utilization of the system is increased.
4) Inter-process Activation
As from this variation on, task T2 gets activated by an inter-process activation from task T1, and task T4 by an inter-process activation from task T3, instead of being activated periodically. The inter-process activation in task T1 is performed right after the message to task T2 is written and consequently before the runnable R1 is called; in task T3, task T4 is activated right before runnable R3. That way, a direct connection between these tasks is established.
5) Event Frequency Increase
As from this variation on, the periodicity of the tasks T1, T3, and T5 is shortened. For task T1, the value for the period is set to 220 * 10^6, for task T3 to 50 * 10^6, and for task T5 to 500 * 10^6. That way, the information density is increased.
6) Execution Time Fluctuation
As from this variation on, the execution time distribution is widened for all tasks. Therefore, the maximum of each uniform distribution is increased by 1 percent so that it now varies by 2 percent. That way, the utilization of the system is increased, which results in extinct activations.
7) Activation
As from this variation on, the maximum number of queued activation requests for all tasks is set to 2. That way, the problem with extinct activations resulting from the previous variation is solved.
8) Accuracy in Logging of Data State I
For this variation, the data accesses in all tasks are omitted. Instead, all runnables representing a state are executed equally at random, meaning the runnables R2,0, R2,1, and R2,2 are each executed with a probability of 33%, and the runnables R4,x, R4,1, R4,2, R4,3, and R4,4 each with a probability of 20%. That way, only a limited insight into the system's runtime behavior is available.
9) Accuracy in Logging of Data State II
For this variation, just task events are active. That way, only a limited insight into the system's runtime behavior is available.

2.3.8 Democar Example

The so-called Democar model presented in the AMALTHEA example files describes a simple engine management system.

Origin

The Democar example is loosely based on the publicly available information taken from


A Timing Model for Real-Time Control-Systems and its Application
on Simulation and Monitoring of AUTOSAR Systems
Author: Patrick Frey

A version of the dissertation can be downloaded from the University of Ulm.

Files

AMALTHEA_Democar.amxmi
contains the complete model, consisting of a hardware model, a model of the operating system, a stimulation model and a model that describes the software system.

Amalthea_ArExampleEngine.atdb
is an AMALTHEA Trace Database; it contains a trace that is the result of a simulation of this example.

2.3.9 Hwmodel Example

This example contains several hardware model examples. These are:

gateway.amxmi
which describes how to model master/slave networks. It shows a Local Interconnect Network® (LIN) with one master, represented by the microcontroller "Gateway", and two sensors which respond to multiple unique commands. The LIN specification is publicly available under http://www.lin-subbus.org/.

memory.amxmi
which describes how to model memory segments with the AMALTHEA hardware model.

MPC5668G.amxmi
which describes a sample system consisting of a Freescale MPC5668G Demo Board and its SoC.
It is loosely based on the publicly available information taken from the Freescale website: Product summary MPC5668G.

2.4 Tutorials

2.4.1 Task creation example

The following section will describe how to generate tasks out of software models with sequencing constraints.

Step 1

Create a new example project based on the "Democar Example" ( File -> New -> Example).

Step 2

Click on the left arrow of the "org.eclipse.app4mc.amalthea.example.democar.mapping" project to expand it. You will see several files with models of the democar project, each of them containing one specific model, e.g. "AMALTHEA_Democar_MappingExample.amxmi" with the software model and constraints model. Usually, a software model needs to be partitioned before it can be mapped. You can either use the partitioning plugin for this, or the included (already partitioned) software model.
Right-click on the file "AMALTHEA_Democar_MappingExample.amxmi" and click "Generate Tasks" to start the task generation process. This will also convert the Activation elements from the partitioned software model to Stimuli and store these in a stimulation model.

Step 3

Once the tasks have been generated, a new file with tasks will be created in the "output" folder. The file "AMALTHEA_Democar_MappingExample_withTasks.amxmi" contains the final software model, the stimulation model and the constraints model and is ready to be mapped.

2.4.2 Mapping example 1

The following section will describe how to calculate a mapping with the AMALTHEA tool platform using the DFG and ILP based load balancing algorithms.

Step 1

The mapping example requires a software model with tasks. You can either use the resulting model from the previous task creation example ("AMALTHEA_Democar_MappingExample_withTasks.amxmi") or the already existing software model example. Right-click the file with the software model you would like to map and select "Perform Mapping" to open the GUI for the mapping.

Step 2

The name of the project as well as the path to the software project have already been entered for you. To perform the mapping, you need to specify the path to a hardware model. This can be done by clicking on the second "Search" button from the top and locating the file "AMALTHEA_Democar_MappingExample-hw.amxmi". Now you only need to click "Create Mapping" to perform the calculations and create a mapping from tasks to cores.

Step 3

Once the mapping is performed, a new file will be generated in the "output" folder. Mapping and OS can be found in the file "AMALTHEA_Democar_MappingExample_withTasks_out.amxmi". To inspect this file, just double-click on it to open a content tree explorer.
Additional information about the mapping is provided in the console.

2.4.3 Mapping example 2

The following section will describe how to calculate a mapping with the AMALTHEA Tool Platform using the experimental energy minimization algorithm.

Step 1

The first step is to change the mapping algorithm into the "Energy efficient mapping" algorithm. Do this by clicking on "Window" and then "Preferences" in the menu bar. The preferences window will open and you are able to configure the mapping plugin on its preferences page under the point "AMALTHEA Mapping".
Select "Energy efficient mapping (Experimental)" and close the page using the OK button on the bottom.

Step 2

The algorithm for energy efficient mapping (minimization of the energy consumption) comes with specifically altered model files based on the Democar and HVAC use cases, which are stored in the folder "modelEnergyEfficient". Open this folder, right-click on one of the two "AMALTHEA????_EnergyEfficientMappingExample.amxmi" files and select "Perform Mapping". As this algorithm comes with its own task creation algorithm, we can skip the task creation process.

Step 3

Similar to the previous example, the name of the project as well as the path have already been entered for you. To perform the mapping, specify the path to the hardware model. This is done by clicking the second search button from the top and locating the file "IMX6.amxmi-hw" within the "modelEnergyEfficient" folder. Start the mapping calculation by hitting the "Create Mapping" button.

Step 4

Once the mapping is performed, two additional files will be generated in the "output" folder. The mapping can be found in the file "AMALTHEA_????_MappingExample_withTasks_mapping.amxmi-mapping". To inspect this file, just double-click on it to open a content tree explorer.
Additional information about the mapping is provided in the console.

2.4.4 Task Visualizer Example

The following section describes how to visualize the execution of tasks along with their states on cores using the task visualizer plugin.

Preparation

In order to execute the task visualizer, it is necessary to describe the overall system in terms of an AMALTHEA Model file. The minimal amount of information consists of

  • Tasks (Software Model) which contain Runnables (Software Model) and refer to Periodic Activations (Stimuli Model)
  • Cores (Hardware Model)
  • Allocations of Tasks to Cores (Mapping Model)
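For illustration only, the minimal information listed above can be represented as a plain data structure with a consistency check (all names are hypothetical; this is not the AMALTHEA model API):

```python
# Hypothetical minimal system description mirroring the list above.
model = {
    "software": {
        "tasks": {
            "Task_A": {"runnables": ["Runnable_A1"], "stimulus": "Stim_A"},
        },
    },
    "stimuli": {"Stim_A": {"type": "periodic", "offset": 0, "recurrence": 15}},
    "hardware": {"cores": ["Core_0"]},
    "mapping": {"Task_A": "Core_0"},
}

def validate_minimal(m):
    """Check the three prerequisites: tasks with runnables and a stimulus,
    at least one core, and every task allocated to a known core."""
    assert m["hardware"]["cores"], "at least one core required"
    for name, task in m["software"]["tasks"].items():
        assert task["runnables"], f"{name} has no runnables"
        assert task["stimulus"] in m["stimuli"], f"{name} has no stimulus"
        assert m["mapping"].get(name) in m["hardware"]["cores"], \
            f"{name} is not allocated to a core"
    return True

print(validate_minimal(model))
```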

Creating such a Model can either be done manually or by following the Tutorial on "Mapping example 1"

Step 1

Right-click the file containing the System you would like to visualize and select "Visualize Task Execution" from the "AMALTHEA Multicore Features" context menu to open the Task Visualization Overview.

Step 2

The Task Visualization Overview provides basic information about the software as well as the hardware of the system. The upper part summarizes the allocations of tasks to cores, along with the number of total instructions per task, the period between a task's activations, the core these tasks are allocated to, as well as the Worst Case Execution Time (WCET) in ps and as a percentage of the core's total utilization. The lower part summarizes the available cores along with the number of computable instructions per second (IPS) and their utilization (with 1.0 being 100%). Continue by clicking next to advance to the Task Dependencies Overview.

Step 3

The Task Dependencies Overview visualizes the task graph with its inter-task dependencies as well as the communication delay. For simplicity, several restrictions are applied to the considered dependencies as illustrated in the box "Relations Filter".

If the check box "Consider task precedence" is deactivated, the tasks will be handled as batch tasks without any dependencies among each other.

For this tutorial, we activate the task precedence consideration and advance to the scheduling selection by clicking next.

Step 4

The Scheduling Selection screen allows fine-tuning the execution visualization and specifying a scheduling algorithm. So far, the following scheduling approaches are implemented:

  • Preemptive Rate Monotonic Scheduling
  • Cooperative Earliest Deadline First
  • Preemptive Earliest Deadline First

The scale allows configuring the "step" or "increment size" used during the execution calculation, e.g. by setting this value to ms, each internal "tick" will increment the execution model by one millisecond. If a task's total execution time is lower than the increment size, it will be rounded up to one millisecond. Consequently, the value should be set to a scale similar to the average execution time of the tasks.
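The rounding behavior described above can be sketched as (illustrative Python, not the tool's implementation):

```python
import math

def ticks_needed(execution_time, increment):
    """Number of simulation ticks a task occupies: execution times are
    rounded up and occupy at least one full increment."""
    return max(1, math.ceil(execution_time / increment))

# With an increment of 1 ms (all values in ms):
print(ticks_needed(0.3, 1))   # shorter than one tick -> rounded up to 1
print(ticks_needed(2.5, 1))   # partially used ticks count fully
```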

Finally, the Simulation-Time allows specifying the time frame which should be visualized, e.g. with a task period of 15 ms, a value of 30.000.000.000 ps (30 ms) will visualize 2 periods.

To continue with this tutorial, set all values as shown in the figure and click "Finish" to plot the Execution Visualization.

Step 5

Finally, a Gantt-Chart illustrating the execution of each core is shown. By double-clicking the tab "TraceView", you can maximize the window to full-screen.

The top element of each chart illustrates the core's utilization, followed by the states of each task, which are defined as follows:

  • Green: Running
  • Yellow: Ready
  • Red: Suspended
  • Orange: Waiting

The description of the actual state as well as its start/end times can also be seen by hovering the cursor over a state.

2.4.5 AMALTHEA Trace Database (ATDB) Import Example

The following section describes how to use the information contained within an AMALTHEA Trace Database (ATDB) to create a model.

Step 1

The figure above shows the content of "Amalthea_ArExampleEngine.atdb", an AMALTHEA Trace Database provided with the "Democar Example".
To get this file, create a new example project (see "Creating an example") based on the "Democar Example".

Step 2

Right-click on the project folder "org.eclipse.app4mc.amalthea.example.democar" (or any other). A context menu with several options appears; select "Import...".
This opens a dialogue listing the available import options. Open the folder called "AMALTHEA" by clicking on the cross next to its title. This is shown in the figure above.

Step 3

Next, select the option "From AMALTHEA Trace DB" by clicking on it and proceed to the next page by either double-clicking the option name or by pressing the "Next" button. There you are prompted to enter the location of the ATDB file you want to import. Click the upper "Browse..." button to navigate your file system and select the desired file. If you want to change the target project in which the model will be created, press the lower "Browse..." button and make your choice. The figure above shows an example input.

Step 4

After you have made all required inputs, click on "Finish". This will start the analysis of the database. Once the model creation is done, the dialog will close and the newly created amxmi file is opened in the AMALTHEA Model editor as shown in the figure above.

2.5 Editors / Viewers

In addition to the standard model editor there are editors and viewers for specific purposes:

  • AMALTHEA Trace Database Metrics Viewer

2.5.1 AMALTHEA Trace Database Metrics Viewer

This is a simple viewer that shows how to access the AMALTHEA trace database.

2.5.2 Other editors

Further (graphical) editors / viewers are under development.

2.6 Model Validation

2.6.1 Usage of Check-based Validation

The AMALTHEA validation can be triggered by right-clicking a model element in the Model Explorer view on the left and choosing Validate – Check-based Validation from the context menu.

All AMALTHEA sub-elements of the chosen element are then included in the validation. In the next window a catalog appears that allows the selection of specific validation checks based on your selected model. The button "Select All" will select all listed validation checks for execution. Each entry of the catalog is called a Category and may contain several validation checks (i.e. constraints). The simplest way is to define a 1:1 relationship between category and constraint.

If an error is found, it is shown in the Problems view of Eclipse. A double click on the error will lead you to the affected elements in the AMALTHEA model. Sphinx validation distinguishes between three severities: error, warning and info.

2.6.2 Available Validations

The plugin org.eclipse.app4mc.amalthea.model.all.sphinx.validation contains several validators and constraints (i.e. checks) for various model types of AMALTHEA.

Basic Check Validator

Basic checks that cannot be assigned to a specific AMALTHEA sub-model are summarized here. The implementation responsible for this validation is the class org.eclipse.app4mc.amalthea.validation.checks.BasicCheckValidator.

Missing Referable Name
Checks the name of all objects that are instances of IReferable. The name is used to refer to objects in the AMALTHEA model, therefore missing names are handled as an error.

Duplicate Unique Name
Checks the unique name of all objects that are instances of IReferable. The unique name is used to refer to objects in the AMALTHEA model, therefore duplicates are handled as an error.
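Both referable-name checks boil down to a single pass over the model's IReferable objects; a simplified sketch (using plain (type, name) tuples instead of real model elements) could look like this:

```python
# Sketch of the missing-name and duplicate-unique-name checks, assuming the
# model's referable objects are reduced to simple (type, name) records.

def check_referables(objects):
    problems, seen = [], set()
    for obj_type, name in objects:
        if not name:
            problems.append(("error", f"{obj_type}: missing name"))
            continue
        key = (obj_type, name)   # names are assumed unique per element type
        if key in seen:
            problems.append(("error", f"{obj_type} '{name}': duplicate unique name"))
        seen.add(key)
    return problems

print(check_referables([("Task", "t1"), ("Task", "t1"), ("Label", "")]))
```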

Undefined Unit
Checks all units: TimeUnit, FrequencyUnit, DataSizeUnit, DataRateUnit.
If a unit is undefined, it is handled as an error.

Custom Property Duplicate Key
A validation to check all custom attributes of one element. If it contains duplicate key entries, this is treated as a warning.

Weibull Parameters
Checks the parameters in the Distribution of type WeibullEstimators. The parameters must not be equal to each other; if they are, this is handled as a warning.

ConstraintsModel Check Validator

The implementation responsible for this validation is the class org.eclipse.app4mc.amalthea.validation.checks.ConstraintsModelCheckValidator.

EventChainsConstraint
Checks for the AMALTHEA constraints model whether the EventChains are consistent regarding completeness (fully built chain) and correctness. For each EventChain's stimulus/response pair it is checked whether the events exist and are linked properly to a complete chain. For an event chain of type SubEventChain it is checked whether its "border" events fit the stimulus and response event of the parent event chain. Cascades of sub event chains are processed recursively. Missing or wrong information in the event chain is handled as an error.

MappingModel Check Validator

The implementation responsible for this validation is the class org.eclipse.app4mc.amalthea.validation.checks.MappingModelCheckValidator.

Process2Scheduler2CoreMappingConstraint
Checks for the entire AMALTHEA model instance whether the mapping information between the Software (SW), Operating System (OS) and Hardware (HW) models is complete and correct, i.e. the mapping relationship {Process (SW) to Scheduler (OS) to Core (HW)}. Missing mapping information (i.e. no entity) or unmapped Processes are handled as an error.
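A simplified sketch of this constraint, with hypothetical dictionaries standing in for the real mapping model:

```python
# Sketch of the Process -> Scheduler -> Core mapping completeness check.
# The dictionaries are made-up stand-ins for the AMALTHEA mapping model.

def check_mapping(processes, process_to_scheduler, scheduler_to_core):
    errors = []
    for p in processes:
        sched = process_to_scheduler.get(p)
        if sched is None:
            errors.append(f"process '{p}' is not mapped to a scheduler")
        elif scheduler_to_core.get(sched) is None:
            errors.append(f"scheduler '{sched}' of '{p}' is not mapped to a core")
    return errors

print(check_mapping(["t1", "t2"], {"t1": "s1"}, {}))
```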

EventModel Check Validator

The implementation responsible for this validation is the class org.eclipse.app4mc.amalthea.validation.checks.EventModelCheckValidator.

to be extended

HardwareModel Check Validator

The implementation responsible for this validation is the class org.eclipse.app4mc.amalthea.validation.checks.HardwareModelCheckValidator.

to be extended

2.7 Model Migration

2.7.1 AMALTHEA Model Migration

Why is model migration required?

EMF-based models are instances of an Ecore meta model (which is updated for each release).

As there is a tight dependency between a model instance and the corresponding meta model, old EMF models cannot be loaded with a newer release of the meta model.

Example: Due to a change in the namespace of the meta model, loading model files from prior versions would fail with the latest version.

This problem can be solved by explicitly migrating the model files from the prior versions to be compatible with the latest meta model version.

AMALTHEA model migration

As described above, the same scenario also applies to AMALTHEA models, as they are instances of the EMF-based AMALTHEA Ecore meta model.

Each release of AMALTHEA changes the meta model contents; because of this it is not possible to load models built with previous releases of AMALTHEA into the latest tool distribution.

Model migration functionality is part of this distribution. Using this feature it is possible to convert models from previous releases (e.g. itea.1.0.3, itea.1.1.0, itea.1.1.1, 0.7.0, 0.7.1, 0.7.2) to ones that are compatible with the next versions of the AMALTHEA meta model.

Only forward migration of models is supported by the Model Migration functionality of AMALTHEA.

2.7.2 Supported versions for model Migration

The model migration functionality provides a possibility to migrate models (created with previous releases of AMALTHEA) to the latest versions.

Only forward migration is supported.

AMALTHEA meta model versions information

The following releases of the AMALTHEA meta model were published while AMALTHEA was an ITEA2 funded project (for details refer to: www.amalthea-project.org):

  • 1.0.3
  • 1.1.0
  • 1.1.1

Starting with the following versions, the AMALTHEA meta model is part of the official Eclipse project APP4MC (see: www.eclipse.org/app4mc):

  • 0.7.0
  • 0.7.1
  • 0.7.2

Model migration

As described above, only forward migration is supported by the AMALTHEA model migration utility.

The model migration utility migrates the specified model sequentially to the next versions (step by step) until the expected version is reached.
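The sequential chain can be sketched as follows (version identifiers as listed in this guide; the function only computes the step sequence, not the actual transformations):

```python
# Sketch of the step-by-step migration chain: the model is migrated to the
# immediately following version until the target version is reached.

VERSIONS = ["itea.1.0.3", "itea.1.1.0", "itea.1.1.1", "0.7.0", "0.7.1", "0.7.2"]

def migration_steps(source, target):
    i, j = VERSIONS.index(source), VERSIONS.index(target)
    if i > j:
        raise ValueError("only forward migration is supported")
    return [(VERSIONS[k], VERSIONS[k + 1]) for k in range(i, j)]

print(migration_steps("itea.1.0.3", "0.7.0"))
# -> [('itea.1.0.3', 'itea.1.1.0'), ('itea.1.1.0', 'itea.1.1.1'), ('itea.1.1.1', '0.7.0')]
```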

The figure below shows the steps involved in the migration of a model from ITEA2 version 1.0.3 to APP4MC version 0.7.0:

2.7.3 Pre-requisites for AMALTHEA model migration

VM arguments

The default max heap memory (-Xmx) used by the APP4MC distribution is 2 GB. In case of migrating huge models, it is recommended to increase this memory to 4 GB before invoking the "AMALTHEA Model Migration" feature.

Follow the steps below to increase the heap memory setting of APP4MC:

  • Open the app4mc.ini file (located in the APP4MC installation directory) and change the -Xmx parameter from 2g to 4g. (Note: If the APP4MC plugins are integrated into a custom Eclipse application, the -Xmx parameter of the corresponding <application_name>.ini file should be updated in the same way.)

Usage of UUIDs instead of URI fragments inside AMALTHEA models

It is recommended that AMALTHEA models (versions 1.0.3 and 1.1.0) use UUIDs instead of XPath-like URI fragments for identifying elements.

Note: The default referencing mechanism within an EMF model or across models is to use XPath-like URI fragments (example: "//@stimuliModel/@stimuli.9"). This is not recommended when model elements are modified through an external mechanism (instead of the standard EMF editors), as it is quite challenging to update the index order and references accordingly.

Model files generated by the AMALTHEA (1.0.3) / Amalthea4Public (1.1.0) IDE contain UUIDs instead of XPath-like URI fragments. From Amalthea4Public (1.1.1) on, custom URI fragments are introduced which are based on the element names rather than the index.

How to identify if XPath-like URI fragments are used in AMALTHEA model files

Open the AMALTHEA model files in a text editor and

  • search whether the xmi:id attribute is present on the tags
  • search for the string "//@" and check whether the results contain strings like "//@stimuliModel/@stimuli.9"

If there are no results for "xmi:id" but there are results for "//@", then URI fragments are used in the AMALTHEA model files.
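The two searches can be automated; the following sketch classifies a model file's referencing style using exactly the markers described above:

```python
# Sketch: classify a model file as using UUIDs or XPath-like URI fragments
# by searching for the two markers described in the text.

def referencing_style(xml_text):
    has_uuid = 'xmi:id="' in xml_text
    has_fragment = '//@' in xml_text
    if has_uuid:
        return "UUID"
    if has_fragment:
        return "URI fragment"
    return "unknown"

print(referencing_style('<calls xmi:id="_FddJyEJsEeW3445fEkSwWQ"/>'))   # UUID
print(referencing_style('<stimuli href="//@stimuliModel/@stimuli.9"/>'))  # URI fragment
```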

How to identify if UUIDs are used in AMALTHEA model files

Open the AMALTHEA model files in a text editor and

  • search whether the xmi:id attribute is present on the tags

If the model elements have xmi:id attributes for their identification and referencing, the model content looks like this:

<calls xsi:type="sw:TaskRunnableCall" xmi:id="_FddJyEJsEeW3445fEkSwWQ" runnable="_FeWjyUJsEeW3445fEkSwWQ"/>

Update AMALTHEA models to use UUIDs instead of XPath-like URI fragments

Once it is identified that models contain XPath-like URI fragments, it is recommended to update them to use UUIDs before invoking model migration.

To update models to use UUIDs, use one of the plugins below (depending on the model version) and execute it in the AMALTHEA distribution the model belongs to:

AMALTHEA model version   Utility plugin to convert URI fragments to UUIDs
1.0.3
1.1.0

Follow the steps below to convert URI fragments to UUIDs:

  1. Import the downloaded utility plugin into the corresponding AMALTHEA distribution



    • Select the option "Existing Projects into Workspace"



    • Select the location of zip file (downloaded utility plugin)



  2. Open the models folder in the project -> remove the existing models and copy the models which should be updated with UUIDs (it is important to copy all the models and the dependent models with the same folder structure)



  3. Configure the copied models to be considered for the UUID update:



  4. Execute the workflow to update the AMALTHEA models:



  5. The updated model files (with UUIDs) will overwrite the input models. These model files should be used for model migration.

Special behavior

During the UUID update process the following behavior applies:

  • If AMALTHEA model elements refer to Franca interfaces, the references are removed but the content of each reference is copied as a CustomProperty of the FInterfacePort element



2.7.4 How to invoke AMALTHEA model migration

The AMALTHEA model migration utility is developed as an Eclipse plugin and is part of the APP4MC distribution (from release 0.7.0).

The model migration utility can be invoked by selecting the models to be migrated in the UI and specifying the target AMALTHEA version to which the models should be migrated.

  • Step 1: Selection of AMALTHEA models

  • Step 2: Opening AMALTHEA Model Migration dialog and configuring migration inputs

On click of the AMALTHEA Model Migration action, the selected files are parsed and the following information is retrieved:

  1. The AMALTHEA model version the model belongs to
  2. The AMALTHEA models referred to inside the selected model (i.e. in case of cross-document references)
    • The AMALTHEA model version these models belong to

Once the above information is successfully retrieved, the following Model Migration dialog is opened:

The Model Migration dialog consists of the following information:

  1. Selected AMALTHEA model files: These are the models which are explicitly selected by the user
  2. Model scope files: These are the model files which are referred to in the models explicitly selected by the user
    Example: In the dialog shown above, mapping.amxmi is the file selected by the user, and it contains references to model elements of: sw.amxmi, sw.amxmi-sw, default.amxmi-os. For this reason, all these referred models are selected implicitly for model migration
    Note: If the selected/referred models have different AMALTHEA model versions, migration cannot proceed and the following error dialog is displayed
  3. Input model version: This is the AMALTHEA model version to which both the selected and referred models belong
    Note: AMALTHEA model versions are represented with the following naming convention in this dialog
    AMALTHEA 1.0.3 -> itea.103
    AMALTHEA4public 1.1.0 -> itea.110
    AMALTHEA4public 1.1.1 -> itea.111
    APP4MC 0.7.0 -> 0.7.0
    APP4MC 0.7.1 -> 0.7.1
    APP4MC 0.7.2 -> 0.7.2

  4. Output model version: This is the AMALTHEA model version to which both the selected and referred models should be migrated
  5. Output directory: This is the location where the migrated AMALTHEA models are generated.
    Note: This option is available only if the specified output model version is 1.1.1 or higher.
    For AMALTHEA model versions 1.0.3 and 1.1.0 the model scope is the project, whereas from version 1.1.1 on the model scope is limited to a folder. For this reason the model migration utility behaves as follows:
    • If the output model version is 1.1.1 or higher -> the migrated models are generated in a single folder (due to the scope restriction) specified by the user
    • If the output model version is 1.1.0 -> the input model files are overwritten by the migrated model files
  • Step 3: Model migration

Once the required parameters are configured in the model migration dialog, click the "Migrate Models" button to invoke the migration.

Special cases:

  • Handling of input model files with names in the following format: <model name>.amxmi-<sub model name> (example: test.amxmi-sw)
    • From version 1.1.1 on, files with these extensions are not recognized by the AMALTHEA loader as AMALTHEA models. For this reason, if such files are supplied as input to the model migration utility, the generated files follow the convention <model name>.amxmi-<sub model name>.amxmi (example: test.amxmi-sw.amxmi).
      For details refer to the image below containing both the input models and the migrated models:

2.7.5 Additional details

For details regarding the following topics, refer to the corresponding links:

  1. How are model elements migrated across various versions?
  2. How to update models to contain UUIDs?
  3. How to update the max heap memory used by the application?

3 Data Models

3.1 Data Model Overview

The AMALTHEA data models are related to the activities in a typical design flow. The focus is on design, implementation and optimization of software for multicore systems. The data exchange between the activities is supported by the two main models of AMALTHEA, the System-Model and the Trace-Model.

Modeling
The behavior of a component or system is often defined in the form of block diagrams or state charts. Dynamic behavior can also be formalized by differential equations that describe how the state of a system changes over time. Modeling tools like Matlab/Simulink, ASCET or Yakindu allow simulating the model and generating software descriptions and implementations.

Partitioning
Based on the description of the software behavior (e.g. label accesses, synchronization, ...) this step identifies software units that can be executed in parallel.

System Modeling
The structure of the hardware (e.g. cores, memory sizes, ...) and system constraints are added to the model.
The constraints limit the possible results of the next step.

Optimization
The activity of assigning executable software units to the cores and mapping data and instructions to memory sections. This step can be done manually or supported by a tool that optimizes the partitioning based on information about the software behavior (e.g. data dependencies, required synchronization, etc.).

Simulation / Software Execution
In this step model-based simulation tools evaluate the timing behavior of the software.
Typically these types of high level simulations are based on the hardware and software description of the system.
Low level simulators (e.g. instruction set simulators) or real hardware can be used to execute existing software.
The resulting traces provide additional data that is the basis for a more detailed analysis.

A simplified picture shows the main purpose of the models.

The open AMALTHEA models allow custom tooling, interoperability of tools and the combination of different simulation or execution environments.

3.1.1 AMALTHEA System Model

The System Model contains:

Hardware / ECU Description
Hardware abstraction that includes general information about the hardware. Examples are: Number of cores, features of the cores, available memory, access times (from core x to memory y), etc.

SW Description
The description contains information about the static or dynamic behavior of the software. This includes: tasks, software components, interfaces, variables, types, etc. It is also possible to describe the characteristics of software functions like the access to variables (read, write, frequency) or the calls to service routines (call tree).

Timing Constraints
Timing Constraints like End-to-End Delay, Latency and Synchronization can be formally written in the "TIMMO Timing Augmented Description Language" (TADL). They are derived from timing requirements or control theory.

Mapping Constraints
The different cores of a typical embedded multicore ECU have different features. For optimal performance it is necessary to restrict the assignment of some software functions to e.g. cores with fastest I/O connections or the maximum clock rate. For safety reasons it is required that some functions are located on specific cores that e.g. can run in lock step mode. Constraints like this are represented in this sub model.

SW Mapping
All information about the assignment of software units (e.g. tasks or runnables) to the cores and about the mapping of data and instructions to memory sections.

3.1.2 AMALTHEA Trace Model

There is no specific EMF data model to describe event traces. The relevant events and their states are represented in the Event Model. In addition special trace formats for multicore have been specified in the AMALTHEA project and a Trace Database has been implemented. This database stores traces in a way that allows fast retrieval of the information (see the Developer Guide for a detailed description of the database structure).

3.1.3 Structure of the model

The definition of the AMALTHEA data model follows some basic principles:

  • The model is defined in one package to simplify the handling (e.g. allow opposite references).
  • Different aspects are addressed in different logical sub models.
  • Existing EMF models from other Eclipse projects are reused and referenced instead of creating own definitions.
  • References are based on unique names within the same type of element.

We also try to use cycles wherever possible instead of direct time information. This has advantages in a multi-core environment, as the cores can have different clock frequencies.

The following figure shows the different logical parts of the AMALTHEA data model and how they are referencing each other. The central AMALTHEA model and common model that contains reusable elements are drawn without connections in this diagram.

3.2 Common Model

The AMALTHEA common model is not a model by itself, but provides standard elements used by other models as containments.

3.2.1 Common Elements

The only exception is the CommonElements class that provides a central container for tags and classifiers. These elements are used in many sub models where references to tags or classifiers provide a mechanism to annotate the objects.

3.2.2 Tags

Tags are a generic possibility to annotate objects in the AMALTHEA model.

3.2.3 Classifiers

Classifiers are used to define specific features or abilities of a core or a memory. They are used in the PropertyConstraintsModel to restrict the allocation to cores or the memory mapping.

3.2.4 Custom Properties

The CustomProperty element is used to define own properties that are not (yet) available in AMALTHEA. If there is the need to extend an element or to store tool information related to processing steps, CustomProperties can be used to store this type of information. It also provides the possibility to work with prototypical approaches that later (if established and more stable) can be integrated in the standard model.

The elements are stored in a HashMap. The values can be of different types as shown in the structure picture, like String, Integer, Boolean...
In addition a ReferenceObject is available to store own references to other EObject elements.
The ListObject can be used to store multi-valued custom properties.

3.2.5 Time (and Time Unit)

The AMALTHEA data model includes a common element to describe time ranges in an easy way, the Time element. The Time class in general allows defining negative time values. If only positive values are expected, the AMALTHEA validation will show a warning.
The Time element can be referred to by any other element to specify attributes that store time information.
Time units are needed to describe different timing behavior and requirements, like deadlines or offsets of components.
To support different time ranges, and especially different time units, AMALTHEA predefines types like seconds, milliseconds, microseconds, nanoseconds and picoseconds.
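A minimal sketch of such a unit-aware time value, normalized to picoseconds (the unit abbreviations are chosen for the example, not taken from the meta model):

```python
# Sketch: a time value with one of the predefined units, normalized to
# picoseconds so that values with different units can be compared.

PS_PER_UNIT = {"s": 10**12, "ms": 10**9, "us": 10**6, "ns": 10**3, "ps": 1}

def to_ps(value, unit):
    return value * PS_PER_UNIT[unit]

print(to_ps(30, "ms"))                      # 30000000000
print(to_ps(15, "ms") == to_ps(15_000, "us"))  # True
```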

3.2.6 Frequency (and Frequency Unit)

3.2.7 Data Size (and Data Size Unit)

The DataSize (and DataRate) definition contains units and prefixes

  • according to the SI standard
  • for binary multiples

International System of Units (SI)
Name Prefix Factor
kilo k 10^3
mega M 10^6
giga G 10^9
tera T 10^12
peta P 10^15

Prefixes for binary multiples
Name Prefix Factor
kibi Ki 2^10
mebi Mi 2^20
gibi Gi 2^30
tebi Ti 2^40
pebi Pi 2^50
exbi Ei 2^60

The DataSize provides convenience methods to also get the size in bits and bytes.
It is internally converted and can be retrieved both ways.
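A sketch of these conversions, assuming a value/unit pair and the prefix factors listed above:

```python
# Sketch of DataSize conversion with SI and binary prefixes; the
# convenience accessors return the size in bytes and in bits.

FACTORS = {"B": 1, "kB": 10**3, "MB": 10**6, "GB": 10**9,
           "KiB": 2**10, "MiB": 2**20, "GiB": 2**30}

def to_bytes(value, unit):
    return value * FACTORS[unit]

def to_bits(value, unit):
    return 8 * to_bytes(value, unit)

print(to_bytes(4, "KiB"))   # 4096
print(to_bits(1, "kB"))     # 8000
```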

3.2.8 Data Rate (and Data Rate Unit)

3.2.9 Deviation

The Deviation class is used to model statistical distributions within AMALTHEA. There is a wide variety of possible use cases where such a distribution is needed. With such a deviation, for example, the variation in run times of functions or their instructions can be imitated. AMALTHEA currently supports the following distributions:

Boundaries

With the Boundaries class it is possible to define the deviation of instructions in such a way that a specific scenario is covered. The scenario is specified on the one hand by the minimum and maximum values between which the instructions vary, and on the other hand by the Sampling Type, which selects the specific scenario that is covered. The following sampling types are available; they are visualized in the figures below:

BestCase
Defines the scenario in which most instances should have runtimes close to the set minimum runtime, but still should consider some more time-consuming outliers up to the set maximum.
WorstCase
Defines the scenario in which most instances should have runtimes close to the set maximum runtime, but still should consider some less time-consuming outliers down to the set minimum.
AverageCase
Defines the scenario in which most instances should have runtimes close to the middle between the set minimum and maximum, but still should consider some less and more time-consuming outliers down to the set minimum or up to the set maximum respectively.
CornerCase
Defines the scenario in which most instances should have runtimes close to the set minimum and maximum runtime, but still should consider some other time-consuming outliers between those two.
Uniform
Defines the scenario in which all instances should have runtimes that are uniformly distributed between the set minimum and maximum.

Uniform Distribution

The uniform distribution is a statistical distribution where the values between the stated lower and upper bound are equally likely to be observed.

Gaussian/Normal Distribution

The Gaussian/normal distribution is a statistical distribution where the values decrease symmetrically. The maximum value and thus its location is stated by the mean, and the rate of decrease is defined by its standard deviation. Since the curves approach zero on either side, an additional upper and lower bound can be added to constrain the values.

Beta Distribution

The Beta distribution is a statistical distribution whose shape is defined by alpha > 0 and beta > 0. That way, the Beta distribution can also be used to model other distributions like, for example, the uniform, normal, or Bernoulli distribution. Since the curves can approach zero or infinity on either side, an additional upper and lower bound can be added to constrain the values.

Weibull Distribution

The Weibull distribution is a statistical distribution whose shape is defined by kappa > 0 and the scale of the distribution by lambda > 0. That way, the Weibull distribution can also be used to model other distributions like, for example, the exponential or normal distribution. Since the curves can approach zero or infinity on either side, an additional upper and lower bound can be added to constrain the values. Alternatively, the Weibull distribution can also be parameterized using the mean value and a parameter that constrains the distribution regarding the per mill of remaining values.
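One possible way to sample such a bounded Weibull deviation is rejection sampling; the sketch below is not the APP4MC implementation (note that Python's random.weibullvariate takes the scale first and the shape second):

```python
import random

# Sketch: sampling a bounded Weibull deviation with shape kappa and scale
# lam; samples outside the optional bounds are simply redrawn (rejection).

def bounded_weibull(kappa, lam, lower=None, upper=None, rng=random):
    while True:
        x = rng.weibullvariate(lam, kappa)   # weibullvariate(scale, shape)
        if (lower is None or x >= lower) and (upper is None or x <= upper):
            return x

rng = random.Random(42)
samples = [bounded_weibull(2.0, 5.0, lower=1.0, upper=10.0, rng=rng)
           for _ in range(1000)]
print(all(1.0 <= s <= 10.0 for s in samples))   # True
```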

3.2.10 Statistic Elements

The contained elements represent statistical values.
The values can be set either with a min, avg and max representation using the MinAvgMaxStatistic element,
or as a single value using the SingleValueStatistic element.
The minimum and maximum values are set as normal int values; the average and the single value as float.

3.2.11 Instructions

Instructions are used to express execution times in a more abstract way. The number of instructions characterizes the amount of computation that is necessary to execute e.g. a Runnable. Depending on the capabilities of a Core (the frequency and the instructions per cycle) the time to execute such an element will differ.
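The relationship can be sketched in one line: with IPC denoting the instructions per cycle, the execution time is instructions / (IPC * frequency). For example:

```python
# Sketch: deriving an execution time from an instruction count and the
# capabilities of a core (clock frequency and instructions per cycle).

def execution_time_s(instructions, frequency_hz, ipc=1.0):
    return instructions / (ipc * frequency_hz)

# 200_000 instructions on a 200 MHz core with IPC 1 -> 1 ms
print(execution_time_s(200_000, 200_000_000))   # 0.001
```

The same instruction count therefore yields different times on cores with different frequencies or IPC values, which is exactly the abstraction described above.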

In the next picture a more detailed overview is given about the Instructions.

Name Description
Instructions As a runnable contains several instructions to perform, this is represented using different algorithms (deviations) or a constant value to consume the corresponding amount of time/ticks per cycle.

3.2.12 Counters

The Counter element describes an activation of a target element that happens only every nth time.

Name Description
prescaler Gives the number n for the activation,
e.g. if set to 2, the target element is executed every second time.
offset Initial shift for the first execution of the target.

If for example prescaler is 5 and offset is 2 it is executed on the 2nd, 7th, 12th, … time.
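The activation rule from the example can be sketched as a predicate (occurrences counted from 1, matching the 2nd/7th/12th pattern above; a sketch, not the APP4MC implementation):

```python
# Sketch of the Counter semantics: with prescaler n and offset o the target
# is activated on the o-th occurrence and on every n-th occurrence after
# that, matching the documented example (prescaler 5, offset 2 -> 2, 7, 12).

def is_activated(occurrence, prescaler, offset):
    return occurrence >= offset and (occurrence - offset) % prescaler == 0

print([i for i in range(1, 16) if is_activated(i, prescaler=5, offset=2)])
# -> [2, 7, 12]
```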

Counters are available at the following elements:

  • Call sequence items:
    • ClearEvent
    • EnforcedMigration
    • InterProcessActivation
    • SchedulePoint
    • SetEvent
    • TaskRunnableCall
    • TerminateProcess
    • WaitEvent
  • Stimuli:
    • InterProcess
    • EventStimulus

3.3 Components Model

The AMALTHEA component model is centrally accessible through the ComponentsModel element.

It holds the following types:

  • Component / Composite ('components' containment reference)
  • System ('systems' containment reference)

3.3.1 Components Model Elements

The elements of the Components Model inherit several capabilities from common elements.

Ports, Components, Composites and ComponentInstances are referable by unique names.
A Connector also has a name but the name is optional and does not have to be unique.

Component

The 'Component' class represents a component. Components can be created directly within the 'ComponentModel' and are used as a type for a component instance.

It contains several ports of type 'Port'. A component refers to the classes 'OsEvent', 'Label', 'Runnable' and 'AbstractProcess' from the software model and the class 'Semaphore' from the OS model.

System and Composite

Systems are defined as top level elements within a component model. A system contains several Component- and
Connection-instances and is used to define the architecture of a technical system.

A 'Composite' is a special component type that aggregates Component and Connection instances.
It can be used to create hierarchical component structures.

System and Composite implement the interface 'ISystem'.
The following diagram shows the main elements to represent the hierarchical structure.

In general each inner port should be connected. If a port is intentionally left unconnected it has to be added to the list of 'groundedPorts'.

ComponentInstance and Connector

The 'ComponentInstance' and the 'Connector' can be created within a 'System' or a 'Composite'. 'ComponentInstances' are used to represent instances of component- or composite-types. The 'Connector' class is used to connect the component instances to each other via their Ports. The connector contains a source and target 'QualifiedPort'.

QualifiedPort

A 'qualified' port refers to a 'ComponentInstance' and a 'Port'.
If the 'instance' link is null then the QualifiedPort refers to a port of the enclosing composite.

FInterfacePort

The 'FInterfacePort' class contains the attribute 'kind' to set the port direction ('provides' or 'requires').
The attribute 'interface name' can be used to refer to an external definition, e.g. described in detail with the Franca IDL.

3.3.2 Example

Diagram

The diagram of the example shows a composite 'A' that contains two component instances 'X' and 'Y' of type 'B'. The connections between the ports are named 'c1' to 'c4'. The grounded port 'in_3' of instance 'X' (intentionally left unconnected) is marked green. The second unconnected port 'in_2' of instance 'Y' is unspecified. It is marked red and has to be changed, either declared as grounded or connected to another port.

Model Editor

The same example is shown in the standard AMALTHEA editor.

3.3.3 Franca IDL

Franca is a common interface definition language (IDL). The initial version has been developed by the GENIVI consortium as part of the standardization of an In-Vehicle Infotainment (IVI) platform. The first public version of Franca was released in 2012 under the Eclipse Public License. The future development of Franca will take place in an official Eclipse project under the Eclipse Modeling Top-level Project.

The Franca IDL files can be edited in a textual editor that is provided by Franca.

AMALTHEA ports can specify the 'interface name' to refer to interfaces in Franca.

3.4 Configuration Model

The purpose of the configuration model is to provide a common mechanism for configuration purposes.
The included configurations can contain elements for further processing or build steps.

The central element is the ConfigModel class.

Currently the only configuration object is EventConfig.

3.4.1 Event Configuration

The event configuration represents target events to trace, either in a simulation or on a target hardware platform. The EventConfig elements are contained in the ConfigModel class as list with the name eventsToTrace. Attributes of EventConfig are:

  1. name: (optional) name of the element
  2. event: reference to an existing event in the Events model

Sample

An example use case is to trace all Process activate events. To express this in the configuration, one contained element must be of type EventConfig with its Event reference pointing to an already existing element. The Event is of type ProcessEvent and the ProcessEventType is set to activate. The other attributes are left blank so that the configuration is not limited, for example, to one Process with a given name.

The consumer of the configuration must then match and filter the relevant elements for further processing.

The following screenshot shows this minimal configuration.

3.5 Constraints Model

The constraints model contains different kinds of constraints: runnable-sequencing-constraints that define a required order for the runnables of the Software Model, affinity constraints that restrict the mapping of runnables, processes and schedulers, and timing constraints that restrict the time span between events or the duration of event chains. In addition, event chains themselves can be defined in this model.

3.5.1 Requirements

The Requirements are used to specify quality requirements for the dynamic architecture.
Requirements are divided into the following types depending on the entity type for which the requirement is specified:

  • Architecture Requirements for components
  • Process Chain Requirements for process chains
  • Process Requirements for tasks and ISRs
  • Runnable Requirements for runnables

The Severity attribute is used to describe the quality impact if the requirement is not fulfilled.
The Limit defines the metric, the value and if the value for the metric is an upper limit or a lower limit.
Depending on the metric unit, the following Limits can be specified:

  • Count Requirement Limit for metrics like Preemptions
  • CPU Percentage Requirement Limit for metrics like CPULoad
  • Frequency Requirement Limit for metrics like CacheMissFrequency
  • Percentage Requirement Limit for metrics like OsOverheadRelative
  • Time Requirement Limit for metrics like ResponseTime

An example for a requirement is the Deadline for a task. The Deadline is specified by an upper limit for the response time of the respective task.

3.5.2 Runnable Sequencing Constraints

These constraints can be used to define execution orders of runnables or, in other words, the dependencies between runnables. These dependencies can result from data exchange or from any functional dependency that is not necessarily visible through other model parameters.

The following requirements can be specified with this constraint:

  • Execution sequence of runnables A->B, meaning A has to be finished before B starts
  • Scope on certain process(es), when a runnable is executed multiple times in different process contexts
  • Binding of runnables within a process (loosely, strict)
  • Position, where sequence should start (start sequence), end (end sequence) or at any position
  • Occurrence attribute for multiple preceding/succeeding runnables (all or at least one)

A RunnableSequencingConstraint contains a list of ProcessRunnableGroup elements and an enumeration describing the basic rule for the sequencing. Each runnable group collects references to the runnables that should be sequenced. The sequence is defined by the order of the runnable groups within the sequencing constraint. The order of the runnable references within a group is unimportant. To sequence two runnables it is necessary to create the RunnableSequencingConstraint and two ProcessRunnableGroups, each referencing a runnable.

It is possible to describe that a set of runnables has to be executed before or after another runnable or set of runnables. For that, more than one runnable reference can be put in a group. As already mentioned, the order of the referenced runnables within a ProcessRunnableGroup is unimportant.
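The group-by-group ordering can be sketched as a check against recorded execution intervals (a hypothetical Python sketch; the function name and data layout are illustrative, not part of the AMALTHEA API):

```python
# Illustrative check for a RunnableSequencingConstraint: every runnable of
# group i must terminate before any runnable of group i+1 starts.

def satisfies_sequence(executions, groups):
    """executions: dict runnable -> (start, end) time;
    groups: ordered list of ProcessRunnableGroup-like sets of runnable names."""
    for earlier, later in zip(groups, groups[1:]):
        latest_end = max(executions[r][1] for r in earlier)
        earliest_start = min(executions[r][0] for r in later)
        if latest_end > earliest_start:
            return False
    return True
```

The order of runnables inside one group does not matter; only the boundary between consecutive groups is checked.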

The RunnableOrderType provides different sequencing modes. The mode "loose binding" allows the execution of runnables (that are not part of the constraint) between the runnables of the sequencing-constraint. In addition, the option "loose binding & unspecified start runnable" allows the sequence to start with any of its runnables, but the sequence itself must not be violated.

The "strict binding" option is the opposite of "loose binding": the runnables must execute in direct succession, without interruption by any other runnable. With "start-", "any-" and "end sequence" it can be defined whether the runnables of the constraint have to be executed at the start, at the end, or at any position in the process.

A runnable-sequencing-constraint is independent of the processes that are executing the runnables. With a scope it is possible to define that a sequencing-constraint checks only the runnables of a set of processes or just one process. The scope can be defined individually for each runnable.

The following picture shows a RunnableSequencingConstraint. The constraint has two runnable groups, visualized by the circles. In this case there is just one runnable in each group.
The runnables in the groups must be executed in the order of the groups. Basically there is no restriction on which core or in which process the runnables are executed. It is only important that the order is correct and that the runnable of one group terminates before the runnable of the next group starts.

The RunnableSequencingConstraint in this figure has two processes as scope in its second group. That means the runnable "R3" is allowed to be executed in the processes "P1" and "P3", but it is expected only once.

Each runnable-sequencing-constraint has a runnable-order-type.
This type can be:

  • loose binding
  • loose binding & unspecified start runnable
  • strict binding & any sequence
  • strict binding & start sequence
  • strict binding & end sequence

The meaning of "loose binding" is that the runnables of a constraint do not have to follow each other directly; they can also be separated by other runnables that are not part of the sequencing-constraint.
"Unspecified start runnable" means that the order of the runnables must be fulfilled (this is always the case) but it does not matter which of the runnables is the one to start with.
So with an "unspecified start runnable" the sequence can be:

  • "R1" -> "R3" -> "R4"
  • "R3" -> "R4" -> "R1"
  • "R4" -> "R1" -> "R3"

The opposite of loose binding is strict binding. It means that, if the runnables are executed within the same process, they have to be executed directly after each other. No runnables other than the sequencing runnables are allowed to execute within this sequence.
The mode "start sequence" means that the first runnable has to be at the beginning of its process. Assuming that all runnables are executed in the same process, the combination "strict binding & start sequence" means that all runnables of the constraint have to be executed in the correct order at the beginning of the process.
The mode "end sequence" is analogous to "start sequence", but here the last runnable must be executed at the end of the last process of the runnable sequence.
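The difference between loose and strict binding can be sketched as a check against the execution trace of one process (illustrative Python, assuming runnables are identified by name; this is not the AMALTHEA API):

```python
def sequence_occurs(trace, sequence, strict):
    """trace: executed runnables of one process in execution order;
    sequence: the constrained runnables in their required order.
    Strict binding: the sequence must appear as a contiguous run.
    Loose binding: other runnables may interleave, order must still hold."""
    if strict:
        n = len(sequence)
        return any(trace[i:i + n] == sequence for i in range(len(trace) - n + 1))
    it = iter(trace)                      # loose: ordered subsequence test
    return all(r in it for r in sequence)
```

"Start sequence" and "end sequence" would additionally pin the contiguous run to the beginning or the end of the trace.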

3.5.3 Data Age Constraints

Data Age constraints are used to define when the information in a label becomes valid or invalid after its last update. For this purpose a runnable and a label have to be set. The information update occurs when the runnable performs a write access on the label. It is possible to define the minimum time after which the information of a label update becomes valid. This means that the information shall not be used for further calculations before this time has passed. The maximum time, on the other hand, defines the time after the label update at which the information becomes invalid. Besides time, it is possible to define a minimum and maximum cycle. The cycle is related to the activation of the process that executes the runnable.

  • DataAgeTime: The Time object in the role of minimumTime must not contain a negative value!
  • DataAgeTime: The Time object in the role of maximumTime must not contain a negative value!
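The validity window can be sketched as follows (hypothetical helper; minimumTime and maximumTime are passed as plain numbers here, not AMALTHEA Time objects):

```python
def information_valid(write_time, query_time, minimum_time, maximum_time):
    """A label value written at write_time may be used at query_time only if
    its age lies within [minimum_time, maximum_time]."""
    age = query_time - write_time
    return minimum_time <= age <= maximum_time
```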

3.5.4 Data Coherency Groups

A DataCoherencyGroup is used to define data coherency requirements for a group of labels.
The Direction hereby is used to specify if the labels have to be read or written coherently. Moreover, the scope attribute defines the context of the coherent read or write requirement. Possible scopes are components, processes, and runnables.

3.5.5 Data Stability Groups

A DataStabilityGroup is used to define that the values of labels have to be kept stable within a given execution context.
Currently, the following execution contexts are covered by the scope:

  • Component
  • Process
  • Runnable

This means that it has to be guaranteed that the values of labels are identical either within the runnable, the process, or the component in which the denoted labels are accessed.

3.5.6 Event Chains

The concept for event chains is based on the Timing Augmented Description Language (TADL), which originates from the TIMMO-2-USE project.

The EventChain consists of EventChainItems. These items are classified into two types:

  1. SubEventChains: Inner anonymous EventChains, which are defined only once and have no global meaning, only in the context of the currently defined EventChain.
  2. EventChainReference: Used to reference already globally defined EventChains in the root list of the model.

An EventChain object always references two events, a stimulus event and a response event. To define a simple event chain that just contains two events, one event chain object is enough. In this case it is simply a chain with its stimulus as the first event and the response as the second event.
If more events are required, it is necessary to add sub event chains. The stimulus is always the first event of an event chain, the response is always the last event. The events that are defined in the sub event chains are the events in between.

The picture below shows a simple example for an event chain of four events in a row.
The top level chain defines the first event (E1) and the last event (E4).
It contains a number of event chains. They describe the way from E1 to E4.
These sub event chains are added as strands to the parent.
For this, some rules have to be considered:
The stimulus of the first child event chain has to be the same as the stimulus of the parent (red in picture).
The stimulus of every other child event chain has to be equal to the response of the previous chain (blue in picture).
The response of the last child event chain has to be the same as the response of the parent (green in picture).
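The three stitching rules can be sketched as a well-formedness check (illustrative Python; the sub event chains are reduced to (stimulus, response) pairs, which is not the AMALTHEA API):

```python
def strands_well_formed(parent_stimulus, parent_response, strands):
    """strands: ordered (stimulus, response) pairs of the child event chains."""
    if not strands:
        return False
    if strands[0][0] != parent_stimulus:             # first stimulus rule (red)
        return False
    for (_, prev_response), (stimulus, _) in zip(strands, strands[1:]):
        if stimulus != prev_response:                # chaining rule (blue)
            return False
    return strands[-1][1] == parent_response         # last response rule (green)
```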

As a stimulus or response event it is either possible to use an Entity Event or an Event Set.
An Entity Event is a single event related to an entity like a task or a runnable, e.g. the start of a runnable.
If a set of events is used, then all events of this group must occur to fulfill the event chain. The order in which the events occur is not important.

3.5.7 Timing Constraints

Synchronization Constraints

An EventSynchronizationConstraint describes how tightly the occurrences of a group of events follow each other.
There must exist a sequence of time windows of width tolerance, such that every occurrence of every event in events belongs to at least one window, and every window is populated by at least one occurrence of every event.
The parameter multipleOccurrencesAllowed defines whether all occurrences have to be considered for the constraint or just the subsequent ones.
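A simplified check for the subsequent-occurrence interpretation might look like this (illustrative sketch, assuming every event occurs equally often and occurrence times are plain numbers):

```python
def synchronized(occurrences, tolerance):
    """occurrences: dict event -> chronologically sorted occurrence times.
    The i-th occurrences of all events must fit into one time window of
    width tolerance."""
    for times in zip(*occurrences.values()):
        if max(times) - min(times) > tolerance:
            return False
    return True
```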

An EventChainSynchronizationConstraint describes how tightly the occurrences of an event chain follow the occurrences of a different event chain.
The SynchronizationType defines which parts of the event chains have to be in sync, stimulus or response, and the width of a time window sets the allowed tolerance.
The parameter multipleOccurrencesAllowed defines whether all occurrences have to be considered for the constraint or just the subsequent ones.

  • SynchronizationConstraint: The Time object in the role of tolerance must not contain a negative value!

Repetition Constraint

A RepetitionConstraint describes the distribution of the occurrences of a single event, including jitter.
Every sequence of span occurrences of event must have a length of at least lower and at most upper time units.

  • RepetitionConstraint: The Time object in the role of lower must not contain a negative value!
  • RepetitionConstraint: The Time object in the role of upper must not contain a negative value!
  • RepetitionConstraint: The Time object in the role of period must not contain a negative value!
  • RepetitionConstraint: The Time object in the role of jitter must not contain a negative value!
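The rule "every sequence of span occurrences must have a length between lower and upper" can be sketched as follows (illustrative Python; occurrence times are plain sorted numbers):

```python
def repetition_satisfied(occurrences, span, lower, upper):
    """occurrences: sorted occurrence times of a single event. Every window
    of `span` consecutive occurrences must cover between lower and upper
    time units."""
    for i in range(len(occurrences) - span + 1):
        length = occurrences[i + span - 1] - occurrences[i]
        if not lower <= length <= upper:
            return False
    return True
```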

Delay Constraint

A Delay Constraint imposes limits between the occurrences of an event called source and an event called target.
Every instance of source must be matched by an instance of target, within a time window starting at lower and ending at upper time units relative to the source occurrence.
In addition, a MappingType defines whether there is a strong ( OneToOne ), neutral ( Reaction ), or weak ( UniqueReaction ) delay relation between the events.

  • DelayConstraint: The Time object in the role of lower must not contain a negative value!
  • DelayConstraint: The Time object in the role of upper must not contain a negative value!
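The matching rule "every source occurrence needs a target occurrence within [lower, upper]" can be sketched as follows (illustrative Python; occurrence times are plain numbers and the MappingType variants are not distinguished):

```python
import bisect

def delay_satisfied(sources, targets, lower, upper):
    """Every source occurrence must be answered by a target occurrence inside
    the window [source + lower, source + upper]; targets must be sorted."""
    for s in sources:
        i = bisect.bisect_left(targets, s + lower)
        if i == len(targets) or targets[i] > s + upper:
            return False
    return True
```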

Event Chain Latency Constraint

An EventChainLatencyConstraint defines how long before each response a corresponding stimulus must have occurred ( Age ), or how long after a stimulus a corresponding response must occur ( Reaction ).
It always refers to an EventChain.

  • EventChainLatencyConstraint: The Time object in the role of minimum must not contain a negative value!
  • EventChainLatencyConstraint: The Time object in the role of maximum must not contain a negative value!

3.5.8 Affinity Constraints

Affinity constraints are used to define constraints on the mapping of objects to each other.
The objects that can be mapped are:

  • Runnables
  • Processes (Task or ISR)
  • Labels

An affinity constraint can either be a pairing or a separation constraint. A pairing constraint contains one group of objects and a target; it says "All these objects must run together on this target". A separation constraint contains two groups of objects and a target; it says "This group of objects is not allowed to be mapped together with the other group of objects on the specific target". So the separation constraint can be used to forbid a combination of objects on a target. It can also be used to say "These objects are not allowed to be mapped on this target"; in this case only one group of the separation constraint is used.
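The two constraint kinds can be sketched as checks on a given mapping (illustrative Python; a single target per constraint is assumed for brevity, and the function names are not part of the AMALTHEA API):

```python
def pairing_ok(mapping, objects, target):
    """Pairing: all listed objects must be mapped to the given target."""
    return all(mapping.get(o) == target for o in objects)

def separation_ok(mapping, group_a, group_b, target):
    """Separation: the two groups must not both be present on the target.
    With one empty group, the other group is forbidden on the target outright."""
    on_target = {o for o, t in mapping.items() if t == target}
    a, b = on_target & set(group_a), on_target & set(group_b)
    if not group_a or not group_b:
        return not (a or b)
    return not (a and b)
```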

Each affinity constraint has one or more targets. The type of the target depends on the type that should be mapped.

Data Affinity Constraints

A DataConstraint is used to define the mapping of label objects to memory units.

Process Affinity Constraints

A ProcessConstraint is used to define the mapping of process (Task or ISR) objects to processing cores or scheduling units.

Runnable Affinity Constraints

A RunnableConstraint is used to define the mapping of runnable objects to processing cores or scheduling units.

3.5.9 Physical Section Constraints

A PhysicalSectionConstraint is used to define the mapping of Section objects to Memories. This mapping specifies that the corresponding PhysicalSectionMapping associated with a Section element can be allocated only in the mapped Memories.

Example: PhysicalSectionConstraint with the below properties has the following semantic:
	name: Ram1_Ram2_PhysicalSectionConstraint
	Memories : RAM1, RAM2
	Section : .abc.reini
Semantic: The PhysicalSectionMapping for the .abc.reini section can only be allocated in RAM1, RAM2, or both, but not in other Memories.
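This semantic can be sketched as a check over the PhysicalSectionMapping elements (illustrative Python; mappings are reduced to (section, memory) pairs, which is not the AMALTHEA API):

```python
def constraint_satisfied(section_mappings, section, allowed_memories):
    """section_mappings: (section, memory) pairs taken from the
    PhysicalSectionMapping elements. Every mapping of the constrained
    section must target one of the allowed memories."""
    return all(m in allowed_memories
               for s, m in section_mappings if s == section)
```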

3.6 Event Model

The event model provides the classes to describe the BTF-Events that can be used for the tracing configuration, for the modeling of event chains and for some timing constraints.

There are different event classes for the different entity types that can be traced:

Entity               Event Class
Process (Task, ISR)  ProcessEvent
Stimulus             StimulusEvent
Runnable             RunnableEvent
Label                LabelEvent
Semaphore            SemaphoreEvent

In a running system, each entity can have different states. An event trace consists of the events that represent the state transitions of the traced entities. To define such an event in the model, each event class contains an event-type enumeration that provides the event types for the state transitions of its entity. The following picture shows the possible states of a process:

So for example the event-type-enumeration for a process event contains the events activate, start, resume, ...

If it is required to define an event like "start-event of some process", it is enough to create an object of type ProcessEvent and set the event type start.

It is also possible to restrict the definition of an event to a specific entity, e.g. "start-event of task T_1". For this purpose a process can be referenced from ProcessEvent. In general, each event class can reference an entity of the corresponding type. In addition, each event class provides individual restrictions. A ProcessEvent, for example, can not only be restricted to a specific process but also to a core, e.g. "start-event of task T_1 on core C_2". Another example is the class RunnableEvent; it allows the event to be restricted to a runnable, the process that executes the runnable, and the core that executes the process.
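Such event definitions behave like filters in which unset attributes act as wildcards. A minimal sketch (illustrative only, with events reduced to plain dictionaries):

```python
def event_matches(event_definition, trace_event):
    """event_definition: restrictions such as {'type': 'start',
    'process': 'T_1', 'core': None}; entries set to None act as wildcards.
    trace_event: the concrete attribute values of one traced event."""
    return all(trace_event.get(key) == value
               for key, value in event_definition.items()
               if value is not None)
```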

3.7 Hardware model

The AMALTHEA hardware model is used to describe hardware systems, which usually consist of ECUs, microcontrollers, cores, memories, additional peripherals etc. It is centrally accessible through the HWModel element and contains the following elements:

  • SystemType
  • ECUType
  • MicrocontrollerType
  • CoreType
  • MemoryType
  • NetworkType
  • HwSystem
  • AccessPath

3.7.1 Structure

A reduced overview of the meta model specifying the Hardware Model is shown below. The overall hierarchy within this model is represented by blue classes (HwSystem, ECU, Microcontroller and Core). Red classes indicate type descriptions which contain the parameters of their referencing elements, e.g. it is possible to specify a core type with all relevant attributes which is then referenced by core instances. This supports quick modeling of homogeneous architectures and increases the reuse of already described hardware. Green classes represent additional hardware peripherals which can be described on each of the hardware hierarchy levels (HwSystem, ECU, Microcontroller and Core).

The structure of descriptions regarding the network is shown in the following figure. The ComplexNode and each of its specializations (e.g. HwSystems, ECUs, Networks, ...) may contain a list of HwPort elements. The ComplexPort element is a specialization which is attached to exactly one network. More detailed descriptions about the respective elements are given in the following section.

ComplexNode

The ComplexNode is the generalization of all hardware components. It holds the described component's name and contains its nested hardware elements, e.g. memories, networks, ports etc.

Attribute Type Mul. Valid Values Description
quartzes Containment * Quartz Containment for Quartz objects (i.e. frequency generators) located on the specialization of this element.
prescaler Containment * Prescaler Containment for Prescaler (i.e. frequency prescaler) objects located on the specialization of this element.
memories Containment * Memory Containment for Memory (i.e. memories, such as RAM or ROM) objects located on the specialization of this element.
networks Containment * Network Containment for Network (i.e. networks, such as LIN, CAN or ETHERNET) objects located on the specialization of this element.
components Containment * HwComponent Containment for HwComponent (i.e. all other entities of hardware) objects located on the specialization of this element.
ports Containment * HwPort Containment for HwPort (i.e. interfaces with multiple pins) objects located on the specialization of this element.

HwSystem

The HwSystem is a specialization of the ComplexNode and represents a cluster of one or more ECU elements which in sum implement a specific system.

Attribute Type Mul. Valid Values Description
systemType Reference 1 SystemType Refers to a SystemType containing the configuration for this system.
ecus Containment + ECU Containment for Electronic Control Units ( ECU) that are part of this system.

ECU

The ECU (Electronic Control Unit) is a specialization of the ComplexNode and represents any physical electronic hardware on an embedded-system abstraction level.

Attribute Type Mul. Valid Values Description
ecuType Reference 1 ECUType Refers to an ECUType containing the configuration for this ECU.
microcontrollers Containment + Microcontroller Containment for Microcontrollers that are part of this ECU.

Microcontroller

The Microcontroller is a specialization of the ComplexNode and represents any physical hardware on System on Chip abstraction level, e.g. processors with one or more special or generic purpose cores.

Attribute Type Mul. Valid Values Description
microcontrollerType Reference 1 MicrocontrollerType Refers to a MicrocontrollerType containing the configuration for this microcontroller.
cores Containment + Core Containment for processing cores ( Core) that are part of this microcontroller.

Core

The Core is a specialization of the ComplexNode and represents the respective processing units on any type of single- or multi-core processor.

Attribute Type Mul. Valid Values Description
coreType Reference 1 CoreType Refers to a CoreType containing the configuration for this core.
lockstepGroup Integer ? 0 – 255 If set, this value defines the lockstep group of the core. All cores with the same lockstep group are operating in lockstep mode together.

Memory

The Memory is a specialization of the ComplexNode and represents memories. It is used to describe any type of memory module, e.g. Flash, RAM, etc.
Please note that to specify a memory of type CACHE (e.g. a level 1 cache as part of a core) it is sufficient to specify it within a core. It is not necessary to specify additional networks or ports.

Attribute Type Mul. Valid Values Description
type Reference 1 MemoryType Refers to a MemoryType containing the configuration for this memory.

Network

The Network is a specialization of the ComplexNode and represents networks of interconnected ComplexPort elements.

Attribute Type Mul. Valid Values Description
type Reference 1 NetworkType Refers to a NetworkType containing the configuration for this network.

Quartz

The Quartz is a specialization of the ComplexNode and represents a frequency generator which may be located on any ComplexNode or its specializations. It may be referenced by any number of Prescalers.

Attribute Type Mul. Valid Values Description
type QType 1 STATIC / DYNAMIC Specifies if the Quartz operates at either a static or dynamic frequency.
frequency Integer ? 0 – 255 Frequency on which this Quartz is operating.

Component

The Component (or HwComponent) is a specialization of the ComplexNode and represents hardware elements which are not further specified. They may contain any number of nested Components.

Attribute Type Mul. Valid Values Description
nestedComponents Containment * HwComponent Containment for HwComponent (i.e. all other entities of hardware) objects located below this component.

HardwareTypeDescription

The abstract class HardwareTypeDescription is the generalization of all type descriptions for memories, cores, ECUs, networks etc. It allows its specializations to store information about their features.

AbstractionTypeDescription

The abstract class AbstractionTypeDescription is a specialization of the HardwareTypeDescription and generalization for hardware types which describe hardware abstraction, i.e. the HwSystem, ECU, Microcontroller and Core.

SystemType

The SystemType is a specialization of the AbstractionTypeDescription and used to define a specific type of a system. If multiple systems with the same characteristics exist, it is sufficient to define only one type and reference it from all systems sharing the same type.

ECUType

The ECUType is a specialization of the AbstractionTypeDescription and used to define a type of an ECU.

MicrocontrollerType

The MicrocontrollerType is a specialization of the AbstractionTypeDescription and used to define a type of a Microcontroller.

CoreType

The CoreType is a specialization of the AbstractionTypeDescription and used to define a type of a Core, i.e. its bit width and instructions per cycle.

Attribute Type Mul. Valid Values Description
bitWidth Integer 1 0-255 The core architecture's bit width (e.g. 32 or 64 bit).
instructionsPerCycle Integer 1 0-255 Number of processed (computed) instructions per cycle.

MemoryType

The MemoryType is a specialization of the HardwareTypeDescription and used to define a type of a memory, i.e. its size, access pattern and type.

Attribute Type Mul. Valid Values Description
xAccessPattern String 1 String Timing Architects specific value. Will be used in the future to determine the access pattern of a memory element.
type MemoryType 1 RAM/CACHE/FLASH_INT/FLASH_EXT Specifies the type of this Memory.
size Long 1 >0 Size of this memory.

NetworkType

The NetworkType is a specialization of the HardwareTypeDescription and used to define a type of a Network, i.e. max supported bit width and scheduling policy.

Attribute Type Mul. Valid Values Description
schedPolicy SchedType 1 RROBIN/PRIORITY Specifies the scheduling policy which is used on this Network.
bitWidth Integer 1 0 – 255 Specifies the max bit width that may be transferred on this network per cycle.

CrossbarSwitch

The CrossbarSwitch is a specialization of a NetworkType and represents switches, which allow simultaneous/concurrent connections between all types of components.

Attribute Type Mul. Valid Values Description
conConnections Integer 1 0 – 255 Maximum number of concurrent 1:n connections

Bus

The Bus is a specialization of the NetworkType and represents any kind of bus network, e.g. CAN, LIN etc.

Attribute Type Mul. Valid Values Description
type BusType 1 CAN/TTCAN/LIN/FLEXRAY/ETHERNET/SPI/NA Specifies the type of the bus network.

Bridge

The Bridge is a specialization of a NetworkType and represents bridges or gateways. It is used to transfer data from one network to another and performs a one-way translation, e.g. a ComplexNode containing a Bridge is accessed via its ComplexPort and routes the data to its contained Bridge. If a ComplexNode contains more than one Bridge, the respective target network is determined by the address ranges of its connected ComplexPort elements.

Pin

The Pin represents a single pin and serves as an abstract representation for purely informative use, e.g. by a developer describing a port.

ComplexPin

The ComplexPin is a specialization of the Pin and contains detailed information, i.e. the type of the pin, its address and its direction (read/write).

Attribute Type Mul. Valid Values Description
type PinType 1 ANALOG/DIGITAL Specifies if this pin is either operating in analog or digital mode
baseAddress Long 1 >= 0 Specifies the address of this pin, i.e. the address this pin is accessed by.
direction RWType 1 R/W/RW Specifies the direction the data may travel through this pin, i.e. (R)ead only, (W)rite only or both (RW).

Port

The Port represents groups of pins and may be used to describe simple interfaces, e.g. 8-bit ports etc. Its main purpose is to describe ports which only serve informative use.

ComplexPort

The ComplexPort is a specialization of the Port and represents interfaces on any type of ComplexNode which are connected to a Network.

Attribute Type Mul. Valid Values Description
network Reference 1 Network References the network which this ComplexPort is a part of. Only ComplexPorts attached to the same network are capable of communicating directly with each other. If data is transferred over multiple networks, either a valid route (e.g. Port A -> Network A -> Port B and Port C -> Network B -> Port D, with Ports B and C being on the same device) or an access path has to be present.
isMaster Boolean 1 TRUE/FALSE Specifies if this ComplexPort may send requests to a Network (e.g. a processor) or just answer them (e.g. a memory).
bitWidth Integer 1 >0 Specifies the max bit width that may be transferred via this ComplexPort.
baseAddress Long 1 >0 Specifies the base address of this ComplexPort, making it addressable through other ComplexPorts in the Network.
addressRange Long 1 >0 Specifies the size of the addressable area this ComplexPort is addressed by. The address space is determined by the baseAddress offset and this value.
direction RWType 1 R/W/RW Specifies the direction the data may travel through this port, i.e. (R)ead only, (W)rite only or both (RW). Note that the direction is always related to the isMaster attribute, e.g. master ports read and write TO the network, while non-master ports are read and written BY the network.
readCycles Integer 1 >0 Specifies the number of cycles until a read is performed.
writeCycles Integer 1 >0 Specifies the number of cycles until a write is performed.
schedValue Integer 1 >0 Optional parameter for the usage in network scheduling policies. The meaning of this attribute depends on the used schedulingPolicy of the referenced Network, e.g. it may be the priority in a priority scheduled network or the order in a round robin scheduled network.
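The address-based addressing via baseAddress and addressRange can be sketched as a simple lookup (illustrative Python; ports are reduced to tuples, which is not the AMALTHEA API):

```python
def resolve_target_port(ports, address):
    """ports: list of (name, baseAddress, addressRange) tuples. Returns the
    name of the port whose window [base, base + range) contains the address,
    or None if the address is not mapped."""
    for name, base, address_range in ports:
        if base <= address < base + address_range:
            return name
    return None
```

A Bridge with several target networks could use the same lookup to pick the network whose connected port covers the requested address.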

AbstractType

The AbstractType is an abstract class, allowing attributes to describe data without specifying their data type. It contains the field name to specify the name of the value, while the specializations BoolValue, CharValue, IntValue, LongValue, DoubleValue and StringValue contain the respective value types.

3.7.2 AccessPaths

The AccessPath element generally describes the time in cycles needed for a hardware element (ComplexNode) A to access a hardware element B.
It can be configured in two different ways:

  1. Latency: Describes the needed cycles for a given RWType using a Deviation (from common model) or a constant value.
  2. Sequence of ports: Describes the access path in detail by referencing the ComplexPorts of the given hardware elements. These ports include the specification of the different access types in cycles, which can then be summed up to get the total value.

Both approaches include the possibility to reference another given path of the same type, so it is possible, for example, to describe sub paths which are then included in a main path.
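Summing the per-port cycles of the second approach can be sketched as follows (illustrative Python; ports are reduced to dictionaries carrying the two cycle attributes):

```python
def total_access_cycles(ports, access):
    """ports: the ComplexPort-like entries of one access path, each with
    'readCycles' and 'writeCycles'; access: 'read' or 'write'. The per-port
    cycle counts are summed up to the total cost of the path."""
    key = "readCycles" if access == "read" else "writeCycles"
    return sum(port[key] for port in ports)
```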

The following picture shows the structure of these two approaches:

Latency Overview

To describe a latency for a given core accessing a memory, only these elements are needed inside the hardware model.
The next picture shows a simple overview of a core, which is able to access two memories with different latencies.

Using the provided AMALTHEA model editor this small sample would look like the following picture:

Hardware Access Path

When the HwAccessPath element is used to describe an access path, the included elements must be available with the proper ComplexPorts. The access path consists of a list of elements which reference existing ports.

The next overview shows a very simple sample, with a core and two memories connected using a network and the proper ports.

Configuring the two access paths from Core1 to Memory1 and to Memory2 results in two HwAccessPaths containing the proper ports, as shown in the next overview.

3.8 Mapping Model

The mapping model is intended to provide tools that use hardware and software models (e.g. code generators) information about the corresponding mappings and allocations. This information contains associations between

  • schedulers and executable software: A scheduler manages and distributes executable software like runnables or tasks on its managed cores,
  • schedulers and cores: A scheduler can manage one or more cores and deploy computations on these
  • data and memories: Data (such as functions, variables, heap, etc.) is mapped to static and volatile memories.

3.8.1 Overview

The Meta Model specifying the Mapping Model is shown below.

MappingModel

The MappingModel serves as a container for each of the mapping rules, i.e. Allocations (executable software and cores which are allocated to schedulers) and Mappings (labels and software which is mapped to memories).

3.8.2 Mappings

MemoryMapping

The MemoryMapping class describes the mapping of parts of the software model to Memory. It is used to associate specializations of the AbstractMemoryElement (i.e. Label, Runnable, TaskPrototype and Process). The target memory is specified by a reference to an explicit Memory within an existing hardware model. The position in memory can also be defined here as an address. Whether this address is an absolute memory address, an offset from the memory's first address, or not expected at all is defined by the Memory Address Mapping Type enumeration in the root element of the Mapping Model. Additional attributes, e.g. to supply further information for a code generator, may be described by the containment attributeList.

PhysicalSectionMapping

The PhysicalSectionMapping class (also called Physical Memory Section) describes the following:

  • mapping of various Section elements to a specific Memory
  • mapping various Label and Runnable elements to a Physical Memory Section
  • description of memory address location where the Physical Memory Section is allocated

3.8.3 Allocations

CoreAllocation

The CoreAllocation describes the allocation of cores to a Scheduler. This class consists of references to the respective Scheduler, which is specified within an existing OS model, and a core, which is specified in a hardware model. Further attributes, which may be used to supply additional information, e.g. for a code generator, may be described by the containment attributeList.

RunnableAllocation

The RunnableAllocation is a specialization of the ExecutableAllocation. It is used to associate a Runnable, specified within an existing software model, with a Scheduler.

ISRAllocation

The ISRAllocation is used to associate an ISR with an InterruptController. The attribute 'priority' can be used to assign a value for this specific context. If the value is set, it will be used instead of the default.

TaskAllocation

The TaskAllocation is used to associate a Task with a TaskScheduler. The attribute 'priority' can be used to assign a value for this specific context. If the value is set, it will be used instead of the default.

3.9 OS Model

This part of the AMALTHEA model describes the provided functionality of an operating system. It mainly provides a way to specify how access is given to certain system resources. Therefore the concepts of scheduling, buffering, and semaphores are supported, which are described in detail in the following chapter.

3.9.1 Operating System

The basic element in the OS Model is the operating system. There can be multiple operating systems in one model. The operating system type can be used to describe a generic operating system. It is also possible to use the vendor operating system type to define a vendor-specific OS. An operating system contains a number of task schedulers and interrupt controllers. A task scheduler controls the execution of tasks on one or multiple processor cores. An interrupt controller controls the execution of ISRs and can also be mapped to multiple cores. The mapping of tasks and ISRs to their controller and the mapping of the controller to the cores can be done in the Mapping Model. An operating system can also contain a description of the overhead it produces, which is explained in more detail below.

3.9.2 Scheduler

Interrupt controllers and task schedulers have a scheduling algorithm. The picture below shows that both types inherit from the scheduler type. Each scheduler has a scheduling unit, which can be either a hardware scheduling unit or a software scheduling unit.

Scheduling HW Unit

This class is used when scheduling is done by a hardware unit. The only attribute of this class is the delay attribute, which represents the amount of time required to execute the scheduling.

  • SchedulingHWUnit: The Time object in the role of delay must not contain a negative value!

Scheduling SW Unit

This class is used when scheduling is done by a software unit.

Scheduling Algorithm

This is an abstract class for the different scheduling algorithms.

Scheduling Algorithm Description
OSEK OSEK compliant Scheduling Algorithm
PfairPD2 Proportionate Fair PD 2 Scheduling Algorithm
PartlyPFairPD2 PPfair-PD 2 (Partly Proportionate Fair PD 2 Scheduling Algorithm)
PartlyEarlyReleaseFairPD2 P-ERfair-PD 2 (Partly Early Release Fair PD 2 Scheduling Algorithm)
EarlyReleaseFairPD2 ERfair-PD 2 (Early Release Fair PD 2 Scheduling Algorithm)
LeastLocalRemainingExecutionTimeFirst Task with the smallest local remaining execution time gets the highest priority
EarliestDeadlineFirst EDF (Earliest Deadline First Scheduling Algorithm): Task with the earliest deadline gets the highest priority
DeadlineMonotonic DMS (Deadline Monotonic Scheduling Algorithm): Task with the shortest relative deadline gets the highest priority
RateMonotonic RMS (Rate Monotonic Scheduling Algorithm): Task with the shortest period gets the highest priority
PriorityBasedRoundRobin Round Robin Scheduling Algorithm with prioritized processes
UserSpecificSchedulingAlgorithm This class contains a list of algorithm parameters. Each parameter has a key and a value (both Strings). Users can store all information for their own specific scheduling algorithm here

3.9.3 Os Overhead

It is possible to define the overhead that is produced by an operating system. The defined overhead can be assigned to an operating system definition. Each overhead entry is defined as a set of instructions that has to be executed when the corresponding OS function is used. The instructions can be either a constant set or a deviation of instructions. It is possible to define the overhead for ISR categories one and two and for a number of operating system API functions.

ISR Overhead

  • ISR category 1 & 2: Describes the overhead for ISRs of category one and two by adding a set of instructions that is executed at the start and the termination of the ISR

API Overhead

An overhead can also be defined for API calls. The following API calls are considered:

  • API Activate Task: Runtime overhead for the activation of a task or ISR by another task or ISR (inside the activating process)
  • API Terminate Task: Runtime for explicit task termination call (inside a task)
  • API Schedule: Runtime for task scheduling (on scheduling request)
  • API Request Resource: Runtime overhead for requesting a semaphore (inside a runnable)
  • API Release Resource: Runtime overhead for releasing a semaphore (inside a runnable)
  • API Set Event: Runtime overhead for requesting an OS event (inside a task or ISR)
  • API Wait Event: Runtime overhead for waiting for an OS event (inside a task or ISR)
  • API Clear Event: Runtime overhead for clearing an OS event (inside a task or ISR)
  • API Send Message: Runtime overhead for cross-core process activation or event (inside a task or ISR)
  • API Enforced Migration: Runtime overhead for migrating from one scheduler to another scheduler (inside a task or ISR)
  • API Suspend OsInterrupts
  • API Resume OsInterrupts
  • API Request Spinlock
  • API Release Spinlock
  • API SenderReceiver Read
  • API SenderReceiver Write
  • API SynchronousServerCallPoint
  • API IOC Read
  • API IOC Write

3.9.4 OS Data Consistency

The OsDataConsistency class provides a way to configure an automatic data consistency mechanism of an operating system. It is used to cover the following two use cases:

  • Provide a configuration for external tools that perform a data consistency calculation based on the stated information.
  • Provide the results of a performed data consistency calculation which then have to be considered by external tools (e.g. by timing simulation).

To distinguish the different use cases and to consequently also indicate the workflow progress for achieving data consistency, the OsDataConsistencyMode allows defining the general configuration of the data consistency. The following modes are available:

  1. noProtection: data stability and coherency is NOT automatically ensured.
  2. automaticProtection: data stability and coherency HAS TO BE ensured according to the configuration, either via custom protection or via model elements.
    1. customProtection: data stability and coherency IS ensured according to the configuration, but not via model elements.
    2. handledByModelElements: data stability and coherency IS ensured via model elements.

The DataStability class defines for which sequence of runnables data has to be kept stable. This can either be stability within a process meaning over all its runnables, within each runnable or within each schedule section. Furthermore, it can be stated whether all data is considered for stability or just those accessed multiple times.

The NonAtomicDataCoherency class defines for which sequence of runnables data has to be kept coherent. As for data stability, it can be stated whether all data is considered for coherency or just the data accessed multiple times.

3.9.5 Semaphore

This object describes a semaphore, which limits the concurrent access of several processes to one resource.

Attribute Description
name Name of semaphore (inherited from ReferableBaseObject)
maxValue Maximum number of processes that can concurrently access the semaphore
initialValue Initial number of processes that access the semaphore
priorityCeilingProtocol Defines if the priority ceiling protocol is activated. If it is activated, a process that accesses the semaphore gets a higher priority than the processes that can also access the same semaphore
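The interplay of maxValue and initialValue can be illustrated with a small counting-semaphore sketch. This is not the AMALTHEA implementation, just a hypothetical model of the attributes in the table above:

```python
# Illustrative counting semaphore using the attributes from the table
# above (maxValue, initialValue); not the AMALTHEA implementation.

class CountingSemaphore:
    def __init__(self, name, max_value, initial_value=0):
        assert 0 <= initial_value <= max_value
        self.name = name
        self.max_value = max_value        # maximum concurrent accessors
        self.count = initial_value        # processes currently holding access

    def request(self):
        # Grant access only while fewer than maxValue processes hold it.
        if self.count < self.max_value:
            self.count += 1
            return True
        return False

    def release(self):
        if self.count > 0:
            self.count -= 1

sem = CountingSemaphore("busAccess", max_value=2)
print(sem.request())  # True
print(sem.request())  # True
print(sem.request())  # False, maxValue reached
```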

3.10 PropertyConstraints Model

The scope of the Property Constraints model is to limit the design space by providing information about the specific hardware properties that parts of the software rely on, i.e. what properties or features have to be supplied by the respective hardware in order to be a valid mapping or allocation target.

This information comprises

  • Core allocation constraints, which describe the constraints on cores.
  • Memory mapping constraints, which describe the constraints on memories.

3.10.1 Structure

The figure below shows the Property Constraints model. In order to provide a better understanding of the model, interconnections between software model elements are not shown in this figure.

3.10.2 CoreAllocationConstraint

The CoreAllocationConstraint is an abstract class for describing constraints which affect the selection of a suitable Core.

RunnableAllocationConstraint

The RunnableAllocationConstraint is a specialization of the CoreAllocationConstraint. It is used to specify constraints on Core elements which are used in Runnable to Core allocations.

ProcessAllocationConstraint

The ProcessAllocationConstraint is a specialization of the CoreAllocationConstraint. It is used to specify constraints on Core elements which are used in the allocation of Process's specializations (i.e. Task and ISR), to Cores.

ProcessPrototypeAllocationConstraint

The same as the ProcessAllocationConstraint, but for process prototypes in an earlier phase of the development.

3.10.3 MemoryMappingConstraint

The MemoryMappingConstraint is an abstract class for describing constraints which affect the selection of a suitable Memory. The actual constraint on the memory is described by the HwMemoryConstraint.

AbstractElementMappingConstraint

The AbstractElementMappingConstraint is a specialization of the MappingConstraint. It is used to specify constraints on Memory elements which are used in the mapping of AbstractMemoryElement specializations (i.e. Label, Runnable, TaskPrototype or Process) to Memories.

3.10.4 Classifications

The specializations CoreClassification and MemoryClassification are used to describe the features that a hardware element ( Core or Memory) needs to provide in order to be a valid target. This is done by references to Classifiers, conditions (requires vs. excludes) and the kind of grouping (all of them vs. at least one of them).

3.10.5 Example

3.11 Stimuli Model

The Stimuli Model contains stimulus and clock objects.

3.11.1 Stimuli

A stimulus is responsible for activating processes. The following types are available:

  • ArrivalCurve: A list of time-borders in which a specified number of events are expected
  • CustomStimulus: To describe own custom types of activations, including properties
  • EventStimulus: Activation triggered by an event defined in the event model. Contains a Counter element if the activation is shifted and/or happens only every nth time.
  • InterProcess: Activations based on an explicit inter-process activation. Contains a Counter element if the activation is shifted and/or happens only every nth time.
  • Periodic: Periodic activations based on an offset and a recurrence.
  • PeriodicEvent: Periodic activations based on other events, like rotation speed dependency
  • Single: Activates the process only once
  • Sporadic: Sporadic activations of processes
  • Synthetic: Activations periodically triggered depending on a defined list of trigger times.
  • Periodic: The Time object in the role of offset must not contain a negative value!
  • Periodic: The Time object in the role of recurrence must not contain a negative value!
  • Synthetic: The Time object in the role of offset must not contain a negative value!
  • Synthetic: The Time object in the role of recurrence must not contain a negative value!
  • TimestampList: The Time objects in the role of timestamps must not contain a negative value!
  • Single: The Time object in the role of activation must not contain a negative value!
  • ArrivalCurveEntry: The Time object in the role of lowerTimeBorder must not contain a negative value!
  • ArrivalCurveEntry: The Time object in the role of upperTimeBorder must not contain a negative value!

It is possible to define a deviation of time for every type of stimulus. For this, a stimulus can have an object of type Deviation, which refers to the deviation in the common model. This reference uses the class SignedTime as parameter for the generic values of the deviation, so the deviation is about time. The values (negative or positive) represent a shift of the activation to the left or right on the timeline. In this way it is possible to increase or decrease the activation time.

It is possible to define lists of mode label values for a stimulus. An entry in such a list is always a pair of a mode label and a mode literal. One of these lists is the set-mode-labels-list: each time the stimulus is triggered, all mode labels in this list are set to the corresponding value.
The enabling-mode-label-list and the disabling-mode-label-list can be used to enable or disable a stimulus by setting a mode label to a specific value (e.g. by a write access within a runnable).

  • If there are entries in the disabling-mode-label-list, the stimulus is disabled if one of these mode labels has the defined value.
  • If there are entries in the enabling-mode-label-list, the stimulus is disabled if none of these mode labels has the defined value.
  • If there are entries in both lists, the stimulus is disabled when at least one label of the disabling-mode-label-list has the defined value or if all labels in the enabling-mode-label-list differ from their defined values. If no label of the disabling-mode-label-list matches its value but at least one of the enabling-mode-label-list does, the stimulus is enabled.

  • ModeValueListEntry: The Mode objects referenced via valueProvider and value must be equal!

Single

Single allows specifying an activation at a single relative point in time. The first activation occurs after the time units specified by activation.

Periodic

Periodic allows specifying periodic activations based on an offset and a recurrence. The first activation occurs after the time units specified by offset, and every following activation occurs after the time units specified by recurrence. This means, in general, that activation instance i is triggered at time t = offset + i * recurrence.
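The formula t = offset + i * recurrence translates directly into code; a minimal sketch (time units are abstract here):

```python
# Activation instants of a Periodic stimulus: t_i = offset + i * recurrence.

def periodic_activations(offset, recurrence, n):
    """Return the first n activation instants, i = 0 .. n-1."""
    return [offset + i * recurrence for i in range(n)]

print(periodic_activations(5, 10, 4))  # [5, 15, 25, 35]
```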

The following figure shows a Periodic stimulus example with a fixed offset and recurrence time.

In addition, the same Periodic stimulus can be extended by a Gaussian deviation. The activation time then varies according to the values of the distribution, as depicted in the following figure.

Besides the deviation, a Periodic stimulus can also have a clock.

Synthetic

Synthetic allows specifying a periodic activation of trigger times. It is defined by a list of timestamps triggerTimes of size m, a period recurrence, and an offset offset. Each timestamp in triggerTimes specifies a single activation at a relative point in time. The moment in time these timestamps are relative to is defined as follows: activation instance i is triggered at time t = offset + floor(i / m) * recurrence + triggerTimes[i modulo m].
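The Synthetic formula can likewise be sketched in a few lines (time units are abstract; example values are made up):

```python
# Activation instant of a Synthetic stimulus:
#   t = offset + floor(i / m) * recurrence + triggerTimes[i mod m]

def synthetic_activation(i, offset, recurrence, trigger_times):
    m = len(trigger_times)
    return offset + (i // m) * recurrence + trigger_times[i % m]

# Two trigger times (m = 2) repeated with recurrence 10:
times = [synthetic_activation(i, 0, 10, [1, 4]) for i in range(5)]
print(times)  # [1, 4, 11, 14, 21]
```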

The following figure shows a Synthetic stimulus example with two periodic activations after the time units T1 and T2.

Arrival Curves

An arrival curve is described as a list of time-borders in which a specified number of events are expected. The picture below shows an example for this. In the first picture there is a number of events on a timeline.

In the picture below every distance between two events is measured. The minimum and the maximum distance is added to the table as time-border for the occurrence of two events. This means that after one event there has to be a gap of at least one time-unit before the next event can occur. It also means that there will always be a second event within eight time units after the previous one. Basically this would be enough to describe an Arrival Curve Stimulus, but it is possible to create a more precise stimulus by describing additional time borders for greater numbers of events. This is shown in the steps below.

The same as for two events in the picture above is done here for three events. As already mentioned above, this is an additional restriction for the occurrence of an event.

And for four events:

The picture below shows the table as an arrival curve graph. The red line is the upper time border that shows the latest time at which the event will occur. The green line shows the earliest possible time at which the event can occur.
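Checking an event trace against such a table of time borders can be sketched as follows. This is an illustrative interpretation of the description above (every window of n consecutive events must span at least the lower and at most the upper border), not a normative algorithm:

```python
# Sketch: validate an event trace against arrival-curve time borders.
# Interpretation: any n consecutive events must span between the
# lower and upper border defined for n events (illustrative only).

def trace_satisfies_curve(events, borders):
    """events: sorted activation timestamps.
    borders: {n_events: (lower, upper)} time-border table."""
    for n, (lower, upper) in borders.items():
        for i in range(len(events) - n + 1):
            span = events[i + n - 1] - events[i]
            if not (lower <= span <= upper):
                return False
    return True

events = [0, 2, 5, 9]
borders = {2: (1, 8), 3: (3, 12)}  # made-up borders for two and three events
print(trace_satisfies_curve(events, borders))  # True
```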

3.11.2 Clocks

A clock is a time base which describes the progress of time for one or more periodic stimuli in relation to global time. If two equal stimuli have a different time base, the time of task activation can differ. There are different kinds of clock functions: the clock sinus function, the clock triangle function and the clock multiplier list. The clock multiplier list is a list of timestamp-multiplier value pairs. When a specified timestamp is reached, the clock changes to the corresponding multiplier value.
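The lookup in a clock multiplier list can be sketched as a simple step function. A hypothetical illustration of the pair semantics described above, not the AMALTHEA API:

```python
import bisect

# Sketch of a clock multiplier list: once a timestamp is reached, its
# multiplier applies until the next timestamp (illustrative only).

def current_multiplier(entries, t, default=1.0):
    """entries: (timestamp, multiplier) pairs sorted by timestamp."""
    times = [ts for ts, _ in entries]
    i = bisect.bisect_right(times, t) - 1  # last timestamp <= t
    return entries[i][1] if i >= 0 else default

entries = [(0, 1.0), (10, 0.5), (20, 2.0)]
print(current_multiplier(entries, 5))   # 1.0
print(current_multiplier(entries, 15))  # 0.5
print(current_multiplier(entries, 25))  # 2.0
```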

  • ClockTriangleFunction: The Time object in the role of period must not contain a negative value!
  • ClockSinusFunction: The Time object in the role of period must not contain a negative value!
  • ClockMultiplierListEntry: The Time object in the role of time must not contain a negative value!

3.12 Software Model

The AMALTHEA software model is centrally accessible through the SWModel element.

3.12.1 Memory Information

Analyzing and mapping the software structure to available memories requires additional information about the included elements. This information targets the memory size consumed by an element, represented by the size attribute of type DataUnit. The element AbstractMemoryElement is a generalized element that provides this data. The following image shows the structure and also the elements of the software model that extend AbstractMemoryElement (the overview picture only shows the hierarchy, not the possible relationships between the elements):

3.12.2 Labels

The label element represents a data element. It is directly located in a defined area of a given memory.
It can be used as a parameter, a temporary variable or a constant value.

The following attributes describe a label:

Name Description
name The name represented as String value (derived from AbstractElementMemoryInformation)
displayName In addition to the name attribute, which must be unique, a label can also be described by an additional and optional display name. The displayName attribute must not be unique. It can be used for example to represent specification data, which can be different from the unique name (like a symbol) of an available software build.
dataType Reference to the data type definition
constant Boolean value to represent, if label is representing a constant value
bVolatile Boolean value to mark a label as volatile
dataStability Enumeration value to represent the data stability needs of the label. If set, it overwrites the global settings stated by the OsDataDependency, otherwise it inherits them (see OS Data Consistency).

3.12.3 Channels

The channel element has two different characteristics: it represents a data element in memory and also a way for runnables to transmit larger amounts of data. A typical application is the streaming of video data, where a continuous sequence of images is sent in smaller chunks.

The following attributes describe a channel:

Name Description
name The name represented as String value (derived from AbstractElementMemoryInformation)
displayName In addition to the name attribute, which must be unique, a label can also be described by an additional and optional display name. The displayName attribute must not be unique. It can be used for example to represent specification data, which can be different from the unique name (like a symbol) of an available software build.
elementType Reference to the data type definition of a single element
defaultElements Number of elements initially in the channel (at start-up)
maxElements Depth of channel (maximum number of elements that may be stored)

3.12.4 Data Types

General Information

The AMALTHEA data model supports meta information for base data types. For this purpose, the element BaseTypeDefinition exists in the software part of the model. It consists of the name and the number of bits to define a data type. In addition, the mapping of these data types to a target platform, like AUTOSAR or any other one, can be stored, represented by the DataPlatformMapping element.

Compound data types are data structures based on given or defined base types.
In the literature they are also often called composite or derived types.
The result of such a definition is a new data type, which can be used like a base data type.
They can consist of static or dynamic structures, like arrays or stacks.

The following compound data type definitions are currently supported:

  • Pointer: Holds a reference to another type using its memory address
  • Array: Contains a number of elements of the same data type. The size of an array can be fixed or expandable.
  • Struct: Contains other data types in a structured way, often called fields or members. The fields can be accessed by their name.

Sample

In the picture below a small sample is modeled containing the following information:

  • Boolean type with size of 8 bits and platform mapping for AR (Boolean) and C (bool)
  • Char16 type with size of 16 bits and platform mapping for AR (Char16) and C (short)
  • charArray consists of 4 elements of type Char16
  • "hello world struct" with the attribute name (Char16) and valid (Boolean)

Note: The picture shows the activated element types for better understanding!

3.12.5 Activations

Activations are used to specify the intended activation behavior of Runnables and ProcessPrototypes. Typically they are defined before the creation of tasks (and the runnable-to-task mappings). So if no tasks are defined, or if the mapping of runnables to tasks is not done yet, this is a way to document when the runnables should be executed.
Activations are independent of other top-level elements in the AMALTHEA model. Single, periodic, sporadic, event or custom (free textual description) activation patterns can be defined. This information can be used in the following development steps to create tasks, stimuli and the mappings to dedicated hardware.

  • PeriodicActivation: The Time object in the role of min must not contain a negative value!
  • PeriodicActivation: The Time object in the role of max must not contain a negative value!
  • PeriodicActivation: The Time object in the role of offset must not contain a negative value!
  • PeriodicActivation: The Time object in the role of deadline must not contain a negative value!
  • SingleActivation: The Time object in the role of min must not contain a negative value!
  • SingleActivation: The Time object in the role of max must not contain a negative value!

3.12.6 Runnables and Services

Both elements, runnables and services, are an abstraction of an executable entity. They are both represented by the Runnable element and are distinguished by the service attribute of type boolean.
Unlike tasks, which provide a context for the operating system, runnables and services include the instructions to perform. They include an abstraction of these instructions using different algorithms, based on performance data.
The difference between runnables and services is based on their activation and type of calling. While the initial activation of a runnable can only be performed by a task or another runnable, services can only be activated by runnables or other services.
Runnables and Services in the AMALTHEA model currently have no call parameters. Remark: Under discussion for one of the next releases.

Based on these types of information, an additional analysis can be performed.
The next picture shows the call graph of tasks, runnables and services.

3.12.7 Runnables

The next picture shows the general structure of the Runnable element.

Name Description
callback True if a runnable is used as a callback
service True if Runnable element can be seen in a service or global function manner. In more detail, the Runnable is only called from other Runnables and not directly from a Task context.
deadline Information in Time at which the element must be finished / terminated.
runnableItems List containing RunnableItem objects, representing instructions, label accesses, other Runnable accesses. The possibilities are described below.
tags Can be used to annotate or group special kind of Runnables
activation Although runnables in the simulation can only be activated by tasks, it is possible to specify an activation pattern for a runnable.
asilLevel Possibility to specify the ASIL level ("automotive safety integrity level").
  • Runnable: The Time object in the role of deadline must not contain a negative value!

3.12.8 Runnable Items

The RunnableItems describe the detailed behavior of a Runnable element. This can either be an abstraction of instructions, representing the consumed running time, or an access to other elements like Runnables, Labels and Semaphores. An overview of the different possibilities is given in the next pictures.

Groups

A Group is used to structure the RunnableItems. The boolean attribute isOrdered indicates if the execution order of the contained elements is fixed or random.

Calls and AUTOSAR communication

Name Description
RunnableCall The activation of another Runnable.
ServerCall An abstract description for client/server communication. It refers to a required runnable that describes the called server operation
SenderReceiverCommunication An abstract description for sender-receiver communication (it can be read or write). Whether the access has to be buffered is represented using the values of LabelAccessBuffering.

Label Access

Name Description
access The type of access is represented using the values of LabelAccessEnum.
data Describes the access to an existent Label (set as reference).
dataStability Describes the data stability needs. If set, it overwrites the label settings, otherwise it inherits them (see OS Data Consistency).
statistic Defines the frequency of the label access.
transmissionPolicy The following attributes reflect the computing demand (time) depending on data:
  • chunkSize: Size of a part of an element, maximum is the element size.
  • chunkProcessingInstructions: Number of instructions that will be executed to transmit one chunk.
The next attribute specifies the amount of data actually accessed by a runnable (required to analyze memory bandwidth demands):
  • transmitRatio: Specifies the ratio of each element that is actually transmitted by the runnable. The value must be in the range [0, 1]; the default value is 1.0.
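One plausible reading of these transmission-policy attributes is that the transmitted share of an element is split into chunks, each costing a fixed number of instructions. The following sketch illustrates that interpretation; it is not a normative AMALTHEA formula:

```python
import math

# Illustrative interpretation of a transmission policy: the transmitted
# share of the element is split into chunks, and each chunk costs a
# fixed number of instructions (not a normative AMALTHEA formula).

def transmission_instructions(element_size, chunk_size,
                              chunk_processing_instructions,
                              transmit_ratio=1.0):
    transmitted = element_size * transmit_ratio      # data actually accessed
    chunks = math.ceil(transmitted / chunk_size)     # chunks needed for it
    return chunks * chunk_processing_instructions

# 64-byte label, 16-byte chunks, 10 instructions per chunk, half transmitted:
print(transmission_instructions(64, 16, 10, 0.5))  # 2 chunks -> 20
```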

Channel Access

Common attributes:

Name Description
data Describes the access ( ChannelSend or ChannelReceive) to an existent Channel (set as reference).
elements Maximum number of elements that are transmitted.
transmissionPolicy The following attributes reflect computing demand (time) depending on data:
  • chunkSize: Size of a part of an element, maximum is the element size.
  • chunkProcessingInstructions: Number of instructions that will be executed to send / receive one chunk.
The next attribute specifies the amount of data actually accessed by a runnable (required to analyze memory bandwidth demands):
  • transmitRatio: Specifies the ratio of each element that is actually transmitted by the runnable. The value must be in the range [0, 1]; the default value is 1.0.

Receive attributes:

Name Description
receiveOperation The type of an operation is defined by LIFO or FIFO, Read or Take:
  • LIFO: last-in, first-out
  • FIFO: first-in, first-out
  • Read: reads elements (without modifying the channel content)
  • Take: removes the received elements
elementIndex Position (index) in channel at which the operation is effective. Zero is the default and denotes the oldest (FIFO) or newest element (LIFO) in the channel.
lowerBound Minimum number of elements returned by the operation. The value must be in the range [0, n], where n is the maximum number of elements that are received. The default value is n.
dataMustBeNew Specifies if the operation must only return elements that were not previously read by the runnable. The default value is false.

Semaphore Access

Name Description
SemaphoreAccess Represents an access of a Semaphore. The type of access is defined using the SemaphoreAccessEnum values. The Semaphore itself is set as a reference to an existent one.

Mode Label Access

Name Description
ModeLabelAccess Describes the access to an existing ModeLabel (set as reference). The type of access is represented using the values of LabelAccessEnum: write represents the change of a mode label. The defined ModeLiteral is set. read indicates that the behavior of the runnable is influenced by the current value of the ModeLabel.
  • ModeLabelAccess: The Mode objects referenced via modeValue and data must be equal!

Runnable Mode Switch

With the RunnableModeSwitch it is possible to define different execution paths. They can be used like if-else or switch-case statements in a programming language. A RunnableModeSwitch uses the value of a ModeLabel (with the help of a ModeValueProvider) to decide which path has to be executed.
ModeSwitchEntry objects represent the different paths, and each entry defines, with the help of the ModeLiteral values, the value the mode must have to execute the path.
At the RunnableModeSwitch it is also possible to define a default path, which is executed if none of the ModeSwitchEntries matches the currently set value of the mode.

Runnable Probability Switch

Each entry (path) of a Runnable Probability Switch has a probability value. As the name indicates, this value defines the probability that the path is executed. It is computed as the entry's value divided by the sum of the values of all Probability Switch Entries in the surrounding switch.
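
The selection logic can be sketched in plain Java. This is a self-contained illustration with hypothetical names, not the AMALTHEA API; each entry is represented only by its probability value:

```java
import java.util.List;

// Hypothetical sketch: pick a Probability Switch Entry given a random
// number in [0, 1). Each entry's probability is its value divided by
// the sum of all entry values in the surrounding switch.
public class ProbabilitySwitchDemo {

    static int selectEntry(List<Double> entryValues, double random01) {
        double sum = entryValues.stream().mapToDouble(Double::doubleValue).sum();
        double threshold = random01 * sum;
        double cumulative = 0.0;
        for (int i = 0; i < entryValues.size(); i++) {
            cumulative += entryValues.get(i);
            if (threshold < cumulative) {
                return i; // index of the entry (path) to execute
            }
        }
        return entryValues.size() - 1; // guard against rounding at the upper end
    }

    public static void main(String[] args) {
        // Two entries with values 1 and 3 -> probabilities 0.25 and 0.75
        List<Double> values = List.of(1.0, 3.0);
        System.out.println(selectEntry(values, 0.1)); // 0 (0.4 falls into entry 0)
        System.out.println(selectEntry(values, 0.5)); // 1 (2.0 falls into entry 1)
    }
}
```

With these example values, the first entry is executed with probability 1 / (1 + 3) = 0.25 and the second with probability 0.75.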

Runnable Instructions

Runnable Instructions allow specifying the default instructions and, optionally, instructions for specific core types (e.g. FPGA). These extended instructions are implemented as a map with the core type as the key.
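
A minimal sketch of this lookup, with hypothetical names and values (the real model stores Instructions objects, not plain numbers):

```java
import java.util.Map;

// Hypothetical sketch: extended instructions are kept in a map keyed by
// core type; the default instructions serve as the fallback for core
// types without a specific entry.
public class RunnableInstructionsDemo {

    static long instructionsFor(String coreType, long defaultInstructions,
                                Map<String, Long> extendedInstructions) {
        // Use the core-type specific value if present, otherwise the default
        return extendedInstructions.getOrDefault(coreType, defaultInstructions);
    }

    public static void main(String[] args) {
        Map<String, Long> extended = Map.of("FPGA", 500L);
        System.out.println(instructionsFor("FPGA", 10000L, extended)); // 500
        System.out.println(instructionsFor("ARM", 10000L, extended));  // 10000
    }
}
```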

Statistical Values

Different types of statistical values can be attached to different elements (as shown in the picture above). These values can be used to describe the following behaviors in more detail:

  • RunEntityCallStatistic: Can be used to describe in more detail the dynamic call behavior of a [ Task - Runnable] or [ Runnable - Runnable] call. The value represents how often the call is performed.
  • LabelAccessStatistic: Describes in more detail the dynamic behavior of accessing a Label, e.g. cache misses or how often a read/write is performed.
  • InstructionFetch: Describes the behavior of fetching instructions from memory.

3.12.9 Tasks / ISR

The following figure shows the structure which describes Tasks and Interrupt Service Routines ( ISR) in the AMALTHEA software model. The abstract class Process generalizes ISRs and Tasks and provides the common attributes, like priority for priority based scheduling algorithms or the activation represented by the Stimulus. A Task or ISR contains calls either to other Tasks or Runnables. These types of calls are included in the callGraph attribute.

Name Description
name Name of the process
priority Priority of the process for priority based scheduling.
osekTaskGroup Priority for defining groups of tasks according to OSEK
callGraph List of task calls. This specifies the behavior of the process. The objects will be executed in the same order they are stored in the list.
preemption Defines the preemption of a task by higher priority tasks.
multipleTaskActivationLimit Defines the maximum number of concurrently existing instances of the same task; this is checked for load limitation reasons when a task is activated. If the multipleTaskActivationLimit is exceeded, the activation is ignored.
stimuli Reference to one or more Stimuli representing the different types of possible activations of this Task

3.12.10 The Call Graph

The Call Graph is used to define how a Task or ISR behaves during its execution. For that it contains two different kinds of elements, Call Sequences and Switches. The following picture shows the data structure of a Call Graph:

The Call Graph contains a list of elements of type GraphEntryBase. This is the basic, abstract type for the Switch types and for the Call Sequence. With the Switches it is possible to define different execution paths. They can be used like if-else or switch-case statements in a programming language. There are two different kinds of Switches, a Mode Switch and a Probability Switch. Each Switch contains a number of Switch Entry objects. These are the different possible program paths that can be executed. Like the Call Graph, a Switch Entry can contain a list of objects that can be Switches or Call Sequences.

Mode Switch

A Mode Switch uses the value of a Mode Label to decide which entry has to be executed. For this, a Mode Label object is referenced (via ModeValueProvider) and each entry defines the values the mode label must have for its path to be executed.
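
The decision can be sketched as follows. This is a hypothetical plain-Java illustration, not the AMALTHEA API; each entry is reduced to the set of mode literal values for which its path is executed:

```java
import java.util.List;
import java.util.Set;

// Hypothetical sketch of mode switch evaluation: the entry whose literal
// values contain the current mode label value is selected; if no entry
// matches, the default path (represented here by -1) is taken.
public class ModeSwitchDemo {

    static int selectPath(String currentModeValue, List<Set<String>> entryLiterals) {
        for (int i = 0; i < entryLiterals.size(); i++) {
            if (entryLiterals.get(i).contains(currentModeValue)) {
                return i; // this entry's path is executed
            }
        }
        return -1; // no entry matched -> execute the default path
    }

    public static void main(String[] args) {
        // Two entries: one for mode literal INIT, one for RUN or DEGRADED
        List<Set<String>> entries = List.of(Set.of("INIT"), Set.of("RUN", "DEGRADED"));
        System.out.println(selectPath("RUN", entries)); // 1
        System.out.println(selectPath("OFF", entries)); // -1 (default path)
    }
}
```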

Probability Switch

Each entry (path) of a Probability Switch has a probability value. As the name indicates, this value defines the probability that the path is executed. It is computed as the entry's value divided by the sum of the values of all Probability Switch Entries in the surrounding switch.

Call Sequence

A Call Sequence contains a list of elements that are executed by the Process. The following list gives a short overview over these elements:

  • Wait Event: Wait for an OS-Event
  • Clear Event: Clear an OS-Event
  • Set Event: Set/Fire an OS-Event
  • Enforced Migration: Migrate to another Scheduler
  • Inter Process Activation: Activate a Process
  • Schedule Point: Trigger the Scheduler
  • Task Runnable Call: Invoke a Runnable

The following picture shows the structure of a Call Graph. The graph is like a tree of Call Sequences and Switches. The Call Sequences are used to define what the task is actually executing. With the Switches it is possible to define alternative execution paths. The elements of the graph are executed in the order they are stored in the data model. The same goes for the elements within a Call Sequence.

3.12.11 Call Sequence Content

As already explained, the content of a Call Sequence specifies what a process is actually doing when it is executed.
This section describes the different entry-types of a Call Sequence.

Task Runnable Call

With the Task Runnable Call the process executes a specific runnable.

Enforced Migration

Each process is controlled by at least one scheduler. A scheduler is the resource owner of one or multiple processor cores (The scheduler decides on which of its cores the process is executed). The Enforced Migration forces the process to switch to another scheduler. Therefore the Enforced Migration entry contains a reference to the new scheduler.

Inter Process Activation

The Inter Process Activation references a stimulus of type Inter Process. The execution of an Inter Process Activation entry triggers the processes that are mapped to this stimulus.

Schedule Point

At a Schedule Point, the process calls the scheduler that currently administrates it. This is used for cooperative task scheduling (see OSEK Specification 2.2.3, 2005).

Terminate Process

If a Terminate Process is reached during the execution of a Call Graph, the Task or ISR terminates immediately. It is not required to insert this element at the end of a Call Graph. It can be used to define an execution path (by using Switches) that terminates a process.

Wait/Clear/Set Event

The AMALTHEA Software Model contains a list of OS-Event objects. These can be used for task synchronization. To deal with OS-Events a Call Sequence can execute the elements Wait Event, Clear Event and Set Event.

With Wait Event the process waits for a number of events (defined in the Event Mask) to be set. It can be defined whether the process waits for just one of the OS-Events (maskType = OR) or for all of them (maskType = AND). The waiting behavior defines whether the process blocks the core while it waits (active) or not (passive).
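
The mask evaluation can be sketched as follows, assuming a simplified representation of OS-Events as strings (hypothetical names, not the AMALTHEA API):

```java
import java.util.Set;

// Hypothetical sketch of OS-Event mask evaluation: with maskType = AND
// all events of the mask must be set, with maskType = OR one set event
// from the mask is sufficient to release the waiting process.
public class EventMaskDemo {

    enum MaskType { AND, OR }

    static boolean isReleased(MaskType type, Set<String> mask, Set<String> setEvents) {
        if (type == MaskType.AND) {
            return setEvents.containsAll(mask);           // all events required
        }
        return mask.stream().anyMatch(setEvents::contains); // any event suffices
    }

    public static void main(String[] args) {
        Set<String> mask = Set.of("EvA", "EvB");
        System.out.println(isReleased(MaskType.OR, mask, Set.of("EvA")));  // true
        System.out.println(isReleased(MaskType.AND, mask, Set.of("EvA"))); // false
    }
}
```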

Set Event sets/fires a list of OS-Events. Here it is possible to define a context for the OS-Event. If a process is referenced, then the OS-Events are set just for this process.

The execution of a Clear Event entry unsets all referenced OS-Events.

For more information about OS-Events, see the OSEK Specification 2.2.3, 2005.

3.12.12 Modes

Modes and Mode Labels

The Mode element denotes a specific system state. It is described using the following attributes:

Name Description
name Name of the mode
literals Enumeration values of the mode

Modes are used in Mode Switches inside the Call Graph of Tasks and ISRs using Mode Labels. Moreover, mode labels are used to enable and disable Stimuli.

The Mode Label element represents a value of a specific Mode. It is described using the following attributes:

Name Description
name Name of the mode label
displayName In addition to the name attribute, which must be unique, a mode label can also be described by an additional, optional display name. The displayName attribute does not have to be unique. It can be used, for example, to represent specification data, which can differ from the unique name (like a symbol) of an available software build.
mode Reference of the Mode
initialValue Initial value of the referenced mode

Mode Changes

The value of the mode label can be changed from its initial value using the set-mode-labels-list of a Stimulus.

Another possibility is a write access within a Runnable via Mode Label Access.

3.12.13 Process Prototypes

In addition to the Task elements, the AMALTHEA model contains a process prototype element.
This prototype can be used to define the raw data of a task. It can specify accesses to
labels (read, write) or calls to runnables/functions, just like a normal task, but not the
order of the accesses. These prototypes are then processed by different algorithms, which create the tasks and fill, verify, or modify the data based on their checks. The final result of this processing is a set of tasks that correspond to the data of the prototypes.

These tasks represent the current state and can be further processed, for example for code generation or further simulation. With process prototypes available in the model, it is possible to define the structure of the software in an early development phase. The implementation at that moment is open and not yet completed, but the general idea can already be verified.
Another use case is the distribution to a multi-core system, coming from a single-core system. For this, the call graph can be analyzed and computed to determine the right order and parallelization of the elements and their dependencies.

3.12.14 Process Chains

The following figure shows the structure which describes Process Chains in the AMALTHEA software model. A process chain is used to group tasks and ISRs that execute after each other and represent an end-to-end data processing path. The processes inside a process chain are connected via Inter Process Activations.

The following attributes describe a process chain:

Name Description
name Name of the process chain
processes List of tasks and ISRs which are included in the process chain

3.12.15 Custom Entities

The CustomEntity element provides a generic way to add elements that are not yet available in the software model. Its only attribute defines the type of the entity as a String. Additional properties can be set using CustomAttributes.

3.12.16 Section

Section (Virtual Memory Section) is used to group memory elements (Labels, Runnables). This is achieved by associating the Section element with Label and Runnable elements.

The Section element has the following properties:

  • name
  • asilLevel

With this type of information available, the complexity of mapping software components to memories can be reduced. The next picture gives an overview of the general idea of mapping with Sections.

4 Developer Guide

4.1 Overview of Features and Plug-ins

4.1.1 Features

There are two top-level features of the AMALTHEA Tool Platform: the AMALTHEA Tool Platform and the AMALTHEA Tool Platform SDK. The non-SDK feature contains everything you need to run the AMALTHEA tools and models. If you want to develop new tools on top of the models or extend the models, you may need the SDK; besides the runtime, it also contains all sources.

The top-level feature itself contains basic platform plug-ins and tools as well as several sub-features. These are

  • APP4MC Platform – Platform (includes all features and 3rd party libraries).
  • APP4MC Docu – Basic Help (is extended by loading other features).
  • APP4MC Examples – Examples that can be added by the example wizard.
  • APP4MC Multicore – Algorithms for multicore development like partitioning and mapping.
  • APP4MC Tracing – Record, analyze and transform runtime traces of the embedded system.
  • APP4MC Validation – Model validations (can be selected and executed on demand).
  • AMALTHEA Models – AMALTHEA models (EMF).
  • AMALTHEA Edit – AMALTHEA model edit support.
  • AMALTHEA Editors – Editors for the models.
  • AMALTHEA Converters – Independent feature for AMALTHEA model migration.
  • AMALTHEA Workflow Common – Basic elements to define workflows on the model.
  • AMALTHEA Workflow – Defining workflows (based on MWE2 workflow engine).
  • AMALTHEA Workflow Scripting – Defining workflows (based on EASE scripting).
  • AMALTHEA Import – Import of the models.
  • AMALTHEA Export – Exports of the models.

For each feature there also exists an SDK containing the sources. The AMALTHEA Tool Platform SDK contains all features as SDKs.

4.1.2 Plug-ins

The plug-ins are organized in various namespaces. The ids of most plug-ins containing models are within the namespace org.eclipse.app4mc.amalthea.model, whereas examples can be found in plug-ins named org.eclipse.app4mc.amalthea.example.

4.2 Model Validation

For custom model validations, AMALTHEA uses the Sphinx validation project, which is based on EMF validation.

4.2.1 Understanding Check Catalogs

A Sphinx validator can optionally make use of a catalog. When a catalog is used, each @Check annotation should have a constraint name (which serves as a unique id) that matches a constraint in the catalog, giving a logical mapping from methods to constraints. For a constraint to be applicable within the scope of a validator, the set of categories specified in its @Check annotation should be a subset of the set of categories referenced by the constraint in the check catalog. In other words, categories are used to restrict the applicability of check methods.
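
The applicability rule can be sketched as a simple subset check (hypothetical names, not the Sphinx API):

```java
import java.util.Set;

// Hypothetical sketch of the applicability rule described above: a check
// method is applicable if the categories in its @Check annotation are a
// subset of the categories the catalog references for the constraint.
public class CheckScope {

    static boolean isApplicable(Set<String> annotationCategories,
                                Set<String> catalogCategories) {
        return catalogCategories.containsAll(annotationCategories);
    }

    public static void main(String[] args) {
        Set<String> catalog = Set.of("BasicCategory", "TimingCategory");
        System.out.println(isApplicable(Set.of("BasicCategory"), catalog)); // true
        System.out.println(isApplicable(Set.of("OtherCategory"), catalog)); // false
    }
}
```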

The following catalog demonstrates the relationship between categories and constraints.

Except for constraint check 5, every constraint is assigned to at least one category. Constraints check 4 and check 5 are linked to multiple categories.

Hint: The assignment from constraint to one or more categories could also be done in the @Check annotation itself.

4.2.2 Adding own Validations

Adding your own validation to the AMALTHEA framework can be done with the following steps:

4.2.3 Plugin dependencies

org.eclipse.core.runtime,
org.eclipse.sphinx.emf,
org.eclipse.sphinx.emf.check,
org.eclipse.app4mc.amalthea.model

Implementation

Create a class that extends org.eclipse.sphinx.emf.check.AbstractCheckValidator, implement a method of arbitrary name, and tag it with the @Check annotation. It is important to know that the check method needs at least one parameter. For instance, if you want to perform a check related to the whole AMALTHEA model, you should use org.eclipse.app4mc.amalthea.model.Amalthea as the parameter type. An empty parameter list will never activate the check for your AMALTHEA model. This enables a so-called out-of-the-box scoping, which means that the validation checks only appear on the relevant model elements
(e.g. validations related to the AMALTHEA software model only appear when right-clicking on an appropriate model element of type Software).

import org.eclipse.sphinx.emf.check.AbstractCheckValidator;
import org.eclipse.sphinx.emf.check.Check;
import org.eclipse.app4mc.amalthea.model.central.AMALTHEA;


public class BasicExampleConstraint extends AbstractCheckValidator {

	/**
	 *
	 * constraint: 
	 *   Must match at least to one existing constraint from the check catalog
	 *
	 * categories: 
	 *   Is of type array, thus constraint can be assigned to multiple categories
	 *
	 */
	@Check(constraint = "BasicExampleConstraint", categories = { "BasicCategory", ...and many more ... })
	void checkVeryImportantConstraint(final AMALTHEA ama)
	{

		/**
		 * Perform your check
		 */

		error("Error occurs", ama, ama.eContainingFeature());
		warning("Warning occurs", ama, ama.eContainingFeature());
		info("Info occurs", ama, ama.eContainingFeature());
		// Error type information is taken directly from catalog
		issue(ama, ama.eContainingFeature()); 
	}
}

Create your own class and add at least one method tagged with the Sphinx @Check annotation. The method should contain exactly one parameter, typed with an AMALTHEA model element; the most generic one is org.eclipse.app4mc.amalthea.model.central.AMALTHEA.
Implement the validate method by adding your own custom validation.

Plugin configuration

To register your validation in the Sphinx validation framework, add an extension for the extension point org.eclipse.sphinx.emf.check.checkvalidators to the plugin.xml, with a child entry for each implemented constraint class. For each child entry you must specify your constraint class (including the @Check annotated methods) and optionally a catalog that contains more detailed information about the composition of and relationship between constraints.

The current configuration of the AMALTHEA model validations can be found in the plugin org.eclipse.app4mc.amalthea.model.common.validation.sphinx in the plugin.xml.

There the constraints section includes the list of available validations and their corresponding implementation class.

4.3 Model Workflow

4.3.1 Introduction

The APP4MC AMALTHEA platform provides the option to define a workflow with different steps on a model. AMALTHEA provides a simple API for implementing such a workflow. The workflow can be defined either in plain Java or in any other language that is able to access Java classes.
The EASE framework provides a scripting environment inside of Eclipse with different script engines like Rhino (JavaScript), Jython or Groovy. AMALTHEA provides a sample and some convenient helpers based on EASE to define and execute such a workflow.
As an alternative, APP4MC also provides an implementation for the Modeling Workflow Engine 2 (MWE2), coming from the Xtext framework. Here the workflow is defined in the textual MWE2 syntax, containing different components which work on the given model.
AMALTHEA provides several plugins for this purpose:

org.eclipse.app4mc.amalthea.workflow.core
Includes basic API and some predefined workflow components, which can be used independent from any framework.
org.eclipse.app4mc.amalthea.workflow.ease
Provides some helper modules for EASE
org.eclipse.app4mc.amalthea.workflow.base
Provides basic classes for usage with MWE2

4.3.2 General Structure

The plugin org.eclipse.app4mc.amalthea.workflow.core provides a general abstract class org.eclipse.app4mc.amalthea.workflow.core.WorkflowComponent, which can be extended to implement your own workflow step.

It provides the following features:

  • Defines the constant AMALTHEA_SLOT, which can be used to store and retrieve the AMALTHEA model from the org.eclipse.app4mc.amalthea.workflow.core.Context.
  • Provides some convenient methods to retrieve the model from the context, get a copy of the model or store the model in the context.

The interface org.eclipse.app4mc.amalthea.workflow.core.Context provides convenient methods to store and retrieve data in a org.eclipse.app4mc.amalthea.workflow.core.WorkflowComponent.

The class org.eclipse.app4mc.amalthea.workflow.core.DefaultContext is a default implementation using an internal java.util.HashMap to store the data.

A sample workflow implementation with two components WorkflowComponent1 and WorkflowComponent2 can look like the following structure.

Both classes are extending org.eclipse.app4mc.amalthea.workflow.core.WorkflowComponent.

The next step is to create a WorkflowDefinition, which needs to do the following steps:

  1. Create an instance of an org.eclipse.app4mc.amalthea.workflow.core.Context using the org.eclipse.app4mc.amalthea.workflow.core.DefaultContext
  2. Create an instance of WorkflowComponent1 and configure it if needed with the proper setter methods
  3. Call the run method of WorkflowComponent1 and pass the context
  4. Create an instance of WorkflowComponent2 and configure it if needed with the proper setter methods
  5. Call the run method of WorkflowComponent2 and pass the context
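
The steps above can be sketched in plain Java. The class and method names mirror the API described in this section, but the sketch is self-contained and illustrative only, not the real implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical, self-contained sketch of the workflow pattern: a shared
// Context carries data between components, each component implements run().
public class WorkflowSketch {

    interface Context {
        void set(String slot, Object value);
        Object get(String slot);
    }

    // Analogous to DefaultContext: an internal HashMap stores the data
    static class DefaultContext implements Context {
        private final Map<String, Object> data = new HashMap<>();
        public void set(String slot, Object value) { data.put(slot, value); }
        public Object get(String slot) { return data.get(slot); }
    }

    static abstract class WorkflowComponent {
        abstract void run(Context ctx);
    }

    static class WorkflowComponent1 extends WorkflowComponent {
        void run(Context ctx) { ctx.set("AMALTHEA", "model-loaded"); }
    }

    static class WorkflowComponent2 extends WorkflowComponent {
        void run(Context ctx) { ctx.set("result", ctx.get("AMALTHEA") + "-processed"); }
    }

    public static void main(String[] args) {
        Context ctx = new DefaultContext();              // step 1
        WorkflowComponent c1 = new WorkflowComponent1(); // step 2
        c1.run(ctx);                                     // step 3
        WorkflowComponent c2 = new WorkflowComponent2(); // step 4
        c2.run(ctx);                                     // step 5
        System.out.println(ctx.get("result"));           // model-loaded-processed
    }
}
```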

Using the context, data can thus be shared between the different workflow component implementations.
The following diagram shows this flow in more detail:

4.3.3 Available Basic Components

The APP4MC AMALTHEA platform ships with some available workflow steps, which can be used out of the box.

Model Reader

The component org.eclipse.app4mc.amalthea.workflow.component.ModelReader reads a given list of files containing AMALTHEA models. The result model is stored in the AMALTHEA_SLOT by default. Please refer to the JavaDoc for more details.

A simple configuration inside of a workflow can look like the following:

ModelReader reader = new ModelReader();
reader.addFileName("path to file");
reader.run(ctx);

Model Writer

The component org.eclipse.app4mc.amalthea.workflow.component.ModelWriter writes a given AMALTHEA model to either one file or several files. By default, the model currently available in the AMALTHEA_SLOT is taken.
The following parameters can be set:

  • boolean singleFile, default true
  • String outputDir
  • String fileName; if the output is split into several files, the different models are distinguished by an additional suffix in the name indicating the contained model.

Sample configuration inside of a workflow:

ModelWriter writer = new ModelWriter();
writer.setOutputDir("path to dir");
writer.setFileName("output.amxmi");
writer.run(ctx);

Add Schedule Points

The component org.eclipse.app4mc.amalthea.workflow.component.AddSchedulePoints modifies a given model (by default in the AMALTHEA_SLOT): the org.eclipse.app4mc.amalthea.model.sw.Task elements contained in the org.eclipse.app4mc.amalthea.model.sw.SWModel are checked whether their preemption is set to cooperative. If this is the case, new elements of type org.eclipse.app4mc.amalthea.model.sw.SchedulePoint are added between the elements of the org.eclipse.app4mc.amalthea.model.sw.CallGraph.

Sample configuration inside of a workflow:

AddSchedulePoints addSchedulePoints = new AddSchedulePoints();
addSchedulePoints.run(ctx);

4.3.4 Other Components

Create Tasks

The component org.eclipse.app4mc.multicore.openmapping.workflow.CreateTasks from the plugin org.eclipse.app4mc.multicore.openmapping is the corresponding workflow element for the Task creation example. It takes a copy of a given model (defaults to AMALTHEA_SLOT) and performs the action org.eclipse.app4mc.multicore.openmapping.algorithms.taskgen.pragmatic.PragmaticTaskCreator. The result of this action is stored by default in the model slot createtasks.

Sample configuration inside of a workflow:

CreateTasks ct = new CreateTasks();
ct.run(ctx);

Generate Mapping

The component org.eclipse.app4mc.multicore.openmapping.workflow.GenerateMapping from the plugin org.eclipse.app4mc.multicore.openmapping is the corresponding workflow element for the Mapping example. It takes a copy of a given model (defaults to AMALTHEA_SLOT) and, depending on the parameter mappingAlg, performs one of the following actions:

  • org.eclipse.app4mc.multicore.openmapping.algorithms.heuristic.lb.LoadBalancingDFG
  • org.eclipse.app4mc.multicore.openmapping.algorithms.ilp.lb.ILPBasedLoadBalancing
  • org.eclipse.app4mc.multicore.openmapping.algorithms.ilp.energyminimization.EnergyMinimization
  • org.eclipse.app4mc.multicore.openmapping.algorithms.ga.lb.GABasedLoadBalancing

The result of this action is stored by default in the model slot mapping.
Parameter:

  • String mappingAlg with a value of either dfg, ilp_lb, ilp_energy or ga_lb

Sample configuration inside of a workflow:

GenerateMapping mappingDfg = new GenerateMapping();
mappingDfg.setModelSlot("createtasks");
mappingDfg.setMappingAlg("dfg");
mappingDfg.setEnableLog(true);
mappingDfg.run(ctx);

4.3.5 EASE modules

The purpose of using EASE is to provide one way to define and run a workflow for a model.
Therefore APP4MC provides some helper methods to be used in the EASE scripting environment.
The modules are provided by the plugin org.eclipse.app4mc.amalthea.workflow.ease.

Workflow Module

The workflow module provides some helpers for running an APP4MC workflow definition based on EASE.
The general module can be loaded with the following line:

loadModule('/APP4MC/Workflow')

When a script is run via right click and Run As -> EASE Script, EASE by default opens its own Eclipse console to show the output of the executed script. Therefore, if you use Log4J for your logging, the following provided methods can be used to configure dedicated Log4J loggers to also use the EASE console for output.

The following table gives an overview of the available helper methods:

Method Params Description
addLoggerToConsole String loggerName Adds a Log4J logger to the currently used output of the EASE script engine.
addLoggerToConsole String loggerName, String pattern Adds a Log4J logger to the currently used output of the EASE script engine with a given pattern; see org.apache.log4j.PatternLayout for more details
endWorkflow - Performs basic finish actions; should be called at the end

4.3.6 MWE2 Workflow

The plugin org.eclipse.app4mc.amalthea.workflow.base provides a general class org.eclipse.app4mc.amalthea.workflow.base.AmaltheaWorkflow, which can be extended to implement your own workflow step based on the Modeling Workflow Engine 2 (MWE2).
It provides the following features:

  • Extends org.eclipse.emf.mwe.core.lib.WorkflowComponentWithModelSlot
  • Defines the constant AMALTHEA_SLOT, which can be used to store and retrieve the AMALTHEA model from the org.eclipse.emf.mwe.core.WorkflowContext.
  • Provides some convenient methods to retrieve the model from the context, get a copy of the model or store the model in the context.

To use the AMALTHEA model workflow component, currently the following dependencies are needed in addition to the AMALTHEA model plugins:

  • org.eclipse.app4mc.amalthea.workflow.base
  • org.eclipse.emf.mwe2.lib
  • org.eclipse.emf.mwe2.launch
  • org.apache.log4j

MWE2 Components

The APP4MC AMALTHEA platform ships with some available workflow steps for usage together with MWE2.
Note: To use the components below as shown in the corresponding configurations, the classes must be imported!

Reader

The component org.eclipse.app4mc.amalthea.workflow.util.AmaltheaReader reads a given list of files containing AMALTHEA models. The result model is stored in the AMALTHEA_SLOT by default.

Sample configuration inside of a workflow:

component = AmaltheaReader {
	fileName = "${base}/model/AMALTHEA_Democar_MappingExample.amxmi"
	fileName = "${base}/model/AMALTHEA_Democar_MappingExample-hw.amxmi"
}

Writer

The component org.eclipse.app4mc.amalthea.workflow.util.AmaltheaWriter writes a given AMALTHEA model to either one file or several files. By default, the model currently available in the AMALTHEA_SLOT is taken.
The following parameters can be set:

  • boolean singleFile, default true
  • String outputDir
  • String fileName; if the output is split into several files, the different models are distinguished by an additional suffix in the name indicating the contained model.

Sample configuration inside of a workflow:

component = AmaltheaWriter {
	fileName = "createtasks"
	singleFile = true
	outputDir = "${base}/workflow-output"
}

Add Schedule Points

The component org.eclipse.app4mc.amalthea.workflow.util.AddSchedulePoints modifies a given model (by default in the AMALTHEA_SLOT): the org.eclipse.app4mc.amalthea.model.Task elements contained in the org.eclipse.app4mc.amalthea.model.SWModel are checked whether their preemption is set to cooperative. If this is the case, new elements of type org.eclipse.app4mc.amalthea.model.SchedulePoint are added between the elements of the org.eclipse.app4mc.amalthea.model.CallGraph.

Sample configuration inside of a workflow:

component = AddSchedulePoints {
}

4.3.7 Current Limitations / Open Points

As there are two options available (basic Java/EASE and MWE2 based) for running a workflow, some limitations currently exist. The following table should help you to choose the right base:

Use Case Supported in MWE2 Supported in EASE Reason
Loading of AMALTHEA model files (with cross document references) using workflow/script file no yes
  • MWE2: A separate Java runtime is started by MWE2; Sphinx relies on a running Eclipse instance
  • EASE: Makes use of the runtime from runtime workbench
Using workflow components which are available in the runtime (i.e. workflow component classes which are already available in plugins as a part of the runtime product) yes yes
  • MWE2: A separate runtime is started by MWE2; if the plugin containing the workflow component is added as a dependency, its classes are loaded
  • EASE: Makes use of the runtime from runtime workbench and has access to all classes
Defining custom workflow components (Java classes) and using them in the workflow/script file located in the same workspace yes yes
  • MWE2: Custom workflow component objects can be created directly in the MWE2 workflow file (as in Java) and there is no restriction [irrespective of whether there is a constructor]
  • EASE: As the runtime from the launched workbench (runtime workbench) is used, custom Java classes available in the workspace are not part of the classpath. To create objects of custom workflow components (Java classes), the EASE JVM module createInstance API should be used, specifying the absolute path of the Java class
Using Java classes (available in the runtime workspace from other plugins --> these classes are not part of runtime workbench) inside custom workflow component Java classes yes yes
  • MWE2: works perfectly (just like native)
  • EASE: works; for the creation of objects from custom Java classes the EASE JVM module is used, which compiles the used Java classes and creates objects of them
Using classes (API) from 3rd party libraries (added to the classpath of the plugin) inside custom workflow component Java classes yes no(*)
  • MWE2: As new Java runtime is created, all the required plugins & jars are added in the classpath
  • EASE: In the current implementation only the runtime from workbench is considered, jars from the classpath of a plugin (created newly in the runtime workbench) are not loaded in the runtime of EASE. Due to this reason, execution of the script fails with an exception java.lang.NoClassDefFoundError: <your used class from 3rd party lib>

*: This feature is already available in the master branch of EASE. Consider updating to a version newer than 0.3.0 if it is needed.

Sample for using EASE JVM module to load a class from the same workspace:

var labelsCreationComponent = createInstance("workspace://com.ease.example/src/org/eclipse/app4mc/amalthea/example/workflow/components/CreateLabelsComponent.java")

4.3.8 Overall Sample

A sample workflow is available in the AMALTHEA example projects inside of org.eclipse.app4mc.amalthea.example.democar.mapping/workflow/sample_workflow.js. In general this workflow contains all steps as shown in the AMALTHEA examples tutorial Task creation and Mapping example, but it can be executed in one script. Execute it by doing a right click and then Run As -> EASE Script.
The following steps are performed in the sample configuration:

  1. Read the files AMALTHEA_Democar_MappingExample.amxmi and AMALTHEA_Democar_MappingExample-hw.amxmi inside the model folder. The result model is saved in the default AMALTHEA_SLOT.
  2. Perform the Create Tasks component
  3. Write the result model available in the createtasks model slot to a single model file createtasks in the directory workflow-output.
  4. Perform the Generate Mapping component with the dfg algorithm on the result of the Create Tasks component. This is done by specifying modelSlot = "createtasks".
  5. Write the result model available in the mapping model slot to a single model file mapping_dfg in the directory workflow-output.
  6. Perform the Generate Mapping component with the ilp algorithm on the result of the Create Tasks component. This is done by specifying modelSlot = "createtasks". In addition the result slot is overwritten by resultSlot = "mappingilp", as the previous run of Generate Mapping is already stored in the mapping slot.
  7. Write the result model available in the mapping_ilp model slot to a single model file mapping_ilp in the directory workflow-output.

The following diagram shows a snippet of this workflow:

4.3.9 Adding a new workflow component

Below you will find a sample of how to add and implement a new workflow component.

4.3.10 Create project

  1. Add a new plugin project with the name my.sample.workflow
  2. Open the MANIFEST.MF in the META-INF folder.
  3. Switch to the tab Dependencies to add the following plugin dependencies: org.eclipse.app4mc.amalthea.workflow.core
  4. Add a new class my.sample.workflow.HelloWorld, which extends org.eclipse.app4mc.amalthea.workflow.core.WorkflowComponent.
  5. Implement something in the runInternal(Context ctx) throws WorkflowException method (see sample below).
@Override
protected void runInternal(Context ctx) throws WorkflowException {
	// some checking if sw model is available
	if (null == getAmaltheaModel(ctx).getSwModel()) {
		throw new WorkflowException("No proper SWModel available!");
	}
	this.log.info("Number of tasks in model: " + getAmaltheaModel(ctx).getSwModel().getTasks().size());
}

Execute the new component in the available sample

The previously created class my.sample.workflow.HelloWorld should be added to a workflow.
Therefore we are using the provided sample project from APP4MC org.eclipse.app4mc.amalthea.example.democar.mapping.
Before starting with the next steps, we need to start a new runtime from the existing workspace, so that the plugin my.sample.workflow is already loaded.

Note: If you want to use the classes from the plugin my.sample.workflow in an EASE script located in the same workspace, you can create an instance of it at runtime using the JVM module of EASE. Please consult the EASE documentation for more details.

  1. Add the AMALTHEA democar samples to your workspace (File – New – Example – Democar Examples)
  2. Go to the project org.eclipse.app4mc.amalthea.example.democar.mapping.
  3. Open the sample_workflow.js located in the workflow folder.
  4. Add to the imports: importPackage(my.sample.workflow)
  5. Add to the logger configs: addLoggerToConsole("my.sample.workflow")
  6. Add the first snippet below after the first AmaltheaReader component and before the CreateTasks component.
  7. Run the script by doing a right click -> Run As -> EASE Script
var ctx = new DefaultContext()

//Reader
var reader = new ModelReader()
reader.addFileName(MODEL_LOCATION1)
reader.addFileName(MODEL_LOCATION2)
reader.run(ctx)

var hw = new HelloWorld()
hw.run(ctx)

//create tasks based on initial model
//result is saved in modelslot createtasks
var createTasks = new CreateTasks()
createTasks.run(ctx)

4.4 Model Migration

4.4.1 Technologies used

For the migration of AMALTHEA models, a plain Java approach (non-EMF) is used along with the following 3rd party libraries:

  • JDOM 2
  • JAXEN
  • XALAN
  • XERCES

4.4.2 Framework for model migration

Using the above specified technologies, an Eclipse plugin based framework for model migration has been developed.

  • The plugin org.eclipse.app4mc.amalthea.converters.common contains the model migration framework

4.4.3 Components of Model Migration Framework

  • Converters
  • Cache Builders
  • Pre Processor
  • Post Processor

All the above specified components are contributed via extension points defined in the plugin org.eclipse.app4mc.amalthea.converters.common.

Detailed description of each component can be found below:

Converters: This component is responsible for converting the model (XMI) content from one version to another. A Converter component is executed for the selected model files and the corresponding referenced model files (each execution is with respect to a specific model file).

A Converter can be registered by adding an extension definition for org.eclipse.app4mc.amalthea.model.converters and defining the following parameters:

  • ID: The unique identifier of the Converter component
  • Name: Name of the component
  • Input model version: AMALTHEA model version in which the input model content is present.
  • Output model version: AMALTHEA model version to which the input model content should be converted.

All AMALTHEA/AMALTHEA4public ITEA releases are specified with prefix itea (e.g. itea.103, itea.110, itea.111) and APP4MC releases are specified with their version number (e.g. 0.7.0, 0.7.1)

  • Converter Class: Java class implementing the org.eclipse.app4mc.amalthea.converters.common.base.IConverter interface and providing the definition of the method: public void convert(final File targetFile, final Map<File, Document> fileName_documentsMap, final List<ICache> caches) throws Exception
  • enabled: By specifying true/false, the invocation of a specific converter can be enabled or disabled

  • Converter dependencies: A Converter can depend on other Converter components. During the execution of a specific Converter component, its dependencies are invoked first, followed by the dependent converter itself.
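The dependency-first invocation can be sketched with a minimal, self-contained model. Note that ConverterRunner and its methods are illustrative stand-ins, not the actual APP4MC framework API:

```java
import java.util.*;

// Minimal sketch of dependency-first converter invocation: before a
// converter runs, all of its declared dependencies are run, and every
// converter is executed at most once. Class and method names are
// illustrative, not the real APP4MC framework API.
public class ConverterRunner {
    private final Map<String, List<String>> dependencies = new HashMap<>();
    private final List<String> executed = new ArrayList<>();

    public void register(String id, String... deps) {
        dependencies.put(id, Arrays.asList(deps));
    }

    public void run(String id) {
        if (executed.contains(id)) {
            return; // each converter is invoked only once
        }
        for (String dep : dependencies.getOrDefault(id, Collections.emptyList())) {
            run(dep); // dependencies first
        }
        executed.add(id); // then the dependent converter itself
    }

    public List<String> executionOrder() {
        return executed;
    }
}
```

For example, a converter registered with a dependency on another converter causes that dependency to run first, even if only the dependent converter is triggered.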

Converter dependencies can be specified at the extension level in the following way:


Cache Builders: This component is responsible for building the required cache, by querying the input models and storing the necessary information, before the Converters of a specific migration step (e.g. itea.103 to itea.110) are executed.
The scope of each Cache Builder component is limited to a single model migration step (e.g. itea.103 to itea.110).

A Cache Builder can be registered by adding an extension definition for org.eclipse.app4mc.amalthea.model.converters.cachebuilders and defining the following parameters:

  • ID: The unique identifier of the Cache Builder component
  • name: Name of the component
  • class: Java class implementing the org.eclipse.app4mc.amalthea.converters.common.base.ICache interface and providing implementations of the following methods: public void buildCache(final Map<File, Document> fileName_documentsMap), public Map<File, Map<String, Object>> getCacheMap(), public void clearCacheMap()
  • enabled: By specifying true/false, the invocation of a specific Cache Builder component can be enabled or disabled
  • input-model-version: AMALTHEA model version in which the model content is present.

Each Cache Builder component is tightly coupled to a specific AMALTHEA model version

Pre Processor: This component is responsible for fetching/modifying specific information before the invocation of the Cache Builder/Converter components.

This component is tightly coupled to the input version of the AMALTHEA model files which are selected (explicitly/implicitly) for model migration. Each Pre-Processor component is executed only once for the input model files, irrespective of the number of migration steps.

A Pre-Processor component can be registered by adding an extension definition for org.eclipse.app4mc.amalthea.model.converters.preprocessor and defining the following parameters:

  • ID: The unique identifier of the Pre-Processor component
  • class: Java class implementing the org.eclipse.app4mc.amalthea.converters.common.base.IPreProcessor interface and providing an implementation of the method: public void process(final File targetFile, final Map<File, Document> fileName_documentsMap) throws Exception
  • input-model-version: AMALTHEA model version in which the model content can be present. Multiple input-model-versions can be specified; at runtime, if the version of the supplied AMALTHEA models matches at least one of the specified versions, the corresponding pre-processor is executed

Each Pre processor component is tightly coupled to the specified AMALTHEA model versions

Post Processor: This component is responsible for adding/modifying specific information after the invocation of the Cache Builder/Converter components of a specific migration step (e.g. itea.103 to itea.110).

This component is tightly coupled to the input and output versions of the AMALTHEA model files which are selected (explicitly/implicitly) for model migration. Each Post-Processor component is executed after a migration step if the below criteria are fulfilled:

A Post-Processor component can be registered by adding an extension definition for org.eclipse.app4mc.amalthea.model.converters.postprocessor and defining the following parameters:

  • ID: The unique identifier of the Post-Processor component
  • class: Java class implementing the org.eclipse.app4mc.amalthea.converters.common.base.IPostProcessor interface and providing an implementation of the method: public void process(final Map<File, Document> fileName_documentsMap, final AbstractHelper helper) throws Exception
  • input-model-version: AMALTHEA model version in which the model content can be present. Multiple input-model-versions can be specified.
  • output-model-version: AMALTHEA model version to which the model content is converted as part of one of the migration steps. Multiple output-model-versions can be specified.

Example of how Post-Processors are invoked:

If a Post-Processor is defined with input-model-versions itea.103 and itea.110 and output version 0.7.0, and the migration is invoked from itea.103 to 0.7.0 for the specified AMALTHEA models, the following migration steps are performed internally:
  • itea.103 to itea.110
  • itea.110 to itea.111
  • itea.111 to 0.7.0 (APP4MC)

In this case the post-processor is invoked after the migration step itea.110 to itea.111.

The flexibility of post-processors is that they are defined in a generalized way, specifying the direction in which they should be invoked as part of the model migration.

Model migration sequence

Sequence in which various components are invoked during model migration

4.4.4 How to add custom Model Migration components

As described above, the Model Migration framework is part of the org.eclipse.app4mc.amalthea.converters.common plugin. This plugin should be added as a dependency to the custom plugin which shall contain the Model Migration components.

Below are the various extension points available as part of the Model Migration framework:

For defining a custom Model Migration, the minimum requirement is to define an extension for the org.eclipse.app4mc.amalthea.model.converters extension point and specify the class which implements the migration of the model contents.

CacheBuilder, PreProcessor and PostProcessor components are optional; based on the scenarios to migrate the model elements, it must be decided if these components are required.

4.4.5 AMALTHEA meta model changes

Based on the changes in the AMALTHEA meta model across the various releases, the description below contains the differences in detail which are considered for model migration:

Version ITEA 1.0.3 to ITEA 1.1.0

Below are the changes in the meta model from version ITEA 1.0.3 to ITEA 1.1.0

  • Changes in the namespaces:
AMALTHEA Namespace (version ITEA 1.0.3) → AMALTHEA Namespace (version ITEA 1.1.0)
http://www.amalthea.itea2.org/model/1.1.0/sw → http://www.amalthea.itea2.org/model/1.2.0/sw
http://www.amalthea.itea2.org/model/1.1.0/stimuli → http://www.amalthea.itea2.org/model/1.2.0/stimuli
http://www.amalthea.itea2.org/model/1.1.0/propertyconstraints → http://www.amalthea.itea2.org/model/1.2.0/propertyconstraints
http://www.amalthea.itea2.org/model/1.1.0/os → http://www.amalthea.itea2.org/model/1.2.0/os
http://www.amalthea.itea2.org/model/1.1.0/mapping → http://www.amalthea.itea2.org/model/1.2.0/mapping
http://www.amalthea.itea2.org/model/1.1.0/hw → http://www.amalthea.itea2.org/model/1.2.0/hw
http://www.amalthea.itea2.org/model/1.1.0/events → http://www.amalthea.itea2.org/model/1.2.0/events
http://www.amalthea.itea2.org/model/1.1.0/constraints → http://www.amalthea.itea2.org/model/1.2.0/constraints
http://www.amalthea.itea2.org/model/1.1.0/config → http://www.amalthea.itea2.org/model/1.2.0/config
http://www.amalthea.itea2.org/model/1.1.0/common → http://www.amalthea.itea2.org/model/1.2.0/common
http://www.amalthea.itea2.org/model/1.1.0/central → http://www.amalthea.itea2.org/model/1.2.0/central
http://amalthea.itea2.org/model/1.1.0/components → http://amalthea.itea2.org/model/1.2.0/components
  • The following class names are changed:
Class name in 1.0.3 Changed class name in 1.1.0 AMALTHEA sub-model
DeadlineMonotinic DeadlineMonotonic OS
  • Below are the references which are removed:
Variable Name (version ITEA 1.0.3) Class containing Variable AMALTHEA sub model How migration is done from 1.0.3 models to 1.1.0
scheduler OSModel OS model
The "scheduler" elements contained inside the OSModel element are converted into TaskScheduler elements in the following way:
  • An OperatingSystem element is created and associated to the OSModel element (Note: the OperatingSystem class is newly introduced in 1.1.0)
  • All Scheduler elements present inside the OSModel element are moved to the OperatingSystem element as TaskSchedulers
  • All places where Scheduler elements were referred to earlier are updated to refer to the corresponding TaskScheduler element
read, write LabelAccessStatistic Software model
"read" elements contained inside LabelAccessStatistic element are removed and "write" elements are replaced by tag name "value" based on the following criteria :
  • If the LabelAccess element containing LabelAccessStatistic ->contains the access as "write", then all the "read" elements of LabelAccessStatistic are removed

"write" elements contained inside LabelAccessStatistic element are removed and "read" elements are replaced by tag name "value" based on the following criteria :
  • If the LabelAccess element containing LabelAccessStatistic ->contains the access as "read", then all the "write" elements of LabelAccessStatistic are removed
  • Below are the changes in the reference names:
Variable Name (version ITEA 1.0.3) Changed Variable Name (version ITEA 1.1.0) Class containing Variable AMALTHEA sub model How migration is done from 1.0.3 models to 1.1.0
readCacheMisses cacheMisses LabelAccessStatistic Software model "readCacheMisses" is changed to "cacheMisses" in all places where a LabelAccessStatistic definition is present
executableAllocation processAllocation, runnableAllocation MappingModel Mapping model
Based on the content of the "executableAllocation", either "processAllocation" or "runnableAllocation" elements are created and the corresponding data of the "executableAllocation" is mapped.
Below are the criteria:
  • If "executableAllocation" consists of a Task element and Scheduler mapping, then a "ProcessAllocation" of type "mapping:TaskAllocation" is created
  • If "executableAllocation" consists of an ISR element and Scheduler mapping, then a "ProcessAllocation" of type "mapping:ISRAllocation" is created, and the corresponding ISR element and "InterruptController" element are associated to it (Note: if there is a mapping to "ISR" and "Scheduler" in "executableAllocation", an InterruptController element is created and referred to in the "executableAllocation". In this case the previous mapping to the Scheduler element is lost)
  • If "executableAllocation" consists of a Runnable element and Scheduler mapping, then a "RunnableAllocation" of type "mapping:RunnableAllocation" is created

Version ITEA 1.1.0 to ITEA 1.1.1

Below are the changes in the meta model from ITEA 1.1.0 version to ITEA 1.1.1

  • Changes in the namespaces:
AMALTHEA Namespace (version ITEA 1.1.0) AMALTHEA Namespace (version ITEA 1.1.1)
http://www.amalthea.itea2.org/model/1.2.0/sw http://www.amalthea.itea2.org/model/1.3.0/sw
http://www.amalthea.itea2.org/model/1.2.0/stimuli http://www.amalthea.itea2.org/model/1.3.0/stimuli
http://www.amalthea.itea2.org/model/1.2.0/propertyconstraints http://www.amalthea.itea2.org/model/1.3.0/propertyconstraints
http://www.amalthea.itea2.org/model/1.2.0/os http://www.amalthea.itea2.org/model/1.3.0/os
http://www.amalthea.itea2.org/model/1.2.0/mapping http://www.amalthea.itea2.org/model/1.3.0/mapping
http://www.amalthea.itea2.org/model/1.2.0/hw http://www.amalthea.itea2.org/model/1.3.0/hw
http://www.amalthea.itea2.org/model/1.2.0/events http://www.amalthea.itea2.org/model/1.3.0/events
http://www.amalthea.itea2.org/model/1.2.0/constraints http://www.amalthea.itea2.org/model/1.3.0/constraints
http://www.amalthea.itea2.org/model/1.2.0/config http://www.amalthea.itea2.org/model/1.3.0/config
http://www.amalthea.itea2.org/model/1.2.0/common http://www.amalthea.itea2.org/model/1.3.0/common
http://www.amalthea.itea2.org/model/1.2.0/central http://www.amalthea.itea2.org/model/1.3.0/central
http://amalthea.itea2.org/model/1.2.0/components http://amalthea.itea2.org/model/1.3.0/components
  • Below are the changes in the reference names:
Variable Name (version ITEA 1.1.0) Changed Variable Name (version ITEA 1.1.1) Class containing Variable AMALTHEA sub model How migration is done from 1.1.0 models to 1.1.1
tagName name Tag Software model, Components model The attribute "tagName" is changed to "name" in all places where a Tag definition is present
elements components, systems ComponentsModel Components model
  1. XML child elements of ComponentsModel having the name "elements" and the attribute "xsi:type" as either "components:Component" or "components:Composite" are changed to "components"
  2. XML child elements of ComponentsModel having the name "elements" and the attribute "xsi:type" as "components:System" are changed to "systems"
elements connectors, componentInstances System Components model
  1. XML child elements of System having the name "elements" and the attribute "xsi:type" as "components:Connector" are changed to "connectors"
  2. XML child elements of System having the name "elements" and the attribute "xsi:type" as "components:ComponentInstance" are changed to "componentInstances"
maximumCyle maximumCycle DataAgeCycle Constraints model attribute "maximumCyle" is changed to "maximumCycle" in all the places where DataAgeCycle definition is present
setLabelsList setModeValueList Stimulus Stimuli model XML node name is set as "setModeValueList"
enablingLabelsList enablingModeValueList Stimulus Stimuli model XML node name is set as "enablingModeValueList"
disablingLabelsList disablingModeValueList Stimulus Stimuli model XML node name is set as "disablingModeValueList"
  • Below are the references which are removed:
Variable Name (version ITEA 1.1.0) Class containing Variable AMALTHEA sub model How migration is done from 1.1.0 models to 1.1.1
deadline Task, ISR, ProcessPrototype Software model
The "deadline" specified at the Process or ProcessPrototype is converted into a constraint element in the following way:
  • A ProcessRequirement object is created with the name of the corresponding Process/ProcessPrototype, and the Process/ProcessPrototype object is linked to it
  • A TimeRequirement object is created with the following properties and associated to the created ProcessRequirement:
    1. limitType as upperlimit
    2. metric as responsetime
    3. A SignedTime element is created with the following properties and associated to the TimeRequirement element: value and unit -> the values for these elements are fetched from the "deadline" object available at the Process/ProcessPrototype
  • Once the required content from the "deadline" element of the Process/ProcessPrototype is fetched, the "deadline" is removed
initialValue Label Software model The initial value attribute is removed from all Label objects
  • Below are the changes in the default values of enums:
Enum Name Default value (version ITEA 1.1.0) Default value (version ITEA 1.1.1) AMALTHEA sub model Behaviour of model migration from 1.1.0 models to 1.1.1
TimeUnit ps _undefined_ Common model Attribute "unit" with value "ps" is created (when it is missing in the input model) -> for all the XMI tags where a TimeUnit definition should be present
InterfaceKind PROVIDES _undefined_ Components model Attribute "kind" with value "PROVIDES" is created (when it is missing in the input model) -> for all the XMI tags where an InterfaceKind definition should be present
RunnableOrderType successor _undefined_ Constraints model Attribute "orderType" with value "successor" is created (when it is missing in the input model) -> for all the XMI tags where a RunnableOrderType definition should be present
RunnableGroupingType allOfThem _undefined_ Constraints model Attribute "groupingType" with value "allOfThem" is created (when it is missing in the input model) -> for all the XMI tags where a RunnableGroupingType definition should be present
QType DYNAMIC _undefined_ Hardware model Attribute "type" with value "DYNAMIC" is created (when it is missing in the input model) -> for all the XMI tags where a QType definition should be present
MemoryType RAM _undefined_ Hardware model Attribute "type" with value "RAM" is created (when it is missing in the input model) -> for all the XMI tags where a MemoryType definition should be present
BusType CAN _undefined_ Hardware model Attribute "busType" with value "CAN" is created (when it is missing in the input model) -> for all the XMI tags where a BusType definition should be present
RWType R _undefined_ Hardware model Attribute "direction" with value "R" is created (when it is missing in the input model) -> for all the XMI tags where an RWType definition should be present
SchedType RROBIN _undefined_ Hardware model Attribute "schedPolicy" with value "RROBIN" is created (when it is missing in the input model) -> for all the XMI tags where a SchedType definition should be present
PinType ANALOG _undefined_ Hardware model Attribute "type" with value "ANALOG" is created (when it is missing in the input model) -> for all the XMI tags where a PinType definition should be present
FeatureType floatingPointUnit _undefined_ Hardware model Attribute "value" with value "floatingPointUnit" is created (when it is missing in the input model) -> for all the XMI tags where a FeatureType definition should be present
MemoryAddressMappingType none _undefined_ Mapping model Attribute "addressMappingType" with value "none" is created (when it is missing in the input model) -> for all the XMI tags where a MemoryAddressMappingType definition should be present
ComparatorType equal _undefined_ Property Constraints model Attribute "comparator" with value "equal" is created (when it is missing in the input model) -> for all the XMI tags where a ComparatorType definition should be present
ConjunctionType and _undefined_ Property Constraints model Attribute "conjunction" with value "and" is created (when it is missing in the input model) -> for all the XMI tags where a ConjunctionType definition should be present
WaitEventType AND _undefined_ Software model Attribute "maskType" with value "AND" is created (when it is missing in the input model) -> for all the XMI tags where a WaitEventType definition should be present
WaitingBehaviour unspecified _undefined_ Software model In sw:WaitEvent / sw:SynchronousServerCall -> if the attribute "waitingBehaviour" has the value "unspecified", the attribute and its value are removed in the XML, so that the default value for "waitingBehaviour" as per 1.1.1 ("_undefined_") is considered
AccessPrecedenceType ignoreWR _undefined_ Software model Attribute "orderType" with value "ignoreWR" is created (when it is missing in the input model) -> for all the XMI tags where an AccessPrecedenceType definition should be present
OrderType order _undefined_ Software model Attribute "orderType" with value "order" is created (when it is missing in the input model) -> for all the XMI tags where an OrderType definition should be present
LabelAccessEnum read _undefined_ Software model Attribute "access" with value "read" is created (when it is missing in the input model) -> for all the XMI tags where a LabelAccessEnum definition should be present
SemaphoreAccessEnum request _undefined_ Software model Attribute "accessEnum" with value "request" is created (when it is missing in the input model) -> for all the XMI tags where a SemaphoreAccessEnum definition should be present
Preemption cooperative _undefined_ Software model Attribute "preemption" with value "cooperative" is created (when it is missing in the input model) -> for all the XMI tags where a Preemption definition should be present
  • Below are the classes/enums moved to different sub-models:
Enum/Class Name Namespace (version ITEA 1.1.0) Namespace (version ITEA 1.1.1) Behaviour of model migration from 1.1.0 models to 1.1.1
SchedulingSWUnit sw os The attribute "xsi:type" of the XML node of TaskScheduler/InterruptController is updated to "os:SchedulingSWUnit"
  • ITEA 1.1.0 classes removed in ITEA 1.1.1, and details of how the data of these classes is migrated:
LabelSwitchEntry (present in Software model)
The class is removed in ITEA 1.1.1; as a replacement the following class is introduced: "ModeSwitchEntry"
LabelSwitch (present in Software model)
The class is removed in ITEA 1.1.1; as a replacement the following classes are introduced: "ModeSwitch", "ModeLabel"
LabelValueEntry (present in Stimuli model)
The class is removed in ITEA 1.1.1; as a replacement the following class is introduced: "ModeValueListEntry"
  • Model migration behavior:
    • LabelSwitch :
      • From the referred "Label" element inside "LabelSwitch", a "ModeLabel" element is created with the same name and associated to the same Software Model where the "Label" element is defined
      • The "LabelSwitch" element is replaced with a "ModeSwitch" element, which refers to the "ModeLabel" created in the above step
    • LabelSwitchEntry :
      • For all the LabelSwitchEntry objects inside a specific AMALTHEA model file, a single "Mode" element is created (with the following naming convention: "mode_sw_<counter>_migration_generated")
      • Based on the unique values of the LabelSwitchEntry elements, ModeLiteral objects are created
      • LabelSwitchEntry is replaced by ModeSwitchEntry; based on the "value" previously specified for the LabelSwitchEntry, the corresponding ModeLiteral element is referred to
      • For details refer to the below figure:



    • LabelValueEntry :
      • For the Label referred inside each LabelValueEntry, a corresponding ModeLabel object is created in the software model and associated to the entry object.
      • For all the LabelValueEntry objects inside a specific AMALTHEA model file, a single "Mode" element is created (with the following naming convention: "mode_stimuli_<counter>_migration_generated")
      • Based on the unique values of the LabelValueEntry elements, ModeLiteral objects are created
      • LabelValueEntry is replaced by ModeValueListEntry; based on the "value" specified for the LabelValueEntry, the corresponding ModeLiteral element is referred to instead of the "value"
      • !! For the LabelValueEntry migration to be successful, it is mandatory to have the SW-Model in the same model file. If the SW-Model is not present, migration of LabelValueEntry objects will not be performed.

      • For details refer to the below figure:


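The deadline conversion described earlier in this section can be sketched with plain data classes. The class names follow the description above, but the fields and record types are illustrative stand-ins, not the real AMALTHEA model API:

```java
// Sketch of the ITEA 1.1.0 -> 1.1.1 "deadline" migration: a deadline on a
// Process/ProcessPrototype becomes a ProcessRequirement holding an
// upper-limit response-time TimeRequirement. The record types here are
// illustrative stand-ins, not the real AMALTHEA model classes.
public class DeadlineMigration {
    record SignedTime(long value, String unit) {}
    record TimeRequirement(String limitType, String metric, SignedTime time) {}
    record ProcessRequirement(String name, TimeRequirement requirement) {}

    public static ProcessRequirement migrateDeadline(String processName, long value, String unit) {
        // limitType "upperlimit" and metric "responsetime", as described above;
        // value and unit are taken over from the removed "deadline" element
        return new ProcessRequirement(processName,
                new TimeRequirement("upperlimit", "responsetime", new SignedTime(value, unit)));
    }
}
```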

Version ITEA 1.1.1 to APP4MC 0.7.0

Below are the changes in the meta model from version ITEA 1.1.1 to APP4MC 0.7.0

  • Instead of multiple namespaces (e.g. sw, hw, constraints, components, etc.) a single namespace "am" is introduced
    • Impact on 0.7.0 model file due to this change:
      • xmlns:am="http://app4mc.eclipse.org/amalthea/0.7.0" is included in the xmi file, instead of the other namespaces (e.g. xmlns:hw, xmlns:sw etc.)
      • xsi:type attributes will contain the namespace prefix "am" in the value. Example:
        <ports xsi:type="am:ComplexPort" name="Mem_1_port" bitWidth="16" direction="R" writeCycles="10" readCycles="10"/>
      • Root tag will have "am" as the prefix. Example:
        <am:HWModel xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI" xmlns:am="http://app4mc.eclipse.org/amalthea/0.7.0"/>
  • The reference pattern is changed.
    As there is a single model namespace in 0.7.0, the namespace prefix before the class name is removed (e.g. "ABSMode?type=Label").
    • old: <Element Name>?type=<NameSpace prefix>.<ElementClassName>
    • new: <Element Name>?type=<ElementClassName>
  • The root tag of the model is changed from AMALTHEA to Amalthea
  • The IPort interface is changed to the abstract class Port
  • The FInterface reference (i.e. Franca Interface) is removed from the class FInterfacePort
    • Impact on the 0.7.0 model file due to this change:
      • If AMALTHEA models from previous versions have references to Franca Interface ports, these references are removed as part of the migration. (In this case, the reference to the Franca Interface element is stored as a CustomProperty of type String inside the FInterfacePort element)
  • InterfaceKind ENUM literal values are changed from PROVIDES, REQUIRES to provides, requires
  • The following class names are changed:

Class name from 1.1.1 Changed Class name in 0.7.0
OSEvent OsEvent
MemType MemoryType
System HwSystem
Component HwComponent
Port HwPort
AbstractionTypeDescription AbstractionType
MemoryType MemoryTypeEnum
OSInstructions OsInstructions
Periodic PeriodicActivation
  • The below references are removed inside ComplexNode and HardwareTypeDescription (both classes are part of the Hardware Model)
    • AbstractType[] attribute, AbstractType[] features
      • Impact on the 0.7.0 model file due to this change: the content of attribute and features is stored as CustomProperty elements inside the corresponding elements.
        • The content of "AbstractType[] attribute" is stored in a CustomProperty with key "attribute#<attribute name>" and value <attribute value>
        • The content of "AbstractType[] features" is stored in a CustomProperty with key "feature#<feature name>" and value <feature value>
      • The below figure shows the changes between a 1.1.1 model (containing attributes & features) and the corresponding 0.7.0 model (created using the model migration utility)
  • The ProcessAllocation class is removed from the MappingModel; instead ISRAllocation (maps ISR and InterruptController) and TaskAllocation (maps Task and Scheduler) are introduced
    • Impact on the 0.7.0 model file due to this change:
      • The processAllocation tag is removed from mappingModel; instead, taskAllocation and isrAllocation tags are generated accordingly.
        For details refer to the image
  • Below are the changes in the reference names :
Variable Name (version ITEA 1.1.1) Changed Variable Name (version APP4MC 0.7.0) Class containing Variable AMALTHEA sub model
isMaster master ComplexPort Hardware model
mem memory Mapping Mapping model
memTypeDefinitions memoryTypeDefinitions PropertyConstraintsModel Property Constraints model
isDefault default ModeSwitchEntry Software model
isBuffered buffered Label Software model
isBuffered buffered LabelAccess Software model
accessEnum access SemaphoreAccess Software model
isBuffered buffered SenderReceiverCommunication Software model
isOrdered ordered Group Software model
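The reference-pattern change described above (dropping the sub-model prefix from the type qualifier) can be sketched as follows; the helper class is illustrative, not the actual converter code:

```java
// Sketch of the 0.7.0 reference-pattern migration: the namespace prefix in
// "<Element Name>?type=<prefix>.<ClassName>" is dropped, yielding
// "<Element Name>?type=<ClassName>". Illustrative helper, not APP4MC code.
public class ReferenceMigration {
    public static String migrate(String reference) {
        int typeIdx = reference.indexOf("?type=");
        if (typeIdx < 0) {
            return reference; // not a typed reference, leave unchanged
        }
        String type = reference.substring(typeIdx + "?type=".length());
        int dotIdx = type.indexOf('.');
        if (dotIdx < 0) {
            return reference; // no prefix present, already in the new form
        }
        // keep only the class name after the namespace prefix
        return reference.substring(0, typeIdx) + "?type=" + type.substring(dotIdx + 1);
    }
}
```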

Version APP4MC 0.7.0 to APP4MC 0.7.1

Below are the changes in the meta model from APP4MC 0.7.0 to APP4MC 0.7.1

Change in the namespace:

AMALTHEA Namespace (version APP4MC 0.7.0) → AMALTHEA Namespace (version APP4MC 0.7.1)
http://app4mc.eclipse.org/amalthea/0.7.0 → http://app4mc.eclipse.org/amalthea/0.7.1

Root tag in the AMALTHEA model amxmi file: It is recommended to have "Amalthea" as the root tag in the amxmi file.

Based on this, if an AMALTHEA model file has a sub-model tag as the root element (e.g. SWModel or HWModel), the root element is changed to the "Amalthea" tag as part of the model migration and the content of the sub-model is copied inside it.

Input model (containing SWModel as the root tag):
<am:SWModel xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI" xmlns:am="http://app4mc.eclipse.org/amalthea/0.7.0">
  <labels/>
</am:SWModel>
Output model (after model migration):
<am:Amalthea xmlns:am="http://app4mc.eclipse.org/amalthea/0.7.0" xmlns:xmi="http://www.omg.org/XMI">
  <swModel>
    <labels />
  </swModel>
</am:Amalthea> 
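The wrapping shown above can be sketched with the JDK's built-in DOM API (the actual migration framework uses JDOM; the sub-model tag mapping below is illustrative and covers only a few sub-models):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Sketch of the 0.7.0 -> 0.7.1 root-tag migration using the JDK DOM API
// (the actual framework uses JDOM). A sub-model root such as am:SWModel is
// wrapped into an am:Amalthea root; the tag mapping below is illustrative.
public class RootTagMigration {
    static final String NS = "http://app4mc.eclipse.org/amalthea/0.7.0";
    static final Map<String, String> SUBMODEL_TAGS =
            Map.of("SWModel", "swModel", "HWModel", "hwModel", "OSModel", "osModel");

    public static Document parse(String xml) {
        try {
            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            factory.setNamespaceAware(true);
            return factory.newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static Document wrap(Document input) {
        try {
            Element oldRoot = input.getDocumentElement();
            // e.g. root "SWModel" becomes the child tag "swModel"
            String childTag = SUBMODEL_TAGS.get(oldRoot.getLocalName());
            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            factory.setNamespaceAware(true);
            Document out = factory.newDocumentBuilder().newDocument();
            Element newRoot = out.createElementNS(NS, "am:Amalthea");
            out.appendChild(newRoot);
            Element child = out.createElement(childTag);
            newRoot.appendChild(child);
            // copy the sub-model content below the new tag
            NodeList kids = oldRoot.getChildNodes();
            for (int i = 0; i < kids.getLength(); i++) {
                child.appendChild(out.importNode(kids.item(i), true));
            }
            return out;
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```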

Below are the changes in the reference names:

Variable Name (version APP4MC 0.7.0) | Changed Variable Name (version APP4MC 0.7.1) | Class containing Variable | AMALTHEA sub model | Model Migration behavior
graphEntries | items | ModeSwitchEntry | SW Model | xml tag "graphEntries" present inside "ModeSwitchEntry" object is changed to "items" in the amxmi file
value | values | ModeSwitchEntry | SW Model | xml tag "value" present inside "ModeSwitchEntry" object is changed to "values" in the amxmi file

Below are the changes in the reference Types:

Variable Name | Variable Type (version APP4MC 0.7.0) | Variable Type (version APP4MC 0.7.1) | Class containing Variable | AMALTHEA sub model | Model Migration behavior
size | Long | DataSize | MemoryType | HW Model, PropertyConstraintsModel | Attribute "size" is migrated as a "DataSize" object. The Long value of "size" is migrated to "value" of DataSize. As no DataSizeUnit info is available, "unit" is not set
size | DataUnit | DataSize | AbstractElementMemoryInformation | SW Model | "size" of type "DataUnit" is migrated as a "DataSize" object. The int value of the "numberBits" attribute is migrated to "value" of DataSize; "unit" is set to "bit" (of type DataSizeUnit)
size | DataUnit | DataSize | BaseTypeDefinition | SW Model | "size" of type "DataUnit" is migrated as a "DataSize" object. The int value of the "numberBit" attribute is migrated to "value" of DataSize; "unit" is set to "bit" (of type DataSizeUnit)
frequency | EInt | Frequency | Quartz | SW Model | Attribute "frequency" of type EInt is migrated as a "Frequency" object. The EInt value of "frequency" is migrated to the "value" EDouble of Frequency. As no FrequencyUnit info is available, "unit" is not set
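As an illustration of the first row, a before/after sketch of the serialized form (element and attribute names are simplified assumptions; the actual amxmi serialization may differ in detail):

```xml
<!-- APP4MC 0.7.0: size as a plain Long attribute on a MemoryType -->
<memoryType name="RAM" size="4096"/>

<!-- APP4MC 0.7.1: size as a nested DataSize object; "unit" stays
     unset because no DataSizeUnit info is available in the old model -->
<memoryType name="RAM">
  <size value="4096"/>
</memoryType>
```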

Below references are removed:

Variable Name | Variable Type | Class containing Variable | AMALTHEA sub model | Model Migration behavior
size | DataUnit | Section | SW Model | Content is removed from the model, as a size cannot be specified for a "Section" (considered a virtual memory section)
labels [0-*] | Label | Section | SW Model | Content is removed from the model. As per APP4MC 0.7.1 semantics, the Label object has the association to the Section inside which it can be allocated
runEntities [0-*] | Runnable | Section | SW Model | Content is removed from the model. As per APP4MC 0.7.1 semantics, the Runnable object has the association to the Section inside which it can be allocated
groupingType | RunnableGroupingType | ProcessRunnableGroup | Constraints Model | Content is removed from the model
entries [0-*] | ProcessRunnableGroupEntry | ProcessRunnableGroup | Constraints Model | Content is removed from the model. The Runnable object belonging to the ProcessRunnableGroupEntry is associated to the runnables list contained in the ProcessRunnableGroup object
default | EBoolean | ModeSwitchEntry | SW Model | Content is removed from the model. If several ModeSwitchEntry objects have the attribute "default" set to "true", the first ModeSwitchEntry with "default" as "true" is converted to a "ModeSwitchDefault" object

Below Classes are removed:

Class Name | AMALTHEA sub model | Model Migration behavior
SchedulerPairingConstraint | Constraints Model | Content is removed from the model. There is no equivalent of this model element in APP4MC 0.7.1
SchedulerSeparationConstraint | Constraints Model | Content is removed from the model. There is no equivalent of this model element in APP4MC 0.7.1
ProcessRunnableGroupEntry | Constraints Model | This element is removed from the model, but the Runnables associated to it are associated to the ProcessRunnableGroup object
OrderConstraint | Constraints Model | Content is removed from the model. There is no equivalent of this model element in APP4MC 0.7.1
AgeConstraint | Constraints Model | Content is migrated as an EventChainLatencyConstraint element with LatencyType "Age"
ReactionConstraint | Constraints Model | Content is migrated as an EventChainLatencyConstraint element with LatencyType "Reaction"
SynchronisationConstraint | Constraints Model | Content is migrated as an EventSynchronizationConstraint element
SectionMapping | Mapping Model | Content is removed from the model. As per 0.7.1, it is possible to specify a PhysicalSectionMapping element instead, i.e. defining the mapping of various Section elements to a Memory
SectionMappingConstraint | Property Constraints Model | Content is removed from the model. As per 0.7.1, it is possible to specify a PhysicalSectionConstraint element instead, i.e. defining the possible Section allocations across various Memories
DataUnit | SW Model | Content is migrated as DataSize. Attribute "unit" is set to the DataSizeUnit "bit"

Version APP4MC 0.7.1 to APP4MC 0.7.2

Change in the namespace:

AMALTHEA Namespace (version APP4MC 0.7.1) | AMALTHEA Namespace (version APP4MC 0.7.2)
http://app4mc.eclipse.org/amalthea/0.7.1 | http://app4mc.eclipse.org/amalthea/0.7.2

Below are the changes in the reference names:

Variable Name (version APP4MC 0.7.1) | Changed Variable Name (version APP4MC 0.7.2) | Class containing Variable | AMALTHEA sub model | Model Migration behavior
runnables | group | RunnablePairingConstraint | Constraints Model | xml tag "runnables" present inside "RunnablePairingConstraint" object is changed to "group" in the amxmi file
processes | group | ProcessPairingConstraint | Constraints Model | xml tag "processes" present inside "ProcessPairingConstraint" object is changed to "group" in the amxmi file
labels | group | DataPairingConstraint | Constraints Model | xml tag "labels" present inside "DataPairingConstraint" object is changed to "group" in the amxmi file
initalValue | initialValue | Semaphore | OS Model | xml attribute "initalValue" present inside "Semaphore" object is changed to "initialValue" in the amxmi file
arrivalCurveEntries | entries | ArrivalCurve | Stimuli Model | xml tag "arrivalCurveEntries" present inside "ArrivalCurve" object is changed to "entries" in the amxmi file

Below are the changes in the reference Types:

Variable Name | Variable Type (version APP4MC 0.7.1) | Variable Type (version APP4MC 0.7.2) | Class containing Variable | AMALTHEA sub model
instructions | OsExecutionInstructions | Instructions | SchedulingSWUnit | OS Model
apiSendMessage, apiTerminateTask, apiSchedule, apiRequestResource, apiReleaseResource, apiSetEvent, apiWaitEvent, apiClearEvent, apiActivateTask, apiEnforcedMigration | OsExecutionInstructions | Instructions | OsAPIInstructions | OS Model

Model migration behavior (identical for all variables above): the type of the corresponding tag in the amxmi file is updated to either InstructionsConstant or InstructionsDeviation, depending on whether the input type is OsExecutionInstructionsConstant or OsExecutionInstructionsDeviation.

Below are the changes in the relation of elements:

Variable Name | Variable Type | Variable Relation (version APP4MC 0.7.1) | Variable Relation (version APP4MC 0.7.2) | Class containing Variable | AMALTHEA sub model | Model Migration behavior
memory | Memory | containment | association | HwMemoryProperty | PropertyConstraints model | The containment of Memory objects is changed to an association. As a result, the definition of a Memory object should no longer be present inside HwMemoryProperty; only a reference to the Memory should be present. Model migration is performed as follows: Memory element definitions inside the HwMemoryProperty tag are moved to the HW model (note: a Memory from the PropertyConstraints model is only added to the HW model if no Memory with this name exists in the model scope)
core | Core | containment | association | HwCoreProperty | PropertyConstraints model | The containment of Core objects is changed to an association. As a result, the definition of a Core object should no longer be present inside HwCoreProperty; only a reference to the Core should be present. Model migration is performed as follows: Core element definitions inside the HwCoreProperty tag are moved to the HW model (note: a Core from the PropertyConstraints model is only added to the HW model if no Core with this name exists in the model scope)
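The containment-to-association migration for Memory can be sketched as follows. This is an illustrative sketch over plain dictionaries, not the actual EMF-based migration code; the field names (name, memory, memoryRef) are assumptions:

```python
def migrate_memory_containment(hw_memories, hw_memory_properties):
    """Sketch of the containment-to-association change: Memory
    definitions nested in HwMemoryProperty are moved to the HW model
    (only if no Memory of that name exists there already) and replaced
    by a name reference inside the property."""
    by_name = {m["name"]: m for m in hw_memories}
    for prop in hw_memory_properties:
        memory = prop.pop("memory")          # the contained definition
        if memory["name"] not in by_name:    # avoid duplicates in scope
            hw_memories.append(memory)
            by_name[memory["name"]] = memory
        prop["memoryRef"] = memory["name"]   # association by reference
    return hw_memories, hw_memory_properties
```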

Below are the changes in Enum elements:

Enum Name | Enum Literal (version APP4MC 0.7.1) | Enum Literal (version APP4MC 0.7.2) | AMALTHEA sub model | Model Migration behavior
Preemption | unknown | - | SW Model | The "unknown" literal is removed from Preemption. Model migration replaces the "unknown" literal with the default literal

Below references are removed:

Variable Name (version APP4MC 0.7.1) | Variable Type | Class containing Variable | AMALTHEA sub model | Model Migration behavior
samplingType | SamplingType | Deviation | Hardware Model, Stimuli Model, OS Model, Software Model | In 0.7.2, the samplingType attribute is shifted from Deviation to the Boundaries element. As a part of model migration, the samplingType content is removed from Deviation and associated to the corresponding Distribution of type Boundaries. If no Boundaries element is present inside Deviation as a distribution, the corresponding samplingType data is skipped during model migration
coreTypeDefinitions | CoreType | PropertyConstraintsModel | Property Constraints Model | coreTypeDefinitions objects are removed from the PropertyConstraintsModel tag and are associated to the HW Model (note: a CoreType is only added to the HW model if no CoreType with this name exists in the model scope. If a CoreType element with the same name exists in both "PropertyConstraintsModel" and "HW Model", the one from PropertyConstraintsModel is removed and the one from the HW Model is referred accordingly, e.g. inside the Core element)
memoryTypeDefinitions | MemoryType | PropertyConstraintsModel | Property Constraints Model | memoryTypeDefinitions objects are removed from the PropertyConstraintsModel tag and are associated to the HW Model (note: a MemoryType is only added to the HW model if no MemoryType with this name exists in the model scope. If a MemoryType element with the same name exists in both "PropertyConstraintsModel" and "HW Model", the one from PropertyConstraintsModel is removed and the one from the HW Model is referred accordingly, e.g. inside the MemoryElement element)
tags | Tag | ComponentsModel, HWModel, SWModel | Components Model, Hardware Model, Software Model | Tag objects are removed from the ComponentsModel, HWModel and SWModel tags and their content is shifted to the CommonElements model (note: the CommonElements model is contained inside the Amalthea root node)

Below Classes are removed:

Class Name (version APP4MC 0.7.1) | AMALTHEA sub model | Model Migration behavior
TargetProcess | Constraints Model | Content is removed from the model. There is no equivalent of this model element in APP4MC 0.7.2
TargetCallSequence | Constraints Model | Content is removed from the model. There is no equivalent of this model element in APP4MC 0.7.2
OsExecutionInstructions, OsExecutionInstructionsDeviation, OsExecutionInstructionsConstant | OS Model | The replacement elements are Instructions, InstructionsDeviation and InstructionsConstant. As there is no change in the content of these elements compared to the previous ones, model migration simply replaces the old type names with the new model elements
ProbabilityGroup | SW Model | ProbabilityGroup is replaced with RunnableProbabilitySwitch
ProbabilityRunnableItem | SW Model | ProbabilityRunnableItem is replaced with ProbabilitySwitchEntry
DeviationRunnableItem | SW Model | The content of DeviationRunnableItem, i.e. its RunnableItem, is moved directly inside the Group as part of the "items" list
EventConfigElement, EventConfigLink | Config Model | Both EventConfigElement and EventConfigLink objects are converted to EventConfig objects (as the EventConfigElement and EventConfigLink classes are removed from the metamodel, EventConfig is the semantically equivalent class for both). When migrating an EventConfigElement: if the definition of an EntityEvent element is present as a sub-element, it is moved to the Events Model and the corresponding reference to the EntityEvent is established inside EventConfig using the "event" attribute
OsBuffering | OS Model | OsBuffering elements are migrated as OsDataConsistency elements
LabelBufferring | SW Model | LabelBufferring elements are migrated as DataStability elements. The following criteria are considered for the migration of data:
- If the LabelBuffering value is "buffered", the corresponding value of "dataStability" is set to "customProtection"
- If the LabelBuffering value is "notBuffered", the corresponding value of "dataStability" is set to "noProtection"
- If the LabelBuffering value is "undefined" (default), the corresponding value of "dataStability" is set to "undefined" (default)
LabelAccessBufferring | SW Model | LabelAccessBufferring elements are migrated as DataStability elements. The following criteria are considered for the migration of data:
- If the LabelAccessBufferring value is "inherited", the corresponding value of "dataStability" is set to "inherited"
- If the LabelAccessBufferring value is "buffered", the corresponding value of "dataStability" is set to "customProtection"
- If the LabelAccessBufferring value is "notBuffered", the corresponding value of "dataStability" is set to "noProtection"
- If the LabelAccessBufferring value is "undefined" (default), the corresponding value of "dataStability" is set to "undefined" (default)
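The value mappings above can be written down directly. A minimal sketch, assuming the enum literals exactly as listed in the tables; the helper names are hypothetical, not part of the APP4MC API:

```python
def migrate_label_buffering(value: str) -> str:
    """Sketch of the LabelBuffering -> DataStability value mapping."""
    mapping = {
        "buffered": "customProtection",
        "notBuffered": "noProtection",
        "undefined": "undefined",   # default stays default
    }
    return mapping[value]

def migrate_label_access_buffering(value: str) -> str:
    """LabelAccessBufferring -> DataStability; additionally knows the
    'inherited' literal, otherwise identical to the label mapping."""
    if value == "inherited":
        return "inherited"
    return migrate_label_buffering(value)
```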

Version APP4MC 0.7.2 to APP4MC 0.8.0

Change in the namespace:

AMALTHEA Namespace (version APP4MC 0.7.2) | AMALTHEA Namespace (version APP4MC 0.8.0)
http://app4mc.eclipse.org/amalthea/0.7.2 | http://app4mc.eclipse.org/amalthea/0.8.0

Below references are removed from storage format (amxmi):

Variable Name (version APP4MC 0.7.2) | Variable Type | Class containing Variable | AMALTHEA sub model | Model Migration behavior
mode | Mode | ModeValueProvider | SW Model | mode is now a derived variable with the transient property set; as a result it is not serialized into the model file. The mode element is derived from the ModeLiteral selected in the ModeValueProvider element
osDataConsistency | OsDataConsistency | OSModel | OS Model | The OsDataConsistency element is shifted from OSModel to the OperatingSystem element. As part of migration, the osDataConsistency element content is copied inside each OperatingSystem element

Below reference names are changed:

Variable Name (version APP4MC 0.7.2) | Variable Name (version APP4MC 0.8.0) | Variable Type | Class containing Variable | AMALTHEA sub model | Model Migration behavior
mapping | memoryMapping | Mapping (in 0.7.2) -> MemoryMapping (in 0.8.0) | MappingModel | Mapping model | As the reference name changed from mapping to memoryMapping in the metamodel, the corresponding XML tag names in the amxmi file are changed from mapping to memoryMapping; the type attribute is removed from the memoryMapping XML tag because MemoryMapping is a concrete class

Below Classes/Interfaces are removed:

Class Name (version APP4MC 0.7.2) | AMALTHEA sub model | Model Migration behavior
ModeValueProvider (interface) | SW Model | Content present inside the ModeValueProvider class is moved to ModeLabel. The reference to the Mode element is made a derived variable, and it is automatically populated based on the selected ModeLiteral
SignedTimeObject | SW Model | The equivalent of this element is TimeObject in APP4MC 0.8.0. There is no change in the storage format
SignedTime | SW Model | The equivalent of this element is Time in APP4MC 0.8.0. There is no change in the storage format
Mapping | Mapping Model | This interface is removed from the model. MemoryMapping is the equivalent of this model element in APP4MC 0.8.0. As part of model migration, the type attribute is removed from the tag defining a MemoryMapping
AbstractElementMapping | Mapping Model | MemoryMapping is the equivalent of this model element in APP4MC 0.8.0. As part of model migration, the type attribute is removed from the tag defining a MemoryMapping
AbstractElementMemoryInformation | HW Model | AbstractMemoryElement is the equivalent of this model element in APP4MC 0.8.0. There is no change in the storage format
ProbabiltitySwitch | HW Model | The ProbabiltitySwitch class is changed to ProbabilitySwitch (typo in the class name corrected). ProbabilitySwitch is the equivalent of this model element in APP4MC 0.8.0
AllocationConstraint | PropertyConstraints Model | The AllocationConstraint class is changed to CoreAllocationConstraint. In CoreAllocationConstraint the reference to the HwCoreConstraint element is removed; based on this change, during model migration a CustomProperty is created with the key "hwConstraint (element removed during Migration of Model to 0.8.0 version)" and the XML content of the "hwConstraint" element as value
MappingConstraint | PropertyConstraints Model | The MappingConstraint class is changed to MemoryMappingConstraint. In MemoryMappingConstraint the reference to the HwMemoryConstraint element is removed; based on this change, during model migration a CustomProperty is created with the key "hwConstraint (element removed during Migration of Model to 0.8.0 version)" and the XML content of the "hwConstraint" element as value
HwCoreConstraint, HwCoreConjunction, HwCoreProperty | PropertyConstraints Model | These elements are removed from the model. For reference, the XML content is stored as a CustomProperty inside CoreAllocationConstraint
HwMemoryConstraint, HwMemoryConjunction, HwMemoryProperty | PropertyConstraints Model | These elements are removed from the model. For reference, the XML content is stored as a CustomProperty inside MemoryMappingConstraint

Below are the changes in the datatype of elements:

Variable Name | Variable datatype (version APP4MC 0.7.2) | Variable datatype (version APP4MC 0.8.0) | Class containing Variable | AMALTHEA sub model | Model Migration behavior
instructionsPerCycle | int | float | CoreType | HW model | int is converted to float

4.5 Partitioning

The AMALTHEA partitioning approach supports various features that can be combined and configured in order to exploit software parallelism in different ways. Besides mandatory model enhancements, such as the label access analysis (leading to a constraints model with RunnableSequencingConstraints) and the cycle elimination (providing directed acyclic graphs), features like activation grouping or independent graph grouping can optionally be performed. Subsequently, one of two partitioning approaches cuts the created graphs into partitions (ProcessPrototypes) that can later be transformed into tasks. These tasks can be executed efficiently in parallel on a target platform, since ordering constraints are considered and task execution time and inter-task communication are minimized. The CPP approach uses graph-theoretical mechanisms to first reveal a critical path of runnables (also denoted as nodes) and dependencies (also denoted as edges) from a source node to a sink node. This initial path is assigned to the first partition; branch nodes are then identified and assigned to additional partitions with respect to order constraints, execution cycles and communication overheads. This partitioning approach provides an automatic runnable distribution with an efficient parallelism factor for an unbounded number of tasks while preserving causal runnable orders, i.e. dependencies. The ESSP approach, in contrast, performs a timing-based partitioning with respect to a bounded number of partitions (configurable by the user), since a bounded task count is often mandatory in order to meet system requirements. Both approaches read, adapt and write AMALTHEA-specific constraints and software models, and represent the first approaches within the AMALTHEA platform to automatically exploit system parallelism and utilize parallel resources.
The user benefits from not having to assign runnables to tasks manually in a complex, error-prone and time-consuming process; instead, a configurable partitioning approach performs this assignment automatically, using different algorithms and optimization criteria to distribute the existing software effectively.

4.5.1 Usage of Partitioning

The following configuration window shows the different features, which are available for the partitioning process.

Each process is described in the following sections.

In order to start the partitioning, the user can right-click on an .amxmi file and select the Perform Partitioning command as shown in the following figure.

Alternatively, a workflow can be written (using the MWE2 Eclipse workflow engine) to start the partitioning process and further define various workflows for different inputs and outputs (see the example at the end of this partitioning help).

With regard to the partitioning configuration, the partitioning process will perform various model adaptations as shown in the following figure and described in the next sections.

4.5.2 Pre-Partitioning

In order to perform the actual partitioning based on DAWGs (directed acyclic weighted graphs), various processing steps need to be performed on the given input, i.e. a set of runnables with label accesses, execution cycles and activations. The required steps are the creation of a constraints model with RunnableSequencingConstraints and a cycle elimination that creates an AccessPrecedence for each dependency that needs to be decomposed from a RunnableSequencingConstraint in order to eliminate one or more cycles. Further optional adaptations can create ProcessPrototypes that group runnables based on activation references or based on independent graphs.

Activation Analysis

As is typical for embedded software, code fragments need to be executed within different intervals, triggered e.g. by timers, interrupts or events. Sensors or actuators, for example, must often be read within short intervals for precise and accurate execution. In contrast, certain processing fragments that do not affect the system's stability can be executed at greater intervals, since they have less impact on the system's performance. Such activations can either be referenced by tasks via the stimulation model or by runnables via activations in the software model. By assessing these model entities and references, a temporal classification can be derived. The activation analysis feature creates a ProcessPrototype for each activation and assigns the referencing runnables correspondingly by creating TaskRunnableCalls within the ProcessPrototypes.
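A minimal sketch of this grouping step, assuming runnables are given as (name, activation) pairs, which is a strong simplification of the AMALTHEA software model:

```python
from collections import defaultdict

def group_by_activation(runnables):
    """Sketch of the activation analysis: one ProcessPrototype per
    activation, each listing its runnables in order (each entry would
    become a TaskRunnableCall in the real model)."""
    prototypes = defaultdict(list)
    for name, activation in runnables:
        prototypes[activation].append(name)
    return dict(prototypes)
```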

Label Analysis

The label analysis comprises the comparison of read and write accesses of all runnables in order to identify dependencies. For example, if runnable A writes a label and another runnable B reads the same label, runnable B depends on runnable A. This dependency is saved as a RunnableSequencingConstraint. Such RunnableSequencingConstraints are the basis for performing a DAG (directed acyclic graph) analysis and allow the determination of concurrency, since they give information about fork and join dependencies, i.e. runnables that can be calculated in parallel. Furthermore, the label access analysis allows deriving a memory structure via common data accesses. The RunnableSequencingConstraints are stored within a new or an existing constraints model. An example is given in the following picture:
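The writer-before-reader rule can be sketched as follows; accesses is a hypothetical simplification that maps each runnable to its (reads, writes) label sets:

```python
def derive_sequencing_constraints(accesses):
    """Sketch of the label access analysis: for every label, each
    writer precedes each reader, yielding (writer, reader) pairs that
    would become RunnableSequencingConstraints."""
    constraints = set()
    for writer, (_, writes) in accesses.items():
        for reader, (reads, _) in accesses.items():
            if reader != writer and writes & reads:
                constraints.add((writer, reader))
    return constraints
```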

Cycle Elimination

The cycle elimination is a mandatory step for all subsequent methods and features. Topological and graph-theoretical calculations require DAWGs, such that a cycle elimination has to be performed in advance. A cycle may occur in case two runnables share the same resource (label) in both directions, i.e. both runnables read and write the same label, or in case runnable A reads label BA and writes label AB while runnable B reads label AB and writes label BA. Furthermore, a cycle may be formed across multiple runnables. For the purpose of finding such cycles, the external JGraphT library has been added to the project, which supports finding all of these cycles. After all cycles have been identified, a specific mechanism (configurable through 'Minimal dependency decompositions for cycle elimination' in the preference page) detects edges (dependencies) that occur in multiple cycles. This mechanism iterates over such edges in descending order, i.e. it starts with the edge that occurs in the most cycles, in order to ensure a minimal number of edge eliminations. In order to retain a dependency that has been selected for elimination, each selected edge is transformed from a RunnableSequencingConstraint into an AccessPrecedence. After edge-sharing cycles have been decomposed, all cycles that do not share any edges have to be decomposed as well. For each of these cycles, an edge can be identified that provides an optimal retaining graph (configurable through 'Increase parallelization potential for cycle elimination result' in the preference page). The following figure shows 7 cycle elimination possibilities for an example graph.

Red transitions indicate edges that are decomposed into AccessPrecedences for the corresponding solution. Retaining graphs are shown on the right side of each solution, indicated by a dashed rectangle. For illustration purposes, we assume equal execution time for each runnable (letter). Solutions 1 and 5 (green dashed rectangles) feature the minimal runtime for two tasks. This assessment is made with respect to the topological graph structure, i.e. the span of a graph (critical path) compared with its parallelism.
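The greedy "edge in most cycles first" selection can be sketched as below, assuming the simple cycles have already been enumerated (e.g. with JGraphT's cycle detection); this illustrates the idea, not the actual implementation:

```python
from collections import Counter

def edges_to_decompose(cycles):
    """Given simple cycles (each an ordered node list), repeatedly pick
    the edge occurring in the most still-open cycles; picked edges
    would be turned from RunnableSequencingConstraints into
    AccessPrecedences."""
    def edges_of(cycle):
        return {(cycle[i], cycle[(i + 1) % len(cycle)])
                for i in range(len(cycle))}
    open_cycles = [edges_of(c) for c in cycles]
    picked = []
    while open_cycles:
        counts = Counter(e for c in open_cycles for e in c)
        edge, _ = counts.most_common(1)[0]      # edge in most cycles
        picked.append(edge)
        open_cycles = [c for c in open_cycles if edge not in c]
    return picked
```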

4.5.3 Partitioning Algorithms

Each of the algorithms described in the following creates ProcessPrototypes, i.e. task precursors.

Independent Graph Partitioning (GGP)

The independent graph identification process, also denoted as global graph partitioning, can be executed after the cycle elimination in case the user selected the corresponding entry in the configurations ('Create independent graph partitions'). This methodology looks for single nodes or graphs that do not share any labels with other runnables, or complete graphs that do not share labels with other graphs. Such a methodology allows forming tasks that can be fully distributed to different cores or even to entirely different systems, as seen in the following figure:
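Identifying independent graphs amounts to computing connected components of the undirected "shares a label" relation. A sketch with a simple union-find, over a hypothetical mapping from runnable to accessed labels:

```python
def independent_graphs(accesses):
    """Sketch of the GGP feature: runnables that transitively share no
    label form independent graphs (connected components)."""
    names = list(accesses)
    parent = {n: n for n in names}
    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]   # path halving
            n = parent[n]
        return n
    for i, a in enumerate(names):           # union runnables sharing a label
        for b in names[i + 1:]:
            if accesses[a] & accesses[b]:
                parent[find(a)] = find(b)
    groups = {}
    for n in names:
        groups.setdefault(find(n), []).append(n)
    return sorted(groups.values())
```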

Critical Path Partitioning (CPP)

The CPP approach considers node weights (i.e. arbitrary computation / execution cycles / instructions of runnables) and partitions DAWGs, whereby runnables are equally distributed among an automatically determined number of partitions. The partitioning's objective is to reduce overall execution time and inter-task communication. The subsequent mapping methodology further considers resource constraints (availability of hardware and software resources, namely program and data memory). The system's critical path is assigned to the first partition, and branches of the graph are subsequently assigned to additional partitions. This approach has been chosen because the critical path features a mandatory sequential ordering that cannot be computed in parallel. Thus, the weight of the critical path provides a lower bound on the total time to perform all the jobs. A CPP example is shown in the following figure (left = input graph; right = partitioning result):
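The critical path itself is the longest weighted path in the DAWG; a sketch using a topological sweep, where node weights stand in for execution cycles (the graph representation is an assumption for illustration):

```python
def critical_path(weights, edges):
    """Longest (weighted) path in a DAG: 'weights' maps node to
    execution cycles, 'edges' is a list of (pred, succ) pairs."""
    succs = {n: [] for n in weights}
    indeg = {n: 0 for n in weights}
    for a, b in edges:
        succs[a].append(b)
        indeg[b] += 1
    order = [n for n in weights if indeg[n] == 0]
    for n in order:                       # Kahn-style topological order
        for m in succs[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                order.append(m)
    dist = {n: weights[n] for n in weights}
    best_pred = {}
    for n in order:                       # relax edges in topo order
        for m in succs[n]:
            if dist[n] + weights[m] > dist[m]:
                dist[m] = dist[n] + weights[m]
                best_pred[m] = n
    end = max(dist, key=dist.get)
    path = [end]
    while path[-1] in best_pred:          # walk predecessors back
        path.append(best_pred[path[-1]])
    return list(reversed(path)), dist[end]
```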

Earliest Start Schedule Partitioning (ESSP)

The ESSP partitioning was developed to allow the user to restrict the number of partitions. This may be important and useful in very large systems in order to keep task creation and inter-task communication costs low. The ESSP partitioning balances runnables across partitions with respect to their causal orders. For this purpose, each runnable's eit (earliest initial time) value is calculated, defined as the sum of the instructions along its longest preceding path. The methodology picks the partition with the lowest schedule length, calculates the set of assignable runnables whose predecessors are already scheduled, and assigns to the current partition the runnable that minimizes the schedule length.
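A strongly simplified sketch of this balancing scheme, with a heaviest-ready-first heuristic standing in for the eit-based selection described above (data structures and selection rule are illustrative assumptions):

```python
def essp(weights, edges, k):
    """Sketch of the ESSP idea: runnables are assigned, in an order
    respecting all (pred, succ) dependencies, to the partition with
    the currently lowest schedule length."""
    preds = {n: set() for n in weights}
    for a, b in edges:
        preds[b].add(a)
    partitions = [[] for _ in range(k)]
    loads = [0] * k
    done = set()
    while len(done) < len(weights):
        # runnables whose predecessors are already scheduled
        ready = [n for n in weights if n not in done and preds[n] <= done]
        target = loads.index(min(loads))       # lowest schedule length
        node = max(ready, key=lambda n: weights[n])   # stand-in heuristic
        partitions[target].append(node)
        loads[target] += weights[node]
        done.add(node)
    return partitions
```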

Partitioning Example

The following figure illustrates the resulting partitions of the previously described algorithms based on an example graph shown on the left. On the right, the partitioning results are shown for (top) - CPP, (middle) - ESSP configured with 2 partitions and (bottom) - ESSP for 3 partitions.

4.5.4 Further Features

Two further features provide graph visualizations and a different graph representation through RunnableSequencingConstraints.

Applet Generation

The applet generation can be triggered with a different command in the context menu via right-clicking on an .amxmi file. If the file contains a software model with runnables and a constraints model with RunnableSequencingConstraints, a Java file will be generated that can be executed as an applet with the help of the JGraphT library. Such an applet visualizes a graph and can be adapted in an applet viewer. The following figure shows such an applet.

Dependency Alternative

This file generation feature focuses on a constraints model approach that features a different dependency representation compared with the result of the label access analysis, which always features two RunnableGroups, each with one RunnableGroupEntry entity, within a RunnableSequencingConstraint. The approach can be adapted to feature more RunnableGroups and more RunnableGroupEntries and fewer RunnableSequencingConstraints, derived from the same graph. An example is expressed in the following figure.

4.5.5 Workflow example

The described features can also be triggered by a workflow (Eclipse's MWE2 workflow engine), exemplarily shown in the following code.

//basic setup
loadModule('/System/Resources')
loadModule('/APP4MC/Workflow')

//Importing needed packages
importPackage(org.eclipse.app4mc.amalthea.workflow.component)
importPackage(org.eclipse.app4mc.amalthea.workflow.core)
importPackage(org.eclipse.app4mc.multicore.partitioning.workflow)

//Configure logging
addLoggerToConsole("org.eclipse.app4mc.multicore")
addLoggerToConsole("org.eclipse.app4mc.amalthea.workflow")

print("Starting Workflow ...")
//general setup
const BASE = "platform:/resource"
const PROJECT = BASE + "/org.eclipse.app4mc.amalthea.example.democar.mapping"
const MODEL_LOCATION1 = PROJECT + "/model/AMALTHEA_Democar_MappingExample.amxmi"

var ctx = new DefaultContext()

//Reader
var reader = new ModelReader()
reader.addFileName(MODEL_LOCATION1)
reader.run(ctx)

//prepartitioning
var prepart = new PrePartitioningWrkflw()
prepart.setAa(false)
prepart.setGgp(false)
prepart.setMinimEdge(false)
prepart.setEffEdge(false)
prepart.run(ctx)
//result is saved in modelslot prePartitioning

//Writer
var writer = new ModelWriter()
writer.setModelSlot("prePartitioning")
writer.setFileName("prePartitioning")
writer.setSingleFile(true)
writer.setOutputDir(PROJECT + "/workflow-output")
writer.run(ctx)

//partitioning
var part = new GeneratePartitioning()
part.setModelLoc(MODEL_LOCATION1)
part.setModelSlot("prePartitioning")
part.setPartitioningAlg("essp")
part.setNumberOfPartitions("4")
part.run(ctx)

//Writer
var writer = new ModelWriter()
writer.setModelSlot("partitioning")
writer.setFileName("partitioning")
writer.setSingleFile(true)
writer.setOutputDir(PROJECT + "/workflow-output")
writer.run(ctx)

print("Finished Workflow")
ctx.clear()
endWorkflow()

4.5.6 Plugin dependencies

org.eclipse.core.runtime,
org.eclipse.emf.ecore,
org.eclipse.emf.ecore.xmi,
org.eclipse.sphinx.emf,
org.eclipse.ui,
org.jgrapht,
org.eclipse.app4mc.amalthea.model,
org.eclipse.app4mc.multicore.openmapping

4.6 Model Details

4.6.1 Unique ID generation

AMALTHEA uses a name-based schema to reference other elements inside of a model.
A custom implementation is used instead of the standard EMF mechanism, which uses an index-based implementation.
The target element id is built according to the following schema:

java.net.URLEncoder.encode(<name of element>, StandardCharsets.UTF_8.toString()) + "?type=" + <element>.eClass.name

Samples:

Referencing a Runnable element with name foo in a TaskRunnableCall looks like the following snippet:

<calls xsi:type="am:TaskRunnableCall" runnable="foo?type=Runnable" />
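As a minimal illustration, the id from the snippet above can be reproduced with plain Java. The class and method names are made up for this example, eClassName stands in for <element>.eClass.name, and Java 10+ is assumed for the Charset overload of URLEncoder.encode:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class AmaltheaIdDemo {
    // Builds a reference id according to the schema above;
    // eClassName corresponds to <element>.eClass.name.
    static String refId(String name, String eClassName) {
        return URLEncoder.encode(name, StandardCharsets.UTF_8) + "?type=" + eClassName;
    }

    public static void main(String[] args) {
        System.out.println(refId("foo", "Runnable"));         // foo?type=Runnable
        System.out.println(refId("my runnable", "Runnable")); // my+runnable?type=Runnable
    }
}
```

The URL encoding matters as soon as an element name contains characters like spaces, which would otherwise break the serialized reference.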

4.6.2 Interfaces and base objects

Several interfaces and abstract classes are defined in the common model.
They provide basic common functionality for all objects that extend them, like the possibility to add CustomAttributes or Tags. Tags are available at different elements with the purpose of grouping them together or annotating them.

4.6.3 Transient back pointers

AMALTHEA maintains a (small) number of back pointers in the model. A transient reference of an object 'points back' to the object referring to it. This allows easier navigation, at the expense of greater memory use.

Current back pointers

The data model has some intermediate objects to express the call of a runnable or the access to a label.
These objects are containments of Task or Runnable and can have additional attributes. The back pointers support an easy way to answer queries like "Which tasks call runnable x ?" or "Which functions read label y ?".

Back pointer (read only)                  Reference
labelAccesses : LabelAccess[]             data : Label
taskRunnableCalls : TaskRunnableCall[]    runnable : Runnable
runnableCalls : RunnableCall[]            runnable : Runnable

All elements with memory representation (e.g. labels, runnables) can be mapped to exactly one memory. The back pointer provides a list of all elements that are assigned to a specific memory.

Back pointer (read only)                  Reference
mappings : Mapping[]                      memory : Memory
mappings : AbstractElementMapping[]       abstractElement : AbstractElementMemoryInformation

Labels and runnables can be located in exactly one section. The back pointer provides a list of all elements that are assigned to a specific section.

Back pointer (read only)                  Reference
labels : Label[]                          section : Section
runnables : Runnable[]                    section : Section

Implementation

Xcore:

In addition to a reference <reference> there is a transient reference named <reference>LinkInt.
Its visibility in the generated code and in the user interface is controlled by @GenModel annotations.

class TaskRunnableCall extends CallSequenceItem
{	...
	refers Runnable[1] runnable
	
	// hidden link (internal)
	@GenModel(property="None", suppressedGetVisibility="true", suppressedSetVisibility="true")
	refers transient Runnable[1] runnableLinkInt opposite taskRunnableCalls
}

class Runnable extends AbstractElementMemoryInformation
{	...	
	// back pointer (readonly)
	@GenModel(property="Readonly", suppressedSetVisibility="true")
	@GenModel(propertyCategory="Read only", propertyFilterFlags="org.eclipse.ui.views.properties.expert")
	refers transient TaskRunnableCall[] taskRunnableCalls opposite runnableLinkInt 
}

Classes and references:

Generated code:

EMF code generation allows the use of dynamic templates. AMALTHEA uses this technique to additionally set the transient value in the generated setter of the standard reference. The dynamic template is triggered by the name patterns '<reference>' and '<reference>LinkInt'.

public class TaskRunnableCallImpl ... {
	...
	public void setRunnable(org.eclipse.app4mc.amalthea.model.Runnable newRunnable) {
		...
 		//Additional setting transient value for transient bi-directional reference
 		setRunnableLinkInt(newRunnable);
	}
}
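The effect of this pattern can be illustrated with a stripped-down plain-Java sketch. The class names SimpleRunnable and SimpleCall are made up; this is not the generated EMF code, only the idea behind it: the setter of the forward reference also maintains the read-only back pointer list.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Owner of the back pointer list (stands in for Runnable)
class SimpleRunnable {
    private final List<SimpleCall> taskRunnableCalls = new ArrayList<>();

    // read only: callers get an unmodifiable view
    public List<SimpleCall> getTaskRunnableCalls() {
        return Collections.unmodifiableList(taskRunnableCalls);
    }
    void addCall(SimpleCall c) { taskRunnableCalls.add(c); }
    void removeCall(SimpleCall c) { taskRunnableCalls.remove(c); }
}

// Owner of the forward reference (stands in for TaskRunnableCall)
class SimpleCall {
    private SimpleRunnable runnable;

    public SimpleRunnable getRunnable() { return runnable; }

    public void setRunnable(SimpleRunnable newRunnable) {
        if (runnable != null) runnable.removeCall(this); // unlink old opposite
        runnable = newRunnable;
        if (newRunnable != null) newRunnable.addCall(this); // maintain back pointer
    }
}
```

Queries like "Which tasks call runnable x?" then reduce to reading the back pointer list instead of scanning the whole model.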

User Interface

4.7 AMALTHEA Model XSD Schema

The AMALTHEA model is specified with the Eclipse Modeling Framework (EMF). In EMF, data models are defined in the Ecore/Xcore format. Eclipse developers can use the AMALTHEA Platform to work with the model. For developers that are not using Java or Eclipse, the specification is also provided in the XSD format. An XSD file for each package can be found here.

The AMALTHEA model references two external model definitions:
- The Ecore model (http://www.eclipse.org/emf/2002/Ecore)
- The Franca model (http://core.franca.org)

The XSD file for the Ecore model can be found on the Eclipse EMF download page: download the Models package from Latest Releases. The download contains a folder "org.eclipse.emf.ecore_*", which contains the XSD schema for Ecore.

The Franca model is only available as an Ecore file. To generate an XSD schema, open the AMALTHEA Tool Platform and perform the following steps:

Open the Plug-Ins View by selecting Window -> Show View -> Other and select Plug-Ins View in the folder Plug-in Development:

Select the plug-in org.franca.core and import it as Binary Project:

Open the model folder in the imported project, right-click the "franca.genmodel" file and select "Export Model".

Select the entry "XML Schema for XMI" in the first wizard page.

Navigate to the last wizard page, select "org.franca.core.franca" and click finish.

The file "francaXMI.xsd" is now located in the model folder.

5 Release Notes

5.1 Eclipse APP4MC 0.8.0 (Apr 2017)

General

  • Model changes (mainly to simplify handling)
  • Improved tooling: model utilities
  • New UI components: Multicore Wizard, Task-Visualizer (draft version)

Model handling

  • Model migration support (0.7.2 -> 0.8.0)

Model changes

  • Simplified model handling: Mapping Model, Modes, Time
  • New classifiers to annotate Core and Memory
  • Reworked Property Constraints
  • Extended OS Overheads

For details see Release 0.8.0 issues

5.2 Eclipse APP4MC 0.7.2 (Jan 2017)

General

  • Smaller product definition (optional parts can be installed via update sites)
  • Harmonized model API

Model handling

  • Model migration support (0.7.1 -> 0.7.2)
  • Improved back references

Model changes

  • New core-specific runtimes (instructions) for runnables
  • New model elements for streaming communication (Channels)*
  • New Transmission Policy for communication (e.g. label accesses)
  • New Event-based Stimulus
  • New Data Consistency needs
  • New Counters for InterProcessActivation and EventStimulus
  • Harmonized model elements to describe probability
  • Storage of common elements (e.g. tags) in one central location

Bug fixes / Improvements

  • Model migration (ISR call graph, Section names, cross document references)
  • Back references (Memory -> Mapping, Scheduler -> CoreAllocation)

Documentation

  • New model elements: Physical memory sections, Channels
  • New examples
* = Experimental feature (first modeling attempt that may change in future versions).

For details see Release 0.7.2 issues

5.3 Eclipse APP4MC 0.7.1 (Oct 2016)

Model handling

  • Model migration support (0.7.0 -> 0.7.1)
  • Improved update of opposite references
  • Storage of default values is enabled by default

Model changes

  • New distribution "Beta Distribution"
  • New stimuli type "Synthetic" (to "replay" activations from a hardware trace)
  • New DataStabilityGroup
  • Introduced SamplingType as new attribute for Deviation
  • Introduced physical memory sections (PhysicalSectionConstraint, PhysicalSectionMapping)
  • Reworked AffinityConstraints (removed Scheduler constraints, added Data constraints)
  • Reworked Event / Event-Chain Constraints
  • Reworked RunnableSequencingConstraints
  • New types for DataSizes and DataRates

For details see Bugzilla entries for 0.7.1

5.4 Eclipse APP4MC 0.7.0 (Jul 2016)

Model handling

  • Model migration support (itea1.1.0 -> itea1.1.1 -> 0.7.0)
  • Simplified model creation wizard
  • One consolidated model
  • Additional opposite references (transient) to ease model navigation
  • Scripting support for workflows (based on Eclipse EASE)

Model changes

  • Removed 'attributes' and 'features' from hardware model
  • Replaced generic ProcessAllocation with specific ISRAllocation and TaskAllocation
  • New BigInteger values for CustomAttributes
  • Some renamed classes and attributes (details in the help)

Model structure

  • Removed direct dependency to Franca (replaced by implicit reference by interface name)

Documentation

  • Updated model documentation / class diagrams
  • New developer guide regarding Model Migration Framework
  • New developer guide regarding Workflows

5.5 AMALTHEA 1.1.1 (Oct 2015)

Model handling

  • Changed serialized model references to name based references
  • Restricted model scope to surrounding folder (without sub folders)

Model changes

  • Added HwAccessPath to hardware model to describe an access path either by using latencies or available hardware elements
  • Added Modes to common model and ModeLabels to software model to describe alternative execution paths of tasks
  • Added isBuffered attribute to Labels and LabelAccesses to describe data consistency requirements
  • Added Requirements to constraints model to describe quality requirements for the dynamic architecture
  • Added DataCoherencyGroups to constraints model to describe data coherency requirements
  • Added ProcessChains to software model to describe data processing over multiple tasks
  • Switched validations to a Sphinx check based implementation and added different model validations
  • New values for CustomAttributes with Time and SignedTime

Features

  • New AMALTHEA Trace Database (ATDB) Import to create an AMALTHEA model from a trace

Documentation

  • Updated model documentation / class diagrams
  • Updated user guide documentation to include steps to create a new AMALTHEA model with the wizard
  • Updated user guide documentation to include a tutorial for using the AMALTHEA Trace Database (ATDB) Import

5.6 AMALTHEA 1.1.0 (Aug 2015)

  • Updated OS model to include the possibility to describe the operating system in more detail, like OS overhead
  • Added Event Sets to Event model to allow the definition of an unspecified order for events in an event chain
  • Added Data Age Constraints to constraints model
  • Added Label Value Lists to Stimulus model to enable and disable stimuli by label values or to set label values by the activation of a stimulus
  • Added Arrival Curve Stimulus to Stimulus model
  • Added Terminate Task for Call Graph in Software model to allow the termination of a task within a switch
  • Added an address attribute in the Mapping model to allow the definition of a memory address in the memory mapping
  • Added CustomAttributes to all AMALTHEA model elements
  • Added first version of OSLC adapter
  • Replaced attribute read/write in LabelAccessStatistic with value and readCacheMisses with cacheMisses
  • Added documentation for Counter element in SW model

6 Roadmap

Current topics:

  • Model changes (e.g. for heterogeneous hardware)
  • Model utilities
  • Interactive help (visualization of model and meta model structure)

All new releases are provided by Eclipse APP4MC.

For details see: