Issue: 1.0
Status: Definitive
2nd July 2002

Originator: Rod Chapman, Peter Amey
Approver: David Stokes
Contents
4 Other significant Examiner changes
5 Miscellaneous Examiner corrections
6.1 Simplification of common Integer inequalities
6.2 Simplification of common enumerated inequalities
6.3 Simplification of VCs involving quantified expressions
7.1 New built-in rule families: MODULAR, BITWISE and ENUMERATION
10.1 SPARK — The SPADE Ada Kernel and SPARK95 — The SPADE Ada95 Kernel
10.3 Generation of VCs for SPARK Programs
10.4 Generation of RTCs for SPARK Programs
10.6 Proof Checker Rules Manual
10.7 Installation manuals - SUN and NT
11 Limitations and known errors
12 Change Summary from Release 2.0
12.1 Release 2.0 - November 1995
12.4 Release 3.0 - September 1997
12.5 Release 4.0 - December 1998
12.7 Release 6.0 - November 2001
13 Operating system compatibility
Continued development of the SPARK[1] Examiner has produced a number of useful features which Praxis Critical Systems wish to make available before the next planned major release. These features have been incorporated into interim Release 6.1.
This document describes changes in the behaviour of all variants of the SPARK Examiner Release 6.1 compared to Release 6.0.
For further information about this document please contact Praxis Critical Systems:
By phone: 01225 466991
By FAX: 01225 469006
By email: sparkinfo@praxis-cs.co.uk
The SPARK language has been extended to include a subset of tagged types and type extensions. The principal exclusion from the supported subset is class-wide operations, including dynamic dispatch (equivalent to pragma Restrictions (No_Dispatch) in RM H.4 (19)). The restriction preserves the statically deterministic nature of control flow in SPARK programs and allows the full range of SPARK analyses to be performed. Other restrictions are introduced to avoid the need for overload resolution. A useful consequence of the selected rules is that no additional run-time checks are required, since a SPARK program can never fail “Tag_Check” (as defined in RM 11.5 (18)).
The SPARK restrictions on tagged types are summarised below. An example and further information can be found at Appendix A.
1 Abstract types may not be used.
2 Controlled types may not be used.
3 Tagged types and type extensions may only be declared in the specification of library unit packages.
4 At most one type extension may be declared in each package; however, more than one root tagged type may be declared in each package.
5 A subprogram declaration may not have the same name as a potentially inheritable subprogram unless it successfully overrides it.
6 Actual parameters matching formals of tagged types must be objects (or ancestor type conversions of objects), not general expressions.
7 The operand of an ancestor type conversion must be an object (not an expression).
8 When completing a private extension, the type named in the private part must be exactly the same as that named in the visible part (Ada requires only that it be derived from the same root). This restriction is imposed purely for simplicity and clarity.
9 The ancestor part of an extension aggregate may not be a type mark.
There is one further temporary restriction that may be considered to be an Examiner tool limitation rather than a SPARK language restriction:
1 An untagged private type may not be implemented as a tagged type or a type extension.
One existing SPARK rule is relaxed for tagged types: a type conversion of a tagged type may be used as an actual parameter where the corresponding formal is exported.
A new form of annotation, the type assertion, is now defined. This allows a user to specify additional properties of a type.
Initially, the only form of type assertion allowed is used to specify the Base Type of a signed integer type declaration. This can be used to significantly improve the generation of VCs for Overflow_Check when the "exp" command line switch is used.
As an example, consider the following type declaration:
type T is range 1 .. 10;
When generating VCs for Overflow_Check, the Examiner generates references to T'Base'First and T'Base'Last. Unfortunately, the choice of T'Base is implementation-dependent, and so the Examiner cannot generate replacement rules for these constants without additional information. The type-assertion annotation and the configuration file mechanism (see section 4) allow this information to be given.
Using the type assertion annotation, the user might write:
type T is range 1 .. 10;
--# assert T'Base is Short_Short_Integer;
This annotation specifies that the base type of T is Standard.Short_Short_Integer. Using the configuration file mechanism, the user can also specify the range of such predefined Integer base types. Given these data, the Examiner can generate suitable replacement rules for T'Base'First and T'Base'Last which can be used by the Simplifier.
It is important to note that the set of available Integer base types, and the compiler's policy regarding how base types are chosen for a given integer type, are implementation-dependent, and so must be determined and verified by the user.
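As an illustrative sketch only (the rule-family name and file layout shown here are assumptions, not actual Examiner output), the replacement rules generated for the example above might resemble:
/* assumed sketch: T'Base asserted to be Short_Short_Integer, range -128 .. 127 */
t_rules(1): t__base__first may_be_replaced_by -128.
t_rules(2): t__base__last may_be_replaced_by 127.
The Simplifier can then substitute these literal bounds when attempting to discharge Overflow_Check VCs.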
Constrained and full-range subtypes of modular types are now permitted.
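As an illustrative sketch (the type names here are invented for the example), such subtypes might be written:
type Byte is mod 2**8;
subtype Nibble  is Byte range 0 .. 15; -- constrained subtype
subtype AnyByte is Byte;               -- full-range subtype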
This file serves a similar purpose to the target data file, in that it allows implementation-dependent values to be supplied to the Examiner. However, it is a significantly more general mechanism, and has a greater positive impact on the generation and simplification of VCs, as well as on static semantic checking. The use of the target configuration file is mutually exclusive with the use of the target data file. The use of the target configuration file is recommended.
NB. If the target configuration file is specified on the command line, but cannot be found by the Examiner, then analysis of the source files will not be carried out, and a warning will be emitted.
The format of the file resembles a number of SPARK package specifications, concatenated in one file. The grammar of the file is as follows:
config_file = config_defn { , config_defn }
config_defn = “package” package_name “is” [seq_defn] “end” package_name “;”
seq_defn = defn { , defn }
defn = fp_type_defn | int_type_defn | int_subtype_defn | fp_const_defn | int_const_defn | private_defn
fp_type_defn = “type” fp_type_name “is” “digits” int_literal “range” fp_literal “..” fp_literal “;”
int_type_defn = “type” int_type_name “is” “range” int_expr “..” int_expr “;”
private_defn = “type” private_type_name “is” “private” “;”
int_expr = un_exp_part [ add_sub int_literal ]
add_sub = “+” | “-”
un_exp_part = [ “-” ] exp_part
exp_part = int_literal “**” int_literal | int_literal
int_subtype_defn = “subtype” int_subtype_name “is” simple_name “range” int_expr “..” int_expr “;”
fp_const_defn = fp_const_name “:” “constant” “:=” fp_literal “;”
int_const_defn = int_const_name “:” “constant” “:=” int_expr “;”
int_literal = <a valid SPARK integer literal>
fp_literal = <a valid SPARK floating point literal>
Although the format of the target configuration file resembles a SPARK source file, it is important to note that it is not one: only items from the above grammar are allowed, and in particular the range of acceptable expressions is much more limited. Standard Ada comments may also be used.
The following is an example target configuration file for GNAT 3.14a1 under Win32.
package Standard is
type Integer is range -2**31 .. 2**31-1;
type Short_Short_Integer is range -2**7 .. 2**7-1;
type Short_Integer is range -2**15 .. 2**15-1;
type Long_Integer is range -2**31 .. 2**31-1;
type Long_Long_Integer is range -2**63 .. 2**63-1;
type Short_Float is digits 6 range -3.40282E+38 .. 3.40282E+38;
type Float is digits 6 range -3.40282E+38 .. 3.40282E+38;
type Long_Float is digits 15
range -1.79769313486232E+308 .. 1.79769313486232E+308;
type Long_Long_Float is digits 18
range -1.18973149535723177E+4932 .. 1.18973149535723177E+4932;
end Standard;
package System is
type Address is private;
Storage_Unit : constant := 8;
Word_Size : constant := 32;
Max_Int : constant := 2**63-1;
Min_Int : constant := -2**63;
Max_Binary_Modulus : constant := 2**64;
Max_Base_Digits : constant := 18;
Max_Digits : constant := 18;
Fine_Delta : constant := 1.0842E-19;
Max_Mantissa : constant := 63;
subtype Any_Priority is Integer range 0 .. 31;
subtype Priority is Any_Priority range 0 .. 30;
subtype Interrupt_Priority is Any_Priority range 31 .. 31;
end System;
The Examiner supports ‘package’ specifications in the target configuration file. In SPARK95 mode, both package Standard and package System may be specified.
In SPARK83 mode, only package Standard is allowed. The packages may appear in any order. In package Standard, the following types may be declared:
· Integer;
· Float;
· Any signed integer type which has ‘_Integer’ as a suffix; and
· Any floating point type which has ‘_Float’ as a suffix.
In package System, the type Address may be declared, as may the following subtypes:
· Any_Priority;
· Priority; and
· Interrupt_Priority.
Additionally, the following named numbers may be defined:
· Storage_Unit;
· Word_Size;
· Max_Binary_Modulus;
· Min_Int;
· Max_Int;
· Max_Digits;
· Max_Base_Digits;
· Max_Mantissa; and
· Fine_Delta.
The ‘package’ specifications are expected to conform to the normal SPARK rule that the package name be repeated in the ‘end’ clause. All declarations within the packages are optional, subject to any additional rules below.
In package Standard, type Integer and all ‘_Integer’ types are expected to be signed integer types. Similarly, type Float and all ‘_Float’ types are expected to be constrained floating point types.
In package System, which may only be included in SPARK95 mode, the following well-formedness checks apply:
· Type Address must be private. The declaration of type Address implicitly declares a deferred constant System.Null_Address of type Address;
· Storage_Unit, Word_Size, Max_Binary_Modulus, Min_Int, Max_Int, Max_Digits, Max_Base_Digits and Max_Mantissa are all expected to be named integers.
· Fine_Delta is expected to be a named real.
· Subtype Any_Priority is expected to have Integer as its parent type; additionally, if Any_Priority is specified, both Priority and Interrupt_Priority must also be specified.
· Subtypes Priority and Interrupt_Priority are expected to have Any_Priority as their parent types. The range of Priority must include at least 30 values. The declaration of subtype Priority implicitly defines a constant Default_Priority of type Priority.
· The following relations must hold between Any_Priority, Priority and Interrupt_Priority:
Any_Priority’First = Priority’First;
Any_Priority’Last = Interrupt_Priority’Last;
Priority’Last + 1 = Interrupt_Priority’First.
· Max_Binary_Modulus must be a positive power of 2.
In addition, standard SPARK rules on redeclaration of existing identifiers, empty ranges and legality of subtype ranges apply.
An Ada source file is distributed with the Examiner which, when compiled and run, will automatically generate a valid target configuration file; it is named confgen.adb and will be located in the same directory as the Examiner binary. Please note that the resultant configuration file will only be valid for SPARK95 usage, since it includes package System. The package specifications are generated on the standard output, and can be redirected to a file in the normal fashion.
The configuration file generator will probably require some minor modification (depending on whether the target compiler supports Long_Integer, for example), so it is suggested that the user inspect the source code before use.
NOTE: In a cross-compilation environment, where host and target compilers differ, it is very important that the configuration file is valid for the final target computer (i.e. generated using the cross compiler), not the host compiler.
The Examiner command line has a large number of optional switches and arguments. A new switch (/help on VAX and NT, -help on Sun systems) provides a convenient reminder of the options available.
Release 6.0 introduced a new form of derives clause: “derives null from X, Y, Z”. This was primarily concerned with external variables and could be used to show that an imported variable was used only to have an effect outside the “SPARK boundary” of a system. For example, consider a procedure that performs a busy wait on an input port until a particular value (passed in as a parameter) is read. Such a procedure would clearly need to import the parameter, but nothing would be derived from it (except an invisible side-effect on time consumed).
A useful alternative purpose for the null derives clause is to allow data logging of values in a way that is hidden from the Examiner’s information flow analysis. A data logging procedure, with a hidden body, can be given a null derives annotation, thus making calls to it neutral in flow analysis terms. A full description of this use is in the Release 6.0 Release Note.
Release 6.0 of the Examiner took the transparency of calls to a procedure with a null derives annotation to the extreme that the actual parameters passed were not even subject to data flow analysis; this meant that undefined (random) values could be logged. Release 6.1 still permits the use of the null derives in this way but will signal a data flow error if the actual parameters concerned do not have valid values. For example:
procedure Log (X : in Integer);
--# derives null from X;
...
Z : Integer;
...
-- Z has no valid value at this point
Log (Z);
^ a data flow error will now be reported here
Attributes ’Floor and ’Ceiling are now implemented and accepted by the Examiner.
The Examiner no longer allows misuse of logical operators between universal integer types as in:
X : constant := 5 and 3;
The Examiner now detects illegal duplicate record field names in type declarations. For example:
type T is record
A : Integer;
A : Boolean; -- illegal reuse of A
end record;
Under certain circumstances the Examiner would generate VCs for overflow checks involving “universal_integer_first” and “universal_integer_last”. These have been replaced with “system__min_int” and “system__max_int” respectively. Note that values for these constants can be supplied to the Examiner using the configuration file (See Section 4.1).
Release 6.0 of the Examiner did not check the validity of assertions placed between a while condition and the reserved word “loop”; thus:
while X > 0
--# assert this assertion not checked for validity
loop
Unchecked, malformed assertions could result in invalid VCs or cause the VC generator to crash.
The /noecho option is intended to facilitate use of the Examiner in script and command files by suppressing output to the console. In release 6.0, certain screen messages were not correctly suppressed; this has been corrected in Release 6.1.
The Examiner no longer incorrectly permits zero or negative accuracy in real type definitions. The following are both illegal and are rejected.
type T1 is delta -0.5 range -1.0 .. 1.0;
type T2 is delta 0.0 range -1.0 .. 1.0;
Recursion in meta-files and index-files is now detected and reported.
The Examiner now gives more meaningful messages in the presence of either illegal or obsolete address clauses, depending on whether SPARK83 or SPARK95 mode is active.
The Examiner now gives more meaningful messages for the use of illegal or obsolete Import and Interface pragmas, depending on whether SPARK83 or SPARK95 mode is active.
None
Release 6.1 of the SPARK toolset includes release 2.03 of the SPADE Simplifier, which incorporates the following improvements:
The Simplifier is now more powerful at discharging VCs whose conclusions are of the form A + B >= Z, A + B <= Z, or A <> Z, where A and B are FDL variables and Z is an integer literal. These forms arise commonly in the RTC VCs arising from range- and division-checks.
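As an illustrative sketch (the variable names and bounds here are invented), a range-check VC of this shape might be:
H1: a >= 1 .
H2: a <= 100 .
H3: b >= 1 .
H4: b <= 100 .
->
C1: a + b <= 200 .
The Simplifier can now discharge such conclusions directly from the interval hypotheses.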
Some improvement has been made to the Simplifier's ability to discharge VCs involving inequalities over enumerated types. In particular, the common forms of VC arising from SPARK "for" loops over enumerated types are now simplified successfully. For example, consider the following VC, which arises as a result of the path "around" a SPARK "for" loop, given an enumerated type T and a loop variable I:
H1: i >= t__first .
H2: i <= t__last .
H3: not (i = t__last) .
->
C1: succ(i) >= t__first .
C2: succ(i) <= t__last .
The simplifier now discharges such VCs automatically.
The Simplifier can now simplify simple cases of VCs involving quantified expressions, as typically arise from the use of array types in SPARK. For example, given the FDL declarations
type programswitch__positions =
(programswitch__auto, programswitch__clock,
programswitch__on1, programswitch__off1,
programswitch__on2, programswitch__off2);
type onofftimes = array [programswitch__positions] of
integer;
a VC of the form
H1: for_all(i___1 : programswitch__positions,
i___1 >= programswitch__on1 and
i___1 <= programswitch__off2 ->
element(onofftime, [i___1]) >= 0 and
element(onofftime, [i___1]) <= 86399) .
->
C1: element(onofftime, [programswitch__off1]) >= 0 .
C2: element(onofftime, [programswitch__off1]) <= 86399 .
is successfully discharged by the Simplifier.
The performance of the Simplifier's hypothesis-elimination phase has been improved in this release, particularly for VCs that have a large number of hypotheses.
Release 6.1 of the SPARK toolset includes release 1.47 of the SPADE Proof Checker, which incorporates the following improvements:
Three new built-in rule families are supplied with this release. MODULAR supplies rules for dealing with expressions involving SPARK's "mod" operator. BITWISE provides rules concerning the "bit__and", "bit__or" and "bit__xor" functions that are used to model SPARK's bitwise operators on modular types. Finally, ENUMERATION supplies additional useful rules for dealing with inequalities over enumerated types.
The Proof Checker Rules manual contains more detail on these rule families.
The POGS tool now includes a facility for VCs to be marked in a file as being "proved by review." POGS now takes these files into account in its analysis.
There are no known backward incompatibilities introduced between Release 6.0 and Release 6.1.
The following user manuals have been updated as part of this release:
· Updated to describe tagged types, type assertion annotations, the configuration file, and modular subtypes.
· Adds documentation of target configuration file.
· Adds new error messages and warnings.
· Command-line switches summary includes the "Help" switch.
· Adds description of how tagged types are mapped into FDL.
· Adds description of how the target configuration file and type assertion annotations can be used to improve the VC generation and subsequent simplification of VCs for Overflow_Check.
· Modified to document the use of the new "proof review" file mechanism.
· Updated to include three new built-in rule families: MODULAR, BITWISE and ENUMERATION.
· These manuals have been updated to include instructions for upgrading to release 6.1 from a former release.
This section describes limitations of the Examiner tool arising mainly from incomplete implementation of planned features.
1 Array aggregate and case statement completeness checking: in some rare cases involving very large type ranges and the absence of an others clause, a warning may be produced that the Examiner cannot check the completeness of the aggregate or case statement.
2 The flow analyser does not take account of the fact that a for loop over a type mark must be entered, since such a range cannot be empty in SPARK. The flow analysis is identical to the case where the loop range might be empty.
3 The Examiner does not permit the declaration of subprograms in the private part of a package; this limitation will be reviewed in a future Examiner release since the introduction of child packages may make such subprograms useful.
4 The SPARK 95 language definition removes the distinction between initial and later declarative items; this distinction remains in force in the Examiner which requires SPARK 83 declaration orders even in SPARK 95 mode.
5 The Examiner does not yet permit the use of 8-bit characters in SPARK 95 user-defined identifiers.
6 Universal expressions in a modular context may sometimes require type qualification.
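The last limitation above can be illustrated with a sketch (the type and object names are invented for the example):
type Byte is mod 2**8;
...
B : Byte := Byte'(2 + 5); -- type qualification of the universal expression may be required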
1 Ada string inequality is not modelled.
2 VCs involving string catenation which includes the character ASCII.NUL will be incorrect.
3 Aggregates of multi-dimensional arrays cannot be modelled although aggregates of arrays of arrays can.
4 Verification conditions involving real numbers are evaluated using infinite precision or perfect arithmetic; this allows the correctness of an algorithm to be shown but cannot guard against the effects of cumulative rounding errors for example.
5 The Examiner does not generate VCs for package initialization parts. Statically-determinable constraint errors will be detected during well-formation checking of package initialization.
This section lists known errors in the Examiner which are awaiting investigation and correction.
1 The SPARK rule that array actual parameters must have the same bounds as the formal parameter is not checked for function parameters where the actual parameter is a subtype of an unconstrained array type. Since subtype bounds are static in SPARK, errors of this kind should be detected by an Ada compiler; if not, an unconditional run-time error will occur.
2 The Examiner permits the body of a subprogram to be made up entirely of proof statements, thus breaching the Ada rule that at least one Ada statement must be present.
3 Where a package declares two or more private types the Examiner permits mutual recursion between their definitions in the private part of the package.
4 The Examiner does not “freeze” for loop bounds properly when generating VCs; for example, given “for I in Integer range 1 .. F(X) loop” where the value of F(X) changes in the body of the loop, the bound is not fixed on loop entry. To avoid this problem, assign the value of F(X) to a suitable variable and use this as the loop bound.
5 The Examiner does not take due account of a range constraint when determining the subtype of a loop variable; this affects completeness checking of case statements within the loop. For example, “for I in Integer range 1 .. 4 loop” should require only the values 1, 2, 3 and 4 to be covered by a case statement within the loop.
6 The Examiner wrongly demands that a private type that has limited private components must also be limited.
7 When summarising the counts of pragmas found during an analysis, the totals may depend on whether units are selected via the command line (or metafile) or using the index mechanism. The difference affects only pragmas placed between program units, and arises because placing a file name on the command line causes the entire file to be analysed, whereas selecting it using indexes causes only the required unit to be read from the file.
8 A string literal still requires type qualification when it is an actual parameter and the formal parameter is a constrained string subtype.
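The last known error above can be illustrated with a sketch (the names are invented for the example):
subtype Name5 is String (1 .. 5);
procedure Display (N : in Name5);
...
Display (Name5'("Hello")); -- qualification is still demanded here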
Each Examiner Release is accompanied by a release note detailing changes from the previous version; this section simply summarises the various changes that have been made.
Release 2.0 added:
· static expression evaluation;
· variable initialization at declaration;
· full-range scalar subtypes; and
· operator renaming in package specifications.
Release 2.1 added:
· facilities for proof of absence of run-time errors
Release 2.5 was distributed with “High Integrity Ada - the SPARK Approach” and provided initial facilities for SPARK 95.
Windows NT was supported for the first time with this release. Release 3.0 also added:
· additional SPARK 95 support;
· flow analysis of record fields;
· command line meta files;
· named numbers;
· unqualified string literals;
· moded global annotations; and
· optional information flow analysis.
With Release 4.0 we upgraded all users to a single product standard supporting SPARK 83, SPARK 95 and analysis options up to and including proof facilities. New features were:
· full implementation of public and private child packages;
· default switch file; and
· provision of the INFORMED design document.
· Enhanced proof support:
- facilities for proof of programs containing “abstract state”;
- addition of quantified expressions;
- proof rule generation for enumeration types;
- identification of the kind and source of each VC;
- suppression of trivially true VCs; and
- Proof Obligation Summariser tool (POGS).
· Optional HTML output files with hyperlinks that can be “browsed” interactively
· Better support for common Ada file naming conventions
· User-selectable annotation character
· Improved suppression of analysis where results might otherwise be misleading
· Static expression evaluation in proof contexts
· Singleton enumeration types
· Revised SPARK_IO package
· Error numbering
· Introduction of “external variables” to simplify modelling of the interactions between a SPARK program and its external environment.
· Addition of the “null derives” annotation to describe information flows which affect only the external environment.
· Introduction of modular types
· Use of loop labels in exit statements
· Use of global modes on function subprograms
· Extended support for predefined types such as Long_Integer
· Simplified run-time check generation for own variables
· Relaxation of need for mandatory type announcement of own variables
· Plain output option to simplify comparisons of Examiner output files
· Platform-independent switch files and metafiles
· Support for intentionally infinite loops
· Detection of own variables that can never be initialized
· Detection of unusable private types
· Extra refinement checks on global variables when performing data flow analysis
· Detection of unnecessary others clause in case statements
· Extensions to the POGS tool
· Improved error messages to distinguish different cases of variables which are “not declared or visible”
· Improved SPADE Simplifier Release 2.0
· New “SPARKSimp” Tool
Release 6.1 is not available on VAX/VMS at this time.
The toolset is compatible with Solaris 5.6 through to 8 including those with a 64-bit kernel.
The toolset is compatible with Windows NT 4.0 and Windows 2000. The executables are also known to work on Windows 95 and 98; however, use of the toolset on these operating systems is unsupported. The FlexLM licence manager software only runs on Windows NT or 2000.
Appendix A. Tagged type example
A.1 Record extensions
This example illustrates many of the features of tagged types supported by SPARK. Comments in the code emphasise points of particular interest.
package Points
is
type T is tagged record
X, Y : Integer;
end record;
procedure InitPoint (O : out T;
X, Y : in Integer);
--# derives O from X, Y;
--# post O.X = X and O.Y = Y;
-- to illustrate alternative implementation
procedure InitPoint2 (O : out T;
X, Y : in Integer);
--# derives O from X, Y;
--# post O.X = X and O.Y = Y;
function GetX (O : T) return Integer;
--# return O.X;
function GetY (O : T) return Integer;
--# return O.Y;
end Points;
------------------------------------------------------------
package body Points
is
procedure InitPoint (O : out T;
X, Y : in Integer)
is
begin
O.X := X;
O.Y := Y;
end InitPoint;
-- alternative implementation using aggregate
procedure InitPoint2 (O : out T;
X, Y : in Integer)
is
begin
O := T'(X => X, Y => Y);
end InitPoint2;
function GetX (O : T) return Integer
is
begin
return O.X;
end GetX;
function GetY (O : T) return Integer
is
begin
return O.Y;
end GetY;
end Points;
------------------------------------------------------------
with Points;
--# inherit Points;
package Lines
is
type TenthsOfDegrees is range 0..3600; -- anticlockwise from 3 o'clock
-- although T has been re-used here, there is no restriction on the type name that can be used.
type T is new Points.T with record
Angle : TenthsOfDegrees;
Length : Natural;
end record;
procedure InitLine (O : out T;
X, Y : in Integer;
Angle : in TenthsOfDegrees;
Length : in Natural);
--# derives O from X, Y, Angle, Length;
--# post O.X = X and
--# O.Y = Y and
--# O.Angle = Angle and
--# O.Length = Length;
-- to illustrate alternative implementation
procedure InitLine2 (O : out T;
X, Y : in Integer;
Angle : in TenthsOfDegrees;
Length : in Natural);
--# derives O from X, Y, Angle, Length;
--# post O.X = X and
--# O.Y = Y and
--# O.Angle = Angle and
--# O.Length = Length;
function GetAngle (O : T) return TenthsOfDegrees;
--# return O.Angle;
function GetLength (O : T) return Natural;
--# return O.Length;
-- InitPoint, InitPoint2, GetX and GetY are inherited unchanged from Points
end Lines;
------------------------------------------------------------
package body Lines
is
procedure InitLine (O : out T;
X, Y : in Integer;
Angle : in TenthsOfDegrees;
Length : in Natural)
is
begin
InitPoint (O, X, Y); -- inherited operation, sets X and Y
O.Angle := Angle;
O.Length := Length;
end InitLine;
-- alternative implementation using extension aggregate
procedure InitLine2 (O : out T;
X, Y : in Integer;
Angle : in TenthsOfDegrees;
Length : in Natural)
is
Apoint : Points.T;
begin
Points.InitPoint (Apoint, X, Y); -- direct call of root operation
O := T'(Apoint with Angle, Length); -- extension aggregate
end InitLine2;
function GetAngle (O : T) return TenthsOfDegrees
is
begin
return O.Angle;
end GetAngle;
function GetLength (O : T) return Natural
is
begin
return O.Length;
end GetLength;
end Lines;
------------------------------------------------------------
with Points;
--# inherit Points;
package NewPoints
is
type T is new Points.T with null record; -- no extra fields
function GetX (O : T) return Integer; -- example of overriding operation
--# return O.X;
end NewPoints;
------------------------------------------------------------
package body NewPoints
is
function GetX (O : T) return Integer
is
begin
-- direct call of root operation using ancestor type conversion of argument
return Points.GetX (Points.T (O));
end GetX;
end NewPoints;
------------------------------------------------------------
with NewPoints;
--# inherit NewPoints;
package Circles
is
type T is new NewPoints.T with record
Radius : Natural;
end record;
procedure InitCircle (O : out T;
X, Y : in Integer;
Radius : in Natural);
--# derives O from X, Y, Radius;
--# post O.X = X and O.Y = Y and O.Radius = Radius;
function GetRadius (O : T) return Natural;
--# return O.Radius;
-- NewPoints.GetX is inherited
end Circles;
------------------------------------------------------------
package body Circles
is
procedure InitCircle (O : out T;
X, Y : in Integer;
Radius : in Natural)
is
begin
InitPoint (O, X, Y); -- inherited operation, via NewPoints
O.Radius := Radius;
end InitCircle;
function GetRadius (O : T) return Natural
is
begin
return O.Radius;
end GetRadius;
end Circles;
------------------------------------------------------------
with Points, Lines, Circles;
--# inherit Points, Lines, Circles;
package StickMen
is
-- "multiple inheritance"
type T is record
Head : Circles.T;
Torso : Lines.T;
LeftArm : Lines.T;
RightArm : Lines.T;
LeftLeg : Lines.T;
RightLeg : Lines.T;
end record;
procedure InitStickMan (O : out T;
X, Y : in Integer; -- centre of head
Height : in Natural;
Width : in Natural);
--# derives O from X, Y, Height, Width;
--# pre Height > 10 and Width > 10; -- to ensure head not invisible
end StickMen;
------------------------------------------------------------
package body StickMen
is
   procedure InitStickMan (O      : out T;
                           X, Y   : in Integer; -- centre of head
                           Height : in Natural;
                           Width  : in Natural)
   is
      DrawFromY   : Integer;
      TorsoLength,
      LegLength,
      ArmLength,
      HeadSize    : Natural;
   begin
      HeadSize    := Width / 10;
      ArmLength   := Width / 2;
      TorsoLength := (Height - HeadSize) / 2;
      LegLength   := TorsoLength;
      DrawFromY   := Y;

      -- draw head of radius one tenth width
      Circles.InitCircle (O.Head,
                          X,
                          DrawFromY,
                          HeadSize);

      -- move to bottom of head
      DrawFromY := DrawFromY - HeadSize;

      -- draw torso
      Lines.InitLine (O.Torso,
                      X,
                      DrawFromY,
                      2700, -- down
                      TorsoLength);

      -- move to bottom of torso
      DrawFromY := DrawFromY - TorsoLength;

      -- draw legs
      Lines.InitLine (O.LeftLeg,
                      X,
                      DrawFromY,
                      2600,
                      LegLength);
      Lines.InitLine (O.RightLeg,
                      X,
                      DrawFromY,
                      2800,
                      LegLength);

      -- move back to just below head
      DrawFromY := Y - 2 * HeadSize;

      -- draw arms
      Lines.InitLine (O.LeftArm,
                      X,
                      DrawFromY,
                      1900,
                      ArmLength);
      Lines.InitLine (O.RightArm,
                      X,
                      DrawFromY,
                      3500,
                      ArmLength);
   end InitStickMan;
end StickMen;
A.2 Private extensions
This example, a subset of the one above, shows the use of private tagged types and private extensions. Note that proof functions can be inherited in exactly the same way as Ada subprograms.
package Points
is
   type T is tagged private;

   -- proof functions to make visible private fields of T
   --# function GetX (O : T) return Integer;
   --# function GetY (O : T) return Integer;

   procedure InitPoint (O    : out T;
                        X, Y : in Integer);
   --# derives O from X, Y;
   --# post GetX (O) = X and GetY (O) = Y;
private
   type T is tagged record
      X, Y : Integer;
   end record;
end Points;
------------------------------------------------------------
package body Points
is
   procedure InitPoint (O    : out T;
                        X, Y : in Integer)
   is
   begin
      O.X := X;
      O.Y := Y;
   end InitPoint;
end Points;
------------------------------------------------------------
with Points;
--# inherit Points;
package Lines
is
   type TenthsOfDegrees is range 0 .. 3600; -- anticlockwise from 3 o'clock

   type T is new Points.T with private; -- private extension

   --# function GetAngle (O : T) return TenthsOfDegrees;
   --# function GetLength (O : T) return Natural;

   procedure InitLine (O      : out T;
                       X, Y   : in Integer;
                       Angle  : in TenthsOfDegrees;
                       Length : in Natural);
   --# derives O from X, Y, Angle, Length;
   --# post GetX (O) = X and -- use of inherited proof function
   --#      GetY (O) = Y and -- use of inherited proof function
   --#      GetAngle (O) = Angle and
   --#      GetLength (O) = Length;
private
   type T is new Points.T with record
      Angle  : TenthsOfDegrees;
      Length : Natural;
   end record;
end Lines;
------------------------------------------------------------
package body Lines
is
   procedure InitLine (O      : out T;
                       X, Y   : in Integer;
                       Angle  : in TenthsOfDegrees;
                       Length : in Natural)
   is
   begin
      InitPoint (O, X, Y); -- inherited operation, sets X and Y
      O.Angle  := Angle;
      O.Length := Length;
   end InitLine;
end Lines;
A.3 Path functions and verification conditions
In VCs and path functions, type extensions are modelled by linking together a collection of record types using the field name “Inherit” (which can never occur as an identifier in a SPARK program). The types in the example in Section A.1 are modelled thus:
package Points
   type T is record
      X, Y : Integer;
   end record;

package Lines
   type T is record
      Inherit : Points.T;
      Angle   : TenthsOfDegrees;
      Length  : Natural;
   end record;

package NewPoints
   type T is record
      Inherit : Points.T;
   end record;

package Circles
   type T is record
      Inherit : NewPoints.T;
      Radius  : Natural;
   end record;
This explicit inherit structure appears in VCs and Path Functions, including those cases where an implicit view conversion is required in a call to an inherited root operation. For example, procedure InitLine in Section A.1 contains the procedure call statement:
InitPoint (O, X, Y);
There is no explicitly declared procedure InitPoint in package Lines; however, there is one in package Points. The actual parameter O is of type Lines.T, which is an extension of type Points.T, so the call resolves to the inherited operation Points.InitPoint. We must supply the view of the actual parameter that Points.InitPoint requires; this is achieved by modelling the actual parameter as O.Inherit rather than O. O.Inherit is of type Points.T and provides the required view.
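Concretely, the call is modelled as if it had been written with an explicit selection of the parent view of the actual parameter (a sketch of the modelled form only — Inherit is not a field that can be written in SPARK source):

```ada
-- The call in InitLine:
InitPoint (O, X, Y);
-- is modelled as if the actual parameter were the parent view:
InitPoint (O.Inherit, X, Y);   -- O.Inherit is of type Points.T
```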
The inherit fields also appear in aggregates. An aggregate of type Lines.T:
T'(X, Y, Angle, Length);
generates the following model:
mk__t(inherit := mk__points__t(x := x, y := y),
      angle := angle, length := length)
and for the extension aggregate:
T'(Apoint with Angle, Length);
the following is generated:
mk__t(inherit := apoint,
      angle := angle, length := length)
Ancestor type conversions are also modelled by inserting the required number of inherit de-references.
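For example, following the type models above, converting an object of type Circles.T to its ancestor Points.T requires two such de-references (a sketch of the modelled form only; the inherit fields cannot be written in SPARK source):

```ada
-- Given C : Circles.T, the ancestor type conversion:
--    Points.T (C)
-- is modelled as:
--    c.inherit.inherit
-- (Circles.T to NewPoints.T via the first inherit de-reference,
--  NewPoints.T to Points.T via the second)
```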
Document Control and References
Praxis Critical Systems Limited, 20 Manvers Street, Bath BA1 1PX, UK.
Copyright Praxis Critical Systems Limited 2002. All rights reserved.
Issue 0.1 (23rd April 2002): First draft
Issue 0.2 (25th April 2002): After initial comments by BJD
Issue 0.3 (24th May 2002): Adds information on type assertions and configuration file
Issue 0.4 (5th June 2002): Adds sections on Simplifier, Checker, POGS and User Manual updates. Also updated examiner improvements section to Examiner release 6.0d05.
Issue 1.0 (2nd July 2002): Definitive issue following review S.P0468.79.77.
None.
CVSROOT/userdocs/Examiner_RN_6p1.doc
[1] Note: The SPARK programming language is not sponsored by or affiliated with SPARC International Inc. and is not based on SPARC™ architecture.