1. Introduction
Thread algebra [1] is a domain-specific process algebra, which is tailor-made for providing semantics for imperative programs in the form of instruction sequences written in the notation of the program algebra PGA of [2]. For introductions to program algebra, instruction sequences and thread algebra we refer to [3], and for more comprehensive information, to [4]. For more information on thread algebra we mention [5,6]. A comprehensive example of the use of notations from process algebra and thread algebra can be found in [7]. Below, the phrase “instruction sequence” will refer to an instruction sequence in one of the notations provided by program algebra, and “thread” will refer to an element of the domain of a thread algebra.
Thread algebra offers constants S for a properly terminating thread (stop) and D for an improperly terminating thread (deadlock/divergence). With threads P and Q, and a basic action a, the postconditional composition P ⊴ a ⊵ Q is also a thread. A non-terminating (run of a) thread performs an infinite number of subsequent method calls. Non-termination differs from divergence in that divergence may involve an infinite number of subsequent internal steps, while non-termination involves an infinity of externally visible steps, viz. the subsequent method calls.
The idea is that a basic action f.m constitutes a method call, where a component accessible under the name f (focus) is asked to process a method call m. Processing a method will change the state of the component which performs that task, and moreover, it will produce a result in the form of a Boolean value. If true is returned, the thread proceeds as P, and if false is returned, the thread proceeds as Q. Components able to process method calls are called services. A focus f and a service H made accessible under focus f are combined as a service f.H in focus f. A thread and a service in focus can cooperate in different ways: use (denoted /), apply (denoted •), and reply (denoted !). In the case of use, the service is used for computing a thread; in the case of apply, the thread transforms the service, which will contain inputs before a run of the thread is started and outputs upon termination of the run of said thread; and in the case of reply, a mere Boolean value is computed. Use and apply represent extremes of the different ways a sequential process may deal with data: data may support the computation of a more complicated process, or the sequential process is used for computing a function which is applied to the data. Below, we will only consider the case of apply.
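As an illustration of the constants S and D and of postconditional composition, the following sketch (ours, not from the paper; the encoding and the helper `run` are hypothetical) models threads as Python values and runs them against stateless services:

```python
from dataclasses import dataclass

# Threads: S (stop), D (deadlock/divergence), and postconditional
# composition P <| f.m |> Q, encoded as Post(focus, method, then, else_).
@dataclass
class S: pass            # proper termination
@dataclass
class D: pass            # improper termination
@dataclass
class Post:
    focus: str
    method: str
    then: object         # continue here on reply True
    else_: object        # continue here on reply False

def run(thread, services, max_steps=100):
    """Run a thread against a dict of named services, where each service
    is a callable from a method name to a Boolean reply.  Returns the
    list of basic actions performed, or None on deadlock (or when the
    step bound is exhausted)."""
    trace = []
    for _ in range(max_steps):
        if isinstance(thread, S):
            return trace
        if isinstance(thread, D):
            return None
        reply = services[thread.focus](thread.method)
        trace.append(f"{thread.focus}.{thread.method}")
        thread = thread.then if reply else thread.else_

# A stateless service replying True to every call:
always_true = lambda m: True
P = Post("f", "m1", Post("f", "m2", S(), D()), D())
print(run(P, {"f": always_true}))   # ['f.m1', 'f.m2']
```

Here a service is simply a callable from method names to Boolean replies; stateful services, in the sense of Section 2.2, would carry a state updated by an effect function.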
A representation of thread algebra terms in the process algebra ACP (as defined in [8]) is given in [4,9]. The idea is that a thread P is represented by a process ♮(P) and a service H is represented by a process ♮(H). Here, ♮ is the thread/service to process projection, which may alternatively be written as a postfix operator. In particular, using ACP notation for actions for sending and reading along a port, and communication thereof (as used in [10] and later): a configuration of a thread P with a service f.H is represented by a parallel composition ∂_F(♮(P) ∥ ♮(f.H)) of a representation ♮(P) of P and a representation ♮(f.H) of f.H. Here, ∂_F is the encapsulation operator which guarantees that corresponding send and receive actions, together collected in the set F, will synchronize. A service H in state s then takes the following form:

H(s) = Σ_{m ∈ I(H)} r(m) · (F(m, s) :→ s(true) · H(E(m, s, true)) + ¬F(m, s) :→ s(false) · H(E(m, s, false)))
Here, F(m, s) is a Boolean condition which determines the reply value of the service on the call of method m: if F(m, s) = true, then the reply value is true, and if F(m, s) = false, then the reply value is false. The function E determines the state of the service after a reply has been returned. The follow-up state E(m, s, b) depends on the original state s, the method m involved and the reply value b that has been returned.
For an ordinary service, F, the so-called reply condition, and E, the so-called effect function, are functions. In the case of an infinite state space, it is plausible that both functions are computable. Throughout the paper, we will assume that E is a function, while F may depend on other attributes of a system in which the service operates. If F is a function, i.e., it depends on m and s only, the service is called ordinary.
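As a small concrete instance (our own hypothetical example, not from the paper): an ordinary service over the natural numbers as state space, with a reply condition F and an effect function E depending on the method and the state only:

```python
# A hypothetical ordinary counter service: state space = natural numbers,
# methods "inc" (increment) and "iszero" (test for zero).
def F(m, s):             # reply condition: Boolean reply per method call
    return s == 0 if m == "iszero" else True

def E(m, s, reply):      # effect function: follow-up state
    return s + 1 if m == "inc" else s

s = 0
for m in ["iszero", "inc", "inc", "iszero"]:
    b = F(m, s)          # reply computed from method and state only
    s = E(m, s, b)       # state updated after the reply is returned
print(s)   # 2
```

Since F depends on m and s only, this service is ordinary in the sense above; a non-ordinary reply condition would consult further attributes of the surrounding system.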
The following equivalent form is more suggestive of the fact that for each method call, the corresponding reply condition must be evaluated just once. Here, x ◁ b ▷ y is the “if b then x else y” function, with the condition written as the argument in the middle:

H(s) = Σ_{m ∈ I(H)} r(m) · (s(true) · H(E(m, s, true)) ◁ F(m, s) ▷ s(false) · H(E(m, s, false)))

When located under focus f, the send and receive actions are renamed, the focus f being used as a port name for the process algebra notation, and we obtain

f.H(s) = Σ_{m ∈ I(H)} r_f(m) · (s_f(true) · f.H(E(m, s, true)) ◁ F(m, s) ▷ s_f(false) · f.H(E(m, s, false)))
A trivial manner in which the reply condition can be non-functional is to have it depend on the focus by which the service is made accessible (using multiline notation for better readability, and subscripts for the reply condition and effect function, so that unambiguous reference to these from outside the architerm can be made).
Multiple services f_1.H_1, …, f_n.H_n, accessible under pairwise different foci f_1, …, f_n, are combined into a single service family using the service composition operator ⊕: f_1.H_1 ⊕ ⋯ ⊕ f_n.H_n. For the details of service composition, we refer to [4]. In terms of the process algebra, one finds ♮(f_1.H_1 ⊕ ⋯ ⊕ f_n.H_n) = ♮(f_1.H_1) ∥ ⋯ ∥ ♮(f_n.H_n), at least in case the foci f_i are pairwise different. Thread algebra allows different representations in terms of process algebra. For instance, for some applications, an interpretation into a discrete time process algebra (see, for example, [11]) is preferable. Below, the thread algebra notation is used unless the additional detail of how a basic action is realised via a pair of atomic actions matters.
A thread P may be defined in different ways. First of all, threads can be defined, like processes in process algebra, by means of recursion equations or systems of recursion equations; for instance, P = f.m ∘ P. Otherwise, comparable with the use of a transition system as a process definition in process algebra, a thread, say P, can be defined by means of thread extraction from an instruction sequence, say X: P = |X|. An expression of the form ∂_F(P • 𝒮), with 𝒮 a service family, is referred to as an architerm; it describes a so-called execution architecture for P, in the terminology of [12].
Process algebras contain behaviours as elements and provide composition operators for these. Many different process algebras have been designed, and process algebras can be tailor-made for certain application areas. Conventionally, process algebras have been studied in the context of parallel processes, so that parallel composition is the primary composition operator. In thread algebra, behaviours represent sequential systems, including deterministic systems that result from scheduled concurrent compositions of sequential systems (see [1,5]).
Instruction sequences, as well as threads, allow, as much as possible, the separation of data and control. The presence of no datatype, not even that of the Booleans, is taken for granted in the setting of program algebra. Only Boolean feedback from operations on data (i.e., method calls), which are located in services, is used to influence the progression of control.
1.1. Prospecting, Foresight and Lookahead Conditions
We will discuss prospecting in the context of thread algebra. Prospecting allows an agent participating in a system to survey (aspects of) the potential future behaviour of one or more other agents in the system. We model prospecting as a capability of a service. We use the term prospecting rather than the perhaps more appealing term lookahead in order to diminish the connotation of statistically sound inference from available data. Dealing with prospecting in a process algebra setting seems to be harder, but may follow suit, just as, for instance, strategic interleaving was first defined in the context of thread algebra in [1] and only thereafter in a setting of process algebra (see, for example, [13]).
Foresight constitutes the apparently successful ability of an agent in a system to take future options of behaviour of the entire system into account. We use the term foresight rather than forecasting, also in order to weaken the connotation of inference based on scientifically rigorous methods. Indeed, prospecting, as meant below, may be achieved by way of magic, and forecasting may be based on unexplained or non-rigorous methods. Foresight is a phenomenon which concerns a system as a whole, because it combines the intention of an agent in the system to take expectations or knowledge about the entire system (including the agent itself) into account with some assessment of the success of the agent in that respect.
Prospecting may allow successful foresight. One may think of an agent B as a robot, the software of which is known to another agent A in the form of a compiled source program. Somehow A may be able to make useful predictions about the behaviour of B on the basis of that information. We will not look into this particular scenario; it is mentioned only in order to indicate that it is not inconceivable that an agent A uses intelligence to obtain meaningful information regarding the future behaviour of another component B.
An example of prospecting and foresight is as follows. With I(H), we will denote the method interface of H; a detailed description of interfaces is given in Section 2.4. Let I(G) = {m′} and I(H) = {m}, and let P = (g.a ∘ S ⊴ g.m′ ⊵ D) ⊴ h.m ⊵ (D ⊴ g.m′ ⊵ g.a ∘ S). Now consider the following architerm:

∂_F(P • (g.G ⊕ h.H)).

The task is to design a service H with method interface I(H) = {m} which works in such a manner that in all circumstances, P will perform the basic action g.a. Suppose that H is an ordinary service, say with state space S_H and initial state s_0, and with reply condition F_H and effect function E_H. Now suppose F_H(m, s_0) = true; then on the first call of a method (in this case h.m) a reply true is received, so that the next state of the computation will be

∂_F((g.a ∘ S ⊴ g.m′ ⊵ D) • (g.G ⊕ h.H(E_H(m, s_0, true)))).

Now assume that service G produces reply false on the first call of method m′. Then (writing t_0 for the initial state of G) the subsequent state is

∂_F(D • (g.G(E_G(m′, t_0, false)) ⊕ h.H(E_H(m, s_0, true)))).

Clearly, P will not perform basic action g.a. A similar argument can be given if it is assumed that F_H(m, s_0) = false. It follows that an ordinary service H cannot have the required property that always a call g.a will take place.
Next, with a leap of imagination, one may allow the reply condition of H to include the configuration as an additional argument and to allow H to experiment with the (hypothetical) future behaviour of G. Now consider the following reply condition for H: reply true on the call of m if and only if G will reply true on the upcoming call of m′ (i.e., the reply of H equals the reply that G will give). This new condition is permitted to acquire prospective information about another service in the architerm. Let H◇ denote the service obtained from H by replacing the reply condition F_H by this condition; then the execution architecture ∂_F(P • (g.G ⊕ h.H◇)) has the required property.
We will say that the new reply condition is a lookahead condition which allows h.H◇ to perform prospecting, a state of affairs which then explains the fact that in configurations of the form ∂_F(P • (g.G ⊕ h.H◇)), a phenomenon of foresight can be observed. In this paper, we will demonstrate some examples of lookahead conditions that may serve as non-ordinary reply conditions. The design of a most general class of such conditions is non-trivial, so it seems, and the following problem is left open.
Problem 1. Is there a plausible definition of which conditions may occur as (non-ordinary) reply conditions, so that prospection can be “exploited” to its limits?
This question applies in particular to configurations where more than one service may use lookahead conditions.
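The impossibility argument of the example above can be replayed in a small simulation. The encoding below is our own hypothetical reconstruction (the thread shape, the foci g and h, and the methods m and m′, written mp, are assumptions, not taken from the paper):

```python
# The thread must perform g.a in all circumstances, but g.G's reply to
# m' decides which branch of the thread reaches g.a.  A thread is a
# nested tuple (focus, method, then, else_); "S" terminates, "D" deadlocks.
P = ("h", "m",
     ("g", "mp", ("g", "a", "S", "S"), "D"),    # h.m replied True
     ("g", "mp", "D", ("g", "a", "S", "S")))    # h.m replied False

def run(thread, reply):
    """reply(focus, method) -> bool; returns the list of calls performed."""
    trace = []
    while thread not in ("S", "D"):
        f, m, then, else_ = thread
        trace.append(f"{f}.{m}")
        thread = then if reply(f, m) else else_
    return trace

def ordinary_h(h_reply, g_reply):
    # H replies a fixed value, independent of G: an ordinary service.
    return run(P, lambda f, m: h_reply if f == "h" else g_reply)

# Whatever fixed reply an ordinary H gives, some G makes g.a unreachable:
assert "g.a" not in ordinary_h(True, False)
assert "g.a" not in ordinary_h(False, True)

# A prospecting H replies exactly as G will reply to m':
def prospecting(g_reply):
    return run(P, lambda f, m: g_reply)   # H's reply mirrors G's
assert "g.a" in prospecting(True)
assert "g.a" in prospecting(False)
print("g.a is guaranteed only with lookahead")
```

The simulation confirms the dichotomy: with a fixed (ordinary) reply, one of G's two possible behaviours avoids g.a, while mirroring G's upcoming reply guarantees it in both cases.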
1.2. Configurations and Intermediate Configurations
Configurations capture both initial stages and intermediate stages for computations in an obvious manner: step after step, the basic actions of the thread are performed and the states of various services are updated. Final states of such computations are mere service families from which the thread has become trivial: either S or D.
The term configuration refers to a thread/service configuration. A typical (generic) configuration has the form ∂_F(P • (f_1.H_1 ⊕ ⋯ ⊕ f_n.H_n)). In the setting of thread algebra, configurations compute by performing steps which correspond to (i) a method call being made, (ii) a reply being computed, and (iii) the (side) effect on the state of the service which is processing the call being computed. Upon looking at the same run from the perspective of process algebra, additional precision can be obtained, and intermediate states and configurations appear. We will need some notation for intermediate configurations and components thereof.
For a thread P, P′ will denote that same thread, though with its first atomic action (when present) missing. This idea can be made formal via the translation to processes: ♮(P′) is the unique process p such that ♮(P) = a · p for some atomic action a, if such a decomposition exists, and δ (the inactive process in ACP) otherwise. In particular, one finds: ♮((P ⊴ f.m ⊵ Q)′) = r_f(true) · ♮(P) + r_f(false) · ♮(Q).
Moreover, for two services H_1 and H_2 with the same method interface I, let f.⟨H_1, H_2⟩ denote a service, present under focus f, in the state just after having received a method call: it is going to reply true, then moving on as H_1, if the reply condition turns out to be true, and going to reply false, while continuing as the service H_2, otherwise. Upon reading threads and services as processes with ♮, one finds

♮(f.⟨H_1, H_2⟩) = s_f(true) · ♮(f.H_1) ◁ F ▷ s_f(false) · ♮(f.H_2),

with F the value of the reply condition for the method call just received.
Now consider the configuration C ≡ ∂_F((P ⊴ f.m ⊵ Q) • (f.H(s) ⊕ 𝒮)). As a process, C takes smaller steps (one atomic action at a time) than as a thread/service configuration (one basic action at a time). After performing one atomic action (sending m to f.H(s) via port f), the following configuration C′ is obtained:

C′ ≡ ∂_F((P ⊴ f.m ⊵ Q)′ • (f.⟨H(E(m, s, true)), H(E(m, s, false))⟩ ⊕ 𝒮)).
The advantage of considering C′ is that it provides a precise picture of the situation in which a reply condition determines which reply is to be returned after a method call has been received and the corresponding reply has been computed by the service at hand, and of how the service at hand will proceed. This picture allows an informal answer to Problem 1.
Definition 1 (Informal definition of lookahead conditions)
. In the configuration C′, a lookahead condition may depend on
(i) all focus and method names which play a role in C′,
(ii) the threads P and Q, and
(iii) the services in 𝒮.
What matters is that dependence on the services H(E(m, s, true)) and H(E(m, s, false)) is not permitted. The reason to be so cautious about these two services lies in the observation that it is more likely than not that the same lookahead condition occurs in these services as well. That may happen, for instance, if both are chosen as continuations of a single service H upon having received method call m in state s, and if after a number of steps, the same state is visited once more during the computation of C′. A complicated self-reference might result, which is now excluded, however.
In this manner, a reasonable upper bound to the notion of a lookahead condition in the context of C′ is obtained. A meaningful restriction, however, is that a lookahead condition is built up by logical connectives and quantifiers over primitive assertions, which can be understood as expressing outcomes of hypothetical experiments with configurations of the form

∂_F((P ⊴ f.m ⊵ Q)′ • (f.⟨K_1, K_2⟩ ⊕ 𝒮))

for various ordinary services K_1 and K_2 with the same method interface as is required for the two service continuations in the configuration C′.
Below, we will make use of the following notation: ◇(f.m)(C) expresses that after zero or more intermediate steps (counting atomic actions as steps), the computation starting with the architerm C will perform a call f.m (or, more precisely, will perform an action which constitutes the synchronisation of a send action and a receive action with content m along port f, in ACP style process algebra terminology). Taking for C a configuration of the above form, the condition ◇(f.m)(C) is an example of an assertion which can be understood as expressing information concerning outcomes of experimentation with runs of C and, for that reason, as a lookahead condition.
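A bounded approximation of the ◇ notation can be sketched as follows (our own encoding, not from the paper; deterministic threads with stateless replies are assumed, so a single simulated run suffices):

```python
# A bounded check of the lookahead condition diamond(f.m): simulate the
# run of a deterministic configuration and test whether the call f.m
# ever occurs.  Threads are nested tuples (focus, method, then, else_);
# "S" and "D" end a run.
def diamond(call, thread, reply, bound=1000):
    for _ in range(bound):
        if thread in ("S", "D"):
            return False                 # run ended without the call
        f, m, then, else_ = thread
        if f"{f}.{m}" == call:
            return True                  # the call f.m is performed
        thread = then if reply(f, m) else else_
    return False                         # bound exhausted: not observed

T = ("f", "m1", ("f", "m2", "S", "S"), "D")
print(diamond("f.m2", T, lambda f, m: True))    # True
print(diamond("f.m2", T, lambda f, m: False))   # False
```

Note that the bound makes this a safe under-approximation of ◇: for non-terminating runs the simulation cannot decide the condition in general, which foreshadows the halting-problem flavour of the questions discussed below.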
1.3. Thread Algebra, an ACP Representable Domain Specific Process Algebra
An action in the context of program algebra and thread algebra is called a basic action. Seen from the viewpoint of process algebras, basic actions are not atomic actions. Instead, a basic action can be explained in terms of synchronous communication of atomic actions. As was outlined above, thread algebra, services, and service composition can be interpreted in process algebra, and in fact, different such interpretations exist. We prefer to understand this kind of interpretation as a representation. The process algebra provides a well-known semantic standard based on bisimulation semantics, while the thread algebra is to some extent a matter of ad hoc design. By thinking in terms of representations of thread algebra in process algebra, the conceptual primacy of process algebra is emphasised. Upon adopting a specific representation, e.g., as indicated above, the use of thread algebra instead of the underlying process algebra becomes merely a matter of convenience. Thread algebra does not come with the ambition to acquire any new semantic insights or principles for concepts, such as sequential composition, concurrent composition, atomicity of actions, abstraction from internal (silent) steps, and the proper definition and use of infinite state systems. Thread algebra is a domain-specific process algebra that features two kinds/classes of domain-specific processes: threads and service families. Thread algebra offers several domain-specific combinators on these domain-specific classes of processes (apply, use, and reply), each of which is an instance of reactive concurrent composition when understood from the perspective of process algebra. Thread algebra is ACP representable in the sense that there is a preferred representation (♮) of thread algebra into ACP-style process algebra.
Thread algebra is one of several domain-specific process algebras that have been developed on the basis of representation in ACP. Process algebra based on asynchronous communication constitutes a domain-specific process algebra which may be further specialised for use with dataflow networks. Alternatively, systolic arrays may be modelled with a highly concurrent version of ACP. The notation μCRL [14] and its successor mCRL2 [15] may be understood as domain-specific process algebras which exploit data as parameters of processes, rather than having data as separate semantic entities wrapped in a dedicated process, or as parameters of an additional operator (viz. the state operator of [16]).
For threads and services, the notion of an interface is vital. Interface calculus is a “sister” of the alpha–beta calculus for ACP in [17]. As it turns out, due to the built-in symmetries in ACP, the idea of an interface boils down to a set of actions, which is relatively simple compared with the notion of an interface as it is needed for the setup of thread algebra.
Thread algebra involves threads and services, representing two types of agents with threads having a focus on control, and services a focus on data. Thread algebra is more specific for sequential, deterministic, and reactive systems and, therefore, less general than process algebra, which is meant to capture arbitrary concurrent systems.
Different forms of process algebra come into play for different themes. In this paper, we will look at prospecting, a phenomenon which one may at least hypothetically ascribe to the behaviour of an agent, and foresight, a capability which, again hypothetically, may be ascribed to the participation of an agent in a system.
For software engineering, the idea of prospecting is rather uncommon, though very common in processor pipelines. Prospecting is usually expected of rational agents. Prospecting and foresight play a role already in sequential systems, and for that reason, the investigation of these topics may well start with thread algebra rather than in process algebra.
Process algebra is a tool for analysing the options for mutual communication and understanding in a multi-agent network. Such networks are thematic for AI. Thread algebra, and an underlying program algebra, provide specialised algebraic approaches for dealing with sequential subsystems of such networks.
Prospecting services were studied in [12] (then named forecasting services) and in [4], where it was shown that such services cannot correctly forecast halting when giving replies in a two-valued logic only. In [4], different forms of the halting problem are formalised in the context of thread algebra. Apparently, thread algebra is better suited for analysing such applications, where control and data are to be treated on an equal footing, than process algebras, where control is the dominant feature and data are an “add on” feature to some extent.
1.4. Process Algebra, Thread Algebra, and AI
Process algebra can help to analyse the boundaries of what can be computed. What Turing machines do for computability, allowing techniques for analysing what can and what cannot be done, can be repeated and adapted in a plurality of models of concurrency when it comes to analysing the notion of algorithmic impossibility for interacting agents. Process algebra, as an approach with a plurality of instances, each of which serves as a model for concurrency, allows some uniformity for the analysis of such models. Analysing what cannot be done in qualitative terms is of relevance for AI as much as (or even more than) it has been for computer science.
When it comes to proving the impossibility of computational phenomena, arguments often depend on diagonalisation. At face value, the liar paradox and the Russell paradox on the one hand, and the undecidability of the halting problem and the incompleteness of the axioms of Peano arithmetic (following Gödel) on the other, are similar phenomena. However, there is a difference: the liar paradox and the Russell paradox show that certain combinations of features are clearly inconsistent. These are paradoxes. The unsolvability of the halting problem and the incompleteness of Peano arithmetic show that certain problems are just too complex to be solved by certain limited means, and these express unfeasibility rather than a paradox. Admittedly, this discrepancy is gradual; provability logics in arithmetic bridge this kind of gap, and in [4], it was analysed how Turing incompleteness can take the form of a paradox-like manifest incompatibility when working in a setting of program algebra and thread algebra.
The famous observation by Cohen dating back to 1984 [18] (see also [19,20]), that an agent cannot decide whether or not its behaviour will be harmful, brings to the surface a disparity of the first kind: a certain combination of features, including a form of forecasting, is paradoxical, i.e., leads to a manifest contradiction, while at closer inspection, a somewhat different casting of the same theme brings it closer to the halting problem and Gödel incompleteness. What is tried cannot be achieved in full generality because it is too difficult, not because it is a paradoxical (self-contradictory) idea as such.
In this article, we elaborate on these matters with the following general question in mind.
Problem 2. Where are the boundaries and what are the options concerning self-reflection about the future of a computation, the self-reflection being understood as a part of the same computation?
In light of the objectives of this special issue, it matters to contemplate how and why this particular problem might matter for AI.
Contemplating the Cohen impossibility result, several follow-up questions arise. To begin with, one may ask how the software engineering life-cycle for certain safety-critical components would change (or would have changed) if the result had been the opposite of what it is. How would the tests (which, after all, do not exist) be used in practice?
The software engineering life-cycle is a computational model where computed phases and human actions take place in an interleaved manner. However, as a result of progress in AI, increasingly many of the human steps are becoming incorporated in the computed phases. The question arises whether the availability of the program/machine that Cohen showed not to exist would, hypothetically, show up in systems design for automated programming. Apparently, an attempt to incorporate a component which is blessed with successful foresight about whether or not another component is viral (assuming a particular definition of that) into a larger system induces a change of the requirements on said component. As a consequence of that change, instead of being paradoxical, it becomes merely unfeasible (as in the case of the halting problem) to determine virality.
Unfeasible problems may have useful automated approximations, however, which then could be used for imperfect but still practical automated improvements of an ordinary software engineering life-cycle. By working with approximations of software components that provably do not exist, hypothetical foresight may be transformed into plausible forecasting.
The Cohen impossibility may be considered, like the halting problem, to constitute merely a conceptual matter. However, one may consider the notion of a program fault. The formidable literature on program faults provides some principled approaches, e.g., in [21,22], where the idea is that a fault must have two properties: it can be the cause of a failure during a run, and it allows a (preferably provable, otherwise at least evidence-based) improvement such that said cause is removed. For a survey of approaches to the notion of a program fault, we refer to [23]. Faults may be understood as options for forecasting certain failures, and in a fully automated software engineering life-cycle, one can imagine tools for dealing with, i.e., profiting from, such forms of foresight.
Another link between AI and forecasting arises when contemplating robot morality. Without assuming any form of lookahead, it is hardly possible to imagine any moral judgement regarding the control software for a military robot. For some reflection on this matter, we refer to [24].
Another area where lookahead plays a role is garbage collection, where, given a pool of linked objects which is used for the processing of a single instruction sequence only, the notion of garbage may be made dependent on the future behaviour of said instruction sequence. This idea gives rise to the notion of shedding (see [25]), which involves run-time foresight by a thread about its own future behaviour. As in the case of virus detection, care must be taken to avoid self-contradictory requirements, and indeed AI methods for automated self-reflection might become useful for garbage collection.
1.5. Survey of the Paper
Thread algebra is introduced as a domain-specific process algebra. In particular, thread algebra is ACP representable. The use of process algebra facilitates the introduction of so-called intermediate states, which proves very helpful for introducing the informal notion of a lookahead condition (in Definition 1). Two more informal notions, prospecting service and foresight pattern, are introduced. The relevance of non-paradoxical foresight patterns is motivated.
A fairly complete account is presented of interfaces in the context of thread algebra. That is done both in intuitive and semantic terms and by way of an axiomatisation which is informative, though lacking known formal properties.
The new results are as follows: Proposition 3 demonstrates a case where foresight cannot be realised, while this is not entirely obvious. Proposition 5 introduces a new foresight mechanism; with it, the Cohen impossibility result is shown in a context where the sought forecasting pattern is not paradoxical. Theorem 2 indicates that, for the same requirements, paradoxical circumstances arise in the presence of a second service with prospecting capability. The basic action foresight pattern is introduced, and in Proposition 4, a prospecting service is obtained which can implement this pattern.
2. Threads, Services, and Architerms
At the risk of repeating some elements that were covered in the introduction, we will proceed with some additional remarks on threads, services and service families. We will use without further reference the elements of program algebra, thread algebra, and instruction sequence theory which are surveyed in [3,26], and which have been set out in detail in the papers referenced therein. In these papers, the focus method notation f.m provides a basic instruction, where the focus f represents the name of (a link to) a service and m names a method to be applied to (as seen from an instruction sequence) and processed by (as seen from the service) the service, under the assumption that (i) the service will send a Boolean reply back to the instruction sequence which made the method call f.m, in such a manner that the continuation of the computation as prescribed by the instruction sequence may (but need not) depend on the reply value, (ii) as a consequence of processing the method m, the service may update its state, and (iii) in between the processing of subsequent method calls, say m and m′, the service may interact with external components, working as an independent process, concurrently with other services. The architerm, also termed configuration, which will mostly be used below, is thus:

∂_F(P • (f_1.H_1 ⊕ ⋯ ⊕ f_n.H_n))

with P a thread and f_1.H_1 ⊕ ⋯ ⊕ f_n.H_n a service family which offers access to service H_i under focus f_i for i = 1, …, n, and with F ⊆ {f_1, …, f_n}. More often than not, the encapsulation operator will be omitted from an architerm.
The service combination operator ⊕ is assumed to be commutative and associative, so that the ordering of the services in an expression for an execution architecture does not matter. ∂_F performs restriction to the services with a focus outside F. The representation of restriction in ACP-style process algebra satisfies

♮(∂_F(P • (f_1.H_1 ⊕ ⋯ ⊕ f_n.H_n))) = ∂_{H(F)}(♮(P) ∥ ♮(f_1.H_1) ∥ ⋯ ∥ ♮(f_n.H_n))

with H(F) containing the communication actions along the ports f ∈ F, that is

H(F) = {s_f(d), r_f(d) | f ∈ F, d a method name or a reply value}.

Here, s_f(d) and r_f(d) are the ACP actions for sending and receiving d along port f, with γ(s_f(d), r_f(d)) = c_f(d), where γ is the communication function of ACP. Services with a focus in F may be used as containers for inputs, or as containers for auxiliary data only of use during a computation. All outputs are located as states in services with a focus outside F. For this particular notion of an execution architecture for instruction sequences and for threads, we refer to [12] and to the further development thereof in [4]. Below, we will use architerm as an abbreviation of execution architecture.
The equality of architerms is denoted with ≡. Architerm equality is a somewhat informal notion because it depends on the context as to how much abstraction comes with ≡. Writing C ≡ ∂_F(P • (f_1.H_1 ⊕ ⋯ ⊕ f_n.H_n)), it is confirmed that C denotes an architerm involving restriction (removal of auxiliary services from the result by ∂_F) and a thread P, which is applied to (processing determined by •) services which are accessible via foci f_1, …, f_n, respectively. No further structural information is given (via ≡) regarding P and the respective services. We will always assume that the f_i are pairwise different. With C ≡ ∂_F(P • (𝒮_F ⊕ 𝒮)), the service family 𝒮_F contains the services with either a pure input status or an auxiliary status (or a combination of both) for the architerm C. With C ≡ ∂_F(|X| • (𝒮_F ⊕ 𝒮)), it is required that a thread is applied which is obtained via thread extraction from an instruction sequence X beginning with its first instruction and ending with !. The idea is then that said instruction sequence (X) constitutes a part of the architerm. By substitution for variables, the architerms can be refined.
2.1. Reply Function Services
Each service H comes with (i) an input alphabet I(H) of methods (its method interface) and (ii) a reply function F from non-empty sequences over I(H) to Boolean values. When a method call m takes place, the reply is F(m), and the new reply function is the derived function F after m, given by σ ↦ F(m ⌢ σ). A reply-only service has implicit states: the reply function serves as its state.
We write RF(I, F) for the service with method interface I and reply function F. When available under focus f, the (single-service) service family f.RF(I, F) has provided interface f.I. f.RF(I, F) will process a method call f.m by asking RF(I, F) to process the call m.
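A reply function service can be sketched as follows (the wrapper `reply_service` and the example reply function are our own, hypothetical names, not the paper's notation):

```python
# A reply-function service: the reply to a call depends on the whole
# history of method calls processed so far; the evolving history plays
# the role of the derived reply function (the service's implicit state).
def reply_service(F):
    """Wrap a reply function F on non-empty method sequences as a
    stateful callable; processing m appends m to the history and
    applies F to it."""
    history = []
    def process(m):
        history.append(m)
        return F(tuple(history))
    return process

# Hypothetical example: reply True iff an even number of "toggle"
# methods has been processed so far.
even_toggles = lambda seq: seq.count("toggle") % 2 == 0
H = reply_service(even_toggles)
print(H("toggle"), H("toggle"), H("probe"))   # False True True
```

Taking the derivative of F after m corresponds here to the history growing by one method; no explicit state space is ever mentioned.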
2.2. Stateful Services
A service may also have an explicit state space. Then, a service H comes with a state space S containing an initial state s_0, a reply condition function F : I(H) × S → {true, false}, and an effect function E : I(H) × S × {true, false} → S. Upon a method call m, and when in state s, the condition F(m, s) is evaluated. Let b be the result of evaluating F(m, s); then b is returned to the calling thread as a reply, and the next state is E(m, s, b).
Such services are called stateful. We write SS(I, S, s_0, F, E) (for stateful service) for the service with method interface I, state space S, initial state s_0, reply condition function F, and effect function E.
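A minimal rendering of such a stateful service (our own sketch; the class name and the one-bit register example are assumptions, not from the paper):

```python
# A stateful service in the sense above: interface I, state space implied
# by the state values, initial state s0, reply condition F, effect E.
class StatefulService:
    def __init__(self, interface, s0, F, E):
        self.interface, self.state, self.F, self.E = interface, s0, F, E
    def process(self, m):
        assert m in self.interface, f"method {m} not in interface"
        b = self.F(m, self.state)              # evaluate the reply condition
        self.state = self.E(m, self.state, b)  # apply the effect function
        return b                               # reply to the calling thread

# A one-bit register: "set"/"clear" update the bit, "get" reads it.
reg = StatefulService(
    {"set", "clear", "get"}, False,
    F=lambda m, s: s if m == "get" else True,
    E=lambda m, s, b: {"set": True, "clear": False}.get(m, s))
print(reg.process("get"), reg.process("set"), reg.process("get"))
# False True True
```

Because F here depends only on the method and the state, this instance is an ordinary service; a prospecting service would pass further arguments (e.g., a configuration) to F.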
The architectural primitive ∂_F(P • 𝒮) indicates that the thread P is run in the context of the service family 𝒮, assuming encapsulation with ∂_F, and that, in case of termination, a service family is delivered as an output. In case of non-termination, as well as in the case of divergence after a number of method calls have been processed, the result is the service family consisting of the unique service which accepts no methods.
2.3. Threads and Thread Extraction
A thread is either (stop, termination), or or for some method call (basic action ) and some threads and . can be abbreviated to .
Instruction sequences and threads are connected via the thread extraction operator , for instance: , , and , where .
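A small sketch of thread extraction may be helpful here. The instruction encoding below is an assumption made for illustration: `("basic", "f.m")` for a void basic instruction, `("+", "f.m")` and `("-", "f.m")` for positive and negative tests, `("#", k)` for a jump of k positions (with `#0` yielding divergence), and `("!",)` for termination; threads are `"S"` (stop), `"D"` (deadlock), or a `("post", a, P, Q)` tuple standing for the postconditional composition.

```python
S, D = "S", "D"

def extract(instrs, i=0, depth=0):
    """Hedged sketch of thread extraction from a finite instruction list.
    The depth bound is a crude guard against jump cycles (yielding D)."""
    if depth > len(instrs) or i >= len(instrs):
        return D
    ins = instrs[i]
    if ins == ("!",):
        return S
    kind = ins[0]
    if kind == "basic":
        return ("post", ins[1], extract(instrs, i + 1, depth + 1),
                                extract(instrs, i + 1, depth + 1))
    if kind == "+":   # on True continue, on False skip one instruction
        return ("post", ins[1], extract(instrs, i + 1, depth + 1),
                                extract(instrs, i + 2, depth + 1))
    if kind == "-":   # on True skip one instruction, on False continue
        return ("post", ins[1], extract(instrs, i + 2, depth + 1),
                                extract(instrs, i + 1, depth + 1))
    if kind == "#":
        if ins[1] == 0:
            return D  # a jump of 0 positions: divergence
        return extract(instrs, i + ins[1], depth + 1)
    raise ValueError(ins)

t = extract([("+", "f.m"), ("!",), ("basic", "g.n"), ("!",)])
# t == ("post", "f.m", "S", ("post", "g.n", "S", "S"))
```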
For the construction of an apply architerm, it is required that the interfaces of the thread and service family match, i.e., that every method call can be processed by one of the services. Expressing these requirements rests on a notion of interface, which we will discuss in some detail.
2.4. Interfaces: Required Interfaces and Provided Interfaces
Interfaces serve as abstractions of components, made in order to facilitate reasoning about the design and composition of components. Consider the instruction sequence, . The processing of works as follows: (i) apply method to the service accessible with focus ; (ii) if the reply is returned, skip the second method call and terminate; and (iii) if instead is replied, perform the method call and then terminate, irrespective of the reply which has been returned. requires of an execution environment that it provides a service accessible under focus which is able to process a method named and another service accessible under focus which will process a method named . This information can be combined in the interface . The interface is a required interface: it conveys what an environment must provide for to be properly effectuated. Now consider . will terminate at once and make no method calls. It is plausible to say that its required interface is empty. If, however, a jump from outside is allowed and a run may not start with the first instruction (e.g., in ), then it is not plausible to think of as having an empty interface. With , we will denote the required interface of an instruction sequence , which is simply the collection of all basic actions () which occur in (either as a void basic instruction , as a positive test instruction or as a negative test instruction ). Thus, with as above, .
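Since the required interface is just the collection of basic actions occurring in the instruction sequence, it is straightforward to compute. The sketch below assumes the same illustrative instruction encoding as before (`("basic", "f.m")`, `("+", "f.m")`, `("-", "f.m")`, `("#", k)`, `("!",)`); the function name is hypothetical.

```python
def required_interface(instrs):
    """Sketch: the required interface as the set of focus.method pairs
    occurring in void basic instructions and in positive/negative tests
    (jumps and the termination instruction contribute nothing)."""
    return {ins[1] for ins in instrs if ins[0] in ("basic", "+", "-")}

# Example: only the test and the void basic instruction contribute.
X = [("+", "f.m"), ("#", 2), ("basic", "g.n"), ("!",)]
# required_interface(X) == {"f.m", "g.n"}
```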
Given the task to design an instruction sequence, say
, which computes some given function or otherwise performs some specified task, there may be constraints (requirements on) on the required interface of
for instance one may wish that
for some interface
. For the programming task at hand,
is an access-constraining interface. Access-constraining interfaces play a key role in complexity questions about instruction sequences when looking for the shortest, or otherwise best, instruction sequence in some format (say PGLB of [
2]) that achieves a certain given task.
Complementary to the required interface of an instruction sequence, one may consider the method interface of a service, say , e.g., . A method interface of a service is a provided interface. If a service family offers only accessible under focus , then the provided interface of is . If one has the task to design a service family, say , then it may be required that a certain interface is provided by : .
2.5. Some Naming Conventions; Equations for Interfaces
The paper involves a range of different types of entities, and we will follow some informal, and non-strict rules for the naming of variables for these types. We will use for instruction sequences, for threads, for interfaces, for services, for state spaces of (stateful) services with for states in the state space of stateful services, and for service families. Methods often are given names involving or , and foci often have names involving , , or .
Using these conventions, some equations that loosely specify the various types of, and operations on, interfaces are given in
Table 1 and
Table 2. Concerning these tables, some comments are in order.
Interfaces are by default sets of focus method pairs. Interfaces may serve as the required interface for a thread, as the provided interface for a service family, or as the constraining interface for a (forthcoming) thread. is the empty interface. is the empty service family. Its interface is empty as well.
Services have a method interface which consists of method names only. is the empty method interface. is the unique service without any method.
The service family differs from because it involves a focus . This complication is useful because it allows ⊕ to be a total, commutative, and associative operator. Suppose the result of a call to a service family is hard to define. Non-determinism is to be avoided, and ⊕ is supposedly commutative. A simple way out is to avoid service families with multiple services under the same focus. Adopting fails, as it renders ⊕ non-associative. However, adopting , blocks ambiguous method calls and allows ⊕ to be commutative and associative, though not idempotent (i.e., fails on any service family with ).
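The resolution of focus clashes by a blocking service can be sketched as follows, modelling a service family as a dict from foci to services; the names `BLOCKED` and `compose` are illustrative assumptions.

```python
BLOCKED = object()  # stands in for the unique service accepting no methods

def compose(F, G):
    """Sketch of service-family composition ⊕: a focus clash is resolved by
    placing the blocking service under that focus, which keeps ⊕ total,
    commutative and associative, at the price of idempotence."""
    H = dict(F)
    for focus, svc in G.items():
        H[focus] = BLOCKED if focus in F else svc
    return H

fam1, fam2 = {"f": "svc1"}, {"f": "svc2"}
clash = compose(fam1, fam2)   # {"f": BLOCKED}
```

Since any clash collapses to `BLOCKED` regardless of grouping, associativity holds even for three-way clashes, while `compose(F, F)` differs from `F` whenever `F` is non-empty, so idempotence fails.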
The notion of an interface occurs in quite different ways in the literature on software technology. Three clusters of use of the term “interface” may be distinguished as follows:
- Operational interfaces: collections of primitives with definite meaning.
  - API: an application programmer interface is a toolkit in a software engineering environment;
  - Processor instruction sets;
  - Collections of message passing primitives (e.g., MPI).
- Requirements of specification-oriented interfaces (interface elements come with limited information regarding semantics).
  - Modal interfaces (see, for example, [27]);
  - Interface automata (see, for example, [28]);
  - Algebraic specifications as interface of data types.
- Syntactic interfaces (comprising declarations of syntactic elements; interface elements come without a predefined semantics, but may have suggestive names or symbols).
  - Similarity types in universal algebra (interfaces for first order theories);
  - Signatures as interfaces for algebraic (abstract data type) specifications;
  - Sets (alphabets) of atomic actions as process interfaces in process algebra (see, for example, [17]);
  - Sets of basic actions as interfaces in program algebra and thread algebra (following, for example, [26]).
In the context of thread algebra, interfaces are used in the capacity of syntactic interfaces as well as in the capacity of operational interfaces (though in a theoretical manner).
2.6. Thread Service Interaction, Instruction Sequence Processing Operators
In PGA-style program algebra and the corresponding theory of instruction sequences, the general idea of an execution architecture is as follows: given an instruction sequence , thread extraction produces a thread . Threads are processes captured in a thread algebra which is a simplified process algebra. Threads have three forms: , the terminating (stopping) thread, the deadlocked (diverging, improperly terminating) thread and , the basic action form of a thread. The stopped thread may be equipped with a result value, for instance, a bit: and . Threads in basic action form can perform a unique basic action, say, for : , which works as follows:
(step 1) Let the service in focus (say the service named , which at the time of the method call is in state , or more precisely ) perform its method named ; the service is offered from the context as given by the execution architecture.
(step 2) A Boolean reply is expected: at reply , the run proceeds with , and at reply , the run proceeds with . The reply is found by evaluating the reply condition for method to state , thereby obtaining Boolean value .
(step 3) In both cases the state of is updated with the effect function to .
Three operators connecting instruction sequences and service families come into play:
(i) The reply operator: , which produces a bit in and in which is understood as a way to define (compute) a function from (states of) service families to bits. To allow definition of the reply operator, two different termination instructions are needed: (terminate with result ) and (terminate with result ).
(ii) The apply operator in which is supposed to describe a transformation from to a new state of it (i.e., to a new state for each of the services contained in it).
(iii) The use operator , which produces a thread rather than a value or a service family (carrying the states of the various services). For , the services combined in act as local resources, to begin with memory sources, which support in computing thread .
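The per-step mechanism described above (dispatch the call, branch on the Boolean reply) can be sketched for the use operator as follows. The thread encoding (`"S"`, `"D"`, `("post", "f.m", P, Q)`) and the modelling of services as stateful callables are assumptions for illustration; the fuel bound is only a guard in this finite sketch.

```python
def use(thread, family, fuel=1000):
    """Hedged sketch of the use operator: run a thread against a service
    family (a dict from foci to callables), dispatching each basic action
    f.m to the service under focus f and branching on its Boolean reply."""
    while fuel > 0:
        fuel -= 1
        if thread in ("S", "D"):
            return thread
        _, action, if_true, if_false = thread   # ("post", "f.m", P, Q)
        focus, method = action.split(".")
        if focus not in family:
            return "D"                  # unmatched call: improper termination
        reply = family[focus](method)   # the service processes the call
        thread = if_true if reply else if_false
    return "D"

# Example: a service under focus "f" replying True, then False.
replies = iter([True, False])
H = {"f": lambda m: next(replies)}
t = ("post", "f.m", ("post", "f.m", "S", "D"), "D")
# use(t, H) == "D": the second call is replied to negatively
```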
Typically, one may imagine compositions
where it is plausible though not necessary that the service families
and
have no focus in common (i.e.,
). Encapsulation drops services from a service family: with
, a collection of foci,
removes from
each service accessible under a focus in
. Assuming that
and
involve disjoint collections of foci results in the following useful identity:
3. Prospecting Services: Endowed with a Non-Ordinary Capability
Below, further lookahead conditions are introduced, as a follow-up on the ones mentioned in
Section 1.1. Prospecting refers to the capability of a service to evaluate one or more lookahead conditions. That is a non-ordinary capability. Foresight is a property of a system (here, an execution architecture) where one or more services perform as if they were able to forecast the actual and potential future behaviour of the system. Proper foresight is present if no ordinary service meets the given requirements.
The basic action foresight pattern, when realised, allows a system to use a certain service (here,
) to operate in such a manner that a given basic action (viz.
) will certainly be performed provided that basic action might occur with non-deterministic behaviour for
. For an explanation of the conditions of the form
, see
Section 1.2 above.
Definition 2 (Basic action foresight pattern)
. Let and let be method interfaces. Further, let be a service with ; then an open architerm
realises the basic action foresight pattern for some and method if for all ordinary (but arbitrary) services with , and for all threads with , the following holds:
if or
then also, .
The example in
Section 1.1 above (taking
and
) provides a proof of the following proposition.
Proposition 1. No ordinary service satisfies the requirements of the basic action foresight pattern.
In Theorem 4 below, a prospecting service named MCP is shown to satisfy the requirements of the basic action foresight pattern. From the latter observation, it follows that the basic action foresight pattern is not self-contradictory.
3.1. Forecasting versus Prospection: Forecasting Patterns and Prospecting Services
Proposition 1 asserts that a certain foresight pattern will not emerge unless a service with prospecting capability is available. A foresight pattern may also be inconsistent (paradoxical, self-contradictory).
Foresight is reflexive in the sense that it works as if an agent (service) is well-informed about its own future behaviour, as well as about the behaviour of other agents (services). Prospection is non-reflexive, and refers to the capability of a service to acquire knowledge about the future behaviour of threads and other services in the same system. Lookahead conditions which occur in a service will work under the hypothesis that the service proceeds as an ordinary service (even if, in fact, it does not). For this limitation, see the preliminary definition of lookahead conditions and the remarks in
Section 1.2 above. Using these conventions, prospection realised in a service by way of lookahead conditions will not have any paradoxical consequences, however hypothetical the evaluation of lookahead conditions in practice may be.
Proposition 2. Let , and let be ordinary (i.e., non-prospecting) services and let the interface be given by . Further let be a lookahead service with .
Now, it is not possible that for all threads of the form , with , the following holds: the first call along focus of the computation returns if and only if the computation of properly terminates.
Proof. Suppose otherwise, and take , . Now the computation terminates properly (ends in ) if and only if the first (and only) method call returns if and only if the same computation diverges (ends in ). □
Proposition 2 is a rephrasing of the argument on the non-existence of malware detection by Cohen [
18]. The latter result works with an action
instead of termination. We will discuss this forecasting pattern in more detail in
Section 3.3 below. It is informative to phrase Proposition 2 in terms of a foresight pattern.
3.2. More Primitive Lookahead Conditions R
Primitive lookahead conditions have the quality that they may be understood as outcomes of experimenting with an architerm as outlined in Definition 1 and subsequent comments. Besides the lookahead condition , which is introduced below Definition 1, the following lookahead conditions (for an architerm ) are used below:
holds if the computation from does not terminate (i.e., perpetuates).
if the computation starting from properly terminates.
asserts that the computation starting from improperly terminates (i.e., ends in ).
asserts that in the ’th step proper termination occurs.
asserts that at the ’th step improper termination occurs.
asserts that the computation from architerm at no stage performs a method call . Here, the computation may fail to terminate, or may terminate properly or improperly, and if , holds.
asserts that the computation from architerm performs a method call in step number for a natural number .
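For finite-state architerms, the termination-related conditions above can be decided by simulation with cycle detection. The sketch below is a hedged illustration: configurations and the `step` function are hypothetical stand-ins for the state of an architerm and its one-step behaviour, with `"S"` and `"D"` marking proper and improper termination.

```python
def classify(step, start, limit=10_000):
    """Classify a deterministic computation over a finite configuration
    space: step(cfg) yields "S" (proper termination), "D" (improper
    termination), or the next configuration; revisiting a configuration
    witnesses an infinite run (the 'perpetuates' condition).  The limit is
    only a guard against non-finite-state inputs."""
    seen, cfg = set(), start
    for _ in range(limit):
        if cfg == "S":
            return "terminates properly"
        if cfg == "D":
            return "terminates improperly"
        if cfg in seen:
            return "perpetuates"
        seen.add(cfg)
        cfg = step(cfg)
    return "unknown"

# Example: configuration 0 steps to 1 and 1 back to 0: an infinite run.
looping = {0: 1, 1: 0}.get
halting = {0: 1, 1: "S"}.get
```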
Definition 3 (Termination foresight pattern A)
. Let and let be method interfaces. Further, let be a service with method interface then an open architerm realises the termination foresight pattern A if the following holds:
let be ordinary services such that . Then, for all threads with , it is the case that: , if and only if the first call of in the computation returns .
Proposition 1 entails that the termination foresight pattern A cannot be realised, not even with a prospecting service. That state of affairs is hardly surprising. Here is a similar foresight pattern for which it is less obvious that it cannot be realised.
Definition 4 (Termination foresight pattern B)
. Let and let be method interfaces. Further let be an SFS service (see Section 2.2 above) with method interface and with state space and initial state , then an open architerm realises the termination foresight pattern B
if the following holds: let be ordinary services such that and assume that . Then for all threads with , and for each state , it is the case that if and only if the first call of in the computation returns .
Termination foresight pattern B is unrealisable, irrespective of any conceivable prospecting capabilities of :
Proposition 3. Termination foresight pattern B cannot be realised by any ordinary or non-ordinary service .
Proof. Let be the state space of so that and choose and as follows: .
If then which contradicts the assumptions of the proposition. It follows that . Now consider . For it follows in the same way that . In this way, an infinite sequence results such that for all , and . It follows that so that termination does not occur, which contradicts the assumption that the reply to the first method call equals , i.e., . □
3.3. SHRD: A Hypothetical Service for Security Hazard Risk Detection
We contemplate a service
(security hazard risk detection), according to which the occurrence of method call
constitutes a security hazard, and which supports running a thread in such a way that the risk of a security hazard is detected and an alternative option for continuing the computation is taken. The following result restates the impossibility result of Cohen in terms of threads and services. This formalisation is based on [
29].
Malware detection in practice, and in theory, amounts to much more than the mere detection of the future occurrence of a single basic action. For instance, in [
30], the occurrence of runs of certain instruction sequences as subcomputations, perhaps interspersed with other steps, of the run of an instruction sequence is considered an indication/confirmation of a virus infection.
Proposition 4. Let and let be ordinary services with , and let the interface be given by . Let (for security hazard determination) be a (possibly non-ordinary) service with method interface .
Now, it is not possible that for all threads of the form with , the following holds: the first call along focus of the computation returns if and only if the computation of does not involve a method call .
Proof. Suppose otherwise, and take , . Then, one finds that the computation avoids performing if and only if the first (and only) method call returns if and only if the same computation performs (and then terminates). □
Now assume, irrespective of Proposition 4, that an -like service is available as suggested in the proposition. Plausibly, would be used in the following manner: in a context call in order to prevent a run of leading to a method call and instead to engage in a run of , thereby hopefully avoiding . Although, as indicated above in Proposition 4, cannot exist, a lookahead service which allows the particular use just mentioned is conceivable. In the following paragraph, such a service is defined under the name . will be a service of the form according to the following definition.
Definition 5. Let the service have singleton state space and and when accessible under focus in an architerm it works as follows: on method call evaluate , then return , without changing the state.
3.4. SHRAT: A Prospecting Service for Security Hazard Risk Assessment
In [
31], the possibility of a service
(security hazard risk assessment) is contemplated, which, supposedly, can determine during the operation of an instruction sequence
, the text of which is known to the service, whether or not the thread created by the instruction sequence will in the future perform a method call
to an unnamed service.
A method call
to
aims to determine whether or not “the future is ok”, i.e., no
action will be performed. We will rephrase this idea with more emphasis on interfaces than in the presentation in [
31]. To begin with the unnamed service is given name
and is present via the interface under focus
.
We imagine a service with a single method and a single state . In slight contrast with one may imagine a service with method interface containing a method . Most methods from constitute useful (friendly) operations on the state space of , while the method brings its state into disarray, so that it may be called only under special circumstances.
Let
be an instruction sequence such that
The application of
to a service family
while making use of the support of the (possibly non-ordinary) service
amounts to application of the thread
extracted from the instruction sequence
to the service family
followed by forgetting
, thereby obtaining
Now the idea is that in a thread , when effectuated in the context of a service family , the reply of on the call is positive if the computation of will not involve any call of the form while the reply is negative otherwise. Here, it is assumed that and are threads arising from thread extraction of states of so that the required interface of both and is included in as introduced above.
For example, taking
, if
contains a single occurrence of
, it still may be the case that this particular method call will certainly not be performed so that a positive reply of
can be given by
. For instance if
returns
on
then in
the call of
returns
and thus:
With this idea in mind, a definition of the functionality of the lookahead service
can be given as follows. (
is defined in
Section 2.1 above).
Definition 6. The assertion with arguments is as follows:
Definition 7. determines its reply to a method call as follows: first determine the truth value (say of the assertion as in Definition 6) with arguments ; then the reply given by to the call equals while the state is left unchanged.
In this definition, the form of matters. In particular, if one or more of the services have prospecting capability and is informed about that fact, then when evaluating , it is “known” that when evaluating the services operate on the basis of the information that the service accessible via focus behaves as an ordinary service, thus reducing the number of non-ordinary services in the service family at hand. It follows that replies of are well defined also without the assumption that the services are ordinary services.
3.5. Obtaining Lookahead Functionality with
We first formalise with Proposition 5 the intuition that the sought foresight pattern cannot be provided by an ordinary service. Then Theorem 1 captures how realises the required foresight pattern.
Proposition 5. Let and let be ordinary (i.e., non-lookahead) services with . Let be an ordinary service with method interface .
Then it cannot be the case that for all threads and both with required interfaces included in the following holds: the reply of the service to the first method call in the computation equals if and only if
.
Proof. There are two cases: and . In the first case, choose and and in the second case choose and . In both cases the required equivalence fails. □
Theorem 1. Let and let be ordinary (i.e., non-lookahead) services with , and let and be threads both with required interfaces included in .
Then the reply of the service (as defined above) to the first method call in the computation equals if and only if
Proof. Without loss of generality, we will assume that and we will write instead of and instead of . Let be a state space with so that . We will distinguish two cases in this order: (i) the case that the computation involves a method call , and (ii) the case that it does not.
For case (i), suppose that the computation involves a method call . It is shown that the reply by to the first method call of the computation equals . To this end, it suffices to prove that for each infinite sequence of Boolean values , the computation involves a basic action . If not, choose a thread with required interface included in and such that (i) the computation from architerm involves a method call , (ii) for some finite or infinite sequence of Boolean values the computation involves no basic action , (iii) the number of steps of the computation of until its first call of is minimal, say . A contradiction is derived from a case distinction on the structure of .
Consider first the case that , terminates at once and does not involve an occurrence of the basic action , thereby contradicting requirement (i) on and . A similar argument applies if . Now suppose that . In this case, there is no such that the computation avoids performing basic action , thereby contradicting requirement (ii) on the choice of and . Next let for . Now, two cases are distinguished: and . Consider the first case: then, in one step, the architerm evolves to the architerm . Now, it must be the case that (a) the computation from the latter architerm involves an occurrence of (because of assumption (i) on and ), (b) for some (the same as for the combination and ) the computation does not involve an occurrence of the basic action , and (c) the number of steps in the computation of until the first call of equals , which contradicts the minimality of . The case works in the same way. Finally consider the case . There are two subcases: in the first step of the computation of returns (subcase 1) or it returns (subcase 2).
(Subcase 1). If returns , then it is known that (a) the computation of involves an occurrence of (because of requirement (i) on and ), (b) there is a sequence such that the computation does not contain an occurrence of (the existence of is obtained from the definition of and the fact that its most recent reply was positive), and (c) the number of steps in this computation until the first occurrence of equals . Together, (a), (b), and (c) contradict the minimality of for and .
(Subcase 2). The case that returns begins with the observation (a) that because the computation involves a call so does the computation from the next step (because of the assumption made on the value of the call ); observation (b) reads that . To see this, notice that due to the definition of , the fact that the most recent reply to a call was negative must have been caused by the fact that, for each , each computation starting from the architerm involves a call . It follows from the latter that (the reply to the recent call as encoded in ) must be .
The computation from architerm involves no call , and together with , it follows that the computation from architerm (with ) contains no call either. Finally notice (c) that the length of the latter computation is . Together, (a), (b), and (c) contradict the minimality of , which concludes subcase 2.
In the second part of the proof, regarding case (ii) ( does not involve a method call ) one may assume that the computation of does not contain a method call .
We show that the first reply of to a call (in the computation of ) equals . Consider the finite or infinite sequence of Boolean values such that the successive calls (if any) of in the computation of are replied to by as encoded in . Since is not a lookahead service, its replies depend only on the state of the service at the moment a call is made and, of course, on the particular method being called; they cannot depend on future behaviour of the system. Hence the architerm produces the same computation, which for that reason does not contain an occurrence of the method call . Now it follows from clause (b) in the definition of the behaviour of that the reply given to the first call of in the computation of equals . □
Definition 8. is the restriction of to finite state threads (fst) and finite state services (fss).
Proposition 6. is a computable service taking as additional inputs for the determination of a reply on a method call a combination of two finite PGLB instruction sequences for the representation of the threads and and finite state machines for the representation of the services .
Proof. A decision method for (in the definition of ) is as follows: Given arguments , let be the respective sizes (number of elements) of the respective state spaces of these components. We find that a computation path without a call can have at most steps unless it gets into a cycle. So, by checking all paths of a length of steps, it can be determined whether there is a (possibly cyclic) path without a call to , in which case the sequence of successive replies to calls provides a cyclic sequence as required, while otherwise, no such exists. □
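The decision method in this proof can be sketched as a search over configurations with cycle detection along the current path: an avoiding run either terminates or revisits a configuration (a cycle, hence infinite avoidance). The encoding below is an illustrative assumption: a configuration is abstract, and `step(cfg)` returns `None` on termination, and otherwise the call performed together with one successor per possible Boolean reply.

```python
def exists_avoiding_run(step, start, bad_call):
    """Hedged sketch of the decision method: over a finite configuration
    space, search for a run from `start` that never performs `bad_call`.
    Such a run either terminates (properly or improperly, both encoded as
    step(cfg) is None) or revisits a configuration on its own path."""
    def dfs(cfg, path):
        if cfg in path:
            return True                 # a cycle avoiding bad_call
        out = step(cfg)
        if out is None:
            return True                 # termination without bad_call
        call, succs = out
        if call == bad_call:
            return False                # this run performs the bad call
        return any(dfs(s, path | {cfg}) for s in succs)
    return dfs(start, frozenset())

# Toy step table: 0 calls "a" and branches to 1 or 2; 1 loops via "b";
# 2 terminates.
table = {0: ("a", [1, 2]), 1: ("b", [1]), 2: None}
```

For finite state spaces this terminates, since each path in the search is cycle-free and hence bounded by the number of configurations, matching the bound used in the proof.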
Definition 9. is the restriction of to finite state threads (fst) and a combination of finite state services (fss) with a service for a Turing tape (tts).
Proposition 7. is a non-computable service taking as additional inputs for the determination of a reply on a method call a combination of two finite PGLB instruction sequences for the representation of the threads and , a bit sequence for the tape architerm for , and finite state machines for the representation of the services .
Proof. We consider the collection of PGLB instruction sequences with required interface included in and such that termination can only occur at the last instruction (which must be !). Termination of computations of the form is well known not to be computationally decidable. We find that for : the computation from terminates if and only if the computation from involves a call , if and only if the initial call in the computation receives reply . If is a computable service, it follows that the halting problem for PGLB instruction sequences over a Turing tape service is decidable as well, which is not the case. □
The notion of a lookahead service leaves open the possibility of the interaction of multiple lookahead services, and suggests possible generalisations of Theorem 1, with one or more of the featuring lookahead capability. This suggestion leads to the following result.
Theorem 2. The condition of Theorem 1 that are ordinary services is necessary.
Proof. Let in the notation of Theorem 1. We assume that and . We consider the following example:
and . Moreover, is an ordinary service such that . , however, is a non-ordinary service, which is defined as follows:
(i) has a single state only, so none of its actions results in a state change.
(ii) On and , the reply is always .
(iii) On , the evaluation of for in a context depends on prospection as follows: is (and with it the reply of to the call ), if and only if there are two different ordinary services and so that and .
(iv) On there is no state change.
Now adopting the definition of as given above, consider the computation starting from . This computation, however, does not comply with the requirement on as given in Theorem 1. In particular, it is the case that:
(a) while
(b) The first call of in the computation receives reply from .
To prove (b), first notice that the call requests to evaluate the reply condition . It must be shown that for each reply function it is the case that . The latter computation starts with the call which will return if and only if two different ordinary services and exist so that
, and
, with
.
There are two cases and . If , then both computations (for and for ) proceed with a reply in the next step, i.e., the call , and both computations will subsequently perform a call of and will therefore not perform a call of . It follows, in this case, that the criterion for returning on the call is not satisfied, and is returned as a reply. For the case , a similar argument leads to the same conclusion.
We find that the reply of on the method call in the computation starting from must be and therefore the next state of the computation is . Now, two cases can be distinguished: and .
In the first case, the computation of proceeds to
. After a step for the call , the call will produce reply (on the basis of the specification of ), so that the subsequent step is a call . A similar argument applies if . This proves that the call is unavoidable, which proves (b) above.
For (a), it must be shown that
. In the second step of the computation, produces a reply to the call . The latter reply of equals . To see this, one notices that when choosing (as a possible future for itself, such that the next reply on is and the next reply on is ), the reply of to the next call of would be , because otherwise would become unavoidable. Moreover, the action will take place.
Alternatively, upon contemplating as a possible future behaviour of so that the next reply to a call is and the next reply on is , the reply of to the next call of would be , because otherwise would become unavoidable. Moreover the action will take place.
So, given the fact that these two different computations for and are possible, the reply of to the initial method call (in the computation ) equals (due to the particular non-ordinary definition of ). So the computation proceeds to which will not perform any basic action any longer such that is satisfied, which proves (a). □
4. Promoting a Method Call
Rather than for avoiding a certain method call, prospection may, for whatever reason, be used to promote its taking place, i.e., to guarantee that it will take place if that guarantee can be given. However, the simplest foresight pattern to that end fails. Instead of an action , which is preferably avoided, the presence of an action which is preferably performed may be assumed.
Theorem 3. Let , and let be ordinary services with . There is no single state service with such that the following holds:
for all threads and with required interfaces included in , the reply of the service to the first method call in the computation equals if and only if it is the case that .
Proof. Suppose otherwise, and let meet the requirements of the theorem. Using the notation of Theorem 3, let and .
First assume that replies on the first call . Then after processing the positive reply, the computation returns to precisely the same state as it was in before said method call. Now, because is a single-state service, the computation enters a loop from which it will not escape and during which no method call will be performed. It follows that on a positive reply to the first call to , it is not the case that will perform the required method call, so that the requirement on is violated in that case.
Hence, it may be assumed that the first method call receives reply . In that case, given the assumptions made on in the theorem, the same will also hold for the computation , which is precisely the same and which is processed with from the same unique state. In that case, because of the negative reply, the method call will be performed in the computation of , so that a mismatch with the requirement on arises. □
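The single-state argument used in this proof can be illustrated with a small executable sketch. The thread encoding, names, and reply functions below are our own illustration, not notation from the paper: a single-state service gives the same reply to the same method every time, so a thread that returns to its previous state on a positive reply can never escape.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Stop:
    pass  # proper termination (S)

@dataclass
class Call:
    focus: str
    method: str
    pos: Any = None  # continuation on reply True
    neg: Any = None  # continuation on reply False

def run(thread, reply, max_steps=10):
    """Run `thread` against a single-state service `reply: method -> bool`;
    return the list of method calls performed (cut off at `max_steps`)."""
    trace = []
    for _ in range(max_steps):
        if not isinstance(thread, Call):
            break  # terminated: no further calls
        trace.append(f"{thread.focus}.{thread.method}")
        thread = thread.pos if reply(thread.method) else thread.neg
    return trace

# A thread that repeats the call p.m as long as the reply is positive.
loop = Call("p", "m", neg=Stop())
loop.pos = loop  # a positive reply returns the thread to the same state

# A single-state service replies uniformly, so a positive first reply traps
# the computation in the loop: no other method call is ever performed.
always_yes = run(loop, lambda m: True, max_steps=5)   # p.m repeated forever
always_no = run(loop, lambda m: False, max_steps=5)   # p.m once, then stop
```

This mirrors the case split in the proof: a uniform positive reply yields the inescapable loop, and a uniform negative reply means the promoted call is never reached via the positive branch.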
Theorem 3 generalises to stateful prospecting services. First notice that the definition of a single-state prospecting service can easily be adapted to involve several methods:
Definition 10 (Single-state multi-method prospecting service). uses to determine which reply to produce on a call of method . Definition 5 is adapted by using a quantifier of type instead.
Definition 11 (Stateful multi-method prospecting service). A prospecting service with method interface , a state space and an effect function can be defined if for each , and , a lookahead condition is given which determines the reply on method in state of .
A workable notation for a service of this form with method interface is with and a lookahead condition. We will refer to services of this form as stateful with an ordinary effect function.
Services with multiple states will use the result of the evaluation of a reply condition for determining the next state transition (the result of the effect function). Then, the effect function has type where the third argument is taken to be the reply value which is given in the same step.
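As an illustrative sketch (the class and parameter names are ours, not the paper's), such a stateful service can be modelled directly: the reply is computed from a per-state condition, and the effect function receives that reply as its third argument, matching the three-argument type described above.

```python
class StatefulService:
    """Sketch of a stateful service: reply_cond decides the reply in the
    current state, and the effect function (state, method, reply) -> state
    uses that reply to determine the next state transition."""

    def __init__(self, init_state, reply_cond, effect):
        self.state = init_state
        self.reply_cond = reply_cond  # (state, method) -> bool
        self.effect = effect          # (state, method, reply) -> next state

    def call(self, method):
        reply = self.reply_cond(self.state, method)
        self.state = self.effect(self.state, method, reply)
        return reply

# A two-state example: reply True exactly in state 0, and let the effect
# move to state 1 after a positive reply and back to state 0 otherwise.
svc = StatefulService(0,
                      lambda s, m: s == 0,
                      lambda s, m, r: 1 if r else 0)
replies = [svc.call("m") for _ in range(4)]  # alternates True, False, ...
```

The point of the sketch is only the signature: the state transition is chosen after, and depending on, the reply given in the same step.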
Proposition 8. Let , and let be method interfaces with . Let . There is no stateful prospecting service for which the following holds: for all -tuples of ordinary services , with , for all states , and for all threads and with required interfaces included in , the reply of the service to the first method call in the computation equals if and only if .
Proof. Suppose otherwise, and let meet the stated requirements. Choose threads and as follows: and , and let : . Suppose . For , let be the smallest subset of which contains and which contains for each . It must be the case that for all , , because otherwise the computation from the architerm will perform a method call after as many steps as needed to arrive at a state with , so that, given the requirements on , must be expected instead, thereby contradicting the above assumption that . It follows that for all states , of it is the case that , whence without ever performing , from which it follows, with the requirements on in the theorem, that should hold, which contradicts the facts just established about , thus concluding the proof. □
Method Call Promotion with a Single-State Prospecting Service:
The single-state service has state and method interface , and it is made accessible via focus , which differs from . In a context , the intended foresight pattern comprises that produces replies in such a way that (i) a shortest path to a call of is found, and (ii) a positive reply is preferred in case, with both choices, shortest paths to a call of of equal length can be found, or (iii) no such paths can be found at all.
A reply to the call (i.e., , in the context as mentioned above) is computed as follows:
with
In other words: the reply is if upon processing that reply (i.e., proceeding with ), under the assumption that is non-deterministic, a shortest path (of a computation of ) to a call of can be found, or if such a path does not exist.
Or, phrased alternatively, the reply is if only upon proceeding with a path in the computation of to a call of (with non-deterministic ) can be found, which is shorter than any such path in the computation of .
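The shortest-path reading of the reply computation can be sketched as a search over the branches of a thread, with the services treated as non-deterministic. The thread encoding, names, and the search bound below are our own illustrative assumptions, not the paper's notation.

```python
import math
from dataclasses import dataclass
from typing import Any

@dataclass
class Call:
    focus: str
    method: str
    pos: Any = None  # continuation on reply True
    neg: Any = None  # continuation on reply False

def dist(thread, pm, fuel=32):
    """Fewest steps, over all reply choices, until a call of method pm;
    math.inf if no reply sequence reaches such a call within `fuel` steps."""
    if not isinstance(thread, Call) or fuel == 0:
        return math.inf  # stop/deadlock, or search bound exhausted
    if thread.method == pm:
        return 1
    return 1 + min(dist(thread.pos, pm, fuel - 1),
                   dist(thread.neg, pm, fuel - 1))

def promote_reply(call, pm):
    """Reply True iff the positive branch offers a path to pm at least as
    short as any path in the negative branch; this also yields True on a
    tie and when neither branch reaches pm at all, matching the stated
    preference for a positive reply in those cases."""
    return dist(call.pos, pm) <= dist(call.neg, pm)

# pm is reachable in one step on the positive branch only:
t = Call("f", "a", pos=Call("f", "pm"), neg=Call("f", "b"))
```

Under this encoding, `promote_reply(t, "pm")` is positive, since only the positive branch of `t` reaches a call of the promoted method.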
Proposition 9. Let , and let be ordinary (i.e., non-lookahead) services with .
Then, for all threads with , the following implication holds: if there is a function such that , then also .
Proof. Without loss of generality, and for ease of notation, we will assume that and . For a natural number , let be the following assertion:
Let be ordinary (i.e., non-lookahead) services with , and let be a thread with required interface included in . Then, if there is a Boolean function such that , there is a natural number such that .
It is immediate that implies Proposition 9. To prove that holds for all , use induction on , with as the basis. If and performs a basic instruction as the first step, then , for some threads and , from which it follows that performs as the first step.
Now let . In case or , the required implication is immediate as an instance of material implication. In case , the implication is immediate too. The remaining cases to be checked are (a) with and , except the case that and , and (b) .
In case (a) there are two subcases, both of which have two subcases: returns on the call and returns on the call . Each of these cases works in the same way. We consider the case that and that is returned. Now, performs a single-step transition to . Here, denotes the state of a service just after having replied to a call of method . It follows that performs in at most steps. Now the induction hypothesis can be applied to the latter architerm, which yields that the computation of will perform a basic instruction within the first steps, from which it follows that computing involves a call of within the first steps.
In case (b), there are again two subcases: (b/t) , and (b/f) . In case (b/t), upon performing a first step, the computation proceeds from , arriving at a method call at the -th step. Using the induction hypothesis, the computation of performs a method call within the first steps. Now, once more, there are two cases: (b/t/t) the reply on the first method call in the computation of is , and (b/t/f) the mentioned reply equals . In the first case, it is immediate that the computation of contains a call of within steps. In the second case (b/t/f), in view of the definition of , with , it must be so that the computation of performs a call of , and in fact within fewer than steps. Now, with the induction hypothesis, it follows that the computation of involves a call of within steps, from which it follows (with the assumption for case (b/t/f)) that the computation of involves a call of within steps. The argument in case (b/f) is very similar to that for case (b/t), and is omitted for that reason. □
Proposition 10. Let , and let and be as in Proposition 9.
Then the reply of the service (as defined above) to the first method call in the computation equals if and only if the following condition is satisfied:
for some positive , the computation performs the method call in the -th step, while does not reach an occurrence of the method call in steps for some .
Proof. This proof is a straightforward though somewhat tedious consequence of Proposition 9. Suppose that the reply of to the first method call in the computation equals . Then, in view of the definition of , there exists a function such that the computation performs . With Proposition 9, it follows that the computation performs , say, at the -th step. Following this computation, one finds that for some function , it must be the case that performs in the -th step; moreover, for function , the computation performs within fewer than steps. Now, contemplating the definition of , one notices that its condition is not satisfied, whence its condition must be satisfied, from which it follows that for no function it is the case that performs within or fewer steps, from which it follows by Proposition 9 that does not perform a method call within steps or fewer. This proves one direction of the Proposition; the argument in the other direction is quite similar, and is omitted for that reason. □
Theorem 4. With the same assumptions as in Propositions 9 and 10: for all threads with required interface included in , the following implication holds: if performs a basic instruction or performs a basic instruction then also performs a basic instruction .
Proof. This result follows from Proposition 10 plus the observation that if performs a basic instruction , then for some Boolean function also performs a basic instruction . □
Problem 3. Consider architerms containing both a service and an service. Can Theorem 4 be generalised to this case?
5. Lookahead Instructions
Threads provide abstractions from sequential programs. The impossibility result of Cohen on virus detection was formulated in terms of programs rather than in terms of threads and services. In this section, we will consider how the results on lookahead services may be used for obtaining information at the lower abstraction level of instruction sequences.
Special instructions may be designed, which incorporate the support of lookahead services into instruction (sequence) processing without making explicit calls to a lookahead service. Leaving aside the task of designing a syntax for such instructions in general, it is already informative to consider some simple special cases.
With , for a positive natural number, we will denote the following condition, which may be evaluated during a run at some position (instruction) in an instruction sequence. Let be an instruction sequence such that . Now , when evaluated at position , asserts that a computation of , when started at the beginning of , will proceed in such a manner that the -th step is a basic action involving a method call , i.e., either the void method call or a test or a test . If the computation has terminated, either properly or improperly, at or before the -th step, the value of a is considered .
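How such a condition might be evaluated can be sketched against a concrete instruction-sequence encoding. The encoding, the fixed reply function, and the step-counting conventions below are our own illustrative assumptions, not the paper's definitions.

```python
def lookahead(iseq, pos, method, n, reply):
    """True iff the n-th step of a run started at position `pos` performs a
    call of `method`; False if the run terminates (properly or improperly)
    before that step.  Encoding (ours): ('a', m) basic action, ('+', m) and
    ('-', m) positive/negative tests, ('#', k) forward jump, '!' stop."""
    step = 0
    while 0 <= pos < len(iseq):
        ins = iseq[pos]
        if ins == '!':
            return False          # proper termination before the n-th step
        if ins[0] == '#':
            if ins[1] == 0:
                return False      # a jump to itself: divergence
            pos += ins[1]         # jumps are absorbed, not counted as steps
            continue
        step += 1                 # every basic action or test is a method call
        if step == n:
            return ins[1] == method
        r = reply(ins[1])
        if ins[0] == '-':
            r = not r
        pos += 1 if (r or ins[0] == 'a') else 2  # a failed test skips one
    return False                  # ran off the sequence: improper termination

seq = [('+', 'f.m'), ('a', 'g.n'), '!', ('a', 'f.m'), '!']
yes = lambda m: True
```

With replies fixed to positive, the run from position 0 performs `f.m` as its first step and `g.n` as its second, then terminates, so a condition asking about a third step evaluates to false.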
Making use of a lookahead condition is not an obvious matter. The simplest idea is to use the condition as a test which is evaluated at the position where it occurs. However, in , a difficulty arises. Suppose that at position the condition takes value . Then, upon starting the run in position , the result of the test is positive, and because of its embedding in a negative test instruction, the instruction is performed as if it were , so that as a second step in the run, termination (!) occurs, which implies that the evaluation of at position should, in hindsight, have taken value . Alternatively, one may consider the possibility that evaluation of the condition at position yields value ; then the third instruction is performed as if it were , so that the second step is , so that, again in hindsight, (evaluated at position ) should have yielded value . The instruction is, in this context, paradoxical, because no evaluation of its embedded condition is possible. In that case, a run of involving the putting into effect of the third instruction is said to terminate improperly (just like ).
By first jumping to another position, either forward or backward, the mentioned paradox may be avoided, though not fully. With , the condition is denoted , which, when evaluated at position , amounts to evaluating at position . If that position lies outside the range of instructions, the result is . Similarly, the condition is evaluated by evaluating at position , again returning if that position is or lower. Consider . When evaluating at position , the hypothetical computation has to start at position ; then, as the first step, the test is performed, and depending on the result, either the call is made (so that the evaluation of yields ) or improper termination results (so that the evaluation of yields ).
A paradoxical situation may result if the computation started at position for an evaluation of at position visits the test instruction (say ) within steps. In that case, it is assumed that no definite value can be found for the condition and that the execution of the instruction at position runs into improper termination. With the understanding that, say, is an abbreviation of , it is plausible to adopt the convention that leads to improper termination immediately, because the test is called before an evaluation of the embedded condition is known.
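The convention above can be sketched by letting the evaluator track whether the hypothetical run revisits the position of the test instruction itself, reporting a third outcome for the paradoxical case. The instruction encoding and all names below are our own illustrative assumptions.

```python
def lookahead3(iseq, test_pos, start, method, n, reply):
    """Evaluate the condition by a hypothetical run from `start`, but report
    'paradox' (improper termination of the real run) if that run visits
    `test_pos` within the first n steps.  Encoding (ours): ('a', m) action,
    ('+', m)/('-', m) tests, ('#', k) forward jump, '!' stop."""
    pos, step = start, 0
    while 0 <= pos < len(iseq):
        if pos == test_pos:
            return 'paradox'  # the run reaches the test instruction itself
        ins = iseq[pos]
        if ins == '!':
            return False      # proper termination before the n-th step
        if ins[0] == '#':
            if ins[1] == 0:
                return False  # a jump to itself: divergence
            pos += ins[1]     # jumps are not counted as steps
            continue
        step += 1
        if step == n:
            return ins[1] == method
        r = reply(ins[1])
        if ins[0] == '-':
            r = not r
        pos += 1 if (r or ins[0] == 'a') else 2  # a failed test skips one
    return False              # ran off the sequence: improper termination

# A negative test at position 0, with the monitored test instruction sitting
# at position 2: one reply choice routes the run into the test instruction
# (paradox), the other avoids it and yields a definite value.
seq = [('-', 'cond'), ('a', 'f.m'), ('a', 'g.n'), '!']
```

Whether the paradoxical case is reached thus depends on the replies encountered along the hypothetical run, which is exactly why no definite Boolean value can be assigned in general.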
For the (hypothetical) processing of a condition , say, occurring as an embedded instruction in a test at position in an instruction sequence , we notice the following:
(i) It does not change the state of any service (because it is a hypothetical form of processing);
(ii) It returns if, when (hypothetically) the computation is performed, that computation will perform a basic action in the -th step (without any visit to the ’th instruction in between);
(iii) It returns otherwise (including the cases where proper or improper termination arises before the a-th step of the computation).
Implementing a Lookahead Instruction via a Prospecting Service
A useful modification of the lookahead instructions and is as follows: and include tests which require that at least once during a run, a basic action or test based on the method call is performed.
The positive result in Theorem 1 can be used for clarifying the feasibility of a specific lookahead instruction. Here, the use of infinite instruction sequences, obtained via repetition of a finite instruction sequence, comes in handy. As was shown in [2], repetition, which underlies the program algebra PGA, has the expressive power of arbitrary flow charts, though without making use of backward jumps.
Proposition 11. Let . Let be an instruction sequence with zero or more occurrences of as its only lookahead instructions and without backward jumps and with required interface contained in . Let be obtained from by replacing each occurrence of by the method call .
Then, upon defining as , the semantics of instruction is given in such a way that it complies with the (informal) definition given above for the (lookahead) test instruction .
Proof. When using repetition as an instruction sequence constructor, an infinite, repeating, instruction sequence results in which only forward jumps are used. Using this fact, the correspondence between the semantics of instruction and its counterpart after thread abstraction is perfect. □