\paragraph{} A function is said to be \emph{polymorphic} when a single implementation of this function can be used with several different types. A polymorphic function may accept types that need to be treated differently at runtime: they may have different memory representations, use different calling conventions, or need to be discriminated by the garbage collector. It is thus necessary to keep track of this information at runtime in order to interpret or compile a polymorphic function.
% TODO: exemples
\paragraph{} Many polymorphism implementation techniques exist, but only some of them have been described in research papers. We describe these techniques extensively, list their advantages and limitations, and compare them all.
\section{By hand: meta-programming}
\paragraph{Source code duplication by hand} TODO.
\paragraph{Source code generation} TODO.
\paragraph{Source code transformation} It is possible to generate source code using a preprocessor; this is described in~\cite{SY74} under the name \emph{syntax-macro extension}. It is used~\cite{EBN02} in the C programming language through its preprocessor~\cite{SW87}. The technique is described in~\cite{Reb17}. For instance, given the \commandbox{list.c} file:
Running \commandbox*{cpp -P list.c} will give:
In practice one would put the generic code in its own file. Then, defining \commandbox{TYPE} and \commandbox{TYPED(X)} before including the file is enough to get a specialised version.
\paragraph{String mixins} See D's \emph{String mixins}~\cite{Fou22}.
For instance, given the \commandbox{list.d} file:
Running \commandbox*{gdc -c list.d -fsave-mixins=/dev/stdout} will give:
\paragraph{Token stream transformation} See Rust's \emph{procedural macros}~\cite{Dev22}.
\paragraph{AST transformation} See OCaml's PPXs~\cite{Con22ppx}.
\paragraph{} For more metaprogramming techniques, see~\cite{LS19}.
\section{Monomorphization}
Given a polymorphic function, it is sometimes possible to \emph{statically} collect all the type combinations it is going to be used with. The \emph{monomorphization} technique consists in producing a different \emph{specialised} function for each combination. This results in a program containing only \emph{monomorphic} functions, hence the name monomorphization.
To build the set of type combinations for a given function, we iterate over the program's \emph{call graph}. At each \emph{call site}, the combination used by this call is added to the set.
Once the set is computed, the original polymorphic function is removed. All the monomorphic functions are generated and added to the program. Finally, each call site is updated to use the right function.
Monomorphization is used by Rust's generics~\cite{Con18}, C++'s templates~\cite{Str88}, Ada's generic packages and subprograms~\cite{ND79,Bar80}, Coq's extraction~\cite{TAG18}, and Why3~\cite{BP11}.
Monomorphization may seem similar to the various techniques described in the previous section. The difference lies in the fact that monomorphization is dedicated to handling polymorphism, whereas metaprogramming only allows polymorphism incidentally. Even though C has macros, no one would say that C is a polymorphic language. In the same vein, even though C++, D and Rust respectively have macros, string mixins and procedural macros, they also have a template/generics system dedicated to polymorphism. The term monomorphization should be used to talk about the \emph{preferred form} of polymorphism.
\paragraph{Code specialisation} The code produced by monomorphization is usually very efficient. Indeed, as the types are known precisely, it is possible to generate machine code fully specialised for each type. This includes the usage of dedicated assembly instructions or calling conventions. Moreover, no runtime support of any kind is needed, as there is no need to act differently depending on the type.
\paragraph{Memory usage} Heap memory usage is optimal as we only store the values we need and no runtime metadata.
TODO: easy to compile ?
\paragraph{Compilation cost} The compilation cost is usually quite high, as each polymorphic function may give rise to many specialised functions. This leads to an increase in compilation time and memory use.
\paragraph{Binary size} For the same reason, the compiled binary can be quite large, each source polymorphic function potentially leading to many monomorphic assembly functions.
\paragraph{Dynamic languages} Monomorphization doesn't work for dynamic languages. TODO: explain a little bit more, with an example.
\paragraph{Modularity} Monomorphization is not modular: supporting separate compilation requires either access to the full source code, or keeping a representation of polymorphic code until link time. TODO: explain more, with an example
\subsubsection{Polymorphic recursion}
Monomorphization is impossible in the presence of polymorphic recursion, that is, a recursive function calling itself with a potentially different type each time.
This is quite easy to show with non-uniform recursive types, as they make it possible to write functions whose type can grow indefinitely depending on their input.
For instance, this implementation of \emph{Revisited Binary Random-Access Lists}~\cite{Oka99} in Rust:
The Rust compiler loops indefinitely on this file, as it tries to generate an infinite number of specialised versions of the \texttt{len} function. Note that without the \texttt{main} function, the compiler succeeds. That is because monomorphization can only happen at link time: without a main function, the \texttt{len} function cannot be specialised, as its call sites are still partly unknown.
\paragraph{Avoid useless specialisation} When a type parameter is not used by a function, it is not necessary to specialise the function for this parameter. This optimisation is performed by the Rust compiler. It is not very common, however, for a type parameter to be unused (i.e.\ not to appear in the arguments or result).
\paragraph{Polymorphization} When functions contain closures, these closures inherit the type parameters of the function they belong to. In this case, it is much more common for some of them to end up unused. The optimisation that prevents closures from being specialised for unused type parameters is called \emph{polymorphization}. The initial implementation in the Rust compiler is described in~\cite{Wo20}.
\section{Boxing}
The \emph{boxing} technique uses a \emph{uniform} representation of values: pointers. \emph{Scalar} values (integers, booleans, characters\ldots) are stored in a heap-allocated \emph{block} (sometimes called a \emph{box}) and represented by a pointer to this block. Blocks usually contain metadata describing their size and what kind of data they are made of. Values that were already pointers (arrays, lists\ldots) are still represented by a pointer, but instead of pointing directly to their data, they point to a block.
TODO: figure explaining the above paragraph
All values being pointers, polymorphic functions can deal with any type parameter in a uniform way. At runtime, some operations are necessary to box and unbox values when needed. When it is required to discriminate between pointers and scalars (e.g.\ by the garbage collector), the information is found in the block's metadata.
TODO: figure explaining box metadata ?
It is used by the CPython implementation of Python, where blocks are named \texttt{PyObject}~\cite{pyobject}.
Considering the following code:
The memory layout would be the following:
TODO: say that Python integers are of arbitrary size (size is stored in the block metadata ?)
TODO: say that it also wastes a lot of space (give size of a small int in Python)
TODO: say that small integers are preallocated in Python that's why x and array2\_2 are pointing to the same heap value
It is used in Java~\cite{Bra+98}. TODO: link ; TODO: explain difference with Python (Java has some unboxed scalars outside of generics)
\paragraph{Ease of implementation} Boxing is one of the easiest techniques to implement: it only requires inserting some code that boxes, unboxes and reads block metadata at runtime.
\paragraph{Compilation cost} The compilation cost of boxing is very low. Indeed, each polymorphic function is compiled only once.
\paragraph{Binary size} The binary size of code produced by boxing is also very small, as each polymorphic function is compiled into a single assembly blob.
\paragraph{Interpreted languages} As shown by Java and Python, boxing is compatible with both compiled and interpreted languages.
\paragraph{Modularity} Boxing is modular: each function can be compiled without knowing all its call sites. Separate compilation is therefore possible with boxing without any difficulty.
\paragraph{Polymorphic recursion} Boxing is compatible with polymorphic recursion. If a function calls itself with an unbounded number of types, they will all have the same representation and can therefore be handled by the same assembly code.
\paragraph{Execution speed} TODO: It's very slow : initialization cost, indirect acces, locality, memory cost and thus GC
\paragraph{Memory usage} TODO: a lot of waste (TODO: give size of a small int in Python)
\section{Tagged union}
\section{Runtime monomorphization}
Prevent empty bib~\cite{KS01}