Machine Learning in Compiler Optimization

Deep Patel
8 min read · Dec 26, 2020

“Why would anyone want to use machine learning to build a compiler?”

Compilers translate programming languages written by humans into binaries executable by computer hardware. Machine learning, on the other hand, is an area of artificial intelligence (AI) aimed at detecting and predicting patterns. At first glance these domains seem too diverse and too vast to have much in common, but we shall see that compilers and machine learning are a natural fit, and that their combination has developed into an established research domain.

It’s all about optimization

Compilers have two jobs: translation and optimization. First, they must translate programs into binary correctly. Second, they have to find the most efficient translation possible. There are many different correct translations whose performance varies significantly. The vast majority of research and engineering practice is focused on this second goal of performance, traditionally misnamed optimization. The goal was misnamed because in most cases, until recently, finding an optimal translation was dismissed as too hard and an unrealistic endeavor. Instead, the field focused on developing compiler transformations in the hope of improving performance, even though they could in some instances damage it.

Two trends make machine learning attractive here. First, despite the year-on-year increase in the potential performance of hardware, software is increasingly unable to realize it, leading to a software gap. This gap has yawned right open with the advent of multicores, and compiler writers are looking for new ways to bridge it. Second, computer architecture evolves so quickly that it is difficult to keep up. Each generation has new quirks, and compiler writers are always playing catch-up. Machine learning has the desirable property of being automatic: rather than relying on expert compiler writers to develop clever heuristics to optimize the code, we can let the machine learn how to optimize the compiler's decisions so that programs run faster.

Machine learning predicts an outcome for a new data point based on prior data. In its simplest guise, it can be considered a form of interpolation. This ability to predict based on prior information can be used to find the data point with the best outcome, which ties it closely to the area of optimization.

Machine learning relies on a set of quantifiable properties, or features, to characterize a program, and there are many different features that can be used. These include static data extracted from the program source code or the compiler intermediate representation (such as the number of instructions or branches), dynamic information (such as performance counter values) obtained through runtime profiling, or a combination of the two. Standard ML algorithms typically work on fixed-length inputs, so the selected properties are summarized into a fixed-length feature vector, where each element can be an integer, real, or Boolean value. This process of feature selection and tuning, also referred to as feature engineering, may need to be performed iteratively, multiple times, to find a set of high-quality features on which to build an accurate model.
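To make this concrete, here is a minimal sketch (in Python) of what a feature extractor might look like. The specific feature names are my own illustrative assumptions, not a standard set:

```python
# A minimal sketch of feature extraction; the feature names are
# illustrative assumptions, not a standard set.

def extract_features(ir_stats: dict, perf_counters: dict) -> list:
    """Summarize a program into a fixed-length feature vector.

    ir_stats: static counts taken from the source/IR.
    perf_counters: dynamic values from a profiling run.
    """
    return [
        ir_stats.get("num_instructions", 0),    # static: IR instruction count
        ir_stats.get("num_branches", 0),        # static: branch count
        ir_stats.get("num_loads", 0),           # static: memory loads
        perf_counters.get("cache_misses", 0),   # dynamic: profiling counter
        perf_counters.get("branch_misses", 0),  # dynamic: profiling counter
    ]

# Every program, whatever its size, maps to the same 5-element vector,
# which is what standard ML algorithms expect as input.
```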

Let's go through the process.

A. Learn a model

Training a model is an essential first step. The compiler developer selects training programs which are typical of the application domain. For each training program, we calculate the feature values, compile the program with different optimization options, and run and time the compiled binaries to discover the best-performing option. This process produces, for each training program, a training instance that consists of the feature values and the optimal compiler option for the program. The compiler developer then feeds these examples to a machine learning algorithm to automatically build a model. The learning algorithm's job is to find, from the training examples, a correlation between the feature values and the optimal optimization decision. The learned model can then be used to predict, for a new set of features, what the optimal optimization option should be.
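Here is a minimal sketch of that training step using scikit-learn. The feature vectors and labels are made-up placeholders; in practice each row would come from profiling one training program under every candidate option:

```python
# A minimal sketch of the training step using scikit-learn.
from sklearn.tree import DecisionTreeClassifier

# One row per training program: its feature vector (see extract_features).
X_train = [
    [120, 14, 30, 2000, 150],
    [4500, 380, 900, 90000, 4200],
    [800, 95, 210, 15000, 700],
]
# The best-performing option for each program, found by compiling,
# running, and timing every candidate.
y_train = ["-O1", "-O3", "-O2"]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)
```

A decision tree is just one choice here; as listed later, anything from linear models to SVMs can play this role.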

B. Deployment

In the final step, the learned model is inserted into the compiler to predict the best optimization decisions for new programs. This is demonstrated in FIG(1). To make a prediction, the compiler first extracts the features of the input program and then feeds the extracted feature values to the learned model.
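Continuing the sketch from the training step, deployment then boils down to one feature extraction followed by one prediction (ir_stats and perf_counters here stand for the new program's measurements):

```python
# Deployment: extract the new program's features, then predict.
features = extract_features(ir_stats, perf_counters)  # new program's stats
predicted_option = model.predict([features])[0]
print(predicted_option)  # e.g. "-O3"
```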

The advantage of the machine-learning-based approach is that the entire process of building the model can be easily repeated whenever the compiler needs to target a new hardware architecture, operating system, or application domain. The model built is entirely derived from experimental results and is hence evidence-based.

FIG(1)

How will a code transformation affect eventual performance?

Let me take you through the naive approach first. A naive approach is to exhaustively apply each legal transformation option and then profile the program to collect the relevant performance metric. Given that many compiler problems have a massive number of options, exhaustive search and profiling is infeasible, prohibiting the use of this approach at scale. This search-based approach to compiler optimization is known as iterative compilation. Since it scales poorly when selecting compiler options that must work across programs, researchers have investigated two main alternatives: directly predicting the best compiler decision using machine learning for relatively small compilation problems, and learning a cost function to guide the search (described in the next section). FIG(2) gives a better idea of what I am trying to say.

FIG(2)
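To see why the naive approach does not scale, here is a minimal sketch of iterative compilation for a single program. The compile-and-time helper is simplified: a real setup would time many repeated runs for statistically sound numbers, and the option space is usually far larger than four -O levels:

```python
# A minimal sketch of iterative compilation with gcc as the compiler.
import subprocess
import time

CANDIDATE_FLAGS = ["-O0", "-O1", "-O2", "-O3"]

def time_with_flag(source: str, flag: str) -> float:
    """Compile `source` with one flag, run the binary once, return seconds."""
    subprocess.run(["gcc", flag, source, "-o", "prog"], check=True)
    start = time.perf_counter()
    subprocess.run(["./prog"], check=True)
    return time.perf_counter() - start

# Exhaustive search: fine for 4 flags, hopeless for the real option space.
best_flag = min(CANDIDATE_FLAGS, key=lambda f: time_with_flag("prog.c", f))
print("best:", best_flag)
```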

There are also articles about building a cost function to estimate the quality of a compiler option, and many compiler heuristics rely on one. Depending on the optimization goal, the quality metric can be execution time, code size, energy consumption, and so on. Using a cost function, a compiler can evaluate a range of possible options and choose the best one without needing to compile and profile the program with each option. It is a complex approach that I can only sketch here; if you want the details, they are well covered in the literature.
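As a rough illustration of the idea (my own toy example, not the approach of any particular paper), a learned cost function can be as simple as a regression model that predicts execution time from program features plus an encoded option, so candidates can be ranked without profiling. All numbers below are made up:

```python
# A toy learned cost function: regression from (features + option) to runtime.
from sklearn.linear_model import LinearRegression

# Rows: two program features plus the -O level, encoded as a number.
X = [[120, 14, 0], [120, 14, 2], [4500, 380, 0], [4500, 380, 3]]
y = [0.91, 0.40, 12.5, 3.2]  # measured runtimes in seconds (made up)

cost_model = LinearRegression().fit(X, y)

def best_option(features, options=(0, 1, 2, 3)):
    # Rank options by *predicted* cost; no compilation or profiling needed.
    return min(options, key=lambda o: cost_model.predict([features + [o]])[0])

print(best_option([800, 95]))
```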

A wide range of machine learning models can be used to implement these approaches. Below are the main learning approaches and the models typically used for each:

  • Supervised learning approach (linear/non-linear models, SVMs)
  • Unsupervised learning approach (k-means, Newman clustering)
  • Online learning approach (genetic programming)

Some implementations

There have been a few implementations of deep learning for compilers. One of them used a deep neural network to directly predict what the right heuristic value should be for certain optimizations in OpenCL. The authors used LSTMs, which enabled the network to somewhat understand the structure of the program, such as whether a variable had been declared earlier. The results improved over prior hand-built features. This research was followed by the incorporation of graph-based representations, which researchers thought would suit programming languages better: the instructions of a program can be represented as the edges of a graph, depicting the relationships between variables.
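For a flavor of that LSTM-based approach, here is a rough, illustrative PyTorch sketch (my own simplification, not the published model): source tokens go through an embedding and an LSTM, and the final hidden state is classified into one of the candidate heuristic values:

```python
# An illustrative LSTM heuristic model in PyTorch (a simplification).
import torch
import torch.nn as nn

class HeuristicLSTM(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden=64, num_options=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # token ids -> vectors
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_options)        # one score per option

    def forward(self, token_ids):            # token_ids: (batch, seq_len)
        x = self.embed(token_ids)
        _, (h, _) = self.lstm(x)              # h: final hidden state
        return self.head(h[-1])               # logits over heuristic options

# Stand-in for tokenized OpenCL source (a real tokenizer is assumed).
fake_tokens = torch.randint(0, 1000, (1, 50))
logits = HeuristicLSTM()(fake_tokens)
```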

Machine-learning-based compilation is now a mainstream compiler research area and, over the last decade or so, has generated a large amount of academic interest and papers. While it is impossible to provide a definitive catalog of all that research, this blog has tried to give a comprehensive and accessible overview of the main research areas and future directions.

Machine learning is not a panacea; it can only learn from the data we provide. Rather than dumbing down the role of compiler writers, as some fear, it opens up the possibility of much greater creativity and new research areas.

However, researchers also admit that this problem is larger than those to which reinforcement learning is typically applied. The state space is huge, and so is the action space. Evaluating the reward can be time-consuming: the program must be compiled to a binary and executed with representative inputs enough times to give statistically sound timings.

Modern compilers are multi-million-line pieces of software that can take years to master. To leverage ML for efficient compilers, researchers have a few suggestions:

  • Compiler writers should build modular compilers that support iterative compilation and machine learning from the ground up.
  • Machine learning researchers should invent models that are suited to the recurrent, flow-based nature of programs.
  • Every optimization choice should be exposed through discoverable APIs with which iterative search and machine learning can interact.

Open research directions

Machine learning has demonstrated its utility as a means of automating compiler profitability analysis. It will continue to be used for more complex optimization problems and is likely to be the default approach to selecting compiler optimizations in the coming decade.

This blog has introduced machine-learning-based compilation and described its power in providing an evidence-based approach to compiler optimization. It is the latest stage in 50 years of compiler automation. Machine-learning-based compilation is now a mainstream compiler research area and, over the last decade or so, has generated a large amount of academic interest.

Originally published at https://deeppatelvit.blogspot.com on December 26, 2020.
