This web page was originally written in February of 2000. Many things
have changed since then, in some cases dramatically. These changes are
reflected in the fate of the groups that were working on compiling
Java into native code (see They're All
Dead below). As some of these companies went out of business
and others changed their strategies, many of the HTML links referenced
here became invalid. The material that was published on the Web has
been lost as a result. As more and more of humanity's information is
published on the Internet, the problem of the transient nature of this
information becomes an issue. To some degree this problem is
addressed by the Internet
Archive, but this archive is incomplete.
This is a growing list of links to groups and companies doing work on Java compilation. I've also added a list of references. True confession time: I hate trying to remember where I saw something, so if I see a Web link on compiling Java that might be useful in the future, I include it here so I can find it later. Like a squirrel burying nuts. Also, like a squirrel that forgets where the treasure is buried, I occasionally forget what is on my web pages, which I have been working on since 1995.
I don't claim that this web page is a complete listing. For example, initially I missed the Jove optimizing byte code compiler. So if you know of something that is not included here that fits the grab bag of topics covered, please send me e-mail (email@example.com).
I have links to several commercial products on this page. Unless noted, I have only looked at the literature published on-line by these companies. In most cases I have not used the products. It is hard enough to keep up with all that is going on with Java, Jini, JavaSpaces, without trying to be a product reviewer. Sometimes a product that looks really good "on paper" may not be as impressive once it is used extensively. So I don't "endorse" any product (not that companies are actively seeking my endorsement).
It's also possible (probable?) that despite my best efforts I have misunderstood something about these products. So take what I write with a grain of salt. This is a roundabout way of stating that I don't want to get outraged e-mail (or letters) from marketeers and lawyers. This is just my opinion. However, if you are a compiler developer or software engineer and you have comments on the material here, I would definitely like to hear from you.
Most Java to native code compilers currently read Java class files, not Java source. This avoids having to implement a Java language front end, since the compiler can read class files generated by Sun's javac, Microsoft's J++ or any other byte code compiler.
The Java class file contains a lot of the original Java source information (see my notes on the Java class file disassembler I wrote). A compiler can read the Java class file symbol information and the Java byte codes and build an internal flow graph that can be used for optimization and code generation.
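As a concrete illustration, here is a minimal sketch (not taken from any particular product) of the first step such a class file reader performs: validating the fixed header that every Java class file begins with, before walking the constant pool, fields, methods and byte codes. The class and method names are invented for this example.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

// A sketch of the start of a class file reader. Every class file begins
// with the magic number 0xCAFEBABE followed by minor and major version
// numbers; only after checking these would a compiler read the constant
// pool and the method byte codes.
public class ClassFileHeader {
    static final int MAGIC = 0xCAFEBABE;

    // Returns the class file's major version number, or -1 if the bytes
    // do not start with a valid class file header.
    public static int majorVersion(byte[] classFileBytes) {
        try {
            DataInputStream in =
                new DataInputStream(new ByteArrayInputStream(classFileBytes));
            if (in.readInt() != MAGIC) return -1;  // magic: 0xCAFEBABE
            in.readUnsignedShort();                // minor version
            return in.readUnsignedShort();         // major version
        } catch (IOException e) {
            return -1;                             // truncated header
        }
    }

    public static void main(String[] args) {
        // A fabricated 8-byte header: magic, minor 3, major 45 (JDK 1.1 era).
        byte[] header = { (byte) 0xCA, (byte) 0xFE, (byte) 0xBA, (byte) 0xBE,
                          0, 3, 0, 45 };
        System.out.println("major version: " + majorVersion(header));
    }
}
```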
The Jove Optimizing Java Byte Code Compiler
The Jove compiler from Instantiations is an optimizing compiler that reads Java class files. Like the Marmot compiler, developed by Microsoft Research (see below), Instantiations' Jove is a modern optimizing compiler that uses a Static Single Assignment (SSA) intermediate for optimization. Like Tower Technology, they aim their Java product line at server side Java applications.
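For readers unfamiliar with SSA, here is a hand-worked sketch of what the renaming looks like. A real compiler performs this transformation on its intermediate representation, not on Java source; the class and method names below are invented for this example.

```java
// Static Single Assignment form gives every variable exactly one
// assignment, so the optimizer can read def-use chains directly off the
// names. The second method encodes, by hand, the SSA renaming of the
// first (where x is assigned twice).
public class SsaDemo {
    public static int original(int a) {
        int x = a;
        x = x + 1;        // second assignment to x
        return x * x;
    }

    public static int ssaForm(int a) {
        int x0 = a;       // x0 := a
        int x1 = x0 + 1;  // x1 := x0 + 1  (each name assigned exactly once)
        return x1 * x1;   // result := x1 * x1
    }
}
```

Both methods compute the same value; the point is that in the SSA version each use of a name refers unambiguously to a single definition.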
Excelsior JET (XDS Native Java)
Vitaly Mikheev was kind enough to comment on an earlier version of my notes on the Excelsior JET compiler. Any remaining errors, however, are mine, not his.
Excelsior is a new incarnation of a Russian company named XDS. XDS developed compilers for the Russian space program and for Oberon-2, a language developed by Niklaus Wirth. The Excelsior compiler group is based in the Novosibirsk Scientific Center in Western Siberia. Vitaly writes:
In the Soviet days, Novosibirsk was one of the few open places in the Soviet Union. Thirty-five research institutes of the Russian Academy of Sciences and the Novosibirsk State University, one of the top three Russian universities, located at Academic Town, caused active international research contacts, conferences and so on.
The JET compiler makes use of the optimization technology originally developed for the XDS Oberon-2 compiler. This includes static single assignment (SSA) based optimization. One of the unusual features of the JET compiler is that many of the optimizations are always turned on. In response to my comments about optimization phases being buggier than non-optimizating phases (see Optimization and Software Quality below), Vitaly writes:
When we started developing our compiler construction framework, we made a radical (and hard) decision: most optimizations cannot be turned off in our compiler. This allows us to debug our optimizations in great depth, even when it was difficult. It is probably the only way to debug an aggressively optimizing compiler, because the debugging process may take (and usually takes) a great deal of time and it would be simpler to yield to the temptation of turning optimization off.
Always testing with optimization turned on yields a higher quality compiler, but it complicates debugging and exception handling. The Excelsior development suite includes a debugger and linker.
The whole idea of compiling Java to native code is heresy for Sun Microsystems and they certainly have not done much in the language definition or environment to help the Java to native compiler writer. One of the most difficult issues in compiling Java to native code is the fact that Java allows classes to be loaded dynamically (either from a local disk system or over a network). This is key to systems like Jini, which are built on top of Java. A dynamically loaded class cannot be precompiled by a compiler like JET, since it is only known at runtime. The Excelsior JET Professional Edition solves this problem by including a Just In Time compiler (JIT) which compiles the dynamically loaded class. The compiled class can be cached, so the native code will be reused if the class is loaded again in a later run of the application.
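A small sketch of why dynamic loading defeats ahead-of-time compilation: the class name is just a string that arrives at run time (in practice from a network service or a Jini lookup), so no static compiler can have seen the class in advance. The class and method names here are invented for this example; `java.util.Vector` stands in for a dynamically supplied class.

```java
// An ahead-of-time compiler looking at this code sees only a String,
// not the class the String names. Until the name is known at run time,
// there is nothing to compile, which is why JET falls back to a JIT
// for dynamically loaded classes.
public class DynamicLoad {
    public static Object instantiate(String className) {
        try {
            Class c = Class.forName(className);  // resolved only at run time
            return c.newInstance();
        } catch (Exception e) {
            return null;  // class not found or not instantiable
        }
    }

    public static void main(String[] args) {
        // In a real system the name would come over the network.
        Object o = instantiate("java.util.Vector");
        System.out.println(o != null ? o.getClass().getName() : "load failed");
    }
}
```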
Natural Bridge's BulletTrain optimizing byte code to native compiler
Like Instantiations, Natural Bridge, a small software company, markets an optimizing byte code to native compiler. A good optimizing compiler should be able to produce code that is half the size and twice the speed of unoptimized code. Natural Bridge also supplies a high performance class library. The Natural Bridge web pages did not discuss the details of the optimization technology (which tend to be of interest only to other compiler developers anyway). However, they seem to do all of the modern optimizations.
Cygnus's GCJ compiler
Cygnus Support (now part of Red Hat) has a native Java compiler project. A Java front end has been added to the middle and back end of GCC. This compiler can also read Java ".class" files. Here are some links to Cygnus material:
The GCJ Home Page describes the basic features of the Cygnus Java compiler.
The Cygnus Implementation of the Java Language Architecture and Design Manual is published on Gordon Irlam's web page www.base.com. This is a document that was written at Cygnus, presumably by Gordon (I have not e-mailed him to ask). I found it an interesting discussion of compiling Java into native code.
This seems to be a more recent paper (published on the Cygnus web site) on compiling Java into native code.
The GNU C++ compiler is competitive with compilers from Sun Microsystems, Microsoft and other vendors. It is, for example, the most widely used compiler on Linux. Whether the Red Hat/Cygnus Java compiler ever reaches a similar point, where it becomes competitive with the commercial native Java compilers listed on this page, remains to be seen. The GNU C++ compiler exists because of Richard Stallman and the Free Software Foundation. It remains to be seen whether Red Hat/Cygnus is willing to invest the money required to develop an open source compiler. The issue of who pays for development when free software zealots and graduate students are unavailable is one of the unaddressed issues in Eric Raymond's The Cathedral and the Bazaar. For a longer discussion of this issue, see my rant on free software.
A byte code to native compiler builds the internal flow graph used by the compiler optimization and code generation phase from the Java byte codes. A Java source to native compiler has a front end that parses Java and does syntactic and semantic analysis. Some tasks, like method in-lining may be easier when there is a Java source front end. A compiler that reads Java source must also be able to read Java class files to properly handle import statements and classes defined in other files.
Java is a complex language. A Java grammar is large and the semantic checking that must be done to catch semantic errors is complicated. As a result, building a Java front end which parses Java source and class files and builds abstract syntax trees (AST), symbol and type information is a significant effort. Since such a front end must also be able to read class files, many compiler vendors skip the complexity of syntactic and semantic analysis by implementing only the class file reader. A byte code compiler like Sun Microsystems' javac serves as a "first pass". Since javac can be downloaded and used without a fee, this does not impose too much on the user. So the list of compilers that read Java source is smaller (tiny, in fact) than the list of compilers that take Java class files as input. (We take a break now for a message from our sponsor, Bear Products International: Bear Products International is developing a Java front end, which is available for license. We now return you to your Web page.)
Diab-SDS's fastJ compiler
Diab-SDS, an embedded compiler and debugger company, sells a native Java compiler product named fastJ. Unlike the Cygnus compiler, Diab supports the Java 1.1 language specification, which includes inner classes. Diab's compiler may be a Java front end on top of their C++ compiler back end. It would be interesting to see how the code generated by this compiler benchmarked against the Marmot, Jove, Excelsior and BulletTrain compilers (see above).
There are a number of Java to native projects. In fact, it has become difficult to keep these web pages up-to-date because work is proceeding so rapidly. So here are some of the academic Java compiler projects:
The Timber Compiler, Delft University of Technology, the Netherlands
The Timber compiler is a Java to native compiler. It is used as a research platform for parallel computing. The Timber compiler has extended Java to support parallel programming.
The Manta Java to x86 native compiler, Vrije Universiteit, Amsterdam, the Netherlands
The Manta web page states:
Manta is a native Java compiler. It compiles Java source codes to x86 executables. Its goals are to beat the performance of all current Java implementations. Currently it already contains a highly efficient RMI implementation (source code compatible with std. RMI). It is currently about 30 times faster than standard implementations. Class libraries are taken from kaffe, classpath and partly homebrew.
Java Remote Method Invocation (RMI) is particularly difficult to support in a native compiled environment. With RMI, code objects (composed of both data and executable methods) may be moved across a network.
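To make the RMI case concrete, here is a minimal, hypothetical remote interface. The stub class a client holds behind such an interface, and the marshaling code that moves objects across the network, are typically generated and loaded at run time, which is exactly the kind of code a static native compiler cannot see in advance.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

// A hypothetical remote interface, for illustration only. A client calls
// through a stub implementing this interface; the stub and the marshaling
// code behind it are produced and loaded at run time.
public interface StockQuote extends Remote {
    // Every remote method must declare RemoteException, since any call
    // may fail in the network rather than in local code.
    double lastPrice(String symbol) throws RemoteException;
}
```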
Tower Technology sells a byte code to C Java compiler named towerJ. Much of the material on the Tower Technology web pages discusses how the towerJ compiler can help Java users get better performance. There is not much mention of optimizations. However, from talking to Tower Technology it appears that they do many of the standard optimizations in preparation for generating C code. The C code generation makes the product quickly portable in a way that native code is not. While C code will never be as efficient as native code, it may be that in corporate server side applications the performance advantage of native code compilation is not as high. Compared to a language like Fortran, Java is new. The computer science community does not have much experience with the language yet.
Performance analysis and characterization is difficult, whether it involves software, hardware or some combination therein. I've spent a lot of my professional life working on high performance computers and parallel processors. Prospective customers always want to know whether the high performance system the vendor is selling is faster than the system they already own. Sales and marketing will say "Yes, it's much faster". But an engineer will usually answer "it depends". The speed-up on the high performance system depends on the characteristics of the application. This is never a very satisfying answer for anyone, but it is inescapable. The same problem exists when trying to understand whether the code generated by one compiler is faster than the code generated by another.
Performance analysis of compiler generated code has a long history. Originally there were no benchmarks. Users would do their own benchmarking on in-house applications. This was fine as long as the users were willing to dedicate time to benchmarking, since in most cases they were not willing to publish the source for their applications. This left no way for the compiler vendors to publish performance numbers that would compare their compiler against other compilers. So synthetic benchmarks like Whetstone, Dhrystone and SPEC were developed. History starts getting ugly here, since some compilers have specific hacks that recognize portions of these benchmarks in order to produce better performance numbers. Because the special recognition was specific to the benchmark and did not generalize to a larger application class, with these compilers good benchmark numbers did not necessarily translate into better application performance.
Compared to Fortran, C and C++, Java is a new language. Many of the benchmark suites have not been translated into Java. Performance analysis of Java compiler generated code has not been a priority, since analysis centered on the speed of the interpreter and the runtime system (e.g., the garbage collector and the class library). See for example the Volano report, which looks at JVM and Java performance in a networked environment. Now that there are several Java compilers that generate native code, performance comparison is more important.
There has been a lot of discussion about the speed of Just-In-Time (JIT) compilers and hot spot optimizers that work with the interpreter. Some early Java to native compilers were only marginally faster than interpreted code. As a result, JIT and hot spot techniques resulted in Java performance that matched the code generated by the native compilers. This led to claims that a JVM could be just as fast as native code (underlying this was the idea that compiling Java to native code was heresy, since it violated Java's "write once, run anywhere" theme). Most native compiler vendors have now published benchmarks showing that the code generated by their compilers is faster than the JIT and/or hot spot interpreters. There has been less work on classic benchmarks, although these could be used for comparison against JIT or hot spot as well.
The Java world moves fast and more classic benchmarks are starting to appear. Microsoft Research did an excellent job benchmarking their Marmot Java compiler (see reference below). Benchmark and/or performance analysis data has been published by all the vendors above (except Cygnus, where the compiler is less mature). See:
The white papers published by Tower Technology provide less benchmark data, simply showing (in marketing terms) that the towerJ compiler is faster than Sun's HotSpot optimizer.
Natural Bridge notes, as I have above, that you have to be careful interpreting benchmarks (see also my discussion of benchmarks on the related web page Why Compile Java Into Native Code?). Most compiler groups claim to perform the full suite of modern optimizations. However, the implementations vary depending on the experience and talents of those groups. In practice this means that compilers that claim to implement the same optimization can differ widely in the performance delivered on real applications.
Java is more difficult to optimize than C++. If a compiler group is good, the performance of optimized code should improve over time. In several cases the Excelsior Java compiler produces code that is faster than code generated by Microsoft's Visual C++ (MSVC). The Marmot paper shows MSVC doing a better job at optimization than Marmot, in general (conspiracy theorists will say "of course", but I think that this is misplaced). So the fact that Excelsior generated code beats MSVC generated code suggests that either Excelsior does a better job or the benchmark is poorly chosen. I'm not sure which is the case.
Java performance remains a controversial issue. Sun continues to claim that the Hot Spot optimizer solves all problems. In fact, Sun's Hot Spot does show impressive performance numbers for long running servers. Hot Spot uses execution tracing for optimization. This can beat statically compiled and optimized code (for example, the Hot Spot optimizer can know that a branch is usually taken). Despite the usual Sun hype, Sun did not invent execution profile driven optimization. This technique is almost twenty years old and has its roots in the Bulldog compiler implemented by John R. Ellis at Yale. The late mini-supercomputer company Multiflow used a trace scheduled compiler, as did Cydrome (another failed mini-supercomputer company). Profile driven optimization is currently used by some IA-64 compilers.
Long running servers represent only a fraction of the Java application space. All my Java code either consists of short running client/server applications or standard "run once" applications. For these, interpreted Java performance is slow and Hot Spot is of no help.
More benchmarks are starting to appear for Java to native compilers. For example, see The Java Performance Report - Part IV: Static Compilers, and More, August, 2001 by Osvaldo Pinali Doederlein (thanks are due to Dmitry Leskov at Excelsior for sending me this link). This report provides some interesting comparisons. Benchmarking a suite of compilers is hard work and it is difficult to synthesize meaningful conclusions from the results (as noted above). As Doederlein (the author of the report) notes, I/O and other system factors can be a significant issue for some Java benchmarks, which contain networking code. Since this code is likely to be I/O bound (or dependent on the networking library), these benchmarks are not a good measure of the performance of compiled code.
Software quality and testing are critical in a compiler, since a compiler is the foundation used to create all other software. Quality is achieved in two ways:
Optimization phases can be scary to work on. An optimizer takes correct code and rearranges it into another sequence of, it is hoped, correct code which is more efficient in terms of either memory or runtime. The optimization phase of a compiler consists of thousands of lines of code and the algorithms are complex. As any experienced programmer discovers, this means that a compiler is more likely to generate incorrect code when optimization is turned on.
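As a concrete (and invented) example of the kind of rearrangement an optimizer performs, the two methods below compute the same value; the second is what an optimizer might produce from the first by loop-invariant code motion and strength reduction. If the transformed code is wrong in even one corner case, the compiler silently breaks user programs, which is why these phases are scary to work on.

```java
// Both methods sum a[i] * (n * 4) over the array. The first is what the
// programmer wrote; the second is what an optimizer might rewrite it
// into. The two must agree for every possible input.
public class OptDemo {
    // Unoptimized: recomputes n * 4 on every loop iteration.
    public static int sumUnoptimized(int[] a, int n) {
        int sum = 0;
        for (int i = 0; i < a.length; i++) {
            sum += a[i] * (n * 4);   // n * 4 is loop invariant
        }
        return sum;
    }

    // After optimization: the invariant expression is hoisted out of the
    // loop, and the multiply by 4 is strength-reduced to a shift.
    public static int sumOptimized(int[] a, int n) {
        int k = n << 2;              // hoisted and strength-reduced
        int sum = 0;
        for (int i = 0; i < a.length; i++) {
            sum += a[i] * k;
        }
        return sum;
    }
}
```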
A compiler vendor has a strong incentive to develop compilers that do well on the standard benchmarks. This does not mean that the compiler is a quality product. Unfortunately there is no benchmark that will report the relative quality of a compiler.
Once upon a time I worked for a compiler vendor, which I will not name, that sold compilers that implemented all the standard optimizations. As with most compiler vendors, they put a lot of effort into making sure that they did well on the benchmark suites. However, the quality of the compilers was terrible. The compiler source was full of ifdefs, so the logic flow was difficult to understand without using a debugger. Rather than implementing software that at least looked correct by inspection, people would hack in changes and then test the compiler against the test suite. Few people at this company believed that it was possible to implement machine independent optimizations, so all optimization was target dependent.
The compilers generated incorrect code much more frequently than they should have and they did not deliver performance on real applications that was proportional to the performance reported on benchmarks. Unfortunately there is no way to tell in advance whether a compiler is a reliable piece of software. Only use will determine this. So when you see a page of benchmark results, remember that quality is also an important metric.
Perhaps it is a mark of how much IBM has changed that they are involved in open source projects like Jikes and shipping systems running Linux. Jikes was written by IBM Research and is available in source form. The IBM Jikes source license is considerably more liberal than the Free Software Foundation's license and seems to allow use of the IBM source in proprietary software without disclosing the source of the product that uses the IBM source.
Jikes has been ported to a number of platforms, including Windows, AIX and Linux. To quote IBM: "The project currently includes the Jikes compiler, the Jikes Parser Generator, and the Jikes Test Suite". Jikes is written in C++, not Java, so it should be considerably faster than Sun's compiler. The Jikes Web page contains download information and links to the FAQ.
Sun Microsystems has insisted on maintaining control of the Java language and to date has refused to allow Java to be standardized by an independent body. Sun has claimed that this is to avoid Java falling into the fell hands of those who would adulterate Java, rendering it less than 100% pure. Sun specifically prohibits calling any super-set or sub-set of their Java language "Java". The Pizza language is a super-set of Java (called Pizza to avoid trademark problems with Sun). To quote from the Pizza Web page, the Pizza super-set of Java extends Java in the following ways:
Generic Java: Adding Generics to the Java Programming Language
Some of the people who worked on the Pizza programming language (described above), have developed Generic Java (GJ). At this point it seems likely that GJ will be adopted into Sun's official Java language.
Generic Java is described in Philip Wadler's article GJ: A Generic Java: Java may be in for some changes, published in the February 2000 issue of Dr. Dobb's Journal. The Generic Java site includes several papers on Generic Java, referenced from the Generic Java documents web page. These include the Generic Java Specification which may serve as the base document for the addition to Sun's official Java.
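As a sketch of what GJ adds, here is a small parameterized container of the kind GJ makes possible. The class is invented for this example; with a generics-capable compiler, the element type is checked at compile time and the casts that plain Java collections require disappear.

```java
import java.util.LinkedList;

// A type-parameterized stack. Without generics this would hold Object
// and every pop() would need a cast; with generics the compiler checks
// the element type statically.
public class Stack<T> {
    private LinkedList<T> items = new LinkedList<T>();

    public void push(T item) { items.addFirst(item); }

    public T pop() { return items.removeFirst(); }  // no cast needed

    public boolean isEmpty() { return items.isEmpty(); }
}
```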
A class file optimizer reads a Java class file and generates another class file that is optimized for size and byte code efficiency. The Jopt tool is a Java class file optimizer written by Markus Jansen of the University of Technology, Aachen, Germany. According to the Jopt Web page, these optimizations include:
Jopt has been tested against a range of Java class files. The Jopt web page publishes the results of the optimization research that has been done with Jopt. The Jopt project is collecting Java class files to use as test cases for their optimizer. So if you have some around, send them in.
For those of you who are worried about people decompiling your Java classes, it looks like Jopt will optimize them into unreadability. So it's not only an optimizer, but it's an obfuscator. Personally, I think that if you really want to protect your code against being decompiled, just compile it into optimized native code.
Lister: Where is everybody, Hol?
Holly: They're dead, Dave!
Lister: Who is?
Holly: Everybody, Dave!
Lister: What, Tower Technology?
Holly: Everybody, Dave!
Lister: What, Instantiations' Jove compiler?
Holly: Everybody, Dave!
Lister: What, the NaturalBridge BulletTrain compiler?
Holly: They're all dead. Everybody's dead, Dave!
Lister: What about Diab Data's fastJ? It's not dead.
Holly: Everybody's dead, Dave!
With apologies to the British television program Red Dwarf
Except for the Excelsior Java to native compiler, the commercial Java to native compilers are dead. Tower Technology is no longer in business. Instantiations no longer sells their Jove Java compiler and Natural Bridge is concentrating on a high performance Java interpreter. Like most compiler companies, Diab Data was purchased by another company, in this case Wind River. I did not find any mention of the fastJ Java to native compiler on their web pages. The GNU gcj compiler, from Red Hat, still exists. From talking to people who have used it, this compiler is not of the same quality as the GNU C++ compiler.
When there are multiple products that do similar things it is not surprising to find that some products dominate, while others fail. This is not what seems to have happened in the case of Java to native compilers. Of the commercial products, only the Excelsior compiler still survives. One interpretation might be that Excelsior dominated the market and killed off all the competing products. Another explanation is that the market for Java to native compilers is small, at best. Apparently the latter explanation is the one closest to the truth.
There may be a number of reasons for the Java to native compiler die-off. The reasons that occur to me are:
Native processor performance is available (just not in Java). When developers are concerned about application performance they use C++, rather than Java.
In the three years between the time I wrote the initial version of this web page and the time I added this epitaph for Java to native compilers, processor performance has increased by a factor of four (if Moore's Law is a reliable estimator). The cost of memory has dropped by a large factor as well. Multiprocessor systems are now affordable. Although a performance penalty is paid for JVM interpretation, Java may still be fast enough for many applications.
I've had a difficult time understanding how much Java is being used, who is using it and how it is being used. One application that seems to be booming is "enterprise Java". This includes applications like Java stored procedures in the Oracle database (allowing the database programmer to use something other than SQL in some cases), Java application servers based applications (e.g., servlets) and applications that make use of XML.
There is no C++ equivalent that I can think of for a Java application server. Java also provides a rich infrastructure for building XML based applications. So for some "enterprise" applications Java wins because it provides features that are not available anywhere else (well, perhaps in C#, but that is another discussion).
The reality of the market (they're all dead, Dave) is the ultimate argument. Still, I'm surprised. As processor performance increases so do the demands placed on applications. There never seems to be enough speed. Many Java applications, whether enterprise or mathematics models, have terrible performance. To some extent one can throw hardware at the problem. However, a Java to native compiler is considerably cheaper than a high performance multiprocessor.
I also expected that a Java to native compiler would be attractive because an increasing number of younger software engineers are not fluent in C++, since they learn Java in school and use it in industry. Switching to C++ is only an option if you have spent the years needed to develop C++ expertise.
Fortran is one of the best programming languages for compiler optimization. Since optimization has been important to the Fortran community, over the years the language semantics have been cleaned up and clearly specified to aid optimization. Sadly Fortran lacks the abstraction of object oriented languages like C++ and Java. But optimization in object oriented languages, especially in the presence of exceptions, is difficult. The core data structure for optimization is the control flow graph (data flow is the secondary structure, embedded in the control flow graph). Building a correct flow graph when there are exceptions is something that has not been discussed much in the compiler literature. I am still trying to understand how to build a flow graph that properly supports exceptions without totally destroying any chance for classic optimization.
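A small invented example of the problem: inside a try block, every instruction that can throw is a potential edge to the handler in the control flow graph, and the handler must observe variables as they were at the moment of the throw. Those extra edges pin the order of the statements and block many classic code motions.

```java
// In the try block below there are two potential flow graph edges to the
// handler: one from the array load (which can throw) and, in general,
// one from any other throwing instruction. The optimizer cannot hoist
// "x * 2" above the load or reorder the assignments, because the handler
// must see x exactly as it was when the exception was raised.
public class ExceptionFlow {
    public static int loadAndDouble(int[] a, int i) {
        int x = 0;
        try {
            x = a[i];   // may throw ArrayIndexOutOfBoundsException
            x = x * 2;  // cannot be moved above the load
        } catch (ArrayIndexOutOfBoundsException e) {
            return -1;  // handler observes the pre-throw state
        }
        return x;
    }
}
```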
The literature on compiler optimization spans at least thirty years. This list does not claim to cover even a fraction of this literature. It lists some of the material I have been reading currently and centers on Java compilation issues. I would be grateful for any Java optimization Web accessible references or books not listed here. If you send me e-mail I will add these references to this list.
For some reason, 1997 was a good year for advanced compiler books. As I mentioned, the literature on compiler design, development and optimization is massive. The first two references below summarize much of this literature and are a great resource for anyone who is serious about compiler design.
Building An Optimizing Compiler, by Robert Morgan, Digital Press, 1997
While I worked at MasPar I had the opportunity to work with a compiler consulting group in the Boston area named Compass (not to be confused with Compass Design Automation, which was engulfed and devoured by Avanti). Compass provided compiler technology to both MasPar and Thinking Machines (both companies made massively parallel processors). Bob Morgan was the chief compiler architect at Compass. Bob has drawn on his many years of compiler design and implementation experience to write an excellent book on compiler optimization.
An optimizing compiler is a huge piece of software and even this 450 page book can only provide an overview of the design of an optimizing compiler. Still, Morgan provides a practical and largely non-academic view of compiler implementation. As Morgan notes, the devil is in the details. So this book is not a cookbook on implementing an optimizing compiler. But it is an excellent place to start.
Advanced Compiler Design and Implementation, Steve S. Muchnick, Morgan Kaufmann, 1997
Robert Morgan's book (above) covers only optimization. It assumes that you know how to implement compiler front ends (which I think is a fine approach). Muchnick's book covers all compiler phases and, as a result, is over twice the size of Morgan's book. Muchnick's book is also very good and I have read it to provide another view of various implementation techniques like static single assignment.
Optimizing Compilers for Modern Architectures by Randy Allen and Ken Kennedy, Morgan Kaufmann, 2002
Ken Kennedy is a professor at Rice University. His group has turned out some of the finest workers in the field of compiler design and implementation. Among Prof. Kennedy's students is Randy Allen. This is an excellent book, especially when paired with Muchnick's book. This book does not cover the compiler front end (e.g., parser, semantic analysis and abstract syntax tree construction). The authors concentrate on optimization in the middle and backend passes. As the authors correctly point out, front end topics are covered well elsewhere.
Professor Bill Pugh
Bill Pugh's group at the University of Maryland has been doing work on Java implementation and optimization. Professor Pugh's web page includes a nice list of references titled Pointers for Research on Java Implementation.
Microsoft's Marmot Optimizing Java compiler
I found this reference in Professor Pugh's list. The Marmot Java compiler supports a very large subset of Java (the only things it does not support are dynamic class loading, reflection and remote method invocation (RMI)). Judging from the Microsoft Research report (if you have problems with this link, go to Microsoft Research and search for Marmot; here is a PDF version), Marmot is a really impressive piece of software. It uses modern optimization and is very professionally done. For example, they generate fully optimized x86 code (rather than C, as is the case with some research compilers). The paper describing Marmot is excellent. It is written by hard core compiler developers for hard core compiler developers, so if you are not familiar with the static single assignment intermediate (SSA), parts may be obscure (see Robert Morgan's book for a good reference on SSA). The Marmot group also did a lot of work benchmarking their compiler. I am really impressed.
Espresso, A Java compiler written in Java, by Karl Doerig, Boston University
This paper is a set of design notes on a Java compiler that the authors call Espresso. The paper is beautifully formatted and includes some nice pictures of the symbol tables and other core data structures. There are some good ideas here, but not all of them have been validated by the hot forge of software implementation. The design is also incomplete in some areas.
Prof. Marc W.F. Meurrens' internet programmer links. This is an excellent collection of links and commentary on Java, various Java tools (like decompilers and disassemblers) and software. There is a collection of links on coding standards (not my cup of tea, but some people like them).
Marco Schmidt's list of Java compilers and virtual machines. This page is hosted on Geocities, which always makes me nervous, since I don't know what the life of these pages is.
As noted above, the literature on compiler design and implementation is huge. We can only hope that Knuth will live long enough to actually write a book or books providing a summary of some of this material. The list below is not meant to be comprehensive. When I see something that might be of interest in the future I try to squirrel it away. If you have references or links, please send me email. With all of those caveats, you can check my links page here.
Ian Kaplan, February 12, 2000
Revised most recently on: January 9, 2004