Java desktop links of the week, November 5

A quiet week this time, with just two links of note that I could find:

  • GestureFX, a lightweight gesture-enabled pane for JavaFX, has had a new release.
  • I came across the open-source Paintera, which “is a general visualization tool for 3D volumetric data and proof-reading in segmentation/reconstruction with a primary focus on neuron reconstruction from electron micrographs in connectomics.” It is built using the JavaFX 2D and 3D APIs. There is a cool video linked on Twitter (I wish I could find a better one).

Natively compiling Micronaut microservices using GraalVM for insanely faster startups

The Micronaut framework is a microservice framework that will be immediately recognisable to developers familiar with Spring Boot or MicroProfile. It certainly felt that way to me, and this is by design – it makes it easier for developers to consider moving over to this new framework. But why should you? Micronaut takes a different approach to enabling everything we developers take for granted in Spring Boot and MicroProfile. Rather than performing annotation processing at runtime, as Spring Boot and MicroProfile do, Micronaut uses annotation processors at compile time to generate additional classes that are compiled alongside your code. This substantially reduces startup time, because there is far less classpath scanning to do when the application launches. In fact, Micronaut avoids reflection as much as possible, using it only where absolutely necessary.

The benefit of this is obvious. Where Spring Boot and MicroProfile applications can take tens of seconds to start (depending on the complexity of the classpath that must be scanned), Micronaut starts on my machine in less than a second – normally around 650ms in fact.

Despite this, Micronaut offers everything you’ve come to expect from a microservices framework – dependency injection, convention over configuration, service discovery, routing, etc.

This is cool enough, and it is great for testing – starting a server from a clean build is so much less painful when you’re only waiting a second or so. But I wanted to push further, and use GraalVM to compile the Java code down to a native image. This should give us even better startup, making it even more appealing for serverless use cases where you pay just for the execution time.

So – what is necessary to use GraalVM to compile down a Micronaut application to native code? Here’s a quick tutorial on what I had to do:

Firstly, you need to install GraalVM itself. This is essentially JDK 8 with additional tools (such as the one we will use later to create a native image). You can download GraalVM from the website, or you can use a tool like SDKman to download it onto your system. Here are the instructions for installing GraalVM with SDKman:
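The original listing is missing here, so this is a sketch of the SDKman steps; the exact candidate identifier changes between releases, and the version shown below is purely illustrative – run the list command to find the current one:

```shell
# List available Java distributions; look for the GraalVM entries
sdk list java

# Install GraalVM (the version identifier here is illustrative –
# substitute whatever the list command reports)
sdk install java 1.0.0-rc9-graal

# Make it the active JDK in this shell, then confirm
sdk use java 1.0.0-rc9-graal
java -version
```

After this, `java -version` should report a GraalVM build, and the `native-image` tool should be on your path.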

With GraalVM installed, we need to install the SubstrateVM library into our local Maven cache. SubstrateVM is a small virtual machine written in Java that GraalVM compiles together with our application code to provide us with GC, memory management, etc.
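The original command is missing here. One way to do this is to install the `svm.jar` that ships inside the GraalVM distribution into the local Maven repository with Maven's `install:install-file` goal – note that the jar path, coordinates, and version below are assumptions that depend on your GraalVM release:

```shell
# Install the SubstrateVM jar bundled with GraalVM into the local Maven cache.
# The file path and version are illustrative – check your GraalVM installation.
mvn install:install-file \
  -Dfile=$JAVA_HOME/jre/lib/svm/builder/svm.jar \
  -DgroupId=com.oracle.substratevm \
  -DartifactId=svm \
  -Dversion=1.0.0-rc9 \
  -Dpackaging=jar
```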

Assuming that we’ve already installed the Micronaut CLI, we can then create a Graal native microservice using the following command:
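The command itself is missing from this post as published; a sketch using the Micronaut 1.0-era CLI would be the following, where the application name is arbitrary and the feature name is an assumption from that era of the CLI:

```shell
# Create a new Micronaut app with the GraalVM native-image feature enabled
# ("hello-world" is an arbitrary example name)
mn create-app hello-world --features graal-native-image
```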

Once that is created, we can change into the new directory, compile the code, and run it with a Micronaut tool that generates a report in build/reflect.json detailing the reflection occurring within the application. This report is fed into the GraalVM compiler so that it knows how to compile everything properly.
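The commands are missing from this post as published; a sketch of this step might look like the following, where the jar name and the analyzer class name are assumptions based on the Micronaut 1.0-era tooling:

```shell
cd hello-world

# Build the application fat jar
./gradlew assemble

# Run the Micronaut reflection analyzer, which writes build/reflect.json
# (jar name/version and analyzer class are assumptions – adjust for your build)
java -cp build/libs/hello-world-0.1-all.jar \
  io.micronaut.graal.reflect.GraalClassLoadingAnalyzer
```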

With this, we can then use the GraalVM native-image tool to generate a native version of our code. The following command is what ended up working for me:
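The command is missing from this post as published; a sketch of a `native-image` invocation from this era would be along these lines – the jar path, output name, and main class are assumptions, and the exact set of flags needed varies by application:

```shell
# Compile the fat jar down to a native executable.
# Jar path, name, and main class below are illustrative.
native-image --no-server \
  --class-path build/libs/hello-world-0.1-all.jar \
  -H:Name=hello-world \
  -H:Class=hello.world.Application \
  -H:ReflectionConfigurationFiles=build/reflect.json \
  -H:+ReportUnsupportedElementsAtRuntime
```

The `-H:ReflectionConfigurationFiles` flag is what feeds the reflection report generated in the previous step into the compiler.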

If that completes successfully, you can now run your natively-compiled version of the application as per usual. On my machine, this is what I see:
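The console output is missing from this post as published, but running the binary is simply executing the generated file (named after the application, here assumed to be hello-world):

```shell
# Run the natively compiled application directly – no JVM required
./hello-world
```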

I can access my microservices at the specified URL as per usual, but the startup time has dropped to 22ms! That’s incredibly fast 🙂

I’ve got a bunch more experiments and cool things underway. I’ll talk about those on this blog, but the best way to keep informed is to follow me on Twitter.