Java desktop links of the week, April 9

A heap of links – enjoy! 🙂

One guy's perspective on JavaFX

I’ve had a lot of people ask me about JavaFX, especially with the recent changes announced by Oracle (there are a few follow-up articles at infoworld and jaxenter for more information), and about my leaving Oracle late last year. I’ve promised for a long time to add my thoughts to the discussion, but up until now I haven’t actually done so. Let me rectify that now…

For those of you that didn’t hear the news, Oracle announced plans related to Java client. In short, some Java client related technologies will be removed from JDK releases. Of most interest is the fact that applets, Java Web Start, and JavaFX will no longer be available in JDK 11 and later. JDK 11 may feel like a long way off given how slow Java releases have been in the past, but remember that with the new release cadence in effect, we’ve already seen JDK 10 released last week, and JDK 11 is due out in September, I believe. This means that from September people downloading the JDK will no longer be guaranteed to have applets, Java Web Start, or JavaFX APIs available on their machine.

I poured my heart into JavaFX for a really long time – since mid-2009 it was my full time job, firstly at Sun Microsystems and then at Oracle. It was a pleasure and a delight to work with so many smart, dedicated, and professional engineers during this time. I will always fondly remember the time I spent with them, and class it as a career highlight. Equally valued by me was the opportunity to work with such a smart, motivated, creative, and dedicated community of people who were using JavaFX in all manner of projects. It was truly heartening to know that my code was being used to further science, to help send things to space, to trade in markets, to create mobile apps, and so much more.

At the same time, as an engineer working with so many truly excellent engineers, it was heartbreaking to see people leaving Oracle, but this was an all-too-common experience as budgets became tighter and tighter. Java is an open source project, with applications ranging from the smallest of profiles right up to hugely complex computing needs. It can be extraordinarily hard to monetise open source work, and it can therefore be challenging to justify continued investment into engineering efforts where there is no direct benefit. At the end of the day, my feeling is that JavaFX simply fell into this void – whilst there are some Oracle products using JavaFX, there weren’t enough, and thus justification for ongoing investment became a tough sell, with investment waning over time. Every time we lost some engineers, I felt that surely we couldn’t cut any more – but I was all too frequently wrong.

That takes us to today. Oracle has made it clear that it is now in the community’s hands to ensure JavaFX flourishes. The process for developing JavaFX itself is opening up, and simplification of the build process makes it more possible to become involved. Discussions are underway into moving JavaFX into modules available via Maven repositories. One day soon JavaFX will be a compile-time dependency that you include in your build script. You won’t care what version of the JDK your end-users have installed, because your build script will compile a native installer with an embedded JDK image, containing everything you need. Developers will create or coalesce around a new project for ensuring easy application updates. In other words, things needn’t be so glum. It’s a case of the King is dead, long live the King!

But challenges do abound. Unlike nature, JavaFX won’t simply adapt for free. There is a lot that must be done, including all I wrote above. Beyond that, what is JavaFX? Who defines it and how does it evolve? How does it become something community owned? Is it forever bound to the OpenJDK, or does it transition to a third party (not unlike Java EE)? Community involvement at this juncture is critical – as the process opens itself up it also becomes ever-more dependent on those who form this community to bring their resources to the fore, or the risk is that JavaFX will wither and die.

In short, there are many ways one can choose to look at this announcement. I’ve had many more months to appreciate this plan than the wider community, but I feel like I straddle the entire emotional spectrum on this 🙂 It goes without saying that I would have loved to have seen Oracle growing its investment in JavaFX, so there is sadness in me that this didn’t happen. On the other hand, it is easy to see the lack of business sense in doing this. I think my main emotion now is concern – I really do hope that the community steps up. If it doesn’t, JavaFX will be no more, and this is a real risk. On the other hand, if the community does step up (and I mean this extremely generally – I hope individuals bring their skills in coding, documentation, testing, advocacy, etc., but I also hope that deep-pocketed companies help to sustain these developers financially), then we may see something amazing happen. Already there are positive signs: Gluon has already worked with Oracle to create a repo on GitHub that enables community interaction and contributions, and there are a number of people already adding improvements into it.

I really hope we see something amazing happen.

I really, really hope that we see something amazing happen.

Finally, I feel like I should finish by saying: Java client was my home for a very long time. I’ve loved working with all of you, and I wish you all the best in everything you do. My time at Sun and Oracle was special. If you have the willingness to join in on JavaFX now, then you should reach out to the openjfx-dev mailing list. I will remain involved however I can (certainly with ControlsFX and Scenic View, and the desktop links post), but it is an outside-of-Oracle role from here on out 🙂

Building a serverless url shortener with Azure Functions and Java, part two

This is the second post in a series of blog posts on building a serverless url shortener in Java and Azure Functions. Here’s a list of all the posts I (may) publish eventually (links will be added as the posts are published):

In this post, I wanted to cover the additional code required to use Azure Storage Queues as part of a function app, to reduce the amount of time that your users must wait before a function returns by offloading work that can be done asynchronously into a queue (which is triggered by a separate Azure Function).

My use case is simple: I’d like to do analysis on the use of my URL shortener – which links are popular, when are they popular, who refers them, etc. This analysis involves looking at various request headers, storing data into data storage, and maybe doing some number crunching. Additionally, I wouldn’t mind being pinged on a Slack channel whenever my URL shortener is used, just for fun… The thing is – all these tasks take time, and while I’m happy to pay Azure to do them, I don’t want them done while my users are waiting to be redirected to their desired URL.

Adding Elements to the Queue

This is a perfect use case for the queues support baked into Azure Functions. What we can do is use a queue in the redirect function that we discussed in the last post, and add into the queue the relevant data (as a string). This enables the redirect function to focus on its core task and respond to the user as soon as possible. I can then define another Azure Functions function that, instead of being triggered by an HTTP request, is triggered by a queue having an item added to it. First of all, let’s update the redirect function code from the last blog post to have the queue provided as a function argument, so that we can write to it:
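The original code listing didn’t survive into this post, so here is a minimal sketch of what the updated signature looks like with the Azure Functions Java annotations – the function name, route, queue name, and the elided lookup logic are all assumptions on my part:

```java
import java.util.Optional;

import com.microsoft.azure.functions.*;
import com.microsoft.azure.functions.annotation.*;

public class Redirect {
    @FunctionName("redirect")
    public HttpResponseMessage run(
            @HttpTrigger(name = "req", methods = {HttpMethod.GET},
                         authLevel = AuthorizationLevel.ANONYMOUS,
                         route = "{shortCode}")
            HttpRequestMessage<Optional<String>> request,
            @BindingName("shortCode") String shortCode,
            // The new argument: an output binding onto an Azure Storage queue.
            @QueueOutput(name = "queue", queueName = "requests",
                         connection = "AzureWebJobsStorage")
            OutputBinding<String> queue,
            final ExecutionContext context) {

        // ... look up the long URL for shortCode in table storage (elided) ...

        // Offload the analytics work: write a pipe-delimited record to the
        // queue and let the processing function deal with it asynchronously.
        queue.setValue(shortCode + "|" + request.getHeaders().get("user-agent"));

        // Redirect the user immediately (the target URL is a placeholder here).
        return request.createResponseBuilder(HttpStatus.FOUND)
                      .header("Location", "https://example.com/resolved-url")
                      .build();
    }
}
```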

Note the new line in the method arguments starting with @QueueOutput. This is the queue that we will write to later in the function. To actually have this queue set up, you should refer to the Azure Functions queue documentation. The name attribute is an internal name, whereas queueName is the name the queue has been given in Azure. The connection value is a connection string – in my case I am using the standard Azure Storage that is provisioned as part of my Azure Functions deployment (in the same way I use the standard Azure Storage that is part of the Azure Functions deployment for table storage also, for every short code mapping).

Further down the redirect function code I extract out a few useful headers, then I simply concatenate all the values I want into a single string (with the pipe character as a separator), and then I call queue.setValue(...) to add this element to the queue. As far as this function is concerned, this task is now offloaded into the queue, and is no longer its concern.
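For illustration, the encode/parse round trip can be sketched as a tiny helper – the field names here are my own invention, since the post doesn’t enumerate exactly which headers it captures:

```java
// Hypothetical helper for the pipe-delimited payload passed through the queue.
public class AnalyticsPayload {

    // Concatenate the values we care about into a single queue-friendly string.
    public static String encode(String shortCode, String userAgent, String referrer) {
        return String.join("|", shortCode, userAgent, referrer);
    }

    // Split the queued string back into its separate values on the other side.
    public static String[] decode(String payload) {
        return payload.split("\\|", -1); // -1 keeps trailing empty fields
    }
}
```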

Queue Processing

Now we move over to the other side of the queue, which I call the processing function. I won’t include the actual data analytics discussion here (because I haven’t done much with it, and so I will cover it at a later date when I have more to say), but what I will cover is getting the trigger, and sending a notification out to Slack.
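Again, the original listing isn’t reproduced here; a minimal sketch of a queue-triggered function (class and queue names are assumptions) looks something like this:

```java
import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.QueueTrigger;

public class Processing {
    @FunctionName("processing")
    public void run(
            // Fires whenever an element is added to the "requests" queue; the
            // dequeued string is bound to the 'request' parameter.
            @QueueTrigger(name = "request", queueName = "requests",
                          connection = "AzureWebJobsStorage")
            String request,
            final ExecutionContext context) {

        // Turn the pipe-delimited string back into its separate values.
        String[] values = request.split("\\|");

        context.getLogger().info("Processing analytics for: " + values[0]);
        // ... analytics and the Slack notification go here ...
    }
}
```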

In this code you can see that instead of @QueueOutput we now have a @QueueTrigger annotation. This tells Azure that we are expecting this method to be triggered whenever the specified queue has elements added to it, with the new value that was added to the queue being set as the value of the request string argument. When this queue value is received, you can see we have some analytics code (to be covered in a future post) that takes the pipe-delimited string from earlier and turns it back into separate values, and then logs it and sends a notification to Slack.

That is remarkably simple! We’ve offloaded time-intensive work to a separate function, unblocking our redirect function and enabling it to be more responsive to visitors. At the same time, the complexity of the code is kept to a minimum.

This approach to integrating queues into Azure Functions is really useful, but you can also use queues outside of functions as well. There is a good tutorial on how to use the queues API in Java.

Sending to Slack

I like Slack a lot, and I use it for both communications and notifications. I like that it is really easy to set up and integrate with external systems. Because Slack supports incoming webhooks, I simply added a dependency in my Azure Functions to Feign, and wrote a few lines of code to allow me to send messages to my account. Here’s the Slack interface I wrote:
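The interface itself didn’t make it into this post; a Feign interface for an incoming webhook can be as small as the sketch below (the interface and method names are my guesses):

```java
import feign.Headers;
import feign.RequestLine;

// Feign interface for posting to a Slack incoming webhook. The webhook URL
// itself is supplied when the interface is instantiated, so all we declare
// here is a POST with a JSON body.
public interface Slack {
    @RequestLine("POST /")
    @Headers("Content-Type: application/json")
    void sendMessage(String jsonBody);
}
```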

I then wrote a SlackUtil class to make consuming this interface even simpler:
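The SlackUtil listing is also omitted above; here is a sketch of such a wrapper, assuming a Feign interface named Slack with a sendMessage(String) method (my naming), and a webhook URL stored in an application setting called SlackWebhookUrl (also an assumption):

```java
import feign.Feign;

// Hypothetical convenience wrapper around the Feign-generated Slack client.
public class SlackUtil {

    // The webhook URL comes from an application setting, not from source code.
    private static final Slack slack = Feign.builder()
            .target(Slack.class, System.getenv("SlackWebhookUrl"));

    // Send a plain-text message; incoming webhooks expect a JSON "text" field.
    public static void sendMessage(String message) {
        slack.sendMessage("{\"text\": \"" + message + "\"}");
    }
}
```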

To use this code, you can refer back to the processing function earlier: it’s a single line call to send a message directly to the #general channel in my Slack account. The only external element is the webhook URL, which I added as an application setting in Azure, so that it isn’t part of the source code that I ship to GitHub (so people can’t spam me) 🙂 To learn more about how to bring in application settings (both when developing Azure Functions locally and also when deployed to the web), I’ve posted a separate article explaining best practices.


As with my first post on this topic – serverless programming makes server-side development really easy, even for people who are not skilled server-side developers! If you are a Java developer, you should definitely take a look at Azure Functions today – get started with the free tier and go from there! As I noted in my last post – the cost of operating these services is extremely minimal (cents per month).

Environment variables in Azure Functions with Java

It is often desirable to extract out secret information from source code for security reasons. This allows code to be published to source code repos without accidentally providing credentials to other developers. This can be achieved simply by using environment variables, both when running Azure Functions locally, and when deploying your functions to Azure.

To easily set environment variables when running Azure Functions locally, you may choose to add these variables to the local.settings.json file. If one is not present in the root directory of your function project, feel free to create one. Here is what the file should look like:
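The file contents didn’t make it into this post; a typical minimal version is shown below – the AzureWebJobsStorage value here is the local storage-emulator placeholder, and MySecretSetting is just an example of an additional custom key:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "java",
    "MySecretSetting": "some-secret-value"
  }
}
```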

Each key / value mapping in the Values map will be made available at runtime as an environment variable, accessible by calling System.getenv("<keyname>"), for example, System.getenv("AzureWebJobsStorage"). Adding additional key / value pairs is accepted and recommended practice.

Note: If this approach is taken, be sure to add the local.settings.json file to your repository’s ignore file, so that it is not committed.

With your code now depending on these environment variables, you can log in to the Azure Portal to set the same key / value pairs in your function app settings, so that your code functions equivalently when testing locally and when deployed to Azure. Here’s a screenshot for reference:

Serverless programming makes development of web services and triggers really easy, even for people who are not skilled server-side developers! If you are a Java developer, you should definitely take a look at Azure Functions today – get started with the free tier and go from there! As I noted in my first post – the cost of operating these services is extremely minimal (cents per month).