A slight refactoring

For almost precisely one year I’ve been working in the Microsoft Cloud Developer Advocacy team, focused on all things Java. I’ve enjoyed my time in this team, and I’ve learned a lot. I’ve been staggered by the degree to which the team is a magnet for highly skilled, highly motivated, and highly renowned people (I was the only glitch in the hiring matrix). Personally, my big rewards from the past year were two things:

  1. I managed to improve my public speaking skills and confidence – I started the year a little rusty, but by the middle of the year I was presenting to rooms of 800 and more people quite regularly, and by the end of the year I gave my first keynote at QCon Shanghai. The large number of presentations I gave came about because not enough conferences said ‘no’ to me when I submitted, so I found myself travelling and presenting far more than anticipated! I really enjoyed the ability to share my love of developer experience and good API design far and wide. I put up a page on my site that includes a few of the videos, as well as a DZone Refcard that I wrote.
  2. When I wasn’t travelling, my biggest reward was being able to work across all engineering teams at Microsoft, to ensure their SDKs and APIs were as good as they could be. I really enjoyed the opportunity to bring my Java expertise to a new audience inside Microsoft. I also enjoyed the ability to expose the work of these teams through the Java portal on docs.microsoft.com.

The team I’ve worked with has been exceptional, and the managers I had above me – Bruno Borges, Tim Heuer, Chad Fowler, Jeff Sandquist – even Scott Guthrie – have been extremely open, available, and supportive of me. Microsoft is a great organisation to be a part of, and the Cloud Developer Advocacy team is an awesome team. I highly encourage everyone suitably qualified to consider joining the team. Ping me if you ever want to learn more.

So, today, I’m excited to say that I’m refactoring things a little 🙂 As of next week, I am moving out of the Cloud Developer Advocacy team, and I’m taking on a Senior Software Engineer role in the Azure SDK team. My new role is to serve as the Java representative on the architecture board for Azure SDKs, and to help drive excellence in our Java developer experience. I will be heavily involved in driving the API design for our next generation of Java Azure SDKs, working alongside a team of excellent engineers to make this a reality.

I will continue to work remotely from New Zealand, as I have always done, and I look forward to the challenge ahead.

Triggering Azure Functions with Java when a storage change occurs

Recently my azure-javadocs project has been running into trouble. This project is a simple one that uses Travis CI to clone a number of GitHub projects, generate an aggregate JavaDoc from all of them, and then uses the static hosting feature of GitHub to host that output. The problem is that every build overwrites all of the files in the static branch on GitHub, and over time the accumulated history has become exceedingly large. Travis CI was configured to run daily, and in the last six months or so of running this service, the .git directory in a fresh clone has grown to 4.18GB! This is clearly not going to end well 🙂

So, I set about rearchitecting this project, and now I’m using the free Visual Studio Team Services (VSTS) to do the bulk of the work, as well as Azure Functions to do a little bit of work at the end. Essentially, VSTS now acts as my git repository and build system. For every commit into the git repo, a build is triggered automatically (I commit infrequently – just whenever I want to enable / disable a project from the generated Javadoc – so VSTS is also set to run the build daily). The build steps are specified within the VSTS user interface, and run through the standard steps: clone all the external repos, run Maven tasks to install and generate javadoc. The build concludes by creating a zip file containing the javadoc output, and storing it in an ‘incoming’ container within my Azure storage account.

Since I no longer want to host the JavaDoc within GitHub (because I don’t want to end up with a massive repo again), I make use of a new feature of Azure Storage: built-in support for hosting static sites directly from a storage container (technically the feature is still in preview, but it works well for me). All I need to do is place my static website (i.e. the JavaDoc output) into a special $web container that Azure Storage creates for me when I enable static site support.

The cool part comes next: I’ve written an Azure Function (in Java, of course) that is set up to trigger on any files being added to the incoming container. When a file appears, Azure calls my function, where I can then do whatever I want. In my case, I firstly empty the entire $web container, then unzip the new zip file into the $web container, before deleting the zip file. Once this is all done, the docs are available from java.ms/api (which, coincidentally, is served by another Azure Function written in Java). At present the API docs available here are just a subset of all available docs, but I plan to add the missing ones soon (pending some pull requests into various projects).

The code I wrote is below. The most notable aspect is the use of @BlobTrigger to tell the function that I want to trigger on file changes in the ‘incoming’ container. The code is not overly complex – it mostly just deals with deleting the old files and unzipping the new ones – but I hope it is helpful for anyone wanting to work out how to use Azure Functions as the glue between different services.
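The original listing isn’t reproduced here, but a minimal sketch of a blob-triggered function gives the shape of it. The class and function names are illustrative, and the actual storage operations against the $web container (which use the Azure Storage SDK) are left as comments:

```java
import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.annotation.BindingName;
import com.microsoft.azure.functions.annotation.BlobTrigger;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.StorageAccount;

public class JavadocDeployFunction {

    // Fires whenever a blob lands in the 'incoming' container; the blob's
    // contents arrive as a byte[], and {name} binds to the blob's filename.
    @FunctionName("deployJavadoc")
    @StorageAccount("AzureWebJobsStorage")
    public void run(
            @BlobTrigger(name = "content", path = "incoming/{name}",
                         dataType = "binary") byte[] content,
            @BindingName("name") String filename,
            final ExecutionContext context) {
        context.getLogger().info("New zip received: " + filename);
        // 1. Delete every existing blob in the $web container
        // 2. Unzip 'content' and upload each entry into $web
        // 3. Delete the zip from the 'incoming' container
    }
}
```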

In the fullness of time it is unlikely I will need this Azure Function code at all – VSTS will almost certainly have first-class support for static site hosting baked in before long. But because static site hosting is currently a preview feature, I took it as a good opportunity to learn how to trigger an Azure Function on something other than an HTTP trigger, and I’m publishing it here in case others run into issues trying to achieve this for themselves.

Building a serverless url shortener with Azure Functions and Java, part two

This is the second post in a series of blog posts on building a serverless url shortener in Java and Azure Functions. Here’s a list of all the posts I (may) publish eventually (links will be added as the posts are published):

In this post, I wanted to cover the additional code required to use Azure Storage Queues as part of a function app, to reduce the amount of time that your users must wait before a function returns by offloading work that can be done asynchronously into a queue (which is triggered by a separate Azure Function).

My use case is simple: I’d like to do analysis on the use of my URL shortener – which links are popular, when are they popular, who refers them, etc. This analysis involves looking at various request headers, storing data into data storage, and maybe doing some number crunching. Additionally, I wouldn’t mind being pinged on a Slack channel whenever my URL shortener is used, just for fun… The thing is – all these tasks take time, and while I’m happy to pay Azure to do them, I don’t want them done while my users are waiting to be redirected to their desired URL.

Adding Elements to the Queue

This is a perfect use case for the queues support baked into Azure Functions. What we can do is use a queue in the redirect function that we discussed in the last post, and add the relevant data (as a string) into the queue. This enables the redirect function to focus on its core task and respond to the user as soon as possible. I can then define another Azure Functions function that, instead of being triggered by an HTTP request, is triggered by an item being added to a queue. First of all, let’s update the redirect function code from the last blog post to have the queue provided as a function argument, so that we can write to it:
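The listing isn’t shown inline here, but the updated function looks roughly as follows. The queue name, route, and lookup helper are assumptions for illustration – the important parts are the @QueueOutput binding and the queue.setValue(...) call:

```java
import java.util.Optional;
import com.microsoft.azure.functions.*;
import com.microsoft.azure.functions.annotation.*;

public class UrlShortener {

    @FunctionName("redirect")
    public HttpResponseMessage redirect(
            @HttpTrigger(name = "req", methods = {HttpMethod.GET},
                         authLevel = AuthorizationLevel.ANONYMOUS,
                         route = "{shortCode}")
                HttpRequestMessage<Optional<String>> request,
            @BindingName("shortCode") String shortCode,
            // The new output binding: whatever we setValue(...) here is
            // placed onto the 'requests' queue once the function returns.
            @QueueOutput(name = "queue", queueName = "requests",
                         connection = "AzureWebJobsStorage")
                OutputBinding<String> queue,
            final ExecutionContext context) {
        // Look up the long URL for this short code (table storage in my case)
        String longUrl = lookup(shortCode);

        // Offload the analytics work: pipe-delimited payload onto the queue
        queue.setValue(String.join("|", shortCode, longUrl,
                request.getHeaders().getOrDefault("referer", "")));

        // Redirect the user immediately
        return request.createResponseBuilder(HttpStatus.FOUND)
                      .header("Location", longUrl)
                      .build();
    }

    private String lookup(String shortCode) {
        return "https://example.com"; // placeholder for the table lookup
    }
}
```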

Note the new line in the method arguments starting with @QueueOutput. This is the queue that we will write to later in the function. To actually set this queue up, you should refer to the Azure Functions queue documentation. The name attribute is an internal name, whereas queueName is the name the queue has been given in Azure. The connection value is a connection string – in my case I am using the standard Azure Storage account that is provisioned as part of my Azure Functions deployment (in the same way I use it for table storage, where every short code mapping is kept).

Further down the redirect function code I extract out a few useful headers, then I simply concatenate all the values I want into a single string (with the pipe character as a separator), and then I call queue.setValue(...) to add this element to the queue. As far as this function is concerned, this task is now offloaded into the queue, and is no longer its concern.

Queue Processing

Now we move over to the other side of the queue, which I call the processing function. I won’t include the actual data analytics discussion here (because I haven’t done much with it, and so I will cover it at a later date when I have more to say), but what I will cover is getting the trigger, and sending notification out to Slack.
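The original listing is elided here, but a sketch of such a function (with the queue name assumed to match the output binding from earlier) looks like this:

```java
import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.QueueTrigger;

public class ProcessingFunction {

    // Triggered whenever an element is added to the 'requests' queue;
    // the dequeued string is passed in as the 'request' argument.
    @FunctionName("processRequest")
    public void processRequest(
            @QueueTrigger(name = "request", queueName = "requests",
                          connection = "AzureWebJobsStorage") String request,
            final ExecutionContext context) {
        // Split the pipe-delimited payload back into its separate values
        String[] parts = request.split("\\|");
        context.getLogger().info("Short code used: " + parts[0]);
        // ... analytics, then the Slack notification covered later in the post ...
    }
}
```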

In this code you can see that instead of @QueueOutput we now have a @QueueTrigger annotation. This tells Azure that we are expecting this method to be triggered whenever the specified queue has elements added to it, with the new value that was added to the queue being set as the value of the request string argument. When this queue value is received, you can see we have some analytics code (to be covered in a future post) that takes the pipe-delimited string from earlier and turns it back into separate values, and then logs it and sends a notification to Slack.

That is remarkably simple! We’ve offloaded time-intensive work to a separate function, unblocking our redirect function and enabling it to be more responsive to visitors. At the same time, the complexity of the code is kept to a minimum.

This approach to integrating queues into Azure Functions is really useful, but you can also use queues outside of functions as well. There is a good tutorial on how to use the queues API in Java.

Sending to Slack

I like Slack a lot, and I use it for both communications and notifications. I like that it is really easy to set up and integrate with external systems. Because Slack supports incoming webhooks, I simply added a dependency in my Azure Functions to Feign, and wrote a few lines of code that let me send messages to my account. Here’s the Slack interface I wrote:
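The interface itself isn’t reproduced here; a hypothetical reconstruction in Feign’s @RequestLine style, with the message payload type folded in for brevity, might look like this:

```java
import feign.Headers;
import feign.RequestLine;

// Minimal Feign client for a Slack incoming webhook. The full webhook URL
// is supplied as the target when the client is built.
public interface SlackService {

    @RequestLine("POST")
    @Headers("Content-Type: application/json")
    void sendMessage(SlackMessage message);

    // Feign's JSON encoder serialises this to {"text": "..."}
    class SlackMessage {
        public final String text;

        public SlackMessage(String text) {
            this.text = text;
        }
    }
}
```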

I then wrote a SlackUtil class to make consuming this interface even simpler:
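A sketch of what such a utility might look like – the SlackService type, the GsonEncoder, and the environment-variable name are assumptions here, not necessarily what I shipped:

```java
import feign.Feign;
import feign.gson.GsonEncoder;

public final class SlackUtil {

    // The webhook URL is read from an application setting rather than
    // being hard-coded into the source
    private static final SlackService SLACK = Feign.builder()
            .encoder(new GsonEncoder())
            .target(SlackService.class, System.getenv("SLACK_WEBHOOK_URL"));

    private SlackUtil() { }

    // Send a plain-text message to the webhook's configured channel
    public static void send(String message) {
        SLACK.sendMessage(new SlackService.SlackMessage(message));
    }
}
```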

To use this code, refer back to the processing function earlier: it’s a single-line call to send a message directly to the #general channel in my Slack account. The only external element is the webhook URL, which I added as an application setting in Azure so that it isn’t part of the source code that I ship to GitHub (so people can’t spam me) 🙂 To learn more about how to bring in application settings (both when developing Azure Functions locally and also when deployed to the web), I’ve posted a separate article explaining best practices.

Summary

As I said in my first post on this topic – serverless programming makes server-side development really easy, even for people who are not skilled server-side developers! If you are a Java developer, you should definitely take a look at Azure Functions today – get started with the free tier and go from there! As I noted in my last post, the cost of operating these services is extremely minimal (cents per month).

Environment variables in Azure Functions with Java

It is often desirable to extract out secret information from source code for security reasons. This allows code to be published to source code repos without accidentally providing credentials to other developers. This can be achieved simply by using environment variables, both when running Azure Functions locally, and when deploying your functions to Azure.

To easily set environment variables when running Azure Functions locally, you may choose to add these variables to the local.settings.json file. If one is not present in the root directory of your function project, feel free to create one. Here is what the file should look like:
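Here is a representative example – IsEncrypted and Values are the standard top-level keys, and the entries under Values are placeholders to replace with your own settings:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "<your storage connection string>",
    "FUNCTIONS_WORKER_RUNTIME": "java",
    "SLACK_WEBHOOK_URL": "<your secret webhook url>"
  }
}
```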

Each key / value mapping in the values map will be made available at runtime as an environment variable, accessible by calling System.getenv("<keyname>"), for example, System.getenv("AzureWebJobsStorage"). Adding additional key / value pairs is accepted and recommended practice.
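For example, a small helper (the class and variable names here are mine, not part of the Functions API) that reads a setting and falls back to a default when the variable is absent:

```java
public class Settings {

    // Returns the named environment variable, or the supplied default
    // when it is not set (e.g. when running outside Azure without a
    // local.settings.json entry)
    public static String get(String key, String defaultValue) {
        String value = System.getenv(key);
        return value != null ? value : defaultValue;
    }
}
```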

Note: If this approach is taken, be sure to add the local.settings.json file to your repository’s ignore file, so that it is not committed.

With your code now depending on these environment variables, you can log in to the Azure Portal to set the same key / value pairs in your function app settings, so that your code functions equivalently when testing locally and when deployed to Azure. Here’s a screenshot for reference:

Serverless programming makes development of web services and triggers really easy, even for people who are not skilled server-side developers! If you are a Java developer, you should definitely take a look at Azure Functions today – get started with the free tier and go from there! As I noted in my first post – the cost of operating these services is extremely minimal (cents per month).

Creating custom routes in Azure Functions

I’ve been working on my URL shortener project recently, but I took a few days away from it to start writing some JavaDoc for the Java APIs for Azure Functions. In doing so I learned about a cool piece of API that might not be readily apparent (although I hope it is now that I’ve written documentation!), and so in this blog post I wanted to quickly introduce the route field on the @HttpTrigger annotation.

By default, when you create a function for an HTTP trigger, the function is addressable with a route of the form http://&lt;yourapp&gt;.azurewebsites.net/api/&lt;functionname&gt;. This is fine in many cases, but sometimes you want your endpoint to be parameterised. For example, we’ve all seen URLs such as reddit.com/r/java, where the java can be replaced by any value (I hope I’m not shattering illusions for anyone by informing you that these aren’t all separate HTML pages sitting on a server 🙂 ). Doing this with Azure Functions is trivial, with help from the route property. For example, we could specify a route of products/{category:alpha}/{id:int}, and this would mean that the function is now addressable at http://&lt;yourapp&gt;.azurewebsites.net/api/products/electronics/357.

If the only benefit were that the path was parameterised, we wouldn’t bother using this API – which is why the other half of the feature is the ability to use the @BindingName annotation to bring these arguments into the function. For example, here is a full method signature with route and @BindingName in use (apologies for the odd line wrapping required):
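The signature isn’t reproduced inline here, but a sketch matching the products/{category:alpha}/{id:int} example from above would be:

```java
import java.util.Optional;
import com.microsoft.azure.functions.*;
import com.microsoft.azure.functions.annotation.*;

public class ProductsFunction {

    @FunctionName("products")
    public HttpResponseMessage getProduct(
            @HttpTrigger(name = "req", methods = {HttpMethod.GET},
                         authLevel = AuthorizationLevel.ANONYMOUS,
                         route = "products/{category:alpha}/{id:int}")
                HttpRequestMessage<Optional<String>> request,
            // @BindingName pulls each route parameter in as a method argument
            @BindingName("category") String category,
            @BindingName("id") int id,
            final ExecutionContext context) {
        return request.createResponseBuilder(HttpStatus.OK)
                      .body("Category: " + category + ", id: " + id)
                      .build();
    }
}
```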

As can be seen here, we are receiving the category and id values from the URL endpoint and they are being provided to us as arguments into the function itself, where they are immediately usable. The route string can be quite complex, as there are a number of configuration options available. Microsoft has published useful documentation on routing (just ignore the C# noise 🙂 ).

This has just been a quick blog post on the routing support in Azure Functions, because I thought it was neat. I will keep posting these short snippets as I discover API that delights me. As always though: playing with Azure Functions is really fun – I recommend all Java developers take it for a spin to see what they can achieve when they don’t need to worry about all the underlying infrastructure. Even better, you can get started with the free tier, where you have 1,000,000 free function calls a month, and go from there! If you want more inspiration, check out my series on URL shorteners, built using Java and Azure Functions.