Jim Driscoll's Blog

Notes on Technology and the Web

Archive for the ‘JavaScript’ Category

Running a no-dependencies Node module in Java

leave a comment »

Sometimes you do something just because you wonder if you can. Today's post is a prime example of that.

Is it possible to run a no-dependencies Node module in Java, without running Project Avatar? The answer, of course, is yes, it is.

For my no-dependencies Node module, I picked semver.js – there’s a pretty well defined external interface there, and all it’s really doing is String manipulation, so there are no external dependencies to worry about.

Before I go further, a caveat: I actually did this example in Groovy, mostly to save the extra typing necessary in Java, but the example shouldn’t require any knowledge of Groovy, and it should all work with only minor modifications in pure Java (JDK 8, since I’m using nashorn’s engine, but there’s no reason something similar shouldn’t work with Rhino as well).

If you like, you can run the example just by downloading the git repository and typing


at the command line (provided that node and npm are already installed and in your path – if not, installing node is easy).

If you’ve never used JavaScript from within Java, it’s pretty easy.

ScriptEngineManager manager = new ScriptEngineManager();
ScriptEngine engine = manager.getEngineByName("nashorn");
Invocable inv = (Invocable) engine;

Get a manager, use it to obtain an engine, and (optionally) cast it to an Invocable instance.

With the engine, you can conveniently say

engine.eval("javascript code to run")

while with the invocable, you can say:

inv.invokeMethod(jsObject, "methodName", arg1, arg2)

which is far more convenient if you don’t want to continuously do .toString and string concatenation.

So, with those two basic methods, let’s run a node module. First, we’ll need to set up an exports object, which node modules expect to exist.

engine.eval('exports = {}')

Then load and evaluate the semver code (using Groovy’s File.text shortcut to read the file):

File semverjs = new File('./node_modules/semver/semver.js')
engine.eval(semverjs.text)

Next, alias the exports object to a semver object, so our JS code will look a little more natural, and grab that object to use as the context object for invokeMethod (engine.get(varname) fetches the JS object from Nashorn, and lets you use it in Java):

engine.eval('semver = exports')
def semver = engine.get('semver')

With that setup, we can do a simple eval:

println engine.eval('semver.clean("1.2.3")')

or a simple invokeMethod:

println inv.invokeMethod(semver,"clean", "1.2.3");

A somewhat more complex invokeMethod (passing multiple arguments as an array; in Groovy I needed to say “as Object[]” to do an inline cast to an array of Objects):

println inv.invokeMethod(semver,"lt",['1.2.3','4.5.6'] as Object[])

but when we pass in an array as one of the parameters, it all goes sideways:

println inv.invokeMethod(semver, 'maxSatisfying',
                [['1.2.3','1.3.0'] as Object[],'~1',true] as Object[])

will return

TypeError: [Ljava.lang.Object;@c667f46 has no such function "filter" in  at line number 912

So, that's not good. What's going on? When you call invokeMethod with the array of parameters, Nashorn places each of them, as it receives them, into the list of parameters of the JavaScript function. But for whatever reason, the Nashorn dev team decided not to automatically convert Java arrays into JavaScript arrays during this process – so when semver.maxSatisfying tries to manipulate the first parameter as if it were a JavaScript array, it fails. And I cannot find a public Java API in Nashorn to do the conversion. But there is the Nashorn JavaScript function Java.from, which does exactly that conversion.

There are two ways around this for this use case. I'm not especially fond of either.

First, you can install a shim: instead of calling the function which expects the JavaScript array, you call the shim, which does the conversion from Java to JavaScript before delegating.

def shim = '''
semver.maxSatisfyingHack = maxSatisfyingHack;
function maxSatisfyingHack(rversions, range, loose) {
  var versions = Java.from(rversions);
  return maxSatisfying(versions, range, loose);
}
'''
engine.eval(shim)
println inv.invokeMethod(semver, 'maxSatisfyingHack', [['1.2.3','1.3.0'],'~1',true] as Object[])

That works, but now you’re modifying the underlying JS, which isn’t too nice.

Alternately, you can use Nashorn’s built-in JavaScript Java object, and call its from method using invokeMethod.

println inv.invokeMethod(semver, 'maxSatisfying', 
    [inv.invokeMethod(engine.get('Java'),'from',['1.2.3','1.3.0']),'~1',true] as Object[])

The downside of this method is that you’re using invokeMethod twice for each invoke, which is going to be a bit expensive.

Essentially, a node module without dependencies is nothing more than straight JavaScript with some conventions, so it’s not surprising that integration is possible. At some point, I’ll try integrating modules with dependencies – that should be much more involved.

Written by Jim Driscoll

March 8, 2014 at 3:22 PM

Posted in Groovy, Java, JavaScript, node

Learning JavaScript

leave a comment »

In the last couple weeks, I’ve had three different people ask “What’s the best way to learn JavaScript?”.

As all engineers know, if you do something more than twice you immediately want to automate it, so here’s a quick description of what I think is the best way to learn JavaScript.

First, get Crockford’s book: JavaScript: The Good Parts.

I think of it as filling the same place for JavaScript as the K&R book does for C – a baseline where you should start, as well as a clear, concise description of the language with only minimal digressions.

Next, I’d encourage you to read it, cover to cover… Then read it again. It’s good enough that I don’t think you’ll mind.

To try out your skills, you’ll probably want a command line interpreter that reads JavaScript. You could always use the “jrunscript” program that comes with Java – but I think an even better choice would be to pull up the console in Chrome, and start typing. Better still would be to create a small program on disk, and include it in a simple HTML web page, then use the Chrome console to add and subtract behaviors that way.

Once you understand why [1,2,13].sort() returns [1,13,2], you’re ready to move on…
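If that result seems mysterious, here’s the behavior in a nutshell, runnable in any console:

```javascript
// Array.prototype.sort converts elements to strings by default and
// compares them lexicographically, so "13" sorts before "2".
console.log([1, 2, 13].sort());                                  // [ 1, 13, 2 ]

// Supplying a numeric comparator restores the expected order.
console.log([1, 2, 13].sort(function (a, b) { return a - b; })); // [ 1, 2, 13 ]
```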

Next, you’ll probably want to use JavaScript with HTML, on a web page. I strongly recommend you get Flanagan’s JavaScript: The Definitive Guide. At 1100 pages, it’s big. Real, real big. Don’t worry, you don’t have to read it right away – it’s just a great reference when you get stuck. It really can’t be beat for describing all the different JavaScript functions you can operate on in the DOM. There are other, free resources (notably at Microsoft and Mozilla) which do much the same thing, but nothing beats this monster of a manual for answering most of your questions.

But instead of going head-first into DOM API programming, I recommend that you instead also check out some of the wealth of libraries out there. Two in particular stand out: jQuery and underscore.

jQuery has tons of books out there, and I don’t have a favorite (I did read a few) – but before you get one, I’d recommend checking out the API Docs. They’re small, and the API is pretty well focused. There’s also a Learning Center, where they’ve gathered all the best information for getting started.

The underscore.js docs are so small, I think any book would be superfluous. Don’t worry if you don’t understand what most of the functions are for when you first look over them – it’ll (mostly) become obvious once you’ve used JavaScript to write a few simple programs. Just try to make yourself familiar with most of what it does, so you know that there’s a better way than writing hacky code yourself.

Master class stuff is mostly even more a matter of opinion than beginner work, but I rather like Addy Osmani’s Learning JavaScript Design Patterns, which is either available for free, or for purchase (and if you like it, do purchase a copy, to encourage that sort of behavior).

Similarly, both John Resig and Stoyan Stefanov are legends in the JavaScript community, and everyone thinks highly of their books (though I’m somewhat ashamed to say I haven’t cleared my schedule to do more than skim them).

That should be more than enough to get you started. I’d love to hear any suggestions anyone may have to improve this article.

Written by Jim Driscoll

January 31, 2014 at 12:03 PM

Posted in JavaScript

Thin Server the Hard Way (Routing)

leave a comment »

This post is part of a series examining the Modern Web. Currently, I’m trying to assess pain points in creating a Single Page application, and to do that, I’ve created a simple application which does inventory management. You can find the (somewhat sloppy) code up on github, if you want to follow along.

Previously, I covered the basic architecture. Today I’d like to examine how to handle routing via a front controller. As I’ve mentioned before, this proved to be one of the easier tasks.

As background, recall that the anchor is the last portion of a URL, set off from the rest by the # character, which is also called a hash (or octothorpe, if you’re a serious geek). All of our href values will just contain these anchors, such as href="#home". When a user clicks on such a link, we want to rerender the page to make it look to the user like they’ve gone to a new place – but without having to roundtrip to the server to get all of the new HTML. If we’ve cached values, we may not have to go to the server at all. This gives the user a much snappier experience, and is pretty much how most modern web sites work nowadays (though some also use the newish history.pushState function, which lets you avoid all this anchor stuff on compatible browsers).

The pattern to follow here is a simple one: For any action which you want the user to be able to bookmark, use an anchored URL. For any action which is inappropriate to bookmark (such as deleting a record), use a click handler.

First, we create a module (jqcontroller) which will handle all the routing. Inside it, we’ll create a hardcoded routing table, which will associate names with the route to take:

// routing table
var routingTable = {
    home: jqproduct.displayHome,
    products: jqproduct.displayProductsPage,
    categories: jqproduct.displayCategoriesPage,
    productAdd: jqproduct.displayProductAddPage,
    defaultPage: jqproduct.displayHome
};
var defaultPage = jqproduct.displayHome;

So, when we receive a URL that looks like #home, we’ll call the function jqproduct.displayHome.

We also need to create an onhashchange handler:

function hashChangeHanderSetup() {
    // when the hash changes, go through the router
    $(window).on('hashchange', function router() {
        jqcontroller.route(window.location.hash);
    });
}

Here, we’re using jQuery to add a new handler on the window object. When the URL’s hash changes, call the jqcontroller.route function, passing in the new hash value.

Of course, we have to call that setup method during initialization for it to work. While we’re at it, let’s allow for routing to an initial location, so that when users bookmark the location, navigating back to it functions correctly:

initializeModule: function initializeModule() {
    hashChangeHanderSetup();
    // initial route
    jqcontroller.route(window.location.href);
}
The actual router code called by these functions couldn’t really be much simpler, though it’s complicated by one additional requirement – we also want the hash to be able to carry parameters, so that if you search, for instance, by product name, the hash may look like #products&search=Time. We’ll need to strip that out, so we’ve created an additional utility method to do it, called getPage:

route: function route(url) {
    try {
        var hash = url.split('#')[1];
        if (!!hash) {
            location.hash = hash;
        }
        var page = pub.getPage();
        if (!page || !routingTable[page]) {
            defaultPage();
        } else {
            routingTable[page]();
        }
    } catch (error) {
        // in production, this could write to console,
        // or do something else useful with error reporting
    }
}
Here, the meat of the code is simply calling routingTable[page](), which means “look up the value of page in the routing table, and execute that as a function”.

So, that’s it in a nutshell. As I mentioned, there’s additional code to handle parameter passing in a hash, but otherwise, there’s not much else.
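For reference, a getPage-style helper can be sketched in a few lines. This is a hypothetical reconstruction, parameterized by the hash string for illustration; the real code reads location.hash itself:

```javascript
// Hypothetical sketch of a getPage-style helper: drop the leading '#',
// then keep only the text before the first '&', which names the page.
function getPage(hash) {
    if (!hash) {
        return null; // no hash: the router falls through to the default page
    }
    return hash.replace(/^#/, '').split('&')[0];
}

console.log(getPage('#products&search=Time')); // "products"
```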

As pain points go, this isn’t so bad. It’d be nice to have all this code encapsulated in a reusable library, but doing it myself wouldn’t be a terribly difficult task. Of more concern is that there isn’t any support in my code for history.pushState() and related APIs. Though as I mentioned, there needs to be server side support for that as well.

So, any MV* framework would need to support such a simple front controller pattern, as well as (optional) pushState. But since that’s a rather low bar, I don’t expect that to be an issue.

Next up, I’ll talk about implementing the Model layer, which was another fairly easy task.

Written by Jim Driscoll

January 22, 2014 at 6:54 PM

Posted in JavaScript, web

Thin Server the Hard Way (Basic Architecture)

leave a comment »

As I mentioned in my previous post, I’m looking to create a basic Thin Server Architecture without any particular framework, mostly to see what the pain points are. Knowing what problems these MV* frameworks are trying to solve is going to be critical in further evaluation.

So, with that as the goal, I created a simple Thin Server front end around the REST endpoints provided by QEDServer.

I’ve set up a Github repository to hold all the Thin Server clients I’m writing, and made this project a subdirectory in the repo. To use this subdirectory, download QEDServer, clone the Github repo, and symlink QEDServer’s public directory to point to the subdirectory. I do recommend you try this out if you’re going to read further, since it’ll show clearly what kinds of nifty responsive layout options you get from Bootstrap, if nothing else.

The app is really just two monolithic blocks – a rather large HTML file, and a very large JavaScript file. There are also two other files – a load indicator gif (generated from a lovely website devoted to that purpose), as well as a tiny css file to augment the base Bootstrap setup.

As a side note before we go any further, I do feel the need to mention that I’m not particularly proud of this code – it served its purpose, and it’s mostly bug free, but as I continued on, I started a number of refactorings which I never quite finished. For that reason, I almost didn’t release it – but in the end, I decided it was worthwhile as a launch point for a discussion.

So, on to the code.

HTML file

The HTML file (about 250 lines) can be thought of as being divided into four separate sections. I’ll outline them in summary, and then go into detail in a later post. For added maintainability, I went with an Unobtrusive JavaScript approach, meaning that there’s no JavaScript in the HTML at all, just script tags which act on the HTML once loaded.

Header info

The header info contains the loading information for all the base JavaScript and CSS. Since I wanted to keep this simple, I just used CDN loaded versions of all the libraries. Note that some additional shims are required to get things running for IE8, but that’s not something I was interested in testing out – like most people, I can’t wait for that browser to die in a fire.

Nav bar

This section contains the Navigation and branding that’s used throughout the application. The markup is pretty simple, and really shows off what kinds of things that Bootstrap can do. That’s the topic for a whole separate post.

Swappable Divs

Underneath the navigation section are all the different “pages” which will be visible to our users. Since only one will be visible at any given time, I opted to place all of them in separate divs, and then switch between them via hide/show. This is a very performant way to do things, but has two drawbacks – you take a hit on initial load, and for large apps, it will become utterly unwieldy.

Cloning area

Similarly, I have a separate cloning area for sections of HTML code which I’ll be copying and placing into the app, completely wrapped in a hidden div. Again, this is a simple, fairly performant way to do things, at a cost in initial load and maintainability.

JavaScript file

At almost 800 lines, this ended up being a pretty big lump of code. I opted to use the module pattern (with import and export) to organize things. I further divided the code into several modules, to get a proper division of responsibilities – and since I was going to be evaluating MV* frameworks, it seemed to make sense to use a similar structure.

View layer

The first module just controls the view layer (in conjunction with the HTML, which means it doesn’t fit neatly into an MVC pattern). A variety of functions handle displaying the different pages via show/hide on the divs in the page, as well as event handlers, an alert messaging system, and other action functions for adding and deleting products. I refactored this code a number of times (and actually stopped in the middle of my last refactoring), but never really got the kind of cleanliness that I wanted. In particular, I really wanted to separate out the binding between the rendered HTML and the functions that acted on it – the kind of separation I felt would be necessary for maintainable code. Of the three layers I tackled, this one left me the most dissatisfied.

Front Controller / Router

By using onhashchange as a listener for the routing code, it was pretty easy to come up with a basic routing mechanism. With the exception of some annoying boilerplate that I had to write for parameter handling, this proved to be some of the easiest code to write for the whole app. While complexity would grow linearly with the complexity of the app, it doesn’t look like this is really something that other MV* frameworks are going to address – but maybe they’ll surprise me. It would be nice to have something that handled history.pushState automatically, but since QEDServer doesn’t really handle that, I didn’t take a stab at it.

Model layer

A thin ajax layer across the REST interface of QEDServer was, like the router code, rather easy to write (thanks to jQuery’s excellent ajax support). Since it’s only a thin layer, it does expose the underlying data structure of the REST server to the View layer, but you could just as easily have it massage the returned data into any desired format, in the event that the returned data changed its structure. While there was a tiny bit of boilerplate, this doesn’t seem to be that big a deal. Adding in CORS support looks easy, but I didn’t try it, since QEDServer didn’t support it. Like the router code, it looks like complexity here will grow linearly with the complexity of the REST API being modeled, and it’s not clear how a framework could really improve on things. Again, I’m hoping somebody will surprise and delight me by proving me wrong.
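As one example of the kind of massaging mentioned above, the mapping could be isolated in a single function. The field names here are assumed for illustration, not QEDServer’s actual schema:

```javascript
// Hypothetical mapping layer: translate the REST server's wire format
// into the shape the view layer wants, so a wire-format change only
// touches this one function.
function toProductViewModel(wire) {
    return {
        id: wire.id,
        label: wire.name,
        inStock: wire.quantity > 0
    };
}

console.log(toProductViewModel({ id: 7, name: 'Widget', quantity: 3 }));
// { id: 7, label: 'Widget', inStock: true }
```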

Initialization and utility methods

The end blocks contain a variety of utility and initialization methods. There was a fair bit of code in here that really felt like reinventing the wheel, and in the end, I even wrote a really primitive templating solution – I’ve no doubt that this is tackled in any number of other places (and more recent efforts have certainly proved that out).

Pain points

I was more than a little surprised by what I found. The two bits I thought would be hard, routing and Ajax retrieval from REST, were handled pretty easily, while the part I thought I knew how to do, view manipulation, proved to be a pretty tough nut to crack.

  • Templating is a pretty large need.
    • Balancing initial load time with performant changes
    • Better code organization for large applications
  • A simple setup for routing would be convenient, but not especially critical
  • Removing some boilerplate for REST endpoint wrapping would be a nice-to-have
  • View state management looks to be critical.
    • Binding both data values and entire views to HTML tags
    • Binding event handlers
    • Managing view transitions and lifecycle

I’ve already ported this application to the first MV* framework that I looked at, Backbone, but it’s probably worthwhile going through a few more aspects of this initial effort before diving in. Look for that next.

Written by Jim Driscoll

January 22, 2014 at 6:41 PM

Posted in JavaScript, REST

Thin Server the Hard Way (Getting Set Up)

leave a comment »

After the diversion I just had with Java 8, time to get back to describing some features of the Modern Web.

There are any number of MV* client side frameworks out there. I’ve already mentioned TodoMVC, where you can find an extensive list, as well as sample code for each.

But before you evaluate tools, it always pays to know what pain points you’re trying to solve. So, with that in mind, I decided my first task would be to do a Thin Server application the hard way, using only jQuery to manipulate the DOM and handle the data.

Now, I didn’t want to come away from this exercise having learned nothing new, so to keep it interesting, I added in one new dependency, Bootstrap.

So, here’s the recipe list I started with to develop the application:

Tools in Use

  • Brackets, which I wanted to evaluate as an IDE
  • QEDServer, which provides default REST endpoints, as well as a public directory to serve files
  • jQuery, because why on earth would you use the built in DOM APIs if you didn’t have to?
  • Bootstrap, to make the site look pretty

So, not quite starting at the bare metal, but close enough.


jQuery

I’m not going to go over jQuery at all in describing my solution. Even though this was the first time I used it for writing anything more than a few lines, I think it’s pretty likely that anyone reading this will almost certainly know it. And if you don’t… There are any number of books out there on it, but before you invest in a book, just check out the API Documentation. It’s a small API, and quite restrained in what it’s trying to accomplish. Essentially, it’s for DOM manipulation, AJAX requests, and a handful of utility functions to make those two tasks easier. If you already know much about the browser DOM, it’ll take you a weekend to get up to speed. If you don’t know the browser DOM… then that’s the problem you need to solve, not learning jQuery.


Bootstrap

There are a number of “website boilerplate” projects out there. Besides Bootstrap, Foundation and HTML5 Boilerplate seem to be the most popular – but here’s a list of 10, if you’re interested.

I picked Bootstrap simply because it’s the most popular one right now, and they handle a bunch of things out of the box that I thought would be quite tricky for me to do on my own.

Bootstrap provides you with a set of CSS and JavaScript libraries that you can use as a starting point for your pages, and by default, their look and feel is both clean and modern. Additionally, adding simple effects (like collapsing navigation bars, popups, and closable alert boxes), can be done almost entirely declaratively. Like jQuery, I found it to be incredibly simple to use, and rock solid – I only found one bug during the process, and that was certainly because I was doing something that was an unexpected use of the product.

Also, because it’s the most popular framework, I was able to find lots of information on the web about various setup questions, including the fix for the bug I found.

Bootstrap’s only requirement was that jQuery be included in the same page, so that also worked out pretty well for me.


I’d certainly recommend both libraries as something that a beginner should start out with – they’re both self contained and easy to learn.

Next up, I’ll outline the architecture that I used to create my first Thin Server application.

Written by Jim Driscoll

January 20, 2014 at 3:02 PM

IDEs for the Modern Web

with 2 comments

As with the discussion of Thin Server, if you’ve already got a favorite IDE for the Modern Web, feel free to skip forward. It’s unlikely that I’m going to change your mind.

A programmer without tools is a sad little thing. Reduced to coding in vi like some sort of caveman, or worse, edlin, like an animal.

So, first order of business was to do a quick survey of what kind of IDEs folks were using for modern Web Development.

If you’re a Java programmer, you probably use Eclipse, IntelliJ IDEA, or even Netbeans (which is actually my favorite for small projects). And while these can tackle the job of web programming, I wanted something purpose built for the task – I didn’t intend to do any Java work as part of this process, and so there was no need for any of the nice features those tools give you.

I wanted to pick a few different tools with different workflow characteristics, and I didn’t want to spend any money up front, though I’d be happy to license whichever I liked, as long as it was in the sub-$100 range. Quickly, I was able to narrow down the field to three, and at this point, I’ve tried all of them for about two weeks each.

The Three Candidates

  • WebStorm, a “does everything” IDE from JetBrains
  • Sublime Text 2, a visual editor which relies more on outside processes than built in functions
  • Brackets, an editor built entirely in JavaScript, extensible in JavaScript as well


Brackets

I picked Brackets first, because it was entirely free and open source – meaning, in part, that I’d never have to pay for it even if I liked it. Brackets had also gotten a lot of positive buzz at the HTML5 Dev Conference, and I certainly wanted to see what the fuss was about.

Brackets is currently under very active development, and was at Sprint 31 at the time I tried it. (It’s already in Sprint 34 at the time of this writing.)

At first, I have to say I really quite liked it. It’s surprisingly small and fast, and since it’s extensible in JavaScript, there are any number of user-written plugins which you can adopt into your workflow. In particular, I immediately wanted to add code folding, jshint and htmlhint support, and additional themes. There was even a code completion plugin (and although it was pretty primitive, without inferences, it generally got the job done).

With these plugins installed, Brackets was a rather useful lightweight IDE, with active syntax highlighting, code folding, and a color theme I could actually read.

But… it’s obvious that Brackets is still in its early days. The plugins would sometimes interfere with each other, and bulk editing operations weren’t really fully there yet.

Additionally, Brackets offered integration with Chrome for debugging – and when it worked, it was wonderful – but it would often get wedged, and require a restart of Chrome and Brackets to resume.

So while I’ll definitely be watching Brackets closely over the next year, it’s not for me at this time.


WebStorm

Reportedly an outgrowth of JetBrains’ popular PhpStorm tool, WebStorm will be familiar to anyone who’s done work in IntelliJ IDEA. The interface is pretty much the same, the inspectors are the same, and so on. So, if you love working in IntelliJ, then the odds are good that you’ll love working in WebStorm.

I have to admit that I never really loved working in IntelliJ. While there are an endless number of knobs you can fiddle with to get your desired behavior, it’s always been a common problem for me that I’ll end up burning 30 minutes of precious coding time trying to find the right option.

As one example, an errant right click in my HTML document resulted in the HTML inspector displaying an alarming number of (incorrect) errors in my code. Apparently, I’d accidentally changed how the inspector operated. You’d think that that wouldn’t take long to find and correct… but as usual, it was about 20 minutes of looking.

WebStorm also offered a built-in build system, which again is very similar to the one found in IntelliJ. While this was handy when I was just starting out, it began to chafe a bit when I tried integrating external build tools – which, I was reminded, was one of the things that frustrated me about IntelliJ as well.

I did run into one significant bug. Like Brackets, WebStorm integrates with Chrome to do debugging, via a Chrome plugin. The plugin allows you to use the very nice IDEA debugging interface, and never need to leave your IDE.

In principle, this is awesome. In practice, not so much. Like Brackets, this communication could easily become wedged, and require a restart to resume. In the end, I ended up just using Chrome’s most excellent debug tools right in the browser, and you know what? Staying inside the IDE is overrated. I lost almost nothing (except frustration) by having to swap back and forth between the two toolsets.

In the end, I certainly liked WebStorm more than Brackets – it was stable and almost bug free. But IDEA’s insistence on controlling the entire build infrastructure led me to consider using what was for me a surprising choice – Sublime Text, an editor which was almost the polar opposite of the IDEs I’ve been using for the last 10 years.

Sublime Text

In the third stage of my project (which I’ll detail at length in a later post), I moved to using a Yeoman setup with Grunt (a make-like task runner). Rather than learn how to integrate WebStorm into the build system that Yeoman sets up for you as part of installation, I opted to try a real departure from the classic “everything in the tool” IDE.

Sublime Text is a text editor, not an IDE, and I have to admit to a serious bias against using text editors for development. I was slow to move off of them (I was a vi guy for the longest time) when IDEs first came out, but I just couldn’t imagine giving up inline error highlighting. Because, when you think about it, what’s an IDE for? Build, debug, error highlight, edit. Build was being handled (quite well) by grunt. Debug was already being done in browser… and Sublime Text is truly a wonderful text editor.

Sublime Text won me over. Want to edit every matching phrase in a file? It’s only a couple of keystrokes (with no annoying GUI). In fact, you could say that about almost everything in Sublime – no annoying GUI. Nothing that gets in your way.

And inline errors? Grunt handles that. There’s a pretty simple configuration that will run all your files through jshint/jslint and htmlhint on every save, as well as automatically running your tests, all running as a background watch process. Just watch the terminal output in a separate window.

So much to my surprise, I’ll probably be shelling out for a Sublime license. Not the outcome I was expecting.


Of course, that might not be your conclusion. If you like the idea of extending your IDE’s behavior in JavaScript, then Brackets is something you should check out. I’ve got every confidence that they’ll work out their bugs.

And if you like IntelliJ, definitely try out WebStorm, you’ll almost certainly like that too.

Next, let’s look at one last tool, then I can start to code.

Written by Jim Driscoll

December 22, 2013 at 10:26 AM

Posted in JavaScript, tools

Thin Server Architecture

leave a comment »

As I mentioned previously, I’m looking at the new ways of developing Web Applications that have turned up in the last few years. If you already know about Thin Server Architecture, feel free to skip ahead.

Single Page Applications were already the Next Big Thing back in the mid-2000s – serve a single page for your web application, then as much as possible (in practice, almost always) simply serve deltas back to the user to avoid page reloading. The advantage you gain from doing this is a much snappier user experience, with a page that starts to feel a lot like a native app. However, for many (i.e. most) of these initial SPAs, much of the page rendering logic still resided on the server.

Thin Server Architecture takes this a step further. By moving page rendering logic onto the client, you gain additional advantages – using REST means that you can take advantage of proxied requests. Also implicit in this design is that you’re moving state management to the client as well. Moving state management and page generation to the client can radically reduce server load, which immediately gives significant advantages for scalability. Remember that, in total, your users have way more CPU than you do.

Essentially, what you’ve done is keep only database-centric tasks on the server. Authentication (login identity), Authorization (data view permissions) and Validation are the three most commonly cited. The server becomes a thin shell on top of the database. Even validation is done client side, with server side validation done primarily to protect data integrity, not as an otherwise critical part of the control flow.

Is that an oversimplification? Certainly. Just as you can move critical business logic into your database in stored procedures, you can do the same thing with Thin Server. And for some things, like cascading changes to maintain data integrity, this would make real sense.

In addition to page view logic moving to the client, many newish HTML5 APIs also provide additional capabilities that you can exploit. You can store data on the client (or just cache it) via localStorage. You can use history.pushState to update the browser’s URL to add bookmarkability (real URLs, not just anchors! But of course, there’s no support for this in IE8). This works rather well, up to a point.
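For example (a sketch – serializeState is a made-up helper; localStorage and history.pushState are the standard browser APIs, and the guards let the snippet run outside a browser):

```javascript
// Turn app state into a query string, so the URL is a real,
// bookmarkable address rather than just an anchor.
function serializeState(state) {
    return '?' + Object.keys(state).map(function (k) {
        return encodeURIComponent(k) + '=' + encodeURIComponent(state[k]);
    }).join('&');
}

function saveState(state) {
    if (typeof localStorage !== 'undefined') {
        // Cache state client-side so a reload doesn't need the server.
        localStorage.setItem('appState', JSON.stringify(state));
    }
    if (typeof history !== 'undefined' && history.pushState) {
        // Update the browser URL without a page load (no IE8 support).
        history.pushState(state, '', serializeState(state));
    }
}
```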

Where this becomes problematic is when users deep link to those URLs that were generated from history.pushState. While it’s true that clients, in total, have more CPU than you, for many of them (especially those on mobile devices), the amount of CPU they can bring to bear at any given moment is limited. The last thing you want is to do all this work to get a server that can shovel pages out the door with sub-50ms response times, only to have the client’s phone browser spend 3s rendering each one, as it plays through a bunch of JavaScript to build up the correct state.

The obvious solution is to do an initial page render on the server, serve that, and do all further rendering on the client. Additionally, while you’re rendering that initial web page, you can seed all your JavaScript functions with the JSON state that they’ll need to spin up, enabling everything to get up and running after the initial load. This hybrid approach is certainly going to be considered best practice (if it isn’t already), and you can already see folks referring to it as such – though it does seem as though it might be easiest to do with a framework that spans both client and server using the same language and libraries… which is a strong argument for node.js.
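A sketch of that seeding step (the initialState variable name is just a convention I've invented here, not a standard):

```javascript
// Server-side: embed the JSON state the client code will need into the
// rendered page, so the first client render costs nothing but a parse.
function renderInitialPage(html, state) {
    var seed = '<script>var initialState = ' +
        JSON.stringify(state).replace(/</g, '\\u003c') + // avoid </script> breakout
        ';</script>';
    return html.replace('</body>', seed + '</body>');
}
```

The client-side code then checks for initialState on startup and skips its initial round trip to the server.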

Certainly something to watch.

Another buzz phrase that’s popular right now is API First Design. The rise in popularity of mobile devices has led to conflicting needs – there are now effectively two web platforms to design for, a Mobile Web, with severe memory, bandwidth and resource constraints, and a Desktop Web, where those restrictions only apply modestly, if at all. (This is known as the “web first” vs. “mobile first” argument.) The API First design movement says that you should design your REST API first, around the information you actually wish to convey and manipulate. Once you have that, you can design for anything you want – desktop web, mobile web, an application, or even open things up to a third party. Frankly, the idea makes a lot of sense, and I’m reminded of the reason why Google Plus bellyflopped on its first outing. Lack of a defined API can be crippling to a product in surprising ways.

Hopefully, this was helpful in understanding this new Web world we’re living in.

Next, let’s look at what folks are using for IDEs these days.

Written by Jim Driscoll

December 20, 2013 at 10:58 PM

Posted in JavaScript, REST, web

A Visit to the Modern Web

leave a comment »

My previous post covered how I came to be a bit of a time traveller to the world of modern Web technology.

This time, let’s talk more about the world as I left it.

The Web Development World in 2008

Single page applications (SPAs) were all the rage. The various JavaScript libraries were duking it out. GWT was compiling Java into JavaScript. There was quite a bit of tension between the SOAP and REST camps, and JSON was still kind of a new thing (IE added support, finally, in 2008). Ruby on Rails had reached Maximum Hype, and subsided, and everyone was writing their own Rails-like framework in their own favorite language.

On the Java side, there was still a notable tussle between varying page definition frameworks, like Tapestry, Wicket and JSF – all server side technologies, though support for SPAs was rapidly being added. IE6 was still a (rapidly vanishing) thing, and full adoption of things like WebSockets was forecast to be years away. For client side work, libraries were primarily downloaded off a website and incorporated into existing build frameworks (Maven, for instance, if you were a Java shop), and testing was done with tools like Selenium. Relational databases were pretty much the only game in town, though some bleeding edge early adopters were using other technologies (the term NoSQL wasn’t even in common use until 2009).

Contrast that with what I’m seeing now… Warning: Opinions ahead.

The Web Development World of late 2013

JSON won

As a complete non-surprise, JSON appears to have won out over XML as a data-interchange format. It’s just so much simpler for the simple case, and with native support in pretty much every available browser, I don’t see XML used anywhere client-side except for the DOM. From my recent observations, it seems to be rapidly bleeding into the server side as well. And speaking of non-surprises…

REST won

As a Web partisan, I can’t say I’m especially surprised by this one. Yes, I know that (insert name of shop here) is actively using SOAP and loves it, just as there were always shops who swore by the control that CORBA gave you. But really, for mainstream adoption, REST won, and won big. This leads directly to the next point…

SPAs are pretty much all Thin Server

Most SPAs that I’ve seen described for startups are pretty much all based on the Thin Server architecture: stateless REST calls, transmitting JSON to the client. The advantages for scaling are simply so large, and the performance benefits that you can get from proxied requests are also so large, that it’s the default architecture adopted by most new sites. As a (former) stateful framework guy, I’d argue there’s a place for that architecture too, but there’s no denying which way things are moving. But as a downside to this, you really need a new way to organize your JavaScript on the client. More (much more) on that shortly. (Also, if you’re not sure what Thin Server really is, I intend to go over that in a later post.)
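The client half of that arrangement looks something like this sketch (the /api/todos URL is hypothetical; parseTodos just unwraps the JSON payload, and is split out so it works anywhere):

```javascript
// All state arrives from the server as JSON; no HTML crosses the wire.
function parseTodos(jsonText) {
    var data = JSON.parse(jsonText);
    return data.todos || [];
}

// Browser-only: issue a stateless REST call and hand the parsed
// result to whatever client-side rendering code you're using.
function loadTodos(done) {
    if (typeof XMLHttpRequest === 'undefined') { return; } // not in a browser
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/api/todos');
    xhr.onload = function () { done(parseTodos(xhr.responseText)); };
    xhr.send();
}
```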

jQuery won

While other libraries such as Dojo and ExtJS are certainly still around and in wide use, jQuery as the main mechanism for DOM manipulation is just ubiquitous. (Trust me, at the time, this wasn’t as obvious an outcome as it now seems.) The other frameworks that it competes with generally try to do a lot of other things in addition to DOM manipulation, and for the client side web, small wins. Which brings me to a second point – for the mobile web, even jQuery seems to be too large for some purposes, and an even newer jQuery compatible library, Zepto, appears to be ascendant. It’s both modular and much smaller than jQuery – but it gets that partially by ignoring IE (they only just added support for IE10, and earlier versions of IE are not supported, and apparently never will be). Which leads to one of my more surprising discoveries…

IE just doesn’t matter like it used to

It’s more than a little early to say: “ding-dong the witch is dead” – IE8 still has at least a 10% market share – but rather surprisingly, I’m seeing more and more libraries as I keep exploring that simply don’t bother to support it. And not just IE8, which is certainly on its way out the door, but IE in general seems to be something developers are increasingly caring less about. (Zepto, for instance, used to bluntly say they didn’t support IE at all, and only recently added IE10 support.) And for some uses, that makes total sense – if you’re targeting mobile, the number of mobile IE browsers is microscopic. Sadly, Microsoft still can’t get it together well enough on standards support, even in IE11, and you constantly run across the phrase “works in all browsers except IE”. Maybe next year.

JavaScript is cool. Really cool.

It was while reading Crockford’s “JavaScript: the Good Parts” that I first really understood that JavaScript was actually a pretty nifty little language. But back then, I didn’t have very much company. Today, everyone is writing in JavaScript. In fact, they prefer it – so much so, that…

Full Stack JavaScript is a thing.

You can write your client side code in JavaScript using an IDE written (and extended) in JavaScript, utilizing JavaScript libraries downloaded and managed by JavaScript, build with a JavaScript program scripted in JavaScript, minified, checked and tested (with a headless server!) via JavaScript, and deploy to a JavaScript server which visits a database which is extended in JavaScript. Crikey.

Preprocessors are a big deal.

For a language that’s getting so much love, there sure are a lot of projects to hide it. CoffeeScript seems to be the most popular by far, but besides that, there’s Dart, TypeScript, and dozens more. They add static typing, async processing, and various other language extensions.

It’s not just JavaScript that’s a target for preprocessors – CSS has its own. The big three are Less, Sass, and Stylus. They add additional structural features to CSS, such as variables, functions, and mixins.

You want to use the Node.js ecosystem

Pretty much everything you’ll want to do to build and maintain a modern website is available to you via the node.js ecosystem. Its package manager, npm, is so ubiquitous that it’s possible to argue that if you aren’t on the npm registry, your software may as well not exist. And though many of the open source tools on there aren’t more than a year or two old, they’re mostly of commercial quality. This is a big enough topic that it deserves a separate post.

All the cool kids are using a NoSQL database

The structure of NoSQL databases makes them much better suited to use in an on-demand cloud instance. Their general lack of transaction support means they’re not a good fit for every task, but when you’ve got a massive amount of mostly static data to serve, they’re just the thing. (They’re also great for searching really large datasets.) More about that in a future post as well, though I don’t yet feel comfortable talking at length on this topic.

There’s a JavaScript library for everything. In fact, there’s 10.

Want to use templating on your web client? Pick from any of a dozen (though there are certainly some that are most popular). Need to manipulate arrays (or fix this so it makes sense)? You’re covered. A lot. Want to test your code? You’re spoiled for choice. If you want to do it in JavaScript, someone’s already written a library for you. And the odds are good that it’s small, fast and covers most corner cases already. Did I say we’re in the early adopter phase? We may have already crossed the chasm to early mainstream.
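That quip about fixing this refers to JavaScript’s slippery this binding, which the utility libraries (and, in ES5, the native Function.prototype.bind) solve. A quick sketch of the problem and the fix:

```javascript
// The "fix this" problem: a method loses its this binding when it is
// passed around as a plain function.
var counter = {
    count: 0,
    increment: function () { this.count += 1; }
};

var loose = counter.increment;
// Calling loose() would increment the wrong object (or throw in
// strict mode), because this is no longer counter.

// ES5's native fix; libraries offered equivalents (e.g. Underscore's
// _.bind) for older browsers that lacked it.
var bound = counter.increment.bind(counter);
bound();
bound();
```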

So, that’s where we are – a changed world of Web Development. Next, let’s talk about the architecture that’s becoming the default choice for large scale web apps – Thin Server.

Written by Jim Driscoll

December 16, 2013 at 5:32 PM

Posted in JavaScript, web

Catching up with the Modern Web

leave a comment »

I’ve been doing work on various Web technologies since Clinton was President. I’ve been involved in Internet technologies since… before that. Throughout the late ’90s and most of the next decade, I worked on Web technologies in one fashion or another, either as an engineer or a manager. We worked on, and invented, any number of technologies whose names you’d recognize (though I usually had a peripheral role).

My last project for Sun, implementing the Ajax front end for JSF‘s reference implementation, was heavy JavaScript work. Because we didn’t want to have any dependencies on external libraries, it was done as close to the metal as you can get in DOM programming. (And since IE6 support was, IIRC, required, it was quite an adventure.)

But then, as Sun spiraled downward toward its eventual dissolution, I ended up leaving hard core Web technologies to work on solutions for a somewhat smaller (though still pretty large) set of customers – and though I still worked on making Web based solutions, I began to focus primarily on more backend problems like DSLs and metaprogramming.

Gradually, I stopped paying attention.

Every now and then, something would peep through my blinders. Since I was paying attention to language developments, I heard about Node.js, CoffeeScript, and Dart (though I admit to not being exceptionally impressed by any of them on first hearing). I attended a number of talks by Douglas Crockford, and I had some vague notion that the Thin Server model was taking over. I heard, often through friends, that NoSQL was a thing now. There were offhand references to client-side MVC here and there, as well as more esoteric things like Hypermedia and HATEOAS.

As part of my job, I began working in JavaScript somewhat heavily again, mostly instrumenting a JavaScript component (CodeMirror, which I can’t say enough kind things about, nice stuff). Almost on a lark, I decided to attend the local HTML5 Developers Conference – one of the major advantages of working in Silicon Valley is that such things are readily available to you if you’re willing to take the time to seek them out.

The conference was extremely eye-opening. While I was napping, a whole new ecosystem had opened up around the world of Web Development. As is usual in technology, after a brief period of consolidation and extension of the latest greatest tech (server side Java), a whole new way of doing things was being born.

In the month since, I’ve immersed myself as much as possible in this new world, and I love what I’ve found.

If, like me, you haven’t been paying as much attention to new developments as you should, it’s time to start. Things have settled down a little bit, but are still in the early adopter phase of the adoption lifecycle — though I believe that they’re rapidly crossing the adoption chasm.

Don’t have a month or two to burn to learn all this new stuff? Stick around – I intend to report what I’ve found.

Written by Jim Driscoll

December 14, 2013 at 9:47 AM

Posted in JavaScript, web

IE, Memory Management, and You

leave a comment »

In a recent blog, commenters took me to task for a perceived IE 6 memory leak. It wasn’t actually there (they were wrong), but in attempting to prove myself right, I found a couple of memory leaks under IE in JSF’s Ajax support. Since I just spent a week learning how all this functioned, I thought I’d set it down so that others could learn from my efforts.

Now, none of the information that I’ll present here is new – it’s been discussed among Ajax programmers for at least the last 4 years. If you’re a web guru, it’s likely that you’re not going to learn anything new here (though I’d welcome any additional information and corrections). But at least a couple of the points I’ll illustrate below are either poorly communicated or misunderstood. I’ll include a number of links at the end of this article. There are also very significant differences between IE 8 (which mostly works), IE 7 (which is bad), and IE 6 (which is just awful). I’ll try to point out the differences as they matter for each.


First – use the right tool for the job: In order to spot leaks, you’ll need to download a tool that can detect them. By all accounts, sIEve is the way to go. It uses IE itself, and introspects it to get its data. The UI is pretty primitive, but I can’t recommend it enough – it’s truly invaluable. Since it uses IE for its work, you’ll need to run it on a machine that has IE6 installed – presumably in a VM. You’ll also want to have it running on a machine that has IE 7 and IE 8 as well, just to be sure. XP fits nicely on a VM that runs on my Mac, and this is how I use it.

Cyclic Leak

Now that that’s out of the way, it’s time to talk about the very worst of the memory leaks in IE – the dreaded cyclic reference, which the commenters thought that I’d committed. Under certain conditions, IE 6 will “leak” DOM nodes, retaining them, and the javascript objects that point to them, until the browser is either shut down, or crashes entirely due to lack of memory. Ugh! To understand how this happens, you really only need to know two things:

  1. IE 6 (and 7!) reportedly has very primitive garbage collection using reference counting
  2. There are two memory spaces in IE, one for JavaScript, and the other for the DOM, and they don’t communicate well.

What could go wrong? Well, lots. The commenters thought that the rule was: A leak will occur if any reference is made in JavaScript to an element that isn’t eventually set to null. That’s close, but not quite correct. The real rule is: A leak will occur if the JavaScript code contains any reference to the DOM that isn’t released in some way, either by going out of scope or being explicitly unset.

When IE 6 sees a JavaScript variable that is pointing to something in the DOM (typically, an element or node), it will record that reference, and not collect it – even when you surf over to a new page. And the DOM won’t be collected, since there’s a reference to it from JavaScript. These two objects, and all the stuff that references them, will stick around until shutdown. In IE 7, the geniuses at Microsoft saw the bug, and said “Hey, I know how to fix that, let’s garbage collect everything when we leave the page.”. Nice improvement, but it still doesn’t fix the bug, since if you’re developing a page that is designed to be used for a long period of time (like many page-as-application apps are now), it’ll still crash the browser. Apparently, they saw the error of their ways eventually, since this behavior is no longer present in IE8. (All this is confirmed by my testing with sIEve.)

So, in the example that I had in my previous blog, there was no memory leak, because the variable that pointed to the element eventually went out of scope. So – how do you create variables that don’t go out of scope? The easiest way is to put them in an object – this was the leak that I eventually found in JSF. The fix there was to null out the object manually. But there’s another, more insidious way to create an object – create a closure. That creates a function object implicitly under the window object, and that will never go out of scope. But the key thing to remember is that you need to be aware of when things go out of scope when coding for IE, and act accordingly.
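To make the scoping rule concrete, here’s a sketch of the leak pattern and its fix, using a plain object standing in for a real DOM element (so it runs anywhere):

```javascript
// In IE6/7 this closure would pin the node, and the node would pin the
// closure via onclick -- a cyclic reference across the JS/DOM boundary
// that reference counting can never collect.
function attachHandler(node) {
    node.onclick = function () {
        return node.id; // closure captures node
    };
}

// The fix: break the cycle explicitly when the handler is done.
function detachHandler(node) {
    node.onclick = null;
    return node;
}
```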

But wait! There’s more

If that was the only problem, life would have been fairly easy for me the last week. But that’s not the only bug that the Web Wizards of Redmond chose to deliver to their unsuspecting consumers. There’s another bug in IE (again, only in IE 6 and 7 – IE 8 appears to have fixed it per my testing), which also leaks DOM nodes that aren’t cleaned up until you leave the page. Apparently, when the IE DOM receives a call from the removeChild or replaceChild functions, it doesn’t actually, err, remove the nodes. It just leaves them there, hanging around the DOM like party guests that don’t have the sense to leave after the host has started handing out coats. While these nodes will eventually be cleaned up when the user leaves the page, this still causes problems for page-as-app programs, as in the cyclic leak for IE 7, above. While the removeChild call is fairly notorious for this, I had to find out about replaceChild with my own testing (though I did find a few obscure references once I went looking for it).

That means that instead of saying node.parentNode.replaceChild(newNode, node), you instead should say something like: node.parentNode.insertBefore(newNode, node); deleteNode(node); (with an appropriate if statement for isIE(), and a deleteNode function that doesn’t use removeChild). And instead of saying node.parentNode.removeChild(node); you instead are reduced to coding something like: node.outerHTML = ''; (again, with browser check). Except that when you combine that with IE’s horrible problems with manipulating tables, it may fail. So instead, you’re probably better off with something like this:

                var temp = document.createElement('div');
                try {
                    temp.appendChild(node); // move the node out of the live DOM
                    temp.innerHTML = "";    // Prevent leak in IE
                    temp = null;            // release our own reference
                } catch (e) {
                    // at least we tried
                }
Again, possibly with an isIE() check.

Hopefully you found this description of IE’s Memory “Management” useful. Here are a few of the links that I used for research that I found the most helpful.

As always, I look forward to any comments. Especially about this topic – I’m far from expert in this area.

UPDATE: John Resig just posted about a very interesting looking tool. Haven’t checked it out yet, but if it’s got him excited…

(This article originally published on my java.net blog on November 13, 2009.)

Written by Jim Driscoll

February 9, 2010 at 11:19 PM

Posted in JavaScript, web

