Jim Driscoll's Blog

Notes on Technology and the Web

Archive for the ‘JavaScript’ Category

Running a no-dependencies Node module in Java


Sometimes you do something just because you wonder if you can. Today’s exercise is a prime example of that.

I wondered: is it possible to run a no-dependencies Node module in Java, without running Project Avatar? The answer, of course, is yes, it is.

For my no-dependencies Node module, I picked semver.js – there’s a pretty well-defined external interface there, and all it’s really doing is string manipulation, so there are no external dependencies to worry about.

Before I go further, a caveat: I actually did this example in Groovy, mostly to save the extra typing necessary in Java, but the example shouldn’t require any knowledge of Groovy, and it should all work with only minor modifications in pure Java (JDK 8, since I’m using nashorn’s engine, but there’s no reason something similar shouldn’t work with Rhino as well).

If you like, you can run the example just by downloading the git repository and typing


at the command line (provided that node and npm are already installed and in your path – if not, installing node is easy).

If you’ve never used JavaScript from within Java, it’s pretty easy.

ScriptEngineManager manager = new ScriptEngineManager();
ScriptEngine engine = manager.getEngineByName("nashorn");
Invocable inv = (Invocable) engine;

Get a manager, use it to obtain an engine, and (optionally) cast it to an Invocable instance.

With the engine, you can conveniently say

engine.eval("javascript code to run")

while with the invocable, you can say:

inv.invokeMethod(someJsObject, "methodName", arg1, arg2)

which is far more convenient if you don’t want to continuously do .toString and string concatenation.

So, with those two basic methods, let’s run a node module. First, we’ll need to set up an exports object, which node modules expect to exist.

engine.eval('exports = {}')

Then load and evaluate the semver code:

File semverjs = new File('./node_modules/semver/semver.js')
engine.eval(semverjs.text)

Set the exports object to be the same as a semver object, so our JS code will look a little more natural, and grab that object to use as the context object for invokeMethod (engine.get(varname) will fetch the JS object from Nashorn, and let you use it in Java):

engine.eval('semver = exports')
def semver = engine.get('semver')

With that setup, we can do a simple eval:

println engine.eval('semver.clean("1.2.3")')

or a simple invokeMethod:

println inv.invokeMethod(semver,"clean", "1.2.3");

A somewhat more complex invokeMethod (passing multiple arguments as an array – in Groovy, I needed to say “as Object[]” to do an inline cast to an array of Objects):

println inv.invokeMethod(semver,"lt",['1.2.3','4.5.6'] as Object[])

but when we pass in an array as one of the parameters, it all goes sideways:

println inv.invokeMethod(semver, 'maxSatisfying',
                [['1.2.3','1.3.0'] as Object[],'~1',true] as Object[])

will return

TypeError: [Ljava.lang.Object;@c667f46 has no such function "filter" in  at line number 912

So, that’s not good. What’s going on? When you call invokeMethod with the array of parameters, Nashorn will place each of them as it receives them into the list of parameters on the JavaScript function. But for whatever reason, the Nashorn dev team decided that they would not convert Java arrays automatically into JavaScript arrays during this process – and when ‘semver.maxSatisfying’ tries to manipulate the first parameter as if it were a JavaScript array, it fails. And I cannot find a public Java API in Nashorn to do the conversion. But I can find the JavaScript Nashorn function Java.from, which does that conversion.

There are two ways around this for this use case. I’m not especially fond of either.

First, you can install a shim, so that instead of calling the function that expects a JavaScript array, you call the shim, which will do the conversion from Java to JavaScript.

def shim = '''
semver.maxSatisfyingHack = maxSatisfyingHack;
function maxSatisfyingHack(rversions, range, loose) {
  var versions = Java.from(rversions)
  return maxSatisfying(versions, range, loose);
}
'''
engine.eval(shim)
println inv.invokeMethod(semver, 'maxSatisfyingHack', [['1.2.3','1.3.0'],'~1',true] as Object[])

That works, but now you’re modifying the underlying JS, which isn’t too nice.

Alternately, you can use the underlying JS Java object, and call the from method on it using invokeMethod.

println inv.invokeMethod(semver, 'maxSatisfying', 
    [inv.invokeMethod(engine.get('Java'),'from',['1.2.3','1.3.0']),'~1',true] as Object[])

The downside of this method is that you’re using invokeMethod twice for each invoke, which is going to be a bit expensive.

Essentially, a node module without dependencies is nothing more than straight JavaScript with some conventions, so it’s not surprising that integration is possible. At some point, I’ll try integrating modules with dependencies – that should be much more involved.
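That convention is easy to see in plain JavaScript. Here’s a sketch with a hypothetical mini-module standing in as the file contents (the real semver.clean does more than strip a leading ‘v’, but the shape is the same):

```javascript
// A no-dependencies module is just JavaScript that hangs its API off an
// `exports` object the host is expected to provide before evaluation.
var exports = {};  // what engine.eval('exports = {}') sets up on the Java side

// Pretend this string is the file contents of a module (hypothetical
// stand-in for semver.js):
var moduleSource =
    "exports.clean = function (v) { return v.replace(/^[=v\\s]+/, ''); };";

eval(moduleSource);      // like engine.eval(semverjs.text)
var semver = exports;    // like engine.eval('semver = exports')

console.log(semver.clean('v1.2.3'));  // → 1.2.3
```

Nashorn is doing essentially this, just with the eval happening on the Java side of the fence.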


Written by jamesgdriscoll

March 8, 2014 at 3:22 PM

Posted in Groovy, Java, JavaScript, node

Learning JavaScript


In the last couple weeks, I’ve had three different people ask “What’s the best way to learn JavaScript?”.

As all engineers know, if you do something more than twice you immediately want to automate it, so here’s a quick description of what I think is the best way to learn JavaScript.

First, get Crockford’s book: JavaScript: The Good Parts.

I think of it as filling the same place for JavaScript as the K&R book does for C – a baseline where you should start, as well as a clear, concise description of the language with only minimal digressions.

Next, I’d encourage you to read it, cover to cover… Then read it again. It’s good enough that I don’t think you’ll mind.

To try out your skills, you’ll probably want a command line interpreter that reads JavaScript. You could always use the “jrunscript” program that comes with Java – but I think an even better choice would be to pull up the console in Chrome, and start typing. Better still would be to create a small program on disk, and include it in a simple HTML web page, then use the Chrome console to add and subtract behaviors that way.

Once you understand why [1,2,13].sort() returns [1,13,2], you’re ready to move on…
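(For reference, the reason is that Array.prototype.sort compares elements as strings unless you pass a comparator:)

```javascript
// Default sort converts elements to strings, so "13" < "2" lexicographically:
var byDefault = [1, 2, 13].sort();                                // [1, 13, 2]

// A numeric comparator restores the order you probably wanted:
var numeric = [1, 2, 13].sort(function (a, b) { return a - b; }); // [1, 2, 13]
```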

Next, you’ll probably want to use JavaScript with HTML, on a web page. I strongly recommend you get Flanagan’s JavaScript: The Definitive Guide. At 1100 pages, it’s big. Real, real big. Don’t worry, you don’t have to read it right away – it’s just a great reference when you get stuck. It really can’t be beat for describing all the different JavaScript functions you can operate on in the DOM. There are other, free resources (notably at Microsoft and Mozilla) which do much the same thing, but nothing beats this monster of a manual for answering most of your questions.

But instead of going head-first into DOM API programming, I recommend that you instead also check out some of the wealth of libraries out there. Two in particular stand out: jQuery and underscore.

jQuery has tons of books out there, and I don’t have a favorite (I did read a few) – but before you get one, I’d recommend checking out the API Docs. They’re small, and the API is pretty well focused. There’s also a Learning Center, where they’ve gathered all the best information for getting started.

The underscore.js docs are so small, I think any book would be superfluous. Don’t worry if you don’t understand what most of the functions are for when you first look over them – it’ll (mostly) become obvious once you’ve used JavaScript to write a few simple programs. Just try to make yourself familiar with most of what it does, so you know that there’s a better way than writing hacky code yourself.

Master class stuff is mostly even more a matter of opinion than beginner work, but I rather like Addy Osmani’s Learning JavaScript Design Patterns, which is either available for free, or for purchase (and if you like it, do purchase a copy, to encourage that sort of behavior).

Similarly, both John Resig and Stoyan Stefanov are legends in the JavaScript community, and everyone thinks highly of their books (though I’m somewhat ashamed to say I haven’t cleared my schedule to do more than skim them).

That should be more than enough to get you started. I’d love to hear any suggestions anyone may have to improve this article.

Written by jamesgdriscoll

January 31, 2014 at 12:03 PM

Posted in JavaScript

Thin Server the Hard Way (Routing)


This post is part of a series examining the Modern Web. Currently, I’m trying to assess pain points in creating a Single Page application, and to do that, I’ve created a simple application which does inventory management. You can find the (somewhat sloppy) code up on github, if you want to follow along.

Previously, I covered the basic architecture. Today I’d like to examine how to handle routing via a front controller. As I’ve mentioned before, this proved to be one of the easier tasks.

As background, recall that the anchor is the last portion of every URL, and is set off from it via the # character, which is also called a hash (or octothorpe, if you’re a serious geek). All of our href values will just have these anchor tags, such as href="#home". When a user clicks on that link, we want to rerender the page to make it look to the user like they’ve gone to a new place – but without having to roundtrip to the server to get all of the new HTML. Possibly, we may not have to go to the server at all, if we’ve cached values. This will give the user a much snappier experience, and is pretty much how most modern web sites work nowadays (though some also use the newish history.pushState function, which lets you avoid all this anchor stuff on compatible browsers).

The pattern to follow here is a simple one: For any action which you want the user to be able to bookmark, use an anchored URL. For any action which is inappropriate to bookmark (such as deleting a record), use a click handler.

First, we create a module (jqcontroller) which will handle all the routing. Inside it, we’ll create a hardcoded routing table, which will associate names with the route to take:

// routing table
var routingTable = {
    home: jqproduct.displayHome,
    products: jqproduct.displayProductsPage,
    categories: jqproduct.displayCategoriesPage,
    productAdd: jqproduct.displayProductAddPage,
    defaultPage: jqproduct.displayHome
};

So, when we receive a URL that looks like #home, we’ll call the function jqproduct.displayHome.
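Stripped of the browser plumbing, the lookup-and-execute idea can be sketched on its own (with hypothetical stand-ins for the jqproduct page functions):

```javascript
// Hypothetical stand-ins for the real jqproduct display functions:
var calls = [];
var jqproduct = {
    displayHome: function () { calls.push('home'); },
    displayProductsPage: function () { calls.push('products'); }
};

var routingTable = {
    home: jqproduct.displayHome,
    products: jqproduct.displayProductsPage,
    defaultPage: jqproduct.displayHome
};

// Look the page name up in the table and execute it, falling back to the
// default for unknown hashes:
function dispatch(hash) {
    var page = hash.replace(/^#/, '');
    (routingTable[page] || routingTable.defaultPage)();
}

dispatch('#products');    // calls jqproduct.displayProductsPage
dispatch('#nosuchpage');  // unknown hash falls back to displayHome
```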

We also need to create an onhashchange handler:

function hashChangeHanderSetup() {
    // when the hash changes, go through the router
    $(window).on('hashchange', function router() {
        jqcontroller.route(location.hash);
    });
}

Here, we’re using jQuery to add a new handler on the window object. When the URL’s hash changes, call the jqcontroller.route function, passing in the new hash value.

Of course, we have to call that setup method during initialization for it to work. While we’re at it, let’s allow for routing to an initial location, so that when users bookmark the location, navigating back to it functions correctly:

initializeModule: function initializeModule() {
    hashChangeHanderSetup();
    // initial route
    jqcontroller.route(location.hash);
}

The actual router code called by these functions couldn’t really be much simpler, though it’s complicated by one additional requirement – we also want the hash to contain parameters, so that if you search, for instance, by product name, the hash may look like #products&search=Time – we’ll need to strip that out, and so we’ve created an additional utility method to do that called getPage:

route: function route(url) {
    try {
        var page = pub.getPage();
        var hash = url.split('#')[1];
        if (!!hash) {
            location.hash = hash;
        }
        if (!page || !routingTable[page]) {
            routingTable.defaultPage();
        } else {
            routingTable[page]();
        }
    } catch (error) {
        // in production, this could write to console,
        // or do something else useful with error reporting
    }
}
Here, the meat of the code is simply calling routingTable[page](), which means “look up the value of page in the routing table, and execute that as a function”.

So, that’s it in a nutshell. As I mentioned, there’s additional code to handle parameter passing in a hash, but otherwise, there’s not much else.
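That parameter handling can be sketched as a pair of small helpers (hypothetical names – the real getPage lives inside the module and reads location.hash directly):

```javascript
// Strip the leading '#' and any &key=value parameters to get the page name:
function getPage(hash) {
    return hash.replace(/^#/, '').split('&')[0];
}

// Collect the &key=value pairs into an object:
function getParams(hash) {
    var params = {};
    hash.replace(/^#/, '').split('&').slice(1).forEach(function (pair) {
        var kv = pair.split('=');
        params[kv[0]] = decodeURIComponent(kv[1] || '');
    });
    return params;
}

getPage('#products&search=Time');    // → 'products'
getParams('#products&search=Time');  // → { search: 'Time' }
```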

As pain points go, this isn’t so bad. It’d be nice to have all this code encapsulated in a reusable library, but doing it myself wouldn’t be a terribly difficult task. Of more concern is that there isn’t any support in my code for history.pushState() and related APIs. Though as I mentioned, there needs to be server side support for that as well.

So, any MV* framework would need to support such a simple front controller pattern, as well as (optional) pushState. But since that’s a rather low bar, I don’t expect that to be an issue.

Next up, I’ll talk about implementing the Model layer, which was another fairly easy task.

Written by jamesgdriscoll

January 22, 2014 at 6:54 PM

Posted in JavaScript, web

Thin Server the Hard Way (Basic Architecture)


As I mentioned in my previous post, I’m looking to create a basic Thin Server Architecture without any particular framework, mostly to see what the pain points are. Knowing what problems these MV* frameworks are trying to solve is going to be critical in further evaluation.

So, with that as the goal, I created a simple Thin Server front end around the REST endpoints provided by QEDServer.

I’ve set up a Github repository to hold all the Thin Server clients I’m writing, and made this project a subdirectory in the repo. To use this subdirectory, download QEDServer, clone the Github repo, and symlink QEDServer’s public directory to the subdirectory. I do recommend you try this out if you’re going to read further, since it’ll show clearly what kinds of nifty responsive layout options you get from Bootstrap, if nothing else.

The app is really just two monolithic blocks – a rather large HTML file, and a very large JavaScript file. There are also two other files – a load indicator gif (generated from a lovely website devoted to that purpose), as well as a tiny css file to augment the base Bootstrap setup.

As a side note before we go any further, I do feel the need to mention that I’m not particularly proud of this code – it served its purpose, and it’s mostly bug free, but as I continued on, I started a number of refactorings which I never quite finished. For that reason, I almost didn’t release it – but in the end, I decided it was worthwhile as a launch point for a discussion.

So, on to the code.

HTML file

The HTML file (about 250 lines) can be thought of as divided into four separate sections. I’ll outline them in summary, and then go into detail in a later post. For added maintainability, I went with an Unobtrusive JavaScript approach, meaning that there’s no JavaScript in the HTML at all, just script tags which act on the HTML once loaded.

Header info

The header info contains the loading information for all the base JavaScript and CSS. Since I wanted to keep this simple, I just used CDN loaded versions of all the libraries. Note that some additional shims are required to get things running for IE8, but that’s not something I was interested in testing out – like most people, I can’t wait for that browser to die in a fire.

Nav bar

This section contains the navigation and branding that’s used throughout the application. The markup is pretty simple, and really shows off the kinds of things that Bootstrap can do. That’s the topic for a whole separate post.

Swappable Divs

Underneath the navigation section are all the different “pages” which will be visible to our users. Since only one will be visible at any given time, I opted to place all of them in separate divs, and then switch between them via hide/show. This is a very performant way to do things, but has two drawbacks – you take a hit on initial load, and for large apps, it will become utterly unwieldy.

Cloning area

Similarly, I have a separate cloning area for sections of HTML code which I’ll be copying and placing into the app, completely wrapped in a hidden div. Again, this is a simple, fairly performant way to do things, at a cost in initial load and maintainability.

JavaScript file

At almost 800 lines, this ended up being a pretty big lump of code. I opted to use the module pattern (with import and export) to organize things. I further divided up the code into several modules, to do a proper division of responsibilities – and since I was going to be evaluating MV* frameworks, it seemed to make sense to use a similar structure.

View layer

The first module just controls the view layer (in conjunction with the HTML, which means it doesn’t fit neatly into an MVC pattern). A variety of functions handle displaying the different pages via show/hide on the divs in the page, as well as event handlers, an alert messaging system, and other action functions for adding and deleting products. I refactored this code a number of times (and actually stopped in the middle of my last refactoring), but never really got the kind of cleanliness that I wanted. In particular, I never managed to separate out the binding between the rendered HTML and the functions that acted on it, which I felt would be necessary for maintainable code. Of the three layers I tackled, this one left me the most dissatisfied.

Front Controller / Router

By using onhashchange as a listener for the routing code, it was pretty easy to come up with a basic routing mechanism. With the exception of some annoying boilerplate that I had to write for parameter handling, this proved to be some of the easiest code to write for the whole app. While complexity would grow linearly with the complexity of the app, it doesn’t look like this is really something that other MV* frameworks are going to address – but maybe they’ll surprise me. It would be nice to have something that handled history.pushState automatically, but since QEDServer doesn’t really handle that, I didn’t take a stab at it.

Model layer

A thin Ajax layer across the REST interface of QEDServer was, like the router code, rather easy to write (thanks to jQuery’s excellent ajax support). Since it’s only a thin layer, it does expose the underlying data structure of the REST server to the View layer, but you could just as easily massage the returned data into any desired format, in the event that the returned data changed its structure. While there was a tiny bit of boilerplate, this doesn’t seem like that big a deal. Adding in CORS support looks easy, but I didn’t try it, since QEDServer didn’t support it. Like the router code, it looks like complexity will grow linearly with the complexity of the REST API being modeled, and it’s not clear how a framework could really improve on things. Again, I’m hoping somebody will surprise and delight me by proving me wrong.

Initialization and utility methods

The end blocks contain a variety of utility and initialization methods. There was a fair bit of code in here that really felt like reinventing the wheel, and in the end, I even wrote a really primitive templating solution – I’ve no doubt that this is tackled in any number of other places (and more recent efforts have certainly proved that out).
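A primitive templating function of the sort I mean can fit in a few lines (a sketch, not my actual code):

```javascript
// Replace {{key}} placeholders with values from a data object:
function template(str, data) {
    return str.replace(/\{\{(\w+)\}\}/g, function (match, key) {
        return data[key] !== undefined ? data[key] : '';
    });
}

template('<li>{{name}} ({{count}})</li>', { name: 'Widget', count: 3 });
// → '<li>Widget (3)</li>'
```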

Pain points

I was more than a little surprised by what I found. The two bits I thought would be hard, routing and Ajax retrieval from REST, were handled pretty easily, while the part I thought I knew how to do, view manipulation, proved to be a pretty tough nut to crack.

  • Templating is a pretty large need.
    • Balancing initial load time with performant changes
    • Better code organization for large applications
  • A simple setup for routing would be convenient, but not especially critical
  • Removing some boilerplate for REST endpoint wrapping would be a nice-to-have
  • View state management looks to be critical.
    • Binding both data values and entire views to HTML tags
    • Binding event handlers
    • Managing view transitions and lifecycle

I’ve already ported this application to the first MV* framework that I looked at, Backbone, but it’s probably worthwhile to go through a few more aspects of this first effort before diving in. Look for that next.

Written by jamesgdriscoll

January 22, 2014 at 6:41 PM

Posted in JavaScript, REST

Thin Server the Hard Way (Getting Set Up)


After the diversion I just had with Java 8, time to get back to describing some features of the Modern Web.

There are any number of MV* client side frameworks out there. I’ve already mentioned TodoMVC, where you can find an extensive list, as well as sample code for each.

But before you evaluate tools, it always pays to know what pain points you’re trying to solve. So, with that in mind, I decided my first task would be to do a Thin Server application the hard way, using only jQuery to manipulate the DOM and handle the data.

Now, I didn’t want to not learn anything new during this exercise, so to keep it interesting, I added in one new dependency, Bootstrap.

So, here’s the recipe list I started with to develop the application:

Tools in Use

  • Brackets, which I wanted to evaluate as an IDE
  • QEDServer, which provides default REST endpoints, as well as a public directory to serve files
  • jQuery, because why on earth would you use the built in DOM APIs if you didn’t have to?
  • Bootstrap, to make the site look pretty

So, not quite starting at the bare metal, but close enough.


jQuery

I’m not going to go over jQuery at all in describing my solution. Even though this was the first time I used it to write anything more than a few lines, I think it’s pretty likely that anyone reading this will almost certainly know it. And if you don’t… There are any number of books out there on it, but before you invest in a book, just check out the API Documentation. It’s a small API, and quite restrained in what it’s trying to accomplish. Essentially, it’s for DOM manipulation, AJAX requests, and a handful of utility functions to make those two tasks easier. If you already know much about the browser DOM, it’ll take you a weekend to get up to speed. If you don’t know the browser DOM… then that’s the problem you need to solve, not learning jQuery.


Bootstrap

There are a number of “website boilerplate” projects out there. Besides Bootstrap, Foundation and HTML5 Boilerplate seem to be the most popular – but here’s a list of 10, if you’re interested.

I picked Bootstrap simply because it’s the most popular one right now, and they handle a bunch of things out of the box that I thought would be quite tricky for me to do on my own.

Bootstrap provides you with a set of CSS and JavaScript libraries that you can use as a starting point for your pages, and by default, their look and feel is both clean and modern. Additionally, adding simple effects (like collapsing navigation bars, popups, and closable alert boxes), can be done almost entirely declaratively. Like jQuery, I found it to be incredibly simple to use, and rock solid – I only found one bug during the process, and that was certainly because I was doing something that was an unexpected use of the product.

Also, because it’s the most popular framework, I was able to find lots of information on the web about various setup questions, including the fix for the bug I found.

Bootstrap’s only requirement was that jQuery be included in the same page, so that also worked out pretty well for me.


I’d certainly recommend both libraries as something that a beginner should start out with – they’re both self contained and easy to learn.

Next up, I’ll outline the architecture that I used to create my first Thin Server application.

Written by jamesgdriscoll

January 20, 2014 at 3:02 PM

IDEs for the Modern Web


As with the discussion of Thin Server, if you’ve already got a favorite IDE for the Modern Web, feel free to skip forward. It’s unlikely that I’m going to change your mind.

A programmer without tools is a sad little thing. Reduced to coding in vi like some sort of caveman, or worse, edlin, like an animal.

So, first order of business was to do a quick survey of what kind of IDEs folks were using for modern Web Development.

If you’re a Java programmer, you probably use Eclipse, IntelliJ IDEA, or even Netbeans (which is actually my favorite for small projects). And while these can tackle the job of web programming, I wanted something purpose built for the task – I didn’t intend to do any Java work as part of this process, and so there was no need for any of the nice features those tools give you.

I wanted to pick a few different tools with different workflow characteristics, and I didn’t want to spend any money up front, though I’d be happy to license whichever I liked, as long as it was in the sub-$100 range. Quickly, I was able to narrow down the field to three, and at this point, I’ve tried all of them for about two weeks each.

The Three Candidates

  • WebStorm, a “does everything” IDE from JetBrains
  • Sublime Text 2, a visual editor which relies more on outside processes than built in functions
  • Brackets, an editor built entirely in JavaScript, extensible in JavaScript as well


Brackets

I picked Brackets first, because it was entirely free and open source – meaning, in part, that I’d never have to pay for it even if I liked it. Brackets had also gotten a lot of positive buzz at the HTML5 Dev Conference, and I certainly wanted to see what the fuss was about.

Brackets is currently under very active development, and was at Sprint 31 at the time I tried it. (It’s already in Sprint 34 at the time of this writing.)

At first, I have to say I really quite liked it. It’s surprisingly small and fast, and since it’s extensible in JavaScript, there are any number of user-written plugins which you can adopt into your workflow. In particular, I immediately wanted to add code folding, jshint and htmlhint support, and additional themes. There was even a code completion plugin (and although it was pretty primitive, without inferences, it generally got the job done).

With these plugins installed, Brackets was a rather useful lightweight IDE, with active syntax highlighting, code folding, as well as a color theme I could actually read.

But… it’s obvious that Brackets is still in its early days. The plugins would sometimes interfere with each other, and bulk editing operations weren’t really fully there yet.

Additionally, Brackets offered integration with Chrome for debugging – and when it worked, it was wonderful – but it would often get wedged, and require a restart of Chrome and Brackets to resume.

So while I’ll definitely be watching Brackets closely over the next year, it’s not for me at this time.


WebStorm

Reportedly an outgrowth of JetBrains’ popular PhpStorm tool, WebStorm will be familiar to anyone who’s done work in IntelliJ IDEA. The interface is pretty much the same, the inspectors are the same, and so on. So, if you love working in IntelliJ, then the odds are good that you’ll love working in WebStorm.

I have to admit that I never really loved working in IntelliJ. While there are an endless number of knobs you can fiddle with to get your desired behavior, it’s always been a common problem for me that I’ll end up burning 30 minutes of precious coding time trying to find the right option.

As one example, an errant right click in my HTML document resulted in the HTML inspector displaying an alarming number of (incorrect) errors in my code. Apparently, I’d accidentally changed how the inspector operated. You’d think that that wouldn’t take long to find and correct… but as usual, it was about 20 minutes of looking.

WebStorm also offered a builtin build system, which again is very similar to the one that’s found in IntelliJ. While this was handy when I was just starting out, it began to chafe a bit when I tried integrating external build tools – which, I was reminded, was one of the things that frustrated me about IntelliJ as well.

I did run into one significant bug. Like Brackets, WebStorm integrates with Chrome to do debugging, via a Chrome plugin. The plugin allows you to use the very nice IDEA debugging interface, and never need to leave your IDE.

In principle, this is awesome. In practice, not so much. Like Brackets, this communication could easily become wedged, and require a restart to resume. In the end, I ended up just using Chrome’s most excellent debug tools right in the browser, and you know what? Staying inside the IDE is overrated. I lost almost nothing (except frustration) by having to swap back and forth between the two toolsets.

In the end, I certainly liked WebStorm more than Brackets – it was stable and almost bug free. But IDEA’s insistence on controlling the entire build infrastructure led me to consider using what was for me a surprising choice – Sublime Text, an editor which was almost the polar opposite of the IDEs I’ve been using for the last 10 years.

Sublime Text

In the third stage of my project (which I’ll detail at length in a later post), I moved to using a Yeoman setup with Grunt (a make-like task runner). Rather than learn how to integrate WebStorm into the build system that Yeoman sets up for you as part of installation, I opted to try a real departure from the classic “everything in the tool” IDE.

Sublime Text is a text editor, not an IDE, and I have to admit to a serious bias against using text editors for development. I was slow to move off of them (I was a vi guy for the longest time) when IDEs first came out, but I just couldn’t imagine giving up inline error highlighting. Because, when you think about it, what’s an IDE for? Build, debug, error highlight, edit. Build was being handled (quite well) by grunt. Debug was already being done in browser… and Sublime Text is truly a wonderful text editor.

Sublime Text won me over. Want to edit every matching phrase in a file? It’s only a couple of keystrokes (with no annoying GUI). In fact, you could say that about almost everything in Sublime – no annoying GUI. Nothing that gets in your way.

And inline errors? Grunt handles that. There’s a pretty simple configuration that will run all your files through jshint/jslint and htmlhint on every save, as well as automatically running your tests, all running as a background watch process. Just watch the terminal output in a separate window.

So much to my surprise, I’ll probably be shelling out for a Sublime license. Not the outcome I was expecting.


Of course, that might not be your conclusion. If you like the idea of extending your IDE’s behavior in JavaScript, then Brackets is something you should check out. I’ve got every confidence that they’ll work out their bugs.

And if you like IntelliJ, definitely try out WebStorm, you’ll almost certainly like that too.

Next, let’s look at one last tool, then I can start to code.

Written by jamesgdriscoll

December 22, 2013 at 10:26 AM

Posted in JavaScript, tools

Thin Server Architecture


As I mentioned previously, I’m looking at the new ways of developing Web Applications that have turned up in the last few years. If you already know about Thin Server Architecture, feel free to skip ahead.

Single Page Applications were already the Next Big Thing back in the mid-2000s – serve a single page for your web application, then as much as possible (in practice, almost always) simply serve deltas back to the user to avoid page reloading. The advantage you gain from doing this is a much snappier user experience, with a page that starts to feel a lot like a native app. However, for many (i.e. most) of these initial SPAs, much of the page rendering logic still resided on the server.

Thin Server Architecture takes this a step further. By moving page rendering logic onto the client, you gain additional advantages – using REST means that you can take advantage of cacheable, proxied requests. Also implicit in this design is that you’re moving state management to the client as well. Moving state management and page generation to the client can radically reduce server load, which immediately gives significant advantages for scalability. Remember that, in total, your users have way more CPU than you do.

Essentially, what you’ve done is keep only database-centric tasks on the server. Authentication (login identity), Authorization (data view permissions) and Validation are the three most commonly cited. The server becomes a thin shell on top of the database. Even validation is done client side, with server side validation done primarily to protect data integrity, not as an otherwise critical part of the control flow.

Is that an oversimplification? Certainly. Just as you can move critical business logic into your database in stored procedures, you can do the same thing with Thin Server. And for some things, like cascading changes to maintain data integrity, this would make real sense.

In addition to page view logic moving to the client, many newish HTML5 APIs also provide additional capabilities that you can exploit. You can store data on the client (or just cache it) via localStorage. You can use history.pushState to update the browser’s URL to add bookmarkability (real URLs, not just anchors! But of course, there’s no support for this in IE8). This works rather well, up to a point.

Where this becomes problematic is when users do deep linking of those URLs that were generated from history.pushState. While it’s true that clients, in total, have more CPU than you, for many of them (especially those on mobile devices), the amount of CPU they can bring to bear at any given moment is limited. The last thing you want to do is to do all this work to get a server that can shovel pages out the door with sub 50ms response times, only to have the client’s phone browser spend 3s rendering it, as it plays through a bunch of JavaScript to build up the correct state.

The obvious solution is to do an initial page render on the server, serve that, and do all further renderings on the client. Additionally, while you’re rendering the client web page, you can seed all your JavaScript functions with the JSON state that they’ll need to spin up, enabling everything to get up and running after the initial load. This hybrid approach is certainly going to be considered best practice (if not already), and you can already see folks referring to it as such – though it does seem as though it might be easiest to do with a framework that spanned both client and server using the same language and libraries… Which is a strong argument for node.js.
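The seeding half of that hybrid approach is simple to sketch: render the HTML on the server, then embed the initial state as a JSON literal for the client code to pick up on load (function and state shape here are hypothetical):

```javascript
// Server side (sketch): append the client's initial state as a JSON literal
// so the page can boot without an extra round trip to the server:
function renderPage(bodyHtml, state) {
    return bodyHtml +
        '\n<script>var initialState = ' + JSON.stringify(state) + ';</script>';
}

var page = renderPage('<div id="app">…server-rendered HTML…</div>',
                      { user: 'jim', cart: [] });
// page now ends with:
// <script>var initialState = {"user":"jim","cart":[]};</script>
```

The client-side JavaScript then reads initialState instead of fetching the same data again after the initial render.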

Certainly something to watch.

Another buzz phrase that’s popular right now is API First Design. The rise in popularity of mobile devices has led to a conflicting need – there are now effectively two web platforms to design for: a Mobile Web, with severe memory, bandwidth and resource constraints, and a Desktop Web, where those restrictions apply only modestly, if at all. (This is known as the “web first” vs. “mobile first” argument.) The API First design movement says that you should design your REST API first, around the information you actually wish to convey and manipulate. Once you have that, you can design for anything you want – desktop web, mobile web, an application, or even open things up to a third party. Frankly, the idea makes a lot of sense, and I’m reminded of the reason why Google Plus bellyflopped on its first outing. Lack of a defined API can be crippling to a product in surprising ways.

Hopefully, this was helpful in understanding this new Web world we’re living in.

Next, let’s look at what folks are using for IDEs these days.

Written by jamesgdriscoll

December 20, 2013 at 10:58 PM

Posted in JavaScript, REST, web