Tag Archives: Javascript

ES6 Template Strings without Indent via tags

ES6 is pretty cool. One new feature is template strings, which allow for multiline interpolated strings.

For example, from MDN:

console.log(`string text line 1
string text line 2`);
// "string text line 1
// string text line 2"

They also allow for evaluated variables embedded within the string (again from MDN):

var a = 5;
var b = 10;
console.log(`Fifteen is ${a + b} and\nnot ${2 * a + b}.`);
// "Fifteen is 15 and
// not 20."

The problem is that you may be using these strings within a class that already has indentation.

export class App {
    text = `
       This is an example of what could be considered a
       "problem" with ES6 multiline strings.

       Each of these lines will have 7 characters of whitespace
       at the beginning of the line.`;
}

This is probably not what you want in every situation.

Luckily, ES6 supports something called tagged template strings. Using a special function syntax, you can hook into the template functionality and either change how the template is evaluated or modify its output. You can use tagged template strings to achieve something like Scala's stripMargin function. If you're not familiar with stripMargin: you put a pipe (the | character) at the start of each line, and any indentation before the pipe is ignored. You can achieve the same thing with a tag that performs a regex replace:
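
A minimal sketch of such a tag might look like this (the name stripMargin and the exact regex here are mine, not the code from the pen below):

function stripMargin(strings, ...values) {
  // stitch the literal chunks and the interpolated values back together
  var result = strings[0];
  for (var i = 0; i < values.length; i++) {
    result += values[i] + strings[i + 1];
  }
  // strip leading whitespace up to and including a pipe on each line
  return result.replace(/^[ \t]*\|/gm, '');
}

var text = stripMargin`
    |This line has no leading whitespace.
    |Neither does this one: ${5 + 10} is fifteen.`;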

Here’s how this baby works:

See the Pen xZZmVV by Jim Schubert (@jimschubert) on CodePen.


[javascript] Be sure to read documentation carefully…

I have a love-hate relationship with JavaScript. I love the language, but I hate the idiosyncrasies. And like many others, I use MDN pretty regularly to verify function signatures, browser support, or to find example code or shims.

The recently posted article on MDN, “A re-introduction to JavaScript”, demonstrates something I wish JavaScript developers would stop demonstrating: cleverness. The article links to one of Crockford’s posts about JavaScript being the most misunderstood language. Sure, there are quirks, but they’re defined quirks in most cases.

I showed a coworker an example of an oddity in JavaScript the other day.

var x = "016";
var y = 0|x;
console.log(y);
// 16

var x = "0o16";
var y = 0|x;
console.log(y);
// 14

console.log(016);
// 14

console.log(0o16);
// 14

This example works differently in ECMAScript 3 and ECMAScript 5, as beautifully explained by CMS on StackOverflow.

Those are ‘gotchas’ of the language that you just have to know. Yes, it sucks. The big problem is when people share code that attempts to be ‘faster’ or ‘more efficient’. Not only do JavaScript developers have to weed out unexpected behavior from the language, but they also have to be diligent to not assume every other JavaScript developer is sharing bug-free code.

This is what you’d see if you were skimming through the recent re-introduction article:

[Screenshot: the for-loop iteration examples from MDN's "A re-introduction to JavaScript"]

If you’re in a hurry, you’ll probably not see the important bit of text immediately following the examples:

Here we are setting up two variables. The assignment in the middle part of the for loop is also tested for truthfulness — if it succeeds, the loop continues. Since i is incremented each time, items from the array will be assigned to item in sequential order. The loop stops when a “falsy” item is found (such as undefined).

Note that this trick should only be used for arrays which you know do not contain “falsy” values (arrays of objects or DOM nodes for example). If you are iterating over numeric data that might include a 0 or string data that might include the empty string you should use the i, len idiom instead.

If you're like me and were skimming the article for anything interesting, you probably also missed that text but immediately caught the possible bugginess of the "nicer idiom". A less experienced skimmer will see the "nicer idiom", think it's clever, and use it somewhere it shouldn't be used. That's just how the JavaScript community works, even when code on a trusted website like MDN doesn't work in a given situation.

For a clear example of the problem, consider this code that uses the “nicer idiom” in the wrong way:

// Correction on Mozilla's 're-introduction'
// https://developer.mozilla.org/en-US/docs/Web/JavaScript/A_re-introduction_to_JavaScript
var arr = [],
    terminate = 20,
    i = 0;

while(i <= terminate){
    arr[i] = i++;
}

delete arr[5];

// NOTE: arr[0] is 0, which is falsy, so this loop never prints anything!
// Change to idx = 1 and the loop terminates at the deleted index 5 instead.
for (var idx = 0, item; item = arr[idx++];) {
    console.log(item);
}

/*
// proper iteration
for (var i = 0; i < arr.length; i++) {
    var item = arr[i];   
    console.log(item);
}
*/

Play with this in jsfiddle

The issue here is that the array must not contain falsy values: undefined (as in sparse arrays), null, 0, NaN, the empty string, false, or the oddball document.all; in short, anything that coerces to false. (Note that non-empty strings such as "0" are actually truthy.) The proper way to iterate over an array is to explicitly visit every index unless you have a clearly defined and documented reason not to continue through the array.
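
For reference, the "i, len" idiom the MDN note recommends looks like this (a minimal sketch using the arr from above):

// cache the length once and visit every index, falsy or not
for (var i = 0, len = arr.length; i < len; i++) {
    console.log(arr[i]); // logs 0 for index 0 and undefined for the deleted index 5
}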

The lesson to learn here is twofold: actually read documentation and articles, and don’t just assume JavaScript code is usable code.


Create configurable Angular services using providers

With Angular 2 on its way, it seems like a lot of developers are now afraid of Angular 1.x. I have nothing bad to say about Angular. Sure, some things could be made easier, but that's exactly what Angular 2 aims to fix. I've played with Angular 2 a bit, and it currently only works in Chrome. I'm fairly confident Angular 1.x will be around for quite a while.

In that light, let’s talk about creating configurable services. One reason why AngularJS is so popular is because you can easily create modular, reusable code. This code can encapsulate logic, directives, extensible services and factories (via decorators), as well as configurability. I’ll talk about creating directives and decorators in future articles.

This example will be a simple URL constructor for placekitten.com. Any AngularJS service can be written to be configurable (if there’s reason to configure the service, of course).

First of all, let’s see how this service will be used.

.controller('View1Ctrl', ['$scope', 'kittyService', function ($scope, kittyService) {
    $scope.kittycat = kittyService.getUrl();
}])
  <p>How about a kitty cat?</p>
  <p><img ng-src="{{kittycat}}"/></p>

Pretty simple. We’ll call kittyService.getUrl() and use the string returned from the service as the src for an image.

To create a configurable service, we’ll use an angular provider instead of a service or factory. The structure I use for a provider is this:

(function () {
    'use strict';
    var module = angular.module('myApp.services', []);

    module.provider('serviceName', function ServiceName() {
        // use `this` for configurable properties or functions

        this.$get = [function () {
            var service = {};
            // This can be any valid return that you'd normally use
            // with module.service or module.factory
            return service;
        }];
    });
})();

This structure is only a little different from a service or factory. Rather than returning an object or constructor function, you assign your reusable code to this.$get. What you assign to this property is the array-style injector annotation for your service definition. You'll feel comfortable with this syntax if you already define your services using array-based dependency injection:

angular.module('myApp')
.service('serviceName', [function (){
            var service = {};
            // This can be any valid return that you'd normally use
            // with module.service or module.factory
            return service;
}]);

You can use instance properties and methods on the provider definition to configure your service, then use the private context of your provider to share the configuration with the service. Here’s the full code for the kittyService:

(function () {
    'use strict';
    var module = angular.module('myApp.services', []);

    module.provider('kittyService', function KittyServiceProvider() {
        var height = 100,
            width = 100;

        this.setHeight = function (h) {
            height = h;
        };
        this.setWidth = function (w) {
            width = w;
        };

        this.$get = [function () {
            var service = {};

            service.getUrl = function () {
                return 'http://placekitten.com/g/' + width + '/' + height;
            };

            // This can be any valid return that you'd normally use
            // with module.service or module.factory
            return service;
        }];
    });
})();

The provider has two private variables: height and width. The two methods on the provider object allow you to update these values, but only during the config phase. After your application’s config function has completed, your provider’s functions are no longer accessible.

One problem with this service is that placekitten.com shows the same image for a given combination of height and width. In the config phase of our application, the service’s height and width can be configured randomly:

.config(['kittyServiceProvider', function (kittyServiceProvider) {
    var widths = [200,300,400,500,600];
    var heights = [300,400,500,600,700];
    kittyServiceProvider.setHeight(heights[Math.floor(Math.random()*heights.length)]);
    kittyServiceProvider.setWidth(widths[Math.floor(Math.random()*widths.length)]);
}]);

The name injected into the config phase is kittyServiceProvider and not kittyService. This is important. AngularJS knows that when you request kittyServiceProvider in the config phase, you want the object containing your service definition (the KittyServiceProvider function in the example above). Once the config phase completes, the $injector service will prepare your service from the this.$get definition of your service.

If you load the example code, refresh the page a few times and you’ll receive a new kitty image.

NOTE: The heights and widths may not always combine in a way that retrieves an image. Just refresh for another example.

Why not do this in a service?

True, you could do this directly in the service if this were just your code. The benefit shows up when you want to share a service with other developers, perhaps on a different project within your company or publicly on GitHub. What if your team wants a 200×300 kitty and HR wants a 500×500 kitty? In those situations you can't just modify the code with your own hard-coded values. You could also use constants or values in this example, but then your service would have a dependency on the consumer (and that's bad).

A service that dumps a string for displaying a cat image is probably not that useful in the real world, so consider other possibilities. I've used providers to create a reusable notification display which lets you define whether a notification automatically closes and how long notifications are displayed if they do. Suppose your application often receives 4xx errors when users leave their browsers unattended long enough for an authenticated session to time out; you could create a provider which caches failed requests and retries up to X (configurable) times before giving up. Or you could create a configurable service which caches those failed requests, displays a login dialog, then automatically retries the failed requests X (configurable) milliseconds after logging in successfully. Those examples would require the use of an $http interceptor and would be quite a bit more complex than a kitty image linker.
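
As a rough sketch of that idea (the names, module, and retry bookkeeping here are mine, not a drop-in implementation), a configurable retry provider built on an interceptor might look like this:

(function () {
    'use strict';
    var module = angular.module('myApp.retry', []);

    module.provider('retryService', function RetryServiceProvider() {
        var maxRetries = 3; // configurable during the config phase

        this.setMaxRetries = function (n) {
            maxRetries = n;
        };

        this.$get = ['$q', '$injector', function ($q, $injector) {
            return {
                // interceptor hook: retry failed requests up to maxRetries times
                responseError: function (rejection) {
                    var config = rejection.config;
                    config.__retries = (config.__retries || 0) + 1;
                    if (config.__retries > maxRetries) {
                        return $q.reject(rejection);
                    }
                    // $http is fetched lazily to avoid a circular dependency
                    return $injector.get('$http')(config);
                }
            };
        }];
    });

    module.config(['$httpProvider', function ($httpProvider) {
        $httpProvider.interceptors.push('retryService');
    }]);
})();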

Code

Sample code for this blog post is available on github at jimschubert/blogs.

Further Reading

If you really want to dig into the guts of Providers (versus services, factories, constants, or values), check out Adrian Castillo’s blog post, AngularJS Providers under the hood.

For reference to each recipe, check out AngularJS Provider Recipes on the official docs.


io.js 1.0.1 (unstable) is out. ES6!

A team of developers, including some core node.js developers, forked node.js a while ago into a project called io.js. They’ve just released version 1.0.1 (Unstable) and it brings ES6 natively to server-side JavaScript applications.

To test drive, you can download io.js. Be warned, it will completely replace your node and npm system links. Some people have complained about this or reported unexpected behavior after compiling io.js from source.

“Why io.js?” you ask? Joyent was a little stingy with node.js progress (it’s the basis for a huge money-maker for them). The io.js team’s goal is to make io.js more of a community-driven project. It seems to have worked, judging by the progress they’ve made.

Version 1.0.0 of io.js ships with V8 3.31.71.4, which includes ES6 features well beyond those in version 3.26.33, the V8 that will ship with joyent/node@0.12.x.

No more --harmony flag

With io.js, you can now start using some ES6 features natively (that is, without transpiling through Traceur or 6to5). Here's the fibonacci example using an ES6 iterator, taken from 6to5's tour:

// fib.js
var fibonacci = { };
fibonacci[Symbol.iterator] = function*() {
  var pre = 0, cur = 1;
  for (;;) {
    var temp = pre;
    pre = cur;
    cur += temp;
    yield cur;
  }
}

for (var n of fibonacci) {
  // truncate the sequence at 1000
  if (n > 1000) {
    break;
  }
  process.stdout.write(n + '\n');
}

You’ll notice that I’ve had to change how an iterator is constructed and how the output is written. Run this with node fib.js (remember, io.js will overwrite the system node).

While this is excellent news, you’ll need to be aware of what ES6 features are currently supported. For example, io.js does not yet support classes. This won’t work in io.js:

class Animal{
  constructor(legs) {
    switch (legs) {
      case 2:
        this.locomotion = 'bipedal';
        break;
      case 4:
        this.locomotion = 'quadrupedal';
      default:
    }
  }
  move() {
    console.log("I'm a %s animal!", this.locomotion);
  }
  static getTypeName() {
    return "Animal";
  }
}

class Bird extends Animal {
  constructor(){
    super(2);
  }
}

class Lion extends Animal {
  constructor(){
    super(4);
  }
}

var bird = new Bird();
bird.move();

var lion = new Lion();
lion.move();

This will, however, work when run through a transpiler such as Traceur or 6to5.

These examples are on github.


Decorating directives with isolate scope in Angular 1.3

A question from reddit piqued my interest. Angular 1.3 has apparently broken the ability to decorate directives that use isolate scope.

I tried versions of Angular between the known-working 1.2.26 and the broken 1.3.0, and found that the issue was introduced in 1.3.0-rc.2.

The way to get around this is hacky:

app.config(function($provide) {
    $provide.decorator('mycustomDirective',['$delegate','$controller', function($delegate, $controller) {
        var directive = $delegate[0];
        directive.$$isolateBindings['othervar'] = {
          attrName: 'othervar',
          mode: '=',
          optional: true
        };
        
        return $delegate;  
    }]);
});   

Here’s a working example: http://plnkr.co/edit/CRxhX6?p=preview


Your first app: Node.js (update)

I am currently writing a book called Your first app: Node.js. I am self-publishing this book through leanpub.com. You can check it out here.

One cool thing about self-publishing on leanpub is that people can purchase the book early and receive all future updates as I write. Feedback from this process may also help me to revisit areas I’ve unintentionally left a little light or not explained very well.

The book is currently about 66% completed. I recently completed the chapters through the node.js API. To complement that API, I will be writing a chapter on creating an AngularJS front-end. Then I’ll explain how to deploy your application to Digital Ocean (although the steps will be similar for many other hosts). Finally, I will provide some suggestions for further learning.

Below is the book description as shown on leanpub.

ABOUT THE BOOK

This book is intended for professionals or students who have a comfortable working knowledge of JavaScript, HTML, CSS, and the stateless nature of HTTP. If you’re not comfortable with JavaScript, I highly recommend reading both JavaScript: The Definitive Guide and JavaScript Garden. Although I’ll quickly go over some necessary fundamentals for beginners, it will be difficult to become a ‘node.js developer’ without a solid foundation in JavaScript. HTML, CSS, and client-side JavaScript will be necessary for the presentation of your application, while understanding the stateless nature of HTTP will make some of the techniques discussed in this book easier to learn.

Why this book?

Many books on a given technology focus on really digging into a certain framework or a few simple concepts. Node.js is an interesting technology for many reasons, including:

  • it popularized server-side JavaScript
  • it encourages modularization
  • it has a fairly large community
  • it provides amazing throughput for data

I have read many technology books and yet I’ve always felt like there was a gap between books covering a given technology and how you’d actually undertake creating a full application on your own. When I realized that node.js is attracting a lot of front-end developers and hobbyists to the community, I understood what that missing “gap” was: the application life cycle. For developers without a degree in Computer Science or Information Systems, it may be difficult to figure out how to actually start creating a full application. I hope to address this in the text by providing some simple guidelines that help to lay out a clear path between the starting point and the finished product. The intended audience for this book is anyone familiar with HTML, CSS and JavaScript. You don’t need to be a professional engineer or even have years of experience. In early chapters, I discuss the aspects of JavaScript that I feel are crucial to understanding how to write an application in node.js.  Where I feel newer developers may benefit from further reading, I suggest online or print resources that I’ve found helpful.

What makes this book different?

When working with Junior developers, I try to avoid the sink or swim mentality that many Seniors seem to have.  Instead, I like to explain why I’ve chosen one thing over another. In this book, I include an entire chapter to discuss development considerations which are usually omitted from mainstream texts.  For instance, when was the last time you read a book that suggested a build automation tool? I can’t remember any book that clearly explained why a sample application was written to use MongoDB instead of Redis or MySQL. These are things I attempt to cover in this book before diving in to the meat of the text you’d find in similar books. I even have a short prototyping chapter for readers who are unfamiliar with the technologies used in the book to try some hands-on code before applying this knowledge to the sample application.

These are all things people with professional full-stack (e.g. client-server-database) experience do on a regular basis. With such a fresh, young community being attracted to node.js application development (many without server-side or database experience), I think this book's early focus on process and maintainability is important for any beginner to pick up.

Technologies

Writing an application requires a developer to pick up skills in multiple technologies and frameworks. I plan to at least cover the basics of all the tools necessary to create a full node.js application. Some of the technologies covered in this book are:

  • express.js (node.js)
  • grunt.js (node.js)
  • mocha (node.js)
  • Twitter’s Bootstrap (CSS)
  • angular.js (client JavaScript)
  • MongoDB (database)
  • git (source control)

The intention is not to fully teach a developer everything about every one of these technologies, but I will go over the basics for everything that is not considered node.js and go over the node.js technologies in greater detail.

Planned Chapters

Future chapters will focus on:

  • ✓ git workflow
  • ✓ project structure and setup
  • ✓ setting up a database
  • ✓ modeling a database “schema”
  • ✓ creating a minimal API
  • creating a front-end application
  • asynchronous communication between client and server
  • ✓ authentication strategies
  • … more

As this book is written, I welcome your feedback and comments at nodejs@ipreferjim.com

Status

This book is currently about 66% complete.

The final product is estimated to be between 200 and 250 pages in length (not including appendices).

Sample Code

Sample code for this book is located on github at jimschubert/yfa-nodejs-code.


Release: New Tab Redirect 3.0

On Wednesday, I began the rollout of New Tab Redirect 3.0 with some bug fixes and a huge new feature: a built in ‘Apps’ page.

Preface for Searchers

By 9pm of release day for New Tab Redirect 3.0, I lost all faith in humanity. I’m a professional software engineer, so I realize users want what they want and if you can’t or don’t give it to them they get really bitchy. But, New Tab Redirect is free software so I technically have nobody to answer to. I’m a human and I make mistakes. I rolled out to 50% of users and forgot to add an explanation of the permissions changes to the description on the web store and to the wiki. Luckily, one user submitted an issue and another emailed me. That’s two users out of 750,000/2 users.

For anyone looking for an explanation of the new permissions, I’ve explained it on the wiki. If you read the explanation and still don’t agree with the added permissions, use another extension… it’s that simple. Don’t be a jerk and call me names or suggest that I’m doing something illegal with no grounds for such defamation. I have no respect for people who intentionally hurt others. One person went as far as to say I had a huge ego; really… I don’t see how giving people something for free and continuously improving and maintaining it for 5 years means I have a huge ego. But, whatever. The rest of this post is about some awesome technical stuff in the new version, namely the New Tab Redirect ‘Apps’ page.

Why a new version?

When Chrome 33 removed chrome-internal://newtab (which pointed to the ‘good’ New Tab Page), I realized that I could create something that I actually wanted personally, while at the same time fixing the whole ‘address bar does not focus’ problem all users targeting chrome://apps were having. Because New Tab Redirect actually redirects to the user’s optional URL, the focus that Chrome gives to New Tab override pages gets lost. That’s just how it is. I’ve documented workarounds on the extension’s wiki.

Another reason for the new version is that Google is cracking down on extensions. Extensions must now have a single visible UI. Although New Tab Redirect 2.2 opened a welcome page and provided an options page, there was no user interface: it was a 'New Tab Override' extension that didn't directly offer an override of the new tab page. In other words, there was no default override on installation. I fixed that by creating the override page I wanted by default (I used chrome-internal://newtab). Doing this also meant I would get 5-10 fewer emails each week from users saying "I use chrome://apps, but I wish the address bar would focus so I could search just like the old new tab page". Done and done.

Technology: AngularJS

At work, we’ve been using AngularJS on a new project. I’m really loving it. I decided to use AngularJS for the New Tab Redirect ‘Apps’ page because it would mean minimal, reusable code with a clean structure that my users could read and understand.

AngularJS is really web development for engineers. Normal JavaScript is often written with all kinds of haphazardly structured files, crazy include structures, and continuation-passing style causing ridiculously nested functions and hard-to-read code. AngularJS offers a clean modular structure with a service locator, dependency injection, and the ability to declaratively extend HTML.

The main parts of AngularJS are:

Services

Singletons that provide some shared functionality. A singleton means there will be one instance of an object in the application at a time. An interesting thing about services in AngularJS is that you can actually create factories with static data via services.

Factories

Factories are like services, except that instead of AngularJS instantiating your constructor function with new, whatever the factory function returns is what gets injected into a controller.
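
A rough sketch of how the two registrations differ (the greeting names and module here are made up):

// service: AngularJS invokes `new GreetingService()` once and injects that instance
angular.module('myApp').service('greetingService', function GreetingService() {
    this.greet = function (name) { return 'Hello, ' + name; };
});

// factory: AngularJS invokes the function once and injects whatever it returns
angular.module('myApp').factory('greetingFactory', function () {
    return {
        greet: function (name) { return 'Hello, ' + name; }
    };
});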

Controllers

Controllers are blocks of code meant to be tied directly to blocks of the DOM (either directly, via routes, or via directives).

Directives

Directives are constructs in AngularJS that allow you to define new HTML elements or attributes that can be applied at runtime to existing elements.

Scope

Probably the hardest part of AngularJS to grasp is the concept of ‘scope’. It’s not the same as JavaScript scope. In AngularJS, scope can be considered the same as the model in the MVC pattern; it’s bound to the view via the controller. AngularJS does dirty checking on the scope to incorporate two-way binding. That just means changes in the DOM to scope-bound properties will immediately be available in the controller, while changes in the controller that are done within an angular digest get updated in the DOM. Generally, there’s no need for manually binding event listeners.

If you want to learn more about AngularJS, the tutorial on angularjs.org is an excellent place to start.

Technology: Chrome Extension APIs

Chrome JavaScript APIs for extensions are really a pain in the ass. In order to use functionality, extensions must ask for permissions. However, Google doesn’t offer read-only permissions. This caused a lot of contention in the permissions request for New Tab Redirect 3.0. If you’re writing an extension and you plan to add some functionality that requires a permission that says ‘Read and modify’, my suggestion is that you don’t add that feature.

In order to create an ‘Apps’ page that somewhat resembled the old New Tab page, I needed:

  • bookmarks (permission: bookmarks, Read and modify your bookmarks)
  • Most Visited Sites (permission: topSites, Read and modify your browsing history... yeah, the description is stupid)
  • Apps management (permission: management, Manage your apps, extensions, and themes)
  • chrome://favicon/ (no permissions needed)

The permissions I needed were only:

  • pulling up to 40 bookmarks from the bookmarks bar only
  • querying top sites (the API call only gives you 20 sites, why describe it as Read and modify your browsing history?)
  • apps management, not extensions and not themes

I can only imagine that Google has a hard enough time as a company, considering how they've dropped quite a few products that I loved. They've also invested a lot of time in technologies like Google Glass and cars that can drive themselves. They're likely not at all interested in making a more robust permissions system for extension developers.

The permissions required for the New Tab Redirect 'Apps' page are what they are. So many users got scared and even went as far as to falsely accuse me of anonymously collecting their data. The extension is open source, so accuse away: you're completely wrong. Then again, people who use Google Chrome and think Google is not stealing gobs and gobs of their data are ignorant of the many mechanisms built into Chrome specifically to steal their data. Consider the data stored at chrome://predictors/: if Google has 1 billion Chrome users, they can now collect data from 1 billion distributed machines that tells them what users type into the address bar and where those users end up navigating. Extension developers are the least of your worries (like you, I don't trust extension developers either).

The code, a brief walkthrough

New Tab Redirect previously loaded redirect.js, which simply redirected to a page defined on the options page. I wanted to keep this functionality, but at the same time I did not want the New Tab Redirect 'Apps' page to start loading if the user defined an optional redirect.

app.js

AngularJS makes that easy. You can provide a flag to defer bootstrapping of the application until you explicitly tell angular ‘Go!’:

window.name = 'NG_DEFER_BOOTSTRAP!';

This just has to be declared before you call angular.run(). Then, when you’re ready, you call:

angular.resumeBootstrap();

Then, angular will begin wiring itself up to the page. In this way, the new ‘Apps’ page DOES NOT LOAD unless the user is using that page.
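
Conceptually, the check in redirect.js looks something like this (a sketch; the storage key and structure are assumptions, not the actual extension code):

// sketch: resume the deferred Angular app only when no redirect URL is configured.
// The storage key 'url' is illustrative, not the extension's actual key.
chrome.storage.sync.get('url', function (items) {
    if (items.url) {
        // the user configured a redirect target, so the 'Apps' page never bootstraps
        window.location.replace(items.url);
    } else {
        angular.resumeBootstrap();
    }
});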

One interesting hurdle when writing an AngularJS application for Google Chrome extensions is that AngularJS wraps some standard HTML elements like a, input, img, and form. I'm only using anchor tags and images in New Tab Redirect, but I didn't understand why images for apps wouldn't load. The problem is that for images you have to whitelist the chrome protocol, and for anchor hrefs you need to whitelist chrome-extension, so the AngularJS compiler will be happy with the rendered HTML. This all gives the following clean app.js:

'use strict';
// Setting the window.name property in this way allows us to call app.run(), but block until it's ready to be resumed.
// the resume happens in redirect.js if no redirected new tab url has been specified.
window.name = 'NG_DEFER_BOOTSTRAP!';
var app = angular.module('newTab', ['newTab.controllers', 'newTab.directives', 'newTab.filters']);

app.config(['$compileProvider', function($compileProvider) {
    // see https://github.com/angular/angular.js/issues/3889
    $compileProvider.imgSrcSanitizationWhitelist(/^\s*(https?|ftp|file|blob|chrome):|data:image\//);
    $compileProvider.aHrefSanitizationWhitelist(/^\s*(https?|ftp|mailto|tel|file|chrome|chrome-extension):/);
}]);

app.run();

controllers.js

The new 'Apps' page has a single controller on the main page. When it loads, it checks for the user's synced preferences, then loads apps, followed by bookmarks and top sites only if the user wants to load them. The cool thing about AngularJS is how clean doing this becomes (I've removed function logic below for brevity; the full file is here):

'use strict';
var controllers = angular.module('newTab.controllers', ['newTab.services']);

controllers.controller('MainController', ['$scope', 'Apps', function ($scope, Apps){
    var enable_top_key = 'ntr.enable_top',
        enable_bookmarks_key = 'ntr.enable_bookmarks',
        bookmarks_count_key = 'ntr.bookmark_count',
        top_count_key = 'ntr.top_count';

    $scope.extension_name = "New Tab Redirect!";
    $scope.enable_bookmarks = false;
    $scope.enable_top = false;
    $scope.bookmarks = [];
    $scope.show_prefs = false;
    $scope.bookmark_count = 10;
    $scope.top_count = 10;

    $scope.save_preferences = function(){ };

    function loadBookmarks() { }
    function loadTopSites() { }
    function loadApps() { }

    $scope.$on('UninstalledApp', loadApps);

    // initial page setup
    var querySettings = [enable_top_key, enable_bookmarks_key, bookmarks_count_key, top_count_key];
    Apps.getSetting(querySettings)
        .then(function(settings){
            // assign settings to scope
        })
        .then(function(){
            loadApps()
                .then(function(){
                    loadBookmarks();
                    loadTopSites();
                });
        })
        .then(function setupWatches(){
            $scope.$watch('bookmark_count', loadBookmarks);
            $scope.$watch('top_count', loadTopSites);
        });
}]);

When you declare a controller, you can pass just a function as the second parameter, and the parameter names will tell AngularJS how to look up services, factories, or other injectables so they're available in your controller. Another way, as I've done here, is to pass an array in which the leading elements are the names of the dependencies and the last element is the controller function. This style allows you to minify the code later without breaking AngularJS's dependency injector.
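
To see why that matters, compare the two styles (a minimal sketch):

// implicit form: relies on parameter names, which a minifier will rename
controllers.controller('MainController', function ($scope, Apps) { /* ... */ });
// after minification this becomes roughly function (a, b) { ... }, and AngularJS
// can no longer tell that 'a' should resolve to $scope and 'b' to Apps

// array form: the string names survive minification
controllers.controller('MainController', ['$scope', 'Apps', function ($scope, Apps) { /* ... */ }]);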

You'll notice here that Apps.getSetting returns a promise. Promises make the code way cleaner. The way the getSetting promise is set up here allows me to assign all settings to their properties on the scope, then load apps (always done), then conditionally load bookmarks and top sites. After everything is loaded, a $scope.$watch call lets us say that any time bookmark_count or top_count changes, the relevant function should be called again. If I had bound these watch functions any earlier in the promise chain, they would be called multiple times. Each load function simply delegates to the corresponding Apps service call and returns its promise, so that code is omitted from the logic above.
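
For illustration, one of the omitted functions might look roughly like this (a guess at the shape, not the actual extension code):

// sketch: delegate to the Apps service and return its promise so callers can chain
function loadBookmarks() {
    return Apps.getBookmarksBar($scope.bookmark_count)
        .then(function (bookmarks) {
            $scope.bookmarks = $scope.enable_bookmarks ? bookmarks : [];
        });
}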

services.js

The services file actually contains a single service, Apps. The service has the following interface:

{
    getAll: function () { },

    launch: function(id){ },

    pinned: function(url){ },

    newWindow: function(url){ },

    uninstall: function(id){ },

    tab: function(url){ },

    navigate: function(url){ },

    topSites: function(){ },

    saveSetting: function(obj){ },

    getSetting: function(obj) { },

    getBookmarksBar: function(limit){}
};

A better interface design would probably have been to create a service facade around the tabs functionality (pinned, newWindow, tab) and the config functionality (saveSetting, getSetting). To keep it simple, I used only 'Apps'.

Each of these functions returns a promise to allow for clean chaining of asynchronous functions.

Rather than explain each function in depth, I’ll cover just the first one and you can look at the others on GitHub.

Apps.getAll() will retrieve all apps using the chrome.management.getAll API call provided by chrome:

getAll: function () {
    var deferred = $q.defer();

    chrome.management.getAll(function (results) {
        $rootScope.$apply(function(){
            deferred.resolve(results);
        });
    });

    return deferred.promise;
},

The problem here is that AngularJS is only aware of changes to the scope that occur during its digest loop, so changes made from 'regular JavaScript' (such as a Chrome API callback) must be explicitly applied to the AngularJS internals. This is done with $rootScope.$apply whenever you have some data that you want AngularJS to consider. Chrome doesn't provide error callbacks on this API because errors are handled within the browser, which keeps the client-side logic simple. A 'deferred', created by $q.defer(), is how JavaScript code can promise a future value to callers, and it is not AngularJS-specific: jQuery has deferreds, Kris Kowal has an excellent promises implementation called q, and versions of the CommonJS Promises proposal are implemented in many other frameworks and utilities.

A deferred object can settle in one of two states: success or failure. To trigger the success state (which completes the promise successfully), you call deferred.resolve. To signal an error, you call deferred.reject. You can chain promises by returning a new promise from a chainable function, usually .then(fn).
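
For example, a controller consuming Apps.getAll() might chain off the returned promise like this (the filtering and error handling are hypothetical):

Apps.getAll()
    .then(function (results) {
        // runs when deferred.resolve(results) is called
        $scope.apps = results.filter(function (item) {
            return item.enabled;
        });
    }, function (err) {
        // would run if the deferred were ever rejected
        $log.error('Could not load apps', err);
    });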

HTML

I’m going to cheat a little here and show you the HTML before the directives. I’ll only show two snippets of HTML: one contained in main.html and a template.

In main.html you’ll see:

<div class="container app-container clear" ng-class="{'after-bookmarks': enable_bookmarks && bookmarks.length > 0,'populated':apps.length > 0}">
    <div><input type="search" ng-model="q.name" ng-show="apps.length > 5" placeholder="Filter apps"></div>
    <chrome-app ng-repeat="app in (apps | filter:q)" app="app"></chrome-app>
    <div ng-show="(apps | filter:q).length == 0" style="margin:1.5em">No matches found.</div>
    <span class="clear"></span>
</div>

This actually has a lot of AngularJS stuff in it. The opening div tag has this weird ng-class attribute in which the content looks like a JSON object. This is one of many built-in AngularJS directives. It applies the class ‘after-bookmarks’ based on the condition in the value of the property at runtime (during a digest loop). This means that any time enable_bookmarks changes, the classList might change from “container app-container clear” to “container app-container clear after-bookmarks”. If the ‘populated’ condition later changes, we’d have “container app-container clear after-bookmarks populated”. Then, the user could disable the bookmarks setting and AngularJS would automatically update the classList to “container app-container clear populated”. There’s no additional work you need to do.

Next, there's an input type="search" that has ng-model and ng-show attributes. Usually, something applied to ng-model will represent a property on your $scope object. So, if you had ng-model="favorite.color" and in your controller you had set $scope.favorite = { color: 'blue' }, the value of the text box would read 'blue'. If you changed the input text from 'blue' to 'red', the expression $scope.favorite.color === 'red' would immediately be true in your controller. The ng-show directive toggles the display:none style of the element based on its condition.

Next, there's this weird non-standard XML element, <chrome-app ng-repeat="app in (apps | filter:q)" app="app"></chrome-app>. That's a directive I've defined and will discuss later. The ng-repeat attribute acts as a foreach loop. The value of that attribute says: for each value in apps | filter:q, apply it to the 'app' attribute of my custom directive and generate the templated structure. The apps | filter:q syntax is how AngularJS declaratively applies filters (another available application module, like a service or factory). The | is what actually applies the filter; filter:q is how we define which filter to call and what parameter to pass it. In standard JavaScript, this might look like:

var apps = [];
var filtered = apps.filter(function (app) {
    // the filter:q logic goes here: return true if `app` matches `q`
});

In AngularJS, the filter filter is way more involved.

The <chrome-app></chrome-app> directive will take the ‘app’ object assigned to the ‘app’ attribute (app="app") and apply it to the following template:

<div class="app-icon">
    <a href="app.appLaunchUrl" chrome-launch="app.id" chrome-type="app.type" class="app-icon-128">
        <img src="{{app.icons|iconsize:128:app}}" title="{{app.name}}"/>
        <span class="app-desc">{{app.name}}</span>
    </a>

    <div class="app-actions">
        <a href="app.appLaunchUrl" ng-if="app.type != 'packaged_app'" chrome-pinned="app.id"
           title="Open {{app.name}} in a pinned tab"><i class="fa fa-2x fa-thumb-tack"></i></a>
        <a href="app.appLaunchUrl" ng-if="app.type != 'packaged_app'" chrome-new-tab="app.id"
           title="Open {{app.name}} in a new tab"><i class="fa fa-2x fa-level-up"></i></a>
        <a href="app.appLaunchUrl" ng-if="app.type != 'packaged_app'" chrome-new-window="app.id"
           title="Open {{app.name}} in a new window"><i class="fa fa-2x fa-external-link"></i></a>
        <a href="app.optionsUrl" chrome-options="app.id" ng-if="app.optionsUrl" title="Open options for {{app.name}}"><i
                class="fa fa-2x fa-wrench"></i></a>
        <a href="#" chrome-uninstall="app.id" title="Uninstall {{app.name}}"><i class="fa fa-2x fa-trash-o"></i></a>
    </div>
</div>

The app object is the result object from chrome’s API call.

The new thing in this snippet is the introduction of {{ somePropertyName }}. This is an interpolation in AngularJS. It’s actually not that performant a lot of the time, but for something like title="{{app.name}}" it is usually the only way to dynamically set string contents. Here you see another AngularJS directive, ng-if, which conditionally adds to or removes from the DOM the whole element it’s applied to. Then, you see other custom directives, chrome-pinned, chrome-new-tab, chrome-new-window.

With these examples, you can now understand probably 75% of the code you'd find in an AngularJS application.

directives.js

The last bit to cover is directives. This is my favorite aspect of AngularJS so I’ll try really hard not to ramble on the topic.

AngularJS lets you define directives to run for any attributes, elements, or CSS class names. I don’t generally use the class feature. Directives allow you to hook into the compile phase or the link phase (but not both simultaneously). You can also define a controller or explicitly create a child scope or isolated scope for the given element(s). I like to keep things simple and stick to adding functionality in the link phase.

Here’s an example:

directives.directive('chromePinned', ['$log', 'Apps', function($log, Apps){
    return {
        // attribute only
        restrict: 'A',

        scope: {
            id: '=chromePinned',
            url: '=href'
        },

        link: function($scope, $element, $attrs) {
            if($scope.id){
                $element.bind('click', function(e){
                    e.preventDefault();
                    Apps.pinned($scope.url)
                        .then(function(tab){
                            $log.debug("Opened app id %s in pinned tab #%d", $scope.id, tab.id);
                        });
                });
            }
        }
    };
}]);

The directive here is declared in the same dependency-injection style as the controller, where we define the dependency names and the function in an array to prevent breaking the injector if the code is ever minified. The directive must return an object that represents a directive definition.

This directive is restricted to only work on attributes (however, restrict:'AEC' would work on attributes, elements, and class names). Defining an object for the scope property creates an isolated scope where id is data-bound to the property of chromePinned (in attribute format it is chrome-pinned‘s value) and url is bound to href. Isolated scopes break the directive out of the scope hierarchy that AngularJS maintains by default. Isolated scopes can take some getting used to, but I think they’re safer and easier to follow.

The linking function binds a click event to the element (e.g. an anchor tag such as <a href="#" chrome-pinned="12345">). On click, we call Apps.pinned with the URL defined by the anchor's href attribute. Whenever you work on an element in a linking function, you're working in standard JavaScript (i.e. outside of AngularJS's digest). You should always call $scope.$apply with whatever function you want AngularJS to be aware of. In this directive, you don't see that call to $apply because it's done within the service itself.
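
If the directive itself changed scope-bound data in the click handler, it would need that wrapping (a sketch; lastClicked is a made-up property):

$element.bind('click', function (e) {
    e.preventDefault();
    // work done here happens outside AngularJS's digest, so any scope
    // changes must be wrapped in $apply for Angular to notice them
    $scope.$apply(function () {
        $scope.lastClicked = Date.now();
    });
});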

Conclusion

In all, the new feature was quick and painless to code. The files are clean, easy to read, and well organized.

The code, as always is on github: jimschubert/NewTab-Redirect


What is node.js?

I’ve recently decided to write a book about node.js. I don’t feel like a lot of books out there (on any technology) cover real-world scenarios. Most of them are introductions to languages or deep-dives into a single aspect of a language or technology. There are very few books that cover real-world application development scenarios to include things like process, build, sample application, and deployment. A good example of the type of book I’d like to write is Agile Web Development with Rails.

Below is an excerpt from one of the first chapters of the book. This is a *very* rough draft, but I thought I would share it because it may be useful for anyone beginning node.js development.


What is node.js?

Node.js is a server-side JavaScript environment which provides evented asynchronous I/O and a minimalistic set of core features. In the beginning, node.js combined v8 with libev. Unfortunately, libev is an evented I/O library for Unix-based systems only. Eventually, libev was replaced with a higher-level abstraction library called libuv which allows node.js to run on Unix-based systems and Windows. In 2011, Microsoft began contributing to node.js to bring it to the Windows platform. More recently, libuv removed libev as a dependency.

The key concept for node.js is evented I/O (mostly network-based) operations. This is handled via an event loop. Although understanding the implementation details of event loops can be pretty complicated, the idea is fairly simple. Let’s analogize.

Try to think about an application as a fast food restaurant. A cashier is patiently waiting at the register. A customer walks up to the register and places an order. The cashier hands the order off to the kitchen and maybe fulfills the customer’s drink order before returning to the register. Another customer arrives and places an order, initiating the same scenario as before. A third customer is walking to the register, but just before the customer reaches the register a cook dings a bell and the cashier steps away to grab the first order and hand it to the customer. The cashier then begins taking the third customer’s order. Midway through the order, the second customer’s order is ready. Because the cashier is not rude (maybe this is in Canada?), the third order must be completed before the second customer’s order can be handed back to the customer. But, the cashier can easily hand over the second customer’s order before filling the third customer’s drink order. Eventually, the third customer’s order is fulfilled and there are no more customers. The cashier waits patiently.

It may sound strange, but the above scenario is a simplification of an event loop. We see things like this every day. New node.js developers seem to have a hard time either understanding the event loop or understanding where it comes from. Take the following example, for instance:

function main(){
  console.log("Hello, world!");
}

main();

There’s a little more going on in this example than the standard obligatory printing of “Hello, world!” I’ve wrapped the output line in a function called main and subsequently called that function. If you run this file, you’ll see that it prints out “Hello, world!” and exits. There is no event loop because we haven’t told the runtime to listen to or emit any events.

There are a couple ways we can initiate an event loop. Possibly the easiest way to visualize this is to use the standard JavaScript setInterval function to cause our application to write out “Hello, world!” every 250ms. To kill this application, you’d need to press Control-c.

function main(){
  console.log("Hello, world!");
}

setInterval(main, 250);

The above example is technically an event loop, although there aren’t really any events going on other than the setInterval function. Another way to initiate an event loop is to bind to or set up any event which interacts in some way with a file handle. In Unix, nearly everything is considered a file, so this would mean binding to a socket, file descriptor, or even screen input:

// respond to the SIGINT event
process.on('SIGINT', function(){
    process.stdout.write('\nHandling SIGINT event!\n');
    // cleanly exit with non-error status
    process.exit(0);
});

// listen to events on standard input
process.stdin.setEncoding('utf8');
process.stdin.on('data', function(chunk) {
  process.stdout.write('Hello, world!\n');
});

// Start reading from stdin so we don't exit.
process.stdin.resume();
console.log('Type anything and press ENTER...');

In the above example, process.stdin.resume() causes node.js to continually monitor the readable standard input stream for data. Binding a listener to watch for ‘SIGINT’ does not cause node to create an event loop on its own. Play around with this last example to get an idea of how the event loop works. If you’re interested in how libuv creates and maintains an event loop, there is a free uvbook available which dives into the internals of libuv.
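
Since the key concept here is network-based evented I/O, here is a minimal sketch of the same idea with a socket instead of stdin (the port number is arbitrary):

// the event loop stays alive because the listening socket is an open
// handle that keeps emitting 'request' events
var http = require('http');

http.createServer(function (req, res) {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Hello, world!\n');
}).listen(3000, function () {
    console.log('Listening on http://localhost:3000 ...');
});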


Karma error: RangeError: Maximum call stack size exceeded

I’ve been working on a bootstrap application for express.js, angular.js, and bootstrap. The idea being that I can provide a boilerplate for quick prototyping of applications.

I was having some issues setting up Karma, the test runner used by angular.js. I was receiving an error:

...
	
INFO [Chrome 28.0 (Linux)]: Connected on socket id 2uhtx5qf6HDM8_A1qssQ
INFO [karma]: Delaying execution, these browsers are not ready: Chrome 28.0 (Linux)
Chrome 28.0 (Linux) controllers should .... FAILED
	RangeError: Maximum call stack size exceeded
	    at String.match (native)
	    at Object.b.event.remove (/home/jim/projects/express-angular-bootstrap-boiler/src/public/javascripts/vendor/jquery-1.9.1.min.js:3:25827)

...

I messed around with the file includes for a while, then Googled a bit for any possible issues.

It turns out that I was using bootstrap v 2.3.0, which appears to have issues with jQuery 1.9.1 as reported in this issue.

As noted in the issue, upgrading to a newer version of Twitter’s Bootstrap (2.3.2) resolved the issue.


Why I don’t recommend the Step module [node.js, npm]

I prefer async to Step. The async module has a cleaner interface, more options, and more utilities. It's a little beefier in size, but those things are beside the point.

Here’s why I don’t really like step. Tell me what you’d expect from this function:

var Step = require('step');

Step(
	function first(){
		console.log('first');
		
		this();
	},
	
	function second(){
		var x;
		console.log('second');
		if(!x){
			// do something that should set x
		}
		return x;
	},
	
	function third(){
		console.log('third');
	}
)

Did you guess:

$ node step-example.js 
first
second

If you’re not familiar with Step, you’ll probably look at that first function and wonder what this(); actually does. Well, it is actually shorthand for next(); (I’m simplifying a little here).

Assuming you're at least somewhat familiar with asynchronous control flow, you could assume this(); calls the next function. But what about the second function? It's returning a value. Does that return from second, from Step, or from some other function? It returns from second(), but it also passes the value to third. Or, it should. Unless x is undefined, in which case second is considered an unchained function and the chain stops. Now, I don't think your variables should regularly be undefined at the point of return, but what if you or someone on your team uses the popular convention of calling a callback as return callback(x);? If the Step module's habit of overloading this is missed, another developer may read it as a typo ("You can't call the context like that…"), right? Also, what if someone doesn't realize the return value can't be undefined and comments out the middle of that function? You may have cleanup logic in third() that never runs. We've all seen that happen before.

In the above example, if x was not undefined, it would be passed as a parameter to third().

It’s this inconsistency which makes me feel like Step is an accident waiting to happen in a regular development team. The source code is pretty well written and concise. I think the author has done a great job, but the usage is unintuitive and I wouldn’t recommend using the module.

On the other hand, async is beautifully intuitive.

Consider this example:

var async = require('async');

async.series([
	function first(done){
		console.log('first');
		done(null, 'first');
	},
	function second(done){
		var x;
		console.log('second');
		if(!x){
			// do something that should set x
		}
		done(null, x);
	},
	function third(done){
		console.log('third');
		
		done(null, 'third');
	}
], function(err, results){
	console.log(results);
})

async.series allows us to run a series of tasks (functions defined within the array), which pass a value to the callback (that last function you see).

If you forget to pass a value, the results array will contain undefined at that index. Your third function won't be skipped unless you forget to invoke the callback. To me, this is a pretty obvious error and one that is easy to spot and correct. Here's the output of the above call:

first
second
third
[ 'first', undefined, 'third' ]

To be fair, the example of Step acts more like a waterfall function. Here’s what that would look like in async.

async.waterfall([
	function first(done){
		console.log('first');
		done(null, 'first');
	},
	function second(val, done){
		var x;
		console.log('second has val: %j', val);
		if(!x){
			// do something that should set x
		}
		done(null, x);
	},
	function(val, done){
		console.log('third has val: %j', val);
		
		done(null, 'third');
	}
], function(err, val){
	console.log('callback has val: %j', val)
});

The above code is available on github: https://github.com/jimschubert/blogs
