Hard link in OS X

Under Linux, I used mount --bind olddir newdir to link a source code directory from a git repository in my Dropbox folder to another public git repository. I did this to avoid using git submodules and because, for whatever reason, git had trouble following hard links under Ubuntu.

I recently purchased a MacBook Pro and tried to use the same bound mount point, but the mount provided with OS X doesn’t offer --bind. The main suggestion I found online was to use bindfs, which didn’t work for my needs, and I didn’t want to manually compile a link/unlink application as others suggested.

I found that the Homebrew package coreutils provides the GNU version of ln. To install:

brew install coreutils

This package doesn’t overwrite the OS X default utilities: unless you run the above command with the --default-names switch, all of the utilities are prefixed with a g. Using the GNU ln program, I was able to hard-link successfully:

gln -d src target

I don’t know what the difference is between Ubuntu and OS X (both running git 1.8.4), but I can now work properly under git with hard links as expected.

Installing more GNU utilities

You can go one step further and install more GNU utilities (found on StackExchange):

brew install coreutils findutils gnu-tar gnu-sed gawk gnutls gnu-indent gnu-getopt

Then, add the following to your .bashrc:

PATH="/usr/local/opt/coreutils/libexec/gnubin:$PATH"
MANPATH="/usr/local/opt/coreutils/libexec/gnuman:$MANPATH"

Thanks to the Apple StackExchange user lri for the excellent post.


System76 notebooks… are they worth the money?

On June 19 2012, I purchased a 15″ Pangolin Performance with the following configuration:

  • Ubuntu 12.04 LTS 64 bit
  • 15.6″ 720p High Definition LED Backlit Display ( 1366 x 768 )
  • Intel HD Graphics 4000
  • 3rd Generation Intel Core i7-3610QM Processor ( 2.30GHz 6MB L3 Cache – 4 Cores plus Hyperthreading )
  • 16 GB Dual Channel DDR3 SDRAM at 1600MHz – 2 X 8 GB
  • 750 GB 7200 RPM SATA II
  • 8X DVD±R/RW/4X +DL Super-Multi Drive
  • Intel Centrino 1030 – 802.11 b/g/n Wireless LAN + Bluetooth Combo Module

Total price: $1,038.00

The price was higher than I wanted to spend, but I considered this purchase an investment. After all, I had previously owned a Sony Vaio for almost 5 years, which I had purchased for $800. The reviews for System76 seemed a bit mixed, but I’ve been using Linux or BSD as my main desktop environment since 1998, so I took a chance on the mixed reviews and made the investment.

Issue #1

After two months, I had to open a ticket because the Ubuntu version preinstalled on the machine displayed the graphics card as ‘Unknown’ and the screen itself exhibited a sort of wave of energy which was giving me headaches. I was told by the support engineer that the fix would be to install the mesa-utils package. Why is mesa-utils not installed by default if it’s a necessary package? I have no idea, but here’s the response I received from my support ticket:

It’s actually a known issue without having the mesa-utils installed. Apparently, the info is actually pulled from glxinfo.

Mesa-Utils isn’t part of the default install, and admittedly, we do not add it either. I know there’s talk of trying to add it as part of the install.

Sure.

The graphics issues were actually resolved by an early upgrade to Ubuntu 13.04.

Issue #2

In October 2012, I decided to dual boot a copy of Windows 8 from MSDN. There were numerous issues with drivers. After opening a support ticket, I was told that System76 doesn’t have access to Windows drivers for their hardware from their vendors (Bison or Chicony hardware). That seems really shady, considering a machine at such a high price should have quality hardware. I ended up removing Windows 8 and installing Windows 7 within a VirtualBox VM. The hardware is beefy enough to host multiple concurrent VMs, so this is actually my preferred method for cross-platform development. However, a machine that can’t dual boot Windows is basically useless for developers who need to code at a low level, such as packet processing or driver development.

Issue #3

On June 15 2014, just shy of two years after purchasing this machine, I decided to open a ticket regarding battery issues I started having after installing Ubuntu 14.04. My battery began to hold no more than 75% or so charge. I inquired how to troubleshoot whether or not there were some other factors causing battery issues. For reference, here are screenshots of the battery statistics:

I was told by the support team that my battery was causing problems because:

The statistics indicated this is within your battery. The energy designed (48.8 W/hr) and full (40.0 W/hr) is where your discrepancy lies.

So, I opened a ticket saying that my machine had a max of 76% charge after hours of charging (both booted and off), and support replied saying my problem was that my battery could only hold a max charge of 83.3%. The resolution offered was to purchase a new battery for $105.

That’s not right. Every two years of ownership, I will need to purchase the same low-quality battery at $105 out of pocket? In a sense, I have purchased a machine that also carries a battery fee of roughly $52 per year. This seems really silly.

I admit that I had no battery issues for about two years. In fact, I was surprised by the battery life (2.5 to 3 hours) early on. Machines with a mechanical hard drive and 16GB of RAM are generally considered battery hogs. I don’t know of anyone who actually works on such a machine on battery.

Conclusion

Although my machine has worked well for two years, I don’t like the quality of the hardware or the poor response from the company about the issues I have had. I probably wouldn’t mind if the quality of support offset the quality of the hardware. For example, if System76 actually stood behind their product and replied “Wow, our batteries should last more than 1.95 years. We’ll send you a replacement immediately,” I would recommend purchasing from the company. You do get a pretty beefy machine for relatively cheap. But as a software engineer with multiple side projects and a 9-month-old son, I don’t have time to continuously troubleshoot issues that I really shouldn’t be having with a quality piece of hardware.

My own resolution was to purchase a 13″ MacBook Pro with 8GB RAM and a 256GB SSD. I have been getting 8 or 9 hours of battery life, and that’s with the power-hungry Google Chrome open at all times. I became accustomed to the OS X environment while working at Expedia, and since Mac OS X is a UNIX-certified OS, I feel at home in the environment. The only trouble I had when I first started working in OS X was the difference between COMMAND, OPTION, and CONTROL; that took a week or two to become habit. My wife also left her Sony VAIO for a MacBook Air last year and she loves it. You’d be hard-pressed to find a Linux distribution which allows for such a smooth computing experience for non-geeks.

I will be selling my System76 machine after backing up all important information. If you’re interested, let me know. You’d be getting a beefy machine and not taking along any of the annoyance of paying over $1000 for subpar customer support.

To answer the question “Are they worth it?”, I would have to say “It depends.” If you want a machine that would cost $2500-3000 elsewhere for around $1000, then yes, it is definitely worth it. If you don’t mind replacing a battery after two years, then yes, it’s worth it. If you’ve used Linux for a long time like I have and don’t mind spending an hour or two every time a ‘surprise’ surfaces, then yes, it’s definitely worth it. But if you’re like me and have little free time, you’ll want to open your machine and have everything work as expected with little or no interaction with customer support. In my case, it’s just not worth it to own a machine that requires so much maintenance. If this system were a car, I would sell it as one that runs well and needs little or no work; that is, as long as you don’t work off battery, you’d be all set.


HostGator’s upgrade to PHP 5.4

Recently, HostGator sent around some emails saying they were forcing a PHP upgrade to 5.4 in the coming weeks. The steps provided by HostGator are basically:

  1. Log in to cPanel
  2. Go to PHP Configuration
  3. Change from 5.2 to 5.4
  4. See if your site works

I’m off work this week to work on my self-published book, so I went ahead and tried the PHP 5.4 upgrade. First, I made sure my sites were backed up and all WordPress plugins/themes were updated.

After running the PHP 5.4 configuration change, my site didn’t work. I received an HTTP 500 error with the cute alligator smiling his smug face at me. I’m on the current version of WordPress with everything (including my database) fully up to date. Naturally, the first thing I did was check the error logs in cPanel... empty. Next, I contacted customer support.

I was on with support for 1h 15m. At first, the support representative couldn’t see the internal server error page. She could run through kproxy.com and see my main page. I tried kproxy.com and saw my site, while other proxies and is-it-down-type checkers reported my site as being down.

After some time, we tracked down the following error message:

Cannot load the ionCube PHP Loader - it was built with configuration 2.2.0, whereas running engine is...

I was initially told this was a problem with my theme (which is the default WordPress theme, by the way). I was getting frustrated, but with the error message I was able to Google what would cause this, and it turned out to be a php.ini issue. At first, I assumed it was the server’s php.ini causing problems. While the support representative investigated further, I loaded ~/www/php.ini from within my SSH session and found that the PHP configuration switcher in HostGator’s cPanel doesn’t modify php.ini in any way to account for the updated PHP version. The offending line:

zend_extension="/usr/local/IonCube/ioncube_loader_lin_5.2.so"

The fix for this whole problem was to change the version of the ionCube loader referenced in php.ini to match the version of PHP I chose in the PHP Configuration switcher. That’s really something the switcher utility should do automatically. Without error logs displaying in cPanel, I don’t know how the average customer is supposed to figure that one out.
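For reference, in my case the fix was a one-line edit, assuming HostGator ships a matching 5.4 build of the loader in the same directory (list /usr/local/IonCube to confirm the exact filename on your server):

```ini
; ~/www/php.ini: point the loader at the build matching the selected PHP version
; (was ioncube_loader_lin_5.2.so)
zend_extension="/usr/local/IonCube/ioncube_loader_lin_5.4.so"
```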

Even after this recent day of downtime, I still give HostGator a 9/10. They’re always quick to respond to customer concerns and very nice, they resolve issues as quickly as possible, and they provide a pretty solid and affordable hosting environment.


Install Balsamiq, Ubuntu 14.04

Balsamiq for Desktop requires Adobe AIR, which is no longer supported.

Here are the steps I took to get AIR and Balsamiq to install.

gnome-keyring is required by AIR; gdebi is required to install the Balsamiq package, which will otherwise fail as “bad quality” in the Ubuntu Software Center. You can skip the gdebi install if you’re using X File Package (or probably anything other than the Ubuntu Software Center) to install .deb packages.

sudo apt-get install gnome-keyring gdebi gdebi-core

Lots of sites say you need to install ia32-libs, which no longer exists in the Ubuntu repositories. Here is what’s required (taken from this post on askubuntu.com and this other post on askubuntu.com):

sudo apt-get install gtk2-engines:i386 libart-2.0-2:i386 libcairo2:i386 libcanberra-gtk0:i386 libdatrie1:i386 libgail-common:i386 libgconf2-4:i386 libgtk2.0-0:i386 liblua5.1-0:i386 libpango1.0-0:i386 libpixman-1-0:i386 libqt4-network:i386 libqt4-test:i386 libqtcore4:i386 libthai0:i386 libbonobo2-0:i386 libglade2-0:i386 libgnomecanvas2-0:i386 libidl0:i386 liborbit2:i386 libwmf0.2-7:i386 gtk2-engines-murrine:i386 libxml2:i386 libxslt1.1:i386 libxt6:i386 lib32nss-mdns libnspr4-0d:i386 libnss3-1d:i386 

Now, download AdobeAIRInstaller.bin version 2.6 and MockupsForDesktop64bit.deb

cd ~/Downloads
chmod +x AdobeAIRInstaller.bin # make Adobe AIR executable
LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu ./AdobeAIRInstaller.bin # install Adobe AIR

Once AIR is installed, open your file manager, right-click the Balsamiq package, and choose Open With -> gdebi. The install should now complete.


Release: New Tab Redirect 3.0

On Wednesday, I began the rollout of New Tab Redirect 3.0 with some bug fixes and a huge new feature: a built-in ‘Apps’ page.

Preface for Searchers

By 9pm of release day for New Tab Redirect 3.0, I had lost all faith in humanity. I’m a professional software engineer, so I realize users want what they want, and if you can’t or don’t give it to them they get really bitchy. But New Tab Redirect is free software, so I technically have nobody to answer to. I’m a human and I make mistakes. I rolled out to 50% of users and forgot to add an explanation of the permissions changes to the description on the web store and to the wiki. Luckily, one user submitted an issue and another emailed me. That’s two users out of roughly 375,000 (half of the 750,000 user base).

For anyone looking for an explanation of the new permissions, I’ve explained it on the wiki. If you read the explanation and still don’t agree with the added permissions, use another extension… it’s that simple. Don’t be a jerk and call me names or suggest that I’m doing something illegal with no grounds for such defamation. I have no respect for people who intentionally hurt others. One person went as far as to say I had a huge ego; really… I don’t see how giving people something for free and continuously improving and maintaining it for 5 years means I have a huge ego. But, whatever. The rest of this post is about some awesome technical stuff in the new version, namely the New Tab Redirect ‘Apps’ page.

Why a new version?

When Chrome 33 removed chrome-internal://newtab (which pointed to the ‘good’ New Tab Page), I realized that I could create something that I actually wanted personally, while at the same time fixing the whole ‘address bar does not focus’ problem all users targeting chrome://apps were having. Because New Tab Redirect actually redirects to the user’s optional URL, the focus that Chrome gives to New Tab override pages gets lost. That’s just how it is. I’ve documented workarounds on the extension’s wiki.

Another reason for the new version is that Google is cracking down on extensions: extensions must now have a single visible UI. Although New Tab Redirect 2.2 opened a welcome page and provided an options page, there was no user interface, and it didn’t directly provide an override of the new tab page. In other words, there was no default override on installation. I fixed that by creating the override page I wanted by default (I had used chrome-internal://newtab). Doing this also meant I would get 5-10 fewer emails each week from users saying “I use chrome://apps, but I wish the address bar would focus so I could search just like the old new tab page”. Done and done.

Technology: AngularJS

At work, we’ve been using AngularJS on a new project. I’m really loving it. I decided to use AngularJS for the New Tab Redirect ‘Apps’ page because it would mean minimal, reusable code with a clean structure that my users could read and understand.

AngularJS is really web development for engineers. Plain JavaScript is often written with haphazardly structured files, crazy include chains, and continuation-passing style that causes ridiculously nested functions and hard-to-read code. AngularJS offers a clean modular structure with a service locator, dependency injection, and the ability to declaratively extend HTML.

The main parts of AngularJS are:

Services

Singletons that provide shared functionality. A singleton means there will be exactly one instance of the object in the application at a time. An interesting thing about services in AngularJS is that you can also use them to share static data across the application.

Factories

Factories are like services, except you register a function whose return value becomes the injectable object. (Despite the name, that return value is cached, so what a factory produces is also created once and shared, not created anew on every injection.)

Controllers

Controllers are blocks of code meant to be tied directly to blocks of the DOM (either directly, via routes, or via directives).

Directives

Directives are constructs in AngularJS that allow you to define new HTML elements or attributes that can be applied at runtime to existing elements.

Scope

Probably the hardest part of AngularJS to grasp is the concept of ‘scope’. It’s not the same as JavaScript scope. In AngularJS, scope can be considered the model in the MVC pattern; it’s bound to the view via the controller. AngularJS does dirty checking on the scope to implement two-way binding. That just means changes in the DOM to scope-bound properties are immediately available in the controller, while changes made in the controller within an Angular digest get updated in the DOM. Generally, there’s no need to manually bind event listeners.
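Conceptually, the digest loop is just dirty checking. Here is a toy sketch in plain JavaScript to illustrate the idea; Scope, $watch, and $digest below are minimal hypothetical versions, not Angular’s actual implementation:

```javascript
// A toy scope: a list of watchers, each comparing a value to its last-seen value.
function Scope() {
    this.watchers = [];
}

// Register a watcher: getter reads the watched value, listener fires on change.
Scope.prototype.$watch = function (getter, listener) {
    this.watchers.push({ getter: getter, listener: listener, last: undefined });
};

// The "digest": keep checking all watchers until nothing changes anymore.
Scope.prototype.$digest = function () {
    var dirty;
    do {
        dirty = false;
        this.watchers.forEach(function (w) {
            var value = w.getter();
            if (value !== w.last) {
                w.listener(value, w.last);
                w.last = value;
                dirty = true; // a listener may have changed other watched values
            }
        });
    } while (dirty);
};
```

Angular’s real digest adds safeguards (like a maximum iteration count) and is triggered for you by directives such as ng-model, but the dirty-checking loop is the core mechanism.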

If you want to learn more about AngularJS, the tutorial on angularjs.org is an excellent place to start.

Technology: Chrome Extension APIs

Chrome JavaScript APIs for extensions are really a pain in the ass. In order to use functionality, extensions must ask for permissions. However, Google doesn’t offer read-only permissions. This caused a lot of contention in the permissions request for New Tab Redirect 3.0. If you’re writing an extension and you plan to add some functionality that requires a permission that says ‘Read and modify’, my suggestion is that you don’t add that feature.

In order to create an ‘Apps’ page that somewhat resembled the old New Tab page, I needed:

  • bookmarks (permission: bookmarks, Read and modify your bookmarks)
  • Most Visited Sites (permission: topSites, Read and modify your browsing history... yeah, the description is stupid)
  • Apps management (permission: management, Manage your apps, extensions, and themes)
  • chrome://favicon/ (no permissions needed)

The permissions I needed were only:

  • pulling up to 40 bookmarks from the bookmarks bar only
  • querying top sites (the API call only gives you 20 sites, why describe it as Read and modify your browsing history?)
  • apps management, not extensions and not themes

I can only imagine that Google has a hard enough time as a company, considering how they’ve dropped quite a few products that I loved, and they’ve also invested a lot of time in technologies like Google Glass and self-driving cars. They’re likely not at all interested in building a more granular permissions system for extension developers.

The permissions required for the New Tab Redirect ‘Apps’ page are what they are. So many users got scared and even went as far as to falsely accuse me of anonymously collecting their data. The extension is open source, so accuse away: you’re completely wrong. Then again, people that use Google Chrome and think Google is not stealing gobs and gobs of their data are ignorant to the many mechanisms built into Chrome specifically to steal their data. Consider the data stored at chrome://predictors/: if Google has 1 billion Chrome users, they can now collect data from 1 billion distributed machines that tells them what users type into the address bar and where that user ends up navigating. Extension developers are the least of your worries (like you, I don’t trust extension developers either).

The code, a brief walkthrough

New Tab Redirect previously loaded redirect.js, which simply redirected to a page defined on the options page. I wanted to keep this functionality, but at the same time I did not want the New Tab Redirect ‘Apps’ page to start loading if the user defined an optional redirect.

app.js

AngularJS makes that easy. You can provide a flag to defer bootstrapping of the application until you explicitly tell angular ‘Go!’:

window.name = 'NG_DEFER_BOOTSTRAP!';

This just has to be declared before you call angular.run(). Then, when you’re ready, you call:

angular.resumeBootstrap();

Then, angular will begin wiring itself up to the page. In this way, the new ‘Apps’ page DOES NOT LOAD unless the user is using that page.

One interesting hurdle when writing an AngularJS application for a Chrome extension is that AngularJS wraps some standard HTML elements like a, input, img, and form. I’m only using anchor tags and images in New Tab Redirect, but I didn’t understand why images for apps wouldn’t load. It turns out that for images you have to whitelist the chrome protocol, and for anchor hrefs you need to whitelist chrome-extension, so the AngularJS compiler will be happy with the rendered HTML. This all gives the following clean app.js:

'use strict';
// Setting the window.name property in this way allows us to call app.run(), but block until it's ready to be resumed.
// the resume happens in redirect.js if no redirected new tab url has been specified.
window.name = 'NG_DEFER_BOOTSTRAP!';
var app = angular.module('newTab', ['newTab.controllers', 'newTab.directives', 'newTab.filters']);

app.config(['$compileProvider', function($compileProvider) {
    // see https://github.com/angular/angular.js/issues/3889
    $compileProvider.imgSrcSanitizationWhitelist(/^\s*(https?|ftp|file|blob|chrome):|data:image\//);
    $compileProvider.aHrefSanitizationWhitelist(/^\s*(https?|ftp|mailto|tel|file|chrome|chrome-extension):/);
}]);

app.run();

controllers.js

The new ‘Apps’ page has a single controller on the main page. When it loads, it checks for the user’s synced preferences, then loads apps, followed by bookmarks and top sites only if the user wants to load them. The cool thing about AngularJS is how clean doing this becomes (I’ve removed function logic below for brevity; full file here):

'use strict';
var controllers = angular.module('newTab.controllers', ['newTab.services']);

controllers.controller('MainController', ['$scope', 'Apps', function ($scope, Apps){
    var enable_top_key = 'ntr.enable_top',
        enable_bookmarks_key = 'ntr.enable_bookmarks',
        bookmarks_count_key = 'ntr.bookmark_count',
        top_count_key = 'ntr.top_count';

    $scope.extension_name = "New Tab Redirect!";
    $scope.enable_bookmarks = false;
    $scope.enable_top = false;
    $scope.bookmarks = [];
    $scope.show_prefs = false;
    $scope.bookmark_count = 10;
    $scope.top_count = 10;

    $scope.save_preferences = function(){ };

    function loadBookmarks() { }
    function loadTopSites() { }
    function loadApps() { }

    $scope.$on('UninstalledApp', loadApps);

    // initial page setup
    var querySettings = [enable_top_key, enable_bookmarks_key, bookmarks_count_key, top_count_key];
    Apps.getSetting(querySettings)
        .then(function(settings){
            // assign settings to scope
        })
        .then(function(){
            loadApps()
                .then(function(){
                    loadBookmarks();
                    loadTopSites();
                });
        })
        .then(function setupWatches(){
            $scope.$watch('bookmark_count', loadBookmarks);
            $scope.$watch('top_count', loadTopSites);
        });
}]);

When you declare a controller, you can pass just a function as the second parameter, and the parameter names will tell AngularJS how to look up services, factories, or other injectables so they’re available in your controller. Another way, as I’ve done here, is to pass an array in which the leading elements are the names of the dependencies and the last element is the controller function. This style allows you to minify code later without breaking AngularJS’s dependency injector.

You’ll notice here that Apps.getSetting returns a Promise. Promises make code way cleaner. The way the getSetting promise chain is set up here allows me to assign all settings to their properties on the scope, then load apps (always done), then conditionally load bookmarks and top sites. After everything is loaded, a $scope.$watch call says that any time bookmark_count or top_count changes, the relevant function is called again. If I had bound these watch functions any earlier in the promise chain, they would have been called multiple times during the initial load. Each load function simply delegates to the Apps service and returns a promise, so that code is omitted from the above logic.
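The same sequencing can be sketched with standard Promises. The stub functions below (getSetting, loadApps, loadBookmarks, loadTopSites) are placeholders for illustration, not the extension’s real code:

```javascript
// Stubbed setting lookup: resolves to a settings object, like Apps.getSetting.
function getSetting(keys) {
    return Promise.resolve({ enable_bookmarks: true, enable_top: false });
}

// Stubbed loaders that record the order in which they run.
var order = [];
function loadApps()      { order.push('apps');      return Promise.resolve(); }
function loadBookmarks() { order.push('bookmarks'); return Promise.resolve(); }
function loadTopSites()  { order.push('top');       return Promise.resolve(); }

var settings;
var ready = getSetting(['ntr.enable_bookmarks', 'ntr.enable_top'])
    .then(function (s) { settings = s; })        // 1. settings are read first
    .then(function () { return loadApps(); })    // 2. apps always load
    .then(function () {                          // 3. the rest is conditional
        if (settings.enable_bookmarks) loadBookmarks();
        if (settings.enable_top) loadTopSites();
    });
```

Because each step returns a promise, the next .then only runs once the previous step has finished, which is exactly why the watches can be bound safely at the end of the chain.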

services.js

The services file actually contains a single service, Apps. The service has the following interface:

{
    getAll: function () { },

    launch: function(id){ },

    pinned: function(url){ },

    newWindow: function(url){ },

    uninstall: function(id){ },

    tab: function(url){ },

    navigate: function(url){ },

    topSites: function(){ },

    saveSetting: function(obj){ },

    getSetting: function(obj) { },

    getBookmarksBar: function(limit){}
};

A better interface design would probably have been to create a service facade around the tabs functionality (pinned, newWindow, tab) and the config functionality (saveSetting, getSetting). To keep it simple, I used only ‘Apps’.

Each of these functions returns a promise to allow for clean chaining of asynchronous functions.

Rather than explain each function in depth, I’ll cover just the first one and you can look at the others on GitHub.

Apps.getAll() will retrieve all apps using the chrome.management.getAll API call provided by chrome:

getAll: function () {
    var deferred = $q.defer();

    chrome.management.getAll(function (results) {
        $rootScope.$apply(function(){
            deferred.resolve(results);
        });
    });

    return deferred.promise;
},

The problem here is that AngularJS is only aware of scope changes that occur during its digest loop; ‘regular JavaScript’ running outside of it, like a Chrome API callback, must be handed to AngularJS explicitly. This is done with $rootScope.$apply whenever you have some data that you want AngularJS to consider. Chrome doesn’t provide error callbacks on this API because errors are handled within the browser, which keeps the client logic simple. A ‘deferred’, created by $q.defer(), is how JavaScript code can promise a future value to callers. It’s not AngularJS-specific: jQuery has deferreds, Kris Kowal has an excellent promises implementation called Q, and versions of the CommonJS Promises proposal are implemented in many other frameworks and utilities.

A deferred object settles into one of two final states: success or failure. To trigger the success state (which completes the promise successfully), you call deferred.resolve. To signal an error, you call deferred.reject. You can chain promises by returning a new promise from a chainable function, usually .then(fn).
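A minimal deferred can be sketched on top of a standard Promise; defer and getAllLike below are illustrative names I’ve made up, not part of any library:

```javascript
// A deferred is just a promise plus handles to settle it from the outside.
function defer() {
    var deferred = {};
    deferred.promise = new Promise(function (resolve, reject) {
        deferred.resolve = resolve; // triggers the success state
        deferred.reject = reject;   // triggers the failure state
    });
    return deferred;
}

// The same shape as the getAll service function: wrap a callback-style API
// (like chrome.management.getAll) so callers get a promise instead.
function getAllLike(callbackApi) {
    var d = defer();
    callbackApi(function (results) {
        d.resolve(results);
    });
    return d.promise;
}
```

This is the whole trick behind the Apps service: every callback-based Chrome API call is wrapped once, and everything downstream chains with .then.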

HTML

I’m going to cheat a little here and show you the HTML before the directives. I’ll only show two snippets of HTML: one contained in main.html and a template.

In main.html you’ll see:

<div class="container app-container clear" ng-class="{'after-bookmarks': enable_bookmarks && bookmarks.length > 0,'populated':apps.length > 0}">
    <div><input type="search" ng-model="q.name" ng-show="apps.length > 5" placeholder="Filter apps"></div>
    <chrome-app ng-repeat="app in (apps | filter:q)" app="app"></chrome-app>
    <div ng-show="(apps | filter:q).length == 0" style="margin:1.5em">No matches found.</div>
    <span class="clear"></span>
</div>

This actually has a lot of AngularJS stuff in it. The opening div tag has this weird ng-class attribute in which the content looks like a JSON object. This is one of many built-in AngularJS directives. It applies the class ‘after-bookmarks’ based on the condition in the value of the property at runtime (during a digest loop). This means that any time enable_bookmarks changes, the classList might change from “container app-container clear” to “container app-container clear after-bookmarks”. If the ‘populated’ condition later changes, we’d have “container app-container clear after-bookmarks populated”. Then, the user could disable the bookmarks setting and AngularJS would automatically update the classList to “container app-container clear populated”. There’s no additional work you need to do.

Next, there’s an input type="search" that has an ng-model and ng-show attribute. Usually, something applied to ng-model will represent a property on your $scope object. So, if you had ng-model="favorite.color", and in your controller, you’ve set $scope.favorite = { color: 'blue' }, the value of the text box would read ‘blue’. If you changed the input text from ‘blue’ to ‘red’, you would immediately have the true condition in your controller $scope.favorite.color === 'red'. The ng-show toggles the display:none style of the element based on the condition.

Next, there’s this weird non-standard XML element, <chrome-app ng-repeat="app in (apps | filter:q)" app="app"></chrome-app>. That’s a directive I’ve defined and will discuss later. The attribute ng-repeat acts as a foreach loop. The value of that attribute says: for each value in apps | filter:q, apply app to the ‘app’ attribute of my custom directive and generate the templated structure. The apps | filter:q syntax is how AngularJS declaratively applies filters (another kind of application module, like a service or factory). The | is what actually applies the filter; filter:q specifies which filter to call and what parameter to pass it. In plain JavaScript, this might look like:

var apps = [];
var q = { name: '' };
var filtered = apps.filter(function (app) {
    // keep only apps whose name contains the filter text from q
    return app.name.toLowerCase().indexOf(q.name.toLowerCase()) !== -1;
});

In AngularJS, the filter filter is way more involved.

The <chrome-app></chrome-app> directive will take the ‘app’ object assigned to the ‘app’ attribute (app="app") and apply it to the following template:

<div class="app-icon">
    <a href="app.appLaunchUrl" chrome-launch="app.id" chrome-type="app.type" class="app-icon-128">
        <img src="{{app.icons|iconsize:128:app}}" title="{{app.name}}"/>
        <span class="app-desc">{{app.name}}</span>
    </a>

    <div class="app-actions">
        <a href="app.appLaunchUrl" ng-if="app.type != 'packaged_app'" chrome-pinned="app.id"
           title="Open {{app.name}} in a pinned tab"><i class="fa fa-2x fa-thumb-tack"></i></a>
        <a href="app.appLaunchUrl" ng-if="app.type != 'packaged_app'" chrome-new-tab="app.id"
           title="Open {{app.name}} in a new tab"><i class="fa fa-2x fa-level-up"></i></a>
        <a href="app.appLaunchUrl" ng-if="app.type != 'packaged_app'" chrome-new-window="app.id"
           title="Open {{app.name}} in a new window"><i class="fa fa-2x fa-external-link"></i></a>
        <a href="app.optionsUrl" chrome-options="app.id" ng-if="app.optionsUrl" title="Open options for {{app.name}}"><i
                class="fa fa-2x fa-wrench"></i></a>
        <a href="#" chrome-uninstall="app.id" title="Uninstall {{app.name}}"><i class="fa fa-2x fa-trash-o"></i></a>
    </div>
</div>

The app object is the result object from chrome’s API call.

The new thing in this snippet is the introduction of {{ somePropertyName }}. This is an interpolation in AngularJS. It’s actually not that performant a lot of the time, but for something like title="{{app.name}}" it is usually the only way to dynamically set string contents. Here you see another AngularJS directive, ng-if, which conditionally adds to or removes from the DOM the whole element it’s applied to. Then, you see other custom directives, chrome-pinned, chrome-new-tab, chrome-new-window.

With these examples, you can now understand probably 75% of the code you’d find in an AngularJS application.

directives.js

The last bit to cover is directives. This is my favorite aspect of AngularJS so I’ll try really hard not to ramble on the topic.

AngularJS lets you define directives to run for any attributes, elements, or CSS class names. I don’t generally use the class feature. Directives allow you to hook into the compile phase or the link phase (but not both simultaneously). You can also define a controller or explicitly create a child scope or isolated scope for the given element(s). I like to keep things simple and stick to adding functionality in the link phase.

Here’s an example:

directives.directive('chromePinned', ['$log', 'Apps', function($log, Apps){
    return {
        // attribute only
        restrict: 'A',

        scope: {
            id: '=chromePinned',
            url: '=href'
        },

        link: function($scope, $element, $attrs) {
            if($scope.id){
                $element.bind('click', function(e){
                    e.preventDefault();
                    Apps.pinned($scope.url)
                        .then(function(tab){
                            $log.debug("Opened app id %s in pinned tab #%d", $scope.id, tab.id);
                        });
                });
            }
        }
    };
}]);

The directive here is declared in the same dependency-injection style as the controller: the dependency names and the function are given in an array so that minifying the code doesn’t break the injector. The directive must return an object that represents a directive definition.
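To see why the array annotation matters, here is a toy injector of my own (an illustration, not AngularJS’s actual implementation) that resolves dependencies from the string names rather than the function’s parameter names:

```javascript
// Toy injector sketch (names are my own; not AngularJS internals).
// Dependencies are resolved from the string annotations, so a minifier
// renaming the function's parameters cannot break the lookup.
var registry = { $log: 'log-service', Apps: 'apps-service' };

function invoke(annotated) {
  var fn = annotated[annotated.length - 1];
  var deps = annotated.slice(0, -1).map(function (name) {
    return registry[name];
  });
  return fn.apply(null, deps);
}

// After minification the parameters might be renamed to a and b,
// but the string names '$log' and 'Apps' still resolve correctly:
var resolved = invoke(['$log', 'Apps', function (a, b) { return [a, b]; }]);
console.log(resolved); // [ 'log-service', 'apps-service' ]
```

In the implicit style (a bare function with meaningfully named parameters), the injector has to read those parameter names, which a minifier destroys.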

This directive is restricted to work only on attributes (restrict: 'AEC' would allow attributes, elements, and class names). Defining an object for the scope property creates an isolated scope in which id is data-bound to the value of chromePinned (written chrome-pinned in attribute format) and url is bound to href. Isolated scopes break the directive out of the scope hierarchy that AngularJS maintains by default. They can take some getting used to, but I think they’re safer and easier to follow.

The linking function binds a click event to the element (e.g. an anchor tag like <a href="#" chrome-pinned="12345">). On click, we call Apps.pinned with the url defined by the anchor’s href attribute. Whenever you work on an element in a linking function, you’re working in standard JavaScript (i.e. outside of AngularJS’s digest). You should always wrap whatever you want AngularJS to be aware of in a call to $scope.$apply. In this directive, you don’t see that call because it’s done within the service itself.
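The digest mechanics can be sketched with a stub scope (a simplification of my own, not AngularJS’s real implementation): changes made outside $apply never reach the watchers, while changes wrapped in $apply trigger a digest and do:

```javascript
// Stub scope illustrating why $apply is needed (not real AngularJS).
function makeScope() {
  var watchers = [];
  return {
    count: 0,
    $watch: function (fn) { watchers.push(fn); },
    $digest: function () { watchers.forEach(function (w) { w(this); }, this); },
    $apply: function (fn) { fn(this); this.$digest(); }
  };
}

var $scope = makeScope();
var seen = [];
$scope.$watch(function (s) { seen.push(s.count); });

// A plain callback (like element.bind('click', ...)) runs outside the digest:
$scope.count = 1;                             // watchers never fire
// Wrapping the change in $apply re-enters the digest:
$scope.$apply(function (s) { s.count = 2; }); // watchers see the change
console.log(seen); // [ 2 ]
```

The real digest loop does far more (dirty checking, re-running until stable), but the shape is the same: $apply runs your function, then kicks off the digest so watchers observe the change.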

Conclusion

In all, the new feature was quick and painless to code. The files are clean, easy to read, and well organized.

The code, as always, is on GitHub: jimschubert/NewTab-Redirect

Disable scrollbar overlays in Ubuntu 13.10

Working with Tizen IDE, I noticed the overlay doesn’t actually display for the scrollbar in Ubuntu 13.10. This isn’t so bad for vertical scrolling because I use the touchpad to scroll. For horizontal scrolling, it is a huge pain. I’ve never liked the overlays, and they don’t really fit well with Cinnamon.

Here’s how to set the scrollbars back to normal:

$ gsettings set com.canonical.desktop.interface scrollbar-mode normal

If you discover the normal mode causes problems for you, you can reset it to the overlay:

$ gsettings set com.canonical.desktop.interface scrollbar-mode overlay-auto

Or, if you prefer a GUI, open dconf-editor and expand the node on the left to com→canonical→desktop→interface, then select the scrollbar style you like best.

Tizen SDK 2.2.1 on Ubuntu 13.10

I’ve registered for the Tizen Seattle Devlab & Hack, which takes place December 6-7, 2013. To prepare, I downloaded and installed the Tizen SDK. I’m really enjoying it so far, but I had a couple of issues with installation and startup on Ubuntu 13.10.

Remove Google Talk

First of all, I had issues starting the Tizen IDE. It turned out that I needed to uninstall the Google Talk plugin:

$ sudo apt-get remove google-talkplugin

Install Oracle JDK

Then, I had to install Oracle’s JDK as mentioned on Tizen’s prerequisites page.

$ sudo add-apt-repository ppa:webupd8team/java
$ sudo apt-get update
$ sudo apt-get install oracle-java6-installer

The last thing to do is to make sure your system is configured to use Oracle’s JDK instead of OpenJDK.

$ sudo update-alternatives --config java

Just select the number of the java6 JDK. For example, in the following display you would choose 2 to switch from Oracle JDK7 to JDK6:

There are 4 choices for the alternative java (providing /usr/bin/java).

  Selection    Path                                            Priority   Status
------------------------------------------------------------
  0            /usr/lib/jvm/java-6-oracle/jre/bin/java          1076      auto mode
  1            /usr/bin/gij-4.8                                 1048      manual mode
  2            /usr/lib/jvm/java-6-oracle/jre/bin/java          1076      manual mode
  3            /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java   1071      manual mode
* 4            /usr/lib/jvm/java-7-oracle/jre/bin/java          1075      manual mode

Press enter to keep the current choice[*], or type selection number: 

Note: This seems to work with both Oracle’s JDK6 and JDK7. If you’d like to use JDK7 instead of the suggested prerequisite, you can follow the same steps above but instead install version 7 of the JDK:

$ sudo apt-get install oracle-java7-installer

Link libudev.so.1 to libudev.so.0

After all of these steps, you should be able to start up the Tizen IDE (which is a modified Eclipse IDE). However, I had a message that sdb failed to start. This will prevent you from starting an application on an emulator (you can still start on the web simulator for a web application). The sdb issue is caused by a newer libudev library version installed from Ubuntu 13.10’s repository. The fix (which you can also find on askubuntu) is to symlink the newer version of the shared library to the expected version:

$ # 32-bit:
$ sudo ln -sf /lib/i386-linux-gnu/libudev.so.1 /lib/i386-linux-gnu/libudev.so.0
$ # 64-bit:
$ sudo ln -sf /lib/x86_64-linux-gnu/libudev.so.1 /lib/x86_64-linux-gnu/libudev.so.0
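If you’d like to rehearse what the command does before touching system libraries, the same pattern works in a throwaway directory (the file names below are stand-ins; on a real system use the /lib/... paths above):

```shell
# Rehearse the fix in a scratch directory -- safe to run anywhere.
demo=$(mktemp -d)
cd "$demo"
touch libudev.so.1                 # stands in for the installed library
ln -sf libudev.so.1 libudev.so.0   # -s symbolic link, -f replace if it exists
readlink libudev.so.0              # prints: libudev.so.1
```

Because the link is symbolic, a later package upgrade that replaces libudev.so.1 is picked up automatically by anything loading libudev.so.0.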

Try Tizen SDK

The download of the Tizen SDK is available here.

Functional Thinking is… functional

Submitted at: O’Reilly
Pros:
Concise, Easy to understand, Accurate, Helpful examples
Best Uses:
Intermediate, Expert, Novice, Student
Describe Yourself:
Developer

I’ve reviewed Neal Ford’s “Functional Thinking” video series as part of O’Reilly’s bloggers program and I must say it is one of the best videos I’ve seen.

I’m familiar with functional programming techniques from other languages, but I wish I had originally learned some of the concepts from this video. For example, when Neal gives an example of currying versus partial application, I felt like any developer I know would fully understand the difference between the two (not an easy feat).
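For readers who haven’t seen the distinction, here is a quick sketch in JavaScript (my own example, not one from the video):

```javascript
// Currying vs partial application, illustrated on a 3-argument function.
function add(a, b, c) { return a + b + c; }

// Currying: transform the function into a chain of single-argument functions.
function curry3(f) {
  return function (a) {
    return function (b) {
      return function (c) { return f(a, b, c); };
    };
  };
}
var curriedAdd = curry3(add);
console.log(curriedAdd(1)(2)(3)); // 6

// Partial application: fix some arguments now, supply the rest later.
var add5 = add.bind(null, 2, 3);  // a=2 and b=3 already applied
console.log(add5(10)); // 15
```

In short, currying restructures the function itself (arity becomes a chain of 1s), while partial application just pre-fills some arguments of an existing function.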

Neal covers quite a bit of ground in this video, and he does it in a way that is easy to follow and clearly understand. One of the things I liked best about this video series is how examples are given in a format of first demonstrating code in an object-oriented Java snippet followed by a transitional snippet that is somewhat more functional, then a final fully functional example in one or more functional languages.

That said, I also think this video series could have done without all of the Clojure examples. It was nice to see Groovy, Scala, and some functional Java examples. I feel the material would have been a little more effective for someone unfamiliar with functional programming without the added shock of trying to decipher a Lisp-style language.

I especially enjoyed Neal’s discussion about the paradigm shift from object-oriented programming to more functional programming. He is able to support this with concrete examples of functional concepts like map/fold, functional data structures, preferring composition over inheritance, and a lot more that I think any developer of any experience level would benefit from watching.

I’d recommend this video series to everyone interested in functional programming.

Developer James Schubert shares his code and his thoughts.