Tag Archives: Advanced

[javascript] Be sure to read documentation carefully…

I have a love-hate relationship with JavaScript. I love the language, but I hate the idiosyncrasies. And like many others, I use MDN pretty regularly to verify function signatures, browser support, or to find example code or shims.

The recently posted article on MDN, “A re-introduction to JavaScript”, demonstrates something I wish JavaScript developers would stop demonstrating: cleverness. The article links to one of Crockford’s posts about JavaScript being the most misunderstood language. Sure, there are quirks, but they’re defined quirks in most cases.

I showed a coworker an example of an oddity in JavaScript the other day.

var x = "016";
var y = 0|x;
// 16

var x = "0o16";
var y = 0|x;
// 14


This example works differently in ECMAScript 3 and ECMAScript 5, as beautifully explained by CMS on StackOverflow.
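The difference is easy to see in a modern engine. This quick sketch shows the results on a current (ES6-era) engine; an ES3-era engine would behave differently, which is exactly the point:

```javascript
// ToNumber ignores a leading zero, so "016" is treated as decimal 16.
console.log(0 | "016");            // 16

// ES6 engines accept the 0o octal prefix in string-to-number conversion:
// 0o16 (octal) is 14 (decimal). ES5 engines produce NaN here, so 0|x gives 0.
console.log(0 | "0o16");           // 14

// parseInt never understands the 0o prefix; it stops parsing at the "o".
console.log(parseInt("0o16", 10)); // 0
```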

Those are ‘gotchas’ of the language that you just have to know. Yes, it sucks. The big problem is when people share code that attempts to be ‘faster’ or ‘more efficient’. Not only do JavaScript developers have to weed out unexpected behavior from the language, but they also have to be diligent to not assume every other JavaScript developer is sharing bug-free code.

This is what you’d see if you were skimming through the recent re-introduction article:


If you’re in a hurry, you’ll probably not see the important bit of text immediately following the examples:

Here we are setting up two variables. The assignment in the middle part of the for loop is also tested for truthfulness — if it succeeds, the loop continues. Since i is incremented each time, items from the array will be assigned to item in sequential order. The loop stops when a “falsy” item is found (such as undefined).

Note that this trick should only be used for arrays which you know do not contain “falsy” values (arrays of objects or DOM nodes for example). If you are iterating over numeric data that might include a 0 or string data that might include the empty string you should use the i, len idiom instead.

If you’re like me, skimming the article for anything interesting, you’ll probably have missed that text too, but immediately caught the possible bugginess of the “nicer idiom”. A less experienced skimmer will see the “nicer idiom”, think it’s clever, and use it somewhere it shouldn’t be used. That’s just how the JavaScript community works, even when code on a trusted website like MDN doesn’t work in a given situation.

For a clear example of the problem, consider this code that uses the “nicer idiom” in the wrong way:

// Correction on Mozilla's 're-introduction'
// https://developer.mozilla.org/en-US/docs/Web/JavaScript/A_re-introduction_to_JavaScript
var arr = [],
    terminate = 20,
    i = 0;

while (i <= terminate) {
    arr[i] = i++;
}

delete arr[5]; // leaves a hole (undefined) at index 5

// NOTE: arr[0] is 0, which is falsey. This loop exits immediately and prints nothing!!!
// Change idx = 1 and this terminates at index 5.
for (var idx = 0, item; item = arr[idx++];) {
    console.log(item);
}

// proper iteration
for (var i = 0; i < arr.length; i++) {
    var item = arr[i];
    console.log(item);
}

Play with this in jsfiddle

The issue here is that the array must not contain falsy values. That includes the undefined holes in sparse arrays, null, zero, the empty string, NaN, false, document.all, or any other value that coerces to false. The proper way to iterate over an array is to explicitly visit every index unless you have a clearly defined and documented reason not to continue through the array.
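To make the failure mode concrete, here’s a small sketch with ordinary numeric data (the scores array is my own example, not from the MDN article):

```javascript
// Legitimate numeric data that happens to contain a falsy 0.
var scores = [3, 2, 0, 5];

// The "nicer idiom" silently stops at the 0 and drops everything after it.
var seen = [];
for (var i = 0, item; item = scores[i++];) {
  seen.push(item);
}
// seen is [3, 2]

// Explicitly checking the length visits every element.
var all = [];
for (var j = 0; j < scores.length; j++) {
  all.push(scores[j]);
}
// all is [3, 2, 0, 5]
```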

The lesson to learn here is twofold: actually read documentation and articles, and don’t just assume JavaScript code is usable code.


Review: Microsoft .NET – Architecting Applications for the Enterprise, 2nd Edition

I’ve recently finished reading Microsoft .NET – Architecting Applications for the Enterprise, 2nd Edition by Dino Esposito and Andrea Saltarello. This book caught my attention for two reasons. First, I really enjoy software architecture. Second, I’ve read a lot of Dino’s work and have always found it enjoyable.

The book gives examples of a few architectural patterns like Domain Driven Design (DDD), Command Query Responsibility Segregation (CQRS), and Event Sourcing. It starts with a pretty heavy focus on DDD and seems to assume that you’ve read *The* Domain Driven Design book by Eric Evans. I’ve only skimmed through that book, but it was enough to get many of the references. This book does cover the high-level definitions of DDD in a way that lets the reader jump right in. The accompanying source code even provides the same example in DDD and CQRS (the two main architectures being discussed).

There were two major things I liked about this book. I really enjoyed how the book isn’t presented as ‘this is the best software architecture, so you should use it.’ Things are presented in a way that clearly states the authors’ position on each architecture. For example, DDD is discussed as the typical go-to architecture for enterprise systems. It’s a proven pattern that ‘just works’ in many cases. The discussion of CQRS clearly identifies some weaknesses in DDD and supplies an alternative with the explicit caveat that a CQRS-based architecture will change how you think about data.

The other main point I liked about the book is how real-world the discussions are. I usually get that feeling from Microsoft Press books. Early on the authors say, “To design a system that solves a problem, you must fully understand the problem and its domain.” In one of my previous positions, one of the first things I asked when joining the team was, “How have you documented what we’re trying to build?” I received a weird non-answer from the team’s senior developer. I went to the team’s manager and asked the same thing so I could understand the requirements gathering and technical design analysis that went into the product. He said, “We don’t have any of that — we’re Agile.” I thought he was joking, but he wasn’t. Obviously it would strike a chord with me when the authors said, “Agile architecture is sometimes presented as an oxymoron, like saying that if you’re agile you don’t do any architecture analysis, you just start coding…” Elsewhere they described the businessman-developer disconnect that occurs when requirements are incomplete, which is exactly what happened on the previous team I mentioned. It’s like I always say, “If you don’t know where you’re going, how will you get there?”

Another aspect of the book that I think can get buried in reading is the point about developing for a task-based user experience. Too often, developers start coding at the database and work their way up to the user interface. Without a clean architecture, this can lead to situations where a change to a column in your relational database ripples all the way up to the UI. That’s silly. With a good architecture that clearly identifies boundaries in code, you inherently create more reusable and more maintainable code. I think this should be everyone’s main goal while writing code.

I also thought it was cool how the book and accompanying example code demonstrate using a NoSQL database (RavenDB) for an event sourcing application. While I don’t know where I’d ever use Event Sourcing over a more common architecture (mainly because I have to account for the expertise of other developers), I really like the Event Sourcing pattern. Now that Akka.NET is gaining traction, I wonder if Event Sourcing will become the way of the future.

One minor issue I had with the companion content was that it didn’t run as-is. There was a problem with the NuGet packages and MVC5 for which I found a solution at https://naa4e.codeplex.com/workitem/1.

I would definitely recommend this book for senior developers and engineers. It provides concrete examples and guidance from experienced architects toward more solid enterprise application design.

You can download the companion code for the book at https://naa4e.codeplex.com/


Tips for debugging a WiX MSI

WiX is an excellent technology that simplifies the creation of MSI files using an XML abstraction on top of the Windows Installer APIs. WiX doesn’t change the underlying MSI architecture, though, and that architecture can be a huge pain in the ass sometimes. I thought I would share some tips for debugging what’s going on with an MSI. These tips aren’t specific to WiX; that’s just the technology I’m familiar with.


Verbose Logging

The first thing you should try is to add the log switch when you run your installer. To do this, open a command prompt in the directory where your MSI file is located and run it with the following parameters:

msiexec /i yourinstaller.msi /L*v install.log

The /L*v says to log verbose messages to a file.

For a full list of msiexec switches, check out MSDN’s documentation for msiexec.

Re-cache MSI

If you think your package was somehow corrupted or you’ve made a simple change such as updating a file’s guid or a feature condition, you can re-cache the package with the following command:

msiexec /fv yourinstaller.msi

This is generally used when you install a product and then think “oops, I forgot something…”. You don’t want to create a whole new release, so you can update the installed/cached MSI.

Be careful to read the documentation. With the /f parameter, you can’t pass properties on the command line.

Increased Debug Messages

You can actually increase the amount of messages exposed to MSI logs by setting a machine policy for Windows Installer.

Open regedit and go to HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\Installer (you may need to create this key). Add a DWORD value called Debug with a value of 7.

The value of 7 will write all common debugging messages to the log as well as command line information. If you’re passing sensitive data to the installer on the command line, be warned that this information gets dumped via the OutputDebugString function. This means that anyone with physical access to the machine or access to the machine via TCP/IP may be able to read these messages. The documentation for the policy settings warns that this setting is for debugging purposes only and may not exist in future versions of Windows Installer.

For a full list of available settings, check out Microsoft’s Windows Installer Machine Policies.

System Debugging

Passing the logging switch and log file name every time you run an MSI can be annoying. You can set the Logging machine policy to have Windows Installer write a log to a temp file, but then you have to track down the temp file.

Instead, you can run Sysinternals’ DebugView. This utility allows you to catch debugging information exposed by the Win32 OutputDebugString function on local and remote machines. This doesn’t just gather information from MSIs. For example, you can open a TFS query editor in Visual Studio 2013 and inspect the queries made via TFS.


Orca

Lastly, the Windows SDK contains a file called Orca.msi. Orca is a tool to inspect the contents of an MSI. An MSI is essentially just a database of tables and fields with values. You can open an existing MSI, edit a condition, and save it. This can make investigation a little easier. For instance, if you’re trying to figure out why a condition on a custom action doesn’t seem to be passing, you can edit the MSI and modify the condition instead of rebuilding the MSI. It’s also helpful if you’re trying to figure out how another MSI implements some logic.

I’m sure there are plenty of other techniques for debugging MSI files.


io.js 1.0.1 (unstable) is out. ES6!

A team of developers, including some core node.js developers, forked node.js a while ago into a project called io.js. They’ve just released version 1.0.1 (Unstable) and it brings ES6 natively to server-side JavaScript applications.

To test drive, you can download io.js. Be warned, it will completely replace your node and npm system links. Some people have complained about this or reported unexpected behavior after compiling io.js from source.

“Why io.js?” you ask? Joyent was a little stingy with node.js progress (it’s the basis for a huge money-maker for them). The io.js team’s goal is to make io.js more of a community-driven project. It seems to have worked, judging by the progress they’ve made.

Version 1.0.0 of io.js ships with a newer V8 that includes ES6 features well beyond the 3.26.33 release that will ship with joyent/node@0.12.x.

No more --harmony flag

With io.js, you can now start using some ES6 features natively (e.g. no transpiling using Traceur or 6to5). Here’s the fibonacci example using an ES6 iterator taken from 6to5’s tour:

// fib.js
var fibonacci = {};
fibonacci[Symbol.iterator] = function*() {
  var pre = 0, cur = 1;
  for (;;) {
    var temp = pre;
    pre = cur;
    cur += temp;
    yield cur;
  }
};

for (var n of fibonacci) {
  // truncate the sequence at 1000
  if (n > 1000) {
    break;
  }
  process.stdout.write(n + '\n');
}

You’ll notice that I’ve had to change how an iterator is constructed and how the output is written. Run this with node fib.js (remember, io.js will overwrite the system node).

While this is excellent news, you’ll need to be aware of what ES6 features are currently supported. For example, io.js does not yet support classes. This won’t work in io.js:

class Animal {
  constructor(legs) {
    switch (legs) {
      case 2:
        this.locomotion = 'bipedal';
        break;
      case 4:
        this.locomotion = 'quadrupedal';
        break;
    }
  }

  move() {
    console.log("I'm a %s animal!", this.locomotion);
  }

  static getTypeName() {
    return "Animal";
  }
}

class Bird extends Animal {
  constructor() {
    super(2);
  }
}

class Lion extends Animal {
  constructor() {
    super(4);
  }
}

var bird = new Bird();
bird.move();

var lion = new Lion();
lion.move();

This will, however, work via transpilers:
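The article didn’t include the transpiled output, but it looks roughly like this hand-written sketch (simplified; real 6to5 output adds helper functions):

```javascript
// Approximation of the ES5 a transpiler produces for the Animal/Bird classes.
function Animal(legs) {
  switch (legs) {
    case 2:
      this.locomotion = 'bipedal';
      break;
    case 4:
      this.locomotion = 'quadrupedal';
      break;
  }
}

// Instance methods go on the prototype; static methods go on the constructor.
Animal.prototype.move = function () {
  console.log("I'm a %s animal!", this.locomotion);
};

Animal.getTypeName = function () {
  return "Animal";
};

// "extends" becomes prototype chaining; "super(2)" becomes Animal.call(this, 2).
function Bird() {
  Animal.call(this, 2);
}
Bird.prototype = Object.create(Animal.prototype);
Bird.prototype.constructor = Bird;

var bird = new Bird();
bird.move(); // I'm a bipedal animal!
```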

These examples are on github.


Decorating directives with isolate scope in Angular 1.3

A question from reddit piqued my interest. Angular 1.3 has apparently broken the ability to decorate directives with isolate scope.

I tried versions of Angular between the known-working 1.2.26 and the broken 1.3.0, and found the issue was introduced in 1.3.0-rc.2.

The way to get around this is hacky:

app.config(function($provide) {
    $provide.decorator('mycustomDirective', ['$delegate', function($delegate) {
        var directive = $delegate[0];
        // reach into the directive's private isolate bindings
        directive.$$isolateBindings['othervar'] = {
          attrName: 'othervar',
          mode: '=',
          optional: true
        };
        return $delegate;
    }]);
});

Here’s a working example: http://plnkr.co/edit/CRxhX6?p=preview


Your interview questions probably suck

I have interviewed somewhere between 200 and 300 people in my professional career. I’ve learned a lot from the process. In fact, I am planning to write a short book containing some of this “knowledge” about interviewing.

One thing I’ve always had an odd distaste for is interview coding questions based on some ridiculous scenario. For example: “There’s no such thing as a queue, create a queue using two stacks.” or “There’s no such thing as arrays, create a queue with something else.”

Two things immediately happen when presented with this kind of question:

  1. I get pissed
  2. I feel sorry for the interviewer


These types of questions are an attempt to trip you up. “Quick, think outside the box!” These questions are actually really easy, but the intentional psychological disorientation is like a verbal punch to the face. These types of questions are actually no different from saying “I need you to write me a program, but there’s no such thing as keyboards.” It can be done, but the problem has already been solved. This is why I get pissed.

We live in the real world where engineers have access to a wealth of information when we’re presented with a problem. I’ve interviewed plenty of people virtually and told them that they could use the internet if they wanted to while solving a problem. I’m realistic.

Early in my career, I gave an interview candidate a question based on an unrealistic scenario. I admittedly found this question while researching online for questions used by big software companies. The dude aced the question and was hired. He sucked as a programmer. Unrealistic interviews often yield unexpected employees. This is why I feel bad for interviewers who haven’t yet understood this. In case the message isn’t clear: unrealistic coding questions are no gauge of actual ability.

Consider the state of mind of someone interviewing for a position. If your company isn’t hugely known, your candidates can be in any number of mindsets, including:

  • Passively open to new opportunities – not 100% interested in your company or the position, possibly only interviewing to remain ‘in the game’
  • Passively looking for new opportunities – candidate is casting a line but not expecting any bites
  • Actively looking with no other prospects – your interview is the one this candidate wants (or needs)
  • Actively looking with plenty of prospects (and possibly pending offers) – the candidate may be looking for more money or just there to ‘see how it goes’
  • Not looking – maybe recruited by your company, which increases the feeling of expectations in the candidate

In all of these scenarios, there’s an element of uncertainty, which causes the brain to do some funny things (see the behavioral science field of neuroeconomics). You’re putting a candidate into an unfamiliar place by having that candidate interview with your company. If the candidate comes into your office, the unfamiliar setting heightens the sense of risk and uncertainty, which hinders rational thought to begin with. As if this isn’t enough, interviewers asking these ridiculous questions take away yet another familiar thing from the candidate (even worse if the candidate is asked to whiteboard the answers or use an unfamiliar coding environment). This state of mind is nearly unavoidable. The loss aversion affecting candidates who are dying for the position only furthers the problem. These feelings are magnified if you work at a company like Expedia, where many candidates would do almost anything to work with you. Many developers are also not particularly social, making the interaction even more unwelcoming. There’s a lot on a candidate’s plate, and you’re probably making it worse.

The only way I’ve found, as an interviewer and as an interviewee, to overcome some of this dampening mindset is to understand that an interview is a two-way street. Your company is interviewing a candidate because you need someone good on your team, but a lucid candidate is also interviewing you to understand whether your company is where they want to be for a while. From this perspective, you’re doing both sides a disservice by asking asinine questions. I’ve been told in the past that questions like these are designed to show how well someone can ‘think on their feet.’ That’s just bullshit. You can think on your feet no matter what data structures are available to you. A candidate is less likely to be able to think on their feet when their ‘fight or flight’ instincts are engaged or they’re afraid of failing an interview miserably. One could argue that the industry is constantly in flux and developers must always adapt; the fact is that the industry is constantly improving, while ridiculous scenarios are based on unrealistic regressions.

It’d be foolish to not acknowledge why these types of questions are asked. They’re often asked to demonstrate knowledge of data structures and algorithms. They’re often asked to evaluate how well a candidate can talk through a problem and come to a solution. If you’re asking such a question, allow your candidate to speak. I’ve been in multiple interviews where I just wanted to walk out because the interviewer literally wouldn’t shut up and let me finish one sentence. There’s a difference between being helpful and being rude. If you’re being rude, not only will it make a possibly good interview go poorly but it will make a smart candidate decide against your company. You may both be at a loss.

Then, there’s the level of unrealism of your question. For instance, if there’s no such thing as arrays… how is the computer operating? If there are no stacks, queues, lists, or arrays, can the candidate assume there is no such thing as contiguous memory? If there are no arrays, can I access a .Keys property of some other data structure? Are there functionally no enumerable structures?! I know I’m being ridiculous, but so is your question.

So what can you do as an interviewer?

First, put yourself in your candidate’s shoes. Don’t just put yourself in the shoes of someone who has worked at your company for 8 years and can attack any problem with a fully clear head. Put yourself in the shoes of a candidate who feels completely naked in an unfamiliar interrogation room, with something they either want or need dangling in front of them by the thinnest of threads. Your interview question probably sucks.

Second, be more realistic. Whether you’re doing a whiteboard exercise, phone screen, or virtual coding exercise, tell the candidate that they’re more than welcome to search the internet for assistance. This will immediately put your candidate at ease. I’ve told many candidates this and I’ve only had one or two people actually use the internet. Something this simple actually works wonders to unveil the true nature of a candidate. If you end up with a candidate who searches for a solution and solves your problem immediately, this means that you either have a problem that’s no problem at all (easily accessible on the internet) or that you’ve found a candidate that can get things done quickly. This will naturally lead you as an interviewer to come up with an original and realistic coding question for the next candidate.

The challenge here is to not ask puzzlers like “all numbers occur an even number of times except one, find it.” Instead, ask a question like “model an ATM” or “create a utility that dynamically loads types from every assembly in a directory.” You could ask a more advanced question like “Write a producer-consumer using a BlockingQueue” or “Write a SIP header parser”. Don’t ask questions like “Solve this obscure IIS problem that took me 5 days of research to figure out” (paraphrasing, but I’ve seen this happen). If you’re going to ask algorithm and data structures questions and you want to evaluate an area the candidate may not be familiar with, consider a reverse polish notation evaluator and the Shunting Yard algorithm.
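For reference, the reverse polish notation question has a compact answer. Here’s a minimal sketch of an evaluator (my own illustration: space-separated tokens, only the four basic operators, no error recovery):

```javascript
// Minimal reverse-polish-notation evaluator.
// Numbers are pushed onto a stack; operators pop two operands and push the result.
function evalRpn(expr) {
  var stack = [];
  expr.split(/\s+/).forEach(function (token) {
    if (/^-?\d+(\.\d+)?$/.test(token)) {
      stack.push(parseFloat(token));
      return;
    }
    var b = stack.pop(), a = stack.pop();
    switch (token) {
      case '+': stack.push(a + b); break;
      case '-': stack.push(a - b); break;
      case '*': stack.push(a * b); break;
      case '/': stack.push(a / b); break;
      default: throw new Error('Unknown operator: ' + token);
    }
  });
  return stack.pop();
}

// "3 4 + 2 *" means (3 + 4) * 2
console.log(evalRpn('3 4 + 2 *')); // 14
```

A natural follow-up is to ask the candidate to convert infix to RPN, which is where the Shunting Yard algorithm comes in.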

I was prompted to write this after reading an article on Programmers StackExchange. The question and answers are a pretty good read to get both sides of the interviewing wall.

Just to be clear: I am all for asking complex interview questions. In fact, I think most actual coding questions should be questions that can’t be written in the allotted time (a realistic, yet unfortunate, software management approach). I think it’s fine to make candidates sweat a little because of complexity, but it’s a little unfair to make a candidate sweat because of unfamiliarity.

I’d be happy to hear your thoughts so I can improve as an interviewer.


Your First App: Node.js is complete!

I’m stoked to announce that I’ve finished writing my first self-published book, Your First App: node.js.

The book is a full-stack application development tutorial using what is commonly known as the ‘MEAN’ stack, but with a heavier focus on the backend technologies: node.js, express.js, and MongoDB. The book ends with a starter AngularJS project of an HTML5 instant messaging chat application.

While following along in the book, it may be useful to run test queries in a REST client. Postman is available for Google Chrome as a packaged application. Import `yfa-nodejs.postman_dump.json` (located in the root of the supplemental code’s repository) into Postman to evaluate the application’s API.

Check out the code on GitHub


Data at the root level is invalid. Line 1, position 1.

Recently, I encountered a really weird problem with an XML document. I was trying to load a document from a string:

var doc = XDocument.Parse(someString);

I received this unhelpful exception message:

Data at the root level is invalid. Line 1, position 1.

I verified the XML document and retried two or three times with and without the XML declaration (both of which should work with XDocument). Nothing helped, so I googled for an answer. I found the following answer on StackOverflow by James Brankin:

I eventually figured out there was a byte mark exception and removed it using this code:

string _byteOrderMarkUtf8 = Encoding.UTF8.GetString(Encoding.UTF8.GetPreamble());
if (xml.StartsWith(_byteOrderMarkUtf8))
    xml = xml.Remove(0, _byteOrderMarkUtf8.Length);

This solution worked. I was happy. I discussed it with a coworker and he had never heard of a BOM character before, so I thought “I should blog about this”.

Byte-Order Mark

The BOM is the sequence of bytes returned by Encoding.UTF8.GetPreamble(). Microsoft’s documentation explains:

The Unicode byte order mark (BOM) is serialized as follows (in hexadecimal):

  • UTF-8: EF BB BF
  • UTF-16 big endian byte order: FE FF
  • UTF-16 little endian byte order: FF FE
  • UTF-32 big endian byte order: 00 00 FE FF
  • UTF-32 little endian byte order: FF FE 00 00

Converting these bytes to a string (Encoding.UTF8.GetString) allows us to check whether the xml string starts with the BOM. The code then removes that BOM from the xml string.
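The same check translates almost directly to other platforms. For comparison, here’s an equivalent sketch in node.js (my own illustration, not from the original StackOverflow answer):

```javascript
// The UTF-8 BOM is the three bytes EF BB BF; once decoded, it becomes the
// single character U+FEFF at the start of the string.
function stripBom(text) {
  return text.charCodeAt(0) === 0xFEFF ? text.slice(1) : text;
}

// Raw bytes: BOM followed by the five ASCII characters "<xml>".
var raw = Buffer.from([0xEF, 0xBB, 0xBF, 0x3C, 0x78, 0x6D, 0x6C, 0x3E]);
var decoded = raw.toString('utf8');

console.log(decoded.length);    // 6 (the BOM decodes to one character)
console.log(stripBom(decoded)); // <xml>
```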

A BOM is just a few bytes, so what? What does it do?

From Wikipedia:

The byte order mark (BOM) is a Unicode character used to signal the endianness (byte order) of a text file or stream. It is encoded at U+FEFF byte order mark (BOM). BOM use is optional, and, if used, should appear at the start of the text stream. Beyond its specific use as a byte-order indicator, the BOM character may also indicate which of the several Unicode representations the text is encoded in.

This explanation is better than the explanation from Microsoft. The BOM is (1) an indicator that a stream of bytes is Unicode and (2) a reference to the endianness of the encoding. UTF-8 is agnostic of endianness (reference), so the fact that the BOM is there and causing problems in C# code is annoying. I didn’t research why the UTF-8 BOM wasn’t stripped from the string (the XML comes directly from SQL Server).

What is ‘endianness’?

Text is a string of bytes, where one or more bytes represents a single character. When text is transferred from one medium to another (from a flash drive to a hard drive, across the internet, between web services, etc.), it is transferred as a stream of bytes. Not all machines understand bytes in the same way, though. Some machines are ‘little-endian’ and some are ‘big-endian’.

Wikipedia explains the etymology of ‘endianness’:

In 1726, Jonathan Swift described in his satirical novel Gulliver’s Travels tensions in Lilliput and Blefuscu: whereas royal edict in Lilliput requires cracking open one’s soft-boiled egg at the small end, inhabitants of the rival kingdom of Blefuscu crack theirs at the big end (giving them the moniker Big-endians).

For text encoding, ‘endianness’ simply means ‘which end goes first into memory’. Think of this as a direction for a set of bytes. The word ‘Example’ can be represented by the following bytes (example taken from StackOverflow):

45 78 61 6d 70 6c 65

‘Big Endian’ means the first byte goes into memory first, so the bytes sit in the order they’re written:

45 78 61 6d 70 6c 65

‘Little Endian’ means the small end goes into memory first, so the byte order is reversed:

65 6c 70 6d 61 78 45

So, when ‘Example’ is transferred as ‘Big Endian’, it looks exactly like the original byte sequence:

45 78 61 6d 70 6c 65

But, when it’s transferred as ‘Little Endian’, it looks like this:

65 6c 70 6d 61 78 45

Users of digital technologies don’t need to care about this, as long as they see ‘Example’ where they should see ‘Example’. Many engineers don’t need to worry about endianness either, because it is abstracted away by many frameworks to the point of only needing to know the type of encoding (UTF-8 vs UTF-16, for example). If you’re into network communications or dabbling in device programming, though, you’ll almost definitely need to be aware of endianness.
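Where endianness really bites is multi-byte values. The same bytes read as two very different 32-bit integers depending on byte order, as this quick node.js illustration (my own, for demonstration) shows:

```javascript
// Four bytes that represent the 32-bit integer 42 in big-endian order.
var buf = Buffer.from([0x00, 0x00, 0x00, 0x2A]);

// Big-endian: most significant byte first.
console.log(buf.readUInt32BE(0)); // 42

// Little-endian: the same bytes are interpreted as 0x2A000000.
console.log(buf.readUInt32LE(0)); // 704643072
```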

In fact, the endianness of text isn’t constrained by the system interacting with the text. You can work on a Big Endian operating system and install VoIP software that transmits Little Endian data. Understanding endianness also makes you cool.


I don’t have any code to accompany this post, but I hope the discussion of BOM and endianness made for a great read!
