
Google Now, the new NSA

I’m going to make a bold claim here, but I don’t care. Google Now is essentially a mechanism to spy on people and I think everyone should disable the feature immediately.

I’ve had a lot of respect for Google over the years since I started using Google Search back in 1998. I really appreciated the “do no evil” mantra, as well as the focus on open source technology and giving back to the community.

Then something happened and a lot of free services started getting cancelled. Next there seemed to be a huge push toward paid goods and services (Play Store, Play Music, Fiber, Fi, and now YouTube Red). Meanwhile, many of the open source technologies seem to be self-serving: sure, I like AngularJS and am intrigued by Go, but I have no reason to believe Google won’t drop these like hot potatoes.

Despite the direction of the company, back in 2012 I decided to try my hand at interviewing with Google. It was the rudest interview I’ve had to date. At the time, I only had .NET programming experience in my professional career. The interviewer obviously hadn’t researched me as a candidate to understand that my passion lies in Unix-based operating systems and open source technologies*. The interviewer’s introduction went like this:

So… you’re a .NET Developer? *3s long sigh* Uh.. yeah.. I guess let’s get started.

* After an 18 month hiatus, I returned to a .NET position solely because Microsoft announced plans to open source .NET. That’s how serious I am about it.

I’m sure there’s a bad egg or two at Google, but my experiences haven’t left a positive impression on me. Then, this happened…

I’m a software architect, working on a new web module for club management software. On October 6, I entered some tasks into our project management system by listing out a rough model of some things to code. A few of these tasks included variations on the word “slots”, since this is a module for scheduling players to tees at a golf course.

On October 22, I got around to the task related to implementing slots. I looked over my initial model and felt like there was something missing, so I discussed it verbally with a veteran programmer at the company. This was a 15 minute, in person conversation in which the word “slot” probably came up 50 or 60 times. Later that night, while browsing Google’s Play Store on my phone:

Google is always listening.

What. The. Hell.

I immediately texted that screenshot to the guy I talked with that day. We both thought it was weird and a little creepy.

I had previously noticed ads targeted based on things I knew I’d had conversations about, but thought little of it. “I’m sure I googled it if I was talking about it,” I’d tell myself. This time, I’m 100% positive I hadn’t googled anything about a ‘slot’. The concept of a ‘slot’ in the system I’m working with is an abstract way to relate a time (07:15am) to a tee (Tee 1) on a certain day (10/22/2015). I would have had about as much reason to search for ‘slot’ as I would have to search for ‘turkey sandwich’. Just to be sure, I reviewed my search history at https://www.google.com/history as well as locally in my browser. As I thought, there was no recent activity for the word ‘slot’ aside from 10-15 entries on 10/16 about an Amazon deal of the day.


Being a technologist has its perks. I had recently installed CyanogenMod on my phone and it exposes functionality to individually manage permissions for an app. I went to Settings -> Apps -> Google App and clicked the ‘Modify’ button. I then disabled permissions for the microphone (which at the time had been allowed thousands of times). I started watching TV to let my phone sit with the screen off. After 19 minutes, I checked back and the Google App microphone permission had been denied 43 times. In 19 minutes.

I left the ‘OK Google’ detection enabled on my phone for the next day to see how much Google was actually listening to me. This turned out to be 1185 times.


But, this whole “Google is a bad guy” thing is just conspiracy theory, right?

Consider the fact that within the past year, Google introduced “Google Now” into their Chrome browser. Earlier this year, users were outraged to find that a secret component in the browser allowed Google to listen to everything going on in their homes and offices. Google’s response was that they don’t care about your conversations and that the recording is only available on google.com or on the new tab page of the browser. Basically, “Don’t worry about it.”

But then, why do they hold a patent for delivering ads based on environmental conditions and background noise? The first image of the patent clearly displays a “client terminal” as a browser or mobile device.

There are a few issues with the Google Now service and the patent that Google holds for targeting ads based on my ‘environmental conditions’. First of all, I opted into letting Google listen to my microphone at regular intervals for “OK Google”. I don’t deny that. I did not opt into Google using my conversations while my phone is off to deliver personalized ads. Maybe I’m wrong and these targeted ads weren’t based on audio recorded while my phone was laying, turned off, on my desk at work while I was having a conversation? If Google Chrome is the culprit, the same issue exists. I’ve opted into Google targeting ads based on my search history. I did not opt into Google intercepting every keystroke in my browser (especially not for private intranet sites) to target me with ads.

Google’s service is apparently still part of the Chrome browser (navigate to chrome://voicesearch and see for yourself), but I don’t see a way to disable it. From what I can tell from searching, Google has at least temporarily removed Google Now functionality from Chrome. I can’t tell whether or not the browser still has access to randomly record audio while it’s open.

I wouldn’t be so bothered by this, except that Google hasn’t been completely transparent about what’s being recorded from my phone or browser and transferred to their servers. What if I worked in the US Government? How could Google not be held accountable for their actions when the NSA’s actions have been deemed unconstitutional?

You can disable many of Google’s search and ad targeting services in your Account Settings. I link to it directly because, as you probably guessed, it’s not exactly easy to find.

Your interview questions probably suck

I have interviewed somewhere between 200 and 300 people in my professional career. I’ve learned a lot from the process. In fact, I am planning to write a short book containing some of this “knowledge” about interviewing.

One thing I’ve always had an odd distaste for is interview coding questions based on some ridiculous scenario. For example: “There’s no such thing as a queue, create a queue using two stacks.” or “There’s no such thing as arrays, create a queue with something else.”

Two things immediately happen when presented with this kind of question:

  1. I get pissed
  2. I feel sorry for the interviewer


These types of questions are an attempt to trip you up. “Quick, think outside the box!” The questions themselves are actually really easy, but the intentional psychological disorientation is like a verbal punch to the face. They’re no different from saying “I need you to write me a program, but there’s no such thing as keyboards.” It can be done, but the problem has already been solved. This is why I get pissed.
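For what it’s worth, here’s how small the “queue from two stacks” answer actually is. This is a minimal JavaScript sketch (the names are my own):

```javascript
// A queue built from two stacks: push onto `inbox`, and when
// `outbox` runs dry, reverse `inbox` into it so items pop in FIFO order.
function TwoStackQueue() {
  this.inbox = [];
  this.outbox = [];
}

TwoStackQueue.prototype.enqueue = function (item) {
  this.inbox.push(item);
};

TwoStackQueue.prototype.dequeue = function () {
  if (this.outbox.length === 0) {
    while (this.inbox.length > 0) {
      this.outbox.push(this.inbox.pop());
    }
  }
  return this.outbox.pop(); // undefined if the queue is empty
};
```

Each element is moved at most twice, so dequeue is cheap on average. It’s a tidy little exercise; it just tells you almost nothing about how someone writes production code.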

We live in the real world where engineers have access to a wealth of information when we’re presented with a problem. I’ve interviewed plenty of people virtually and told them that they could use the internet if they wanted to while solving a problem. I’m realistic.

Early in my career, I gave an interview candidate a question based on an unrealistic scenario. I admittedly found this question while researching online for questions used by big software companies. The dude aced the question and was hired. He sucked as a programmer. Unrealistic interviews often yield unexpected employees. This is why I feel bad for interviewers who haven’t yet understood this. In case the message isn’t clear: unrealistic coding questions are no gauge of actual ability.

Consider the state of mind of someone interviewing for a position. If your company isn’t hugely known, your candidates can be in any number of mindsets, including:

  • Passively open to new opportunities – not 100% interested in your company or the position, possibly only interviewing to remain ‘in the game’
  • Passively looking for new opportunities – candidate is casting a line but not expecting any bites
  • Actively looking with no other prospects – your interview is the one this candidate wants (or needs)
  • Actively looking with plenty of prospects (and possibly pending offers) – the candidate may be looking for more money or just there to ‘see how it goes’
  • Not looking – maybe recruited by your company, which increases the feeling of expectations in the candidate

In all of these scenarios, there’s an element of uncertainty which causes the brain to do some funny things (see the behavioral science field of neuroeconomics). You’re putting a candidate into an unfamiliar place simply by having them interview with your company. If the candidate comes into your office, the heightened sense of risk and uncertainty can hinder rational thought before the interview even begins. As if this weren’t enough, interviewers asking these ridiculous questions take away yet another familiar thing from the candidate (even worse if the candidate is asked to whiteboard the answer or use an unfamiliar coding environment).

This state of mind is nearly unavoidable. The loss aversion affecting candidates who are dying for the position only furthers the problem, and these feelings are magnified if you work at a company like Expedia where many candidates would do almost anything to work with you. Many developers are anti-social, making the interaction even more unwelcoming. There’s a lot on a candidate’s plate, and you’re probably making it worse.

The only way I’ve found, as an interviewer and as an interviewee, to overcome some of this dampening mindset is to understand that an interview is a two-way street. Your company is interviewing a candidate because you need someone good on your team, but a lucid candidate is also interviewing you to understand whether your company is where they want to be for a while. From this perspective, you’re doing both sides a disservice by asking asinine questions. I’ve been told in the past that questions like these are designed to show how well someone can ‘think on their feet.’ That’s just bullshit. You can think on your feet no matter what data structures are available to you. A candidate is less likely to be able to think on their feet when their ‘fight or flight’ instincts are engaged or they’re afraid of failing an interview miserably. One could argue that the industry is constantly in flux and developers must always adapt; the fact is that the industry is constantly improving, while ridiculous scenarios are based on unrealistic regressions.

It’d be foolish to not acknowledge why these types of questions are asked. They’re often asked to demonstrate knowledge of data structures and algorithms. They’re often asked to evaluate how well a candidate can talk through a problem and come to a solution. If you’re asking such a question, allow your candidate to speak. I’ve been in multiple interviews where I just wanted to walk out because the interviewer literally wouldn’t shut up and let me finish one sentence. There’s a difference between being helpful and being rude. If you’re being rude, not only will it make a possibly good interview go poorly but it will make a smart candidate decide against your company. You may both be at a loss.

Then, there’s the level of unrealism of your question. For instance, if there’s no such thing as arrays… how is the computer operating? If there are no stacks, queues, lists, or arrays, can the candidate assume there is no such thing as contiguous memory? If there are no arrays, can I access a .Keys property of some other data structure? Are there functionally no enumerable structures?! I know I’m being ridiculous, but so is your question.

So what can you do as an interviewer?

First, put yourself in your candidate’s shoes. Don’t just put yourself in your candidate’s shoes as someone who has worked at your company for 8 years and can attack any problem with a fully clear head. Put yourself in your candidate’s shoes feeling completely naked in an unfamiliar interrogation room, with something you either want or need dangling in front of you by the thinnest of threads. Your interview question probably sucks.

Second, be more realistic. Whether you’re doing a whiteboard exercise, phone screen, or virtual coding exercise, tell the candidate that they’re more than welcome to search the internet for assistance. This will immediately put your candidate at ease. I’ve told many candidates this and I’ve only had one or two people actually use the internet. Something this simple actually works wonders to unveil the true nature of a candidate. If you end up with a candidate who searches for a solution and solves your problem immediately, this means that you either have a problem that’s no problem at all (easily accessible on the internet) or that you’ve found a candidate that can get things done quickly. This will naturally lead you as an interviewer to come up with an original and realistic coding question for the next candidate.

The challenge here is to not ask puzzlers like “all numbers occur an even number of times except one, find it.” Instead, ask a question like “model an ATM” or “create a utility that dynamically loads types from every assembly in a directory.” You could ask a more advanced question like “Write a producer-consumer using a BlockingQueue” or “Write a SIP header parser”. Don’t ask questions like “Solve this obscure IIS problem that took me 5 days of research to figure out” (paraphrasing, but I’ve seen this happen). If you’re going to ask algorithm and data structures questions and you want to evaluate an area the candidate may not be familiar with, consider a reverse polish notation evaluator and the Shunting Yard algorithm.
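As a rough sketch of the reverse polish notation suggestion (plain JavaScript, assuming space-separated tokens and only the four basic binary operators):

```javascript
// Evaluates a reverse polish notation expression like "3 4 + 2 *".
// Assumes space-separated tokens and the binary operators + - * /.
function evalRpn(expression) {
  var stack = [];
  expression.trim().split(/\s+/).forEach(function (token) {
    if (/^-?\d+(\.\d+)?$/.test(token)) {
      stack.push(parseFloat(token));
    } else {
      var b = stack.pop(); // right operand is popped first
      var a = stack.pop();
      switch (token) {
        case '+': stack.push(a + b); break;
        case '-': stack.push(a - b); break;
        case '*': stack.push(a * b); break;
        case '/': stack.push(a / b); break;
        default: throw new Error('Unknown token: ' + token);
      }
    }
  });
  return stack.pop();
}

console.log(evalRpn('3 4 + 2 *')); // 14
```

A natural follow-up for a stronger candidate is the Shunting Yard half: converting infix to this postfix form, which exercises operator precedence without being a trick question.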

I was prompted to write this after reading an article on Programmers StackExchange. The question and answers are a pretty good read to get both sides of the interviewing wall.

Just to be clear: I am all for asking complex interview questions. In fact, I think most actual coding questions should be questions that can’t be written in the allotted time (a realistic, yet unfortunate, software management approach). I think it’s fine to make candidates sweat a little because of complexity, but it’s a little unfair to make a candidate sweat because of unfamiliarity.

I’d be happy to hear your thoughts so I can improve as an interviewer.

System76 notebooks… are they worth the money?

On June 19 2012, I purchased a 15″ Pangolin Performance with the following configuration:

  • Ubuntu 12.04 LTS 64 bit
  • 15.6″ 720p High Definition LED Backlit Display ( 1366 x 768 )
  • Intel HD Graphics 4000
  • 3rd Generation Intel Core i7-3610QM Processor ( 2.30GHz 6MB L3 Cache – 4 Cores plus Hyperthreading )
  • 16 GB Dual Channel DDR3 SDRAM at 1600MHz – 2 X 8 GB
  • 750 GB 7200 RPM SATA II
  • 8X DVD±R/RW/4X +DL Super-Multi Drive
  • Intel Centrino 1030 – 802.11 b/g/n Wireless LAN + Bluetooth Combo Module

Total price: $1,038.00

The price was higher than I wanted to spend, but I considered this purchase an investment. After all, I had previously owned a Sony Vaio for almost 5 years, which I had purchased for $800. The reviews for System76 seemed a bit mixed. I’ve been using Linux or BSD as my main desktop environment since 1998, so I took a chance on the mixed reviews and made the investment.

Issue #1

After two months, I had to open a ticket because the Ubuntu version preinstalled on the machine displayed the graphics card as ‘Unknown’ and the screen itself exhibited a sort of wave of energy which was giving me headaches. I was told by the support engineer that the fix would be to install the mesa-utils package. Why is mesa-utils not installed by default if it’s a necessary package? I have no idea, but here’s the response I received from my support ticket:

It’s actually a known issue without having the mesa-utils installed. Apparently, the info is actually pulled from glxinfo.

Mesa-Utils isn’t part of the default install, and admittedly, we do not add it either. I know there’s talk of trying to add it as part of the install.


The graphics issues were actually resolved by an early upgrade to Ubuntu 13.04.

Issue #2

In October 2012, I decided to dual boot a copy of Windows 8 from MSDN. There were numerous issues with drivers. After opening a support ticket, I was told that System76 doesn’t have access to drivers for their hardware from their vendors (bison or chicony hardware). That seems really shady, considering a machine at such a high price should have quality hardware. I ended up removing Windows 8 and installing Windows 7 within a VirtualBox VM. The hardware is beefy enough to host multiple concurrent VMs, so this is actually my preferred method for cross-platform development. However, a machine that can’t dual boot Windows is basically useless for developers who need to code at a low level, such as packet processing or driver development.

Issue #3

On June 15 2014, just shy of two years after purchasing this machine, I decided to open a ticket regarding battery issues I started having after installing Ubuntu 14.04. My battery began to hold no more than 75% or so charge. I inquired how to troubleshoot whether or not there were some other factors causing battery issues. For reference, here are screenshots of the battery statistics:

I was told by the support team that my battery was causing problems because:

The statistics indicated this is within your battery. The energy designed (48.8 W/hr) and full (40.0 W/hr) is where your discrepancy lies.

So, I opened a ticket saying that my machine had a max of 76% charge after hours of charging (both booted and off) and support replied saying my problem was that my battery could only hold a max charge of 83.3%. The resolution offered was to purchase a new battery at $105.

That’s not right. Every two years of ownership, I will need to purchase the same low quality battery at $105 out of pocket? In a sense, I have purchased a machine that also requires a $50/year battery fee. This seems really silly.

I admit that I had no battery issues for about two years. In fact, I was surprised by the battery life (2.5 to 3 hours) early on. Machines with a mechanical hard drive and 16GB of RAM are generally considered battery hogs. I don’t know of anyone who actually works on such a machine on battery.


Although my machine has worked well for two years, I don’t like the quality of the hardware or the poor response from the company about the issues I have had. I probably wouldn’t mind it if the quality of support offset the quality of the hardware. For example, if System76 actually stood behind their product and replied “Wow, our batteries should last more than 1.95 years. We’ll send you a replacement immediately,” I would recommend purchasing from the company. You do get a pretty beefy machine for relatively cheap. As a software engineer with multiple side projects and a 9 month old son, I don’t have time to continuously try to troubleshoot issues that I really shouldn’t be having with a quality piece of hardware.

My own resolution was to purchase a 13″ Macbook Pro. This machine has 8GB RAM and a 256GB SSD Hard Drive. I have been getting 8 or 9 hours of battery life, and that’s with the power-sucking Google Chrome open at all times. I became accustomed to the OS X environment while working at Expedia. Mac OS X is a UNIX certified OS, so I feel at home in the environment. The only issue I had with OS X when I first started working in the environment was the difference between COMMAND, OPTION, and CONTROL. This took a week or two to become habit. My wife also left her Sony VAIO for a Macbook Air last year and she loves it. You’d be hard-pressed to find a Linux distribution which allows for such a smooth computing experience for non-geeks.

I will be selling my System76 machine after backing up all important information. If you’re interested, let me know. You’d be getting a beefy machine and not taking along any of the annoyance of paying over $1000 for subpar customer support.

To answer the question “Are they worth it?”, I would have to say “It depends.” If you want a machine that would cost $2500-3000 elsewhere for around $1000, then yes, it is definitely worth it. If you don’t mind replacing a battery after two years, then yes, it’s worth it. If you’ve used Linux for a long time like I have and don’t mind spending an hour or two every time a ‘surprise’ surfaces, then yes, it’s definitely worth it. But if you’re like me and have little free time, you’ll want to open your machine and have everything work as expected with little or no interaction with customer support. In my case, it’s just not worth it to own a machine that requires so much maintenance. If this system were a car, I would sell it as a car that runs well and needs little or no work; that is, as long as you don’t run it off the battery, you’d be all set.

Android Studio and Library Projects

This is basically a quick brain-dump post. I have previously attempted to get into Android application development, but with only 8-10 hours of “free” time per month, it was difficult to get traction with an app before the goog machine overhauled everything. This happened repeatedly. I had some great ideas for a mobile app over the weekend, and thought I’d give it another go and guess what? Everything is overhauled: Android Studio is the next big thing, ant builds are out and gradle builds are in. I decided to force myself to overcome my annoyances and try out Android Studio.

Android Studio is beautiful and operationally stable, although very buggy. I’ve found four bugs in less than two days. The biggest bug and usability issue, however, is in creating an Android Library project and adding a reference to it in another project.

Here are the steps I took to reference another project. These steps may not be accurate, but I don’t care. I figured they may help save someone some time.

  1. Create an Android library project using Android Studio.

    There’s a bug in Android Studio 0.1.3 which apparently does not mark android library projects as library projects, so navigate to MyLibProject/MyLib and change the android plugin reference to:

    apply plugin: 'android-library'
  2. Create a main application project, in my case MyApplication2
  3. Add library as git submodule or nested directory under MyApplication2
  4. Edit settings.gradle in the main project to:
    include ':MyLibProject:MyLib', ':MyApplication2'
  5. Edit MyApplication2/build.gradle to compile the lib project in the dependencies task:
    compile project(':MyLibProject:MyLib')
  6. Navigate to your library subdirectory and execute:
    gradle clean && gradle assemble
  7. Press CTRL+ALT+SHIFT+S to open the Project Structure dialog
  8. Create a new module, change module name and point content root to MyLibProject
  9. Change Package name to the package name of your lib and press Finish
  10. Click MyApplication2 (not MyApplication2Project) in the Project Structure dialog and select the Dependencies tab.
  11. Click the green plus icon and select Library|Java
  12. Select MyLibProject/MyLib/build/bundle/release folder, choose Project Library, and hit ok
  13. Save. The library should now be usable.

These instructions may seem a bit hurried, but it should get the job done. I’ve run through numerous attempts at different options, and these are the only ones that seem to have stuck.

I might also mention that I’ve created an ANDROID_HOME environment variable, loaded in my shell, which points to the sdk directory under the android-studio installation. I’ve also downloaded gradle-1.6 to ~/bin and symlinked ~/bin/gradle to it, which adds gradle to my path.

Why I don’t recommend the Step module [node.js, npm]

I prefer async to Step. The async module has a cleaner interface, more options, and utilities. It’s a little beefier in size, but those things are beside the point.

Here’s why I don’t really like step. Tell me what you’d expect from this function:

var Step = require('step');

Step(
	function first(){
		this();
	},
	function second(){
		var x;
		// do something that should set x
		return x;
	},
	function third(){
		console.log('third');
	}
);

Did you guess:

$ node step-example.js
$

That’s right: nothing is printed at all.

If you’re not familiar with Step, you’ll probably look at that first function and wonder what this(); actually does. Well, it is actually shorthand for next(); (I’m simplifying a little here).
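To see why that works, here’s a stripped-down, hypothetical sketch of the trick Step uses (this is not Step’s real source, just the idea): each step is invoked with this bound to a callback that advances to the next step.

```javascript
// Hypothetical mini version of Step's chaining idea (not the real module):
// each step runs with `this` bound to a callback that invokes the next step.
function miniStep() {
  var steps = Array.prototype.slice.call(arguments);
  var i = 0;
  function next(err, val) {
    var fn = steps[i++];
    if (!fn) return;
    fn.call(next, err, val); // inside a step, `this` is the `next` callback
  }
  next();
}

miniStep(
  function first() { this(null, 'from first'); },
  function second(err, val) { console.log(val); } // prints "from first"
);
```

The real module layers error handling, parallel groups, and the return-value shortcut on top of this, which is exactly where the surprises come from.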

Assuming you’re at least somewhat familiar with asynchronous control flow, you could assume this(); calls the next function. But what about the second function? It’s returning a value. Does that return from second, from Step, or from some other function? It returns from second(), but it also passes the value to third(). Or, it should. Unless x is undefined, in which case Step treats the function as unchained and never calls third(). Now, I don’t think your variables should regularly be undefined at the point of return, but what if you or someone on your team uses the popular convention of calling a callback as return callback(x);? If the Step module’s convention of tweaking the semantics of this is missed, another developer may read it as a typo: “You can’t call the context like that…” Right? Also, what if someone doesn’t understand that the return value can’t be undefined and comments out the middle of that function? You may have cleanup logic in third() that never runs. We’ve all seen that happen before.

In the above example, if x was not undefined, it would be passed as a parameter to third().

It’s this inconsistency which makes me feel like Step is an accident waiting to happen in a regular development team. The source code is pretty well written and concise. I think the author has done a great job, but the usage is unintuitive and I wouldn’t recommend using the module.

On the other hand, async is beautifully intuitive.

Consider this example:

var async = require('async');

async.series([
	function first(done){
		done(null, 'first');
	},
	function second(done){
		var x;
		// do something that should set x
		done(null, x);
	},
	function third(done){
		done(null, 'third');
	}
], function(err, results){
	console.log(results);
});

async.series allows us to run a series of tasks (functions defined within the array), which pass a value to the callback (that last function you see).

If you forget to pass a value, the results will contain undefined at the index of the array. Your third function won’t be skipped unless you forget to invoke the callback. To me, this is a pretty obvious error and one that is easy to spot and correct. Here is the output of the above call:

[ 'first', undefined, 'third' ]

To be fair, the example of Step acts more like a waterfall function. Here’s what that would look like in async.

async.waterfall([
	function first(done){
		done(null, 'first');
	},
	function second(val, done){
		var x;
		console.log('second has val: %j', val);
		// do something that should set x
		done(null, x);
	},
	function(val, done){
		console.log('third has val: %j', val);
		done(null, 'third');
	}
], function(err, val){
	console.log('callback has val: %j', val);
});

The above code is available on github: https://github.com/jimschubert/blogs

Note about Copyrights

You wouldn’t think this would be necessary, but it seems that it is.

All work on my site and in my projects is copyrighted. I am the copyright holder. I have bound most of my work by the MIT, GPL, or Creative Commons licenses.

For the most part, I don’t mind if you use code on my site as long as you give proper credit. “Proper credit” can be as simple as mentioning, “I found this on ipreferjim.com” and linking back to my site.

Copying my code verbatim which contains my name and posting it on your site as your own work is a violation of the very relaxed licenses I use. Don’t do that unless you give proper acknowledgement first. If something is questionable, contact me.

Mixing Business Logic and the Presentation Layer… Why do people do it?

In .NET Application Architecture Guide, 2nd Edition, the writers explicitly say:

Do not mix different types of components in the same logical layer. Start by identifying different areas of concern, and then group components associated with each area of concern into logical layers. For example, the UI layer should not contain business processing components, but instead should contain components used to handle user input and process user requests.

This is the standard way of developing today. However, I’ve seen time and again where a developer will do ALL of the business logic processing in the presentation layer. More than merely being annoying, this is wrong for a number of reasons, including:

  • redundancy in code
  • “scattered” logic
  • poor abstraction
  • data access in the presentation layer
  • strong coupling

That’s just to name a few. Every time I see this coding snafu occur, I think to myself, “Do I just not know what I’m doing?” After all, it’s much more seasoned developers doing this. Then, I remind myself that object-oriented programming has changed dramatically over the past 10 years.

I can understand some points of view: “Too much abstraction is a bad thing!”, “Keep the logic as close to where you’re using it as possible!”, “I don’t understand what you mean by business logic.” But ignorance of norms and coding standards doesn’t inherently and automagically produce good, clean, scalable code.

N-tier architecture has now become the norm, but is it too much to ask for some logical layering within a tier?
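To make the layering point concrete, here’s a minimal JavaScript sketch (the names and the pricing rule are hypothetical) of keeping a business rule out of the presentation layer:

```javascript
// Business layer: pure rules, no knowledge of how results are displayed.
var pricing = {
  // hypothetical rule: members get 10% off orders over $100
  total: function (subtotal, isMember) {
    var discount = (isMember && subtotal > 100) ? 0.10 : 0;
    return subtotal * (1 - discount);
  }
};

// Presentation layer: formats input/output and delegates the decision.
function renderInvoice(subtotal, isMember) {
  var total = pricing.total(subtotal, isMember);
  return 'Total due: $' + total.toFixed(2);
}

console.log(renderInvoice(200, true)); // Total due: $180.00
```

When the discount rule changes, only pricing changes; every screen, report, or API that renders a total picks it up for free, which is exactly the redundancy and scattering problem the guide is warning about.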