Wednesday, September 9, 2015


Remember, the people who use your software are people.

Wednesday, January 8, 2014

Stanke's first agile incompleteness theorem

I believe in agile. Who doesn't? But I'm also vexed by it. Who isn't? I was at a great presentation on agile product management this morning by Josh Seiden, and it helped me develop some thoughts around the friction that can arise between agile and business. I feel the root issue isn't a concern about process, but about results. Specifically, about defining when a project is "done."

Here's an imagined conversation:

Client/stakeholder: "Okay, so you're going to work on this project for a while, and when it's all done, you'll deploy it."

Technologist: "Actually, we'd like to do this the Agile way: we'll do a series of small Sprints, and deploy frequently, analyzing and iterating as we go."

Client: "Okay, sure. Whatever. You'll do your little sprints, and deploy frequently, until the project's all done."

Technologist: "Oh no. The project will never be done."

Client: "Say what now?"

Monday, November 18, 2013


I was recently asked for my opinion about the trend toward eliminating QA as a unique team, instead merging it into the responsibility of the engineers. I've had experience with both structures, and success with both. I'll try to offer a balanced opinion:

Traditional QA

The traditional arrangement is to have completely distinct engineering and QA teams; they may even work for different companies, in different countries.


  Pros:
  • QA staff really knows QA
  • Engineers (who are more costly than QA staff) spend all their time writing code
  • Peace of mind: independent auditors are validating software before launch

  Cons:
  • Engineers are allowed to be lazy: they "throw their code over the wall" and aren't responsible for making sure it works
  • Increased cycle time and wasted work: bouncing issues back and forth takes time and communication effort. Plus, lazy engineers introduce more bugs, which then take time to fix.

Implicit QA

A recent movement aims to do away with formal QA teams, and make engineers responsible for their own software quality. I'll call it "implicit QA."

  Pros:
  • Engineers assume ownership of software quality, so they're motivated to get it right the first time
  • Rapid development cycles: no waiting for QA validation before code can be deployed
  • Users are engaged and their feedback is more relevant

  Cons:
  • More defects are released to production (Maybe! I haven't systematically studied this. Does anyone have data?)
  • Requires engineers to wear many hats and take more responsibility; some may balk

Philosophical Distinction

The only true measure of software quality is user satisfaction. The question is, to what extent can that be predicted before shipping? Traditional QA uses defect counts as the proxy for user complaints, and refuses to ship until all defects are eliminated. Implicit QA asserts that only users can reliably report their own concerns, and rather than trying to catch every defect before shipping, aims to optimize the process of finding and addressing bugs in the wild. You can hear echoes of the Waterfall vs Agile debate here. 

Situational Considerations

An important factor in this decision is the impact a defect would have if it's released to the wild. The severity of a bug can be placed on a spectrum:

life-threatening > threat to the software company's business > threat to an individual client's business/mission > major user inconvenience > minor user inconvenience

This magnitude, divided by the resolution speed, yields the impact of a bug in the wild. In aggregate, the potential impact of all potential bugs indicates the risk presented by low software quality. The higher this risk, the more attention needs to be paid to QA. That attention can go toward reducing the number of bugs (traditional QA) or toward improving resolution time (implicit QA). Or both. Each philosophy encourages improvement along both dimensions, but prioritizes one over the other.
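To make the arithmetic concrete, here's a hypothetical sketch of that model in Python. The severity weights, function names, and numbers are all invented for illustration; no real defect tracker is assumed.

```python
# Sketch of the risk model above: impact = severity magnitude / resolution
# speed, and aggregate risk is the sum of impacts over all potential bugs.

# Severity weights along the spectrum (arbitrary illustrative scale).
SEVERITY = {
    "minor_inconvenience": 1,
    "major_inconvenience": 5,
    "client_business_threat": 20,
    "company_business_threat": 50,
    "life_threatening": 1000,
}

def bug_impact(severity: str, fixes_per_day: float) -> float:
    """Impact of a bug in the wild: magnitude divided by resolution speed."""
    return SEVERITY[severity] / fixes_per_day

def aggregate_risk(bugs: list[tuple[str, float]]) -> float:
    """Total risk presented by low software quality: sum of potential impacts."""
    return sum(bug_impact(sev, speed) for sev, speed in bugs)

# Two bugs: a major inconvenience fixed quickly, a minor one fixed slowly.
risk = aggregate_risk([("major_inconvenience", 5.0), ("minor_inconvenience", 0.5)])
print(risk)  # 1.0 + 2.0 = 3.0
```

Note how a fast fix pipeline (high `fixes_per_day`) shrinks impact just as effectively as lowering severity: that's the implicit-QA lever, while traditional QA shrinks the list of bugs itself.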

Making the problem smaller

In either model, the number of bugs in production and the risk therein can be significantly reduced through automated testing. All modern languages have extensive frameworks for automated testing, against everything from low-level APIs up through front-end UX. The more extensive the test coverage, the fewer bugs make it out of dev. Test-Driven Development (TDD) is the ultimate expression of this approach, but isn't strictly necessary for achieving good test coverage. Pair programming also helps, by having two sets of eyes on the code before it even gets committed.
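As a minimal sketch of what such a low-level automated test looks like, here's a unit test in Python's built-in unittest framework. The function under test is a hypothetical stand-in for real application code.

```python
import unittest

def normalize_username(raw: str) -> str:
    """Trim whitespace and lowercase a username (example application code)."""
    return raw.strip().lower()

class NormalizeUsernameTest(unittest.TestCase):
    def test_strips_and_lowercases(self):
        self.assertEqual(normalize_username("  Alice "), "alice")

    def test_already_normalized_input_is_unchanged(self):
        self.assertEqual(normalize_username("bob"), "bob")

# Run with: python -m unittest <module_name>
```

In the TDD variant, these test cases would be written first and `normalize_username` implemented until they pass; but even written after the fact, they keep this bug class from ever reaching production.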


Either approach is good, and the choice of methodology is less important than hiring good people (devs and/or QA) and aligning them to the mission. Any organization should choose an approach that makes sense for the mission, integrates well with the engineering process, and feels right for the user relationship. Still, I usually lean toward the "implicit" approach, heavily mitigated through automated testing. You'll get faster release cycles, more rapid feature evolution, and a closer connection to the users. Nobody likes bugs, but (IMHO) most communities can tolerate a few defects in exchange for rapid innovation.

Friday, October 11, 2013

Why does Google serve all browser styles to all browsers?

Google has long been at the forefront of Web performance: serving a quality user experience with minimal loading time. And with their new flat logo, they're pushing even fewer bytes to the browser for their core search page. (That's because images with areas of flat color can be compressed much smaller than images with bevels and shadows.)

So, on a whim, I checked out the source code of Google's search page, to see what else they're up to. The source is minified and therefore hard to read, but one thing jumped out at me as a surprise: Internet Explorer-specific CSS. Of course, it makes sense that they provide styles for all browsers, but I'm using Chrome.

HTML source of Google's search page, viewed in Chrome
Now, before all you blog readers start a stampede (is there a word for a stampede of one?), I know it's standard practice to include all browser fallbacks in a site's master stylesheet. But, with a site at the scale of Google, and with Google's need for speed, I would expect them to read my User-Agent request header and respond with only the styles that my browser will actually use.

I'm sure there's a good reason -- probably, the overhead of browser sniffing outweighs the benefit of trimming unused styles. Just curious.
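For illustration, here's roughly what serving trimmed styles could look like. Everything in this sketch (the stylesheet fragments, the helper, the matching rules) is invented, not Google's actual approach, and real user-agent sniffing is much messier in practice.

```python
# Hypothetical sketch: trim CSS per browser using the User-Agent header.

BASE_CSS = "body { margin: 0; }"
IE_ONLY_CSS = ".logo { filter: progid:DXImageTransform.Microsoft.gradient(); }"

def css_for_user_agent(user_agent: str) -> str:
    """Return only the styles this browser will actually use."""
    css = BASE_CSS
    # Older Internet Explorer advertises "MSIE"; IE 11 advertises "Trident".
    if "MSIE" in user_agent or "Trident" in user_agent:
        css += "\n" + IE_ONLY_CSS
    return css
```

One practical cost jumps out immediately: responses now vary by browser, which fragments any edge cache (or forces a `Vary: User-Agent` header), so the overhead may well outweigh the trimmed bytes -- which supports the hunch above.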

Monday, August 19, 2013

The next time I launch a website

Over the past decade, I've launched a lot of websites. In that time, it's gotten easier and easier to scale them, especially content sites with emphasis on read queries. But there are many challenges remaining, and until we can use seamless iframes (dammit, when?), scaling page requests will require a mix of technologies to balance breadth (pushing the same content out to lots of people, usually via edge caching on a CDN) versus depth (pushing unique content out to one person, possibly via Ajax personalization of a generic page). Traditionally, we've started by making everything dynamic -- all requests hit the app servers, all responses unique, even if they have similar content. Then we layer on the cache, re-engineering and refactoring as the site grows and we discover performance bottlenecks. This approach works, but it's not optimal, because it's reactive: we wait until we observe the problem before addressing it. Possibly, we wait too long.

But there may be a better way, and it doesn't require premature optimization...

A sub-optimal route

At the AWS conference, I learned that CloudFront (Amazon's CDN) will accelerate page delivery even if nothing is cached. This is due to route optimization: requests hit Amazon's edge servers, located all over the world, and then immediately enter the AWS optimized private network, traversing fewer hops over better pipes on their way to the origin S3/EC2 sources. They don't bounce all around the 'net, trying to find their way to Virginia. (Important: this applies only if the origin is within AWS.)

So, I will configure CloudFront with no caching at all -- every request will be passed to the origin for unique resolution. Effectively, there's no CDN at all, but I will get the network benefits. Importantly, though, the caching mechanism is in place from day one. Any engineering that must be done to work effectively within this infrastructure can be incorporated into the first build-out, not delayed until it starts being an issue. As traffic grows, I'll dial up the cache. Maybe just 30 seconds at first. And of course, different kinds of content can be cached for different amounts of time. Plus, individual cookies can be acknowledged or ignored, so personalized requests can be passed to the origin while generic requests are cached, even at the same URI.
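The relevant knobs in a CloudFront distribution config might look something like this abridged JSON sketch. The origin ID and values are illustrative, and the fragment omits many required fields; treat it as a shape, not a working config.

```json
{
  "Comment": "Pass-through from day one: cache nothing, keep the CDN in place",
  "DefaultCacheBehavior": {
    "TargetOriginId": "my-ec2-origin",
    "ViewerProtocolPolicy": "allow-all",
    "ForwardedValues": {
      "QueryString": true,
      "Cookies": { "Forward": "all" }
    },
    "MinTTL": 0,
    "DefaultTTL": 0,
    "MaxTTL": 0
  }
}
```

Dialing up the cache later then means raising the TTLs (say, DefaultTTL to 30 seconds) and switching cookie forwarding from "all" to a whitelist, so generic requests get cached while personalized ones still reach the origin.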

Another good practice from day one is to plan for subdomains: as part of scalability, I can expect that I'll want to serve different content from different subdomains. It's easier to deal with this later if the code already knows about it. At first, these can all resolve to the same place. Importantly for CloudFront, I'll also need a URL to receive form POST requests, which CloudFront won't handle (AWS: please implement a feature to forward POSTs to the origin!).

[UPDATE: CloudFront now supports POSTs -- as well as PUTs and other verbs. Whoo!]

So, the next time I launch a website, I'm going to put the whole thing behind CloudFront, from day one.

Thursday, April 4, 2013

Use Your Words

An up-and-coming User Experience (UX) designer recently asked me whether it's possible to fully separate UX from visual design -- to create a "pure" UX that's only functional and not aesthetic. Good question, up-and-comer! This is a very appealing idea: the purpose of most applications is functional, whether it's selling something, or distilling information, or what have you. Therefore, the goal of UX is to facilitate the achievement of that functional objective. Aesthetics are an important but secondary concern.

To this end, many UXers employ wireframes in the design process. A wireframe is a bare-bones representation of an application screen, with only black text and boxes on a white background. Without any influence of colors, pictures, or fonts, it's meant to represent a strictly functional view -- pure UX. Frequently, faux-Latin "Lorem Ipsum" or other placeholder language is used to indicate areas of text. Designers often present these to client stakeholders, asking for sign-off that yes, this interface is appropriate for the client's goals. Unfortunately, it's often difficult to get users to really connect with such "low-fidelity" interface mock-ups. Lacking visual excitement, the wireframe appears as a squiggly field of grey, and users will often say "Sure, that looks fine," regardless of what you put in front of them. What can we do to grab stakeholders' attention? At this stage, we don't want to rely on fonts and colors; that puts us at risk of making something pretty but dysfunctional. The solution is words.

I've found that stakeholder engagement increases significantly when we use real words in our wireframes. Words hit the brain really fast, and they evoke all sorts of meaning to the user. So, instead of Lorem Ipsum, write wireframes with actual representative content, which makes it much easier for a client to understand how users will perceive the application. If the client hasn't provided content, then make it up! This may lead to a debate with the client, since you're putting words in his or her mouth. Good -- now is the time to have that debate. If you don't know what the content is going to be, you won't be able to help users navigate that content. Use the wireframing process as an opportunity to elicit a content strategy. Even if you do receive content, tweak it. Make it more extreme. Let your client's reaction inform your understanding of the project objectives.

Let's take an example. Here are two wireframes with generic text (click to enlarge):

To the well-trained eye, they look... um... sort of different? To a client, they're indistinguishable. If you're lucky, the reaction will be: "Sure, looks good. I'll sign off on that. Now let's see some colors!" More likely, it will be: "Why are you wasting my time with this?"

Now consider these wireframes, with specific language:

Ah, now we can see the difference. When a client signs off on one of these, that really means something. And, it gives you a big head start on visual design. Everything flows from the language.

Therefore, to the original question -- "can good UX be achieved without introducing visual design?" -- I reply: Maybe! But to a different question -- "can good UX be achieved without introducing content?" -- I say: No. So use your words! You can go a long way towards alignment to client objectives, before you break out the colored pencils.

Friday, March 1, 2013

Regarding Ruby: single-page apps vs. "Turbolinks" [updated]

The Ruby/Rails folks are taking a stand! It seems a major focus of Rails 4.0 is to empower users to create server-side applications that are as fast as client-side JS/JSON (single-pagey) apps: Rails 4.0b1

Their messaging reflects a belief that delivering dynamic HTML from the server is superior to pushing it all to the browser and communicating only via API. I think I might kind of like this. It can get messy when we cede control to the browser via single-page apps. Of course, much good work is being done here, but it's all too easy to expose security risks, or just business-logic inconsistencies, when the content doesn't really exist until the browser chooses to create it. And with today's browsers pushing updates every other minute... oof. Future-Dave, let's keep an eye on this one.

Update: I spoke to a friend who's a Ruby expert and has a very practical brilliance. He's not a fan of Turbolinks, and predicts it will be a massive failure. DOA?