Who ordered the scrambled brains?

Serving Glendale's Irish-Pilipino community since 2015.

Shut down plurality voting!

I’m on the Congress-hating bandwagon, but I’m also on the we-have-no-one-to-blame-but-ourselves bandwagon, up on the roof of it for a while, jumpin’ and shoutin’ (and tweetin’):

  • Government shuts down, due to
  • Ideological fundamentalism in Congress, due to
  • Ideological fundamentalists voted into Congress by the American people, due to
  • Dysfunctional, big money, two-party political system generating few election options, due to
  • Widespread use of simplistic Plurality Voting system based on single-mark ballots, due to
  • It being used when the country was founded and now being ingrained in our culture, due to
  • Single-mark ballots being easy to tabulate by hand.

Yes, in 2013, after landing rovers on Mars, sequencing the human genome, and creating a machine that can beat humans on Jeopardy!, Americans still use a simplistic system based on single-mark ballots and plurality voting because historically they were easier to count by hand!

Fortunately the Constitution (and local and state law) can be changed. ;) Do we want to move to more evolved, robust systems, such as Preferential Voting (ranked voting) or Proportional Representation (wherever possible), to create a more representative, satisfactory, efficient government? Or do we want to keep our ‘merican gladiator system that produces ideological meatheads who’d rather fight on camera than solve problems?

Read about it for yourself: proportional representation and single transferable vote (fancy for “preferential, ranked voting”).

NGO worth checking out: http://www.fairvote.org/

Aside from Congress and the American people, I also question Obama’s 2-week absence and the spotty coverage of the mainstream press (our vaunted Fourth Branch of government). Why hadn’t Obama, his minions, or the press been driving attention to concrete consequences, rather than abstractions like “Americans will be hurt”? As much as I care about Americans, I wanted to know the direct effects of The Shutdown™ on me. Over the last two weeks, I’d found very little, and only last night did the deluge of stories about what will and won’t be affected by The Shutdown™ come out. If only there had been an easy-to-use website for a normal guy like me to drive attention to that issue…

Upgrade your gray matter

I’ve wanted to properly release a useful piece of software as open source for a long time, but work (both employment and entrepreneurial) had always preoccupied me. As my startup Blocvox matured, I saw in it the possibility to fulfill this desire. Recently my task list has been clearing up, so I spent the necessary few hours isolating, documenting, and polishing some code for you, dear reader. Blocvox is fairly well decoupled and many parts are ripe for packaging, but one piece stood out as particularly useful and well-contained. I give you, Blocvox.GrayMatter.

Blocvox uses a command-query responsibility separation (CQRS) architecture to simplify both the domain model and the user interface code, and to enhance performance by computing derived values once at write-time rather than repeatedly at read-time. Blocvox.GrayMatter plays a major role in connecting these two sides of the Blocvox brain, allowing them to remain simple and agnostic.

This is a custom Castle Windsor facility that allows easy registration of event handlers, and high-performance, decoupled attachment at resolve-time. On the registration side, the package provides a strongly-typed fluent API and a reflection-friendly API. On the resolution side, the facility dynamically constructs, compiles, and caches the code needed to add event handler Delegate instances to events, removing the reflection performance hit. It also wraps the delegate to provide “just-in-time” resolution of the subscriber, avoiding resolving huge dependency graphs up-front when they might never be used. This takes advantage of, and reinforces, the decoupled semantics of the event keyword.
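A rough sketch of the just-in-time resolution idea, in Python rather than C#, with toy stand-ins for Windsor’s container and the event keyword (the names Container, Event, and lazy_handler are all hypothetical, not part of GrayMatter’s API):

```python
class Container:
    """A toy IoC container mapping names to factory functions."""
    def __init__(self):
        self._factories = {}
        self.resolutions = 0  # count how many subscribers we actually built

    def register(self, name, factory):
        self._factories[name] = factory

    def resolve(self, name):
        self.resolutions += 1
        return self._factories[name]()

class Event:
    """A minimal multicast event, standing in for the C# `event` keyword."""
    def __init__(self):
        self._handlers = []
    def subscribe(self, handler):
        self._handlers.append(handler)
    def fire(self, *args):
        for h in self._handlers:
            h(*args)

def lazy_handler(container, name, method):
    """Defer resolving the subscriber until the event first fires, then cache it."""
    cache = {}
    def handler(*args):
        if "instance" not in cache:
            cache["instance"] = container.resolve(name)
        getattr(cache["instance"], method)(*args)
    return handler

class AuditLog:
    def __init__(self):
        self.entries = []
    def on_user_registered(self, user):
        self.entries.append(user)

container = Container()
audit = AuditLog()
container.register("audit", lambda: audit)

user_registered = Event()
user_registered.subscribe(lazy_handler(container, "audit", "on_user_registered"))

assert container.resolutions == 0   # nothing resolved until the event fires
user_registered.fire("alice")
user_registered.fire("bob")
assert container.resolutions == 1   # resolved once on first fire, then cached
assert audit.entries == ["alice", "bob"]
```

The publisher only ever sees a plain handler; whether the subscriber behind it exists yet is the wrapper’s business.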

Call me old-fashioned, but I find event to be the Way to do decoupled event-driven programming in C#. No reactive/observer/observable ceremony and threading black boxes. And no God-awful pub-sub/service bus God objects. (Unless you’re dealing with significantly disparate business units, pub-sub is doing it oh-so-painfully-unnecessarily-wrong. But it’s easy, right?) Despite its own warts, nothing is as idiomatic, well-understood, and portable as event for decoupling event providers from subscribers.

It’s not a lot of code, but I’ve found it to be immensely valuable, primarily in keeping the cognitive load of the codebase low. I hope it is of use to others. I plan to maintain and improve it, but I don’t foresee drastic changes in the near future.

I leave you with these timeless words from a futuristic poet: “Upgrade your gray matter, ’cause one day it may matter.”

Decentralizing trust on the web

Update 2013-10-30: Yes, I am an idiot. This is all moot, as SSL’s single point of failure is mitigated by cipher suites using perfect forward secrecy. Carry on, nothing to see here.

I’d like to sketch out an idea for a practical, improved encryption mechanism on the web. As it stands, HTTPS relies on SSL key certificates, which are “endowed” with trustworthiness by certificate authorities. There are relatively few certificate authorities on the whole of the Internet. Because a compromise of those certificates means a nefarious agent can create/sign their own authentic-looking certificates and then perpetrate a man-in-the-middle attack on any so-protected web server, I contend the current state of web encryption concentrates trust in too few points of failure.

I propose replacing/augmenting this centralized trust model with the decentralized one of asymmetric public-key cryptography. Rather than one-key-serves-all, in public-key cryptography each communicating pair has their own key set. As a practical requirement, I propose relying on HTTP or HTTPS as the transport, but encrypting all request and response bodies with the parties’ individual public keys. Ideally, support is built into the browser, but short of that (or in the interim) we can use browser extensions/add-ons to hook into request/response events and perform the encryption/decryption.

Browsers that support this would notify the web server with an HTTP request header, perhaps X-Accepts-Key with a value of a supported public key’s fingerprint. This would allow the server to look up the supported public key via fingerprint. Such browsers could also send messages encrypted with the server’s public key, indicating this in the request with the header X-Content-Key specifying the server’s key fingerprint. Likewise, server responses would include X-Content-Key to indicate the user’s public key. These headers should be considered alongside other HTTP content negotiation parameters (influenced by request Accept* headers and specified in response Content* headers) in determining HTTP cacheability.
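A minimal sketch of how a user agent might construct these headers. The header names come from the proposal above, but the fingerprint format is my own assumption (a hex-encoded SHA-256 digest of the raw public-key bytes), and request_headers is a hypothetical helper:

```python
import hashlib

def fingerprint(public_key_bytes):
    # Assumed fingerprint scheme: hex SHA-256 of the raw key bytes.
    return hashlib.sha256(public_key_bytes).hexdigest()

def request_headers(user_key, server_key=None):
    """Headers a supporting browser would attach to a request."""
    headers = {"X-Accepts-Key": fingerprint(user_key)}
    if server_key is not None:
        # The body is encrypted to the server's key; say which one was used.
        headers["X-Content-Key"] = fingerprint(server_key)
    return headers

user_key = b"user public key material"
server_key = b"server public key material"
h = request_headers(user_key, server_key)
assert h["X-Accepts-Key"] == fingerprint(user_key)
assert h["X-Content-Key"] == fingerprint(server_key)
```

The server would use the X-Accepts-Key fingerprint as a lookup key into whatever store associates users with their registered public keys.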

Web servers will have to retrieve the public key specified in the request headers. I do not propose the exact mechanism for this, but a simple approach would be to allow users to associate a public key with their “user account” (i.e. their unique security identity), either by POST-ing a web form over plain-old HTTPS—or perhaps in person at a corporate or field office! (I imagine a market for physical key delivery could crop up if the public demanded it… think armored trucks and bankers boxes.) Likewise, the server will provide a public key to the user/user-agent; this associated keypair should be unique to the user to provide enhanced security. (Users can check this among themselves by comparing keys used between their individual communications with the server.)

Servers should also support the OPTIONS request and include something like X-Allows-Encryption: rfc4880. Both server and user agent should dynamically fall back to “plaintext” HTTPS when either side lacks support. In particular, due to the non-idempotency of certain HTTP methods, URLs of encrypted requests should first be OPTIONS-ed. Unfortunately, OPTIONS is not cacheable, but this overhead is a small price to pay when security is paramount. It would be nice to simply try the encrypted request and rely on the server to properly reject it with a 400 (which would indicate the need to retry without encryption), but it’s conceivable that the semantics of certain resources do not allow differentiation between plain and cipher text (PUT-ing or POST-ing binary data).
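A sketch of that fallback decision, assuming the OPTIONS response headers have already been fetched. The X-Allows-Encryption header and “rfc4880” value come from the post; supports_encryption and choose_mode are hypothetical names:

```python
def supports_encryption(options_headers):
    # Server advertises support via the proposed OPTIONS response header.
    return options_headers.get("X-Allows-Encryption") == "rfc4880"

def choose_mode(options_headers, browser_supported=True):
    # Fall back to plain HTTPS unless BOTH sides support body encryption.
    if browser_supported and supports_encryption(options_headers):
        return "encrypted"
    return "plaintext-https"

assert choose_mode({"X-Allows-Encryption": "rfc4880"}) == "encrypted"
assert choose_mode({}) == "plaintext-https"
assert choose_mode({"X-Allows-Encryption": "rfc4880"},
                   browser_supported=False) == "plaintext-https"
```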

Ultimately, while not being the end-all of web security, this seems to me to add a “pretty good” layer of complexity to existing security conventions. Of course, I’m no security or cryptography expert, so I can’t assert the merit of this idea. But that doesn’t stop me from discussing and thinking about this important issue.

Update, 2013-10-03: Perhaps another alternative would be to make it easier for users to act as certificate authorities, and for websites to create unique SSL certificates per-user that the user can then sign. For example, a new website user will log into a website that is protected with an SSL certificate signed by a third-party “trusted” authority. The website will then, perhaps in automated fashion, create a unique SSL certificate for that user and request that the user act as a certificate authority and sign the certificate. Thereafter, the user will access the website via a unique subdomain, perhaps of the form https://<username>.example.com. While leaving the initial certificate-signing stage protected by only a common (single point of failure) SSL certificate, this does create a proliferation of SSL certificates thereafter, and code-cracking or coercive entities will have significant difficulty conducting mass surveillance.

Sexism in tech

This is certainly a hot-button issue. I’ve seen an increased focus and willingness throughout the tech community to acknowledge and address it, but many still deny that it exists. Much has been said of the inertia of male privilege and the meritocratic ideals of the tech industry, both of which are invariably characterized sociologically, as some metaphysical Force that operates on a level separate from the individual. At the same time, the sociological/systemic problem continues to be defined as the aggregate of interpersonal—not societal—failures (an authority figure looking the other way from degrading behavior, men telling insensitive/sexist jokes, etc.). This interpersonal dimension is rarely consciously discussed, leading to the kind of stall in resolving group inequity that seems common: the outlines of many inter-group conflicts are drawn in broad sociological terms, which motivates political action to achieve a negotiated level of structural equity, but if the focus never shifts to interpersonal equity, cultural progress can stall and the problem can persist. For example, because of affirmative action, equal opportunity, and desegregation, many today believe that race relations have reached a satisfactory state, but the majority of minorities would disagree.

An article hit Hacker News today about the sexist bullying experienced by one female high school student in her computer science class. This obviously extends beyond the tech field into high school culture, but it generated a lot of discussion on Hacker News regardless. One comment called for men to simply accept that women have subjectively different experiences than they do. I agree, but the questions remain: why haven’t men already done this, and how do we progress from there?

I have observed that many men who have difficulty comprehending/accepting that women experience the industry so differently than they do either A) over-generalize from an exceptional interaction, or B) follow those who have over-generalized. By “A” I mean that men can rely on confirmation bias to cement their impression of the female experience based on a few choice interactions, creating an intellectually convenient worldview. For example, confirmation bias can allow a random chat with a well-adjusted, confident woman who appears impervious to tech sexism to dispel, for many years, any notion in that man’s mind that sexism exists in the industry. Thereafter, contradictory signals can themselves be dismissed as the exceptions and, because of cognitive dissonance, can even serve to reinforce the misconceptions. (It should be noted that a woman who appears impervious may not actually be.)

By “B” I mean that men with no relevant direct interactions with women (not uncommon given their low numbers) may follow the lead of the people with whom they associate, who are by definition men. So any confirmation bias of those men then spreads to them.

In considering such interpersonal breakdowns, what is not often recognized is that individual women have unique experiences. They are affected to varying degrees and in various ways by prejudice and ostracism. As a male, rather than tip-toe around or ignore the issue with a female colleague—allowing the assumption of the most intellectually convenient possibility—I’ve found the best heuristic for recognizing your potential to participate in and perpetuate a toxic environment is to earnestly inquire into the nature of her individual past experience. (You may also share your own relevant experiences, if any.) Such a dialogue can help establish a common foundation and framework for maximizing the team and progressing the industry.

I believe the widespread focus on the direct, open, and individual treatment of interpersonal relationships (and moving away from the macroscopic one-experience-fits-all mentality, which lacks common sense and is susceptible to confirmation bias) is an important next step for evolving stalled relations between social groups in general.

Mobile SEO

This “SEO Cheat Sheet” has been making the rounds today. Pretty good stuff, but the second suggestion about mobile development should, at the least, be given a big asterisk if not removed entirely.

When differentiating content based on user agent and including the “Vary: User-Agent” header in responses, you are effectively disabling HTTP caching. Because of the huge number of user agent strings, neither the server’s output cache nor any CDN/intermediary cache will be effective at reducing request processing load. This is a very poor trade-off, and typically unacceptable.

If you must serve dynamic content based on user agent, the third option on the cheat sheet is probably better: use rel=canonical with separate URLs per device class. On each request, the server would still sniff the device class from the user agent string, but if the sniffed class does not match the one designated by the URL, the server 302-redirects (temporarily) to the device-specific URL (else, it serves the appropriate HTML). This requires a little more programming effort, but it is usually worth it to gain both caching and SEO.

I consider it a must to have the server take into account an override cookie when sniffing the device class, which the user can set through UI in the site header/footer. Also, I abhor having URLs for the same essential content vary by domain or path (it goes against the spirit of HTTP content negotiation), so I distinguish them with a simple “?lite” query parameter.
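Putting the pieces together, a sketch of the redirect logic with the override folded in. The sniffing rule (looking for “Mobile” in the user agent string) and the “?lite” URL scheme are simplifications; device_class and handle are hypothetical names:

```python
def device_class(user_agent, override=None):
    # An override (e.g. from a cookie set via the site footer UI) wins
    # over user-agent sniffing.
    if override in ("mobile", "desktop"):
        return override
    return "mobile" if "Mobile" in user_agent else "desktop"

def handle(path, user_agent, override=None):
    # The URL itself designates a device class via the "?lite" parameter.
    requested = "mobile" if path.endswith("?lite") else "desktop"
    sniffed = device_class(user_agent, override)
    if sniffed != requested:
        # Mismatch: 302-redirect (temporarily) to the matching URL.
        target = path + "?lite" if sniffed == "mobile" else path.replace("?lite", "")
        return ("302", target)
    # Match: serve the appropriate HTML for this URL, which stays cacheable.
    return ("200", path)

assert handle("/article", "Mozilla/5.0 (iPhone) Mobile Safari") == ("302", "/article?lite")
assert handle("/article?lite", "Mozilla/5.0 (iPhone) Mobile Safari") == ("200", "/article?lite")
assert handle("/article", "Mozilla/5.0 (Windows NT)") == ("200", "/article")
assert handle("/article", "Mozilla/5.0 (iPhone) Mobile Safari", override="desktop") == ("200", "/article")
```

Because each URL always maps to one device class, caches and CDNs can store both variants without any Vary header at all.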

Gravitational Insanity

I’m introducing a new category of brain scramblings, Physics Insanity. I’m not sure how much I’ll actually post in it, but I have on several occasions composed my thoughts about matters relating to fascinating physical properties of the universe, so I might as well share them here. My point of view is less mathematical and more of a layman seeking a broader, deeper understanding of whatever this thing that we’re part of is. Expect to see the phrase “my mind was just blown” frequently.

My mind was just blown. It turns out that if a body (like the Sun) is in motion, its gravity doesn’t pull directly back toward the originating body, but toward a point ahead of the body. The further away from the body, the further ahead of it its gravity pulls.

Background: I think at least for this concept, it’s best to think of gravity not as a field surrounding a body (like the Sun), but as a type of radiation that continuously emanates from a body. As this radiation passes through surrounding objects, they are pulled in a certain direction by it. Two things are interesting about this. First, gravitational interaction between two bodies happens indirectly through “gravitational radiation”. Second, gravity is not an instantaneous effect, but one that travels through space, just like radiation from a nuclear meltdown or light from a bulb. Gravitational radiation travels at the speed of light, and when the mass or location of a body changes, the corresponding changes to its gravity also travel away from the body at the speed of light. For example, if the Sun were to suddenly double in mass, the increased gravitational pull is not instantaneously felt by the various planets in the Solar System. Instead, Mercury would feel the corresponding increase in gravitational pull 3 minutes later (the same time Mercurians would first see the Sun double in size), and the Earth 8 minutes later, because that’s how long it takes for the corresponding increase in gravitational radiation to reach them.
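For the curious, those delay figures can be checked with the mean orbital distances; light (and gravity) takes about 3.2 minutes to reach Mercury and about 8.3 minutes to reach Earth:

```python
C = 299_792_458            # speed of light, m/s
AU = 149_597_870_700       # one astronomical unit (mean Earth-Sun distance), m

def travel_minutes(distance_au):
    # Time for light, or gravitational changes, to cross the given distance.
    return distance_au * AU / C / 60

mercury = travel_minutes(0.39)   # Mercury orbits at roughly 0.39 AU
earth = travel_minutes(1.0)

assert 3.0 < mercury < 3.5   # ~3.2 minutes
assert 8.0 < earth < 8.6     # ~8.3 minutes
```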

Ok that’s neat, gravity radiates. Makes sense given what we’ve been told about nothing (matter, energy, and mere information about physical changes) being able to travel faster than the speed of light. But there’s more to the story, so much more, dear reader. In the mundane, normal scenario of the Sun just drifting along at constant speed and direction, the direction of the gravitational pull that is radiating from it changes as it travels further away from the Sun! When you’re really close to the Sun, the gravitational radiation that passes through you pulls you straight back toward the Sun. But further away, for example at 92 million miles where Earth orbits, the gravitational radiation will have gradually changed so that it’s not pulling you straight back, but rather at a slight angle, toward the point in space the Sun will have drifted to during those 8 minutes! After traveling away from the Sun for another 35 minutes or so, the same gravitational radiation will reach Jupiter and will have changed more, such that it pulls toward wherever the Sun should have drifted by that time. As gravitational radiation travels away from a constantly moving body, the direction that it pulls in continuously changes so that it’s always pulling toward that body’s present expected location, not simply straight back to the location it was in when the radiation left it!!!!!1111!!1 o_O

If I explained that well, our minds are now in equivalent states of blown-ness. It’s awesome to think gravity has this dynamic nature. A crazy way to think about it is that even though we see the Sun in one location (where it was 8 minutes ago), we’re pulled by its gravity toward a different location (where it actually is right now). Apparently, the law of conservation of momentum requires that gravity behaves this way (I don’t understand this point conceptually yet). Also apparently, if the Earth were always being pulled straight back toward the old location of the Sun, and likewise all inter-planetary gravitational interactions were pulling straight back to “outdated” locations, their orbits wouldn’t be stable and the planets would fling each other out of the Solar System (assuming they would ever be able to settle into an orbit, or even coalesce into existence, in the first place). Another crazy thing: each of us radiates weak gravity that, when it leaves us, is imbued with a sense of what direction we were heading in at that moment!

(I should note that gravitational radiation pulls toward wherever a body should be as long as its speed and direction remain unchanged. If either abruptly changes, the gravitational radiation already traveling through space does not magically “know” that it needs to recalibrate to the body’s new heading. Also, what I dub “gravitational radiation” is just my way of capturing the essence of its nature linguistically. I don’t understand the intricacies of quantum gravity or anything else that attempts to explain gravity more deeply.)

A cheesy visualization. At a given moment, gravity can be thought of as spherical arrangements of miniature electric fans floating away from a moving body, capable of blowing on anything they pass by. From a massive stationary rock, the set of fans “launched” at any given moment will always be pointed straight back at the rock, no matter how far away they float. But if the rock was drifting at a constant speed/direction when the fans were launched, each one of them will slowly change the direction it’s blowing in as it travels away from the rock, such that it’s always pointing to where the rock has drifted. The rock’s very motion puts a slight spin on the fans when they launch. The faster the rock is travelling, the stronger that initial spin; the slower it’s travelling, the weaker the spin. Such is gravity.


So, when’s our first gulag opening?

The British government (aka the US’s very own Mini-Me) has taken to intimidating the *friends and family* of disruptive journalists! This appalling, abusive violation of free speech resembles the very worst from repressive regimes like Russia’s!

It’s disheartening that there seems to be so little chatter about this issue on social networks. Is that a ramification of the incentive structures of the networks? Or of people’s belief that a magical force (an army of unicorns?) will keep life as we know it from ever changing? Or something else?

Should web designers know how to code?

UX expert Josh Seiden recently posted his affirmative thoughts about this question. I wanted to follow up with some thoughts of my own.

To me, this question is like asking whether a writer should know about the publishing business. If you’re “web-designing” solely as an artistic, self-directed endeavor, then “no” (but then we might prefer the term web art as opposed to web design). If you’re web-designing as part of anything larger, then “as much as possible, yes.”

As a developer and (novice) designer, I strive for pragmatism in both pursuits. Having both skillsets supports this. It’s pretty self-evident that the more of a system one understands, the more concerns and constraints they can simultaneously address/balance when working in that system. Understanding the relationship between CSS and HTML, their document-oriented roots, and the interpretation differences between browsers, better allows one to create flexible, idiomatic, conceptual, usable (e.g. performant) CSS while also supporting accessibility, SEO, and DOM and server performance.

I’ve read that the shift toward web design minimalism is about respecting the user. I agree, but I argue that it’s also about respecting the development process. Complex or pixel-perfect designs are very fragile and costly, wear on the development team, and hinder the business, usually with no marginal value over simpler designs.

Designing for web with strong knowledge of CSS is respectful of the process and environment of that work.

Done ain’t necessarily better than perfect

Many in the Agile software development community espouse the advice that “Done is better than Perfect”, meaning that it’s better for a software development task to be functional in some way (“done”) than it is for it to feel fully complete in every way (“perfect”). While I’m sure the Borg won’t appreciate that, it is useful for individuals or teams that are struggling to ship, or that lose sight of their goal of delivering value to the customer and instead get pre-occupied with their personal pride in their work. Don’t get me wrong, having pride in one’s work is fine, but when it becomes a pre-occupation (i.e. obsession), it’s harmful to customer-oriented productivity.

My issue with this mantra is that it is too simple and has therefore developed a cargo cult (as all simple precepts can). Seeking perfection is obviously foolish, but contrasting “perfect” with “done” creates a false dichotomy where anything off the shortest path to a minimally-functioning (i.e. barely done) implementation is foolish. This can lead to an inadvertent habit, or even the explicit goal, of shipping debt-heavy, barely-done code. Such code is unsustainable and therefore is itself foolish. Done ain’t necessarily better than Perfect.

What the mantra attempts to do is discourage work that doesn’t add significant value, but its heavy-handed simplicity overreaches. I prefer to avoid prideful, non-valuable work by repeating the much more pointed mantra, YAGNI (along with what I perceive to be its corollary, “ASAYNI”: as soon as you need it, which encourages useful investment in code rather than taking on technical debt). For example, by following ASAYNI in my own exploration of quality code, I discovered that as you refactor code into stable underlying structures (forming a “framework” if you will), the codebase not only becomes more consistent and predictable, but a strong “opinionation” also emerges. There are two things to note about this.

First, ASAYNI promotes investment, not only in projects but in developers. The opinionation yielded by the commitment to refactoring has ongoing effects on correctness and maintainability/readability for the life of the project. Also, I am now armed with better refactoring skills, and can refactor more efficiently on future projects (which further improves my refactoring skills). These are effects that compound with time. Despite this, refactoring into anything resembling a “framework” is often looked at as bad practice, not only when YAGNI, but because to exceed barely done is to foolishly seek perfection.

Second, although it’s obvious in hindsight, I did not expect strong opinionation to emerge from extensive and aggressive refactoring. I did expect a tangible increase in re-use and a standardization of code, but not the intangible feeling that I built an over-arching “way” of doing something. Despite study of refactoring, I’d never applied it so aggressively, and this experience of something unexpected happening hammers home the idea that some things can only be learned by experimentation. Sure, not all experiments yield unexpected discoveries, but by limiting yourself to “barely done”, you categorically cut yourself off from finding them.

Software development is a balancing act, the necessary accommodation of multiple orthogonal concerns (correctness, maintainability/readability, performance, security, extensibility, defensiveness, re-usability, development cost, testability to name a few). If we’re constantly focused merely on reducing development cost, not only are our projects constantly tipping over, but we also never get good at balancing them.

More on modular CSS

I’m always on the lookout for thought-provoking discussion about CSS, since I find it so rare. Smashing Magazine can occasionally turn up quality information about CSS, as they did recently in a guest piece that described the nose-to-tail rebuild of the London Times’ website. It was an interesting read overall, and the part that stood out most to me detailed their approach to CSS. They anticipated pain around HTML view re-use and re-composition (into various layouts), and sought to structure their CSS accordingly. I applaud their efforts (despite their dropping the goal of semantic naming, which would seriously concern me), and noticed that their approach resembled one I had taken early in Blocvox’s development. I weighed in to share my experiences, and wanted to reproduce my comment here as a follow-up to the techniques described in my recent post about the motive for modular CSS.

I started with the same use of long/multipart class names, and although it is performant, I found it a bit cumbersome to develop with. In my approach, I strictly adhered to representing each level of structure in the class name, so a link in the headline would have the class name ‘.apple_headline_link’. This made nesting or un-nesting elements require a lot of tedious class renaming, making rapid experimentation very burdensome.

Instead, I switched to a convention that relies on the child combinator. E.g.,

.apple { /* rules */ }
.apple > .-headline { /* rules */ }
.apple > .-headline > .-link { /* rules */ }
.apple > .-subHead { /* rules */ }
.apple > .-subHead > .-link { /* rules, can differ from header link */ }
.apple > .-body { /* rules */ }

The benefit of this comes when using a CSS preprocessor. I use Stylus, since I use Node.js for build tooling.

.apple
  // rules
  > .-headline
    // rules
    > .-link
      // rules
  > .-subHead
    // rules
    > .-link
      // rules
  > .-body
    // rules
    background-color: green
    > .-headline
      color: white

Now moving elements around only requires adding/removing a selector and indenting/un-indenting a bunch of declaration blocks. Performance is still good, but theoretically not as good as single class selectors.

A necessary part of this approach is that sub-elements have class names beginning with a dash, while root elements do not. A corollary is that all class names beginning with a dash are only used in selector groups scoped to a root class. With this convention, sub-elements are like private variables in OOP, with no meaning in the global class namespace.

I also use “flags” (e.g. “_green”, using an underscore prefix naming convention), which are analogous to a public boolean property in the OO world. Consumers are free to “set” any supported flag on a module’s root element.
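As a concrete illustration (the _green flag and its rules here are hypothetical, following the conventions above), flag selectors stay scoped to the module root:

```css
/* A hypothetical _green flag "set" on the .apple module's root element,
   e.g. <div class="apple _green">. */
.apple._green { background-color: green; }
.apple._green > .-headline { color: white; }
```

Like the dash-prefixed sub-element classes, the underscore-prefixed flag classes have no meaning in the global class namespace on their own; they only take effect when combined with a module’s root class.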