Who ordered the scrambled brains?

Live and late-breaking coverage of Michael

Done ain’t necessarily better than perfect

Many in the Agile software development community espouse the advice that “Done is better than Perfect”, meaning that it’s better for a software development task to be functional in some way (“done”) than it is for it to feel fully complete in every way (“perfect”). While I’m sure the Borg won’t appreciate that, it is useful for individuals or teams that are struggling to ship, or that lose sight of their goal of delivering value to the customer and instead get pre-occupied with their personal pride in their work. Don’t get me wrong, having pride in one’s work is fine, but when it becomes a pre-occupation (i.e. obsession), it’s harmful to customer-oriented productivity.

My issue with this mantra is that it is too simple and has therefore developed a cargo cult (as all simple precepts can). Seeking perfection is obviously foolish, but contrasting “perfect” with “done” creates a false dichotomy where anything off the shortest path to a minimally-functioning (i.e. barely done) implementation is foolish. This can lead to an inadvertent habit, or even the explicit goal, of shipping debt-heavy, barely-done code. Such code is unsustainable and therefore is itself foolish. Done ain’t necessarily better than Perfect.

What the mantra attempts to do is discourage work that doesn’t add significant value, but its heavy-handed simplicity overreaches. I prefer to avoid prideful, non-valuable work by repeating the much more pointed mantra, YAGNI (along with what I perceive to be its corollary, “ASAYNI”: as soon as you need it, which encourages useful investment in code rather than taking on technical debt). For example, by following ASAYNI in my own exploration of quality code, I discovered that as you refactor code into stable underlying structures (forming a “framework” if you will), the codebase not only becomes more consistent and predictable, but a strong “opinionation” also emerges. There are two things to note about this.

First, ASAYNI promotes investment, not only in projects but in developers. The opinionation yielded by the commitment to refactoring has ongoing effects on correctness and maintainability/readability for the life of the project. Also, I am now armed with better refactoring skills, and can refactor more efficiently on future projects (which further improves my refactoring skills). These are effects that compound with time. Despite this, refactoring into anything resembling a “framework” is often looked at as bad practice, not only when YAGNI applies, but because to exceed barely done is to foolishly seek perfection.

Second, although it’s obvious in hindsight, I did not expect strong opinionation to emerge from extensive and aggressive refactoring. I did expect a tangible increase in re-use and a standardization of code, but not the intangible feeling that I had built an over-arching “way” of doing something. Despite having studied refactoring, I’d never applied it so aggressively, and this experience of something unexpected happening hammers home the idea that some things can only be learned by experimentation. Sure, not all experiments yield unexpected discoveries, but by limiting yourself to “barely done”, you categorically cut yourself off from finding them.

Software development is a balancing act, the necessary accommodation of multiple orthogonal concerns (correctness, maintainability/readability, performance, security, extensibility, defensiveness, re-usability, development cost, and testability, to name a few). If we’re constantly focused merely on reducing development cost, not only are our projects constantly tipping over, but we also never get good at balancing them.

More on modular CSS

I’m always on the lookout for thought-provoking discussion about CSS, since I find it so rare. Smashing Magazine can occasionally turn up quality information about CSS, as they did recently in a guest piece that described the nose-to-tail rebuild of the London Times’ website. An interesting read overall, the part that stood out most to me detailed their approach to CSS. They anticipated pain around HTML view re-use and re-composition (into various layouts), and sought to structure their CSS accordingly. I applaud their efforts (despite their dropping the goal of semantic naming, which would seriously concern me), and noticed that their approach resembled one I had taken early in Blocvox’s development. I weighed in to share my experiences, and wanted to reproduce my comment here as a follow-up to the techniques described in my recent post about the motive for modular CSS.



I started with the same use of long/multipart class names, and although it is performant, I found it a bit cumbersome to develop with. In my approach, I strictly adhered to representing each level of structure in the class name, so a link in the headline would have the class name ‘apple_headline_link’. This made nesting or un-nesting elements require a lot of tedious class renaming, making rapid experimentation very burdensome.
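
For illustration, the multipart convention looks something like this (a sketch, using the same hypothetical ‘apple’ module as the examples below):

.apple { /* rules */ }
.apple_headline { /* rules */ }
.apple_headline_link { /* rules */ }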

Instead, I switched to a convention that relies on the child combinator. E.g.,

.apple { /* rules */ }
.apple > .-headline { /* rules */ }
.apple > .-headline > .-link { /* rules */ }
.apple > .-subHead { /* rules */ }
.apple > .-subHead > .-link { /* rules, can differ from header link */ }
.apple > .-body { /* rules */ }

The benefit of this comes when using a CSS preprocessor. I use Stylus, since I use Node.js for build tooling.

.apple
  // rules
  > .-headline
    // rules
    > .-link
      // rules
  > .-subHead
    // rules
    > .-link
      // rules
  > .-body
    // rules
  &._green
    background-color: green
    > .-headline
      color: white

Now moving elements around only requires adding/removing a selector and indenting/un-indenting a bunch of declaration blocks. Performance is still good, though theoretically not as good as with single-class selectors.

A necessary part of this approach is that sub-elements have class names beginning with a dash, while root elements do not. A corollary is that all class names beginning with a dash are only used in selector groups scoped to a root class. With this convention, sub-elements are like private variables in OOP, with no meaning in the global class namespace.

I also use “flags” (e.g. “_green”, using an underscore-prefix naming convention), which are analogous to a public boolean property in the OO world. Consumers are free to “set” any supported flag on a module’s root element.
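
To make the conventions concrete, here’s a sketch of the corresponding markup (the element types are arbitrary):

<div class="apple _green">
  <h2 class="-headline"><a class="-link" href="#">…</a></h2>
  <h3 class="-subHead"><a class="-link" href="#">…</a></h3>
  <div class="-body">…</div>
</div>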

Triggers of rage

No, I’m not referring to automatically executed SQL/DML code, but to an article I just read. It’s about the nature of things that can spark irrational anger, particularly in employment arrangements, and is authored by veteran software project manager Michael Lopp. In it he describes how simple things like a key slowly failing on his keyboard can evoke the same intense, aggressive and irrational thoughts and actions that discussions of the Big Three (title, salary, and location) can during performance reviews. He proceeds to advise managers to approach such conversations with patience while avoiding analysis paralysis (endless cycles of “should I give a bigger raise?” or “does he deserve the promotion?”). Ultimately he chalks these reactions up to the faulty mental wiring innate to all humans.

I instantly drew the connection from Lopp’s triggers to unmet expectations, a concept core to my own understanding of relationships. I chimed in with the following.

Centering the discussion on the concept of triggers is a fine basis for giving advice, but as a student of interpersonal communication, I’d argue a better center is the concept of “expectations”. In fact, you touched upon this throughout the article.

Your concept of triggers, and the Big Three you mention, evolve from miscommunication of expectations. One party believes the other party has agreed to meet certain expectations, when in fact they have not. (This describes the vast majority of problems between rational, self-interested entities, including romantic relationships and business partnerships.) Most often, this is because the expectations simply have not been expressed, were vague, or have changed. One of the common traits of the Big Three is that they are circumstances of voluntary employment that are rarely discussed.

From there, different bits of advice might be derived, including: making ongoing efforts both to break down cultural barriers to discussing the Big Three and to explicitly identify other unexpressed expectations; understanding disappointment/rage in terms of unmet expectations (that you expect keyboards to have a long key life, and that others will not chew with their mouths open, perhaps because you don’t do so); and framing disagreements in terms of unclear expectations. This can restore a sense of rationality, responsibility, and constructiveness to the situation.

The “how did I screw this up?” bias

I love TechWell. They’re an online publication covering all stages of the software development lifecycle. Their articles are heavy on soft skills, and I’ve found them very insightful, but they don’t seem to be too popular amongst software developers in general (based on how rarely I’ve seen them cited on HN, Reddit, Twitter, or in conversation).

Anyway, they recently posted a personal/process improvement article about various cognitive biases. I’m fascinated by these illogical tendencies in day-to-day human thinking, and I notice them a lot, in myself and in others. After reading the article, I added this comment, informally describing a “self-accountability bias” I’ve noticed.



Another debilitating cognitive tendency I strive to avoid is a form of “hasty generalization” (which is when a generalization is made about a population from a statistically insignificant sample, perhaps due to confirmation bias).

This particularly debilitating form is compounded by negativity bias. When it occurs, the “population” is the observer himself, and the sample consists of personal experiences connected to some relatively large, costly expectation or goal that is not met. The failure is compounded by the real and opportunity costs, and can instigate a strong negativity bias. The generalizations are made about the observer’s own decisions, behaviors, or traits. For example, when launching a startup, entrepreneurs are forced to make dozens of complex decisions with imperfect information. All of these decisions could be reasonably rational, but if the venture fails, the entrepreneur might conclude that the decisions were wrong and illogically “learn” not to repeat such decisions. Another example can spring from a failed long-term romantic relationship. Much time, energy, and consideration is put into such a relationship, and if it ultimately fails, a participant might be driven to attribute the failure to certain behaviors or traits of theirs, however illogical.

(The problem is not that individuals shouldn’t learn anything from these experiences; it’s that they tend to identify lessons based on unique circumstances but then apply them in broader circumstances. Identifying the useful lessons from complex failures is another matter.)

The extreme negativity of the experience can cloud their recognition of its complexity and uniqueness, and set them on a path of learning anything they can from it. These lessons can improperly deter them from goals or situations they can successfully manage. Perhaps this is the opposite extreme of hindsight bias, but the damage can be just as bad if not worse. A way to avoid this is to remind oneself of the uniqueness of the situation and that one acted in accordance with one’s morals, worldview, and perhaps limited knowledge of the situation; to think of times when suspect behaviors were actually beneficial; to acknowledge the variables beyond one’s control that played a role in the failure; and to reaffirm (or define) one’s morals and worldview to support a steady-handed outlook.

Programmatic Advertising

Programmatic what now?! Who-how advertising?!

I’m still a little fuzzy on the details myself, but it’s a topic I’m trying to catch up on quickly. What follows is my current, nascent understanding. (I hereby disclaim any assertion of truth, factual correctness, or entertainment value.) Programmatic advertising is the nouvelle vague of online advertising that seeks to cut out the middleman while enriching a publisher’s ad inventory, thereby increasing the publisher’s margins.

Okay, some background. Online advertising traditionally works in one of two ways, both involving media and content publishers (who have empty ad space on their webpages, and are AKA sellers), who sell their ad inventory (i.e. their ad space) to advertisers (AKA buyers). This ad inventory is constantly being generated: each time a user loads a webpage, each ad placement represents a salable ad space, known as an ad impression.

The first way to capitalize on these ad impressions is through direct sales, in which a publisher staffs an in-house ad sales team that works directly with advertisers to transact sales of ad impression packages for advertisers to run their campaign(s) through. The packages can be targeted to the advertiser’s needs, for example, only at website visitors that are logged in and have purchased a waffle-maker from the site in the past two months. The more an advertiser can home in on a particular marketing segment, by running highly-targeted campaigns, the more they are willing to pay for the ad inventory. It’s important to note that in this way, ad inventory is sold in advance of its actual generation: an advertiser buys a package of 10,000 impressions, and as those impressions are generated by users viewing pages on the site, the website will populate them with the advertiser’s ads.

The second way is through an ad network. An ad network is a giant middleman that buys ad inventory from publishers, standardizes it, enriches it, and then auctions it off to advertisers, algorithmically, piecemeal, in real-time. This means that publishers don’t need in-house ad sales teams pitching and managing accounts with advertisers. Instead, they can hook their website into the ad network via simple programming methods and quickly gain access to a huge market of advertisers. (Note: Even though this involves some programming, this is not what is known as programmatic advertising.)

“Piecemeal” and “real-time” mean that ad impressions are sold on an individual basis, automated at very high speed by the ad network. These transactions do not involve future inventory, like with direct ad sale packages, but instead involve “fresh” inventory as it is generated. (Yes, each ad you see on most websites you visit was transacted between the time you started loading the page and the moment the ad appeared!)

On the other hand, these ad networks don’t differentiate those fresh ad impressions in very many ways. This means publishers can’t capitalize on their ad inventory’s unique premium value.

Ad network: “Give us your fresh ad impressions. Put ‘em in the 5¢-bucket if the page it was generated on has anything to do with cheese, the 3¢-bucket if its display size is 200 pixels by 150 pixels, or the 1¢-bucket if neither.”
Publisher: “Wow, those are my choices? What about impressions generated by logged in users that bought a waffle-maker in the last two months? Advertisers have paid lots for that in the past.”
Ad network: “Pfft. Put it in the 1¢-bucket, or buck off!” [Evil laugh, as follows.]

Thousands of publishers continuously pour their ad inventory into these networks. One interesting ramification of this is that as an individual surfs between websites serviced by the same ad network, that network is able to track that user (through the magic of third-party browser cookies). While this threatens user privacy, it allows the ad network to enrich the inventory it has purchased from publishers with cross-website user attributes that the publisher alone could never know. This is valuable to advertisers, and the ad network profits from it.

Anyway, back to that evil laugh. Because of it, so-called programmatic advertising techniques and products have evolved. The essential idea is that publishers should be able to enhance their direct sales with their own real-time ad transacting service tailored to all the premium data attributes that are unique to their inventory. This cuts out the middleman (a significant agent of user privacy invasion), provides more targeting and value to the advertiser, and therefore increases revenue for the publisher. User behavior tracking is still involved, but it is now siloed within each website, which I think fits much more naturally with user expectations. These techniques, products, and services are difficult to implement but are rapidly becoming more affordable.

<analogy>

Allow me to illustrate. Let’s say I own a dairy farm, and I produce milk. I produce plain milk, but I also produce chocolate milk, and my family’s secret-recipe strawberry milk. Also, all my cows are rBST-free. I make money in two ways. One way is by painstakingly setting up contracts with local grocers. This requires sales, legal, billing, and delivery services, but at the same time the grocers pay a pretty penny, because they know they are getting high-quality plain, chocolate and strawberry rBST-free milk from a local farm. This is akin to content publishers using direct sales to sell targeted ad inventory at a premium; advertisers pay extra to be confident their ads will be displayed precisely to the people they want to reach.

The second way I make money is by selling wholesale to a big national distributor. I truck off huge containers of my various kinds of milk. I don’t make as much profit per gallon, so it makes sense for me only if I can scale my milk production very high, and just profit on volume. One of the reasons the prices aren’t as high as when selling locally is that the distributor doesn’t differentiate between rBST-free milk and rBST-contaminated milk. They also don’t differentiate strawberry milk. (They just add it to their industrial tanks of plain milk!) All of my premium selling points are worthless to them. But because they have national reach, and I can’t sell all my milk locally (for one, I don’t have the legal and delivery resources, and two, there isn’t enough local demand), they are still a useful revenue channel. At the same time, the distributor might differentiate by region (so that consumers can buy fresher milk from a region close to them), something that is largely irrelevant to my local buyers.

(Here’s where I have to stretch the analogy just a bit.) Luckily, some smart people have come along and invented an affordable way for me to own my own national sales and distribution center! Sales are done directly to the grocery chains throughout the nation (and even directly to households) through a sophisticated website that is customized to denote the premium rBST-free and strawberry milk that I offer. Let’s say the distribution center is manned by low-cost robots and self-driving cars. I still need to invest in some sales and other support staff, but not as much as before and with much greater return. Now I can have the best of both worlds: selling my premium products at premium rates, and reaching a national audience.

</analogy>

Reading about all this has opened my eyes to the dynamism of the online advertising space. What I thought was a static environment is actually a landscape of innovation and evolution. To digress a bit, another interesting aspect of online advertising I recently discovered is called ad network optimization. This is, as I understand it, when a publisher methodically measures the (revenue) performance of various ad networks against various ad placements and other impression attributes, and then automatically re-allocates inventory to maximize revenue. Online advertising, important privacy considerations notwithstanding, is a much more vibrant business than I’d previously thought.

One thing I will credit myself with: I’m stupid and I know it. And, if I’m gonna attempt to participate in this space, I’m gonna keep trying to learn more about where it’s going.


In the post above, I used the phrase “ad network,” though I might have meant “ad exchange.” I’m not 100% sure. My perception is that most ad inventory is bought through real-time bidding, which occurs on ad exchanges, but I seem to be reading that ad exchanges are still considered somewhat novel, and that ad networks handle most ad transactions. So, I’m not sure what the right phrase is, or what the current reality is. I.e., I’m really stupid.

Tunneling to Freedom without a Jailbreak

If you’re anything like me, you’re an idiot who lost your jailbreak today when you tried to upgrade from iOS 6.0 to iOS 6.1.2 (which supports an untethered jailbreak) but were forced to upgrade to iOS 6.1.3, since Apple’s servers had already closed the signing window for iOS 6.1.2 installs. [Shakes fist in direction of Cupertino.] And that means you lost the ability to use GuizmoVPN to access your startup’s access-restricted development server.

In the past, I’d tried using OpenVPN Connect, the Apple-sanctioned OpenVPN client app, but it required the use of a so-called “tun” adapter. Tun operates at OSI layer 3 (routing IP packets), as opposed to layer 2 (Ethernet frames), which GuizmoVPN supports via the tap adapter. Of course, the beauty of tap is that it can be bridged to your VPN server’s LAN interface, meaning no mucking around with subnets and routes.

Anyway… today I was forced to get tun working. You could say I was in for a “tun” of fun! (I digress.) I needed to progress through a series of puzzles, riddles and enigmas, each more beguiling than the last, carefully designed to test a different skill and/or bit of administrivia, before I gained the power of tun.

Stage 1: The Wall of Fire

First, I had to open/forward a port on my private network’s WAN gateway, and also on the local firewall of the VPN server itself (God bless ufw!), since I wanted to retain the operation of the existing tap-based OpenVPN server. Kid’s stuff.
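
For reference, the ufw half of that looked something like this (a sketch, assuming OpenVPN’s default UDP transport on the port configured below; the WAN gateway’s port forward is router-specific):

# ufw allow 13337/udp
# ufw status verbose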

Stage 2: The Tunnels

Next came OpenVPN configuration. The routing wasn’t as bad as I thought, but the general notion of having a separate subnet for the tunnel just felt wrong next to the simplicity of bridging. OK, say the private network uses 192.168.1.0/24, and I chose 192.168.2.0/24 as the tunnel subnet. Also, the OpenVPN server itself has a local IP of 192.168.1.2, and the private WAN gateway is at 192.168.1.1. This was my server’s OpenVPN conf.

port 13337
dev tun
mode server
tls-server
# Tunnel subnet; the server itself takes 192.168.2.1.
server 192.168.2.0 255.255.255.0
# Give clients a route to the private subnet.
push "route 192.168.1.0 255.255.255.0"
# Send all client traffic through the tunnel.
push "redirect-gateway"
# TLS encryption stuff.
# Other (non-germane) stuff.

I could’ve placed the `push` settings in the client configuration, but I prefer to manage them centrally. This works for both OpenVPN Connect on iOS as well as the Windows OpenVPN client. Flipping/flicking the switch in OpenVPN Connect resulted in a rewarding green checkmark. I’ll take it!

Mini-game 1: Pong

With that up and running, I tried a little `ping`. I could ping across the tunnel from the client to the VPN server, but not out to hosts on the private network. I thought all it would require was a simple:

# echo 1 > /proc/sys/net/ipv4/ip_forward
# iptables -A FORWARD -i tun+ -j ACCEPT

However, I thought wrong, and pinging into the private network still failed. Oi! A couple head-smash-upon-keyboards later, I read that the private-network hosts need a route back to the pinging clients, since they live on separate subnets. Of course, it’s like playing tennis across parallel universes. Just like that. Luckily, all hosts on the private network are configured, by DHCP, to use the WAN gateway by default, so all I needed to do was add a static route on the gateway to 192.168.2.0/24 over 192.168.1.2. (Shout out to Tomato firmware, “Awww yeah!”) With that, pings were routed back to the originating client successfully. Game. Set. Match.
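
(On Tomato this went into its static routing table; the equivalent iproute2 command, as a sketch, would be:)

# ip route add 192.168.2.0/24 via 192.168.1.2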

(You want to configure the IP forwarding to survive reboots. On Ubuntu, this is done in /etc/sysctl.conf.)
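
That is, uncomment or add the following line in /etc/sysctl.conf, then apply it with `sysctl -p`:

net.ipv4.ip_forward=1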

Mini-game 2: Dig-Dug

I’ve got dnsmasq running on the private network as well, which is a crucial piece of infrastructure when it comes to hostname-based web hosting. So once I confirmed pinging to hosts on the private subnet, I tried a quick `dig development-site`, but to no avail. Even worse, `dig @192.168.1.1 development-site` was silent as well. So I set about getting DNS queries to go to the remote network. The first piece of the puzzle was to inform the clients of the private DNS server. Adding

push "dhcp-option DNS 192.168.1.1"

to the OpenVPN config should’ve done the trick. But it didn’t. Fortunately, I had just recently been futzing around with my dnsmasq configuration, and I recalled that it could be configured to respond only to requests coming from certain interfaces. Peeking into the configuration revealed that it was indeed constrained to respond only to the LAN interface. A quick configuration change to loosen that up, and I was digging up my tunnel like nobody’s business!
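
I don’t recall the exact lines, but the shape of the change in dnsmasq.conf was roughly this (a hypothetical excerpt; interface names vary by setup):

# Before: bind to, and answer on, the LAN interface only.
interface=br0
bind-interfaces
# After: answer on everything except the WAN side.
except-interface=vlan1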

Boss Stage: The Spider’s Nest

Finally, it was time to try `wget development-site`, but (altogether now) to no avail! “WAAHHH! But, but, but… I did the thing with the routes and the tunnels, blasting the hole in the firewall, the digging and the ping-ponging across parallel universes… alas, must I turn back, again?” I couldn’t let it go this time. But my dev site had always been accessible in the past with tap OpenVPN, what gives?!

Suddenly, Alec Guinness’ voice boomed in my ear, “USE THE LOGS, LUKE! I MEAN, USE THE FORCE, MIKE. ER, USE THE MIKE, LOGS? UH, HEHE, YOU GET IT, RIGHT?” Looking at the logs, it turned out the requests were rejected because they were coming from IPs on the VPN subnet, not the private subnet! Of course. Even though tap gave VPN clients IP addresses on the private subnet, tun gave them addresses on the special tunnel subnet. Since Apache was configured with `Allow from 192.168.1`, requests from the tunnel subnet were rejected. A quick one-two punch to the Apache configuration (by the way, `Allow from` arguments are space-delimited, not comma-delimited), and BAM! Apache was mine!
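
The fix looked roughly like this (an Apache 2.2-style mod_authz_host sketch; the directory path is made up):

<Directory /var/www/development-site>
    Order deny,allow
    Deny from all
    # Space-delimited, not comma-delimited:
    Allow from 192.168.1 192.168.2
</Directory>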

Crossing my fingers, toes, and eyes, I flicked OpenVPN Connect to the On position, and Mobile-Safari’d my way to http://development-site… And, like a divine light emanating from the face of God himself, my development website graced my eyeballs. [Cue end theme.]

I hope this helped you out, dear reader/person that is “anything like me.” Also, if you’re anything like me, you’d probably like to know that neither of the Starbucks in Union Square has power outlets, but Think Coffee on 4th Av. does. Now you can access your private dev site while having fresh coffee and/or beer. You’re welcome.

How to Iterate Your Project Off a Cliff

That is what this article should have been titled, as it completely and dangerously misses the key insight of Scrum. Scrum is not merely a sequence of hoops to jump through.

Fundamentally, Scrum is a recognition of certain risk-bearing concerns in projects, and the assignment of those concerns to separate roles. The division of concerns sets up a structural tension that drives attention to risks, a tension that is necessarily resolved through open negotiation. To truly reap the benefits of Scrum is to appreciate the value and organic nature of this tension.

All the steps listed in the article follow from this basic understanding, but these steps do not entail the myriad supporting activities that make Scrum valuable.

Patterns of pattern ignorance

Those wanting to pick up Backbone.js who can see past its current placement in the trough of disillusionment (see: hype cycle) will likely encounter confusion with the names of its major components. Having read the codebase and coming from a Rails-flavored MVC/Model 2 background (namely ASP.NET MVC), I tend to understand the underlying patterns of Backbone.js components as follows:

  • Routers work as front controllers,
  • templates (along with the browser’s DOM) as views, and
  • Views as presenters (from the MVP pattern).

Models are flavored with the Active Record pattern. Presenters and views are tightly coupled.

MVC and MVP coexist because the controllers respond to navigational input (mapping to model-presenter pairs) and the presenters respond to page-specific input (mapping to model manipulations and, possibly, server calls).
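
A minimal sketch of this mapping (the names, the `postTemplate` function, and the `#main` element are all illustrative, not from any real app):

var Post = Backbone.Model.extend({ urlRoot: '/posts' }); // Active Record flavor

// A Backbone "View" acting as a presenter: it mediates between the model
// and the actual view (the template plus the browser's DOM).
var PostView = Backbone.View.extend({
  events: { 'click .save': 'save' }, // page-specific input
  save: function () { this.model.save(); },
  render: function () {
    this.$el.html(postTemplate(this.model.toJSON())); // e.g. an _.template() result
    return this;
  }
});

// A Backbone "Router" acting as a front controller: it maps navigational
// input to model-presenter pairs.
var AppRouter = Backbone.Router.extend({
  routes: { 'posts/:id': 'showPost' },
  showPost: function (id) {
    var post = new Post({ id: id });
    new PostView({ model: post, el: '#main' }).render();
    post.fetch();
  }
});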

Considering that prominent tech bloggers are encountering similar naming confusion with other prominent libraries, this seems to reflect either a lack of academic discipline by library authors or calculated marketing choices. Either way, the net effect on adoption appears to be zero.

The Selector-Space Race

The article “Writing Efficient CSS Selectors” made the front page and generated some discussion on Hacker News today. These articles are always interesting but rarely of practical value, and they illustrate yet another way the software development community refuses to scrutinize CSS at the development-process level.

Performance is only one concern of a website project. I’ve found that a greater risk by far is scalable stylesheet manageability, a topic with little useful engineering attention. Any decent-sized project is going to have to decompose their presentation declarations into manageable artifacts. This eases collaboration but has the unwanted side-effect of hiding existing selectors in multiple files. Add to this a tight project timeframe, a development team that regards CSS as a second-rate language (as almost all that I’ve encountered do, worsening the more senior they are), and CSS’ lack of usable “selector-spacing” (in the namespace sense), and you have a recipe for thermonuclear battles over selector specificity.

Selector and naming conventions can basically eliminate this problem. Module root elements have a class name that starts with a letter; module nested elements have a class name that starts with a dash. These are combined using the child combinator (not the descendant combinator!) to constrain their applicability by implying a much stricter subject structure. Each module goes into a separate file in a single directory, where each file shares the name of the module’s root class name, as in the sketch below.
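
For example, a stylesheet directory might look like this (an illustrative sketch; I use Stylus, hence the extension):

styles/
  apple.styl    (defines .apple and its dash-prefixed sub-elements)
  menu.styl     (defines .menu, .menu > .-item, etc.)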

This brings selector specificity to the forefront and establishes practical, explicit selector-spacing. What was once super-ambiguous, e.g.

body { color: black; }
.menu .item { color: orange; }
.menu .item .menu { color: black; }
.menu .item .menu .item { color: orange; }
/* and on */

is now very precise:

body { color: black; }
.menu > .-item { color: orange; }

Is it the most efficiently-interpreted ruleset? No, and I don’t care. It’s the most efficiently-manageable ruleset. (At least until Web Components are ubiquitous.)

The Company You Keep

I’ve always appreciated the efforts of companies to express the character they aspire to have, and the values they aspire to uphold and promote. Not only do such efforts demonstrate consciousness about their identity, they also show respect toward those they may engage. Ultimately, actions speak louder than words, but certain words, when published, can constitute an act that speaks to the commitments a company is willing to make to itself and to others.

To this end, I’ve shared Blocvox’s guiding principles on its blog.
