
Priorities v0.01

I spent a chunk of yesterday and a few hours today pulling a bunch of plumbing out of the Docs Decomposer so I could remake it into what I’m going to just call Priorities.

It’s a tool that lets you step through the prioritization exercise I outlined yesterday, and it provides a few extras because it’s happening in the context of a dynamic web page.

The Docs Decomposer had a prioritization tool bolted on as an afterthought, so it was missing a few things, including a conception of “teams.” Priorities has a Team model that allows multiple teams to exist inside of it, each with their own members and lists of priorities.
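The shape of that is nothing exotic. Roughly (a sketch, not the actual models):

# Sketch: teams with their own members and lists of priorities.
class Team < ActiveRecord::Base
  has_many :memberships
  has_many :users, :through => :memberships
  has_many :priorities
end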

It’s in a somewhat usable state now, to the extent that you could check it out from GitHub, run bundle, run the migrations, and self-provision your account, team, and priorities. The workflows for some of this aren’t great, and a few of the relationships that are meant to be two-way are as yet one-way. I think its documentation page is still the Docs Decomposer’s, too, so it won’t be super helpful.

Here’s a quick demo of the basic use:

There are a few places I’d like to take it:

Teams have but a single flavor now. Making the tool think of teams in terms of scrum teams or services teams would make it easier to extend Priorities into a general purpose prioritization and auditing tool. Services managers, for instance, could have a dashboard where they characterize the level of support they’re offering each scrum team, or could be given mandatory priority objects for each scrum team the app knows about.

I’ve temporarily removed the tying of users to priorities. It would be easy to put them back, but the current approach to that problem assumes everyone on a team would be an active user of the app, and people shouldn’t have to have logins to review team priorities. If I can work out how to do that, then active users can start having their own user pages, where they can see efforts they own and perhaps contribute to reporting around the priorities they’re responsible for.

The organizational model is flat. Extending the models to include a conception of “organizations” and adding some notion of whether a priority is tied to (or is) something like an OKR would make it possible to model an entire company.

Anyhow, next steps probably ought to be:

  • Cleaning it up under the hood a little.
  • Finally delivering on better user password reset tools.
  • Figuring out what it takes to Herokuize it.

Docs Decomposer Week 7

This weekend I solved a problem I’ve been bothered by for weeks: Given a list of pages with a dynamic priority or risk button on each row, the sortable tables didn’t reflect changes in priority or risk until I reloaded the page. It felt good to see that one go down, and it felt good to drop a bunch of lines of code in the process.

So, I think we’re done here. Or at least, we’re in a place where I can do everything with this tool I wanted to do, plus a few things I hadn’t thought of at the time.

I didn’t keep very close track of my time. If I had to guess, I spent somewhere around 40-50 hours, or an average of about an hour a day since my first commit on February 13. The vast majority of that time was here and there on weekends, but I put in a few lunches, too. It occupied a funny space: I didn’t want to devote work time to it in case I got stuck and couldn’t see it through. So it felt better to file it under “hobby that may prove useful.”

Other stuff I finished up this weekend:

  • Some rake automation, which makes setting everything up pretty quick.
  • An HTML reimport button, to refresh the stored contents of a page when the copy in the app has drifted from the underlying content in the git repo.
  • User pages, so it’s possible to look up a user and all their commented or flagged pages.
  • Sortable tables that work.
  • Unicorn/nginx, which was ops’ recommended approach to get this thing onto an internal box.

Stuff I’d like to do next:

  • Connecting Devise, which I’m using for authentication, with LDAP.
  • An “export new metadata” gadget, to make it easier for writers to quickly generate new YAML frontmatter that includes their tags and risk assessment, for copy and paste into documentation.
  • Revising the importers to understand our pre-release content repository and internal preview servers.
  • Writing some tests. I’ve never really done much with testing, and I’d like to learn how. That’d make my friend in ops, who’s been helping me get this ready to deploy somewhere, a little happier with the whole affair.

Docs Decomposer #4

Tags ended up going in pretty easily with acts-as-taggable-on. It allows for multiple kinds of tags, which means you can pretty much create any kind of taxonomy you want. I’m just using keywords for now, in a classic free-tagging setup. With judicious use of forms to control input, I’ll be able to add risk and priority tags.
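For flavor, here’s roughly how those contexts could wire up (a sketch, not my actual model; the risk and priority contexts are the part I haven’t built yet):

# Sketch: one model, several tag contexts via acts-as-taggable-on.
class Page < ActiveRecord::Base
  acts_as_taggable_on :keywords, :risks, :priorities
end

page = Page.first
page.keyword_list.add("installation", "cli")  # free tagging
page.risk_list = "high"                       # fed from a controlled form
page.save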

Docs Decomposer

Which means, I guess, that the thing is pretty much “done” in terms of the basic features I’d like it to have:

  1. Individual user accounts.
  2. The ability to quickly reduce a given page to just the steps and CLI instructions.
  3. The ability to flag a page with a click.
  4. The ability to comment on a page.
  5. The ability to tag a page.

Over lunch, I broke my “don’t do this in the office” rule briefly to add some markup to our Jekyll templates to make it easier to grab the rendered content for import. The importer became a little simpler as a result, and the pages lose a little bit of extra nav cruft from our templates.

So, as it stands, I could pretty much run an inventory session for the team from this thing running on my laptop. The one thing that’s still vexing me about it is the notion of unique and trackable page elements (ordered lists and code blocks, mostly).

In my Padrino prototype, each page showed the rendered content, plus a tab that showed only ordered lists and pre-blocks. Each <ol> and <pre> was checksummed, and the “elements” tab showed which other pages in the docs corpus had the exact same content.

What I really, really want is something like this:

  • You look at the page and see a <pre> block that concerns you, either because it looks very perishable or could be harmful if the information has aged out.
  • You hover over the element to expose a flag or comment button, plus a little stats box that tells you where else that element appears.
  • Once you flag that element, it’s flagged everywhere it appears in the corpus, receiving a special visual treatment.

How’s that work?

Everything is parsed by Nokogiri, so I can just write a little checksumming service in my Elements controller that will be seeing the same HTML whether it’s getting it during the content import phase, or the presentation/review phase. So:

  1. User loads a page.
  2. JavaScript finds each element of interest on the page (each <pre> and <ol>) and sends it over to the checksumming service.
  3. The checksumming service returns a unique i.d. for the element.
  4. The element gets wrapped in a div with that i.d. (in case it already has an i.d. of its own).
  5. The comment/flag widget for that element is just AJAX calls to the controller against the i.d. of the wrapper, which gets a class to reflect its flagged status.
  6. Each flagged element gets either a modal/lightbox or a page with the comment history.
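A minimal sketch of what that checksumming action might look like (hypothetical controller and route; the real thing would want smarter normalization):

# Sketch: app/controllers/elements_controller.rb
require 'digest/sha1'

class ElementsController < ApplicationController
  # POST /elements/checksum, where params[:html] is the outerHTML
  # of an <ol> or <pre> sent over by the JavaScript.
  def checksum
    # Normalize whitespace so trivial reflows don't change the i.d.
    normalized = params[:html].gsub(/\s+/, ' ').strip
    render :json => { :id => "el-#{Digest::SHA1.hexdigest(normalized)}" }
  end
end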

Now that I write it all out, though, it seems pretty doable. I should probably write more stuff out.

Flaws in the plan? Mostly that it’s going to add some load time as each page is pulled in, dissected, and checksummed. Fortunately, there are advanced GIF technologies to make that seem almost pleasant.


It’s something I could do at import time, too, I guess. There’s nothing sacred about the underlying markup.

Oh, the other problem is “what about when that element changes, even if it’s for the better?” At that point, the checksum changes and the flag/comment history disappears because it becomes a “new” element. There goes the inventory. Which means the inventory might become something less about the literal content and more about what/where it is, e.g. “ordered list on page #{foo} under the heading #{bar}.” So when a flag gets thrown on an element, the existence and “coordinates” of the thing are logged somewhere, along with a snapshot of what it looked like at the time. That’s probably going to be enough to find it again.

Hm.

So we could go either way here. I think writing the checksumming service and coding up the process sounds interesting and fun. I think deciding that flagging elements and leaving comments on them as a one-time process with no expectation of permanence might be okay. I think automating a log based on “there’s a fishy set of instructions on the page called #{foo} under the heading #{bar}” sounds like a useful middle ground. I’ve also already figured out the xpath needed to do just that: “Show me a thing plus the most recent heading that occurred before it,” which has proven great for explaining what some of the lists and pre blocks are trying to tell you at a glance.
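That lookup is simpler than it sounds. Sketched with Nokogiri, where html stands in for a stored page’s content:

# Sketch: pair each ol/pre with the most recent heading before it.
require 'nokogiri'

doc = Nokogiri::HTML(html)
doc.search('ol, pre').each do |el|
  heading = el.xpath('preceding::h1 | preceding::h2 | preceding::h3').last
  puts "#{el.name} under the heading #{heading ? heading.text.strip : '(none)'}"
end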

Well … something to think about.

Docs Decomposer Weekend 3

Okay!

I added a Comment model to the project this weekend, then wired it up to a modal form. Useful things I learned to use this weekend:

  • simple_form, which makes it a little easier to write forms in Rails, and which understands Bootstrap.
  • markdownizer, which can take a text column and render its Markdown to HTML on creation, which will help make the comments a little more expressive.

And since I’m using twitter-bootstrap-rails, I got some handy freebies for styling the flash with Bootstrap text styles.

A few things I didn’t get to:

  • Making it possible for users to delete their own comments.
  • Making it possible for users to edit their own comments.
  • Connecting the act of flagging to invoking the comments form.

In other words, the comments feature sort of sucks, but at least you can suffer in Markdown.

And to replace the spreadsheet we came up with to review the upstream platform docs last week, I probably need to add a tags field of some kind. I think that’s a solved problem.

One more weekend?


Docs Decomposer Weekend 2

Last week I left off with a gnawing sense of unhappiness because the flagging buttons weren’t dynamic: When Rails dropped Prototype.js, it dropped pretty much all the stuff I knew about AJAX and Rails. I could have left well enough alone, but it bothered me; and we’re doing this in the name of (re)learning.

I had fun working on it. Last weekend’s time involved a lot of re-orientation on the basics. This week felt pretty fluid as stuff I’d forgotten came back, and as I got used to a few of the tweaks I added to Emacs last week (including flymake and Projectile).

In the process of rewriting the flagging code into something a little easier to work with, I learned about acts_as_votable, and that encouraged me to just toss the flagging code I’d written altogether. Yay, cargo culting. That didn’t make the dynamic flagging button situation any better, but it did make a few other features fall into place much more quickly.
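For reference, acts_as_votable boils the flagging problem down to something like this (a sketch of the gem’s basics, not my exact code):

# Sketch: flags as votes.
class Page < ActiveRecord::Base
  acts_as_votable
end

page.liked_by current_user    # flag the page
page.unliked_by current_user  # unflag it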

So this weekend I added:

  • Flag status indicators on the page lists. You can tell if you’re the one who flagged the page, and whether someone else flagged it along with you.
  • A toggle to hide all the unflagged pages.
  • A little list of who flagged a page on the page itself, along with next/previous buttons for each page, to make it easier to run through a review without going back and forth to the master page list.

Next up, I guess, is commenting. I was hoping to get to it this weekend, but once I had acts_as_votable in place, it was a lot easier to run through a bunch of flag-related things and implement them, and I was ready for some easy stuff once I’d finally managed to make the flagging buttons dynamic.

Once I’ve got commenting in place, it’ll be time to go back through and do a lot of housekeeping. I’ve got view code that probably ought to go into partials, controllers that could probably stand to have some logic moved out into the models, and a lot of code that … well … it’s optimistic is what it is!

After that?

The thing I keep thinking about and sketching out is how being able to flag a given element on a page might work. One of the reasons I decided to just import HTML straight into the app was that it gave me access to the markup. For instance, once you find and checksum an ordered list, it’s pretty easy to wrap it in a div and give it an i.d. At that point, it can be targeted for flagging widgets and such.
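The mechanics of that, sketched in Nokogiri (the class name is just for illustration):

# Sketch: checksum each ordered list and wrap it in a targetable div.
require 'nokogiri'
require 'digest/sha1'

doc = Nokogiri::HTML(html)
doc.search('ol').each do |ol|
  id = "el-#{Digest::SHA1.hexdigest(ol.to_html)}"
  ol.wrap(%Q{<div id="#{id}" class="trackable"></div>})
end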

Dunno. Bedtime.

Weekend Science Project (Rails After Time Away)

We need to do a documentation inventory at work. We’ve added a lot to the docs over the past year, and we’ve got a few things lurking around in there that need to be flagged for QA or revalidation. The stock answer is “put all the pages in a spreadsheet and start clicking/scrolling and annotating the list,” but I’ve been pining to fiddle around with a tool for a little while, and spreadsheets suck, so I decided to see how quickly I could put something together to make the job a little easier for the team (and reviewers down the road).

I started out with a Padrino app, which was a great way to do a proof of concept on what I was after:

  • Find all the Markdown files in the docs repo.
  • Use the filenames to compose the live URLs of each document.
  • Pull the HTML in from the server and store it for fast retrieval/decomposition.
  • Identify all the elements of interest on each page and store them with a checksum.

For that last, “elements of interest” became “ordered lists and things inside pre tags.” They represent the step-by-step instructions in the docs; either in “do this, then this, then this” form, or in “start typing this stuff on the command line” form. They’re the things people gravitate to and start following, and we know that the average technical user is prone to looking for just those things and not looking at the surrounding text.

Putting a checksum on each of them will provide a way to do reports that show when an element appears on multiple pages, and across multiple versions of the docs. You kind of expect things like ordered lists to change a little over time. If one has persisted over four or five releases without changing, you might want to look at it and make sure it hasn’t been overlooked.

So after four or so hours of work, my Padrino app could do all that stuff, wrapped up in Bootstrap. I could fire up the app, get a list of all the documentation pages by product version, and use some widgets to do a few useful things:

  • See all the elements of interest in their own tab.
  • Preview a docs page, then click a button to show only the headings, pre blocks, and ordered lists.

That seemed pretty useful just as a way to help someone rip through a collection of pages and look for just the things most likely to cause a user pain, then maybe enter them in a log.

Once I was home for the weekend, though, I realized that I wasn’t as familiar with Padrino as I once had been with Rails, so I decided to do a quick port. Since I’d used ActiveRecord for the original app, and since I was happy with my db schema, it was pretty simple to set up the models, re-run the migrations, and reuse the importer scripts to repopulate my development db with content.

I spent a few hours on Saturday and a big chunk of Sunday seeing how far I could get while leaving just enough time to blog about it before ending the long weekend and going to bed. I didn’t want to put any more time in on it at work, because if I ran into a dead end and couldn’t make what I wanted, I didn’t want to feel obligated to try using it.

I had to relearn a few things about Rails that have changed since I last used it much (during v2 times), and I had to learn some new things about jQuery that I’ve never dealt with before. Still, I’m pretty happy with where it’s at now:

First, I implemented flags. For now, all you can do is flag a page if you see something you think might be a problem that needs further review. I’ve got a few ideas about how to flag individual elements. One way is super simple, but doesn’t allow you to flag them in the context of the text. The other is tougher and I’m still working on relearning Rails’ AJAXy stuff to figure out a way to flag an element in place, store it, then have it highlighted as flagged next time you visit that page.

Next, I added a working user authentication model with Devise, so flags can belong to individual people. For now, it just means that if two people look at the same page, they don’t have to agree on its flaggability. Down the road, it’ll mean there’s a way to share the tool with all our technical reviewers so that they can flag things and we can capture all the flag data from them. I’d like to add a way to enter a comment, too, but one thing at a time.

Finally, I got thiiiiis close to making the flag button truly dynamic. All the AJAX stuff I knew from when I built my last Rails app has changed, and I couldn’t figure it all out in time, so for now I just force a page refresh when the user clicks the flag button to get it to change its state from “flag this” to “unflag this.”

There are a few more things I’d like to get to:

  • Report pages, naturally: Every flagged page, how many flags it has. That won’t take much.
  • Flaggable elements (ol, pre): I can already do this the easy way, but I’d love to do it the hard way.
  • Comments to go with flags, with design to accommodate a running list of flag comments down the side of a page preview.
  • Dynamic page import. Right now, we get the map of all the files then suck in their HTML and store it. The advantage is that it’s pretty fast. The disadvantage is that the content will drift from reality over the course of a release cycle. Way better to either suck it in each time the page is viewed or offer a “re-import the HTML” button people can hit.
  • Links straight to the files in the GitHub repo, so people can quickly fix things from the app if they spot something.

It’s in a place where I can knock most of that stuff off with another leisure-time sprint, then see about hosting it somewhere relatively secure where I can put it in front of a few people for real feedback.

It also reminds me of all the stuff I wish I knew more about, like testing. I guess if I can get it into a usable state for other people, I can use it as a sandbox for learning about that stuff.

Anyhow, here it is. Hope you had a good Presidents Day weekend. Good night.

In Which I Write Some AppleScript to Save the Big Magical Gnu

Meditate

People at the Friday all-hands made fun of Emacs. I briefly imagined a big, magical gnu sadly, slowly fading away because nobody believed in it anymore, and then I thought of all the ways I’ve failed to support that big, magical gnu. So in a fit of emacsimalism I wrote some AppleScript to convert all my Things projects to org-mode projects, and their tasks to org-mode todos.

It understands:

  • Tags, and converts them to org-mode-style, colon-delimited lists.
  • Due dates, and converts them to deadlines.
  • Status:
    • “open” converts to “TODO”
    • “completed” converts to “DONE”
    • “canceled” converts to “CANCELED,” which you’ll need to add to your org-mode configuration with something like this:
      (setq org-todo-keywords '((sequence "TODO" "|" "DONE" "CANCELED")))

I stopped short of:

  • Mapping “Areas” to something (like org-mode properties, I guess)
  • Mapping “Contacts” to something (I always use the “@name” convention to tag people)
  • Making it put its output somewhere. It just returns a big string you can copy out of Script Editor’s output field and paste into a text file.
  • Bothering with the idiocy needed to get AppleScript to pad any single-digit elements of a date with zeroes. I just hand off to a Ruby one-liner with do shell script.
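The Ruby side of that hand-off is tiny. A sketch:

# Sketch: AppleScript passes the date parts as shell arguments, e.g.
#   do shell script "ruby -e '...' " & (year of d) & " " & (month of d as integer) & " " & (day of d)
printf("%04d-%02d-%02d", *ARGV.map(&:to_i))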

Cheer up, magical gnu!

Not Advocatist, but …

I have this idea I’d like to work on some time:

MacRumors has this nifty buyer’s guide that tells you when it’s a good idea to be thinking about an upgrade because a given product family is due. If you take a look at the top, there’s some tidy, consistent markup:

There’s a table (class “buyerstable”) and a number of data cells with lists (class “buyerslist”), each of which contains list items (with classes of red, green, and yellow) wrapping the name of each product family.

So … Nokogiri:

#!/usr/bin/env ruby

require "rubygems"
require "nokogiri"
require "open-uri"

# The product families I actually care about.
interesting_products = ["iPad", "iPhone", "iPad Mini", "MacBook Air"]

url = "http://buyersguide.macrumors.com"

doc = Nokogiri::HTML(open(url))

# Walk each buyer's-guide list and print any watched product marked green,
# skipping entries that mention Gazelle (they share the same markup).
doc.search('ul.buyerslist').each do |bl|
  bl.search('li.green').each do |product|
    if interesting_products.include?(product.text)
      puts product.text unless product.text.include?("Gazelle")
    end
  end
end

So, it seems to me there’s a programmatic way to tell if interesting things might be about to happen in Apple’s product lines. Do I care because I’m interested in knowing the best time to upgrade my Apple products?

No!

I care because most Apple writers become insufferable when there aren’t new products for them to fawn over. They delve into bitter, weird little obsessions, spend their time gloating over nits they have successfully picked, and do weird shit like call out Paul Thurrott (whom I have met, and who is a nice guy who gave me a funny Microsoft mug when I was a Linux columnist) and Rob Enderle (who is an absurd thinker whose main contribution to the world of consulting is the productization of trolling). In other words, Mac writers, given no new stuff to talk about, become as bad as the things they hate, just like Nietzsche says people will.

So it occurs to me that if you’ve got a programmatic way to see if product changes are coming, you’ve got a programmatic way to see if the Mac people have anything worth reading. So if you’ve got an RSS reader that can programmatically turn subscriptions on or off, you’ve got a programmatic way to ignore people who become annoying on reasonably predictable cycles. Alternately, maybe something like grapi would do the trick by talking to Google Reader directly (meaning your changes show up on all your devices).

More Precise Annoyance Removal

But what about feeds that are good most of the time, but have annoyances embedded in them? Kind of a nice NetNewsWire feature is the ability to make any script that returns a valid feed into a subscription. I used the example code from the Ruby 1.9.3 RSS library to make a simple script that reads in a feed, skips the stuff that matches a pattern, and then adds the remainder back to a feed:

#!/usr/bin/env ruby
require 'rubygems'
require 'rss'
require 'open-uri'
require 'sanitize'


url = 'http://rss.macworld.com/macworld/feeds/main'
scrub_pattern = /(the\ week\ in|macalope)/i
title = "Scrubbed MacWorld"
author = "Mike's RSS Scrubber"

scrubbed_items = []
open(url) do |rss|
  feed = RSS::Parser.parse(rss)
  feed.items.each do |item|
    # Keep anything whose title doesn't match the annoyance pattern.
    unless item.title.match(scrub_pattern)
      scrubbed_items << item
    end
  end
end

rss = RSS::Maker.make("atom") do |maker|
  maker.channel.author = author
  maker.channel.updated = Time.now.to_s
  maker.channel.about = url  # the feed being scrubbed
  maker.channel.title = title

  scrubbed_items.each do |scrubbed_item|
    maker.items.new_item do |item|
      item.link = scrubbed_item.link
      item.title = scrubbed_item.title
      item.description = Sanitize.clean(scrubbed_item.description)
      item.pubDate = scrubbed_item.pubDate
    end
  end
end

puts rss

This clears out a lot of MacWorld’s “second bite at the apple” (heh) roundup posts, and the Macalope, who is dull.

It occurs to me that maybe the overarching project here is to define a set of feeds I have an uneasy relationship with in a simple YAML file, like:

---
title: Some Mac Dude's Feed
shortname: some_mac_dude
url: http://somemacdude.com/rss
scrub: /somepattern/

Then genericize the script above so a simple Sinatra service can serve up feeds from an endpoint, like http://mph.puddingbowl.org/feeds/some_mac_dude. Then they’re scrubbed, available to Google Reader, and I can build the “is this blog going to irritate me when Apple’s not rolling out new product” logic into the app.
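Something like this minimal Sinatra sketch would do it (hypothetical file layout and helper name, and it assumes the scrub value is stored as a bare pattern rather than wrapped in slashes):

#!/usr/bin/env ruby
# Sketch: serve scrubbed feeds from per-feed YAML configs in feeds/.
require 'sinatra'
require 'yaml'

get '/feeds/:shortname' do
  config = YAML.load_file("feeds/#{params[:shortname]}.yml")
  pattern = Regexp.new(config['scrub'], Regexp::IGNORECASE)

  content_type 'application/atom+xml'
  # scrub_feed would be the script above, wrapped in a method that
  # returns the rebuilt feed as a string.
  scrub_feed(config['url'], pattern, config['title'])
end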

Say! We can also add tags!

---
title: Some Mac Dude's Feed
shortname: some_mac_dude
url: http://somemacdude.com/rss
scrub: /somepattern/
tags: 
    - mac
    - coffee
    - children

And set up an “annoyance” endpoint on the Sinatra app that can add and destroy annoyances from the feeds served. Then a little mobile app to manually toggle annoyances on and off as they occur to me.

Gosh, Mike …

Yeah, I know, o.k.? I know.

Turn a Redmine Issue into an OmniFocus Inbox Item

I dropped this into an Automator Run Shell Script item and turned it into a service. It acts on your selected text and assumes you’ve either selected a valid Redmine issue URL (e.g. “http://projects.puppetlabs.com/12345”) or a valid Redmine issue number (e.g. “12345”). I tied it to the keystroke C-S-r, to drop the issue description and URL into the OmniFocus inbox.

The sad part of this story is that rb-appscript isn’t under active development, so stuff like this won’t last much longer. Then it’s on to something Apple-sanctioned, like MacRuby or RubyCocoa (or falling back to the dark times of do shell script).


require "rubygems"
require "appscript"
require "redmine_client"
include Appscript

RedmineClient::Base.configure do self.site = 'http://projects.puppetlabs.com' self.user = 'USER' self.password = 'PASS' end

ARGV.each do |f| issue_select = f

if issue_select.match(/^http/) issue_id = issue_select.match(/\d{1,}$/)[0] else issue_id = issue_select end

issue = RedmineClient::Issue.find(issue_id) issue_url = "http://projects.puppetlabs.com/issues/#{issue_id}"

of = app("Omnifocus").documents[0]

of.make(:new => :inbox_task, :with_properties => { :name => "(##{issue_id}) #{issue.subject}", :note => "#{issue_url}\n\n#{issue.description}", }) end

app("Omnifocus").activate

0 to rbenv on Mountain Lion (late 2012 edition)

I thought I was going to sell my 11″ MacBook Air and even had it zeroed out for handoff once I found a buyer, but I decided I missed it, so I set it back up again this weekend, reinstalling Mountain Lion. That meant getting rbenv back onto it with minimal hassle.

If you can live with plain old system Ruby (1.8.7-p358) on Mountain Lion, steps 4 and 7 don’t really matter: Ruby builds fine with Apple’s own compilers. If you want an rbenv-managed Ruby 1.8.7, you’ll need gcc, not llvm.

This is the second time in about a month I’ve been through this, and what I’ve got here represents the fastest path I could manage to get from “new Mac with Mountain Lion installed” to “rbenv that can build rubies prior to 1.9.x.”

  1. If you don’t have Xcode installed, you can install the Xcode command line tools found here: https://developer.apple.com/downloads/index.action . Alternately, if you don’t have an Apple account, you can get a package here: https://github.com/kennethreitz/osx-gcc-installer

  2. Install Homebrew: http://mxcl.github.com/homebrew/

  3. Install a new git from Homebrew:

    $ brew install git

  4. Install Homebrew’s gcc42:

    $ brew tap homebrew/dupes && brew install apple-gcc42

  5. Install rbenv with Homebrew:

    $ brew install rbenv

  6. Install ruby-build:

    $ brew install ruby-build

  7. For ruby 1.8.7, disable Tcl/Tk support to build with rbenv:

    $ CONFIGURE_OPTS="--without-tk" rbenv install 1.8.7-p370

That’s pretty much it. The longest part of the process is downloading and installing the command line tools (115MB), but that’s much better than downloading and installing the behemoth that is Xcode, then installing the command line tools.

I suppose there’s one more thing to note, which I discovered in the process of trying to see if there was a quick way to uninstall Xcode’s command line tools: there’s no command you can just run to do that, but there’s a helpful script, which I found on Cocoanetics’ website, and which seemed to do the trick.

© Michael Hall, licensed under a Creative Commons Attribution-ShareAlike 3.0 United States license.