
Use Automator to Append to an org-mode Inbox

I’m pretty sure there are some org-mode modules for skinning something like this cat, but I read a tip in MacWorld today about creating reminders outside of the Reminders app using Automator and it seemed like a thing to adapt to org-mode once I’d finished my sandwich.

So … you fire up Automator and tell it you’re creating a new service, then:

  • you tell it “Service receives no input in any application”

  • you tell it “Ask for Text” and add a useful format reminder, such as “Add an item to the org-mode inbox (item|yyyy-mm-dd)”

  • you tell it “Run Shell Script,” passing input to stdin.

  • you include this script, seasoned to taste where your own org-mode inbox is concerned:
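The original script isn’t reproduced here, but a minimal sketch looks something like this (Automator’s “Run Shell Script” action can be pointed at /usr/bin/ruby; the inbox path and the TODO/DEADLINE formatting are my assumptions, so season to taste):

```ruby
#!/usr/bin/ruby
# Sketch of the "Run Shell Script" body, with the action's shell set to
# /usr/bin/ruby. The inbox location and the TODO/DEADLINE formatting
# are assumptions; adjust for your own org-mode setup.

INBOX = File.expand_path("~/org/inbox.org")

# Turn "headline|yyyy-mm-dd" (the date is optional) into an org entry.
def org_entry(input)
  headline, date = input.strip.split("|", 2).map { |part| part.strip }
  entry = "* TODO #{headline}\n"
  entry << "  DEADLINE: <#{date}>\n" if date && !date.empty?
  entry
end

if __FILE__ == $0
  input = STDIN.read  # Automator pipes the dialog text to stdin
  File.open(INBOX, "a") { |f| f.write(org_entry(input)) } unless input.strip.empty?
end
```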

Then you save it as a decent name you don’t mind seeing in your services menu, visit the Services Preferences panel and assign it a shortcut you’ll remember, and you get a quick dialog each time you hit the shortcut key that lets you add something to your org-mode inbox.

It’s super-simple, and you need to remember to separate the todo from its date with a pipe character (and never use a pipe character in the headline of your org-mode entry), then enter your todo date in the correct (yyyy-mm-dd) format. If you don’t add a date, it won’t try to add one, either.

If you’d rather it use the selected text from your current application, you can always tweak the workflow so the service “receives text” from “any application,” which will prepopulate the popup form with your selection.

For grabbing links or selections out of a small set of Mac apps and getting them into org-mode, there’s always org-mac-link-grabber, and there’s also the handy org-mac-message for getting links to the currently selected Mail.app message into org-mode. I mention those mainly to save you from doing what I once thought seemed perfectly sensible, which is jettisoning Mail.app for GNUS.

I guess neater still, were I a “neater still” sort of fellow, would be to grab the selected text of the current application but still offer the dialog, turning the selection into plain text under the new org-mode headline.

Notes on migrating to an open system plus omnioutliner2orgmode

I’ve been fiddling around with Mountain Lion since a day or two after it landed. Every new release of anything provokes some anxiety among people who were used to the previous version, and Mountain Lion has not been exempt from that.

What’s driving the anxiety for a lot of people this time around is how Apple’s vision for user experience on a Mac is converging with its vision for UX on an iPad or iPhone, the ways in which Apple is deciding things in favor of “the average user,” and whether the next iteration of OS X will push things even further along a path that eventually involves people having to jailbreak their MacBooks to get any real work done.

I don’t think we’re quite there yet, and I’ve been pretty pleased with how well Mountain Lion has been running on both my 2009 iMac and 2010 MacBook Air. That doesn’t mean I’m comfortable with everything I’ve seen, but it does mean it’s time to start making sure I know how to get off a platform that may make changes I can’t accept when it comes time to decide whether to re-up.

I had a chat with my boss about this last week. As I was talking about the things iCloud makes pretty convenient (automatically synced Safari reading list and tabs, synced notes, documents synced between iPad and Mac), he argued that there has to come a point where you don’t really want to cede too much to Apple to make “just work,” because you could eventually lose control of the things you care about most based on Apple’s whims. I think a lot of people have hit that in the past: You get to love some feature, a new release comes out, everyone insists there are no barriers to upgrading right now, and then you discover that nobody telling you to come on in because the water’s fine cared about that one feature you loved that is now gone, maybe taking the data you kept in it with it.

That’s why I haven’t really allowed iCloud to get at anything I can’t easily export into a more open format. To Apple’s credit, there are a number of things that are easily converted into something useful by other software. Calendars, addresses and bookmarks can all be exported from a user-accessible menu. The new Notes app doesn’t have an export option, but it’s got a scripting dictionary. The new Reminders app is similarly provisioned. iWork allows users to quickly export any of its files to their Microsoft Office analog format. 

At the same time, Apple’s new sandboxing rules, and the way it is restricting what can be sold through its Mac App Store (MAS), suggest that things could get dicey for apps more complex than a notepad or reminder list. Developers who depend on a freer run of the computer than Apple is willing to grant can always just sell outside the MAS, but they’ll be competing against products that might be willing to trade away some functionality to stay in the MAS. What’s good enough for the bulk of MAS users may not be good enough for me, and that gap will widen as we bring in apps that address more sophisticated functionality. That will have to place some pressure on developers. We don’t know what that’s going to mean, but that’s enough uncertainty for me to be thinking about what might have to be next. Which gets us to the point of this post:

I’ve started working on an outline that sketches out the issues I see involved in moving from a closed system (like a Mac, but it could easily be Windows) to a more open one (like Linux). This isn’t a new set of considerations for me: Keeping my data portable, or in apps where it can be easily exported, has always been important to me, but I want to start thinking more practically about how to make the move to a more open system in a way that doesn’t hamper my ability to keep getting work done all through the process, and that doesn’t force me to renounce the wonderful utility I get out of my current Mac software. So, the outline defines some terms, provides a place to think about the issues, and also serves as a very practical inventory of the software I’m using, what its analogs are in the world of free or open source software, and whether those analogs are good enough yet.

It’s also a work in progress. I started work on it in OmniOutliner (which is excellent), but in an early effort to test OmniOutliner’s own export to more open formats I learned that its export/import capabilities are a little limited (maybe due to gaps in the OPML spec, maybe due to an oversight). I also didn’t want to pay $20 for an iOS version when document syncing still isn’t there and I’m not sure what my upgrade costs would be to get that syncing when OmniOutliner 4 comes out. So I decided to move the outline to Emacs’ org-mode, which uses flat text files and, as I’ve already been half-sorry to discover, has an iOS app.

I could have copied and pasted the plain text export of OmniOutliner into an Emacs buffer and reindented it, but it was easier to write a few lines of Ruby:
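The original script isn’t shown here; a sketch of the same idea, assuming the OPML export (where each topic is an <outline> element with its title in a “text” attribute and any row note in a “_note” attribute), might look like:

```ruby
#!/usr/bin/env ruby
# Sketch of an OmniOutliner-to-org-mode converter, assuming the OPML
# export: each topic is an <outline> element with its title in a "text"
# attribute and any row note in a "_note" attribute.
require "rexml/document"

def opml_to_org(opml)
  org  = ""
  walk = lambda do |element, depth|
    element.elements.each("outline") do |outline|
      org << ("*" * depth) << " " << outline.attributes["text"].to_s << "\n"
      note = outline.attributes["_note"]
      org << note << "\n" if note && !note.empty?
      walk.call(outline, depth + 1)
    end
  end
  body = REXML::Document.new(opml).elements["opml/body"]
  walk.call(body, 1) if body
  org
end

if __FILE__ == $0 && ARGV[0]
  puts opml_to_org(File.read(ARGV[0]))
end
```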

It doesn’t, obviously, do anything besides tack some asterisks on to the front of each topic row and drop in a row note if it exists, but it saved me some fiddling around.

Put the current Chrome URL in your Safari reading list

… or “Hey, dawg, we heard you like Safari so we put your Chrome in your Safari so you can Safari when you’re done Chroming!”

[Screenshot: the Safari services menu]

I tend to do desktop surfing with Google Chrome because I do a lot of “open 10 tabs at the same time” stuff on our Drupal multi-site and Safari’s not very good at that. Sometimes, when it’s time to head downstairs for lunch or by the fireplace for some iPad surfing, I find myself wanting to take a few things I had open in Chrome with me on the iPad. I used to use Pinboard, but Safari’s new reading list is more convenient and doesn’t clutter up Pinboard with short-lived links, so it’s cool that there’s an AppleScript command to add things to it:
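The snippet isn’t reproduced here; a sketch of the round trip, with the AppleScript wrapped in a bit of Ruby so Chrome’s URL gets handed straight to Safari, might run like this (both verbs are in the apps’ scripting dictionaries, but treat the details as a starting point):

```ruby
#!/usr/bin/env ruby
# Sketch: ask Google Chrome for its current URL, then hand it to
# Safari's reading list via osascript.

SCRIPT = <<-APPLESCRIPT
tell application "Google Chrome"
  set theURL to URL of active tab of front window
end tell
tell application "Safari"
  add reading list item theURL
end tell
APPLESCRIPT

# Only meaningful on a Mac, obviously.
system("osascript", "-e", SCRIPT) if RUBY_PLATFORM =~ /darwin/
```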

And here’s one that does the same thing with the selected NetNewsWire headline:

Hook them up to the actual apps to taste.

I made the mistake of writing a quick Automator workflow that created a Service Menu item to do the same thing with any selected URL in any app, but when I went to try out my new service I noticed Apple had already thought of that. Easier to go into System Preferences and create a shortcut for it.

If you don’t mind the extra keystrokes, you could also just ⌘l then ⌘c to get Chrome’s current URL and use the service, no need for AppleScript at all, but I wrote it all before I realized Apple had done it for me.

What about Firefox?

No real scripting support because Firefox is lame like that and always has been. You can get the current Firefox URL by using AppleScript to press ⌘l then ⌘c:
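Something like this, sketched as Ruby driving osascript (the keystrokes go through System Events, and the URL comes back off the clipboard via pbpaste; assume accessibility access is enabled):

```ruby
#!/usr/bin/env ruby
# Sketch of the ugly workaround: activate Firefox, send cmd-l then
# cmd-c via System Events, then read the URL back off the clipboard.

SCRIPT = <<-APPLESCRIPT
tell application "Firefox" to activate
tell application "System Events"
  keystroke "l" using command down
  keystroke "c" using command down
end tell
APPLESCRIPT

if RUBY_PLATFORM =~ /darwin/
  system("osascript", "-e", SCRIPT)
  sleep 0.2                 # give the clipboard a beat to settle
  puts `pbpaste`.strip      # the current Firefox URL, with luck
end
```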

Yuck.

Quick sharing from Acorn to imgur

Skitch is neat, but I usually have Acorn open already and it has a very nice layered screenshot tool. This gist just takes the frontmost Acorn image, web-exports it to a PNG file, uploads it to imgur then copies the resulting URL to the system clipboard for sharing.

TMTOWTDI – mysql2 and Windows edition

So, you’re trying to get Rails going on Windows, and you’ve used the handy package at RubyInstaller.org, installed the equally handy Ruby DevKit, and now all you need to do is install the mysql2 gem to get going.

At this point, if you do what any normal person might think to do, which is gem install mysql2, you’ll fail because the gem wants a bunch of MySQL dev headers. That’s not a bad thing … we live in a world of dependencies that must be met.

One answer you can pursue is to download the big MySQL archive and pass a few flags along when you install the gem:

gem install mysql2 -- --with-mysql-lib="c:\Program Files\mysql\lib" --with-mysql-include="c:\Program Files\mysql\include"

Then be ready to tell Bundler the same thing for your Rails app proper, which I couldn’t get working.

What also works, and is much simpler, is just getting the version that works with MinGW32 build environments (which is what you get when you install the DevKit):

gem install mysql2 --version 0.2.6

Gemfile?

gem 'mysql2', '0.2.6'

then bundle install

This is all, by the way, another lovely use case for VMware: Freeze-dry a VM just before you start desperately following whatever random advice you can find from people trying to make some portion of Ruby or Rails work on Windows, so that once you think you have it figured out you can roll back to a pristine state and make sure it all works before you document it.

Not a bad time at all (tech edition)

Sometimes it’s nice to stop and look around and think about all the things you’ve been taking for granted. I’ve been in the process of handing two projects over to coworkers, one small and one large, and it’s caused me to go out looking for ways to make that easier. I don’t work on an actual dev team: My stuff has been solo efforts, so I’m aware that it probably looks and feels a little different from what everybody else is used to. It was good to find a few tools to make the handover less painful.

So here are a few things I’m newly grateful for:

GitHub

Wow. TextMate has great support for git repositories, so it was super-easy to set up a repository, add all the files, write a few READMEs in Markdown and hand things over. When something needed tweaking, it was a simple matter to make a fix, push it up, and let the recipients know. One of them was on Windows and needed different instructions to get things running? Fine … I made a Windows branch with its own instructions and dependency lists.

Maybe not as important, but GitHub made me feel better about dropping that code on my coworkers because it made everything more approachable. It makes the READMEs look good (and readable) and it makes the source easy to browse with pleasant syntax highlighting.

RVM

I’ve been developing with Rails on OS X, able to install whatever I need to get a gem working, updating my rubygems install whenever it suits me, etc. etc. I didn’t want to make my coworkers go through custom-building their own Ruby or rubygems, and after a few years of getting everything just so I know there’s a chance I’ve got something on my machine they don’t have on theirs. So RVM completely rocked.

RVM stands for “Ruby Version Manager,” and it’s a way to get a self-contained Ruby sandbox running on an unprivileged account. At its simplest, RVM lets you bypass your distro’s elderly Ruby packages and update your Rubygems package without getting a scolding about how your distro will be pleased to update that on its own timetable.

RVM can do a lot more: You can install multiple versions of Ruby and switch back and forth between them with something like:

rvm use 1.8.7

If you’d like to use whatever Ruby came with your distro:

rvm system

If you’ve got three or four Rubies floating around, you can set a default:

rvm --default use 1.9.2

You can also create swappable gem sets.

VMware

I first reviewed VMware 11 years ago, when this is what passed for a middling machine:

VMware’s base system requirements are a Pentium II/266 MHz processor and at least 96 MB of RAM. We tested the software on several configurations, ranging from a machine at the very lowest end of the recommended specifications to a Pentium III/500 with 128 MB RAM. Performance is clearly helped by devoting plenty of RAM to the virtual machine. A computer with 160 MB or more is closer to ideal, unless you run X with a conservative window manager and few applications.

I could have spent some time guessing about what might be needed to get my projects working for somebody else, but it was a lot easier to ask “which distro are you running?”, download the necessary disc image, set up a VM to match their environment, then run through installation and testing. I did that twice this week: once to match a developer’s workstation, once to make sure I was testing against the company server platform.

Ubuntu each time, by the way. I don’t know what the current state of the distro wars is, but Ubuntu went up about the way I remember Debian going up: easily and thoughtfully, leaving me with something that Just Worked for my limited purposes. So good for it.

progress_bar

So, _I_ know the script is doing what it’s supposed to be doing, because I’ve been fiddling with it for a year. Not everybody else does, so there are a few ways to denote that something is happening. One is just adding an extra bit of output to each iteration of the main loop, a sort of comforting beeping noise. Another is to use the handy progress_bar gem, which adds a progress bar that can show a counter, percentage completed, rate and ETA with just a few extra lines. Looks nice, less noisy, took all of a minute to add.

Padrino

Neato! Sam tipped me to Padrino, which sits on top of Sinatra and provides a sort of bridge between Sinatra’s minimalism and Rails’ … uh … Railsiness.

I’ve been fiddling around with the idea of building an app that would help me keep track of client sites: What plugins they have installed, what version their core software is at, what needs to be added to them, etc. It’s the kind of thing you could keep in a spreadsheet, but as I’ve sketched out what I want to keep track of, it’s gotten more and more unwieldy. I’d like, for instance, to keep track of which version WordPress is at in one place by updating just my WordPress entry, then list all the sites and get some quick visual feedback on which ones need an update. Same with plugin versions, template versions, etc.

It’d also be nice to have one place to keep track of which hosts each site lives on, what the path to the site files on the server is, client contact info, client support/development rates, and a worklog for each site (the better to have invoicing at some point).

Padrino’s cool because it provides a way to generate an admin interface that’s a step beyond Rails’ basic scaffolding. It’s nothing you couldn’t do in Rails pretty quickly, but Padrino makes it pretty and simple. For instance, here’s how to generate a Site model for my app:

padrino g model site name:string url:string client_id:integer cms_package_id:integer server:string path:string

And here’s how to quickly create an admin panel to manage sites:

padrino g admin_page site

Then you just load up /admin/sites, enter your user name and password, and you have a password-protected Web admin interface from which you can create and edit sites:

[Screenshot: the Padrino admin panel]

It doesn’t figure out associations, so you have to hand-edit here and there to do stuff like create a select item to pick which of your clients a site belongs to. You can do that by defining a collection in the controller (e.g. “clients”) and then using that collection with a form helper:
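The shape of that fix, as a sketch (the model names match the generator commands above, but the action body and the :fields options are illustrative, and the two halves live in the generated admin controller and its form template):

```ruby
# In the generated admin controller: load the collection the form needs.
get :edit, :with => :id do
  @site    = Site.find(params[:id])
  @clients = Client.all          # the collection the form helper will use
  render 'sites/edit'
end

# In the form template (ERB), feed the collection to Padrino's select
# helper; :fields names the columns used for the option label and value:
#
#   <%= f.select :client_id, :collection => @clients, :fields => [:name, :id] %>
```

This is a fragment of a larger Padrino app, not a standalone script.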

And that gets you this:

[Screenshot: the Padrino form helpers]

Combined with Pow and an hour of poking around the docs to figure out how to make the helpers work, Padrino gave me a working site manager app in under an hour. Nice.

Pow! etc.

It seems as if a few colleagues might need to use that little Sinatra-based reclassifier ditty, which is always a nice thing: The first iteration of these things usually seems to come close to breaking even on time saved if I just use it once, but if I can make the same tool available to one or two more people, then it’s a lock to earn the development time back and start saving a lot of time.

The problem with sharing stuff like Rails and Sinatra is deployment. Easy for me to set something up on my desktop that never seems to leave dev mode, but less of a pleasure to get it working for others, especially on shared hosting.

Several things will help with that:

First, Pow! from 37signals makes it easy to create a local development domain for webapps that use Rack. You just install it, create a config.ru file for your app in its directory, then symlink the app directory to ~/.pow and it becomes available at http://yourappdir.dev. Besides being a great way to work on a number of webapps in parallel, it’s a nice way to practice dealing with Rack.

Second, it looks like it’s not too hard to use those Rack-enabled apps on Dreamhost. This worked for my config.ru file, and it’s the same as the one I use for my desktop machine:
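The file itself is only a few lines; a sketch, assuming a classic-style Sinatra app living in app.rb:

```ruby
# config.ru, assuming a classic-style Sinatra app in app.rb
require "./app"
run Sinatra::Application
```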

Third, it’s not too hard to add http authentication to a Sinatra app. That means I can deploy little apps that don’t require a robust user model to a subdomain and protect them for as long as they need to be up, which is just a day or two. Here’s all I had to add to the top of my app file to get http authentication:
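The original lines aren’t shown here, but the standard way to do it is Rack’s basic-auth middleware; a sketch, with placeholder credentials:

```ruby
# At the top of the Sinatra app file: Rack's basic-auth middleware.
# The username and password here are placeholders, obviously.
use Rack::Auth::Basic, "Restricted Area" do |username, password|
  username == "me" && password == "sekrit"
end
```

This is a fragment that belongs inside a Sinatra app file, not a standalone script.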

This all makes me think back to 2008, when I wrote about my recent discovery of Rails: The combination of Sinatra, Pow! and Rack does even more to provide me with a comfortable replacement for lightweight databases, and makes it much easier to share my work, which is even better.

Sinatra ftw (Updated)

So, I have this task to accomplish:

I’ve got 1,180 articles that were all placed in a website directory I no longer want to keep. The friendly DBAs gave me an Excel spreadsheet showing me each article’s title, URL, pub date, author and current directory. I’m to go down the list and reassign each article.

I made it through 15 entries before realizing it’s hard to tell what an article’s about from just the headline. Some things look like tutorials, but they’re reviews, some things look like reviews, but they’re analysis. I can either guess, or I can copy the URL, flip to my browser and load the article then decide how to classify it. This process sucks:

  • Copying and pasting URLs is a drag
  • Waiting for articles to load if an element on the page hangs is a drag
  • Entering stuff in an Excel spreadsheet is a drag

So I gathered together a few tools:

  • Sinatra, which is a Ruby micro-framework for Web apps
  • Blueprint CSS, which is a CSS framework for quick/easy pretty (or at least not awful) HTML
  • Summarize, a gem that can take a bunch of text and return the relevant parts for a given percentage of the original

Then I set up a SQLite database, fed the Excel spreadsheet into it, and sat down with Sinatra.

Here’s the finished app (minus the one template page I use):
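The app itself isn’t reproduced here; a stripped-down sketch of the same shape (the table and column names are stand-ins, and the category list is a placeholder) runs along these lines:

```ruby
#!/usr/bin/env ruby
# A sketch, not the original: assumes an "articles" table with id,
# title, author, pubdate, body and category columns, plus a
# placeholder category list.
require "sinatra"
require "sqlite3"
require "summarize"

DB = SQLite3::Database.new("articles.db")
CATEGORIES = %w[news analysis review tutorial]  # placeholder list

get "/article/:id" do
  @id, @title, @author, @pubdate, body =
    DB.get_first_row("SELECT id, title, author, pubdate, body
                      FROM articles WHERE id = ?", params[:id])
  @summary    = body.summarize(:ratio => 25)  # 25 percent of the text
  @categories = CATEGORIES
  erb :article                                # the one template page
end

post "/reclassify/:id" do
  DB.execute("UPDATE articles SET category = ? WHERE id = ?",
             params[:category], params[:id])
  next_id = DB.get_first_value("SELECT id FROM articles
                                WHERE category IS NULL
                                ORDER BY id LIMIT 1")
  redirect(next_id ? "/article/#{next_id}" : "/article/#{params[:id]}")
end
```

It needs the sinatra, sqlite3 and summarize gems, and the erb template, so it’s a fragment rather than something you can run as-is.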

It’s missing some affordances, but the long and short is demonstrated here:

[Screenshot: the reclassifier app]

You start with a given article. It shows you the headline, pub date and author. Using the summarizer gem, it pulls in a summary representing 25 percent of the total text. You pick a new category for it from the radio buttons, click “Reclassify,” and it takes you to the next record (unless you’re done, in which case it just reloads the current record).

The time gain? I dunno. If I worked like a machine, I’d probably be done in four hours. Now it takes less time to update a record and I don’t have to do as much to update it: Just click a radio button, click a submit button, move on. Even if it turns out the time I save is offset by the coding time (two hours, with a break for coffee, chatting, answering mail), I figured out a way to spend time that would otherwise have been wasted screwing with Excel on playing with Sinatra instead. And since I’m going to have to do the same task in a few weeks for another site, I’ll have time-saving code ready to go.

Update: Even better with a bit of jQuery: Rather than loading the article summary every time (even though I don’t always need it to tell what kind of article I’m dealing with), I wrote an additional route that handles providing just the summary and used jQuery’s load function to pull in the summary for review when I click on a “load summary” button. That makes things a lot faster when I can tell what an article is just from the headline.
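That extra route, plus the jQuery call that pulls it in, sketches out like this (assuming a sqlite handle in DB and the summarize gem, as in the app; the jQuery lives in the template):

```ruby
# A route that returns just the summary for one article, on demand.
get "/summary/:id" do
  body = DB.get_first_value("SELECT body FROM articles WHERE id = ?",
                            params[:id])
  body.summarize(:ratio => 25)
end

# And in the template, a button's click handler pulls it in as needed:
#
#   $('#load-summary').click(function () {
#     $('#summary').load('/summary/' + currentArticleId);
#   });
```

Again, a fragment of the larger app rather than a standalone script.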

Update 2: Sinatra uses Mongrel by default, which is fine, except I noticed that every now and then a page load would go from taking an average of a fifth of a second or so to five seconds. Meh. So I sudo gem install thin'd the thin gem, and got much more consistent, snappy performance.

Ballpark Digest Relaunched

[Screenshot: Ballpark Digest]

The new Ballpark Digest went live today.

This job was pretty similar to the Arena Digest relaunch, and involved a lot of the same tasks.

There were over 2,500 legacy articles that needed to be imported this time around, and preserving their search engine placement was a little more important because the site was pretty well indexed. I had to pick up one new trick, too:

Not having any legacy i.d. numbers to work with during the import, I ended up having to figure out the legacy URLs on my own. I knew the articles were, at least, in the proper order, so at the beginning of the import data the second article in the list had an i.d. of “2”, the tenth in the list had an i.d. of “10”, etc. Unfortunately, that 1-to-1 mapping broke down the first time an article was published and then taken down, because my import data didn’t note missing articles. By the time I got to the 2,000th article, the relationship between import row and legacy i.d. was off by a pretty substantial amount.

I grabbed the RBing gem and automated the process of searching by article title and using the URL I got back to figure out the article’s old i.d. and URL. That didn’t work perfectly, because there were some gaps in Bing’s indexation of the site. So I had to write a second script that ran down the list of articles and looked at each i.d., applying the following algorithm:

  • If the article i.d. was one greater than the i.d. of the article before it, and one less than the i.d. of the article after it, I assumed it was o.k.

  • If the article i.d. didn’t match the above criteria, but the i.d. of the article after it was two greater than the i.d. of the article before it, I assumed the real page for that article had failed to be indexed, and I assigned it an i.d. between those of the articles on either side of it in the sequence.

  • If the article i.d. didn’t meet either of those criteria, I flagged it for review.
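That second pass is simple enough to sketch in a few lines of Ruby (the function name and the nil convention are mine, and I’ve let an archive-page neighbor abstain from the consistency check rather than veto it):

```ruby
# Sketch of the second-pass repair. ids is the list of legacy i.d.s the
# Bing lookups produced, in article order, with nil standing in for
# results that came back as archive-page URLs. Returns the repaired
# list plus the indexes flagged for hand review.
def repair_ids(ids)
  repaired = ids.dup
  flagged  = []
  (1..ids.length - 2).each do |i|
    prev_id, this_id, next_id = repaired[i - 1], ids[i], ids[i + 1]
    consistent = this_id &&
                 (prev_id.nil? || this_id == prev_id + 1) &&
                 (next_id.nil? || this_id == next_id - 1)
    if consistent
      next                          # assume it's o.k.
    elsif prev_id && next_id && next_id == prev_id + 2
      repaired[i] = prev_id + 1     # single gap: fill in the i.d.
    else
      flagged << i                  # can't tell; review by hand
    end
  end
  [repaired, flagged]
end
```

A single missing article gets filled in automatically; a run of missing articles, like the 455–458 streak below, falls through to the flagged list.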

Most of the time, the ones that were flagged for review were part of a streak of articles that hadn’t been indexed properly to begin with, so the best result Bing could produce was an easily recognizable archive page URL. It was easy to consult the list and see sequences like this:

  • 453

  • 454

  • archive URL

  • archive URL

  • archive URL

  • archive URL

  • 459

Clearly the third through sixth articles in the list had to be 455, 456, 457 and 458. I felt a little guilty for not taking the time to work out a way to do that programmatically, but there were only three or four sequences like that, so I sucked it up. There were also a few sequences where there was no discerning the proper sequence, but that list totaled fewer than 15.

Once all the i.d.’s were straightened out, I wrote a script to generate the redirects, and plopped it into the site .htaccess.

© Michael Hall, licensed under a Creative Commons Attribution-ShareAlike 3.0 United States license.