Chainsaw on a Tire Swing

Blogging with teeth!

You’re invited to a public talk on Wednesday

… & you’re invited!

Title: Intellectual Property: The Good, the Bad, & the Ugly in 2 Hours

Description: There are four core subjects of so-called intellectual property law: patents, copyrights, trade secrets, and trademarks. Attorney Denise Lieberman and Professor Scott Granneman have taught on this subject many times, and they’ll be covering all four of those core subjects. What are they? How do they work? What’s good about them and what’s bad (and there’s lots of bad)? And just why does Scott insist on calling it “so-called” intellectual property law? Join us and find out!

Who: Scott Granneman (me!) & noted local attorney Denise Lieberman

When: 6:30-9 pm

Where: Graybar Electric Co., Inc., facility at 11885 Lackland Road, 63146. Park in the lot, come inside, sign in with the guard, & take the elevators to the bottom floor.

Denise & I have taught & lectured on this subject quite a bit, & we’re very excited to be giving this talk. Please join us—I guarantee you’ll learn a lot, & you’ll be entertained as well. Oh yeah, & there’s also free food.

See you there!

Changing a File or Folder’s Label Color in the Finder with Keyboard Maestro

Apple has supported label colors for folders & files for years (although it now calls them Tags), & it’s a tremendously useful feature. What’s really nice is that you can use label colors in both the Finder & my favorite Finder replacement, Path Finder. Of course, one way to apply the colors is by right-clicking, but that’s often tedious. Keyboard Maestro to the rescue!


When I call up a Keyboard Maestro palette for the Finder, I simply press L to choose a Label color. Here’s the Keyboard Maestro macro:

Keyboard Maestro macro for choosing label color in Finder

It’s just two Keyboard Maestro Actions:

  1. Prompt for User Input
  2. Execute AppleScript

You can grab the AppleScript at
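If you just want the general shape of that script, here’s a minimal sketch (this is my illustration, not necessarily the exact script behind the link above: it sets every selected Finder item to a given label index, using Finder’s standard mapping of 0 = none through 7 = gray):

```applescript
-- Set the Finder label of every selected item.
-- Finder's label indexes: 0 = none, 1 = orange, 2 = red, 3 = yellow,
-- 4 = blue, 5 = purple, 6 = green, 7 = gray
tell application "Finder"
	repeat with anItem in (get selection)
		set label index of anItem to 2 -- 2 = red; change as needed
	end repeat
end tell
```

In the real macro, the Prompt for User Input action supplies the index instead of hard-coding it.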

Path Finder

Things are both easier & more complicated in Path Finder. Easier, because you don’t need the AppleScript; more complicated, because instead of one Keyboard Maestro macro, I have eight: one for each of the seven colors & one that sets the Label color to none.

When I call up my Path Finder palette using Keyboard Maestro, I see this at the end:

Keyboard Maestro palette for Path Finder showing Label colors

As you can see, I press 1 for Orange, 2 for Red, 3 for Yellow, & so on, through the colors, or I press 0 to set the Label color back to none.

Here’s the Keyboard Maestro macro for changing the Label color to Orange:

Change a Label color to Orange in Path Finder using Keyboard Maestro

It’s simple: it’s just selecting a menu item for me. The others are exactly like it, just with different menu choices, so I’m not going to reproduce them here.

Here’s the one for a Label color of None:

Change a Label color to None in Path Finder using Keyboard Maestro

Same process—just select a menu item.

I use these Keyboard Maestro macros constantly, because I find Label colors to be tremendously helpful. I hope you do too!

My favorite Mac OS X software of 2013, mostly aimed at power users

’Tis the season for nerds to create lists of the software they found most useful in the past year, so before the year closes, I wanted to get in on the fun. Keep in mind that this post focuses entirely on software for Mac OS X. I don’t go into iOS apps at all, not because I don’t use & enjoy iOS daily (some days, hourly!), but because I wanted to keep the focus on desktop apps that I actually use & enjoy (some more than others), & that make me more productive.

Before I start, I have to call out one program that is difficult to classify, as it falls into so many categories for me: Homebrew. I tried Fink, & I tried MacPorts, but neither works as well as Homebrew. The UNIX tools that Homebrew downloads & manages for me touch almost every category of software on my Mac: they work with automation tools like Keyboard Maestro, capture & manipulate images (thanks, ImageMagick!), automate essential tasks in DEVONthink Pro, & make Path Finder even better. It’s one of the first things I install on a new Mac, & if you have UNIX in your blood, it deserves a home on your Mac as well.

  1. Automation
  2. Web Development
  3. Writing & thinking
  4. Security
  5. Multimedia
  6. Utilities
  7. Transitions
  8. Disappointments
  9. Conclusion


Automation

I’ve been using these tools for a few years, & my reliance on them just kept on increasing this year. All of them are brilliant time-savers, & I can’t imagine using my Mac without them.

Web Development

Sublime Text is my main text editor, & I actually use it for far more than just coding. I write pretty much everything in Sublime Text (like this post, for instance!), & while I still think BBEdit is better in a few areas, Sublime Text works beautifully, has features that BBEdit still lacks (multiple cursors, anyone?), & is cross-platform, which helps me when I’m teaching courses at Washington University in St. Louis & Webster University. All that said, I’m giving the new lightweight editor Brackets serious consideration for my Spring courses. So far, I think it’s brilliant, & the students to whom I’ve shown it have responded enthusiastically. We may have a winner on our hands—I’ll let you know next year.

Dash has turned into a must-have. I was looking for a nice code snippet manager, which Dash is, & I also found something I didn’t know I was looking for: an offline documentation & API browser. Good grief, do I use this all the time. And more & more tools are hooking into Dash as well. Hop on board the Dash train!

Transmit is what I use to transfer files via SFTP & S3. It has a few annoyances that need fixes, but it’s still the best file transfer app I’ve found.

I don’t do a lot with graphics (that ain’t my area), but when I do, I use Acorn. It has the right amount of features & power for me, but I still know that there’s a million & one things I could do with it if I needed to.

Need a good color picker? Check out Hues. Cheap & full-featured.

Writing & thinking

I already mentioned Sublime Text, my main writing app. For keeping short notes about everything, though, I use nvALT.

The best journaling app on OS X is Day One. I don’t write in it every day, but I make sure it’s kept filled with the activity of my life by setting up Brett Terpstra’s amazing Slogger.

This year I got into DEVONthink Pro in a big way (for more, see “How to save a perfectly-scraped webpage into DEVONthink using IFTTT, Diffbot, Hazel, & several command line tools”). Now it’s where I store websites, PDFs, text files, & other files that I plan to use for teaching, web development, writing, or just long term storage. It’s a powerful & deep program, & it’s never let me down. I’m glad it’s in my arsenal.

Of course, sometimes I just want the text in an image & not the image itself. When that’s the case, I bust out Prizmo. The UI is a bit confuzzling, but it does an amazing job with OCR.

I own Scrivener, & I think it’s the best program of its kind (deep-featured writing apps for large, complex projects), but it’s not something I’ve had a need to use. The developer released a new program this year that I have found myself using whenever I have to plan a class or try to understand a knotty issue: Scapple. It’s super-simple mind-mapping that works for me.


Security

If I could only install 5 apps on a new Mac, one of them would be 1Password. It’s that essential, & that good.

David Ulevitch, the founder of OpenDNS, hails from my alma mater & employer, Washington University in St. Louis, & I’m just more impressed every year with his technical & business acumen. This year I started using DNSCrypt, a service from OpenDNS that helps ensure that (a) I’m always using OpenDNS, & (b) my DNS queries are encrypted. It’s free, & there’s absolutely no reason not to use it. So get it!

When it comes to my data, better safe than sorry. For pictures & lots of documents, I use Dropbox. For syncing data to & from my iPhone & iPad, I use Dropbox. For syncing app preferences & data, I use Dropbox. For sharing business documents, I use Dropbox. It’s awesome, & I can’t imagine not using it.

Like I just said, though, better safe than sorry. For an online backup solution, I rely on CrashPlan. Lately it’s been giving me errors that are seriously starting to anger me, but I’m going to be nice & assume that the friendly support I’ve received in the past will help fix things. The really great thing about CrashPlan is that it doesn’t force me to rely on its encryption keys, which would be next to useless. Instead, it lets me use my own keys, which is exactly what you should do.


Multimedia

RipIt. Rips DVDs quickly & accurately, so Handbrake can take over after that.

Handbrake. Still the best DVD encoding tool out there.

ImageOptim. The quickest & easiest way to reduce image file sizes.

iDentify 2. Not the prettiest UI, but the best way to add metadata to video files.

Filebot. Horrible UI, but the best thing I’ve found for renaming torrents that you download.


Utilities

Path Finder. Apple improved the Finder in Mavericks, but it’s still not as good as Path Finder. If my Mac is on, Path Finder is open. And by now, I have so many Keyboard Maestro macros for it that it works like magic.

PCKeyboardHack & KeyRemap4MacBook. This year I discovered the wondrous joys of the Hyper Key, & I am never, ever going back. Hyper Key, you complete me.

Bartender. Earlier versions of this app completely borked my MacBook Pro, but I kept waiting & hoping that they’d get it right, & I’m happy to report that they have. It works, it’s essential, & it’s brought sanity to my Mac’s menu bar.

Display Menu. If you use your Mac to give presentations, you need this app. It’s cheap, & it will make your life much easier. Trust me on this, & go get it.

MacUpdate Desktop. How else am I going to keep my non-Mac App Store software (that is to say, the vast majority of my software) up to date? $20 for 5 Macs is a steal.

Palua. In some programs, I want my F keys to act like F keys (as in Path Finder, where I use F5 to copy between panes constantly); in others, I want to use them to increase brightness or lower the volume or enable Mission Control. Palua lets me specify how my function keys work on a program-by-program basis. Set it & forget it.


Transitions

For years I’ve been using Chrome as my main web browser. As a result of Edward Snowden’s revelations, I resolved to reduce the amount of information that Google has about me. First, I moved from Chrome to Safari, & I’ve been very happy with my decision, especially once I used Keyboard Maestro to get Safari set up the way I like. I still keep Chrome around, of course, as well as Firefox & a few other web browsers for play & for testing, but Safari is what I use 99% of the time.

Next, I moved my personal email account from Gmail to Fastmail, & I couldn’t be happier. Again, Google has too much info on me, & I also grew disenchanted with Google’s attempts to “fix” the UI of Gmail in ways that made it far worse. Fastmail is fast (it better be with that name!), reasonably priced, has great service & support, & nicely supports both normal & power users. On top of that, it uses real IMAP, not some pseudo-IMAP-like protocol like Gmail does, & its webmail interface is fast, intuitive, & sports features that Gmail doesn’t have but should (pinning important emails at the top of the list, for instance)!

For Git, I started the year using Tower, which is a nice program, but it has one big annoyance: it doesn’t automatically check to see if the repos you follow have updates unless you jump through hoops for each repo. As a result, I switched to SourceTree, which has a more complex GUI but checks the status of my repos when I open the program (I like to keep an eye on a lot of repositories!). Bonus: it’s free & cross-platform.

Occasionally I have to edit an EPUB, & for the last few years I’ve had to fire up Sigil to do that job. I’ve never been happy with Sigil—it’s an ugly program that’s never a pleasure to use, but there really wasn’t anything else that was reasonably priced (Sigil was free, but I would have gladly paid for something decent). In a surprise move late this year, the amazingly prolific developer of the (non-native but still very usable) ebook manager Calibre added the ability to edit EPUBs to the already full-featured program, & wonder of wonders, even in beta, it’s a lot better than Sigil! I’ve donated money to Kovid Goyal, the developer of Calibre, before, & now I think I’m going to have to do so again.

For years I’ve thought that Google Calendar is the best calendaring program I’ve ever used, & I still think it’s pretty good. But over the last few months I’ve started using Fantastical on my Mac & iPhone (& even iPad), & it’s won me over. It looks great, & the natural language feature for adding new events just works. I like it a lot.

I’ve never liked Apple’s Mail program. It just felt bloated & clunky to me (kind of like Word feels when I have to use it). I tried lots of others—Thunderbird (OK), Postbox (an improved Thunderbird that isn’t bad, & was what I turned to for offline work, but which really isn’t supported any longer), Eudora (sad), Sparrow (nice while it lasted), AirMail (which never worked very well), & others that I can’t remember. Finally, since I used Gmail, I stuck with Mailplane, which is excellent if you use Gmail. When I switched from Gmail, I had to leave Mailplane behind. I’d tried all the others, but then I started hearing more about MailMate, & I’ll be damned if it didn’t win me over. This is an email program that’s proudly & unabashedly focused on nerds, & it delivers. It’s expensive, but it’s powerful & worth it.

“Why buy Adobe Acrobat?” I always tell people. “Preview does 99% of what you want!” This is true, but sometimes you need something with a bit more oomph, and when PDFpenPro was released this year in a new version with a temporary lower price, I jumped on it & so far have found it useful. Occasionally it hiccups when it runs across a weirdly-formatted PDF, but most of the time it allows me to read, annotate, fill in, & create PDFs to my heart’s content.

When I’m working, I like to have music playing. When I’m driving my son around in the car, I like to play music for him. After struggling for years trying to keep good tunes on my iPhone via syncing, I finally said goodbye to that mess & just started using Rdio. Now I’m sold. I use it everywhere, on all my devices, & I love it. And the fact that Rdio has a family plan that lets my wife listen to her own stuff for a modest extra fee sits right with us too. I tried Spotify, but I didn’t like the obsessive focus on playlists or the UI, two things that Rdio does right.

As a teacher & speaker, I give a lot of presentations every year, & I’ve been an enthusiastic user of Apple’s Keynote for a long time. When the new version of Keynote came out this fall, I had two quick reactions:

  1. Where the hell are my Smart Builds?
  2. This is really pretty nice!

I’m still sorry that Smart Builds were taken out, but I’ve learned to deal with their absence while I still hope for their eventual return. I ended up using the new Keynote for at least 25 different talks this fall, & my enjoyment of it has only grown. It’s better than the old Keynote ’09 in almost every conceivable way. And I’ll go so far as to say the same about Pages & Numbers—both, in my experience, are vastly improved over the old models.

Finally, I switched this blog’s backend from WordPress over to Octopress earlier this year. It’s been interesting, as it forced me to learn a lot about Ruby, something I’d never really used before. So far, I’m glad I made the move, but I will say this: it’s not for the faint of heart.


Disappointments

For years I’ve been a happy customer of Valve’s Steam service. I buy a lot of games for myself & for friends as gifts, & so far I’ve been completely satisfied with the software & the service. But on Christmas Day, Steam was down for most of the day, which was just amazingly incompetent. One of the, if not the, biggest gaming days of the year, & no one at Valve thought they should requisition more servers? And especially when they knew they were going to be giving away free copies of Left 4 Dead 2, one of the biggest games of recent years? And what made the whole thing worse was that there was absolutely no word on Steam’s official Twitter accounts, which was insult added to injury. Major fail by Valve.

After the Snowden revelations began, I determined to switch over to a more secure IM service than Microsoft’s Skype or Apple’s Messages (although, to be fair, it appears that Apple has been better about user privacy & security than Microsoft, which just rolled over & gave the NSA whatever it wanted & then some). I decided to try Adium again, but this time with OTR (Off-the-Record) enabled, which provides robust encryption for messaging. Ignoring the still-clunky UI, I discovered that OTR worked well (not great, but well) if everyone was using Adium.

Then I tried adding in some Windows users who have the Adium equivalent, called Pidgin—& keep in mind that Adium is essentially the Mac version of Pidgin. At their heart, they share the same codebase. But alas, for reasons known only to the developers, the OTR implementations in Adium & Pidgin are incompatible. After a few days trying to get it to work, I gave up, which means that we’re back to Skype & Messages until something better comes along (I’m keeping my eye on TextSecure). What a missed opportunity.


Conclusion

Overall, it was another great year for software. I enjoyed using some old favorites, & lots of new tools entered my collection. The three best parts of using Mac OS X are the high quality native software, the powerful automation tools that are available, & the UNIX underpinnings that let me do pretty much anything I want. As long as that’s the case, I’ll remain sold on OS X as a platform for years to come.

Automatically grab screenshots of web pages, sized perfectly for the viewport

I’m a teacher & speaker, so I give a lot of presentations. I learned a long time ago that a picture is often much more effective than words, & since I’m often talking about websites & web services, I end up inserting a lot of screenshots of webpages into my presentations. However, this has traditionally been a time-consuming & tedious process. Why? Because…

  • I want the image to fill the entire slide so that it fills the entire screen when I’m presenting.
  • My slides are sized for 1024×768, so the image needs to be exactly that big (since I’m using a MacBook Pro with Retina Display, they actually end up being double that: 2048×1536).
  • When taking pictures of webpages, 99% of the time I only want the actual webpage itself—what is in the viewport—& not the browser chrome1 around it.

Web browser viewport
The web browser viewport displays the actual webpage.

That’s pretty specific. It’s easy to take a screenshot on your Mac—just use Command+Shift+4, then press the Spacebar to focus on a window, & then click—but it’s far more difficult to meet the needs outlined above. The problem with the screenshot method is that it makes it easy to focus on a window, but not the viewport. And even then, how do I make sure that the viewport is sized to 1024×768? And on top of that, I still have to manually crop the viewport out of the image. Ugh.

So here’s my solution, which is working wonderfully. To use it, you’ll need the following:


Paul Hammond, the creator of webkit2png, describes it as follows:

webkit2png is a command line tool that creates screenshots of webpages.

It’s easy to install with Homebrew:

$ brew install webkit2png

The options that you need to know are:

  • -W 1024 (or --width=1024)
    The width of the resulting image.
  • -H 768 (or --height=768)
    The height of the resulting image, but keep in mind that this is ignored if the webpage is taller than the number you specify. As Paul Hammond puts it: “With tall or wide pages that would normally require scrolling, it takes screenshots of the whole webpage, not just the area that would be visible in a browser window.” This is fine, as you’ll see—and actually, I like having the whole page available in an image, just in case I want to use more than the first visible part in the viewport.
  • -F (or --fullsize)
    Just get the fullsize grab, without also creating a thumbnail. If for some reason you also wanted a thumbnail, you’d include -T (or --thumb) here.
  • -d (or --datestamp)
    Include the date, formatted as YYYYMMDD, in the filename.
  • -D /path/to/directory (or --dir=/path/to/directory)
    Specify the directory in which images are saved, instead of the current working directory.

So, if I wanted to grab a screenshot of my blog, I’d use this:

$ webkit2png -W 1024 -H 768 -F -d -D /Users/scott/Desktop

And the result would look like this (obviously shrunk way down—I don’t want to get too meta here!—& cropped, otherwise it’s 30,000 pixels tall2):

Chainsaw on a Tire Swing, captured by webkit2png


ImageMagick is one of the most useful & most confusing programs in the UNIX world. It’s amazingly powerful, but along with that power comes a bewildering array of programs (ImageMagick is actually several programs), options, & features. Every time I want to do something that I know ImageMagick can do, I end up spending about a half hour figuring out how exactly to do it.

What we want to do is crop the image that webkit2png grabbed for us. To do this, you first use the identify command to figure out how wide the image is. Why? Because we’re going to be cropping programmatically: if the image is at least 2048 pixels wide, we ultimately need to crop it to 2048×1536, but if it’s narrower than that, we need to crop it to 1024×768. Trust me—it works.

Running the identify command on an image with the -format %w option (%w means width) gives me what I want:

$ identify -format %w 20131128-wwwchainsawonatireswingcom-full.png
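That width-based decision is just a single branch; here’s a minimal sketch of it as a standalone shell function (the function name is my own, purely for illustration):

```shell
# Choose a crop geometry based on image width: Retina-sized grabs
# (2048 pixels or wider) get the 2048x1536 crop; narrower ones get 1024x768.
crop_geometry() {
  local width="$1"
  if [ "$width" -ge 2048 ]; then
    echo "2048x1536+0+0"
  else
    echo "1024x768+0+0"
  fi
}

crop_geometry 2560   # prints 2048x1536+0+0
crop_geometry 1024   # prints 1024x768+0+0
```

The Keyboard Maestro macro described later performs the equivalent branch before calling convert.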

The actual cropping is done with the convert command, another part of ImageMagick. The key option we need is (big surprise!) -crop. To use the option, you specify the following:

  • width
  • height
  • x coordinate for the top left corner of the crop
  • y coordinate for the top left corner of the crop

I want the image to be 2048×1536, & I want the very top of the image, so I want the top left corner of the crop to match the top left corner of the original image, which would mean an x coordinate of 0 and a y coordinate of 0. So my option looks like this: -crop 2048x1536+0+0.

So, to crop the image that webkit2png grabbed, I’d use convert like this:

$ convert 20131128-wwwchainsawonatireswingcom-full.png -crop 2048x1536+0+0 20131128-wwwchainsawonatireswingcom-cropped.png

First the convert command, then the file name of the full-size image I’m cropping, then the -crop option & its details, & then the name of the resulting cropped image. The results (shrunk way down, obviously):

Chainsaw on a Tire Swing, cropped

OK, now let’s automate everything with the always-awesome Keyboard Maestro!

Keyboard Maestro

I use Keyboard Maestro palettes a lot, & Safari is my default browser, so the following is a macro for the Safari palette. However, it would work just as well with any browser that supports Command+L to focus the address bar (which is all of them, to my knowledge).

Here’s the whole macro, & then I’ll walk through the components:

Keyboard Maestro macro for perfectly-sized screenshots of webpages

I use /bin/date +%Y-%m-%d to generate a date in the format of YYYY-MM-DD, because I want to include the date in the final, cropped filename, & that’s how I like it formatted.

I use /bin/date +%Y%m%d to generate a date in the format of YYYYMMDD, because that’s the format that webkit2png uses when it creates the original image it grabs, which I need to match later.

I then type Command+L to select the address bar, & Command+C to copy the address, which is then saved as a variable named URL.

I then grab a screenshot of the webpage using webkit2png, explained above. Since I’m using the bash shell, I have to reference the Keyboard Maestro variable as $KMVAR_URL; in other words, I have to insert $KMVAR_ in front of the variable name.

I now have a screenshot of the webpage, but it’s almost certainly way too tall, so I need to crop it. Before doing that, I need to generate the filename I want the final cropped image to have so that I can use it with the convert command.

To do this, I use regex in two search & replace operations. The first—^https?://—removes either http:// or https:// from the URL variable. This needs to be done because of the names webkit2png gives the files it creates: the URL of one of my posts, for example, became the filename 20131128-wwwchainsawonatireswingcom20130614yepthingsaredifferent-full.png. To match that, I need to remove the protocol from the beginning.

You’ll notice that the file name created by webkit2png also strips out other punctuation from the URL as well. To match that, we use the second regex—[-/.:+=?]*—which looks for all instances of those characters & removes them.3
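If you’d rather experiment with that munging outside of Keyboard Maestro, the same two transformations can be sketched with sed (the URL here is made up for illustration):

```shell
# Mimic webkit2png's filename munging: strip the protocol, then
# remove the punctuation characters from what's left.
url="http://www.example.com/2013/11/28/some-post/"
slug=$(printf '%s' "$url" | sed -E 's|^https?://||; s|[-/.:+=?]||g')
echo "$slug"   # prints wwwexamplecom20131128somepost
```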

It’s time to use the identify command to find out the width of the image that webkit2png grabbed. The result is stored in another Keyboard Maestro variable: Image Width.

Finally, we get to the real meat & potatoes: a quick shell script that uses the convert command to crop the image that webkit2png created. You will need to change the user name in the path, unless your name is Scott!

if [ "$KMVAR_Image_Width" -ge "2048" ] ; then
  /usr/local/bin/convert /Users/scott/Desktop/"$KMVAR_DateYMD-$KMVAR_URL-full.png" -crop 2048x1536+0+0 /Users/scott/Desktop/"$KMVAR_URL - $KMVAR_DateY_M_D".png
else
  /usr/local/bin/convert /Users/scott/Desktop/"$KMVAR_DateYMD-$KMVAR_URL-full.png" -crop 1024x768+0+0 /Users/scott/Desktop/"$KMVAR_URL - $KMVAR_DateY_M_D".png
fi

Again, the Keyboard Maestro variable is actually named Image Width, but since we’re using it in a shell script, we have to reference it as $KMVAR_Image_Width. The same is true for $KMVAR_DateYMD, $KMVAR_URL, & $KMVAR_DateY_M_D.

The file that is being cropped—the one that was generated by webkit2png—is named something like 20131128-foobarbazquxcorgegrault-full.png, but the cropped file will be named foobarbazquxcorgegrault-2013-11-28.png.

If something doesn’t work, you should see an error message from Keyboard Maestro. Usually my regex missed a character in the file name, so Keyboard Maestro can’t match the file name, which obviously generates an error. Compare the webkit2png-generated file name on your Desktop to the one in the Keyboard Maestro error message & you’ll quickly see the mismatch. After that, simply edit the regex & you should be good to go.

I use the system I’ve outlined here almost every day, & it’s really been an amazing time saver. When I realize that I want to capture a webpage I’m viewing in Safari, I simply press Hyper+` (that’s the Hyper key & a backtick) to bring up my Keyboard Maestro palette for Safari & then press S. About 10 seconds later, if that, I get a notification that there is an image waiting for me on my Desktop. It’s the perfect size for my slides, & it’s even dated so I know when it was taken, & a few seconds later, it’s in a Keynote slide & I can move on to the next one. It’s fast, it’s easy, it’s automated, & it’s awesome. Enjoy.

  1. Now the name of Google’s web browser makes sense as a cute little pun, doesn’t it?

  2. The maximum height & width captured by webkit2png is 30,000 pixels. To paraphrase Bill Gates, that should be enough for anybody.

  3. I’ve been adding these as failures occur, so I might have missed one. If I did, please let me know.

Embedding PowerPoint presentations into webpages

I received the following email from a co-worker the other day:

I am building a web application for a client that facilitates training and she wants to be able to upload her powerpoints but does not want the viewers to be able to download them and change them. they have audio in them (some). What would you suggest?

Here’s my reply:

Is it important that students be able to choose when to advance through the slides? If not, is it possible to export as a video? Then the students press Play & just watch.

If students must be able to choose when to advance, then I have a few ideas.

  1. Import into Google Docs & set Sharing to View Only (you can also set it to prevent downloading too, but gosh I hate that). Then you can get a link from Google to embed the presentation in a webpage & viewers can use the Left & Right arrows to advance through the slides. No idea about embedding audio in that, though. I’d be surprised if it works.

  2. Import into your Microsoft SkyDrive & then Share the presentation, setting it to read-only. I have no idea how well this works or what it looks like, but it’s a possibility. Again, I’m not sure re: audio. After doing a quick search, I found these, which could help: &

  3. Apple’s iCloud allows you to import PowerPoint files into the online Keynote app (or just create the presentation in the online app, or using Keynote on your iPad or Mac, & have it all synced—it’s slick as hell) & then share them. The problem is that at this time all viewers can edit too, as they haven’t added a View Only mode yet (they will, I’m sure, but the web apps are beta).

  4. This could also be exactly what you want: Not sure re: audio (I’ll be surprised if ANY solution besides a movie preserves audio). Other, similar services include & (I find many of these services to be obnoxious in their requirements for visitors, but that could just be me).

Those are the ones I came up with. Hope that helps!

Presentations on Web Design

My business partner Jans & I recently taught a 2-week, 4-night, 12-hour course on Web Design1 for CAIT, the Center for the Application of Information Technology at Washington University in St. Louis. It was a great course that was broken down into 9 sections:

  1. Design Patterns
  2. Design Theory: The Vitruvian Triad
  3. Design Principles
  4. The Design Process
  5. Design Structure: Design Patterns in Action
  6. Multimedia: Images, Audio, Video
  7. Color
  8. Fonts & Formatting
  9. Our Toolkit: What WebSanity Uses

I’m happy to share the slides from the course with everyone. They’re under a Creative Commons Attribution-ShareAlike 3.0 Unported License, with the specific terms available on my website.

They’re available, so go nuts with ’em!

  1. I’m quite happy to announce that the next time we teach it, it’s going to be a 3-week, 6-night, 18-hour course instead. We’re scheduled for July, so if you’re interested, contact CAIT & sign up.

How to save a perfectly-scraped webpage into DEVONthink using IFTTT, Diffbot, Hazel, & several command line tools

DEVONthink is a key piece of software for me on my Mac. In particular, I use it to store copies of webpages that I run across that I want students to read or that I want to refer back to for teaching, or for writing, or for my own use. Now, it’s very easy to get webpages into DEVONthink by using the browser extensions that come with the software. You click on the extension, & you get a small window:

Clip to DEVONthink browser extension

See the Format menu? When you click it, you get several choices:

Clip to DEVONthink browser extension formats

This is great, as is the checkbox for Instapaper, which runs the webpage through that awesome service & gives you results with just the featured content & none of the crap. But even with Instapaper, these results are not perfect, at least for me.

Here’s my problem: I want a webpage so that I can see images & hyperlinks & other stuff that only comes with the Web. I like PDFs, but not when I can just have good ol’ HTML to deal with. But if I choose the HTML Page or Web Archive options, then I get a bunch of junk I don’t want, like ads & extraneous content. If I check the box next to Instapaper, I get less junk, but I lose a lot of control over what gets selected & what doesn’t get selected, & the original URL of the webpage, along with a lot of other important metadata, gets stripped away by Instapaper. In other words, I want this:

Webpage article in DEVONthink

See? Neat & clean, with the title of the Web article at the top as an H1, & then the author, date of publication, & URL below, all H2s in the HTML, & finally the content & nothing else.

Yes, I know this is picky, but it’s what I really want. So I set out to create it over several months, & I finally got it all figured out & set up & working this summer. After testing it for months to verify that it works well, I am now ready to unveil this process to you, the readers of Chainsaw on a Tire Swing.

Before I dig into the details, let me give you the 20,000-foot summary of the process. It might seem complicated, & I guess it kinda is, but it’s not that bad if you go through it step by step, & it does work beautifully. I’m going to mention several services in this introduction that you might not have heard of. Don’t worry; I’ll explain everything below.

  1. Send an email to the IFTTT (If This Then That) service which contains the URL of the webpage at the top of the message.
  2. IFTTT saves the email as a file in a specific folder in your Dropbox.
  3. Hazel on your Mac notices the new file in the folder & runs a shell script.
  4. The shell script grabs the URL out of the file & sends a request to the Diffbot service, which saves the result to the /tmp directory as a webpage.
  5. The shell script converts that resulting webpage to a .webarchive file & saves it to DEVONthink’s Inbox folder, where it is automatically imported into DEVONthink.

Got that? OK, let’s set it all up!

  1. Diffbot
  2. Dropbox
  3. IFTTT
  4. Needed command line software
  5. The shell script
  6. The sed file
  7. Hazel
  8. Test


Diffbot

I love Diffbot. I really do. It’s the best service of its type I’ve seen, the price is right (free for the 1st 10,000 requests each month!), & the support I’ve received when I’ve had questions or issues has been top-notch. So what’s it do?

Simple. It’s a scraper: you send a request to Diffbot using its API, you get back the data from a webpage, shorn of all the junk. It’s like Safari’s Reader feature, but available programmatically. Here’s an example.

First, a blog post at The Atlantic’s website, as it appears in a browser:

Atlantic post before being run through Diffbot

Next, the same post after it’s been passed through Diffbot & brought into DEVONthink:

Atlantic post after being run through Diffbot

A bit cleaner, eh?1

So, here’s what you need to do: go to Diffbot’s website, create an account, find out your Diffbot Developer Token (you’ll need it for the shell script), & then come back here.


Dropbox

You don’t have to create these folders exactly where I specify, but if you change their locations, you’re going to need to edit the shell script that’s coming up.

Create a folder at the root of your Dropbox named Incoming. Inside the Incoming folder, create another folder named DEVONthink. Your folder structure should therefore look like this: ~/Dropbox/Incoming/DEVONthink
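If you live in the Terminal, the same structure can be made in one step (mkdir -p creates any missing intermediate folders & is harmless if they already exist):

```shell
# Create the nested folders that Dropbox & the shell script will use
mkdir -p ~/Dropbox/Incoming/DEVONthink
```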


IFTTT

If you don’t already have an account with If This Then That (IFTTT), go get yourself one! It’s a free service that lets you tie together online services so that when one event happens at one service, then something happens as a response. For example, every time you post a picture to Facebook, a copy is placed in a Dropbox folder, or every time a particular RSS feed is published, it’s scanned by IFTTT, & if certain words are in the title, that post is emailed to you. It’s such a great service that I’d pay for it if I had to.

To use it with my process here, create an account at IFTTT if you don’t already have one, log in to IFTTT, & activate the Dropbox & Email channels.

Now go to My Recipes & click Create A Recipe. Here’s what you’re going to fill in:

  • Description: App emails IFTTT a URL, which gets saved as a text file
  • Trigger: Send IFTTT an email from your email address with a tag of dt (for DEVONthink, get it?).
  • Action: Create a text file in Dropbox
    • Dropbox folder path: Incoming/DEVONthink
    • File name: Subject
    • Content: Body

Save it, & you’re good to go.

So here’s what happens: you find a webpage that you want to capture in DEVONthink. You email the link to yourself, with the URL as the first line of the body of the email (you can have other stuff in the email, like your signature, but it will be ignored by the upcoming shell script). As for the subject, it really doesn’t matter—it can be words, it can be a URL as well, it can be nothing—as long as you have #dt in it (I always put it at the end because that’s easy).

When the email arrives at IFTTT, it is saved as a text file in the specified Dropbox folder. The subject of your email becomes the name of the file, & the body of your email becomes the contents of the file.
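To make that concrete, here’s a hypothetical example of the file IFTTT ends up writing (the file name & URL are made up); note that only the first line matters to the upcoming shell script:

```shell
# Simulate the text file IFTTT drops into ~/Dropbox/Incoming/DEVONthink:
# the email subject became the file name, the body became the contents
printf 'http://example.com/some-article\n--\nScott\n' > '/tmp/Interesting article #dt.txt'
# The shell script below reads only the first line to get the URL
head -n 1 '/tmp/Interesting article #dt.txt'
# prints http://example.com/some-article
```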

We now have a place in Dropbox for incoming text files containing URLs that we want to use, & a method for getting those text files into Dropbox: emailing IFTTT. But what do we do with those text files once they’re in there? Time for some shell scripting!

Needed command line software

The shell script I’m going to provide has several requirements:

  • gecho (the GNU version of echo)
  • gsed (the GNU version of sed)
  • dos2unix (converts text files between Windows & UNIX/Mac OS X formats)
  • jsonpp (prettifies JSON files)
  • terminal-notifier (sends Mac OS X notifications)
  • webarchiver (creates Safari .webarchive files)

All of those but one are available through Homebrew, so if you haven’t already installed that, you’ll need to do so.

Once you have Homebrew up & running, run this command (it’s not obvious, but coreutils takes care of gecho—& a whole lot more besides):

$ brew install coreutils gnu-sed dos2unix jsonpp terminal-notifier

If you use MacPorts (who uses that anymore?), you can download webarchiver pretty easily, according to the developer:

$ sudo port install webarchiver

I don’t use MacPorts, so I have no idea how effective this is. Instead, you’re going to have to download the code & compile it using Xcode.

I went to the GitHub page for the webarchiver project, got a copy of the code (don’t download the release, as that’s 0.3, which is ancient & won’t compile on newer Macs; instead, get the latest code, which is version 0.5), & double-clicked on webarchiver.xcodeproj to open the project in Xcode. Once in Xcode, I went to Product > Build, which successfully compiled the code, leaving the binary in /Users/scott/Library/Developer/Xcode/DerivedData/webarchiver-dreeepqxmdlkgieggztknlbwsula/Build/Products/Debug/webarchiver. Obviously, your path under DerivedData will be different2. I then moved the webarchiver binary to /usr/bin.

Once you’ve moved webarchiver to its new home, test it:

$ webarchiver
webarchiver 0.5
Usage: webarchiver -url URL -output FILE
Example: webarchiver -url -output google.webarchive
-url  http:// or path to local file
-output File to write webarchive to

Updates can be found at

If you see that output, you’re good to go.

The shell script

Place the shell script you see below in your ~/bin directory. I named it (you can use your own, but if you change the name, you’ll need to also change the instructions for Hazel that are coming up). I’ve commented the heck out of it, so I hope that helps explain what each step is doing.


#!/bin/bash
#          FILE:
#         USAGE:  Automatic with Hazel
#   DESCRIPTION:  Uses Diffbot to download essential info about an article
#                 & webarchiver to convert it to a .webarchive file
#        AUTHOR:  Scott Granneman (RSG),
#       COMPANY:  Chainsaw On A Tire Swing
#       VERSION:  0.4
#       CREATED:  06/22/2013 13:50:23 CDT
#      REVISION:  11/17/2013 15:20:43 CDT 

### Variables

# Change these paths to match your setup
incoming_dir="/Users/scott/Dropbox/Incoming/DEVONthink"
devonthink_dir="/Users/scott/Library/Application Support/DEVONthink Pro 2/Inbox"
fail_safe_dir="/Users/scott/Desktop"
# Enter your Diffbot Developer Token here
diffbot_token=""

### Grab webpages

# Test to see if the necessary directories exist
if [ -e "$incoming_dir" ] && [ -e "$devonthink_dir" ] ; then
  # Set IFS to split on newlines, not spaces, but first save old IFS
  # See
  OLDIFS=$IFS
  IFS=$'\n'
  # If you can cd to the Incoming/DEVONthink directory, run everything else
  if cd "$incoming_dir" ; then
    # For every file containing a URL in the Incoming/DEVONthink directory
    for i in $(ls *) ; do
      # If it’s not empty, process it;
      # if it IS empty, move it so Diffbot doesn’t keep trying forever
      if [[ -s $i ]] ; then
        # Check if it’s a Windows-formatted file; if it is, convert it to UNIX
        if [ "$(grep -c $'\r$' "$i")" -gt 0 ] ; then
          terminal-notifier -message "$i is a Windows file, so convert it" -title "Windows File Found"
          /usr/local/bin/dos2unix "$i"
        fi
        # Delete any blank lines
        # Note: will only work with UNIX line endings, hence the previous conversion
        /usr/local/bin/gsed '/^$/d' "$i" > "$i".out
        mv "$i".out "$i"
        # Read the file to get the URL
        # I use head instead of cat because the file usually comes in via email,
        # & I’m too lazy when composing to leave off my email sig
        url=$(head -n 1 "$i")
        /usr/local/bin/gecho -e "\nURL in the file is $url"
        # URL encode the, uh, URL
        encoded_url=$(python -c "import sys, urllib as ul; print ul.quote_plus(sys.argv[1])" "$url")
        /usr/local/bin/gecho -e "\nEncoded URL is $encoded_url"
        # Grab JSON-formatted article & data from Diffbot, 
        # clean up JSON, & write results to file
        if curl "$diffbot_token&url=$encoded_url&html&timeout=20000" | /usr/local/bin/jsonpp > /tmp/results.json ; then
          # Pull out article’s name
          article_title=$(grep -m 1 '"title":' /tmp/results.json | /usr/local/bin/gsed 's/  "title": "//' | /usr/local/bin/gsed 's/",$//' | /usr/local/bin/gsed 's:\\\/:-:g' | /usr/local/bin/gsed 's://:-:g' | /usr/local/bin/gsed 's/\\"/"/g' | /usr/local/bin/gsed -f /Users/scott/bin/conv_to_webarchive.sed)
          /usr/local/bin/gecho -e "\nArticle Title is $article_title"
          # If $article_title is empty, move the file so Diffbot doesn’t keep trying forever;
          # if it’s not empty, continue processing it
          if [[ -z $article_title ]] ; then
            mv "$i" "$fail_safe_dir"
            terminal-notifier -message "Diffbot could not parse title in $i" -title "Problem with Diffbot"
          else
            # If results.json can be renamed, continue processing;
            # if it can’t be renamed, move it!
            if mv /tmp/results.json /tmp/"$article_title".json ; then
              # Pull out article’s other metadata
              article_author=$(grep -m 1 '"author":' /tmp/"$article_title".json | /usr/local/bin/gsed 's/  "author": "//' | /usr/local/bin/gsed 's/",$//' | /usr/local/bin/gsed -f /Users/scott/bin/conv_to_webarchive.sed)
              /usr/local/bin/gecho -e "\nArticle Author is $article_author"
              article_date=$(grep '"date":' /tmp/"$article_title".json | /usr/local/bin/gsed 's/  "date": "//' | /usr/local/bin/gsed 's/",$//' | /usr/local/bin/gsed -f /Users/scott/bin/conv_to_webarchive.sed)
              /usr/local/bin/gecho -e "\nArticle Date is $article_date"
              article_url=$(grep '"url":' /tmp/"$article_title".json | /usr/local/bin/gsed 's/  "url": "//' | /usr/local/bin/gsed 's/",$//' | /usr/local/bin/gsed 's/\\//g' | /usr/local/bin/gsed 's/"$//')
              /usr/local/bin/gecho -e "\nArticle URL is $article_url"
              # Write HTML to file
              # Remove JSON stuff, fix Unicode, then remove \n, \t, & \
              grep '"html":' /tmp/"$article_title".json | /usr/local/bin/gsed 's/  "html": "//' | /usr/local/bin/gsed 's/",$//' | /usr/local/bin/gsed -f /Users/scott/bin/conv_to_webarchive.sed | /usr/local/bin/gsed 's/\\n//g' | /usr/local/bin/gsed 's/\\t//g' | /usr/local/bin/gsed 's/\\//g' > /tmp/"$article_title".html
              # Prepend metadata to file
              /usr/local/bin/gsed "1i <h1>$article_title</h1>\n<h2>$article_author</h2>\n<h2>$article_date</h2>\n<h2>$article_url</h2>\n" /tmp/"$article_title".html > /tmp/"$article_title"_1.html && mv /tmp/"$article_title"_1.html /tmp/"$article_title".html
              # Prepend HTML metadata to file
              /usr/local/bin/gsed "1i <!DOCTYPE html>\n<html>\n<head>\n<meta charset=\"UTF-8\">\n<title>$article_title</title>\n</head>\n<body>\n" /tmp/"$article_title".html > /tmp/"$article_title"_1.html && mv /tmp/"$article_title"_1.html /tmp/"$article_title".html
              # Append HTML metadata to file
              echo "</body></html>" >> /tmp/"$article_title".html
              # Using the webarchiver tool I downloaded & compiled, create a webarchive
              if webarchiver -url /tmp/"$article_title".html -output "$devonthink_dir/$article_title".webarchive ; then
                # If it works, then delete the file
                rm "$i"
              else
                # Couldn’t create a webarchive
                terminal-notifier -message "No webarchive for $i" -title "Problem creating webarchive"
              fi
            else
              # If results.json can’t be renamed, move it!
              mv "$i" "$fail_safe_dir"
            fi
          fi
        else
          # If Diffbot fails, move it!
          mv "$i" "$fail_safe_dir"
          terminal-notifier -message "Could not Diffbot $i" -title "Problem with Diffbot"
        fi
      else
        # If it’s empty, move it!
        terminal-notifier -message "$i is empty!" -title "Problem with parsing file"
        mv "$i" "$fail_safe_dir"
      fi
    done
  else
    # Needed directory isn’t there, which is weird
    /usr/local/bin/gecho -e "\nIncoming DEVONthink directory is missing!" >> "$fail_safe_dir/DEVONthink Problem.txt"
  fi
  # Restore IFS so it’s back to splitting on <space><tab><newline>
  IFS=$OLDIFS
else
  # Needed directories aren’t there, which is very bad
  /usr/local/bin/gecho -e "\nIncoming or DEVONthink directories are missing!" >> "$fail_safe_dir/DEVONthink Problem.txt"
fi

exit 0

Note the following about the script:

  • Make sure the variables are correct for your setup.

  • In particular, you’ll need to enter your Diffbot Developer Token for diffbot_token.

  • The paths that start with /Users/ are all for my Mac. You’ll need to change them for yours.

  • You might notice that I URL-encode the URL in the middle of the script; in other words, characters like : & / get turned into their percent-encoded equivalents (%3A & %2F). This is what Diffbot wants, so it is what Diffbot gets.
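Here’s that encoding step as a one-liner you can try yourself, shown with Python 3 syntax (the script calls the older system Python 2) & a made-up URL:

```shell
# Percent-encode a URL so it can ride safely inside another URL’s query string
python3 -c "import sys, urllib.parse as ul; print(ul.quote_plus(sys.argv[1]))" "http://example.com/some article"
# prints http%3A%2F%2Fexample.com%2Fsome+article
```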

  • It’s pretty easy to test and make sure you’re getting the right results from Diffbot. Just use the line from the script: curl "$diffbot_token&url=$encoded_url&html&timeout=20000" | /usr/local/bin/jsonpp, but put in your Diffbot Developer Token instead of $diffbot_token & the encoded URL you want to test instead of $encoded_url. By piping the output to jsonpp, you get readable results.

  • Yes, I use sed (actually gsed) a lot. I refer to a file named conv_to_webarchive.sed a few times. That file is detailed in the next section.

  • You don’t need the lines with gecho, but I found them very useful while I was developing & testing the script, & they don’t do any harm, so I left them. If they bother you, take ’em out.

  • Notice the lines that say mv $i $fail_safe_dir. All are fail-safes in case files can’t be renamed or parsed. This became critical when I did not have them in place, & one night a file got stuck trying Diffbot repeatedly, so that I racked up 15,000 queries or so in just a few hours. Fortunately, I shamefacedly explained what happened to Diffbot support, & they very kindly forgave me. And then I immediately put in place those fail-safes, as I should have from the beginning. So if you see files on your desktop, look in them, as they indicate problems that you need to fix by hand.

The sed file

In my shell script I refer to conv_to_webarchive.sed a number of times. If you don’t know what sed is, it’s basically a way to edit files programmatically from the command line. It’s also very cool & does a million things, most of which I know nothing about (although I’d love to learn!).

Here’s the contents of conv_to_webarchive.sed:


I have built this file up over time, as I have found errors in the results generated by Diffbot & the other programs. Basically, Diffbot sticks the escaped encoding for a character in the results, & I want the actual character itself. So, for instance, instead of an ellipsis, I saw \u2026 in the file; my sed file turns \u2026 back into a real … so that it’s readable. As I discover more, I’ll add to the file.
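To give you the flavor, a minimal, hypothetical version of the file covering just that ellipsis case would hold a single rule (tested here with plain sed; on the Mac the script runs it through gsed):

```shell
# One substitution per line; this rule turns the escaped sequence
# \u2026 back into a real ellipsis character
printf 's/\\\\u2026/…/g\n' > /tmp/conv_sample.sed
printf 'Wait for it\\u2026done\n' | sed -f /tmp/conv_sample.sed
# prints Wait for it…done
```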


Hazel

So we have a shell script that processes files, but how do we tell the shell script to run? Enter Hazel. Basically, Hazel watches the folders you tell it to watch, & when something changes in those folders, Hazel processes the files according to the rules you specify.

In this case, we’re going to tell Hazel to watch the Incoming/DEVONthink folder, & when a file is placed inside, the shell script detailed above should run, processing the file. Lather, rinse, repeat.

Open Hazel. On the Folders tab, press the + under Folders to add a new folder. Select ~/Dropbox/Incoming/DEVONthink.

Under Rules, press the + to add a new rule. A sheet will open, named for the folder; in this case, DEVONthink.

For a Name, I chose Convert URL to DEVONthink webarchive.

Now you need to make selections so that the following instructions are mirrored in Hazel.

If all of the following conditions are met:

  • Extension
  • is
  • txt

Do the following:

  • Run shell script
  • Choose Other… & select ~/bin/

Press OK to close the sheet, & then close Hazel.


Test

To test your work, email a URL to IFTTT with #dt in the subject. A few seconds later, you should see a new entry appear in your DEVONthink Inbox, stripped of extraneous formatting & info thanks to Diffbot.

In subsequent posts, I’m going to tell you how to automate emailing that URL using Keyboard Maestro on the Mac & Mr. Reader & other apps on your iOS devices. I could do it here, but this post is long enough already! And even without that info, I still find this process I’ve detailed here to be incredibly useful, so much so that I use it at least 10 times a day to save webpages into DEVONthink. I hope you find it useful too!

  1. Now, Diffbot normally does a great job, but not always. In those cases, you need to go to the Custom API Toolkit & tell Diffbot what to do, based on CSS selectors. I’ve been collecting materials for a long post about this that I’ll put up on this site later.

  2. Note that this is maybe the second time in my life I’ve messed around with Xcode, so if there’s a better way to do what I did, please let me know.

An overview of RSS services & apps

I’m teaching my Social Media course (AKA From Blogs to Wikis) at Washington University in St. Louis this semester, & one of our topics is RSS. I wrote the following about RSS services & apps for my students, but I wanted to share it here as well, since I thought others might find it useful.

This used to be an easy one: if you wanted to follow RSS feeds, use Google Reader. During the summer of 2013, however, Google shut down Reader, which actually turned into a good thing, as it broke the stranglehold Google had on RSS & allowed a thousand RSS flowers to bloom, so to speak.

Let’s be clear about what Google Reader was, exactly. Google Reader actually performed two related services:

  1. It was a website that made it easy to follow & read RSS feeds.
  2. It was a syncing service that other websites & apps could use.

That second one requires a bit more explanation. Google Reader made it relatively easy for other apps to use it as a backend syncing service, which allowed users to pick & choose among RSS apps. In my case, I used Google Reader via its website when I was at my laptop. When I was on my iPhone, however, I used a program called Reeder that synced with Google Reader, & when I was on my iPad, I used a program called Mr. Reader that also synced with Google Reader.

If I marked a post or feed as read in Mr. Reader, that program notified Google Reader that it was read, & then, when I opened one of the other apps or looked at the Google Reader website, that post or feed was gone. Likewise, if I starred a post on the Google Reader website, when I opened up Reeder or Mr. Reader, the post would be starred there as well.

Pretty much every RSS app & website used Google Reader as a syncing service. When Google Reader shut down, it wasn’t just that a website for reading RSS feeds was going away—more importantly (& worse!), the backend syncing service used everywhere was going away too!

When Google Reader shut down, the following happened in short order:

  • A few RSS reader apps just decided to call it quits & shut down.

  • Some RSS reader apps announced that they would start offering syncing services to replace those provided by Google Reader, & that those syncing services would be open to any other RSS reader app that wanted to use them (in most cases for a fee; see further down in this list for more info). This meant, of course, that other apps that wanted to use those new syncing services would have to be reprogrammed, in some cases drastically.

  • Most RSS reader apps said that in addition to supporting some new syncing services, they would also support simply subscribing to RSS feeds within the apps, without using a syncing service on the back end. Since there would be no syncing of those feeds, the info about the status of the feeds & their posts would reside solely in the apps. If you only used one app on one device to read your RSS feeds, this might work just fine, but for most people, who want the ability to read synced feeds on different devices, this wasn’t very handy at all.

  • Many syncing services announced that they would be charging a small fee for their use. One of the reasons that Google killed Reader was that the company never charged for the service, & never made any attempt to monetize it, so there was little financial incentive to keep it going. By charging users, the new RSS services hoped to reassure customers that they would not disappear & would also be able to ramp up the infrastructure necessary to handle a large number of RSS subscribers, requests, & apps.

With that in mind—that a syncing service is just as important as an app that uses that syncing service—let me go through some of the various services & apps that have sprung up in the wake of Google Reader’s demise.


Syncing services

I can’t go through each & every RSS syncing service, so I’ll just cover the important or interesting ones here.

Each of the following syncing services also offers a web-based RSS reader. In addition, several offer apps that you can install on your mobile devices. If a fee is associated with the service, I’ll mention that too.

Feedbin is the service I’ve chosen to use for syncing my feeds. It’s $3/month, which I’m happy to pay because I think the service is great & the developer has done a good job. The website is fast & efficient & supports many of the keyboard commands that Google Reader did. Most RSS reader apps have added support for Feedbin as a syncing backend, including the ones I use. Another nice benefit is that the developer recently open sourced his software, which has already led to improvements through contributions from others.

Feedly is very popular & has probably become the biggest RSS syncing service since Google Reader’s shutdown. It’s free to use, but subscribers who pay $5/month or $45/year get extra features, including secure HTTPS connections, article search, premium support, & integration with Evernote. The website offers many different ways of viewing your feeds, including a Google Reader-style list & a more Flipboard-style magazine layout. However, I couldn’t really get into the way the website worked, & I didn’t like the official Feedly mobile apps, although many people do. In addition, most RSS reader apps have added support for Feedly, so you don’t need to use Feedly apps if you don’t want to. Definitely one you should try out, but if you do, make sure you poke around in the Settings so you configure it the way that makes sense for you.

Feed Wrangler is $19/year & offers an interesting feature that others do not: Smart Streams. Basically, Smart Streams allow you to group feeds together by title, search words, or topics. As long as you use the website or the official iPhone or iPad apps, you’ll be able to take advantage of your Smart Streams. Remember, however, that Feed Wrangler is also a syncing service, & while other RSS reader apps have included support for backend syncing using Feed Wrangler, far fewer of them have included support for Smart Streams.

Digg Reader at this time is pretty basic, but it shows a lot of promise, & the company behind it—BetaWorks—has done some really impressive work with other services & software that it has built. There are also official iOS & Android apps. Definitely one to watch.

NewsBlur is an interesting outlier, in that the developer provides an API, but has also worked hard to write mobile apps that work with the website in very specific ways. The result is that very few other apps support NewsBlur as a syncing service, so you’d better like using the official NewsBlur apps. For many people, that’s just fine. NewsBlur definitely does things its own way, with its own aesthetic, design, & behaviors that are different from all other RSS readers. It was interesting to me, but it was also so different that I didn’t think I’d like it. But the biggest reason I couldn’t use NewsBlur was that my iPad RSS reader of choice—the phenomenal Mr. Reader (see below)—didn’t support NewsBlur, which meant there was no way I could use the service. NewsBlur is free, but you can only follow 64 feeds; for $24/year, you can follow unlimited feeds & get more features.

Desktop apps

If you use a Mac, I highly recommend ReadKit, which is what I currently use on my Mac. It works with a variety of RSS websites & services, including Feedbin, Feedly, Feed Wrangler, NewsBlur, & straight RSS feeds that don’t come through a service (of course, this means no syncing). However, it’s not just for RSS, as it also supports three “read-it-later services”—Instapaper (which I use), Pocket, & Readability—& two social bookmarking services: Pinboard (the one I use) & Delicious. It’s really good, & the developer is constantly improving it, which is nice. It costs a paltry $2 on the Mac App Store.

I’ve also used these on my Mac:

  • Reeder is $9.99 on the Mac App Store. It was OK, but it didn’t grab me, & it was sometimes crashy. That said, it’s my favorite iPhone app for reading RSS feeds.
  • Caffeinated is $6 on the Mac App Store, & while I liked it much better than Reeder, it liked to crash a lot, so I dumped it.
  • NetNewsWire is $20 (although it’s currently $10 while in beta). Some people love it, but it didn’t do much for me.

If you use Windows, you really should consider using a Web-based solution. There just aren’t a lot of good RSS reader apps for Windows, especially ones that use syncing services besides Google Reader. If you absolutely insist on looking at Windows software, here are the best of a bad bunch:

  • When Google Reader died, the developer of FeedDemon announced that he was throwing in the towel. Too bad—a lot of people really liked it. You can use it without any syncing services, & hope that it keeps working with new versions of Windows, but I wouldn’t.
  • RSSOwl seems to have a lot of nice features, & I know it removed Google Reader syncing, but I can’t tell from the website if the developers added in support for any other syncing services!
  • NextGen Reader is built for Windows 8. Have at it!

A couple that I would avoid on Windows if I were you:

  • Outlook will work, but I wouldn’t rely on it. It’s very much an add-on, me-too feature, & there are lots of better choices out there. But if you follow only a tiny number of RSS feeds, & you live in Outlook, then I guess you could give it a try. But really, you should look at something else!
  • RSS Bandit hasn’t been updated in a long time, & while the lead developer says he’s interested in updating the app, it’s still not there.

Mobile apps

On the iPad, there is only one, as far as I’m concerned: Mr. Reader. For $4 on the App Store, you get the best RSS reader on any platform. It’s wonderfully designed to make reading feeds easy, & it supports syncing with a large & growing number of services, allows you to select from a wide variety of themes, & makes it easy to read posts in a variety of ways, including virtually every Web browser you can find on an iPad. All of those features are fantastic, but it’s the sharing features that really make Mr. Reader stand out. You can share feed posts or selected text with a dizzying number of services, including Twitter, Facebook, Pinboard, Instapaper, Evernote, Tumblr, Messages, & email, to name but a few. In fact, if you’re slightly technically inclined, you can even create your own sharing service, which is amazing. Get Mr. Reader—you’ll be glad you did.

If for some insane reason you don’t want to use Mr. Reader, you could take a look at Reeder for iPad or Feeddler Pro, I guess. But seriously, just use Mr. Reader.

If you have an iPhone, then you also have an easy time of it: just get Reeder, as it is the best RSS reader for that device. There is no reason to get any other. It’s just $3 on the App Store. For that, you get a beautifully designed iPhone app that supports many different syncing services & also makes it very easy to share posts & their content using a wide variety of sharing services. It’s great stuff.

Note: while I strongly endorse the Reeder app for iPhone, I’m not a big fan of the Reeder app for iPad or Mac OS X.

If you use Android, Press gets probably the best reviews, & it’s only $3. It supports many different syncing services, & obviously has a nice UI.

If you don’t like Press, there’s always the Feedly app, which gets a lot of kudos.

Got a Windows Phone device? NextGen Reader seems to be one to look at.

Why I support Obamacare

No, this isn’t tech related, so feel free to skip it if you don’t care or don’t want to read it. I’m leaving comments on, but stupid, abusive, or unhelpful comments will be deleted. You can disagree—that’s fine!—but be nice.

A conservative friend of mine asked me on Facebook why I support Obamacare. I wrote up this very long reply (almost 2000 words!) as an answer. Because I intended it as a casual reply—at least up until the first 500 or so words!—I didn’t provide citations & links for most of the facts I cite. Nonetheless, it should be easy to search for any of them & find my sources. Oh, & I don’t think there are, but if you find a few unquoted sentences in here from Wikipedia, I apologize in advance for my sloppiness.

This is a very long answer, because you asked a serious question. However, before I start, let me say that while I support Obamacare, I see it as a half measure at best. I was, & I still am, in favor of a single-payer healthcare system (for instance, making Medicare available to everyone), like virtually every other industrialized nation. That to me will be the only solution to the problems that Obamacare is trying to solve (detailed below).

Further, to me, Obamacare is mostly insurance company reform. That’s really what it is. That’s a good thing, as insurance companies have made billions of dollars screwing over a lot of people. Obamacare is still not the real comprehensive reform I would have liked, but it was the best we could get right now, so I’m happy we have it, even if I don’t think it goes nearly far enough.

  1. How the US healthcare system (very) generally works
  2. Problems in the US healthcare system
  3. What Obamacare does
  4. Some examples
  5. Some myths
  6. The past informs the future

How the US healthcare system (very) generally works

In order to answer your question, let’s first review how the US does things. Most of the population under 67 is insured by an employer (either theirs or a family member’s), some buy health insurance on their own, & the remainder are uninsured (~16% of the population). Health insurance for public sector employees (including the military) is primarily provided by the government.

The basic idea, though, is that you are insured by your employer. (This is an outgrowth of World War II, by the way: employers, seeking a way to provide benefits to employees during a time of wage & price controls, started offering to pay for health care costs instead. The government had offered to cover health care costs, but unions & others protested, as they wanted to provide the benefits—it seemed to make sense at the time.) This system makes the US an outlier among industrialized nations, however, as virtually all of the others guarantee access to medical care for their citizens through public, government-backed systems.

How are we doing compared to other industrialized nations?

  • Life expectancy: 50th among 221 nations, & 27th out of 34 industrialized countries
  • Infant mortality: 39th
  • Adult female mortality: 43rd
  • Adult male mortality: 42nd
  • & on & on

Not so great. Not nearly as great as we should be doing.

Problems in the US healthcare system

Now let’s look at the problems Obamacare is trying to solve.

  1. 45 million people had no type of health insurance in 2012. Sure, they could go to the emergency room, but then they either get a bill that bankrupts them (see below), or the outrageously high costs get passed along to everyone else.

  2. Many of those who had insurance had “bad” insurance that didn’t cover very much.

  3. Health insurance has often been discriminatory: women paid more than men, for example, or insurers wouldn’t cover someone who was sick. This often led to destroyed finances (62% of bankruptcy filers cite high medical expenses; 25% of all senior citizens declare bankruptcy due to medical expenses, & 43% are forced to mortgage or sell their primary residence).

  4. Healthcare costs in the US are the highest in the world (yet we’re only ranked 46th in the world for efficiency by Bloomberg & 17th out of 17 in a report by the National Research Council & the Institute of Medicine; other reports are similar). The expenditure per person in the U.S. is ~$8000, while the total amount of GDP spent on health care is ~17%—also the highest of any country in the world.

  5. Figuring out what your healthcare plan does & does not cover is at best extremely difficult & at worst impossible. The “fine print” is often used by insurance companies to screw consumers.

Those are all real, painful issues for virtually every American using the healthcare system—which means every American!

What Obamacare does

So, what does Obamacare DO to fix these problems?

  • Obamacare requires insurance companies to cover all applicants within new minimum standards. In other words, you can’t be offered an el cheapo plan that sounds great (“Only $50 a month! Sure!”) but doesn’t actually cover anything. And all plans must include prescription drugs, maternity care, mental health, physical rehabilitation, laboratory services, preventive care, chronic disease management, ambulances, hospitalization (that one screwed a lot of people), & pediatric services—all things that were often left out of the “cheap” plans. Even better, insurance companies can’t impose annual or lifetime coverage caps, so you can count on those things being available.

  • Obamacare requires that insurance plans eliminate co-pays & deductibles for childhood immunizations, adult vaccinations, medical screenings, mammograms, colonoscopies, wellness visits, gestational diabetes screening, HPV testing, STD counseling, HIV screening & counseling, FDA-approved contraceptive methods, breastfeeding support & supplies, & domestic violence screening & counseling.

  • Obamacare limits the out-of-pocket maximum you have to pay to $6,350 for an individual. Again, this is because the “cheap” plans would often have sky-high deductibles, which led to bankruptcies or sickness & death due to an inability to pay.

  • Obamacare requires insurance companies to offer the same rates regardless of pre-existing conditions. A lot of people have gotten screwed by insurance companies refusing to cover pre-existing conditions, so it’s great that this is no longer the case.

  • Obamacare requires insurance companies to offer the same rates regardless of sex. Women won’t pay more than men.

  • Obamacare will lower both future deficits and Medicare spending, according to Congressional Budget Office projections.

  • Obamacare will reduce the number of uninsured by 27 million between now and 2023; unfortunately, it will still leave approximately 26 million Americans uninsured (Who’s still going to be uninsured? Illegal immigrants [1/3 of that uninsured group], citizens who fail to enroll in Medicaid even though they could, citizens who opt to pay the annual penalty instead of purchasing insurance, & citizens who live in states that opt out of the Medicaid expansion and who don’t qualify for existing Medicaid coverage or subsidized coverage). That’s far better than the current situation. Among the non-elderly, 83% are currently insured (although a lot of those policies are pretty bad); under Obamacare, that will jump to 94% (& the plans all have to adhere to a minimum standard, so no more crappy plans). It’s not universal, but it’s far better.

  • Obamacare will reduce medical bankruptcies & prevent job lock (when someone can’t leave their current job because then they’ll lose their health insurance).

  • Obamacare will help control costs by reducing the number of people who have to go to the emergency room because that’s their only option & also by increasing the size of the insurance risk pool, which should help to distribute costs.

  • Obamacare allows children to remain on their parents’ plans until age 26, which will reduce the number of uninsured young adults.

  • Obamacare expands Medicaid eligibility to individuals with incomes below 133% of the federal poverty level, covering an estimated 16 million additional people.

  • Obamacare removes the Medicare “donut hole” (after someone under Medicare runs through the initial coverage of prescription drugs, they have to pay for those prescription drugs [at a higher cost], until they reach the catastrophic-coverage threshold, at which point Medicare takes over coverage again).

  • Obamacare establishes four tiers of coverage: bronze, silver, gold, & platinum. All of these categories offer the same essential benefits, outlined above; the different tiers tell you what your premiums & out-of-pocket costs are going to be. Basically, the percentage of care covered through premiums (as opposed to out-of-pocket costs) is roughly 60% (bronze), 70% (silver), 80% (gold), and 90% (platinum). This makes things simpler for the consumer.

  • Obamacare requires that insurance companies spend at least 80–85% of premium dollars on health costs & claims instead of administrative costs & profits; if this is violated, they must issue rebates to policyholders.


I think all of that is great. I mean, seriously great. I find it very hard to understand how someone could be against it, frankly. Is it perfect? Hell no. But it’s better than what we had.

Some examples

Are some premiums going to go up? Well, kind of, but in many cases, not really. Here’s an example: if you’re 27 & live in Fort Lauderdale, Florida, the least expensive plan is around $66 before Obamacare. Under Obamacare, it’s $128. “That’s double!”, you say. Hold on.

First of all, that plan sucks. It comes with a very high deductible ($10,000) & doesn’t cover mental health, brand-name drugs, or pre-natal care. On top of that, your out-of-pocket limit is $12,500. Egad! And, of course, if you have pre-existing conditions, you’re looking at a LOT more.

Under Obamacare, the $128 plan must include basic health benefits, like mental health, prescription drugs, & maternity care. Your deductible/out-of-pocket is limited to $6,350. That’s a far better plan!

“But it’s still $128!”, you say. Hold on. Under Obamacare, if you’re single & earn less than $46k a year, you are eligible for federal subsidies to help defray premium costs, with the size of the subsidy based on age, income, & residence. That means that the young single person in Fort Lauderdale ends up paying … wait for it … $74 a month. A whopping $8 more, for a far better plan, & you can’t get screwed by the insurance companies!

And here’s another example, this one from a close friend of mine. On October 1, 2013, when the Obamacare exchanges opened, my friend Bill got on the website & finally got decent health insurance. Here’s his brief story:

One data point: as a self-employed, relatively healthy 47 year old (I just hit my personal best in the squat rack), I was ‘uninsurable’ to any of the companies out there because of being diagnosed with sleep apnea 10 years ago (I have no other pre-existing conditions). I have since lost 35lbs and the apnea went away, but still no company wants to insure me - so I had to buy through MO’s ‘high risk’ pool. $508/mo for a $5K deductible - no vision, no dental, etc. - catastrophic coverage only. Just checked the new health insurance exchange web site today - $200/mo for better coverage…

That is exactly what Obamacare is supposed to do.

Some myths

In the list of things Obamacare does that I provided above, note the items that were not listed, because Obamacare does not do them:

  • You do not have to change your doctor.
  • You do not have to change your insurance.
  • You do not have to use a government healthcare system.
  • You do not have to use an exchange.
  • Businesses do not have to use the exchanges (insurance offered to employees must meet federal minimum standards, however).

In fact, for most people, not a lot will change, except that your insurance will be better. As Michael Tanner, senior fellow at the Cato Institute (a noted libertarian think tank), put it: “The vast majority of people will continue to get insurance the same way they do today.”

By the way, I’d also like to address the statement that the President & Congress are somehow exempt from Obamacare. Actually, Obamacare requires that members of Congress (& other federal employees) obtain health insurance either through an exchange or approved program (Medicare, for example), instead of using the current government program (the Federal Employees Health Benefits Program). However, the federal government will, like large private employers, continue contributing to the new health insurance plans of federal employees.

And besides, remember how our system works, by & large: the employer pays for the employee’s health care. The President & Congress & the military & other government employees are employed by the federal government, so why shouldn’t it contribute to, & provide, their health care?

The past informs the future

To wrap up this long reply, I’d like to forecast the future by looking at the past. Every time there has been an expansion of social services & rights for people in this country, the right wing has pulled a Chicken Little & screamed that the US was doomed (note I said “right wing” & not “Republicans”). Here are just a few examples; believe me, there are many more.

  • Social Security Act (1935). John Taber, a GOP House member from New York: “Never in the history of the world has any measure been brought here so insidiously designed as to prevent business recovery, to enslave workers.”
  • Fair Labor Standards Act (1938), which set a national minimum wage, guaranteed time-and-a-half for overtime in certain jobs, & banned child labor: “Opponents of the bill charged that [it] was ‘a bad bill badly drawn’ which would lead the country to a ‘tyrannical industrial dictatorship.’ They said New Deal rhetoric, like ‘the smoke screen of the cuttle fish,’ diverted attention from what amounted to socialist planning.”
  • Medicare & Medicaid (1965). Ronald Reagan in 1961: “[I]f you don’t [stop Medicare] and I don’t do it, one of these days you and I are going to spend our sunset years telling our children and our children’s children what it once was like in America when men were free.”

And now Obamacare:

  • Louisiana Rep. John Fleming: “Obamacare is the most dangerous piece of legislation ever passed in Congress.”
  • Minnesota Rep. Michele Bachmann: “Repeal this failure before it literally kills women, kills children, kills senior citizens.”
  • New Hampshire state Rep. Bill O’Brien: Obamacare is “a law as destructive to personal and individual liberty as the Fugitive Slave Act of 1850.”

And the best of all (& voted by PolitiFact as “Lie of the Year” for 2009!):

  • Sarah Palin: “The America I know and love is not one in which my parents or my baby with Down syndrome will have to stand in front of Obama’s ‘death panel’ so his bureaucrats can decide, based on a subjective judgment of their ‘level of productivity in society,’ whether they are worthy of health care. Such a system is downright evil.”

The right wing freaked out about the Social Security Act in 1935; now it’s an established part of our country. The right wing freaked out about the Fair Labor Standards Act in 1938; who would abolish the minimum wage or allow child labor now? The right wing freaked out about Medicare & Medicaid in 1965; now millions of people depend on those programs for their health & lives.

The right wing is freaking out about Obamacare now; in ten years, no one is going to care. The benefits Obamacare provides society will be accepted, & most people will wonder how we ever lived without them. The sky won’t fall, but millions of people will be insured, & will be able to live healthier lives without worrying that they’ll go bankrupt or die because they can’t afford or get health insurance.

That’s why I support Obamacare.

New in 1Password 4: Multiple Vaults

I’ve been beta testing 1Password 4 for the last month or so, & so far I really like what I see. One of the neatest new features is Multiple Vaults, which the company describes this way:

1Password 4 helps you keep your data more organised than ever before with the new multiple vaults feature. Want to keep your work and personal stuff separate? No problem, just create a separate “Work” vault. Have to handle your parents’ finances but want to keep that separate from your own stuff? No problem, create a separate “Parents” vault. Have items that you don’t want to delete but that aren’t really relevant anymore? No problem, create an “Archive” vault. Each vault can have its own password, its own identifying icon and accent colour, and its own sync settings.

And here’s a picture, courtesy of AgileBits, makers of 1Password:

1Password vaults

This sounds like a great new feature. LastPass has had something similar for a while called Shared Folders:

A ‘Shared Folder’ is a special folder in your vault that you can use to securely and easily share sites and notes with other people in your Enterprise account. Changes to the Shared Folder are synchronized automatically to everyone with whom the folder has been shared. Different access controls—such as ‘Hide Passwords’—can be set on a person-by-person basis. Shared Folders use the same technology to encrypt and decrypt data that a regular LastPass account uses, but are designed to accommodate multiple users for the same folder.

This is a really cool feature, & I have friends who use LastPass as part of a team & say it’s really nice, but it’s not enough to overcome LastPass’s horrible, confusing UI.

At WebSanity, we all use 1Password. When I read about Multiple Vaults in 1Password, I immediately thought that it would be perfect for us. However, there’s one problem, as an AgileBits employee explained:

As for 1Password 4 for iOS, it won’t support multiple vaults for now, this will require an update to it down the line. We’ll focus on stabilizing the multiple vaults in the OS X app and then work on the iOS app down the line.

If we can’t use it on our iPads & iPhones, then we can’t use it. Once 1Password allows us to use multiple vaults on our Macs & iOS devices, we’ll happily start using them. But until then, we’ll just have to wait.