I’m a teacher & speaker, so I give a lot of presentations. I learned a long time ago that a picture is often much more effective than words, & since I’m often talking about websites & web services, I end up inserting a lot of screenshots of webpages into my presentations. However, this has traditionally been a time-consuming & tedious process. Why? Because…
I want the image to fill the entire slide so that it fills the entire screen when I’m presenting.
My slides are sized for 1024×768, so the image needs to be exactly that big (since I’m using a MacBook Pro with Retina Display, they actually end up being double that: 2048×1536).
When taking pictures of webpages, 99% of the time I only want the actual webpage itself—what is in the viewport—& not the browser chrome1 around it.
The web browser viewport displays the actual webpage.
That’s pretty specific. It’s easy to take a screenshot on your Mac—just use Command+Shift+4, then press the Spacebar to focus on a window, & then click—but it’s far more difficult to meet the needs outlined above. The problem with the screenshot method is that it makes it easy to focus on a window, but not the viewport. And even then, how do I make sure that the viewport is sized to 1024×768? And on top of that, I still have to manually crop the viewport out of the image. Ugh.
So here’s my solution, which is working wonderfully. To use it, you’ll need three tools, each covered below: webkit2png, ImageMagick, & Keyboard Maestro.
Paul Hammond, the creator of webkit2png, describes it as follows:
webkit2png is a command line tool that creates screenshots of webpages.
It’s easy to install with Homebrew:
$ brew install webkit2png
The options that you need to know are:
-W 1024 (or --width=1024)
The width of the resulting image.
-H 768 (or --height=768)
The height of the resulting image, but keep in mind that this is ignored if the webpage is taller than the number you specify. As Paul Hammond puts it: “With tall or wide pages that would normally require scrolling, it takes screenshots of the whole webpage, not just the area that would be visible in a browser window.” This is fine, as you’ll see—and actually, I like having the whole page available in an image, just in case I want to use more than the first visible part in the viewport.
-F (or --fullsize)
Just get the fullsize grab, without also creating a thumbnail. If for some reason you also wanted a thumbnail, you’d include -T (or --thumb) here.
-d (or --datestamp)
Include the date, formatted as YYYYMMDD, in the filename.
-D /path/to/directory (or --dir=/path/to/directory)
Specify the directory in which images are saved, instead of the current working directory.
So, if I wanted to grab a screenshot of my blog, I’d use this:
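Putting those options together, the command amounts to this (the Desktop as the output directory matches where the files show up later in this post):

```shell
$ webkit2png -W 1024 -H 768 -F -d -D ~/Desktop http://www.chainsawonatireswing.com/
```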
And the result would look like this (obviously shrunk way down—I don’t want to get too meta here!—& cropped, otherwise it’s 30,000 pixels tall2):
ImageMagick is one of the most useful & most confusing programs in the UNIX world. It’s amazingly powerful, but along with that power comes a bewildering array of programs (ImageMagick is actually several programs), options, & features. Every time I want to do something that I know ImageMagick can do, I end up spending about a half hour figuring out how exactly to do it.
What we want to do is crop the image that webkit2png grabbed for us. To do this, you first use the identify command to figure out how wide the image is. Why? Because we’re going to be cropping programmatically, & if the image is greater than or equal to 2048 pixels, then we need to ultimately crop it to 2048×1536, but if the image is less than that, then we need to crop it to 1024×768. Trust me—it works.
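The decision itself is simple enough to sketch in plain shell (the width is hard-coded here where identify would normally supply it):

```shell
# Decide which crop geometry to use, as described above.
# In the real workflow, the width comes from: identify -format %w image-full.png
width=2048

if [ "$width" -ge 2048 ]; then
  crop="2048x1536"   # Retina-sized grab
else
  crop="1024x768"    # non-Retina grab
fi

echo "Cropping to $crop"
# prints: Cropping to 2048x1536
```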
Running the identify command on an image with the -format %w (for width) gives me what I want:
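For example, with an illustrative filename (webkit2png builds the real one from the date & URL; a Retina grab made with -W 1024 comes out 2048 pixels wide):

```shell
$ identify -format %w 20131128-wwwchainsawonatireswingcom-full.png
2048
```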
The actual cropping is done with the convert command, another part of ImageMagick. The key option we need is (big surprise!) -crop. To use the option, you specify the following:
x coordinate for the top left corner of the crop
y coordinate for the top left corner of the crop
I want the image to be 2048×1536, & I want the very top of the image, so I want the top left corner of the crop to match the top left corner of the original image, which would mean an x coordinate of 0 and a y coordinate of 0. So my option looks like this: -crop 2048x1536+0+0.
So, to crop the image that webkit2png grabbed, I’d use convert like this:
First the convert command, then the file name of the full-size image I’m cropping, then the -crop option & its details, & then the name of the resulting cropped image. The results (shrunk way down, obviously):
OK, now let’s automate everything with the always-awesome Keyboard Maestro!
I use Keyboard Maestro palettes a lot, & Safari is my default browser, so the following is a macro for the Safari palette. However, it would work just as well with any browser that supports Command+L to focus the address bar (which is all of them, to my knowledge).
Here’s the whole macro, & then I’ll walk through the components:
I use /bin/date +%Y-%m-%d to generate a date in the format of YYYY-MM-DD, because I want to include the date in the final, cropped filename, & that’s how I like it formatted.
I use /bin/date +%Y%m%d to generate a date in the format of YYYYMMDD, because that’s the format that webkit2png uses when it creates the original image it grabs, which I need to match later.
I then type Command+L to select the address bar, & Command+C to copy the address, which is then saved as a variable named URL.
I then grab a screenshot of the webpage using webkit2png, explained above. Since I’m using the bash shell, I have to reference the Keyboard Maestro variable as $KMVAR_URL; in other words, I have to insert $KMVAR_ in front of the variable name.
I now have a screenshot of the webpage, but it’s almost certainly way too tall, so I need to crop it. Before doing that, I need to generate the filename I want the final cropped image to have so that I can use it with the convert command.
To do this, I use regex in two search & replace operations. The first—^https?://?—removes either http:// or https:// from the URL variable. This needs to be done because of the name that webkit2png uses with the files it creates. If the original URL is http://www.chainsawonatireswing.com/2013/06/14/yep-things-are-different/, the resulting filename is 20131128-wwwchainsawonatireswingcom20130614yepthingsaredifferent-full.png. To match that, I need to remove the protocol from the beginning.
You’ll notice that the file name created by webkit2png also strips out other punctuation from the URL as well. To match that, we use the second regex—[-/.:+=?]*—which looks for all instances of those characters & removes them.3
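Put together, those two substitutions can be reproduced with sed (leaving off the date prefix & -full suffix that webkit2png adds):

```shell
url="http://www.chainsawonatireswing.com/2013/06/14/yep-things-are-different/"

# First strip the protocol, then strip the punctuation characters
name=$(echo "$url" | sed -E -e 's|^https?://||' -e 's|[-/.:+=?]||g')

echo "$name"
# prints: wwwchainsawonatireswingcom20130614yepthingsaredifferent
```

Keyboard Maestro applies these regexes with its own search & replace actions, of course; sed is just a convenient way to check them from Terminal.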
It’s time to use the identify command to find out the width of the image that webkit2png grabbed. The result is stored in another Keyboard Maestro variable: Image Width.
Finally, we get to the real meat & potatoes: a quick shell script that uses the convert command to crop the image that webkit2png created. You will need to change the user name in the path, unless your name is Scott!
Again, the Keyboard Maestro variable is actually named Image Width, but since we’re using it in a shell script, we have to reference it as $KMVAR_Image_Width. The same is true for $KMVAR_DateYMD, $KMVAR_URL, & $KMVAR_DateY_M_D.
The file that is being cropped—the one that was generated by webkit2png—is named something like 20131128-foobarbazquxcorgegrault-full.png, but the cropped file will be named foobarbazquxcorgegrault-2013-11-28.png.
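A sketch of what that shell script amounts to, assuming the variable names & file-naming conventions described above (this is a reconstruction, not the literal macro contents; note that by this point the URL variable has already been stripped by the regexes):

```shell
#!/bin/bash
# Reconstruction of the cropping step, using the Keyboard Maestro variables
full="/Users/scott/Desktop/${KMVAR_DateYMD}-${KMVAR_URL}-full.png"
cropped="/Users/scott/Desktop/${KMVAR_URL}-${KMVAR_DateY_M_D}.png"

# Retina grabs are 2048 pixels wide; anything narrower gets the 1024x768 crop
if [ "$KMVAR_Image_Width" -ge 2048 ] ; then
  /usr/local/bin/convert "$full" -crop 2048x1536+0+0 "$cropped"
else
  /usr/local/bin/convert "$full" -crop 1024x768+0+0 "$cropped"
fi
```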
If something doesn’t work, you should see an error message from Keyboard Maestro. Usually it’s because my regex missed a character in the file name, so Keyboard Maestro can’t match the file name, which generates an error. Compare the webkit2png-generated file name on your Desktop to the one in the Keyboard Maestro error message & you’ll quickly see the mismatch. After that, simply edit the regex & you should be good to go.
I use the system I’ve outlined here almost every day, & it’s really been an amazing time saver. When I realize that I want to capture a webpage I’m viewing in Safari, I simply press Hyper+` (that’s the Hyper key & a backtick) to bring up my Keyboard Maestro palette for Safari & then press S. About 10 seconds later, if that, I get a notification that there is an image waiting for me on my Desktop. It’s the perfect size for my slides, & it’s even dated so I know when it was taken, & a few seconds later, it’s in a Keynote slide & I can move on to the next one. It’s fast, it’s easy, it’s automated, & it’s awesome. Enjoy.
Now the name of Google’s web browser makes sense as a cute little pun, doesn’t it?↩
The maximum height & width captured by webkit2png is 30,000 pixels. To paraphrase Bill Gates, that should be enough for anybody.↩
I’ve been adding these as failures occur, so I might have missed one. If I did, please let me know.↩
I received the following email from a co-worker the other day:
I am building a web application for a client that facilitates training and she wants to be able to upload her powerpoints but does not want the viewers to be able to download them and change them. they have audio in them (some). What would you suggest?
Here’s my reply:
Is it important that students be able to choose when to advance through the slides? If not, is it possible to export as a video? Then the students press Play & just watch.
If students must be able to choose when to advance, then I have a few ideas.
Import into Google Docs & set Sharing to View Only (you can also set it to prevent downloading too, but gosh I hate that). Then you can get a link from Google to embed the presentation in a webpage & viewers can use the Left & Right arrows to advance through the slides. No idea about embedding audio in that, though. I’d be surprised if it works.
Apple’s iCloud allows you to import PowerPoint files into the online Keynote app (or just create the presentation in the online app, or using Keynote on your iPad or Mac, & have it all synced—it’s slick as hell) & then share them. The problem is that at this time all viewers can edit too, as they haven’t added a View Only mode yet (they will, I’m sure, but the web apps are beta).
This could also be exactly what you want: http://www.slideshare.net. Not sure re: audio (I’ll be surprised if ANY solution besides a movie preserves audio). Other, similar services include http://www.docstoc.com & http://www.scribd.com (I find many of these services to be obnoxious in their requirements for visitors, but that could just be me).
Those are the ones I came up with. Hope that helps!
DEVONthink is a key piece of software for me on my Mac. In particular, I use it to store copies of webpages that I run across that I want students to read or that I want to refer back to for teaching, or for writing, or for my own use. Now, it’s very easy to get webpages into DEVONthink by using the browser extensions that come with the software. You click on the extension, & you get a small window:
See the Format menu? When you click it, you get several choices:
This is great, as is the checkbox for Instapaper, which runs the webpage through that awesome service & gives you results with just the featured content & none of the crap. But even with Instapaper, these results are not perfect, at least for me.
Here’s my problem: I want a webpage so that I can see images & hyperlinks & other stuff that only comes with the Web. I like PDFs, but not when I can just have good ol’ HTML to deal with. But if I choose the HTML Page or Web Archive options, then I get a bunch of junk I don’t want, like ads & extraneous content. If I check the box next to Instapaper, I get less junk, but I lose a lot of control over what gets selected & what doesn’t get selected, & the original URL of the webpage, along with a lot of other important metadata, gets stripped away by Instapaper. In other words, I want this:
See? Neat & clean, with the title of the Web article at the top as an H1, & then the author, date of publication, & URL below, all H2s in the HTML, & finally the content & nothing else.
Yes, I know this is picky, but it’s what I really want. So I set out to create it over several months, & I finally got it all figured out & set up & working this summer. After testing it for months to verify that it works well, I am now ready to unveil this process to you, the readers of Chainsaw on a Tire Swing.
Before I dig in to the details, let me give you the 20,000-foot summary of the process. It might seem complicated, & I guess it kinda is, but it’s not that bad if you go through it step by step, & it does work beautifully. I’m going to mention several services in this introduction that you might not have heard of. Don’t worry; I’ll explain everything below.
Send an email to the IFTTT (If This Then That) service which contains the URL of the webpage at the top of the message.
IFTTT saves the email as a file in a specific folder in your Dropbox.
Hazel on your Mac notices the new file in the folder & runs a shell script.
The shell script grabs the URL out of the file & sends a request to the Diffbot service, which saves the result to the /tmp directory as a webpage.
The shell script converts that resulting webpage to a .webarchive file & saves it to DEVONthink’s Inbox folder, where it is automatically imported into DEVONthink.
I love Diffbot. I really do. It’s the best service of its type I’ve seen, the price is right (free for the 1st 10,000 requests each month!), & the support I’ve received when I’ve had questions or issues has been top-notch. So what’s it do?
Simple. It’s a scraper: you send a request to Diffbot using its API, you get back the data from a webpage, shorn of all the junk. It’s like Safari’s Reader feature, but available programmatically. Here’s an example.
First, a blog post at The Atlantic’s website, as it appears in a browser:
Next, the same post after it’s been passed through Diffbot & brought into DEVONthink:
So, here’s what you need to do: go to Diffbot’s website, create an account, find out your Diffbot Developer Token (you’ll need it for the shell script), & then come back here.
You don’t have to create these folders exactly where I specify, but if you change their locations, you’re going to need to edit the shell script that’s coming up.
Create a folder at root of your Dropbox named Incoming. Inside the Incoming folder, create another folder named DEVONthink. Your folder structure should therefore look like this: ~/Dropbox/Incoming/DEVONthink
If you don’t already have an account with If This Then That (IFTTT), go get yourself one! It’s a free service that lets you tie together online services so that when one event happens at one service, then something happens as a response. For example, every time you post a picture to Facebook, a copy is placed in a Dropbox folder, or every time a particular RSS feed is published, it’s scanned by IFTTT, & if certain words are in the title, that post is emailed to you. It’s such a great service that I’d pay for it if I had to.
To use it with my process here, create an account at IFTTT if you don’t already have one, log in to IFTTT, & activate the Dropbox & Email channels.
Now go to My Recipes & click Create A Recipe. Here’s what you’re going to fill in:
Description: App emails IFTTT a URL, which gets saved as a text file
Trigger: Send IFTTT an email from your email address with a tag of #dt (for DEVONthink, get it?).
Action: Create a text file in Dropbox
Dropbox folder path: Incoming/DEVONthink
File name: Subject
Save it, & you’re good to go.
So here’s what happens: you find a webpage that you want to capture in DEVONthink. You email the link to yourself, with the URL as the first line of the body of the email (you can have other stuff in the email, like your signature, but it will be ignored by the upcoming shell script). As for the subject, it really doesn’t matter—it can be words, it can be a URL as well, it can be nothing—as long as you have #dt in it (I always put it at the end because that’s easy).
When the email arrives at IFTTT, it is saved as a text file in the specified Dropbox folder. The subject of your email becomes the name of the file, & the body of your email becomes the contents of the file.
We now have a place in Dropbox for incoming text files containing URLs that we want to use, & a method for getting those text files into Dropbox: emailing IFTTT. But what do we do with those text files once they’re in there? Time for some shell scripting!
Needed command line software
The shell script I’m going to provide has several requirements:
gecho (the GNU version of echo)
gsed (the GNU version of sed)
dos2unix (converts text files between Windows & UNIX/Mac OS X formats)
jsonpp (prettifies JSON files)
terminal-notifier (send Mac OS X notifications)
webarchiver (create Safari .webarchive files)
All of those but one (webarchiver) are available through Homebrew, so if you haven’t already installed that, you’ll need to do so.
Once you have Homebrew up & running, run this command (it’s not obvious, but coreutils takes care of gecho—& a whole lot more besides):
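Based on the list above, the Homebrew command amounts to something like this (the formula names are my reconstruction; coreutils supplies gecho & gnu-sed supplies gsed):

```shell
$ brew install coreutils gnu-sed dos2unix jsonpp terminal-notifier
```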
If you use MacPorts (who uses that anymore?), you can download webarchiver pretty easily, according to the developer:
$ sudo port install webarchiver
I don’t use MacPorts, so I have no idea how well this works. If you don’t use it either, you’re going to have to download the code & compile it using Xcode.
I went to the GitHub page for the webarchiver project, got a copy of the code (don’t download the release, as that’s 0.3, which is ancient & won’t compile on newer Macs; instead, get the latest code, which is version 0.5), & double-clicked on webarchiver.xcodeproj to open the project in Xcode. Once in Xcode, I went to Product > Build, which successfully compiled the code, leaving the binary in /Users/scott/Library/Developer/Xcode/DerivedData/webarchiver-dreeepqxmdlkgieggztknlbwsula/Build/Products/Debug/webarchiver. Obviously, your path under DerivedData will be different2. I then moved the webarchiver binary to /usr/bin.
Once you’ve moved webarchiver to its new home, test it:
Usage: webarchiver -url URL -output FILE
Example: webarchiver -url http://www.google.com -output google.webarchive
-url http:// or path to local file
-output File to write webarchive to
Updates can be found at https://github.com/newzealandpaul/webarchiver/
If you see that output, you’re good to go.
The shell script
Place the shell script you see below in your ~/bin directory. I named it conv_to_webarchive.sh (you can use your own, but if you change the name, you’ll need to also change the instructions for Hazel that are coming up). I’ve commented the heck out of it, so I hope that helps explain what each step is doing.
#!/bin/bash
#===============================================================================
#          FILE: conv_to_webarchive.sh
#         USAGE: Automatic with Hazel
#   DESCRIPTION: Uses diffbot to download essential info about an article
#                & webarchiver to convert it to a .webarchive file
#        AUTHOR: Scott Granneman (RSG), firstname.lastname@example.org
#       COMPANY: Chainsaw On A Tire Swing
#       VERSION: 0.4
#       CREATED: 06/22/2013 13:50:23 CDT
#      REVISION: 11/17/2013 15:20:43 CDT
#===============================================================================

###########
# Variables
###

incoming_dir="/Users/scott/Dropbox/Incoming/DEVONthink"
devonthink_dir="/Users/scott/Library/Application Support/DEVONthink Pro 2/Inbox"
fail_safe_dir="/Users/scott/Desktop"
diffbot_token="tm3wnis0wa1irfvgl4ulmqi3iiu0sx1f"

###########
# Grab webpages
###

# Test to see if the necessary directories exist
if [ -e "$incoming_dir" ] && [ -e "$devonthink_dir" ] ; then
  # Set IFS to split on newlines, not spaces, but first save old IFS
  # See http://unix.stackexchange.com/questions/9496/looping-through-files-with-spaces-in-the-names
  SAVEIFS=$IFS
  IFS=$'\n'
  # If you can cd to the Incoming/DEVONthink directory, run everything else
  if cd "$incoming_dir" ; then
    # For every file containing a URL in the Incoming/DEVONthink directory
    for i in $(ls *) ; do
      # If it’s not empty, process it;
      # if it IS empty, move it so Diffbot doesn’t keep trying forever
      if [[ -s $i ]] ; then
        # Check if it’s a Windows-formatted file; if it is, convert it to UNIX
        if [ "$(grep -c $'\r$' "$i")" -gt 0 ] ; then
          terminal-notifier -message "$i is a Windows file, so convert it" -title "Windows File Found"
          /usr/local/bin/dos2unix "$i"
        fi
        # Delete any blank lines
        # Note: will only work with UNIX line endings, hence the previous conversion
        /usr/local/bin/gsed '/^$/d' "$i" > "$i".out
        mv "$i".out "$i"
        # Read the file to get the URL
        # I use head instead of cat because the file usually comes in via email,
        # & I’m too lazy when composing to leave off my email sig
        url=$(head -n 1 "$i")
        /usr/local/bin/gecho -e "\nURL in the file is $url"
        # URL encode the, uh, URL
        encoded_url=$(python -c "import sys, urllib as ul; print ul.quote_plus(sys.argv[1])" "$url")
        /usr/local/bin/gecho -e "\nEncoded URL is $encoded_url"
        # Grab JSON-formatted article & data from Diffbot,
        # clean up JSON, & write results to file
        if curl "http://www.diffbot.com/api/article?token=$diffbot_token&url=$encoded_url&html&timeout=20000" | /usr/local/bin/jsonpp > /tmp/results.json ; then
          # Pull out article’s name
          article_title=$(grep -m 1 '"title":' /tmp/results.json | /usr/local/bin/gsed 's/ "title": "//' | /usr/local/bin/gsed 's/",$//' | /usr/local/bin/gsed 's:\\\/:-:g' | /usr/local/bin/gsed 's://:-:g' | /usr/local/bin/gsed 's/\\"/"/g' | /usr/local/bin/gsed -f /Users/scott/bin/conv_to_webarchive.sed)
          /usr/local/bin/gecho -e "\nArticle Title is $article_title"
          # If $article_title is empty, move it so Diffbot doesn’t keep trying forever;
          # if it’s not empty, continue processing it
          if [[ -z $article_title ]] ; then
            # If $article_title is empty, move it!
            mv "$i" "$fail_safe_dir"
            terminal-notifier -message "Diffbot could not parse title in $i" -title "Problem with Diffbot"
          else
            # If results.json can be renamed, continue processing;
            # if it can’t be renamed, move it!
            if mv /tmp/results.json /tmp/"$article_title".json ; then
              # Pull out article’s other metadata
              article_author=$(grep -m 1 '"author":' /tmp/"$article_title".json | /usr/local/bin/gsed 's/ "author": "//' | /usr/local/bin/gsed 's/",$//' | /usr/local/bin/gsed -f /Users/scott/bin/conv_to_webarchive.sed)
              /usr/local/bin/gecho -e "\nArticle Author is $article_author"
              article_date=$(grep '"date":' /tmp/"$article_title".json | /usr/local/bin/gsed 's/ "date": "//' | /usr/local/bin/gsed 's/",$//' | /usr/local/bin/gsed -f /Users/scott/bin/conv_to_webarchive.sed)
              /usr/local/bin/gecho -e "\nArticle Date is $article_date"
              article_url=$(grep '"url":' /tmp/"$article_title".json | /usr/local/bin/gsed 's/ "url": "//' | /usr/local/bin/gsed 's/",$//' | /usr/local/bin/gsed 's/\\//g' | /usr/local/bin/gsed 's/"$//')
              /usr/local/bin/gecho -e "\nArticle URL is $article_url"
              # Write HTML to file
              # Remove JSON stuff, fix Unicode, then remove \n, \t, & \
              grep '"html":' /tmp/"$article_title".json | /usr/local/bin/gsed 's/ "html": "//' | /usr/local/bin/gsed 's/",$//' | /usr/local/bin/gsed -f /Users/scott/bin/conv_to_webarchive.sed | /usr/local/bin/gsed 's/\\n//g' | /usr/local/bin/gsed 's/\\t//g' | /usr/local/bin/gsed 's/\\//g' > /tmp/"$article_title".html
              # Prepend metadata to file
              /usr/local/bin/gsed "1i <h1>$article_title</h1>\n<h2>$article_author</h2>\n<h2>$article_date</h2>\n<h2>$article_url</h2>\n" /tmp/"$article_title".html > /tmp/"$article_title"_1.html && mv /tmp/"$article_title"_1.html /tmp/"$article_title".html
              # Prepend HTML metadata to file
              /usr/local/bin/gsed "1i <!DOCTYPE html>\n<html>\n<head>\n<meta charset=\"UTF-8\">\n<title>$article_title</title>\n</head>\n<body>\n" /tmp/"$article_title".html > /tmp/"$article_title"_1.html && mv /tmp/"$article_title"_1.html /tmp/"$article_title".html
              # Append closing HTML to file
              echo "</body></html>" >> /tmp/"$article_title".html
              # Using the webarchiver tool I downloaded & compiled, create a webarchive
              if webarchiver -url /tmp/"$article_title".html -output "$devonthink_dir/$article_title".webarchive ; then
                # If it works, then delete the file
                rm "$i"
              else
                # Couldn’t create a webarchive
                terminal-notifier -message "No webarchive for $i" -title "Problem creating webarchive"
              fi
            else
              # If results.json can’t be renamed, move it!
              mv "$i" "$fail_safe_dir"
            fi
          fi
        else
          # If Diffbot fails, move it!
          mv "$i" "$fail_safe_dir"
          terminal-notifier -message "Could not Diffbot $i" -title "Problem with Diffbot"
        fi
      else
        # If it’s empty, move it!
        terminal-notifier -message "$i is empty!" -title "Problem with parsing file"
        mv "$i" "$fail_safe_dir"
      fi
    done
  else
    # Needed directory isn’t there, which is weird
    /usr/local/bin/gecho -e "\nIncoming DEVONthink directory is missing!" >> "$fail_safe_dir/DEVONthink Problem.txt"
  fi
  # Restore IFS so it’s back to splitting on <space><tab><newline>
  IFS=$SAVEIFS
else
  # Needed directories aren’t there, which is very bad
  /usr/local/bin/gecho -e "\nIncoming or DEVONthink directories are missing!" >> "$fail_safe_dir/DEVONthink Problem.txt"
fi

exit 0
Note the following about the script:
Make sure the variables are correct for your setup.
In particular, you’ll need to enter your Diffbot Developer Token for diffbot_token. And no, that’s not mine. I randomly generated a lookalike.
The paths that start with /Users/ are all for my Mac. You’ll need to change them for yours.
You might notice that I encode URLs in the middle; in other words, I turn http://www.chainsawonatireswing.com/ into http%3A%2F%2Fwww.chainsawonatireswing.com%2F. This is what Diffbot wants, so it is what Diffbot gets.
It’s pretty easy to test and make sure you’re getting the right results from Diffbot. Just use the line from the script: curl "http://www.diffbot.com/api/article?token=$diffbot_token&url=$encoded_url&html&timeout=20000" | /usr/local/bin/jsonpp, but put in your Diffbot Developer Token instead of $diffbot_token & the encoded URL you want to test instead of $encoded_url. By piping the output to jsonpp, you get readable results.
Yes, I use sed (actually gsed) a lot. I refer to a file named conv_to_webarchive.sed a few times. That file is detailed in the next section.
You don’t need the lines with gecho, but I found them very useful while I was developing & testing the script, & they don’t do any harm, so I left them. If they bother you, take ’em out.
Notice the lines that say mv $i $fail_safe_dir. All are fail-safes in case files can’t be renamed or parsed. This became critical when I did not have them in place, & one night a file got stuck trying Diffbot repeatedly, so that I racked up 15,000 queries or so in just a few hours. Fortunately, I shamefacedly explained what happened to Diffbot support, & they very kindly forgave me. And then I immediately put in place those fail-safes, as I should have from the beginning. So if you see files on your desktop, look in them, as they indicate problems that you need to fix by hand.
The sed file
In my shell script I refer to conv_to_webarchive.sed a number of times. If you don’t know what sed is, it’s basically a way to edit files programmatically from the command line. It’s also very cool & does a million things, most of which I know nothing about (although I’d love to learn!).
I have built this file up over time, as I have found errors in the results generated by Diffbot & the other programs. Basically, Diffbot stuck the encoding for a character in the results, & I want the actual character itself. So, for instance, instead of an ellipsis, I saw \u2026 in the file; my sed file turns \u2026 back into … so that it’s readable. As I discover more, I’ll add to the file.
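A sketch of what one rule in that file does, applied inline with -e instead of -f (the \u2026 rule is the example just mentioned; the rest of my rules follow the same shape):

```shell
# Turn the JSON escape \u2026 back into a real ellipsis character
printf 'Wait for it\\u2026 done\n' | sed -e 's/\\u2026/…/g'
# prints: Wait for it… done
```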
So we have a shell script that processes files, but how do we tell the shell script to run? Enter Hazel. Basically, Hazel watches the folders you tell it to watch, & when something changes in those folders, Hazel processes the files according to the rules you specify.
In this case, we’re going to tell Hazel to watch the Incoming/DEVONthink folder, & when a file is placed inside, the shell script detailed above should run, processing the file. Lather, rinse, repeat.
Open Hazel. On the Folders tab, press the + under Folders to add a new folder. Select ~/Dropbox/Incoming/DEVONthink.
Under Rules, press the + to add a new rule. A sheet will open, named for the folder; in this case, DEVONthink.
For a Name, I chose Convert URL to DEVONthink webarchive.
Now you need to make selections so that the following instructions are mirrored in Hazel.
To test your work, email a URL to email@example.com with #dt in the subject. A few seconds later, you should see a new entry appear in your DEVONthink Inbox, stripped of extraneous formatting & info thanks to Diffbot.
In subsequent posts, I’m going to tell how to automate emailing that URL using Keyboard Maestro on the Mac & Mr. Reader & other apps on your iOS devices. I could do it here, but this post is long enough already! And even without that info, I still find this process I’ve detailed here to be incredibly useful, so much so that I use it at least 10 times a day to save webpages into DEVONthink. I hope you find it useful too!
Now, Diffbot normally does a great job, but not always. In those cases, you need to go to the Custom API Toolkit & tell Diffbot what to do, based on CSS selectors. I’ve been collecting materials for a long post about this that I’ll put up on this site later.↩
Note that this is maybe the second time in my life I’ve messed around with Xcode, so if there’s a better way to do what I did, please let me know.↩
I’m teaching my Social Media course (AKA From Blogs to Wikis) at Washington University in St. Louis this semester, & one of our topics is RSS. I wrote the following about RSS services & apps for my students, but I wanted to share it here as well, since I thought others might find it useful.
This used to be an easy one: if you wanted to follow RSS feeds, use Google Reader. During the summer of 2013, however, Google shut down Reader, which actually turned into a good thing, as it broke the stranglehold Google had on RSS & allowed a thousand RSS flowers to bloom, so to speak.
Let’s be clear about what Google Reader was, exactly. Google Reader actually performed two related services:
It was a website that made it easy to follow & read RSS feeds.
It was a syncing service that other websites & apps could use.
That second one requires a bit more explanation. Google Reader made it relatively easy for other apps to use it as a backend syncing service, which allowed users to pick & choose among RSS apps. In my case, I used Google Reader via its website when I was at my laptop. When I was on my iPhone, however, I used a program called Reeder that synced with Google Reader, & when I was on my iPad, I used a program called Mr. Reader that also synced with Google Reader.
If I marked a post or feed as read in Mr. Reader, that program notified Google Reader that it was read, & then, when I opened one of the other apps or looked at the Google Reader website, that post or feed was gone. Likewise, if I starred a post on the Google Reader website, when I opened up Reeder or Mr. Reader, the post would be starred there as well.
Pretty much every RSS app & website used Google Reader as a syncing service. When Google Reader shut down, it wasn’t just that a website for reading RSS feeds was going away—more importantly (& worse!), the backend syncing service used everywhere was going away too!
When Google Reader shut down, the following happened in short order:
A few RSS reader apps just decided to call it quits & shut down.
Some RSS reader apps announced that they would start offering syncing services to replace those provided by Google Reader, & that those syncing services would be open to any other RSS reader app that wanted to use them (in most cases for a fee; see further down in this list for more info). This meant, of course, that other apps that wanted to use those new syncing services would have to be reprogrammed, in some cases drastically.
Most RSS reader apps said that in addition to supporting some new syncing services, they would also support simply subscribing to RSS feeds within the apps, without using a syncing service on the back end. Since there would be no syncing of those feeds, the info about the status of the feeds & their posts would reside solely in the apps. If you only used one app on one device to read your RSS feeds, this might work just fine, but for most people, who want the ability to read synced feeds on different devices, this wasn’t very handy at all.
Many syncing services announced that they would be charging a small fee for their use. One of the reasons that Google killed Reader was that the company never charged for the service, & never made any attempt to monetize it, so there was little financial incentive to keep it going. By charging users, the new RSS services hoped to reassure customers that they would not disappear & would also be able to ramp up the infrastructure necessary to handle a large number of RSS subscribers, requests, & apps.
With that in mind—that a syncing service is just as important as an app that uses that syncing service—let me go through some of the various services & apps that have sprung up in the wake of Google Reader’s demise.
I can’t go through each & every RSS syncing service, so I’ll just cover the important or interesting ones here.
Each of the following syncing services also offers a web-based RSS reader. In addition, several offer apps that you can install on your mobile devices. If a fee is associated with the service, I’ll mention that too.
Feedbin is the service I’ve chosen to use for syncing my feeds. It’s $3/month, which I’m happy to pay because I think the service is great & the developer has done a good job. The website is fast & efficient & supports many of the keyboard commands that Google Reader did. Most RSS reader apps have added support for Feedbin as a syncing backend, including the ones I use. Another nice benefit is that the developer recently open sourced his software, which has already led to improvements through contributions from others.
Feedly is very popular & has probably become the biggest RSS syncing service since Google Reader’s shutdown. It’s free to use, but subscribers who pay $5/month or $45/year get extra features, including secure HTTPS connections, article search, premium support, & integration with Evernote. The website offers many different ways of viewing your feeds, including a Google Reader-style list & a more Flipboard-style magazine layout. However, I couldn’t really get into the way the website worked, & I didn’t like the official Feedly mobile apps, although many people do. In addition, most RSS reader apps have added support for Feedly, so you don’t need to use Feedly apps if you don’t want to. Definitely one you should try out, but if you do, make sure you poke around in the Settings so you configure it the way that makes sense for you.
Feed Wrangler is $19/year & offers an interesting feature that others do not: Smart Streams. Basically, Smart Streams allow you to group feeds together by title, search words, or topics. As long as you use the website or the official iPhone or iPad apps, you’ll be able to take advantage of your Smart Streams. Remember, however, that Feed Wrangler is also a syncing service, & while other RSS reader apps have included support for backend syncing using Feed Wrangler, far fewer of them have included support for Smart Streams.
Digg Reader at this time is pretty basic, but it shows a lot of promise, & the company behind it—BetaWorks—has done some really impressive work with other services & software that it has built. There are also official iOS & Android apps. Definitely one to watch.
NewsBlur is an interesting outlier, in that the developer provides an API, but has also worked hard to write mobile apps that work with the website in very specific ways. The result is that very few other apps support NewsBlur as a syncing service, so you’d better like using the official NewsBlur apps. For many people, that’s just fine. NewsBlur definitely does things its own way, with its own aesthetic, design, & behaviors that are different from all other RSS readers. It was interesting to me, but it was also so different that I didn’t think I’d like it. But the biggest reason I couldn’t use NewsBlur was that my iPad RSS reader of choice—the phenomenal Mr. Reader (see below)—didn’t support NewsBlur, which meant there was no way I could use the service. NewsBlur is free, but you can only follow 64 feeds; for $24/year, you can follow unlimited feeds & get more features.
If you use a Mac, I highly recommend ReadKit, which is what I currently use on my Mac. It works with a variety of RSS websites & services, including Feedbin, Feedly, Feed Wrangler, NewsBlur, & straight RSS feeds that don’t come through a service (of course, this means no syncing). However, it’s not just for RSS, as it also supports three “read-it-later services”—Instapaper (which I use), Pocket, & Readability—& two social bookmarking services: Pinboard (the one I use) & Delicious. It’s really good, & the developer is constantly improving it, which is nice. It costs a paltry $2 on the Mac App Store.
I’ve also used these on my Mac:
Reeder is $9.99 on the Mac App Store. It was OK, but it didn’t grab me, & it was sometimes crashy. That said, it’s my favorite iPhone app for reading RSS feeds.
NetNewsWire is $20 (although it’s currently $10 while in beta). Some people love it, but it didn’t do much for me.
If you use Windows, you really should consider using a Web-based solution. There just aren’t a lot of good RSS reader apps for Windows, especially ones that use syncing services besides Google Reader. If you absolutely insist on looking at Windows software, here are the best of a bad bunch:
When Google Reader died, the developer of FeedDemon announced that he was throwing in the towel. Too bad—a lot of people really liked it. You can use it without any syncing services, & hope that it keeps working with new versions of Windows, but I wouldn’t.
RSSOwl seems to have a lot of nice features, & I know it removed Google Reader syncing, but I can’t tell from the website if the developers added support for any other syncing services!
A couple that I would avoid on Windows if I were you:
Outlook will work, but I wouldn’t rely on it. It’s very much an add-on, me-too feature, & there are lots of better choices out there. But if you follow only a tiny number of RSS feeds, & you live in Outlook, then I guess you could give it a try. But really, you should look at something else!
RSS Bandit hasn’t been updated in a long time, & while the lead developer says he’s interested in updating the app, it’s still not there.
On the iPad, there is only one, as far as I’m concerned: Mr. Reader. For $4 on the App Store, you get the best RSS reader on any platform. It’s wonderfully designed to make reading feeds easy, & it supports syncing with a large & growing number of services, allows you to select from a wide variety of themes, & makes it easy to read posts in a variety of ways, including virtually every Web browser you can find on an iPad. All of those features are fantastic, but it’s the sharing features that really make Mr. Reader stand out. You can share feed posts or selected text with a dizzying number of services, including Twitter, Facebook, Pinboard, Instapaper, Evernote, Tumblr, Messages, & email, to name but a few. In fact, if you’re slightly technically inclined, you can even create your own sharing service, which is amazing. Get Mr. Reader—you’ll be glad you did.
If for some insane reason you don’t want to use Mr. Reader, you could take a look at Reeder for iPad or Feeddler Pro, I guess. But seriously, just use Mr. Reader.
If you have an iPhone, then you also have an easy time of it: just get Reeder, as it is the best RSS reader for that device. There is no reason to get any other. It’s just $3 on the App Store. For that, you get a beautifully designed iPhone app that supports many different syncing services & also makes it very easy to share posts & their content using a wide variety of sharing services. It’s great stuff.
Note: while I strongly endorse the Reeder app for iPhone, I’m not a big fan of the Reeder app for iPad or Mac OS X.
If you use Android, Press gets probably the best reviews, & it’s only $3. It supports many different syncing services, & obviously has a nice UI.
If you don’t like Press, there’s always the Feedly app, which gets a lot of kudos.
Got a Windows Phone device? NextGen Reader seems to be one to look at.
No, this isn’t tech related, so feel free to skip it if you don’t care or don’t want to read it. I’m leaving comments on, but stupid, abusive, or unhelpful comments will be deleted. You can disagree—that’s fine!—but be nice.
A conservative friend of mine asked me on Facebook why I support Obamacare. I wrote up this very long reply (almost 2000 words!) as an answer. Because I intended it as a casual reply—at least up until the first 500 or so words!—I didn’t provide citations & links for most of the facts I cite. Nonetheless, it should be easy to search for any of them & find my sources. Oh, & I don’t think there are any, but if you find a few unquoted sentences from Wikipedia in here, I apologize in advance for my sloppiness.
This is a very long answer, because you asked a serious question. However, before I start, let me say that while I support Obamacare, I see it as a half measure at best. I was, & I still am, in favor of a single-payer healthcare system (for instance, making Medicare available to everyone), like virtually every other industrialized nation. That to me will be the only solution to the problems that Obamacare is trying to solve (detailed below).
Further, to me, Obamacare is mostly insurance company reform. That’s really what it is. That’s a good thing, as insurance companies have made billions of dollars screwing over a lot of people. Obamacare is still not the real comprehensive reform I would have liked, but it was the best we could get right now, so I’m happy we have it, even if I don’t think it goes nearly far enough.
How the US healthcare system (very) generally works
In order to answer your question, let’s first review how the US does things. Most of the population under 67 is insured by an employer (either theirs or a family member’s), some buy health insurance on their own, & the remainder are uninsured (~16% of the population). Health insurance for public sector employees (including the military) is primarily provided by the government.
The basic idea, though, is that you are insured by your employer (an outgrowth of World War II, by the way, when employers, seeking a way to provide benefits to employees during a time of wage & price control limitations, started offering to pay for health care costs instead; the government had offered to cover health care costs, but unions & others protested, as they wanted the benefits—it seemed to make sense at the time). This system is an outlier among other industrialized nations, however, as virtually all others guarantee access to medical care for their citizens through public, government-backed systems.
How are we doing compared to other industrialized nations?
Life expectancy: 50th among 221 nations, & 27th out of 34 industrialized countries
Infant mortality: 39th
Adult female mortality: 43rd
Adult male mortality: 42nd
& on & on
Not so great. Not nearly as great as we should be doing.
Problems in the US healthcare system
Now let’s look at the problems Obamacare is trying to solve.
45 million people had no type of health insurance in 2012. Sure, they could go to the emergency room, but then they either get a bill that bankrupts them (see below) or the outrageously high costs get passed along to everyone else.
Many of those who had insurance had “bad” insurance that didn’t cover very much.
Health insurance has often been discriminatory: women paid more than men, for example, or insurers wouldn’t cover someone who was sick. This often led to destroyed finances (62% of filers for bankruptcies claim high medical expenses; 25% of all senior citizens declare bankruptcy due to medical expenses, & 43% are forced to mortgage or sell their primary residence).
Healthcare costs in the US are the highest in the world (yet we’re only ranked 46th in the world for efficiency by Bloomberg & 17th out of 17 in a report by the National Research Council & the Institute of Medicine; other reports are similar). The expenditure per person in the U.S. is ~$8000, while the total amount of GDP spent on health care is ~17%—also the highest of any country in the world.
Figuring out what your healthcare plan does & does not cover is at best extremely difficult & at worst impossible. The “fine print” is often used by insurance companies to screw consumers.
Those are all real, painful issues for virtually every American using the healthcare system—which means every American!
What Obamacare does
So, what does Obamacare DO to fix these problems?
Obamacare requires insurance companies to cover all applicants within new minimum standards. In other words, you can’t be offered an el cheapo plan that sounds great (“Only $50 a month! Sure!”) but doesn’t actually cover anything. And all plans must include prescription drugs, maternity care, mental health, physical rehabilitation, laboratory services, preventive care, chronic disease management, ambulances, hospitalization (that one screwed a lot of people), & pediatric services—all things that were often left out of the “cheap” plans. Even better, insurance companies can’t impose annual or lifetime coverage caps, so you can count on those things being available.
Obamacare requires that insurance plans eliminate co-pays & deductibles for childhood immunizations, adult vaccinations, medical screenings, mammograms, colonoscopies, wellness visits, gestational diabetes screening, HPV testing, STD counseling, HIV screening & counseling, FDA-approved contraceptive methods, breastfeeding support & supplies, & domestic violence screening & counseling.
Obamacare requires that the out-of-pocket maximum deductible you have to pay is limited to $6,350 for an individual. Again, this is because the “cheap” plans would often have sky-high deductibles, which led to bankruptcies or sickness & death due to an inability to pay.
Obamacare requires insurance companies to offer the same rates regardless of pre-existing conditions. A lot of people have gotten screwed by the insurance companies by their failure to cover pre-existing conditions, so it’s great that this is no longer the case.
Obamacare requires insurance companies to offer the same rates regardless of sex. Women won’t pay more than men.
Obamacare will lower both future deficits and Medicare spending, according to Congressional Budget Office projections.
Obamacare will reduce the number of uninsured by 27 million between now and 2023; unfortunately, it will still leave approximately 26 million Americans uninsured (Who’s still going to be uninsured? Illegal immigrants [1/3 of that uninsured group], citizens who fail to enroll in Medicaid even though they could, citizens who opt to pay the annual penalty instead of purchasing insurance, & citizens who live in states that opt out of the Medicaid expansion and who don’t qualify for existing Medicaid coverage or subsidized coverage). That’s far better than the current situation. Among the non-elderly, 83% are currently insured (although a lot of those policies are pretty bad); under Obamacare, that will jump to 94% (& the plans all have to adhere to a minimum standard, so no more crappy plans). It’s not universal, but it’s far better.
Obamacare will reduce medical bankruptcies & prevent job lock (when someone can’t leave their current job because then they’ll lose their health insurance).
Obamacare will help control costs by reducing the number of people who have to go to the emergency room because that’s their only option & also by increasing the size of the insurance risk pool, which should help to distribute costs.
Obamacare allows children to remain on their parents’ plans until age 26, which will reduce the number of uninsured young adults.
Obamacare increases Medicaid eligibility to 16 million individuals with incomes below 133% of the federal poverty level.
Obamacare removes the Medicare “donut hole” (after someone under Medicare runs through the initial coverage of prescription drugs, they have to pay for those prescription drugs [at a higher cost], until they reach the catastrophic-coverage threshold, at which point Medicare takes over coverage again).
Obamacare establishes four tiers of coverage: bronze, silver, gold, & platinum. All of these categories offer the same essential benefits, outlined above; the different tiers tell you what your premiums & out-of-pocket costs are going to be. Basically, the percentage of care covered through premiums (as opposed to out-of-pocket costs) is roughly 60% (bronze), 70% (silver), 80% (gold), and 90% (platinum). This makes things simpler for the consumer.
Obamacare requires insurance companies to spend at least 80–85% of premium dollars on health costs & claims instead of administrative costs & profits; if this is violated, they must issue rebates to policyholders.
I think all of that is great. I mean, seriously great. I find it very hard to understand how someone could be against it, frankly. Is it perfect? Hell no. But it’s better than what we had.
Are some premiums going to go up? Well, kind of, but in many cases, not really. Here’s an example: if you’re 27 & live in Fort Lauderdale, Florida, the least expensive plan is around $66 before Obamacare. Under Obamacare, it’s $128. “That’s double!”, you say. Hold on.
First of all, that plan sucks. It comes with a very high deductible—$10,000—& doesn’t cover mental health, brand-name drugs, or pre-natal care. On top of that, your out-of-pocket limit is $12,500. Egad! And, of course, if you have pre-existing conditions, you’re looking at a LOT more.
Under Obamacare, the $128 plan must include basic health benefits, like mental health, prescription drugs, & maternity care. Your deductible/out-of-pocket is limited to $6,350. That’s a far better plan!
“But it’s still $128!”, you say. Hold on. Under Obamacare, if you’re single & earn less than $46k a year, you are eligible for federal subsidies to help defray premium costs, with the size of the subsidy based on age, income, & residence. That means that the young single person in Fort Lauderdale ends up paying … wait for it … $74 a month. A whopping $8 more, for a far better plan, & you can’t get screwed by the insurance companies!
And here’s another example, this one from a close friend of mine. On October 1, 2013, when the Obamacare exchanges opened, my friend Bill got on the website & finally got decent health insurance. Here’s his brief story:
One data point: as a self-employed, relatively healthy 47 year old (I just hit my personal best in the squat rack), I was ‘uninsurable’ to any of the companies out there because of being diagnosed with sleep apnea 10 years ago (I have no other pre-existing conditions). I have since lost 35lbs and the apnea went away, but still no company wants to insure me - so I had to buy through MO’s ‘high risk’ pool. $508/mo for a $5K deductible - no vision, no dental, etc. - catastrophic coverage only. Just checked the new health insurance exchange web site today - $200/mo for better coverage…
That is exactly what Obamacare is supposed to do.
In the list of things Obamacare does that I provided above, note the items that were not listed, because Obamacare does not do them:
You do not have to change your doctor.
You do not have to change your insurance.
You do not have to use a government healthcare system.
You do not have to use an exchange.
Businesses do not have to use the exchanges (insurance offered to employees must meet federal minimum standards, however).
In fact, for most people, not a lot will change, except that your insurance will be better. As Michael Tanner, senior fellow at the CATO Institute (a noted libertarian think tank), put it: “The vast majority of people will continue to get insurance the same way they do today”.
By the way, I’d also like to address the statement that the President & Congress are somehow exempt from Obamacare. Actually, Obamacare requires that members of Congress (& other federal employees) obtain health insurance either through an exchange or approved program (Medicare, for example), instead of using the current government program (the Federal Employees Health Benefits Program). However, the federal government will, like large private employers, continue contributing to the new health insurance plans of federal employees.
And besides, remember how our system works, by & large: the employer pays for the employee’s health care. The President & Congress & the military & other government employees are employed by the federal government, so why shouldn’t it contribute to, & provide, their health care?
The past informs the future
To wrap up this long reply, I’d like to forecast the future by looking at the past. Every time there has been an expansion of social services & rights for people in this country, the right wing has pulled a Chicken Little & screamed that the US was doomed (note I said “right wing” & not “Republicans”). Here are just a few examples; believe me, there are many more.
Social Security Act (1935). John Taber, a GOP House member from New York: “Never in the history of the world has any measure been brought here so insidiously designed as to prevent business recovery, to enslave workers.”
Fair Labor Standards Act (1938), which set a national minimum wage, guaranteed time-and-a-half for overtime in certain jobs, & banned child labor: “Opponents of the bill charged that [it] was ‘a bad bill badly drawn’ which would lead the country to a ‘tyrannical industrial dictatorship.’ They said New Deal rhetoric, like ‘the smoke screen of the cuttle fish,’ diverted attention from what amounted to socialist planning.” (http://www.dol.gov/oasam/programs/history/flsa1938.htm)
Medicare & Medicaid (1965). Ronald Reagan in 1961: “[I]f you don’t [stop Medicare] and I don’t do it, one of these days you and I are going to spend our sunset years telling our children and our children’s children what it once was like in America when men were free.”
And now Obamacare:
Louisiana Rep. John Fleming: “Obamacare is the most dangerous piece of legislation ever passed in Congress.”
Minnesota Rep. Michele Bachmann: “Repeal this failure before it literally kills women, kills children, kills senior citizens.”
New Hampshire state Rep. Bill O’Brien: Obamacare is “a law as destructive to personal and individual liberty as the Fugitive Slave Act of 1850.”
And the best of all (& voted by Politifact as “Lie of the Year” for 2009!; see http://chnsa.ws/f5):
Sarah Palin: “The America I know and love is not one in which my parents or my baby with Down syndrome will have to stand in front of Obama’s ‘death panel’ so his bureaucrats can decide, based on a subjective judgment of their ‘level of productivity in society,’ whether they are worthy of health care. Such a system is downright evil.”
The right wing freaked out about the Social Security Act in 1935; now it’s an established part of our country. The right wing freaked out about the Fair Labor Standards Act in 1938; who would abolish the minimum wage or allow child labor now? The right wing freaked out about Medicare & Medicaid in 1965; now millions of people depend on those programs for their health & lives.
The right wing is freaking out about Obamacare now; in ten years, no one is going to care. The benefits Obamacare provides society will be accepted, & most people will wonder how we ever lived without them. The sky won’t fall, but millions of people will be insured, & will be able to live healthier lives without worrying that they’ll go bankrupt or die because they can’t afford or get health insurance.
1Password 4 helps you keep your data more organised than ever before with the new multiple vaults feature. Want to keep your work and personal stuff separate? No problem, just create a separate “Work” vault. Have to handle your parents’ finances but want to keep that separate from your own stuff? No problem, create a separate “Parents” vault. Have items that you don’t want to delete but that aren’t really relevant anymore? No problem, create an “Archive” vault. Each vault can have its own password, its own identifying icon and accent colour, and its own sync settings.
And here’s a picture, courtesy of AgileBits, makers of 1Password:
A ‘Shared Folder’ is a special folder in your vault that you can use to securely and easily share sites and notes with other people in your Enterprise account. Changes to the Shared Folder are synchronized automatically to everyone with whom the folder has been shared. Different access controls—such as ‘Hide Passwords’—can be set on a person-by-person basis. Shared Folders use the same technology to encrypt and decrypt data that a regular LastPass account uses, but are designed to accommodate multiple users for the same folder.
This is a really cool feature, & I have friends who use LastPass as part of a team & say it’s really nice, but it’s not enough to overcome the horrible, confusing UI that LastPass possesses.
At WebSanity, we all use 1Password. When I read about Multiple Vaults in 1Password, I immediately thought that it would be perfect for us. However, there’s one problem, as an AgileBits (makers of 1Password) employee explained:
As for 1Password 4 for iOS, it won’t support multiple vaults for now, this will require an update to it down the line. We’ll focus on stabilizing the multiple vaults in the OS X app and then work on the iOS app down the line.
If we can’t use it on our iPads & iPhones, then we can’t use it. Once 1Password allows us to use multiple vaults on our Macs & iOS devices, we’ll happily start using them. But until then, we’ll just have to wait.
This coming Wednesday, September 11, civil rights attorney & fellow Washington University in St. Louis professor Denise Lieberman & I are giving a public talk titled “Digital Intrusion & Digital Privacy: What THEY Know, What YOU Don’t”. Here’s the description:
Surveillance by strangers, by criminals, by the police, and now by government agencies. How do they do it? What are the ramifications, legally and technologically? How is the current situation brought about by Edward Snowden different historically? How do you protect your communications? And how do you answer those who say ‘If you’re not doing anything wrong, then government surveillance shouldn’t bother you’?
Please come if you can. The talk is at the St. Louis UNIX Users Group, but it’s open to the public, & we’d love to have as many people as possible there.
If you’ve ever visited this blog before—& gosh, I sure hope you have!—then you’re probably noticing that things are very different here. Over the past couple of weeks, I’ve been migrating from WordPress to Jekyll & Octopress. Besides the new (pretty default, which I will change) look, you should also see speedy page loads. Why? Because while WordPress creates dynamic, PHP- & MySQL-driven websites, Octopress instead generates static sites, which will always load much faster. That’s a huge win for you & for me.
So besides speed, why else did I switch to Octopress?
WordPress is really nice, but it’s a lot of stuff. I wanted something leaner & meaner.
My entire site is on my Mac. I write here, check out how things look, re-generate the site at any time, & then use rsync to send the changes to my server. Having everything local & under my control is really nice.
By the way, everything is written in Markdown, which I use for almost everything I write. I could use Markdown with WordPress, but it was an add-on; with Octopress, it’s how you’re expected to write.
My entire site is versioned with Git. Another bonus.
Octopress takes care of a lot of the things you have to worry about with a straight Jekyll install. I’m always looking for the lazy alternative, so Octopress sounded much better than just Jekyll.
Based on all the people using Jekyll & Octopress (just a smidge compared to the huge numbers using WordPress, but still, enough that they’re both viable projects), it appears that they’re going to be around a while.
Octopress embraces responsive design & therefore looks good on a variety of different devices. WordPress does too, but I think Octopress does a slightly better job.
It just makes sense to me: move the dynamic stuff onto my computer, & serve the static results to visitors on the Web. That provides the best of both worlds.
It was fun learning a new system!
With the switchover to Octopress, I’ve put in place several new features:
I’ve turned on commenting, which uses Disqus. We’ll see how it goes. If it becomes a spam haven, I’ll revisit my decision.
I’ve instituted categories. I realized a few months in that I should have been using them, but it just seemed like too much work to retroactively add them in WordPress. When I moved over to Octopress, I went through and added a bunch.
The RSS feed is different. If you subscribed to the old feed, you’re OK for now, as I put in place a redirection that points to the new feed. Really, though, you should subscribe to the new feed, which is located at http://www.chainsawonatireswing.com/atom.xml.
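A feed redirection like the one just described can be done with a single server rule. This is only a sketch, & it assumes an Apache server & that the old WordPress feed lived at `/feed/`—both assumptions, not details from this post:

```apache
# Permanently redirect the old WordPress feed URL to the new Atom feed.
# /feed/ is an assumed WordPress feed path; adjust it to match the old URL.
Redirect 301 /feed/ http://www.chainsawonatireswing.com/atom.xml
```

A 301 tells well-behaved feed readers the move is permanent, so many of them will update the stored subscription URL on their own.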
I’m really happy with the new system, & I hope you find it better too!
For quite some time I’ve been relying on The Wirecutter when I need to purchase almost anything tech-related. If friends or students ask me which TV to buy—or practically anything, really—I tell them to look it up on The Wirecutter & get whatever it recommends. For those of you who don’t know, what distinguishes The Wirecutter is that it only recommends the best thing to get in a particular category. Instead of having to wade through so-called “reviews” that compare & contrast 5 or 10 or 20 items in a mish-mash that never strongly favors one thing over another, The Wirecutter cuts to the chase, which is great.
Well, now The Wirecutter has a sister site: The Sweethome, which focuses on the best household items. I’ve been reading the stuff on there, & it’s just as great & useful. If you’re looking for an appliance, tool for the garage, or kitchen implement, The Sweethome should be your first stop.