Demand good governance, not more essays about how bad those other people are.

    This article in the Nation makes some chilling connections between rising homelessness and right-wing rhetoric. People should read it, internalize it, and remember it when they’re feeling frustrated about the ongoing catastrophe playing out on our streets, because it is ugly and it is what often happens when people have to bear witness to human misery day in and day out, with no sense of a path forward.

    That said, there was this report on the abject failure of Multnomah County’s Joint Office for Homeless Services, which receives hundreds of millions of dollars per year from the Metro supportive housing measure and millions more from the city of Portland. In a nutshell:

    “Among the most damning findings by County Auditor Jennifer McGuirk: The office sometimes pays providers months late; it asks them to work before contracts are in place; it adjusts performance measures if providers cannot meet original goals; and it could not produce simple data on how many people it’s housed — even to the county auditor herself.”

    I talked to someone who manages clinical services for one of the region’s largest Medicaid recipients, has contracts with the JOHS, has been doing social work for 20 years in this area, and has been at the tip of the spear in the region’s care for the unhoused and addicted.

    “Nobody likes to talk about the JOHS imploding because they’re afraid voters will get angry and pull the supportive housing measure dollars.”

    The JOHS isn’t ashamed to conflate itself and its dysfunction with the community it is failing to serve. It would have us believe its dysfunction and mismanagement are the best we can expect in helping the most vulnerable and least powerful among us. It’s a disgrace. I can’t believe the city even feels the need to debate its ongoing partnership with these people. The county, through the JOHS, foot-drags and sabotages shelter initiatives (“They say shelter isn’t housing. That’s right. Shelter is fucking shelter,” says my friend), and has generally reduced the crisis we’re facing to a bloodless exercise in technocratic managerialism that is missing the one thing technocratic managerialism most depends on: actual managerial competence.

    I’ve written before about the systemic problems Multnomah Co. creates for itself. In some ways, those are completely outside the JOHS’s control. For whatever reason, Multnomah Co. long ago shifted to a “market-style” approach to funding social services providers, with all the inherent inefficiency of spinning up dozens of competing providers, each with its own administrative overhead, management overhead, and political backbiting. That has reduced the JOHS to a procurement bureaucracy.

    Where the JOHS becomes problematic is in its long history of mismanagement, and in the fierce defense it received from the former county chair because of the county’s ideological commitments around housing, including a fixation on commoditized housing. The county chair allowed the office’s founding director to stay in place until long after it went into failure mode, allowed nepotistic consulting deals, burned through a series of interim leaders, and let the office utterly fail to spend money regional taxpayers generously agreed to raise.

    That last part is important: Is there a rising sense of cruelty against the homeless? I think that Nation article makes a compelling case. But locally there has also been a great deal of generosity and a willingness to pay for solutions. What those taxpayers have received in return is shocking mismanagement — so bad that the auditor trying to understand what is going on at the JOHS had to give up in disgust, because the organization can’t even quantify its reason for existing.

    The answer isn’t to read Yet Another Scathing Progressive Indictment of Christopher Rufo or Michael Shellenberger and cluck to yourself that at least you know better. The answer is not to sit around shaming others because they sometimes react poorly to the horror going on around them — and here I’m looking at all the Twitter “progressives” with the moral vanity to summarize the problem as “greedy people worried about their property values.”

    The answer is to hold government accountable. If you’re a loud and proud progressive, DSA member, “good liberal” or whatever, your power doesn’t matter in the general election. Those races are foregone conclusions in the Portland area. Your power matters in the primary, where you owe the progressive contenders more scrutiny.

    Make Jessica Vega Pederson earn your vote next time. Demand good governance. Quit letting these people hide behind your ideological allegiance to them. It’s a one-party county; act like it.

    The next phone

    Time for the occasional fretting about phones.

    Some things I love or have loved:

    • I love my iPad mini 6. It sort of snuck up on me how much I love it.
    • I loved my iPhone mini. I wish they still made those, and I wish they came with the Good Cameras.
    • I sort of loved my Mennonite-made flip phone. It has some issues, but it’s a nice device in a very “just does the one thing” sort of way.
    • I love my Garmin Instinct 2X Solar.

    Something I once thought I loved, but have reappraised:

    I haven’t missed my Apple Watch for one second in several weeks of using the Garmin. The Apple Watch is feature-packed, nice to look at, etc., but I wasn’t using many of the features and I didn’t care for the charging hassle on vacations or long camping weekends. This last week, when I put the Garmin into low power mode on a brief vacation and it reported 96 days of remaining battery, I swooned a little. Back in smartwatch mode, after a few GPS-recorded walks and a week off the charger, it’s still showing 23 days of battery life.

    It does the things I liked about my Apple Watch: contactless payment, texts, and phone calls. But it’s a profoundly undemanding device. Time to go for a walk? Press the GPS button once to enter workout mode, once more to pick my default workout, and once more to start. Same button to pause. The button interface disoriented me for my first day of use; then it all made perfectly good sense. I don’t miss swiping a tiny screen, and I really will not miss it in the winter.

    One ambivalent thing:

    • I feel profoundly ambivalent about my iPhone 14 Pro. The camera is amazing. Everything else is fine and all, but it makes my left thumb hurt sometimes.

    Freed of the Apple Watch, I’m anchored to the iPhone by one less thing: an Apple Watch requires an accompanying iPhone.

    Now, I like iOS/iPadOS a lot. But it is entirely conceivable to me that I could live the parts of my digital life that do well on iOS on my iPad mini. It has cellular data, etc. etc. and I don’t tend to leave the house without a bag or sling of some kind, so I could do all my iPhone things with it and enjoy a better screen than an iPhone for reading stuff. Upsize my bag to fit my wireless Magic Keyboard and my Twelve South Compass, and I’ve got a work device.

    Leave it all behind, and my flip phone is there as minimum viable tether … texts, phone calls. It even has maps and turn-by-turn — if I am feeling a little masochistic, anyhow — and a weather app. But the folks who made it are super privacy oriented, so it won’t do much else, and unlike a lot of low-end minimalist phones it doesn’t add a cheesy Facebook app you can’t get rid of.

    I dunno. It’s just coming around to “New iPhone Season,” and I thought a little about how annoyed I am by iPhones since Apple stopped making the mini. Even the “normal” ones are a little too big to be one-handed without straining my thumb, and the big ones are barely pocketable but still smallish for reading. Comparatively, the iPad mini has a huge, gorgeous display.

    Bookmarking was a micro.blog feature I slept on, but it just got more useful with tagging. I uploaded a JSON dump of my pinboard bookmarks just to try it out. Not sure I’d go full-time, but I like the idea of using it as a blog post queue.

    I got sort of restless about publishing tools, then peevish about my Hugo setup and git, so I ran through a little “product manager of Me” requirements gathering exercise, and it came up “micro.blog.” I think it’s all about being able to just pop open MarsEdit or Drafts or whatever and start typing, but also knowing all of it is sitting up there as very portable Markdown-and-YAML I can grab at any time. I don’t know what it would take to get me to succumb to WordPress again.

    I don’t agree with how all of the social bits of micro.blog work, and I don’t agree with all the design decisions on the blogging side, but as a simple-to-use abstraction of Hugo with a steadily expanding feature list, a conscientious and thoughtful proprietor, and a kind community behind it, I don’t think there’s much better than micro.blog out there.

    (I still think no-hashtags-for-posts is a curious hill to die on, and I believe tagging makes more sense than categories for people who maintain blogs as small online diaries, but I also get that prolific inline hashtagging is visually cluttered and encourages a stilted, less readable posting style. When I have control of the design I prefer to use tags as a line of metadata at the bottom, where they serve an organizational purpose with the secondary benefit of enhancing discoverability on Mastodon.)

    Anyhow, there are always tradeoffs. micro.blog’s are pretty liveable for what you get. I’m glad it keeps improving.

    I wanted to make hashtags link to something more useful on my micro.blog blog so I made a shortcode that takes a list of tags & turns them into tag links on social.lol. Wonder how it looks on Mastodon.
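    The shape of a shortcode like that is pretty small. Here’s a sketch of the general idea, not my code verbatim; it assumes the tags arrive as a single space-separated argument:

        {{/* layouts/shortcodes/tags.html (sketch): takes a space-separated
             tag list and links each tag to its social.lol tag page */}}
        {{ range split (.Get 0) " " }}<a href="https://social.lol/tags/{{ . }}">#{{ . }}</a> {{ end }}

    Called in a post as something like {{< tags "hugo shortcode microblog" >}}.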

    Update: Looks fine. The tags are rendered as links and the first link comes in at the end. I’ll test for no-link posts when I have a reason … that could be unattractive.

    Screenshot of a Mastodon post with linked hashtags.

    #hugo #shortcode #microblog

    Well, the happiest medium I can figure out for crossposting from micro.blog to Mastodon while still squeezing hashtags in without a lot of aesthetic mess seems to be using the <pre> tag in Markdown to enclose the hashtags.

    If I put them at the bottom of the post, under the photos, they don’t stick out and just look sorta like a little line of metadata. At the same time, Mastodon can pick them up as linkable hashtags.

    The other thing that can work is just putting two spaces in front of a line of hashtags. The Markdown parser doesn’t pick up the leading # as a heading. I prefer the pre approach because it conveys a sort of “this isn’t content” look when posted to a blog.
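    Concretely, the bottom of a post’s Markdown source ends up looking something like this (tags illustrative):

        <pre>#hugo #microblog #photography</pre>

    The blog renders that as a quiet little line of metadata, and Mastodon still picks up each tag as a linkable hashtag.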

    As I sit here staring at this, it makes me wonder how easy it would be to write a useful-only-to-me Hugo plugin that just turns a pre-encased line of hashtags into links to … somewhere?

    When I log out of social.lol and search for hashtags, I can see the results:

    https://social.lol/tags/photography

    … so it wouldn’t be a dead end. Wonder what happens to that when the Mastodon renderer picks it up.

    Update: Seems to render fine.

    Coffee Walk, 2023-01-25
    Springwater Trail, Foster Floodplain Natural Area
    Lents, Portland, OR

    Stopped through the Feral Cat Cove skate park on the way to the floodplain this morning.

    Felix the Cat cartoon character painted onto the concrete of a skate bowl.

    A cat walks through a concrete skate bowl; an orange graffiti cat seems to look down on it.

    A tall, dead tree over a scrubby floodplain, brown in winter.

    Cargo containers are stacked up in front of a misty mountain, under a gray sky.

    #portland #pdx #FosterFloodplain #photography

    Added a "recents" page to imgup's SmugMug branch

    Screenshot of a web page with image thumbnails and corresponding text areas with Markdown markup ready for copying/pasting.

    Last night I finished up the /recent page for imgup’s SmugMug version.

    It provides the last 20 uploaded images along with their title and caption metadata wrapped up in a Markdown image link. The caption is pulling duty as my alt text.

    Having this in place gives me a way to process and upload images to SmugMug from Lightroom in bulk, then visit imgup to get the markup for sharing. Work put into adding metadata, titles, etc. can happen in one place, and I can easily amend it in the SmugMug organizer if needed. It saves me a little clicking around and manual editing.
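    The core of the page is one API call and a little mapping. A minimal sketch of the idea, assuming an access token and album key are set at boot; the SmugMug v2 “!images” endpoint and the Title/Caption fields are my reading of the API docs, worth double-checking:

        require 'sinatra'
        require 'json'
        require 'oauth'

        # Assumes `set :access_token, ...` (an OAuth::AccessToken whose
        # consumer points at https://api.smugmug.com) and `set :album_key, ...`
        get '/recent' do
          res = settings.access_token.get(
            "/api/v2/album/#{settings.album_key}!images?count=20",
            'Accept' => 'application/json'
          )
          images = JSON.parse(res.body).dig('Response', 'AlbumImage') || []
          # Wrap each image's caption (pulling duty as alt text) and URL
          # in a ready-to-paste Markdown image link.
          @snippets = images.map { |i| "![#{i['Caption']}](#{i['ThumbnailUrl']})" }
          erb :recent # a view with one textarea per snippet
        end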

    Probably seems small, but I hated all the file shuffling I was doing and realized that some images I posted on my microblog over the past few years weren’t compressed/scaled very well. This gives me a tool for quickly rehabbing blog post images by finding them again in Lightroom, re-uploading them, and grabbing them out of imgup to update blog posts over time.

    Todo

    • Go ahead and pull the trigger on adding oAuth tokens to a config file.
    • Make an images Atom feed.
    • Make a UI to select target albums for upload. Right now the album is hard-coded.

    imgup (now with SmugMug as the image upload backend)

    I finished up the initial SmugMug version of imgup today.

    There are some things I’d like to add, but it’s good enough to stick in Docker and run locally as a drop-in replacement for the Cloudflare edition I’ve been using.

    The basic workflow of the tool is:

    • You make a private (but not secret) album.
    • You upload images to it.
    • During the upload process you can set the title and caption properties. The caption goes on to be the alt-text.
    • Once uploaded, you get a page back with text areas that have basic Markdown and HTML for copying/pasting, like this:
        ![this is alt text, which will show up for people using screen-readers](https://photos.smugmug.com/photos/i-g2xggWq/0/20655d2f/X2/i-g2xggWq-X2.jpg)
    

    Why? Because I get better control of the quality of images I share (SmugMug wants them to look nice at a variety of sizes and compresses/resizes accordingly), I’d like to move to permanent URLs for images in posts (and I think I’m a SmugMug lifer now), and I’d eventually like to end the messy scattering of image copies made just for sharing and then discarded.

    Because SmugMug has a pretty nice ecosystem of plugins, uploaders, and apps, there’s more I mean to do. For instance, it’s possible to just shoot an image straight from Lightroom CC on an iPad to SmugMug. There’s also a good desktop Mac uploader that can snarf up things saved to a specific folder. So if I just pick up the habit of adding title and caption metadata in Lightroom, it’ll show up in anything else I do with imgup. Uploading, ultimately, won’t be something I do much with this tool once I build the parts that fetch recent uploads and produce easy sharing snippets.

    Still on the list of things to do with this:

    • Make an Atom feed to automate dropping pictures into my socials.
    • Make a “Recent Uploads” page that provides pre-made Markdown snippets.
    • Make “post this” buttons for micro.blog, Mastodon, etc.
    • Get rid of the manual step of editing a .env file to save tokens. I could just dump that into a file, look for it, and spare the manual uncommenting of code.

    In the process of debugging oAuth, I ended up building a manual solution to the problem of keeping an oAuth session alive after restarting the app: Once you do a SmugMug auth with the app, there’s a /tokens page that tells you enough to stick your oAuth access token and secret in an environment variable. In the development environment it pulls this stuff from a .env file. You can use the app without doing this at all, at the cost of having to re-auth the app with SmugMug each time you restart it.
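    In code, the trick amounts to rebuilding the access token from the environment at boot. A sketch, with variable names that are mine rather than imgup’s:

        require 'dotenv/load'
        require 'oauth'

        consumer = OAuth::Consumer.new(
          ENV.fetch('SMUGMUG_API_KEY'),
          ENV.fetch('SMUGMUG_API_SECRET'),
          site: 'https://api.smugmug.com'
        )

        # If a previous session stashed the token pair (via the /tokens
        # page), rebuild the session instead of redoing the oAuth dance.
        if ENV['SMUGMUG_ACCESS_TOKEN'] && ENV['SMUGMUG_ACCESS_SECRET']
          access_token = OAuth::AccessToken.new(
            consumer, ENV['SMUGMUG_ACCESS_TOKEN'], ENV['SMUGMUG_ACCESS_SECRET']
          )
        end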

    Previously:

    oAuth, rubocop, a Drupal recollection, and the value of play

    A screenshot of a nicely formatted web page showing neatly indented JSON

    oAuth is sort of a pain. Now that I sort of know how to plumb it in – enough that I’m going to make myself a little repo with a reference application – it has opened up a lot of interesting possibilities.

    The whole experience reminded me of when I was doing Drupal development for a job I took to get into tech and out of pure editorial. We needed to do some work migrating a bunch of content between sites. My predecessor, who’d established the site on a previous version of Drupal, had done a similar task with a certain plugin, so working from his notes I installed it and learned that it wasn’t a clickable GUI thing with a wizard anymore: it was now a content migration “framework,” which meant I was going to spend some time learning its API and writing my own PHP plugin to support our particular needs, or … nothing. Ask for money for the outside guys, I guess, because I’d been hired to get better at PHP, not to already know it. I ended up hobbling through, and I still remember hopping around my office when the damn migration finally just ran on our 800,000+ user database.

    So this weekend I was shopping around for a library to help me get oAuth plumbed in. OmniAuth presented itself right away, and seemed to have a SmugMug “strategy” – their word for “module” or “plugin” – so my eyes lit up. Then reality set in: The strategy was for an older version, and it targeted the old SmugMug API. Okay, fine, I was feeling industrious, so what even was a strategy? I looked at a few and my eyes glazed over: I had a nodding understanding of how all this worked, but not enough to sit down and implement a plugin for my specific problem.

    I think that’s probably okay. I set OmniAuth aside and went with the vanilla Ruby oAuth gem and a reference Sinatra app someone wrote that did a really nice job of creating routes that recreated the oAuth dance. I had found a few other examples, but they were less systematic and harder to peel apart. By the time I was done fiddling with it to get it to work with SmugMug’s particular oAuth endpoints, I felt a lot more confident on how the protocol actually works.
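    For flavor, the whole dance reduces to a pair of routes once the oauth gem is doing the signing. A sketch along the lines of that reference app; the SmugMug endpoint paths here are my best reading of their docs:

        require 'sinatra'
        require 'oauth'

        enable :sessions

        consumer = OAuth::Consumer.new(
          ENV['SMUGMUG_API_KEY'], ENV['SMUGMUG_API_SECRET'],
          site: 'https://api.smugmug.com',
          request_token_path: '/services/oauth/1.0a/getRequestToken',
          authorize_path: '/services/oauth/1.0a/authorize',
          access_token_path: '/services/oauth/1.0a/getAccessToken'
        )

        get '/auth' do
          # Step 1: get a request token and send the user off to approve it
          request_token = consumer.get_request_token(oauth_callback: url('/callback'))
          session[:request_token] = request_token.token
          session[:request_secret] = request_token.secret
          redirect request_token.authorize_url
        end

        get '/callback' do
          # Step 2: trade the approved request token for an access token
          request_token = OAuth::RequestToken.new(
            consumer, session[:request_token], session[:request_secret]
          )
          access_token = request_token.get_access_token(
            oauth_verifier: params[:oauth_verifier]
          )
          # access_token.token / access_token.secret are what a /tokens
          # page would display for stashing in the environment
          "Authorized. Token: #{access_token.token}"
        end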

    So, do I “know oAuth”? No, I do not. Asked to implement an oAuth signin process from scratch, I could not. But I do know, more or less, the vocabulary, the steps in the process, and what it’s doing behind the scenes. Using standard libraries is a repeatable task. Good enough.

    What else?

    I was a little more forward-thinking this time around and picked up dotenv to manage API tokens. I might even be over-using it a little, because it can use the variables you store in it to build other variables. It makes the core app a little less busy, at the expense of having a .env file to consult if something seems to come from nowhere.
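    The over-use in question looks like this: dotenv expands ${...} references inside the .env file itself, so one variable can build another (names here are illustrative):

        # .env
        #   SMUGMUG_API_BASE=https://api.smugmug.com
        #   SMUGMUG_USER_URI=${SMUGMUG_API_BASE}/api/v2/user/mph
        require 'dotenv/load'
        puts ENV['SMUGMUG_USER_URI'] # => https://api.smugmug.com/api/v2/user/mph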

    I have never been a big linter person, so I decided to give rubocop a shot. I appreciate it as an educational tool: there are a lot of things about good Ruby style I never learned, so it was a little alarming at first. Sort of like I’d been made to code in a small room with a large speaker on the wall, fed by a room full of the most earnest Ruby style pedants monitoring me from a hidden camera.

    I ended up turning off a few things it wanted to complain about for … reasons … (like shebangs), but I did learn a few things, and I found that by paying attention and accepting the corrections I no longer guiltily run a beautifier before every commit, because things are at least consistent and tidy. Plus it complains about a few things that are at least potentially problematic.

    What else?

    Not much. I think I’m feeling voluble because juggling oAuth’s needs with what I wanted to accomplish was a pain in the neck, and SmugMug maintains a separate API for uploading that is harder to interact with than the one I’ll need for the rest of the project. I don’t even really need the uploading API, because their own uploaders and tools are great. Cloudflare was simple to figure out, hence alluring, but with my normal tools (e.g. Lightroom) I can also get titles, keywords, exif data, etc. and do more interesting things without having to build out a database of some kind or special UIs to capture that stuff. Anyhow, adding and then managing the complexity of oAuth feels like an accomplishment. I don’t know how many little ideas I’ve bounced off because the API I would have needed to touch had moved on from simpler approaches.

    And I am feeling good because I realized at some point over the past couple of weeks that I am doing all this because it is playing. I used to do a lot of little utility scripts and silly gadgets because it was fun and absorbing, not because it was hugely practical or efficient. It was just playing. I stopped playing for a long while. It feels good to play again.

    I found a lab result on the Kaiser website from 2012 I never opened. I often struggle to remember the dates of the five days I was nearly killed by $1 sushi & could only lie in bed watching all the existing eps of Sons of Anarchy on an iPad, praying for death, so that was nice.

    I do remember that watching, like, three seasons of that show continuously when not explosively voiding or slipping into fevered, tortured sleep did nothing to increase my desire to live. But part of me was, like, “you’re dumb enough to eat $1 sushi, you deserve this, too.”

    Shortcut: upload stuff to Cloudflare Images service

    I made a shortcut that pretty much does what imgup does, except from an iPhone (or a Mac, I guess, if you want to pick an image from Photos instead of sending it via an iPhone/iPad share sheet).

    It just squirts an image up to the Cloudflare Images API, gets back a URL, and copies some pre-made Markdown to your clipboard suitable for pasting somewhere. Pretty simple to add a step to send it to a Drafts draft, etc.
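    Under the hood it’s one multipart POST. Here’s the equivalent sketched in Ruby rather than Shortcuts, to show the shape of it; the v1 endpoint and Bearer auth are from Cloudflare’s docs as I understand them, and the env var names are placeholders:

        require 'net/http'
        require 'json'
        require 'uri'

        uri = URI("https://api.cloudflare.com/client/v4/accounts/#{ENV['CF_ACCOUNT_ID']}/images/v1")
        req = Net::HTTP::Post.new(uri)
        req['Authorization'] = "Bearer #{ENV['CF_IMAGES_TOKEN']}"
        # Multipart upload of the (already JPEG-converted) image
        req.set_form([['file', File.open('photo.jpg')]], 'multipart/form-data')

        res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |h| h.request(req) }
        url = JSON.parse(res.body).dig('result', 'variants')&.first
        puts "![alt text](#{url})" # the pre-made Markdown for the clipboard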

    Cloudflare doesn’t do ProRaw, and I’ve got my phone set to default to that, so the shortcut converts image input into 90% JPEGs (both to make them acceptable as a filetype and to compress them under the Cloudflare file size limit). Generally I share from Lightroom anyhow, which exports however you choose and at whatever quality level.

    I like sticking stuff up in Cloudflare Images because I get some dynamic options for presentation, quality, etc. that I don’t get when I’m sending things to micro.blog. Any automation I build against that API can eventually enjoy some reuse for the ideas I have around an image feed. If I need to abandon ship, it’s a simple API I can use to retrieve things.

    I’m still plugging away at SmugMug automation, though. Cloudflare is fun to play with, doesn’t cost a ton, and is giving me some practice/learning opportunities. Ultimately, though, I’ve had some kind of relationship with SmugMug for a very long time, I trust them, and I’d prefer to use them as the resting place for “seemed worth sharing” content.

    I’m also curious about Adobe’s API.

    What I’m ultimately interested in is whichever of these will let me layer in some basic metadata in the form of descriptions, etc., then retrieve it programmatically for different re-presentation.

    calling imgup

    Today I put the last things into imgup I need to just run it and use it. I cleaned up the result page, added a chance to enter alt text at the beginning, and made it clean up its tmp directory after a successful upload; the most error-prone part of the app now has a cleaner error page, too. I also have it using dotenv for configuration, because that felt cleaner and more forward-looking than the YAML config thing.

    There’s still a whole thing to do on the SmugMug side.

    Screenshot of a Safari page with an image and a textarea with an image url

    I added an image uploader to omgloldev today.

    1. Pick an image
    2. It uploads to Cloudflare’s image service
    3. When it comes back, you get two textareas with Markdown & plain HTML markup.

    It reads exif at upload, so the image file gets the camera & lens model tacked on to the name.
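    One way to do that kind of EXIF read in Ruby, assuming the exifr gem (not necessarily what the app actually uses):

        require 'exifr/jpeg'

        exif = EXIFR::JPEG.new('upload.jpg')
        # Tack camera and lens model onto the base name; lens_model only
        # works if the file's EXIF actually carries that tag.
        parts = ['upload', exif.model, exif.lens_model].compact
        new_name = parts.join('-').tr(' ', '_') + '.jpg'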

    Saturday Coffee Walk (House with an Interesting Fence Edition), 2023-01-14
    Mt. Scott/Arleta
    Portland, OR
    “You Took Your Time,” Mt. Kimbie

    Three wooden monkeys perched on a fence, wet from rain, in the classic "see no evil, hear no evil, speak no evil" pose. Two bronze bears perched on a wooden fence. A bronze peacock, wet in the rain. A bronze monkey paddling a canoe filled with rain water, perched on top of a fence.

    Saturday Woodstock Coffee Walk, 2023-01-14
    Lents, Woodstock, Mt. Scott-Arleta
    Portland, OR
    “Rained the Whole Time,” Shlohmo

    The numbers "571" in white paint on wet pavement. A broken toilet in front of an alley with graffitti on the walls A person in a bright red rain jacket and bright blue backpack looking at the pavement, waiting for the walk sign.

    Insomnia-driven development and omgloldev 1.0

    A screenshot of a web form with HTML source code in it. Two pink buttons at the bottom read "Save" and "Copy"

    Well, it was an insomnia night so I came downstairs to finish up. It took a little work to evade the greediness of the weblog.lol GitHub action, but I think omgloldev works well enough to use.

    I set out wanting to make something that would allow me to do quick, iterative weblog.lol template development I could preview locally, and I wanted to develop my weblog.lol blog using HAML. I also wanted, for purposes of authoring, to get the CSS out of the core template, and I was tired of copying and pasting stuff back and forth with all the attendant risk.

    So this thing will run on my laptop or desktop and let me flip between Safari and editor, editing the core template or CSS.

    • You drop in your weblog.lol template as HAML with a little conditional logic
    • You fill in a few HAML Markdown files to make a nice preview for hacking on
    • You preview it
    • You get a “Raw” page that gives you a beautified plaintext output
      • You can save the template and push the whole thing to git for use with weblog.lol’s GitHub action.
      • You can just copy your template and paste it into weblog.lol’s template form
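    The preview/raw split behind that list is just a couple of Sinatra routes around one HAML template. A sketch of the shape, with template names as stand-ins:

        require 'sinatra'
        require 'haml'

        get '/preview' do
          # Full demo page: the weblog.lol template rendered with the
          # sample Markdown content filled in
          haml :weblog, locals: { preview: true }
        end

        get '/raw' do
          # Same template without the preview scaffolding, dropped into a
          # textarea for copying into weblog.lol's template form
          @raw = haml(:weblog, locals: { preview: false })
          haml :raw
        end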

    That’s about it. It could stand to:

    • commit the new template and push it
    • have nicer notifications that the new template has been saved
    • keep the last n templates just in case
    • code highlight the preview
    • just have a simple code editor built in
    • etc.

    but mainly I just want to use it to polish up my weblog.lol blog and then start using that for a small thing. Still not sleepy. Gonna try the herbal tea thing, but I think we might just roll into the coffee walk and crash later.

    omgloldev - now with a copyable raw template code view

    A screenshot of a web page with a textarea that contains HTML source.

    A little more progress on omgloldev tonight: I put the conditional logic in the haml template such that if you visit /preview, you get a fully rendered demo page. If you visit /raw you get the raw template code in a text area with a button to copy the code to your clipboard.

    The thing I’m going for is a way to develop my weblog.lol blog using HAML and Sinatra niceties (e.g. partials) with a decent working preview, then render the composited templates out into the right location for weblog.lol’s publishing action to pick them up and publish them when I push to the repo.

    It’s not too far off for that purpose now:

    • I edit the haml in Sublime. That’s comfortable and familiar.
    • I run the app to get my preview server.
    • I can visit the raw view and copy the text and paste it into the “production” template, then push the changes up to GitHub to kick off the publishing action.

    There is a bug in the template right now that is causing some problems. I’m not sure if it’s just me hurrying or if I’m running into some HAML peculiarities that are messing with how things are nested. That’s for tomorrow or the weekend.

    Longer term, I want to break out the parts that weblog.lol makes you keep in one file (the CSS, things that naturally belong in a partial) so that I can work on them in discrete editor views. Those things can all just be HAML or CSS partials that get sucked in to make the monolithic HTML that weblog.lol wants.

    Why?

    Part of The January Plan was to start being a little more structured about things.

    I’ve been paying more attention to where my time goes during the day: I’ve got specific things I definitely want to accomplish before 5, things I want to get to on a regular cadence, etc.

    But it has been years since I’ve just played around with any kind of coding, and I could spend my day doing things like this. As it is, I’m bucketing it to my discretionary time and beginning to think about how all the things I like to do need to come together with things that demand more structured time (and eventually synchronization with humans).

    So this is a slightly silly project I could accomplish other ways, but it’s getting me back into the flow of something I like to do, with all the fun, iterative learning and fiddling and tweaking. It feels good to do it, even if it doesn’t mean much.

    Coffee Walk, 2023-01-11
    Springwater Trail, Foster Floodplain
    Lents, Portland, OR
    “Tear Stained Eye,” Son Volt: songwhip.com/son-volt/…

    A highway overpass seen through branches. Graffiti of a skull smoking a blunt is spraypainted in red on the overpass. Tan houses on a gravel driveway, boarded up by orangeish plywood. Purple, pink and orange sunrise through the branches of a dead tree. Soft pink and purple morning sky behind a dead tree.

    Okay. This morning’s “me time” sprint is “build a HAML- and Sinatra-based weblog.lol preview/dev environment to take advantage of GitHub publishing.” Last night’s fiddling about was fun, but “copy, paste, reload, edit, copy, paste, etc.” is only charming for so long.

    Coffee Walk, 2023-01-10 (Regular Edition)
    Foster Road
    Portland, OR

    A meter reader appeared this morning!

    Pink and purple dawn over a bark mulch dealership. A water meter reader in a bright yellow reflective vest and orange hoody takes a reading by a busy street. A bicyclist goes by in front of a building reading "Mt. Scott Bark Mulch." A person in bright yellow and pink running clothes jogs in front of a building reading "Mt. Scott Bark Mulch" with her dog.

    Coffee Walk, 2023-01-10 (“House with the Good Fence” Edition)
    Mt. Scott-Arleta
    Portland, OR

    Al wanted to take “the alley” today so we walked by The House with All the Stuff on the Fence. There’s always a cat on the porch but it has never tried to be friends.

    A wooden tiki carving perched on a fence. A jade elephant and wooden bird perched on a fence. A wooden alpaca and baby alpaca perched on a fence post. A carved wooden bear on a fence.

    Build an OPML file of new stuff in ooh.directory so you don't have to visit site-by-site in a browser

    Detail of a rusted bulldozer tread painted with chipped yellow paint.

    I love that ooh.directory exists: It’s a clean, simple, helpful directory of blogs. The site publishes an RSS feed of all its latest additions, which is very handy.

    This automates the process of getting the feed from each blog into an OPML file you can import into your news reader:

    https://paste.lol/mph/oohpml.rb

    Starting from “what the hell do I even know about OPML?” and “what the hell do I even know about processing XML with Ruby?” over tea this morning, it is the bare minimum I could do to accomplish the following (there’s a condensed sketch after the list):

    • Consume the ooh.directory “Recently added blogs” feed.
    • Check each link in the feed for a feed. Since some CMSes make more than one, I make the very lazy assumption that the first one discovered is the right one. That might result in this thing pulling in comment feeds or something else.
    • Make sure the stuff coming from ooh.directory is contained to its own folder when I import the OPML feed. I was going to add categories, but the feed itself uses hard-coded HTML in a list. I guess I could have Nokogiri’d the first list from the bottom, but again … lazy.
    • Plop an OPML file out into the working directory, ready for import by most RSS readers.
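    A condensed sketch of that loop (the real script is at the paste.lol link above; the feed path here is an assumption, and a real version should XML-escape titles):

        require 'rss'
        require 'open-uri'
        require 'nokogiri'

        # Assumption: the "Recently added blogs" feed lives at this path
        FEED = 'https://ooh.directory/feeds/recent/'

        outlines = RSS::Parser.parse(URI.open(FEED).read).items.filter_map do |item|
          begin
            page = Nokogiri::HTML(URI.open(item.link))
            # Lazy: take the first feed <link> found, comment feeds be damned
            feed = page.at('link[rel=alternate][type="application/rss+xml"], ' \
                           'link[rel=alternate][type="application/atom+xml"]')
            %(<outline text="#{item.title}" type="rss" xmlUrl="#{feed['href']}"/>) if feed
          rescue StandardError
            nil # skip blogs that error out or have no discoverable feed
          end
        end

        # Keep everything in one "ooh" folder so imports stay contained
        File.write('ooh.opml', <<~OPML)
          <?xml version="1.0" encoding="UTF-8"?>
          <opml version="2.0"><body>
          <outline text="ooh">
          #{outlines.join("\n")}
          </outline>
          </body></opml>
        OPML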

    How do you use it?

    Save it to a file, run it with Ruby.

    • Open a terminal. cd to wherever you saved it.
    • Enter the command ruby oohpml.rb

    It’ll drop an OPML file in the directory you ran it in. Most RSS readers seem to understand what to do with these things. It should put the new list of feeds in their own “ooh” directory. If you’re super worried, export your stuff to an OPML file before you import it.

    What’s next?

    Nothing. I pinged the owner of the site asking if he’d just implement OPML, and he told me by-category OPML files are coming at some point, at which point this won’t be so useful.

    So if it’s helpful, great. I felt a brief surge of delight knowing I wouldn’t have to go site-by-site to find feeds, subscribe, etc.

    Why Feedly, and why Feedly + something else

    A person leans on the counter of a carnival booth for a game where you throw darts to win posters. There's a wide array of posters behind them, including Prince, Katy Perry, and Scooby-Doo.

    I like Feedly as an RSS back-end a lot. There are other RSS services that offer keyword filtering, but Feedly goes beyond that.

    I’ve seen people dis Feedly because they don’t like its attempt to popularize RSS as a research tool for work, as opposed to a convenient way to aggregate sources of information for personal interest. Others bristle at its focus on marketing research (even though it is moving beyond that niche).

    When I come at it from the perspective of a former tech journalist and former marketing content lead, its purpose in life becomes clear. Rather than seeing it as a weird way to popularize a niche, personal technology, it’s better to see it as a way to use automation to bring the benefits of a clipping service to people who don’t have the departmental or personal budget to pay for one. It is also happy to make money off people who just love RSS.

    I stopped using it when I went through a period where I was trying to cut down on inputs a little more, and its mobile client was always frustrating and a bit buggy. Lately, though, I’ve been sticking more stuff in my RSS reader, especially as things like ooh.directory come back around.

    Feedly’s filtering is better than anyone else’s, because it goes past keywords. There is some interesting stuff going on behind the scenes, including something that taxonomizes every article that passes through their system. That’s a boon for someone using it as a souped-up clipping service, because the world changes from “this is a list of sources I know about, look for my keywords” to “I’m interested in these topics, bring me anything related to them from across the breadth of the feeds you, service, know about.”

    The people who write most RSS readers tend to treat RSS as a way to save yourself from visiting a bunch of sites every day, or as a way to avoid sites’ bad design and ads. They’re not interested in writing or maintaining a back-end. In the Apple ecosystem, iCloud back-end syncing is the most interesting thing happening in RSS readers, because it keeps the read/unread state of all your feeds in sync. It “frees” users from the RSS back-end services (e.g. Feedly, The Old Reader, Inoreader, etc.) and lets them focus on the age-old RSS use case of quietly hoarding feeds in a reader and maybe sharing OPML files with others.

    One annoying side effect of this approach to RSS is the return of the “blog roll” pattern, which RSS app authors recreate in the form of pre-populated feed lists meant to “help get you up and running with RSS.”

    Like blogrolls, they promote homogeneity, aggregating the safest opinions to have within a certain niche of tech obsessives. If once upon a time nobody ever got fired for buying IBM, nobody will ever go wrong quoting something they read from the pre-populated list of feeds in their RSS reader.

    This is stultifying, and it steers me back to what I like about Feedly:

    If the workflow of the modern, individualist RSS consumer is a sort of hunt-and-gather trudge across the plain that is the Internet, Feedly is a reasonable run at an industrial information age.

    Yes, it has a very safe, very non-controversial list of initial feeds you can use or browse through and pick from. But it also has a pretty big store of feeds you never see, and it is continually operating on them: The articles they contain are analyzed and categorized, providing a secondary stream of content you can dip into that probably transcends your own list, and that curates at the article level, not the feed level. You can follow topics, not individuals or individual entities.

    Is there a risk of some kind of monoculture or slant finding its way into Feedly’s approach? Absolutely. To be truly engaged in a topic is to ultimately really only trust yourself when it comes to assessing and vetting information sources.

    That’s where the whole “Iron Man vs. robots” thing comes in (thank you, Luke Kanies, for the metaphor).

    Feedly offers you a way to curate what it brings you:

    • Was the article properly categorized?
    • Do you want to see this source again?
    • Within this broad category, do you want to see this topic again?

    You’re always free to bring in your own sources, you’re always free to recategorize, but you have a system that augments and supports. You still have to operate it and guide it back.

    There are more prosaic benefits to Feedly’s approach, as well. Because it is constantly taxonomizing the content that passes through it, you can filter at a topical as opposed to keyword level, and that has some nice advantages for getting rid of annoyances. For instance:

    If you follow many mainstream sites with paid staff you can’t unsee the number of sponsored and affiliate link posts they put up. They try to frame it like they’re doing some sort of journalism (“the lowest price we’ve seen”). If you follow the product segment found in a given deal post, though, you also can’t unsee how much of the stuff they’re pushing is stock the manufacturer is probably trying to clear to make way for the next thing, and you definitely can’t unsee the way these deals are only coming from sites with affiliate programs. No, this is not me discovering a conspiracy. This is me pointing out that deal posts are self-serving, presented as “research,” and inherently limited. They’re noise.

    Feedly has a conception of “Deal Posts” and categorizes them as such. You can eliminate most of them with a single generic filter instead of painstakingly gathering the textual characteristics of each kind of deal post on each site you follow. Feedly’s not perfect, but I’ll take an 80 or 90 percent success rate and dial in a few outliers over trying to build a bullet-proof keyword list. That’s very useful automation.

    The Client Problem

    So, earlier I threw a little shade at RSS reader developers. It’s true. Something like Feedly needs back-end infrastructure and people working on the problem of automated taxonomizing. The consumer RSS reader market doesn’t support that on a $5 app store purchase, so there’s no realistic way to move past the sole-proprietor model of RSS curation/consumption.

    As pure reading tools, though, the clients are pretty good! Plenty of ways to save and share content, flexibility on how you read the full article, simple ways to quickly import a feed into the reader while you’re out browsing, and (some, limited) filtering, at least on keywords.

    Feedly, on the other hand, does not have a good client. The iOS client is buggy and the web client doesn’t feel very clean. There are some weird language things going on because Feedly is trying to turn streams of information composed of RSS feeds and other sources into a uniform, consistent river.

    At the same time, you can get Feedly’s output mediated through a good reader. Personally, I like Reeder. It’s clean, pleasant, (Apple) cross-platform, has its own built-in read-it-later service (a good use of iCloud back-end syncing), and generally stays out of your way. Like other Apple readers it syncs feeds on iCloud if you wish, but it can also talk to many other services, including Feedly. Feedly and Reeder may represent the harmonic convergence of front-end and back-end.

    Make an omg.lol pURL v2: Drafts native, no dialog

    Make an omg.lol pURL v2: No need for Shortcuts, uses name|url.

    The Draft is here (unlisted … I’m shy).

    Operator manual:

    1. Install the action

    2. Make a name|URL pair, e.g.

    • google|https://google.com
    • microblog|https://micro.blog
    • omg|https://omg.lol
    3. Select the text.

    4. Run the action.

    Your text will be replaced by the new URL, e.g. https://mph.omg.lol/microblog.
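    In Ruby terms, the whole action boils down to something like this; the omg.lol API path and payload shape are my reading of their docs rather than pulled from the Drafts action, and the address and env var are placeholders:

        require 'net/http'
        require 'json'
        require 'uri'

        def make_purl(pair, address: 'mph', api_key: ENV['OMG_API_KEY'])
          name, url = pair.split('|', 2)
          uri = URI("https://api.omg.lol/address/#{address}/purl")
          req = Net::HTTP::Post.new(uri)
          req['Authorization'] = "Bearer #{api_key}"
          req['Content-Type'] = 'application/json'
          req.body = { name: name, url: url }.to_json
          Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |h| h.request(req) }
          "https://#{address}.omg.lol/#{name}" # what replaces the selected text
        end

        puts make_purl('microblog|https://micro.blog')
        # => https://mph.omg.lol/microblog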

    reMarkable v3 arrives and I have impressions and questions

    Screenshot of a terminal session showing a login to a reMarkable device with rainbow ansi art that reads "ZERO SUGAR"

    My reMarkable finally got the v3 update and, a day and some change later, the desktop client realized it had all the new features.

    Most practical quality of life thing: You can do more notebook and note management in the desktop app. You can make new notebooks, move things around, add new pages, etc.

    Most interesting thing I’m not sure how I’ll use: You can type notes into your notebooks on the desktop app. The ability to do that with the mobile app is coming.

    One thing I love about the idea is that I could conceivably leave my reMarkable at home or not have it on hand but still open a notebook on a laptop or (eventually, soon?) a mobile device and start a note. Suddenly you’re free of the worst paper-like characteristic of a reMarkable, which is that its coordinates in time and space have to match yours to create new content on one. Now you just have to be near a device with a client app.

    Interesting – or is it? – to note that the release notes want you to think about this as a way to add structuring text (headings), not body text.

    Thing I’m least sure about (but I’m willing to see how it goes, because it could be great): Endless pages. reMarkable has spent all this time committed to the bit that it is just like paper, including finite page sizes, meaning that when you got to the bottom of the screen you had to make a new page. That made for some very sprawling notebooks and tedious paging around to get to stuff. I much prefer the idea of having one page per idea or logical division in a notebook. For instance, I’m sketching out an image feed tool. It has:

    • Concept
    • Requirements
    • Implementation Ideas

    It makes more sense to me to have each of those areas on its own page, at whatever length.

    When I think about another kind of note-taking I do, meetings, I think I’d much prefer to open a new page for a given meeting, write the topic and date using the fat marker tool, then keep the notes on a single page. That makes for a much less cluttered, more easily scanned notebook.

    Whether I am going to like this long term has a lot to do with how fluidly scrolling on these endless pages works. People on Reddit are complaining about the whole thing, but I can’t sort out how much is bad implementation and how much is people who hate change.

    Anyhow, glad to have it all updated. Part of Phase 1 involves getting back into my morning pages routine, which I’d like to try on the reMarkable again.
