Notes

This is my microblog, containing short informal entries. See my blog for longer entries. An Atom feed contains the full text of all my notes. If that has any problems, I also have a legacy RSS feed.

Timestamp format: YYYY-MM-DD HH:MM, as per RFC 3339. Sorted newest to oldest.

  1. Posted

    The core elements of a people-focused (as opposed to a community-focused) social network are subscribing to people for content and interacting with their content. Atom (and RSS) feeds already provide “subscription” functionality; what if we went further?

    Atom feeds can serve as the foundation for distributed social media. The OStatus protocol suite describes how Salmon, ActivityStreams, and threading extensions can turn an Atom feed into a “social media feed” that people can interact with. Throw in WebSub for real-time push-based updates. This OStatus + WebSub system was the precursor to the current ActivityPub-based Fediverse.
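
    To illustrate, a single entry in such a feed could look something like the sketch below. The namespaces follow the Atom threading extension (RFC 4685) and the Activity Streams 1.0 Atom conventions; the URLs and content are placeholders.

        <entry xmlns="http://www.w3.org/2005/Atom"
               xmlns:activity="http://activitystreams.org/spec/1.0/"
               xmlns:thr="http://purl.org/syndication/thread/1.0">
          <id>https://example.com/notes/1/</id>
          <title>Example note</title>
          <updated>2022-07-01T12:00:00Z</updated>
          <link rel="alternate" type="text/html" href="https://example.com/notes/1/"/>
          <!-- ActivityStreams: this entry is a "note" being "posted" -->
          <activity:object-type>http://activitystreams.org/schema/1.0/note</activity:object-type>
          <activity:verb>http://activitystreams.org/schema/1.0/post</activity:verb>
          <!-- Threading extension: marks the entry as a reply to another post -->
          <thr:in-reply-to ref="https://example.net/notes/42/"
                           href="https://example.net/notes/42/"/>
          <content type="html">&lt;p&gt;Hello, world.&lt;/p&gt;</content>
        </entry>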

    The IndieWeb has similar concepts. The IndieWeb community uses microformats for metadata, including the h-feed feed format. It also uses Webmentions for interaction between sites.
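
    A minimal h-entry might look like this sketch (the class names are standard microformats2; the URLs are placeholders):

        <article class="h-entry">
          <a class="u-url" href="https://example.com/notes/1/">
            <time class="dt-published" datetime="2022-07-01T12:00:00Z">2022-07-01</time>
          </a>
          <p>In reply to:
            <a class="u-in-reply-to" href="https://example.net/some-post/">another post</a></p>
          <div class="e-content"><p>Hello, world.</p></div>
          <p>by <a class="p-author h-card" href="https://example.com/">Author Name</a></p>
        </article>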

    Just out of curiosity, I implemented everything except for the Salmon protocol and WebSub. I prefer Webmentions to Salmon, though extensions to the former seem to overlap with the latter. I’ve tried and failed to get WebSub working in the past; I might have to run my own hub (perhaps using the websub-server Go package).

    The best part of this approach is the simplicity. Besides a Webmention receiver and a WebSub hub, all you need is a static server to deliver markup files. A separate program on your machine (i.e. not necessarily your server) can ping your WebSub hub and send Webmentions; I happen to like the command-line program Pushl. Once all the pieces come together, you start to wonder why we let social media companies flourish instead of lowering the barrier to join something like the IndieWeb. This is what the Web is made for.

    I wrote more about this site’s social features in a section of the site design page.

  2. Posted

    Many users who need a significant degree of privacy will also be excluded, as JavaScript is a major fingerprinting vector. Users of the Tor Browser are encouraged to stick to the “Safest” security level. That security level disables dangerous features such as:

    • Just-in-time compilation
    • JavaScript
    • SVG
    • MathML
    • Graphite font rendering
    • automatic media playback

    Even if it were purely a choice in users’ hands, I’d still feel inclined to respect it. Of course, accommodating needs should come before accommodating wants; that doesn’t mean we should ignore the latter.

    Personally, I’d rather treat any features that disadvantage a marginalized group as a last resort. I prefer selectively using <details> as it was intended—as a disclosure widget—and would rather come up with other creative alternatives to accordion patterns. Only when there’s no other option would I try a progressively-enhanced JS-enabled option. I’m actually a little ambivalent about <details> since I try to support alternative browser engines (beyond Blink, Gecko, and WebKit). Out of all the independent engines I’ve tried, the only one that supports <details> seems to be Servo.
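
    For reference, the intended disclosure-widget use looks something like this (the contents are illustrative):

        <details>
          <summary>Full error log</summary>
          <pre>TypeError: assignment to constant variable</pre>
        </details>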

    JavaScript, CSS, and—where sensible—images are optional enhancements to pages. For “apps”, progressive enhancement still applies: something informative (e.g. a skeleton with an error message explaining why JS is required) should be shown and overridden with JS.
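
    A rough sketch of that pattern (the element ID, wording, and script are made up for illustration):

        <div id="app">
          <p>This view requires JavaScript, which appears to be unavailable or
            disabled. The rest of the page still works without it.</p>
        </div>
        <script type="module">
          // Runs only when scripting is available: swap the fallback for the real UI.
          const root = document.getElementById("app");
          root.textContent = "";
          // …mount the interactive app into `root` here…
        </script>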

  3. Posted

    Mullvad’s recent audit by Assured AB was a bit concerning to me. Fail2ban and user-writable scripts running as root are not the sort of thing I’d expect in a service whose only job is to provide a secure relay.

    Avoiding and guarding root should be Sysadmin 101 material.

    I recommend any amateur Linux admins read audit reports like this. While some low-priority recommendations are a bit cargo-cultish, most advice is pretty solid. Frankly, much of this is the sort of thing a good admin should catch well before a proper audit.

  4. Posted , updated

    Following the recent SCOTUS ruling, many have been trying to publish resources to help people find reproductive healthcare. They often wish to do this anonymously, to avoid doxxing.

    There’s no shortage of guides on how to stay anonymous online. I recommend using the Tor Browser in a disposable Whonix VM. The Whonix Wiki has a good guide to anonymous publishing.

    Few guides cover stylometric fingerprinting, one of the most common de-anonymization techniques, used by adversaries ranging from trolls to law enforcement.

    Common advice is to use offline machine translation to translate works to and from another language. Argos Translate and Marian are two options that come to mind.

    shows that machine translation alone isn’t nearly as strong a method as manual approaches: obfuscation (hiding your writing style) or imitation (mimicking another author). These approaches have excellent success rates, even among amateur writers. The aforementioned Whonix wiki page lists common stylometric fingerprinting vectors for manual approaches to address.

    Limiting unusual vocabulary and sentence structure makes for a good start. Using a comprehensive and highly opinionated style guide should also help. The Economist has a good one that was specifically written to make all authors sound the same: , 12th edition (application/pdf).

    For any inexperienced writers: opinionated offline grammar checkers such as LanguageTool and RedPen may supplement a manual approach by normalizing any distinguishing “errors” in your language, but nothing beats a human editor.

    Further reading: , by OrphAnalytics SA.

  5. Posted

    I’m thinking about coining a term to reflect a non-toxic alternative to “search engine optimization” (SEO). Working name: “agent optimization”.

    MDN has SEO guidelines because people often find MDN articles through general-purpose search engines. I noticed that a subset of their advice is directly beneficial to readers.

    For example: imagine two pages have almost the same content (e.g. pages on the width and height CSS properties). Nearly-identical pages confuse search engines. To avoid duplicate content, authors are encouraged to differentiate the pages by using different examples. This is actually great for readers: when a reader navigates from one page to the next, it’d be unhelpful to present the same example again. Perhaps the width example could describe adaptation to a narrow viewport, while the height example could describe the trick for handling image aspect ratios with height: auto.
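
    That aspect-ratio trick, as a quick sketch (the dimensions and path are placeholders):

        <!-- The width/height attributes reserve space before the image loads;
             the CSS then scales it down while height: auto keeps the ratio. -->
        <img src="/images/example.jpg" alt="An example photo"
             width="640" height="480"
             style="max-width: 100%; height: auto;"/>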

    Lots of SEO is actually just basic design and accessibility guidelines: use good link names, remember alt-text, be mobile-friendly, use headings, don’t require tons of JS to display content, prefer semantic HTML, etc. Stuff like structured data also helps improve reader-mode implementations and makes content-blocking easier.

    SEO gets toxic when it veers into copywriting guidelines, tricks like adding the current year to your heading (“Best products to buy in CURRENT_YEAR”), backlink-building, etc. Much of this does include so-called “white-hat SEO”. I think that I should distinguish “agent optimization” from “search engine optimization” by making it about accommodating the tools people use to find information, rather than about ranking high in search results or getting clicks. Once I finish my current WIP blog post (it’s about how to make privacy recommendations for different audiences), I think I’ll write about this. In the meantime, any ideas you have are welcome; please share them.

  6. Posted

    Welcome to the IndieWeb, Miriam!

    I’ve struggled to categorize what on my site is a “post” worth syndicating vs a “page” vs ???

    I had this struggle too, and solved it with per-section and combined feeds. My combined feed contains every page on my site that includes a publication date in its metadata; my sections for articles and notes have their own respective feeds.

    If I want live updates (this is a static site) there’s still more to learn.

    Remember that pretty much all IndieWeb features are optional. You only have to implement what interests you. You can get really far when it comes to bringing a static site to the IndieWeb, so I’d suggest against jumping onto a dynamic site immediately.

    You can also push live updates using WebSub. Your main site can still be static, but you can ping a (first- or third-party) WebSub hub to push content as soon as you update your site. I plan on using this approach soon.

    I like the “static site with ancillary services” model: it keeps the core fast and simple, and makes extra modules easy to add and replace.

  7. Posted

    Preact is better than React for most use cases, IMO. I think its small size makes it really powerful when you combine it with something like partial rehydration: a view loads instantly, and the “interactivity” atop the static components takes less time to load.

    When the delay between loading the static components and interactivity is small, the app feels fast. When the delay is long, it would have been better to just block the rendering in the first place. Small frameworks like Preact and Svelte shine here.

    People used to think that shrinking payload sizes would become less of an issue as infrastructure improved, but the opposite happened with hydration-related technologies. Heh.

    I still think vanilla JS is the least bad option for a good chunk of web apps.

  8. Posted

    What if Firefox and Chromium placed a year-long moratorium on all major new browser features unrelated to security, accessibility, and internationalization? Effort not spent on those initiatives could be redirected towards bugfixes.

    Defining the word “major” might be hard but I think it’s an interesting idea.

    I’m not too worried about including Safari since it could spend those months catching up.

    Inspired by a similar article by :

  9. Posted , updated

    I want to agree, with one caveat: if you run a government or healthcare website, you might still want to test with IE-mode to make sure critical functionality is at least usable. There are still companies that require you to use their sites in Internet Explorer with compatibility mode (which emulates either IE 7 or IE 5, depending on some properties of the markup/headers); QuickBooks Desktop 2022 and PEACH are two examples. And as long as some software requires IE and there exist people who want to use one browser for everything, there will be people who set it as their default browser.

    You’ll probably need to support it if you have a log-in page that can be summoned when someone uses you as an OAuth provider; lots of software uses IE libraries to render the log-in window, and those aren’t going anywhere. Internet Explorer isn’t in Windows 11, but the .dll files for this are.

    IE is still supported for LTSC and government editions of Windows, and on Windows 7 ESU.

    I’m not really concerned with IE support, but I test with IE-mode in Edge sometimes. I look up any breakages to see whether they are known to be non-standard IE quirks. If they aren’t known quirks, I try to land a standards-compliant fix. The main thing I look for isn’t nonstandard behavior, but missing features.

    In other words, I test in IE to make sure my site is robust and uses progressive enhancement, not because I actually want it to work perfectly in IE. The only IE problems on my site are SVG rendering (a perfectly compliant SVG shrunk to a smaller size in HTML retains its original size in IE, but gets cropped with hidden overflow) and a lack of support for <details>. It turns out that basically every independent, non-mainstream, currently-active browser engine lacks <details> support except for Servo, so I might have to start looking into fallback approaches.

    Update: apparently Microsoft Outlook renders HTML emails and the entries of RSS/Atom feeds using Microsoft Word’s HTML renderer. That renderer is based on Internet Explorer’s MSHTML (Trident). So I guess IE lives on, in a way.

  10. Posted

    Armchair speculation: how can we learn from Reddit, Lemmy, “Hacker” “News”, et al?

    1. A vote should be part of a reply with at least N words. N could be increased by mods and admins. Instances could federate votes conditionally based on the length or activity of a comment. Word counts can be problematic; I don’t know a better alternative (maybe clause-count?). Flagging doesn’t need a minimum word count.

    2. Forums shouldn’t host their own top-level posts and comments. Those should be links from authors’ own websites with microformats (think IndieWeb). The forum should be Webmention-enabled.

    3. Larger communities should have ephemeral chatrooms (“ephemeral” in that public history has a retention limit if it exists at all) to incubate posts. Authors (yes, original authors) could share their work and collect feedback/improve it before it’s “ready”. They could then post with increased visibility.

    4. One reason to flag a top-level comment could be “didn’t look at the post”. I say “look at” instead of “read” because certain posts are huge essays that could take hours to read. Top-level commenters should at least be expected to skim.

    These qualities will make a forum less active, since the quality of content will be higher and some validation- and attention-seeking will be filtered out. Low activity means higher visibility for good content. Forums could “get off the ground” by starting invite-only, gradually enabling these rules one by one before opening to the public.

    (psst: I might be working on “a thing”).

  11. Posted

    You might want to provision namespace-based isolation for your browsers. But that could throw a wrench into Flatpak-based distribution.

    When distributing browsers through Flatpak, things get a bit…weird. Nesting sandboxes in Flatpak doesn’t really work, since Flatpak forbids access to user namespaces.

    For Chromium, the packagers worked around this by patching Chromium’s zygote process (the process that provisions sandboxes) to call a Flatpak supervisor to create additional sandboxes. This is called the “spawn strategy”. Chromium uses a two-layer sandbox: layer-2 is a syscall allow-list and layer-1 is everything else. The only problem is that Flatpak’s layer-1 sandboxes are more permissive than Chromium’s native layer-1 sandboxes, so the Chromium Flatpak has weaker sandboxing.

    Firefox’s sandboxing isn’t entirely dependent on user namespaces, but it is weakened a bit without them; there’s no “spawn strategy” implemented at the moment. More info is on Bugzilla.

    Now, whether this matters is something I can’t decide for you. My personal opinion is that Flatpak serves as a tool to package, deliver, and sandbox native applications; Web browsers are tools that deliver and sandbox Web applications. Distributing a browser through Flatpak is like distributing Flatpak itself through Flatpak. Web browsers are an alternative to Flatpak; they have their own sandboxing and updating mechanisms.

  12. Posted

    xml:space would make whitespace issues easier to handle and simplify my current solution, but not everything supports XML namespaces; I want to keep this polyglot HTML5 and XHTML5 markup for now.

    Eventually I’ll offer certain enhancements to the XHTML version (add index.xhtml to the URLs or remove text/html from your Accept header but include application/xhtml+xml) and I’ve already made my Atom feeds a bit simpler, but there’s a lot to do before then.

    I’ve added ActivityStreams, OStatus, and friends to my Atom feeds; maybe I could add them to my XHTML pages using namespaces, if RDFa doesn’t work out. First I wanna try my hand at writing an ontology for webrings so people can mark up their webrings with RDFa/microdata. That’ll make it easy to do things like check for broken webrings or build cool visualizations of overlapping rings.

    I should also try my hand at XSLT for the Atom feeds to get a baseline browser preview.
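
    A minimal sketch of what that stylesheet could look like, assuming XSLT 1.0 and feed links with an explicit rel="alternate" (the file name and markup are illustrative, not something I’ve shipped). The feed would reference it with an <?xml-stylesheet type="text/xsl" href="/feed.xsl"?> processing instruction:

        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:atom="http://www.w3.org/2005/Atom">
          <xsl:output method="html" encoding="UTF-8"/>
          <!-- Render the feed as a plain HTML page listing its entries. -->
          <xsl:template match="/atom:feed">
            <html>
              <head><title><xsl:value-of select="atom:title"/></title></head>
              <body>
                <h1><xsl:value-of select="atom:title"/></h1>
                <ul>
                  <xsl:for-each select="atom:entry">
                    <li>
                      <a href="{atom:link[@rel='alternate']/@href}">
                        <xsl:value-of select="atom:title"/>
                      </a>
                    </li>
                  </xsl:for-each>
                </ul>
              </body>
            </html>
          </xsl:template>
        </xsl:stylesheet>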

  13. Posted , updated

    ncurses is fine for certain specific purposes, like querying terminal characteristics.

    I think if you’re building a TUI it should generally be one of multiple options that share a library/backend or it should be something with many alternatives that are at least equivalent, given the poor accessibility of TUIs in general. If one of those things is true, then it should be fine to use ncurses for the TUI.

    There’s a Python framework for building TUIs and CLI shells called Textual (from the Textualize team). They’re working on a web target which they claim can get much better accessibility.

  14. Posted

    My current approach to “responsiveness” is to increase the font sizes on screens. User interfaces should generally have smaller text while article bodies should prioritize readability and be larger. Default stylesheets take a (literal) one-size-fits-all approach, trying to optimize for both. But I only use percentages for font sizes, to respect user preferences.

    I also increase the default font size to make it easier to increase tap target sizes to Google’s recommended 48x48 px sizes, without overlapping other targets in a 56x56 px radius. A size of 108.75% was the minimum necessary to achieve my goals in all combinations of major browsers and their default stylesheets across operating systems. Since scrollbars and screen edges are often tap targets, I also set minimum margin sizes.

    I reduce the font size to the default 100% and eliminate the extra margins on extremely narrow screens (think KaiOS devices and smartwatches) where the screen is too small to fit several touch-friendly elements. I do the same on print.

    My rationale:

    • If you use a smart feature phone, then you navigate with a keypad. The interface does not need to be touch-friendly.

    • If you use a smartwatch (like the Apple Watch), it should auto-enable reading mode for long-form text, so compromising a little on readability might be worth it to improve navigation.

    • If you want to read in a sidebar then you are likely reading the article alongside some other text. My page’s text should match most other content instead of “sticking out”.

    • If you print it out then font sizes are already optimized for readability rather than for user interfaces.
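
    Putting the preceding paragraphs together, a rough CSS sketch of the approach (the breakpoint, spacing, and width values are illustrative placeholders, not my exact numbers):

        :root {
          font-size: 108.75%; /* a percentage, so user font-size preferences still scale */
        }
        body {
          margin: 0 auto;
          max-width: 45em;
          padding: 0 0.75em; /* keeps tap targets away from screen edges and scrollbars */
        }
        /* Extremely narrow screens (feature phones, watches) and print:
           fall back to the default size and drop the extra spacing. */
        @media (max-width: 20em), print {
          :root { font-size: 100%; }
          body { padding: 0; }
        }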

  15. Posted

    People have mostly moved on from DTDs. In HTML-land, they use the Living Standard; in XML-land, they stick to boring existing parsing rules and use XML namespaces. In both, they use RDF vocabularies to describe RDF-based structured data.

    I’m curious as to why you’re interested in creating a new DTD. I think that unless you plan on creating a new SGML-based language, you’re better served by namespaces.

  16. Posted

    This is a good article on the difference between SC 1.4.4 and 1.4.10. However, I don’t think these criteria go far enough:

    Even narrower viewports exist. KaiOS devices tend to have 240 px viewports; smartwatches tend to have half the width of a phone while emulating a phone width (Apple Watches can be instructed not to do this with a proprietary meta tag). Of course, making sites watch-compatible is a stretch, but support for feature phones running KaiOS should be reasonable. I wrote about this more in Best practices for inclusive textual websites.
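
    To my knowledge, that proprietary opt-out is a single meta element; treat this as a hedged sketch of Apple’s mechanism rather than a guarantee of current behavior:

        <!-- Tells watchOS browsers not to emulate a phone-sized viewport. -->
        <meta name="disabled-adaptations" content="watch"/>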

    Another thing worth remembering is that users can change default fonts or override site-set fonts. Don’t just test with default default fonts; test with something wider. These criteria should specify some font metrics or (royalty-free) representative wide fonts to use for testing.

  17. Posted , updated

    I think this post is correct, strictly speaking. I also feel like it misses the point of tracker blocking (or at least, what I think the point should be). Many people have a relatively casual threat model when they do their typical browsing.

    Lots of people are less concerned with avoiding identification than they are with reducing the amount of data collected about them. For example, if they sign into an account that’s linked to their real identity, they fully expect to be identified by the site. However, if the site contains Facebook and Google trackers, they would rather not run those because they harm the user rather than help.

    To say that this is not a perfect solution would be an understatement. But when it comes to meeting the goals of such a user, content blocking isn’t useless. It straddles the gray area between quality-of-life improvements (blocking content makes pages less unpleasant and heavy) and slight unobtrusive privacy improvements (the majority of sites nowadays still outsource most of their tracking to well-known third parties).

    The ideal approach is obviously to use something like the Tor Browser’s “Safest” mode (or perhaps the “safer” mode in a Whonix VM), which doesn’t rely on badness enumeration. On that I agree. I personally switch between the Tor Browser for anonymous browsing (anonymity), Chromium for Web apps (security), and Firefox for general non-anonymous browsing (convenience and quality-of-life). Blocking trackers would not make sense for browsing anonymously, but is a slight improvement for non-anonymous browsing. Badness enumeration is of course counterproductive when trying to be fully anonymous.

    In practice, content blocking reduces someone’s online footprint. It doesn’t prevent the footprint from being created in the first place, and it can be circumvented. But footprint reduction is all that many are interested in, especially when it also offers unrelated perks like fewer ads and lighter pages.

  18. Posted , updated

    Being enrolled in a study should require prior informed consent. Terms of the data collection, including what data can be collected and how that data will be used, must be presented to all participants in language they can understand. Only then can they provide informed consent.

    Harvesting data without permission is just exploitation. Software improvements and user engagement are not more important than basic respect for user agency.

    Moreover, not everyone is like you. People who do have reason to care about data collection should not have their critical needs outweighed for the mere convenience of the majority. This type of rhetoric is often used to dismiss accessibility concerns, which is why we have to turn to legislation.

  19. Posted , updated

    I was referring to crawlers that build indexes for search engines to use. DuckDuckGo does have a crawler—DuckDuckBot—but it’s only used for fetching favicons and scraping certain sites for infoboxes (“instant answers”, the fancy widgets next to/above the classic link results).

    DuckDuckGo and other engines that use Bing’s commercial API have contractual arrangements that typically include a clause that says something like “don’t you dare change our results, we don’t want to create a competitor to Bing that has better results than us”. Very few companies manage to negotiate an exception; DuckDuckGo is not one of those companies, to my knowledge.

    So to answer your question: it’s irrelevant. “html.duckduckgo.com” is a JS-free front-end to DuckDuckGo’s backend, and mostly serves as a proxy to Bing results.

    For the record, Google isn’t any different when it comes to their API. That’s why Ixquick shut down and pivoted to Startpage; Google wasn’t happy with Ixquick integrating multiple sources.

    More info on search engines.

  20. Posted , updated

    The only engines I know of that run JavaScript are Google, Bing, and maybe Petal. None of the other engines in my list appear to support it. I don’t even think Yandex does.

    It’s common practice for sites to give a JavaScript-lite version to search engines, though if the content differs heavily you run the risk of hitting a manual action. I’d imagine that search-crawler-exclusive editions would become the norm if crawlers stopped handling JavaScript.

    Marginalia actually seems to penalize its use.

    Update: Yep (formerly FairSearch) also seems to evaluate JavaScript.

  21. Posted , updated

    Pale Moon’s inception predates Firefox 57 by many years; before it gained notoriety following the removal of XUL/XPCOM, it was popular among people who didn’t like Electrolysis.

    I hate that Pale Moon is so behind on security because it also has nice stuff that Mozilla axed. Some things were axed for good reason, like extensions with the ability to alter browser functionality. Others were axed without good reason, like built-in RSS/Atom support.

    WebExtensions that fill in missing functionality often require content injection, which is problematic for a variety of reasons. To name a few: visit a page with a sandbox CSP directive that lacks allow-same-origin or allow-scripts and see how well the addon works; save a page and notice the extra scripts or iframes it picked up; or watch addon scripts activate too late when an underpowered machine is under load. Content injection is better than giving addons access to browser functionality, but nothing beats having features in the actual browser.

    I still wouldn’t recommend it due to extremely weak sandboxing and a naive approach to security. The devs respond to sandboxing queries by saying it’s secure because “it separates the content and application”, which tells you how little they care or understand; untrusted content needs isolation not just from the browser but from other untrusted content. Given the scope of a browser, even Firefox isn’t where it should be (despite commendable progress on Fission, RLBox, and the utility process overhaul), let alone caught up to the mitigations in Chromium’s Blink or WebKit’s JavaScriptCore. But I digress.

    It’d be totally fine if they described their browser as a complement to a more airtight one, or as a dev tool (it’s honestly a great dev tool given some addons, I’ll happily concede that). But when you describe yourself as a replacement for other browsers without the security architecture to back that up, you’re being irresponsible.

  22. Posted

    Commodification means something else; I’m assuming you’re referring to “commoditize”, as in “commoditize your complement”. In this context, though, the words have some really interesting overlap, which is why I brought it up. See by .

    We are first commodified by being made a complement to a product, then gradually commoditized as complements ideally are.

  23. Posted

    I’m in partial agreement with this take.

    On one hand, expectations change with time. Most people outside my bubble look at interfaces I like using and say they look “ugly” and that they’re “weird” (their words); they wouldn’t have said that when I was younger.

    On the other hand, some “annoyances” are actually removable barriers. Accessibility comes to mind. If you take software that does not work with assistive technologies (ATs) and fix it, AT-users might move on to the next accessibility issue. But they’ll be markedly happier than before, when they just couldn’t use it.

    Similar examples include localization and compatibility.

    Man, positive takes like this feel really out of character for me.

  24. Posted , updated

    I’ve been planning on writing a big “meta” post explaining how this site is built, but first I want to reach a few milestones, most of which are IndieWeb-related. Here’s what I’ve already done:

    • Microformats
    • More semantic markup: Creative Commons and Schema.org vocabularies
    • Web sign-in (using an IndieAuth service)
    • Multiple types of posts
    • Sending Webmentions
    • Displaying Webmentions
    • User-sendable Webmentions with a form
    • RSS feeds for posts and notes
    • Atom feed for posts and notes with ActivityStreams metadata
    • Automatic POSSE of bookmarks to TinyGem (bookmarking service)

    However, I still have a ways to go. Here’s what I plan on adding:

    • Add some semantic markup to list online “friends”, probably using FOAF ontology
    • Preview Webmention entry contents
    • Related to previous point: create a blogroll
    • Automatic POSSE of notes to Fedi (it’s mostly manual right now)
    • Combined Atom feed
    • WebSub
    • Permalinks for bookmarks
    • Atom feed for bookmarks
    • Automatic POSSE of bookmarks to Fedi
    • Time-period based pagination and navigation on notes/posts page.

    Once I finish the above, I’ll be ready for a “meta” post. Some more tasks after that:

    • Run my own WebSub hub.
    • Add my own front-end for search results (my first non-static content).
    • Pingback support
    • Create a webring. What I’ve learned while doing the above will inform inclusion criteria.

  25. Posted

    Some of my posts are long. My longest post is almost 20k words as of right now (60-80 pages printed out), and will get longer as I update it.

    Length is an imperfect yet useful measure of the amount of detail one can expect. There are many “lists of practices” on the Web about web design. By communicating that mine would take an hour and a half to read, I communicate that my list has some more thought put into it.

    This also signals to some people that they should probably bookmark the article for later so they can read it properly, or helps them prioritize shorter articles first.

    Someone who may end up reading an entire post after going through a paragraph or two may be scared away if they know it’ll take them 30 minutes to go through it.

    I think these people would be scared off regardless, simply by seeing how much they have to scroll through. They might also feel overwhelmed by the number of entries in the table of contents. That’s why I include a “TLDR” or explicitly recommend skipping from the introduction straight to the conclusion for readers in a hurry.

  26. Posted

    One thing I don’t like is faux corporate support for pride month. Think rainbow branding for large organizations that don’t actually do much to improve the systems they benefit from.

    A good smoke test to see if rainbow-flag/BLM-repping organizations actually give a shit: test their website’s accessibility. If they ignore disabled users because they’re a minority with different needs, well, that probably speaks volumes regarding their attitudes towards any minority. Actions speak louder than words.

    They don’t care about minorities; they’re only in it for the branding. When a soulless organization uses your symbols, it remains soulless.

    Soulless organizations don’t have good or evil intent. put this best in his talk (starts at ).

  27. Posted

    I just made a massive internal overhaul of my website, seirdy.one. I prettified all the URLs to remove the trailing “.html” suffixes. I added re-directs from the old locations to the new ones, so your links won’t break.

    The reason I did this was because I plan on making alternative content types share the same index URL, except for the suffix. So seirdy.one/notes/ could have an index.html, index.xml (RSS), index.md, index.gmi…you get the picture.

    I removed the RSS feed from my Gemini capsule in favor of just supporting gmisub.

    I also added Atom feeds to keep my existing RSS feed company:

    Those make for the first step towards supporting WebSub. I’ll have to look into ActivityStreams documentation to figure out which markup to add to my Atom feeds first. I’ll probably add a curl command to my CI job to get a WebSub endpoint to re-read my Atom feeds whenever I push a change.

    I need to figure out how to get Hugo to do a “combined” feed for everything.

  28. Posted

    I decided my site had enough content to warrant a search form, so I added one to the footer. I kanged the CSS from gov.uk; I liked how their search box was adaptive yet compatible with legacy browsers. This is a static site so I made it point to Search My Site, which regularly crawls my whole website.
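
    The form boils down to something like the sketch below. The action URL and query parameter are assumptions for illustration; they may not match what Search My Site or my footer actually use.

        <form role="search" method="get"
              action="https://searchmysite.net/search/">
          <label for="site-search">Search this site</label>
          <!-- "q" is an assumed name for the query parameter. -->
          <input type="search" id="site-search" name="q"/>
          <button type="submit">Search</button>
        </form>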

    Eventually I’ll add a dynamic page for search results (probably using the Search My Site API), and add an Atom feed for posts and notes (I currently have an RSS feed for posts, and that’s not going anywhere). If I get those two done, I’ll be ready for the next step of setting up WebSub and starting on IndieMark 4 (I’ve decided not to POSSE all my microblog posts, to maintain some separation between my “Rohan” and “Seirdy” identities).

  29. Posted

    What do you mean by “false sense of security”? Signal’s cryptography is pretty solid. It’s one of the only messengers with such a lack of metadata leakage; if you combine it with Tor you can add enough noise to the network-layer metadata to be more private than almost any alternative.

    Don’t get me wrong, I dislike it on the grounds of being a closed platform, but few alternatives exist that support both offline messaging and have such little metadata leakage. I’m willing to hear suggested alternatives that do not bake a “cryptographically-secure, decentralized pyramid scheme” (cryptocurrency) into the protocol. I’m not aware of any such alternative at the moment.

  30. Posted

    I read your article and share similar concerns. Using Microsoft Bing and Google Search’s commercial APIs generally requires accepting some harsh terms, including a ban on mixing SERPs from multiple sources (this is why Ixquick shut down and the company pivoted to the Google-exclusive Startpage search service). But the requirement to allow trackers in a companion web browser was new to me.

    Most of these agreements are confidential, so users don’t really get transparency. On rare occasions, certain engines have successfully negotiated exceptions to result-mixing, but we don’t know what other terms are involved in these agreements.

    I’ve catalogued some other engines in my post , and there are many alternatives that don’t have this conflict of interest.

    Most of these are not as good as Google/Bing when it comes to finding specific pieces of information, but many are far better when it comes to website discovery under a particular topic. Mainstream engines always seem to serve up webpages carefully designed to answer a specific question when I’m really just trying to learn about a larger topic. When using an engine like Marginalia or Alexandria, I can find “webpages about a topic” rather than “webpages designed to show up for a particular query”.

    One example: I was using Ansible at work just before my lunch break and I wanted to find examples of idempotent Ansible playbooks. Searching for “Ansible idempotent” on mainstream search engines shows blog posts and forums trying to answer the question “how to make playbooks idempotent”. Searching on Alexandria and source code forges turns up actual examples of playbooks and snippets that feature idempotency.

    SEO is a major culprit, but it’s not the only one. Forum posters are often just trying to get a question answered, but search engines rank them well because the engines are optimized to find answers rather than general resources.

    In short: DuckDuckGo and other Google/Bing/Yandex competitors are tools for answering questions, not tools to learn about something. I’ve tried to reduce my reliance on them.

  31. Posted

    I try to limit my reliance on CSS media queries in favor of being inclusive of as many people as possible by default, including fingerprinting-averse readers. Unfortunately, I have concluded that it is impossible to set one single website color palette that ticks all of the following boxes:

    • Familiar: colors aren’t particularly “novel” and don’t impose a learning curve. The difference between a visited and unvisited link should be clear enough from the get-go.
    • Friendly to various types of color blindness
    • Sufficient contrast for high-contrast needs
    • Autism-friendly, anxiety-friendly colors that do not trigger overstimulation or imply a warning.
    • Related: sensitive to cultural norms (is red actually a “warning” to everyone?).

    I set a custom palette for my site’s dark theme. Since its contrast is a bit high, I made it respond to the prefers-contrast: less media query. My 108% body text typically renders at 17.4 px, which should have an absolute value below 90 Lc on the APCA lookup table. I dropped my link contrast to 90 Lc and my body text to something slightly higher (article body text should have at least as much contrast as link text and buttons, to avoid the “piercing glare” effect interactive elements can have; I should add that to my website best practices article sometime).
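
    A rough sketch of the mechanism (the color values are placeholders, not my actual palette):

        /* Default dark palette. */
        :root {
          --bg: #0f0f0f;
          --fg: #ededed;
          --link: #9fc8ff;
        }
        body { background: var(--bg); color: var(--fg); }
        a { color: var(--link); }
        /* Reduced-contrast variant, as described above. */
        @media (prefers-contrast: less) {
          :root {
            --fg: #cfcfcf;
            --link: #a4b8d4;
          }
        }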

  32. Posted

    This is the first “note” on my IndieWeb site. Notes will be shorter and less formal than typical blog posts; this is a microblog, not a typical weblog.

    Once this is working correctly, I’ll need to figure out a solution to POSSE these notes to the Fediverse.