Building the Streaming HTML counter example

Estimated reading time: 25 minutes.

Kitten has a new experimental workflow for creating web apps called Streaming HTML that I want to introduce you to today.

Kitten, uniquely, enables you to build Small Web apps (peer-to-peer web apps). But it also aims to make creating any type of web app as easy as possible. The new Streaming HTML workflow is a big step in realising this goal.

So let’s jump in and see how Streaming HTML works by implementing the ubiquitous counter example.

O counter! My counter!
Install Kitten (this should take mere seconds).
Create a directory for the example and enter it:

```shell
mkdir counter
cd counter
```
Create a file called index.page.js and add the following content to it (the markup was mangled in transit; this restores the tags the rest of the article describes – the page tag with its css attribute, the two update buttons with their connect and data attributes, and the Count div with its id and morph attributes):

```js
// Initialise the database table if it doesn’t already exist.
if (kitten.db.counter === undefined) kitten.db.counter = { count: 0 }

// Default route that renders the page on GET requests.
export default () => kitten.html`
  <page css>

  <h1>Counter</h1>

  <${Count} />

  <button name='update' connect data=${{ value: -1 }}>Decrement</button>
  <button name='update' connect data=${{ value: 1 }}>Increment</button>
`

// The Count fragment.
const Count = () => kitten.html`
  <div id='count' morph>${kitten.db.counter.count}</div>
`

// The connect event handler responds to events from the client.
export function onConnect ({ page }) {
  page.on('update', data => {
    kitten.db.counter.count += data.value
    page.send(kitten.html`<${Count} />`)
  })
}
```
Run Kitten using the following syntax:

```shell
kitten
```
Once Kitten is running, hit https://localhost, and you should see a counter at zero and two buttons. Press the increment and decrement buttons and you should see the count update accordingly. Press Ctrl+C in the terminal to stop the server and then run kitten again. Refresh the page to see that the count has persisted.

What just happened?

In a few lines of very liberally-spaced code, you have built a very simple Streaming HTML web application in Kitten that:

- Is fully accessible (turn on your screen reader and have a play).
- Persists data to a database.
- Triggers events on the server in response to button presses and sends custom data from the client to the server.
- Sends an updated Count component back to the client which automatically gets morphed into place, maintaining state.
- Uses a basic semantic CSS library to style itself.
- Uses WebSockets, htmx, and Water behind the scenes to achieve its magic.

In a nutshell, Kitten gives you a simple-to-use, event-based HTML-over-WebSocket implementation called Streaming HTML (because you’re streaming HTML updates to the client) that you can use to build web apps.

HTML over WebSocket is not unique to Kitten – the approach is formalised with different implementations in a number of popular frameworks and application servers. And the general idea of hypermedia-based development actually predates the World Wide Web and HTML. What is unique, however, is just how simple Kitten’s implementation is to understand, learn, and use.

That simplicity comes from the amount of control Kitten has over the whole experience. Kitten is not just a framework. Nor is it just a server. It’s both. This means we can simplify the authoring experience using file system-based routing combined with automatic WebSocket handling, a built-in in-process native JavaScript database, a simple high-level declarative API, and built-in support for libraries like htmx.
Kitten’s Streaming HTML flow – and Kitten’s development process in general – stays as close to pure HTML, CSS, and JavaScript as possible and progressively enhances these core Web technologies with features to make authoring web applications as easy as possible.

Let’s break it down

OK, so now that we have a high-level understanding of what we built, let’s go through the example and dissect it to see exactly how everything works, peeling away the layers of magic one by one.

Let’s begin with what happens when you start the server. During its boot process, Kitten recurses through your project’s folder and maps the file types it knows about to routes based on their location in the directory hierarchy. In our simple example, there is only one file – a page file. Since it’s located in the root of our project folder and named index, the created route is /.

Pages, like other route types in Kitten, are identified by file extension (in this case, .page.js) and are expected to export a default function that renders the web page in response to a regular GET request.

Initial page render

```js
export default () => kitten.html`
  <page css>

  <h1>Counter</h1>

  <${Count} />

  <button name='update' connect data=${{ value: -1 }}>Decrement</button>
  <button name='update' connect data=${{ value: 1 }}>Increment</button>
`
```

This renders a heading, the current count, and two buttons onto the page and sprinkles on a bit of magic semantic CSS styling. Notice a few things about this code:
We are not returning a complete HTML document. Specifically, there is no html tag, head, or body. Kitten creates the outer shell of the HTML page for us and we have ways of adding elements to different parts of that page and changing things in the head, like the title, etc.
There isn’t a single root element. Pages can, and usually do, contain multiple elements and don’t have to be wrapped up in a single root tag. (Components and fragments, which we shall see later, do.)
A page can contain a mix of plain old regular HTML tags (h1, button), custom Kitten tags (page), and custom components and fragments (Count).
The value being returned is a custom Kitten tagged template string that’s available from the global kitten.html reference. This template string is what allows us to extend HTML with custom tags and components/fragments. This is just a regular JavaScript tagged template string and we use string interpolation to inject values into it. Notice, especially, how we handle components and fragments: They are written as tags but the name is interpolated. In our example, we use ${Count}, which is a reference to the function that renders the fragment. Since HTML templates in Kitten are plain old JavaScript, we don’t need any special tooling for Kitten to make use of the language intelligence that’s already in your editor. So you can, say, easily jump to the definition of the Count fragment from within the markup or get warned by your editor if you misspell the name of a component, etc.
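To make the mechanics concrete, here is a minimal, hypothetical sketch of how a tagged template function can call interpolated component functions while assembling markup. This is not Kitten’s implementation – kitten.html also handles sanitisation and the `<${Component} />` tag syntax – just the underlying JavaScript idea:

```javascript
// Toy tag function: concatenates the template chunks, calling any
// interpolated function values so components render in place.
function html (strings, ...values) {
  return strings.reduce((out, chunk, i) => {
    let value = i < values.length ? values[i] : ''
    if (typeof value === 'function') value = value() // render component
    return out + chunk + value
  }, '')
}

const Count = () => html`<div id='count'>0</div>`
const page = html`<h1>Counter</h1>${Count}`

console.log(page) // → <h1>Counter</h1><div id='count'>0</div>
```

Because Count is a plain function reference, your editor can jump to its definition or flag a misspelling, which is exactly the language-intelligence benefit described above.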
Next, let’s take a look at the Kitten-specific aspects of this template, starting with the first tag.

Deconstructing the page

The first piece of magic on the page is the simplest: a page tag that has a css attribute specified:

```js
<page css>
```

The page tag is transpiled by Kitten into HTML. In this case, since the css attribute is specified, it results in a stylesheet reference in the head of the page. This, in turn, loads in the Water semantic CSS library that gives our page some basic styles based on the HTML elements we used.

🐱 Go ahead and delete the line with the page tag and see how it affects the display of the page, then undo the change. Kitten will automatically update your page in the browser whenever you save your file.

Kitten has first-class support for certain libraries, Water being one of them, that it serves from the reserved /🐱/library/ namespace. Instead of manually including these libraries, you can just use the page tag like we did here. Most of the magic in this example, as we will see later, relies on a different library called htmx and its WebSocket and idiomorph extensions.

Components and fragments

Next in our page, we have a heading, followed by the Count fragment, included in the page using:

```js
export default () => kitten.html`
  …
  <${Count} />
  …
`
```

This results in the Count function being called to render the fragment as HTML:

```js
const Count = () => kitten.html`
  <div id='count' morph>${kitten.db.counter.count}</div>
`
```

🐱 Kitten encourages you to split your code into components and fragments.1 This becomes even more important in a streaming HTML workflow, where you initially render the whole page and then send back bits of the page to be morphed into place to update it. Breaking up your content into components and fragments enables you to remove redundancy in your code.

This fragment creates a div that displays the current count of the counter, which it gets from Kitten’s magic default database.

Kitten’s magic database

Kitten comes with a very simple, in-process JavaScript database called – drumroll – JavaScript Database (JSDB). It even creates a default one for you to use at kitten.db.

🐱 You’re not limited to using the default database that Kitten makes available. You can create your own and even use multiple databases, etc., using database app modules. You can also implement type safety in your apps, including for your database structures.

In JSDB you store and access data in JavaScript arrays and objects and you work with them exactly as you would with any other JavaScript array or object. The only difference is that the changes you make are automatically persisted to an append-only JavaScript transaction log in a format called JavaScript Data Format (JSDF).

It’s a common pattern in JSDB to check whether an array or object (the equivalent of a table in a traditional database) exists and, if not, to initialise it. This is what we do at the very start of the file that contains the page route, creating the counter object with its count property set to zero if it doesn’t already exist:

```js
if (kitten.db.counter === undefined) kitten.db.counter = { count: 0 }
```

Once you are sure a value exists in your database, you can access it using regular JavaScript property look-up syntax (because it is just a regular JavaScript object).
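The persistence idea behind JSDB can be illustrated with a toy sketch. This is not JSDB’s actual code – just the general concept: wrap a plain object in a Proxy so every mutation also appends a statement to a transaction log (JSDB writes such statements to an append-only file on disk):

```javascript
// Toy illustration of JSDB's approach (not the real implementation):
// a Proxy records every mutation in an append-only log of statements.
const transactionLog = []

function persisted (object, name) {
  return new Proxy(object, {
    set (target, property, value) {
      target[property] = value
      transactionLog.push(`${name}[${JSON.stringify(property)}] = ${JSON.stringify(value)};`)
      return true
    }
  })
}

const counter = persisted({ count: 0 }, 'counter')
counter.count += 1
counter.count += 1

console.log(counter.count)         // → 2
console.log(transactionLog.length) // → 2
```

Replaying the logged statements against an empty object on the next start-up reproduces the state – which is why the count in our example survives a server restart.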
This is what we do in the Count component:

```js
const Count = () => kitten.html`
  …
  ${kitten.db.counter.count}
  …
`
```

So the first time the page renders, the count will display as zero.

After the Count fragment, we have the last two elements on the page: two buttons, one to decrement the count and the other to increment it. But what is the magic that allows us to connect those buttons to the server, mutate the count, and persist the value? Let’s look at that next.

A magic connection

At the heart of Kitten’s Streaming HTML workflow is a cross-tier eventing system that maps events on the client to handlers on the server. Take a look at the two buttons declared in our page to see how it works:

```js
<button name='update' connect data=${{ value: -1 }}>Decrement</button>
<button name='update' connect data=${{ value: 1 }}>Increment</button>
```

Both of the buttons have the same name, update. That name is the name of the event that will fire on the server when that button is pressed, thanks to the magic connect attribute we’ve added to the buttons. Additionally, the contents of the magic data attribute will also be sent to the event handler.

The event handler in question is the only other bit of code in our pithy example:

```js
export function onConnect ({ page }) {
  page.on('update', data => {
    kitten.db.counter.count += data.value
    page.send(kitten.html`<${Count} />`)
  })
}
```

In the onConnect() handler, we receive a parameter object with a reference to the page object. Using the page reference, we set up an event handler for the update event that receives the data from the button that triggered the event and adds its value to the count property of our counter in our database. Finally, we use a method provided for us called send() on the same page reference to stream a new Count component.

If you remember, the Count component had one last magic attribute on it, called morph:

```js
<div id='count' morph>${kitten.db.counter.count}</div>
```
This makes Kitten intelligently morph the streamed HTML into the DOM, replacing the element that matches the provided id.

Notice that, unlike web apps you may be familiar with, we are not sending data to the client; we are sending hypermedia in the form of HTML. Streaming HTML is a modern, event-based, full-duplex approach to building hypermedia-based applications. Its greatest advantage is its simplicity, which arises from keeping state on one tier (the server) instead of on two (the client and the server). In essence, it is the opposite of the Single-Page Application (SPA) model, embracing the architecture of the Web instead of attempting to turn it on its head. In fact, you can create whole web apps without writing a single line of client-side JavaScript yourself.

And with that, we now know what Streaming HTML is and what each part of the code does. Now, let’s go back to the start and review the process as we start to understand how things work at a deeper level.

High-level flow

Let’s go step-by-step, starting from when we launch Kitten to when the counter is updated:
Kitten parses the page and sees that there is an onConnect() handler defined so it creates a default WebSocket route for the page and wires it up so that when the page loads, a WebSocket connection is made that results in the onConnect() handler being called.
When a person hits the page, the onConnect() handler gets called. In the handler, we set up an event handler to handle the update event.
When the person presses the increment button, it sends a message to the default WebSocket route. Since the button’s name is update, Kitten calls the update event handler, passing a copy of any data that was sent along.
In this case, the data is {value: 1}. It is an object that has a value property set to 1. So we add the value to the count we are keeping in our database and send a new Count fragment back.
At this point, you might be wondering about several things:

- How exactly does Kitten wire up the client so that the WebSocket connection is made and messages are sent to the server when we click the buttons?
- How does Kitten update the page on the client with new HTML fragments as they are sent from the server?

The answer to both of those questions is ‘through the magic of htmx’. So what is htmx? Let’s find out!

Peeking behind the curtain: htmx

Earlier, I wrote that most of the magic in this example relies on a library called htmx and its WebSocket and idiomorph extensions. Let’s now dive a little deeper into the internals of Kitten and take a look at how Kitten transpiles your code to use this library and its extensions.

In our example, whenever either the increment or decrement button gets pressed on the client, the update event handler gets called on the server, whereupon it updates the counter accordingly and sends a new Count fragment back to the client that gets morphed into place in the DOM. There are three things working in tandem to make this happen, all of which sprinkle htmx code into your page behind the scenes.

First, whenever Kitten sees that one of your pages has exported an onConnect() event handler, it:
Adds the htmx library, as well as its WebSocket and idiomorph extensions, to the page.
Creates a special default WebSocket route for your page. In this case, since our page is the index page and is accessed from the / route, it creates a socket that is accessed from /default.socket. In that socket route, it adds an event listener for the message event and maps any HX-Trigger-Name headers it sees in the request to event handlers defined on the page reference it provides to the onConnect() handler when the WebSocket connects.
Adds a send() method to the page reference passed to the onConnect() handler that can be used to stream responses back to the page. We haven’t used them in this example, but it also adds everyone() and everyoneElse() methods that can be used to stream responses not just to the person on the current page but to every person that has the page open (or to every person but the current one).
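The routing described above can be sketched as follows. This is a hypothetical simplification, not Kitten’s source: a page object that maps the HX-Trigger-Name header of an incoming htmx WebSocket message to a handler registered with on():

```javascript
// Hypothetical sketch of Kitten's event routing (not its actual source).
function createPage () {
  const handlers = {}
  return {
    on (eventName, handler) { handlers[eventName] = handler },
    // Called for every WebSocket message the page receives.
    route (rawMessage) {
      const data = JSON.parse(rawMessage)
      const eventName = data.HEADERS && data.HEADERS['HX-Trigger-Name']
      if (handlers[eventName] === undefined) {
        console.warn(`Unexpected event: ${eventName}`)
        return
      }
      handlers[eventName](data)
    }
  }
}

// Simulate what happens when the increment button is pressed.
const page = createPage()
let count = 0
page.on('update', data => { count += data.value })
page.route(JSON.stringify({ value: 1, HEADERS: { 'HX-Trigger-Name': 'update' } }))

console.log(count) // → 1
```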
Second, it goes through your code and, whenever it sees a form, it adds the necessary htmx WebSocket extension code so form submits will automatically trigger serialisation of form values. (We don’t make use of this here, preferring to forego a form altogether and directly connect the buttons instead.)

Finally, it applies some syntactic sugar to attribute names by replacing:
- connect with ws-send
- morph with hx-swap-oob='morph'
- data= with hx-vals='js:…'
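Concretely, a button written with Kitten’s shorthand and its (approximate) transpiled htmx form look like this – the exact serialisation of the data object is an internal detail:

```html
<!-- Kitten shorthand -->
<button name='update' connect data=${{ value: 1 }}>Increment</button>

<!-- Roughly what it becomes after transpilation -->
<button name='update' ws-send hx-vals='js:{"value": 1}'>Increment</button>
```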
These little conveniences make authoring easier without you having to remember the more verbose htmx attributes. You can, of course, use the htmx attributes instead, as well as any other htmx attribute, because it is just htmx under the hood.

Progressive enhancement

Kitten’s design adheres to the philosophy of progressive enhancement. At its very core, Kitten is a web server. It will happily serve any static HTML you throw at it from 1993. However, if you want to, you can go beyond that. You can use dynamic pages, as we have done here, to server-render responses, use a database, etc. Similarly, Kitten has first-class support for the htmx library and some of its extensions, as well as other libraries like Alpine.js. The idea is that you can build your web apps using plain old HTML, CSS, and JavaScript and then layer additional functionality on top using Kitten’s Streaming HTML features, htmx, Alpine.js, etc. You can even use its unique features to make peer-to-peer Small Web apps.

So Kitten’s implementation of Streaming HTML is based on core Web technologies and progressively enhanced using authoring improvements, htmx, and a sprinkling of syntactic sugar (collectively, what we refer to as ‘magic’). All this to say, you can do everything we did in the original example by using htmx and creating your WebSocket manually. Let’s see what that would look like next.

Goodbye, magic! (Part 1: Goodbye, syntactic sugar; hello, htmx)

Right, let’s peel away a layer of the magic, stop making use of Kitten’s automatic event mapping and syntactic sugar, and use plain htmx instead, starting with the index page (again, the markup was lost in extraction; this restores it from the differences described below):

index.page.js

```js
import Count from './Count.fragment.js'

export default function () {
  return kitten.html`
    <page css htmx htmx-websocket htmx-idiomorph>

    <main hx-ext='ws' ws-connect='wss://${kitten.domain}:${kitten.port}/count.socket'>
      <h1>Counter</h1>

      <${Count} />

      <button name='update' ws-send hx-vals='js:{value: -1}'>Decrement</button>
      <button name='update' ws-send hx-vals='js:{value: 1}'>Increment</button>
    </main>
  `
}
```

Notice what’s different here from the previous version:
The Count fragment now lives in its own file (with the extension .fragment.js) that we import into the page. This is because we now have to create the WebSocket route ourselves, in a separate file, and it will need to use the Count fragment too when sending back new versions of it to the page. Previously, our onConnect() handler was housed in the same file as our page so our fragment was too.
We have to manually let Kitten know that we want the htmx library and its two extensions loaded in, just like we had to do with the Water CSS library (the css attribute is an alias for water; you can use either. Kitten tries to be as forgiving as possible during authoring).
We wrap our counter in a main tag so we have some place to initialise the htmx ws (WebSocket) extension. We also have to write out the connection string to our socket route manually. As we’ll see later, our socket route is called count.socket. While writing the connection string, we make use of the Kitten globals kitten.domain and kitten.port to ensure that the connection string will work regardless of whether we are running the app locally in development or from its own domain in production.
Instead of Kitten’s syntactic sugar, we now use the regular htmx attributes ws-send and hx-vals in our buttons.
Next, let’s take a look at the Count fragment.

Count.fragment.js

```js
if (kitten.db.counter === undefined) kitten.db.counter = { count: 0 }

export default function Count () {
  return kitten.html`
    <div id='count' hx-swap-oob='morph'>${kitten.db.counter.count}</div>
  `
}
```

Here, apart from being housed in its own file so it can be used from both the page and the socket routes, the only thing that’s different is that we’re using the htmx attribute hx-swap-oob (htmx swap out-of-band) instead of Kitten’s syntactic-sugar morph attribute.

We also make sure the database is initialised before we access the counter in the component. We’re carrying out the initialisation here and not in the socket (see below) because we know that the page needs to be rendered (and accessed) before the socket route is lazily loaded. While this is fine in a simple example like this one, it is brittle and requires knowledge of Kitten’s internals. In a larger application, a more solid and maintainable approach would be to use a database app module to initialise your database and add type safety to it while you’re at it.

🐱 A design goal of Kitten is to be easy to play with. Want to spin up a quick experiment or teach someone the basics of web development? Kitten should make that simple to do. Having magic globals like the kitten.html tagged template you saw earlier helps with that. However, for larger or longer-term projects where maintainability becomes an important consideration, you might want to make use of more advanced features like type checking. The two goals are not at odds with each other. Kitten exposes global objects and beautiful defaults that make it easy to get started and, at the same time, layers on more advanced features that make it easy to build larger and longer-term projects.
Finally, having seen the page and the Count component, let’s now see what the WebSocket route – which was previously being created for us internally by Kitten – looks like:

count.socket.js

```js
import Count from './Count.fragment.js'

export default function socket ({ socket }) {
  socket.addEventListener('message', event => {
    const data = JSON.parse(event.data)

    if (data.HEADERS === undefined) {
      console.warn('No headers found in htmx WebSocket data, cannot route call.', event.data)
      return
    }

    const eventName = data.HEADERS['HX-Trigger-Name']
    switch (eventName) {
      case 'update':
        kitten.db.counter.count += data.value
        socket.send(kitten.html`<${Count} />`)
        break

      default:
        console.warn(`Unexpected event: ${eventName}`)
    }
  })
}
```

Our manually-created socket route is functionally equivalent to our onConnect() handler in the original version. However, it is quite a bit more complicated because we have to do manually, at a slightly lower level, what Kitten previously did for us.

Socket routes in Kitten are passed a parameter object that includes a socket reference to the WebSocket instance. It can also include a reference to the request that originated the initial connection.2 The socket object is a ws WebSocket instance with a couple of additional methods – like all() and broadcast() – mixed in by Kitten.3

On this socket instance, we listen for the message event and, when we receive a message, we manually:
Deserialise the event data.
Check that htmx headers are present before continuing and bail with a warning otherwise.
Look for the HX-Trigger-Name header and, if the trigger is an event we know how to handle (in this case, update), carry out the updating of the counter that we previously did in the on('update') handler.
For comparison, this was the onConnect() handler from the original version, where Kitten essentially does the same things for us behind the scenes and routes the update event to our handler:

```js
export function onConnect ({ page }) {
  page.on('update', data => {
    kitten.db.counter.count += data.value
    page.send(kitten.html`<${Count} />`)
  })
}
```

If you run our new – plain htmx – version of the app, you should see exactly the same counter, behaving exactly as before. While the plain htmx version is more verbose, it is important to understand that in both instances we are using htmx. In the original version, Kitten is doing most of the work for us; in the latter, we’re doing everything ourselves. Kitten merely progressively enhances htmx, just like htmx progressively enhances plain old HTML. You can always use any htmx functionality and, if you want, ignore Kitten’s magic features.

Goodbye, magic! (Part 2: goodbye, htmx; hello, plain old client-side JavaScript)

So we just stripped away the magic that Kitten layers on top of htmx to see how we would implement the Streaming HTML flow using plain htmx. Now it’s time to remove yet another layer of magic and strip away htmx also (because htmx is just a bit of clever client-side JavaScript that someone else has written for you). We can do what htmx does by writing a bit of client-side JavaScript ourselves (and in the process see that while htmx is an excellent tool, it’s not magic either).

Let’s start with the index page, where we’ll strip out all htmx-specific attributes and instead render a bit of client-side JavaScript that we’ll write ourselves. Our goal is not to reproduce htmx but to implement an equivalent version of the tiny subset of its features that we are using in this example. Specifically, we need to write a generic routine that expects a snippet of HTML encapsulated in a single root element that has an ID and replaces the element that’s currently on the page with that ID with the contents of the new one.
index.page.js (markup restored from the description below; the button wiring via onclick is an inference from the global update() function the client-side script defines):

```js
import Count from './Count.fragment.js'

export default function () {
  return kitten.html`
    <page css>

    <h1>Counter</h1>

    <${Count} />

    <button onclick='update(-1)'>Decrement</button>
    <button onclick='update(1)'>Increment</button>

    <script>${[clientSideJS.render()]}</script>
  `
}

/**
  This is the client-side JavaScript we render into the page.
  It’s encapsulated in a function so we get syntax highlighting, etc.,
  in our editor.
*/
function clientSideJS () {
  const socketUrl = `wss://${window.location.host}/count.socket`
  const ws = new WebSocket(socketUrl)

  ws.addEventListener('message', event => {
    const updatedElement = event.data

    // Get the ID of the new element.
    const template = document.createElement('template')
    template.innerHTML = updatedElement
    const idOfElementToUpdate = template.content.firstElementChild.id

    // Swap the element with the new version.
    const elementToUpdate = document.getElementById(idOfElementToUpdate)
    elementToUpdate.outerHTML = updatedElement
  })

  function update (value) {
    ws.send(JSON.stringify({ event: 'update', value }))
  }
}

clientSideJS.render = () => clientSideJS.toString().split('\n').slice(1, -1).join('\n')
```

Here’s how the page differs from the htmx version:
The htmx and htmx-websocket attributes are gone. Since htmx is no longer automatically creating our socket connection for us, we do it manually in our client-side JavaScript.
The htmx-idiomorph extension is also gone. Since htmx is not automatically carrying out the DOM replacement of the updated HTML fragments we send it, we do that manually in our client-side JavaScript too. We do so by first creating a template element and populating its inner HTML with our HTML string. Then, we query the resulting document fragment for the id of its top-level element. Finally, we use the document.getElementById() look-up method to get the current version of the element and replace it by setting its outerHTML to the updated HTML fragment we received from the server.
Finally, since we no longer have htmx sending the HX-Trigger-Name value we used to differentiate between event types, we add an event property to the object we send back to the server via the WebSocket.
Documenting the (overly) clever bits

There are two bits of the code where we’re doing things that might be confusing.

First, when we interpolate the result of the clientSideJS.render() call into our template, we surround it with square brackets, thereby submitting the value wrapped in an array:

```js
export default function () {
  return kitten.html`
    …
    <script>${[clientSideJS.render()]}</script>
    …
  `
}
```

This is Kitten shorthand for circumventing Kitten’s built-in string sanitisation. (We know that the string is safe because we created it.)

🐱 Needless to say, only use this trick with trusted content, never with content you receive from a third party. By default, Kitten will sanitise any string you interpolate into a kitten.html string, so the default is secure. If you want to safely interpolate third-party HTML content into your pages, wrap the content in a call to kitten.safelyAddHtml(), which will sanitise your HTML using the sanitize-html library.

The other bit that might look odd to you is how we’re adding the render() function to the clientSideJS() function:

```js
function clientSideJS () {
  // …
}
clientSideJS.render = () => clientSideJS.toString().split('\n').slice(1, -1).join('\n')
```

You might be wondering why we wrote our client-side JavaScript code in a function on our server to begin with, instead of just including it directly in the template. We did so to make use of the language intelligence in our editor. Given how little code there is in this example, we could have just popped it into the template string. But this provides a better authoring experience and is more maintainable.

Of course, what we need is a string representation of this code – sans the function signature and the closing curly bracket – to embed in our template. Again, we could have just added that logic straight into our template. That does the same thing, but it doesn’t really roll off the tongue.
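The trick is easy to verify in isolation. This standalone sketch (with a made-up function body) shows toString() being split on newlines, with the signature line and the closing brace dropped:

```javascript
// Standalone demo of the render() technique with a made-up function body.
function clientSideJS () {
  const greeting = 'hello from the client'
  console.log(greeting)
}

// Serialise the function, drop the first line (the signature) and the
// last line (the closing brace), keeping only the body.
clientSideJS.render = () => clientSideJS.toString().split('\n').slice(1, -1).join('\n')

console.log(clientSideJS.render())
```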
I feel that templates should be as literate, readable, and close to natural language as possible and that any complex stuff we might have to do should be done elsewhere. And since in JavaScript nearly everything is an object, including functions, why not add the function that renders the inner code of a function onto the function itself?4

OK, enough JavaScript geekery. Next, let’s take a look at how the WebSocket route has changed.

count.socket.js

```js
import Count from './Count.fragment.js'

export default function socket ({ socket }) {
  socket.addEventListener('message', event => {
    const data = JSON.parse(event.data)

    if (data.event === undefined) {
      console.warn('No event found in message, cannot route call.', event.data)
      return
    }

    switch (data.event) {
      case 'update':
        kitten.db.counter.count += data.value
        socket.send(kitten.html`<${Count} />`)
        break

      default:
        console.warn(`Unexpected event: ${data.event}`)
    }
  })
}
```

The general structure of our WebSocket route remains largely unchanged, with one exception: instead of using htmx’s HX-Trigger-Name header, we look for the event property we’re now sending as part of the data and use that to determine which event to handle. (Again, in our simple example there is only one event type, but we’ve used a switch statement anyway so you can see how you could support other events in the future by adding additional case blocks.)

Finally, the Count fragment remains unchanged. Here it is again, for reference:

```js
if (kitten.db.counter === undefined) kitten.db.counter = { count: 0 }

export default function Count () {
  return kitten.html`
    <div id='count' hx-swap-oob='morph'>${kitten.db.counter.count}</div>
  `
}
```

Goodbye, magic! (Part 3: goodbye, Kitten; hello, plain old Node.js)

So we just saw that Kitten’s Streaming HTML workflow can be recreated by writing some plain old client-side JavaScript instead of using the htmx library (which, of course, is just plain old client-side JavaScript that someone else wrote for you). But we are still using a lot of Kitten magic, including its file system-based routing with its convenient WebSocket routes, its first-class support for JavaScript Database (JSDB), etc. What would the Streaming HTML counter example look like if we removed Kitten altogether and created it in plain Node.js?

🐱 Kitten itself uses Node.js as its runtime. During the installation process it installs a specific version of Node.js – separate from any others you may have installed on your system – for its own use.

Streaming HTML, plain Node.js version

To follow along with this final, plain Node.js version of the Streaming HTML example, make sure you have a recent version of Node.js installed. (Kitten is regularly updated to use the latest LTS version, so that should suffice for you too.)

First off, since this is a Node.js project, let’s initialise our package file using npm so we can add three Node module dependencies that we previously made use of, without knowing it, via Kitten.
Create a new folder for the project and switch to it.

```shell
mkdir count-node
cd count-node
```
Initialise your package file and install the required dependencies – the ws WebSocket library as well as Small Technology Foundation’s https and JSDB libraries. Of the Small Technology Foundation modules, the former is an extension of the standard Node.js https library that manages TLS certificates for you automatically, both locally during development and via Let’s Encrypt in production, and the latter is our in-process JavaScript database.

```shell
npm init --yes
npm i ws @small-tech/https @small-tech/jsdb
```

Tell Node we will be using ES Modules (because, hello, it’s 2024) by adding "type": "module" to the package.json file. (Do you get the feeling I just love having to do this every time I start a new Node.js project?) Either do so manually or use the following one-liner to make yourself feel like one of those hackers in the movies:5

```shell
sed -i '0,/,/s/,/, "type": "module",/' package.json
```
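If you’d like to see what that one-liner does before trusting it with your real package file, you can try it in a throwaway directory first (the file contents below are just a minimal stand-in for what npm init generates):

```shell
# Work in a throwaway directory so we don’t touch the real package.json.
mkdir -p /tmp/sed-demo && cd /tmp/sed-demo

# A minimal stand-in for an npm-generated package.json.
printf '{\n  "name": "demo",\n  "version": "1.0.0",\n  "description": ""\n}\n' > package.json

# From the start of the file to the first line containing a comma,
# replace that comma with itself followed by the new key.
sed -i '0,/,/s/,/, "type": "module",/' package.json

cat package.json
```

The "name" line ends up carrying the extra `"type": "module",` entry, which is all npm needs to treat the project as ES Modules.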
Create the application.

```javascript
// Import dependencies.
import path from 'node:path'
import { parse } from 'node:url'
import { WebSocketServer } from 'ws'
import JSDB from '@small-tech/jsdb'
import https from '@small-tech/https'

// Find the conventional place to put data on the file system.
// This is where we’ll store our database.
const dataHome = process.env.XDG_DATA_HOME || path.join(process.env.HOME, '.local', 'share')
const dataDirectory = path.join(dataHome, 'streaming-html-counter')
const databaseFilePath = path.join(dataDirectory, 'db')

/** JavaScript database (JSDB). */
const db = JSDB.open(databaseFilePath)

// Initialise count.
if (db.counter === undefined) db.counter = { count: 0 }

/** A WebSocket server without its own http server (we use our own https server). */
const webSocketServer = new WebSocketServer({ noServer: true })

webSocketServer.on('connection', ws => {
  ws.on('error', console.error)

  ws.on('message', message => {
    const data = JSON.parse(message.toString('utf-8'))

    if (data.event === undefined) {
      console.warn('No event found in message, cannot route call.', message)
      return
    }

    switch (data.event) {
      case 'update':
        db.counter.count += data.value
        ws.send(Count())
        break

      default:
        console.warn(`Unexpected event: ${data.event}`)
    }
  })
})

/** An HTTPS server instance that automatically handles TLS certificates. */
const httpsServer = https.createServer((request, response) => {
  const urlPath = parse(request.url).pathname

  switch (urlPath) {
    case '/':
      response.end(renderIndexPage())
      break

    default:
      response.statusCode = 404
      response.end(`Page not found: ${urlPath}`)
      break
  }
})

// Handle WebSocket upgrade requests.
httpsServer.on('upgrade', (request, socket, head) => {
  const urlPath = parse(request.url).pathname

  switch (urlPath) {
    case '/count.socket':
      webSocketServer.handleUpgrade(request, socket, head, ws => {
        webSocketServer.emit('connection', ws, request)
      })
      break

    default:
      console.warn('No WebSocket route exists at', urlPath)
      socket.destroy()
  }
})

// Start the server.
httpsServer.listen(443, () => {
  console.info(' 🎉 Server running at https://localhost.')
})

// To get syntax highlighting in editors that support it.
const html = String.raw
const css = String.raw

/** Renders the index page HTML. */
function renderIndexPage () {
  return html`
    <!DOCTYPE html>
    <html lang='en'>
      <head>
        <meta charset='utf-8'>
        <title>Counter</title>
        <style>${styles}</style>
      </head>
      <body>
        <h1>Counter</h1>
        ${Count()}
        <button onclick='update(1)'>+</button>
        <button onclick='update(-1)'>-</button>
        <script>${clientSideJS.render()}</script>
      </body>
    </html>
  `
}

/** The Count fragment. */
function Count () {
  return html`
    <div id='count'>
      ${db.counter.count}
    </div>
  `
}

/**
  This is the client-side JavaScript we render into the page.
  It’s encapsulated in a function so we get syntax highlighting, etc.,
  in our editors.
*/
function clientSideJS () {
  const socketUrl = `wss://${window.location.host}/count.socket`
  const ws = new WebSocket(socketUrl)

  ws.addEventListener('message', event => {
    const updatedElement = event.data

    // Get the ID of the new element.
    const template = document.createElement('template')
    template.innerHTML = updatedElement
    const idOfElementToUpdate = template.content.firstElementChild.id

    // Swap the element with the new version.
    const elementToUpdate = document.getElementById(idOfElementToUpdate)
    elementToUpdate.outerHTML = updatedElement
  })

  function update (value) {
    ws.send(JSON.stringify({ event: 'update', value }))
  }
}

clientSideJS.render = () => clientSideJS.toString().split('\n').slice(1, -1).join('\n')

/**
  Subset of relevant styles pulled out from Water.css.
  (https://watercss.kognise.dev/)
*/
const styles = css`
  :root {
    --background-body: #fff;
    --selection: #9e9e9e;
    --text-main: #363636;
    --text-bright: #000;
    --text-muted: #70777f;
    --links: #0076d1;
    --focus: #0096bfab;
    --form-text: #1d1d1d;
    --button-base: #d0cfcf;
    --button-hover: #9b9b9b;
    --animation-duration: 0.1s;
  }

  @media (prefers-color-scheme: dark) {
    :root {
      --background-body: #202b38;
      --selection: #1c76c5;
      --text-main: #dbdbdb;
      --text-bright: #fff;
      --focus: #0096bfab;
      --form-text: #fff;
      --button-base: #0c151c;
      --button-hover: #040a0f;
    }
  }

  ::selection {
    background-color: #9e9e9e;
    background-color: var(--selection);
    color: #000;
    color: var(--text-bright);
  }

  body {
    font-family: system-ui, -apple-system, BlinkMacSystemFont, 'Segoe UI', 'Roboto', 'Oxygen', 'Ubuntu', 'Cantarell', 'Fira Sans', 'Droid Sans', 'Helvetica Neue', 'Segoe UI Emoji', 'Apple Color Emoji', 'Noto Color Emoji', sans-serif;
    line-height: 1.4;
    text-rendering: optimizeLegibility;
    color: var(--text-main);
    background: var(--background-body);
    margin: 20px auto;
    padding: 0 10px;
    max-width: 800px;
  }

  h1 {
    font-size: 2.2em;
    font-weight: 600;
    margin-bottom: 12px;
    margin-top: 24px;
  }

  button {
    font-size: inherit;
    font-family: inherit;
    color: var(--form-text);
    background-color: var(--button-base);
    padding: 10px;
    padding-right: 30px;
    padding-left: 30px;
    margin-right: 6px;
    border: none;
    border-radius: 5px;
    outline: none;
    cursor: pointer;
    -webkit-appearance: none;
    transition:
      background-color var(--animation-duration) linear,
      border-color var(--animation-duration) linear,
      color var(--animation-duration) linear,
      box-shadow var(--animation-duration) linear,
      transform var(--animation-duration) ease;
  }

  button:focus { box-shadow: 0 0 0 2px var(--focus); }
  button:hover { background: var(--button-hover); }
  button:active { transform: translateY(2px); }
`
```

So that is considerably longer (although almost half of it is, of course, CSS). And while we haven’t recreated Kitten with a generic file-system-based router, etc., we have still designed the routing so new routes can easily be added to the project. Similarly, while our client-side DOM manipulation is very basic compared to everything htmx can do, it is still generic enough to replace any element it receives based on its ID. I hope this gives you a solid idea of how the Streaming HTML flow works, how it is implemented in Kitten, how it can be implemented using htmx, and even in plain JavaScript. Maybe this will even inspire you to port it to other frameworks and languages and use it in your own web development workflow. At the very least, I hope that this has been an interesting read and maybe gotten you to consider how web development could be made simpler, more fun, and more accessible. If you’re organising a web conference or similar event and you’d like me to present a keynote on the Small Web (peer-to-peer web apps), Streaming HTML, and Kitten, give me a shout and let’s chat.

Like this? Fund us!

Small Technology Foundation is a tiny, independent not-for-profit. We exist in part thanks to patronage by people like you.
If you share our vision and want to support our work, please become a patron or donate to us today and help us continue to exist.

A component or fragment in Kitten is just a function that returns an HTML element. In Kitten, your components and fragments can take properties (or ‘props’) and return HTML using the special kitten.html JavaScript tagged template string. In fact, the only difference between a component and a fragment is their intended usage. If an element is intended to be used in multiple places on a page, it is called a component and, for example, does not contain an id. If, on the other hand, an element is meant to be used only once on the page, it is called a fragment and can contain an id. You can, of course, pass an id attribute – or any other standard HTML attribute – to any component when instantiating it. When creating your components, you just have to make sure you pass these standard props to your component. ↩︎
You would use the request reference if, for example, you wanted to access session data, which would be available at request.session (if your request parameter was named request). In our example, since we’re not using the first argument, we prefix our parameter with an underscore to silence warnings about unused arguments in our editor. ↩︎
For example, see the Kitten Chat sample application for use of the all() method. ↩︎
In fact, if we wanted to get really fancy, we could have bound the render() function to the clientSideJS() function so we could have referred to the latter from the former using this:

```javascript
function clientSideJS () {
  // …
}

clientSideJS.render = (function () {
  return this
    .toString()
    .split('\n')
    .slice(1, -1)
    .join('\n')
}).bind(clientSideJS)
```

Notice that we cannot use an arrow function expression here because the this reference in arrow functions is derived from the lexical scope and cannot be changed at run-time. Lexical scope is just fancy computer science speak for ‘where it appears in the source code’. In this case, it means that since we’d be defining the arrow function at the top level of the script, it would not have a this reference of its own. ↩︎
It basically says “in the range of the start of the file to the first comma, replace any commas you find…” – yes, that’s the first comma, I know, but it’s sed, things were different back then – “…with a comma followed by the line we want to insert.” ↩︎
Sorry, your browser doesn’t support embedded videos. But that doesn’t mean you can’t watch it! You can download Small Is Beautiful #27 directly and watch it with your favourite video player.

Small Is Beautiful (Feb, 2023): End-to-end encrypted Kitten Chat (an example peer-to-peer Small Web app using Kitten. Follow the tutorial to build it yourself from scratch or browse the source code). In this hour-and-a-half-long Small is Beautiful live stream recording, I show you how WebSockets, project-specific secrets, and authenticated routes work in Kitten, and migrate a centralised WebSocket chat application to an end-to-end-encrypted peer-to-peer Small Web chat application in Kitten. The example also makes use of the native support for htmx and Alpine.js in Kitten. You can follow Small is Beautiful from the fediverse (we stream it using our own Owncast instance) to be notified of future streams. (Hint: you can install Owncast using Site.js.)

Transcript

Auto-generated transcript from the captions.

Links to other things mentioned or shown during the stream: Kitten Domain Black Box Terminal Lipstick on a Pig Helix Editor lf WebSocket Weasel RESTED SkipTo Landmarks & Headings Open Switcher Control

Streamed using our own Owncast instance. If you like this livestream, please help support our work at Small Technology Foundation with a patronage or donation, or share our videos with your friends!
Helix Editor using the Bash Language Server to show the symbols in the script included in this post.

Helix Editor

I’ve been using Helix Editor as my daily driver for web development for most of this year and – while it has some outstanding issues1 (what doesn’t?) – I’m really enjoying it.2 One of the issues that might trip up new folks is that while it has Language Server Protocol (LSP) support, and while there are default language servers configured, installing Helix Editor doesn’t actually install those language servers.3 Another issue is that since many of the language servers are written in Node.js and installed as global modules, if you update your version of Node (e.g., using nvm.fish on fish shell), you will also have to reinstall your language servers. Needless to say, this can get tedious, so here’s a little script I hacked together today to easily install (and reinstall) the language servers I use for web development (mostly HTML, JS, CSS, and Node.js with Kitten these days) in Helix Editor. I’m sharing it below in case it helps anyone else. Please feel free to adapt and use it for yourself.

Usage
Copy the code into a file named, e.g., install-helix-language-servers
Make that file executable, e.g.:

```shell
chmod +x install-helix-language-servers
```
Adapt it for your own needs (see notes) and enjoy!
Notes

- Only tested on Linux. Requires Node.js, Rust, Bash, wget, gunzip, and tar.
- If you’re running this on Fedora Silverblue or another immutable Linux distribution, make sure you do so from a mutable container (e.g., via Toolbox or Distrobox) that has the Rust toolchain installed so that the language servers that are installed by Cargo (e.g., TOML) can be compiled properly. Note: the TOML Language Server requires latest Rust to compile. Tested to work on 1.65.0 (does not work on 1.60.0). See https://github.com/tamasfe/taplo/issues/349
- If you’re on a non-Linux platform, please modify the script accordingly.
- You can find installation instructions for all language servers supported by Helix Editor at: https://github.com/helix-editor/helix/wiki/How-to-install-the-default-language-servers

Code

```bash
#!/usr/bin/env bash

BINARY_HOME="${HOME}/.local/bin"
DATA_HOME="${XDG_DATA_HOME:-$HOME/.local/share}"

echo "Installing Language Servers for Helix Editor:"

# Work in a throwaway temporary directory so as not to pollute the file system.
temporaryDirectory="/tmp/helix-editor-language-server-installer"
mkdir -p "${temporaryDirectory}"
pushd "${temporaryDirectory}"

# Bash
echo " • Bash (bash-language-server)"
npm i -g bash-language-server

# HTML, JSON, JSON schema
echo " • HTML, JSON, and JSON schema (vscode-langservers-extracted)"
npm i -g vscode-langservers-extracted

# JavaScript (via TypeScript)
echo " • JavaScript (typescript, typescript-language-server)"
npm install -g typescript typescript-language-server

# Markdown (via ltex-ls. Note: this has excellent features like
# spelling and grammar check but is a ~269MB download).
echo " • Markdown (ltex-ls)"
ltexLsVersion=15.2.0
ltexLsBinaryPath="${BINARY_HOME}/ltex-ls"
ltexLsBaseFileName="ltex-ls-${ltexLsVersion}"
ltexLsFileNameWithPlatform="${ltexLsBaseFileName}-linux-x64"
ltexLsAppDirectory="${DATA_HOME}/${ltexLsBaseFileName}"

rm "${ltexLsBinaryPath}"
rm -rf "${ltexLsAppDirectory}"
wget "https://github.com/valentjn/ltex-ls/releases/download/${ltexLsVersion}/${ltexLsFileNameWithPlatform}.tar.gz"
gunzip "${ltexLsFileNameWithPlatform}.tar.gz"
tar xf "${ltexLsFileNameWithPlatform}.tar"
mv "${ltexLsBaseFileName}" "${DATA_HOME}"
ln -s "${ltexLsAppDirectory}/bin/ltex-ls" "${ltexLsBinaryPath}"

# TOML
cargo install taplo-cli --locked --features lsp

# Clean up.
popd
rm -rf "${temporaryDirectory}"

echo "Done."
```
Soft wrap doesn’t exist yet, for example, which makes it nigh on impossible to edit Markdown in it. For that, I’m currently using MarkText. ↩︎
This is not something I thought I’d ever say for a modal editor but Helix gets a lot of things right. While there is definitely a learning curve, I feel like I now think about my code as code instead of as lines of text and characters as I’m working and that’s a good feeling. ↩︎
This is understandable as not everyone needs or wants to install every language server. Especially considering that language servers can be very large. For example, the ltex-ls language server for Markdown is roughly a 265MB download. ↩︎
Stephen is a big fish to fry. (I’m here all week.) Warning: the fediverse is about to get Fryed. Stephen Fryed, that is. Following the recent takeover of Twitter by a proto-fascist billionaire man-baby, people have been fleeing1 to the fediverse2. Among them are folks who, on Twitter, at least, had millions of followers like Greta Thunberg and, more recently, Stephen Fry.3 “Well, surely that’s a good thing? It’ll get everyone talking about the fediverse, decentralisation, and maybe even that Small Web thing you keep harping on about all the time, Aral, no?” Well, yes and no… you see, there is such a thing as too much of a good thing. And, on the fediverse today, that appears to be “engagement when you’re popular.” In fact, it could be deadly (to Mastodon instances, that is). Read on and I’ll try to explain what I mean by using my own account as an example. How to kill a Mastodon (hint: by being chatty when you’re popular) Needless to say, I’m not a celebrity. And yet, on the fediverse, I find myself in a somewhat unique situation where:
I have my own personal Mastodon instance, just for me.4
I’m followed by quite a number of people. Over 22,000, to be exact.5
I follow a lot of people and I genuinely enjoy having conversations with them. (I believe this is what the cool kids call “engagement”.)
Unfortunately, the combination of these three factors creates a perfect storm6 which means that now, every time I post something that gets lots of engagement, I essentially end up carrying out a denial-of-service attack on myself.

Mastodon: denial-of-service as a service?

Yesterday was my birthday. So, of course, I posted about it on my Mastodon instance. It got quite a few replies. And, because it’s only polite, I started replying to everyone with thank-you messages. Oh, no, you poor, naïve man, you. What were you thinking?!… I’ll let my friend Hugo Gameiro, who runs masto.host and hosts my instance, explain what happened next:7

You just get a lot of engagement and that requires a ton of Sidekiq power to process. For example, let’s look at your birthday post… besides requiring thousands of Sidekiq jobs to spread your post through all their servers (you have 23K followers, let’s assume 3K different servers8), as soon as you create the post 3K Sidekiq jobs are created. At your current plan you have 12 Sidekiq threads, so to process 3K jobs it will take a while because it can only deal with 12 at a time. Then, for each reply you receive to that post, 3K jobs are created, so your followers can see that reply without leaving their server or looking at your profile. Then you reply to the reply you got, another 3K jobs are created, and so on. If you replied to the 100 replies you got on that post in 10 minutes (and assuming my 3K servers math is right), you created 300K jobs in Sidekiq. That’s why you get those queues.

So what does that mean if you’re not into the technical mumbo-jumbo? It means I was too chatty while being somewhat popular.

What a traffic jam looks like in Mastodon.

So, what’s the solution? Well, there’s only one thing you can do when you find yourself in such a pickle: scale up your Mastodon instance.9 The problem with that? It starts getting expensive.
Prior to the latest Twitter migration10, I was paying around €280/year (or a little over €20/month) for my Mastodon instance on a custom plan I had with Hugo from the early days. This week, I upped that to a roughly €50/month plan. And that’s still not enough, as my birthday post just showed, so Hugo, kindly, has suggested he might have to come up with a custom plan for me. And yet, the problem is not one that will go away. We can only kick the ball down the road, as it were. (Unless I piss everyone off with this post, that is.) Thankfully, by running my own instance, the only person I’m burdening with this additional expense is me. But what if I’d been on a public instance run by someone else instead?

Musk you?

If Elon Musk wanted to destroy mastodon.social, the flagship Mastodon instance, all he’d have to do is join it.11 Thank goodness Elon isn’t that smart. I jest, of course… Eugen would likely ban his account the moment he saw it. But it does illustrate a problem: Elon’s easy to ban. Stephen, not so much. He’s a national treasure for goodness’ sake. One does not simply ban Stephen Fry. And yet Stephen can similarly (yet unwittingly) cause untold expense to the folks running Mastodon instances just by joining one.12 The solution, for Stephen at least, is simple: he should run his own personal instance. (Or get someone else to run it for him, like I do.)13 Running his own instance would also give Stephen one additional benefit: he’d automatically get verified. After all, if you’re talking to, say, @stephen@social.stephenfry.com, you can be sure it’s really him because you know he owns the domain.

Personal instances to the rescue

My speech at the European Parliament on the problem with Big Tech and the different approaches provided by Mastodon, the fediverse, and Small Web.

Wait, I’m confused… didn’t you say that personal instances were part of the problem? Yes and no: they are and they shouldn’t be. If ActivityPub (the protocol) and Mastodon (a server that adheres to that protocol) were designed to incentivise decentralisation, having more instances in the network would not be a problem. In fact, it would be the sign of a healthy, decentralised network. However, ActivityPub and Mastodon are designed the same way Big Tech/Big Web is: to encourage services that host as many “users”14 as they can. This design is both complex (which makes it difficult and expensive to self-host) and works beautifully for Big Tech (where things are centralised and scale vertically and where the goal is to get/own/control/exploit as many users as possible). In Big Tech, the initial cost of obtaining such scale is subsidised by vast amounts of venture capital (rich people investing in exploitative and extractive new businesses – which Silicon Valley calls Startups™ – in an effort to get even richer) and it leads to the amassing of the centres15 we know today as the Googles, Facebooks, and Twitters of the world. However, unlike Big Tech, the stated goal of the fediverse is to decentralise things, not centralise them. Yet how likely is it we can achieve the opposite of Big Tech’s goals while adopting its same fundamental design? When you adopt the design of a thing, you also inherit the success criteria that led to the evolution of that design.
If those success criteria do not align with your own goals, you have a problem on your hands. What I’m trying to say is: do not adopt the success criteria of Big Tech lest you should become Big Tech.

Bigger is not better

Today, we equate the size of mastodon.social (the instance run by Eugen) with how successful Mastodon (the software created by Eugen) is. This is very dangerous. The larger mastodon.social gets, the more it will become like Twitter. I can almost hear you shout, “But Aral, it’s federated! At least there’s no lock-in to mastodon.social!” This is true. You know what else is federated? Email. Have you ever heard of a little old email instance called Gmail? (Or perhaps the term “embrace, extend, extinguish?”) Do you know what happens to your email if Google says (rightly or wrongly) that you’re spam? No one sees your email. You know what happens if mastodon.social blocks your instance? Hundreds of thousands of people (soon, millions?) do not get a choice in whether they see your posts or not. What happens when your instance of one blocks mastodon.social? Nothing, really. That’s quite a power imbalance.

Decentralisation begins at decentring yourself

Mastodon is a not-for-profit, and I have no reason to believe that Eugen has anything but the best of intentions. However, decentralisation begins at decentring yourself. It’s in the interests of the fediverse that mastodon.social sets a good example by limiting its size voluntarily. In fact, this should be built right into the software. Mastodon instances should be limited from growing beyond a certain size. Instances that are already too large should have ways of encouraging people to migrate to smaller ones. As a community we should approach large instances as tumours: how do we break them up so they are no longer a threat to the organism? If you take this approach to its logical conclusion, you will arrive at the concept of the Small Web; a web where we each own and control our own place (or places).
Sorry, your browser doesn’t support embedded videos. But that doesn’t mean you can’t watch it! You can download Small Is Beautiful #23 directly and watch it with your favourite video player.

Small Is Beautiful (Oct, 2022): What is the Small Web and why do we need it?

Tweet, tweet?

I’m not saying that the current fediverse protocols and apps can, will, or even necessarily should evolve into the Small Web.16 In the here and now, the fediverse is an invaluable stopgap that provides a safer haven than the centralised cesspits of Silicon Valley. How long the stopgap lasts will depend on how successful we are at resisting centralisation. Protocol and server designs that incentivise vertical scale will not necessarily make this easy. However, there are social pressures we can use to counter their effects. The last thing you want is a handful of mini Zuckerbergs running the fediverse. Or worse, to find yourself having become one of those mini Zuckerbergs. I love that the fediverse exists. And I have the utmost respect for the gargantuan effort that’s going into it. And yet, I am also very concerned17 that the design decisions that have been made incentivise centralisation, not decentralisation. I implore us to acknowledge this, to mitigate the risks as best we can, to strive to learn from our mistakes, and to do even better going forward. So to the ActivityPub and Mastodon folks, I say: consider me your canary in the coal mine… «Chirp! Chirp! Chirp!»
After 16 years on Twitter, even I finally deactivated my account and asked for all my data to be deleted last week. It was easy to do as I’ve been on the fediverse for over five years and I’d basically stopped using the hell site almost entirely for over a year. I was just keeping my account active so as not to break over a decade-and-a-half of web links. (Let this be a lesson to you: if you care about not breaking links/content on the web, make sure that you own them instead of Startup Of The Week, Inc.) ↩︎
If you’re wondering what the fediverse is, it’s likely what you call Mastodon. I could write a whole other blog post about why Mastodon is not the fediverse but, thankfully, others have done a great job of explaining the basic concepts already. ↩︎
Greta joined the mastodon.nu instance and already has over 44,000 followers (and is following 50) and Stephen has joined mastodonapp.uk and amassed 27,000 followers in a day or so (and isn’t following anyone in return at the moment). ↩︎
It’s what I’d call a “personal instance”, an “instance of one”, or “a single-tenant instance.” It’s basically a server where only you have an account. Because Mastodon is federated using the ActivityPub protocol, you can communicate with anyone on any other instance but being on an instance of one is very different to being on an instance of hundreds of thousands, like mastodon.social, for example. ↩︎
On Twitter, my follower count was at roughly 42,000 people before I deactivated my account. Keep in mind that that was after being on the site for 16 years. Similarly, I’ve been on the fediverse for over five years, basically since the beginning, and Stephen Fry has amassed more followers in a single day. This is important to remember as we use my experience to forecast what the instance he joined – mastodonapp.uk – will begin to (or has already begun to) experience. ↩︎
Given the design decisions underpinning the ActivityPub protocol and Mastodon server. ↩︎
When I asked Hugo if I could quote him for this post, he said yes but with the following caveat, which I’m including here so you don’t go off complaining to him (if you have any issues with this post, you can complain to me instead): “You can quote my explanation but as an illustration because there are several aspects that are not 100% accurate and others that I probably don’t even know. For example, if a post/reply includes an image, are two Sidekiq jobs created or only one? If it includes multiple images? If there is a custom emoji that has not federated yet in those posts/replies, etc. But the explanation is probably not very far from the reality in terms of the general functionality of the protocol.” ↩︎
After posting this article from my instance, I watched the Sidekiq queue during the original posting and subsequent replies and it would appear that, currently, the actual number of unique instances my followers are on is ~1,377. ↩︎
Or start blocking followers, or unfollowing people, or staying quiet. None of which are viable options. (Especially the last one… I mean, do you even know me?) ↩︎
Somewhat ironically, I’m apparently responsible for one of the first Twitter migrations (although on a much smaller scale than today’s), back when Mastodon was just starting out. I guess you could say all this is just karma. ↩︎
He’d then get his followers on Twitter to join as many different Mastodon instances as they could and follow him. Finally, with his bootlicker army of incels in place, he’d just have to get chatty with them and watch Eugen’s instance burn to the ground under the weight of it all. ↩︎
On an instance run by donations, for example, this would mean those donations would go to subsidise his account far more than the other accounts on the instance. That is, if they even managed to cover the cost of it to begin with. In Big Tech, like Twitter, the burden placed on the system by celebrities and other exceedingly popular accounts (journalists, etc.) is just a cost of doing business. The cost is subsidised by the corporation because these accounts are the bait that attracts the true assets of the business: you. In Big Tech, you are the livestock being farmed. They just have to keep you distracted enough with enough sparkly things that you don’t realise you’re being farmed. Also, there are economies of scale present in centralised Big Tech systems that simply do not (and should not) exist in decentralised systems. ↩︎
Unlike me, I don’t think Stephen would have to sweat the cost of the server although it will be considerably more than the heavily-subsidised $8/month that Elon would have been charging him. ↩︎
The term “user” is an othering. In Small Tech, we call people “people.” ↩︎
Some would say “tumours”. Hello, I am Some. ↩︎
If anything, the design decisions behind servers like Mastodon show that we need radically different approaches and first-principles design optimised for single-tenant servers and peer-to-peer connections if we want to nurture a Small Web. ↩︎
And I have been from the start. ↩︎
On October 12th, 2022, we received the following form letter, informing us that our NLnet Grant Application (original application, follow-up questions and answers) for Domain has been rejected.

Dear Aral,

I'm sorry to have to inform you that your project "Domain" (2022-08-099) unfortunately was not selected for a grant. This in no way means that we do not see the value of the work you proposed. We get many more excellent proposals than we can grant with our limited means, so competition is extremely tough. There are unfortunately many talented independent researchers, developers and community leaders that need funding. In addition to that, the specific scope and rules of play of each call pose (sometimes artificial) limits in our selection that mean excellent projects leave empty-handed - because they just do not fit well enough within a certain fund.

Again, we are very sorry that we cannot offer you support for your good efforts. We hope you are not discouraged, and are able to secure funding elsewhere for the project, for instance from any of these organisations: https://NLnet.nl/foundation/network.html or through other calls from the Next Generation Internet: https://www.ngi.eu/opencalls

And do trust that we have your funding need and the outline of your project in the back of our head from now on, and so we might come back to you if an opportunity arises (unless you asked us to destroy your contact details in the application form, in which case we will do so). If you should have any questions, please let us know. And in case you have another good project in mind in the future, do not hesitate to submit again!

Kind regards, on behalf of NLnet foundation,

Michiel Leenaars
Strategy Director

This means that, to date, we have received and continue to receive no European Union funding for our work at Small Technology Foundation. I’m also done wasting time writing grant proposals to organisations that clearly do not care about supporting the work we do.
Instead, there is code to write and we will continue to work for the common good – as we have been doing for almost the past decade – even though we are not funded from the common purse. If you'd like to help us continue to exist, please feel free to become a patron or make a donation. Like this? Fund us! Small Technology Foundation is a tiny, independent not-for-profit. We exist in part thanks to patronage by people like you. If you share our vision and want to support our work, please become a patron or donate to us today and help us continue to exist.
Read more
This is the first update to our NLnet Grant Application for Domain.

You applied to the 2022-08 open call from NLnet. We have some questions regarding your project proposal Domain.

Thank you for getting back to us. Please find the answers, inline, below.

You requested 50000 euro, equivalent of one year of effort. Can you provide some more detail on how you arrived at this time estimate? Could you provide a breakdown of the main tasks, and the associated effort? What rates did you use?

As a funding body, I can see why the question of funding amount is the primary question. However, as someone who has eschewed mainstream income and other benefits to work for the common good, it is the least important one for me so I'll leave this to the end and tackle your other questions first.

Can you compare Domain to chatons.org and Libre.sh (note the former community also already uses the word Kitten…)?

Let me tackle them separately:

Domain vs. Chatons

CHATONS is an excellent initiative by the lovely folks at Framasoft – who have supported our work in the past with translations of our articles into French – to create "a collective of small structures offering online services (e.g. email, web hosting, collaborative tools, communication tools, etc.)". They call these structures "hosters." Within the CHATONS model, Domain is a tool to enable the creation of more hosters. Everyday people can then use the Domain instances of those hosters to set up their own small web places. Once Domain is ready for use, CHATONS is a collective that I can see us considering becoming a part of with Domain and Kitten (after all, as you point out, they're already called Kittens so even that fits).

Domain vs. Libre.sh

Libre.sh, from what I can tell based on their web site, is a tool for people with technical knowledge to set up web sites using Kubernetes or Docker Compose. These are enterprise technologies used by Big Tech and involve a high level of complexity.
In the Getting Started guide for the Kubernetes version of libre.sh, you are shown how to deploy a cluster with 9 machines: "3 masters, 3 ingresses, 3 compute". Needless to say, this is a completely different use case than Domain. The goal of Domain is to enable organisations to become small web hosts, where they can provide a simple interface for everyday people without technical knowledge to set up their own small web places (sites/apps). As such, you can think of it as "Digital Ocean in a box" for anyone who wants to run their own small web host. Domain provides a holistic solution that integrates the necessary components of provisioning a VPS, managing subdomains (DNS), installing web applications, and (optionally) charging for the service (or otherwise controlling access to resources). The problem libre.sh solves is how do we "host free software at scale?" while the problem Domain solves is "how do we enable people to host free software that doesn't scale?" (in other words, small web places). And, crucially, how do we make it possible for any organisation to become such a host? And easy for people without technical knowledge to set up small web places using it?

If one would use Domain to install e.g. Yunohost or Sandstorm, what would be the added advantage compared to deploying a preconfigured image of these directly at a VPS hosting company (which is already on offer with some hosters)?

You wouldn't use Domain to install Yunohost or Sandstorm as they are different approaches to solving similar problems. Both Yunohost and Sandstorm offer dashboards for installing existing web applications. These are applications that, for the most part, have been created in the traditional multi-user, Big Tech design, with out-of-process databases, etc. Domain, on the other hand, is for hosting single-tenant Small Web applications. The kind that you can create, for example, with Kitten.
Domain does not aim to support every type of web application but a very specific type (single-tenant, small web application). This is its greatest strength, as this focus and control means that we can simplify the experience and reduce the system requirements throughout the stack and lifecycle (e.g., when it comes to automatic updates and maintenance). Yunohost, Sandstorm and similar projects are great in that they allow more people to install and use existing free/open source "multi-user" web apps that have almost entirely been designed with Big Web architectures. Domain aims to do the same thing for Small Web apps while reducing complexity and the need for technical knowledge and improving the overall experience.

The Domain approach could be interpreted as poor man's hosting and/or entry-level shared hosting (which is already a very competitively priced economic area). In order to cut prices, there are definite significant tradeoffs which may bite the user in the longer term.

I wouldn't call Domain's approach to democratising hosting by enabling independent organisations to become small web hosts "poor man's hosting" any more than I would call the platform cooperative movement "poor man's corporations". The goal is to democratise not just the ownership of small web places but to do so without centralising them at a single host (e.g., us). While price is an important factor for commercial providers (in that there is likely a psychological number beyond which it would be deemed too expensive for a small web place), it is definitely not the primary focus. As mentioned in the original funding application, Domain can be run as a private instance (e.g., internally for organisations, neighbourhoods, families, etc.) or using a token-based system (where, for example, a municipality can issue tokens that citizens can exchange for their own small web places instead of using legal tender).
Finally, the type of hosting integrated into Domain is Virtual Private Server (VPS), not shared hosting.

Borrowing a subdomain from someone doesn't offer much legal standing, whether it is on the public suffix list or not.

Borrowing a subdomain from an organisation offers the exact same legal standing as borrowing a subdomain from a commercial top-level domain (like .com, etc.) provider in that they are both subject to contract law. So whatever terms are in the contract are the legal standing that is offered. The difference between a commercial top-level domain provider and organisations that will be running Domain (like our not-for-profit Small Technology Foundation) is the reason why the domains are being offered. In the former, it is to make a profit. For us, it's to provide people with a location that they can be easily reached on the Web with.

When a host folds, the entire reputation of its user base is immediately destroyed (unlike the case of a TLD, which has an escrow arrangement enforced by their ICANN contract). When someone forgets to back up and things crash, users have little resort. And small payments are expensive to process, eating further into the "Domain" hosters' margins. What countermeasures do you propose for such scenarios to give users peace of mind? Is there some form of service portability?

Indeed, if a host shuts down, this would take down everyone they host. This is a problem inherent with decentralisation and we see it when a fediverse server goes down too. This is not to say that the solution is then to ensure that only a handful of multi-billion-dollar commercial hosts should exist. Just as the solution to fediverse servers being shut down is not to say that everyone should use Twitter instead because we can trust it'll be around in one form or other.
Instead, possible mitigations for this fact of life include:

- Supporting small web hosts from the commons
- Encouraging as many small web hosts as possible so that if one goes away the damage is limited
- Ensuring that it is as easy as possible to move between small web hosts (portability)

While the first one would involve political will, the second and third are issues that Domain exists to tackle. On the topic of payments, Domain initially supports Stripe which, to the best of my knowledge, does not impose higher fees for the sorts of amounts we're talking about. Microtransactions (which is not what this is) might be a different matter but do not concern our use case for Domain. Finally, while the initial setup is on a subdomain, it will be easy to configure the site to use any domain (on a regular TLD) after the fact. The initial setup using a subdomain is a design decision to both enable people to have a small web place without paying separately for a commercial domain name and to keep the setup time as quick as possible.

If people are expected to pay 10 euros a month, that puts them in the realm of cPanel and Plesk/managed services, which have been common with hosting companies for decades – and at prices going down to below 1 euro per month for e.g. WordPress hosting, for which tens of thousands of tutorials and extensions exist to make everything possible. What planned features are supposed to lure people over from that competition (and the marketing power of players like Strato)?

First off, to clarify: the €10/month number is an initial estimate. It's based on our own research into what could make Small Technology Foundation sustainable by hosting a Domain instance at small-web.org. It's also based on my belief that it's likely the maximum amount you could charge that still feels "small." And, again, Domain instances can be private and token-based.
The commercial payment aspect is a sad necessity for organisations like ours that need to survive under capitalism today while hopefully creating systems that will mean that eventually we will have kinder and fairer systems tomorrow where we better understand the value of the commons and support it accordingly.

If cheap is the main target, would it not be more convenient to point people to dot.tk et al, which gives them a 'real' domain name (discussion about their security aside) at no cost, conveniently exposed by an API that allows for automation?

Cheap really isn't the main target. If anything, affordable is. And, ideally, in time, we'd love to see small web hosts supported from the common purse for the common good. But until that time, we aim to prove, even within the success criteria of the current system, that we can be sustainable. If someone wants to use a dot.tk name (not linking to them as their site doesn't use TLS, which doesn't give me a lot of confidence about them), they will be able to.

You state that a domain on the public suffix list would "provide all the privacy and security guarantees that any other top-level domain can". Is that actually true?

Yes, like the other statements in our funding application and that we make in general, it is true :) (If we wanted to get into lying, we'd be working in the mainstream. You can make a lot of money there doing that. It's actually quite a sought-after skill.) To reiterate, from the official description, it allows browsers to, for example:

- Avoid privacy-damaging "supercookies" being set for high-level domain name suffixes
- Highlight the most important part of a domain name in the user interface
- Accurately sort history entries by site
When a domain is on the Public Suffix List, it is, for all intents and purposes, a top-level domain and is treated as such by browsers. This is a very useful hack for creating domains that are not part of the commercial top-level domain business and is one of the fundamental design decisions in Domain that sets it apart from other initiatives in the area.

A lot of information can already be learned just from the DNS requests the second-level nameservers would see. How do you intend to deal with e.g. TLSA records, DNSSEC, SPF, etc.? What kind of alternative services will be delivered (for instance e-mail?) How will e.g. spam and blacklisting impact sibling Domain-hosted efforts?

As a project from a not-for-profit that exists to protect privacy, Domain includes privacy policies for hosts based on data minimisation. While the initial service providers for DNS, VPS, TLS (DNSimple, Hetzner, Let's Encrypt), etc., do have the means to gather data, this is true in general (not specific to Domain) and they are bound by the rules of GDPR.

Meanwhile, because of ACME, there has been a lot of work on automation of DNS challenges (https://github.com/acmesh-official/acme.sh/tree/master/dnsapi). Would it not make sense to seek a similar approach for the DNS management, where one does not depend on hardcoding a single US-based domain name company (DNSimple) as exclusive provider?

DNSimple is not used for TLS certificates. Kitten uses Auto Encrypt (another of our free and open source libraries) to automatically provision TLS certificates using Let's Encrypt's HTTP-01 challenges. DNSimple is used for setting up the domain records for subdomains. And, again, DNSimple is simply the first provider we are building support for to get Domain up and running. The goal is to support others that have APIs and to thus abstract out the APIs, hopefully commoditising these services to some degree in the process.

How would applications be packaged and maintained/security hardened?
Is this something you have experience with?

Applications are "packaged" (insofar as we can use that term for it) in Domain using cloud-init. Currently, this is on standard Ubuntu 22.04 instances with unattended security updates enabled. However, we are keeping our eyes on immutable operating systems like CoreOS to aid in OS updates in the future. Much of the security of Kitten and Small Web sites comes from their simplicity and small attack surface. There are fewer moving parts, less code, and basic standards being used.

Can you elaborate on deployment and maintenance requirements after initial deployment – which tend to form a continuous drain on resources at the project level and the user level?

Small Web sites/apps built on Kitten and deployed with Domain will be entirely self-maintaining. Kitten's installation process is based on git and it also clones git repositories to deploy apps. There is also work underway to implement:

- Automatic updates for the deployed app (via git)
- Automatic updates for Kitten itself (again via git)

Eventually, when immutable server operating systems become available, the goal is to have automatic major version operating system updates as well, for completely hands-off maintenance. (In the meanwhile, we will be deploying to LTS versions of Ubuntu, which will give us quite a few years of headroom on this.)

You mention that setup can be done in under one minute, but surely beyond that someone will have to do the hard work?

No, that's the whole idea: there isn't. When you install a small web app and, say, 45 seconds later, it's ready, it's actually ready. You hit your domain. You start using it. That's all there is to it.

Is there a threat analysis that you've considered during the design phase – if one of the fellow hosted community members doesn't update their version of X, how do you prevent compromise of the rest? Is configuration and user management separate from delivering bits?
Would Domain packages offer reproducibility, like Nix/Nixpkgs (which has a vast collection of software, see repology.org)?

The security model of Domain follows, in the words of Joanna Rutkowska (of Qubes, etc.), "security through distrust". This results in some core design decisions:

- Domains are on the Public Suffix List (so no one can set supercookies, etc.)
- Every person gets their own virtual private server (we don't use shared hosting)
- Every site is automatically protected by TLS

And, to reiterate, some related properties, even if they may not seem to immediately be security-related:

- You can use your own domain.
- You can easily move away from a host.
- All code is free and open.

Regarding reproducibility, since small web apps are installed via git, there isn't necessarily a build process involved. Where one is (e.g., via npm install), it would be up to the apps themselves to ensure that dependencies are locked and loaded from trusted sources and that commits and tags are signed.

You requested 50000 euro, equivalent of one year of effort. Can you provide some more detail on how you arrived at this time estimate? Could you provide a breakdown of the main tasks, and the associated effort? What rates did you use?

Finally, to return to the first question: I work full time at Small Technology Foundation and my work these days is full-time on Kitten and Domain. So, let's say I work 40 hours a week on average (it can vary and I often work on the weekends too): €50K/yr comes down to about ~€26/hour. Let's put this into perspective: over a decade ago, when I was doing regular development work as a contractor, I was charging €100/hr. My partner, who is contracting with Stately (a startup), makes more than double this amount a year and this is the main source of income that is currently sustaining Small Technology Foundation (the two of us and our dog).
Previously, I've sold two family homes in Turkey and we've relied on a combination of sales of our tracker blocker (Better, now retired) and fees from conference speaking to scrape by. All this to say that I don't actually care how much funding we get. (I wonder if this is why they don't usually let me write the funding applications?) Whether it's €50K or €0 or something in between, we will keep working on Kitten and Domain and we will keep working on our vision for a Small Web owned and controlled by people, not corporations. We've always found a way and we will continue to do so. What would be nice, however, is to feel like we are supported. We have been working for the common good for almost a decade now and it would be nice to have it funded from the common purse to some degree at least. And it would also be nice to have some stability so we can keep working on this without worrying about how we're going to exist in X months' time. While I understand that the funding from NGI/NLnet is mostly project (and maybe even feature)-based, I do believe we need to think longer term and support folks like ourselves who are essentially carrying out research and development. Silicon Valley understands the value of funding teams and then allowing them to pivot, etc., as they explore a problem domain and learn more about it. Sadly, it does so with the worst possible success criteria (how best to farm you for your data and monetise it). As I mentioned during one of my talks at the European Parliament, I hope that we can do the same thing for folks working on technology for the common good. So I'm not going to give you an hour-by-hour breakdown of tasks because I don't know what those are going to be beyond a few days' time. You can keep track of the current ones on the issue trackers of the Kitten and Domain projects. Apart from that, I get up in the morning and work on Kitten and Domain and that's what I'll be doing for the foreseeable future.
Please support us financially if you feel that what we’re working on should exist and you’d like us to exist long enough to make sure that it does. If you have any other questions, please don’t hesitate to get in touch. Like this? Fund us! Small Technology Foundation is a tiny, independent not-for-profit. We exist in part thanks to patronage by people like you. If you share our vision and want to support our work, please become a patron or donate to us today and help us continue to exist.
Read more
Privileged ports, toffs of the Linux world. Kitten is a small web server that runs as a user-level service and would never need elevated privileges if it weren't for one archaic anti-security feature in Linux that dates back to the mainframe era: privileged ports.

Back to the future

As it was in Unix in the 1980s, so it is now: any process that wants to bind to a port less than 1024 must have elevated privileges. These ports are known as "privileged ports." While this was a security feature in the days of dumb terminals, in the age of the World Wide Web it is a security vulnerability. Privileged ports lead to dangerous security practices, like server processes forgetting to drop privileges and being run as root. They've been obsolete for quite a while: macOS removed them as of Mojave1, and there's no comparable concept on Windows either. Heck, they apparently even cause climate change. Not to mention, they're a world of unnecessary hurt to work around. In fact, even their original implementation in BSD was a hack:

if (lport) {
    u_short aport = htons(lport);
    int wild = 0;
    /* GROSS */
    if (aport < IPPORT_RESERVED && u.u_uid != 0)
        return (EACCES);
    …

The BSD people knew this was a hack; they just did it anyway, probably because it was a very handy hack in their trusted local network environment. Unix has quietly inherited it ever since. — The BSD r* commands and the history of privileged TCP ports

Listen to what some other folks have to say on the subject:

"So we have web servers, and all other servers whose standard ports happen to fall below 1024, expecting to be started as root, bind(2) to their service address, and then "drop privileges". At this point, you must think I'm shitting you, but I'm not. This is for real. Failure to drop privileges after binding to listen port is a whole category of vulnerability.
It’s like throwing egg on the stairs every morning, and several times later each day carefully classifying various accidents in the accident log as “slipped on eggy stairs”. Stop throwing egg on the stairs!” — The Persistent Idiocy of “Privileged Ports” on Unix
This port 1024 limit is a security measure. But it is based on an obsolete security model and today it only gives a false sense of security and contributes to security holes. — Why can only root listen to ports below 1024?
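The bind-as-root-then-drop-privileges pattern those quotes criticise can be sketched in a few lines of Python. This is an illustrative sketch only, not code from Kitten or any real server; the uid 65534 (conventionally the "nobody" user) is an assumption for the example:

```python
import os
import socket

def bind_then_drop(port, unprivileged_uid=65534):
    """Sketch of the error-prone classic pattern: bind to a privileged
    port while (possibly) privileged, then drop privileges."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("127.0.0.1", port))
    except PermissionError:
        # Unprivileged process, privileged port: the kernel refuses the bind.
        s.close()
        return None
    if os.geteuid() == 0:
        # Forgetting (or botching) these two lines is the entire
        # "failure to drop privileges" vulnerability class.
        os.setgid(unprivileged_uid)
        os.setuid(unprivileged_uid)
    return s
```

Called with port 80 by a regular user on a stock Linux system, this returns None; that failure is exactly what pushes administrators to start servers as root in the first place.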
These are the reasons why I suspect the 1024 limit was imposed:

- Don't let users run system-level services on the mainframe.
- Don't let the users hog ports for important services on the mainframe.
- Don't let the users run a bogus service to steal logins on the mainframe.

…and here's why I think these aren't relevant anymore:

- The mainframe is now the desktop.

…I really don't get it

The workaround

On modern Linux systems, you can configure privileged ports using sysctl:

sudo sysctl -w net.ipv4.ip_unprivileged_port_start=80

However, that setting does not survive a reboot. You can also configure the setting in a persistent way using a configuration file, e.g. by creating a file called /etc/sysctl.d/99-reduce-unprivileged-port-start-to-80.conf with the following content:

net.ipv4.ip_unprivileged_port_start=80

To remove all privileged ports, you can set that value to 0. For our needs, 80 will do, as it means our web server can bind to ports 80 and 443 without requiring superuser privileges. In fact, the Kitten installer does just this. But even that has issues. For one thing, it doesn't work in rootless containers (e.g., if you're running on an "immutable" Linux distribution like Fedora Silverblue2) because the configuration file gets added to the container, which doesn't have systemd running to apply the sysctl.d settings, so the configuration doesn't get applied at boot. So we're back to having to gain temporary privileges and drop them just to alter this configuration setting every time the server is run. What a pain in the ass.

The fix

The fix is easy: ship Linux distributions so that privileged ports start from 80 to begin with.3

net.ipv4.ip_unprivileged_port_start=80

Sure, this could also be set to zero, but setting it to 80 would fix the number one use case today, which is to allow web servers to bind to ports 80 and 443 without requiring superuser privileges. So I'd be happy with either solution.
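You can observe the effect of this limit from any unprivileged process. Here's a small illustrative Python sketch (my own, not part of Kitten) that checks whether the current process is allowed to bind a given TCP port:

```python
import socket

def can_bind(port):
    """Return True if this process can bind the given TCP port on loopback."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("127.0.0.1", port))
        return True
    except OSError:
        # EACCES for privileged ports when unprivileged;
        # also covers ports that are already in use.
        return False
    finally:
        s.close()

# Port 0 asks the kernel for an ephemeral port, which is always allowed.
# Whether port 80 works depends on your privileges and on the value of
# net.ipv4.ip_unprivileged_port_start (or your OS's equivalent).
print("ephemeral port:", can_bind(0))
print("port 80:", can_bind(80))
```

On a stock Linux system an unprivileged process will see True for the ephemeral port and False for port 80; after lowering net.ipv4.ip_unprivileged_port_start to 80, both binds succeed.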
And for the three folks in Finland who administer multi-user Linux instances and rely on privileged ports for their mainframe-era security properties, they can always run sysctl and set their port limit to 1024, as it was before. Yay, everyone's happy! This is such an easy fix, and one that would improve security across the board, that I hope Linux distributions will start implementing it as soon as possible. All it takes is one major distribution to start the trend and the rest will follow. Please feel free to open issues in your distribution of choice and talk to folks you know to get them to do this. #Linux #PrivilegedPortsMustDie is what I'm saying. It's time to move Linux out of the mainframe era… kicking and screaming, if need be. Like this? Fund us! Small Technology Foundation is a tiny, independent not-for-profit. We exist in part thanks to patronage by people like you. If you share our vision and want to support our work, please become a patron or donate to us today and help us continue to exist.
I’m just waiting for someone to tell me the folks at Apple don’t understand security and that macOS is less secure now because they removed privileged ports. ↩︎
And these so-called “immutable” distributions are the future of Linux, not just on the desktop but on servers also (see, for example, Fedora CoreOS, etc.) So the sooner we remove the archaic security anti-feature that is privileged ports, the better. ↩︎
I mean, ideally, the kernel could implement this fix and be done with it for every Linux distribution but I have no idea how to get the kernel folks to implement something. I have a feeling it involves a lot of text-only emails and being told how dumb I am in no uncertain terms by multiple people. So, yeah, if anyone else wants to take that one, please be my guest ;) ↩︎
Read more