Lava Stream

Another random hack today, but this one sort of developed over the week.

It all started last Monday. I was doing an exploratory task around single sign-on. I read Scripting News that morning, and I was thinking about the style of writing Dave Winer has adopted for his blog: short notes made across the day, sort of like a running commentary on what he’s working on and what he’s thinking. I wondered if this would work for my job: having a way to write your thoughts down, even if they’re rough, so you can build atop them. It also helps when you need to look back later on what you were thinking or working on.

I’ve got an Obsidian vault for my work notes, and I do use the Daily Notes feature quite a bit, but it’s not conducive to the running-commentary style of journaling I wanted. There is a hosted solution, called Memos, which could work. It’s a bit like a Twitter-style blogging platform, but with the ability to keep posts private.

So that Monday I deployed an instance using Pikapod and used it for my work. It did the job, in that I had a place to jot notes down as they came to me. But despite how nice the app feels to use, it did have some shortcomings.

The first is the whole split-brain problem of having two places to write notes: where should I put a thought? Really, they should go in Obsidian, as that’s where all my other notes currently live. And although I trust Pikapod and Memos to be relatively secure, notes about what I do for work really shouldn’t be on an external server. This became evident on Tuesday, when I wrote this note in Memos:

Something similar to what this is, except each note goes into the daily note of Obsidian. Here’s how it will work:

  • Press a global hot key to show a markdown notepad
  • Enter a note, much like this one.
  • It will be written to the Obsidian daily notes, under the “Notes” header

Nothing happened Wednesday, but on Thursday I decided to explore this idea a bit. So I whipped up something in Xcode to try it out. Here’s what I’ve got so far:

The main window of the project, with a text field at the top, the date in the middle, and a series of notes with timestamps on the bottom.

The way it works is you enter a note in the top pane (which is just a plain text editor) and press Cmd+Enter to save it. It will appear in the bottom pane and will be written to the daily note in your Obsidian vault.

An Obsidian Daily Note with the same notes that appear in the main window written out under a Notes header

The notes are written to Obsidian in HTML. At the moment you also need to write the notes in HTML, but I’m hoping to support Markdown. It would be nice just to write the notes out as Markdown in Obsidian as well, but I want some way to delineate each note, and HTML seems like the best way to do so. Each note is basically an article tag with a date-stamp:

<article>
  <time datetime="2023-07-13T01:45:57Z">[11:45:57 am]</time>
  This is a pretty stupid app, but might be useful. Will save me the need
  to run that notes service.
</article>

But sadly Obsidian doesn’t parse Markdown content within HTML tags. That’s understandable, I guess, but it would be nice if this changed.

Anyway, we’ll see how this works. I’m calling this little hack Lava Stream at the moment, as an allusion to the geological meaning of Obsidian. It’s meant to be an easy way of capturing a stream… of thoughts… into Obsidian (get it?)

Like most Random Hacks here, I’m not sure how long I’ll use it for¹, or whether it’ll go beyond the pile of awful, crappy code it currently is, not to mention the ugly styling. I’ll spend more time working on it if I continue to see value in it. That’s not a given, but I think it shows promise. I’ve been thinking about something like this for a while, but storing work-related things on another server seemed like the wrong thing to do. Having a means of writing this way, using Obsidian as the data store, seems like a pretty good way of doing it.


  1. I am still using that Lisp-based Evans wrapper I mentioned last week. ↩︎


A Lisp-based Evans Wrapper

I wanted an excuse to try out a Lisp-like embedded language library for Go that was featured in Golang Weekly. I found one today when I was using Evans to test a gRPC endpoint and wanted a way to automate it. I hacked up something in 30 minutes that takes a method name and a Lisp structure, converts it to JSON, and uses Evans to send it as a gRPC message.

As the afternoon progressed, I added facilities to send HTTP GET and POST requests with JSON request bodies, plus some facilities to set global options.

Here’s a sample script:

// Set some global options
(set_opt %grpc_host "www.example.com")
(set_opt %grpc_port "8080")

// Make a gRPC call. This will use evans to perform the call
(call "my.fancy.Grpc.Method" (hash
    example: "body"
    message: "This will be serialised to JSON and sent as the body"))
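Under the hood, `call` just needs to turn the hash into JSON and shell out to Evans. Here’s a minimal Go sketch of that glue — the function names are mine, and the exact evans flags should be checked against its docs:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildEvansArgs assembles the argument list for a non-interactive
// evans invocation. evans can accept the request body as JSON on
// stdin in this mode (check the evans docs for the exact flags).
func buildEvansArgs(host, port, method string) []string {
	return []string{"--host", host, "--port", port, "cli", "call", method}
}

// hashToJSON converts the hash captured from the script into the
// JSON body that evans will send as the gRPC request.
func hashToJSON(hash map[string]any) (string, error) {
	b, err := json.Marshal(hash)
	return string(b), err
}

func main() {
	body, _ := hashToJSON(map[string]any{"example": "body"})
	args := buildEvansArgs("www.example.com", "8080", "my.fancy.Grpc.Method")
	// In the real tool this would go to exec.Command("evans", args...)
	// with body piped to stdin; here we just print both.
	fmt.Println(args, body)
}
```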

Another script showing the HTTP methods:

// The HTTP methods don't print out the response body by default,
// so add a post-request hook to pretty print out the JSON.
(defn hooks_after_response [resp]
    (print_json resp))

// Make a HTTP GET request to a JSON endpoint.
// The JSON body will be converted to a hash so that the data can be useful
(def user (rget "https://example.com/user/someone"))
(def userName (hget user %name))

// Make a HTTP POST with a JSON body.
(hpost "https://example.com/user/someone" (hash
    new_name: "another"))

It’s a total hack job, but it already shows some promise. Evans’ REPL is nice but doesn’t make it easy to retest the same endpoint with the same data multiple times (there’s a lot of copying and pasting involved). For those purposes, this is a little more satisfying to use.

Now that I’ve had a chance to try Zygomys out, there are a few things I wish it did.

First, I wish it leaned more into the Lisp aspect of the language. The library supports infix notation for a few things, which I guess makes it easier for those who don’t particularly like Lisp, but I think it compromises some of the Lisp nature of the language.

For example, lists can be created using square brackets, but there’s no literal syntax for hashes. Not that there is one in Lisp either, but Lisp derivatives like Clojure use square brackets for vectors and curly brackets for hashes. Curly brackets are reserved for inline code blocks in Zygomys, so there’s no way to use them in a more functional context. I suppose something could be added — maybe square brackets with a prefix, #["key" "value"] — but it feels like a missed opportunity.

Another is that there’s no way to use dashes in identifiers. This may just be an oversight, but I wonder if the infix notation support complicates things here as well. It would be nice to use dashes instead of underscores. I don’t personally like the underscore: I know it’s just a matter of pressing Shift, but when writing identifiers in lowercase anyway, a dash feels a lot more natural.

Finally, on the Go API front, it would be nice to have a way to call functions defined in Zygomys from Go, much like those hook functions in the sample above. I think this is just a matter of documentation, or of adding a method to the API. I see no reason why the engine itself can’t support this, so I’m happy for this to come down the line.

But in general this library shows promise, and it was fun to build this tool that uses it. Of course, we’ll see if I use this tool a second time when I need to test a gRPC endpoint, and I don’t just hack up yet another one that does essentially the same thing.


A Tool That (Almost) Solved My Step Function Woes

I reached the end of my tether at work today on the task I was working on. The task involved crafting an AWS Step Function with several steps. Each step on the critical path contained some error handling, and several of them contained cleanup logic that had to be called by a bunch of other steps. This cleanup sequence is relatively complicated, and I’ve raised a number of PRs to my colleagues which have come back with requests for changes.

I’ll admit that I may have been a little sloppy with this change. But what’s not helping matters is the representation of Step Functions as a finite state machine written in YAML. I’ve written about my issue with YAML being used as some weird general-purpose programming language, and it applies here as well. But a contributing factor is the level at which I’m defining this Step Function. Fearing the sunk cost of the work I’d done so far, I figured I’d just make the adjustments as they came. But now my patience has run out, and I figured it was time to try a different approach.

The way to build an AWS Step Function is to define a finite state machine (FSM), which is basically a set of states linked together to form a graph. Doing this manually with a handful of states is relatively simple, but when you start to consider proper error handling and clean-up logic, such as reversing changes to DynamoDB records when something goes wrong, it can get quite complicated. The thing is, this is something computers have been able to do for us for a long time. A finite state machine is a subset of what a Turing-complete language can express, so if a regular programming language can represent Turing-complete programs, a subset of that language should be able to represent an FSM.

So I spent some time today trying to do just that. My idea was to build a pre-processor that takes a Step Function definition written in a form that looks like a regular programming language, and translates it to the YAML FSM structure that AWS actually uses. Spoiler alert: I only got about halfway through, but the seeds for something that could be used down the line are there, so it wasn’t a total waste of time.

The Design

There’s really nothing special about this. At this very early stage, the step function is simply written as if it were a regular programming language. The tool reads this, produces a graph representing the FSM, and generates the YAML file. An example of how this language looks is given below:

pass(name = "Pass1")
pass(name = "Pass2")

Here, we have a simple step function with two states, with one that will run after the other. Running it will produce the following YAML file:

Comment: This is a comment that is actually hard coded
StartAt: Pass1
States:
  Pass1:
    Type: Pass
    Next: Pass2
  Pass2:
    Type: Pass
    End: true

In this scenario, we’re using the Pass state, which simply succeeds, so this step function is pretty useless. But it does give a good demonstration of the eventual goal of the preprocessor: to automatically do the wiring between the various states so that you don’t have to. If I were to put Pass2 above Pass1, the generated YAML would reflect that: Pass2 would have Next: Pass1, and Pass1 would have End: true.
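For a linear sequence, the wiring logic itself is straightforward: each state points to the one that follows it, and the last gets End: true. Here’s a toy Go sketch of that step — the struct and function names are mine, though the emitted keys mirror the Amazon States Language:

```go
package main

import (
	"fmt"
	"strings"
)

// state is a single FSM state parsed from the input program.
type state struct {
	Name string
	Type string
}

// wire produces the States section of the YAML document, linking
// each state to the next and marking the last one with End: true.
func wire(states []state) string {
	var b strings.Builder
	for i, s := range states {
		fmt.Fprintf(&b, "  %s:\n    Type: %s\n", s.Name, s.Type)
		if i < len(states)-1 {
			fmt.Fprintf(&b, "    Next: %s\n", states[i+1].Name)
		} else {
			b.WriteString("    End: true\n")
		}
	}
	return b.String()
}

func main() {
	states := []state{{"Pass1", "Pass"}, {"Pass2", "Pass"}}
	fmt.Printf("StartAt: %s\nStates:\n%s", states[0].Name, wire(states))
}
```

Reordering the input slice is all it takes to re-wire the whole sequence, which is exactly the property that makes the preprocessor worthwhile.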

The usefulness of the tool comes when we start considering failure modes. These can be expressed as normal “try-catch” blocks, a lot like many languages that exist today:

try {
  pass(name = "DoSomething")
} catch {
  pass(name = "SomethingFailed")
} finally {
  pass(name = "DoAlways")
}

This will produce the following Step Function YAML file:

Comment: This is a comment that is actually hard coded
StartAt: DoSomething
States:
  DoSomething:
    Type: Pass
    Catch:
      - ErrorEquals: ["States.ALL"]
        Next: SomethingFailed
    Next: DoAlways
  SomethingFailed:
    Type: Pass
    Next: DoAlways
  DoAlways:
    Type: Pass
    End: true

Once again, this is a relatively simple graph at this stage. But imagine it growing to several nested try-catch blocks, each with slightly different error handling and cleanup logic. Manually keeping the various states wired correctly would be quite difficult, and it only makes sense to offload this to a tool that can do it for us.

The Implementation

A few notes about how the tool itself was built:

  • It was hacked together in Go, which is my language of choice.
  • The parser was built using Participle, which is an excellent library for building parsers from annotated structs. If you work with Go and ever considered building a DSL, I would highly recommend this library.
  • The YAML document is serialised using this YAML library.

The tool was a complete hack job so I don’t want to go too much into the design. But the general approach is the following:

  • First the “program” is parsed from stdin and translated into an AST.
  • This AST is then converted into an internal representation of the Step Function. This is effectively the graph, with all the steps linked together via pointers.
  • The graph is then traversed and converted into a series of structs which, when serialised, will produce the YAML document that can be loaded into AWS.

The tool does not do everything. For example, choices are not supported, and only a few tasks that were specific to my problem were actually built.

So how did it go? Well, OK, I’ll come clean. I got as far as adding support for try-catch statements, but I never actually used the tool for the task I was working on. The task was nearing its completion, and even though fixing the issues was annoying, it was less effort than regenerating the whole step function from scratch. But it’s likely that more step functions will be built in the future, most likely in the manner we’ve been building them now: with error handling and clean-up states. So this tool might come in useful yet.