Working Set

I think I’ll take a little break from Dynamo-Browse. There’s a list of small features that are on my TODO list. I might do one or two of them over the next week, then cut and document a release, and leave it for a while.

I’m still using Dynamo-Browse pretty much every day at work, but it feels a little demotivating being the only person that’s using it. Even those at work seem like they’ve moved on. And I can understand that: it’s not the most intuitive bit of software out there. And I get the sense that it’s time to do something new. Maybe an online service or something. 🤔

Finally bit the bullet and got scripting working in Dynamo-Browse. It’s officially in the tool, at least in the latest development version. It’s good to finally see this feature implemented. I’ve been waffling on this for a while, as the last several posts can attest, and it’s good to have some decisions made.

In the end I went with Tamarin as the scripting language. It was fortunate that the maintainer released version 1.0 just as I was about to merge the scripting feature branch into main. I’ve been trying out the scripting feature at work, and so far it’s been working pretty well. It helps that the language syntax is quite close to Go, but I also think that the ability to hide long-running tasks from the script author (i.e. no promises everywhere) dramatically simplifies how scripts are written.

As for the runtime, I decided to have scripts run in a separate goroutine. This means they don’t block the main thread, and the user can still interact with the tool. It does mean that a script will need to indicate when a long-running process is occurring — which it can do by displaying a message in the status line — but I think this is a good enough tradeoff to avoid having a running script lock up the app. I still need to add a way for the user to kill long-running scripts (writing a GitHub ticket to do this now).

At the moment, only one script can run at any one time, sort of like how JavaScript in the browser works. This is also intentional, as it prevents a whole bunch of scripts launching goroutines and slowing down the user experience. It should also avoid any potential synchronisation issues from parallel scripts accessing the same memory space, with no need to build synchronisation methods into the API. Will this mean that script performance will be a problem? Not sure at this stage.

I’m also keeping the API intentionally small at this stage. There are methods to query a DynamoDB table, get access to the result set and the items, and do some basic UI and OS things. I’m hoping it’s small enough to be useful, at least at the start, without overwhelming script authors or locking me into an API design. I hope to add methods to the API over time.

Anyway, good to see this committed to.

Poking Around The Attic Of Old Coding Projects

I guess I’m in a bit of a reflective mood these past few days, because I spent the morning digging up an old project that has lain dormant for several years. It’s effectively a clone of Chip’s Challenge, the old puzzle game that came with the Microsoft Entertainment Pack. I was a fan of the game when I was a kid, even though I didn’t get through all the levels, and I’ve tried multiple times to make a clone of it.

The earliest successful clone I can think of was back when I was using Delphi, which I think was in my teens. It’s since been lost, but I do recall having a version that worked and was reasonably true to the original game. It wasn’t a particularly accurate clone: I do recall some pretty significant bugs, and the code itself was pretty awful. But it was nice to be able to do things like design my own levels (I wasn’t as internet savvy back then, and I didn’t go looking for level editors for Microsoft’s release of Chip’s Challenge). Eventually I stopped working on it, and after a few updates to the family computer, plus a lack of backups or source control, there came a time when I lost it completely.

Years later, I made another attempt at building a clone. I was dabbling in .NET at the time, and I think I was working on it as an excuse to learn C#. I got the basics of the game and the associated level editor working, but I didn’t get much further than that. Eventually I got bored and stopped working on it.

I started the latest clone nine years ago. I can’t remember the original motivation. I was just getting into Go at the time and I think it was both to learn how to build something non-trivial in the language, and to determine how good Go was for building games. Although this is probably just a rationalisation: I’m sure the real reason was to work on something fun on the side.

Screenshot of CCLM, the latest clone
Screenshot of CCLM, the latest clone.

Over the first five years of its life or so, I worked on it on and off, adding new game elements (tiles, sprites, etc.) and capabilities like level scripts. One thing I am particularly proud of was building a mini-language for selecting game elements using something akin to CSS selectors. Want to select all wall tiles? Use the selector SOLD. How about configuring a water tile to only allow gliders, or the player but only if they have the flippers? Set the Immune attribute of the water tile to GLID,PLYR:holding(flippers). This was particularly powerful when working on tile and sprite definitions.

Text editor showing tile definitions with sample selectors
A sample of how some of the tiles are defined, including usage of the selectors.

I didn’t put as much effort into content, however. As of today, there are only 18 or so unique levels, and only about half of them are ones I consider good. I certainly put little effort into the graphics: many of the tile images were taken straight from the original tile-set, and any additional graphics were basically derived from it. This blatant copyright violation is probably why this project won’t see the light of day.

Screenshot of the level editor
The level editor, and one of the rare times when it's not crashing.

I’m impressed by how Go maintains its backwards compatibility: moving from 1.13 to 1.19 was just a matter of changing the version number in the go.mod file. I haven’t updated any of the libraries, though, and I’m sure the only reason it still builds is that I haven’t dared to try.

I probably shouldn’t spend a lot of time on this. But it was fun to revisit it for a while.

One final thing: I might write more about projects I’ve long since abandoned or have worked on and haven’t released, mainly for posterity reasons but also because I like reflecting on them later. You never know what you’d wish you documented until you’ve lost the chance to do so.

Spent the day restyling the Dynamo-Browse website. The Terminal theme was fun, but over time I found the site difficult to navigate. And considering that Dynamo-Browse is not the most intuitive tool out there, an easy-to-navigate user manual is probably important. So I replaced that theme with Hugo-Book, which has a much cleaner layout. After making the change, and doing a few small style fixes, I found it to be a significant improvement.

I also tried my hand at designing a logo for Dynamo-Browse. The blue box that came with the Terminal theme was fine for a placeholder, but it felt like it was time for a proper logo now.

I wanted something which gave the indication of a tool that works on DynamoDB tables while also leaning into its TUI characteristics. My first idea was a logo that looked like the DynamoDB icon rendered in ASCII art. So after attempting to design something similar in Affinity Designer, and passing it through an online tool which generates ASCII images from PNGs, this was the result:

First attempt at the Dynamo-Browse logo

I tried adjusting the colours of the final image, and doing a few things in Acorn to thicken the ASCII characters themselves, but there was no getting around the fact that the logo just didn’t look good. The ASCII characters were too thin, and too much of the background was bleeding through.

Other attempts at the Dynamo-Browse logo

So after a break, I went back to the drawing board. I remembered that there were actually Unicode block characters which could produce filled-in rectangles of various heights, and I wondered if using them would be a nice play on the DynamoDB logo. Also, since the Dynamo-Browse screen consists of three panels, with only the top one having the accent colour, I thought having a similar colour banding would make a nice reference. So I came up with this design:

Final design of the Dynamo-Browse logo

And I must say, I like it. It does look a little closer to low-res pixel art than ASCII art, but what it’s trying to allude to is clear. It looks good in both light mode and dark mode, and it also makes for a nice favicon.

That’s all the updates for the moment. I didn’t get around to updating the screenshots, which are in dark mode to blend nicely with the dark Terminal theme. They actually look okay on a light background, so I can probably hold off on this until the UI is changed in some way.

I’ve been resisting using mocks in the unit tests of Dynamo-Browse, but today I finally bit the bullet and started adding them. There would have just been too much scaffolding code that I needed to write without them. I guess we’ll see if this was a wise decision down the line.

Thinking About Scripting In Dynamo-Browse, Again

I’m thinking about scripting in Dynamo-Browse. Yes, again.

For a while I’ve been using a version of Dynamo-Browse which included a JavaScript interpreter. I added it so that I could extend the tool with a few commands that have been useful to me at work. That branch has fallen out of date, but the idea of a scripting feature has proven useful, and I want to include it in the mainline in some way.

The scripting framework works, but there are a few things that I’m unhappy about. The first is around asynchronicity and scheduling. I built the scripting API around the JS event loop included in the interpreter. Much like a web browser or Node.js, this event loop allows the use of promises for operations that can be dispatched asynchronously. The interpreter, however, is not ES6 compatible, which means no async/await keywords. The result is that many of the scripts I’ve been writing are littered with then() chains. Example:

const session = require("audax:dynamo-browse/session");
const ui = require("audax:dynamo-browse/ui");
const exec = require("audax:x/exec");

plugin.registerCommand("cust", () => {
    ui.prompt("Enter UserID: ").then((userId) => {
        return exec.system("", userId);
    }).then((output) => {
        let customerId = output.replace(/\s/g, "");
        return session.query(`pk="CUSTOMER#${customerId}"`, {
            table: "account-service-dev"
        });
    }).then((custResultSet) => {
        if (custResultSet.rows.length == 0) {
            ui.print("No such user found");
            return;
        }
        session.resultSet = custResultSet;
    });
});

Yeah, I know; this is nothing new if you’ve done any web dev or Node work in the past, but it still feels a little clunky exposing the execution details to the script writer. Should that be something they need to worry about? It feels like the tool should take on more here.

The second concern involves modules. The JavaScript interpreter implements the require() function, which can be used to load a JS module, much like Node.js. But the Node.js standard library is not available. That’s not really the fault of the maintainers, and to be fair to them, they are building out native support for modules. But that support isn’t there now, and I would like to include some modules to do things like access the file system or run commands. Adding them with non-standard APIs and with names that are the same as equivalent modules in Node.js feels like a recipe for confusion.

Further exacerbating this is that script authors may assume that they have access to the full Node standard library, or even NPM repositories, when they won’t. I certainly don’t want to implement the entire Node standard library from scratch, and even if the full library was available to me, I’m not sure the use of JavaScript here warrants that level of support.

Now, zooming out a little, this could all be a bit of a non-issue. I haven’t really shared this functionality with anyone else, so all this could be of no concern to anyone other than myself. But even so, I am thinking of options other than JavaScript. A viable alternative might be Lua — and Go has a bunch of decent Lua interpreters — but I’m not a huge fan of Lua as a language. Also, Lua’s table structure being used for both arrays and structures seems like a source of confusion, especially when dealing with JSON-ish data structures like DynamoDB items.

One interpreter that has caught my eye is Tamarin. It’s early in its development, but it’s already showing some promise. It offers a Go-like syntax, which is nice, along with native literals for lists, sets and maps, which is also nice. There’s a bit of a standard library already in place to do things like string and JSON operations. There’s not really anything that interacts with the operating system, but this is actually an advantage, as it means I’m free to write these modules myself to do what I need. The implementation looks simple enough that it will probably play nicely with Go’s GC and scheduler.

How the example above could look in Tamarin is given below. This assumes that the asynchronous aspects are completely hidden from the script author, resulting in something a little easier to read:

ext.command("cust", func() {
    var userId = ui.prompt("Enter UserID: ")
    var commandOut = exec.system("", userId).unwrap()
    userId = strings.trim_space(commandOut)
    var custResultSet = session.query(`pk=$1`, {
        "table": "account-service-dev",
        "args": [userId]
    })
    if len(custResultSet.items) == 0 {
        ui.print("No such user found")
        return
    }
    session.resultSet = custResultSet
})

So it looks good to me. However… the massive, massive downside is that it’s not a well-known language. As clunky as JavaScript is, it’s at least a language that most people have heard of. That’s a huge advantage, and one that warrants thinking twice before saying “no, thanks”. I’m not expecting Dynamo-Browse to become more popular than sliced bread (I can count on one hand the number of users I know are using it), but it would be nice if scripting was somewhat approachable.

One thing in Tamarin’s favour is that it’s close enough to Go that I think others could pick it up relatively easily. It’s also not a huge language — the features can be described on a single Markdown page — and the fact that the standard library hasn’t been fully fleshed out yet helps keep things small.

I guess the obvious question is: why not hide the scheduling aspects in the JavaScript implementation? Yeah, I think that’s a viable thing to do, although it might be a little harder than with Tamarin. That would still leave the module issue, though.

So that’s where I am at the moment. I’m not quite sure what I’ll do here, but I might give Tamarin a go, and see if it results in scripts that are easier to read and write.

One could ask whether this pontificating over something that will probably only affect me and a handful of other people is even worth the time at all. And yeah, you’re probably right in thinking that it isn’t. To that, I guess the only thing I can respond with is that at least I got a blog post out of it. 🙂

One last concern I do have, regardless of what language I choose, is how to schedule the scripts. If I’m serious about not exposing the specifics of which thread/goroutine is waiting for what to the script author, I’d like to run the script in a separate goroutine from the main event loop. The concern is around start-up time: launching a goroutine is fast, but if I want to execute a script in response to a keystroke, for example, would it be fast enough? I guess this is something I should probably test first, but if it is a little unresponsive, the way I’m thinking of tackling this is having a single goroutine waiting on a channel for script scheduling events. That way, the goroutine will always be “warm” and there’ll be no startup time associated with executing the script. Something to think about.

Project Exploration: A Check-in App

I’m in a bit of an exploratory phase at the moment. I’ve set aside Dynamo-Browse for now and am looking to start something new. Usually I need to start two or three things before I find something that grabs me: it’s very rare that I find something exciting to work on before I actually start working on it. And even if I start something, there’s a good chance that I won’t actually finish it.

So it might seem weird that I’d write about these explorations at all. Mainly I do it for posterity reasons: just something to record so that when I look back and think “what was that idea I had?”, I have something to reflect on.

With that out of the way, I do have a few ideas that are worth considering. Yesterday, I started a new Flutter project to explore an idea for an app that’s been rattling around in my head for a month or two.

Basically the idea is this: I keep a very simple check-in blog where I post check-ins for locations that seem useful to recall down the line. At the moment I post these check-ins manually, usually a few hours after I leave, in order to maintain some privacy over my movements. I’ve been entertaining the idea of an app that would do this for me: I could post a check-in when I arrive, but it won’t be published until several hours after I leave. The check-ins themselves follow a consistent format, so the actual act of filling in the details can be handled by the app, which can present the properties as a regular form. At the same time, I wanted to try something with the Micropub format and possibly the IndieAuth protocol.

So I started playing around with a Flutter app that could do this. I got to the point where I can post new check-ins and they will show up in a test blog. Here are some screenshots:

The home screen
The home screen, which obviously needs a bit of work. Pressing the floating action button will create a new checkin.
Selecting the check-in type
Selecting the type of check-in.
Entering check-in properties
Each type will have different properties. After selecting the check-in type, they will be presented to you in a form.
How it looks on the blog
How it will ultimately look on the blog.

It’s still very early stages at the moment. For one thing, check-ins are published as soon as you press “Checkin”. Eventually they should be stored on the device and then published at a later time. The form for entering check-ins should also differ slightly based on the check-in type: restaurants and cafes should offer a field for a star rating, and airport check-ins should provide a way to describe your route.

I’m not sure if I will pursue this any further. We’ll see how I feel. But it was good to get something up and running in a couple of hours.

Looking for a new project to work on. I kinda want to give Unity a try, maybe look at making a game of sorts. How I’m going to get the artwork made is anyone’s guess though. Goodness knows I can’t draw it. 🤷

Update: Ok, decided against working on a Unity project. Working on Alto Catalogue instead.

Dynamo-Browse Running With iSH

Bit of a fun one today. After thinking about how one could go about setting up a small dev environment on the iPad, I remembered that I actually had iSH installed. I’ve had it for a while, but I’ve never really used it, since I never installed tools that would be particularly useful. Thinking about what tools I could install, I was curious as to whether Dynamo-Browse could run on it. If Dynamo-Browse were a simple CLI tool that does something and produces some output, this wouldn’t be too difficult to achieve. But I don’t think I’d be exaggerating if I said Dynamo-Browse is a bit more sophisticated than your run-of-the-mill CLI tool. So part of this was finding out not only whether building it was possible, but whether it would run well.

Answering the first question involved determining which build settings would actually produce a binary that worked. Dynamo-Browse is a Go app, and Go has some pretty decent cross-compiling facilities, so I had no doubt that such settings existed. My first thought was a Darwin ARM binary, since that’s the OS and architecture of the iPad. But one of the features of iSH is that it actually converts the binary through a JIT before running it. And it turns out, after poking around a few of the included binaries using file, that iSH expects ELF 32-bit Linux binaries.

Thus, setting GOOS to linux and GOARCH to 386 produced a binary that would run in iSH:

GOOS=linux GOARCH=386 go build -o dynamo-browse-linux ./cmd/dynamo-browse/

After uploading it to the iPad using AirDrop and launching it in iSH, success!

Dynamo-Browse running in iSH on the iPad

So, it runs; but does it run well? Well, sadly, no. Loading and scanning the table worked okay, but doing anything in the UI was an exercise in frustration. It takes about two seconds for a key press to be recognised and the selected row to move up or down. I’m not sure what the cause is, but I suspect it’s the screen redrawing logic. There’s a lot of string manipulation involved, which is not the most memory efficient. I’m wondering if building the app using TinyGo would improve things.

But even so, I’m reasonably impressed that this worked at all. Whether it will mean that I’ll be using iSH more often for random tools I build remains to be seen, but at least the possibility is there.

Update: While watching a re:Invent session on how a company moved from Intel to ARM, they mentioned a massive hit to performance around a function that calculates the rune length of a Unicode character. This is something that this application is constantly doing in order to layout the elements on the screen. So I’m wondering if this utility function could be the cause of the slowdown.

Update 2: Ok, after thinking about this more, I think the last update makes no sense. For one thing, the binary iSH is running is an Intel one, and although iSH is interpreting it, I can’t imagine the slowdowns here are a result of the compiled binary. For another, both Intel and ARM (M1) builds of Dynamo-Browse work perfectly well on desktops and laptops (at least on MacOS systems). So the reason for the slowdown must be something else.

NΓ©e Audax Toolset

I’ve decided to retire the Audax Toolset name, at least for the moment. It was too confusing to explain what it actually was, and with only a single tool implemented, this complexity was unnecessary.

The project is now named after the sole tool that is available: Dynamo-Browse. The site has moved to a new domain, although the old domain will still work. I haven’t renamed the repository just yet, so there will still be references to “audax”, particularly in the downloads section. I’m hoping to do this soon.

I think I’m happy with this decision. It really now puts focus on the project, and removes the feeling that another tool had to be built just to satisfy the name.

GitHub Actions and the macOS Release

I’m using Goreleaser to build releases of Audax tools, since it works quite well for cross-compiling a Go program to various OS targets in a single run. I use this in a GitHub Action: whenever I create a tag, the release pipeline kicks off a run of Goreleaser, which will cross-compile Dynamo-Browse, package it, and publish it to GitHub itself.

Recently, I found a bug in the macOS Brew release of Dynamo-Browse: when the user presses c to copy the current item, the entire program crashes with a panic. I tracked the cause to the clipboard package that I’m using. Apparently, it requires cgo to be usable on Darwin, and it will panic if cgo is not enabled1. But the GitHub action I was using was an Ubuntu runner, and since the package requires some symbols from Foundation, which are only available on Darwin, I couldn’t just enable cgo.

However, I did learn that GitHub Actions actually supports macOS runners. I don’t know why I thought they wouldn’t be available. I guess, much like most of my experience with anything Apple, I assumed they would come at some sort of price. So you can imagine my delight in learning about this, and the first thing I did was try to use them in the release pipeline.

I first tried blindly switching from ubuntu-latest to macos. Sadly, it didn’t work as expected. It turns out that the macos runners do not have Docker installed, which broke a number of the actions I was using. And since the original ubuntu-latest runner worked well for the Linux and Windows releases, I didn’t want to just junk them.

So what I did was split the release pipeline into three separate jobs:

  • The first, running on ubuntu-latest, builds Audax tools and runs the tests to verify that the code is in working order.
  • The second, also running on ubuntu-latest, will invoke the Goreleaser action to build the Windows & Linux targets.
  • At the same time, the third, running on macos, will install Goreleaser via brew and run it to build the MacOS target.

The resulting pipeline looks a little like this:

name: release
on:
  push:
    tags:
      - 'v*'
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # localstack Docker image
      # Checkout, build, test
  linux-windows-release:
    needs: build
    runs-on: ubuntu-latest
    steps:
      # Checkout
      - name: Release
        uses: goreleaser/goreleaser-action@v1
        if: startsWith(github.ref, 'refs/tags/')
        with:
          version: latest
          args: release -f linux.goreleaser.yml --skip-validate --rm-dist
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  macos-release:
    needs: build
    runs-on: macos-12
    steps:
      # Checkout
      - name: Setup Goreleaser
        run: |
          # Can't use goreleaser/goreleaser-action as that requires Docker,
          # so install it with brew and run it from the shell.
          brew install goreleaser/tap/goreleaser
      - name: Release
        if: startsWith(github.ref, 'refs/tags/')
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          goreleaser release -f macos.goreleaser.yml --skip-validate --rm-dist

I’ve only used this to make a small maintenance release that fixes the clipboard panic problem, and so far it seems to work quite well.

After doing this, I also learnt that there’s actually a Goreleaser Docker image which could theoretically be used to cross-compile with cgo. I haven’t tried it, and honestly it sounds like more trouble than it’s worth. I’d rather keep things simple.

  1. This is where I complain that the function that is panicking actually returns an error type. Instead of panicking and bringing down the whole program, why not just return an error? ↩︎

It’s been a while since I’ve posted an update here, so I thought I’d do so today.

Most of what’s going on with Audax and Dynamo-Browse is “closing the gap” between the possible queries and scans that can be performed over a DynamoDB table and how they’re represented in Dynamo-Browse’s query expression language. Most of the constructs of DynamoDB’s condition expression language can now be represented. The last thing to add is the size() function, and that is proving to be a huge pain.

The reason is that the IR representation uses the expression builder package to actually put the expression together. These builders use Go’s type system to enforce which constructs work with each other. But this clashes with how I built the IR representation types, which are essentially structs that implement a common interface. Without an overarching type to represent an expression builder, I’m left with either using a very broad type like any, or completely ditching this package and doing something else to build the expression.

It feels pretty annoying reaching the brick wall just when I was finishing this off. But I guess them’s the breaks.

One other thing I’m still considering is spinning out Dynamo-Browse into a separate project. It currently sits under the “Audax” umbrella, with the intention of releasing other tools as part of the toolset. These tools actually exist1 but I haven’t been working on them, and they’re not in a fit enough state to release. So the whole Audax concept is confusing and difficult to explain with only one tool available at the moment.

I suppose if I wanted to work on the other tools, this will work out in the end. But I’m not sure that I do, at least not now. And even if I do, I’m now beginning to wonder if building them as TUI tools would be the best way to go.

So maybe the best course of action is to make Dynamo-Browse a project in its own right. I think it’s something I can resurrect later should I get to releasing a second tool.

Edit at 9:48: I managed to get support for the size function working. I did it by adding a new interface type with a function that returns an expression.OperandBuilder. The existing IR types representing names and values were modified to implement this interface, which gave me a common type I could use for the equality and comparison expression builder functions.

This meant that the IR nodes that required a name and literal value operand — which are the only constructs allowed for key expressions — had to be split out into separate types from the “generic” ones that work on any OperandBuilder node. But this was not as large a change as I was expecting, and it actually made the code a little neater.

  1. Dynamo-Browse was actually the second TUI tool I made as part of what was called “awstools”. The first was an SQS browser. ↩︎

Trying to build an old Android app that has fallen out of date. Doing this always brings up some weird errors.

The version of Gradle being used was woefully out of date, so I started upgrading it to the next minor version used by the project (from 3.1.0 to 3.2.x). This required changing the distributionUrl in the Gradle wrapper configuration. After making these changes, I was getting 502 errors when Android Studio tried to download the binaries. I feared that the version of Gradle I was hoping to use was so out of date that the binaries may no longer be available: it wouldn’t be the first time Java binaries disappeared as various sites changed URLs or shut down. Fortunately the problem was transitory, and I managed to get a 3.2.x version of Gradle downloaded.

The various squiggly lines in the .gradle files disappeared, but I was still unable to build the app. Attempting to do so brought up the following error:

Unsupported class file major version 55

This was weird: class file major version 55 corresponds to Java 11, and that was the only version of Java I had installed. I confirmed that Gradle was using that specific install by checking it in Preferences → Build, Execution, Deployment → Build Tools → Gradle, so I couldn’t see why there would be a major version conflict. Was Gradle using a bundled version of Java 8 somewhere to build the app?

What I think resolved this, along with upgrading Gradle, the SDK and the build tools, was explicitly setting the source and target capability of the app to the same version of Java I was using. From within “build.gradle”, I added the following:

compileOptions {
    sourceCompatibility JavaVersion.VERSION_11
    targetCompatibility JavaVersion.VERSION_11
}

It might be that the version of Gradle I was using was so old that it defaulted to building Java 1.8 byte code if these weren’t specified.

This error disappeared and the app was starting to build. But the next weird thing was that the compiler was not seeing any of the Android SDK classes. What seemed to work here was to upgrade Gradle all the way to the latest version (7.3.3), clear the cache used by Android Studio, and make sure this line was added to the app module dependencies:

implementation fileTree(dir: 'libs', include: ['*.jar'])

This finally got the build working again, and I was able to run the app on my phone.

I don’t know which of these actions solved the actual problem. It’s working now, so I’d rather not investigate further. The good news is I didn’t need to change anything about the app itself: all the changes were confined to the manifest and Gradle build files. Those Android devs really do a great job maintaining backwards compatibility.

Audax Toolset Version 0.1.0

Audax Toolset version 0.1.0 is finally released and is available on GitHub. This version contains updates to Dynamo-Browse, which is still the only tool in the toolset so far.

Here are some of the headline features.

Adjusting The Displayed Columns

The Fields Popup

Consider a table full of items that look like the following:

pk           S    00cae3cc-a9c0-4679-9e3a-032f75c2b506
sk           S    00cae3cc-a9c0-4679-9e3a-032f75c2b506
address      S    3473 Ville stad, Jersey , Mississippi 41540
city         S    Columbus
colors       M    (2 items)
  door       S    MintCream
  front      S    Tan
name         S    Creola Konopelski
officeOpened BOOL False
phone        N    9974834360
ratings      L    (3 items)
  0          N    4
  1          N    3
  2          N    4
web          S

Let’s say you’re interested in seeing the city, the door colour and the website in the main table which, by default, would look something like this:

The before table layout

There are a few reasons why the table is laid out this way. The partition and sort key are always the first two columns, followed by any declared fields that may be used for indices. This is followed by all the other top-level fields sorted in alphabetical order. Nested fields are not included as columns, and map and list fields are summarised with the number of items they hold, e.g. (2 items). This makes it impossible to view only the columns you’re interested in.

Version 0.1.0 now allows you to adjust the columns of the table. This is done using the Fields Popup, which can be opened by pressing f.

Adjusting the columns in the table

While this popup is visible you can show columns, hide them, or move them left or right. You can also add new columns by entering a Query Expression, which can be used to reveal the value of nested fields within the main table. It’s now possible to change the columns of the table to be exactly what you’re interested in:

The after table layout

Read-only Mode And Result Limits

Version 0.1.0 also contains some niceties for reducing the impact of working on production tables. Dynamo-Browse can now be started in read-only mode using the -ro flag, which will disable all write operations β€” a useful feature if you’re paranoid about accidentally modifying data on production databases.

Another new flag is -default-limit which will change the default number of items returned from scans and queries from 1000 to whatever you want. This is useful to cut down on the number of read capacity units Dynamo-Browse will use on the initial scans of production tables.

These settings can also be changed from within Dynamo-Browse using the new set command:

Using the set command in Dynamo-Browse

Progress Indicators And Cancellation

Dynamo-Browse now indicates running operations, like scans or queries, with a spinner. This improves the user experience of prior versions of Dynamo-Browse, which gave no feedback of running operations whatsoever and would simply “pop-up” the result of such operations in a rather jarring way.

With this spinner visible in the status bar, it is also now possible to cancel an operation by pressing Ctrl-C. You have the option to view any partial results that were already retrieved at the time.

Other Changes

Here are some of the other bug-fixes and improvements also included in this release:

  • Audax toolset is now distributed via Homebrew. Check out the Downloads page for instructions.
  • A new mark command to mark all, unmark all, or toggle marked rows. The unmark command is now an alias to mark none.
  • Query expressions involving the partition and sort key of the main table are now executed as DynamoDB queries, instead of scans.
  • The query expression language now supports conjunction, disjunction, and dot references.
  • Fixed a bug where light mode on MacOS was not being properly detected, which made some highlighted colours hard to see in dark mode.
  • Fixed the table-selection filter, which was never properly working since the initial release.
  • Fixed the back-stack service to prevent duplicate views from being pushed.
  • Fixed some conditions which were causing seg. faults.

Full details of the changes can be found on GitHub. Details about the various features can also be found in the user manual.

Finally, although it does not touch on any of the features described above, I recorded an introduction video on the basics of using Dynamo-Browse to view items of a DynamoDB table:

No promises, but I may record further videos touching on other aspects of the tool in the future. If that happens, I’ll make sure to mention them here.1

  1. Or you can like, comment or subscribe on YouTube if that’s your thing πŸ˜›. ↩︎

Putting the final touches on the website for the upcoming release of Audax Toolset v0.1.0, and I’m finding myself a bit unhappy with it. Given that Dynamo-Browse is the only tool in this “suite”, it feels weird putting together a landing page with a whole lot of prose about this supposed collection of tools. There’s no great place to talk more about Dynamo-Browse right there on the landing page.

Part of me is wondering whether it would be better focusing the site solely on Dynamo-Browse, and leave all this Audax Toolset stuff on the back-burner, at least until (or unless) another tool is made available through this collection. I’m wondering if I’ll need to rearrange the codebase to do this, and spin out the other commands currently in development into separate repositories.

Bridging The Confidence Gap

I had to do some production work with DynamoDB this morning. It wasn’t particularly complicated work: run a query, get a couple of rows, change two attributes on each one. I could have used Dynamo-Browse to do this. But I didn’t. Despite building a tool designed for doing these sorts of things, and using it constantly for all sorts of non-prod stuff, I couldn’t bring myself to use it on a production database.

I’m wondering why this is. It’s not like I have any issues with using Dynamo-Browse for the non-prod stuff. Sure, there are bugs, and things that the tool can’t do yet, but I haven’t (yet) encountered a situation where the tool actually corrupted data. I also make sure that the code that touches the database is tested against a “real” instance of DynamoDB (it’s actually a mock server running in Docker).

Yet I still don’t have the confidence to use it on a production database. And okay, part of me understands this to a degree. I’ve only been using this tool for a handful of months, and when it comes to dealing with production data, having a sense of caution, bordering on a small sense of paranoia, is probably healthy. But if I can’t be confident that Dynamo-Browse will work properly when it matters the most, how can I expect others to have that confidence?

So I’d like to get to the bottom of this. After a bit of thought, I think there are three primary causes of my lack of confidence here.

The first is simple: there are features missing in the tool that are needed to do my job. Things such as running a query over an index, or the ability to impose a conditional check on updates. This is a legitimate concern in my opinion: if you need to do something, and Dynamo-Browse doesn’t support it, then you can’t use Dynamo-Browse to do that thing, plain and simple. It’s also the easiest concern to address: either add the feature, or say that feature is unsupported and don’t expect people to use Dynamo-Browse if they need it.

I think the second cause is a lack of insight into what the tool is actually doing. Even though I built the tool and tested it during development, there’s always this feeling of uncertainty in the back of my head while I’m using it when the stakes are high. That feeling of “when I do this, although I think the tool will behave this way, how will the tool actually behave?” Part of this, I think, comes from a decade and a half of being burned by technology as part of my job, and growing a healthy (?) sense of distrust for it.1

I’m sure some of this will resolve itself as I continue to use Dynamo-Browse, but the only way I know of gaining assurance that the tool is working as intended is for the tool to tell me what it’s doing. Logging will help here, but I think features that give the user the ability to check what’s going to happen before actually doing it would be useful as well.

The third cause is probably a lack of controlled end-to-end testing. There does exist a suite of unit tests with a decent (maybe not good, but decent) level of coverage, but it does not extend to the UI layer, which is a little problematic. It might be that more testing of the application as a whole would help here.

This means more manual testing, but it might also be possible to set up some automated testing of the entire tool end-to-end as well. What’s going for me is the fact that Dynamo-Browse runs in the terminal, and is responsible for rendering the UI and handling the event loop instead of palming this off to the OS. Looking at some of the features that Bubble Tea offers, it might be possible to run the application headless, and simulate use by pumping in keyboard events. Verifying the rendered UI itself may be a little difficult, but what I can test is what is actually read from and written to the database, which is what I’m mostly interested in.

I’m still unsure as to what I’ll do to address these three causes of concern. This is still a hobby project that I do in my spare time, and some of the mundane tasks, like more manual testing, sound unappealing. On the other hand, I do want to use this tool for my job, and I want others to use it as well. So I can’t really complain that others choose not to use it if I cannot feel comfortable using it when it matters most. So I probably should do some of each. And who knows, I may actually get some enjoyment out of doing it. I certainly would get some enjoyment from knowing that others can rely on it.

  1. Someone once said that an insane person is one that does the same thing twice and expects different results. That someone has never worked in technology. ↩︎

About Jsonnet's Juxtapose Feature

We use a lot of Jsonnet at work. We use it to generate Cloud Formation templates, Kubernetes YAML files, and application configuration from a single source file.

Jsonnet is effectively a templating language for structured data. It looks a lot like Json (which is by design, since it’s meant to be a drop-in replacement) but includes a few quality-of-life improvements, such as comments and letting you include a trailing comma after the last item in a list; something that a strict Json parser will never let you do:

    { some: "value" },
    { another: "value" },  // Perfectively valid

Generally, it works pretty well, except for one sneaky “feature” that has caught me off guard a few times.

If you have an object next to another object, and the two objects do not have a comma separating them, like so:

    { some: "value" }
    { another: "value" }

Jsonnet will merge it into a single object:

    { some: "value", another: "value" }

This is a documented feature, but as someone who is used to dealing with Json and its strict parsing rules around commas, it seems to me like a bit of a gotcha. It might be obvious in this example, but for larger files, a missing comma like this is less likely to be intentional, and more likely to be an accident.

Consider this: you’ve got a long list of large objects spanning tens of lines and you need to add a new object to the end:

        "product": "apples",
        "price": 100,
        "tax": 20,
        "shipment": "basic"

    // ... skip 23 other items here

        "product": "yachts",
        "price": 50000,
        "tax": 20,
        "shipment": "expensive"

    // Add price of zebras here

Given our evolved need to save energy, you’re more likely to copy the object for yachts instead of typing out the object for zebras in full. Note that the object for yachts doesn’t have a trailing comma since we’ve been trained to follow the strict rules of Json parsers. So you copy yachts and paste it on the line below. Now both the original and pasted objects for yachts have no comma, but you don’t notice that because you’re focused on changing the pasted-in object to be one about zebras.

Then, when it comes time to evaluate your config template, you’re down one item. Thinking that you’ve missed something trivial, like saving the file or something, you expect the missing object to be the zebras. But it’s not the zebras, it’s actually the yachts that are absent.

Eventually you find the missing comma, but you’re a little annoyed that a language that lets you include commas on the last item can ensnare you like this. Sure the feature is documented, but are you really going to read the manual when you’re doing something as trivial as adding new objects to a list? Not helping matters is there’s actually an explicit operator to merge two objects together (the + operator). There wouldn’t be a need for another one that does the same thing, right?

So, language designers, please consider things like this the next time you design a domain-specific language. Especially one based on an existing language with stricter syntax rules than the ones that you enforce. People come to rely on these rules to help them catch mistakes like this.

Things Dynamo-Browse Needs

I’m working with dynamo-browse a lot this morning, and I’m running into a bunch of usability shortcomings. I’m listing them here so I don’t forget.

The query language needs an in operator, such as pk in ("abc", "123"). This works like in in all the other database services out there, in that the expression effectively becomes pk = "abc" or pk = "123". Is this operator supported natively in DynamoDB? πŸ€” Need to check that.

Likewise, the query language needs something to determine whether an attribute contains a substring. I believe DynamoDB expressions support this natively, so it probably makes sense to use that.

Longer term, it would be nice to include the results of the current result set in the expression. For example, assuming the current result set has these records:

pk    sk    thing     place   category
11    aa    rock      home    geology
22    bb    paper     home    art
33    cc    scissors  home    utensils

and you want to query for the rows where pk matches any of the pk values in the current result set, having a way to do that in the expression language would save a lot of copy-and-pasting. An example might be pk in @pk or something similar, which could produce a result set of the form:

pk    sk    thing      place  category
11    aa    rock       home   geology
11    ab    sand       beach  ~
22    bb    paper      home   art
22    bc    cardboard  shops  ~
33    cc    scissors   home   utensils
33    cd    spoon      cafe   ~

Another way to do this might be to add support for filters, much like the expressions in JQ. For example, the second result set could be retrieved just by using a query expression of the form place = "home" | fanout pk, which could effectively do the following pseudo-code:

firstResultSet := query("place = 'home')
secondResultSet := newResultSet()
for pk in resultSet['pk'] {
    secondResultSet += query("pk = ?", pk)

For dealing with workspaces, a quick way to open up the last workspace that you had opened would be nice, just in case you accidentally close Dynamo-Browse and you want to restore the last session you had opened. Something like screen -r. I think in this case having workspace files stored in the temporary directory might be a problem. Maybe having some way to set where workspace files are stored by default would work? πŸ€”

For dealing with marking rows, commands to quickly mark or unmark rows in bulk. There’s already the unmark command, but something similar for marking all rows would be good. This could be extended to only mark (or unmark) certain rows, such as those that match a particular substring (then again, using filters would work here as well).

It might be nice for refresh to keep your current row and column position, instead of jumping back to the top-left. And it would also be nice to have a way to refresh credentials (which could possibly be handled using the scripting framework).

And finally, if I had any doubt that the feature to hide or rearrange columns would be useful, there was an instance today where I wished this feature was around. So, keep working on that.

I’m sure there will be more ideas. If so, I’ll post them here.

Overlay Composition Using Bubble Tea

Working on a new feature for Dynamo-Browse which will allow the user to modify the columns of the table: move them around, sort them, hide them, etc. I want the feature to be interactive, instead of a whole lot of command incantations that are tedious to write. I also kind of want the table whose columns are being manipulated to remain visible, just so that the effects of the changes are apparent to the user while they make them.

This is also an excuse to try something out using Bubble Tea β€” the TUI framework I’m using β€” which is to add the ability to display overlays. These are UI elements (the overlay) that appear above other UI elements (the background). The two are composed into a single view such that it looks like the overlay is obscuring parts of the background, similar to how windows work in MacOS.

Bubble Tea doesn’t have anything like this built in, and the way views are rendered in Bubble Tea doesn’t make this super easy. The best way to describe how views work is to think of them as “scanlines.” Each view produces a string with ANSI escape sequences to adjust the style. The string can contain newlines, which can be used to move the cursor down while rendering. Thus, there’s no easy way to position the cursor at an arbitrary position and render characters over the screen.1

So, I thought I’d tackle this myself.

Attempt 1

On the surface, the logic for this is simple. I’ll render the background layer up to the top-most point of the overlay. Then, for each scan line within the top and bottom border of the overlay, I’ll render the background up to the overlay’s left border, then the contents of the overlay itself, then the rest of the background from the right border of the overlay.

My first attempt was something like this:

line := backgroundScanner.Text()
if row >= c.overlayY && row < c.overlayY+c.overlayH {
    // When scan line is within top & bottom of overlay,
    // naively slice the background line around the overlay
    foregroundScanPos := row - c.overlayY
    if foregroundScanPos < len(foregroundViewLines) {
        displayLine := foregroundViewLines[foregroundScanPos]
        compositeOutput.WriteString(line[:c.overlayX])
        compositeOutput.WriteString(lipgloss.PlaceHorizontal(c.overlayW, lipgloss.Left, displayLine,
            lipgloss.WithWhitespaceChars(" ")))
        compositeOutput.WriteString(line[c.overlayX+c.overlayW:])
    }
} else {
    // Scan line is beyond the boundaries of the overlay
    compositeOutput.WriteString(line)
}

Here’s how that looked:

Attempt 1

Yeah, not great.

Turns out I forgot two fundamental things. One is that indexing a Go string operates on the underlying byte array, not runes. This means that attempting to slice a string between multi-byte Unicode runes will produce junk. It’s a little difficult to find this in the Go Language Guide apart from this cryptic line:

A string’s bytes can be accessed by integer indices 0 through len(s)-1

But it’s relatively easy to test within the Go playground:

package main

import "fmt"

func main() {
    fmt.Println("δΈ–η•ŒπŸ˜€πŸ˜šπŸ™ƒπŸ˜‡πŸ₯ΈπŸ˜Ž"[1:6])                   // οΏ½οΏ½η•Œ
    fmt.Println(string([]rune("δΈ–η•ŒπŸ˜€πŸ˜šπŸ™ƒπŸ˜‡πŸ₯ΈπŸ˜Ž")[1:6]))   // η•ŒπŸ˜€πŸ˜šπŸ™ƒπŸ˜‡
}

The second issue is that I’m splitting half-way through an ANSI escape sequence. I don’t know how long the escape sequence used to style the header of the item view is, but I’m betting that it’s longer than 5 bytes (the overlay is positioned at column 5). That would explain why there’s nothing showing up to the left of the overlay for most rows, and why the sequence 6;48;5;188m is there.

Attempt 2

I need to modify the logic so that escape sequences, which take up no visible width, are preserved. Fortunately, one of Bubble Tea’s dependencies is reflow, which offers a bunch of nice utilities for dealing with ANSI escape sequences. The function that looks promising is truncate.String, which will truncate a string at a given width.

So changing the logic slightly, the solution became this:

// When scan line is within top & bottom of overlay
compositeOutput.WriteString(truncate.String(line, uint(c.overlayX)))

foregroundScanPos := r - c.overlayY
if foregroundScanPos < len(foregroundViewLines) {
    displayLine := foregroundViewLines[foregroundScanPos]
    compositeOutput.WriteString(lipgloss.PlaceHorizontal(c.overlayW, lipgloss.Left, displayLine,
        lipgloss.WithWhitespaceChars(" ")))
}

rightStr := truncate.String(line, uint(c.foreX+c.overlayW))

The results are a little better. At least the left side of the overlay looked OK now:

Attempt 2

But there are still problems here. Notice the [0m at the right side of the overlay on the selected row. I can assure you that’s not part of the partition key; take a look at the item view to see for yourself. And while you’re there, notice the header of the item view? That should be a solid grey bar, but instead it’s cut off at the overlay.

I suspect that rightStr does not have the proper ANSI escape sequences. I’ll admit that the calculation used to set rightStr is a bit of a hack. I’ll need to replace it with a proper way to detect the boundary of an ANSI escape sequence. But it’s more than just that. If an ANSI escape sequence starts off at the left side of the string, and continues on “underneath” the overlay, it should be preserved on the right side of the overlay as well. The styling of the selected row and the item view headers are examples of that.

Attempt 3

So here’s what I’m considering: we “render” the background underneath the overlay to a null buffer, while recording any escape sequences that were previously set on the left or were changed underneath the overlay. We also keep track of the number of visible characters that were seen. Once the scan line position reaches the right border of the overlay, we replay all the ANSI escape sequences in the order they were found, and then render the right-hand side of the scan line from the first visible character.

I was originally considering rendering these characters to a null reader, but what I actually did was simply count the length of visible characters in a loop. The function to do this looks like this:

func (c *Compositor) renderBackgroundUpTo(line string, x int) string {
    ansiSequences := new(strings.Builder)
    posX := 0
    inAnsi := false

    for i, c := range line {
        if c == ansi.Marker {
            // Start of an ANSI escape sequence: record it
            ansiSequences.WriteRune(c)
            inAnsi = true
        } else if inAnsi {
            ansiSequences.WriteRune(c)
            if ansi.IsTerminator(c) {
                inAnsi = false
            }
        } else {
            if posX >= x {
                return ansiSequences.String() + line[i:]
            }
            posX += runewidth.RuneWidth(c)
        }
    }
    return ""
}

And the value set to rightStr is changed to simply use this function:

rightStr := c.renderBackgroundUpTo(line, c.foreX+c.overlayW)

Here is the result:

That looks a lot better. Gone are the artefacts from splitting within ANSI escape sequences, and the styling of the selected row and item view header is back.

I can probably work with this. I’m hoping to use this to provide a way to customise the columns with the table view. It’s most likely going to power other UI elements as well.

  1. Well, there might be a way using ANSI escape sequences, but that goes against the approach of the framework. ↩︎

Intermediary Representation In Dynamo-Browse Expressions

One other thing I did in Dynamo-Browse was change how the query AST produces the actual DynamoDB call.

Previously, the AST produced the DynamoDB call directly. For example, if we were to use the expression pk = "something" and sk ^= "prefix", the generated AST may look something like the following:

AST from the parse expression

The AST will then be traversed to determine whether this could be handled by either running a query or a scan. This is called “planning” and the results of this will determine which DynamoDB API endpoint will be called to produce the result. This expression may produce a call to DynamoDB that would look like this:

    TableName: "my-table-name",
    KeyConditionExpression: "#0 = :0 and beings_with(#1, :1)",
    ExpressionAttributeNames: map[string]string{
        "#0": "pk",
        "#1": "sk",
    ExpressionAttributeValues: map[string]types.AttributeValue{
         ":0": &types.StringAttributeValue{ Value: "something" },
         ":1": &types.StringAttributeValue{ Value: "prefix" },

Now, instead of determining the various calls to DynamoDB itself, the AST will generate an intermediary representation, something similar to the following:

Intermediary representation tree

The planning traversal will now happen off this tree, much like it did over the AST.

For such a simple expression, the benefits of this extra step may not be so obvious. But there are some major advantages that I can see from doing this.

First, it simplifies the planning logic quite substantially. If you compare the first tree with the second, notice how the nodes below the “and” node are both of type “binOp”. This type of node represents a binary operation, which can either be = or ^=, plus all the other binary operators that may come along. Because so many operators are represented by this single node type, the logic of determining whether this part of the expression can be represented as a query will need to look something like the following:

  • First determine whether the operation is either = or ^= (or whatever else)
  • If it’s = and the field on the left is either a partition or sort key, this can be represented as a query
  • If it’s ^=, first determine whether the operand is a string (if not, fail), and then determine whether the field on the left is a sort key. If so, then this can be a query.
  • Otherwise, it will have to be a scan.

This is mixing various stages of the compilation phase in a single traversal: determining what the operator is, determining whether the operands are valid (^= must have a string operand), and working out how we can run this as a query, if at all. You can imagine the code to do this being large and fiddly.

With the IR tree, the logic can be much simpler. The work surrounding the operator is done when the AST is traversed. This is trivial: if it’s = then produce a “fieldEq”; if it’s ^= then produce a “fieldBeginsWith”; and so on. Once we have the IR tree, we know that when we encounter a “fieldEq” node, this attribute can be represented as a query if the field name is either the partition or sort key. And when we encounter a “fieldBeginsWith” node, we know we can use a query if the field name is the sort key.

Second, it allows the AST to be richer and not tied to how the actual call is planned out. You won’t find the ^= operator in any of the expressions DynamoDB supports: this was added to Dynamo-Browse’s expression language to make it easier to write. But if we were to add the “official” function for this as well β€” begins_with() β€” and we weren’t using the IR, we would need to have the planning logic for this in two places. With an IR, we can simply have both operations produce a “fieldBeginsWith” node. Yes, there could be more code encapsulated by this IR node (there’s actually less), but it’s being leveraged by two separate parts of the AST.

And since expressions are not directly mappable to DynamoDB expression types, we can theoretically do things like add arithmetic operations or a nice suite of utility functions. Provided that these produce a single result, these parts of the expression can be evaluated while the IR is being built, with the resulting literal value used directly.

It felt like a few other things went right with this decision. I was expecting this to take a few days, but I was actually able to get it built in a single evening. I’m also happy about how maintainable the code turned out to be. Although there are two separate tree-like types that need to be managed, both have logic which is much simpler than what we were dealing with before.

All in all, I’m quite happy with this decision.

Letting Queries Actually Be Queries In Dynamo-Browse

I spent some more time working on dynamo-browse over the weekend (I have to now that I’ve got a user-base πŸ˜„).

No real changes to scripting yet. It’s still only me that’s using it at the moment, and I’m hoping to keep it this way until I’m happy enough with the API. I think we’re getting close though. I haven’t made the changes discussed in the previous post about including the builtin plugin object. I’m thinking that instead of a builtin object, I’ll use another module, maybe something like the following:

const ext = require("audax:ext");

ext.registerCommand("thing", () => console.log("Do the thing"));

The idea is that the ext module will provide access to the extension points that the script can implement, such as registering new commands, etc.

At this stage, I’m not sure if I want to add the concept of namespaces to the ext module name. Could help in making it explicit that the user is accessing the hooks of “this” extension, as opposed to others. It may leave some room for some nice abilities, like provide a way to send messages to other extensions:

// com.example.showTables
// This extension will handle events which will prompt to show tables.
const ext = require("audax:ext/this");

ext.on("show-tables", () => {
    // ... show the tables when asked
});

// com.example.useShowTables
// This extension will use the "show-tables" message of "com.example.showTables"
const showTables = require("audax:ext/com.example.showTables");
showTables.send("show-tables");

Then again, this might be unnecessary given that the JavaScript module facilities are there.

Anyway, what I actually did was start work on making queries actually run as queries against the DynamoDB table. In the released version, running a query (i.e. pressing ? and entering a query expression) will actually perform a table scan. Nothing against scans: they do what they need to do. But hardly the quickest way to get rows from DynamoDB if you know the partition and sort key.

So that’s what I’m working on now. Running a query of the form pk = "something" and sk = "else" where pk and sk are the partition and sort keys of the table will now call the dynamodb.Query API. This also works with the “begins with” operator: pk = "something" and sk ^= "prefix". Since sk is the sort key, this will be executed as part of a query to DynamoDB.

This also works if you were to swap the keys around, as in sk = "else" and pk = "something". Surprisingly, the Go SDK that I’m using does not allow expressions like this: you must have the partition key before the sort key if you’re using KeyAnd. This touches on one of the design goals I have for queries: the user shouldn’t need to care how the expression actually produces the result. If it can do so by running an actual query against the table, then it will; if not, it will do a scan. Generally, the user shouldn’t care either way: just get me the results in the most efficient way you can think of!

That said, it might be necessary for the user to control this to an extent, such as requiring the use of a scan if the planner would normally go for a query. I may add some control to this in the expression language to support this. But these should be, as a rule, very rarely used.

Anyway, that’s the biggest change that’s happening. There is something else regarding expressions that I’m in the process of working on now. I’ll touch on that in another blog post.

Finished version 0.0.3 of Audax Toolset yesterday. The code has been ready since the weekend, but it took me Sunday morning and yesterday (Monday) evening to finish updating the website. All done now.

Now the question is whether to continue working on it, or do something different for a change. There are a few people using Dynamo-Browse at work now, so part of me feels like I should continue building features for it. But I also feel like switching to another project, at least for a little while.

I guess we’ll let any squeaky wheels make the decision for me.

I’ve been using Dynamo-Browse all morning and I think I’ll make some notes about how the experience went. In short: the command line needs some quality of life improvements. Changing the values of two attributes on two different items, while putting them to the DynamoDB table each time, currently results in too many keystrokes, especially given that I was simply going back and forth between two different values for these attributes.

So, in no particular order, here is what I think needs to be improved with the Dynamo-Browse command line:

  • It needs command line completion. Typing out a full attribute name is so annoying, especially considering that you need to be a little careful you got the attribute name right, lest you actually add a new one.
  • It needs command line history. Pressing up a few times is so much better than typing out the full command again and again. Such a history could be something that lives in the workspace, preserving it across restarts.
  • The set-attr and del-attr commands need a switch to take the value directly, rather than prompting the user for it after the command is entered (prompting can stay, but there should be an option to pass the value as a switch as well). I think having a -set switch after the attribute names would suffice.
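As a rough sketch, that third item might look something like this (hypothetical syntax; the -set switch doesn’t exist yet):

```
set-attr status -set "in progress"
```

which would write the value in one go instead of prompting for it.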

Finally, I think it might be time to consider adding more language features to the command line. At the moment the commands are just made up of tokens, coming from a split on whitespace characters (while supporting quotes). But I think it may be necessary to convert this into a proper language grammar, and add some basic control structures to it, such as entering multiple commands in a single line. It doesn’t need to be a super sophisticated language: something similar to TCL or shell would be enough at first.
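For context, the current behaviour, splitting on whitespace while honouring quotes, can be sketched in Go roughly like this. It’s an illustrative sketch, not the actual dynamo-browse tokenizer (splitCommand is a made-up name, and the real code may treat quoting and escapes differently):

```go
package main

import (
	"fmt"
	"strings"
)

// splitCommand splits a command line into tokens on whitespace,
// keeping quoted strings together as a single token.
func splitCommand(line string) []string {
	var tokens []string
	var cur strings.Builder
	inQuote := false
	flush := func() {
		if cur.Len() > 0 {
			tokens = append(tokens, cur.String())
			cur.Reset()
		}
	}
	for _, r := range line {
		switch {
		case r == '"':
			inQuote = !inQuote // toggle quoting; quotes are dropped
		case !inQuote && (r == ' ' || r == '\t'):
			flush() // unquoted whitespace ends the current token
		default:
			cur.WriteRune(r)
		}
	}
	flush()
	return tokens
}

func main() {
	fmt.Printf("%q\n", splitCommand(`set-attr status "in progress"`))
	// prints: ["set-attr" "status" "in progress"]
}
```

A proper grammar would replace this flat token list with a parsed structure, which is what would make room for things like command separators and basic control structures.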

You might say that writing a script would have solved this problem, and it would have (to a degree at least). But not everything needs to be a script. I tried writing a script this morning to do the thing I was working on and it felt like overkill, especially considering how short-lived the script would actually be. Having something that you can whip up in a few minutes can be a great help. It would have probably taken me 15-30 minutes to write the script (plus the whole item update thing hasn’t been fully implemented yet).

Anyway, we’ll see which of the above improvements I’ll get to soon. I’m kinda thinking of putting this project on hold for a little while, so I could work on something different. But if this becomes too annoying, I may get to one or two of these.

Thinking About Scripting In Dynamo-Browse

I’ve been using the scripting facilities of dynamo-browse for a little while now. So far they’ve been working reasonably well, but I think there’s room for improvement, especially in how scripts are structured.

At the moment, scripts look a bit like this:

const db = require("audax:dynamo-browse");
const exec = require("audax:x/exec");

db.session.registerCommand("cust", () => {
    db.ui.prompt("Enter UserID: ").then((userId) => {
        return exec.system("/Users/lmika/projects/accounts/", userId);
    }).then((output) => {
        let customerId = output.replace(/\s/g, "");
        return db.session.query(`pk="CUSTOMER#${customerId}"`, {
            table: "account-service-dev"
        });
    }).then((custResultSet) => {
        if (custResultSet.rows.length == 0) {
            db.ui.print("No such user found");
            return;
        }
        db.session.resultSet = custResultSet;
    });
});
This example script (modified from one that I’m actually using) will create a new cust command which will prompt the user for a user ID, run a shell script to return the associated customer ID for that user ID, run a query for that customer ID against the “account-service-dev” table, and if there are any results, set that as the displayed result set.

This is all working fine, but there are a few things that I’m a little unhappy about. For example, I don’t really like the idea of scripts creating new commands off the session willy-nilly. Commands feel like they should be associated with the script that implements them. This is sort of done internally, but I’d like it to be explicit to the script writer as well.

At the same time, I’m not too keen on requiring the script writer to do things like define a manifest. That would dissuade “casual” script writers from writing scripts to perform one-off tasks.

So, I’m thinking of adding a plugin global object, which provides the hooks that the script writer can use to extend dynamo-browse. This will be kind of like the window global object in browser-based JavaScript environments.

Another thing that I’d like to do is split out the various services of dynamo-browse into separate pseudo packages. In the script above, calling require("audax:dynamo-browse") will return a single dynamo-browse proxy object, which provides access to the various services like running queries or displaying things to the user. This results in a lot of db.session.this or db.ui.that, which is a little unwieldy. Separating these into different packages will allow the script writer to bind them to different package aliases at the top level.

With these changes, the script above will look a little like this:

const session = require("audax:dynamo-browse/session");
const ui = require("audax:dynamo-browse/ui");
const exec = require("audax:x/exec");

plugin.registerCommand("cust", () => {
    ui.prompt("Enter UserID: ").then((userId) => {
        return exec.system("/Users/lmika/projects/accounts/", userId);
    }).then((output) => {
        let customerId = output.replace(/\s/g, "");
        return session.query(`pk="CUSTOMER#${customerId}"`, {
            table: "account-service-dev"
        });
    }).then((custResultSet) => {
        if (custResultSet.rows.length == 0) {
            ui.print("No such user found");
            return;
        }
        session.resultSet = custResultSet;
    });
});

Is this better? Worse? I’ll give it a try and see how this goes.

Dynamo-Browse: Using The Back-Stack To Go Forward

More work on the Audax toolset. Reengineered the entire back-stack in dynamo-browse to support going forward as well as backwards, much like in a web browser. Previously, when going back, the current “view snapshot” was popped off the stack and lost forever (a “view snapshot” is information about the currently viewed table, including the current query and filter, if any are set). But I’d need to keep these view snapshots around if I wanted to add going forward as well.

So this evening, I changed this such that view snapshots are no longer deleted. Instead, there is now a pointer to the current view snapshot that moves up or down the stack when the user goes backwards or forwards. This involved changing the back-stack from a singly-linked list to a doubly-linked list, but that was pretty straightforward. The logic to pop snapshots from the stack hasn’t been removed: in fact, it has been augmented so that it can now remove the entire head up to the current view snapshot. This happens when the user goes backwards a few times, then runs a new query or sets a new filter, again much like a web browser, where going back a few times and then clicking a link removes any chance of going forward from that point.
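The pointer-based back-stack can be sketched roughly like this. It’s a simplified Go illustration of the idea, not dynamo-browse’s actual code: the snapshot here carries only a query string, and all the names are mine.

```go
package main

import "fmt"

// snapshot is a simplified "view snapshot": in the real tool this
// would hold the viewed table, query and filter.
type snapshot struct {
	query      string
	prev, next *snapshot
}

// backStack keeps every snapshot and a pointer to the current one,
// so the user can go backwards and forwards like in a web browser.
type backStack struct {
	head    *snapshot // most recent snapshot
	current *snapshot
}

// push records a new snapshot. Anything "forward" of the current
// position is discarded, just like clicking a link after going back.
func (s *backStack) push(q string) {
	sn := &snapshot{query: q, prev: s.current}
	if s.current != nil {
		s.current.next = sn // drops the old forward chain
	}
	s.current, s.head = sn, sn
}

// back moves the pointer one snapshot backwards, if possible.
func (s *backStack) back() bool {
	if s.current == nil || s.current.prev == nil {
		return false
	}
	s.current = s.current.prev
	return true
}

// forward moves the pointer one snapshot forwards, if possible.
func (s *backStack) forward() bool {
	if s.current == nil || s.current.next == nil {
		return false
	}
	s.current = s.current.next
	return true
}

func main() {
	var s backStack
	s.push("a")
	s.push("b")
	s.push("c")
	s.back()
	s.back()
	s.push("d") // going forward to "b" or "c" is no longer possible
	fmt.Println(s.current.query, s.forward()) // prints: d false
}
```

Restoring the last view on relaunch then just means reading whatever snapshot the pointer (or head) refers to.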

The last thing I changed this evening was to restore the last viewed table, query or filter when relaunching dynamo-browse using an existing workspace. This works quite nicely with the back-stack as it works now: it just needs to read the view snapshot at the head.