Working Set

Dev Log - 2023-03-19

I started this week fearing that I’d have very little to write today. I actually organised some time off over the weekend where I wouldn’t be spending a lot of time on side projects. But the week started with a public holiday, which I guess acted like a bit of a time offset, so some things did get worked on.

That said, most of the work done was starting or continuing things in progress, which is not super interesting at this stage. I’ll hold off on talking about those until there’s a little more there. But there were a few things that are worth mentioning.

Dynamo-Browse

I found a bug in the query planner. It had to do with which index it chose when planning a query with only a single attribute. If a table has multiple GSIs that use that same attribute as the partition key (with different attributes for the sort keys), the index the planner chose was effectively random. Because each index may contain different records, running that query could give incomplete results.

I think the query planner needs to be fixed such that any ambiguity in which index to use results in an error. I've tried to avoid forcing the user to know that a particular query requires a particular index, but I don't think there's any getting around it here: the user will have to specify.

But how to allow the user to specify the index to use?

The fix for the script API was reasonably simple: just allow the script author to specify the index to use in the form of an option. That's effectively what I've done by adding an optional index field to the session.query() method. When set, that index is used regardless of which one the query planner would otherwise choose.
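
There isn't much magic to honouring that option at the DynamoDB level: it ultimately comes down to setting IndexName on the QueryInput sent to the service. A rough sketch of that using the Go SDK, with the table name made up and the index name borrowed from the examples below:

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/feature/dynamodb/expression"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb"
)

// queryViaIndex runs a key condition query against an explicitly named GSI,
// skipping any index selection logic.
func queryViaIndex(ctx context.Context, client *dynamodb.Client) (*dynamodb.QueryOutput, error) {
	keyCond := expression.Key("color").Equal(expression.Value("blue"))
	expr, err := expression.NewBuilder().WithKeyCondition(keyCond).Build()
	if err != nil {
		return nil, err
	}

	return client.Query(ctx, &dynamodb.QueryInput{
		TableName:                 aws.String("some-table"),
		IndexName:                 aws.String("color-item-index"),
		KeyConditionExpression:    expr.KeyCondition(),
		ExpressionAttributeNames:  expr.Names(),
		ExpressionAttributeValues: expr.Values(),
	})
}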

I’m not certain how best to solve this when the user is running a query interactively. My current idea is that a menu should appear, allowing the user to select the index to use from a list. This could also include a “scan” option if no index is needed. Ideally this information will be stored alongside the query expression so that pressing R would rerun the query without throwing up the prompt again.

Another option is allowing the user to specify the index within the expression in some way. Maybe in the form of a hint, such as having the user explicitly mention the sort key in a way that doesn't affect the output. This is a little hacky though — sort of like those tricks you need to pull in SQL queries to nudge the planner towards a particular execution plan.

Another option is having the user specify the index specifically in the query. Maybe as an annotation:

color="blue" @index('color-item-index')

or as a suffix:

color="blue" using index('color-item-index')

Anyway, this will be an ongoing thing I’m sure.

One other thing I started in Dynamo-Browse is finally adding support for the between keyword:

age between 12 and 24

This maps directly to the between operator in DynamoDB's condition expression language, so getting scan support for it was relatively easy. I do need to make the query planner aware of it though, as this operation is also supported in queries when used with the sort key. So this is still on a branch at the moment.
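
For what it's worth, the Go SDK's expression builder already supports both forms of between, which is roughly what this has to lower to (the partition key name and value here are placeholders):

import "github.com/aws/aws-sdk-go-v2/feature/dynamodb/expression"

func betweenExamples() (expression.ConditionBuilder, expression.KeyConditionBuilder) {
	// As a scan filter, between can be applied to any attribute.
	filter := expression.Name("age").Between(expression.Value(12), expression.Value(24))

	// As a key condition it's only valid on the sort key, which is the case
	// the query planner needs to learn to recognise.
	keyCond := expression.Key("pk").Equal(expression.Value("some-partition-value")).
		And(expression.Key("age").Between(expression.Value(12), expression.Value(24)))

	return filter, keyCond
}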

Finally, I’ve found myself using this tool a lot this last week, and I desperately need something akin to what I’ve been calling a “fanout” command. This is a way to take the results of one query and use them in some way in another query — almost like sub-queries in regular SQL. What I keep wishing I could use this for is getting the IDs of the rows from a query run over an index, then running a query for rows with those IDs over the main table. At the moment I’m left copying the IDs from the first result set and building a large pk in (…) expression by hand, which is far from ideal.

I’m not sure whether I’d like to do this as a command, or extend the query expression in some way. Both approaches have advantages and disadvantages. That’s probably why I haven’t made any movement on this front yet.

CCLM

I did spend the Monday working on CCLM. I coded up a small script, which takes some of the ideas from the blog post on puzzle design I mentioned last week, that I can run to get some ideas. So far it only produces suggestions with two game elements, but it’s enough of a starting point for making puzzles:

leonmika@Stark cclm % go run ./cmd/puzzleidea
bear trap
directional walls
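
There’s nothing clever going on in the script itself; it’s little more than picking two distinct elements at random from a list. Something along these lines (the element list here is abridged and partly made up):

package main

import (
	"fmt"
	"math/rand"
)

// A few of the game elements to draw ideas from; the real list is longer.
var elements = []string{
	"bear trap",
	"directional walls",
	"yellow button",
	"fireball",
	"ice",
}

func main() {
	// Shuffle the indices and print the first two as a puzzle prompt.
	for _, i := range rand.Perm(len(elements))[:2] {
		fmt.Println(elements[i])
	}
}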

After running it on Monday I had a go at starting work on a new level. It became clear reasonably soon after I started that I needed a new game element. So I added one, which I’ve called “kindling”. By default it looks like a pile of wood and is perfectly safe to walk on:

A screenshot of CCLM with a fireball about to hit kindling tiles

But if a fireball runs into it, it catches alight and spreads to any adjacent kindling tiles, turning them into fire tiles.

A screenshot of CCLM with kindling tiles catching alight and spreading to adjacent kindling tiles
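
The spread itself is little more than a flood fill over adjacent kindling tiles. A sketch of the idea in Go (the grid and tile types are stand-ins, not the actual CCLM code):

type Tile int

const (
	TileEmpty Tile = iota
	TileKindling
	TileFire
)

type Grid map[[2]int]Tile

// igniteKindling turns the kindling tile at (x, y) into fire and spreads the
// fire, breadth-first, to any orthogonally adjacent kindling tiles.
func igniteKindling(g Grid, x, y int) {
	queue := [][2]int{{x, y}}
	for len(queue) > 0 {
		p := queue[0]
		queue = queue[1:]
		if g[p] != TileKindling {
			continue
		}
		g[p] = TileFire
		queue = append(queue,
			[2]int{p[0] + 1, p[1]}, [2]int{p[0] - 1, p[1]},
			[2]int{p[0], p[1] + 1}, [2]int{p[0], p[1] - 1})
	}
}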

I’d had the idea for this element for a while. I even went to the extent of producing the graphics for it. But needing it for this puzzle finally brought me around to finishing the work. I actually managed to make most of the changes without touching the Go code at all: the existing tile definition configuration was almost powerful enough to represent this tile.

One other minor thing I fixed was the alignment of the info panels on the right side of the screen. Dealing with the unaligned numbers eventually got a bit much. The cursor position, marker position, and tag numbers are properly aligned now.

A screenshot of CCEdit with the cursor position, marker position, and tag numbers now properly aligned

Anyway, that’s all for this week.


Formatting Javascript/Typescript Code With Prettier

You’ve got a bunch of Typescript files, with both .ts and .tsx extensions. You need to format them using Prettier. Your code is managed with source control, so you can back out of the changes whenever you need to.

Here’s a quick command line invocation to do it:

npx prettier -w '**/*.ts' '**/*.tsx'

Dev Log - 2023-03-12

Preamble

When I moved Working Set over to Micro.blog, I’d thought I’d be constantly writing micro-posts about what I’m working on, as a form of working in public. I found that didn’t really work for me, for a few reasons.

I’ve got a strange relationship with this blog. I wanted a place online to write about the projects I’ve been working on, but every time I publish something here, I get the feeling that I’m “showing off” in some way: ooh, look what I’ve done, aren’t I clever? And okay, I’d be lying if I said there’s not a part of me that wants others to see how I spend my time. If I didn’t want that, I’d be content with these posts existing in a private journal.

And maybe this is a form of self-justification, but I’d like to think that there’s a bit of that feeling in every developer who keeps a public blog about what they do. Maybe not exactly “showing off”, but I’m sure they feel proud of what they work on and want to talk about it. And there’s really nothing wrong with that. In fact, the posts I tend to enjoy the most are those from other devs talking about the projects they’re working on.

So yeah, I admit that having others see what I’m working on would be nice. They say write what you want to read, and this is my attempt at doing just that.

But that only explains why I write about it on a public blog instead of a private journal. The reason I want to write these posts at all is that I’d like to keep a record of the projects I work on. Nostalgia is one reason: seeing a project progress over time, or remembering projects long since abandoned. Another is to track where I’m spending my time. This is theoretical at the moment, but if there ever came a time when I wanted to find that out, I’d have the record written somewhere.

But not as micro-posts. I think a fixed weekly cadence is more appropriate. I tried this a couple of years ago, and although it worked for a while, I fell out of the habit. But after seeing the weekly entries by Jonathan Hays, I’ve been inspired to try it again.

So that’s why I’m trying these weekly updates. They’ll be frequent enough to be useful as diary entries, but not so frequent that they bother people who aren’t interested. They’ll be long enough to warrant a title, making it easy for people to skip them. And they’ll be about anything related to a side project I’m working on: current or abandoned, public or completely private. And I’m giving myself permission not to feel bad about it.

Anyway, we’ll see how we go.

Dynamo-Browse

Big week for Dynamo-Browse: I finally got v0.2.0 out the door. This is the release with scripting support (yes, it finally shipped). The scripting implementation has been finished for a while; the thing blocking its release was all the documentation I had to write, both the section in the manual and the API reference.

The build was also a bit of an issue. The release builds are built using GitHub Actions. To get them published as Homebrew casks, the actions need to push them to another repository. The secret token used to access that repository expired, and I had to create another one. Not difficult, but the fact that I had to create a whole new secret instead of rotating the existing one was a little annoying. Getting the permissions right, and being forced to choose a different name (“Deploy Homebrew formulas v2”), didn’t help matters either.

But got there in the end. The v0.2.0 release is now available on dynamobrowse.app and GitHub.

I’ll reduce the time I spend on this for a little while. We’ll see how long that lasts. I use this tool for work so often and I’ve got a whole list of features I’d like to see added to it.

CCLM

The Beach level with the square indicated boxes opened in CCLM Edit

I got the editor up and running again last week, and I spent Saturday designing a level with the working name “The Beach”. I’m a huge fan of the Developing series on the Game Maker’s Toolkit YouTube channel, and the latest video was about how difficult it was for Mark to design levels for his video game. I found I have the exact same problem designing levels for mine (although I think the lack of effort I put into it doesn’t help). He pointed to a blog post by an indie game designer that had some useful tips to help with puzzle design, among them the one about building a puzzle around two elements that interact with each other.

The level I worked on was for a custom element that changes boxes with a square indicator into blank tiles when the yellow button is pressed. I’ve had this element around for a while but I haven’t actually used it in a level yet. I thought it’d be time to do so, but the level I came up with seems a little simple. Not sure what I’ll do about it. I could either rearrange it so that it appears earlier in the level set, or make it a little more difficult in some way.

Client Project

One thing that releasing Dynamo-Browse has given me is the opportunity to do a small client project. I’ve talked about this on lmika.org and the latest update is that I think I’ve convinced him to consider a static site, seeing that it would be easier for him to run (don’t need to worry about plugins) and would be easier for me to build (I don’t know how to use Wordpress, especially not their new block editor).

This week was basically spent coming up with a site layout. I had the opportunity to use Figma for the first time. It works reasonably well, but I’m wondering if Balsamiq Mockups would have been a better choice for a rough outline of what the site is to look like. But that’s all moot: a layout was put together and sent to the client for feedback.

Anyway, still early days here. I’m looking at possible Hugo templates to build the site in and possible hosting solutions that would work with the client. I’m not aware of options for static hosting other than the AWS, Cloudflare or Azures of the world. Not sure it will work for the client, although it’s totally possible that I’m just not looking in the right places.

So that’s it. Update one done. Next week I’ll be taking some leave, so update two might be slightly shorter (at least there’ll be no preamble) and may be less about current updates. I guess we’ll find out together.


Completed the release of Dynamo-Browse 0.2.0. Most of the work in the last week was updating the manual, especially the scripting API. Some more updates need to be made for the query expressions as well, but I’ll publish what I have now and update that over time.


Sentinel Errors In Go

The errors.New() function in the emperror.dev/errors package includes context information, like stack traces. This means that if you use it for sentinel errors, i.e. error values like io.EOF, you’ll be including unnecessary stack information. This is especially wasteful when they’re defined as global variables:

var ErrThing = errors.New("thing happened, but that's OK")

Instead, use the errors.Sentinel type:

var ErrThing = errors.Sentinel("thing happened, but that's OK")

Dave Cheney discusses why this works. He also mentions that sentinel errors should be avoided altogether. Not sure I fully agree with him on this.
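
For reference, the gist of Cheney’s constant-errors idea is that a sentinel can be nothing more than a string type with an Error() method, which makes it a comparable constant with no stack trace and no allocation at all:

// Sentinel is an error that can be declared as a constant: just a string
// with an Error method.
type Sentinel string

func (s Sentinel) Error() string { return string(s) }

// Being a constant, it can't be reassigned at runtime, and comparing against
// it with == or errors.Is works as you'd expect.
const ErrThing = Sentinel("thing happened, but that's OK")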


Here’s a bit of a blast from the past. I managed to get ccedit working again. This was the original level editor for workingset.net/2022/12/2…, the Chips Challenge “fan game” I’ve been working on.

I’ve been designing a few levels for it recently, but since moving to a new Mac, the level editor I was using was going to be difficult to port. It’s a Qt application, the Qt bindings were a pain to set up, and I’d rather not go through that again. I was using a Mac at the time I started working on it, but I wasn’t yet ready to go all in on MacOS. So, to hedge my bets, I decided to go with Qt as the UI toolkit.

That was 5 years ago, and I’m unlikely to go back to Linux, so choosing Qt was a bit of a bad decision. I think if I had my time again, I’d go with something like AppKit.

Anyway, the level editor still works but I have to log into a screen share to use it. I’d like to be able to edit levels on the machine I’m using now.

The code for the original level editor was still around, but it hadn’t been touched in ages. It’s basically an SDL application — the same graphics library I’m using for the actual game itself — and the SDL v2 bindings I’m using are still maintained, so updating those was quite easy1.

One thing I did have to pull out was the Lua VM2. The editor was using old C Lua bindings. Better Lua VMs written in pure Go are now available, so I didn’t want to keep using these old bindings anymore. In fact, I didn’t want to use Lua at all. Lua was originally used for the level scripts, but I replaced this in favour of another language (which is no longer maintained 😒, but I’m not changing it again).

The original CCLM Editor
The original CCLM Editor

So far the editor boots up, but that’s about it. I can move the cursor around, but I can’t add new tiles or load existing levels. There seems to be something weird going on with the image name lookup. I originally thought image names were case insensitive, but after looking at the image name lookup logic in the game itself, I’m not so sure.

How much time I’d like to spend on this is still a bit of a question. It all depends whether I’d like to release the game itself in some fashion. There are still questions about whether I’m allowed to, given that the graphics are not my own. Still need to think about that.

But in any case, good to see the old editor again.


  1. The level editor was actually using older SDL v1 bindings, but it was relatively easy to port them over to v2, although some gaps are still present. ↩︎

  2. Lua was actually the second language used by the editor. The first was a Go-native Tcl interpreter. ↩︎


Spent some time closing off the Dynamo-Browse shortlist. I think I’ve got most of the big ticket items addressed. Here’s a brief update on each one:

Fix the activity indicator that is sometimes not clearing when a long running task is finished.

How long running tasks are dealt with has been completely overhauled. The previous implementation had many opportunities for race conditions, which was probably the cause of the activity indicator showing up when nothing was happening. I rewrote this using a dedicated goroutine for handling these tasks, and the event bus for sending events to the other areas of the app, including the UI layer. Updates and status changes are handled with mutexes and channels, and it just feels like better code as well.

It will need some further testing, especially in real world use against a real DynamoDB database. We’ll see if this bug rears its unpleasant head once more.
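
The shape of the new implementation is roughly a single goroutine consuming jobs from a channel and publishing start and finish events for the rest of the app to react to. A very stripped-down sketch of the idea, with the event bus reduced to a plain callback:

import "context"

type Task func(ctx context.Context)

type JobRunner struct {
	jobs    chan Task
	publish func(event string) // stand-in for the real event bus
}

func NewJobRunner(publish func(string)) *JobRunner {
	jr := &JobRunner{jobs: make(chan Task, 16), publish: publish}
	go jr.loop()
	return jr
}

// Submit queues a task without blocking the UI loop (unless the queue is full).
func (jr *JobRunner) Submit(t Task) {
	jr.jobs <- t
}

// loop is the only goroutine that runs tasks, so "started" and "finished"
// events always come in pairs, which is what keeps an activity indicator honest.
func (jr *JobRunner) loop() {
	for t := range jr.jobs {
		jr.publish("task started")
		t(context.Background())
		jr.publish("task finished")
	}
}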

Fix a bug in which executing a query expression with just the sort key does nothing. I suspect this has something to do with the query planner somehow getting confused if the sort key is used but the partition key is not.

Turns out that this was actually a problem with the “has prefix” operator. It was incorrectly determining that an expression of the form sort_key ^= "string" with no partition key could be executed as a query instead of a scan. Adding a check to see if the partition key also existed in the expression fixed the problem.

Also made a number of other changes to the query expression language. Added the ability to use indexed references, like this[1] or that["thing"]. This has been a long time coming, so it’s good to see it implemented. Unfortunately it only works reliably when a single level is used, so this[1][2] will result in an error. The cause of this is a bug in the Go SDK I’m using to produce the query expressions that are run against the database. If this becomes a problem I’ll look at it again.

I also realised that true and false were not treated as boolean literals, so I fixed that as well.

Finally, the query planner now considers GSIs when it’s working out how to run a query expression. If the expression can be run as a query over a GSI, it will be. Given the types of queries I need to run, I’ll be finding this feature useful.
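
At its core, GSI-aware planning is a matter of checking whether the expression pins down the partition key of the table or of one of its indexes. A simplified sketch of that decision (the types are illustrative, not the actual planner’s):

type KeySchema struct {
	IndexName    string // empty for the main table
	PartitionKey string
	SortKey      string
}

// chooseIndex returns the first key schema whose partition key is constrained
// by an equality in the expression; if none match, the expression has to run
// as a scan. Note that "first" is doing a lot of work here: when several
// schemas match, the choice is ambiguous, which is the random-index problem
// that later showed up with multiple GSIs sharing a partition key.
func chooseIndex(constrained map[string]bool, schemas []KeySchema) (KeySchema, bool) {
	for _, ks := range schemas {
		if constrained[ks.PartitionKey] {
			return ks, true
		}
	}
	return KeySchema{}, false
}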

Fix a bug where set default-limits returns a bad value.

This was a pretty simple string conversion bug.

Add a way to describe the table, i.e. show keys, indices, etc. This should also be made available to scripts. Add a way to “goto” a particular row, that is select rows just by entering the value of the partition and optionally the sort key.

These I did not do. The reason is that they’d make good candidates for scripts, and it would be a good test to see whether they can be written as such. I think the “goto” feature would be easy enough. I added the ability for scripts to get information about the current table and to add new key bindings, so I don’t foresee any issues here.

The table description would be trickier. There’s currently no real way to display a large block of text (except the status bar, but even there it’s a little awkward). So a full featured description might be difficult. But the information is there, at least to a degree, so maybe something showing the basics would work.

Anyway, the plan now is to use this version for a while to test it out. Then cut a release and update the documentation. That’s a large enough task in and of itself, but I’d really like to get this finished so I can move onto something else.


I don’t know what I need to do to get Swift Packages working.

It never seems to work for me. The option to add dependencies is in the menu, but when I try to use it, it either errors out or cannot resolve the dependencies for some reason.

This is in addition to not being able to see or edit “Package.swift” as a text file, which I thought was the whole point of this.

Is it the version of Xcode I’m using? Does it work for anyone outside of Apple? Libraries are being released with Swift Package support so it must work for someone.


Mahjong Score Card Device #1 - Connection Attemption

I’ve recently started playing Mahjong with my family. We learnt how to play a couple of years ago and have grown to like it a great deal. I’m not sure if you’re familiar with how the game is scored; while it’s not complicated, there’s a lot of looking things up, which can make scoring a little tedious. So my dad put together a Mahjong scoring tool in Excel. You enter what each player had at the end of a round — 2 exposed pungs of twos, 1 hidden kong of eights, and a pair of dragons, for example — and it will determine the scores for the round and add them to the running totals. It also tracks the winds of the players and the prevailing winds, which are mechanics that can affect how much a player can get during a round. The spreadsheet works quite well, but it does mean we need to keep a bulky laptop around whenever we play.

I wondered if the way we calculated and tracked the scores could be improved. I could do something like build a web-style scorecard, like I did for Finska, but it gets a little boring doing the same stuff you know how to do pretty well. No, if I wanted to do this, I wanted to push myself a little.

I was contemplating building something with an Arduino, maybe with a keypad and an LCD display mounted in a case of some sort. I’ve played around with LCD displays using an Arduino before, so that part didn’t seem too hard to do. I was more concerned about whether I could achieve the fit and finish needed for this to be usable. Ideally this would be something that I could give to others to use, not something just for me (where’s the fun in that?). Plus, I didn’t have the skills or the equipment to mount it nicely in an enclosed case that is somewhat portable. I started drawing up some designs for it, but it felt like something I wouldn’t actually attempt.

Snapshot of some of the designs I was thinking about
Some of the design notes I came up with.

One day I was perusing the web when I came across the SMART Response XE. From what I gathered, it’s a device that was built for classrooms around the early 2010s. Thanks to the smartphone, it didn’t become super successful, but hobbyists have managed to get their hands on them and reprogram them to do their own thing. It’s battery powered, has a full QWERTY keyboard and LCD display, is well built since it was designed to be used by children at schools, and feels great in the hand. And since it has an Atmel microcontroller, it can be reprogrammed using the Arduino toolchain. Such a device would be perfect for this sort of project.

I bought a couple, plus a small development adapter, and set about trying to build it. I’ll write about how I go about doing it here. As the whole “work journal” implies, this won’t be a nice consistent arc from the idea to the finished project. I’m still very much a novice when it comes to electronics, and there will be setbacks, false starts, and probably long periods where I do nothing. So strap in for a bit of bumping around in the dark.

The SMART Response XE, development adapter, and ISP programmer.
The SMART Response XE, development adapter, and ISP programmer.
Closeup of the SMART Response XE
Closeup of the SMART Response XE.

First Connection Attempt

The way to reprogram the board is to open up the back and slot some pins through the holes just above the battery compartment. From my understanding, these holes expose contact pads on the actual device board that are essentially just an ISP programming interface. In theory, if you had an ISP programmer and a 6 pin adapter, you should be able to reprogram the board.

Rear of the SMART Response XE, with the back cover removed showing the battery compartment and the holes used to reprogram the device
Rear of the SMART Response XE. Those holes above the battery compartment are where the adapter slots in.

The first attempt at connecting to the SMART Response XE was not as successful as I’d hoped. For one thing, the SRXE Development Adapter was unable to sit nicely within the board. This is not a huge issue in and of itself, but it did mean that in order to get any contact with the board, I had to push down on the device with a fair bit of force. And those pogo pins are very fragile. I think I actually broke the tip of one of them while trying to use an elastic band and tape to hold the two together. I hope it doesn’t render the board useless.

The other issue I had is that the arrangement of the 6-pin header on the development board is incompatible with the pins of the ISP programmer itself. The pins on the ISP programmer are arranged to plug directly into an Arduino, but on the development board they’re the wrong way around. The left pin on both the male header on the board and the female socket from the ISP programmer is Vcc, when in order to work, one of them needs to be a mirror image of the other. This means that there’s no way for the two to connect such that the pins line up. If the pins were on the back side of the board, I would be able to plug them in directly.

I eventually got some jumper wires to plug the ISP programmer into the correct pins. Pushing down on the board I saw the LEDs on the adapter light up, indicating activity. But when I tried to verify the connection using avrdude, I got no response:

$ avrdude -c usbASP -p m128rfa1

avrdude: initialization failed, rc=-1
         Double check the connection and try again, or use -F to override
         this check.


avrdude done.  Thank you. 

So something was still wrong with the connection. It might have been that I’ve damaged one of the pins on the dev board while I was trying to push it down. I’m actually a little unhappy with how difficult it is to use the adapter to connect to the device, and I wondered if I could build one of my own.

Device Adapter Mk. 1

I set about trying just that. I wanted to be able to sit the device on top of the adapter such that the contact points on the board would rest on the adapter’s pins. I was hoping to make the pins slightly longer than the height of the device, so that when I rested it on the adapter, the device would balance slightly on top of the pins and the board would make contact through gravity alone.

This meant that the pins had to be quite long and reasonably sturdy. Jaycar did not have any pogo pins of the length I needed (or of any length), so I wondered if the legs from an LED would work1. I bought a pack of them, plus a prototyping board, and set about building an adapter for myself. Here’s the design I came up with:

The adapter design
The adapter design.

And here is the finished result:

Front of the finished adapter, showing the pins and header
Front of the finished adapter.
Rear of the finished adapter, showing the solder points
Rear of the finished adapter.

And it’s a dud! The position of the header gets in the way of where the device needs to rest on the pins. But by far the biggest issue is the pins themselves. I placed them in the holes and rested the circuit board on top with a small spacer while I soldered them, the idea being that after removing the spacer the pins would sit higher than the device. I was hoping to cut them down to size a little, but I cut them unevenly, which means some of the pins won’t make contact. When I rest the device on the board I get no rocking, which makes me suspect that none of the pins make contact. I’m still not happy with the LED legs either. They don’t seem strong enough to take the weight of the device without bending.

The best thing about it was the soldering, and that’s not great either. I’m not sure I’ll even try this adapter to see if it works.

Next Steps

Before I create a new adapter, I want to try to get avrdude talking to the board first. I think what I’ll try next is resting some nails in the holes and attaching them to alligator clips hooked up to the ISP programmer. If this works, I’ll see if I can set about building another board using the nails. I won’t use the header again, as I think it will just get in the way. It might be enough to simply solder some hookup wires directly onto the prototyping board.

Anyway, more to come on this front.


  1. I couldn’t find any decent bridging wire at Jaycar either so I used reclaimed wire from a CAT-5 cable. I stripped the insulation completely, twirled the wires, and soldered them onto the contacts. It worked really well. ↩︎


This is going to be an unpopular opinion, but I cannot stand the MacOS development experience. I wanted to start a new project, a MacOS SwiftUI project, and once I went through the New Project flow, the first thing that happened was the preview crapping out because the login to App Store Connect couldn’t provision a certificate. To generate the preview of the “Hello World” app that had just been created. Call me old fashioned, but needing to provision a certificate to generate a preview seems a little unnecessary.

How do experienced MacOS developers deal with crap like this? Honestly, I really feel for the devs going through all the shitty hoops Apple throws their way, as if attempting to build anything is a threat to their trillion dollar company. They really need to get some perspective.

Anyway, I’ll settle on using Go and Wails. I know how unpopular Electron-style apps are in the broader MacOS community (Wails doesn’t bundle Chrome so it’s not quite the same thing) but it’s a stack without any BS that I can rely on.


Looking at the “backlog” of things to work on for Dynamo-Browse before I set it aside. I’ll fix a few bugs and add a few small features that I’ve found myself really wanting. The short list is as follows:

  • Fix the activity indicator that is sometimes not clearing when a long running task is finished.
  • Fix a bug in which executing a query expression with just the sort key does nothing. I suspect this has something to do with the query planner somehow getting confused if the sort key is used but the partition key is not.
  • Fix a bug where set default-limits returns a bad value.
  • Add a way to describe the table, i.e. show keys, indices, etc. This should also be made available to scripts.
  • Add a way to “goto” a particular row, that is select rows just by entering the value of the partition and optionally the sort key.

I’ll start with these and see how I go.

Oh, and one more thing: I will need to kill my darlings, namely the other commands in the “audax” repository that I’ve hacked together. They’re mildly useful — one of them is used to browse SSM parameters and another is used to view JSON log files — but they’re unloved and barely functional. I’ll move them out of the “audax” repository and rename the repo to “dynamo-browse”, just to make it less confusing for everyone.


I think I’ll take a little break from Dynamo-Browse. There’s a list of small features that are on my TODO list. I might do one or two of them over the next week, then cut and document a release, and leave it for a while.

I’m still using Dynamo-Browse pretty much every day at work, but it feels a little demotivating being the only person that’s using it. Even those at work seem like they’ve moved on. And I can understand that: it’s not the most intuitive bit of software out there. And I get the sense that it’s time to do something new. Maybe an online service or something. 🤔


Finally bit the bullet and got scripting working in Dynamo-Browse. It’s officially in the tool, at least in the latest development version. It’s finally good to see this feature implemented. I’ve been waffling on this for a while, as the last several posts can attest, and it’s good to see some decisions made.

In the end I went with Tamarin as the scripting language. It was fortunate that the maintainer released version 1.0 just as I was about to merge the scripting feature branch into main. I’ve been trying out the scripting feature at work, and so far I’ve found it to work pretty well. It helps that the language syntax is quite close to Go, but I also think that being able to hide long-running tasks from the user (i.e. no promises everywhere) dramatically simplifies how scripts are written.

As for the runtime, I decided to have scripts run in a separate go-routine. This means they don’t block the main thread and the user can still interact with the tool. This does mean that the script will need to indicate when a long running process is occurring — which they can do by displaying a message in the status line — but I think this is a good enough tradeoff to avoid having a running script lock-up the app. I still need to add a way for the user to kill long-running scripts (writing a GitHub ticket to do this now).

At the moment, only one script can run at any one time, sort of like how JavaScript in the browser works. This is also intentional, as it will prevent a whole bunch of scripts launching go-routines and slowing down the user experience. I think it will help in not introducing any potential synchronisation issues with parallel running scripts accessing the same memory space. No need to build methods in the API to handle this. Will this mean that script performance will be a problem? Not sure at this stage.

I’m also keeping the API intentionally small at this stage. There are methods to query a DynamoDB table, get access to the result set and the items, and do some basic UI and OS things. I’m hoping it’s small enough to be useful, at least at the start, without overwhelming script authors or locking me into an API design. I hope to add methods to the API over time.

Anyway, good to see this committed to.


Poking Around The Attic Of Old Coding Projects

I guess I’m in a bit of a reflective mood these past few days, because I spent the morning digging up an old project that has lain dormant for several years. It’s effectively a clone of Chips Challenge, the old strategy game that came with the Microsoft Entertainment Pack. I was a fan of the game when I was a kid, even though I didn’t get through all the levels, and I’ve tried multiple times to make a clone of it.

The earliest successful clone I can think of was back when I was using Delphi, which I think was in my teens. It’s since been lost, but I do recall having a version that worked and was reasonably true to the original game. It wasn’t a particularly accurate clone: I do recall some pretty significant bugs, and the code itself was pretty awful. But it was nice to be able to do things like design my own levels (I wasn’t as internet savvy back then, and I didn’t go looking for level editors for Microsoft’s release of Chips Challenge). Eventually I stopped working on it, and after a few updates to the family computer, plus a lack of backups or source control, there came a time when I lost it completely.

Years later, I made another attempt at building a clone. I was dabbling in .NET at the time, and I think I was working on it as an excuse to learn C#. I got the basics of the game and the associated level editor working, but I didn’t get much further than that. I think I just got bored and stopped working on it.

I started the latest clone nine years ago. I can’t remember the original motivation. I was just getting into Go at the time and I think it was both to learn how to build something non-trivial in the language, and to determine how good Go was for building games. Although this is probably just a rationalisation: I’m sure the real reason was to work on something fun on the side.

Screenshot of CCLM, the latest clone
Screenshot of CCLM, the latest clone.

Over the first five years or so of its life, I worked on it on and off, adding new game elements (tiles, sprites, etc.) and capabilities like level scripts. One thing I am particularly proud of is a mini-language for selecting game elements using something akin to CSS selectors. Want to select all wall tiles? Use the selector SOLD. How about configuring a water tile to only allow gliders, plus the player but only if they’re holding the flippers? Set the Immune attribute of the water tile to GLID,PLYR:holding(flippers). This was particularly powerful when working on tile and sprite definitions.

Text editor showing tile definitions with sample selectors
A sample of how some of the tiles are defined, including usage of the selectors.
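
To give a rough idea of what evaluating those selectors involves (this is a reconstruction, not the actual CCLM code), a selector like GLID,PLYR:holding(flippers) is just a comma-separated list of alternatives, each an element ID with optional qualifiers:

import "strings"

type Sprite struct {
	ID      string          // e.g. "GLID", "PLYR"
	Holding map[string]bool // items the sprite is carrying
}

// Matches reports whether the sprite matches a selector such as
// "GLID,PLYR:holding(flippers)": any alternative may match, and a
// ":holding(item)" qualifier further restricts that alternative.
func Matches(selector string, s Sprite) bool {
	for _, alt := range strings.Split(selector, ",") {
		parts := strings.Split(alt, ":")
		if parts[0] != s.ID {
			continue
		}
		ok := true
		for _, qual := range parts[1:] {
			if strings.HasPrefix(qual, "holding(") && strings.HasSuffix(qual, ")") {
				item := qual[len("holding(") : len(qual)-1]
				ok = ok && s.Holding[item]
			}
		}
		if ok {
			return true
		}
	}
	return false
}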

I didn’t put as much effort into content however. As of today, there are only 18 or so unique levels, and about half of them are ones that I consider good. I certainly put little effort into the graphics. Many of the tile images were just taken from the original tile-set and any additional graphics were basically inspirations from that. This blatant copyright violation is probably why this project won’t see the light of day.

Screenshot of the level editor
The level editor, and one of the rare times when it's not crashing.

I’m impressed by how well Go maintains its backwards compatibility: moving from 1.13 to 1.19 was just a matter of changing the version number in the go.mod file. I haven’t updated any of the libraries, and I’m sure the only reason it still builds is because I haven’t dared to try.

I probably shouldn’t spend a lot of time on this. But it was fun to revisit it for a while.

One final thing: I might write more about projects I’ve long since abandoned or have worked on and haven’t released, mainly for posterity reasons but also because I like reflecting on them later. You never know what you’d wish you documented until you’ve lost the chance to do so.


Spent the day restyling the Dynamo-Browse website. The Terminal theme was fun, but over time I found the site to be difficult to navigate. And if you consider that Dynamo-Browse is not the most intuitive tool out there, an easy to navigate user manual was probably important. So I replaced that theme with Hugo-Book, which I think is a much cleaner layout. After making the change, and doing a few small style fixes, I found it to be a significant improvement.

I also tried my hand at designing a logo for Dynamo-Browse. The blue box that came with the Terminal theme was fine for a placeholder, but it felt like it was time for a proper logo now.

I wanted something which gave the indication of a tool that works on DynamoDB tables while also leaning into its TUI characteristics. My first idea was a logo that looked like the DynamoDB icon rendered in ASCII art. So after attempting to design something along those lines in Affinity Designer, and passing it through an online tool which generates ASCII images from PNGs, this was the result:

First attempt at the Dynamo-Browse logo

I tried adjusting the colours of the final image, and did a few things in Acorn to thicken the ASCII characters themselves, but there was no getting around the fact that the logo just didn’t look good. The ASCII characters were too thin and too much of the background was bleeding through.

Other attempts at the Dynamo-Browse logo

So after a break, I went back to the drawing board. I remembered that there were actually Unicode block characters which could produce filled-in rectangles of various heights, and I wondered if using them would be a nice play on the DynamoDB logo. Also, since the Dynamo-Browse screen consists of three panels, with only the top one having the accent colour, I thought having a similar colour banding would make a nice reference. So I came up with this design:

Final design of the Dynamo-Browse logo

And I must say, I like it. It does look a little closer to low-res pixel art than ASCII art, but what it’s trying to allude to is clear. It looks good in both light mode and dark mode, and it also makes for a nice favicon.

That’s all the updates for the moment. I didn’t get around to updating the screenshots, which are in dark-mode to blend nicely with the dark Terminal theme. They actually look okay on a light background, so I can probably hold-off on this until the UI is changed in some way.


I’ve been resisting using mocks in the unit tests of Dynamo-Browse, but today I finally bit the bullet and started adding them. There would have just been too much scaffolding code that I needed to write without them. I guess we’ll see if this was a wise decision down the line.


Thinking About Scripting In Dynamo-Browse, Again

I’m thinking about scripting in Dynamo-Browse. Yes, again.

For a while I’ve been using a version of Dynamo-Browse which included a JavaScript interpreter. I’ve added it so that I could extend the tool with a few commands that have been useful for me at work. That branch has fallen out of date but the idea of a scripting feature has been useful to me and I want to include it in the mainline in some way.

The scripting framework works, but there are a few things that I’m unhappy about. The first is around asynchronicity and scheduling. I built the scripting API around the JS event loop included in the interpreter. Much like a web browser or Node.js, this event loop allows the use of promises for operations that can be dispatched asynchronously. The interpreter, however, is not ES6 compatible, which means no async/await keywords. The result is that many of the scripts that I’ve been writing are littered with all these then() chains. Example:

const session = require("audax:dynamo-browse/session");
const ui = require("audax:dynamo-browse/ui");
const exec = require("audax:x/exec");

plugin.registerCommand("cust", () => {
    ui.prompt("Enter UserID: ").then((userId) => {
        return exec.system("lookup-customer-id.sh", userId);
    }).then((output) => {
        let customerId = output.replace(/\s/g, "");
        return session.query(`pk="CUSTOMER#${customerId}"`, {
            table: `account-service-dev`
        });
    }).then((custResultSet) => {
        if (custResultSet.rows.length == 0) {
            ui.print("No such user found");
            return;            
        }
        session.resultSet = custResultSet;
    });
});

Yeah, I know; this is nothing new if you’ve been doing any web dev or Node work in the past, but it still feels a little clunky exposing the execution details to the script writer. Is that something they should have to worry about? It feels like the tool should take on more here.

The second concern involves modules. The JavaScript interpreter implements the require() function, which can be used to load a JS module, much like Node.js. But the Node.js standard library is not available. That’s not really the fault of the maintainers, and to be fair to them, they are building out native support for modules. But that support isn’t there now, and I would like to include some modules to do things like access the file system or run commands. Adding them with non-standard APIs, under names that are the same as the equivalent modules in Node.js, feels like a recipe for confusion.

Further exacerbating this is that script authors may assume that they have access to the full Node standard library, or even NPM repositories, when they won’t. I certainly don’t want to implement the entire Node standard library from scratch, and even if the full library was available to me, I’m not sure the use of JavaScript here warrants that level of support.

Now, zooming out a little, this could all be a bit of a non-issue. I haven’t really shared this functionality with anyone else, so all this could be of no concern to anyone other than myself. But even so, I am thinking of options other than JavaScript. A viable alternative might be Lua — and Go has a bunch of decent interpreters — but I’m not a huge fan of Lua as a language. Also, Lua’s table structure being used for both arrays and structures seems like a source of confusion, especially when dealing with JSON-ish data structures like DynamoDB items.

One interpreter that has caught my eye is Tamarin. It’s early in its development, but it’s already showing some promise. It offers a Go-like syntax, which is nice, along with native literals for lists, sets and maps, which is also nice. There’s a bit of a standard library already in place to do things like string and JSON operations. There’s not really anything that interacts with the operating system, but this is actually an advantage, as it means I’m free to write those modules myself to do what I need. The implementation looks simple enough, which means it will probably play nicely with Go’s GC and scheduler.

How the example above could look in Tamarin is given below. This assumes that the asynchronous aspects are completely hidden from the script author, resulting in something a little easier to read:

ext.command("cust", func() {
    var userId = ui.prompt("Enter UserID: ")
    
    var commandOut = exec.system("lookup-customer-id.sh", userId).unwrap()
    var custId = strings.trim_space(commandOut)
    
    var custResultSet = session.query(`pk=$1`, {
        "table": "account-service-dev",
        "args": [userId]
    }).unwrap()
    
    if len(custResultSet.items) == 0 {
        ui.print("No such user found")
        return
    }
    
    session.set_result_set(custResultSet)
})

So it looks good to me. However… the massive, massive downside is that it’s not a well-known language. As clunky as JavaScript is, it’s at least a language that most people have heard of. That’s a huge advantage, and one that warrants thinking twice about before saying “no, thanks”. I’m not expecting Dynamo-Browse to be more popular than sliced bread (I can count on one hand the number of users I know are using it), but it would be nice if scripting was somewhat approachable.

One thing in Tamarin’s favour is that it’s close enough to Go that I think others could pick it up relatively easily. It’s also not a huge language — the features can be described on a single Markdown page — and the fact that the standard library hasn’t been fully fleshed out yet helps keep things small.

I guess the obvious question is: why not hide the scheduling aspects in the JavaScript implementation? Yeah, I think that’s a viable thing to do, although it might be a little harder than doing so with Tamarin. That would still leave the module expectation issue though.

So that’s where I am at the moment. I’m not quite sure what I’ll do here, but I might give Tamarin a go and see if it results in scripts that are easier to read and write.

One could argue that pontificating over something that will probably only affect me and a handful of other people isn’t worth the time at all. And yeah, you’re probably right in thinking that it isn’t. To that, I guess the only thing I can respond with is that at least I got a blog post out of it. 🙂

One last concern I do have, regardless of what language I choose, is how to schedule the scripts. If I’m serious about not exposing the specifics of which thread/goroutine is waiting for what to the script author, I’d like to run scripts in a goroutine separate from the main event loop. The concern is around start-up time: launching a goroutine is fast, but if I want to execute a script in response to a keystroke, for example, would it be fast enough? I guess this is something I should probably test first, but if it is a little unresponsive, the way I’m thinking of tackling it is to have a single goroutine waiting on a channel for script scheduling events. That way, the goroutine will always be “warm” and there’ll be no start-up time associated with executing a script. Something to think about.
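
The start-up question is easy enough to answer empirically: a goroutine launch plus a channel round-trip typically takes on the order of microseconds, well below anything a user would notice on a keystroke, and a quick-and-dirty measurement like this one will confirm it on a given machine:

package main

import (
	"fmt"
	"time"
)

func main() {
	const n = 10000
	start := time.Now()
	for i := 0; i < n; i++ {
		done := make(chan struct{})
		go func() { close(done) }() // launch, do nothing, signal completion
		<-done                      // wait for the goroutine to finish
	}
	fmt.Printf("average launch + round-trip: %v\n", time.Since(start)/n)
}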


Project Exploration: A Check-in App

I’m in a bit of an exploratory phase at the moment. I’ve set aside Dynamo-Browse for now and I’m looking to start something new. Usually I need to start two or three things before I find something that grabs me: it’s very rare that I find something exciting to work on before I actually start working on it. And even if I start something, there’s a good chance that I won’t actually finish it.

So it might seem weird that I’d write about these explorations at all. Mainly I do it for posterity: just something to record so that when I look back and think “what was that idea I had?”, I have something to reflect on.

With that out of the way, I do have a few ideas that are worth considering. Yesterday, I started a new Flutter project to explore an idea for an app that’s been rattling around in my head for a month or two.

Basically the idea is this: I keep a very simple check-in blog where I post check-ins for locations that seem useful to recall down the line. At the moment I post these check-ins manually, usually a few hours after I leave, in order to maintain some privacy over my movements. I’ve been entertaining the idea of an app that would do this for me: I could post a check-in when I arrive, but it wouldn’t be published until several hours after I leave. The check-ins themselves follow a consistent format, so the actual act of filling in the details can be handled by the app presenting these properties as a regular form. At the same time, I wanted to try something with the Micropub format and possibly the IndieAuth protocol.
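
Micropub itself is fairly approachable: publishing a check-in boils down to an authenticated POST of an h-entry to the blog’s Micropub endpoint, with the access token coming from the IndieAuth flow. A rough sketch of the request, in Go rather than Dart, with the property shape kept deliberately simple (a real check-in would usually carry an h-card for the venue):

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// postCheckin publishes a minimal check-in as a Micropub h-entry.
// The endpoint and token would normally come from IndieAuth discovery.
func postCheckin(endpoint, token, venue, note string) error {
	body, err := json.Marshal(map[string]any{
		"type": []string{"h-entry"},
		"properties": map[string]any{
			"content":  []string{note},
			"checkin":  []string{venue}, // simplified: usually a nested h-card
			"category": []string{"check-in"},
		},
	})
	if err != nil {
		return err
	}

	req, err := http.NewRequest(http.MethodPost, endpoint, bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+token)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusCreated && resp.StatusCode != http.StatusAccepted {
		return fmt.Errorf("micropub create failed: %s", resp.Status)
	}
	return nil
}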

So I started playing around with a Flutter app that could do this. I got to the point where I can post new check-ins and they will show up in a test blog. Here are some screenshots:

The home screen
The home screen, which obviously needs a bit of work. Pressing the floating action button will create a new checkin.
Selecting the check-in type
Selecting the type of check-in.
Entering check-in properties
Each type will have different properties. After selecting the check-in type, they will be presented to you in a form.
How it looks on the blog
How it will ultimately look on the blog.

It’s still very early stages at this moment. For one thing, check-ins are published as soon as you press “Checkin”. Eventually they should be stored on the device and then published at a later time. The form for entering checkins should also be slightly different based on the checkin type. Restaurants and cafes should offer a field to enter a star rating, and airport checkins should provide a way to describe your route.

I’m not sure if I will pursue this any further. We’ll see how I feel. But it was good to get something up and running in a couple of hours.


Looking for a new project to work on. I kinda want to give Unity a try, maybe look at making a game of sorts. How I’m going to get the artwork made is anyone’s guess though. Goodness knows I can’t draw it. 🤷

Update: Ok, decided against working on a Unity project. Working on Alto Catalogue instead.


Dynamo-Browse Running With iSH

Bit of a fun one today. After thinking about how one could go about setting up a small dev environment on the iPad, I remembered that I actually had iSH installed. I’ve had it for a while, but I’ve never really used it since I never installed tools that would be particularly useful. Thinking about what tools I could install, I was curious as to whether Dynamo-Browse could run on it. If Dynamo-Browse were a simple CLI tool that does something and produces some output, it wouldn’t be too difficult to achieve this. But I don’t think I’d be exaggerating if I said Dynamo-Browse is a bit more sophisticated than your run-of-the-mill CLI tool. So part of this was finding out not only whether building it was possible, but whether it would run well.

Answering the first question involves determining which build settings to use to actually produce a binary that worked. Dynamo-Browse is a Go app, and Go has some pretty decent cross-compiling facilities so I had no doubt that such settings existed. My first thought was a Darwin ARM binary since that’s the OS and architecture of the iPad. But one of the features of iSH is that it actually converts the binary through a JIT before it runs it. And it turns out, after poking around a few of the included binaries using file, that iSH expects binaries to be ELF 32-bit Linux binaries.

Thus, setting GOOS to linux and GOARCH to 386 produced a binary that would run in iSH:

GOOS=linux GOARCH=386 go build -o dynamo-browse-linux ./cmd/dynamo-browse/

After uploading it to the iPad using AirDrop and launching it in iSH: success!

Dynamo-Browse running in iSH on the iPad

So, it runs; but does it run well? Well, sadly, no. Loading and scanning the table worked okay, but doing anything in the UI was an exercise in frustration. It takes about two seconds for a key press to be recognised and the selected row to move up or down. I’m not sure what the cause of this is, but I suspect it’s the screen redrawing logic. There’s a lot of string manipulation involved, which is not the most memory efficient. I’m wondering if building the app using TinyGo would improve things.

But even so, I’m reasonably impressed that this worked at all. Whether it means I’ll be using iSH more often for random tools I build remains to be seen, but at least the possibility is there.

Update: While watching a re:Invent session on how a company moved from Intel to ARM, they mentioned a massive performance hit around a function that calculates the rune length of a Unicode character. This is something that this application is constantly doing in order to lay out the elements on the screen. So I’m wondering if this utility function could be the cause of the slowdown.

Update 2: Ok, after thinking about this more, I think the last update makes no sense. For one thing, the binary iSH is running is an Intel one, and although iSH is interpreting it, I can’t imagine the slowdowns here are a result of the compiled binary. For another, both Intel and ARM (M1) builds of Dynamo-Browse work perfectly well on desktops and laptops (at least on MacOS systems). So the reason for the slowdown must be something else.


Née Audax Toolset

I’ve decided to retire the Audax Toolset name, at least for the moment. It was too confusing to explain what it actually was, and with only a single tool implemented, this complexity was unnecessary.

The project is now named after the sole tool that is available: Dynamo-Browse. The site is now at dynamobrowse.app, although the old domain will still work. I haven’t renamed the repository just yet so there will still be references to “audax”, particularly the downloads section. I’m hoping to do this soon.

I think I’m happy with this decision. It really now puts focus on the project, and removes the feeling that another tool had to be built just to satisfy the name.


GitHub Actions and the macOS Release

I’m using Goreleaser to build releases of the Audax tools, since it works quite well for cross-compiling a Go program to various OS targets in a single run. I use this in a GitHub Action: whenever I create a tag, the release pipeline kicks off a run of Goreleaser, which cross-compiles Dynamo-Browse, packages it, and publishes it to GitHub itself.

Recently, I found a bug in the MacOS Brew release of Dynamo-Browse: when the user presses c to copy the current item, the entire program crashes with a panic. I tracked the cause down to the clipboard package that I’m using. Apparently, it requires Cgo to be usable on Darwin, and it will panic if Cgo is not enabled1. But the GitHub action I was using ran on an Ubuntu runner, and since the package requires some symbols from Foundation, which are only available on Darwin, I couldn’t just enable Cgo.

However, I did learn that GitHub Actions actually supports MacOS runners. I don’t know why I assumed it wouldn’t. I guess, much like most of my experience with anything Apple, I expected it to come at some sort of price. So you can imagine my delight in learning about this. And the first thing I did was try to use them in the release pipeline.

I first tried blindly switching from ubuntu-latest to macos. Sadly, it didn’t work as expected. It turns out that the macos runners do not have Docker installed, which broke a number of the actions I was using. And since the original ubuntu-latest runner worked well for the Linux and Windows releases, I didn’t want to just junk it.

So what I did was split the release pipeline into three separate jobs:

  • The first, running on ubuntu-latest, builds Audax tools and runs the tests to verify that the code is in working order.
  • The second, also running on ubuntu-latest, will invoke the Goreleaser action to build the Windows & Linux targets.
  • At the same time, the third, running on macos, will install Goreleaser via brew and run it to build the MacOS target.

The resulting pipeline looks a little like this:

name: release
on:
  push:
    tags:
      - 'v*'
jobs:
  build:
    runs-on: ubuntu-latest
    services:
      # localstack Docker image
    steps:
      # Checkout, build test
  release-linux:
    needs: build
    runs-on: ubuntu-latest
    steps:
      # Checkout
      - name: Release
        uses: goreleaser/goreleaser-action@v1
        if: startsWith(github.ref, 'refs/tags/')
        with:
          version: latest
          args: release -f linux.goreleaser.yml --skip-validate --rm-dist
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  release-macos:
    needs: build
    runs-on: macos-12
    steps:
      # Checkout
      - name: Setup Goreleaser
        run: |
          # Can't use goreleaser/goreleaser-action, as that requires Docker,
          # so install Goreleaser via Homebrew and run it from the shell.
          brew install goreleaser/tap/goreleaser
      - name: Release
        if: startsWith(github.ref, 'refs/tags/')
        run: |
          goreleaser release -f macos.goreleaser.yml --skip-validate --rm-dist
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          HOMEBREW_GITHUB_TOKEN: ${{ secrets.HOMEBREW_GITHUB_TOKEN }}

I’ve only used this to make a small maintenance release that fixes the clipboard panic problem, and so far it seems to work quite well.

After doing this, I also learnt that there’s actually a Goreleaser Docker image, which could theoretically be used to cross-compile with Cgo enabled. I haven’t tried it, and honestly it sounds like more trouble than it’s worth. I’d rather keep things simple.


  1. This is where I complain that the function that is panicking actually returns an error type. Instead of panicking and bringing down the whole program, why not just return an error? ↩︎


It’s been a while since I’ve posted an update here, so I thought I’d do so today.

Most of what’s going on with Audax and Dynamo-Browse is “closing the gap” between the queries and scans that can be performed over a DynamoDB table and how they’re represented in Dynamo-Browse’s query expression language. Most of the constructs of DynamoDB’s condition expression language can now be represented. The last thing to add is the size() function, and that is proving to be a huge pain.

The reason is that the IR representation uses the expression builder package to actually put the expression together. These builders use Go’s type system to enforce which constructs can work with one another. But this clashes with how I built the IR representation types, which are essentially structs implementing a common interface. Without an overarching type to represent an expression builder, I’m left with either using a very broad type like any, or completely ditching this package and doing something else to build the expression.
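To make the clash concrete, here’s roughly what the builder types look like, assuming the package in question is the AWS SDK’s DynamoDB expression builder (my reading of the situation, not something stated outright):

package example

import (
    "github.com/aws/aws-sdk-go-v2/feature/dynamodb/expression"
)

// Three "equals" expressions that look identical in a query language, but
// come back from the builder package as unrelated Go types.
var (
    // A filter/condition expression: expression.ConditionBuilder.
    filterEq = expression.Name("age").Equal(expression.Value(21))

    // A key condition expression: expression.KeyConditionBuilder, a separate
    // type that only accepts names and literal values.
    keyEq = expression.Key("pk").Equal(expression.Value("USER#1"))

    // size() produces a SizeBuilder, usable only as an operand of a
    // condition, never of a key condition.
    sizeEq = expression.Equal(expression.Name("tags").Size(), expression.Value(3))
)

There’s no single exported type that covers all of these builders, which is exactly what an IR built around one common node interface wants.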

It feels pretty annoying hitting a brick wall just when I was finishing this off. But I guess them’s the breaks.

One other thing I’m still considering is spinning out Dynamo-Browse into a separate project. It currently sits under the “Audax” umbrella, with the intention of releasing other tools as part of the tool set. These tools actually exist1 but I haven’t been working on them and they’re not in a fit enough state to release them. So the whole Audax concept is confusing and difficult to explain with only one tool available at the moment.

I suppose if I wanted to work on the other tools, this would work out in the end. But I’m not sure that I do, at least not right now. And even if I do, I’m beginning to wonder whether building them as TUI tools would be the best way to go.

So maybe the best course of action is to make Dynamo-Browse a project in its own right. I think it’s something I can resurrect later should I get to releasing a second tool.

Edit at 9:48: I managed to get support for the size function working. I did it by adding a new interface type with a function that returns an expression.OperandBuilder. The existing IR types representing names and values were modified to implement this interface, which gave me a common type I could use for the equality and comparison expression builder functions.

This meant that the IR nodes that required a name and literal value operand — which are the only constructs allowed for key expressions — had to be split out into separate types from the “generic” ones that only worked on any OperandBuilder node. But this was not as large a change as I was expecting, and actually made the code a little neater.
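As a rough sketch of the shape of this (the type and method names here are made up for illustration; they’re not the actual Dynamo-Browse types):

package irsketch

import "github.com/aws/aws-sdk-go-v2/feature/dynamodb/expression"

// operandNode is the common interface described above: any IR node that can
// act as an operand exposes the expression.OperandBuilder it maps to.
type operandNode interface {
    operandBuilder() expression.OperandBuilder
}

// nameNode, valueNode and sizeNode are the "generic" operand nodes.
type nameNode struct{ name string }
type valueNode struct{ value any }
type sizeNode struct{ of nameNode }

func (n nameNode) operandBuilder() expression.OperandBuilder { return expression.Name(n.name) }
func (n valueNode) operandBuilder() expression.OperandBuilder { return expression.Value(n.value) }
func (n sizeNode) operandBuilder() expression.OperandBuilder { return expression.Name(n.of.name).Size() }

// equalNode can now compare any two operand nodes, which covers the
// condition-expression side. Key expressions still get their own nodes,
// restricted to names and literal values.
type equalNode struct{ left, right operandNode }

func (n equalNode) condition() expression.ConditionBuilder {
    return expression.Equal(n.left.operandBuilder(), n.right.operandBuilder())
}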


  1. Dynamo-Browse was actually the second TUI tool I made as part of what was called “awstools”; the first was an SQS browser. ↩︎


Trying to build an old Android app that has fallen out of date. Doing this always brings up some weird errors.

The version of Gradle being used was woefully out of date, so I started upgrading it to the latest minor version used by the project (from 3.1.0 to 3.2.x). This required changing the distributionUrl in gradle-wrapper.properties. After making these changes, I was getting 502 errors when Android Studio tried to fetch the binaries from jcenter.bintray.com. I feared that the version of Gradle I was hoping to use was so out of date that the binaries might no longer be available: it wouldn’t be the first time Java binaries have disappeared as various sites changed URLs or shut down. Fortunately the problem was transient, and I managed to get a 3.2.x version of Gradle downloaded.

The various squiggly lines in the .gradle files disappeared, but I was still unable to build the app. Attempting to do so brought up the following error:

Unsupported class file major version 55

This was weird. The only version of Java I had installed was Java 11, and I confirmed that Gradle was using that specific install by checking Preferences → Build, Execution, Deployment → Build Tools → Gradle. (Class file major version 55 corresponds to Java 11, so something in the toolchain was refusing to deal with Java 11 bytecode.) So I couldn’t see why there would be a major version conflict. Was Gradle using a bundled version of Java 8 somewhere to build the app?

What I think resolved this, along with upgrading Gradle, the SDK and the build tools, was explicitly setting the source and target compatibility of the app to the same version of Java I was using. From within “build.gradle”, I added the following:

compileOptions {
    sourceCompatibility JavaVersion.VERSION_11
    targetCompatibility JavaVersion.VERSION_11
}

It might be that the version of Gradle I was using was so old that it defaulted to targeting Java 1.8 bytecode if these weren’t specified.

This error disappeared and the app was starting to build. But the next weird thing was that the compiler was not seeing any of the Android SDK classes, like android.app.ListActivity. What seemed to work here was upgrading Gradle all the way to the latest version (7.3.3), clearing the cache used by Android Studio, and making sure this line was added to the app module dependencies:

implementation fileTree(dir: 'libs', include: ['*.jar'])

This finally got the build working again, and I was able to run the app on my phone.

I don’t know which of these actions solved the actual problem. It’s working now, so I’d rather not investigate further. The good news is that I didn’t need to change anything about the app itself: all the changes were limited to the manifest and Gradle build files. Those Android devs really do a great job maintaining backwards compatibility.


Audax Toolset Version 0.1.0

Audax Toolset version 0.1.0 is finally released and is available on GitHub. This version contains updates to Dynamo-Browse, which is still the only tool in the toolset so far.

Here are some of the headline features.

Adjusting The Displayed Columns

The Fields Popup

Consider a table full of items that look like the following:

pk           S    00cae3cc-a9c0-4679-9e3a-032f75c2b506
sk           S    00cae3cc-a9c0-4679-9e3a-032f75c2b506
address      S    3473 Ville stad, Jersey , Mississippi 41540
city         S    Columbus
colors       M    (2 items)
  door       S    MintCream
  front      S    Tan
name         S    Creola Konopelski
officeOpened BOOL False
phone        N    9974834360
ratings      L    (3 items)
  0          N    4
  1          N    3
  2          N    4
web          S    http://www.investorgranular.net/proactive/integrate/open-source

Let’s say you’re interested in seeing the city, the door colour and the website in the main table which, by default, would look something like this:

The before table layout

There are a few reasons why the table is laid out this way. The partition and sort keys are always the first two columns, followed by any declared fields that may be used for indices. These are followed by all the other top-level fields, sorted in alphabetical order. Nested fields are not included as columns, and map and list fields are summarised with the number of items they hold, e.g. (2 items). This makes it impossible to view only the columns you’re interested in.
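In other words, the default ordering boils down to something like this sketch (the function and parameter names are hypothetical, not the actual Dynamo-Browse code):

package example

import "sort"

// defaultColumnOrder reflects the layout rule described above: partition key,
// then sort key, then any declared index fields, then every other top-level
// field in alphabetical order.
func defaultColumnOrder(partitionKey, sortKey string, indexFields, otherFields []string) []string {
    cols := []string{partitionKey, sortKey}
    cols = append(cols, indexFields...)
    sorted := append([]string(nil), otherFields...) // copy before sorting
    sort.Strings(sorted)
    return append(cols, sorted...)
}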

Version 0.1.0 now allows you to adjust the columns of the table. This is done using the Fields Popup, which can be opened by pressing f.

Adjusting the columns in the table

While this popup is visible you can show columns, hide them, or move them left or right. You can also add new columns by entering a Query Expression, which can be used to reveal the value of nested fields within the main table. It’s now possible to change the columns of the table to be exactly what you’re interested in:

The after table layout

Read-only Mode And Result Limits

Version 0.1.0 also contains some niceties for reducing the impact of working on production tables. Dynamo-Browse can now be started in read-only mode using the -ro flag, which will disable all write operations — a useful feature if you’re paranoid about accidentally modifying data on production databases.

Another new flag is -default-limit, which changes the default number of items returned from scans and queries from 1000 to whatever you want. This is useful to cut down on the number of read capacity units Dynamo-Browse will use on the initial scans of production tables.

These settings can also be changed from within Dynamo-Browse using the new set command:

Using the set command in Dynamo-Browse

Progress Indicators And Cancellation

Dynamo-Browse now indicates running operations, like scans or queries, with a spinner. This is an improvement over prior versions of Dynamo-Browse, which gave no feedback about running operations whatsoever and would simply “pop up” the results of such operations in a rather jarring way.

With this spinner visible in the status bar, it is also now possible to cancel an operation by pressing Ctrl-C. You have the option to view any partial results that were already retrieved at the time.

Other Changes

Here are some of the other bug fixes and improvements that are also included in this release:

  • The Audax toolset is now distributed via Homebrew. Check out the Downloads page for instructions.
  • A new mark command to mark all, unmark all, or toggle marked rows. The unmark command is now an alias for mark none.
  • Query expressions involving the partition and sort key of the main table are now executed as DynamoDB queries, instead of scans.
  • The query expression language now supports conjunction, disjunction, and dot references.
  • Fixed a bug where light mode was not being properly detected on macOS, which made some highlighted colours hard to see in dark mode.
  • Fixed the table-selection filter, which had never worked properly since the initial release.
  • Fixed the back-stack service to prevent duplicate views from being pushed.
  • Fixed some conditions that were causing segfaults.

Full details of the changes can be found on GitHub. Details about the various features can also be found in the user manual.

Finally, although it does not touch on any of the features described above, I recorded an introduction video on the basics of using Dynamo-Browse to view items of a DynamoDB table.

No promises, but I may record further videos touching on other aspects of the tool in the future. If that happens, I’ll make sure to mention them here.1


  1. Or you can like, comment or subscribe on YouTube if that’s your thing 😛. ↩︎