The Azure Project Bonsai Platform Revisited
It has been almost a year since I last visited the Bonsai platform, which I covered in my previous post. A lot has changed on the platform since then, so the time has come to revisit it and see in more detail what exactly changed (on the surface, of course)!
Next to that, I have been working on a new project - more details soon! - which made me revisit the introduction to the Bonsai platform I wrote last time.
From personal experience, I can definitely say that user experience is everything in platforms such as this one! It is not easy to combine the personas of a Data Scientist, Application Developer and Subject Matter Expert (SME) into a platform that allows the latter to define their own experiments.
Azure Deployment
Deploying is as easy as going to the Azure portal, searching for Bonsai and filling in the parameters!
Bonsai - First Launch
Upon launching the Bonsai platform for the first time, we are greeted by an amazing getting-started page! So let's click on the Cartpole example again and create it.
Bonsai - Interface
Now that our example has been deployed, let's see what we notice at first sight when comparing it to the old version!
Initial Findings
The first thing I noticed is that the Inkling code became a bit simpler, removing the boundaries that used to be defined on the values (which we would mathematically represent as [ -30, 30 ], for example). A small sketch of the difference is shown below.
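To make that concrete, here is a minimal sketch of what such a bound looks like in Inkling. The field names are borrowed from my memory of the Cartpole sample, so treat the exact names and ranges as assumptions on my part:

```
inkling "2.0"

# Sketch only: one field carries an explicit range (the older style),
# the others are plain, unbounded numbers (the simplified style).
type ObservableState {
    # Older style: the value is constrained to [-30, 30]
    cart_position: number<-30 .. 30>,

    # Newer, simplified style: just a number
    cart_velocity: number,
    pole_angle: number,
    pole_angular_velocity: number,
}
```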
But the biggest difference is the concept graph! It became super sleek, clearly defining the inputs, the conditions it will stop on, and the outputs.
Note: My OCD did catch an off-center point definition.
Concept Graph - A more in-depth view
Going a bit deeper into the Concept Graph, we can clearly see:
- The number of inputs
- The Goals it should learn from -> what do we want to avoid? (see the Inkling sketch after this list)
- Pole should not fall over
- Cart should stay within the track length
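For reference, this is roughly how those two goals are expressed in the Cartpole Inkling. The structure (goal / avoid / Goal.RangeAbove) matches what I remember from Microsoft's sample, but the constants and field names below are my own approximation rather than the exact code:

```
inkling "2.0"

using Math
using Goal

# Approximate constants for the cartpole environment
const TrackLength = 0.5
const MaxPoleAngle = (12 * Math.Pi) / 180

type SimState {
    cart_position: number,
    cart_velocity: number,
    pole_angle: number,
    pole_angular_velocity: number,
}

type SimAction {
    command: number<-1 .. 1>
}

graph (input: SimState): SimAction {
    concept BalancePole(input): SimAction {
        curriculum {
            # The simulator gets attached when training starts
            source simulator (Action: SimAction): SimState {
            }

            # The two "avoid" goals that show up in the concept graph
            goal (State: SimState) {
                avoid FallOver:
                    Math.Abs(State.pole_angle) in Goal.RangeAbove(MaxPoleAngle)
                avoid OutOfRange:
                    Math.Abs(State.cart_position) in Goal.RangeAbove(TrackLength / 2)
            }
        }
    }
}
```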
Even more amazing is that when we add inputs and outputs, the interface changes accordingly!
Training
The biggest change is how friendly training has become. While training, we can now clearly see our reward, with the blue line more prominently showing what we as users should care about most (average performance) alongside our goal score.
Next to that, the concept graph is used here as well for progress tracking. What is still unclear to me, however, is what "assessments" mean.
We can see training in more detail below.
This lets us check our running experiment and the parameters it is observing! It's simply amazing to be able to see why certain actions are being taken based on the observations received.
Remarks
- Personally, I think that Inkling is necessary for the ecosystem, but it might scare people off. It's an easy language to pick up if you are a programmer, but if you are new to it and have never coded before, it might be too much. I would love to see this evolve into a no-code solution.
- When first starting the interface, the concept of a "Brain" is used. Knowing this platform from before, I am comfortable with it, but it might be confusing for newcomers.
- The live visualization is very cool, but how it works could be clearer: which instance is it showing? Why does it only update once every 10 seconds?
- Spinning up a number of simulators is slow, with no progress indication of what is happening (it just states that it is looking for simulators)
- Training performance increased a lot! It only took me 7 minutes to reach a decent level
- Exporting the brain / integration is not as straightforward, with just a small button in the top right that is easy to miss. I would love to see a third tab that says "integrate"
Summary
The Bonsai platform has definitely evolved for the better and become more user-friendly! Before, the concept graph was not that clear, not showing the number of inputs or outputs, which has now been tackled.
In this revisit of the platform I just wanted to look at the changes that are immediately visible when opening it. Of course this is not everything, so stay tuned for more in-depth coverage as I continue using it!