Episode II: Attack of the Calibration

Hey everyone! Hope you’re ready for Part 2 of 3 of the July Blog series! Let’s jump right in.  

If you actively follow our blog, or saw our post from yesterday, you’ve probably learned how important closed-loop feedback is. (Palette’s closed-loop feedback system was covered in the following posts: Oct 2015, Nov 2015, Dec 2015, Mar 2016, and in the comments of Apr 2016.)

Our feedback systems are what allow Palette to keep splice timing well-calibrated over the course of a print. Palette makes small adjustments on the fly to keep colors where they are supposed to be. As we discussed yesterday, we refer to this essential closed-loop feedback system as pinging. 

Pinging is important because printers are not perfect. Printers’ extrusion rates are not consistent throughout prints, so Palette must constantly monitor and adjust filament lengths to stay in calibration for the entire length of the print.

For the pinging system to work, two things must happen. First, pings must be detected, and anything that looks like a ping but isn’t must be ignored (we covered this in yesterday’s post here). Second, Palette must know how to make proper corrections to keep the print’s splice timing in calibration. Today, we’ll focus on correction and how Palette reacts when a ping is received.

Since colors must arrive at the print head with precision after many meters of filament, we often measure calibration performance in terms of length. With the currently released version of firmware, we were able to print ~30m on calibration with reasonable reliability. However, more testing and development with a wider variety of prints and printers was always the goal. By pausing Palette shipments, a portion of the team was able to carve out time to focus exclusively on calibration and pinging, ensuring that when the upgraded batch of Palettes ships, their calibration systems will perform much more reliably.

Previous Version 

The first release of firmware performed reasonably well and allowed us to complete sizeable prints successfully with every Palette that left our production facility. That being said, when we started to throw longer and more complicated prints at Palette, we would start to see the rate of successful prints decrease. Pushing the 30-meter limit became an exercise in chance – not something that we were happy with. So it was time to start investigating what was causing these failures. 

We saw that Palettes would lose calibration, sometimes recovering, other times getting progressively worse until the part looked more like something from a Dr. Seuss story than our CAD model. However, at this point it was difficult to tell whether the detection or the correction system was failing. This led us to think about ways of separating the two systems so we could test them independently.

To test correction, we needed to ensure every ping was caught at precisely the right time. Thus, the concept of “mechanical pinging” was born. We attached switches to our test printers and wired them directly to each Palette’s control board. Using a 3D printed piece glued to the printer’s extruder, we created a surface that could contact the switch at a specific X/Y G-code coordinate. So instead of pausing to initiate a ping, we had our software insert a custom G-code routine that moved the print head over to press the switch. This allowed Palette to know exactly when pings occurred, and removed the guesswork of extrusion-based pinging.

Mechanical ping switch used to isolate problems with our correction algorithm

What we found was that the corrective algorithms were pretty good. We were able to print tens of meters, if not more, with perfect splice timing (and therefore perfect color calibration). The following picture is of the first print we did with mechanical pinging:

First mechanical ping print, ~45 meters in length.

However, we noticed that with different models we would have calibration failures at varying stages of the print. Watching prints, we could often see pings coming through and pushing the timing closer to proper calibration, but it was frequently not enough to keep the color in the right place. Check out the following prints to get a better idea of what we were dealing with.

After printing about 50m of filament, the color calibration began to drift. Although pings would come through and push it back on target, the color eventually slipped, and we had red in the blue sections and vice versa.

When printing this dragon, we saw calibration problems as well. As soon as the print progressed to the wing section of the model, pinging wasn’t able to keep up with the required corrections. 

These failures showed that although the correction algorithm could work, there was more work to be done. The old correction algorithm used a global offset: it scaled splice lengths by the percent difference between where a ping was received and where we expected that ping to arrive (based on information from the G-code).

For example, if a ping occurs after 97m of filament has been printed but we expected it to occur at 100m, the ping offset would be set to 97%. This offset is then applied to the lengths at which splices occur. The Mosaic SEEM File (.msf) includes the lengths at which Palette should switch from one filament to another. These transition points are provided as cumulative filament lengths; for example, an .msf containing four 500mm pieces would have lengths of [500, 1000, 1500, 2000]. This makes the math fairly straightforward: a ping offset of 97% would cause the second change to occur at 970mm instead of 1000mm. This offset is applied until the next ping is registered, allowing correction to be predictive on a basic level.
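To make the arithmetic concrete, here is a minimal sketch of a global multiplicative offset using the numbers from the example above. All function names are hypothetical illustrations; this is not Palette’s actual firmware.

```python
# Sketch of a global (multiplicative) ping offset. Hypothetical names;
# illustrates the math only, not Palette's real implementation.

def global_offset(expected_mm, actual_mm):
    """Ratio of filament actually consumed to the length expected at a ping."""
    return actual_mm / expected_mm

def apply_global_offset(splice_points_mm, offset):
    """Scale every cumulative splice length from the .msf by the offset."""
    return [length * offset for length in splice_points_mm]

# A ping expected at 100m of filament arrives after only 97m:
offset = global_offset(expected_mm=100_000, actual_mm=97_000)  # 0.97

# Four 500mm pieces, stored as cumulative lengths in the .msf:
splices = [500, 1000, 1500, 2000]
print(apply_global_offset(splices, offset))  # each splice scaled to 97%
```

With this offset, the second splice moves from 1000mm to 970mm, exactly as described above, and the scaling applies until the next ping updates the offset.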

What we found with the above test prints is that different parts of the print would have significantly different offsets. As the printer goes from printing layers of a part with high amounts of detail, to layers that are much simpler, we can see a dramatic change in actual filament consumption when compared to the G-code. 

The transition towers allow for a degree of error in color calibration. Combined with the ability of global offsets to adapt to subtle variations over the length of a print, this let us complete a number of successful prints. However, given the dramatic changes in filament consumption at different points in a print (also discussed yesterday), it became quite obvious that another algorithm was needed – one that more accurately predicted printers’ extrusion rates later on in prints.

Dramatic Changes in Filament Consumption 

We wanted to touch on this briefly and provide a visual to help you understand what we mean. Check out two cross sections of prints below: 

The part on the left of the picture has much more consistent extrusion – this is due to larger bodies, fewer moves, and fewer retractions. 

The Frog print has much more inconsistent extrusion. This is due to very small elements within the print. 

Now, in principle, imagine a print shifting from the Rook’s extrusion rates (large bodies, solid extrusion) to the Frog’s extrusion rates (small bodies, lots of retraction, potentially inconsistent extrusion) when moving from layer 101 to 102. With a global correction algorithm, shifts like this could cause suboptimal splice timing. This leads us to the new algorithm…

New Version 

After examining the failures of the existing algorithm, it was clear that the new algorithm needed more localized corrections that could respond to variation within a part, while still maintaining the ability to predict future error and account for it in upcoming splices. From this, the concept of “local” correction was born.

The concept of local correction is that correction should be made relative to the most recent meter or two of filament printed, not all filament in the print. In theory, this should allow corrections to be made for a part that has a larger degree of variability. 

Global correction, as described before, is a multiplicative process. This multiplicative nature is the main reason that it is, in a simplistic way, predictive. Multiplicative offsets scale the affected pieces. In the case of a global offset, this scaling occurs about the start of the print (i.e., global offsets imply that segment lengths take into account extrusion rates from the entire print up until that point). However, local offsets are applied at various points throughout the print (i.e., local offsets only consider extrusion rates from the last few meters). 

This makes the correction a bit more challenging. We need a reference point that, like the start of a print, is precisely known by Palette. Luckily, we have a number of these throughout a print: pings, which act as checkpoints. Instead of applying the local offset over the entire length of filament created, we apply it only to the filament produced since the last ping. However, that alone won’t account for all of the error that has built up throughout the print; it will only provide a prediction for splices made after a given ping.

The solution is to add an additive component to the correction. When a ping registers, we can measure the amount of error between the .msf and the actual print. This difference is then applied to all future splice points, shifting them accordingly.

So, to recap: we now have a more localized algorithm compared to the previous global version. The local algorithm uses both additive and multiplicative offsets, whereas the global version used only multiplicative offsets. Together, these two offsets can both translate and scale the splice lengths from the .msf, instead of just scaling them.
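The combination of an additive shift (the error measured at the last ping) and a multiplicative scale (the local extrusion ratio over the recent stretch of filament) can be sketched roughly as follows. The function, numbers, and parameter names here are hypothetical illustrations of the translate-and-scale idea, not Palette’s firmware.

```python
# Sketch of a local ping correction: an additive shift plus a multiplicative
# scale, applied only to filament produced after the most recent ping.
# Hypothetical names and numbers; an illustration of the concept only.

def local_correction(splice_points_mm, last_ping_mm, error_mm, local_ratio):
    """Translate and scale future splice points.

    last_ping_mm -- cumulative length at which the last ping registered
    error_mm     -- accumulated offset measured at that ping (additive)
    local_ratio  -- actual/expected consumption over the recent stretch
                    of filament (multiplicative, predictive)
    """
    corrected = []
    for length in splice_points_mm:
        if length <= last_ping_mm:
            corrected.append(length)  # already printed; leave untouched
        else:
            # Scale only the portion produced since the last ping,
            # then shift by the error accumulated so far.
            local_part = (length - last_ping_mm) * local_ratio
            corrected.append(last_ping_mm + local_part + error_mm)
    return corrected

# Last ping registered at 10m; the printer is running 20mm behind the .msf
# and is locally consuming filament at 98% of the expected rate:
splices = [9_000, 11_000, 12_000]
print(local_correction(splices, last_ping_mm=10_000,
                       error_mm=-20, local_ratio=0.98))
```

Note how splice points before the last ping are left alone: only future splices are translated by the measured error and scaled by the local ratio, which is what lets the correction track sudden changes in extrusion behavior.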

Simple, right? Just implement that in firmware and we’re off to the races. Well, yes and no… The local algorithm took some time to develop, and implementing it was not exactly a piece of cake. One of the problems with 3D printing is that tests are generally lengthy: printing something and watching the color calibration takes many, many hours. Writing code, running a 12-hour test, modifying code, running another 12-hour test… it’s incredibly inefficient. For some development work that’s necessary, but for testing and refining a new calibration algorithm, it wasn’t going to work.


To combat the excruciatingly long tests, it was time to build something that could model a printer and Palette. It seemed like the perfect opportunity to dust off notes from our engineering classes and dive headfirst into numerical modelling. Replicating the system took a couple tries to get right, but eventually things started to click. 

To tie the numerical model to the real world, we collected data across a series of different prints by writing Scroll Wheel data to an SD card at each ping over the course of a print. When this data was loaded into the simulation along with the relevant .msf, we could see the effects of different algorithms virtually. Since the simulation was coded very similarly to Palette’s firmware, as we refined the correction logic it was easy to implement in real life as well.
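The spirit of that workflow can be sketched as an offline replay: feed recorded (expected, actual) ping lengths through a correction rule and track the residual error at each ping. The data and names below are hypothetical; the real simulation models far more of the printer and Palette than this.

```python
# Hypothetical offline replay of recorded ping data through a simple global
# offset, to see how much error remains at each ping. Illustrative only.

def replay_global(pings):
    """pings: (expected_mm, actual_mm) pairs recorded at each ping.
    Returns the residual calibration error remaining at each ping."""
    errors = []
    offset = 1.0  # start with no correction
    for expected, actual in pings:
        predicted = expected * offset      # where we predicted this ping to land
        errors.append(actual - predicted)  # error left despite the correction
        offset = actual / expected         # update the offset for future pings
    return errors

# Recorded data from a hypothetical print that drifts from 100% down to
# 95% of the expected filament consumption:
pings = [(10_000, 10_000), (20_000, 19_600), (30_000, 28_500)]
print(replay_global(pings))
```

Running recorded data through a function like this takes milliseconds instead of a 12-hour print, which is exactly what made it practical to compare many candidate algorithms.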

The most important output of the simulation is a graph of error over the course of the print. We ran the simulation using the original global algorithm on a set of data from a particularly varied print: 

The y-axis represents error in mm from perfect calibration, while the x-axis shows printed filament usage in units of 10^4 mm (e.g., a value of 1 on the x-axis equals 10 meters of filament).

As you can see, this print would not have stayed in calibration using the old algorithm. The global algorithm is unable to make large enough corrections for the step change in print behavior that happens around the 10m mark.

Now that we had a working model, our testing time went from hours to seconds. We were able to try a number of different algorithms, ultimately settling on the localized version described above.

And… The resulting error plots look much, much better: 

Here is the same print data as before, but using the localized algorithm instead. 

This was tested across our entire set of data with successful results across the board. The simulation also does a good job at showing why printer calibration is important:

Note – the x-axis here is in mm (not 10^4 mm as in the earlier plots).

As you can see, there’s a spike outside our desired error bounds around the 2m mark. This shows the lag between the start of the print and the point where pinging takes effect. Better printer calibration and first-layer adhesion should, in most cases, decrease this error.


Now the fun part – printing! Taking the algorithm developed for the simulation, we implemented the same functionality into Palette’s firmware. After a couple of tries, things really started to click. 

Here is a sampling of what we were able to achieve: 

Bigger, cleaner and better than before. So excited to see this guy come off the print bed. 55m!

We wanted to push the limits of the calibration system on this print. 145m on calibration, with a segment of in-layer changes at the end.

Despite a broken splice, this print was something we only dreamt of 2 years ago. It made its way through over 30 hours of printing, 500+ splices, and 168 meters of filament. 

The new pinging correction algorithm has introduced a step change to our calibration procedures, and will ensure your prints come off more reliably, with better quality than ever before. This algorithm is something we’ve been working on perfecting ever since we heard back from some of our early users, and is going to be shipping on every Palette that leaves our facility. 

With each improvement to Palette, we get more and more excited to get them out into your hands! It’s been really exciting to see prints get better, longer, and more complex in our Toronto office. We’re really looking forward to resuming shipping soon, and giving you and your printers capabilities they never had before. 

Keep your eyes out for the end of our Blog Trilogy tomorrow – Firmware, Software and Delivery. 

’Til next time (literally tomorrow),