The First Drop Test (Decoded)
Diogo here with some analysis of the first drop test, performed on the 3rd of October 2020. As some of you may know, this experiment consisted of two drop tests intended to verify that the boom does not interfere with the deployment of the FFU. Overall the test was a success and we are very happy about it, but during the first drop there was a problem: after landing, the FFU was disconnected and the stored data was corrupted. So grab your trench coat, fedora and magnifying glass, because in this blog post we will try to find out what possibly happened that day. Real detective stuff here, guys!
The FFU stores the FPGA and uC data in two separate files, and upon landing we observed that the latter was corrupted. The data is stored using what can be described as packet streaming: the data is split into smaller chunks (packets) that are written sequentially. With the help of our data parser we could see that the last packet was not stored completely, which is what corrupted the file. So, by performing some magic coding tricks, aka ignoring the last packet, we were able to look at the data, as seen in Figure 1. However, as expected, not much information could be taken from it.
Fig. 1 : Visualization of stored data (magnetic field) after successfully reading the corrupted file
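The "ignore the last packet" trick can be sketched as follows. This is a minimal illustration, assuming a hypothetical length-prefixed packet layout (a 2-byte little-endian length followed by the payload); the real FFU storage format is certainly different, but the idea of dropping a truncated trailing packet is the same.

```python
import struct

# Hypothetical packet layout for illustration only: a 2-byte
# little-endian length field followed by that many payload bytes.
HEADER_FMT = "<H"
HEADER_SIZE = struct.calcsize(HEADER_FMT)

def parse_packets(raw: bytes) -> list:
    """Split a byte stream into packets, ignoring a truncated last one."""
    packets = []
    offset = 0
    while offset + HEADER_SIZE <= len(raw):
        (length,) = struct.unpack_from(HEADER_FMT, raw, offset)
        start = offset + HEADER_SIZE
        end = start + length
        if end > len(raw):
            # The write was interrupted mid-packet (e.g. at landing),
            # so this final chunk is incomplete: skip it.
            break
        packets.append(raw[start:end])
        offset = end
    return packets
```

Every complete packet before the interrupted one is recovered, which is why we could still plot the data in Figure 1 even though the file itself was "corrupted".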
Well, that didn't really work. But a good detective doesn't give up so easily, and lucky for us we also have the log files!
Looking at the flight state logs in the table below, we can see that for the corrupted files the FFU only managed to reach state 3 (inside the airplane, in this case). After that, three more files were created due to three almost consecutive reboots of the FFU, all within around 26 s. As the table shows, these three files skipped state 3 but reached state 5 before shutting off, and the last attempt even reached state 6, the second-to-last state. However, all three files were stored with a size of 0 kB, meaning none of them contained actual data. This was the first time we saw this: in previous tests, files without data only occurred when the FFU did not reach a single state. This alone suggests there was a problem with the data-saving code, even though the FFU still recorded the state logs.
The skipping of state 3 is easily explained by looking at the code: the only condition for the transition from state 3 to state 4 is that the FFU has been ejected. If it was ejected during the first failed attempt, then once it rebooted the FFU had already fulfilled that condition and went straight to state 4.
For the transition from state 4 to 5 it is only necessary to wait a few milliseconds, so the FFU had no problem reaching that state. To reach state 6, however, it needs to have spent at least 20 seconds in state 5, and as mentioned above the three attempts together took around 26 s. This means that the variable counting the seconds in each state did not restart from 0 after each failed attempt (because resetting it requires verifying the same condition that triggers the state change), so the 20 s condition for state 6 was only met on the third attempt.
Table 1 : Flight State logs, where we observe the states reached in each file
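To make the timer argument concrete, here is a small simulation of the scenario above. It is a sketch under our assumptions, not the actual flight code: each boot starts from state 4 (the ejection condition persisted), the seconds counter is carried over instead of reset, and the ~26 s is split over three attempts in an illustrative way.

```python
def run_attempt(elapsed: float, run_time: float, step: float = 0.5):
    """One boot of the FFU in this simplified model.

    The state timer `elapsed` is carried over from previous attempts
    instead of restarting from 0, which is the suspected bug.
    """
    state = 4  # ejection flag already fulfilled, so state 3 is skipped
    t = 0.0
    while t < run_time:
        t += step
        elapsed += step
        if state == 4:
            state = 5  # only a few milliseconds required here
        elif state == 5 and elapsed >= 20:
            state = 6  # 20 s threshold finally crossed
    return state, elapsed

# Three near-consecutive reboots totalling ~26 s (illustrative split):
# only the third attempt accumulates enough time to reach state 6.
elapsed = 0.0
results = []
for run_time in (9.0, 9.0, 8.0):
    state, elapsed = run_attempt(elapsed, run_time)
    results.append(state)
```

Running this gives one final state per attempt: the first two boots stall in state 5, and the third reaches state 6, matching the pattern in Table 1.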
Overall, there is no definitive explanation for what happened, but based on what we just observed, and making some assumptions that will need to be checked later, the most plausible theory is that the problems were caused by a sudden malfunction of the system inside the airplane, most likely due to a faulty connection. Before the drop, the pin was accidentally (partially) released too early, and this was possibly the main cause of the issue: the pin was not ejected correctly, causing the FFU to reboot and retry three times after the first unsuccessful attempt, until it shut off for good, most likely when the FFU was released from the airplane.
Now we can take off our detective outfits for a while, but not for long, because there are still ways to investigate this further, such as looking at the raw GPS data and perhaps digging through our current data to find the exact moment it became corrupted.
That is all from me for now; stay tuned for more blog posts and check out our Facebook and Instagram pages for more information!