Week 6: Haunted by the past
This week had me dealing with the errors our test framework reports if the internal planner is used as an external planner instead of being called internally by default. On the upside, I got most of the interesting ones dealt with, or in other words: from 19 down to 10. That doesn't sound like much (or still like much, if you prefer that view), but the remaining ones are mostly diffs in the output: at the moment apt reports no progress for EIPP, while the external planner emits some progress directly, causing unexpected lines to appear.
There are three exceptions:
1. We have a test for forced essential loop-breaking, which means: apt by default refuses to temporarily deinstall essential packages to break Pre-Depends/Conflicts loops, but it can be forced to. See e.g. #748355 if you think such loops are never going to happen anyway. What is likely though is that we don't want the breaking to happen, so I have my doubts about adding a config option to the EIPP protocol just for this (a sketch of how such an option could look follows after this list). Mhhh.
2. Differences in immediate configuration. apt currently has basically three modes: no immediate configuration at all, immediate configuration of essentials and their dependencies (the default), and immediate configuration of all packages (see the apt.conf snippet after this list). I played with a per-package annotation, but eventually discarded it as this should rather be another planner-strategy thing.
3. What is (pseudo-)essential? As mentioned in 2., apt treats essentials and their dependencies specially. The problem enters the room if we remember that EIPP doesn't send the entire universe but tries to shrink it, which means entire packages and dependencies could be missing, giving the impression of unconnected trees which are in fact connected. I think a pseudo-essential marker is needed here to give the planner this information if it wants to use it (a stanza sketch follows below). The field exists already, I just have to figure out a good way of implementing it in the planner.
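To make the first point concrete: if such an option were added, the request stanza the planner reads would just grow one more field. A sketch of how that could look – nothing is decided yet and the field name is made up for illustration:

```
Request: EIPP 0.1
Architecture: amd64
Planner: internal
Allow-Temporary-Remove-of-Essentials: yes
```

A planner would remain free to ignore it, which is part of why I doubt it pulls its weight.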
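For reference, the three modes from the second point correspond to existing apt configuration roughly like this (apt.conf syntax, one mode at a time):

```
// mode 1: no immediate configuration at all
APT::Immediate-Configure "false";
// mode 2, the default: immediately configure essentials and their dependencies
APT::Immediate-Configure "true";
// mode 3: immediately configure every package
APT::Immediate-Configure-All "true";
```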
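And for the third point, the marker would travel with the package stanzas the planner receives; a hand-written example (the exact spelling of the field may differ):

```
Package: libc6
Architecture: amd64
Version: 2.23-1
Pseudo-Essential: yes
```

Essential packages depend on libc6, so a planner that sees only a shrunken universe could otherwise not know that this package deserves the special treatment.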
Arriving there took longer than I would have liked, though. First, it took me a lot longer than I would like to admit in public to figure out why downgrades weren't working. I had designed the protocol with them in mind, the code was expecting them, and all seemed right even after the twentieth inspection. Until I figured out that the planner wasn't even told to install the downgrades, as the logic deciding what counts as an install was "new || upgrade". Adding a "|| downgrade" and voilà…
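In spirit the bug boiled down to something like the following stand-in (heavily simplified with made-up names – the real code deals with pkgDepCache state flags):

```cpp
#include <iostream>

// Simplified stand-in for apt's per-package state, just for
// illustration; version 0 means "not installed" / "no change".
struct PkgState {
   int installed;  // currently installed version
   int candidate;  // version the user asked for
   bool NewInstall() const { return installed == 0 && candidate != 0; }
   bool Upgrade() const { return installed != 0 && candidate > installed; }
   bool Downgrade() const { return installed != 0 && candidate != 0 && candidate < installed; }
};

// Before: downgrades were simply never announced to the planner…
bool tellPlannerToInstallBuggy(PkgState const &s) { return s.NewInstall() || s.Upgrade(); }
// …after: the missing "|| downgrade".
bool tellPlannerToInstallFixed(PkgState const &s) { return s.NewInstall() || s.Upgrade() || s.Downgrade(); }

int main() {
   PkgState const downgrade{2, 1};  // version 2 installed, version 1 requested
   std::cout << "buggy: " << tellPlannerToInstallBuggy(downgrade)
             << ", fixed: " << tellPlannerToInstallFixed(downgrade) << '\n';  // buggy: 0, fixed: 1
}
```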
That was just silly though; the real "problem" was getting architecture-specific dependencies/provides to work. Again, the code was happy to accept them (nothing new there, the parsing of such details is done by good 'old' code). The joy comes from translating apt's single-arch view of the world, with provides and implicit dependencies aplenty, back into a multi-arch view to be output and fed to the planner (which takes it and translates it back to single-arch). Long long ago (2010) I decided that apt would implement Multi-Arch by translating it to Single-Arch. That has enormous advantages as all apt-based tools already knew how to deal with a single architecture – converting them to be truly Multi-Arch would have been a huge undertaking I am not sure all tools would have survived. I still think this was the right decision, but each time I have to deal with the complexities arising from this translation I wonder how the world would have been…
The real haunt of the past was something else though: protocols like EDSP use a unique numeric identifier to refer to a specific version, which is cool as APT has such a number around anyhow and often uses it as an index for arrays. Back when I implemented our internal solver for EDSP I had a problem though: where do I store this number so I can access it later? Obvious! I store it as the ID of the package!
Temporarily only, of course. In the future I would need a mapping, as you can never be sure that the highest ID is also the number of IDs handed out. That would need a tiny ABI break though, so it was filed under "for later": for now we could be sure about it, so no problem. Fast-forward a few years, in which I had happily forgotten about this problem. The tests happily worked, so I figured I should run it against a real system… and how it ran: instant segfault.
Thankfully the ABI situation is slightly better today, as we now have a d-pointer in a strategic place, so I could add the trivial code needed to introduce a mapping and hide it from the API – which helps in not worrying too much about the interface, as it can easily be changed if needed.
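The mapping itself is nothing fancy; conceptually it amounts to this sketch (made-up names again, the real thing hides behind the d-pointer):

```cpp
#include <cstddef>
#include <unordered_map>

// Map the protocol's version IDs to dense slots instead of assuming
// the IDs themselves are dense and directly usable as array indices.
class VersionIdMapping {
   std::unordered_map<unsigned long long, std::size_t> idToSlot;
public:
   // Hands out a new dense slot the first time an ID is seen: the
   // size() argument is evaluated before emplace() inserts, so the
   // first ID gets slot 0, the next slot 1, and so on.
   std::size_t slotFor(unsigned long long id) {
      return idToSlot.emplace(id, idToSlot.size()).first->second;
   }
   std::size_t slotCount() const { return idToSlot.size(); }
};
```

With that, a scenario whose highest ID is far bigger than the number of versions actually handed over no longer turns into out-of-bounds array accesses – the instant segfault from above.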
Next week I would like to see the three immediate-configuration planner strategies working, decide on the ForceLoop thing, and get progress reporting in order. With that done we would have (simple) feature parity, and I could look into what we need beyond that, but I will leave that for next time as this post is way too long already…