Sure, but the difference here is that it's indecipherable even to the people who did build it, to the extent that if it gives unwanted results, they don't really have a way of fixing it beyond throwing a bunch of selected data into it to try and rebalance its biases.
In my three years working professionally with AI, I've routinely witnessed software engineers fix unwanted results by editing the code directly, because they wrote that code, understand what it does, and know how to change it.
Granted, I was working at a small tech start-up whose AI was programmed by a team of eight or so software engineers. Some larger companies working with more sophisticated AI will have teams many times that size, and the more people you have writing the code, the harder it is for any one of them to decipher the parts of the system they didn't write.
But I've been in the room when a team of eight software engineers absolutely did decipher what the code was doing, identify one specific part of the process as the source of the error, and hard-code a solution which produced the desired results. It was, in fact, a fairly routine part of their software sprints to make those kinds of adjustments.
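To make "hard-coding a solution" concrete, here's a minimal, purely hypothetical sketch of what that kind of fix often looks like in practice. The function, labels, and override rule below are invented for illustration; they're not the startup's actual code, and "model" stands for any object with a predict(text) method.

```python
# Hypothetical sketch: a hand-written override layered on top of a model's
# output, the kind of targeted fix a team can make once they've traced an
# unwanted result to a specific step. All names and rules here are made up.

def classify(text: str, model) -> str:
    """Run the model, then apply a hard-coded correction rule."""
    label = model.predict(text)

    # Suppose the team traced a known mislabeling to messages mentioning
    # "refund"; this override corrects those cases after the model runs.
    if "refund" in text.lower() and label == "spam":
        label = "customer_support"

    return label
```

The point isn't the specific rule; it's that the fix lives in code the team wrote and can read, rather than in another round of retraining on curated data.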
If none of that were possible, we wouldn't have multiple AI programs right now, because no one would know how to make a new one that behaved differently from the ones we already have. There would be no innovation in the AI field, and we'd all be using the same indecipherable AI program that no one could improve upon. Which simply isn't the case.
So, why do big-name computer scientists keep hyping how indecipherable AI software is? I can only speculate, but I have to wonder: which researcher is likely to get more funding, the one trying to make incremental improvements to difficult but understandable software, or the one trying to decipher an incomprehensible, existential threat to humanity?