Epic Meepo
Hero
In reply to the quoted statement, "yes, but to be able to understand the result, you need to know the weights, that is the whole point of why it is basically impossible to explain the result":

I agree with much of your argument, but I disagree with the conclusion drawn in that statement. The fact that an AI assigns weights to training data has no bearing on how easy or hard it is to produce a human-readable explanation of the AI's output.
It is neither impossible nor especially difficult to determine the weights an AI assigns to training data. Weights are either hard-coded (in which case you know them if you know the code) or dynamically generated (in which case you can derive them from the code and the input data). This is no more or less difficult than determining the current internal state of, say, the operating system on the computer you're using to read this message. All you need to reproduce a program's current internal state is the code the program is running and the input data that led to that state (including any seeds used by random number generators). A step-by-step log explaining how you got there would take a long time for a human to read, but producing that log is fairly trivial.
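To make the point concrete, here is a minimal sketch in Python. The model, data, and update rule are all hypothetical stand-ins, not any real AI system: a toy one-layer model whose weights are seeded from a random number generator and then updated by a simple perceptron-style rule. Given the same code, the same input data, and the same seed, the derived weights come out identical every time, which is the sense in which dynamically generated weights are fully reproducible.

```python
import random

def train_tiny_model(data, seed):
    # Hypothetical toy model: one weight per input feature, updated
    # by a simple perceptron-style rule. Illustrative only.
    rng = random.Random(seed)  # the seed pins down the "random" init
    weights = [rng.uniform(-1, 1) for _ in range(2)]
    for x, y in data:
        pred = sum(w * xi for w, xi in zip(weights, x))
        err = y - pred
        # Each update is a pure function of code + data + prior state,
        # so logging every step here would yield the full "explanation".
        weights = [w + 0.1 * err * xi for w, xi in zip(weights, x)]
    return weights

data = [([1.0, 0.0], 1.0), ([0.0, 1.0], -1.0)]

# Same code + same input data + same seed => identical weights.
assert train_tiny_model(data, seed=42) == train_tiny_model(data, seed=42)
```

The same holds for any deterministic training loop, however large: the weights are a function of the code, the training data, and the seeds, so they can always be re-derived, even if the resulting step-by-step trace is far too long for a human to read in practice.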