The answer to why it holds is pretty trivial (just work out the indexing for each element) once you know the definition of the Kronecker product and the 'vec' operation.
For an intuitive explanation, think through how the matrix multiplication works entry by entry and how the Kronecker product's block pattern acts on the stacked vector.
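To make the identity concrete, here's a small numerical check, assuming the usual column-stacking convention for vec and the standard identity vec(AXB) = (Bᵀ ⊗ A) vec(X) (the shapes and random matrices below are just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 2))

def vec(M):
    # Column-stacking 'vec' operation: stack the columns of M into one long vector.
    return M.reshape(-1, order="F")

# The identity: vec(A X B) = (B^T kron A) vec(X)
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
assert np.allclose(lhs, rhs)
```

The `order="F"` (Fortran/column-major) reshape matters: with NumPy's default row-major flattening you'd be computing a different convention of vec, and the matching identity would use A ⊗ Bᵀ instead.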
This honestly isn't a super interesting result, and I would say the original commenter was overstating its importance in matrix calculus. It's really more useful for solving certain matrix equations, or for speeding up some tensor-product calculations when the operands have the right structure. For example, if we discretize a PDE, then depending on the representation the operator on the discrete space may be a sum of Kronecker products, and applying those can be fast using ordinary matrix multiplies, never storing the Kroneckered matrices.
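A sketch of that last point, using the 2D Laplacian as a stand-in for the PDE case (the Kronecker-sum form I ⊗ T + T ⊗ I is one common way such discretized operators come out; the specific matrix T here is just an illustrative 1D second-difference stencil):

```python
import numpy as np

n = 40
# 1D second-difference matrix (tridiagonal) on n points
T = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))

U = np.random.default_rng(1).standard_normal((n, n))
vecU = U.reshape(-1, order="F")  # column-stacking vec

# Direct: explicitly build the n^2 x n^2 operator (O(n^4) storage)
L = np.kron(np.eye(n), T) + np.kron(T, np.eye(n))
direct = L @ vecU

# Fast: never form L. Using (B kron A) vec(X) = vec(A X B^T),
# (I kron T) vec(U) = vec(T U) and (T kron I) vec(U) = vec(U T^T),
# so the whole product is two small n x n matrix multiplies.
fast = (T @ U + U @ T.T).reshape(-1, order="F")

assert np.allclose(direct, fast)
```

The fast path costs O(n³) flops and O(n²) memory per term, versus O(n⁴) for both if you materialize the Kronecker factors, which is the whole appeal in the PDE setting.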