I didn’t really like the cofactor (adjugate) construction of the inverse as a way of proving that a matrix with nonzero determinant is invertible. I’m willing to accept the equation det(MN) = det(M)det(N) on faith, since I am confident I could work that out if I really had to. With that in my pocket, here’s an explanation of the correspondence between nonzero determinant and invertibility.
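The product rule I’m taking on faith is easy to sanity-check numerically. Here’s a minimal sketch in Python; the `det` below uses Gaussian elimination rather than cofactors, and all the names are mine, not from any particular library:

```python
import random

def det(A):
    """Determinant via Gaussian elimination with partial pivoting."""
    n = len(A)
    A = [row[:] for row in A]  # work on a copy
    d = 1.0
    for i in range(n):
        # pick the largest pivot in column i to keep the arithmetic stable
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        if abs(A[p][i]) < 1e-12:
            return 0.0  # no usable pivot: the matrix is singular
        if p != i:
            A[i], A[p] = A[p], A[i]
            d = -d  # a row swap flips the sign of the determinant
        d *= A[i][i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
    return d

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

random.seed(0)
M = [[random.randint(-3, 3) for _ in range(3)] for _ in range(3)]
N = [[random.randint(-3, 3) for _ in range(3)] for _ in range(3)]
# the product rule holds, up to floating-point rounding
assert abs(det(matmul(M, N)) - det(M) * det(N)) < 1e-9
```

One random pair proves nothing, of course; the point is just that the identity is cheap to spot-check before leaning on it.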
Suppose M is invertible. Then there is some M⁻¹ such that MM⁻¹ = I, the identity matrix. Then 1 = det(I) = det(MM⁻¹) = det(M)det(M⁻¹), and 1 cannot be obtained as a product of 0 with anything, so both M and M⁻¹ have nonzero determinant.
Now suppose M is (square and) noninvertible. Then the transformation T that M represents is not injective, so its kernel contains some nonzero vector X, and we may extend X to a basis B for our vector space. The matrix M’ for T relative to B includes a column of all 0s, in the position X occupies in the ordering of B, since T sends X to 0; hence det(M’) = 0. For appropriate change-of-basis matrices P, P⁻¹ we have M = PM’P⁻¹, so det(M) = det(PM’P⁻¹) = det(P)det(M’)det(P⁻¹) = 0.
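This half of the argument can also be checked concretely. Below is a sketch in Python with matrices I made up for illustration: M’ has a zero column, and conjugating by an invertible P can’t make the determinant nonzero.

```python
def det3(A):
    # 3x3 determinant, expanded along the first row
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
          - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

def matmul(A, B):
    # 3x3 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# M' relative to B: the third basis vector (our X) is sent to 0,
# so the third column is all zeros and det(M') = 0.
Mp = [[1, 2, 0],
      [0, 1, 0],
      [4, 5, 0]]

# An (arbitrary) invertible change-of-basis matrix P and its inverse.
P    = [[1, 1, 0], [0, 1, 1], [0, 0, 1]]
Pinv = [[1, -1, 1], [0, 1, -1], [0, 0, 1]]

M = matmul(matmul(P, Mp), Pinv)  # M = P M' P^-1
assert det3(Mp) == 0
assert det3(M) == 0  # conjugation cannot rescue the determinant
```

Since everything here is integer arithmetic, the determinant of M comes out exactly 0, matching det(P)det(M’)det(P⁻¹) = 1 · 0 · 1.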