The Church-Turing thesis claims (and is widely believed to be correct) that a Turing machine can compute any function that is effectively computable at all.
For now at least, we don't know of any counter-example to the Church-Turing thesis. Humans are certainly not able to tell, in general, whether an arbitrary program will halt on a given input. If you believe otherwise, please tell me whether the following program halts:
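The original program isn't reproduced here, but a minimal sketch of the kind of program meant might look like this (assuming Python): it searches for a counterexample to Goldbach's conjecture, so whether `search()` halts is literally an open problem.

```python
# A program whose halting status is an open problem: search for a
# counterexample to Goldbach's conjecture (every even number > 2 is
# the sum of two primes). search() halts iff the conjecture is false.

def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_goldbach_sum(n):
    # True if the even number n is the sum of two primes.
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def search():
    n = 4
    while is_goldbach_sum(n):
        n += 2
    return n  # reached only if Goldbach's conjecture is false
```

Deciding whether `search()` returns would settle Goldbach's conjecture, which no human has managed so far.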
Solving the halting problem means being able to tell, for ANY Turing machine and ANY input, whether it will halt. I know of absolutely nothing that suggests humanity, or any individual human, can do this.
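For what it's worth, no Turing machine can do this either. Here is the classic diagonal argument in miniature, as a hedged Python sketch with "programs" modeled as zero-argument Python functions:

```python
# Sketch of the diagonal argument: no function `halts` can correctly
# decide halting, because we can always build a program that does the
# opposite of whatever `halts` predicts about it.

def make_diagonal(halts):
    def diagonal():
        if halts(diagonal):
            while True:   # halts() said "halts", so loop forever
                pass
        return            # halts() said "loops", so halt immediately
    return diagonal

# Example: a candidate decider that claims every program loops forever...
def claims_it_loops(prog):
    return False

d = make_diagonal(claims_it_loops)
d()  # ...yet its own diagonal program halts, so the decider is wrong.
```

Whatever decider you plug in, it gives the wrong answer on its own diagonal program, so no such decider exists.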
If there is even a single Turing machine for which no human can tell whether it halts, then the conjecture that human reasoning is no more powerful than Turing machines remains undisproven.
Also, if you were right that humans can do things that Turing machines can't, it would be an interesting exercise to find a concrete example of that super-Turing human reasoning and work out why it can't be encoded as a Turing machine. No one has successfully done this yet, so there is no reason to believe it is possible.
There is no reason to believe that the human mind can solve a strict superset of the problems solvable by a Turing machine, and a lot of evidence to the contrary.
Also, the link you replied with makes an argument about efficiency, not possibility. It argues that we might use devices that are not Turing machines because they can solve problems more efficiently, not because they can solve problems a Turing machine could never hope to solve.