In this work, we present the design and implementation of a system for proof explanation on the Semantic Web, based on defeasible reasoning. Trust is a vital feature of the Semantic Web: if users (humans and agents) are to use and integrate system answers, they must trust them. Systems should therefore be able to explain their actions, sources, and beliefs. Our system automatically produces proof explanations using a popular logic programming system (XSB) by interpreting the output of the proof's trace and converting it into a meaningful representation. It also supports an XML representation for agent communication, a common scenario on the Semantic Web. In this paper, we present the design and implementation of the system, a RuleML language extension for representing proof explanations, and some examples of the system's use. In essence, the system implements a proof layer for nonmonotonic rules on the Semantic Web.