This article investigates the automatic explanation of irony in English tweets. The work covers the development and
validation of a conceptual framework for annotating knowledge-informed explanations of figurative language, as well as the training and evaluation of specialized generative models. Human judgements confirm that both fine-tuned open-source models (Llama 3) and proprietary models (GPT-4) can produce high-quality explanations that effectively incorporate relevant world knowledge. While overlap-based metrics such as BLEU and ROUGE do not align well with human judgement, we find that semantic similarity measures correspond closely with human quality estimations. The resulting models and datasets for irony explanation, published as the iRONNIE collection, help bridge the gap between theoretical accounts of irony and technical innovations in the NLP domain. The models will be released to the public to facilitate a deeper linguistic analysis of the world knowledge involved in understanding irony on social media in future work.