With the improved quality of Machine Translation (MT) systems in recent decades, post-editing (the correction of MT errors) has gained importance in Computer-Assisted Translation (CAT) workflows. The effort required to post-edit varies from sentence to sentence, depending on the number and severity of the errors in the MT output. Existing Quality Estimation (QE) systems provide scores that reflect the quality of an MT output at the sentence or word level. However, they fail to explain the relationship between different types of MT errors and the post-editing effort required to correct them. We suggest a more informative approach to QE in which different types of MT errors are detected in a first step and then used to estimate post-editing effort in a second step. In this paper we establish the upper bound of such a system. We use different machine learning methods to estimate Post-Editing Time (PET), with a gold-standard set of MT errors as features. We show that PET can be estimated with high accuracy when all the translation errors in the MT output are known. Furthermore, we apply feature selection methods and investigate the predictive power of different MT error types for PET. Our results show that the same prediction performance can be achieved using only a small subset of MT error types, indicating that successful two-step QE systems can be built with less effort in the future by detecting only the error types with the highest predictive power.
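As a minimal sketch of the second step described above, the code below trains a regressor on per-sentence counts of MT error types and then applies a simple feature-selection step to find a predictive subset. The error-type inventory, the synthetic data, and the choice of scikit-learn estimators are illustrative assumptions, not the actual setup or results reported in the paper.

```python
# Illustrative sketch: predict post-editing time (PET, in seconds) from
# per-sentence counts of gold-standard MT error types. All names and data
# below are hypothetical stand-ins for a real error taxonomy and corpus.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical error-type inventory (stand-in for a real taxonomy).
error_types = ["mistranslation", "omission", "addition",
               "word_order", "agreement", "punctuation"]

# Synthetic data: X[i, j] = count of error type j in sentence i,
# y[i] = post-editing time for sentence i.
X = rng.poisson(lam=1.0, size=(500, len(error_types)))
y = 5.0 + X @ rng.uniform(2.0, 15.0, size=len(error_types)) \
        + rng.normal(0.0, 3.0, size=500)

# Step 2 of the pipeline: estimate PET from (here, gold-standard) errors.
model = RandomForestRegressor(n_estimators=200, random_state=0)
print("R^2 with all error types:",
      cross_val_score(model, X, y, cv=5, scoring="r2").mean())

# Feature selection: keep only the k error types most predictive of PET.
selector = SelectKBest(f_regression, k=3).fit(X, y)
kept = [t for t, keep in zip(error_types, selector.get_support()) if keep]
print("Selected error types:", kept)
print("R^2 with selected subset:",
      cross_val_score(model, selector.transform(X), y,
                      cv=5, scoring="r2").mean())
```

On synthetic data of this kind, the subset model matches the full model closely, mirroring the abstract's claim that a small set of highly predictive error types can suffice.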