Although speech recognition involves a fair amount of natural language processing, there has been very little collaboration between linguists and speech recognition engineers. Nowadays, both domains use similar techniques, such as log-linear models and graphical models. Moreover, the advent of powerful new algorithms allows the deployment of probabilistic models that are no longer limited by the oversimplifications present in most of today's speech recognition systems, and should allow the successful integration of rich linguistic knowledge into automatic speech recognition systems. In this project, a novel speech recognition framework and a set of accompanying, contemporary linguistic models are co-developed. The envisioned layered architecture combines local inference with inter-layer message passing, similar to what is believed to underlie human speech recognition. Compared to current architectures, such a layer-wise structure facilitates the incorporation of additional knowledge (linguistic or other), and is hence better equipped to model the nuances and complex inter-dependencies in speech.
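As a loose illustration of the layered idea, the sketch below shows two layers (here hypothetically labelled "phone" and "word") that each hold a local belief over their own hypotheses and iteratively exchange normalized messages through a compatibility matrix. The layer names, array shapes, and the fixed iteration count are illustrative assumptions, not the project's actual design.

```python
import numpy as np

def normalize(v):
    """Rescale a non-negative vector so it sums to one."""
    return v / v.sum()

def message_passing(local_phone, local_word, compat, iters=10):
    """Toy inter-layer message passing (illustrative, not the real system).

    local_phone: (P,) local evidence for P phone hypotheses
    local_word:  (W,) local evidence for W word hypotheses
    compat:      (P, W) phone-word compatibility matrix
    """
    belief_phone = normalize(local_phone)
    belief_word = normalize(local_word)
    for _ in range(iters):
        # Upward message: phone beliefs re-weight word hypotheses.
        msg_up = normalize(compat.T @ belief_phone)
        belief_word = normalize(local_word * msg_up)
        # Downward message: word beliefs re-weight phone hypotheses,
        # so higher-level (e.g. linguistic) knowledge feeds back down.
        msg_down = normalize(compat @ belief_word)
        belief_phone = normalize(local_phone * msg_down)
    return belief_phone, belief_word

# Example: two phones, two words, with a compatibility matrix that
# loosely couples phone i to word i.
bp, bw = message_passing(
    np.array([0.6, 0.4]),
    np.array([0.3, 0.7]),
    np.array([[0.9, 0.1], [0.1, 0.9]]),
)
```

The point of the sketch is only the information flow: each layer performs local inference on its own evidence, while the inter-layer messages let evidence from either level reshape the beliefs at the other, which is where additional linguistic knowledge could be injected.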