Software quality models provide either abstract quality characteristics or concrete quality measurements; there is no seamless integration of these two aspects. Quality assessment approaches are therefore either very specific or remain abstract. Reasons for this include the complexity of quality and the varying quality profiles across domains, which make it difficult to build operationalised quality models.

Our aim was to develop and validate operationalised quality models for software, together with a quality assessment method and tool support, to provide the missing connection between generic descriptions of software quality characteristics and specific software analysis and measurement approaches. Because a single operationalised quality model that fits the peculiarities of every software domain would be extremely large and expensive to develop, we also set the goal of enabling modularised quality models with a widely applicable base model and various specific extensions. This constrained the types of analyses and measurements to include: we chose static analyses and manual reviews because they are the least dependent on the system context, whereas dynamic testing of a system would require specific test cases and execution environments. Furthermore, we focused on product quality and, hence, on product aspects influencing quality, rather than on process or people aspects. While we consider the latter important as well, we expected product aspects to be easier and more directly measurable.
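To make the idea of a modularised, operationalised quality model concrete, the following is a minimal sketch: abstract quality characteristics are linked to concrete measures, and a domain-specific extension is merged into a widely applicable base model. All class names, measure names, and the merge strategy are illustrative assumptions, not the metamodel actually used.

```python
from dataclasses import dataclass, field

@dataclass
class Measure:
    # A concrete measurement, e.g. a static-analysis finding count
    # or a manual-review checklist item (names are hypothetical).
    name: str
    source: str  # "static analysis" or "manual review"

@dataclass
class Characteristic:
    # An abstract quality characteristic, operationalised by measures.
    name: str
    measures: list = field(default_factory=list)

@dataclass
class QualityModel:
    name: str
    characteristics: dict = field(default_factory=dict)

    def extend(self, other: "QualityModel") -> "QualityModel":
        """Merge a domain-specific extension into a copy of this base model."""
        merged = QualityModel(f"{self.name}+{other.name}",
                              dict(self.characteristics))
        for cname, ch in other.characteristics.items():
            if cname in merged.characteristics:
                base_ch = merged.characteristics[cname]
                merged.characteristics[cname] = Characteristic(
                    cname, base_ch.measures + ch.measures)
            else:
                merged.characteristics[cname] = ch
        return merged

# A base model and a hypothetical embedded-systems extension
# contribute measures to the same abstract characteristic.
base = QualityModel("Base", {
    "Maintainability": Characteristic("Maintainability",
        [Measure("clone coverage", "static analysis")]),
})
embedded = QualityModel("Embedded", {
    "Maintainability": Characteristic("Maintainability",
        [Measure("stack-usage review", "manual review")]),
})
combined = base.extend(embedded)
```

The point of the sketch is the structure, not the specific measures: the base model stays generic, while each extension only adds the context-dependent operationalisations for its domain.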