WEIGHT of a Model


Quantitative measure of a Model reflecting the complexity and the functional size of the application it generates. The weight of a Model is calculated as the sum of the elementary weights of its elements.

The weight of a Livebase Model is measured with a dedicated metric called LCP (Livebase Complexity Point). In the LCP metric (just as in the standard Function Point metric), each element of the Model has a base weight that depends on the element type and a variable weight that depends on its complexity. For example, the weight of a query depends on the number of relationships it crosses, while the weight of a filter depends on the number of parameters in its Boolean expression.
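A minimal sketch of this additive scheme in Python, assuming invented element types, base weights and complexity measures (Livebase does not expose such an API; everything below is illustrative):

    # Hypothetical illustration of the LCP scheme: each element contributes
    # a base weight for its type plus a variable weight that grows with its
    # complexity; the Model weight is the sum over all elements.
    BASE_WEIGHT = {            # invented values, one per element type
        "class": 4,
        "native_attribute": 1,
        "relationship": 2,
        "derived_attribute": 3,
        "constraint": 2,
        "view": 5,
        "permission": 1,
    }

    def element_weight(kind, complexity=0):
        # complexity: e.g. relationships crossed by a query, or the number
        # of parameters in a filter's Boolean expression
        return BASE_WEIGHT[kind] + complexity

    def model_weight(elements):
        # weight of a Model = sum of the elementary weights of its elements
        return sum(element_weight(kind, cpx) for kind, cpx in elements)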

The historical trend of a Model's weight (ΔLCP/day) across the versions stored in the Library also provides very useful indications for evaluating the productivity of Developers in the initial stages of development, and the stability of the system (frequency and extent of changes) in the subsequent maintenance phase.
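For instance, ΔLCP/day can be read off a Model's version history as follows (a sketch with invented dates and weights):

    from datetime import date

    # (version date, Model weight in LCP) — illustrative values only
    versions = [
        (date(2023, 1, 10), 2400),
        (date(2023, 2, 14), 3100),   # rapid growth: active development
        (date(2023, 6, 1), 3150),    # near-flat trend: stable maintenance
    ]

    for (d0, w0), (d1, w1) in zip(versions, versions[1:]):
        delta_per_day = (w1 - w0) / (d1 - d0).days
        print(f"{d0} -> {d1}: {delta_per_day:+.1f} LCP/day")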

Although it is possible to generate useful applications from Models weighing only a few hundred LCP (for example, simple databases for personal use), the weight of the Models of applications put into production for professional purposes over the last ten years (thus excluding prototypes and demos) ranges approximately between 2,000 and 60,000 LCP.

Below is a brief description of the main modelling elements weighted by the LCP metric. 

  • Native elements of the conceptual data model (classes, native attributes and relationships). The weight of these elements accounts for the basic CRUD functionality (creation, reading, updating and deletion of objects) and conceptually corresponds to the count of ILF (Internal Logical File) elements in the Function Point metric.

  • Derived attributes of the conceptual data model, such as queries and mathematical expressions. The weight of these elements grows with the complexity of the queries (path length, number of parameters in filter formulas, etc.) and of the mathematical expressions, and conceptually corresponds to the count of EQ (External Inquiry) elements in the Function Point metric.

  • Constraints on the creation, modification and deletion of objects of a class, such as uniqueness constraints (keys), multiplicity constraints, constraints on attribute domains, and application constraints expressed as predicates (i.e. Boolean expressions over the native and derived attributes of the class). The weight of these elements grows with the complexity of the constraints.

  • Constraints on the creation and modification of relations between classes, such as multiplicity constraints at the ends of the relation and filters on associable objects expressed as predicates (i.e. Boolean expressions over the native and derived attributes of the two related classes). The weight of these elements grows with the complexity of the constraints.

  • Applicative views on the data model. Views partition data access both vertically (marking classes and attributes as visible or not visible) and horizontally (filters that make individual objects and relationship instances visible or invisible based on predicates, i.e. Boolean expressions over native and derived attributes). The weight of these elements grows with the size of the visible portion of the data model and with the complexity of the horizontal partitioning filters.

  • Permissions of each user profile on each application view. Permissions specify, for each user profile and for each application view accessible from that profile, the rights that profile has on each class, attribute or relationship enabled in that view. The weight of these elements grows with the number of [profile, application view] combinations defined in the Model, as sketched below.
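As an example of the last point, the permissions contribution grows with the number of [profile, application view] combinations (a sketch with invented profiles, views and unit weight):

    # Hypothetical: each [profile, view] combination contributes a unit
    # weight; profiles, views and the unit weight are invented values.
    views_by_profile = {
        "administrator": ["orders", "inventory", "reports"],
        "operator": ["orders"],
        "guest": ["catalogue"],
    }
    PERMISSION_UNIT_WEIGHT = 1

    combinations = sum(len(views) for views in views_by_profile.values())
    permissions_weight = PERMISSION_UNIT_WEIGHT * combinations
    print(f"{combinations} [profile, view] combinations -> "
          f"{permissions_weight} LCP from permissions")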