SOFTWARE ENGINEERING
Lecture 12
Software Metrics
MBA Course Notes
Dr. ANH DAO NAM
Software Engineering
Slides are from Ivan Marsic and Thomas E. Potok, and Richard A. Volz,
modified by Anh Dao Nam
Textbooks:
Bruegge & Dutoit: Object-Oriented Software Engineering: Using UML,
Patterns and Java, Third Edition, Prentice Hall, 2010.
Miles & Hamilton: Learning UML 2.0, O’Reilly Media, 2006.
Some interesting sources for the advanced material include:
Richard A. Volz, Technical Metrics for Software
R. Pressman, Software Engineering - A Practitioner's Approach, 6th ed.,
2005
C. Ghezzi, M. Jazayeri, and D. Mandrioli, Fundamentals of Software
Engineering. Prentice Hall, second ed., 2002.
A. Endres and D. Rombach, A Handbook of Software and Systems
Engineering. The Fraunhofer IESE Series on Software Engineering,
Pearson Education Ltd., 2003.
S. Robertson and J. C. Robertson, Mastering the Requirements Process.
Addison-Wesley Professional, second ed., 2006.
Topics
Why Measure Software
Fundamentals of Measurement Theory
Use Case Points
Definitions
Measure - quantitative indication of extent,
amount, dimension, capacity, or size of some
attribute of a product or process.
E.g., Number of errors
Metric - quantitative measure of degree to
which a system, component or process
possesses a given attribute. “A handle or
guess about a given attribute.”
E.g., Number of errors found per person hours
expended
Motivation for Metrics
Estimate the cost & schedule of future projects
Evaluate the productivity impacts of new tools and
techniques
Establish productivity trends over time
Improve software quality
Forecast future staffing needs
Anticipate and reduce future maintenance needs
Example Metrics
Defect rates
Error rates
Measured by:
individual
module
during development
Errors should be categorized by origin, type,
cost
Metric Classification
Products
Explicit results of software development activities
Deliverables, documentation, by-products
Processes
Activities related to production of software
Resources
Inputs into the software development activities
hardware, knowledge, people
Software Quality
Software requirements are the foundation
from which quality is measured.
Specified standards define a set of
development criteria that guide the manner in
which software is engineered.
There is a set of implicit requirements that
often goes unmentioned.
Software quality is a complex mix of factors
that will vary across different applications and
the customers who request them.
McCall’s Software Quality Factors
Product Operation: Correctness, Reliability, Usability, Integrity, Efficiency
Product Revision: Maintainability, Flexibility, Testability
Product Transition: Portability, Reusability, Interoperability
Fq = Σ ci × mi, where Fq is a quality factor, the ci are regression coefficients, and the mi are the metrics affecting that factor
HP’s FURPS
• Functionality - evaluate the feature set and
capabilities of the program
• Usability - aesthetics, consistency, documentation
• Reliability - frequency and severity of failures
• Performance - processing speed, response time,
resource consumption, throughput, efficiency
• Supportability - maintainability, testability,
compatibility, ease of installation
Transition to a Quantitative View
• Previous slides described qualitative factors
for the measurement of software quality
• Everyday quality measurements
• gymnastics, wine tasting, talent contests
• side-by-side comparisons
• quality judged by an expert in the field
• Quantitative metrics don’t explicitly measure
quality, but some manifestation of quality
The Challenge of Technical Metrics
• Each quality measurement takes a different
view of what quality is and what attributes in a
system lead to complexity.
• The goal is to develop measures of different
program attributes to use as indicators of
quality.
• Unfortunately, a scientific methodology of
realizing this goal has not been achieved.
Measurement Principles
• Formulation - derivation of software metrics
appropriate for the software being considered
• Collection - accumulating data required to derive
the formulated metrics
• Analysis - computation of metrics and application
of mathematical tools
• Interpretation - evaluation of metrics in an effort
to gain insight into the quality of the system
• Feedback - recommendations derived from the
interpretation of metrics
Attributes of Effective Software Metrics
• Simple and computable
• Empirically and intuitively persuasive
• Consistent and objective
• Consistent in units and dimensions
• Programming language independent
• Effective mechanism for quality feedback
Function Based Metrics
• The Function Point (FP) metric can be
used as a means for predicting the size
of a system (derived from the analysis
model).
• number of user inputs
• number of user outputs
• number of user inquiries
• number of files
• number of external interfaces
Function Point Metric
Weighting factor per parameter: simple / average / complex
Measurement parameter | Count | Weights | Total
number of user inputs | 3 | 3 / 4 / 6 | 3×3 = 9 (simple)
number of user outputs | 2 | 4 / 5 / 7 | 2×4 = 8 (simple)
number of user inquiries | 2 | 3 / 4 / 6 | 2×3 = 6 (simple)
number of files | 1 | 7 / 10 / 15 | 1×7 = 7 (simple)
number of external interfaces | 4 | 5 / 7 / 10 | 4×5 = 20 (simple)
count-total = 50
Overall implemented size can be estimated from the projected FP value
FP = count-total × (0.65 + 0.01 × Σ Fi)
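As a concrete illustration, a minimal Python sketch of the FP computation from the table above, using the "simple" weights throughout and hypothetical ratings for the standard fourteen value-adjustment factors Fi:

# Function Point sketch: FP = count_total * (0.65 + 0.01 * sum(Fi))
counts  = {"inputs": 3, "outputs": 2, "inquiries": 2, "files": 1, "interfaces": 4}
weights = {"inputs": 3, "outputs": 4, "inquiries": 3, "files": 7, "interfaces": 5}

count_total = sum(counts[k] * weights[k] for k in counts)   # 9+8+6+7+20 = 50

f = [3] * 14   # hypothetical: all 14 adjustment factors rated "average" (3)
fp = count_total * (0.65 + 0.01 * sum(f))
print(count_total, fp)   # 50 and 50 * 1.07 = 53.5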
The Bang Metric
• Used to predict the application size based on
the analysis model.
• The software engineer first evaluates a set of
primitives: elements not subdividable at the
analysis level.
• With the evaluation of these primitives,
software can be defined as either function-
strong or data-strong.
• Once the Bang metric is computed, past
history must be used to predict software size
and effort.
Metrics for Requirements Quality
• Requirements quality metrics - completeness,
correctness, understandability, verifiability, consistency,
achievability, traceability, modifiability, precision, and
reusability; a metric can be designed for each (see Davis).
• E.g., let nr = nf + nnf , where
• nr = number of requirements
• nf = number of functional requirements
• nnf = number of nonfunctional requirements
Metrics for Requirements Quality
• Specificity (lack of ambiguity)
• Q = nui/nr
• nui - number of requirements for which all
reviewers had identical interpretations
• For completeness,
• Q = nu / (ni × ns)
• nu = number of unique function requirements
• ni = number of inputs specified
• ns = number of states specified
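A short sketch with hypothetical counts makes the two ratios concrete:

# Requirements-quality ratios sketch (all counts hypothetical)
n_ui, n_r = 18, 25            # unambiguous requirements / total requirements
specificity = n_ui / n_r      # Q = 0.72

n_u, n_i, n_s = 30, 10, 4     # unique functions, inputs specified, states specified
completeness = n_u / (n_i * n_s)   # Q = 0.75
print(specificity, completeness)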
High-Level Design Metrics
• Structural Complexity
• S(i) = fout(i)²
• fout(i) = fan-out of module i
• Data Complexity
• D(i) = v(i)/[fout(i) +1]
• v(i) = # of input and output variables to and
from module i
• System Complexity
• C(i) = S(i) + D(i)
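A minimal sketch of these three measures for a single hypothetical module:

# High-level design complexity sketch (hypothetical module data)
def structural_complexity(fan_out):
    return fan_out ** 2                    # S(i) = fout(i)^2

def data_complexity(num_io_vars, fan_out):
    return num_io_vars / (fan_out + 1)     # D(i) = v(i) / [fout(i) + 1]

fan_out, num_io_vars = 3, 8
s = structural_complexity(fan_out)         # 9
d = data_complexity(num_io_vars, fan_out)  # 2.0
print(s, d, s + d)                         # system complexity C(i) = S(i) + D(i)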
High-Level Design Metrics (Cont.)
• Morphology Metrics
• size = n + a
• n = number of modules
• a = number of arcs (lines of control)
• arc-to-node ratio, r = a/n
• depth = longest path from the root to a leaf
• width = maximum number of nodes at any
level
Morphology Metrics
[Figure: example module tree, root a with three levels of submodules b through r, annotated with size, depth, width, and arc-to-node ratio]
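The sketch below computes the four morphology measures for a small hypothetical module tree:

# Morphology-metrics sketch for a hypothetical call tree
from collections import defaultdict

edges = [("a", "b"), ("a", "c"), ("a", "d"), ("a", "e"),
         ("b", "f"), ("b", "g"), ("d", "h"), ("d", "i"), ("e", "j")]

nodes = {m for edge in edges for m in edge}
n, a = len(nodes), len(edges)                 # 10 modules, 9 arcs
size, ratio = n + a, a / n                    # size = n + a; r = a / n

children = defaultdict(list)
for parent, child in edges:
    children[parent].append(child)

level, depth, width = ["a"], 0, 1             # breadth-first sweep from the root
while level:
    width = max(width, len(level))
    level = [c for p in level for c in children[p]]
    if level:
        depth += 1
print(size, depth, width, round(ratio, 2))    # 19, 2, 5, 0.9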
AF Design Structure Quality Index
S1 = total number of modules
S2 = # modules dependent upon correct data
source or produces data used, excl. control
S3 = # modules dependent upon prior
processing
S4 = total number of database items
S5 = # unique database items
S6 = # of database segments
S7 = # modules with single entry & exit
AF Design Structure Quality Index
D1 = 1 if arch design method used, else 0
D2 = 1 - (S2/S1) -- module independence
D3 = 1 - (S3/S1) -- independence of prior
processing
D4 = 1 - (S5/S4) -- database size
D5 = 1 - (S6/S4) -- DB compartmentalization
D6 = 1 - (S7/S1) -- Module entrance/exit
AF Design Structure Quality Index
DSQI = ∑ wi·Di, where the wi are weights totaling 1
that express the relative importance of each Di.
The closer DSQI is to one, the higher the quality.
This is best used on a comparison basis, i.e.,
against previous successful projects.
If the value is too low, more design work
should be done.
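A minimal sketch of the DSQI computation, with hypothetical structure counts S1..S7 and equal weights:

# DSQI sketch with hypothetical structure counts S1..S7 and equal weights
s1, s2, s3, s4, s5, s6, s7 = 50, 10, 5, 200, 160, 12, 45

d = [1.0,              # D1: an architectural design method was used
     1 - s2 / s1,      # D2: module independence
     1 - s3 / s1,      # D3: independence of prior processing
     1 - s5 / s4,      # D4: database size
     1 - s6 / s4,      # D5: database compartmentalization
     1 - s7 / s1]      # D6: module entrance/exit
w = [1 / 6] * 6        # weights must total 1; equal weighting assumed here

dsqi = sum(wi * di for wi, di in zip(w, d))
print(round(dsqi, 3))  # ~0.657; compare against previous successful projects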
Component-Level Design Metrics
• Cohesion Metrics
• Coupling Metrics
• data and control flow coupling
• global coupling
• environmental coupling
• Complexity Metrics
• Cyclomatic complexity
• Experience shows that a module with
cyclomatic complexity greater than 10 is
very difficult to test (see the sketch below)
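For reference, McCabe's cyclomatic complexity can be computed as V(G) = E - N + 2 from a module's flow graph; a minimal sketch with a hypothetical graph:

# Cyclomatic complexity sketch: V(G) = E - N + 2 for a hypothetical flow graph
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 2), (5, 6)]
nodes = {v for e in edges for v in e}

v_g = len(edges) - len(nodes) + 2
print(v_g)   # 3, comfortably below the practical testing threshold of 10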
Cohesion Metrics
Data slice - data values within the module that
affect the module location at which a backward trace
began.
Data tokens - Variables defined for a module
Glue Tokens - The set of tokens lying on multiple
data slices
Superglue tokens - The set of tokens on all slices
Stickiness - of a glue token is proportional to number
of data slices that it binds
Strong Functional Cohesion
SFC(i) = SG(i)/tokens(i)
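A minimal sketch of strong functional cohesion computed from data slices, with hypothetical token sets:

# Strong functional cohesion sketch: SFC(i) = superglue tokens / all data tokens
slices = [{"x", "y", "sum"},        # hypothetical data slices as token sets
          {"x", "y", "n", "avg"},
          {"x", "y", "max"}]

tokens = set().union(*slices)                          # all data tokens
glue = {t for t in tokens if sum(t in s for s in slices) > 1}
superglue = set.intersection(*slices)                  # tokens on every slice

sfc = len(superglue) / len(tokens)
print(sorted(glue), sorted(superglue), round(sfc, 2))  # ['x','y'] ['x','y'] 0.33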
Coupling Metrics
• Data and control flow coupling
• di = number of input data parameters
• ci = number of input control parameters
• d0 = number of output data parameters
• c0 = number of output control parameters
• Global coupling
• gd = number of global variables used as data
• gc = number of global variables used as control
• Environmental coupling
• w = number of modules called (fan-out)
• r = number of modules calling the module under consideration (fan-
in)
• Module Coupling: mc = 1/ (di + 2*ci + d0 + 2*c0 + gd + 2*gc + w + r)
• mc = 1/(1 + 0 + 1 + 0 + 0 + 0 + 1 + 0) = .33 (Low Coupling)
• mc = 1/(5 + 2*5 + 5 + 2*5 + 10 + 0 + 3 + 4) = .02 (High Coupling)
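The two examples above can be reproduced directly; a minimal sketch:

# Module-coupling sketch: mc = 1 / (di + 2*ci + do + 2*co + gd + 2*gc + w + r)
def module_coupling(di, ci, do, co, gd, gc, w, r):
    return 1 / (di + 2*ci + do + 2*co + gd + 2*gc + w + r)

print(round(module_coupling(1, 0, 1, 0, 0, 0, 1, 0), 2))   # 0.33, low coupling
print(round(module_coupling(5, 5, 5, 5, 10, 0, 3, 4), 2))  # 0.02, high coupling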
Interface Design Metrics
• Layout Entities - graphic icons, text, menus, windows, etc.
• Layout Appropriateness
• absolute and relative position of each layout entity
• frequency used
• cost of transition from one entity to another
• LA = 100 × [(cost of LA-optimal layout) / (cost of proposed layout)]
• Final GUI design should be based on user
feedback on GUI prototypes
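Applying the LA formula above with hypothetical costs: if the LA-optimal layout has a transition cost of 80 units and a proposed layout costs 100 units, LA = 100 × (80 / 100) = 80.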
Metrics for Source Code
• Software Science Primitives
• n1 = the number of distinct operators
• n2 = the number of distinct operands
• N1 = the total number of operator
occurrences
• N2 = the total number of operand
occurrences
Metrics for Source Code (Cont.)
Length: N = n1 log2 n1 + n2 log2 n2
Volume: V = N log2(n1 + n2)
SUBROUTINE SORT (X,N)
DIMENSION X(N)
IF (N.LT.2) RETURN
DO 20 I=2,N
DO 10 J=1,I
IF (X(I).GE.X(J)) GO TO 10
SAVE = X(I)
X(I) = X(J)
X(J) = SAVE
10 CONTINUE
20 CONTINUE
RETURN
END
Operator | Count
1 END OF STATEMENT | 7
2 ARRAY SUBSCRIPT | 6
3 = | 5
4 IF( ) | 2
5 DO | 2
6 , | 2
7 END OF PROGRAM | 1
8 .LT. | 1
9 .GE. | 1
10 GO TO 10 | 1
n1 = 10, N1 = 28
n2 = 7, N2 = 22
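A minimal sketch of the length and volume computations from the tallied counts; volume here uses the observed length N = N1 + N2, Halstead's usual convention, while the length formula above gives an estimate:

# Halstead length and volume sketch for SUBROUTINE SORT (counts from the table)
from math import log2

n1, n2 = 10, 7      # distinct operators / distinct operands
N1, N2 = 28, 22     # total operator / operand occurrences

est_length = n1 * log2(n1) + n2 * log2(n2)   # estimated N ~= 52.9
volume = (N1 + N2) * log2(n1 + n2)           # V = N log2(n1 + n2) ~= 204.4
print(round(est_length, 1), round(volume, 1))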
Metrics for Testing
• Analysis, design, and code metrics
guide the design and execution of test
cases.
• Metrics for Testing Completeness
• Breadth of Testing - total number of
requirements that have been tested
• Depth of Testing - percentage of
independent basis paths covered by
testing versus total number of basis paths
in the program.
• Fault profiles are used to prioritize and
categorize uncovered errors.
Metrics for Maintenance
• Software Maturity Index (SMI)
• MT = number of modules in the current release
• Fc = number of modules in the current release
that have been changed
• Fa = number of modules in the current release
that have been added
• Fd = number of modules from the preceding
release that were deleted in the current release
SMI = [MT - (Fc + Fa + Fd)] / MT
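A minimal sketch with hypothetical release data:

# Software Maturity Index sketch (hypothetical release data)
mt = 120   # modules in the current release
fc = 14    # modules changed
fa = 6     # modules added
fd = 3     # modules deleted from the preceding release

smi = (mt - (fc + fa + fd)) / mt
print(round(smi, 3))   # 0.808; SMI approaches 1.0 as the product stabilizes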
Measurement Scale (1)
Nominal scale – group subjects into categories
Example: designate the weather condition as “sunny,”
“cloudy,” “rainy,” or “snowy”
The two key requirements for the categories: jointly
exhaustive & mutually exclusive
Minimal conditions necessary for the application of statistical
analysis
Ordinal scale – subjects compared in order
Examples: “bad,” “good,” and “excellent,” or “star” ratings
Arithmetic operations such as addition, subtraction,
multiplication cannot be applied
Measurement Scale (2)
Interval scale – indicates the exact differences
between measurement points
Examples: traditional temperature scales (Celsius or Fahrenheit)
Arithmetic operations of addition and subtraction can be
applied
Ratio scale – an interval scale for which an
absolute or nonarbitrary zero point can be located
Examples: mass, temperature in kelvins, length, and
time interval
All arithmetic operations are applicable
Use Case Points (UCPs)
A size and effort metric
Advantage: available early in product development
(as soon as detailed use cases are written)
Drawback: involves many subjective estimation steps
Use Case Points are a function of:
size of functional features ("unadjusted" UCPs)
nonfunctional factors (technical complexity factors, TCF)
environmental complexity factors (ECF)
Actor Classification and Weights
Actor type | How to recognize the actor type | Weight
Simple | The actor is another system which interacts with our system through a defined application programming interface (API). | 1
Average | The actor is a person interacting through a text-based user interface, or another system interacting through a protocol, such as a network communication protocol. | 2
Complex | The actor is a person interacting via a graphical user interface. | 3
Example: Safe Home Access
Actor classification for the case study of home access control
Actor name | Relevant characteristics | Complexity | Weight
Landlord | Interacts with the system via a graphical user interface (when managing users on the central computer). | Complex | 3
Tenant | Interacts through a text-based user interface (assuming that identification is through a keypad; for biometrics-based identification methods Tenant would be a complex actor). | Average | 2
LockDevice | Another system which interacts with our system through a defined API. | Simple | 1
LightSwitch | Same as LockDevice. | Simple | 1
AlarmBell | Same as LockDevice. | Simple | 1
Database | Another system interacting through a protocol. | Average | 2
Timer | Same as LockDevice. | Simple | 1
Police | Our system just sends a text notification to Police. | Simple | 1
Unadjusted Actor Weight (UAW)
UAW(home access) = 5 × Simple + 2 × Average + 1 × Complex = 5×1 + 2×2 + 1×3 = 12
Use Case Weights
Use case weights based on the number of transactions
Use case category | How to recognize the use-case category | Weight
Simple | Simple user interface. Up to one participating actor (plus initiating actor). Number of steps for the success scenario: ≤ 3. If presently available, its domain model includes ≤ 3 concepts. | 5
Average | Moderate interface design. Two or more participating actors. Number of steps for the success scenario: 4 to 7. If presently available, its domain model includes between 5 and 10 concepts. | 10
Complex | Complex user interface or processing. Three or more participating actors. Number of steps for the success scenario: ≥ 7. If available, its domain model includes ≥ 10 concepts. | 15
Example: Safe Home Access
Use case classification for the case study of home access control
Use case | Description | Category | Weight
Unlock (UC-1) | Simple user interface. 5 steps for the main success scenario. 3 participating actors (LockDevice, LightSwitch, and Timer). | Average | 10
Lock (UC-2) | Simple user interface. 2+3=5 steps for all scenarios. 3 participating actors (LockDevice, LightSwitch, and Timer). | Average | 10
ManageUsers (UC-3) | Complex user interface. More than 7 steps for the main success scenario (when counting UC-6 or UC-7). Two participating actors (Tenant, Database). | Complex | 15
ViewAccessHistory (UC-4) | Complex user interface. 8 steps for the main success scenario. 2 participating actors (Database, Landlord). | Complex | 15
AuthenticateUser (UC-5) | Simple user interface. 3+1=4 steps for all scenarios. 2 participating actors (AlarmBell, Police). | Average | 10
AddUser (UC-6) | Complex user interface. 6 steps for the main success scenario (not counting UC-3). Two participating actors (Tenant, Database). | Average | 10
RemoveUser (UC-7) | Complex user interface. 4 steps for the main success scenario (not counting UC-3). One participating actor (Database). | Average | 10
Login (UC-8) | Simple user interface. 2 steps for the main success scenario. No participating actors. | Simple | 5
UUCW(home access) = 1 × Simple + 5 × Average + 2 × Complex = 1×5 + 5×10 + 2×15 = 85
Technical Complexity Factors (TCFs)
Technical factor | Description | Weight
T1 | Distributed system (running on multiple machines) | 2
T2 | Performance objectives (are response time and throughput performance critical?) | 1(∗)
T3 | End-user efficiency | 1
T4 | Complex internal processing | 1
T5 | Reusable design or code | 1
T6 | Easy to install (are automated conversion and installation included in the system?) | 0.5
T7 | Easy to use (including operations such as backup, startup, and recovery) | 0.5
T8 | Portable | 2
T9 | Easy to change (to add new features or modify existing ones) | 1
T10 | Concurrent use (by multiple users) | 1
T11 | Special security features | 1
T12 | Provides direct access for third parties (the system will be used from multiple sites in different organizations) | 1
T13 | Special user training facilities are required | 1
Technical Complexity Factors (TCFs)
TCF = Constant-1 + Constant-2 × Technical Factor Total = C1 + C2 × Σ(i=1..13) Wi × Fi
Constant-1 (C1) = 0.6
Constant-2 (C2) = 0.01
Wi = weight of the i-th technical factor
Fi = perceived complexity of the i-th technical factor
Scaling Factors for TCF & ECF
[Figure: (a) TCF grows linearly with the Technical Factor Total, from (0, 0.6) to (70, 1.3); (b) ECF falls linearly with the Environmental Factor Total, from (0, 1.4) to (32.5, 0.425)]
Example
Technical factor | Description | Weight | Perceived Complexity | Calculated Factor (Weight × Perceived Complexity)
T1 | Distributed, Web-based system, because of ViewAccessHistory (UC-4) | 2 | 3 | 2×3 = 6
T2 | Users expect good performance but nothing exceptional | 1 | 3 | 1×3 = 3
T3 | End-user expects efficiency but there are no exceptional demands | 1 | 3 | 1×3 = 3
T4 | Internal processing is relatively simple | 1 | 1 | 1×1 = 1
T5 | No requirement for reusability | 1 | 0 | 1×0 = 0
T6 | Ease of install is moderately important (will probably be installed by a technician) | 0.5 | 3 | 0.5×3 = 1.5
T7 | Ease of use is very important | 0.5 | 5 | 0.5×5 = 2.5
T8 | No portability concerns beyond a desire to keep database vendor options open | 2 | 2 | 2×2 = 4
T9 | Ease of change is minimally required | 1 | 1 | 1×1 = 1
T10 | Concurrent use is required (Section 5.3) | 1 | 4 | 1×4 = 4
T11 | Security is a significant concern | 1 | 5 | 1×5 = 5
T12 | No direct access for third parties | 1 | 0 | 1×0 = 0
T13 | No unique training needs | 1 | 0 | 1×0 = 0
Technical Factor Total: 31
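Applying the formula above: TCF = 0.6 + 0.01 × 31 = 0.91.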
Environmental Complexity Factors (ECFs)
ECF = Constant-1 + Constant-2 × Environmental Factor Total = C1 + C2 × Σ(i=1..8) Wi × Fi
Constant-1 (C1) = 1.4
Constant-2 (C2) = −0.03
Wi = weight of the i-th environmental factor
Fi = perceived impact of the i-th environmental factor
Environmental factor | Description | Weight
E1 | Familiar with the development process (e.g., UML-based) | 1.5
E2 | Application problem experience | 0.5
E3 | Paradigm experience (e.g., object-oriented approach) | 1
E4 | Lead analyst capability | 0.5
E5 | Motivation | 1
E6 | Stable requirements | 2
E7 | Part-time staff | −1
E8 | Difficult programming language | −1
Example
Environmental factor | Description | Weight | Perceived Impact | Calculated Factor (Weight × Perceived Impact)
E1 | Beginner familiarity with the UML-based development | 1.5 | 1 | 1.5×1 = 1.5
E2 | Some familiarity with application problem | 0.5 | 2 | 0.5×2 = 1
E3 | Some knowledge of object-oriented approach | 1 | 2 | 1×2 = 2
E4 |
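Putting the pieces together, a minimal sketch of the standard UCP calculation, UCP = UUCP × TCF × ECF with UUCP = UAW + UUCW, using the case-study values; since the environmental table above is truncated in the source, the ECF total used here is hypothetical:

# Use Case Points sketch for the home-access case study
uaw, uucw = 12, 85            # actor and use-case weights from the case study
uucp = uaw + uucw             # unadjusted use case points = 97

tcf = 0.6 + 0.01 * 31         # technical factor total 31 -> TCF = 0.91
ecf = 1.4 - 0.03 * 16.5       # hypothetical environmental factor total
ucp = uucp * tcf * ecf
print(uucp, round(ecf, 3), round(ucp, 1))   # 97, 0.905, ~79.9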