Every time a bug is fixed, developers leave a trace – in the version database when they commit the fix, or in the bug database when they close the bug. In this chapter, we learn how to mine these repositories for past changes and bugs, and how to map them to individual modules and functions, highlighting those project components that have seen most changes and fixes over time.
from bookutils import YouTubeVideo
YouTubeVideo("Aifq0JOc1Jc")
Prerequisites
import bookutils.setup
import Tracking
To use the code provided in this chapter, write
>>> from debuggingbook.ChangeCounter import <identifier>
and then make use of the following features.
This chapter provides two classes ChangeCounter
and FineChangeCounter
that allow mining and visualizing the distribution of changes in a given git
repository.
ChangeCounter
is initialized as
change_counter = ChangeCounter(repository)
where repository
is either a directory containing a git
clone (i.e., it contains a .git
directory) or the URL of a git
repository. Additional arguments are passed to the underlying Repository
class from the PyDriller Python package. A filter
keyword argument, if given, is a predicate that takes a modification (from PyDriller) and returns True if it should be included.
In a change counter, all elements in the repository are represented as nodes – tuples $(f_1, f_2, ..., f_n)$ that denote a hierarchy: Each $f_i$ is a directory holding $f_{i+1}$, with $f_n$ being the actual file.
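To illustrate this node representation, here is a small standalone sketch (the node tuples and counts are made-up illustration data, not taken from the chapter's repository): because nodes are tuples, change counts can be aggregated over any subtree simply by matching tuple prefixes.

```python
# Sketch: aggregate change counts upward through the node hierarchy.
# Nodes and counts below are invented illustration data.
changes = {
    ('notebooks', 'Slicer.ipynb'): 49,
    ('notebooks', 'Repairer.ipynb'): 31,
    ('README.md',): 17,
}

def changes_in_subtree(changes, prefix):
    """Sum the changes of all nodes whose tuple starts with `prefix`."""
    return sum(count for node, count in changes.items()
               if node[:len(prefix)] == prefix)

print(changes_in_subtree(changes, ('notebooks',)))  # 80
print(changes_in_subtree(changes, ()))              # 97 (entire repository)
```

The empty tuple acts as the root of the hierarchy, so summing over it yields the total number of changes.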
A change_counter
provides a number of attributes. changes
is a mapping of nodes to the number of changes in that node:
>>> change_counter.changes.get(('README.md',), None)
17
The messages
attribute holds all commit messages related to that node:
>>> change_counter.messages.get(('README.md',), None)
['Doc update',
'Doc update',
'Doc update',
'Doc update',
'Doc update',
'Doc update',
'Fix: corrected rule for rendered notebooks (#24)\nNew: strip out any <iframe> tags\nNew: when rendering .md files, replace videos by proper image',
'Doc update',
'Doc update',
'New: show badges at top of GitHub project page',
'More badges',
'Fix: bad links in CI badges',
'New: prefer Unicode arrows over LaTeX ones',
'Updated README.md',
'Update',
'Doc update',
'Doc update']
The sizes
attribute holds the (last) size of the respective element:
>>> change_counter.sizes.get(('README.md',), None)
4614
FineChangeCounter
acts like ChangeCounter
, but also retrieves statistics for elements within the respective files; it has been tested for C, Python, and Jupyter Notebooks and should provide sufficient results for programming languages with similar syntax.
The map()
method of ChangeCounter
and FineChangeCounter
produces an interactive tree map that allows exploring the elements of a repository. The redder (darker) a rectangle, the more changes it has seen; the larger a rectangle, the larger its size in bytes.
>>> fine_change_counter.map()
We can easily identify the most frequently changed files:
sorted(change_counter.changes.items(), key=lambda kv: kv[1], reverse=True)[:4]
[(('notebooks', 'Slicer.ipynb'), 49), (('notebooks', 'ChangeCounter.ipynb'), 45), (('requirements.txt',), 35), (('notebooks', 'Repairer.ipynb'), 31)]
# ignore
all_notebooks = [node for node in change_counter.changes.keys()
if len(node) == 2 and node[1].endswith('.ipynb')]
all_notebooks.sort(key=lambda node: change_counter.changes[node],
reverse=True)
from bookutils import quiz
quiz("Which two notebooks have seen the most changes over time?",
[
f"`{all_notebooks[i][1].split('.')[0]}`"
for i in [0, 3, 1, 2]
if i < len(all_notebooks)
]
, '[1234 % 3, 3702 / 1234]')
Indeed, these are the two most frequently changed notebooks:
[notebook[1].split('.')[0] for notebook in all_notebooks[:2]]
['Slicer', 'ChangeCounter']
Knowing which files have been changed most is useful in debugging, because any change increases the chance to introduce a new bug. Even more important, however, is the question of how frequently a file was fixed in the past, as this is an important indicator for its bug-proneness.
(One may think that fixing several bugs reduces the number of bugs, but unfortunately, a file which has seen several fixes in the past is likely to see fixes in the future, too. This is because the bug-proneness of a software component very much depends on the requirements it has to fulfill, and if these requirements are unclear, complex, or frequently change, this translates into many fixes.)
How can we tell changes from fixes?
The way these two are linked very much depends on the project – and the discipline of developers as it comes to change messages. Branches and merges bring additional challenges.
For the debuggingbook
project, identifying fixes is easy. The discipline is that if a change fixes a bug, it is prefixed with Fix:
. We can use this to introduce a FixCounter
class specific to our debuggingbook
project.
class FixCounter(ChangeCounter):
"""
Count the fixes for files in the repository.
Fixes are all commits whose message starts with the word 'Fix: '
"""
def include(self, m: ModifiedFile) -> bool:
"""Include all modifications whose commit messages start with 'Fix:'"""
return super().include(m) and m and m.msg.startswith("Fix:")
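To see what such a predicate selects, here is a minimal standalone sketch (the commit messages below are invented examples, not the repository's actual history):

```python
# Sketch: separate fixes from other changes via the "Fix:" prefix convention.
# The messages are made-up examples.
messages = [
    'Doc update',
    'Fix: bad links in CI badges',
    'New: show badges at top of GitHub project page',
    'Fix: corrected rule for rendered notebooks (#24)',
]

def is_fix(msg: str) -> bool:
    """Return True if `msg` follows the fix-message convention."""
    return msg.startswith('Fix:')

fixes = [msg for msg in messages if is_fix(msg)]
print(len(fixes))  # 2
```

Note that this only works as well as the developers' discipline in writing commit messages; projects without such a convention need heuristics or links to a bug database.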
As a twist to our default ChangeCounter
class, we include the "fix" messages in the tree map rectangles.
class FixCounter(FixCounter):
def map_node_text(self, node: Node) -> str:
return "<br>".join(self.messages.get(node, []))
def map_hoverinfo(self) -> str:
return 'label'
This is the tree map showing fixes:
fix_counter = debuggingbook_change_counter(FixCounter)
fix_counter.map()
If you want to take a look at the entire fix history, pass an early start_date
argument to debuggingbook_change_counter()
. This takes a few extra minutes, so we make it optional:
# fix_counter = debuggingbook_change_counter(
# FixCounter,
# start_date=datetime(1999, 1, 1)
# )
# fix_counter.map()
In programming projects, individual files typically consist of smaller units such as functions, classes, and methods. We want to determine which of these units are frequently changed (and fixed). For this, we need to break down individual files into smaller parts, and then determine which of these parts would be affected by a change.
Our first task is a simple means to split a (programming) file into smaller parts, each with their own locations. First, we need to know what kind of content a file contains. To this end, we use the Python magic package. (The "magic" in the name does not refer to some "magic" functionality, but to the practice of having files start with "magic" bytes that indicate their type.)
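The "magic bytes" idea itself can be illustrated with plain Python: many file formats begin with a fixed byte signature. (The signatures below are well-known ones; this is a sketch of the concept, not of how the magic package works internally.)

```python
# Sketch: identify content by its leading "magic" bytes.
SIGNATURES = {
    b'\x89PNG\r\n\x1a\n': 'PNG image',
    b'%PDF-': 'PDF document',
    b'PK\x03\x04': 'ZIP archive',
}

def guess_type(data: bytes) -> str:
    """Return a type name if `data` starts with a known signature."""
    for signature, name in SIGNATURES.items():
        if data.startswith(signature):
            return name
    return 'unknown'

print(guess_type(b'%PDF-1.7 ...'))  # 'PDF document'
```

The magic package goes far beyond such fixed signatures, also applying heuristics on textual content – which is how it recognizes C or Python source.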
import magic
The magic
package easily guesses that a file contains C code:
magic.from_buffer('''
#include <stdio.h>
int main(int argc, char *argv[]) {
printf("Hello, world!\n")
}
''')
'C source, ASCII text'
It also works well for Python code:
magic.from_buffer('''
def foo():
print("Hello, world!")
''')
'Python script, ASCII text executable'
Jupyter Notebooks, however, are identified as JSON
data or SGML
documents:
magic.from_buffer(open(os.path.join(current_repo(), # type: ignore
'notebooks',
'Assertions.ipynb')).read())
'JSON text data'
We define a set of delimiters for these languages. Each entry uses regular expressions to identify (1) the language (matched against the magic
output), (2) the beginning of a unit, and (3) the end of a unit. For Python, for instance, any line starting with def
or class
denotes the start of some unit; any line starting with something else denotes the end of a unit. For Jupyter, the delimiters do the same, yet encoded into JSON. The definitions for C are likely to work for a wide range of languages that all use {
and }
to delimit units.
import re
# ignore
from typing import Pattern
DELIMITERS: List[Tuple[Pattern, Pattern, Pattern]] = [
(
# Python
re.compile(r'^python.*'),
# Beginning of element
re.compile(r'^(async\s+)?(def|class)\s+(?P<name>\w+)\W.*'),
# End of element
re.compile(r'^[^#\s]')
),
(
# Jupyter Notebooks
re.compile(r'^(json|exported sgml|jupyter).*'),
re.compile(r'^\s+"(async\s+)?(def|class)\s+(?P<name>\w+)\W'),
re.compile(r'^(\s+"[^#\s\\]|\s+\])')
),
(
# C source code (actually, any { }-delimited language)
re.compile(r'^(c |c\+\+|c#|java|perl|php).*'),
re.compile(r'^[^\s].*\s+(?P<name>\w+)\s*[({].*'),
re.compile(r'^[}]')
)
]
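To see how the begin and end patterns classify individual lines, here is a small standalone check (the patterns mirror the Python entry in DELIMITERS above):

```python
# Sketch: how the Python begin/end delimiters classify individual lines.
import re

# Patterns as in the Python entry of DELIMITERS above
rxbegin = re.compile(r'^(async\s+)?(def|class)\s+(?P<name>\w+)\W.*')
rxend = re.compile(r'^[^#\s]')

m = rxbegin.match('def foo(x):')
print(m.group('name'))                       # 'foo'
print(bool(rxbegin.match('    return x')))   # False: indented lines stay in the unit
print(bool(rxend.match('import os')))        # True: top-level code ends a unit
print(bool(rxend.match('# comment')))        # False: comments do not end a unit
```

Note how the begin pattern captures the unit name via the named group `name`, which elem_mapping() uses to label lines.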
The function rxdelim()
returns suitable delimiters for a given content, using DELIMITERS
.
def rxdelim(content: str) -> Tuple[Optional[Pattern], Optional[Pattern]]:
"""
Return suitable begin and end delimiters for the content `content`.
If no matching delimiters are found, return `None, None`.
"""
tp = magic.from_buffer(content).lower()
for rxtp, rxbegin, rxend in DELIMITERS:
if rxtp.match(tp):
return rxbegin, rxend
return None, None
The function elem_mapping()
returns a list of the individual elements as found in the file, indexed by line numbers (starting with 1).
Mapping = List[Optional[str]]
def elem_mapping(content: str, log: bool = False) -> Mapping:
"""Return a list of the elements in `content`, indexed by line number."""
rxbegin, rxend = rxdelim(content)
if rxbegin is None:
return []
if rxend is None:
return []
mapping: List[Optional[str]] = [None]
current_elem = None
lineno = 0
for line in content.split('\n'):
lineno += 1
match = rxbegin.match(line)
if match:
current_elem = match.group('name')
elif rxend.match(line):
current_elem = None
mapping.append(current_elem)
if log:
print(f"{lineno:3} {str(current_elem):15} {line}")
return mapping
Here is an example of how elem_mapping()
works. During execution (with log
set to True
), we already see the elements associated with individual line numbers.
some_c_source = """
#include <stdio.h>
int foo(int x) {
return x;
}
struct bar {
int x, y;
}
int main(int argc, char *argv[]) {
return foo(argc);
}
"""
some_c_mapping = elem_mapping(some_c_source, log=True)
  1 None            
  2 None            #include <stdio.h>
  3 None            
  4 foo             int foo(int x) {
  5 foo             return x;
  6 None            }
  7 None            
  8 bar             struct bar {
  9 bar             int x, y;
 10 None            }
 11 None            
 12 main            int main(int argc, char *argv[]) {
 13 main            return foo(argc);
 14 None            }
 15 None            
 16 None            
In the actual mapping, we can access the individual units for any line number:
some_c_mapping[1], some_c_mapping[8]
(None, 'bar')
Here's how this works for Python:
some_python_source = """
def foo(x):
return x
class bar(blue):
x = 25
def f(x):
return 26
def main(argc):
return foo(argc)
"""
some_python_mapping = elem_mapping(some_python_source, log=True)
  1 None            
  2 foo             def foo(x):
  3 foo             return x
  4 foo             
  5 bar             class bar(blue):
  6 bar             x = 25
  7 bar             def f(x):
  8 bar             return 26
  9 bar             
 10 main            def main(argc):
 11 main            return foo(argc)
 12 main            
 13 main            
# some_jupyter_source = open("Debugger.ipynb").read()
# some_jupyter_mapping = elem_mapping(some_jupyter_source, log=True)
Using a mapping from elem_mapping()
, we can determine which elements are affected by a change. The changed_elems_by_mapping()
function returns the set of affected elements.
def changed_elems_by_mapping(mapping: Mapping, start: int, length: int = 0) -> Set[str]:
"""
Within `mapping`, return the set of elements affected by a change
starting in line `start` and extending over `length` additional lines.
"""
elems = set()
for line in range(start, start + length + 1):
if line < len(mapping) and mapping[line]:
elem = mapping[line]
assert elem is not None
elems.add(elem)
return elems
Here's an example of changed_elems_by_mapping()
, applied to the Python content, above:
changed_elems_by_mapping(some_python_mapping, start=2, length=4)
{'bar', 'foo'}
The function elem_size()
returns the size of an element (say, a function).
def elem_size(elem: str, source: str) -> int:
"""Within `source`, return the size of `elem`"""
source_lines = [''] + source.split('\n')
size = 0
mapping = elem_mapping(source)
for line_no in range(len(mapping)):
if mapping[line_no] == elem or mapping[line_no] is elem:
size += len(source_lines[line_no] + '\n')
return size
elem_size('foo', some_python_source)
26
assert sum(elem_size(name, some_python_source)
for name in ['foo', 'bar', 'main']) == len(some_python_source)
Given an old version and a new version of a (text) file, we can use the diff_match_patch
module to determine differences, and from these the affected lines:
from ChangeDebugger import diff # minor dependency
from diff_match_patch import diff_match_patch
def changed_elems(old_source: str, new_source: str) -> Set[str]:
"""Determine the elements affected by the change from `old_source` to `new_source`"""
patches = diff(old_source, new_source)
old_mapping = elem_mapping(old_source)
new_mapping = elem_mapping(new_source)
elems = set()
for patch in patches:
old_start_line = patch.start1 + 1
new_start_line = patch.start2 + 1
for (op, data) in patch.diffs:
length = data.count('\n')
if op == diff_match_patch.DIFF_INSERT:
elems |= changed_elems_by_mapping(old_mapping, old_start_line)
elems |= changed_elems_by_mapping(new_mapping, new_start_line, length)
elif op == diff_match_patch.DIFF_DELETE:
elems |= changed_elems_by_mapping(old_mapping, old_start_line, length)
elems |= changed_elems_by_mapping(new_mapping, new_start_line)
old_start_line += length
new_start_line += length
return elems
Here is how changed_elems()
works. We define a "new" version of some_python_source
:
some_new_python_source = """
def foo(y):
return y
class qux(blue):
x = 25
def f(x):
return 26
def main(argc):
return foo(argc)
"""
changed_elems(some_python_source, some_new_python_source)
{'bar', 'foo', 'qux'}
Note that the list of changed elements includes added as well as deleted elements.
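If diff_match_patch is not at hand, the same idea can be sketched with the standard library's difflib: compute which line ranges differ, then look up the affected elements in the mapping. This is a simplified stdlib-only variant, not the chapter's implementation:

```python
# Sketch: compute changed line numbers with difflib (stdlib only).
import difflib

def changed_lines(old: str, new: str):
    """Return sets of 1-based line numbers changed in `old` and in `new`."""
    old_lines = old.split('\n')
    new_lines = new.split('\n')
    matcher = difflib.SequenceMatcher(None, old_lines, new_lines)
    old_changed, new_changed = set(), set()
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != 'equal':
            old_changed |= set(range(i1 + 1, i2 + 1))
            new_changed |= set(range(j1 + 1, j2 + 1))
    return old_changed, new_changed

print(changed_lines("a\nb\nc", "a\nX\nc"))  # ({2}, {2})
```

The resulting line numbers could then be fed into changed_elems_by_mapping() for the old and new mapping, respectively, much as changed_elems() does with diff_match_patch.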
We introduce a class FineChangeCounter
that, like ChangeCounter
, counts changes for individual files; however, FineChangeCounter
adds additional nodes for all elements affected by the change. For a file consisting of multiple elements, this has the same effect as if the file were a directory, and the elements were all contained as individual files in this directory.
class FineChangeCounter(ChangeCounter):
"""Count the changes for files in the repository and their elements"""
def update_elems(self, node: Node, m: ModifiedFile) -> None:
old_source = m.content_before if m.content_before else bytes()
new_source = m.content if m.content else bytes()
# Content comes as bytes instead of strings
# Let's convert this in a conservative way
if not isinstance(old_source, str):
old_source = str(old_source, 'latin1')
if not isinstance(new_source, str):
new_source = str(new_source, 'latin1')
changed = changed_elems(old_source, new_source)
for elem in changed:
elem_node = node + (elem,)
self.update_size(elem_node, elem_size(elem, new_source))
self.update_changes(elem_node, m.msg)
Retrieving fine-grained changes takes a bit more time, since all files have to be parsed...
with Timer() as t:
fine_change_counter = debuggingbook_change_counter(FineChangeCounter)
t.elapsed_time()
172.3629799170012
... but the result is very much worth it. We can now zoom into individual files and compare the change counts for the individual functions.
fine_change_counter.map()
As before, we can access the most frequently changed elements. This is the most frequently changed item in the book:
elem_nodes = [node for node in fine_change_counter.changes.keys()
if len(node) == 3 and node[1].endswith('.ipynb')]
elem_nodes.sort(key=lambda node: fine_change_counter.changes[node],
reverse=True)
[(node, fine_change_counter.changes[node]) for node in elem_nodes[:1]]
[(('notebooks', 'Slicer.ipynb', 'Slicer'), 3)]
from bookutils import quiz
quiz("Which is the _second_ most changed element?",
[
f"`{elem_nodes[i][2]}` in `{elem_nodes[i][1].split('.ipynb')[0]}`"
for i in [3, 1, 2, 0]
if i < len(elem_nodes)
], '1975308642 // 987654321')
Indeed, here come the top five most frequently changed elements:
[(node, fine_change_counter.changes[node]) for node in elem_nodes[:5]]
[(('notebooks', 'Slicer.ipynb', 'Slicer'), 3), (('notebooks', 'StackInspector.ipynb', 'StackInspector'), 2), (('notebooks', 'Assertions.ipynb', 'ManagedMemory'), 2), (('notebooks', 'ChangeCounter.ipynb', 'ChangeCounter'), 2), (('notebooks', 'Repairer.ipynb', 'middle_tree'), 2)]
Now it is time to apply these tools to your own projects. Which are the most frequently changed (and fixed) elements? Why is that so? What can you do to improve things? All these questions are consequences of debugging, and answering them helps you have fewer bugs in the future!
This chapter provides two classes ChangeCounter
and FineChangeCounter
that allow mining and visualizing the distribution of changes in a given git
repository.
ChangeCounter
is initialized as
change_counter = ChangeCounter(repository)
where repository
is either a directory containing a git
clone (i.e., it contains a .git
directory) or the URL of a git
repository. Additional arguments are passed to the underlying Repository
class from the PyDriller Python package. A filter
keyword argument, if given, is a predicate that takes a modification (from PyDriller) and returns True if it should be included.
In a change counter, all elements in the repository are represented as nodes – tuples $(f_1, f_2, ..., f_n)$ that denote a hierarchy: Each $f_i$ is a directory holding $f_{i+1}$, with $f_n$ being the actual file.
A change_counter
provides a number of attributes. changes
is a mapping of nodes to the number of changes in that node:
change_counter.changes.get(('README.md',), None)
17
The messages
attribute holds all commit messages related to that node:
change_counter.messages.get(('README.md',), None)
['Doc update', 'Doc update', 'Doc update', 'Doc update', 'Doc update', 'Doc update', 'Fix: corrected rule for rendered notebooks (#24)\nNew: strip out any <iframe> tags\nNew: when rendering .md files, replace videos by proper image', 'Doc update', 'Doc update', 'New: show badges at top of GitHub project page', 'More badges', 'Fix: bad links in CI badges', 'New: prefer Unicode arrows over LaTeX ones', 'Updated README.md', 'Update', 'Doc update', 'Doc update']
The sizes
attribute holds the (last) size of the respective element:
change_counter.sizes.get(('README.md',), None)
4614
FineChangeCounter
acts like ChangeCounter
, but also retrieves statistics for elements within the respective files; it has been tested for C, Python, and Jupyter Notebooks and should provide sufficient results for programming languages with similar syntax.
The map()
method of ChangeCounter
and FineChangeCounter
produces an interactive tree map that allows exploring the elements of a repository. The redder (darker) a rectangle, the more changes it has seen; the larger a rectangle, the larger its size in bytes.
fine_change_counter.map()
The included classes offer several methods that can be overridden in subclasses to customize what to mine and how to visualize it. See the chapter for details.
Here are all the classes defined in this chapter:
# ignore
from ClassDiagram import display_class_hierarchy
# ignore
display_class_hierarchy([FineChangeCounter, FixCounter],
public_methods=[
ChangeCounter.__init__,
ChangeCounter.map
],
project='debuggingbook')
Before you venture out building your own development analytics, you may want to check out what's already there and available. This article at Livable Software gives a great overview of platforms and tools for mining development data. One of the most ambitious ones is GrimoireLab, which sets out to mine data from pretty much any open source repository.
Construct a class FineFixCounter
that combines fine-grained counting from FineChangeCounter
with the fix-counting abilities from FixCounter
. Visualize the tree map.
Solution. We can use multiple inheritance to combine both. We inherit from FixCounter
first such that we can see the fix messages.
class FineFixCounter(FixCounter, FineChangeCounter):
pass
Try things out for yourself:
# fine_fix_counter = debuggingbook_change_counter(FineFixCounter, start_date=datetime(1999, 1, 1))
# fine_fix_counter.map()