This page is available as an executable or viewable Jupyter Notebook.



Smoothing

Smoothing can help to discover trends that otherwise might be hard to see in raw data.
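Conceptually, a smoother replaces each observation with a value computed from its neighbors. As a warm-up, here is a minimal plain-Kotlin sketch (not the Lets-Plot implementation) that smooths noisy data with a simple moving average:

```kotlin
// A simple moving average: each output value is the mean of the points
// inside a small window centered on the input point.
fun movingAverage(ys: List<Double>, window: Int): List<Double> =
    ys.indices.map { i ->
        val from = maxOf(0, i - window / 2)
        val to = minOf(ys.size, i + window / 2 + 1)
        ys.subList(from, to).average()
    }

fun main() {
    // A linear trend with alternating +/-1 noise; the smoothed values
    // stay much closer to the underlying trend.
    val noisy = (0 until 10).map { it.toDouble() + if (it % 2 == 0) 1.0 else -1.0 }
    println(movingAverage(noisy, 3))
}
```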

In [1]:
%useLatestDescriptors
%use lets-plot
%use krangl
In [2]:
var mpg_df = DataFrame.readCSV("https://raw.githubusercontent.com/JetBrains/lets-plot-kotlin/master/docs/examples/data/mpg.csv")
mpg_df.head()
Out[2]:
    manufacturer  model  displ  year  cyl  trans       drv  cty  hwy  fl  class
1   audi          a4       1.8  1999    4  auto(l5)    f     18   29  p   compact
2   audi          a4       1.8  1999    4  manual(m5)  f     21   29  p   compact
3   audi          a4       2.0  2008    4  manual(m6)  f     20   31  p   compact
4   audi          a4       2.0  2008    4  auto(av)    f     21   30  p   compact
5   audi          a4       2.8  1999    6  auto(l5)    f     16   26  p   compact

The default smoothing method is 'linear model' (or 'lm').
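Under the hood, the 'lm' method fits an ordinary least-squares line. A minimal plain-Kotlin sketch of that fit (a simplification, not the library's actual code):

```kotlin
// Ordinary least squares for y = intercept + slope * x.
fun fitLine(xs: List<Double>, ys: List<Double>): Pair<Double, Double> {
    val mx = xs.average()
    val my = ys.average()
    val slope = xs.zip(ys).sumOf { (x, y) -> (x - mx) * (y - my) } /
        xs.sumOf { x -> (x - mx) * (x - mx) }
    return Pair(my - slope * mx, slope)  // (intercept, slope)
}

fun main() {
    // Points lying exactly on y = 2x + 1 recover intercept 1 and slope 2.
    val xs = listOf(0.0, 1.0, 2.0, 3.0)
    val ys = xs.map { 2.0 * it + 1.0 }
    val (intercept, slope) = fitLine(xs, ys)
    println("intercept=$intercept slope=$slope")  // prints: intercept=1.0 slope=2.0
}
```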

In [3]:
// val dat = (mpg_df.names.map { Pair(it, mpg_df.get(it).values())}).toMap()
val dat = mpg_df.toMap()
In [4]:
val mpg_plot = letsPlot(dat) {x="displ"; y="hwy"} 
mpg_plot + geomPoint() + geomSmooth()
Out[4]:

The LOESS model seems to fit the MPG data better than the linear model.

In [5]:
mpg_plot + geomPoint() + statSmooth(method="loess", size=1.0)
Out[5]:

Applying smoothing to groups

Let's map the vehicle drivetrain type (variable 'drv') to the color of points.

This makes it easy to see that points with the same drivetrain type form distinct groups, or clusters.

In [6]:
mpg_plot + geomPoint {color="drv"} +
           statSmooth(method="loess", size=1.0) {color="drv"}
Out[6]:

Applying a linear model with a 2nd degree polynomial

As the LOESS prediction looks a bit odd, let's try a 2nd degree polynomial regression.
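A degree-2 fit minimizes squared error over the coefficients of y = c0 + c1·x + c2·x². A minimal plain-Kotlin sketch of that computation (solving the normal equations with unpivoted Gaussian elimination, which is fine for this tiny well-conditioned system; not the library's actual solver):

```kotlin
import kotlin.math.pow

// Least-squares coefficients of y = c0 + c1*x + c2*x^2, via the
// 3x3 normal equations and Gaussian elimination with back-substitution.
fun fitQuadratic(xs: List<Double>, ys: List<Double>): DoubleArray {
    val a = Array(3) { DoubleArray(4) }  // augmented normal matrix
    for (i in 0..2) {
        for (j in 0..2) a[i][j] = xs.sumOf { x -> x.pow(i + j) }
        a[i][3] = xs.zip(ys).sumOf { (x, y) -> x.pow(i) * y }
    }
    // Forward elimination (no pivoting: a sketch, not production code).
    for (p in 0..2) {
        for (r in p + 1..2) {
            val f = a[r][p] / a[p][p]
            for (c in p..3) a[r][c] -= f * a[p][c]
        }
    }
    // Back-substitution.
    val coef = DoubleArray(3)
    for (i in 2 downTo 0) {
        var s = a[i][3]
        for (j in i + 1..2) s -= a[i][j] * coef[j]
        coef[i] = s / a[i][i]
    }
    return coef
}

fun main() {
    // Points lying exactly on y = 1 + 2x + 3x^2 recover the coefficients.
    val xs = listOf(-2.0, -1.0, 0.0, 1.0, 2.0)
    val ys = xs.map { 1.0 + 2.0 * it + 3.0 * it * it }
    println(fitQuadratic(xs, ys).toList())  // prints: [1.0, 2.0, 3.0]
}
```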

In [7]:
mpg_plot + geomPoint {color="drv"} +
           statSmooth(method="lm", deg=2, size=1.0) {color="drv"}
Out[7]:

Using the asDiscrete() function with a numeric data series

In the previous examples we were using a discrete (or categorical) variable 'drv' to split the data into groups.

Now let's try to use a numeric variable 'cyl' for the same purpose.

In [8]:
mpg_plot + geomPoint {color="cyl"} +
           geomSmooth(method="lm", deg=2, size=1.0) {color="cyl"}
Out[8]:

It's easy to see that the data wasn't split into groups. Lets-Plot offers two solutions in this situation:

  • Use the group aesthetic
  • Use the asDiscrete() function

The group aesthetic helps to create the groups.
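Conceptually, the group aesthetic partitions the rows by the grouping variable, and each partition gets its own smoother. A plain-Kotlin analogue of that partitioning using groupBy (the Car rows below are illustrative values shaped like the mpg data, not the real dataset):

```kotlin
// Illustrative rows shaped like the mpg data (hypothetical values).
data class Car(val cyl: Int, val displ: Double, val hwy: Int)

fun main() {
    val cars = listOf(
        Car(4, 1.8, 29), Car(4, 2.0, 31),
        Car(6, 2.8, 26), Car(6, 3.1, 27),
        Car(8, 5.7, 25)
    )
    // Each distinct 'cyl' value becomes one group; a grouped smoother
    // would then fit each partition independently.
    val groups = cars.groupBy { it.cyl }
    groups.forEach { (cyl, rows) -> println("cyl=$cyl -> ${rows.size} rows") }
}
```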

In [9]:
mpg_plot + geomPoint {color="cyl"} +
           geomSmooth(method="lm", deg=2, size=1.0) {color="cyl"; group="cyl"}
Out[9]:

The asDiscrete('cyl') function will "annotate" the 'cyl' variable as discrete.

This leads to the creation of groups and to the assignment of a discrete color scale instead of a continuous one.

In [10]:
mpg_plot + geomPoint {color="cyl"} +
           geomSmooth(method="lm", deg=2, size=1.0) {color=asDiscrete("cyl")}
Out[10]:

Effect of the span parameter on the "wiggliness" of the LOESS smoother

The span is the fraction of points used to fit each local regression: smaller values produce a wigglier curve, and larger values a smoother one.
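To make the role of span concrete, here is a minimal plain-Kotlin sketch of a local smoother: for each point it averages the span · n nearest neighbors (a simple local mean standing in for LOESS's weighted local regression, which is what the library actually fits):

```kotlin
import kotlin.math.abs

// A toy local smoother: for each x, average the y-values of the
// span * n nearest points. Smaller span => fewer neighbors => wigglier.
fun localMean(xs: List<Double>, ys: List<Double>, span: Double): List<Double> {
    val k = maxOf(2, (span * xs.size).toInt())  // points per local fit
    return xs.map { x0 ->
        xs.indices.sortedBy { abs(xs[it] - x0) }  // nearest neighbors first
            .take(k)
            .map { ys[it] }
            .average()
    }
}

fun main() {
    val xs = (0 until 8).map { it.toDouble() }
    val ys = xs.map { it + if (it.toInt() % 2 == 0) 1.0 else -1.0 }
    println(localMean(xs, ys, span = 0.25))  // wiggly: 2-point neighborhoods
    println(localMean(xs, ys, span = 1.0))   // smooth: global mean everywhere
}
```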

In [11]:
import kotlin.math.PI
import kotlin.math.sin
import kotlin.random.Random
In [12]:
val n = 150
val x_range = generateSequence( -2 * PI ) { it + 4 * PI / n }.takeWhile { it <= 2 * PI }
val y_range = x_range.map{ sin( it ) + Random.nextDouble(-0.5, 0.5) }
val df = mapOf(
    "x" to x_range,
    "y" to y_range
)
In [13]:
val p = ggplot(df) {x="x"; y="y"} + geomPoint(shape=21, fill="yellow", color="#8c564b")
val p1 = p + geomSmooth(method="loess", size=1.5, color="#d62728") + ggtitle("default (span = 0.5)")
val p2 = p + geomSmooth(method="loess", span=.2, size=1.5, color="#9467bd") + ggtitle("span = 0.2")
val p3 = p + geomSmooth(method="loess", span=.7, size=1.5, color="#1f77b4") + ggtitle("span = 0.7")
val p4 = p + geomSmooth(method="loess", span=1, size=1.5, color="#2ca02c") + ggtitle("span = 1")

GGBunch()
    .addPlot(p1, 0, 0, 400, 300)
    .addPlot(p2, 400, 0, 400, 300)
    .addPlot(p3, 0, 300, 400, 300)
    .addPlot(p4, 400, 300, 400, 300)
Out[13]: