Base.banner() # generic binary
               _
   _       _ _(_)_     |  A fresh approach to technical computing
  (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "?help" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.4.3 (2016-01-12 21:37 UTC)
 _/ |\__'_|_|_|\__'_|  |
|__/                   |  x86_64-unknown-linux-gnu
Sys.cpu_summary()
Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz:
       speed         user         nice          sys         idle          irq
#1  3089 MHz     685376 s      19018 s     263068 s    8947456 s         0 s
#2  2900 MHz     707734 s      19474 s     217632 s     614731 s         0 s
#3  3020 MHz     628458 s      18309 s     181443 s     628705 s         0 s
#4  2900 MHz     671080 s      16791 s     178766 s     630653 s         0 s
using Benchmarks: @benchmark # Pkg.clone("https://github.com/johnmyleswhite/Benchmarks.jl.git")
Benchmarks.Environment()
================== Benchmark Environment ==================
                UUID: 2eefcdd9-d39b-4521-837c-eb98b341c215
                Time: 2016-02-03 01:11:21
          Julia SHA1: a2f713dea5ac6320d8dcf2835ac4a37ea751af05
        Package SHA1: NULL
        Machine kind: x86_64-unknown-linux-gnu
    CPU architecture: x86_64
           CPU cores: 4
                  OS: Linux
           Word size: 64
         64-bit BLAS: false
using Memoize: @memoize # Pkg.add("Memoize")
@memoize jl_fib(n) = n < 2 ? n : jl_fib(n-1) + jl_fib(n-2)
# Type inference for memoized functions is not currently implemented, which is
# why the type annotation is needed for the call to be fast:
# * https://github.com/simonster/Memoize.jl#implementation-notes
@benchmark jl_fib(20)::Int
================ Benchmark Results ========================
     Time per evaluation: 66.24 ns [64.34 ns, 68.15 ns]
Proportion of time in GC: 0.36% [0.13%, 0.58%]
        Memory allocated: 48.00 bytes
   Number of allocations: 3 allocations
       Number of samples: 10801
   Number of evaluations: 59083201
         R² of OLS model: 0.801
 Time spent benchmarking: 10.07 s
@benchmark jl_fib(20.0)::Float64
================ Benchmark Results ========================
     Time per evaluation: 62.05 ns [60.59 ns, 63.51 ns]
Proportion of time in GC: 0.42% [0.18%, 0.65%]
        Memory allocated: 48.00 bytes
   Number of allocations: 3 allocations
       Number of samples: 11001
   Number of evaluations: 71490001
         R² of OLS model: 0.855
 Time spent benchmarking: 10.41 s
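Conceptually, @memoize caches each result in a dictionary keyed by the call's arguments, so every jl_fib(20) after the first is a lookup rather than a recursion. A minimal hand-rolled sketch of that idea, written in Python for brevity (the memoize decorator here is a hypothetical illustration, not what the macro actually generates):

```python
def memoize(f):
    # Results keyed by the argument tuple; recomputed only on a cache miss.
    cache = {}
    def wrapper(*args):
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    return wrapper

@memoize
def fib(n):
    # Recursive calls go through the wrapper, so the recursion is linear.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(20))  # 6765
```

With the cache in place, repeated calls with the same argument cost one dictionary lookup, which is what both benchmarks below and above are really measuring.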
import sys
sys.version
'3.5.1 (default, Dec 7 2015, 12:58:09) \n[GCC 5.2.0]'
from functools import lru_cache as cache
@cache(maxsize=None)
def py_fib(n):
if n < 2:
return n
return py_fib(n-1) + py_fib(n-2)
%timeit py_fib(20)
The slowest run took 206.85 times longer than the fastest. This could mean that an intermediate result is being cached
10000000 loops, best of 3: 115 ns per loop
%timeit py_fib(20.0)
The slowest run took 35.18 times longer than the fastest. This could mean that an intermediate result is being cached
1000000 loops, best of 3: 212 ns per loop