Subclass TestCase to create your own tests. Typically you'll want a TestCase subclass per implementation class.
PASSTHROUGH_EXCEPTIONS = [NoMemoryError, SignalException, Interrupt, SystemExit]
SUPPORTS_INFO_SIGNAL   = Signal.list['INFO']
Adds a block of code that will be executed before every TestCase is run. Equivalent to setup, but usable multiple times and without re-opening any classes.

All of the setup hooks will run in order after the setup method, if one is defined.
The argument can be any object that responds to call, or a block. That means that this call:

  MiniTest::TestCase.add_setup_hook { puts "foo" }

… is equivalent to:

  module MyTestSetup
    def self.call
      puts "foo"
    end
  end

  MiniTest::TestCase.add_setup_hook MyTestSetup
The blocks passed to add_setup_hook take an optional parameter that will be the TestCase instance that is executing the block.
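The call-or-block contract behind these hooks can be exercised in plain Ruby, independent of MiniTest itself (the `hooks` array below is illustrative, not library code):

```ruby
# Illustrative sketch: anything with a #call method can serve as a hook,
# which is why both a block and a module with a module-level #call work.
module MyTestSetup
  def self.call
    "foo"
  end
end

hooks = []
hooks << proc { "foo" }  # a block, captured as a proc
hooks << MyTestSetup     # any object responding to #call

results = hooks.map(&:call)
puts results.inspect     # prints ["foo", "foo"]
```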
Adds a block of code that will be executed after every TestCase is run. Equivalent to teardown, but usable multiple times and without re-opening any classes.

All of the teardown hooks will run in reverse order after the teardown method, if one is defined.
The argument can be any object that responds to call, or a block. That means that this call:

  MiniTest::TestCase.add_teardown_hook { puts "foo" }

… is equivalent to:

  module MyTestTeardown
    def self.call
      puts "foo"
    end
  end

  MiniTest::TestCase.add_teardown_hook MyTestTeardown
The blocks passed to add_teardown_hook take an optional parameter that will be the TestCase instance that is executing the block.
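The documented ordering (setup hooks in registration order, teardown hooks in reverse) can be sketched without MiniTest; the `order` array and hook lists here are illustrative only:

```ruby
# Illustrative sketch of hook ordering: setup hooks fire in the order
# they were added, teardown hooks in the reverse order.
setup_hooks    = []
teardown_hooks = []
order          = []

[1, 2].each do |i|
  setup_hooks    << proc { order << "setup#{i}" }
  teardown_hooks << proc { order << "teardown#{i}" }
end

setup_hooks.each(&:call)            # in registration order
teardown_hooks.reverse_each(&:call) # reverse registration order

puts order.inspect  # prints ["setup1", "setup2", "teardown2", "teardown1"]
```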
Returns a set of ranges stepped exponentially from min to max by powers of base. Eg:

  bench_exp(2, 16, 2) # => [2, 4, 8, 16]
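A minimal sketch of how such a range could be generated from min, max, and base (a sketch of the idea, not necessarily the library's exact source):

```ruby
# Sketch: exponential range from min to max by powers of base.
# min and max are assumed to be powers of base.
def bench_exp min, max, base = 10
  min = (Math.log10(min) / Math.log10(base)).to_i
  max = (Math.log10(max) / Math.log10(base)).to_i
  (min..max).map { |m| base ** m }
end

p bench_exp(2, 16, 2)  # prints [2, 4, 8, 16]
```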
Returns a set of ranges stepped linearly from min to max by step. Eg:

  bench_linear(20, 40, 10) # => [20, 30, 40]
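The linear counterpart is simpler; a sketch of the same idea (again, not necessarily the library's exact source):

```ruby
# Sketch: linear range from min to max in increments of step.
def bench_linear min, max, step = 10
  (min..max).step(step).to_a
end

p bench_linear(20, 40, 10)  # prints [20, 30, 40]
```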
Specifies the ranges used for benchmarking for that class. Defaults to exponential growth from 1 to 10k by powers of 10. Override if you need different ranges for your benchmarks.
See also: ::bench_exp and ::bench_linear.
Returns all test suites that have benchmark methods.
Call this at the top of your tests when you absolutely positively need to have ordered tests. In doing so, you're admitting that you suck and your tests are weak.
Runs the given work, gathering the times of each run. Range and times are then passed to a given validation proc. Outputs the benchmark name and times in tab-separated format, making it easy to paste into a spreadsheet for graphing or further analysis.

Ranges are specified by ::bench_range.

Eg:

  def bench_algorithm
    validation = proc { |x, y| ... }
    assert_performance validation do |x|
      @obj.algorithm
    end
  end
  # File ../ruby/lib/minitest/benchmark.rb, line 91
  def assert_performance validation, &work
    range = self.class.bench_range

    io.print "#{__name__}"

    times = []

    range.each do |x|
      GC.start
      t0 = Time.now
      instance_exec(x, &work)
      t = Time.now - t0

      io.print "\t%9.6f" % t
      times << t
    end
    io.puts

    validation[range, times]
  end
Runs the given work and asserts that the times gathered fit to match a constant rate (eg, linear slope == 0) within a given threshold. Note: because we're testing for a slope of 0, R^2 is not a good determining factor for the fit, so the threshold is applied against the slope itself. As such, you probably want to tighten it from the default.

See www.graphpad.com/curvefit/goodness_of_fit.htm for more details.

Fit is calculated by fit_linear.

Ranges are specified by ::bench_range.

Eg:

  def bench_algorithm
    assert_performance_constant 0.9999 do |x|
      @obj.algorithm
    end
  end
Runs the given work and asserts that the times gathered fit to match an exponential curve within a given error threshold.

Fit is calculated by fit_exponential.

Ranges are specified by ::bench_range.

Eg:

  def bench_algorithm
    assert_performance_exponential 0.9999 do |x|
      @obj.algorithm
    end
  end
Runs the given work and asserts that the times gathered fit to match a straight line within a given error threshold.

Fit is calculated by fit_linear.

Ranges are specified by ::bench_range.

Eg:

  def bench_algorithm
    assert_performance_linear 0.9999 do |x|
      @obj.algorithm
    end
  end
Runs the given work and asserts that the times gathered curve fit to match a power curve within a given error threshold.

Fit is calculated by fit_power.

Ranges are specified by ::bench_range.

Eg:

  def bench_algorithm
    assert_performance_power 0.9999 do |x|
      @obj.algorithm
    end
  end
Takes an array of x/y pairs and calculates the general R^2 value.
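Under the standard definition R^2 = 1 - SS_err/SS_tot, such a helper can be sketched as follows (the block supplies the fitted y-hat for each x; a `sigma` summation helper, described later in this file, is reproduced here so the sketch is self-contained):

```ruby
# Sum of enum, mapped through the block when one is given.
def sigma enum, &block
  enum = enum.map(&block) if block
  enum.inject { |sum, n| sum + n }
end

# Sketch: generalized R^2 over x/y pairs, with the fitted curve
# supplied as a block mapping x to the predicted y.
def fit_error xys
  y_bar  = sigma(xys) { |_, y| y } / xys.size.to_f
  ss_tot = sigma(xys) { |_, y| (y - y_bar) ** 2 }
  ss_err = sigma(xys) { |x, y| (yield(x) - y) ** 2 }
  1 - (ss_err / ss_tot)
end

# A perfect fit yields R^2 == 1.0:
p fit_error([[1, 2], [2, 4], [3, 6]]) { |x| 2 * x }  # prints 1.0
```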
To fit a functional form: y = ae^(bx).
Takes x and y values and returns [a, b, r^2].
See: mathworld.wolfram.com/LeastSquaresFittingExponential.html
  # File ../ruby/lib/minitest/benchmark.rb, line 225
  def fit_exponential xs, ys
    n     = xs.size
    xys   = xs.zip(ys)
    sxlny = sigma(xys) { |x, y| x * Math.log(y) }
    slny  = sigma(xys) { |x, y| Math.log(y) }
    sx2   = sigma(xys) { |x, y| x * x }
    sx    = sigma xs

    c = n * sx2 - sx ** 2
    a = (slny * sx2 - sx * sxlny) / c
    b = (n * sxlny - sx * slny) / c

    return Math.exp(a), b, fit_error(xys) { |x| Math.exp(a + b * x) }
  end
Fits the functional form: a + bx.
Takes x and y values and returns [a, b, r^2].
  # File ../ruby/lib/minitest/benchmark.rb, line 247
  def fit_linear xs, ys
    n   = xs.size
    xys = xs.zip(ys)
    sx  = sigma xs
    sy  = sigma ys
    sx2 = sigma(xs)  { |x| x ** 2 }
    sxy = sigma(xys) { |x, y| x * y }

    c = n * sx2 - sx ** 2
    a = (sy * sx2 - sx * sxy) / c
    b = (n * sxy - sx * sy) / c

    return a, b, fit_error(xys) { |x| a + b * x }
  end
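The least-squares math can be checked end to end on exactly linear data; this self-contained snippet reproduces the sigma and fit_error helpers documented in this file alongside fit_linear:

```ruby
# Sum of enum, mapped through the block when one is given.
def sigma enum, &block
  enum = enum.map(&block) if block
  enum.inject { |sum, n| sum + n }
end

# Generalized R^2; the block maps x to the fitted y.
def fit_error xys
  y_bar  = sigma(xys) { |_, y| y } / xys.size.to_f
  ss_tot = sigma(xys) { |_, y| (y - y_bar) ** 2 }
  ss_err = sigma(xys) { |x, y| (yield(x) - y) ** 2 }
  1 - (ss_err / ss_tot)
end

# Least-squares fit of y = a + bx.
def fit_linear xs, ys
  n   = xs.size
  xys = xs.zip(ys)
  sx  = sigma xs
  sy  = sigma ys
  sx2 = sigma(xs)  { |x| x ** 2 }
  sxy = sigma(xys) { |x, y| x * y }

  c = n * sx2 - sx ** 2
  a = (sy * sx2 - sx * sxy) / c
  b = (n * sxy - sx * sy) / c

  return a, b, fit_error(xys) { |x| a + b * x }
end

# Data lying exactly on y = 1 + 2x recovers a = 1, b = 2, R^2 = 1.0:
a, b, r2 = fit_linear([1, 2, 3, 4], [3, 5, 7, 9])
p [a, b, r2]  # prints [1, 2, 1.0]
```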
To fit a functional form: y = ax^b.
Takes x and y values and returns [a, b, r^2].
  # File ../ruby/lib/minitest/benchmark.rb, line 269
  def fit_power xs, ys
    n       = xs.size
    xys     = xs.zip(ys)
    slnxlny = sigma(xys) { |x, y| Math.log(x) * Math.log(y) }
    slnx    = sigma(xs)  { |x| Math.log(x) }
    slny    = sigma(ys)  { |y| Math.log(y) }
    slnx2   = sigma(xs)  { |x| Math.log(x) ** 2 }

    b = (n * slnxlny - slnx * slny) / (n * slnx2 - slnx ** 2)
    a = (slny - b * slnx) / n

    return Math.exp(a), b, fit_error(xys) { |x| (Math.exp(a) * (x ** b)) }
  end
Returns true if the test passed.
Runs the tests, reporting the status to runner.
  # File ../ruby/lib/minitest/unit.rb, line 937
  def run runner
    trap "INFO" do
      time = runner.start_time ? Time.now - runner.start_time : 0
      warn "%s#%s %.2fs" % [self.class, self.__name__, time]
      runner.status $stderr
    end if SUPPORTS_INFO_SIGNAL

    result = ""
    begin
      @passed = nil
      self.setup
      self.run_setup_hooks
      self.__send__ self.__name__
      result = "." unless io?
      @passed = true
    rescue *PASSTHROUGH_EXCEPTIONS
      raise
    rescue Exception => e
      @passed = false
      result = runner.puke self.class, self.__name__, e
    ensure
      begin
        self.run_teardown_hooks
        self.teardown
      rescue *PASSTHROUGH_EXCEPTIONS
        raise
      rescue Exception => e
        result = runner.puke self.class, self.__name__, e
      end
      trap 'INFO', 'DEFAULT' if SUPPORTS_INFO_SIGNAL
    end
    result
  end
Runs before every test. Use this to refactor test initialization.
Enumerates over enum, mapping block if given, returning the sum of the result. Eg:

  sigma([1, 2, 3])                # => 1 + 2 + 3 => 6
  sigma([1, 2, 3]) { |n| n ** 2 } # => 1 + 4 + 9 => 14
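A minimal sketch matching the documented behavior:

```ruby
# Sketch: sum of enum, mapped through the block when one is given.
def sigma enum, &block
  enum = enum.map(&block) if block
  enum.inject { |sum, n| sum + n }
end

p sigma([1, 2, 3])                # prints 6
p sigma([1, 2, 3]) { |n| n ** 2 } # prints 14
```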
Runs after every test. Use this to refactor test cleanup.
Returns a proc that calls the specified fit method and asserts that the error is within a tolerable threshold.