Various small improvements to documentation
mattiasflodin committed May 14, 2015
1 parent b26e7d4 commit 1584833
Showing 3 changed files with 12 additions and 11 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -6,7 +6,7 @@ Reckless is an [extremely low-latency, high-throughput logging
library](doc/performance.md). It was created because I needed to perform
extensive diagnostic logging without worrying about performance. [Other
logging libraries](http://www.pantheios.org/performance.html) boast the
-ability to throw log messages away very quickly; reckless boasts the ability
+ability to throw log messages away very quickly. Reckless boasts the ability
to keep them all, without worrying about the performance impact. Filtering can
and should wait until you want to read the log, or need to clean up disk
space.
19 changes: 10 additions & 9 deletions doc/manual.md
@@ -511,7 +511,7 @@ few digits as possible, but still yields the exact same floating-point number
when it is converted back from a string. Statistical tests on 100 million
randomly generated values show that this is true for 98.5% of the numbers.
99.4% of the numbers are correctly converted, but of those about 1% include
-more digits than necessary.
+more digits than necessary.
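
To make the round-trip criterion concrete, here is a small illustrative check, not taken
from the reckless sources, built only on the C and C++ standard library: a conversion
counts as exact when `strtod` applied to the printed string returns the original value.
Printing 17 significant digits always round-trips an IEEE double but often carries more
digits than necessary, while 16 digits are sometimes too few.

```cpp
#include <cstdio>
#include <cstdlib>

// True if parsing the text gives back exactly the original value.
static bool round_trips(double value, char const* text)
{
    // strtod performs a correctly rounded string-to-double conversion, so an
    // exact comparison tells us whether any information was lost.
    return std::strtod(text, nullptr) == value;
}

static void show(double value)
{
    char digits16[32];
    char digits17[32];
    // 17 significant digits are always enough to round-trip an IEEE double;
    // 16 digits sometimes are, and when they are the string is usually shorter.
    std::snprintf(digits16, sizeof digits16, "%.16g", value);
    std::snprintf(digits17, sizeof digits17, "%.17g", value);
    std::printf("%-20s round-trips: %d\n", digits16, round_trips(value, digits16));
    std::printf("%-20s round-trips: %d\n", digits17, round_trips(value, digits17));
}

int main()
{
    show(0.1);       // 16 digits suffice; the 17-digit form carries unnecessary digits
    show(0.1 + 0.2); // needs all 17 digits to survive the round trip
    return 0;
}
```

This only illustrates the acceptance criterion used in the statistics above; it says
nothing about how reckless actually generates the digits.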

For the numbers that are not correctly converted, the following table shows the
number of significant digits that were correct.
@@ -528,14 +528,15 @@ Correct significant digits | Number of samples | Percentage
17 | 11371 | 0.011371%


-I made the choice to implement a custom algorithm because number to string
-conversion, and in particular floating-point conversions, turned out to be a
-performance bottleneck in my benchmark tests. I feel that for the majority of
-cases, absolutely perfect accuracy in logging is not as important as
-performance. The new algorithm shows improved overall logging performance, but
-I have not yet made any detailed performance analysis of the conversion
-function itself. It is possible that this algorithm will change in the future,
-for example by using the
+The actual results will depend largely on the quality of your `pow()`
+implementation. I made the choice to implement a custom algorithm because
+number to string conversion, and in particular floating-point conversions,
+turned out to be a performance bottleneck in my benchmark tests. I feel that
+for the majority of cases, absolutely perfect accuracy in logging is not as
+important as performance. The new algorithm shows improved overall logging
+performance, but I have not yet made any detailed performance analysis of the
+conversion function itself. It is possible that this algorithm will change in
+the future, for example by using the
[Grisu3](http://florian.loitsch.com/publications/dtoa-pldi2010.pdf) algorithm,
and that a more thorough evaluation of performance will be made. However,
printing floating-point numbers is *hard*. I estimate that over 90% of the
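
As a hypothetical illustration of the bottleneck claim in the paragraph above (this is
not the author's benchmark, and the numbers will vary with platform and standard
library), a micro-benchmark along these lines compares integer and floating-point
formatting through the standard `snprintf`:

```cpp
#include <chrono>
#include <cstdio>

// Time a formatting callback over a fixed number of iterations, in microseconds.
template <class Fn>
static double time_us(Fn fn, int iterations)
{
    auto const start = std::chrono::steady_clock::now();
    for (int i = 0; i != iterations; ++i)
        fn(i);
    auto const stop = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::micro>(stop - start).count();
}

int main()
{
    char buffer[64];
    int const iterations = 1000000;
    double const int_us = time_us([&](int i) {
        std::snprintf(buffer, sizeof buffer, "%d", i);
    }, iterations);
    double const float_us = time_us([&](int i) {
        std::snprintf(buffer, sizeof buffer, "%.17g", i * 0.001);
    }, iterations);
    std::printf("integer formatting:        %.0f us\n", int_us);
    std::printf("floating-point formatting: %.0f us\n", float_us);
    return 0;
}
```
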
2 changes: 1 addition & 1 deletion doc/performance.md
@@ -73,7 +73,7 @@ and teardown. In other words, it only measures time for pushing log entries
on the asynchronous queue and not the time for flushing all those messages to
disk. *This is fine*, if:

-* You can afford a large memory enough buffer that it will never run out of
+* You can afford a large enough memory buffer that it will never run out of
space (but keep in mind that if you make it too large, disk swapping can
occur and nullify your gains).
* Your process is long-running and you trust that the data will eventually get
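
The caveats above are about what the benchmark actually times: only the push onto the
asynchronous queue, not the eventual disk write. The following is a deliberately
simplified, hypothetical sketch of that model (a plain mutex-protected queue; reckless's
real queue is considerably more refined), showing where the measured work ends and where
the deferred work happens:

```cpp
#include <condition_variable>
#include <fstream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

class toy_async_log {
public:
    explicit toy_async_log(char const* path)
        : file_(path), worker_(&toy_async_log::run, this) {}

    ~toy_async_log()
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            done_ = true;
        }
        cv_.notify_one();
        worker_.join();   // teardown: remaining entries are flushed here
    }

    // This is all a queue-only benchmark measures: append to an in-memory queue.
    void push(std::string entry)
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(entry));
        }
        cv_.notify_one();
    }

private:
    void run()
    {
        std::unique_lock<std::mutex> lock(mutex_);
        for (;;) {
            cv_.wait(lock, [this] { return done_ || !queue_.empty(); });
            while (!queue_.empty()) {
                std::string entry = std::move(queue_.front());
                queue_.pop();
                lock.unlock();
                file_ << entry << '\n';   // the slow part: actual disk I/O
                lock.lock();
            }
            if (done_)
                return;
        }
    }

    std::ofstream file_;
    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<std::string> queue_;
    bool done_ = false;
    std::thread worker_;
};
```

A benchmark that stops its clock after a loop of `push()` calls measures only queue
insertion; the disk writes in `run()` are still pending, which is exactly why the
buffer-size and long-running-process conditions above matter.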
